2008-06-26 13:24:47

by Haavard Skinnemoen

Subject: [PATCH v4 0/6] dmaengine/mmc: DMA slave interface and two new drivers

First of all, I'm sorry it took so long to get from v3 to v4 of this
patchset. I was hoping to finish this stuff up before all kinds of
other tasks started demanding my attention, but I didn't, so I had to
put it on hold for a while. Let's try again...

This patchset extends the DMA engine API to allow drivers to offer DMA
to and from I/O registers with hardware handshaking, aka slave DMA.
Such functionality is very common in DMA controllers integrated on SoC
devices, and it's typically used to do DMA transfers to/from other
on-SoC peripherals, but it can often do DMA transfers to/from
externally connected devices as well (e.g. IDE hard drives).

The main differences from v3 of this patchset are:
* A DMA descriptor can hold a whole scatterlist. This means that
clients using slave DMA can submit large requests in a single call
to the driver, and they only need to keep track of a single
descriptor (see the sketch after this list).
* The dma_slave_descriptor struct is gone since clients no longer
need to keep track of multiple descriptors.
* The drivers perform better and are more stable.
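
To illustrate, here's a rough sketch of what a slave DMA client ends
up doing with the new interface (this is not code from the patches
themselves; error handling is omitted, and my_done()/my_data are made
up for the example):

        struct dma_async_tx_descriptor *desc;
        dma_cookie_t cookie;

        /* One prep call covers the whole scatterlist... */
        desc = chan->device->device_prep_slave_sg(chan, sgl, sg_len,
                        DMA_TO_DEVICE, DMA_PREP_INTERRUPT);

        /* ...and a single descriptor is all we need to keep track of. */
        desc->callback = my_done;
        desc->callback_param = my_data;
        cookie = desc->tx_submit(desc);
        chan->device->device_issue_pending(chan);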

The dw_dmac driver depends on this patch:

http://lkml.org/lkml/2008/6/25/148

and the atmel-mci driver depends on this series:

http://lkml.org/lkml/2008/6/26/158

as well as all preceding patches in this series, of course.

Comments are welcome, as usual! Shortlog and diffstat follow.

Haavard Skinnemoen (6):
dmaengine: Add dma_client parameter to device_alloc_chan_resources
dmaengine: Add dma_chan_is_in_use() function
dmaengine: Add slave DMA interface
dmaengine: Make DMA Engine menu visible for AVR32 users
dmaengine: Driver for the Synopsys DesignWare DMA controller
Atmel MCI: Driver for Atmel on-chip MMC controllers

arch/avr32/boards/atngw100/setup.c | 7 +
arch/avr32/boards/atstk1000/atstk1002.c | 3 +
arch/avr32/mach-at32ap/at32ap700x.c | 73 ++-
drivers/dma/Kconfig | 11 +-
drivers/dma/Makefile | 1 +
drivers/dma/dmaengine.c | 31 +-
drivers/dma/dw_dmac.c | 1105 +++++++++++++++++++++
drivers/dma/dw_dmac_regs.h | 224 +++++
drivers/dma/ioat_dma.c | 5 +-
drivers/dma/iop-adma.c | 7 +-
drivers/mmc/host/Kconfig | 10 +
drivers/mmc/host/Makefile | 1 +
drivers/mmc/host/atmel-mci-regs.h | 194 ++++
drivers/mmc/host/atmel-mci.c | 1428 ++++++++++++++++++++++++++++
include/asm-avr32/arch-at32ap/at32ap700x.h | 16 +
include/asm-avr32/arch-at32ap/board.h | 6 +-
include/asm-avr32/atmel-mci.h | 12 +
include/linux/dmaengine.h | 73 ++-
include/linux/dw_dmac.h | 62 ++
19 files changed, 3229 insertions(+), 40 deletions(-)
create mode 100644 drivers/dma/dw_dmac.c
create mode 100644 drivers/dma/dw_dmac_regs.h
create mode 100644 drivers/mmc/host/atmel-mci-regs.h
create mode 100644 drivers/mmc/host/atmel-mci.c
create mode 100644 include/asm-avr32/atmel-mci.h
create mode 100644 include/linux/dw_dmac.h

Haavard


2008-06-26 13:24:29

by Haavard Skinnemoen

Subject: [PATCH v4 1/6] dmaengine: Add dma_client parameter to device_alloc_chan_resources

A DMA controller capable of doing slave transfers may need to know a
few things about the slave when preparing the channel. We don't want
to add this information to struct dma_channel since the channel hasn't
yet been bound to a client at this point.

Instead, pass a reference to the client requesting the channel to the
driver's device_alloc_chan_resources hook so that it can pick the
necessary information from the dma_client struct by itself.
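
As an example, a slave-capable DMA driver can now do something along
these lines in its hook (just a sketch; the foo_ names are made up,
and the dw_dmac driver later in this series does essentially this):

        static int foo_alloc_chan_resources(struct dma_chan *chan,
                        struct dma_client *client)
        {
                struct dma_slave *slave = client->slave;

                if (slave) {
                        /* pick up register addresses, handshake IDs, etc. */
                }
                /* ... allocate descriptors as before ... */
        }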

Signed-off-by: Haavard Skinnemoen <[email protected]>
---
drivers/dma/dmaengine.c | 3 ++-
drivers/dma/ioat_dma.c | 5 +++--
drivers/dma/iop-adma.c | 7 ++++---
include/linux/dmaengine.h | 3 ++-
4 files changed, 11 insertions(+), 7 deletions(-)

diff --git a/drivers/dma/dmaengine.c b/drivers/dma/dmaengine.c
index 99c22b4..a57c337 100644
--- a/drivers/dma/dmaengine.c
+++ b/drivers/dma/dmaengine.c
@@ -174,7 +174,8 @@ static void dma_client_chan_alloc(struct dma_client *client)
if (!dma_chan_satisfies_mask(chan, client->cap_mask))
continue;

- desc = chan->device->device_alloc_chan_resources(chan);
+ desc = chan->device->device_alloc_chan_resources(
+ chan, client);
if (desc >= 0) {
ack = client->event_callback(client,
chan,
diff --git a/drivers/dma/ioat_dma.c b/drivers/dma/ioat_dma.c
index 318e8a2..90e5b0a 100644
--- a/drivers/dma/ioat_dma.c
+++ b/drivers/dma/ioat_dma.c
@@ -452,7 +452,8 @@ static void ioat2_dma_massage_chan_desc(struct ioat_dma_chan *ioat_chan)
* ioat_dma_alloc_chan_resources - returns the number of allocated descriptors
* @chan: the channel to be filled out
*/
-static int ioat_dma_alloc_chan_resources(struct dma_chan *chan)
+static int ioat_dma_alloc_chan_resources(struct dma_chan *chan,
+ struct dma_client *client)
{
struct ioat_dma_chan *ioat_chan = to_ioat_chan(chan);
struct ioat_desc_sw *desc;
@@ -1049,7 +1050,7 @@ static int ioat_dma_self_test(struct ioatdma_device *device)
dma_chan = container_of(device->common.channels.next,
struct dma_chan,
device_node);
- if (device->common.device_alloc_chan_resources(dma_chan) < 1) {
+ if (device->common.device_alloc_chan_resources(dma_chan, NULL) < 1) {
dev_err(&device->pdev->dev,
"selftest cannot allocate chan resource\n");
err = -ENODEV;
diff --git a/drivers/dma/iop-adma.c b/drivers/dma/iop-adma.c
index 0ec0f43..2664ea5 100644
--- a/drivers/dma/iop-adma.c
+++ b/drivers/dma/iop-adma.c
@@ -444,7 +444,8 @@ static void iop_chan_start_null_memcpy(struct iop_adma_chan *iop_chan);
static void iop_chan_start_null_xor(struct iop_adma_chan *iop_chan);

/* returns the number of allocated descriptors */
-static int iop_adma_alloc_chan_resources(struct dma_chan *chan)
+static int iop_adma_alloc_chan_resources(struct dma_chan *chan,
+ struct dma_client *client)
{
char *hw_desc;
int idx;
@@ -838,7 +839,7 @@ static int __devinit iop_adma_memcpy_self_test(struct iop_adma_device *device)
dma_chan = container_of(device->common.channels.next,
struct dma_chan,
device_node);
- if (iop_adma_alloc_chan_resources(dma_chan) < 1) {
+ if (iop_adma_alloc_chan_resources(dma_chan, NULL) < 1) {
err = -ENODEV;
goto out;
}
@@ -936,7 +937,7 @@ iop_adma_xor_zero_sum_self_test(struct iop_adma_device *device)
dma_chan = container_of(device->common.channels.next,
struct dma_chan,
device_node);
- if (iop_adma_alloc_chan_resources(dma_chan) < 1) {
+ if (iop_adma_alloc_chan_resources(dma_chan, NULL) < 1) {
err = -ENODEV;
goto out;
}
diff --git a/include/linux/dmaengine.h b/include/linux/dmaengine.h
index d08a5c5..cffb95f 100644
--- a/include/linux/dmaengine.h
+++ b/include/linux/dmaengine.h
@@ -279,7 +279,8 @@ struct dma_device {
int dev_id;
struct device *dev;

- int (*device_alloc_chan_resources)(struct dma_chan *chan);
+ int (*device_alloc_chan_resources)(struct dma_chan *chan,
+ struct dma_client *client);
void (*device_free_chan_resources)(struct dma_chan *chan);

struct dma_async_tx_descriptor *(*device_prep_dma_memcpy)(
--
1.5.5.4

2008-06-26 13:25:12

by Haavard Skinnemoen

Subject: [PATCH v4 5/6] dmaengine: Driver for the Synopsys DesignWare DMA controller

This adds a driver for the Synopsys DesignWare DMA controller (aka
DMACA on AVR32 systems.) This DMA controller can be found integrated
on the AT32AP7000 chip and is primarily meant for peripheral DMA
transfer, but can also be used for memory-to-memory transfers.

This patch is based on a driver from David Brownell which was based on
an older version of the DMA Engine framework. It also implements the
proposed extensions to the DMA Engine API for slave DMA operations.

The dmatest client shows no problems, but there may still be room for
improvement performance-wise. DMA slave transfer performance is
definitely "good enough"; reading 100 MiB from an SD card running at ~20
MHz yields ~7.2 MiB/s average transfer rate.

Full documentation for this controller can be found in the Synopsys
DW AHB DMAC Databook:

http://www.synopsys.com/designware/docs/iip/DW_ahb_dmac/latest/doc/dw_ahb_dmac_db.pdf

The controller has lots of implementation options, so it's usually a
good idea to check the data sheet of the chip it's integrated on as
well. The AT32AP7000 data sheet can be found here:

http://www.atmel.com/dyn/products/datasheets.asp?family_id=682

Signed-off-by: Haavard Skinnemoen <[email protected]>

Changes since v3:
* Update to latest DMA engine and DMA slave APIs
* Embed the hw descriptor into the sw descriptor
* Clean up and update MODULE_DESCRIPTION, copyright date, etc.

Changes since v2:
* Dequeue all pending transfers in terminate_all()
* Rename dw_dmac.h -> dw_dmac_regs.h
* Define and use controller-specific dma_slave data
* Fix up a few outdated comments
* Define hardware registers as structs (doesn't generate better
code, unfortunately, but it looks nicer.)
* Get number of channels from platform_data instead of hardcoding it
based on CONFIG_WHATEVER_CPU.
* Give slave clients exclusive access to the channel
---
arch/avr32/mach-at32ap/at32ap700x.c | 26 +-
drivers/dma/Kconfig | 9 +
drivers/dma/Makefile | 1 +
drivers/dma/dw_dmac.c | 1105 ++++++++++++++++++++++++++++
drivers/dma/dw_dmac_regs.h | 224 ++++++
include/asm-avr32/arch-at32ap/at32ap700x.h | 16 +
include/linux/dw_dmac.h | 62 ++
7 files changed, 1430 insertions(+), 13 deletions(-)
create mode 100644 drivers/dma/dw_dmac.c
create mode 100644 drivers/dma/dw_dmac_regs.h
create mode 100644 include/linux/dw_dmac.h

diff --git a/arch/avr32/mach-at32ap/at32ap700x.c b/arch/avr32/mach-at32ap/at32ap700x.c
index 0f24b4f..2b92047 100644
--- a/arch/avr32/mach-at32ap/at32ap700x.c
+++ b/arch/avr32/mach-at32ap/at32ap700x.c
@@ -599,6 +599,17 @@ static void __init genclk_init_parent(struct clk *clk)
clk->parent = parent;
}

+static struct dw_dma_platform_data dw_dmac0_data = {
+ .nr_channels = 3,
+};
+
+static struct resource dw_dmac0_resource[] = {
+ PBMEM(0xff200000),
+ IRQ(2),
+};
+DEFINE_DEV_DATA(dw_dmac, 0);
+DEV_CLK(hclk, dw_dmac0, hsb, 10);
+
/* --------------------------------------------------------------------
* System peripherals
* -------------------------------------------------------------------- */
@@ -705,17 +716,6 @@ static struct clk pico_clk = {
.users = 1,
};

-static struct resource dmaca0_resource[] = {
- {
- .start = 0xff200000,
- .end = 0xff20ffff,
- .flags = IORESOURCE_MEM,
- },
- IRQ(2),
-};
-DEFINE_DEV(dmaca, 0);
-DEV_CLK(hclk, dmaca0, hsb, 10);
-
/* --------------------------------------------------------------------
* HMATRIX
* -------------------------------------------------------------------- */
@@ -828,7 +828,7 @@ void __init at32_add_system_devices(void)
platform_device_register(&at32_eic0_device);
platform_device_register(&smc0_device);
platform_device_register(&pdc_device);
- platform_device_register(&dmaca0_device);
+ platform_device_register(&dw_dmac0_device);

platform_device_register(&at32_tcb0_device);
platform_device_register(&at32_tcb1_device);
@@ -1891,7 +1891,7 @@ struct clk *at32_clock_list[] = {
&smc0_mck,
&pdc_hclk,
&pdc_pclk,
- &dmaca0_hclk,
+ &dw_dmac0_hclk,
&pico_clk,
&pio0_mck,
&pio1_mck,
diff --git a/drivers/dma/Kconfig b/drivers/dma/Kconfig
index 2ac09be..4fac4e3 100644
--- a/drivers/dma/Kconfig
+++ b/drivers/dma/Kconfig
@@ -37,6 +37,15 @@ config INTEL_IOP_ADMA
help
Enable support for the Intel(R) IOP Series RAID engines.

+config DW_DMAC
+ tristate "Synopsys DesignWare AHB DMA support"
+ depends on AVR32
+ select DMA_ENGINE
+ default y if CPU_AT32AP7000
+ help
+ Support the Synopsys DesignWare AHB DMA controller. This
+ can be integrated in chips such as the Atmel AT32AP7000.
+
config FSL_DMA
bool "Freescale MPC85xx/MPC83xx DMA support"
depends on PPC
diff --git a/drivers/dma/Makefile b/drivers/dma/Makefile
index 2ff6d7f..beebae4 100644
--- a/drivers/dma/Makefile
+++ b/drivers/dma/Makefile
@@ -1,6 +1,7 @@
obj-$(CONFIG_DMA_ENGINE) += dmaengine.o
obj-$(CONFIG_NET_DMA) += iovlock.o
obj-$(CONFIG_INTEL_IOATDMA) += ioatdma.o
+obj-$(CONFIG_DW_DMAC) += dw_dmac.o
ioatdma-objs := ioat.o ioat_dma.o ioat_dca.o
obj-$(CONFIG_INTEL_IOP_ADMA) += iop-adma.o
obj-$(CONFIG_FSL_DMA) += fsldma.o
diff --git a/drivers/dma/dw_dmac.c b/drivers/dma/dw_dmac.c
new file mode 100644
index 0000000..e5389e1
--- /dev/null
+++ b/drivers/dma/dw_dmac.c
@@ -0,0 +1,1105 @@
+/*
+ * Driver for the Synopsys DesignWare DMA Controller (aka DMACA on
+ * AVR32 systems.)
+ *
+ * Copyright (C) 2007-2008 Atmel Corporation
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+#include <linux/clk.h>
+#include <linux/delay.h>
+#include <linux/dmaengine.h>
+#include <linux/dma-mapping.h>
+#include <linux/init.h>
+#include <linux/interrupt.h>
+#include <linux/io.h>
+#include <linux/mm.h>
+#include <linux/module.h>
+#include <linux/platform_device.h>
+#include <linux/slab.h>
+
+#include "dw_dmac_regs.h"
+
+/*
+ * This supports the Synopsys "DesignWare AHB Central DMA Controller",
+ * (DW_ahb_dmac) which is used with various AMBA 2.0 systems (not all
+ * of which use ARM any more). See the "Databook" from Synopsys for
+ * information beyond what licensees probably provide.
+ *
+ * The driver has currently been tested only with the Atmel AT32AP7000,
+ * which does not support descriptor writeback.
+ */
+
+/* NOTE: DMS+SMS is system-specific. We should get this information
+ * from the platform code somehow.
+ */
+#define DWC_DEFAULT_CTLLO (DWC_CTLL_DST_MSIZE(0) \
+ | DWC_CTLL_SRC_MSIZE(0) \
+ | DWC_CTLL_DMS(0) \
+ | DWC_CTLL_SMS(1) \
+ | DWC_CTLL_LLP_D_EN \
+ | DWC_CTLL_LLP_S_EN)
+
+/*
+ * This is configuration-dependent and usually a funny size like 4095.
+ * Let's round it down to the nearest power of two.
+ *
+ * Note that this is a transfer count, i.e. if we transfer 32-bit
+ * words, we can do 8192 bytes per descriptor.
+ *
+ * This parameter is also system-specific.
+ */
+#define DWC_MAX_COUNT 2048U
+
+/*
+ * Number of descriptors to allocate for each channel. This should be
+ * made configurable somehow; preferably, the clients (at least the
+ * ones using slave transfers) should be able to give us a hint.
+ */
+#define NR_DESCS_PER_CHANNEL 64
+
+/*----------------------------------------------------------------------*/
+
+/*
+ * Because we're not relying on writeback from the controller (it may not
+ * even be configured into the core!) we don't need to use dma_pool. These
+ * descriptors -- and associated data -- are cacheable. We do need to make
+ * sure their dcache entries are written back before handing them off to
+ * the controller, though.
+ */
+
+static struct dw_desc *dwc_first_active(struct dw_dma_chan *dwc)
+{
+ return list_entry(dwc->active_list.next, struct dw_desc, desc_node);
+}
+
+static struct dw_desc *dwc_first_queued(struct dw_dma_chan *dwc)
+{
+ return list_entry(dwc->queue.next, struct dw_desc, desc_node);
+}
+
+static struct dw_desc *dwc_desc_get(struct dw_dma_chan *dwc)
+{
+ struct dw_desc *desc, *_desc;
+ struct dw_desc *ret = NULL;
+ unsigned int i = 0;
+
+ spin_lock_bh(&dwc->lock);
+ list_for_each_entry_safe(desc, _desc, &dwc->free_list, desc_node) {
+ if (async_tx_test_ack(&desc->txd)) {
+ list_del(&desc->desc_node);
+ ret = desc;
+ break;
+ }
+ dev_dbg(&dwc->chan.dev, "desc %p not ACKed\n", desc);
+ i++;
+ }
+ spin_unlock_bh(&dwc->lock);
+
+ dev_vdbg(&dwc->chan.dev, "scanned %u descriptors on freelist\n", i);
+
+ return ret;
+}
+
+static void dwc_sync_desc_for_cpu(struct dw_dma_chan *dwc, struct dw_desc *desc)
+{
+ struct dw_desc *child;
+
+ list_for_each_entry(child, &desc->txd.tx_list, desc_node)
+ dma_sync_single_for_cpu(dwc->chan.dev.parent,
+ child->txd.phys, sizeof(child->lli),
+ DMA_TO_DEVICE);
+ dma_sync_single_for_cpu(dwc->chan.dev.parent,
+ desc->txd.phys, sizeof(desc->lli),
+ DMA_TO_DEVICE);
+}
+
+/*
+ * Move a descriptor, including any children, to the free list.
+ * `desc' must not be on any lists.
+ */
+static void dwc_desc_put(struct dw_dma_chan *dwc, struct dw_desc *desc)
+{
+ if (desc) {
+ struct dw_desc *child;
+
+ dwc_sync_desc_for_cpu(dwc, desc);
+
+ spin_lock_bh(&dwc->lock);
+ list_for_each_entry(child, &desc->txd.tx_list, desc_node)
+ dev_vdbg(&dwc->chan.dev,
+ "moving child desc %p to freelist\n",
+ child);
+ list_splice_init(&desc->txd.tx_list, &dwc->free_list);
+ dev_vdbg(&dwc->chan.dev, "moving desc %p to freelist\n", desc);
+ list_add(&desc->desc_node, &dwc->free_list);
+ spin_unlock_bh(&dwc->lock);
+ }
+}
+
+/* Called with dwc->lock held and bh disabled */
+static dma_cookie_t
+dwc_assign_cookie(struct dw_dma_chan *dwc, struct dw_desc *desc)
+{
+ dma_cookie_t cookie = dwc->chan.cookie;
+
+ if (++cookie < 0)
+ cookie = 1;
+
+ dwc->chan.cookie = cookie;
+ desc->txd.cookie = cookie;
+
+ return cookie;
+}
+
+/*----------------------------------------------------------------------*/
+
+/* Called with dwc->lock held and bh disabled */
+static void dwc_dostart(struct dw_dma_chan *dwc, struct dw_desc *first)
+{
+ struct dw_dma *dw = to_dw_dma(dwc->chan.device);
+
+ /* ASSERT: channel is idle */
+ if (dma_readl(dw, CH_EN) & dwc->mask) {
+ dev_err(&dwc->chan.dev,
+ "BUG: Attempted to start non-idle channel\n");
+ dev_err(&dwc->chan.dev,
+ " SAR: 0x%x DAR: 0x%x LLP: 0x%x CTL: 0x%x:%08x\n",
+ channel_readl(dwc, SAR),
+ channel_readl(dwc, DAR),
+ channel_readl(dwc, LLP),
+ channel_readl(dwc, CTL_HI),
+ channel_readl(dwc, CTL_LO));
+
+ /* The tasklet will hopefully advance the queue... */
+ return;
+ }
+
+ channel_writel(dwc, LLP, first->txd.phys);
+ channel_writel(dwc, CTL_LO,
+ DWC_CTLL_LLP_D_EN | DWC_CTLL_LLP_S_EN);
+ channel_writel(dwc, CTL_HI, 0);
+ channel_set_bit(dw, CH_EN, dwc->mask);
+}
+
+/*----------------------------------------------------------------------*/
+
+static void
+dwc_descriptor_complete(struct dw_dma_chan *dwc, struct dw_desc *desc)
+{
+ dma_async_tx_callback callback;
+ void *param;
+ struct dma_async_tx_descriptor *txd = &desc->txd;
+
+ dev_vdbg(&dwc->chan.dev, "descriptor %u complete\n", txd->cookie);
+
+ dwc->completed = txd->cookie;
+ callback = txd->callback;
+ param = txd->callback_param;
+
+ dwc_sync_desc_for_cpu(dwc, desc);
+ list_splice_init(&txd->tx_list, &dwc->free_list);
+ list_move(&desc->desc_node, &dwc->free_list);
+
+ /*
+ * The API requires that no submissions are done from a
+ * callback, so we don't need to drop the lock here
+ */
+ if (callback)
+ callback(param);
+}
+
+static void dwc_complete_all(struct dw_dma *dw, struct dw_dma_chan *dwc)
+{
+ struct dw_desc *desc, *_desc;
+ LIST_HEAD(list);
+
+ if (dma_readl(dw, CH_EN) & dwc->mask) {
+ dev_err(&dwc->chan.dev,
+ "BUG: XFER bit set, but channel not idle!\n");
+
+ /* Try to continue after resetting the channel... */
+ channel_clear_bit(dw, CH_EN, dwc->mask);
+ while (dma_readl(dw, CH_EN) & dwc->mask)
+ cpu_relax();
+ }
+
+ /*
+ * Submit queued descriptors ASAP, i.e. before we go through
+ * the completed ones.
+ */
+ if (!list_empty(&dwc->queue))
+ dwc_dostart(dwc, dwc_first_queued(dwc));
+ list_splice_init(&dwc->active_list, &list);
+ list_splice_init(&dwc->queue, &dwc->active_list);
+
+ list_for_each_entry_safe(desc, _desc, &list, desc_node)
+ dwc_descriptor_complete(dwc, desc);
+}
+
+static void dwc_scan_descriptors(struct dw_dma *dw, struct dw_dma_chan *dwc)
+{
+ dma_addr_t llp;
+ struct dw_desc *desc, *_desc;
+ struct dw_desc *child;
+ u32 status_xfer;
+
+ /*
+ * Clear block interrupt flag before scanning so that we don't
+ * miss any, and read LLP before RAW_XFER to ensure it is
+ * valid if we decide to scan the list.
+ */
+ dma_writel(dw, CLEAR.BLOCK, dwc->mask);
+ llp = channel_readl(dwc, LLP);
+ status_xfer = dma_readl(dw, RAW.XFER);
+
+ if (status_xfer & dwc->mask) {
+ /* Everything we've submitted is done */
+ dma_writel(dw, CLEAR.XFER, dwc->mask);
+ dwc_complete_all(dw, dwc);
+ return;
+ }
+
+ dev_vdbg(&dwc->chan.dev, "scan_descriptors: llp=0x%x\n", llp);
+
+ list_for_each_entry_safe(desc, _desc, &dwc->active_list, desc_node) {
+ if (desc->lli.llp == llp)
+ /* This one is currently in progress */
+ return;
+
+ list_for_each_entry(child, &desc->txd.tx_list, desc_node)
+ if (child->lli.llp == llp)
+ /* Currently in progress */
+ return;
+
+ /*
+ * No descriptors so far seem to be in progress, i.e.
+ * this one must be done.
+ */
+ dwc_descriptor_complete(dwc, desc);
+ }
+
+ dev_err(&dwc->chan.dev,
+ "BUG: All descriptors done, but channel not idle!\n");
+
+ /* Try to continue after resetting the channel... */
+ channel_clear_bit(dw, CH_EN, dwc->mask);
+ while (dma_readl(dw, CH_EN) & dwc->mask)
+ cpu_relax();
+
+ if (!list_empty(&dwc->queue)) {
+ dwc_dostart(dwc, dwc_first_queued(dwc));
+ list_splice_init(&dwc->queue, &dwc->active_list);
+ }
+}
+
+static void dwc_dump_lli(struct dw_dma_chan *dwc, struct dw_lli *lli)
+{
+ dev_printk(KERN_CRIT, &dwc->chan.dev,
+ " desc: s0x%x d0x%x l0x%x c0x%x:%x\n",
+ lli->sar, lli->dar, lli->llp,
+ lli->ctlhi, lli->ctllo);
+}
+
+static void dwc_handle_error(struct dw_dma *dw, struct dw_dma_chan *dwc)
+{
+ struct dw_desc *bad_desc;
+ struct dw_desc *child;
+
+ dwc_scan_descriptors(dw, dwc);
+
+ /*
+ * The descriptor currently at the head of the active list is
+ * borked. Since we don't have any way to report errors, we'll
+ * just have to scream loudly and try to carry on.
+ */
+ bad_desc = dwc_first_active(dwc);
+ list_del_init(&bad_desc->desc_node);
+ list_splice_init(&dwc->queue, dwc->active_list.prev);
+
+ /* Clear the error flag and try to restart the controller */
+ dma_writel(dw, CLEAR.ERROR, dwc->mask);
+ if (!list_empty(&dwc->active_list))
+ dwc_dostart(dwc, dwc_first_active(dwc));
+
+ /*
+ * KERN_CRIT may seem harsh, but since this only happens
+ * when someone submits a bad physical address in a
+ * descriptor, we should consider ourselves lucky that the
+ * controller flagged an error instead of scribbling over
+ * random memory locations.
+ */
+ dev_printk(KERN_CRIT, &dwc->chan.dev,
+ "Bad descriptor submitted for DMA!\n");
+ dev_printk(KERN_CRIT, &dwc->chan.dev,
+ " cookie: %d\n", bad_desc->txd.cookie);
+ dwc_dump_lli(dwc, &bad_desc->lli);
+ list_for_each_entry(child, &bad_desc->txd.tx_list, desc_node)
+ dwc_dump_lli(dwc, &child->lli);
+
+ /* Pretend the descriptor completed successfully */
+ dwc_descriptor_complete(dwc, bad_desc);
+}
+
+static void dw_dma_tasklet(unsigned long data)
+{
+ struct dw_dma *dw = (struct dw_dma *)data;
+ struct dw_dma_chan *dwc;
+ u32 status_block;
+ u32 status_xfer;
+ u32 status_err;
+ int i;
+
+ status_block = dma_readl(dw, RAW.BLOCK);
+ status_xfer = dma_readl(dw, RAW.XFER);
+ status_err = dma_readl(dw, RAW.ERROR);
+
+ dev_vdbg(dw->dma.dev, "tasklet: status_block=%x status_err=%x\n",
+ status_block, status_err);
+
+ for (i = 0; i < dw->dma.chancnt; i++) {
+ dwc = &dw->chan[i];
+ spin_lock(&dwc->lock);
+ if (status_err & (1 << i))
+ dwc_handle_error(dw, dwc);
+ else if ((status_block | status_xfer) & (1 << i))
+ dwc_scan_descriptors(dw, dwc);
+ spin_unlock(&dwc->lock);
+ }
+
+ /*
+ * Re-enable interrupts. Block Complete interrupts are only
+ * enabled if the INT_EN bit in the descriptor is set. This
+ * will trigger a scan before the whole list is done.
+ */
+ channel_set_bit(dw, MASK.XFER, dw->all_chan_mask);
+ channel_set_bit(dw, MASK.BLOCK, dw->all_chan_mask);
+ channel_set_bit(dw, MASK.ERROR, dw->all_chan_mask);
+}
+
+static irqreturn_t dw_dma_interrupt(int irq, void *dev_id)
+{
+ struct dw_dma *dw = dev_id;
+ u32 status;
+
+ dev_vdbg(dw->dma.dev, "interrupt: status=0x%x\n",
+ dma_readl(dw, STATUS_INT));
+
+ /*
+ * Just disable the interrupts. We'll turn them back on in the
+ * softirq handler.
+ */
+ channel_clear_bit(dw, MASK.XFER, dw->all_chan_mask);
+ channel_clear_bit(dw, MASK.BLOCK, dw->all_chan_mask);
+ channel_clear_bit(dw, MASK.ERROR, dw->all_chan_mask);
+
+ status = dma_readl(dw, STATUS_INT);
+ if (status) {
+ dev_err(dw->dma.dev,
+ "BUG: Unexpected interrupts pending: 0x%x\n",
+ status);
+
+ /* Try to recover */
+ channel_clear_bit(dw, MASK.XFER, (1 << 8) - 1);
+ channel_clear_bit(dw, MASK.BLOCK, (1 << 8) - 1);
+ channel_clear_bit(dw, MASK.SRC_TRAN, (1 << 8) - 1);
+ channel_clear_bit(dw, MASK.DST_TRAN, (1 << 8) - 1);
+ channel_clear_bit(dw, MASK.ERROR, (1 << 8) - 1);
+ }
+
+ tasklet_schedule(&dw->tasklet);
+
+ return IRQ_HANDLED;
+}
+
+/*----------------------------------------------------------------------*/
+
+static dma_cookie_t dwc_tx_submit(struct dma_async_tx_descriptor *tx)
+{
+ struct dw_desc *desc = txd_to_dw_desc(tx);
+ struct dw_dma_chan *dwc = to_dw_dma_chan(tx->chan);
+ dma_cookie_t cookie;
+
+ spin_lock_bh(&dwc->lock);
+ cookie = dwc_assign_cookie(dwc, desc);
+
+ /*
+ * REVISIT: We should attempt to chain as many descriptors as
+ * possible, perhaps even appending to those already submitted
+ * for DMA. But this is hard to do in a race-free manner.
+ */
+ if (list_empty(&dwc->active_list)) {
+ dev_vdbg(&tx->chan->dev, "tx_submit: started %u\n",
+ desc->txd.cookie);
+ dwc_dostart(dwc, desc);
+ list_add_tail(&desc->desc_node, &dwc->active_list);
+ } else {
+ dev_vdbg(&tx->chan->dev, "tx_submit: queued %u\n",
+ desc->txd.cookie);
+
+ list_add_tail(&desc->desc_node, &dwc->queue);
+ }
+
+ spin_unlock_bh(&dwc->lock);
+
+ return cookie;
+}
+
+static struct dma_async_tx_descriptor *
+dwc_prep_dma_memcpy(struct dma_chan *chan, dma_addr_t dest, dma_addr_t src,
+ size_t len, unsigned long flags)
+{
+ struct dw_dma_chan *dwc = to_dw_dma_chan(chan);
+ struct dw_desc *desc;
+ struct dw_desc *first;
+ struct dw_desc *prev;
+ size_t xfer_count;
+ size_t offset;
+ unsigned int src_width;
+ unsigned int dst_width;
+ u32 ctllo;
+
+ dev_vdbg(&chan->dev, "prep_dma_memcpy d0x%x s0x%x l0x%zx f0x%lx\n",
+ dest, src, len, flags);
+
+ if (unlikely(!len)) {
+ dev_dbg(&chan->dev, "prep_dma_memcpy: length is zero!\n");
+ return NULL;
+ }
+
+ /*
+ * We can be a lot more clever here, but this should take care
+ * of the most common optimization.
+ */
+ if (!((src | dest | len) & 3))
+ src_width = dst_width = 2;
+ else if (!((src | dest | len) & 1))
+ src_width = dst_width = 1;
+ else
+ src_width = dst_width = 0;
+
+ ctllo = DWC_DEFAULT_CTLLO
+ | DWC_CTLL_DST_WIDTH(dst_width)
+ | DWC_CTLL_SRC_WIDTH(src_width)
+ | DWC_CTLL_DST_INC
+ | DWC_CTLL_SRC_INC
+ | DWC_CTLL_FC_M2M;
+ prev = first = NULL;
+
+ for (offset = 0; offset < len; offset += xfer_count << src_width) {
+ xfer_count = min_t(size_t, (len - offset) >> src_width,
+ DWC_MAX_COUNT);
+
+ desc = dwc_desc_get(dwc);
+ if (!desc)
+ goto err_desc_get;
+
+ desc->lli.sar = src + offset;
+ desc->lli.dar = dest + offset;
+ desc->lli.ctllo = ctllo;
+ desc->lli.ctlhi = xfer_count;
+
+ if (!first) {
+ first = desc;
+ } else {
+ prev->lli.llp = desc->txd.phys;
+ dma_sync_single_for_device(chan->dev.parent,
+ prev->txd.phys, sizeof(prev->lli),
+ DMA_TO_DEVICE);
+ list_add_tail(&desc->desc_node,
+ &first->txd.tx_list);
+ }
+ prev = desc;
+ }
+
+ if (flags & DMA_PREP_INTERRUPT)
+ /* Trigger interrupt after last block */
+ prev->lli.ctllo |= DWC_CTLL_INT_EN;
+
+ prev->lli.llp = 0;
+ dma_sync_single_for_device(chan->dev.parent,
+ prev->txd.phys, sizeof(prev->lli),
+ DMA_TO_DEVICE);
+
+ first->txd.flags = flags;
+
+ return &first->txd;
+
+err_desc_get:
+ dwc_desc_put(dwc, first);
+ return NULL;
+}
+
+static struct dma_async_tx_descriptor *
+dwc_prep_slave_sg(struct dma_chan *chan, struct scatterlist *sgl,
+ unsigned int sg_len, enum dma_data_direction direction,
+ unsigned long flags)
+{
+ struct dw_dma_chan *dwc = to_dw_dma_chan(chan);
+ struct dw_dma_slave *dws = dwc->dws;
+ struct dw_desc *prev;
+ struct dw_desc *first;
+ u32 ctllo;
+ dma_addr_t reg;
+ unsigned int reg_width;
+ unsigned int mem_width;
+ unsigned int i;
+ struct scatterlist *sg;
+
+ dev_vdbg(&chan->dev, "prep_dma_slave\n");
+
+ if (unlikely(!dws || !sg_len))
+ return NULL;
+
+ reg_width = dws->slave.reg_width;
+ prev = first = NULL;
+
+ sg_len = dma_map_sg(chan->dev.parent, sgl, sg_len, direction);
+
+ switch (direction) {
+ case DMA_TO_DEVICE:
+ ctllo = (DWC_DEFAULT_CTLLO
+ | DWC_CTLL_DST_WIDTH(reg_width)
+ | DWC_CTLL_DST_FIX
+ | DWC_CTLL_SRC_INC
+ | DWC_CTLL_FC_M2P);
+ reg = dws->slave.tx_reg;
+ for_each_sg(sgl, sg, sg_len, i) {
+ struct dw_desc *desc;
+ u32 len;
+ u32 mem;
+
+ desc = dwc_desc_get(dwc);
+ if (!desc) {
+ dev_err(&chan->dev,
+ "not enough descriptors available\n");
+ goto err_desc_get;
+ }
+
+ mem = sg_phys(sg);
+ len = sg_dma_len(sg);
+ mem_width = 2;
+ if (unlikely(mem & 3 || len & 3))
+ mem_width = 0;
+
+ desc->lli.sar = mem;
+ desc->lli.dar = reg;
+ desc->lli.ctllo = ctllo | DWC_CTLL_SRC_WIDTH(mem_width);
+ desc->lli.ctlhi = len >> mem_width;
+
+ if (!first) {
+ first = desc;
+ } else {
+ prev->lli.llp = desc->txd.phys;
+ dma_sync_single_for_device(chan->dev.parent,
+ prev->txd.phys,
+ sizeof(prev->lli),
+ DMA_TO_DEVICE);
+ list_add_tail(&desc->desc_node,
+ &first->txd.tx_list);
+ }
+ prev = desc;
+ }
+ break;
+ case DMA_FROM_DEVICE:
+ ctllo = (DWC_DEFAULT_CTLLO
+ | DWC_CTLL_SRC_WIDTH(reg_width)
+ | DWC_CTLL_DST_INC
+ | DWC_CTLL_SRC_FIX
+ | DWC_CTLL_FC_P2M);
+
+ reg = dws->slave.rx_reg;
+ for_each_sg(sgl, sg, sg_len, i) {
+ struct dw_desc *desc;
+ u32 len;
+ u32 mem;
+
+ desc = dwc_desc_get(dwc);
+ if (!desc) {
+ dev_err(&chan->dev,
+ "not enough descriptors available\n");
+ goto err_desc_get;
+ }
+
+ mem = sg_phys(sg);
+ len = sg_dma_len(sg);
+ mem_width = 2;
+ if (unlikely(mem & 3 || len & 3))
+ mem_width = 0;
+
+ desc->lli.sar = reg;
+ desc->lli.dar = mem;
+ desc->lli.ctllo = ctllo | DWC_CTLL_DST_WIDTH(mem_width);
+ desc->lli.ctlhi = len >> reg_width;
+
+ if (!first) {
+ first = desc;
+ } else {
+ prev->lli.llp = desc->txd.phys;
+ dma_sync_single_for_device(chan->dev.parent,
+ prev->txd.phys,
+ sizeof(prev->lli),
+ DMA_TO_DEVICE);
+ list_add_tail(&desc->desc_node,
+ &first->txd.tx_list);
+ }
+ prev = desc;
+ }
+ break;
+ default:
+ return NULL;
+ }
+
+ if (flags & DMA_PREP_INTERRUPT)
+ /* Trigger interrupt after last block */
+ prev->lli.ctllo |= DWC_CTLL_INT_EN;
+
+ prev->lli.llp = 0;
+ dma_sync_single_for_device(chan->dev.parent,
+ prev->txd.phys, sizeof(prev->lli),
+ DMA_TO_DEVICE);
+
+ return &first->txd;
+
+err_desc_get:
+ dwc_desc_put(dwc, first);
+ return NULL;
+}
+
+static void dwc_terminate_all(struct dma_chan *chan)
+{
+ struct dw_dma_chan *dwc = to_dw_dma_chan(chan);
+ struct dw_dma *dw = to_dw_dma(chan->device);
+ struct dw_desc *desc, *_desc;
+ LIST_HEAD(list);
+
+ /*
+ * This is only called when something went wrong elsewhere, so
+ * we don't really care about the data. Just disable the
+ * channel. We still have to poll the channel enable bit due
+ * to AHB/HSB limitations.
+ */
+ spin_lock_bh(&dwc->lock);
+
+ channel_clear_bit(dw, CH_EN, dwc->mask);
+
+ while (dma_readl(dw, CH_EN) & dwc->mask)
+ cpu_relax();
+
+ /* active_list entries will end up before queued entries */
+ list_splice_init(&dwc->queue, &list);
+ list_splice_init(&dwc->active_list, &list);
+
+ spin_unlock_bh(&dwc->lock);
+
+ /* Flush all pending and queued descriptors */
+ list_for_each_entry_safe(desc, _desc, &list, desc_node)
+ dwc_descriptor_complete(dwc, desc);
+}
+
+static enum dma_status
+dwc_is_tx_complete(struct dma_chan *chan,
+ dma_cookie_t cookie,
+ dma_cookie_t *done, dma_cookie_t *used)
+{
+ struct dw_dma_chan *dwc = to_dw_dma_chan(chan);
+ dma_cookie_t last_used;
+ dma_cookie_t last_complete;
+ int ret;
+
+ last_complete = dwc->completed;
+ last_used = chan->cookie;
+
+ ret = dma_async_is_complete(cookie, last_complete, last_used);
+ if (ret != DMA_SUCCESS) {
+ dwc_scan_descriptors(to_dw_dma(chan->device), dwc);
+
+ last_complete = dwc->completed;
+ last_used = chan->cookie;
+
+ ret = dma_async_is_complete(cookie, last_complete, last_used);
+ }
+
+ if (done)
+ *done = last_complete;
+ if (used)
+ *used = last_used;
+
+ return ret;
+}
+
+static void dwc_issue_pending(struct dma_chan *chan)
+{
+ struct dw_dma_chan *dwc = to_dw_dma_chan(chan);
+
+ spin_lock_bh(&dwc->lock);
+ if (!list_empty(&dwc->queue))
+ dwc_scan_descriptors(to_dw_dma(chan->device), dwc);
+ spin_unlock_bh(&dwc->lock);
+}
+
+static int dwc_alloc_chan_resources(struct dma_chan *chan,
+ struct dma_client *client)
+{
+ struct dw_dma_chan *dwc = to_dw_dma_chan(chan);
+ struct dw_dma *dw = to_dw_dma(chan->device);
+ struct dw_desc *desc;
+ struct dma_slave *slave;
+ struct dw_dma_slave *dws;
+ int i;
+ u32 cfghi;
+ u32 cfglo;
+
+ dev_vdbg(&chan->dev, "alloc_chan_resources\n");
+
+ /* Channels doing slave DMA can only handle one client. */
+ if (dwc->dws || client->slave) {
+ if (dma_chan_is_in_use(chan))
+ return -EBUSY;
+ }
+
+ /* ASSERT: channel is idle */
+ if (dma_readl(dw, CH_EN) & dwc->mask) {
+ dev_dbg(&chan->dev, "DMA channel not idle?\n");
+ return -EIO;
+ }
+
+ dwc->completed = chan->cookie = 1;
+
+ cfghi = DWC_CFGH_FIFO_MODE;
+ cfglo = 0;
+
+ slave = client->slave;
+ if (slave) {
+ /*
+ * We need controller-specific data to set up slave
+ * transfers.
+ */
+ BUG_ON(!slave->dma_dev || slave->dma_dev != dw->dma.dev);
+
+ dws = container_of(slave, struct dw_dma_slave, slave);
+
+ dwc->dws = dws;
+ cfghi = dws->cfg_hi;
+ cfglo = dws->cfg_lo;
+ } else {
+ dwc->dws = NULL;
+ }
+
+ channel_writel(dwc, CFG_LO, cfglo);
+ channel_writel(dwc, CFG_HI, cfghi);
+
+ /*
+ * NOTE: some controllers may have additional features that we
+ * need to initialize here, like "scatter-gather" (which
+ * doesn't mean what you think it means), and status writeback.
+ */
+
+ spin_lock_bh(&dwc->lock);
+ i = dwc->descs_allocated;
+ while (dwc->descs_allocated < NR_DESCS_PER_CHANNEL) {
+ spin_unlock_bh(&dwc->lock);
+
+ desc = kzalloc(sizeof(struct dw_desc), GFP_KERNEL);
+ if (!desc) {
+ dev_info(&chan->dev,
+ "only allocated %d descriptors\n", i);
+ spin_lock_bh(&dwc->lock);
+ break;
+ }
+
+ dma_async_tx_descriptor_init(&desc->txd, chan);
+ desc->txd.tx_submit = dwc_tx_submit;
+ desc->txd.flags = DMA_CTRL_ACK;
+ INIT_LIST_HEAD(&desc->txd.tx_list);
+ desc->txd.phys = dma_map_single(chan->dev.parent, &desc->lli,
+ sizeof(desc->lli), DMA_TO_DEVICE);
+ dwc_desc_put(dwc, desc);
+
+ spin_lock_bh(&dwc->lock);
+ i = ++dwc->descs_allocated;
+ }
+
+ /* Enable interrupts */
+ channel_set_bit(dw, MASK.XFER, dwc->mask);
+ channel_set_bit(dw, MASK.BLOCK, dwc->mask);
+ channel_set_bit(dw, MASK.ERROR, dwc->mask);
+
+ spin_unlock_bh(&dwc->lock);
+
+ dev_dbg(&chan->dev,
+ "alloc_chan_resources allocated %d descriptors\n", i);
+
+ return i;
+}
+
+static void dwc_free_chan_resources(struct dma_chan *chan)
+{
+ struct dw_dma_chan *dwc = to_dw_dma_chan(chan);
+ struct dw_dma *dw = to_dw_dma(chan->device);
+ struct dw_desc *desc, *_desc;
+ LIST_HEAD(list);
+
+ dev_dbg(&chan->dev, "free_chan_resources (descs allocated=%u)\n",
+ dwc->descs_allocated);
+
+ /* ASSERT: channel is idle */
+ BUG_ON(!list_empty(&dwc->active_list));
+ BUG_ON(!list_empty(&dwc->queue));
+ BUG_ON(dma_readl(to_dw_dma(chan->device), CH_EN) & dwc->mask);
+
+ spin_lock_bh(&dwc->lock);
+ list_splice_init(&dwc->free_list, &list);
+ dwc->descs_allocated = 0;
+ dwc->dws = NULL;
+
+ /* Disable interrupts */
+ channel_clear_bit(dw, MASK.XFER, dwc->mask);
+ channel_clear_bit(dw, MASK.BLOCK, dwc->mask);
+ channel_clear_bit(dw, MASK.ERROR, dwc->mask);
+
+ spin_unlock_bh(&dwc->lock);
+
+ list_for_each_entry_safe(desc, _desc, &list, desc_node) {
+ dev_vdbg(&chan->dev, " freeing descriptor %p\n", desc);
+ dma_unmap_single(chan->dev.parent, desc->txd.phys,
+ sizeof(desc->lli), DMA_TO_DEVICE);
+ kfree(desc);
+ }
+
+ dev_vdbg(&chan->dev, "free_chan_resources done\n");
+}
+
+/*----------------------------------------------------------------------*/
+
+static void dw_dma_off(struct dw_dma *dw)
+{
+ dma_writel(dw, CFG, 0);
+
+ channel_clear_bit(dw, MASK.XFER, dw->all_chan_mask);
+ channel_clear_bit(dw, MASK.BLOCK, dw->all_chan_mask);
+ channel_clear_bit(dw, MASK.SRC_TRAN, dw->all_chan_mask);
+ channel_clear_bit(dw, MASK.DST_TRAN, dw->all_chan_mask);
+ channel_clear_bit(dw, MASK.ERROR, dw->all_chan_mask);
+
+ while (dma_readl(dw, CFG) & DW_CFG_DMA_EN)
+ cpu_relax();
+}
+
+static int __init dw_probe(struct platform_device *pdev)
+{
+ struct dw_dma_platform_data *pdata;
+ struct resource *io;
+ struct dw_dma *dw;
+ size_t size;
+ int irq;
+ int err;
+ int i;
+
+ pdata = pdev->dev.platform_data;
+ if (!pdata || pdata->nr_channels > DW_DMA_MAX_NR_CHANNELS)
+ return -EINVAL;
+
+ io = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+ if (!io)
+ return -EINVAL;
+
+ irq = platform_get_irq(pdev, 0);
+ if (irq < 0)
+ return irq;
+
+ size = sizeof(struct dw_dma);
+ size += pdata->nr_channels * sizeof(struct dw_dma_chan);
+ dw = kzalloc(size, GFP_KERNEL);
+ if (!dw)
+ return -ENOMEM;
+
+ if (!request_mem_region(io->start, DW_REGLEN, pdev->dev.driver->name)) {
+ err = -EBUSY;
+ goto err_kfree;
+ }
+
+ memset(dw, 0, sizeof *dw);
+
+ dw->regs = ioremap(io->start, DW_REGLEN);
+ if (!dw->regs) {
+ err = -ENOMEM;
+ goto err_release_r;
+ }
+
+ dw->clk = clk_get(&pdev->dev, "hclk");
+ if (IS_ERR(dw->clk)) {
+ err = PTR_ERR(dw->clk);
+ goto err_clk;
+ }
+ clk_enable(dw->clk);
+
+ /* force dma off, just in case */
+ dw_dma_off(dw);
+
+ err = request_irq(irq, dw_dma_interrupt, 0, "dw_dmac", dw);
+ if (err)
+ goto err_irq;
+
+ platform_set_drvdata(pdev, dw);
+
+ tasklet_init(&dw->tasklet, dw_dma_tasklet, (unsigned long)dw);
+
+ dw->all_chan_mask = (1 << pdata->nr_channels) - 1;
+
+ INIT_LIST_HEAD(&dw->dma.channels);
+ for (i = 0; i < pdata->nr_channels; i++, dw->dma.chancnt++) {
+ struct dw_dma_chan *dwc = &dw->chan[i];
+
+ dwc->chan.device = &dw->dma;
+ dwc->chan.cookie = dwc->completed = 1;
+ dwc->chan.chan_id = i;
+ list_add_tail(&dwc->chan.device_node, &dw->dma.channels);
+
+ dwc->ch_regs = &__dw_regs(dw)->CHAN[i];
+ spin_lock_init(&dwc->lock);
+ dwc->mask = 1 << i;
+
+ INIT_LIST_HEAD(&dwc->active_list);
+ INIT_LIST_HEAD(&dwc->queue);
+ INIT_LIST_HEAD(&dwc->free_list);
+
+ channel_clear_bit(dw, CH_EN, dwc->mask);
+ }
+
+ /* Clear/disable all interrupts on all channels. */
+ dma_writel(dw, CLEAR.XFER, dw->all_chan_mask);
+ dma_writel(dw, CLEAR.BLOCK, dw->all_chan_mask);
+ dma_writel(dw, CLEAR.SRC_TRAN, dw->all_chan_mask);
+ dma_writel(dw, CLEAR.DST_TRAN, dw->all_chan_mask);
+ dma_writel(dw, CLEAR.ERROR, dw->all_chan_mask);
+
+ channel_clear_bit(dw, MASK.XFER, dw->all_chan_mask);
+ channel_clear_bit(dw, MASK.BLOCK, dw->all_chan_mask);
+ channel_clear_bit(dw, MASK.SRC_TRAN, dw->all_chan_mask);
+ channel_clear_bit(dw, MASK.DST_TRAN, dw->all_chan_mask);
+ channel_clear_bit(dw, MASK.ERROR, dw->all_chan_mask);
+
+ dma_cap_set(DMA_MEMCPY, dw->dma.cap_mask);
+ dma_cap_set(DMA_SLAVE, dw->dma.cap_mask);
+ dw->dma.dev = &pdev->dev;
+ dw->dma.device_alloc_chan_resources = dwc_alloc_chan_resources;
+ dw->dma.device_free_chan_resources = dwc_free_chan_resources;
+
+ dw->dma.device_prep_dma_memcpy = dwc_prep_dma_memcpy;
+
+ dw->dma.device_prep_slave_sg = dwc_prep_slave_sg;
+ dw->dma.device_terminate_all = dwc_terminate_all;
+
+ dw->dma.device_is_tx_complete = dwc_is_tx_complete;
+ dw->dma.device_issue_pending = dwc_issue_pending;
+
+ dma_writel(dw, CFG, DW_CFG_DMA_EN);
+
+ printk(KERN_INFO "%s: DesignWare DMA Controller, %d channels\n",
+ pdev->dev.bus_id, dw->dma.chancnt);
+
+ dma_async_device_register(&dw->dma);
+
+ return 0;
+
+err_irq:
+ clk_disable(dw->clk);
+ clk_put(dw->clk);
+err_clk:
+ iounmap(dw->regs);
+ dw->regs = NULL;
+err_release_r:
+ release_resource(io);
+err_kfree:
+ kfree(dw);
+ return err;
+}
+
+static int __exit dw_remove(struct platform_device *pdev)
+{
+ struct dw_dma *dw = platform_get_drvdata(pdev);
+ struct dw_dma_chan *dwc, *_dwc;
+ struct resource *io;
+
+ dw_dma_off(dw);
+ dma_async_device_unregister(&dw->dma);
+
+ free_irq(platform_get_irq(pdev, 0), dw);
+ tasklet_kill(&dw->tasklet);
+
+ list_for_each_entry_safe(dwc, _dwc, &dw->dma.channels,
+ chan.device_node) {
+ list_del(&dwc->chan.device_node);
+ channel_clear_bit(dw, CH_EN, dwc->mask);
+ }
+
+ clk_disable(dw->clk);
+ clk_put(dw->clk);
+
+ iounmap(dw->regs);
+ dw->regs = NULL;
+
+ io = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+ release_mem_region(io->start, DW_REGLEN);
+
+ kfree(dw);
+
+ return 0;
+}
+
+static void dw_shutdown(struct platform_device *pdev)
+{
+ struct dw_dma *dw = platform_get_drvdata(pdev);
+
+ dw_dma_off(platform_get_drvdata(pdev));
+ clk_disable(dw->clk);
+}
+
+static int dw_suspend_late(struct platform_device *pdev, pm_message_t mesg)
+{
+ struct dw_dma *dw = platform_get_drvdata(pdev);
+
+ dw_dma_off(platform_get_drvdata(pdev));
+ clk_disable(dw->clk);
+ return 0;
+}
+
+static int dw_resume_early(struct platform_device *pdev)
+{
+ struct dw_dma *dw = platform_get_drvdata(pdev);
+
+ clk_enable(dw->clk);
+ dma_writel(dw, CFG, DW_CFG_DMA_EN);
+ return 0;
+}
+
+static struct platform_driver dw_driver = {
+ .remove = __exit_p(dw_remove),
+ .shutdown = dw_shutdown,
+ .suspend_late = dw_suspend_late,
+ .resume_early = dw_resume_early,
+ .driver = {
+ .name = "dw_dmac",
+ },
+};
+
+static int __init dw_init(void)
+{
+ return platform_driver_probe(&dw_driver, dw_probe);
+}
+module_init(dw_init);
+
+static void __exit dw_exit(void)
+{
+ platform_driver_unregister(&dw_driver);
+}
+module_exit(dw_exit);
+
+MODULE_LICENSE("GPL v2");
+MODULE_DESCRIPTION("Synopsys DesignWare DMA Controller driver");
+MODULE_AUTHOR("Haavard Skinnemoen <[email protected]>");
diff --git a/drivers/dma/dw_dmac_regs.h b/drivers/dma/dw_dmac_regs.h
new file mode 100644
index 0000000..119e65b
--- /dev/null
+++ b/drivers/dma/dw_dmac_regs.h
@@ -0,0 +1,224 @@
+/*
+ * Driver for the Synopsys DesignWare AHB DMA Controller
+ *
+ * Copyright (C) 2005-2007 Atmel Corporation
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#include <linux/dw_dmac.h>
+
+#define DW_DMA_MAX_NR_CHANNELS 8
+
+/*
+ * Redefine this macro to handle differences between 32- and 64-bit
+ * addressing, big vs. little endian, etc.
+ */
+#define DW_REG(name) u32 name; u32 __pad_##name
+
+/* Hardware register definitions. */
+struct dw_dma_chan_regs {
+ DW_REG(SAR); /* Source Address Register */
+ DW_REG(DAR); /* Destination Address Register */
+ DW_REG(LLP); /* Linked List Pointer */
+ u32 CTL_LO; /* Control Register Low */
+ u32 CTL_HI; /* Control Register High */
+ DW_REG(SSTAT);
+ DW_REG(DSTAT);
+ DW_REG(SSTATAR);
+ DW_REG(DSTATAR);
+ u32 CFG_LO; /* Configuration Register Low */
+ u32 CFG_HI; /* Configuration Register High */
+ DW_REG(SGR);
+ DW_REG(DSR);
+};
+
+struct dw_dma_irq_regs {
+ DW_REG(XFER);
+ DW_REG(BLOCK);
+ DW_REG(SRC_TRAN);
+ DW_REG(DST_TRAN);
+ DW_REG(ERROR);
+};
+
+struct dw_dma_regs {
+ /* per-channel registers */
+ struct dw_dma_chan_regs CHAN[DW_DMA_MAX_NR_CHANNELS];
+
+ /* irq handling */
+ struct dw_dma_irq_regs RAW; /* r */
+ struct dw_dma_irq_regs STATUS; /* r (raw & mask) */
+ struct dw_dma_irq_regs MASK; /* rw (set = irq enabled) */
+ struct dw_dma_irq_regs CLEAR; /* w (ack, affects "raw") */
+
+ DW_REG(STATUS_INT); /* r */
+
+ /* software handshaking */
+ DW_REG(REQ_SRC);
+ DW_REG(REQ_DST);
+ DW_REG(SGL_REQ_SRC);
+ DW_REG(SGL_REQ_DST);
+ DW_REG(LAST_SRC);
+ DW_REG(LAST_DST);
+
+ /* miscellaneous */
+ DW_REG(CFG);
+ DW_REG(CH_EN);
+ DW_REG(ID);
+ DW_REG(TEST);
+
+ /* optional encoded params, 0x3c8..0x3f7 */
+};
+
+/* Bitfields in CTL_LO */
+#define DWC_CTLL_INT_EN (1 << 0) /* irqs enabled? */
+#define DWC_CTLL_DST_WIDTH(n) ((n)<<1) /* bytes per element */
+#define DWC_CTLL_SRC_WIDTH(n) ((n)<<4)
+#define DWC_CTLL_DST_INC (0<<7) /* DAR update/not */
+#define DWC_CTLL_DST_DEC (1<<7)
+#define DWC_CTLL_DST_FIX (2<<7)
+#define DWC_CTLL_SRC_INC (0<<7) /* SAR update/not */
+#define DWC_CTLL_SRC_DEC (1<<9)
+#define DWC_CTLL_SRC_FIX (2<<9)
+#define DWC_CTLL_DST_MSIZE(n) ((n)<<11) /* burst, #elements */
+#define DWC_CTLL_SRC_MSIZE(n) ((n)<<14)
+#define DWC_CTLL_S_GATH_EN (1 << 17) /* src gather, !FIX */
+#define DWC_CTLL_D_SCAT_EN (1 << 18) /* dst scatter, !FIX */
+#define DWC_CTLL_FC_M2M (0 << 20) /* mem-to-mem */
+#define DWC_CTLL_FC_M2P (1 << 20) /* mem-to-periph */
+#define DWC_CTLL_FC_P2M (2 << 20) /* periph-to-mem */
+#define DWC_CTLL_FC_P2P (3 << 20) /* periph-to-periph */
+/* plus 4 transfer types for peripheral-as-flow-controller */
+#define DWC_CTLL_DMS(n) ((n)<<23) /* dst master select */
+#define DWC_CTLL_SMS(n) ((n)<<25) /* src master select */
+#define DWC_CTLL_LLP_D_EN (1 << 27) /* dest block chain */
+#define DWC_CTLL_LLP_S_EN (1 << 28) /* src block chain */
+
+/* Bitfields in CTL_HI */
+#define DWC_CTLH_DONE 0x00001000
+#define DWC_CTLH_BLOCK_TS_MASK 0x00000fff
+
+/* Bitfields in CFG_LO. Platform-configurable bits are in <linux/dw_dmac.h> */
+#define DWC_CFGL_CH_SUSP (1 << 8) /* pause xfer */
+#define DWC_CFGL_FIFO_EMPTY (1 << 9) /* pause xfer */
+#define DWC_CFGL_HS_DST (1 << 10) /* handshake w/dst */
+#define DWC_CFGL_HS_SRC (1 << 11) /* handshake w/src */
+#define DWC_CFGL_MAX_BURST(x) ((x) << 20)
+#define DWC_CFGL_RELOAD_SAR (1 << 30)
+#define DWC_CFGL_RELOAD_DAR (1 << 31)
+
+/* Bitfields in CFG_HI. Platform-configurable bits are in <linux/dw_dmac.h> */
+#define DWC_CFGH_DS_UPD_EN (1 << 5)
+#define DWC_CFGH_SS_UPD_EN (1 << 6)
+
+/* Bitfields in SGR */
+#define DWC_SGR_SGI(x) ((x) << 0)
+#define DWC_SGR_SGC(x) ((x) << 20)
+
+/* Bitfields in DSR */
+#define DWC_DSR_DSI(x) ((x) << 0)
+#define DWC_DSR_DSC(x) ((x) << 20)
+
+/* Bitfields in CFG */
+#define DW_CFG_DMA_EN (1 << 0)
+
+#define DW_REGLEN 0x400
+
+struct dw_dma_chan {
+ struct dma_chan chan;
+ void __iomem *ch_regs;
+ u8 mask;
+
+ spinlock_t lock;
+
+ /* these other elements are all protected by lock */
+ dma_cookie_t completed;
+ struct list_head active_list;
+ struct list_head queue;
+ struct list_head free_list;
+
+ struct dw_dma_slave *dws;
+
+ unsigned int descs_allocated;
+};
+
+static inline struct dw_dma_chan_regs __iomem *
+__dwc_regs(struct dw_dma_chan *dwc)
+{
+ return dwc->ch_regs;
+}
+
+#define channel_readl(dwc, name) \
+ __raw_readl(&(__dwc_regs(dwc)->name))
+#define channel_writel(dwc, name, val) \
+ __raw_writel((val), &(__dwc_regs(dwc)->name))
+
+static inline struct dw_dma_chan *to_dw_dma_chan(struct dma_chan *chan)
+{
+ return container_of(chan, struct dw_dma_chan, chan);
+}
+
+struct dw_dma {
+ struct dma_device dma;
+ void __iomem *regs;
+ struct tasklet_struct tasklet;
+ struct clk *clk;
+
+ u8 all_chan_mask;
+
+ struct dw_dma_chan chan[0];
+};
+
+static inline struct dw_dma_regs __iomem *__dw_regs(struct dw_dma *dw)
+{
+ return dw->regs;
+}
+
+#define dma_readl(dw, name) \
+ __raw_readl(&(__dw_regs(dw)->name))
+#define dma_writel(dw, name, val) \
+ __raw_writel((val), &(__dw_regs(dw)->name))
+
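+/*
+ * The MASK.* and CH_EN registers have a write-enable field in bits
+ * 15:8: a bit is only updated when the corresponding write-enable
+ * bit is set in the same write.  So "set" writes the mask in both
+ * fields, while "clear" writes it in the write-enable field only.
+ */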
+#define channel_set_bit(dw, reg, mask) \
+ dma_writel(dw, reg, ((mask) << 8) | (mask))
+#define channel_clear_bit(dw, reg, mask) \
+ dma_writel(dw, reg, ((mask) << 8) | 0)
+
+static inline struct dw_dma *to_dw_dma(struct dma_device *ddev)
+{
+ return container_of(ddev, struct dw_dma, dma);
+}
+
+/* LLI == Linked List Item; a.k.a. DMA block descriptor */
+struct dw_lli {
+ /* values that are not changed by hardware */
+ dma_addr_t sar;
+ dma_addr_t dar;
+ dma_addr_t llp; /* chain to next lli */
+ u32 ctllo;
+ /* values that may get written back: */
+ u32 ctlhi;
+ /* sstat and dstat can snapshot peripheral register state.
+ * silicon config may discard either or both...
+ */
+ u32 sstat;
+ u32 dstat;
+};
+
+struct dw_desc {
+ /* FIRST values the hardware uses */
+ struct dw_lli lli;
+
+ /* THEN values for driver housekeeping */
+ struct list_head desc_node;
+ struct dma_async_tx_descriptor txd;
+};
+
+static inline struct dw_desc *
+txd_to_dw_desc(struct dma_async_tx_descriptor *txd)
+{
+ return container_of(txd, struct dw_desc, txd);
+}
diff --git a/include/asm-avr32/arch-at32ap/at32ap700x.h b/include/asm-avr32/arch-at32ap/at32ap700x.h
index 31e48b0..d18a305 100644
--- a/include/asm-avr32/arch-at32ap/at32ap700x.h
+++ b/include/asm-avr32/arch-at32ap/at32ap700x.h
@@ -30,4 +30,20 @@
#define GPIO_PIN_PD(N) (GPIO_PIOD_BASE + (N))
#define GPIO_PIN_PE(N) (GPIO_PIOE_BASE + (N))

+
+/*
+ * DMAC peripheral hardware handshaking interfaces, used with dw_dmac
+ */
+#define DMAC_MCI_RX 0
+#define DMAC_MCI_TX 1
+#define DMAC_DAC_TX 2
+#define DMAC_AC97_A_RX 3
+#define DMAC_AC97_A_TX 4
+#define DMAC_AC97_B_RX 5
+#define DMAC_AC97_B_TX 6
+#define DMAC_DMAREQ_0 7
+#define DMAC_DMAREQ_1 8
+#define DMAC_DMAREQ_2 9
+#define DMAC_DMAREQ_3 10
+
#endif /* __ASM_ARCH_AT32AP700X_H__ */
diff --git a/include/linux/dw_dmac.h b/include/linux/dw_dmac.h
new file mode 100644
index 0000000..04d217b
--- /dev/null
+++ b/include/linux/dw_dmac.h
@@ -0,0 +1,62 @@
+/*
+ * Driver for the Synopsys DesignWare DMA Controller (aka DMACA on
+ * AVR32 systems.)
+ *
+ * Copyright (C) 2007 Atmel Corporation
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+#ifndef DW_DMAC_H
+#define DW_DMAC_H
+
+#include <linux/dmaengine.h>
+
+/**
+ * struct dw_dma_platform_data - Controller configuration parameters
+ * @nr_channels: Number of channels supported by hardware (max 8)
+ */
+struct dw_dma_platform_data {
+ unsigned int nr_channels;
+};
+
+/**
+ * struct dw_dma_slave - Controller-specific information about a slave
+ * @slave: Generic information about the slave
+ * @cfg_hi: Platform-specific initializer for the CFG_HI register
+ * @cfg_lo: Platform-specific initializer for the CFG_LO register
+ */
+struct dw_dma_slave {
+ struct dma_slave slave;
+ u32 cfg_hi;
+ u32 cfg_lo;
+};
+
+/* Platform-configurable bits in CFG_HI */
+#define DWC_CFGH_FCMODE (1 << 0)
+#define DWC_CFGH_FIFO_MODE (1 << 1)
+#define DWC_CFGH_PROTCTL(x) ((x) << 2)
+#define DWC_CFGH_SRC_PER(x) ((x) << 7)
+#define DWC_CFGH_DST_PER(x) ((x) << 11)
+
+/* Platform-configurable bits in CFG_LO */
+#define DWC_CFGL_PRIO(x) ((x) << 5) /* priority */
+#define DWC_CFGL_LOCK_CH_XFER (0 << 12) /* scope of LOCK_CH */
+#define DWC_CFGL_LOCK_CH_BLOCK (1 << 12)
+#define DWC_CFGL_LOCK_CH_XACT (2 << 12)
+#define DWC_CFGL_LOCK_BUS_XFER (0 << 14) /* scope of LOCK_BUS */
+#define DWC_CFGL_LOCK_BUS_BLOCK (1 << 14)
+#define DWC_CFGL_LOCK_BUS_XACT (2 << 14)
+#define DWC_CFGL_LOCK_CH (1 << 15) /* channel lockout */
+#define DWC_CFGL_LOCK_BUS (1 << 16) /* busmaster lockout */
+#define DWC_CFGL_HS_DST_POL (1 << 18) /* dst handshake active low */
+#define DWC_CFGL_HS_SRC_POL (1 << 19) /* src handshake active low */
+
+static inline struct dw_dma_slave *to_dw_dma_slave(struct dma_slave *slave)
+{
+ return container_of(slave, struct dw_dma_slave, slave);
+}
+
+#endif /* DW_DMAC_H */
--
1.5.5.4

2008-06-26 13:25:43

by Haavard Skinnemoen

Subject: [PATCH v4 2/6] dmaengine: Add dma_chan_is_in_use() function

This moves the code checking if a DMA channel is in use from
show_in_use() into an inline helper function, dma_chan_is_in_use().
DMA controllers can use this in order to give clients exclusive
access to channels (usually necessary when setting up slave DMA.)
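
For example, the dw_dmac driver later in this series uses it in its
alloc_chan_resources hook to give slave clients exclusive access:

        /* Channels doing slave DMA can only handle one client. */
        if (dwc->dws || client->slave) {
                if (dma_chan_is_in_use(chan))
                        return -EBUSY;
        }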

I have to admit that I don't really understand the channel refcounting
logic at all... dma_chan_get() simply increments a per-cpu value. How
can we be sure that whatever CPU calls dma_chan_is_in_use() sees the
same value?

Signed-off-by: Haavard Skinnemoen <[email protected]>
---
drivers/dma/dmaengine.c | 12 +-----------
include/linux/dmaengine.h | 17 +++++++++++++++++
2 files changed, 18 insertions(+), 11 deletions(-)

diff --git a/drivers/dma/dmaengine.c b/drivers/dma/dmaengine.c
index a57c337..ad8d811 100644
--- a/drivers/dma/dmaengine.c
+++ b/drivers/dma/dmaengine.c
@@ -105,17 +105,7 @@ static ssize_t show_bytes_transferred(struct device *dev, struct device_attribut
static ssize_t show_in_use(struct device *dev, struct device_attribute *attr, char *buf)
{
struct dma_chan *chan = to_dma_chan(dev);
- int in_use = 0;
-
- if (unlikely(chan->slow_ref) &&
- atomic_read(&chan->refcount.refcount) > 1)
- in_use = 1;
- else {
- if (local_read(&(per_cpu_ptr(chan->local,
- get_cpu())->refcount)) > 0)
- in_use = 1;
- put_cpu();
- }
+ int in_use = dma_chan_is_in_use(chan);

return sprintf(buf, "%d\n", in_use);
}
diff --git a/include/linux/dmaengine.h b/include/linux/dmaengine.h
index cffb95f..4b602d3 100644
--- a/include/linux/dmaengine.h
+++ b/include/linux/dmaengine.h
@@ -180,6 +180,23 @@ static inline void dma_chan_put(struct dma_chan *chan)
}
}

+static inline bool dma_chan_is_in_use(struct dma_chan *chan)
+{
+ bool in_use = false;
+
+ if (unlikely(chan->slow_ref) &&
+ atomic_read(&chan->refcount.refcount) > 1)
+ in_use = true;
+ else {
+ if (local_read(&(per_cpu_ptr(chan->local,
+ get_cpu())->refcount)) > 0)
+ in_use = true;
+ put_cpu();
+ }
+
+ return in_use;
+}
+
/*
* typedef dma_event_callback - function pointer to a DMA event callback
* For each channel added to the system this routine is called for each client.
--
1.5.5.4

2008-06-26 13:25:59

by Haavard Skinnemoen

Subject: [PATCH v4 6/6] Atmel MCI: Driver for Atmel on-chip MMC controllers

This is a driver for the MMC controller on the AP7000 chips from
Atmel. It should in theory work on AT91 systems too with some
tweaking, but since the DMA interface is quite different, it's not
entirely clear if it's worth merging this with the at91_mci driver.

This driver has been around for a while in BSPs and kernel sources
provided by Atmel, but this particular version uses the generic DMA
Engine framework (with the slave extensions) instead of an
avr32-only DMA controller framework.

This driver can also use PIO transfers when no DMA channels are
available, and for transfers where using DMA may be difficult or
impractical for some reason (e.g. the DMA setup overhead is usually
not worth it for very short transfers, and badly aligned buffers or
lengths are difficult to handle.)
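
Just to illustrate the kind of check meant here (a made-up sketch,
not code from the driver below; the threshold name is hypothetical):

        /* Fall back to PIO for short or badly aligned requests */
        if (data->blocks * data->blksz < ATMCI_DMA_THRESHOLD
                        || (data->blksz & 3) || (sg->offset & 3))
                return -EINVAL; /* caller falls back to PIO */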

The driver has been tested using mmc-block and ext3fs on several SD,
SDHC and MMC+ cards. Reads and writes work fine, with read transfer
rates up to 7.2 MiB/s on fast cards with debugging disabled.

The driver has also been tested using the mmc_test module on the same
cards. All tests except 7, 9, 15 and 17 succeed. The first two are
unsupported by all the cards I have, so I don't know if the driver
handles this correctly. The last two fail because the hardware flags a
Data CRC Error instead of a Data Timeout error. I'm not sure how to deal
with that.

Documentation for this controller can be found in many data sheets from
Atmel, including the AT32AP7000 data sheet which can be found here:

http://www.atmel.com/dyn/products/datasheets.asp?family_id=682

Signed-off-by: Haavard Skinnemoen <[email protected]>

Changes since v3:
* Update to latest DMA slave API
* Use debugfs root created by mmc core
* Kill fmax module parameter
* Round MMC clock rate down
* Fix unreliable card detection (using a debounce timer and
terminating commands early.)
* Handle descriptor allocation errors (just fail the transfer)
* Tune block parameters (max_hw_segs, etc.)

Changes since v2:
* Reset the controller after each transfer since we're violating the
spec sometimes. This is very cheap, so we don't try to be clever.
* Turn off the MMC clock when no requests are pending.
* Implement support for PIO transfers (i.e. not using DMA.)
* Rename atmel-mci.h -> atmel-mci-regs.h
* Use controller-specific data passed from the platform code to set
up DMA slave transfers. These parameters include the physical
DMA device, peripheral handshake IDs, channel priorities, etc.
* Fix several card removal bugs
---
arch/avr32/boards/atngw100/setup.c | 7 +
arch/avr32/boards/atstk1000/atstk1002.c | 3 +
arch/avr32/mach-at32ap/at32ap700x.c | 47 +-
drivers/mmc/host/Kconfig | 10 +
drivers/mmc/host/Makefile | 1 +
drivers/mmc/host/atmel-mci-regs.h | 194 +++++
drivers/mmc/host/atmel-mci.c | 1428 +++++++++++++++++++++++++++++++
include/asm-avr32/arch-at32ap/board.h | 6 +-
include/asm-avr32/atmel-mci.h | 12 +
9 files changed, 1702 insertions(+), 6 deletions(-)
create mode 100644 drivers/mmc/host/atmel-mci-regs.h
create mode 100644 drivers/mmc/host/atmel-mci.c
create mode 100644 include/asm-avr32/atmel-mci.h

diff --git a/arch/avr32/boards/atngw100/setup.c b/arch/avr32/boards/atngw100/setup.c
index a398be2..96833bf 100644
--- a/arch/avr32/boards/atngw100/setup.c
+++ b/arch/avr32/boards/atngw100/setup.c
@@ -17,6 +17,7 @@
#include <linux/leds.h>
#include <linux/spi/spi.h>

+#include <asm/atmel-mci.h>
#include <asm/io.h>
#include <asm/setup.h>

@@ -42,6 +43,11 @@ static struct spi_board_info spi0_board_info[] __initdata = {
},
};

+static struct mci_platform_data __initdata mci0_data = {
+ .detect_pin = GPIO_PIN_PC(25),
+ .wp_pin = GPIO_PIN_PE(0),
+};
+
/*
* The next two functions should go away as the boot loader is
* supposed to initialize the macb address registers with a valid
@@ -157,6 +163,7 @@ static int __init atngw100_init(void)
set_hw_addr(at32_add_device_eth(1, &eth_data[1]));

at32_add_device_spi(0, spi0_board_info, ARRAY_SIZE(spi0_board_info));
+ at32_add_device_mci(0, &mci0_data);
at32_add_device_usba(0, NULL);

for (i = 0; i < ARRAY_SIZE(ngw_leds); i++) {
diff --git a/arch/avr32/boards/atstk1000/atstk1002.c b/arch/avr32/boards/atstk1000/atstk1002.c
index 000eb42..8b92cd6 100644
--- a/arch/avr32/boards/atstk1000/atstk1002.c
+++ b/arch/avr32/boards/atstk1000/atstk1002.c
@@ -228,6 +228,9 @@ static int __init atstk1002_init(void)
#ifdef CONFIG_BOARD_ATSTK100X_SPI1
at32_add_device_spi(1, spi1_board_info, ARRAY_SIZE(spi1_board_info));
#endif
+#ifndef CONFIG_BOARD_ATSTK1002_SW2_CUSTOM
+ at32_add_device_mci(0, NULL);
+#endif
#ifdef CONFIG_BOARD_ATSTK1002_SW5_CUSTOM
set_hw_addr(at32_add_device_eth(1, &eth_data[1]));
#else
diff --git a/arch/avr32/mach-at32ap/at32ap700x.c b/arch/avr32/mach-at32ap/at32ap700x.c
index 2b92047..1d47605 100644
--- a/arch/avr32/mach-at32ap/at32ap700x.c
+++ b/arch/avr32/mach-at32ap/at32ap700x.c
@@ -7,6 +7,7 @@
*/
#include <linux/clk.h>
#include <linux/delay.h>
+#include <linux/dw_dmac.h>
#include <linux/fb.h>
#include <linux/init.h>
#include <linux/platform_device.h>
@@ -14,6 +15,7 @@
#include <linux/spi/spi.h>
#include <linux/usb/atmel_usba_udc.h>

+#include <asm/atmel-mci.h>
#include <asm/io.h>
#include <asm/irq.h>

@@ -1199,20 +1201,48 @@ static struct clk atmel_mci0_pclk = {
.index = 9,
};

-struct platform_device *__init at32_add_device_mci(unsigned int id)
+struct platform_device *__init
+at32_add_device_mci(unsigned int id, struct mci_platform_data *data)
{
- struct platform_device *pdev;
+ struct mci_platform_data _data;
+ struct platform_device *pdev;
+ struct dw_dma_slave *dws;

if (id != 0)
return NULL;

pdev = platform_device_alloc("atmel_mci", id);
if (!pdev)
- return NULL;
+ goto fail;

if (platform_device_add_resources(pdev, atmel_mci0_resource,
ARRAY_SIZE(atmel_mci0_resource)))
- goto err_add_resources;
+ goto fail;
+
+ if (!data) {
+ data = &_data;
+ memset(data, 0, sizeof(struct mci_platform_data));
+ }
+
+ if (data->dma_slave)
+ dws = kmemdup(to_dw_dma_slave(data->dma_slave),
+ sizeof(struct dw_dma_slave), GFP_KERNEL);
+ else
+ dws = kzalloc(sizeof(struct dw_dma_slave), GFP_KERNEL);
+
+ dws->slave.dev = &pdev->dev;
+ dws->slave.dma_dev = &dw_dmac0_device.dev;
+ dws->slave.reg_width = DMA_SLAVE_WIDTH_32BIT;
+ dws->cfg_hi = (DWC_CFGH_SRC_PER(0)
+ | DWC_CFGH_DST_PER(1));
+ dws->cfg_lo &= ~(DWC_CFGL_HS_DST_POL
+ | DWC_CFGL_HS_SRC_POL);
+
+ data->dma_slave = &dws->slave;
+
+ if (platform_device_add_data(pdev, data,
+ sizeof(struct mci_platform_data)))
+ goto fail;

select_peripheral(PA(10), PERIPH_A, 0); /* CLK */
select_peripheral(PA(11), PERIPH_A, 0); /* CMD */
@@ -1221,12 +1251,19 @@ struct platform_device *__init at32_add_device_mci(unsigned int id)
select_peripheral(PA(14), PERIPH_A, 0); /* DATA2 */
select_peripheral(PA(15), PERIPH_A, 0); /* DATA3 */

+ if (data) {
+ if (data->detect_pin != GPIO_PIN_NONE)
+ at32_select_gpio(data->detect_pin, 0);
+ if (data->wp_pin != GPIO_PIN_NONE)
+ at32_select_gpio(data->wp_pin, 0);
+ }
+
atmel_mci0_pclk.dev = &pdev->dev;

platform_device_add(pdev);
return pdev;

-err_add_resources:
+fail:
platform_device_put(pdev);
return NULL;
}
diff --git a/drivers/mmc/host/Kconfig b/drivers/mmc/host/Kconfig
index dead617..fca47c1 100644
--- a/drivers/mmc/host/Kconfig
+++ b/drivers/mmc/host/Kconfig
@@ -91,6 +91,16 @@ config MMC_AT91

If unsure, say N.

+config MMC_ATMELMCI
+ tristate "Atmel Multimedia Card Interface support"
+ depends on AVR32 && DMA_ENGINE
+ help
+ This selects the Atmel Multimedia Card Interface driver. If
+ you have an AT32 (AVR32) platform with a Multimedia Card
+ slot, say Y or M here.
+
+ If unsure, say N.
+
config MMC_IMX
tristate "Motorola i.MX Multimedia Card Interface support"
depends on ARCH_IMX
diff --git a/drivers/mmc/host/Makefile b/drivers/mmc/host/Makefile
index 3877c87..e80ea72 100644
--- a/drivers/mmc/host/Makefile
+++ b/drivers/mmc/host/Makefile
@@ -15,6 +15,7 @@ obj-$(CONFIG_MMC_WBSD) += wbsd.o
obj-$(CONFIG_MMC_AU1X) += au1xmmc.o
obj-$(CONFIG_MMC_OMAP) += omap.o
obj-$(CONFIG_MMC_AT91) += at91_mci.o
+obj-$(CONFIG_MMC_ATMELMCI) += atmel-mci.o
obj-$(CONFIG_MMC_TIFM_SD) += tifm_sd.o
obj-$(CONFIG_MMC_SPI) += mmc_spi.o

diff --git a/drivers/mmc/host/atmel-mci-regs.h b/drivers/mmc/host/atmel-mci-regs.h
new file mode 100644
index 0000000..7719e37
--- /dev/null
+++ b/drivers/mmc/host/atmel-mci-regs.h
@@ -0,0 +1,194 @@
+/*
+ * Atmel MultiMedia Card Interface driver
+ *
+ * Copyright (C) 2004-2006 Atmel Corporation
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+#ifndef __DRIVERS_MMC_ATMEL_MCI_H__
+#define __DRIVERS_MMC_ATMEL_MCI_H__
+
+/* MCI register offsets */
+#define MCI_CR 0x0000
+#define MCI_MR 0x0004
+#define MCI_DTOR 0x0008
+#define MCI_SDCR 0x000c
+#define MCI_ARGR 0x0010
+#define MCI_CMDR 0x0014
+#define MCI_BLKR 0x0018
+#define MCI_RSPR 0x0020
+#define MCI_RSPR1 0x0024
+#define MCI_RSPR2 0x0028
+#define MCI_RSPR3 0x002c
+#define MCI_RDR 0x0030
+#define MCI_TDR 0x0034
+#define MCI_SR 0x0040
+#define MCI_IER 0x0044
+#define MCI_IDR 0x0048
+#define MCI_IMR 0x004c
+
+/* Bitfields in CR */
+#define MCI_MCIEN_OFFSET 0
+#define MCI_MCIEN_SIZE 1
+#define MCI_MCIDIS_OFFSET 1
+#define MCI_MCIDIS_SIZE 1
+#define MCI_PWSEN_OFFSET 2
+#define MCI_PWSEN_SIZE 1
+#define MCI_PWSDIS_OFFSET 3
+#define MCI_PWSDIS_SIZE 1
+#define MCI_SWRST_OFFSET 7
+#define MCI_SWRST_SIZE 1
+
+/* Bitfields in MR */
+#define MCI_CLKDIV_OFFSET 0
+#define MCI_CLKDIV_SIZE 8
+#define MCI_PWSDIV_OFFSET 8
+#define MCI_PWSDIV_SIZE 3
+#define MCI_RDPROOF_OFFSET 11
+#define MCI_RDPROOF_SIZE 1
+#define MCI_WRPROOF_OFFSET 12
+#define MCI_WRPROOF_SIZE 1
+#define MCI_PDCFBYTE_OFFSET 13
+#define MCI_PDCFBYTE_SIZE 1
+#define MCI_DMAPADV_OFFSET 14
+#define MCI_DMAPADV_SIZE 1
+#define MCI_BLKLEN_OFFSET 16
+#define MCI_BLKLEN_SIZE 16
+
+/* Bitfields in DTOR */
+#define MCI_DTOCYC_OFFSET 0
+#define MCI_DTOCYC_SIZE 4
+#define MCI_DTOMUL_OFFSET 4
+#define MCI_DTOMUL_SIZE 3
+
+/* Bitfields in SDCR */
+#define MCI_SDCSEL_OFFSET 0
+#define MCI_SDCSEL_SIZE 4
+#define MCI_SDCBUS_OFFSET 7
+#define MCI_SDCBUS_SIZE 1
+
+/* Bitfields in ARGR */
+#define MCI_ARG_OFFSET 0
+#define MCI_ARG_SIZE 32
+
+/* Bitfields in CMDR */
+#define MCI_CMDNB_OFFSET 0
+#define MCI_CMDNB_SIZE 6
+#define MCI_RSPTYP_OFFSET 6
+#define MCI_RSPTYP_SIZE 2
+#define MCI_SPCMD_OFFSET 8
+#define MCI_SPCMD_SIZE 3
+#define MCI_OPDCMD_OFFSET 11
+#define MCI_OPDCMD_SIZE 1
+#define MCI_MAXLAT_OFFSET 12
+#define MCI_MAXLAT_SIZE 1
+#define MCI_TRCMD_OFFSET 16
+#define MCI_TRCMD_SIZE 2
+#define MCI_TRDIR_OFFSET 18
+#define MCI_TRDIR_SIZE 1
+#define MCI_TRTYP_OFFSET 19
+#define MCI_TRTYP_SIZE 2
+
+/* Bitfields in BLKR */
+#define MCI_BCNT_OFFSET 0
+#define MCI_BCNT_SIZE 16
+
+/* Bitfields in RSPRn */
+#define MCI_RSP_OFFSET 0
+#define MCI_RSP_SIZE 32
+
+/* Bitfields in SR/IER/IDR/IMR */
+#define MCI_CMDRDY_OFFSET 0
+#define MCI_CMDRDY_SIZE 1
+#define MCI_RXRDY_OFFSET 1
+#define MCI_RXRDY_SIZE 1
+#define MCI_TXRDY_OFFSET 2
+#define MCI_TXRDY_SIZE 1
+#define MCI_BLKE_OFFSET 3
+#define MCI_BLKE_SIZE 1
+#define MCI_DTIP_OFFSET 4
+#define MCI_DTIP_SIZE 1
+#define MCI_NOTBUSY_OFFSET 5
+#define MCI_NOTBUSY_SIZE 1
+#define MCI_ENDRX_OFFSET 6
+#define MCI_ENDRX_SIZE 1
+#define MCI_ENDTX_OFFSET 7
+#define MCI_ENDTX_SIZE 1
+#define MCI_RXBUFF_OFFSET 14
+#define MCI_RXBUFF_SIZE 1
+#define MCI_TXBUFE_OFFSET 15
+#define MCI_TXBUFE_SIZE 1
+#define MCI_RINDE_OFFSET 16
+#define MCI_RINDE_SIZE 1
+#define MCI_RDIRE_OFFSET 17
+#define MCI_RDIRE_SIZE 1
+#define MCI_RCRCE_OFFSET 18
+#define MCI_RCRCE_SIZE 1
+#define MCI_RENDE_OFFSET 19
+#define MCI_RENDE_SIZE 1
+#define MCI_RTOE_OFFSET 20
+#define MCI_RTOE_SIZE 1
+#define MCI_DCRCE_OFFSET 21
+#define MCI_DCRCE_SIZE 1
+#define MCI_DTOE_OFFSET 22
+#define MCI_DTOE_SIZE 1
+#define MCI_OVRE_OFFSET 30
+#define MCI_OVRE_SIZE 1
+#define MCI_UNRE_OFFSET 31
+#define MCI_UNRE_SIZE 1
+
+/* Constants for DTOMUL */
+#define MCI_DTOMUL_1_CYCLE 0
+#define MCI_DTOMUL_16_CYCLES 1
+#define MCI_DTOMUL_128_CYCLES 2
+#define MCI_DTOMUL_256_CYCLES 3
+#define MCI_DTOMUL_1024_CYCLES 4
+#define MCI_DTOMUL_4096_CYCLES 5
+#define MCI_DTOMUL_65536_CYCLES 6
+#define MCI_DTOMUL_1048576_CYCLES 7
+
+/* Constants for RSPTYP */
+#define MCI_RSPTYP_NO_RESP 0
+#define MCI_RSPTYP_48_BIT 1
+#define MCI_RSPTYP_136_BIT 2
+
+/* Constants for SPCMD */
+#define MCI_SPCMD_NO_SPEC_CMD 0
+#define MCI_SPCMD_INIT_CMD 1
+#define MCI_SPCMD_SYNC_CMD 2
+#define MCI_SPCMD_INT_CMD 4
+#define MCI_SPCMD_INT_RESP 5
+
+/* Constants for TRCMD */
+#define MCI_TRCMD_NO_TRANS 0
+#define MCI_TRCMD_START_TRANS 1
+#define MCI_TRCMD_STOP_TRANS 2
+
+/* Constants for TRTYP */
+#define MCI_TRTYP_BLOCK 0
+#define MCI_TRTYP_MULTI_BLOCK 1
+#define MCI_TRTYP_STREAM 2
+
+/* Bit manipulation macros */
+#define MCI_BIT(name) \
+ (1 << MCI_##name##_OFFSET)
+#define MCI_BF(name,value) \
+ (((value) & ((1 << MCI_##name##_SIZE) - 1)) \
+ << MCI_##name##_OFFSET)
+#define MCI_BFEXT(name,value) \
+ (((value) >> MCI_##name##_OFFSET) \
+ & ((1 << MCI_##name##_SIZE) - 1))
+#define MCI_BFINS(name,value,old) \
+ (((old) & ~(((1 << MCI_##name##_SIZE) - 1) \
+ << MCI_##name##_OFFSET)) \
+ | MCI_BF(name,value))
+
+/* Register access macros */
+#define mci_readl(port,reg) \
+ __raw_readl((port)->regs + MCI_##reg)
+#define mci_writel(port,reg,value) \
+ __raw_writel((value), (port)->regs + MCI_##reg)
+
+#endif /* __DRIVERS_MMC_ATMEL_MCI_H__ */
diff --git a/drivers/mmc/host/atmel-mci.c b/drivers/mmc/host/atmel-mci.c
new file mode 100644
index 0000000..429bea8
--- /dev/null
+++ b/drivers/mmc/host/atmel-mci.c
@@ -0,0 +1,1428 @@
+/*
+ * Atmel MultiMedia Card Interface driver
+ *
+ * Copyright (C) 2004-2008 Atmel Corporation
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+#include <linux/blkdev.h>
+#include <linux/clk.h>
+#include <linux/device.h>
+#include <linux/dmaengine.h>
+#include <linux/dma-mapping.h>
+#include <linux/init.h>
+#include <linux/interrupt.h>
+#include <linux/ioport.h>
+#include <linux/module.h>
+#include <linux/platform_device.h>
+
+#include <linux/mmc/host.h>
+
+#include <asm/atmel-mci.h>
+#include <asm/io.h>
+#include <asm/unaligned.h>
+
+#include <asm/arch/board.h>
+#include <asm/arch/gpio.h>
+
+#include "atmel-mci-regs.h"
+
+#define ATMCI_DATA_ERROR_FLAGS (MCI_BIT(DCRCE) | MCI_BIT(DTOE) | \
+ MCI_BIT(OVRE) | MCI_BIT(UNRE))
+
+#define ATMCI_DMA_THRESHOLD 16
+
+enum {
+ EVENT_CMD_COMPLETE = 0,
+ EVENT_DATA_ERROR,
+ EVENT_DATA_COMPLETE,
+ EVENT_STOP_SENT,
+ EVENT_STOP_COMPLETE,
+ EVENT_DMA_COMPLETE,
+};
+
+struct atmel_mci_dma {
+ struct dma_client client;
+ struct dma_chan *chan;
+ struct dma_async_tx_descriptor *data_desc;
+};
+
+struct atmel_mci {
+ struct mmc_host *mmc;
+ void __iomem *regs;
+
+ struct scatterlist *sg;
+ unsigned int pio_offset;
+
+ struct mmc_request *mrq;
+ struct mmc_command *cmd;
+ struct mmc_data *data;
+
+ struct atmel_mci_dma dma;
+
+ /* DMA channel being used for the current data transfer */
+ struct dma_chan *data_chan;
+
+ u32 cmd_status;
+ u32 data_status;
+ u32 stop_status;
+ u32 stop_cmdr;
+
+ u32 mode_reg;
+ u32 sdc_reg;
+
+ struct tasklet_struct tasklet;
+ unsigned long pending_events;
+ unsigned long completed_events;
+
+ int present;
+ int detect_pin;
+ int wp_pin;
+
+ /* For detect pin debouncing */
+ struct timer_list detect_timer;
+
+ unsigned long bus_hz;
+ unsigned long mapbase;
+ struct clk *mck;
+ struct platform_device *pdev;
+
+#ifdef CONFIG_MMC_DEBUG_FS
+ struct dentry *debugfs_regs;
+ struct dentry *debugfs_req;
+ struct dentry *debugfs_pending_events;
+ struct dentry *debugfs_completed_events;
+#endif
+};
+
+static inline struct atmel_mci *
+dma_client_to_atmel_mci(struct dma_client *client)
+{
+ return container_of(client, struct atmel_mci, dma.client);
+}
+
+#define atmci_is_completed(host, event) \
+ test_bit(event, &host->completed_events)
+#define atmci_test_and_clear_pending(host, event) \
+ test_and_clear_bit(event, &host->pending_events)
+#define atmci_test_and_set_completed(host, event) \
+ test_and_set_bit(event, &host->completed_events)
+#define atmci_set_completed(host, event) \
+ set_bit(event, &host->completed_events)
+#define atmci_set_pending(host, event) \
+ set_bit(event, &host->pending_events)
+#define atmci_clear_pending(host, event) \
+ clear_bit(event, &host->pending_events)
+
+
+#ifdef CONFIG_MMC_DEBUG_FS
+#include <linux/debugfs.h>
+
+#define DBG_REQ_BUF_SIZE (4096U - (unsigned int)sizeof(unsigned int))
+
+struct req_dbg_data {
+ unsigned int nbytes;
+ char str[DBG_REQ_BUF_SIZE];
+};
+
+static int req_dbg_open(struct inode *inode, struct file *file)
+{
+ struct atmel_mci *host;
+ struct mmc_request *mrq;
+ struct mmc_command *cmd;
+ struct mmc_command *stop;
+ struct mmc_data *data;
+ struct req_dbg_data *priv;
+ char *str;
+ unsigned int n = 0;
+
+ priv = kzalloc(DBG_REQ_BUF_SIZE, GFP_KERNEL);
+ if (!priv)
+ return -ENOMEM;
+ str = priv->str;
+
+ mutex_lock(&inode->i_mutex);
+ host = inode->i_private;
+
+ spin_lock_irq(&host->mmc->lock);
+ mrq = host->mrq;
+ if (mrq) {
+ cmd = mrq->cmd;
+ data = mrq->data;
+ stop = mrq->stop;
+ n = snprintf(str, DBG_REQ_BUF_SIZE,
+ "CMD%u(0x%x) %x %x %x %x %x (err %d)\n",
+ cmd->opcode, cmd->arg, cmd->flags,
+ cmd->resp[0], cmd->resp[1], cmd->resp[2],
+ cmd->resp[3], cmd->error);
+ if (n < DBG_REQ_BUF_SIZE && data)
+ n += snprintf(str + n, DBG_REQ_BUF_SIZE - n,
+ "DATA %u * %u (%u) %x (err %d)\n",
+ data->blocks, data->blksz,
+ data->bytes_xfered, data->flags,
+ data->error);
+ if (n < DBG_REQ_BUF_SIZE && stop)
+ n += snprintf(str + n, DBG_REQ_BUF_SIZE - n,
+ "CMD%u(0x%x) %x %x %x %x %x (err %d)\n",
+ stop->opcode, stop->arg, stop->flags,
+ stop->resp[0], stop->resp[1],
+ stop->resp[2], stop->resp[3],
+ stop->error);
+ }
+ spin_unlock_irq(&host->mmc->lock);
+ mutex_unlock(&inode->i_mutex);
+
+ priv->nbytes = min(n, DBG_REQ_BUF_SIZE);
+ file->private_data = priv;
+
+ return 0;
+}
+
+static ssize_t req_dbg_read(struct file *file, char __user *buf,
+ size_t nbytes, loff_t *ppos)
+{
+ struct req_dbg_data *priv = file->private_data;
+
+ return simple_read_from_buffer(buf, nbytes, ppos,
+ priv->str, priv->nbytes);
+}
+
+static int req_dbg_release(struct inode *inode, struct file *file)
+{
+ kfree(file->private_data);
+ return 0;
+}
+
+static const struct file_operations req_dbg_fops = {
+ .owner = THIS_MODULE,
+ .open = req_dbg_open,
+ .llseek = no_llseek,
+ .read = req_dbg_read,
+ .release = req_dbg_release,
+};
+
+static int regs_dbg_open(struct inode *inode, struct file *file)
+{
+ struct atmel_mci *host;
+ unsigned int i;
+ u32 *data;
+ int ret;
+
+ mutex_lock(&inode->i_mutex);
+ host = inode->i_private;
+ data = kmalloc(inode->i_size, GFP_KERNEL);
+ if (!data) {
+ ret = -ENOMEM;
+ goto out;
+ }
+
+ spin_lock_irq(&host->mmc->lock);
+ for (i = 0; i < inode->i_size / 4; i++)
+ data[i] = __raw_readl(host->regs + i * 4);
+ spin_unlock_irq(&host->mmc->lock);
+
+ file->private_data = data;
+ ret = 0;
+
+out:
+ mutex_unlock(&inode->i_mutex);
+
+ return ret;
+}
+
+static ssize_t regs_dbg_read(struct file *file, char __user *buf,
+ size_t nbytes, loff_t *ppos)
+{
+ struct inode *inode = file->f_dentry->d_inode;
+ int ret;
+
+ mutex_lock(&inode->i_mutex);
+ ret = simple_read_from_buffer(buf, nbytes, ppos,
+ file->private_data,
+ file->f_dentry->d_inode->i_size);
+ mutex_unlock(&inode->i_mutex);
+
+ return ret;
+}
+
+static int regs_dbg_release(struct inode *inode, struct file *file)
+{
+ kfree(file->private_data);
+ return 0;
+}
+
+static const struct file_operations regs_dbg_fops = {
+ .owner = THIS_MODULE,
+ .open = regs_dbg_open,
+ .llseek = generic_file_llseek,
+ .read = regs_dbg_read,
+ .release = regs_dbg_release,
+};
+
+static void atmci_init_debugfs(struct atmel_mci *host)
+{
+ struct mmc_host *mmc;
+ struct dentry *root;
+ struct dentry *regs;
+ struct resource *res;
+
+ mmc = host->mmc;
+ root = mmc->debugfs_root;
+ if (!root)
+ return;
+
+ regs = debugfs_create_file("regs", 0400, root, host, &regs_dbg_fops);
+ if (!regs)
+ goto err_regs;
+
+ res = platform_get_resource(host->pdev, IORESOURCE_MEM, 0);
+ regs->d_inode->i_size = res->end - res->start + 1;
+ host->debugfs_regs = regs;
+
+ host->debugfs_req = debugfs_create_file("req", 0400, root,
+ host, &req_dbg_fops);
+ if (!host->debugfs_req)
+ goto err_req;
+
+ host->debugfs_pending_events
+ = debugfs_create_x32("pending_events", 0400, root,
+ (u32 *)&host->pending_events);
+ if (!host->debugfs_pending_events)
+ goto err_pending_events;
+
+ host->debugfs_completed_events
+ = debugfs_create_x32("completed_events", 0400, root,
+ (u32 *)&host->completed_events);
+ if (!host->debugfs_completed_events)
+ goto err_completed_events;
+
+ return;
+
+err_completed_events:
+ debugfs_remove(host->debugfs_pending_events);
+ host->debugfs_pending_events = NULL;
+err_pending_events:
+ debugfs_remove(host->debugfs_regs);
+ host->debugfs_regs = NULL;
+err_regs:
+ debugfs_remove(host->debugfs_req);
+ host->debugfs_req = NULL;
+err_req:
+ dev_err(&host->pdev->dev,
+ "failed to initialize debugfs for controller\n");
+}
+
+static void atmci_cleanup_debugfs(struct atmel_mci *host)
+{
+ debugfs_remove(host->debugfs_completed_events);
+ debugfs_remove(host->debugfs_pending_events);
+ debugfs_remove(host->debugfs_regs);
+ debugfs_remove(host->debugfs_req);
+}
+#else
+static inline void atmci_init_debugfs(struct atmel_mci *host)
+{
+
+}
+
+static inline void atmci_cleanup_debugfs(struct atmel_mci *host)
+{
+
+}
+#endif /* CONFIG_MMC_DEBUG_FS */
+
+static void atmci_enable(struct atmel_mci *host)
+{
+ clk_enable(host->mck);
+ mci_writel(host, CR, MCI_BIT(MCIEN));
+ mci_writel(host, MR, host->mode_reg);
+ mci_writel(host, SDCR, host->sdc_reg);
+}
+
+static void atmci_disable(struct atmel_mci *host)
+{
+ mci_writel(host, CR, MCI_BIT(SWRST));
+
+ /* Stall until write is complete, then disable the bus clock */
+ mci_readl(host, SR);
+ clk_disable(host->mck);
+}
+
+static inline unsigned int ns_to_clocks(struct atmel_mci *host,
+ unsigned int ns)
+{
+ return (ns * (host->bus_hz / 1000000) + 999) / 1000;
+}
+
+static void atmci_set_timeout(struct atmel_mci *host,
+ struct mmc_data *data)
+{
+ static unsigned dtomul_to_shift[] = {
+ 0, 4, 7, 8, 10, 12, 16, 20
+ };
+ unsigned timeout;
+ unsigned dtocyc;
+ unsigned dtomul;
+
+ timeout = ns_to_clocks(host, data->timeout_ns) + data->timeout_clks;
+
+ for (dtomul = 0; dtomul < 8; dtomul++) {
+ unsigned shift = dtomul_to_shift[dtomul];
+ dtocyc = (timeout + (1 << shift) - 1) >> shift;
+ if (dtocyc < 15)
+ break;
+ }
+
+ if (dtomul >= 8) {
+ dtomul = 7;
+ dtocyc = 15;
+ }
+
+ dev_vdbg(&host->mmc->class_dev, "setting timeout to %u cycles\n",
+ dtocyc << dtomul_to_shift[dtomul]);
+ mci_writel(host, DTOR, (MCI_BF(DTOMUL, dtomul)
+ | MCI_BF(DTOCYC, dtocyc)));
+}
+
+/*
+ * Return mask with command flags to be enabled for this command.
+ */
+static u32 atmci_prepare_command(struct mmc_host *mmc,
+ struct mmc_command *cmd)
+{
+ struct mmc_data *data;
+ u32 cmdr;
+
+ cmd->error = -EINPROGRESS;
+
+ cmdr = MCI_BF(CMDNB, cmd->opcode);
+
+ if (cmd->flags & MMC_RSP_PRESENT) {
+ if (cmd->flags & MMC_RSP_136)
+ cmdr |= MCI_BF(RSPTYP, MCI_RSPTYP_136_BIT);
+ else
+ cmdr |= MCI_BF(RSPTYP, MCI_RSPTYP_48_BIT);
+ }
+
+ /*
+ * This should really be MAXLAT_5 for CMD2 and ACMD41, but
+ * it's too difficult to determine whether this is an ACMD or
+ * not. Better make it 64.
+ */
+ cmdr |= MCI_BIT(MAXLAT);
+
+ if (mmc->ios.bus_mode == MMC_BUSMODE_OPENDRAIN)
+ cmdr |= MCI_BIT(OPDCMD);
+
+ data = cmd->data;
+ if (data) {
+ cmdr |= MCI_BF(TRCMD, MCI_TRCMD_START_TRANS);
+ if (data->flags & MMC_DATA_STREAM)
+ cmdr |= MCI_BF(TRTYP, MCI_TRTYP_STREAM);
+ else if (data->blocks > 1)
+ cmdr |= MCI_BF(TRTYP, MCI_TRTYP_MULTI_BLOCK);
+ else
+ cmdr |= MCI_BF(TRTYP, MCI_TRTYP_BLOCK);
+
+ if (data->flags & MMC_DATA_READ)
+ cmdr |= MCI_BIT(TRDIR);
+ }
+
+ return cmdr;
+}
+
+static void atmci_start_command(struct atmel_mci *host,
+ struct mmc_command *cmd,
+ u32 cmd_flags)
+{
+ /* Must read host->cmd after testing event flags */
+ smp_rmb();
+ WARN_ON(host->cmd);
+ host->cmd = cmd;
+
+ dev_vdbg(&host->mmc->class_dev,
+ "start command: ARGR=0x%08x CMDR=0x%08x\n",
+ cmd->arg, cmd_flags);
+
+ mci_writel(host, ARGR, cmd->arg);
+ mci_writel(host, CMDR, cmd_flags);
+}
+
+static void send_stop_cmd(struct mmc_host *mmc, struct mmc_data *data)
+{
+ struct atmel_mci *host = mmc_priv(mmc);
+
+ atmci_start_command(host, data->stop, host->stop_cmdr);
+ mci_writel(host, IER, MCI_BIT(CMDRDY));
+}
+
+static void atmci_request_end(struct mmc_host *mmc, struct mmc_request *mrq)
+{
+ struct atmel_mci *host = mmc_priv(mmc);
+
+ WARN_ON(host->cmd || host->data);
+ host->mrq = NULL;
+
+ atmci_disable(host);
+
+ mmc_request_done(mmc, mrq);
+}
+
+static void atmci_dma_cleanup(struct atmel_mci *host)
+{
+ struct mmc_data *data = host->data;
+
+ dma_unmap_sg(&host->pdev->dev, data->sg, data->sg_len,
+ ((data->flags & MMC_DATA_WRITE)
+ ? DMA_TO_DEVICE : DMA_FROM_DEVICE));
+}
+
+static void atmci_stop_dma(struct atmel_mci *host)
+{
+ struct dma_chan *chan = host->data_chan;
+
+ if (chan) {
+ chan->device->device_terminate_all(chan);
+ atmci_dma_cleanup(host);
+ }
+}
+
+/* This function is called by the DMA driver from tasklet context. */
+static void atmci_dma_complete(void *arg)
+{
+ struct atmel_mci *host = arg;
+ struct mmc_data *data = host->data;
+
+ dev_vdbg(&host->mmc->class_dev, "DMA complete\n");
+
+ /*
+ * If the card was removed, data will be NULL. No point trying
+ * to send the stop command or waiting for NBUSY in this case.
+ */
+ if (data) {
+ /* A short DMA transfer may complete before the command */
+ atmci_set_completed(host, EVENT_DMA_COMPLETE);
+ smp_mb();
+ if (atmci_is_completed(host, EVENT_CMD_COMPLETE)
+ && data->stop
+ && !atmci_test_and_set_completed(host,
+ EVENT_STOP_SENT))
+ send_stop_cmd(host->mmc, data);
+ }
+
+ atmci_dma_cleanup(host);
+
+ /*
+ * Regardless of what the documentation says, we have to wait
+ * for NOTBUSY even after block read operations.
+ *
+ * When the DMA transfer is complete, the controller may still
+ * be reading the CRC from the card, i.e. the data transfer is
+ * still in progress and we haven't seen all the potential
+ * error bits yet.
+ *
+ * The interrupt handler will schedule a different tasklet to
+ * finish things up when the data transfer is completely done.
+ *
+ * We may not complete the mmc request here anyway because the
+ * mmc layer may call back and cause us to violate the "don't
+ * submit new operations from the completion callback" rule of
+ * the dma engine framework.
+ */
+ if (data)
+ mci_writel(host, IER, MCI_BIT(NOTBUSY));
+}
+
+static int
+atmci_submit_data_dma(struct atmel_mci *host, struct mmc_data *data)
+{
+ struct dma_chan *chan;
+ struct dma_async_tx_descriptor *desc;
+ struct scatterlist *sg;
+ unsigned long flags;
+ unsigned int i;
+ enum dma_data_direction direction;
+
+ /*
+ * We don't do DMA on "complex" transfers, i.e. with
+ * non-word-aligned buffers or lengths. Also, we don't bother
+ * with all the DMA setup overhead for short transfers.
+ */
+ if (data->blocks * data->blksz < ATMCI_DMA_THRESHOLD)
+ return -EINVAL;
+ if (data->blksz & 3)
+ return -EINVAL;
+
+ for_each_sg(data->sg, sg, data->sg_len, i) {
+ if (sg->offset & 3 || sg->length & 3)
+ return -EINVAL;
+ }
+
+ /* If we don't have a channel, we can't do DMA */
+ spin_lock_irqsave(&host->mmc->lock, flags);
+ chan = host->dma.chan;
+ if (chan) {
+ dma_chan_get(chan);
+ host->data_chan = chan;
+ }
+ spin_unlock_irqrestore(&host->mmc->lock, flags);
+
+ if (!chan)
+ return -ENODEV;
+
+ if (data->flags & MMC_DATA_READ)
+ direction = DMA_FROM_DEVICE;
+ else
+ direction = DMA_TO_DEVICE;
+
+ desc = chan->device->device_prep_slave_sg(chan,
+ data->sg, data->sg_len, direction,
+ DMA_PREP_INTERRUPT | DMA_CTRL_ACK);
+ if (!desc)
+ return -ENOMEM;
+
+ host->dma.data_desc = desc;
+ desc->callback = atmci_dma_complete;
+ desc->callback_param = host;
+ desc->tx_submit(desc);
+
+ /* Go! */
+ chan->device->device_issue_pending(chan);
+
+ return 0;
+}
+
+/*
+ * Returns a mask of interrupt flags to be enabled after the whole
+ * request has been prepared.
+ */
+static u32 atmci_submit_data(struct mmc_host *mmc, struct mmc_data *data)
+{
+ struct atmel_mci *host = mmc_priv(mmc);
+ u32 iflags;
+
+ data->error = -EINPROGRESS;
+
+ WARN_ON(host->data);
+ host->sg = NULL;
+ host->data = data;
+
+ mci_writel(host, BLKR, (MCI_BF(BCNT, data->blocks)
+ | MCI_BF(BLKLEN, data->blksz)));
+ dev_vdbg(&mmc->class_dev, "BLKR=0x%08x\n",
+ (MCI_BF(BCNT, data->blocks)
+ | MCI_BF(BLKLEN, data->blksz)));
+
+ iflags = ATMCI_DATA_ERROR_FLAGS;
+ if (atmci_submit_data_dma(host, data)) {
+ host->data_chan = NULL;
+ host->sg = data->sg;
+ host->pio_offset = 0;
+ if (data->flags & MMC_DATA_READ)
+ iflags |= MCI_BIT(RXRDY);
+ else
+ iflags |= MCI_BIT(TXRDY);
+ }
+
+ return iflags;
+}
+
+static void atmci_request(struct mmc_host *mmc, struct mmc_request *mrq)
+{
+ struct atmel_mci *host = mmc_priv(mmc);
+ struct mmc_data *data;
+ struct mmc_command *cmd;
+ u32 iflags;
+ u32 cmdflags = 0;
+
+ iflags = mci_readl(host, IMR);
+ if (iflags)
+ dev_warn(&mmc->class_dev, "WARNING: IMR=0x%08x\n",
+ mci_readl(host, IMR));
+
+ WARN_ON(host->mrq != NULL);
+
+ /*
+ * We may "know" the card is gone even though there's still an
+ * electrical connection. If so, we really need to communicate
+ * this to the MMC core since there won't be any more
+ * interrupts as the card is completely removed. Otherwise,
+ * the MMC core might believe the card is still there even
+ * though the card was just removed very slowly.
+ */
+ if (!host->present) {
+ mrq->cmd->error = -ENOMEDIUM;
+ mmc_request_done(mmc, mrq);
+ return;
+ }
+
+ host->mrq = mrq;
+ host->pending_events = 0;
+ host->completed_events = 0;
+
+ atmci_enable(host);
+
+ /* We don't support multiple blocks of weird lengths. */
+ data = mrq->data;
+ if (data) {
+ if (data->blocks > 1 && data->blksz & 3)
+ goto fail;
+ atmci_set_timeout(host, data);
+ }
+
+ iflags = MCI_BIT(CMDRDY);
+ cmd = mrq->cmd;
+ cmdflags = atmci_prepare_command(mmc, cmd);
+ atmci_start_command(host, cmd, cmdflags);
+
+ if (data)
+ iflags |= atmci_submit_data(mmc, data);
+
+ if (mrq->stop) {
+ host->stop_cmdr = atmci_prepare_command(mmc, mrq->stop);
+ host->stop_cmdr |= MCI_BF(TRCMD, MCI_TRCMD_STOP_TRANS);
+ if (!(data->flags & MMC_DATA_WRITE))
+ host->stop_cmdr |= MCI_BIT(TRDIR);
+ if (data->flags & MMC_DATA_STREAM)
+ host->stop_cmdr |= MCI_BF(TRTYP, MCI_TRTYP_STREAM);
+ else
+ host->stop_cmdr |= MCI_BF(TRTYP, MCI_TRTYP_MULTI_BLOCK);
+ }
+
+ /*
+ * We could have enabled interrupts earlier, but I suspect
+ * that would open up a nice can of interesting race
+ * conditions (e.g. command and data complete, but stop not
+ * prepared yet.)
+ */
+ mci_writel(host, IER, iflags);
+
+ return;
+
+fail:
+ atmci_disable(host);
+ host->mrq = NULL;
+ mrq->cmd->error = -EINVAL;
+ mmc_request_done(mmc, mrq);
+}
+
+static void atmci_set_ios(struct mmc_host *mmc, struct mmc_ios *ios)
+{
+ struct atmel_mci *host = mmc_priv(mmc);
+
+ if (ios->clock) {
+ u32 clkdiv;
+
+ /* Set clock rate */
+ clkdiv = DIV_ROUND_UP(host->bus_hz, 2 * ios->clock) - 1;
+ if (clkdiv > 255) {
+ dev_warn(&mmc->class_dev,
+ "clock %u too slow; using %lu\n",
+ ios->clock, host->bus_hz / (2 * 256));
+ clkdiv = 255;
+ }
+
+ host->mode_reg = MCI_BF(CLKDIV, clkdiv)
+ | MCI_BIT(WRPROOF)
+ | MCI_BIT(RDPROOF);
+ }
+
+ switch (ios->bus_width) {
+ case MMC_BUS_WIDTH_1:
+ host->sdc_reg = 0;
+ break;
+ case MMC_BUS_WIDTH_4:
+ host->sdc_reg = MCI_BIT(SDCBUS);
+ break;
+ }
+
+ switch (ios->power_mode) {
+ case MMC_POWER_ON:
+ /* Send init sequence (74 clock cycles) */
+ atmci_enable(host);
+ mci_writel(host, CMDR, MCI_BF(SPCMD, MCI_SPCMD_INIT_CMD));
+ while (!(mci_readl(host, SR) & MCI_BIT(CMDRDY)))
+ cpu_relax();
+ atmci_disable(host);
+ break;
+ default:
+ /*
+ * TODO: None of the currently available AVR32-based
+ * boards allow MMC power to be turned off. Implement
+ * power control when this can be tested properly.
+ */
+ break;
+ }
+}
+
+static int atmci_get_ro(struct mmc_host *mmc)
+{
+ int read_only = 0;
+ struct atmel_mci *host = mmc_priv(mmc);
+
+ if (host->wp_pin >= 0) {
+ read_only = gpio_get_value(host->wp_pin);
+ dev_dbg(&mmc->class_dev, "card is %s\n",
+ read_only ? "read-only" : "read-write");
+ } else {
+ dev_dbg(&mmc->class_dev,
+ "no pin for checking read-only switch."
+ " Assuming write-enable.\n");
+ }
+
+ return read_only;
+}
+
+static struct mmc_host_ops atmci_ops = {
+ .request = atmci_request,
+ .set_ios = atmci_set_ios,
+ .get_ro = atmci_get_ro,
+};
+
+static void atmci_command_complete(struct atmel_mci *host,
+ struct mmc_command *cmd, u32 status)
+{
+ /* Read the response from the card (up to 16 bytes) */
+ cmd->resp[0] = mci_readl(host, RSPR);
+ cmd->resp[1] = mci_readl(host, RSPR);
+ cmd->resp[2] = mci_readl(host, RSPR);
+ cmd->resp[3] = mci_readl(host, RSPR);
+
+ if (status & MCI_BIT(RTOE))
+ cmd->error = -ETIMEDOUT;
+ else if ((cmd->flags & MMC_RSP_CRC) && (status & MCI_BIT(RCRCE)))
+ cmd->error = -EILSEQ;
+ else if (status & (MCI_BIT(RINDE) | MCI_BIT(RDIRE) | MCI_BIT(RENDE)))
+ cmd->error = -EIO;
+ else
+ cmd->error = 0;
+
+ if (cmd->error) {
+ dev_dbg(&host->mmc->class_dev,
+ "command error: status=0x%08x\n", status);
+
+ if (cmd->data) {
+ host->data = NULL;
+ atmci_stop_dma(host);
+ mci_writel(host, IDR, MCI_BIT(NOTBUSY)
+ | ATMCI_DATA_ERROR_FLAGS);
+ }
+ }
+}
+
+static void atmci_detect_change(unsigned long data)
+{
+ struct atmel_mci *host = (struct atmel_mci *)data;
+ struct mmc_request *mrq = host->mrq;
+ int present;
+
+ /*
+ * atmci_remove() sets detect_pin to -1 before freeing the
+ * interrupt. We must not re-enable the interrupt if it has
+ * been freed.
+ */
+ smp_rmb();
+ if (host->detect_pin < 0)
+ return;
+
+ enable_irq(gpio_to_irq(host->detect_pin));
+ present = !gpio_get_value(host->detect_pin);
+
+ dev_vdbg(&host->pdev->dev, "detect change: %d (was %d)\n",
+ present, host->present);
+
+ if (present != host->present) {
+ dev_dbg(&host->mmc->class_dev, "card %s\n",
+ present ? "inserted" : "removed");
+ host->present = present;
+
+ /* Reset controller if card is gone */
+ if (!present) {
+ mci_writel(host, CR, MCI_BIT(SWRST));
+ mci_writel(host, IDR, ~0UL);
+ mci_writel(host, CR, MCI_BIT(MCIEN));
+ }
+
+ /* Clean up queue if present */
+ if (mrq) {
+ /*
+ * Reset controller to terminate any ongoing
+ * commands or data transfers.
+ */
+ mci_writel(host, CR, MCI_BIT(SWRST));
+
+ if (!atmci_is_completed(host, EVENT_CMD_COMPLETE))
+ mrq->cmd->error = -ENOMEDIUM;
+
+ if (mrq->data && !atmci_is_completed(host,
+ EVENT_DATA_COMPLETE)) {
+ host->data = NULL;
+ mrq->data->error = -ENOMEDIUM;
+ atmci_stop_dma(host);
+ }
+ if (mrq->stop && !atmci_is_completed(host,
+ EVENT_STOP_COMPLETE))
+ mrq->stop->error = -ENOMEDIUM;
+
+ host->cmd = NULL;
+ atmci_request_end(host->mmc, mrq);
+ }
+
+ mmc_detect_change(host->mmc, 0);
+ }
+}
+
+static void atmci_tasklet_func(unsigned long priv)
+{
+ struct mmc_host *mmc = (struct mmc_host *)priv;
+ struct atmel_mci *host = mmc_priv(mmc);
+ struct mmc_request *mrq = host->mrq;
+ struct mmc_data *data = host->data;
+
+ dev_vdbg(&mmc->class_dev,
+ "tasklet: pending/completed/mask %lx/%lx/%x\n",
+ host->pending_events, host->completed_events,
+ mci_readl(host, IMR));
+
+ if (atmci_test_and_clear_pending(host, EVENT_CMD_COMPLETE)) {
+ /*
+ * host->cmd must be set to NULL before the interrupt
+ * handler sees EVENT_CMD_COMPLETE
+ */
+ host->cmd = NULL;
+ smp_wmb();
+ atmci_set_completed(host, EVENT_CMD_COMPLETE);
+ atmci_command_complete(host, mrq->cmd, host->cmd_status);
+
+ if (!mrq->cmd->error && mrq->stop
+ && atmci_is_completed(host, EVENT_DMA_COMPLETE)
+ && !atmci_test_and_set_completed(host,
+ EVENT_STOP_SENT))
+ send_stop_cmd(host->mmc, mrq->data);
+ }
+ if (atmci_test_and_clear_pending(host, EVENT_STOP_COMPLETE)) {
+ /*
+ * host->cmd must be set to NULL before the interrupt
+ * handler sees EVENT_STOP_COMPLETE
+ */
+ host->cmd = NULL;
+ smp_wmb();
+ atmci_set_completed(host, EVENT_STOP_COMPLETE);
+ atmci_command_complete(host, mrq->stop, host->stop_status);
+ }
+ if (atmci_test_and_clear_pending(host, EVENT_DATA_ERROR)) {
+ u32 status = host->data_status;
+
+ dev_vdbg(&mmc->class_dev, "data error: status=%08x\n", status);
+
+ atmci_set_completed(host, EVENT_DATA_ERROR);
+ atmci_set_completed(host, EVENT_DATA_COMPLETE);
+ atmci_stop_dma(host);
+
+ if (status & MCI_BIT(DTOE)) {
+ dev_dbg(&mmc->class_dev,
+ "data timeout error\n");
+ data->error = -ETIMEDOUT;
+ } else if (status & MCI_BIT(DCRCE)) {
+ dev_dbg(&mmc->class_dev, "data CRC error\n");
+ data->error = -EILSEQ;
+ } else {
+ dev_dbg(&mmc->class_dev,
+ "data FIFO error (status=%08x)\n",
+ status);
+ data->error = -EIO;
+ }
+
+ if (host->present && data->stop
+ && atmci_is_completed(host, EVENT_CMD_COMPLETE)
+ && !atmci_test_and_set_completed(
+ host, EVENT_STOP_SENT))
+ send_stop_cmd(host->mmc, data);
+
+ host->data = NULL;
+ }
+ if (atmci_test_and_clear_pending(host, EVENT_DATA_COMPLETE)) {
+ atmci_set_completed(host, EVENT_DATA_COMPLETE);
+
+ if (!atmci_is_completed(host, EVENT_DATA_ERROR)) {
+ data->bytes_xfered = data->blocks * data->blksz;
+ data->error = 0;
+ }
+
+ host->data = NULL;
+ }
+
+ if (host->mrq && !host->cmd && !host->data)
+ atmci_request_end(mmc, host->mrq);
+}
+
+static void atmci_read_data_pio(struct atmel_mci *host)
+{
+ struct scatterlist *sg = host->sg;
+ void *buf = sg_virt(sg);
+ unsigned int offset = host->pio_offset;
+ struct mmc_data *data = host->data;
+ u32 value;
+ u32 status;
+ unsigned int nbytes = 0;
+
+ do {
+ value = mci_readl(host, RDR);
+ if (likely(offset + 4 <= sg->length)) {
+ put_unaligned(value, (u32 *)(buf + offset));
+
+ offset += 4;
+ nbytes += 4;
+
+ if (offset == sg->length) {
+ host->sg = sg = sg_next(sg);
+ if (!sg)
+ goto done;
+
+ offset = 0;
+ buf = sg_virt(sg);
+ }
+ } else {
+ unsigned int remaining = sg->length - offset;
+ memcpy(buf + offset, &value, remaining);
+ nbytes += remaining;
+
+ flush_dcache_page(sg_page(sg));
+ host->sg = sg = sg_next(sg);
+ if (!sg)
+ goto done;
+
+ offset = 4 - remaining;
+ buf = sg_virt(sg);
+ memcpy(buf, (u8 *)&value + remaining, offset);
+ nbytes += offset;
+ }
+
+ status = mci_readl(host, SR);
+ if (status & ATMCI_DATA_ERROR_FLAGS) {
+ mci_writel(host, IDR, (MCI_BIT(NOTBUSY)
+ | MCI_BIT(RXRDY)
+ | ATMCI_DATA_ERROR_FLAGS));
+ host->data_status = status;
+ atmci_set_pending(host, EVENT_DATA_ERROR);
+ tasklet_schedule(&host->tasklet);
+ break;
+ }
+ } while (status & MCI_BIT(RXRDY));
+
+ host->pio_offset = offset;
+ data->bytes_xfered += nbytes;
+
+ return;
+
+done:
+ mci_writel(host, IDR, MCI_BIT(RXRDY));
+ mci_writel(host, IER, MCI_BIT(NOTBUSY));
+ data->bytes_xfered += nbytes;
+ atmci_set_completed(host, EVENT_DMA_COMPLETE);
+ if (data->stop && atmci_is_completed(host, EVENT_CMD_COMPLETE)
+ && !atmci_test_and_set_completed(host, EVENT_STOP_SENT))
+ send_stop_cmd(host->mmc, data);
+}
+
+static void atmci_write_data_pio(struct atmel_mci *host)
+{
+ struct scatterlist *sg = host->sg;
+ void *buf = sg_virt(sg);
+ unsigned int offset = host->pio_offset;
+ struct mmc_data *data = host->data;
+ u32 value;
+ u32 status;
+ unsigned int nbytes = 0;
+
+ do {
+ if (likely(offset + 4 <= sg->length)) {
+ value = get_unaligned((u32 *)(buf + offset));
+ mci_writel(host, TDR, value);
+
+ offset += 4;
+ nbytes += 4;
+ if (offset == sg->length) {
+ host->sg = sg = sg_next(sg);
+ if (!sg)
+ goto done;
+
+ offset = 0;
+ buf = sg_virt(sg);
+ }
+ } else {
+ unsigned int remaining = sg->length - offset;
+
+ value = 0;
+ memcpy(&value, buf + offset, remaining);
+ nbytes += remaining;
+
+ host->sg = sg = sg_next(sg);
+ if (!sg) {
+ mci_writel(host, TDR, value);
+ goto done;
+ }
+
+ offset = 4 - remaining;
+ buf = sg_virt(sg);
+ memcpy((u8 *)&value + remaining, buf, offset);
+ mci_writel(host, TDR, value);
+ nbytes += offset;
+ }
+
+ status = mci_readl(host, SR);
+ if (status & ATMCI_DATA_ERROR_FLAGS) {
+ mci_writel(host, IDR, (MCI_BIT(NOTBUSY)
+ | MCI_BIT(TXRDY)
+ | ATMCI_DATA_ERROR_FLAGS));
+ host->data_status = status;
+ atmci_set_pending(host, EVENT_DATA_ERROR);
+ tasklet_schedule(&host->tasklet);
+ break;
+ }
+ } while (status & MCI_BIT(TXRDY));
+
+ host->pio_offset = offset;
+ data->bytes_xfered += nbytes;
+
+ return;
+
+done:
+ mci_writel(host, IDR, MCI_BIT(TXRDY));
+ mci_writel(host, IER, MCI_BIT(NOTBUSY));
+ data->bytes_xfered += nbytes;
+ atmci_set_completed(host, EVENT_DMA_COMPLETE);
+ if (data->stop && atmci_is_completed(host, EVENT_CMD_COMPLETE)
+ && !atmci_test_and_set_completed(host, EVENT_STOP_SENT))
+ send_stop_cmd(host->mmc, data);
+}
+
+static void atmci_cmd_interrupt(struct mmc_host *mmc, u32 status)
+{
+ struct atmel_mci *host = mmc_priv(mmc);
+
+ mci_writel(host, IDR, MCI_BIT(CMDRDY));
+
+ if (atmci_is_completed(host, EVENT_STOP_SENT)) {
+ host->stop_status = status;
+ atmci_set_pending(host, EVENT_STOP_COMPLETE);
+ } else {
+ host->cmd_status = status;
+ atmci_set_pending(host, EVENT_CMD_COMPLETE);
+ }
+
+ tasklet_schedule(&host->tasklet);
+}
+
+static irqreturn_t atmci_interrupt(int irq, void *dev_id)
+{
+ struct mmc_host *mmc = dev_id;
+ struct atmel_mci *host = mmc_priv(mmc);
+ u32 status, mask, pending;
+ unsigned int pass_count = 0;
+
+ spin_lock(&mmc->lock);
+
+ do {
+ status = mci_readl(host, SR);
+ mask = mci_readl(host, IMR);
+ pending = status & mask;
+ if (!pending)
+ break;
+
+ if (pending & ATMCI_DATA_ERROR_FLAGS) {
+ mci_writel(host, IDR, ATMCI_DATA_ERROR_FLAGS
+ | MCI_BIT(RXRDY) | MCI_BIT(TXRDY));
+ pending &= mci_readl(host, IMR);
+ host->data_status = status;
+ atmci_set_pending(host, EVENT_DATA_ERROR);
+ tasklet_schedule(&host->tasklet);
+ }
+ if (pending & (MCI_BIT(NOTBUSY))) {
+ mci_writel(host, IDR, (MCI_BIT(NOTBUSY)
+ | ATMCI_DATA_ERROR_FLAGS));
+ atmci_set_pending(host, EVENT_DATA_COMPLETE);
+ tasklet_schedule(&host->tasklet);
+ }
+ if (pending & MCI_BIT(RXRDY))
+ atmci_read_data_pio(host);
+ if (pending & MCI_BIT(TXRDY))
+ atmci_write_data_pio(host);
+
+ if (pending & MCI_BIT(CMDRDY))
+ atmci_cmd_interrupt(mmc, status);
+ } while (pass_count++ < 5);
+
+ spin_unlock(&mmc->lock);
+
+ return pass_count ? IRQ_HANDLED : IRQ_NONE;
+}
+
+static irqreturn_t atmci_detect_interrupt(int irq, void *dev_id)
+{
+ struct mmc_host *mmc = dev_id;
+ struct atmel_mci *host = mmc_priv(mmc);
+
+ /*
+ * Disable interrupts until the pin has stabilized and check
+ * the state then. Use mod_timer() since we may be in the
+ * middle of the timer routine when this interrupt triggers.
+ */
+ disable_irq_nosync(irq);
+ mod_timer(&host->detect_timer, jiffies + msecs_to_jiffies(20));
+
+ return IRQ_HANDLED;
+}
+
+static enum dma_state_client atmci_dma_event(struct dma_client *client,
+ struct dma_chan *chan, enum dma_state state)
+{
+ struct atmel_mci *host;
+ enum dma_state_client ret = DMA_NAK;
+ unsigned long flags;
+
+ host = dma_client_to_atmel_mci(client);
+
+ switch (state) {
+ case DMA_RESOURCE_AVAILABLE:
+ spin_lock_irqsave(&host->mmc->lock, flags);
+ if (!host->dma.chan) {
+ host->dma.chan = chan;
+ ret = DMA_ACK;
+ }
+ spin_unlock_irqrestore(&host->mmc->lock, flags);
+
+ if (ret == DMA_ACK)
+ dev_info(&host->pdev->dev,
+ "Using %s for DMA transfers\n",
+ chan->dev.bus_id);
+ break;
+
+ case DMA_RESOURCE_REMOVED:
+ spin_lock_irqsave(&host->mmc->lock, flags);
+ if (host->dma.chan == chan) {
+ host->dma.chan = NULL;
+ ret = DMA_ACK;
+ }
+ spin_unlock_irqrestore(&host->mmc->lock, flags);
+
+ if (ret == DMA_ACK)
+ dev_info(&host->pdev->dev,
+ "Lost %s, falling back to PIO\n",
+ chan->dev.bus_id);
+ break;
+
+ default:
+ break;
+ }
+
+
+ return ret;
+}
+
+static int __init atmci_probe(struct platform_device *pdev)
+{
+ struct mci_platform_data *pdata;
+ struct atmel_mci *host;
+ struct mmc_host *mmc;
+ struct resource *regs;
+ int irq;
+ int ret;
+
+ regs = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+ if (!regs)
+ return -ENXIO;
+ pdata = pdev->dev.platform_data;
+ if (!pdata)
+ return -ENXIO;
+ irq = platform_get_irq(pdev, 0);
+ if (irq < 0)
+ return irq;
+
+ mmc = mmc_alloc_host(sizeof(struct atmel_mci), &pdev->dev);
+ if (!mmc)
+ return -ENOMEM;
+
+ host = mmc_priv(mmc);
+ host->pdev = pdev;
+ host->mmc = mmc;
+ host->detect_pin = pdata->detect_pin;
+ host->wp_pin = pdata->wp_pin;
+
+ host->mck = clk_get(&pdev->dev, "mci_clk");
+ if (IS_ERR(host->mck)) {
+ ret = PTR_ERR(host->mck);
+ goto err_clk_get;
+ }
+
+ ret = -ENOMEM;
+ host->regs = ioremap(regs->start, regs->end - regs->start + 1);
+ if (!host->regs)
+ goto err_ioremap;
+
+ clk_enable(host->mck);
+ mci_writel(host, CR, MCI_BIT(SWRST));
+ host->bus_hz = clk_get_rate(host->mck);
+ clk_disable(host->mck);
+
+ host->mapbase = regs->start;
+
+ mmc->ops = &atmci_ops;
+ mmc->f_min = (host->bus_hz + 511) / 512;
+ mmc->f_max = host->bus_hz / 2;
+ mmc->ocr_avail = MMC_VDD_32_33 | MMC_VDD_33_34;
+ mmc->caps |= MMC_CAP_4_BIT_DATA | MMC_CAP_MULTIWRITE;
+
+ mmc->max_hw_segs = 64;
+ mmc->max_phys_segs = 64;
+ mmc->max_req_size = 32768 * 512;
+ mmc->max_blk_size = 32768;
+ mmc->max_blk_count = 512;
+
+ tasklet_init(&host->tasklet, atmci_tasklet_func, (unsigned long)mmc);
+
+ ret = request_irq(irq, atmci_interrupt, 0, pdev->dev.bus_id, mmc);
+ if (ret)
+ goto err_request_irq;
+
+ if (pdata->dma_slave) {
+ struct dma_slave *slave = pdata->dma_slave;
+
+ slave->tx_reg = regs->start + MCI_TDR;
+ slave->rx_reg = regs->start + MCI_RDR;
+
+ /* Try to grab a DMA channel */
+ host->dma.client.event_callback = atmci_dma_event;
+ dma_cap_set(DMA_SLAVE, host->dma.client.cap_mask);
+ host->dma.client.slave = slave;
+
+ dma_async_client_register(&host->dma.client);
+ dma_async_client_chan_request(&host->dma.client);
+ } else {
+ dev_notice(&pdev->dev, "DMA not available, using PIO\n");
+ }
+
+ /* Assume card is present if we don't have a detect pin */
+ host->present = 1;
+ if (host->detect_pin >= 0) {
+ if (gpio_request(host->detect_pin, "mmc_detect")) {
+ dev_dbg(&mmc->class_dev, "no detect pin available\n");
+ host->detect_pin = -1;
+ } else {
+ host->present = !gpio_get_value(host->detect_pin);
+ }
+ }
+ if (host->wp_pin >= 0) {
+ if (gpio_request(host->wp_pin, "mmc_wp")) {
+ dev_dbg(&mmc->class_dev, "no WP pin available\n");
+ host->wp_pin = -1;
+ }
+ }
+
+ platform_set_drvdata(pdev, host);
+
+ mmc_add_host(mmc);
+
+ if (host->detect_pin >= 0) {
+ setup_timer(&host->detect_timer, atmci_detect_change,
+ (unsigned long)host);
+
+ ret = request_irq(gpio_to_irq(host->detect_pin),
+ atmci_detect_interrupt,
+ IRQF_TRIGGER_FALLING | IRQF_TRIGGER_RISING,
+ "mmc-detect", mmc);
+ if (ret) {
+ dev_dbg(&mmc->class_dev,
+ "could not request IRQ %d for detect pin\n",
+ gpio_to_irq(host->detect_pin));
+ gpio_free(host->detect_pin);
+ host->detect_pin = -1;
+ }
+ }
+
+ dev_info(&mmc->class_dev,
+ "Atmel MCI controller at 0x%08lx irq %d\n",
+ host->mapbase, irq);
+
+ atmci_init_debugfs(host);
+
+ return 0;
+
+err_request_irq:
+ iounmap(host->regs);
+err_ioremap:
+ clk_put(host->mck);
+err_clk_get:
+ mmc_free_host(mmc);
+ return ret;
+}
+
+static int __exit atmci_remove(struct platform_device *pdev)
+{
+ struct atmel_mci *host = platform_get_drvdata(pdev);
+
+ platform_set_drvdata(pdev, NULL);
+
+ if (host) {
+ atmci_cleanup_debugfs(host);
+
+ if (host->detect_pin >= 0) {
+ int pin = host->detect_pin;
+
+ /* Make sure the timer doesn't enable the interrupt */
+ host->detect_pin = -1;
+ smp_wmb();
+
+ free_irq(gpio_to_irq(pin), host->mmc);
+ del_timer_sync(&host->detect_timer);
+ gpio_free(pin);
+ }
+
+ mmc_remove_host(host->mmc);
+
+ clk_enable(host->mck);
+ mci_writel(host, IDR, ~0UL);
+ mci_writel(host, CR, MCI_BIT(MCIDIS));
+ mci_readl(host, SR);
+ clk_disable(host->mck);
+
+ dma_async_client_unregister(&host->dma.client);
+
+ if (host->wp_pin >= 0)
+ gpio_free(host->wp_pin);
+
+ free_irq(platform_get_irq(pdev, 0), host->mmc);
+ iounmap(host->regs);
+
+ clk_put(host->mck);
+
+ mmc_free_host(host->mmc);
+ }
+ return 0;
+}
+
+static struct platform_driver atmci_driver = {
+ .remove = __exit_p(atmci_remove),
+ .driver = {
+ .name = "atmel_mci",
+ },
+};
+
+static int __init atmci_init(void)
+{
+ return platform_driver_probe(&atmci_driver, atmci_probe);
+}
+
+static void __exit atmci_exit(void)
+{
+ platform_driver_unregister(&atmci_driver);
+}
+
+module_init(atmci_init);
+module_exit(atmci_exit);
+
+MODULE_DESCRIPTION("Atmel Multimedia Card Interface driver");
+MODULE_AUTHOR("Haavard Skinnemoen <[email protected]>");
+MODULE_LICENSE("GPL v2");
diff --git a/include/asm-avr32/arch-at32ap/board.h b/include/asm-avr32/arch-at32ap/board.h
index a4e2d28..b6f805b 100644
--- a/include/asm-avr32/arch-at32ap/board.h
+++ b/include/asm-avr32/arch-at32ap/board.h
@@ -70,7 +70,11 @@ struct i2c_board_info;
struct platform_device *at32_add_device_twi(unsigned int id,
struct i2c_board_info *b,
unsigned int n);
-struct platform_device *at32_add_device_mci(unsigned int id);
+
+struct mci_platform_data;
+struct platform_device *
+at32_add_device_mci(unsigned int id, struct mci_platform_data *data);
+
struct platform_device *at32_add_device_ac97c(unsigned int id);
struct platform_device *at32_add_device_abdac(unsigned int id);

diff --git a/include/asm-avr32/atmel-mci.h b/include/asm-avr32/atmel-mci.h
new file mode 100644
index 0000000..ea6e29d
--- /dev/null
+++ b/include/asm-avr32/atmel-mci.h
@@ -0,0 +1,12 @@
+#ifndef __ASM_AVR32_ATMEL_MCI_H
+#define __ASM_AVR32_ATMEL_MCI_H
+
+struct dma_slave;
+
+struct mci_platform_data {
+ struct dma_slave *dma_slave;
+ int detect_pin;
+ int wp_pin;
+};
+
+#endif /* __ASM_AVR32_ATMEL_MCI_H */
--
1.5.5.4

2008-06-26 13:26:24

by Haavard Skinnemoen

[permalink] [raw]
Subject: [PATCH v4 3/6] dmaengine: Add slave DMA interface

This patch adds the necessary interfaces to the DMA Engine framework
to use functionality found on most embedded DMA controllers: DMA from
and to I/O registers with hardware handshaking.

In this context, hardware handshaking means that the peripheral that
owns the I/O registers in question is able to tell the DMA controller
when more data is available for reading, or when there is room for
more data to be written. This usually happens internally on the chip,
but these signals may also be exported outside the chip for things
like IDE DMA, etc.

A new struct dma_slave is introduced. This contains information that
the DMA engine driver needs to set up slave transfers to and from a
slave device. Most engines supporting DMA slave transfers will want to
extend this structure with controller-specific parameters. This
additional information is usually passed from the platform/board code
through the client driver.
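
For instance, patch 6/6 wraps it in the dw_dmac-specific struct
dw_dma_slave from platform code; a condensed sketch of that setup:

/* Condensed from at32ap700x.c in patch 6/6: the generic dma_slave is
 * embedded in a controller-specific struct dw_dma_slave. */
struct dw_dma_slave *dws = kzalloc(sizeof(*dws), GFP_KERNEL);

if (dws) {
        /* generic part, used by the DMA engine core and clients */
        dws->slave.dev = &pdev->dev;
        dws->slave.dma_dev = &dw_dmac0_device.dev;
        dws->slave.reg_width = DMA_SLAVE_WIDTH_32BIT;
        /* DesignWare-specific handshake configuration */
        dws->cfg_hi = DWC_CFGH_SRC_PER(0) | DWC_CFGH_DST_PER(1);
}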

A "slave" pointer is added to the dma_client struct. This must point
to a valid dma_slave structure iff the DMA_SLAVE capability is
requested. The DMA engine driver may use this information in its
device_alloc_chan_resources hook to configure the DMA controller for
slave transfers from and to the given slave device.
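
From the client's point of view, requesting a slave channel then looks
roughly like this (cf. atmci_probe() in patch 6/6):

/* Rough sketch of a client requesting a DMA_SLAVE channel; the
 * event callback is notified when a matching channel appears. */
host->dma.client.event_callback = atmci_dma_event;
host->dma.client.slave = pdata->dma_slave;      /* must not be NULL */
dma_cap_set(DMA_SLAVE, host->dma.client.cap_mask);

dma_async_client_register(&host->dma.client);
dma_async_client_chan_request(&host->dma.client);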

A new struct dma_slave_descriptor is added. This extends the standard
dma_async_tx_descriptor with a few members that are needed for doing
slave DMA from/to peripherals.

A new operation for creating such descriptors is added to struct
dma_device. Another new operation for terminating all pending
transfers is added as well. The latter is needed because there may be
errors outside the scope of the DMA Engine framework that may require
DMA operations to be terminated prematurely.
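
Once a channel is bound, submitting a slave transfer follows the usual
async_tx pattern; a sketch along the lines of atmci_submit_data_dma()
in patch 6/6 (the function name is illustrative):

static int submit_slave_sg(struct dma_chan *chan, struct scatterlist *sgl,
                unsigned int sg_len, enum dma_data_direction direction,
                dma_async_tx_callback callback, void *param)
{
        struct dma_async_tx_descriptor *desc;

        /* one descriptor covers the whole scatterlist */
        desc = chan->device->device_prep_slave_sg(chan, sgl, sg_len,
                        direction, DMA_PREP_INTERRUPT | DMA_CTRL_ACK);
        if (!desc)
                return -ENOMEM;

        desc->callback = callback;
        desc->callback_param = param;
        desc->tx_submit(desc);

        /* tell the engine to actually start processing */
        chan->device->device_issue_pending(chan);
        return 0;
}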

DMA Engine drivers may extend the dma_device, dma_chan and/or
dma_slave_descriptor structures to allow controller-specific
operations. The client driver can detect such extensions by looking at
the DMA Engine's struct device, or it can request a specific DMA
Engine device by setting the dma_dev field in struct dma_slave.
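
A hypothetical sketch of how a master driver could recover its wrapper
once the dma_dev check has matched:

/* Hypothetical helper: recover the controller-specific wrapper.
 * Safe because a matching dma_dev guarantees the wrapping struct. */
static struct dw_dma_slave *dwc_slave(struct dma_chan *chan,
                struct dma_client *client)
{
        struct dma_slave *slave = client->slave;

        if (!slave || slave->dma_dev != chan->device->dev)
                return NULL;    /* not targeted at this controller */

        return container_of(slave, struct dw_dma_slave, slave);
}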

Signed-off-by: Haavard Skinnemoen <[email protected]>

dmaslave interface changes since v3:
* Use dma_data_direction instead of a new enum
* Submit slave transfers as scatterlists
* Remove the DMA slave descriptor struct

dmaslave interface changes since v2:
* Add a dma_dev field to struct dma_slave. If set, the client can
only be bound to the DMA controller that corresponds to this
device. This allows controller-specific extensions of the
dma_slave structure; if the device matches, the controller may
safely assume its extensions are present.
* Move reg_width into struct dma_slave as there are currently no
users that need to be able to set the width on a per-transfer
basis.

dmaslave interface changes since v1:
* Drop the set_direction and set_width descriptor hooks. Pass the
direction and width to the prep function instead.
* Declare a dma_slave struct with fixed information about a slave,
i.e. register addresses, handshake interfaces and such.
* Add pointer to a dma_slave struct to dma_client. Can be NULL if
the DMA_SLAVE capability isn't requested.
* Drop the set_slave device hook since the alloc_chan_resources hook
now has enough information to set up the channel for slave
transfers.
---
drivers/dma/dmaengine.c | 16 ++++++++++++-
include/linux/dmaengine.h | 53 ++++++++++++++++++++++++++++++++++++++++++++-
2 files changed, 67 insertions(+), 2 deletions(-)

diff --git a/drivers/dma/dmaengine.c b/drivers/dma/dmaengine.c
index ad8d811..2e0035f 100644
--- a/drivers/dma/dmaengine.c
+++ b/drivers/dma/dmaengine.c
@@ -159,7 +159,12 @@ static void dma_client_chan_alloc(struct dma_client *client)
enum dma_state_client ack;

/* Find a channel */
- list_for_each_entry(device, &dma_device_list, global_node)
+ list_for_each_entry(device, &dma_device_list, global_node) {
+ /* Does the client require a specific DMA controller? */
+ if (client->slave && client->slave->dma_dev
+ && client->slave->dma_dev != device->dev)
+ continue;
+
list_for_each_entry(chan, &device->channels, device_node) {
if (!dma_chan_satisfies_mask(chan, client->cap_mask))
continue;
@@ -180,6 +185,7 @@ static void dma_client_chan_alloc(struct dma_client *client)
return;
}
}
+ }
}

enum dma_status dma_sync_wait(struct dma_chan *chan, dma_cookie_t cookie)
@@ -276,6 +282,10 @@ static void dma_clients_notify_removed(struct dma_chan *chan)
*/
void dma_async_client_register(struct dma_client *client)
{
+ /* validate client data */
+ BUG_ON(dma_has_cap(DMA_SLAVE, client->cap_mask) &&
+ !client->slave);
+
mutex_lock(&dma_list_mutex);
list_add_tail(&client->global_node, &dma_client_list);
mutex_unlock(&dma_list_mutex);
@@ -350,6 +360,10 @@ int dma_async_device_register(struct dma_device *device)
!device->device_prep_dma_memset);
BUG_ON(dma_has_cap(DMA_INTERRUPT, device->cap_mask) &&
!device->device_prep_dma_interrupt);
+ BUG_ON(dma_has_cap(DMA_SLAVE, device->cap_mask) &&
+ !device->device_prep_slave_sg);
+ BUG_ON(dma_has_cap(DMA_SLAVE, device->cap_mask) &&
+ !device->device_terminate_all);

BUG_ON(!device->device_alloc_chan_resources);
BUG_ON(!device->device_free_chan_resources);
diff --git a/include/linux/dmaengine.h b/include/linux/dmaengine.h
index 4b602d3..8ce03e8 100644
--- a/include/linux/dmaengine.h
+++ b/include/linux/dmaengine.h
@@ -89,10 +89,23 @@ enum dma_transaction_type {
DMA_MEMSET,
DMA_MEMCPY_CRC32C,
DMA_INTERRUPT,
+ DMA_SLAVE,
};

/* last transaction type for creation of the capabilities mask */
-#define DMA_TX_TYPE_END (DMA_INTERRUPT + 1)
+#define DMA_TX_TYPE_END (DMA_SLAVE + 1)
+
+/**
+ * enum dma_slave_width - DMA slave register access width.
+ * @DMA_SLAVE_WIDTH_8BIT: Do 8-bit slave register accesses
+ * @DMA_SLAVE_WIDTH_16BIT: Do 16-bit slave register accesses
+ * @DMA_SLAVE_WIDTH_32BIT: Do 32-bit slave register accesses
+ */
+enum dma_slave_width {
+ DMA_SLAVE_WIDTH_8BIT,
+ DMA_SLAVE_WIDTH_16BIT,
+ DMA_SLAVE_WIDTH_32BIT,
+};

/**
* enum dma_ctrl_flags - DMA flags to augment operation preparation,
@@ -115,6 +128,33 @@ enum dma_ctrl_flags {
typedef struct { DECLARE_BITMAP(bits, DMA_TX_TYPE_END); } dma_cap_mask_t;

/**
+ * struct dma_slave - Information about a DMA slave
+ * @dev: device acting as DMA slave
+ * @dma_dev: required DMA master device. If non-NULL, the client can not be
+ * bound to other masters than this. The master driver may use
+ * this to determine whether it's safe to access controller-specific data
+ * @tx_reg: physical address of data register used for
+ * memory-to-peripheral transfers
+ * @rx_reg: physical address of data register used for
+ * peripheral-to-memory transfers
+ * @reg_width: peripheral register width
+ *
+ * If dma_dev is non-NULL, the client can not be bound to other DMA
+ * masters than the one corresponding to this device. The DMA master
+ * driver may use this to determine if there is controller-specific
+ * data wrapped around this struct. Drivers or platform code that set
+ * the dma_dev field must therefore make sure to use an appropriate
+ * controller-specific dma slave structure wrapping this struct.
+ */
+struct dma_slave {
+ struct device *dev;
+ struct device *dma_dev;
+ dma_addr_t tx_reg;
+ dma_addr_t rx_reg;
+ enum dma_slave_width reg_width;
+};
+
+/**
* struct dma_chan_percpu - the per-CPU part of struct dma_chan
* @refcount: local_t used for open-coded "bigref" counting
* @memcpy_count: transaction counter
@@ -219,11 +259,14 @@ typedef enum dma_state_client (*dma_event_callback) (struct dma_client *client,
* @event_callback: func ptr to call when something happens
* @cap_mask: only return channels that satisfy the requested capabilities
* a value of zero corresponds to any capability
+ * @slave: data for preparing slave transfer. Must be non-NULL iff the
+ * DMA_SLAVE capability is requested.
* @global_node: list_head for global dma_client_list
*/
struct dma_client {
dma_event_callback event_callback;
dma_cap_mask_t cap_mask;
+ struct dma_slave *slave;
struct list_head global_node;
};

@@ -280,6 +323,8 @@ struct dma_async_tx_descriptor {
* @device_prep_dma_zero_sum: prepares a zero_sum operation
* @device_prep_dma_memset: prepares a memset operation
* @device_prep_dma_interrupt: prepares an end of chain interrupt operation
+ * @device_prep_slave_sg: prepares a slave dma operation
+ * @device_terminate_all: terminate all pending operations
* @device_issue_pending: push pending transactions to hardware
*/
struct dma_device {
@@ -315,6 +360,12 @@ struct dma_device {
struct dma_async_tx_descriptor *(*device_prep_dma_interrupt)(
struct dma_chan *chan, unsigned long flags);

+ struct dma_async_tx_descriptor *(*device_prep_slave_sg)(
+ struct dma_chan *chan, struct scatterlist *sgl,
+ unsigned int sg_len, enum dma_data_direction direction,
+ unsigned long flags);
+ void (*device_terminate_all)(struct dma_chan *chan);
+
enum dma_status (*device_is_tx_complete)(struct dma_chan *chan,
dma_cookie_t cookie, dma_cookie_t *last,
dma_cookie_t *used);
--
1.5.5.4

2008-06-26 13:31:40

by Haavard Skinnemoen

[permalink] [raw]
Subject: [PATCH v4 4/6] dmaengine: Make DMA Engine menu visible for AVR32 users

This makes the DMA Engine menu visible on AVR32 by adding AVR32 to the
(growing) list of architectures DMADEVICES depends on. Though I'd prefer
to remove that whole "depends" line entirely...

The DMADEVICES menu used to be available for all architectures, but at
some point, we started building a huge dependency list with all the
architectures that might have support for this kind of hardware.

According to Dan Williams:

> Adrian had concerns about users enabling NET_DMA when the hardware
> capability is relatively rare.

which seems very strange as long as (PCI && X86) is enough to enable
this menu. In other words, the vast majority of users will see the menu
even though the hardware is rare.

Also, all DMA clients depend on DMA_ENGINE being set. This symbol is
selected by each DMA Engine driver, so users can't select a DMA client
without selecting a specific DMA Engine driver first.

So, while this patch solves my immediate problem of making DMA Engines
available on AVR32, I'd much rather remove the whole arch dependency
list because I think it's bogus. Comments?

Signed-off-by: Haavard Skinnemoen <[email protected]>
Cc: Adrian Bunk <[email protected]>
---
drivers/dma/Kconfig | 2 +-
1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/drivers/dma/Kconfig b/drivers/dma/Kconfig
index 18f6ef3..2ac09be 100644
--- a/drivers/dma/Kconfig
+++ b/drivers/dma/Kconfig
@@ -4,7 +4,7 @@

menuconfig DMADEVICES
bool "DMA Engine support"
- depends on (PCI && X86) || ARCH_IOP32X || ARCH_IOP33X || ARCH_IOP13XX || PPC
+ depends on (PCI && X86) || ARCH_IOP32X || ARCH_IOP33X || ARCH_IOP13XX || PPC || AVR32
depends on !HIGHMEM64G
help
DMA engines can do asynchronous data transfers without
--
1.5.5.4

2008-06-26 13:32:36

by Haavard Skinnemoen

[permalink] [raw]
Subject: Re: [PATCH v4 3/6] dmaengine: Add slave DMA interface

Ok, I guess I didn't update all the patch descriptions properly...

Haavard Skinnemoen <[email protected]> wrote:
> A new struct dma_slave_descriptor is added. This extends the standard
> dma_async_tx_descriptor with a few members that are needed for doing
> slave DMA from/to peripherals.

This isn't correct anymore, as the dma_slave_descriptor struct turned
out not to be needed after all.

> A new operation for creating such descriptors is added to struct
> dma_device.

This isn't entirely correct either -- regular dma_async_tx_descriptors
are created, but one such descriptor can represent a whole scatterlist.
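
To illustrate, a client ends up doing something like this (untested
sketch; the sgl/dma_dev/callback names are made up and error handling
is omitted):

	struct dma_async_tx_descriptor *tx;
	dma_cookie_t cookie;
	int nents;

	/* Map the request's scatterlist for the transfer direction */
	nents = dma_map_sg(dma_dev, sgl, sg_len, DMA_TO_DEVICE);

	/* One descriptor covers the whole scatterlist */
	tx = chan->device->device_prep_slave_sg(chan, sgl, nents,
			DMA_TO_DEVICE, DMA_PREP_INTERRUPT);

	tx->callback = my_dma_complete;
	tx->callback_param = host;
	cookie = tx->tx_submit(tx);
	chan->device->device_issue_pending(chan);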

I'll update the changelog for the next round...

Haavard

2008-06-26 14:18:21

by Adrian Bunk

[permalink] [raw]
Subject: Re: [PATCH v4 4/6] dmaengine: Make DMA Engine menu visible for AVR32 users

On Thu, Jun 26, 2008 at 03:23:21PM +0200, Haavard Skinnemoen wrote:
> This makes the DMA Engine menu visible on AVR32 by adding AVR32 to the
> (growing) list of architectures DMADEVICES depends on. Though I'd prefer
> to remove that whole "depends" line entirely...
>
> The DMADEVICES menu used to be available for all architectures, but at
> some point, we started building a huge dependency list with all the
> architectures that might have support for this kind of hardware.
>
> According to Dan Williams:
>
> > Adrian had concerns about users enabling NET_DMA when the hardware
> > capability is relatively rare.
>
> which seems very strange as long as (PCI && X86) is enough to enable
> this menu. In other words, the vast majority of users will see the menu
> even though the hardware is rare.
>
> Also, all DMA clients depend on DMA_ENGINE being set. This symbol is
> selected by each DMA Engine driver, so users can't select a DMA client
> without selecting a specific DMA Engine driver first.
>...

That discussion is mixing two different things I suggested besides other
things before the Kconfig file was added [1]:
- have DMA_ENGINE select'ed when a device gets enabled by the user,
and not be an independent option
- switch to menuconfig and don't offer an empty kconfig menu

There seems to be no disagreement about the former (which could
otherwise easily lead to users mistakenly enabling NET_DMA).
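
In drivers/dma/Kconfig the pattern looks roughly like this (abridged,
and the exact prompts may differ):

config INTEL_IOATDMA
	tristate "Intel I/OAT DMA support"
	depends on PCI && X86
	select DMA_ENGINE

config NET_DMA
	bool "Network: TCP receive copy offload"
	depends on DMA_ENGINE && NET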

The latter is more a cosmetic kconfig UI thing, and I already said
back then that it "could be dropped if it would become a problem" [2].

So if you want to remove the architecture dependency from the DMADEVICES
menu that's OK with me.

cu
Adrian

[1] http://lkml.org/lkml/2007/7/26/15
[2] http://lkml.org/lkml/2007/8/9/537

--

"Is there not promise of rain?" Ling Tan asked suddenly out
of the darkness. There had been need of rain for many days.
"Only a promise," Lao Er said.
Pearl S. Buck - Dragon Seed

2008-06-26 15:10:40

by Haavard Skinnemoen

[permalink] [raw]
Subject: Re: [PATCH v4 4/6] dmaengine: Make DMA Engine menu visible for AVR32 users

Adrian Bunk <[email protected]> wrote:
> That discussion is mixing two different things I suggested besides other
> things before the Kconfig file was added [1]:
> - have DMA_ENGINE select'ed when a device gets enabled by the user,
> and not be an independent option
> - switch to menuconfig and don't offer an empty kconfig menu
>
> There seems to be no disagreement about the former (which could
> otherwise easily lead to users mistakenly enabling NET_DMA).
>
> The latter is more a cosmetic kconfig UI thing, and I already said
> back then that it "could be dropped if it would become a problem" [2].

Ok, thanks for explaining. The menu does appear empty if I remove the
architecture dependency without adding the driver...if that's a problem
maybe we should do the HAVE_DMA_DEVICE thing...

> So if you want to remove the architecture dependency from the DMADEVICES
> menu that's OK with me.

Ok, I'm gonna wait for Dan and others to respond. If it's fine with
them, I'll post a patch removing the arch dependency.

Haavard

2008-06-26 20:04:23

by David Brownell

[permalink] [raw]
Subject: Re: [PATCH v4 4/6] dmaengine: Make DMA Engine menu visible for AVR32 users

On Thursday 26 June 2008, Haavard Skinnemoen wrote:
> So, while this patch solves my immediate problem of making DMA Engines
> available on AVR32, I'd much rather remove the whole arch dependency
> list because I think it's bogus. Comments?

Alternatively, "depends on HAVE_DMA_ENGINE" which arch code selects.

> -	depends on (PCI && X86) || ARCH_IOP32X || ARCH_IOP33X || ARCH_IOP13XX || PPC
> +	depends on (PCI && X86) || ARCH_IOP32X || ARCH_IOP33X || ARCH_IOP13XX || PPC || AVR32

2008-06-27 00:59:53

by Dan Williams

[permalink] [raw]
Subject: Re: [PATCH v4 4/6] dmaengine: Make DMA Engine menu visible for AVR32 users


On Thu, 2008-06-26 at 07:46 -0700, Haavard Skinnemoen wrote:
> Adrian Bunk <[email protected]> wrote:
> > That discussion is mixing two different things I suggested besides other
> > things before the Kconfig file was added [1]:
> > - have DMA_ENGINE select'ed when a device gets enabled by the user,
> > and not be an independent option
> > - switch to menuconfig and don't offer an empty kconfig menu
> >
> > There seems to be no disagreement about the former (which could
> > otherwise easily lead to users mistakenly enabling NET_DMA).
> >
> > The latter is more a cosmetic kconfig UI thing, and I already said
> > back then that it "could be dropped if it would become a problem" [2].
>
> Ok, thanks for explaining. The menu does appear empty if I remove the
> architecture dependency without adding the driver...if that's a problem
> maybe we should do the HAVE_DMA_DEVICE thing...
>
> > So if you want to remove the architecture dependency from the DMADEVICES
> > menu that's OK with me.
>
> Ok, I'm gonna wait for Dan and others to respond. If it's fine with
> them, I'll post a patch removing the arch dependency.

I agree with removing the arch dependency, and I do not think we
necessarily need to add HAVE_DMA_ENGINE. Taking an example from libata
the SATA_FSL driver depends on FSL_SOC but the menuconfig for ATA does
not. We can use "depends on HAS_DMA" to make the menu disappear on
archs that will never have a dmaengine. So I propose the following:

diff --git a/drivers/dma/Kconfig b/drivers/dma/Kconfig
index 6239c3d..e4dd006 100644
--- a/drivers/dma/Kconfig
+++ b/drivers/dma/Kconfig
@@ -4,13 +4,14 @@

menuconfig DMADEVICES
bool "DMA Engine support"
- depends on (PCI && X86) || ARCH_IOP32X || ARCH_IOP33X || ARCH_IOP13XX || PPC
- depends on !HIGHMEM64G
+ depends on !HIGHMEM64G && HAS_DMA
help
DMA engines can do asynchronous data transfers without
involving the host CPU. Currently, this framework can be
used to offload memory copies in the network stack and
- RAID operations in the MD driver.
+ RAID operations in the MD driver. This menu only presents
+ DMA Device drivers supported by the configured arch, it may
+ be empty in some cases.

if DMADEVICES

@@ -55,10 +56,12 @@ comment "DMA Clients"
config NET_DMA
bool "Network: TCP receive copy offload"
depends on DMA_ENGINE && NET
+ default (INTEL_IOATDMA || FSL_DMA)
help
This enables the use of DMA engines in the network stack to
offload receive copy-to-user operations, freeing CPU cycles.
- Since this is the main user of the DMA engine, it should be enabled;
- say Y here.
+
+ Say Y here if you enabled INTEL_IOATDMA or FSL_DMA, otherwise
+ say N.

endif


2008-06-27 17:35:29

by David Brownell

[permalink] [raw]
Subject: Re: [PATCH v4 4/6] dmaengine: Make DMA Engine menu visible for AVR32 users

On Thursday 26 June 2008, Dan Williams wrote:
> I agree with removing the arch dependency, and I do not think we
> necessarily need to add HAVE_DMA_ENGINE.

I think a HAVE_DMA_ENGINE would be better than what you're doing
below: moving the arch dependency into the network code, and
adding this !HIGHMEM64G thing (which is really just a more subtle
arch dependency).

Note that HAS_DMA is very different from having DMA engine support...
one is a specific interface, the other is the generic mechanism with
any of its numerous (and often peripheral-specific) interfaces.


> Taking an example from libata
> the SATA_FSL driver depends on FSL_SOC but the menuconfig for ATA does
> not. We can use "depends on HAS_DMA" to make the menu disappear on
> archs that will never have a dmaengine. So I propose the following:
>
> diff --git a/drivers/dma/Kconfig b/drivers/dma/Kconfig
> index 6239c3d..e4dd006 100644
> --- a/drivers/dma/Kconfig
> +++ b/drivers/dma/Kconfig
> @@ -4,13 +4,14 @@
>
> menuconfig DMADEVICES
> bool "DMA Engine support"
> - depends on (PCI && X86) || ARCH_IOP32X || ARCH_IOP33X || ARCH_IOP13XX || PPC
> - depends on !HIGHMEM64G
> + depends on !HIGHMEM64G && HAS_DMA
> help
> DMA engines can do asynchronous data transfers without
> involving the host CPU. Currently, this framework can be
> used to offload memory copies in the network stack and
> - RAID operations in the MD driver.
> + RAID operations in the MD driver. This menu only presents
> + DMA Device drivers supported by the configured arch, it may
> + be empty in some cases.
>
> if DMADEVICES
>
> @@ -55,10 +56,12 @@ comment "DMA Clients"
> config NET_DMA
> bool "Network: TCP receive copy offload"
> depends on DMA_ENGINE && NET
> + default (INTEL_IOATDMA || FSL_DMA)
> help
> This enables the use of DMA engines in the network stack to
> offload receive copy-to-user operations, freeing CPU cycles.
> - Since this is the main user of the DMA engine, it should be enabled;
> - say Y here.
> +
> + Say Y here if you enabled INTEL_IOATDMA or FSL_DMA, otherwise
> + say N.
>
> endif
>
>
>

2008-06-27 17:47:17

by Adrian Bunk

[permalink] [raw]
Subject: Re: [PATCH v4 4/6] dmaengine: Make DMA Engine menu visible for AVR32 users

On Fri, Jun 27, 2008 at 09:37:21AM -0700, David Brownell wrote:
> On Thursday 26 June 2008, Dan Williams wrote:
> > I agree with removing the arch dependency, and I do not think we
> > necessarily need to add HAVE_DMA_ENGINE.
>
> I think a HAVE_DMA_ENGINE would be better than what you're doing
> below: moving the arch dependency into the network code, and
> adding this !HIGHMEM64G thing (which is really just a more subtle
> arch dependency).
>...

The only effect of the HAVE_DMA_ENGINE would be to not show an empty
kconfig menu.

That's IMHO too much effort for a purely cosmetic kconfig issue.

And I speak as the one who originally added the arch dependency...

cu
Adrian

--

"Is there not promise of rain?" Ling Tan asked suddenly out
of the darkness. There had been need of rain for many days.
"Only a promise," Lao Er said.
Pearl S. Buck - Dragon Seed

2008-06-27 18:14:10

by Haavard Skinnemoen

[permalink] [raw]
Subject: Re: [PATCH v4 4/6] dmaengine: Make DMA Engine menu visible for AVR32 users

On Fri, 27 Jun 2008 09:37:21 -0700
David Brownell <[email protected]> wrote:

> On Thursday 26 June 2008, Dan Williams wrote:
> > I agree with removing the arch dependency, and I do not think we
> > necessarily need to add HAVE_DMA_ENGINE.
>
> I think a HAVE_DMA_ENGINE would be better than what you're doing
> below: moving the arch dependency into the network code, and
> adding this !HIGHMEM64G thing (which is really just a more subtle
> arch dependency).

The !HIGHMEM64G dependency wasn't added; it was there before. I happen
to believe the code that breaks HIGHMEM64G is rather ugly, but that's no
reason to NAK this particular patch. Besides, I'm not really that
interested in the XOR parts of the framework.

> Note that HAS_DMA is very different from having DMA engine support...
> one is a specific interface, the other is the generic mechanism with
> any of its numerous (and often peripheral-specific) interfaces.

They may be different, but you can't have DMA engine support on
platforms that don't provide the DMA mapping API. At least not at the
moment.

The patch looks good to me.

Haavard

2008-06-27 18:24:58

by David Brownell

[permalink] [raw]
Subject: Re: [PATCH v4 4/6] dmaengine: Make DMA Engine menu visible for AVR32 users

On Friday 27 June 2008, Adrian Bunk wrote:
> The only effect of the HAVE_DMA_ENGINE would be to not show an empty
> kconfig menu.

Well, no. It would also make the network layer memcpy "acceleration"
option unavailable when there was no underlying engine ... similarly
with other pointless "we don't have that subsystem here" options.

Plus it would help ensure that the arch dependencies are comprehensible,
unlike that highmem thing.

2008-06-27 18:31:26

by Adrian Bunk

[permalink] [raw]
Subject: Re: [PATCH v4 4/6] dmaengine: Make DMA Engine menu visible for AVR32 users

On Fri, Jun 27, 2008 at 11:24:42AM -0700, David Brownell wrote:
> On Friday 27 June 2008, Adrian Bunk wrote:
> > The only effect of the HAVE_DMA_ENGINE would be to not show an empty
> > kconfig menu.
>
> Well, no. It would also make the network layer memcpy "acceleration"
> option unavailable when there was no underlying engine ... similarly
> with other pointless "we don't have that subsystem here" options.
>...

This NET_DMA issue was already fixed 11 months ago - in the same
patch that added the arch dependency for the menu. [1]

NET_DMA now depends on DMA_ENGINE which gets select'ed by the device
options. NET_DMA can therefore never be offered on architectures
without any DMA device.

cu
Adrian

[1] http://lkml.org/lkml/2007/7/26/15

--

"Is there not promise of rain?" Ling Tan asked suddenly out
of the darkness. There had been need of rain for many days.
"Only a promise," Lao Er said.
Pearl S. Buck - Dragon Seed

2008-06-27 18:32:07

by Dan Williams

[permalink] [raw]
Subject: Re: [PATCH v4 4/6] dmaengine: Make DMA Engine menu visible for AVR32 users

On Fri, Jun 27, 2008 at 11:24 AM, David Brownell <[email protected]> wrote:
> On Friday 27 June 2008, Adrian Bunk wrote:
>> The only effect of the HAVE_DMA_ENGINE would be to not show an empty
>> kconfig menu.
>
> Well, no. It would also make the network layer memcpy "acceleration"
> option unavailable when there was no underlying engine ... similarly
> with other pointless "we don't have that subsystem here" options.
>

Take another look. NET_DMA depends on DMA_ENGINE which only gets
selected when a dma device driver is selected. Each driver has its
architecture specific dependency, so the DMADEVICES arch dependency
was completely redundant.

> Plus it would help ensure that the arch dependencies are comprehensible,
> unlike that highmem thing.

The highmem dependency can go away; its only purpose is to prevent
hitting the BUILD_BUG_ON in async_xor.c.

--
Dan

2008-06-27 19:10:48

by Haavard Skinnemoen

[permalink] [raw]
Subject: Re: [PATCH v4 6/6] Atmel MCI: Driver for Atmel on-chip MMC controllers

On Thu, 26 Jun 2008 15:23:23 +0200
Haavard Skinnemoen <[email protected]> wrote:

> This driver can also use PIO transfers when no DMA channels are
> available, and for transfers where using DMA may be difficult or
> impractical for some reason (e.g. the DMA setup overhead is usually
> not worth it for very short transfers, and badly aligned buffers or
> lengths are difficult to handle.)

Btw, it's probably not that hard to rip the DMA bits out and post them
as a separate patch. This would mean that:
* Pierre can merge the driver independently of the other 5 patches
* A separate patch adding DMA support would make it clearer how the
DMA slave interface is used.
* The chances of having MMC support out of the box on avr32 boards in
2.6.27 become greater, and many people have been asking about that
(including Pierre and David.)

The driver is surprisingly fast with DMA turned off (2-3 MiB/s), but
the CPU usage is of course horrible.
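
The decision is basically "DMA only when it's clearly a win", along
these lines (just a sketch to show the idea, not the actual atmel-mci
code; the threshold is made up):

static bool atmci_use_dma(struct mmc_data *data)
{
	struct scatterlist *sg;
	int i;

	/* The DMA setup overhead isn't worth it for short transfers */
	if (data->blocks * data->blksz < 128)
		return false;

	/* Badly aligned buffers or lengths fall back to PIO */
	for_each_sg(data->sg, sg, data->sg_len, i)
		if (sg->offset & 3 || sg->length & 3)
			return false;

	return true;
}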

If that sounds like a good plan to you, I'll split the driver tomorrow.

This driver has been out of tree for way too long. I'm hoping we can
get it in before 2.6.27.

Haavard

2008-06-27 19:56:47

by Pierre Ossman

[permalink] [raw]
Subject: Re: [PATCH v4 6/6] Atmel MCI: Driver for Atmel on-chip MMC controllers

On Fri, 27 Jun 2008 21:10:14 +0200
Haavard Skinnemoen <[email protected]> wrote:

>
> Btw, it's probably not that hard to rip the DMA bits out and post them
> as a separate patch. This would mean that:
> * Pierre can merge the driver independently of the other 5 patches

*snip*

>
> If that sounds like a good plan to you, I'll split the driver tomorrow.
>

DMA is always nice, but I prefer a PIO-only driver over none at all.

I am a bit concerned about the problems with mmc_test you mentioned
though. Have you sent any info about those previously?

Rgds
--
-- Pierre Ossman

Linux kernel, MMC maintainer http://www.kernel.org
rdesktop, core developer http://www.rdesktop.org

WARNING: This correspondence is being monitored by the
Swedish government. Make sure your server uses encryption
for SMTP traffic and consider using PGP for end-to-end
encryption.



2008-06-27 21:31:24

by Dan Williams

[permalink] [raw]
Subject: Re: [PATCH v4 6/6] Atmel MCI: Driver for Atmel on-chip MMC controllers


On Fri, 2008-06-27 at 12:10 -0700, Haavard Skinnemoen wrote:
> On Thu, 26 Jun 2008 15:23:23 +0200
> Haavard Skinnemoen <[email protected]> wrote:
>
> > This driver can also use PIO transfers when no DMA channels are
> > available, and for transfers where using DMA may be difficult or
> > impractical for some reason (e.g. the DMA setup overhead is usually
> > not worth it for very short transfers, and badly aligned buffers or
> > lengths are difficult to handle.)
>
> Btw, it's probably not that hard to rip the DMA bits out and post them
> as a separate patch. This would mean that:
> * Pierre can merge the driver independently of the other 5 patches
> * A separate patch adding DMA support would make it clearer how the
> DMA slave interface is used.
> * The chances of having MMC support out of the box on avr32 boards in
>   2.6.27 become greater, and many people have been asking about that
> (including Pierre and David.)

> The driver is surprisingly fast with DMA turned off (2-3 MiB/s), but
> the CPU usage is of course horrible.
>
> If that sounds like a good plan to you, I'll split the driver
> tomorrow.
>
> This driver has been out of tree for way too long. I'm hoping we can
> get it in before 2.6.27.

I have high confidence that we can get the dma bits applied in time for
2.6.27. You seem to have addressed my previous concerns and there has
been more than ample time for others with similar dma configurations to
comment on the dma-slave framework. I just want some more time to give
it an honest review.

Regards,
Dan

2008-06-28 12:29:24

by Haavard Skinnemoen

[permalink] [raw]
Subject: Re: [PATCH v4 3/6] dmaengine: Add slave DMA interface

Haavard Skinnemoen <[email protected]> wrote:
> + * @dma_dev: required DMA master device. If non-NULL, the client can not be
> + * bound to other masters than this. The master driver may use
> + * this to determine whether it's safe to access

> + struct dma_async_tx_descriptor *(*device_prep_slave_sg)(
> + struct dma_chan *chan, struct scatterlist *sgl,

Turns out I forgot to run checkpatch before posting. Here's a small
fixup. I'll fold it into this patch if I end up doing a v5 of this
series.

The unfinished comment above was redundant anyway, so I just removed
the last part.

include/linux/dmaengine.h | 5 ++---
1 files changed, 2 insertions(+), 3 deletions(-)

diff --git a/include/linux/dmaengine.h b/include/linux/dmaengine.h
index 8ce03e8..3d57439 100644
--- a/include/linux/dmaengine.h
+++ b/include/linux/dmaengine.h
@@ -131,8 +131,7 @@ typedef struct { DECLARE_BITMAP(bits, DMA_TX_TYPE_END); } dma_cap_mask_t;
* struct dma_slave - Information about a DMA slave
* @dev: device acting as DMA slave
* @dma_dev: required DMA master device. If non-NULL, the client can not be
- * bound to other masters than this. The master driver may use
- * this to determine whether it's safe to access
+ * bound to other masters than this.
* @tx_reg: physical address of data register used for
* memory-to-peripheral transfers
* @rx_reg: physical address of data register used for
@@ -361,7 +360,7 @@ struct dma_device {
struct dma_chan *chan, unsigned long flags);

struct dma_async_tx_descriptor *(*device_prep_slave_sg)(
- struct dma_chan *chan, struct scatterlist *sgl,
+ struct dma_chan *chan, struct scatterlist *sgl,
unsigned int sg_len, enum dma_data_direction direction,
unsigned long flags);
void (*device_terminate_all)(struct dma_chan *chan);
--
1.5.5.4

2008-06-28 12:43:25

by Haavard Skinnemoen

[permalink] [raw]
Subject: Re: [PATCH v4 6/6] Atmel MCI: Driver for Atmel on-chip MMC controllers

Pierre Ossman <[email protected]> wrote:
> I am a bit concerned about the problems with mmc_test you mentioned
> though. Have you sent any info about those previously?

No, I don't think I have.

Here are the results from one of my cards:

sh-3.2# echo > /sys/class/mmc_host/mmc0/mmc0\:b368/test
mmc0: Starting tests of card mmc0:b368...
mmc0: Test case 1. Basic write (no data verification)...
mmc0: Result: OK
mmc0: Test case 2. Basic read (no data verification)...
mmc0: Result: OK
mmc0: Test case 3. Basic write (with data verification)...
mmc0: Result: OK
mmc0: Test case 4. Basic read (with data verification)...
mmc0: Result: OK
mmc0: Test case 5. Multi-block write...
mmc0: Warning: Host did not wait for busy state to end.
mmc0: Result: OK
mmc0: Test case 6. Multi-block read...
mmc0: Result: OK
mmc0: Test case 7. Power of two block writes...
mmc0: Result: UNSUPPORTED (by card)
mmc0: Test case 8. Power of two block reads...
mmc0: Result: OK
mmc0: Test case 9. Weird sized block writes...
mmc0: Result: UNSUPPORTED (by card)
mmc0: Test case 10. Weird sized block reads...
mmc0: Result: OK
mmc0: Test case 11. Badly aligned write...
mmc0: Result: OK
mmc0: Test case 12. Badly aligned read...
mmc0: Result: OK
mmc0: Test case 13. Badly aligned multi-block write...
mmc0: Warning: Host did not wait for busy state to end.
mmc0: Warning: Host did not wait for busy state to end.
mmc0: Result: OK
mmc0: Test case 14. Badly aligned multi-block read...
mmc0: Result: OK
mmc0: Test case 15. Correct xfer_size at write (start failure)...
mmc0: Result: ERROR (-84)
mmc0: Test case 16. Correct xfer_size at read (start failure)...
mmc0: Result: OK
mmc0: Test case 17. Correct xfer_size at write (midway failure)...
mmc0: Result: ERROR (-84)
mmc0: Test case 18. Correct xfer_size at read (midway failure)...
mmc0: Result: OK
mmc0: Tests completed.

Tests 7 and 9 are not supported by the card, so I can't do much about
it except go through all the cards I have available and see if one of
them supports this test.

Tests 15 and 17 return -EILSEQ instead of -ETIMEDOUT. The at91_mci
driver has the same problem, and I think it's a hardware issue -- the
controller wrongly flags a CRC error instead of a data timeout error
if the card doesn't respond with any CRC status after a write. I don't
know how to work around that problem.
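
For reference, the error decode boils down to something like this
(sketch only; the status bit names here are placeholders, not
necessarily what atmel-mci-regs.h calls them):

	if (status & MCI_DCRCE)		/* wrongly set on write timeouts too */
		data->error = -EILSEQ;
	else if (status & MCI_DTOE)
		data->error = -ETIMEDOUT;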

Of course, I could cheat and return -ETIMEDOUT on CRC errors. That
would make the driver pass the tests, right? ;-)

The test results are the same regardless of whether DMA is used or not,
but short and/or difficult transfers are always done in PIO mode.

Haavard

2008-06-28 12:48:18

by Haavard Skinnemoen

[permalink] [raw]
Subject: Re: [PATCH v4 6/6] Atmel MCI: Driver for Atmel on-chip MMC controllers

Dan Williams <[email protected]> wrote:
> > This driver has been out of tree for way too long. I'm hoping we can
> > get it in before 2.6.27.
>
> I have high confidence that we can get the dma bits applied in time for
> 2.6.27. You seem to have addressed my previous concerns and there has
> been more than ample time for others with similar dma configurations to
> comment on the dma-slave framework. I just want some more time to give
> it an honest review.

Sorry, I didn't mean to put pressure on you -- I certainly want an
honest review. I meant it more as a sigh of relief that this
troublesome driver is finally making its way into mainline...and that I
can start working on preparing the other drivers that depend on the DMA
slave interface.

Haavard

2008-06-28 13:31:58

by Haavard Skinnemoen

[permalink] [raw]
Subject: Re: [PATCH v4 6/6] Atmel MCI: Driver for Atmel on-chip MMC controllers

Haavard Skinnemoen <[email protected]> wrote:
> Tests 7 and 9 are not supported by the card, so I can't do much about
> it except go through all the cards I have available and see if one of
> them supports this test.

Turns out none of my 12 cards of various brands and models support this
test. Do you know some specific model I can try?

Haavard

2008-06-28 13:45:31

by Pierre Ossman

[permalink] [raw]
Subject: Re: [PATCH v4 6/6] Atmel MCI: Driver for Atmel on-chip MMC controllers

On Sat, 28 Jun 2008 14:43:13 +0200
Haavard Skinnemoen <[email protected]> wrote:

> Tests 15 and 17 return -EILSEQ instead of -ETIMEDOUT. The at91_mci
> driver has the same problem, and I think it's a hardware issue -- the
> controller wrongly flags a CRC error instead of a data timeout error
> if the card doesn't respond with any CRC status after a write. I don't
> know how to work around that problem.

If that's how the hardware behaves, then EILSEQ will have to do. The
test is more about forcing people to do proper error management in the
driver than anything else. Have a check that you don't report a bad
bytes_xfered though.

Rgds
--
-- Pierre Ossman

Linux kernel, MMC maintainer http://www.kernel.org
rdesktop, core developer http://www.rdesktop.org

WARNING: This correspondence is being monitored by the
Swedish government. Make sure your server uses encryption
for SMTP traffic and consider using PGP for end-to-end
encryption.



2008-06-28 14:01:38

by Haavard Skinnemoen

[permalink] [raw]
Subject: Re: [PATCH v4 6/6] Atmel MCI: Driver for Atmel on-chip MMC controllers

Pierre Ossman <[email protected]> wrote:
> On Sat, 28 Jun 2008 14:43:13 +0200
> Haavard Skinnemoen <[email protected]> wrote:
>
> > Tests 15 and 17 return -EILSEQ instead of -ETIMEDOUT. The at91_mci
> > driver has the same problem, and I think it's a hardware issue -- the
> > controller wrongly flags a CRC error instead of a data timeout error
> > if the card doesn't respond with any CRC status after a write. I don't
> > know how to work around that problem.
>
> If that's how the hardware behaves, then EILSEQ will have to do. The
> test is more about forcing people to do proper error management in the
> driver than anything else. Have a check that you don't report a bad
> bytes_xfered though.

bytes_xfered is 0 if any block failed. If I understand correctly, this
is good enough, but not optimal. I want to improve this later, but I
might need some more feedback from the DMA engine subsystem (e.g.
adding "actual" and "status" fields to the descriptor.)

The DMA slave interface isn't perfect yet, but I think the current
incarnation is actually useful and performs well even though it's very
basic. We can make incremental improvements later to improve error
reporting, offer more advanced control over the transfers, and support
other use cases better (e.g. audio.)

Haavard

2008-06-28 14:11:42

by Pierre Ossman

[permalink] [raw]
Subject: Re: [PATCH v4 6/6] Atmel MCI: Driver for Atmel on-chip MMC controllers

On Sat, 28 Jun 2008 16:01:21 +0200
Haavard Skinnemoen <[email protected]> wrote:

>
> bytes_xfered is 0 if any block failed. If I understand correctly, this
> is good enough, but not optimal. I want to improve this later, but I
> might need some more feedback from the DMA engine subsystem (e.g.
> adding "actual" and "status" fields to the descriptor.)
>

That's good enough, yes. The only incorrect value is reporting more than
was actually written, as that would completely undermine any attempt to
keep data integrity in upper layers.

Rgds
--
-- Pierre Ossman

Linux kernel, MMC maintainer http://www.kernel.org
rdesktop, core developer http://www.rdesktop.org

WARNING: This correspondence is being monitored by the
Swedish government. Make sure your server uses encryption
for SMTP traffic and consider using PGP for end-to-end
encryption.



2008-06-29 16:51:07

by Pierre Ossman

[permalink] [raw]
Subject: Re: [PATCH v4 6/6] Atmel MCI: Driver for Atmel on-chip MMC controllers

On Sat, 28 Jun 2008 15:31:44 +0200
Haavard Skinnemoen <[email protected]> wrote:

> Haavard Skinnemoen <[email protected]> wrote:
> > Tests 7 and 9 are not supported by the card, so I can't do much about
> > it except go through all the cards I have available and see if one of
> > them supports this test.
>
> Turns out none of my 12 cards of various brands and models support this
> test. Do you know some specific model I can try?
>

Of the set I have here, only one supported partial writes: a prototype
Samsung MMC 4.2 card. It has the markings "MM8GH04GNACA-9A" on it. A
quick Google search doesn't turn up anything, but some other new Samsung
MMC card might have good odds.

Rgds
--
-- Pierre Ossman

Linux kernel, MMC maintainer http://www.kernel.org
rdesktop, core developer http://www.rdesktop.org

WARNING: This correspondence is being monitored by the
Swedish government. Make sure your server uses encryption
for SMTP traffic and consider using PGP for end-to-end
encryption.



2008-07-01 13:51:18

by Sosnowski, Maciej

[permalink] [raw]
Subject: RE: [PATCH v4 1/6] dmaengine: Add dma_client parameter to device_alloc_chan_resources

> ---------- Original message ----------
> From: Haavard Skinnemoen <[email protected]>
> Date: Jun 26, 2008 3:23 PM
> Subject: [PATCH v4 1/6] dmaengine: Add dma_client parameter to
> device_alloc_chan_resources
> To: Dan Williams <[email protected]>, Pierre Ossman
> <[email protected]>
> Cc: [email protected], [email protected],
> [email protected], [email protected], David Brownell
> <[email protected]>, Haavard Skinnemoen
> <[email protected]>
>
>
> A DMA controller capable of doing slave transfers may need to know a
> few things about the slave when preparing the channel. We don't want
> to add this information to struct dma_channel since the channel hasn't
> yet been bound to a client at this point.
>
> Instead, pass a reference to the client requesting the channel to the
> driver's device_alloc_chan_resources hook so that it can pick the
> necessary information from the dma_client struct by itself.
>
> Signed-off-by: Haavard Skinnemoen <[email protected]>
> ---
> drivers/dma/dmaengine.c | 3 ++-
> drivers/dma/ioat_dma.c | 5 +++--
> drivers/dma/iop-adma.c | 7 ++++---
> include/linux/dmaengine.h | 3 ++-
> 4 files changed, 11 insertions(+), 7 deletions(-)
>
> diff --git a/drivers/dma/dmaengine.c b/drivers/dma/dmaengine.c
> index 99c22b4..a57c337 100644
> --- a/drivers/dma/dmaengine.c
> +++ b/drivers/dma/dmaengine.c
> @@ -174,7 +174,8 @@ static void dma_client_chan_alloc(struct dma_client *client)
>  			if (!dma_chan_satisfies_mask(chan, client->cap_mask))
>  				continue;
> 
> -			desc = chan->device->device_alloc_chan_resources(chan);
> +			desc = chan->device->device_alloc_chan_resources(
> +					chan, client);
>  			if (desc >= 0) {
>  				ack = client->event_callback(client, chan,
> diff --git a/drivers/dma/ioat_dma.c b/drivers/dma/ioat_dma.c
> index 318e8a2..90e5b0a 100644
> --- a/drivers/dma/ioat_dma.c
> +++ b/drivers/dma/ioat_dma.c
> @@ -452,7 +452,8 @@ static void ioat2_dma_massage_chan_desc(struct ioat_dma_chan *ioat_chan)
>   * ioat_dma_alloc_chan_resources - returns the number of allocated descriptors
>   * @chan: the channel to be filled out
>   */
> -static int ioat_dma_alloc_chan_resources(struct dma_chan *chan)
> +static int ioat_dma_alloc_chan_resources(struct dma_chan *chan,
> +					 struct dma_client *client)
>  {
>  	struct ioat_dma_chan *ioat_chan = to_ioat_chan(chan);
>  	struct ioat_desc_sw *desc;
> @@ -1049,7 +1050,7 @@ static int ioat_dma_self_test(struct ioatdma_device *device)
>  	dma_chan = container_of(device->common.channels.next,
>  				struct dma_chan,
>  				device_node);
> -	if (device->common.device_alloc_chan_resources(dma_chan) < 1) {
> +	if (device->common.device_alloc_chan_resources(dma_chan, NULL) < 1) {
>  		dev_err(&device->pdev->dev,
>  			"selftest cannot allocate chan resource\n");
>  		err = -ENODEV;
> diff --git a/drivers/dma/iop-adma.c b/drivers/dma/iop-adma.c
> index 0ec0f43..2664ea5 100644
> --- a/drivers/dma/iop-adma.c
> +++ b/drivers/dma/iop-adma.c
> @@ -444,7 +444,8 @@ static void iop_chan_start_null_memcpy(struct iop_adma_chan *iop_chan);
>  static void iop_chan_start_null_xor(struct iop_adma_chan *iop_chan);
> 
>  /* returns the number of allocated descriptors */
> -static int iop_adma_alloc_chan_resources(struct dma_chan *chan)
> +static int iop_adma_alloc_chan_resources(struct dma_chan *chan,
> +					 struct dma_client *client)
>  {
>  	char *hw_desc;
>  	int idx;
> @@ -838,7 +839,7 @@ static int __devinit iop_adma_memcpy_self_test(struct iop_adma_device *device)
>  	dma_chan = container_of(device->common.channels.next,
>  				struct dma_chan,
>  				device_node);
> -	if (iop_adma_alloc_chan_resources(dma_chan) < 1) {
> +	if (iop_adma_alloc_chan_resources(dma_chan, NULL) < 1) {
>  		err = -ENODEV;
>  		goto out;
>  	}
> @@ -936,7 +937,7 @@ iop_adma_xor_zero_sum_self_test(struct iop_adma_device *device)
>  	dma_chan = container_of(device->common.channels.next,
>  				struct dma_chan,
>  				device_node);
> -	if (iop_adma_alloc_chan_resources(dma_chan) < 1) {
> +	if (iop_adma_alloc_chan_resources(dma_chan, NULL) < 1) {
>  		err = -ENODEV;
>  		goto out;
>  	}
> diff --git a/include/linux/dmaengine.h b/include/linux/dmaengine.h
> index d08a5c5..cffb95f 100644
> --- a/include/linux/dmaengine.h
> +++ b/include/linux/dmaengine.h
> @@ -279,7 +279,8 @@ struct dma_device {
>  	int dev_id;
>  	struct device *dev;
> 
> -	int (*device_alloc_chan_resources)(struct dma_chan *chan);
> +	int (*device_alloc_chan_resources)(struct dma_chan *chan,
> +			struct dma_client *client);
>  	void (*device_free_chan_resources)(struct dma_chan *chan);
> 
>  	struct dma_async_tx_descriptor *(*device_prep_dma_memcpy)(
> --
> 1.5.5.4

Acked-by: Maciej Sosnowski <[email protected]>

Regards,
Maciej

2008-07-01 13:53:57

by Sosnowski, Maciej

[permalink] [raw]
Subject: RE: [PATCH v4 2/6] dmaengine: Add dma_chan_is_in_use() function

> ---------- Original message ----------
> From: Haavard Skinnemoen <[email protected]>
> Date: Jun 26, 2008 3:23 PM
> Subject: [PATCH v4 2/6] dmaengine: Add dma_chan_is_in_use() function
> To: Dan Williams <[email protected]>, Pierre Ossman
> <[email protected]>
> Cc: [email protected], [email protected],
> [email protected], [email protected], David Brownell
> <[email protected]>, Haavard Skinnemoen
> <[email protected]>
>
>
> This moves the code checking if a DMA channel is in use from
> show_in_use() into an inline helper function, dma_chan_is_in_use(). DMA
> controllers can use this in order to give clients exclusive access to
> channels (usually necessary when setting up slave DMA.)
>
> I have to admit that I don't really understand the channel refcounting
> logic at all... dma_chan_get() simply increments a per-cpu value. How
> can we be sure that whatever CPU calls dma_chan_is_in_use() sees the
> same value?
>
> Signed-off-by: Haavard Skinnemoen <[email protected]>
> ---
> drivers/dma/dmaengine.c | 12 +-----------
> include/linux/dmaengine.h | 17 +++++++++++++++++
> 2 files changed, 18 insertions(+), 11 deletions(-)
>
> diff --git a/drivers/dma/dmaengine.c b/drivers/dma/dmaengine.c
> index a57c337..ad8d811 100644
> --- a/drivers/dma/dmaengine.c
> +++ b/drivers/dma/dmaengine.c
> @@ -105,17 +105,7 @@ static ssize_t show_bytes_transferred(struct device *dev, struct device_attribut
>  static ssize_t show_in_use(struct device *dev, struct device_attribute *attr, char *buf)
>  {
>  	struct dma_chan *chan = to_dma_chan(dev);
> -	int in_use = 0;
> -
> -	if (unlikely(chan->slow_ref) &&
> -		atomic_read(&chan->refcount.refcount) > 1)
> -		in_use = 1;
> -	else {
> -		if (local_read(&(per_cpu_ptr(chan->local,
> -			get_cpu())->refcount)) > 0)
> -			in_use = 1;
> -		put_cpu();
> -	}
> +	int in_use = dma_chan_is_in_use(chan);
> 
>  	return sprintf(buf, "%d\n", in_use);
>  }
> diff --git a/include/linux/dmaengine.h b/include/linux/dmaengine.h
> index cffb95f..4b602d3 100644
> --- a/include/linux/dmaengine.h
> +++ b/include/linux/dmaengine.h
> @@ -180,6 +180,23 @@ static inline void dma_chan_put(struct dma_chan *chan)
>  	}
>  }
> 
> +static inline bool dma_chan_is_in_use(struct dma_chan *chan)
> +{
> +	bool in_use = false;
> +
> +	if (unlikely(chan->slow_ref) &&
> +		atomic_read(&chan->refcount.refcount) > 1)
> +		in_use = true;
> +	else {
> +		if (local_read(&(per_cpu_ptr(chan->local,
> +			get_cpu())->refcount)) > 0)
> +			in_use = true;
> +		put_cpu();
> +	}
> +
> +	return in_use;
> +}
> +
>  /*
>   * typedef dma_event_callback - function pointer to a DMA event callback
>   * For each channel added to the system this routine is called for each client.
> --
> 1.5.5.4

Acked-by: Maciej Sosnowski <[email protected]>

Regards,
Maciej

2008-07-01 13:59:58

by Sosnowski, Maciej

[permalink] [raw]
Subject: RE: [PATCH v4 3/6] dmaengine: Add slave DMA interface

> ---------- Original message ----------
> From: Haavard Skinnemoen <[email protected]>
> Date: Jun 26, 2008 3:23 PM
> Subject: [PATCH v4 3/6] dmaengine: Add slave DMA interface
> To: Dan Williams <[email protected]>, Pierre Ossman
> <[email protected]>
> Cc: [email protected], [email protected],
> [email protected], [email protected], David Brownell
> <[email protected]>, Haavard Skinnemoen
> <[email protected]>
>
>
> This patch adds the necessary interfaces to the DMA Engine framework
> to use functionality found on most embedded DMA controllers: DMA from
> and to I/O registers with hardware handshaking.
>
> In this context, hardware handshaking means that the peripheral that
> owns the I/O registers in question is able to tell the DMA controller
> when more data is available for reading, or when there is room for
> more data to be written. This usually happens internally on the chip,
> but these signals may also be exported outside the chip for things
> like IDE DMA, etc.
>
> A new struct dma_slave is introduced. This contains information that
> the DMA engine driver needs to set up slave transfers to and from a
> slave device. Most engines supporting DMA slave transfers will want to
> extend this structure with controller-specific parameters. This
> additional information is usually passed from the platform/board code
> through the client driver.
>
> A "slave" pointer is added to the dma_client struct. This must point
> to a valid dma_slave structure iff the DMA_SLAVE capability is
> requested. The DMA engine driver may use this information in its
> device_alloc_chan_resources hook to configure the DMA controller for
> slave transfers from and to the given slave device.
>
> A new struct dma_slave_descriptor is added. This extends the standard
> dma_async_tx_descriptor with a few members that are needed for doing
> slave DMA from/to peripherals.
>
> A new operation for creating such descriptors is added to struct
> dma_device. Another new operation for terminating all pending
> transfers is added as well. The latter is needed because there may be
> errors outside the scope of the DMA Engine framework that may require
> DMA operations to be terminated prematurely.
>
> DMA Engine drivers may extend the dma_device, dma_chan and/or
> dma_slave_descriptor structures to allow controller-specific
> operations. The client driver can detect such extensions by looking at
> the DMA Engine's struct device, or it can request a specific DMA
> Engine device by setting the dma_dev field in struct dma_slave.
>
> Signed-off-by: Haavard Skinnemoen <[email protected]>
>
> dmaslave interface changes since v3:
> * Use dma_data_direction instead of a new enum
> * Submit slave transfers as scatterlists
> * Remove the DMA slave descriptor struct
>
> dmaslave interface changes since v2:
> * Add a dma_dev field to struct dma_slave. If set, the client can
> only be bound to the DMA controller that corresponds to this
> device. This allows controller-specific extensions of the
> dma_slave structure; if the device matches, the controller may
> safely assume its extensions are present.
> * Move reg_width into struct dma_slave as there are currently no
> users that need to be able to set the width on a per-transfer
> basis.
>
> dmaslave interface changes since v1:
> * Drop the set_direction and set_width descriptor hooks. Pass the
> direction and width to the prep function instead.
> * Declare a dma_slave struct with fixed information about a slave,
> i.e. register addresses, handshake interfaces and such.
> * Add pointer to a dma_slave struct to dma_client. Can be NULL if
> the DMA_SLAVE capability isn't requested.
> * Drop the set_slave device hook since the alloc_chan_resources hook
> now has enough information to set up the channel for slave
> transfers.
> ---
> drivers/dma/dmaengine.c | 16 ++++++++++++-
> include/linux/dmaengine.h | 53 ++++++++++++++++++++++++++++++++++++++++++++-
> 2 files changed, 67 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/dma/dmaengine.c b/drivers/dma/dmaengine.c
> index ad8d811..2e0035f 100644
> --- a/drivers/dma/dmaengine.c
> +++ b/drivers/dma/dmaengine.c
> @@ -159,7 +159,12 @@ static void dma_client_chan_alloc(struct dma_client *client)
>  	enum dma_state_client ack;
> 
>  	/* Find a channel */
> -	list_for_each_entry(device, &dma_device_list, global_node)
> +	list_for_each_entry(device, &dma_device_list, global_node) {
> +		/* Does the client require a specific DMA controller? */
> +		if (client->slave && client->slave->dma_dev
> +				&& client->slave->dma_dev != device->dev)
> +			continue;
> +
>  		list_for_each_entry(chan, &device->channels, device_node) {
>  			if (!dma_chan_satisfies_mask(chan, client->cap_mask))
>  				continue;
> @@ -180,6 +185,7 @@ static void dma_client_chan_alloc(struct dma_client *client)
>  			return;
>  		}
>  	}
> +	}
>  }
> 
>  enum dma_status dma_sync_wait(struct dma_chan *chan, dma_cookie_t cookie)
> @@ -276,6 +282,10 @@ static void dma_clients_notify_removed(struct dma_chan *chan)
>   */
>  void dma_async_client_register(struct dma_client *client)
>  {
> +	/* validate client data */
> +	BUG_ON(dma_has_cap(DMA_SLAVE, client->cap_mask) &&
> +		!client->slave);
> +
>  	mutex_lock(&dma_list_mutex);
>  	list_add_tail(&client->global_node, &dma_client_list);
>  	mutex_unlock(&dma_list_mutex);
> @@ -350,6 +360,10 @@ int dma_async_device_register(struct dma_device *device)
>  		!device->device_prep_dma_memset);
>  	BUG_ON(dma_has_cap(DMA_INTERRUPT, device->cap_mask) &&
>  		!device->device_prep_dma_interrupt);
> +	BUG_ON(dma_has_cap(DMA_SLAVE, device->cap_mask) &&
> +		!device->device_prep_slave_sg);
> +	BUG_ON(dma_has_cap(DMA_SLAVE, device->cap_mask) &&
> +		!device->device_terminate_all);
> 
>  	BUG_ON(!device->device_alloc_chan_resources);
>  	BUG_ON(!device->device_free_chan_resources);
> diff --git a/include/linux/dmaengine.h b/include/linux/dmaengine.h
> index 4b602d3..8ce03e8 100644
> --- a/include/linux/dmaengine.h
> +++ b/include/linux/dmaengine.h
> @@ -89,10 +89,23 @@ enum dma_transaction_type {
>  	DMA_MEMSET,
>  	DMA_MEMCPY_CRC32C,
>  	DMA_INTERRUPT,
> +	DMA_SLAVE,
>  };
> 
>  /* last transaction type for creation of the capabilities mask */
> -#define DMA_TX_TYPE_END (DMA_INTERRUPT + 1)
> +#define DMA_TX_TYPE_END (DMA_SLAVE + 1)
> +
> +/**
> + * enum dma_slave_width - DMA slave register access width.
> + * @DMA_SLAVE_WIDTH_8BIT: Do 8-bit slave register accesses
> + * @DMA_SLAVE_WIDTH_16BIT: Do 16-bit slave register accesses
> + * @DMA_SLAVE_WIDTH_32BIT: Do 32-bit slave register accesses
> + */
> +enum dma_slave_width {
> +	DMA_SLAVE_WIDTH_8BIT,
> +	DMA_SLAVE_WIDTH_16BIT,
> +	DMA_SLAVE_WIDTH_32BIT,
> +};
> 
>  /**
>   * enum dma_ctrl_flags - DMA flags to augment operation preparation,
> @@ -115,6 +128,33 @@ enum dma_ctrl_flags {
>  typedef struct { DECLARE_BITMAP(bits, DMA_TX_TYPE_END); } dma_cap_mask_t;
> 
>  /**
> + * struct dma_slave - Information about a DMA slave
> + * @dev: device acting as DMA slave
> + * @dma_dev: required DMA master device. If non-NULL, the client can not be
> + *	bound to other masters than this. The master driver may use
> + *	this to determine whether it's safe to access
> + * @tx_reg: physical address of data register used for
> + *	memory-to-peripheral transfers
> + * @rx_reg: physical address of data register used for
> + *	peripheral-to-memory transfers
> + * @reg_width: peripheral register width
> + *
> + * If dma_dev is non-NULL, the client can not be bound to other DMA
> + * masters than the one corresponding to this device. The DMA master
> + * driver may use this to determine if there is controller-specific
> + * data wrapped around this struct. Drivers of platform code that sets
> + * the dma_dev field must therefore make sure to use an appropriate
> + * controller-specific dma slave structure wrapping this struct.
> + */
> +struct dma_slave {
> +	struct device *dev;
> +	struct device *dma_dev;
> +	dma_addr_t tx_reg;
> +	dma_addr_t rx_reg;
> +	enum dma_slave_width reg_width;
> +};
> +
> +/**
>   * struct dma_chan_percpu - the per-CPU part of struct dma_chan
>   * @refcount: local_t used for open-coded "bigref" counting
>   * @memcpy_count: transaction counter
> @@ -219,11 +259,14 @@ typedef enum dma_state_client (*dma_event_callback) (struct dma_client *client,
>   * @event_callback: func ptr to call when something happens
>   * @cap_mask: only return channels that satisfy the requested capabilities
>   *	a value of zero corresponds to any capability
> + * @slave: data for preparing slave transfer. Must be non-NULL iff the
> + *	DMA_SLAVE capability is requested.
>   * @global_node: list_head for global dma_client_list
>   */
>  struct dma_client {
>  	dma_event_callback event_callback;
>  	dma_cap_mask_t cap_mask;
> +	struct dma_slave *slave;
>  	struct list_head global_node;
>  };
> 
> @@ -280,6 +323,8 @@ struct dma_async_tx_descriptor {
>   * @device_prep_dma_zero_sum: prepares a zero_sum operation
>   * @device_prep_dma_memset: prepares a memset operation
>   * @device_prep_dma_interrupt: prepares an end of chain interrupt operation
> + * @device_prep_slave_sg: prepares a slave dma operation
> + * @device_terminate_all: terminate all pending operations
>   * @device_issue_pending: push pending transactions to hardware
>   */
>  struct dma_device {
> @@ -315,6 +360,12 @@ struct dma_device {
>  	struct dma_async_tx_descriptor *(*device_prep_dma_interrupt)(
>  		struct dma_chan *chan, unsigned long flags);
> 
> +	struct dma_async_tx_descriptor *(*device_prep_slave_sg)(
> +		struct dma_chan *chan, struct scatterlist *sgl,
> +		unsigned int sg_len, enum dma_data_direction direction,
> +		unsigned long flags);
> +	void (*device_terminate_all)(struct dma_chan *chan);
> +
>  	enum dma_status (*device_is_tx_complete)(struct dma_chan *chan,
>  		dma_cookie_t cookie, dma_cookie_t *last,
>  		dma_cookie_t *used);
> --
> 1.5.5.4

Acked-by: Maciej Sosnowski <[email protected]>

Regards,
Maciej

2008-07-02 01:31:38

by Dan Williams

[permalink] [raw]
Subject: Re: [PATCH v4 2/6] dmaengine: Add dma_chan_is_in_use() function

On Thu, Jun 26, 2008 at 6:23 AM, Haavard Skinnemoen
<[email protected]> wrote:
> This moves the code checking if a DMA channel is in use from
> show_in_use() into an inline helper function, dma_chan_is_in_use(). DMA
> controllers can use this in order to give clients exclusive access to
> channels (usually necessary when setting up slave DMA.)
>
> I have to admit that I don't really understand the channel refcounting
> logic at all... dma_chan_get() simply increments a per-cpu value. How
> can we be sure that whatever CPU calls dma_chan_is_in_use() sees the
> same value?

As Chris noted in the comments at the top of dmaengine.c, this is an
implementation of Rusty's 'bigref'. It seeks to avoid the
cache-line-bouncing overhead of maintaining a single global refcount
in hot paths like tcp_v{4,6}_rcv(). When the channel is being
removed, a rare event, we transition to the accurate, yet slow, global
method.
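
For reference, the fast path is just a per-cpu increment (slightly
simplified from what include/linux/dmaengine.h has today):

static inline void dma_chan_get(struct dma_chan *chan)
{
	if (unlikely(chan->slow_ref))
		kref_get(&chan->refcount);	/* slow, accurate, global */
	else {
		/* fast: no shared cache line is touched */
		local_inc(&(per_cpu_ptr(chan->local, get_cpu())->refcount));
		put_cpu();
	}
}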

Your observation is correct, dma_chan_is_in_use() may lie in the case
when the current cpu is not using the channel. For this particular
test I think you can look to see if this channel's resources are
already allocated. If they are then some other client got a hold of
this channel before the current attempt. Hmm... that would also
require that we free the channel's resources in the case where the
client replies with DMA_NAK, probably something we should do anyways.

Thoughts?

--
Dan

2008-07-02 02:01:14

by Dan Williams

[permalink] [raw]
Subject: Re: [PATCH v4 2/6] dmaengine: Add dma_chan_is_in_use() function

On Tue, Jul 1, 2008 at 6:31 PM, Dan Williams <[email protected]> wrote:
> On Thu, Jun 26, 2008 at 6:23 AM, Haavard Skinnemoen
> <[email protected]> wrote:
>> This moves the code checking if a DMA channel is in use from
>> show_in_use() into an inline helper function, dma_chan_is_in_use(). DMA
>> controllers can use this in order to give clients exclusive access to
>> channels (usually necessary when setting up slave DMA.)
>>
>> I have to admit that I don't really understand the channel refcounting
>> logic at all... dma_chan_get() simply increments a per-cpu value. How
>> can we be sure that whatever CPU calls dma_chan_is_in_use() sees the
>> same value?
>
> As Chris noted in the comments at the top of dmaengine.c, this is an
> implementation of Rusty's 'bigref'. It seeks to avoid the
> cache-line-bouncing overhead of maintaining a single global refcount
> in hot paths like tcp_v{4,6}_rcv(). When the channel is being
> removed, a rare event, we transition to the accurate, yet slow, global
> method.
>
> Your observation is correct, dma_chan_is_in_use() may lie in the case
> when the current cpu is not using the channel. For this particular
> test I think you can look to see if this channel's resources are
> already allocated. If they are then some other client got a hold of
> this channel before the current attempt. Hmm... that would also
> require that we free the channel's resources in the case where the
> client replies with DMA_NAK, probably something we should do anyways.
>
> Thoughts?
>

Actually we will probably need something like the following.
->client_count is protected by the dma_list_mutex.

diff --git a/drivers/dma/dmaengine.c b/drivers/dma/dmaengine.c
index 99c22b4..10de69e 100644
--- a/drivers/dma/dmaengine.c
+++ b/drivers/dma/dmaengine.c
@@ -183,9 +183,10 @@ static void dma_client_chan_alloc(struct dma_client *client)
/* we are done once this client rejects
* an available resource
*/
- if (ack == DMA_ACK)
+ if (ack == DMA_ACK) {
dma_chan_get(chan);
- else if (ack == DMA_NAK)
+ chan->client_count++;
+ } else if (ack == DMA_NAK)
return;
}
}
@@ -272,8 +273,10 @@ static void dma_clients_notify_removed(struct dma_chan *chan)
/* client was holding resources for this channel so
* free it
*/
- if (ack == DMA_ACK)
+ if (ack == DMA_ACK) {
dma_chan_put(chan);
+ chan->client_count--;
+ }
}

mutex_unlock(&dma_list_mutex);
@@ -313,8 +316,10 @@ void dma_async_client_unregister(struct dma_client *client)
ack = client->event_callback(client, chan,
DMA_RESOURCE_REMOVED);

- if (ack == DMA_ACK)
+ if (ack == DMA_ACK) {
dma_chan_put(chan);
+ chan->client_count--;
+ }
}

list_del(&client->global_node);
@@ -394,6 +399,7 @@ int dma_async_device_register(struct dma_device *device)
kref_get(&device->refcount);
kref_get(&device->refcount);
kref_init(&chan->refcount);
+ chan->client_count = 0;
chan->slow_ref = 0;
INIT_RCU_HEAD(&chan->rcu);
}
diff --git a/include/linux/dmaengine.h b/include/linux/dmaengine.h
index d08a5c5..6432b83 100644
--- a/include/linux/dmaengine.h
+++ b/include/linux/dmaengine.h
@@ -139,6 +139,7 @@ struct dma_chan_percpu {
* @rcu: the DMA channel's RCU head
* @device_node: used to add this to the device chan list
* @local: per-cpu pointer to a struct dma_chan_percpu
+ * @client_count: how many clients are using this channel
*/
struct dma_chan {
struct dma_device *device;
@@ -154,6 +155,7 @@ struct dma_chan {

struct list_head device_node;
struct dma_chan_percpu *local;
+ int client_count;
};

#define to_dma_chan(p) container_of(p, struct dma_chan, dev)

2008-07-02 08:00:14

by Haavard Skinnemoen

[permalink] [raw]
Subject: Re: [PATCH v4 2/6] dmaengine: Add dma_chan_is_in_use() function

"Dan Williams" <[email protected]> wrote:
> On Thu, Jun 26, 2008 at 6:23 AM, Haavard Skinnemoen
> <[email protected]> wrote:
> > This moves the code checking if a DMA channel is in use from
> > show_in_use() into an inline helper function, dma_chan_is_in_use(). DMA
> > controllers can use this in order to give clients exclusive access to
> > channels (usually necessary when setting up slave DMA.)
> >
> > I have to admit that I don't really understand the channel refcounting
> > logic at all... dma_chan_get() simply increments a per-cpu value. How
> > can we be sure that whatever CPU calls dma_chan_is_in_use() sees the
> > same value?
>
> As Chris noted in the comments at the top of dmaengine.c, this is an
> implementation of Rusty's 'bigref'. It seeks to avoid the
> cache-line-bouncing overhead of maintaining a single global refcount
> in hot paths like tcp_v{4,6}_rcv(). When the channel is being
> removed, a rare event, we transition to the accurate, yet slow, global
> method.

Ok, I was sort of wondering what happens if you call dma_chan_get() on
one cpu and dma_chan_put() on a different cpu later on. But it looks
like when it really matters, the sum across all cpus is used, so the end
result will be correct.

> Your observation is correct, dma_chan_is_in_use() may lie in the case
> when the current cpu is not using the channel. For this particular
> test I think you can look to see if this channel's resources are
> already allocated. If they are then some other client got a hold of
> this channel before the current attempt. Hmm... that would also
> require that we free the channel's resources in the case where the
> client replies with DMA_NAK, probably something we should do anyways.

Yes, I think that's a good thing to do in general. In fact, I think the
dw_dmac driver will waste a channel for each slave because it always
assigns the channel to the client even if the client may NAK or DUP it
later on. I haven't seen this actually happening because I only have
one slave client at the moment.

Another reason to do this is to reclaim the memory used for
descriptors. Currently, a channel that was NAK'ed or DUP'ed will still
have a lot of preallocated descriptors, possibly with client-specific
parameters already set up.

Haavard

2008-07-02 08:00:37

by Haavard Skinnemoen

[permalink] [raw]
Subject: Re: [PATCH v4 2/6] dmaengine: Add dma_chan_is_in_use() function

"Dan Williams" <[email protected]> wrote:
> Actually we will probably need something like the following.
> ->client_count is protected by the dma_list_mutex.
>
> diff --git a/drivers/dma/dmaengine.c b/drivers/dma/dmaengine.c
> index 99c22b4..10de69e 100644
> --- a/drivers/dma/dmaengine.c
> +++ b/drivers/dma/dmaengine.c
> @@ -183,9 +183,10 @@ static void dma_client_chan_alloc(struct dma_client *client)
> /* we are done once this client rejects
> * an available resource
> */
> - if (ack == DMA_ACK)
> + if (ack == DMA_ACK) {
> dma_chan_get(chan);
> - else if (ack == DMA_NAK)
> + chan->client_count++;
> + } else if (ack == DMA_NAK)
> return;
> }

This looks good to me. I can use client_count to determine if dwc->dws
is actually valid so that channels that were initially allocated for a
slave but NAK'ed or DUP'ed can be reclaimed for other purposes.
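
Roughly like this in the channel allocation path (untested sketch;
dwc->dws is dw_dmac's per-channel slave data, as mentioned above):

	/* Nobody holds the channel anymore, so any slave binding left
	 * over from a NAK'ed or DUP'ed client can be dropped. */
	if (chan->client_count == 0)
		dwc->dws = NULL;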

It still doesn't solve the issue with memory wastage, but we probably
shouldn't expect to keep a lot of unused channels around anyway.

Thanks!

Haavard

2008-07-04 00:40:36

by Dan Williams

[permalink] [raw]
Subject: dmaengine skip unmap (was: Re: [PATCH v4 5/6] dmaengine: Driver for the Synopsys DesignWare DMA controller)


On Thu, 2008-06-26 at 06:23 -0700, Haavard Skinnemoen wrote:
> This adds a driver for the Synopsys DesignWare DMA controller (aka
> DMACA on AVR32 systems.) This DMA controller can be found integrated
> on the AT32AP7000 chip and is primarily meant for peripheral DMA
> transfer, but can also be used for memory-to-memory transfers.
>
> This patch is based on a driver from David Brownell which was based on
> an older version of the DMA Engine framework. It also implements the
> proposed extensions to the DMA Engine API for slave DMA operations.
>
> The dmatest client shows no problems, but there may still be room for
> improvement performance-wise. DMA slave transfer performance is
> definitely "good enough"; reading 100 MiB from an SD card running at ~20
> MHz yields ~7.2 MiB/s average transfer rate.
>
> Full documentation for this controller can be found in the Synopsys
> DW AHB DMAC Databook:
>
> http://www.synopsys.com/designware/docs/iip/DW_ahb_dmac/latest/doc/dw_ahb_dmac_db.pdf
>
> The controller has lots of implementation options, so it's usually a
> good idea to check the data sheet of the chip it's integrated on as
> well. The AT32AP7000 data sheet can be found here:
>
> http://www.atmel.com/dyn/products/datasheets.asp?family_id=682
>
> Signed-off-by: Haavard Skinnemoen <[email protected]>
>
[..]

> +static void
> +dwc_descriptor_complete(struct dw_dma_chan *dwc, struct dw_desc *desc)
> +{
> + dma_async_tx_callback callback;
> + void *param;
> + struct dma_async_tx_descriptor *txd = &desc->txd;
> +
> + dev_vdbg(&dwc->chan.dev, "descriptor %u complete\n", txd->cookie);
> +
> + dwc->completed = txd->cookie;
> + callback = txd->callback;
> + param = txd->callback_param;
> +
> + dwc_sync_desc_for_cpu(dwc, desc);
> + list_splice_init(&txd->tx_list, &dwc->free_list);
> + list_move(&desc->desc_node, &dwc->free_list);
> +
> + /*
> + * The API requires that no submissions are done from a
> + * callback, so we don't need to drop the lock here
> + */
> + if (callback)
> + callback(param);
> +}
> +

The one thing that stands out is that this driver does not unmap the
source or destination buffers (and I now notice that fsldma is not doing
this either, hmm...). Yes, it is a no-op on avr32, for now, but the
dma-mapping-api assumes that dma_map is always paired with dma_unmap. I
remember we discussed this earlier and that discussion inspired the
patch below. The end result is that dw_dmac can try to automatically
dma_unmap the buffers unless an intelligent client, like the mmc driver,
has disabled unmap.
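
To make the intended usage concrete, a client that manages its own
mappings would then do something along these lines (a rough sketch;
whether the slave paths honour these flags is exactly what the drivers
would have to implement):

	/* we did the dma_map_sg() ourselves, so keep the driver away */
	unsigned long flags = DMA_CTRL_ACK
			| DMA_COMPL_SKIP_SRC_UNMAP
			| DMA_COMPL_SKIP_DEST_UNMAP;
	struct dma_async_tx_descriptor *desc;

	desc = chan->device->device_prep_slave_sg(chan, sgl, sg_len,
			DMA_TO_DEVICE, flags);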

Thoughts?

----snip--->
async_tx: add DMA_COMPL_SKIP_{SRC,DEST}_UNMAP flags to control dma unmap

From: Dan Williams <[email protected]>

In some cases client code may need the dma-driver to skip the unmap of source
and/or destination buffers. Setting these flags indicates to the driver to
skip the unmap step. In this regard async_xor is currently broken in that it
allows the destination buffer to be unmapped while an operation is still in
progress, i.e. when the number of sources exceeds the hardware channel's
maximum (fixed in a subsequent patch).

Signed-off-by: Dan Williams <[email protected]>
---

drivers/dma/ioat_dma.c | 48 ++++++++++++++++++++++-----------------------
drivers/dma/iop-adma.c | 17 ++++++++++++----
include/linux/dmaengine.h | 4 ++++
3 files changed, 40 insertions(+), 29 deletions(-)


diff --git a/drivers/dma/ioat_dma.c b/drivers/dma/ioat_dma.c
index 318e8a2..1be33ae 100644
--- a/drivers/dma/ioat_dma.c
+++ b/drivers/dma/ioat_dma.c
@@ -756,6 +756,27 @@ static void ioat_dma_cleanup_tasklet(unsigned long data)
chan->reg_base + IOAT_CHANCTRL_OFFSET);
}

+static void
+ioat_dma_unmap(struct ioat_dma_chan *ioat_chan, struct ioat_desc_sw *desc)
+{
+ /*
+ * yes we are unmapping both _page and _single
+ * alloc'd regions with unmap_page. Is this
+ * *really* that bad?
+ */
+ if (!(desc->async_tx.flags & DMA_COMPL_SKIP_DEST_UNMAP))
+ pci_unmap_page(ioat_chan->device->pdev,
+ pci_unmap_addr(desc, dst),
+ pci_unmap_len(desc, len),
+ PCI_DMA_FROMDEVICE);
+
+ if (!(desc->async_tx.flags & DMA_COMPL_SKIP_SRC_UNMAP))
+ pci_unmap_page(ioat_chan->device->pdev,
+ pci_unmap_addr(desc, src),
+ pci_unmap_len(desc, len),
+ PCI_DMA_TODEVICE);
+}
+
/**
* ioat_dma_memcpy_cleanup - cleanup up finished descriptors
* @chan: ioat channel to be cleaned up
@@ -816,21 +837,7 @@ static void ioat_dma_memcpy_cleanup(struct ioat_dma_chan *ioat_chan)
*/
if (desc->async_tx.cookie) {
cookie = desc->async_tx.cookie;
-
- /*
- * yes we are unmapping both _page and _single
- * alloc'd regions with unmap_page. Is this
- * *really* that bad?
- */
- pci_unmap_page(ioat_chan->device->pdev,
- pci_unmap_addr(desc, dst),
- pci_unmap_len(desc, len),
- PCI_DMA_FROMDEVICE);
- pci_unmap_page(ioat_chan->device->pdev,
- pci_unmap_addr(desc, src),
- pci_unmap_len(desc, len),
- PCI_DMA_TODEVICE);
-
+ ioat_dma_unmap(ioat_chan, desc);
if (desc->async_tx.callback) {
desc->async_tx.callback(desc->async_tx.callback_param);
desc->async_tx.callback = NULL;
@@ -889,16 +896,7 @@ static void ioat_dma_memcpy_cleanup(struct ioat_dma_chan *ioat_chan)
if (desc->async_tx.cookie) {
cookie = desc->async_tx.cookie;
desc->async_tx.cookie = 0;
-
- pci_unmap_page(ioat_chan->device->pdev,
- pci_unmap_addr(desc, dst),
- pci_unmap_len(desc, len),
- PCI_DMA_FROMDEVICE);
- pci_unmap_page(ioat_chan->device->pdev,
- pci_unmap_addr(desc, src),
- pci_unmap_len(desc, len),
- PCI_DMA_TODEVICE);
-
+ ioat_dma_unmap(ioat_chan, desc);
if (desc->async_tx.callback) {
desc->async_tx.callback(desc->async_tx.callback_param);
desc->async_tx.callback = NULL;
diff --git a/drivers/dma/iop-adma.c b/drivers/dma/iop-adma.c
index 0ec0f43..0b2106e 100644
--- a/drivers/dma/iop-adma.c
+++ b/drivers/dma/iop-adma.c
@@ -82,11 +82,20 @@ iop_adma_run_tx_complete_actions(struct iop_adma_desc_slot *desc,
struct device *dev =
&iop_chan->device->pdev->dev;
u32 len = unmap->unmap_len;
- u32 src_cnt = unmap->unmap_src_cnt;
- dma_addr_t addr = iop_desc_get_dest_addr(unmap,
- iop_chan);
+ enum dma_ctrl_flags flags = desc->async_tx.flags;
+ u32 src_cnt;
+ dma_addr_t addr;
+
+ if (!(flags & DMA_COMPL_SKIP_DEST_UNMAP)) {
+ addr = iop_desc_get_dest_addr(unmap, iop_chan);
+ dma_unmap_page(dev, addr, len, DMA_FROM_DEVICE);
+ }
+
+ if (flags & DMA_COMPL_SKIP_SRC_UNMAP)
+ src_cnt = 0;
+ else
+ src_cnt = unmap->unmap_src_cnt;

- dma_unmap_page(dev, addr, len, DMA_FROM_DEVICE);
while (src_cnt--) {
addr = iop_desc_get_src_addr(unmap,
iop_chan,
diff --git a/include/linux/dmaengine.h b/include/linux/dmaengine.h
index d08a5c5..78da5c5 100644
--- a/include/linux/dmaengine.h
+++ b/include/linux/dmaengine.h
@@ -102,10 +102,14 @@ enum dma_transaction_type {
* @DMA_CTRL_ACK - the descriptor cannot be reused until the client
 * acknowledges receipt, i.e. has had a chance to establish any
* dependency chains
+ * @DMA_COMPL_SKIP_SRC_UNMAP - set to disable dma-unmapping the source buffer(s)
+ * @DMA_COMPL_SKIP_DEST_UNMAP - set to disable dma-unmapping the destination(s)
*/
enum dma_ctrl_flags {
DMA_PREP_INTERRUPT = (1 << 0),
DMA_CTRL_ACK = (1 << 1),
+ DMA_COMPL_SKIP_SRC_UNMAP = (1 << 2),
+ DMA_COMPL_SKIP_DEST_UNMAP = (1 << 3),
};

/**


2008-07-04 01:06:46

by Dan Williams

[permalink] [raw]
Subject: Re: [PATCH v4 0/6] dmaengine/mmc: DMA slave interface and two new drivers

On Thu, Jun 26, 2008 at 6:23 AM, Haavard Skinnemoen
<[email protected]> wrote:
> First of all, I'm sorry it went so much time between v3 and v4 of this
> patchset. I was hoping to finish this stuff up before all kinds of
> other tasks started demanding my attention, but I didn't, so I had to
> put it on hold for a while. Let's try again...
>
> This patchset extends the DMA engine API to allow drivers to offer DMA
> to and from I/O registers with hardware handshaking, aka slave DMA.
> Such functionality is very common in DMA controllers integrated on SoC
> devices, and it's typically used to do DMA transfers to/from other
> on-SoC peripherals, but it can often do DMA transfers to/from
> externally connected devices as well (e.g. IDE hard drives).
>
> The main differences from v3 of this patchset are:
> * A DMA descriptor can hold a whole scatterlist. This means that
> clients using slave DMA can submit large requests in a single call
> to the driver, and they only need to keep track of a single
> descriptor.
> * The dma_slave_descriptor struct is gone since clients no longer
> need to keep track of multiple descriptors.
> * The drivers perform better and are more stable.
>
> The dw_dmac driver depends on this patch:
>
> http://lkml.org/lkml/2008/6/25/148
>
> and the atmel-mci driver depends on this series:
>
> http://lkml.org/lkml/2008/6/26/158
>
> as well as all preceding patches in this series, of course.
>
> Comments are welcome, as usual! Shortlog and diffstat follow.
>
> Haavard Skinnemoen (6):
> dmaengine: Add dma_client parameter to device_alloc_chan_resources

Applied. I fixed it up for fsldma and mv_xor.

> dmaengine: Add dma_chan_is_in_use() function

I applied the chan->client_count patch that we talked about.

> dmaengine: Add slave DMA interface

There were some comments to the change log and other fixes, so I'll
wait for v5 of this patch.

> dmaengine: Make DMA Engine menu visible for AVR32 users

Applied the "remove arch dependency in drivers/dma/Kconfig" instead.

> dmaengine: Driver for the Synopsys DesignWare DMA controller
> Atmel MCI: Driver for Atmel on-chip MMC controllers

I will wait for v5 on these as well for the chan->client_count fixups
and a response to the dma_unmap situation.

Thanks,
Dan

2008-07-04 14:48:20

by Haavard Skinnemoen

[permalink] [raw]
Subject: Re: dmaengine skip unmap (was: Re: [PATCH v4 5/6] dmaengine: Driver for the Synopsys DesignWare DMA controller)

Dan Williams <[email protected]> wrote:
> The one thing that stands out is that this driver does not unmap the
> source or destination buffers (and I now notice that fsldma is not doing
> this either, hmm...). Yes, it is a no-op on avr32, for now, but the
> dma-mapping-api assumes that dma_map is always paired with dma_unmap. I
> remember we discussed this earlier and that discussion inspired the
> patch below. The end result is that dw_dmac can try to automatically
> dma_unmap the buffers unless an intelligent client, like the mmc driver,
> has disabled unmap.
>
> Thoughts?

Looks reasonable. I'll update the dw_dmac driver and post a new version
in a few moments.

Haavard

2008-07-04 15:13:59

by Haavard Skinnemoen

[permalink] [raw]
Subject: Re: [PATCH v4 0/6] dmaengine/mmc: DMA slave interface and two new drivers

"Dan Williams" <[email protected]> wrote:
> On Thu, Jun 26, 2008 at 6:23 AM, Haavard Skinnemoen
> <[email protected]> wrote:
> > dmaengine: Add dma_client parameter to device_alloc_chan_resources
>
> Applied. I fixed it up for fsldma and mv_xor.

Thanks.

> > dmaengine: Add dma_chan_is_in_use() function
>
> I applied the chan->client_count patch that we talked about.

Ok, I've updated the dw_dmac driver.

> > dmaengine: Add slave DMA interface
>
> There were some comments to the change log and other fixes, so I'll
> wait for v5 of this patch.

Will post v5 right after I finish typing this.

> > dmaengine: Make DMA Engine menu visible for AVR32 users
>
> Applied the "remove arch dependency in drivers/dma/Kconfig" instead.

Ok.

> > dmaengine: Driver for the Synopsys DesignWare DMA controller
> > Atmel MCI: Driver for Atmel on-chip MMC controllers
>
> I will wait for v5 on these as well for the chan->client_count fixups
> and a response to the dma_unmap situation.

I'll send you v5 of the dw_dmac driver. The MMC driver should go in via
Pierre.

Thanks,

Haavard

2008-07-04 15:34:28

by Sosnowski, Maciej

[permalink] [raw]
Subject: RE: [PATCH v4 5/6] dmaengine: Driver for the Synopsys DesignWare DMA controller

> ---------- Original message ----------
> From: Haavard Skinnemoen <[email protected]>
> Date: Jun 26, 2008 3:23 PM
> Subject: [PATCH v4 5/6] dmaengine: Driver for the Synopsys DesignWare
> DMA controller
> To: Dan Williams <[email protected]>, Pierre Ossman
> <[email protected]>
> Cc: [email protected], [email protected],
> [email protected], [email protected], David Brownell
> <[email protected]>, Haavard Skinnemoen
> <[email protected]>
>
>
> This adds a driver for the Synopsys DesignWare DMA controller (aka
> DMACA on AVR32 systems.) This DMA controller can be found integrated
> on the AT32AP7000 chip and is primarily meant for peripheral DMA
> transfer, but can also be used for memory-to-memory transfers.
>
> This patch is based on a driver from David Brownell which was based on
> an older version of the DMA Engine framework. It also implements the
> proposed extensions to the DMA Engine API for slave DMA operations.
>
> The dmatest client shows no problems, but there may still be room for
> improvement performance-wise. DMA slave transfer performance is
> definitely "good enough"; reading 100 MiB from an SD card running at
~20
> MHz yields ~7.2 MiB/s average transfer rate.
>
> Full documentation for this controller can be found in the Synopsys
> DW AHB DMAC Databook:
>
>
> http://www.synopsys.com/designware/docs/iip/DW_ahb_dmac/latest/doc/dw_ahb_dmac_db.pdf
>
> The controller has lots of implementation options, so it's usually a
> good idea to check the data sheet of the chip it's integrated on as
> well. The AT32AP7000 data sheet can be found here:
>
> http://www.atmel.com/dyn/products/datasheets.asp?family_id=682
>
> Signed-off-by: Haavard Skinnemoen <[email protected]>
>
> Changes since v3:
> * Update to latest DMA engine and DMA slave APIs
> * Embed the hw descriptor into the sw descriptor
> * Clean up and update MODULE_DESCRIPTION, copyright date, etc.
>
> Changes since v2:
> * Dequeue all pending transfers in terminate_all()
> * Rename dw_dmac.h -> dw_dmac_regs.h
> * Define and use controller-specific dma_slave data
> * Fix up a few outdated comments
> * Define hardware registers as structs (doesn't generate better
> code, unfortunately, but it looks nicer.)
> * Get number of channels from platform_data instead of hardcoding it
> based on CONFIG_WHATEVER_CPU.
> * Give slave clients exclusive access to the channel

Couple of questions and comments from my side below.
Apart from that the code looks fine to me.

Acked-by: Maciej Sosnowski <[email protected]>

> ---
> arch/avr32/mach-at32ap/at32ap700x.c | 26 +-
> drivers/dma/Kconfig | 9 +
> drivers/dma/Makefile | 1 +
> drivers/dma/dw_dmac.c | 1105 ++++++++++++++++++++++++++++
> drivers/dma/dw_dmac_regs.h | 224 ++++++
> include/asm-avr32/arch-at32ap/at32ap700x.h | 16 +
> include/linux/dw_dmac.h | 62 ++
> 7 files changed, 1430 insertions(+), 13 deletions(-)
> create mode 100644 drivers/dma/dw_dmac.c
> create mode 100644 drivers/dma/dw_dmac_regs.h
> create mode 100644 include/linux/dw_dmac.h
>
> diff --git a/arch/avr32/mach-at32ap/at32ap700x.c b/arch/avr32/mach-at32ap/at32ap700x.c
> index 0f24b4f..2b92047 100644
> --- a/arch/avr32/mach-at32ap/at32ap700x.c
> +++ b/arch/avr32/mach-at32ap/at32ap700x.c
> @@ -599,6 +599,17 @@ static void __init genclk_init_parent(struct clk *clk)
> clk->parent = parent;
> }
>
> +static struct dw_dma_platform_data dw_dmac0_data = {
> + .nr_channels = 3,
> +};
> +
> +static struct resource dw_dmac0_resource[] = {
> + PBMEM(0xff200000),
> + IRQ(2),
> +};
> +DEFINE_DEV_DATA(dw_dmac, 0);
> +DEV_CLK(hclk, dw_dmac0, hsb, 10);
> +
> /* --------------------------------------------------------------------
> * System peripherals
> * -------------------------------------------------------------------- */
> @@ -705,17 +716,6 @@ static struct clk pico_clk = {
> .users = 1,
> };
>
> -static struct resource dmaca0_resource[] = {
> - {
> - .start = 0xff200000,
> - .end = 0xff20ffff,
> - .flags = IORESOURCE_MEM,
> - },
> - IRQ(2),
> -};
> -DEFINE_DEV(dmaca, 0);
> -DEV_CLK(hclk, dmaca0, hsb, 10);
> -
> /* --------------------------------------------------------------------
> * HMATRIX
> * -------------------------------------------------------------------- */
> @@ -828,7 +828,7 @@ void __init at32_add_system_devices(void)
> platform_device_register(&at32_eic0_device);
> platform_device_register(&smc0_device);
> platform_device_register(&pdc_device);
> - platform_device_register(&dmaca0_device);
> + platform_device_register(&dw_dmac0_device);
>
> platform_device_register(&at32_tcb0_device);
> platform_device_register(&at32_tcb1_device);
> @@ -1891,7 +1891,7 @@ struct clk *at32_clock_list[] = {
> &smc0_mck,
> &pdc_hclk,
> &pdc_pclk,
> - &dmaca0_hclk,
> + &dw_dmac0_hclk,
> &pico_clk,
> &pio0_mck,
> &pio1_mck,
> diff --git a/drivers/dma/Kconfig b/drivers/dma/Kconfig
> index 2ac09be..4fac4e3 100644
> --- a/drivers/dma/Kconfig
> +++ b/drivers/dma/Kconfig
> @@ -37,6 +37,15 @@ config INTEL_IOP_ADMA
> help
> Enable support for the Intel(R) IOP Series RAID engines.
>
> +config DW_DMAC
> + tristate "Synopsys DesignWare AHB DMA support"
> + depends on AVR32
> + select DMA_ENGINE
> + default y if CPU_AT32AP7000
> + help
> + Support the Synopsys DesignWare AHB DMA controller. This
> + can be integrated in chips such as the Atmel AT32ap7000.
> +
> config FSL_DMA
> bool "Freescale MPC85xx/MPC83xx DMA support"
> depends on PPC
> diff --git a/drivers/dma/Makefile b/drivers/dma/Makefile
> index 2ff6d7f..beebae4 100644
> --- a/drivers/dma/Makefile
> +++ b/drivers/dma/Makefile
> @@ -1,6 +1,7 @@
> obj-$(CONFIG_DMA_ENGINE) += dmaengine.o
> obj-$(CONFIG_NET_DMA) += iovlock.o
> obj-$(CONFIG_INTEL_IOATDMA) += ioatdma.o
> +obj-$(CONFIG_DW_DMAC) += dw_dmac.o
> ioatdma-objs := ioat.o ioat_dma.o ioat_dca.o
> obj-$(CONFIG_INTEL_IOP_ADMA) += iop-adma.o
> obj-$(CONFIG_FSL_DMA) += fsldma.o
> diff --git a/drivers/dma/dw_dmac.c b/drivers/dma/dw_dmac.c
> new file mode 100644
> index 0000000..e5389e1
> --- /dev/null
> +++ b/drivers/dma/dw_dmac.c
> @@ -0,0 +1,1105 @@
> +/*
> + * Driver for the Synopsys DesignWare DMA Controller (aka DMACA on
> + * AVR32 systems.)
> + *
> + * Copyright (C) 2007-2008 Atmel Corporation
> + *
> + * This program is free software; you can redistribute it and/or modify
> + * it under the terms of the GNU General Public License version 2 as
> + * published by the Free Software Foundation.
> + */
> +#include <linux/clk.h>
> +#include <linux/delay.h>
> +#include <linux/dmaengine.h>
> +#include <linux/dma-mapping.h>
> +#include <linux/init.h>
> +#include <linux/interrupt.h>
> +#include <linux/io.h>
> +#include <linux/mm.h>
> +#include <linux/module.h>
> +#include <linux/platform_device.h>
> +#include <linux/slab.h>
> +
> +#include "dw_dmac_regs.h"
> +
> +/*
> + * This supports the Synopsys "DesignWare AHB Central DMA Controller",
> + * (DW_ahb_dmac) which is used with various AMBA 2.0 systems (not all
> + * of which use ARM any more). See the "Databook" from Synopsys for
> + * information beyond what licensees probably provide.
> + *
> + * The driver has currently been tested only with the Atmel AT32AP7000,
> + * which does not support descriptor writeback.
> + */
> +
> +/* NOTE: DMS+SMS is system-specific. We should get this information
> + * from the platform code somehow.
> + */
> +#define DWC_DEFAULT_CTLLO (DWC_CTLL_DST_MSIZE(0) \
> + | DWC_CTLL_SRC_MSIZE(0) \
> + | DWC_CTLL_DMS(0) \
> + | DWC_CTLL_SMS(1) \
> + | DWC_CTLL_LLP_D_EN \
> + | DWC_CTLL_LLP_S_EN)
> +
> +/*
> + * This is configuration-dependent and usually a funny size like 4095.
> + * Let's round it down to the nearest power of two.
> + *
> + * Note that this is a transfer count, i.e. if we transfer 32-bit
> + * words, we can do 8192 bytes per descriptor.
> + *
> + * This parameter is also system-specific.
> + */
> +#define DWC_MAX_COUNT 2048U
> +
> +/*
> + * Number of descriptors to allocate for each channel. This should be
> + * made configurable somehow; preferably, the clients (at least the
> + * ones using slave transfers) should be able to give us a hint.
> + */
> +#define NR_DESCS_PER_CHANNEL 64
> +
>
> +/*----------------------------------------------------------------------*/
> +
> +/*
> + * Because we're not relying on writeback from the controller (it may not
> + * even be configured into the core!) we don't need to use dma_pool. These
> + * descriptors -- and associated data -- are cacheable. We do need to make
> + * sure their dcache entries are written back before handing them off to
> + * the controller, though.
> + */
> +
> +static struct dw_desc *dwc_first_active(struct dw_dma_chan *dwc)
> +{
> + return list_entry(dwc->active_list.next, struct dw_desc, desc_node);
> +}
> +
> +static struct dw_desc *dwc_first_queued(struct dw_dma_chan *dwc)
> +{
> + return list_entry(dwc->queue.next, struct dw_desc, desc_node);
> +}
> +
> +static struct dw_desc *dwc_desc_get(struct dw_dma_chan *dwc)
> +{
> + struct dw_desc *desc, *_desc;
> + struct dw_desc *ret = NULL;
> + unsigned int i = 0;
> +
> + spin_lock_bh(&dwc->lock);
> + list_for_each_entry_safe(desc, _desc, &dwc->free_list, desc_node) {
> + if (async_tx_test_ack(&desc->txd)) {
> + list_del(&desc->desc_node);
> + ret = desc;
> + break;
> + }
> + dev_dbg(&dwc->chan.dev, "desc %p not ACKed\n", desc);
> + i++;
> + }
> + spin_unlock_bh(&dwc->lock);
> +
> + dev_vdbg(&dwc->chan.dev, "scanned %u descriptors on
freelist\n", i);
> +
> + return ret;
> +}
> +
> +static void dwc_sync_desc_for_cpu(struct dw_dma_chan *dwc, struct dw_desc *desc)
> +{
> + struct dw_desc *child;
> +
> + list_for_each_entry(child, &desc->txd.tx_list, desc_node)
> + dma_sync_single_for_cpu(dwc->chan.dev.parent,
> + child->txd.phys, sizeof(child->lli),
> + DMA_TO_DEVICE);
> + dma_sync_single_for_cpu(dwc->chan.dev.parent,
> + desc->txd.phys, sizeof(desc->lli),
> + DMA_TO_DEVICE);
> +}
> +
> +/*
> + * Move a descriptor, including any children, to the free list.
> + * `desc' must not be on any lists.
> + */
> +static void dwc_desc_put(struct dw_dma_chan *dwc, struct dw_desc *desc)
> +{
> + if (desc) {
> + struct dw_desc *child;
> +
> + dwc_sync_desc_for_cpu(dwc, desc);
> +
> + spin_lock_bh(&dwc->lock);
> + list_for_each_entry(child, &desc->txd.tx_list, desc_node)
> + dev_vdbg(&dwc->chan.dev,
> + "moving child desc %p to freelist\n",
> + child);
> + list_splice_init(&desc->txd.tx_list, &dwc->free_list);
> + dev_vdbg(&dwc->chan.dev, "moving desc %p to freelist\n", desc);
> + list_add(&desc->desc_node, &dwc->free_list);
> + spin_unlock_bh(&dwc->lock);
> + }
> +}
> +
> +/* Called with dwc->lock held and bh disabled */
> +static dma_cookie_t
> +dwc_assign_cookie(struct dw_dma_chan *dwc, struct dw_desc *desc)
> +{
> + dma_cookie_t cookie = dwc->chan.cookie;
> +
> + if (++cookie < 0)
> + cookie = 1;
> +
> + dwc->chan.cookie = cookie;
> + desc->txd.cookie = cookie;
> +
> + return cookie;
> +}
> +
>
> +/*----------------------------------------------------------------------*/
> +
> +/* Called with dwc->lock held and bh disabled */
> +static void dwc_dostart(struct dw_dma_chan *dwc, struct dw_desc *first)
> +{
> + struct dw_dma *dw = to_dw_dma(dwc->chan.device);
> +
> + /* ASSERT: channel is idle */
> + if (dma_readl(dw, CH_EN) & dwc->mask) {
> + dev_err(&dwc->chan.dev,
> + "BUG: Attempted to start non-idle channel\n");
> + dev_err(&dwc->chan.dev,
> + " SAR: 0x%x DAR: 0x%x LLP: 0x%x CTL:
0x%x:%08x\n",
> + channel_readl(dwc, SAR),
> + channel_readl(dwc, DAR),
> + channel_readl(dwc, LLP),
> + channel_readl(dwc, CTL_HI),
> + channel_readl(dwc, CTL_LO));
> +
> + /* The tasklet will hopefully advance the queue... */
> + return;

Should not an error status be returned at this point, so that it can be
handled accordingly by the caller of dwc_dostart()?

> + }
> +
> + channel_writel(dwc, LLP, first->txd.phys);
> + channel_writel(dwc, CTL_LO,
> + DWC_CTLL_LLP_D_EN | DWC_CTLL_LLP_S_EN);
> + channel_writel(dwc, CTL_HI, 0);
> + channel_set_bit(dw, CH_EN, dwc->mask);
> +}
> +
>
> +/*----------------------------------------------------------------------*/
> +
> +static void
> +dwc_descriptor_complete(struct dw_dma_chan *dwc, struct dw_desc *desc)
> +{
> + dma_async_tx_callback callback;
> + void *param;
> + struct dma_async_tx_descriptor *txd = &desc->txd;
> +
> + dev_vdbg(&dwc->chan.dev, "descriptor %u complete\n",
txd->cookie);
> +
> + dwc->completed = txd->cookie;
> + callback = txd->callback;
> + param = txd->callback_param;
> +
> + dwc_sync_desc_for_cpu(dwc, desc);
> + list_splice_init(&txd->tx_list, &dwc->free_list);
> + list_move(&desc->desc_node, &dwc->free_list);
> +
> + /*
> + * The API requires that no submissions are done from a
> + * callback, so we don't need to drop the lock here
> + */
> + if (callback)
> + callback(param);
> +}
> +
> +static void dwc_complete_all(struct dw_dma *dw, struct dw_dma_chan *dwc)
> +{
> + struct dw_desc *desc, *_desc;
> + LIST_HEAD(list);
> +
> + if (dma_readl(dw, CH_EN) & dwc->mask) {
> + dev_err(&dwc->chan.dev,
> + "BUG: XFER bit set, but channel not idle!\n");
> +
> + /* Try to continue after resetting the channel... */
> + channel_clear_bit(dw, CH_EN, dwc->mask);
> + while (dma_readl(dw, CH_EN) & dwc->mask)
> + cpu_relax();
> + }
> +
> + /*
> + * Submit queued descriptors ASAP, i.e. before we go through
> + * the completed ones.
> + */
> + if (!list_empty(&dwc->queue))
> + dwc_dostart(dwc, dwc_first_queued(dwc));
> + list_splice_init(&dwc->active_list, &list);
> + list_splice_init(&dwc->queue, &dwc->active_list);
> +
> + list_for_each_entry_safe(desc, _desc, &list, desc_node)
> + dwc_descriptor_complete(dwc, desc);
> +}
> +
> +static void dwc_scan_descriptors(struct dw_dma *dw, struct dw_dma_chan *dwc)
> +{
> + dma_addr_t llp;
> + struct dw_desc *desc, *_desc;
> + struct dw_desc *child;
> + u32 status_xfer;
> +
> + /*
> + * Clear block interrupt flag before scanning so that we don't
> + * miss any, and read LLP before RAW_XFER to ensure it is
> + * valid if we decide to scan the list.
> + */
> + dma_writel(dw, CLEAR.BLOCK, dwc->mask);
> + llp = channel_readl(dwc, LLP);
> + status_xfer = dma_readl(dw, RAW.XFER);
> +
> + if (status_xfer & dwc->mask) {
> + /* Everything we've submitted is done */
> + dma_writel(dw, CLEAR.XFER, dwc->mask);
> + dwc_complete_all(dw, dwc);
> + return;
> + }
> +
> + dev_vdbg(&dwc->chan.dev, "scan_descriptors: llp=0x%x\n", llp);
> +
> + list_for_each_entry_safe(desc, _desc, &dwc->active_list, desc_node) {
> + if (desc->lli.llp == llp)
> + /* This one is currently in progress */
> + return;
> +
> + list_for_each_entry(child, &desc->txd.tx_list, desc_node)
> + if (child->lli.llp == llp)
> + /* Currently in progress */
> + return;
> +
> + /*
> + * No descriptors so far seem to be in progress, i.e.
> + * this one must be done.
> + */
> + dwc_descriptor_complete(dwc, desc);
> + }
> +
> + dev_err(&dwc->chan.dev,
> + "BUG: All descriptors done, but channel not idle!\n");
> +
> + /* Try to continue after resetting the channel... */
> + channel_clear_bit(dw, CH_EN, dwc->mask);
> + while (dma_readl(dw, CH_EN) & dwc->mask)
> + cpu_relax();
> +
> + if (!list_empty(&dwc->queue)) {
> + dwc_dostart(dwc, dwc_first_queued(dwc));
> + list_splice_init(&dwc->queue, &dwc->active_list);
> + }
> +}
> +
> +static void dwc_dump_lli(struct dw_dma_chan *dwc, struct dw_lli *lli)
> +{
> + dev_printk(KERN_CRIT, &dwc->chan.dev,
> + " desc: s0x%x d0x%x l0x%x c0x%x:%x\n",
> + lli->sar, lli->dar, lli->llp,
> + lli->ctlhi, lli->ctllo);
> +}
> +
> +static void dwc_handle_error(struct dw_dma *dw, struct dw_dma_chan *dwc)
> +{
> + struct dw_desc *bad_desc;
> + struct dw_desc *child;
> +
> + dwc_scan_descriptors(dw, dwc);
> +
> + /*
> + * The descriptor currently at the head of the active list is
> + * borked. Since we don't have any way to report errors, we'll
> + * just have to scream loudly and try to carry on.
> + */
> + bad_desc = dwc_first_active(dwc);
> + list_del_init(&bad_desc->desc_node);
> + list_splice_init(&dwc->queue, dwc->active_list.prev);
> +
> + /* Clear the error flag and try to restart the controller */
> + dma_writel(dw, CLEAR.ERROR, dwc->mask);
> + if (!list_empty(&dwc->active_list))
> + dwc_dostart(dwc, dwc_first_active(dwc));
> +
> + /*
> + * KERN_CRITICAL may seem harsh, but since this only happens
> + * when someone submits a bad physical address in a
> + * descriptor, we should consider ourselves lucky that the
> + * controller flagged an error instead of scribbling over
> + * random memory locations.
> + */
> + dev_printk(KERN_CRIT, &dwc->chan.dev,
> + "Bad descriptor submitted for DMA!\n");
> + dev_printk(KERN_CRIT, &dwc->chan.dev,
> + " cookie: %d\n", bad_desc->txd.cookie);
> + dwc_dump_lli(dwc, &bad_desc->lli);
> + list_for_each_entry(child, &bad_desc->txd.tx_list, desc_node)
> + dwc_dump_lli(dwc, &child->lli);
> +
> + /* Pretend the descriptor completed successfully */
> + dwc_descriptor_complete(dwc, bad_desc);
> +}
> +
> +static void dw_dma_tasklet(unsigned long data)
> +{
> + struct dw_dma *dw = (struct dw_dma *)data;
> + struct dw_dma_chan *dwc;
> + u32 status_block;
> + u32 status_xfer;
> + u32 status_err;
> + int i;
> +
> + status_block = dma_readl(dw, RAW.BLOCK);
> + status_xfer = dma_readl(dw, RAW.XFER);
> + status_err = dma_readl(dw, RAW.ERROR);
> +
> + dev_vdbg(dw->dma.dev, "tasklet: status_block=%x
status_err=%x\n",
> + status_block, status_err);
> +
> + for (i = 0; i < dw->dma.chancnt; i++) {
> + dwc = &dw->chan[i];
> + spin_lock(&dwc->lock);
> + if (status_err & (1 << i))
> + dwc_handle_error(dw, dwc);
> + else if ((status_block | status_xfer) & (1 << i))
> + dwc_scan_descriptors(dw, dwc);
> + spin_unlock(&dwc->lock);
> + }
> +
> + /*
> + * Re-enable interrupts. Block Complete interrupts are only
> + * enabled if the INT_EN bit in the descriptor is set. This
> + * will trigger a scan before the whole list is done.
> + */
> + channel_set_bit(dw, MASK.XFER, dw->all_chan_mask);
> + channel_set_bit(dw, MASK.BLOCK, dw->all_chan_mask);
> + channel_set_bit(dw, MASK.ERROR, dw->all_chan_mask);
> +}
> +
> +static irqreturn_t dw_dma_interrupt(int irq, void *dev_id)
> +{
> + struct dw_dma *dw = dev_id;
> + u32 status;
> +
> + dev_vdbg(dw->dma.dev, "interrupt: status=0x%x\n",
> + dma_readl(dw, STATUS_INT));
> +
> + /*
> + * Just disable the interrupts. We'll turn them back on in the
> + * softirq handler.
> + */
> + channel_clear_bit(dw, MASK.XFER, dw->all_chan_mask);
> + channel_clear_bit(dw, MASK.BLOCK, dw->all_chan_mask);
> + channel_clear_bit(dw, MASK.ERROR, dw->all_chan_mask);
> +
> + status = dma_readl(dw, STATUS_INT);
> + if (status) {
> + dev_err(dw->dma.dev,
> + "BUG: Unexpected interrupts pending: 0x%x\n",
> + status);
> +
> + /* Try to recover */
> + channel_clear_bit(dw, MASK.XFER, (1 << 8) - 1);
> + channel_clear_bit(dw, MASK.BLOCK, (1 << 8) - 1);
> + channel_clear_bit(dw, MASK.SRC_TRAN, (1 << 8) - 1);
> + channel_clear_bit(dw, MASK.DST_TRAN, (1 << 8) - 1);
> + channel_clear_bit(dw, MASK.ERROR, (1 << 8) - 1);
> + }
> +
> + tasklet_schedule(&dw->tasklet);
> +
> + return IRQ_HANDLED;
> +}
> +
>
> +/*----------------------------------------------------------------------*/
> +
> +static dma_cookie_t dwc_tx_submit(struct dma_async_tx_descriptor *tx)
> +{
> + struct dw_desc *desc = txd_to_dw_desc(tx);
> + struct dw_dma_chan *dwc = to_dw_dma_chan(tx->chan);
> + dma_cookie_t cookie;
> +
> + spin_lock_bh(&dwc->lock);
> + cookie = dwc_assign_cookie(dwc, desc);
> +
> + /*
> + * REVISIT: We should attempt to chain as many descriptors as
> + * possible, perhaps even appending to those already submitted
> + * for DMA. But this is hard to do in a race-free manner.
> + */
> + if (list_empty(&dwc->active_list)) {
> + dev_vdbg(&tx->chan->dev, "tx_submit: started %u\n",
> + desc->txd.cookie);
> + dwc_dostart(dwc, desc);
> + list_add_tail(&desc->desc_node, &dwc->active_list);
> + } else {
> + dev_vdbg(&tx->chan->dev, "tx_submit: queued %u\n",
> + desc->txd.cookie);
> +
> + list_add_tail(&desc->desc_node, &dwc->queue);
> + }
> +
> + spin_unlock_bh(&dwc->lock);
> +
> + return cookie;
> +}
> +
> +static struct dma_async_tx_descriptor *
> +dwc_prep_dma_memcpy(struct dma_chan *chan, dma_addr_t dest, dma_addr_t src,
> + size_t len, unsigned long flags)
> +{
> + struct dw_dma_chan *dwc = to_dw_dma_chan(chan);
> + struct dw_desc *desc;
> + struct dw_desc *first;
> + struct dw_desc *prev;
> + size_t xfer_count;
> + size_t offset;
> + unsigned int src_width;
> + unsigned int dst_width;
> + u32 ctllo;
> +
> + dev_vdbg(&chan->dev, "prep_dma_memcpy d0x%x s0x%x l0x%zx
f0x%lx\n",
> + dest, src, len, flags);
> +
> + if (unlikely(!len)) {
> + dev_dbg(&chan->dev, "prep_dma_memcpy: length is
zero!\n");
> + return NULL;
> + }
> +
> + /*
> + * We can be a lot more clever here, but this should take care
> + * of the most common optimization.
> + */
> + if (!((src | dest | len) & 3))
> + src_width = dst_width = 2;
> + else if (!((src | dest | len) & 1))
> + src_width = dst_width = 1;
> + else
> + src_width = dst_width = 0;
> +
> + ctllo = DWC_DEFAULT_CTLLO
> + | DWC_CTLL_DST_WIDTH(dst_width)
> + | DWC_CTLL_SRC_WIDTH(src_width)
> + | DWC_CTLL_DST_INC
> + | DWC_CTLL_SRC_INC
> + | DWC_CTLL_FC_M2M;
> + prev = first = NULL;
> +
> + for (offset = 0; offset < len; offset += xfer_count << src_width) {
> + xfer_count = min_t(size_t, (len - offset) >> src_width,
> + DWC_MAX_COUNT);

Here it looks like the maximum xfer_count value can change - it depends on
src_width, so it may be different for different transactions.
Is that ok?

> +
> + desc = dwc_desc_get(dwc);
> + if (!desc)
> + goto err_desc_get;
> +
> + desc->lli.sar = src + offset;
> + desc->lli.dar = dest + offset;
> + desc->lli.ctllo = ctllo;
> + desc->lli.ctlhi = xfer_count;
> +
> + if (!first) {
> + first = desc;
> + } else {
> + prev->lli.llp = desc->txd.phys;
> + dma_sync_single_for_device(chan->dev.parent,
> + prev->txd.phys, sizeof(prev->lli),
> + DMA_TO_DEVICE);
> + list_add_tail(&desc->desc_node,
> + &first->txd.tx_list);
> + }
> + prev = desc;
> + }
> +
> +
> + if (flags & DMA_PREP_INTERRUPT)
> + /* Trigger interrupt after last block */
> + prev->lli.ctllo |= DWC_CTLL_INT_EN;
> +
> + prev->lli.llp = 0;
> + dma_sync_single_for_device(chan->dev.parent,
> + prev->txd.phys, sizeof(prev->lli),
> + DMA_TO_DEVICE);
> +
> + first->txd.flags = flags;
> +
> + return &first->txd;
> +
> +err_desc_get:
> + dwc_desc_put(dwc, first);
> + return NULL;
> +}
> +
> +static struct dma_async_tx_descriptor *
> +dwc_prep_slave_sg(struct dma_chan *chan, struct scatterlist *sgl,
> + unsigned int sg_len, enum dma_data_direction direction,
> + unsigned long flags)
> +{
> + struct dw_dma_chan *dwc = to_dw_dma_chan(chan);
> + struct dw_dma_slave *dws = dwc->dws;
> + struct dw_desc *prev;
> + struct dw_desc *first;
> + u32 ctllo;
> + dma_addr_t reg;
> + unsigned int reg_width;
> + unsigned int mem_width;
> + unsigned int i;
> + struct scatterlist *sg;
> +
> + dev_vdbg(&chan->dev, "prep_dma_slave\n");
> +
> + if (unlikely(!dws || !sg_len))
> + return NULL;
> +
> + reg_width = dws->slave.reg_width;
> + prev = first = NULL;
> +
> + sg_len = dma_map_sg(chan->dev.parent, sgl, sg_len, direction);
> +
> + switch (direction) {
> + case DMA_TO_DEVICE:
> + ctllo = (DWC_DEFAULT_CTLLO
> + | DWC_CTLL_DST_WIDTH(reg_width)
> + | DWC_CTLL_DST_FIX
> + | DWC_CTLL_SRC_INC
> + | DWC_CTLL_FC_M2P);
> + reg = dws->slave.tx_reg;
> + for_each_sg(sgl, sg, sg_len, i) {
> + struct dw_desc *desc;
> + u32 len;
> + u32 mem;
> +
> + desc = dwc_desc_get(dwc);
> + if (!desc) {
> + dev_err(&chan->dev,
> + "not enough descriptors
available\n");
> + goto err_desc_get;
> + }
> +
> + mem = sg_phys(sg);
> + len = sg_dma_len(sg);
> + mem_width = 2;
> + if (unlikely(mem & 3 || len & 3))
> + mem_width = 0;
> +
> + desc->lli.sar = mem;
> + desc->lli.dar = reg;
> + desc->lli.ctllo = ctllo | DWC_CTLL_SRC_WIDTH(mem_width);
> + desc->lli.ctlhi = len >> mem_width;
> +
> + if (!first) {
> + first = desc;
> + } else {
> + prev->lli.llp = desc->txd.phys;
> + dma_sync_single_for_device(chan->dev.parent,
> + prev->txd.phys,
> + sizeof(prev->lli),
> + DMA_TO_DEVICE);
> + list_add_tail(&desc->desc_node,
> + &first->txd.tx_list);
> + }
> + prev = desc;
> + }
> + break;
> + case DMA_FROM_DEVICE:
> + ctllo = (DWC_DEFAULT_CTLLO
> + | DWC_CTLL_SRC_WIDTH(reg_width)
> + | DWC_CTLL_DST_INC
> + | DWC_CTLL_SRC_FIX
> + | DWC_CTLL_FC_P2M);
> +
> + reg = dws->slave.rx_reg;
> + for_each_sg(sgl, sg, sg_len, i) {
> + struct dw_desc *desc;
> + u32 len;
> + u32 mem;
> +
> + desc = dwc_desc_get(dwc);
> + if (!desc) {
> + dev_err(&chan->dev,
> + "not enough descriptors
available\n");
> + goto err_desc_get;
> + }
> +
> + mem = sg_phys(sg);
> + len = sg_dma_len(sg);
> + mem_width = 2;
> + if (unlikely(mem & 3 || len & 3))
> + mem_width = 0;
> +
> + desc->lli.sar = reg;
> + desc->lli.dar = mem;
> + desc->lli.ctllo = ctllo | DWC_CTLL_DST_WIDTH(mem_width);
> + desc->lli.ctlhi = len >> reg_width;
> +
> + if (!first) {
> + first = desc;
> + } else {
> + prev->lli.llp = desc->txd.phys;
> + dma_sync_single_for_device(chan->dev.parent,
> + prev->txd.phys,
> + sizeof(prev->lli),
> + DMA_TO_DEVICE);
> + list_add_tail(&desc->desc_node,
> + &first->txd.tx_list);
> + }
> + prev = desc;
> + }
> + break;
> + default:
> + return NULL;
> + }
> +
> + if (flags & DMA_PREP_INTERRUPT)
> + /* Trigger interrupt after last block */
> + prev->lli.ctllo |= DWC_CTLL_INT_EN;
> +
> + prev->lli.llp = 0;
> + dma_sync_single_for_device(chan->dev.parent,
> + prev->txd.phys, sizeof(prev->lli),
> + DMA_TO_DEVICE);
> +
> + return &first->txd;
> +
> +err_desc_get:
> + dwc_desc_put(dwc, first);
> + return NULL;
> +}
> +
> +static void dwc_terminate_all(struct dma_chan *chan)
> +{
> + struct dw_dma_chan *dwc = to_dw_dma_chan(chan);
> + struct dw_dma *dw = to_dw_dma(chan->device);
> + struct dw_desc *desc, *_desc;
> + LIST_HEAD(list);
> +
> + /*
> + * This is only called when something went wrong elsewhere, so
> + * we don't really care about the data. Just disable the
> + * channel. We still have to poll the channel enable bit due
> + * to AHB/HSB limitations.
> + */
> + spin_lock_bh(&dwc->lock);
> +
> + channel_clear_bit(dw, CH_EN, dwc->mask);
> +
> + while (dma_readl(dw, CH_EN) & dwc->mask)
> + cpu_relax();
> +
> + /* active_list entries will end up before queued entries */
> + list_splice_init(&dwc->queue, &list);
> + list_splice_init(&dwc->active_list, &list);
> +
> + spin_unlock_bh(&dwc->lock);
> +
> + /* Flush all pending and queued descriptors */
> + list_for_each_entry_safe(desc, _desc, &list, desc_node)
> + dwc_descriptor_complete(dwc, desc);
> +}
> +
> +static enum dma_status
> +dwc_is_tx_complete(struct dma_chan *chan,
> + dma_cookie_t cookie,
> + dma_cookie_t *done, dma_cookie_t *used)
> +{
> + struct dw_dma_chan *dwc = to_dw_dma_chan(chan);
> + dma_cookie_t last_used;
> + dma_cookie_t last_complete;
> + int ret;
> +
> + last_complete = dwc->completed;
> + last_used = chan->cookie;
> +
> + ret = dma_async_is_complete(cookie, last_complete, last_used);
> + if (ret != DMA_SUCCESS) {
> + dwc_scan_descriptors(to_dw_dma(chan->device), dwc);
> +
> + last_complete = dwc->completed;
> + last_used = chan->cookie;
> +
> + ret = dma_async_is_complete(cookie, last_complete, last_used);
> + }
> +
> + if (done)
> + *done = last_complete;
> + if (used)
> + *used = last_used;
> +
> + return ret;
> +}
> +
> +static void dwc_issue_pending(struct dma_chan *chan)
> +{
> + struct dw_dma_chan *dwc = to_dw_dma_chan(chan);
> +
> + spin_lock_bh(&dwc->lock);
> + if (!list_empty(&dwc->queue))
> + dwc_scan_descriptors(to_dw_dma(chan->device), dwc);
> + spin_unlock_bh(&dwc->lock);
> +}
> +
> +static int dwc_alloc_chan_resources(struct dma_chan *chan,
> + struct dma_client *client)
> +{
> + struct dw_dma_chan *dwc = to_dw_dma_chan(chan);
> + struct dw_dma *dw = to_dw_dma(chan->device);
> + struct dw_desc *desc;
> + struct dma_slave *slave;
> + struct dw_dma_slave *dws;
> + int i;
> + u32 cfghi;
> + u32 cfglo;
> +
> + dev_vdbg(&chan->dev, "alloc_chan_resources\n");
> +
> + /* Channels doing slave DMA can only handle one client. */
> + if (dwc->dws || client->slave) {
> + if (dma_chan_is_in_use(chan))
> + return -EBUSY;
> + }
> +
> + /* ASSERT: channel is idle */
> + if (dma_readl(dw, CH_EN) & dwc->mask) {
> + dev_dbg(&chan->dev, "DMA channel not idle?\n");
> + return -EIO;
> + }
> +
> + dwc->completed = chan->cookie = 1;
> +
> + cfghi = DWC_CFGH_FIFO_MODE;
> + cfglo = 0;
> +
> + slave = client->slave;
> + if (slave) {
> + /*
> + * We need controller-specific data to set up slave
> + * transfers.
> + */
> + BUG_ON(!slave->dma_dev || slave->dma_dev != dw->dma.dev);
> +
> + dws = container_of(slave, struct dw_dma_slave, slave);
> +
> + dwc->dws = dws;
> + cfghi = dws->cfg_hi;
> + cfglo = dws->cfg_lo;
> + } else {
> + dwc->dws = NULL;
> + }
> +
> + channel_writel(dwc, CFG_LO, cfglo);
> + channel_writel(dwc, CFG_HI, cfghi);
> +
> + /*
> + * NOTE: some controllers may have additional features that we
> + * need to initialize here, like "scatter-gather" (which
> + * doesn't mean what you think it means), and status writeback.
> + */
> +
> + spin_lock_bh(&dwc->lock);
> + i = dwc->descs_allocated;
> + while (dwc->descs_allocated < NR_DESCS_PER_CHANNEL) {
> + spin_unlock_bh(&dwc->lock);
> +
> + desc = kzalloc(sizeof(struct dw_desc), GFP_KERNEL);
> + if (!desc) {
> + dev_info(&chan->dev,
> + "only allocated %d descriptors\n", i);
> + spin_lock_bh(&dwc->lock);
> + break;
> + }
> +
> + dma_async_tx_descriptor_init(&desc->txd, chan);
> + desc->txd.tx_submit = dwc_tx_submit;
> + desc->txd.flags = DMA_CTRL_ACK;
> + INIT_LIST_HEAD(&desc->txd.tx_list);
> + desc->txd.phys = dma_map_single(chan->dev.parent, &desc->lli,
> + sizeof(desc->lli), DMA_TO_DEVICE);
> + dwc_desc_put(dwc, desc);
> +
> + spin_lock_bh(&dwc->lock);
> + i = ++dwc->descs_allocated;
> + }
> +
> + /* Enable interrupts */
> + channel_set_bit(dw, MASK.XFER, dwc->mask);
> + channel_set_bit(dw, MASK.BLOCK, dwc->mask);
> + channel_set_bit(dw, MASK.ERROR, dwc->mask);
> +
> + spin_unlock_bh(&dwc->lock);
> +
> + dev_dbg(&chan->dev,
> + "alloc_chan_resources allocated %d descriptors\n", i);
> +
> + return i;
> +}
> +
> +static void dwc_free_chan_resources(struct dma_chan *chan)
> +{
> + struct dw_dma_chan *dwc = to_dw_dma_chan(chan);
> + struct dw_dma *dw = to_dw_dma(chan->device);
> + struct dw_desc *desc, *_desc;
> + LIST_HEAD(list);
> +
> + dev_dbg(&chan->dev, "free_chan_resources (descs
allocated=%u)\n",
> + dwc->descs_allocated);
> +
> + /* ASSERT: channel is idle */
> + BUG_ON(!list_empty(&dwc->active_list));
> + BUG_ON(!list_empty(&dwc->queue));
> + BUG_ON(dma_readl(to_dw_dma(chan->device), CH_EN) & dwc->mask);
> +
> + spin_lock_bh(&dwc->lock);
> + list_splice_init(&dwc->free_list, &list);
> + dwc->descs_allocated = 0;
> + dwc->dws = NULL;
> +
> + /* Disable interrupts */
> + channel_clear_bit(dw, MASK.XFER, dwc->mask);
> + channel_clear_bit(dw, MASK.BLOCK, dwc->mask);
> + channel_clear_bit(dw, MASK.ERROR, dwc->mask);
> +
> + spin_unlock_bh(&dwc->lock);
> +
> + list_for_each_entry_safe(desc, _desc, &list, desc_node) {
> + dev_vdbg(&chan->dev, " freeing descriptor %p\n",
desc);
> + dma_unmap_single(chan->dev.parent, desc->txd.phys,
> + sizeof(desc->lli), DMA_TO_DEVICE);
> + kfree(desc);
> + }
> +
> + dev_vdbg(&chan->dev, "free_chan_resources done\n");
> +}
> +
>
> +/*----------------------------------------------------------------------*/
> +
> +static void dw_dma_off(struct dw_dma *dw)
> +{
> + dma_writel(dw, CFG, 0);
> +
> + channel_clear_bit(dw, MASK.XFER, dw->all_chan_mask);
> + channel_clear_bit(dw, MASK.BLOCK, dw->all_chan_mask);
> + channel_clear_bit(dw, MASK.SRC_TRAN, dw->all_chan_mask);
> + channel_clear_bit(dw, MASK.DST_TRAN, dw->all_chan_mask);
> + channel_clear_bit(dw, MASK.ERROR, dw->all_chan_mask);
> +
> + while (dma_readl(dw, CFG) & DW_CFG_DMA_EN)
> + cpu_relax();
> +}
> +
> +static int __init dw_probe(struct platform_device *pdev)
> +{
> + struct dw_dma_platform_data *pdata;
> + struct resource *io;
> + struct dw_dma *dw;
> + size_t size;
> + int irq;
> + int err;
> + int i;
> +
> + pdata = pdev->dev.platform_data;
> + if (!pdata || pdata->nr_channels > DW_DMA_MAX_NR_CHANNELS)
> + return -EINVAL;
> +
> + io = platform_get_resource(pdev, IORESOURCE_MEM, 0);
> + if (!io)
> + return -EINVAL;
> +
> + irq = platform_get_irq(pdev, 0);
> + if (irq < 0)
> + return irq;
> +
> + size = sizeof(struct dw_dma);
> + size += pdata->nr_channels * sizeof(struct dw_dma_chan);
> + dw = kzalloc(size, GFP_KERNEL);
> + if (!dw)
> + return -ENOMEM;
> +
> + if (!request_mem_region(io->start, DW_REGLEN, pdev->dev.driver->name)) {
> + err = -EBUSY;
> + goto err_kfree;
> + }
> +
> + memset(dw, 0, sizeof *dw);
> +
> + dw->regs = ioremap(io->start, DW_REGLEN);
> + if (!dw->regs) {
> + err = -ENOMEM;
> + goto err_release_r;
> + }
> +
> + dw->clk = clk_get(&pdev->dev, "hclk");
> + if (IS_ERR(dw->clk)) {
> + err = PTR_ERR(dw->clk);
> + goto err_clk;
> + }
> + clk_enable(dw->clk);
> +
> + /* force dma off, just in case */
> + dw_dma_off(dw);
> +
> + err = request_irq(irq, dw_dma_interrupt, 0, "dw_dmac", dw);
> + if (err)
> + goto err_irq;
> +
> + platform_set_drvdata(pdev, dw);
> +
> + tasklet_init(&dw->tasklet, dw_dma_tasklet, (unsigned long)dw);
> +
> + dw->all_chan_mask = (1 << pdata->nr_channels) - 1;
> +
> + INIT_LIST_HEAD(&dw->dma.channels);
> + for (i = 0; i < pdata->nr_channels; i++, dw->dma.chancnt++) {
> + struct dw_dma_chan *dwc = &dw->chan[i];
> +
> + dwc->chan.device = &dw->dma;
> + dwc->chan.cookie = dwc->completed = 1;
> + dwc->chan.chan_id = i;
> + list_add_tail(&dwc->chan.device_node, &dw->dma.channels);
> +
> + dwc->ch_regs = &__dw_regs(dw)->CHAN[i];
> + spin_lock_init(&dwc->lock);
> + dwc->mask = 1 << i;
> +
> + INIT_LIST_HEAD(&dwc->active_list);
> + INIT_LIST_HEAD(&dwc->queue);
> + INIT_LIST_HEAD(&dwc->free_list);
> +
> + channel_clear_bit(dw, CH_EN, dwc->mask);
> + }
> +
> + /* Clear/disable all interrupts on all channels. */
> + dma_writel(dw, CLEAR.XFER, dw->all_chan_mask);
> + dma_writel(dw, CLEAR.BLOCK, dw->all_chan_mask);
> + dma_writel(dw, CLEAR.SRC_TRAN, dw->all_chan_mask);
> + dma_writel(dw, CLEAR.DST_TRAN, dw->all_chan_mask);
> + dma_writel(dw, CLEAR.ERROR, dw->all_chan_mask);
> +
> + channel_clear_bit(dw, MASK.XFER, dw->all_chan_mask);
> + channel_clear_bit(dw, MASK.BLOCK, dw->all_chan_mask);
> + channel_clear_bit(dw, MASK.SRC_TRAN, dw->all_chan_mask);
> + channel_clear_bit(dw, MASK.DST_TRAN, dw->all_chan_mask);
> + channel_clear_bit(dw, MASK.ERROR, dw->all_chan_mask);
> +
> + dma_cap_set(DMA_MEMCPY, dw->dma.cap_mask);
> + dma_cap_set(DMA_SLAVE, dw->dma.cap_mask);
> + dw->dma.dev = &pdev->dev;
> + dw->dma.device_alloc_chan_resources = dwc_alloc_chan_resources;
> + dw->dma.device_free_chan_resources = dwc_free_chan_resources;
> +
> + dw->dma.device_prep_dma_memcpy = dwc_prep_dma_memcpy;
> +
> + dw->dma.device_prep_slave_sg = dwc_prep_slave_sg;
> + dw->dma.device_terminate_all = dwc_terminate_all;
> +
> + dw->dma.device_is_tx_complete = dwc_is_tx_complete;
> + dw->dma.device_issue_pending = dwc_issue_pending;
> +
> + dma_writel(dw, CFG, DW_CFG_DMA_EN);
> +
> + printk(KERN_INFO "%s: DesignWare DMA Controller, %d channels\n",
> + pdev->dev.bus_id, dw->dma.chancnt);
> +
> + dma_async_device_register(&dw->dma);
> +
> + return 0;
> +
> +err_irq:
> + clk_disable(dw->clk);
> + clk_put(dw->clk);
> +err_clk:
> + iounmap(dw->regs);
> + dw->regs = NULL;
> +err_release_r:
> + release_resource(io);
> +err_kfree:
> + kfree(dw);
> + return err;
> +}

This driver does not perform any self-test during initialization.
What about adding some initial HW checking?
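
Just to illustrate what I mean (using the ID register already defined in
dw_dmac_regs.h; whether its reset value can be checked this way on every
configuration is an assumption on my part):

	/* hypothetical sanity check in dw_probe(), after clk_enable() */
	if (dma_readl(dw, ID) == 0 || dma_readl(dw, ID) == ~0U) {
		err = -ENODEV;
		goto err_irq;
	}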

> +
> +static int __exit dw_remove(struct platform_device *pdev)
> +{
> + struct dw_dma *dw = platform_get_drvdata(pdev);
> + struct dw_dma_chan *dwc, *_dwc;
> + struct resource *io;
> +
> + dw_dma_off(dw);
> + dma_async_device_unregister(&dw->dma);
> +
> + free_irq(platform_get_irq(pdev, 0), dw);
> + tasklet_kill(&dw->tasklet);
> +
> + list_for_each_entry_safe(dwc, _dwc, &dw->dma.channels,
> + chan.device_node) {
> + list_del(&dwc->chan.device_node);
> + channel_clear_bit(dw, CH_EN, dwc->mask);
> + }
> +
> + clk_disable(dw->clk);
> + clk_put(dw->clk);
> +
> + iounmap(dw->regs);
> + dw->regs = NULL;
> +
> + io = platform_get_resource(pdev, IORESOURCE_MEM, 0);
> + release_mem_region(io->start, DW_REGLEN);
> +
> + kfree(dw);
> +
> + return 0;
> +}
> +
> +static void dw_shutdown(struct platform_device *pdev)
> +{
> + struct dw_dma *dw = platform_get_drvdata(pdev);
> +
> + dw_dma_off(platform_get_drvdata(pdev));
> + clk_disable(dw->clk);
> +}
> +
> +static int dw_suspend_late(struct platform_device *pdev, pm_message_t mesg)
> +{
> + struct dw_dma *dw = platform_get_drvdata(pdev);
> +
> + dw_dma_off(platform_get_drvdata(pdev));
> + clk_disable(dw->clk);
> + return 0;
> +}
> +
> +static int dw_resume_early(struct platform_device *pdev)
> +{
> + struct dw_dma *dw = platform_get_drvdata(pdev);
> +
> + clk_enable(dw->clk);
> + dma_writel(dw, CFG, DW_CFG_DMA_EN);
> + return 0;
> +
> +}
> +
> +static struct platform_driver dw_driver = {
> + .remove = __exit_p(dw_remove),
> + .shutdown = dw_shutdown,
> + .suspend_late = dw_suspend_late,
> + .resume_early = dw_resume_early,
> + .driver = {
> + .name = "dw_dmac",
> + },
> +};
> +
> +static int __init dw_init(void)
> +{
> + return platform_driver_probe(&dw_driver, dw_probe);
> +}
> +module_init(dw_init);
> +
> +static void __exit dw_exit(void)
> +{
> + platform_driver_unregister(&dw_driver);
> +}
> +module_exit(dw_exit);
> +
> +MODULE_LICENSE("GPL v2");
> +MODULE_DESCRIPTION("Synopsys DesignWare DMA Controller driver");
> +MODULE_AUTHOR("Haavard Skinnemoen <[email protected]>");
> diff --git a/drivers/dma/dw_dmac_regs.h b/drivers/dma/dw_dmac_regs.h
> new file mode 100644
> index 0000000..119e65b
> --- /dev/null
> +++ b/drivers/dma/dw_dmac_regs.h
> @@ -0,0 +1,224 @@
> +/*
> + * Driver for the Synopsys DesignWare AHB DMA Controller
> + *
> + * Copyright (C) 2005-2007 Atmel Corporation
> + *
> + * This program is free software; you can redistribute it and/or modify
> + * it under the terms of the GNU General Public License version 2 as
> + * published by the Free Software Foundation.
> + */
> +
> +#include <linux/dw_dmac.h>
> +
> +#define DW_DMA_MAX_NR_CHANNELS 8
> +
> +/*
> + * Redefine this macro to handle differences between 32- and 64-bit
> + * addressing, big vs. little endian, etc.
> + */
> +#define DW_REG(name) u32 name; u32 __pad_##name
> +
> +/* Hardware register definitions. */
> +struct dw_dma_chan_regs {
> + DW_REG(SAR); /* Source Address Register */
> + DW_REG(DAR); /* Destination Address Register */
> + DW_REG(LLP); /* Linked List Pointer */
> + u32 CTL_LO; /* Control Register Low */
> + u32 CTL_HI; /* Control Register High */
> + DW_REG(SSTAT);
> + DW_REG(DSTAT);
> + DW_REG(SSTATAR);
> + DW_REG(DSTATAR);
> + u32 CFG_LO; /* Configuration Register Low */
> + u32 CFG_HI; /* Configuration Register High */
> + DW_REG(SGR);
> + DW_REG(DSR);
> +};
> +
> +struct dw_dma_irq_regs {
> + DW_REG(XFER);
> + DW_REG(BLOCK);
> + DW_REG(SRC_TRAN);
> + DW_REG(DST_TRAN);
> + DW_REG(ERROR);
> +};
> +
> +struct dw_dma_regs {
> + /* per-channel registers */
> + struct dw_dma_chan_regs CHAN[DW_DMA_MAX_NR_CHANNELS];
> +
> + /* irq handling */
> + struct dw_dma_irq_regs RAW; /* r */
> + struct dw_dma_irq_regs STATUS; /* r (raw & mask) */
> + struct dw_dma_irq_regs MASK; /* rw (set = irq enabled) */
> + struct dw_dma_irq_regs CLEAR; /* w (ack, affects "raw") */
> +
> + DW_REG(STATUS_INT); /* r */
> +
> + /* software handshaking */
> + DW_REG(REQ_SRC);
> + DW_REG(REQ_DST);
> + DW_REG(SGL_REQ_SRC);
> + DW_REG(SGL_REQ_DST);
> + DW_REG(LAST_SRC);
> + DW_REG(LAST_DST);
> +
> + /* miscellaneous */
> + DW_REG(CFG);
> + DW_REG(CH_EN);
> + DW_REG(ID);
> + DW_REG(TEST);
> +
> + /* optional encoded params, 0x3c8..0x3f7 */
> +};
> +
> +/* Bitfields in CTL_LO */
> +#define DWC_CTLL_INT_EN (1 << 0) /* irqs enabled? */
> +#define DWC_CTLL_DST_WIDTH(n) ((n)<<1) /* bytes per element */
> +#define DWC_CTLL_SRC_WIDTH(n) ((n)<<4)
> +#define DWC_CTLL_DST_INC (0<<7) /* DAR update/not */
> +#define DWC_CTLL_DST_DEC (1<<7)
> +#define DWC_CTLL_DST_FIX (2<<7)
> +#define DWC_CTLL_SRC_INC (0<<7) /* SAR update/not */
> +#define DWC_CTLL_SRC_DEC (1<<9)
> +#define DWC_CTLL_SRC_FIX (2<<9)
> +#define DWC_CTLL_DST_MSIZE(n) ((n)<<11) /* burst, #elements */
> +#define DWC_CTLL_SRC_MSIZE(n) ((n)<<14)
> +#define DWC_CTLL_S_GATH_EN (1 << 17) /* src gather, !FIX */
> +#define DWC_CTLL_D_SCAT_EN (1 << 18) /* dst scatter, !FIX */
> +#define DWC_CTLL_FC_M2M (0 << 20) /* mem-to-mem */
> +#define DWC_CTLL_FC_M2P (1 << 20) /* mem-to-periph */
> +#define DWC_CTLL_FC_P2M (2 << 20) /* periph-to-mem */
> +#define DWC_CTLL_FC_P2P (3 << 20) /* periph-to-periph */
> +/* plus 4 transfer types for peripheral-as-flow-controller */
> +#define DWC_CTLL_DMS(n) ((n)<<23) /* dst master select */
> +#define DWC_CTLL_SMS(n) ((n)<<25) /* src master select */
> +#define DWC_CTLL_LLP_D_EN (1 << 27) /* dest block chain */
> +#define DWC_CTLL_LLP_S_EN (1 << 28) /* src block chain */
> +
> +/* Bitfields in CTL_HI */
> +#define DWC_CTLH_DONE 0x00001000
> +#define DWC_CTLH_BLOCK_TS_MASK 0x00000fff
> +
> +/* Bitfields in CFG_LO. Platform-configurable bits are in <linux/dw_dmac.h> */
> +#define DWC_CFGL_CH_SUSP (1 << 8) /* pause xfer */
> +#define DWC_CFGL_FIFO_EMPTY (1 << 9) /* pause xfer */
> +#define DWC_CFGL_HS_DST (1 << 10) /* handshake w/dst */
> +#define DWC_CFGL_HS_SRC (1 << 11) /* handshake w/src */
> +#define DWC_CFGL_MAX_BURST(x) ((x) << 20)
> +#define DWC_CFGL_RELOAD_SAR (1 << 30)
> +#define DWC_CFGL_RELOAD_DAR (1 << 31)
> +
> +/* Bitfields in CFG_HI. Platform-configurable bits are in <linux/dw_dmac.h> */
> +#define DWC_CFGH_DS_UPD_EN (1 << 5)
> +#define DWC_CFGH_SS_UPD_EN (1 << 6)
> +
> +/* Bitfields in SGR */
> +#define DWC_SGR_SGI(x) ((x) << 0)
> +#define DWC_SGR_SGC(x) ((x) << 20)
> +
> +/* Bitfields in DSR */
> +#define DWC_DSR_DSI(x) ((x) << 0)
> +#define DWC_DSR_DSC(x) ((x) << 20)
> +
> +/* Bitfields in CFG */
> +#define DW_CFG_DMA_EN (1 << 0)
> +
> +#define DW_REGLEN 0x400
> +
> +struct dw_dma_chan {
> + struct dma_chan chan;
> + void __iomem *ch_regs;
> + u8 mask;
> +
> + spinlock_t lock;
> +
> + /* these other elements are all protected by lock */
> + dma_cookie_t completed;
> + struct list_head active_list;
> + struct list_head queue;
> + struct list_head free_list;
> +
> + struct dw_dma_slave *dws;
> +
> + unsigned int descs_allocated;
> +};
> +
> +static inline struct dw_dma_chan_regs __iomem *
> +__dwc_regs(struct dw_dma_chan *dwc)
> +{
> + return dwc->ch_regs;
> +}
> +
> +#define channel_readl(dwc, name) \
> + __raw_readl(&(__dwc_regs(dwc)->name))
> +#define channel_writel(dwc, name, val) \
> + __raw_writel((val), &(__dwc_regs(dwc)->name))
> +
> +static inline struct dw_dma_chan *to_dw_dma_chan(struct dma_chan *chan)
> +{
> + return container_of(chan, struct dw_dma_chan, chan);
> +}
> +
> +
> +struct dw_dma {
> + struct dma_device dma;
> + void __iomem *regs;
> + struct tasklet_struct tasklet;
> + struct clk *clk;
> +
> + u8 all_chan_mask;
> +
> + struct dw_dma_chan chan[0];
> +};
> +
> +static inline struct dw_dma_regs __iomem *__dw_regs(struct dw_dma *dw)
> +{
> + return dw->regs;
> +}
> +
> +#define dma_readl(dw, name) \
> + __raw_readl(&(__dw_regs(dw)->name))
> +#define dma_writel(dw, name, val) \
> + __raw_writel((val), &(__dw_regs(dw)->name))
> +
> +#define channel_set_bit(dw, reg, mask) \
> + dma_writel(dw, reg, ((mask) << 8) | (mask))
> +#define channel_clear_bit(dw, reg, mask) \
> + dma_writel(dw, reg, ((mask) << 8) | 0)
> +
> +static inline struct dw_dma *to_dw_dma(struct dma_device *ddev)
> +{
> + return container_of(ddev, struct dw_dma, dma);
> +}
> +
> +/* LLI == Linked List Item; a.k.a. DMA block descriptor */
> +struct dw_lli {
> + /* values that are not changed by hardware */
> + dma_addr_t sar;
> + dma_addr_t dar;
> + dma_addr_t llp; /* chain to next lli */
> + u32 ctllo;
> + /* values that may get written back: */
> + u32 ctlhi;
> + /* sstat and dstat can snapshot peripheral register state.
> + * silicon config may discard either or both...
> + */
> + u32 sstat;
> + u32 dstat;
> +};
> +
> +struct dw_desc {
> + /* FIRST values the hardware uses */
> + struct dw_lli lli;
> +
> + /* THEN values for driver housekeeping */
> + struct list_head desc_node;
> + struct dma_async_tx_descriptor txd;
> +};
> +
> +static inline struct dw_desc *
> +txd_to_dw_desc(struct dma_async_tx_descriptor *txd)
> +{
> + return container_of(txd, struct dw_desc, txd);
> +}
> diff --git a/include/asm-avr32/arch-at32ap/at32ap700x.h b/include/asm-avr32/arch-at32ap/at32ap700x.h
> index 31e48b0..d18a305 100644
> --- a/include/asm-avr32/arch-at32ap/at32ap700x.h
> +++ b/include/asm-avr32/arch-at32ap/at32ap700x.h
> @@ -30,4 +30,20 @@
> #define GPIO_PIN_PD(N) (GPIO_PIOD_BASE + (N))
> #define GPIO_PIN_PE(N) (GPIO_PIOE_BASE + (N))
>
> +
> +/*
> + * DMAC peripheral hardware handshaking interfaces, used with dw_dmac
> + */
> +#define DMAC_MCI_RX 0
> +#define DMAC_MCI_TX 1
> +#define DMAC_DAC_TX 2
> +#define DMAC_AC97_A_RX 3
> +#define DMAC_AC97_A_TX 4
> +#define DMAC_AC97_B_RX 5
> +#define DMAC_AC97_B_TX 6
> +#define DMAC_DMAREQ_0 7
> +#define DMAC_DMAREQ_1 8
> +#define DMAC_DMAREQ_2 9
> +#define DMAC_DMAREQ_3 10
> +
> #endif /* __ASM_ARCH_AT32AP700X_H__ */
> diff --git a/include/linux/dw_dmac.h b/include/linux/dw_dmac.h
> new file mode 100644
> index 0000000..04d217b
> --- /dev/null
> +++ b/include/linux/dw_dmac.h
> @@ -0,0 +1,62 @@
> +/*
> + * Driver for the Synopsys DesignWare DMA Controller (aka DMACA on
> + * AVR32 systems.)
> + *
> + * Copyright (C) 2007 Atmel Corporation
> + *
> + * This program is free software; you can redistribute it and/or modify
> + * it under the terms of the GNU General Public License version 2 as
> + * published by the Free Software Foundation.
> + */
> +#ifndef DW_DMAC_H
> +#define DW_DMAC_H
> +
> +#include <linux/dmaengine.h>
> +
> +/**
> + * struct dw_dma_platform_data - Controller configuration parameters
> + * @nr_channels: Number of channels supported by hardware (max 8)
> + */
> +struct dw_dma_platform_data {
> + unsigned int nr_channels;
> +};
> +
> +/**
> + * struct dw_dma_slave - Controller-specific information about a slave
> + * @slave: Generic information about the slave
> + * @cfg_hi: Platform-specific initializer for the CFG_HI register
> + * @cfg_lo: Platform-specific initializer for the CFG_LO register
> + */
> +struct dw_dma_slave {
> + struct dma_slave slave;
> + u32 cfg_hi;
> + u32 cfg_lo;
> +};
> +
> +/* Platform-configurable bits in CFG_HI */
> +#define DWC_CFGH_FCMODE (1 << 0)
> +#define DWC_CFGH_FIFO_MODE (1 << 1)
> +#define DWC_CFGH_PROTCTL(x) ((x) << 2)
> +#define DWC_CFGH_SRC_PER(x) ((x) << 7)
> +#define DWC_CFGH_DST_PER(x) ((x) << 11)
> +
> +/* Platform-configurable bits in CFG_LO */
> +#define DWC_CFGL_PRIO(x) ((x) << 5) /* priority */
> +#define DWC_CFGL_LOCK_CH_XFER (0 << 12) /* scope of LOCK_CH */
> +#define DWC_CFGL_LOCK_CH_BLOCK (1 << 12)
> +#define DWC_CFGL_LOCK_CH_XACT (2 << 12)
> +#define DWC_CFGL_LOCK_BUS_XFER (0 << 14) /* scope of LOCK_BUS */
> +#define DWC_CFGL_LOCK_BUS_BLOCK (1 << 14)
> +#define DWC_CFGL_LOCK_BUS_XACT (2 << 14)
> +#define DWC_CFGL_LOCK_CH (1 << 15) /* channel lockout */
> +#define DWC_CFGL_LOCK_BUS (1 << 16) /* busmaster lockout */
> +#define DWC_CFGL_HS_DST_POL (1 << 18) /* dst handshake active low */
> +#define DWC_CFGL_HS_SRC_POL (1 << 19) /* src handshake active low */
> +
> +static inline struct dw_dma_slave *to_dw_dma_slave(struct dma_slave *slave)
> +{
> + return container_of(slave, struct dw_dma_slave, slave);
> +}
> +
> +#endif /* DW_DMAC_H */
> --
> 1.5.5.4
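
As a side note, my reading of the CFG_HI bits is that platform code
would wire up a slave roughly like this (just a sketch to check my
understanding; the variable name and the zero cfg_lo are mine, only
the macros come from the headers above):

	static struct dw_dma_slave mci_dma_slave = {
		/* Route the MMC controller's requests to the DMAC
		 * handshaking interfaces defined in at32ap700x.h;
		 * setup of the embedded struct dma_slave is omitted. */
		.cfg_hi	= DWC_CFGH_SRC_PER(DMAC_MCI_RX)
			| DWC_CFGH_DST_PER(DMAC_MCI_TX),
		.cfg_lo	= 0,	/* default priority, no locking */
	};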

Regards,
Maciej

2008-07-04 16:11:17

by Haavard Skinnemoen

[permalink] [raw]
Subject: Re: [PATCH v4 5/6] dmaengine: Driver for the Synopsys DesignWare DMA controller

On Fri, 4 Jul 2008 16:33:53 +0100
"Sosnowski, Maciej" <[email protected]> wrote:
> Couple of questions and comments from my side below.
> Apart from that the code looks fine to me.
>
> Acked-by: Maciej Sosnowski <[email protected]>

Thanks a lot for reviewing!

> > +/* Called with dwc->lock held and bh disabled */
> > +static void dwc_dostart(struct dw_dma_chan *dwc, struct dw_desc *first)
> > +{
> > + struct dw_dma *dw = to_dw_dma(dwc->chan.device);
> > +
> > + /* ASSERT: channel is idle */
> > + if (dma_readl(dw, CH_EN) & dwc->mask) {
> > + dev_err(&dwc->chan.dev,
> > + "BUG: Attempted to start non-idle channel\n");
> > + dev_err(&dwc->chan.dev,
> > + " SAR: 0x%x DAR: 0x%x LLP: 0x%x CTL:
> 0x%x:%08x\n",
> > + channel_readl(dwc, SAR),
> > + channel_readl(dwc, DAR),
> > + channel_readl(dwc, LLP),
> > + channel_readl(dwc, CTL_HI),
> > + channel_readl(dwc, CTL_LO));
> > +
> > + /* The tasklet will hopefully advance the queue... */
> > + return;
>
> Shouldn't an error status be returned at this point so that it can be
> handled accordingly by dwc_dostart()'s caller?

There's not a whole lot the caller could meaningfully do. It
should never happen in the first place, but if the channel _is_ active
at this point, we will eventually get an xfer complete interrupt when
the currently pending transfers are done. The descriptors have already
been added to the list, so the driver should recover from this kind of
bug automatically.

I've never actually triggered this code, so I can't really say for
certain that it works, but at least in theory it makes much more sense
to fix things up when the channel eventually becomes idle.
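
Roughly, the completion path doubles as the recovery path. In
pseudo-code (a sketch from memory, not the exact driver code -- the
queue list name is an assumption):

	/* Run from the xfer-complete tasklet: if the channel is now
	 * idle and more descriptors are queued, kick off the next one. */
	if (!(dma_readl(dw, CH_EN) & dwc->mask) && !list_empty(&dwc->queue))
		dwc_dostart(dwc, list_entry(dwc->queue.next,
					    struct dw_desc, desc_node));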

> > + ctllo = DWC_DEFAULT_CTLLO
> > + | DWC_CTLL_DST_WIDTH(dst_width)
> > + | DWC_CTLL_SRC_WIDTH(src_width)
> > + | DWC_CTLL_DST_INC
> > + | DWC_CTLL_SRC_INC
> > + | DWC_CTLL_FC_M2M;
> > + prev = first = NULL;
> > +
> > + for (offset = 0; offset < len; offset += xfer_count << src_width) {
> > + xfer_count = min_t(size_t, (len - offset) >> src_width,
> > + DWC_MAX_COUNT);
>
> Here it looks like the maximum xfer_count value can change - it depends
> on src_width, so it may be different for different transactions.
> Is that ok?

Yes, the maximum transfer count is defined as the maximum number of
source transactions on the bus. So if the controller is set up to do 32
bits at a time on the source side, the maximum transfer _length_ is
four times the maximum transfer _count_.

The value written to the descriptor is also a transaction count, not a
byte count.
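
To put some numbers on it (illustrative only -- the block limit
depends on the silicon configuration): with src_width == 2, i.e.
32-bit source transfers, and a block limit of 2048 transactions:

	/* Sketch: 2048 stands in for the hardware block limit. */
	xfer_count = min_t(size_t, (len - offset) >> src_width, 2048);
	/* max bytes per block: 2048 << 2 == 8192, while the count
	 * field in the descriptor itself never exceeds 2048 */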

> This driver does not perform any self-test during initialization.
> What about adding some initial HW checking?

I'm not sure if it makes a lot of sense -- this device is typically
integrated on the same silicon as the CPU, so if there are any issues
with the DMA controller, they should be caught during production
testing.

I'm using the dmatest module to validate the driver, so I feel a
built-in self-test would be somewhat redundant.

Haavard