2014-10-28 21:30:50

by Maxime Ripard

Subject: [PATCH v4 00/58] dmaengine: Implement generic slave capabilities retrieval

Hi,

As we discussed a couple of weeks ago, this is the third attempt at
creating a generic behaviour for slave capabilities retrieval, so that
generic layers using dmaengine can actually rely on it.

That has been done mostly in two steps: first by moving the sub-commands
of the device_control callback out into dedicated callbacks, so that the
dmaengine core can infer from their presence whether a sub-command is
implemented, and then by moving the slave properties, such as the
supported bus widths, into the dma_device structure itself.
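
For the generic layers, the outcome is that dma_get_slave_caps() becomes
usable across drivers. A minimal client-side sketch (assuming chan is a
channel the client already requested; error handling trimmed):

	struct dma_slave_caps caps;
	int ret;

	ret = dma_get_slave_caps(chan, &caps);
	if (ret)
		return ret;	/* the driver reports no capabilities */

	/* e.g. make sure 16-bit accesses and MEM_TO_DEV transfers work */
	if (!(caps.dst_addr_widths & BIT(DMA_SLAVE_BUSWIDTH_2_BYTES)) ||
	    !(caps.directions & BIT(DMA_MEM_TO_DEV)))
		return -EINVAL;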

Comments are as usual appreciated!

Thanks,
Maxime

Changes from v3:
- Removed the generic_slave_caps flag
- Merged the patch introducing a generic slave caps functions with
the one introducing the dma_device capabilities

Changes from v2:
- Reworked dma_chan_get/dma_chan_put in order to optionally use
device_alloc_chan_resources/device_free_chan_resources
- Modified a few client drivers that were still calling
device_control directly to use the dmaengine API instead
- Totally remove device_control and device_slave_caps
- Move device_control BUG_ON removal earlier in the patch set to
avoid breaking the bisectability
- Converted rapidio tsi721 driver too.
- Minor cosmetics changes and fixes suggested by Laurent Pinchart
and Andy Shevchenko
- Fixed a few build warnings
- Collected the various Acked-by
- Rebased on top of 3.18-rc1

Changes from v1:
- Add a flag to trigger the generic slave caps mechanism
- Add a warning whenever this flag is not set, or when a
device_control callback is still defined
- Migrate all existing users to use the new callbacks, and the
generic slave capabilities

Maxime Ripard (58):
crypto: ux500: Use dmaengine_terminate_all API
serial: at91: Use dmaengine_slave_config API
dmaengine: Make the destination abbreviation coherent
dmaengine: Rework dma_chan_get
dmaengine: Make channel allocation callbacks optional
dmaengine: Introduce a device_config callback
dmaengine: split out pause/resume operations from device_control
dmaengine: Add device_terminate_all callback
dmaengine: Remove the need to declare device_control
dmaengine: Create a generic dma_slave_caps callback
dmaengine: pl08x: Split device_control
dmaengine: hdmac: Split device_control
dmaengine: bcm2835: Split device_control
dmaengine: coh901318: Split device_control
dmaengine: cppi41: Split device_control
dmaengine: jz4740: Split device_control
dmaengine: dw: Split device_control
dmaengine: edma: Split device_control
dmaengine: ep93xx: Split device_control
dmaengine: fsl-edma: Split device_control
dmaengine: imx: Split device_control
dmaengine: imx-sdma: Split device_control
dmaengine: intel-mid-dma: Split device_control
dmaengine: ipu-idmac: Split device_control
dmaengine: k3: Split device_control
dmaengine: mmp-pdma: Split device_control
dmaengine: mmp-tdma: Split device_control
dmaengine: moxart: Split device_control
dmaengine: fsl-dma: Split device_control
dmaengine: mpc512x: Split device_control
dmaengine: mxs: Split device_control
dmaengine: nbpfaxi: Split device_control
dmaengine: omap: Split device_control
dmaengine: pl330: Split device_control
dmaengine: bam-dma: Split device_control
dmaengine: s3c24xx: Split device_control
dmaengine: sa11x0: Split device_control
dmaengine: sh: Split device_control
dmaengine: sirf: Split device_control
dmaengine: sun6i: Split device_control
dmaengine: d40: Split device_control
dmaengine: tegra20: Split device_control
dmaengine: xilinx: Split device_control
dmaengine: mv_xor: Remove device_control
dmaengine: pch-dma: Rename device_control
dmaengine: td: Rename device_control
dmaengine: txx9: Rename device_control
dmaengine: rapidio: tsi721: Rename device_control
dmaengine: bcm2835: Declare slave capabilities for the generic code
dmaengine: fsl-edma: Declare slave capabilities for the generic code
dmaengine: edma: Declare slave capabilities for the generic code
dmaengine: nbpfaxi: Declare slave capabilities for the generic code
dmaengine: omap: Declare slave capabilities for the generic code
dmaengine: pl330: Declare slave capabilities for the generic code
dmaengine: sirf: Declare slave capabilities for the generic code
dmaengine: sun6i: Declare slave capabilities for the generic code
dmaengine: Add a warning for drivers not using the generic slave caps
retrieval
dmaengine: Remove device_control and device_slave_caps

drivers/crypto/ux500/cryp/cryp_core.c | 4 +-
drivers/crypto/ux500/hash/hash_core.c | 2 +-
drivers/dma/amba-pl08x.c | 156 +++++++++++++++------------
drivers/dma/at_hdmac.c | 121 ++++++++++++---------
drivers/dma/bcm2835-dma.c | 46 ++------
drivers/dma/coh901318.c | 137 +++++++++++------------
drivers/dma/cppi41.c | 30 +-----
drivers/dma/dma-jz4740.c | 20 +---
drivers/dma/dmaengine.c | 51 +++++----
drivers/dma/dw/core.c | 82 +++++++-------
drivers/dma/edma.c | 70 ++++--------
drivers/dma/ep93xx_dma.c | 41 ++-----
drivers/dma/fsl-edma.c | 123 ++++++++++-----------
drivers/dma/fsldma.c | 91 ++++++----------
drivers/dma/imx-dma.c | 103 +++++++++---------
drivers/dma/imx-sdma.c | 66 ++++++------
drivers/dma/intel_mid_dma.c | 25 ++---
drivers/dma/ipu/ipu_idmac.c | 96 +++++++++--------
drivers/dma/k3dma.c | 197 ++++++++++++++++++----------------
drivers/dma/mmp_pdma.c | 109 ++++++++++---------
drivers/dma/mmp_tdma.c | 82 +++++++-------
drivers/dma/moxart-dma.c | 25 +----
drivers/dma/mpc512x_dma.c | 111 +++++++++----------
drivers/dma/mv_xor.c | 9 --
drivers/dma/mxs-dma.c | 59 ++++------
drivers/dma/nbpfaxi.c | 110 +++++++++----------
drivers/dma/omap-dma.c | 69 ++++--------
drivers/dma/pch_dma.c | 8 +-
drivers/dma/pl330.c | 126 ++++++++++------------
drivers/dma/qcom_bam_dma.c | 85 +++++++--------
drivers/dma/s3c24xx-dma.c | 75 +++++++------
drivers/dma/sa11x0-dma.c | 158 ++++++++++++++-------------
drivers/dma/sh/shdma-base.c | 72 ++++++-------
drivers/dma/sirf-dma.c | 59 +++-------
drivers/dma/ste_dma40.c | 60 +++++------
drivers/dma/sun6i-dma.c | 158 ++++++++++++++-------------
drivers/dma/tegra20-apb-dma.c | 22 +---
drivers/dma/timb_dma.c | 8 +-
drivers/dma/txx9dmac.c | 9 +-
drivers/dma/xilinx/xilinx_vdma.c | 29 ++---
drivers/rapidio/devices/tsi721_dma.c | 8 +-
drivers/tty/serial/atmel_serial.c | 10 +-
include/linux/dmaengine.h | 121 ++++++++++++---------
sound/soc/soc-generic-dmaengine-pcm.c | 2 +-
44 files changed, 1400 insertions(+), 1645 deletions(-)

--
2.1.1


2014-10-28 21:30:07

by Maxime Ripard

Subject: [PATCH v4 10/58] dmaengine: Create a generic dma_slave_caps callback

dma_slave_caps is very important to the generic layers that might interact with
dmaengine, such as ASoC. Unfortunately, it has been added as yet another
dma_device callback, and most of the existing drivers haven't implemented it,
reducing its reliability.

Introduce a generic behaviour to implement this, relying both on the split of
device_control to derive which operations are supported, and on new fields to
be set in the dma_device structure.

These fields hold what used to be the per-channel capabilities. Keeping them
per-channel proved to be a bit overkill, since every driver filling them so
far was hardcoding the values, disregarding which channel was actually given.
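
For a driver that has already split its device_control, declaring the
capabilities then boils down to filling the new dma_device fields at probe
time. A minimal sketch, with purely hypothetical values for an imaginary
controller (dd being its struct dma_device):

	dd->src_addr_widths = BIT(DMA_SLAVE_BUSWIDTH_1_BYTE) |
			      BIT(DMA_SLAVE_BUSWIDTH_4_BYTES);
	dd->dst_addr_widths = BIT(DMA_SLAVE_BUSWIDTH_1_BYTE) |
			      BIT(DMA_SLAVE_BUSWIDTH_4_BYTES);
	dd->directions = BIT(DMA_DEV_TO_MEM) | BIT(DMA_MEM_TO_DEV);
	dd->residue_granularity = DMA_RESIDUE_GRANULARITY_BURST;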

Signed-off-by: Maxime Ripard <[email protected]>
---
include/linux/dmaengine.h | 41 +++++++++++++++++++++++++++++++++++++----
1 file changed, 37 insertions(+), 4 deletions(-)

diff --git a/include/linux/dmaengine.h b/include/linux/dmaengine.h
index b0efc805937a..be9e60aa8f5e 100644
--- a/include/linux/dmaengine.h
+++ b/include/linux/dmaengine.h
@@ -593,6 +593,14 @@ struct dma_tx_state {
* @fill_align: alignment shift for memset operations
* @dev_id: unique device ID
* @dev: struct device reference for dma mapping api
+ * @src_addr_widths: bit mask of src addr widths the device supports
+ * @dst_addr_widths: bit mask of dst addr widths the device supports
+ * @directions: bit mask of slave directions the device supports.
+ * Since the enum dma_transfer_direction is not defined as a bit mask,
+ * the dma controller should fill (1 << <TYPE>), and the same
+ * convention should be used when checking the mask
+ * @residue_granularity: granularity of the transfer residue reported
+ * by tx_status
* @device_alloc_chan_resources: allocate resources and return the
* number of allocated descriptors
* @device_free_chan_resources: release DMA channel's resources
@@ -642,6 +650,11 @@ struct dma_device {
int dev_id;
struct device *dev;

+ u32 src_addr_widths;
+ u32 dst_addr_widths;
+ u32 directions;
+ enum dma_residue_granularity residue_granularity;
+
int (*device_alloc_chan_resources)(struct dma_chan *chan);
void (*device_free_chan_resources)(struct dma_chan *chan);

@@ -783,17 +796,37 @@ static inline struct dma_async_tx_descriptor *dmaengine_prep_dma_sg(

static inline int dma_get_slave_caps(struct dma_chan *chan, struct dma_slave_caps *caps)
{
+ struct dma_device *device;
+
if (!chan || !caps)
return -EINVAL;

+ device = chan->device;
+
/* check if the channel supports slave transactions */
- if (!test_bit(DMA_SLAVE, chan->device->cap_mask.bits))
+ if (!test_bit(DMA_SLAVE, device->cap_mask.bits))
+ return -ENXIO;
+
+ if (device->device_slave_caps)
+ return device->device_slave_caps(chan, caps);
+
+ /*
+ * Check whether it reports it uses the generic slave
+ * capabilities, if not, that means it doesn't support any
+ * kind of slave capabilities reporting.
+ */
+ if (!device->directions)
return -ENXIO;

- if (chan->device->device_slave_caps)
- return chan->device->device_slave_caps(chan, caps);
+ caps->src_addr_widths = device->src_addr_widths;
+ caps->dst_addr_widths = device->dst_addr_widths;
+ caps->directions = device->directions;
+ caps->residue_granularity = device->residue_granularity;
+
+ caps->cmd_pause = !!device->device_pause;
+ caps->cmd_terminate = !!device->device_terminate_all;

- return -ENXIO;
+ return 0;
}

static inline int dmaengine_terminate_all(struct dma_chan *chan)
--
2.1.1

2014-10-28 21:30:09

by Maxime Ripard

Subject: [PATCH v4 11/58] dmaengine: pl08x: Split device_control

Split the device_control callback of the AMBA PL08x DMA driver to make use
of the newly introduced callbacks, which will eventually be used to retrieve
slave capabilities.

Signed-off-by: Maxime Ripard <[email protected]>
---
drivers/dma/amba-pl08x.c | 156 +++++++++++++++++++++++++++--------------------
1 file changed, 90 insertions(+), 66 deletions(-)

diff --git a/drivers/dma/amba-pl08x.c b/drivers/dma/amba-pl08x.c
index e34024b000a4..aa99c13b5666 100644
--- a/drivers/dma/amba-pl08x.c
+++ b/drivers/dma/amba-pl08x.c
@@ -1386,32 +1386,6 @@ static u32 pl08x_get_cctl(struct pl08x_dma_chan *plchan,
return pl08x_cctl(cctl);
}

-static int dma_set_runtime_config(struct dma_chan *chan,
- struct dma_slave_config *config)
-{
- struct pl08x_dma_chan *plchan = to_pl08x_chan(chan);
- struct pl08x_driver_data *pl08x = plchan->host;
-
- if (!plchan->slave)
- return -EINVAL;
-
- /* Reject definitely invalid configurations */
- if (config->src_addr_width == DMA_SLAVE_BUSWIDTH_8_BYTES ||
- config->dst_addr_width == DMA_SLAVE_BUSWIDTH_8_BYTES)
- return -EINVAL;
-
- if (config->device_fc && pl08x->vd->pl080s) {
- dev_err(&pl08x->adev->dev,
- "%s: PL080S does not support peripheral flow control\n",
- __func__);
- return -EINVAL;
- }
-
- plchan->cfg = *config;
-
- return 0;
-}
-
/*
* Slave transactions callback to the slave device to allow
* synchronization of slave DMA signals with the DMAC enable
@@ -1693,20 +1667,71 @@ static struct dma_async_tx_descriptor *pl08x_prep_dma_cyclic(
return vchan_tx_prep(&plchan->vc, &txd->vd, flags);
}

-static int pl08x_control(struct dma_chan *chan, enum dma_ctrl_cmd cmd,
- unsigned long arg)
+static int pl08x_config(struct dma_chan *chan,
+ struct dma_slave_config *config)
+{
+ struct pl08x_dma_chan *plchan = to_pl08x_chan(chan);
+ struct pl08x_driver_data *pl08x = plchan->host;
+
+ if (!plchan->slave)
+ return -EINVAL;
+
+ /* Reject definitely invalid configurations */
+ if (config->src_addr_width == DMA_SLAVE_BUSWIDTH_8_BYTES ||
+ config->dst_addr_width == DMA_SLAVE_BUSWIDTH_8_BYTES)
+ return -EINVAL;
+
+ if (config->device_fc && pl08x->vd->pl080s) {
+ dev_err(&pl08x->adev->dev,
+ "%s: PL080S does not support peripheral flow control\n",
+ __func__);
+ return -EINVAL;
+ }
+
+ plchan->cfg = *config;
+
+ return 0;
+}
+
+static int pl08x_terminate_all(struct dma_chan *chan)
{
struct pl08x_dma_chan *plchan = to_pl08x_chan(chan);
struct pl08x_driver_data *pl08x = plchan->host;
unsigned long flags;
- int ret = 0;

- /* Controls applicable to inactive channels */
- if (cmd == DMA_SLAVE_CONFIG) {
- return dma_set_runtime_config(chan,
- (struct dma_slave_config *)arg);
+ spin_lock_irqsave(&plchan->vc.lock, flags);
+ if (!plchan->phychan && !plchan->at) {
+ spin_unlock_irqrestore(&plchan->vc.lock, flags);
+ return 0;
}

+ plchan->state = PL08X_CHAN_IDLE;
+
+ if (plchan->phychan) {
+ /*
+ * Mark physical channel as free and free any slave
+ * signal
+ */
+ pl08x_phy_free(plchan);
+ }
+ /* Dequeue jobs and free LLIs */
+ if (plchan->at) {
+ pl08x_desc_free(&plchan->at->vd);
+ plchan->at = NULL;
+ }
+ /* Dequeue jobs not yet fired as well */
+ pl08x_free_txd_list(pl08x, plchan);
+
+ spin_unlock_irqrestore(&plchan->vc.lock, flags);
+
+ return 0;
+}
+
+static int pl08x_pause(struct dma_chan *chan)
+{
+ struct pl08x_dma_chan *plchan = to_pl08x_chan(chan);
+ unsigned long flags;
+
/*
* Anything succeeds on channels with no physical allocation and
* no queued transfers.
@@ -1717,42 +1742,35 @@ static int pl08x_control(struct dma_chan *chan, enum dma_ctrl_cmd cmd,
return 0;
}

- switch (cmd) {
- case DMA_TERMINATE_ALL:
- plchan->state = PL08X_CHAN_IDLE;
+ pl08x_pause_phy_chan(plchan->phychan);
+ plchan->state = PL08X_CHAN_PAUSED;

- if (plchan->phychan) {
- /*
- * Mark physical channel as free and free any slave
- * signal
- */
- pl08x_phy_free(plchan);
- }
- /* Dequeue jobs and free LLIs */
- if (plchan->at) {
- pl08x_desc_free(&plchan->at->vd);
- plchan->at = NULL;
- }
- /* Dequeue jobs not yet fired as well */
- pl08x_free_txd_list(pl08x, plchan);
- break;
- case DMA_PAUSE:
- pl08x_pause_phy_chan(plchan->phychan);
- plchan->state = PL08X_CHAN_PAUSED;
- break;
- case DMA_RESUME:
- pl08x_resume_phy_chan(plchan->phychan);
- plchan->state = PL08X_CHAN_RUNNING;
- break;
- default:
- /* Unknown command */
- ret = -ENXIO;
- break;
+ spin_unlock_irqrestore(&plchan->vc.lock, flags);
+
+ return 0;
+}
+
+static int pl08x_resume(struct dma_chan *chan)
+{
+ struct pl08x_dma_chan *plchan = to_pl08x_chan(chan);
+ unsigned long flags;
+
+ /*
+ * Anything succeeds on channels with no physical allocation and
+ * no queued transfers.
+ */
+ spin_lock_irqsave(&plchan->vc.lock, flags);
+ if (!plchan->phychan && !plchan->at) {
+ spin_unlock_irqrestore(&plchan->vc.lock, flags);
+ return 0;
}

+ pl08x_resume_phy_chan(plchan->phychan);
+ plchan->state = PL08X_CHAN_RUNNING;
+
spin_unlock_irqrestore(&plchan->vc.lock, flags);

- return ret;
+ return 0;
}

bool pl08x_filter_id(struct dma_chan *chan, void *chan_id)
@@ -2048,7 +2066,10 @@ static int pl08x_probe(struct amba_device *adev, const struct amba_id *id)
pl08x->memcpy.device_prep_dma_interrupt = pl08x_prep_dma_interrupt;
pl08x->memcpy.device_tx_status = pl08x_dma_tx_status;
pl08x->memcpy.device_issue_pending = pl08x_issue_pending;
- pl08x->memcpy.device_control = pl08x_control;
+ pl08x->memcpy.device_config = pl08x_config;
+ pl08x->memcpy.device_pause = pl08x_pause;
+ pl08x->memcpy.device_resume = pl08x_resume;
+ pl08x->memcpy.device_terminate_all = pl08x_terminate_all;

/* Initialize slave engine */
dma_cap_set(DMA_SLAVE, pl08x->slave.cap_mask);
@@ -2061,7 +2082,10 @@ static int pl08x_probe(struct amba_device *adev, const struct amba_id *id)
pl08x->slave.device_issue_pending = pl08x_issue_pending;
pl08x->slave.device_prep_slave_sg = pl08x_prep_slave_sg;
pl08x->slave.device_prep_dma_cyclic = pl08x_prep_dma_cyclic;
- pl08x->slave.device_control = pl08x_control;
+ pl08x->slave.device_config = pl08x_config;
+ pl08x->slave.device_pause = pl08x_pause;
+ pl08x->slave.device_resume = pl08x_resume;
+ pl08x->slave.device_terminate_all = pl08x_terminate_all;

/* Get the platform data */
pl08x->pd = dev_get_platdata(&adev->dev);
--
2.1.1

2014-10-28 21:30:16

by Maxime Ripard

Subject: [PATCH v4 15/58] dmaengine: cppi41: Split device_control

Split the device_control callback of the TI CPPI41 DMA driver to make use
of the newly introduced callbacks, which will eventually be used to retrieve
slave capabilities.

Signed-off-by: Maxime Ripard <[email protected]>
---
drivers/dma/cppi41.c | 30 +-----------------------------
1 file changed, 1 insertion(+), 29 deletions(-)

diff --git a/drivers/dma/cppi41.c b/drivers/dma/cppi41.c
index a58eec3b2cad..0f41b5b497db 100644
--- a/drivers/dma/cppi41.c
+++ b/drivers/dma/cppi41.c
@@ -524,12 +524,6 @@ static struct dma_async_tx_descriptor *cppi41_dma_prep_slave_sg(
return &c->txd;
}

-static int cpp41_cfg_chan(struct cppi41_channel *c,
- struct dma_slave_config *cfg)
-{
- return 0;
-}
-
static void cppi41_compute_td_desc(struct cppi41_desc *d)
{
d->pd0 = DESC_TYPE_TEARD << DESC_TYPE;
@@ -642,28 +636,6 @@ static int cppi41_stop_chan(struct dma_chan *chan)
return 0;
}

-static int cppi41_dma_control(struct dma_chan *chan, enum dma_ctrl_cmd cmd,
- unsigned long arg)
-{
- struct cppi41_channel *c = to_cpp41_chan(chan);
- int ret;
-
- switch (cmd) {
- case DMA_SLAVE_CONFIG:
- ret = cpp41_cfg_chan(c, (struct dma_slave_config *) arg);
- break;
-
- case DMA_TERMINATE_ALL:
- ret = cppi41_stop_chan(chan);
- break;
-
- default:
- ret = -ENXIO;
- break;
- }
- return ret;
-}
-
static void cleanup_chans(struct cppi41_dd *cdd)
{
while (!list_empty(&cdd->ddev.channels)) {
@@ -948,7 +920,7 @@ static int cppi41_dma_probe(struct platform_device *pdev)
cdd->ddev.device_tx_status = cppi41_dma_tx_status;
cdd->ddev.device_issue_pending = cppi41_dma_issue_pending;
cdd->ddev.device_prep_slave_sg = cppi41_dma_prep_slave_sg;
- cdd->ddev.device_control = cppi41_dma_control;
+ cdd->ddev.device_terminate_all = cppi41_stop_chan;
cdd->ddev.dev = dev;
INIT_LIST_HEAD(&cdd->ddev.channels);
cpp41_dma_info.dma_cap = cdd->ddev.cap_mask;
--
2.1.1

2014-10-28 21:30:27

by Maxime Ripard

Subject: [PATCH v4 23/58] dmaengine: intel-mid-dma: Split device_control

Split the device_control callback of the Intel MID DMA driver to make use
of the newly introduced callbacks, which will eventually be used to retrieve
slave capabilities.

Signed-off-by: Maxime Ripard <[email protected]>
---
drivers/dma/intel_mid_dma.c | 25 ++++++-------------------
1 file changed, 6 insertions(+), 19 deletions(-)

diff --git a/drivers/dma/intel_mid_dma.c b/drivers/dma/intel_mid_dma.c
index 1aab8130efa1..5aaead9b56f7 100644
--- a/drivers/dma/intel_mid_dma.c
+++ b/drivers/dma/intel_mid_dma.c
@@ -492,10 +492,10 @@ static enum dma_status intel_mid_dma_tx_status(struct dma_chan *chan,
return ret;
}

-static int dma_slave_control(struct dma_chan *chan, unsigned long arg)
+static int intel_mid_dma_config(struct dma_chan *chan,
+ struct dma_slave_config *slave)
{
struct intel_mid_dma_chan *midc = to_intel_mid_dma_chan(chan);
- struct dma_slave_config *slave = (struct dma_slave_config *)arg;
struct intel_mid_dma_slave *mid_slave;

BUG_ON(!midc);
@@ -509,28 +509,14 @@ static int dma_slave_control(struct dma_chan *chan, unsigned long arg)
midc->mid_slave = mid_slave;
return 0;
}
-/**
- * intel_mid_dma_device_control - DMA device control
- * @chan: chan for DMA control
- * @cmd: control cmd
- * @arg: cmd arg value
- *
- * Perform DMA control command
- */
-static int intel_mid_dma_device_control(struct dma_chan *chan,
- enum dma_ctrl_cmd cmd, unsigned long arg)
+
+static int intel_mid_dma_terminate_all(struct dma_chan *chan)
{
struct intel_mid_dma_chan *midc = to_intel_mid_dma_chan(chan);
struct middma_device *mid = to_middma_device(chan->device);
struct intel_mid_dma_desc *desc, *_desc;
union intel_mid_dma_cfg_lo cfg_lo;

- if (cmd == DMA_SLAVE_CONFIG)
- return dma_slave_control(chan, arg);
-
- if (cmd != DMA_TERMINATE_ALL)
- return -ENXIO;
-
spin_lock_bh(&midc->lock);
if (midc->busy == false) {
spin_unlock_bh(&midc->lock);
@@ -1148,7 +1134,8 @@ static int mid_setup_dma(struct pci_dev *pdev)
dma->common.device_prep_dma_memcpy = intel_mid_dma_prep_memcpy;
dma->common.device_issue_pending = intel_mid_dma_issue_pending;
dma->common.device_prep_slave_sg = intel_mid_dma_prep_slave_sg;
- dma->common.device_control = intel_mid_dma_device_control;
+ dma->common.device_config = intel_mid_dma_config;
+ dma->common.device_terminate_all = intel_mid_dma_terminate_all;

/*enable dma cntrl*/
iowrite32(REG_BIT0, dma->dma_base + DMA_CFG);
--
2.1.1

2014-10-28 21:30:32

by Maxime Ripard

Subject: [PATCH v4 25/58] dmaengine: k3: Split device_control

Split the device_control callback of the Hisilicon K3 DMA driver to make use
of the newly introduced callbacks, which will eventually be used to retrieve
slave capabilities.

Signed-off-by: Maxime Ripard <[email protected]>
---
drivers/dma/k3dma.c | 197 ++++++++++++++++++++++++++++------------------------
1 file changed, 107 insertions(+), 90 deletions(-)

diff --git a/drivers/dma/k3dma.c b/drivers/dma/k3dma.c
index a1f911aaf220..fb4dd2cffd80 100644
--- a/drivers/dma/k3dma.c
+++ b/drivers/dma/k3dma.c
@@ -441,7 +441,7 @@ static struct dma_async_tx_descriptor *k3_dma_prep_memcpy(
num = 0;

if (!c->ccfg) {
- /* default is memtomem, without calling device_control */
+ /* default is memtomem, without calling device_config */
c->ccfg = CX_CFG_SRCINCR | CX_CFG_DSTINCR | CX_CFG_EN;
c->ccfg |= (0xf << 20) | (0xf << 24); /* burst = 16 */
c->ccfg |= (0x3 << 12) | (0x3 << 16); /* width = 64 bit */
@@ -523,112 +523,126 @@ static struct dma_async_tx_descriptor *k3_dma_prep_slave_sg(
return vchan_tx_prep(&c->vc, &ds->vd, flags);
}

-static int k3_dma_control(struct dma_chan *chan, enum dma_ctrl_cmd cmd,
- unsigned long arg)
+static int k3_dma_config(struct dma_chan *chan,
+ struct dma_slave_config *cfg)
+{
+ struct k3_dma_chan *c = to_k3_chan(chan);
+ u32 maxburst = 0, val = 0;
+ enum dma_slave_buswidth width = DMA_SLAVE_BUSWIDTH_UNDEFINED;
+
+ if (cfg == NULL)
+ return -EINVAL;
+ c->dir = cfg->direction;
+ if (c->dir == DMA_DEV_TO_MEM) {
+ c->ccfg = CX_CFG_DSTINCR;
+ c->dev_addr = cfg->src_addr;
+ maxburst = cfg->src_maxburst;
+ width = cfg->src_addr_width;
+ } else if (c->dir == DMA_MEM_TO_DEV) {
+ c->ccfg = CX_CFG_SRCINCR;
+ c->dev_addr = cfg->dst_addr;
+ maxburst = cfg->dst_maxburst;
+ width = cfg->dst_addr_width;
+ }
+ switch (width) {
+ case DMA_SLAVE_BUSWIDTH_1_BYTE:
+ case DMA_SLAVE_BUSWIDTH_2_BYTES:
+ case DMA_SLAVE_BUSWIDTH_4_BYTES:
+ case DMA_SLAVE_BUSWIDTH_8_BYTES:
+ val = __ffs(width);
+ break;
+ default:
+ val = 3;
+ break;
+ }
+ c->ccfg |= (val << 12) | (val << 16);
+
+ if ((maxburst == 0) || (maxburst > 16))
+ val = 16;
+ else
+ val = maxburst - 1;
+ c->ccfg |= (val << 20) | (val << 24);
+ c->ccfg |= CX_CFG_MEM2PER | CX_CFG_EN;
+
+ /* specific request line */
+ c->ccfg |= c->vc.chan.chan_id << 4;
+
+ return 0;
+}
+
+static int k3_dma_terminate_all(struct dma_chan *chan)
{
struct k3_dma_chan *c = to_k3_chan(chan);
struct k3_dma_dev *d = to_k3_dma(chan->device);
- struct dma_slave_config *cfg = (void *)arg;
struct k3_dma_phy *p = c->phy;
unsigned long flags;
- u32 maxburst = 0, val = 0;
- enum dma_slave_buswidth width = DMA_SLAVE_BUSWIDTH_UNDEFINED;
LIST_HEAD(head);

- switch (cmd) {
- case DMA_SLAVE_CONFIG:
- if (cfg == NULL)
- return -EINVAL;
- c->dir = cfg->direction;
- if (c->dir == DMA_DEV_TO_MEM) {
- c->ccfg = CX_CFG_DSTINCR;
- c->dev_addr = cfg->src_addr;
- maxburst = cfg->src_maxburst;
- width = cfg->src_addr_width;
- } else if (c->dir == DMA_MEM_TO_DEV) {
- c->ccfg = CX_CFG_SRCINCR;
- c->dev_addr = cfg->dst_addr;
- maxburst = cfg->dst_maxburst;
- width = cfg->dst_addr_width;
- }
- switch (width) {
- case DMA_SLAVE_BUSWIDTH_1_BYTE:
- case DMA_SLAVE_BUSWIDTH_2_BYTES:
- case DMA_SLAVE_BUSWIDTH_4_BYTES:
- case DMA_SLAVE_BUSWIDTH_8_BYTES:
- val = __ffs(width);
- break;
- default:
- val = 3;
- break;
- }
- c->ccfg |= (val << 12) | (val << 16);
+ dev_dbg(d->slave.dev, "vchan %p: terminate all\n", &c->vc);

- if ((maxburst == 0) || (maxburst > 16))
- val = 16;
- else
- val = maxburst - 1;
- c->ccfg |= (val << 20) | (val << 24);
- c->ccfg |= CX_CFG_MEM2PER | CX_CFG_EN;
+ /* Prevent this channel being scheduled */
+ spin_lock(&d->lock);
+ list_del_init(&c->node);
+ spin_unlock(&d->lock);

- /* specific request line */
- c->ccfg |= c->vc.chan.chan_id << 4;
- break;
+ /* Clear the tx descriptor lists */
+ spin_lock_irqsave(&c->vc.lock, flags);
+ vchan_get_all_descriptors(&c->vc, &head);
+ if (p) {
+ /* vchan is assigned to a pchan - stop the channel */
+ k3_dma_terminate_chan(p, d);
+ c->phy = NULL;
+ p->vchan = NULL;
+ p->ds_run = p->ds_done = NULL;
+ }
+ spin_unlock_irqrestore(&c->vc.lock, flags);
+ vchan_dma_desc_free_list(&c->vc, &head);

- case DMA_TERMINATE_ALL:
- dev_dbg(d->slave.dev, "vchan %p: terminate all\n", &c->vc);
+ return 0;
+}

- /* Prevent this channel being scheduled */
- spin_lock(&d->lock);
- list_del_init(&c->node);
- spin_unlock(&d->lock);
+static int k3_dma_pause(struct dma_chan *chan)
+{
+ struct k3_dma_chan *c = to_k3_chan(chan);
+ struct k3_dma_dev *d = to_k3_dma(chan->device);
+ struct k3_dma_phy *p = c->phy;

- /* Clear the tx descriptor lists */
- spin_lock_irqsave(&c->vc.lock, flags);
- vchan_get_all_descriptors(&c->vc, &head);
+ dev_dbg(d->slave.dev, "vchan %p: pause\n", &c->vc);
+ if (c->status == DMA_IN_PROGRESS) {
+ c->status = DMA_PAUSED;
if (p) {
- /* vchan is assigned to a pchan - stop the channel */
- k3_dma_terminate_chan(p, d);
- c->phy = NULL;
- p->vchan = NULL;
- p->ds_run = p->ds_done = NULL;
+ k3_dma_pause_dma(p, false);
+ } else {
+ spin_lock(&d->lock);
+ list_del_init(&c->node);
+ spin_unlock(&d->lock);
}
- spin_unlock_irqrestore(&c->vc.lock, flags);
- vchan_dma_desc_free_list(&c->vc, &head);
- break;
+ }

- case DMA_PAUSE:
- dev_dbg(d->slave.dev, "vchan %p: pause\n", &c->vc);
- if (c->status == DMA_IN_PROGRESS) {
- c->status = DMA_PAUSED;
- if (p) {
- k3_dma_pause_dma(p, false);
- } else {
- spin_lock(&d->lock);
- list_del_init(&c->node);
- spin_unlock(&d->lock);
- }
- }
- break;
+ return 0;
+}

- case DMA_RESUME:
- dev_dbg(d->slave.dev, "vchan %p: resume\n", &c->vc);
- spin_lock_irqsave(&c->vc.lock, flags);
- if (c->status == DMA_PAUSED) {
- c->status = DMA_IN_PROGRESS;
- if (p) {
- k3_dma_pause_dma(p, true);
- } else if (!list_empty(&c->vc.desc_issued)) {
- spin_lock(&d->lock);
- list_add_tail(&c->node, &d->chan_pending);
- spin_unlock(&d->lock);
- }
+static int k3_dma_resume(struct dma_chan *chan)
+{
+ struct k3_dma_chan *c = to_k3_chan(chan);
+ struct k3_dma_dev *d = to_k3_dma(chan->device);
+ struct k3_dma_phy *p = c->phy;
+ unsigned long flags;
+
+ dev_dbg(d->slave.dev, "vchan %p: resume\n", &c->vc);
+ spin_lock_irqsave(&c->vc.lock, flags);
+ if (c->status == DMA_PAUSED) {
+ c->status = DMA_IN_PROGRESS;
+ if (p) {
+ k3_dma_pause_dma(p, true);
+ } else if (!list_empty(&c->vc.desc_issued)) {
+ spin_lock(&d->lock);
+ list_add_tail(&c->node, &d->chan_pending);
+ spin_unlock(&d->lock);
}
- spin_unlock_irqrestore(&c->vc.lock, flags);
- break;
- default:
- return -ENXIO;
}
+ spin_unlock_irqrestore(&c->vc.lock, flags);
+
return 0;
}

@@ -720,7 +734,10 @@ static int k3_dma_probe(struct platform_device *op)
d->slave.device_prep_dma_memcpy = k3_dma_prep_memcpy;
d->slave.device_prep_slave_sg = k3_dma_prep_slave_sg;
d->slave.device_issue_pending = k3_dma_issue_pending;
- d->slave.device_control = k3_dma_control;
+ d->slave.device_config = k3_dma_config;
+ d->slave.device_pause = k3_dma_pause;
+ d->slave.device_resume = k3_dma_resume;
+ d->slave.device_terminate_all = k3_dma_terminate_all;
d->slave.copy_align = DMA_ALIGN;
d->slave.chancnt = d->dma_requests;

--
2.1.1

2014-10-28 21:30:40

by Maxime Ripard

Subject: [PATCH v4 29/58] dmaengine: fsl-dma: Split device_control

Split the device_control callback of the Freescale Elo DMA driver to make use
of the newly introduced callbacks, which will eventually be used to retrieve
slave capabilities.

While we're at it, remove the useless fsl_dma_prep_slave_sg stub.

Signed-off-by: Maxime Ripard <[email protected]>
---
drivers/dma/fsldma.c | 91 ++++++++++++++++++----------------------------------
1 file changed, 32 insertions(+), 59 deletions(-)

diff --git a/drivers/dma/fsldma.c b/drivers/dma/fsldma.c
index 994bcb2c6b92..4b557643a737 100644
--- a/drivers/dma/fsldma.c
+++ b/drivers/dma/fsldma.c
@@ -941,37 +941,8 @@ fail:
return NULL;
}

-/**
- * fsl_dma_prep_slave_sg - prepare descriptors for a DMA_SLAVE transaction
- * @chan: DMA channel
- * @sgl: scatterlist to transfer to/from
- * @sg_len: number of entries in @scatterlist
- * @direction: DMA direction
- * @flags: DMAEngine flags
- * @context: transaction context (ignored)
- *
- * Prepare a set of descriptors for a DMA_SLAVE transaction. Following the
- * DMA_SLAVE API, this gets the device-specific information from the
- * chan->private variable.
- */
-static struct dma_async_tx_descriptor *fsl_dma_prep_slave_sg(
- struct dma_chan *dchan, struct scatterlist *sgl, unsigned int sg_len,
- enum dma_transfer_direction direction, unsigned long flags,
- void *context)
-{
- /*
- * This operation is not supported on the Freescale DMA controller
- *
- * However, we need to provide the function pointer to allow the
- * device_control() method to work.
- */
- return NULL;
-}
-
-static int fsl_dma_device_control(struct dma_chan *dchan,
- enum dma_ctrl_cmd cmd, unsigned long arg)
+static int fsl_dma_device_terminate_all(struct dma_chan *dchan)
{
- struct dma_slave_config *config;
struct fsldma_chan *chan;
int size;

@@ -980,45 +951,47 @@ static int fsl_dma_device_control(struct dma_chan *dchan,

chan = to_fsl_chan(dchan);

- switch (cmd) {
- case DMA_TERMINATE_ALL:
- spin_lock_bh(&chan->desc_lock);
-
- /* Halt the DMA engine */
- dma_halt(chan);
+ spin_lock_bh(&chan->desc_lock);

- /* Remove and free all of the descriptors in the LD queue */
- fsldma_free_desc_list(chan, &chan->ld_pending);
- fsldma_free_desc_list(chan, &chan->ld_running);
- fsldma_free_desc_list(chan, &chan->ld_completed);
- chan->idle = true;
+ /* Halt the DMA engine */
+ dma_halt(chan);

- spin_unlock_bh(&chan->desc_lock);
- return 0;
+ /* Remove and free all of the descriptors in the LD queue */
+ fsldma_free_desc_list(chan, &chan->ld_pending);
+ fsldma_free_desc_list(chan, &chan->ld_running);
+ fsldma_free_desc_list(chan, &chan->ld_completed);
+ chan->idle = true;

- case DMA_SLAVE_CONFIG:
- config = (struct dma_slave_config *)arg;
+ spin_unlock_bh(&chan->desc_lock);
+ return 0;
+}

- /* make sure the channel supports setting burst size */
- if (!chan->set_request_count)
- return -ENXIO;
+static int fsl_dma_device_config(struct dma_chan *dchan,
+ struct dma_slave_config *config)
+{
+ struct fsldma_chan *chan;
+ int size;

- /* we set the controller burst size depending on direction */
- if (config->direction == DMA_MEM_TO_DEV)
- size = config->dst_addr_width * config->dst_maxburst;
- else
- size = config->src_addr_width * config->src_maxburst;
+ if (!dchan)
+ return -EINVAL;

- chan->set_request_count(chan, size);
- return 0;
+ chan = to_fsl_chan(dchan);

- default:
+ /* make sure the channel supports setting burst size */
+ if (!chan->set_request_count)
return -ENXIO;
- }

+ /* we set the controller burst size depending on direction */
+ if (config->direction == DMA_MEM_TO_DEV)
+ size = config->dst_addr_width * config->dst_maxburst;
+ else
+ size = config->src_addr_width * config->src_maxburst;
+
+ chan->set_request_count(chan, size);
return 0;
}

+
/**
* fsl_dma_memcpy_issue_pending - Issue the DMA start command
* @chan : Freescale DMA channel
@@ -1396,8 +1369,8 @@ static int fsldma_of_probe(struct platform_device *op)
fdev->common.device_prep_dma_sg = fsl_dma_prep_sg;
fdev->common.device_tx_status = fsl_tx_status;
fdev->common.device_issue_pending = fsl_dma_memcpy_issue_pending;
- fdev->common.device_prep_slave_sg = fsl_dma_prep_slave_sg;
- fdev->common.device_control = fsl_dma_device_control;
+ fdev->common.device_config = fsl_dma_device_config;
+ fdev->common.device_terminate_all = fsl_dma_device_terminate_all;
fdev->common.dev = &op->dev;

dma_set_mask(&(op->dev), DMA_BIT_MASK(36));
--
2.1.1

2014-10-28 21:30:46

by Maxime Ripard

Subject: [PATCH v4 07/58] dmaengine: split out pause/resume operations from device_control

Split out the pause and resume operations into callbacks of their own. In order
to preserve backward compatibility, dmaengine_pause/dmaengine_resume still fall
back on dmaengine_device_control.

Eventually, that will allow the device capabilities to be retrieved in a
generic way, removing the need to implement device_slave_caps.
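
For a driver, the conversion is then just a matter of providing the new
callbacks; device_control can stay in place during the transition since the
core prefers the dedicated operations. A minimal sketch for a hypothetical
foo_dma driver (fd->slave being its struct dma_device):

	static int foo_dma_pause(struct dma_chan *chan)
	{
		/* whatever the driver used to do for DMA_PAUSE */
		return 0;
	}

	static int foo_dma_resume(struct dma_chan *chan)
	{
		/* whatever the driver used to do for DMA_RESUME */
		return 0;
	}

	/* in the probe function: */
	fd->slave.device_pause = foo_dma_pause;
	fd->slave.device_resume = foo_dma_resume;
	/* device_control may still be set here until it is removed
	 * at the end of the series */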

Signed-off-by: Maxime Ripard <[email protected]>
Acked-by: Laurent Pinchart <[email protected]>
---
include/linux/dmaengine.h | 12 ++++++++++++
1 file changed, 12 insertions(+)

diff --git a/include/linux/dmaengine.h b/include/linux/dmaengine.h
index f797cb81355f..4583c55f3355 100644
--- a/include/linux/dmaengine.h
+++ b/include/linux/dmaengine.h
@@ -611,6 +611,10 @@ struct dma_tx_state {
* code
* @device_control: manipulate all pending operations on a channel, returns
* zero or error code
+ * @device_pause: Pauses any transfer happening on a channel. Returns
+ * 0 or an error code
+ * @device_resume: Resumes any transfer on a channel previously
+ * paused. Returns 0 or an error code
* @device_tx_status: poll for transaction completion, the optional
* txstate parameter can be supplied with a pointer to get a
* struct with auxiliary transfer status information, otherwise the call
@@ -680,6 +684,8 @@ struct dma_device {
struct dma_slave_config *config);
int (*device_control)(struct dma_chan *chan, enum dma_ctrl_cmd cmd,
unsigned long arg);
+ int (*device_pause)(struct dma_chan *chan);
+ int (*device_resume)(struct dma_chan *chan);

enum dma_status (*device_tx_status)(struct dma_chan *chan,
dma_cookie_t cookie,
@@ -794,11 +800,17 @@ static inline int dmaengine_terminate_all(struct dma_chan *chan)

static inline int dmaengine_pause(struct dma_chan *chan)
{
+ if (chan->device->device_pause)
+ return chan->device->device_pause(chan);
+
return dmaengine_device_control(chan, DMA_PAUSE, 0);
}

static inline int dmaengine_resume(struct dma_chan *chan)
{
+ if (chan->device->device_resume)
+ return chan->device->device_resume(chan);
+
return dmaengine_device_control(chan, DMA_RESUME, 0);
}

--
2.1.1

2014-10-28 21:31:13

by Maxime Ripard

Subject: [PATCH v4 36/58] dmaengine: s3c24xx: Split device_control

Split the device_control callback of the Samsung S3C24xx DMA driver to make
use of the newly introduced callbacks, which will eventually be used to retrieve
slave capabilities.

Signed-off-by: Maxime Ripard <[email protected]>
---
drivers/dma/s3c24xx-dma.c | 75 +++++++++++++++++++++++------------------------
1 file changed, 36 insertions(+), 39 deletions(-)

diff --git a/drivers/dma/s3c24xx-dma.c b/drivers/dma/s3c24xx-dma.c
index 7416572d1e40..2177bf5a742c 100644
--- a/drivers/dma/s3c24xx-dma.c
+++ b/drivers/dma/s3c24xx-dma.c
@@ -384,20 +384,30 @@ static u32 s3c24xx_dma_getbytes_chan(struct s3c24xx_dma_chan *s3cchan)
return tc * txd->width;
}

-static int s3c24xx_dma_set_runtime_config(struct s3c24xx_dma_chan *s3cchan,
+static int s3c24xx_dma_set_runtime_config(struct dma_chan *chan,
struct dma_slave_config *config)
{
- if (!s3cchan->slave)
- return -EINVAL;
+ struct s3c24xx_dma_chan *s3cchan = to_s3c24xx_dma_chan(chan);
+ unsigned long flags;
+ int ret = 0;

/* Reject definitely invalid configurations */
if (config->src_addr_width == DMA_SLAVE_BUSWIDTH_8_BYTES ||
config->dst_addr_width == DMA_SLAVE_BUSWIDTH_8_BYTES)
return -EINVAL;

+ spin_lock_irqsave(&s3cchan->vc.lock, flags);
+
+ if (!s3cchan->slave) {
+ ret = -EINVAL;
+ goto out;
+ }
+
s3cchan->cfg = *config;

- return 0;
+out:
+ spin_unlock_irqrestore(&s3cchan->vc.lock, flags);
+ return ret;
}

/*
@@ -703,53 +713,38 @@ static irqreturn_t s3c24xx_dma_irq(int irq, void *data)
* The DMA ENGINE API
*/

-static int s3c24xx_dma_control(struct dma_chan *chan, enum dma_ctrl_cmd cmd,
- unsigned long arg)
+static int s3c24xx_dma_terminate_all(struct dma_chan *chan)
{
struct s3c24xx_dma_chan *s3cchan = to_s3c24xx_dma_chan(chan);
struct s3c24xx_dma_engine *s3cdma = s3cchan->host;
unsigned long flags;
- int ret = 0;

spin_lock_irqsave(&s3cchan->vc.lock, flags);

- switch (cmd) {
- case DMA_SLAVE_CONFIG:
- ret = s3c24xx_dma_set_runtime_config(s3cchan,
- (struct dma_slave_config *)arg);
- break;
- case DMA_TERMINATE_ALL:
- if (!s3cchan->phy && !s3cchan->at) {
- dev_err(&s3cdma->pdev->dev, "trying to terminate already stopped channel %d\n",
- s3cchan->id);
- ret = -EINVAL;
- break;
- }
-
- s3cchan->state = S3C24XX_DMA_CHAN_IDLE;
+ if (!s3cchan->phy && !s3cchan->at) {
+ dev_err(&s3cdma->pdev->dev, "trying to terminate already stopped channel %d\n", s3cchan->id);
+ spin_unlock_irqrestore(&s3cchan->vc.lock, flags);
+ return -EINVAL;
+ }

- /* Mark physical channel as free */
- if (s3cchan->phy)
- s3c24xx_dma_phy_free(s3cchan);
+ s3cchan->state = S3C24XX_DMA_CHAN_IDLE;

- /* Dequeue current job */
- if (s3cchan->at) {
- s3c24xx_dma_desc_free(&s3cchan->at->vd);
- s3cchan->at = NULL;
- }
+ /* Mark physical channel as free */
+ if (s3cchan->phy)
+ s3c24xx_dma_phy_free(s3cchan);

- /* Dequeue jobs not yet fired as well */
- s3c24xx_dma_free_txd_list(s3cdma, s3cchan);
- break;
- default:
- /* Unknown command */
- ret = -ENXIO;
- break;
+ /* Dequeue current job */
+ if (s3cchan->at) {
+ s3c24xx_dma_desc_free(&s3cchan->at->vd);
+ s3cchan->at = NULL;
}

+ /* Dequeue jobs not yet fired as well */
+ s3c24xx_dma_free_txd_list(s3cdma, s3cchan);
+
spin_unlock_irqrestore(&s3cchan->vc.lock, flags);

- return ret;
+ return 0;
}

static int s3c24xx_dma_alloc_chan_resources(struct dma_chan *chan)
@@ -1300,7 +1295,8 @@ static int s3c24xx_dma_probe(struct platform_device *pdev)
s3cdma->memcpy.device_prep_dma_memcpy = s3c24xx_dma_prep_memcpy;
s3cdma->memcpy.device_tx_status = s3c24xx_dma_tx_status;
s3cdma->memcpy.device_issue_pending = s3c24xx_dma_issue_pending;
- s3cdma->memcpy.device_control = s3c24xx_dma_control;
+ s3cdma->memcpy.device_config = s3c24xx_dma_set_runtime_config;
+ s3cdma->memcpy.device_terminate_all = s3c24xx_dma_terminate_all;

/* Initialize slave engine for SoC internal dedicated peripherals */
dma_cap_set(DMA_SLAVE, s3cdma->slave.cap_mask);
@@ -1315,7 +1311,8 @@ static int s3c24xx_dma_probe(struct platform_device *pdev)
s3cdma->slave.device_issue_pending = s3c24xx_dma_issue_pending;
s3cdma->slave.device_prep_slave_sg = s3c24xx_dma_prep_slave_sg;
s3cdma->slave.device_prep_dma_cyclic = s3c24xx_dma_prep_dma_cyclic;
- s3cdma->slave.device_control = s3c24xx_dma_control;
+ s3cdma->slave.device_config = s3c24xx_dma_set_runtime_config;
+ s3cdma->slave.device_terminate_all = s3c24xx_dma_terminate_all;

/* Register as many memcpy channels as there are physical channels */
ret = s3c24xx_dma_init_virtual_channels(s3cdma, &s3cdma->memcpy,
--
2.1.1

2014-10-28 21:31:35

by Maxime Ripard

Subject: [PATCH v4 31/58] dmaengine: mxs: Split device_control

Split the device_control callback of the Freescale MXS DMA driver to make use
of the newly introduced callbacks, which will eventually be used to retrieve
slave capabilities.

Signed-off-by: Maxime Ripard <[email protected]>
---
drivers/dma/mxs-dma.c | 59 ++++++++++++++++++++-------------------------------
1 file changed, 23 insertions(+), 36 deletions(-)

diff --git a/drivers/dma/mxs-dma.c b/drivers/dma/mxs-dma.c
index 5ea61201dbf0..834041e5a769 100644
--- a/drivers/dma/mxs-dma.c
+++ b/drivers/dma/mxs-dma.c
@@ -202,8 +202,9 @@ static struct mxs_dma_chan *to_mxs_dma_chan(struct dma_chan *chan)
return container_of(chan, struct mxs_dma_chan, chan);
}

-static void mxs_dma_reset_chan(struct mxs_dma_chan *mxs_chan)
+static void mxs_dma_reset_chan(struct dma_chan *chan)
{
+ struct mxs_dma_chan *mxs_chan = to_mxs_dma_chan(chan);
struct mxs_dma_engine *mxs_dma = mxs_chan->mxs_dma;
int chan_id = mxs_chan->chan.chan_id;

@@ -250,8 +251,9 @@ static void mxs_dma_reset_chan(struct mxs_dma_chan *mxs_chan)
mxs_chan->status = DMA_COMPLETE;
}

-static void mxs_dma_enable_chan(struct mxs_dma_chan *mxs_chan)
+static void mxs_dma_enable_chan(struct dma_chan *chan)
{
+ struct mxs_dma_chan *mxs_chan = to_mxs_dma_chan(chan);
struct mxs_dma_engine *mxs_dma = mxs_chan->mxs_dma;
int chan_id = mxs_chan->chan.chan_id;

@@ -272,13 +274,16 @@ static void mxs_dma_enable_chan(struct mxs_dma_chan *mxs_chan)
mxs_chan->reset = false;
}

-static void mxs_dma_disable_chan(struct mxs_dma_chan *mxs_chan)
+static void mxs_dma_disable_chan(struct dma_chan *chan)
{
+ struct mxs_dma_chan *mxs_chan = to_mxs_dma_chan(chan);
+
mxs_chan->status = DMA_COMPLETE;
}

-static void mxs_dma_pause_chan(struct mxs_dma_chan *mxs_chan)
+static void mxs_dma_pause_chan(struct dma_chan *chan)
{
+ struct mxs_dma_chan *mxs_chan = to_mxs_dma_chan(chan);
struct mxs_dma_engine *mxs_dma = mxs_chan->mxs_dma;
int chan_id = mxs_chan->chan.chan_id;

@@ -293,8 +298,9 @@ static void mxs_dma_pause_chan(struct mxs_dma_chan *mxs_chan)
mxs_chan->status = DMA_PAUSED;
}

-static void mxs_dma_resume_chan(struct mxs_dma_chan *mxs_chan)
+static void mxs_dma_resume_chan(struct dma_chan *chan)
{
+ struct mxs_dma_chan *mxs_chan = to_mxs_dma_chan(chan);
struct mxs_dma_engine *mxs_dma = mxs_chan->mxs_dma;
int chan_id = mxs_chan->chan.chan_id;

@@ -383,7 +389,7 @@ static irqreturn_t mxs_dma_int_handler(int irq, void *dev_id)
"%s: error in channel %d\n", __func__,
chan);
mxs_chan->status = DMA_ERROR;
- mxs_dma_reset_chan(mxs_chan);
+ mxs_dma_reset_chan(&mxs_chan->chan);
} else if (mxs_chan->status != DMA_COMPLETE) {
if (mxs_chan->flags & MXS_DMA_SG_LOOP) {
mxs_chan->status = DMA_IN_PROGRESS;
@@ -432,7 +438,7 @@ static int mxs_dma_alloc_chan_resources(struct dma_chan *chan)
if (ret)
goto err_clk;

- mxs_dma_reset_chan(mxs_chan);
+ mxs_dma_reset_chan(chan);

dma_async_tx_descriptor_init(&mxs_chan->desc, chan);
mxs_chan->desc.tx_submit = mxs_dma_tx_submit;
@@ -456,7 +462,7 @@ static void mxs_dma_free_chan_resources(struct dma_chan *chan)
struct mxs_dma_chan *mxs_chan = to_mxs_dma_chan(chan);
struct mxs_dma_engine *mxs_dma = mxs_chan->mxs_dma;

- mxs_dma_disable_chan(mxs_chan);
+ mxs_dma_disable_chan(chan);

free_irq(mxs_chan->chan_irq, mxs_dma);

@@ -651,28 +657,14 @@ err_out:
return NULL;
}

-static int mxs_dma_control(struct dma_chan *chan, enum dma_ctrl_cmd cmd,
- unsigned long arg)
+static int mxs_dma_terminate_all(struct dma_chan *chan)
{
struct mxs_dma_chan *mxs_chan = to_mxs_dma_chan(chan);
- int ret = 0;
-
- switch (cmd) {
- case DMA_TERMINATE_ALL:
- mxs_dma_reset_chan(mxs_chan);
- mxs_dma_disable_chan(mxs_chan);
- break;
- case DMA_PAUSE:
- mxs_dma_pause_chan(mxs_chan);
- break;
- case DMA_RESUME:
- mxs_dma_resume_chan(mxs_chan);
- break;
- default:
- ret = -ENOSYS;
- }

- return ret;
+ mxs_dma_reset_chan(chan);
+ mxs_dma_disable_chan(chan);
+
+ return 0;
}

static enum dma_status mxs_dma_tx_status(struct dma_chan *chan,
@@ -701,13 +693,6 @@ static enum dma_status mxs_dma_tx_status(struct dma_chan *chan,
return mxs_chan->status;
}

-static void mxs_dma_issue_pending(struct dma_chan *chan)
-{
- struct mxs_dma_chan *mxs_chan = to_mxs_dma_chan(chan);
-
- mxs_dma_enable_chan(mxs_chan);
-}
-
static int __init mxs_dma_init(struct mxs_dma_engine *mxs_dma)
{
int ret;
@@ -860,8 +845,10 @@ static int __init mxs_dma_probe(struct platform_device *pdev)
mxs_dma->dma_device.device_tx_status = mxs_dma_tx_status;
mxs_dma->dma_device.device_prep_slave_sg = mxs_dma_prep_slave_sg;
mxs_dma->dma_device.device_prep_dma_cyclic = mxs_dma_prep_dma_cyclic;
- mxs_dma->dma_device.device_control = mxs_dma_control;
- mxs_dma->dma_device.device_issue_pending = mxs_dma_issue_pending;
+ mxs_dma->dma_device.device_pause = mxs_dma_pause_chan;
+ mxs_dma->dma_device.device_resume = mxs_dma_resume_chan;
+ mxs_dma->dma_device.device_terminate_all = mxs_dma_terminate_all;
+ mxs_dma->dma_device.device_issue_pending = mxs_dma_enable_chan;

ret = dma_async_device_register(&mxs_dma->dma_device);
if (ret) {
--
2.1.1

2014-10-28 21:31:11

by Maxime Ripard

Subject: [PATCH v4 33/58] dmaengine: omap: Split device_control

Split the device_control callback of the TI OMAP DMA driver to make use
of the newly introduced callbacks, which will eventually be used to retrieve
slave capabilities.

Signed-off-by: Maxime Ripard <[email protected]>
---
drivers/dma/omap-dma.c | 51 +++++++++++++++-----------------------------------
1 file changed, 15 insertions(+), 36 deletions(-)

diff --git a/drivers/dma/omap-dma.c b/drivers/dma/omap-dma.c
index 8035b8c33cfd..90c0bdd53bb0 100644
--- a/drivers/dma/omap-dma.c
+++ b/drivers/dma/omap-dma.c
@@ -948,8 +948,10 @@ static struct dma_async_tx_descriptor *omap_dma_prep_dma_cyclic(
return vchan_tx_prep(&c->vc, &d->vd, flags);
}

-static int omap_dma_slave_config(struct omap_chan *c, struct dma_slave_config *cfg)
+static int omap_dma_slave_config(struct dma_chan *chan, struct dma_slave_config *cfg)
{
+ struct omap_chan *c = to_omap_dma_chan(chan);
+
if (cfg->src_addr_width == DMA_SLAVE_BUSWIDTH_8_BYTES ||
cfg->dst_addr_width == DMA_SLAVE_BUSWIDTH_8_BYTES)
return -EINVAL;
@@ -959,8 +961,9 @@ static int omap_dma_slave_config(struct omap_chan *c, struct dma_slave_config *c
return 0;
}

-static int omap_dma_terminate_all(struct omap_chan *c)
+static int omap_dma_terminate_all(struct dma_chan *chan)
{
+ struct omap_chan *c = to_omap_dma_chan(chan);
struct omap_dmadev *d = to_omap_dma_dev(c->vc.chan.device);
unsigned long flags;
LIST_HEAD(head);
@@ -996,8 +999,10 @@ static int omap_dma_terminate_all(struct omap_chan *c)
return 0;
}

-static int omap_dma_pause(struct omap_chan *c)
+static int omap_dma_pause(struct dma_chan *chan)
{
+ struct omap_chan *c = to_omap_dma_chan(chan);
+
/* Pause/Resume only allowed with cyclic mode */
if (!c->cyclic)
return -EINVAL;
@@ -1010,8 +1015,10 @@ static int omap_dma_pause(struct omap_chan *c)
return 0;
}

-static int omap_dma_resume(struct omap_chan *c)
+static int omap_dma_resume(struct dma_chan *chan)
{
+ struct omap_chan *c = to_omap_dma_chan(chan);
+
/* Pause/Resume only allowed with cyclic mode */
if (!c->cyclic)
return -EINVAL;
@@ -1029,37 +1036,6 @@ static int omap_dma_resume(struct omap_chan *c)
return 0;
}

-static int omap_dma_control(struct dma_chan *chan, enum dma_ctrl_cmd cmd,
- unsigned long arg)
-{
- struct omap_chan *c = to_omap_dma_chan(chan);
- int ret;
-
- switch (cmd) {
- case DMA_SLAVE_CONFIG:
- ret = omap_dma_slave_config(c, (struct dma_slave_config *)arg);
- break;
-
- case DMA_TERMINATE_ALL:
- ret = omap_dma_terminate_all(c);
- break;
-
- case DMA_PAUSE:
- ret = omap_dma_pause(c);
- break;
-
- case DMA_RESUME:
- ret = omap_dma_resume(c);
- break;
-
- default:
- ret = -ENXIO;
- break;
- }
-
- return ret;
-}
-
static int omap_dma_chan_init(struct omap_dmadev *od, int dma_sig)
{
struct omap_chan *c;
@@ -1138,7 +1114,10 @@ static int omap_dma_probe(struct platform_device *pdev)
od->ddev.device_issue_pending = omap_dma_issue_pending;
od->ddev.device_prep_slave_sg = omap_dma_prep_slave_sg;
od->ddev.device_prep_dma_cyclic = omap_dma_prep_dma_cyclic;
- od->ddev.device_control = omap_dma_control;
+ od->ddev.device_config = omap_dma_slave_config;
+ od->ddev.device_pause = omap_dma_pause;
+ od->ddev.device_resume = omap_dma_resume;
+ od->ddev.device_terminate_all = omap_dma_terminate_all;
od->ddev.device_slave_caps = omap_dma_device_slave_caps;
od->ddev.dev = &pdev->dev;
INIT_LIST_HEAD(&od->ddev.channels);
--
2.1.1

2014-10-28 21:31:52

by Maxime Ripard

Subject: [PATCH v4 40/58] dmaengine: sun6i: Split device_control

Split the device_control callback of the Allwinner A31 DMA driver to make use
of the newly introduced callbacks, which will eventually be used to retrieve
slave capabilities.

Signed-off-by: Maxime Ripard <[email protected]>
---
drivers/dma/sun6i-dma.c | 149 +++++++++++++++++++++++++-----------------------
1 file changed, 77 insertions(+), 72 deletions(-)

diff --git a/drivers/dma/sun6i-dma.c b/drivers/dma/sun6i-dma.c
index 3aa10b328254..c3a6b3f789a8 100644
--- a/drivers/dma/sun6i-dma.c
+++ b/drivers/dma/sun6i-dma.c
@@ -358,38 +358,6 @@ static void sun6i_dma_free_desc(struct virt_dma_desc *vd)
kfree(txd);
}

-static int sun6i_dma_terminate_all(struct sun6i_vchan *vchan)
-{
- struct sun6i_dma_dev *sdev = to_sun6i_dma_dev(vchan->vc.chan.device);
- struct sun6i_pchan *pchan = vchan->phy;
- unsigned long flags;
- LIST_HEAD(head);
-
- spin_lock(&sdev->lock);
- list_del_init(&vchan->node);
- spin_unlock(&sdev->lock);
-
- spin_lock_irqsave(&vchan->vc.lock, flags);
-
- vchan_get_all_descriptors(&vchan->vc, &head);
-
- if (pchan) {
- writel(DMA_CHAN_ENABLE_STOP, pchan->base + DMA_CHAN_ENABLE);
- writel(DMA_CHAN_PAUSE_RESUME, pchan->base + DMA_CHAN_PAUSE);
-
- vchan->phy = NULL;
- pchan->vchan = NULL;
- pchan->desc = NULL;
- pchan->done = NULL;
- }
-
- spin_unlock_irqrestore(&vchan->vc.lock, flags);
-
- vchan_dma_desc_free_list(&vchan->vc, &head);
-
- return 0;
-}
-
static int sun6i_dma_start_desc(struct sun6i_vchan *vchan)
{
struct sun6i_dma_dev *sdev = to_sun6i_dma_dev(vchan->vc.chan.device);
@@ -673,57 +641,92 @@ err_lli_free:
return NULL;
}

-static int sun6i_dma_control(struct dma_chan *chan, enum dma_ctrl_cmd cmd,
- unsigned long arg)
+static int sun6i_dma_config(struct dma_chan *chan,
+ struct dma_slave_config *config)
+{
+ struct sun6i_vchan *vchan = to_sun6i_vchan(chan);
+
+ memcpy(&vchan->cfg, config, sizeof(*config));
+
+ return 0;
+}
+
+static int sun6i_dma_pause(struct dma_chan *chan)
+{
+ struct sun6i_dma_dev *sdev = to_sun6i_dma_dev(chan->device);
+ struct sun6i_vchan *vchan = to_sun6i_vchan(chan);
+ struct sun6i_pchan *pchan = vchan->phy;
+
+ dev_dbg(chan2dev(chan), "vchan %p: pause\n", &vchan->vc);
+
+ if (pchan) {
+ writel(DMA_CHAN_PAUSE_PAUSE,
+ pchan->base + DMA_CHAN_PAUSE);
+ } else {
+ spin_lock(&sdev->lock);
+ list_del_init(&vchan->node);
+ spin_unlock(&sdev->lock);
+ }
+
+ return 0;
+}
+
+static int sun6i_dma_resume(struct dma_chan *chan)
{
struct sun6i_dma_dev *sdev = to_sun6i_dma_dev(chan->device);
struct sun6i_vchan *vchan = to_sun6i_vchan(chan);
struct sun6i_pchan *pchan = vchan->phy;
unsigned long flags;
- int ret = 0;

- switch (cmd) {
- case DMA_RESUME:
- dev_dbg(chan2dev(chan), "vchan %p: resume\n", &vchan->vc);
+ dev_dbg(chan2dev(chan), "vchan %p: resume\n", &vchan->vc);

- spin_lock_irqsave(&vchan->vc.lock, flags);
+ spin_lock_irqsave(&vchan->vc.lock, flags);

- if (pchan) {
- writel(DMA_CHAN_PAUSE_RESUME,
- pchan->base + DMA_CHAN_PAUSE);
- } else if (!list_empty(&vchan->vc.desc_issued)) {
- spin_lock(&sdev->lock);
- list_add_tail(&vchan->node, &sdev->pending);
- spin_unlock(&sdev->lock);
- }
+ if (pchan) {
+ writel(DMA_CHAN_PAUSE_RESUME,
+ pchan->base + DMA_CHAN_PAUSE);
+ } else if (!list_empty(&vchan->vc.desc_issued)) {
+ spin_lock(&sdev->lock);
+ list_add_tail(&vchan->node, &sdev->pending);
+ spin_unlock(&sdev->lock);
+ }

- spin_unlock_irqrestore(&vchan->vc.lock, flags);
- break;
+ spin_unlock_irqrestore(&vchan->vc.lock, flags);

- case DMA_PAUSE:
- dev_dbg(chan2dev(chan), "vchan %p: pause\n", &vchan->vc);
+ return 0;
+}

- if (pchan) {
- writel(DMA_CHAN_PAUSE_PAUSE,
- pchan->base + DMA_CHAN_PAUSE);
- } else {
- spin_lock(&sdev->lock);
- list_del_init(&vchan->node);
- spin_unlock(&sdev->lock);
- }
- break;
+static int sun6i_dma_terminate_all(struct dma_chan *chan)
+{
+ struct sun6i_dma_dev *sdev = to_sun6i_dma_dev(chan->device);
+ struct sun6i_vchan *vchan = to_sun6i_vchan(chan);
+ struct sun6i_pchan *pchan = vchan->phy;
+ unsigned long flags;
+ LIST_HEAD(head);

- case DMA_TERMINATE_ALL:
- ret = sun6i_dma_terminate_all(vchan);
- break;
- case DMA_SLAVE_CONFIG:
- memcpy(&vchan->cfg, (void *)arg, sizeof(struct dma_slave_config));
- break;
- default:
- ret = -ENXIO;
- break;
+ spin_lock(&sdev->lock);
+ list_del_init(&vchan->node);
+ spin_unlock(&sdev->lock);
+
+ spin_lock_irqsave(&vchan->vc.lock, flags);
+
+ vchan_get_all_descriptors(&vchan->vc, &head);
+
+ if (pchan) {
+ writel(DMA_CHAN_ENABLE_STOP, pchan->base + DMA_CHAN_ENABLE);
+ writel(DMA_CHAN_PAUSE_RESUME, pchan->base + DMA_CHAN_PAUSE);
+
+ vchan->phy = NULL;
+ pchan->vchan = NULL;
+ pchan->desc = NULL;
+ pchan->done = NULL;
}
- return ret;
+
+ spin_unlock_irqrestore(&vchan->vc.lock, flags);
+
+ vchan_dma_desc_free_list(&vchan->vc, &head);
+
+ return 0;
}

static enum dma_status sun6i_dma_tx_status(struct dma_chan *chan,
@@ -913,9 +916,11 @@ static int sun6i_dma_probe(struct platform_device *pdev)
sdc->slave.device_issue_pending = sun6i_dma_issue_pending;
sdc->slave.device_prep_slave_sg = sun6i_dma_prep_slave_sg;
sdc->slave.device_prep_dma_memcpy = sun6i_dma_prep_dma_memcpy;
- sdc->slave.device_control = sun6i_dma_control;
+ sdc->slave.device_config = sun6i_dma_config;
+ sdc->slave.device_pause = sun6i_dma_pause;
+ sdc->slave.device_resume = sun6i_dma_resume;
+ sdc->slave.device_terminate_all = sun6i_dma_terminate_all;
sdc->slave.chancnt = NR_MAX_VCHANS;
-
sdc->slave.dev = &pdev->dev;

sdc->pchans = devm_kcalloc(&pdev->dev, NR_MAX_CHANNELS,
--
2.1.1

2014-10-28 21:31:09

by Maxime Ripard

Subject: [PATCH v4 34/58] dmaengine: pl330: Split device_control

Split the device_control callback of the AMBA PL330 DMA driver to make use
of the newly introduced callbacks, which will eventually be used to retrieve
slave capabilities.

Signed-off-by: Maxime Ripard <[email protected]>
---
drivers/dma/pl330.c | 108 ++++++++++++++++++++++++++--------------------------
1 file changed, 53 insertions(+), 55 deletions(-)

diff --git a/drivers/dma/pl330.c b/drivers/dma/pl330.c
index b563bfa6cc4d..bd0e47eefa34 100644
--- a/drivers/dma/pl330.c
+++ b/drivers/dma/pl330.c
@@ -2062,72 +2062,69 @@ static int pl330_alloc_chan_resources(struct dma_chan *chan)
return 1;
}

-static int pl330_control(struct dma_chan *chan, enum dma_ctrl_cmd cmd, unsigned long arg)
+static int pl330_config(struct dma_chan *chan,
+ struct dma_slave_config *slave_config)
+{
+ struct dma_pl330_chan *pch = to_pchan(chan);
+
+ if (slave_config->direction == DMA_MEM_TO_DEV) {
+ if (slave_config->dst_addr)
+ pch->fifo_addr = slave_config->dst_addr;
+ if (slave_config->dst_addr_width)
+ pch->burst_sz = __ffs(slave_config->dst_addr_width);
+ if (slave_config->dst_maxburst)
+ pch->burst_len = slave_config->dst_maxburst;
+ } else if (slave_config->direction == DMA_DEV_TO_MEM) {
+ if (slave_config->src_addr)
+ pch->fifo_addr = slave_config->src_addr;
+ if (slave_config->src_addr_width)
+ pch->burst_sz = __ffs(slave_config->src_addr_width);
+ if (slave_config->src_maxburst)
+ pch->burst_len = slave_config->src_maxburst;
+ }
+
+ return 0;
+}
+
+static int pl330_terminate_all(struct dma_chan *chan)
{
struct dma_pl330_chan *pch = to_pchan(chan);
struct dma_pl330_desc *desc;
unsigned long flags;
struct pl330_dmac *pl330 = pch->dmac;
- struct dma_slave_config *slave_config;
LIST_HEAD(list);

- switch (cmd) {
- case DMA_TERMINATE_ALL:
- spin_lock_irqsave(&pch->lock, flags);
-
- spin_lock(&pl330->lock);
- _stop(pch->thread);
- spin_unlock(&pl330->lock);
+ spin_lock_irqsave(&pch->lock, flags);

- pch->thread->req[0].desc = NULL;
- pch->thread->req[1].desc = NULL;
- pch->thread->req_running = -1;
+ spin_lock(&pl330->lock);
+ _stop(pch->thread);
+ spin_unlock(&pl330->lock);

- /* Mark all desc done */
- list_for_each_entry(desc, &pch->submitted_list, node) {
- desc->status = FREE;
- dma_cookie_complete(&desc->txd);
- }
+ pch->thread->req[0].desc = NULL;
+ pch->thread->req[1].desc = NULL;
+ pch->thread->req_running = -1;

- list_for_each_entry(desc, &pch->work_list , node) {
- desc->status = FREE;
- dma_cookie_complete(&desc->txd);
- }
+ /* Mark all desc done */
+ list_for_each_entry(desc, &pch->submitted_list, node) {
+ desc->status = FREE;
+ dma_cookie_complete(&desc->txd);
+ }

- list_for_each_entry(desc, &pch->completed_list , node) {
- desc->status = FREE;
- dma_cookie_complete(&desc->txd);
- }
+ list_for_each_entry(desc, &pch->work_list , node) {
+ desc->status = FREE;
+ dma_cookie_complete(&desc->txd);
+ }

- list_splice_tail_init(&pch->submitted_list, &pl330->desc_pool);
- list_splice_tail_init(&pch->work_list, &pl330->desc_pool);
- list_splice_tail_init(&pch->completed_list, &pl330->desc_pool);
- spin_unlock_irqrestore(&pch->lock, flags);
- break;
- case DMA_SLAVE_CONFIG:
- slave_config = (struct dma_slave_config *)arg;
-
- if (slave_config->direction == DMA_MEM_TO_DEV) {
- if (slave_config->dst_addr)
- pch->fifo_addr = slave_config->dst_addr;
- if (slave_config->dst_addr_width)
- pch->burst_sz = __ffs(slave_config->dst_addr_width);
- if (slave_config->dst_maxburst)
- pch->burst_len = slave_config->dst_maxburst;
- } else if (slave_config->direction == DMA_DEV_TO_MEM) {
- if (slave_config->src_addr)
- pch->fifo_addr = slave_config->src_addr;
- if (slave_config->src_addr_width)
- pch->burst_sz = __ffs(slave_config->src_addr_width);
- if (slave_config->src_maxburst)
- pch->burst_len = slave_config->src_maxburst;
- }
- break;
- default:
- dev_err(pch->dmac->ddma.dev, "Not supported command.\n");
- return -ENXIO;
+ list_for_each_entry(desc, &pch->completed_list , node) {
+ desc->status = FREE;
+ dma_cookie_complete(&desc->txd);
}

+ list_splice_tail_init(&pch->submitted_list, &pl330->desc_pool);
+ list_splice_tail_init(&pch->work_list, &pl330->desc_pool);
+ list_splice_tail_init(&pch->completed_list, &pl330->desc_pool);
+ spin_unlock_irqrestore(&pch->lock, flags);
+
return 0;
}

@@ -2701,7 +2698,8 @@ pl330_probe(struct amba_device *adev, const struct amba_id *id)
pd->device_prep_dma_cyclic = pl330_prep_dma_cyclic;
pd->device_tx_status = pl330_tx_status;
pd->device_prep_slave_sg = pl330_prep_slave_sg;
- pd->device_control = pl330_control;
+ pd->device_config = pl330_config;
+ pd->device_terminate_all = pl330_terminate_all;
pd->device_issue_pending = pl330_issue_pending;
pd->device_slave_caps = pl330_dma_device_slave_caps;

@@ -2749,7 +2747,7 @@ probe_err3:

/* Flush the channel */
if (pch->thread) {
- pl330_control(&pch->chan, DMA_TERMINATE_ALL, 0);
+ pl330_terminate_all(&pch->chan);
pl330_free_chan_resources(&pch->chan);
}
}
@@ -2778,7 +2776,7 @@ static int pl330_remove(struct amba_device *adev)

/* Flush the channel */
if (pch->thread) {
- pl330_control(&pch->chan, DMA_TERMINATE_ALL, 0);
+ pl330_terminate_all(&pch->chan);
pl330_free_chan_resources(&pch->chan);
}
}
--
2.1.1
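
For reference, this is roughly what a client driver does once the split is
in place: it keeps calling the usual dmaengine helpers, which the core
routes to the new hooks. A minimal sketch, where chan, fifo_addr and the
burst length are made-up placeholders:

    struct dma_slave_config cfg = {
        .direction      = DMA_MEM_TO_DEV,
        .dst_addr       = fifo_addr,    /* placeholder FIFO address */
        .dst_addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES,
        .dst_maxburst   = 8,
    };
    int ret;

    /* routed to pl330_config() */
    ret = dmaengine_slave_config(chan, &cfg);
    if (ret)
        return ret;

    /* ... prepare, submit and issue descriptors ... */

    /* routed to pl330_terminate_all() */
    dmaengine_terminate_all(chan);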

2014-10-28 21:32:18

by Maxime Ripard

Subject: [PATCH v4 39/58] dmaengine: sirf: Split device_control

Split the device_control callback of the SiRF Prima 2 DMA driver to make use
of the newly introduced callbacks, which will eventually be used to retrieve
slave capabilities.

Signed-off-by: Maxime Ripard <[email protected]>
---
drivers/dma/sirf-dma.c | 43 +++++++++++++------------------------------
1 file changed, 13 insertions(+), 30 deletions(-)

diff --git a/drivers/dma/sirf-dma.c b/drivers/dma/sirf-dma.c
index 149d7fddfa75..711c2bae9003 100644
--- a/drivers/dma/sirf-dma.c
+++ b/drivers/dma/sirf-dma.c
@@ -281,9 +281,10 @@ static dma_cookie_t sirfsoc_dma_tx_submit(struct dma_async_tx_descriptor *txd)
return cookie;
}

-static int sirfsoc_dma_slave_config(struct sirfsoc_dma_chan *schan,
- struct dma_slave_config *config)
+static int sirfsoc_dma_slave_config(struct dma_chan *chan,
+ struct dma_slave_config *config)
{
+ struct sirfsoc_dma_chan *schan = dma_chan_to_sirfsoc_dma_chan(chan);
unsigned long flags;

if ((config->src_addr_width != DMA_SLAVE_BUSWIDTH_4_BYTES) ||
@@ -297,8 +298,9 @@ static int sirfsoc_dma_slave_config(struct sirfsoc_dma_chan *schan,
return 0;
}

-static int sirfsoc_dma_terminate_all(struct sirfsoc_dma_chan *schan)
+static int sirfsoc_dma_terminate_all(struct dma_chan *chan)
{
+ struct sirfsoc_dma_chan *schan = dma_chan_to_sirfsoc_dma_chan(chan);
struct sirfsoc_dma *sdma = dma_chan_to_sirfsoc_dma(&schan->chan);
int cid = schan->chan.chan_id;
unsigned long flags;
@@ -327,8 +329,9 @@ static int sirfsoc_dma_terminate_all(struct sirfsoc_dma_chan *schan)
return 0;
}

-static int sirfsoc_dma_pause_chan(struct sirfsoc_dma_chan *schan)
+static int sirfsoc_dma_pause_chan(struct dma_chan *chan)
{
+ struct sirfsoc_dma_chan *schan = dma_chan_to_sirfsoc_dma_chan(chan);
struct sirfsoc_dma *sdma = dma_chan_to_sirfsoc_dma(&schan->chan);
int cid = schan->chan.chan_id;
unsigned long flags;
@@ -348,8 +351,9 @@ static int sirfsoc_dma_pause_chan(struct sirfsoc_dma_chan *schan)
return 0;
}

-static int sirfsoc_dma_resume_chan(struct sirfsoc_dma_chan *schan)
+static int sirfsoc_dma_resume_chan(struct dma_chan *chan)
{
+ struct sirfsoc_dma_chan *schan = dma_chan_to_sirfsoc_dma_chan(chan);
struct sirfsoc_dma *sdma = dma_chan_to_sirfsoc_dma(&schan->chan);
int cid = schan->chan.chan_id;
unsigned long flags;
@@ -369,30 +373,6 @@ static int sirfsoc_dma_resume_chan(struct sirfsoc_dma_chan *schan)
return 0;
}

-static int sirfsoc_dma_control(struct dma_chan *chan, enum dma_ctrl_cmd cmd,
- unsigned long arg)
-{
- struct dma_slave_config *config;
- struct sirfsoc_dma_chan *schan = dma_chan_to_sirfsoc_dma_chan(chan);
-
- switch (cmd) {
- case DMA_PAUSE:
- return sirfsoc_dma_pause_chan(schan);
- case DMA_RESUME:
- return sirfsoc_dma_resume_chan(schan);
- case DMA_TERMINATE_ALL:
- return sirfsoc_dma_terminate_all(schan);
- case DMA_SLAVE_CONFIG:
- config = (struct dma_slave_config *)arg;
- return sirfsoc_dma_slave_config(schan, config);
-
- default:
- break;
- }
-
- return -ENOSYS;
-}
-
/* Alloc channel resources */
static int sirfsoc_dma_alloc_chan_resources(struct dma_chan *chan)
{
@@ -740,7 +720,10 @@ static int sirfsoc_dma_probe(struct platform_device *op)
dma->device_alloc_chan_resources = sirfsoc_dma_alloc_chan_resources;
dma->device_free_chan_resources = sirfsoc_dma_free_chan_resources;
dma->device_issue_pending = sirfsoc_dma_issue_pending;
- dma->device_control = sirfsoc_dma_control;
+ dma->device_config = sirfsoc_dma_slave_config;
+ dma->device_pause = sirfsoc_dma_pause_chan;
+ dma->device_resume = sirfsoc_dma_resume_chan;
+ dma->device_terminate_all = sirfsoc_dma_terminate_all;
dma->device_tx_status = sirfsoc_dma_tx_status;
dma->device_prep_interleaved_dma = sirfsoc_dma_prep_interleaved;
dma->device_prep_dma_cyclic = sirfsoc_dma_prep_cyclic;
--
2.1.1
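
Pause and resume get dedicated hooks here as well. Nothing changes for
clients: the existing helpers simply end up in these callbacks now. A
minimal sketch, with chan, dev and ret assumed to exist in the caller:

    /* ends up in sirfsoc_dma_pause_chan() */
    ret = dmaengine_pause(chan);
    if (ret)
        dev_warn(dev, "pause failed: %d\n", ret);

    /* ... */

    /* ends up in sirfsoc_dma_resume_chan() */
    dmaengine_resume(chan);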

2014-10-28 21:32:40

by Maxime Ripard

Subject: [PATCH v4 38/58] dmaengine: sh: Split device_control

Split the device_control callback of the Super-H DMA driver to make use of the
newly introduced callbacks, which will eventually be used to retrieve slave
capabilities.

Signed-off-by: Maxime Ripard <[email protected]>
Acked-by: Laurent Pinchart <[email protected]>
---
drivers/dma/sh/shdma-base.c | 72 +++++++++++++++++++++------------------------
1 file changed, 33 insertions(+), 39 deletions(-)

diff --git a/drivers/dma/sh/shdma-base.c b/drivers/dma/sh/shdma-base.c
index 42d497416196..706cb2611e4d 100644
--- a/drivers/dma/sh/shdma-base.c
+++ b/drivers/dma/sh/shdma-base.c
@@ -727,57 +727,50 @@ static struct dma_async_tx_descriptor *shdma_prep_dma_cyclic(
return desc;
}

-static int shdma_control(struct dma_chan *chan, enum dma_ctrl_cmd cmd,
- unsigned long arg)
+static int shdma_terminate_all(struct dma_chan *chan)
{
struct shdma_chan *schan = to_shdma_chan(chan);
struct shdma_dev *sdev = to_shdma_dev(chan->device);
const struct shdma_ops *ops = sdev->ops;
- struct dma_slave_config *config;
unsigned long flags;
- int ret;

- switch (cmd) {
- case DMA_TERMINATE_ALL:
- spin_lock_irqsave(&schan->chan_lock, flags);
- ops->halt_channel(schan);
+ spin_lock_irqsave(&schan->chan_lock, flags);
+ ops->halt_channel(schan);

- if (ops->get_partial && !list_empty(&schan->ld_queue)) {
- /* Record partial transfer */
- struct shdma_desc *desc = list_first_entry(&schan->ld_queue,
- struct shdma_desc, node);
- desc->partial = ops->get_partial(schan, desc);
- }
+ if (ops->get_partial && !list_empty(&schan->ld_queue)) {
+ /* Record partial transfer */
+ struct shdma_desc *desc = list_first_entry(&schan->ld_queue,
+ struct shdma_desc, node);
+ desc->partial = ops->get_partial(schan, desc);
+ }

- spin_unlock_irqrestore(&schan->chan_lock, flags);
+ spin_unlock_irqrestore(&schan->chan_lock, flags);

- shdma_chan_ld_cleanup(schan, true);
- break;
- case DMA_SLAVE_CONFIG:
- /*
- * So far only .slave_id is used, but the slave drivers are
- * encouraged to also set a transfer direction and an address.
- */
- if (!arg)
- return -EINVAL;
- /*
- * We could lock this, but you shouldn't be configuring the
- * channel, while using it...
- */
- config = (struct dma_slave_config *)arg;
- ret = shdma_setup_slave(schan, config->slave_id,
- config->direction == DMA_DEV_TO_MEM ?
- config->src_addr : config->dst_addr);
- if (ret < 0)
- return ret;
- break;
- default:
- return -ENXIO;
- }
+ shdma_chan_ld_cleanup(schan, true);

return 0;
}

+static int shdma_config(struct dma_chan *chan,
+ struct dma_slave_config *config)
+{
+ struct shdma_chan *schan = to_shdma_chan(chan);
+
+ /*
+ * So far only .slave_id is used, but the slave drivers are
+ * encouraged to also set a transfer direction and an address.
+ */
+ if (!config)
+ return -EINVAL;
+ /*
+ * We could lock this, but you shouldn't be configuring the
+ * channel, while using it...
+ */
+ return shdma_setup_slave(schan, config->slave_id,
+ config->direction == DMA_DEV_TO_MEM ?
+ config->src_addr : config->dst_addr);
+}
+
static void shdma_issue_pending(struct dma_chan *chan)
{
struct shdma_chan *schan = to_shdma_chan(chan);
@@ -1000,7 +993,8 @@ int shdma_init(struct device *dev, struct shdma_dev *sdev,
/* Compulsory for DMA_SLAVE fields */
dma_dev->device_prep_slave_sg = shdma_prep_slave_sg;
dma_dev->device_prep_dma_cyclic = shdma_prep_dma_cyclic;
- dma_dev->device_control = shdma_control;
+ dma_dev->device_config = shdma_config;
+ dma_dev->device_terminate_all = shdma_terminate_all;

dma_dev->dev = dev;

--
2.1.1

2014-10-28 21:32:54

by Maxime Ripard

Subject: [PATCH v4 37/58] dmaengine: sa11x0: Split device_control

Split the device_control callback of the SA-11x0 DMA driver to make use of the
newly introduced callbacks, which will eventually be used to retrieve slave
capabilities.

Signed-off-by: Maxime Ripard <[email protected]>
---
drivers/dma/sa11x0-dma.c | 158 +++++++++++++++++++++++++----------------------
1 file changed, 84 insertions(+), 74 deletions(-)

diff --git a/drivers/dma/sa11x0-dma.c b/drivers/dma/sa11x0-dma.c
index 4b0ef043729a..432342459ab5 100644
--- a/drivers/dma/sa11x0-dma.c
+++ b/drivers/dma/sa11x0-dma.c
@@ -669,8 +669,10 @@ static struct dma_async_tx_descriptor *sa11x0_dma_prep_dma_cyclic(
return vchan_tx_prep(&c->vc, &txd->vd, DMA_PREP_INTERRUPT | DMA_CTRL_ACK);
}

-static int sa11x0_dma_slave_config(struct sa11x0_dma_chan *c, struct dma_slave_config *cfg)
+static int sa11x0_dma_slave_config(struct dma_chan *chan,
+ struct dma_slave_config *cfg)
{
+ struct sa11x0_dma_chan *c = to_sa11x0_dma_chan(chan);
u32 ddar = c->ddar & ((0xf << 4) | DDAR_RW);
dma_addr_t addr;
enum dma_slave_buswidth width;
@@ -704,8 +706,7 @@ static int sa11x0_dma_slave_config(struct sa11x0_dma_chan *c, struct dma_slave_c
return 0;
}

-static int sa11x0_dma_control(struct dma_chan *chan, enum dma_ctrl_cmd cmd,
- unsigned long arg)
+static int sa11x0_dma_pause(struct dma_chan *chan)
{
struct sa11x0_dma_chan *c = to_sa11x0_dma_chan(chan);
struct sa11x0_dma_dev *d = to_sa11x0_dma(chan->device);
@@ -714,89 +715,95 @@ static int sa11x0_dma_control(struct dma_chan *chan, enum dma_ctrl_cmd cmd,
unsigned long flags;
int ret;

- switch (cmd) {
- case DMA_SLAVE_CONFIG:
- return sa11x0_dma_slave_config(c, (struct dma_slave_config *)arg);
-
- case DMA_TERMINATE_ALL:
- dev_dbg(d->slave.dev, "vchan %p: terminate all\n", &c->vc);
- /* Clear the tx descriptor lists */
- spin_lock_irqsave(&c->vc.lock, flags);
- vchan_get_all_descriptors(&c->vc, &head);
+ dev_dbg(d->slave.dev, "vchan %p: pause\n", &c->vc);
+ spin_lock_irqsave(&c->vc.lock, flags);
+ if (c->status == DMA_IN_PROGRESS) {
+ c->status = DMA_PAUSED;

p = c->phy;
if (p) {
- dev_dbg(d->slave.dev, "pchan %u: terminating\n", p->num);
- /* vchan is assigned to a pchan - stop the channel */
- writel(DCSR_RUN | DCSR_IE |
- DCSR_STRTA | DCSR_DONEA |
- DCSR_STRTB | DCSR_DONEB,
- p->base + DMA_DCSR_C);
-
- if (p->txd_load) {
- if (p->txd_load != p->txd_done)
- list_add_tail(&p->txd_load->vd.node, &head);
- p->txd_load = NULL;
- }
- if (p->txd_done) {
- list_add_tail(&p->txd_done->vd.node, &head);
- p->txd_done = NULL;
- }
- c->phy = NULL;
+ writel(DCSR_RUN | DCSR_IE, p->base + DMA_DCSR_C);
+ } else {
spin_lock(&d->lock);
- p->vchan = NULL;
+ list_del_init(&c->node);
spin_unlock(&d->lock);
- tasklet_schedule(&d->task);
}
- spin_unlock_irqrestore(&c->vc.lock, flags);
- vchan_dma_desc_free_list(&c->vc, &head);
- ret = 0;
- break;
+ }
+ spin_unlock_irqrestore(&c->vc.lock, flags);

- case DMA_PAUSE:
- dev_dbg(d->slave.dev, "vchan %p: pause\n", &c->vc);
- spin_lock_irqsave(&c->vc.lock, flags);
- if (c->status == DMA_IN_PROGRESS) {
- c->status = DMA_PAUSED;
+ return 0;
+}

- p = c->phy;
- if (p) {
- writel(DCSR_RUN | DCSR_IE, p->base + DMA_DCSR_C);
- } else {
- spin_lock(&d->lock);
- list_del_init(&c->node);
- spin_unlock(&d->lock);
- }
- }
- spin_unlock_irqrestore(&c->vc.lock, flags);
- ret = 0;
- break;
+static int sa11x0_dma_resume(struct dma_chan *chan)
+{
+ struct sa11x0_dma_chan *c = to_sa11x0_dma_chan(chan);
+ struct sa11x0_dma_dev *d = to_sa11x0_dma(chan->device);
+ struct sa11x0_dma_phy *p;
+ LIST_HEAD(head);
+ unsigned long flags;
+ int ret;

- case DMA_RESUME:
- dev_dbg(d->slave.dev, "vchan %p: resume\n", &c->vc);
- spin_lock_irqsave(&c->vc.lock, flags);
- if (c->status == DMA_PAUSED) {
- c->status = DMA_IN_PROGRESS;
-
- p = c->phy;
- if (p) {
- writel(DCSR_RUN | DCSR_IE, p->base + DMA_DCSR_S);
- } else if (!list_empty(&c->vc.desc_issued)) {
- spin_lock(&d->lock);
- list_add_tail(&c->node, &d->chan_pending);
- spin_unlock(&d->lock);
- }
+ dev_dbg(d->slave.dev, "vchan %p: resume\n", &c->vc);
+ spin_lock_irqsave(&c->vc.lock, flags);
+ if (c->status == DMA_PAUSED) {
+ c->status = DMA_IN_PROGRESS;
+
+ p = c->phy;
+ if (p) {
+ writel(DCSR_RUN | DCSR_IE, p->base + DMA_DCSR_S);
+ } else if (!list_empty(&c->vc.desc_issued)) {
+ spin_lock(&d->lock);
+ list_add_tail(&c->node, &d->chan_pending);
+ spin_unlock(&d->lock);
}
- spin_unlock_irqrestore(&c->vc.lock, flags);
- ret = 0;
- break;
+ }
+ spin_unlock_irqrestore(&c->vc.lock, flags);

- default:
- ret = -ENXIO;
- break;
+ return 0;
+}
+
+static int sa11x0_dma_terminate_all(struct dma_chan *chan)
+{
+ struct sa11x0_dma_chan *c = to_sa11x0_dma_chan(chan);
+ struct sa11x0_dma_dev *d = to_sa11x0_dma(chan->device);
+ struct sa11x0_dma_phy *p;
+ LIST_HEAD(head);
+ unsigned long flags;
+ int ret;
+
+ dev_dbg(d->slave.dev, "vchan %p: terminate all\n", &c->vc);
+ /* Clear the tx descriptor lists */
+ spin_lock_irqsave(&c->vc.lock, flags);
+ vchan_get_all_descriptors(&c->vc, &head);
+
+ p = c->phy;
+ if (p) {
+ dev_dbg(d->slave.dev, "pchan %u: terminating\n", p->num);
+ /* vchan is assigned to a pchan - stop the channel */
+ writel(DCSR_RUN | DCSR_IE |
+ DCSR_STRTA | DCSR_DONEA |
+ DCSR_STRTB | DCSR_DONEB,
+ p->base + DMA_DCSR_C);
+
+ if (p->txd_load) {
+ if (p->txd_load != p->txd_done)
+ list_add_tail(&p->txd_load->vd.node, &head);
+ p->txd_load = NULL;
+ }
+ if (p->txd_done) {
+ list_add_tail(&p->txd_done->vd.node, &head);
+ p->txd_done = NULL;
+ }
+ c->phy = NULL;
+ spin_lock(&d->lock);
+ p->vchan = NULL;
+ spin_unlock(&d->lock);
+ tasklet_schedule(&d->task);
}
+ spin_unlock_irqrestore(&c->vc.lock, flags);
+ vchan_dma_desc_free_list(&c->vc, &head);

- return ret;
+ return 0;
}

struct sa11x0_dma_channel_desc {
@@ -834,7 +841,10 @@ static int sa11x0_dma_init_dmadev(struct dma_device *dmadev,
dmadev->dev = dev;
dmadev->device_alloc_chan_resources = sa11x0_dma_alloc_chan_resources;
dmadev->device_free_chan_resources = sa11x0_dma_free_chan_resources;
- dmadev->device_control = sa11x0_dma_control;
+ dmadev->device_config = sa11x0_dma_slave_config;
+ dmadev->device_pause = sa11x0_dma_pause;
+ dmadev->device_resume = sa11x0_dma_resume;
+ dmadev->device_terminate_all = sa11x0_dma_terminate_all;
dmadev->device_tx_status = sa11x0_dma_tx_status;
dmadev->device_issue_pending = sa11x0_dma_issue_pending;

--
2.1.1

2014-10-28 21:31:04

by Maxime Ripard

Subject: [PATCH v4 32/58] dmaengine: nbpfaxi: Split device_control

Split the device_control callback of the NBPF AXI DMA driver to make use
of the newly introduced callbacks, which will eventually be used to retrieve
slave capabilities.

Signed-off-by: Maxime Ripard <[email protected]>
---
drivers/dma/nbpfaxi.c | 93 +++++++++++++++++++++++++--------------------------
1 file changed, 46 insertions(+), 47 deletions(-)

diff --git a/drivers/dma/nbpfaxi.c b/drivers/dma/nbpfaxi.c
index b4a61ccbb443..a07698fc798f 100644
--- a/drivers/dma/nbpfaxi.c
+++ b/drivers/dma/nbpfaxi.c
@@ -565,13 +565,6 @@ static void nbpf_configure(struct nbpf_device *nbpf)
nbpf_write(nbpf, NBPF_CTRL, NBPF_CTRL_LVINT);
}

-static void nbpf_pause(struct nbpf_channel *chan)
-{
- nbpf_chan_write(chan, NBPF_CHAN_CTRL, NBPF_CHAN_CTRL_SETSUS);
- /* See comment in nbpf_prep_one() */
- nbpf_chan_write(chan, NBPF_CHAN_CTRL, NBPF_CHAN_CTRL_CLREN);
-}
-
/* Generic part */

/* DMA ENGINE functions */
@@ -837,54 +830,58 @@ static void nbpf_chan_idle(struct nbpf_channel *chan)
}
}

-static int nbpf_control(struct dma_chan *dchan, enum dma_ctrl_cmd cmd,
- unsigned long arg)
+static int nbpf_pause(struct dma_chan *dchan)
{
struct nbpf_channel *chan = nbpf_to_chan(dchan);
- struct dma_slave_config *config;

- dev_dbg(dchan->device->dev, "Entry %s(%d)\n", __func__, cmd);
+ dev_dbg(dchan->device->dev, "Entry %s\n", __func__);

- switch (cmd) {
- case DMA_TERMINATE_ALL:
- dev_dbg(dchan->device->dev, "Terminating\n");
- nbpf_chan_halt(chan);
- nbpf_chan_idle(chan);
- break;
+ chan->paused = true;
+ nbpf_chan_write(chan, NBPF_CHAN_CTRL, NBPF_CHAN_CTRL_SETSUS);
+ /* See comment in nbpf_prep_one() */
+ nbpf_chan_write(chan, NBPF_CHAN_CTRL, NBPF_CHAN_CTRL_CLREN);

- case DMA_SLAVE_CONFIG:
- if (!arg)
- return -EINVAL;
- config = (struct dma_slave_config *)arg;
+ return 0;
+}

- /*
- * We could check config->slave_id to match chan->terminal here,
- * but with DT they would be coming from the same source, so
- * such a check would be superflous
- */
+static int nbpf_terminate_all(struct dma_chan *dchan)
+{
+ struct nbpf_channel *chan = nbpf_to_chan(dchan);

- chan->slave_dst_addr = config->dst_addr;
- chan->slave_dst_width = nbpf_xfer_size(chan->nbpf,
- config->dst_addr_width, 1);
- chan->slave_dst_burst = nbpf_xfer_size(chan->nbpf,
- config->dst_addr_width,
- config->dst_maxburst);
- chan->slave_src_addr = config->src_addr;
- chan->slave_src_width = nbpf_xfer_size(chan->nbpf,
- config->src_addr_width, 1);
- chan->slave_src_burst = nbpf_xfer_size(chan->nbpf,
- config->src_addr_width,
- config->src_maxburst);
- break;
+ dev_dbg(dchan->device->dev, "Entry %s\n", __func__);
+ dev_dbg(dchan->device->dev, "Terminating\n");

- case DMA_PAUSE:
- chan->paused = true;
- nbpf_pause(chan);
- break;
+ nbpf_chan_halt(chan);
+ nbpf_chan_idle(chan);

- default:
- return -ENXIO;
- }
+ return 0;
+}
+
+static int nbpf_config(struct dma_chan *dchan,
+ struct dma_slave_config *config)
+{
+ struct nbpf_channel *chan = nbpf_to_chan(dchan);
+
+ dev_dbg(dchan->device->dev, "Entry %s\n", __func__);
+
+ /*
+ * We could check config->slave_id to match chan->terminal here,
+ * but with DT they would be coming from the same source, so
+ * such a check would be superflous
+ */
+
+ chan->slave_dst_addr = config->dst_addr;
+ chan->slave_dst_width = nbpf_xfer_size(chan->nbpf,
+ config->dst_addr_width, 1);
+ chan->slave_dst_burst = nbpf_xfer_size(chan->nbpf,
+ config->dst_addr_width,
+ config->dst_maxburst);
+ chan->slave_src_addr = config->src_addr;
+ chan->slave_src_width = nbpf_xfer_size(chan->nbpf,
+ config->src_addr_width, 1);
+ chan->slave_src_burst = nbpf_xfer_size(chan->nbpf,
+ config->src_addr_width,
+ config->src_maxburst);

return 0;
}
@@ -1426,7 +1423,9 @@ static int nbpf_probe(struct platform_device *pdev)

/* Compulsory for DMA_SLAVE fields */
dma_dev->device_prep_slave_sg = nbpf_prep_slave_sg;
- dma_dev->device_control = nbpf_control;
+ dma_dev->device_config = nbpf_config;
+ dma_dev->device_pause = nbpf_pause;
+ dma_dev->device_terminate_all = nbpf_terminate_all;

platform_set_drvdata(pdev, nbpf);

--
2.1.1

2014-10-28 21:33:31

by Maxime Ripard

Subject: [PATCH v4 35/58] dmaengine: bam-dma: Split device_control

Split the device_control callback of the Qualcomm BAM DMA driver to make use
of the newly introduced callbacks, which will eventually be used to retrieve
slave capabilities.

Signed-off-by: Maxime Ripard <[email protected]>
---
drivers/dma/qcom_bam_dma.c | 85 +++++++++++++++++++++++-----------------------
1 file changed, 43 insertions(+), 42 deletions(-)

diff --git a/drivers/dma/qcom_bam_dma.c b/drivers/dma/qcom_bam_dma.c
index 7a4bbb0f80a5..03b1bf2afd4e 100644
--- a/drivers/dma/qcom_bam_dma.c
+++ b/drivers/dma/qcom_bam_dma.c
@@ -448,11 +448,18 @@ static void bam_free_chan(struct dma_chan *chan)
* Sets slave configuration for channel
*
*/
-static void bam_slave_config(struct bam_chan *bchan,
- struct dma_slave_config *cfg)
+static int bam_slave_config(struct dma_chan *chan,
+ struct dma_slave_config *cfg)
{
+ struct bam_chan *bchan = to_bam_chan(chan);
+ unsigned long flag;
+
+ spin_lock_irqsave(&bchan->vc.lock, flag);
memcpy(&bchan->slave, cfg, sizeof(*cfg));
bchan->reconfigure = 1;
+ spin_unlock_irqrestore(&bchan->vc.lock, flag);
+
+ return 0;
}

/**
@@ -545,8 +552,9 @@ err_out:
* No callbacks are done
*
*/
-static void bam_dma_terminate_all(struct bam_chan *bchan)
+static int bam_dma_terminate_all(struct dma_chan *chan)
{
+ struct bam_chan *bchan = to_bam_chan(chan);
unsigned long flag;
LIST_HEAD(head);

@@ -561,56 +569,46 @@ static void bam_dma_terminate_all(struct bam_chan *bchan)
spin_unlock_irqrestore(&bchan->vc.lock, flag);

vchan_dma_desc_free_list(&bchan->vc, &head);
+
+ return 0;
}

/**
- * bam_control - DMA device control
+ * bam_pause - Pause DMA channel
* @chan: dma channel
- * @cmd: control cmd
- * @arg: cmd argument
*
- * Perform DMA control command
+ */
+static int bam_pause(struct dma_chan *chan)
+{
+ struct bam_chan *bchan = to_bam_chan(chan);
+ struct bam_device *bdev = bchan->bdev;
+ unsigned long flag;
+
+ spin_lock_irqsave(&bchan->vc.lock, flag);
+ writel_relaxed(1, bdev->regs + BAM_P_HALT(bchan->id));
+ bchan->paused = 1;
+ spin_unlock_irqrestore(&bchan->vc.lock, flag);
+
+ return 0;
+}
+
+/**
+ * bam_resume - Resume DMA channel operations
+ * @chan: dma channel
*
*/
-static int bam_control(struct dma_chan *chan, enum dma_ctrl_cmd cmd,
- unsigned long arg)
+static int bam_resume(struct dma_chan *chan)
{
struct bam_chan *bchan = to_bam_chan(chan);
struct bam_device *bdev = bchan->bdev;
- int ret = 0;
unsigned long flag;

- switch (cmd) {
- case DMA_PAUSE:
- spin_lock_irqsave(&bchan->vc.lock, flag);
- writel_relaxed(1, bdev->regs + BAM_P_HALT(bchan->id));
- bchan->paused = 1;
- spin_unlock_irqrestore(&bchan->vc.lock, flag);
- break;
-
- case DMA_RESUME:
- spin_lock_irqsave(&bchan->vc.lock, flag);
- writel_relaxed(0, bdev->regs + BAM_P_HALT(bchan->id));
- bchan->paused = 0;
- spin_unlock_irqrestore(&bchan->vc.lock, flag);
- break;
-
- case DMA_TERMINATE_ALL:
- bam_dma_terminate_all(bchan);
- break;
-
- case DMA_SLAVE_CONFIG:
- spin_lock_irqsave(&bchan->vc.lock, flag);
- bam_slave_config(bchan, (struct dma_slave_config *)arg);
- spin_unlock_irqrestore(&bchan->vc.lock, flag);
- break;
-
- default:
- ret = -ENXIO;
- break;
- }
+ spin_lock_irqsave(&bchan->vc.lock, flag);
+ writel_relaxed(0, bdev->regs + BAM_P_HALT(bchan->id));
+ bchan->paused = 0;
+ spin_unlock_irqrestore(&bchan->vc.lock, flag);

- return ret;
+ return 0;
}

/**
@@ -1050,7 +1048,10 @@ static int bam_dma_probe(struct platform_device *pdev)
bdev->common.device_alloc_chan_resources = bam_alloc_chan;
bdev->common.device_free_chan_resources = bam_free_chan;
bdev->common.device_prep_slave_sg = bam_prep_slave_sg;
- bdev->common.device_control = bam_control;
+ bdev->common.device_config = bam_slave_config;
+ bdev->common.device_pause = bam_pause;
+ bdev->common.device_resume = bam_resume;
+ bdev->common.device_terminate_all = bam_dma_terminate_all;
bdev->common.device_issue_pending = bam_issue_pending;
bdev->common.device_tx_status = bam_tx_status;
bdev->common.dev = bdev->dev;
@@ -1089,7 +1090,7 @@ static int bam_dma_remove(struct platform_device *pdev)
devm_free_irq(bdev->dev, bdev->irq, bdev);

for (i = 0; i < bdev->num_channels; i++) {
- bam_dma_terminate_all(&bdev->channels[i]);
+ bam_dma_terminate_all(&bdev->channels[i].vc.chan);
tasklet_kill(&bdev->channels[i].vc.task);

dma_free_writecombine(bdev->dev, BAM_DESC_FIFO_SIZE,
--
2.1.1
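
Note that the locking that used to sit in the DMA_SLAVE_CONFIG case of
bam_control() now lives in the config callback itself. For a hypothetical
driver (every name below is made up), the general shape of such a callback
would be:

    static int foo_dma_config(struct dma_chan *chan,
                              struct dma_slave_config *cfg)
    {
        struct foo_chan *fchan = to_foo_chan(chan);
        unsigned long flags;

        spin_lock_irqsave(&fchan->lock, flags);
        /* stash the configuration, applied when the next descriptor is set up */
        fchan->cfg = *cfg;
        fchan->reconfigure = true;
        spin_unlock_irqrestore(&fchan->lock, flags);

        return 0;
    }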

2014-10-28 21:33:52

by Maxime Ripard

Subject: [PATCH v4 58/58] dmaengine: Remove device_control and device_slave_caps

Now that device_control has been split into several callbacks and
device_slave_caps has been made redundant, we can safely remove both.

Signed-off-by: Maxime Ripard <[email protected]>
Acked-by: Laurent Pinchart <[email protected]>
---
include/linux/dmaengine.h | 52 ++++++-----------------------------------------
1 file changed, 6 insertions(+), 46 deletions(-)

diff --git a/include/linux/dmaengine.h b/include/linux/dmaengine.h
index be9e60aa8f5e..b01b7a7c581b 100644
--- a/include/linux/dmaengine.h
+++ b/include/linux/dmaengine.h
@@ -189,25 +189,6 @@ enum dma_ctrl_flags {
};

/**
- * enum dma_ctrl_cmd - DMA operations that can optionally be exercised
- * on a running channel.
- * @DMA_TERMINATE_ALL: terminate all ongoing transfers
- * @DMA_PAUSE: pause ongoing transfers
- * @DMA_RESUME: resume paused transfer
- * @DMA_SLAVE_CONFIG: this command is only implemented by DMA controllers
- * that need to runtime reconfigure the slave channels (as opposed to passing
- * configuration data in statically from the platform). An additional
- * argument of struct dma_slave_config must be passed in with this
- * command.
- */
-enum dma_ctrl_cmd {
- DMA_TERMINATE_ALL,
- DMA_PAUSE,
- DMA_RESUME,
- DMA_SLAVE_CONFIG,
-};
-
-/**
* enum sum_check_bits - bit position of pq_check_flags
*/
enum sum_check_bits {
@@ -336,9 +317,8 @@ enum dma_slave_buswidth {
* This struct is passed in as configuration data to a DMA engine
* in order to set up a certain channel for DMA transport at runtime.
* The DMA device/engine has to provide support for an additional
- * command in the channel config interface, DMA_SLAVE_CONFIG
- * and this struct will then be passed in as an argument to the
- * DMA engine device_control() function.
+ * callback in the dma_device structure, device_config and this struct
+ * will then be passed in as an argument to the function.
*
* The rationale for adding configuration information to this struct is as
* follows: if it is likely that more than one DMA slave controllers in
@@ -617,8 +597,6 @@ struct dma_tx_state {
* @device_prep_interleaved_dma: Transfer expression in a generic way.
* @device_config: Pushes a new configuration to a channel, return 0 or an error
* code
- * @device_control: manipulate all pending operations on a channel, returns
- * zero or error code
* @device_pause: Pauses any transfer happening on a channel. Returns
* 0 or an error code
* @device_resume: Resumes any transfer on a channel previously
@@ -630,7 +608,6 @@ struct dma_tx_state {
* struct with auxiliary transfer status information, otherwise the call
* will just return a simple status code
* @device_issue_pending: push pending transactions to hardware
- * @device_slave_caps: return the slave channel capabilities
*/
struct dma_device {

@@ -697,8 +674,6 @@ struct dma_device {

int (*device_config)(struct dma_chan *chan,
struct dma_slave_config *config);
- int (*device_control)(struct dma_chan *chan, enum dma_ctrl_cmd cmd,
- unsigned long arg);
int (*device_pause)(struct dma_chan *chan);
int (*device_resume)(struct dma_chan *chan);
int (*device_terminate_all)(struct dma_chan *chan);
@@ -707,27 +682,15 @@ struct dma_device {
dma_cookie_t cookie,
struct dma_tx_state *txstate);
void (*device_issue_pending)(struct dma_chan *chan);
- int (*device_slave_caps)(struct dma_chan *chan, struct dma_slave_caps *caps);
};

-static inline int dmaengine_device_control(struct dma_chan *chan,
- enum dma_ctrl_cmd cmd,
- unsigned long arg)
-{
- if (chan->device->device_control)
- return chan->device->device_control(chan, cmd, arg);
-
- return -ENOSYS;
-}
-
static inline int dmaengine_slave_config(struct dma_chan *chan,
struct dma_slave_config *config)
{
if (chan->device->device_config)
return chan->device->device_config(chan, config);

- return dmaengine_device_control(chan, DMA_SLAVE_CONFIG,
- (unsigned long)config);
+ return -ENOSYS;
}

static inline bool is_slave_direction(enum dma_transfer_direction direction)
@@ -807,9 +770,6 @@ static inline int dma_get_slave_caps(struct dma_chan *chan, struct dma_slave_cap
if (!test_bit(DMA_SLAVE, device->cap_mask.bits))
return -ENXIO;

- if (device->device_slave_caps)
- return device->device_slave_caps(chan, caps);
-
/*
* Check whether it reports it uses the generic slave
* capabilities, if not, that means it doesn't support any
@@ -834,7 +794,7 @@ static inline int dmaengine_terminate_all(struct dma_chan *chan)
if (chan->device->device_terminate_all)
return chan->device->device_terminate_all(chan);

- return dmaengine_device_control(chan, DMA_TERMINATE_ALL, 0);
+ return -ENOSYS;
}

static inline int dmaengine_pause(struct dma_chan *chan)
@@ -842,7 +802,7 @@ static inline int dmaengine_pause(struct dma_chan *chan)
if (chan->device->device_pause)
return chan->device->device_pause(chan);

- return dmaengine_device_control(chan, DMA_PAUSE, 0);
+ return -ENOSYS;
}

static inline int dmaengine_resume(struct dma_chan *chan)
@@ -850,7 +810,7 @@ static inline int dmaengine_resume(struct dma_chan *chan)
if (chan->device->device_resume)
return chan->device->device_resume(chan);

- return dmaengine_device_control(chan, DMA_RESUME, 0);
+ return -ENOSYS;
}

static inline enum dma_status dmaengine_tx_status(struct dma_chan *chan,
--
2.1.1
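
With the fallback gone, the inline helpers simply return -ENOSYS when a
driver doesn't provide the matching callback, so clients can probe optional
operations at run time. A small sketch:

    ret = dmaengine_pause(chan);
    if (ret == -ENOSYS) {
        /* this controller cannot pause, fall back to a full stop */
        dmaengine_terminate_all(chan);
    }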

2014-10-28 21:31:02

by Maxime Ripard

Subject: [PATCH v4 44/58] dmaengine: mv_xor: Remove device_control

The Marvell XOR engine doesn't implement any of the operations that used to
be exposed through device_control, so the callback doesn't need to be
defined. Since device_control is going to be deprecated, remove it
altogether.

Signed-off-by: Maxime Ripard <[email protected]>
---
drivers/dma/mv_xor.c | 9 ---------
1 file changed, 9 deletions(-)

diff --git a/drivers/dma/mv_xor.c b/drivers/dma/mv_xor.c
index a63837ca1410..10d6dd8b8ee4 100644
--- a/drivers/dma/mv_xor.c
+++ b/drivers/dma/mv_xor.c
@@ -928,14 +928,6 @@ out:
return err;
}

-/* This driver does not implement any of the optional DMA operations. */
-static int
-mv_xor_control(struct dma_chan *chan, enum dma_ctrl_cmd cmd,
- unsigned long arg)
-{
- return -ENOSYS;
-}
-
static int mv_xor_channel_remove(struct mv_xor_chan *mv_chan)
{
struct dma_chan *chan, *_chan;
@@ -1008,7 +1000,6 @@ mv_xor_channel_add(struct mv_xor_device *xordev,
dma_dev->device_free_chan_resources = mv_xor_free_chan_resources;
dma_dev->device_tx_status = mv_xor_status;
dma_dev->device_issue_pending = mv_xor_issue_pending;
- dma_dev->device_control = mv_xor_control;
dma_dev->dev = &pdev->dev;

/* set prep routines based on capability */
--
2.1.1

2014-10-28 21:34:14

by Maxime Ripard

Subject: [PATCH v4 57/58] dmaengine: Add a warning for drivers not using the generic slave caps retrieval

For slave capabilities retrieval to be really useful, most drivers need to
implement it.

Hence, we need to be slightly more aggressive and trigger a warning at
registration time for drivers that don't fill in their capability
information, in order to encourage them to do so.

Signed-off-by: Maxime Ripard <[email protected]>
---
drivers/dma/dmaengine.c | 3 +++
1 file changed, 3 insertions(+)

diff --git a/drivers/dma/dmaengine.c b/drivers/dma/dmaengine.c
index 98e9431f85ec..300c8cd2786c 100644
--- a/drivers/dma/dmaengine.c
+++ b/drivers/dma/dmaengine.c
@@ -827,6 +827,9 @@ int dma_async_device_register(struct dma_device *device)
BUG_ON(!device->device_issue_pending);
BUG_ON(!device->dev);

+ WARN(dma_has_cap(DMA_SLAVE, device->cap_mask) && !device->directions,
+ "this driver doesn't support generic slave capabilities reporting\n");
+
/* note: this only matters in the
* CONFIG_ASYNC_TX_ENABLE_CHANNEL_SWITCH=n case
*/
--
2.1.1
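
Concretely, to avoid the warning a DMA_SLAVE driver fills in the capability
fields of its dma_device before registering it. A minimal sketch with
made-up values, where dd points to the driver's struct dma_device (real
drivers advertise whatever their hardware supports, as the conversions in
this series do):

    dd->src_addr_widths     = BIT(DMA_SLAVE_BUSWIDTH_4_BYTES);
    dd->dst_addr_widths     = BIT(DMA_SLAVE_BUSWIDTH_4_BYTES);
    dd->directions          = BIT(DMA_DEV_TO_MEM) | BIT(DMA_MEM_TO_DEV);
    dd->residue_granularity = DMA_RESIDUE_GRANULARITY_BURST;

    ret = dma_async_device_register(dd);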

2014-10-28 21:34:33

by Maxime Ripard

Subject: [PATCH v4 56/58] dmaengine: sun6i: Declare slave capabilities for the generic code

The generic slave caps code can now use the capabilities assigned to the
dma_device itself, instead of relying on a driver-provided callback.

Make use of this code.

Signed-off-by: Maxime Ripard <[email protected]>
---
drivers/dma/sun6i-dma.c | 9 +++++++++
1 file changed, 9 insertions(+)

diff --git a/drivers/dma/sun6i-dma.c b/drivers/dma/sun6i-dma.c
index c3a6b3f789a8..8c30b2000d9a 100644
--- a/drivers/dma/sun6i-dma.c
+++ b/drivers/dma/sun6i-dma.c
@@ -921,6 +921,15 @@ static int sun6i_dma_probe(struct platform_device *pdev)
sdc->slave.device_resume = sun6i_dma_resume;
sdc->slave.device_terminate_all = sun6i_dma_terminate_all;
sdc->slave.chancnt = NR_MAX_VCHANS;
+ sdc->slave.src_addr_widths = BIT(DMA_SLAVE_BUSWIDTH_1_BYTE) |
+ BIT(DMA_SLAVE_BUSWIDTH_2_BYTES) |
+ BIT(DMA_SLAVE_BUSWIDTH_4_BYTES);
+ sdc->slave.dst_addr_widths = BIT(DMA_SLAVE_BUSWIDTH_1_BYTE) |
+ BIT(DMA_SLAVE_BUSWIDTH_2_BYTES) |
+ BIT(DMA_SLAVE_BUSWIDTH_4_BYTES);
+ sdc->slave.directions = BIT(DMA_DEV_TO_MEM) |
+ BIT(DMA_MEM_TO_DEV);
+ sdc->slave.residue_granularity = DMA_RESIDUE_GRANULARITY_BURST;
sdc->slave.dev = &pdev->dev;

sdc->pchans = devm_kcalloc(&pdev->dev, NR_MAX_CHANNELS,
--
2.1.1
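
On the consumer side, generic layers can now query any converted controller
through the existing dma_get_slave_caps() helper. A minimal sketch, where
use_dma is just a placeholder:

    struct dma_slave_caps caps;
    int ret;

    ret = dma_get_slave_caps(chan, &caps);
    if (!ret && (caps.dst_addr_widths & BIT(DMA_SLAVE_BUSWIDTH_4_BYTES)))
        /* the channel can do 32-bit accesses on the device side */
        use_dma = true;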

2014-10-28 21:34:52

by Maxime Ripard

Subject: [PATCH v4 55/58] dmaengine: sirf: Declare slave capabilities for the generic code

The generic slave caps code can now use the capabilities assigned to the
dma_device itself, instead of relying on a driver-provided callback.

Make use of this code.

Signed-off-by: Maxime Ripard <[email protected]>
---
drivers/dma/sirf-dma.c | 16 +++-------------
1 file changed, 3 insertions(+), 13 deletions(-)

diff --git a/drivers/dma/sirf-dma.c b/drivers/dma/sirf-dma.c
index 711c2bae9003..b583a7bdb6b1 100644
--- a/drivers/dma/sirf-dma.c
+++ b/drivers/dma/sirf-dma.c
@@ -628,18 +628,6 @@ EXPORT_SYMBOL(sirfsoc_dma_filter_id);
BIT(DMA_SLAVE_BUSWIDTH_4_BYTES) | \
BIT(DMA_SLAVE_BUSWIDTH_8_BYTES))

-static int sirfsoc_dma_device_slave_caps(struct dma_chan *dchan,
- struct dma_slave_caps *caps)
-{
- caps->src_addr_widths = SIRFSOC_DMA_BUSWIDTHS;
- caps->dst_addr_widths = SIRFSOC_DMA_BUSWIDTHS;
- caps->directions = BIT(DMA_DEV_TO_MEM) | BIT(DMA_MEM_TO_DEV);
- caps->cmd_pause = true;
- caps->cmd_terminate = true;
-
- return 0;
-}
-
static struct dma_chan *of_dma_sirfsoc_xlate(struct of_phandle_args *dma_spec,
struct of_dma *ofdma)
{
@@ -727,7 +715,9 @@ static int sirfsoc_dma_probe(struct platform_device *op)
dma->device_tx_status = sirfsoc_dma_tx_status;
dma->device_prep_interleaved_dma = sirfsoc_dma_prep_interleaved;
dma->device_prep_dma_cyclic = sirfsoc_dma_prep_cyclic;
- dma->device_slave_caps = sirfsoc_dma_device_slave_caps;
+ dma->src_addr_widths = SIRFSOC_DMA_BUSWIDTHS;
+ dma->dst_addr_widths = SIRFSOC_DMA_BUSWIDTHS;
+ dma->directions = BIT(DMA_DEV_TO_MEM) | BIT(DMA_MEM_TO_DEV);

INIT_LIST_HEAD(&dma->channels);
dma_cap_set(DMA_SLAVE, dma->cap_mask);
--
2.1.1

2014-10-28 21:35:14

by Maxime Ripard

Subject: [PATCH v4 53/58] dmaengine: omap: Declare slave capabilities for the generic code

The generic slave caps code can now use the capabilities assigned to the
dma_device itself, instead of relying on a driver-provided callback.

Make use of this code.

Signed-off-by: Maxime Ripard <[email protected]>
---
drivers/dma/omap-dma.c | 18 ++++--------------
1 file changed, 4 insertions(+), 14 deletions(-)

diff --git a/drivers/dma/omap-dma.c b/drivers/dma/omap-dma.c
index 90c0bdd53bb0..8b38b6eab137 100644
--- a/drivers/dma/omap-dma.c
+++ b/drivers/dma/omap-dma.c
@@ -1072,19 +1072,6 @@ static void omap_dma_free(struct omap_dmadev *od)
BIT(DMA_SLAVE_BUSWIDTH_2_BYTES) | \
BIT(DMA_SLAVE_BUSWIDTH_4_BYTES))

-static int omap_dma_device_slave_caps(struct dma_chan *dchan,
- struct dma_slave_caps *caps)
-{
- caps->src_addr_widths = OMAP_DMA_BUSWIDTHS;
- caps->dst_addr_widths = OMAP_DMA_BUSWIDTHS;
- caps->directions = BIT(DMA_DEV_TO_MEM) | BIT(DMA_MEM_TO_DEV);
- caps->cmd_pause = true;
- caps->cmd_terminate = true;
- caps->residue_granularity = DMA_RESIDUE_GRANULARITY_BURST;
-
- return 0;
-}
-
static int omap_dma_probe(struct platform_device *pdev)
{
struct omap_dmadev *od;
@@ -1118,7 +1105,10 @@ static int omap_dma_probe(struct platform_device *pdev)
od->ddev.device_pause = omap_dma_pause;
od->ddev.device_resume = omap_dma_resume;
od->ddev.device_terminate_all = omap_dma_terminate_all;
- od->ddev.device_slave_caps = omap_dma_device_slave_caps;
+ od->ddev.src_addr_widths = OMAP_DMA_BUSWIDTHS;
+ od->ddev.dst_addr_widths = OMAP_DMA_BUSWIDTHS;
+ od->ddev.directions = BIT(DMA_DEV_TO_MEM) | BIT(DMA_MEM_TO_DEV);
+ od->ddev.residue_granularity = DMA_RESIDUE_GRANULARITY_BURST;
od->ddev.dev = &pdev->dev;
INIT_LIST_HEAD(&od->ddev.channels);
INIT_LIST_HEAD(&od->pending);
--
2.1.1

2014-10-28 21:35:13

by Maxime Ripard

Subject: [PATCH v4 54/58] dmaengine: pl330: Declare slave capabilities for the generic code

The generic slave caps code can now use the capabilities assigned to the
dma_device itself, instead of relying on a driver-provided callback.

Make use of this code.

Signed-off-by: Maxime Ripard <[email protected]>
---
drivers/dma/pl330.c | 18 ++++--------------
1 file changed, 4 insertions(+), 14 deletions(-)

diff --git a/drivers/dma/pl330.c b/drivers/dma/pl330.c
index bd0e47eefa34..7286664ee8be 100644
--- a/drivers/dma/pl330.c
+++ b/drivers/dma/pl330.c
@@ -2569,19 +2569,6 @@ static irqreturn_t pl330_irq_handler(int irq, void *data)
BIT(DMA_SLAVE_BUSWIDTH_4_BYTES) | \
BIT(DMA_SLAVE_BUSWIDTH_8_BYTES)

-static int pl330_dma_device_slave_caps(struct dma_chan *dchan,
- struct dma_slave_caps *caps)
-{
- caps->src_addr_widths = PL330_DMA_BUSWIDTHS;
- caps->dst_addr_widths = PL330_DMA_BUSWIDTHS;
- caps->directions = BIT(DMA_DEV_TO_MEM) | BIT(DMA_MEM_TO_DEV);
- caps->cmd_pause = false;
- caps->cmd_terminate = true;
- caps->residue_granularity = DMA_RESIDUE_GRANULARITY_DESCRIPTOR;
-
- return 0;
-}
-
static int
pl330_probe(struct amba_device *adev, const struct amba_id *id)
{
@@ -2701,7 +2688,10 @@ pl330_probe(struct amba_device *adev, const struct amba_id *id)
pd->device_config = pl330_config;
pd->device_terminate_all = pl330_terminate_all;
pd->device_issue_pending = pl330_issue_pending;
- pd->device_slave_caps = pl330_dma_device_slave_caps;
+ pd->src_addr_widths = PL330_DMA_BUSWIDTHS;
+ pd->dst_addr_widths = PL330_DMA_BUSWIDTHS;
+ pd->directions = BIT(DMA_DEV_TO_MEM) | BIT(DMA_MEM_TO_DEV);
+ pd->residue_granularity = DMA_RESIDUE_GRANULARITY_DESCRIPTOR;

ret = dma_async_device_register(pd);
if (ret) {
--
2.1.1
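
Note that cmd_pause and cmd_terminate are not declared anywhere anymore:
since pause and terminate now have dedicated callbacks, the core can infer
them from the callbacks being set. Roughly, in the generic capabilities
code (simplified sketch):

    caps->cmd_pause     = !!device->device_pause;
    caps->cmd_terminate = !!device->device_terminate_all;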

2014-10-28 21:30:59

by Maxime Ripard

Subject: [PATCH v4 45/58] dmaengine: pch-dma: Rename device_control

Rename the device_control callback of the Intel PCH DMA driver to
terminate_all, since terminating transfers is all it really does. That will
eventually be used to retrieve slave capabilities.

Signed-off-by: Maxime Ripard <[email protected]>
---
drivers/dma/pch_dma.c | 8 ++------
1 file changed, 2 insertions(+), 6 deletions(-)

diff --git a/drivers/dma/pch_dma.c b/drivers/dma/pch_dma.c
index 9f9ca9fe5ce6..d03b9c93ccab 100644
--- a/drivers/dma/pch_dma.c
+++ b/drivers/dma/pch_dma.c
@@ -665,16 +665,12 @@ err_desc_get:
return NULL;
}

-static int pd_device_control(struct dma_chan *chan, enum dma_ctrl_cmd cmd,
- unsigned long arg)
+static int pd_device_terminate_all(struct dma_chan *chan)
{
struct pch_dma_chan *pd_chan = to_pd_chan(chan);
struct pch_dma_desc *desc, *_d;
LIST_HEAD(list);

- if (cmd != DMA_TERMINATE_ALL)
- return -ENXIO;
-
spin_lock_irq(&pd_chan->lock);

pdc_set_mode(&pd_chan->chan, DMA_CTL0_DISABLE);
@@ -932,7 +928,7 @@ static int pch_dma_probe(struct pci_dev *pdev,
pd->dma.device_tx_status = pd_tx_status;
pd->dma.device_issue_pending = pd_issue_pending;
pd->dma.device_prep_slave_sg = pd_prep_slave_sg;
- pd->dma.device_control = pd_device_control;
+ pd->dma.device_terminate_all = pd_device_terminate_all;

err = dma_async_device_register(&pd->dma);
if (err) {
--
2.1.1

2014-10-28 21:36:36

by Maxime Ripard

Subject: [PATCH v4 52/58] dmaengine: nbpfaxi: Declare slave capabilities for the generic code

The generic slave caps code can now use the capabilities assigned to the
dma_device itself, instead of relying on a driver-provided callback.

Make use of this code.

Signed-off-by: Maxime Ripard <[email protected]>
---
drivers/dma/nbpfaxi.c | 17 ++++-------------
1 file changed, 4 insertions(+), 13 deletions(-)

diff --git a/drivers/dma/nbpfaxi.c b/drivers/dma/nbpfaxi.c
index a07698fc798f..11bcfc1f413a 100644
--- a/drivers/dma/nbpfaxi.c
+++ b/drivers/dma/nbpfaxi.c
@@ -1069,18 +1069,6 @@ static void nbpf_free_chan_resources(struct dma_chan *dchan)
}
}

-static int nbpf_slave_caps(struct dma_chan *dchan,
- struct dma_slave_caps *caps)
-{
- caps->src_addr_widths = NBPF_DMA_BUSWIDTHS;
- caps->dst_addr_widths = NBPF_DMA_BUSWIDTHS;
- caps->directions = BIT(DMA_DEV_TO_MEM) | BIT(DMA_MEM_TO_DEV);
- caps->cmd_pause = false;
- caps->cmd_terminate = true;
-
- return 0;
-}
-
static struct dma_chan *nbpf_of_xlate(struct of_phandle_args *dma_spec,
struct of_dma *ofdma)
{
@@ -1411,7 +1399,6 @@ static int nbpf_probe(struct platform_device *pdev)
dma_dev->device_prep_dma_memcpy = nbpf_prep_memcpy;
dma_dev->device_tx_status = nbpf_tx_status;
dma_dev->device_issue_pending = nbpf_issue_pending;
- dma_dev->device_slave_caps = nbpf_slave_caps;

/*
* If we drop support for unaligned MEMCPY buffer addresses and / or
@@ -1427,6 +1414,10 @@ static int nbpf_probe(struct platform_device *pdev)
dma_dev->device_pause = nbpf_pause;
dma_dev->device_terminate_all = nbpf_terminate_all;

+ dma_dev->src_addr_widths = NBPF_DMA_BUSWIDTHS;
+ dma_dev->dst_addr_widths = NBPF_DMA_BUSWIDTHS;
+ dma_dev->directions = BIT(DMA_DEV_TO_MEM) | BIT(DMA_MEM_TO_DEV);
+
platform_set_drvdata(pdev, nbpf);

ret = clk_prepare_enable(nbpf->clk);
--
2.1.1

2014-10-28 21:30:57

by Maxime Ripard

Subject: [PATCH v4 47/58] dmaengine: txx9: Rename device_control

Rename the device_control callback of the TXX9 DMA driver to terminate_all,
since terminating transfers is all it really does. That will eventually be
used to retrieve slave capabilities.

Signed-off-by: Maxime Ripard <[email protected]>
---
drivers/dma/txx9dmac.c | 9 ++-------
1 file changed, 2 insertions(+), 7 deletions(-)

diff --git a/drivers/dma/txx9dmac.c b/drivers/dma/txx9dmac.c
index 17686caf64d5..cb24d3efe3a2 100644
--- a/drivers/dma/txx9dmac.c
+++ b/drivers/dma/txx9dmac.c
@@ -901,17 +901,12 @@ txx9dmac_prep_slave_sg(struct dma_chan *chan, struct scatterlist *sgl,
return &first->txd;
}

-static int txx9dmac_control(struct dma_chan *chan, enum dma_ctrl_cmd cmd,
- unsigned long arg)
+static int txx9dmac_terminate_all(struct dma_chan *chan)
{
struct txx9dmac_chan *dc = to_txx9dmac_chan(chan);
struct txx9dmac_desc *desc, *_desc;
LIST_HEAD(list);

- /* Only supports DMA_TERMINATE_ALL */
- if (cmd != DMA_TERMINATE_ALL)
- return -EINVAL;
-
dev_vdbg(chan2dev(chan), "terminate_all\n");
spin_lock_bh(&dc->lock);

@@ -1109,7 +1104,7 @@ static int __init txx9dmac_chan_probe(struct platform_device *pdev)
dc->dma.dev = &pdev->dev;
dc->dma.device_alloc_chan_resources = txx9dmac_alloc_chan_resources;
dc->dma.device_free_chan_resources = txx9dmac_free_chan_resources;
- dc->dma.device_control = txx9dmac_control;
+ dc->dma.device_terminate_all = txx9dmac_terminate_all;
dc->dma.device_tx_status = txx9dmac_tx_status;
dc->dma.device_issue_pending = txx9dmac_issue_pending;
if (pdata && pdata->memcpy_chan == ch) {
--
2.1.1

2014-10-28 21:37:43

by Maxime Ripard

Subject: [PATCH v4 51/58] dmaengine: edma: Declare slave capabilities for the generic code

The generic slave caps code can now use the capabilities assigned to the
dma_device itself, instead of relying on a driver-provided callback.

Make use of this code.

Signed-off-by: Maxime Ripard <[email protected]>
---
drivers/dma/edma.c | 20 ++++++--------------
1 file changed, 6 insertions(+), 14 deletions(-)

diff --git a/drivers/dma/edma.c b/drivers/dma/edma.c
index 56735a0aa3a8..7aaa59048e2d 100644
--- a/drivers/dma/edma.c
+++ b/drivers/dma/edma.c
@@ -971,19 +971,6 @@ static void __init edma_chan_init(struct edma_cc *ecc,
BIT(DMA_SLAVE_BUSWIDTH_3_BYTES) | \
BIT(DMA_SLAVE_BUSWIDTH_4_BYTES))

-static int edma_dma_device_slave_caps(struct dma_chan *dchan,
- struct dma_slave_caps *caps)
-{
- caps->src_addr_widths = EDMA_DMA_BUSWIDTHS;
- caps->dst_addr_widths = EDMA_DMA_BUSWIDTHS;
- caps->directions = BIT(DMA_DEV_TO_MEM) | BIT(DMA_MEM_TO_DEV);
- caps->cmd_pause = true;
- caps->cmd_terminate = true;
- caps->residue_granularity = DMA_RESIDUE_GRANULARITY_BURST;
-
- return 0;
-}
-
static void edma_dma_init(struct edma_cc *ecc, struct dma_device *dma,
struct device *dev)
{
@@ -998,7 +985,12 @@ static void edma_dma_init(struct edma_cc *ecc, struct dma_device *dma,
dma->device_pause = edma_dma_pause;
dma->device_resume = edma_dma_resume;
dma->device_terminate_all = edma_terminate_all;
- dma->device_slave_caps = edma_dma_device_slave_caps;
+
+ dma->src_addr_widths = EDMA_DMA_BUSWIDTHS;
+ dma->dst_addr_widths = EDMA_DMA_BUSWIDTHS;
+ dma->directions = BIT(DMA_DEV_TO_MEM) | BIT(DMA_MEM_TO_DEV);
+ dma->residue_granularity = DMA_RESIDUE_GRANULARITY_BURST;
+
dma->dev = dev;

/*
--
2.1.1

2014-10-28 21:30:55

by Maxime Ripard

Subject: [PATCH v4 46/58] dmaengine: td: Rename device_control

Rename the device_control callback of the Timberdale DMA driver to
terminate_all, since terminating transfers is all it really does. That will
eventually be used to retrieve slave capabilities.

Signed-off-by: Maxime Ripard <[email protected]>
---
drivers/dma/timb_dma.c | 8 ++------
1 file changed, 2 insertions(+), 6 deletions(-)

diff --git a/drivers/dma/timb_dma.c b/drivers/dma/timb_dma.c
index 4506a7b4f972..0536a75354d6 100644
--- a/drivers/dma/timb_dma.c
+++ b/drivers/dma/timb_dma.c
@@ -561,8 +561,7 @@ static struct dma_async_tx_descriptor *td_prep_slave_sg(struct dma_chan *chan,
return &td_desc->txd;
}

-static int td_control(struct dma_chan *chan, enum dma_ctrl_cmd cmd,
- unsigned long arg)
+static int td_terminate_all(struct dma_chan *chan)
{
struct timb_dma_chan *td_chan =
container_of(chan, struct timb_dma_chan, chan);
@@ -570,9 +569,6 @@ static int td_control(struct dma_chan *chan, enum dma_ctrl_cmd cmd,

dev_dbg(chan2dev(chan), "%s: Entry\n", __func__);

- if (cmd != DMA_TERMINATE_ALL)
- return -ENXIO;
-
/* first the easy part, put the queue into the free list */
spin_lock_bh(&td_chan->lock);
list_for_each_entry_safe(td_desc, _td_desc, &td_chan->queue,
@@ -697,7 +693,7 @@ static int td_probe(struct platform_device *pdev)
dma_cap_set(DMA_SLAVE, td->dma.cap_mask);
dma_cap_set(DMA_PRIVATE, td->dma.cap_mask);
td->dma.device_prep_slave_sg = td_prep_slave_sg;
- td->dma.device_control = td_control;
+ td->dma.device_terminate_all = td_terminate_all;

td->dma.dev = &pdev->dev;

--
2.1.1

2014-10-28 21:38:34

by Maxime Ripard

Subject: [PATCH v4 50/58] dmaengine: fsl-edma: Declare slave capabilities for the generic code

The generic slave caps code can now use the capabilities assigned to the
dma_device itself, instead of relying on a driver-provided callback.

Make use of this code.

Signed-off-by: Maxime Ripard <[email protected]>
---
drivers/dma/fsl-edma.c | 17 ++++-------------
1 file changed, 4 insertions(+), 13 deletions(-)

diff --git a/drivers/dma/fsl-edma.c b/drivers/dma/fsl-edma.c
index b5b7474f8859..1382332e7e49 100644
--- a/drivers/dma/fsl-edma.c
+++ b/drivers/dma/fsl-edma.c
@@ -784,18 +784,6 @@ static void fsl_edma_free_chan_resources(struct dma_chan *chan)
fsl_chan->tcd_pool = NULL;
}

-static int fsl_dma_device_slave_caps(struct dma_chan *dchan,
- struct dma_slave_caps *caps)
-{
- caps->src_addr_widths = FSL_EDMA_BUSWIDTHS;
- caps->dst_addr_widths = FSL_EDMA_BUSWIDTHS;
- caps->directions = BIT(DMA_DEV_TO_MEM) | BIT(DMA_MEM_TO_DEV);
- caps->cmd_pause = true;
- caps->cmd_terminate = true;
-
- return 0;
-}
-
static int
fsl_edma_irq_init(struct platform_device *pdev, struct fsl_edma_engine *fsl_edma)
{
@@ -926,7 +914,10 @@ static int fsl_edma_probe(struct platform_device *pdev)
fsl_edma->dma_dev.device_resume = fsl_edma_resume;
fsl_edma->dma_dev.device_terminate_all = fsl_edma_terminate_all;
fsl_edma->dma_dev.device_issue_pending = fsl_edma_issue_pending;
- fsl_edma->dma_dev.device_slave_caps = fsl_dma_device_slave_caps;
+
+ fsl_edma->dma_dev.src_addr_widths = FSL_EDMA_BUSWIDTHS;
+ fsl_edma->dma_dev.dst_addr_widths = FSL_EDMA_BUSWIDTHS;
+ fsl_edma->dma_dev.directions = BIT(DMA_DEV_TO_MEM) | BIT(DMA_MEM_TO_DEV);

platform_set_drvdata(pdev, fsl_edma);

--
2.1.1

2014-10-28 21:38:55

by Maxime Ripard

Subject: [PATCH v4 49/58] dmaengine: bcm2835: Declare slave capabilities for the generic code

The generic slave caps code can now use the capabilities assigned to the
dma_device itself, instead of relying on a driver-provided callback.

Make use of this code.

Signed-off-by: Maxime Ripard <[email protected]>
Acked-by: Stephen Warren <[email protected]>
---
drivers/dma/bcm2835-dma.c | 15 +++------------
1 file changed, 3 insertions(+), 12 deletions(-)

diff --git a/drivers/dma/bcm2835-dma.c b/drivers/dma/bcm2835-dma.c
index 243bb1ed6bfc..891183ff6d1b 100644
--- a/drivers/dma/bcm2835-dma.c
+++ b/drivers/dma/bcm2835-dma.c
@@ -552,18 +552,6 @@ static struct dma_chan *bcm2835_dma_xlate(struct of_phandle_args *spec,
return chan;
}

-static int bcm2835_dma_device_slave_caps(struct dma_chan *dchan,
- struct dma_slave_caps *caps)
-{
- caps->src_addr_widths = BIT(DMA_SLAVE_BUSWIDTH_4_BYTES);
- caps->dst_addr_widths = BIT(DMA_SLAVE_BUSWIDTH_4_BYTES);
- caps->directions = BIT(DMA_DEV_TO_MEM) | BIT(DMA_MEM_TO_DEV);
- caps->cmd_pause = false;
- caps->cmd_terminate = true;
-
- return 0;
-}
-
static int bcm2835_dma_probe(struct platform_device *pdev)
{
struct bcm2835_dmadev *od;
@@ -605,6 +593,9 @@ static int bcm2835_dma_probe(struct platform_device *pdev)
od->ddev.device_prep_dma_cyclic = bcm2835_dma_prep_dma_cyclic;
od->ddev.device_config = bcm2835_dma_slave_config;
od->ddev.device_terminate_all = bcm2835_dma_terminate_all;
+ od->ddev.src_addr_widths = BIT(DMA_SLAVE_BUSWIDTH_4_BYTES);
+ od->ddev.dst_addr_widths = BIT(DMA_SLAVE_BUSWIDTH_4_BYTES);
+ od->ddev.directions = BIT(DMA_DEV_TO_MEM) | BIT(DMA_MEM_TO_DEV);
od->ddev.dev = &pdev->dev;
INIT_LIST_HEAD(&od->ddev.channels);
spin_lock_init(&od->lock);
--
2.1.1

2014-10-28 21:39:16

by Maxime Ripard

Subject: [PATCH v4 48/58] dmaengine: rapidio: tsi721: Rename device_control

Rename the device_control callback of the tsi721 RapidIO DMA driver to
terminate_all, since terminating transfers is all it really does. That will
eventually be used to retrieve slave capabilities.

Signed-off-by: Maxime Ripard <[email protected]>
---
drivers/rapidio/devices/tsi721_dma.c | 8 ++------
1 file changed, 2 insertions(+), 6 deletions(-)

diff --git a/drivers/rapidio/devices/tsi721_dma.c b/drivers/rapidio/devices/tsi721_dma.c
index f64c5decb747..47295940a868 100644
--- a/drivers/rapidio/devices/tsi721_dma.c
+++ b/drivers/rapidio/devices/tsi721_dma.c
@@ -815,8 +815,7 @@ struct dma_async_tx_descriptor *tsi721_prep_rio_sg(struct dma_chan *dchan,
return txd;
}

-static int tsi721_device_control(struct dma_chan *dchan, enum dma_ctrl_cmd cmd,
- unsigned long arg)
+static int tsi721_terminate_all(struct dma_chan *dchan)
{
struct tsi721_bdma_chan *bdma_chan = to_tsi721_chan(dchan);
struct tsi721_tx_desc *desc, *_d;
@@ -825,9 +824,6 @@ static int tsi721_device_control(struct dma_chan *dchan, enum dma_ctrl_cmd cmd,

dev_dbg(dchan->device->dev, "%s: Entry\n", __func__);

- if (cmd != DMA_TERMINATE_ALL)
- return -ENOSYS;
-
spin_lock_bh(&bdma_chan->lock);

bdma_chan->active = false;
@@ -901,7 +897,7 @@ int tsi721_register_dma(struct tsi721_device *priv)
mport->dma.device_tx_status = tsi721_tx_status;
mport->dma.device_issue_pending = tsi721_issue_pending;
mport->dma.device_prep_slave_sg = tsi721_prep_rio_sg;
- mport->dma.device_control = tsi721_device_control;
+ mport->dma.device_terminate_all = tsi721_terminate_all;

err = dma_async_device_register(&mport->dma);
if (err)
--
2.1.1

2014-10-28 21:39:52

by Maxime Ripard

Subject: [PATCH v4 43/58] dmaengine: xilinx: Split device_control

Split the device_control callback of the Xilinx VDMA driver to make use of the
newly introduced callbacks, which will eventually be used to retrieve slave
capabilities.

Signed-off-by: Maxime Ripard <[email protected]>
Acked-by: Laurent Pinchart <[email protected]>
---
drivers/dma/xilinx/xilinx_vdma.c | 29 ++++++-----------------------
1 file changed, 6 insertions(+), 23 deletions(-)

diff --git a/drivers/dma/xilinx/xilinx_vdma.c b/drivers/dma/xilinx/xilinx_vdma.c
index a6e64767186e..1df06b45c798 100644
--- a/drivers/dma/xilinx/xilinx_vdma.c
+++ b/drivers/dma/xilinx/xilinx_vdma.c
@@ -996,13 +996,17 @@ error:
* xilinx_vdma_terminate_all - Halt the channel and free descriptors
* @chan: Driver specific VDMA Channel pointer
*/
-static void xilinx_vdma_terminate_all(struct xilinx_vdma_chan *chan)
+static int xilinx_vdma_terminate_all(struct dma_chan *dchan)
{
+ struct xilinx_vdma_chan *chan = to_xilinx_chan(dchan);
+
/* Halt the DMA engine */
xilinx_vdma_halt(chan);

/* Remove and free all of the descriptors in the lists */
xilinx_vdma_free_descriptors(chan);
+
+ return 0;
}

/**
@@ -1070,27 +1074,6 @@ int xilinx_vdma_channel_set_config(struct dma_chan *dchan,
}
EXPORT_SYMBOL(xilinx_vdma_channel_set_config);

-/**
- * xilinx_vdma_device_control - Configure DMA channel of the device
- * @dchan: DMA Channel pointer
- * @cmd: DMA control command
- * @arg: Channel configuration
- *
- * Return: '0' on success and failure value on error
- */
-static int xilinx_vdma_device_control(struct dma_chan *dchan,
- enum dma_ctrl_cmd cmd, unsigned long arg)
-{
- struct xilinx_vdma_chan *chan = to_xilinx_chan(dchan);
-
- if (cmd != DMA_TERMINATE_ALL)
- return -ENXIO;
-
- xilinx_vdma_terminate_all(chan);
-
- return 0;
-}
-
/* -----------------------------------------------------------------------------
* Probe and remove
*/
@@ -1295,7 +1278,7 @@ static int xilinx_vdma_probe(struct platform_device *pdev)
xilinx_vdma_free_chan_resources;
xdev->common.device_prep_interleaved_dma =
xilinx_vdma_dma_prep_interleaved;
- xdev->common.device_control = xilinx_vdma_device_control;
+ xdev->common.device_terminate_all = xilinx_vdma_terminate_all;
xdev->common.device_tx_status = xilinx_vdma_tx_status;
xdev->common.device_issue_pending = xilinx_vdma_issue_pending;

--
2.1.1

2014-10-28 21:40:23

by Maxime Ripard

Subject: [PATCH v4 41/58] dmaengine: d40: Split device_control

Split the device_control callback of the ST-Ericsson DMA 40 driver to make use
of the newly introduced callbacks, which will eventually be used to retrieve
slave capabilities.

Signed-off-by: Maxime Ripard <[email protected]>
Acked-by: Linus Walleij <[email protected]>
---
drivers/dma/ste_dma40.c | 60 +++++++++++++++++++++++--------------------------
1 file changed, 28 insertions(+), 32 deletions(-)

diff --git a/drivers/dma/ste_dma40.c b/drivers/dma/ste_dma40.c
index 5fe59335e247..b781c988b592 100644
--- a/drivers/dma/ste_dma40.c
+++ b/drivers/dma/ste_dma40.c
@@ -1429,11 +1429,17 @@ static bool d40_tx_is_linked(struct d40_chan *d40c)
return is_link;
}

-static int d40_pause(struct d40_chan *d40c)
+static int d40_pause(struct dma_chan *chan)
{
+ struct d40_chan *d40c = container_of(chan, struct d40_chan, chan);
int res = 0;
unsigned long flags;

+ if (d40c->phy_chan == NULL) {
+ chan_err(d40c, "Channel is not allocated!\n");
+ return -EINVAL;
+ }
+
if (!d40c->busy)
return 0;

@@ -1448,11 +1454,17 @@ static int d40_pause(struct d40_chan *d40c)
return res;
}

-static int d40_resume(struct d40_chan *d40c)
+static int d40_resume(struct dma_chan *chan)
{
+ struct d40_chan *d40c = container_of(chan, struct d40_chan, chan);
int res = 0;
unsigned long flags;

+ if (d40c->phy_chan == NULL) {
+ chan_err(d40c, "Channel is not allocated!\n");
+ return -EINVAL;
+ }
+
if (!d40c->busy)
return 0;

@@ -2610,6 +2622,11 @@ static void d40_terminate_all(struct dma_chan *chan)
struct d40_chan *d40c = container_of(chan, struct d40_chan, chan);
int ret;

+ if (d40c->phy_chan == NULL) {
+ chan_err(d40c, "Channel is not allocated!\n");
+ return -EINVAL;
+ }
+
spin_lock_irqsave(&d40c->lock, flags);

pm_runtime_get_sync(d40c->base->dev);
@@ -2673,6 +2690,11 @@ static int d40_set_runtime_config(struct dma_chan *chan,
u32 src_maxburst, dst_maxburst;
int ret;

+ if (d40c->phy_chan == NULL) {
+ chan_err(d40c, "Channel is not allocated!\n");
+ return -EINVAL;
+ }
+
src_addr_width = config->src_addr_width;
src_maxburst = config->src_maxburst;
dst_addr_width = config->dst_addr_width;
@@ -2781,35 +2803,6 @@ static int d40_set_runtime_config(struct dma_chan *chan,
return 0;
}

-static int d40_control(struct dma_chan *chan, enum dma_ctrl_cmd cmd,
- unsigned long arg)
-{
- struct d40_chan *d40c = container_of(chan, struct d40_chan, chan);
-
- if (d40c->phy_chan == NULL) {
- chan_err(d40c, "Channel is not allocated!\n");
- return -EINVAL;
- }
-
- switch (cmd) {
- case DMA_TERMINATE_ALL:
- d40_terminate_all(chan);
- return 0;
- case DMA_PAUSE:
- return d40_pause(d40c);
- case DMA_RESUME:
- return d40_resume(d40c);
- case DMA_SLAVE_CONFIG:
- return d40_set_runtime_config(chan,
- (struct dma_slave_config *) arg);
- default:
- break;
- }
-
- /* Other commands are unimplemented */
- return -ENXIO;
-}
-
/* Initialization functions */

static void __init d40_chan_init(struct d40_base *base, struct dma_device *dma,
@@ -2870,7 +2863,10 @@ static void d40_ops_init(struct d40_base *base, struct dma_device *dev)
dev->device_free_chan_resources = d40_free_chan_resources;
dev->device_issue_pending = d40_issue_pending;
dev->device_tx_status = d40_tx_status;
- dev->device_control = d40_control;
+ dev->device_config = d40_set_runtime_config;
+ dev->device_pause = d40_pause;
+ dev->device_resume = d40_resume;
+ dev->device_terminate_all = d40_terminate_all;
dev->dev = base->dev;
}

--
2.1.1

2014-10-28 21:40:22

by Maxime Ripard

[permalink] [raw]
Subject: [PATCH v4 42/58] dmaengine: tegra20: Split device_control

Split the device_control callback of the NVidia Tegra20 APB DMA driver to make
use of the newly introduced callbacks, which will eventually be used to retrieve
slave capabilities.

Signed-off-by: Maxime Ripard <[email protected]>
---
drivers/dma/tegra20-apb-dma.c | 22 ++--------------------
1 file changed, 2 insertions(+), 20 deletions(-)

diff --git a/drivers/dma/tegra20-apb-dma.c b/drivers/dma/tegra20-apb-dma.c
index 16efa603ff65..dea6b78080aa 100644
--- a/drivers/dma/tegra20-apb-dma.c
+++ b/drivers/dma/tegra20-apb-dma.c
@@ -827,25 +827,6 @@ static enum dma_status tegra_dma_tx_status(struct dma_chan *dc,
return ret;
}

-static int tegra_dma_device_control(struct dma_chan *dc, enum dma_ctrl_cmd cmd,
- unsigned long arg)
-{
- switch (cmd) {
- case DMA_SLAVE_CONFIG:
- return tegra_dma_slave_config(dc,
- (struct dma_slave_config *)arg);
-
- case DMA_TERMINATE_ALL:
- tegra_dma_terminate_all(dc);
- return 0;
-
- default:
- break;
- }
-
- return -ENXIO;
-}
-
static inline int get_bus_width(struct tegra_dma_channel *tdc,
enum dma_slave_buswidth slave_bw)
{
@@ -1443,7 +1424,8 @@ static int tegra_dma_probe(struct platform_device *pdev)
tegra_dma_free_chan_resources;
tdma->dma_dev.device_prep_slave_sg = tegra_dma_prep_slave_sg;
tdma->dma_dev.device_prep_dma_cyclic = tegra_dma_prep_dma_cyclic;
- tdma->dma_dev.device_control = tegra_dma_device_control;
+ tdma->dma_dev.device_config = tegra_dma_slave_config;
+ tdma->dma_dev.device_terminate_all = tegra_dma_terminate_all;
tdma->dma_dev.device_tx_status = tegra_dma_tx_status;
tdma->dma_dev.device_issue_pending = tegra_dma_issue_pending;

--
2.1.1

2014-10-28 21:41:29

by Maxime Ripard

[permalink] [raw]
Subject: [PATCH v4 09/58] dmaengine: Remove the need to declare device_control

Converted drivers will no longer implement device_control, so the existing
BUG_ON in dma_async_device_register would trigger in the middle of the series
and break bisectability. Remove that check before removing the device_control
callback entirely.

Signed-off-by: Maxime Ripard <[email protected]>
Acked-by: Laurent Pinchart <[email protected]>
---
drivers/dma/dmaengine.c | 2 --
1 file changed, 2 deletions(-)

diff --git a/drivers/dma/dmaengine.c b/drivers/dma/dmaengine.c
index 9b2cd74b8f2e..98e9431f85ec 100644
--- a/drivers/dma/dmaengine.c
+++ b/drivers/dma/dmaengine.c
@@ -820,8 +820,6 @@ int dma_async_device_register(struct dma_device *device)
!device->device_prep_dma_sg);
BUG_ON(dma_has_cap(DMA_CYCLIC, device->cap_mask) &&
!device->device_prep_dma_cyclic);
- BUG_ON(dma_has_cap(DMA_SLAVE, device->cap_mask) &&
- !device->device_control);
BUG_ON(dma_has_cap(DMA_INTERLEAVE, device->cap_mask) &&
!device->device_prep_interleaved_dma);

--
2.1.1

2014-10-28 21:41:49

by Maxime Ripard

[permalink] [raw]
Subject: [PATCH v4 08/58] dmaengine: Add device_terminate_all callback

Split out the terminate_all command from device_control into a dedicated
dma_device callback. In order to preserve backward compatibility, still fall
back to device_control if no such callback has been implemented.

Eventually, this will make it possible to create a generic dma_slave_caps
callback.
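
For illustration, client code that already goes through the dmaengine helper
is unaffected by this change: the helper picks the new callback when the
driver provides one, and falls back to device_control otherwise. A minimal
sketch, assuming a hypothetical stop_rx() caller that is not part of this
series:

#include <linux/dmaengine.h>
#include <linux/printk.h>

static void stop_rx(struct dma_chan *chan)
{
        int ret;

        /* Dispatches to device_terminate_all when implemented, otherwise
         * issues DMA_TERMINATE_ALL through device_control. */
        ret = dmaengine_terminate_all(chan);
        if (ret)
                pr_warn("failed to terminate DMA transfers: %d\n", ret);
}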

Signed-off-by: Maxime Ripard <[email protected]>
Acked-by: Laurent Pinchart <[email protected]>
---
include/linux/dmaengine.h | 6 ++++++
1 file changed, 6 insertions(+)

diff --git a/include/linux/dmaengine.h b/include/linux/dmaengine.h
index 4583c55f3355..b0efc805937a 100644
--- a/include/linux/dmaengine.h
+++ b/include/linux/dmaengine.h
@@ -615,6 +615,8 @@ struct dma_tx_state {
* 0 or an error code
* @device_resume: Resumes any transfer on a channel previously
* paused. Returns 0 or an error code
+ * @device_terminate_all: Aborts all transfers on a channel. Returns 0
+ * or an error code
* @device_tx_status: poll for transaction completion, the optional
* txstate parameter can be supplied with a pointer to get a
* struct with auxiliary transfer status information, otherwise the call
@@ -686,6 +688,7 @@ struct dma_device {
unsigned long arg);
int (*device_pause)(struct dma_chan *chan);
int (*device_resume)(struct dma_chan *chan);
+ int (*device_terminate_all)(struct dma_chan *chan);

enum dma_status (*device_tx_status)(struct dma_chan *chan,
dma_cookie_t cookie,
@@ -795,6 +798,9 @@ static inline int dma_get_slave_caps(struct dma_chan *chan, struct dma_slave_cap

static inline int dmaengine_terminate_all(struct dma_chan *chan)
{
+ if (chan->device->device_terminate_all)
+ return chan->device->device_terminate_all(chan);
+
return dmaengine_device_control(chan, DMA_TERMINATE_ALL, 0);
}

--
2.1.1

2014-10-28 21:42:18

by Maxime Ripard

[permalink] [raw]
Subject: [PATCH v4 04/58] dmaengine: Rework dma_chan_get

dma_chan_get uses a rather convoluted error handling scheme and code path.

Rework it to follow the more usual kernel pattern of early returns and
goto-based error unwinding.
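
The "more usual" shape referred to here is the early-return plus goto-unwind
idiom. A generic sketch of that pattern, with made-up grab_resources() and
foo_alloc() names (the actual rework is in the diff below):

#include <linux/module.h>
#include <linux/errno.h>

struct foo_dev {
        struct module *owner;
        /* ... */
};

int foo_alloc(struct foo_dev *foo);     /* hypothetical step that may fail */

static int grab_resources(struct foo_dev *foo)
{
        int ret;

        if (!try_module_get(foo->owner))
                return -ENODEV;

        ret = foo_alloc(foo);
        if (ret < 0)
                goto err_put_module;

        return 0;

err_put_module:
        module_put(foo->owner);
        return ret;
}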

Signed-off-by: Maxime Ripard <[email protected]>
Acked-by: Laurent Pinchart <[email protected]>
---
drivers/dma/dmaengine.c | 36 +++++++++++++++++++-----------------
1 file changed, 19 insertions(+), 17 deletions(-)

diff --git a/drivers/dma/dmaengine.c b/drivers/dma/dmaengine.c
index 24bfaf0b92ba..72f10e2f5c32 100644
--- a/drivers/dma/dmaengine.c
+++ b/drivers/dma/dmaengine.c
@@ -222,31 +222,33 @@ static void balance_ref_count(struct dma_chan *chan)
*/
static int dma_chan_get(struct dma_chan *chan)
{
- int err = -ENODEV;
struct module *owner = dma_chan_to_owner(chan);
+ int ret;

+ /* The channel is already in use, update client count */
if (chan->client_count) {
__module_get(owner);
- err = 0;
- } else if (try_module_get(owner))
- err = 0;
+ goto out;
+ }

- if (err == 0)
- chan->client_count++;
+ if (!try_module_get(owner))
+ return -ENODEV;

/* allocate upon first client reference */
- if (chan->client_count == 1 && err == 0) {
- int desc_cnt = chan->device->device_alloc_chan_resources(chan);
-
- if (desc_cnt < 0) {
- err = desc_cnt;
- chan->client_count = 0;
- module_put(owner);
- } else if (!dma_has_cap(DMA_PRIVATE, chan->device->cap_mask))
- balance_ref_count(chan);
- }
+ ret = chan->device->device_alloc_chan_resources(chan);
+ if (ret < 0)
+ goto err_out;

- return err;
+ if (!dma_has_cap(DMA_PRIVATE, chan->device->cap_mask))
+ balance_ref_count(chan);
+
+out:
+ chan->client_count++;
+ return 0;
+
+err_out:
+ module_put(owner);
+ return ret;
}

/**
--
2.1.1

2014-10-28 21:42:17

by Maxime Ripard

[permalink] [raw]
Subject: [PATCH v4 05/58] dmaengine: Make channel allocation callbacks optional

Nowadays, some drivers no longer have anything to do in their channel
allocation callbacks.

Remove the BUG_ON on missing callbacks, in order to allow drivers to simply
not implement them.

Signed-off-by: Maxime Ripard <[email protected]>
Acked-by: Laurent Pinchart <[email protected]>
---
drivers/dma/dmaengine.c | 18 +++++++++++-------
1 file changed, 11 insertions(+), 7 deletions(-)

diff --git a/drivers/dma/dmaengine.c b/drivers/dma/dmaengine.c
index 72f10e2f5c32..9b2cd74b8f2e 100644
--- a/drivers/dma/dmaengine.c
+++ b/drivers/dma/dmaengine.c
@@ -235,9 +235,11 @@ static int dma_chan_get(struct dma_chan *chan)
return -ENODEV;

/* allocate upon first client reference */
- ret = chan->device->device_alloc_chan_resources(chan);
- if (ret < 0)
- goto err_out;
+ if (chan->device->device_alloc_chan_resources) {
+ ret = chan->device->device_alloc_chan_resources(chan);
+ if (ret < 0)
+ goto err_out;
+ }

if (!dma_has_cap(DMA_PRIVATE, chan->device->cap_mask))
balance_ref_count(chan);
@@ -259,11 +261,15 @@ err_out:
*/
static void dma_chan_put(struct dma_chan *chan)
{
+ /* This channel is not in use, bail out */
if (!chan->client_count)
- return; /* this channel failed alloc_chan_resources */
+ return;
+
chan->client_count--;
module_put(dma_chan_to_owner(chan));
- if (chan->client_count == 0)
+
+ /* This channel is not in use anymore, free it */
+ if (!chan->client_count && chan->device->device_free_chan_resources)
chan->device->device_free_chan_resources(chan);
}

@@ -819,8 +825,6 @@ int dma_async_device_register(struct dma_device *device)
BUG_ON(dma_has_cap(DMA_INTERLEAVE, device->cap_mask) &&
!device->device_prep_interleaved_dma);

- BUG_ON(!device->device_alloc_chan_resources);
- BUG_ON(!device->device_free_chan_resources);
BUG_ON(!device->device_tx_status);
BUG_ON(!device->device_issue_pending);
BUG_ON(!device->dev);
--
2.1.1

2014-10-28 21:42:15

by Maxime Ripard

[permalink] [raw]
Subject: [PATCH v4 06/58] dmaengine: Introduce a device_config callback

Doing the channel configuration through device_control is rather misleading:
it is not really advertised as such, the framework already exposes a
dedicated function for it, which makes the indirection far from intuitive,
and we lose all type checking by passing the configuration as an unsigned
long argument.

Add a device_config callback to dma_device, with a fallback on the old
behaviour for now, so that existing drivers can opt in.
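
From the client side, the benefit is that the configuration keeps its real
type all the way down instead of being cast to an unsigned long. A minimal
usage sketch (the chan, dev and fifo_phys_addr variables below are
hypothetical):

        struct dma_slave_config cfg = {
                .direction      = DMA_MEM_TO_DEV,
                .dst_addr       = fifo_phys_addr,
                .dst_addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES,
                .dst_maxburst   = 8,
        };
        int ret;

        /* No (unsigned long) cast: the pointer stays typed. */
        ret = dmaengine_slave_config(chan, &cfg);
        if (ret)
                dev_err(dev, "failed to configure the DMA channel: %d\n", ret);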

Signed-off-by: Maxime Ripard <[email protected]>
Acked-by: Laurent Pinchart <[email protected]>
---
include/linux/dmaengine.h | 8 ++++++++
1 file changed, 8 insertions(+)

diff --git a/include/linux/dmaengine.h b/include/linux/dmaengine.h
index 4d912575fec0..f797cb81355f 100644
--- a/include/linux/dmaengine.h
+++ b/include/linux/dmaengine.h
@@ -607,6 +607,8 @@ struct dma_tx_state {
* The function takes a buffer of size buf_len. The callback function will
* be called after period_len bytes have been transferred.
* @device_prep_interleaved_dma: Transfer expression in a generic way.
+ * @device_config: Pushes a new configuration to a channel, return 0 or an error
+ * code
* @device_control: manipulate all pending operations on a channel, returns
* zero or error code
* @device_tx_status: poll for transaction completion, the optional
@@ -673,6 +675,9 @@ struct dma_device {
struct dma_async_tx_descriptor *(*device_prep_interleaved_dma)(
struct dma_chan *chan, struct dma_interleaved_template *xt,
unsigned long flags);
+
+ int (*device_config)(struct dma_chan *chan,
+ struct dma_slave_config *config);
int (*device_control)(struct dma_chan *chan, enum dma_ctrl_cmd cmd,
unsigned long arg);

@@ -696,6 +701,9 @@ static inline int dmaengine_device_control(struct dma_chan *chan,
static inline int dmaengine_slave_config(struct dma_chan *chan,
struct dma_slave_config *config)
{
+ if (chan->device->device_config)
+ return chan->device->device_config(chan, config);
+
return dmaengine_device_control(chan, DMA_SLAVE_CONFIG,
(unsigned long)config);
}
--
2.1.1

2014-10-28 21:43:11

by Maxime Ripard

[permalink] [raw]
Subject: [PATCH v4 03/58] dmaengine: Make the destination abbreviation coherent

The dmaengine header abbreviates destination in at least two different ways
(dest and dstn, in addition to dst). Consistently use the single abbreviation
dst.

Signed-off-by: Maxime Ripard <[email protected]>
Acked-by: Mark Brown <[email protected]>
Acked-by: Laurent Pinchart <[email protected]>
Acked-by: Stephen Warren <[email protected]>
---
drivers/dma/bcm2835-dma.c | 2 +-
drivers/dma/edma.c | 2 +-
drivers/dma/fsl-edma.c | 2 +-
drivers/dma/nbpfaxi.c | 2 +-
drivers/dma/omap-dma.c | 2 +-
drivers/dma/pl330.c | 2 +-
drivers/dma/sirf-dma.c | 2 +-
include/linux/dmaengine.h | 8 ++++----
sound/soc/soc-generic-dmaengine-pcm.c | 2 +-
9 files changed, 12 insertions(+), 12 deletions(-)

diff --git a/drivers/dma/bcm2835-dma.c b/drivers/dma/bcm2835-dma.c
index 68007974961a..8a6a9c27fe83 100644
--- a/drivers/dma/bcm2835-dma.c
+++ b/drivers/dma/bcm2835-dma.c
@@ -571,7 +571,7 @@ static int bcm2835_dma_device_slave_caps(struct dma_chan *dchan,
struct dma_slave_caps *caps)
{
caps->src_addr_widths = BIT(DMA_SLAVE_BUSWIDTH_4_BYTES);
- caps->dstn_addr_widths = BIT(DMA_SLAVE_BUSWIDTH_4_BYTES);
+ caps->dst_addr_widths = BIT(DMA_SLAVE_BUSWIDTH_4_BYTES);
caps->directions = BIT(DMA_DEV_TO_MEM) | BIT(DMA_MEM_TO_DEV);
caps->cmd_pause = false;
caps->cmd_terminate = true;
diff --git a/drivers/dma/edma.c b/drivers/dma/edma.c
index 123f578d6dd3..790054eef76c 100644
--- a/drivers/dma/edma.c
+++ b/drivers/dma/edma.c
@@ -998,7 +998,7 @@ static int edma_dma_device_slave_caps(struct dma_chan *dchan,
struct dma_slave_caps *caps)
{
caps->src_addr_widths = EDMA_DMA_BUSWIDTHS;
- caps->dstn_addr_widths = EDMA_DMA_BUSWIDTHS;
+ caps->dst_addr_widths = EDMA_DMA_BUSWIDTHS;
caps->directions = BIT(DMA_DEV_TO_MEM) | BIT(DMA_MEM_TO_DEV);
caps->cmd_pause = true;
caps->cmd_terminate = true;
diff --git a/drivers/dma/fsl-edma.c b/drivers/dma/fsl-edma.c
index 3c5711d5fe97..13df85a16438 100644
--- a/drivers/dma/fsl-edma.c
+++ b/drivers/dma/fsl-edma.c
@@ -781,7 +781,7 @@ static int fsl_dma_device_slave_caps(struct dma_chan *dchan,
struct dma_slave_caps *caps)
{
caps->src_addr_widths = FSL_EDMA_BUSWIDTHS;
- caps->dstn_addr_widths = FSL_EDMA_BUSWIDTHS;
+ caps->dst_addr_widths = FSL_EDMA_BUSWIDTHS;
caps->directions = BIT(DMA_DEV_TO_MEM) | BIT(DMA_MEM_TO_DEV);
caps->cmd_pause = true;
caps->cmd_terminate = true;
diff --git a/drivers/dma/nbpfaxi.c b/drivers/dma/nbpfaxi.c
index 5aeada56a442..b4a61ccbb443 100644
--- a/drivers/dma/nbpfaxi.c
+++ b/drivers/dma/nbpfaxi.c
@@ -1076,7 +1076,7 @@ static int nbpf_slave_caps(struct dma_chan *dchan,
struct dma_slave_caps *caps)
{
caps->src_addr_widths = NBPF_DMA_BUSWIDTHS;
- caps->dstn_addr_widths = NBPF_DMA_BUSWIDTHS;
+ caps->dst_addr_widths = NBPF_DMA_BUSWIDTHS;
caps->directions = BIT(DMA_DEV_TO_MEM) | BIT(DMA_MEM_TO_DEV);
caps->cmd_pause = false;
caps->cmd_terminate = true;
diff --git a/drivers/dma/omap-dma.c b/drivers/dma/omap-dma.c
index bbea8243f9e8..8035b8c33cfd 100644
--- a/drivers/dma/omap-dma.c
+++ b/drivers/dma/omap-dma.c
@@ -1100,7 +1100,7 @@ static int omap_dma_device_slave_caps(struct dma_chan *dchan,
struct dma_slave_caps *caps)
{
caps->src_addr_widths = OMAP_DMA_BUSWIDTHS;
- caps->dstn_addr_widths = OMAP_DMA_BUSWIDTHS;
+ caps->dst_addr_widths = OMAP_DMA_BUSWIDTHS;
caps->directions = BIT(DMA_DEV_TO_MEM) | BIT(DMA_MEM_TO_DEV);
caps->cmd_pause = true;
caps->cmd_terminate = true;
diff --git a/drivers/dma/pl330.c b/drivers/dma/pl330.c
index 4839bfa74a10..b563bfa6cc4d 100644
--- a/drivers/dma/pl330.c
+++ b/drivers/dma/pl330.c
@@ -2576,7 +2576,7 @@ static int pl330_dma_device_slave_caps(struct dma_chan *dchan,
struct dma_slave_caps *caps)
{
caps->src_addr_widths = PL330_DMA_BUSWIDTHS;
- caps->dstn_addr_widths = PL330_DMA_BUSWIDTHS;
+ caps->dst_addr_widths = PL330_DMA_BUSWIDTHS;
caps->directions = BIT(DMA_DEV_TO_MEM) | BIT(DMA_MEM_TO_DEV);
caps->cmd_pause = false;
caps->cmd_terminate = true;
diff --git a/drivers/dma/sirf-dma.c b/drivers/dma/sirf-dma.c
index aac03ab10c54..149d7fddfa75 100644
--- a/drivers/dma/sirf-dma.c
+++ b/drivers/dma/sirf-dma.c
@@ -652,7 +652,7 @@ static int sirfsoc_dma_device_slave_caps(struct dma_chan *dchan,
struct dma_slave_caps *caps)
{
caps->src_addr_widths = SIRFSOC_DMA_BUSWIDTHS;
- caps->dstn_addr_widths = SIRFSOC_DMA_BUSWIDTHS;
+ caps->dst_addr_widths = SIRFSOC_DMA_BUSWIDTHS;
caps->directions = BIT(DMA_DEV_TO_MEM) | BIT(DMA_MEM_TO_DEV);
caps->cmd_pause = true;
caps->cmd_terminate = true;
diff --git a/include/linux/dmaengine.h b/include/linux/dmaengine.h
index 653a1fd07ae8..4d912575fec0 100644
--- a/include/linux/dmaengine.h
+++ b/include/linux/dmaengine.h
@@ -387,7 +387,7 @@ enum dma_residue_granularity {
/* struct dma_slave_caps - expose capabilities of a slave channel only
*
* @src_addr_widths: bit mask of src addr widths the channel supports
- * @dstn_addr_widths: bit mask of dstn addr widths the channel supports
+ * @dst_addr_widths: bit mask of dstn addr widths the channel supports
* @directions: bit mask of slave direction the channel supported
* since the enum dma_transfer_direction is not defined as bits for each
* type of direction, the dma controller should fill (1 << <TYPE>) and same
@@ -398,7 +398,7 @@ enum dma_residue_granularity {
*/
struct dma_slave_caps {
u32 src_addr_widths;
- u32 dstn_addr_widths;
+ u32 dst_addr_widths;
u32 directions;
bool cmd_pause;
bool cmd_terminate;
@@ -638,10 +638,10 @@ struct dma_device {
void (*device_free_chan_resources)(struct dma_chan *chan);

struct dma_async_tx_descriptor *(*device_prep_dma_memcpy)(
- struct dma_chan *chan, dma_addr_t dest, dma_addr_t src,
+ struct dma_chan *chan, dma_addr_t dst, dma_addr_t src,
size_t len, unsigned long flags);
struct dma_async_tx_descriptor *(*device_prep_dma_xor)(
- struct dma_chan *chan, dma_addr_t dest, dma_addr_t *src,
+ struct dma_chan *chan, dma_addr_t dst, dma_addr_t *src,
unsigned int src_cnt, size_t len, unsigned long flags);
struct dma_async_tx_descriptor *(*device_prep_dma_xor_val)(
struct dma_chan *chan, dma_addr_t *src, unsigned int src_cnt,
diff --git a/sound/soc/soc-generic-dmaengine-pcm.c b/sound/soc/soc-generic-dmaengine-pcm.c
index b329b84bc5af..851f7afcd5dc 100644
--- a/sound/soc/soc-generic-dmaengine-pcm.c
+++ b/sound/soc/soc-generic-dmaengine-pcm.c
@@ -151,7 +151,7 @@ static int dmaengine_pcm_set_runtime_hwparams(struct snd_pcm_substream *substrea
hw.info |= SNDRV_PCM_INFO_BATCH;

if (substream->stream == SNDRV_PCM_STREAM_PLAYBACK)
- addr_widths = dma_caps.dstn_addr_widths;
+ addr_widths = dma_caps.dst_addr_widths;
else
addr_widths = dma_caps.src_addr_widths;
}
--
2.1.1

2014-10-28 21:30:38

by Maxime Ripard

[permalink] [raw]
Subject: [PATCH v4 02/58] serial: at91: Use dmaengine_slave_config API

We are removing the dmaengine_device_control API, which should never have been
exposed in the first place. Change the callers to use the proper
dmaengine_slave_config API instead.

Signed-off-by: Maxime Ripard <[email protected]>
---
drivers/tty/serial/atmel_serial.c | 10 ++++------
1 file changed, 4 insertions(+), 6 deletions(-)

diff --git a/drivers/tty/serial/atmel_serial.c b/drivers/tty/serial/atmel_serial.c
index edde3eca055d..10e6b382d68f 100644
--- a/drivers/tty/serial/atmel_serial.c
+++ b/drivers/tty/serial/atmel_serial.c
@@ -862,9 +862,8 @@ static int atmel_prepare_tx_dma(struct uart_port *port)
config.dst_addr_width = DMA_SLAVE_BUSWIDTH_1_BYTE;
config.dst_addr = port->mapbase + ATMEL_US_THR;

- ret = dmaengine_device_control(atmel_port->chan_tx,
- DMA_SLAVE_CONFIG,
- (unsigned long)&config);
+ ret = dmaengine_slave_config(atmel_port->chan_tx,
+ &config);
if (ret) {
dev_err(port->dev, "DMA tx slave configuration failed\n");
goto chan_err;
@@ -1026,9 +1025,8 @@ static int atmel_prepare_rx_dma(struct uart_port *port)
config.src_addr_width = DMA_SLAVE_BUSWIDTH_1_BYTE;
config.src_addr = port->mapbase + ATMEL_US_RHR;

- ret = dmaengine_device_control(atmel_port->chan_rx,
- DMA_SLAVE_CONFIG,
- (unsigned long)&config);
+ ret = dmaengine_slave_config(atmel_port->chan_rx,
+ &config);
if (ret) {
dev_err(port->dev, "DMA rx slave configuration failed\n");
goto chan_err;
--
2.1.1

2014-10-28 21:43:42

by Maxime Ripard

[permalink] [raw]
Subject: [PATCH v4 30/58] dmaengine: mpc512x: Split device_control

Split the device_control callback of the Freescale MPC512x DMA driver to make
use of the newly introduced callbacks, which will eventually be used to retrieve
slave capabilities.

Signed-off-by: Maxime Ripard <[email protected]>
---
drivers/dma/mpc512x_dma.c | 111 +++++++++++++++++++++-------------------------
1 file changed, 51 insertions(+), 60 deletions(-)

diff --git a/drivers/dma/mpc512x_dma.c b/drivers/dma/mpc512x_dma.c
index 881db2bcb48b..15f2b56359e6 100644
--- a/drivers/dma/mpc512x_dma.c
+++ b/drivers/dma/mpc512x_dma.c
@@ -800,79 +800,69 @@ err_prep:
return NULL;
}

-static int mpc_dma_device_control(struct dma_chan *chan, enum dma_ctrl_cmd cmd,
- unsigned long arg)
+static int mpc_dma_device_config(struct dma_chan *chan,
+ struct dma_slave_config *cfg)
{
- struct mpc_dma_chan *mchan;
- struct mpc_dma *mdma;
- struct dma_slave_config *cfg;
+ struct mpc_dma_chan *mchan = dma_chan_to_mpc_dma_chan(chan);
unsigned long flags;

- mchan = dma_chan_to_mpc_dma_chan(chan);
- switch (cmd) {
- case DMA_TERMINATE_ALL:
- /* Disable channel requests */
- mdma = dma_chan_to_mpc_dma(chan);
-
- spin_lock_irqsave(&mchan->lock, flags);
-
- out_8(&mdma->regs->dmacerq, chan->chan_id);
- list_splice_tail_init(&mchan->prepared, &mchan->free);
- list_splice_tail_init(&mchan->queued, &mchan->free);
- list_splice_tail_init(&mchan->active, &mchan->free);
-
- spin_unlock_irqrestore(&mchan->lock, flags);
+ /*
+ * Software constraints:
+ * - only transfers between a peripheral device and
+ * memory are supported;
+ * - only peripheral devices with 4-byte FIFO access register
+ * are supported;
+ * - minimal transfer chunk is 4 bytes and consequently
+ * source and destination addresses must be 4-byte aligned
+ * and transfer size must be aligned on (4 * maxburst)
+ * boundary;
+ * - during the transfer RAM address is being incremented by
+ * the size of minimal transfer chunk;
+ * - peripheral port's address is constant during the transfer.
+ */

- return 0;
+ if (cfg->src_addr_width != DMA_SLAVE_BUSWIDTH_4_BYTES ||
+ cfg->dst_addr_width != DMA_SLAVE_BUSWIDTH_4_BYTES ||
+ !IS_ALIGNED(cfg->src_addr, 4) ||
+ !IS_ALIGNED(cfg->dst_addr, 4)) {
+ return -EINVAL;
+ }

- case DMA_SLAVE_CONFIG:
- /*
- * Software constraints:
- * - only transfers between a peripheral device and
- * memory are supported;
- * - only peripheral devices with 4-byte FIFO access register
- * are supported;
- * - minimal transfer chunk is 4 bytes and consequently
- * source and destination addresses must be 4-byte aligned
- * and transfer size must be aligned on (4 * maxburst)
- * boundary;
- * - during the transfer RAM address is being incremented by
- * the size of minimal transfer chunk;
- * - peripheral port's address is constant during the transfer.
- */
+ spin_lock_irqsave(&mchan->lock, flags);

- cfg = (void *)arg;
+ mchan->src_per_paddr = cfg->src_addr;
+ mchan->src_tcd_nunits = cfg->src_maxburst;
+ mchan->dst_per_paddr = cfg->dst_addr;
+ mchan->dst_tcd_nunits = cfg->dst_maxburst;

- if (cfg->src_addr_width != DMA_SLAVE_BUSWIDTH_4_BYTES ||
- cfg->dst_addr_width != DMA_SLAVE_BUSWIDTH_4_BYTES ||
- !IS_ALIGNED(cfg->src_addr, 4) ||
- !IS_ALIGNED(cfg->dst_addr, 4)) {
- return -EINVAL;
- }
+ /* Apply defaults */
+ if (mchan->src_tcd_nunits == 0)
+ mchan->src_tcd_nunits = 1;
+ if (mchan->dst_tcd_nunits == 0)
+ mchan->dst_tcd_nunits = 1;

- spin_lock_irqsave(&mchan->lock, flags);
+ spin_unlock_irqrestore(&mchan->lock, flags);

- mchan->src_per_paddr = cfg->src_addr;
- mchan->src_tcd_nunits = cfg->src_maxburst;
- mchan->dst_per_paddr = cfg->dst_addr;
- mchan->dst_tcd_nunits = cfg->dst_maxburst;
+ return 0;
+}

- /* Apply defaults */
- if (mchan->src_tcd_nunits == 0)
- mchan->src_tcd_nunits = 1;
- if (mchan->dst_tcd_nunits == 0)
- mchan->dst_tcd_nunits = 1;
+static int mpc_dma_device_terminate_all(struct dma_chan *chan)
+{
+ struct mpc_dma_chan *mchan = dma_chan_to_mpc_dma_chan(chan);
+ struct mpc_dma *mdma = dma_chan_to_mpc_dma(chan);
+ unsigned long flags;

- spin_unlock_irqrestore(&mchan->lock, flags);
+ /* Disable channel requests */
+ spin_lock_irqsave(&mchan->lock, flags);

- return 0;
+ out_8(&mdma->regs->dmacerq, chan->chan_id);
+ list_splice_tail_init(&mchan->prepared, &mchan->free);
+ list_splice_tail_init(&mchan->queued, &mchan->free);
+ list_splice_tail_init(&mchan->active, &mchan->free);

- default:
- /* Unknown command */
- break;
- }
+ spin_unlock_irqrestore(&mchan->lock, flags);

- return -ENXIO;
+ return 0;
}

static int mpc_dma_probe(struct platform_device *op)
@@ -966,7 +956,8 @@ static int mpc_dma_probe(struct platform_device *op)
dma->device_tx_status = mpc_dma_tx_status;
dma->device_prep_dma_memcpy = mpc_dma_prep_memcpy;
dma->device_prep_slave_sg = mpc_dma_prep_slave_sg;
- dma->device_control = mpc_dma_device_control;
+ dma->device_config = mpc_dma_device_config;
+ dma->device_terminate_all = mpc_dma_device_terminate_all;

INIT_LIST_HEAD(&dma->channels);
dma_cap_set(DMA_MEMCPY, dma->cap_mask);
--
2.1.1

2014-10-28 21:44:02

by Maxime Ripard

[permalink] [raw]
Subject: [PATCH v4 28/58] dmaengine: moxart: Split device_control

Split the device_control callback of the Moxart DMA driver to make use of the
newly introduced callbacks, which will eventually be used to retrieve slave
capabilities.

Signed-off-by: Maxime Ripard <[email protected]>
---
drivers/dma/moxart-dma.c | 25 ++-----------------------
1 file changed, 2 insertions(+), 23 deletions(-)

diff --git a/drivers/dma/moxart-dma.c b/drivers/dma/moxart-dma.c
index 3258e484e4f6..35e1a3ab5fb8 100644
--- a/drivers/dma/moxart-dma.c
+++ b/drivers/dma/moxart-dma.c
@@ -263,28 +263,6 @@ static int moxart_slave_config(struct dma_chan *chan,
return 0;
}

-static int moxart_control(struct dma_chan *chan, enum dma_ctrl_cmd cmd,
- unsigned long arg)
-{
- int ret = 0;
-
- switch (cmd) {
- case DMA_PAUSE:
- case DMA_RESUME:
- return -EINVAL;
- case DMA_TERMINATE_ALL:
- moxart_terminate_all(chan);
- break;
- case DMA_SLAVE_CONFIG:
- ret = moxart_slave_config(chan, (struct dma_slave_config *)arg);
- break;
- default:
- ret = -ENOSYS;
- }
-
- return ret;
-}
-
static struct dma_async_tx_descriptor *moxart_prep_slave_sg(
struct dma_chan *chan, struct scatterlist *sgl,
unsigned int sg_len, enum dma_transfer_direction dir,
@@ -531,7 +509,8 @@ static void moxart_dma_init(struct dma_device *dma, struct device *dev)
dma->device_free_chan_resources = moxart_free_chan_resources;
dma->device_issue_pending = moxart_issue_pending;
dma->device_tx_status = moxart_tx_status;
- dma->device_control = moxart_control;
+ dma->device_config = moxart_slave_config;
+ dma->device_terminate_all = moxart_terminate_all;
dma->dev = dev;

INIT_LIST_HEAD(&dma->channels);
--
2.1.1

2014-10-28 21:44:29

by Maxime Ripard

[permalink] [raw]
Subject: [PATCH v4 27/58] dmaengine: mmp-tdma: Split device_control

Split the device_control callback of the Marvell MMP TDMA driver to make use
of the newly introduced callbacks, which will eventually be used to retrieve
slave capabilities.

Signed-off-by: Maxime Ripard <[email protected]>
---
drivers/dma/mmp_tdma.c | 82 +++++++++++++++++++++++++++-----------------------
1 file changed, 44 insertions(+), 38 deletions(-)

diff --git a/drivers/dma/mmp_tdma.c b/drivers/dma/mmp_tdma.c
index c6bd015b7165..1fe6eabceb70 100644
--- a/drivers/dma/mmp_tdma.c
+++ b/drivers/dma/mmp_tdma.c
@@ -164,33 +164,46 @@ static void mmp_tdma_enable_chan(struct mmp_tdma_chan *tdmac)
tdmac->status = DMA_IN_PROGRESS;
}

-static void mmp_tdma_disable_chan(struct mmp_tdma_chan *tdmac)
+static int mmp_tdma_disable_chan(struct dma_chan *chan)
{
+ struct mmp_tdma_chan *tdmac = to_mmp_tdma_chan(chan);
+
writel(readl(tdmac->reg_base + TDCR) & ~TDCR_CHANEN,
tdmac->reg_base + TDCR);

tdmac->status = DMA_COMPLETE;
+
+ return 0;
}

-static void mmp_tdma_resume_chan(struct mmp_tdma_chan *tdmac)
+static int mmp_tdma_resume_chan(struct dma_chan *chan)
{
+ struct mmp_tdma_chan *tdmac = to_mmp_tdma_chan(chan);
+
writel(readl(tdmac->reg_base + TDCR) | TDCR_CHANEN,
tdmac->reg_base + TDCR);
tdmac->status = DMA_IN_PROGRESS;
+
+ return 0;
}

-static void mmp_tdma_pause_chan(struct mmp_tdma_chan *tdmac)
+static int mmp_tdma_pause_chan(struct dma_chan *chan)
{
+ struct mmp_tdma_chan *tdmac = to_mmp_tdma_chan(chan);
+
writel(readl(tdmac->reg_base + TDCR) & ~TDCR_CHANEN,
tdmac->reg_base + TDCR);
tdmac->status = DMA_PAUSED;
+
+ return 0;
}

-static int mmp_tdma_config_chan(struct mmp_tdma_chan *tdmac)
+static int mmp_tdma_config_chan(struct dma_chan *chan)
{
+ struct mmp_tdma_chan *tdmac = to_mmp_tdma_chan(chan);
unsigned int tdcr = 0;

- mmp_tdma_disable_chan(tdmac);
+ mmp_tdma_disable_chan(chan);

if (tdmac->dir == DMA_MEM_TO_DEV)
tdcr = TDCR_DSTDIR_ADDR_HOLD | TDCR_SRCDIR_ADDR_INC;
@@ -452,42 +465,32 @@ err_out:
return NULL;
}

-static int mmp_tdma_control(struct dma_chan *chan, enum dma_ctrl_cmd cmd,
- unsigned long arg)
+static int mmp_tdma_terminate_all(struct dma_chan *chan)
{
struct mmp_tdma_chan *tdmac = to_mmp_tdma_chan(chan);
- struct dma_slave_config *dmaengine_cfg = (void *)arg;
- int ret = 0;
-
- switch (cmd) {
- case DMA_TERMINATE_ALL:
- mmp_tdma_disable_chan(tdmac);
- /* disable interrupt */
- mmp_tdma_enable_irq(tdmac, false);
- break;
- case DMA_PAUSE:
- mmp_tdma_pause_chan(tdmac);
- break;
- case DMA_RESUME:
- mmp_tdma_resume_chan(tdmac);
- break;
- case DMA_SLAVE_CONFIG:
- if (dmaengine_cfg->direction == DMA_DEV_TO_MEM) {
- tdmac->dev_addr = dmaengine_cfg->src_addr;
- tdmac->burst_sz = dmaengine_cfg->src_maxburst;
- tdmac->buswidth = dmaengine_cfg->src_addr_width;
- } else {
- tdmac->dev_addr = dmaengine_cfg->dst_addr;
- tdmac->burst_sz = dmaengine_cfg->dst_maxburst;
- tdmac->buswidth = dmaengine_cfg->dst_addr_width;
- }
- tdmac->dir = dmaengine_cfg->direction;
- return mmp_tdma_config_chan(tdmac);
- default:
- ret = -ENOSYS;
+
+ mmp_tdma_disable_chan(chan);
+ /* disable interrupt */
+ mmp_tdma_enable_irq(tdmac, false);
+}
+
+static int mmp_tdma_config(struct dma_chan *chan,
+ struct dma_slave_config *dmaengine_cfg)
+{
+ struct mmp_tdma_chan *tdmac = to_mmp_tdma_chan(chan);
+
+ if (dmaengine_cfg->direction == DMA_DEV_TO_MEM) {
+ tdmac->dev_addr = dmaengine_cfg->src_addr;
+ tdmac->burst_sz = dmaengine_cfg->src_maxburst;
+ tdmac->buswidth = dmaengine_cfg->src_addr_width;
+ } else {
+ tdmac->dev_addr = dmaengine_cfg->dst_addr;
+ tdmac->burst_sz = dmaengine_cfg->dst_maxburst;
+ tdmac->buswidth = dmaengine_cfg->dst_addr_width;
}
+ tdmac->dir = dmaengine_cfg->direction;

- return ret;
+ return mmp_tdma_config_chan(chan);
}

static enum dma_status mmp_tdma_tx_status(struct dma_chan *chan,
@@ -668,7 +671,10 @@ static int mmp_tdma_probe(struct platform_device *pdev)
tdev->device.device_prep_dma_cyclic = mmp_tdma_prep_dma_cyclic;
tdev->device.device_tx_status = mmp_tdma_tx_status;
tdev->device.device_issue_pending = mmp_tdma_issue_pending;
- tdev->device.device_control = mmp_tdma_control;
+ tdev->device.device_config = mmp_tdma_config;
+ tdev->device.device_pause = mmp_tdma_pause_chan;
+ tdev->device.device_resume = mmp_tdma_resume_chan;
+ tdev->device.device_terminate_all = mmp_tdma_terminate_all;
tdev->device.copy_align = TDMA_ALIGNMENT;

dma_set_mask(&pdev->dev, DMA_BIT_MASK(64));
--
2.1.1

2014-10-28 21:30:30

by Maxime Ripard

[permalink] [raw]
Subject: [PATCH v4 26/58] dmaengine: mmp-pdma: Split device_control

Split the device_control callback of the Marvell MMP PDMA driver to make use
of the newly introduced callbacks, which will eventually be used to retrieve
slave capabilities.

Signed-off-by: Maxime Ripard <[email protected]>
---
drivers/dma/mmp_pdma.c | 109 +++++++++++++++++++++++++------------------------
1 file changed, 56 insertions(+), 53 deletions(-)

diff --git a/drivers/dma/mmp_pdma.c b/drivers/dma/mmp_pdma.c
index a1a4db5721b8..75787b5009b5 100644
--- a/drivers/dma/mmp_pdma.c
+++ b/drivers/dma/mmp_pdma.c
@@ -683,68 +683,70 @@ fail:
return NULL;
}

-static int mmp_pdma_control(struct dma_chan *dchan, enum dma_ctrl_cmd cmd,
- unsigned long arg)
+static int mmp_pdma_config(struct dma_chan *dchan,
+ struct dma_slave_config *cfg)
{
struct mmp_pdma_chan *chan = to_mmp_pdma_chan(dchan);
- struct dma_slave_config *cfg = (void *)arg;
- unsigned long flags;
u32 maxburst = 0, addr = 0;
enum dma_slave_buswidth width = DMA_SLAVE_BUSWIDTH_UNDEFINED;

if (!dchan)
return -EINVAL;

- switch (cmd) {
- case DMA_TERMINATE_ALL:
- disable_chan(chan->phy);
- mmp_pdma_free_phy(chan);
- spin_lock_irqsave(&chan->desc_lock, flags);
- mmp_pdma_free_desc_list(chan, &chan->chain_pending);
- mmp_pdma_free_desc_list(chan, &chan->chain_running);
- spin_unlock_irqrestore(&chan->desc_lock, flags);
- chan->idle = true;
- break;
- case DMA_SLAVE_CONFIG:
- if (cfg->direction == DMA_DEV_TO_MEM) {
- chan->dcmd = DCMD_INCTRGADDR | DCMD_FLOWSRC;
- maxburst = cfg->src_maxburst;
- width = cfg->src_addr_width;
- addr = cfg->src_addr;
- } else if (cfg->direction == DMA_MEM_TO_DEV) {
- chan->dcmd = DCMD_INCSRCADDR | DCMD_FLOWTRG;
- maxburst = cfg->dst_maxburst;
- width = cfg->dst_addr_width;
- addr = cfg->dst_addr;
- }
-
- if (width == DMA_SLAVE_BUSWIDTH_1_BYTE)
- chan->dcmd |= DCMD_WIDTH1;
- else if (width == DMA_SLAVE_BUSWIDTH_2_BYTES)
- chan->dcmd |= DCMD_WIDTH2;
- else if (width == DMA_SLAVE_BUSWIDTH_4_BYTES)
- chan->dcmd |= DCMD_WIDTH4;
-
- if (maxburst == 8)
- chan->dcmd |= DCMD_BURST8;
- else if (maxburst == 16)
- chan->dcmd |= DCMD_BURST16;
- else if (maxburst == 32)
- chan->dcmd |= DCMD_BURST32;
-
- chan->dir = cfg->direction;
- chan->dev_addr = addr;
- /* FIXME: drivers should be ported over to use the filter
- * function. Once that's done, the following two lines can
- * be removed.
- */
- if (cfg->slave_id)
- chan->drcmr = cfg->slave_id;
- break;
- default:
- return -ENOSYS;
+ if (cfg->direction == DMA_DEV_TO_MEM) {
+ chan->dcmd = DCMD_INCTRGADDR | DCMD_FLOWSRC;
+ maxburst = cfg->src_maxburst;
+ width = cfg->src_addr_width;
+ addr = cfg->src_addr;
+ } else if (cfg->direction == DMA_MEM_TO_DEV) {
+ chan->dcmd = DCMD_INCSRCADDR | DCMD_FLOWTRG;
+ maxburst = cfg->dst_maxburst;
+ width = cfg->dst_addr_width;
+ addr = cfg->dst_addr;
}

+ if (width == DMA_SLAVE_BUSWIDTH_1_BYTE)
+ chan->dcmd |= DCMD_WIDTH1;
+ else if (width == DMA_SLAVE_BUSWIDTH_2_BYTES)
+ chan->dcmd |= DCMD_WIDTH2;
+ else if (width == DMA_SLAVE_BUSWIDTH_4_BYTES)
+ chan->dcmd |= DCMD_WIDTH4;
+
+ if (maxburst == 8)
+ chan->dcmd |= DCMD_BURST8;
+ else if (maxburst == 16)
+ chan->dcmd |= DCMD_BURST16;
+ else if (maxburst == 32)
+ chan->dcmd |= DCMD_BURST32;
+
+ chan->dir = cfg->direction;
+ chan->dev_addr = addr;
+ /* FIXME: drivers should be ported over to use the filter
+ * function. Once that's done, the following two lines can
+ * be removed.
+ */
+ if (cfg->slave_id)
+ chan->drcmr = cfg->slave_id;
+
+ return 0;
+}
+
+static int mmp_pdma_terminate_all(struct dma_chan *dchan)
+{
+ struct mmp_pdma_chan *chan = to_mmp_pdma_chan(dchan);
+ unsigned long flags;
+
+ if (!dchan)
+ return -EINVAL;
+
+ disable_chan(chan->phy);
+ mmp_pdma_free_phy(chan);
+ spin_lock_irqsave(&chan->desc_lock, flags);
+ mmp_pdma_free_desc_list(chan, &chan->chain_pending);
+ mmp_pdma_free_desc_list(chan, &chan->chain_running);
+ spin_unlock_irqrestore(&chan->desc_lock, flags);
+ chan->idle = true;
+
return 0;
}

@@ -1061,7 +1063,8 @@ static int mmp_pdma_probe(struct platform_device *op)
pdev->device.device_prep_slave_sg = mmp_pdma_prep_slave_sg;
pdev->device.device_prep_dma_cyclic = mmp_pdma_prep_dma_cyclic;
pdev->device.device_issue_pending = mmp_pdma_issue_pending;
- pdev->device.device_control = mmp_pdma_control;
+ pdev->device.device_config = mmp_pdma_config;
+ pdev->device.device_terminate_all = mmp_pdma_terminate_all;
pdev->device.copy_align = PDMA_ALIGNMENT;

if (pdev->dev->coherent_dma_mask)
--
2.1.1

2014-10-28 21:30:26

by Maxime Ripard

[permalink] [raw]
Subject: [PATCH v4 21/58] dmaengine: imx: Split device_control

Split the device_control callback of the Freescale IMX DMA driver to make use
of the newly introduced callbacks, which will eventually be used to retrieve
slave capabilities.

Signed-off-by: Maxime Ripard <[email protected]>
---
drivers/dma/imx-dma.c | 103 +++++++++++++++++++++++++-------------------------
1 file changed, 51 insertions(+), 52 deletions(-)

diff --git a/drivers/dma/imx-dma.c b/drivers/dma/imx-dma.c
index 9d2c9e7374dc..dfd1557384ca 100644
--- a/drivers/dma/imx-dma.c
+++ b/drivers/dma/imx-dma.c
@@ -669,69 +669,67 @@ out:

}

-static int imxdma_control(struct dma_chan *chan, enum dma_ctrl_cmd cmd,
- unsigned long arg)
+static int imxdma_terminate_all(struct dma_chan *chan)
{
struct imxdma_channel *imxdmac = to_imxdma_chan(chan);
- struct dma_slave_config *dmaengine_cfg = (void *)arg;
struct imxdma_engine *imxdma = imxdmac->imxdma;
unsigned long flags;
- unsigned int mode = 0;
-
- switch (cmd) {
- case DMA_TERMINATE_ALL:
- imxdma_disable_hw(imxdmac);

- spin_lock_irqsave(&imxdma->lock, flags);
- list_splice_tail_init(&imxdmac->ld_active, &imxdmac->ld_free);
- list_splice_tail_init(&imxdmac->ld_queue, &imxdmac->ld_free);
- spin_unlock_irqrestore(&imxdma->lock, flags);
- return 0;
- case DMA_SLAVE_CONFIG:
- if (dmaengine_cfg->direction == DMA_DEV_TO_MEM) {
- imxdmac->per_address = dmaengine_cfg->src_addr;
- imxdmac->watermark_level = dmaengine_cfg->src_maxburst;
- imxdmac->word_size = dmaengine_cfg->src_addr_width;
- } else {
- imxdmac->per_address = dmaengine_cfg->dst_addr;
- imxdmac->watermark_level = dmaengine_cfg->dst_maxburst;
- imxdmac->word_size = dmaengine_cfg->dst_addr_width;
- }
-
- switch (imxdmac->word_size) {
- case DMA_SLAVE_BUSWIDTH_1_BYTE:
- mode = IMX_DMA_MEMSIZE_8;
- break;
- case DMA_SLAVE_BUSWIDTH_2_BYTES:
- mode = IMX_DMA_MEMSIZE_16;
- break;
- default:
- case DMA_SLAVE_BUSWIDTH_4_BYTES:
- mode = IMX_DMA_MEMSIZE_32;
- break;
- }
+ imxdma_disable_hw(imxdmac);

- imxdmac->hw_chaining = 0;
+ spin_lock_irqsave(&imxdma->lock, flags);
+ list_splice_tail_init(&imxdmac->ld_active, &imxdmac->ld_free);
+ list_splice_tail_init(&imxdmac->ld_queue, &imxdmac->ld_free);
+ spin_unlock_irqrestore(&imxdma->lock, flags);
+ return 0;
+}

- imxdmac->ccr_from_device = (mode | IMX_DMA_TYPE_FIFO) |
- ((IMX_DMA_MEMSIZE_32 | IMX_DMA_TYPE_LINEAR) << 2) |
- CCR_REN;
- imxdmac->ccr_to_device =
- (IMX_DMA_MEMSIZE_32 | IMX_DMA_TYPE_LINEAR) |
- ((mode | IMX_DMA_TYPE_FIFO) << 2) | CCR_REN;
- imx_dmav1_writel(imxdma, imxdmac->dma_request,
- DMA_RSSR(imxdmac->channel));
+static int imxdma_config(struct dma_chan *chan,
+ struct dma_slave_config *dmaengine_cfg)
+{
+ struct imxdma_channel *imxdmac = to_imxdma_chan(chan);
+ struct imxdma_engine *imxdma = imxdmac->imxdma;
+ unsigned int mode = 0;

- /* Set burst length */
- imx_dmav1_writel(imxdma, imxdmac->watermark_level *
- imxdmac->word_size, DMA_BLR(imxdmac->channel));
+ if (dmaengine_cfg->direction == DMA_DEV_TO_MEM) {
+ imxdmac->per_address = dmaengine_cfg->src_addr;
+ imxdmac->watermark_level = dmaengine_cfg->src_maxburst;
+ imxdmac->word_size = dmaengine_cfg->src_addr_width;
+ } else {
+ imxdmac->per_address = dmaengine_cfg->dst_addr;
+ imxdmac->watermark_level = dmaengine_cfg->dst_maxburst;
+ imxdmac->word_size = dmaengine_cfg->dst_addr_width;
+ }

- return 0;
+ switch (imxdmac->word_size) {
+ case DMA_SLAVE_BUSWIDTH_1_BYTE:
+ mode = IMX_DMA_MEMSIZE_8;
+ break;
+ case DMA_SLAVE_BUSWIDTH_2_BYTES:
+ mode = IMX_DMA_MEMSIZE_16;
+ break;
default:
- return -ENOSYS;
+ case DMA_SLAVE_BUSWIDTH_4_BYTES:
+ mode = IMX_DMA_MEMSIZE_32;
+ break;
}

- return -EINVAL;
+ imxdmac->hw_chaining = 0;
+
+ imxdmac->ccr_from_device = (mode | IMX_DMA_TYPE_FIFO) |
+ ((IMX_DMA_MEMSIZE_32 | IMX_DMA_TYPE_LINEAR) << 2) |
+ CCR_REN;
+ imxdmac->ccr_to_device =
+ (IMX_DMA_MEMSIZE_32 | IMX_DMA_TYPE_LINEAR) |
+ ((mode | IMX_DMA_TYPE_FIFO) << 2) | CCR_REN;
+ imx_dmav1_writel(imxdma, imxdmac->dma_request,
+ DMA_RSSR(imxdmac->channel));
+
+ /* Set burst length */
+ imx_dmav1_writel(imxdma, imxdmac->watermark_level *
+ imxdmac->word_size, DMA_BLR(imxdmac->channel));
+
+ return 0;
}

static enum dma_status imxdma_tx_status(struct dma_chan *chan,
@@ -1184,7 +1182,8 @@ static int __init imxdma_probe(struct platform_device *pdev)
imxdma->dma_device.device_prep_dma_cyclic = imxdma_prep_dma_cyclic;
imxdma->dma_device.device_prep_dma_memcpy = imxdma_prep_dma_memcpy;
imxdma->dma_device.device_prep_interleaved_dma = imxdma_prep_dma_interleaved;
- imxdma->dma_device.device_control = imxdma_control;
+ imxdma->dma_device.device_config = imxdma_config;
+ imxdma->dma_device.device_terminate_all = imxdma_terminate_all;
imxdma->dma_device.device_issue_pending = imxdma_issue_pending;

platform_set_drvdata(pdev, imxdma);
--
2.1.1

2014-10-28 21:45:17

by Maxime Ripard

[permalink] [raw]
Subject: [PATCH v4 24/58] dmaengine: ipu-idmac: Split device_control

Split the device_control callback of the IPU IDMAC driver to make use
of the newly introduced callbacks, which will eventually be used to retrieve
slave capabilities.

Signed-off-by: Maxime Ripard <[email protected]>
---
drivers/dma/ipu/ipu_idmac.c | 96 ++++++++++++++++++++++++---------------------
1 file changed, 51 insertions(+), 45 deletions(-)

diff --git a/drivers/dma/ipu/ipu_idmac.c b/drivers/dma/ipu/ipu_idmac.c
index bbf62927bd72..b36f5a47578a 100644
--- a/drivers/dma/ipu/ipu_idmac.c
+++ b/drivers/dma/ipu/ipu_idmac.c
@@ -1398,76 +1398,81 @@ static void idmac_issue_pending(struct dma_chan *chan)
*/
}

-static int __idmac_control(struct dma_chan *chan, enum dma_ctrl_cmd cmd,
- unsigned long arg)
+static int idmac_pause(struct dma_chan *chan)
{
struct idmac_channel *ichan = to_idmac_chan(chan);
struct idmac *idmac = to_idmac(chan->device);
struct ipu *ipu = to_ipu(idmac);
struct list_head *list, *tmp;
unsigned long flags;
- int i;

- switch (cmd) {
- case DMA_PAUSE:
- spin_lock_irqsave(&ipu->lock, flags);
- ipu_ic_disable_task(ipu, chan->chan_id);
+ mutex_lock(&ichan->chan_mutex);

- /* Return all descriptors into "prepared" state */
- list_for_each_safe(list, tmp, &ichan->queue)
- list_del_init(list);
+ spin_lock_irqsave(&ipu->lock, flags);
+ ipu_ic_disable_task(ipu, chan->chan_id);

- ichan->sg[0] = NULL;
- ichan->sg[1] = NULL;
+ /* Return all descriptors into "prepared" state */
+ list_for_each_safe(list, tmp, &ichan->queue)
+ list_del_init(list);

- spin_unlock_irqrestore(&ipu->lock, flags);
+ ichan->sg[0] = NULL;
+ ichan->sg[1] = NULL;

- ichan->status = IPU_CHANNEL_INITIALIZED;
- break;
- case DMA_TERMINATE_ALL:
- ipu_disable_channel(idmac, ichan,
- ichan->status >= IPU_CHANNEL_ENABLED);
+ spin_unlock_irqrestore(&ipu->lock, flags);

- tasklet_disable(&ipu->tasklet);
+ ichan->status = IPU_CHANNEL_INITIALIZED;

- /* ichan->queue is modified in ISR, have to spinlock */
- spin_lock_irqsave(&ichan->lock, flags);
- list_splice_init(&ichan->queue, &ichan->free_list);
+ mutex_unlock(&ichan->chan_mutex);

- if (ichan->desc)
- for (i = 0; i < ichan->n_tx_desc; i++) {
- struct idmac_tx_desc *desc = ichan->desc + i;
- if (list_empty(&desc->list))
- /* Descriptor was prepared, but not submitted */
- list_add(&desc->list, &ichan->free_list);
+ return 0;
+}

- async_tx_clear_ack(&desc->txd);
- }
+static int __idmac_terminate_all(struct dma_chan *chan)
+{
+ struct idmac_channel *ichan = to_idmac_chan(chan);
+ struct idmac *idmac = to_idmac(chan->device);
+ struct ipu *ipu = to_ipu(idmac);
+ unsigned long flags;
+ int i;

- ichan->sg[0] = NULL;
- ichan->sg[1] = NULL;
- spin_unlock_irqrestore(&ichan->lock, flags);
+ ipu_disable_channel(idmac, ichan,
+ ichan->status >= IPU_CHANNEL_ENABLED);

- tasklet_enable(&ipu->tasklet);
+ tasklet_disable(&ipu->tasklet);

- ichan->status = IPU_CHANNEL_INITIALIZED;
- break;
- default:
- return -ENOSYS;
- }
+ /* ichan->queue is modified in ISR, have to spinlock */
+ spin_lock_irqsave(&ichan->lock, flags);
+ list_splice_init(&ichan->queue, &ichan->free_list);
+
+ if (ichan->desc)
+ for (i = 0; i < ichan->n_tx_desc; i++) {
+ struct idmac_tx_desc *desc = ichan->desc + i;
+ if (list_empty(&desc->list))
+ /* Descriptor was prepared, but not submitted */
+ list_add(&desc->list, &ichan->free_list);
+
+ async_tx_clear_ack(&desc->txd);
+ }
+
+ ichan->sg[0] = NULL;
+ ichan->sg[1] = NULL;
+ spin_unlock_irqrestore(&ichan->lock, flags);
+
+ tasklet_enable(&ipu->tasklet);
+
+ ichan->status = IPU_CHANNEL_INITIALIZED;

return 0;
}

-static int idmac_control(struct dma_chan *chan, enum dma_ctrl_cmd cmd,
- unsigned long arg)
+static int idmac_terminate_all(struct dma_chan *chan)
{
struct idmac_channel *ichan = to_idmac_chan(chan);
int ret;

mutex_lock(&ichan->chan_mutex);

- ret = __idmac_control(chan, cmd, arg);
+ ret = __idmac_terminate_all(chan);

mutex_unlock(&ichan->chan_mutex);

@@ -1568,7 +1573,7 @@ static void idmac_free_chan_resources(struct dma_chan *chan)

mutex_lock(&ichan->chan_mutex);

- __idmac_control(chan, DMA_TERMINATE_ALL, 0);
+ __idmac_terminate_all(chan);

if (ichan->status > IPU_CHANNEL_FREE) {
#ifdef DEBUG
@@ -1622,7 +1627,8 @@ static int __init ipu_idmac_init(struct ipu *ipu)

/* Compulsory for DMA_SLAVE fields */
dma->device_prep_slave_sg = idmac_prep_slave_sg;
- dma->device_control = idmac_control;
+ dma->device_pause = idmac_pause;
+ dma->device_terminate_all = idmac_terminate_all;

INIT_LIST_HEAD(&dma->channels);
for (i = 0; i < IPU_CHANNELS_NUM; i++) {
@@ -1655,7 +1661,7 @@ static void ipu_idmac_exit(struct ipu *ipu)
for (i = 0; i < IPU_CHANNELS_NUM; i++) {
struct idmac_channel *ichan = ipu->channel + i;

- idmac_control(&ichan->dma_chan, DMA_TERMINATE_ALL, 0);
+ idmac_terminate_all(&ichan->dma_chan);
}

dma_async_device_unregister(&idmac->dma);
--
2.1.1

2014-10-28 21:30:23

by Maxime Ripard

[permalink] [raw]
Subject: [PATCH v4 22/58] dmaengine: imx-sdma: Split device_control

Split the device_control callback of the Freescale IMX SDMA driver to make use
of the newly introduced callbacks, which will eventually be used to retrieve
slave capabilities.

Signed-off-by: Maxime Ripard <[email protected]>
---
drivers/dma/imx-sdma.c | 66 +++++++++++++++++++++++---------------------------
1 file changed, 30 insertions(+), 36 deletions(-)

diff --git a/drivers/dma/imx-sdma.c b/drivers/dma/imx-sdma.c
index 88afc48c2ca7..02a731f07f07 100644
--- a/drivers/dma/imx-sdma.c
+++ b/drivers/dma/imx-sdma.c
@@ -829,20 +829,29 @@ static int sdma_load_context(struct sdma_channel *sdmac)
return ret;
}

-static void sdma_disable_channel(struct sdma_channel *sdmac)
+static struct sdma_channel *to_sdma_chan(struct dma_chan *chan)
+{
+ return container_of(chan, struct sdma_channel, chan);
+}
+
+static int sdma_disable_channel(struct dma_chan *chan)
{
+ struct sdma_channel *sdmac = to_sdma_chan(chan);
struct sdma_engine *sdma = sdmac->sdma;
int channel = sdmac->channel;

writel_relaxed(BIT(channel), sdma->regs + SDMA_H_STATSTOP);
sdmac->status = DMA_ERROR;
+
+ return 0;
}

-static int sdma_config_channel(struct sdma_channel *sdmac)
+static int sdma_config_channel(struct dma_chan *chan)
{
+ struct sdma_channel *sdmac = to_sdma_chan(chan);
int ret;

- sdma_disable_channel(sdmac);
+ sdma_disable_channel(chan);

sdmac->event_mask[0] = 0;
sdmac->event_mask[1] = 0;
@@ -934,11 +943,6 @@ out:
return ret;
}

-static struct sdma_channel *to_sdma_chan(struct dma_chan *chan)
-{
- return container_of(chan, struct sdma_channel, chan);
-}
-
static dma_cookie_t sdma_tx_submit(struct dma_async_tx_descriptor *tx)
{
unsigned long flags;
@@ -1003,7 +1007,7 @@ static void sdma_free_chan_resources(struct dma_chan *chan)
struct sdma_channel *sdmac = to_sdma_chan(chan);
struct sdma_engine *sdma = sdmac->sdma;

- sdma_disable_channel(sdmac);
+ sdma_disable_channel(chan);

if (sdmac->event_id0)
sdma_event_disable(sdmac, sdmac->event_id0);
@@ -1202,35 +1206,24 @@ err_out:
return NULL;
}

-static int sdma_control(struct dma_chan *chan, enum dma_ctrl_cmd cmd,
- unsigned long arg)
+static int sdma_config(struct dma_chan *chan,
+ struct dma_slave_config *dmaengine_cfg)
{
struct sdma_channel *sdmac = to_sdma_chan(chan);
- struct dma_slave_config *dmaengine_cfg = (void *)arg;
-
- switch (cmd) {
- case DMA_TERMINATE_ALL:
- sdma_disable_channel(sdmac);
- return 0;
- case DMA_SLAVE_CONFIG:
- if (dmaengine_cfg->direction == DMA_DEV_TO_MEM) {
- sdmac->per_address = dmaengine_cfg->src_addr;
- sdmac->watermark_level = dmaengine_cfg->src_maxburst *
- dmaengine_cfg->src_addr_width;
- sdmac->word_size = dmaengine_cfg->src_addr_width;
- } else {
- sdmac->per_address = dmaengine_cfg->dst_addr;
- sdmac->watermark_level = dmaengine_cfg->dst_maxburst *
- dmaengine_cfg->dst_addr_width;
- sdmac->word_size = dmaengine_cfg->dst_addr_width;
- }
- sdmac->direction = dmaengine_cfg->direction;
- return sdma_config_channel(sdmac);
- default:
- return -ENOSYS;
- }

- return -EINVAL;
+ if (dmaengine_cfg->direction == DMA_DEV_TO_MEM) {
+ sdmac->per_address = dmaengine_cfg->src_addr;
+ sdmac->watermark_level = dmaengine_cfg->src_maxburst *
+ dmaengine_cfg->src_addr_width;
+ sdmac->word_size = dmaengine_cfg->src_addr_width;
+ } else {
+ sdmac->per_address = dmaengine_cfg->dst_addr;
+ sdmac->watermark_level = dmaengine_cfg->dst_maxburst *
+ dmaengine_cfg->dst_addr_width;
+ sdmac->word_size = dmaengine_cfg->dst_addr_width;
+ }
+ sdmac->direction = dmaengine_cfg->direction;
+ return sdma_config_channel(chan);
}

static enum dma_status sdma_tx_status(struct dma_chan *chan,
@@ -1598,7 +1591,8 @@ static int sdma_probe(struct platform_device *pdev)
sdma->dma_device.device_tx_status = sdma_tx_status;
sdma->dma_device.device_prep_slave_sg = sdma_prep_slave_sg;
sdma->dma_device.device_prep_dma_cyclic = sdma_prep_dma_cyclic;
- sdma->dma_device.device_control = sdma_control;
+ sdma->dma_device.device_config = sdma_config;
+ sdma->dma_device.device_terminate_all = sdma_disable_channel;
sdma->dma_device.device_issue_pending = sdma_issue_pending;
sdma->dma_device.dev->dma_parms = &sdma->dma_parms;
dma_set_max_seg_size(sdma->dma_device.dev, 65535);
--
2.1.1

2014-10-28 21:30:19

by Maxime Ripard

[permalink] [raw]
Subject: [PATCH v4 01/58] crypto: ux500: Use dmaengine_terminate_all API

We are removing the dmaengine_device_control API, which should never have been
exposed in the first place. Change the callers to use the proper
dmaengine_terminate_all API instead.

Signed-off-by: Maxime Ripard <[email protected]>
---
drivers/crypto/ux500/cryp/cryp_core.c | 4 ++--
drivers/crypto/ux500/hash/hash_core.c | 2 +-
2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/drivers/crypto/ux500/cryp/cryp_core.c b/drivers/crypto/ux500/cryp/cryp_core.c
index 92105f3dc8e0..75bd0f1e9a94 100644
--- a/drivers/crypto/ux500/cryp/cryp_core.c
+++ b/drivers/crypto/ux500/cryp/cryp_core.c
@@ -606,12 +606,12 @@ static void cryp_dma_done(struct cryp_ctx *ctx)
dev_dbg(ctx->device->dev, "[%s]: ", __func__);

chan = ctx->device->dma.chan_mem2cryp;
- dmaengine_device_control(chan, DMA_TERMINATE_ALL, 0);
+ dmaengine_terminate_all(chan);
dma_unmap_sg(chan->device->dev, ctx->device->dma.sg_src,
ctx->device->dma.sg_src_len, DMA_TO_DEVICE);

chan = ctx->device->dma.chan_cryp2mem;
- dmaengine_device_control(chan, DMA_TERMINATE_ALL, 0);
+ dmaengine_terminate_all(chan);
dma_unmap_sg(chan->device->dev, ctx->device->dma.sg_dst,
ctx->device->dma.sg_dst_len, DMA_FROM_DEVICE);
}
diff --git a/drivers/crypto/ux500/hash/hash_core.c b/drivers/crypto/ux500/hash/hash_core.c
index 1c73f4fbc252..48bdc5df7b3b 100644
--- a/drivers/crypto/ux500/hash/hash_core.c
+++ b/drivers/crypto/ux500/hash/hash_core.c
@@ -202,7 +202,7 @@ static void hash_dma_done(struct hash_ctx *ctx)
struct dma_chan *chan;

chan = ctx->device->dma.chan_mem2hash;
- dmaengine_device_control(chan, DMA_TERMINATE_ALL, 0);
+ dmaengine_terminate_all(chan);
dma_unmap_sg(chan->device->dev, ctx->device->dma.sg,
ctx->device->dma.sg_len, DMA_TO_DEVICE);
}
--
2.1.1

2014-10-28 21:46:23

by Maxime Ripard

[permalink] [raw]
Subject: [PATCH v4 20/58] dmaengine: fsl-edma: Split device_control

Split the device_control callback of the Freescale EDMA driver to make use
of the newly introduced callbacks, which will eventually be used to retrieve
slave capabilities.

Signed-off-by: Maxime Ripard <[email protected]>
---
drivers/dma/fsl-edma.c | 106 +++++++++++++++++++++++++++----------------------
1 file changed, 58 insertions(+), 48 deletions(-)

diff --git a/drivers/dma/fsl-edma.c b/drivers/dma/fsl-edma.c
index 13df85a16438..b5b7474f8859 100644
--- a/drivers/dma/fsl-edma.c
+++ b/drivers/dma/fsl-edma.c
@@ -292,62 +292,69 @@ static void fsl_edma_free_desc(struct virt_dma_desc *vdesc)
kfree(fsl_desc);
}

-static int fsl_edma_control(struct dma_chan *chan, enum dma_ctrl_cmd cmd,
- unsigned long arg)
+static int fsl_edma_terminate_all(struct dma_chan *chan)
{
struct fsl_edma_chan *fsl_chan = to_fsl_edma_chan(chan);
- struct dma_slave_config *cfg = (void *)arg;
unsigned long flags;
LIST_HEAD(head);

- switch (cmd) {
- case DMA_TERMINATE_ALL:
- spin_lock_irqsave(&fsl_chan->vchan.lock, flags);
+ spin_lock_irqsave(&fsl_chan->vchan.lock, flags);
+ fsl_edma_disable_request(fsl_chan);
+ fsl_chan->edesc = NULL;
+ vchan_get_all_descriptors(&fsl_chan->vchan, &head);
+ spin_unlock_irqrestore(&fsl_chan->vchan.lock, flags);
+ vchan_dma_desc_free_list(&fsl_chan->vchan, &head);
+ return 0;
+}
+
+static int fsl_edma_pause(struct dma_chan *chan)
+{
+ struct fsl_edma_chan *fsl_chan = to_fsl_edma_chan(chan);
+ unsigned long flags;
+
+ spin_lock_irqsave(&fsl_chan->vchan.lock, flags);
+ if (fsl_chan->edesc) {
fsl_edma_disable_request(fsl_chan);
- fsl_chan->edesc = NULL;
- vchan_get_all_descriptors(&fsl_chan->vchan, &head);
- spin_unlock_irqrestore(&fsl_chan->vchan.lock, flags);
- vchan_dma_desc_free_list(&fsl_chan->vchan, &head);
- return 0;
-
- case DMA_SLAVE_CONFIG:
- fsl_chan->fsc.dir = cfg->direction;
- if (cfg->direction == DMA_DEV_TO_MEM) {
- fsl_chan->fsc.dev_addr = cfg->src_addr;
- fsl_chan->fsc.addr_width = cfg->src_addr_width;
- fsl_chan->fsc.burst = cfg->src_maxburst;
- fsl_chan->fsc.attr = fsl_edma_get_tcd_attr(cfg->src_addr_width);
- } else if (cfg->direction == DMA_MEM_TO_DEV) {
- fsl_chan->fsc.dev_addr = cfg->dst_addr;
- fsl_chan->fsc.addr_width = cfg->dst_addr_width;
- fsl_chan->fsc.burst = cfg->dst_maxburst;
- fsl_chan->fsc.attr = fsl_edma_get_tcd_attr(cfg->dst_addr_width);
- } else {
- return -EINVAL;
- }
- return 0;
+ fsl_chan->status = DMA_PAUSED;
+ }
+ spin_unlock_irqrestore(&fsl_chan->vchan.lock, flags);
+ return 0;
+}

- case DMA_PAUSE:
- spin_lock_irqsave(&fsl_chan->vchan.lock, flags);
- if (fsl_chan->edesc) {
- fsl_edma_disable_request(fsl_chan);
- fsl_chan->status = DMA_PAUSED;
- }
- spin_unlock_irqrestore(&fsl_chan->vchan.lock, flags);
- return 0;
-
- case DMA_RESUME:
- spin_lock_irqsave(&fsl_chan->vchan.lock, flags);
- if (fsl_chan->edesc) {
- fsl_edma_enable_request(fsl_chan);
- fsl_chan->status = DMA_IN_PROGRESS;
- }
- spin_unlock_irqrestore(&fsl_chan->vchan.lock, flags);
- return 0;
+static int fsl_edma_resume(struct dma_chan *chan)
+{
+ struct fsl_edma_chan *fsl_chan = to_fsl_edma_chan(chan);
+ unsigned long flags;

- default:
- return -ENXIO;
+ spin_lock_irqsave(&fsl_chan->vchan.lock, flags);
+ if (fsl_chan->edesc) {
+ fsl_edma_enable_request(fsl_chan);
+ fsl_chan->status = DMA_IN_PROGRESS;
}
+ spin_unlock_irqrestore(&fsl_chan->vchan.lock, flags);
+ return 0;
+}
+
+static int fsl_edma_slave_config(struct dma_chan *chan,
+ struct dma_slave_config *cfg)
+{
+ struct fsl_edma_chan *fsl_chan = to_fsl_edma_chan(chan);
+
+ fsl_chan->fsc.dir = cfg->direction;
+ if (cfg->direction == DMA_DEV_TO_MEM) {
+ fsl_chan->fsc.dev_addr = cfg->src_addr;
+ fsl_chan->fsc.addr_width = cfg->src_addr_width;
+ fsl_chan->fsc.burst = cfg->src_maxburst;
+ fsl_chan->fsc.attr = fsl_edma_get_tcd_attr(cfg->src_addr_width);
+ } else if (cfg->direction == DMA_MEM_TO_DEV) {
+ fsl_chan->fsc.dev_addr = cfg->dst_addr;
+ fsl_chan->fsc.addr_width = cfg->dst_addr_width;
+ fsl_chan->fsc.burst = cfg->dst_maxburst;
+ fsl_chan->fsc.attr = fsl_edma_get_tcd_attr(cfg->dst_addr_width);
+ } else {
+ return -EINVAL;
+ }
+ return 0;
}

static size_t fsl_edma_desc_residue(struct fsl_edma_chan *fsl_chan,
@@ -914,7 +921,10 @@ static int fsl_edma_probe(struct platform_device *pdev)
fsl_edma->dma_dev.device_tx_status = fsl_edma_tx_status;
fsl_edma->dma_dev.device_prep_slave_sg = fsl_edma_prep_slave_sg;
fsl_edma->dma_dev.device_prep_dma_cyclic = fsl_edma_prep_dma_cyclic;
- fsl_edma->dma_dev.device_control = fsl_edma_control;
+ fsl_edma->dma_dev.device_config = fsl_edma_slave_config;
+ fsl_edma->dma_dev.device_pause = fsl_edma_pause;
+ fsl_edma->dma_dev.device_resume = fsl_edma_resume;
+ fsl_edma->dma_dev.device_terminate_all = fsl_edma_terminate_all;
fsl_edma->dma_dev.device_issue_pending = fsl_edma_issue_pending;
fsl_edma->dma_dev.device_slave_caps = fsl_dma_device_slave_caps;

--
2.1.1

2014-10-28 21:46:38

by Maxime Ripard

[permalink] [raw]
Subject: [PATCH v4 19/58] dmaengine: ep93xx: Split device_control

Split the device_control callback of the Cirrus Logic EP93xx driver to make use
of the newly introduced callbacks, which will eventually be used to retrieve
slave capabilities.

Signed-off-by: Maxime Ripard <[email protected]>
---
drivers/dma/ep93xx_dma.c | 41 +++++++----------------------------------
1 file changed, 7 insertions(+), 34 deletions(-)

diff --git a/drivers/dma/ep93xx_dma.c b/drivers/dma/ep93xx_dma.c
index 7650470196c4..a8bcbb5bc0ed 100644
--- a/drivers/dma/ep93xx_dma.c
+++ b/drivers/dma/ep93xx_dma.c
@@ -1164,13 +1164,14 @@ fail:

/**
* ep93xx_dma_terminate_all - terminate all transactions
- * @edmac: channel
+ * @chan: channel
*
* Stops all DMA transactions. All descriptors are put back to the
* @edmac->free_list and callbacks are _not_ called.
*/
-static int ep93xx_dma_terminate_all(struct ep93xx_dma_chan *edmac)
+static int ep93xx_dma_terminate_all(struct dma_chan *chan)
{
+ struct ep93xx_dma_chan *edmac = to_ep93xx_dma_chan(chan);
struct ep93xx_dma_desc *desc, *_d;
unsigned long flags;
LIST_HEAD(list);
@@ -1194,9 +1195,10 @@ static int ep93xx_dma_terminate_all(struct ep93xx_dma_chan *edmac)
return 0;
}

-static int ep93xx_dma_slave_config(struct ep93xx_dma_chan *edmac,
+static int ep93xx_dma_slave_config(struct dma_chan *chan,
struct dma_slave_config *config)
{
+ struct ep93xx_dma_chan *edmac = to_ep93xx_dma_chan(chan);
enum dma_slave_buswidth width;
unsigned long flags;
u32 addr, ctrl;
@@ -1242,36 +1244,6 @@ static int ep93xx_dma_slave_config(struct ep93xx_dma_chan *edmac,
}

/**
- * ep93xx_dma_control - manipulate all pending operations on a channel
- * @chan: channel
- * @cmd: control command to perform
- * @arg: optional argument
- *
- * Controls the channel. Function returns %0 in case of success or negative
- * error in case of failure.
- */
-static int ep93xx_dma_control(struct dma_chan *chan, enum dma_ctrl_cmd cmd,
- unsigned long arg)
-{
- struct ep93xx_dma_chan *edmac = to_ep93xx_dma_chan(chan);
- struct dma_slave_config *config;
-
- switch (cmd) {
- case DMA_TERMINATE_ALL:
- return ep93xx_dma_terminate_all(edmac);
-
- case DMA_SLAVE_CONFIG:
- config = (struct dma_slave_config *)arg;
- return ep93xx_dma_slave_config(edmac, config);
-
- default:
- break;
- }
-
- return -ENOSYS;
-}
-
-/**
* ep93xx_dma_tx_status - check if a transaction is completed
* @chan: channel
* @cookie: transaction specific cookie
@@ -1352,7 +1324,8 @@ static int __init ep93xx_dma_probe(struct platform_device *pdev)
dma_dev->device_free_chan_resources = ep93xx_dma_free_chan_resources;
dma_dev->device_prep_slave_sg = ep93xx_dma_prep_slave_sg;
dma_dev->device_prep_dma_cyclic = ep93xx_dma_prep_dma_cyclic;
- dma_dev->device_control = ep93xx_dma_control;
+ dma_dev->device_config = ep93xx_dma_slave_config;
+ dma_dev->device_terminate_all = ep93xx_dma_terminate_all;
dma_dev->device_issue_pending = ep93xx_dma_issue_pending;
dma_dev->device_tx_status = ep93xx_dma_tx_status;

--
2.1.1

2014-10-28 21:46:55

by Maxime Ripard

[permalink] [raw]
Subject: [PATCH v4 18/58] dmaengine: edma: Split device_control

Split the device_control callback of the TI EDMA driver to make use of the
newly introduced callbacks, which will eventually be used to retrieve slave
capabilities.

Signed-off-by: Maxime Ripard <[email protected]>
---
drivers/dma/edma.c | 50 +++++++++++++++-----------------------------------
1 file changed, 15 insertions(+), 35 deletions(-)

diff --git a/drivers/dma/edma.c b/drivers/dma/edma.c
index 790054eef76c..56735a0aa3a8 100644
--- a/drivers/dma/edma.c
+++ b/drivers/dma/edma.c
@@ -244,8 +244,9 @@ static void edma_execute(struct edma_chan *echan)
}
}

-static int edma_terminate_all(struct edma_chan *echan)
+static int edma_terminate_all(struct dma_chan *chan)
{
+ struct edma_chan *echan = to_edma_chan(chan);
unsigned long flags;
LIST_HEAD(head);

@@ -273,9 +274,11 @@ static int edma_terminate_all(struct edma_chan *echan)
return 0;
}

-static int edma_slave_config(struct edma_chan *echan,
+static int edma_slave_config(struct dma_chan *chan,
struct dma_slave_config *cfg)
{
+ struct edma_chan *echan = to_edma_chan(chan);
+
if (cfg->src_addr_width == DMA_SLAVE_BUSWIDTH_8_BYTES ||
cfg->dst_addr_width == DMA_SLAVE_BUSWIDTH_8_BYTES)
return -EINVAL;
@@ -285,8 +288,10 @@ static int edma_slave_config(struct edma_chan *echan,
return 0;
}

-static int edma_dma_pause(struct edma_chan *echan)
+static int edma_dma_pause(struct dma_chan *chan)
{
+ struct edma_chan *echan = to_edma_chan(chan);
+
/* Pause/Resume only allowed with cyclic mode */
if (!echan->edesc || !echan->edesc->cyclic)
return -EINVAL;
@@ -295,8 +300,10 @@ static int edma_dma_pause(struct edma_chan *echan)
return 0;
}

-static int edma_dma_resume(struct edma_chan *echan)
+static int edma_dma_resume(struct dma_chan *chan)
{
+ struct edma_chan *echan = to_edma_chan(chan);
+
/* Pause/Resume only allowed with cyclic mode */
if (!echan->edesc->cyclic)
return -EINVAL;
@@ -305,36 +312,6 @@ static int edma_dma_resume(struct edma_chan *echan)
return 0;
}

-static int edma_control(struct dma_chan *chan, enum dma_ctrl_cmd cmd,
- unsigned long arg)
-{
- int ret = 0;
- struct dma_slave_config *config;
- struct edma_chan *echan = to_edma_chan(chan);
-
- switch (cmd) {
- case DMA_TERMINATE_ALL:
- edma_terminate_all(echan);
- break;
- case DMA_SLAVE_CONFIG:
- config = (struct dma_slave_config *)arg;
- ret = edma_slave_config(echan, config);
- break;
- case DMA_PAUSE:
- ret = edma_dma_pause(echan);
- break;
-
- case DMA_RESUME:
- ret = edma_dma_resume(echan);
- break;
-
- default:
- ret = -ENOSYS;
- }
-
- return ret;
-}
-
/*
* A PaRAM set configuration abstraction used by other modes
* @chan: Channel who's PaRAM set we're configuring
@@ -1017,7 +994,10 @@ static void edma_dma_init(struct edma_cc *ecc, struct dma_device *dma,
dma->device_free_chan_resources = edma_free_chan_resources;
dma->device_issue_pending = edma_issue_pending;
dma->device_tx_status = edma_tx_status;
- dma->device_control = edma_control;
+ dma->device_config = edma_slave_config;
+ dma->device_pause = edma_dma_pause;
+ dma->device_resume = edma_dma_resume;
+ dma->device_terminate_all = edma_terminate_all;
dma->device_slave_caps = edma_dma_device_slave_caps;
dma->dev = dev;

--
2.1.1

2014-10-28 21:46:57

by Maxime Ripard

[permalink] [raw]
Subject: [PATCH v4 17/58] dmaengine: dw: Split device_control

Split the device_control callback of the DesignWare DMA driver to make use
of the newly introduced callbacks, which will eventually be used to retrieve
slave capabilities.

Signed-off-by: Maxime Ripard <[email protected]>
---
drivers/dma/dw/core.c | 82 +++++++++++++++++++++++++++------------------------
1 file changed, 44 insertions(+), 38 deletions(-)

diff --git a/drivers/dma/dw/core.c b/drivers/dma/dw/core.c
index 244722170410..0bcf3f37fd5d 100644
--- a/drivers/dma/dw/core.c
+++ b/drivers/dma/dw/core.c
@@ -954,8 +954,7 @@ static inline void convert_burst(u32 *maxburst)
*maxburst = 0;
}

-static int
-set_runtime_config(struct dma_chan *chan, struct dma_slave_config *sconfig)
+static int dwc_config(struct dma_chan *chan, struct dma_slave_config *sconfig)
{
struct dw_dma_chan *dwc = to_dw_dma_chan(chan);

@@ -972,16 +971,25 @@ set_runtime_config(struct dma_chan *chan, struct dma_slave_config *sconfig)
return 0;
}

-static inline void dwc_chan_pause(struct dw_dma_chan *dwc)
+static int dwc_pause(struct dma_chan *chan)
{
- u32 cfglo = channel_readl(dwc, CFG_LO);
- unsigned int count = 20; /* timeout iterations */
+ struct dw_dma_chan *dwc = to_dw_dma_chan(chan);
+ unsigned long flags;
+ unsigned int count = 20; /* timeout iterations */
+ u32 cfglo;
+
+ spin_lock_irqsave(&dwc->lock, flags);

+ cfglo = channel_readl(dwc, CFG_LO);
channel_writel(dwc, CFG_LO, cfglo | DWC_CFGL_CH_SUSP);
while (!(channel_readl(dwc, CFG_LO) & DWC_CFGL_FIFO_EMPTY) && count--)
udelay(2);

dwc->paused = true;
+
+ spin_unlock_irqrestore(&dwc->lock, flags);
+
+ return 0;
}

static inline void dwc_chan_resume(struct dw_dma_chan *dwc)
@@ -993,53 +1001,48 @@ static inline void dwc_chan_resume(struct dw_dma_chan *dwc)
dwc->paused = false;
}

-static int dwc_control(struct dma_chan *chan, enum dma_ctrl_cmd cmd,
- unsigned long arg)
+static int dwc_resume(struct dma_chan *chan)
{
struct dw_dma_chan *dwc = to_dw_dma_chan(chan);
- struct dw_dma *dw = to_dw_dma(chan->device);
- struct dw_desc *desc, *_desc;
unsigned long flags;
- LIST_HEAD(list);

- if (cmd == DMA_PAUSE) {
- spin_lock_irqsave(&dwc->lock, flags);
+ if (!dwc->paused)
+ return 0;

- dwc_chan_pause(dwc);
+ spin_lock_irqsave(&dwc->lock, flags);

- spin_unlock_irqrestore(&dwc->lock, flags);
- } else if (cmd == DMA_RESUME) {
- if (!dwc->paused)
- return 0;
+ dwc_chan_resume(dwc);

- spin_lock_irqsave(&dwc->lock, flags);
+ spin_unlock_irqrestore(&dwc->lock, flags);

- dwc_chan_resume(dwc);
+ return 0;
+}

- spin_unlock_irqrestore(&dwc->lock, flags);
- } else if (cmd == DMA_TERMINATE_ALL) {
- spin_lock_irqsave(&dwc->lock, flags);
+static int dwc_terminate_all(struct dma_chan *chan)
+{
+ struct dw_dma_chan *dwc = to_dw_dma_chan(chan);
+ struct dw_dma *dw = to_dw_dma(chan->device);
+ struct dw_desc *desc, *_desc;
+ unsigned long flags;
+ LIST_HEAD(list);
+
+ spin_lock_irqsave(&dwc->lock, flags);

- clear_bit(DW_DMA_IS_SOFT_LLP, &dwc->flags);
+ clear_bit(DW_DMA_IS_SOFT_LLP, &dwc->flags);

- dwc_chan_disable(dw, dwc);
+ dwc_chan_disable(dw, dwc);

- dwc_chan_resume(dwc);
+ dwc_chan_resume(dwc);

- /* active_list entries will end up before queued entries */
- list_splice_init(&dwc->queue, &list);
- list_splice_init(&dwc->active_list, &list);
+ /* active_list entries will end up before queued entries */
+ list_splice_init(&dwc->queue, &list);
+ list_splice_init(&dwc->active_list, &list);

- spin_unlock_irqrestore(&dwc->lock, flags);
+ spin_unlock_irqrestore(&dwc->lock, flags);

- /* Flush all pending and queued descriptors */
- list_for_each_entry_safe(desc, _desc, &list, desc_node)
- dwc_descriptor_complete(dwc, desc, false);
- } else if (cmd == DMA_SLAVE_CONFIG) {
- return set_runtime_config(chan, (struct dma_slave_config *)arg);
- } else {
- return -ENXIO;
- }
+ /* Flush all pending and queued descriptors */
+ list_for_each_entry_safe(desc, _desc, &list, desc_node)
+ dwc_descriptor_complete(dwc, desc, false);

return 0;
}
@@ -1655,7 +1658,10 @@ int dw_dma_probe(struct dw_dma_chip *chip, struct dw_dma_platform_data *pdata)
dw->dma.device_prep_dma_memcpy = dwc_prep_dma_memcpy;

dw->dma.device_prep_slave_sg = dwc_prep_slave_sg;
- dw->dma.device_control = dwc_control;
+ dw->dma.device_config = dwc_config;
+ dw->dma.device_pause = dwc_pause;
+ dw->dma.device_resume = dwc_resume;
+ dw->dma.device_terminate_all = dwc_terminate_all;

dw->dma.device_tx_status = dwc_tx_status;
dw->dma.device_issue_pending = dwc_issue_pending;
--
2.1.1

2014-10-28 21:30:14

by Maxime Ripard

[permalink] [raw]
Subject: [PATCH v4 16/58] dmaengine: jz4740: Split device_control

Split the device_control callback of the JZ4740 DMA driver to make use of the
newly introduced callbacks, which will eventually be used to retrieve slave
capabilities.

Signed-off-by: Maxime Ripard <[email protected]>
---
drivers/dma/dma-jz4740.c | 20 +++-----------------
1 file changed, 3 insertions(+), 17 deletions(-)

diff --git a/drivers/dma/dma-jz4740.c b/drivers/dma/dma-jz4740.c
index ae2ab14e64b3..c25fc0aa7776 100644
--- a/drivers/dma/dma-jz4740.c
+++ b/drivers/dma/dma-jz4740.c
@@ -210,7 +210,7 @@ static enum jz4740_dma_transfer_size jz4740_dma_maxburst(u32 maxburst)
}

static int jz4740_dma_slave_config(struct dma_chan *c,
- const struct dma_slave_config *config)
+ struct dma_slave_config *config)
{
struct jz4740_dmaengine_chan *chan = to_jz4740_dma_chan(c);
struct jz4740_dma_dev *dmadev = jz4740_dma_chan_get_dev(chan);
@@ -290,21 +290,6 @@ static int jz4740_dma_terminate_all(struct dma_chan *c)
return 0;
}

-static int jz4740_dma_control(struct dma_chan *chan, enum dma_ctrl_cmd cmd,
- unsigned long arg)
-{
- struct dma_slave_config *config = (struct dma_slave_config *)arg;
-
- switch (cmd) {
- case DMA_SLAVE_CONFIG:
- return jz4740_dma_slave_config(chan, config);
- case DMA_TERMINATE_ALL:
- return jz4740_dma_terminate_all(chan);
- default:
- return -ENOSYS;
- }
-}
-
static int jz4740_dma_start_transfer(struct jz4740_dmaengine_chan *chan)
{
struct jz4740_dma_dev *dmadev = jz4740_dma_chan_get_dev(chan);
@@ -561,7 +546,8 @@ static int jz4740_dma_probe(struct platform_device *pdev)
dd->device_issue_pending = jz4740_dma_issue_pending;
dd->device_prep_slave_sg = jz4740_dma_prep_slave_sg;
dd->device_prep_dma_cyclic = jz4740_dma_prep_dma_cyclic;
- dd->device_control = jz4740_dma_control;
+ dd->device_config = jz4740_dma_slave_config;
+ dd->device_terminate_all = jz4740_dma_terminate_all;
dd->dev = &pdev->dev;
dd->chancnt = JZ_DMA_NR_CHANS;
INIT_LIST_HEAD(&dd->channels);
--
2.1.1

2014-10-28 21:47:56

by Maxime Ripard

[permalink] [raw]
Subject: [PATCH v4 14/58] dmaengine: coh901318: Split device_control

Split the device_control callback of the ST-Ericsson COH901318 DMA driver to
make use of the newly introduced callbacks, which will eventually be used to
retrieve slave capabilities.

Signed-off-by: Maxime Ripard <[email protected]>
Acked-by: Linus Walleij <[email protected]>
---
drivers/dma/coh901318.c | 137 +++++++++++++++++++++---------------------------
1 file changed, 60 insertions(+), 77 deletions(-)

diff --git a/drivers/dma/coh901318.c b/drivers/dma/coh901318.c
index e88588d8ecd3..418e4e4fb7ba 100644
--- a/drivers/dma/coh901318.c
+++ b/drivers/dma/coh901318.c
@@ -2114,6 +2114,57 @@ static irqreturn_t dma_irq_handler(int irq, void *dev_id)
return IRQ_HANDLED;
}

+static int coh901318_terminate_all(struct dma_chan *chan)
+{
+ unsigned long flags;
+ struct coh901318_chan *cohc = to_coh901318_chan(chan);
+ struct coh901318_desc *cohd;
+ void __iomem *virtbase = cohc->base->virtbase;
+
+ /* The remainder of this function terminates the transfer */
+ coh901318_pause(chan);
+ spin_lock_irqsave(&cohc->lock, flags);
+
+ /* Clear any pending BE or TC interrupt */
+ if (cohc->id < 32) {
+ writel(1 << cohc->id, virtbase + COH901318_BE_INT_CLEAR1);
+ writel(1 << cohc->id, virtbase + COH901318_TC_INT_CLEAR1);
+ } else {
+ writel(1 << (cohc->id - 32), virtbase +
+ COH901318_BE_INT_CLEAR2);
+ writel(1 << (cohc->id - 32), virtbase +
+ COH901318_TC_INT_CLEAR2);
+ }
+
+ enable_powersave(cohc);
+
+ while ((cohd = coh901318_first_active_get(cohc))) {
+ /* release the lli allocation*/
+ coh901318_lli_free(&cohc->base->pool, &cohd->lli);
+
+ /* return desc to free-list */
+ coh901318_desc_remove(cohd);
+ coh901318_desc_free(cohc, cohd);
+ }
+
+ while ((cohd = coh901318_first_queued(cohc))) {
+ /* release the lli allocation*/
+ coh901318_lli_free(&cohc->base->pool, &cohd->lli);
+
+ /* return desc to free-list */
+ coh901318_desc_remove(cohd);
+ coh901318_desc_free(cohc, cohd);
+ }
+
+
+ cohc->nbr_active_done = 0;
+ cohc->busy = 0;
+
+ spin_unlock_irqrestore(&cohc->lock, flags);
+
+ return 0;
+}
+
static int coh901318_alloc_chan_resources(struct dma_chan *chan)
{
struct coh901318_chan *cohc = to_coh901318_chan(chan);
@@ -2156,7 +2207,7 @@ coh901318_free_chan_resources(struct dma_chan *chan)

spin_unlock_irqrestore(&cohc->lock, flags);

- dmaengine_terminate_all(chan);
+ coh901318_terminate_all(chan);
}


@@ -2540,80 +2591,6 @@ static void coh901318_dma_set_runtimeconfig(struct dma_chan *chan,
cohc->ctrl = ctrl;
}

-static int
-coh901318_control(struct dma_chan *chan, enum dma_ctrl_cmd cmd,
- unsigned long arg)
-{
- unsigned long flags;
- struct coh901318_chan *cohc = to_coh901318_chan(chan);
- struct coh901318_desc *cohd;
- void __iomem *virtbase = cohc->base->virtbase;
-
- if (cmd == DMA_SLAVE_CONFIG) {
- struct dma_slave_config *config =
- (struct dma_slave_config *) arg;
-
- coh901318_dma_set_runtimeconfig(chan, config);
- return 0;
- }
-
- if (cmd == DMA_PAUSE) {
- coh901318_pause(chan);
- return 0;
- }
-
- if (cmd == DMA_RESUME) {
- coh901318_resume(chan);
- return 0;
- }
-
- if (cmd != DMA_TERMINATE_ALL)
- return -ENXIO;
-
- /* The remainder of this function terminates the transfer */
- coh901318_pause(chan);
- spin_lock_irqsave(&cohc->lock, flags);
-
- /* Clear any pending BE or TC interrupt */
- if (cohc->id < 32) {
- writel(1 << cohc->id, virtbase + COH901318_BE_INT_CLEAR1);
- writel(1 << cohc->id, virtbase + COH901318_TC_INT_CLEAR1);
- } else {
- writel(1 << (cohc->id - 32), virtbase +
- COH901318_BE_INT_CLEAR2);
- writel(1 << (cohc->id - 32), virtbase +
- COH901318_TC_INT_CLEAR2);
- }
-
- enable_powersave(cohc);
-
- while ((cohd = coh901318_first_active_get(cohc))) {
- /* release the lli allocation*/
- coh901318_lli_free(&cohc->base->pool, &cohd->lli);
-
- /* return desc to free-list */
- coh901318_desc_remove(cohd);
- coh901318_desc_free(cohc, cohd);
- }
-
- while ((cohd = coh901318_first_queued(cohc))) {
- /* release the lli allocation*/
- coh901318_lli_free(&cohc->base->pool, &cohd->lli);
-
- /* return desc to free-list */
- coh901318_desc_remove(cohd);
- coh901318_desc_free(cohc, cohd);
- }
-
-
- cohc->nbr_active_done = 0;
- cohc->busy = 0;
-
- spin_unlock_irqrestore(&cohc->lock, flags);
-
- return 0;
-}
-
void coh901318_base_init(struct dma_device *dma, const int *pick_chans,
struct coh901318_base *base)
{
@@ -2717,7 +2694,10 @@ static int __init coh901318_probe(struct platform_device *pdev)
base->dma_slave.device_prep_slave_sg = coh901318_prep_slave_sg;
base->dma_slave.device_tx_status = coh901318_tx_status;
base->dma_slave.device_issue_pending = coh901318_issue_pending;
- base->dma_slave.device_control = coh901318_control;
+ base->dma_slave.device_config = coh901318_dma_set_runtimeconfig;
+ base->dma_slave.device_pause = coh901318_pause;
+ base->dma_slave.device_resume = coh901318_resume;
+ base->dma_slave.device_terminate_all = coh901318_terminate_all;
base->dma_slave.dev = &pdev->dev;

err = dma_async_device_register(&base->dma_slave);
@@ -2737,7 +2717,10 @@ static int __init coh901318_probe(struct platform_device *pdev)
base->dma_memcpy.device_prep_dma_memcpy = coh901318_prep_memcpy;
base->dma_memcpy.device_tx_status = coh901318_tx_status;
base->dma_memcpy.device_issue_pending = coh901318_issue_pending;
- base->dma_memcpy.device_control = coh901318_control;
+ base->dma_memcpy.device_config = coh901318_dma_set_runtimeconfig;
+ base->dma_memcpy.device_pause = coh901318_pause;
+ base->dma_memcpy.device_resume = coh901318_resume;
+ base->dma_memcpy.device_terminate_all = coh901318_terminate_all;
base->dma_memcpy.dev = &pdev->dev;
/*
* This controller can only access address at even 32bit boundaries,
--
2.1.1

2014-10-28 21:48:26

by Maxime Ripard

[permalink] [raw]
Subject: [PATCH v4 13/58] dmaengine: bcm2835: Split device_control

Split the device_control callback of the Broadcom BCM2835 DMA driver to make
use of the newly introduced callbacks, which will eventually be used to retrieve
slave capabilities.

Signed-off-by: Maxime Ripard <[email protected]>
Acked-by: Stephen Warren <[email protected]>
---
drivers/dma/bcm2835-dma.c | 31 ++++++++-----------------------
1 file changed, 8 insertions(+), 23 deletions(-)

diff --git a/drivers/dma/bcm2835-dma.c b/drivers/dma/bcm2835-dma.c
index 8a6a9c27fe83..243bb1ed6bfc 100644
--- a/drivers/dma/bcm2835-dma.c
+++ b/drivers/dma/bcm2835-dma.c
@@ -436,9 +436,11 @@ static struct dma_async_tx_descriptor *bcm2835_dma_prep_dma_cyclic(
return vchan_tx_prep(&c->vc, &d->vd, flags);
}

-static int bcm2835_dma_slave_config(struct bcm2835_chan *c,
- struct dma_slave_config *cfg)
+static int bcm2835_dma_slave_config(struct dma_chan *chan,
+ struct dma_slave_config *cfg)
{
+ struct bcm2835_chan *c = to_bcm2835_dma_chan(chan);
+
if ((cfg->direction == DMA_DEV_TO_MEM &&
cfg->src_addr_width != DMA_SLAVE_BUSWIDTH_4_BYTES) ||
(cfg->direction == DMA_MEM_TO_DEV &&
@@ -452,8 +454,9 @@ static int bcm2835_dma_slave_config(struct bcm2835_chan *c,
return 0;
}

-static int bcm2835_dma_terminate_all(struct bcm2835_chan *c)
+static int bcm2835_dma_terminate_all(struct dma_chan *chan)
{
+ struct bcm2835_chan *c = to_bcm2835_dma_chan(chan);
struct bcm2835_dmadev *d = to_bcm2835_dma_dev(c->vc.chan.device);
unsigned long flags;
int timeout = 10000;
@@ -495,24 +498,6 @@ static int bcm2835_dma_terminate_all(struct bcm2835_chan *c)
return 0;
}

-static int bcm2835_dma_control(struct dma_chan *chan, enum dma_ctrl_cmd cmd,
- unsigned long arg)
-{
- struct bcm2835_chan *c = to_bcm2835_dma_chan(chan);
-
- switch (cmd) {
- case DMA_SLAVE_CONFIG:
- return bcm2835_dma_slave_config(c,
- (struct dma_slave_config *)arg);
-
- case DMA_TERMINATE_ALL:
- return bcm2835_dma_terminate_all(c);
-
- default:
- return -ENXIO;
- }
-}
-
static int bcm2835_dma_chan_init(struct bcm2835_dmadev *d, int chan_id, int irq)
{
struct bcm2835_chan *c;
@@ -617,9 +602,9 @@ static int bcm2835_dma_probe(struct platform_device *pdev)
od->ddev.device_free_chan_resources = bcm2835_dma_free_chan_resources;
od->ddev.device_tx_status = bcm2835_dma_tx_status;
od->ddev.device_issue_pending = bcm2835_dma_issue_pending;
- od->ddev.device_slave_caps = bcm2835_dma_device_slave_caps;
od->ddev.device_prep_dma_cyclic = bcm2835_dma_prep_dma_cyclic;
- od->ddev.device_control = bcm2835_dma_control;
+ od->ddev.device_config = bcm2835_dma_slave_config;
+ od->ddev.device_terminate_all = bcm2835_dma_terminate_all;
od->ddev.dev = &pdev->dev;
INIT_LIST_HEAD(&od->ddev.channels);
spin_lock_init(&od->lock);
--
2.1.1

2014-10-28 21:48:46

by Maxime Ripard

[permalink] [raw]
Subject: [PATCH v4 12/58] dmaengine: hdmac: Split device_control

Split the device_control callback of the Atmel HDMAC driver to make use
of the newly introduced callbacks, which will eventually be used to retrieve
slave capabilities.

Signed-off-by: Maxime Ripard <[email protected]>
---
drivers/dma/at_hdmac.c | 121 +++++++++++++++++++++++++++++--------------------
1 file changed, 73 insertions(+), 48 deletions(-)

diff --git a/drivers/dma/at_hdmac.c b/drivers/dma/at_hdmac.c
index ca9dd2613283..86450b3442f2 100644
--- a/drivers/dma/at_hdmac.c
+++ b/drivers/dma/at_hdmac.c
@@ -972,11 +972,13 @@ err_out:
return NULL;
}

-static int set_runtime_config(struct dma_chan *chan,
- struct dma_slave_config *sconfig)
+static int atc_config(struct dma_chan *chan,
+ struct dma_slave_config *sconfig)
{
struct at_dma_chan *atchan = to_at_dma_chan(chan);

+ dev_vdbg(chan2dev(chan), "%s\n", __func__);
+
/* Check if it is chan is configured for slave transfers */
if (!chan->private)
return -EINVAL;
@@ -989,9 +991,28 @@ static int set_runtime_config(struct dma_chan *chan,
return 0;
}

+static int atc_pause(struct dma_chan *chan)
+{
+ struct at_dma_chan *atchan = to_at_dma_chan(chan);
+ struct at_dma *atdma = to_at_dma(chan->device);
+ int chan_id = atchan->chan_common.chan_id;
+ unsigned long flags;

-static int atc_control(struct dma_chan *chan, enum dma_ctrl_cmd cmd,
- unsigned long arg)
+ LIST_HEAD(list);
+
+ dev_vdbg(chan2dev(chan), "%s\n", __func__);
+
+ spin_lock_irqsave(&atchan->lock, flags);
+
+ dma_writel(atdma, CHER, AT_DMA_SUSP(chan_id));
+ set_bit(ATC_IS_PAUSED, &atchan->status);
+
+ spin_unlock_irqrestore(&atchan->lock, flags);
+
+ return 0;
+}
+
+static int atc_resume(struct dma_chan *chan)
{
struct at_dma_chan *atchan = to_at_dma_chan(chan);
struct at_dma *atdma = to_at_dma(chan->device);
@@ -1000,60 +1021,61 @@ static int atc_control(struct dma_chan *chan, enum dma_ctrl_cmd cmd,

LIST_HEAD(list);

- dev_vdbg(chan2dev(chan), "atc_control (%d)\n", cmd);
+ dev_vdbg(chan2dev(chan), "%s\n", __func__);

- if (cmd == DMA_PAUSE) {
- spin_lock_irqsave(&atchan->lock, flags);
+ if (!atc_chan_is_paused(atchan))
+ return 0;

- dma_writel(atdma, CHER, AT_DMA_SUSP(chan_id));
- set_bit(ATC_IS_PAUSED, &atchan->status);
+ spin_lock_irqsave(&atchan->lock, flags);

- spin_unlock_irqrestore(&atchan->lock, flags);
- } else if (cmd == DMA_RESUME) {
- if (!atc_chan_is_paused(atchan))
- return 0;
+ dma_writel(atdma, CHDR, AT_DMA_RES(chan_id));
+ clear_bit(ATC_IS_PAUSED, &atchan->status);

- spin_lock_irqsave(&atchan->lock, flags);
+ spin_unlock_irqrestore(&atchan->lock, flags);

- dma_writel(atdma, CHDR, AT_DMA_RES(chan_id));
- clear_bit(ATC_IS_PAUSED, &atchan->status);
+ return 0;
+}

- spin_unlock_irqrestore(&atchan->lock, flags);
- } else if (cmd == DMA_TERMINATE_ALL) {
- struct at_desc *desc, *_desc;
- /*
- * This is only called when something went wrong elsewhere, so
- * we don't really care about the data. Just disable the
- * channel. We still have to poll the channel enable bit due
- * to AHB/HSB limitations.
- */
- spin_lock_irqsave(&atchan->lock, flags);
+static int atc_terminate_all(struct dma_chan *chan)
+{
+ struct at_dma_chan *atchan = to_at_dma_chan(chan);
+ struct at_dma *atdma = to_at_dma(chan->device);
+ int chan_id = atchan->chan_common.chan_id;
+ struct at_desc *desc, *_desc;
+ unsigned long flags;

- /* disabling channel: must also remove suspend state */
- dma_writel(atdma, CHDR, AT_DMA_RES(chan_id) | atchan->mask);
+ LIST_HEAD(list);

- /* confirm that this channel is disabled */
- while (dma_readl(atdma, CHSR) & atchan->mask)
- cpu_relax();
+ dev_vdbg(chan2dev(chan), "%s\n", __func__);

- /* active_list entries will end up before queued entries */
- list_splice_init(&atchan->queue, &list);
- list_splice_init(&atchan->active_list, &list);
+ /*
+ * This is only called when something went wrong elsewhere, so
+ * we don't really care about the data. Just disable the
+ * channel. We still have to poll the channel enable bit due
+ * to AHB/HSB limitations.
+ */
+ spin_lock_irqsave(&atchan->lock, flags);

- /* Flush all pending and queued descriptors */
- list_for_each_entry_safe(desc, _desc, &list, desc_node)
- atc_chain_complete(atchan, desc);
+ /* disabling channel: must also remove suspend state */
+ dma_writel(atdma, CHDR, AT_DMA_RES(chan_id) | atchan->mask);

- clear_bit(ATC_IS_PAUSED, &atchan->status);
- /* if channel dedicated to cyclic operations, free it */
- clear_bit(ATC_IS_CYCLIC, &atchan->status);
+ /* confirm that this channel is disabled */
+ while (dma_readl(atdma, CHSR) & atchan->mask)
+ cpu_relax();

- spin_unlock_irqrestore(&atchan->lock, flags);
- } else if (cmd == DMA_SLAVE_CONFIG) {
- return set_runtime_config(chan, (struct dma_slave_config *)arg);
- } else {
- return -ENXIO;
- }
+ /* active_list entries will end up before queued entries */
+ list_splice_init(&atchan->queue, &list);
+ list_splice_init(&atchan->active_list, &list);
+
+ /* Flush all pending and queued descriptors */
+ list_for_each_entry_safe(desc, _desc, &list, desc_node)
+ atc_chain_complete(atchan, desc);
+
+ clear_bit(ATC_IS_PAUSED, &atchan->status);
+ /* if channel dedicated to cyclic operations, free it */
+ clear_bit(ATC_IS_CYCLIC, &atchan->status);
+
+ spin_unlock_irqrestore(&atchan->lock, flags);

return 0;
}
@@ -1505,7 +1527,10 @@ static int __init at_dma_probe(struct platform_device *pdev)
/* controller can do slave DMA: can trigger cyclic transfers */
dma_cap_set(DMA_CYCLIC, atdma->dma_common.cap_mask);
atdma->dma_common.device_prep_dma_cyclic = atc_prep_dma_cyclic;
- atdma->dma_common.device_control = atc_control;
+ atdma->dma_common.device_config = atc_config;
+ atdma->dma_common.device_pause = atc_pause;
+ atdma->dma_common.device_resume = atc_resume;
+ atdma->dma_common.device_terminate_all = atc_terminate_all;
}

dma_writel(atdma, EN, AT_DMA_ENABLE);
@@ -1622,7 +1647,7 @@ static void atc_suspend_cyclic(struct at_dma_chan *atchan)
if (!atc_chan_is_paused(atchan)) {
dev_warn(chan2dev(chan),
"cyclic channel not paused, should be done by channel user\n");
- atc_control(chan, DMA_PAUSE, 0);
+ atc_pause(chan);
}

/* now preserve additional data for cyclic operations */
--
2.1.1

2014-10-28 22:06:49

by Laurent Pinchart

[permalink] [raw]
Subject: Re: [PATCH v4 57/58] dmaengine: Add a warning for drivers not using the generic slave caps retrieval

Hi Maxime,

Thank you for the patch.

On Tuesday 28 October 2014 22:26:12 Maxime Ripard wrote:
> For the slave caps retrieval to be really useful, most drivers need to
> implement it.
>
> Hence, we need to be slightly more aggressive and trigger a warning at
> registration time for drivers that don't fill in their caps info, in order to
> encourage them to implement it.
>
> Signed-off-by: Maxime Ripard <[email protected]>

Acked-by: Laurent Pinchart <[email protected]>

> ---
> drivers/dma/dmaengine.c | 3 +++
> 1 file changed, 3 insertions(+)
>
> diff --git a/drivers/dma/dmaengine.c b/drivers/dma/dmaengine.c
> index 98e9431f85ec..300c8cd2786c 100644
> --- a/drivers/dma/dmaengine.c
> +++ b/drivers/dma/dmaengine.c
> @@ -827,6 +827,9 @@ int dma_async_device_register(struct dma_device *device)
> BUG_ON(!device->device_issue_pending);
> BUG_ON(!device->dev);
>
> + WARN(dma_has_cap(DMA_SLAVE, device->cap_mask) && !device->directions,
> + "this driver doesn't support generic slave capabilities
> reporting\n"); +
> /* note: this only matters in the
> * CONFIG_ASYNC_TX_ENABLE_CHANNEL_SWITCH=n case
> */

--
Regards,

Laurent Pinchart

2014-10-28 22:30:57

by Laurent Pinchart

[permalink] [raw]
Subject: [PATCH] dmaengine: Move dma_get_slave_caps() implementation to dmaengine.c

The function is too big to be a static inline.

Signed-off-by: Laurent Pinchart <[email protected]>
---
drivers/dma/dmaengine.c | 33 +++++++++++++++++++++++++++++++++
include/linux/dmaengine.h | 32 +-------------------------------
2 files changed, 34 insertions(+), 31 deletions(-)

diff --git a/drivers/dma/dmaengine.c b/drivers/dma/dmaengine.c
index 300c8cd2786c..caa91abca2ee 100644
--- a/drivers/dma/dmaengine.c
+++ b/drivers/dma/dmaengine.c
@@ -480,6 +480,39 @@ static void dma_channel_rebalance(void)
}
}

+int dma_get_slave_caps(struct dma_chan *chan, struct dma_slave_caps *caps)
+{
+ struct dma_device *device;
+
+ if (!chan || !caps)
+ return -EINVAL;
+
+ device = chan->device;
+
+ /* check if the channel supports slave transactions */
+ if (!test_bit(DMA_SLAVE, device->cap_mask.bits))
+ return -ENXIO;
+
+ /*
+ * Check whether it reports it uses the generic slave
+ * capabilities, if not, that means it doesn't support any
+ * kind of slave capabilities reporting.
+ */
+ if (!device->directions)
+ return -ENXIO;
+
+ caps->src_addr_widths = device->src_addr_widths;
+ caps->dst_addr_widths = device->dst_addr_widths;
+ caps->directions = device->directions;
+ caps->residue_granularity = device->residue_granularity;
+
+ caps->cmd_pause = !!device->device_pause;
+ caps->cmd_terminate = !!device->device_terminate_all;
+
+ return 0;
+}
+EXPORT_SYMBOL_GPL(dma_get_slave_caps);
+
static struct dma_chan *private_candidate(const dma_cap_mask_t *mask,
struct dma_device *dev,
dma_filter_fn fn, void *fn_param)
diff --git a/include/linux/dmaengine.h b/include/linux/dmaengine.h
index b01b7a7c581b..92e0ca01061c 100644
--- a/include/linux/dmaengine.h
+++ b/include/linux/dmaengine.h
@@ -757,37 +757,7 @@ static inline struct dma_async_tx_descriptor *dmaengine_prep_dma_sg(
src_sg, src_nents, flags);
}

-static inline int dma_get_slave_caps(struct dma_chan *chan, struct dma_slave_caps *caps)
-{
- struct dma_device *device;
-
- if (!chan || !caps)
- return -EINVAL;
-
- device = chan->device;
-
- /* check if the channel supports slave transactions */
- if (!test_bit(DMA_SLAVE, device->cap_mask.bits))
- return -ENXIO;
-
- /*
- * Check whether it reports it uses the generic slave
- * capabilities, if not, that means it doesn't support any
- * kind of slave capabilities reporting.
- */
- if (!device->directions)
- return -ENXIO;
-
- caps->src_addr_widths = device->src_addr_widths;
- caps->dst_addr_widths = device->dst_addr_widths;
- caps->directions = device->directions;
- caps->residue_granularity = device->residue_granularity;
-
- caps->cmd_pause = !!device->device_pause;
- caps->cmd_terminate = !!device->device_terminate_all;
-
- return 0;
-}
+int dma_get_slave_caps(struct dma_chan *chan, struct dma_slave_caps *caps);

static inline int dmaengine_terminate_all(struct dma_chan *chan)
{
--
Regards,

Laurent Pinchart
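
To show how the exported function is meant to be consumed, here is a minimal,
hypothetical client-side check (the helper name and the chosen direction are
illustrative only; they are not part of the patch):

#include <linux/dmaengine.h>

/*
 * Hypothetical client helper: report whether a channel supports
 * memory-to-device slave transfers and can be paused.
 */
static bool example_chan_can_pause_tx(struct dma_chan *chan)
{
        struct dma_slave_caps caps;

        if (dma_get_slave_caps(chan, &caps))
                return false;

        return (caps.directions & (1 << DMA_MEM_TO_DEV)) && caps.cmd_pause;
}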

2014-10-28 22:31:44

by Laurent Pinchart

[permalink] [raw]
Subject: Re: [PATCH v4 10/58] dmaengine: Create a generic dma_slave_caps callback

Hi Maxime,

Thank you for the patch.

On Tuesday 28 October 2014 22:25:25 Maxime Ripard wrote:
> dma_slave_caps is very important to the generic layers that might interact
> with dmaengine, such as ASoC. Unfortunately, it has been added as yet
> another dma_device callback, and most of the existing drivers haven't
> implemented it, reducing its reliability.
>
> Introduce a generic behaviour to implement this, which relies on both the split
> of device_control to derive which functions are supported and on new
> variables to be set in the dma_device structure.
>
> These variables hold what used to be the capabilities, which were set
> per-channel. However, this proved to be a bit overkill, since every driver
> filling these so far was hardcoding them, disregarding which channel was
> actually given.
>
> Signed-off-by: Maxime Ripard <[email protected]>

Acked-by: Laurent Pinchart <[email protected]>

You might want to consider adding "dmaengine: Move dma_get_slave_caps()
implementation to dmaengine.c" to this series.

> ---
> include/linux/dmaengine.h | 41 +++++++++++++++++++++++++++++++++++++----
> 1 file changed, 37 insertions(+), 4 deletions(-)
>
> diff --git a/include/linux/dmaengine.h b/include/linux/dmaengine.h
> index b0efc805937a..be9e60aa8f5e 100644
> --- a/include/linux/dmaengine.h
> +++ b/include/linux/dmaengine.h
> @@ -593,6 +593,14 @@ struct dma_tx_state {
> * @fill_align: alignment shift for memset operations
> * @dev_id: unique device ID
> * @dev: struct device reference for dma mapping api
> + * @src_addr_widths: bit mask of src addr widths the device supports
> + * @dst_addr_widths: bit mask of dst addr widths the device supports
> + * @directions: bit mask of slave direction the device supports since
> + * the enum dma_transfer_direction is not defined as bits for
> + * each type of direction, the dma controller should fill (1 <<
> + * <TYPE>) and same should be checked by controller as well
> + * @residue_granularity: granularity of the transfer residue reported
> + * by tx_status
> * @device_alloc_chan_resources: allocate resources and return the
> * number of allocated descriptors
> * @device_free_chan_resources: release DMA channel's resources
> @@ -642,6 +650,11 @@ struct dma_device {
> int dev_id;
> struct device *dev;
>
> + u32 src_addr_widths;
> + u32 dst_addr_widths;
> + u32 directions;
> + enum dma_residue_granularity residue_granularity;
> +
> int (*device_alloc_chan_resources)(struct dma_chan *chan);
> void (*device_free_chan_resources)(struct dma_chan *chan);
>
> @@ -783,17 +796,37 @@ static inline struct dma_async_tx_descriptor
> *dmaengine_prep_dma_sg(
>
> static inline int dma_get_slave_caps(struct dma_chan *chan, struct
> dma_slave_caps *caps) {
> + struct dma_device *device;
> +
> if (!chan || !caps)
> return -EINVAL;
>
> + device = chan->device;
> +
> /* check if the channel supports slave transactions */
> - if (!test_bit(DMA_SLAVE, chan->device->cap_mask.bits))
> + if (!test_bit(DMA_SLAVE, device->cap_mask.bits))
> + return -ENXIO;
> +
> + if (device->device_slave_caps)
> + return device->device_slave_caps(chan, caps);
> +
> + /*
> + * Check whether it reports it uses the generic slave
> + * capabilities, if not, that means it doesn't support any
> + * kind of slave capabilities reporting.
> + */
> + if (!device->directions)
> return -ENXIO;
>
> - if (chan->device->device_slave_caps)
> - return chan->device->device_slave_caps(chan, caps);
> + caps->src_addr_widths = device->src_addr_widths;
> + caps->dst_addr_widths = device->dst_addr_widths;
> + caps->directions = device->directions;
> + caps->residue_granularity = device->residue_granularity;
> +
> + caps->cmd_pause = !!device->device_pause;
> + caps->cmd_terminate = !!device->device_terminate_all;
>
> - return -ENXIO;
> + return 0;
> }
>
> static inline int dmaengine_terminate_all(struct dma_chan *chan)

--
Regards,

Laurent Pinchart
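
As a concrete illustration of the new dma_device fields introduced above, a
converted driver would advertise its capabilities at probe time roughly as in
the following sketch (the function name and the chosen widths, directions and
granularity are arbitrary examples, not taken from any driver in the series):

#include <linux/dmaengine.h>

/*
 * Hypothetical probe-time fragment: fill the new dma_device fields so the
 * core can answer dma_get_slave_caps() without a device_slave_caps
 * callback. The values are illustrative only.
 */
static void example_fill_slave_caps(struct dma_device *dd)
{
        dd->src_addr_widths = 1 << DMA_SLAVE_BUSWIDTH_4_BYTES;
        dd->dst_addr_widths = 1 << DMA_SLAVE_BUSWIDTH_4_BYTES;
        dd->directions = (1 << DMA_DEV_TO_MEM) | (1 << DMA_MEM_TO_DEV);
        dd->residue_granularity = DMA_RESIDUE_GRANULARITY_BURST;
}

The pause and terminate capabilities do not need to be declared here: the core
derives them from the presence of the device_pause and device_terminate_all
callbacks.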

2014-11-03 11:12:52

by Nicolas Ferre

[permalink] [raw]
Subject: Re: [PATCH v4 02/58] serial: at91: Use dmaengine_slave_config API

On 28/10/2014 22:25, Maxime Ripard :
> We are removing the dmaengine_device_control API, that shouldn't even have been
> exposed in the first place. Change the callers to use the proper API.
>
> Signed-off-by: Maxime Ripard <[email protected]>

Acked-by: Nicolas Ferre <[email protected]>

Thanks, Maxime. Do I have to carry the patch myself or is it part of a
series?

Bye,

> ---
> drivers/tty/serial/atmel_serial.c | 10 ++++------
> 1 file changed, 4 insertions(+), 6 deletions(-)
>
> diff --git a/drivers/tty/serial/atmel_serial.c b/drivers/tty/serial/atmel_serial.c
> index edde3eca055d..10e6b382d68f 100644
> --- a/drivers/tty/serial/atmel_serial.c
> +++ b/drivers/tty/serial/atmel_serial.c
> @@ -862,9 +862,8 @@ static int atmel_prepare_tx_dma(struct uart_port *port)
> config.dst_addr_width = DMA_SLAVE_BUSWIDTH_1_BYTE;
> config.dst_addr = port->mapbase + ATMEL_US_THR;
>
> - ret = dmaengine_device_control(atmel_port->chan_tx,
> - DMA_SLAVE_CONFIG,
> - (unsigned long)&config);
> + ret = dmaengine_slave_config(atmel_port->chan_tx,
> + &config);
> if (ret) {
> dev_err(port->dev, "DMA tx slave configuration failed\n");
> goto chan_err;
> @@ -1026,9 +1025,8 @@ static int atmel_prepare_rx_dma(struct uart_port *port)
> config.src_addr_width = DMA_SLAVE_BUSWIDTH_1_BYTE;
> config.src_addr = port->mapbase + ATMEL_US_RHR;
>
> - ret = dmaengine_device_control(atmel_port->chan_rx,
> - DMA_SLAVE_CONFIG,
> - (unsigned long)&config);
> + ret = dmaengine_slave_config(atmel_port->chan_rx,
> + &config);
> if (ret) {
> dev_err(port->dev, "DMA rx slave configuration failed\n");
> goto chan_err;
>


--
Nicolas Ferre

2014-11-03 11:15:58

by Nicolas Ferre

[permalink] [raw]
Subject: Re: [PATCH v4 12/58] dmaengine: hdmac: Split device_control

On 28/10/2014 22:25, Maxime Ripard :
> Split the device_control callback of the Atmel HDMAC driver to make use
> of the newly introduced callbacks, which will eventually be used to retrieve
> slave capabilities.
>
> Signed-off-by: Maxime Ripard <[email protected]>

It seems okay:

Acked-by: Nicolas Ferre <[email protected]>

Thanks.

> ---
> drivers/dma/at_hdmac.c | 121 +++++++++++++++++++++++++++++--------------------
> 1 file changed, 73 insertions(+), 48 deletions(-)
>
> diff --git a/drivers/dma/at_hdmac.c b/drivers/dma/at_hdmac.c
> index ca9dd2613283..86450b3442f2 100644
> --- a/drivers/dma/at_hdmac.c
> +++ b/drivers/dma/at_hdmac.c
> @@ -972,11 +972,13 @@ err_out:
> return NULL;
> }
>
> -static int set_runtime_config(struct dma_chan *chan,
> - struct dma_slave_config *sconfig)
> +static int atc_config(struct dma_chan *chan,
> + struct dma_slave_config *sconfig)
> {
> struct at_dma_chan *atchan = to_at_dma_chan(chan);
>
> + dev_vdbg(chan2dev(chan), "%s\n", __func__);
> +
> /* Check if it is chan is configured for slave transfers */
> if (!chan->private)
> return -EINVAL;
> @@ -989,9 +991,28 @@ static int set_runtime_config(struct dma_chan *chan,
> return 0;
> }
>
> +static int atc_pause(struct dma_chan *chan)
> +{
> + struct at_dma_chan *atchan = to_at_dma_chan(chan);
> + struct at_dma *atdma = to_at_dma(chan->device);
> + int chan_id = atchan->chan_common.chan_id;
> + unsigned long flags;
>
> -static int atc_control(struct dma_chan *chan, enum dma_ctrl_cmd cmd,
> - unsigned long arg)
> + LIST_HEAD(list);
> +
> + dev_vdbg(chan2dev(chan), "%s\n", __func__);
> +
> + spin_lock_irqsave(&atchan->lock, flags);
> +
> + dma_writel(atdma, CHER, AT_DMA_SUSP(chan_id));
> + set_bit(ATC_IS_PAUSED, &atchan->status);
> +
> + spin_unlock_irqrestore(&atchan->lock, flags);
> +
> + return 0;
> +}
> +
> +static int atc_resume(struct dma_chan *chan)
> {
> struct at_dma_chan *atchan = to_at_dma_chan(chan);
> struct at_dma *atdma = to_at_dma(chan->device);
> @@ -1000,60 +1021,61 @@ static int atc_control(struct dma_chan *chan, enum dma_ctrl_cmd cmd,
>
> LIST_HEAD(list);
>
> - dev_vdbg(chan2dev(chan), "atc_control (%d)\n", cmd);
> + dev_vdbg(chan2dev(chan), "%s\n", __func__);
>
> - if (cmd == DMA_PAUSE) {
> - spin_lock_irqsave(&atchan->lock, flags);
> + if (!atc_chan_is_paused(atchan))
> + return 0;
>
> - dma_writel(atdma, CHER, AT_DMA_SUSP(chan_id));
> - set_bit(ATC_IS_PAUSED, &atchan->status);
> + spin_lock_irqsave(&atchan->lock, flags);
>
> - spin_unlock_irqrestore(&atchan->lock, flags);
> - } else if (cmd == DMA_RESUME) {
> - if (!atc_chan_is_paused(atchan))
> - return 0;
> + dma_writel(atdma, CHDR, AT_DMA_RES(chan_id));
> + clear_bit(ATC_IS_PAUSED, &atchan->status);
>
> - spin_lock_irqsave(&atchan->lock, flags);
> + spin_unlock_irqrestore(&atchan->lock, flags);
>
> - dma_writel(atdma, CHDR, AT_DMA_RES(chan_id));
> - clear_bit(ATC_IS_PAUSED, &atchan->status);
> + return 0;
> +}
>
> - spin_unlock_irqrestore(&atchan->lock, flags);
> - } else if (cmd == DMA_TERMINATE_ALL) {
> - struct at_desc *desc, *_desc;
> - /*
> - * This is only called when something went wrong elsewhere, so
> - * we don't really care about the data. Just disable the
> - * channel. We still have to poll the channel enable bit due
> - * to AHB/HSB limitations.
> - */
> - spin_lock_irqsave(&atchan->lock, flags);
> +static int atc_terminate_all(struct dma_chan *chan)
> +{
> + struct at_dma_chan *atchan = to_at_dma_chan(chan);
> + struct at_dma *atdma = to_at_dma(chan->device);
> + int chan_id = atchan->chan_common.chan_id;
> + struct at_desc *desc, *_desc;
> + unsigned long flags;
>
> - /* disabling channel: must also remove suspend state */
> - dma_writel(atdma, CHDR, AT_DMA_RES(chan_id) | atchan->mask);
> + LIST_HEAD(list);
>
> - /* confirm that this channel is disabled */
> - while (dma_readl(atdma, CHSR) & atchan->mask)
> - cpu_relax();
> + dev_vdbg(chan2dev(chan), "%s\n", __func__);
>
> - /* active_list entries will end up before queued entries */
> - list_splice_init(&atchan->queue, &list);
> - list_splice_init(&atchan->active_list, &list);
> + /*
> + * This is only called when something went wrong elsewhere, so
> + * we don't really care about the data. Just disable the
> + * channel. We still have to poll the channel enable bit due
> + * to AHB/HSB limitations.
> + */
> + spin_lock_irqsave(&atchan->lock, flags);
>
> - /* Flush all pending and queued descriptors */
> - list_for_each_entry_safe(desc, _desc, &list, desc_node)
> - atc_chain_complete(atchan, desc);
> + /* disabling channel: must also remove suspend state */
> + dma_writel(atdma, CHDR, AT_DMA_RES(chan_id) | atchan->mask);
>
> - clear_bit(ATC_IS_PAUSED, &atchan->status);
> - /* if channel dedicated to cyclic operations, free it */
> - clear_bit(ATC_IS_CYCLIC, &atchan->status);
> + /* confirm that this channel is disabled */
> + while (dma_readl(atdma, CHSR) & atchan->mask)
> + cpu_relax();
>
> - spin_unlock_irqrestore(&atchan->lock, flags);
> - } else if (cmd == DMA_SLAVE_CONFIG) {
> - return set_runtime_config(chan, (struct dma_slave_config *)arg);
> - } else {
> - return -ENXIO;
> - }
> + /* active_list entries will end up before queued entries */
> + list_splice_init(&atchan->queue, &list);
> + list_splice_init(&atchan->active_list, &list);
> +
> + /* Flush all pending and queued descriptors */
> + list_for_each_entry_safe(desc, _desc, &list, desc_node)
> + atc_chain_complete(atchan, desc);
> +
> + clear_bit(ATC_IS_PAUSED, &atchan->status);
> + /* if channel dedicated to cyclic operations, free it */
> + clear_bit(ATC_IS_CYCLIC, &atchan->status);
> +
> + spin_unlock_irqrestore(&atchan->lock, flags);
>
> return 0;
> }
> @@ -1505,7 +1527,10 @@ static int __init at_dma_probe(struct platform_device *pdev)
> /* controller can do slave DMA: can trigger cyclic transfers */
> dma_cap_set(DMA_CYCLIC, atdma->dma_common.cap_mask);
> atdma->dma_common.device_prep_dma_cyclic = atc_prep_dma_cyclic;
> - atdma->dma_common.device_control = atc_control;
> + atdma->dma_common.device_config = atc_config;
> + atdma->dma_common.device_pause = atc_pause;
> + atdma->dma_common.device_resume = atc_resume;
> + atdma->dma_common.device_terminate_all = atc_terminate_all;
> }
>
> dma_writel(atdma, EN, AT_DMA_ENABLE);
> @@ -1622,7 +1647,7 @@ static void atc_suspend_cyclic(struct at_dma_chan *atchan)
> if (!atc_chan_is_paused(atchan)) {
> dev_warn(chan2dev(chan),
> "cyclic channel not paused, should be done by channel user\n");
> - atc_control(chan, DMA_PAUSE, 0);
> + atc_pause(chan);
> }
>
> /* now preserve additional data for cyclic operations */
>


--
Nicolas Ferre

2014-11-03 12:35:09

by Maxime Ripard

[permalink] [raw]
Subject: Re: [PATCH v4 02/58] serial: at91: Use dmaengine_slave_config API

Hi Nicolas,

On Mon, Nov 03, 2014 at 12:12:44PM +0100, Nicolas Ferre wrote:
> On 28/10/2014 22:25, Maxime Ripard :
> > We are removing the dmaengine_device_control API, that shouldn't even have been
> > exposed in the first place. Change the callers to use the proper API.
> >
> > Signed-off-by: Maxime Ripard <[email protected]>
>
> Acked-by: Nicolas Ferre <[email protected]>
>
> Thanks, Maxime. Do I have to carry the patch myself or is it part of a
> series?

I think it would be better to keep it together with that series, since
we're removing dmaengine_device_control in a later patch. If that's OK
with you, of course.

Maxime

--
Maxime Ripard, Free Electrons
Embedded Linux, Kernel and Android engineering
http://free-electrons.com



2014-11-03 17:46:42

by Nicolas Ferre

[permalink] [raw]
Subject: Re: [PATCH v4 02/58] serial: at91: Use dmaengine_slave_config API

On 03/11/2014 13:33, Maxime Ripard :
> Hi Nicolas,
>
> On Mon, Nov 03, 2014 at 12:12:44PM +0100, Nicolas Ferre wrote:
>> On 28/10/2014 22:25, Maxime Ripard :
>>> We are removing the dmaengine_device_control API, that shouldn't even have been
>>> exposed in the first place. Change the callers to use the proper API.
>>>
>>> Signed-off-by: Maxime Ripard <[email protected]>
>>
>> Acked-by: Nicolas Ferre <[email protected]>
>>
>> Thanks, Maxime. Do I have to carry the patch myself or is it part of a
>> series?
>
> I think it would be better to keep it together with that serie, since
> we're removing dmaengine_device_control in a latter patch. If it's ok
> for you of course.

Fair enough, keep it in.

Thanks Maxime, bye,
--
Nicolas Ferre

2014-11-06 14:35:07

by Maxime Ripard

[permalink] [raw]
Subject: Re: [PATCH v4 00/58] dmaengine: Implement generic slave capabilities retrieval

Hi Vinod,

On Tue, Oct 28, 2014 at 10:25:15PM +0100, Maxime Ripard wrote:
> Hi,
>
> As we discussed a couple of weeks ago, this is the third attempt at
> creating a generic behaviour for slave capabilities retrieval so that
> generic layers using dmaengine can actually rely on that.
>
> That has been done mostly through two steps: by moving out the
> sub-commands of the device_control callback, so that the dmaengine
> core can then infer from that wether a sub-command is implemented, and
> then by moving the slave properties, such as the supported buswidth,
> to the structure dma_device itself.
>
> Comments are as usual appreciated!

How can we move forward on this?

I didn't have any comments on this version, and gathered quite a lot
of Acked-by already.

Do you want me to rebase on top of your current next branch and send
you a pull request?

Thanks,
Maxime

--
Maxime Ripard, Free Electrons
Embedded Linux, Kernel and Android engineering
http://free-electrons.com



2014-11-12 11:24:42

by Vinod Koul

[permalink] [raw]
Subject: Re: [PATCH v4 00/58] dmaengine: Implement generic slave capabilities retrieval

On Thu, Nov 06, 2014 at 03:33:49PM +0100, Maxime Ripard wrote:
> Hi Vinod,
>
> On Tue, Oct 28, 2014 at 10:25:15PM +0100, Maxime Ripard wrote:
> > Hi,
> >
> > As we discussed a couple of weeks ago, this is the third attempt at
> > creating a generic behaviour for slave capabilities retrieval so that
> > generic layers using dmaengine can actually rely on that.
> >
> > That has been done mostly through two steps: by moving out the
> > sub-commands of the device_control callback, so that the dmaengine
> > core can then infer from that wether a sub-command is implemented, and
> > then by moving the slave properties, such as the supported buswidth,
> > to the structure dma_device itself.
> >
> > Comments are as usual appreciated!
>
> How can we move forward on this?
>
> I didn't have any comments on this version, and gathered quite a lot
> of Acked-by already.
>
> Do you want me to rebase on top of your current next branch and send
> you a pull request?

Hi Maxime,

Thanks for the huge cleanup work

I quickly looked through the series and it looks okay. I will do a detailed
review in the next couple of days and then host it on a topic branch so that
Feng's robot can test it before merging it.

Thanks

--
~Vinod



2014-11-12 13:20:06

by Maxime Ripard

[permalink] [raw]
Subject: Re: [PATCH v4 00/58] dmaengine: Implement generic slave capabilities retrieval

Hi,

On Wed, Nov 12, 2014 at 04:55:29PM +0530, Vinod Koul wrote:
> On Thu, Nov 06, 2014 at 03:33:49PM +0100, Maxime Ripard wrote:
> > Hi Vinod,
> >
> > On Tue, Oct 28, 2014 at 10:25:15PM +0100, Maxime Ripard wrote:
> > > Hi,
> > >
> > > As we discussed a couple of weeks ago, this is the third attempt at
> > > creating a generic behaviour for slave capabilities retrieval so that
> > > generic layers using dmaengine can actually rely on that.
> > >
> > > That has been done mostly through two steps: by moving out the
> > > sub-commands of the device_control callback, so that the dmaengine
> > > core can then infer from that wether a sub-command is implemented, and
> > > then by moving the slave properties, such as the supported buswidth,
> > > to the structure dma_device itself.
> > >
> > > Comments are as usual appreciated!
> >
> > How can we move forward on this?
> >
> > I didn't have any comments on this version, and gathered quite a lot
> > of Acked-by already.
> >
> > Do you want me to rebase on top of your current next branch and send
> > you a pull request?
>
> Hi Maxime,
>
> Thanks for the huge cleanup work
>
> I quickly looked thru the series and looks okay. I will do a detailed review
> in next couple of days and then host it on a topic branch so that Feng's
> robot can test it before merging it.

I know that it will break, because of Atmel's XDMAC driver, which
has been merged since. This is why I mentioned rebasing it ;)

FWIW, the branch I was using for this series was published about three
weeks ago, and my git repo is built by Feng's bot, so there shouldn't
be any compile issues.

Maxime

--
Maxime Ripard, Free Electrons
Embedded Linux, Kernel and Android engineering
http://free-electrons.com



2014-11-13 15:35:23

by Vinod Koul

[permalink] [raw]
Subject: Re: [PATCH v4 00/58] dmaengine: Implement generic slave capabilities retrieval

On Wed, Nov 12, 2014 at 02:15:02PM +0100, Maxime Ripard wrote:
> Hi,
>
> On Wed, Nov 12, 2014 at 04:55:29PM +0530, Vinod Koul wrote:
> > On Thu, Nov 06, 2014 at 03:33:49PM +0100, Maxime Ripard wrote:
> > > Hi Vinod,
> > >
> > > On Tue, Oct 28, 2014 at 10:25:15PM +0100, Maxime Ripard wrote:
> > > > Hi,
> > > >
> > > > As we discussed a couple of weeks ago, this is the third attempt at
> > > > creating a generic behaviour for slave capabilities retrieval so that
> > > > generic layers using dmaengine can actually rely on that.
> > > >
> > > > That has been done mostly through two steps: by moving out the
> > > > sub-commands of the device_control callback, so that the dmaengine
> > > > core can then infer from that wether a sub-command is implemented, and
> > > > then by moving the slave properties, such as the supported buswidth,
> > > > to the structure dma_device itself.
> > > >
> > > > Comments are as usual appreciated!
> > >
> > > How can we move forward on this?
> > >
> > > I didn't have any comments on this version, and gathered quite a lot
> > > of Acked-by already.
> > >
> > > Do you want me to rebase on top of your current next branch and send
> > > you a pull request?
> >
> > Hi Maxime,
> >
> > Thanks for the huge cleanup work
> >
> > I quickly looked thru the series and looks okay. I will do a detailed review
> > in next couple of days and then host it on a topic branch so that Feng's
> > robot can test it before merging it.
>
> I know that it will break, because of the Atmel's XDMAC driver that
> has been merged since. This is why I mentionned rebasing it ;)
Okay, then please resend :)
I will keep a couple of days reserved next week for it so that we can merge it
quickly.

>
> FWIW, the branch I was using for this serie has been published like 3
> weeks ago, and my git repo is built by Feng's bot, so there shouldn't
> be any compiling issues.
That's good indeed :)

--
~Vinod

