2020-01-23 14:05:19

by Tudor Ambarus

Subject: [PATCH 01/10] dmaengine: at_hdmac: Substitute kzalloc with kmalloc

From: Tudor Ambarus <[email protected]>

All members of the structure are initialized later in the function,
so there is no need to use kzalloc().
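
As a minimal illustration of the pattern (generic names, not the actual
at_hdmac structures): zeroing is redundant when every member is assigned
right after the allocation, so plain kmalloc() is enough.

struct example_slave {
	struct device	*dma_dev;
	u32		cfg;
};

static struct example_slave *example_slave_alloc(struct device *dev, u32 cfg)
{
	struct example_slave *slave;

	slave = kmalloc(sizeof(*slave), GFP_KERNEL);
	if (!slave)
		return NULL;

	/* every field is initialized explicitly, nothing relies on zeroing */
	slave->dma_dev = dev;
	slave->cfg = cfg;

	return slave;
}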

Signed-off-by: Tudor Ambarus <[email protected]>
---
drivers/dma/at_hdmac.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/dma/at_hdmac.c b/drivers/dma/at_hdmac.c
index 672c73b4a2d4..cad6dcd9cfb5 100644
--- a/drivers/dma/at_hdmac.c
+++ b/drivers/dma/at_hdmac.c
@@ -1671,7 +1671,7 @@ static struct dma_chan *at_dma_xlate(struct of_phandle_args *dma_spec,
dma_cap_zero(mask);
dma_cap_set(DMA_SLAVE, mask);

- atslave = kzalloc(sizeof(*atslave), GFP_KERNEL);
+ atslave = kmalloc(sizeof(*atslave), GFP_KERNEL);
if (!atslave)
return NULL;

--
2.23.0


2020-01-23 14:05:26

by Tudor Ambarus

Subject: [PATCH 02/10] dmaengine: at_hdmac: Drop locking in at_hdmac_alloc_chan_resources()

From: Tudor Ambarus <[email protected]>

There is no need for locking in device_alloc_chan_resources();
the DMA core takes care of it by holding dma_list_mutex around
the DMA devices.
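
A minimal sketch of the resulting shape, with generic names standing in
for the at_hdmac ones; the point is only that the free list can be filled
without taking the channel lock, since the dmaengine core serializes
.device_alloc_chan_resources():

static int example_alloc_chan_resources(struct dma_chan *chan)
{
	struct example_chan *echan = to_example_chan(chan);
	struct example_desc *desc;
	int i;

	/* No channel lock needed: the core holds dma_list_mutex while
	 * channels are requested, so nothing races with this path. */
	for (i = 0; i < INIT_NR_DESC_PER_CHANNEL; i++) {
		desc = example_alloc_desc(chan, GFP_KERNEL);
		if (!desc)
			break;
		list_add_tail(&desc->desc_node, &echan->free_list);
	}

	dma_cookie_init(chan);

	return i;
}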

Signed-off-by: Tudor Ambarus <[email protected]>
---
drivers/dma/at_hdmac.c | 7 +------
1 file changed, 1 insertion(+), 6 deletions(-)

diff --git a/drivers/dma/at_hdmac.c b/drivers/dma/at_hdmac.c
index cad6dcd9cfb5..301bae45cf8d 100644
--- a/drivers/dma/at_hdmac.c
+++ b/drivers/dma/at_hdmac.c
@@ -1542,10 +1542,8 @@ static int atc_alloc_chan_resources(struct dma_chan *chan)
struct at_dma *atdma = to_at_dma(chan->device);
struct at_desc *desc;
struct at_dma_slave *atslave;
- unsigned long flags;
int i;
u32 cfg;
- LIST_HEAD(tmp_list);

dev_vdbg(chan2dev(chan), "alloc_chan_resources\n");

@@ -1583,14 +1581,11 @@ static int atc_alloc_chan_resources(struct dma_chan *chan)
"Only %d initial descriptors\n", i);
break;
}
- list_add_tail(&desc->desc_node, &tmp_list);
+ list_add_tail(&desc->desc_node, &atchan->free_list);
}

- spin_lock_irqsave(&atchan->lock, flags);
atchan->descs_allocated = i;
- list_splice(&tmp_list, &atchan->free_list);
dma_cookie_init(chan);
- spin_unlock_irqrestore(&atchan->lock, flags);

/* channel parameters */
channel_writel(atchan, CFG, cfg);
--
2.23.0

2020-01-23 14:05:27

by Tudor Ambarus

Subject: [PATCH 07/10] dmaengine: at_xdmac: Drop always true check

From: Tudor Ambarus <[email protected]>

The code in question is already in the else branch of
'if (at_xdmac_chan_is_cyclic(atchan))', so the condition it re-tests is
always true; drop the redundant check.

Signed-off-by: Tudor Ambarus <[email protected]>
---
drivers/dma/at_xdmac.c | 8 +++-----
1 file changed, 3 insertions(+), 5 deletions(-)

diff --git a/drivers/dma/at_xdmac.c b/drivers/dma/at_xdmac.c
index f71c9f77d405..3d6e84def7a6 100644
--- a/drivers/dma/at_xdmac.c
+++ b/drivers/dma/at_xdmac.c
@@ -1656,11 +1656,9 @@ static void at_xdmac_tasklet(unsigned long data)
at_xdmac_remove_xfer(atchan, desc);
spin_unlock(&atchan->lock);

- if (!at_xdmac_chan_is_cyclic(atchan)) {
- dma_cookie_complete(txd);
- if (txd->flags & DMA_PREP_INTERRUPT)
- dmaengine_desc_get_callback_invoke(txd, NULL);
- }
+ dma_cookie_complete(txd);
+ if (txd->flags & DMA_PREP_INTERRUPT)
+ dmaengine_desc_get_callback_invoke(txd, NULL);

dma_run_dependencies(txd);

--
2.23.0

2020-01-23 14:05:31

by Tudor Ambarus

Subject: [PATCH 04/10] dmaengine: at_hdmac: Drop description for a not defined parameter

From: Tudor Ambarus <[email protected]>

The @client parameter is not part of atc_alloc_chan_resources()'s
signature; its description is probably a leftover, drop it.

Signed-off-by: Tudor Ambarus <[email protected]>
---
drivers/dma/at_hdmac.c | 1 -
1 file changed, 1 deletion(-)

diff --git a/drivers/dma/at_hdmac.c b/drivers/dma/at_hdmac.c
index e17b75075904..44d998bc894b 100644
--- a/drivers/dma/at_hdmac.c
+++ b/drivers/dma/at_hdmac.c
@@ -1523,7 +1523,6 @@ static void atc_issue_pending(struct dma_chan *chan)
/**
* atc_alloc_chan_resources - allocate resources for DMA channel
* @chan: allocate descriptor resources for this channel
- * @client: current client requesting the channel be ready for requests
*
* return - the number of allocated descriptors
*/
--
2.23.0

2020-01-23 14:05:34

by Tudor Ambarus

Subject: [PATCH 08/10] dmaengine: at_xdmac: Drop locking in at_xdmac_alloc_chan_resources()

From: Tudor Ambarus <[email protected]>

There is no need for locking in device_alloc_chan_resources();
the DMA core takes care of it by holding dma_list_mutex around
the DMA devices.

Signed-off-by: Tudor Ambarus <[email protected]>
---
drivers/dma/at_xdmac.c | 11 ++---------
1 file changed, 2 insertions(+), 9 deletions(-)

diff --git a/drivers/dma/at_xdmac.c b/drivers/dma/at_xdmac.c
index 3d6e84def7a6..8fb01bc90ba7 100644
--- a/drivers/dma/at_xdmac.c
+++ b/drivers/dma/at_xdmac.c
@@ -1820,22 +1820,17 @@ static int at_xdmac_alloc_chan_resources(struct dma_chan *chan)
struct at_xdmac_chan *atchan = to_at_xdmac_chan(chan);
struct at_xdmac_desc *desc;
int i;
- unsigned long flags;
-
- spin_lock_irqsave(&atchan->lock, flags);

if (at_xdmac_chan_is_enabled(atchan)) {
dev_err(chan2dev(chan),
"can't allocate channel resources (channel enabled)\n");
- i = -EIO;
- goto spin_unlock;
+ return -EIO;
}

if (!list_empty(&atchan->free_descs_list)) {
dev_err(chan2dev(chan),
"can't allocate channel resources (channel not free from a previous use)\n");
- i = -EIO;
- goto spin_unlock;
+ return -EIO;
}

for (i = 0; i < init_nr_desc_per_channel; i++) {
@@ -1852,8 +1847,6 @@ static int at_xdmac_alloc_chan_resources(struct dma_chan *chan)

dev_dbg(chan2dev(chan), "%s: allocated %d descriptors\n", __func__, i);

-spin_unlock:
- spin_unlock_irqrestore(&atchan->lock, flags);
return i;
}

--
2.23.0

2020-01-23 14:05:37

by Tudor Ambarus

Subject: [PATCH 05/10] dmaengine: at_hdmac: Switch atomic allocations to GFP_NOWAIT

From: Tudor Ambarus <[email protected]>

GFP_NOWAIT avoids sleeping without depleting the emergency pool.
The rationale is that in most cases a DMA device is either
offloading an operation that will automatically fall back to
software when the descriptor allocation fails, or we can simply poll
and wait for the DMA device to release some in-use descriptors.
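
A sketch of the intended usage, with hypothetical helper names:
GFP_NOWAIT fails fast instead of sleeping and, unlike GFP_ATOMIC, does
not dip into the emergency reserves.

static struct dma_async_tx_descriptor *
example_prep_dma_memcpy(struct dma_chan *chan, dma_addr_t dst, dma_addr_t src,
			size_t len, unsigned long flags)
{
	/* Prep callbacks may run in atomic context, so no sleeping
	 * allocation; on failure the client polls for free descriptors
	 * or falls back to a CPU copy. */
	struct example_desc *desc = example_desc_get(chan, GFP_NOWAIT);

	if (!desc)
		return NULL;

	example_desc_fill(desc, dst, src, len, flags);

	return &desc->txd;
}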

Signed-off-by: Tudor Ambarus <[email protected]>
---
drivers/dma/at_hdmac.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/drivers/dma/at_hdmac.c b/drivers/dma/at_hdmac.c
index 44d998bc894b..8e8e04bd1b28 100644
--- a/drivers/dma/at_hdmac.c
+++ b/drivers/dma/at_hdmac.c
@@ -147,7 +147,7 @@ static struct at_desc *atc_desc_get(struct at_dma_chan *atchan)

/* no more descriptor available in initial pool: create one more */
if (!ret)
- ret = atc_alloc_descriptor(&atchan->chan_common, GFP_ATOMIC);
+ ret = atc_alloc_descriptor(&atchan->chan_common, GFP_NOWAIT);

return ret;
}
@@ -931,7 +931,7 @@ atc_prep_dma_memset(struct dma_chan *chan, dma_addr_t dest, int value,
return NULL;
}

- vaddr = dma_pool_alloc(atdma->memset_pool, GFP_ATOMIC, &paddr);
+ vaddr = dma_pool_alloc(atdma->memset_pool, GFP_NOWAIT, &paddr);
if (!vaddr) {
dev_err(chan2dev(chan), "%s: couldn't allocate buffer\n",
__func__);
@@ -989,7 +989,7 @@ atc_prep_dma_memset_sg(struct dma_chan *chan,
return NULL;
}

- vaddr = dma_pool_alloc(atdma->memset_pool, GFP_ATOMIC, &paddr);
+ vaddr = dma_pool_alloc(atdma->memset_pool, GFP_NOWAIT, &paddr);
if (!vaddr) {
dev_err(chan2dev(chan), "%s: couldn't allocate buffer\n",
__func__);
--
2.23.0

2020-01-23 14:06:42

by Tudor Ambarus

Subject: [PATCH 10/10] dmaengine: at_xdmac: Fix locking in tasklet

From: Tudor Ambarus <[email protected]>

Tasklets run with all interrupts enabled. This means that we should
replace the (already present) spin_lock_irqsave() uses in the tasklet
path with spin_lock_irq(), which still protects against being interrupted
by an IRQ that tries to take the same lock (via calls to
device_prep_dma_*, for example).

spin_lock() and spin_lock_bh() in tasklets are not enough to protect
against IRQs; update these to spin_lock_irq() as well.

at_xdmac_advance_work() can be called with all interrupts enabled (when
called from the tasklet) or with interrupts disabled (when called from
at_xdmac_issue_pending()). Move the locking to the callers so that
spin_lock_irq() and spin_lock_irqsave() can be used for the respective
cases.
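
A sketch of the caller-holds-the-lock split this patch implements, with
generic names: the tasklet knows interrupts are enabled and uses
spin_lock_irq(), while issue_pending() may be called from any context
and uses spin_lock_irqsave().

/* Caller holds echan->lock with interrupts disabled. */
static void example_advance_work(struct example_chan *echan)
{
	if (!example_chan_is_enabled(echan))
		example_start_next_xfer(echan);
}

static void example_tasklet(unsigned long data)
{
	struct example_chan *echan = (struct example_chan *)data;

	/* Tasklets run with interrupts enabled: disable them explicitly. */
	spin_lock_irq(&echan->lock);
	example_advance_work(echan);
	spin_unlock_irq(&echan->lock);
}

static void example_issue_pending(struct dma_chan *chan)
{
	struct example_chan *echan = to_example_chan(chan);
	unsigned long flags;

	/* May be called with interrupts on or off: save and restore. */
	spin_lock_irqsave(&echan->lock, flags);
	example_advance_work(echan);
	spin_unlock_irqrestore(&echan->lock, flags);
}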

Signed-off-by: Tudor Ambarus <[email protected]>
---
drivers/dma/at_xdmac.c | 23 ++++++++++++-----------
1 file changed, 12 insertions(+), 11 deletions(-)

diff --git a/drivers/dma/at_xdmac.c b/drivers/dma/at_xdmac.c
index 31321da69ae6..bb0eaf38b594 100644
--- a/drivers/dma/at_xdmac.c
+++ b/drivers/dma/at_xdmac.c
@@ -1543,9 +1543,6 @@ static void at_xdmac_remove_xfer(struct at_xdmac_chan *atchan,
static void at_xdmac_advance_work(struct at_xdmac_chan *atchan)
{
struct at_xdmac_desc *desc;
- unsigned long flags;
-
- spin_lock_irqsave(&atchan->lock, flags);

/*
* If channel is enabled, do nothing, advance_work will be triggered
@@ -1559,8 +1556,6 @@ static void at_xdmac_advance_work(struct at_xdmac_chan *atchan)
if (!desc->active_xfer)
at_xdmac_start_xfer(atchan, desc);
}
-
- spin_unlock_irqrestore(&atchan->lock, flags);
}

static void at_xdmac_handle_cyclic(struct at_xdmac_chan *atchan)
@@ -1596,7 +1591,7 @@ static void at_xdmac_handle_error(struct at_xdmac_chan *atchan)
if (atchan->irq_status & AT_XDMAC_CIS_ROIS)
dev_err(chan2dev(&atchan->chan), "request overflow error!!!");

- spin_lock_bh(&atchan->lock);
+ spin_lock_irq(&atchan->lock);

/* Channel must be disabled first as it's not done automatically */
at_xdmac_write(atxdmac, AT_XDMAC_GD, atchan->mask);
@@ -1607,7 +1602,7 @@ static void at_xdmac_handle_error(struct at_xdmac_chan *atchan)
struct at_xdmac_desc,
xfer_node);

- spin_unlock_bh(&atchan->lock);
+ spin_unlock_irq(&atchan->lock);

/* Print bad descriptor's details if needed */
dev_dbg(chan2dev(&atchan->chan),
@@ -1640,21 +1635,21 @@ static void at_xdmac_tasklet(unsigned long data)
if (atchan->irq_status & error_mask)
at_xdmac_handle_error(atchan);

- spin_lock(&atchan->lock);
+ spin_lock_irq(&atchan->lock);
desc = list_first_entry(&atchan->xfers_list,
struct at_xdmac_desc,
xfer_node);
dev_vdbg(chan2dev(&atchan->chan), "%s: desc 0x%p\n", __func__, desc);
if (!desc->active_xfer) {
dev_err(chan2dev(&atchan->chan), "Xfer not active: exiting");
- spin_unlock(&atchan->lock);
+ spin_unlock_irq(&atchan->lock);
return;
}

txd = &desc->tx_dma_desc;

at_xdmac_remove_xfer(atchan, desc);
- spin_unlock(&atchan->lock);
+ spin_unlock_irq(&atchan->lock);

dma_cookie_complete(txd);
if (txd->flags & DMA_PREP_INTERRUPT)
@@ -1662,7 +1657,9 @@ static void at_xdmac_tasklet(unsigned long data)

dma_run_dependencies(txd);

+ spin_lock_irq(&atchan->lock);
at_xdmac_advance_work(atchan);
+ spin_unlock_irq(&atchan->lock);
}
}

@@ -1723,11 +1720,15 @@ static irqreturn_t at_xdmac_interrupt(int irq, void *dev_id)
static void at_xdmac_issue_pending(struct dma_chan *chan)
{
struct at_xdmac_chan *atchan = to_at_xdmac_chan(chan);
+ unsigned long flags;

dev_dbg(chan2dev(&atchan->chan), "%s\n", __func__);

- if (!at_xdmac_chan_is_cyclic(atchan))
+ if (!at_xdmac_chan_is_cyclic(atchan)) {
+ spin_lock_irqsave(&atchan->lock, flags);
at_xdmac_advance_work(atchan);
+ spin_unlock_irqrestore(&atchan->lock, flags);
+ }

return;
}
--
2.23.0

2020-01-23 14:07:03

by Tudor Ambarus

Subject: [PATCH 09/10] dmaengine: at_xdmac: GFP_KERNEL for user that can sleep

From: Tudor Ambarus <[email protected]>

device_alloc_chan_resources() can sleep, so use the GFP_KERNEL flag.

Signed-off-by: Tudor Ambarus <[email protected]>
---
drivers/dma/at_xdmac.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/dma/at_xdmac.c b/drivers/dma/at_xdmac.c
index 8fb01bc90ba7..31321da69ae6 100644
--- a/drivers/dma/at_xdmac.c
+++ b/drivers/dma/at_xdmac.c
@@ -1834,7 +1834,7 @@ static int at_xdmac_alloc_chan_resources(struct dma_chan *chan)
}

for (i = 0; i < init_nr_desc_per_channel; i++) {
- desc = at_xdmac_alloc_desc(chan, GFP_ATOMIC);
+ desc = at_xdmac_alloc_desc(chan, GFP_KERNEL);
if (!desc) {
dev_warn(chan2dev(chan),
"only %d descriptors have been allocated\n", i);
--
2.23.0

2020-01-23 14:07:10

by Tudor Ambarus

Subject: [PATCH 03/10] dmaengine: at_hdmac: Return err in case the chan is not free at alloc res time

From: Tudor Ambarus <[email protected]>

Having descriptors already allocated for the channel at
device_alloc_chan_resources() time is a sign that the channel was not
properly freed. Return an error and add a debug message in case the
channel is not free from a previous use.

atchan->descs_allocated becomes useless, so get rid of it. Also drop
the error message in atc_desc_get(), because keeping it would now
require an extra if statement; the callers of atc_desc_get() already
print error messages when the callee fails, so nothing is lost.

Signed-off-by: Tudor Ambarus <[email protected]>
---
drivers/dma/at_hdmac.c | 31 ++++++++-----------------------
drivers/dma/at_hdmac_regs.h | 2 --
2 files changed, 8 insertions(+), 25 deletions(-)

diff --git a/drivers/dma/at_hdmac.c b/drivers/dma/at_hdmac.c
index 301bae45cf8d..e17b75075904 100644
--- a/drivers/dma/at_hdmac.c
+++ b/drivers/dma/at_hdmac.c
@@ -146,17 +146,8 @@ static struct at_desc *atc_desc_get(struct at_dma_chan *atchan)
"scanned %u descriptors on freelist\n", i);

/* no more descriptor available in initial pool: create one more */
- if (!ret) {
+ if (!ret)
ret = atc_alloc_descriptor(&atchan->chan_common, GFP_ATOMIC);
- if (ret) {
- spin_lock_irqsave(&atchan->lock, flags);
- atchan->descs_allocated++;
- spin_unlock_irqrestore(&atchan->lock, flags);
- } else {
- dev_err(chan2dev(&atchan->chan_common),
- "not enough descriptors available\n");
- }
- }

return ret;
}
@@ -1553,6 +1544,11 @@ static int atc_alloc_chan_resources(struct dma_chan *chan)
return -EIO;
}

+ if (!list_empty(&atchan->free_list)) {
+ dev_dbg(chan2dev(chan), "can't allocate channel resources (channel not freed from a previous use)\n");
+ return -EIO;
+ }
+
cfg = ATC_DEFAULT_CFG;

atslave = chan->private;
@@ -1568,11 +1564,6 @@ static int atc_alloc_chan_resources(struct dma_chan *chan)
cfg = atslave->cfg;
}

- /* have we already been set up?
- * reconfigure channel but no need to reallocate descriptors */
- if (!list_empty(&atchan->free_list))
- return atchan->descs_allocated;
-
/* Allocate initial pool of descriptors */
for (i = 0; i < init_nr_desc_per_channel; i++) {
desc = atc_alloc_descriptor(chan, GFP_KERNEL);
@@ -1584,17 +1575,15 @@ static int atc_alloc_chan_resources(struct dma_chan *chan)
list_add_tail(&desc->desc_node, &atchan->free_list);
}

- atchan->descs_allocated = i;
dma_cookie_init(chan);

/* channel parameters */
channel_writel(atchan, CFG, cfg);

dev_dbg(chan2dev(chan),
- "alloc_chan_resources: allocated %d descriptors\n",
- atchan->descs_allocated);
+ "alloc_chan_resources: allocated %d descriptors\n", i);

- return atchan->descs_allocated;
+ return i;
}

/**
@@ -1608,9 +1597,6 @@ static void atc_free_chan_resources(struct dma_chan *chan)
struct at_desc *desc, *_desc;
LIST_HEAD(list);

- dev_dbg(chan2dev(chan), "free_chan_resources: (descs allocated=%u)\n",
- atchan->descs_allocated);
-
/* ASSERT: channel is idle */
BUG_ON(!list_empty(&atchan->active_list));
BUG_ON(!list_empty(&atchan->queue));
@@ -1623,7 +1609,6 @@ static void atc_free_chan_resources(struct dma_chan *chan)
dma_pool_free(atdma->dma_desc_pool, desc, desc->txd.phys);
}
list_splice_init(&atchan->free_list, &list);
- atchan->descs_allocated = 0;
atchan->status = 0;

/*
diff --git a/drivers/dma/at_hdmac_regs.h b/drivers/dma/at_hdmac_regs.h
index fe8a5853ec49..397692e937b3 100644
--- a/drivers/dma/at_hdmac_regs.h
+++ b/drivers/dma/at_hdmac_regs.h
@@ -243,7 +243,6 @@ enum atc_status {
* @active_list: list of descriptors dmaengine is being running on
* @queue: list of descriptors ready to be submitted to engine
* @free_list: list of descriptors usable by the channel
- * @descs_allocated: records the actual size of the descriptor pool
*/
struct at_dma_chan {
struct dma_chan chan_common;
@@ -264,7 +263,6 @@ struct at_dma_chan {
struct list_head active_list;
struct list_head queue;
struct list_head free_list;
- unsigned int descs_allocated;
};

#define channel_readl(atchan, name) \
--
2.23.0

2020-01-23 14:07:30

by Tudor Ambarus

Subject: [PATCH 06/10] dmaengine: at_hdmac: Fix deadlocks

From: Tudor Ambarus <[email protected]>

Fix the following deadlocks:
1/ atc_handle_cyclic() and atc_chain_complete() called
dmaengine_desc_get_callback_invoke() while wrongly holding
atchan->lock. Clients can set the callback to
dmaengine_terminate_sync(), which ends up trying to take the same
lock, hence a deadlock.
2/ dma_run_dependencies() was called with atchan->lock held, but the
method calls device_issue_pending(), which tries to take the same
lock, so a deadlock occurred there as well.

The driver must not hold the lock when invoking the callback or when
running dependencies. Releasing the spinlock from within a called
function right before invoking the callback is not a nice solution
either: the called functions would become non-atomic even though they
are entered from an atomic region. Thus the lock is now taken in the
child routines wherever it is needed.
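
A sketch of the resulting shape of the completion path (generic type
names, mirroring the reworked atc_chain_complete()): the list
bookkeeping stays under the lock, while the client callback and the
dependency run happen outside it.

static void example_chain_complete(struct example_chan *echan,
				   struct example_desc *desc)
{
	struct dma_async_tx_descriptor *txd = &desc->txd;
	unsigned long flags;

	/* bookkeeping under the channel lock */
	spin_lock_irqsave(&echan->lock, flags);
	dma_cookie_complete(txd);
	list_move(&desc->desc_node, &echan->free_list);
	spin_unlock_irqrestore(&echan->lock, flags);

	/* Outside the critical section: the client callback may call
	 * dmaengine_terminate_sync() and dma_run_dependencies() may call
	 * device_issue_pending(), both of which take echan->lock again. */
	dmaengine_desc_get_callback_invoke(txd, NULL);
	dma_run_dependencies(txd);
}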

Signed-off-by: Tudor Ambarus <[email protected]>
---
drivers/dma/at_hdmac.c | 74 ++++++++++++++++++++++--------------------
1 file changed, 39 insertions(+), 35 deletions(-)

diff --git a/drivers/dma/at_hdmac.c b/drivers/dma/at_hdmac.c
index 8e8e04bd1b28..73a20780744b 100644
--- a/drivers/dma/at_hdmac.c
+++ b/drivers/dma/at_hdmac.c
@@ -426,17 +426,19 @@ static int atc_get_bytes_left(struct dma_chan *chan, dma_cookie_t cookie)
* atc_chain_complete - finish work for one transaction chain
* @atchan: channel we work on
* @desc: descriptor at the head of the chain we want do complete
- *
- * Called with atchan->lock held and bh disabled */
+ */
static void
atc_chain_complete(struct at_dma_chan *atchan, struct at_desc *desc)
{
struct dma_async_tx_descriptor *txd = &desc->txd;
struct at_dma *atdma = to_at_dma(atchan->chan_common.device);
+ unsigned long flags;

dev_vdbg(chan2dev(&atchan->chan_common),
"descriptor %u complete\n", txd->cookie);

+ spin_lock_irqsave(&atchan->lock, flags);
+
/* mark the descriptor as complete for non cyclic cases only */
if (!atc_chan_is_cyclic(atchan))
dma_cookie_complete(txd);
@@ -453,16 +455,13 @@ atc_chain_complete(struct at_dma_chan *atchan, struct at_desc *desc)
/* move myself to free_list */
list_move(&desc->desc_node, &atchan->free_list);

+ spin_unlock_irqrestore(&atchan->lock, flags);
+
dma_descriptor_unmap(txd);
/* for cyclic transfers,
* no need to replay callback function while stopping */
- if (!atc_chan_is_cyclic(atchan)) {
- /*
- * The API requires that no submissions are done from a
- * callback, so we don't need to drop the lock here
- */
+ if (!atc_chan_is_cyclic(atchan))
dmaengine_desc_get_callback_invoke(txd, NULL);
- }

dma_run_dependencies(txd);
}
@@ -480,9 +479,12 @@ static void atc_complete_all(struct at_dma_chan *atchan)
{
struct at_desc *desc, *_desc;
LIST_HEAD(list);
+ unsigned long flags;

dev_vdbg(chan2dev(&atchan->chan_common), "complete all\n");

+ spin_lock_irqsave(&atchan->lock, flags);
+
/*
* Submit queued descriptors ASAP, i.e. before we go through
* the completed ones.
@@ -494,6 +496,8 @@ static void atc_complete_all(struct at_dma_chan *atchan)
/* empty queue list by moving descriptors (if any) to active_list */
list_splice_init(&atchan->queue, &atchan->active_list);

+ spin_unlock_irqrestore(&atchan->lock, flags);
+
list_for_each_entry_safe(desc, _desc, &list, desc_node)
atc_chain_complete(atchan, desc);
}
@@ -501,38 +505,44 @@ static void atc_complete_all(struct at_dma_chan *atchan)
/**
* atc_advance_work - at the end of a transaction, move forward
* @atchan: channel where the transaction ended
- *
- * Called with atchan->lock held and bh disabled
*/
static void atc_advance_work(struct at_dma_chan *atchan)
{
+ unsigned long flags;
+ int ret;
+
dev_vdbg(chan2dev(&atchan->chan_common), "advance_work\n");

- if (atc_chan_is_enabled(atchan))
+ spin_lock_irqsave(&atchan->lock, flags);
+ ret = atc_chan_is_enabled(atchan);
+ spin_unlock_irqrestore(&atchan->lock, flags);
+ if (ret)
return;

if (list_empty(&atchan->active_list) ||
- list_is_singular(&atchan->active_list)) {
- atc_complete_all(atchan);
- } else {
- atc_chain_complete(atchan, atc_first_active(atchan));
- /* advance work */
- atc_dostart(atchan, atc_first_active(atchan));
- }
+ list_is_singular(&atchan->active_list))
+ return atc_complete_all(atchan);
+
+ atc_chain_complete(atchan, atc_first_active(atchan));
+
+ /* advance work */
+ spin_lock_irqsave(&atchan->lock, flags);
+ atc_dostart(atchan, atc_first_active(atchan));
+ spin_unlock_irqrestore(&atchan->lock, flags);
}


/**
* atc_handle_error - handle errors reported by DMA controller
* @atchan: channel where error occurs
- *
- * Called with atchan->lock held and bh disabled
*/
static void atc_handle_error(struct at_dma_chan *atchan)
{
struct at_desc *bad_desc;
struct at_desc *child;
+ unsigned long flags;

+ spin_lock_irqsave(&atchan->lock, flags);
/*
* The descriptor currently at the head of the active list is
* broked. Since we don't have any way to report errors, we'll
@@ -564,6 +574,8 @@ static void atc_handle_error(struct at_dma_chan *atchan)
list_for_each_entry(child, &bad_desc->tx_list, desc_node)
atc_dump_lli(atchan, &child->lli);

+ spin_unlock_irqrestore(&atchan->lock, flags);
+
/* Pretend the descriptor completed successfully */
atc_chain_complete(atchan, bad_desc);
}
@@ -571,8 +583,6 @@ static void atc_handle_error(struct at_dma_chan *atchan)
/**
* atc_handle_cyclic - at the end of a period, run callback function
* @atchan: channel used for cyclic operations
- *
- * Called with atchan->lock held and bh disabled
*/
static void atc_handle_cyclic(struct at_dma_chan *atchan)
{
@@ -591,17 +601,14 @@ static void atc_handle_cyclic(struct at_dma_chan *atchan)
static void atc_tasklet(unsigned long data)
{
struct at_dma_chan *atchan = (struct at_dma_chan *)data;
- unsigned long flags;

- spin_lock_irqsave(&atchan->lock, flags);
if (test_and_clear_bit(ATC_IS_ERROR, &atchan->status))
- atc_handle_error(atchan);
- else if (atc_chan_is_cyclic(atchan))
- atc_handle_cyclic(atchan);
- else
- atc_advance_work(atchan);
+ return atc_handle_error(atchan);

- spin_unlock_irqrestore(&atchan->lock, flags);
+ if (atc_chan_is_cyclic(atchan))
+ return atc_handle_cyclic(atchan);
+
+ atc_advance_work(atchan);
}

static irqreturn_t at_dma_interrupt(int irq, void *dev_id)
@@ -1437,6 +1444,8 @@ static int atc_terminate_all(struct dma_chan *chan)
list_splice_init(&atchan->queue, &list);
list_splice_init(&atchan->active_list, &list);

+ spin_unlock_irqrestore(&atchan->lock, flags);
+
/* Flush all pending and queued descriptors */
list_for_each_entry_safe(desc, _desc, &list, desc_node)
atc_chain_complete(atchan, desc);
@@ -1445,8 +1454,6 @@ static int atc_terminate_all(struct dma_chan *chan)
/* if channel dedicated to cyclic operations, free it */
clear_bit(ATC_IS_CYCLIC, &atchan->status);

- spin_unlock_irqrestore(&atchan->lock, flags);
-
return 0;
}

@@ -1507,7 +1514,6 @@ atc_tx_status(struct dma_chan *chan,
static void atc_issue_pending(struct dma_chan *chan)
{
struct at_dma_chan *atchan = to_at_dma_chan(chan);
- unsigned long flags;

dev_vdbg(chan2dev(chan), "issue_pending\n");

@@ -1515,9 +1521,7 @@ static void atc_issue_pending(struct dma_chan *chan)
if (atc_chan_is_cyclic(atchan))
return;

- spin_lock_irqsave(&atchan->lock, flags);
atc_advance_work(atchan);
- spin_unlock_irqrestore(&atchan->lock, flags);
}

/**
--
2.23.0

2020-02-03 10:00:11

by Ludovic Desroches

Subject: Re: [PATCH 01/10] dmaengine: at_hdmac: Substitute kzalloc with kmalloc

On Thu, Jan 23, 2020 at 02:03:02PM +0000, Tudor Ambarus - M18064 wrote:
> From: Tudor Ambarus <[email protected]>
>
> All members of the structure are initialized below in the function,
> there is no need to use kzalloc.
>
> Signed-off-by: Tudor Ambarus <[email protected]>

Hi Tudor,

It sounds good to me. Thanks for the cleanup and deadlock fixes.

For the whole set of patches:
Acked-by: Ludovic Desroches <[email protected]>

Ludovic

> ---
> drivers/dma/at_hdmac.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/drivers/dma/at_hdmac.c b/drivers/dma/at_hdmac.c
> index 672c73b4a2d4..cad6dcd9cfb5 100644
> --- a/drivers/dma/at_hdmac.c
> +++ b/drivers/dma/at_hdmac.c
> @@ -1671,7 +1671,7 @@ static struct dma_chan *at_dma_xlate(struct of_phandle_args *dma_spec,
> dma_cap_zero(mask);
> dma_cap_set(DMA_SLAVE, mask);
>
> - atslave = kzalloc(sizeof(*atslave), GFP_KERNEL);
> + atslave = kmalloc(sizeof(*atslave), GFP_KERNEL);
> if (!atslave)
> return NULL;
>
> --
> 2.23.0

2020-02-25 05:58:37

by Vinod Koul

Subject: Re: [PATCH 01/10] dmaengine: at_hdmac: Substitute kzalloc with kmalloc

On 23-01-20, 14:03, [email protected] wrote:
> From: Tudor Ambarus <[email protected]>
>
> All members of the structure are initialized below in the function,
> there is no need to use kzalloc.

Applied, thanks

Please use cover letter for long series..

--
~Vinod