2024-03-08 21:00:52

by Sean Anderson

Subject: [PATCH 0/3] dma: xilinx_dpdma: Fix locking

This series fixes some locking problems with the xilinx dpdma driver. It
also adds some additional lockdep asserts to make catching such errors
easier.


Sean Anderson (3):
dma: xilinx_dpdma: Fix locking
dma: xilinx_dpdma: Remove unnecessary use of irqsave/restore
dma: Add lockdep asserts to virt-dma

drivers/dma/virt-dma.h | 10 ++++++++++
drivers/dma/xilinx/xilinx_dpdma.c | 23 ++++++++++++++---------
2 files changed, 24 insertions(+), 9 deletions(-)

--
2.35.1.1320.gc452695387.dirty



2024-03-08 21:01:01

by Sean Anderson

Subject: [PATCH 1/3] dma: xilinx_dpdma: Fix locking

There are several places where either chan->lock or chan->vchan.lock was
not held. Add appropriate locking. This fixes lockdep warnings like

[ 31.077578] ------------[ cut here ]------------
[ 31.077831] WARNING: CPU: 2 PID: 40 at drivers/dma/xilinx/xilinx_dpdma.c:834 xilinx_dpdma_chan_queue_transfer+0x274/0x5e0
[ 31.077953] Modules linked in:
[ 31.078019] CPU: 2 PID: 40 Comm: kworker/u12:1 Not tainted 6.6.20+ #98
[ 31.078102] Hardware name: xlnx,zynqmp (DT)
[ 31.078169] Workqueue: events_unbound deferred_probe_work_func
[ 31.078272] pstate: 600000c5 (nZCv daIF -PAN -UAO -TCO -DIT -SSBS BTYPE=--)
[ 31.078377] pc : xilinx_dpdma_chan_queue_transfer+0x274/0x5e0
[ 31.078473] lr : xilinx_dpdma_chan_queue_transfer+0x270/0x5e0
[ 31.078550] sp : ffffffc083bb2e10
[ 31.078590] x29: ffffffc083bb2e10 x28: 0000000000000000 x27: ffffff880165a168
[ 31.078754] x26: ffffff880164e920 x25: ffffff880164eab8 x24: ffffff880164d480
[ 31.078920] x23: ffffff880165a148 x22: ffffff880164e988 x21: 0000000000000000
[ 31.079132] x20: ffffffc082aa3000 x19: ffffff880164e880 x18: 0000000000000000
[ 31.079295] x17: 0000000000000000 x16: 0000000000000000 x15: 0000000000000000
[ 31.079453] x14: 0000000000000000 x13: ffffff8802263dc0 x12: 0000000000000001
[ 31.079613] x11: 0001ffc083bb2e34 x10: 0001ff880164e98f x9 : 0001ffc082aa3def
[ 31.079824] x8 : 0001ffc082aa3dec x7 : 0000000000000000 x6 : 0000000000000516
[ 31.079982] x5 : ffffffc7f8d43000 x4 : ffffff88003c9c40 x3 : ffffffffffffffff
[ 31.080147] x2 : ffffffc7f8d43000 x1 : 00000000000000c0 x0 : 0000000000000000
[ 31.080307] Call trace:
[ 31.080340] xilinx_dpdma_chan_queue_transfer+0x274/0x5e0
[ 31.080518] xilinx_dpdma_issue_pending+0x11c/0x120
[ 31.080595] zynqmp_disp_layer_update+0x180/0x3ac
[ 31.080712] zynqmp_dpsub_plane_atomic_update+0x11c/0x21c
[ 31.080825] drm_atomic_helper_commit_planes+0x20c/0x684
[ 31.080951] drm_atomic_helper_commit_tail+0x5c/0xb0
[ 31.081139] commit_tail+0x234/0x294
[ 31.081246] drm_atomic_helper_commit+0x1f8/0x210
[ 31.081363] drm_atomic_commit+0x100/0x140
[ 31.081477] drm_client_modeset_commit_atomic+0x318/0x384
[ 31.081634] drm_client_modeset_commit_locked+0x8c/0x24c
[ 31.081725] drm_client_modeset_commit+0x34/0x5c
[ 31.081812] __drm_fb_helper_restore_fbdev_mode_unlocked+0x104/0x168
[ 31.081899] drm_fb_helper_set_par+0x50/0x70
[ 31.081971] fbcon_init+0x538/0xc48
[ 31.082047] visual_init+0x16c/0x23c
[ 31.082207] do_bind_con_driver.isra.0+0x2d0/0x634
[ 31.082320] do_take_over_console+0x24c/0x33c
[ 31.082429] do_fbcon_takeover+0xbc/0x1b0
[ 31.082503] fbcon_fb_registered+0x2d0/0x34c
[ 31.082663] register_framebuffer+0x27c/0x38c
[ 31.082767] __drm_fb_helper_initial_config_and_unlock+0x5c0/0x91c
[ 31.082939] drm_fb_helper_initial_config+0x50/0x74
[ 31.083012] drm_fbdev_dma_client_hotplug+0xb8/0x108
[ 31.083115] drm_client_register+0xa0/0xf4
[ 31.083195] drm_fbdev_dma_setup+0xb0/0x1cc
[ 31.083293] zynqmp_dpsub_drm_init+0x45c/0x4e0
[ 31.083431] zynqmp_dpsub_probe+0x444/0x5e0
[ 31.083616] platform_probe+0x8c/0x13c
[ 31.083713] really_probe+0x258/0x59c
[ 31.083793] __driver_probe_device+0xc4/0x224
[ 31.083878] driver_probe_device+0x70/0x1c0
[ 31.083961] __device_attach_driver+0x108/0x1e0
[ 31.084052] bus_for_each_drv+0x9c/0x100
[ 31.084125] __device_attach+0x100/0x298
[ 31.084207] device_initial_probe+0x14/0x20
[ 31.084292] bus_probe_device+0xd8/0xdc
[ 31.084368] deferred_probe_work_func+0x11c/0x180
[ 31.084451] process_one_work+0x3ac/0x988
[ 31.084643] worker_thread+0x398/0x694
[ 31.084752] kthread+0x1bc/0x1c0
[ 31.084848] ret_from_fork+0x10/0x20
[ 31.084932] irq event stamp: 64549
[ 31.084970] hardirqs last enabled at (64548): [<ffffffc081adf35c>] _raw_spin_unlock_irqrestore+0x80/0x90
[ 31.085157] hardirqs last disabled at (64549): [<ffffffc081adf010>] _raw_spin_lock_irqsave+0xc0/0xdc
[ 31.085277] softirqs last enabled at (64503): [<ffffffc08001071c>] __do_softirq+0x47c/0x500
[ 31.085390] softirqs last disabled at (64498): [<ffffffc080017134>] ____do_softirq+0x10/0x1c
[ 31.085501] ---[ end trace 0000000000000000 ]---

Fixes: 7cbb0c63de3f ("dmaengine: xilinx: dpdma: Add the Xilinx DisplayPort DMA engine driver")
Signed-off-by: Sean Anderson <[email protected]>
---

drivers/dma/xilinx/xilinx_dpdma.c | 13 ++++++++++---
1 file changed, 10 insertions(+), 3 deletions(-)

diff --git a/drivers/dma/xilinx/xilinx_dpdma.c b/drivers/dma/xilinx/xilinx_dpdma.c
index b82815e64d24..eb0637d90342 100644
--- a/drivers/dma/xilinx/xilinx_dpdma.c
+++ b/drivers/dma/xilinx/xilinx_dpdma.c
@@ -214,7 +214,8 @@ struct xilinx_dpdma_tx_desc {
* @running: true if the channel is running
* @first_frame: flag for the first frame of stream
* @video_group: flag if multi-channel operation is needed for video channels
- * @lock: lock to access struct xilinx_dpdma_chan
+ * @lock: lock to access struct xilinx_dpdma_chan. Must be taken before
+ * @vchan.lock, if both are to be held.
* @desc_pool: descriptor allocation pool
* @err_task: error IRQ bottom half handler
* @desc: References to descriptors being processed
@@ -1097,12 +1098,14 @@ static void xilinx_dpdma_chan_vsync_irq(struct xilinx_dpdma_chan *chan)
* Complete the active descriptor, if any, promote the pending
* descriptor to active, and queue the next transfer, if any.
*/
+ spin_lock(&chan->vchan.lock);
if (chan->desc.active)
vchan_cookie_complete(&chan->desc.active->vdesc);
chan->desc.active = pending;
chan->desc.pending = NULL;

xilinx_dpdma_chan_queue_transfer(chan);
+ spin_unlock(&chan->vchan.lock);

out:
spin_unlock_irqrestore(&chan->lock, flags);
@@ -1264,10 +1267,12 @@ static void xilinx_dpdma_issue_pending(struct dma_chan *dchan)
struct xilinx_dpdma_chan *chan = to_xilinx_chan(dchan);
unsigned long flags;

- spin_lock_irqsave(&chan->vchan.lock, flags);
+ spin_lock_irqsave(&chan->lock, flags);
+ spin_lock(&chan->vchan.lock);
if (vchan_issue_pending(&chan->vchan))
xilinx_dpdma_chan_queue_transfer(chan);
- spin_unlock_irqrestore(&chan->vchan.lock, flags);
+ spin_unlock(&chan->vchan.lock);
+ spin_unlock_irqrestore(&chan->lock, flags);
}

static int xilinx_dpdma_config(struct dma_chan *dchan,
@@ -1495,7 +1500,9 @@ static void xilinx_dpdma_chan_err_task(struct tasklet_struct *t)
XILINX_DPDMA_EINTR_CHAN_ERR_MASK << chan->id);

spin_lock_irqsave(&chan->lock, flags);
+ spin_lock(&chan->vchan.lock);
xilinx_dpdma_chan_queue_transfer(chan);
+ spin_unlock(&chan->vchan.lock);
spin_unlock_irqrestore(&chan->lock, flags);
}

--
2.35.1.1320.gc452695387.dirty


2024-03-08 21:01:26

by Sean Anderson

Subject: [PATCH 2/3] dma: xilinx_dpdma: Remove unnecessary use of irqsave/restore

xilinx_dpdma_chan_done_irq and xilinx_dpdma_chan_vsync_irq are always
called with IRQs disabled from xilinx_dpdma_irq_handler. Therefore we
don't need to save/restore the IRQ flags.

Signed-off-by: Sean Anderson <[email protected]>
---

drivers/dma/xilinx/xilinx_dpdma.c | 10 ++++------
1 file changed, 4 insertions(+), 6 deletions(-)

diff --git a/drivers/dma/xilinx/xilinx_dpdma.c b/drivers/dma/xilinx/xilinx_dpdma.c
index eb0637d90342..36bd4825d389 100644
--- a/drivers/dma/xilinx/xilinx_dpdma.c
+++ b/drivers/dma/xilinx/xilinx_dpdma.c
@@ -1043,9 +1043,8 @@ static int xilinx_dpdma_chan_stop(struct xilinx_dpdma_chan *chan)
static void xilinx_dpdma_chan_done_irq(struct xilinx_dpdma_chan *chan)
{
struct xilinx_dpdma_tx_desc *active;
- unsigned long flags;

- spin_lock_irqsave(&chan->lock, flags);
+ spin_lock(&chan->lock);

xilinx_dpdma_debugfs_desc_done_irq(chan);

@@ -1057,7 +1056,7 @@ static void xilinx_dpdma_chan_done_irq(struct xilinx_dpdma_chan *chan)
"chan%u: DONE IRQ with no active descriptor!\n",
chan->id);

- spin_unlock_irqrestore(&chan->lock, flags);
+ spin_unlock(&chan->lock);
}

/**
@@ -1072,10 +1071,9 @@ static void xilinx_dpdma_chan_vsync_irq(struct xilinx_dpdma_chan *chan)
{
struct xilinx_dpdma_tx_desc *pending;
struct xilinx_dpdma_sw_desc *sw_desc;
- unsigned long flags;
u32 desc_id;

- spin_lock_irqsave(&chan->lock, flags);
+ spin_lock(&chan->lock);

pending = chan->desc.pending;
if (!chan->running || !pending)
@@ -1108,7 +1106,7 @@ static void xilinx_dpdma_chan_vsync_irq(struct xilinx_dpdma_chan *chan)
spin_unlock(&chan->vchan.lock);

out:
- spin_unlock_irqrestore(&chan->lock, flags);
+ spin_unlock(&chan->lock);
}

/**
--
2.35.1.1320.gc452695387.dirty


2024-03-08 21:01:33

by Sean Anderson

Subject: [PATCH 3/3] dma: Add lockdep asserts to virt-dma

Add lockdep asserts to all functions with "vc.lock must be held by
caller" in their documentation. This will help catch cases where these
assumptions do not hold.

Signed-off-by: Sean Anderson <[email protected]>
---

drivers/dma/virt-dma.h | 10 ++++++++++
1 file changed, 10 insertions(+)

diff --git a/drivers/dma/virt-dma.h b/drivers/dma/virt-dma.h
index e9f5250fbe4d..59d9eabc8b67 100644
--- a/drivers/dma/virt-dma.h
+++ b/drivers/dma/virt-dma.h
@@ -81,6 +81,8 @@ static inline struct dma_async_tx_descriptor *vchan_tx_prep(struct virt_dma_chan
*/
static inline bool vchan_issue_pending(struct virt_dma_chan *vc)
{
+ lockdep_assert_held(&vc->lock);
+
list_splice_tail_init(&vc->desc_submitted, &vc->desc_issued);
return !list_empty(&vc->desc_issued);
}
@@ -96,6 +98,8 @@ static inline void vchan_cookie_complete(struct virt_dma_desc *vd)
struct virt_dma_chan *vc = to_virt_chan(vd->tx.chan);
dma_cookie_t cookie;

+ lockdep_assert_held(&vc->lock);
+
cookie = vd->tx.cookie;
dma_cookie_complete(&vd->tx);
dev_vdbg(vc->chan.device->dev, "txd %p[%x]: marked complete\n",
@@ -146,6 +150,8 @@ static inline void vchan_terminate_vdesc(struct virt_dma_desc *vd)
{
struct virt_dma_chan *vc = to_virt_chan(vd->tx.chan);

+ lockdep_assert_held(&vc->lock);
+
list_add_tail(&vd->node, &vc->desc_terminated);

if (vc->cyclic == vd)
@@ -160,6 +166,8 @@ static inline void vchan_terminate_vdesc(struct virt_dma_desc *vd)
*/
static inline struct virt_dma_desc *vchan_next_desc(struct virt_dma_chan *vc)
{
+ lockdep_assert_held(&vc->lock);
+
return list_first_entry_or_null(&vc->desc_issued,
struct virt_dma_desc, node);
}
@@ -177,6 +185,8 @@ static inline struct virt_dma_desc *vchan_next_desc(struct virt_dma_chan *vc)
static inline void vchan_get_all_descriptors(struct virt_dma_chan *vc,
struct list_head *head)
{
+ lockdep_assert_held(&vc->lock);
+
list_splice_tail_init(&vc->desc_allocated, head);
list_splice_tail_init(&vc->desc_submitted, head);
list_splice_tail_init(&vc->desc_issued, head);
--
2.35.1.1320.gc452695387.dirty


2024-03-27 12:02:47

by Tomi Valkeinen

Subject: Re: [PATCH 1/3] dma: xilinx_dpdma: Fix locking

On 08/03/2024 23:00, Sean Anderson wrote:
> There are several places where either chan->lock or chan->vchan.lock was
> not held. Add appropriate locking. This fixes lockdep warnings like
>

Reviewed-by: Tomi Valkeinen <[email protected]>

Tomi



2024-03-27 14:08:56

by Tomi Valkeinen

Subject: Re: [PATCH 3/3] dma: Add lockdep asserts to virt-dma

On 08/03/2024 23:00, Sean Anderson wrote:
> Add lockdep asserts to all functions with "vc.lock must be held by
> caller" in their documentation. This will help catch cases where these
> assumptions do not hold.
>
> Signed-off-by: Sean Anderson <[email protected]>
> ---

Reviewed-by: Tomi Valkeinen <[email protected]>

Tomi



2024-03-27 14:38:26

by Tomi Valkeinen

Subject: Re: [PATCH 2/3] dma: xilinx_dpdma: Remove unnecessary use of irqsave/restore

Hi,

On 08/03/2024 23:00, Sean Anderson wrote:
> xilinx_dpdma_chan_done_irq and xilinx_dpdma_chan_vsync_irq are always
> called with IRQs disabled from xilinx_dpdma_irq_handler. Therefore we
> don't need to save/restore the IRQ flags.

I think this is fine, but a few thoughts:

- Is spin_lock clearly faster than the irqsave variant, or is this a
pointless optimization? It's safer to just use the irqsave variant,
instead of making sure the code is always called from the expected contexts.
- Is this style documented/recommended anywhere? Going through docs, I
only found docs telling to use irqsave when mixing irq and non-irq contexts.
- Does this cause issues on PREEMPT_RT?

Tomi



2024-03-28 15:00:40

by Sean Anderson

Subject: Re: [PATCH 2/3] dma: xilinx_dpdma: Remove unnecessary use of irqsave/restore

On 3/27/24 08:27, Tomi Valkeinen wrote:
> Hi,
>
> On 08/03/2024 23:00, Sean Anderson wrote:
>> xilinx_dpdma_chan_done_irq and xilinx_dpdma_chan_vsync_irq are always
>> called with IRQs disabled from xilinx_dpdma_irq_handler. Therefore we
>> don't need to save/restore the IRQ flags.
>
> I think this is fine, but a few thoughts:
>
> - Is spin_lock clearly faster than the irqsave variant, or is this a pointless optimization? It's safer to just use irqsave variant, instead of making sure the code is always called from the expected contexts.

It's not an optimization. Technically this will save a few instructions,
but...

> - Is this style documented/recommended anywhere? Going through docs, I only found docs telling to use irqsave when mixing irq and non-irq contexts.

The purpose is mainly to make it clear that this is meant to be called
in IRQ context. With irqsave, there's an implication that this could be
called in non-IRQ context, which it never is.

> - Does this cause issues on PREEMPT_RT?

Why would it?

--Sean


2024-03-28 16:41:10

by Tomi Valkeinen

Subject: Re: [PATCH 2/3] dma: xilinx_dpdma: Remove unnecessary use of irqsave/restore

On 28/03/2024 17:00, Sean Anderson wrote:
> On 3/27/24 08:27, Tomi Valkeinen wrote:
>> Hi,
>>
>> On 08/03/2024 23:00, Sean Anderson wrote:
>>> xilinx_dpdma_chan_done_irq and xilinx_dpdma_chan_vsync_irq are always
>>> called with IRQs disabled from xilinx_dpdma_irq_handler. Therefore we
>>> don't need to save/restore the IRQ flags.
>>
>> I think this is fine, but a few thoughts:
>>
>> - Is spin_lock clearly faster than the irqsave variant, or is this a pointless optimization? It's safer to just use irqsave variant, instead of making sure the code is always called from the expected contexts.
>
> It's not an optimization. Technically this will save a few instructions,
> but...
>
>> - Is this style documented/recommended anywhere? Going through docs, I only found docs telling to use irqsave when mixing irq and non-irq contexts.
>
> The purpose is mainly to make it clear that this is meant to be called
> in IRQ context. With irqsave, there's an implication that this could be
> called in non-IRQ context, which it never is.

Hmm, I see. Yes, I think that makes sense.

>> - Does this cause issues on PREEMPT_RT?
>
> Why would it?

I was reading locktypes.rst and started wondering what it means if
spinlocks are changed into sleeping locks. But thinking about it again,
it doesn't matter, as the irq will still be masked when in irq context.

So:

Reviewed-by: Tomi Valkeinen <[email protected]>

Tomi



2024-04-07 16:38:44

by Vinod Koul

Subject: Re: [PATCH 0/3] dma: xilinx_dpdma: Fix locking


On Fri, 08 Mar 2024 16:00:31 -0500, Sean Anderson wrote:
> This series fixes some locking problems with the xilinx dpdma driver. It
> also adds some additional lockdep asserts to make catching such errors
> easier.
>
>
> Sean Anderson (3):
> dma: xilinx_dpdma: Fix locking
> dma: xilinx_dpdma: Remove unnecessary use of irqsave/restore
> dma: Add lockdep asserts to virt-dma
>
> [...]

Applied, thanks!

[1/3] dma: xilinx_dpdma: Fix locking
commit: 244296cc3a155199a8b080d19e645d7d49081a38
[2/3] dma: xilinx_dpdma: Remove unnecessary use of irqsave/restore
(no commit info)
[3/3] dma: Add lockdep asserts to virt-dma
(no commit info)

Best regards,
--
~Vinod