2014-04-16 09:16:17

by Vinod Koul

Subject: Re: [PATCH v7 2/2] dma: Add Xilinx AXI Video Direct Memory Access Engine driver support

On Fri, Mar 28, 2014 at 05:33:42PM +0530, Srikanth Thokala wrote:
> This is the driver for the AXI Video Direct Memory Access (AXI
> VDMA) core, which is a soft Xilinx IP core that provides high-
> bandwidth direct memory access between memory and AXI4-Stream
> type video target peripherals. The core provides efficient two
> dimensional DMA operations with independent asynchronous read
> and write channel operation.
>
> This module works on Zynq (ARM Based SoC) and Microblaze platforms.

Okay, the series is fine and I was going to apply it, BUT
1) need an ack on the DT patch..
2) issues below on managing the descriptor and resetting the cookie :(

> +
> +/**
> + * xilinx_vdma_tx_descriptor - Allocate transaction descriptor
> + * @chan: Driver specific VDMA channel
> + *
> + * Return: The allocated descriptor on success and NULL on failure.
> + */
> +static struct xilinx_vdma_tx_descriptor *
> +xilinx_vdma_alloc_tx_descriptor(struct xilinx_vdma_chan *chan)
> +{
> + struct xilinx_vdma_tx_descriptor *desc;
> + unsigned long flags;
> +
> + if (chan->allocated_desc)
> + return chan->allocated_desc;
??

> +
> + desc = kzalloc(sizeof(*desc), GFP_KERNEL);
> + if (!desc)
> + return NULL;
> +
> + spin_lock_irqsave(&chan->lock, flags);
> + chan->allocated_desc = desc;
ah why do you need this?

So this essentially prevents you from preparing two transactions at the same
time, as you would overwrite??

You should maintain a list for pending and submitted.
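
I mean something like this (only a rough sketch, untested):

static struct xilinx_vdma_tx_descriptor *
xilinx_vdma_alloc_tx_descriptor(struct xilinx_vdma_chan *chan)
{
	struct xilinx_vdma_tx_descriptor *desc;

	/*
	 * Always hand out a fresh descriptor instead of caching one in
	 * chan->allocated_desc, so two transactions can be prepared
	 * concurrently.
	 */
	desc = kzalloc(sizeof(*desc), GFP_KERNEL);
	if (!desc)
		return NULL;

	INIT_LIST_HEAD(&desc->segments);
	return desc;
}

tx_submit() then only needs to assign the cookie and append the descriptor
to chan->pending_list.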

> + spin_unlock_irqrestore(&chan->lock, flags);
> +
> + INIT_LIST_HEAD(&desc->segments);
> +
> + return desc;
> +}
> +

> +/**
> + * xilinx_vdma_tx_submit - Submit DMA transaction
> + * @tx: Async transaction descriptor
> + *
> + * Return: cookie value on success and failure value on error
> + */
> +static dma_cookie_t xilinx_vdma_tx_submit(struct dma_async_tx_descriptor *tx)
> +{
> + struct xilinx_vdma_tx_descriptor *desc = to_vdma_tx_descriptor(tx);
> + struct xilinx_vdma_chan *chan = to_xilinx_chan(tx->chan);
> + dma_cookie_t cookie;
> + unsigned long flags;
> + int err;
> +
> + if (chan->err) {
> + /*
> + * If reset fails, need to hard reset the system.
> + * Channel is no longer functional
> + */
> + err = xilinx_vdma_chan_reset(chan);
> + if (err < 0)
> + return err;
> + }
> +
> + spin_lock_irqsave(&chan->lock, flags);
> +
> + cookie = dma_cookie_assign(tx);
> +
> + /* Append the transaction to the pending transactions queue. */
> + list_add_tail(&desc->node, &chan->pending_list);
> +
> + /* Free the allocated desc */
> + chan->allocated_desc = NULL;
> +
> + spin_unlock_irqrestore(&chan->lock, flags);
> +
> + return cookie;
> +}
> +
> +/**
> + * xilinx_vdma_dma_prep_interleaved - prepare a descriptor for a
> + * DMA_SLAVE transaction
> + * @dchan: DMA channel
> + * @xt: Interleaved template pointer
> + * @flags: transfer ack flags
> + *
> + * Return: Async transaction descriptor on success and NULL on failure
> + */
> +static struct dma_async_tx_descriptor *
> +xilinx_vdma_dma_prep_interleaved(struct dma_chan *dchan,
> + struct dma_interleaved_template *xt,
> + unsigned long flags)
> +{
> + struct xilinx_vdma_chan *chan = to_xilinx_chan(dchan);
> + struct xilinx_vdma_tx_descriptor *desc;
> + struct xilinx_vdma_tx_segment *segment, *prev = NULL;
> + struct xilinx_vdma_desc_hw *hw;
> +
> + if (!is_slave_direction(xt->dir))
> + return NULL;
> +
> + if (!xt->numf || !xt->sgl[0].size)
> + return NULL;
> +
> + /* Allocate a transaction descriptor. */
> + desc = xilinx_vdma_alloc_tx_descriptor(chan);
> + if (!desc)
> + return NULL;
> +
> + dma_async_tx_descriptor_init(&desc->async_tx, &chan->common);
> + desc->async_tx.tx_submit = xilinx_vdma_tx_submit;
> + desc->async_tx.cookie = 0;
why is this initialized here, when submit calls dma_cookie_assign()?
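
For reference, dma_cookie_assign() (drivers/dma/dmaengine.h) already
overwrites the cookie unconditionally; it does roughly:

static inline dma_cookie_t dma_cookie_assign(struct dma_async_tx_descriptor *tx)
{
	struct dma_chan *chan = tx->chan;
	dma_cookie_t cookie;

	/* hand out the next cookie for this channel and store it in tx */
	cookie = chan->cookie + 1;
	if (cookie < DMA_MIN_COOKIE)
		cookie = DMA_MIN_COOKIE;
	tx->cookie = chan->cookie = cookie;

	return cookie;
}

so zeroing the cookie in prep doesn't buy you anything.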

--
~Vinod


2014-04-16 10:11:41

by Srikanth Thokala

Subject: Re: [PATCH v7 2/2] dma: Add Xilinx AXI Video Direct Memory Access Engine driver support

On Wed, Apr 16, 2014 at 2:36 PM, Vinod Koul <[email protected]> wrote:
> On Fri, Mar 28, 2014 at 05:33:42PM +0530, Srikanth Thokala wrote:
>> This is the driver for the AXI Video Direct Memory Access (AXI
>> VDMA) core, which is a soft Xilinx IP core that provides high-
>> bandwidth direct memory access between memory and AXI4-Stream
>> type video target peripherals. The core provides efficient two
>> dimensional DMA operations with independent asynchronous read
>> and write channel operation.
>>
>> This module works on Zynq (ARM Based SoC) and Microblaze platforms.
>
> Okay, the series is fine and I was going to apply it, BUT
> 1) need an ack on the DT patch..
> 2) issues below on managing the descriptor and resetting the cookie :(

Ok.

>
>> +
>> +/**
>> + * xilinx_vdma_tx_descriptor - Allocate transaction descriptor
>> + * @chan: Driver specific VDMA channel
>> + *
>> + * Return: The allocated descriptor on success and NULL on failure.
>> + */
>> +static struct xilinx_vdma_tx_descriptor *
>> +xilinx_vdma_alloc_tx_descriptor(struct xilinx_vdma_chan *chan)
>> +{
>> + struct xilinx_vdma_tx_descriptor *desc;
>> + unsigned long flags;
>> +
>> + if (chan->allocated_desc)
>> + return chan->allocated_desc;
> ??
>
>> +
>> + desc = kzalloc(sizeof(*desc), GFP_KERNEL);
>> + if (!desc)
>> + return NULL;
>> +
>> + spin_lock_irqsave(&chan->lock, flags);
>> + chan->allocated_desc = desc;
> ah why do you need this?
>
> So this essentially prevents you from preparing two transactions at the same
> time, as you would overwrite??

This allows queuing up multiple segments onto a single transaction
descriptor. The user submits this single desc, and in issue_pending() we
decode the multiple segments and submit them to the SG HW engine. We free
up allocated_desc when it is submitted to the HW. This was added after my
discussion with Jaswinder, to best utilize the HW SG engine.
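
In client terms the intended usage is something like this (just an
illustration; xt0/xt1 stand for two interleaved templates):

	/* each prep call adds one more segment to the same cached desc */
	tx = dmaengine_prep_interleaved_dma(chan, &xt0, flags);
	tx = dmaengine_prep_interleaved_dma(chan, &xt1, flags);

	/*
	 * One submit queues the whole chain; issue_pending() then programs
	 * all of the segments into the HW SG engine in one go.
	 */
	cookie = dmaengine_submit(tx);
	dma_async_issue_pending(chan);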

>
> You should maintain a list for pending and submitted.

Yes, we maintain two lists, pending_list and done_list.

>
>> + spin_unlock_irqrestore(&chan->lock, flags);
>> +
>> + INIT_LIST_HEAD(&desc->segments);
>> +
>> + return desc;
>> +}
>> +
>
>> +/**
>> + * xilinx_vdma_tx_submit - Submit DMA transaction
>> + * @tx: Async transaction descriptor
>> + *
>> + * Return: cookie value on success and failure value on error
>> + */
>> +static dma_cookie_t xilinx_vdma_tx_submit(struct dma_async_tx_descriptor *tx)
>> +{
>> + struct xilinx_vdma_tx_descriptor *desc = to_vdma_tx_descriptor(tx);
>> + struct xilinx_vdma_chan *chan = to_xilinx_chan(tx->chan);
>> + dma_cookie_t cookie;
>> + unsigned long flags;
>> + int err;
>> +
>> + if (chan->err) {
>> + /*
>> + * If reset fails, need to hard reset the system.
>> + * Channel is no longer functional
>> + */
>> + err = xilinx_vdma_chan_reset(chan);
>> + if (err < 0)
>> + return err;
>> + }
>> +
>> + spin_lock_irqsave(&chan->lock, flags);
>> +
>> + cookie = dma_cookie_assign(tx);
>> +
>> + /* Append the transaction to the pending transactions queue. */
>> + list_add_tail(&desc->node, &chan->pending_list);
>> +
>> + /* Free the allocated desc */
>> + chan->allocated_desc = NULL;
>> +
>> + spin_unlock_irqrestore(&chan->lock, flags);
>> +
>> + return cookie;
>> +}
>> +
>> +/**
>> + * xilinx_vdma_dma_prep_interleaved - prepare a descriptor for a
>> + * DMA_SLAVE transaction
>> + * @dchan: DMA channel
>> + * @xt: Interleaved template pointer
>> + * @flags: transfer ack flags
>> + *
>> + * Return: Async transaction descriptor on success and NULL on failure
>> + */
>> +static struct dma_async_tx_descriptor *
>> +xilinx_vdma_dma_prep_interleaved(struct dma_chan *dchan,
>> + struct dma_interleaved_template *xt,
>> + unsigned long flags)
>> +{
>> + struct xilinx_vdma_chan *chan = to_xilinx_chan(dchan);
>> + struct xilinx_vdma_tx_descriptor *desc;
>> + struct xilinx_vdma_tx_segment *segment, *prev = NULL;
>> + struct xilinx_vdma_desc_hw *hw;
>> +
>> + if (!is_slave_direction(xt->dir))
>> + return NULL;
>> +
>> + if (!xt->numf || !xt->sgl[0].size)
>> + return NULL;
>> +
>> + /* Allocate a transaction descriptor. */
>> + desc = xilinx_vdma_alloc_tx_descriptor(chan);
>> + if (!desc)
>> + return NULL;
>> +
>> + dma_async_tx_descriptor_init(&desc->async_tx, &chan->common);
>> + desc->async_tx.tx_submit = xilinx_vdma_tx_submit;
>> + desc->async_tx.cookie = 0;
> why is this initialized here, when submit calls dma_cookie_assign()?

This is done while preparing the descs, but I see your point. I will fix
it in my next version.

Srikanth

>
> --
> ~Vinod

2014-04-16 10:37:13

by Vinod Koul

Subject: Re: [PATCH v7 2/2] dma: Add Xilinx AXI Video Direct Memory Access Engine driver support

On Wed, Apr 16, 2014 at 03:41:34PM +0530, Srikanth Thokala wrote:
> On Wed, Apr 16, 2014 at 2:36 PM, Vinod Koul <[email protected]> wrote:
> > On Fri, Mar 28, 2014 at 05:33:42PM +0530, Srikanth Thokala wrote:
> >> This is the driver for the AXI Video Direct Memory Access (AXI
> >> VDMA) core, which is a soft Xilinx IP core that provides high-
> >> bandwidth direct memory access between memory and AXI4-Stream
> >> type video target peripherals. The core provides efficient two
> >> dimensional DMA operations with independent asynchronous read
> >> and write channel operation.
> >>
> >> This module works on Zynq (ARM Based SoC) and Microblaze platforms.
> >
> > Okay, the series is fine and I was going to apply it, BUT
> > 1) need an ack on the DT patch..
> > 2) issues below on managing the descriptor and resetting the cookie :(
>
> Ok.
>
> >
> >> +
> >> +/**
> >> + * xilinx_vdma_tx_descriptor - Allocate transaction descriptor
> >> + * @chan: Driver specific VDMA channel
> >> + *
> >> + * Return: The allocated descriptor on success and NULL on failure.
> >> + */
> >> +static struct xilinx_vdma_tx_descriptor *
> >> +xilinx_vdma_alloc_tx_descriptor(struct xilinx_vdma_chan *chan)
> >> +{
> >> + struct xilinx_vdma_tx_descriptor *desc;
> >> + unsigned long flags;
> >> +
> >> + if (chan->allocated_desc)
> >> + return chan->allocated_desc;
> > ??
> >
> >> +
> >> + desc = kzalloc(sizeof(*desc), GFP_KERNEL);
> >> + if (!desc)
> >> + return NULL;
> >> +
> >> + spin_lock_irqsave(&chan->lock, flags);
> >> + chan->allocated_desc = desc;
> > ah why do you need this?
> >
> > So this essentially prevents you from preparing two transactions at the same
> > time, as you would overwrite??
>
> This allows queuing up multiple segments onto a single transaction
> descriptor. The user submits this single desc, and in issue_pending() we
> decode the multiple segments and submit them to the SG HW engine. We free
> up allocated_desc when it is submitted to the HW. This was added after my
> discussion with Jaswinder, to best utilize the HW SG engine.

I think the best utilization of the HW SG engine would happen if we collate
the pending list when you start the dma...
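
Roughly something like this when the channel gets started (a sketch only;
xilinx_vdma_chain_segment() is a made-up helper for whatever links a
segment into the HW descriptor chain, chan->lock is held by the caller,
and I'm assuming the segment list node is also called 'node'):

static void xilinx_vdma_collate_pending(struct xilinx_vdma_chan *chan)
{
	struct xilinx_vdma_tx_descriptor *desc;
	struct xilinx_vdma_tx_segment *segment;

	/* fold everything submitted so far into a single HW SG chain,
	 * then kick the channel once for the whole lot
	 */
	list_for_each_entry(desc, &chan->pending_list, node)
		list_for_each_entry(segment, &desc->segments, node)
			xilinx_vdma_chain_segment(chan, segment);
}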

--
~Vinod

2014-04-16 11:31:14

by Srikanth Thokala

Subject: Re: [PATCH v7 2/2] dma: Add Xilinx AXI Video Direct Memory Access Engine driver support

On Wed, Apr 16, 2014 at 3:56 PM, Vinod Koul <[email protected]> wrote:
> On Wed, Apr 16, 2014 at 03:41:34PM +0530, Srikanth Thokala wrote:
>> On Wed, Apr 16, 2014 at 2:36 PM, Vinod Koul <[email protected]> wrote:
>> > On Fri, Mar 28, 2014 at 05:33:42PM +0530, Srikanth Thokala wrote:
>> >> This is the driver for the AXI Video Direct Memory Access (AXI
>> >> VDMA) core, which is a soft Xilinx IP core that provides high-
>> >> bandwidth direct memory access between memory and AXI4-Stream
>> >> type video target peripherals. The core provides efficient two
>> >> dimensional DMA operations with independent asynchronous read
>> >> and write channel operation.
>> >>
>> >> This module works on Zynq (ARM Based SoC) and Microblaze platforms.
>> >
>> > Okay, the series is fine and I was going to apply it, BUT
>> > 1) need an ack on the DT patch..
>> > 2) issues below on managing the descriptor and resetting the cookie :(
>>
>> Ok.
>>
>> >
>> >> +
>> >> +/**
>> >> + * xilinx_vdma_tx_descriptor - Allocate transaction descriptor
>> >> + * @chan: Driver specific VDMA channel
>> >> + *
>> >> + * Return: The allocated descriptor on success and NULL on failure.
>> >> + */
>> >> +static struct xilinx_vdma_tx_descriptor *
>> >> +xilinx_vdma_alloc_tx_descriptor(struct xilinx_vdma_chan *chan)
>> >> +{
>> >> + struct xilinx_vdma_tx_descriptor *desc;
>> >> + unsigned long flags;
>> >> +
>> >> + if (chan->allocated_desc)
>> >> + return chan->allocated_desc;
>> > ??
>> >
>> >> +
>> >> + desc = kzalloc(sizeof(*desc), GFP_KERNEL);
>> >> + if (!desc)
>> >> + return NULL;
>> >> +
>> >> + spin_lock_irqsave(&chan->lock, flags);
>> >> + chan->allocated_desc = desc;
>> > ah why do you need this?
>> >
>> > So this essentially prevents you from preparing two transactions at the same
>> > time, as you would overwrite??
>>
>> This allows queuing up multiple segments onto a single transaction
>> descriptor. The user submits this single desc, and in issue_pending() we
>> decode the multiple segments and submit them to the SG HW engine. We free
>> up allocated_desc when it is submitted to the HW. This was added after my
>> discussion with Jaswinder, to best utilize the HW SG engine.
>
> I think the best utilization of the HW SG engine would happen if we collate
> the pending list when you start the dma...

Is it OK if I revisit this code as an enhancement at a later time?

Srikanth

>
> --
> ~Vinod