This is largely an adaptation of Lars' work, applied on top of the IIO
multi-buffer support + high-speed/mmap support [1].
The original can be found here:
https://github.com/larsclausen/linux/commits/iio-high-speed-5.10
But this adaptation isn't tested yet.
[1] Requires that these patch sets be applied (in this order):
* https://lore.kernel.org/linux-iio/[email protected]/T/#t
* https://lore.kernel.org/linux-iio/[email protected]/T/#t
Some of the variations from the original work are:
1. It's applied on top of the multibuffer support, so the direction of
the data is set per iio_buffer, and not per iio_dev
2. Cyclic mode is a separate patch
3. devm_iio_dmaengine_buffer_alloc() requires the definition of
'enum iio_buffer_direction', which means that 'linux/iio/buffer.h'
needs to be included in buffer-dma.h; Lars tried to use a bool, but
using the enum seems a bit more consistent, and it may let us go
down the route of bidirectional buffers some day; not sure if
that's sane or not (you never know); see the sketch after this list
4. Various re-formatting, and added some docstrings where I remembered
to do so
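For reference, a minimal sketch of what points 1 and 3 amount to for a
driver once the whole series is applied (the call site and variable
names here are made up; only the allocator signature comes from these
patches):

	/* Direction is now a property of each iio_buffer, not of the
	 * iio_dev, so a combined ADC/DAC device allocates one buffer
	 * per direction.
	 */
	struct iio_buffer *rx, *tx;

	rx = devm_iio_dmaengine_buffer_alloc(dev, IIO_BUFFER_DIRECTION_IN,
					     "rx", NULL, NULL);
	if (IS_ERR(rx))
		return PTR_ERR(rx);

	tx = devm_iio_dmaengine_buffer_alloc(dev, IIO_BUFFER_DIRECTION_OUT,
					     "tx", NULL, NULL);
	if (IS_ERR(tx))
		return PTR_ERR(tx);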
Lars-Peter Clausen (5):
iio: Add output buffer support
iio: kfifo-buffer: Add output buffer support
iio: buffer-dma: Allow to provide custom buffer ops
iio: buffer-dma: Add output buffer support
iio: buffer-dma: add support for cyclic DMA transfers
drivers/iio/adc/adi-axi-adc.c | 5 +-
drivers/iio/buffer/industrialio-buffer-dma.c | 120 ++++++++++++++++--
.../buffer/industrialio-buffer-dmaengine.c | 57 +++++++--
drivers/iio/buffer/kfifo_buf.c | 50 ++++++++
drivers/iio/industrialio-buffer.c | 110 +++++++++++++++-
include/linux/iio/buffer-dma.h | 11 +-
include/linux/iio/buffer-dmaengine.h | 7 +-
include/linux/iio/buffer.h | 7 +
include/linux/iio/buffer_impl.h | 11 ++
include/uapi/linux/iio/buffer.h | 1 +
10 files changed, 348 insertions(+), 31 deletions(-)
--
2.17.1
From: Lars-Peter Clausen <[email protected]>
Currently IIO only supports buffer mode for capture devices like ADCs. Add
support for buffered mode for output devices like DACs.
The output buffer implementation is analogous to the input buffer
implementation. Instead of using read() to get data from the buffer,
write() is used to copy data into the buffer.
poll() with POLLOUT will wake up if there is space available for at
least the configured watermark of samples.
Drivers can remove data from a buffer using iio_buffer_remove_sample();
the function can e.g. be called from a trigger handler to write the
data to the hardware.
A buffer can only be either an output buffer or an input buffer, but
not both. So, for a device that has both an ADC and a DAC path, this
will mean two IIO buffers (one for each direction).
The direction of the buffer is decided by the new direction field of the
iio_buffer struct and should be set after allocating and before registering
it.
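For illustration, a minimal userspace writer for such a buffer could
look like this (a sketch only; the device path and chunk layout are
placeholders, and error handling is trimmed):

	#include <fcntl.h>
	#include <poll.h>
	#include <unistd.h>

	/* Push one chunk of samples into an output (DAC) buffer. */
	static int stream_chunk(const void *samples, size_t nbytes)
	{
		int fd = open("/dev/iio:device0", O_WRONLY);
		struct pollfd pfd = { .fd = fd, .events = POLLOUT };
		ssize_t ret;

		if (fd < 0)
			return -1;

		/* Block until space for at least the watermark of
		 * samples is available...
		 */
		poll(&pfd, 1, -1);
		/* ...then copy the samples into the IIO buffer. */
		ret = write(fd, samples, nbytes);

		close(fd);
		return ret < 0 ? -1 : 0;
	}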
Signed-off-by: Lars-Peter Clausen <[email protected]>
Signed-off-by: Alexandru Ardelean <[email protected]>
---
drivers/iio/industrialio-buffer.c | 110 ++++++++++++++++++++++++++++--
include/linux/iio/buffer.h | 7 ++
include/linux/iio/buffer_impl.h | 11 +++
3 files changed, 124 insertions(+), 4 deletions(-)
diff --git a/drivers/iio/industrialio-buffer.c b/drivers/iio/industrialio-buffer.c
index a0d1ad86022f..6f4f5f5544f3 100644
--- a/drivers/iio/industrialio-buffer.c
+++ b/drivers/iio/industrialio-buffer.c
@@ -162,6 +162,69 @@ static ssize_t iio_buffer_read(struct file *filp, char __user *buf,
return ret;
}
+static size_t iio_buffer_space_available(struct iio_buffer *buf)
+{
+ if (buf->access->space_available)
+ return buf->access->space_available(buf);
+
+ return SIZE_MAX;
+}
+
+static ssize_t iio_buffer_write(struct file *filp, const char __user *buf,
+ size_t n, loff_t *f_ps)
+{
+ struct iio_dev_buffer_pair *ib = filp->private_data;
+ struct iio_buffer *rb = ib->buffer;
+ struct iio_dev *indio_dev = ib->indio_dev;
+ DEFINE_WAIT_FUNC(wait, woken_wake_function);
+ size_t datum_size;
+ size_t to_wait;
+ int ret;
+
+ if (!rb || !rb->access->write)
+ return -EINVAL;
+
+ datum_size = rb->bytes_per_datum;
+
+ /*
+ * If datum_size is 0 there will never be anything to write to the
+ * buffer, so signal end of file now.
+ */
+ if (!datum_size)
+ return 0;
+
+ if (filp->f_flags & O_NONBLOCK)
+ to_wait = 0;
+ else
+ to_wait = min_t(size_t, n / datum_size, rb->watermark);
+
+ add_wait_queue(&rb->pollq, &wait);
+ do {
+ if (!indio_dev->info) {
+ ret = -ENODEV;
+ break;
+ }
+
+ if (iio_buffer_space_available(rb) < to_wait) {
+ if (signal_pending(current)) {
+ ret = -ERESTARTSYS;
+ break;
+ }
+
+ wait_woken(&wait, TASK_INTERRUPTIBLE,
+ MAX_SCHEDULE_TIMEOUT);
+ continue;
+ }
+
+ ret = rb->access->write(rb, n, buf);
+ if (ret == 0 && (filp->f_flags & O_NONBLOCK))
+ ret = -EAGAIN;
+ } while (ret == 0);
+ remove_wait_queue(&rb->pollq, &wait);
+
+ return ret;
+}
+
/**
* iio_buffer_poll() - poll the buffer to find out if it has data
* @filp: File structure pointer for device access
@@ -182,8 +245,19 @@ static __poll_t iio_buffer_poll(struct file *filp,
return 0;
poll_wait(filp, &rb->pollq, wait);
- if (iio_buffer_ready(indio_dev, rb, rb->watermark, 0))
- return EPOLLIN | EPOLLRDNORM;
+
+ switch (rb->direction) {
+ case IIO_BUFFER_DIRECTION_IN:
+ if (iio_buffer_ready(indio_dev, rb, rb->watermark, 0))
+ return EPOLLIN | EPOLLRDNORM;
+ break;
+ case IIO_BUFFER_DIRECTION_OUT:
+ if (iio_buffer_space_available(rb) >= rb->watermark)
+ return EPOLLOUT | EPOLLWRNORM;
+ break;
+ }
+
+ /* need a way of knowing if there may be enough data... */
return 0;
}
@@ -232,6 +306,16 @@ void iio_buffer_wakeup_poll(struct iio_dev *indio_dev)
}
}
+int iio_buffer_remove_sample(struct iio_buffer *buffer, u8 *data)
+{
+ if (!buffer || !buffer->access)
+ return -EINVAL;
+ if (!buffer->access->remove_from)
+ return -ENOSYS;
+ return buffer->access->remove_from(buffer, data);
+}
+EXPORT_SYMBOL_GPL(iio_buffer_remove_sample);
+
void iio_buffer_init(struct iio_buffer *buffer)
{
INIT_LIST_HEAD(&buffer->demux_list);
@@ -803,6 +887,8 @@ static int iio_verify_update(struct iio_dev *indio_dev,
}
if (insert_buffer) {
+ if (insert_buffer->direction == IIO_BUFFER_DIRECTION_OUT)
+ strict_scanmask = true;
bitmap_or(compound_mask, compound_mask,
insert_buffer->scan_mask, indio_dev->masklength);
scan_timestamp |= insert_buffer->scan_timestamp;
@@ -945,6 +1031,8 @@ static int iio_update_demux(struct iio_dev *indio_dev)
int ret;
list_for_each_entry(buffer, &iio_dev_opaque->buffer_list, buffer_list) {
+ if (buffer->direction == IIO_BUFFER_DIRECTION_OUT)
+ continue;
ret = iio_buffer_update_demux(indio_dev, buffer);
if (ret < 0)
goto error_clear_mux_table;
@@ -1155,6 +1243,11 @@ int iio_update_buffers(struct iio_dev *indio_dev,
mutex_lock(&indio_dev->info_exist_lock);
mutex_lock(&indio_dev->mlock);
+ if (insert_buffer && insert_buffer->direction == IIO_BUFFER_DIRECTION_OUT) {
+ ret = -EINVAL;
+ goto out_unlock;
+ }
+
if (insert_buffer && iio_buffer_is_active(insert_buffer))
insert_buffer = NULL;
@@ -1400,6 +1493,7 @@ static const struct file_operations iio_buffer_chrdev_fileops = {
.owner = THIS_MODULE,
.llseek = noop_llseek,
.read = iio_buffer_read,
+ .write = iio_buffer_write,
.poll = iio_buffer_poll,
.unlocked_ioctl = iio_buffer_ioctl,
.compat_ioctl = compat_ptr_ioctl,
@@ -1914,8 +2008,16 @@ static int iio_buffer_mmap(struct file *filep, struct vm_area_struct *vma)
if (!(vma->vm_flags & VM_SHARED))
return -EINVAL;
- if (!(vma->vm_flags & VM_READ))
- return -EINVAL;
+ switch (buffer->direction) {
+ case IIO_BUFFER_DIRECTION_IN:
+ if (!(vma->vm_flags & VM_READ))
+ return -EINVAL;
+ break;
+ case IIO_BUFFER_DIRECTION_OUT:
+ if (!(vma->vm_flags & VM_WRITE))
+ return -EINVAL;
+ break;
+ }
return buffer->access->mmap(buffer, vma);
}
diff --git a/include/linux/iio/buffer.h b/include/linux/iio/buffer.h
index b6928ac5c63d..e87b8773253d 100644
--- a/include/linux/iio/buffer.h
+++ b/include/linux/iio/buffer.h
@@ -11,8 +11,15 @@
struct iio_buffer;
+enum iio_buffer_direction {
+ IIO_BUFFER_DIRECTION_IN,
+ IIO_BUFFER_DIRECTION_OUT,
+};
+
int iio_push_to_buffers(struct iio_dev *indio_dev, const void *data);
+int iio_buffer_remove_sample(struct iio_buffer *buffer, u8 *data);
+
/**
* iio_push_to_buffers_with_timestamp() - push data and timestamp to buffers
* @indio_dev: iio_dev structure for device.
diff --git a/include/linux/iio/buffer_impl.h b/include/linux/iio/buffer_impl.h
index 1d57dc7ccb4f..47bdbf4a4519 100644
--- a/include/linux/iio/buffer_impl.h
+++ b/include/linux/iio/buffer_impl.h
@@ -7,6 +7,7 @@
#ifdef CONFIG_IIO_BUFFER
#include <uapi/linux/iio/buffer.h>
+#include <linux/iio/buffer.h>
struct iio_dev;
struct iio_buffer;
@@ -23,6 +24,10 @@ struct iio_buffer;
* @read: try to get a specified number of bytes (must exist)
* @data_available: indicates how much data is available for reading from
* the buffer.
+ * @remove_from: remove a sample from the buffer. Drivers can call this
+ * e.g. from a trigger handler to write the data to the hardware.
+ * @write: try to write a number of bytes
+ * @space_available: returns the number of bytes available in a buffer
* @request_update: if a parameter change has been marked, update underlying
* storage.
* @set_bytes_per_datum:set number of bytes per datum
@@ -61,6 +66,9 @@ struct iio_buffer_access_funcs {
int (*store_to)(struct iio_buffer *buffer, const void *data);
int (*read)(struct iio_buffer *buffer, size_t n, char __user *buf);
size_t (*data_available)(struct iio_buffer *buffer);
+ int (*remove_from)(struct iio_buffer *buffer, void *data);
+ int (*write)(struct iio_buffer *buffer, size_t n, const char __user *buf);
+ size_t (*space_available)(struct iio_buffer *buffer);
int (*request_update)(struct iio_buffer *buffer);
@@ -103,6 +111,9 @@ struct iio_buffer {
/** @bytes_per_datum: Size of individual datum including timestamp. */
size_t bytes_per_datum;
+ /** @direction: Direction of the data stream (in/out). */
+ enum iio_buffer_direction direction;
+
/**
* @access: Buffer access functions associated with the
* implementation.
--
2.17.1
From: Lars-Peter Clausen <[email protected]>
Add output buffer support to the kfifo buffer implementation.
The implementation is straightforward and mostly just wraps the kfifo
API to provide the required operations.
Signed-off-by: Lars-Peter Clausen <[email protected]>
Signed-off-by: Alexandru Ardelean <[email protected]>
---
drivers/iio/buffer/kfifo_buf.c | 50 ++++++++++++++++++++++++++++++++++
1 file changed, 50 insertions(+)
diff --git a/drivers/iio/buffer/kfifo_buf.c b/drivers/iio/buffer/kfifo_buf.c
index 1359abed3b31..6e055176f969 100644
--- a/drivers/iio/buffer/kfifo_buf.c
+++ b/drivers/iio/buffer/kfifo_buf.c
@@ -138,10 +138,60 @@ static void iio_kfifo_buffer_release(struct iio_buffer *buffer)
kfree(kf);
}
+static size_t iio_kfifo_buf_space_available(struct iio_buffer *r)
+{
+ struct iio_kfifo *kf = iio_to_kfifo(r);
+ size_t avail;
+
+ mutex_lock(&kf->user_lock);
+ avail = kfifo_avail(&kf->kf);
+ mutex_unlock(&kf->user_lock);
+
+ return avail;
+}
+
+static int iio_kfifo_remove_from(struct iio_buffer *r, void *data)
+{
+ int ret;
+ struct iio_kfifo *kf = iio_to_kfifo(r);
+
+ if (kfifo_size(&kf->kf) < r->bytes_per_datum)
+ return -EBUSY;
+
+ ret = kfifo_out(&kf->kf, data, r->bytes_per_datum);
+ if (ret != r->bytes_per_datum)
+ return -EBUSY;
+
+ wake_up_interruptible_poll(&r->pollq, POLLOUT | POLLWRNORM);
+
+ return 0;
+}
+
+static int iio_kfifo_write(struct iio_buffer *r, size_t n,
+ const char __user *buf)
+{
+ struct iio_kfifo *kf = iio_to_kfifo(r);
+ int ret, copied;
+
+ mutex_lock(&kf->user_lock);
+ if (!kfifo_initialized(&kf->kf) || n < kfifo_esize(&kf->kf))
+ ret = -EINVAL;
+ else
+ ret = kfifo_from_user(&kf->kf, buf, n, &copied);
+ mutex_unlock(&kf->user_lock);
+ if (ret)
+ return ret;
+
+ return copied;
+}
+
static const struct iio_buffer_access_funcs kfifo_access_funcs = {
.store_to = &iio_store_to_kfifo,
.read = &iio_read_kfifo,
.data_available = iio_kfifo_buf_data_available,
+ .remove_from = &iio_kfifo_remove_from,
+ .write = &iio_kfifo_write,
+ .space_available = &iio_kfifo_buf_space_available,
.request_update = &iio_request_update_kfifo,
.set_bytes_per_datum = &iio_set_bytes_per_datum_kfifo,
.set_length = &iio_set_length_kfifo,
--
2.17.1
From: Lars-Peter Clausen <[email protected]>
Some devices that want to make use of the DMA buffer might need to do
something special, like write a register when the buffer is enabled.
Extend the API to allow those drivers to provide their own buffer ops.
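For illustration, a driver using this could look roughly like below (a
sketch; the my_* names are invented, and only the iio_dma_buffer_ops
submit/abort hooks and the new allocator arguments come from this
patch):

	/* Device-specific submit/abort hooks; the new driver_data
	 * pointer is handed back through the queue.
	 */
	static int my_submit_block(struct iio_dma_buffer_queue *queue,
				   struct iio_dma_buffer_block *block)
	{
		struct my_state *st = queue->driver_data;

		my_reg_write(st, MY_REG_ARM, 1);
		return my_start_dma(st, block);
	}

	static void my_abort(struct iio_dma_buffer_queue *queue)
	{
		struct my_state *st = queue->driver_data;

		my_reg_write(st, MY_REG_ARM, 0);
	}

	static const struct iio_dma_buffer_ops my_dma_ops = {
		.submit = my_submit_block,
		.abort = my_abort,
	};

	static int my_setup(struct device *dev, struct my_state *st,
			    struct iio_dev *indio_dev)
	{
		struct iio_buffer *buffer;

		buffer = devm_iio_dmaengine_buffer_alloc(dev, "rx",
							 &my_dma_ops, st);
		if (IS_ERR(buffer))
			return PTR_ERR(buffer);

		iio_device_attach_buffer(indio_dev, buffer);
		return 0;
	}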
Signed-off-by: Lars-Peter Clausen <[email protected]>
Signed-off-by: Alexandru Ardelean <[email protected]>
---
drivers/iio/adc/adi-axi-adc.c | 2 +-
drivers/iio/buffer/industrialio-buffer-dma.c | 4 +++-
drivers/iio/buffer/industrialio-buffer-dmaengine.c | 14 ++++++++++----
include/linux/iio/buffer-dma.h | 5 ++++-
include/linux/iio/buffer-dmaengine.h | 4 +++-
5 files changed, 21 insertions(+), 8 deletions(-)
diff --git a/drivers/iio/adc/adi-axi-adc.c b/drivers/iio/adc/adi-axi-adc.c
index 74a6da35fd69..45ce97d1f41e 100644
--- a/drivers/iio/adc/adi-axi-adc.c
+++ b/drivers/iio/adc/adi-axi-adc.c
@@ -116,7 +116,7 @@ static int adi_axi_adc_config_dma_buffer(struct device *dev,
dma_name = "rx";
buffer = devm_iio_dmaengine_buffer_alloc(indio_dev->dev.parent,
- dma_name);
+ dma_name, NULL, NULL);
if (IS_ERR(buffer))
return PTR_ERR(buffer);
diff --git a/drivers/iio/buffer/industrialio-buffer-dma.c b/drivers/iio/buffer/industrialio-buffer-dma.c
index befb0a3d2def..57f2284a292f 100644
--- a/drivers/iio/buffer/industrialio-buffer-dma.c
+++ b/drivers/iio/buffer/industrialio-buffer-dma.c
@@ -883,13 +883,15 @@ EXPORT_SYMBOL_GPL(iio_dma_buffer_set_length);
* allocations are done from a memory region that can be accessed by the device.
*/
int iio_dma_buffer_init(struct iio_dma_buffer_queue *queue,
- struct device *dev, const struct iio_dma_buffer_ops *ops)
+ struct device *dev, const struct iio_dma_buffer_ops *ops,
+ void *driver_data)
{
iio_buffer_init(&queue->buffer);
queue->buffer.length = PAGE_SIZE;
queue->buffer.watermark = queue->buffer.length / 2;
queue->dev = dev;
queue->ops = ops;
+ queue->driver_data = driver_data;
INIT_LIST_HEAD(&queue->incoming);
INIT_LIST_HEAD(&queue->outgoing);
diff --git a/drivers/iio/buffer/industrialio-buffer-dmaengine.c b/drivers/iio/buffer/industrialio-buffer-dmaengine.c
index bb022922ec23..0736526b36ec 100644
--- a/drivers/iio/buffer/industrialio-buffer-dmaengine.c
+++ b/drivers/iio/buffer/industrialio-buffer-dmaengine.c
@@ -163,6 +163,8 @@ static const struct attribute *iio_dmaengine_buffer_attrs[] = {
* iio_dmaengine_buffer_alloc() - Allocate new buffer which uses DMAengine
* @dev: Parent device for the buffer
* @channel: DMA channel name, typically "rx".
+ * @ops: Custom iio_dma_buffer_ops, if NULL default ops will be used
+ * @driver_data: Driver data to be passed to custom iio_dma_buffer_ops
*
* This allocates a new IIO buffer which internally uses the DMAengine framework
* to perform its transfers. The parent device will be used to request the DMA
@@ -172,7 +174,8 @@ static const struct attribute *iio_dmaengine_buffer_attrs[] = {
* release it.
*/
static struct iio_buffer *iio_dmaengine_buffer_alloc(struct device *dev,
- const char *channel)
+ const char *channel, const struct iio_dma_buffer_ops *ops,
+ void *driver_data)
{
struct dmaengine_buffer *dmaengine_buffer;
unsigned int width, src_width, dest_width;
@@ -211,7 +214,7 @@ static struct iio_buffer *iio_dmaengine_buffer_alloc(struct device *dev,
dmaengine_buffer->max_size = dma_get_max_seg_size(chan->device->dev);
iio_dma_buffer_init(&dmaengine_buffer->queue, chan->device->dev,
- &iio_dmaengine_default_ops);
+ ops ? ops : &iio_dmaengine_default_ops, driver_data);
dmaengine_buffer->queue.buffer.attrs = iio_dmaengine_buffer_attrs;
dmaengine_buffer->queue.buffer.access = &iio_dmaengine_buffer_ops;
@@ -249,6 +252,8 @@ static void __devm_iio_dmaengine_buffer_free(struct device *dev, void *res)
* devm_iio_dmaengine_buffer_alloc() - Resource-managed iio_dmaengine_buffer_alloc()
* @dev: Parent device for the buffer
* @channel: DMA channel name, typically "rx".
+ * @ops: Custom iio_dma_buffer_ops, if NULL default ops will be used
+ * @driver_data: Driver data to be passed to custom iio_dma_buffer_ops
*
* This allocates a new IIO buffer which internally uses the DMAengine framework
* to perform its transfers. The parent device will be used to request the DMA
@@ -257,7 +262,8 @@ static void __devm_iio_dmaengine_buffer_free(struct device *dev, void *res)
* The buffer will be automatically de-allocated once the device gets destroyed.
*/
struct iio_buffer *devm_iio_dmaengine_buffer_alloc(struct device *dev,
- const char *channel)
+ const char *channel, const struct iio_dma_buffer_ops *ops,
+ void *driver_data)
{
struct iio_buffer **bufferp, *buffer;
@@ -266,7 +272,7 @@ struct iio_buffer *devm_iio_dmaengine_buffer_alloc(struct device *dev,
if (!bufferp)
return ERR_PTR(-ENOMEM);
- buffer = iio_dmaengine_buffer_alloc(dev, channel);
+ buffer = iio_dmaengine_buffer_alloc(dev, channel, ops, driver_data);
if (IS_ERR(buffer)) {
devres_free(bufferp);
return buffer;
diff --git a/include/linux/iio/buffer-dma.h b/include/linux/iio/buffer-dma.h
index 315a8d750986..c23fad847f0d 100644
--- a/include/linux/iio/buffer-dma.h
+++ b/include/linux/iio/buffer-dma.h
@@ -110,6 +110,8 @@ struct iio_dma_buffer_queue {
bool active;
+ void *driver_data;
+
unsigned int num_blocks;
struct iio_dma_buffer_block **blocks;
unsigned int max_offset;
@@ -144,7 +146,8 @@ int iio_dma_buffer_set_length(struct iio_buffer *buffer, unsigned int length);
int iio_dma_buffer_request_update(struct iio_buffer *buffer);
int iio_dma_buffer_init(struct iio_dma_buffer_queue *queue,
- struct device *dma_dev, const struct iio_dma_buffer_ops *ops);
+ struct device *dma_dev, const struct iio_dma_buffer_ops *ops,
+ void *driver_data);
void iio_dma_buffer_exit(struct iio_dma_buffer_queue *queue);
void iio_dma_buffer_release(struct iio_dma_buffer_queue *queue);
diff --git a/include/linux/iio/buffer-dmaengine.h b/include/linux/iio/buffer-dmaengine.h
index 5b502291d6a4..464adee95d4b 100644
--- a/include/linux/iio/buffer-dmaengine.h
+++ b/include/linux/iio/buffer-dmaengine.h
@@ -7,10 +7,12 @@
#ifndef __IIO_DMAENGINE_H__
#define __IIO_DMAENGINE_H__
+struct iio_dma_buffer_ops;
struct iio_buffer;
struct device;
struct iio_buffer *devm_iio_dmaengine_buffer_alloc(struct device *dev,
- const char *channel);
+ const char *channel, const struct iio_dma_buffer_ops *ops,
+ void *driver_data);
#endif
--
2.17.1
From: Lars-Peter Clausen <[email protected]>
Add support for output buffers to the dma buffer implementation.
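There is no in-tree user of the output path in this set yet, but for
illustration a DAC driver would request a TX buffer roughly like this
(a sketch; the my_* names are hypothetical):

	static int my_dac_setup_buffer(struct device *dev,
				       struct iio_dev *indio_dev)
	{
		struct iio_buffer *buffer;

		/* A NULL channel name now defaults to "tx" for output
		 * buffers (and "rx" for input ones).
		 */
		buffer = devm_iio_dmaengine_buffer_alloc(dev,
				IIO_BUFFER_DIRECTION_OUT, NULL, NULL, NULL);
		if (IS_ERR(buffer))
			return PTR_ERR(buffer);

		indio_dev->modes |= INDIO_BUFFER_HARDWARE;
		iio_device_attach_buffer(indio_dev, buffer);

		return 0;
	}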
Signed-off-by: Lars-Peter Clausen <[email protected]>
Signed-off-by: Alexandru Ardelean <[email protected]>
---
drivers/iio/adc/adi-axi-adc.c | 3 +-
drivers/iio/buffer/industrialio-buffer-dma.c | 116 ++++++++++++++++--
.../buffer/industrialio-buffer-dmaengine.c | 31 +++--
include/linux/iio/buffer-dma.h | 6 +
include/linux/iio/buffer-dmaengine.h | 7 +-
5 files changed, 144 insertions(+), 19 deletions(-)
diff --git a/drivers/iio/adc/adi-axi-adc.c b/drivers/iio/adc/adi-axi-adc.c
index 45ce97d1f41e..d088ab77ba5c 100644
--- a/drivers/iio/adc/adi-axi-adc.c
+++ b/drivers/iio/adc/adi-axi-adc.c
@@ -106,6 +106,7 @@ static unsigned int adi_axi_adc_read(struct adi_axi_adc_state *st,
static int adi_axi_adc_config_dma_buffer(struct device *dev,
struct iio_dev *indio_dev)
{
+ enum iio_buffer_direction dir = IIO_BUFFER_DIRECTION_IN;
struct iio_buffer *buffer;
const char *dma_name;
@@ -115,7 +116,7 @@ static int adi_axi_adc_config_dma_buffer(struct device *dev,
if (device_property_read_string(dev, "dma-names", &dma_name))
dma_name = "rx";
- buffer = devm_iio_dmaengine_buffer_alloc(indio_dev->dev.parent,
+ buffer = devm_iio_dmaengine_buffer_alloc(indio_dev->dev.parent, dir,
dma_name, NULL, NULL);
if (IS_ERR(buffer))
return PTR_ERR(buffer);
diff --git a/drivers/iio/buffer/industrialio-buffer-dma.c b/drivers/iio/buffer/industrialio-buffer-dma.c
index 57f2284a292f..36e6e79d2e04 100644
--- a/drivers/iio/buffer/industrialio-buffer-dma.c
+++ b/drivers/iio/buffer/industrialio-buffer-dma.c
@@ -223,7 +223,8 @@ void iio_dma_buffer_block_done(struct iio_dma_buffer_block *block)
spin_unlock_irqrestore(&queue->list_lock, flags);
iio_buffer_block_put_atomic(block);
- wake_up_interruptible_poll(&queue->buffer.pollq, EPOLLIN | EPOLLRDNORM);
+ wake_up_interruptible_poll(&queue->buffer.pollq,
+ (uintptr_t)queue->poll_wakeup_flags);
}
EXPORT_SYMBOL_GPL(iio_dma_buffer_block_done);
@@ -252,7 +253,8 @@ void iio_dma_buffer_block_list_abort(struct iio_dma_buffer_queue *queue,
}
spin_unlock_irqrestore(&queue->list_lock, flags);
- wake_up_interruptible_poll(&queue->buffer.pollq, EPOLLIN | EPOLLRDNORM);
+ wake_up_interruptible_poll(&queue->buffer.pollq,
+ (uintptr_t)queue->poll_wakeup_flags);
}
EXPORT_SYMBOL_GPL(iio_dma_buffer_block_list_abort);
@@ -353,9 +355,6 @@ int iio_dma_buffer_request_update(struct iio_buffer *buffer)
}
block->block.id = i;
-
- block->state = IIO_BLOCK_STATE_QUEUED;
- list_add_tail(&block->head, &queue->incoming);
}
out_unlock:
@@ -437,7 +436,29 @@ int iio_dma_buffer_enable(struct iio_buffer *buffer,
struct iio_dma_buffer_block *block, *_block;
mutex_lock(&queue->lock);
+
+ if (buffer->direction == IIO_BUFFER_DIRECTION_IN)
+ queue->poll_wakeup_flags = POLLIN | POLLRDNORM;
+ else
+ queue->poll_wakeup_flags = POLLOUT | POLLWRNORM;
+
queue->fileio.enabled = !queue->num_blocks;
+ if (queue->fileio.enabled) {
+ unsigned int i;
+
+ for (i = 0; i < ARRAY_SIZE(queue->fileio.blocks); i++) {
+ struct iio_dma_buffer_block *block =
+ queue->fileio.blocks[i];
+ if (buffer->direction == IIO_BUFFER_DIRECTION_IN) {
+ block->state = IIO_BLOCK_STATE_QUEUED;
+ list_add_tail(&block->head, &queue->incoming);
+ } else {
+ block->state = IIO_BLOCK_STATE_DEQUEUED;
+ list_add_tail(&block->head, &queue->outgoing);
+ }
+ }
+ }
+
queue->active = true;
list_for_each_entry_safe(block, _block, &queue->incoming, head) {
list_del(&block->head);
@@ -567,6 +588,61 @@ int iio_dma_buffer_read(struct iio_buffer *buffer, size_t n,
}
EXPORT_SYMBOL_GPL(iio_dma_buffer_read);
+int iio_dma_buffer_write(struct iio_buffer *buf, size_t n,
+ const char __user *user_buffer)
+{
+ struct iio_dma_buffer_queue *queue = iio_buffer_to_queue(buf);
+ struct iio_dma_buffer_block *block;
+ int ret;
+
+ if (n < buf->bytes_per_datum)
+ return -EINVAL;
+
+ mutex_lock(&queue->lock);
+
+ if (!queue->fileio.enabled) {
+ ret = -EBUSY;
+ goto out_unlock;
+ }
+
+ if (!queue->fileio.active_block) {
+ block = iio_dma_buffer_dequeue(queue);
+ if (block == NULL) {
+ ret = 0;
+ goto out_unlock;
+ }
+ queue->fileio.pos = 0;
+ queue->fileio.active_block = block;
+ } else {
+ block = queue->fileio.active_block;
+ }
+
+ n = rounddown(n, buf->bytes_per_datum);
+ if (n > block->block.size - queue->fileio.pos)
+ n = block->block.size - queue->fileio.pos;
+
+ if (copy_from_user(block->vaddr + queue->fileio.pos, user_buffer, n)) {
+ ret = -EFAULT;
+ goto out_unlock;
+ }
+
+ queue->fileio.pos += n;
+
+ if (queue->fileio.pos == block->block.size) {
+ queue->fileio.active_block = NULL;
+ block->block.bytes_used = block->block.size;
+ iio_dma_buffer_enqueue(queue, block);
+ }
+
+ ret = n;
+
+out_unlock:
+ mutex_unlock(&queue->lock);
+
+ return ret;
+}
+EXPORT_SYMBOL_GPL(iio_dma_buffer_write);
+
/**
* iio_dma_buffer_data_available() - DMA buffer data_available callback
* @buf: Buffer to check for data availability
@@ -588,12 +664,14 @@ size_t iio_dma_buffer_data_available(struct iio_buffer *buf)
*/
mutex_lock(&queue->lock);
- if (queue->fileio.active_block)
- data_available += queue->fileio.active_block->block.size;
+ if (queue->fileio.active_block) {
+ data_available += queue->fileio.active_block->block.bytes_used -
+ queue->fileio.pos;
+ }
spin_lock_irq(&queue->list_lock);
list_for_each_entry(block, &queue->outgoing, head)
- data_available += block->block.size;
+ data_available += block->block.bytes_used;
spin_unlock_irq(&queue->list_lock);
mutex_unlock(&queue->lock);
@@ -601,6 +679,28 @@ size_t iio_dma_buffer_data_available(struct iio_buffer *buf)
}
EXPORT_SYMBOL_GPL(iio_dma_buffer_data_available);
+size_t iio_dma_buffer_space_available(struct iio_buffer *buf)
+{
+ struct iio_dma_buffer_queue *queue = iio_buffer_to_queue(buf);
+ struct iio_dma_buffer_block *block;
+ size_t space_available = 0;
+
+ mutex_lock(&queue->lock);
+ if (queue->fileio.active_block) {
+ space_available += queue->fileio.active_block->block.size -
+ queue->fileio.pos;
+ }
+
+ spin_lock_irq(&queue->list_lock);
+ list_for_each_entry(block, &queue->outgoing, head)
+ space_available += block->block.size;
+ spin_unlock_irq(&queue->list_lock);
+ mutex_unlock(&queue->lock);
+
+ return space_available;
+}
+EXPORT_SYMBOL_GPL(iio_dma_buffer_space_available);
+
int iio_dma_buffer_alloc_blocks(struct iio_buffer *buffer,
struct iio_buffer_block_alloc_req *req)
{
diff --git a/drivers/iio/buffer/industrialio-buffer-dmaengine.c b/drivers/iio/buffer/industrialio-buffer-dmaengine.c
index 0736526b36ec..013cc7c1ecf4 100644
--- a/drivers/iio/buffer/industrialio-buffer-dmaengine.c
+++ b/drivers/iio/buffer/industrialio-buffer-dmaengine.c
@@ -37,6 +37,8 @@ struct dmaengine_buffer {
size_t align;
size_t max_size;
+
+ bool is_tx;
};
static struct dmaengine_buffer *iio_buffer_to_dmaengine_buffer(
@@ -64,9 +66,12 @@ static int iio_dmaengine_buffer_submit_block(struct iio_dma_buffer_queue *queue,
struct dmaengine_buffer *dmaengine_buffer =
iio_buffer_to_dmaengine_buffer(&queue->buffer);
struct dma_async_tx_descriptor *desc;
+ enum dma_transfer_direction direction;
dma_cookie_t cookie;
- block->block.bytes_used = min(block->block.size,
+ if (!dmaengine_buffer->is_tx)
+ block->block.bytes_used = block->block.size;
+ block->block.bytes_used = min(block->block.bytes_used,
dmaengine_buffer->max_size);
block->block.bytes_used = rounddown(block->block.bytes_used,
dmaengine_buffer->align);
@@ -75,8 +80,10 @@ static int iio_dmaengine_buffer_submit_block(struct iio_dma_buffer_queue *queue,
return 0;
}
+ direction = dmaengine_buffer->is_tx ? DMA_MEM_TO_DEV : DMA_DEV_TO_MEM;
+
desc = dmaengine_prep_slave_single(dmaengine_buffer->chan,
- block->phys_addr, block->block.bytes_used, DMA_DEV_TO_MEM,
+ block->phys_addr, block->block.bytes_used, direction,
DMA_PREP_INTERRUPT);
if (!desc)
return -ENOMEM;
@@ -117,12 +124,14 @@ static void iio_dmaengine_buffer_release(struct iio_buffer *buf)
static const struct iio_buffer_access_funcs iio_dmaengine_buffer_ops = {
.read = iio_dma_buffer_read,
+ .write = iio_dma_buffer_write,
.set_bytes_per_datum = iio_dma_buffer_set_bytes_per_datum,
.set_length = iio_dma_buffer_set_length,
.request_update = iio_dma_buffer_request_update,
.enable = iio_dma_buffer_enable,
.disable = iio_dma_buffer_disable,
.data_available = iio_dma_buffer_data_available,
+ .space_available = iio_dma_buffer_space_available,
.release = iio_dmaengine_buffer_release,
.alloc_blocks = iio_dma_buffer_alloc_blocks,
@@ -162,6 +171,7 @@ static const struct attribute *iio_dmaengine_buffer_attrs[] = {
/**
* iio_dmaengine_buffer_alloc() - Allocate new buffer which uses DMAengine
* @dev: Parent device for the buffer
+ * @direction: Set the direction of the data.
* @channel: DMA channel name, typically "rx".
* @ops: Custom iio_dma_buffer_ops, if NULL default ops will be used
* @driver_data: Driver data to be passed to custom iio_dma_buffer_ops
@@ -174,11 +184,12 @@ static const struct attribute *iio_dmaengine_buffer_attrs[] = {
* release it.
*/
static struct iio_buffer *iio_dmaengine_buffer_alloc(struct device *dev,
- const char *channel, const struct iio_dma_buffer_ops *ops,
- void *driver_data)
+ enum iio_buffer_direction direction, const char *channel,
+ const struct iio_dma_buffer_ops *ops, void *driver_data)
{
struct dmaengine_buffer *dmaengine_buffer;
unsigned int width, src_width, dest_width;
+ bool is_tx = (direction == IIO_BUFFER_DIRECTION_OUT);
struct dma_slave_caps caps;
struct dma_chan *chan;
int ret;
@@ -187,6 +198,9 @@ static struct iio_buffer *iio_dmaengine_buffer_alloc(struct device *dev,
if (!dmaengine_buffer)
return ERR_PTR(-ENOMEM);
+ if (!channel)
+ channel = is_tx ? "tx" : "rx";
+
chan = dma_request_chan(dev, channel);
if (IS_ERR(chan)) {
ret = PTR_ERR(chan);
@@ -212,6 +226,7 @@ static struct iio_buffer *iio_dmaengine_buffer_alloc(struct device *dev,
dmaengine_buffer->chan = chan;
dmaengine_buffer->align = width;
dmaengine_buffer->max_size = dma_get_max_seg_size(chan->device->dev);
+ dmaengine_buffer->is_tx = is_tx;
iio_dma_buffer_init(&dmaengine_buffer->queue, chan->device->dev,
ops ? ops : &iio_dmaengine_default_ops, driver_data);
@@ -251,6 +266,7 @@ static void __devm_iio_dmaengine_buffer_free(struct device *dev, void *res)
/**
* devm_iio_dmaengine_buffer_alloc() - Resource-managed iio_dmaengine_buffer_alloc()
* @dev: Parent device for the buffer
+ * @direction: Set the direction of the data.
* @channel: DMA channel name, typically "rx".
* @ops: Custom iio_dma_buffer_ops, if NULL default ops will be used
* @driver_data: Driver data to be passed to custom iio_dma_buffer_ops
@@ -262,8 +278,8 @@ static void __devm_iio_dmaengine_buffer_free(struct device *dev, void *res)
* The buffer will be automatically de-allocated once the device gets destroyed.
*/
struct iio_buffer *devm_iio_dmaengine_buffer_alloc(struct device *dev,
- const char *channel, const struct iio_dma_buffer_ops *ops,
- void *driver_data)
+ enum iio_buffer_direction direction, const char *channel,
+ const struct iio_dma_buffer_ops *ops, void *driver_data)
{
struct iio_buffer **bufferp, *buffer;
@@ -272,7 +288,8 @@ struct iio_buffer *devm_iio_dmaengine_buffer_alloc(struct device *dev,
if (!bufferp)
return ERR_PTR(-ENOMEM);
- buffer = iio_dmaengine_buffer_alloc(dev, channel, ops, driver_data);
+ buffer = iio_dmaengine_buffer_alloc(dev, direction, channel, ops,
+ driver_data);
if (IS_ERR(buffer)) {
devres_free(bufferp);
return buffer;
diff --git a/include/linux/iio/buffer-dma.h b/include/linux/iio/buffer-dma.h
index c23fad847f0d..0fd844c7f47a 100644
--- a/include/linux/iio/buffer-dma.h
+++ b/include/linux/iio/buffer-dma.h
@@ -112,6 +112,8 @@ struct iio_dma_buffer_queue {
void *driver_data;
+ unsigned int poll_wakeup_flags;
+
unsigned int num_blocks;
struct iio_dma_buffer_block **blocks;
unsigned int max_offset;
@@ -145,6 +147,10 @@ int iio_dma_buffer_set_bytes_per_datum(struct iio_buffer *buffer, size_t bpd);
int iio_dma_buffer_set_length(struct iio_buffer *buffer, unsigned int length);
int iio_dma_buffer_request_update(struct iio_buffer *buffer);
+int iio_dma_buffer_write(struct iio_buffer *buf, size_t n,
+ const char __user *user_buffer);
+size_t iio_dma_buffer_space_available(struct iio_buffer *buf);
+
int iio_dma_buffer_init(struct iio_dma_buffer_queue *queue,
struct device *dma_dev, const struct iio_dma_buffer_ops *ops,
void *driver_data);
diff --git a/include/linux/iio/buffer-dmaengine.h b/include/linux/iio/buffer-dmaengine.h
index 464adee95d4b..009a601c406c 100644
--- a/include/linux/iio/buffer-dmaengine.h
+++ b/include/linux/iio/buffer-dmaengine.h
@@ -7,12 +7,13 @@
#ifndef __IIO_DMAENGINE_H__
#define __IIO_DMAENGINE_H__
+#include <linux/iio/buffer.h>
+
struct iio_dma_buffer_ops;
-struct iio_buffer;
struct device;
struct iio_buffer *devm_iio_dmaengine_buffer_alloc(struct device *dev,
- const char *channel, const struct iio_dma_buffer_ops *ops,
- void *driver_data);
+ enum iio_buffer_direction direction, const char *channel,
+ const struct iio_dma_buffer_ops *ops, void *driver_data);
#endif
--
2.17.1
From: Lars-Peter Clausen <[email protected]>
This change adds support for cyclic DMA transfers using the IIO buffer DMA
infrastructure.
To do this, userspace must set the IIO_BUFFER_BLOCK_FLAG_CYCLIC flag on a
block when enqueueing it via the ENQUEUE_BLOCK ioctl().
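For illustration, the userspace side would be roughly as below (a
sketch; the ioctl name and the block handling flow come from the mmap
support series [1] and should be treated as assumptions here):

	#include <sys/ioctl.h>
	#include <linux/iio/buffer.h>

	/* Enqueue an already mmap-ed and filled block as a cyclic
	 * transfer. The DMA replays the block until the buffer is
	 * disabled; note that no completion callback fires for it.
	 */
	static int enqueue_cyclic(int buf_fd, struct iio_buffer_block *block)
	{
		block->flags |= IIO_BUFFER_BLOCK_FLAG_CYCLIC;
		return ioctl(buf_fd, IIO_BLOCK_ENQUEUE_IOCTL, block);
	}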
Signed-off-by: Lars-Peter Clausen <[email protected]>
Signed-off-by: Alexandru Ardelean <[email protected]>
---
.../buffer/industrialio-buffer-dmaengine.c | 24 ++++++++++++-------
include/uapi/linux/iio/buffer.h | 1 +
2 files changed, 17 insertions(+), 8 deletions(-)
diff --git a/drivers/iio/buffer/industrialio-buffer-dmaengine.c b/drivers/iio/buffer/industrialio-buffer-dmaengine.c
index 013cc7c1ecf4..94c93a636ad4 100644
--- a/drivers/iio/buffer/industrialio-buffer-dmaengine.c
+++ b/drivers/iio/buffer/industrialio-buffer-dmaengine.c
@@ -82,14 +82,22 @@ static int iio_dmaengine_buffer_submit_block(struct iio_dma_buffer_queue *queue,
direction = dmaengine_buffer->is_tx ? DMA_MEM_TO_DEV : DMA_DEV_TO_MEM;
- desc = dmaengine_prep_slave_single(dmaengine_buffer->chan,
- block->phys_addr, block->block.bytes_used, direction,
- DMA_PREP_INTERRUPT);
- if (!desc)
- return -ENOMEM;
-
- desc->callback_result = iio_dmaengine_buffer_block_done;
- desc->callback_param = block;
+ if (block->block.flags & IIO_BUFFER_BLOCK_FLAG_CYCLIC) {
+ desc = dmaengine_prep_dma_cyclic(dmaengine_buffer->chan,
+ block->phys_addr, block->block.bytes_used,
+ block->block.bytes_used, direction, 0);
+ if (!desc)
+ return -ENOMEM;
+ } else {
+ desc = dmaengine_prep_slave_single(dmaengine_buffer->chan,
+ block->phys_addr, block->block.bytes_used, direction,
+ DMA_PREP_INTERRUPT);
+ if (!desc)
+ return -ENOMEM;
+
+ desc->callback_result = iio_dmaengine_buffer_block_done;
+ desc->callback_param = block;
+ }
cookie = dmaengine_submit(desc);
if (dma_submit_error(cookie))
diff --git a/include/uapi/linux/iio/buffer.h b/include/uapi/linux/iio/buffer.h
index 70ad3aea01ea..0e0c95f1c38b 100644
--- a/include/uapi/linux/iio/buffer.h
+++ b/include/uapi/linux/iio/buffer.h
@@ -13,6 +13,7 @@ struct iio_buffer_block_alloc_req {
};
#define IIO_BUFFER_BLOCK_FLAG_TIMESTAMP_VALID (1 << 0)
+#define IIO_BUFFER_BLOCK_FLAG_CYCLIC (1 << 1)
struct iio_buffer_block {
__u32 id;
--
2.17.1
On Fri, 12 Feb 2021 12:20:17 +0200
Alexandru Ardelean <[email protected]> wrote:
> From: Lars-Peter Clausen <[email protected]>
>
> Currently IIO only supports buffer mode for capture devices like ADCs. Add
> support for buffered mode for output devices like DACs.
>
> The output buffer implementation is analogous to the input buffer
> implementation. Instead of using read() to get data from the buffer,
> write() is used to copy data into the buffer.
>
> poll() with POLLOUT will wake up if there is space available for at
> least the configured watermark of samples.
>
> Drivers can remove data from a buffer using iio_buffer_remove_sample();
> the function can e.g. be called from a trigger handler to write the
> data to the hardware.
>
> A buffer can only be either an output buffer or an input buffer, but
> not both. So, for a device that has both an ADC and a DAC path, this
> will mean two IIO buffers (one for each direction).
>
> The direction of the buffer is decided by the new direction field of the
> iio_buffer struct and should be set after allocating and before registering
> it.
>
> Signed-off-by: Lars-Peter Clausen <[email protected]>
> Signed-off-by: Alexandru Ardelean <[email protected]>
> ---
> drivers/iio/industrialio-buffer.c | 110 ++++++++++++++++++++++++++++--
> include/linux/iio/buffer.h | 7 ++
> include/linux/iio/buffer_impl.h | 11 +++
> 3 files changed, 124 insertions(+), 4 deletions(-)
>
> diff --git a/drivers/iio/industrialio-buffer.c b/drivers/iio/industrialio-buffer.c
> index a0d1ad86022f..6f4f5f5544f3 100644
> --- a/drivers/iio/industrialio-buffer.c
> +++ b/drivers/iio/industrialio-buffer.c
> @@ -162,6 +162,69 @@ static ssize_t iio_buffer_read(struct file *filp, char __user *buf,
> return ret;
> }
>
> +static size_t iio_buffer_space_available(struct iio_buffer *buf)
> +{
> + if (buf->access->space_available)
> + return buf->access->space_available(buf);
> +
> + return SIZE_MAX;
> +}
> +
> +static ssize_t iio_buffer_write(struct file *filp, const char __user *buf,
> + size_t n, loff_t *f_ps)
> +{
> + struct iio_dev_buffer_pair *ib = filp->private_data;
> + struct iio_buffer *rb = ib->buffer;
> + struct iio_dev *indio_dev = ib->indio_dev;
> + DEFINE_WAIT_FUNC(wait, woken_wake_function);
> + size_t datum_size;
> + size_t to_wait;
> + int ret;
> +
> + if (!rb || !rb->access->write)
> + return -EINVAL;
> +
> + datum_size = rb->bytes_per_datum;
> +
> + /*
> + * If datum_size is 0 there will never be anything to write to the
> + * buffer, so signal end of file now.
> + */
> + if (!datum_size)
> + return 0;
> +
> + if (filp->f_flags & O_NONBLOCK)
> + to_wait = 0;
> + else
> + to_wait = min_t(size_t, n / datum_size, rb->watermark);
> +
> + add_wait_queue(&rb->pollq, &wait);
> + do {
> + if (!indio_dev->info) {
> + ret = -ENODEV;
> + break;
> + }
> +
> + if (iio_buffer_space_available(rb) < to_wait) {
In the non-blocking case, we still hit here, but query whether less
than 0 space is available, which seems a bit pointless. In theory at
least, iio_buffer_space_available() might be expensive. Can we save on
that query?
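Something along these lines, perhaps (an untested sketch; only the
condition changes):

	/* to_wait is 0 in the non-blocking case, so skip the
	 * potentially expensive query entirely; the comparison
	 * can never be true then anyway.
	 */
	if (to_wait && iio_buffer_space_available(rb) < to_wait) {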
> + if (signal_pending(current)) {
> + ret = -ERESTARTSYS;
> + break;
> + }
> +
> + wait_woken(&wait, TASK_INTERRUPTIBLE,
> + MAX_SCHEDULE_TIMEOUT);
> + continue;
> + }
> +
> + ret = rb->access->write(rb, n, buf);
> + if (ret == 0 && (filp->f_flags & O_NONBLOCK))
> + ret = -EAGAIN;
> + } while (ret == 0);
> + remove_wait_queue(&rb->pollq, &wait);
> +
> + return ret;
> +}
> +
> /**
> * iio_buffer_poll() - poll the buffer to find out if it has data
> * @filp: File structure pointer for device access
> @@ -182,8 +245,19 @@ static __poll_t iio_buffer_poll(struct file *filp,
> return 0;
>
> poll_wait(filp, &rb->pollq, wait);
> - if (iio_buffer_ready(indio_dev, rb, rb->watermark, 0))
> - return EPOLLIN | EPOLLRDNORM;
> +
> + switch (rb->direction) {
> + case IIO_BUFFER_DIRECTION_IN:
> + if (iio_buffer_ready(indio_dev, rb, rb->watermark, 0))
> + return EPOLLIN | EPOLLRDNORM;
> + break;
> + case IIO_BUFFER_DIRECTION_OUT:
> + if (iio_buffer_space_available(rb) >= rb->watermark)
> + return EPOLLOUT | EPOLLWRNORM;
> + break;
> + }
> +
> + /* need a way of knowing if there may be enough data... */
> return 0;
> }
>
> @@ -232,6 +306,16 @@ void iio_buffer_wakeup_poll(struct iio_dev *indio_dev)
> }
> }
>
> +int iio_buffer_remove_sample(struct iio_buffer *buffer, u8 *data)
> +{
> + if (!buffer || !buffer->access)
> + return -EINVAL;
> + if (!buffer->access->remove_from)
> + return -ENOSYS;
> + return buffer->access->remove_from(buffer, data);
> +}
> +EXPORT_SYMBOL_GPL(iio_buffer_remove_sample);
> +
> void iio_buffer_init(struct iio_buffer *buffer)
> {
> INIT_LIST_HEAD(&buffer->demux_list);
> @@ -803,6 +887,8 @@ static int iio_verify_update(struct iio_dev *indio_dev,
> }
>
> if (insert_buffer) {
> + if (insert_buffer->direction == IIO_BUFFER_DIRECTION_OUT)
> + strict_scanmask = true;
> bitmap_or(compound_mask, compound_mask,
> insert_buffer->scan_mask, indio_dev->masklength);
> scan_timestamp |= insert_buffer->scan_timestamp;
> @@ -945,6 +1031,8 @@ static int iio_update_demux(struct iio_dev *indio_dev)
> int ret;
>
> list_for_each_entry(buffer, &iio_dev_opaque->buffer_list, buffer_list) {
> + if (buffer->direction == IIO_BUFFER_DIRECTION_OUT)
> + continue;
> ret = iio_buffer_update_demux(indio_dev, buffer);
> if (ret < 0)
> goto error_clear_mux_table;
> @@ -1155,6 +1243,11 @@ int iio_update_buffers(struct iio_dev *indio_dev,
> mutex_lock(&indio_dev->info_exist_lock);
> mutex_lock(&indio_dev->mlock);
>
> + if (insert_buffer && insert_buffer->direction == IIO_BUFFER_DIRECTION_OUT) {
> + ret = -EINVAL;
> + goto out_unlock;
> + }
> +
> if (insert_buffer && iio_buffer_is_active(insert_buffer))
> insert_buffer = NULL;
>
> @@ -1400,6 +1493,7 @@ static const struct file_operations iio_buffer_chrdev_fileops = {
> .owner = THIS_MODULE,
> .llseek = noop_llseek,
> .read = iio_buffer_read,
> + .write = iio_buffer_write,
> .poll = iio_buffer_poll,
> .unlocked_ioctl = iio_buffer_ioctl,
> .compat_ioctl = compat_ptr_ioctl,
> @@ -1914,8 +2008,16 @@ static int iio_buffer_mmap(struct file *filep, struct vm_area_struct *vma)
> if (!(vma->vm_flags & VM_SHARED))
> return -EINVAL;
>
> - if (!(vma->vm_flags & VM_READ))
> - return -EINVAL;
> + switch (buffer->direction) {
> + case IIO_BUFFER_DIRECTION_IN:
> + if (!(vma->vm_flags & VM_READ))
> + return -EINVAL;
> + break;
> + case IIO_BUFFER_DIRECTION_OUT:
> + if (!(vma->vm_flags & VM_WRITE))
> + return -EINVAL;
> + break;
> + }
>
> return buffer->access->mmap(buffer, vma);
> }
> diff --git a/include/linux/iio/buffer.h b/include/linux/iio/buffer.h
> index b6928ac5c63d..e87b8773253d 100644
> --- a/include/linux/iio/buffer.h
> +++ b/include/linux/iio/buffer.h
> @@ -11,8 +11,15 @@
>
> struct iio_buffer;
>
> +enum iio_buffer_direction {
> + IIO_BUFFER_DIRECTION_IN,
> + IIO_BUFFER_DIRECTION_OUT,
> +};
> +
> int iio_push_to_buffers(struct iio_dev *indio_dev, const void *data);
>
> +int iio_buffer_remove_sample(struct iio_buffer *buffer, u8 *data);
> +
> /**
> * iio_push_to_buffers_with_timestamp() - push data and timestamp to buffers
> * @indio_dev: iio_dev structure for device.
> diff --git a/include/linux/iio/buffer_impl.h b/include/linux/iio/buffer_impl.h
> index 1d57dc7ccb4f..47bdbf4a4519 100644
> --- a/include/linux/iio/buffer_impl.h
> +++ b/include/linux/iio/buffer_impl.h
> @@ -7,6 +7,7 @@
> #ifdef CONFIG_IIO_BUFFER
>
> #include <uapi/linux/iio/buffer.h>
> +#include <linux/iio/buffer.h>
>
> struct iio_dev;
> struct iio_buffer;
> @@ -23,6 +24,10 @@ struct iio_buffer;
> * @read: try to get a specified number of bytes (must exist)
> * @data_available: indicates how much data is available for reading from
> * the buffer.
> + * @remove_from: remove a sample from the buffer. Drivers can call this
> + * e.g. from a trigger handler to write the data to the hardware.
> + * @write: try to write a number of bytes
> + * @space_available: returns the number of bytes available in a buffer
> * @request_update: if a parameter change has been marked, update underlying
> * storage.
> * @set_bytes_per_datum:set number of bytes per datum
> @@ -61,6 +66,9 @@ struct iio_buffer_access_funcs {
> int (*store_to)(struct iio_buffer *buffer, const void *data);
> int (*read)(struct iio_buffer *buffer, size_t n, char __user *buf);
> size_t (*data_available)(struct iio_buffer *buffer);
> + int (*remove_from)(struct iio_buffer *buffer, void *data);
> + int (*write)(struct iio_buffer *buffer, size_t n, const char __user *buf);
> + size_t (*space_available)(struct iio_buffer *buffer);
>
> int (*request_update)(struct iio_buffer *buffer);
>
> @@ -103,6 +111,9 @@ struct iio_buffer {
> /** @bytes_per_datum: Size of individual datum including timestamp. */
> size_t bytes_per_datum;
>
> + /** @direction: Direction of the data stream (in/out). */
> + enum iio_buffer_direction direction;
> +
> /**
> * @access: Buffer access functions associated with the
> * implementation.
On Fri, 12 Feb 2021 12:20:18 +0200
Alexandru Ardelean <[email protected]> wrote:
> From: Lars-Peter Clausen <[email protected]>
>
> Add output buffer support to the kfifo buffer implementation.
>
> The implementation is straightforward and mostly just wraps the kfifo
> API to provide the required operations.
>
> Signed-off-by: Lars-Peter Clausen <[email protected]>
> Signed-off-by: Alexandru Ardelean <[email protected]>
Nice. For some reason I thought it would be more complex than this :)
Jonathan
> ---
> drivers/iio/buffer/kfifo_buf.c | 50 ++++++++++++++++++++++++++++++++++
> 1 file changed, 50 insertions(+)
>
> diff --git a/drivers/iio/buffer/kfifo_buf.c b/drivers/iio/buffer/kfifo_buf.c
> index 1359abed3b31..6e055176f969 100644
> --- a/drivers/iio/buffer/kfifo_buf.c
> +++ b/drivers/iio/buffer/kfifo_buf.c
> @@ -138,10 +138,60 @@ static void iio_kfifo_buffer_release(struct iio_buffer *buffer)
> kfree(kf);
> }
>
> +static size_t iio_kfifo_buf_space_available(struct iio_buffer *r)
> +{
> + struct iio_kfifo *kf = iio_to_kfifo(r);
> + size_t avail;
> +
> + mutex_lock(&kf->user_lock);
> + avail = kfifo_avail(&kf->kf);
> + mutex_unlock(&kf->user_lock);
> +
> + return avail;
> +}
> +
> +static int iio_kfifo_remove_from(struct iio_buffer *r, void *data)
> +{
> + int ret;
> + struct iio_kfifo *kf = iio_to_kfifo(r);
> +
> + if (kfifo_size(&kf->kf) < r->bytes_per_datum)
> + return -EBUSY;
> +
> + ret = kfifo_out(&kf->kf, data, r->bytes_per_datum);
> + if (ret != r->bytes_per_datum)
> + return -EBUSY;
> +
> + wake_up_interruptible_poll(&r->pollq, POLLOUT | POLLWRNORM);
> +
> + return 0;
> +}
> +
> +static int iio_kfifo_write(struct iio_buffer *r, size_t n,
> + const char __user *buf)
> +{
> + struct iio_kfifo *kf = iio_to_kfifo(r);
> + int ret, copied;
> +
> + mutex_lock(&kf->user_lock);
> + if (!kfifo_initialized(&kf->kf) || n < kfifo_esize(&kf->kf))
> + ret = -EINVAL;
> + else
> + ret = kfifo_from_user(&kf->kf, buf, n, &copied);
> + mutex_unlock(&kf->user_lock);
> + if (ret)
> + return ret;
> +
> + return copied;
> +}
> +
> static const struct iio_buffer_access_funcs kfifo_access_funcs = {
> .store_to = &iio_store_to_kfifo,
> .read = &iio_read_kfifo,
> .data_available = iio_kfifo_buf_data_available,
> + .remove_from = &iio_kfifo_remove_from,
> + .write = &iio_kfifo_write,
> + .space_available = &iio_kfifo_buf_space_available,
> .request_update = &iio_request_update_kfifo,
> .set_bytes_per_datum = &iio_set_bytes_per_datum_kfifo,
> .set_length = &iio_set_length_kfifo,
On Fri, 12 Feb 2021 12:20:20 +0200
Alexandru Ardelean <[email protected]> wrote:
> From: Lars-Peter Clausen <[email protected]>
>
> Add support for output buffers to the dma buffer implementation.
OK, this looks reasonable, but...
Examples of use? This is a bunch of infrastructure not yet used by any
drivers. Given the amount of time the DMA buffers sat unused before,
I'd definitely like to see some users of each of the buffer types
introduced.
Jonathan
>
> Signed-off-by: Lars-Peter Clausen <[email protected]>
> Signed-off-by: Alexandru Ardelean <[email protected]>
> ---
> drivers/iio/adc/adi-axi-adc.c | 3 +-
> drivers/iio/buffer/industrialio-buffer-dma.c | 116 ++++++++++++++++--
> .../buffer/industrialio-buffer-dmaengine.c | 31 +++--
> include/linux/iio/buffer-dma.h | 6 +
> include/linux/iio/buffer-dmaengine.h | 7 +-
> 5 files changed, 144 insertions(+), 19 deletions(-)
>
> diff --git a/drivers/iio/adc/adi-axi-adc.c b/drivers/iio/adc/adi-axi-adc.c
> index 45ce97d1f41e..d088ab77ba5c 100644
> --- a/drivers/iio/adc/adi-axi-adc.c
> +++ b/drivers/iio/adc/adi-axi-adc.c
> @@ -106,6 +106,7 @@ static unsigned int adi_axi_adc_read(struct adi_axi_adc_state *st,
> static int adi_axi_adc_config_dma_buffer(struct device *dev,
> struct iio_dev *indio_dev)
> {
> + enum iio_buffer_direction dir = IIO_BUFFER_DIRECTION_IN;
> struct iio_buffer *buffer;
> const char *dma_name;
>
> @@ -115,7 +116,7 @@ static int adi_axi_adc_config_dma_buffer(struct device *dev,
> if (device_property_read_string(dev, "dma-names", &dma_name))
> dma_name = "rx";
>
> - buffer = devm_iio_dmaengine_buffer_alloc(indio_dev->dev.parent,
> + buffer = devm_iio_dmaengine_buffer_alloc(indio_dev->dev.parent, dir,
> dma_name, NULL, NULL);
> if (IS_ERR(buffer))
> return PTR_ERR(buffer);
> diff --git a/drivers/iio/buffer/industrialio-buffer-dma.c b/drivers/iio/buffer/industrialio-buffer-dma.c
> index 57f2284a292f..36e6e79d2e04 100644
> --- a/drivers/iio/buffer/industrialio-buffer-dma.c
> +++ b/drivers/iio/buffer/industrialio-buffer-dma.c
> @@ -223,7 +223,8 @@ void iio_dma_buffer_block_done(struct iio_dma_buffer_block *block)
> spin_unlock_irqrestore(&queue->list_lock, flags);
>
> iio_buffer_block_put_atomic(block);
> - wake_up_interruptible_poll(&queue->buffer.pollq, EPOLLIN | EPOLLRDNORM);
> + wake_up_interruptible_poll(&queue->buffer.pollq,
> + (uintptr_t)queue->poll_wakeup_flags);
> }
> EXPORT_SYMBOL_GPL(iio_dma_buffer_block_done);
>
> @@ -252,7 +253,8 @@ void iio_dma_buffer_block_list_abort(struct iio_dma_buffer_queue *queue,
> }
> spin_unlock_irqrestore(&queue->list_lock, flags);
>
> - wake_up_interruptible_poll(&queue->buffer.pollq, EPOLLIN | EPOLLRDNORM);
> + wake_up_interruptible_poll(&queue->buffer.pollq,
> + (uintptr_t)queue->poll_wakeup_flags);
> }
> EXPORT_SYMBOL_GPL(iio_dma_buffer_block_list_abort);
>
> @@ -353,9 +355,6 @@ int iio_dma_buffer_request_update(struct iio_buffer *buffer)
> }
>
> block->block.id = i;
> -
> - block->state = IIO_BLOCK_STATE_QUEUED;
> - list_add_tail(&block->head, &queue->incoming);
> }
>
> out_unlock:
> @@ -437,7 +436,29 @@ int iio_dma_buffer_enable(struct iio_buffer *buffer,
> struct iio_dma_buffer_block *block, *_block;
>
> mutex_lock(&queue->lock);
> +
> + if (buffer->direction == IIO_BUFFER_DIRECTION_IN)
> + queue->poll_wakeup_flags = POLLIN | POLLRDNORM;
> + else
> + queue->poll_wakeup_flags = POLLOUT | POLLWRNORM;
> +
> queue->fileio.enabled = !queue->num_blocks;
> + if (queue->fileio.enabled) {
> + unsigned int i;
> +
> + for (i = 0; i < ARRAY_SIZE(queue->fileio.blocks); i++) {
> + struct iio_dma_buffer_block *block =
> + queue->fileio.blocks[i];
> + if (buffer->direction == IIO_BUFFER_DIRECTION_IN) {
> + block->state = IIO_BLOCK_STATE_QUEUED;
> + list_add_tail(&block->head, &queue->incoming);
> + } else {
> + block->state = IIO_BLOCK_STATE_DEQUEUED;
> + list_add_tail(&block->head, &queue->outgoing);
> + }
> + }
> + }
> +
> queue->active = true;
> list_for_each_entry_safe(block, _block, &queue->incoming, head) {
> list_del(&block->head);
> @@ -567,6 +588,61 @@ int iio_dma_buffer_read(struct iio_buffer *buffer, size_t n,
> }
> EXPORT_SYMBOL_GPL(iio_dma_buffer_read);
>
> +int iio_dma_buffer_write(struct iio_buffer *buf, size_t n,
> + const char __user *user_buffer)
> +{
> + struct iio_dma_buffer_queue *queue = iio_buffer_to_queue(buf);
> + struct iio_dma_buffer_block *block;
> + int ret;
> +
> + if (n < buf->bytes_per_datum)
> + return -EINVAL;
> +
> + mutex_lock(&queue->lock);
> +
> + if (!queue->fileio.enabled) {
> + ret = -EBUSY;
> + goto out_unlock;
> + }
> +
> + if (!queue->fileio.active_block) {
> + block = iio_dma_buffer_dequeue(queue);
> + if (block == NULL) {
> + ret = 0;
> + goto out_unlock;
> + }
> + queue->fileio.pos = 0;
> + queue->fileio.active_block = block;
> + } else {
> + block = queue->fileio.active_block;
> + }
> +
> + n = rounddown(n, buf->bytes_per_datum);
> + if (n > block->block.size - queue->fileio.pos)
> + n = block->block.size - queue->fileio.pos;
> +
> + if (copy_from_user(block->vaddr + queue->fileio.pos, user_buffer, n)) {
> + ret = -EFAULT;
> + goto out_unlock;
> + }
> +
> + queue->fileio.pos += n;
> +
> + if (queue->fileio.pos == block->block.size) {
> + queue->fileio.active_block = NULL;
> + block->block.bytes_used = block->block.size;
> + iio_dma_buffer_enqueue(queue, block);
> + }
> +
> + ret = n;
> +
> +out_unlock:
> + mutex_unlock(&queue->lock);
> +
> + return ret;
> +}
> +EXPORT_SYMBOL_GPL(iio_dma_buffer_write);
> +
> /**
> * iio_dma_buffer_data_available() - DMA buffer data_available callback
> * @buf: Buffer to check for data availability
> @@ -588,12 +664,14 @@ size_t iio_dma_buffer_data_available(struct iio_buffer *buf)
> */
>
> mutex_lock(&queue->lock);
> - if (queue->fileio.active_block)
> - data_available += queue->fileio.active_block->block.size;
> + if (queue->fileio.active_block) {
> + data_available += queue->fileio.active_block->block.bytes_used -
> + queue->fileio.pos;
> + }
>
> spin_lock_irq(&queue->list_lock);
> list_for_each_entry(block, &queue->outgoing, head)
> - data_available += block->block.size;
> + data_available += block->block.bytes_used;
> spin_unlock_irq(&queue->list_lock);
> mutex_unlock(&queue->lock);
>
> @@ -601,6 +679,28 @@ size_t iio_dma_buffer_data_available(struct iio_buffer *buf)
> }
> EXPORT_SYMBOL_GPL(iio_dma_buffer_data_available);
>
> +size_t iio_dma_buffer_space_available(struct iio_buffer *buf)
> +{
> + struct iio_dma_buffer_queue *queue = iio_buffer_to_queue(buf);
> + struct iio_dma_buffer_block *block;
> + size_t space_available = 0;
> +
> + mutex_lock(&queue->lock);
> + if (queue->fileio.active_block) {
> + space_available += queue->fileio.active_block->block.size -
> + queue->fileio.pos;
> + }
> +
> + spin_lock_irq(&queue->list_lock);
> + list_for_each_entry(block, &queue->outgoing, head)
> + space_available += block->block.size;
> + spin_unlock_irq(&queue->list_lock);
> + mutex_unlock(&queue->lock);
> +
> + return space_available;
> +}
> +EXPORT_SYMBOL_GPL(iio_dma_buffer_space_available);
> +
> int iio_dma_buffer_alloc_blocks(struct iio_buffer *buffer,
> struct iio_buffer_block_alloc_req *req)
> {
> diff --git a/drivers/iio/buffer/industrialio-buffer-dmaengine.c b/drivers/iio/buffer/industrialio-buffer-dmaengine.c
> index 0736526b36ec..013cc7c1ecf4 100644
> --- a/drivers/iio/buffer/industrialio-buffer-dmaengine.c
> +++ b/drivers/iio/buffer/industrialio-buffer-dmaengine.c
> @@ -37,6 +37,8 @@ struct dmaengine_buffer {
>
> size_t align;
> size_t max_size;
> +
> + bool is_tx;
> };
>
> static struct dmaengine_buffer *iio_buffer_to_dmaengine_buffer(
> @@ -64,9 +66,12 @@ static int iio_dmaengine_buffer_submit_block(struct iio_dma_buffer_queue *queue,
> struct dmaengine_buffer *dmaengine_buffer =
> iio_buffer_to_dmaengine_buffer(&queue->buffer);
> struct dma_async_tx_descriptor *desc;
> + enum dma_transfer_direction direction;
> dma_cookie_t cookie;
>
> - block->block.bytes_used = min(block->block.size,
> + if (!dmaengine_buffer->is_tx)
> + block->block.bytes_used = block->block.size;
> + block->block.bytes_used = min(block->block.bytes_used,
> dmaengine_buffer->max_size);
> block->block.bytes_used = rounddown(block->block.bytes_used,
> dmaengine_buffer->align);
> @@ -75,8 +80,10 @@ static int iio_dmaengine_buffer_submit_block(struct iio_dma_buffer_queue *queue,
> return 0;
> }
>
> + direction = dmaengine_buffer->is_tx ? DMA_MEM_TO_DEV : DMA_DEV_TO_MEM;
> +
> desc = dmaengine_prep_slave_single(dmaengine_buffer->chan,
> - block->phys_addr, block->block.bytes_used, DMA_DEV_TO_MEM,
> + block->phys_addr, block->block.bytes_used, direction,
> DMA_PREP_INTERRUPT);
> if (!desc)
> return -ENOMEM;
> @@ -117,12 +124,14 @@ static void iio_dmaengine_buffer_release(struct iio_buffer *buf)
>
> static const struct iio_buffer_access_funcs iio_dmaengine_buffer_ops = {
> .read = iio_dma_buffer_read,
> + .write = iio_dma_buffer_write,
> .set_bytes_per_datum = iio_dma_buffer_set_bytes_per_datum,
> .set_length = iio_dma_buffer_set_length,
> .request_update = iio_dma_buffer_request_update,
> .enable = iio_dma_buffer_enable,
> .disable = iio_dma_buffer_disable,
> .data_available = iio_dma_buffer_data_available,
> + .space_available = iio_dma_buffer_space_available,
> .release = iio_dmaengine_buffer_release,
>
> .alloc_blocks = iio_dma_buffer_alloc_blocks,
> @@ -162,6 +171,7 @@ static const struct attribute *iio_dmaengine_buffer_attrs[] = {
> /**
> * iio_dmaengine_buffer_alloc() - Allocate new buffer which uses DMAengine
> * @dev: Parent device for the buffer
> + * @direction: Set the direction of the data.
> * @channel: DMA channel name, typically "rx".
> * @ops: Custom iio_dma_buffer_ops, if NULL default ops will be used
> * @driver_data: Driver data to be passed to custom iio_dma_buffer_ops
> @@ -174,11 +184,12 @@ static const struct attribute *iio_dmaengine_buffer_attrs[] = {
> * release it.
> */
> static struct iio_buffer *iio_dmaengine_buffer_alloc(struct device *dev,
> - const char *channel, const struct iio_dma_buffer_ops *ops,
> - void *driver_data)
> + enum iio_buffer_direction direction, const char *channel,
> + const struct iio_dma_buffer_ops *ops, void *driver_data)
> {
> struct dmaengine_buffer *dmaengine_buffer;
> unsigned int width, src_width, dest_width;
> + bool is_tx = (direction == IIO_BUFFER_DIRECTION_OUT);
> struct dma_slave_caps caps;
> struct dma_chan *chan;
> int ret;
> @@ -187,6 +198,9 @@ static struct iio_buffer *iio_dmaengine_buffer_alloc(struct device *dev,
> if (!dmaengine_buffer)
> return ERR_PTR(-ENOMEM);
>
> + if (!channel)
> + channel = is_tx ? "tx" : "rx";
> +
> chan = dma_request_chan(dev, channel);
> if (IS_ERR(chan)) {
> ret = PTR_ERR(chan);
> @@ -212,6 +226,7 @@ static struct iio_buffer *iio_dmaengine_buffer_alloc(struct device *dev,
> dmaengine_buffer->chan = chan;
> dmaengine_buffer->align = width;
> dmaengine_buffer->max_size = dma_get_max_seg_size(chan->device->dev);
> + dmaengine_buffer->is_tx = is_tx;
>
> iio_dma_buffer_init(&dmaengine_buffer->queue, chan->device->dev,
> ops ? ops : &iio_dmaengine_default_ops, driver_data);
> @@ -251,6 +266,7 @@ static void __devm_iio_dmaengine_buffer_free(struct device *dev, void *res)
> /**
> * devm_iio_dmaengine_buffer_alloc() - Resource-managed iio_dmaengine_buffer_alloc()
> * @dev: Parent device for the buffer
> + * @direction: Set the direction of the data.
> * @channel: DMA channel name, typically "rx".
> * @ops: Custom iio_dma_buffer_ops, if NULL default ops will be used
> * @driver_data: Driver data to be passed to custom iio_dma_buffer_ops
> @@ -262,8 +278,8 @@ static void __devm_iio_dmaengine_buffer_free(struct device *dev, void *res)
> * The buffer will be automatically de-allocated once the device gets destroyed.
> */
> struct iio_buffer *devm_iio_dmaengine_buffer_alloc(struct device *dev,
> - const char *channel, const struct iio_dma_buffer_ops *ops,
> - void *driver_data)
> + enum iio_buffer_direction direction, const char *channel,
> + const struct iio_dma_buffer_ops *ops, void *driver_data)
> {
> struct iio_buffer **bufferp, *buffer;
>
> @@ -272,7 +288,8 @@ struct iio_buffer *devm_iio_dmaengine_buffer_alloc(struct device *dev,
> if (!bufferp)
> return ERR_PTR(-ENOMEM);
>
> - buffer = iio_dmaengine_buffer_alloc(dev, channel, ops, driver_data);
> + buffer = iio_dmaengine_buffer_alloc(dev, direction, channel, ops,
> + driver_data);
> if (IS_ERR(buffer)) {
> devres_free(bufferp);
> return buffer;
> diff --git a/include/linux/iio/buffer-dma.h b/include/linux/iio/buffer-dma.h
> index c23fad847f0d..0fd844c7f47a 100644
> --- a/include/linux/iio/buffer-dma.h
> +++ b/include/linux/iio/buffer-dma.h
> @@ -112,6 +112,8 @@ struct iio_dma_buffer_queue {
>
> void *driver_data;
>
> + unsigned int poll_wakup_flags;
> +
> unsigned int num_blocks;
> struct iio_dma_buffer_block **blocks;
> unsigned int max_offset;
> @@ -145,6 +147,10 @@ int iio_dma_buffer_set_bytes_per_datum(struct iio_buffer *buffer, size_t bpd);
> int iio_dma_buffer_set_length(struct iio_buffer *buffer, unsigned int length);
> int iio_dma_buffer_request_update(struct iio_buffer *buffer);
>
> +int iio_dma_buffer_write(struct iio_buffer *buf, size_t n,
> + const char __user *user_buffer);
> +size_t iio_dma_buffer_space_available(struct iio_buffer *buf);
> +
> int iio_dma_buffer_init(struct iio_dma_buffer_queue *queue,
> struct device *dma_dev, const struct iio_dma_buffer_ops *ops,
> void *driver_data);
> diff --git a/include/linux/iio/buffer-dmaengine.h b/include/linux/iio/buffer-dmaengine.h
> index 464adee95d4b..009a601c406c 100644
> --- a/include/linux/iio/buffer-dmaengine.h
> +++ b/include/linux/iio/buffer-dmaengine.h
> @@ -7,12 +7,13 @@
> #ifndef __IIO_DMAENGINE_H__
> #define __IIO_DMAENGINE_H__
>
> +#include <linux/iio/buffer.h>
> +
> struct iio_dma_buffer_ops;
> -struct iio_buffer;
> struct device;
>
> struct iio_buffer *devm_iio_dmaengine_buffer_alloc(struct device *dev,
> - const char *channel, const struct iio_dma_buffer_ops *ops,
> - void *driver_data);
> + enum iio_buffer_direction direction, const char *channel,
> + const struct iio_dma_buffer_ops *ops, void *driver_data);
>
> #endif
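To make the new signature concrete, here is a minimal, untested sketch of
how a DAC driver might request an output DMA buffer. Only
devm_iio_dmaengine_buffer_alloc(), IIO_BUFFER_DIRECTION_OUT and the
NULL-channel fallback come from the patch above; the "foo" driver name and
the attach step via iio_device_attach_buffer() (as reworked by the
multibuffer series this depends on) are assumptions:

	#include <linux/err.h>
	#include <linux/iio/iio.h>
	#include <linux/iio/buffer.h>
	#include <linux/iio/buffer-dmaengine.h>

	/* Hypothetical probe helper for an imaginary "foo" DAC */
	static int foo_dac_setup_buffer(struct device *dev,
					struct iio_dev *indio_dev)
	{
		struct iio_buffer *buffer;

		/*
		 * NULL channel name: with this patch the helper falls
		 * back to "tx" for output buffers ("rx" for input).
		 * No custom ops or driver data are needed here.
		 */
		buffer = devm_iio_dmaengine_buffer_alloc(dev,
						IIO_BUFFER_DIRECTION_OUT,
						NULL, NULL, NULL);
		if (IS_ERR(buffer))
			return PTR_ERR(buffer);

		return iio_device_attach_buffer(indio_dev, buffer);
	}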
On Fri, 12 Feb 2021 12:20:21 +0200
Alexandru Ardelean <[email protected]> wrote:
> From: Lars-Peter Clausen <[email protected]>
>
> This change adds support for cyclic DMA transfers using the IIO buffer DMA
> infrastructure.
> To do this, userspace must set the IIO_BUFFER_BLOCK_FLAG_CYCLIC flag on a
> block when enqueueing it via the ENQUEUE_BLOCK ioctl().
We should have more than that in the way of documentation!
What is the dataflow that we end up with as a result of this?
Jonathan
>
> Signed-off-by: Lars-Peter Clausen <[email protected]>
> Signed-off-by: Alexandru Ardelean <[email protected]>
> ---
> .../buffer/industrialio-buffer-dmaengine.c | 24 ++++++++++++-------
> include/uapi/linux/iio/buffer.h | 1 +
> 2 files changed, 17 insertions(+), 8 deletions(-)
>
> diff --git a/drivers/iio/buffer/industrialio-buffer-dmaengine.c b/drivers/iio/buffer/industrialio-buffer-dmaengine.c
> index 013cc7c1ecf4..94c93a636ad4 100644
> --- a/drivers/iio/buffer/industrialio-buffer-dmaengine.c
> +++ b/drivers/iio/buffer/industrialio-buffer-dmaengine.c
> @@ -82,14 +82,22 @@ static int iio_dmaengine_buffer_submit_block(struct iio_dma_buffer_queue *queue,
>
> direction = dmaengine_buffer->is_tx ? DMA_MEM_TO_DEV : DMA_DEV_TO_MEM;
>
> - desc = dmaengine_prep_slave_single(dmaengine_buffer->chan,
> - block->phys_addr, block->block.bytes_used, direction,
> - DMA_PREP_INTERRUPT);
> - if (!desc)
> - return -ENOMEM;
> -
> - desc->callback_result = iio_dmaengine_buffer_block_done;
> - desc->callback_param = block;
> + if (block->block.flags & IIO_BUFFER_BLOCK_FLAG_CYCLIC) {
> + desc = dmaengine_prep_dma_cyclic(dmaengine_buffer->chan,
> + block->phys_addr, block->block.bytes_used,
> + block->block.bytes_used, direction, 0);
> + if (!desc)
> + return -ENOMEM;
> + } else {
> + desc = dmaengine_prep_slave_single(dmaengine_buffer->chan,
> + block->phys_addr, block->block.bytes_used, direction,
> + DMA_PREP_INTERRUPT);
> + if (!desc)
> + return -ENOMEM;
> +
> + desc->callback_result = iio_dmaengine_buffer_block_done;
> + desc->callback_param = block;
> + }
>
> cookie = dmaengine_submit(desc);
> if (dma_submit_error(cookie))
> diff --git a/include/uapi/linux/iio/buffer.h b/include/uapi/linux/iio/buffer.h
> index 70ad3aea01ea..0e0c95f1c38b 100644
> --- a/include/uapi/linux/iio/buffer.h
> +++ b/include/uapi/linux/iio/buffer.h
> @@ -13,6 +13,7 @@ struct iio_buffer_block_alloc_req {
> };
>
> #define IIO_BUFFER_BLOCK_FLAG_TIMESTAMP_VALID (1 << 0)
> +#define IIO_BUFFER_BLOCK_FLAG_CYCLIC (1 << 1)
>
> struct iio_buffer_block {
> __u32 id;
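Worth noting from the hunk above: a cyclic block gets no completion
callback, since the DMA engine replays the same descriptor until the
buffer is disabled, so the block is never "done" in the usual sense. For
illustration, a minimal, untested userspace sketch of the flag in use;
the IIO_BUFFER_BLOCK_ENQUEUE_IOCTL name and the mmap'ed-block setup are
assumed from the mmap/high-speed series this set depends on:

	#include <sys/ioctl.h>
	#include <linux/iio/buffer.h>

	/*
	 * Hypothetical helper: enqueue a previously mmap'ed and filled
	 * block as a cyclic transfer. Everything except the _CYCLIC
	 * flag itself is an assumption from the dependent mmap series.
	 */
	static int enqueue_cyclic_block(int buf_fd,
					struct iio_buffer_block *block)
	{
		/* Replay this block's data until the buffer is disabled */
		block->flags |= IIO_BUFFER_BLOCK_FLAG_CYCLIC;

		return ioctl(buf_fd, IIO_BUFFER_BLOCK_ENQUEUE_IOCTL, block);
	}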
On Fri, 12 Feb 2021 12:20:16 +0200
Alexandru Ardelean <[email protected]> wrote:
> Largely, an adaptation of Lars' work, applied on the IIO multi-buffer
> support + high-speed/mmap support [1].
> Found here:
> https://github.com/larsclausen/linux/commits/iio-high-speed-5.10
> But this isn't tested.
>
> [1] Requires that these sets be applied (in this order):
> * https://lore.kernel.org/linux-iio/[email protected]/T/#t
> * https://lore.kernel.org/linux-iio/[email protected]/T/#t
>
> Some of the variations from the original work are:
> 1. It's applied on top of the multibuffer support, so the direction of the
> data is set per iio_buffer, not per iio_dev
> 2. Cyclic mode is a separate patch
> 3. devm_iio_dmaengine_buffer_alloc() requires the definition of
> 'enum iio_buffer_direction'; which means that 'linux/iio/buffer.h'
> needs to be included in buffer-dma.h; Lars tried to use a bool, but
> using the enum seems a bit more consistent and allows us to maybe
> go down the route of both I/O buffers (some day); not sure if
> that's sane or not (you never know)
> 4. Various re-formatting; and added some docstrings where I remembered
> to do so
Just thinking about how this is different from input buffers.
For now at least I guess we can assume there is no equivalent of multiple
consumers and the mux logic needed to support them.
However I can definitely see we may get in-kernel 'consumers' of these
output buffers.
Come to think of it, we probably need to rework the inkern logic anyway
to deal with multiple buffer input devices. Hopefully it just continues
working with the single buffer cases so there won't be any regressions
on that front.
The largest issue with this series is the lack of users. It all seems good
in principle, but until drivers are making use of it, I'm not keen
on merging this extra infrastructure.
Jonathan
>
> Lars-Peter Clausen (5):
> iio: Add output buffer support
> iio: kfifo-buffer: Add output buffer support
> iio: buffer-dma: Allow to provide custom buffer ops
> iio: buffer-dma: Add output buffer support
> iio: buffer-dma: add support for cyclic DMA transfers
>
> drivers/iio/adc/adi-axi-adc.c | 5 +-
> drivers/iio/buffer/industrialio-buffer-dma.c | 120 ++++++++++++++++--
> .../buffer/industrialio-buffer-dmaengine.c | 57 +++++++--
> drivers/iio/buffer/kfifo_buf.c | 50 ++++++++
> drivers/iio/industrialio-buffer.c | 110 +++++++++++++++-
> include/linux/iio/buffer-dma.h | 11 +-
> include/linux/iio/buffer-dmaengine.h | 7 +-
> include/linux/iio/buffer.h | 7 +
> include/linux/iio/buffer_impl.h | 11 ++
> include/uapi/linux/iio/buffer.h | 1 +
> 10 files changed, 348 insertions(+), 31 deletions(-)
>
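On the dataflow question raised earlier in the thread: for the plain
(non-mmap) path, output buffers mirror capture. Userspace waits for free
space with poll(POLLOUT) and pushes raw samples with write(). A minimal,
untested sketch, with the 16-bit sample layout invented for illustration
(fd is an already opened IIO buffer character device):

	#include <poll.h>
	#include <stdint.h>
	#include <unistd.h>

	/*
	 * Hypothetical userspace helper: block until the output buffer
	 * has room, then write one batch of 16-bit samples.
	 */
	static int push_samples(int fd, const int16_t *samples,
				size_t count)
	{
		struct pollfd pfd = { .fd = fd, .events = POLLOUT };
		size_t len = count * sizeof(*samples);

		/* Wakes once at least a watermark's worth of space is free */
		if (poll(&pfd, 1, -1) <= 0)
			return -1;

		if (write(fd, samples, len) != (ssize_t)len)
			return -1;

		return 0;
	}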