2023-09-14 06:32:55

by Shengjiu Wang

Subject: [RFC PATCH v3 0/9] Add audio support in v4l2 framework

Audio signal processing also has a requirement for memory-to-memory
processing, similar to video.

This asrc memory-to-memory (memory -> asrc -> memory) case is a
non-real-time use case.

The user fills the input buffer for the asrc module; after conversion, the
asrc sends the output buffer back to the user. So it is not a traditional
ALSA playback and capture case.

It is a specific use case and there is no precedent in the current kernel.
The v4l2 memory-to-memory framework is the closest implementation; v4l2
currently supports video, image, radio, tuner and touch devices, so it is
not complicated to add support for this specific audio case.

We have already implemented the "memory -> asrc -> i2s device -> codec"
use case in ALSA. Now the "memory -> asrc -> memory" case needs to reuse
the code in the asrc driver, so the first three patches refine that code
so it can be shared by the "memory -> asrc -> memory" driver.

The main change is on the v4l2 side: a /dev/v4l-audioX device will be
created, and user applications only use the ioctls of the v4l2 framework.
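
To make the expected usage concrete, a minimal userspace sequence could
look roughly like the sketch below (it assumes the fmt.audio fields, the
V4L2_BUF_TYPE_AUDIO_* buffer types and the LPCM format added by this
series; buffer handling and error checking are omitted):

    #include <fcntl.h>
    #include <sys/ioctl.h>
    #include <linux/videodev2.h>
    #include <sound/asound.h>   /* SNDRV_PCM_FORMAT_* */

    int fd = open("/dev/v4l-audio0", O_RDWR);

    /* input side of the converter */
    struct v4l2_format out = { .type = V4L2_BUF_TYPE_AUDIO_OUTPUT };
    out.fmt.audio.rate     = 8000;
    out.fmt.audio.channels = 2;
    out.fmt.audio.format   = SNDRV_PCM_FORMAT_S16_LE;
    ioctl(fd, VIDIOC_S_FMT, &out);

    /* output side of the converter */
    struct v4l2_format cap = { .type = V4L2_BUF_TYPE_AUDIO_CAPTURE };
    cap.fmt.audio.rate     = 16000;
    cap.fmt.audio.channels = 2;
    cap.fmt.audio.format   = SNDRV_PCM_FORMAT_S16_LE;
    ioctl(fd, VIDIOC_S_FMT, &cap);

    /* REQBUFS + mmap + QBUF on both queues, then: */
    ioctl(fd, VIDIOC_STREAMON, &out.type);
    ioctl(fd, VIDIOC_STREAMON, &cap.type);
    /* DQBUF the converted samples from the capture queue */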

The other change is to add memory-to-memory support for the two kinds of
i.MX ASRC modules.

changes in v3:
- Modify documents for adding audio m2m support
- Add audio virtual m2m driver
- Define the V4L2_AUDIO_FMT_LPCM format type for audio.
- Define the V4L2_CAP_AUDIO_M2M capability type for the audio m2m case.
- With modifications in v4l-utils, the v4l2-compliance test passes.

changes in v2:
- decouple the implementation in v4l2 and ALSA
- implement the memory-to-memory driver as a platform driver
and move it to drivers/media
- move fsl_asrc_common.h to include/sound folder

Shengjiu Wang (9):
ASoC: fsl_asrc: define functions for memory to memory usage
ASoC: fsl_easrc: define functions for memory to memory usage
ASoC: fsl_asrc: move fsl_asrc_common.h to include/sound
ASoC: fsl_asrc: register m2m platform device
ASoC: fsl_easrc: register m2m platform device
media: v4l2: Add audio capture and output support
media: uapi: Add V4L2_CID_USER_IMX_ASRC_RATIO_MOD control
media: audm2m: add virtual driver for audio memory to memory
media: imx-asrc: Add memory to memory driver

.../userspace-api/media/v4l/audio-formats.rst | 15 +
.../userspace-api/media/v4l/buffer.rst | 6 +
.../userspace-api/media/v4l/control.rst | 5 +
.../userspace-api/media/v4l/dev-audio.rst | 63 +
.../userspace-api/media/v4l/devices.rst | 1 +
.../media/v4l/pixfmt-aud-lpcm.rst | 31 +
.../userspace-api/media/v4l/pixfmt.rst | 1 +
.../media/v4l/vidioc-enum-fmt.rst | 2 +
.../userspace-api/media/v4l/vidioc-g-fmt.rst | 4 +
.../media/v4l/vidioc-querycap.rst | 3 +
.../media/videodev2.h.rst.exceptions | 2 +
.../media/common/videobuf2/videobuf2-v4l2.c | 4 +
drivers/media/platform/nxp/Kconfig | 12 +
drivers/media/platform/nxp/Makefile | 1 +
drivers/media/platform/nxp/imx-asrc.c | 1058 +++++++++++++++++
drivers/media/test-drivers/Kconfig | 9 +
drivers/media/test-drivers/Makefile | 1 +
drivers/media/test-drivers/audm2m.c | 767 ++++++++++++
drivers/media/v4l2-core/v4l2-ctrls-defs.c | 1 +
drivers/media/v4l2-core/v4l2-dev.c | 17 +
drivers/media/v4l2-core/v4l2-ioctl.c | 53 +
include/media/v4l2-dev.h | 2 +
include/media/v4l2-ioctl.h | 34 +
.../fsl => include/sound}/fsl_asrc_common.h | 54 +
include/uapi/linux/v4l2-controls.h | 1 +
include/uapi/linux/videodev2.h | 25 +
sound/soc/fsl/fsl_asrc.c | 162 +++
sound/soc/fsl/fsl_asrc.h | 4 +-
sound/soc/fsl/fsl_asrc_dma.c | 2 +-
sound/soc/fsl/fsl_easrc.c | 239 ++++
sound/soc/fsl/fsl_easrc.h | 8 +-
31 files changed, 2584 insertions(+), 3 deletions(-)
create mode 100644 Documentation/userspace-api/media/v4l/audio-formats.rst
create mode 100644 Documentation/userspace-api/media/v4l/dev-audio.rst
create mode 100644 Documentation/userspace-api/media/v4l/pixfmt-aud-lpcm.rst
create mode 100644 drivers/media/platform/nxp/imx-asrc.c
create mode 100644 drivers/media/test-drivers/audm2m.c
rename {sound/soc/fsl => include/sound}/fsl_asrc_common.h (60%)

--
2.34.1


2023-09-14 06:33:21

by Shengjiu Wang

Subject: [RFC PATCH v3 3/9] ASoC: fsl_asrc: move fsl_asrc_common.h to include/sound

Move fsl_asrc_common.h to include/sound so that it can be
included by other drivers.

Signed-off-by: Shengjiu Wang <[email protected]>
---
{sound/soc/fsl => include/sound}/fsl_asrc_common.h | 0
sound/soc/fsl/fsl_asrc.h | 2 +-
sound/soc/fsl/fsl_asrc_dma.c | 2 +-
sound/soc/fsl/fsl_easrc.h | 2 +-
4 files changed, 3 insertions(+), 3 deletions(-)
rename {sound/soc/fsl => include/sound}/fsl_asrc_common.h (100%)

diff --git a/sound/soc/fsl/fsl_asrc_common.h b/include/sound/fsl_asrc_common.h
similarity index 100%
rename from sound/soc/fsl/fsl_asrc_common.h
rename to include/sound/fsl_asrc_common.h
diff --git a/sound/soc/fsl/fsl_asrc.h b/sound/soc/fsl/fsl_asrc.h
index 1c492eb237f5..66544624de7b 100644
--- a/sound/soc/fsl/fsl_asrc.h
+++ b/sound/soc/fsl/fsl_asrc.h
@@ -10,7 +10,7 @@
#ifndef _FSL_ASRC_H
#define _FSL_ASRC_H

-#include "fsl_asrc_common.h"
+#include <sound/fsl_asrc_common.h>

#define ASRC_M2M_INPUTFIFO_WML 0x4
#define ASRC_M2M_OUTPUTFIFO_WML 0x2
diff --git a/sound/soc/fsl/fsl_asrc_dma.c b/sound/soc/fsl/fsl_asrc_dma.c
index 05a7d1588d20..b034fee3f1f4 100644
--- a/sound/soc/fsl/fsl_asrc_dma.c
+++ b/sound/soc/fsl/fsl_asrc_dma.c
@@ -12,7 +12,7 @@
#include <sound/dmaengine_pcm.h>
#include <sound/pcm_params.h>

-#include "fsl_asrc_common.h"
+#include <sound/fsl_asrc_common.h>

#define FSL_ASRC_DMABUF_SIZE (256 * 1024)

diff --git a/sound/soc/fsl/fsl_easrc.h b/sound/soc/fsl/fsl_easrc.h
index bee887c8b4f2..f571647c508f 100644
--- a/sound/soc/fsl/fsl_easrc.h
+++ b/sound/soc/fsl/fsl_easrc.h
@@ -9,7 +9,7 @@
#include <sound/asound.h>
#include <linux/dma/imx-dma.h>

-#include "fsl_asrc_common.h"
+#include <sound/fsl_asrc_common.h>

/* EASRC Register Map */

--
2.34.1

2023-09-14 06:34:50

by Shengjiu Wang

Subject: [RFC PATCH v3 4/9] ASoC: fsl_asrc: register m2m platform device

Register an m2m platform device so that the user can use the
M2M feature.

Define the platform data structure and the platform
driver name.
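
For reference, the m2m driver added later in this series is expected to
bind against M2M_DRV_NAME and read this platform data back in its probe.
A minimal sketch, where only dev_get_platdata() and the structures above
come from the kernel and everything else is illustrative:

    #include <linux/module.h>
    #include <linux/platform_device.h>
    #include <sound/fsl_asrc_common.h>

    static int fsl_asrc_m2m_probe(struct platform_device *pdev)
    {
            struct fsl_asrc_m2m_pdata *pdata = dev_get_platdata(&pdev->dev);

            if (!pdata || !pdata->asrc)
                    return -EINVAL;

            /* pdata->asrc points back at the parent ASRC instance */
            return 0;
    }

    static struct platform_driver fsl_asrc_m2m_driver = {
            .probe  = fsl_asrc_m2m_probe,
            .driver = { .name = M2M_DRV_NAME },
    };
    module_platform_driver(fsl_asrc_m2m_driver);
    MODULE_LICENSE("GPL");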

Signed-off-by: Shengjiu Wang <[email protected]>
---
include/sound/fsl_asrc_common.h | 12 ++++++++++++
sound/soc/fsl/fsl_asrc.c | 12 ++++++++++++
2 files changed, 24 insertions(+)

diff --git a/include/sound/fsl_asrc_common.h b/include/sound/fsl_asrc_common.h
index 7f7e725075fe..e978a2f9cd24 100644
--- a/include/sound/fsl_asrc_common.h
+++ b/include/sound/fsl_asrc_common.h
@@ -69,6 +69,7 @@ struct fsl_asrc_pair {
* @dma_params_rx: DMA parameters for receive channel
* @dma_params_tx: DMA parameters for transmit channel
* @pdev: platform device pointer
+ * @m2m_pdev: m2m platform device pointer
* @regmap: regmap handler
* @paddr: physical address to the base address of registers
* @mem_clk: clock source to access register
@@ -104,6 +105,7 @@ struct fsl_asrc {
struct snd_dmaengine_dai_dma_data dma_params_rx;
struct snd_dmaengine_dai_dma_data dma_params_tx;
struct platform_device *pdev;
+ struct platform_device *m2m_pdev;
struct regmap *regmap;
unsigned long paddr;
struct clk *mem_clk;
@@ -144,6 +146,16 @@ struct fsl_asrc {
void *private;
};

+/**
+ * struct fsl_asrc_m2m_pdata - platform data
+ * @asrc: pointer to struct fsl_asrc
+ *
+ */
+struct fsl_asrc_m2m_pdata {
+ struct fsl_asrc *asrc;
+};
+
+#define M2M_DRV_NAME "fsl_asrc_m2m"
#define DRV_NAME "fsl-asrc-dai"
extern struct snd_soc_component_driver fsl_asrc_component;

diff --git a/sound/soc/fsl/fsl_asrc.c b/sound/soc/fsl/fsl_asrc.c
index f9d830e0957f..7b72d6bcf281 100644
--- a/sound/soc/fsl/fsl_asrc.c
+++ b/sound/soc/fsl/fsl_asrc.c
@@ -1208,6 +1208,7 @@ static int fsl_asrc_runtime_suspend(struct device *dev);
static int fsl_asrc_probe(struct platform_device *pdev)
{
struct device_node *np = pdev->dev.of_node;
+ struct fsl_asrc_m2m_pdata m2m_pdata;
struct fsl_asrc_priv *asrc_priv;
struct fsl_asrc *asrc;
struct resource *res;
@@ -1392,6 +1393,12 @@ static int fsl_asrc_probe(struct platform_device *pdev)
goto err_pm_get_sync;
}

+ m2m_pdata.asrc = asrc;
+ asrc->m2m_pdev = platform_device_register_data(&pdev->dev,
+ M2M_DRV_NAME,
+ PLATFORM_DEVID_AUTO,
+ &m2m_pdata,
+ sizeof(m2m_pdata));
return 0;

err_pm_get_sync:
@@ -1404,6 +1411,11 @@ static int fsl_asrc_probe(struct platform_device *pdev)

static void fsl_asrc_remove(struct platform_device *pdev)
{
+ struct fsl_asrc *asrc = dev_get_drvdata(&pdev->dev);
+
+ if (asrc->m2m_pdev && !IS_ERR(asrc->m2m_pdev))
+ platform_device_unregister(asrc->m2m_pdev);
+
pm_runtime_disable(&pdev->dev);
if (!pm_runtime_status_suspended(&pdev->dev))
fsl_asrc_runtime_suspend(&pdev->dev);
--
2.34.1

2023-09-14 06:34:52

by Shengjiu Wang

Subject: [RFC PATCH v3 7/9] media: uapi: Add V4L2_CID_USER_IMX_ASRC_RATIO_MOD control

The input clock and output clock may not run at exactly the nominal
sample rates; there is some drift, so the conversion ratio of the i.MX
ASRC module needs to be adjusted according to the actual clock rates.

Add the V4L2_CID_USER_IMX_ASRC_RATIO_MOD control so that the user
can adjust the ratio.
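
For example, once the device is open, an application that tracks the
clock drift could nudge the ratio roughly like this (fd is the opened
/dev/v4l-audioX device and ratio_mod is an application-computed signed
correction; illustrative only):

    struct v4l2_control ctrl = {
            .id    = V4L2_CID_USER_IMX_ASRC_RATIO_MOD,
            .value = ratio_mod,
    };

    if (ioctl(fd, VIDIOC_S_CTRL, &ctrl) < 0)
            perror("VIDIOC_S_CTRL");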

Signed-off-by: Shengjiu Wang <[email protected]>
---
Documentation/userspace-api/media/v4l/control.rst | 5 +++++
drivers/media/v4l2-core/v4l2-ctrls-defs.c | 1 +
include/uapi/linux/v4l2-controls.h | 1 +
3 files changed, 7 insertions(+)

diff --git a/Documentation/userspace-api/media/v4l/control.rst b/Documentation/userspace-api/media/v4l/control.rst
index 4463fce694b0..2bc175900a34 100644
--- a/Documentation/userspace-api/media/v4l/control.rst
+++ b/Documentation/userspace-api/media/v4l/control.rst
@@ -318,6 +318,11 @@ Control IDs
depending on particular custom controls should check the driver name
and version, see :ref:`querycap`.

+.. _v4l2-audio-imx:
+
+``V4L2_CID_USER_IMX_ASRC_RATIO_MOD``
+ sets the resampler ratio modifier of the i.MX ASRC module.
+
Applications can enumerate the available controls with the
:ref:`VIDIOC_QUERYCTRL` and
:ref:`VIDIOC_QUERYMENU <VIDIOC_QUERYCTRL>` ioctls, get and set a
diff --git a/drivers/media/v4l2-core/v4l2-ctrls-defs.c b/drivers/media/v4l2-core/v4l2-ctrls-defs.c
index 8696eb1cdd61..16f66f66198c 100644
--- a/drivers/media/v4l2-core/v4l2-ctrls-defs.c
+++ b/drivers/media/v4l2-core/v4l2-ctrls-defs.c
@@ -1242,6 +1242,7 @@ const char *v4l2_ctrl_get_name(u32 id)
case V4L2_CID_COLORIMETRY_CLASS: return "Colorimetry Controls";
case V4L2_CID_COLORIMETRY_HDR10_CLL_INFO: return "HDR10 Content Light Info";
case V4L2_CID_COLORIMETRY_HDR10_MASTERING_DISPLAY: return "HDR10 Mastering Display";
+ case V4L2_CID_USER_IMX_ASRC_RATIO_MOD: return "ASRC RATIO MOD";
default:
return NULL;
}
diff --git a/include/uapi/linux/v4l2-controls.h b/include/uapi/linux/v4l2-controls.h
index c3604a0a3e30..b1c319906d12 100644
--- a/include/uapi/linux/v4l2-controls.h
+++ b/include/uapi/linux/v4l2-controls.h
@@ -162,6 +162,7 @@ enum v4l2_colorfx {
/* The base for the imx driver controls.
* We reserve 16 controls for this driver. */
#define V4L2_CID_USER_IMX_BASE (V4L2_CID_USER_BASE + 0x10b0)
+#define V4L2_CID_USER_IMX_ASRC_RATIO_MOD (V4L2_CID_USER_IMX_BASE + 0)

/*
* The base for the atmel isc driver controls.
--
2.34.1

2023-09-14 06:38:00

by Shengjiu Wang

Subject: [RFC PATCH v3 8/9] media: audm2m: add virtual driver for audio memory to memory

The audio memory-to-memory virtual driver uses the video memory-to-memory
virtual driver vim2m.c as an example. The main differences are that the
device type is VFL_TYPE_AUDIO and the device capability is V4L2_CAP_AUDIO_M2M.

The device_run function is a dummy that simply copies the data from the
input buffer to the output buffer.
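
Since the device only copies data, a simple userspace check (assuming a
test that queues one filled OUTPUT buffer plus a CAPTURE buffer of at
least the same size, streams on, and dequeues both) is that the payloads
match byte for byte:

    /* out_map/cap_map are the mmap'ed OUTPUT and CAPTURE buffers */
    assert(cap_buf.bytesused == out_buf.bytesused);
    assert(memcmp(cap_map, out_map, cap_buf.bytesused) == 0);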

Signed-off-by: Shengjiu Wang <[email protected]>
---
drivers/media/test-drivers/Kconfig | 9 +
drivers/media/test-drivers/Makefile | 1 +
drivers/media/test-drivers/audm2m.c | 767 ++++++++++++++++++++++++++++
3 files changed, 777 insertions(+)
create mode 100644 drivers/media/test-drivers/audm2m.c

diff --git a/drivers/media/test-drivers/Kconfig b/drivers/media/test-drivers/Kconfig
index 459b433e9fae..be60d73cbf97 100644
--- a/drivers/media/test-drivers/Kconfig
+++ b/drivers/media/test-drivers/Kconfig
@@ -17,6 +17,15 @@ config VIDEO_VIM2M
This is a virtual test device for the memory-to-memory driver
framework.

+config VIDEO_AUDM2M
+ tristate "Virtual Memory-to-Memory Driver For Audio"
+ depends on VIDEO_DEV
+ select VIDEOBUF2_VMALLOC
+ select V4L2_MEM2MEM_DEV
+ help
+ This is a virtual audio test device for the memory-to-memory driver
+ framework.
+
source "drivers/media/test-drivers/vicodec/Kconfig"
source "drivers/media/test-drivers/vimc/Kconfig"
source "drivers/media/test-drivers/vivid/Kconfig"
diff --git a/drivers/media/test-drivers/Makefile b/drivers/media/test-drivers/Makefile
index 740714a4584d..b53ed7e6eaf1 100644
--- a/drivers/media/test-drivers/Makefile
+++ b/drivers/media/test-drivers/Makefile
@@ -10,6 +10,7 @@ obj-$(CONFIG_DVB_VIDTV) += vidtv/

obj-$(CONFIG_VIDEO_VICODEC) += vicodec/
obj-$(CONFIG_VIDEO_VIM2M) += vim2m.o
+obj-$(CONFIG_VIDEO_AUDM2M) += audm2m.o
obj-$(CONFIG_VIDEO_VIMC) += vimc/
obj-$(CONFIG_VIDEO_VIVID) += vivid/
obj-$(CONFIG_VIDEO_VISL) += visl/
diff --git a/drivers/media/test-drivers/audm2m.c b/drivers/media/test-drivers/audm2m.c
new file mode 100644
index 000000000000..d54bc99b9275
--- /dev/null
+++ b/drivers/media/test-drivers/audm2m.c
@@ -0,0 +1,767 @@
+// SPDX-License-Identifier: GPL-2.0+
+/*
+ * A virtual v4l2-mem2mem example for audio device.
+ */
+
+#include <linux/module.h>
+#include <linux/delay.h>
+#include <linux/fs.h>
+#include <linux/sched.h>
+#include <linux/slab.h>
+
+#include <linux/platform_device.h>
+#include <media/v4l2-mem2mem.h>
+#include <media/v4l2-device.h>
+#include <media/v4l2-ioctl.h>
+#include <media/v4l2-ctrls.h>
+#include <media/v4l2-event.h>
+#include <media/videobuf2-vmalloc.h>
+#include <sound/dmaengine_pcm.h>
+
+MODULE_DESCRIPTION("Virtual device for audio mem2mem testing");
+MODULE_LICENSE("GPL");
+MODULE_VERSION("0.1");
+MODULE_ALIAS("audio_mem2mem_testdev");
+
+static unsigned int debug;
+module_param(debug, uint, 0644);
+MODULE_PARM_DESC(debug, "debug level");
+
+/* Flags that indicate a format can be used for capture/output */
+#define MEM2MEM_CAPTURE BIT(0)
+#define MEM2MEM_OUTPUT BIT(1)
+
+#define MEM2MEM_NAME "audm2m"
+
+#define dprintk(dev, lvl, fmt, arg...) \
+ v4l2_dbg(lvl, debug, &(dev)->v4l2_dev, "%s: " fmt, __func__, ## arg)
+
+#define SAMPLE_NUM 4096
+
+static void audm2m_dev_release(struct device *dev)
+{}
+
+static struct platform_device audm2m_pdev = {
+ .name = MEM2MEM_NAME,
+ .dev.release = audm2m_dev_release,
+};
+
+struct audm2m_fmt {
+ u32 fourcc;
+ u32 types;
+};
+
+static struct audm2m_fmt formats[] = {
+ {
+ .fourcc = V4L2_AUDIO_FMT_LPCM,
+ .types = MEM2MEM_CAPTURE | MEM2MEM_OUTPUT,
+ }
+};
+
+#define NUM_FORMATS ARRAY_SIZE(formats)
+
+/* Per-queue, driver-specific private data */
+struct audm2m_q_data {
+ unsigned int rate;
+ snd_pcm_format_t format;
+ unsigned int channels;
+ unsigned int buffersize;
+ struct audm2m_fmt *fmt;
+};
+
+enum {
+ V4L2_M2M_SRC = 0,
+ V4L2_M2M_DST = 1,
+};
+
+static struct audm2m_fmt *find_format(u32 fourcc)
+{
+ struct audm2m_fmt *fmt;
+ unsigned int k;
+
+ for (k = 0; k < NUM_FORMATS; k++) {
+ fmt = &formats[k];
+ if (fmt->fourcc == fourcc)
+ break;
+ }
+
+ if (k == NUM_FORMATS)
+ return NULL;
+
+ return &formats[k];
+}
+
+struct audm2m_dev {
+ struct v4l2_device v4l2_dev;
+ struct video_device vfd;
+
+ atomic_t num_inst;
+ struct mutex dev_mutex;
+
+ struct v4l2_m2m_dev *m2m_dev;
+};
+
+struct audm2m_ctx {
+ struct v4l2_fh fh;
+ struct audm2m_dev *dev;
+
+ struct mutex vb_mutex;
+
+ /* Abort requested by m2m */
+ int aborting;
+
+ /* Source and destination queue data */
+ struct audm2m_q_data q_data[2];
+};
+
+static inline struct audm2m_ctx *file2ctx(struct file *file)
+{
+ return container_of(file->private_data, struct audm2m_ctx, fh);
+}
+
+static struct audm2m_q_data *get_q_data(struct audm2m_ctx *ctx,
+ enum v4l2_buf_type type)
+{
+ switch (type) {
+ case V4L2_BUF_TYPE_AUDIO_OUTPUT:
+ return &ctx->q_data[V4L2_M2M_SRC];
+ case V4L2_BUF_TYPE_AUDIO_CAPTURE:
+ return &ctx->q_data[V4L2_M2M_DST];
+ default:
+ return NULL;
+ }
+}
+
+static const char *type_name(enum v4l2_buf_type type)
+{
+ switch (type) {
+ case V4L2_BUF_TYPE_AUDIO_OUTPUT:
+ return "Output";
+ case V4L2_BUF_TYPE_AUDIO_CAPTURE:
+ return "Capture";
+ default:
+ return "Invalid";
+ }
+}
+
+/*
+ * mem2mem callbacks
+ */
+
+/*
+ * job_ready() - check whether an instance is ready to be scheduled to run
+ */
+static int job_ready(void *priv)
+{
+ struct audm2m_ctx *ctx = priv;
+
+ if (v4l2_m2m_num_src_bufs_ready(ctx->fh.m2m_ctx) < 1 ||
+ v4l2_m2m_num_dst_bufs_ready(ctx->fh.m2m_ctx) < 1) {
+ dprintk(ctx->dev, 1, "Not enough buffers available\n");
+ return 0;
+ }
+
+ return 1;
+}
+
+static void job_abort(void *priv)
+{
+ struct audm2m_ctx *ctx = priv;
+
+ /* Will cancel the transaction in the next interrupt handler */
+ ctx->aborting = 1;
+}
+
+/*
+ * device_run() - prepares and starts the device
+ */
+static void device_run(void *priv)
+{
+ struct audm2m_ctx *ctx = priv;
+ struct audm2m_dev *audm2m_dev;
+ struct vb2_v4l2_buffer *src_buf, *dst_buf;
+ struct audm2m_q_data *q_data_src, *q_data_dst;
+ int src_size, dst_size;
+
+ audm2m_dev = ctx->dev;
+
+ q_data_src = get_q_data(ctx, V4L2_BUF_TYPE_AUDIO_OUTPUT);
+ if (!q_data_src)
+ return;
+
+ q_data_dst = get_q_data(ctx, V4L2_BUF_TYPE_AUDIO_CAPTURE);
+ if (!q_data_dst)
+ return;
+
+ src_buf = v4l2_m2m_next_src_buf(ctx->fh.m2m_ctx);
+ dst_buf = v4l2_m2m_next_dst_buf(ctx->fh.m2m_ctx);
+
+ /* Process the conversion */
+ src_size = vb2_get_plane_payload(&src_buf->vb2_buf, 0);
+
+ if (src_size > q_data_dst->buffersize)
+ dst_size = q_data_dst->buffersize;
+ else
+ dst_size = src_size;
+
+ memcpy(vb2_plane_vaddr(&dst_buf->vb2_buf, 0),
+ vb2_plane_vaddr(&src_buf->vb2_buf, 0),
+ dst_size);
+
+ vb2_set_plane_payload(&dst_buf->vb2_buf, 0, dst_size);
+
+ src_buf = v4l2_m2m_src_buf_remove(ctx->fh.m2m_ctx);
+ dst_buf = v4l2_m2m_dst_buf_remove(ctx->fh.m2m_ctx);
+
+ v4l2_m2m_buf_done(src_buf, VB2_BUF_STATE_DONE);
+ v4l2_m2m_buf_done(dst_buf, VB2_BUF_STATE_DONE);
+ v4l2_m2m_job_finish(audm2m_dev->m2m_dev, ctx->fh.m2m_ctx);
+}
+
+static int audm2m_querycap(struct file *file, void *priv,
+ struct v4l2_capability *cap)
+{
+ strscpy(cap->driver, MEM2MEM_NAME, sizeof(cap->driver));
+ strscpy(cap->card, MEM2MEM_NAME, sizeof(cap->card));
+ snprintf(cap->bus_info, sizeof(cap->bus_info),
+ "platform:%s", MEM2MEM_NAME);
+
+ return 0;
+}
+
+static int enum_fmt(struct v4l2_fmtdesc *f, u32 type)
+{
+ int i, num;
+ struct audm2m_fmt *fmt;
+
+ num = 0;
+
+ for (i = 0; i < NUM_FORMATS; ++i) {
+ if (formats[i].types & type) {
+ if (num == f->index)
+ break;
+ /*
+ * Correct type but haven't reached our index yet,
+ * just increment per-type index
+ */
+ ++num;
+ }
+ }
+
+ if (i < NUM_FORMATS) {
+ /* Format found */
+ fmt = &formats[i];
+ f->pixelformat = fmt->fourcc;
+ return 0;
+ }
+
+ /* Format not found */
+ return -EINVAL;
+}
+
+static int audm2m_enum_fmt_audio_cap(struct file *file, void *priv,
+ struct v4l2_fmtdesc *f)
+{
+ return enum_fmt(f, MEM2MEM_CAPTURE);
+}
+
+static int audm2m_enum_fmt_audio_out(struct file *file, void *priv,
+ struct v4l2_fmtdesc *f)
+{
+ return enum_fmt(f, MEM2MEM_OUTPUT);
+}
+
+static int audm2m_g_fmt(struct audm2m_ctx *ctx, struct v4l2_format *f)
+{
+ struct vb2_queue *vq;
+ struct audm2m_q_data *q_data;
+
+ vq = v4l2_m2m_get_vq(ctx->fh.m2m_ctx, f->type);
+ if (!vq)
+ return -EINVAL;
+
+ q_data = get_q_data(ctx, f->type);
+ if (!q_data)
+ return -EINVAL;
+
+ f->fmt.audio.rate = q_data->rate;
+ f->fmt.audio.format = q_data->format;
+ f->fmt.audio.channels = q_data->channels;
+ f->fmt.audio.buffersize = q_data->buffersize;
+
+ return 0;
+}
+
+static int audm2m_g_fmt_audio_out(struct file *file, void *priv,
+ struct v4l2_format *f)
+{
+ return audm2m_g_fmt(file2ctx(file), f);
+}
+
+static int audm2m_g_fmt_audio_cap(struct file *file, void *priv,
+ struct v4l2_format *f)
+{
+ return audm2m_g_fmt(file2ctx(file), f);
+}
+
+static int audm2m_try_fmt(struct v4l2_format *f, struct audm2m_fmt *fmt)
+{
+ if (f->fmt.audio.rate < 8000)
+ f->fmt.audio.rate = 8000;
+ else if (f->fmt.audio.rate > 192000)
+ f->fmt.audio.rate = 192000;
+
+ if (f->fmt.audio.channels < 1)
+ f->fmt.audio.channels = 1;
+ else if (f->fmt.audio.channels > 8)
+ f->fmt.audio.channels = 8;
+
+ if (f->fmt.audio.format != SNDRV_PCM_FORMAT_S16_LE &&
+ f->fmt.audio.format != SNDRV_PCM_FORMAT_S32_LE)
+ f->fmt.audio.format = SNDRV_PCM_FORMAT_S32_LE;
+
+ f->fmt.audio.buffersize = f->fmt.audio.channels *
+ snd_pcm_format_physical_width(f->fmt.audio.format) *
+ SAMPLE_NUM;
+ return 0;
+}
+
+static int audm2m_try_fmt_audio_cap(struct file *file, void *priv,
+ struct v4l2_format *f)
+{
+ struct audm2m_fmt *fmt;
+ struct audm2m_ctx *ctx = file2ctx(file);
+
+ fmt = find_format(f->fmt.pix.pixelformat);
+ if (!fmt) {
+ f->fmt.pix.pixelformat = formats[0].fourcc;
+ fmt = find_format(f->fmt.pix.pixelformat);
+ }
+
+ if (!(fmt->types & MEM2MEM_CAPTURE)) {
+ v4l2_err(&ctx->dev->v4l2_dev,
+ "Fourcc format (0x%08x) invalid.\n",
+ f->fmt.pix.pixelformat);
+ return -EINVAL;
+ }
+
+ return audm2m_try_fmt(f, fmt);
+}
+
+static int audm2m_try_fmt_audio_out(struct file *file, void *priv,
+ struct v4l2_format *f)
+{
+ struct audm2m_fmt *fmt;
+ struct audm2m_ctx *ctx = file2ctx(file);
+
+ fmt = find_format(f->fmt.pix.pixelformat);
+ if (!fmt) {
+ f->fmt.pix.pixelformat = formats[0].fourcc;
+ fmt = find_format(f->fmt.pix.pixelformat);
+ }
+ if (!(fmt->types & MEM2MEM_OUTPUT)) {
+ v4l2_err(&ctx->dev->v4l2_dev,
+ "Fourcc format (0x%08x) invalid.\n",
+ f->fmt.pix.pixelformat);
+ return -EINVAL;
+ }
+
+ return audm2m_try_fmt(f, fmt);
+}
+
+static int audm2m_s_fmt(struct audm2m_ctx *ctx, struct v4l2_format *f)
+{
+ struct audm2m_q_data *q_data;
+ struct vb2_queue *vq;
+
+ vq = v4l2_m2m_get_vq(ctx->fh.m2m_ctx, f->type);
+ if (!vq)
+ return -EINVAL;
+
+ q_data = get_q_data(ctx, f->type);
+ if (!q_data)
+ return -EINVAL;
+
+ if (vb2_is_busy(vq)) {
+ v4l2_err(&ctx->dev->v4l2_dev, "%s queue busy\n", __func__);
+ return -EBUSY;
+ }
+
+ q_data->fmt = find_format(f->fmt.pix.pixelformat);
+ q_data->rate = f->fmt.audio.rate;
+ q_data->format = f->fmt.audio.format;
+ q_data->channels = f->fmt.audio.channels;
+ q_data->buffersize = q_data->channels *
+ snd_pcm_format_physical_width(q_data->format) *
+ SAMPLE_NUM;
+
+ dprintk(ctx->dev, 1,
+ "Format for type %s: %d/%d/%d, fmt: %c%c%c%c\n",
+ type_name(f->type), q_data->rate, q_data->format,
+ q_data->channels,
+ (q_data->fmt->fourcc & 0xff),
+ (q_data->fmt->fourcc >> 8) & 0xff,
+ (q_data->fmt->fourcc >> 16) & 0xff,
+ (q_data->fmt->fourcc >> 24) & 0xff);
+
+ return 0;
+}
+
+static int audm2m_s_fmt_audio_cap(struct file *file, void *priv,
+ struct v4l2_format *f)
+{
+ int ret;
+
+ ret = audm2m_try_fmt_audio_cap(file, priv, f);
+ if (ret)
+ return ret;
+
+ return audm2m_s_fmt(file2ctx(file), f);
+}
+
+static int audm2m_s_fmt_audio_out(struct file *file, void *priv,
+ struct v4l2_format *f)
+{
+ int ret;
+
+ ret = audm2m_try_fmt_audio_out(file, priv, f);
+ if (ret)
+ return ret;
+
+ return audm2m_s_fmt(file2ctx(file), f);
+}
+
+static const struct v4l2_ioctl_ops audm2m_ioctl_ops = {
+ .vidioc_querycap = audm2m_querycap,
+
+ .vidioc_enum_fmt_audio_cap = audm2m_enum_fmt_audio_cap,
+ .vidioc_g_fmt_audio_cap = audm2m_g_fmt_audio_cap,
+ .vidioc_try_fmt_audio_cap = audm2m_try_fmt_audio_cap,
+ .vidioc_s_fmt_audio_cap = audm2m_s_fmt_audio_cap,
+
+ .vidioc_enum_fmt_audio_out = audm2m_enum_fmt_audio_out,
+ .vidioc_g_fmt_audio_out = audm2m_g_fmt_audio_out,
+ .vidioc_try_fmt_audio_out = audm2m_try_fmt_audio_out,
+ .vidioc_s_fmt_audio_out = audm2m_s_fmt_audio_out,
+
+ .vidioc_reqbufs = v4l2_m2m_ioctl_reqbufs,
+ .vidioc_querybuf = v4l2_m2m_ioctl_querybuf,
+ .vidioc_qbuf = v4l2_m2m_ioctl_qbuf,
+ .vidioc_dqbuf = v4l2_m2m_ioctl_dqbuf,
+ .vidioc_prepare_buf = v4l2_m2m_ioctl_prepare_buf,
+ .vidioc_create_bufs = v4l2_m2m_ioctl_create_bufs,
+ .vidioc_expbuf = v4l2_m2m_ioctl_expbuf,
+
+ .vidioc_streamon = v4l2_m2m_ioctl_streamon,
+ .vidioc_streamoff = v4l2_m2m_ioctl_streamoff,
+
+ .vidioc_subscribe_event = v4l2_ctrl_subscribe_event,
+ .vidioc_unsubscribe_event = v4l2_event_unsubscribe,
+};
+
+/*
+ * Queue operations
+ */
+static int audm2m_queue_setup(struct vb2_queue *vq,
+ unsigned int *nbuffers,
+ unsigned int *nplanes,
+ unsigned int sizes[],
+ struct device *alloc_devs[])
+{
+ struct audm2m_ctx *ctx = vb2_get_drv_priv(vq);
+ struct audm2m_q_data *q_data;
+
+ q_data = get_q_data(ctx, vq->type);
+ if (!q_data)
+ return -EINVAL;
+
+ *nplanes = 1;
+ sizes[0] = q_data->buffersize;
+
+ dprintk(ctx->dev, 1, "%s: get %d buffer(s) of size %d each.\n",
+ type_name(vq->type), *nplanes, sizes[0]);
+
+ return 0;
+}
+
+static void audm2m_buf_queue(struct vb2_buffer *vb)
+{
+ struct vb2_v4l2_buffer *vbuf = to_vb2_v4l2_buffer(vb);
+ struct audm2m_ctx *ctx = vb2_get_drv_priv(vb->vb2_queue);
+
+ v4l2_m2m_buf_queue(ctx->fh.m2m_ctx, vbuf);
+}
+
+static int audm2m_start_streaming(struct vb2_queue *q, unsigned int count)
+{
+ struct audm2m_ctx *ctx = vb2_get_drv_priv(q);
+ struct audm2m_q_data *q_data = get_q_data(ctx, q->type);
+
+ if (!q_data)
+ return -EINVAL;
+
+ if (V4L2_TYPE_IS_OUTPUT(q->type))
+ ctx->aborting = 0;
+
+ return 0;
+}
+
+static void audm2m_stop_streaming(struct vb2_queue *q)
+{
+ struct audm2m_ctx *ctx = vb2_get_drv_priv(q);
+ struct vb2_v4l2_buffer *vbuf;
+
+ for (;;) {
+ if (V4L2_TYPE_IS_OUTPUT(q->type))
+ vbuf = v4l2_m2m_src_buf_remove(ctx->fh.m2m_ctx);
+ else
+ vbuf = v4l2_m2m_dst_buf_remove(ctx->fh.m2m_ctx);
+ if (!vbuf)
+ return;
+ v4l2_m2m_buf_done(vbuf, VB2_BUF_STATE_ERROR);
+ }
+}
+
+static const struct vb2_ops audm2m_qops = {
+ .queue_setup = audm2m_queue_setup,
+ .buf_queue = audm2m_buf_queue,
+ .start_streaming = audm2m_start_streaming,
+ .stop_streaming = audm2m_stop_streaming,
+ .wait_prepare = vb2_ops_wait_prepare,
+ .wait_finish = vb2_ops_wait_finish,
+};
+
+static int queue_init(void *priv, struct vb2_queue *src_vq,
+ struct vb2_queue *dst_vq)
+{
+ struct audm2m_ctx *ctx = priv;
+ int ret;
+
+ src_vq->type = V4L2_BUF_TYPE_AUDIO_OUTPUT;
+ src_vq->io_modes = VB2_MMAP | VB2_DMABUF;
+ src_vq->drv_priv = ctx;
+ src_vq->buf_struct_size = sizeof(struct v4l2_m2m_buffer);
+ src_vq->ops = &audm2m_qops;
+ src_vq->mem_ops = &vb2_vmalloc_memops;
+ src_vq->timestamp_flags = V4L2_BUF_FLAG_TIMESTAMP_COPY;
+ src_vq->lock = &ctx->vb_mutex;
+ src_vq->min_buffers_needed = 1;
+
+ ret = vb2_queue_init(src_vq);
+ if (ret)
+ return ret;
+
+ dst_vq->type = V4L2_BUF_TYPE_AUDIO_CAPTURE;
+ dst_vq->io_modes = VB2_MMAP | VB2_DMABUF;
+ dst_vq->drv_priv = ctx;
+ dst_vq->buf_struct_size = sizeof(struct v4l2_m2m_buffer);
+ dst_vq->ops = &audm2m_qops;
+ dst_vq->mem_ops = &vb2_vmalloc_memops;
+ dst_vq->timestamp_flags = V4L2_BUF_FLAG_TIMESTAMP_COPY;
+ dst_vq->lock = &ctx->vb_mutex;
+ dst_vq->min_buffers_needed = 1;
+
+ return vb2_queue_init(dst_vq);
+}
+
+/*
+ * File operations
+ */
+static int audm2m_open(struct file *file)
+{
+ struct audm2m_dev *dev = video_drvdata(file);
+ struct audm2m_ctx *ctx = NULL;
+ int rc = 0;
+
+ if (mutex_lock_interruptible(&dev->dev_mutex))
+ return -ERESTARTSYS;
+ ctx = kzalloc(sizeof(*ctx), GFP_KERNEL);
+ if (!ctx) {
+ rc = -ENOMEM;
+ goto open_unlock;
+ }
+
+ v4l2_fh_init(&ctx->fh, video_devdata(file));
+ file->private_data = &ctx->fh;
+ ctx->dev = dev;
+
+ ctx->q_data[V4L2_M2M_SRC].fmt = &formats[0];
+ ctx->q_data[V4L2_M2M_SRC].rate = 8000;
+ ctx->q_data[V4L2_M2M_SRC].format = SNDRV_PCM_FORMAT_S32_LE;
+ ctx->q_data[V4L2_M2M_SRC].channels = 2;
+
+ /* Fix to 4096 samples */
+ ctx->q_data[V4L2_M2M_SRC].buffersize = SAMPLE_NUM * 2 * 4;
+ ctx->q_data[V4L2_M2M_DST] = ctx->q_data[V4L2_M2M_SRC];
+
+ ctx->fh.m2m_ctx = v4l2_m2m_ctx_init(dev->m2m_dev, ctx, &queue_init);
+
+ mutex_init(&ctx->vb_mutex);
+
+ if (IS_ERR(ctx->fh.m2m_ctx)) {
+ rc = PTR_ERR(ctx->fh.m2m_ctx);
+
+ v4l2_fh_exit(&ctx->fh);
+ kfree(ctx);
+ goto open_unlock;
+ }
+
+ v4l2_fh_add(&ctx->fh);
+ atomic_inc(&dev->num_inst);
+
+ dprintk(dev, 1, "Created instance: %p, m2m_ctx: %p\n",
+ ctx, ctx->fh.m2m_ctx);
+
+open_unlock:
+ mutex_unlock(&dev->dev_mutex);
+ return rc;
+}
+
+static int audm2m_release(struct file *file)
+{
+ struct audm2m_dev *dev = video_drvdata(file);
+ struct audm2m_ctx *ctx = file2ctx(file);
+
+ dprintk(dev, 1, "Releasing instance %p\n", ctx);
+
+ v4l2_fh_del(&ctx->fh);
+ v4l2_fh_exit(&ctx->fh);
+ mutex_lock(&dev->dev_mutex);
+ v4l2_m2m_ctx_release(ctx->fh.m2m_ctx);
+ mutex_unlock(&dev->dev_mutex);
+ kfree(ctx);
+
+ atomic_dec(&dev->num_inst);
+
+ return 0;
+}
+
+static void audm2m_device_release(struct video_device *vdev)
+{
+ struct audm2m_dev *dev = container_of(vdev, struct audm2m_dev, vfd);
+
+ v4l2_device_unregister(&dev->v4l2_dev);
+ v4l2_m2m_release(dev->m2m_dev);
+
+ kfree(dev);
+}
+
+static const struct v4l2_file_operations audm2m_fops = {
+ .owner = THIS_MODULE,
+ .open = audm2m_open,
+ .release = audm2m_release,
+ .poll = v4l2_m2m_fop_poll,
+ .unlocked_ioctl = video_ioctl2,
+ .mmap = v4l2_m2m_fop_mmap,
+};
+
+static const struct video_device audm2m_videodev = {
+ .name = MEM2MEM_NAME,
+ .vfl_dir = VFL_DIR_M2M,
+ .fops = &audm2m_fops,
+ .ioctl_ops = &audm2m_ioctl_ops,
+ .minor = -1,
+ .release = audm2m_device_release,
+ .device_caps = V4L2_CAP_AUDIO_M2M | V4L2_CAP_STREAMING,
+};
+
+static const struct v4l2_m2m_ops m2m_ops = {
+ .device_run = device_run,
+ .job_ready = job_ready,
+ .job_abort = job_abort,
+};
+
+static int audm2m_probe(struct platform_device *pdev)
+{
+ struct audm2m_dev *dev;
+ struct video_device *vfd;
+ int ret;
+
+ dev = kzalloc(sizeof(*dev), GFP_KERNEL);
+ if (!dev)
+ return -ENOMEM;
+
+ ret = v4l2_device_register(&pdev->dev, &dev->v4l2_dev);
+ if (ret)
+ goto error_free;
+
+ atomic_set(&dev->num_inst, 0);
+ mutex_init(&dev->dev_mutex);
+
+ dev->vfd = audm2m_videodev;
+ vfd = &dev->vfd;
+ vfd->lock = &dev->dev_mutex;
+ vfd->v4l2_dev = &dev->v4l2_dev;
+
+ video_set_drvdata(vfd, dev);
+ v4l2_info(&dev->v4l2_dev,
+ "Device registered as /dev/v4l-audio%d\n", vfd->num);
+
+ platform_set_drvdata(pdev, dev);
+
+ dev->m2m_dev = v4l2_m2m_init(&m2m_ops);
+ if (IS_ERR(dev->m2m_dev)) {
+ v4l2_err(&dev->v4l2_dev, "Failed to init mem2mem device\n");
+ ret = PTR_ERR(dev->m2m_dev);
+ dev->m2m_dev = NULL;
+ goto error_dev;
+ }
+
+ ret = video_register_device(vfd, VFL_TYPE_AUDIO, 0);
+ if (ret) {
+ v4l2_err(&dev->v4l2_dev, "Failed to register video device\n");
+ goto error_m2m;
+ }
+
+ return 0;
+
+error_m2m:
+ v4l2_m2m_release(dev->m2m_dev);
+error_dev:
+ v4l2_device_unregister(&dev->v4l2_dev);
+error_free:
+ kfree(dev);
+
+ return ret;
+}
+
+static void audm2m_remove(struct platform_device *pdev)
+{
+ struct audm2m_dev *dev = platform_get_drvdata(pdev);
+
+ v4l2_info(&dev->v4l2_dev, "Removing " MEM2MEM_NAME);
+
+ video_unregister_device(&dev->vfd);
+}
+
+static struct platform_driver audm2m_pdrv = {
+ .probe = audm2m_probe,
+ .remove_new = audm2m_remove,
+ .driver = {
+ .name = MEM2MEM_NAME,
+ },
+};
+
+static void __exit audm2m_exit(void)
+{
+ platform_driver_unregister(&audm2m_pdrv);
+ platform_device_unregister(&audm2m_pdev);
+}
+
+static int __init audm2m_init(void)
+{
+ int ret;
+
+ ret = platform_device_register(&audm2m_pdev);
+ if (ret)
+ return ret;
+
+ ret = platform_driver_register(&audm2m_pdrv);
+ if (ret)
+ platform_device_unregister(&audm2m_pdev);
+
+ return ret;
+}
+
+module_init(audm2m_init);
+module_exit(audm2m_exit);
--
2.34.1

2023-09-14 07:59:14

by Shengjiu Wang

Subject: [RFC PATCH v3 1/9] ASoC: fsl_asrc: define functions for memory to memory usage

The ASRC can be used in a memory-to-memory case; define several
functions for m2m usage (a rough call-sequence sketch follows the
list below):

m2m_start_part_one: first part of the start steps
m2m_start_part_two: second part of the start steps
m2m_stop_part_one: first part of the stop steps
m2m_stop_part_two: second part of the stop steps, optional
m2m_check_format: check whether a format is supported
m2m_calc_out_len: calculate the output length according to the input length
m2m_get_maxburst: get the DMA burst size
m2m_pair_suspend: suspend function of the pair, optional
m2m_pair_resume: resume function of the pair
get_output_fifo_size: get the remaining data size in the FIFO
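
Roughly how a memory-to-memory user of these hooks is expected to drive
them (order inferred from the later patches in this series; DMA setup and
error handling omitted):

    asrc->request_pair(pair->channels, pair);
    asrc->m2m_start_part_one(pair);         /* configure and start the pair */
    /* ... prepare and submit the input/output DMA transfers ... */
    asrc->m2m_start_part_two(pair);         /* restore the real FIFO watermarks */
    /* ... wait for both DMA completions ... */
    out_len = asrc->m2m_calc_out_len(pair, in_len);
    remaining = asrc->get_output_fifo_size(pair); /* drain what is left in FIFO */
    asrc->m2m_stop_part_one(pair);
    if (asrc->m2m_stop_part_two)
            asrc->m2m_stop_part_two(pair);  /* optional */
    asrc->release_pair(pair);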

Signed-off-by: Shengjiu Wang <[email protected]>
---
sound/soc/fsl/fsl_asrc.c | 150 ++++++++++++++++++++++++++++++++
sound/soc/fsl/fsl_asrc.h | 2 +
sound/soc/fsl/fsl_asrc_common.h | 42 +++++++++
3 files changed, 194 insertions(+)

diff --git a/sound/soc/fsl/fsl_asrc.c b/sound/soc/fsl/fsl_asrc.c
index b793263291dc..f9d830e0957f 100644
--- a/sound/soc/fsl/fsl_asrc.c
+++ b/sound/soc/fsl/fsl_asrc.c
@@ -1063,6 +1063,145 @@ static int fsl_asrc_get_fifo_addr(u8 dir, enum asrc_pair_index index)
return REG_ASRDx(dir, index);
}

+/* Get sample numbers in FIFO */
+static unsigned int fsl_asrc_get_output_fifo_size(struct fsl_asrc_pair *pair)
+{
+ struct fsl_asrc *asrc = pair->asrc;
+ enum asrc_pair_index index = pair->index;
+ u32 val;
+
+ regmap_read(asrc->regmap, REG_ASRFST(index), &val);
+
+ val &= ASRFSTi_OUTPUT_FIFO_MASK;
+
+ return val >> ASRFSTi_OUTPUT_FIFO_SHIFT;
+}
+
+static int fsl_asrc_m2m_start_part_one(struct fsl_asrc_pair *pair)
+{
+ struct fsl_asrc_pair_priv *pair_priv = pair->private;
+ struct fsl_asrc *asrc = pair->asrc;
+ struct device *dev = &asrc->pdev->dev;
+ struct asrc_config config;
+ int ret;
+
+ /* fill config */
+ config.pair = pair->index;
+ config.channel_num = pair->channels;
+ config.input_sample_rate = pair->rate[IN];
+ config.output_sample_rate = pair->rate[OUT];
+ config.input_format = pair->sample_format[IN];
+ config.output_format = pair->sample_format[OUT];
+ config.inclk = INCLK_NONE;
+ config.outclk = OUTCLK_ASRCK1_CLK;
+
+ pair_priv->config = &config;
+ ret = fsl_asrc_config_pair(pair, true);
+ if (ret) {
+ dev_err(dev, "failed to config pair: %d\n", ret);
+ return ret;
+ }
+
+ fsl_asrc_start_pair(pair);
+
+ return 0;
+}
+
+static int fsl_asrc_m2m_start_part_two(struct fsl_asrc_pair *pair)
+{
+ /*
+ * Clear DMA request during the stall state of ASRC:
+ * During STALL state, the remaining in input fifo would never be
+ * smaller than the input threshold while the output fifo would not
+ * be bigger than output one. Thus the DMA request would be cleared.
+ */
+ fsl_asrc_set_watermarks(pair, ASRC_FIFO_THRESHOLD_MIN,
+ ASRC_FIFO_THRESHOLD_MAX);
+
+ /* Update the real input threshold to raise DMA request */
+ fsl_asrc_set_watermarks(pair, ASRC_M2M_INPUTFIFO_WML,
+ ASRC_M2M_OUTPUTFIFO_WML);
+
+ return 0;
+}
+
+static int fsl_asrc_m2m_stop_part_one(struct fsl_asrc_pair *pair)
+{
+ fsl_asrc_stop_pair(pair);
+
+ return 0;
+}
+
+static int fsl_asrc_m2m_check_format(u8 dir, u32 format)
+{
+ u64 support_format = FSL_ASRC_FORMATS;
+
+ if (dir == IN)
+ support_format |= SNDRV_PCM_FMTBIT_S8;
+
+ if (!(1 << format & support_format))
+ return -EINVAL;
+
+ return 0;
+}
+
+static int fsl_asrc_m2m_check_rate(u8 dir, u32 rate)
+{
+ if (rate < 5512 || rate > 192000)
+ return -EINVAL;
+
+ return 0;
+}
+
+static int fsl_asrc_m2m_check_channel(u8 dir, u32 channels)
+{
+ if (channels < 1 || channels > 10)
+ return -EINVAL;
+
+ return 0;
+}
+
+/* calculate capture data length according to output data length and sample rate */
+static int fsl_asrc_m2m_calc_out_len(struct fsl_asrc_pair *pair, int input_buffer_length)
+{
+ unsigned int in_width, out_width;
+ unsigned int channels = pair->channels;
+ unsigned int in_samples, out_samples;
+ unsigned int out_length;
+
+ in_width = snd_pcm_format_physical_width(pair->sample_format[IN]) / 8;
+ out_width = snd_pcm_format_physical_width(pair->sample_format[OUT]) / 8;
+
+ in_samples = input_buffer_length / in_width / channels;
+ out_samples = pair->rate[OUT] * in_samples / pair->rate[IN];
+ out_length = (out_samples - ASRC_OUTPUT_LAST_SAMPLE) * out_width * channels;
+
+ return out_length;
+}
+
+static int fsl_asrc_m2m_get_maxburst(u8 dir, struct fsl_asrc_pair *pair)
+{
+ struct fsl_asrc *asrc = pair->asrc;
+ struct fsl_asrc_priv *asrc_priv = asrc->private;
+ int wml = (dir == IN) ? ASRC_M2M_INPUTFIFO_WML : ASRC_M2M_OUTPUTFIFO_WML;
+
+ if (!asrc_priv->soc->use_edma)
+ return wml * pair->channels;
+ else
+ return 1;
+}
+
+static int fsl_asrc_m2m_pair_resume(struct fsl_asrc_pair *pair)
+{
+ struct fsl_asrc *asrc = pair->asrc;
+ int i;
+
+ for (i = 0; i < pair->channels * 4; i++)
+ regmap_write(asrc->regmap, REG_ASRDI(pair->index), 0);
+
+ return 0;
+}
+
static int fsl_asrc_runtime_resume(struct device *dev);
static int fsl_asrc_runtime_suspend(struct device *dev);

@@ -1147,6 +1286,17 @@ static int fsl_asrc_probe(struct platform_device *pdev)
asrc->get_fifo_addr = fsl_asrc_get_fifo_addr;
asrc->pair_priv_size = sizeof(struct fsl_asrc_pair_priv);

+ asrc->m2m_start_part_one = fsl_asrc_m2m_start_part_one;
+ asrc->m2m_start_part_two = fsl_asrc_m2m_start_part_two;
+ asrc->m2m_stop_part_one = fsl_asrc_m2m_stop_part_one;
+ asrc->get_output_fifo_size = fsl_asrc_get_output_fifo_size;
+ asrc->m2m_check_format = fsl_asrc_m2m_check_format;
+ asrc->m2m_check_rate = fsl_asrc_m2m_check_rate;
+ asrc->m2m_check_channel = fsl_asrc_m2m_check_channel;
+ asrc->m2m_calc_out_len = fsl_asrc_m2m_calc_out_len;
+ asrc->m2m_get_maxburst = fsl_asrc_m2m_get_maxburst;
+ asrc->m2m_pair_resume = fsl_asrc_m2m_pair_resume;
+
if (of_device_is_compatible(np, "fsl,imx35-asrc")) {
asrc_priv->clk_map[IN] = input_clk_map_imx35;
asrc_priv->clk_map[OUT] = output_clk_map_imx35;
diff --git a/sound/soc/fsl/fsl_asrc.h b/sound/soc/fsl/fsl_asrc.h
index 86d2422ad606..1c492eb237f5 100644
--- a/sound/soc/fsl/fsl_asrc.h
+++ b/sound/soc/fsl/fsl_asrc.h
@@ -12,6 +12,8 @@

#include "fsl_asrc_common.h"

+#define ASRC_M2M_INPUTFIFO_WML 0x4
+#define ASRC_M2M_OUTPUTFIFO_WML 0x2
#define ASRC_DMA_BUFFER_NUM 2
#define ASRC_INPUTFIFO_THRESHOLD 32
#define ASRC_OUTPUTFIFO_THRESHOLD 32
diff --git a/sound/soc/fsl/fsl_asrc_common.h b/sound/soc/fsl/fsl_asrc_common.h
index 7e1c13ca37f1..7f7e725075fe 100644
--- a/sound/soc/fsl/fsl_asrc_common.h
+++ b/sound/soc/fsl/fsl_asrc_common.h
@@ -34,6 +34,11 @@ enum asrc_pair_index {
* @pos: hardware pointer position
* @req_dma_chan: flag to release dev_to_dev chan
* @private: pair private area
+ * @complete: dma task complete
+ * @sample_format: format of m2m
+ * @rate: rate of m2m
+ * @buf_len: buffer length of m2m
+ * @req_pair: flag for request pair
*/
struct fsl_asrc_pair {
struct fsl_asrc *asrc;
@@ -49,6 +54,13 @@ struct fsl_asrc_pair {
bool req_dma_chan;

void *private;
+
+ /* used for m2m */
+ struct completion complete[2];
+ snd_pcm_format_t sample_format[2];
+ unsigned int rate[2];
+ unsigned int buf_len[2];
+ bool req_pair;
};

/**
@@ -72,6 +84,19 @@ struct fsl_asrc_pair {
* @request_pair: function pointer
* @release_pair: function pointer
* @get_fifo_addr: function pointer
+ * @m2m_start_part_one: function pointer
+ * @m2m_start_part_two: function pointer
+ * @m2m_stop_part_one: function pointer
+ * @m2m_stop_part_two: function pointer
+ * @m2m_check_format: function pointer
+ * @m2m_check_rate: function pointer
+ * @m2m_check_channel: function pointer
+ * @m2m_calc_out_len: function pointer
+ * @m2m_get_maxburst: function pointer
+ * @m2m_pair_suspend: function pointer
+ * @m2m_pair_resume: function pointer
+ * @m2m_set_ratio_mod: function pointer
+ * @get_output_fifo_size: function pointer
* @pair_priv_size: size of pair private struct.
* @private: private data structure
*/
@@ -97,6 +122,23 @@ struct fsl_asrc {
int (*request_pair)(int channels, struct fsl_asrc_pair *pair);
void (*release_pair)(struct fsl_asrc_pair *pair);
int (*get_fifo_addr)(u8 dir, enum asrc_pair_index index);
+
+ int (*m2m_start_part_one)(struct fsl_asrc_pair *pair);
+ int (*m2m_start_part_two)(struct fsl_asrc_pair *pair);
+ int (*m2m_stop_part_one)(struct fsl_asrc_pair *pair);
+ int (*m2m_stop_part_two)(struct fsl_asrc_pair *pair);
+
+ int (*m2m_check_format)(u8 dir, u32 format);
+ int (*m2m_check_rate)(u8 dir, u32 rate);
+ int (*m2m_check_channel)(u8 dir, u32 channels);
+
+ int (*m2m_calc_out_len)(struct fsl_asrc_pair *pair, int input_buffer_length);
+ int (*m2m_get_maxburst)(u8 dir, struct fsl_asrc_pair *pair);
+ int (*m2m_pair_suspend)(struct fsl_asrc_pair *pair);
+ int (*m2m_pair_resume)(struct fsl_asrc_pair *pair);
+ int (*m2m_set_ratio_mod)(struct fsl_asrc_pair *pair, int val);
+
+ unsigned int (*get_output_fifo_size)(struct fsl_asrc_pair *pair);
size_t pair_priv_size;

void *private;
--
2.34.1

2023-09-14 08:08:54

by Shengjiu Wang

Subject: [RFC PATCH v3 5/9] ASoC: fsl_easrc: register m2m platform device

Register an m2m platform device so that the user can use the
M2M feature.

Signed-off-by: Shengjiu Wang <[email protected]>
---
sound/soc/fsl/fsl_easrc.c | 13 +++++++++++++
1 file changed, 13 insertions(+)

diff --git a/sound/soc/fsl/fsl_easrc.c b/sound/soc/fsl/fsl_easrc.c
index f517b407672d..b719d517f9b4 100644
--- a/sound/soc/fsl/fsl_easrc.c
+++ b/sound/soc/fsl/fsl_easrc.c
@@ -2084,6 +2084,7 @@ MODULE_DEVICE_TABLE(of, fsl_easrc_dt_ids);
static int fsl_easrc_probe(struct platform_device *pdev)
{
struct fsl_easrc_priv *easrc_priv;
+ struct fsl_asrc_m2m_pdata m2m_pdata;
struct device *dev = &pdev->dev;
struct fsl_asrc *easrc;
struct resource *res;
@@ -2202,11 +2203,23 @@ static int fsl_easrc_probe(struct platform_device *pdev)
return ret;
}

+ m2m_pdata.asrc = easrc;
+ easrc->m2m_pdev = platform_device_register_data(&pdev->dev,
+ M2M_DRV_NAME,
+ PLATFORM_DEVID_AUTO,
+ &m2m_pdata,
+ sizeof(m2m_pdata));
+
return 0;
}

static void fsl_easrc_remove(struct platform_device *pdev)
{
+ struct fsl_asrc *easrc = dev_get_drvdata(&pdev->dev);
+
+ if (easrc->m2m_pdev && !IS_ERR(easrc->m2m_pdev))
+ platform_device_unregister(easrc->m2m_pdev);
+
pm_runtime_disable(&pdev->dev);
}

--
2.34.1

2023-09-14 08:08:56

by Shengjiu Wang

Subject: [RFC PATCH v3 9/9] media: imx-asrc: Add memory to memory driver

Implement the ASRC memory-to-memory function using
the v4l2 framework; the user can use this function through the
v4l2 ioctl interface.

The user sends the output and capture buffers to the driver and the
driver stores the converted data in the capture buffer.

This feature can be shared by the ASRC and EASRC drivers.
Signed-off-by: Shengjiu Wang <[email protected]>
---
drivers/media/platform/nxp/Kconfig | 12 +
drivers/media/platform/nxp/Makefile | 1 +
drivers/media/platform/nxp/imx-asrc.c | 1058 +++++++++++++++++++++++++
3 files changed, 1071 insertions(+)
create mode 100644 drivers/media/platform/nxp/imx-asrc.c

diff --git a/drivers/media/platform/nxp/Kconfig b/drivers/media/platform/nxp/Kconfig
index 40e3436669e2..8234644ee341 100644
--- a/drivers/media/platform/nxp/Kconfig
+++ b/drivers/media/platform/nxp/Kconfig
@@ -67,3 +67,15 @@ config VIDEO_MX2_EMMAPRP

source "drivers/media/platform/nxp/dw100/Kconfig"
source "drivers/media/platform/nxp/imx-jpeg/Kconfig"
+
+config VIDEO_IMX_ASRC
+ tristate "NXP i.MX ASRC M2M support"
+ depends on V4L_MEM2MEM_DRIVERS
+ depends on MEDIA_SUPPORT
+ select VIDEOBUF2_DMA_CONTIG
+ select V4L2_MEM2MEM_DEV
+ help
+ Say Y if you want to add ASRC M2M support for NXP CPUs.
+ It is a complement for ASRC M2P and ASRC P2M features.
+ This option is only useful for out-of-tree drivers since
+ in-tree drivers select it automatically.
diff --git a/drivers/media/platform/nxp/Makefile b/drivers/media/platform/nxp/Makefile
index 4d90eb713652..1325675e34f5 100644
--- a/drivers/media/platform/nxp/Makefile
+++ b/drivers/media/platform/nxp/Makefile
@@ -9,3 +9,4 @@ obj-$(CONFIG_VIDEO_IMX8MQ_MIPI_CSI2) += imx8mq-mipi-csi2.o
obj-$(CONFIG_VIDEO_IMX_MIPI_CSIS) += imx-mipi-csis.o
obj-$(CONFIG_VIDEO_IMX_PXP) += imx-pxp.o
obj-$(CONFIG_VIDEO_MX2_EMMAPRP) += mx2_emmaprp.o
+obj-$(CONFIG_VIDEO_IMX_ASRC) += imx-asrc.o
diff --git a/drivers/media/platform/nxp/imx-asrc.c b/drivers/media/platform/nxp/imx-asrc.c
new file mode 100644
index 000000000000..21079c7abd27
--- /dev/null
+++ b/drivers/media/platform/nxp/imx-asrc.c
@@ -0,0 +1,1058 @@
+// SPDX-License-Identifier: GPL-2.0
+//
+// Copyright (C) 2014-2016 Freescale Semiconductor, Inc.
+// Copyright (C) 2019-2023 NXP
+//
+// Freescale ASRC Memory to Memory (M2M) driver
+
+#include <linux/dma/imx-dma.h>
+#include <linux/pm_runtime.h>
+#include <media/v4l2-ctrls.h>
+#include <media/v4l2-device.h>
+#include <media/v4l2-event.h>
+#include <media/v4l2-fh.h>
+#include <media/v4l2-ioctl.h>
+#include <media/v4l2-mem2mem.h>
+#include <media/videobuf2-dma-contig.h>
+#include <sound/dmaengine_pcm.h>
+#include <sound/fsl_asrc_common.h>
+
+#define V4L_CAP OUT
+#define V4L_OUT IN
+
+/* Flags that indicate a format can be used for capture/output */
+#define MEM2MEM_CAPTURE BIT(0)
+#define MEM2MEM_OUTPUT BIT(1)
+
+#define ASRC_xPUT_DMA_CALLBACK(dir) \
+ (((dir) == V4L_OUT) ? asrc_input_dma_callback \
+ : asrc_output_dma_callback)
+
+#define DIR_STR(dir) (dir) == V4L_OUT ? "out" : "cap"
+
+#define ASRC_M2M_BUFFER_SIZE (512 * 1024)
+#define ASRC_M2M_PERIOD_SIZE (48 * 1024)
+#define ASRC_M2M_SG_NUM (20)
+
+struct asrc_pair_m2m {
+ struct fsl_asrc_pair *pair;
+ struct asrc_m2m *m2m;
+ struct v4l2_fh fh;
+ struct v4l2_ctrl_handler ctrl_handler;
+};
+
+struct asrc_m2m {
+ struct fsl_asrc *asrc;
+ struct v4l2_device v4l2_dev;
+ struct v4l2_m2m_dev *m2m_dev;
+ struct video_device *dec_vdev;
+ struct mutex mlock; /* v4l2 ioctls serialization */
+ struct platform_device *pdev;
+};
+
+struct asrc_fmt {
+ u32 fourcc;
+ u32 types;
+};
+
+static struct asrc_fmt formats[] = {
+ {
+ .fourcc = V4L2_AUDIO_FMT_LPCM,
+ .types = MEM2MEM_CAPTURE | MEM2MEM_OUTPUT,
+ },
+};
+
+#define NUM_FORMATS ARRAY_SIZE(formats)
+
+static inline struct asrc_pair_m2m *asrc_m2m_fh_to_ctx(struct v4l2_fh *fh)
+{
+ return container_of(fh, struct asrc_pair_m2m, fh);
+}
+
+/**
+ * asrc_read_last_fifo: read all the remaining data from FIFO
+ * @pair: Structure pointer of fsl_asrc_pair
+ * @dma_vaddr: virtual address of capture buffer
+ * @length: payload length of capture buffer
+ */
+static void asrc_read_last_fifo(struct fsl_asrc_pair *pair, void *dma_vaddr, u32 *length)
+{
+ struct fsl_asrc *asrc = pair->asrc;
+ enum asrc_pair_index index = pair->index;
+ u32 i, reg, size, t_size = 0, width;
+ u32 *reg32 = NULL;
+ u16 *reg16 = NULL;
+ u8 *reg24 = NULL;
+
+ width = snd_pcm_format_physical_width(pair->sample_format[V4L_CAP]);
+ if (width == 32)
+ reg32 = dma_vaddr + *length;
+ else if (width == 16)
+ reg16 = dma_vaddr + *length;
+ else
+ reg24 = dma_vaddr + *length;
+retry:
+ size = asrc->get_output_fifo_size(pair);
+ if (size + *length > ASRC_M2M_BUFFER_SIZE)
+ goto end;
+
+ for (i = 0; i < size * pair->channels; i++) {
+ regmap_read(asrc->regmap, asrc->get_fifo_addr(OUT, index), &reg);
+ if (reg32) {
+ *(reg32) = reg;
+ reg32++;
+ } else if (reg16) {
+ *(reg16) = (u16)reg;
+ reg16++;
+ } else {
+ *reg24++ = (u8)reg;
+ *reg24++ = (u8)(reg >> 8);
+ *reg24++ = (u8)(reg >> 16);
+ }
+ }
+ t_size += size;
+
+ /* In case there is data left in FIFO */
+ if (size)
+ goto retry;
+end:
+ /* Update payload length */
+ if (reg32)
+ *length += t_size * pair->channels * 4;
+ else if (reg16)
+ *length += t_size * pair->channels * 2;
+ else
+ *length += t_size * pair->channels * 3;
+}
+
+static int asrc_m2m_start_streaming(struct vb2_queue *q, unsigned int count)
+{
+ struct asrc_pair_m2m *pair_m2m = vb2_get_drv_priv(q);
+ struct fsl_asrc_pair *pair = pair_m2m->pair;
+ struct asrc_m2m *m2m = pair_m2m->m2m;
+ struct fsl_asrc *asrc = pair->asrc;
+ struct device *dev = &m2m->pdev->dev;
+ struct vb2_v4l2_buffer *buf;
+ bool request_flag = false;
+ int ret;
+
+ dev_dbg(dev, "Start streaming pair=%p, %d\n", pair, q->type);
+
+ ret = pm_runtime_get_sync(dev);
+ if (ret < 0) {
+ dev_err(dev, "Failed to power up asrc\n");
+ goto err_pm_runtime;
+ }
+
+ /* Request asrc pair/context */
+ if (!pair->req_pair) {
+ /* flag for error handler of this function */
+ request_flag = true;
+
+ ret = asrc->request_pair(pair->channels, pair);
+ if (ret) {
+ dev_err(dev, "failed to request pair: %d\n", ret);
+ goto err_request_pair;
+ }
+
+ ret = asrc->m2m_start_part_one(pair);
+ if (ret) {
+ dev_err(dev, "failed to start pair part one: %d\n", ret);
+ goto err_start_part_one;
+ }
+
+ pair->req_pair = true;
+ }
+
+ /* Request dma channels */
+ if (V4L2_TYPE_IS_OUTPUT(q->type)) {
+ pair->dma_chan[V4L_OUT] = asrc->get_dma_channel(pair, IN);
+ if (!pair->dma_chan[V4L_OUT]) {
+ dev_err(dev, "[ctx%d] failed to get input DMA channel\n", pair->index);
+ ret = -EBUSY;
+ goto err_dma_channel;
+ }
+ } else {
+ pair->dma_chan[V4L_CAP] = asrc->get_dma_channel(pair, OUT);
+ if (!pair->dma_chan[V4L_CAP]) {
+ dev_err(dev, "[ctx%d] failed to get output DMA channel\n", pair->index);
+ ret = -EBUSY;
+ goto err_dma_channel;
+ }
+ }
+
+ v4l2_m2m_update_start_streaming_state(pair_m2m->fh.m2m_ctx, q);
+
+ return 0;
+
+err_dma_channel:
+ if (request_flag && asrc->m2m_stop_part_one)
+ asrc->m2m_stop_part_one(pair);
+err_start_part_one:
+ if (request_flag)
+ asrc->release_pair(pair);
+err_request_pair:
+ pm_runtime_put_sync(dev);
+err_pm_runtime:
+ /* Release buffers */
+ if (V4L2_TYPE_IS_OUTPUT(q->type)) {
+ while ((buf = v4l2_m2m_src_buf_remove(pair_m2m->fh.m2m_ctx)))
+ v4l2_m2m_buf_done(buf, VB2_BUF_STATE_QUEUED);
+ } else {
+ while ((buf = v4l2_m2m_dst_buf_remove(pair_m2m->fh.m2m_ctx)))
+ v4l2_m2m_buf_done(buf, VB2_BUF_STATE_QUEUED);
+ }
+ return ret;
+}
+
+static void asrc_m2m_stop_streaming(struct vb2_queue *q)
+{
+ struct asrc_pair_m2m *pair_m2m = vb2_get_drv_priv(q);
+ struct asrc_m2m *m2m = pair_m2m->m2m;
+ struct fsl_asrc_pair *pair = pair_m2m->pair;
+ struct fsl_asrc *asrc = pair->asrc;
+ struct device *dev = &m2m->pdev->dev;
+
+ dev_dbg(dev, "Stop streaming pair=%p, %d\n", pair, q->type);
+
+ v4l2_m2m_update_stop_streaming_state(pair_m2m->fh.m2m_ctx, q);
+
+ /* Stop & release pair/context */
+ if (asrc->m2m_stop_part_two)
+ asrc->m2m_stop_part_two(pair);
+
+ if (pair->req_pair) {
+ if (asrc->m2m_stop_part_one)
+ asrc->m2m_stop_part_one(pair);
+ asrc->release_pair(pair);
+ pair->req_pair = false;
+ }
+
+ /* Release dma channel */
+ if (V4L2_TYPE_IS_OUTPUT(q->type)) {
+ if (pair->dma_chan[V4L_OUT])
+ dma_release_channel(pair->dma_chan[V4L_OUT]);
+ } else {
+ if (pair->dma_chan[V4L_CAP])
+ dma_release_channel(pair->dma_chan[V4L_CAP]);
+ }
+
+ pm_runtime_put_sync(dev);
+}
+
+static int asrc_m2m_queue_setup(struct vb2_queue *q,
+ unsigned int *num_buffers, unsigned int *num_planes,
+ unsigned int sizes[], struct device *alloc_devs[])
+{
+ struct asrc_pair_m2m *pair_m2m = vb2_get_drv_priv(q);
+ struct fsl_asrc_pair *pair = pair_m2m->pair;
+
+ /* single buffer */
+ *num_planes = 1;
+
+ /*
+ * The capture buffer size depends on output buffer size
+ * and the convert ratio.
+ *
+ * Here we just use a fixed length for the capture and output buffers.
+ * The user needs to take care of it.
+ */
+
+ if (V4L2_TYPE_IS_OUTPUT(q->type))
+ sizes[0] = pair->buf_len[V4L_OUT];
+ else
+ sizes[0] = pair->buf_len[V4L_CAP];
+
+ return 0;
+}
+
+static void asrc_m2m_buf_queue(struct vb2_buffer *vb)
+{
+ struct vb2_v4l2_buffer *vbuf = to_vb2_v4l2_buffer(vb);
+ struct asrc_pair_m2m *pair_m2m = vb2_get_drv_priv(vb->vb2_queue);
+
+ /* queue buffer */
+ v4l2_m2m_buf_queue(pair_m2m->fh.m2m_ctx, vbuf);
+}
+
+static const struct vb2_ops asrc_m2m_qops = {
+ .wait_prepare = vb2_ops_wait_prepare,
+ .wait_finish = vb2_ops_wait_finish,
+ .start_streaming = asrc_m2m_start_streaming,
+ .stop_streaming = asrc_m2m_stop_streaming,
+ .queue_setup = asrc_m2m_queue_setup,
+ .buf_queue = asrc_m2m_buf_queue,
+};
+
+/* Init video buffer queue for src and dst. */
+static int asrc_m2m_queue_init(void *priv, struct vb2_queue *src_vq,
+ struct vb2_queue *dst_vq)
+{
+ struct asrc_pair_m2m *pair_m2m = priv;
+ struct asrc_m2m *m2m = pair_m2m->m2m;
+ int ret;
+
+ src_vq->type = V4L2_BUF_TYPE_AUDIO_OUTPUT;
+ src_vq->io_modes = VB2_MMAP | VB2_DMABUF;
+ src_vq->drv_priv = pair_m2m;
+ src_vq->buf_struct_size = sizeof(struct v4l2_m2m_buffer);
+ src_vq->ops = &asrc_m2m_qops;
+ src_vq->mem_ops = &vb2_dma_contig_memops;
+ src_vq->timestamp_flags = V4L2_BUF_FLAG_TIMESTAMP_COPY;
+ src_vq->lock = &m2m->mlock;
+ src_vq->dev = &m2m->pdev->dev;
+ src_vq->min_buffers_needed = 1;
+
+ ret = vb2_queue_init(src_vq);
+ if (ret)
+ return ret;
+
+ dst_vq->type = V4L2_BUF_TYPE_AUDIO_CAPTURE;
+ dst_vq->io_modes = VB2_MMAP | VB2_DMABUF;
+ dst_vq->drv_priv = pair_m2m;
+ dst_vq->buf_struct_size = sizeof(struct v4l2_m2m_buffer);
+ dst_vq->ops = &asrc_m2m_qops;
+ dst_vq->mem_ops = &vb2_dma_contig_memops;
+ dst_vq->timestamp_flags = V4L2_BUF_FLAG_TIMESTAMP_COPY;
+ dst_vq->lock = &m2m->mlock;
+ dst_vq->dev = &m2m->pdev->dev;
+ dst_vq->min_buffers_needed = 1;
+
+ ret = vb2_queue_init(dst_vq);
+ return ret;
+}
+
+static int asrc_m2m_op_s_ctrl(struct v4l2_ctrl *ctrl)
+{
+ struct asrc_pair_m2m *pair_m2m =
+ container_of(ctrl->handler, struct asrc_pair_m2m, ctrl_handler);
+ struct fsl_asrc_pair *pair = pair_m2m->pair;
+ struct fsl_asrc *asrc = pair->asrc;
+ int ret = 0;
+
+ switch (ctrl->id) {
+ case V4L2_CID_USER_IMX_ASRC_RATIO_MOD:
+ if (asrc->m2m_set_ratio_mod)
+ asrc->m2m_set_ratio_mod(pair, ctrl->val);
+ break;
+ default:
+ ret = -EINVAL;
+ break;
+ }
+
+ return ret;
+}
+
+static const struct v4l2_ctrl_ops asrc_m2m_ctrl_ops = {
+ .s_ctrl = asrc_m2m_op_s_ctrl,
+};
+
+/* system callback for open() */
+static int asrc_m2m_open(struct file *file)
+{
+ struct asrc_m2m *m2m = video_drvdata(file);
+ struct fsl_asrc *asrc = m2m->asrc;
+ struct video_device *vdev = video_devdata(file);
+ struct fsl_asrc_pair *pair;
+ struct asrc_pair_m2m *pair_m2m;
+ int ret = 0;
+
+ if (mutex_lock_interruptible(&m2m->mlock))
+ return -ERESTARTSYS;
+
+ pair = kzalloc(sizeof(*pair) + asrc->pair_priv_size, GFP_KERNEL);
+ if (!pair) {
+ ret = -ENOMEM;
+ goto err_alloc_pair;
+ }
+
+ pair_m2m = kzalloc(sizeof(*pair_m2m), GFP_KERNEL);
+ if (!pair_m2m) {
+ ret = -ENOMEM;
+ goto err_alloc_pair_m2m;
+ }
+
+ pair->private = (void *)pair + sizeof(struct fsl_asrc_pair);
+ pair->asrc = m2m->asrc;
+
+ pair->buf_len[V4L_OUT] = ASRC_M2M_BUFFER_SIZE;
+ pair->buf_len[V4L_CAP] = ASRC_M2M_BUFFER_SIZE;
+
+ pair->channels = 2;
+ pair->rate[V4L_OUT] = 8000;
+ pair->rate[V4L_CAP] = 8000;
+ pair->sample_format[V4L_OUT] = SNDRV_PCM_FORMAT_S16_LE;
+ pair->sample_format[V4L_CAP] = SNDRV_PCM_FORMAT_S16_LE;
+
+ init_completion(&pair->complete[V4L_OUT]);
+ init_completion(&pair->complete[V4L_CAP]);
+
+ v4l2_fh_init(&pair_m2m->fh, vdev);
+ v4l2_fh_add(&pair_m2m->fh);
+ file->private_data = &pair_m2m->fh;
+
+ pair_m2m->pair = pair;
+ pair_m2m->m2m = m2m;
+ /* m2m context init */
+ pair_m2m->fh.m2m_ctx = v4l2_m2m_ctx_init(m2m->m2m_dev, pair_m2m,
+ asrc_m2m_queue_init);
+ if (IS_ERR(pair_m2m->fh.m2m_ctx)) {
+ ret = PTR_ERR(pair_m2m->fh.m2m_ctx);
+ goto err_ctx_init;
+ }
+
+ v4l2_ctrl_handler_init(&pair_m2m->ctrl_handler, 2);
+
+ /* ratio modifier control for runtime clock-drift adjustment */
+ v4l2_ctrl_new_std(&pair_m2m->ctrl_handler, &asrc_m2m_ctrl_ops,
+ V4L2_CID_USER_IMX_ASRC_RATIO_MOD,
+ 0xFFFFFFFF80000001, 0x7fffffff, 1, 0);
+
+ if (pair_m2m->ctrl_handler.error) {
+ ret = pair_m2m->ctrl_handler.error;
+ v4l2_ctrl_handler_free(&pair_m2m->ctrl_handler);
+ goto err_ctrl_handler;
+ }
+
+ pair_m2m->fh.ctrl_handler = &pair_m2m->ctrl_handler;
+
+ mutex_unlock(&m2m->mlock);
+
+ return 0;
+
+err_ctrl_handler:
+ v4l2_m2m_ctx_release(pair_m2m->fh.m2m_ctx);
+err_ctx_init:
+ v4l2_fh_del(&pair_m2m->fh);
+ v4l2_fh_exit(&pair_m2m->fh);
+ kfree(pair_m2m);
+err_alloc_pair_m2m:
+ kfree(pair);
+err_alloc_pair:
+ mutex_unlock(&m2m->mlock);
+ return ret;
+}
+
+static int asrc_m2m_release(struct file *file)
+{
+ struct asrc_m2m *m2m = video_drvdata(file);
+ struct asrc_pair_m2m *pair_m2m = asrc_m2m_fh_to_ctx(file->private_data);
+ struct fsl_asrc_pair *pair = pair_m2m->pair;
+
+ mutex_lock(&m2m->mlock);
+ v4l2_ctrl_handler_free(&pair_m2m->ctrl_handler);
+ v4l2_m2m_ctx_release(pair_m2m->fh.m2m_ctx);
+ v4l2_fh_del(&pair_m2m->fh);
+ v4l2_fh_exit(&pair_m2m->fh);
+ kfree(pair_m2m);
+ kfree(pair);
+ mutex_unlock(&m2m->mlock);
+
+ return 0;
+}
+
+static const struct v4l2_file_operations asrc_m2m_fops = {
+ .owner = THIS_MODULE,
+ .open = asrc_m2m_open,
+ .release = asrc_m2m_release,
+ .poll = v4l2_m2m_fop_poll,
+ .unlocked_ioctl = video_ioctl2,
+ .mmap = v4l2_m2m_fop_mmap,
+};
+
+static int asrc_m2m_querycap(struct file *file, void *priv,
+ struct v4l2_capability *cap)
+{
+ strscpy(cap->driver, "asrc m2m", sizeof(cap->driver));
+ strscpy(cap->card, "asrc m2m", sizeof(cap->card));
+ cap->device_caps = V4L2_CAP_STREAMING | V4L2_CAP_AUDIO_M2M;
+ cap->capabilities = cap->device_caps | V4L2_CAP_DEVICE_CAPS;
+
+ return 0;
+}
+
+static int enum_fmt(struct v4l2_fmtdesc *f, u32 type)
+{
+ int i, num;
+ struct asrc_fmt *fmt;
+
+ num = 0;
+
+ for (i = 0; i < NUM_FORMATS; ++i) {
+ if (formats[i].types & type) {
+ if (num == f->index)
+ break;
+ /*
+ * Correct type but haven't reached our index yet,
+ * just increment per-type index
+ */
+ ++num;
+ }
+ }
+
+ if (i < NUM_FORMATS) {
+ /* Format found */
+ fmt = &formats[i];
+ f->pixelformat = fmt->fourcc;
+ return 0;
+ }
+
+ return -EINVAL;
+}
+
+static int asrc_m2m_enum_fmt_aud_cap(struct file *file, void *fh,
+ struct v4l2_fmtdesc *f)
+{
+ return enum_fmt(f, MEM2MEM_CAPTURE);
+}
+
+static int asrc_m2m_enum_fmt_aud_out(struct file *file, void *fh,
+ struct v4l2_fmtdesc *f)
+{
+ return enum_fmt(f, MEM2MEM_OUTPUT);
+}
+
+static int asrc_m2m_g_fmt_aud_cap(struct file *file, void *fh,
+ struct v4l2_format *f)
+{
+ struct asrc_pair_m2m *pair_m2m = asrc_m2m_fh_to_ctx(fh);
+ struct fsl_asrc_pair *pair = pair_m2m->pair;
+
+ f->fmt.audio.channels = pair->channels;
+ f->fmt.audio.rate = pair->rate[V4L_CAP];
+ f->fmt.audio.format = pair->sample_format[V4L_CAP];
+ f->fmt.audio.buffersize = pair->buf_len[V4L_CAP];
+
+ return 0;
+}
+
+static int asrc_m2m_g_fmt_aud_out(struct file *file, void *fh,
+ struct v4l2_format *f)
+{
+ struct asrc_pair_m2m *pair_m2m = asrc_m2m_fh_to_ctx(fh);
+ struct fsl_asrc_pair *pair = pair_m2m->pair;
+
+ f->fmt.audio.channels = pair->channels;
+ f->fmt.audio.rate = pair->rate[V4L_OUT];
+ f->fmt.audio.format = pair->sample_format[V4L_OUT];
+ f->fmt.audio.buffersize = pair->buf_len[V4L_OUT];
+
+ return 0;
+}
+
+/* V4L2 capture queue: carries the ASRC output */
+static int asrc_m2m_s_fmt_aud_cap(struct file *file, void *fh,
+ struct v4l2_format *f)
+{
+ struct asrc_pair_m2m *pair_m2m = asrc_m2m_fh_to_ctx(fh);
+ struct fsl_asrc_pair *pair = pair_m2m->pair;
+ struct asrc_m2m *m2m = pair_m2m->m2m;
+ struct fsl_asrc *asrc = pair->asrc;
+ struct device *dev = &m2m->pdev->dev;
+ int ret;
+
+ ret = asrc->m2m_check_format(OUT, f->fmt.audio.format);
+ if (ret)
+ f->fmt.audio.format = pair->sample_format[V4L_CAP];
+
+ ret = asrc->m2m_check_rate(OUT, f->fmt.audio.rate);
+ if (ret)
+ f->fmt.audio.rate = pair->rate[V4L_CAP];
+
+ ret = asrc->m2m_check_channel(OUT, f->fmt.audio.channels);
+ if (ret)
+ f->fmt.audio.channels = pair->channels;
+
+ if (pair->channels > 0 && pair->channels != f->fmt.audio.channels) {
+ dev_err(dev, "channels don't match for cap and out\n");
+ return -EINVAL;
+ }
+
+ pair->channels = f->fmt.audio.channels;
+ pair->rate[V4L_CAP] = f->fmt.audio.rate;
+ pair->sample_format[V4L_CAP] = f->fmt.audio.format;
+
+ return 0;
+}
+
+/* V4L2 output queue: carries the ASRC input */
+static int asrc_m2m_s_fmt_aud_out(struct file *file, void *fh,
+ struct v4l2_format *f)
+{
+ struct asrc_pair_m2m *pair_m2m = asrc_m2m_fh_to_ctx(fh);
+ struct fsl_asrc_pair *pair = pair_m2m->pair;
+ struct asrc_m2m *m2m = pair_m2m->m2m;
+ struct fsl_asrc *asrc = pair->asrc;
+ struct device *dev = &m2m->pdev->dev;
+ int ret;
+
+ ret = asrc->m2m_check_format(IN, f->fmt.audio.format);
+ if (ret)
+ f->fmt.audio.format = pair->sample_format[V4L_OUT];
+
+ ret = asrc->m2m_check_rate(IN, f->fmt.audio.rate);
+ if (ret)
+ f->fmt.audio.rate = pair->rate[V4L_OUT];
+
+ ret = asrc->m2m_check_channel(IN, f->fmt.audio.channels);
+ if (ret)
+ f->fmt.audio.channels = pair->channels;
+
+ if (pair->channels > 0 && pair->channels != f->fmt.audio.channels) {
+ dev_err(dev, "channels don't match for cap and out\n");
+ return -EINVAL;
+ }
+
+ pair->channels = f->fmt.audio.channels;
+ pair->rate[V4L_OUT] = f->fmt.audio.rate;
+ pair->sample_format[V4L_OUT] = f->fmt.audio.format;
+
+ return 0;
+}
+
+static int asrc_m2m_try_fmt_audio_cap(struct file *file, void *fh,
+ struct v4l2_format *f)
+{
+ struct asrc_pair_m2m *pair_m2m = asrc_m2m_fh_to_ctx(fh);
+ struct fsl_asrc_pair *pair = pair_m2m->pair;
+ struct asrc_m2m *m2m = video_drvdata(file);
+ struct fsl_asrc *asrc = m2m->asrc;
+ int ret;
+
+ ret = asrc->m2m_check_format(OUT, f->fmt.audio.format);
+ if (ret)
+ f->fmt.audio.format = pair->sample_format[V4L_CAP];
+
+ ret = asrc->m2m_check_rate(OUT, f->fmt.audio.rate);
+ if (ret)
+ f->fmt.audio.rate = pair->rate[V4L_CAP];
+
+ ret = asrc->m2m_check_channel(OUT, f->fmt.audio.channels);
+ if (ret)
+ f->fmt.audio.channels = pair->channels;
+
+ return 0;
+}
+
+static int asrc_m2m_try_fmt_audio_out(struct file *file, void *fh,
+ struct v4l2_format *f)
+{
+ struct asrc_pair_m2m *pair_m2m = asrc_m2m_fh_to_ctx(fh);
+ struct fsl_asrc_pair *pair = pair_m2m->pair;
+ struct asrc_m2m *m2m = video_drvdata(file);
+ struct fsl_asrc *asrc = m2m->asrc;
+ int ret;
+
+ ret = asrc->m2m_check_format(IN, f->fmt.audio.format);
+ if (ret)
+ f->fmt.audio.format = pair->sample_format[V4L_OUT];
+
+ ret = asrc->m2m_check_rate(IN, f->fmt.audio.rate);
+ if (ret)
+ f->fmt.audio.rate = pair->rate[V4L_OUT];
+
+ ret = asrc->m2m_check_channel(IN, f->fmt.audio.channels);
+ if (ret)
+ f->fmt.audio.channels = pair->channels;
+
+ return 0;
+}
+
+static const struct v4l2_ioctl_ops asrc_m2m_ioctl_ops = {
+ .vidioc_querycap = asrc_m2m_querycap,
+
+ .vidioc_enum_fmt_audio_cap = asrc_m2m_enum_fmt_aud_cap,
+ .vidioc_enum_fmt_audio_out = asrc_m2m_enum_fmt_aud_out,
+
+ .vidioc_g_fmt_audio_cap = asrc_m2m_g_fmt_aud_cap,
+ .vidioc_g_fmt_audio_out = asrc_m2m_g_fmt_aud_out,
+
+ .vidioc_s_fmt_audio_cap = asrc_m2m_s_fmt_aud_cap,
+ .vidioc_s_fmt_audio_out = asrc_m2m_s_fmt_aud_out,
+
+ .vidioc_try_fmt_audio_cap = asrc_m2m_try_fmt_audio_cap,
+ .vidioc_try_fmt_audio_out = asrc_m2m_try_fmt_audio_out,
+
+ .vidioc_qbuf = v4l2_m2m_ioctl_qbuf,
+ .vidioc_dqbuf = v4l2_m2m_ioctl_dqbuf,
+
+ .vidioc_create_bufs = v4l2_m2m_ioctl_create_bufs,
+ .vidioc_prepare_buf = v4l2_m2m_ioctl_prepare_buf,
+ .vidioc_reqbufs = v4l2_m2m_ioctl_reqbufs,
+ .vidioc_querybuf = v4l2_m2m_ioctl_querybuf,
+ .vidioc_streamon = v4l2_m2m_ioctl_streamon,
+ .vidioc_streamoff = v4l2_m2m_ioctl_streamoff,
+ .vidioc_subscribe_event = v4l2_ctrl_subscribe_event,
+ .vidioc_unsubscribe_event = v4l2_event_unsubscribe,
+};
+
+/* DMA completion callback for the ASRC input (V4L2 output queue) */
+static void asrc_input_dma_callback(void *data)
+{
+ struct fsl_asrc_pair *pair = (struct fsl_asrc_pair *)data;
+
+ complete(&pair->complete[V4L_OUT]);
+}
+
+/* DMA completion callback for the ASRC output (V4L2 capture queue) */
+static void asrc_output_dma_callback(void *data)
+{
+ struct fsl_asrc_pair *pair = (struct fsl_asrc_pair *)data;
+
+ complete(&pair->complete[V4L_CAP]);
+}
+
+/* configure the DMA channel and prepare its scatter-gather descriptor */
+static int asrc_dmaconfig(struct asrc_pair_m2m *pair_m2m,
+ struct dma_chan *chan,
+ u32 dma_addr, dma_addr_t buf_addr, u32 buf_len,
+ int dir, int width)
+{
+ struct fsl_asrc_pair *pair = pair_m2m->pair;
+ struct fsl_asrc *asrc = pair->asrc;
+ struct asrc_m2m *m2m = pair_m2m->m2m;
+ struct device *dev = &m2m->pdev->dev;
+ struct dma_slave_config slave_config;
+ struct scatterlist sg[ASRC_M2M_SG_NUM];
+ enum dma_slave_buswidth buswidth;
+ unsigned int sg_len, max_period_size;
+ int ret, i;
+
+ switch (width) {
+ case 8:
+ buswidth = DMA_SLAVE_BUSWIDTH_1_BYTE;
+ break;
+ case 16:
+ buswidth = DMA_SLAVE_BUSWIDTH_2_BYTES;
+ break;
+ case 24:
+ buswidth = DMA_SLAVE_BUSWIDTH_3_BYTES;
+ break;
+ case 32:
+ buswidth = DMA_SLAVE_BUSWIDTH_4_BYTES;
+ break;
+ default:
+ dev_err(dev, "invalid word width\n");
+ return -EINVAL;
+ }
+
+ memset(&slave_config, 0, sizeof(slave_config));
+ if (dir == V4L_OUT) {
+ slave_config.direction = DMA_MEM_TO_DEV;
+ slave_config.dst_addr = dma_addr;
+ slave_config.dst_addr_width = buswidth;
+ slave_config.dst_maxburst = asrc->m2m_get_maxburst(IN, pair);
+ } else {
+ slave_config.direction = DMA_DEV_TO_MEM;
+ slave_config.src_addr = dma_addr;
+ slave_config.src_addr_width = buswidth;
+ slave_config.src_maxburst = asrc->m2m_get_maxburst(OUT, pair);
+ }
+
+ ret = dmaengine_slave_config(chan, &slave_config);
+ if (ret) {
+ dev_err(dev, "failed to config dmaengine for %s task: %d\n",
+ DIR_STR(dir), ret);
+ return -EINVAL;
+ }
+
+ max_period_size = rounddown(ASRC_M2M_PERIOD_SIZE, width * pair->channels / 8);
+ /* scatter gather mode */
+ sg_len = buf_len / max_period_size;
+ if (buf_len % max_period_size)
+ sg_len += 1;
+
+ sg_init_table(sg, sg_len);
+ for (i = 0; i < (sg_len - 1); i++) {
+ sg_dma_address(&sg[i]) = buf_addr + i * max_period_size;
+ sg_dma_len(&sg[i]) = max_period_size;
+ }
+ sg_dma_address(&sg[i]) = buf_addr + i * max_period_size;
+ sg_dma_len(&sg[i]) = buf_len - i * max_period_size;
+
+ pair->desc[dir] = dmaengine_prep_slave_sg(chan, sg, sg_len,
+ slave_config.direction,
+ DMA_PREP_INTERRUPT);
+ if (!pair->desc[dir]) {
+ dev_err(dev, "failed to prepare dmaengine for %s task\n", DIR_STR(dir));
+ return -EINVAL;
+ }
+
+ pair->desc[dir]->callback = ASRC_xPUT_DMA_CALLBACK(dir);
+ pair->desc[dir]->callback_param = pair;
+
+ return 0;
+}
+
+/* main conversion routine, run by the m2m framework for each queued job */
+static void asrc_m2m_device_run(void *priv)
+{
+ struct asrc_pair_m2m *pair_m2m = priv;
+ struct fsl_asrc_pair *pair = pair_m2m->pair;
+ struct asrc_m2m *m2m = pair_m2m->m2m;
+ struct fsl_asrc *asrc = pair->asrc;
+ struct device *dev = &m2m->pdev->dev;
+ enum asrc_pair_index index = pair->index;
+ struct vb2_v4l2_buffer *src_buf, *dst_buf;
+ unsigned int out_buf_len;
+ unsigned int cap_dma_len;
+ unsigned int width;
+ u32 fifo_addr;
+ int ret;
+
+ src_buf = v4l2_m2m_next_src_buf(pair_m2m->fh.m2m_ctx);
+ dst_buf = v4l2_m2m_next_dst_buf(pair_m2m->fh.m2m_ctx);
+
+ width = snd_pcm_format_physical_width(pair->sample_format[V4L_OUT]);
+ fifo_addr = asrc->paddr + asrc->get_fifo_addr(IN, index);
+ out_buf_len = vb2_get_plane_payload(&src_buf->vb2_buf, 0);
+ if (out_buf_len < width * pair->channels / 8 ||
+ out_buf_len > ASRC_M2M_BUFFER_SIZE ||
+ out_buf_len % (width * pair->channels / 8)) {
+ dev_err(dev, "out buffer size is error: [%d]\n", out_buf_len);
+ goto end;
+ }
+
+ /* dma config for output dma channel */
+ ret = asrc_dmaconfig(pair_m2m,
+ pair->dma_chan[V4L_OUT],
+ fifo_addr,
+ vb2_dma_contig_plane_dma_addr(&src_buf->vb2_buf, 0),
+ out_buf_len, V4L_OUT, width);
+ if (ret) {
+ dev_err(dev, "out dma config error\n");
+ goto end;
+ }
+
+ width = snd_pcm_format_physical_width(pair->sample_format[V4L_CAP]);
+ fifo_addr = asrc->paddr + asrc->get_fifo_addr(OUT, index);
+ cap_dma_len = asrc->m2m_calc_out_len(pair, out_buf_len);
+ if (cap_dma_len > 0 && cap_dma_len <= ASRC_M2M_BUFFER_SIZE) {
+ /* dma config for capture dma channel */
+ ret = asrc_dmaconfig(pair_m2m,
+ pair->dma_chan[V4L_CAP],
+ fifo_addr,
+ vb2_dma_contig_plane_dma_addr(&dst_buf->vb2_buf, 0),
+ cap_dma_len, V4L_CAP, width);
+ if (ret) {
+ dev_err(dev, "cap dma config error\n");
+ goto end;
+ }
+ } else if (cap_dma_len > ASRC_M2M_BUFFER_SIZE) {
+ dev_err(dev, "cap buffer size error\n");
+ goto end;
+ }
+
+ reinit_completion(&pair->complete[V4L_OUT]);
+ reinit_completion(&pair->complete[V4L_CAP]);
+
+ /* Submit DMA request */
+ dmaengine_submit(pair->desc[V4L_OUT]);
+ dma_async_issue_pending(pair->desc[V4L_OUT]->chan);
+ if (cap_dma_len > 0) {
+ dmaengine_submit(pair->desc[V4L_CAP]);
+ dma_async_issue_pending(pair->desc[V4L_CAP]->chan);
+ }
+
+ asrc->m2m_start_part_two(pair);
+
+ if (!wait_for_completion_interruptible_timeout(&pair->complete[V4L_OUT], 10 * HZ)) {
+ dev_err(dev, "out DMA task timeout\n");
+ goto end;
+ }
+
+ if (cap_dma_len > 0) {
+ if (!wait_for_completion_interruptible_timeout(&pair->complete[V4L_CAP], 10 * HZ)) {
+ dev_err(dev, "cap DMA task timeout\n");
+ goto end;
+ }
+ }
+
+ /* read the last words from FIFO */
+ asrc_read_last_fifo(pair, vb2_plane_vaddr(&dst_buf->vb2_buf, 0), &cap_dma_len);
+ /* update payload length for capture */
+ vb2_set_plane_payload(&dst_buf->vb2_buf, 0, cap_dma_len);
+
+end:
+ src_buf = v4l2_m2m_src_buf_remove(pair_m2m->fh.m2m_ctx);
+ dst_buf = v4l2_m2m_dst_buf_remove(pair_m2m->fh.m2m_ctx);
+
+ v4l2_m2m_buf_done(src_buf, VB2_BUF_STATE_DONE);
+ v4l2_m2m_buf_done(dst_buf, VB2_BUF_STATE_DONE);
+
+ v4l2_m2m_job_finish(m2m->m2m_dev, pair_m2m->fh.m2m_ctx);
+}
+
+static int asrc_m2m_job_ready(void *priv)
+{
+ struct asrc_pair_m2m *pair_m2m = priv;
+
+ if (v4l2_m2m_num_src_bufs_ready(pair_m2m->fh.m2m_ctx) > 0 &&
+ v4l2_m2m_num_dst_bufs_ready(pair_m2m->fh.m2m_ctx) > 0) {
+ return 1;
+ }
+
+ return 0;
+}
+
+static const struct v4l2_m2m_ops asrc_m2m_ops = {
+ .job_ready = asrc_m2m_job_ready,
+ .device_run = asrc_m2m_device_run,
+};
+
+static int asrc_m2m_probe(struct platform_device *pdev)
+{
+ struct fsl_asrc_m2m_pdata *data = pdev->dev.platform_data;
+ struct fsl_asrc *asrc = data->asrc;
+ struct device *dev = &pdev->dev;
+ struct asrc_m2m *m2m;
+ int ret;
+
+ m2m = devm_kzalloc(dev, sizeof(struct asrc_m2m), GFP_KERNEL);
+ if (!m2m)
+ return -ENOMEM;
+
+ m2m->asrc = asrc;
+ m2m->pdev = pdev;
+
+ ret = v4l2_device_register(dev, &m2m->v4l2_dev);
+ if (ret) {
+ dev_err(dev, "failed to register v4l2 device\n");
+ goto err_register;
+ }
+
+ m2m->m2m_dev = v4l2_m2m_init(&asrc_m2m_ops);
+ if (IS_ERR(m2m->m2m_dev)) {
+ dev_err(dev, "failed to register v4l2 device\n");
+ ret = PTR_ERR(m2m->m2m_dev);
+ goto err_m2m;
+ }
+
+ m2m->dec_vdev = video_device_alloc();
+ if (!m2m->dec_vdev) {
+ dev_err(dev, "failed to register v4l2 device\n");
+ ret = -ENOMEM;
+ goto err_vdev_alloc;
+ }
+
+ mutex_init(&m2m->mlock);
+
+ m2m->dec_vdev->fops = &asrc_m2m_fops;
+ m2m->dec_vdev->ioctl_ops = &asrc_m2m_ioctl_ops;
+ m2m->dec_vdev->minor = -1;
+ m2m->dec_vdev->release = video_device_release;
+ m2m->dec_vdev->lock = &m2m->mlock; /* lock for ioctl serialization */
+ m2m->dec_vdev->v4l2_dev = &m2m->v4l2_dev;
+ m2m->dec_vdev->vfl_dir = VFL_DIR_M2M;
+ m2m->dec_vdev->device_caps = V4L2_CAP_STREAMING | V4L2_CAP_AUDIO_M2M;
+
+ ret = video_register_device(m2m->dec_vdev, VFL_TYPE_AUDIO, -1);
+ if (ret) {
+ dev_err(dev, "failed to register video device\n");
+ goto err_vdev_register;
+ }
+
+ video_set_drvdata(m2m->dec_vdev, m2m);
+ platform_set_drvdata(pdev, m2m);
+ pm_runtime_enable(&pdev->dev);
+
+ return 0;
+
+err_vdev_register:
+ video_device_release(m2m->dec_vdev);
+err_vdev_alloc:
+ v4l2_m2m_release(m2m->m2m_dev);
+err_m2m:
+ v4l2_device_unregister(&m2m->v4l2_dev);
+err_register:
+ return ret;
+}
+
+static void asrc_m2m_remove(struct platform_device *pdev)
+{
+ struct asrc_m2m *m2m = platform_get_drvdata(pdev);
+
+ pm_runtime_disable(&pdev->dev);
+ video_unregister_device(m2m->dec_vdev);
+ video_device_release(m2m->dec_vdev);
+ v4l2_m2m_release(m2m->m2m_dev);
+ v4l2_device_unregister(&m2m->v4l2_dev);
+}
+
+/* suspend callback for m2m */
+static int asrc_m2m_suspend(struct device *dev)
+{
+ struct asrc_m2m *m2m = dev_get_drvdata(dev);
+ struct fsl_asrc *asrc = m2m->asrc;
+ struct fsl_asrc_pair *pair;
+ unsigned long lock_flags;
+ int i;
+
+ for (i = 0; i < PAIR_CTX_NUM; i++) {
+ spin_lock_irqsave(&asrc->lock, lock_flags);
+ pair = asrc->pair[i];
+ if (!pair || !pair->req_pair) {
+ spin_unlock_irqrestore(&asrc->lock, lock_flags);
+ continue;
+ }
+ if (!completion_done(&pair->complete[V4L_OUT])) {
+ if (pair->dma_chan[V4L_OUT])
+ dmaengine_terminate_all(pair->dma_chan[V4L_OUT]);
+ asrc_input_dma_callback((void *)pair);
+ }
+ if (!completion_done(&pair->complete[V4L_CAP])) {
+ if (pair->dma_chan[V4L_CAP])
+ dmaengine_terminate_all(pair->dma_chan[V4L_CAP]);
+ asrc_output_dma_callback((void *)pair);
+ }
+
+ if (asrc->m2m_pair_suspend)
+ asrc->m2m_pair_suspend(pair);
+
+ spin_unlock_irqrestore(&asrc->lock, lock_flags);
+ }
+
+ return 0;
+}
+
+static int asrc_m2m_resume(struct device *dev)
+{
+ struct asrc_m2m *m2m = dev_get_drvdata(dev);
+ struct fsl_asrc *asrc = m2m->asrc;
+ struct fsl_asrc_pair *pair;
+ unsigned long lock_flags;
+ int i;
+
+ for (i = 0; i < PAIR_CTX_NUM; i++) {
+ spin_lock_irqsave(&asrc->lock, lock_flags);
+ pair = asrc->pair[i];
+ if (!pair || !pair->req_pair) {
+ spin_unlock_irqrestore(&asrc->lock, lock_flags);
+ continue;
+ }
+ if (asrc->m2m_pair_resume)
+ asrc->m2m_pair_resume(pair);
+
+ spin_unlock_irqrestore(&asrc->lock, lock_flags);
+ }
+
+ return 0;
+}
+
+static const struct dev_pm_ops asrc_m2m_pm_ops = {
+ SET_NOIRQ_SYSTEM_SLEEP_PM_OPS(asrc_m2m_suspend,
+ asrc_m2m_resume)
+};
+
+static struct platform_driver asrc_m2m_driver = {
+ .probe = asrc_m2m_probe,
+ .remove_new = asrc_m2m_remove,
+ .driver = {
+ .name = "fsl_asrc_m2m",
+ .pm = &asrc_m2m_pm_ops,
+ },
+};
+module_platform_driver(asrc_m2m_driver);
+
+MODULE_DESCRIPTION("Freescale ASRC M2M driver");
+MODULE_LICENSE("GPL");
--
2.34.1
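
For completeness, a minimal, hypothetical userspace sketch of driving the
ratio control registered above. It is not part of the patch: it assumes the
V4L2_CID_USER_IMX_ASRC_RATIO_MOD definition added elsewhere in this series is
visible through the patched linux/v4l2-controls.h, that fd is an open
/dev/v4l-audioX node, and that the control is exposed as a 32-bit integer
(value64 would be used instead for a 64-bit control):

  #include <sys/ioctl.h>
  #include <linux/videodev2.h>
  #include <linux/v4l2-controls.h>

  /* Nudge the resampling ratio while a conversion is running. */
  static int adjust_asrc_ratio(int fd, int step)
  {
          struct v4l2_ext_control ctrl = {
                  .id = V4L2_CID_USER_IMX_ASRC_RATIO_MOD,
                  .value = step,
          };
          struct v4l2_ext_controls ctrls = {
                  .which = V4L2_CTRL_WHICH_CUR_VAL,
                  .count = 1,
                  .controls = &ctrl,
          };

          return ioctl(fd, VIDIOC_S_EXT_CTRLS, &ctrls);
  }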

2023-09-14 16:26:35

by Shengjiu Wang

[permalink] [raw]
Subject: [RFC PATCH v3 2/9] ASoC: fsl_easrc: define functions for memory to memory usage

ASRC can be used in the memory to memory case. Define several
functions for m2m usage and export them as function pointers.
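
As an illustration of how a consumer is expected to call through these
pointers, here is a minimal, hypothetical sketch (not part of the patch; it
assumes the struct fsl_asrc / fsl_asrc_pair declarations shared via the
common ASRC header in this series, and omits error handling and locking):

  /* Hypothetical wrapper: validate a conversion and size its output. */
  static int m2m_configure_and_run(struct fsl_asrc *asrc,
                                   struct fsl_asrc_pair *pair,
                                   unsigned int in_len)
  {
          int out_len;

          /* validate the requested conversion before touching the hardware */
          if (asrc->m2m_check_rate(IN, pair->rate[IN]) ||
              asrc->m2m_check_rate(OUT, pair->rate[OUT]) ||
              asrc->m2m_check_format(IN, pair->sample_format[IN]) ||
              asrc->m2m_check_channel(IN, pair->channels))
                  return -EINVAL;

          /* estimate how many output bytes in_len input bytes will produce */
          out_len = asrc->m2m_calc_out_len(pair, in_len);

          /* program the context, queue DMA, then start the conversion */
          asrc->m2m_start_part_one(pair);
          /* ... submit DMA for both directions here ... */
          asrc->m2m_start_part_two(pair);

          return out_len;
  }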

Signed-off-by: Shengjiu Wang <[email protected]>
---
sound/soc/fsl/fsl_easrc.c | 226 ++++++++++++++++++++++++++++++++++++++
sound/soc/fsl/fsl_easrc.h | 6 +
2 files changed, 232 insertions(+)

diff --git a/sound/soc/fsl/fsl_easrc.c b/sound/soc/fsl/fsl_easrc.c
index ba62995c909a..f517b407672d 100644
--- a/sound/soc/fsl/fsl_easrc.c
+++ b/sound/soc/fsl/fsl_easrc.c
@@ -1861,6 +1861,220 @@ static int fsl_easrc_get_fifo_addr(u8 dir, enum asrc_pair_index index)
return REG_EASRC_FIFO(dir, index);
}

+/* Get the number of samples currently held in the output FIFO */
+static unsigned int fsl_easrc_get_output_fifo_size(struct fsl_asrc_pair *pair)
+{
+ struct fsl_asrc *asrc = pair->asrc;
+ enum asrc_pair_index index = pair->index;
+ u32 val;
+
+ regmap_read(asrc->regmap, REG_EASRC_SFS(index), &val);
+ val &= EASRC_SFS_NSGO_MASK;
+
+ return val >> EASRC_SFS_NSGO_SHIFT;
+}
+
+static int fsl_easrc_m2m_start_part_one(struct fsl_asrc_pair *pair)
+{
+ struct fsl_easrc_ctx_priv *ctx_priv = pair->private;
+ struct fsl_asrc *asrc = pair->asrc;
+ struct device *dev = &asrc->pdev->dev;
+ int ret;
+
+ ctx_priv->in_params.sample_rate = pair->rate[IN];
+ ctx_priv->in_params.sample_format = pair->sample_format[IN];
+ ctx_priv->out_params.sample_rate = pair->rate[OUT];
+ ctx_priv->out_params.sample_format = pair->sample_format[OUT];
+
+ ctx_priv->in_params.fifo_wtmk = FSL_EASRC_INPUTFIFO_WML;
+ ctx_priv->out_params.fifo_wtmk = FSL_EASRC_OUTPUTFIFO_WML;
+ /* Fill the right half of the re-sampler with zeros */
+ ctx_priv->rs_init_mode = 0x2;
+ /* Zero fill the right half of the prefilter */
+ ctx_priv->pf_init_mode = 0x2;
+
+ ret = fsl_easrc_set_ctx_format(pair,
+ &ctx_priv->in_params.sample_format,
+ &ctx_priv->out_params.sample_format);
+ if (ret) {
+ dev_err(dev, "failed to set context format: %d\n", ret);
+ return ret;
+ }
+
+ ret = fsl_easrc_config_context(asrc, pair->index);
+ if (ret) {
+ dev_err(dev, "failed to config context %d\n", ret);
+ return ret;
+ }
+
+ ctx_priv->in_params.iterations = 1;
+ ctx_priv->in_params.group_len = pair->channels;
+ ctx_priv->in_params.access_len = pair->channels;
+ ctx_priv->out_params.iterations = 1;
+ ctx_priv->out_params.group_len = pair->channels;
+ ctx_priv->out_params.access_len = pair->channels;
+
+ ret = fsl_easrc_set_ctx_organziation(pair);
+ if (ret) {
+ dev_err(dev, "failed to set fifo organization\n");
+ return ret;
+ }
+
+ /* The context start flag */
+ ctx_priv->first_convert = 1;
+ return 0;
+}
+
+static int fsl_easrc_m2m_start_part_two(struct fsl_asrc_pair *pair)
+{
+ struct fsl_easrc_ctx_priv *ctx_priv = pair->private;
+ /* start context once */
+ if (ctx_priv->first_convert) {
+ fsl_easrc_start_context(pair);
+ ctx_priv->first_convert = 0;
+ }
+
+ return 0;
+}
+
+static int fsl_easrc_m2m_stop_part_two(struct fsl_asrc_pair *pair)
+{
+ struct fsl_easrc_ctx_priv *ctx_priv = pair->private;
+ /* Stop pair/context */
+ if (!ctx_priv->first_convert) {
+ fsl_easrc_stop_context(pair);
+ ctx_priv->first_convert = 1;
+ }
+
+ return 0;
+}
+
+static int fsl_easrc_m2m_check_format(u8 dir, u32 format)
+{
+ u64 support_format = FSL_EASRC_FORMATS;
+
+ if (dir == OUT)
+ support_format |= SNDRV_PCM_FMTBIT_IEC958_SUBFRAME_LE;
+
+ if (!(1 << format & support_format))
+ return -EINVAL;
+
+ return 0;
+}
+
+static int fsl_easrc_m2m_check_rate(u8 dir, u32 rate)
+{
+ if (rate < 8000 || rate > 768000)
+ return -EINVAL;
+
+ return 0;
+}
+
+static int fsl_easrc_m2m_check_channel(u8 dir, u32 channels)
+{
+ if (channels < 1 || channels > 32)
+ return -EINVAL;
+
+ return 0;
+}
+
+/* calculate the capture (output) data length from the input length and rates */
+static int fsl_easrc_m2m_calc_out_len(struct fsl_asrc_pair *pair, int input_buffer_length)
+{
+ struct fsl_asrc *easrc = pair->asrc;
+ struct fsl_easrc_priv *easrc_priv = easrc->private;
+ struct fsl_easrc_ctx_priv *ctx_priv = pair->private;
+ unsigned int in_rate = ctx_priv->in_params.norm_rate;
+ unsigned int out_rate = ctx_priv->out_params.norm_rate;
+ unsigned int channels = pair->channels;
+ unsigned int in_samples, out_samples;
+ unsigned int in_width, out_width;
+ unsigned int out_length;
+ unsigned int frac_bits;
+ u64 val1, val2;
+
+ switch (easrc_priv->rs_num_taps) {
+ case EASRC_RS_32_TAPS:
+ /* integer bits = 5; */
+ frac_bits = 39;
+ break;
+ case EASRC_RS_64_TAPS:
+ /* integer bits = 6; */
+ frac_bits = 38;
+ break;
+ case EASRC_RS_128_TAPS:
+ /* integer bits = 7; */
+ frac_bits = 37;
+ break;
+ default:
+ return -EINVAL;
+ }
+
+ val1 = (u64)in_rate << frac_bits;
+ do_div(val1, out_rate);
+ val1 = val1 + ctx_priv->ratio_mod;
+
+ in_width = snd_pcm_format_physical_width(ctx_priv->in_params.sample_format) / 8;
+ out_width = snd_pcm_format_physical_width(ctx_priv->out_params.sample_format) / 8;
+
+ ctx_priv->in_filled_len += input_buffer_length;
+ if (ctx_priv->in_filled_len <= ctx_priv->in_filled_sample * in_width * channels) {
+ out_length = 0;
+ } else {
+ in_samples = ctx_priv->in_filled_len / (in_width * channels) -
+ ctx_priv->in_filled_sample;
+
+ /* right shift 12 bits to keep the ratio within 32-bit space */
+ val2 = (u64)in_samples << (frac_bits - 12);
+ val1 = val1 >> 12;
+ do_div(val2, val1);
+ out_samples = val2;
+
+ out_length = out_samples * out_width * channels;
+ ctx_priv->in_filled_len = ctx_priv->in_filled_sample * in_width * channels;
+ }
+
+ return out_length;
+}
+
+static int fsl_easrc_m2m_get_maxburst(u8 dir, struct fsl_asrc_pair *pair)
+{
+ struct fsl_easrc_ctx_priv *ctx_priv = pair->private;
+
+ if (dir == IN)
+ return ctx_priv->in_params.fifo_wtmk * pair->channels;
+ else
+ return ctx_priv->out_params.fifo_wtmk * pair->channels;
+}
+
+static int fsl_easrc_m2m_pair_suspend(struct fsl_asrc_pair *pair)
+{
+ fsl_easrc_stop_context(pair);
+
+ return 0;
+}
+
+static int fsl_easrc_m2m_pair_resume(struct fsl_asrc_pair *pair)
+{
+ struct fsl_easrc_ctx_priv *ctx_priv = pair->private;
+
+ ctx_priv->first_convert = 1;
+ ctx_priv->in_filled_len = 0;
+
+ return 0;
+}
+
+static int fsl_easrc_m2m_set_ratio_mod(struct fsl_asrc_pair *pair, int val)
+{
+ struct fsl_easrc_ctx_priv *ctx_priv = pair->private;
+ struct fsl_asrc *easrc = pair->asrc;
+
+ ctx_priv->ratio_mod += val;
+ regmap_write(easrc->regmap, REG_EASRC_RUC(pair->index), EASRC_RSUC_RS_RM(val));
+
+ return 0;
+}
+
static const struct of_device_id fsl_easrc_dt_ids[] = {
{ .compatible = "fsl,imx8mn-easrc",},
{}
@@ -1926,6 +2140,18 @@ static int fsl_easrc_probe(struct platform_device *pdev)
easrc->release_pair = fsl_easrc_release_context;
easrc->get_fifo_addr = fsl_easrc_get_fifo_addr;
easrc->pair_priv_size = sizeof(struct fsl_easrc_ctx_priv);
+ easrc->m2m_start_part_one = fsl_easrc_m2m_start_part_one;
+ easrc->m2m_start_part_two = fsl_easrc_m2m_start_part_two;
+ easrc->m2m_stop_part_two = fsl_easrc_m2m_stop_part_two;
+ easrc->get_output_fifo_size = fsl_easrc_get_output_fifo_size;
+ easrc->m2m_check_format = fsl_easrc_m2m_check_format;
+ easrc->m2m_check_rate = fsl_easrc_m2m_check_rate;
+ easrc->m2m_check_channel = fsl_easrc_m2m_check_channel;
+ easrc->m2m_calc_out_len = fsl_easrc_m2m_calc_out_len;
+ easrc->m2m_get_maxburst = fsl_easrc_m2m_get_maxburst;
+ easrc->m2m_pair_suspend = fsl_easrc_m2m_pair_suspend;
+ easrc->m2m_pair_resume = fsl_easrc_m2m_pair_resume;
+ easrc->m2m_set_ratio_mod = fsl_easrc_m2m_set_ratio_mod;

easrc_priv->rs_num_taps = EASRC_RS_32_TAPS;
easrc_priv->const_coeff = 0x3FF0000000000000;
diff --git a/sound/soc/fsl/fsl_easrc.h b/sound/soc/fsl/fsl_easrc.h
index 7c70dac52713..bee887c8b4f2 100644
--- a/sound/soc/fsl/fsl_easrc.h
+++ b/sound/soc/fsl/fsl_easrc.h
@@ -601,6 +601,9 @@ struct fsl_easrc_slot {
* @out_missed_sample: sample missed in output
* @st1_addexp: exponent added for stage1
* @st2_addexp: exponent added for stage2
+ * @ratio_mod: accumulated adjustment applied to the resampling ratio
+ * @first_convert: flag indicating the context still needs to be started
+ * @in_filled_len: accumulated length in bytes of the filled input data
*/
struct fsl_easrc_ctx_priv {
struct fsl_easrc_io_params in_params;
@@ -618,6 +621,9 @@ struct fsl_easrc_ctx_priv {
int out_missed_sample;
int st1_addexp;
int st2_addexp;
+ int ratio_mod;
+ unsigned int first_convert;
+ unsigned int in_filled_len;
};

/**
--
2.34.1
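
As a quick sanity check on the fixed-point arithmetic in
fsl_easrc_m2m_calc_out_len() above, here is a small, self-contained
userspace sketch of the same math with hypothetical example values
(assuming the 32-tap configuration, so frac_bits = 39, ratio_mod = 0,
and no residual data in in_filled_len):

  #include <stdint.h>
  #include <stdio.h>

  int main(void)
  {
          const unsigned int frac_bits = 39;              /* EASRC_RS_32_TAPS */
          const uint64_t in_rate = 8000, out_rate = 16000;
          const uint64_t in_samples = 480;                /* 60 ms at 8 kHz */
          const unsigned int channels = 2, out_width = 2; /* S16_LE stereo */

          /* 5.39 fixed-point ratio, as in the kernel code */
          uint64_t ratio = (in_rate << frac_bits) / out_rate;     /* 2^38 */
          /* shift both operands down by 12 bits before dividing */
          uint64_t out_samples = (in_samples << (frac_bits - 12)) / (ratio >> 12);

          /* doubling the rate doubles the samples: 960 samples, 3840 bytes */
          printf("out_samples=%llu out_bytes=%llu\n",
                 (unsigned long long)out_samples,
                 (unsigned long long)(out_samples * out_width * channels));
          return 0;
  }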

2023-09-14 16:26:53

by Shengjiu Wang

[permalink] [raw]
Subject: [RFC PATCH v3 6/9] media: v4l2: Add audio capture and output support

Audio signal processing has the requirement for memory to
memory similar as Video.

This patch is to add this support in v4l2 framework, defined
new buffer type V4L2_BUF_TYPE_AUDIO_CAPTURE and
V4L2_BUF_TYPE_AUDIO_OUTPUT, defined new format v4l2_audio_format
for audio case usage.

Defined V4L2_AUDIO_FMT_LPCM format type for audio.

Defined V4L2_CAP_AUDIO_M2M capability type for audio memory
to memory case.

The created audio device is named "/dev/v4l-audioX".
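
For illustration, a minimal, hypothetical sketch of the negotiation an
application could perform against the new node (it assumes uAPI headers
patched by this series, SNDRV_PCM_FORMAT_* from sound/asound.h, an existing
/dev/v4l-audio0 node, and omits error handling):

  #include <fcntl.h>
  #include <string.h>
  #include <sys/ioctl.h>
  #include <linux/videodev2.h>
  #include <sound/asound.h>       /* SNDRV_PCM_FORMAT_* */

  int main(void)
  {
          struct v4l2_capability cap;
          struct v4l2_format fmt;
          int fd = open("/dev/v4l-audio0", O_RDWR);

          ioctl(fd, VIDIOC_QUERYCAP, &cap);
          if (!(cap.device_caps & V4L2_CAP_AUDIO_M2M))
                  return 1;

          /* configure the conversion input via the output queue */
          memset(&fmt, 0, sizeof(fmt));
          fmt.type = V4L2_BUF_TYPE_AUDIO_OUTPUT;
          fmt.fmt.audio.rate = 8000;
          fmt.fmt.audio.format = SNDRV_PCM_FORMAT_S16_LE;
          fmt.fmt.audio.channels = 2;
          ioctl(fd, VIDIOC_S_FMT, &fmt);
          /* fmt.fmt.audio.buffersize now holds the driver's required size */

          return 0;
  }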

Signed-off-by: Shengjiu Wang <[email protected]>
---
.../userspace-api/media/v4l/audio-formats.rst | 15 +++++
.../userspace-api/media/v4l/buffer.rst | 6 ++
.../userspace-api/media/v4l/dev-audio.rst | 63 +++++++++++++++++++
.../userspace-api/media/v4l/devices.rst | 1 +
.../media/v4l/pixfmt-aud-lpcm.rst | 31 +++++++++
.../userspace-api/media/v4l/pixfmt.rst | 1 +
.../media/v4l/vidioc-enum-fmt.rst | 2 +
.../userspace-api/media/v4l/vidioc-g-fmt.rst | 4 ++
.../media/v4l/vidioc-querycap.rst | 3 +
.../media/videodev2.h.rst.exceptions | 2 +
.../media/common/videobuf2/videobuf2-v4l2.c | 4 ++
drivers/media/v4l2-core/v4l2-dev.c | 17 +++++
drivers/media/v4l2-core/v4l2-ioctl.c | 53 ++++++++++++++++
include/media/v4l2-dev.h | 2 +
include/media/v4l2-ioctl.h | 34 ++++++++++
include/uapi/linux/videodev2.h | 25 ++++++++
16 files changed, 263 insertions(+)
create mode 100644 Documentation/userspace-api/media/v4l/audio-formats.rst
create mode 100644 Documentation/userspace-api/media/v4l/dev-audio.rst
create mode 100644 Documentation/userspace-api/media/v4l/pixfmt-aud-lpcm.rst

diff --git a/Documentation/userspace-api/media/v4l/audio-formats.rst b/Documentation/userspace-api/media/v4l/audio-formats.rst
new file mode 100644
index 000000000000..bc52712d20d3
--- /dev/null
+++ b/Documentation/userspace-api/media/v4l/audio-formats.rst
@@ -0,0 +1,15 @@
+.. SPDX-License-Identifier: GFDL-1.1-no-invariants-or-later
+
+.. _audio-formats:
+
+*************
+Audio Formats
+*************
+
+These formats are used for :ref:`audio` interface only.
+
+
+.. toctree::
+ :maxdepth: 1
+
+ pixfmt-aud-lpcm
diff --git a/Documentation/userspace-api/media/v4l/buffer.rst b/Documentation/userspace-api/media/v4l/buffer.rst
index 04dec3e570ed..80cf2cb20dfe 100644
--- a/Documentation/userspace-api/media/v4l/buffer.rst
+++ b/Documentation/userspace-api/media/v4l/buffer.rst
@@ -438,6 +438,12 @@ enum v4l2_buf_type
* - ``V4L2_BUF_TYPE_META_OUTPUT``
- 14
- Buffer for metadata output, see :ref:`metadata`.
+ * - ``V4L2_BUF_TYPE_AUDIO_CAPTURE``
+ - 15
+ - Buffer for audio capture, see :ref:`audio`.
+ * - ``V4L2_BUF_TYPE_AUDIO_OUTPUT``
+ - 16
+ - Buffer for audio output, see :ref:`audio`.


.. _buffer-flags:
diff --git a/Documentation/userspace-api/media/v4l/dev-audio.rst b/Documentation/userspace-api/media/v4l/dev-audio.rst
new file mode 100644
index 000000000000..f9bcf0c7b056
--- /dev/null
+++ b/Documentation/userspace-api/media/v4l/dev-audio.rst
@@ -0,0 +1,63 @@
+.. SPDX-License-Identifier: GFDL-1.1-no-invariants-or-later
+
+.. _audiodev:
+
+******************
+audio Interface
+******************
+
+The audio interface is implemented on audio device nodes. These devices
+process audio data in memory, such as performing sample rate conversion.
+This interface is intended for controlling and data streaming of such devices.
+
+Audio devices are accessed through character device special files named
+``/dev/v4l-audio``
+
+Querying Capabilities
+=====================
+
+Device nodes supporting the audio capture and output interface set the
+``V4L2_CAP_AUDIO_M2M`` flag in the ``device_caps`` field of the
+:c:type:`v4l2_capability` structure returned by the :c:func:`VIDIOC_QUERYCAP`
+ioctl.
+
+At least one of the read/write or streaming I/O methods must be supported.
+
+
+Data Format Negotiation
+=======================
+
+The audio device uses the :ref:`format` ioctls to select the capture format.
+The audio buffer content format is bound to that selected format. In addition
+to the basic :ref:`format` ioctls, the :c:func:`VIDIOC_ENUM_FMT` ioctl must be
+supported as well.
+
+To use the :ref:`format` ioctls applications set the ``type`` field of the
+:c:type:`v4l2_format` structure to ``V4L2_BUF_TYPE_AUDIO_CAPTURE`` or to
+``V4L2_BUF_TYPE_AUDIO_OUTPUT``. Both drivers and applications must set the
+remainder of the :c:type:`v4l2_format` structure to 0.
+
+.. c:type:: v4l2_audio_format
+
+.. tabularcolumns:: |p{1.4cm}|p{2.4cm}|p{13.5cm}|
+
+.. flat-table:: struct v4l2_audio_format
+ :header-rows: 0
+ :stub-columns: 0
+ :widths: 1 1 2
+
+ * - __u32
+ - ``rate``
+ - The sample rate, set by the application. The range is [5512, 768000].
+ * - __u32
+ - ``format``
+ - The sample format, set by the application. format is defined as
+ SNDRV_PCM_FORMAT_S8, SNDRV_PCM_FORMAT_U8, ...,
+ * - __u32
+ - ``channels``
+ - The channel number, set by the application. channel number range is
+ [1, 32].
+ * - __u32
+ - ``buffersize``
+ - Maximum buffer size in bytes required for data. The value is set by the
+ driver.
diff --git a/Documentation/userspace-api/media/v4l/devices.rst b/Documentation/userspace-api/media/v4l/devices.rst
index 8bfbad65a9d4..8261f3468489 100644
--- a/Documentation/userspace-api/media/v4l/devices.rst
+++ b/Documentation/userspace-api/media/v4l/devices.rst
@@ -24,3 +24,4 @@ Interfaces
dev-event
dev-subdev
dev-meta
+ dev-audio
diff --git a/Documentation/userspace-api/media/v4l/pixfmt-aud-lpcm.rst b/Documentation/userspace-api/media/v4l/pixfmt-aud-lpcm.rst
new file mode 100644
index 000000000000..f9ebe2a05f69
--- /dev/null
+++ b/Documentation/userspace-api/media/v4l/pixfmt-aud-lpcm.rst
@@ -0,0 +1,31 @@
+.. SPDX-License-Identifier: GFDL-1.1-no-invariants-or-later
+
+.. _v4l2-aud-fmt-lpcm:
+
+*************************
+V4L2_AUDIO_FMT_LPCM ('LPCM')
+*************************
+
+Linear Pulse-Code Modulation (LPCM)
+
+
+Description
+===========
+
+This describes audio format used by the audio memory to memory driver.
+
+It contains the following fields:
+
+.. flat-table::
+ :widths: 1 4
+ :header-rows: 1
+ :stub-columns: 0
+
+ * - Field
+ - Description
+ * - u32 samplerate;
+ - which is the number of times per second that samples are taken.
+ * - u32 sampleformat;
+ - which determines the number of possible digital values that can be used to represent each sample
+ * - u32 channels;
+ - channel number for each sample.
diff --git a/Documentation/userspace-api/media/v4l/pixfmt.rst b/Documentation/userspace-api/media/v4l/pixfmt.rst
index 11dab4a90630..e205db5fa8af 100644
--- a/Documentation/userspace-api/media/v4l/pixfmt.rst
+++ b/Documentation/userspace-api/media/v4l/pixfmt.rst
@@ -36,3 +36,4 @@ see also :ref:`VIDIOC_G_FBUF <VIDIOC_G_FBUF>`.)
colorspaces
colorspaces-defs
colorspaces-details
+ audio-formats
diff --git a/Documentation/userspace-api/media/v4l/vidioc-enum-fmt.rst b/Documentation/userspace-api/media/v4l/vidioc-enum-fmt.rst
index 000c154b0f98..42deb07f4ff4 100644
--- a/Documentation/userspace-api/media/v4l/vidioc-enum-fmt.rst
+++ b/Documentation/userspace-api/media/v4l/vidioc-enum-fmt.rst
@@ -96,6 +96,8 @@ the ``mbus_code`` field is handled differently:
``V4L2_BUF_TYPE_VIDEO_OVERLAY``,
``V4L2_BUF_TYPE_SDR_CAPTURE``,
``V4L2_BUF_TYPE_SDR_OUTPUT``,
+ ``V4L2_BUF_TYPE_AUDIO_CAPTURE``,
+ ``V4L2_BUF_TYPE_AUDIO_OUTPUT``,
``V4L2_BUF_TYPE_META_CAPTURE`` and
``V4L2_BUF_TYPE_META_OUTPUT``.
See :c:type:`v4l2_buf_type`.
diff --git a/Documentation/userspace-api/media/v4l/vidioc-g-fmt.rst b/Documentation/userspace-api/media/v4l/vidioc-g-fmt.rst
index 675c385e5aca..1ecb7d640057 100644
--- a/Documentation/userspace-api/media/v4l/vidioc-g-fmt.rst
+++ b/Documentation/userspace-api/media/v4l/vidioc-g-fmt.rst
@@ -130,6 +130,10 @@ The format as returned by :ref:`VIDIOC_TRY_FMT <VIDIOC_G_FMT>` must be identical
- ``meta``
- Definition of a metadata format, see :ref:`meta-formats`, used by
metadata capture devices.
+ * - struct :c:type:`v4l2_audio_format`
+ - ``audio``
+ - Definition of an audio data format, see :ref:`dev-audio`, used by
+ audio capture and output devices.
* - __u8
- ``raw_data``\ [200]
- Place holder for future extensions.
diff --git a/Documentation/userspace-api/media/v4l/vidioc-querycap.rst b/Documentation/userspace-api/media/v4l/vidioc-querycap.rst
index 6c57b8428356..0b3cefefc86b 100644
--- a/Documentation/userspace-api/media/v4l/vidioc-querycap.rst
+++ b/Documentation/userspace-api/media/v4l/vidioc-querycap.rst
@@ -259,6 +259,9 @@ specification the ioctl returns an ``EINVAL`` error code.
video topology configuration, including which I/O entity is routed to
the input/output, is configured by userspace via the Media Controller.
See :ref:`media_controller`.
+ * - ``V4L2_CAP_AUDIO_M2M``
+ - 0x40000000
+ - The device supports the audio Memory-To-Memory interface.
* - ``V4L2_CAP_DEVICE_CAPS``
- 0x80000000
- The driver fills the ``device_caps`` field. This capability can
diff --git a/Documentation/userspace-api/media/videodev2.h.rst.exceptions b/Documentation/userspace-api/media/videodev2.h.rst.exceptions
index 3e58aac4ef0b..48ef3bce3d20 100644
--- a/Documentation/userspace-api/media/videodev2.h.rst.exceptions
+++ b/Documentation/userspace-api/media/videodev2.h.rst.exceptions
@@ -29,6 +29,8 @@ replace symbol V4L2_FIELD_SEQ_TB :c:type:`v4l2_field`
replace symbol V4L2_FIELD_TOP :c:type:`v4l2_field`

# Documented enum v4l2_buf_type
+replace symbol V4L2_BUF_TYPE_AUDIO_CAPTURE :c:type:`v4l2_buf_type`
+replace symbol V4L2_BUF_TYPE_AUDIO_OUTPUT :c:type:`v4l2_buf_type`
replace symbol V4L2_BUF_TYPE_META_CAPTURE :c:type:`v4l2_buf_type`
replace symbol V4L2_BUF_TYPE_META_OUTPUT :c:type:`v4l2_buf_type`
replace symbol V4L2_BUF_TYPE_SDR_CAPTURE :c:type:`v4l2_buf_type`
diff --git a/drivers/media/common/videobuf2/videobuf2-v4l2.c b/drivers/media/common/videobuf2/videobuf2-v4l2.c
index c7a54d82a55e..12f2be2773a2 100644
--- a/drivers/media/common/videobuf2/videobuf2-v4l2.c
+++ b/drivers/media/common/videobuf2/videobuf2-v4l2.c
@@ -785,6 +785,10 @@ int vb2_create_bufs(struct vb2_queue *q, struct v4l2_create_buffers *create)
case V4L2_BUF_TYPE_META_OUTPUT:
requested_sizes[0] = f->fmt.meta.buffersize;
break;
+ case V4L2_BUF_TYPE_AUDIO_CAPTURE:
+ case V4L2_BUF_TYPE_AUDIO_OUTPUT:
+ requested_sizes[0] = f->fmt.audio.buffersize;
+ break;
default:
return -EINVAL;
}
diff --git a/drivers/media/v4l2-core/v4l2-dev.c b/drivers/media/v4l2-core/v4l2-dev.c
index f81279492682..b92c760b611a 100644
--- a/drivers/media/v4l2-core/v4l2-dev.c
+++ b/drivers/media/v4l2-core/v4l2-dev.c
@@ -553,6 +553,7 @@ static void determine_valid_ioctls(struct video_device *vdev)
bool is_tch = vdev->vfl_type == VFL_TYPE_TOUCH;
bool is_meta = vdev->vfl_type == VFL_TYPE_VIDEO &&
(vdev->device_caps & meta_caps);
+ bool is_audio = vdev->vfl_type == VFL_TYPE_AUDIO;
bool is_rx = vdev->vfl_dir != VFL_DIR_TX;
bool is_tx = vdev->vfl_dir != VFL_DIR_RX;
bool is_io_mc = vdev->device_caps & V4L2_CAP_IO_MC;
@@ -664,6 +665,19 @@ static void determine_valid_ioctls(struct video_device *vdev)
SET_VALID_IOCTL(ops, VIDIOC_S_FMT, vidioc_s_fmt_meta_out);
SET_VALID_IOCTL(ops, VIDIOC_TRY_FMT, vidioc_try_fmt_meta_out);
}
+ if (is_audio && is_rx) {
+ /* audio capture specific ioctls */
+ SET_VALID_IOCTL(ops, VIDIOC_ENUM_FMT, vidioc_enum_fmt_audio_cap);
+ SET_VALID_IOCTL(ops, VIDIOC_G_FMT, vidioc_g_fmt_audio_cap);
+ SET_VALID_IOCTL(ops, VIDIOC_S_FMT, vidioc_s_fmt_audio_cap);
+ SET_VALID_IOCTL(ops, VIDIOC_TRY_FMT, vidioc_try_fmt_audio_cap);
+ } else if (is_audio && is_tx) {
+ /* audio output specific ioctls */
+ SET_VALID_IOCTL(ops, VIDIOC_ENUM_FMT, vidioc_enum_fmt_audio_out);
+ SET_VALID_IOCTL(ops, VIDIOC_G_FMT, vidioc_g_fmt_audio_out);
+ SET_VALID_IOCTL(ops, VIDIOC_S_FMT, vidioc_s_fmt_audio_out);
+ SET_VALID_IOCTL(ops, VIDIOC_TRY_FMT, vidioc_try_fmt_audio_out);
+ }
if (is_vbi) {
/* vbi specific ioctls */
if ((is_rx && (ops->vidioc_g_fmt_vbi_cap ||
@@ -927,6 +941,9 @@ int __video_register_device(struct video_device *vdev,
case VFL_TYPE_TOUCH:
name_base = "v4l-touch";
break;
+ case VFL_TYPE_AUDIO:
+ name_base = "v4l-audio";
+ break;
default:
pr_err("%s called with unknown type: %d\n",
__func__, type);
diff --git a/drivers/media/v4l2-core/v4l2-ioctl.c b/drivers/media/v4l2-core/v4l2-ioctl.c
index f4d9d6279094..767588d5822a 100644
--- a/drivers/media/v4l2-core/v4l2-ioctl.c
+++ b/drivers/media/v4l2-core/v4l2-ioctl.c
@@ -188,6 +188,8 @@ const char *v4l2_type_names[] = {
[V4L2_BUF_TYPE_SDR_OUTPUT] = "sdr-out",
[V4L2_BUF_TYPE_META_CAPTURE] = "meta-cap",
[V4L2_BUF_TYPE_META_OUTPUT] = "meta-out",
+ [V4L2_BUF_TYPE_AUDIO_CAPTURE] = "audio-cap",
+ [V4L2_BUF_TYPE_AUDIO_OUTPUT] = "audio-out",
};
EXPORT_SYMBOL(v4l2_type_names);

@@ -276,6 +278,7 @@ static void v4l_print_format(const void *arg, bool write_only)
const struct v4l2_sliced_vbi_format *sliced;
const struct v4l2_window *win;
const struct v4l2_meta_format *meta;
+ const struct v4l2_audio_format *audio;
u32 pixelformat;
u32 planes;
unsigned i;
@@ -346,6 +349,12 @@ static void v4l_print_format(const void *arg, bool write_only)
pr_cont(", dataformat=%p4cc, buffersize=%u\n",
&pixelformat, meta->buffersize);
break;
+ case V4L2_BUF_TYPE_AUDIO_CAPTURE:
+ case V4L2_BUF_TYPE_AUDIO_OUTPUT:
+ audio = &p->fmt.audio;
+ pr_cont(", rate=%u, format=%u, channels=%u, buffersize=%u\n",
+ audio->rate, audio->format, audio->channels, audio->buffersize);
+ break;
}
}

@@ -927,6 +936,7 @@ static int check_fmt(struct file *file, enum v4l2_buf_type type)
bool is_tch = vfd->vfl_type == VFL_TYPE_TOUCH;
bool is_meta = vfd->vfl_type == VFL_TYPE_VIDEO &&
(vfd->device_caps & meta_caps);
+ bool is_audio = vfd->vfl_type == VFL_TYPE_AUDIO;
bool is_rx = vfd->vfl_dir != VFL_DIR_TX;
bool is_tx = vfd->vfl_dir != VFL_DIR_RX;

@@ -992,6 +1002,14 @@ static int check_fmt(struct file *file, enum v4l2_buf_type type)
if (is_meta && is_tx && ops->vidioc_g_fmt_meta_out)
return 0;
break;
+ case V4L2_BUF_TYPE_AUDIO_CAPTURE:
+ if (is_audio && is_rx && ops->vidioc_g_fmt_audio_cap)
+ return 0;
+ break;
+ case V4L2_BUF_TYPE_AUDIO_OUTPUT:
+ if (is_audio && is_tx && ops->vidioc_g_fmt_audio_out)
+ return 0;
+ break;
default:
break;
}
@@ -1452,6 +1470,7 @@ static void v4l_fill_fmtdesc(struct v4l2_fmtdesc *fmt)
case V4L2_PIX_FMT_Y210: descr = "10-bit YUYV Packed"; break;
case V4L2_PIX_FMT_Y212: descr = "12-bit YUYV Packed"; break;
case V4L2_PIX_FMT_Y216: descr = "16-bit YUYV Packed"; break;
+ case V4L2_AUDIO_FMT_LPCM: descr = "Audio LPCM"; break;

default:
/* Compressed formats */
@@ -1596,6 +1615,16 @@ static int v4l_enum_fmt(const struct v4l2_ioctl_ops *ops,
break;
ret = ops->vidioc_enum_fmt_meta_out(file, fh, arg);
break;
+ case V4L2_BUF_TYPE_AUDIO_CAPTURE:
+ if (unlikely(!ops->vidioc_enum_fmt_audio_cap))
+ break;
+ ret = ops->vidioc_enum_fmt_audio_cap(file, fh, arg);
+ break;
+ case V4L2_BUF_TYPE_AUDIO_OUTPUT:
+ if (unlikely(!ops->vidioc_enum_fmt_audio_out))
+ break;
+ ret = ops->vidioc_enum_fmt_audio_out(file, fh, arg);
+ break;
}
if (ret == 0)
v4l_fill_fmtdesc(p);
@@ -1672,6 +1701,10 @@ static int v4l_g_fmt(const struct v4l2_ioctl_ops *ops,
return ops->vidioc_g_fmt_meta_cap(file, fh, arg);
case V4L2_BUF_TYPE_META_OUTPUT:
return ops->vidioc_g_fmt_meta_out(file, fh, arg);
+ case V4L2_BUF_TYPE_AUDIO_CAPTURE:
+ return ops->vidioc_g_fmt_audio_cap(file, fh, arg);
+ case V4L2_BUF_TYPE_AUDIO_OUTPUT:
+ return ops->vidioc_g_fmt_audio_out(file, fh, arg);
}
return -EINVAL;
}
@@ -1783,6 +1816,16 @@ static int v4l_s_fmt(const struct v4l2_ioctl_ops *ops,
break;
memset_after(p, 0, fmt.meta);
return ops->vidioc_s_fmt_meta_out(file, fh, arg);
+ case V4L2_BUF_TYPE_AUDIO_CAPTURE:
+ if (unlikely(!ops->vidioc_s_fmt_audio_cap))
+ break;
+ memset_after(p, 0, fmt.audio);
+ return ops->vidioc_s_fmt_audio_cap(file, fh, arg);
+ case V4L2_BUF_TYPE_AUDIO_OUTPUT:
+ if (unlikely(!ops->vidioc_s_fmt_audio_out))
+ break;
+ memset_after(p, 0, fmt.audio);
+ return ops->vidioc_s_fmt_audio_out(file, fh, arg);
}
return -EINVAL;
}
@@ -1891,6 +1934,16 @@ static int v4l_try_fmt(const struct v4l2_ioctl_ops *ops,
break;
memset_after(p, 0, fmt.meta);
return ops->vidioc_try_fmt_meta_out(file, fh, arg);
+ case V4L2_BUF_TYPE_AUDIO_CAPTURE:
+ if (unlikely(!ops->vidioc_try_fmt_audio_cap))
+ break;
+ memset_after(p, 0, fmt.audio);
+ return ops->vidioc_try_fmt_audio_cap(file, fh, arg);
+ case V4L2_BUF_TYPE_AUDIO_OUTPUT:
+ if (unlikely(!ops->vidioc_try_fmt_audio_out))
+ break;
+ memset_after(p, 0, fmt.audio);
+ return ops->vidioc_try_fmt_audio_out(file, fh, arg);
}
return -EINVAL;
}
diff --git a/include/media/v4l2-dev.h b/include/media/v4l2-dev.h
index e0a13505f88d..0924e6d1dab1 100644
--- a/include/media/v4l2-dev.h
+++ b/include/media/v4l2-dev.h
@@ -30,6 +30,7 @@
* @VFL_TYPE_SUBDEV: for V4L2 subdevices
* @VFL_TYPE_SDR: for Software Defined Radio tuners
* @VFL_TYPE_TOUCH: for touch sensors
+ * @VFL_TYPE_AUDIO: for audio input/output devices
* @VFL_TYPE_MAX: number of VFL types, must always be last in the enum
*/
enum vfl_devnode_type {
@@ -39,6 +40,7 @@ enum vfl_devnode_type {
VFL_TYPE_SUBDEV,
VFL_TYPE_SDR,
VFL_TYPE_TOUCH,
+ VFL_TYPE_AUDIO,
VFL_TYPE_MAX /* Shall be the last one */
};

diff --git a/include/media/v4l2-ioctl.h b/include/media/v4l2-ioctl.h
index edb733f21604..f840cf740ce1 100644
--- a/include/media/v4l2-ioctl.h
+++ b/include/media/v4l2-ioctl.h
@@ -45,6 +45,12 @@ struct v4l2_fh;
* @vidioc_enum_fmt_meta_out: pointer to the function that implements
* :ref:`VIDIOC_ENUM_FMT <vidioc_enum_fmt>` ioctl logic
* for metadata output
+ * @vidioc_enum_fmt_audio_cap: pointer to the function that implements
+ * :ref:`VIDIOC_ENUM_FMT <vidioc_enum_fmt>` ioctl logic
+ * for audio capture
+ * @vidioc_enum_fmt_audio_out: pointer to the function that implements
+ * :ref:`VIDIOC_ENUM_FMT <vidioc_enum_fmt>` ioctl logic
+ * for audio output
* @vidioc_g_fmt_vid_cap: pointer to the function that implements
* :ref:`VIDIOC_G_FMT <vidioc_g_fmt>` ioctl logic for video capture
* in single plane mode
@@ -79,6 +85,10 @@ struct v4l2_fh;
* :ref:`VIDIOC_G_FMT <vidioc_g_fmt>` ioctl logic for metadata capture
* @vidioc_g_fmt_meta_out: pointer to the function that implements
* :ref:`VIDIOC_G_FMT <vidioc_g_fmt>` ioctl logic for metadata output
+ * @vidioc_g_fmt_audio_cap: pointer to the function that implements
+ * :ref:`VIDIOC_G_FMT <vidioc_g_fmt>` ioctl logic for audio capture
+ * @vidioc_g_fmt_audio_out: pointer to the function that implements
+ * :ref:`VIDIOC_G_FMT <vidioc_g_fmt>` ioctl logic for audio output
* @vidioc_s_fmt_vid_cap: pointer to the function that implements
* :ref:`VIDIOC_S_FMT <vidioc_g_fmt>` ioctl logic for video capture
* in single plane mode
@@ -113,6 +123,10 @@ struct v4l2_fh;
* :ref:`VIDIOC_S_FMT <vidioc_g_fmt>` ioctl logic for metadata capture
* @vidioc_s_fmt_meta_out: pointer to the function that implements
* :ref:`VIDIOC_S_FMT <vidioc_g_fmt>` ioctl logic for metadata output
+ * @vidioc_s_fmt_audio_cap: pointer to the function that implements
+ * :ref:`VIDIOC_S_FMT <vidioc_g_fmt>` ioctl logic for audio capture
+ * @vidioc_s_fmt_audio_out: pointer to the function that implements
+ * :ref:`VIDIOC_S_FMT <vidioc_g_fmt>` ioctl logic for audio output
* @vidioc_try_fmt_vid_cap: pointer to the function that implements
* :ref:`VIDIOC_TRY_FMT <vidioc_g_fmt>` ioctl logic for video capture
* in single plane mode
@@ -149,6 +163,10 @@ struct v4l2_fh;
* :ref:`VIDIOC_TRY_FMT <vidioc_g_fmt>` ioctl logic for metadata capture
* @vidioc_try_fmt_meta_out: pointer to the function that implements
* :ref:`VIDIOC_TRY_FMT <vidioc_g_fmt>` ioctl logic for metadata output
+ * @vidioc_try_fmt_audio_cap: pointer to the function that implements
+ * :ref:`VIDIOC_TRY_FMT <vidioc_g_fmt>` ioctl logic for audio capture
+ * @vidioc_try_fmt_audio_out: pointer to the function that implements
+ * :ref:`VIDIOC_TRY_FMT <vidioc_g_fmt>` ioctl logic for audio output
* @vidioc_reqbufs: pointer to the function that implements
* :ref:`VIDIOC_REQBUFS <vidioc_reqbufs>` ioctl
* @vidioc_querybuf: pointer to the function that implements
@@ -315,6 +333,10 @@ struct v4l2_ioctl_ops {
struct v4l2_fmtdesc *f);
int (*vidioc_enum_fmt_meta_out)(struct file *file, void *fh,
struct v4l2_fmtdesc *f);
+ int (*vidioc_enum_fmt_audio_cap)(struct file *file, void *fh,
+ struct v4l2_fmtdesc *f);
+ int (*vidioc_enum_fmt_audio_out)(struct file *file, void *fh,
+ struct v4l2_fmtdesc *f);

/* VIDIOC_G_FMT handlers */
int (*vidioc_g_fmt_vid_cap)(struct file *file, void *fh,
@@ -345,6 +367,10 @@ struct v4l2_ioctl_ops {
struct v4l2_format *f);
int (*vidioc_g_fmt_meta_out)(struct file *file, void *fh,
struct v4l2_format *f);
+ int (*vidioc_g_fmt_audio_cap)(struct file *file, void *fh,
+ struct v4l2_format *f);
+ int (*vidioc_g_fmt_audio_out)(struct file *file, void *fh,
+ struct v4l2_format *f);

/* VIDIOC_S_FMT handlers */
int (*vidioc_s_fmt_vid_cap)(struct file *file, void *fh,
@@ -375,6 +401,10 @@ struct v4l2_ioctl_ops {
struct v4l2_format *f);
int (*vidioc_s_fmt_meta_out)(struct file *file, void *fh,
struct v4l2_format *f);
+ int (*vidioc_s_fmt_audio_cap)(struct file *file, void *fh,
+ struct v4l2_format *f);
+ int (*vidioc_s_fmt_audio_out)(struct file *file, void *fh,
+ struct v4l2_format *f);

/* VIDIOC_TRY_FMT handlers */
int (*vidioc_try_fmt_vid_cap)(struct file *file, void *fh,
@@ -405,6 +435,10 @@ struct v4l2_ioctl_ops {
struct v4l2_format *f);
int (*vidioc_try_fmt_meta_out)(struct file *file, void *fh,
struct v4l2_format *f);
+ int (*vidioc_try_fmt_audio_cap)(struct file *file, void *fh,
+ struct v4l2_format *f);
+ int (*vidioc_try_fmt_audio_out)(struct file *file, void *fh,
+ struct v4l2_format *f);

/* Buffer handlers */
int (*vidioc_reqbufs)(struct file *file, void *fh,
diff --git a/include/uapi/linux/videodev2.h b/include/uapi/linux/videodev2.h
index 78260e5d9985..8dc615f2b60c 100644
--- a/include/uapi/linux/videodev2.h
+++ b/include/uapi/linux/videodev2.h
@@ -153,6 +153,8 @@ enum v4l2_buf_type {
V4L2_BUF_TYPE_SDR_OUTPUT = 12,
V4L2_BUF_TYPE_META_CAPTURE = 13,
V4L2_BUF_TYPE_META_OUTPUT = 14,
+ V4L2_BUF_TYPE_AUDIO_CAPTURE = 15,
+ V4L2_BUF_TYPE_AUDIO_OUTPUT = 16,
/* Deprecated, do not use */
V4L2_BUF_TYPE_PRIVATE = 0x80,
};
@@ -169,6 +171,7 @@ enum v4l2_buf_type {
|| (type) == V4L2_BUF_TYPE_VBI_OUTPUT \
|| (type) == V4L2_BUF_TYPE_SLICED_VBI_OUTPUT \
|| (type) == V4L2_BUF_TYPE_SDR_OUTPUT \
+ || (type) == V4L2_BUF_TYPE_AUDIO_OUTPUT \
|| (type) == V4L2_BUF_TYPE_META_OUTPUT)

#define V4L2_TYPE_IS_CAPTURE(type) (!V4L2_TYPE_IS_OUTPUT(type))
@@ -508,6 +511,7 @@ struct v4l2_capability {
#define V4L2_CAP_TOUCH 0x10000000 /* Is a touch device */

#define V4L2_CAP_IO_MC 0x20000000 /* Is input/output controlled by the media controller */
+#define V4L2_CAP_AUDIO_M2M 0x40000000 /* Is an audio memory-to-memory device */

#define V4L2_CAP_DEVICE_CAPS 0x80000000 /* sets device capabilities field */

@@ -838,6 +842,9 @@ struct v4l2_pix_format {
#define V4L2_META_FMT_RK_ISP1_PARAMS v4l2_fourcc('R', 'K', '1', 'P') /* Rockchip ISP1 3A Parameters */
#define V4L2_META_FMT_RK_ISP1_STAT_3A v4l2_fourcc('R', 'K', '1', 'S') /* Rockchip ISP1 3A Statistics */

+/* Audio-data formats */
+#define V4L2_AUDIO_FMT_LPCM v4l2_fourcc('L', 'P', 'C', 'M') /* audio lpcm */
+
/* priv field value to indicates that subsequent fields are valid. */
#define V4L2_PIX_FMT_PRIV_MAGIC 0xfeedcafe

@@ -2417,6 +2424,22 @@ struct v4l2_meta_format {
__u32 buffersize;
} __attribute__ ((packed));

+/**
+ * struct v4l2_audio_format - audio data format definition
+ * @pixelformat: little endian four character code (fourcc)
+ * @rate: sample rate
+ * @format: sample format
+ * @channels: number of channels
+ * @buffersize: maximum size in bytes required for data
+ */
+struct v4l2_audio_format {
+ __u32 pixelformat;
+ __u32 rate;
+ __u32 format;
+ __u32 channels;
+ __u32 buffersize;
+} __attribute__ ((packed));
+
/**
* struct v4l2_format - stream data format
* @type: enum v4l2_buf_type; type of the data stream
@@ -2425,6 +2448,7 @@ struct v4l2_meta_format {
* @win: definition of an overlaid image
* @vbi: raw VBI capture or output parameters
* @sliced: sliced VBI capture or output parameters
+ * @audio: definition of an audio format
* @raw_data: placeholder for future extensions and custom formats
* @fmt: union of @pix, @pix_mp, @win, @vbi, @sliced, @sdr, @meta
* and @raw_data
@@ -2439,6 +2463,7 @@ struct v4l2_format {
struct v4l2_sliced_vbi_format sliced; /* V4L2_BUF_TYPE_SLICED_VBI_CAPTURE */
struct v4l2_sdr_format sdr; /* V4L2_BUF_TYPE_SDR_CAPTURE */
struct v4l2_meta_format meta; /* V4L2_BUF_TYPE_META_CAPTURE */
+ struct v4l2_audio_format audio; /* V4L2_BUF_TYPE_AUDIO_CAPTURE */
__u8 raw_data[200]; /* user-defined */
} fmt;
};
--
2.34.1

2023-09-14 19:13:08

by Sakari Ailus

[permalink] [raw]
Subject: Re: [RFC PATCH v3 6/9] media: v4l2: Add audio capture and output support

Hi Shengjiu,

Thanks for the update.

On Thu, Sep 14, 2023 at 01:54:02PM +0800, Shengjiu Wang wrote:
> Audio signal processing has the requirement for memory to
> memory similar as Video.
>
> This patch is to add this support in v4l2 framework, defined
> new buffer type V4L2_BUF_TYPE_AUDIO_CAPTURE and
> V4L2_BUF_TYPE_AUDIO_OUTPUT, defined new format v4l2_audio_format
> for audio case usage.
>
> Defined V4L2_AUDIO_FMT_LPCM format type for audio.

This would be nicer as a separate patch. Also see the related comments
below.

>
> Defined V4L2_CAP_AUDIO_M2M capability type for audio memory
> to memory case.
>
> The created audio device is named "/dev/v4l-audioX".
>
> Signed-off-by: Shengjiu Wang <[email protected]>
> ---
> .../userspace-api/media/v4l/audio-formats.rst | 15 +++++
> .../userspace-api/media/v4l/buffer.rst | 6 ++
> .../userspace-api/media/v4l/dev-audio.rst | 63 +++++++++++++++++++
> .../userspace-api/media/v4l/devices.rst | 1 +
> .../media/v4l/pixfmt-aud-lpcm.rst | 31 +++++++++
> .../userspace-api/media/v4l/pixfmt.rst | 1 +
> .../media/v4l/vidioc-enum-fmt.rst | 2 +
> .../userspace-api/media/v4l/vidioc-g-fmt.rst | 4 ++
> .../media/v4l/vidioc-querycap.rst | 3 +
> .../media/videodev2.h.rst.exceptions | 2 +
> .../media/common/videobuf2/videobuf2-v4l2.c | 4 ++
> drivers/media/v4l2-core/v4l2-dev.c | 17 +++++
> drivers/media/v4l2-core/v4l2-ioctl.c | 53 ++++++++++++++++
> include/media/v4l2-dev.h | 2 +
> include/media/v4l2-ioctl.h | 34 ++++++++++
> include/uapi/linux/videodev2.h | 25 ++++++++
> 16 files changed, 263 insertions(+)
> create mode 100644 Documentation/userspace-api/media/v4l/audio-formats.rst
> create mode 100644 Documentation/userspace-api/media/v4l/dev-audio.rst
> create mode 100644 Documentation/userspace-api/media/v4l/pixfmt-aud-lpcm.rst
>
> diff --git a/Documentation/userspace-api/media/v4l/audio-formats.rst b/Documentation/userspace-api/media/v4l/audio-formats.rst
> new file mode 100644
> index 000000000000..bc52712d20d3
> --- /dev/null
> +++ b/Documentation/userspace-api/media/v4l/audio-formats.rst
> @@ -0,0 +1,15 @@
> +.. SPDX-License-Identifier: GFDL-1.1-no-invariants-or-later
> +
> +.. _audio-formats:
> +
> +*************
> +Audio Formats
> +*************
> +
> +These formats are used for :ref:`audio` interface only.
> +
> +
> +.. toctree::
> + :maxdepth: 1
> +
> + pixfmt-aud-lpcm
> diff --git a/Documentation/userspace-api/media/v4l/buffer.rst b/Documentation/userspace-api/media/v4l/buffer.rst
> index 04dec3e570ed..80cf2cb20dfe 100644
> --- a/Documentation/userspace-api/media/v4l/buffer.rst
> +++ b/Documentation/userspace-api/media/v4l/buffer.rst
> @@ -438,6 +438,12 @@ enum v4l2_buf_type
> * - ``V4L2_BUF_TYPE_META_OUTPUT``
> - 14
> - Buffer for metadata output, see :ref:`metadata`.
> + * - ``V4L2_BUF_TYPE_AUDIO_CAPTURE``
> + - 15
> + - Buffer for audio capture, see :ref:`audio`.
> + * - ``V4L2_BUF_TYPE_AUDIO_OUTPUT``
> + - 16
> + - Buffer for audio output, see :ref:`audio`.
>
>
> .. _buffer-flags:
> diff --git a/Documentation/userspace-api/media/v4l/dev-audio.rst b/Documentation/userspace-api/media/v4l/dev-audio.rst
> new file mode 100644
> index 000000000000..f9bcf0c7b056
> --- /dev/null
> +++ b/Documentation/userspace-api/media/v4l/dev-audio.rst
> @@ -0,0 +1,63 @@
> +.. SPDX-License-Identifier: GFDL-1.1-no-invariants-or-later
> +
> +.. _audiodev:
> +
> +******************
> +audio Interface

Capital "A"?

> +******************

Too many asterisks (same a few lines above, too).

> +
> +The audio interface is implemented on audio device nodes. These devices
> +process audio data in memory, such as performing sample rate conversion.
> +This interface is intended for controlling and data streaming of such devices.
> +
> +Audio devices are accessed through character device special files named
> +``/dev/v4l-audio``
> +
> +Querying Capabilities
> +=====================
> +
> +Device nodes supporting the audio capture and output interface set the
> +``V4L2_CAP_AUDIO_M2M`` flag in the ``device_caps`` field of the
> +:c:type:`v4l2_capability` structure returned by the :c:func:`VIDIOC_QUERYCAP`
> +ioctl.
> +
> +At least one of the read/write or streaming I/O methods must be supported.
> +
> +
> +Data Format Negotiation
> +=======================
> +
> +The audio device uses the :ref:`format` ioctls to select the capture format.
> +The audio buffer content format is bound to that selected format. In addition
> +to the basic :ref:`format` ioctls, the :c:func:`VIDIOC_ENUM_FMT` ioctl must be
> +supported as well.
> +
> +To use the :ref:`format` ioctls applications set the ``type`` field of the
> +:c:type:`v4l2_format` structure to ``V4L2_BUF_TYPE_AUDIO_CAPTURE`` or to
> +``V4L2_BUF_TYPE_AUDIO_OUTPUT``. Both drivers and applications must set the
> +remainder of the :c:type:`v4l2_format` structure to 0.
> +
> +.. c:type:: v4l2_audio_format
> +
> +.. tabularcolumns:: |p{1.4cm}|p{2.4cm}|p{13.5cm}|
> +
> +.. flat-table:: struct v4l2_audio_format
> + :header-rows: 0
> + :stub-columns: 0
> + :widths: 1 1 2
> +
> + * - __u32
> + - ``rate``
> + - The sample rate, set by the application. The range is [5512, 768000].
> + * - __u32
> + - ``format``
> + - The sample format, set by the application. format is defined as
> + SNDRV_PCM_FORMAT_S8, SNDRV_PCM_FORMAT_U8, ...,
> + * - __u32
> + - ``channels``
> + - The channel number, set by the application. channel number range is
> + [1, 32].
> + * - __u32
> + - ``buffersize``
> + - Maximum buffer size in bytes required for data. The value is set by the
> + driver.
> diff --git a/Documentation/userspace-api/media/v4l/devices.rst b/Documentation/userspace-api/media/v4l/devices.rst
> index 8bfbad65a9d4..8261f3468489 100644
> --- a/Documentation/userspace-api/media/v4l/devices.rst
> +++ b/Documentation/userspace-api/media/v4l/devices.rst
> @@ -24,3 +24,4 @@ Interfaces
> dev-event
> dev-subdev
> dev-meta
> + dev-audio
> diff --git a/Documentation/userspace-api/media/v4l/pixfmt-aud-lpcm.rst b/Documentation/userspace-api/media/v4l/pixfmt-aud-lpcm.rst
> new file mode 100644
> index 000000000000..f9ebe2a05f69
> --- /dev/null
> +++ b/Documentation/userspace-api/media/v4l/pixfmt-aud-lpcm.rst
> @@ -0,0 +1,31 @@
> +.. SPDX-License-Identifier: GFDL-1.1-no-invariants-or-later
> +
> +.. _v4l2-aud-fmt-lpcm:
> +
> +*************************
> +V4L2_AUDIO_FMT_LPCM ('LPCM')
> +*************************
> +
> +Linear Pulse-Code Modulation (LPCM)
> +
> +
> +Description
> +===========
> +
> +This describes audio format used by the audio memory to memory driver.
> +
> +It contains the following fields:
> +
> +.. flat-table::
> + :widths: 1 4
> + :header-rows: 1
> + :stub-columns: 0
> +
> + * - Field
> + - Description
> + * - u32 samplerate;
> + - which is the number of times per second that samples are taken.
> + * - u32 sampleformat;
> + - which determines the number of possible digital values that can be used to represent each sample

80 characters (or less) per line, please.

Which values could this field have and what do they signify?

> + * - u32 channels;
> + - channel number for each sample.

I suppose the rest of the buffer would be samples? This should be
documented. I think there are also different ways the data could be
arranged and this needs to be documented, too.

--
Kind regards,

Sakari Ailus

2023-09-19 10:38:04

by Shengjiu Wang

[permalink] [raw]
Subject: Re: [RFC PATCH v3 6/9] media: v4l2: Add audio capture and output support

On Thu, Sep 14, 2023 at 6:17 PM Sakari Ailus <[email protected]> wrote:
>
> Hi Shengjiu,
>
> Thanks for the update.
>
> On Thu, Sep 14, 2023 at 01:54:02PM +0800, Shengjiu Wang wrote:
> > Audio signal processing has the requirement for memory to
> > memory similar as Video.
> >
> > This patch is to add this support in v4l2 framework, defined
> > new buffer type V4L2_BUF_TYPE_AUDIO_CAPTURE and
> > V4L2_BUF_TYPE_AUDIO_OUTPUT, defined new format v4l2_audio_format
> > for audio case usage.
> >
> > Defined V4L2_AUDIO_FMT_LPCM format type for audio.
>
> This would be nicer as a separate patch. Also see the related comments
> below.

OK, will separate it.

>
> >
> > Defined V4L2_CAP_AUDIO_M2M capability type for audio memory
> > to memory case.
> >
> > The created audio device is named "/dev/v4l-audioX".
> >
> > Signed-off-by: Shengjiu Wang <[email protected]>
> > ---
> > .../userspace-api/media/v4l/audio-formats.rst | 15 +++++
> > .../userspace-api/media/v4l/buffer.rst | 6 ++
> > .../userspace-api/media/v4l/dev-audio.rst | 63 +++++++++++++++++++
> > .../userspace-api/media/v4l/devices.rst | 1 +
> > .../media/v4l/pixfmt-aud-lpcm.rst | 31 +++++++++
> > .../userspace-api/media/v4l/pixfmt.rst | 1 +
> > .../media/v4l/vidioc-enum-fmt.rst | 2 +
> > .../userspace-api/media/v4l/vidioc-g-fmt.rst | 4 ++
> > .../media/v4l/vidioc-querycap.rst | 3 +
> > .../media/videodev2.h.rst.exceptions | 2 +
> > .../media/common/videobuf2/videobuf2-v4l2.c | 4 ++
> > drivers/media/v4l2-core/v4l2-dev.c | 17 +++++
> > drivers/media/v4l2-core/v4l2-ioctl.c | 53 ++++++++++++++++
> > include/media/v4l2-dev.h | 2 +
> > include/media/v4l2-ioctl.h | 34 ++++++++++
> > include/uapi/linux/videodev2.h | 25 ++++++++
> > 16 files changed, 263 insertions(+)
> > create mode 100644 Documentation/userspace-api/media/v4l/audio-formats.rst
> > create mode 100644 Documentation/userspace-api/media/v4l/dev-audio.rst
> > create mode 100644 Documentation/userspace-api/media/v4l/pixfmt-aud-lpcm.rst
> >
> > diff --git a/Documentation/userspace-api/media/v4l/audio-formats.rst b/Documentation/userspace-api/media/v4l/audio-formats.rst
> > new file mode 100644
> > index 000000000000..bc52712d20d3
> > --- /dev/null
> > +++ b/Documentation/userspace-api/media/v4l/audio-formats.rst
> > @@ -0,0 +1,15 @@
> > +.. SPDX-License-Identifier: GFDL-1.1-no-invariants-or-later
> > +
> > +.. _audio-formats:
> > +
> > +*************
> > +Audio Formats
> > +*************
> > +
> > +These formats are used for :ref:`audio` interface only.
> > +
> > +
> > +.. toctree::
> > + :maxdepth: 1
> > +
> > + pixfmt-aud-lpcm
> > diff --git a/Documentation/userspace-api/media/v4l/buffer.rst b/Documentation/userspace-api/media/v4l/buffer.rst
> > index 04dec3e570ed..80cf2cb20dfe 100644
> > --- a/Documentation/userspace-api/media/v4l/buffer.rst
> > +++ b/Documentation/userspace-api/media/v4l/buffer.rst
> > @@ -438,6 +438,12 @@ enum v4l2_buf_type
> > * - ``V4L2_BUF_TYPE_META_OUTPUT``
> > - 14
> > - Buffer for metadata output, see :ref:`metadata`.
> > + * - ``V4L2_BUF_TYPE_AUDIO_CAPTURE``
> > + - 15
> > + - Buffer for audio capture, see :ref:`audio`.
> > + * - ``V4L2_BUF_TYPE_AUDIO_OUTPUT``
> > + - 16
> > + - Buffer for audio output, see :ref:`audio`.
> >
> >
> > .. _buffer-flags:
> > diff --git a/Documentation/userspace-api/media/v4l/dev-audio.rst b/Documentation/userspace-api/media/v4l/dev-audio.rst
> > new file mode 100644
> > index 000000000000..f9bcf0c7b056
> > --- /dev/null
> > +++ b/Documentation/userspace-api/media/v4l/dev-audio.rst
> > @@ -0,0 +1,63 @@
> > +.. SPDX-License-Identifier: GFDL-1.1-no-invariants-or-later
> > +
> > +.. _audiodev:
> > +
> > +******************
> > +audio Interface
>
> Capital "A"?

OK, will modify it.

>
> > +******************
>
> Too many asterisks (same a few lines above, too).

ok, will update it.

>
> > +
> > +The audio interface is implemented on audio device nodes. The audio device
> > +which uses application software for modulation or demodulation. This
> > +interface is intended for controlling and data streaming of such devices
> > +
> > +Audio devices are accessed through character device special files named
> > +``/dev/v4l-audio``
> > +
> > +Querying Capabilities
> > +=====================
> > +
> > +Device nodes supporting the audio capture and output interface set the
> > +``V4L2_CAP_AUDIO_M2M`` flag in the ``device_caps`` field of the
> > +:c:type:`v4l2_capability` structure returned by the :c:func:`VIDIOC_QUERYCAP`
> > +ioctl.
> > +
> > +At least one of the read/write or streaming I/O methods must be supported.
> > +
> > +
> > +Data Format Negotiation
> > +=======================
> > +
> > +The audio device uses the :ref:`format` ioctls to select the capture format.
> > +The audio buffer content format is bound to that selected format. In addition
> > +to the basic :ref:`format` ioctls, the :c:func:`VIDIOC_ENUM_FMT` ioctl must be
> > +supported as well.
> > +
> > +To use the :ref:`format` ioctls applications set the ``type`` field of the
> > +:c:type:`v4l2_format` structure to ``V4L2_BUF_TYPE_AUDIO_CAPTURE`` or to
> > +``V4L2_BUF_TYPE_AUDIO_OUTPUT``. Both drivers and applications must set the
> > +remainder of the :c:type:`v4l2_format` structure to 0.
> > +
> > +.. c:type:: v4l2_audio_format
> > +
> > +.. tabularcolumns:: |p{1.4cm}|p{2.4cm}|p{13.5cm}|
> > +
> > +.. flat-table:: struct v4l2_audio_format
> > + :header-rows: 0
> > + :stub-columns: 0
> > + :widths: 1 1 2
> > +
> > + * - __u32
> > + - ``rate``
> > + - The sample rate, set by the application. The range is [5512, 768000].
> > + * - __u32
> > + - ``format``
> > + - The sample format, set by the application. format is defined as
> > + SNDRV_PCM_FORMAT_S8, SNDRV_PCM_FORMAT_U8, ...,
> > + * - __u32
> > + - ``channels``
> > + - The channel number, set by the application. channel number range is
> > + [1, 32].
> > + * - __u32
> > + - ``buffersize``
> > + - Maximum buffer size in bytes required for data. The value is set by the
> > + driver.
> > diff --git a/Documentation/userspace-api/media/v4l/devices.rst b/Documentation/userspace-api/media/v4l/devices.rst
> > index 8bfbad65a9d4..8261f3468489 100644
> > --- a/Documentation/userspace-api/media/v4l/devices.rst
> > +++ b/Documentation/userspace-api/media/v4l/devices.rst
> > @@ -24,3 +24,4 @@ Interfaces
> > dev-event
> > dev-subdev
> > dev-meta
> > + dev-audio
> > diff --git a/Documentation/userspace-api/media/v4l/pixfmt-aud-lpcm.rst b/Documentation/userspace-api/media/v4l/pixfmt-aud-lpcm.rst
> > new file mode 100644
> > index 000000000000..f9ebe2a05f69
> > --- /dev/null
> > +++ b/Documentation/userspace-api/media/v4l/pixfmt-aud-lpcm.rst
> > @@ -0,0 +1,31 @@
> > +.. SPDX-License-Identifier: GFDL-1.1-no-invariants-or-later
> > +
> > +.. _v4l2-aud-fmt-lpcm:
> > +
> > +*************************
> > +V4L2_AUDIO_FMT_LPCM ('LPCM')
> > +*************************
> > +
> > +Linear Pulse-Code Modulation (LPCM)
> > +
> > +
> > +Description
> > +===========
> > +
> > +This describes audio format used by the audio memory to memory driver.
> > +
> > +It contains the following fields:
> > +
> > +.. flat-table::
> > + :widths: 1 4
> > + :header-rows: 1
> > + :stub-columns: 0
> > +
> > + * - Field
> > + - Description
> > + * - u32 samplerate;
> > + - which is the number of times per second that samples are taken.
> > + * - u32 sampleformat;
> > + - which determines the number of possible digital values that can be used to represent each sample
>
> 80 characters (or less) per line, please.

Ok, will change it.

>
> Which values could this field have and what do they signify?

The values are SNDRV_PCM_FORMAT_S8, SNDRV_PCM_FORMAT_U8...
which are the PCM formats defined in ALSA.
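
For reference, assuming the standard ALSA uapi header
include/uapi/sound/asound.h, the first few values are:

    SNDRV_PCM_FORMAT_S8      = 0
    SNDRV_PCM_FORMAT_U8      = 1
    SNDRV_PCM_FORMAT_S16_LE  = 2
    SNDRV_PCM_FORMAT_S16_BE  = 3
    SNDRV_PCM_FORMAT_U16_LE  = 4
    SNDRV_PCM_FORMAT_U16_BE  = 5
    SNDRV_PCM_FORMAT_S24_LE  = 6

(see that header for the authoritative, complete list).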

>
> > + * - u32 channels;
> > + - channel number for each sample.
>
> I suppose the rest of the buffer would be samples? This should be
> documented. I think there are also different ways the data could be
> arranged and this needs to be documented, too.

All data in the buffer are samples; the 'samplerate', 'sampleformat' and
'channels' fields I list here try to describe those samples.
I was not sure how to write this document, so I listed these characteristics.

Best regards
Wang Shengjiu

2023-09-19 21:24:09

by Sakari Ailus

[permalink] [raw]
Subject: Re: [RFC PATCH v3 6/9] media: v4l2: Add audio capture and output support

Hi Shengjiu,

On Tue, Sep 19, 2023 at 06:31:09PM +0800, Shengjiu Wang wrote:

...

> > > +*************************
> > > +V4L2_AUDIO_FMT_LPCM ('LPCM')
> > > +*************************

Something to fix here, too...?

> > > +
> > > +Linear Pulse-Code Modulation (LPCM)
> > > +
> > > +
> > > +Description
> > > +===========
> > > +
> > > +This describes audio format used by the audio memory to memory driver.
> > > +
> > > +It contains the following fields:
> > > +
> > > +.. flat-table::
> > > + :widths: 1 4
> > > + :header-rows: 1
> > > + :stub-columns: 0
> > > +
> > > + * - Field
> > > + - Description
> > > + * - u32 samplerate;
> > > + - which is the number of times per second that samples are taken.
> > > + * - u32 sampleformat;
> > > + - which determines the number of possible digital values that can be used to represent each sample
> >
> > 80 characters (or less) per line, please.
>
> Ok, will change it.
>
> >
> > Which values could this field have and what do they signify?
>
> The values are SNDRV_PCM_FORMAT_S8, SNDRV_PCM_FORMAT_U8...
> which are the PCM formats defined in ALSA.

I suppose this is documented in ALSA documentation. Could you refer to
that?

>
> >
> > > + * - u32 channels;
> > > + - channel number for each sample.
> >
> > I suppose the rest of the buffer would be samples? This should be
> > documented. I think there are also different ways the data could be
> > arranged and this needs to be documented, too.
>
> All data in the buffer are samples; the 'samplerate', 'sampleformat' and
> 'channels' fields I list here try to describe those samples.
> I was not sure how to write this document, so I listed these characteristics.

The layout of this data in memory needs to be documented. I think a
reference to ALSA documentation would be the best.
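
(For instance, with interleaved two-channel S16_LE data the buffer would
presumably just hold frames laid out as L0 R0 L1 R1 ..., i.e. ALSA's
interleaved layout; that is exactly the kind of statement the format
documentation should spell out or reference.)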

--
Regards,

Sakari Ailus

2023-09-20 10:24:27

by Hans Verkuil

[permalink] [raw]
Subject: Re: [RFC PATCH v3 6/9] media: v4l2: Add audio capture and output support

Hi Shengjiu,

I just noticed you posted a v4, but I expect that my comments below are still valid...

On 14/09/2023 07:54, Shengjiu Wang wrote:
> Audio signal processing has the requirement for memory to
> memory similar as Video.
>
> This patch is to add this support in v4l2 framework, defined
> new buffer type V4L2_BUF_TYPE_AUDIO_CAPTURE and
> V4L2_BUF_TYPE_AUDIO_OUTPUT, defined new format v4l2_audio_format
> for audio case usage.
>
> Defined V4L2_AUDIO_FMT_LPCM format type for audio.
>
> Defined V4L2_CAP_AUDIO_M2M capability type for audio memory
> to memory case.
>
> The created audio device is named "/dev/v4l-audioX".
>
> Signed-off-by: Shengjiu Wang <[email protected]>
> ---
> .../userspace-api/media/v4l/audio-formats.rst | 15 +++++
> .../userspace-api/media/v4l/buffer.rst | 6 ++
> .../userspace-api/media/v4l/dev-audio.rst | 63 +++++++++++++++++++
> .../userspace-api/media/v4l/devices.rst | 1 +
> .../media/v4l/pixfmt-aud-lpcm.rst | 31 +++++++++
> .../userspace-api/media/v4l/pixfmt.rst | 1 +
> .../media/v4l/vidioc-enum-fmt.rst | 2 +
> .../userspace-api/media/v4l/vidioc-g-fmt.rst | 4 ++
> .../media/v4l/vidioc-querycap.rst | 3 +
> .../media/videodev2.h.rst.exceptions | 2 +
> .../media/common/videobuf2/videobuf2-v4l2.c | 4 ++
> drivers/media/v4l2-core/v4l2-dev.c | 17 +++++
> drivers/media/v4l2-core/v4l2-ioctl.c | 53 ++++++++++++++++
> include/media/v4l2-dev.h | 2 +
> include/media/v4l2-ioctl.h | 34 ++++++++++
> include/uapi/linux/videodev2.h | 25 ++++++++
> 16 files changed, 263 insertions(+)
> create mode 100644 Documentation/userspace-api/media/v4l/audio-formats.rst
> create mode 100644 Documentation/userspace-api/media/v4l/dev-audio.rst
> create mode 100644 Documentation/userspace-api/media/v4l/pixfmt-aud-lpcm.rst
>
> diff --git a/Documentation/userspace-api/media/v4l/audio-formats.rst b/Documentation/userspace-api/media/v4l/audio-formats.rst
> new file mode 100644
> index 000000000000..bc52712d20d3
> --- /dev/null
> +++ b/Documentation/userspace-api/media/v4l/audio-formats.rst
> @@ -0,0 +1,15 @@
> +.. SPDX-License-Identifier: GFDL-1.1-no-invariants-or-later
> +
> +.. _audio-formats:
> +
> +*************
> +Audio Formats
> +*************
> +
> +These formats are used for :ref:`audio` interface only.
> +
> +
> +.. toctree::
> + :maxdepth: 1
> +
> + pixfmt-aud-lpcm
> diff --git a/Documentation/userspace-api/media/v4l/buffer.rst b/Documentation/userspace-api/media/v4l/buffer.rst
> index 04dec3e570ed..80cf2cb20dfe 100644
> --- a/Documentation/userspace-api/media/v4l/buffer.rst
> +++ b/Documentation/userspace-api/media/v4l/buffer.rst
> @@ -438,6 +438,12 @@ enum v4l2_buf_type
> * - ``V4L2_BUF_TYPE_META_OUTPUT``
> - 14
> - Buffer for metadata output, see :ref:`metadata`.
> + * - ``V4L2_BUF_TYPE_AUDIO_CAPTURE``
> + - 15
> + - Buffer for audio capture, see :ref:`audio`.
> + * - ``V4L2_BUF_TYPE_AUDIO_OUTPUT``
> + - 16
> + - Buffer for audio output, see :ref:`audio`.
>
>
> .. _buffer-flags:
> diff --git a/Documentation/userspace-api/media/v4l/dev-audio.rst b/Documentation/userspace-api/media/v4l/dev-audio.rst
> new file mode 100644
> index 000000000000..f9bcf0c7b056
> --- /dev/null
> +++ b/Documentation/userspace-api/media/v4l/dev-audio.rst

Rename the file to dev-audio-mem2mem.rst as this is specific to an audio
M2M interface.

> @@ -0,0 +1,63 @@
> +.. SPDX-License-Identifier: GFDL-1.1-no-invariants-or-later
> +
> +.. _audiodev:
> +
> +******************
> +audio Interface
> +******************
> +
> +The audio interface is implemented on audio device nodes. The audio device
> +which uses application software for modulation or demodulation. This
> +interface is intended for controlling and data streaming of such devices
> +
> +Audio devices are accessed through character device special files named
> +``/dev/v4l-audio``

I think this intro is somewhat confusing. I would recommend to copy the intro
from dev-mem2mem.rst instead, adapting it for audio.

> +
> +Querying Capabilities
> +=====================
> +
> +Device nodes supporting the audio capture and output interface set the
> +``V4L2_CAP_AUDIO_M2M`` flag in the ``device_caps`` field of the
> +:c:type:`v4l2_capability` structure returned by the :c:func:`VIDIOC_QUERYCAP`
> +ioctl.
> +
> +At least one of the read/write or streaming I/O methods must be supported.

M2M devices do not support read/write, only streaming I/O is supported.
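
As a rough illustration only (this is not part of the series; it merely
assumes the V4L2_BUF_TYPE_AUDIO_* types and struct v4l2_audio_format
proposed in this patch, with error handling trimmed), a user-space sketch
driving one queue purely with streaming I/O could look like:

#include <sys/ioctl.h>
#include <linux/videodev2.h>
#include <sound/asound.h>

/* Negotiate the format, allocate buffers and start one queue. */
static int start_audio_queue(int fd, int type)
{
	struct v4l2_format fmt = { .type = type };
	struct v4l2_requestbuffers req = {
		.type = type, .memory = V4L2_MEMORY_MMAP, .count = 4,
	};

	fmt.fmt.audio.pixelformat = V4L2_AUDIO_FMT_LPCM;
	fmt.fmt.audio.rate = 48000;
	fmt.fmt.audio.format = SNDRV_PCM_FORMAT_S16_LE;
	fmt.fmt.audio.channels = 2;

	if (ioctl(fd, VIDIOC_S_FMT, &fmt) < 0 ||
	    ioctl(fd, VIDIOC_REQBUFS, &req) < 0)
		return -1;
	/* mmap() and VIDIOC_QBUF each buffer here, then: */
	return ioctl(fd, VIDIOC_STREAMON, &type);
}

Calling start_audio_queue(fd, V4L2_BUF_TYPE_AUDIO_OUTPUT) and
start_audio_queue(fd, V4L2_BUF_TYPE_AUDIO_CAPTURE) once each would set up
both sides of the m2m device; buffers are then exchanged with
VIDIOC_QBUF/VIDIOC_DQBUF, with no read()/write() involved.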

> +
> +
> +Data Format Negotiation
> +=======================
> +
> +The audio device uses the :ref:`format` ioctls to select the capture format.
> +The audio buffer content format is bound to that selected format. In addition
> +to the basic :ref:`format` ioctls, the :c:func:`VIDIOC_ENUM_FMT` ioctl must be
> +supported as well.
> +
> +To use the :ref:`format` ioctls applications set the ``type`` field of the
> +:c:type:`v4l2_format` structure to ``V4L2_BUF_TYPE_AUDIO_CAPTURE`` or to
> +``V4L2_BUF_TYPE_AUDIO_OUTPUT``. Both drivers and applications must set the
> +remainder of the :c:type:`v4l2_format` structure to 0.
> +
> +.. c:type:: v4l2_audio_format
> +
> +.. tabularcolumns:: |p{1.4cm}|p{2.4cm}|p{13.5cm}|
> +
> +.. flat-table:: struct v4l2_audio_format
> + :header-rows: 0
> + :stub-columns: 0
> + :widths: 1 1 2
> +
> + * - __u32
> + - ``rate``
> + - The sample rate, set by the application. The range is [5512, 768000].
> + * - __u32
> + - ``format``
> + - The sample format, set by the application. format is defined as
> + SNDRV_PCM_FORMAT_S8, SNDRV_PCM_FORMAT_U8, ...,
> + * - __u32
> + - ``channels``
> + - The channel number, set by the application. channel number range is
> + [1, 32].
> + * - __u32
> + - ``buffersize``
> + - Maximum buffer size in bytes required for data. The value is set by the
> + driver.
> diff --git a/Documentation/userspace-api/media/v4l/devices.rst b/Documentation/userspace-api/media/v4l/devices.rst
> index 8bfbad65a9d4..8261f3468489 100644
> --- a/Documentation/userspace-api/media/v4l/devices.rst
> +++ b/Documentation/userspace-api/media/v4l/devices.rst
> @@ -24,3 +24,4 @@ Interfaces
> dev-event
> dev-subdev
> dev-meta
> + dev-audio
> diff --git a/Documentation/userspace-api/media/v4l/pixfmt-aud-lpcm.rst b/Documentation/userspace-api/media/v4l/pixfmt-aud-lpcm.rst
> new file mode 100644
> index 000000000000..f9ebe2a05f69
> --- /dev/null
> +++ b/Documentation/userspace-api/media/v4l/pixfmt-aud-lpcm.rst
> @@ -0,0 +1,31 @@
> +.. SPDX-License-Identifier: GFDL-1.1-no-invariants-or-later
> +
> +.. _v4l2-aud-fmt-lpcm:
> +
> +*************************
> +V4L2_AUDIO_FMT_LPCM ('LPCM')
> +*************************
> +
> +Linear Pulse-Code Modulation (LPCM)
> +
> +
> +Description
> +===========
> +
> +This describes audio format used by the audio memory to memory driver.
> +
> +It contains the following fields:
> +
> +.. flat-table::
> + :widths: 1 4
> + :header-rows: 1
> + :stub-columns: 0
> +
> + * - Field
> + - Description
> + * - u32 samplerate;
> + - which is the number of times per second that samples are taken.
> + * - u32 sampleformat;
> + - which determines the number of possible digital values that can be used to represent each sample
> + * - u32 channels;
> + - channel number for each sample.

See Sakari's comments. This section describes how the audio data is formatted
in the buffer memory. Presumably this is already documented somewhere in the ALSA
docs, so a reference to that would work.

> diff --git a/Documentation/userspace-api/media/v4l/pixfmt.rst b/Documentation/userspace-api/media/v4l/pixfmt.rst
> index 11dab4a90630..e205db5fa8af 100644
> --- a/Documentation/userspace-api/media/v4l/pixfmt.rst
> +++ b/Documentation/userspace-api/media/v4l/pixfmt.rst
> @@ -36,3 +36,4 @@ see also :ref:`VIDIOC_G_FBUF <VIDIOC_G_FBUF>`.)
> colorspaces
> colorspaces-defs
> colorspaces-details
> + audio-formats
> diff --git a/Documentation/userspace-api/media/v4l/vidioc-enum-fmt.rst b/Documentation/userspace-api/media/v4l/vidioc-enum-fmt.rst
> index 000c154b0f98..42deb07f4ff4 100644
> --- a/Documentation/userspace-api/media/v4l/vidioc-enum-fmt.rst
> +++ b/Documentation/userspace-api/media/v4l/vidioc-enum-fmt.rst
> @@ -96,6 +96,8 @@ the ``mbus_code`` field is handled differently:
> ``V4L2_BUF_TYPE_VIDEO_OVERLAY``,
> ``V4L2_BUF_TYPE_SDR_CAPTURE``,
> ``V4L2_BUF_TYPE_SDR_OUTPUT``,
> + ``V4L2_BUF_TYPE_AUDIO_CAPTURE``,
> + ``V4L2_BUF_TYPE_AUDIO_OUTPUT``,
> ``V4L2_BUF_TYPE_META_CAPTURE`` and
> ``V4L2_BUF_TYPE_META_OUTPUT``.
> See :c:type:`v4l2_buf_type`.
> diff --git a/Documentation/userspace-api/media/v4l/vidioc-g-fmt.rst b/Documentation/userspace-api/media/v4l/vidioc-g-fmt.rst
> index 675c385e5aca..1ecb7d640057 100644
> --- a/Documentation/userspace-api/media/v4l/vidioc-g-fmt.rst
> +++ b/Documentation/userspace-api/media/v4l/vidioc-g-fmt.rst
> @@ -130,6 +130,10 @@ The format as returned by :ref:`VIDIOC_TRY_FMT <VIDIOC_G_FMT>` must be identical
> - ``meta``
> - Definition of a metadata format, see :ref:`meta-formats`, used by
> metadata capture devices.
> + * - struct :c:type:`v4l2_audio_format`
> + - ``audio``
> + - Definition of a audio data format, see :ref:`dev-audio`, used by
> + audio capture and output devices
> * - __u8
> - ``raw_data``\ [200]
> - Place holder for future extensions.
> diff --git a/Documentation/userspace-api/media/v4l/vidioc-querycap.rst b/Documentation/userspace-api/media/v4l/vidioc-querycap.rst
> index 6c57b8428356..0b3cefefc86b 100644
> --- a/Documentation/userspace-api/media/v4l/vidioc-querycap.rst
> +++ b/Documentation/userspace-api/media/v4l/vidioc-querycap.rst
> @@ -259,6 +259,9 @@ specification the ioctl returns an ``EINVAL`` error code.
> video topology configuration, including which I/O entity is routed to
> the input/output, is configured by userspace via the Media Controller.
> See :ref:`media_controller`.
> + * - ``V4L2_CAP_AUDIO_M2M``
> + - 0x40000000
> + - The device supports the audio Memory-To-Memory interface.
> * - ``V4L2_CAP_DEVICE_CAPS``
> - 0x80000000
> - The driver fills the ``device_caps`` field. This capability can
> diff --git a/Documentation/userspace-api/media/videodev2.h.rst.exceptions b/Documentation/userspace-api/media/videodev2.h.rst.exceptions
> index 3e58aac4ef0b..48ef3bce3d20 100644
> --- a/Documentation/userspace-api/media/videodev2.h.rst.exceptions
> +++ b/Documentation/userspace-api/media/videodev2.h.rst.exceptions
> @@ -29,6 +29,8 @@ replace symbol V4L2_FIELD_SEQ_TB :c:type:`v4l2_field`
> replace symbol V4L2_FIELD_TOP :c:type:`v4l2_field`
>
> # Documented enum v4l2_buf_type
> +replace symbol V4L2_BUF_TYPE_AUDIO_CAPTURE :c:type:`v4l2_buf_type`
> +replace symbol V4L2_BUF_TYPE_AUDIO_OUTPUT :c:type:`v4l2_buf_type`
> replace symbol V4L2_BUF_TYPE_META_CAPTURE :c:type:`v4l2_buf_type`
> replace symbol V4L2_BUF_TYPE_META_OUTPUT :c:type:`v4l2_buf_type`
> replace symbol V4L2_BUF_TYPE_SDR_CAPTURE :c:type:`v4l2_buf_type`
> diff --git a/drivers/media/common/videobuf2/videobuf2-v4l2.c b/drivers/media/common/videobuf2/videobuf2-v4l2.c
> index c7a54d82a55e..12f2be2773a2 100644
> --- a/drivers/media/common/videobuf2/videobuf2-v4l2.c
> +++ b/drivers/media/common/videobuf2/videobuf2-v4l2.c
> @@ -785,6 +785,10 @@ int vb2_create_bufs(struct vb2_queue *q, struct v4l2_create_buffers *create)
> case V4L2_BUF_TYPE_META_OUTPUT:
> requested_sizes[0] = f->fmt.meta.buffersize;
> break;
> + case V4L2_BUF_TYPE_AUDIO_CAPTURE:
> + case V4L2_BUF_TYPE_AUDIO_OUTPUT:
> + requested_sizes[0] = f->fmt.audio.buffersize;
> + break;
> default:
> return -EINVAL;
> }
> diff --git a/drivers/media/v4l2-core/v4l2-dev.c b/drivers/media/v4l2-core/v4l2-dev.c
> index f81279492682..b92c760b611a 100644
> --- a/drivers/media/v4l2-core/v4l2-dev.c
> +++ b/drivers/media/v4l2-core/v4l2-dev.c
> @@ -553,6 +553,7 @@ static void determine_valid_ioctls(struct video_device *vdev)
> bool is_tch = vdev->vfl_type == VFL_TYPE_TOUCH;
> bool is_meta = vdev->vfl_type == VFL_TYPE_VIDEO &&
> (vdev->device_caps & meta_caps);
> + bool is_audio = vdev->vfl_type == VFL_TYPE_AUDIO;
> bool is_rx = vdev->vfl_dir != VFL_DIR_TX;
> bool is_tx = vdev->vfl_dir != VFL_DIR_RX;
> bool is_io_mc = vdev->device_caps & V4L2_CAP_IO_MC;
> @@ -664,6 +665,19 @@ static void determine_valid_ioctls(struct video_device *vdev)
> SET_VALID_IOCTL(ops, VIDIOC_S_FMT, vidioc_s_fmt_meta_out);
> SET_VALID_IOCTL(ops, VIDIOC_TRY_FMT, vidioc_try_fmt_meta_out);
> }
> + if (is_audio && is_rx) {
> + /* audio capture specific ioctls */
> + SET_VALID_IOCTL(ops, VIDIOC_ENUM_FMT, vidioc_enum_fmt_audio_cap);
> + SET_VALID_IOCTL(ops, VIDIOC_G_FMT, vidioc_g_fmt_audio_cap);
> + SET_VALID_IOCTL(ops, VIDIOC_S_FMT, vidioc_s_fmt_audio_cap);
> + SET_VALID_IOCTL(ops, VIDIOC_TRY_FMT, vidioc_try_fmt_audio_cap);
> + } else if (is_audio && is_tx) {
> + /* audio output specific ioctls */
> + SET_VALID_IOCTL(ops, VIDIOC_ENUM_FMT, vidioc_enum_fmt_audio_out);
> + SET_VALID_IOCTL(ops, VIDIOC_G_FMT, vidioc_g_fmt_audio_out);
> + SET_VALID_IOCTL(ops, VIDIOC_S_FMT, vidioc_s_fmt_audio_out);
> + SET_VALID_IOCTL(ops, VIDIOC_TRY_FMT, vidioc_try_fmt_audio_out);
> + }
> if (is_vbi) {
> /* vbi specific ioctls */
> if ((is_rx && (ops->vidioc_g_fmt_vbi_cap ||
> @@ -927,6 +941,9 @@ int __video_register_device(struct video_device *vdev,
> case VFL_TYPE_TOUCH:
> name_base = "v4l-touch";
> break;
> + case VFL_TYPE_AUDIO:
> + name_base = "v4l-audio";
> + break;
> default:
> pr_err("%s called with unknown type: %d\n",
> __func__, type);
> diff --git a/drivers/media/v4l2-core/v4l2-ioctl.c b/drivers/media/v4l2-core/v4l2-ioctl.c
> index f4d9d6279094..767588d5822a 100644
> --- a/drivers/media/v4l2-core/v4l2-ioctl.c
> +++ b/drivers/media/v4l2-core/v4l2-ioctl.c
> @@ -188,6 +188,8 @@ const char *v4l2_type_names[] = {
> [V4L2_BUF_TYPE_SDR_OUTPUT] = "sdr-out",
> [V4L2_BUF_TYPE_META_CAPTURE] = "meta-cap",
> [V4L2_BUF_TYPE_META_OUTPUT] = "meta-out",
> + [V4L2_BUF_TYPE_AUDIO_CAPTURE] = "audio-cap",
> + [V4L2_BUF_TYPE_AUDIO_OUTPUT] = "audio-out",
> };
> EXPORT_SYMBOL(v4l2_type_names);
>
> @@ -276,6 +278,7 @@ static void v4l_print_format(const void *arg, bool write_only)
> const struct v4l2_sliced_vbi_format *sliced;
> const struct v4l2_window *win;
> const struct v4l2_meta_format *meta;
> + const struct v4l2_audio_format *audio;
> u32 pixelformat;
> u32 planes;
> unsigned i;
> @@ -346,6 +349,12 @@ static void v4l_print_format(const void *arg, bool write_only)
> pr_cont(", dataformat=%p4cc, buffersize=%u\n",
> &pixelformat, meta->buffersize);
> break;
> + case V4L2_BUF_TYPE_AUDIO_CAPTURE:
> + case V4L2_BUF_TYPE_AUDIO_OUTPUT:
> + audio = &p->fmt.audio;
> + pr_cont(", rate=%u, format=%u, channels=%u, buffersize=%u\n",
> + audio->rate, audio->format, audio->channels, audio->buffersize);
> + break;
> }
> }
>
> @@ -927,6 +936,7 @@ static int check_fmt(struct file *file, enum v4l2_buf_type type)
> bool is_tch = vfd->vfl_type == VFL_TYPE_TOUCH;
> bool is_meta = vfd->vfl_type == VFL_TYPE_VIDEO &&
> (vfd->device_caps & meta_caps);
> + bool is_audio = vfd->vfl_type == VFL_TYPE_AUDIO;
> bool is_rx = vfd->vfl_dir != VFL_DIR_TX;
> bool is_tx = vfd->vfl_dir != VFL_DIR_RX;
>
> @@ -992,6 +1002,14 @@ static int check_fmt(struct file *file, enum v4l2_buf_type type)
> if (is_meta && is_tx && ops->vidioc_g_fmt_meta_out)
> return 0;
> break;
> + case V4L2_BUF_TYPE_AUDIO_CAPTURE:
> + if (is_audio && is_rx && ops->vidioc_g_fmt_audio_cap)
> + return 0;
> + break;
> + case V4L2_BUF_TYPE_AUDIO_OUTPUT:
> + if (is_audio && is_tx && ops->vidioc_g_fmt_audio_out)
> + return 0;
> + break;
> default:
> break;
> }
> @@ -1452,6 +1470,7 @@ static void v4l_fill_fmtdesc(struct v4l2_fmtdesc *fmt)
> case V4L2_PIX_FMT_Y210: descr = "10-bit YUYV Packed"; break;
> case V4L2_PIX_FMT_Y212: descr = "12-bit YUYV Packed"; break;
> case V4L2_PIX_FMT_Y216: descr = "16-bit YUYV Packed"; break;
> + case V4L2_AUDIO_FMT_LPCM: descr = "Audio LPCM"; break;
>
> default:
> /* Compressed formats */
> @@ -1596,6 +1615,16 @@ static int v4l_enum_fmt(const struct v4l2_ioctl_ops *ops,
> break;
> ret = ops->vidioc_enum_fmt_meta_out(file, fh, arg);
> break;
> + case V4L2_BUF_TYPE_AUDIO_CAPTURE:
> + if (unlikely(!ops->vidioc_enum_fmt_audio_cap))
> + break;
> + ret = ops->vidioc_enum_fmt_audio_cap(file, fh, arg);
> + break;
> + case V4L2_BUF_TYPE_AUDIO_OUTPUT:
> + if (unlikely(!ops->vidioc_enum_fmt_audio_out))
> + break;
> + ret = ops->vidioc_enum_fmt_audio_out(file, fh, arg);
> + break;
> }
> if (ret == 0)
> v4l_fill_fmtdesc(p);
> @@ -1672,6 +1701,10 @@ static int v4l_g_fmt(const struct v4l2_ioctl_ops *ops,
> return ops->vidioc_g_fmt_meta_cap(file, fh, arg);
> case V4L2_BUF_TYPE_META_OUTPUT:
> return ops->vidioc_g_fmt_meta_out(file, fh, arg);
> + case V4L2_BUF_TYPE_AUDIO_CAPTURE:
> + return ops->vidioc_g_fmt_audio_cap(file, fh, arg);
> + case V4L2_BUF_TYPE_AUDIO_OUTPUT:
> + return ops->vidioc_g_fmt_audio_out(file, fh, arg);
> }
> return -EINVAL;
> }
> @@ -1783,6 +1816,16 @@ static int v4l_s_fmt(const struct v4l2_ioctl_ops *ops,
> break;
> memset_after(p, 0, fmt.meta);
> return ops->vidioc_s_fmt_meta_out(file, fh, arg);
> + case V4L2_BUF_TYPE_AUDIO_CAPTURE:
> + if (unlikely(!ops->vidioc_s_fmt_audio_cap))
> + break;
> + memset_after(p, 0, fmt.audio);
> + return ops->vidioc_s_fmt_audio_cap(file, fh, arg);
> + case V4L2_BUF_TYPE_AUDIO_OUTPUT:
> + if (unlikely(!ops->vidioc_s_fmt_audio_out))
> + break;
> + memset_after(p, 0, fmt.audio);
> + return ops->vidioc_s_fmt_audio_out(file, fh, arg);
> }
> return -EINVAL;
> }
> @@ -1891,6 +1934,16 @@ static int v4l_try_fmt(const struct v4l2_ioctl_ops *ops,
> break;
> memset_after(p, 0, fmt.meta);
> return ops->vidioc_try_fmt_meta_out(file, fh, arg);
> + case V4L2_BUF_TYPE_AUDIO_CAPTURE:
> + if (unlikely(!ops->vidioc_try_fmt_audio_cap))
> + break;
> + memset_after(p, 0, fmt.audio);
> + return ops->vidioc_try_fmt_audio_cap(file, fh, arg);
> + case V4L2_BUF_TYPE_AUDIO_OUTPUT:
> + if (unlikely(!ops->vidioc_try_fmt_audio_out))
> + break;
> + memset_after(p, 0, fmt.audio);
> + return ops->vidioc_try_fmt_audio_out(file, fh, arg);
> }
> return -EINVAL;
> }
> diff --git a/include/media/v4l2-dev.h b/include/media/v4l2-dev.h
> index e0a13505f88d..0924e6d1dab1 100644
> --- a/include/media/v4l2-dev.h
> +++ b/include/media/v4l2-dev.h
> @@ -30,6 +30,7 @@
> * @VFL_TYPE_SUBDEV: for V4L2 subdevices
> * @VFL_TYPE_SDR: for Software Defined Radio tuners
> * @VFL_TYPE_TOUCH: for touch sensors
> + * @VFL_TYPE_AUDIO: for audio input/output devices

Change this to: "for audio memory-to-memory devices"
That's the only audio type we support at the moment. I don't see a need
for pure capture or output audio devices, since that would be handled in
alsa.

> * @VFL_TYPE_MAX: number of VFL types, must always be last in the enum
> */
> enum vfl_devnode_type {
> @@ -39,6 +40,7 @@ enum vfl_devnode_type {
> VFL_TYPE_SUBDEV,
> VFL_TYPE_SDR,
> VFL_TYPE_TOUCH,
> + VFL_TYPE_AUDIO,
> VFL_TYPE_MAX /* Shall be the last one */
> };
>
> diff --git a/include/media/v4l2-ioctl.h b/include/media/v4l2-ioctl.h
> index edb733f21604..f840cf740ce1 100644
> --- a/include/media/v4l2-ioctl.h
> +++ b/include/media/v4l2-ioctl.h
> @@ -45,6 +45,12 @@ struct v4l2_fh;
> * @vidioc_enum_fmt_meta_out: pointer to the function that implements
> * :ref:`VIDIOC_ENUM_FMT <vidioc_enum_fmt>` ioctl logic
> * for metadata output
> + * @vidioc_enum_fmt_audio_cap: pointer to the function that implements
> + * :ref:`VIDIOC_ENUM_FMT <vidioc_enum_fmt>` ioctl logic
> + * for audio capture
> + * @vidioc_enum_fmt_audio_out: pointer to the function that implements
> + * :ref:`VIDIOC_ENUM_FMT <vidioc_enum_fmt>` ioctl logic
> + * for audio output
> * @vidioc_g_fmt_vid_cap: pointer to the function that implements
> * :ref:`VIDIOC_G_FMT <vidioc_g_fmt>` ioctl logic for video capture
> * in single plane mode
> @@ -79,6 +85,10 @@ struct v4l2_fh;
> * :ref:`VIDIOC_G_FMT <vidioc_g_fmt>` ioctl logic for metadata capture
> * @vidioc_g_fmt_meta_out: pointer to the function that implements
> * :ref:`VIDIOC_G_FMT <vidioc_g_fmt>` ioctl logic for metadata output
> + * @vidioc_g_fmt_audio_cap: pointer to the function that implements
> + * :ref:`VIDIOC_G_FMT <vidioc_g_fmt>` ioctl logic for audio capture
> + * @vidioc_g_fmt_audio_out: pointer to the function that implements
> + * :ref:`VIDIOC_G_FMT <vidioc_g_fmt>` ioctl logic for audio output
> * @vidioc_s_fmt_vid_cap: pointer to the function that implements
> * :ref:`VIDIOC_S_FMT <vidioc_g_fmt>` ioctl logic for video capture
> * in single plane mode
> @@ -113,6 +123,10 @@ struct v4l2_fh;
> * :ref:`VIDIOC_S_FMT <vidioc_g_fmt>` ioctl logic for metadata capture
> * @vidioc_s_fmt_meta_out: pointer to the function that implements
> * :ref:`VIDIOC_S_FMT <vidioc_g_fmt>` ioctl logic for metadata output
> + * @vidioc_s_fmt_audio_cap: pointer to the function that implements
> + * :ref:`VIDIOC_S_FMT <vidioc_g_fmt>` ioctl logic for audio capture
> + * @vidioc_s_fmt_audio_out: pointer to the function that implements
> + * :ref:`VIDIOC_S_FMT <vidioc_g_fmt>` ioctl logic for audio output
> * @vidioc_try_fmt_vid_cap: pointer to the function that implements
> * :ref:`VIDIOC_TRY_FMT <vidioc_g_fmt>` ioctl logic for video capture
> * in single plane mode
> @@ -149,6 +163,10 @@ struct v4l2_fh;
> * :ref:`VIDIOC_TRY_FMT <vidioc_g_fmt>` ioctl logic for metadata capture
> * @vidioc_try_fmt_meta_out: pointer to the function that implements
> * :ref:`VIDIOC_TRY_FMT <vidioc_g_fmt>` ioctl logic for metadata output
> + * @vidioc_try_fmt_audio_cap: pointer to the function that implements
> + * :ref:`VIDIOC_TRY_FMT <vidioc_g_fmt>` ioctl logic for audio capture
> + * @vidioc_try_fmt_audio_out: pointer to the function that implements
> + * :ref:`VIDIOC_TRY_FMT <vidioc_g_fmt>` ioctl logic for audio output
> * @vidioc_reqbufs: pointer to the function that implements
> * :ref:`VIDIOC_REQBUFS <vidioc_reqbufs>` ioctl
> * @vidioc_querybuf: pointer to the function that implements
> @@ -315,6 +333,10 @@ struct v4l2_ioctl_ops {
> struct v4l2_fmtdesc *f);
> int (*vidioc_enum_fmt_meta_out)(struct file *file, void *fh,
> struct v4l2_fmtdesc *f);
> + int (*vidioc_enum_fmt_audio_cap)(struct file *file, void *fh,
> + struct v4l2_fmtdesc *f);
> + int (*vidioc_enum_fmt_audio_out)(struct file *file, void *fh,
> + struct v4l2_fmtdesc *f);
>
> /* VIDIOC_G_FMT handlers */
> int (*vidioc_g_fmt_vid_cap)(struct file *file, void *fh,
> @@ -345,6 +367,10 @@ struct v4l2_ioctl_ops {
> struct v4l2_format *f);
> int (*vidioc_g_fmt_meta_out)(struct file *file, void *fh,
> struct v4l2_format *f);
> + int (*vidioc_g_fmt_audio_cap)(struct file *file, void *fh,
> + struct v4l2_format *f);
> + int (*vidioc_g_fmt_audio_out)(struct file *file, void *fh,
> + struct v4l2_format *f);
>
> /* VIDIOC_S_FMT handlers */
> int (*vidioc_s_fmt_vid_cap)(struct file *file, void *fh,
> @@ -375,6 +401,10 @@ struct v4l2_ioctl_ops {
> struct v4l2_format *f);
> int (*vidioc_s_fmt_meta_out)(struct file *file, void *fh,
> struct v4l2_format *f);
> + int (*vidioc_s_fmt_audio_cap)(struct file *file, void *fh,
> + struct v4l2_format *f);
> + int (*vidioc_s_fmt_audio_out)(struct file *file, void *fh,
> + struct v4l2_format *f);
>
> /* VIDIOC_TRY_FMT handlers */
> int (*vidioc_try_fmt_vid_cap)(struct file *file, void *fh,
> @@ -405,6 +435,10 @@ struct v4l2_ioctl_ops {
> struct v4l2_format *f);
> int (*vidioc_try_fmt_meta_out)(struct file *file, void *fh,
> struct v4l2_format *f);
> + int (*vidioc_try_fmt_audio_cap)(struct file *file, void *fh,
> + struct v4l2_format *f);
> + int (*vidioc_try_fmt_audio_out)(struct file *file, void *fh,
> + struct v4l2_format *f);
>
> /* Buffer handlers */
> int (*vidioc_reqbufs)(struct file *file, void *fh,
> diff --git a/include/uapi/linux/videodev2.h b/include/uapi/linux/videodev2.h
> index 78260e5d9985..8dc615f2b60c 100644
> --- a/include/uapi/linux/videodev2.h
> +++ b/include/uapi/linux/videodev2.h
> @@ -153,6 +153,8 @@ enum v4l2_buf_type {
> V4L2_BUF_TYPE_SDR_OUTPUT = 12,
> V4L2_BUF_TYPE_META_CAPTURE = 13,
> V4L2_BUF_TYPE_META_OUTPUT = 14,
> + V4L2_BUF_TYPE_AUDIO_CAPTURE = 15,
> + V4L2_BUF_TYPE_AUDIO_OUTPUT = 16,
> /* Deprecated, do not use */
> V4L2_BUF_TYPE_PRIVATE = 0x80,
> };
> @@ -169,6 +171,7 @@ enum v4l2_buf_type {
> || (type) == V4L2_BUF_TYPE_VBI_OUTPUT \
> || (type) == V4L2_BUF_TYPE_SLICED_VBI_OUTPUT \
> || (type) == V4L2_BUF_TYPE_SDR_OUTPUT \
> + || (type) == V4L2_BUF_TYPE_AUDIO_OUTPUT \
> || (type) == V4L2_BUF_TYPE_META_OUTPUT)
>
> #define V4L2_TYPE_IS_CAPTURE(type) (!V4L2_TYPE_IS_OUTPUT(type))
> @@ -508,6 +511,7 @@ struct v4l2_capability {
> #define V4L2_CAP_TOUCH 0x10000000 /* Is a touch device */
>
> #define V4L2_CAP_IO_MC 0x20000000 /* Is input/output controlled by the media controller */
> +#define V4L2_CAP_AUDIO_M2M 0x40000000
>
> #define V4L2_CAP_DEVICE_CAPS 0x80000000 /* sets device capabilities field */
>
> @@ -838,6 +842,9 @@ struct v4l2_pix_format {
> #define V4L2_META_FMT_RK_ISP1_PARAMS v4l2_fourcc('R', 'K', '1', 'P') /* Rockchip ISP1 3A Parameters */
> #define V4L2_META_FMT_RK_ISP1_STAT_3A v4l2_fourcc('R', 'K', '1', 'S') /* Rockchip ISP1 3A Statistics */
>
> +/* Audio-data formats */
> +#define V4L2_AUDIO_FMT_LPCM v4l2_fourcc('L', 'P', 'C', 'M') /* audio lpcm */
> +

Hmm, this I am uncertain about. This doesn't add anything. If you enumerate the
formats, they will all report just this format, so you still don't know which
actual audio formats are supported.

The real audio format is the 'format' field.

> /* priv field value to indicates that subsequent fields are valid. */
> #define V4L2_PIX_FMT_PRIV_MAGIC 0xfeedcafe
>
> @@ -2417,6 +2424,22 @@ struct v4l2_meta_format {
> __u32 buffersize;
> } __attribute__ ((packed));
>
> +/**
> + * struct v4l2_audio_format - audio data format definition
> + * @pixelformat: little endian four character code (fourcc)
> + * @rate: sample rate
> + * @format: sample format
> + * @channels: channel numbers
> + * @buffersize: maximum size in bytes required for data
> + */
> +struct v4l2_audio_format {
> + __u32 pixelformat;

Why not just drop this field, and instead use the format field?

You would have to update the ENUM_FMT documentation to indicate that for
an audio m2m device the pixelformat field of v4l2_fmtdesc is actually the
audio format, and that it is not a fourcc, but a SNDRV_PCM_FORMAT_ format.

v4l_fill_fmtdesc can just add 'case SNDRV_PCM_FORMAT_U8:' etc., since they
luckily won't conflict with the existing fourccs as far as I can tell.

One problem might be the use of %p4cc as a printf formatter for fourcc
values, which would fail with these formats.

One option to solve this could be to add a define to videodev2.h that converts
a SNDRV_PCM_FORMAT_* to a fourcc, e.g.:

#define v4l2_fourcc_pcm(pcm_fmt) v4l2_fourcc('A', 'U', (pcm_fmt) / 10 + '0', (pcm_fmt) % 10 + '0')

So all audio formats end up like 'AUXX' where XX is the SNDRV_PCM_FORMAT_* value.

You would also need a define to translate a fourcc back to a pcm format.
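
A possible sketch of that reverse helper (the name here is just an
assumption), inverting the 'AU' + two-decimal-digit encoding above:

#define v4l2_pcm_from_fourcc(fourcc) \
	(((((fourcc) >> 16) & 0xff) - '0') * 10 + \
	 ((((fourcc) >> 24) & 0xff) - '0'))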

This scheme would allow %p4cc to continue to be used. Alternatively, you need
to check all places where %p4cc is used in the media subsystem core and see if
you need to check if it is an audio buffer type, and if so, just use %u.

I'm not quite sure which of the two options is best.

More input on this would be welcome.

> + __u32 rate;
> + __u32 format;
> + __u32 channels;
> + __u32 buffersize;
> +} __attribute__ ((packed));
> +
> /**
> * struct v4l2_format - stream data format
> * @type: enum v4l2_buf_type; type of the data stream
> @@ -2425,6 +2448,7 @@ struct v4l2_meta_format {
> * @win: definition of an overlaid image
> * @vbi: raw VBI capture or output parameters
> * @sliced: sliced VBI capture or output parameters
> + * @audio: definition of an audio format
> * @raw_data: placeholder for future extensions and custom formats
> * @fmt: union of @pix, @pix_mp, @win, @vbi, @sliced, @sdr, @meta
> * and @raw_data
> @@ -2439,6 +2463,7 @@ struct v4l2_format {
> struct v4l2_sliced_vbi_format sliced; /* V4L2_BUF_TYPE_SLICED_VBI_CAPTURE */
> struct v4l2_sdr_format sdr; /* V4L2_BUF_TYPE_SDR_CAPTURE */
> struct v4l2_meta_format meta; /* V4L2_BUF_TYPE_META_CAPTURE */
> + struct v4l2_audio_format audio; /* V4L2_BUF_TYPE_AUDIO_CAPTURE */
> __u8 raw_data[200]; /* user-defined */
> } fmt;
> };

Regards,

Hans

2023-09-22 14:07:42

by Shengjiu Wang

[permalink] [raw]
Subject: Re: [RFC PATCH v3 6/9] media: v4l2: Add audio capture and output support

On Wed, Sep 20, 2023 at 6:12 PM Hans Verkuil <[email protected]> wrote:
>
> Hi Shengjiu,
>
> I just noticed you posted a v4, but I expect that my comments below are still valid...
>
> On 14/09/2023 07:54, Shengjiu Wang wrote:
> > Audio signal processing has the requirement for memory to
> > memory similar as Video.
> >
> > This patch is to add this support in v4l2 framework, defined
> > new buffer type V4L2_BUF_TYPE_AUDIO_CAPTURE and
> > V4L2_BUF_TYPE_AUDIO_OUTPUT, defined new format v4l2_audio_format
> > for audio case usage.
> >
> > Defined V4L2_AUDIO_FMT_LPCM format type for audio.
> >
> > Defined V4L2_CAP_AUDIO_M2M capability type for audio memory
> > to memory case.
> >
> > The created audio device is named "/dev/v4l-audioX".
> >
> > Signed-off-by: Shengjiu Wang <[email protected]>
> > ---
> > .../userspace-api/media/v4l/audio-formats.rst | 15 +++++
> > .../userspace-api/media/v4l/buffer.rst | 6 ++
> > .../userspace-api/media/v4l/dev-audio.rst | 63 +++++++++++++++++++
> > .../userspace-api/media/v4l/devices.rst | 1 +
> > .../media/v4l/pixfmt-aud-lpcm.rst | 31 +++++++++
> > .../userspace-api/media/v4l/pixfmt.rst | 1 +
> > .../media/v4l/vidioc-enum-fmt.rst | 2 +
> > .../userspace-api/media/v4l/vidioc-g-fmt.rst | 4 ++
> > .../media/v4l/vidioc-querycap.rst | 3 +
> > .../media/videodev2.h.rst.exceptions | 2 +
> > .../media/common/videobuf2/videobuf2-v4l2.c | 4 ++
> > drivers/media/v4l2-core/v4l2-dev.c | 17 +++++
> > drivers/media/v4l2-core/v4l2-ioctl.c | 53 ++++++++++++++++
> > include/media/v4l2-dev.h | 2 +
> > include/media/v4l2-ioctl.h | 34 ++++++++++
> > include/uapi/linux/videodev2.h | 25 ++++++++
> > 16 files changed, 263 insertions(+)
> > create mode 100644 Documentation/userspace-api/media/v4l/audio-formats.rst
> > create mode 100644 Documentation/userspace-api/media/v4l/dev-audio.rst
> > create mode 100644 Documentation/userspace-api/media/v4l/pixfmt-aud-lpcm.rst
> >
> > diff --git a/Documentation/userspace-api/media/v4l/audio-formats.rst b/Documentation/userspace-api/media/v4l/audio-formats.rst
> > new file mode 100644
> > index 000000000000..bc52712d20d3
> > --- /dev/null
> > +++ b/Documentation/userspace-api/media/v4l/audio-formats.rst
> > @@ -0,0 +1,15 @@
> > +.. SPDX-License-Identifier: GFDL-1.1-no-invariants-or-later
> > +
> > +.. _audio-formats:
> > +
> > +*************
> > +Audio Formats
> > +*************
> > +
> > +These formats are used for :ref:`audio` interface only.
> > +
> > +
> > +.. toctree::
> > + :maxdepth: 1
> > +
> > + pixfmt-aud-lpcm
> > diff --git a/Documentation/userspace-api/media/v4l/buffer.rst b/Documentation/userspace-api/media/v4l/buffer.rst
> > index 04dec3e570ed..80cf2cb20dfe 100644
> > --- a/Documentation/userspace-api/media/v4l/buffer.rst
> > +++ b/Documentation/userspace-api/media/v4l/buffer.rst
> > @@ -438,6 +438,12 @@ enum v4l2_buf_type
> > * - ``V4L2_BUF_TYPE_META_OUTPUT``
> > - 14
> > - Buffer for metadata output, see :ref:`metadata`.
> > + * - ``V4L2_BUF_TYPE_AUDIO_CAPTURE``
> > + - 15
> > + - Buffer for audio capture, see :ref:`audio`.
> > + * - ``V4L2_BUF_TYPE_AUDIO_OUTPUT``
> > + - 16
> > + - Buffer for audio output, see :ref:`audio`.
> >
> >
> > .. _buffer-flags:
> > diff --git a/Documentation/userspace-api/media/v4l/dev-audio.rst b/Documentation/userspace-api/media/v4l/dev-audio.rst
> > new file mode 100644
> > index 000000000000..f9bcf0c7b056
> > --- /dev/null
> > +++ b/Documentation/userspace-api/media/v4l/dev-audio.rst
>
> Rename the file to dev-audio-mem2mem.rst as this is specific to an audio
> M2M interface.
>
> > @@ -0,0 +1,63 @@
> > +.. SPDX-License-Identifier: GFDL-1.1-no-invariants-or-later
> > +
> > +.. _audiodev:
> > +
> > +******************
> > +audio Interface
> > +******************
> > +
> > +The audio interface is implemented on audio device nodes. The audio device
> > +which uses application software for modulation or demodulation. This
> > +interface is intended for controlling and data streaming of such devices
> > +
> > +Audio devices are accessed through character device special files named
> > +``/dev/v4l-audio``
>
> I think this intro is somewhat confusing. I would recommend to copy the intro
> from dev-mem2mem.rst instead, adapting it for audio.
>
> > +
> > +Querying Capabilities
> > +=====================
> > +
> > +Device nodes supporting the audio capture and output interface set the
> > +``V4L2_CAP_AUDIO_M2M`` flag in the ``device_caps`` field of the
> > +:c:type:`v4l2_capability` structure returned by the :c:func:`VIDIOC_QUERYCAP`
> > +ioctl.
> > +
> > +At least one of the read/write or streaming I/O methods must be supported.
>
> M2M devices do not support read/write, only streaming I/O is supported.
>
> > +
> > +
> > +Data Format Negotiation
> > +=======================
> > +
> > +The audio device uses the :ref:`format` ioctls to select the capture format.
> > +The audio buffer content format is bound to that selected format. In addition
> > +to the basic :ref:`format` ioctls, the :c:func:`VIDIOC_ENUM_FMT` ioctl must be
> > +supported as well.
> > +
> > +To use the :ref:`format` ioctls applications set the ``type`` field of the
> > +:c:type:`v4l2_format` structure to ``V4L2_BUF_TYPE_AUDIO_CAPTURE`` or to
> > +``V4L2_BUF_TYPE_AUDIO_OUTPUT``. Both drivers and applications must set the
> > +remainder of the :c:type:`v4l2_format` structure to 0.
> > +
> > +.. c:type:: v4l2_audio_format
> > +
> > +.. tabularcolumns:: |p{1.4cm}|p{2.4cm}|p{13.5cm}|
> > +
> > +.. flat-table:: struct v4l2_audio_format
> > + :header-rows: 0
> > + :stub-columns: 0
> > + :widths: 1 1 2
> > +
> > + * - __u32
> > + - ``rate``
> > + - The sample rate, set by the application. The range is [5512, 768000].
> > + * - __u32
> > + - ``format``
> > + - The sample format, set by the application. format is defined as
> > + SNDRV_PCM_FORMAT_S8, SNDRV_PCM_FORMAT_U8, ...,
> > + * - __u32
> > + - ``channels``
> > + - The channel number, set by the application. channel number range is
> > + [1, 32].
> > + * - __u32
> > + - ``buffersize``
> > + - Maximum buffer size in bytes required for data. The value is set by the
> > + driver.
> > diff --git a/Documentation/userspace-api/media/v4l/devices.rst b/Documentation/userspace-api/media/v4l/devices.rst
> > index 8bfbad65a9d4..8261f3468489 100644
> > --- a/Documentation/userspace-api/media/v4l/devices.rst
> > +++ b/Documentation/userspace-api/media/v4l/devices.rst
> > @@ -24,3 +24,4 @@ Interfaces
> > dev-event
> > dev-subdev
> > dev-meta
> > + dev-audio
> > diff --git a/Documentation/userspace-api/media/v4l/pixfmt-aud-lpcm.rst b/Documentation/userspace-api/media/v4l/pixfmt-aud-lpcm.rst
> > new file mode 100644
> > index 000000000000..f9ebe2a05f69
> > --- /dev/null
> > +++ b/Documentation/userspace-api/media/v4l/pixfmt-aud-lpcm.rst
> > @@ -0,0 +1,31 @@
> > +.. SPDX-License-Identifier: GFDL-1.1-no-invariants-or-later
> > +
> > +.. _v4l2-aud-fmt-lpcm:
> > +
> > +*************************
> > +V4L2_AUDIO_FMT_LPCM ('LPCM')
> > +*************************
> > +
> > +Linear Pulse-Code Modulation (LPCM)
> > +
> > +
> > +Description
> > +===========
> > +
> > +This describes audio format used by the audio memory to memory driver.
> > +
> > +It contains the following fields:
> > +
> > +.. flat-table::
> > + :widths: 1 4
> > + :header-rows: 1
> > + :stub-columns: 0
> > +
> > + * - Field
> > + - Description
> > + * - u32 samplerate;
> > + - which is the number of times per second that samples are taken.
> > + * - u32 sampleformat;
> > + - which determines the number of possible digital values that can be used to represent each sample
> > + * - u32 channels;
> > + - channel number for each sample.
>
> See Sakari's comments. This section describes how the audio data is formatted
> in the buffer memory. Presumably this is already documented somewhere in the ALSA
> docs, so a reference to that would work.
>
> > diff --git a/Documentation/userspace-api/media/v4l/pixfmt.rst b/Documentation/userspace-api/media/v4l/pixfmt.rst
> > index 11dab4a90630..e205db5fa8af 100644
> > --- a/Documentation/userspace-api/media/v4l/pixfmt.rst
> > +++ b/Documentation/userspace-api/media/v4l/pixfmt.rst
> > @@ -36,3 +36,4 @@ see also :ref:`VIDIOC_G_FBUF <VIDIOC_G_FBUF>`.)
> > colorspaces
> > colorspaces-defs
> > colorspaces-details
> > + audio-formats
> > diff --git a/Documentation/userspace-api/media/v4l/vidioc-enum-fmt.rst b/Documentation/userspace-api/media/v4l/vidioc-enum-fmt.rst
> > index 000c154b0f98..42deb07f4ff4 100644
> > --- a/Documentation/userspace-api/media/v4l/vidioc-enum-fmt.rst
> > +++ b/Documentation/userspace-api/media/v4l/vidioc-enum-fmt.rst
> > @@ -96,6 +96,8 @@ the ``mbus_code`` field is handled differently:
> > ``V4L2_BUF_TYPE_VIDEO_OVERLAY``,
> > ``V4L2_BUF_TYPE_SDR_CAPTURE``,
> > ``V4L2_BUF_TYPE_SDR_OUTPUT``,
> > + ``V4L2_BUF_TYPE_AUDIO_CAPTURE``,
> > + ``V4L2_BUF_TYPE_AUDIO_OUTPUT``,
> > ``V4L2_BUF_TYPE_META_CAPTURE`` and
> > ``V4L2_BUF_TYPE_META_OUTPUT``.
> > See :c:type:`v4l2_buf_type`.
> > diff --git a/Documentation/userspace-api/media/v4l/vidioc-g-fmt.rst b/Documentation/userspace-api/media/v4l/vidioc-g-fmt.rst
> > index 675c385e5aca..1ecb7d640057 100644
> > --- a/Documentation/userspace-api/media/v4l/vidioc-g-fmt.rst
> > +++ b/Documentation/userspace-api/media/v4l/vidioc-g-fmt.rst
> > @@ -130,6 +130,10 @@ The format as returned by :ref:`VIDIOC_TRY_FMT <VIDIOC_G_FMT>` must be identical
> > - ``meta``
> > - Definition of a metadata format, see :ref:`meta-formats`, used by
> > metadata capture devices.
> > + * - struct :c:type:`v4l2_audio_format`
> > + - ``audio``
> > + - Definition of a audio data format, see :ref:`dev-audio`, used by
> > + audio capture and output devices
> > * - __u8
> > - ``raw_data``\ [200]
> > - Place holder for future extensions.
> > diff --git a/Documentation/userspace-api/media/v4l/vidioc-querycap.rst b/Documentation/userspace-api/media/v4l/vidioc-querycap.rst
> > index 6c57b8428356..0b3cefefc86b 100644
> > --- a/Documentation/userspace-api/media/v4l/vidioc-querycap.rst
> > +++ b/Documentation/userspace-api/media/v4l/vidioc-querycap.rst
> > @@ -259,6 +259,9 @@ specification the ioctl returns an ``EINVAL`` error code.
> > video topology configuration, including which I/O entity is routed to
> > the input/output, is configured by userspace via the Media Controller.
> > See :ref:`media_controller`.
> > + * - ``V4L2_CAP_AUDIO_M2M``
> > + - 0x40000000
> > + - The device supports the audio Memory-To-Memory interface.
> > * - ``V4L2_CAP_DEVICE_CAPS``
> > - 0x80000000
> > - The driver fills the ``device_caps`` field. This capability can
> > diff --git a/Documentation/userspace-api/media/videodev2.h.rst.exceptions b/Documentation/userspace-api/media/videodev2.h.rst.exceptions
> > index 3e58aac4ef0b..48ef3bce3d20 100644
> > --- a/Documentation/userspace-api/media/videodev2.h.rst.exceptions
> > +++ b/Documentation/userspace-api/media/videodev2.h.rst.exceptions
> > @@ -29,6 +29,8 @@ replace symbol V4L2_FIELD_SEQ_TB :c:type:`v4l2_field`
> > replace symbol V4L2_FIELD_TOP :c:type:`v4l2_field`
> >
> > # Documented enum v4l2_buf_type
> > +replace symbol V4L2_BUF_TYPE_AUDIO_CAPTURE :c:type:`v4l2_buf_type`
> > +replace symbol V4L2_BUF_TYPE_AUDIO_OUTPUT :c:type:`v4l2_buf_type`
> > replace symbol V4L2_BUF_TYPE_META_CAPTURE :c:type:`v4l2_buf_type`
> > replace symbol V4L2_BUF_TYPE_META_OUTPUT :c:type:`v4l2_buf_type`
> > replace symbol V4L2_BUF_TYPE_SDR_CAPTURE :c:type:`v4l2_buf_type`
> > diff --git a/drivers/media/common/videobuf2/videobuf2-v4l2.c b/drivers/media/common/videobuf2/videobuf2-v4l2.c
> > index c7a54d82a55e..12f2be2773a2 100644
> > --- a/drivers/media/common/videobuf2/videobuf2-v4l2.c
> > +++ b/drivers/media/common/videobuf2/videobuf2-v4l2.c
> > @@ -785,6 +785,10 @@ int vb2_create_bufs(struct vb2_queue *q, struct v4l2_create_buffers *create)
> > case V4L2_BUF_TYPE_META_OUTPUT:
> > requested_sizes[0] = f->fmt.meta.buffersize;
> > break;
> > + case V4L2_BUF_TYPE_AUDIO_CAPTURE:
> > + case V4L2_BUF_TYPE_AUDIO_OUTPUT:
> > + requested_sizes[0] = f->fmt.audio.buffersize;
> > + break;
> > default:
> > return -EINVAL;
> > }
> > diff --git a/drivers/media/v4l2-core/v4l2-dev.c b/drivers/media/v4l2-core/v4l2-dev.c
> > index f81279492682..b92c760b611a 100644
> > --- a/drivers/media/v4l2-core/v4l2-dev.c
> > +++ b/drivers/media/v4l2-core/v4l2-dev.c
> > @@ -553,6 +553,7 @@ static void determine_valid_ioctls(struct video_device *vdev)
> > bool is_tch = vdev->vfl_type == VFL_TYPE_TOUCH;
> > bool is_meta = vdev->vfl_type == VFL_TYPE_VIDEO &&
> > (vdev->device_caps & meta_caps);
> > + bool is_audio = vdev->vfl_type == VFL_TYPE_AUDIO;
> > bool is_rx = vdev->vfl_dir != VFL_DIR_TX;
> > bool is_tx = vdev->vfl_dir != VFL_DIR_RX;
> > bool is_io_mc = vdev->device_caps & V4L2_CAP_IO_MC;
> > @@ -664,6 +665,19 @@ static void determine_valid_ioctls(struct video_device *vdev)
> > SET_VALID_IOCTL(ops, VIDIOC_S_FMT, vidioc_s_fmt_meta_out);
> > SET_VALID_IOCTL(ops, VIDIOC_TRY_FMT, vidioc_try_fmt_meta_out);
> > }
> > + if (is_audio && is_rx) {
> > + /* audio capture specific ioctls */
> > + SET_VALID_IOCTL(ops, VIDIOC_ENUM_FMT, vidioc_enum_fmt_audio_cap);
> > + SET_VALID_IOCTL(ops, VIDIOC_G_FMT, vidioc_g_fmt_audio_cap);
> > + SET_VALID_IOCTL(ops, VIDIOC_S_FMT, vidioc_s_fmt_audio_cap);
> > + SET_VALID_IOCTL(ops, VIDIOC_TRY_FMT, vidioc_try_fmt_audio_cap);
> > + } else if (is_audio && is_tx) {
> > + /* audio output specific ioctls */
> > + SET_VALID_IOCTL(ops, VIDIOC_ENUM_FMT, vidioc_enum_fmt_audio_out);
> > + SET_VALID_IOCTL(ops, VIDIOC_G_FMT, vidioc_g_fmt_audio_out);
> > + SET_VALID_IOCTL(ops, VIDIOC_S_FMT, vidioc_s_fmt_audio_out);
> > + SET_VALID_IOCTL(ops, VIDIOC_TRY_FMT, vidioc_try_fmt_audio_out);
> > + }
> > if (is_vbi) {
> > /* vbi specific ioctls */
> > if ((is_rx && (ops->vidioc_g_fmt_vbi_cap ||
> > @@ -927,6 +941,9 @@ int __video_register_device(struct video_device *vdev,
> > case VFL_TYPE_TOUCH:
> > name_base = "v4l-touch";
> > break;
> > + case VFL_TYPE_AUDIO:
> > + name_base = "v4l-audio";
> > + break;
> > default:
> > pr_err("%s called with unknown type: %d\n",
> > __func__, type);
> > diff --git a/drivers/media/v4l2-core/v4l2-ioctl.c b/drivers/media/v4l2-core/v4l2-ioctl.c
> > index f4d9d6279094..767588d5822a 100644
> > --- a/drivers/media/v4l2-core/v4l2-ioctl.c
> > +++ b/drivers/media/v4l2-core/v4l2-ioctl.c
> > @@ -188,6 +188,8 @@ const char *v4l2_type_names[] = {
> > [V4L2_BUF_TYPE_SDR_OUTPUT] = "sdr-out",
> > [V4L2_BUF_TYPE_META_CAPTURE] = "meta-cap",
> > [V4L2_BUF_TYPE_META_OUTPUT] = "meta-out",
> > + [V4L2_BUF_TYPE_AUDIO_CAPTURE] = "audio-cap",
> > + [V4L2_BUF_TYPE_AUDIO_OUTPUT] = "audio-out",
> > };
> > EXPORT_SYMBOL(v4l2_type_names);
> >
> > @@ -276,6 +278,7 @@ static void v4l_print_format(const void *arg, bool write_only)
> > const struct v4l2_sliced_vbi_format *sliced;
> > const struct v4l2_window *win;
> > const struct v4l2_meta_format *meta;
> > + const struct v4l2_audio_format *audio;
> > u32 pixelformat;
> > u32 planes;
> > unsigned i;
> > @@ -346,6 +349,12 @@ static void v4l_print_format(const void *arg, bool write_only)
> > pr_cont(", dataformat=%p4cc, buffersize=%u\n",
> > &pixelformat, meta->buffersize);
> > break;
> > + case V4L2_BUF_TYPE_AUDIO_CAPTURE:
> > + case V4L2_BUF_TYPE_AUDIO_OUTPUT:
> > + audio = &p->fmt.audio;
> > + pr_cont(", rate=%u, format=%u, channels=%u, buffersize=%u\n",
> > + audio->rate, audio->format, audio->channels, audio->buffersize);
> > + break;
> > }
> > }
> >
> > @@ -927,6 +936,7 @@ static int check_fmt(struct file *file, enum v4l2_buf_type type)
> > bool is_tch = vfd->vfl_type == VFL_TYPE_TOUCH;
> > bool is_meta = vfd->vfl_type == VFL_TYPE_VIDEO &&
> > (vfd->device_caps & meta_caps);
> > + bool is_audio = vfd->vfl_type == VFL_TYPE_AUDIO;
> > bool is_rx = vfd->vfl_dir != VFL_DIR_TX;
> > bool is_tx = vfd->vfl_dir != VFL_DIR_RX;
> >
> > @@ -992,6 +1002,14 @@ static int check_fmt(struct file *file, enum v4l2_buf_type type)
> > if (is_meta && is_tx && ops->vidioc_g_fmt_meta_out)
> > return 0;
> > break;
> > + case V4L2_BUF_TYPE_AUDIO_CAPTURE:
> > + if (is_audio && is_rx && ops->vidioc_g_fmt_audio_cap)
> > + return 0;
> > + break;
> > + case V4L2_BUF_TYPE_AUDIO_OUTPUT:
> > + if (is_audio && is_tx && ops->vidioc_g_fmt_audio_out)
> > + return 0;
> > + break;
> > default:
> > break;
> > }
> > @@ -1452,6 +1470,7 @@ static void v4l_fill_fmtdesc(struct v4l2_fmtdesc *fmt)
> > case V4L2_PIX_FMT_Y210: descr = "10-bit YUYV Packed"; break;
> > case V4L2_PIX_FMT_Y212: descr = "12-bit YUYV Packed"; break;
> > case V4L2_PIX_FMT_Y216: descr = "16-bit YUYV Packed"; break;
> > + case V4L2_AUDIO_FMT_LPCM: descr = "Audio LPCM"; break;
> >
> > default:
> > /* Compressed formats */
> > @@ -1596,6 +1615,16 @@ static int v4l_enum_fmt(const struct v4l2_ioctl_ops *ops,
> > break;
> > ret = ops->vidioc_enum_fmt_meta_out(file, fh, arg);
> > break;
> > + case V4L2_BUF_TYPE_AUDIO_CAPTURE:
> > + if (unlikely(!ops->vidioc_enum_fmt_audio_cap))
> > + break;
> > + ret = ops->vidioc_enum_fmt_audio_cap(file, fh, arg);
> > + break;
> > + case V4L2_BUF_TYPE_AUDIO_OUTPUT:
> > + if (unlikely(!ops->vidioc_enum_fmt_audio_out))
> > + break;
> > + ret = ops->vidioc_enum_fmt_audio_out(file, fh, arg);
> > + break;
> > }
> > if (ret == 0)
> > v4l_fill_fmtdesc(p);
> > @@ -1672,6 +1701,10 @@ static int v4l_g_fmt(const struct v4l2_ioctl_ops *ops,
> > return ops->vidioc_g_fmt_meta_cap(file, fh, arg);
> > case V4L2_BUF_TYPE_META_OUTPUT:
> > return ops->vidioc_g_fmt_meta_out(file, fh, arg);
> > + case V4L2_BUF_TYPE_AUDIO_CAPTURE:
> > + return ops->vidioc_g_fmt_audio_cap(file, fh, arg);
> > + case V4L2_BUF_TYPE_AUDIO_OUTPUT:
> > + return ops->vidioc_g_fmt_audio_out(file, fh, arg);
> > }
> > return -EINVAL;
> > }
> > @@ -1783,6 +1816,16 @@ static int v4l_s_fmt(const struct v4l2_ioctl_ops *ops,
> > break;
> > memset_after(p, 0, fmt.meta);
> > return ops->vidioc_s_fmt_meta_out(file, fh, arg);
> > + case V4L2_BUF_TYPE_AUDIO_CAPTURE:
> > + if (unlikely(!ops->vidioc_s_fmt_audio_cap))
> > + break;
> > + memset_after(p, 0, fmt.audio);
> > + return ops->vidioc_s_fmt_audio_cap(file, fh, arg);
> > + case V4L2_BUF_TYPE_AUDIO_OUTPUT:
> > + if (unlikely(!ops->vidioc_s_fmt_audio_out))
> > + break;
> > + memset_after(p, 0, fmt.audio);
> > + return ops->vidioc_s_fmt_audio_out(file, fh, arg);
> > }
> > return -EINVAL;
> > }
> > @@ -1891,6 +1934,16 @@ static int v4l_try_fmt(const struct v4l2_ioctl_ops *ops,
> > break;
> > memset_after(p, 0, fmt.meta);
> > return ops->vidioc_try_fmt_meta_out(file, fh, arg);
> > + case V4L2_BUF_TYPE_AUDIO_CAPTURE:
> > + if (unlikely(!ops->vidioc_try_fmt_audio_cap))
> > + break;
> > + memset_after(p, 0, fmt.audio);
> > + return ops->vidioc_try_fmt_audio_cap(file, fh, arg);
> > + case V4L2_BUF_TYPE_AUDIO_OUTPUT:
> > + if (unlikely(!ops->vidioc_try_fmt_audio_out))
> > + break;
> > + memset_after(p, 0, fmt.audio);
> > + return ops->vidioc_try_fmt_audio_out(file, fh, arg);
> > }
> > return -EINVAL;
> > }
> > diff --git a/include/media/v4l2-dev.h b/include/media/v4l2-dev.h
> > index e0a13505f88d..0924e6d1dab1 100644
> > --- a/include/media/v4l2-dev.h
> > +++ b/include/media/v4l2-dev.h
> > @@ -30,6 +30,7 @@
> > * @VFL_TYPE_SUBDEV: for V4L2 subdevices
> > * @VFL_TYPE_SDR: for Software Defined Radio tuners
> > * @VFL_TYPE_TOUCH: for touch sensors
> > + * @VFL_TYPE_AUDIO: for audio input/output devices
>
> Change this to: "for audio memory-to-memory devices"
> That's the only audio type we support at the moment. I don't see a need
> for pure capture or output audio devices, since that would be handled in
> alsa.
>
> > * @VFL_TYPE_MAX: number of VFL types, must always be last in the enum
> > */
> > enum vfl_devnode_type {
> > @@ -39,6 +40,7 @@ enum vfl_devnode_type {
> > VFL_TYPE_SUBDEV,
> > VFL_TYPE_SDR,
> > VFL_TYPE_TOUCH,
> > + VFL_TYPE_AUDIO,
> > VFL_TYPE_MAX /* Shall be the last one */
> > };
> >
> > diff --git a/include/media/v4l2-ioctl.h b/include/media/v4l2-ioctl.h
> > index edb733f21604..f840cf740ce1 100644
> > --- a/include/media/v4l2-ioctl.h
> > +++ b/include/media/v4l2-ioctl.h
> > @@ -45,6 +45,12 @@ struct v4l2_fh;
> > * @vidioc_enum_fmt_meta_out: pointer to the function that implements
> > * :ref:`VIDIOC_ENUM_FMT <vidioc_enum_fmt>` ioctl logic
> > * for metadata output
> > + * @vidioc_enum_fmt_audio_cap: pointer to the function that implements
> > + * :ref:`VIDIOC_ENUM_FMT <vidioc_enum_fmt>` ioctl logic
> > + * for audio capture
> > + * @vidioc_enum_fmt_audio_out: pointer to the function that implements
> > + * :ref:`VIDIOC_ENUM_FMT <vidioc_enum_fmt>` ioctl logic
> > + * for audio output
> > * @vidioc_g_fmt_vid_cap: pointer to the function that implements
> > * :ref:`VIDIOC_G_FMT <vidioc_g_fmt>` ioctl logic for video capture
> > * in single plane mode
> > @@ -79,6 +85,10 @@ struct v4l2_fh;
> > * :ref:`VIDIOC_G_FMT <vidioc_g_fmt>` ioctl logic for metadata capture
> > * @vidioc_g_fmt_meta_out: pointer to the function that implements
> > * :ref:`VIDIOC_G_FMT <vidioc_g_fmt>` ioctl logic for metadata output
> > + * @vidioc_g_fmt_audio_cap: pointer to the function that implements
> > + * :ref:`VIDIOC_G_FMT <vidioc_g_fmt>` ioctl logic for audio capture
> > + * @vidioc_g_fmt_audio_out: pointer to the function that implements
> > + * :ref:`VIDIOC_G_FMT <vidioc_g_fmt>` ioctl logic for audio output
> > * @vidioc_s_fmt_vid_cap: pointer to the function that implements
> > * :ref:`VIDIOC_S_FMT <vidioc_g_fmt>` ioctl logic for video capture
> > * in single plane mode
> > @@ -113,6 +123,10 @@ struct v4l2_fh;
> > * :ref:`VIDIOC_S_FMT <vidioc_g_fmt>` ioctl logic for metadata capture
> > * @vidioc_s_fmt_meta_out: pointer to the function that implements
> > * :ref:`VIDIOC_S_FMT <vidioc_g_fmt>` ioctl logic for metadata output
> > + * @vidioc_s_fmt_audio_cap: pointer to the function that implements
> > + * :ref:`VIDIOC_S_FMT <vidioc_g_fmt>` ioctl logic for audio capture
> > + * @vidioc_s_fmt_audio_out: pointer to the function that implements
> > + * :ref:`VIDIOC_S_FMT <vidioc_g_fmt>` ioctl logic for audio output
> > * @vidioc_try_fmt_vid_cap: pointer to the function that implements
> > * :ref:`VIDIOC_TRY_FMT <vidioc_g_fmt>` ioctl logic for video capture
> > * in single plane mode
> > @@ -149,6 +163,10 @@ struct v4l2_fh;
> > * :ref:`VIDIOC_TRY_FMT <vidioc_g_fmt>` ioctl logic for metadata capture
> > * @vidioc_try_fmt_meta_out: pointer to the function that implements
> > * :ref:`VIDIOC_TRY_FMT <vidioc_g_fmt>` ioctl logic for metadata output
> > + * @vidioc_try_fmt_audio_cap: pointer to the function that implements
> > + * :ref:`VIDIOC_TRY_FMT <vidioc_g_fmt>` ioctl logic for audio capture
> > + * @vidioc_try_fmt_audio_out: pointer to the function that implements
> > + * :ref:`VIDIOC_TRY_FMT <vidioc_g_fmt>` ioctl logic for audio output
> > * @vidioc_reqbufs: pointer to the function that implements
> > * :ref:`VIDIOC_REQBUFS <vidioc_reqbufs>` ioctl
> > * @vidioc_querybuf: pointer to the function that implements
> > @@ -315,6 +333,10 @@ struct v4l2_ioctl_ops {
> > struct v4l2_fmtdesc *f);
> > int (*vidioc_enum_fmt_meta_out)(struct file *file, void *fh,
> > struct v4l2_fmtdesc *f);
> > + int (*vidioc_enum_fmt_audio_cap)(struct file *file, void *fh,
> > + struct v4l2_fmtdesc *f);
> > + int (*vidioc_enum_fmt_audio_out)(struct file *file, void *fh,
> > + struct v4l2_fmtdesc *f);
> >
> > /* VIDIOC_G_FMT handlers */
> > int (*vidioc_g_fmt_vid_cap)(struct file *file, void *fh,
> > @@ -345,6 +367,10 @@ struct v4l2_ioctl_ops {
> > struct v4l2_format *f);
> > int (*vidioc_g_fmt_meta_out)(struct file *file, void *fh,
> > struct v4l2_format *f);
> > + int (*vidioc_g_fmt_audio_cap)(struct file *file, void *fh,
> > + struct v4l2_format *f);
> > + int (*vidioc_g_fmt_audio_out)(struct file *file, void *fh,
> > + struct v4l2_format *f);
> >
> > /* VIDIOC_S_FMT handlers */
> > int (*vidioc_s_fmt_vid_cap)(struct file *file, void *fh,
> > @@ -375,6 +401,10 @@ struct v4l2_ioctl_ops {
> > struct v4l2_format *f);
> > int (*vidioc_s_fmt_meta_out)(struct file *file, void *fh,
> > struct v4l2_format *f);
> > + int (*vidioc_s_fmt_audio_cap)(struct file *file, void *fh,
> > + struct v4l2_format *f);
> > + int (*vidioc_s_fmt_audio_out)(struct file *file, void *fh,
> > + struct v4l2_format *f);
> >
> > /* VIDIOC_TRY_FMT handlers */
> > int (*vidioc_try_fmt_vid_cap)(struct file *file, void *fh,
> > @@ -405,6 +435,10 @@ struct v4l2_ioctl_ops {
> > struct v4l2_format *f);
> > int (*vidioc_try_fmt_meta_out)(struct file *file, void *fh,
> > struct v4l2_format *f);
> > + int (*vidioc_try_fmt_audio_cap)(struct file *file, void *fh,
> > + struct v4l2_format *f);
> > + int (*vidioc_try_fmt_audio_out)(struct file *file, void *fh,
> > + struct v4l2_format *f);
> >
> > /* Buffer handlers */
> > int (*vidioc_reqbufs)(struct file *file, void *fh,
> > diff --git a/include/uapi/linux/videodev2.h b/include/uapi/linux/videodev2.h
> > index 78260e5d9985..8dc615f2b60c 100644
> > --- a/include/uapi/linux/videodev2.h
> > +++ b/include/uapi/linux/videodev2.h
> > @@ -153,6 +153,8 @@ enum v4l2_buf_type {
> > V4L2_BUF_TYPE_SDR_OUTPUT = 12,
> > V4L2_BUF_TYPE_META_CAPTURE = 13,
> > V4L2_BUF_TYPE_META_OUTPUT = 14,
> > + V4L2_BUF_TYPE_AUDIO_CAPTURE = 15,
> > + V4L2_BUF_TYPE_AUDIO_OUTPUT = 16,
> > /* Deprecated, do not use */
> > V4L2_BUF_TYPE_PRIVATE = 0x80,
> > };
> > @@ -169,6 +171,7 @@ enum v4l2_buf_type {
> > || (type) == V4L2_BUF_TYPE_VBI_OUTPUT \
> > || (type) == V4L2_BUF_TYPE_SLICED_VBI_OUTPUT \
> > || (type) == V4L2_BUF_TYPE_SDR_OUTPUT \
> > + || (type) == V4L2_BUF_TYPE_AUDIO_OUTPUT \
> > || (type) == V4L2_BUF_TYPE_META_OUTPUT)
> >
> > #define V4L2_TYPE_IS_CAPTURE(type) (!V4L2_TYPE_IS_OUTPUT(type))
> > @@ -508,6 +511,7 @@ struct v4l2_capability {
> > #define V4L2_CAP_TOUCH 0x10000000 /* Is a touch device */
> >
> > #define V4L2_CAP_IO_MC 0x20000000 /* Is input/output controlled by the media controller */
> > +#define V4L2_CAP_AUDIO_M2M 0x40000000
> >
> > #define V4L2_CAP_DEVICE_CAPS 0x80000000 /* sets device capabilities field */
> >
> > @@ -838,6 +842,9 @@ struct v4l2_pix_format {
> > #define V4L2_META_FMT_RK_ISP1_PARAMS v4l2_fourcc('R', 'K', '1', 'P') /* Rockchip ISP1 3A Parameters */
> > #define V4L2_META_FMT_RK_ISP1_STAT_3A v4l2_fourcc('R', 'K', '1', 'S') /* Rockchip ISP1 3A Statistics */
> >
> > +/* Audio-data formats */
> > +#define V4L2_AUDIO_FMT_LPCM v4l2_fourcc('L', 'P', 'C', 'M') /* audio lpcm */
> > +
>
> Hmm, this I am uncertain about. This doesn't add anything. If you enumerate the
> formats, they will all report just this format, so you still don't know which
> actual audio formats are supported.
>
> The real audio format is the 'format' field.
>
> > /* priv field value to indicates that subsequent fields are valid. */
> > #define V4L2_PIX_FMT_PRIV_MAGIC 0xfeedcafe
> >
> > @@ -2417,6 +2424,22 @@ struct v4l2_meta_format {
> > __u32 buffersize;
> > } __attribute__ ((packed));
> >
> > +/**
> > + * struct v4l2_audio_format - audio data format definition
> > + * @pixelformat: little endian four character code (fourcc)
> > + * @rate: sample rate
> > + * @format: sample format
> > + * @channels: channel numbers
> > + * @buffersize: maximum size in bytes required for data
> > + */
> > +struct v4l2_audio_format {
> > + __u32 pixelformat;
>
> Why not just drop this field, and instead use the format field?
>
> You would have to update the ENUM_FMT documentation to indicate that for an
> audio m2m device the pixelformat field of v4l2_fmtdesc is actually the
> audio format, and that it is not a fourcc, but a SNDRV_PCM_FORMAT_* format.
>
> v4l_fill_fmtdesc can just add 'case SNDRV_PCM_FORMAT_U8:' etc., since they
> luckily won't conflict with the existing fourccs as far as I can tell.
>
> One problem might be the use of %p4cc as a printf formatter for fourcc
> values, which would fail with these formats.
>
> One option to solve this could be to add a define to videodev2.h that converts
> a SNDRV_PCM_FORMAT_* to a fourcc, e.g.:
>
> #define v4l2_fourcc_pcm(pcm_fmt) v4l2_fourcc('A', 'U', (pcm_fmt) / 10 + '0', (pcm_fmt) % 10 + '0')
>
> So all audio formats end up like 'AUXX' where XX is the SNDRV_PCM_FORMAT_* value.
>
> You would also need a define to translate a fourcc back to a pcm format.
>
> This scheme would allow %p4cc to continue to be used. Alternatively, you need
> to check all the places where %p4cc is used in the media subsystem core, see
> whether each one needs a check for an audio buffer type, and if so, just use %u there.
>
> I'm not quite sure which of the two options is best.
>
> More input on this would be welcome.
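
To make the suggestion above concrete, such a pair of defines could look
roughly like this. The forward define is the one quoted above; the reverse
define and its name are only an illustration, not an existing videodev2.h API:

#define v4l2_fourcc_pcm(pcm_fmt) \
	v4l2_fourcc('A', 'U', (pcm_fmt) / 10 + '0', (pcm_fmt) % 10 + '0')

/*
 * Sketch only: recover the SNDRV_PCM_FORMAT_* value from an 'AUXX' fourcc
 * built by v4l2_fourcc_pcm() (tens digit in byte 2, ones digit in byte 3).
 */
#define v4l2_fourcc_to_pcm(fourcc) \
	(((((fourcc) >> 16) & 0xff) - '0') * 10 + \
	 ((((fourcc) >> 24) & 0xff) - '0'))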

One thing I try to avoid is including asound.h, which has the definitions
for the 'format' field, because user space has another copy of them in
alsa-lib. Including videodev2.h together with asound.h and asoundlib.h would
cause conflicts in user space.

Another reason for adding V4L2_AUDIO_FMT_LPCM is that LPCM is generic enough
for the audio cases; maybe in the future there will be a requirement for
non-PCM formats, like MP3 or AAC.

If we used SNDRV_PCM_FORMAT_*, a lot of pixel-format defines would need to be
added, which is a little complicated :)

The 'format' field is just one characteristic of the LPCM data. From this
point of view, LPCM seems simpler.
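
For illustration, here is a minimal user-space sketch of how the proposed
struct v4l2_audio_format might be used with VIDIOC_S_FMT. The helper name, the
device path and the numeric sample-format value are assumptions for the
example, not something defined by this series:

#include <string.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

/* 'fd' is an open file descriptor for the audio m2m node,
 * e.g. /dev/v4l-audio0 (path assumed for the example). */
static int set_audio_out_fmt(int fd)
{
	struct v4l2_format fmt;

	memset(&fmt, 0, sizeof(fmt));
	fmt.type = V4L2_BUF_TYPE_AUDIO_OUTPUT;
	fmt.fmt.audio.pixelformat = V4L2_AUDIO_FMT_LPCM;
	fmt.fmt.audio.rate = 44100;	/* input sample rate for the ASRC */
	fmt.fmt.audio.format = 2;	/* sample format; 2 assumed to be SNDRV_PCM_FORMAT_S16_LE */
	fmt.fmt.audio.channels = 2;
	/* buffersize left 0; presumably filled in by the driver on return */

	return ioctl(fd, VIDIOC_S_FMT, &fmt);
}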

Best regards
Wang shengjiu

>
> > + __u32 rate;
> > + __u32 format;
> > + __u32 channels;
> > + __u32 buffersize;
> > +} __attribute__ ((packed));
> > +
> > /**
> > * struct v4l2_format - stream data format
> > * @type: enum v4l2_buf_type; type of the data stream
> > @@ -2425,6 +2448,7 @@ struct v4l2_meta_format {
> > * @win: definition of an overlaid image
> > * @vbi: raw VBI capture or output parameters
> > * @sliced: sliced VBI capture or output parameters
> > + * @audio: definition of an audio format
> > * @raw_data: placeholder for future extensions and custom formats
> > * @fmt: union of @pix, @pix_mp, @win, @vbi, @sliced, @sdr, @meta
> > * and @raw_data
> > @@ -2439,6 +2463,7 @@ struct v4l2_format {
> > struct v4l2_sliced_vbi_format sliced; /* V4L2_BUF_TYPE_SLICED_VBI_CAPTURE */
> > struct v4l2_sdr_format sdr; /* V4L2_BUF_TYPE_SDR_CAPTURE */
> > struct v4l2_meta_format meta; /* V4L2_BUF_TYPE_META_CAPTURE */
> > + struct v4l2_audio_format audio; /* V4L2_BUF_TYPE_AUDIO_CAPTURE */
> > __u8 raw_data[200]; /* user-defined */
> > } fmt;
> > };
>
> Regards,
>
> Hans