2021-11-30 09:48:43

by Ming Qian

Subject: [PATCH v13 00/13] amphion video decoder/encoder driver

Hi all,

This patch series adds support for
the Amphion video encoder and decoder
via the VPU block present on i.MX8Q platforms.
Currently, support for IMX8QXP and IMX8QM is included.

It features decoding for the following formats:
- H.264
- HEVC
- MPEG4
- MPEG2
- VC1
- VP8

It features encoding for the following formats:
- H.264

The driver creates separate device nodes for the encoder and the decoder.

This driver depends on the VPU firmware.
The firmware has been submitted to linux-firmware.
The firmware patch is based on commit
b563148fd28623f6b6ce68bb06c3dd3bd138b058:
linux-firmware: Update firmware file for Intel Bluetooth 9462
(Fri Oct 8 16:30:14 2021 +0530)

and it's available in the git repository at:
https://github.com/mingqian-0/linux-firmware.git

for you to fetch changes up to bb3eee4f99589d4910dee4c053a3a685546b5dbb:
amphion: add VPU firmwares for NXP i.MX8Q SoCs
(Tue Oct 12 15:09:57 2021 +0800)

The encoder has been tested with GStreamer.
The decoder has also been tested with GStreamer, but the following patches are required:
https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1379
https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1381


Tested-by: Nicolas Dufresne <[email protected]>


Changelog:

v13
- add a workaround to avoid the firmware wrongly entering WFI

v12
- support resetting the decoder when starting a new stream
- don't append an empty last buffer, set last_buffer_dequeued
- improve the resolution change flow
- return all buffers if start_streaming fails
- set the encoder capture buffer's field to none
- fix a bug in calculating bytesperline

v11
- fix dt_binding_check error after upgrade dtschema
- remove "default y"
- add media device

v10
- refine vpu log, remove custom logging infrastructure
- support the non-contiguous planar format nv12m instead of nv12
- rename V4L2_PIX_FMT_NV12_8L128 to V4L2_PIX_FMT_NV12MT_8L128
- rename V4L2_PIX_FMT_NV12_10BE_8L128 to V4L2_PIX_FMT_NV12MT_10BE_8L128
- merge the two modules into one
- fix a kernel panic on rmmod

v9
- drop V4L2_BUF_FLAG_CODECCONFIG
- drop V4L2_EVENT_CODEC_ERROR
- drop V4L2_EVENT_SKIP - use the v4l2_buffer.sequence counter
- fix some build warnings with W=1 reported by kernel test robot

v8
- move the driver from drivers/media/platform/imx/vpu-8q to
drivers/media/platform/amphion
- rename the driver to amphion
- remove imx_vpu.h
- move the definition of V4L2_EVENT_CODEC_ERROR to videodev2.h
- move the definition of V4L2_EVENT_SKIP to videodev2.h

v7
- fix build warnings with W=1 reported by kernel test robot

v6:
- rename V4L2_PIX_FMT_NT8 to V4L2_PIX_FMT_NV12_8L128
- rename V4L2_PIX_FMT_NT10 to V4L2_PIX_FMT_NV12_10BE_8L128

v5:
- move some definitions from imx_vpu.h to videodev2.h
- remove some unnecessary content
- add some documentation descriptions
- pass the latest v4l2-compliance test

v4:
- redefine the memory-region in the devicetree bindings documentation
- use v4l2's mechanism to synchronize the queuing ioctls
- remove the unnecessary mutex ioctl_sync
- don't notify a source change event if the parameters are the same as previously established
- add the flag V4L2_FMT_FLAG_DYN_RESOLUTION to the decoder's capture format

v3:
- don't make vpu device node a simple-bus
- trigger probing vpu core in the driver
- remove unnecessary vpu core index property

v2:
- fix dt bindings build error
- split driver patch into several parts to avoid exceeding bytes limit
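
As a side note (not part of the series), the fourcc values behind the pixel-format renames in the changelog above can be reproduced with a small sketch of the kernel's v4l2_fourcc() packing; the 'N','A','1','2' and 'N','T','1','2' character codes come from the videodev2.h hunk in patch 02:

```python
def v4l2_fourcc(a, b, c, d):
    # Mirror the kernel's v4l2_fourcc() macro: pack four characters
    # into a 32-bit code, least significant byte first.
    return ord(a) | (ord(b) << 8) | (ord(c) << 16) | (ord(d) << 24)

# fourcc codes defined by this series in videodev2.h
NV12MT_8L128 = v4l2_fourcc('N', 'A', '1', '2')
NV12MT_10BE_8L128 = v4l2_fourcc('N', 'T', '1', '2')

print(hex(NV12MT_8L128))       # 0x3231414e
print(hex(NV12MT_10BE_8L128))  # 0x3231544e
```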

Compliance
==========
# v4l2-compliance -d /dev/video0
v4l2-compliance 1.21.0-4859, 64 bits, 64-bit time_t
v4l2-compliance SHA: 493af03f3c57 2021-10-08 17:23:11

Compliance test for amphion-vpu device /dev/video0:

Driver Info:
Driver name : amphion-vpu
Card type : amphion vpu decoder
Bus info : platform: amphion-vpu
Driver version : 5.15.0
Capabilities : 0x84204000
Video Memory-to-Memory Multiplanar
Streaming
Extended Pix Format
Device Capabilities
Device Caps : 0x04204000
Video Memory-to-Memory Multiplanar
Streaming
Extended Pix Format
Detected Stateful Decoder
Media Driver Info:
Driver name : amphion-vpu
Model : amphion-vpu
Serial :
Bus info : platform: amphion-vpu
Media version : 5.15.0
Hardware revision: 0x00000000 (0)
Driver version : 5.15.0
Interface Info:
ID : 0x0300000c
Type : V4L Video
Entity Info:
ID : 0x00000001 (1)
Name : amphion-vpu-decoder-source
Function : V4L2 I/O
Pad 0x01000002 : 0: Source
Link 0x02000008: to remote pad 0x1000004 of entity 'amphion-vpu-decoder-proc' (Video Decoder): Data, Enabled, Immutable

Required ioctls:
test MC information (see 'Media Driver Info' above): OK
test VIDIOC_QUERYCAP: OK
test invalid ioctls: OK

Allow for multiple opens:
test second /dev/video0 open: OK
test VIDIOC_QUERYCAP: OK
test VIDIOC_G/S_PRIORITY: OK
test for unlimited opens: OK

Debug ioctls:
test VIDIOC_DBG_G/S_REGISTER: OK (Not Supported)
test VIDIOC_LOG_STATUS: OK (Not Supported)

Input ioctls:
test VIDIOC_G/S_TUNER/ENUM_FREQ_BANDS: OK (Not Supported)
test VIDIOC_G/S_FREQUENCY: OK (Not Supported)
test VIDIOC_S_HW_FREQ_SEEK: OK (Not Supported)
test VIDIOC_ENUMAUDIO: OK (Not Supported)
test VIDIOC_G/S/ENUMINPUT: OK (Not Supported)
test VIDIOC_G/S_AUDIO: OK (Not Supported)
Inputs: 0 Audio Inputs: 0 Tuners: 0

Output ioctls:
test VIDIOC_G/S_MODULATOR: OK (Not Supported)
test VIDIOC_G/S_FREQUENCY: OK (Not Supported)
test VIDIOC_ENUMAUDOUT: OK (Not Supported)
test VIDIOC_G/S/ENUMOUTPUT: OK (Not Supported)
test VIDIOC_G/S_AUDOUT: OK (Not Supported)
Outputs: 0 Audio Outputs: 0 Modulators: 0

Input/Output configuration ioctls:
test VIDIOC_ENUM/G/S/QUERY_STD: OK (Not Supported)
test VIDIOC_ENUM/G/S/QUERY_DV_TIMINGS: OK (Not Supported)
test VIDIOC_DV_TIMINGS_CAP: OK (Not Supported)
test VIDIOC_G/S_EDID: OK (Not Supported)

Control ioctls:
test VIDIOC_QUERY_EXT_CTRL/QUERYMENU: OK
test VIDIOC_QUERYCTRL: OK
test VIDIOC_G/S_CTRL: OK
test VIDIOC_G/S/TRY_EXT_CTRLS: OK
test VIDIOC_(UN)SUBSCRIBE_EVENT/DQEVENT: OK
test VIDIOC_G/S_JPEGCOMP: OK (Not Supported)
Standard Controls: 3 Private Controls: 0

Format ioctls:
test VIDIOC_ENUM_FMT/FRAMESIZES/FRAMEINTERVALS: OK
test VIDIOC_G/S_PARM: OK (Not Supported)
test VIDIOC_G_FBUF: OK (Not Supported)
test VIDIOC_G_FMT: OK
test VIDIOC_TRY_FMT: OK
test VIDIOC_S_FMT: OK
test VIDIOC_G_SLICED_VBI_CAP: OK (Not Supported)
test Cropping: OK (Not Supported)
test Composing: OK
test Scaling: OK (Not Supported)

Codec ioctls:
test VIDIOC_(TRY_)ENCODER_CMD: OK (Not Supported)
test VIDIOC_G_ENC_INDEX: OK (Not Supported)
test VIDIOC_(TRY_)DECODER_CMD: OK

Buffer ioctls:
test VIDIOC_REQBUFS/CREATE_BUFS/QUERYBUF: OK
test VIDIOC_EXPBUF: OK
test Requests: OK (Not Supported)

Total for amphion-vpu device /dev/video0: 46, Succeeded: 46, Failed: 0, Warnings: 0

# v4l2-compliance -d /dev/video1
v4l2-compliance 1.21.0-4859, 64 bits, 64-bit time_t
v4l2-compliance SHA: 493af03f3c57 2021-10-08 17:23:11

Compliance test for amphion-vpu device /dev/video1:

Driver Info:
Driver name : amphion-vpu
Card type : amphion vpu encoder
Bus info : platform: amphion-vpu
Driver version : 5.15.0
Capabilities : 0x84204000
Video Memory-to-Memory Multiplanar
Streaming
Extended Pix Format
Device Capabilities
Device Caps : 0x04204000
Video Memory-to-Memory Multiplanar
Streaming
Extended Pix Format
Detected Stateful Encoder
Media Driver Info:
Driver name : amphion-vpu
Model : amphion-vpu
Serial :
Bus info : platform: amphion-vpu
Media version : 5.15.0
Hardware revision: 0x00000000 (0)
Driver version : 5.15.0
Interface Info:
ID : 0x0300001a
Type : V4L Video
Entity Info:
ID : 0x0000000f (15)
Name : amphion-vpu-encoder-source
Function : V4L2 I/O
Pad 0x01000010 : 0: Source
Link 0x02000016: to remote pad 0x1000012 of entity 'amphion-vpu-encoder-proc' (Video Encoder): Data, Enabled, Immutable

Required ioctls:
test MC information (see 'Media Driver Info' above): OK
test VIDIOC_QUERYCAP: OK
test invalid ioctls: OK

Allow for multiple opens:
test second /dev/video1 open: OK
test VIDIOC_QUERYCAP: OK
test VIDIOC_G/S_PRIORITY: OK
test for unlimited opens: OK

Debug ioctls:
test VIDIOC_DBG_G/S_REGISTER: OK (Not Supported)
test VIDIOC_LOG_STATUS: OK (Not Supported)

Input ioctls:
test VIDIOC_G/S_TUNER/ENUM_FREQ_BANDS: OK (Not Supported)
test VIDIOC_G/S_FREQUENCY: OK (Not Supported)
test VIDIOC_S_HW_FREQ_SEEK: OK (Not Supported)
test VIDIOC_ENUMAUDIO: OK (Not Supported)
test VIDIOC_G/S/ENUMINPUT: OK (Not Supported)
test VIDIOC_G/S_AUDIO: OK (Not Supported)
Inputs: 0 Audio Inputs: 0 Tuners: 0

Output ioctls:
test VIDIOC_G/S_MODULATOR: OK (Not Supported)
test VIDIOC_G/S_FREQUENCY: OK (Not Supported)
test VIDIOC_ENUMAUDOUT: OK (Not Supported)
test VIDIOC_G/S/ENUMOUTPUT: OK (Not Supported)
test VIDIOC_G/S_AUDOUT: OK (Not Supported)
Outputs: 0 Audio Outputs: 0 Modulators: 0

Input/Output configuration ioctls:
test VIDIOC_ENUM/G/S/QUERY_STD: OK (Not Supported)
test VIDIOC_ENUM/G/S/QUERY_DV_TIMINGS: OK (Not Supported)
test VIDIOC_DV_TIMINGS_CAP: OK (Not Supported)
test VIDIOC_G/S_EDID: OK (Not Supported)

Control ioctls:
test VIDIOC_QUERY_EXT_CTRL/QUERYMENU: OK
test VIDIOC_QUERYCTRL: OK
test VIDIOC_G/S_CTRL: OK
test VIDIOC_G/S/TRY_EXT_CTRLS: OK
test VIDIOC_(UN)SUBSCRIBE_EVENT/DQEVENT: OK
test VIDIOC_G/S_JPEGCOMP: OK (Not Supported)
Standard Controls: 20 Private Controls: 0

Format ioctls:
test VIDIOC_ENUM_FMT/FRAMESIZES/FRAMEINTERVALS: OK
test VIDIOC_G/S_PARM: OK
test VIDIOC_G_FBUF: OK (Not Supported)
test VIDIOC_G_FMT: OK
test VIDIOC_TRY_FMT: OK
test VIDIOC_S_FMT: OK
test VIDIOC_G_SLICED_VBI_CAP: OK (Not Supported)
test Cropping: OK
test Composing: OK (Not Supported)
test Scaling: OK (Not Supported)

Codec ioctls:
test VIDIOC_(TRY_)ENCODER_CMD: OK
test VIDIOC_G_ENC_INDEX: OK (Not Supported)
test VIDIOC_(TRY_)DECODER_CMD: OK (Not Supported)

Buffer ioctls:
test VIDIOC_REQBUFS/CREATE_BUFS/QUERYBUF: OK
test VIDIOC_EXPBUF: OK
test Requests: OK (Not Supported)

Total for amphion-vpu device /dev/video1: 46, Succeeded: 46, Failed: 0, Warnings: 0

# v4l2-compliance -d /dev/media0
v4l2-compliance 1.21.0-4859, 64 bits, 64-bit time_t
v4l2-compliance SHA: 493af03f3c57 2021-10-08 17:23:11

Compliance test for amphion-vpu device /dev/media0:

Media Driver Info:
Driver name : amphion-vpu
Model : amphion-vpu
Serial :
Bus info : platform: amphion-vpu
Media version : 5.15.0
Hardware revision: 0x00000000 (0)
Driver version : 5.15.0

Required ioctls:
test MEDIA_IOC_DEVICE_INFO: OK
test invalid ioctls: OK

Allow for multiple opens:
test second /dev/media0 open: OK
test MEDIA_IOC_DEVICE_INFO: OK
test for unlimited opens: OK

Media Controller ioctls:
test MEDIA_IOC_G_TOPOLOGY: OK
Entities: 6 Interfaces: 2 Pads: 8 Links: 8
test MEDIA_IOC_ENUM_ENTITIES/LINKS: OK
test MEDIA_IOC_SETUP_LINK: OK

Total for amphion-vpu device /dev/media0: 8, Succeeded: 8, Failed: 0, Warnings: 0

Ming Qian (13):
dt-bindings: media: amphion: add amphion video codec bindings
media: Add nv12mt_8l128 and nv12mt_10be_8l128 video formats.
media: amphion: add amphion vpu device driver
media: amphion: add vpu core driver
media: amphion: implement vpu core communication based on mailbox
media: amphion: add vpu v4l2 m2m support
media: amphion: add v4l2 m2m vpu encoder stateful driver
media: amphion: add v4l2 m2m vpu decoder stateful driver
media: amphion: implement windsor encoder rpc interface
media: amphion: implement malone decoder rpc interface
ARM64: dts: freescale: imx8q: add imx vpu codec entries
firmware: imx: scu-pd: imx8q: add vpu mu resources
MAINTAINERS: add AMPHION VPU CODEC V4L2 driver entry

.../bindings/media/amphion,vpu.yaml | 180 ++
.../media/v4l/pixfmt-yuv-planar.rst | 15 +
MAINTAINERS | 9 +
.../arm64/boot/dts/freescale/imx8-ss-vpu.dtsi | 72 +
arch/arm64/boot/dts/freescale/imx8qxp-mek.dts | 17 +
arch/arm64/boot/dts/freescale/imx8qxp.dtsi | 24 +
arch/arm64/configs/defconfig | 1 +
drivers/firmware/imx/scu-pd.c | 4 +
drivers/media/platform/Kconfig | 19 +
drivers/media/platform/Makefile | 2 +
drivers/media/platform/amphion/Makefile | 20 +
drivers/media/platform/amphion/vdec.c | 1680 +++++++++++++++++
drivers/media/platform/amphion/venc.c | 1351 +++++++++++++
drivers/media/platform/amphion/vpu.h | 357 ++++
drivers/media/platform/amphion/vpu_cmds.c | 439 +++++
drivers/media/platform/amphion/vpu_cmds.h | 25 +
drivers/media/platform/amphion/vpu_codec.h | 67 +
drivers/media/platform/amphion/vpu_color.c | 190 ++
drivers/media/platform/amphion/vpu_core.c | 906 +++++++++
drivers/media/platform/amphion/vpu_core.h | 15 +
drivers/media/platform/amphion/vpu_dbg.c | 495 +++++
drivers/media/platform/amphion/vpu_defs.h | 186 ++
drivers/media/platform/amphion/vpu_drv.c | 265 +++
drivers/media/platform/amphion/vpu_helpers.c | 436 +++++
drivers/media/platform/amphion/vpu_helpers.h | 71 +
drivers/media/platform/amphion/vpu_imx8q.c | 271 +++
drivers/media/platform/amphion/vpu_imx8q.h | 116 ++
drivers/media/platform/amphion/vpu_malone.c | 1679 ++++++++++++++++
drivers/media/platform/amphion/vpu_malone.h | 42 +
drivers/media/platform/amphion/vpu_mbox.c | 124 ++
drivers/media/platform/amphion/vpu_mbox.h | 16 +
drivers/media/platform/amphion/vpu_msgs.c | 414 ++++
drivers/media/platform/amphion/vpu_msgs.h | 14 +
drivers/media/platform/amphion/vpu_rpc.c | 279 +++
drivers/media/platform/amphion/vpu_rpc.h | 464 +++++
drivers/media/platform/amphion/vpu_v4l2.c | 703 +++++++
drivers/media/platform/amphion/vpu_v4l2.h | 54 +
drivers/media/platform/amphion/vpu_windsor.c | 1222 ++++++++++++
drivers/media/platform/amphion/vpu_windsor.h | 39 +
drivers/media/v4l2-core/v4l2-ioctl.c | 2 +
include/uapi/linux/videodev2.h | 2 +
41 files changed, 12287 insertions(+)
create mode 100644 Documentation/devicetree/bindings/media/amphion,vpu.yaml
create mode 100644 arch/arm64/boot/dts/freescale/imx8-ss-vpu.dtsi
create mode 100644 drivers/media/platform/amphion/Makefile
create mode 100644 drivers/media/platform/amphion/vdec.c
create mode 100644 drivers/media/platform/amphion/venc.c
create mode 100644 drivers/media/platform/amphion/vpu.h
create mode 100644 drivers/media/platform/amphion/vpu_cmds.c
create mode 100644 drivers/media/platform/amphion/vpu_cmds.h
create mode 100644 drivers/media/platform/amphion/vpu_codec.h
create mode 100644 drivers/media/platform/amphion/vpu_color.c
create mode 100644 drivers/media/platform/amphion/vpu_core.c
create mode 100644 drivers/media/platform/amphion/vpu_core.h
create mode 100644 drivers/media/platform/amphion/vpu_dbg.c
create mode 100644 drivers/media/platform/amphion/vpu_defs.h
create mode 100644 drivers/media/platform/amphion/vpu_drv.c
create mode 100644 drivers/media/platform/amphion/vpu_helpers.c
create mode 100644 drivers/media/platform/amphion/vpu_helpers.h
create mode 100644 drivers/media/platform/amphion/vpu_imx8q.c
create mode 100644 drivers/media/platform/amphion/vpu_imx8q.h
create mode 100644 drivers/media/platform/amphion/vpu_malone.c
create mode 100644 drivers/media/platform/amphion/vpu_malone.h
create mode 100644 drivers/media/platform/amphion/vpu_mbox.c
create mode 100644 drivers/media/platform/amphion/vpu_mbox.h
create mode 100644 drivers/media/platform/amphion/vpu_msgs.c
create mode 100644 drivers/media/platform/amphion/vpu_msgs.h
create mode 100644 drivers/media/platform/amphion/vpu_rpc.c
create mode 100644 drivers/media/platform/amphion/vpu_rpc.h
create mode 100644 drivers/media/platform/amphion/vpu_v4l2.c
create mode 100644 drivers/media/platform/amphion/vpu_v4l2.h
create mode 100644 drivers/media/platform/amphion/vpu_windsor.c
create mode 100644 drivers/media/platform/amphion/vpu_windsor.h


base-commit: 999ed03518cb01aa9ef55c025db79567eec6268c
--
2.33.0



2021-11-30 09:48:49

by Ming Qian

Subject: [PATCH v13 01/13] dt-bindings: media: amphion: add amphion video codec bindings

Add devicetree binding documentation for the Amphion
Video Processing Unit IP present on NXP i.MX8Q SoCs.

Signed-off-by: Ming Qian <[email protected]>
Signed-off-by: Shijie Qin <[email protected]>
Signed-off-by: Zhou Peng <[email protected]>
Reviewed-by: Rob Herring <[email protected]>
---
.../bindings/media/amphion,vpu.yaml | 180 ++++++++++++++++++
1 file changed, 180 insertions(+)
create mode 100644 Documentation/devicetree/bindings/media/amphion,vpu.yaml

diff --git a/Documentation/devicetree/bindings/media/amphion,vpu.yaml b/Documentation/devicetree/bindings/media/amphion,vpu.yaml
new file mode 100644
index 000000000000..a9d80eaeeeb6
--- /dev/null
+++ b/Documentation/devicetree/bindings/media/amphion,vpu.yaml
@@ -0,0 +1,180 @@
+# SPDX-License-Identifier: (GPL-2.0 OR BSD-2-Clause)
+
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/media/amphion,vpu.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: Amphion VPU codec IP
+
+maintainers:
+ - Ming Qian <[email protected]>
+ - Shijie Qin <[email protected]>
+
+description: |-
+ The Amphion MXC video encoder(Windsor) and decoder(Malone) accelerators present
+ on NXP i.MX8Q SoCs.
+
+properties:
+ $nodename:
+ pattern: "^vpu@[0-9a-f]+$"
+
+ compatible:
+ items:
+ - enum:
+ - nxp,imx8qm-vpu
+ - nxp,imx8qxp-vpu
+
+ reg:
+ maxItems: 1
+
+ power-domains:
+ maxItems: 1
+
+ "#address-cells":
+ const: 1
+
+ "#size-cells":
+ const: 1
+
+ ranges: true
+
+patternProperties:
+ "^mailbox@[0-9a-f]+$":
+ description:
+ Each VPU encoder or decoder corresponds to a MU, which is used for
+ communication between the driver and the firmware via the mailbox framework.
+ $ref: ../mailbox/fsl,mu.yaml#
+
+
+ "^vpu_core@[0-9a-f]+$":
+ description:
+ Each core corresponds to a decoder or encoder and needs to be configured
+ separately. The NXP i.MX8QM SoC has one decoder and two encoders; the
+ i.MX8QXP SoC has one decoder and one encoder.
+ type: object
+
+ properties:
+ compatible:
+ items:
+ - enum:
+ - nxp,imx8q-vpu-decoder
+ - nxp,imx8q-vpu-encoder
+
+ reg:
+ maxItems: 1
+
+ power-domains:
+ maxItems: 1
+
+ mbox-names:
+ items:
+ - const: tx0
+ - const: tx1
+ - const: rx
+
+ mboxes:
+ description:
+ List of phandles: 2 MU channels for tx, 1 MU channel for rx.
+ maxItems: 3
+
+ memory-region:
+ description:
+ Phandle to the reserved memory nodes to be associated with the
+ remoteproc device. The reserved memory nodes should be carveout nodes,
+ and should be defined as per the bindings in
+ Documentation/devicetree/bindings/reserved-memory/reserved-memory.txt
+ items:
+ - description: region reserved for firmware image sections.
+ - description: region used for RPC shared memory between firmware and
+ driver.
+
+ required:
+ - compatible
+ - reg
+ - power-domains
+ - mbox-names
+ - mboxes
+ - memory-region
+
+ additionalProperties: false
+
+required:
+ - compatible
+ - reg
+ - power-domains
+
+additionalProperties: false
+
+examples:
+ # Device node example for i.MX8QM platform:
+ - |
+ #include <dt-bindings/firmware/imx/rsrc.h>
+
+ vpu: vpu@2c000000 {
+ compatible = "nxp,imx8qm-vpu";
+ ranges = <0x2c000000 0x2c000000 0x2000000>;
+ reg = <0x2c000000 0x1000000>;
+ #address-cells = <1>;
+ #size-cells = <1>;
+ power-domains = <&pd IMX_SC_R_VPU>;
+
+ mu_m0: mailbox@2d000000 {
+ compatible = "fsl,imx6sx-mu";
+ reg = <0x2d000000 0x20000>;
+ interrupts = <0 472 4>;
+ #mbox-cells = <2>;
+ power-domains = <&pd IMX_SC_R_VPU_MU_0>;
+ };
+
+ mu1_m0: mailbox@2d020000 {
+ compatible = "fsl,imx6sx-mu";
+ reg = <0x2d020000 0x20000>;
+ interrupts = <0 473 4>;
+ #mbox-cells = <2>;
+ power-domains = <&pd IMX_SC_R_VPU_MU_1>;
+ };
+
+ mu2_m0: mailbox@2d040000 {
+ compatible = "fsl,imx6sx-mu";
+ reg = <0x2d040000 0x20000>;
+ interrupts = <0 474 4>;
+ #mbox-cells = <2>;
+ power-domains = <&pd IMX_SC_R_VPU_MU_2>;
+ };
+
+ vpu_core0: vpu_core@2d080000 {
+ compatible = "nxp,imx8q-vpu-decoder";
+ reg = <0x2d080000 0x10000>;
+ power-domains = <&pd IMX_SC_R_VPU_DEC_0>;
+ mbox-names = "tx0", "tx1", "rx";
+ mboxes = <&mu_m0 0 0>,
+ <&mu_m0 0 1>,
+ <&mu_m0 1 0>;
+ memory-region = <&decoder_boot>, <&decoder_rpc>;
+ };
+
+ vpu_core1: vpu_core@2d090000 {
+ compatible = "nxp,imx8q-vpu-encoder";
+ reg = <0x2d090000 0x10000>;
+ power-domains = <&pd IMX_SC_R_VPU_ENC_0>;
+ mbox-names = "tx0", "tx1", "rx";
+ mboxes = <&mu1_m0 0 0>,
+ <&mu1_m0 0 1>,
+ <&mu1_m0 1 0>;
+ memory-region = <&encoder1_boot>, <&encoder1_rpc>;
+ };
+
+ vpu_core2: vpu_core@2d0a0000 {
+ reg = <0x2d0a0000 0x10000>;
+ compatible = "nxp,imx8q-vpu-encoder";
+ power-domains = <&pd IMX_SC_R_VPU_ENC_1>;
+ mbox-names = "tx0", "tx1", "rx";
+ mboxes = <&mu2_m0 0 0>,
+ <&mu2_m0 0 1>,
+ <&mu2_m0 1 0>;
+ memory-region = <&encoder2_boot>, <&encoder2_rpc>;
+ };
+ };
+
+...
--
2.33.0
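
As a quick illustration (not part of the patch), the node-name patterns from the schema above can be exercised against the unit-addresses used in the binding example; note that dt-schema patterns here only accept lower-case hex:

```python
import re

# node-name patterns taken from the binding above
vpu_node = re.compile(r"^vpu@[0-9a-f]+$")
core_node = re.compile(r"^vpu_core@[0-9a-f]+$")

# unit-addresses from the example all match
assert vpu_node.match("vpu@2c000000")
assert core_node.match("vpu_core@2d080000")
assert core_node.match("vpu_core@2d0a0000")

# upper-case hex would be rejected by the schema's pattern
assert not vpu_node.match("vpu@2C000000")
print("all example node names match")
```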


2021-11-30 09:49:18

by Ming Qian

Subject: [PATCH v13 02/13] media: Add nv12mt_8l128 and nv12mt_10be_8l128 video formats.

nv12mt_8l128 is an 8-bit tiled nv12 format used by the Amphion decoder.
nv12mt_10be_8l128 is a 10-bit tiled format used by the Amphion decoder.
The tile size is 8x128.

Signed-off-by: Ming Qian <[email protected]>
Signed-off-by: Shijie Qin <[email protected]>
Signed-off-by: Zhou Peng <[email protected]>
---
.../userspace-api/media/v4l/pixfmt-yuv-planar.rst | 15 +++++++++++++++
drivers/media/v4l2-core/v4l2-ioctl.c | 2 ++
include/uapi/linux/videodev2.h | 2 ++
3 files changed, 19 insertions(+)

diff --git a/Documentation/userspace-api/media/v4l/pixfmt-yuv-planar.rst b/Documentation/userspace-api/media/v4l/pixfmt-yuv-planar.rst
index 3a09d93d405b..fc3baa2753ab 100644
--- a/Documentation/userspace-api/media/v4l/pixfmt-yuv-planar.rst
+++ b/Documentation/userspace-api/media/v4l/pixfmt-yuv-planar.rst
@@ -257,6 +257,8 @@ of the luma plane.
.. _V4L2-PIX-FMT-NV12-4L4:
.. _V4L2-PIX-FMT-NV12-16L16:
.. _V4L2-PIX-FMT-NV12-32L32:
+.. _V4L2-PIX-FMT-NV12MT-8L128:
+.. _V4L2-PIX-FMT-NV12MT-10BE-8L128:

Tiled NV12
----------
@@ -296,6 +298,19 @@ tiles linearly in memory. The line stride and image height must be
aligned to a multiple of 32. The layouts of the luma and chroma planes are
identical.

+``V4L2_PIX_FMT_NV12MT_8L128`` is similar to ``V4L2_PIX_FMT_NV12M`` but stores
+pixels in 2D 8x128 tiles, and stores the tiles linearly in memory.
+The image height must be aligned to a multiple of 128.
+The layouts of the luma and chroma planes are identical.
+
+``V4L2_PIX_FMT_NV12MT_10BE_8L128`` is similar to ``V4L2_PIX_FMT_NV12M`` but stores
+10-bit pixels in 2D 8x128 tiles, and stores the tiles linearly in memory.
+The data is arranged in big endian order.
+The image height must be aligned to a multiple of 128.
+The layouts of the luma and chroma planes are identical.
+Note that the tile size is 8 bytes by 128 bytes, which means the low bits
+and high bits of one pixel may be in different tiles.
+
.. _nv12mt:

.. kernel-figure:: nv12mt.svg
diff --git a/drivers/media/v4l2-core/v4l2-ioctl.c b/drivers/media/v4l2-core/v4l2-ioctl.c
index 69b74d0e8a90..400eec0157a7 100644
--- a/drivers/media/v4l2-core/v4l2-ioctl.c
+++ b/drivers/media/v4l2-core/v4l2-ioctl.c
@@ -1388,6 +1388,8 @@ static void v4l_fill_fmtdesc(struct v4l2_fmtdesc *fmt)
case V4L2_META_FMT_VIVID: descr = "Vivid Metadata"; break;
case V4L2_META_FMT_RK_ISP1_PARAMS: descr = "Rockchip ISP1 3A Parameters"; break;
case V4L2_META_FMT_RK_ISP1_STAT_3A: descr = "Rockchip ISP1 3A Statistics"; break;
+ case V4L2_PIX_FMT_NV12MT_8L128: descr = "NV12M (8x128 Linear)"; break;
+ case V4L2_PIX_FMT_NV12MT_10BE_8L128: descr = "NV12M 10BE(8x128 Linear)"; break;

default:
/* Compressed formats */
diff --git a/include/uapi/linux/videodev2.h b/include/uapi/linux/videodev2.h
index f118fe7a9f58..9443c3109928 100644
--- a/include/uapi/linux/videodev2.h
+++ b/include/uapi/linux/videodev2.h
@@ -632,6 +632,8 @@ struct v4l2_pix_format {
/* Tiled YUV formats, non contiguous planes */
#define V4L2_PIX_FMT_NV12MT v4l2_fourcc('T', 'M', '1', '2') /* 12 Y/CbCr 4:2:0 64x32 tiles */
#define V4L2_PIX_FMT_NV12MT_16X16 v4l2_fourcc('V', 'M', '1', '2') /* 12 Y/CbCr 4:2:0 16x16 tiles */
+#define V4L2_PIX_FMT_NV12MT_8L128 v4l2_fourcc('N', 'A', '1', '2') /* Y/CbCr 4:2:0 8x128 tiles */
+#define V4L2_PIX_FMT_NV12MT_10BE_8L128 v4l2_fourcc('N', 'T', '1', '2') /* Y/CbCr 4:2:0 10-bit 8x128 tiles */

/* Bayer formats - see http://www.siliconimaging.com/RGB%20Bayer.htm */
#define V4L2_PIX_FMT_SBGGR8 v4l2_fourcc('B', 'A', '8', '1') /* 8 BGBG.. GRGR.. */
--
2.33.0
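
As an aside (not part of the patch), the buffer-size implications of the 8x128 tiling can be sketched roughly as follows. The 8-byte stride alignment and 128-line height alignment follow the documentation text above; the driver's exact padding rules may differ, so treat this as an estimate:

```python
def align_up(value, alignment):
    # round value up to the next multiple of alignment
    return (value + alignment - 1) // alignment * alignment

def nv12mt_8l128_plane_sizes(width, height):
    """Rough per-plane sizes for the 8x128 tiled NV12M layout.

    Assumes the stride is aligned to the 8-byte tile width and the
    height to the 128-line tile height, per the format description.
    """
    stride = align_up(width, 8)
    luma = stride * align_up(height, 128)
    # NV12 chroma plane (interleaved CbCr, 4:2:0) covers half the lines
    chroma = stride * align_up(height // 2, 128)
    return luma, chroma

print(nv12mt_8l128_plane_sizes(1920, 1080))  # (2211840, 1228800)
```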


2021-11-30 09:49:23

by Ming Qian

Subject: [PATCH v13 03/13] media: amphion: add amphion vpu device driver

The Amphion VPU codec IP contains an encoder and a decoder.
Windsor is the encoder; it supports encoding H.264.
Malone is the decoder; it features a powerful
video processing unit able to decode many formats,
such as H.264, HEVC, and others.

This driver for the IP is based on the v4l2 mem2mem framework.

Supported SoCs are IMX8QXP and IMX8QM.

Signed-off-by: Ming Qian <[email protected]>
Signed-off-by: Shijie Qin <[email protected]>
Signed-off-by: Zhou Peng <[email protected]>
Reported-by: kernel test robot <[email protected]>
---
arch/arm64/configs/defconfig | 1 +
drivers/media/platform/Kconfig | 19 ++
drivers/media/platform/Makefile | 2 +
drivers/media/platform/amphion/Makefile | 20 ++
drivers/media/platform/amphion/vpu.h | 357 +++++++++++++++++++++
drivers/media/platform/amphion/vpu_defs.h | 186 +++++++++++
drivers/media/platform/amphion/vpu_drv.c | 265 +++++++++++++++
drivers/media/platform/amphion/vpu_imx8q.c | 271 ++++++++++++++++
drivers/media/platform/amphion/vpu_imx8q.h | 116 +++++++
9 files changed, 1237 insertions(+)
create mode 100644 drivers/media/platform/amphion/Makefile
create mode 100644 drivers/media/platform/amphion/vpu.h
create mode 100644 drivers/media/platform/amphion/vpu_defs.h
create mode 100644 drivers/media/platform/amphion/vpu_drv.c
create mode 100644 drivers/media/platform/amphion/vpu_imx8q.c
create mode 100644 drivers/media/platform/amphion/vpu_imx8q.h

diff --git a/arch/arm64/configs/defconfig b/arch/arm64/configs/defconfig
index f2e2b9bdd702..cc3633112f3f 100644
--- a/arch/arm64/configs/defconfig
+++ b/arch/arm64/configs/defconfig
@@ -657,6 +657,7 @@ CONFIG_V4L_PLATFORM_DRIVERS=y
CONFIG_VIDEO_RCAR_CSI2=m
CONFIG_VIDEO_RCAR_VIN=m
CONFIG_VIDEO_SUN6I_CSI=m
+CONFIG_VIDEO_AMPHION_VPU=m
CONFIG_V4L_MEM2MEM_DRIVERS=y
CONFIG_VIDEO_SAMSUNG_S5P_JPEG=m
CONFIG_VIDEO_SAMSUNG_S5P_MFC=m
diff --git a/drivers/media/platform/Kconfig b/drivers/media/platform/Kconfig
index 9fbdba0fd1e7..7d4a8cd52a9e 100644
--- a/drivers/media/platform/Kconfig
+++ b/drivers/media/platform/Kconfig
@@ -216,6 +216,25 @@ config VIDEO_RCAR_ISP
To compile this driver as a module, choose M here: the
module will be called rcar-isp.

+config VIDEO_AMPHION_VPU
+ tristate "Amphion VPU(Video Processing Unit) Codec IP"
+ depends on ARCH_MXC
+ depends on MEDIA_SUPPORT
+ depends on VIDEO_DEV
+ depends on VIDEO_V4L2
+ select MEDIA_CONTROLLER
+ select V4L2_MEM2MEM_DEV
+ select VIDEOBUF2_DMA_CONTIG
+ select VIDEOBUF2_VMALLOC
+ help
+ Amphion VPU Codec IP contains two parts: Windsor and Malone.
+ Windsor is the encoder that supports H.264, and Malone is the
+ decoder that supports H.264, HEVC, and other video formats.
+ This is a V4L2 driver for NXP i.MX8Q video accelerator hardware.
+ It accelerates encoding and decoding operations on
+ various NXP SoCs.
+ To compile this driver as a module, choose M here.
+
endif # V4L_PLATFORM_DRIVERS

menuconfig V4L_MEM2MEM_DRIVERS
diff --git a/drivers/media/platform/Makefile b/drivers/media/platform/Makefile
index 19bcbced7382..53709df654ee 100644
--- a/drivers/media/platform/Makefile
+++ b/drivers/media/platform/Makefile
@@ -88,3 +88,5 @@ obj-$(CONFIG_VIDEO_QCOM_VENUS) += qcom/venus/
obj-y += sunxi/

obj-$(CONFIG_VIDEO_MESON_GE2D) += meson/ge2d/
+
+obj-$(CONFIG_VIDEO_AMPHION_VPU) += amphion/
diff --git a/drivers/media/platform/amphion/Makefile b/drivers/media/platform/amphion/Makefile
new file mode 100644
index 000000000000..80717312835f
--- /dev/null
+++ b/drivers/media/platform/amphion/Makefile
@@ -0,0 +1,20 @@
+# SPDX-License-Identifier: GPL-2.0
+# Makefile for NXP VPU driver
+
+amphion-vpu-objs += vpu_drv.o \
+ vpu_core.o \
+ vpu_mbox.o \
+ vpu_v4l2.o \
+ vpu_helpers.o \
+ vpu_cmds.o \
+ vpu_msgs.o \
+ vpu_rpc.o \
+ vpu_imx8q.o \
+ vpu_windsor.o \
+ vpu_malone.o \
+ vpu_color.o \
+ vdec.o \
+ venc.o \
+ vpu_dbg.o
+
+obj-$(CONFIG_VIDEO_AMPHION_VPU) += amphion-vpu.o
diff --git a/drivers/media/platform/amphion/vpu.h b/drivers/media/platform/amphion/vpu.h
new file mode 100644
index 000000000000..b21f7ddd7c89
--- /dev/null
+++ b/drivers/media/platform/amphion/vpu.h
@@ -0,0 +1,357 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright 2020-2021 NXP
+ */
+
+#ifndef _AMPHION_VPU_H
+#define _AMPHION_VPU_H
+
+#include <media/v4l2-device.h>
+#include <media/v4l2-ctrls.h>
+#include <media/v4l2-mem2mem.h>
+#include <linux/mailbox_client.h>
+#include <linux/mailbox_controller.h>
+#include <linux/kfifo.h>
+
+#define VPU_TIMEOUT msecs_to_jiffies(1000)
+#define VPU_INST_NULL_ID (-1L)
+#define VPU_MSG_BUFFER_SIZE (8192)
+
+enum imx_plat_type {
+ IMX8QXP = 0,
+ IMX8QM = 1,
+ IMX8DM,
+ IMX8DX,
+ PLAT_TYPE_RESERVED
+};
+
+enum vpu_core_type {
+ VPU_CORE_TYPE_ENC = 0,
+ VPU_CORE_TYPE_DEC = 0x10,
+};
+
+struct vpu_dev;
+struct vpu_resources {
+ enum imx_plat_type plat_type;
+ u32 mreg_base;
+ int (*setup)(struct vpu_dev *vpu);
+ int (*setup_encoder)(struct vpu_dev *vpu);
+ int (*setup_decoder)(struct vpu_dev *vpu);
+ int (*reset)(struct vpu_dev *vpu);
+};
+
+struct vpu_buffer {
+ void *virt;
+ dma_addr_t phys;
+ u32 length;
+ u32 bytesused;
+ struct device *dev;
+};
+
+struct vpu_func {
+ struct video_device *vfd;
+ struct v4l2_m2m_dev *m2m_dev;
+ enum vpu_core_type type;
+ int function;
+};
+
+struct vpu_dev {
+ void __iomem *base;
+ struct platform_device *pdev;
+ struct device *dev;
+ struct mutex lock;
+ const struct vpu_resources *res;
+ struct list_head cores;
+
+ struct v4l2_device v4l2_dev;
+ struct vpu_func encoder;
+ struct vpu_func decoder;
+ struct media_device mdev;
+
+ struct delayed_work watchdog_work;
+ void (*get_vpu)(struct vpu_dev *vpu);
+ void (*put_vpu)(struct vpu_dev *vpu);
+ void (*get_enc)(struct vpu_dev *vpu);
+ void (*put_enc)(struct vpu_dev *vpu);
+ void (*get_dec)(struct vpu_dev *vpu);
+ void (*put_dec)(struct vpu_dev *vpu);
+ atomic_t ref_vpu;
+ atomic_t ref_enc;
+ atomic_t ref_dec;
+
+ struct dentry *debugfs;
+};
+
+struct vpu_format {
+ u32 pixfmt;
+ unsigned int num_planes;
+ u32 type;
+ u32 flags;
+ u32 width;
+ u32 height;
+ u32 sizeimage[VIDEO_MAX_PLANES];
+ u32 bytesperline[VIDEO_MAX_PLANES];
+ u32 field;
+};
+
+struct vpu_core_resources {
+ enum vpu_core_type type;
+ const char *fwname;
+ u32 stride;
+ u32 max_width;
+ u32 min_width;
+ u32 step_width;
+ u32 max_height;
+ u32 min_height;
+ u32 step_height;
+ u32 rpc_size;
+ u32 fwlog_size;
+ u32 act_size;
+ bool standalone;
+};
+
+struct vpu_mbox {
+ char name[20];
+ struct mbox_client cl;
+ struct mbox_chan *ch;
+ bool block;
+};
+
+enum vpu_core_state {
+ VPU_CORE_DEINIT = 0,
+ VPU_CORE_ACTIVE,
+ VPU_CORE_SNAPSHOT,
+ VPU_CORE_HANG
+};
+
+struct vpu_core {
+ void __iomem *base;
+ struct platform_device *pdev;
+ struct device *dev;
+ struct device *parent;
+ struct device *pd;
+ struct device_link *pd_link;
+ struct mutex lock;
+ struct mutex cmd_lock;
+ struct list_head list;
+ enum vpu_core_type type;
+ int id;
+ const struct vpu_core_resources *res;
+ unsigned long instance_mask;
+ u32 supported_instance_count;
+ unsigned long hang_mask;
+ u32 request_count;
+ struct list_head instances;
+ enum vpu_core_state state;
+ u32 fw_version;
+
+ struct vpu_buffer fw;
+ struct vpu_buffer rpc;
+ struct vpu_buffer log;
+ struct vpu_buffer act;
+
+ struct vpu_mbox tx_type;
+ struct vpu_mbox tx_data;
+ struct vpu_mbox rx;
+ unsigned long cmd_seq;
+
+ wait_queue_head_t ack_wq;
+ struct completion cmp;
+ struct workqueue_struct *workqueue;
+ struct work_struct msg_work;
+ struct delayed_work msg_delayed_work;
+ struct kfifo msg_fifo;
+ void *msg_buffer;
+ unsigned int msg_buffer_size;
+
+ struct vpu_dev *vpu;
+ void *iface;
+
+ struct dentry *debugfs;
+ struct dentry *debugfs_fwlog;
+};
+
+enum vpu_codec_state {
+ VPU_CODEC_STATE_DEINIT = 1,
+ VPU_CODEC_STATE_CONFIGURED,
+ VPU_CODEC_STATE_START,
+ VPU_CODEC_STATE_STARTED,
+ VPU_CODEC_STATE_ACTIVE,
+ VPU_CODEC_STATE_SEEK,
+ VPU_CODEC_STATE_STOP,
+ VPU_CODEC_STATE_DRAIN,
+ VPU_CODEC_STATE_DYAMIC_RESOLUTION_CHANGE,
+};
+
+struct vpu_frame_info {
+ u32 type;
+ u32 id;
+ u32 sequence;
+ u32 luma;
+ u32 chroma_u;
+ u32 chroma_v;
+ u32 data_offset;
+ u32 flags;
+ u32 skipped;
+ s64 timestamp;
+};
+
+struct vpu_inst;
+struct vpu_inst_ops {
+ int (*ctrl_init)(struct vpu_inst *inst);
+ int (*start)(struct vpu_inst *inst, u32 type);
+ int (*stop)(struct vpu_inst *inst, u32 type);
+ int (*abort)(struct vpu_inst *inst);
+ bool (*check_ready)(struct vpu_inst *inst, unsigned int type);
+ void (*buf_done)(struct vpu_inst *inst, struct vpu_frame_info *frame);
+ void (*event_notify)(struct vpu_inst *inst, u32 event, void *data);
+ void (*release)(struct vpu_inst *inst);
+ void (*cleanup)(struct vpu_inst *inst);
+ void (*mem_request)(struct vpu_inst *inst,
+ u32 enc_frame_size,
+ u32 enc_frame_num,
+ u32 ref_frame_size,
+ u32 ref_frame_num,
+ u32 act_frame_size,
+ u32 act_frame_num);
+ void (*input_done)(struct vpu_inst *inst);
+ void (*stop_done)(struct vpu_inst *inst);
+ int (*process_output)(struct vpu_inst *inst, struct vb2_buffer *vb);
+ int (*process_capture)(struct vpu_inst *inst, struct vb2_buffer *vb);
+ int (*get_one_frame)(struct vpu_inst *inst, void *info);
+ void (*on_queue_empty)(struct vpu_inst *inst, u32 type);
+ int (*get_debug_info)(struct vpu_inst *inst, char *str, u32 size, u32 i);
+ void (*wait_prepare)(struct vpu_inst *inst);
+ void (*wait_finish)(struct vpu_inst *inst);
+};
+
+struct vpu_inst {
+ struct list_head list;
+ struct mutex lock;
+ struct vpu_dev *vpu;
+ struct vpu_core *core;
+ struct device *dev;
+ int id;
+
+ struct v4l2_fh fh;
+ struct v4l2_ctrl_handler ctrl_handler;
+ atomic_t ref_count;
+ int (*release)(struct vpu_inst *inst);
+
+ enum vpu_codec_state state;
+ enum vpu_core_type type;
+
+ struct workqueue_struct *workqueue;
+ struct work_struct msg_work;
+ struct kfifo msg_fifo;
+ u8 msg_buffer[VPU_MSG_BUFFER_SIZE];
+
+ struct vpu_buffer stream_buffer;
+ bool use_stream_buffer;
+ struct vpu_buffer act;
+
+ struct list_head cmd_q;
+ void *pending;
+
+ struct vpu_inst_ops *ops;
+ const struct vpu_format *formats;
+ struct vpu_format out_format;
+ struct vpu_format cap_format;
+ u32 min_buffer_cap;
+ u32 min_buffer_out;
+
+ struct v4l2_rect crop;
+ u32 colorspace;
+ u8 ycbcr_enc;
+ u8 quantization;
+ u8 xfer_func;
+ u32 sequence;
+ u32 extra_size;
+
+ u32 flows[16];
+ u32 flow_idx;
+
+ pid_t pid;
+ pid_t tgid;
+ struct dentry *debugfs;
+
+ void *priv;
+};
+
+#define call_vop(inst, op, args...) \
+ ((inst)->ops->op ? (inst)->ops->op(inst, ##args) : 0)
+
+enum {
+ VPU_BUF_STATE_IDLE = 0,
+ VPU_BUF_STATE_INUSE,
+ VPU_BUF_STATE_DECODED,
+ VPU_BUF_STATE_READY,
+ VPU_BUF_STATE_SKIP,
+ VPU_BUF_STATE_ERROR
+};
+
+struct vpu_vb2_buffer {
+ struct v4l2_m2m_buffer m2m_buf;
+ dma_addr_t luma;
+ dma_addr_t chroma_u;
+ dma_addr_t chroma_v;
+ unsigned int state;
+ u32 tag;
+};
+
+void vpu_writel(struct vpu_dev *vpu, u32 reg, u32 val);
+u32 vpu_readl(struct vpu_dev *vpu, u32 reg);
+
+static inline struct vpu_vb2_buffer *to_vpu_vb2_buffer(struct vb2_v4l2_buffer *vbuf)
+{
+ struct v4l2_m2m_buffer *m2m_buf = container_of(vbuf, struct v4l2_m2m_buffer, vb);
+
+ return container_of(m2m_buf, struct vpu_vb2_buffer, m2m_buf);
+}
+
+static inline const char *vpu_core_type_desc(enum vpu_core_type type)
+{
+ return type == VPU_CORE_TYPE_ENC ? "encoder" : "decoder";
+}
+
+static inline struct vpu_inst *to_inst(struct file *filp)
+{
+ return container_of(filp->private_data, struct vpu_inst, fh);
+}
+
+#define ctrl_to_inst(ctrl) \
+ container_of((ctrl)->handler, struct vpu_inst, ctrl_handler)
+
+const struct v4l2_ioctl_ops *venc_get_ioctl_ops(void);
+const struct v4l2_file_operations *venc_get_fops(void);
+const struct v4l2_ioctl_ops *vdec_get_ioctl_ops(void);
+const struct v4l2_file_operations *vdec_get_fops(void);
+
+int vpu_add_func(struct vpu_dev *vpu, struct vpu_func *func);
+void vpu_remove_func(struct vpu_func *func);
+
+struct vpu_inst *vpu_inst_get(struct vpu_inst *inst);
+void vpu_inst_put(struct vpu_inst *inst);
+struct vpu_core *vpu_request_core(struct vpu_dev *vpu, enum vpu_core_type type);
+void vpu_release_core(struct vpu_core *core);
+int vpu_inst_register(struct vpu_inst *inst);
+int vpu_inst_unregister(struct vpu_inst *inst);
+const struct vpu_core_resources *vpu_get_resource(struct vpu_inst *inst);
+
+int vpu_inst_create_dbgfs_file(struct vpu_inst *inst);
+int vpu_inst_remove_dbgfs_file(struct vpu_inst *inst);
+int vpu_core_create_dbgfs_file(struct vpu_core *core);
+int vpu_core_remove_dbgfs_file(struct vpu_core *core);
+void vpu_inst_record_flow(struct vpu_inst *inst, u32 flow);
+
+int vpu_core_driver_init(void);
+void vpu_core_driver_exit(void);
+
+extern bool debug;
+#define vpu_trace(dev, fmt, arg...) \
+ do { \
+ if (debug) \
+ dev_info(dev, "%s: " fmt, __func__, ## arg); \
+ } while (0)
+
+#endif
diff --git a/drivers/media/platform/amphion/vpu_defs.h b/drivers/media/platform/amphion/vpu_defs.h
new file mode 100644
index 000000000000..9b7e26eefc33
--- /dev/null
+++ b/drivers/media/platform/amphion/vpu_defs.h
@@ -0,0 +1,186 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright 2020-2021 NXP
+ */
+
+#ifndef _AMPHION_VPU_DEFS_H
+#define _AMPHION_VPU_DEFS_H
+
+enum MSG_TYPE {
+ INIT_DONE = 1,
+ PRC_BUF_OFFSET,
+ BOOT_ADDRESS,
+ COMMAND,
+ EVENT,
+};
+
+enum {
+ VPU_IRQ_CODE_BOOT_DONE = 0x55,
+ VPU_IRQ_CODE_SNAPSHOT_DONE = 0xa5,
+ VPU_IRQ_CODE_SYNC = 0xaa,
+};
+
+enum {
+ VPU_CMD_ID_NOOP = 0x0,
+ VPU_CMD_ID_CONFIGURE_CODEC,
+ VPU_CMD_ID_START,
+ VPU_CMD_ID_STOP,
+ VPU_CMD_ID_ABORT,
+ VPU_CMD_ID_RST_BUF,
+ VPU_CMD_ID_SNAPSHOT,
+ VPU_CMD_ID_FIRM_RESET,
+ VPU_CMD_ID_UPDATE_PARAMETER,
+ VPU_CMD_ID_FRAME_ENCODE,
+ VPU_CMD_ID_SKIP,
+ VPU_CMD_ID_PARSE_NEXT_SEQ,
+ VPU_CMD_ID_PARSE_NEXT_I,
+ VPU_CMD_ID_PARSE_NEXT_IP,
+ VPU_CMD_ID_PARSE_NEXT_ANY,
+ VPU_CMD_ID_DEC_PIC,
+ VPU_CMD_ID_FS_ALLOC,
+ VPU_CMD_ID_FS_RELEASE,
+ VPU_CMD_ID_TIMESTAMP,
+ VPU_CMD_ID_DEBUG
+};
+
+enum {
+ VPU_MSG_ID_NOOP = 0x100,
+ VPU_MSG_ID_RESET_DONE,
+ VPU_MSG_ID_START_DONE,
+ VPU_MSG_ID_STOP_DONE,
+ VPU_MSG_ID_ABORT_DONE,
+ VPU_MSG_ID_BUF_RST,
+ VPU_MSG_ID_MEM_REQUEST,
+ VPU_MSG_ID_PARAM_UPD_DONE,
+ VPU_MSG_ID_FRAME_INPUT_DONE,
+ VPU_MSG_ID_ENC_DONE,
+ VPU_MSG_ID_DEC_DONE,
+ VPU_MSG_ID_FRAME_REQ,
+ VPU_MSG_ID_FRAME_RELEASE,
+ VPU_MSG_ID_SEQ_HDR_FOUND,
+ VPU_MSG_ID_RES_CHANGE,
+ VPU_MSG_ID_PIC_HDR_FOUND,
+ VPU_MSG_ID_PIC_DECODED,
+ VPU_MSG_ID_PIC_EOS,
+ VPU_MSG_ID_FIFO_LOW,
+ VPU_MSG_ID_FIFO_HIGH,
+ VPU_MSG_ID_FIFO_EMPTY,
+ VPU_MSG_ID_FIFO_FULL,
+ VPU_MSG_ID_BS_ERROR,
+ VPU_MSG_ID_UNSUPPORTED,
+ VPU_MSG_ID_TIMESTAMP_INFO,
+
+ VPU_MSG_ID_FIRMWARE_XCPT,
+};
+
+enum VPU_ENC_MEMORY_RESOURCE {
+ MEM_RES_ENC,
+ MEM_RES_REF,
+ MEM_RES_ACT
+};
+
+enum VPU_DEC_MEMORY_RESOURCE {
+ MEM_RES_FRAME,
+ MEM_RES_MBI,
+ MEM_RES_DCP
+};
+
+enum VPU_SCODE_TYPE {
+ SCODE_PADDING_EOS = 1,
+ SCODE_PADDING_BUFFLUSH = 2,
+ SCODE_PADDING_ABORT = 3,
+ SCODE_SEQUENCE = 0x31,
+ SCODE_PICTURE = 0x32,
+ SCODE_SLICE = 0x33
+};
+
+struct vpu_pkt_mem_req_data {
+ u32 enc_frame_size;
+ u32 enc_frame_num;
+ u32 ref_frame_size;
+ u32 ref_frame_num;
+ u32 act_buf_size;
+ u32 act_buf_num;
+};
+
+struct vpu_enc_pic_info {
+ u32 frame_id;
+ u32 pic_type;
+ u32 skipped_frame;
+ u32 error_flag;
+ u32 psnr;
+ u32 frame_size;
+ u32 wptr;
+ u32 crc;
+ s64 timestamp;
+};
+
+struct vpu_dec_codec_info {
+ u32 pixfmt;
+ u32 num_ref_frms;
+ u32 num_dpb_frms;
+ u32 num_dfe_area;
+ u32 color_primaries;
+ u32 transfer_chars;
+ u32 matrix_coeffs;
+ u32 full_range;
+ u32 vui_present;
+ u32 progressive;
+ u32 width;
+ u32 height;
+ u32 decoded_width;
+ u32 decoded_height;
+ struct v4l2_fract frame_rate;
+ u32 dsp_asp_ratio;
+ u32 level_idc;
+ u32 bit_depth_luma;
+ u32 bit_depth_chroma;
+ u32 chroma_fmt;
+ u32 mvc_num_views;
+ u32 offset_x;
+ u32 offset_y;
+ u32 tag;
+ u32 sizeimage[VIDEO_MAX_PLANES];
+ u32 bytesperline[VIDEO_MAX_PLANES];
+ u32 mbi_size;
+ u32 dcp_size;
+ u32 stride;
+};
+
+struct vpu_dec_pic_info {
+ u32 id;
+ u32 luma;
+ u32 start;
+ u32 end;
+ u32 pic_size;
+ u32 stride;
+ u32 skipped;
+ s64 timestamp;
+ u32 consumed_count;
+};
+
+struct vpu_fs_info {
+ u32 id;
+ u32 type;
+ u32 tag;
+ u32 luma_addr;
+ u32 luma_size;
+ u32 chroma_addr;
+ u32 chromau_size;
+ u32 chromav_addr;
+ u32 chromav_size;
+ u32 bytesperline;
+ u32 not_displayed;
+};
+
+struct vpu_ts_info {
+ s64 timestamp;
+ u32 size;
+};
+
+#define BITRATE_STEP (1024)
+#define BITRATE_MIN (16 * BITRATE_STEP)
+#define BITRATE_MAX (240 * 1024 * BITRATE_STEP)
+#define BITRATE_DEFAULT (2 * 1024 * BITRATE_STEP)
+
+#endif
diff --git a/drivers/media/platform/amphion/vpu_drv.c b/drivers/media/platform/amphion/vpu_drv.c
new file mode 100644
index 000000000000..bbe2cb36a326
--- /dev/null
+++ b/drivers/media/platform/amphion/vpu_drv.c
@@ -0,0 +1,265 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright 2020-2021 NXP
+ */
+
+#include <linux/init.h>
+#include <linux/interconnect.h>
+#include <linux/ioctl.h>
+#include <linux/list.h>
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/dma-map-ops.h>
+#include <linux/of_device.h>
+#include <linux/of_address.h>
+#include <linux/platform_device.h>
+#include <linux/slab.h>
+#include <linux/types.h>
+#include <linux/pm_runtime.h>
+#include <linux/videodev2.h>
+#include <linux/of_reserved_mem.h>
+#include <media/v4l2-device.h>
+#include <media/videobuf2-v4l2.h>
+#include <media/v4l2-mem2mem.h>
+#include <media/v4l2-ioctl.h>
+#include <linux/debugfs.h>
+#include "vpu.h"
+#include "vpu_imx8q.h"
+
+bool debug;
+module_param(debug, bool, 0644);
+
+void vpu_writel(struct vpu_dev *vpu, u32 reg, u32 val)
+{
+ writel(val, vpu->base + reg);
+}
+
+u32 vpu_readl(struct vpu_dev *vpu, u32 reg)
+{
+ return readl(vpu->base + reg);
+}
+
+static void vpu_dev_get(struct vpu_dev *vpu)
+{
+ if (atomic_inc_return(&vpu->ref_vpu) == 1 && vpu->res->setup)
+ vpu->res->setup(vpu);
+}
+
+static void vpu_dev_put(struct vpu_dev *vpu)
+{
+ atomic_dec(&vpu->ref_vpu);
+}
+
+static void vpu_enc_get(struct vpu_dev *vpu)
+{
+ if (atomic_inc_return(&vpu->ref_enc) == 1 && vpu->res->setup_encoder)
+ vpu->res->setup_encoder(vpu);
+}
+
+static void vpu_enc_put(struct vpu_dev *vpu)
+{
+ atomic_dec(&vpu->ref_enc);
+}
+
+static void vpu_dec_get(struct vpu_dev *vpu)
+{
+ if (atomic_inc_return(&vpu->ref_dec) == 1 && vpu->res->setup_decoder)
+ vpu->res->setup_decoder(vpu);
+}
+
+static void vpu_dec_put(struct vpu_dev *vpu)
+{
+ atomic_dec(&vpu->ref_dec);
+}
+
+static int vpu_init_media_device(struct vpu_dev *vpu)
+{
+ vpu->mdev.dev = vpu->dev;
+ strscpy(vpu->mdev.model, "amphion-vpu", sizeof(vpu->mdev.model));
+ strscpy(vpu->mdev.bus_info, "platform: amphion-vpu", sizeof(vpu->mdev.bus_info));
+ media_device_init(&vpu->mdev);
+ vpu->v4l2_dev.mdev = &vpu->mdev;
+
+ return 0;
+}
+
+static int vpu_probe(struct platform_device *pdev)
+{
+ struct device *dev = &pdev->dev;
+ struct vpu_dev *vpu;
+ int ret;
+
+ dev_dbg(dev, "probe\n");
+ vpu = devm_kzalloc(dev, sizeof(*vpu), GFP_KERNEL);
+ if (!vpu)
+ return -ENOMEM;
+
+ vpu->pdev = pdev;
+ vpu->dev = dev;
+ mutex_init(&vpu->lock);
+ INIT_LIST_HEAD(&vpu->cores);
+ platform_set_drvdata(pdev, vpu);
+ atomic_set(&vpu->ref_vpu, 0);
+ atomic_set(&vpu->ref_enc, 0);
+ atomic_set(&vpu->ref_dec, 0);
+ vpu->get_vpu = vpu_dev_get;
+ vpu->put_vpu = vpu_dev_put;
+ vpu->get_enc = vpu_enc_get;
+ vpu->put_enc = vpu_enc_put;
+ vpu->get_dec = vpu_dec_get;
+ vpu->put_dec = vpu_dec_put;
+
+ vpu->base = devm_platform_ioremap_resource(pdev, 0);
+ if (IS_ERR(vpu->base))
+ return PTR_ERR(vpu->base);
+
+ vpu->res = of_device_get_match_data(dev);
+ if (!vpu->res)
+ return -ENODEV;
+
+ pm_runtime_enable(dev);
+ ret = pm_runtime_get_sync(dev);
+ if (ret < 0)
+ goto err_runtime_disable;
+
+ pm_runtime_put_sync(dev);
+
+ ret = v4l2_device_register(dev, &vpu->v4l2_dev);
+ if (ret)
+ goto err_vpu_deinit;
+
+ vpu_init_media_device(vpu);
+ vpu->encoder.type = VPU_CORE_TYPE_ENC;
+ vpu->encoder.function = MEDIA_ENT_F_PROC_VIDEO_ENCODER;
+ vpu->decoder.type = VPU_CORE_TYPE_DEC;
+ vpu->decoder.function = MEDIA_ENT_F_PROC_VIDEO_DECODER;
+ vpu_add_func(vpu, &vpu->decoder);
+ vpu_add_func(vpu, &vpu->encoder);
+ ret = media_device_register(&vpu->mdev);
+ if (ret)
+ goto err_vpu_media;
+ vpu->debugfs = debugfs_create_dir("amphion_vpu", NULL);
+
+ of_platform_populate(dev->of_node, NULL, NULL, dev);
+
+ return 0;
+
+err_vpu_media:
+ vpu_remove_func(&vpu->encoder);
+ vpu_remove_func(&vpu->decoder);
+ v4l2_device_unregister(&vpu->v4l2_dev);
+err_vpu_deinit:
+err_runtime_disable:
+ pm_runtime_set_suspended(dev);
+ pm_runtime_disable(dev);
+
+ return ret;
+}
+
+static int vpu_remove(struct platform_device *pdev)
+{
+ struct vpu_dev *vpu = platform_get_drvdata(pdev);
+ struct device *dev = &pdev->dev;
+ int ret;
+
+ ret = pm_runtime_get_sync(dev);
+ WARN_ON(ret < 0);
+
+ debugfs_remove_recursive(vpu->debugfs);
+ vpu->debugfs = NULL;
+
+ pm_runtime_put_sync(dev);
+ pm_runtime_disable(dev);
+
+ media_device_unregister(&vpu->mdev);
+ vpu_remove_func(&vpu->decoder);
+ vpu_remove_func(&vpu->encoder);
+ media_device_cleanup(&vpu->mdev);
+ v4l2_device_unregister(&vpu->v4l2_dev);
+ mutex_destroy(&vpu->lock);
+
+ return 0;
+}
+
+static int __maybe_unused vpu_runtime_resume(struct device *dev)
+{
+ return 0;
+}
+
+static int __maybe_unused vpu_runtime_suspend(struct device *dev)
+{
+ return 0;
+}
+
+static int __maybe_unused vpu_resume(struct device *dev)
+{
+ return 0;
+}
+
+static int __maybe_unused vpu_suspend(struct device *dev)
+{
+ return 0;
+}
+
+static const struct dev_pm_ops vpu_pm_ops = {
+ SET_RUNTIME_PM_OPS(vpu_runtime_suspend, vpu_runtime_resume, NULL)
+ SET_SYSTEM_SLEEP_PM_OPS(vpu_suspend, vpu_resume)
+};
+
+static struct vpu_resources imx8qxp_res = {
+ .plat_type = IMX8QXP,
+ .mreg_base = 0x40000000,
+ .setup = vpu_imx8q_setup,
+ .setup_encoder = vpu_imx8q_setup_enc,
+ .setup_decoder = vpu_imx8q_setup_dec,
+ .reset = vpu_imx8q_reset
+};
+
+static struct vpu_resources imx8qm_res = {
+ .plat_type = IMX8QM,
+ .mreg_base = 0x40000000,
+ .setup = vpu_imx8q_setup,
+ .setup_encoder = vpu_imx8q_setup_enc,
+ .setup_decoder = vpu_imx8q_setup_dec,
+ .reset = vpu_imx8q_reset
+};
+
+static const struct of_device_id vpu_dt_match[] = {
+ { .compatible = "nxp,imx8qxp-vpu", .data = &imx8qxp_res },
+ { .compatible = "nxp,imx8qm-vpu", .data = &imx8qm_res },
+ {}
+};
+MODULE_DEVICE_TABLE(of, vpu_dt_match);
+
+static struct platform_driver amphion_vpu_driver = {
+ .probe = vpu_probe,
+ .remove = vpu_remove,
+ .driver = {
+ .name = "amphion-vpu",
+ .of_match_table = vpu_dt_match,
+ .pm = &vpu_pm_ops,
+ },
+};
+
+static int __init vpu_driver_init(void)
+{
+ int ret;
+
+ ret = platform_driver_register(&amphion_vpu_driver);
+ if (ret)
+ return ret;
+
+ return vpu_core_driver_init();
+}
+
+static void __exit vpu_driver_exit(void)
+{
+ vpu_core_driver_exit();
+ platform_driver_unregister(&amphion_vpu_driver);
+}
+module_init(vpu_driver_init);
+module_exit(vpu_driver_exit);
+
+MODULE_AUTHOR("Freescale Semiconductor, Inc.");
+MODULE_DESCRIPTION("Linux VPU driver for Freescale i.MX8Q");
+MODULE_LICENSE("GPL v2");
diff --git a/drivers/media/platform/amphion/vpu_imx8q.c b/drivers/media/platform/amphion/vpu_imx8q.c
new file mode 100644
index 000000000000..92c0948fe0f7
--- /dev/null
+++ b/drivers/media/platform/amphion/vpu_imx8q.c
@@ -0,0 +1,271 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright 2020-2021 NXP
+ */
+
+#include <linux/init.h>
+#include <linux/device.h>
+#include <linux/ioctl.h>
+#include <linux/list.h>
+#include <linux/module.h>
+#include <linux/kernel.h>
+#include <linux/slab.h>
+#include <linux/delay.h>
+#include <linux/types.h>
+#include "vpu.h"
+#include "vpu_core.h"
+#include "vpu_imx8q.h"
+#include "vpu_rpc.h"
+
+#define IMX8Q_CSR_CM0Px_ADDR_OFFSET 0x00000000
+#define IMX8Q_CSR_CM0Px_CPUWAIT 0x00000004
+
+#ifdef CONFIG_IMX_SCU
+#include <linux/firmware/imx/ipc.h>
+#include <linux/firmware/imx/svc/misc.h>
+
+#define VPU_DISABLE_BITS 0x7
+#define VPU_IMX_DECODER_FUSE_OFFSET 14
+#define VPU_ENCODER_MASK 0x1
+#define VPU_DECODER_MASK 0x3UL
+#define VPU_DECODER_H264_MASK 0x2UL
+#define VPU_DECODER_HEVC_MASK 0x1UL
+
+static u32 imx8q_fuse;
+
+struct vpu_sc_msg_misc {
+ struct imx_sc_rpc_msg hdr;
+ u32 word;
+} __packed;
+#endif
+
+int vpu_imx8q_setup_dec(struct vpu_dev *vpu)
+{
+ const off_t offset = DEC_MFD_XREG_SLV_BASE + MFD_BLK_CTRL;
+
+ vpu_writel(vpu, offset + MFD_BLK_CTRL_MFD_SYS_CLOCK_ENABLE_SET, 0x1f);
+ vpu_writel(vpu, offset + MFD_BLK_CTRL_MFD_SYS_RESET_SET, 0xffffffff);
+
+ return 0;
+}
+
+int vpu_imx8q_setup_enc(struct vpu_dev *vpu)
+{
+ return 0;
+}
+
+int vpu_imx8q_setup(struct vpu_dev *vpu)
+{
+ const off_t offset = SCB_XREG_SLV_BASE + SCB_SCB_BLK_CTRL;
+
+ vpu_readl(vpu, offset + 0x108);
+
+ vpu_writel(vpu, offset + SCB_BLK_CTRL_SCB_CLK_ENABLE_SET, 0x1);
+ vpu_writel(vpu, offset + 0x190, 0xffffffff);
+ vpu_writel(vpu, offset + SCB_BLK_CTRL_XMEM_RESET_SET, 0xffffffff);
+ vpu_writel(vpu, offset + SCB_BLK_CTRL_SCB_CLK_ENABLE_SET, 0xE);
+ vpu_writel(vpu, offset + SCB_BLK_CTRL_CACHE_RESET_SET, 0x7);
+ vpu_writel(vpu, XMEM_CONTROL, 0x102);
+
+ vpu_readl(vpu, offset + 0x108);
+
+ return 0;
+}
+
+static int vpu_imx8q_reset_enc(struct vpu_dev *vpu)
+{
+ return 0;
+}
+
+static int vpu_imx8q_reset_dec(struct vpu_dev *vpu)
+{
+ const off_t offset = DEC_MFD_XREG_SLV_BASE + MFD_BLK_CTRL;
+
+ vpu_writel(vpu, offset + MFD_BLK_CTRL_MFD_SYS_RESET_CLR, 0xffffffff);
+
+ return 0;
+}
+
+int vpu_imx8q_reset(struct vpu_dev *vpu)
+{
+ const off_t offset = SCB_XREG_SLV_BASE + SCB_SCB_BLK_CTRL;
+
+ vpu_writel(vpu, offset + SCB_BLK_CTRL_CACHE_RESET_CLR, 0x7);
+ vpu_imx8q_reset_enc(vpu);
+ vpu_imx8q_reset_dec(vpu);
+
+ return 0;
+}
+int vpu_imx8q_set_system_cfg_common(struct vpu_rpc_system_config *config,
+ u32 regs, u32 core_id)
+{
+ if (!config)
+ return -EINVAL;
+
+ switch (core_id) {
+ case 0:
+ config->malone_base_addr[0] = regs + DEC_MFD_XREG_SLV_BASE;
+ config->num_malones = 1;
+ config->num_windsors = 0;
+ break;
+ case 1:
+ config->windsor_base_addr[0] = regs + ENC_MFD_XREG_SLV_0_BASE;
+ config->num_windsors = 1;
+ config->num_malones = 0;
+ break;
+ case 2:
+ config->windsor_base_addr[0] = regs + ENC_MFD_XREG_SLV_1_BASE;
+ config->num_windsors = 1;
+ config->num_malones = 0;
+ break;
+ default:
+ return -EINVAL;
+ }
+ if (config->num_windsors) {
+ config->windsor_irq_pin[0x0][0x0] = WINDSOR_PAL_IRQ_PIN_L;
+ config->windsor_irq_pin[0x0][0x1] = WINDSOR_PAL_IRQ_PIN_H;
+ }
+
+ config->malone_base_addr[0x1] = 0x0;
+ config->hif_offset[0x0] = MFD_HIF;
+ config->hif_offset[0x1] = 0x0;
+
+ config->dpv_base_addr = 0x0;
+ config->dpv_irq_pin = 0x0;
+ config->pixif_base_addr = regs + DEC_MFD_XREG_SLV_BASE + MFD_PIX_IF;
+ config->cache_base_addr[0] = regs + MC_CACHE_0_BASE;
+ config->cache_base_addr[1] = regs + MC_CACHE_1_BASE;
+
+ return 0;
+}
+
+int vpu_imx8q_boot_core(struct vpu_core *core)
+{
+ csr_writel(core, IMX8Q_CSR_CM0Px_ADDR_OFFSET, core->fw.phys);
+ csr_writel(core, IMX8Q_CSR_CM0Px_CPUWAIT, 0);
+ return 0;
+}
+
+int vpu_imx8q_get_power_state(struct vpu_core *core)
+{
+ if (csr_readl(core, IMX8Q_CSR_CM0Px_CPUWAIT) == 1)
+ return 0;
+ return 1;
+}
+
+int vpu_imx8q_on_firmware_loaded(struct vpu_core *core)
+{
+ u8 *p;
+
+ p = core->fw.virt;
+ p[16] = core->vpu->res->plat_type;
+ p[17] = core->id;
+ p[18] = 1;
+
+ return 0;
+}
+
+u32 vpu_imx8q_check_memory_region(dma_addr_t base, dma_addr_t addr, u32 size)
+{
+ const struct vpu_rpc_region_t imx8q_regions[] = {
+ {0x00000000, 0x08000000, VPU_CORE_MEMORY_CACHED},
+ {0x08000000, 0x10000000, VPU_CORE_MEMORY_UNCACHED},
+ {0x10000000, 0x20000000, VPU_CORE_MEMORY_CACHED},
+ {0x20000000, 0x40000000, VPU_CORE_MEMORY_UNCACHED}
+ };
+ int i;
+
+ if (addr < base)
+ return VPU_CORE_MEMORY_INVALID;
+
+ addr -= base;
+ for (i = 0; i < ARRAY_SIZE(imx8q_regions); i++) {
+ const struct vpu_rpc_region_t *region = &imx8q_regions[i];
+
+ if (addr >= region->start && addr + size < region->end)
+ return region->type;
+ }
+
+ return VPU_CORE_MEMORY_INVALID;
+}
+
+#ifdef CONFIG_IMX_SCU
+static u32 vpu_imx8q_get_fuse(void)
+{
+ static u32 fuse_got;
+ struct imx_sc_ipc *ipc;
+ struct vpu_sc_msg_misc msg;
+ struct imx_sc_rpc_msg *hdr = &msg.hdr;
+ int ret;
+
+ if (fuse_got)
+ return imx8q_fuse;
+
+ ret = imx_scu_get_handle(&ipc);
+ if (ret) {
+ pr_err("error: get scu handle failed: %d\n", ret);
+ return 0;
+ }
+
+ hdr->ver = IMX_SC_RPC_VERSION;
+ hdr->svc = IMX_SC_RPC_SVC_MISC;
+ hdr->func = IMX_SC_MISC_FUNC_OTP_FUSE_READ;
+ hdr->size = 2;
+
+ msg.word = VPU_DISABLE_BITS;
+
+ ret = imx_scu_call_rpc(ipc, &msg, true);
+ if (ret)
+ return 0;
+
+ imx8q_fuse = msg.word;
+ fuse_got = 1;
+ return imx8q_fuse;
+}
+
+bool vpu_imx8q_check_codec(enum vpu_core_type type)
+{
+ u32 fuse = vpu_imx8q_get_fuse();
+
+ if (type == VPU_CORE_TYPE_ENC) {
+ if (fuse & VPU_ENCODER_MASK)
+ return false;
+ } else if (type == VPU_CORE_TYPE_DEC) {
+ fuse >>= VPU_IMX_DECODER_FUSE_OFFSET;
+ fuse &= VPU_DECODER_MASK;
+
+ if (fuse == VPU_DECODER_MASK)
+ return false;
+ }
+ return true;
+}
+
+bool vpu_imx8q_check_fmt(enum vpu_core_type type, u32 pixelfmt)
+{
+ u32 fuse = vpu_imx8q_get_fuse();
+
+ if (type == VPU_CORE_TYPE_DEC) {
+ fuse >>= VPU_IMX_DECODER_FUSE_OFFSET;
+ fuse &= VPU_DECODER_MASK;
+
+ if (fuse == VPU_DECODER_HEVC_MASK && pixelfmt == V4L2_PIX_FMT_HEVC)
+ return false;
+ if (fuse == VPU_DECODER_H264_MASK && pixelfmt == V4L2_PIX_FMT_H264)
+ return false;
+ if (fuse == VPU_DECODER_MASK)
+ return false;
+ }
+
+ return true;
+}
+#else
+bool vpu_imx8q_check_codec(enum vpu_core_type type)
+{
+ return true;
+}
+
+bool vpu_imx8q_check_fmt(enum vpu_core_type type, u32 pixelfmt)
+{
+ return true;
+}
+#endif
diff --git a/drivers/media/platform/amphion/vpu_imx8q.h b/drivers/media/platform/amphion/vpu_imx8q.h
new file mode 100644
index 000000000000..c50d055da233
--- /dev/null
+++ b/drivers/media/platform/amphion/vpu_imx8q.h
@@ -0,0 +1,116 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright 2020-2021 NXP
+ */
+
+#ifndef _AMPHION_VPU_IMX8Q_H
+#define _AMPHION_VPU_IMX8Q_H
+
+#define SCB_XREG_SLV_BASE 0x00000000
+#define SCB_SCB_BLK_CTRL 0x00070000
+#define SCB_BLK_CTRL_XMEM_RESET_SET 0x00000090
+#define SCB_BLK_CTRL_CACHE_RESET_SET 0x000000A0
+#define SCB_BLK_CTRL_CACHE_RESET_CLR 0x000000A4
+#define SCB_BLK_CTRL_SCB_CLK_ENABLE_SET 0x00000100
+
+#define XMEM_CONTROL 0x00041000
+
+#define MC_CACHE_0_BASE 0x00060000
+#define MC_CACHE_1_BASE 0x00068000
+
+#define DEC_MFD_XREG_SLV_BASE 0x00180000
+#define ENC_MFD_XREG_SLV_0_BASE 0x00800000
+#define ENC_MFD_XREG_SLV_1_BASE 0x00A00000
+
+#define MFD_HIF 0x0001C000
+#define MFD_HIF_MSD_REG_INTERRUPT_STATUS 0x00000018
+#define MFD_SIF 0x0001D000
+#define MFD_SIF_CTRL_STATUS 0x000000F0
+#define MFD_SIF_INTR_STATUS 0x000000F4
+#define MFD_MCX 0x00020800
+#define MFD_MCX_OFF 0x00000020
+#define MFD_PIX_IF 0x00020000
+
+#define MFD_BLK_CTRL 0x00030000
+#define MFD_BLK_CTRL_MFD_SYS_RESET_SET 0x00000000
+#define MFD_BLK_CTRL_MFD_SYS_RESET_CLR 0x00000004
+#define MFD_BLK_CTRL_MFD_SYS_CLOCK_ENABLE_SET 0x00000100
+#define MFD_BLK_CTRL_MFD_SYS_CLOCK_ENABLE_CLR 0x00000104
+
+#define VID_API_NUM_STREAMS 8
+#define VID_API_MAX_BUF_PER_STR 3
+#define VID_API_MAX_NUM_MVC_VIEWS 4
+#define MEDIAIP_MAX_NUM_MALONES 2
+#define MEDIAIP_MAX_NUM_MALONE_IRQ_PINS 2
+#define MEDIAIP_MAX_NUM_WINDSORS 1
+#define MEDIAIP_MAX_NUM_WINDSOR_IRQ_PINS 2
+#define MEDIAIP_MAX_NUM_CMD_IRQ_PINS 2
+#define MEDIAIP_MAX_NUM_MSG_IRQ_PINS 1
+#define MEDIAIP_MAX_NUM_TIMER_IRQ_PINS 4
+#define MEDIAIP_MAX_NUM_TIMER_IRQ_SLOTS 4
+
+#define WINDSOR_PAL_IRQ_PIN_L 0x4
+#define WINDSOR_PAL_IRQ_PIN_H 0x5
+
+struct vpu_rpc_system_config {
+ u32 cfg_cookie;
+
+ u32 num_malones;
+ u32 malone_base_addr[MEDIAIP_MAX_NUM_MALONES];
+ u32 hif_offset[MEDIAIP_MAX_NUM_MALONES];
+ u32 malone_irq_pin[MEDIAIP_MAX_NUM_MALONES][MEDIAIP_MAX_NUM_MALONE_IRQ_PINS];
+ u32 malone_irq_target[MEDIAIP_MAX_NUM_MALONES][MEDIAIP_MAX_NUM_MALONE_IRQ_PINS];
+
+ u32 num_windsors;
+ u32 windsor_base_addr[MEDIAIP_MAX_NUM_WINDSORS];
+ u32 windsor_irq_pin[MEDIAIP_MAX_NUM_WINDSORS][MEDIAIP_MAX_NUM_WINDSOR_IRQ_PINS];
+ u32 windsor_irq_target[MEDIAIP_MAX_NUM_WINDSORS][MEDIAIP_MAX_NUM_WINDSOR_IRQ_PINS];
+
+ u32 cmd_irq_pin[MEDIAIP_MAX_NUM_CMD_IRQ_PINS];
+ u32 cmd_irq_target[MEDIAIP_MAX_NUM_CMD_IRQ_PINS];
+
+ u32 msg_irq_pin[MEDIAIP_MAX_NUM_MSG_IRQ_PINS];
+ u32 msg_irq_target[MEDIAIP_MAX_NUM_MSG_IRQ_PINS];
+
+ u32 sys_clk_freq;
+ u32 num_timers;
+ u32 timer_base_addr;
+ u32 timer_irq_pin[MEDIAIP_MAX_NUM_TIMER_IRQ_PINS];
+ u32 timer_irq_target[MEDIAIP_MAX_NUM_TIMER_IRQ_PINS];
+ u32 timer_slots[MEDIAIP_MAX_NUM_TIMER_IRQ_SLOTS];
+
+ u32 gic_base_addr;
+ u32 uart_base_addr;
+
+ u32 dpv_base_addr;
+ u32 dpv_irq_pin;
+ u32 dpv_irq_target;
+
+ u32 pixif_base_addr;
+
+ u32 pal_trace_level;
+ u32 pal_trace_destination;
+
+ u32 pal_trace_level1;
+ u32 pal_trace_destination1;
+
+ u32 uHeapBase;
+ u32 uHeapSize;
+
+ u32 cache_base_addr[2];
+};
+
+int vpu_imx8q_setup_dec(struct vpu_dev *vpu);
+int vpu_imx8q_setup_enc(struct vpu_dev *vpu);
+int vpu_imx8q_setup(struct vpu_dev *vpu);
+int vpu_imx8q_reset(struct vpu_dev *vpu);
+int vpu_imx8q_set_system_cfg_common(struct vpu_rpc_system_config *config,
+ u32 regs, u32 core_id);
+int vpu_imx8q_boot_core(struct vpu_core *core);
+int vpu_imx8q_get_power_state(struct vpu_core *core);
+int vpu_imx8q_on_firmware_loaded(struct vpu_core *core);
+u32 vpu_imx8q_check_memory_region(dma_addr_t base, dma_addr_t addr, u32 size);
+bool vpu_imx8q_check_codec(enum vpu_core_type type);
+bool vpu_imx8q_check_fmt(enum vpu_core_type type, u32 pixelfmt);
+
+#endif
--
2.33.0


2021-11-30 09:49:26

by Ming Qian

Subject: [PATCH v13 05/13] media: amphion: implement vpu core communication based on mailbox

The driver uses mailboxes to communicate with the VPU core.
There are a command buffer and a message buffer.
The driver writes commands to the command buffer,
then triggers a VPU core interrupt;
the VPU core writes messages to the message buffer,
then triggers a CPU interrupt.
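
The handshake above can be sketched in plain userspace C. The ring
layout and every name below are illustrative only, not the driver's
real RPC structures:

```c
#include <assert.h>
#include <stdint.h>

#define BUF_WORDS 64

/* Hypothetical shared buffer: one instance models the command
 * buffer (driver -> core), another the message buffer (core -> driver). */
struct ring {
	uint32_t data[BUF_WORDS];
	uint32_t wptr;		/* producer (writer) index */
	uint32_t rptr;		/* consumer (reader) index */
};

int ring_push(struct ring *r, uint32_t word)
{
	uint32_t next = (r->wptr + 1) % BUF_WORDS;

	if (next == r->rptr)
		return -1;	/* buffer full */
	r->data[r->wptr] = word;
	/*
	 * In the driver a write barrier (mb()) sits here so the payload
	 * is visible before the mailbox doorbell interrupt fires.
	 */
	r->wptr = next;
	return 0;
}

int ring_pop(struct ring *r, uint32_t *word)
{
	if (r->rptr == r->wptr)
		return -1;	/* buffer empty */
	*word = r->data[r->rptr];
	r->rptr = (r->rptr + 1) % BUF_WORDS;
	return 0;
}
```

A command round-trip is then: push the packet into the command ring,
ring the doorbell; the core's reply arrives through the message ring
the same way, followed by a CPU interrupt.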

Signed-off-by: Ming Qian <[email protected]>
Signed-off-by: Shijie Qin <[email protected]>
Signed-off-by: Zhou Peng <[email protected]>
Reported-by: kernel test robot <[email protected]>
---
drivers/media/platform/amphion/vpu_cmds.c | 439 ++++++++++++++++++++++
drivers/media/platform/amphion/vpu_cmds.h | 25 ++
drivers/media/platform/amphion/vpu_mbox.c | 124 ++++++
drivers/media/platform/amphion/vpu_mbox.h | 16 +
drivers/media/platform/amphion/vpu_msgs.c | 414 ++++++++++++++++++++
drivers/media/platform/amphion/vpu_msgs.h | 14 +
6 files changed, 1032 insertions(+)
create mode 100644 drivers/media/platform/amphion/vpu_cmds.c
create mode 100644 drivers/media/platform/amphion/vpu_cmds.h
create mode 100644 drivers/media/platform/amphion/vpu_mbox.c
create mode 100644 drivers/media/platform/amphion/vpu_mbox.h
create mode 100644 drivers/media/platform/amphion/vpu_msgs.c
create mode 100644 drivers/media/platform/amphion/vpu_msgs.h

diff --git a/drivers/media/platform/amphion/vpu_cmds.c b/drivers/media/platform/amphion/vpu_cmds.c
new file mode 100644
index 000000000000..3cfe08f9c19d
--- /dev/null
+++ b/drivers/media/platform/amphion/vpu_cmds.c
@@ -0,0 +1,439 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright 2020-2021 NXP
+ */
+
+#include <linux/init.h>
+#include <linux/interconnect.h>
+#include <linux/ioctl.h>
+#include <linux/list.h>
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/of_device.h>
+#include <linux/of_address.h>
+#include <linux/platform_device.h>
+#include <linux/slab.h>
+#include <linux/types.h>
+#include <linux/delay.h>
+#include <linux/vmalloc.h>
+#include "vpu.h"
+#include "vpu_defs.h"
+#include "vpu_cmds.h"
+#include "vpu_rpc.h"
+#include "vpu_mbox.h"
+
+struct vpu_cmd_request {
+ u32 request;
+ u32 response;
+ u32 handled;
+};
+
+struct vpu_cmd_t {
+ struct list_head list;
+ u32 id;
+ struct vpu_cmd_request *request;
+ struct vpu_rpc_event *pkt;
+ unsigned long key;
+};
+
+static struct vpu_cmd_request vpu_cmd_requests[] = {
+ {
+ .request = VPU_CMD_ID_CONFIGURE_CODEC,
+ .response = VPU_MSG_ID_MEM_REQUEST,
+ .handled = 1,
+ },
+ {
+ .request = VPU_CMD_ID_START,
+ .response = VPU_MSG_ID_START_DONE,
+ .handled = 0,
+ },
+ {
+ .request = VPU_CMD_ID_STOP,
+ .response = VPU_MSG_ID_STOP_DONE,
+ .handled = 0,
+ },
+ {
+ .request = VPU_CMD_ID_ABORT,
+ .response = VPU_MSG_ID_ABORT_DONE,
+ .handled = 0,
+ },
+ {
+ .request = VPU_CMD_ID_RST_BUF,
+ .response = VPU_MSG_ID_BUF_RST,
+ .handled = 1,
+ },
+};
+
+static int vpu_cmd_send(struct vpu_core *core, struct vpu_rpc_event *pkt)
+{
+ int ret = 0;
+
+ WARN_ON(!core || !pkt);
+
+ ret = vpu_iface_send_cmd(core, pkt);
+ if (ret)
+ return ret;
+
+ /* write cmd data to cmd buffer before triggering a cmd interrupt */
+ mb();
+ vpu_mbox_send_type(core, COMMAND);
+
+ return ret;
+}
+
+static struct vpu_cmd_t *vpu_alloc_cmd(struct vpu_inst *inst, u32 id, void *data)
+{
+ struct vpu_cmd_t *cmd;
+ int i;
+ int ret;
+
+ cmd = vzalloc(sizeof(*cmd));
+ if (!cmd)
+ return NULL;
+
+ cmd->pkt = vzalloc(sizeof(*cmd->pkt));
+ if (!cmd->pkt) {
+ vfree(cmd);
+ return NULL;
+ }
+
+ cmd->id = id;
+ ret = vpu_iface_pack_cmd(inst->core, cmd->pkt, inst->id, id, data);
+ if (ret) {
+ dev_err(inst->dev, "iface pack cmd(%d) fail\n", id);
+ vfree(cmd->pkt);
+ vfree(cmd);
+ return NULL;
+ }
+ for (i = 0; i < ARRAY_SIZE(vpu_cmd_requests); i++) {
+ if (vpu_cmd_requests[i].request == id) {
+ cmd->request = &vpu_cmd_requests[i];
+ break;
+ }
+ }
+
+ return cmd;
+}
+
+static void vpu_free_cmd(struct vpu_cmd_t *cmd)
+{
+ if (!cmd)
+ return;
+ if (cmd->pkt)
+ vfree(cmd->pkt);
+ vfree(cmd);
+}
+
+static int vpu_session_process_cmd(struct vpu_inst *inst, struct vpu_cmd_t *cmd)
+{
+ int ret;
+
+ if (!inst || !cmd || !cmd->pkt)
+ return -EINVAL;
+
+ dev_dbg(inst->dev, "[%d]send cmd(0x%x)\n", inst->id, cmd->id);
+ vpu_iface_pre_send_cmd(inst);
+ ret = vpu_cmd_send(inst->core, cmd->pkt);
+ if (!ret) {
+ vpu_iface_post_send_cmd(inst);
+ vpu_inst_record_flow(inst, cmd->id);
+ } else
+ dev_err(inst->dev, "[%d] iface send cmd(0x%x) fail\n", inst->id, cmd->id);
+
+ return ret;
+}
+
+static void vpu_process_cmd_request(struct vpu_inst *inst)
+{
+ struct vpu_cmd_t *cmd;
+ struct vpu_cmd_t *tmp;
+
+ if (!inst || inst->pending)
+ return;
+
+ list_for_each_entry_safe(cmd, tmp, &inst->cmd_q, list) {
+ list_del_init(&cmd->list);
+ if (vpu_session_process_cmd(inst, cmd))
+ dev_err(inst->dev, "[%d] process cmd(%d) fail\n", inst->id, cmd->id);
+ if (cmd->request) {
+ inst->pending = (void *)cmd;
+ break;
+ }
+ vpu_free_cmd(cmd);
+ }
+}
+
+static int vpu_request_cmd(struct vpu_inst *inst, u32 id, void *data,
+ unsigned long *key, int *sync)
+{
+ struct vpu_core *core;
+ struct vpu_cmd_t *cmd;
+
+ if (!inst || !inst->core)
+ return -EINVAL;
+
+ core = inst->core;
+ cmd = vpu_alloc_cmd(inst, id, data);
+ if (!cmd)
+ return -ENOMEM;
+
+ mutex_lock(&core->cmd_lock);
+ cmd->key = core->cmd_seq++;
+ if (key)
+ *key = cmd->key;
+ if (sync)
+ *sync = cmd->request ? true : false;
+ list_add_tail(&cmd->list, &inst->cmd_q);
+ vpu_process_cmd_request(inst);
+ mutex_unlock(&core->cmd_lock);
+
+ return 0;
+}
+
+static void vpu_clear_pending(struct vpu_inst *inst)
+{
+ if (!inst || !inst->pending)
+ return;
+
+ vpu_free_cmd(inst->pending);
+ wake_up_all(&inst->core->ack_wq);
+ inst->pending = NULL;
+}
+
+static bool vpu_check_response(struct vpu_cmd_t *cmd, u32 response, u32 handled)
+{
+ struct vpu_cmd_request *request;
+
+ if (!cmd || !cmd->request)
+ return false;
+
+ request = cmd->request;
+ if (request->response != response)
+ return false;
+ if (request->handled != handled)
+ return false;
+
+ return true;
+}
+
+int vpu_response_cmd(struct vpu_inst *inst, u32 response, u32 handled)
+{
+ struct vpu_core *core;
+
+ if (!inst || !inst->core)
+ return -EINVAL;
+
+ core = inst->core;
+ mutex_lock(&core->cmd_lock);
+ if (vpu_check_response(inst->pending, response, handled))
+ vpu_clear_pending(inst);
+
+ vpu_process_cmd_request(inst);
+ mutex_unlock(&core->cmd_lock);
+
+ return 0;
+}
+
+void vpu_clear_request(struct vpu_inst *inst)
+{
+ struct vpu_cmd_t *cmd;
+ struct vpu_cmd_t *tmp;
+
+ mutex_lock(&inst->core->cmd_lock);
+ if (inst->pending)
+ vpu_clear_pending(inst);
+
+ list_for_each_entry_safe(cmd, tmp, &inst->cmd_q, list) {
+ list_del_init(&cmd->list);
+ vpu_free_cmd(cmd);
+ }
+ mutex_unlock(&inst->core->cmd_lock);
+}
+
+static bool check_is_responsed(struct vpu_inst *inst, unsigned long key)
+{
+ struct vpu_core *core = inst->core;
+ struct vpu_cmd_t *cmd;
+ bool flag = true;
+
+ mutex_lock(&core->cmd_lock);
+ cmd = inst->pending;
+ if (cmd && key == cmd->key) {
+ flag = false;
+ goto exit;
+ }
+ list_for_each_entry(cmd, &inst->cmd_q, list) {
+ if (key == cmd->key) {
+ flag = false;
+ break;
+ }
+ }
+exit:
+ mutex_unlock(&core->cmd_lock);
+
+ return flag;
+}
+
+static int sync_session_response(struct vpu_inst *inst, unsigned long key)
+{
+ struct vpu_core *core;
+
+ if (!inst || !inst->core)
+ return -EINVAL;
+
+ core = inst->core;
+
+ call_vop(inst, wait_prepare);
+ wait_event_timeout(core->ack_wq,
+ check_is_responsed(inst, key),
+ VPU_TIMEOUT);
+ call_vop(inst, wait_finish);
+
+ if (!check_is_responsed(inst, key)) {
+ dev_err(inst->dev, "[%d] sync session timeout\n", inst->id);
+ set_bit(inst->id, &core->hang_mask);
+ mutex_lock(&inst->core->cmd_lock);
+ vpu_clear_pending(inst);
+ mutex_unlock(&inst->core->cmd_lock);
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+static int vpu_session_send_cmd(struct vpu_inst *inst, u32 id, void *data)
+{
+ unsigned long key;
+ int sync = false;
+ int ret = -EINVAL;
+
+ WARN_ON(!inst || !inst->core || inst->id < 0);
+
+ ret = vpu_request_cmd(inst, id, data, &key, &sync);
+ if (!ret && sync)
+ ret = sync_session_response(inst, key);
+
+ if (ret)
+ dev_err(inst->dev, "[%d] send cmd(0x%x) fail\n", inst->id, id);
+
+ return ret;
+}
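The synchronous path above pairs every queued command with a monotonically increasing key; the sender then sleeps until neither the pending slot nor the queue still holds that key (compare `check_is_responsed()`). Below is a minimal single-threaded userspace model of that handshake; all names and the one-command-in-flight simplification are illustrative, not the driver's actual implementation:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

#define MAX_CMDS 8

struct cmd {
	unsigned long key;
	bool needs_response;	/* mirrors cmd->request being non-NULL */
};

struct cmd_queue {
	struct cmd slots[MAX_CMDS];
	size_t count;		/* queued, not yet acknowledged */
	struct cmd *pending;	/* handled, waiting for firmware ack */
	unsigned long seq;	/* mirrors core->cmd_seq */
};

/* queue a command and hand back its key, as vpu_request_cmd() does */
static unsigned long cmd_submit(struct cmd_queue *q, bool needs_response)
{
	struct cmd *c = &q->slots[q->count++];

	c->key = q->seq++;
	c->needs_response = needs_response;
	return c->key;
}

/*
 * Process the head of the queue; a command that needs a response
 * becomes "pending" and blocks further processing, the same stall
 * vpu_process_cmd_request() implements. Only one in-flight command is
 * modelled here, so the queue is not shifted.
 */
static void cmd_process_one(struct cmd_queue *q)
{
	if (q->pending || !q->count)
		return;
	q->pending = &q->slots[0];
}

/* firmware response clears the pending slot, like vpu_clear_pending() */
static void cmd_ack(struct cmd_queue *q, unsigned long key)
{
	if (q->pending && q->pending->key == key) {
		q->pending = NULL;
		q->count--;
	}
}

/* true once neither the pending slot nor the queue holds the key */
static bool cmd_is_responded(const struct cmd_queue *q, unsigned long key)
{
	size_t i;

	if (q->pending && q->pending->key == key)
		return false;
	for (i = 0; i < q->count; i++)
		if (q->slots[i].key == key)
			return false;
	return true;
}
```

In the driver the same predicate is evaluated inside `wait_event_timeout()`, so the waiter wakes as soon as the key disappears from both places.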
+
+int vpu_session_configure_codec(struct vpu_inst *inst)
+{
+ return vpu_session_send_cmd(inst, VPU_CMD_ID_CONFIGURE_CODEC, NULL);
+}
+
+int vpu_session_start(struct vpu_inst *inst)
+{
+ vpu_trace(inst->dev, "[%d]\n", inst->id);
+
+ return vpu_session_send_cmd(inst, VPU_CMD_ID_START, NULL);
+}
+
+int vpu_session_stop(struct vpu_inst *inst)
+{
+ int ret;
+
+ vpu_trace(inst->dev, "[%d]\n", inst->id);
+
+ ret = vpu_session_send_cmd(inst, VPU_CMD_ID_STOP, NULL);
+	/*
+	 * Workaround for a firmware bug: if the next command follows
+	 * the stop command too closely, the firmware may wrongly enter
+	 * the WFI (wait-for-interrupt) state.
+	 */
+ usleep_range(3000, 5000);
+ return ret;
+}
+
+int vpu_session_encode_frame(struct vpu_inst *inst, s64 timestamp)
+{
+ return vpu_session_send_cmd(inst, VPU_CMD_ID_FRAME_ENCODE, &timestamp);
+}
+
+int vpu_session_alloc_fs(struct vpu_inst *inst, struct vpu_fs_info *fs)
+{
+ return vpu_session_send_cmd(inst, VPU_CMD_ID_FS_ALLOC, fs);
+}
+
+int vpu_session_release_fs(struct vpu_inst *inst, struct vpu_fs_info *fs)
+{
+ return vpu_session_send_cmd(inst, VPU_CMD_ID_FS_RELEASE, fs);
+}
+
+int vpu_session_abort(struct vpu_inst *inst)
+{
+ return vpu_session_send_cmd(inst, VPU_CMD_ID_ABORT, NULL);
+}
+
+int vpu_session_rst_buf(struct vpu_inst *inst)
+{
+ return vpu_session_send_cmd(inst, VPU_CMD_ID_RST_BUF, NULL);
+}
+
+int vpu_session_fill_timestamp(struct vpu_inst *inst, struct vpu_ts_info *info)
+{
+ return vpu_session_send_cmd(inst, VPU_CMD_ID_TIMESTAMP, info);
+}
+
+int vpu_session_update_parameters(struct vpu_inst *inst, void *arg)
+{
+ if (inst->type & VPU_CORE_TYPE_DEC)
+ vpu_iface_set_decode_params(inst, arg, 1);
+ else
+ vpu_iface_set_encode_params(inst, arg, 1);
+
+ return vpu_session_send_cmd(inst, VPU_CMD_ID_UPDATE_PARAMETER, arg);
+}
+
+int vpu_session_debug(struct vpu_inst *inst)
+{
+ return vpu_session_send_cmd(inst, VPU_CMD_ID_DEBUG, NULL);
+}
+
+int vpu_core_snapshot(struct vpu_core *core)
+{
+ struct vpu_inst *inst;
+ int ret;
+
+ WARN_ON(!core || list_empty(&core->instances));
+
+ inst = list_first_entry(&core->instances, struct vpu_inst, list);
+
+ reinit_completion(&core->cmp);
+ ret = vpu_session_send_cmd(inst, VPU_CMD_ID_SNAPSHOT, NULL);
+ if (ret)
+ return ret;
+ ret = wait_for_completion_timeout(&core->cmp, VPU_TIMEOUT);
+ if (!ret) {
+ dev_err(core->dev, "snapshot timeout\n");
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+int vpu_core_sw_reset(struct vpu_core *core)
+{
+ struct vpu_rpc_event pkt;
+ int ret;
+
+ WARN_ON(!core);
+
+ memset(&pkt, 0, sizeof(pkt));
+ vpu_iface_pack_cmd(core, &pkt, 0, VPU_CMD_ID_FIRM_RESET, NULL);
+
+ reinit_completion(&core->cmp);
+ mutex_lock(&core->cmd_lock);
+ ret = vpu_cmd_send(core, &pkt);
+ mutex_unlock(&core->cmd_lock);
+ if (ret)
+ return ret;
+ ret = wait_for_completion_timeout(&core->cmp, VPU_TIMEOUT);
+ if (!ret) {
+ dev_err(core->dev, "sw reset timeout\n");
+ return -EINVAL;
+ }
+
+ return 0;
+}
diff --git a/drivers/media/platform/amphion/vpu_cmds.h b/drivers/media/platform/amphion/vpu_cmds.h
new file mode 100644
index 000000000000..bc538d277bc9
--- /dev/null
+++ b/drivers/media/platform/amphion/vpu_cmds.h
@@ -0,0 +1,25 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright 2020-2021 NXP
+ */
+
+#ifndef _AMPHION_VPU_CMDS_H
+#define _AMPHION_VPU_CMDS_H
+
+int vpu_session_configure_codec(struct vpu_inst *inst);
+int vpu_session_start(struct vpu_inst *inst);
+int vpu_session_stop(struct vpu_inst *inst);
+int vpu_session_abort(struct vpu_inst *inst);
+int vpu_session_rst_buf(struct vpu_inst *inst);
+int vpu_session_encode_frame(struct vpu_inst *inst, s64 timestamp);
+int vpu_session_alloc_fs(struct vpu_inst *inst, struct vpu_fs_info *fs);
+int vpu_session_release_fs(struct vpu_inst *inst, struct vpu_fs_info *fs);
+int vpu_session_fill_timestamp(struct vpu_inst *inst, struct vpu_ts_info *info);
+int vpu_session_update_parameters(struct vpu_inst *inst, void *arg);
+int vpu_core_snapshot(struct vpu_core *core);
+int vpu_core_sw_reset(struct vpu_core *core);
+int vpu_response_cmd(struct vpu_inst *inst, u32 response, u32 handled);
+void vpu_clear_request(struct vpu_inst *inst);
+int vpu_session_debug(struct vpu_inst *inst);
+
+#endif
diff --git a/drivers/media/platform/amphion/vpu_mbox.c b/drivers/media/platform/amphion/vpu_mbox.c
new file mode 100644
index 000000000000..87f8743bedea
--- /dev/null
+++ b/drivers/media/platform/amphion/vpu_mbox.c
@@ -0,0 +1,124 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright 2020-2021 NXP
+ */
+
+#include <linux/init.h>
+#include <linux/interconnect.h>
+#include <linux/ioctl.h>
+#include <linux/list.h>
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/of_device.h>
+#include <linux/of_address.h>
+#include <linux/platform_device.h>
+#include "vpu.h"
+#include "vpu_mbox.h"
+#include "vpu_msgs.h"
+
+static void vpu_mbox_rx_callback(struct mbox_client *cl, void *msg)
+{
+ struct vpu_mbox *rx = container_of(cl, struct vpu_mbox, cl);
+ struct vpu_core *core = container_of(rx, struct vpu_core, rx);
+
+ vpu_isr(core, *(u32 *)msg);
+}
+
+static int vpu_mbox_request_channel(struct device *dev, struct vpu_mbox *mbox)
+{
+ struct mbox_chan *ch;
+ struct mbox_client *cl;
+
+ if (!dev || !mbox)
+ return -EINVAL;
+ if (mbox->ch)
+ return 0;
+
+ cl = &mbox->cl;
+ cl->dev = dev;
+ if (mbox->block) {
+ cl->tx_block = true;
+ cl->tx_tout = 1000;
+ } else {
+ cl->tx_block = false;
+ }
+ cl->knows_txdone = false;
+ cl->rx_callback = vpu_mbox_rx_callback;
+
+ ch = mbox_request_channel_byname(cl, mbox->name);
+ if (IS_ERR(ch)) {
+ dev_err(dev, "Failed to request mbox chan %s, ret : %ld\n",
+ mbox->name, PTR_ERR(ch));
+ return PTR_ERR(ch);
+ }
+
+ mbox->ch = ch;
+ return 0;
+}
+
+int vpu_mbox_init(struct vpu_core *core)
+{
+ WARN_ON(!core);
+
+ scnprintf(core->tx_type.name, sizeof(core->tx_type.name) - 1, "tx0");
+ core->tx_type.block = true;
+
+ scnprintf(core->tx_data.name, sizeof(core->tx_data.name) - 1, "tx1");
+ core->tx_data.block = false;
+
+ scnprintf(core->rx.name, sizeof(core->rx.name) - 1, "rx");
+ core->rx.block = true;
+
+ return 0;
+}
+
+int vpu_mbox_request(struct vpu_core *core)
+{
+ int ret;
+
+ WARN_ON(!core);
+
+ ret = vpu_mbox_request_channel(core->dev, &core->tx_type);
+ if (ret)
+ goto error;
+ ret = vpu_mbox_request_channel(core->dev, &core->tx_data);
+ if (ret)
+ goto error;
+ ret = vpu_mbox_request_channel(core->dev, &core->rx);
+ if (ret)
+ goto error;
+
+ dev_dbg(core->dev, "%s request mbox\n", vpu_core_type_desc(core->type));
+ return 0;
+error:
+ vpu_mbox_free(core);
+ return ret;
+}
+
+void vpu_mbox_free(struct vpu_core *core)
+{
+ WARN_ON(!core);
+
+ mbox_free_channel(core->tx_type.ch);
+ mbox_free_channel(core->tx_data.ch);
+ mbox_free_channel(core->rx.ch);
+ core->tx_type.ch = NULL;
+ core->tx_data.ch = NULL;
+ core->rx.ch = NULL;
+ dev_dbg(core->dev, "%s free mbox\n", vpu_core_type_desc(core->type));
+}
+
+void vpu_mbox_send_type(struct vpu_core *core, u32 type)
+{
+ mbox_send_message(core->tx_type.ch, &type);
+}
+
+void vpu_mbox_send_msg(struct vpu_core *core, u32 type, u32 data)
+{
+ mbox_send_message(core->tx_data.ch, &data);
+ mbox_send_message(core->tx_type.ch, &type);
+}
+
+void vpu_mbox_enable_rx(struct vpu_dev *dev)
+{
+}
diff --git a/drivers/media/platform/amphion/vpu_mbox.h b/drivers/media/platform/amphion/vpu_mbox.h
new file mode 100644
index 000000000000..79cfd874e92b
--- /dev/null
+++ b/drivers/media/platform/amphion/vpu_mbox.h
@@ -0,0 +1,16 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright 2020-2021 NXP
+ */
+
+#ifndef _AMPHION_VPU_MBOX_H
+#define _AMPHION_VPU_MBOX_H
+
+int vpu_mbox_init(struct vpu_core *core);
+int vpu_mbox_request(struct vpu_core *core);
+void vpu_mbox_free(struct vpu_core *core);
+void vpu_mbox_send_msg(struct vpu_core *core, u32 type, u32 data);
+void vpu_mbox_send_type(struct vpu_core *core, u32 type);
+void vpu_mbox_enable_rx(struct vpu_dev *dev);
+
+#endif
diff --git a/drivers/media/platform/amphion/vpu_msgs.c b/drivers/media/platform/amphion/vpu_msgs.c
new file mode 100644
index 000000000000..34d3da4d1a57
--- /dev/null
+++ b/drivers/media/platform/amphion/vpu_msgs.c
@@ -0,0 +1,414 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright 2020-2021 NXP
+ */
+
+#include <linux/init.h>
+#include <linux/interconnect.h>
+#include <linux/ioctl.h>
+#include <linux/list.h>
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include "vpu.h"
+#include "vpu_core.h"
+#include "vpu_rpc.h"
+#include "vpu_mbox.h"
+#include "vpu_defs.h"
+#include "vpu_cmds.h"
+#include "vpu_msgs.h"
+#include "vpu_v4l2.h"
+
+#define VPU_PKT_HEADER_LENGTH 3
+
+struct vpu_msg_handler {
+ u32 id;
+ void (*done)(struct vpu_inst *inst, struct vpu_rpc_event *pkt);
+};
+
+static void vpu_session_handle_start_done(struct vpu_inst *inst, struct vpu_rpc_event *pkt)
+{
+ WARN_ON(!inst || !inst->core);
+
+ vpu_trace(inst->dev, "[%d]\n", inst->id);
+}
+
+static void vpu_session_handle_mem_request(struct vpu_inst *inst, struct vpu_rpc_event *pkt)
+{
+ struct vpu_pkt_mem_req_data req_data;
+
+ WARN_ON(!inst || !inst->core || !inst->ops);
+
+ vpu_iface_unpack_msg_data(inst->core, pkt, (void *)&req_data);
+ vpu_trace(inst->dev, "[%d] %d:%d %d:%d %d:%d\n",
+ inst->id,
+ req_data.enc_frame_size,
+ req_data.enc_frame_num,
+ req_data.ref_frame_size,
+ req_data.ref_frame_num,
+ req_data.act_buf_size,
+ req_data.act_buf_num);
+ call_vop(inst, mem_request,
+ req_data.enc_frame_size,
+ req_data.enc_frame_num,
+ req_data.ref_frame_size,
+ req_data.ref_frame_num,
+ req_data.act_buf_size,
+ req_data.act_buf_num);
+}
+
+static void vpu_session_handle_stop_done(struct vpu_inst *inst, struct vpu_rpc_event *pkt)
+{
+ WARN_ON(!inst || !inst->core);
+
+ vpu_trace(inst->dev, "[%d]\n", inst->id);
+
+ call_vop(inst, stop_done);
+}
+
+static void vpu_session_handle_seq_hdr(struct vpu_inst *inst, struct vpu_rpc_event *pkt)
+{
+ struct vpu_dec_codec_info info;
+ const struct vpu_core_resources *res;
+
+ WARN_ON(!inst || !inst->core);
+
+ memset(&info, 0, sizeof(info));
+ res = vpu_get_resource(inst);
+ info.stride = res ? res->stride : 1;
+ vpu_iface_unpack_msg_data(inst->core, pkt, (void *)&info);
+ call_vop(inst, event_notify, VPU_MSG_ID_SEQ_HDR_FOUND, &info);
+}
+
+static void vpu_session_handle_resolution_change(struct vpu_inst *inst, struct vpu_rpc_event *pkt)
+{
+ WARN_ON(!inst || !inst->core);
+
+ call_vop(inst, event_notify, VPU_MSG_ID_RES_CHANGE, NULL);
+}
+
+static void vpu_session_handle_enc_frame_done(struct vpu_inst *inst, struct vpu_rpc_event *pkt)
+{
+ struct vpu_enc_pic_info info;
+
+ WARN_ON(!inst || !inst->core);
+
+ vpu_iface_unpack_msg_data(inst->core, pkt, (void *)&info);
+ dev_dbg(inst->dev, "[%d] frame id = %d, wptr = 0x%x, size = %d\n",
+ inst->id, info.frame_id, info.wptr, info.frame_size);
+ call_vop(inst, get_one_frame, &info);
+}
+
+static void vpu_session_handle_frame_request(struct vpu_inst *inst, struct vpu_rpc_event *pkt)
+{
+ struct vpu_fs_info fs;
+
+ vpu_iface_unpack_msg_data(inst->core, pkt, &fs);
+ call_vop(inst, event_notify, VPU_MSG_ID_FRAME_REQ, &fs);
+}
+
+static void vpu_session_handle_frame_release(struct vpu_inst *inst, struct vpu_rpc_event *pkt)
+{
+ WARN_ON(!inst || !inst->core);
+
+ if (inst->core->type == VPU_CORE_TYPE_ENC) {
+ struct vpu_frame_info info;
+
+ memset(&info, 0, sizeof(info));
+ vpu_iface_unpack_msg_data(inst->core, pkt, (void *)&info.sequence);
+ dev_dbg(inst->dev, "[%d] %d\n", inst->id, info.sequence);
+ info.type = inst->out_format.type;
+ call_vop(inst, buf_done, &info);
+ } else if (inst->core->type == VPU_CORE_TYPE_DEC) {
+ struct vpu_fs_info fs;
+
+ vpu_iface_unpack_msg_data(inst->core, pkt, &fs);
+ call_vop(inst, event_notify, VPU_MSG_ID_FRAME_RELEASE, &fs);
+ }
+}
+
+static void vpu_session_handle_input_done(struct vpu_inst *inst, struct vpu_rpc_event *pkt)
+{
+ WARN_ON(!inst || !inst->core);
+
+ dev_dbg(inst->dev, "[%d]\n", inst->id);
+ call_vop(inst, input_done);
+}
+
+static void vpu_session_handle_pic_decoded(struct vpu_inst *inst, struct vpu_rpc_event *pkt)
+{
+ struct vpu_dec_pic_info info;
+
+ WARN_ON(!inst || !inst->core);
+
+ vpu_iface_unpack_msg_data(inst->core, pkt, (void *)&info);
+ call_vop(inst, get_one_frame, &info);
+}
+
+static void vpu_session_handle_pic_done(struct vpu_inst *inst, struct vpu_rpc_event *pkt)
+{
+ struct vpu_dec_pic_info info;
+ struct vpu_frame_info frame;
+
+ WARN_ON(!inst || !inst->core);
+
+ memset(&frame, 0, sizeof(frame));
+ vpu_iface_unpack_msg_data(inst->core, pkt, (void *)&info);
+ if (inst->core->type == VPU_CORE_TYPE_DEC)
+ frame.type = inst->cap_format.type;
+ frame.id = info.id;
+ frame.luma = info.luma;
+ frame.skipped = info.skipped;
+ frame.timestamp = info.timestamp;
+
+ call_vop(inst, buf_done, &frame);
+}
+
+static void vpu_session_handle_eos(struct vpu_inst *inst, struct vpu_rpc_event *pkt)
+{
+ call_vop(inst, event_notify, VPU_MSG_ID_PIC_EOS, NULL);
+}
+
+static void vpu_session_handle_error(struct vpu_inst *inst, struct vpu_rpc_event *pkt)
+{
+ dev_err(inst->dev, "unsupported stream\n");
+ call_vop(inst, event_notify, VPU_MSG_ID_UNSUPPORTED, NULL);
+ vpu_v4l2_set_error(inst);
+}
+
+static void vpu_session_handle_firmware_xcpt(struct vpu_inst *inst, struct vpu_rpc_event *pkt)
+{
+ char *str = (char *)pkt->data;
+
+ dev_err(inst->dev, "%s firmware xcpt: %s\n",
+ vpu_core_type_desc(inst->core->type), str);
+ call_vop(inst, event_notify, VPU_MSG_ID_FIRMWARE_XCPT, NULL);
+ set_bit(inst->id, &inst->core->hang_mask);
+ vpu_v4l2_set_error(inst);
+}
+
+static const struct vpu_msg_handler handlers[] = {
+ {VPU_MSG_ID_START_DONE, vpu_session_handle_start_done},
+ {VPU_MSG_ID_STOP_DONE, vpu_session_handle_stop_done},
+ {VPU_MSG_ID_MEM_REQUEST, vpu_session_handle_mem_request},
+ {VPU_MSG_ID_SEQ_HDR_FOUND, vpu_session_handle_seq_hdr},
+ {VPU_MSG_ID_RES_CHANGE, vpu_session_handle_resolution_change},
+ {VPU_MSG_ID_FRAME_INPUT_DONE, vpu_session_handle_input_done},
+ {VPU_MSG_ID_FRAME_REQ, vpu_session_handle_frame_request},
+ {VPU_MSG_ID_FRAME_RELEASE, vpu_session_handle_frame_release},
+ {VPU_MSG_ID_ENC_DONE, vpu_session_handle_enc_frame_done},
+ {VPU_MSG_ID_PIC_DECODED, vpu_session_handle_pic_decoded},
+ {VPU_MSG_ID_DEC_DONE, vpu_session_handle_pic_done},
+ {VPU_MSG_ID_PIC_EOS, vpu_session_handle_eos},
+ {VPU_MSG_ID_UNSUPPORTED, vpu_session_handle_error},
+ {VPU_MSG_ID_FIRMWARE_XCPT, vpu_session_handle_firmware_xcpt},
+};
+
+static int vpu_session_handle_msg(struct vpu_inst *inst, struct vpu_rpc_event *msg)
+{
+ int ret;
+ u32 msg_id;
+	const struct vpu_msg_handler *handler = NULL;
+ unsigned int i;
+
+ ret = vpu_iface_convert_msg_id(inst->core, msg->hdr.id);
+ if (ret < 0)
+ return -EINVAL;
+
+ msg_id = ret;
+ dev_dbg(inst->dev, "[%d] receive event(0x%x)\n", inst->id, msg_id);
+
+ for (i = 0; i < ARRAY_SIZE(handlers); i++) {
+ if (handlers[i].id == msg_id) {
+ handler = &handlers[i];
+ break;
+ }
+ }
+
+ if (handler && handler->done)
+ handler->done(inst, msg);
+
+ vpu_response_cmd(inst, msg_id, 1);
+
+ return 0;
+}
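Message dispatch above is a linear scan over an id/callback table, silently ignoring ids with no handler. The same pattern as a small standalone sketch; the ids, handlers, and return values here are made up for illustration:

```c
#include <assert.h>
#include <stddef.h>

/* illustrative message ids, not the driver's real VPU_MSG_ID_* values */
enum { MSG_START_DONE = 1, MSG_STOP_DONE = 2, MSG_UNKNOWN = 99 };

struct msg_handler {
	int id;
	int (*done)(int id);
};

static int handle_start_done(int id) { return 100 + id; }
static int handle_stop_done(int id)  { return 200 + id; }

static const struct msg_handler handlers[] = {
	{ MSG_START_DONE, handle_start_done },
	{ MSG_STOP_DONE,  handle_stop_done },
};

/* scan the table; return the handler result, or -1 when unhandled */
static int dispatch(int id)
{
	size_t i;

	for (i = 0; i < sizeof(handlers) / sizeof(handlers[0]); i++)
		if (handlers[i].id == id && handlers[i].done)
			return handlers[i].done(id);
	return -1;
}
```

A flat table like this keeps adding a new message a one-line change, at the cost of an O(n) lookup that is negligible for a dozen entries.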
+
+static bool vpu_inst_receive_msg(struct vpu_inst *inst, struct vpu_rpc_event *pkt)
+{
+ u32 bytes = sizeof(struct vpu_rpc_event_header);
+ u32 ret;
+
+ memset(pkt, 0, sizeof(*pkt));
+ if (kfifo_len(&inst->msg_fifo) < bytes)
+ return false;
+
+ ret = kfifo_out(&inst->msg_fifo, pkt, bytes);
+ if (ret != bytes)
+ return false;
+
+ if (pkt->hdr.num > 0) {
+ bytes = pkt->hdr.num * sizeof(u32);
+ ret = kfifo_out(&inst->msg_fifo, pkt->data, bytes);
+ if (ret != bytes)
+ return false;
+ }
+
+ return true;
+}
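Receiving a message is a two-step read from the byte fifo: a fixed-size header first, then `hdr.num` 32-bit payload words, failing when either read comes up short. A hedged userspace sketch of that framing, with a flat buffer and read offset standing in for the kernel kfifo:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

/* simplified stand-in for struct vpu_rpc_event_header */
struct hdr {
	uint32_t id;
	uint32_t num;	/* number of 32-bit payload words */
};

struct byte_fifo {
	const uint8_t *buf;
	size_t len;
	size_t rd;
};

/* pop exactly n bytes, or fail without consuming anything */
static bool fifo_out(struct byte_fifo *f, void *dst, size_t n)
{
	if (f->len - f->rd < n)
		return false;
	memcpy(dst, f->buf + f->rd, n);
	f->rd += n;
	return true;
}

/* header first, then the variable-length payload it announces */
static bool receive_msg(struct byte_fifo *f, struct hdr *h, uint32_t *payload)
{
	if (!fifo_out(f, h, sizeof(*h)))
		return false;
	if (h->num && !fifo_out(f, payload, h->num * sizeof(uint32_t)))
		return false;
	return true;
}
```

The producer side (`vpu_inst_handle_msg()`) pushes header and payload as one `kfifo_in()`, so a successful header read in the consumer implies the payload bytes are already there.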
+
+void vpu_inst_run_work(struct work_struct *work)
+{
+ struct vpu_inst *inst = container_of(work, struct vpu_inst, msg_work);
+ struct vpu_rpc_event pkt;
+
+ while (vpu_inst_receive_msg(inst, &pkt))
+ vpu_session_handle_msg(inst, &pkt);
+}
+
+static void vpu_inst_handle_msg(struct vpu_inst *inst, struct vpu_rpc_event *pkt)
+{
+ u32 bytes;
+ u32 id = pkt->hdr.id;
+ int ret;
+
+ if (!inst->workqueue) {
+ vpu_session_handle_msg(inst, pkt);
+ return;
+ }
+
+ bytes = sizeof(pkt->hdr) + pkt->hdr.num * sizeof(u32);
+ ret = kfifo_in(&inst->msg_fifo, pkt, bytes);
+ if (ret != bytes)
+		dev_err(inst->dev, "[%d:%d] overflow: %d\n", inst->core->id, inst->id, id);
+ queue_work(inst->workqueue, &inst->msg_work);
+}
+
+static int vpu_handle_msg(struct vpu_core *core)
+{
+ struct vpu_rpc_event pkt;
+ struct vpu_inst *inst;
+ int ret;
+
+ memset(&pkt, 0, sizeof(pkt));
+ while (!vpu_iface_receive_msg(core, &pkt)) {
+ dev_dbg(core->dev, "event index = %d, id = %d, num = %d\n",
+ pkt.hdr.index, pkt.hdr.id, pkt.hdr.num);
+
+ ret = vpu_iface_convert_msg_id(core, pkt.hdr.id);
+ if (ret < 0)
+ continue;
+
+ inst = vpu_core_find_instance(core, pkt.hdr.index);
+ if (inst) {
+ vpu_response_cmd(inst, ret, 0);
+ mutex_lock(&core->cmd_lock);
+ vpu_inst_record_flow(inst, ret);
+ mutex_unlock(&core->cmd_lock);
+
+ vpu_inst_handle_msg(inst, &pkt);
+ vpu_inst_put(inst);
+ }
+ memset(&pkt, 0, sizeof(pkt));
+ }
+
+ return 0;
+}
+
+static int vpu_isr_thread(struct vpu_core *core, u32 irq_code)
+{
+ WARN_ON(!core);
+
+ dev_dbg(core->dev, "irq code = 0x%x\n", irq_code);
+ switch (irq_code) {
+ case VPU_IRQ_CODE_SYNC:
+ vpu_mbox_send_msg(core, PRC_BUF_OFFSET, core->rpc.phys - core->fw.phys);
+ vpu_mbox_send_msg(core, BOOT_ADDRESS, core->fw.phys);
+ vpu_mbox_send_msg(core, INIT_DONE, 2);
+ break;
+ case VPU_IRQ_CODE_BOOT_DONE:
+ break;
+ case VPU_IRQ_CODE_SNAPSHOT_DONE:
+ break;
+ default:
+ vpu_handle_msg(core);
+ break;
+ }
+
+ return 0;
+}
+
+static void vpu_core_run_msg_work(struct vpu_core *core)
+{
+ const unsigned int SIZE = sizeof(u32);
+
+ while (kfifo_len(&core->msg_fifo) >= SIZE) {
+ u32 data;
+
+ if (kfifo_out(&core->msg_fifo, &data, SIZE) == SIZE)
+ vpu_isr_thread(core, data);
+ }
+}
+
+void vpu_msg_run_work(struct work_struct *work)
+{
+ struct vpu_core *core = container_of(work, struct vpu_core, msg_work);
+ unsigned long delay = msecs_to_jiffies(10);
+
+ vpu_core_run_msg_work(core);
+ queue_delayed_work(core->workqueue, &core->msg_delayed_work, delay);
+}
+
+void vpu_msg_delayed_work(struct work_struct *work)
+{
+ struct vpu_core *core;
+ struct delayed_work *dwork;
+ u32 bytes = sizeof(bytes);
+ u32 i;
+
+ if (!work)
+ return;
+
+ dwork = to_delayed_work(work);
+ core = container_of(dwork, struct vpu_core, msg_delayed_work);
+ if (kfifo_len(&core->msg_fifo) >= bytes)
+ vpu_core_run_msg_work(core);
+
+ bytes = sizeof(struct vpu_rpc_event_header);
+ for (i = 0; i < core->supported_instance_count; i++) {
+ struct vpu_inst *inst = vpu_core_find_instance(core, i);
+
+ if (!inst)
+ continue;
+
+ if (inst->workqueue && kfifo_len(&inst->msg_fifo) >= bytes)
+ queue_work(inst->workqueue, &inst->msg_work);
+
+ vpu_inst_put(inst);
+ }
+}
+
+int vpu_isr(struct vpu_core *core, u32 irq)
+{
+ WARN_ON(!core);
+
+ switch (irq) {
+ case VPU_IRQ_CODE_SYNC:
+ break;
+ case VPU_IRQ_CODE_BOOT_DONE:
+ complete(&core->cmp);
+ break;
+ case VPU_IRQ_CODE_SNAPSHOT_DONE:
+ complete(&core->cmp);
+ break;
+ default:
+ break;
+ }
+
+ if (kfifo_in(&core->msg_fifo, &irq, sizeof(irq)) != sizeof(irq))
+		dev_err(core->dev, "[%d] overflow: %d\n", core->id, irq);
+ queue_work(core->workqueue, &core->msg_work);
+
+ return 0;
+}
diff --git a/drivers/media/platform/amphion/vpu_msgs.h b/drivers/media/platform/amphion/vpu_msgs.h
new file mode 100644
index 000000000000..c466b4f62aad
--- /dev/null
+++ b/drivers/media/platform/amphion/vpu_msgs.h
@@ -0,0 +1,14 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright 2020-2021 NXP
+ */
+
+#ifndef _AMPHION_VPU_MSGS_H
+#define _AMPHION_VPU_MSGS_H
+
+int vpu_isr(struct vpu_core *core, u32 irq);
+void vpu_inst_run_work(struct work_struct *work);
+void vpu_msg_run_work(struct work_struct *work);
+void vpu_msg_delayed_work(struct work_struct *work);
+
+#endif
--
2.33.0


2021-11-30 09:49:36

by Ming Qian

[permalink] [raw]
Subject: [PATCH v13 06/13] media: amphion: add vpu v4l2 m2m support

vpu_v4l2.c implements the v4l2 m2m driver methods.
vpu_helpers.c implements the common helper functions.
vpu_color.c converts between the V4L2 colorspace definitions and the
ISO/IEC coding-standard color description values.

Signed-off-by: Ming Qian <[email protected]>
Signed-off-by: Shijie Qin <[email protected]>
Signed-off-by: Zhou Peng <[email protected]>
Reported-by: kernel test robot <[email protected]>
---
drivers/media/platform/amphion/vpu_color.c | 190 +++++
drivers/media/platform/amphion/vpu_helpers.c | 436 ++++++++++++
drivers/media/platform/amphion/vpu_helpers.h | 71 ++
drivers/media/platform/amphion/vpu_v4l2.c | 703 +++++++++++++++++++
drivers/media/platform/amphion/vpu_v4l2.h | 54 ++
5 files changed, 1454 insertions(+)
create mode 100644 drivers/media/platform/amphion/vpu_color.c
create mode 100644 drivers/media/platform/amphion/vpu_helpers.c
create mode 100644 drivers/media/platform/amphion/vpu_helpers.h
create mode 100644 drivers/media/platform/amphion/vpu_v4l2.c
create mode 100644 drivers/media/platform/amphion/vpu_v4l2.h

diff --git a/drivers/media/platform/amphion/vpu_color.c b/drivers/media/platform/amphion/vpu_color.c
new file mode 100644
index 000000000000..c3f45dd9ee30
--- /dev/null
+++ b/drivers/media/platform/amphion/vpu_color.c
@@ -0,0 +1,190 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright 2020-2021 NXP
+ */
+
+#include <linux/init.h>
+#include <linux/device.h>
+#include <linux/ioctl.h>
+#include <linux/list.h>
+#include <linux/module.h>
+#include <linux/kernel.h>
+#include <linux/slab.h>
+#include <linux/delay.h>
+#include <linux/types.h>
+#include <media/v4l2-device.h>
+#include "vpu.h"
+#include "vpu_helpers.h"
+
+static const u8 colorprimaries[] = {
+ 0,
+ V4L2_COLORSPACE_REC709, /*Rec. ITU-R BT.709-6*/
+ 0,
+ 0,
+ V4L2_COLORSPACE_470_SYSTEM_M, /*Rec. ITU-R BT.470-6 System M*/
+ V4L2_COLORSPACE_470_SYSTEM_BG,/*Rec. ITU-R BT.470-6 System B, G*/
+ V4L2_COLORSPACE_SMPTE170M, /*SMPTE170M*/
+ V4L2_COLORSPACE_SMPTE240M, /*SMPTE240M*/
+ 0, /*Generic film*/
+ V4L2_COLORSPACE_BT2020, /*Rec. ITU-R BT.2020-2*/
+ 0, /*SMPTE ST 428-1*/
+};
+
+static const u8 colortransfers[] = {
+ 0,
+ V4L2_XFER_FUNC_709, /*Rec. ITU-R BT.709-6*/
+ 0,
+ 0,
+ 0, /*Rec. ITU-R BT.470-6 System M*/
+ 0, /*Rec. ITU-R BT.470-6 System B, G*/
+ V4L2_XFER_FUNC_709, /*SMPTE170M*/
+ V4L2_XFER_FUNC_SMPTE240M,/*SMPTE240M*/
+ V4L2_XFER_FUNC_NONE, /*Linear transfer characteristics*/
+ 0,
+ 0,
+ 0, /*IEC 61966-2-4*/
+ 0, /*Rec. ITU-R BT.1361-0 extended colour gamut*/
+ V4L2_XFER_FUNC_SRGB, /*IEC 61966-2-1 sRGB or sYCC*/
+ V4L2_XFER_FUNC_709, /*Rec. ITU-R BT.2020-2 (10 bit system)*/
+ V4L2_XFER_FUNC_709, /*Rec. ITU-R BT.2020-2 (12 bit system)*/
+ V4L2_XFER_FUNC_SMPTE2084,/*SMPTE ST 2084*/
+ 0, /*SMPTE ST 428-1*/
+ 0 /*Rec. ITU-R BT.2100-0 hybrid log-gamma (HLG)*/
+};
+
+static const u8 colormatrixcoefs[] = {
+ 0,
+ V4L2_YCBCR_ENC_709, /*Rec. ITU-R BT.709-6*/
+ 0,
+ 0,
+ 0, /*Title 47 Code of Federal Regulations*/
+ V4L2_YCBCR_ENC_601, /*Rec. ITU-R BT.601-7 625*/
+ V4L2_YCBCR_ENC_601, /*Rec. ITU-R BT.601-7 525*/
+ V4L2_YCBCR_ENC_SMPTE240M, /*SMPTE240M*/
+ 0,
+ V4L2_YCBCR_ENC_BT2020, /*Rec. ITU-R BT.2020-2*/
+ V4L2_YCBCR_ENC_BT2020_CONST_LUM /*Rec. ITU-R BT.2020-2 constant*/
+};
+
+u32 vpu_color_cvrt_primaries_v2i(u32 primaries)
+{
+ return VPU_ARRAY_FIND(colorprimaries, primaries);
+}
+
+u32 vpu_color_cvrt_primaries_i2v(u32 primaries)
+{
+ return VPU_ARRAY_AT(colorprimaries, primaries);
+}
+
+u32 vpu_color_cvrt_transfers_v2i(u32 transfers)
+{
+ return VPU_ARRAY_FIND(colortransfers, transfers);
+}
+
+u32 vpu_color_cvrt_transfers_i2v(u32 transfers)
+{
+ return VPU_ARRAY_AT(colortransfers, transfers);
+}
+
+u32 vpu_color_cvrt_matrix_v2i(u32 matrix)
+{
+ return VPU_ARRAY_FIND(colormatrixcoefs, matrix);
+}
+
+u32 vpu_color_cvrt_matrix_i2v(u32 matrix)
+{
+ return VPU_ARRAY_AT(colormatrixcoefs, matrix);
+}
+
+u32 vpu_color_cvrt_full_range_v2i(u32 full_range)
+{
+ return (full_range == V4L2_QUANTIZATION_FULL_RANGE);
+}
+
+u32 vpu_color_cvrt_full_range_i2v(u32 full_range)
+{
+ if (full_range)
+ return V4L2_QUANTIZATION_FULL_RANGE;
+
+ return V4L2_QUANTIZATION_LIM_RANGE;
+}
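The conversion helpers above treat each table as a bidirectional map: ISO index to V4L2 value is a plain array access, and the reverse is a linear search, with 0 doubling as the "unknown" sentinel in both directions (which is why the check functions treat a result of 0 as invalid). A sketch of the semantics assumed for `VPU_ARRAY_AT`/`VPU_ARRAY_FIND`, whose definitions live outside this patch:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* ISO index -> value: bounds-checked array access, 0 on overflow */
static uint8_t array_at(const uint8_t *a, size_t n, uint32_t idx)
{
	return idx < n ? a[idx] : 0;
}

/* value -> ISO index: linear search, 0 when the value is absent */
static uint32_t array_find(const uint8_t *a, size_t n, uint32_t val)
{
	size_t i;

	for (i = 0; i < n; i++)
		if (a[i] == val)
			return i;
	return 0;
}
```

Note the deliberate ambiguity: index 0 and "not found" share the return value 0, which works here only because entry 0 of every table is itself the invalid/unspecified slot.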
+
+int vpu_color_check_primaries(u32 primaries)
+{
+ return vpu_color_cvrt_primaries_v2i(primaries) ? 0 : -EINVAL;
+}
+
+int vpu_color_check_transfers(u32 transfers)
+{
+ return vpu_color_cvrt_transfers_v2i(transfers) ? 0 : -EINVAL;
+}
+
+int vpu_color_check_matrix(u32 matrix)
+{
+ return vpu_color_cvrt_matrix_v2i(matrix) ? 0 : -EINVAL;
+}
+
+int vpu_color_check_full_range(u32 full_range)
+{
+ int ret = -EINVAL;
+
+ switch (full_range) {
+ case V4L2_QUANTIZATION_FULL_RANGE:
+ case V4L2_QUANTIZATION_LIM_RANGE:
+ ret = 0;
+ break;
+ default:
+ break;
+
+ }
+
+ return ret;
+}
+
+int vpu_color_get_default(u32 primaries,
+ u32 *ptransfers, u32 *pmatrix, u32 *pfull_range)
+{
+ u32 transfers;
+ u32 matrix;
+ u32 full_range;
+
+ switch (primaries) {
+ case V4L2_COLORSPACE_REC709:
+ transfers = V4L2_XFER_FUNC_709;
+ matrix = V4L2_YCBCR_ENC_709;
+ full_range = V4L2_QUANTIZATION_LIM_RANGE;
+ break;
+ case V4L2_COLORSPACE_470_SYSTEM_M:
+ case V4L2_COLORSPACE_470_SYSTEM_BG:
+ case V4L2_COLORSPACE_SMPTE170M:
+ transfers = V4L2_XFER_FUNC_709;
+ matrix = V4L2_YCBCR_ENC_601;
+ full_range = V4L2_QUANTIZATION_LIM_RANGE;
+ break;
+ case V4L2_COLORSPACE_SMPTE240M:
+ transfers = V4L2_XFER_FUNC_SMPTE240M;
+ matrix = V4L2_YCBCR_ENC_SMPTE240M;
+ full_range = V4L2_QUANTIZATION_LIM_RANGE;
+ break;
+ case V4L2_COLORSPACE_BT2020:
+ transfers = V4L2_XFER_FUNC_709;
+ matrix = V4L2_YCBCR_ENC_BT2020;
+ full_range = V4L2_QUANTIZATION_LIM_RANGE;
+ break;
+ default:
+ transfers = V4L2_XFER_FUNC_709;
+ matrix = V4L2_YCBCR_ENC_709;
+ full_range = V4L2_QUANTIZATION_LIM_RANGE;
+ break;
+ }
+
+ if (ptransfers)
+ *ptransfers = transfers;
+ if (pmatrix)
+ *pmatrix = matrix;
+ if (pfull_range)
+ *pfull_range = full_range;
+
+ return 0;
+}
diff --git a/drivers/media/platform/amphion/vpu_helpers.c b/drivers/media/platform/amphion/vpu_helpers.c
new file mode 100644
index 000000000000..4b9fb82f24fd
--- /dev/null
+++ b/drivers/media/platform/amphion/vpu_helpers.c
@@ -0,0 +1,436 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright 2020-2021 NXP
+ */
+
+#include <linux/init.h>
+#include <linux/interconnect.h>
+#include <linux/ioctl.h>
+#include <linux/list.h>
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/platform_device.h>
+#include "vpu.h"
+#include "vpu_core.h"
+#include "vpu_rpc.h"
+#include "vpu_helpers.h"
+
+int vpu_helper_find_in_array_u8(const u8 *array, u32 size, u32 x)
+{
+ int i;
+
+ for (i = 0; i < size; i++) {
+ if (array[i] == x)
+ return i;
+ }
+
+ return 0;
+}
+
+bool vpu_helper_check_type(struct vpu_inst *inst, u32 type)
+{
+ const struct vpu_format *pfmt;
+
+ for (pfmt = inst->formats; pfmt->pixfmt; pfmt++) {
+ if (!vpu_iface_check_format(inst, pfmt->pixfmt))
+ continue;
+ if (pfmt->type == type)
+ return true;
+ }
+
+ return false;
+}
+
+const struct vpu_format *vpu_helper_find_format(struct vpu_inst *inst, u32 type, u32 pixelfmt)
+{
+ const struct vpu_format *pfmt;
+
+ if (!inst || !inst->formats)
+ return NULL;
+
+ if (!vpu_iface_check_format(inst, pixelfmt))
+ return NULL;
+
+ for (pfmt = inst->formats; pfmt->pixfmt; pfmt++) {
+ if (pfmt->pixfmt == pixelfmt && (!type || type == pfmt->type))
+ return pfmt;
+ }
+
+ return NULL;
+}
+
+const struct vpu_format *vpu_helper_enum_format(struct vpu_inst *inst, u32 type, int index)
+{
+ const struct vpu_format *pfmt;
+ int i = 0;
+
+ if (!inst || !inst->formats)
+ return NULL;
+
+ for (pfmt = inst->formats; pfmt->pixfmt; pfmt++) {
+ if (!vpu_iface_check_format(inst, pfmt->pixfmt))
+ continue;
+
+ if (pfmt->type == type) {
+ if (index == i)
+ return pfmt;
+ i++;
+ }
+ }
+
+ return NULL;
+}
+
+u32 vpu_helper_valid_frame_width(struct vpu_inst *inst, u32 width)
+{
+ const struct vpu_core_resources *res;
+
+ if (!inst)
+ return width;
+
+ res = vpu_get_resource(inst);
+ if (!res)
+ return width;
+ if (res->max_width)
+ width = clamp(width, res->min_width, res->max_width);
+ if (res->step_width)
+ width = ALIGN(width, res->step_width);
+
+ return width;
+}
+
+u32 vpu_helper_valid_frame_height(struct vpu_inst *inst, u32 height)
+{
+ const struct vpu_core_resources *res;
+
+ if (!inst)
+ return height;
+
+ res = vpu_get_resource(inst);
+ if (!res)
+ return height;
+ if (res->max_height)
+ height = clamp(height, res->min_height, res->max_height);
+ if (res->step_height)
+ height = ALIGN(height, res->step_height);
+
+ return height;
+}
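Both validators clamp the requested dimension to the core's supported range and then round it up to the step size. A standalone sketch of that policy follows; `ALIGN_UP` assumes a power-of-two step like the kernel `ALIGN` macro, and note that aligning after clamping can push a value slightly past the stated maximum:

```c
#include <assert.h>
#include <stdint.h>

/* round x up to a multiple of a; a must be a power of two */
#define ALIGN_UP(x, a) (((x) + (a) - 1) & ~((a) - 1))

/* clamp to [min, max] when a max is set, then align to step */
static uint32_t valid_dim(uint32_t v, uint32_t min, uint32_t max,
			  uint32_t step)
{
	if (max) {
		if (v < min)
			v = min;
		if (v > max)
			v = max;
	}
	if (step)
		v = ALIGN_UP(v, step);
	return v;
}
```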
+
+static u32 get_nv12_plane_size(u32 width, u32 height, int plane_no,
+ u32 stride, u32 interlaced, u32 *pbl)
+{
+ u32 bytesperline;
+ u32 size = 0;
+
+ bytesperline = ALIGN(width, stride);
+ if (pbl)
+ bytesperline = max(bytesperline, *pbl);
+ height = ALIGN(height, 2);
+ if (plane_no == 0)
+ size = bytesperline * height;
+ else if (plane_no == 1)
+ size = bytesperline * height >> 1;
+ if (pbl)
+ *pbl = bytesperline;
+
+ return size;
+}
+
+static u32 get_tiled_8l128_plane_size(u32 fmt, u32 width, u32 height, int plane_no,
+ u32 stride, u32 interlaced, u32 *pbl)
+{
+ u32 ws = 3;
+ u32 hs = 7;
+ u32 bitdepth = 8;
+ u32 bytesperline;
+ u32 size = 0;
+
+ if (interlaced)
+ hs++;
+ if (fmt == V4L2_PIX_FMT_NV12MT_10BE_8L128)
+ bitdepth = 10;
+ bytesperline = DIV_ROUND_UP(width * bitdepth, BITS_PER_BYTE);
+ bytesperline = ALIGN(bytesperline, 1 << ws);
+ bytesperline = ALIGN(bytesperline, stride);
+ if (pbl)
+ bytesperline = max(bytesperline, *pbl);
+ height = ALIGN(height, 1 << hs);
+ if (plane_no == 0)
+ size = bytesperline * height;
+ else if (plane_no == 1)
+ size = (bytesperline * ALIGN(height, 1 << (hs + 1))) >> 1;
+ if (pbl)
+ *pbl = bytesperline;
+
+ return size;
+}
+
+static u32 get_default_plane_size(u32 width, u32 height, int plane_no,
+ u32 stride, u32 interlaced, u32 *pbl)
+{
+ u32 bytesperline;
+ u32 size = 0;
+
+ bytesperline = ALIGN(width, stride);
+ if (pbl)
+ bytesperline = max(bytesperline, *pbl);
+ if (plane_no == 0)
+ size = bytesperline * height;
+ if (pbl)
+ *pbl = bytesperline;
+
+ return size;
+}
+
+u32 vpu_helper_get_plane_size(u32 fmt, u32 w, u32 h, int plane_no,
+ u32 stride, u32 interlaced, u32 *pbl)
+{
+ switch (fmt) {
+ case V4L2_PIX_FMT_NV12M:
+ return get_nv12_plane_size(w, h, plane_no, stride, interlaced, pbl);
+ case V4L2_PIX_FMT_NV12MT_8L128:
+ case V4L2_PIX_FMT_NV12MT_10BE_8L128:
+ return get_tiled_8l128_plane_size(fmt, w, h, plane_no, stride, interlaced, pbl);
+ default:
+ return get_default_plane_size(w, h, plane_no, stride, interlaced, pbl);
+ }
+}
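For NV12M, the luma plane covers the full aligned resolution and the interleaved CbCr plane half the height. A worked userspace version of that arithmetic, kept deliberately close to `get_nv12_plane_size()` above (a power-of-two stride is assumed, and the `*pbl` in/out parameter is dropped for clarity):

```c
#include <assert.h>
#include <stdint.h>

/* round x up to a multiple of a; a must be a power of two */
#define ALIGN_UP(x, a) (((x) + (a) - 1) & ~((a) - 1))

static uint32_t nv12m_plane_size(uint32_t width, uint32_t height,
				 int plane_no, uint32_t stride)
{
	uint32_t bytesperline = ALIGN_UP(width, stride);
	uint32_t h = ALIGN_UP(height, 2);	/* chroma needs even height */

	if (plane_no == 0)	/* Y: full resolution */
		return bytesperline * h;
	if (plane_no == 1)	/* CbCr interleaved: half the lines */
		return bytesperline * h / 2;
	return 0;
}
```

For a 1920x1080 frame with a 256-byte stride requirement, the pitch rounds up to 2048 bytes, giving a 2211840-byte luma plane and a 1105920-byte chroma plane.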
+
+int vpu_helper_copy_from_stream_buffer(struct vpu_buffer *stream_buffer,
+				       u32 *rptr, u32 size, void *dst)
+{
+ u32 offset;
+ u32 start;
+ u32 end;
+ void *virt;
+
+ if (!stream_buffer || !rptr || !dst)
+ return -EINVAL;
+
+ if (!size)
+ return 0;
+
+ offset = *rptr;
+ start = stream_buffer->phys;
+ end = start + stream_buffer->length;
+ virt = stream_buffer->virt;
+
+ if (offset < start || offset > end)
+ return -EINVAL;
+
+ if (offset + size <= end) {
+ memcpy(dst, virt + (offset - start), size);
+ } else {
+ memcpy(dst, virt + (offset - start), end - offset);
+ memcpy(dst + end - offset, virt, size + offset - end);
+ }
+
+ *rptr = vpu_helper_step_walk(stream_buffer, offset, size);
+ return size;
+}
+
+u32 vpu_helper_copy_to_stream_buffer(struct vpu_buffer *stream_buffer,
+ u32 *wptr, u32 size, void *src)
+{
+ u32 offset;
+ u32 start;
+ u32 end;
+ void *virt;
+
+ if (!stream_buffer || !wptr || !src)
+ return -EINVAL;
+
+ if (!size)
+ return 0;
+
+ offset = *wptr;
+ start = stream_buffer->phys;
+ end = start + stream_buffer->length;
+ virt = stream_buffer->virt;
+ if (offset < start || offset > end)
+ return -EINVAL;
+
+ if (offset + size <= end) {
+ memcpy(virt + (offset - start), src, size);
+ } else {
+ memcpy(virt + (offset - start), src, end - offset);
+ memcpy(virt, src + end - offset, size + offset - end);
+ }
+
+ *wptr = vpu_helper_step_walk(stream_buffer, offset, size);
+
+ return size;
+}
+
+u32 vpu_helper_memset_stream_buffer(struct vpu_buffer *stream_buffer,
+ u32 *wptr, u8 val, u32 size)
+{
+ u32 offset;
+ u32 start;
+ u32 end;
+ void *virt;
+
+ if (!stream_buffer || !wptr)
+ return -EINVAL;
+
+ if (!size)
+ return 0;
+
+ offset = *wptr;
+ start = stream_buffer->phys;
+ end = start + stream_buffer->length;
+ virt = stream_buffer->virt;
+ if (offset < start || offset > end)
+ return -EINVAL;
+
+ if (offset + size <= end) {
+ memset(virt + (offset - start), val, size);
+ } else {
+ memset(virt + (offset - start), val, end - offset);
+ memset(virt, val, size + offset - end);
+ }
+
+ offset += size;
+ if (offset >= end)
+ offset -= stream_buffer->length;
+
+ *wptr = offset;
+
+ return size;
+}
+
+u32 vpu_helper_get_free_space(struct vpu_inst *inst)
+{
+ struct vpu_rpc_buffer_desc desc;
+
+ if (vpu_iface_get_stream_buffer_desc(inst, &desc))
+ return 0;
+
+ if (desc.rptr > desc.wptr)
+ return desc.rptr - desc.wptr;
+ else if (desc.rptr < desc.wptr)
+ return (desc.end - desc.start + desc.rptr - desc.wptr);
+ else
+ return desc.end - desc.start;
+}
+
+u32 vpu_helper_get_used_space(struct vpu_inst *inst)
+{
+ struct vpu_rpc_buffer_desc desc;
+
+ if (vpu_iface_get_stream_buffer_desc(inst, &desc))
+ return 0;
+
+ if (desc.wptr > desc.rptr)
+ return desc.wptr - desc.rptr;
+ else if (desc.wptr < desc.rptr)
+ return (desc.end - desc.start + desc.wptr - desc.rptr);
+ else
+ return 0;
+}
+
+int vpu_helper_g_volatile_ctrl(struct v4l2_ctrl *ctrl)
+{
+ struct vpu_inst *inst = ctrl_to_inst(ctrl);
+
+ switch (ctrl->id) {
+ case V4L2_CID_MIN_BUFFERS_FOR_CAPTURE:
+ ctrl->val = inst->min_buffer_cap;
+ break;
+ case V4L2_CID_MIN_BUFFERS_FOR_OUTPUT:
+ ctrl->val = inst->min_buffer_out;
+ break;
+ default:
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+u32 vpu_helper_calc_coprime(u32 *a, u32 *b)
+{
+ int m = *a;
+ int n = *b;
+
+ if (m == 0)
+ return n;
+ if (n == 0)
+ return m;
+
+ while (n != 0) {
+ int tmp = m % n;
+
+ m = n;
+ n = tmp;
+ }
+ *a = (*a) / m;
+ *b = (*b) / m;
+
+ return m;
+}
+
+#define READ_BYTE(buffer, pos) (*(u8 *)((buffer)->virt + ((pos) % (buffer)->length)))
+int vpu_helper_find_startcode(struct vpu_buffer *stream_buffer,
+ u32 pixelformat, u32 offset, u32 bytesused)
+{
+ u32 start_code;
+ int start_code_size;
+ u32 val = 0;
+ int i;
+ int ret = -EINVAL;
+
+ if (!stream_buffer || !stream_buffer->virt)
+ return -EINVAL;
+
+ switch (pixelformat) {
+ case V4L2_PIX_FMT_H264:
+ start_code_size = 4;
+ start_code = 0x00000001;
+ break;
+ default:
+ return 0;
+ }
+
+ for (i = 0; i < bytesused; i++) {
+ val = (val << 8) | READ_BYTE(stream_buffer, offset + i);
+ if (i < start_code_size - 1)
+ continue;
+ if (val == start_code) {
+ ret = i + 1 - start_code_size;
+ break;
+ }
+ }
+
+ return ret;
+}
+
+int vpu_find_dst_by_src(struct vpu_pair *pairs, u32 cnt, u32 src)
+{
+ u32 i;
+
+ if (!pairs || !cnt)
+ return -EINVAL;
+
+ for (i = 0; i < cnt; i++) {
+ if (pairs[i].src == src)
+ return pairs[i].dst;
+ }
+
+ return -EINVAL;
+}
+
+int vpu_find_src_by_dst(struct vpu_pair *pairs, u32 cnt, u32 dst)
+{
+ u32 i;
+
+ if (!pairs || !cnt)
+ return -EINVAL;
+
+ for (i = 0; i < cnt; i++) {
+ if (pairs[i].dst == dst)
+ return pairs[i].src;
+ }
+
+ return -EINVAL;
+}
diff --git a/drivers/media/platform/amphion/vpu_helpers.h b/drivers/media/platform/amphion/vpu_helpers.h
new file mode 100644
index 000000000000..65d4451ad8a1
--- /dev/null
+++ b/drivers/media/platform/amphion/vpu_helpers.h
@@ -0,0 +1,71 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright 2020-2021 NXP
+ */
+
+#ifndef _AMPHION_VPU_HELPERS_H
+#define _AMPHION_VPU_HELPERS_H
+
+struct vpu_pair {
+ u32 src;
+ u32 dst;
+};
+
+#define MAKE_TIMESTAMP(s, ns) (((s64)(s) * NSEC_PER_SEC) + (ns))
+#define VPU_INVALID_TIMESTAMP MAKE_TIMESTAMP(-1, 0)
+#define VPU_ARRAY_AT(array, i) (((i) < ARRAY_SIZE(array)) ? array[i] : 0)
+#define VPU_ARRAY_FIND(array, x) vpu_helper_find_in_array_u8(array, ARRAY_SIZE(array), x)
+
+int vpu_helper_find_in_array_u8(const u8 *array, u32 size, u32 x);
+bool vpu_helper_check_type(struct vpu_inst *inst, u32 type);
+const struct vpu_format *vpu_helper_find_format(struct vpu_inst *inst, u32 type, u32 pixelfmt);
+const struct vpu_format *vpu_helper_enum_format(struct vpu_inst *inst, u32 type, int index);
+u32 vpu_helper_valid_frame_width(struct vpu_inst *inst, u32 width);
+u32 vpu_helper_valid_frame_height(struct vpu_inst *inst, u32 height);
+u32 vpu_helper_get_plane_size(u32 fmt, u32 width, u32 height, int plane_no,
+ u32 stride, u32 interlaced, u32 *pbl);
+u32 vpu_helper_copy_from_stream_buffer(struct vpu_buffer *stream_buffer,
+ u32 *rptr, u32 size, void *dst);
+u32 vpu_helper_copy_to_stream_buffer(struct vpu_buffer *stream_buffer,
+ u32 *wptr, u32 size, void *src);
+u32 vpu_helper_memset_stream_buffer(struct vpu_buffer *stream_buffer,
+ u32 *wptr, u8 val, u32 size);
+u32 vpu_helper_get_free_space(struct vpu_inst *inst);
+u32 vpu_helper_get_used_space(struct vpu_inst *inst);
+int vpu_helper_g_volatile_ctrl(struct v4l2_ctrl *ctrl);
+u32 vpu_helper_calc_coprime(u32 *a, u32 *b);
+void vpu_helper_get_kmp_next(const u8 *pattern, int *next, int size);
+int vpu_helper_kmp_search(u8 *s, int s_len, const u8 *p, int p_len, int *next);
+int vpu_helper_kmp_search_in_stream_buffer(struct vpu_buffer *stream_buffer,
+ u32 offset, int bytesused,
+ const u8 *p, int p_len, int *next);
+int vpu_helper_find_startcode(struct vpu_buffer *stream_buffer,
+ u32 pixelformat, u32 offset, u32 bytesused);
+
+static inline u32 vpu_helper_step_walk(struct vpu_buffer *stream_buffer, u32 pos, u32 step)
+{
+ pos += step;
+ if (pos > stream_buffer->phys + stream_buffer->length)
+ pos -= stream_buffer->length;
+
+ return pos;
+}
+
+int vpu_color_check_primaries(u32 primaries);
+int vpu_color_check_transfers(u32 transfers);
+int vpu_color_check_matrix(u32 matrix);
+int vpu_color_check_full_range(u32 full_range);
+u32 vpu_color_cvrt_primaries_v2i(u32 primaries);
+u32 vpu_color_cvrt_primaries_i2v(u32 primaries);
+u32 vpu_color_cvrt_transfers_v2i(u32 transfers);
+u32 vpu_color_cvrt_transfers_i2v(u32 transfers);
+u32 vpu_color_cvrt_matrix_v2i(u32 matrix);
+u32 vpu_color_cvrt_matrix_i2v(u32 matrix);
+u32 vpu_color_cvrt_full_range_v2i(u32 full_range);
+u32 vpu_color_cvrt_full_range_i2v(u32 full_range);
+int vpu_color_get_default(u32 primaries,
+ u32 *ptransfers, u32 *pmatrix, u32 *pfull_range);
+
+int vpu_find_dst_by_src(struct vpu_pair *pairs, u32 cnt, u32 src);
+int vpu_find_src_by_dst(struct vpu_pair *pairs, u32 cnt, u32 dst);
+#endif
diff --git a/drivers/media/platform/amphion/vpu_v4l2.c b/drivers/media/platform/amphion/vpu_v4l2.c
new file mode 100644
index 000000000000..909a94d5aa8a
--- /dev/null
+++ b/drivers/media/platform/amphion/vpu_v4l2.c
@@ -0,0 +1,703 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright 2020-2021 NXP
+ */
+
+#include <linux/init.h>
+#include <linux/interconnect.h>
+#include <linux/ioctl.h>
+#include <linux/list.h>
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/pm_runtime.h>
+#include <linux/videodev2.h>
+#include <media/v4l2-device.h>
+#include <media/v4l2-event.h>
+#include <media/v4l2-mem2mem.h>
+#include <media/v4l2-ioctl.h>
+#include <media/videobuf2-v4l2.h>
+#include <media/videobuf2-dma-contig.h>
+#include <media/videobuf2-vmalloc.h>
+#include "vpu.h"
+#include "vpu_core.h"
+#include "vpu_v4l2.h"
+#include "vpu_msgs.h"
+#include "vpu_helpers.h"
+
+void vpu_inst_lock(struct vpu_inst *inst)
+{
+ mutex_lock(&inst->lock);
+}
+
+void vpu_inst_unlock(struct vpu_inst *inst)
+{
+ mutex_unlock(&inst->lock);
+}
+
+dma_addr_t vpu_get_vb_phy_addr(struct vb2_buffer *vb, u32 plane_no)
+{
+ if (plane_no >= vb->num_planes)
+ return 0;
+ return vb2_dma_contig_plane_dma_addr(vb, plane_no) +
+ vb->planes[plane_no].data_offset;
+}
+
+unsigned int vpu_get_vb_length(struct vb2_buffer *vb, u32 plane_no)
+{
+ if (plane_no >= vb->num_planes)
+ return 0;
+ return vb2_plane_size(vb, plane_no) - vb->planes[plane_no].data_offset;
+}
+
+void vpu_v4l2_set_error(struct vpu_inst *inst)
+{
+ struct vb2_queue *src_q = v4l2_m2m_get_src_vq(inst->fh.m2m_ctx);
+ struct vb2_queue *dst_q = v4l2_m2m_get_dst_vq(inst->fh.m2m_ctx);
+
+ dev_err(inst->dev, "error occurred in codec\n");
+ if (src_q)
+ src_q->error = 1;
+ if (dst_q)
+ dst_q->error = 1;
+}
+
+int vpu_notify_eos(struct vpu_inst *inst)
+{
+ const struct v4l2_event ev = {
+ .id = 0,
+ .type = V4L2_EVENT_EOS
+ };
+
+ vpu_trace(inst->dev, "[%d]\n", inst->id);
+ v4l2_event_queue_fh(&inst->fh, &ev);
+
+ return 0;
+}
+
+int vpu_notify_source_change(struct vpu_inst *inst)
+{
+ const struct v4l2_event ev = {
+ .id = 0,
+ .type = V4L2_EVENT_SOURCE_CHANGE,
+ .u.src_change.changes = V4L2_EVENT_SRC_CH_RESOLUTION
+ };
+
+ vpu_trace(inst->dev, "[%d]\n", inst->id);
+ v4l2_event_queue_fh(&inst->fh, &ev);
+ return 0;
+}
+
+int vpu_set_last_buffer_dequeued(struct vpu_inst *inst)
+{
+ struct vb2_queue *q;
+
+ if (!inst || !inst->fh.m2m_ctx)
+ return -EINVAL;
+
+ q = v4l2_m2m_get_dst_vq(inst->fh.m2m_ctx);
+ if (!list_empty(&q->done_list))
+ return -EINVAL;
+
+ vpu_trace(inst->dev, "last buffer dequeued\n");
+ q->last_buffer_dequeued = true;
+ wake_up(&q->done_wq);
+ vpu_notify_eos(inst);
+ return 0;
+}
+
+const struct vpu_format *vpu_try_fmt_common(struct vpu_inst *inst,
+ struct v4l2_format *f)
+{
+ struct v4l2_pix_format_mplane *pixmp = &f->fmt.pix_mp;
+ u32 type = f->type;
+ u32 stride = 1;
+ u32 bytesperline;
+ u32 sizeimage;
+ const struct vpu_format *fmt;
+ const struct vpu_core_resources *res;
+ int i;
+
+ fmt = vpu_helper_find_format(inst, type, pixmp->pixelformat);
+ if (!fmt) {
+ fmt = vpu_helper_enum_format(inst, type, 0);
+ if (!fmt)
+ return NULL;
+ pixmp->pixelformat = fmt->pixfmt;
+ }
+
+ res = vpu_get_resource(inst);
+ if (res)
+ stride = res->stride;
+ if (pixmp->width)
+ pixmp->width = vpu_helper_valid_frame_width(inst, pixmp->width);
+ if (pixmp->height)
+ pixmp->height = vpu_helper_valid_frame_height(inst, pixmp->height);
+ pixmp->flags = fmt->flags;
+ pixmp->num_planes = fmt->num_planes;
+ if (pixmp->field == V4L2_FIELD_ANY)
+ pixmp->field = V4L2_FIELD_NONE;
+ for (i = 0; i < pixmp->num_planes; i++) {
+ bytesperline = max_t(s32, pixmp->plane_fmt[i].bytesperline, 0);
+ sizeimage = vpu_helper_get_plane_size(pixmp->pixelformat,
+ pixmp->width, pixmp->height, i, stride,
+ pixmp->field == V4L2_FIELD_INTERLACED ? 1 : 0,
+ &bytesperline);
+ sizeimage = max_t(s32, pixmp->plane_fmt[i].sizeimage, sizeimage);
+ pixmp->plane_fmt[i].bytesperline = bytesperline;
+ pixmp->plane_fmt[i].sizeimage = sizeimage;
+ }
+
+ return fmt;
+}
+
+static bool vpu_check_ready(struct vpu_inst *inst, u32 type)
+{
+ if (!inst)
+ return false;
+ if (inst->state == VPU_CODEC_STATE_DEINIT || inst->id < 0)
+ return false;
+ if (!inst->ops->check_ready)
+ return true;
+ return call_vop(inst, check_ready, type);
+}
+
+int vpu_process_output_buffer(struct vpu_inst *inst)
+{
+ struct v4l2_m2m_buffer *buf = NULL;
+ struct vpu_vb2_buffer *vpu_buf = NULL;
+
+ if (!inst)
+ return -EINVAL;
+
+ if (!vpu_check_ready(inst, inst->out_format.type))
+ return -EINVAL;
+
+ v4l2_m2m_for_each_src_buf(inst->fh.m2m_ctx, buf) {
+ vpu_buf = container_of(buf, struct vpu_vb2_buffer, m2m_buf);
+ if (vpu_buf->state == VPU_BUF_STATE_IDLE)
+ break;
+ vpu_buf = NULL;
+ }
+
+ if (!vpu_buf)
+ return -EINVAL;
+
+ dev_dbg(inst->dev, "[%d]frame id = %d / %d\n",
+ inst->id, vpu_buf->m2m_buf.vb.sequence, inst->sequence);
+ return call_vop(inst, process_output, &vpu_buf->m2m_buf.vb.vb2_buf);
+}
+
+int vpu_process_capture_buffer(struct vpu_inst *inst)
+{
+ struct v4l2_m2m_buffer *buf = NULL;
+ struct vpu_vb2_buffer *vpu_buf = NULL;
+
+ if (!inst)
+ return -EINVAL;
+
+ if (!vpu_check_ready(inst, inst->cap_format.type))
+ return -EINVAL;
+
+ v4l2_m2m_for_each_dst_buf(inst->fh.m2m_ctx, buf) {
+ vpu_buf = container_of(buf, struct vpu_vb2_buffer, m2m_buf);
+ if (vpu_buf->state == VPU_BUF_STATE_IDLE)
+ break;
+ vpu_buf = NULL;
+ }
+ if (!vpu_buf)
+ return -EINVAL;
+
+ return call_vop(inst, process_capture, &vpu_buf->m2m_buf.vb.vb2_buf);
+}
+
+struct vb2_v4l2_buffer *vpu_find_buf_by_sequence(struct vpu_inst *inst,
+ u32 type, u32 sequence)
+{
+ struct v4l2_m2m_buffer *buf = NULL;
+ struct vb2_v4l2_buffer *vbuf = NULL;
+
+ if (V4L2_TYPE_IS_OUTPUT(type)) {
+ v4l2_m2m_for_each_src_buf(inst->fh.m2m_ctx, buf) {
+ vbuf = &buf->vb;
+ if (vbuf->sequence == sequence)
+ break;
+ vbuf = NULL;
+ }
+ } else {
+ v4l2_m2m_for_each_dst_buf(inst->fh.m2m_ctx, buf) {
+ vbuf = &buf->vb;
+ if (vbuf->sequence == sequence)
+ break;
+ vbuf = NULL;
+ }
+ }
+
+ return vbuf;
+}
+
+struct vb2_v4l2_buffer *vpu_find_buf_by_idx(struct vpu_inst *inst,
+ u32 type, u32 idx)
+{
+ struct v4l2_m2m_buffer *buf = NULL;
+ struct vb2_v4l2_buffer *vbuf = NULL;
+
+ if (V4L2_TYPE_IS_OUTPUT(type)) {
+ v4l2_m2m_for_each_src_buf(inst->fh.m2m_ctx, buf) {
+ vbuf = &buf->vb;
+ if (vbuf->vb2_buf.index == idx)
+ break;
+ vbuf = NULL;
+ }
+ } else {
+ v4l2_m2m_for_each_dst_buf(inst->fh.m2m_ctx, buf) {
+ vbuf = &buf->vb;
+ if (vbuf->vb2_buf.index == idx)
+ break;
+ vbuf = NULL;
+ }
+ }
+
+ return vbuf;
+}
+
+int vpu_get_num_buffers(struct vpu_inst *inst, u32 type)
+{
+ struct vb2_queue *q;
+
+ if (!inst || !inst->fh.m2m_ctx)
+ return -EINVAL;
+ if (V4L2_TYPE_IS_OUTPUT(type))
+ q = v4l2_m2m_get_src_vq(inst->fh.m2m_ctx);
+ else
+ q = v4l2_m2m_get_dst_vq(inst->fh.m2m_ctx);
+
+ return q->num_buffers;
+}
+
+static void vpu_m2m_device_run(void *priv)
+{
+}
+
+static void vpu_m2m_job_abort(void *priv)
+{
+ struct vpu_inst *inst = priv;
+ struct v4l2_m2m_ctx *m2m_ctx = inst->fh.m2m_ctx;
+
+ v4l2_m2m_job_finish(m2m_ctx->m2m_dev, m2m_ctx);
+}
+
+static const struct v4l2_m2m_ops vpu_m2m_ops = {
+ .device_run = vpu_m2m_device_run,
+ .job_abort = vpu_m2m_job_abort
+};
+
+static int vpu_vb2_queue_setup(struct vb2_queue *vq,
+ unsigned int *buf_count,
+ unsigned int *plane_count,
+ unsigned int psize[],
+ struct device *allocators[])
+{
+ struct vpu_inst *inst = vb2_get_drv_priv(vq);
+ struct vpu_format *cur_fmt;
+ int i;
+
+ cur_fmt = vpu_get_format(inst, vq->type);
+
+ if (*plane_count) {
+ if (inst->state == VPU_CODEC_STATE_DYAMIC_RESOLUTION_CHANGE) {
+ for (i = 0; i < *plane_count; i++) {
+ if (!psize[i])
+ psize[i] = cur_fmt->sizeimage[i];
+ }
+ return 0;
+ }
+ if (*plane_count != cur_fmt->num_planes)
+ return -EINVAL;
+ for (i = 0; i < cur_fmt->num_planes; i++) {
+ if (psize[i] < cur_fmt->sizeimage[i])
+ return -EINVAL;
+ }
+ return 0;
+ }
+
+ *plane_count = cur_fmt->num_planes;
+ for (i = 0; i < cur_fmt->num_planes; i++)
+ psize[i] = cur_fmt->sizeimage[i];
+
+ return 0;
+}
+
+static int vpu_vb2_buf_init(struct vb2_buffer *vb)
+{
+ struct vb2_v4l2_buffer *vbuf = to_vb2_v4l2_buffer(vb);
+ struct vpu_vb2_buffer *vpu_buf = to_vpu_vb2_buffer(vbuf);
+
+ vpu_buf->state = VPU_BUF_STATE_IDLE;
+
+ return 0;
+}
+
+static void vpu_vb2_buf_cleanup(struct vb2_buffer *vb)
+{
+}
+
+static int vpu_vb2_buf_prepare(struct vb2_buffer *vb)
+{
+ struct vpu_inst *inst = vb2_get_drv_priv(vb->vb2_queue);
+ struct vb2_v4l2_buffer *vbuf = to_vb2_v4l2_buffer(vb);
+ struct vpu_vb2_buffer *vpu_buf = to_vpu_vb2_buffer(vbuf);
+ struct vpu_format *cur_fmt;
+ u32 i;
+
+ cur_fmt = vpu_get_format(inst, vb->type);
+ if (vb->num_planes != cur_fmt->num_planes)
+ return -EINVAL;
+ for (i = 0; i < cur_fmt->num_planes; i++) {
+ if (vpu_get_vb_length(vb, i) < cur_fmt->sizeimage[i]) {
+ dev_dbg(inst->dev, "[%d] %s buf[%d] is invalid\n",
+ inst->id,
+ vpu_type_name(vb->type),
+ vb->index);
+ vpu_buf->state = VPU_BUF_STATE_ERROR;
+ }
+ }
+
+ return 0;
+}
+
+static void vpu_vb2_buf_finish(struct vb2_buffer *vb)
+{
+ struct vb2_v4l2_buffer *vbuf = to_vb2_v4l2_buffer(vb);
+ struct vpu_inst *inst = vb2_get_drv_priv(vb->vb2_queue);
+ struct vb2_queue *q = vb->vb2_queue;
+
+ if (vbuf->flags & V4L2_BUF_FLAG_LAST)
+ vpu_notify_eos(inst);
+
+ if (list_empty(&q->done_list))
+ call_vop(inst, on_queue_empty, q->type);
+}
+
+void vpu_vb2_buffers_return(struct vpu_inst *inst,
+ unsigned int type, enum vb2_buffer_state state)
+{
+ struct vb2_v4l2_buffer *buf;
+
+ if (!inst || !inst->fh.m2m_ctx)
+ return;
+
+ if (V4L2_TYPE_IS_OUTPUT(type)) {
+ while ((buf = v4l2_m2m_src_buf_remove(inst->fh.m2m_ctx)))
+ v4l2_m2m_buf_done(buf, state);
+ } else {
+ while ((buf = v4l2_m2m_dst_buf_remove(inst->fh.m2m_ctx)))
+ v4l2_m2m_buf_done(buf, state);
+ }
+}
+
+static int vpu_vb2_start_streaming(struct vb2_queue *q, unsigned int count)
+{
+ struct vpu_inst *inst = vb2_get_drv_priv(q);
+ struct vpu_format *fmt = vpu_get_format(inst, q->type);
+ int ret;
+
+ vpu_inst_unlock(inst);
+ ret = vpu_inst_register(inst);
+ vpu_inst_lock(inst);
+ if (ret) {
+ vpu_vb2_buffers_return(inst, q->type, VB2_BUF_STATE_QUEUED);
+ return ret;
+ }
+
+ vpu_trace(inst->dev, "[%d] %s %c%c%c%c %dx%d %u(%u) %u(%u) %u(%u) %d\n",
+ inst->id, vpu_type_name(q->type),
+ fmt->pixfmt,
+ fmt->pixfmt >> 8,
+ fmt->pixfmt >> 16,
+ fmt->pixfmt >> 24,
+ fmt->width, fmt->height,
+ fmt->sizeimage[0], fmt->bytesperline[0],
+ fmt->sizeimage[1], fmt->bytesperline[1],
+ fmt->sizeimage[2], fmt->bytesperline[2],
+ q->num_buffers);
+ call_vop(inst, start, q->type);
+ vb2_clear_last_buffer_dequeued(q);
+
+ return 0;
+}
+
+static void vpu_vb2_stop_streaming(struct vb2_queue *q)
+{
+ struct vpu_inst *inst = vb2_get_drv_priv(q);
+
+ vpu_trace(inst->dev, "[%d] %s\n", inst->id, vpu_type_name(q->type));
+
+ call_vop(inst, stop, q->type);
+ vpu_vb2_buffers_return(inst, q->type, VB2_BUF_STATE_ERROR);
+ if (V4L2_TYPE_IS_OUTPUT(q->type))
+ inst->sequence = 0;
+}
+
+static void vpu_vb2_buf_queue(struct vb2_buffer *vb)
+{
+ struct vb2_v4l2_buffer *vbuf = to_vb2_v4l2_buffer(vb);
+ struct vpu_inst *inst = vb2_get_drv_priv(vb->vb2_queue);
+
+ if (V4L2_TYPE_IS_OUTPUT(vb->type)) {
+ vbuf->sequence = inst->sequence++;
+ if ((s64)vb->timestamp < 0)
+ vb->timestamp = VPU_INVALID_TIMESTAMP;
+ }
+
+ v4l2_m2m_buf_queue(inst->fh.m2m_ctx, vbuf);
+ vpu_process_output_buffer(inst);
+ vpu_process_capture_buffer(inst);
+}
+
+static struct vb2_ops vpu_vb2_ops = {
+ .queue_setup = vpu_vb2_queue_setup,
+ .buf_init = vpu_vb2_buf_init,
+ .buf_cleanup = vpu_vb2_buf_cleanup,
+ .buf_prepare = vpu_vb2_buf_prepare,
+ .buf_finish = vpu_vb2_buf_finish,
+ .start_streaming = vpu_vb2_start_streaming,
+ .stop_streaming = vpu_vb2_stop_streaming,
+ .buf_queue = vpu_vb2_buf_queue,
+ .wait_prepare = vb2_ops_wait_prepare,
+ .wait_finish = vb2_ops_wait_finish,
+};
+
+static int vpu_m2m_queue_init(void *priv, struct vb2_queue *src_vq,
+ struct vb2_queue *dst_vq)
+{
+ struct vpu_inst *inst = priv;
+ int ret;
+
+ inst->out_format.type = src_vq->type = V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE;
+ src_vq->io_modes = VB2_MMAP | VB2_USERPTR | VB2_DMABUF;
+ src_vq->timestamp_flags = V4L2_BUF_FLAG_TIMESTAMP_COPY;
+ src_vq->ops = &vpu_vb2_ops;
+ src_vq->mem_ops = &vb2_dma_contig_memops;
+ if (inst->type == VPU_CORE_TYPE_DEC && inst->use_stream_buffer)
+ src_vq->mem_ops = &vb2_vmalloc_memops;
+ src_vq->drv_priv = inst;
+ src_vq->buf_struct_size = sizeof(struct vpu_vb2_buffer);
+ src_vq->allow_zero_bytesused = 1;
+ src_vq->min_buffers_needed = 1;
+ src_vq->dev = inst->vpu->dev;
+ src_vq->lock = &inst->lock;
+ ret = vb2_queue_init(src_vq);
+ if (ret)
+ return ret;
+
+ inst->cap_format.type = dst_vq->type = V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE;
+ dst_vq->io_modes = VB2_MMAP | VB2_USERPTR | VB2_DMABUF;
+ dst_vq->timestamp_flags = V4L2_BUF_FLAG_TIMESTAMP_COPY;
+ dst_vq->ops = &vpu_vb2_ops;
+ dst_vq->mem_ops = &vb2_dma_contig_memops;
+ if (inst->type == VPU_CORE_TYPE_ENC && inst->use_stream_buffer)
+ dst_vq->mem_ops = &vb2_vmalloc_memops;
+ dst_vq->drv_priv = inst;
+ dst_vq->buf_struct_size = sizeof(struct vpu_vb2_buffer);
+ dst_vq->allow_zero_bytesused = 1;
+ dst_vq->min_buffers_needed = 1;
+ dst_vq->dev = inst->vpu->dev;
+ dst_vq->lock = &inst->lock;
+ ret = vb2_queue_init(dst_vq);
+ if (ret) {
+ vb2_queue_release(src_vq);
+ return ret;
+ }
+
+ return 0;
+}
+
+static int vpu_v4l2_release(struct vpu_inst *inst)
+{
+ vpu_trace(inst->vpu->dev, "%p\n", inst);
+
+ vpu_release_core(inst->core);
+ put_device(inst->dev);
+
+ if (inst->workqueue) {
+ cancel_work_sync(&inst->msg_work);
+ destroy_workqueue(inst->workqueue);
+ inst->workqueue = NULL;
+ }
+ if (inst->fh.m2m_ctx) {
+ v4l2_m2m_ctx_release(inst->fh.m2m_ctx);
+ inst->fh.m2m_ctx = NULL;
+ }
+
+ v4l2_ctrl_handler_free(&inst->ctrl_handler);
+ mutex_destroy(&inst->lock);
+ v4l2_fh_del(&inst->fh);
+ v4l2_fh_exit(&inst->fh);
+
+ call_vop(inst, cleanup);
+
+ return 0;
+}
+
+int vpu_v4l2_open(struct file *file, struct vpu_inst *inst)
+{
+ struct vpu_dev *vpu = video_drvdata(file);
+ struct vpu_func *func;
+ int ret = 0;
+
+ WARN_ON(!file || !inst || !inst->ops);
+
+ if (inst->type == VPU_CORE_TYPE_ENC)
+ func = &vpu->encoder;
+ else
+ func = &vpu->decoder;
+
+ atomic_set(&inst->ref_count, 0);
+ vpu_inst_get(inst);
+ inst->vpu = vpu;
+ inst->core = vpu_request_core(vpu, inst->type);
+ if (inst->core)
+ inst->dev = get_device(inst->core->dev);
+ mutex_init(&inst->lock);
+ INIT_LIST_HEAD(&inst->cmd_q);
+ inst->id = VPU_INST_NULL_ID;
+ inst->release = vpu_v4l2_release;
+ inst->pid = current->pid;
+ inst->tgid = current->tgid;
+ inst->min_buffer_cap = 2;
+ inst->min_buffer_out = 2;
+ v4l2_fh_init(&inst->fh, func->vfd);
+ v4l2_fh_add(&inst->fh);
+
+ ret = call_vop(inst, ctrl_init);
+ if (ret)
+ goto error;
+
+ inst->fh.m2m_ctx = v4l2_m2m_ctx_init(func->m2m_dev,
+ inst, vpu_m2m_queue_init);
+ if (IS_ERR(inst->fh.m2m_ctx)) {
+ dev_err(vpu->dev, "v4l2_m2m_ctx_init fail\n");
+ ret = PTR_ERR(inst->fh.m2m_ctx);
+ goto error;
+ }
+
+ inst->fh.ctrl_handler = &inst->ctrl_handler;
+ file->private_data = &inst->fh;
+ inst->state = VPU_CODEC_STATE_DEINIT;
+ inst->workqueue = alloc_workqueue("vpu_inst", WQ_UNBOUND | WQ_MEM_RECLAIM, 1);
+ if (inst->workqueue) {
+ INIT_WORK(&inst->msg_work, vpu_inst_run_work);
+ ret = kfifo_init(&inst->msg_fifo,
+ inst->msg_buffer,
+ roundup_pow_of_two(sizeof(inst->msg_buffer)));
+ if (ret) {
+ destroy_workqueue(inst->workqueue);
+ inst->workqueue = NULL;
+ }
+ }
+ vpu_trace(vpu->dev, "tgid = %d, pid = %d, type = %s, inst = %p\n",
+ inst->tgid, inst->pid, vpu_core_type_desc(inst->type), inst);
+
+ return 0;
+error:
+ vpu_inst_put(inst);
+ return ret;
+}
+
+int vpu_v4l2_close(struct file *file)
+{
+ struct vpu_dev *vpu = video_drvdata(file);
+ struct vpu_inst *inst = to_inst(file);
+ struct vb2_queue *src_q;
+ struct vb2_queue *dst_q;
+
+ vpu_trace(vpu->dev, "tgid = %d, pid = %d, inst = %p\n",
+ inst->tgid, inst->pid, inst);
+ src_q = v4l2_m2m_get_src_vq(inst->fh.m2m_ctx);
+ dst_q = v4l2_m2m_get_dst_vq(inst->fh.m2m_ctx);
+ vpu_inst_lock(inst);
+ if (vb2_is_streaming(src_q))
+ v4l2_m2m_streamoff(file, inst->fh.m2m_ctx, src_q->type);
+ if (vb2_is_streaming(dst_q))
+ v4l2_m2m_streamoff(file, inst->fh.m2m_ctx, dst_q->type);
+ vpu_inst_unlock(inst);
+
+ call_vop(inst, release);
+ vpu_inst_unregister(inst);
+ vpu_inst_put(inst);
+
+ return 0;
+}
+
+int vpu_add_func(struct vpu_dev *vpu, struct vpu_func *func)
+{
+ struct video_device *vfd;
+ int ret;
+
+ if (!vpu || !func)
+ return -EINVAL;
+
+ if (func->vfd)
+ return 0;
+
+ vfd = video_device_alloc();
+ if (!vfd) {
+ dev_err(vpu->dev, "alloc vpu video device fail\n");
+ return -ENOMEM;
+ }
+ vfd->release = video_device_release;
+ vfd->vfl_dir = VFL_DIR_M2M;
+ vfd->v4l2_dev = &vpu->v4l2_dev;
+ vfd->device_caps = V4L2_CAP_VIDEO_M2M_MPLANE | V4L2_CAP_STREAMING;
+ if (func->type == VPU_CORE_TYPE_ENC) {
+ strscpy(vfd->name, "amphion-vpu-encoder", sizeof(vfd->name));
+ vfd->fops = venc_get_fops();
+ vfd->ioctl_ops = venc_get_ioctl_ops();
+ } else {
+ strscpy(vfd->name, "amphion-vpu-decoder", sizeof(vfd->name));
+ vfd->fops = vdec_get_fops();
+ vfd->ioctl_ops = vdec_get_ioctl_ops();
+ }
+
+ ret = video_register_device(vfd, VFL_TYPE_VIDEO, -1);
+ if (ret) {
+ video_device_release(vfd);
+ return ret;
+ }
+ video_set_drvdata(vfd, vpu);
+ func->vfd = vfd;
+ func->m2m_dev = v4l2_m2m_init(&vpu_m2m_ops);
+ if (IS_ERR(func->m2m_dev)) {
+ dev_err(vpu->dev, "v4l2_m2m_init fail\n");
+ video_unregister_device(func->vfd);
+ func->vfd = NULL;
+ return PTR_ERR(func->m2m_dev);
+ }
+
+ ret = v4l2_m2m_register_media_controller(func->m2m_dev, func->vfd, func->function);
+ if (ret) {
+ v4l2_m2m_release(func->m2m_dev);
+ func->m2m_dev = NULL;
+ video_unregister_device(func->vfd);
+ func->vfd = NULL;
+ return ret;
+ }
+
+ return 0;
+}
+
+void vpu_remove_func(struct vpu_func *func)
+{
+ if (!func)
+ return;
+
+ if (func->m2m_dev) {
+ v4l2_m2m_unregister_media_controller(func->m2m_dev);
+ v4l2_m2m_release(func->m2m_dev);
+ func->m2m_dev = NULL;
+ }
+ if (func->vfd) {
+ video_unregister_device(func->vfd);
+ func->vfd = NULL;
+ }
+}
diff --git a/drivers/media/platform/amphion/vpu_v4l2.h b/drivers/media/platform/amphion/vpu_v4l2.h
new file mode 100644
index 000000000000..c9ed7aec637a
--- /dev/null
+++ b/drivers/media/platform/amphion/vpu_v4l2.h
@@ -0,0 +1,54 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright 2020-2021 NXP
+ */
+
+#ifndef _AMPHION_VPU_V4L2_H
+#define _AMPHION_VPU_V4L2_H
+
+#include <linux/videodev2.h>
+
+void vpu_inst_lock(struct vpu_inst *inst);
+void vpu_inst_unlock(struct vpu_inst *inst);
+
+int vpu_v4l2_open(struct file *file, struct vpu_inst *inst);
+int vpu_v4l2_close(struct file *file);
+
+const struct vpu_format *vpu_try_fmt_common(struct vpu_inst *inst, struct v4l2_format *f);
+int vpu_process_output_buffer(struct vpu_inst *inst);
+int vpu_process_capture_buffer(struct vpu_inst *inst);
+struct vb2_v4l2_buffer *vpu_find_buf_by_sequence(struct vpu_inst *inst, u32 type, u32 sequence);
+struct vb2_v4l2_buffer *vpu_find_buf_by_idx(struct vpu_inst *inst, u32 type, u32 idx);
+void vpu_v4l2_set_error(struct vpu_inst *inst);
+int vpu_notify_eos(struct vpu_inst *inst);
+int vpu_notify_source_change(struct vpu_inst *inst);
+int vpu_set_last_buffer_dequeued(struct vpu_inst *inst);
+void vpu_vb2_buffers_return(struct vpu_inst *inst,
+ unsigned int type, enum vb2_buffer_state state);
+int vpu_get_num_buffers(struct vpu_inst *inst, u32 type);
+
+dma_addr_t vpu_get_vb_phy_addr(struct vb2_buffer *vb, u32 plane_no);
+unsigned int vpu_get_vb_length(struct vb2_buffer *vb, u32 plane_no);
+static inline struct vpu_format *vpu_get_format(struct vpu_inst *inst, u32 type)
+{
+ if (V4L2_TYPE_IS_OUTPUT(type))
+ return &inst->out_format;
+ else
+ return &inst->cap_format;
+}
+
+static inline char *vpu_type_name(u32 type)
+{
+ return V4L2_TYPE_IS_OUTPUT(type) ? "output" : "capture";
+}
+
+static inline int vpu_vb_is_codecconfig(struct vb2_v4l2_buffer *vbuf)
+{
+#ifdef V4L2_BUF_FLAG_CODECCONFIG
+ return (vbuf->flags & V4L2_BUF_FLAG_CODECCONFIG) ? 1 : 0;
+#else
+ return 0;
+#endif
+}
+
+#endif
--
2.33.0


2021-11-30 09:49:39

by Ming Qian

Subject: [PATCH v13 04/13] media: amphion: add vpu core driver

The VPU hardware supports both encoding and decoding.
Each function is handled by a VPU core, and each core
runs either the encoder or the decoder firmware.

This driver adds support for the VPU core.

Signed-off-by: Ming Qian <[email protected]>
Signed-off-by: Shijie Qin <[email protected]>
Signed-off-by: Zhou Peng <[email protected]>
---
drivers/media/platform/amphion/vpu_codec.h | 67 ++
drivers/media/platform/amphion/vpu_core.c | 906 +++++++++++++++++++++
drivers/media/platform/amphion/vpu_core.h | 15 +
drivers/media/platform/amphion/vpu_dbg.c | 495 +++++++++++
drivers/media/platform/amphion/vpu_rpc.c | 279 +++++++
drivers/media/platform/amphion/vpu_rpc.h | 464 +++++++++++
6 files changed, 2226 insertions(+)
create mode 100644 drivers/media/platform/amphion/vpu_codec.h
create mode 100644 drivers/media/platform/amphion/vpu_core.c
create mode 100644 drivers/media/platform/amphion/vpu_core.h
create mode 100644 drivers/media/platform/amphion/vpu_dbg.c
create mode 100644 drivers/media/platform/amphion/vpu_rpc.c
create mode 100644 drivers/media/platform/amphion/vpu_rpc.h

diff --git a/drivers/media/platform/amphion/vpu_codec.h b/drivers/media/platform/amphion/vpu_codec.h
new file mode 100644
index 000000000000..bf8920e9f6d7
--- /dev/null
+++ b/drivers/media/platform/amphion/vpu_codec.h
@@ -0,0 +1,67 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright 2020-2021 NXP
+ */
+
+#ifndef _AMPHION_VPU_CODEC_H
+#define _AMPHION_VPU_CODEC_H
+
+struct vpu_encode_params {
+ u32 input_format;
+ u32 codec_format;
+ u32 profile;
+ u32 tier;
+ u32 level;
+ struct v4l2_fract frame_rate;
+ u32 src_stride;
+ u32 src_width;
+ u32 src_height;
+ struct v4l2_rect crop;
+ u32 out_width;
+ u32 out_height;
+
+ u32 gop_length;
+ u32 bframes;
+
+ u32 rc_mode;
+ u32 bitrate;
+ u32 bitrate_min;
+ u32 bitrate_max;
+
+ u32 i_frame_qp;
+ u32 p_frame_qp;
+ u32 b_frame_qp;
+ u32 qp_min;
+ u32 qp_max;
+ u32 qp_min_i;
+ u32 qp_max_i;
+
+ struct {
+ u32 enable;
+ u32 idc;
+ u32 width;
+ u32 height;
+ } sar;
+
+ struct {
+ u32 primaries;
+ u32 transfer;
+ u32 matrix;
+ u32 full_range;
+ } color;
+};
+
+struct vpu_decode_params {
+ u32 codec_format;
+ u32 output_format;
+ u32 b_dis_reorder;
+ u32 b_non_frame;
+ u32 frame_count;
+ u32 end_flag;
+ struct {
+ u32 base;
+ u32 size;
+ } udata;
+};
+
+#endif
diff --git a/drivers/media/platform/amphion/vpu_core.c b/drivers/media/platform/amphion/vpu_core.c
new file mode 100644
index 000000000000..0dbfd1c84f75
--- /dev/null
+++ b/drivers/media/platform/amphion/vpu_core.c
@@ -0,0 +1,906 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright 2020-2021 NXP
+ */
+
+#include <linux/init.h>
+#include <linux/interconnect.h>
+#include <linux/ioctl.h>
+#include <linux/list.h>
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/of_device.h>
+#include <linux/of_address.h>
+#include <linux/platform_device.h>
+#include <linux/slab.h>
+#include <linux/types.h>
+#include <linux/pm_runtime.h>
+#include <linux/pm_domain.h>
+#include <linux/firmware.h>
+#include "vpu.h"
+#include "vpu_defs.h"
+#include "vpu_core.h"
+#include "vpu_mbox.h"
+#include "vpu_msgs.h"
+#include "vpu_rpc.h"
+#include "vpu_cmds.h"
+
+void csr_writel(struct vpu_core *core, u32 reg, u32 val)
+{
+ writel(val, core->base + reg);
+}
+
+u32 csr_readl(struct vpu_core *core, u32 reg)
+{
+ return readl(core->base + reg);
+}
+
+static int vpu_core_load_firmware(struct vpu_core *core)
+{
+ const struct firmware *pfw = NULL;
+ int ret = 0;
+
+ WARN_ON(!core || !core->res || !core->res->fwname);
+ if (!core->fw.virt) {
+ dev_err(core->dev, "firmware buffer is not ready\n");
+ return -EINVAL;
+ }
+
+ ret = request_firmware(&pfw, core->res->fwname, core->dev);
+ dev_dbg(core->dev, "request_firmware %s : %d\n", core->res->fwname, ret);
+ if (ret) {
+ dev_err(core->dev, "request firmware %s failed, ret = %d\n",
+ core->res->fwname, ret);
+ return ret;
+ }
+
+ if (core->fw.length < pfw->size) {
+ dev_err(core->dev, "firmware size %zu exceeds buffer size %u\n",
+ pfw->size, core->fw.length);
+ ret = -EINVAL;
+ goto exit;
+ }
+
+ memset_io(core->fw.virt, 0, core->fw.length);
+ memcpy(core->fw.virt, pfw->data, pfw->size);
+ core->fw.bytesused = pfw->size;
+ ret = vpu_iface_on_firmware_loaded(core);
+exit:
+ release_firmware(pfw);
+
+ return ret;
+}
+
+static int vpu_core_boot_done(struct vpu_core *core)
+{
+ u32 fw_version;
+
+ fw_version = vpu_iface_get_version(core);
+ dev_info(core->dev, "%s firmware version: %d.%d.%d\n",
+ vpu_core_type_desc(core->type),
+ (fw_version >> 16) & 0xff,
+ (fw_version >> 8) & 0xff,
+ fw_version & 0xff);
+ core->supported_instance_count = vpu_iface_get_max_instance_count(core);
+ if (core->res->act_size) {
+ u32 count = core->act.length / core->res->act_size;
+
+ core->supported_instance_count = min(core->supported_instance_count, count);
+ }
+ core->fw_version = fw_version;
+ core->state = VPU_CORE_ACTIVE;
+
+ return 0;
+}
+
+static int vpu_core_wait_boot_done(struct vpu_core *core)
+{
+ int ret;
+
+ ret = wait_for_completion_timeout(&core->cmp, VPU_TIMEOUT);
+ if (!ret) {
+ dev_err(core->dev, "boot timeout\n");
+ return -EINVAL;
+ }
+ return vpu_core_boot_done(core);
+}
+
+static int vpu_core_boot(struct vpu_core *core, bool load)
+{
+ int ret;
+
+ WARN_ON(!core);
+
+ if (!core->res->standalone)
+ return 0;
+
+ reinit_completion(&core->cmp);
+ if (load) {
+ ret = vpu_core_load_firmware(core);
+ if (ret)
+ return ret;
+ }
+
+ vpu_iface_boot_core(core);
+ return vpu_core_wait_boot_done(core);
+}
+
+static int vpu_core_shutdown(struct vpu_core *core)
+{
+ if (!core->res->standalone)
+ return 0;
+ return vpu_iface_shutdown_core(core);
+}
+
+static int vpu_core_restore(struct vpu_core *core)
+{
+ int ret;
+
+ if (!core->res->standalone)
+ return 0;
+ ret = vpu_core_sw_reset(core);
+ if (ret)
+ return ret;
+
+ vpu_core_boot_done(core);
+ return vpu_iface_restore_core(core);
+}
+
+static int __vpu_alloc_dma(struct device *dev, struct vpu_buffer *buf)
+{
+ gfp_t gfp = GFP_KERNEL | GFP_DMA32;
+
+ WARN_ON(!dev || !buf);
+
+ if (!buf->length)
+ return 0;
+
+ buf->virt = dma_alloc_coherent(dev, buf->length, &buf->phys, gfp);
+ if (!buf->virt)
+ return -ENOMEM;
+
+ buf->dev = dev;
+
+ return 0;
+}
+
+void vpu_free_dma(struct vpu_buffer *buf)
+{
+ WARN_ON(!buf);
+
+ if (!buf->virt || !buf->dev)
+ return;
+
+ dma_free_coherent(buf->dev, buf->length, buf->virt, buf->phys);
+ buf->virt = NULL;
+ buf->phys = 0;
+ buf->length = 0;
+ buf->bytesused = 0;
+ buf->dev = NULL;
+}
+
+int vpu_alloc_dma(struct vpu_core *core, struct vpu_buffer *buf)
+{
+ WARN_ON(!core || !buf);
+
+ return __vpu_alloc_dma(core->dev, buf);
+}
+
+static void vpu_core_check_hang(struct vpu_core *core)
+{
+ if (core->hang_mask)
+ core->state = VPU_CORE_HANG;
+}
+
+static struct vpu_core *vpu_core_find_proper_by_type(struct vpu_dev *vpu, u32 type)
+{
+ struct vpu_core *core = NULL;
+ int request_count = INT_MAX;
+ struct vpu_core *c;
+
+ WARN_ON(!vpu);
+
+ list_for_each_entry(c, &vpu->cores, list) {
+ dev_dbg(c->dev, "instance_mask = 0x%lx, state = %d\n",
+ c->instance_mask,
+ c->state);
+ if (c->type != type)
+ continue;
+ if (c->state == VPU_CORE_DEINIT) {
+ core = c;
+ break;
+ }
+ vpu_core_check_hang(c);
+ if (c->state != VPU_CORE_ACTIVE)
+ continue;
+ if (c->request_count < request_count) {
+ request_count = c->request_count;
+ core = c;
+ }
+ if (!request_count)
+ break;
+ }
+
+ return core;
+}
+
+static bool vpu_core_is_exist(struct vpu_dev *vpu, struct vpu_core *core)
+{
+ struct vpu_core *c;
+
+ list_for_each_entry(c, &vpu->cores, list) {
+ if (c == core)
+ return true;
+ }
+
+ return false;
+}
+
+static void vpu_core_get_vpu(struct vpu_core *core)
+{
+ core->vpu->get_vpu(core->vpu);
+ if (core->type == VPU_CORE_TYPE_ENC)
+ core->vpu->get_enc(core->vpu);
+ if (core->type == VPU_CORE_TYPE_DEC)
+ core->vpu->get_dec(core->vpu);
+}
+
+static int vpu_core_register(struct device *dev, struct vpu_core *core)
+{
+ struct vpu_dev *vpu = dev_get_drvdata(dev);
+ int ret = 0;
+
+ dev_dbg(core->dev, "register core %s\n", vpu_core_type_desc(core->type));
+ if (vpu_core_is_exist(vpu, core))
+ return 0;
+
+ core->workqueue = alloc_workqueue("vpu", WQ_UNBOUND | WQ_MEM_RECLAIM, 1);
+ if (!core->workqueue) {
+ dev_err(core->dev, "fail to alloc workqueue\n");
+ return -ENOMEM;
+ }
+ INIT_WORK(&core->msg_work, vpu_msg_run_work);
+ INIT_DELAYED_WORK(&core->msg_delayed_work, vpu_msg_delayed_work);
+ core->msg_buffer_size = roundup_pow_of_two(VPU_MSG_BUFFER_SIZE);
+ core->msg_buffer = vzalloc(core->msg_buffer_size);
+ if (!core->msg_buffer) {
+ dev_err(core->dev, "failed to allocate buffer for msg fifo\n");
+ ret = -ENOMEM;
+ goto error;
+ }
+ ret = kfifo_init(&core->msg_fifo, core->msg_buffer, core->msg_buffer_size);
+ if (ret) {
+ dev_err(core->dev, "failed to init kfifo\n");
+ goto error;
+ }
+
+ list_add_tail(&core->list, &vpu->cores);
+
+ vpu_core_get_vpu(core);
+
+ if (vpu_iface_get_power_state(core))
+ ret = vpu_core_restore(core);
+ if (ret)
+ goto error;
+
+ return 0;
+error:
+ if (core->msg_buffer) {
+ vfree(core->msg_buffer);
+ core->msg_buffer = NULL;
+ }
+ if (core->workqueue) {
+ destroy_workqueue(core->workqueue);
+ core->workqueue = NULL;
+ }
+ return ret;
+}
+
+static void vpu_core_put_vpu(struct vpu_core *core)
+{
+ if (core->type == VPU_CORE_TYPE_ENC)
+ core->vpu->put_enc(core->vpu);
+ if (core->type == VPU_CORE_TYPE_DEC)
+ core->vpu->put_dec(core->vpu);
+ core->vpu->put_vpu(core->vpu);
+}
+
+static int vpu_core_unregister(struct device *dev, struct vpu_core *core)
+{
+ list_del_init(&core->list);
+
+ vpu_core_put_vpu(core);
+ core->vpu = NULL;
+ vfree(core->msg_buffer);
+ core->msg_buffer = NULL;
+
+ if (core->workqueue) {
+ cancel_work_sync(&core->msg_work);
+ cancel_delayed_work_sync(&core->msg_delayed_work);
+ destroy_workqueue(core->workqueue);
+ core->workqueue = NULL;
+ }
+
+ return 0;
+}
+
+static int vpu_core_acquire_instance(struct vpu_core *core)
+{
+ int id;
+
+ WARN_ON(!core);
+
+ id = ffz(core->instance_mask);
+ if (id >= core->supported_instance_count)
+ return -EINVAL;
+
+ set_bit(id, &core->instance_mask);
+
+ return id;
+}
+
+static void vpu_core_release_instance(struct vpu_core *core, int id)
+{
+ WARN_ON(!core);
+
+ if (id < 0 || id >= core->supported_instance_count)
+ return;
+
+ clear_bit(id, &core->instance_mask);
+}
+
+struct vpu_inst *vpu_inst_get(struct vpu_inst *inst)
+{
+ if (!inst)
+ return NULL;
+
+ atomic_inc(&inst->ref_count);
+
+ return inst;
+}
+
+void vpu_inst_put(struct vpu_inst *inst)
+{
+ if (!inst)
+ return;
+ if (atomic_dec_and_test(&inst->ref_count)) {
+ if (inst->release)
+ inst->release(inst);
+ }
+}
+
+struct vpu_core *vpu_request_core(struct vpu_dev *vpu, enum vpu_core_type type)
+{
+ struct vpu_core *core = NULL;
+ int ret;
+
+ mutex_lock(&vpu->lock);
+
+ core = vpu_core_find_proper_by_type(vpu, type);
+ if (!core)
+ goto exit;
+
+ mutex_lock(&core->lock);
+ pm_runtime_get_sync(core->dev);
+
+ if (core->state == VPU_CORE_DEINIT) {
+ ret = vpu_core_boot(core, true);
+ if (ret) {
+ pm_runtime_put_sync(core->dev);
+ mutex_unlock(&core->lock);
+ core = NULL;
+ goto exit;
+ }
+ }
+
+ core->request_count++;
+
+ mutex_unlock(&core->lock);
+exit:
+ mutex_unlock(&vpu->lock);
+
+ return core;
+}
+
+void vpu_release_core(struct vpu_core *core)
+{
+ if (!core)
+ return;
+
+ mutex_lock(&core->lock);
+ pm_runtime_put_sync(core->dev);
+ if (core->request_count)
+ core->request_count--;
+ mutex_unlock(&core->lock);
+}
+
+int vpu_inst_register(struct vpu_inst *inst)
+{
+ struct vpu_dev *vpu;
+ struct vpu_core *core;
+ int ret = 0;
+
+ WARN_ON(!inst || !inst->vpu);
+
+ vpu = inst->vpu;
+ core = inst->core;
+ if (!core) {
+ core = vpu_request_core(vpu, inst->type);
+ if (!core) {
+ dev_err(vpu->dev, "there is no vpu core for %s\n",
+ vpu_core_type_desc(inst->type));
+ return -EINVAL;
+ }
+ inst->core = core;
+ inst->dev = get_device(core->dev);
+ }
+
+ mutex_lock(&core->lock);
+ if (inst->id >= 0 && inst->id < core->supported_instance_count)
+ goto exit;
+
+ ret = vpu_core_acquire_instance(core);
+ if (ret < 0)
+ goto exit;
+
+ vpu_trace(inst->dev, "[%d] %p\n", ret, inst);
+ inst->id = ret;
+ list_add_tail(&inst->list, &core->instances);
+ ret = 0;
+ if (core->res->act_size) {
+ inst->act.phys = core->act.phys + core->res->act_size * inst->id;
+ inst->act.virt = core->act.virt + core->res->act_size * inst->id;
+ inst->act.length = core->res->act_size;
+ }
+ vpu_inst_create_dbgfs_file(inst);
+exit:
+ mutex_unlock(&core->lock);
+
+ if (ret)
+ dev_err(core->dev, "failed to register instance\n");
+ return ret;
+}
+
+int vpu_inst_unregister(struct vpu_inst *inst)
+{
+ struct vpu_core *core;
+
+ WARN_ON(!inst);
+
+ if (!inst->core)
+ return 0;
+
+ core = inst->core;
+ vpu_clear_request(inst);
+ mutex_lock(&core->lock);
+ if (inst->id >= 0 && inst->id < core->supported_instance_count) {
+ vpu_inst_remove_dbgfs_file(inst);
+ list_del_init(&inst->list);
+ vpu_core_release_instance(core, inst->id);
+ inst->id = VPU_INST_NULL_ID;
+ }
+ vpu_core_check_hang(core);
+ if (core->state == VPU_CORE_HANG && !core->instance_mask) {
+ dev_info(core->dev, "reset hang core\n");
+ if (!vpu_core_sw_reset(core)) {
+ core->state = VPU_CORE_ACTIVE;
+ core->hang_mask = 0;
+ }
+ }
+ mutex_unlock(&core->lock);
+
+ return 0;
+}
+
+struct vpu_inst *vpu_core_find_instance(struct vpu_core *core, u32 index)
+{
+ struct vpu_inst *inst = NULL;
+ struct vpu_inst *tmp;
+
+ mutex_lock(&core->lock);
+ if (!test_bit(index, &core->instance_mask))
+ goto exit;
+ list_for_each_entry(tmp, &core->instances, list) {
+ if (tmp->id == index) {
+ inst = vpu_inst_get(tmp);
+ break;
+ }
+ }
+exit:
+ mutex_unlock(&core->lock);
+
+ return inst;
+}
+
+const struct vpu_core_resources *vpu_get_resource(struct vpu_inst *inst)
+{
+ struct vpu_dev *vpu;
+ struct vpu_core *core = NULL;
+ const struct vpu_core_resources *res = NULL;
+
+ if (!inst || !inst->vpu)
+ return NULL;
+
+ if (inst->core && inst->core->res)
+ return inst->core->res;
+
+ vpu = inst->vpu;
+ mutex_lock(&vpu->lock);
+ list_for_each_entry(core, &vpu->cores, list) {
+ if (core->type == inst->type) {
+ res = core->res;
+ break;
+ }
+ }
+ mutex_unlock(&vpu->lock);
+
+ return res;
+}
+
+static int vpu_core_parse_dt(struct vpu_core *core, struct device_node *np)
+{
+ struct device_node *node;
+ struct resource res;
+
+ if (of_count_phandle_with_args(np, "memory-region", NULL) < 2) {
+ dev_err(core->dev, "need 2 memory-region for boot and rpc\n");
+ return -ENODEV;
+ }
+
+ node = of_parse_phandle(np, "memory-region", 0);
+ if (!node) {
+ dev_err(core->dev, "boot-region of_parse_phandle error\n");
+ return -ENODEV;
+ }
+ if (of_address_to_resource(node, 0, &res)) {
+ dev_err(core->dev, "boot-region of_address_to_resource error\n");
+ of_node_put(node);
+ return -EINVAL;
+ }
+ core->fw.phys = res.start;
+ core->fw.length = resource_size(&res);
+ of_node_put(node);
+
+ node = of_parse_phandle(np, "memory-region", 1);
+ if (!node) {
+ dev_err(core->dev, "rpc-region of_parse_phandle error\n");
+ return -ENODEV;
+ }
+ if (of_address_to_resource(node, 0, &res)) {
+ dev_err(core->dev, "rpc-region of_address_to_resource error\n");
+ of_node_put(node);
+ return -EINVAL;
+ }
+ core->rpc.phys = res.start;
+ core->rpc.length = resource_size(&res);
+ of_node_put(node);
+
+ if (core->rpc.length < core->res->rpc_size + core->res->fwlog_size) {
+ dev_err(core->dev, "the rpc-region <%pad, 0x%x> is too small\n",
+ &core->rpc.phys, core->rpc.length);
+ return -EINVAL;
+ }
+
+ core->fw.virt = ioremap_wc(core->fw.phys, core->fw.length);
+ if (!core->fw.virt) {
+ dev_err(core->dev, "failed to map boot-region\n");
+ return -ENOMEM;
+ }
+ core->rpc.virt = ioremap_wc(core->rpc.phys, core->rpc.length);
+ if (!core->rpc.virt) {
+ dev_err(core->dev, "failed to map rpc-region\n");
+ return -ENOMEM;
+ }
+ memset_io(core->rpc.virt, 0, core->rpc.length);
+
+ if (vpu_iface_check_memory_region(core,
+ core->rpc.phys,
+ core->rpc.length) != VPU_CORE_MEMORY_UNCACHED) {
+ dev_err(core->dev, "rpc region<%pad, 0x%x> isn't uncached\n",
+ &core->rpc.phys, core->rpc.length);
+ return -EINVAL;
+ }
+
+ core->log.phys = core->rpc.phys + core->res->rpc_size;
+ core->log.virt = core->rpc.virt + core->res->rpc_size;
+ core->log.length = core->res->fwlog_size;
+ core->act.phys = core->log.phys + core->log.length;
+ core->act.virt = core->log.virt + core->log.length;
+ core->act.length = core->rpc.length - core->res->rpc_size - core->log.length;
+ core->rpc.length = core->res->rpc_size;
+
+ return 0;
+}
+
+static int vpu_core_probe(struct platform_device *pdev)
+{
+ struct device *dev = &pdev->dev;
+ struct vpu_core *core;
+ struct vpu_dev *vpu = dev_get_drvdata(dev->parent);
+ struct vpu_shared_addr *iface;
+ u32 iface_data_size;
+ int ret;
+
+ dev_dbg(dev, "probe\n");
+ if (!vpu)
+ return -EINVAL;
+ core = devm_kzalloc(dev, sizeof(*core), GFP_KERNEL);
+ if (!core)
+ return -ENOMEM;
+
+ core->pdev = pdev;
+ core->dev = dev;
+ platform_set_drvdata(pdev, core);
+ core->vpu = vpu;
+ INIT_LIST_HEAD(&core->instances);
+ mutex_init(&core->lock);
+ mutex_init(&core->cmd_lock);
+ init_completion(&core->cmp);
+ init_waitqueue_head(&core->ack_wq);
+ core->state = VPU_CORE_DEINIT;
+
+ core->res = of_device_get_match_data(dev);
+ if (!core->res)
+ return -ENODEV;
+
+ core->type = core->res->type;
+ core->id = of_alias_get_id(dev->of_node, "vpu_core");
+ if (core->id < 0) {
+ dev_err(dev, "can't get vpu core id\n");
+ return core->id;
+ }
+ dev_info(core->dev, "[%d] = %s\n", core->id, vpu_core_type_desc(core->type));
+ ret = vpu_core_parse_dt(core, dev->of_node);
+ if (ret)
+ return ret;
+
+ core->base = devm_platform_ioremap_resource(pdev, 0);
+ if (IS_ERR(core->base))
+ return PTR_ERR(core->base);
+
+ if (!vpu_iface_check_codec(core)) {
+ dev_err(core->dev, "codec is not supported\n");
+ return -EINVAL;
+ }
+
+ ret = vpu_mbox_init(core);
+ if (ret)
+ return ret;
+
+ iface = devm_kzalloc(dev, sizeof(*iface), GFP_KERNEL);
+ if (!iface)
+ return -ENOMEM;
+
+ iface_data_size = vpu_iface_get_data_size(core);
+ if (iface_data_size) {
+ iface->priv = devm_kzalloc(dev, iface_data_size, GFP_KERNEL);
+ if (!iface->priv)
+ return -ENOMEM;
+ }
+
+ ret = vpu_iface_init(core, iface, &core->rpc, core->fw.phys);
+ if (ret) {
+ dev_err(core->dev, "init iface fail, ret = %d\n", ret);
+ return ret;
+ }
+
+ vpu_iface_config_system(core, vpu->res->mreg_base, vpu->base);
+ vpu_iface_set_log_buf(core, &core->log);
+
+ pm_runtime_enable(dev);
+ ret = pm_runtime_get_sync(dev);
+ if (ret < 0) {
+ pm_runtime_put_noidle(dev);
+ pm_runtime_set_suspended(dev);
+ goto err_runtime_disable;
+ }
+
+ ret = vpu_core_register(dev->parent, core);
+ if (ret)
+ goto err_core_register;
+ core->parent = dev->parent;
+
+ pm_runtime_put_sync(dev);
+ vpu_core_create_dbgfs_file(core);
+
+ return 0;
+
+err_core_register:
+ pm_runtime_put_sync(dev);
+err_runtime_disable:
+ pm_runtime_disable(dev);
+
+ return ret;
+}
+
+static int vpu_core_remove(struct platform_device *pdev)
+{
+ struct device *dev = &pdev->dev;
+ struct vpu_core *core = platform_get_drvdata(pdev);
+ int ret;
+
+ vpu_core_remove_dbgfs_file(core);
+ ret = pm_runtime_get_sync(dev);
+ WARN_ON(ret < 0);
+
+ vpu_core_shutdown(core);
+ pm_runtime_put_sync(dev);
+ pm_runtime_disable(dev);
+
+ vpu_core_unregister(core->parent, core);
+ iounmap(core->fw.virt);
+ iounmap(core->rpc.virt);
+ mutex_destroy(&core->lock);
+ mutex_destroy(&core->cmd_lock);
+
+ return 0;
+}
+
+static int __maybe_unused vpu_core_runtime_resume(struct device *dev)
+{
+ struct vpu_core *core = dev_get_drvdata(dev);
+
+ return vpu_mbox_request(core);
+}
+
+static int __maybe_unused vpu_core_runtime_suspend(struct device *dev)
+{
+ struct vpu_core *core = dev_get_drvdata(dev);
+
+ vpu_mbox_free(core);
+ return 0;
+}
+
+static void vpu_core_cancel_work(struct vpu_core *core)
+{
+ struct vpu_inst *inst = NULL;
+
+ cancel_work_sync(&core->msg_work);
+ cancel_delayed_work_sync(&core->msg_delayed_work);
+
+ mutex_lock(&core->lock);
+ list_for_each_entry(inst, &core->instances, list)
+ cancel_work_sync(&inst->msg_work);
+ mutex_unlock(&core->lock);
+}
+
+static void vpu_core_resume_work(struct vpu_core *core)
+{
+ struct vpu_inst *inst = NULL;
+ unsigned long delay = msecs_to_jiffies(10);
+
+ queue_work(core->workqueue, &core->msg_work);
+ queue_delayed_work(core->workqueue, &core->msg_delayed_work, delay);
+
+ mutex_lock(&core->lock);
+ list_for_each_entry(inst, &core->instances, list)
+ queue_work(inst->workqueue, &inst->msg_work);
+ mutex_unlock(&core->lock);
+}
+
+static int __maybe_unused vpu_core_resume(struct device *dev)
+{
+ struct vpu_core *core = dev_get_drvdata(dev);
+ int ret = 0;
+
+ if (!core->res->standalone)
+ return 0;
+
+ mutex_lock(&core->lock);
+ pm_runtime_get_sync(dev);
+ vpu_core_get_vpu(core);
+ if (core->state != VPU_CORE_SNAPSHOT)
+ goto exit;
+
+ if (!vpu_iface_get_power_state(core)) {
+ if (!list_empty(&core->instances)) {
+ ret = vpu_core_boot(core, false);
+ if (ret) {
+ dev_err(core->dev, "%s boot fail\n", __func__);
+ core->state = VPU_CORE_DEINIT;
+ goto exit;
+ }
+ } else {
+ core->state = VPU_CORE_DEINIT;
+ }
+ } else {
+ if (!list_empty(&core->instances)) {
+ ret = vpu_core_sw_reset(core);
+ if (ret) {
+ dev_err(core->dev, "%s sw_reset fail\n", __func__);
+ core->state = VPU_CORE_HANG;
+ goto exit;
+ }
+ }
+ core->state = VPU_CORE_ACTIVE;
+ }
+
+exit:
+ pm_runtime_put_sync(dev);
+ mutex_unlock(&core->lock);
+
+ vpu_core_resume_work(core);
+ return ret;
+}
+
+static int __maybe_unused vpu_core_suspend(struct device *dev)
+{
+ struct vpu_core *core = dev_get_drvdata(dev);
+ int ret = 0;
+
+ if (!core->res->standalone)
+ return 0;
+
+ mutex_lock(&core->lock);
+ if (core->state == VPU_CORE_ACTIVE) {
+ if (!list_empty(&core->instances)) {
+ ret = vpu_core_snapshot(core);
+ if (ret) {
+ mutex_unlock(&core->lock);
+ return ret;
+ }
+ }
+
+ core->state = VPU_CORE_SNAPSHOT;
+ }
+ mutex_unlock(&core->lock);
+
+ vpu_core_cancel_work(core);
+
+ mutex_lock(&core->lock);
+ vpu_core_put_vpu(core);
+ mutex_unlock(&core->lock);
+ return ret;
+}
+
+static const struct dev_pm_ops vpu_core_pm_ops = {
+ SET_RUNTIME_PM_OPS(vpu_core_runtime_suspend, vpu_core_runtime_resume, NULL)
+ SET_SYSTEM_SLEEP_PM_OPS(vpu_core_suspend, vpu_core_resume)
+};
+
+static struct vpu_core_resources imx8q_enc = {
+ .type = VPU_CORE_TYPE_ENC,
+ .fwname = "vpu/vpu_fw_imx8_enc.bin",
+ .stride = 16,
+ .max_width = 1920,
+ .max_height = 1920,
+ .min_width = 64,
+ .min_height = 48,
+ .step_width = 2,
+ .step_height = 2,
+ .rpc_size = 0x80000,
+ .fwlog_size = 0x80000,
+ .act_size = 0xc0000,
+ .standalone = true,
+};
+
+static struct vpu_core_resources imx8q_dec = {
+ .type = VPU_CORE_TYPE_DEC,
+ .fwname = "vpu/vpu_fw_imx8_dec.bin",
+ .stride = 256,
+ .max_width = 8188,
+ .max_height = 8188,
+ .min_width = 16,
+ .min_height = 16,
+ .step_width = 1,
+ .step_height = 1,
+ .rpc_size = 0x80000,
+ .fwlog_size = 0x80000,
+ .standalone = true,
+};
+
+static const struct of_device_id vpu_core_dt_match[] = {
+ { .compatible = "nxp,imx8q-vpu-encoder", .data = &imx8q_enc },
+ { .compatible = "nxp,imx8q-vpu-decoder", .data = &imx8q_dec },
+ {}
+};
+MODULE_DEVICE_TABLE(of, vpu_core_dt_match);
+
+static struct platform_driver amphion_vpu_core_driver = {
+ .probe = vpu_core_probe,
+ .remove = vpu_core_remove,
+ .driver = {
+ .name = "amphion-vpu-core",
+ .of_match_table = vpu_core_dt_match,
+ .pm = &vpu_core_pm_ops,
+ },
+};
+
+int __init vpu_core_driver_init(void)
+{
+ return platform_driver_register(&amphion_vpu_core_driver);
+}
+
+void __exit vpu_core_driver_exit(void)
+{
+ platform_driver_unregister(&amphion_vpu_core_driver);
+}
diff --git a/drivers/media/platform/amphion/vpu_core.h b/drivers/media/platform/amphion/vpu_core.h
new file mode 100644
index 000000000000..00a662997da4
--- /dev/null
+++ b/drivers/media/platform/amphion/vpu_core.h
@@ -0,0 +1,15 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright 2020-2021 NXP
+ */
+
+#ifndef _AMPHION_VPU_CORE_H
+#define _AMPHION_VPU_CORE_H
+
+void csr_writel(struct vpu_core *core, u32 reg, u32 val);
+u32 csr_readl(struct vpu_core *core, u32 reg);
+int vpu_alloc_dma(struct vpu_core *core, struct vpu_buffer *buf);
+void vpu_free_dma(struct vpu_buffer *buf);
+struct vpu_inst *vpu_core_find_instance(struct vpu_core *core, u32 index);
+
+#endif
diff --git a/drivers/media/platform/amphion/vpu_dbg.c b/drivers/media/platform/amphion/vpu_dbg.c
new file mode 100644
index 000000000000..2e7e11101f99
--- /dev/null
+++ b/drivers/media/platform/amphion/vpu_dbg.c
@@ -0,0 +1,495 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright 2020-2021 NXP
+ */
+
+#include <linux/init.h>
+#include <linux/device.h>
+#include <linux/ioctl.h>
+#include <linux/list.h>
+#include <linux/module.h>
+#include <linux/kernel.h>
+#include <linux/types.h>
+#include <linux/pm_runtime.h>
+#include <media/v4l2-device.h>
+#include <linux/debugfs.h>
+#include "vpu.h"
+#include "vpu_defs.h"
+#include "vpu_helpers.h"
+#include "vpu_cmds.h"
+#include "vpu_rpc.h"
+
+struct print_buf_desc {
+ u32 start_h_phy;
+ u32 start_h_vir;
+ u32 start_m;
+ u32 bytes;
+ u32 read;
+ u32 write;
+ char buffer[];
+};
+
+static const char * const vb2_stat_name[] = {
+ [VB2_BUF_STATE_DEQUEUED] = "dequeued",
+ [VB2_BUF_STATE_IN_REQUEST] = "in_request",
+ [VB2_BUF_STATE_PREPARING] = "preparing",
+ [VB2_BUF_STATE_QUEUED] = "queued",
+ [VB2_BUF_STATE_ACTIVE] = "active",
+ [VB2_BUF_STATE_DONE] = "done",
+ [VB2_BUF_STATE_ERROR] = "error",
+};
+
+static const char * const vpu_stat_name[] = {
+ [VPU_BUF_STATE_IDLE] = "idle",
+ [VPU_BUF_STATE_INUSE] = "inuse",
+ [VPU_BUF_STATE_DECODED] = "decoded",
+ [VPU_BUF_STATE_READY] = "ready",
+ [VPU_BUF_STATE_SKIP] = "skip",
+ [VPU_BUF_STATE_ERROR] = "error",
+};
+
+static int vpu_dbg_instance(struct seq_file *s, void *data)
+{
+ struct vpu_inst *inst = s->private;
+ char str[128];
+ int num;
+ struct vb2_queue *vq;
+ int i;
+
+ num = scnprintf(str, sizeof(str), "[%s]\n", vpu_core_type_desc(inst->type));
+ if (seq_write(s, str, num))
+ return 0;
+
+ num = scnprintf(str, sizeof(str), "tgid = %d, pid = %d\n", inst->tgid, inst->pid);
+ if (seq_write(s, str, num))
+ return 0;
+ num = scnprintf(str, sizeof(str), "state = %d\n", inst->state);
+ if (seq_write(s, str, num))
+ return 0;
+ num = scnprintf(str, sizeof(str),
+ "min_buffer_out = %d, min_buffer_cap = %d\n",
+ inst->min_buffer_out, inst->min_buffer_cap);
+ if (seq_write(s, str, num))
+ return 0;
+
+ vq = v4l2_m2m_get_src_vq(inst->fh.m2m_ctx);
+ num = scnprintf(str, sizeof(str),
+ "output (%2d, %2d): fmt = %c%c%c%c %d x %d, %d;",
+ vb2_is_streaming(vq),
+ vq->num_buffers,
+ inst->out_format.pixfmt,
+ inst->out_format.pixfmt >> 8,
+ inst->out_format.pixfmt >> 16,
+ inst->out_format.pixfmt >> 24,
+ inst->out_format.width,
+ inst->out_format.height,
+ vq->last_buffer_dequeued);
+ if (seq_write(s, str, num))
+ return 0;
+ for (i = 0; i < inst->out_format.num_planes; i++) {
+ num = scnprintf(str, sizeof(str), " %d(%d)",
+ inst->out_format.sizeimage[i],
+ inst->out_format.bytesperline[i]);
+ if (seq_write(s, str, num))
+ return 0;
+ }
+ if (seq_write(s, "\n", 1))
+ return 0;
+
+ vq = v4l2_m2m_get_dst_vq(inst->fh.m2m_ctx);
+ num = scnprintf(str, sizeof(str),
+ "capture(%2d, %2d): fmt = %c%c%c%c %d x %d, %d;",
+ vb2_is_streaming(vq),
+ vq->num_buffers,
+ inst->cap_format.pixfmt,
+ inst->cap_format.pixfmt >> 8,
+ inst->cap_format.pixfmt >> 16,
+ inst->cap_format.pixfmt >> 24,
+ inst->cap_format.width,
+ inst->cap_format.height,
+ vq->last_buffer_dequeued);
+ if (seq_write(s, str, num))
+ return 0;
+ for (i = 0; i < inst->cap_format.num_planes; i++) {
+ num = scnprintf(str, sizeof(str), " %d(%d)",
+ inst->cap_format.sizeimage[i],
+ inst->cap_format.bytesperline[i]);
+ if (seq_write(s, str, num))
+ return 0;
+ }
+ if (seq_write(s, "\n", 1))
+ return 0;
+ num = scnprintf(str, sizeof(str), "crop: (%d, %d) %d x %d\n",
+ inst->crop.left,
+ inst->crop.top,
+ inst->crop.width,
+ inst->crop.height);
+ if (seq_write(s, str, num))
+ return 0;
+
+ vq = v4l2_m2m_get_src_vq(inst->fh.m2m_ctx);
+ for (i = 0; i < vq->num_buffers; i++) {
+ struct vb2_buffer *vb = vq->bufs[i];
+ struct vb2_v4l2_buffer *vbuf = to_vb2_v4l2_buffer(vb);
+ struct vpu_vb2_buffer *vpu_buf = to_vpu_vb2_buffer(vbuf);
+
+ if (vb->state == VB2_BUF_STATE_DEQUEUED)
+ continue;
+ num = scnprintf(str, sizeof(str),
+ "output [%2d] state = %10s, %8s\n",
+ i, vb2_stat_name[vb->state],
+ vpu_stat_name[vpu_buf->state]);
+ if (seq_write(s, str, num))
+ return 0;
+ }
+
+ vq = v4l2_m2m_get_dst_vq(inst->fh.m2m_ctx);
+ for (i = 0; i < vq->num_buffers; i++) {
+ struct vb2_buffer *vb = vq->bufs[i];
+ struct vb2_v4l2_buffer *vbuf = to_vb2_v4l2_buffer(vb);
+ struct vpu_vb2_buffer *vpu_buf = to_vpu_vb2_buffer(vbuf);
+
+ if (vb->state == VB2_BUF_STATE_DEQUEUED)
+ continue;
+ num = scnprintf(str, sizeof(str),
+ "capture[%2d] state = %10s, %8s\n",
+ i, vb2_stat_name[vb->state],
+ vpu_stat_name[vpu_buf->state]);
+ if (seq_write(s, str, num))
+ return 0;
+ }
+
+ num = scnprintf(str, sizeof(str), "sequence = %d\n", inst->sequence);
+ if (seq_write(s, str, num))
+ return 0;
+
+ if (inst->use_stream_buffer) {
+ num = scnprintf(str, sizeof(str), "stream_buffer = %d / %d, <%pad, 0x%x>\n",
+ vpu_helper_get_used_space(inst),
+ inst->stream_buffer.length,
+ &inst->stream_buffer.phys,
+ inst->stream_buffer.length);
+ if (seq_write(s, str, num))
+ return 0;
+ }
+ num = scnprintf(str, sizeof(str), "kfifo len = 0x%x\n", kfifo_len(&inst->msg_fifo));
+ if (seq_write(s, str, num))
+ return 0;
+
+ num = scnprintf(str, sizeof(str), "flow :\n");
+ if (seq_write(s, str, num))
+ return 0;
+
+ mutex_lock(&inst->core->cmd_lock);
+ for (i = 0; i < ARRAY_SIZE(inst->flows); i++) {
+ u32 idx = (inst->flow_idx + i) % (ARRAY_SIZE(inst->flows));
+
+ if (!inst->flows[idx])
+ continue;
+ num = scnprintf(str, sizeof(str), "\t[%s]0x%x\n",
+ inst->flows[idx] >= VPU_MSG_ID_NOOP ? "M" : "C",
+ inst->flows[idx]);
+ if (seq_write(s, str, num)) {
+ mutex_unlock(&inst->core->cmd_lock);
+ return 0;
+ }
+ }
+ mutex_unlock(&inst->core->cmd_lock);
+
+ i = 0;
+ while (true) {
+ num = call_vop(inst, get_debug_info, str, sizeof(str), i++);
+ if (num <= 0)
+ break;
+ if (seq_write(s, str, num))
+ return 0;
+ }
+
+ return 0;
+}
+
+static int vpu_dbg_core(struct seq_file *s, void *data)
+{
+ struct vpu_core *core = s->private;
+ struct vpu_shared_addr *iface = core->iface;
+ char str[128];
+ int num;
+
+ num = scnprintf(str, sizeof(str), "[%s]\n", vpu_core_type_desc(core->type));
+ if (seq_write(s, str, num))
+ return 0;
+
+ num = scnprintf(str, sizeof(str), "boot_region = <%pad, 0x%x>\n",
+ &core->fw.phys, core->fw.length);
+ if (seq_write(s, str, num))
+ return 0;
+ num = scnprintf(str, sizeof(str), "rpc_region = <%pad, 0x%x> used = 0x%x\n",
+ &core->rpc.phys, core->rpc.length, core->rpc.bytesused);
+ if (seq_write(s, str, num))
+ return 0;
+ num = scnprintf(str, sizeof(str), "fwlog_region = <%pad, 0x%x>\n",
+ &core->log.phys, core->log.length);
+ if (seq_write(s, str, num))
+ return 0;
+
+ num = scnprintf(str, sizeof(str), "state = %d\n", core->state);
+ if (seq_write(s, str, num))
+ return 0;
+ if (core->state == VPU_CORE_DEINIT)
+ return 0;
+ num = scnprintf(str, sizeof(str), "fw version = %d.%d.%d\n",
+ (core->fw_version >> 16) & 0xff,
+ (core->fw_version >> 8) & 0xff,
+ core->fw_version & 0xff);
+ if (seq_write(s, str, num))
+ return 0;
+ num = scnprintf(str, sizeof(str), "instances = %d/%d (0x%02lx), %d\n",
+ hweight32(core->instance_mask),
+ core->supported_instance_count,
+ core->instance_mask,
+ core->request_count);
+ if (seq_write(s, str, num))
+ return 0;
+ num = scnprintf(str, sizeof(str), "kfifo len = 0x%x\n", kfifo_len(&core->msg_fifo));
+ if (seq_write(s, str, num))
+ return 0;
+ num = scnprintf(str, sizeof(str),
+ "cmd_buf:[0x%x, 0x%x], wptr = 0x%x, rptr = 0x%x\n",
+ iface->cmd_desc->start,
+ iface->cmd_desc->end,
+ iface->cmd_desc->wptr,
+ iface->cmd_desc->rptr);
+ if (seq_write(s, str, num))
+ return 0;
+ num = scnprintf(str, sizeof(str),
+ "msg_buf:[0x%x, 0x%x], wptr = 0x%x, rptr = 0x%x\n",
+ iface->msg_desc->start,
+ iface->msg_desc->end,
+ iface->msg_desc->wptr,
+ iface->msg_desc->rptr);
+ if (seq_write(s, str, num))
+ return 0;
+
+ return 0;
+}
+
+static int vpu_dbg_fwlog(struct seq_file *s, void *data)
+{
+ struct vpu_core *core = s->private;
+ struct print_buf_desc *print_buf;
+ int length;
+ u32 rptr;
+ u32 wptr;
+ int ret = 0;
+
+ if (!core->log.virt || core->state == VPU_CORE_DEINIT)
+ return 0;
+
+ print_buf = core->log.virt;
+ rptr = print_buf->read;
+ wptr = print_buf->write;
+
+ if (rptr == wptr)
+ return 0;
+ else if (rptr < wptr)
+ length = wptr - rptr;
+ else
+ length = print_buf->bytes + wptr - rptr;
+
+ if (s->count + length >= s->size) {
+ s->count = s->size;
+ return 0;
+ }
+
+ if (rptr + length >= print_buf->bytes) {
+ int num = print_buf->bytes - rptr;
+
+ if (seq_write(s, print_buf->buffer + rptr, num))
+ ret = -1;
+ length -= num;
+ rptr = 0;
+ }
+
+ if (length) {
+ if (seq_write(s, print_buf->buffer + rptr, length))
+ ret = -1;
+ rptr += length;
+ }
+ if (!ret)
+ print_buf->read = rptr;
+
+ return 0;
+}
+
+static int vpu_dbg_inst_open(struct inode *inode, struct file *filp)
+{
+ return single_open(filp, vpu_dbg_instance, inode->i_private);
+}
+
+static ssize_t vpu_dbg_inst_write(struct file *file,
+ const char __user *user_buf, size_t size, loff_t *ppos)
+{
+ struct seq_file *s = file->private_data;
+ struct vpu_inst *inst = s->private;
+
+ vpu_session_debug(inst);
+
+ return size;
+}
+
+static ssize_t vpu_dbg_core_write(struct file *file,
+ const char __user *user_buf, size_t size, loff_t *ppos)
+{
+ struct seq_file *s = file->private_data;
+ struct vpu_core *core = s->private;
+
+ pm_runtime_get_sync(core->dev);
+ mutex_lock(&core->lock);
+ if (core->state != VPU_CORE_DEINIT && !core->instance_mask) {
+ dev_info(core->dev, "reset\n");
+ if (!vpu_core_sw_reset(core)) {
+ core->state = VPU_CORE_ACTIVE;
+ core->hang_mask = 0;
+ }
+ }
+ mutex_unlock(&core->lock);
+ pm_runtime_put_sync(core->dev);
+
+ return size;
+}
+
+static int vpu_dbg_core_open(struct inode *inode, struct file *filp)
+{
+ return single_open(filp, vpu_dbg_core, inode->i_private);
+}
+
+static int vpu_dbg_fwlog_open(struct inode *inode, struct file *filp)
+{
+ return single_open(filp, vpu_dbg_fwlog, inode->i_private);
+}
+
+static const struct file_operations vpu_dbg_inst_fops = {
+ .owner = THIS_MODULE,
+ .open = vpu_dbg_inst_open,
+ .release = single_release,
+ .read = seq_read,
+ .write = vpu_dbg_inst_write,
+};
+
+static const struct file_operations vpu_dbg_core_fops = {
+ .owner = THIS_MODULE,
+ .open = vpu_dbg_core_open,
+ .release = single_release,
+ .read = seq_read,
+ .write = vpu_dbg_core_write,
+};
+
+static const struct file_operations vpu_dbg_fwlog_fops = {
+ .owner = THIS_MODULE,
+ .open = vpu_dbg_fwlog_open,
+ .release = single_release,
+ .read = seq_read,
+};
+
+int vpu_inst_create_dbgfs_file(struct vpu_inst *inst)
+{
+ struct vpu_dev *vpu;
+ char name[64];
+
+ if (!inst || !inst->core || !inst->core->vpu)
+ return -EINVAL;
+
+ vpu = inst->core->vpu;
+ if (!vpu->debugfs)
+ return -EINVAL;
+
+ if (inst->debugfs)
+ return 0;
+
+ scnprintf(name, sizeof(name), "instance.%d.%d",
+ inst->core->id, inst->id);
+ inst->debugfs = debugfs_create_file(name,
+ VERIFY_OCTAL_PERMISSIONS(0644),
+ vpu->debugfs,
+ inst,
+ &vpu_dbg_inst_fops);
+ if (!inst->debugfs) {
+ dev_err(inst->dev, "vpu create debugfs %s fail\n", name);
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+int vpu_inst_remove_dbgfs_file(struct vpu_inst *inst)
+{
+ if (!inst)
+ return 0;
+
+ debugfs_remove(inst->debugfs);
+ inst->debugfs = NULL;
+
+ return 0;
+}
+
+int vpu_core_create_dbgfs_file(struct vpu_core *core)
+{
+ struct vpu_dev *vpu;
+ char name[64];
+
+ if (!core || !core->vpu)
+ return -EINVAL;
+
+ vpu = core->vpu;
+ if (!vpu->debugfs)
+ return -EINVAL;
+
+ if (!core->debugfs) {
+ scnprintf(name, sizeof(name), "core.%d", core->id);
+ core->debugfs = debugfs_create_file(name,
+ VERIFY_OCTAL_PERMISSIONS(0644),
+ vpu->debugfs,
+ core,
+ &vpu_dbg_core_fops);
+ if (!core->debugfs) {
+ dev_err(core->dev, "vpu create debugfs %s fail\n", name);
+ return -EINVAL;
+ }
+ }
+ if (!core->debugfs_fwlog) {
+ scnprintf(name, sizeof(name), "fwlog.%d", core->id);
+ core->debugfs_fwlog = debugfs_create_file(name,
+ VERIFY_OCTAL_PERMISSIONS(0444),
+ vpu->debugfs,
+ core,
+ &vpu_dbg_fwlog_fops);
+ if (!core->debugfs_fwlog) {
+ dev_err(core->dev, "vpu create debugfs %s fail\n", name);
+ return -EINVAL;
+ }
+ }
+
+ return 0;
+}
+
+int vpu_core_remove_dbgfs_file(struct vpu_core *core)
+{
+ if (!core)
+ return 0;
+ debugfs_remove(core->debugfs);
+ core->debugfs = NULL;
+ debugfs_remove(core->debugfs_fwlog);
+ core->debugfs_fwlog = NULL;
+
+ return 0;
+}
+
+void vpu_inst_record_flow(struct vpu_inst *inst, u32 flow)
+{
+ if (!inst)
+ return;
+
+ inst->flows[inst->flow_idx] = flow;
+ inst->flow_idx = (inst->flow_idx + 1) % (ARRAY_SIZE(inst->flows));
+}
diff --git a/drivers/media/platform/amphion/vpu_rpc.c b/drivers/media/platform/amphion/vpu_rpc.c
new file mode 100644
index 000000000000..7b5e9177e010
--- /dev/null
+++ b/drivers/media/platform/amphion/vpu_rpc.c
@@ -0,0 +1,279 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright 2020-2021 NXP
+ */
+
+#include <linux/init.h>
+#include <linux/interconnect.h>
+#include <linux/ioctl.h>
+#include <linux/list.h>
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/of_device.h>
+#include <linux/of_address.h>
+#include <linux/platform_device.h>
+#include <linux/firmware/imx/ipc.h>
+#include <linux/firmware/imx/svc/misc.h>
+#include "vpu.h"
+#include "vpu_rpc.h"
+#include "vpu_imx8q.h"
+#include "vpu_windsor.h"
+#include "vpu_malone.h"
+
+u32 vpu_iface_check_memory_region(struct vpu_core *core, dma_addr_t addr, u32 size)
+{
+ struct vpu_iface_ops *ops = vpu_core_get_iface(core);
+
+ if (!ops || !ops->check_memory_region)
+ return VPU_CORE_MEMORY_INVALID;
+
+ return ops->check_memory_region(core->fw.phys, addr, size);
+}
+
+static u32 vpu_rpc_check_buffer_space(struct vpu_rpc_buffer_desc *desc, bool write)
+{
+ u32 ptr1;
+ u32 ptr2;
+ u32 size;
+
+ WARN_ON(!desc);
+
+ size = desc->end - desc->start;
+ if (write) {
+ ptr1 = desc->wptr;
+ ptr2 = desc->rptr;
+ } else {
+ ptr1 = desc->rptr;
+ ptr2 = desc->wptr;
+ }
+
+	if (ptr1 == ptr2)
+		return write ? size : 0;
+
+ return (ptr2 + size - ptr1) % size;
+}
+
+static int vpu_rpc_send_cmd_buf(struct vpu_shared_addr *shared,
+ struct vpu_rpc_event *cmd)
+{
+ struct vpu_rpc_buffer_desc *desc;
+ u32 space = 0;
+ u32 *data;
+ u32 wptr;
+ u32 i;
+
+ WARN_ON(!shared || !shared->cmd_mem_vir || !cmd);
+
+ desc = shared->cmd_desc;
+ space = vpu_rpc_check_buffer_space(desc, true);
+ if (space < (((cmd->hdr.num + 1) << 2) + 16)) {
+		pr_err("no space in cmd buffer for cmd [%d] %d\n",
+		       cmd->hdr.index, cmd->hdr.id);
+ return -EINVAL;
+ }
+ wptr = desc->wptr;
+ data = (u32 *)(shared->cmd_mem_vir + desc->wptr - desc->start);
+ *data = 0;
+ *data |= ((cmd->hdr.index & 0xff) << 24);
+ *data |= ((cmd->hdr.num & 0xff) << 16);
+ *data |= (cmd->hdr.id & 0x3fff);
+ wptr += 4;
+ data++;
+ if (wptr >= desc->end) {
+ wptr = desc->start;
+ data = shared->cmd_mem_vir;
+ }
+
+ for (i = 0; i < cmd->hdr.num; i++) {
+ *data = cmd->data[i];
+ wptr += 4;
+ data++;
+ if (wptr >= desc->end) {
+ wptr = desc->start;
+ data = shared->cmd_mem_vir;
+ }
+ }
+
+	/* update wptr after data is written */
+ mb();
+ desc->wptr = wptr;
+
+ return 0;
+}
+
+static bool vpu_rpc_check_msg(struct vpu_shared_addr *shared)
+{
+ struct vpu_rpc_buffer_desc *desc;
+ u32 space = 0;
+ u32 msgword;
+ u32 msgnum;
+
+ WARN_ON(!shared || !shared->msg_desc);
+
+ desc = shared->msg_desc;
+	space = vpu_rpc_check_buffer_space(desc, false) >> 2;
+
+ if (space) {
+ msgword = *(u32 *)(shared->msg_mem_vir + desc->rptr - desc->start);
+ msgnum = (msgword & 0xff0000) >> 16;
+ if (msgnum <= space)
+ return true;
+ }
+
+ return false;
+}
+
+static int vpu_rpc_receive_msg_buf(struct vpu_shared_addr *shared, struct vpu_rpc_event *msg)
+{
+ struct vpu_rpc_buffer_desc *desc;
+ u32 *data;
+ u32 msgword;
+ u32 rptr;
+ u32 i;
+
+ WARN_ON(!shared || !shared->msg_desc || !msg);
+
+ if (!vpu_rpc_check_msg(shared))
+ return -EINVAL;
+
+ desc = shared->msg_desc;
+ data = (u32 *)(shared->msg_mem_vir + desc->rptr - desc->start);
+ rptr = desc->rptr;
+ msgword = *data;
+ data++;
+ rptr += 4;
+ if (rptr >= desc->end) {
+ rptr = desc->start;
+ data = shared->msg_mem_vir;
+ }
+
+ msg->hdr.index = (msgword >> 24) & 0xff;
+ msg->hdr.num = (msgword >> 16) & 0xff;
+ msg->hdr.id = msgword & 0x3fff;
+
+ if (msg->hdr.num > ARRAY_SIZE(msg->data)) {
+ pr_err("msg(%d) data length(%d) is out of range\n",
+ msg->hdr.id, msg->hdr.num);
+ return -EINVAL;
+ }
+
+ for (i = 0; i < msg->hdr.num; i++) {
+ msg->data[i] = *data;
+ data++;
+ rptr += 4;
+ if (rptr >= desc->end) {
+ rptr = desc->start;
+ data = shared->msg_mem_vir;
+ }
+ }
+
+	/* update rptr after data is read */
+ mb();
+ desc->rptr = rptr;
+
+ return 0;
+}
+
+static struct vpu_iface_ops imx8q_rpc_ops[] = {
+ [VPU_CORE_TYPE_ENC] = {
+ .check_codec = vpu_imx8q_check_codec,
+ .check_fmt = vpu_imx8q_check_fmt,
+ .boot_core = vpu_imx8q_boot_core,
+ .get_power_state = vpu_imx8q_get_power_state,
+ .on_firmware_loaded = vpu_imx8q_on_firmware_loaded,
+ .get_data_size = vpu_windsor_get_data_size,
+ .check_memory_region = vpu_imx8q_check_memory_region,
+ .init_rpc = vpu_windsor_init_rpc,
+ .set_log_buf = vpu_windsor_set_log_buf,
+ .set_system_cfg = vpu_windsor_set_system_cfg,
+ .get_version = vpu_windsor_get_version,
+ .send_cmd_buf = vpu_rpc_send_cmd_buf,
+ .receive_msg_buf = vpu_rpc_receive_msg_buf,
+ .pack_cmd = vpu_windsor_pack_cmd,
+ .convert_msg_id = vpu_windsor_convert_msg_id,
+ .unpack_msg_data = vpu_windsor_unpack_msg_data,
+ .config_memory_resource = vpu_windsor_config_memory_resource,
+ .get_stream_buffer_size = vpu_windsor_get_stream_buffer_size,
+ .config_stream_buffer = vpu_windsor_config_stream_buffer,
+ .get_stream_buffer_desc = vpu_windsor_get_stream_buffer_desc,
+ .update_stream_buffer = vpu_windsor_update_stream_buffer,
+ .set_encode_params = vpu_windsor_set_encode_params,
+ .input_frame = vpu_windsor_input_frame,
+ .get_max_instance_count = vpu_windsor_get_max_instance_count,
+ },
+ [VPU_CORE_TYPE_DEC] = {
+ .check_codec = vpu_imx8q_check_codec,
+ .check_fmt = vpu_imx8q_check_fmt,
+ .boot_core = vpu_imx8q_boot_core,
+ .get_power_state = vpu_imx8q_get_power_state,
+ .on_firmware_loaded = vpu_imx8q_on_firmware_loaded,
+ .get_data_size = vpu_malone_get_data_size,
+ .check_memory_region = vpu_imx8q_check_memory_region,
+ .init_rpc = vpu_malone_init_rpc,
+ .set_log_buf = vpu_malone_set_log_buf,
+ .set_system_cfg = vpu_malone_set_system_cfg,
+ .get_version = vpu_malone_get_version,
+ .send_cmd_buf = vpu_rpc_send_cmd_buf,
+ .receive_msg_buf = vpu_rpc_receive_msg_buf,
+ .get_stream_buffer_size = vpu_malone_get_stream_buffer_size,
+ .config_stream_buffer = vpu_malone_config_stream_buffer,
+ .set_decode_params = vpu_malone_set_decode_params,
+ .pack_cmd = vpu_malone_pack_cmd,
+ .convert_msg_id = vpu_malone_convert_msg_id,
+ .unpack_msg_data = vpu_malone_unpack_msg_data,
+ .get_stream_buffer_desc = vpu_malone_get_stream_buffer_desc,
+ .update_stream_buffer = vpu_malone_update_stream_buffer,
+ .add_scode = vpu_malone_add_scode,
+ .input_frame = vpu_malone_input_frame,
+ .pre_send_cmd = vpu_malone_pre_cmd,
+ .post_send_cmd = vpu_malone_post_cmd,
+ .init_instance = vpu_malone_init_instance,
+ .get_max_instance_count = vpu_malone_get_max_instance_count,
+ },
+};
+
+static struct vpu_iface_ops *vpu_get_iface(struct vpu_dev *vpu, enum vpu_core_type type)
+{
+ struct vpu_iface_ops *rpc_ops = NULL;
+ u32 size = 0;
+
+ WARN_ON(!vpu || !vpu->res);
+
+ switch (vpu->res->plat_type) {
+ case IMX8QXP:
+ case IMX8QM:
+ rpc_ops = imx8q_rpc_ops;
+ size = ARRAY_SIZE(imx8q_rpc_ops);
+ break;
+ default:
+ return NULL;
+ }
+
+ if (type >= size)
+ return NULL;
+
+ return &rpc_ops[type];
+}
+
+struct vpu_iface_ops *vpu_core_get_iface(struct vpu_core *core)
+{
+ WARN_ON(!core || !core->vpu);
+
+ return vpu_get_iface(core->vpu, core->type);
+}
+
+struct vpu_iface_ops *vpu_inst_get_iface(struct vpu_inst *inst)
+{
+ WARN_ON(!inst || !inst->vpu);
+
+ if (inst->core)
+ return vpu_core_get_iface(inst->core);
+
+ return vpu_get_iface(inst->vpu, inst->type);
+}
diff --git a/drivers/media/platform/amphion/vpu_rpc.h b/drivers/media/platform/amphion/vpu_rpc.h
new file mode 100644
index 000000000000..abe998e5a5be
--- /dev/null
+++ b/drivers/media/platform/amphion/vpu_rpc.h
@@ -0,0 +1,464 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright 2020-2021 NXP
+ */
+
+#ifndef _AMPHION_VPU_RPC_H
+#define _AMPHION_VPU_RPC_H
+
+#include <media/videobuf2-core.h>
+#include "vpu_codec.h"
+
+struct vpu_rpc_buffer_desc {
+ u32 wptr;
+ u32 rptr;
+ u32 start;
+ u32 end;
+};
+
+struct vpu_shared_addr {
+ void *iface;
+ struct vpu_rpc_buffer_desc *cmd_desc;
+ void *cmd_mem_vir;
+ struct vpu_rpc_buffer_desc *msg_desc;
+ void *msg_mem_vir;
+
+ unsigned long boot_addr;
+ struct vpu_core *core;
+ void *priv;
+};
+
+struct vpu_rpc_event_header {
+ u32 index;
+ u32 id;
+ u32 num;
+};
+
+struct vpu_rpc_event {
+ struct vpu_rpc_event_header hdr;
+ u32 data[128];
+};
+
+struct vpu_iface_ops {
+ bool (*check_codec)(enum vpu_core_type type);
+ bool (*check_fmt)(enum vpu_core_type type, u32 pixelfmt);
+ u32 (*get_data_size)(void);
+ u32 (*check_memory_region)(dma_addr_t base, dma_addr_t addr, u32 size);
+ int (*boot_core)(struct vpu_core *core);
+ int (*shutdown_core)(struct vpu_core *core);
+ int (*restore_core)(struct vpu_core *core);
+ int (*get_power_state)(struct vpu_core *core);
+ int (*on_firmware_loaded)(struct vpu_core *core);
+ void (*init_rpc)(struct vpu_shared_addr *shared,
+ struct vpu_buffer *rpc, dma_addr_t boot_addr);
+ void (*set_log_buf)(struct vpu_shared_addr *shared,
+ struct vpu_buffer *log);
+ void (*set_system_cfg)(struct vpu_shared_addr *shared,
+ u32 regs_base, void __iomem *regs, u32 index);
+ void (*set_stream_cfg)(struct vpu_shared_addr *shared, u32 index);
+ u32 (*get_version)(struct vpu_shared_addr *shared);
+ u32 (*get_max_instance_count)(struct vpu_shared_addr *shared);
+ int (*get_stream_buffer_size)(struct vpu_shared_addr *shared);
+ int (*send_cmd_buf)(struct vpu_shared_addr *shared,
+ struct vpu_rpc_event *cmd);
+ int (*receive_msg_buf)(struct vpu_shared_addr *shared,
+ struct vpu_rpc_event *msg);
+ int (*pack_cmd)(struct vpu_rpc_event *pkt, u32 index, u32 id, void *data);
+ int (*convert_msg_id)(u32 msg_id);
+ int (*unpack_msg_data)(struct vpu_rpc_event *pkt, void *data);
+ int (*input_frame)(struct vpu_shared_addr *shared,
+ struct vpu_inst *inst, struct vb2_buffer *vb);
+ int (*config_memory_resource)(struct vpu_shared_addr *shared,
+ u32 instance,
+ u32 type,
+ u32 index,
+ struct vpu_buffer *buf);
+ int (*config_stream_buffer)(struct vpu_shared_addr *shared,
+ u32 instance,
+ struct vpu_buffer *buf);
+ int (*update_stream_buffer)(struct vpu_shared_addr *shared,
+ u32 instance, u32 ptr, bool write);
+ int (*get_stream_buffer_desc)(struct vpu_shared_addr *shared,
+ u32 instance,
+ struct vpu_rpc_buffer_desc *desc);
+ int (*set_encode_params)(struct vpu_shared_addr *shared,
+ u32 instance,
+ struct vpu_encode_params *params, u32 update);
+ int (*set_decode_params)(struct vpu_shared_addr *shared,
+ u32 instance,
+ struct vpu_decode_params *params, u32 update);
+ int (*add_scode)(struct vpu_shared_addr *shared,
+ u32 instance,
+ struct vpu_buffer *stream_buffer,
+ u32 pixelformat,
+ u32 scode_type);
+ int (*pre_send_cmd)(struct vpu_shared_addr *shared, u32 instance);
+ int (*post_send_cmd)(struct vpu_shared_addr *shared, u32 instance);
+ int (*init_instance)(struct vpu_shared_addr *shared, u32 instance);
+};
+
+enum {
+ VPU_CORE_MEMORY_INVALID = 0,
+ VPU_CORE_MEMORY_CACHED,
+ VPU_CORE_MEMORY_UNCACHED
+};
+
+struct vpu_rpc_region_t {
+ dma_addr_t start;
+ dma_addr_t end;
+ dma_addr_t type;
+};
+
+struct vpu_iface_ops *vpu_core_get_iface(struct vpu_core *core);
+struct vpu_iface_ops *vpu_inst_get_iface(struct vpu_inst *inst);
+u32 vpu_iface_check_memory_region(struct vpu_core *core, dma_addr_t addr, u32 size);
+
+static inline bool vpu_iface_check_codec(struct vpu_core *core)
+{
+ struct vpu_iface_ops *ops = vpu_core_get_iface(core);
+
+ if (ops && ops->check_codec)
+ return ops->check_codec(core->type);
+
+ return true;
+}
+
+static inline bool vpu_iface_check_format(struct vpu_inst *inst, u32 pixelfmt)
+{
+ struct vpu_iface_ops *ops = vpu_inst_get_iface(inst);
+
+ if (ops && ops->check_fmt)
+ return ops->check_fmt(inst->type, pixelfmt);
+
+ return true;
+}
+
+static inline int vpu_iface_boot_core(struct vpu_core *core)
+{
+ struct vpu_iface_ops *ops = vpu_core_get_iface(core);
+
+ if (ops && ops->boot_core)
+ return ops->boot_core(core);
+ return 0;
+}
+
+static inline int vpu_iface_get_power_state(struct vpu_core *core)
+{
+ struct vpu_iface_ops *ops = vpu_core_get_iface(core);
+
+ if (ops && ops->get_power_state)
+ return ops->get_power_state(core);
+ return 1;
+}
+
+static inline int vpu_iface_shutdown_core(struct vpu_core *core)
+{
+ struct vpu_iface_ops *ops = vpu_core_get_iface(core);
+
+ if (ops && ops->shutdown_core)
+ return ops->shutdown_core(core);
+ return 0;
+}
+
+static inline int vpu_iface_restore_core(struct vpu_core *core)
+{
+ struct vpu_iface_ops *ops = vpu_core_get_iface(core);
+
+ if (ops && ops->restore_core)
+ return ops->restore_core(core);
+ return 0;
+}
+
+static inline int vpu_iface_on_firmware_loaded(struct vpu_core *core)
+{
+ struct vpu_iface_ops *ops = vpu_core_get_iface(core);
+
+ if (ops && ops->on_firmware_loaded)
+ return ops->on_firmware_loaded(core);
+
+ return 0;
+}
+
+static inline u32 vpu_iface_get_data_size(struct vpu_core *core)
+{
+ struct vpu_iface_ops *ops = vpu_core_get_iface(core);
+
+ if (!ops || !ops->get_data_size)
+ return 0;
+
+ return ops->get_data_size();
+}
+
+static inline int vpu_iface_init(struct vpu_core *core,
+ struct vpu_shared_addr *shared,
+ struct vpu_buffer *rpc,
+ dma_addr_t boot_addr)
+{
+ struct vpu_iface_ops *ops = vpu_core_get_iface(core);
+
+ if (!ops || !ops->init_rpc)
+ return -EINVAL;
+
+ ops->init_rpc(shared, rpc, boot_addr);
+ core->iface = shared;
+ shared->core = core;
+ if (rpc->bytesused > rpc->length)
+ return -ENOSPC;
+ return 0;
+}
+
+static inline int vpu_iface_set_log_buf(struct vpu_core *core,
+ struct vpu_buffer *log)
+{
+ struct vpu_iface_ops *ops = vpu_core_get_iface(core);
+
+ if (!ops)
+ return -EINVAL;
+
+ if (ops->set_log_buf)
+ ops->set_log_buf(core->iface, log);
+
+ return 0;
+}
+
+static inline int vpu_iface_config_system(struct vpu_core *core,
+ u32 regs_base, void __iomem *regs)
+{
+ struct vpu_iface_ops *ops = vpu_core_get_iface(core);
+
+ if (!ops)
+ return -EINVAL;
+ if (ops->set_system_cfg)
+ ops->set_system_cfg(core->iface, regs_base, regs, core->id);
+
+ return 0;
+}
+
+static inline int vpu_iface_get_stream_buffer_size(struct vpu_core *core)
+{
+ struct vpu_iface_ops *ops = vpu_core_get_iface(core);
+
+ if (!ops || !ops->get_stream_buffer_size)
+ return 0;
+
+ return ops->get_stream_buffer_size(core->iface);
+}
+
+static inline int vpu_iface_config_stream(struct vpu_inst *inst)
+{
+ struct vpu_iface_ops *ops = vpu_core_get_iface(inst->core);
+
+ WARN_ON(inst->id < 0);
+ if (!ops)
+ return -EINVAL;
+ if (ops->set_stream_cfg)
+ ops->set_stream_cfg(inst->core->iface, inst->id);
+ return 0;
+}
+
+static inline int vpu_iface_send_cmd(struct vpu_core *core, struct vpu_rpc_event *cmd)
+{
+ struct vpu_iface_ops *ops = vpu_core_get_iface(core);
+
+ if (!ops || !ops->send_cmd_buf)
+ return -EINVAL;
+
+ return ops->send_cmd_buf(core->iface, cmd);
+}
+
+static inline int vpu_iface_receive_msg(struct vpu_core *core, struct vpu_rpc_event *msg)
+{
+ struct vpu_iface_ops *ops = vpu_core_get_iface(core);
+
+ if (!ops || !ops->receive_msg_buf)
+ return -EINVAL;
+
+ return ops->receive_msg_buf(core->iface, msg);
+}
+
+static inline int vpu_iface_pack_cmd(struct vpu_core *core,
+ struct vpu_rpc_event *pkt,
+ u32 index, u32 id, void *data)
+{
+ struct vpu_iface_ops *ops = vpu_core_get_iface(core);
+
+ if (!ops || !ops->pack_cmd)
+ return -EINVAL;
+ return ops->pack_cmd(pkt, index, id, data);
+}
+
+static inline int vpu_iface_convert_msg_id(struct vpu_core *core, u32 msg_id)
+{
+ struct vpu_iface_ops *ops = vpu_core_get_iface(core);
+
+ if (!ops || !ops->convert_msg_id)
+ return -EINVAL;
+
+ return ops->convert_msg_id(msg_id);
+}
+
+static inline int vpu_iface_unpack_msg_data(struct vpu_core *core,
+ struct vpu_rpc_event *pkt, void *data)
+{
+ struct vpu_iface_ops *ops = vpu_core_get_iface(core);
+
+ if (!ops || !ops->unpack_msg_data)
+ return -EINVAL;
+
+ return ops->unpack_msg_data(pkt, data);
+}
+
+static inline int vpu_iface_input_frame(struct vpu_inst *inst,
+ struct vb2_buffer *vb)
+{
+ struct vpu_iface_ops *ops = vpu_core_get_iface(inst->core);
+
+ if (!ops || !ops->input_frame)
+ return -EINVAL;
+
+ return ops->input_frame(inst->core->iface, inst, vb);
+}
+
+static inline int vpu_iface_config_memory_resource(struct vpu_inst *inst,
+ u32 type, u32 index, struct vpu_buffer *buf)
+{
+ struct vpu_iface_ops *ops = vpu_core_get_iface(inst->core);
+
+ WARN_ON(inst->id < 0);
+ if (!ops || !ops->config_memory_resource)
+ return -EINVAL;
+
+ return ops->config_memory_resource(inst->core->iface,
+ inst->id,
+ type, index, buf);
+}
+
+static inline int vpu_iface_config_stream_buffer(struct vpu_inst *inst,
+ struct vpu_buffer *buf)
+{
+ struct vpu_iface_ops *ops = vpu_core_get_iface(inst->core);
+
+ WARN_ON(inst->id < 0);
+ if (!ops || !ops->config_stream_buffer)
+ return -EINVAL;
+
+ return ops->config_stream_buffer(inst->core->iface, inst->id, buf);
+}
+
+static inline int vpu_iface_update_stream_buffer(struct vpu_inst *inst,
+ u32 ptr, bool write)
+{
+ struct vpu_iface_ops *ops = vpu_core_get_iface(inst->core);
+
+ WARN_ON(inst->id < 0);
+ if (!ops || !ops->update_stream_buffer)
+ return -EINVAL;
+
+ return ops->update_stream_buffer(inst->core->iface, inst->id, ptr, write);
+}
+
+static inline int vpu_iface_get_stream_buffer_desc(struct vpu_inst *inst,
+ struct vpu_rpc_buffer_desc *desc)
+{
+ struct vpu_iface_ops *ops = vpu_core_get_iface(inst->core);
+
+ WARN_ON(inst->id < 0);
+ if (!ops || !ops->get_stream_buffer_desc)
+ return -EINVAL;
+
+ if (!desc)
+ return 0;
+
+ return ops->get_stream_buffer_desc(inst->core->iface, inst->id, desc);
+}
+
+static inline u32 vpu_iface_get_version(struct vpu_core *core)
+{
+ struct vpu_iface_ops *ops = vpu_core_get_iface(core);
+
+ if (!ops || !ops->get_version)
+ return 0;
+
+ return ops->get_version(core->iface);
+}
+
+static inline u32 vpu_iface_get_max_instance_count(struct vpu_core *core)
+{
+ struct vpu_iface_ops *ops = vpu_core_get_iface(core);
+
+ if (!ops || !ops->get_max_instance_count)
+ return 0;
+
+ return ops->get_max_instance_count(core->iface);
+}
+
+static inline int vpu_iface_set_encode_params(struct vpu_inst *inst,
+ struct vpu_encode_params *params, u32 update)
+{
+ struct vpu_iface_ops *ops = vpu_core_get_iface(inst->core);
+
+ WARN_ON(inst->id < 0);
+ if (!ops || !ops->set_encode_params)
+ return -EINVAL;
+
+ return ops->set_encode_params(inst->core->iface, inst->id, params, update);
+}
+
+static inline int vpu_iface_set_decode_params(struct vpu_inst *inst,
+ struct vpu_decode_params *params, u32 update)
+{
+ struct vpu_iface_ops *ops = vpu_core_get_iface(inst->core);
+
+ WARN_ON(inst->id < 0);
+ if (!ops || !ops->set_decode_params)
+ return -EINVAL;
+
+ return ops->set_decode_params(inst->core->iface, inst->id, params, update);
+}
+
+static inline int vpu_iface_add_scode(struct vpu_inst *inst, u32 scode_type)
+{
+ struct vpu_iface_ops *ops = vpu_core_get_iface(inst->core);
+
+ WARN_ON(inst->id < 0);
+ if (!ops || !ops->add_scode)
+ return -EINVAL;
+
+ return ops->add_scode(inst->core->iface, inst->id,
+ &inst->stream_buffer,
+ inst->out_format.pixfmt,
+ scode_type);
+}
+
+static inline int vpu_iface_pre_send_cmd(struct vpu_inst *inst)
+{
+ struct vpu_iface_ops *ops = vpu_core_get_iface(inst->core);
+
+ WARN_ON(inst->id < 0);
+ if (ops && ops->pre_send_cmd)
+ return ops->pre_send_cmd(inst->core->iface, inst->id);
+ return 0;
+}
+
+static inline int vpu_iface_post_send_cmd(struct vpu_inst *inst)
+{
+ struct vpu_iface_ops *ops = vpu_core_get_iface(inst->core);
+
+ WARN_ON(inst->id < 0);
+ if (ops && ops->post_send_cmd)
+ return ops->post_send_cmd(inst->core->iface, inst->id);
+ return 0;
+}
+
+static inline int vpu_iface_init_instance(struct vpu_inst *inst)
+{
+ struct vpu_iface_ops *ops = vpu_core_get_iface(inst->core);
+
+ WARN_ON(inst->id < 0);
+ if (ops && ops->init_instance)
+ return ops->init_instance(inst->core->iface, inst->id);
+
+ return 0;
+}
+
+#endif
--
2.33.0


2021-11-30 09:49:47

by Ming Qian

Subject: [PATCH v13 11/13] ARM64: dts: freescale: imx8q: add imx vpu codec entries

Add the Video Processing Unit node for IMX8Q SoC.

Signed-off-by: Ming Qian <[email protected]>
Signed-off-by: Shijie Qin <[email protected]>
Signed-off-by: Zhou Peng <[email protected]>
---
.../arm64/boot/dts/freescale/imx8-ss-vpu.dtsi | 72 +++++++++++++++++++
arch/arm64/boot/dts/freescale/imx8qxp-mek.dts | 17 +++++
arch/arm64/boot/dts/freescale/imx8qxp.dtsi | 24 +++++++
3 files changed, 113 insertions(+)
create mode 100644 arch/arm64/boot/dts/freescale/imx8-ss-vpu.dtsi

diff --git a/arch/arm64/boot/dts/freescale/imx8-ss-vpu.dtsi b/arch/arm64/boot/dts/freescale/imx8-ss-vpu.dtsi
new file mode 100644
index 000000000000..f2dde6d14ca3
--- /dev/null
+++ b/arch/arm64/boot/dts/freescale/imx8-ss-vpu.dtsi
@@ -0,0 +1,72 @@
+// SPDX-License-Identifier: GPL-2.0+
+/*
+ * Copyright 2021 NXP
+ * Dong Aisheng <[email protected]>
+ */
+
+vpu: vpu@2c000000 {
+ #address-cells = <1>;
+ #size-cells = <1>;
+ ranges = <0x2c000000 0x0 0x2c000000 0x2000000>;
+ reg = <0 0x2c000000 0 0x1000000>;
+ power-domains = <&pd IMX_SC_R_VPU>;
+ status = "disabled";
+
+ mu_m0: mailbox@2d000000 {
+ compatible = "fsl,imx6sx-mu";
+ reg = <0x2d000000 0x20000>;
+ interrupts = <GIC_SPI 469 IRQ_TYPE_LEVEL_HIGH>;
+ #mbox-cells = <2>;
+ power-domains = <&pd IMX_SC_R_VPU_MU_0>;
+ status = "okay";
+ };
+
+ mu1_m0: mailbox@2d020000 {
+ compatible = "fsl,imx6sx-mu";
+ reg = <0x2d020000 0x20000>;
+ interrupts = <GIC_SPI 470 IRQ_TYPE_LEVEL_HIGH>;
+ #mbox-cells = <2>;
+ power-domains = <&pd IMX_SC_R_VPU_MU_1>;
+ status = "okay";
+ };
+
+ mu2_m0: mailbox@2d040000 {
+ compatible = "fsl,imx6sx-mu";
+ reg = <0x2d040000 0x20000>;
+ interrupts = <GIC_SPI 474 IRQ_TYPE_LEVEL_HIGH>;
+ #mbox-cells = <2>;
+ power-domains = <&pd IMX_SC_R_VPU_MU_2>;
+ status = "disabled";
+ };
+
+ vpu_core0: vpu_core@2d080000 {
+ reg = <0x2d080000 0x10000>;
+ compatible = "nxp,imx8q-vpu-decoder";
+ power-domains = <&pd IMX_SC_R_VPU_DEC_0>;
+ mbox-names = "tx0", "tx1", "rx";
+ mboxes = <&mu_m0 0 0>,
+ <&mu_m0 0 1>,
+ <&mu_m0 1 0>;
+ status = "disabled";
+ };
+ vpu_core1: vpu_core@2d090000 {
+ reg = <0x2d090000 0x10000>;
+ compatible = "nxp,imx8q-vpu-encoder";
+ power-domains = <&pd IMX_SC_R_VPU_ENC_0>;
+ mbox-names = "tx0", "tx1", "rx";
+ mboxes = <&mu1_m0 0 0>,
+ <&mu1_m0 0 1>,
+ <&mu1_m0 1 0>;
+ status = "disabled";
+ };
+ vpu_core2: vpu_core@2d0a0000 {
+ reg = <0x2d0a0000 0x10000>;
+ compatible = "nxp,imx8q-vpu-encoder";
+ power-domains = <&pd IMX_SC_R_VPU_ENC_1>;
+ mbox-names = "tx0", "tx1", "rx";
+ mboxes = <&mu2_m0 0 0>,
+ <&mu2_m0 0 1>,
+ <&mu2_m0 1 0>;
+ status = "disabled";
+ };
+};
diff --git a/arch/arm64/boot/dts/freescale/imx8qxp-mek.dts b/arch/arm64/boot/dts/freescale/imx8qxp-mek.dts
index 863232a47004..05495b60beb8 100644
--- a/arch/arm64/boot/dts/freescale/imx8qxp-mek.dts
+++ b/arch/arm64/boot/dts/freescale/imx8qxp-mek.dts
@@ -196,6 +196,23 @@ &usdhc2 {
status = "okay";
};

+&vpu {
+ compatible = "nxp,imx8qxp-vpu";
+ status = "okay";
+};
+
+&vpu_core0 {
+ reg = <0x2d040000 0x10000>;
+ memory-region = <&decoder_boot>, <&decoder_rpc>;
+ status = "okay";
+};
+
+&vpu_core1 {
+ reg = <0x2d050000 0x10000>;
+ memory-region = <&encoder_boot>, <&encoder_rpc>;
+ status = "okay";
+};
+
&iomuxc {
pinctrl_fec1: fec1grp {
fsl,pins = <
diff --git a/arch/arm64/boot/dts/freescale/imx8qxp.dtsi b/arch/arm64/boot/dts/freescale/imx8qxp.dtsi
index 617618edf77e..6b6d3c71632b 100644
--- a/arch/arm64/boot/dts/freescale/imx8qxp.dtsi
+++ b/arch/arm64/boot/dts/freescale/imx8qxp.dtsi
@@ -46,6 +46,9 @@ aliases {
serial1 = &lpuart1;
serial2 = &lpuart2;
serial3 = &lpuart3;
+ vpu_core0 = &vpu_core0;
+ vpu_core1 = &vpu_core1;
+ vpu_core2 = &vpu_core2;
};

cpus {
@@ -134,10 +137,30 @@ reserved-memory {
#size-cells = <2>;
ranges;

+ decoder_boot: decoder-boot@84000000 {
+ reg = <0 0x84000000 0 0x2000000>;
+ no-map;
+ };
+
+ encoder_boot: encoder-boot@86000000 {
+ reg = <0 0x86000000 0 0x200000>;
+ no-map;
+ };
+
+		decoder_rpc: decoder-rpc@92000000 {
+ reg = <0 0x92000000 0 0x100000>;
+ no-map;
+ };
+
dsp_reserved: dsp@92400000 {
reg = <0 0x92400000 0 0x2000000>;
no-map;
};
+
+		encoder_rpc: encoder-rpc@94400000 {
+ reg = <0 0x94400000 0 0x700000>;
+ no-map;
+ };
};

pmu {
@@ -259,6 +282,7 @@ map0 {

/* sorted in register address */
#include "imx8-ss-img.dtsi"
+ #include "imx8-ss-vpu.dtsi"
#include "imx8-ss-adma.dtsi"
#include "imx8-ss-conn.dtsi"
#include "imx8-ss-ddr.dtsi"
--
2.33.0


2021-11-30 09:49:44

by Ming Qian

Subject: [PATCH v13 07/13] media: amphion: add v4l2 m2m vpu encoder stateful driver

This consists of the video encoder implementation plus the encoder controls.

Signed-off-by: Ming Qian <[email protected]>
Signed-off-by: Shijie Qin <[email protected]>
Signed-off-by: Zhou Peng <[email protected]>
---
drivers/media/platform/amphion/venc.c | 1351 +++++++++++++++++++++++++
1 file changed, 1351 insertions(+)
create mode 100644 drivers/media/platform/amphion/venc.c

diff --git a/drivers/media/platform/amphion/venc.c b/drivers/media/platform/amphion/venc.c
new file mode 100644
index 000000000000..468608a76b78
--- /dev/null
+++ b/drivers/media/platform/amphion/venc.c
@@ -0,0 +1,1351 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright 2020-2021 NXP
+ */
+
+#include <linux/init.h>
+#include <linux/interconnect.h>
+#include <linux/ioctl.h>
+#include <linux/list.h>
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/delay.h>
+#include <linux/videodev2.h>
+#include <linux/ktime.h>
+#include <media/v4l2-device.h>
+#include <media/v4l2-event.h>
+#include <media/v4l2-mem2mem.h>
+#include <media/v4l2-ioctl.h>
+#include <media/videobuf2-v4l2.h>
+#include <media/videobuf2-dma-contig.h>
+#include <media/videobuf2-vmalloc.h>
+#include "vpu.h"
+#include "vpu_defs.h"
+#include "vpu_core.h"
+#include "vpu_helpers.h"
+#include "vpu_v4l2.h"
+#include "vpu_cmds.h"
+#include "vpu_rpc.h"
+
+#define VENC_OUTPUT_ENABLE BIT(0)
+#define VENC_CAPTURE_ENABLE BIT(1)
+#define VENC_ENABLE_MASK (VENC_OUTPUT_ENABLE | VENC_CAPTURE_ENABLE)
+#define VENC_MAX_BUF_CNT 8
+
+struct venc_t {
+ struct vpu_encode_params params;
+ u32 request_key_frame;
+ u32 input_ready;
+ u32 cpb_size;
+ bool bitrate_change;
+
+ struct vpu_buffer enc[VENC_MAX_BUF_CNT];
+ struct vpu_buffer ref[VENC_MAX_BUF_CNT];
+ struct vpu_buffer act[VENC_MAX_BUF_CNT];
+ struct list_head frames;
+ u32 frame_count;
+ u32 encode_count;
+ u32 ready_count;
+ u32 enable;
+ u32 stopped;
+
+ u32 skipped_count;
+ u32 skipped_bytes;
+
+ wait_queue_head_t wq;
+};
+
+struct venc_frame_t {
+ struct list_head list;
+ struct vpu_enc_pic_info info;
+ u32 bytesused;
+ s64 timestamp;
+};
+
+static const struct vpu_format venc_formats[] = {
+ {
+ .pixfmt = V4L2_PIX_FMT_NV12M,
+ .num_planes = 2,
+ .type = V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE,
+ },
+ {
+ .pixfmt = V4L2_PIX_FMT_H264,
+ .num_planes = 1,
+ .type = V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE,
+ },
+ {0, 0, 0, 0},
+};
+
+static int venc_querycap(struct file *file, void *fh, struct v4l2_capability *cap)
+{
+ strscpy(cap->driver, "amphion-vpu", sizeof(cap->driver));
+ strscpy(cap->card, "amphion vpu encoder", sizeof(cap->card));
+ strscpy(cap->bus_info, "platform: amphion-vpu", sizeof(cap->bus_info));
+
+ return 0;
+}
+
+static int venc_enum_fmt(struct file *file, void *fh, struct v4l2_fmtdesc *f)
+{
+ struct vpu_inst *inst = to_inst(file);
+ const struct vpu_format *fmt;
+
+ memset(f->reserved, 0, sizeof(f->reserved));
+ fmt = vpu_helper_enum_format(inst, f->type, f->index);
+ if (!fmt)
+ return -EINVAL;
+
+ f->pixelformat = fmt->pixfmt;
+ f->flags = fmt->flags;
+
+ return 0;
+}
+
+static int venc_enum_framesizes(struct file *file, void *fh, struct v4l2_frmsizeenum *fsize)
+{
+ struct vpu_inst *inst = to_inst(file);
+ const struct vpu_core_resources *res;
+
+ if (!fsize || fsize->index)
+ return -EINVAL;
+
+ if (!vpu_helper_find_format(inst, 0, fsize->pixel_format))
+ return -EINVAL;
+
+ res = vpu_get_resource(inst);
+ if (!res)
+ return -EINVAL;
+ fsize->type = V4L2_FRMSIZE_TYPE_STEPWISE;
+ fsize->stepwise.max_width = res->max_width;
+ fsize->stepwise.max_height = res->max_height;
+ fsize->stepwise.min_width = res->min_width;
+ fsize->stepwise.min_height = res->min_height;
+ fsize->stepwise.step_width = res->step_width;
+ fsize->stepwise.step_height = res->step_height;
+
+ return 0;
+}
+
+static int venc_enum_frameintervals(struct file *file, void *fh, struct v4l2_frmivalenum *fival)
+{
+ struct vpu_inst *inst = to_inst(file);
+ const struct vpu_core_resources *res;
+
+ if (!fival || fival->index)
+ return -EINVAL;
+
+ if (!vpu_helper_find_format(inst, 0, fival->pixel_format))
+ return -EINVAL;
+
+ if (!fival->width || !fival->height)
+ return -EINVAL;
+
+ res = vpu_get_resource(inst);
+ if (!res)
+ return -EINVAL;
+ if (fival->width < res->min_width ||
+ fival->width > res->max_width ||
+ fival->height < res->min_height ||
+ fival->height > res->max_height)
+ return -EINVAL;
+
+ fival->type = V4L2_FRMIVAL_TYPE_CONTINUOUS;
+ fival->stepwise.min.numerator = 1;
+ fival->stepwise.min.denominator = USHRT_MAX;
+ fival->stepwise.max.numerator = USHRT_MAX;
+ fival->stepwise.max.denominator = 1;
+ fival->stepwise.step.numerator = 1;
+ fival->stepwise.step.denominator = 1;
+
+ return 0;
+}
+
+static int venc_g_fmt(struct file *file, void *fh, struct v4l2_format *f)
+{
+ struct vpu_inst *inst = to_inst(file);
+ struct venc_t *venc = inst->priv;
+ struct v4l2_pix_format_mplane *pixmp = &f->fmt.pix_mp;
+ struct vpu_format *cur_fmt;
+ int i;
+
+ cur_fmt = vpu_get_format(inst, f->type);
+
+ pixmp->pixelformat = cur_fmt->pixfmt;
+ pixmp->num_planes = cur_fmt->num_planes;
+ pixmp->width = cur_fmt->width;
+ pixmp->height = cur_fmt->height;
+ pixmp->field = cur_fmt->field;
+ pixmp->flags = cur_fmt->flags;
+ for (i = 0; i < pixmp->num_planes; i++) {
+ pixmp->plane_fmt[i].bytesperline = cur_fmt->bytesperline[i];
+ pixmp->plane_fmt[i].sizeimage = cur_fmt->sizeimage[i];
+ }
+
+	pixmp->colorspace = venc->params.color.primaries;
+	pixmp->xfer_func = venc->params.color.transfer;
+	pixmp->ycbcr_enc = venc->params.color.matrix;
+	pixmp->quantization = venc->params.color.full_range;
+
+ return 0;
+}
+
+static int venc_try_fmt(struct file *file, void *fh, struct v4l2_format *f)
+{
+ struct vpu_inst *inst = to_inst(file);
+
+ vpu_try_fmt_common(inst, f);
+
+ return 0;
+}
+
+static int venc_s_fmt(struct file *file, void *fh, struct v4l2_format *f)
+{
+ struct vpu_inst *inst = to_inst(file);
+ const struct vpu_format *fmt;
+ struct vpu_format *cur_fmt;
+ struct vb2_queue *q;
+ struct venc_t *venc = inst->priv;
+ struct v4l2_pix_format_mplane *pix_mp = &f->fmt.pix_mp;
+ int i;
+
+ q = v4l2_m2m_get_vq(inst->fh.m2m_ctx, f->type);
+ if (!q)
+ return -EINVAL;
+ if (vb2_is_streaming(q))
+ return -EBUSY;
+
+ fmt = vpu_try_fmt_common(inst, f);
+ if (!fmt)
+ return -EINVAL;
+
+ cur_fmt = vpu_get_format(inst, f->type);
+
+ cur_fmt->pixfmt = fmt->pixfmt;
+ cur_fmt->num_planes = fmt->num_planes;
+ cur_fmt->flags = fmt->flags;
+ cur_fmt->width = pix_mp->width;
+ cur_fmt->height = pix_mp->height;
+ for (i = 0; i < fmt->num_planes; i++) {
+ cur_fmt->sizeimage[i] = pix_mp->plane_fmt[i].sizeimage;
+ cur_fmt->bytesperline[i] = pix_mp->plane_fmt[i].bytesperline;
+ }
+
+ if (pix_mp->field != V4L2_FIELD_ANY)
+ cur_fmt->field = pix_mp->field;
+
+ if (V4L2_TYPE_IS_OUTPUT(f->type)) {
+ venc->params.input_format = cur_fmt->pixfmt;
+ venc->params.src_stride = cur_fmt->bytesperline[0];
+ venc->params.src_width = cur_fmt->width;
+ venc->params.src_height = cur_fmt->height;
+ venc->params.crop.left = 0;
+ venc->params.crop.top = 0;
+ venc->params.crop.width = cur_fmt->width;
+ venc->params.crop.height = cur_fmt->height;
+ } else {
+ venc->params.codec_format = cur_fmt->pixfmt;
+ venc->params.out_width = cur_fmt->width;
+ venc->params.out_height = cur_fmt->height;
+ }
+
+ if (V4L2_TYPE_IS_OUTPUT(f->type)) {
+ if (!vpu_color_check_primaries(pix_mp->colorspace)) {
+ venc->params.color.primaries = pix_mp->colorspace;
+ vpu_color_get_default(venc->params.color.primaries,
+ &venc->params.color.transfer,
+ &venc->params.color.matrix,
+ &venc->params.color.full_range);
+ }
+ if (!vpu_color_check_transfers(pix_mp->xfer_func))
+ venc->params.color.transfer = pix_mp->xfer_func;
+ if (!vpu_color_check_matrix(pix_mp->ycbcr_enc))
+ venc->params.color.matrix = pix_mp->ycbcr_enc;
+ if (!vpu_color_check_full_range(pix_mp->quantization))
+ venc->params.color.full_range = pix_mp->quantization;
+ }
+
+ pix_mp->colorspace = venc->params.color.primaries;
+ pix_mp->xfer_func = venc->params.color.transfer;
+ pix_mp->ycbcr_enc = venc->params.color.matrix;
+ pix_mp->quantization = venc->params.color.full_range;
+
+ return 0;
+}
+
+static int venc_g_parm(struct file *file, void *fh, struct v4l2_streamparm *parm)
+{
+ struct vpu_inst *inst = to_inst(file);
+ struct venc_t *venc = inst->priv;
+ struct v4l2_fract *timeperframe;
+
+ if (!parm)
+ return -EINVAL;
+
+ if (!vpu_helper_check_type(inst, parm->type))
+ return -EINVAL;
+
+ timeperframe = &parm->parm.capture.timeperframe;
+
+ parm->parm.capture.capability = V4L2_CAP_TIMEPERFRAME;
+ parm->parm.capture.readbuffers = 0;
+ timeperframe->numerator = venc->params.frame_rate.numerator;
+ timeperframe->denominator = venc->params.frame_rate.denominator;
+
+ return 0;
+}
+
+static int venc_s_parm(struct file *file, void *fh, struct v4l2_streamparm *parm)
+{
+ struct vpu_inst *inst = to_inst(file);
+ struct venc_t *venc = inst->priv;
+ struct v4l2_fract *timeperframe;
+
+ if (!parm)
+ return -EINVAL;
+
+ if (!vpu_helper_check_type(inst, parm->type))
+ return -EINVAL;
+
+ timeperframe = &parm->parm.capture.timeperframe;
+
+ if (!timeperframe->numerator)
+ timeperframe->numerator = venc->params.frame_rate.numerator;
+ if (!timeperframe->denominator)
+ timeperframe->denominator = venc->params.frame_rate.denominator;
+
+ venc->params.frame_rate.numerator = timeperframe->numerator;
+ venc->params.frame_rate.denominator = timeperframe->denominator;
+
+ vpu_helper_calc_coprime(&venc->params.frame_rate.numerator,
+ &venc->params.frame_rate.denominator);
+
+ parm->parm.capture.capability = V4L2_CAP_TIMEPERFRAME;
+ memset(parm->parm.capture.reserved,
+ 0, sizeof(parm->parm.capture.reserved));
+
+ return 0;
+}
+
+static int venc_g_selection(struct file *file, void *fh, struct v4l2_selection *s)
+{
+ struct vpu_inst *inst = to_inst(file);
+ struct venc_t *venc = inst->priv;
+
+ if (s->type != V4L2_BUF_TYPE_VIDEO_OUTPUT && s->type != V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE)
+ return -EINVAL;
+
+ switch (s->target) {
+ case V4L2_SEL_TGT_CROP_DEFAULT:
+ case V4L2_SEL_TGT_CROP_BOUNDS:
+ s->r.left = 0;
+ s->r.top = 0;
+ s->r.width = inst->out_format.width;
+ s->r.height = inst->out_format.height;
+ break;
+ case V4L2_SEL_TGT_CROP:
+ s->r = venc->params.crop;
+ break;
+ default:
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+static int venc_valid_crop(struct venc_t *venc, const struct vpu_core_resources *res)
+{
+ struct v4l2_rect *rect = NULL;
+ u32 min_width;
+ u32 min_height;
+ u32 src_width;
+ u32 src_height;
+
+ rect = &venc->params.crop;
+ min_width = res->min_width;
+ min_height = res->min_height;
+ src_width = venc->params.src_width;
+ src_height = venc->params.src_height;
+
+ if (rect->width == 0 || rect->height == 0)
+ return -EINVAL;
+ if (rect->left > src_width - min_width ||
+ rect->top > src_height - min_height)
+ return -EINVAL;
+
+ rect->width = min(rect->width, src_width - rect->left);
+ rect->width = max_t(u32, rect->width, min_width);
+
+ rect->height = min(rect->height, src_height - rect->top);
+ rect->height = max_t(u32, rect->height, min_height);
+
+ return 0;
+}
+
+static int venc_s_selection(struct file *file, void *fh, struct v4l2_selection *s)
+{
+ struct vpu_inst *inst = to_inst(file);
+ const struct vpu_core_resources *res;
+ struct venc_t *venc = inst->priv;
+
+ res = vpu_get_resource(inst);
+ if (!res)
+ return -EINVAL;
+
+ if (s->type != V4L2_BUF_TYPE_VIDEO_OUTPUT && s->type != V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE)
+ return -EINVAL;
+ if (s->target != V4L2_SEL_TGT_CROP)
+ return -EINVAL;
+
+ venc->params.crop.left = ALIGN(s->r.left, res->step_width);
+ venc->params.crop.top = ALIGN(s->r.top, res->step_height);
+ venc->params.crop.width = ALIGN(s->r.width, res->step_width);
+ venc->params.crop.height = ALIGN(s->r.height, res->step_height);
+ if (venc_valid_crop(venc, res)) {
+ venc->params.crop.left = 0;
+ venc->params.crop.top = 0;
+ venc->params.crop.width = venc->params.src_width;
+ venc->params.crop.height = venc->params.src_height;
+ }
+
+ inst->crop = venc->params.crop;
+
+ return 0;
+}
+
+static int venc_drain(struct vpu_inst *inst)
+{
+ struct venc_t *venc = inst->priv;
+ int ret;
+
+ if (inst->state != VPU_CODEC_STATE_DRAIN)
+ return 0;
+
+ if (v4l2_m2m_num_src_bufs_ready(inst->fh.m2m_ctx))
+ return 0;
+
+ if (!venc->input_ready)
+ return 0;
+
+ venc->input_ready = false;
+ vpu_trace(inst->dev, "[%d]\n", inst->id);
+ ret = vpu_session_stop(inst);
+ if (ret)
+ return ret;
+ inst->state = VPU_CODEC_STATE_STOP;
+ wake_up_all(&venc->wq);
+
+ return 0;
+}
+
+static int venc_request_eos(struct vpu_inst *inst)
+{
+ inst->state = VPU_CODEC_STATE_DRAIN;
+ venc_drain(inst);
+
+ return 0;
+}
+
+static int venc_encoder_cmd(struct file *file, void *fh, struct v4l2_encoder_cmd *cmd)
+{
+ struct vpu_inst *inst = to_inst(file);
+ int ret;
+
+ ret = v4l2_m2m_ioctl_try_encoder_cmd(file, fh, cmd);
+ if (ret)
+ return ret;
+
+ vpu_inst_lock(inst);
+ if (cmd->cmd == V4L2_ENC_CMD_STOP) {
+ if (inst->state == VPU_CODEC_STATE_DEINIT)
+ vpu_set_last_buffer_dequeued(inst);
+ else
+ venc_request_eos(inst);
+ }
+ vpu_inst_unlock(inst);
+
+ return 0;
+}
+
+static int venc_subscribe_event(struct v4l2_fh *fh, const struct v4l2_event_subscription *sub)
+{
+ switch (sub->type) {
+ case V4L2_EVENT_EOS:
+ return v4l2_event_subscribe(fh, sub, 0, NULL);
+ case V4L2_EVENT_CTRL:
+ return v4l2_ctrl_subscribe_event(fh, sub);
+ default:
+ return -EINVAL;
+ }
+}
+
+static const struct v4l2_ioctl_ops venc_ioctl_ops = {
+ .vidioc_querycap = venc_querycap,
+ .vidioc_enum_fmt_vid_cap = venc_enum_fmt,
+ .vidioc_enum_fmt_vid_out = venc_enum_fmt,
+ .vidioc_enum_framesizes = venc_enum_framesizes,
+ .vidioc_enum_frameintervals = venc_enum_frameintervals,
+ .vidioc_g_fmt_vid_cap_mplane = venc_g_fmt,
+ .vidioc_g_fmt_vid_out_mplane = venc_g_fmt,
+ .vidioc_try_fmt_vid_cap_mplane = venc_try_fmt,
+ .vidioc_try_fmt_vid_out_mplane = venc_try_fmt,
+ .vidioc_s_fmt_vid_cap_mplane = venc_s_fmt,
+ .vidioc_s_fmt_vid_out_mplane = venc_s_fmt,
+ .vidioc_g_parm = venc_g_parm,
+ .vidioc_s_parm = venc_s_parm,
+ .vidioc_g_selection = venc_g_selection,
+ .vidioc_s_selection = venc_s_selection,
+ .vidioc_try_encoder_cmd = v4l2_m2m_ioctl_try_encoder_cmd,
+ .vidioc_encoder_cmd = venc_encoder_cmd,
+ .vidioc_subscribe_event = venc_subscribe_event,
+ .vidioc_unsubscribe_event = v4l2_event_unsubscribe,
+ .vidioc_reqbufs = v4l2_m2m_ioctl_reqbufs,
+ .vidioc_querybuf = v4l2_m2m_ioctl_querybuf,
+ .vidioc_create_bufs = v4l2_m2m_ioctl_create_bufs,
+ .vidioc_prepare_buf = v4l2_m2m_ioctl_prepare_buf,
+ .vidioc_qbuf = v4l2_m2m_ioctl_qbuf,
+ .vidioc_expbuf = v4l2_m2m_ioctl_expbuf,
+ .vidioc_dqbuf = v4l2_m2m_ioctl_dqbuf,
+ .vidioc_streamon = v4l2_m2m_ioctl_streamon,
+ .vidioc_streamoff = v4l2_m2m_ioctl_streamoff,
+};
+
+static int venc_op_s_ctrl(struct v4l2_ctrl *ctrl)
+{
+ struct vpu_inst *inst = ctrl_to_inst(ctrl);
+ struct venc_t *venc = inst->priv;
+ int ret = 0;
+
+ vpu_inst_lock(inst);
+ switch (ctrl->id) {
+ case V4L2_CID_MPEG_VIDEO_H264_PROFILE:
+ venc->params.profile = ctrl->val;
+ break;
+ case V4L2_CID_MPEG_VIDEO_H264_LEVEL:
+ venc->params.level = ctrl->val;
+ break;
+ case V4L2_CID_MPEG_VIDEO_BITRATE_MODE:
+ venc->params.rc_mode = ctrl->val;
+ break;
+ case V4L2_CID_MPEG_VIDEO_BITRATE:
+ if (ctrl->val != venc->params.bitrate)
+ venc->bitrate_change = true;
+ venc->params.bitrate = ctrl->val;
+ break;
+ case V4L2_CID_MPEG_VIDEO_GOP_SIZE:
+ venc->params.gop_length = ctrl->val;
+ break;
+ case V4L2_CID_MPEG_VIDEO_B_FRAMES:
+ venc->params.bframes = ctrl->val;
+ break;
+ case V4L2_CID_MPEG_VIDEO_H264_I_FRAME_QP:
+ venc->params.i_frame_qp = ctrl->val;
+ break;
+ case V4L2_CID_MPEG_VIDEO_H264_P_FRAME_QP:
+ venc->params.p_frame_qp = ctrl->val;
+ break;
+ case V4L2_CID_MPEG_VIDEO_H264_B_FRAME_QP:
+ venc->params.b_frame_qp = ctrl->val;
+ break;
+ case V4L2_CID_MPEG_VIDEO_FORCE_KEY_FRAME:
+ venc->request_key_frame = 1;
+ break;
+ case V4L2_CID_MPEG_VIDEO_H264_CPB_SIZE:
+ venc->cpb_size = ctrl->val * 1024;
+ break;
+ case V4L2_CID_MPEG_VIDEO_H264_VUI_SAR_ENABLE:
+ venc->params.sar.enable = ctrl->val;
+ break;
+ case V4L2_CID_MPEG_VIDEO_H264_VUI_SAR_IDC:
+ venc->params.sar.idc = ctrl->val;
+ break;
+ case V4L2_CID_MPEG_VIDEO_H264_VUI_EXT_SAR_WIDTH:
+ venc->params.sar.width = ctrl->val;
+ break;
+ case V4L2_CID_MPEG_VIDEO_H264_VUI_EXT_SAR_HEIGHT:
+ venc->params.sar.height = ctrl->val;
+ break;
+ case V4L2_CID_MPEG_VIDEO_HEADER_MODE:
+ break;
+ default:
+ ret = -EINVAL;
+ break;
+ }
+ vpu_inst_unlock(inst);
+
+ return ret;
+}
+
+static const struct v4l2_ctrl_ops venc_ctrl_ops = {
+ .s_ctrl = venc_op_s_ctrl,
+ .g_volatile_ctrl = vpu_helper_g_volatile_ctrl,
+};
+
+static int venc_ctrl_init(struct vpu_inst *inst)
+{
+ struct v4l2_ctrl *ctrl;
+ int ret;
+
+ ret = v4l2_ctrl_handler_init(&inst->ctrl_handler, 20);
+ if (ret)
+ return ret;
+
+ v4l2_ctrl_new_std_menu(&inst->ctrl_handler, &venc_ctrl_ops,
+ V4L2_CID_MPEG_VIDEO_H264_PROFILE,
+ V4L2_MPEG_VIDEO_H264_PROFILE_HIGH,
+ ~((1 << V4L2_MPEG_VIDEO_H264_PROFILE_BASELINE) |
+ (1 << V4L2_MPEG_VIDEO_H264_PROFILE_MAIN) |
+ (1 << V4L2_MPEG_VIDEO_H264_PROFILE_HIGH)),
+ V4L2_MPEG_VIDEO_H264_PROFILE_HIGH);
+
+ v4l2_ctrl_new_std_menu(&inst->ctrl_handler, &venc_ctrl_ops,
+ V4L2_CID_MPEG_VIDEO_H264_LEVEL,
+ V4L2_MPEG_VIDEO_H264_LEVEL_5_1,
+ 0x0,
+ V4L2_MPEG_VIDEO_H264_LEVEL_4_0);
+
+ v4l2_ctrl_new_std_menu(&inst->ctrl_handler, &venc_ctrl_ops,
+ V4L2_CID_MPEG_VIDEO_BITRATE_MODE,
+ V4L2_MPEG_VIDEO_BITRATE_MODE_CBR,
+ 0x0,
+ V4L2_MPEG_VIDEO_BITRATE_MODE_CBR);
+
+ v4l2_ctrl_new_std(&inst->ctrl_handler, &venc_ctrl_ops,
+ V4L2_CID_MPEG_VIDEO_BITRATE,
+ BITRATE_MIN,
+ BITRATE_MAX,
+ BITRATE_STEP,
+ BITRATE_DEFAULT);
+
+ v4l2_ctrl_new_std(&inst->ctrl_handler, &venc_ctrl_ops,
+ V4L2_CID_MPEG_VIDEO_GOP_SIZE, 0, (1 << 16) - 1, 1, 30);
+
+ v4l2_ctrl_new_std(&inst->ctrl_handler, &venc_ctrl_ops,
+ V4L2_CID_MPEG_VIDEO_B_FRAMES, 0, 4, 1, 0);
+
+ v4l2_ctrl_new_std(&inst->ctrl_handler, &venc_ctrl_ops,
+ V4L2_CID_MPEG_VIDEO_H264_I_FRAME_QP, 1, 51, 1, 26);
+ v4l2_ctrl_new_std(&inst->ctrl_handler, &venc_ctrl_ops,
+ V4L2_CID_MPEG_VIDEO_H264_P_FRAME_QP, 1, 51, 1, 28);
+ v4l2_ctrl_new_std(&inst->ctrl_handler, &venc_ctrl_ops,
+ V4L2_CID_MPEG_VIDEO_H264_B_FRAME_QP, 1, 51, 1, 30);
+ v4l2_ctrl_new_std(&inst->ctrl_handler, &venc_ctrl_ops,
+ V4L2_CID_MPEG_VIDEO_FORCE_KEY_FRAME, 0, 0, 0, 0);
+ ctrl = v4l2_ctrl_new_std(&inst->ctrl_handler, &venc_ctrl_ops,
+ V4L2_CID_MIN_BUFFERS_FOR_CAPTURE, 1, 32, 1, 2);
+ if (ctrl)
+ ctrl->flags |= V4L2_CTRL_FLAG_VOLATILE;
+ ctrl = v4l2_ctrl_new_std(&inst->ctrl_handler, &venc_ctrl_ops,
+ V4L2_CID_MIN_BUFFERS_FOR_OUTPUT, 1, 32, 1, 2);
+ if (ctrl)
+ ctrl->flags |= V4L2_CTRL_FLAG_VOLATILE;
+
+ v4l2_ctrl_new_std(&inst->ctrl_handler, &venc_ctrl_ops,
+ V4L2_CID_MPEG_VIDEO_H264_CPB_SIZE, 64, 10240, 1, 1024);
+
+ v4l2_ctrl_new_std(&inst->ctrl_handler, &venc_ctrl_ops,
+ V4L2_CID_MPEG_VIDEO_H264_VUI_SAR_ENABLE, 0, 1, 1, 1);
+ v4l2_ctrl_new_std_menu(&inst->ctrl_handler, &venc_ctrl_ops,
+ V4L2_CID_MPEG_VIDEO_H264_VUI_SAR_IDC,
+ V4L2_MPEG_VIDEO_H264_VUI_SAR_IDC_EXTENDED,
+ 0x0,
+ V4L2_MPEG_VIDEO_H264_VUI_SAR_IDC_1x1);
+ v4l2_ctrl_new_std(&inst->ctrl_handler, &venc_ctrl_ops,
+ V4L2_CID_MPEG_VIDEO_H264_VUI_EXT_SAR_WIDTH,
+ 0, USHRT_MAX, 1, 1);
+ v4l2_ctrl_new_std(&inst->ctrl_handler, &venc_ctrl_ops,
+ V4L2_CID_MPEG_VIDEO_H264_VUI_EXT_SAR_HEIGHT,
+ 0, USHRT_MAX, 1, 1);
+ v4l2_ctrl_new_std_menu(&inst->ctrl_handler, &venc_ctrl_ops,
+ V4L2_CID_MPEG_VIDEO_HEADER_MODE,
+ V4L2_MPEG_VIDEO_HEADER_MODE_JOINED_WITH_1ST_FRAME,
+ ~(1 << V4L2_MPEG_VIDEO_HEADER_MODE_JOINED_WITH_1ST_FRAME),
+ V4L2_MPEG_VIDEO_HEADER_MODE_JOINED_WITH_1ST_FRAME);
+
+ ret = v4l2_ctrl_handler_setup(&inst->ctrl_handler);
+ if (ret) {
+ dev_err(inst->dev, "[%d] setup ctrls fail, ret = %d\n", inst->id, ret);
+ v4l2_ctrl_handler_free(&inst->ctrl_handler);
+ return ret;
+ }
+
+ return 0;
+}
+
+static bool venc_check_ready(struct vpu_inst *inst, unsigned int type)
+{
+ struct venc_t *venc = inst->priv;
+
+ if (V4L2_TYPE_IS_OUTPUT(type)) {
+ if (vpu_helper_get_free_space(inst) < venc->cpb_size)
+ return false;
+ return venc->input_ready;
+ }
+
+ if (list_empty(&venc->frames))
+ return false;
+ return true;
+}
+
+static u32 venc_get_enable_mask(u32 type)
+{
+ if (V4L2_TYPE_IS_OUTPUT(type))
+ return VENC_OUTPUT_ENABLE;
+ else
+ return VENC_CAPTURE_ENABLE;
+}
+
+static void venc_set_enable(struct venc_t *venc, u32 type, int enable)
+{
+ u32 mask = venc_get_enable_mask(type);
+
+ if (enable)
+ venc->enable |= mask;
+ else
+ venc->enable &= ~mask;
+}
+
+static u32 venc_get_enable(struct venc_t *venc, u32 type)
+{
+ return venc->enable & venc_get_enable_mask(type);
+}
+
+static void venc_input_done(struct vpu_inst *inst)
+{
+ struct venc_t *venc = inst->priv;
+
+ vpu_inst_lock(inst);
+ venc->input_ready = true;
+ vpu_process_output_buffer(inst);
+ if (inst->state == VPU_CODEC_STATE_DRAIN)
+ venc_drain(inst);
+ vpu_inst_unlock(inst);
+}
+
+/*
+ * There is a hardware limitation: a few bytes of redundant data may
+ * be prepended to the beginning of a frame.
+ * On Android, this redundant data can make CTS tests fail,
+ * so the driver strips it.
+ */
+static int venc_precheck_encoded_frame(struct vpu_inst *inst, struct venc_frame_t *frame)
+{
+ struct venc_t *venc;
+ int skipped;
+
+ if (!inst || !frame || !frame->bytesused)
+ return -EINVAL;
+
+ venc = inst->priv;
+ skipped = vpu_helper_find_startcode(&inst->stream_buffer,
+ inst->cap_format.pixfmt,
+ frame->info.wptr - inst->stream_buffer.phys,
+ frame->bytesused);
+ if (skipped > 0) {
+ frame->bytesused -= skipped;
+ frame->info.wptr = vpu_helper_step_walk(&inst->stream_buffer,
+ frame->info.wptr, skipped);
+ venc->skipped_bytes += skipped;
+ venc->skipped_count++;
+ }
+
+ return 0;
+}
+
+static int venc_get_one_encoded_frame(struct vpu_inst *inst,
+ struct venc_frame_t *frame,
+ struct vb2_v4l2_buffer *vbuf)
+{
+ struct venc_t *venc = inst->priv;
+ struct vpu_vb2_buffer *vpu_buf;
+
+ if (!vbuf)
+ return -EAGAIN;
+
+ if (!venc_get_enable(inst->priv, vbuf->vb2_buf.type)) {
+ v4l2_m2m_buf_done(vbuf, VB2_BUF_STATE_ERROR);
+ return 0;
+ }
+ vpu_buf = to_vpu_vb2_buffer(vbuf);
+ if (frame->bytesused > vbuf->vb2_buf.planes[0].length) {
+ v4l2_m2m_buf_done(vbuf, VB2_BUF_STATE_ERROR);
+ return -ENOMEM;
+ }
+
+ venc_precheck_encoded_frame(inst, frame);
+
+ if (frame->bytesused) {
+ u32 rptr = frame->info.wptr;
+ void *dst = vb2_plane_vaddr(&vbuf->vb2_buf, 0);
+
+ vpu_helper_copy_from_stream_buffer(&inst->stream_buffer,
+ &rptr, frame->bytesused, dst);
+ vpu_iface_update_stream_buffer(inst, rptr, 0);
+ }
+ vb2_set_plane_payload(&vbuf->vb2_buf, 0, frame->bytesused);
+ vbuf->sequence = frame->info.frame_id;
+ vbuf->vb2_buf.timestamp = frame->info.timestamp;
+ vbuf->field = inst->cap_format.field;
+ vbuf->flags |= frame->info.pic_type;
+ vpu_buf->state = VPU_BUF_STATE_IDLE;
+ dev_dbg(inst->dev, "[%d][OUTPUT TS]%32lld\n", inst->id, frame->info.timestamp);
+ v4l2_m2m_buf_done(vbuf, VB2_BUF_STATE_DONE);
+ venc->ready_count++;
+
+ if (vbuf->flags & V4L2_BUF_FLAG_KEYFRAME)
+ dev_dbg(inst->dev, "[%d][%d]key frame\n", inst->id, frame->info.frame_id);
+
+ return 0;
+}
+
+static int venc_get_encoded_frames(struct vpu_inst *inst)
+{
+ struct venc_t *venc;
+ struct venc_frame_t *frame;
+ struct venc_frame_t *tmp;
+
+ if (!inst || !inst->priv)
+ return -EINVAL;
+
+ venc = inst->priv;
+ list_for_each_entry_safe(frame, tmp, &venc->frames, list) {
+ if (venc_get_one_encoded_frame(inst, frame,
+ v4l2_m2m_dst_buf_remove(inst->fh.m2m_ctx)))
+ break;
+ list_del_init(&frame->list);
+ vfree(frame);
+ }
+
+ return 0;
+}
+
+static int venc_frame_encoded(struct vpu_inst *inst, void *arg)
+{
+ struct vpu_enc_pic_info *info = arg;
+ struct venc_frame_t *frame;
+ struct venc_t *venc;
+ int ret = 0;
+
+ if (!inst || !info)
+ return -EINVAL;
+ venc = inst->priv;
+ frame = vzalloc(sizeof(*frame));
+ if (!frame)
+ return -ENOMEM;
+
+ memcpy(&frame->info, info, sizeof(frame->info));
+ frame->bytesused = info->frame_size;
+
+ vpu_inst_lock(inst);
+ list_add_tail(&frame->list, &venc->frames);
+ venc->encode_count++;
+ venc_get_encoded_frames(inst);
+ vpu_inst_unlock(inst);
+
+ return ret;
+}
+
+static void venc_buf_done(struct vpu_inst *inst, struct vpu_frame_info *frame)
+{
+ struct vb2_v4l2_buffer *vbuf;
+ struct vpu_vb2_buffer *vpu_buf;
+
+ if (!inst || !frame)
+ return;
+
+ vpu_inst_lock(inst);
+ if (!venc_get_enable(inst->priv, frame->type))
+ goto exit;
+ vbuf = vpu_find_buf_by_sequence(inst, frame->type, frame->sequence);
+ if (!vbuf) {
+ dev_err(inst->dev, "[%d] can't find buf: type %d, sequence %d\n",
+ inst->id, frame->type, frame->sequence);
+ goto exit;
+ }
+
+ vpu_buf = to_vpu_vb2_buffer(vbuf);
+ vpu_buf->state = VPU_BUF_STATE_IDLE;
+ if (V4L2_TYPE_IS_OUTPUT(frame->type))
+ v4l2_m2m_src_buf_remove_by_buf(inst->fh.m2m_ctx, vbuf);
+ else
+ v4l2_m2m_dst_buf_remove_by_buf(inst->fh.m2m_ctx, vbuf);
+ v4l2_m2m_buf_done(vbuf, VB2_BUF_STATE_DONE);
+exit:
+ vpu_inst_unlock(inst);
+}
+
+static void venc_set_last_buffer_dequeued(struct vpu_inst *inst)
+{
+ struct venc_t *venc = inst->priv;
+
+ if (venc->stopped && list_empty(&venc->frames))
+ vpu_set_last_buffer_dequeued(inst);
+}
+
+static void venc_stop_done(struct vpu_inst *inst)
+{
+ struct venc_t *venc = inst->priv;
+
+ vpu_inst_lock(inst);
+ venc->stopped = true;
+ venc_set_last_buffer_dequeued(inst);
+ vpu_inst_unlock(inst);
+
+ wake_up_all(&venc->wq);
+}
+
+static void venc_event_notify(struct vpu_inst *inst, u32 event, void *data)
+{
+}
+
+static void venc_release(struct vpu_inst *inst)
+{
+}
+
+static void venc_cleanup(struct vpu_inst *inst)
+{
+ struct venc_t *venc;
+
+ if (!inst)
+ return;
+
+ venc = inst->priv;
+ vfree(venc);
+ inst->priv = NULL;
+ vfree(inst);
+}
+
+static int venc_start_session(struct vpu_inst *inst, u32 type)
+{
+ struct venc_t *venc = inst->priv;
+ int stream_buffer_size;
+ int ret;
+
+ venc_set_enable(venc, type, 1);
+ if ((venc->enable & VENC_ENABLE_MASK) != VENC_ENABLE_MASK)
+ return 0;
+
+ vpu_iface_init_instance(inst);
+ stream_buffer_size = vpu_iface_get_stream_buffer_size(inst->core);
+ if (stream_buffer_size > 0) {
+ inst->stream_buffer.length = max_t(u32, stream_buffer_size, venc->cpb_size * 3);
+ ret = vpu_alloc_dma(inst->core, &inst->stream_buffer);
+ if (ret)
+ goto error;
+
+ inst->use_stream_buffer = true;
+ vpu_iface_config_stream_buffer(inst, &inst->stream_buffer);
+ }
+
+ ret = vpu_iface_set_encode_params(inst, &venc->params, 0);
+ if (ret)
+ goto error;
+ ret = vpu_session_configure_codec(inst);
+ if (ret)
+ goto error;
+
+ inst->state = VPU_CODEC_STATE_CONFIGURED;
+ /*vpu_iface_config_memory_resource*/
+
+ /*config enc expert mode parameter*/
+ ret = vpu_iface_set_encode_params(inst, &venc->params, 1);
+ if (ret)
+ goto error;
+
+ ret = vpu_session_start(inst);
+ if (ret)
+ goto error;
+ inst->state = VPU_CODEC_STATE_STARTED;
+
+ venc->bitrate_change = false;
+ venc->input_ready = true;
+ venc->frame_count = 0;
+ venc->encode_count = 0;
+ venc->ready_count = 0;
+ venc->stopped = false;
+ vpu_process_output_buffer(inst);
+ if (venc->frame_count == 0)
+ dev_err(inst->dev, "[%d] there is no input when starting\n", inst->id);
+
+ return 0;
+error:
+ venc_set_enable(venc, type, 0);
+ inst->state = VPU_CODEC_STATE_DEINIT;
+
+ vpu_free_dma(&inst->stream_buffer);
+ return ret;
+}
+
+static void venc_cleanup_mem_resource(struct vpu_inst *inst)
+{
+ struct venc_t *venc;
+ u32 i;
+
+ WARN_ON(!inst || !inst->priv);
+
+ venc = inst->priv;
+
+ for (i = 0; i < ARRAY_SIZE(venc->enc); i++)
+ vpu_free_dma(&venc->enc[i]);
+ for (i = 0; i < ARRAY_SIZE(venc->ref); i++)
+ vpu_free_dma(&venc->ref[i]);
+ for (i = 0; i < ARRAY_SIZE(venc->act); i++)
+ vpu_free_dma(&venc->act[i]);
+}
+
+static void venc_request_mem_resource(struct vpu_inst *inst,
+ u32 enc_frame_size,
+ u32 enc_frame_num,
+ u32 ref_frame_size,
+ u32 ref_frame_num,
+ u32 act_frame_size,
+ u32 act_frame_num)
+{
+ struct venc_t *venc;
+ u32 i;
+ int ret;
+
+ WARN_ON(!inst || !inst->priv || !inst->core);
+
+ venc = inst->priv;
+
+ if (enc_frame_num > ARRAY_SIZE(venc->enc)) {
+ dev_err(inst->dev, "[%d] enc num(%d) is out of range\n",
+ inst->id, enc_frame_num);
+ return;
+ }
+ if (ref_frame_num > ARRAY_SIZE(venc->ref)) {
+ dev_err(inst->dev, "[%d] ref num(%d) is out of range\n",
+ inst->id, ref_frame_num);
+ return;
+ }
+ if (act_frame_num > ARRAY_SIZE(venc->act)) {
+ dev_err(inst->dev, "[%d] act num(%d) is out of range\n",
+ inst->id, act_frame_num);
+ return;
+ }
+
+ for (i = 0; i < enc_frame_num; i++) {
+ venc->enc[i].length = enc_frame_size;
+ ret = vpu_alloc_dma(inst->core, &venc->enc[i]);
+ if (ret) {
+ venc_cleanup_mem_resource(inst);
+ return;
+ }
+ }
+ for (i = 0; i < ref_frame_num; i++) {
+ venc->ref[i].length = ref_frame_size;
+ ret = vpu_alloc_dma(inst->core, &venc->ref[i]);
+ if (ret) {
+ venc_cleanup_mem_resource(inst);
+ return;
+ }
+ }
+ if (act_frame_num != 1 || act_frame_size > inst->act.length) {
+ venc_cleanup_mem_resource(inst);
+ return;
+ }
+ venc->act[0].length = act_frame_size;
+ venc->act[0].phys = inst->act.phys;
+ venc->act[0].virt = inst->act.virt;
+
+ for (i = 0; i < enc_frame_num; i++)
+ vpu_iface_config_memory_resource(inst, MEM_RES_ENC, i, &venc->enc[i]);
+ for (i = 0; i < ref_frame_num; i++)
+ vpu_iface_config_memory_resource(inst, MEM_RES_REF, i, &venc->ref[i]);
+ for (i = 0; i < act_frame_num; i++)
+ vpu_iface_config_memory_resource(inst, MEM_RES_ACT, i, &venc->act[i]);
+}
+
+static void venc_cleanup_frames(struct venc_t *venc)
+{
+ struct venc_frame_t *frame;
+ struct venc_frame_t *tmp;
+
+ list_for_each_entry_safe(frame, tmp, &venc->frames, list) {
+ list_del_init(&frame->list);
+ vfree(frame);
+ }
+}
+
+static int venc_stop_session(struct vpu_inst *inst, u32 type)
+{
+ struct venc_t *venc = inst->priv;
+
+ venc_set_enable(venc, type, 0);
+ if (venc->enable & VENC_ENABLE_MASK)
+ return 0;
+
+ if (inst->state == VPU_CODEC_STATE_DEINIT)
+ return 0;
+
+ if (inst->state != VPU_CODEC_STATE_STOP)
+ venc_request_eos(inst);
+
+ call_vop(inst, wait_prepare);
+ if (!wait_event_timeout(venc->wq, venc->stopped, VPU_TIMEOUT)) {
+ set_bit(inst->id, &inst->core->hang_mask);
+ vpu_session_debug(inst);
+ }
+ call_vop(inst, wait_finish);
+
+ inst->state = VPU_CODEC_STATE_DEINIT;
+ venc_cleanup_frames(inst->priv);
+ vpu_free_dma(&inst->stream_buffer);
+ venc_cleanup_mem_resource(inst);
+
+ return 0;
+}
+
+static int venc_process_output(struct vpu_inst *inst, struct vb2_buffer *vb)
+{
+ struct venc_t *venc = inst->priv;
+ struct vb2_v4l2_buffer *vbuf;
+ struct vpu_vb2_buffer *vpu_buf = NULL;
+ u32 flags;
+
+ if (inst->state == VPU_CODEC_STATE_DEINIT)
+ return -EINVAL;
+
+ vbuf = to_vb2_v4l2_buffer(vb);
+ vpu_buf = to_vpu_vb2_buffer(vbuf);
+ if (inst->state == VPU_CODEC_STATE_STARTED)
+ inst->state = VPU_CODEC_STATE_ACTIVE;
+
+ flags = vbuf->flags;
+ if (venc->request_key_frame) {
+ vbuf->flags |= V4L2_BUF_FLAG_KEYFRAME;
+ venc->request_key_frame = 0;
+ }
+ if (venc->bitrate_change) {
+ vpu_session_update_parameters(inst, &venc->params);
+ venc->bitrate_change = false;
+ }
+ dev_dbg(inst->dev, "[%d][INPUT TS]%32lld\n", inst->id, vb->timestamp);
+ vpu_iface_input_frame(inst, vb);
+ vbuf->flags = flags;
+ venc->input_ready = false;
+ venc->frame_count++;
+ vpu_buf->state = VPU_BUF_STATE_INUSE;
+
+ return 0;
+}
+
+static int venc_process_capture(struct vpu_inst *inst, struct vb2_buffer *vb)
+{
+ struct venc_t *venc;
+ struct venc_frame_t *frame = NULL;
+ struct vb2_v4l2_buffer *vbuf;
+ int ret;
+
+ venc = inst->priv;
+ if (list_empty(&venc->frames))
+ return -EINVAL;
+
+ frame = list_first_entry(&venc->frames, struct venc_frame_t, list);
+ vbuf = to_vb2_v4l2_buffer(vb);
+ v4l2_m2m_dst_buf_remove_by_buf(inst->fh.m2m_ctx, vbuf);
+ ret = venc_get_one_encoded_frame(inst, frame, vbuf);
+ if (ret)
+ return ret;
+
+ list_del_init(&frame->list);
+ vfree(frame);
+ return 0;
+}
+
+static void venc_on_queue_empty(struct vpu_inst *inst, u32 type)
+{
+ struct venc_t *venc = inst->priv;
+
+ if (V4L2_TYPE_IS_OUTPUT(type))
+ return;
+
+ if (venc->stopped)
+ venc_set_last_buffer_dequeued(inst);
+}
+
+static int venc_get_debug_info(struct vpu_inst *inst, char *str, u32 size, u32 i)
+{
+ struct venc_t *venc = inst->priv;
+ int num = -1;
+
+ switch (i) {
+ case 0:
+ num = scnprintf(str, size, "profile = %d\n", venc->params.profile);
+ break;
+ case 1:
+ num = scnprintf(str, size, "level = %d\n", venc->params.level);
+ break;
+ case 2:
+ num = scnprintf(str, size, "fps = %d/%d\n",
+ venc->params.frame_rate.numerator,
+ venc->params.frame_rate.denominator);
+ break;
+ case 3:
+ num = scnprintf(str, size, "%d x %d -> %d x %d\n",
+ venc->params.src_width,
+ venc->params.src_height,
+ venc->params.out_width,
+ venc->params.out_height);
+ break;
+ case 4:
+ num = scnprintf(str, size, "(%d, %d) %d x %d\n",
+ venc->params.crop.left,
+ venc->params.crop.top,
+ venc->params.crop.width,
+ venc->params.crop.height);
+ break;
+ case 5:
+ num = scnprintf(str, size,
+ "enable = 0x%x, input = %d, encode = %d, ready = %d, stopped = %d\n",
+ venc->enable,
+ venc->frame_count, venc->encode_count,
+ venc->ready_count,
+ venc->stopped);
+ break;
+ case 6:
+ num = scnprintf(str, size, "gop = %d\n", venc->params.gop_length);
+ break;
+ case 7:
+ num = scnprintf(str, size, "bframes = %d\n", venc->params.bframes);
+ break;
+ case 8:
+ num = scnprintf(str, size, "rc: mode = %d, bitrate = %d, qp = %d\n",
+ venc->params.rc_mode,
+ venc->params.bitrate,
+ venc->params.i_frame_qp);
+ break;
+ case 9:
+ num = scnprintf(str, size, "sar: enable = %d, idc = %d, %d x %d\n",
+ venc->params.sar.enable,
+ venc->params.sar.idc,
+ venc->params.sar.width,
+ venc->params.sar.height);
+
+ break;
+ case 10:
+ num = scnprintf(str, size,
+ "colorspace: primaries = %d, transfer = %d, matrix = %d, full_range = %d\n",
+ venc->params.color.primaries,
+ venc->params.color.transfer,
+ venc->params.color.matrix,
+ venc->params.color.full_range);
+ break;
+ case 11:
+ num = scnprintf(str, size, "skipped: count = %d, bytes = %d\n",
+ venc->skipped_count, venc->skipped_bytes);
+ break;
+ default:
+ break;
+ }
+
+ return num;
+}
+
+static struct vpu_inst_ops venc_inst_ops = {
+ .ctrl_init = venc_ctrl_init,
+ .check_ready = venc_check_ready,
+ .input_done = venc_input_done,
+ .get_one_frame = venc_frame_encoded,
+ .buf_done = venc_buf_done,
+ .stop_done = venc_stop_done,
+ .event_notify = venc_event_notify,
+ .release = venc_release,
+ .cleanup = venc_cleanup,
+ .start = venc_start_session,
+ .mem_request = venc_request_mem_resource,
+ .stop = venc_stop_session,
+ .process_output = venc_process_output,
+ .process_capture = venc_process_capture,
+ .on_queue_empty = venc_on_queue_empty,
+ .get_debug_info = venc_get_debug_info,
+ .wait_prepare = vpu_inst_unlock,
+ .wait_finish = vpu_inst_lock,
+};
+
+static void venc_init(struct file *file)
+{
+ struct vpu_inst *inst = to_inst(file);
+ struct venc_t *venc;
+ struct v4l2_format f;
+ struct v4l2_streamparm parm;
+
+ venc = inst->priv;
+ venc->params.qp_min = 1;
+ venc->params.qp_max = 51;
+ venc->params.qp_min_i = 1;
+ venc->params.qp_max_i = 51;
+ venc->params.bitrate_max = BITRATE_MAX;
+ venc->params.bitrate_min = BITRATE_MIN;
+
+ memset(&f, 0, sizeof(f));
+ f.type = V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE;
+ f.fmt.pix_mp.pixelformat = V4L2_PIX_FMT_NV12M;
+ f.fmt.pix_mp.width = 1280;
+ f.fmt.pix_mp.height = 720;
+ f.fmt.pix_mp.field = V4L2_FIELD_NONE;
+ f.fmt.pix_mp.colorspace = V4L2_COLORSPACE_REC709;
+ venc_s_fmt(file, &inst->fh, &f);
+
+ memset(&f, 0, sizeof(f));
+ f.type = V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE;
+ f.fmt.pix_mp.pixelformat = V4L2_PIX_FMT_H264;
+ f.fmt.pix_mp.width = 1280;
+ f.fmt.pix_mp.height = 720;
+ f.fmt.pix_mp.field = V4L2_FIELD_NONE;
+ venc_s_fmt(file, &inst->fh, &f);
+
+ memset(&parm, 0, sizeof(parm));
+ parm.type = V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE;
+ parm.parm.capture.timeperframe.numerator = 1;
+ parm.parm.capture.timeperframe.denominator = 30;
+ venc_s_parm(file, &inst->fh, &parm);
+}
+
+static int venc_open(struct file *file)
+{
+ struct vpu_inst *inst;
+ struct venc_t *venc;
+ int ret;
+
+ inst = vzalloc(sizeof(*inst));
+ if (!inst)
+ return -ENOMEM;
+
+ venc = vzalloc(sizeof(*venc));
+ if (!venc) {
+ vfree(inst);
+ return -ENOMEM;
+ }
+
+ inst->ops = &venc_inst_ops;
+ inst->formats = venc_formats;
+ inst->type = VPU_CORE_TYPE_ENC;
+ inst->priv = venc;
+ INIT_LIST_HEAD(&venc->frames);
+ init_waitqueue_head(&venc->wq);
+
+ ret = vpu_v4l2_open(file, inst);
+ if (ret)
+ return ret;
+
+ venc_init(file);
+
+ return 0;
+}
+
+static const struct v4l2_file_operations venc_fops = {
+ .owner = THIS_MODULE,
+ .open = venc_open,
+ .release = vpu_v4l2_close,
+ .unlocked_ioctl = video_ioctl2,
+ .poll = v4l2_m2m_fop_poll,
+ .mmap = v4l2_m2m_fop_mmap,
+};
+
+const struct v4l2_ioctl_ops *venc_get_ioctl_ops(void)
+{
+ return &venc_ioctl_ops;
+}
+
+const struct v4l2_file_operations *venc_get_fops(void)
+{
+ return &venc_fops;
+}
--
2.33.0


2021-11-30 09:49:55

by Ming Qian

Subject: [PATCH v13 10/13] media: amphion: implement malone decoder rpc interface

This part implements the malone decoder rpc interface.

Signed-off-by: Ming Qian <[email protected]>
Signed-off-by: Shijie Qin <[email protected]>
Signed-off-by: Zhou Peng <[email protected]>
---
drivers/media/platform/amphion/vpu_malone.c | 1679 +++++++++++++++++++
drivers/media/platform/amphion/vpu_malone.h | 42 +
2 files changed, 1721 insertions(+)
create mode 100644 drivers/media/platform/amphion/vpu_malone.c
create mode 100644 drivers/media/platform/amphion/vpu_malone.h

diff --git a/drivers/media/platform/amphion/vpu_malone.c b/drivers/media/platform/amphion/vpu_malone.c
new file mode 100644
index 000000000000..d04d26054dda
--- /dev/null
+++ b/drivers/media/platform/amphion/vpu_malone.c
@@ -0,0 +1,1679 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright 2020-2021 NXP
+ */
+
+#include <linux/init.h>
+#include <linux/interconnect.h>
+#include <linux/ioctl.h>
+#include <linux/list.h>
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/of_device.h>
+#include <linux/of_address.h>
+#include <linux/platform_device.h>
+#include <linux/delay.h>
+#include <media/videobuf2-v4l2.h>
+#include <media/videobuf2-dma-contig.h>
+#include <linux/videodev2.h>
+#include "vpu.h"
+#include "vpu_rpc.h"
+#include "vpu_defs.h"
+#include "vpu_helpers.h"
+#include "vpu_v4l2.h"
+#include "vpu_cmds.h"
+#include "vpu_imx8q.h"
+#include "vpu_malone.h"
+
+#define CMD_SIZE 25600
+#define MSG_SIZE 25600
+#define CODEC_SIZE 0x1000
+#define JPEG_SIZE 0x1000
+#define SEQ_SIZE 0x1000
+#define GOP_SIZE 0x1000
+#define PIC_SIZE 0x1000
+#define QMETER_SIZE 0x1000
+#define DBGLOG_SIZE 0x10000
+#define DEBUG_SIZE 0x80000
+#define ENG_SIZE 0x1000
+#define MALONE_SKIPPED_FRAME_ID 0x555
+
+#define MALONE_ALIGN_MBI 0x800
+#define MALONE_DCP_CHUNK_BIT 16
+#define MALONE_DCP_SIZE_MAX 0x3000000
+#define MALONE_DCP_SIZE_MIN 0x100000
+#define MALONE_DCP_FIXED_MB_ALLOC 250
+
+#define CONFIG_SET(val, cfg, pos, mask) \
+ (*(cfg) |= (((val) << (pos)) & (mask)))
+/* x: source data, y: destination data */
+#define STREAM_CONFIG_FORMAT_SET(x, y) CONFIG_SET(x, y, 0, 0x0000000F)
+#define STREAM_CONFIG_STRBUFIDX_SET(x, y) CONFIG_SET(x, y, 8, 0x00000300)
+#define STREAM_CONFIG_NOSEQ_SET(x, y) CONFIG_SET(x, y, 10, 0x00000400)
+#define STREAM_CONFIG_DEBLOCK_SET(x, y) CONFIG_SET(x, y, 11, 0x00000800)
+#define STREAM_CONFIG_DERING_SET(x, y) CONFIG_SET(x, y, 12, 0x00001000)
+#define STREAM_CONFIG_IBWAIT_SET(x, y) CONFIG_SET(x, y, 13, 0x00002000)
+#define STREAM_CONFIG_FBC_SET(x, y) CONFIG_SET(x, y, 14, 0x00004000)
+#define STREAM_CONFIG_PLAY_MODE_SET(x, y) CONFIG_SET(x, y, 16, 0x00030000)
+#define STREAM_CONFIG_ENABLE_DCP_SET(x, y) CONFIG_SET(x, y, 20, 0x00100000)
+#define STREAM_CONFIG_NUM_STR_BUF_SET(x, y) CONFIG_SET(x, y, 21, 0x00600000)
+#define STREAM_CONFIG_MALONE_USAGE_SET(x, y) CONFIG_SET(x, y, 23, 0x01800000)
+#define STREAM_CONFIG_MULTI_VID_SET(x, y) CONFIG_SET(x, y, 25, 0x02000000)
+#define STREAM_CONFIG_OBFUSC_EN_SET(x, y) CONFIG_SET(x, y, 26, 0x04000000)
+#define STREAM_CONFIG_RC4_EN_SET(x, y) CONFIG_SET(x, y, 27, 0x08000000)
+#define STREAM_CONFIG_MCX_SET(x, y) CONFIG_SET(x, y, 28, 0x10000000)
+#define STREAM_CONFIG_PES_SET(x, y) CONFIG_SET(x, y, 29, 0x20000000)
+#define STREAM_CONFIG_NUM_DBE_SET(x, y) CONFIG_SET(x, y, 30, 0x40000000)
+#define STREAM_CONFIG_FS_CTRL_MODE_SET(x, y) CONFIG_SET(x, y, 31, 0x80000000)
+
+enum vpu_malone_stream_input_mode {
+ INVALID_MODE = 0,
+ FRAME_LVL,
+ NON_FRAME_LVL
+};
+
+enum vpu_malone_format {
+ MALONE_FMT_NULL = 0x0,
+ MALONE_FMT_AVC = 0x1,
+ MALONE_FMT_MP2 = 0x2,
+ MALONE_FMT_VC1 = 0x3,
+ MALONE_FMT_AVS = 0x4,
+ MALONE_FMT_ASP = 0x5,
+ MALONE_FMT_JPG = 0x6,
+ MALONE_FMT_RV = 0x7,
+ MALONE_FMT_VP6 = 0x8,
+ MALONE_FMT_SPK = 0x9,
+ MALONE_FMT_VP8 = 0xA,
+ MALONE_FMT_HEVC = 0xB,
+ MALONE_FMT_LAST = MALONE_FMT_HEVC
+};
+
+enum {
+ VID_API_CMD_NULL = 0x00,
+ VID_API_CMD_PARSE_NEXT_SEQ = 0x01,
+ VID_API_CMD_PARSE_NEXT_I = 0x02,
+ VID_API_CMD_PARSE_NEXT_IP = 0x03,
+ VID_API_CMD_PARSE_NEXT_ANY = 0x04,
+ VID_API_CMD_DEC_PIC = 0x05,
+ VID_API_CMD_UPDATE_ES_WR_PTR = 0x06,
+ VID_API_CMD_UPDATE_ES_RD_PTR = 0x07,
+ VID_API_CMD_UPDATE_UDATA = 0x08,
+ VID_API_CMD_GET_FSINFO = 0x09,
+ VID_API_CMD_SKIP_PIC = 0x0a,
+ VID_API_CMD_DEC_CHUNK = 0x0b,
+ VID_API_CMD_START = 0x10,
+ VID_API_CMD_STOP = 0x11,
+ VID_API_CMD_ABORT = 0x12,
+ VID_API_CMD_RST_BUF = 0x13,
+ VID_API_CMD_FS_RELEASE = 0x15,
+ VID_API_CMD_MEM_REGION_ATTACH = 0x16,
+ VID_API_CMD_MEM_REGION_DETACH = 0x17,
+ VID_API_CMD_MVC_VIEW_SELECT = 0x18,
+ VID_API_CMD_FS_ALLOC = 0x19,
+ VID_API_CMD_DBG_GET_STATUS = 0x1C,
+ VID_API_CMD_DBG_START_LOG = 0x1D,
+ VID_API_CMD_DBG_STOP_LOG = 0x1E,
+ VID_API_CMD_DBG_DUMP_LOG = 0x1F,
+ VID_API_CMD_YUV_READY = 0x20,
+ VID_API_CMD_TS = 0x21,
+
+ VID_API_CMD_FIRM_RESET = 0x40,
+
+ VID_API_CMD_SNAPSHOT = 0xAA,
+ VID_API_CMD_ROLL_SNAPSHOT = 0xAB,
+ VID_API_CMD_LOCK_SCHEDULER = 0xAC,
+ VID_API_CMD_UNLOCK_SCHEDULER = 0xAD,
+ VID_API_CMD_CQ_FIFO_DUMP = 0xAE,
+ VID_API_CMD_DBG_FIFO_DUMP = 0xAF,
+ VID_API_CMD_SVC_ILP = 0xBB,
+ VID_API_CMD_FW_STATUS = 0xF0,
+ VID_API_CMD_INVALID = 0xFF
+};
+
+enum {
+ VID_API_EVENT_NULL = 0x00,
+ VID_API_EVENT_RESET_DONE = 0x01,
+ VID_API_EVENT_SEQ_HDR_FOUND = 0x02,
+ VID_API_EVENT_PIC_HDR_FOUND = 0x03,
+ VID_API_EVENT_PIC_DECODED = 0x04,
+ VID_API_EVENT_FIFO_LOW = 0x05,
+ VID_API_EVENT_FIFO_HIGH = 0x06,
+ VID_API_EVENT_FIFO_EMPTY = 0x07,
+ VID_API_EVENT_FIFO_FULL = 0x08,
+ VID_API_EVENT_BS_ERROR = 0x09,
+ VID_API_EVENT_UDATA_FIFO_UPTD = 0x0A,
+ VID_API_EVENT_RES_CHANGE = 0x0B,
+ VID_API_EVENT_FIFO_OVF = 0x0C,
+ VID_API_EVENT_CHUNK_DECODED = 0x0D,
+ VID_API_EVENT_REQ_FRAME_BUFF = 0x10,
+ VID_API_EVENT_FRAME_BUFF_RDY = 0x11,
+ VID_API_EVENT_REL_FRAME_BUFF = 0x12,
+ VID_API_EVENT_STR_BUF_RST = 0x13,
+ VID_API_EVENT_RET_PING = 0x14,
+ VID_API_EVENT_QMETER = 0x15,
+ VID_API_EVENT_STR_FMT_CHANGE = 0x16,
+ VID_API_EVENT_FIRMWARE_XCPT = 0x17,
+ VID_API_EVENT_START_DONE = 0x18,
+ VID_API_EVENT_STOPPED = 0x19,
+ VID_API_EVENT_ABORT_DONE = 0x1A,
+ VID_API_EVENT_FINISHED = 0x1B,
+ VID_API_EVENT_DBG_STAT_UPDATE = 0x1C,
+ VID_API_EVENT_DBG_LOG_STARTED = 0x1D,
+ VID_API_EVENT_DBG_LOG_STOPPED = 0x1E,
+ VID_API_EVENT_DBG_LOG_UPDATED = 0x1F,
+ VID_API_EVENT_DBG_MSG_DEC = 0x20,
+ VID_API_EVENT_DEC_SC_ERR = 0x21,
+ VID_API_EVENT_CQ_FIFO_DUMP = 0x22,
+ VID_API_EVENT_DBG_FIFO_DUMP = 0x23,
+ VID_API_EVENT_DEC_CHECK_RES = 0x24,
+ VID_API_EVENT_DEC_CFG_INFO = 0x25,
+ VID_API_EVENT_UNSUPPORTED_STREAM = 0x26,
+ VID_API_EVENT_STR_SUSPENDED = 0x30,
+ VID_API_EVENT_SNAPSHOT_DONE = 0x40,
+ VID_API_EVENT_FW_STATUS = 0xF0,
+ VID_API_EVENT_INVALID = 0xFF
+};
+
+struct vpu_malone_buffer_desc {
+ struct vpu_rpc_buffer_desc buffer;
+ u32 low;
+ u32 high;
+};
+
+struct vpu_malone_str_buffer {
+ u32 wptr;
+ u32 rptr;
+ u32 start;
+ u32 end;
+ u32 lwm;
+};
+
+struct vpu_malone_picth_info {
+ u32 frame_pitch;
+};
+
+struct vpu_malone_table_desc {
+ u32 array_base;
+ u32 size;
+};
+
+struct vpu_malone_dbglog_desc {
+ u32 addr;
+ u32 size;
+ u32 level;
+ u32 reserved;
+};
+
+struct vpu_malone_frame_buffer {
+ u32 addr;
+ u32 size;
+};
+
+struct vpu_malone_udata {
+ u32 base;
+ u32 total_size;
+ u32 slot_size;
+};
+
+struct vpu_malone_buffer_info {
+ u32 stream_input_mode;
+ u32 stream_pic_input_count;
+ u32 stream_pic_parsed_count;
+ u32 stream_buffer_threshold;
+ u32 stream_pic_end_flag;
+};
+
+struct vpu_malone_encrypt_info {
+ u32 rec4key[8];
+ u32 obfusc;
+};
+
+struct malone_iface {
+ u32 exec_base_addr;
+ u32 exec_area_size;
+ struct vpu_malone_buffer_desc cmd_buffer_desc;
+ struct vpu_malone_buffer_desc msg_buffer_desc;
+ u32 cmd_int_enable[VID_API_NUM_STREAMS];
+ struct vpu_malone_picth_info stream_pitch_info[VID_API_NUM_STREAMS];
+ u32 stream_config[VID_API_NUM_STREAMS];
+ struct vpu_malone_table_desc codec_param_tab_desc;
+ struct vpu_malone_table_desc jpeg_param_tab_desc;
+ u32 stream_buffer_desc[VID_API_NUM_STREAMS][VID_API_MAX_BUF_PER_STR];
+ struct vpu_malone_table_desc seq_info_tab_desc;
+ struct vpu_malone_table_desc pic_info_tab_desc;
+ struct vpu_malone_table_desc gop_info_tab_desc;
+ struct vpu_malone_table_desc qmeter_info_tab_desc;
+ u32 stream_error[VID_API_NUM_STREAMS];
+ u32 fw_version;
+ u32 fw_offset;
+ u32 max_streams;
+ struct vpu_malone_dbglog_desc dbglog_desc;
+ struct vpu_rpc_buffer_desc api_cmd_buffer_desc[VID_API_NUM_STREAMS];
+ struct vpu_malone_udata udata_buffer[VID_API_NUM_STREAMS];
+ struct vpu_malone_buffer_desc debug_buffer_desc;
+ struct vpu_malone_buffer_desc eng_access_buff_desc[VID_API_NUM_STREAMS];
+ u32 encrypt_info[VID_API_NUM_STREAMS];
+ struct vpu_rpc_system_config system_cfg;
+ u32 api_version;
+ struct vpu_malone_buffer_info stream_buff_info[VID_API_NUM_STREAMS];
+};
+
+struct malone_jpg_params {
+ u32 rotation_angle;
+ u32 horiz_scale_factor;
+ u32 vert_scale_factor;
+ u32 rotation_mode;
+ u32 rgb_mode;
+ u32 chunk_mode; /* 0 ~ 1 */
+ u32 last_chunk; /* 0 ~ 1 */
+ u32 chunk_rows; /* 0 ~ 255 */
+ u32 num_bytes;
+ u32 jpg_crop_x;
+ u32 jpg_crop_y;
+ u32 jpg_crop_width;
+ u32 jpg_crop_height;
+ u32 jpg_mjpeg_mode;
+ u32 jpg_mjpeg_interlaced;
+};
+
+struct malone_codec_params {
+ u32 disp_imm;
+ u32 fourcc;
+ u32 codec_version;
+ u32 frame_rate;
+ u32 dbglog_enable;
+ u32 bsdma_lwm;
+ u32 bbd_coring;
+ u32 bbd_s_thr_row;
+ u32 bbd_p_thr_row;
+ u32 bbd_s_thr_logo_row;
+ u32 bbd_p_thr_logo_row;
+ u32 bbd_s_thr_col;
+ u32 bbd_p_thr_col;
+ u32 bbd_chr_thr_row;
+ u32 bbd_chr_thr_col;
+ u32 bbd_uv_mid_level;
+ u32 bbd_excl_win_mb_left;
+ u32 bbd_excl_win_mb_right;
+};
+
+struct malone_padding_scode {
+ u32 scode_type;
+ u32 pixelformat;
+ u32 data[2];
+};
+
+struct malone_fmt_mapping {
+ u32 pixelformat;
+ enum vpu_malone_format malone_format;
+};
+
+struct malone_scode_t {
+ struct vpu_inst *inst;
+ struct vb2_buffer *vb;
+ u32 wptr;
+ u32 need_data;
+};
+
+struct malone_scode_handler {
+ u32 pixelformat;
+ int (*insert_scode_seq)(struct malone_scode_t *scode);
+ int (*insert_scode_pic)(struct malone_scode_t *scode);
+};
+
+struct vpu_dec_ctrl {
+ struct malone_codec_params *codec_param;
+ struct malone_jpg_params *jpg;
+ void *seq_mem;
+ void *pic_mem;
+ void *gop_mem;
+ void *qmeter_mem;
+ void *dbglog_mem;
+ struct vpu_malone_str_buffer *str_buf[VID_API_NUM_STREAMS];
+ u32 buf_addr[VID_API_NUM_STREAMS];
+};
+
+u32 vpu_malone_get_data_size(void)
+{
+ return sizeof(struct vpu_dec_ctrl);
+}
+
+void vpu_malone_init_rpc(struct vpu_shared_addr *shared,
+ struct vpu_buffer *rpc, dma_addr_t boot_addr)
+{
+ struct malone_iface *iface;
+ struct vpu_dec_ctrl *hc;
+ unsigned long base_phy_addr;
+ unsigned long phy_addr;
+ unsigned long offset;
+ unsigned int i;
+
+ WARN_ON(!shared || !shared->priv);
+ WARN_ON(!rpc || !rpc->phys || !rpc->length || rpc->phys < boot_addr);
+
+ iface = rpc->virt;
+ base_phy_addr = rpc->phys - boot_addr;
+ hc = shared->priv;
+
+ shared->iface = iface;
+ shared->boot_addr = boot_addr;
+
+ iface->exec_base_addr = base_phy_addr;
+ iface->exec_area_size = rpc->length;
+
+ offset = sizeof(struct malone_iface);
+ phy_addr = base_phy_addr + offset;
+
+ shared->cmd_desc = &iface->cmd_buffer_desc.buffer;
+ shared->cmd_mem_vir = rpc->virt + offset;
+ iface->cmd_buffer_desc.buffer.start =
+ iface->cmd_buffer_desc.buffer.rptr =
+ iface->cmd_buffer_desc.buffer.wptr = phy_addr;
+ iface->cmd_buffer_desc.buffer.end = iface->cmd_buffer_desc.buffer.start + CMD_SIZE;
+ offset += CMD_SIZE;
+ phy_addr = base_phy_addr + offset;
+
+ shared->msg_desc = &iface->msg_buffer_desc.buffer;
+ shared->msg_mem_vir = rpc->virt + offset;
+ iface->msg_buffer_desc.buffer.start =
+ iface->msg_buffer_desc.buffer.wptr =
+ iface->msg_buffer_desc.buffer.rptr = phy_addr;
+ iface->msg_buffer_desc.buffer.end = iface->msg_buffer_desc.buffer.start + MSG_SIZE;
+ offset += MSG_SIZE;
+ phy_addr = base_phy_addr + offset;
+
+ iface->codec_param_tab_desc.array_base = phy_addr;
+ hc->codec_param = rpc->virt + offset;
+ offset += CODEC_SIZE;
+ phy_addr = base_phy_addr + offset;
+
+ iface->jpeg_param_tab_desc.array_base = phy_addr;
+ hc->jpg = rpc->virt + offset;
+ offset += JPEG_SIZE;
+ phy_addr = base_phy_addr + offset;
+
+ iface->seq_info_tab_desc.array_base = phy_addr;
+ hc->seq_mem = rpc->virt + offset;
+ offset += SEQ_SIZE;
+ phy_addr = base_phy_addr + offset;
+
+ iface->pic_info_tab_desc.array_base = phy_addr;
+ hc->pic_mem = rpc->virt + offset;
+ offset += PIC_SIZE;
+ phy_addr = base_phy_addr + offset;
+
+ iface->gop_info_tab_desc.array_base = phy_addr;
+ hc->gop_mem = rpc->virt + offset;
+ offset += GOP_SIZE;
+ phy_addr = base_phy_addr + offset;
+
+ iface->qmeter_info_tab_desc.array_base = phy_addr;
+ hc->qmeter_mem = rpc->virt + offset;
+ offset += QMETER_SIZE;
+ phy_addr = base_phy_addr + offset;
+
+ iface->dbglog_desc.addr = phy_addr;
+ iface->dbglog_desc.size = DBGLOG_SIZE;
+ hc->dbglog_mem = rpc->virt + offset;
+ offset += DBGLOG_SIZE;
+ phy_addr = base_phy_addr + offset;
+
+ for (i = 0; i < VID_API_NUM_STREAMS; i++) {
+ iface->eng_access_buff_desc[i].buffer.start =
+ iface->eng_access_buff_desc[i].buffer.wptr =
+ iface->eng_access_buff_desc[i].buffer.rptr = phy_addr;
+ iface->eng_access_buff_desc[i].buffer.end =
+ iface->eng_access_buff_desc[i].buffer.start + ENG_SIZE;
+ offset += ENG_SIZE;
+ phy_addr = base_phy_addr + offset;
+ }
+
+ for (i = 0; i < VID_API_NUM_STREAMS; i++) {
+ iface->encrypt_info[i] = phy_addr;
+ offset += sizeof(struct vpu_malone_encrypt_info);
+ phy_addr = base_phy_addr + offset;
+ }
+
+ rpc->bytesused = offset;
+}
+
+void vpu_malone_set_log_buf(struct vpu_shared_addr *shared,
+ struct vpu_buffer *log)
+{
+ struct malone_iface *iface;
+
+ WARN_ON(!shared || !log || !log->phys);
+ iface = shared->iface;
+ iface->debug_buffer_desc.buffer.start =
+ iface->debug_buffer_desc.buffer.wptr =
+ iface->debug_buffer_desc.buffer.rptr = log->phys - shared->boot_addr;
+ iface->debug_buffer_desc.buffer.end = iface->debug_buffer_desc.buffer.start + log->length;
+}
+
+static u32 get_str_buffer_offset(u32 instance)
+{
+ return DEC_MFD_XREG_SLV_BASE + MFD_MCX + MFD_MCX_OFF * instance;
+}
+
+void vpu_malone_set_system_cfg(struct vpu_shared_addr *shared,
+ u32 regs_base, void __iomem *regs, u32 core_id)
+{
+ struct malone_iface *iface;
+ struct vpu_rpc_system_config *config;
+ struct vpu_dec_ctrl *hc;
+ int i;
+
+ WARN_ON(!shared || !shared->iface || !shared->core || !shared->priv);
+
+ iface = shared->iface;
+ config = &iface->system_cfg;
+ hc = shared->priv;
+
+ vpu_imx8q_set_system_cfg_common(config, regs_base, core_id);
+ for (i = 0; i < VID_API_NUM_STREAMS; i++) {
+ u32 offset = get_str_buffer_offset(i);
+
+ hc->buf_addr[i] = regs_base + offset;
+ hc->str_buf[i] = regs + offset;
+ }
+}
+
+u32 vpu_malone_get_version(struct vpu_shared_addr *shared)
+{
+ struct malone_iface *iface;
+
+ WARN_ON(!shared || !shared->iface);
+
+ iface = shared->iface;
+ return iface->fw_version;
+}
+
+int vpu_malone_get_stream_buffer_size(struct vpu_shared_addr *shared)
+{
+ return 0xc00000;
+}
+
+int vpu_malone_config_stream_buffer(struct vpu_shared_addr *shared,
+ u32 instance,
+ struct vpu_buffer *buf)
+{
+ struct malone_iface *iface;
+ struct vpu_dec_ctrl *hc;
+ struct vpu_malone_str_buffer *str_buf;
+
+ WARN_ON(!shared || !shared->iface || !shared->core || !shared->priv);
+
+ iface = shared->iface;
+ hc = shared->priv;
+ str_buf = hc->str_buf[instance];
+ str_buf->wptr = str_buf->rptr = str_buf->start = buf->phys;
+ str_buf->end = buf->phys + buf->length;
+ str_buf->lwm = 0x1;
+
+ iface->stream_buffer_desc[instance][0] = hc->buf_addr[instance];
+
+ return 0;
+}
+
+int vpu_malone_get_stream_buffer_desc(struct vpu_shared_addr *shared,
+ u32 instance, struct vpu_rpc_buffer_desc *desc)
+{
+ struct vpu_dec_ctrl *hc;
+ struct vpu_malone_str_buffer *str_buf;
+
+ WARN_ON(!shared || !shared->iface || !shared->core || !shared->priv);
+
+ hc = shared->priv;
+ str_buf = hc->str_buf[instance];
+
+ if (desc) {
+ desc->wptr = str_buf->wptr;
+ desc->rptr = str_buf->rptr;
+ desc->start = str_buf->start;
+ desc->end = str_buf->end;
+ }
+
+ return 0;
+}
+
+static void vpu_malone_update_wptr(struct vpu_malone_str_buffer *str_buf,
+ u32 wptr)
+{
+ u32 size = str_buf->end - str_buf->start;
+ u32 space = (str_buf->rptr + size - str_buf->wptr) % size;
+ u32 step = (wptr + size - str_buf->wptr) % size;
+
+ if (space && step > space)
+ pr_err("update wptr from 0x%x to 0x%x, cross over rptr 0x%x\n",
+ str_buf->wptr, wptr, str_buf->rptr);
+
+ /* update wptr after data is written */
+ mb();
+ str_buf->wptr = wptr;
+}
+
+static void vpu_malone_update_rptr(struct vpu_malone_str_buffer *str_buf,
+ u32 rptr)
+{
+ u32 size = str_buf->end - str_buf->start;
+ u32 space = (str_buf->wptr + size - str_buf->rptr) % size;
+ u32 step = (rptr + size - str_buf->rptr) % size;
+
+ if (step > space)
+ pr_err("update rptr from 0x%x to 0x%x, cross over wptr 0x%x\n",
+ str_buf->rptr, rptr, str_buf->wptr);
+ /* update rptr after data is read */
+ mb();
+ str_buf->rptr = rptr;
+}
+
+int vpu_malone_update_stream_buffer(struct vpu_shared_addr *shared,
+ u32 instance, u32 ptr, bool write)
+{
+ struct vpu_dec_ctrl *hc;
+ struct vpu_malone_str_buffer *str_buf;
+
+ WARN_ON(!shared || !shared->iface || !shared->core || !shared->priv);
+
+ hc = shared->priv;
+ str_buf = hc->str_buf[instance];
+
+ if (write)
+ vpu_malone_update_wptr(str_buf, ptr);
+ else
+ vpu_malone_update_rptr(str_buf, ptr);
+
+ return 0;
+}
+
+static struct malone_fmt_mapping fmt_mappings[] = {
+ {V4L2_PIX_FMT_H264, MALONE_FMT_AVC},
+ {V4L2_PIX_FMT_H264_MVC, MALONE_FMT_AVC},
+ {V4L2_PIX_FMT_HEVC, MALONE_FMT_HEVC},
+ {V4L2_PIX_FMT_VC1_ANNEX_G, MALONE_FMT_VC1},
+ {V4L2_PIX_FMT_VC1_ANNEX_L, MALONE_FMT_VC1},
+ {V4L2_PIX_FMT_MPEG2, MALONE_FMT_MP2},
+ {V4L2_PIX_FMT_MPEG4, MALONE_FMT_ASP},
+ {V4L2_PIX_FMT_XVID, MALONE_FMT_ASP},
+ {V4L2_PIX_FMT_H263, MALONE_FMT_ASP},
+ {V4L2_PIX_FMT_JPEG, MALONE_FMT_JPG},
+ {V4L2_PIX_FMT_VP8, MALONE_FMT_VP8},
+};
+
+static enum vpu_malone_format vpu_malone_format_remap(u32 pixelformat)
+{
+ u32 i;
+
+ for (i = 0; i < ARRAY_SIZE(fmt_mappings); i++) {
+ if (pixelformat == fmt_mappings[i].pixelformat)
+ return fmt_mappings[i].malone_format;
+ }
+
+ return MALONE_FMT_NULL;
+}
+
+static void vpu_malone_set_stream_cfg(struct vpu_shared_addr *shared,
+ u32 instance, enum vpu_malone_format malone_format)
+{
+ struct malone_iface *iface;
+ u32 *curr_str_cfg;
+
+ iface = shared->iface;
+ curr_str_cfg = &iface->stream_config[instance];
+
+ *curr_str_cfg = 0;
+ STREAM_CONFIG_FORMAT_SET(malone_format, curr_str_cfg);
+ STREAM_CONFIG_STRBUFIDX_SET(0, curr_str_cfg);
+ STREAM_CONFIG_NOSEQ_SET(0, curr_str_cfg);
+ STREAM_CONFIG_DEBLOCK_SET(0, curr_str_cfg);
+ STREAM_CONFIG_DERING_SET(0, curr_str_cfg);
+ STREAM_CONFIG_PLAY_MODE_SET(0x3, curr_str_cfg);
+ STREAM_CONFIG_FS_CTRL_MODE_SET(0x1, curr_str_cfg);
+ STREAM_CONFIG_ENABLE_DCP_SET(1, curr_str_cfg);
+ STREAM_CONFIG_NUM_STR_BUF_SET(1, curr_str_cfg);
+ STREAM_CONFIG_MALONE_USAGE_SET(1, curr_str_cfg);
+ STREAM_CONFIG_MULTI_VID_SET(0, curr_str_cfg);
+ STREAM_CONFIG_OBFUSC_EN_SET(0, curr_str_cfg);
+ STREAM_CONFIG_RC4_EN_SET(0, curr_str_cfg);
+ STREAM_CONFIG_MCX_SET(1, curr_str_cfg);
+ STREAM_CONFIG_PES_SET(0, curr_str_cfg);
+ STREAM_CONFIG_NUM_DBE_SET(1, curr_str_cfg);
+}
+
+static int vpu_malone_set_params(struct vpu_shared_addr *shared,
+ u32 instance, struct vpu_decode_params *params)
+{
+ struct malone_iface *iface;
+ struct vpu_dec_ctrl *hc;
+ enum vpu_malone_format malone_format;
+
+ iface = shared->iface;
+ hc = shared->priv;
+ malone_format = vpu_malone_format_remap(params->codec_format);
+ iface->udata_buffer[instance].base = params->udata.base;
+ iface->udata_buffer[instance].slot_size = params->udata.size;
+
+ vpu_malone_set_stream_cfg(shared, instance, malone_format);
+
+ if (malone_format == MALONE_FMT_JPG) {
+ // 1: JPGD_MJPEG_MODE_A; 2: JPGD_MJPEG_MODE_B
+ hc->jpg[instance].jpg_mjpeg_mode = 1;
+ // 0: JPGD_MJPEG_PROGRESSIVE
+ hc->jpg[instance].jpg_mjpeg_interlaced = 0;
+ }
+
+ hc->codec_param[instance].disp_imm = params->b_dis_reorder ? 1 : 0;
+ hc->codec_param[instance].dbglog_enable = 0;
+ iface->dbglog_desc.level = 0;
+
+ if (params->b_non_frame)
+ iface->stream_buff_info[instance].stream_input_mode = NON_FRAME_LVL;
+ else
+ iface->stream_buff_info[instance].stream_input_mode = FRAME_LVL;
+ iface->stream_buff_info[instance].stream_buffer_threshold = 0;
+ iface->stream_buff_info[instance].stream_pic_input_count = 0;
+
+ return 0;
+}
+
+static bool vpu_malone_is_non_frame_mode(struct vpu_shared_addr *shared,
+ u32 instance)
+{
+ struct malone_iface *iface;
+
+ iface = shared->iface;
+ if (iface->stream_buff_info[instance].stream_input_mode == NON_FRAME_LVL)
+ return true;
+
+ return false;
+}
+
+static int vpu_malone_update_params(struct vpu_shared_addr *shared,
+ u32 instance, struct vpu_decode_params *params)
+{
+ struct malone_iface *iface;
+
+ iface = shared->iface;
+
+ if (params->end_flag)
+ iface->stream_buff_info[instance].stream_pic_end_flag = params->end_flag;
+ params->end_flag = 0;
+
+ return 0;
+}
+
+int vpu_malone_set_decode_params(struct vpu_shared_addr *shared,
+ u32 instance, struct vpu_decode_params *params, u32 update)
+{
+ if (!params)
+ return -EINVAL;
+
+ if (!update)
+ return vpu_malone_set_params(shared, instance, params);
+ else
+ return vpu_malone_update_params(shared, instance, params);
+}
+
+static struct vpu_pair malone_cmds[] = {
+ {VPU_CMD_ID_START, VID_API_CMD_START},
+ {VPU_CMD_ID_STOP, VID_API_CMD_STOP},
+ {VPU_CMD_ID_ABORT, VID_API_CMD_ABORT},
+ {VPU_CMD_ID_RST_BUF, VID_API_CMD_RST_BUF},
+ {VPU_CMD_ID_SNAPSHOT, VID_API_CMD_SNAPSHOT},
+ {VPU_CMD_ID_FIRM_RESET, VID_API_CMD_FIRM_RESET},
+ {VPU_CMD_ID_FS_ALLOC, VID_API_CMD_FS_ALLOC},
+ {VPU_CMD_ID_FS_RELEASE, VID_API_CMD_FS_RELEASE},
+ {VPU_CMD_ID_TIMESTAMP, VID_API_CMD_TS},
+ {VPU_CMD_ID_DEBUG, VID_API_CMD_FW_STATUS},
+};
+
+static struct vpu_pair malone_msgs[] = {
+ {VPU_MSG_ID_RESET_DONE, VID_API_EVENT_RESET_DONE},
+ {VPU_MSG_ID_START_DONE, VID_API_EVENT_START_DONE},
+ {VPU_MSG_ID_STOP_DONE, VID_API_EVENT_STOPPED},
+ {VPU_MSG_ID_ABORT_DONE, VID_API_EVENT_ABORT_DONE},
+ {VPU_MSG_ID_BUF_RST, VID_API_EVENT_STR_BUF_RST},
+ {VPU_MSG_ID_PIC_EOS, VID_API_EVENT_FINISHED},
+ {VPU_MSG_ID_SEQ_HDR_FOUND, VID_API_EVENT_SEQ_HDR_FOUND},
+ {VPU_MSG_ID_RES_CHANGE, VID_API_EVENT_RES_CHANGE},
+ {VPU_MSG_ID_PIC_HDR_FOUND, VID_API_EVENT_PIC_HDR_FOUND},
+ {VPU_MSG_ID_PIC_DECODED, VID_API_EVENT_PIC_DECODED},
+ {VPU_MSG_ID_DEC_DONE, VID_API_EVENT_FRAME_BUFF_RDY},
+ {VPU_MSG_ID_FRAME_REQ, VID_API_EVENT_REQ_FRAME_BUFF},
+ {VPU_MSG_ID_FRAME_RELEASE, VID_API_EVENT_REL_FRAME_BUFF},
+ {VPU_MSG_ID_FIFO_LOW, VID_API_EVENT_FIFO_LOW},
+ {VPU_MSG_ID_BS_ERROR, VID_API_EVENT_BS_ERROR},
+ {VPU_MSG_ID_UNSUPPORTED, VID_API_EVENT_UNSUPPORTED_STREAM},
+ {VPU_MSG_ID_FIRMWARE_XCPT, VID_API_EVENT_FIRMWARE_XCPT},
+};
+
+static void vpu_malone_pack_fs_alloc(struct vpu_rpc_event *pkt,
+ struct vpu_fs_info *fs)
+{
+ const u32 fs_type[] = {
+ [MEM_RES_FRAME] = 0,
+ [MEM_RES_MBI] = 1,
+ [MEM_RES_DCP] = 2,
+ };
+
+ pkt->hdr.num = 7;
+ pkt->data[0] = fs->id | (fs->tag << 24);
+ pkt->data[1] = fs->luma_addr;
+ if (fs->type == MEM_RES_FRAME) {
+ /*
+ * If luma_addr equals chroma_addr, the luma (plane[0]) and
+ * chroma (plane[1]) planes share the same fd (the usage of
+ * NXP codec2), so the chroma address must be offset manually.
+ */
+ if (fs->luma_addr == fs->chroma_addr)
+ fs->chroma_addr = fs->luma_addr + fs->luma_size;
+ pkt->data[2] = fs->luma_addr + fs->luma_size / 2;
+ pkt->data[3] = fs->chroma_addr;
+ pkt->data[4] = fs->chroma_addr + fs->chromau_size / 2;
+ pkt->data[5] = fs->bytesperline;
+ } else {
+ pkt->data[2] = fs->luma_size;
+ pkt->data[3] = 0;
+ pkt->data[4] = 0;
+ pkt->data[5] = 0;
+ }
+ pkt->data[6] = fs_type[fs->type];
+}
+
+static void vpu_malone_pack_fs_release(struct vpu_rpc_event *pkt,
+ struct vpu_fs_info *fs)
+{
+ pkt->hdr.num = 1;
+ pkt->data[0] = fs->id | (fs->tag << 24);
+}
+
+static void vpu_malone_pack_timestamp(struct vpu_rpc_event *pkt,
+ struct vpu_ts_info *info)
+{
+ pkt->hdr.num = 3;
+ if (info->timestamp < 0) {
+ pkt->data[0] = (u32)-1;
+ pkt->data[1] = 0;
+ } else {
+ pkt->data[0] = info->timestamp / NSEC_PER_SEC;
+ pkt->data[1] = info->timestamp % NSEC_PER_SEC;
+ }
+ pkt->data[2] = info->size;
+}
+
+int vpu_malone_pack_cmd(struct vpu_rpc_event *pkt, u32 index, u32 id, void *data)
+{
+ int ret;
+
+ WARN_ON(!pkt);
+
+ ret = vpu_find_dst_by_src(malone_cmds, ARRAY_SIZE(malone_cmds), id);
+ if (ret < 0)
+ return ret;
+
+ pkt->hdr.id = ret;
+ pkt->hdr.num = 0;
+ pkt->hdr.index = index;
+
+ switch (id) {
+ case VPU_CMD_ID_FS_ALLOC:
+ vpu_malone_pack_fs_alloc(pkt, data);
+ break;
+ case VPU_CMD_ID_FS_RELEASE:
+ vpu_malone_pack_fs_release(pkt, data);
+ break;
+ case VPU_CMD_ID_TIMESTAMP:
+ vpu_malone_pack_timestamp(pkt, data);
+ break;
+ }
+
+ return 0;
+}
+
+int vpu_malone_convert_msg_id(u32 id)
+{
+ return vpu_find_src_by_dst(malone_msgs, ARRAY_SIZE(malone_msgs), id);
+}
+
+static void vpu_malone_fill_planes(struct vpu_dec_codec_info *info)
+{
+ u32 interlaced = info->progressive ? 0 : 1;
+
+ info->bytesperline[0] = 0;
+ info->sizeimage[0] = vpu_helper_get_plane_size(info->pixfmt,
+ info->decoded_width, info->decoded_height,
+ 0, info->stride, interlaced,
+ &info->bytesperline[0]);
+ info->bytesperline[1] = 0;
+ info->sizeimage[1] = vpu_helper_get_plane_size(info->pixfmt,
+ info->decoded_width, info->decoded_height,
+ 1, info->stride, interlaced,
+ &info->bytesperline[1]);
+}
+
+static void vpu_malone_init_seq_hdr(struct vpu_dec_codec_info *info)
+{
+ u32 chunks = info->num_dfe_area >> MALONE_DCP_CHUNK_BIT;
+
+ vpu_malone_fill_planes(info);
+
+ info->mbi_size = (info->sizeimage[0] + info->sizeimage[1]) >> 2;
+ info->mbi_size = ALIGN(info->mbi_size, MALONE_ALIGN_MBI);
+
+ info->dcp_size = MALONE_DCP_SIZE_MAX;
+ if (chunks) {
+ u32 mb_num;
+ u32 mb_w;
+ u32 mb_h;
+
+ mb_w = DIV_ROUND_UP(info->decoded_width, 16);
+ mb_h = DIV_ROUND_UP(info->decoded_height, 16);
+ mb_num = mb_w * mb_h;
+ info->dcp_size = mb_num * MALONE_DCP_FIXED_MB_ALLOC * chunks;
+ info->dcp_size = clamp_t(u32, info->dcp_size,
+ MALONE_DCP_SIZE_MIN, MALONE_DCP_SIZE_MAX);
+ }
+}
+
+static void vpu_malone_unpack_seq_hdr(struct vpu_rpc_event *pkt,
+ struct vpu_dec_codec_info *info)
+{
+ info->num_ref_frms = pkt->data[0];
+ info->num_dpb_frms = pkt->data[1];
+ info->num_dfe_area = pkt->data[2];
+ info->progressive = pkt->data[3];
+ info->width = pkt->data[5];
+ info->height = pkt->data[4];
+ info->decoded_width = pkt->data[12];
+ info->decoded_height = pkt->data[11];
+ info->frame_rate.numerator = 1000;
+ info->frame_rate.denominator = pkt->data[8];
+ info->dsp_asp_ratio = pkt->data[9];
+ info->level_idc = pkt->data[10];
+ info->bit_depth_luma = pkt->data[13];
+ info->bit_depth_chroma = pkt->data[14];
+ info->chroma_fmt = pkt->data[15];
+ info->color_primaries = vpu_color_cvrt_primaries_i2v(pkt->data[16]);
+ info->transfer_chars = vpu_color_cvrt_transfers_i2v(pkt->data[17]);
+ info->matrix_coeffs = vpu_color_cvrt_matrix_i2v(pkt->data[18]);
+ info->full_range = vpu_color_cvrt_full_range_i2v(pkt->data[19]);
+ info->vui_present = pkt->data[20];
+ info->mvc_num_views = pkt->data[21];
+ info->offset_x = pkt->data[23];
+ info->offset_y = pkt->data[25];
+ info->tag = pkt->data[27];
+ if (info->bit_depth_luma > 8)
+ info->pixfmt = V4L2_PIX_FMT_NV12MT_10BE_8L128;
+ else
+ info->pixfmt = V4L2_PIX_FMT_NV12MT_8L128;
+ vpu_helper_calc_coprime(&info->frame_rate.numerator, &info->frame_rate.denominator);
+ vpu_malone_init_seq_hdr(info);
+}
+
+static void vpu_malone_unpack_pic_info(struct vpu_rpc_event *pkt,
+ struct vpu_dec_pic_info *info)
+{
+ info->id = pkt->data[7];
+ info->luma = pkt->data[0];
+ info->start = pkt->data[10];
+ info->end = pkt->data[12];
+ info->pic_size = pkt->data[11];
+ info->stride = pkt->data[5];
+ info->consumed_count = pkt->data[13];
+ if (info->id == MALONE_SKIPPED_FRAME_ID)
+ info->skipped = 1;
+ else
+ info->skipped = 0;
+}
+
+static void vpu_malone_unpack_req_frame(struct vpu_rpc_event *pkt,
+ struct vpu_fs_info *info)
+{
+ info->type = pkt->data[1];
+}
+
+static void vpu_malone_unpack_rel_frame(struct vpu_rpc_event *pkt,
+ struct vpu_fs_info *info)
+{
+ info->id = pkt->data[0];
+ info->type = pkt->data[1];
+ info->not_displayed = pkt->data[2];
+}
+
+static void vpu_malone_unpack_buff_rdy(struct vpu_rpc_event *pkt,
+ struct vpu_dec_pic_info *info)
+{
+ info->id = pkt->data[0];
+ info->luma = pkt->data[1];
+ info->stride = pkt->data[3];
+ if (info->id == MALONE_SKIPPED_FRAME_ID)
+ info->skipped = 1;
+ else
+ info->skipped = 0;
+ info->timestamp = MAKE_TIMESTAMP(pkt->data[9], pkt->data[10]);
+}
+
+int vpu_malone_unpack_msg_data(struct vpu_rpc_event *pkt, void *data)
+{
+ if (!pkt || !data)
+ return -EINVAL;
+
+ switch (pkt->hdr.id) {
+ case VID_API_EVENT_SEQ_HDR_FOUND:
+ vpu_malone_unpack_seq_hdr(pkt, data);
+ break;
+ case VID_API_EVENT_PIC_DECODED:
+ vpu_malone_unpack_pic_info(pkt, data);
+ break;
+ case VID_API_EVENT_REQ_FRAME_BUFF:
+ vpu_malone_unpack_req_frame(pkt, data);
+ break;
+ case VID_API_EVENT_REL_FRAME_BUFF:
+ vpu_malone_unpack_rel_frame(pkt, data);
+ break;
+ case VID_API_EVENT_FRAME_BUFF_RDY:
+ vpu_malone_unpack_buff_rdy(pkt, data);
+ break;
+ }
+
+ return 0;
+}
+
+static const struct malone_padding_scode padding_scodes[] = {
+ {SCODE_PADDING_EOS, V4L2_PIX_FMT_H264, {0x0B010000, 0}},
+ {SCODE_PADDING_EOS, V4L2_PIX_FMT_H264_MVC, {0x0B010000, 0}},
+ {SCODE_PADDING_EOS, V4L2_PIX_FMT_HEVC, {0x4A010000, 0x20}},
+ {SCODE_PADDING_EOS, V4L2_PIX_FMT_VC1_ANNEX_G, {0x0a010000, 0x0}},
+ {SCODE_PADDING_EOS, V4L2_PIX_FMT_VC1_ANNEX_L, {0x0a010000, 0x0}},
+ {SCODE_PADDING_EOS, V4L2_PIX_FMT_MPEG2, {0xCC010000, 0x0}},
+ {SCODE_PADDING_EOS, V4L2_PIX_FMT_MPEG4, {0xb1010000, 0x0}},
+ {SCODE_PADDING_EOS, V4L2_PIX_FMT_XVID, {0xb1010000, 0x0}},
+ {SCODE_PADDING_EOS, V4L2_PIX_FMT_H263, {0xb1010000, 0x0}},
+ {SCODE_PADDING_EOS, V4L2_PIX_FMT_VP8, {0x34010000, 0x0}},
+ {SCODE_PADDING_EOS, V4L2_PIX_FMT_JPEG, {0xefff0000, 0x0}},
+ {SCODE_PADDING_ABORT, V4L2_PIX_FMT_H264, {0x0B010000, 0}},
+ {SCODE_PADDING_ABORT, V4L2_PIX_FMT_H264_MVC, {0x0B010000, 0}},
+ {SCODE_PADDING_ABORT, V4L2_PIX_FMT_HEVC, {0x4A010000, 0x20}},
+ {SCODE_PADDING_ABORT, V4L2_PIX_FMT_VC1_ANNEX_G, {0x0a010000, 0x0}},
+ {SCODE_PADDING_ABORT, V4L2_PIX_FMT_VC1_ANNEX_L, {0x0a010000, 0x0}},
+ {SCODE_PADDING_ABORT, V4L2_PIX_FMT_MPEG2, {0xb7010000, 0x0}},
+ {SCODE_PADDING_ABORT, V4L2_PIX_FMT_MPEG4, {0xb1010000, 0x0}},
+ {SCODE_PADDING_ABORT, V4L2_PIX_FMT_XVID, {0xb1010000, 0x0}},
+ {SCODE_PADDING_ABORT, V4L2_PIX_FMT_H263, {0xb1010000, 0x0}},
+ {SCODE_PADDING_ABORT, V4L2_PIX_FMT_VP8, {0x34010000, 0x0}},
+ {SCODE_PADDING_ABORT, V4L2_PIX_FMT_JPEG, {0x0, 0x0}},
+ {SCODE_PADDING_BUFFLUSH, V4L2_PIX_FMT_H264, {0x15010000, 0x0}},
+ {SCODE_PADDING_BUFFLUSH, V4L2_PIX_FMT_H264_MVC, {0x15010000, 0x0}},
+};
+static const struct malone_padding_scode padding_scode_dft = {0x0, 0x0};
+
+static const struct malone_padding_scode *get_padding_scode(u32 type, u32 fmt)
+{
+ const struct malone_padding_scode *s;
+ int i;
+
+ for (i = 0; i < ARRAY_SIZE(padding_scodes); i++) {
+ s = &padding_scodes[i];
+
+ if (s->scode_type == type && s->pixelformat == fmt)
+ return s;
+ }
+
+ if (type != SCODE_PADDING_BUFFLUSH)
+ return &padding_scode_dft;
+
+ return NULL;
+}
+
+static int vpu_malone_add_padding_scode(struct vpu_buffer *stream_buffer,
+ struct vpu_malone_str_buffer *str_buf,
+ u32 pixelformat, u32 scode_type)
+{
+ u32 wptr;
+ u32 size;
+ u32 total_size = 0;
+ const struct malone_padding_scode *ps;
+ const u32 padding_size = 4096;
+ int ret;
+
+ ps = get_padding_scode(scode_type, pixelformat);
+ if (!ps)
+ return -EINVAL;
+
+ wptr = str_buf->wptr;
+ size = ALIGN(wptr, 4) - wptr;
+ if (size)
+ vpu_helper_memset_stream_buffer(stream_buffer, &wptr, 0, size);
+ total_size += size;
+
+ size = sizeof(ps->data);
+ ret = vpu_helper_copy_to_stream_buffer(stream_buffer, &wptr, size, (void *)ps->data);
+ if (ret < size)
+ return -EINVAL;
+ total_size += size;
+
+ size = padding_size - sizeof(ps->data);
+ vpu_helper_memset_stream_buffer(stream_buffer, &wptr, 0, size);
+ total_size += size;
+
+ vpu_malone_update_wptr(str_buf, wptr);
+ return total_size;
+}
+
+int vpu_malone_add_scode(struct vpu_shared_addr *shared,
+ u32 instance,
+ struct vpu_buffer *stream_buffer,
+ u32 pixelformat,
+ u32 scode_type)
+{
+ struct vpu_dec_ctrl *hc;
+ struct vpu_malone_str_buffer *str_buf;
+ int ret = -EINVAL;
+
+ WARN_ON(!shared || !shared->iface || !shared->core || !shared->priv);
+
+ hc = shared->priv;
+ str_buf = hc->str_buf[instance];
+
+ switch (scode_type) {
+ case SCODE_PADDING_EOS:
+ case SCODE_PADDING_ABORT:
+ case SCODE_PADDING_BUFFLUSH:
+ ret = vpu_malone_add_padding_scode(stream_buffer,
+ str_buf, pixelformat, scode_type);
+ break;
+ default:
+ break;
+ }
+
+ return ret;
+}
+
+#define MALONE_PAYLOAD_HEADER_SIZE 16
+#define MALONE_CODEC_VERSION_ID 0x1
+#define MALONE_CODEC_ID_VC1_SIMPLE 0x10
+#define MALONE_CODEC_ID_VC1_MAIN 0x11
+#define MALONE_CODEC_ID_ARV8 0x28
+#define MALONE_CODEC_ID_ARV9 0x29
+#define MALONE_CODEC_ID_VP6 0x36
+#define MALONE_CODEC_ID_VP8 0x36
+#define MALONE_CODEC_ID_DIVX3 0x38
+#define MALONE_CODEC_ID_SPK 0x39
+
+#define MALONE_VP8_IVF_SEQ_HEADER_LEN 32
+#define MALONE_VP8_IVF_FRAME_HEADER_LEN 8
+
+#define MALONE_VC1_RCV_CODEC_V1_VERSION 0x85
+#define MALONE_VC1_RCV_CODEC_V2_VERSION 0xC5
+#define MALONE_VC1_RCV_NUM_FRAMES 0xFF
+#define MALONE_VC1_RCV_SEQ_EXT_DATA_SIZE 4
+#define MALONE_VC1_RCV_SEQ_HEADER_LEN 20
+#define MALONE_VC1_RCV_PIC_HEADER_LEN 4
+#define MALONE_VC1_NAL_HEADER_LEN 4
+#define MALONE_VC1_CONTAIN_NAL(data) (((data) & 0x00FFFFFF) == 0x00010000)
+
+static void set_payload_hdr(u8 *dst, u32 scd_type, u32 codec_id,
+ u32 buffer_size, u32 width, u32 height)
+{
+ unsigned int payload_size;
+ /* payload_size = buffer_size + itself_size(16) - start_code(4) */
+ payload_size = buffer_size + 12;
+
+ dst[0] = 0x00;
+ dst[1] = 0x00;
+ dst[2] = 0x01;
+ dst[3] = scd_type;
+
+ /* length */
+ dst[4] = ((payload_size >> 16) & 0xff);
+ dst[5] = ((payload_size >> 8) & 0xff);
+ dst[6] = 0x4e;
+ dst[7] = ((payload_size >> 0) & 0xff);
+
+ /* Codec ID and Version */
+ dst[8] = codec_id;
+ dst[9] = MALONE_CODEC_VERSION_ID;
+
+ /* width */
+ dst[10] = ((width >> 8) & 0xff);
+ dst[11] = ((width >> 0) & 0xff);
+ dst[12] = 0x58;
+
+ /* height */
+ dst[13] = ((height >> 8) & 0xff);
+ dst[14] = ((height >> 0) & 0xff);
+ dst[15] = 0x50;
+}
+
+static void set_vp8_ivf_seqhdr(u8 *dst, u32 width, u32 height)
+{
+ /* 0-3byte signature "DKIF" */
+ dst[0] = 0x44;
+ dst[1] = 0x4b;
+ dst[2] = 0x49;
+ dst[3] = 0x46;
+ /* 4-5 byte version: should be 0 */
+ dst[4] = 0x00;
+ dst[5] = 0x00;
+ /* 6-7 length of Header */
+ dst[6] = MALONE_VP8_IVF_SEQ_HEADER_LEN;
+ dst[7] = MALONE_VP8_IVF_SEQ_HEADER_LEN >> 8;
+ /* 8-11 VP8 fourcc */
+ dst[8] = 0x56;
+ dst[9] = 0x50;
+ dst[10] = 0x38;
+ dst[11] = 0x30;
+ /* 12-13 width in pixels */
+ dst[12] = width;
+ dst[13] = width >> 8;
+ /* 14-15 height in pixels */
+ dst[14] = height;
+ dst[15] = height >> 8;
+ /* 16-19 frame rate */
+ dst[16] = 0xe8;
+ dst[17] = 0x03;
+ dst[18] = 0x00;
+ dst[19] = 0x00;
+ /* 20-23 time scale */
+ dst[20] = 0x01;
+ dst[21] = 0x00;
+ dst[22] = 0x00;
+ dst[23] = 0x00;
+ /* 24-27 number frames */
+ dst[24] = 0xdf;
+ dst[25] = 0xf9;
+ dst[26] = 0x09;
+ dst[27] = 0x00;
+ /* 28-31 reserved */
+}
+
+static void set_vp8_ivf_pichdr(u8 *dst, u32 frame_size)
+{
+ /*
+ * The firmware only parses a 64-bit timestamp (8 bytes) here.
+ * Since no timestamp is passed to the firmware, the default
+ * value (zero) is used, so there is nothing to do.
+ */
+}
+
+static void set_vc1_rcv_seqhdr(u8 *dst, u8 *src, u32 width, u32 height)
+{
+ u32 frames = MALONE_VC1_RCV_NUM_FRAMES;
+ u32 ext_data_size = MALONE_VC1_RCV_SEQ_EXT_DATA_SIZE;
+
+ /* 0-2 Number of frames, use default value 0xFF */
+ dst[0] = frames;
+ dst[1] = frames >> 8;
+ dst[2] = frames >> 16;
+
+ /* 3 RCV version, use V1 */
+ dst[3] = MALONE_VC1_RCV_CODEC_V1_VERSION;
+
+ /* 4-7 extension data size */
+ dst[4] = ext_data_size;
+ dst[5] = ext_data_size >> 8;
+ dst[6] = ext_data_size >> 16;
+ dst[7] = ext_data_size >> 24;
+ /* 8-11 extension data */
+ dst[8] = src[0];
+ dst[9] = src[1];
+ dst[10] = src[2];
+ dst[11] = src[3];
+
+ /* height */
+ dst[12] = height;
+ dst[13] = (height >> 8) & 0xff;
+ dst[14] = (height >> 16) & 0xff;
+ dst[15] = (height >> 24) & 0xff;
+ /* width */
+ dst[16] = width;
+ dst[17] = (width >> 8) & 0xff;
+ dst[18] = (width >> 16) & 0xff;
+ dst[19] = (width >> 24) & 0xff;
+}
+
+static void set_vc1_rcv_pichdr(u8 *dst, u32 buffer_size)
+{
+ dst[0] = buffer_size;
+ dst[1] = buffer_size >> 8;
+ dst[2] = buffer_size >> 16;
+ dst[3] = buffer_size >> 24;
+}
+
+static void create_vc1_nal_pichdr(u8 *dst)
+{
+ /* need to insert a NAL header: special ID */
+ dst[0] = 0x0;
+ dst[1] = 0x0;
+ dst[2] = 0x01;
+ dst[3] = 0x0D;
+}
+
+static int vpu_malone_insert_scode_seq(struct malone_scode_t *scode, u32 codec_id, u32 ext_size)
+{
+ u8 hdr[MALONE_PAYLOAD_HEADER_SIZE];
+ int ret;
+
+ set_payload_hdr(hdr,
+ SCODE_SEQUENCE,
+ codec_id,
+ ext_size,
+ scode->inst->out_format.width,
+ scode->inst->out_format.height);
+ ret = vpu_helper_copy_to_stream_buffer(&scode->inst->stream_buffer,
+ &scode->wptr,
+ sizeof(hdr),
+ hdr);
+ return ret;
+}
+
+static int vpu_malone_insert_scode_pic(struct malone_scode_t *scode, u32 codec_id, u32 ext_size)
+{
+ u8 hdr[MALONE_PAYLOAD_HEADER_SIZE];
+
+ set_payload_hdr(hdr,
+ SCODE_PICTURE,
+ codec_id,
+ ext_size + vb2_get_plane_payload(scode->vb, 0),
+ scode->inst->out_format.width,
+ scode->inst->out_format.height);
+ return vpu_helper_copy_to_stream_buffer(&scode->inst->stream_buffer,
+ &scode->wptr,
+ sizeof(hdr),
+ hdr);
+}
+
+static int vpu_malone_insert_scode_vc1_g_pic(struct malone_scode_t *scode)
+{
+ struct vb2_v4l2_buffer *vbuf;
+ u8 nal_hdr[MALONE_VC1_NAL_HEADER_LEN];
+ u32 *data = NULL;
+
+ vbuf = to_vb2_v4l2_buffer(scode->vb);
+ data = vb2_plane_vaddr(scode->vb, 0);
+
+ if (vbuf->sequence == 0 || vpu_vb_is_codecconfig(vbuf))
+ return 0;
+ if (MALONE_VC1_CONTAIN_NAL(*data))
+ return 0;
+
+ create_vc1_nal_pichdr(nal_hdr);
+ return vpu_helper_copy_to_stream_buffer(&scode->inst->stream_buffer,
+ &scode->wptr,
+ sizeof(nal_hdr),
+ nal_hdr);
+}
+
+static int vpu_malone_insert_scode_vc1_l_seq(struct malone_scode_t *scode)
+{
+ int ret;
+ int size = 0;
+ u8 rcv_seqhdr[MALONE_VC1_RCV_SEQ_HEADER_LEN];
+
+ scode->need_data = 0;
+
+ ret = vpu_malone_insert_scode_seq(scode, MALONE_CODEC_ID_VC1_SIMPLE,
+ sizeof(rcv_seqhdr));
+ if (ret < 0)
+ return ret;
+ size = ret;
+
+ set_vc1_rcv_seqhdr(rcv_seqhdr,
+ vb2_plane_vaddr(scode->vb, 0),
+ scode->inst->out_format.width,
+ scode->inst->out_format.height);
+ ret = vpu_helper_copy_to_stream_buffer(&scode->inst->stream_buffer,
+ &scode->wptr,
+ sizeof(rcv_seqhdr),
+ rcv_seqhdr);
+
+ if (ret < 0)
+ return ret;
+ size += ret;
+ return size;
+}
+
+static int vpu_malone_insert_scode_vc1_l_pic(struct malone_scode_t *scode)
+{
+ int ret;
+ int size = 0;
+ u8 rcv_pichdr[MALONE_VC1_RCV_PIC_HEADER_LEN];
+
+ ret = vpu_malone_insert_scode_pic(scode, MALONE_CODEC_ID_VC1_SIMPLE,
+ sizeof(rcv_pichdr));
+ if (ret < 0)
+ return ret;
+ size = ret;
+
+ set_vc1_rcv_pichdr(rcv_pichdr, vb2_get_plane_payload(scode->vb, 0));
+ ret = vpu_helper_copy_to_stream_buffer(&scode->inst->stream_buffer,
+ &scode->wptr,
+ sizeof(rcv_pichdr),
+ rcv_pichdr);
+ if (ret < 0)
+ return ret;
+ size += ret;
+ return size;
+}
+
+static int vpu_malone_insert_scode_vp8_seq(struct malone_scode_t *scode)
+{
+ int ret;
+ int size = 0;
+ u8 ivf_hdr[MALONE_VP8_IVF_SEQ_HEADER_LEN];
+
+ ret = vpu_malone_insert_scode_seq(scode, MALONE_CODEC_ID_VP8, sizeof(ivf_hdr));
+ if (ret < 0)
+ return ret;
+ size = ret;
+
+ set_vp8_ivf_seqhdr(ivf_hdr,
+ scode->inst->out_format.width,
+ scode->inst->out_format.height);
+ ret = vpu_helper_copy_to_stream_buffer(&scode->inst->stream_buffer,
+ &scode->wptr,
+ sizeof(ivf_hdr),
+ ivf_hdr);
+ if (ret < 0)
+ return ret;
+ size += ret;
+
+ return size;
+}
+
+static int vpu_malone_insert_scode_vp8_pic(struct malone_scode_t *scode)
+{
+ int ret;
+ int size = 0;
+ u8 ivf_hdr[MALONE_VP8_IVF_FRAME_HEADER_LEN] = {0};
+
+ ret = vpu_malone_insert_scode_pic(scode, MALONE_CODEC_ID_VP8, sizeof(ivf_hdr));
+ if (ret < 0)
+ return ret;
+ size = ret;
+
+ set_vp8_ivf_pichdr(ivf_hdr, vb2_get_plane_payload(scode->vb, 0));
+ ret = vpu_helper_copy_to_stream_buffer(&scode->inst->stream_buffer,
+ &scode->wptr,
+ sizeof(ivf_hdr),
+ ivf_hdr);
+ if (ret < 0)
+ return ret;
+ size += ret;
+
+ return size;
+}
+
+static const struct malone_scode_handler scode_handlers[] = {
+ {
+ /* fix me, need to swap return operation after gstreamer swap */
+ .pixelformat = V4L2_PIX_FMT_VC1_ANNEX_L,
+ .insert_scode_seq = vpu_malone_insert_scode_vc1_l_seq,
+ .insert_scode_pic = vpu_malone_insert_scode_vc1_l_pic,
+ },
+ {
+ .pixelformat = V4L2_PIX_FMT_VC1_ANNEX_G,
+ .insert_scode_pic = vpu_malone_insert_scode_vc1_g_pic,
+ },
+ {
+ .pixelformat = V4L2_PIX_FMT_VP8,
+ .insert_scode_seq = vpu_malone_insert_scode_vp8_seq,
+ .insert_scode_pic = vpu_malone_insert_scode_vp8_pic,
+ },
+};
+
+static const struct malone_scode_handler *get_scode_handler(u32 pixelformat)
+{
+ int i;
+
+ for (i = 0; i < ARRAY_SIZE(scode_handlers); i++) {
+ if (scode_handlers[i].pixelformat == pixelformat)
+ return &scode_handlers[i];
+ }
+
+ return NULL;
+}
+
+static int vpu_malone_insert_scode(struct malone_scode_t *scode, u32 type)
+{
+ const struct malone_scode_handler *handler;
+ int ret = 0;
+
+ if (!scode || !scode->inst || !scode->vb)
+ return 0;
+
+ scode->need_data = 1;
+ handler = get_scode_handler(scode->inst->out_format.pixfmt);
+ if (!handler)
+ return 0;
+
+ switch (type) {
+ case SCODE_SEQUENCE:
+ if (handler->insert_scode_seq)
+ ret = handler->insert_scode_seq(scode);
+ break;
+ case SCODE_PICTURE:
+ if (handler->insert_scode_pic)
+ ret = handler->insert_scode_pic(scode);
+ break;
+ default:
+ break;
+ }
+
+ return ret;
+}
+
+static int vpu_malone_input_frame_data(struct vpu_malone_str_buffer *str_buf,
+ struct vpu_inst *inst, struct vb2_buffer *vb,
+ u32 disp_imm)
+{
+ struct malone_scode_t scode;
+ struct vb2_v4l2_buffer *vbuf;
+ u32 wptr;
+ int size = 0;
+ int ret = 0;
+
+ wptr = str_buf->wptr;
+
+ /* add scode: SCODE_SEQUENCE, SCODE_PICTURE, SCODE_SLICE */
+ vbuf = to_vb2_v4l2_buffer(vb);
+ scode.inst = inst;
+ scode.vb = vb;
+ scode.wptr = wptr;
+ scode.need_data = 1;
+ if (vbuf->sequence == 0 || vpu_vb_is_codecconfig(vbuf))
+ ret = vpu_malone_insert_scode(&scode, SCODE_SEQUENCE);
+
+ if (ret < 0)
+ return -ENOMEM;
+ size += ret;
+ wptr = scode.wptr;
+ if (!scode.need_data) {
+ vpu_malone_update_wptr(str_buf, wptr);
+ return size;
+ }
+
+ ret = vpu_malone_insert_scode(&scode, SCODE_PICTURE);
+ if (ret < 0)
+ return -ENOMEM;
+ size += ret;
+ wptr = scode.wptr;
+
+ ret = vpu_helper_copy_to_stream_buffer(&inst->stream_buffer,
+ &wptr,
+ vb2_get_plane_payload(vb, 0),
+ vb2_plane_vaddr(vb, 0));
+ if (ret < vb2_get_plane_payload(vb, 0))
+ return -ENOMEM;
+ size += ret;
+
+ vpu_malone_update_wptr(str_buf, wptr);
+
+ if (disp_imm && !vpu_vb_is_codecconfig(vbuf)) {
+ ret = vpu_malone_add_scode(inst->core->iface,
+ inst->id,
+ &inst->stream_buffer,
+ inst->out_format.pixfmt,
+ SCODE_PADDING_BUFFLUSH);
+ if (ret < 0)
+ return ret;
+ size += ret;
+ }
+
+ return size;
+}
+
+static int vpu_malone_input_stream_data(struct vpu_malone_str_buffer *str_buf,
+ struct vpu_inst *inst, struct vb2_buffer *vb)
+{
+ u32 wptr;
+ int ret = 0;
+
+ wptr = str_buf->wptr;
+ ret = vpu_helper_copy_to_stream_buffer(&inst->stream_buffer,
+ &wptr,
+ vb2_get_plane_payload(vb, 0),
+ vb2_plane_vaddr(vb, 0));
+ if (ret < vb2_get_plane_payload(vb, 0))
+ return -ENOMEM;
+
+ vpu_malone_update_wptr(str_buf, wptr);
+
+ return ret;
+}
+
+static int vpu_malone_input_ts(struct vpu_inst *inst, s64 timestamp, u32 size)
+{
+ struct vpu_ts_info info;
+
+ memset(&info, 0, sizeof(info));
+ info.timestamp = timestamp;
+ info.size = size;
+
+ return vpu_session_fill_timestamp(inst, &info);
+}
+
+int vpu_malone_input_frame(struct vpu_shared_addr *shared,
+ struct vpu_inst *inst, struct vb2_buffer *vb)
+{
+ struct vpu_dec_ctrl *hc;
+ struct vb2_v4l2_buffer *vbuf;
+ struct vpu_malone_str_buffer *str_buf;
+ u32 disp_imm = 0;
+ u32 size;
+ int ret;
+
+ WARN_ON(!shared || !shared->iface || !shared->core || !shared->priv);
+ hc = shared->priv;
+ str_buf = hc->str_buf[inst->id];
+ disp_imm = hc->codec_param[inst->id].disp_imm;
+
+ if (vpu_malone_is_non_frame_mode(shared, inst->id))
+ ret = vpu_malone_input_stream_data(str_buf, inst, vb);
+ else
+ ret = vpu_malone_input_frame_data(str_buf, inst, vb, disp_imm);
+ if (ret < 0)
+ return ret;
+ size = ret;
+
+ /*
+ * If the buffer only contains codec data and the timestamp is
+ * invalid, don't pass the invalid timestamp to resync;
+ * merge the data into the next frame instead.
+ */
+ vbuf = to_vb2_v4l2_buffer(vb);
+ if (vpu_vb_is_codecconfig(vbuf) && (s64)vb->timestamp < 0) {
+ inst->extra_size += size;
+ return 0;
+ }
+ if (inst->extra_size) {
+ size += inst->extra_size;
+ inst->extra_size = 0;
+ }
+
+ ret = vpu_malone_input_ts(inst, vb->timestamp, size);
+ if (ret)
+ return ret;
+
+ return 0;
+}
+
+static bool vpu_malone_check_ready(struct vpu_shared_addr *shared, u32 instance)
+{
+ struct malone_iface *iface;
+ struct vpu_rpc_buffer_desc *desc;
+ u32 size;
+ u32 rptr;
+ u32 wptr;
+ u32 used;
+
+ iface = shared->iface;
+ desc = &iface->api_cmd_buffer_desc[instance];
+ size = desc->end - desc->start;
+ rptr = desc->rptr;
+ wptr = desc->wptr;
+ used = (wptr + size - rptr) % size;
+ if (!size || used < size / 2)
+ return true;
+
+ return false;
+}
+
+bool vpu_malone_is_ready(struct vpu_shared_addr *shared, u32 instance)
+{
+ u32 cnt = 0;
+
+ while (!vpu_malone_check_ready(shared, instance)) {
+ if (cnt > 30)
+ return false;
+ mdelay(1);
+ cnt++;
+ }
+ return true;
+}
+
+int vpu_malone_pre_cmd(struct vpu_shared_addr *shared, u32 instance)
+{
+ if (!vpu_malone_is_ready(shared, instance))
+ return -EINVAL;
+
+ return 0;
+}
+
+int vpu_malone_post_cmd(struct vpu_shared_addr *shared, u32 instance)
+{
+ struct malone_iface *iface;
+ struct vpu_rpc_buffer_desc *desc;
+
+ iface = shared->iface;
+ desc = &iface->api_cmd_buffer_desc[instance];
+ desc->wptr++;
+ if (desc->wptr == desc->end)
+ desc->wptr = desc->start;
+
+ return 0;
+}
+
+int vpu_malone_init_instance(struct vpu_shared_addr *shared, u32 instance)
+{
+ struct malone_iface *iface;
+ struct vpu_rpc_buffer_desc *desc;
+
+ iface = shared->iface;
+ desc = &iface->api_cmd_buffer_desc[instance];
+ desc->wptr = desc->rptr;
+ if (desc->wptr == desc->end)
+ desc->wptr = desc->start;
+
+ return 0;
+}
+
+u32 vpu_malone_get_max_instance_count(struct vpu_shared_addr *shared)
+{
+ struct malone_iface *iface = shared->iface;
+
+ return iface->max_streams;
+}
diff --git a/drivers/media/platform/amphion/vpu_malone.h b/drivers/media/platform/amphion/vpu_malone.h
new file mode 100644
index 000000000000..4699252fc73c
--- /dev/null
+++ b/drivers/media/platform/amphion/vpu_malone.h
@@ -0,0 +1,42 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright 2020-2021 NXP
+ */
+
+#ifndef _AMPHION_VPU_MALONE_H
+#define _AMPHION_VPU_MALONE_H
+
+u32 vpu_malone_get_data_size(void);
+void vpu_malone_init_rpc(struct vpu_shared_addr *shared,
+ struct vpu_buffer *rpc, dma_addr_t boot_addr);
+void vpu_malone_set_log_buf(struct vpu_shared_addr *shared,
+ struct vpu_buffer *log);
+void vpu_malone_set_system_cfg(struct vpu_shared_addr *shared,
+ u32 regs_base, void __iomem *regs, u32 core_id);
+u32 vpu_malone_get_version(struct vpu_shared_addr *shared);
+int vpu_malone_get_stream_buffer_size(struct vpu_shared_addr *shared);
+int vpu_malone_config_stream_buffer(struct vpu_shared_addr *shared,
+ u32 instance, struct vpu_buffer *buf);
+int vpu_malone_get_stream_buffer_desc(struct vpu_shared_addr *shared,
+ u32 instance, struct vpu_rpc_buffer_desc *desc);
+int vpu_malone_update_stream_buffer(struct vpu_shared_addr *shared,
+ u32 instance, u32 ptr, bool write);
+int vpu_malone_set_decode_params(struct vpu_shared_addr *shared,
+ u32 instance, struct vpu_decode_params *params, u32 update);
+int vpu_malone_pack_cmd(struct vpu_rpc_event *pkt, u32 index, u32 id, void *data);
+int vpu_malone_convert_msg_id(u32 msg_id);
+int vpu_malone_unpack_msg_data(struct vpu_rpc_event *pkt, void *data);
+int vpu_malone_add_scode(struct vpu_shared_addr *shared,
+ u32 instance,
+ struct vpu_buffer *stream_buffer,
+ u32 pixelformat,
+ u32 scode_type);
+int vpu_malone_input_frame(struct vpu_shared_addr *shared,
+ struct vpu_inst *inst, struct vb2_buffer *vb);
+bool vpu_malone_is_ready(struct vpu_shared_addr *shared, u32 instance);
+int vpu_malone_pre_cmd(struct vpu_shared_addr *shared, u32 instance);
+int vpu_malone_post_cmd(struct vpu_shared_addr *shared, u32 instance);
+int vpu_malone_init_instance(struct vpu_shared_addr *shared, u32 instance);
+u32 vpu_malone_get_max_instance_count(struct vpu_shared_addr *shared);
+
+#endif
--
2.33.0
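The command-queue readiness test implemented by vpu_malone_check_ready() in the patch above treats the per-instance command ring buffer as ready while it is less than half full. A minimal standalone sketch of that occupancy check (Python here purely for illustration; the function name and standalone form are not part of the driver, which reads the descriptor fields from shared firmware memory):

```python
def cmd_queue_ready(start, end, rptr, wptr):
    """Model of the half-full test in vpu_malone_check_ready().

    start/end delimit the ring buffer; rptr/wptr are the read and
    write pointers. Ready means less than half the queue is in use.
    """
    size = end - start
    if size == 0:
        # An unconfigured (zero-size) queue is treated as ready.
        return True
    # Modular distance from rptr to wptr, handling wraparound.
    used = (wptr + size - rptr) % size
    return used < size // 2
```

Note the wraparound case: with a 100-entry queue, rptr at 90 and wptr at 10, only 20 entries are in use, so the queue is still ready.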


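The 16-byte payload header that set_payload_hdr() in the patch above prepends to each scode unit can be modeled as follows (an illustrative Python sketch, not driver code; the helper name is made up, and the fixed 0x4e/0x58/0x50 marker bytes are taken directly from the driver source):

```python
def malone_payload_header(scd_type, codec_id, buffer_size, width, height):
    """Build the 16-byte Malone payload header as set_payload_hdr() does."""
    # payload_size = buffer_size + header size (16) - start code (4)
    payload_size = buffer_size + 12
    return bytes([
        0x00, 0x00, 0x01, scd_type,       # start code + scode type
        (payload_size >> 16) & 0xff,      # length, high byte
        (payload_size >> 8) & 0xff,       # length, middle byte
        0x4e,                             # fixed marker byte
        payload_size & 0xff,              # length, low byte
        codec_id,                         # codec ID
        0x01,                             # MALONE_CODEC_VERSION_ID
        (width >> 8) & 0xff, width & 0xff, 0x58,
        (height >> 8) & 0xff, height & 0xff, 0x50,
    ])
```

For example, a 1920x1080 frame of 100 bytes yields a payload_size of 112, split across bytes 4, 5, and 7 with the 0x4e marker interleaved at byte 6, exactly as the C code lays it out.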
2021-11-30 09:49:59

by Ming Qian

Subject: [PATCH v13 08/13] media: amphion: add v4l2 m2m vpu decoder stateful driver

This consists of the video decoder implementation plus decoder controls.

Signed-off-by: Ming Qian <[email protected]>
Signed-off-by: Shijie Qin <[email protected]>
Signed-off-by: Zhou Peng <[email protected]>
---
drivers/media/platform/amphion/vdec.c | 1680 +++++++++++++++++++++++++
1 file changed, 1680 insertions(+)
create mode 100644 drivers/media/platform/amphion/vdec.c

diff --git a/drivers/media/platform/amphion/vdec.c b/drivers/media/platform/amphion/vdec.c
new file mode 100644
index 000000000000..a66d34d02a50
--- /dev/null
+++ b/drivers/media/platform/amphion/vdec.c
@@ -0,0 +1,1680 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright 2020-2021 NXP
+ */
+
+#include <linux/init.h>
+#include <linux/interconnect.h>
+#include <linux/ioctl.h>
+#include <linux/list.h>
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/videodev2.h>
+#include <media/v4l2-device.h>
+#include <media/v4l2-event.h>
+#include <media/v4l2-mem2mem.h>
+#include <media/v4l2-ioctl.h>
+#include <media/videobuf2-v4l2.h>
+#include <media/videobuf2-dma-contig.h>
+#include <media/videobuf2-vmalloc.h>
+#include "vpu.h"
+#include "vpu_defs.h"
+#include "vpu_core.h"
+#include "vpu_helpers.h"
+#include "vpu_v4l2.h"
+#include "vpu_cmds.h"
+#include "vpu_rpc.h"
+
+#define VDEC_FRAME_DEPTH 256
+#define VDEC_MIN_BUFFER_CAP 8
+
+struct vdec_fs_info {
+ char name[8];
+ u32 type;
+ u32 max_count;
+ u32 req_count;
+ u32 count;
+ u32 index;
+ u32 size;
+ struct vpu_buffer buffer[32];
+ u32 tag;
+};
+
+struct vdec_t {
+ u32 seq_hdr_found;
+ struct vpu_buffer udata;
+ struct vpu_decode_params params;
+ struct vpu_dec_codec_info codec_info;
+ enum vpu_codec_state state;
+
+ struct vpu_vb2_buffer *slots[VB2_MAX_FRAME];
+ u32 req_frame_count;
+ struct vdec_fs_info mbi;
+ struct vdec_fs_info dcp;
+ u32 seq_tag;
+
+ bool reset_codec;
+ bool fixed_fmt;
+ u32 decoded_frame_count;
+ u32 display_frame_count;
+ u32 sequence;
+ u32 eos_received;
+ bool is_source_changed;
+ u32 source_change;
+ u32 drain;
+ u32 ts_pre_count;
+ u32 frame_depth;
+ s64 ts_start;
+ s64 ts_input;
+ s64 timestamp;
+};
+
+static const struct vpu_format vdec_formats[] = {
+ {
+ .pixfmt = V4L2_PIX_FMT_NV12MT_8L128,
+ .num_planes = 2,
+ .type = V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE,
+ },
+ {
+ .pixfmt = V4L2_PIX_FMT_NV12MT_10BE_8L128,
+ .num_planes = 2,
+ .type = V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE,
+ },
+ {
+ .pixfmt = V4L2_PIX_FMT_H264,
+ .num_planes = 1,
+ .type = V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE,
+ .flags = V4L2_FMT_FLAG_DYN_RESOLUTION
+ },
+ {
+ .pixfmt = V4L2_PIX_FMT_H264_MVC,
+ .num_planes = 1,
+ .type = V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE,
+ .flags = V4L2_FMT_FLAG_DYN_RESOLUTION
+ },
+ {
+ .pixfmt = V4L2_PIX_FMT_HEVC,
+ .num_planes = 1,
+ .type = V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE,
+ .flags = V4L2_FMT_FLAG_DYN_RESOLUTION
+ },
+ {
+ .pixfmt = V4L2_PIX_FMT_VC1_ANNEX_G,
+ .num_planes = 1,
+ .type = V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE,
+ .flags = V4L2_FMT_FLAG_DYN_RESOLUTION
+ },
+ {
+ .pixfmt = V4L2_PIX_FMT_VC1_ANNEX_L,
+ .num_planes = 1,
+ .type = V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE,
+ .flags = V4L2_FMT_FLAG_DYN_RESOLUTION
+ },
+ {
+ .pixfmt = V4L2_PIX_FMT_MPEG2,
+ .num_planes = 1,
+ .type = V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE,
+ .flags = V4L2_FMT_FLAG_DYN_RESOLUTION
+ },
+ {
+ .pixfmt = V4L2_PIX_FMT_MPEG4,
+ .num_planes = 1,
+ .type = V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE,
+ .flags = V4L2_FMT_FLAG_DYN_RESOLUTION
+ },
+ {
+ .pixfmt = V4L2_PIX_FMT_XVID,
+ .num_planes = 1,
+ .type = V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE,
+ .flags = V4L2_FMT_FLAG_DYN_RESOLUTION
+ },
+ {
+ .pixfmt = V4L2_PIX_FMT_VP8,
+ .num_planes = 1,
+ .type = V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE,
+ .flags = V4L2_FMT_FLAG_DYN_RESOLUTION
+ },
+ {
+ .pixfmt = V4L2_PIX_FMT_H263,
+ .num_planes = 1,
+ .type = V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE,
+ .flags = V4L2_FMT_FLAG_DYN_RESOLUTION
+ },
+ {0, 0, 0, 0},
+};
+
+static const struct v4l2_ctrl_ops vdec_ctrl_ops = {
+ .g_volatile_ctrl = vpu_helper_g_volatile_ctrl,
+};
+
+static int vdec_ctrl_init(struct vpu_inst *inst)
+{
+ struct v4l2_ctrl *ctrl;
+ int ret;
+
+ ret = v4l2_ctrl_handler_init(&inst->ctrl_handler, 20);
+ if (ret)
+ return ret;
+
+ ctrl = v4l2_ctrl_new_std(&inst->ctrl_handler, &vdec_ctrl_ops,
+ V4L2_CID_MIN_BUFFERS_FOR_CAPTURE, 1, 32, 1, 2);
+ if (ctrl)
+ ctrl->flags |= V4L2_CTRL_FLAG_VOLATILE;
+
+ ctrl = v4l2_ctrl_new_std(&inst->ctrl_handler, &vdec_ctrl_ops,
+ V4L2_CID_MIN_BUFFERS_FOR_OUTPUT, 1, 32, 1, 2);
+ if (ctrl)
+ ctrl->flags |= V4L2_CTRL_FLAG_VOLATILE;
+
+ ret = v4l2_ctrl_handler_setup(&inst->ctrl_handler);
+ if (ret) {
+ dev_err(inst->dev, "[%d] setup ctrls fail, ret = %d\n", inst->id, ret);
+ v4l2_ctrl_handler_free(&inst->ctrl_handler);
+ return ret;
+ }
+
+ return 0;
+}
+
+static void vdec_set_last_buffer_dequeued(struct vpu_inst *inst)
+{
+ struct vdec_t *vdec = inst->priv;
+
+ if (vdec->eos_received) {
+ if (!vpu_set_last_buffer_dequeued(inst))
+ vdec->eos_received--;
+ }
+}
+
+static void vdec_handle_resolution_change(struct vpu_inst *inst)
+{
+ struct vdec_t *vdec = inst->priv;
+ struct vb2_queue *q;
+
+ if (inst->state != VPU_CODEC_STATE_DYAMIC_RESOLUTION_CHANGE)
+ return;
+ if (!vdec->source_change)
+ return;
+
+ q = v4l2_m2m_get_dst_vq(inst->fh.m2m_ctx);
+ if (!list_empty(&q->done_list))
+ return;
+
+ vdec->source_change--;
+ vpu_notify_source_change(inst);
+}
+
+static int vdec_update_state(struct vpu_inst *inst,
+ enum vpu_codec_state state, u32 force)
+{
+ struct vdec_t *vdec = inst->priv;
+ enum vpu_codec_state pre_state = inst->state;
+
+ if (state == VPU_CODEC_STATE_SEEK) {
+ if (inst->state == VPU_CODEC_STATE_DYAMIC_RESOLUTION_CHANGE)
+ vdec->state = inst->state;
+ else
+ vdec->state = VPU_CODEC_STATE_ACTIVE;
+ }
+ if (inst->state != VPU_CODEC_STATE_SEEK || force)
+ inst->state = state;
+ else if (state == VPU_CODEC_STATE_DYAMIC_RESOLUTION_CHANGE)
+ vdec->state = VPU_CODEC_STATE_DYAMIC_RESOLUTION_CHANGE;
+
+ if (inst->state != pre_state)
+ vpu_trace(inst->dev, "[%d] %d -> %d\n", inst->id, pre_state, inst->state);
+
+ if (inst->state == VPU_CODEC_STATE_DYAMIC_RESOLUTION_CHANGE)
+ vdec_handle_resolution_change(inst);
+
+ return 0;
+}
+
+static int vdec_querycap(struct file *file, void *fh, struct v4l2_capability *cap)
+{
+ strscpy(cap->driver, "amphion-vpu", sizeof(cap->driver));
+ strscpy(cap->card, "amphion vpu decoder", sizeof(cap->card));
+ strscpy(cap->bus_info, "platform: amphion-vpu", sizeof(cap->bus_info));
+
+ return 0;
+}
+
+static int vdec_enum_fmt(struct file *file, void *fh, struct v4l2_fmtdesc *f)
+{
+ struct vpu_inst *inst = to_inst(file);
+ struct vdec_t *vdec = inst->priv;
+ const struct vpu_format *fmt;
+ int ret = -EINVAL;
+
+ vpu_inst_lock(inst);
+ if (!V4L2_TYPE_IS_OUTPUT(f->type) && vdec->fixed_fmt) {
+ if (f->index == 0) {
+ f->pixelformat = inst->cap_format.pixfmt;
+ f->flags = inst->cap_format.flags;
+ ret = 0;
+ }
+ } else {
+ fmt = vpu_helper_enum_format(inst, f->type, f->index);
+ memset(f->reserved, 0, sizeof(f->reserved));
+ if (!fmt)
+ goto exit;
+
+ f->pixelformat = fmt->pixfmt;
+ f->flags = fmt->flags;
+ ret = 0;
+ }
+
+exit:
+ vpu_inst_unlock(inst);
+ return ret;
+}
+
+static int vdec_g_fmt(struct file *file, void *fh, struct v4l2_format *f)
+{
+ struct vpu_inst *inst = to_inst(file);
+ struct vdec_t *vdec = inst->priv;
+ struct v4l2_pix_format_mplane *pixmp = &f->fmt.pix_mp;
+ struct vpu_format *cur_fmt;
+ int i;
+
+ cur_fmt = vpu_get_format(inst, f->type);
+
+ pixmp->pixelformat = cur_fmt->pixfmt;
+ pixmp->num_planes = cur_fmt->num_planes;
+ pixmp->width = cur_fmt->width;
+ pixmp->height = cur_fmt->height;
+ pixmp->field = cur_fmt->field;
+ pixmp->flags = cur_fmt->flags;
+ for (i = 0; i < pixmp->num_planes; i++) {
+ pixmp->plane_fmt[i].bytesperline = cur_fmt->bytesperline[i];
+ pixmp->plane_fmt[i].sizeimage = cur_fmt->sizeimage[i];
+ }
+
+ f->fmt.pix_mp.colorspace = vdec->codec_info.color_primaries;
+ f->fmt.pix_mp.xfer_func = vdec->codec_info.transfer_chars;
+ f->fmt.pix_mp.ycbcr_enc = vdec->codec_info.matrix_coeffs;
+ f->fmt.pix_mp.quantization = vdec->codec_info.full_range;
+
+ return 0;
+}
+
+static int vdec_try_fmt(struct file *file, void *fh, struct v4l2_format *f)
+{
+ struct vpu_inst *inst = to_inst(file);
+ struct vdec_t *vdec = inst->priv;
+
+ vpu_try_fmt_common(inst, f);
+
+ vpu_inst_lock(inst);
+ if (vdec->fixed_fmt) {
+ f->fmt.pix_mp.colorspace = vdec->codec_info.color_primaries;
+ f->fmt.pix_mp.xfer_func = vdec->codec_info.transfer_chars;
+ f->fmt.pix_mp.ycbcr_enc = vdec->codec_info.matrix_coeffs;
+ f->fmt.pix_mp.quantization = vdec->codec_info.full_range;
+ } else {
+ f->fmt.pix_mp.colorspace = V4L2_COLORSPACE_DEFAULT;
+ f->fmt.pix_mp.xfer_func = V4L2_XFER_FUNC_DEFAULT;
+ f->fmt.pix_mp.ycbcr_enc = V4L2_YCBCR_ENC_DEFAULT;
+ f->fmt.pix_mp.quantization = V4L2_QUANTIZATION_DEFAULT;
+ }
+ vpu_inst_unlock(inst);
+
+ return 0;
+}
+
+static int vdec_s_fmt_common(struct vpu_inst *inst, struct v4l2_format *f)
+{
+ struct v4l2_pix_format_mplane *pixmp = &f->fmt.pix_mp;
+ const struct vpu_format *fmt;
+ struct vpu_format *cur_fmt;
+ struct vb2_queue *q;
+ struct vdec_t *vdec = inst->priv;
+ int i;
+
+ q = v4l2_m2m_get_vq(inst->fh.m2m_ctx, f->type);
+ if (!q)
+ return -EINVAL;
+ if (vb2_is_streaming(q))
+ return -EBUSY;
+
+ fmt = vpu_try_fmt_common(inst, f);
+ if (!fmt)
+ return -EINVAL;
+
+ cur_fmt = vpu_get_format(inst, f->type);
+ if (V4L2_TYPE_IS_OUTPUT(f->type) && inst->state != VPU_CODEC_STATE_DEINIT) {
+ if (cur_fmt->pixfmt != fmt->pixfmt ||
+ (pixmp->width && cur_fmt->width != pixmp->width) ||
+ (pixmp->height && cur_fmt->height != pixmp->height)) {
+ vdec->reset_codec = true;
+ vdec->fixed_fmt = false;
+ }
+ }
+ cur_fmt->pixfmt = fmt->pixfmt;
+ if (V4L2_TYPE_IS_OUTPUT(f->type) || !vdec->fixed_fmt) {
+ cur_fmt->num_planes = fmt->num_planes;
+ cur_fmt->flags = fmt->flags;
+ cur_fmt->width = pixmp->width;
+ cur_fmt->height = pixmp->height;
+ for (i = 0; i < fmt->num_planes; i++) {
+ cur_fmt->sizeimage[i] = pixmp->plane_fmt[i].sizeimage;
+ cur_fmt->bytesperline[i] = pixmp->plane_fmt[i].bytesperline;
+ }
+ if (pixmp->field != V4L2_FIELD_ANY)
+ cur_fmt->field = pixmp->field;
+ } else {
+ pixmp->num_planes = cur_fmt->num_planes;
+ pixmp->width = cur_fmt->width;
+ pixmp->height = cur_fmt->height;
+ for (i = 0; i < pixmp->num_planes; i++) {
+ pixmp->plane_fmt[i].bytesperline = cur_fmt->bytesperline[i];
+ pixmp->plane_fmt[i].sizeimage = cur_fmt->sizeimage[i];
+ }
+ pixmp->field = cur_fmt->field;
+ }
+
+ if (!vdec->fixed_fmt) {
+ if (V4L2_TYPE_IS_OUTPUT(f->type)) {
+ vdec->params.codec_format = cur_fmt->pixfmt;
+ vdec->codec_info.color_primaries = f->fmt.pix_mp.colorspace;
+ vdec->codec_info.transfer_chars = f->fmt.pix_mp.xfer_func;
+ vdec->codec_info.matrix_coeffs = f->fmt.pix_mp.ycbcr_enc;
+ vdec->codec_info.full_range = f->fmt.pix_mp.quantization;
+ } else {
+ vdec->params.output_format = cur_fmt->pixfmt;
+ inst->crop.left = 0;
+ inst->crop.top = 0;
+ inst->crop.width = cur_fmt->width;
+ inst->crop.height = cur_fmt->height;
+ }
+ }
+
+ return 0;
+}
+
+static int vdec_s_fmt(struct file *file, void *fh, struct v4l2_format *f)
+{
+ struct vpu_inst *inst = to_inst(file);
+ struct v4l2_pix_format_mplane *pixmp = &f->fmt.pix_mp;
+ struct vdec_t *vdec = inst->priv;
+ int ret = 0;
+
+ vpu_inst_lock(inst);
+ ret = vdec_s_fmt_common(inst, f);
+ if (ret)
+ goto exit;
+
+ if (V4L2_TYPE_IS_OUTPUT(f->type) && !vdec->fixed_fmt) {
+ struct v4l2_format fc;
+
+ memset(&fc, 0, sizeof(fc));
+ fc.type = inst->cap_format.type;
+ fc.fmt.pix_mp.pixelformat = inst->cap_format.pixfmt;
+ fc.fmt.pix_mp.width = pixmp->width;
+ fc.fmt.pix_mp.height = pixmp->height;
+ vdec_s_fmt_common(inst, &fc);
+ }
+
+ f->fmt.pix_mp.colorspace = vdec->codec_info.color_primaries;
+ f->fmt.pix_mp.xfer_func = vdec->codec_info.transfer_chars;
+ f->fmt.pix_mp.ycbcr_enc = vdec->codec_info.matrix_coeffs;
+ f->fmt.pix_mp.quantization = vdec->codec_info.full_range;
+
+exit:
+ vpu_inst_unlock(inst);
+ return ret;
+}
+
+static int vdec_g_selection(struct file *file, void *fh,
+ struct v4l2_selection *s)
+{
+ struct vpu_inst *inst = to_inst(file);
+
+ if (s->type != V4L2_BUF_TYPE_VIDEO_CAPTURE &&
+ s->type != V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE)
+ return -EINVAL;
+
+ switch (s->target) {
+ case V4L2_SEL_TGT_COMPOSE:
+ case V4L2_SEL_TGT_COMPOSE_DEFAULT:
+ case V4L2_SEL_TGT_COMPOSE_PADDED:
+ s->r = inst->crop;
+ break;
+ case V4L2_SEL_TGT_COMPOSE_BOUNDS:
+ s->r.left = 0;
+ s->r.top = 0;
+ s->r.width = inst->cap_format.width;
+ s->r.height = inst->cap_format.height;
+ break;
+ default:
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+static int vdec_drain(struct vpu_inst *inst)
+{
+ struct vdec_t *vdec = inst->priv;
+
+ if (!vdec->drain)
+ return 0;
+
+ if (v4l2_m2m_num_src_bufs_ready(inst->fh.m2m_ctx))
+ return 0;
+
+ if (!vdec->params.frame_count) {
+ vpu_set_last_buffer_dequeued(inst);
+ return 0;
+ }
+
+ vpu_iface_add_scode(inst, SCODE_PADDING_EOS);
+ vdec->params.end_flag = 1;
+ vpu_iface_set_decode_params(inst, &vdec->params, 1);
+ vdec->drain = 0;
+ vpu_trace(inst->dev, "[%d] frame_count = %d\n", inst->id, vdec->params.frame_count);
+
+ return 0;
+}
+
+static int vdec_cmd_start(struct vpu_inst *inst)
+{
+ if (inst->state == VPU_CODEC_STATE_DYAMIC_RESOLUTION_CHANGE)
+ vdec_update_state(inst, VPU_CODEC_STATE_ACTIVE, 0);
+ vpu_process_capture_buffer(inst);
+ return 0;
+}
+
+static int vdec_cmd_stop(struct vpu_inst *inst)
+{
+ struct vdec_t *vdec = inst->priv;
+
+ vpu_trace(inst->dev, "[%d]\n", inst->id);
+
+ if (inst->state == VPU_CODEC_STATE_DEINIT) {
+ vpu_set_last_buffer_dequeued(inst);
+ } else {
+ vdec->drain = 1;
+ vdec_drain(inst);
+ }
+
+ return 0;
+}
+
+static int vdec_decoder_cmd(struct file *file,
+ void *fh,
+ struct v4l2_decoder_cmd *cmd)
+{
+ struct vpu_inst *inst = to_inst(file);
+ int ret;
+
+ ret = v4l2_m2m_ioctl_try_decoder_cmd(file, fh, cmd);
+ if (ret)
+ return ret;
+
+ vpu_inst_lock(inst);
+ switch (cmd->cmd) {
+ case V4L2_DEC_CMD_START:
+ vdec_cmd_start(inst);
+ break;
+ case V4L2_DEC_CMD_STOP:
+ vdec_cmd_stop(inst);
+ break;
+ default:
+ break;
+ }
+ vpu_inst_unlock(inst);
+
+ return 0;
+}
+
+static int vdec_subscribe_event(struct v4l2_fh *fh,
+ const struct v4l2_event_subscription *sub)
+{
+ switch (sub->type) {
+ case V4L2_EVENT_EOS:
+ return v4l2_event_subscribe(fh, sub, 0, NULL);
+ case V4L2_EVENT_SOURCE_CHANGE:
+ return v4l2_src_change_event_subscribe(fh, sub);
+ case V4L2_EVENT_CTRL:
+ return v4l2_ctrl_subscribe_event(fh, sub);
+ default:
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+static const struct v4l2_ioctl_ops vdec_ioctl_ops = {
+ .vidioc_querycap = vdec_querycap,
+ .vidioc_enum_fmt_vid_cap = vdec_enum_fmt,
+ .vidioc_enum_fmt_vid_out = vdec_enum_fmt,
+ .vidioc_g_fmt_vid_cap_mplane = vdec_g_fmt,
+ .vidioc_g_fmt_vid_out_mplane = vdec_g_fmt,
+ .vidioc_try_fmt_vid_cap_mplane = vdec_try_fmt,
+ .vidioc_try_fmt_vid_out_mplane = vdec_try_fmt,
+ .vidioc_s_fmt_vid_cap_mplane = vdec_s_fmt,
+ .vidioc_s_fmt_vid_out_mplane = vdec_s_fmt,
+ .vidioc_g_selection = vdec_g_selection,
+ .vidioc_try_decoder_cmd = v4l2_m2m_ioctl_try_decoder_cmd,
+ .vidioc_decoder_cmd = vdec_decoder_cmd,
+ .vidioc_subscribe_event = vdec_subscribe_event,
+ .vidioc_unsubscribe_event = v4l2_event_unsubscribe,
+ .vidioc_reqbufs = v4l2_m2m_ioctl_reqbufs,
+ .vidioc_create_bufs = v4l2_m2m_ioctl_create_bufs,
+ .vidioc_prepare_buf = v4l2_m2m_ioctl_prepare_buf,
+ .vidioc_querybuf = v4l2_m2m_ioctl_querybuf,
+ .vidioc_qbuf = v4l2_m2m_ioctl_qbuf,
+ .vidioc_expbuf = v4l2_m2m_ioctl_expbuf,
+ .vidioc_dqbuf = v4l2_m2m_ioctl_dqbuf,
+ .vidioc_streamon = v4l2_m2m_ioctl_streamon,
+ .vidioc_streamoff = v4l2_m2m_ioctl_streamoff,
+};
+
+static bool vdec_check_ready(struct vpu_inst *inst, unsigned int type)
+{
+ struct vdec_t *vdec = inst->priv;
+
+ if (V4L2_TYPE_IS_OUTPUT(type)) {
+ if (vdec->ts_pre_count >= vdec->frame_depth)
+ return false;
+ return true;
+ }
+
+ if (vdec->req_frame_count)
+ return true;
+
+ return false;
+}
+
+static int vdec_frame_decoded(struct vpu_inst *inst, void *arg)
+{
+ struct vdec_t *vdec = inst->priv;
+ struct vpu_dec_pic_info *info = arg;
+ struct vpu_vb2_buffer *vpu_buf;
+ int ret = 0;
+
+ if (!info || info->id >= ARRAY_SIZE(vdec->slots))
+ return -EINVAL;
+
+ vpu_inst_lock(inst);
+ vpu_buf = vdec->slots[info->id];
+ if (!vpu_buf) {
+ dev_err(inst->dev, "[%d] decoded invalid frame[%d]\n", inst->id, info->id);
+ ret = -EINVAL;
+ goto exit;
+ }
+ if (vpu_buf->state == VPU_BUF_STATE_DECODED)
+ dev_info(inst->dev, "[%d] buf[%d] has been decoded\n", inst->id, info->id);
+ vpu_buf->state = VPU_BUF_STATE_DECODED;
+ vdec->decoded_frame_count++;
+ if (vdec->ts_pre_count >= info->consumed_count)
+ vdec->ts_pre_count -= info->consumed_count;
+ else
+ vdec->ts_pre_count = 0;
+exit:
+ vpu_inst_unlock(inst);
+
+ return ret;
+}
+
+static struct vpu_vb2_buffer *vdec_find_buffer(struct vpu_inst *inst, u32 luma)
+{
+ struct vdec_t *vdec = inst->priv;
+ int i;
+
+ for (i = 0; i < ARRAY_SIZE(vdec->slots); i++) {
+ if (!vdec->slots[i])
+ continue;
+ if (luma == vdec->slots[i]->luma)
+ return vdec->slots[i];
+ }
+
+ return NULL;
+}
+
+static void vdec_buf_done(struct vpu_inst *inst, struct vpu_frame_info *frame)
+{
+ struct vdec_t *vdec = inst->priv;
+ struct vpu_vb2_buffer *vpu_buf;
+ struct vb2_v4l2_buffer *vbuf;
+ u32 sequence;
+
+ if (!frame)
+ return;
+
+ vpu_inst_lock(inst);
+ sequence = vdec->sequence++;
+ vpu_buf = vdec_find_buffer(inst, frame->luma);
+ vpu_inst_unlock(inst);
+ if (!vpu_buf) {
+ dev_err(inst->dev, "[%d] can't find buffer, id = %d, addr = 0x%x\n",
+ inst->id, frame->id, frame->luma);
+ return;
+ }
+ if (frame->skipped) {
+ dev_dbg(inst->dev, "[%d] frame skip\n", inst->id);
+ return;
+ }
+
+ vbuf = &vpu_buf->m2m_buf.vb;
+ if (vbuf->vb2_buf.index != frame->id)
+ dev_err(inst->dev, "[%d] buffer id(%d, %d) mismatch\n",
+ inst->id, vbuf->vb2_buf.index, frame->id);
+
+ if (vpu_buf->state != VPU_BUF_STATE_DECODED)
+ dev_err(inst->dev, "[%d] buffer(%d) is ready but not decoded\n",
+ inst->id, frame->id);
+ vpu_buf->state = VPU_BUF_STATE_READY;
+ vb2_set_plane_payload(&vbuf->vb2_buf, 0, inst->cap_format.sizeimage[0]);
+ vb2_set_plane_payload(&vbuf->vb2_buf, 1, inst->cap_format.sizeimage[1]);
+ vbuf->vb2_buf.timestamp = frame->timestamp;
+ vbuf->field = inst->cap_format.field;
+ vbuf->sequence = sequence;
+ dev_dbg(inst->dev, "[%d][OUTPUT TS]%32lld\n", inst->id, frame->timestamp);
+
+ v4l2_m2m_buf_done(vbuf, VB2_BUF_STATE_DONE);
+ vpu_inst_lock(inst);
+ vdec->timestamp = frame->timestamp;
+ vdec->display_frame_count++;
+ vpu_inst_unlock(inst);
+ dev_dbg(inst->dev, "[%d] decoded : %d, display : %d, sequence : %d\n",
+ inst->id,
+ vdec->decoded_frame_count,
+ vdec->display_frame_count,
+ vdec->sequence);
+}
+
+static void vdec_stop_done(struct vpu_inst *inst)
+{
+ struct vdec_t *vdec = inst->priv;
+
+ vpu_inst_lock(inst);
+ vdec_update_state(inst, VPU_CODEC_STATE_DEINIT, 0);
+ vdec->seq_hdr_found = 0;
+ vdec->req_frame_count = 0;
+ vdec->reset_codec = false;
+ vdec->fixed_fmt = false;
+ vdec->params.end_flag = 0;
+ vdec->drain = 0;
+ vdec->ts_pre_count = 0;
+ vdec->timestamp = VPU_INVALID_TIMESTAMP;
+ vdec->ts_start = VPU_INVALID_TIMESTAMP;
+ vdec->ts_input = VPU_INVALID_TIMESTAMP;
+ vdec->params.frame_count = 0;
+ vdec->decoded_frame_count = 0;
+ vdec->display_frame_count = 0;
+ vdec->sequence = 0;
+ vdec->eos_received = 0;
+ vdec->is_source_changed = false;
+ vdec->source_change = 0;
+ vpu_inst_unlock(inst);
+}
+
+static bool vdec_check_source_change(struct vpu_inst *inst)
+{
+ struct vdec_t *vdec = inst->priv;
+ const struct vpu_format *fmt;
+ int i;
+
+ if (!vb2_is_streaming(v4l2_m2m_get_dst_vq(inst->fh.m2m_ctx)))
+ return true;
+ fmt = vpu_helper_find_format(inst, inst->cap_format.type, vdec->codec_info.pixfmt);
+ if (inst->cap_format.pixfmt != vdec->codec_info.pixfmt)
+ return true;
+ if (inst->cap_format.width != vdec->codec_info.decoded_width)
+ return true;
+ if (inst->cap_format.height != vdec->codec_info.decoded_height)
+ return true;
+ if (vpu_get_num_buffers(inst, inst->cap_format.type) < inst->min_buffer_cap)
+ return true;
+ if (inst->crop.left != vdec->codec_info.offset_x)
+ return true;
+ if (inst->crop.top != vdec->codec_info.offset_y)
+ return true;
+ if (inst->crop.width != vdec->codec_info.width)
+ return true;
+ if (inst->crop.height != vdec->codec_info.height)
+ return true;
+ if (fmt && inst->cap_format.num_planes != fmt->num_planes)
+ return true;
+ for (i = 0; i < inst->cap_format.num_planes; i++) {
+ if (inst->cap_format.bytesperline[i] != vdec->codec_info.bytesperline[i])
+ return true;
+ if (inst->cap_format.sizeimage[i] != vdec->codec_info.sizeimage[i])
+ return true;
+ }
+
+ return false;
+}
+
+static void vdec_init_fmt(struct vpu_inst *inst)
+{
+ struct vdec_t *vdec = inst->priv;
+ const struct vpu_format *fmt;
+ int i;
+
+ fmt = vpu_helper_find_format(inst, inst->cap_format.type, vdec->codec_info.pixfmt);
+ inst->out_format.width = vdec->codec_info.width;
+ inst->out_format.height = vdec->codec_info.height;
+ inst->cap_format.width = vdec->codec_info.decoded_width;
+ inst->cap_format.height = vdec->codec_info.decoded_height;
+ inst->cap_format.pixfmt = vdec->codec_info.pixfmt;
+ if (fmt) {
+ inst->cap_format.num_planes = fmt->num_planes;
+ inst->cap_format.flags = fmt->flags;
+ }
+ for (i = 0; i < inst->cap_format.num_planes; i++) {
+ inst->cap_format.bytesperline[i] = vdec->codec_info.bytesperline[i];
+ inst->cap_format.sizeimage[i] = vdec->codec_info.sizeimage[i];
+ }
+ if (vdec->codec_info.progressive)
+ inst->cap_format.field = V4L2_FIELD_NONE;
+ else
+ inst->cap_format.field = V4L2_FIELD_INTERLACED;
+ if (vdec->codec_info.color_primaries == V4L2_COLORSPACE_DEFAULT)
+ vdec->codec_info.color_primaries = V4L2_COLORSPACE_REC709;
+ if (vdec->codec_info.transfer_chars == V4L2_XFER_FUNC_DEFAULT)
+ vdec->codec_info.transfer_chars = V4L2_XFER_FUNC_709;
+ if (vdec->codec_info.matrix_coeffs == V4L2_YCBCR_ENC_DEFAULT)
+ vdec->codec_info.matrix_coeffs = V4L2_YCBCR_ENC_709;
+ if (vdec->codec_info.full_range == V4L2_QUANTIZATION_DEFAULT)
+ vdec->codec_info.full_range = V4L2_QUANTIZATION_LIM_RANGE;
+}
+
+static void vdec_init_crop(struct vpu_inst *inst)
+{
+ struct vdec_t *vdec = inst->priv;
+
+ inst->crop.left = vdec->codec_info.offset_x;
+ inst->crop.top = vdec->codec_info.offset_y;
+ inst->crop.width = vdec->codec_info.width;
+ inst->crop.height = vdec->codec_info.height;
+}
+
+static void vdec_init_mbi(struct vpu_inst *inst)
+{
+ struct vdec_t *vdec = inst->priv;
+
+ vdec->mbi.size = vdec->codec_info.mbi_size;
+ vdec->mbi.max_count = ARRAY_SIZE(vdec->mbi.buffer);
+ scnprintf(vdec->mbi.name, sizeof(vdec->mbi.name), "mbi");
+ vdec->mbi.type = MEM_RES_MBI;
+ vdec->mbi.tag = vdec->seq_tag;
+}
+
+static void vdec_init_dcp(struct vpu_inst *inst)
+{
+ struct vdec_t *vdec = inst->priv;
+
+ vdec->dcp.size = vdec->codec_info.dcp_size;
+ vdec->dcp.max_count = ARRAY_SIZE(vdec->dcp.buffer);
+ scnprintf(vdec->dcp.name, sizeof(vdec->dcp.name), "dcp");
+ vdec->dcp.type = MEM_RES_DCP;
+ vdec->dcp.tag = vdec->seq_tag;
+}
+
+static void vdec_request_one_fs(struct vdec_fs_info *fs)
+{
+ WARN_ON(!fs);
+
+ fs->req_count++;
+ if (fs->req_count > fs->max_count)
+ fs->req_count = fs->max_count;
+}
+
+static int vdec_alloc_fs_buffer(struct vpu_inst *inst, struct vdec_fs_info *fs)
+{
+ struct vpu_buffer *buffer;
+
+ if (!inst || !fs || !fs->size)
+ return -EINVAL;
+
+ if (fs->count >= fs->req_count)
+ return -EINVAL;
+
+ buffer = &fs->buffer[fs->count];
+ if (buffer->virt && buffer->length >= fs->size)
+ return 0;
+
+ vpu_free_dma(buffer);
+ buffer->length = fs->size;
+ return vpu_alloc_dma(inst->core, buffer);
+}
+
+static void vdec_alloc_fs(struct vpu_inst *inst, struct vdec_fs_info *fs)
+{
+ int ret;
+
+ while (fs->count < fs->req_count) {
+ ret = vdec_alloc_fs_buffer(inst, fs);
+ if (ret)
+ break;
+ fs->count++;
+ }
+}
+
+static void vdec_clear_fs(struct vdec_fs_info *fs)
+{
+ u32 i;
+
+ if (!fs)
+ return;
+
+ for (i = 0; i < ARRAY_SIZE(fs->buffer); i++)
+ vpu_free_dma(&fs->buffer[i]);
+ memset(fs, 0, sizeof(*fs));
+}
+
+static int vdec_response_fs(struct vpu_inst *inst, struct vdec_fs_info *fs)
+{
+ struct vpu_fs_info info;
+ int ret;
+
+ if (fs->index >= fs->count)
+ return 0;
+
+ memset(&info, 0, sizeof(info));
+ info.id = fs->index;
+ info.type = fs->type;
+ info.tag = fs->tag;
+ info.luma_addr = fs->buffer[fs->index].phys;
+ info.luma_size = fs->buffer[fs->index].length;
+ ret = vpu_session_alloc_fs(inst, &info);
+ if (ret)
+ return ret;
+
+ fs->index++;
+ return 0;
+}
+
+static int vdec_response_frame_abnormal(struct vpu_inst *inst)
+{
+ struct vdec_t *vdec = inst->priv;
+ struct vpu_fs_info info;
+
+ if (!vdec->req_frame_count)
+ return 0;
+
+ memset(&info, 0, sizeof(info));
+ info.type = MEM_RES_FRAME;
+ info.tag = vdec->seq_tag + 0xf0;
+ vpu_session_alloc_fs(inst, &info);
+ vdec->req_frame_count--;
+
+ return 0;
+}
+
+static int vdec_response_frame(struct vpu_inst *inst, struct vb2_v4l2_buffer *vbuf)
+{
+ struct vdec_t *vdec = inst->priv;
+ struct vpu_vb2_buffer *vpu_buf;
+ struct vpu_fs_info info;
+ int ret;
+
+ if (inst->state != VPU_CODEC_STATE_ACTIVE)
+ return -EINVAL;
+
+ if (!vdec->req_frame_count)
+ return -EINVAL;
+
+ if (!vbuf)
+ return -EINVAL;
+
+ if (vdec->slots[vbuf->vb2_buf.index]) {
+ dev_err(inst->dev, "[%d] repeat alloc fs %d\n",
+ inst->id, vbuf->vb2_buf.index);
+ return -EINVAL;
+ }
+
+ dev_dbg(inst->dev, "[%d] state = %d, alloc fs %d, tag = 0x%x\n",
+ inst->id, inst->state, vbuf->vb2_buf.index, vdec->seq_tag);
+ vpu_buf = to_vpu_vb2_buffer(vbuf);
+
+ memset(&info, 0, sizeof(info));
+ info.id = vbuf->vb2_buf.index;
+ info.type = MEM_RES_FRAME;
+ info.tag = vdec->seq_tag;
+ info.luma_addr = vpu_get_vb_phy_addr(&vbuf->vb2_buf, 0);
+ info.luma_size = inst->cap_format.sizeimage[0];
+ info.chroma_addr = vpu_get_vb_phy_addr(&vbuf->vb2_buf, 1);
+ info.chromau_size = inst->cap_format.sizeimage[1];
+ info.bytesperline = inst->cap_format.bytesperline[0];
+ ret = vpu_session_alloc_fs(inst, &info);
+ if (ret)
+ return ret;
+
+ vpu_buf->tag = info.tag;
+ vpu_buf->luma = info.luma_addr;
+ vpu_buf->chroma_u = info.chroma_addr;
+ vpu_buf->chroma_v = 0;
+ vpu_buf->state = VPU_BUF_STATE_INUSE;
+ vdec->slots[info.id] = vpu_buf;
+ vdec->req_frame_count--;
+
+ return 0;
+}
+
+static void vdec_response_fs_request(struct vpu_inst *inst, bool force)
+{
+ struct vdec_t *vdec = inst->priv;
+ int i;
+ int ret;
+
+ if (force) {
+ for (i = vdec->req_frame_count; i > 0; i--)
+ vdec_response_frame_abnormal(inst);
+ return;
+ }
+
+ for (i = vdec->req_frame_count; i > 0; i--) {
+ ret = vpu_process_capture_buffer(inst);
+ if (ret)
+ break;
+ if (vdec->eos_received)
+ break;
+ }
+
+ for (i = vdec->mbi.index; i < vdec->mbi.count; i++) {
+ if (vdec_response_fs(inst, &vdec->mbi))
+ break;
+ if (vdec->eos_received)
+ break;
+ }
+ for (i = vdec->dcp.index; i < vdec->dcp.count; i++) {
+ if (vdec_response_fs(inst, &vdec->dcp))
+ break;
+ if (vdec->eos_received)
+ break;
+ }
+}
+
+static void vdec_response_fs_release(struct vpu_inst *inst, u32 id, u32 tag)
+{
+ struct vpu_fs_info info;
+
+ memset(&info, 0, sizeof(info));
+ info.id = id;
+ info.tag = tag;
+ vpu_session_release_fs(inst, &info);
+}
+
+static void vdec_recycle_buffer(struct vpu_inst *inst, struct vb2_v4l2_buffer *vbuf)
+{
+ if (!inst || !vbuf)
+ return;
+
+ if (vbuf->vb2_buf.state != VB2_BUF_STATE_ACTIVE)
+ return;
+ if (vpu_find_buf_by_idx(inst, vbuf->vb2_buf.type, vbuf->vb2_buf.index))
+ return;
+ v4l2_m2m_buf_queue(inst->fh.m2m_ctx, vbuf);
+}
+
+static void vdec_clear_slots(struct vpu_inst *inst)
+{
+ struct vdec_t *vdec = inst->priv;
+ struct vpu_vb2_buffer *vpu_buf;
+ struct vb2_v4l2_buffer *vbuf;
+ int i;
+
+ for (i = 0; i < ARRAY_SIZE(vdec->slots); i++) {
+ if (!vdec->slots[i])
+ continue;
+
+ vpu_buf = vdec->slots[i];
+ vbuf = &vpu_buf->m2m_buf.vb;
+
+ vdec_response_fs_release(inst, i, vpu_buf->tag);
+ vdec_recycle_buffer(inst, vbuf);
+ vdec->slots[i]->state = VPU_BUF_STATE_IDLE;
+ vdec->slots[i] = NULL;
+ }
+}
+
+static void vdec_event_seq_hdr(struct vpu_inst *inst,
+ struct vpu_dec_codec_info *hdr)
+{
+ struct vdec_t *vdec = inst->priv;
+
+ vpu_inst_lock(inst);
+ memcpy(&vdec->codec_info, hdr, sizeof(vdec->codec_info));
+
+ vpu_trace(inst->dev, "[%d] %d x %d, crop : (%d, %d) %d x %d, %d, %d\n",
+ inst->id,
+ vdec->codec_info.decoded_width,
+ vdec->codec_info.decoded_height,
+ vdec->codec_info.offset_x,
+ vdec->codec_info.offset_y,
+ vdec->codec_info.width,
+ vdec->codec_info.height,
+ hdr->num_ref_frms,
+ hdr->num_dpb_frms);
+ inst->min_buffer_cap = hdr->num_ref_frms + hdr->num_dpb_frms;
+ vdec->is_source_changed = vdec_check_source_change(inst);
+ vdec_init_fmt(inst);
+ vdec_init_crop(inst);
+ vdec_init_mbi(inst);
+ vdec_init_dcp(inst);
+ if (!vdec->seq_hdr_found) {
+ vdec->seq_tag = vdec->codec_info.tag;
+ if (vdec->is_source_changed) {
+ vdec_update_state(inst, VPU_CODEC_STATE_DYAMIC_RESOLUTION_CHANGE, 0);
+ vpu_notify_source_change(inst);
+ vdec->is_source_changed = false;
+ }
+ }
+ if (vdec->seq_tag != vdec->codec_info.tag) {
+ vdec_response_fs_request(inst, true);
+ vpu_trace(inst->dev, "[%d] seq tag change: %d -> %d\n",
+ inst->id, vdec->seq_tag, vdec->codec_info.tag);
+ }
+ vdec->seq_hdr_found++;
+ vdec->fixed_fmt = true;
+ vpu_inst_unlock(inst);
+}
+
+static void vdec_event_resolution_change(struct vpu_inst *inst)
+{
+ struct vdec_t *vdec = inst->priv;
+
+ vpu_trace(inst->dev, "[%d]\n", inst->id);
+ vpu_inst_lock(inst);
+ vdec->seq_tag = vdec->codec_info.tag;
+ vdec_clear_fs(&vdec->mbi);
+ vdec_clear_fs(&vdec->dcp);
+ vdec_clear_slots(inst);
+ vdec_init_mbi(inst);
+ vdec_init_dcp(inst);
+ if (vdec->is_source_changed) {
+ vdec_update_state(inst, VPU_CODEC_STATE_DYAMIC_RESOLUTION_CHANGE, 0);
+ vdec->source_change++;
+ vdec_handle_resolution_change(inst);
+ vdec->is_source_changed = false;
+ }
+ vpu_inst_unlock(inst);
+}
+
+static void vdec_event_req_fs(struct vpu_inst *inst, struct vpu_fs_info *fs)
+{
+ struct vdec_t *vdec = inst->priv;
+
+ if (!fs)
+ return;
+
+ vpu_inst_lock(inst);
+
+ switch (fs->type) {
+ case MEM_RES_FRAME:
+ vdec->req_frame_count++;
+ break;
+ case MEM_RES_MBI:
+ vdec_request_one_fs(&vdec->mbi);
+ break;
+ case MEM_RES_DCP:
+ vdec_request_one_fs(&vdec->dcp);
+ break;
+ default:
+ break;
+ }
+
+ vdec_alloc_fs(inst, &vdec->mbi);
+ vdec_alloc_fs(inst, &vdec->dcp);
+
+ vdec_response_fs_request(inst, false);
+
+ vpu_inst_unlock(inst);
+}
+
+static void vdec_evnet_rel_fs(struct vpu_inst *inst, struct vpu_fs_info *fs)
+{
+ struct vdec_t *vdec = inst->priv;
+ struct vpu_vb2_buffer *vpu_buf;
+ struct vb2_v4l2_buffer *vbuf;
+
+ if (!fs || fs->id >= ARRAY_SIZE(vdec->slots))
+ return;
+ if (fs->type != MEM_RES_FRAME)
+ return;
+
+ if (fs->id >= vpu_get_num_buffers(inst, inst->cap_format.type)) {
+ dev_err(inst->dev, "[%d] invalid fs(%d) to release\n", inst->id, fs->id);
+ return;
+ }
+
+ vpu_inst_lock(inst);
+ vpu_buf = vdec->slots[fs->id];
+ vdec->slots[fs->id] = NULL;
+
+ if (!vpu_buf) {
+ dev_dbg(inst->dev, "[%d] fs[%d] has been released\n", inst->id, fs->id);
+ goto exit;
+ }
+
+ if (vpu_buf->state == VPU_BUF_STATE_DECODED) {
+ dev_dbg(inst->dev, "[%d] frame skip\n", inst->id);
+ vdec->sequence++;
+ }
+
+ vdec_response_fs_release(inst, fs->id, vpu_buf->tag);
+ vbuf = &vpu_buf->m2m_buf.vb;
+ if (vpu_buf->state != VPU_BUF_STATE_READY)
+ vdec_recycle_buffer(inst, vbuf);
+
+ vpu_buf->state = VPU_BUF_STATE_IDLE;
+ vpu_process_capture_buffer(inst);
+
+exit:
+ vpu_inst_unlock(inst);
+}
+
+static void vdec_event_eos(struct vpu_inst *inst)
+{
+ struct vdec_t *vdec = inst->priv;
+
+ vpu_trace(inst->dev, "[%d] input : %d, decoded : %d, display : %d, sequence : %d\n",
+ inst->id,
+ vdec->params.frame_count,
+ vdec->decoded_frame_count,
+ vdec->display_frame_count,
+ vdec->sequence);
+ vpu_inst_lock(inst);
+ vdec->eos_received++;
+ vdec->fixed_fmt = false;
+ inst->min_buffer_cap = VDEC_MIN_BUFFER_CAP;
+ vdec_update_state(inst, VPU_CODEC_STATE_DRAIN, 0);
+ vdec_set_last_buffer_dequeued(inst);
+ vpu_inst_unlock(inst);
+}
+
+static void vdec_event_notify(struct vpu_inst *inst, u32 event, void *data)
+{
+ switch (event) {
+ case VPU_MSG_ID_SEQ_HDR_FOUND:
+ vdec_event_seq_hdr(inst, data);
+ break;
+ case VPU_MSG_ID_RES_CHANGE:
+ vdec_event_resolution_change(inst);
+ break;
+ case VPU_MSG_ID_FRAME_REQ:
+ vdec_event_req_fs(inst, data);
+ break;
+ case VPU_MSG_ID_FRAME_RELEASE:
+ vdec_evnet_rel_fs(inst, data);
+ break;
+ case VPU_MSG_ID_PIC_EOS:
+ vdec_event_eos(inst);
+ break;
+ default:
+ break;
+ }
+}
+
+static int vdec_process_output(struct vpu_inst *inst, struct vb2_buffer *vb)
+{
+ struct vdec_t *vdec = inst->priv;
+ struct vb2_v4l2_buffer *vbuf;
+ struct vpu_vb2_buffer *vpu_buf;
+ struct vpu_rpc_buffer_desc desc;
+ s64 timestamp;
+ u32 free_space;
+ int ret;
+
+ vbuf = to_vb2_v4l2_buffer(vb);
+ vpu_buf = to_vpu_vb2_buffer(vbuf);
+ dev_dbg(inst->dev, "[%d] dec output [%d] %d : %ld\n",
+ inst->id, vbuf->sequence, vb->index, vb2_get_plane_payload(vb, 0));
+
+ if (inst->state == VPU_CODEC_STATE_DEINIT)
+ return -EINVAL;
+ if (vdec->reset_codec)
+ return -EINVAL;
+
+ if (inst->state == VPU_CODEC_STATE_STARTED)
+ vdec_update_state(inst, VPU_CODEC_STATE_ACTIVE, 0);
+
+ ret = vpu_iface_get_stream_buffer_desc(inst, &desc);
+ if (ret)
+ return ret;
+
+ free_space = vpu_helper_get_free_space(inst);
+ if (free_space < vb2_get_plane_payload(vb, 0) + 0x40000)
+ return -ENOMEM;
+
+ timestamp = vb->timestamp;
+ if (timestamp >= 0 && vdec->ts_start < 0)
+ vdec->ts_start = timestamp;
+ if (vdec->ts_input < timestamp)
+ vdec->ts_input = timestamp;
+
+ ret = vpu_iface_input_frame(inst, vb);
+ if (ret < 0)
+ return -ENOMEM;
+
+ dev_dbg(inst->dev, "[%d][INPUT TS]%32lld\n", inst->id, vb->timestamp);
+ vdec->ts_pre_count++;
+ vdec->params.frame_count++;
+
+ v4l2_m2m_src_buf_remove_by_buf(inst->fh.m2m_ctx, vbuf);
+ vpu_buf->state = VPU_BUF_STATE_IDLE;
+ v4l2_m2m_buf_done(vbuf, VB2_BUF_STATE_DONE);
+
+ if (vdec->drain)
+ vdec_drain(inst);
+
+ return 0;
+}
+
+static int vdec_process_capture(struct vpu_inst *inst, struct vb2_buffer *vb)
+{
+ struct vdec_t *vdec = inst->priv;
+ struct vb2_v4l2_buffer *vbuf = to_vb2_v4l2_buffer(vb);
+ int ret;
+
+ if (inst->state == VPU_CODEC_STATE_DYAMIC_RESOLUTION_CHANGE)
+ return -EINVAL;
+ if (vdec->reset_codec)
+ return -EINVAL;
+
+ ret = vdec_response_frame(inst, vbuf);
+ if (ret)
+ return ret;
+ v4l2_m2m_dst_buf_remove_by_buf(inst->fh.m2m_ctx, vbuf);
+ return 0;
+}
+
+static void vdec_on_queue_empty(struct vpu_inst *inst, u32 type)
+{
+ struct vdec_t *vdec = inst->priv;
+
+ if (V4L2_TYPE_IS_OUTPUT(type))
+ return;
+
+ vdec_handle_resolution_change(inst);
+ if (vdec->eos_received)
+ vdec_set_last_buffer_dequeued(inst);
+}
+
+static void vdec_abort(struct vpu_inst *inst)
+{
+ struct vdec_t *vdec = inst->priv;
+ struct vpu_rpc_buffer_desc desc;
+ int ret;
+
+ vpu_trace(inst->dev, "[%d] state = %d\n", inst->id, inst->state);
+ vpu_iface_add_scode(inst, SCODE_PADDING_ABORT);
+ vdec->params.end_flag = 1;
+ vpu_iface_set_decode_params(inst, &vdec->params, 1);
+
+ vpu_session_abort(inst);
+
+ ret = vpu_iface_get_stream_buffer_desc(inst, &desc);
+ if (!ret)
+ vpu_iface_update_stream_buffer(inst, desc.rptr, 1);
+
+ vpu_session_rst_buf(inst);
+ vpu_trace(inst->dev, "[%d] input : %d, decoded : %d, display : %d, sequence : %d\n",
+ inst->id,
+ vdec->params.frame_count,
+ vdec->decoded_frame_count,
+ vdec->display_frame_count,
+ vdec->sequence);
+ vdec->params.end_flag = 0;
+ vdec->drain = 0;
+ vdec->ts_pre_count = 0;
+ vdec->timestamp = VPU_INVALID_TIMESTAMP;
+ vdec->ts_start = VPU_INVALID_TIMESTAMP;
+ vdec->ts_input = VPU_INVALID_TIMESTAMP;
+ vdec->params.frame_count = 0;
+ vdec->decoded_frame_count = 0;
+ vdec->display_frame_count = 0;
+ vdec->sequence = 0;
+}
+
+static void vdec_stop(struct vpu_inst *inst, bool free)
+{
+ struct vdec_t *vdec = inst->priv;
+
+ vdec_clear_slots(inst);
+ if (inst->state != VPU_CODEC_STATE_DEINIT)
+ vpu_session_stop(inst);
+ vdec_clear_fs(&vdec->mbi);
+ vdec_clear_fs(&vdec->dcp);
+ if (free) {
+ vpu_free_dma(&vdec->udata);
+ vpu_free_dma(&inst->stream_buffer);
+ }
+ vdec_update_state(inst, VPU_CODEC_STATE_DEINIT, 1);
+ vdec->reset_codec = false;
+}
+
+static void vdec_release(struct vpu_inst *inst)
+{
+ if (inst->id != VPU_INST_NULL_ID)
+ vpu_trace(inst->dev, "[%d]\n", inst->id);
+ vpu_inst_lock(inst);
+ vdec_stop(inst, true);
+ vpu_inst_unlock(inst);
+}
+
+static void vdec_cleanup(struct vpu_inst *inst)
+{
+ if (!inst)
+ return;
+
+ vfree(inst->priv);
+ inst->priv = NULL;
+ vfree(inst);
+}
+
+static void vdec_init_params(struct vdec_t *vdec)
+{
+ vdec->params.frame_count = 0;
+ vdec->params.end_flag = 0;
+}
+
+static int vdec_start(struct vpu_inst *inst)
+{
+ struct vdec_t *vdec = inst->priv;
+ int stream_buffer_size;
+ int ret;
+
+ if (inst->state != VPU_CODEC_STATE_DEINIT)
+ return 0;
+
+ vpu_trace(inst->dev, "[%d]\n", inst->id);
+ if (!vdec->udata.virt) {
+ vdec->udata.length = 0x1000;
+ ret = vpu_alloc_dma(inst->core, &vdec->udata);
+ if (ret) {
+ dev_err(inst->dev, "[%d] alloc udata fail\n", inst->id);
+ goto error;
+ }
+ }
+
+ if (!inst->stream_buffer.virt) {
+ stream_buffer_size = vpu_iface_get_stream_buffer_size(inst->core);
+ if (stream_buffer_size > 0) {
+ inst->stream_buffer.length = stream_buffer_size;
+ ret = vpu_alloc_dma(inst->core, &inst->stream_buffer);
+ if (ret) {
+ dev_err(inst->dev, "[%d] alloc stream buffer fail\n", inst->id);
+ goto error;
+ }
+ inst->use_stream_buffer = true;
+ }
+ }
+
+ if (inst->use_stream_buffer)
+ vpu_iface_config_stream_buffer(inst, &inst->stream_buffer);
+ vpu_iface_init_instance(inst);
+ vdec->params.udata.base = vdec->udata.phys;
+ vdec->params.udata.size = vdec->udata.length;
+ ret = vpu_iface_set_decode_params(inst, &vdec->params, 0);
+ if (ret) {
+ dev_err(inst->dev, "[%d] set decode params fail\n", inst->id);
+ goto error;
+ }
+
+ vdec_init_params(vdec);
+ ret = vpu_session_start(inst);
+ if (ret) {
+ dev_err(inst->dev, "[%d] start fail\n", inst->id);
+ goto error;
+ }
+
+ vdec_update_state(inst, VPU_CODEC_STATE_STARTED, 0);
+
+ return 0;
+error:
+ vpu_free_dma(&vdec->udata);
+ vpu_free_dma(&inst->stream_buffer);
+ return ret;
+}
+
+static int vdec_start_session(struct vpu_inst *inst, u32 type)
+{
+ struct vdec_t *vdec = inst->priv;
+ int ret = 0;
+
+ if (V4L2_TYPE_IS_OUTPUT(type)) {
+ if (vdec->reset_codec)
+ vdec_stop(inst, false);
+ if (inst->state == VPU_CODEC_STATE_DEINIT) {
+ ret = vdec_start(inst);
+ if (ret)
+ return ret;
+ }
+ }
+
+ if (V4L2_TYPE_IS_OUTPUT(type)) {
+ if (inst->state == VPU_CODEC_STATE_SEEK)
+ vdec_update_state(inst, vdec->state, 1);
+ vdec->eos_received = 0;
+ vpu_process_output_buffer(inst);
+ } else {
+ vdec_cmd_start(inst);
+ }
+ if (inst->state == VPU_CODEC_STATE_ACTIVE)
+ vdec_response_fs_request(inst, false);
+
+ return ret;
+}
+
+static int vdec_stop_session(struct vpu_inst *inst, u32 type)
+{
+ struct vdec_t *vdec = inst->priv;
+
+ if (inst->state == VPU_CODEC_STATE_DEINIT)
+ return 0;
+
+ if (V4L2_TYPE_IS_OUTPUT(type)) {
+ vdec_update_state(inst, VPU_CODEC_STATE_SEEK, 0);
+ vdec->drain = 0;
+ } else {
+ if (inst->state != VPU_CODEC_STATE_DYAMIC_RESOLUTION_CHANGE)
+ vdec_abort(inst);
+
+ vdec->eos_received = 0;
+ vdec_clear_slots(inst);
+ }
+
+ return 0;
+}
+
+static int vdec_get_debug_info(struct vpu_inst *inst, char *str, u32 size, u32 i)
+{
+ struct vdec_t *vdec = inst->priv;
+ int num = -1;
+
+ switch (i) {
+ case 0:
+ num = scnprintf(str, size,
+ "req_frame_count = %d\ninterlaced = %d\n",
+ vdec->req_frame_count,
+ vdec->codec_info.progressive ? 0 : 1);
+ break;
+ case 1:
+ num = scnprintf(str, size,
+ "mbi: size = 0x%x request = %d, alloc = %d, response = %d\n",
+ vdec->mbi.size,
+ vdec->mbi.req_count,
+ vdec->mbi.count,
+ vdec->mbi.index);
+ break;
+ case 2:
+ num = scnprintf(str, size,
+ "dcp: size = 0x%x request = %d, alloc = %d, response = %d\n",
+ vdec->dcp.size,
+ vdec->dcp.req_count,
+ vdec->dcp.count,
+ vdec->dcp.index);
+ break;
+ case 3:
+ num = scnprintf(str, size, "input_frame_count = %d\n", vdec->params.frame_count);
+ break;
+ case 4:
+ num = scnprintf(str, size, "decoded_frame_count = %d\n", vdec->decoded_frame_count);
+ break;
+ case 5:
+ num = scnprintf(str, size, "display_frame_count = %d\n", vdec->display_frame_count);
+ break;
+ case 6:
+ num = scnprintf(str, size, "sequence = %d\n", vdec->sequence);
+ break;
+ case 7:
+ num = scnprintf(str, size, "drain = %d, eos = %d, source_change = %d\n",
+ vdec->drain, vdec->eos_received, vdec->source_change);
+ break;
+ case 8:
+ num = scnprintf(str, size, "ts_pre_count = %d, frame_depth = %d\n",
+ vdec->ts_pre_count, vdec->frame_depth);
+ break;
+ case 9:
+ num = scnprintf(str, size, "fps = %d/%d\n",
+ vdec->codec_info.frame_rate.numerator,
+ vdec->codec_info.frame_rate.denominator);
+ break;
+ case 10:
+ {
+ s64 timestamp = vdec->timestamp;
+ s64 ts_start = vdec->ts_start;
+ s64 ts_input = vdec->ts_input;
+
+ num = scnprintf(str, size, "timestamp = %9lld.%09lld(%9lld.%09lld, %9lld.%09lld)\n",
+ timestamp / NSEC_PER_SEC,
+ timestamp % NSEC_PER_SEC,
+ ts_start / NSEC_PER_SEC,
+ ts_start % NSEC_PER_SEC,
+ ts_input / NSEC_PER_SEC,
+ ts_input % NSEC_PER_SEC);
+ }
+ break;
+ default:
+ break;
+ }
+
+ return num;
+}
+
+static struct vpu_inst_ops vdec_inst_ops = {
+ .ctrl_init = vdec_ctrl_init,
+ .check_ready = vdec_check_ready,
+ .buf_done = vdec_buf_done,
+ .get_one_frame = vdec_frame_decoded,
+ .stop_done = vdec_stop_done,
+ .event_notify = vdec_event_notify,
+ .release = vdec_release,
+ .cleanup = vdec_cleanup,
+ .start = vdec_start_session,
+ .stop = vdec_stop_session,
+ .process_output = vdec_process_output,
+ .process_capture = vdec_process_capture,
+ .on_queue_empty = vdec_on_queue_empty,
+ .get_debug_info = vdec_get_debug_info,
+ .wait_prepare = vpu_inst_unlock,
+ .wait_finish = vpu_inst_lock,
+};
+
+static void vdec_init(struct file *file)
+{
+ struct vpu_inst *inst = to_inst(file);
+ struct vdec_t *vdec;
+ struct v4l2_format f;
+
+ vdec = inst->priv;
+ vdec->frame_depth = VDEC_FRAME_DEPTH;
+ vdec->timestamp = VPU_INVALID_TIMESTAMP;
+ vdec->ts_start = VPU_INVALID_TIMESTAMP;
+ vdec->ts_input = VPU_INVALID_TIMESTAMP;
+
+ memset(&f, 0, sizeof(f));
+ f.type = V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE;
+ f.fmt.pix_mp.pixelformat = V4L2_PIX_FMT_H264;
+ f.fmt.pix_mp.width = 1280;
+ f.fmt.pix_mp.height = 720;
+ f.fmt.pix_mp.field = V4L2_FIELD_NONE;
+ vdec_s_fmt(file, &inst->fh, &f);
+
+ memset(&f, 0, sizeof(f));
+ f.type = V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE;
+ f.fmt.pix_mp.pixelformat = V4L2_PIX_FMT_NV12MT_8L128;
+ f.fmt.pix_mp.width = 1280;
+ f.fmt.pix_mp.height = 720;
+ f.fmt.pix_mp.field = V4L2_FIELD_NONE;
+ vdec_s_fmt(file, &inst->fh, &f);
+}
+
+static int vdec_open(struct file *file)
+{
+ struct vpu_inst *inst;
+ struct vdec_t *vdec;
+ int ret;
+
+ inst = vzalloc(sizeof(*inst));
+ if (!inst)
+ return -ENOMEM;
+
+ vdec = vzalloc(sizeof(*vdec));
+ if (!vdec) {
+ vfree(inst);
+ return -ENOMEM;
+ }
+
+ inst->ops = &vdec_inst_ops;
+ inst->formats = vdec_formats;
+ inst->type = VPU_CORE_TYPE_DEC;
+ inst->priv = vdec;
+
+ ret = vpu_v4l2_open(file, inst);
+ if (ret)
+ return ret;
+
+ vdec->fixed_fmt = false;
+ inst->min_buffer_cap = VDEC_MIN_BUFFER_CAP;
+ vdec_init(file);
+
+ return 0;
+}
+
+static __poll_t vdec_poll(struct file *file, poll_table *wait)
+{
+ struct vpu_inst *inst = to_inst(file);
+ struct vb2_queue *src_q, *dst_q;
+ __poll_t ret;
+
+ ret = v4l2_m2m_fop_poll(file, wait);
+ src_q = v4l2_m2m_get_src_vq(inst->fh.m2m_ctx);
+ dst_q = v4l2_m2m_get_dst_vq(inst->fh.m2m_ctx);
+ if (vb2_is_streaming(src_q) && !vb2_is_streaming(dst_q))
+ ret &= (~EPOLLERR);
+ if (!src_q->error && !dst_q->error &&
+ (vb2_is_streaming(src_q) && list_empty(&src_q->queued_list)) &&
+ (vb2_is_streaming(dst_q) && list_empty(&dst_q->queued_list)))
+ ret &= (~EPOLLERR);
+
+ return ret;
+}
+
+static const struct v4l2_file_operations vdec_fops = {
+ .owner = THIS_MODULE,
+ .open = vdec_open,
+ .release = vpu_v4l2_close,
+ .unlocked_ioctl = video_ioctl2,
+ .poll = vdec_poll,
+ .mmap = v4l2_m2m_fop_mmap,
+};
+
+const struct v4l2_ioctl_ops *vdec_get_ioctl_ops(void)
+{
+ return &vdec_ioctl_ops;
+}
+
+const struct v4l2_file_operations *vdec_get_fops(void)
+{
+ return &vdec_fops;
+}
--
2.33.0
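The frame-store handshake in the decoder above (vdec_event_req_fs(), vdec_response_frame(), and the frame-release event handler) follows one invariant: the firmware asks for frame stores, each queued capture buffer answers exactly one outstanding request and occupies one slot, and a slot becomes reusable only after the firmware releases it. As a reviewing aid, that invariant can be sketched with a small standalone model; all names here are illustrative and are not the driver's:

```c
#include <assert.h>
#include <string.h>

/* Simplified model of the decoder's frame-store slot table.
 * slots[i] mirrors "vdec->slots[i] != NULL"; req_frame_count mirrors
 * the count of outstanding firmware requests.
 */
#define NUM_SLOTS 16

struct slot_model {
	int slots[NUM_SLOTS];
	unsigned int req_frame_count;
};

/* Firmware requested one more frame store (the FRAME_REQ event). */
static void model_req_frame(struct slot_model *m)
{
	m->req_frame_count++;
}

/* A capture buffer answers one outstanding request; mirrors the
 * checks in vdec_response_frame(): answering with no outstanding
 * request, or into a still-occupied slot, is an error.
 */
static int model_respond(struct slot_model *m, unsigned int index)
{
	if (index >= NUM_SLOTS)
		return -1;
	if (!m->req_frame_count)
		return -1;
	if (m->slots[index])
		return -1;	/* the "repeat alloc fs" error path */
	m->slots[index] = 1;
	m->req_frame_count--;
	return 0;
}

/* Firmware released the frame store (the FRAME_RELEASE event). */
static int model_release(struct slot_model *m, unsigned int index)
{
	if (index >= NUM_SLOTS || !m->slots[index])
		return -1;	/* slot already free: nothing to release */
	m->slots[index] = 0;
	return 0;
}
```

The -1 returns correspond to the driver's duplicate-response and double-release error paths.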


2021-11-30 09:50:02

by Ming Qian

[permalink] [raw]
Subject: [PATCH v13 13/13] MAINTAINERS: add AMPHION VPU CODEC V4L2 driver entry

Add AMPHION VPU CODEC V4L2 driver entry

Signed-off-by: Ming Qian <[email protected]>
Signed-off-by: Shijie Qin <[email protected]>
Signed-off-by: Zhou Peng <[email protected]>
---
MAINTAINERS | 9 +++++++++
1 file changed, 9 insertions(+)

diff --git a/MAINTAINERS b/MAINTAINERS
index 8eea24d54624..a20fafc832da 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -13775,6 +13775,15 @@ S: Maintained
F: Documentation/devicetree/bindings/media/nxp,imx8-jpeg.yaml
F: drivers/media/platform/imx-jpeg

+AMPHION VPU CODEC V4L2 DRIVER
+M: Ming Qian <[email protected]>
+M: Shijie Qin <[email protected]>
+M: Zhou Peng <[email protected]>
+L: [email protected]
+S: Maintained
+F: Documentation/devicetree/bindings/media/amphion,vpu.yaml
+F: drivers/media/platform/amphion/
+
NZXT-KRAKEN2 HARDWARE MONITORING DRIVER
M: Jonas Malaco <[email protected]>
L: [email protected]
--
2.33.0


2021-11-30 09:50:04

by Ming Qian

[permalink] [raw]
Subject: [PATCH v13 12/13] firmware: imx: scu-pd: imx8q: add vpu mu resources

The VPU core depends on the MU (messaging unit) resources.
If they are missing, the VPU cannot work.

Signed-off-by: Ming Qian <[email protected]>
Signed-off-by: Shijie Qin <[email protected]>
Signed-off-by: Zhou Peng <[email protected]>
---
drivers/firmware/imx/scu-pd.c | 4 ++++
1 file changed, 4 insertions(+)

diff --git a/drivers/firmware/imx/scu-pd.c b/drivers/firmware/imx/scu-pd.c
index ff6569c4a53b..af3d057e6421 100644
--- a/drivers/firmware/imx/scu-pd.c
+++ b/drivers/firmware/imx/scu-pd.c
@@ -155,6 +155,10 @@ static const struct imx_sc_pd_range imx8qxp_scu_pd_ranges[] = {
{ "vpu-pid", IMX_SC_R_VPU_PID0, 8, true, 0 },
{ "vpu-dec0", IMX_SC_R_VPU_DEC_0, 1, false, 0 },
{ "vpu-enc0", IMX_SC_R_VPU_ENC_0, 1, false, 0 },
+ { "vpu-enc1", IMX_SC_R_VPU_ENC_1, 1, false, 0 },
+ { "vpu-mu0", IMX_SC_R_VPU_MU_0, 1, false, 0 },
+ { "vpu-mu1", IMX_SC_R_VPU_MU_1, 1, false, 0 },
+ { "vpu-mu2", IMX_SC_R_VPU_MU_2, 1, false, 0 },

/* GPU SS */
{ "gpu0-pid", IMX_SC_R_GPU_0_PID0, 4, true, 0 },
--
2.33.0
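Going back to the vdec patch earlier in this series, the OUTPUT-queue back-pressure is another small state machine worth having in front of you while reviewing: each queued OUTPUT frame increments ts_pre_count, each decode event consumes consumed_count clamped at zero (as in vdec_frame_decoded()), and vdec_check_ready() stops accepting input once ts_pre_count reaches frame_depth. A minimal sketch of that accounting, with illustrative names rather than the driver's:

```c
#include <assert.h>

/* Model of the decoder's pending-timestamp accounting. */
struct ts_model {
	unsigned int ts_pre_count;	/* frames queued but not decoded */
	unsigned int frame_depth;	/* back-pressure threshold */
};

/* Mirrors the frame_depth check in vdec_check_ready(). */
static int ts_ready_for_input(const struct ts_model *m)
{
	return m->ts_pre_count < m->frame_depth;
}

/* Mirrors the increment in vdec_process_output(). */
static void ts_input_frame(struct ts_model *m)
{
	m->ts_pre_count++;
}

/* Mirrors the clamped subtraction in vdec_frame_decoded():
 * over-consumption never underflows the counter.
 */
static void ts_frame_decoded(struct ts_model *m, unsigned int consumed)
{
	if (m->ts_pre_count >= consumed)
		m->ts_pre_count -= consumed;
	else
		m->ts_pre_count = 0;
}
```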


2021-11-30 09:50:09

by Ming Qian

[permalink] [raw]
Subject: [PATCH v13 09/13] media: amphion: implement windsor encoder rpc interface

This part implements the windsor encoder rpc interface.

Signed-off-by: Ming Qian <[email protected]>
Signed-off-by: Shijie Qin <[email protected]>
Signed-off-by: Zhou Peng <[email protected]>
Reported-by: kernel test robot <[email protected]>
---
drivers/media/platform/amphion/vpu_windsor.c | 1222 ++++++++++++++++++
drivers/media/platform/amphion/vpu_windsor.h | 39 +
2 files changed, 1261 insertions(+)
create mode 100644 drivers/media/platform/amphion/vpu_windsor.c
create mode 100644 drivers/media/platform/amphion/vpu_windsor.h

diff --git a/drivers/media/platform/amphion/vpu_windsor.c b/drivers/media/platform/amphion/vpu_windsor.c
new file mode 100644
index 000000000000..21e14232e7b4
--- /dev/null
+++ b/drivers/media/platform/amphion/vpu_windsor.c
@@ -0,0 +1,1222 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright 2020-2021 NXP
+ */
+
+#include <linux/init.h>
+#include <linux/interconnect.h>
+#include <linux/ioctl.h>
+#include <linux/list.h>
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/of_device.h>
+#include <linux/of_address.h>
+#include <linux/platform_device.h>
+#include <media/videobuf2-v4l2.h>
+#include <media/videobuf2-dma-contig.h>
+#include "vpu.h"
+#include "vpu_rpc.h"
+#include "vpu_defs.h"
+#include "vpu_helpers.h"
+#include "vpu_cmds.h"
+#include "vpu_v4l2.h"
+#include "vpu_imx8q.h"
+#include "vpu_windsor.h"
+
+#define CMD_SIZE 2560
+#define MSG_SIZE 25600
+#define WINDSOR_USER_DATA_WORDS 16
+#define WINDSOR_MAX_SRC_FRAMES 0x6
+#define WINDSOR_MAX_REF_FRAMES 0x3
+#define WINDSOR_BITRATE_UNIT 1024
+#define WINDSOR_H264_EXTENDED_SAR 255
+
+enum {
+ GTB_ENC_CMD_NOOP = 0x0,
+ GTB_ENC_CMD_STREAM_START,
+ GTB_ENC_CMD_FRAME_ENCODE,
+ GTB_ENC_CMD_FRAME_SKIP,
+ GTB_ENC_CMD_STREAM_STOP,
+ GTB_ENC_CMD_PARAMETER_UPD,
+ GTB_ENC_CMD_TERMINATE,
+ GTB_ENC_CMD_SNAPSHOT,
+ GTB_ENC_CMD_ROLL_SNAPSHOT,
+ GTB_ENC_CMD_LOCK_SCHEDULER,
+ GTB_ENC_CMD_UNLOCK_SCHEDULER,
+ GTB_ENC_CMD_CONFIGURE_CODEC,
+ GTB_ENC_CMD_DEAD_MARK,
+ GTB_ENC_CMD_FIRM_RESET,
+ GTB_ENC_CMD_FW_STATUS,
+ GTB_ENC_CMD_RESERVED
+};
+
+enum {
+ VID_API_EVENT_UNDEFINED = 0x0,
+ VID_API_ENC_EVENT_RESET_DONE = 0x1,
+ VID_API_ENC_EVENT_START_DONE,
+ VID_API_ENC_EVENT_STOP_DONE,
+ VID_API_ENC_EVENT_TERMINATE_DONE,
+ VID_API_ENC_EVENT_FRAME_INPUT_DONE,
+ VID_API_ENC_EVENT_FRAME_DONE,
+ VID_API_ENC_EVENT_FRAME_RELEASE,
+ VID_API_ENC_EVENT_PARA_UPD_DONE,
+ VID_API_ENC_EVENT_MEM_REQUEST,
+ VID_API_ENC_EVENT_FIRMWARE_XCPT,
+ VID_API_ENC_EVENT_RESERVED
+};
+
+enum {
+ MEDIAIP_ENC_PIC_TYPE_B_FRAME = 0,
+ MEDIAIP_ENC_PIC_TYPE_P_FRAME,
+ MEDIAIP_ENC_PIC_TYPE_I_FRAME,
+ MEDIAIP_ENC_PIC_TYPE_IDR_FRAME,
+ MEDIAIP_ENC_PIC_TYPE_BI_FRAME
+};
+
+struct windsor_iface {
+ u32 exec_base_addr;
+ u32 exec_area_size;
+ struct vpu_rpc_buffer_desc cmd_buffer_desc;
+ struct vpu_rpc_buffer_desc msg_buffer_desc;
+ u32 cmd_int_enable[VID_API_NUM_STREAMS];
+ u32 fw_version;
+ u32 mvd_fw_offset;
+ u32 max_streams;
+ u32 ctrl_iface[VID_API_NUM_STREAMS];
+ struct vpu_rpc_system_config system_config;
+ u32 api_version;
+ struct vpu_rpc_buffer_desc log_buffer_desc;
+};
+
+struct windsor_ctrl_iface {
+ u32 enc_yuv_buffer_desc;
+ u32 enc_stream_buffer_desc;
+ u32 enc_expert_mode_param;
+ u32 enc_param;
+ u32 enc_mem_pool;
+ u32 enc_encoding_status;
+ u32 enc_dsa_status;
+};
+
+struct vpu_enc_yuv_desc {
+ u32 frame_id;
+ u32 luma_base;
+ u32 chroma_base;
+ u32 param_idx;
+ u32 key_frame;
+};
+
+struct vpu_enc_calib_params {
+ u32 use_ame;
+
+ u32 cme_mvx_max;
+ u32 cme_mvy_max;
+ u32 ame_prefresh_y0;
+ u32 ame_prefresh_y1;
+ u32 fme_min_sad;
+ u32 cme_min_sad;
+
+ u32 fme_pred_int_weight;
+ u32 fme_pred_hp_weight;
+ u32 fme_pred_qp_weight;
+ u32 fme_cost_weight;
+ u32 fme_act_thold;
+ u32 fme_sad_thold;
+ u32 fme_zero_sad_thold;
+
+ u32 fme_lrg_mvx_lmt;
+ u32 fme_lrg_mvy_lmt;
+ u32 fme_force_mode;
+ u32 fme_force4mvcost;
+ u32 fme_force2mvcost;
+
+ u32 h264_inter_thrd;
+
+ u32 i16x16_mode_cost;
+ u32 i4x4_mode_lambda;
+ u32 i8x8_mode_lambda;
+
+ u32 inter_mod_mult;
+ u32 inter_sel_mult;
+ u32 inter_bid_cost;
+ u32 inter_bwd_cost;
+ u32 inter_4mv_cost;
+ s32 one_mv_i16_cost;
+ s32 one_mv_i4x4_cost;
+ s32 one_mv_i8x8_cost;
+ s32 two_mv_i16_cost;
+ s32 two_mv_i4x4_cost;
+ s32 two_mv_i8x8_cost;
+ s32 four_mv_i16_cost;
+ s32 four_mv_i4x4_cost;
+ s32 four_mv_i8x8_cost;
+
+ u32 intra_pred_enab;
+ u32 intra_chr_pred;
+ u32 intra16_pred;
+ u32 intra4x4_pred;
+ u32 intra8x8_pred;
+
+ u32 cb_base;
+ u32 cb_size;
+ u32 cb_head_room;
+
+ u32 mem_page_width;
+ u32 mem_page_height;
+ u32 mem_total_size;
+ u32 mem_chunk_phys_addr;
+ u32 mem_chunk_virt_addr;
+ u32 mem_chunk_size;
+ u32 mem_y_stride;
+ u32 mem_uv_stride;
+
+ u32 split_wr_enab;
+ u32 split_wr_req_size;
+ u32 split_rd_enab;
+ u32 split_rd_req_size;
+};
+
+struct vpu_enc_config_params {
+ u32 param_change;
+ u32 start_frame;
+ u32 end_frame;
+ u32 userdata_enable;
+ u32 userdata_id[4];
+ u32 userdata_message[WINDSOR_USER_DATA_WORDS];
+ u32 userdata_length;
+ u32 h264_profile_idc;
+ u32 h264_level_idc;
+ u32 h264_au_delimiter;
+ u32 h264_seq_end_code;
+ u32 h264_recovery_points;
+ u32 h264_vui_parameters;
+ u32 h264_aspect_ratio_present;
+ u32 h264_aspect_ratio_sar_width;
+ u32 h264_aspect_ratio_sar_height;
+ u32 h264_overscan_present;
+ u32 h264_video_type_present;
+ u32 h264_video_format;
+ u32 h264_video_full_range;
+ u32 h264_video_colour_descriptor;
+ u32 h264_video_colour_primaries;
+ u32 h264_video_transfer_char;
+ u32 h264_video_matrix_coeff;
+ u32 h264_chroma_loc_info_present;
+ u32 h264_chroma_loc_type_top;
+ u32 h264_chroma_loc_type_bot;
+ u32 h264_timing_info_present;
+ u32 h264_buffering_period_present;
+ u32 h264_low_delay_hrd_flag;
+ u32 aspect_ratio;
+ u32 test_mode; // Automated firmware test mode
+ u32 dsa_test_mode; // Automated test mode for the DSA
+ u32 fme_test_mode; // Automated test mode for the FME
+ u32 cbr_row_mode; // 0: FW mode; 1: HW mode
+ u32 windsor_mode; // 0: normal mode; 1: intra only mode; 2: intra+0MV mode
+ u32 encode_mode; // H264, VC1, MPEG2, DIVX
+ u32 frame_width; // display width
+ u32 frame_height; // display height
+ u32 enc_frame_width; // encoding width, should be 16-pixel aligned
+ u32 enc_frame_height; // encoding height, should be 16-pixel aligned
+ u32 frame_rate_num;
+ u32 frame_rate_den;
+ u32 vi_field_source;
+ u32 vi_frame_width;
+ u32 vi_frame_height;
+ u32 crop_frame_width;
+ u32 crop_frame_height;
+ u32 crop_x_start_posn;
+ u32 crop_y_start_posn;
+ u32 mode422;
+ u32 mode_yuy2;
+ u32 dsa_luma_en;
+ u32 dsa_chroma_en;
+ u32 dsa_ext_hfilt_en;
+ u32 dsa_di_en;
+ u32 dsa_di_top_ref;
+ u32 dsa_vertf_disable;
+ u32 dsa_disable_pwb;
+ u32 dsa_hor_phase;
+ u32 dsa_ver_phase;
+ u32 dsa_iac_enable;
+ u32 iac_sc_threshold;
+ u32 iac_vm_threshold;
+ u32 iac_skip_mode;
+ u32 iac_grp_width;
+ u32 iac_grp_height;
+ u32 rate_control_mode;
+ u32 rate_control_resolution;
+ u32 buffer_size;
+ u32 buffer_level_init;
+ u32 buffer_I_bit_budget;
+ u32 top_field_first;
+ u32 intra_lum_qoffset;
+ u32 intra_chr_qoffset;
+ u32 inter_lum_qoffset;
+ u32 inter_chr_qoffset;
+ u32 use_def_scaling_mtx;
+ u32 inter_8x8_enab;
+ u32 inter_4x4_enab;
+ u32 fme_enable_qpel;
+ u32 fme_enable_hpel;
+ u32 fme_nozeromv;
+ u32 fme_predmv_en;
+ u32 fme_pred_2mv4mv;
+ u32 fme_smallsadthresh;
+ u32 ame_en_lmvc;
+ u32 ame_x_mult;
+ u32 cme_enable_4mv;
+ u32 cme_enable_1mv;
+ u32 hme_enable_16x8mv;
+ u32 hme_enable_8x16mv;
+ u32 cme_mv_weight;
+ u32 cme_mv_cost;
+ u32 ame_mult_mv;
+ u32 ame_shift_mv;
+ u32 hme_forceto1mv_en;
+ u32 hme_2mv_cost;
+ u32 hme_pred_mode;
+ u32 hme_sc_rnge;
+ u32 hme_sw_rnge;
+ u32 output_format;
+ u32 timestamp_enab;
+ u32 initial_pts_enab;
+ u32 initial_pts;
+};
+
+struct vpu_enc_static_params {
+ u32 param_change;
+ u32 gop_length;
+ u32 rate_control_bitrate;
+ u32 rate_control_bitrate_min;
+ u32 rate_control_bitrate_max;
+ u32 rate_control_content_models;
+ u32 rate_control_iframe_maxsize;
+ u32 rate_control_qp_init;
+ u32 rate_control_islice_qp;
+ u32 rate_control_pslice_qp;
+ u32 rate_control_bslice_qp;
+ u32 adaptive_quantization;
+ u32 aq_variance;
+ u32 cost_optimization;
+ u32 fdlp_mode;
+ u32 enable_isegbframes;
+ u32 enable_adaptive_keyratio;
+ u32 keyratio_imin;
+ u32 keyratio_imax;
+ u32 keyratio_pmin;
+ u32 keyratio_pmax;
+ u32 keyratio_bmin;
+ u32 keyratio_bmax;
+ s32 keyratio_istep;
+ s32 keyratio_pstep;
+ s32 keyratio_bstep;
+ u32 enable_paff;
+ u32 enable_b_frame_ref;
+ u32 enable_adaptive_gop;
+ u32 enable_closed_gop;
+ u32 open_gop_refresh_freq;
+ u32 enable_adaptive_sc;
+ u32 enable_fade_detection;
+ s32 fade_detection_threshold;
+ u32 enable_repeat_b;
+ u32 enable_low_delay_b;
+};
+
+struct vpu_enc_dynamic_params {
+ u32 param_change;
+ u32 rows_per_slice;
+ u32 mbaff_enable;
+ u32 dbf_enable;
+ u32 field_source;
+ u32 gop_b_length;
+ u32 mb_group_size;
+ u32 cbr_rows_per_group;
+ u32 skip_enable;
+ u32 pts_bits_0_to_31;
+ u32 pts_bit_32;
+ u32 rm_expsv_cff;
+ u32 const_ipred;
+ s32 chr_qp_offset;
+ u32 intra_mb_qp_offset;
+ u32 h264_cabac_init_method;
+ u32 h264_cabac_init_idc;
+ u32 h264_cabac_enable;
+ s32 alpha_c0_offset_div2;
+ s32 beta_offset_div2;
+ u32 intra_prefresh_y0;
+ u32 intra_prefresh_y1;
+ u32 dbg_dump_rec_src;
+};
+
+struct vpu_enc_expert_mode_param {
+ struct vpu_enc_calib_params calib_param;
+ struct vpu_enc_config_params config_param;
+ struct vpu_enc_static_params static_param;
+ struct vpu_enc_dynamic_params dynamic_param;
+};
+
+enum MEDIAIP_ENC_FMT {
+ MEDIAIP_ENC_FMT_H264 = 0,
+ MEDIAIP_ENC_FMT_VC1,
+ MEDIAIP_ENC_FMT_MPEG2,
+ MEDIAIP_ENC_FMT_MPEG4SP,
+ MEDIAIP_ENC_FMT_H263,
+ MEDIAIP_ENC_FMT_MPEG1,
+ MEDIAIP_ENC_FMT_SHORT_HEADER,
+ MEDIAIP_ENC_FMT_NULL
+};
+
+enum MEDIAIP_ENC_PROFILE {
+ MEDIAIP_ENC_PROF_MPEG2_SP = 0,
+ MEDIAIP_ENC_PROF_MPEG2_MP,
+ MEDIAIP_ENC_PROF_MPEG2_HP,
+ MEDIAIP_ENC_PROF_H264_BP,
+ MEDIAIP_ENC_PROF_H264_MP,
+ MEDIAIP_ENC_PROF_H264_HP,
+ MEDIAIP_ENC_PROF_MPEG4_SP,
+ MEDIAIP_ENC_PROF_MPEG4_ASP,
+ MEDIAIP_ENC_PROF_VC1_SP,
+ MEDIAIP_ENC_PROF_VC1_MP,
+ MEDIAIP_ENC_PROF_VC1_AP
+};
+
+enum MEDIAIP_ENC_BITRATE_MODE {
+ MEDIAIP_ENC_BITRATE_MODE_VBR = 0x00000001,
+ MEDIAIP_ENC_BITRATE_MODE_CBR = 0x00000002,
+ MEDIAIP_ENC_BITRATE_MODE_CONSTANT_QP = 0x00000004
+};
+
+struct vpu_enc_memory_resource {
+ u32 phys;
+ u32 virt;
+ u32 size;
+};
+
+struct vpu_enc_param {
+ enum MEDIAIP_ENC_FMT codec_mode;
+ enum MEDIAIP_ENC_PROFILE profile;
+ u32 level;
+
+ struct vpu_enc_memory_resource enc_mem_desc;
+
+ u32 frame_rate;
+ u32 src_stride;
+ u32 src_width;
+ u32 src_height;
+ u32 src_offset_x;
+ u32 src_offset_y;
+ u32 src_crop_width;
+ u32 src_crop_height;
+ u32 out_width;
+ u32 out_height;
+ u32 iframe_interval;
+ u32 bframes;
+ u32 low_latency_mode;
+
+ enum MEDIAIP_ENC_BITRATE_MODE bitrate_mode;
+ u32 target_bitrate;
+ u32 max_bitrate;
+ u32 min_bitrate;
+ u32 init_slice_qp;
+};
+
+struct vpu_enc_mem_pool {
+ struct vpu_enc_memory_resource enc_frames[WINDSOR_MAX_SRC_FRAMES];
+ struct vpu_enc_memory_resource ref_frames[WINDSOR_MAX_REF_FRAMES];
+ struct vpu_enc_memory_resource act_frame;
+};
+
+struct vpu_enc_encoding_status {
+ u32 frame_id;
+ u32 error_flag; // Error type
+ u32 mb_y;
+ u32 mb_x;
+ u32 reserved[12];
+};
+
+struct vpu_enc_dsa_status {
+ u32 frame_id;
+ u32 dsa_cycle;
+ u32 mb_y;
+ u32 mb_x;
+ u32 reserved[4];
+};
+
+struct vpu_enc_ctrl {
+ struct vpu_enc_yuv_desc *yuv_desc;
+ struct vpu_rpc_buffer_desc *stream_desc;
+ struct vpu_enc_expert_mode_param *expert;
+ struct vpu_enc_param *param;
+ struct vpu_enc_mem_pool *pool;
+ struct vpu_enc_encoding_status *status;
+ struct vpu_enc_dsa_status *dsa;
+};
+
+struct vpu_enc_host_ctrls {
+ struct vpu_enc_ctrl ctrls[VID_API_NUM_STREAMS];
+};
+
+struct windsor_pic_info {
+ u32 frame_id;
+ u32 pic_encod_done;
+ u32 pic_type;
+ u32 skipped_frame;
+ u32 error_flag;
+ u32 psnr;
+ u32 flush_done;
+ u32 mb_y;
+ u32 mb_x;
+ u32 frame_size;
+ u32 frame_enc_ttl_cycles;
+ u32 frame_enc_ttl_frm_cycles;
+ u32 frame_enc_ttl_slc_cycles;
+ u32 frame_enc_ttl_enc_cycles;
+ u32 frame_enc_ttl_hme_cycles;
+ u32 frame_enc_ttl_dsa_cycles;
+ u32 frame_enc_fw_cycles;
+ u32 frame_crc;
+ u32 num_interrupts_1;
+ u32 num_interrupts_2;
+ u32 poc;
+ u32 ref_info;
+ u32 pic_num;
+ u32 pic_activity;
+ u32 scene_change;
+ u32 mb_stats;
+ u32 enc_cache_count0;
+ u32 enc_cache_count1;
+ u32 mtl_wr_strb_cnt;
+ u32 mtl_rd_strb_cnt;
+ u32 str_buff_wptr;
+ u32 diagnosticEvents;
+ u32 proc_iacc_tot_rd_cnt;
+ u32 proc_dacc_tot_rd_cnt;
+ u32 proc_dacc_tot_wr_cnt;
+ u32 proc_dacc_reg_rd_cnt;
+ u32 proc_dacc_reg_wr_cnt;
+ u32 proc_dacc_rng_rd_cnt;
+ u32 proc_dacc_rng_wr_cnt;
+ s32 tv_s;
+ u32 tv_ns;
+};
+
+u32 vpu_windsor_get_data_size(void)
+{
+ return sizeof(struct vpu_enc_host_ctrls);
+}
+
+static struct vpu_enc_yuv_desc *get_yuv_desc(struct vpu_shared_addr *shared,
+ u32 instance)
+{
+ struct vpu_enc_host_ctrls *hcs = shared->priv;
+
+ return hcs->ctrls[instance].yuv_desc;
+}
+
+static struct vpu_enc_mem_pool *get_mem_pool(struct vpu_shared_addr *shared,
+ u32 instance)
+{
+ struct vpu_enc_host_ctrls *hcs = shared->priv;
+
+ return hcs->ctrls[instance].pool;
+}
+
+static struct vpu_rpc_buffer_desc *get_stream_buf_desc(struct vpu_shared_addr *shared,
+ u32 instance)
+{
+ struct vpu_enc_host_ctrls *hcs = shared->priv;
+
+ return hcs->ctrls[instance].stream_desc;
+}
+
+static struct vpu_enc_expert_mode_param *get_expert_param(struct vpu_shared_addr *shared,
+ u32 instance)
+{
+ struct vpu_enc_host_ctrls *hcs = shared->priv;
+
+ return hcs->ctrls[instance].expert;
+}
+
+static struct vpu_enc_param *get_enc_param(struct vpu_shared_addr *shared,
+ u32 instance)
+{
+ struct vpu_enc_host_ctrls *hcs = shared->priv;
+
+ return hcs->ctrls[instance].param;
+}
+
+static u32 get_ptr(u32 ptr)
+{
+ return (ptr | 0x80000000);
+}
+
+void vpu_windsor_init_rpc(struct vpu_shared_addr *shared,
+ struct vpu_buffer *rpc, dma_addr_t boot_addr)
+{
+ unsigned long base_phy_addr;
+ unsigned long phy_addr;
+ unsigned long offset;
+ struct windsor_iface *iface;
+ struct windsor_ctrl_iface *ctrl;
+ struct vpu_enc_host_ctrls *hcs;
+ unsigned int i;
+
+ WARN_ON(!shared || !shared->priv);
+ WARN_ON(!rpc || !rpc->phys || !rpc->length || rpc->phys < boot_addr);
+
+ base_phy_addr = rpc->phys - boot_addr;
+ iface = rpc->virt;
+ shared->iface = iface;
+ shared->boot_addr = boot_addr;
+ hcs = shared->priv;
+
+ iface->exec_base_addr = base_phy_addr;
+ iface->exec_area_size = rpc->length;
+
+ offset = sizeof(struct windsor_iface);
+ phy_addr = base_phy_addr + offset;
+ shared->cmd_desc = &iface->cmd_buffer_desc;
+ shared->cmd_mem_vir = rpc->virt + offset;
+ iface->cmd_buffer_desc.start =
+ iface->cmd_buffer_desc.rptr =
+ iface->cmd_buffer_desc.wptr = phy_addr;
+ iface->cmd_buffer_desc.end = iface->cmd_buffer_desc.start + CMD_SIZE;
+
+ offset += CMD_SIZE;
+ phy_addr = base_phy_addr + offset;
+ shared->msg_desc = &iface->msg_buffer_desc;
+ shared->msg_mem_vir = rpc->virt + offset;
+ iface->msg_buffer_desc.start =
+ iface->msg_buffer_desc.wptr =
+ iface->msg_buffer_desc.rptr = phy_addr;
+ iface->msg_buffer_desc.end = iface->msg_buffer_desc.start + MSG_SIZE;
+
+ offset += MSG_SIZE;
+ for (i = 0; i < ARRAY_SIZE(iface->ctrl_iface); i++) {
+ iface->ctrl_iface[i] = base_phy_addr + offset;
+ offset += sizeof(struct windsor_ctrl_iface);
+ }
+ for (i = 0; i < ARRAY_SIZE(iface->ctrl_iface); i++) {
+ ctrl = rpc->virt + (iface->ctrl_iface[i] - base_phy_addr);
+
+ ctrl->enc_yuv_buffer_desc = base_phy_addr + offset;
+ hcs->ctrls[i].yuv_desc = rpc->virt + offset;
+ offset += sizeof(struct vpu_enc_yuv_desc);
+
+ ctrl->enc_stream_buffer_desc = base_phy_addr + offset;
+ hcs->ctrls[i].stream_desc = rpc->virt + offset;
+ offset += sizeof(struct vpu_rpc_buffer_desc);
+
+ ctrl->enc_expert_mode_param = base_phy_addr + offset;
+ hcs->ctrls[i].expert = rpc->virt + offset;
+ offset += sizeof(struct vpu_enc_expert_mode_param);
+
+ ctrl->enc_param = base_phy_addr + offset;
+ hcs->ctrls[i].param = rpc->virt + offset;
+ offset += sizeof(struct vpu_enc_param);
+
+ ctrl->enc_mem_pool = base_phy_addr + offset;
+ hcs->ctrls[i].pool = rpc->virt + offset;
+ offset += sizeof(struct vpu_enc_mem_pool);
+
+ ctrl->enc_encoding_status = base_phy_addr + offset;
+ hcs->ctrls[i].status = rpc->virt + offset;
+ offset += sizeof(struct vpu_enc_encoding_status);
+
+ ctrl->enc_dsa_status = base_phy_addr + offset;
+ hcs->ctrls[i].dsa = rpc->virt + offset;
+ offset += sizeof(struct vpu_enc_dsa_status);
+ }
+
+ rpc->bytesused = offset;
+}
+
+void vpu_windsor_set_log_buf(struct vpu_shared_addr *shared,
+ struct vpu_buffer *log)
+{
+ struct windsor_iface *iface;
+
+ WARN_ON(!shared || !log || !log->phys);
+
+ iface = shared->iface;
+ iface->log_buffer_desc.start =
+ iface->log_buffer_desc.wptr =
+ iface->log_buffer_desc.rptr = log->phys - shared->boot_addr;
+ iface->log_buffer_desc.end = iface->log_buffer_desc.start + log->length;
+}
+
+void vpu_windsor_set_system_cfg(struct vpu_shared_addr *shared,
+ u32 regs_base, void __iomem *regs, u32 core_id)
+{
+ struct windsor_iface *iface;
+ struct vpu_rpc_system_config *config;
+
+ WARN_ON(!shared || !shared->iface);
+
+ iface = shared->iface;
+ config = &iface->system_config;
+
+ vpu_imx8q_set_system_cfg_common(config, regs_base, core_id);
+}
+
+int vpu_windsor_get_stream_buffer_size(struct vpu_shared_addr *shared)
+{
+ return 0x300000;
+}
+
+static struct vpu_pair windsor_cmds[] = {
+ {VPU_CMD_ID_CONFIGURE_CODEC, GTB_ENC_CMD_CONFIGURE_CODEC},
+ {VPU_CMD_ID_START, GTB_ENC_CMD_STREAM_START},
+ {VPU_CMD_ID_STOP, GTB_ENC_CMD_STREAM_STOP},
+ {VPU_CMD_ID_FRAME_ENCODE, GTB_ENC_CMD_FRAME_ENCODE},
+ {VPU_CMD_ID_SNAPSHOT, GTB_ENC_CMD_SNAPSHOT},
+ {VPU_CMD_ID_FIRM_RESET, GTB_ENC_CMD_FIRM_RESET},
+ {VPU_CMD_ID_UPDATE_PARAMETER, GTB_ENC_CMD_PARAMETER_UPD},
+ {VPU_CMD_ID_DEBUG, GTB_ENC_CMD_FW_STATUS}
+};
+
+static struct vpu_pair windsor_msgs[] = {
+ {VPU_MSG_ID_RESET_DONE, VID_API_ENC_EVENT_RESET_DONE},
+ {VPU_MSG_ID_START_DONE, VID_API_ENC_EVENT_START_DONE},
+ {VPU_MSG_ID_STOP_DONE, VID_API_ENC_EVENT_STOP_DONE},
+ {VPU_MSG_ID_FRAME_INPUT_DONE, VID_API_ENC_EVENT_FRAME_INPUT_DONE},
+ {VPU_MSG_ID_ENC_DONE, VID_API_ENC_EVENT_FRAME_DONE},
+ {VPU_MSG_ID_FRAME_RELEASE, VID_API_ENC_EVENT_FRAME_RELEASE},
+ {VPU_MSG_ID_MEM_REQUEST, VID_API_ENC_EVENT_MEM_REQUEST},
+ {VPU_MSG_ID_PARAM_UPD_DONE, VID_API_ENC_EVENT_PARA_UPD_DONE},
+ {VPU_MSG_ID_FIRMWARE_XCPT, VID_API_ENC_EVENT_FIRMWARE_XCPT},
+};
+
+int vpu_windsor_pack_cmd(struct vpu_rpc_event *pkt, u32 index, u32 id, void *data)
+{
+ int ret;
+ s64 timestamp;
+
+ WARN_ON(!pkt);
+
+ ret = vpu_find_dst_by_src(windsor_cmds, ARRAY_SIZE(windsor_cmds), id);
+ if (ret < 0)
+ return ret;
+ pkt->hdr.id = ret;
+ pkt->hdr.num = 0;
+ pkt->hdr.index = index;
+ if (id == VPU_CMD_ID_FRAME_ENCODE) {
+ pkt->hdr.num = 2;
+ timestamp = *(s64 *)data;
+ if (timestamp < 0) {
+ pkt->data[0] = (u32)-1;
+ pkt->data[1] = 0;
+ } else {
+ pkt->data[0] = timestamp / NSEC_PER_SEC;
+ pkt->data[1] = timestamp % NSEC_PER_SEC;
+ }
+ }
+
+ return 0;
+}
+
+int vpu_windsor_convert_msg_id(u32 id)
+{
+ return vpu_find_src_by_dst(windsor_msgs, ARRAY_SIZE(windsor_msgs), id);
+}
+
+static void vpu_windsor_unpack_pic_info(struct vpu_rpc_event *pkt, void *data)
+{
+ struct vpu_enc_pic_info *info = data;
+ struct windsor_pic_info *windsor = (struct windsor_pic_info *)pkt->data;
+
+ info->frame_id = windsor->frame_id;
+ switch (windsor->pic_type) {
+ case MEDIAIP_ENC_PIC_TYPE_I_FRAME:
+ case MEDIAIP_ENC_PIC_TYPE_IDR_FRAME:
+ info->pic_type = V4L2_BUF_FLAG_KEYFRAME;
+ break;
+ case MEDIAIP_ENC_PIC_TYPE_P_FRAME:
+ info->pic_type = V4L2_BUF_FLAG_PFRAME;
+ break;
+ case MEDIAIP_ENC_PIC_TYPE_B_FRAME:
+ info->pic_type = V4L2_BUF_FLAG_BFRAME;
+ break;
+ default:
+ break;
+ }
+ info->skipped_frame = windsor->skipped_frame;
+ info->error_flag = windsor->error_flag;
+ info->psnr = windsor->psnr;
+ info->frame_size = windsor->frame_size;
+ info->wptr = get_ptr(windsor->str_buff_wptr);
+ info->crc = windsor->frame_crc;
+ info->timestamp = MAKE_TIMESTAMP(windsor->tv_s, windsor->tv_ns);
+}
+
+static void vpu_windsor_unpack_mem_req(struct vpu_rpc_event *pkt, void *data)
+{
+ struct vpu_pkt_mem_req_data *req_data = data;
+
+ req_data->enc_frame_size = pkt->data[0];
+ req_data->enc_frame_num = pkt->data[1];
+ req_data->ref_frame_size = pkt->data[2];
+ req_data->ref_frame_num = pkt->data[3];
+ req_data->act_buf_size = pkt->data[4];
+ req_data->act_buf_num = 1;
+}
+
+int vpu_windsor_unpack_msg_data(struct vpu_rpc_event *pkt, void *data)
+{
+ if (!pkt || !data)
+ return -EINVAL;
+
+ switch (pkt->hdr.id) {
+ case VID_API_ENC_EVENT_FRAME_DONE:
+ vpu_windsor_unpack_pic_info(pkt, data);
+ break;
+ case VID_API_ENC_EVENT_MEM_REQUEST:
+ vpu_windsor_unpack_mem_req(pkt, data);
+ break;
+ case VID_API_ENC_EVENT_FRAME_RELEASE:
+ *(u32 *)data = pkt->data[0];
+ break;
+ default:
+ break;
+ }
+
+ return 0;
+}
+
+static int vpu_windsor_fill_yuv_frame(struct vpu_shared_addr *shared,
+ u32 instance,
+ struct vb2_buffer *vb)
+{
+ struct vpu_enc_yuv_desc *desc;
+ struct vb2_v4l2_buffer *vbuf;
+
+ WARN_ON(!shared || !vb || instance >= VID_API_NUM_STREAMS);
+
+ desc = get_yuv_desc(shared, instance);
+
+ vbuf = to_vb2_v4l2_buffer(vb);
+ desc->frame_id = vbuf->sequence;
+ if (vbuf->flags & V4L2_BUF_FLAG_KEYFRAME)
+ desc->key_frame = 1;
+ else
+ desc->key_frame = 0;
+ desc->luma_base = vpu_get_vb_phy_addr(vb, 0);
+ desc->chroma_base = vpu_get_vb_phy_addr(vb, 1);
+
+ return 0;
+}
+
+int vpu_windsor_input_frame(struct vpu_shared_addr *shared,
+ struct vpu_inst *inst, struct vb2_buffer *vb)
+{
+ vpu_windsor_fill_yuv_frame(shared, inst->id, vb);
+ return vpu_session_encode_frame(inst, vb->timestamp);
+}
+
+int vpu_windsor_config_memory_resource(struct vpu_shared_addr *shared,
+ u32 instance,
+ u32 type,
+ u32 index,
+ struct vpu_buffer *buf)
+{
+ struct vpu_enc_mem_pool *pool;
+ struct vpu_enc_memory_resource *res;
+
+ WARN_ON(!shared || !buf || instance >= VID_API_NUM_STREAMS);
+
+ pool = get_mem_pool(shared, instance);
+
+ switch (type) {
+ case MEM_RES_ENC:
+ res = &pool->enc_frames[index];
+ break;
+ case MEM_RES_REF:
+ res = &pool->ref_frames[index];
+ break;
+ case MEM_RES_ACT:
+ res = &pool->act_frame;
+ break;
+ default:
+ return -EINVAL;
+ }
+
+ res->phys = buf->phys;
+ res->virt = buf->phys - shared->boot_addr;
+ res->size = buf->length;
+
+ return 0;
+}
+
+int vpu_windsor_config_stream_buffer(struct vpu_shared_addr *shared,
+ u32 instance,
+ struct vpu_buffer *buf)
+{
+ struct vpu_rpc_buffer_desc *desc;
+ struct vpu_enc_expert_mode_param *expert;
+
+ desc = get_stream_buf_desc(shared, instance);
+ expert = get_expert_param(shared, instance);
+
+ desc->start = desc->wptr = desc->rptr = buf->phys;
+ desc->end = buf->phys + buf->length;
+
+ expert->calib_param.mem_chunk_phys_addr = 0;
+ expert->calib_param.mem_chunk_virt_addr = 0;
+ expert->calib_param.mem_chunk_size = 0;
+ expert->calib_param.cb_base = buf->phys;
+ expert->calib_param.cb_size = buf->length;
+
+ return 0;
+}
+
+static void vpu_windsor_update_wptr(struct vpu_rpc_buffer_desc *desc, u32 wptr)
+{
+ u32 pre_wptr = get_ptr(desc->wptr);
+ u32 new_wptr = get_ptr(wptr);
+ u32 rptr = get_ptr(desc->rptr);
+ u32 size = get_ptr(desc->end) - get_ptr(desc->start);
+ u32 space = (rptr + size - pre_wptr) % size;
+ u32 step = (new_wptr + size - pre_wptr) % size;
+
+ if (space && step > space)
+ pr_err("update wptr from 0x%x to 0x%x, cross over rptr 0x%x\n",
+ pre_wptr, new_wptr, rptr);
+
+ desc->wptr = wptr;
+}
+
+static void vpu_windsor_update_rptr(struct vpu_rpc_buffer_desc *desc, u32 rptr)
+{
+ u32 pre_rptr = get_ptr(desc->rptr);
+ u32 new_rptr = get_ptr(rptr);
+ u32 wptr = get_ptr(desc->wptr);
+ u32 size = get_ptr(desc->end) - get_ptr(desc->start);
+ u32 space = (wptr + size - pre_rptr) % size;
+ u32 step = (new_rptr + size - pre_rptr) % size;
+
+ if (step > space)
+ pr_err("update rptr from 0x%x to 0x%x, cross over wptr 0x%x\n",
+ pre_rptr, new_rptr, wptr);
+
+ desc->rptr = rptr;
+}
+
+int vpu_windsor_update_stream_buffer(struct vpu_shared_addr *shared,
+ u32 instance, u32 ptr, bool write)
+{
+ struct vpu_rpc_buffer_desc *desc;
+
+ desc = get_stream_buf_desc(shared, instance);
+
+ /* update wptr/rptr after data is written or read */
+ mb();
+ if (write)
+ vpu_windsor_update_wptr(desc, ptr);
+ else
+ vpu_windsor_update_rptr(desc, ptr);
+
+ return 0;
+}
+
+int vpu_windsor_get_stream_buffer_desc(struct vpu_shared_addr *shared,
+ u32 instance, struct vpu_rpc_buffer_desc *desc)
+{
+ struct vpu_rpc_buffer_desc *rpc_desc;
+
+ rpc_desc = get_stream_buf_desc(shared, instance);
+ if (desc) {
+ desc->wptr = get_ptr(rpc_desc->wptr);
+ desc->rptr = get_ptr(rpc_desc->rptr);
+ desc->start = get_ptr(rpc_desc->start);
+ desc->end = get_ptr(rpc_desc->end);
+ }
+
+ return 0;
+}
+
+u32 vpu_windsor_get_version(struct vpu_shared_addr *shared)
+{
+ struct windsor_iface *iface;
+
+ WARN_ON(!shared || !shared->iface);
+
+ iface = shared->iface;
+ return iface->fw_version;
+}
+
+static int vpu_windsor_set_frame_rate(struct vpu_enc_expert_mode_param *expert,
+ struct vpu_encode_params *params)
+{
+ expert->config_param.frame_rate_num = params->frame_rate.numerator;
+ expert->config_param.frame_rate_den = params->frame_rate.denominator;
+
+ return 0;
+}
+
+static int vpu_windsor_set_format(struct vpu_enc_param *param, u32 pixelformat)
+{
+ switch (pixelformat) {
+ case V4L2_PIX_FMT_H264:
+ param->codec_mode = MEDIAIP_ENC_FMT_H264;
+ break;
+ default:
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+static int vpu_windsor_set_profile(struct vpu_enc_param *param, u32 profile)
+{
+ switch (profile) {
+ case V4L2_MPEG_VIDEO_H264_PROFILE_BASELINE:
+ param->profile = MEDIAIP_ENC_PROF_H264_BP;
+ break;
+ case V4L2_MPEG_VIDEO_H264_PROFILE_MAIN:
+ param->profile = MEDIAIP_ENC_PROF_H264_MP;
+ break;
+ case V4L2_MPEG_VIDEO_H264_PROFILE_HIGH:
+ param->profile = MEDIAIP_ENC_PROF_H264_HP;
+ break;
+ default:
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+static const u32 h264_level[] = {
+ [V4L2_MPEG_VIDEO_H264_LEVEL_1_0] = 10,
+ [V4L2_MPEG_VIDEO_H264_LEVEL_1B] = 14,
+ [V4L2_MPEG_VIDEO_H264_LEVEL_1_1] = 11,
+ [V4L2_MPEG_VIDEO_H264_LEVEL_1_2] = 12,
+ [V4L2_MPEG_VIDEO_H264_LEVEL_1_3] = 13,
+ [V4L2_MPEG_VIDEO_H264_LEVEL_2_0] = 20,
+ [V4L2_MPEG_VIDEO_H264_LEVEL_2_1] = 21,
+ [V4L2_MPEG_VIDEO_H264_LEVEL_2_2] = 22,
+ [V4L2_MPEG_VIDEO_H264_LEVEL_3_0] = 30,
+ [V4L2_MPEG_VIDEO_H264_LEVEL_3_1] = 31,
+ [V4L2_MPEG_VIDEO_H264_LEVEL_3_2] = 32,
+ [V4L2_MPEG_VIDEO_H264_LEVEL_4_0] = 40,
+ [V4L2_MPEG_VIDEO_H264_LEVEL_4_1] = 41,
+ [V4L2_MPEG_VIDEO_H264_LEVEL_4_2] = 42,
+ [V4L2_MPEG_VIDEO_H264_LEVEL_5_0] = 50,
+ [V4L2_MPEG_VIDEO_H264_LEVEL_5_1] = 51
+};
+
+static int vpu_windsor_set_level(struct vpu_enc_param *param, u32 level)
+{
+ if (level >= ARRAY_SIZE(h264_level))
+ return -EINVAL;
+
+ param->level = h264_level[level];
+
+ return 0;
+}
+
+static int vpu_windsor_set_size(struct vpu_enc_param *windsor,
+ struct vpu_encode_params *params)
+{
+ windsor->src_stride = params->src_stride;
+ windsor->src_width = params->src_width;
+ windsor->src_height = params->src_height;
+ windsor->src_offset_x = params->crop.left;
+ windsor->src_offset_y = params->crop.top;
+ windsor->src_crop_width = params->crop.width;
+ windsor->src_crop_height = params->crop.height;
+ windsor->out_width = params->out_width;
+ windsor->out_height = params->out_height;
+
+ return 0;
+}
+
+static int vpu_windsor_set_gop(struct vpu_enc_param *param, u32 gop)
+{
+ param->iframe_interval = gop;
+
+ return 0;
+}
+
+static int vpu_windsor_set_bframes(struct vpu_enc_param *param, u32 bframes)
+{
+ if (bframes) {
+ param->low_latency_mode = 0;
+ param->bframes = bframes;
+ } else {
+ param->low_latency_mode = 1;
+ param->bframes = 0;
+ }
+
+ return 0;
+}
+
+static int vpu_windsor_set_bitrate_mode(struct vpu_enc_param *param, u32 mode)
+{
+ switch (mode) {
+ case V4L2_MPEG_VIDEO_BITRATE_MODE_VBR:
+ param->bitrate_mode = MEDIAIP_ENC_BITRATE_MODE_CONSTANT_QP;
+ break;
+ case V4L2_MPEG_VIDEO_BITRATE_MODE_CBR:
+ param->bitrate_mode = MEDIAIP_ENC_BITRATE_MODE_CBR;
+ break;
+ default:
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+static u32 vpu_windsor_bitrate(u32 bitrate)
+{
+ return DIV_ROUND_CLOSEST(bitrate, WINDSOR_BITRATE_UNIT);
+}
+
+static int vpu_windsor_set_bitrate(struct vpu_enc_param *windsor,
+ struct vpu_encode_params *params)
+{
+ windsor->target_bitrate = vpu_windsor_bitrate(params->bitrate);
+ windsor->min_bitrate = vpu_windsor_bitrate(params->bitrate_min);
+ windsor->max_bitrate = vpu_windsor_bitrate(params->bitrate_max);
+
+ return 0;
+}
+
+static int vpu_windsor_set_qp(struct vpu_enc_expert_mode_param *expert,
+ struct vpu_encode_params *params)
+{
+ expert->static_param.rate_control_islice_qp = params->i_frame_qp;
+ expert->static_param.rate_control_pslice_qp = params->p_frame_qp;
+ expert->static_param.rate_control_bslice_qp = params->b_frame_qp;
+
+ return 0;
+}
+
+static int vpu_windsor_set_sar(struct vpu_enc_expert_mode_param *expert,
+ struct vpu_encode_params *params)
+{
+ expert->config_param.h264_aspect_ratio_present = params->sar.enable;
+ if (params->sar.idc == V4L2_MPEG_VIDEO_H264_VUI_SAR_IDC_EXTENDED)
+ expert->config_param.aspect_ratio = WINDSOR_H264_EXTENDED_SAR;
+ else
+ expert->config_param.aspect_ratio = params->sar.idc;
+ expert->config_param.h264_aspect_ratio_sar_width = params->sar.width;
+ expert->config_param.h264_aspect_ratio_sar_height = params->sar.height;
+
+ return 0;
+}
+
+static int vpu_windsor_set_color(struct vpu_enc_expert_mode_param *expert,
+ struct vpu_encode_params *params)
+{
+ expert->config_param.h264_video_type_present = 1;
+ expert->config_param.h264_video_format = 5;
+ expert->config_param.h264_video_colour_descriptor = 1;
+ expert->config_param.h264_video_colour_primaries =
+ vpu_color_cvrt_primaries_v2i(params->color.primaries);
+ expert->config_param.h264_video_transfer_char =
+ vpu_color_cvrt_transfers_v2i(params->color.transfer);
+ expert->config_param.h264_video_matrix_coeff =
+ vpu_color_cvrt_matrix_v2i(params->color.matrix);
+ expert->config_param.h264_video_full_range =
+ vpu_color_cvrt_full_range_v2i(params->color.full_range);
+ return 0;
+}
+
+static int vpu_windsor_update_bitrate(struct vpu_shared_addr *shared,
+ u32 instance, struct vpu_encode_params *params)
+{
+ struct vpu_enc_param *windsor;
+ struct vpu_enc_expert_mode_param *expert;
+
+ windsor = get_enc_param(shared, instance);
+ expert = get_expert_param(shared, instance);
+
+ if (windsor->bitrate_mode != MEDIAIP_ENC_BITRATE_MODE_CBR)
+ return 0;
+ if (params->rc_mode != V4L2_MPEG_VIDEO_BITRATE_MODE_CBR)
+ return 0;
+ if (vpu_windsor_bitrate(params->bitrate) == windsor->target_bitrate)
+ return 0;
+
+ vpu_windsor_set_bitrate(windsor, params);
+ expert->static_param.rate_control_bitrate = windsor->target_bitrate;
+ expert->static_param.rate_control_bitrate_min = windsor->min_bitrate;
+ expert->static_param.rate_control_bitrate_max = windsor->max_bitrate;
+
+ return 0;
+}
+
+static int vpu_windsor_set_params(struct vpu_shared_addr *shared,
+ u32 instance, struct vpu_encode_params *params)
+{
+ struct vpu_enc_param *windsor;
+ int ret;
+
+ windsor = get_enc_param(shared, instance);
+
+ if (params->input_format != V4L2_PIX_FMT_NV12 &&
+ params->input_format != V4L2_PIX_FMT_NV12M)
+ return -EINVAL;
+
+ ret = vpu_windsor_set_format(windsor, params->codec_format);
+ if (ret)
+ return ret;
+ vpu_windsor_set_profile(windsor, params->profile);
+ vpu_windsor_set_level(windsor, params->level);
+ vpu_windsor_set_size(windsor, params);
+ vpu_windsor_set_gop(windsor, params->gop_length);
+ vpu_windsor_set_bframes(windsor, params->bframes);
+ vpu_windsor_set_bitrate_mode(windsor, params->rc_mode);
+ vpu_windsor_set_bitrate(windsor, params);
+ windsor->init_slice_qp = params->i_frame_qp;
+
+ if (!params->frame_rate.numerator)
+ return -EINVAL;
+ windsor->frame_rate = params->frame_rate.denominator / params->frame_rate.numerator;
+
+ return 0;
+}
+
+static int vpu_windsor_update_params(struct vpu_shared_addr *shared,
+ u32 instance, struct vpu_encode_params *params)
+{
+ struct vpu_enc_expert_mode_param *expert;
+
+ expert = get_expert_param(shared, instance);
+
+ vpu_windsor_set_frame_rate(expert, params);
+ vpu_windsor_set_qp(expert, params);
+ vpu_windsor_set_sar(expert, params);
+ vpu_windsor_set_color(expert, params);
+ vpu_windsor_update_bitrate(shared, instance, params);
+ /*expert->config_param.iac_sc_threshold = 0;*/
+
+ return 0;
+}
+
+int vpu_windsor_set_encode_params(struct vpu_shared_addr *shared,
+ u32 instance, struct vpu_encode_params *params, u32 update)
+{
+ if (!params)
+ return -EINVAL;
+
+ if (!update)
+ return vpu_windsor_set_params(shared, instance, params);
+
+ return vpu_windsor_update_params(shared, instance, params);
+}
+
+u32 vpu_windsor_get_max_instance_count(struct vpu_shared_addr *shared)
+{
+ struct windsor_iface *iface;
+
+ WARN_ON(!shared || !shared->iface);
+
+ iface = shared->iface;
+
+ return iface->max_streams;
+}
diff --git a/drivers/media/platform/amphion/vpu_windsor.h b/drivers/media/platform/amphion/vpu_windsor.h
new file mode 100644
index 000000000000..ba2b12249b76
--- /dev/null
+++ b/drivers/media/platform/amphion/vpu_windsor.h
@@ -0,0 +1,39 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright 2020-2021 NXP
+ */
+
+#ifndef _AMPHION_VPU_WINDSOR_H
+#define _AMPHION_VPU_WINDSOR_H
+
+u32 vpu_windsor_get_data_size(void);
+void vpu_windsor_init_rpc(struct vpu_shared_addr *shared,
+ struct vpu_buffer *rpc, dma_addr_t boot_addr);
+void vpu_windsor_set_log_buf(struct vpu_shared_addr *shared,
+ struct vpu_buffer *log);
+void vpu_windsor_set_system_cfg(struct vpu_shared_addr *shared,
+ u32 regs_base, void __iomem *regs, u32 core_id);
+int vpu_windsor_get_stream_buffer_size(struct vpu_shared_addr *shared);
+int vpu_windsor_pack_cmd(struct vpu_rpc_event *pkt,
+ u32 index, u32 id, void *data);
+int vpu_windsor_convert_msg_id(u32 msg_id);
+int vpu_windsor_unpack_msg_data(struct vpu_rpc_event *pkt, void *data);
+int vpu_windsor_config_memory_resource(struct vpu_shared_addr *shared,
+ u32 instance, u32 type, u32 index,
+ struct vpu_buffer *buf);
+int vpu_windsor_config_stream_buffer(struct vpu_shared_addr *shared,
+ u32 instance, struct vpu_buffer *buf);
+int vpu_windsor_update_stream_buffer(struct vpu_shared_addr *shared,
+ u32 instance, u32 ptr, bool write);
+int vpu_windsor_get_stream_buffer_desc(struct vpu_shared_addr *shared,
+ u32 instance, struct vpu_rpc_buffer_desc *desc);
+u32 vpu_windsor_get_version(struct vpu_shared_addr *shared);
+int vpu_windsor_set_encode_params(struct vpu_shared_addr *shared,
+ u32 instance,
+ struct vpu_encode_params *params,
+ u32 update);
+int vpu_windsor_input_frame(struct vpu_shared_addr *shared,
+ struct vpu_inst *inst, struct vb2_buffer *vb);
+u32 vpu_windsor_get_max_instance_count(struct vpu_shared_addr *shared);
+
+#endif
--
2.33.0
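[Editor's note] The wrap-around checks in vpu_windsor_update_wptr()/vpu_windsor_update_rptr() above compute the free space and the pointer advance modulo the ring size. A minimal standalone sketch of that arithmetic, outside the driver (the ring_space/ring_step names are illustrative, not part of the patch):

```c
#include <assert.h>
#include <stdint.h>

/*
 * Free space in a circular buffer: bytes from wptr up to rptr, mod size.
 * Mirrors the "space" computation in vpu_windsor_update_wptr().
 */
static uint32_t ring_space(uint32_t rptr, uint32_t wptr, uint32_t size)
{
	return (rptr + size - wptr) % size;
}

/*
 * Distance a pointer is advanced, mod size. An update is invalid if the
 * step exceeds the available space (it would cross the other pointer),
 * which is exactly the condition the driver reports with pr_err().
 */
static uint32_t ring_step(uint32_t old_ptr, uint32_t new_ptr, uint32_t size)
{
	return (new_ptr + size - old_ptr) % size;
}
```

With size 256, rptr 100 and wptr 40, the writer has 60 bytes of space; advancing wptr from 40 to 100 is a 60-byte step and is the largest legal move.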


2021-12-02 09:04:47

by Hans Verkuil

[permalink] [raw]
Subject: Re: [PATCH v13 04/13] media: amphion: add vpu core driver

On 30/11/2021 10:48, Ming Qian wrote:
> The vpu supports encoder and decoder.
> it needs mu core to handle it.

"mu core"? Do you mean "vpu core"? If not, then what is a "mu core"?

Regards,

Hans

> core will run either encoder or decoder firmware.
>
> This driver is for support the vpu core.
>
> Signed-off-by: Ming Qian <[email protected]>
> Signed-off-by: Shijie Qin <[email protected]>
> Signed-off-by: Zhou Peng <[email protected]>
> ---
> drivers/media/platform/amphion/vpu_codec.h | 67 ++
> drivers/media/platform/amphion/vpu_core.c | 906 +++++++++++++++++++++
> drivers/media/platform/amphion/vpu_core.h | 15 +
> drivers/media/platform/amphion/vpu_dbg.c | 495 +++++++++++
> drivers/media/platform/amphion/vpu_rpc.c | 279 +++++++
> drivers/media/platform/amphion/vpu_rpc.h | 464 +++++++++++
> 6 files changed, 2226 insertions(+)
> create mode 100644 drivers/media/platform/amphion/vpu_codec.h
> create mode 100644 drivers/media/platform/amphion/vpu_core.c
> create mode 100644 drivers/media/platform/amphion/vpu_core.h
> create mode 100644 drivers/media/platform/amphion/vpu_dbg.c
> create mode 100644 drivers/media/platform/amphion/vpu_rpc.c
> create mode 100644 drivers/media/platform/amphion/vpu_rpc.h
>
> diff --git a/drivers/media/platform/amphion/vpu_codec.h b/drivers/media/platform/amphion/vpu_codec.h
> new file mode 100644
> index 000000000000..bf8920e9f6d7
> --- /dev/null
> +++ b/drivers/media/platform/amphion/vpu_codec.h
> @@ -0,0 +1,67 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * Copyright 2020-2021 NXP
> + */
> +
> +#ifndef _AMPHION_VPU_CODEC_H
> +#define _AMPHION_VPU_CODEC_H
> +
> +struct vpu_encode_params {
> + u32 input_format;
> + u32 codec_format;
> + u32 profile;
> + u32 tier;
> + u32 level;
> + struct v4l2_fract frame_rate;
> + u32 src_stride;
> + u32 src_width;
> + u32 src_height;
> + struct v4l2_rect crop;
> + u32 out_width;
> + u32 out_height;
> +
> + u32 gop_length;
> + u32 bframes;
> +
> + u32 rc_mode;
> + u32 bitrate;
> + u32 bitrate_min;
> + u32 bitrate_max;
> +
> + u32 i_frame_qp;
> + u32 p_frame_qp;
> + u32 b_frame_qp;
> + u32 qp_min;
> + u32 qp_max;
> + u32 qp_min_i;
> + u32 qp_max_i;
> +
> + struct {
> + u32 enable;
> + u32 idc;
> + u32 width;
> + u32 height;
> + } sar;
> +
> + struct {
> + u32 primaries;
> + u32 transfer;
> + u32 matrix;
> + u32 full_range;
> + } color;
> +};
> +
> +struct vpu_decode_params {
> + u32 codec_format;
> + u32 output_format;
> + u32 b_dis_reorder;
> + u32 b_non_frame;
> + u32 frame_count;
> + u32 end_flag;
> + struct {
> + u32 base;
> + u32 size;
> + } udata;
> +};
> +
> +#endif
> diff --git a/drivers/media/platform/amphion/vpu_core.c b/drivers/media/platform/amphion/vpu_core.c
> new file mode 100644
> index 000000000000..0dbfd1c84f75
> --- /dev/null
> +++ b/drivers/media/platform/amphion/vpu_core.c
> @@ -0,0 +1,906 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Copyright 2020-2021 NXP
> + */
> +
> +#include <linux/init.h>
> +#include <linux/interconnect.h>
> +#include <linux/ioctl.h>
> +#include <linux/list.h>
> +#include <linux/kernel.h>
> +#include <linux/module.h>
> +#include <linux/of_device.h>
> +#include <linux/of_address.h>
> +#include <linux/platform_device.h>
> +#include <linux/slab.h>
> +#include <linux/types.h>
> +#include <linux/pm_runtime.h>
> +#include <linux/pm_domain.h>
> +#include <linux/firmware.h>
> +#include "vpu.h"
> +#include "vpu_defs.h"
> +#include "vpu_core.h"
> +#include "vpu_mbox.h"
> +#include "vpu_msgs.h"
> +#include "vpu_rpc.h"
> +#include "vpu_cmds.h"
> +
> +void csr_writel(struct vpu_core *core, u32 reg, u32 val)
> +{
> + writel(val, core->base + reg);
> +}
> +
> +u32 csr_readl(struct vpu_core *core, u32 reg)
> +{
> + return readl(core->base + reg);
> +}
> +
> +static int vpu_core_load_firmware(struct vpu_core *core)
> +{
> + const struct firmware *pfw = NULL;
> + int ret = 0;
> +
> + WARN_ON(!core || !core->res || !core->res->fwname);
> + if (!core->fw.virt) {
> + dev_err(core->dev, "firmware buffer is not ready\n");
> + return -EINVAL;
> + }
> +
> + ret = request_firmware(&pfw, core->res->fwname, core->dev);
> + dev_dbg(core->dev, "request_firmware %s : %d\n", core->res->fwname, ret);
> + if (ret) {
> + dev_err(core->dev, "request firmware %s failed, ret = %d\n",
> + core->res->fwname, ret);
> + return ret;
> + }
> +
> + if (core->fw.length < pfw->size) {
> + dev_err(core->dev, "firmware size %zu exceeds buffer size %u\n",
> + pfw->size, core->fw.length);
> + ret = -EINVAL;
> + goto exit;
> + }
> +
> + memset_io(core->fw.virt, 0, core->fw.length);
> + memcpy(core->fw.virt, pfw->data, pfw->size);
> + core->fw.bytesused = pfw->size;
> + ret = vpu_iface_on_firmware_loaded(core);
> +exit:
> + release_firmware(pfw);
> + pfw = NULL;
> +
> + return ret;
> +}
> +
> +static int vpu_core_boot_done(struct vpu_core *core)
> +{
> + u32 fw_version;
> +
> + fw_version = vpu_iface_get_version(core);
> + dev_info(core->dev, "%s firmware version : %d.%d.%d\n",
> + vpu_core_type_desc(core->type),
> + (fw_version >> 16) & 0xff,
> + (fw_version >> 8) & 0xff,
> + fw_version & 0xff);
> + core->supported_instance_count = vpu_iface_get_max_instance_count(core);
> + if (core->res->act_size) {
> + u32 count = core->act.length / core->res->act_size;
> +
> + core->supported_instance_count = min(core->supported_instance_count, count);
> + }
> + core->fw_version = fw_version;
> + core->state = VPU_CORE_ACTIVE;
> +
> + return 0;
> +}
> +
> +static int vpu_core_wait_boot_done(struct vpu_core *core)
> +{
> + int ret;
> +
> + ret = wait_for_completion_timeout(&core->cmp, VPU_TIMEOUT);
> + if (!ret) {
> + dev_err(core->dev, "boot timeout\n");
> + return -EINVAL;
> + }
> + return vpu_core_boot_done(core);
> +}
> +
> +static int vpu_core_boot(struct vpu_core *core, bool load)
> +{
> + int ret;
> +
> + WARN_ON(!core);
> +
> + if (!core->res->standalone)
> + return 0;
> +
> + reinit_completion(&core->cmp);
> + if (load) {
> + ret = vpu_core_load_firmware(core);
> + if (ret)
> + return ret;
> + }
> +
> + vpu_iface_boot_core(core);
> + return vpu_core_wait_boot_done(core);
> +}
> +
> +static int vpu_core_shutdown(struct vpu_core *core)
> +{
> + if (!core->res->standalone)
> + return 0;
> + return vpu_iface_shutdown_core(core);
> +}
> +
> +static int vpu_core_restore(struct vpu_core *core)
> +{
> + int ret;
> +
> + if (!core->res->standalone)
> + return 0;
> + ret = vpu_core_sw_reset(core);
> + if (ret)
> + return ret;
> +
> + vpu_core_boot_done(core);
> + return vpu_iface_restore_core(core);
> +}
> +
> +static int __vpu_alloc_dma(struct device *dev, struct vpu_buffer *buf)
> +{
> + gfp_t gfp = GFP_KERNEL | GFP_DMA32;
> +
> + WARN_ON(!dev || !buf);
> +
> + if (!buf->length)
> + return 0;
> +
> + buf->virt = dma_alloc_coherent(dev, buf->length, &buf->phys, gfp);
> + if (!buf->virt)
> + return -ENOMEM;
> +
> + buf->dev = dev;
> +
> + return 0;
> +}
> +
> +void vpu_free_dma(struct vpu_buffer *buf)
> +{
> + WARN_ON(!buf);
> +
> + if (!buf->virt || !buf->dev)
> + return;
> +
> + dma_free_coherent(buf->dev, buf->length, buf->virt, buf->phys);
> + buf->virt = NULL;
> + buf->phys = 0;
> + buf->length = 0;
> + buf->bytesused = 0;
> + buf->dev = NULL;
> +}
> +
> +int vpu_alloc_dma(struct vpu_core *core, struct vpu_buffer *buf)
> +{
> + WARN_ON(!core || !buf);
> +
> + return __vpu_alloc_dma(core->dev, buf);
> +}
> +
> +static void vpu_core_check_hang(struct vpu_core *core)
> +{
> + if (core->hang_mask)
> + core->state = VPU_CORE_HANG;
> +}
> +
> +static struct vpu_core *vpu_core_find_proper_by_type(struct vpu_dev *vpu, u32 type)
> +{
> + struct vpu_core *core = NULL;
> + int request_count = INT_MAX;
> + struct vpu_core *c;
> +
> + WARN_ON(!vpu);
> +
> + list_for_each_entry(c, &vpu->cores, list) {
> + dev_dbg(c->dev, "instance_mask = 0x%lx, state = %d\n",
> + c->instance_mask,
> + c->state);
> + if (c->type != type)
> + continue;
> + if (c->state == VPU_CORE_DEINIT) {
> + core = c;
> + break;
> + }
> + vpu_core_check_hang(c);
> + if (c->state != VPU_CORE_ACTIVE)
> + continue;
> + if (c->request_count < request_count) {
> + request_count = c->request_count;
> + core = c;
> + }
> + if (!request_count)
> + break;
> + }
> +
> + return core;
> +}
> +
> +static bool vpu_core_is_exist(struct vpu_dev *vpu, struct vpu_core *core)
> +{
> + struct vpu_core *c;
> +
> + list_for_each_entry(c, &vpu->cores, list) {
> + if (c == core)
> + return true;
> + }
> +
> + return false;
> +}
> +
> +static void vpu_core_get_vpu(struct vpu_core *core)
> +{
> + core->vpu->get_vpu(core->vpu);
> + if (core->type == VPU_CORE_TYPE_ENC)
> + core->vpu->get_enc(core->vpu);
> + if (core->type == VPU_CORE_TYPE_DEC)
> + core->vpu->get_dec(core->vpu);
> +}
> +
> +static int vpu_core_register(struct device *dev, struct vpu_core *core)
> +{
> + struct vpu_dev *vpu = dev_get_drvdata(dev);
> + int ret = 0;
> +
> + dev_dbg(core->dev, "register core %s\n", vpu_core_type_desc(core->type));
> + if (vpu_core_is_exist(vpu, core))
> + return 0;
> +
> + core->workqueue = alloc_workqueue("vpu", WQ_UNBOUND | WQ_MEM_RECLAIM, 1);
> + if (!core->workqueue) {
> + dev_err(core->dev, "fail to alloc workqueue\n");
> + return -ENOMEM;
> + }
> + INIT_WORK(&core->msg_work, vpu_msg_run_work);
> + INIT_DELAYED_WORK(&core->msg_delayed_work, vpu_msg_delayed_work);
> + core->msg_buffer_size = roundup_pow_of_two(VPU_MSG_BUFFER_SIZE);
> + core->msg_buffer = vzalloc(core->msg_buffer_size);
> + if (!core->msg_buffer) {
> + dev_err(core->dev, "failed to allocate buffer for fifo\n");
> + ret = -ENOMEM;
> + goto error;
> + }
> + ret = kfifo_init(&core->msg_fifo, core->msg_buffer, core->msg_buffer_size);
> + if (ret) {
> + dev_err(core->dev, "failed to init kfifo\n");
> + goto error;
> + }
> +
> + list_add_tail(&core->list, &vpu->cores);
> +
> + vpu_core_get_vpu(core);
> +
> + if (vpu_iface_get_power_state(core))
> + ret = vpu_core_restore(core);
> + if (ret)
> + goto error;
> +
> + return 0;
> +error:
> + if (core->msg_buffer) {
> + vfree(core->msg_buffer);
> + core->msg_buffer = NULL;
> + }
> + if (core->workqueue) {
> + destroy_workqueue(core->workqueue);
> + core->workqueue = NULL;
> + }
> + return ret;
> +}
> +
> +static void vpu_core_put_vpu(struct vpu_core *core)
> +{
> + if (core->type == VPU_CORE_TYPE_ENC)
> + core->vpu->put_enc(core->vpu);
> + if (core->type == VPU_CORE_TYPE_DEC)
> + core->vpu->put_dec(core->vpu);
> + core->vpu->put_vpu(core->vpu);
> +}
> +
> +static int vpu_core_unregister(struct device *dev, struct vpu_core *core)
> +{
> + list_del_init(&core->list);
> +
> + vpu_core_put_vpu(core);
> + core->vpu = NULL;
> + vfree(core->msg_buffer);
> + core->msg_buffer = NULL;
> +
> + if (core->workqueue) {
> + cancel_work_sync(&core->msg_work);
> + cancel_delayed_work_sync(&core->msg_delayed_work);
> + destroy_workqueue(core->workqueue);
> + core->workqueue = NULL;
> + }
> +
> + return 0;
> +}
> +
> +static int vpu_core_acquire_instance(struct vpu_core *core)
> +{
> + int id;
> +
> + WARN_ON(!core);
> +
> + id = ffz(core->instance_mask);
> + if (id >= core->supported_instance_count)
> + return -EINVAL;
> +
> + set_bit(id, &core->instance_mask);
> +
> + return id;
> +}
> +
> +static void vpu_core_release_instance(struct vpu_core *core, int id)
> +{
> + WARN_ON(!core);
> +
> + if (id < 0 || id >= core->supported_instance_count)
> + return;
> +
> + clear_bit(id, &core->instance_mask);
> +}
> +
> +struct vpu_inst *vpu_inst_get(struct vpu_inst *inst)
> +{
> + if (!inst)
> + return NULL;
> +
> + atomic_inc(&inst->ref_count);
> +
> + return inst;
> +}
> +
> +void vpu_inst_put(struct vpu_inst *inst)
> +{
> + if (!inst)
> + return;
> + if (atomic_dec_and_test(&inst->ref_count)) {
> + if (inst->release)
> + inst->release(inst);
> + }
> +}
> +
> +struct vpu_core *vpu_request_core(struct vpu_dev *vpu, enum vpu_core_type type)
> +{
> + struct vpu_core *core = NULL;
> + int ret;
> +
> + mutex_lock(&vpu->lock);
> +
> + core = vpu_core_find_proper_by_type(vpu, type);
> + if (!core)
> + goto exit;
> +
> + mutex_lock(&core->lock);
> + pm_runtime_get_sync(core->dev);
> +
> + if (core->state == VPU_CORE_DEINIT) {
> + ret = vpu_core_boot(core, true);
> + if (ret) {
> + pm_runtime_put_sync(core->dev);
> + mutex_unlock(&core->lock);
> + core = NULL;
> + goto exit;
> + }
> + }
> +
> + core->request_count++;
> +
> + mutex_unlock(&core->lock);
> +exit:
> + mutex_unlock(&vpu->lock);
> +
> + return core;
> +}
> +
> +void vpu_release_core(struct vpu_core *core)
> +{
> + if (!core)
> + return;
> +
> + mutex_lock(&core->lock);
> + pm_runtime_put_sync(core->dev);
> + if (core->request_count)
> + core->request_count--;
> + mutex_unlock(&core->lock);
> +}
> +
> +int vpu_inst_register(struct vpu_inst *inst)
> +{
> + struct vpu_dev *vpu;
> + struct vpu_core *core;
> + int ret = 0;
> +
> + WARN_ON(!inst || !inst->vpu);
> +
> + vpu = inst->vpu;
> + core = inst->core;
> + if (!core) {
> + core = vpu_request_core(vpu, inst->type);
> + if (!core) {
> + dev_err(vpu->dev, "there is no vpu core for %s\n",
> + vpu_core_type_desc(inst->type));
> + return -EINVAL;
> + }
> + inst->core = core;
> + inst->dev = get_device(core->dev);
> + }
> +
> + mutex_lock(&core->lock);
> + if (inst->id >= 0 && inst->id < core->supported_instance_count)
> + goto exit;
> +
> + ret = vpu_core_acquire_instance(core);
> + if (ret < 0)
> + goto exit;
> +
> + vpu_trace(inst->dev, "[%d] %p\n", ret, inst);
> + inst->id = ret;
> + list_add_tail(&inst->list, &core->instances);
> + ret = 0;
> + if (core->res->act_size) {
> + inst->act.phys = core->act.phys + core->res->act_size * inst->id;
> + inst->act.virt = core->act.virt + core->res->act_size * inst->id;
> + inst->act.length = core->res->act_size;
> + }
> + vpu_inst_create_dbgfs_file(inst);
> +exit:
> + mutex_unlock(&core->lock);
> +
> + if (ret)
> + dev_err(core->dev, "failed to register instance\n");
> + return ret;
> +}
> +
> +int vpu_inst_unregister(struct vpu_inst *inst)
> +{
> + struct vpu_core *core;
> +
> + WARN_ON(!inst);
> +
> + if (!inst->core)
> + return 0;
> +
> + core = inst->core;
> + vpu_clear_request(inst);
> + mutex_lock(&core->lock);
> + if (inst->id >= 0 && inst->id < core->supported_instance_count) {
> + vpu_inst_remove_dbgfs_file(inst);
> + list_del_init(&inst->list);
> + vpu_core_release_instance(core, inst->id);
> + inst->id = VPU_INST_NULL_ID;
> + }
> + vpu_core_check_hang(core);
> + if (core->state == VPU_CORE_HANG && !core->instance_mask) {
> + dev_info(core->dev, "reset hung core\n");
> + if (!vpu_core_sw_reset(core)) {
> + core->state = VPU_CORE_ACTIVE;
> + core->hang_mask = 0;
> + }
> + }
> + mutex_unlock(&core->lock);
> +
> + return 0;
> +}
> +
> +struct vpu_inst *vpu_core_find_instance(struct vpu_core *core, u32 index)
> +{
> + struct vpu_inst *inst = NULL;
> + struct vpu_inst *tmp;
> +
> + mutex_lock(&core->lock);
> + if (!test_bit(index, &core->instance_mask))
> + goto exit;
> + list_for_each_entry(tmp, &core->instances, list) {
> + if (tmp->id == index) {
> + inst = vpu_inst_get(tmp);
> + break;
> + }
> + }
> +exit:
> + mutex_unlock(&core->lock);
> +
> + return inst;
> +}
> +
> +const struct vpu_core_resources *vpu_get_resource(struct vpu_inst *inst)
> +{
> + struct vpu_dev *vpu;
> + struct vpu_core *core = NULL;
> + const struct vpu_core_resources *res = NULL;
> +
> + if (!inst || !inst->vpu)
> + return NULL;
> +
> + if (inst->core && inst->core->res)
> + return inst->core->res;
> +
> + vpu = inst->vpu;
> + mutex_lock(&vpu->lock);
> + list_for_each_entry(core, &vpu->cores, list) {
> + if (core->type == inst->type) {
> + res = core->res;
> + break;
> + }
> + }
> + mutex_unlock(&vpu->lock);
> +
> + return res;
> +}
> +
> +static int vpu_core_parse_dt(struct vpu_core *core, struct device_node *np)
> +{
> + struct device_node *node;
> + struct resource res;
> +
> + if (of_count_phandle_with_args(np, "memory-region", NULL) < 2) {
> + dev_err(core->dev, "need 2 memory-regions for boot and rpc\n");
> + return -ENODEV;
> + }
> +
> + node = of_parse_phandle(np, "memory-region", 0);
> + if (!node) {
> + dev_err(core->dev, "boot-region of_parse_phandle error\n");
> + return -ENODEV;
> + }
> + if (of_address_to_resource(node, 0, &res)) {
> + dev_err(core->dev, "boot-region of_address_to_resource error\n");
> + return -EINVAL;
> + }
> + core->fw.phys = res.start;
> + core->fw.length = resource_size(&res);
> +
> + node = of_parse_phandle(np, "memory-region", 1);
> + if (!node) {
> + dev_err(core->dev, "rpc-region of_parse_phandle error\n");
> + return -ENODEV;
> + }
> + if (of_address_to_resource(node, 0, &res)) {
> + dev_err(core->dev, "rpc-region of_address_to_resource error\n");
> + return -EINVAL;
> + }
> + core->rpc.phys = res.start;
> + core->rpc.length = resource_size(&res);
> +
> + if (core->rpc.length < core->res->rpc_size + core->res->fwlog_size) {
> + dev_err(core->dev, "rpc-region <%pad, 0x%x> is too small\n",
> + &core->rpc.phys, core->rpc.length);
> + return -EINVAL;
> + }
> +
> + core->fw.virt = ioremap_wc(core->fw.phys, core->fw.length);
> + core->rpc.virt = ioremap_wc(core->rpc.phys, core->rpc.length);
> + memset_io(core->rpc.virt, 0, core->rpc.length);
> +
> + if (vpu_iface_check_memory_region(core,
> + core->rpc.phys,
> + core->rpc.length) != VPU_CORE_MEMORY_UNCACHED) {
> + dev_err(core->dev, "rpc region<%pad, 0x%x> isn't uncached\n",
> + &core->rpc.phys, core->rpc.length);
> + return -EINVAL;
> + }
> +
> + core->log.phys = core->rpc.phys + core->res->rpc_size;
> + core->log.virt = core->rpc.virt + core->res->rpc_size;
> + core->log.length = core->res->fwlog_size;
> + core->act.phys = core->log.phys + core->log.length;
> + core->act.virt = core->log.virt + core->log.length;
> + core->act.length = core->rpc.length - core->res->rpc_size - core->log.length;
> + core->rpc.length = core->res->rpc_size;
> +
> + return 0;
> +}
> +
> +static int vpu_core_probe(struct platform_device *pdev)
> +{
> + struct device *dev = &pdev->dev;
> + struct vpu_core *core;
> + struct vpu_dev *vpu = dev_get_drvdata(dev->parent);
> + struct vpu_shared_addr *iface;
> + u32 iface_data_size;
> + int ret;
> +
> + dev_dbg(dev, "probe\n");
> + if (!vpu)
> + return -EINVAL;
> + core = devm_kzalloc(dev, sizeof(*core), GFP_KERNEL);
> + if (!core)
> + return -ENOMEM;
> +
> + core->pdev = pdev;
> + core->dev = dev;
> + platform_set_drvdata(pdev, core);
> + core->vpu = vpu;
> + INIT_LIST_HEAD(&core->instances);
> + mutex_init(&core->lock);
> + mutex_init(&core->cmd_lock);
> + init_completion(&core->cmp);
> + init_waitqueue_head(&core->ack_wq);
> + core->state = VPU_CORE_DEINIT;
> +
> + core->res = of_device_get_match_data(dev);
> + if (!core->res)
> + return -ENODEV;
> +
> + core->type = core->res->type;
> + core->id = of_alias_get_id(dev->of_node, "vpu_core");
> + if (core->id < 0) {
> + dev_err(dev, "can't get vpu core id\n");
> + return core->id;
> + }
> + dev_info(core->dev, "[%d] = %s\n", core->id, vpu_core_type_desc(core->type));
> + ret = vpu_core_parse_dt(core, dev->of_node);
> + if (ret)
> + return ret;
> +
> + core->base = devm_platform_ioremap_resource(pdev, 0);
> + if (IS_ERR(core->base))
> + return PTR_ERR(core->base);
> +
> + if (!vpu_iface_check_codec(core)) {
> + dev_err(core->dev, "codec is not supported\n");
> + return -EINVAL;
> + }
> +
> + ret = vpu_mbox_init(core);
> + if (ret)
> + return ret;
> +
> + iface = devm_kzalloc(dev, sizeof(*iface), GFP_KERNEL);
> + if (!iface)
> + return -ENOMEM;
> +
> + iface_data_size = vpu_iface_get_data_size(core);
> + if (iface_data_size) {
> + iface->priv = devm_kzalloc(dev, iface_data_size, GFP_KERNEL);
> + if (!iface->priv)
> + return -ENOMEM;
> + }
> +
> + ret = vpu_iface_init(core, iface, &core->rpc, core->fw.phys);
> + if (ret) {
> + dev_err(core->dev, "init iface fail, ret = %d\n", ret);
> + return ret;
> + }
> +
> + vpu_iface_config_system(core, vpu->res->mreg_base, vpu->base);
> + vpu_iface_set_log_buf(core, &core->log);
> +
> + pm_runtime_enable(dev);
> + ret = pm_runtime_get_sync(dev);
> + if (ret) {
> + pm_runtime_put_noidle(dev);
> + pm_runtime_set_suspended(dev);
> + goto err_runtime_disable;
> + }
> +
> + ret = vpu_core_register(dev->parent, core);
> + if (ret)
> + goto err_core_register;
> + core->parent = dev->parent;
> +
> + pm_runtime_put_sync(dev);
> + vpu_core_create_dbgfs_file(core);
> +
> + return 0;
> +
> +err_core_register:
> + pm_runtime_put_sync(dev);
> +err_runtime_disable:
> + pm_runtime_disable(dev);
> +
> + return ret;
> +}
> +
> +static int vpu_core_remove(struct platform_device *pdev)
> +{
> + struct device *dev = &pdev->dev;
> + struct vpu_core *core = platform_get_drvdata(pdev);
> + int ret;
> +
> + vpu_core_remove_dbgfs_file(core);
> + ret = pm_runtime_get_sync(dev);
> + WARN_ON(ret < 0);
> +
> + vpu_core_shutdown(core);
> + pm_runtime_put_sync(dev);
> + pm_runtime_disable(dev);
> +
> + vpu_core_unregister(core->parent, core);
> + iounmap(core->fw.virt);
> + iounmap(core->rpc.virt);
> + mutex_destroy(&core->lock);
> + mutex_destroy(&core->cmd_lock);
> +
> + return 0;
> +}
> +
> +static int __maybe_unused vpu_core_runtime_resume(struct device *dev)
> +{
> + struct vpu_core *core = dev_get_drvdata(dev);
> +
> + return vpu_mbox_request(core);
> +}
> +
> +static int __maybe_unused vpu_core_runtime_suspend(struct device *dev)
> +{
> + struct vpu_core *core = dev_get_drvdata(dev);
> +
> + vpu_mbox_free(core);
> + return 0;
> +}
> +
> +static void vpu_core_cancel_work(struct vpu_core *core)
> +{
> + struct vpu_inst *inst = NULL;
> +
> + cancel_work_sync(&core->msg_work);
> + cancel_delayed_work_sync(&core->msg_delayed_work);
> +
> + mutex_lock(&core->lock);
> + list_for_each_entry(inst, &core->instances, list)
> + cancel_work_sync(&inst->msg_work);
> + mutex_unlock(&core->lock);
> +}
> +
> +static void vpu_core_resume_work(struct vpu_core *core)
> +{
> + struct vpu_inst *inst = NULL;
> + unsigned long delay = msecs_to_jiffies(10);
> +
> + queue_work(core->workqueue, &core->msg_work);
> + queue_delayed_work(core->workqueue, &core->msg_delayed_work, delay);
> +
> + mutex_lock(&core->lock);
> + list_for_each_entry(inst, &core->instances, list)
> + queue_work(inst->workqueue, &inst->msg_work);
> + mutex_unlock(&core->lock);
> +}
> +
> +static int __maybe_unused vpu_core_resume(struct device *dev)
> +{
> + struct vpu_core *core = dev_get_drvdata(dev);
> + int ret = 0;
> +
> + if (!core->res->standalone)
> + return 0;
> +
> + mutex_lock(&core->lock);
> + pm_runtime_get_sync(dev);
> + vpu_core_get_vpu(core);
> + if (core->state != VPU_CORE_SNAPSHOT)
> + goto exit;
> +
> + if (!vpu_iface_get_power_state(core)) {
> + if (!list_empty(&core->instances)) {
> + ret = vpu_core_boot(core, false);
> + if (ret) {
> + dev_err(core->dev, "%s boot fail\n", __func__);
> + core->state = VPU_CORE_DEINIT;
> + goto exit;
> + }
> + } else {
> + core->state = VPU_CORE_DEINIT;
> + }
> + } else {
> + if (!list_empty(&core->instances)) {
> + ret = vpu_core_sw_reset(core);
> + if (ret) {
> + dev_err(core->dev, "%s sw_reset fail\n", __func__);
> + core->state = VPU_CORE_HANG;
> + goto exit;
> + }
> + }
> + core->state = VPU_CORE_ACTIVE;
> + }
> +
> +exit:
> + pm_runtime_put_sync(dev);
> + mutex_unlock(&core->lock);
> +
> + vpu_core_resume_work(core);
> + return ret;
> +}
> +
> +static int __maybe_unused vpu_core_suspend(struct device *dev)
> +{
> + struct vpu_core *core = dev_get_drvdata(dev);
> + int ret = 0;
> +
> + if (!core->res->standalone)
> + return 0;
> +
> + mutex_lock(&core->lock);
> + if (core->state == VPU_CORE_ACTIVE) {
> + if (!list_empty(&core->instances)) {
> + ret = vpu_core_snapshot(core);
> + if (ret) {
> + mutex_unlock(&core->lock);
> + return ret;
> + }
> + }
> +
> + core->state = VPU_CORE_SNAPSHOT;
> + }
> + mutex_unlock(&core->lock);
> +
> + vpu_core_cancel_work(core);
> +
> + mutex_lock(&core->lock);
> + vpu_core_put_vpu(core);
> + mutex_unlock(&core->lock);
> + return ret;
> +}
> +
> +static const struct dev_pm_ops vpu_core_pm_ops = {
> + SET_RUNTIME_PM_OPS(vpu_core_runtime_suspend, vpu_core_runtime_resume, NULL)
> + SET_SYSTEM_SLEEP_PM_OPS(vpu_core_suspend, vpu_core_resume)
> +};
> +
> +static struct vpu_core_resources imx8q_enc = {
> + .type = VPU_CORE_TYPE_ENC,
> + .fwname = "vpu/vpu_fw_imx8_enc.bin",
> + .stride = 16,
> + .max_width = 1920,
> + .max_height = 1920,
> + .min_width = 64,
> + .min_height = 48,
> + .step_width = 2,
> + .step_height = 2,
> + .rpc_size = 0x80000,
> + .fwlog_size = 0x80000,
> + .act_size = 0xc0000,
> + .standalone = true,
> +};
> +
> +static struct vpu_core_resources imx8q_dec = {
> + .type = VPU_CORE_TYPE_DEC,
> + .fwname = "vpu/vpu_fw_imx8_dec.bin",
> + .stride = 256,
> + .max_width = 8188,
> + .max_height = 8188,
> + .min_width = 16,
> + .min_height = 16,
> + .step_width = 1,
> + .step_height = 1,
> + .rpc_size = 0x80000,
> + .fwlog_size = 0x80000,
> + .standalone = true,
> +};
> +
> +static const struct of_device_id vpu_core_dt_match[] = {
> + { .compatible = "nxp,imx8q-vpu-encoder", .data = &imx8q_enc },
> + { .compatible = "nxp,imx8q-vpu-decoder", .data = &imx8q_dec },
> + {}
> +};
> +MODULE_DEVICE_TABLE(of, vpu_core_dt_match);
> +
> +static struct platform_driver amphion_vpu_core_driver = {
> + .probe = vpu_core_probe,
> + .remove = vpu_core_remove,
> + .driver = {
> + .name = "amphion-vpu-core",
> + .of_match_table = vpu_core_dt_match,
> + .pm = &vpu_core_pm_ops,
> + },
> +};
> +
> +int __init vpu_core_driver_init(void)
> +{
> + return platform_driver_register(&amphion_vpu_core_driver);
> +}
> +
> +void __exit vpu_core_driver_exit(void)
> +{
> + platform_driver_unregister(&amphion_vpu_core_driver);
> +}
> diff --git a/drivers/media/platform/amphion/vpu_core.h b/drivers/media/platform/amphion/vpu_core.h
> new file mode 100644
> index 000000000000..00a662997da4
> --- /dev/null
> +++ b/drivers/media/platform/amphion/vpu_core.h
> @@ -0,0 +1,15 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * Copyright 2020-2021 NXP
> + */
> +
> +#ifndef _AMPHION_VPU_CORE_H
> +#define _AMPHION_VPU_CORE_H
> +
> +void csr_writel(struct vpu_core *core, u32 reg, u32 val);
> +u32 csr_readl(struct vpu_core *core, u32 reg);
> +int vpu_alloc_dma(struct vpu_core *core, struct vpu_buffer *buf);
> +void vpu_free_dma(struct vpu_buffer *buf);
> +struct vpu_inst *vpu_core_find_instance(struct vpu_core *core, u32 index);
> +
> +#endif
> diff --git a/drivers/media/platform/amphion/vpu_dbg.c b/drivers/media/platform/amphion/vpu_dbg.c
> new file mode 100644
> index 000000000000..2e7e11101f99
> --- /dev/null
> +++ b/drivers/media/platform/amphion/vpu_dbg.c
> @@ -0,0 +1,495 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Copyright 2020-2021 NXP
> + */
> +
> +#include <linux/init.h>
> +#include <linux/device.h>
> +#include <linux/ioctl.h>
> +#include <linux/list.h>
> +#include <linux/module.h>
> +#include <linux/kernel.h>
> +#include <linux/types.h>
> +#include <linux/pm_runtime.h>
> +#include <media/v4l2-device.h>
> +#include <linux/debugfs.h>
> +#include "vpu.h"
> +#include "vpu_defs.h"
> +#include "vpu_helpers.h"
> +#include "vpu_cmds.h"
> +#include "vpu_rpc.h"
> +
> +struct print_buf_desc {
> + u32 start_h_phy;
> + u32 start_h_vir;
> + u32 start_m;
> + u32 bytes;
> + u32 read;
> + u32 write;
> + char buffer[];
> +};
> +
> +static char *vb2_stat_name[] = {
> + [VB2_BUF_STATE_DEQUEUED] = "dequeued",
> + [VB2_BUF_STATE_IN_REQUEST] = "in_request",
> + [VB2_BUF_STATE_PREPARING] = "preparing",
> + [VB2_BUF_STATE_QUEUED] = "queued",
> + [VB2_BUF_STATE_ACTIVE] = "active",
> + [VB2_BUF_STATE_DONE] = "done",
> + [VB2_BUF_STATE_ERROR] = "error",
> +};
> +
> +static char *vpu_stat_name[] = {
> + [VPU_BUF_STATE_IDLE] = "idle",
> + [VPU_BUF_STATE_INUSE] = "inuse",
> + [VPU_BUF_STATE_DECODED] = "decoded",
> + [VPU_BUF_STATE_READY] = "ready",
> + [VPU_BUF_STATE_SKIP] = "skip",
> + [VPU_BUF_STATE_ERROR] = "error",
> +};
> +
> +static int vpu_dbg_instance(struct seq_file *s, void *data)
> +{
> + struct vpu_inst *inst = s->private;
> + char str[128];
> + int num;
> + struct vb2_queue *vq;
> + int i;
> +
> + num = scnprintf(str, sizeof(str), "[%s]\n", vpu_core_type_desc(inst->type));
> + if (seq_write(s, str, num))
> + return 0;
> +
> + num = scnprintf(str, sizeof(str), "tgid = %d, pid = %d\n", inst->tgid, inst->pid);
> + if (seq_write(s, str, num))
> + return 0;
> + num = scnprintf(str, sizeof(str), "state = %d\n", inst->state);
> + if (seq_write(s, str, num))
> + return 0;
> + num = scnprintf(str, sizeof(str),
> + "min_buffer_out = %d, min_buffer_cap = %d\n",
> + inst->min_buffer_out, inst->min_buffer_cap);
> + if (seq_write(s, str, num))
> + return 0;
> +
> +
> + vq = v4l2_m2m_get_src_vq(inst->fh.m2m_ctx);
> + num = scnprintf(str, sizeof(str),
> + "output (%2d, %2d): fmt = %c%c%c%c %d x %d, %d;",
> + vb2_is_streaming(vq),
> + vq->num_buffers,
> + inst->out_format.pixfmt,
> + inst->out_format.pixfmt >> 8,
> + inst->out_format.pixfmt >> 16,
> + inst->out_format.pixfmt >> 24,
> + inst->out_format.width,
> + inst->out_format.height,
> + vq->last_buffer_dequeued);
> + if (seq_write(s, str, num))
> + return 0;
> + for (i = 0; i < inst->out_format.num_planes; i++) {
> + num = scnprintf(str, sizeof(str), " %d(%d)",
> + inst->out_format.sizeimage[i],
> + inst->out_format.bytesperline[i]);
> + if (seq_write(s, str, num))
> + return 0;
> + }
> + if (seq_write(s, "\n", 1))
> + return 0;
> +
> + vq = v4l2_m2m_get_dst_vq(inst->fh.m2m_ctx);
> + num = scnprintf(str, sizeof(str),
> + "capture(%2d, %2d): fmt = %c%c%c%c %d x %d, %d;",
> + vb2_is_streaming(vq),
> + vq->num_buffers,
> + inst->cap_format.pixfmt,
> + inst->cap_format.pixfmt >> 8,
> + inst->cap_format.pixfmt >> 16,
> + inst->cap_format.pixfmt >> 24,
> + inst->cap_format.width,
> + inst->cap_format.height,
> + vq->last_buffer_dequeued);
> + if (seq_write(s, str, num))
> + return 0;
> + for (i = 0; i < inst->cap_format.num_planes; i++) {
> + num = scnprintf(str, sizeof(str), " %d(%d)",
> + inst->cap_format.sizeimage[i],
> + inst->cap_format.bytesperline[i]);
> + if (seq_write(s, str, num))
> + return 0;
> + }
> + if (seq_write(s, "\n", 1))
> + return 0;
> + num = scnprintf(str, sizeof(str), "crop: (%d, %d) %d x %d\n",
> + inst->crop.left,
> + inst->crop.top,
> + inst->crop.width,
> + inst->crop.height);
> + if (seq_write(s, str, num))
> + return 0;
> +
> + vq = v4l2_m2m_get_src_vq(inst->fh.m2m_ctx);
> + for (i = 0; i < vq->num_buffers; i++) {
> + struct vb2_buffer *vb = vq->bufs[i];
> + struct vb2_v4l2_buffer *vbuf = to_vb2_v4l2_buffer(vb);
> + struct vpu_vb2_buffer *vpu_buf = to_vpu_vb2_buffer(vbuf);
> +
> + if (vb->state == VB2_BUF_STATE_DEQUEUED)
> + continue;
> + num = scnprintf(str, sizeof(str),
> + "output [%2d] state = %10s, %8s\n",
> + i, vb2_stat_name[vb->state],
> + vpu_stat_name[vpu_buf->state]);
> + if (seq_write(s, str, num))
> + return 0;
> + }
> +
> + vq = v4l2_m2m_get_dst_vq(inst->fh.m2m_ctx);
> + for (i = 0; i < vq->num_buffers; i++) {
> + struct vb2_buffer *vb = vq->bufs[i];
> + struct vb2_v4l2_buffer *vbuf = to_vb2_v4l2_buffer(vb);
> + struct vpu_vb2_buffer *vpu_buf = to_vpu_vb2_buffer(vbuf);
> +
> + if (vb->state == VB2_BUF_STATE_DEQUEUED)
> + continue;
> + num = scnprintf(str, sizeof(str),
> + "capture[%2d] state = %10s, %8s\n",
> + i, vb2_stat_name[vb->state],
> + vpu_stat_name[vpu_buf->state]);
> + if (seq_write(s, str, num))
> + return 0;
> + }
> +
> + num = scnprintf(str, sizeof(str), "sequence = %d\n", inst->sequence);
> + if (seq_write(s, str, num))
> + return 0;
> +
> + if (inst->use_stream_buffer) {
> + num = scnprintf(str, sizeof(str), "stream_buffer = %d / %d, <%pad, 0x%x>\n",
> + vpu_helper_get_used_space(inst),
> + inst->stream_buffer.length,
> + &inst->stream_buffer.phys,
> + inst->stream_buffer.length);
> + if (seq_write(s, str, num))
> + return 0;
> + }
> + num = scnprintf(str, sizeof(str), "kfifo len = 0x%x\n", kfifo_len(&inst->msg_fifo));
> + if (seq_write(s, str, num))
> + return 0;
> +
> + num = scnprintf(str, sizeof(str), "flow :\n");
> + if (seq_write(s, str, num))
> + return 0;
> +
> + mutex_lock(&inst->core->cmd_lock);
> + for (i = 0; i < ARRAY_SIZE(inst->flows); i++) {
> + u32 idx = (inst->flow_idx + i) % (ARRAY_SIZE(inst->flows));
> +
> + if (!inst->flows[idx])
> + continue;
> + num = scnprintf(str, sizeof(str), "\t[%s]0x%x\n",
> + inst->flows[idx] >= VPU_MSG_ID_NOOP ? "M" : "C",
> + inst->flows[idx]);
> + if (seq_write(s, str, num)) {
> + mutex_unlock(&inst->core->cmd_lock);
> + return 0;
> + }
> + }
> + mutex_unlock(&inst->core->cmd_lock);
> +
> + i = 0;
> + while (true) {
> + num = call_vop(inst, get_debug_info, str, sizeof(str), i++);
> + if (num <= 0)
> + break;
> + if (seq_write(s, str, num))
> + return 0;
> + }
> +
> + return 0;
> +}
> +
> +static int vpu_dbg_core(struct seq_file *s, void *data)
> +{
> + struct vpu_core *core = s->private;
> + struct vpu_shared_addr *iface = core->iface;
> + char str[128];
> + int num;
> +
> + num = scnprintf(str, sizeof(str), "[%s]\n", vpu_core_type_desc(core->type));
> + if (seq_write(s, str, num))
> + return 0;
> +
> + num = scnprintf(str, sizeof(str), "boot_region = <%pad, 0x%x>\n",
> + &core->fw.phys, core->fw.length);
> + if (seq_write(s, str, num))
> + return 0;
> + num = scnprintf(str, sizeof(str), "rpc_region = <%pad, 0x%x> used = 0x%x\n",
> + &core->rpc.phys, core->rpc.length, core->rpc.bytesused);
> + if (seq_write(s, str, num))
> + return 0;
> + num = scnprintf(str, sizeof(str), "fwlog_region = <%pad, 0x%x>\n",
> + &core->log.phys, core->log.length);
> + if (seq_write(s, str, num))
> + return 0;
> +
> + num = scnprintf(str, sizeof(str), "state = %d\n", core->state);
> + if (seq_write(s, str, num))
> + return 0;
> + if (core->state == VPU_CORE_DEINIT)
> + return 0;
> + num = scnprintf(str, sizeof(str), "fw version = %d.%d.%d\n",
> + (core->fw_version >> 16) & 0xff,
> + (core->fw_version >> 8) & 0xff,
> + core->fw_version & 0xff);
> + if (seq_write(s, str, num))
> + return 0;
> + num = scnprintf(str, sizeof(str), "instances = %d/%d (0x%02lx), %d\n",
> + hweight32(core->instance_mask),
> + core->supported_instance_count,
> + core->instance_mask,
> + core->request_count);
> + if (seq_write(s, str, num))
> + return 0;
> + num = scnprintf(str, sizeof(str), "kfifo len = 0x%x\n", kfifo_len(&core->msg_fifo));
> + if (seq_write(s, str, num))
> + return 0;
> + num = scnprintf(str, sizeof(str),
> + "cmd_buf:[0x%x, 0x%x], wptr = 0x%x, rptr = 0x%x\n",
> + iface->cmd_desc->start,
> + iface->cmd_desc->end,
> + iface->cmd_desc->wptr,
> + iface->cmd_desc->rptr);
> + if (seq_write(s, str, num))
> + return 0;
> + num = scnprintf(str, sizeof(str),
> + "msg_buf:[0x%x, 0x%x], wptr = 0x%x, rptr = 0x%x\n",
> + iface->msg_desc->start,
> + iface->msg_desc->end,
> + iface->msg_desc->wptr,
> + iface->msg_desc->rptr);
> + if (seq_write(s, str, num))
> + return 0;
> +
> + return 0;
> +}
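[Editor's note, not part of the patch: the "fw version" line printed by vpu_dbg_core() above unpacks the packed version word byte by byte. A standalone sketch of that decode, with the field layout taken from the scnprintf call:]

```c
#include <assert.h>
#include <stdint.h>

/* Decode the packed firmware version word as printed by vpu_dbg_core():
 * bits 23:16 = major, bits 15:8 = minor, bits 7:0 = patch.
 */
static void fw_version_decode(uint32_t v, unsigned int *major,
			      unsigned int *minor, unsigned int *patch)
{
	*major = (v >> 16) & 0xff;
	*minor = (v >> 8) & 0xff;
	*patch = v & 0xff;
}
```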
> +
> +static int vpu_dbg_fwlog(struct seq_file *s, void *data)
> +{
> + struct vpu_core *core = s->private;
> + struct print_buf_desc *print_buf;
> + int length;
> + u32 rptr;
> + u32 wptr;
> + int ret = 0;
> +
> + if (!core->log.virt || core->state == VPU_CORE_DEINIT)
> + return 0;
> +
> + print_buf = core->log.virt;
> + rptr = print_buf->read;
> + wptr = print_buf->write;
> +
> + if (rptr == wptr)
> + return 0;
> + else if (rptr < wptr)
> + length = wptr - rptr;
> + else
> + length = print_buf->bytes + wptr - rptr;
> +
> + if (s->count + length >= s->size) {
> + s->count = s->size;
> + return 0;
> + }
> +
> + if (rptr + length >= print_buf->bytes) {
> + int num = print_buf->bytes - rptr;
> +
> + if (seq_write(s, print_buf->buffer + rptr, num))
> + ret = -1;
> + length -= num;
> + rptr = 0;
> + }
> +
> + if (length) {
> + if (seq_write(s, print_buf->buffer + rptr, length))
> + ret = -1;
> + rptr += length;
> + }
> + if (!ret)
> + print_buf->read = rptr;
> +
> + return 0;
> +}
> +
> +static int vpu_dbg_inst_open(struct inode *inode, struct file *filp)
> +{
> + return single_open(filp, vpu_dbg_instance, inode->i_private);
> +}
> +
> +static ssize_t vpu_dbg_inst_write(struct file *file,
> + const char __user *user_buf, size_t size, loff_t *ppos)
> +{
> + struct seq_file *s = file->private_data;
> + struct vpu_inst *inst = s->private;
> +
> + vpu_session_debug(inst);
> +
> + return size;
> +}
> +
> +static ssize_t vpu_dbg_core_write(struct file *file,
> + const char __user *user_buf, size_t size, loff_t *ppos)
> +{
> + struct seq_file *s = file->private_data;
> + struct vpu_core *core = s->private;
> +
> + pm_runtime_get_sync(core->dev);
> + mutex_lock(&core->lock);
> + if (core->state != VPU_CORE_DEINIT && !core->instance_mask) {
> + dev_info(core->dev, "reset\n");
> + if (!vpu_core_sw_reset(core)) {
> + core->state = VPU_CORE_ACTIVE;
> + core->hang_mask = 0;
> + }
> + }
> + mutex_unlock(&core->lock);
> + pm_runtime_put_sync(core->dev);
> +
> + return size;
> +}
> +
> +static int vpu_dbg_core_open(struct inode *inode, struct file *filp)
> +{
> + return single_open(filp, vpu_dbg_core, inode->i_private);
> +}
> +
> +static int vpu_dbg_fwlog_open(struct inode *inode, struct file *filp)
> +{
> + return single_open(filp, vpu_dbg_fwlog, inode->i_private);
> +}
> +
> +static const struct file_operations vpu_dbg_inst_fops = {
> + .owner = THIS_MODULE,
> + .open = vpu_dbg_inst_open,
> + .release = single_release,
> + .read = seq_read,
> + .write = vpu_dbg_inst_write,
> +};
> +
> +static const struct file_operations vpu_dbg_core_fops = {
> + .owner = THIS_MODULE,
> + .open = vpu_dbg_core_open,
> + .release = single_release,
> + .read = seq_read,
> + .write = vpu_dbg_core_write,
> +};
> +
> +static const struct file_operations vpu_dbg_fwlog_fops = {
> + .owner = THIS_MODULE,
> + .open = vpu_dbg_fwlog_open,
> + .release = single_release,
> + .read = seq_read,
> +};
> +
> +int vpu_inst_create_dbgfs_file(struct vpu_inst *inst)
> +{
> + struct vpu_dev *vpu;
> + char name[64];
> +
> + if (!inst || !inst->core || !inst->core->vpu)
> + return -EINVAL;
> +
> + vpu = inst->core->vpu;
> + if (!vpu->debugfs)
> + return -EINVAL;
> +
> + if (inst->debugfs)
> + return 0;
> +
> + scnprintf(name, sizeof(name), "instance.%d.%d",
> + inst->core->id, inst->id);
> + inst->debugfs = debugfs_create_file(name, 0644,
> + vpu->debugfs,
> + inst,
> + &vpu_dbg_inst_fops);
> + if (!inst->debugfs) {
> + dev_err(inst->dev, "vpu create debugfs %s fail\n", name);
> + return -EINVAL;
> + }
> +
> + return 0;
> +}
> +
> +int vpu_inst_remove_dbgfs_file(struct vpu_inst *inst)
> +{
> + if (!inst)
> + return 0;
> +
> + debugfs_remove(inst->debugfs);
> + inst->debugfs = NULL;
> +
> + return 0;
> +}
> +
> +int vpu_core_create_dbgfs_file(struct vpu_core *core)
> +{
> + struct vpu_dev *vpu;
> + char name[64];
> +
> + if (!core || !core->vpu)
> + return -EINVAL;
> +
> + vpu = core->vpu;
> + if (!vpu->debugfs)
> + return -EINVAL;
> +
> + if (!core->debugfs) {
> + scnprintf(name, sizeof(name), "core.%d", core->id);
> + core->debugfs = debugfs_create_file(name, 0644,
> + vpu->debugfs,
> + core,
> + &vpu_dbg_core_fops);
> + if (!core->debugfs) {
> + dev_err(core->dev, "vpu create debugfs %s fail\n", name);
> + return -EINVAL;
> + }
> + }
> + if (!core->debugfs_fwlog) {
> + scnprintf(name, sizeof(name), "fwlog.%d", core->id);
> + core->debugfs_fwlog = debugfs_create_file(name, 0444,
> + vpu->debugfs,
> + core,
> + &vpu_dbg_fwlog_fops);
> + if (!core->debugfs_fwlog) {
> + dev_err(core->dev, "vpu create debugfs %s fail\n", name);
> + return -EINVAL;
> + }
> + }
> +
> + return 0;
> +}
> +
> +int vpu_core_remove_dbgfs_file(struct vpu_core *core)
> +{
> + if (!core)
> + return 0;
> + debugfs_remove(core->debugfs);
> + core->debugfs = NULL;
> + debugfs_remove(core->debugfs_fwlog);
> + core->debugfs_fwlog = NULL;
> +
> + return 0;
> +}
> +
> +void vpu_inst_record_flow(struct vpu_inst *inst, u32 flow)
> +{
> + if (!inst)
> + return;
> +
> + inst->flows[inst->flow_idx] = flow;
> + inst->flow_idx = (inst->flow_idx + 1) % (ARRAY_SIZE(inst->flows));
> +}
> diff --git a/drivers/media/platform/amphion/vpu_rpc.c b/drivers/media/platform/amphion/vpu_rpc.c
> new file mode 100644
> index 000000000000..7b5e9177e010
> --- /dev/null
> +++ b/drivers/media/platform/amphion/vpu_rpc.c
> @@ -0,0 +1,279 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Copyright 2020-2021 NXP
> + */
> +
> +#include <linux/init.h>
> +#include <linux/interconnect.h>
> +#include <linux/ioctl.h>
> +#include <linux/list.h>
> +#include <linux/kernel.h>
> +#include <linux/module.h>
> +#include <linux/of_device.h>
> +#include <linux/of_address.h>
> +#include <linux/platform_device.h>
> +#include <linux/firmware/imx/ipc.h>
> +#include <linux/firmware/imx/svc/misc.h>
> +#include "vpu.h"
> +#include "vpu_rpc.h"
> +#include "vpu_imx8q.h"
> +#include "vpu_windsor.h"
> +#include "vpu_malone.h"
> +
> +u32 vpu_iface_check_memory_region(struct vpu_core *core, dma_addr_t addr, u32 size)
> +{
> + struct vpu_iface_ops *ops = vpu_core_get_iface(core);
> +
> + if (!ops || !ops->check_memory_region)
> + return VPU_CORE_MEMORY_INVALID;
> +
> + return ops->check_memory_region(core->fw.phys, addr, size);
> +}
> +
> +static u32 vpu_rpc_check_buffer_space(struct vpu_rpc_buffer_desc *desc, bool write)
> +{
> + u32 ptr1;
> + u32 ptr2;
> + u32 size;
> +
> + WARN_ON(!desc);
> +
> + size = desc->end - desc->start;
> + if (write) {
> + ptr1 = desc->wptr;
> + ptr2 = desc->rptr;
> + } else {
> + ptr1 = desc->rptr;
> + ptr2 = desc->wptr;
> + }
> +
> + if (ptr1 == ptr2)
> + return write ? size : 0;
> +
> + return (ptr2 + size - ptr1) % size;
> +}
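[Editor's note, not part of the patch: the ring-buffer arithmetic above is easiest to check in isolation. A self-contained mirror of vpu_rpc_check_buffer_space() — for a write it returns the free space (wptr forward to rptr), for a read the used space (rptr forward to wptr), and an empty ring (wptr == rptr) reports the full size free but 0 readable. Note the producer relies on the extra 16-byte headroom checked in vpu_rpc_send_cmd_buf() to keep wptr from ever catching rptr:]

```c
#include <assert.h>
#include <stdint.h>

/* Distance from ptr1 forward to ptr2 in a ring spanning [start, end).
 * write = 1: free space (wptr -> rptr); write = 0: used space (rptr -> wptr).
 */
static uint32_t ring_space(uint32_t start, uint32_t end,
			   uint32_t rptr, uint32_t wptr, int write)
{
	uint32_t size = end - start;
	uint32_t ptr1 = write ? wptr : rptr;
	uint32_t ptr2 = write ? rptr : wptr;

	if (ptr1 == ptr2)
		return write ? size : 0;
	return (ptr2 + size - ptr1) % size;
}
```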
> +
> +static int vpu_rpc_send_cmd_buf(struct vpu_shared_addr *shared,
> + struct vpu_rpc_event *cmd)
> +{
> + struct vpu_rpc_buffer_desc *desc;
> + u32 space = 0;
> + u32 *data;
> + u32 wptr;
> + u32 i;
> +
> + WARN_ON(!shared || !shared->cmd_mem_vir || !cmd);
> +
> + desc = shared->cmd_desc;
> + space = vpu_rpc_check_buffer_space(desc, true);
> + if (space < (((cmd->hdr.num + 1) << 2) + 16)) {
> + pr_err("no space in cmd buffer for [%d] cmd %d\n",
> + cmd->hdr.index, cmd->hdr.id);
> + return -EINVAL;
> + }
> + wptr = desc->wptr;
> + data = (u32 *)(shared->cmd_mem_vir + desc->wptr - desc->start);
> + *data = 0;
> + *data |= ((cmd->hdr.index & 0xff) << 24);
> + *data |= ((cmd->hdr.num & 0xff) << 16);
> + *data |= (cmd->hdr.id & 0x3fff);
> + wptr += 4;
> + data++;
> + if (wptr >= desc->end) {
> + wptr = desc->start;
> + data = shared->cmd_mem_vir;
> + }
> +
> + for (i = 0; i < cmd->hdr.num; i++) {
> + *data = cmd->data[i];
> + wptr += 4;
> + data++;
> + if (wptr >= desc->end) {
> + wptr = desc->start;
> + data = shared->cmd_mem_vir;
> + }
> + }
> +
> + /* update wptr after data is written */
> + mb();
> + desc->wptr = wptr;
> +
> + return 0;
> +}
> +
> +static bool vpu_rpc_check_msg(struct vpu_shared_addr *shared)
> +{
> + struct vpu_rpc_buffer_desc *desc;
> + u32 space = 0;
> + u32 msgword;
> + u32 msgnum;
> +
> + WARN_ON(!shared || !shared->msg_desc);
> +
> + desc = shared->msg_desc;
> + space = vpu_rpc_check_buffer_space(desc, false);
> + space = (space >> 2);
> +
> + if (space) {
> + msgword = *(u32 *)(shared->msg_mem_vir + desc->rptr - desc->start);
> + msgnum = (msgword & 0xff0000) >> 16;
> + if (msgnum <= space)
> + return true;
> + }
> +
> + return false;
> +}
> +
> +static int vpu_rpc_receive_msg_buf(struct vpu_shared_addr *shared, struct vpu_rpc_event *msg)
> +{
> + struct vpu_rpc_buffer_desc *desc;
> + u32 *data;
> + u32 msgword;
> + u32 rptr;
> + u32 i;
> +
> + WARN_ON(!shared || !shared->msg_desc || !msg);
> +
> + if (!vpu_rpc_check_msg(shared))
> + return -EINVAL;
> +
> + desc = shared->msg_desc;
> + data = (u32 *)(shared->msg_mem_vir + desc->rptr - desc->start);
> + rptr = desc->rptr;
> + msgword = *data;
> + data++;
> + rptr += 4;
> + if (rptr >= desc->end) {
> + rptr = desc->start;
> + data = shared->msg_mem_vir;
> + }
> +
> + msg->hdr.index = (msgword >> 24) & 0xff;
> + msg->hdr.num = (msgword >> 16) & 0xff;
> + msg->hdr.id = msgword & 0x3fff;
> +
> + if (msg->hdr.num > ARRAY_SIZE(msg->data)) {
> + pr_err("msg(%d) data length(%d) is out of range\n",
> + msg->hdr.id, msg->hdr.num);
> + return -EINVAL;
> + }
> +
> + for (i = 0; i < msg->hdr.num; i++) {
> + msg->data[i] = *data;
> + data++;
> + rptr += 4;
> + if (rptr >= desc->end) {
> + rptr = desc->start;
> + data = shared->msg_mem_vir;
> + }
> + }
> +
> + /* update rptr after data is read */
> + mb();
> + desc->rptr = rptr;
> +
> + return 0;
> +}
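[Editor's note, not part of the patch: the first word of each command/message in the RPC rings carries the header, packed as shown in vpu_rpc_send_cmd_buf() and unpacked symmetrically in vpu_rpc_receive_msg_buf(): bits 31:24 = instance index, 23:16 = data word count, 13:0 = id. A round-trip sketch of that layout:]

```c
#include <assert.h>
#include <stdint.h>

/* Pack/unpack the RPC header word (layout taken from the send/receive
 * paths above: index in bits 31:24, word count in 23:16, id in 13:0).
 */
static uint32_t rpc_hdr_pack(uint32_t index, uint32_t num, uint32_t id)
{
	return ((index & 0xff) << 24) | ((num & 0xff) << 16) | (id & 0x3fff);
}

static void rpc_hdr_unpack(uint32_t word, uint32_t *index,
			   uint32_t *num, uint32_t *id)
{
	*index = (word >> 24) & 0xff;
	*num = (word >> 16) & 0xff;
	*id = word & 0x3fff;
}
```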
> +
> +struct vpu_iface_ops imx8q_rpc_ops[] = {
> + [VPU_CORE_TYPE_ENC] = {
> + .check_codec = vpu_imx8q_check_codec,
> + .check_fmt = vpu_imx8q_check_fmt,
> + .boot_core = vpu_imx8q_boot_core,
> + .get_power_state = vpu_imx8q_get_power_state,
> + .on_firmware_loaded = vpu_imx8q_on_firmware_loaded,
> + .get_data_size = vpu_windsor_get_data_size,
> + .check_memory_region = vpu_imx8q_check_memory_region,
> + .init_rpc = vpu_windsor_init_rpc,
> + .set_log_buf = vpu_windsor_set_log_buf,
> + .set_system_cfg = vpu_windsor_set_system_cfg,
> + .get_version = vpu_windsor_get_version,
> + .send_cmd_buf = vpu_rpc_send_cmd_buf,
> + .receive_msg_buf = vpu_rpc_receive_msg_buf,
> + .pack_cmd = vpu_windsor_pack_cmd,
> + .convert_msg_id = vpu_windsor_convert_msg_id,
> + .unpack_msg_data = vpu_windsor_unpack_msg_data,
> + .config_memory_resource = vpu_windsor_config_memory_resource,
> + .get_stream_buffer_size = vpu_windsor_get_stream_buffer_size,
> + .config_stream_buffer = vpu_windsor_config_stream_buffer,
> + .get_stream_buffer_desc = vpu_windsor_get_stream_buffer_desc,
> + .update_stream_buffer = vpu_windsor_update_stream_buffer,
> + .set_encode_params = vpu_windsor_set_encode_params,
> + .input_frame = vpu_windsor_input_frame,
> + .get_max_instance_count = vpu_windsor_get_max_instance_count,
> + },
> + [VPU_CORE_TYPE_DEC] = {
> + .check_codec = vpu_imx8q_check_codec,
> + .check_fmt = vpu_imx8q_check_fmt,
> + .boot_core = vpu_imx8q_boot_core,
> + .get_power_state = vpu_imx8q_get_power_state,
> + .on_firmware_loaded = vpu_imx8q_on_firmware_loaded,
> + .get_data_size = vpu_malone_get_data_size,
> + .check_memory_region = vpu_imx8q_check_memory_region,
> + .init_rpc = vpu_malone_init_rpc,
> + .set_log_buf = vpu_malone_set_log_buf,
> + .set_system_cfg = vpu_malone_set_system_cfg,
> + .get_version = vpu_malone_get_version,
> + .send_cmd_buf = vpu_rpc_send_cmd_buf,
> + .receive_msg_buf = vpu_rpc_receive_msg_buf,
> + .get_stream_buffer_size = vpu_malone_get_stream_buffer_size,
> + .config_stream_buffer = vpu_malone_config_stream_buffer,
> + .set_decode_params = vpu_malone_set_decode_params,
> + .pack_cmd = vpu_malone_pack_cmd,
> + .convert_msg_id = vpu_malone_convert_msg_id,
> + .unpack_msg_data = vpu_malone_unpack_msg_data,
> + .get_stream_buffer_desc = vpu_malone_get_stream_buffer_desc,
> + .update_stream_buffer = vpu_malone_update_stream_buffer,
> + .add_scode = vpu_malone_add_scode,
> + .input_frame = vpu_malone_input_frame,
> + .pre_send_cmd = vpu_malone_pre_cmd,
> + .post_send_cmd = vpu_malone_post_cmd,
> + .init_instance = vpu_malone_init_instance,
> + .get_max_instance_count = vpu_malone_get_max_instance_count,
> + },
> +};
> +
> +static struct vpu_iface_ops *vpu_get_iface(struct vpu_dev *vpu, enum vpu_core_type type)
> +{
> + struct vpu_iface_ops *rpc_ops = NULL;
> + u32 size = 0;
> +
> + WARN_ON(!vpu || !vpu->res);
> +
> + switch (vpu->res->plat_type) {
> + case IMX8QXP:
> + case IMX8QM:
> + rpc_ops = imx8q_rpc_ops;
> + size = ARRAY_SIZE(imx8q_rpc_ops);
> + break;
> + default:
> + return NULL;
> + }
> +
> + if (type >= size)
> + return NULL;
> +
> + return &rpc_ops[type];
> +}
> +
> +struct vpu_iface_ops *vpu_core_get_iface(struct vpu_core *core)
> +{
> + WARN_ON(!core || !core->vpu);
> +
> + return vpu_get_iface(core->vpu, core->type);
> +}
> +
> +struct vpu_iface_ops *vpu_inst_get_iface(struct vpu_inst *inst)
> +{
> + WARN_ON(!inst || !inst->vpu);
> +
> + if (inst->core)
> + return vpu_core_get_iface(inst->core);
> +
> + return vpu_get_iface(inst->vpu, inst->type);
> +}
> diff --git a/drivers/media/platform/amphion/vpu_rpc.h b/drivers/media/platform/amphion/vpu_rpc.h
> new file mode 100644
> index 000000000000..abe998e5a5be
> --- /dev/null
> +++ b/drivers/media/platform/amphion/vpu_rpc.h
> @@ -0,0 +1,464 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * Copyright 2020-2021 NXP
> + */
> +
> +#ifndef _AMPHION_VPU_RPC_H
> +#define _AMPHION_VPU_RPC_H
> +
> +#include <media/videobuf2-core.h>
> +#include "vpu_codec.h"
> +
> +struct vpu_rpc_buffer_desc {
> + u32 wptr;
> + u32 rptr;
> + u32 start;
> + u32 end;
> +};
> +
> +struct vpu_shared_addr {
> + void *iface;
> + struct vpu_rpc_buffer_desc *cmd_desc;
> + void *cmd_mem_vir;
> + struct vpu_rpc_buffer_desc *msg_desc;
> + void *msg_mem_vir;
> +
> + unsigned long boot_addr;
> + struct vpu_core *core;
> + void *priv;
> +};
> +
> +struct vpu_rpc_event_header {
> + u32 index;
> + u32 id;
> + u32 num;
> +};
> +
> +struct vpu_rpc_event {
> + struct vpu_rpc_event_header hdr;
> + u32 data[128];
> +};
> +
> +struct vpu_iface_ops {
> + bool (*check_codec)(enum vpu_core_type type);
> + bool (*check_fmt)(enum vpu_core_type type, u32 pixelfmt);
> + u32 (*get_data_size)(void);
> + u32 (*check_memory_region)(dma_addr_t base, dma_addr_t addr, u32 size);
> + int (*boot_core)(struct vpu_core *core);
> + int (*shutdown_core)(struct vpu_core *core);
> + int (*restore_core)(struct vpu_core *core);
> + int (*get_power_state)(struct vpu_core *core);
> + int (*on_firmware_loaded)(struct vpu_core *core);
> + void (*init_rpc)(struct vpu_shared_addr *shared,
> + struct vpu_buffer *rpc, dma_addr_t boot_addr);
> + void (*set_log_buf)(struct vpu_shared_addr *shared,
> + struct vpu_buffer *log);
> + void (*set_system_cfg)(struct vpu_shared_addr *shared,
> + u32 regs_base, void __iomem *regs, u32 index);
> + void (*set_stream_cfg)(struct vpu_shared_addr *shared, u32 index);
> + u32 (*get_version)(struct vpu_shared_addr *shared);
> + u32 (*get_max_instance_count)(struct vpu_shared_addr *shared);
> + int (*get_stream_buffer_size)(struct vpu_shared_addr *shared);
> + int (*send_cmd_buf)(struct vpu_shared_addr *shared,
> + struct vpu_rpc_event *cmd);
> + int (*receive_msg_buf)(struct vpu_shared_addr *shared,
> + struct vpu_rpc_event *msg);
> + int (*pack_cmd)(struct vpu_rpc_event *pkt, u32 index, u32 id, void *data);
> + int (*convert_msg_id)(u32 msg_id);
> + int (*unpack_msg_data)(struct vpu_rpc_event *pkt, void *data);
> + int (*input_frame)(struct vpu_shared_addr *shared,
> + struct vpu_inst *inst, struct vb2_buffer *vb);
> + int (*config_memory_resource)(struct vpu_shared_addr *shared,
> + u32 instance,
> + u32 type,
> + u32 index,
> + struct vpu_buffer *buf);
> + int (*config_stream_buffer)(struct vpu_shared_addr *shared,
> + u32 instance,
> + struct vpu_buffer *buf);
> + int (*update_stream_buffer)(struct vpu_shared_addr *shared,
> + u32 instance, u32 ptr, bool write);
> + int (*get_stream_buffer_desc)(struct vpu_shared_addr *shared,
> + u32 instance,
> + struct vpu_rpc_buffer_desc *desc);
> + int (*set_encode_params)(struct vpu_shared_addr *shared,
> + u32 instance,
> + struct vpu_encode_params *params, u32 update);
> + int (*set_decode_params)(struct vpu_shared_addr *shared,
> + u32 instance,
> + struct vpu_decode_params *params, u32 update);
> + int (*add_scode)(struct vpu_shared_addr *shared,
> + u32 instance,
> + struct vpu_buffer *stream_buffer,
> + u32 pixelformat,
> + u32 scode_type);
> + int (*pre_send_cmd)(struct vpu_shared_addr *shared, u32 instance);
> + int (*post_send_cmd)(struct vpu_shared_addr *shared, u32 instance);
> + int (*init_instance)(struct vpu_shared_addr *shared, u32 instance);
> +};
> +
> +enum {
> + VPU_CORE_MEMORY_INVALID = 0,
> + VPU_CORE_MEMORY_CACHED,
> + VPU_CORE_MEMORY_UNCACHED
> +};
> +
> +struct vpu_rpc_region_t {
> + dma_addr_t start;
> + dma_addr_t end;
> + dma_addr_t type;
> +};
> +
> +struct vpu_iface_ops *vpu_core_get_iface(struct vpu_core *core);
> +struct vpu_iface_ops *vpu_inst_get_iface(struct vpu_inst *inst);
> +u32 vpu_iface_check_memory_region(struct vpu_core *core, dma_addr_t addr, u32 size);
> +
> +static inline bool vpu_iface_check_codec(struct vpu_core *core)
> +{
> + struct vpu_iface_ops *ops = vpu_core_get_iface(core);
> +
> + if (ops && ops->check_codec)
> + return ops->check_codec(core->type);
> +
> + return true;
> +}
> +
> +static inline bool vpu_iface_check_format(struct vpu_inst *inst, u32 pixelfmt)
> +{
> + struct vpu_iface_ops *ops = vpu_inst_get_iface(inst);
> +
> + if (ops && ops->check_fmt)
> + return ops->check_fmt(inst->type, pixelfmt);
> +
> + return true;
> +}
> +
> +static inline int vpu_iface_boot_core(struct vpu_core *core)
> +{
> + struct vpu_iface_ops *ops = vpu_core_get_iface(core);
> +
> + if (ops && ops->boot_core)
> + return ops->boot_core(core);
> + return 0;
> +}
> +
> +static inline int vpu_iface_get_power_state(struct vpu_core *core)
> +{
> + struct vpu_iface_ops *ops = vpu_core_get_iface(core);
> +
> + if (ops && ops->get_power_state)
> + return ops->get_power_state(core);
> + return 1;
> +}
> +
> +static inline int vpu_iface_shutdown_core(struct vpu_core *core)
> +{
> + struct vpu_iface_ops *ops = vpu_core_get_iface(core);
> +
> + if (ops && ops->shutdown_core)
> + return ops->shutdown_core(core);
> + return 0;
> +}
> +
> +static inline int vpu_iface_restore_core(struct vpu_core *core)
> +{
> + struct vpu_iface_ops *ops = vpu_core_get_iface(core);
> +
> + if (ops && ops->restore_core)
> + return ops->restore_core(core);
> + return 0;
> +}
> +
> +static inline int vpu_iface_on_firmware_loaded(struct vpu_core *core)
> +{
> + struct vpu_iface_ops *ops = vpu_core_get_iface(core);
> +
> + if (ops && ops->on_firmware_loaded)
> + return ops->on_firmware_loaded(core);
> +
> + return 0;
> +}
> +
> +static inline u32 vpu_iface_get_data_size(struct vpu_core *core)
> +{
> + struct vpu_iface_ops *ops = vpu_core_get_iface(core);
> +
> + if (!ops || !ops->get_data_size)
> + return 0;
> +
> + return ops->get_data_size();
> +}
> +
> +static inline int vpu_iface_init(struct vpu_core *core,
> + struct vpu_shared_addr *shared,
> + struct vpu_buffer *rpc,
> + dma_addr_t boot_addr)
> +{
> + struct vpu_iface_ops *ops = vpu_core_get_iface(core);
> +
> + if (!ops || !ops->init_rpc)
> + return -EINVAL;
> +
> + ops->init_rpc(shared, rpc, boot_addr);
> + core->iface = shared;
> + shared->core = core;
> + if (rpc->bytesused > rpc->length)
> + return -ENOSPC;
> + return 0;
> +}
> +
> +static inline int vpu_iface_set_log_buf(struct vpu_core *core,
> + struct vpu_buffer *log)
> +{
> + struct vpu_iface_ops *ops = vpu_core_get_iface(core);
> +
> + if (!ops)
> + return -EINVAL;
> +
> + if (ops->set_log_buf)
> + ops->set_log_buf(core->iface, log);
> +
> + return 0;
> +}
> +
> +static inline int vpu_iface_config_system(struct vpu_core *core,
> + u32 regs_base, void __iomem *regs)
> +{
> + struct vpu_iface_ops *ops = vpu_core_get_iface(core);
> +
> + if (!ops)
> + return -EINVAL;
> + if (ops->set_system_cfg)
> + ops->set_system_cfg(core->iface, regs_base, regs, core->id);
> +
> + return 0;
> +}
> +
> +static inline int vpu_iface_get_stream_buffer_size(struct vpu_core *core)
> +{
> + struct vpu_iface_ops *ops = vpu_core_get_iface(core);
> +
> + if (!ops || !ops->get_stream_buffer_size)
> + return 0;
> +
> + return ops->get_stream_buffer_size(core->iface);
> +}
> +
> +static inline int vpu_iface_config_stream(struct vpu_inst *inst)
> +{
> + struct vpu_iface_ops *ops = vpu_core_get_iface(inst->core);
> +
> + WARN_ON(inst->id < 0);
> + if (!ops)
> + return -EINVAL;
> + if (ops->set_stream_cfg)
> + ops->set_stream_cfg(inst->core->iface, inst->id);
> + return 0;
> +}
> +
> +static inline int vpu_iface_send_cmd(struct vpu_core *core, struct vpu_rpc_event *cmd)
> +{
> + struct vpu_iface_ops *ops = vpu_core_get_iface(core);
> +
> + if (!ops || !ops->send_cmd_buf)
> + return -EINVAL;
> +
> + return ops->send_cmd_buf(core->iface, cmd);
> +}
> +
> +static inline int vpu_iface_receive_msg(struct vpu_core *core, struct vpu_rpc_event *msg)
> +{
> + struct vpu_iface_ops *ops = vpu_core_get_iface(core);
> +
> + if (!ops || !ops->receive_msg_buf)
> + return -EINVAL;
> +
> + return ops->receive_msg_buf(core->iface, msg);
> +}
> +
> +static inline int vpu_iface_pack_cmd(struct vpu_core *core,
> + struct vpu_rpc_event *pkt,
> + u32 index, u32 id, void *data)
> +{
> + struct vpu_iface_ops *ops = vpu_core_get_iface(core);
> +
> + if (!ops || !ops->pack_cmd)
> + return -EINVAL;
> + return ops->pack_cmd(pkt, index, id, data);
> +}
> +
> +static inline int vpu_iface_convert_msg_id(struct vpu_core *core, u32 msg_id)
> +{
> + struct vpu_iface_ops *ops = vpu_core_get_iface(core);
> +
> + if (!ops || !ops->convert_msg_id)
> + return -EINVAL;
> +
> + return ops->convert_msg_id(msg_id);
> +}
> +
> +static inline int vpu_iface_unpack_msg_data(struct vpu_core *core,
> + struct vpu_rpc_event *pkt, void *data)
> +{
> + struct vpu_iface_ops *ops = vpu_core_get_iface(core);
> +
> + if (!ops || !ops->unpack_msg_data)
> + return -EINVAL;
> +
> + return ops->unpack_msg_data(pkt, data);
> +}
> +
> +static inline int vpu_iface_input_frame(struct vpu_inst *inst,
> + struct vb2_buffer *vb)
> +{
> + struct vpu_iface_ops *ops = vpu_core_get_iface(inst->core);
> +
> + if (!ops || !ops->input_frame)
> + return -EINVAL;
> +
> + return ops->input_frame(inst->core->iface, inst, vb);
> +}
> +
> +static inline int vpu_iface_config_memory_resource(struct vpu_inst *inst,
> + u32 type, u32 index, struct vpu_buffer *buf)
> +{
> + struct vpu_iface_ops *ops = vpu_core_get_iface(inst->core);
> +
> + WARN_ON(inst->id < 0);
> + if (!ops || !ops->config_memory_resource)
> + return -EINVAL;
> +
> + return ops->config_memory_resource(inst->core->iface,
> + inst->id,
> + type, index, buf);
> +}
> +
> +static inline int vpu_iface_config_stream_buffer(struct vpu_inst *inst,
> + struct vpu_buffer *buf)
> +{
> + struct vpu_iface_ops *ops = vpu_core_get_iface(inst->core);
> +
> + WARN_ON(inst->id < 0);
> + if (!ops || !ops->config_stream_buffer)
> + return -EINVAL;
> +
> + return ops->config_stream_buffer(inst->core->iface, inst->id, buf);
> +}
> +
> +static inline int vpu_iface_update_stream_buffer(struct vpu_inst *inst,
> + u32 ptr, bool write)
> +{
> + struct vpu_iface_ops *ops = vpu_core_get_iface(inst->core);
> +
> + WARN_ON(inst->id < 0);
> + if (!ops || !ops->update_stream_buffer)
> + return -EINVAL;
> +
> + return ops->update_stream_buffer(inst->core->iface, inst->id, ptr, write);
> +}
> +
> +static inline int vpu_iface_get_stream_buffer_desc(struct vpu_inst *inst,
> + struct vpu_rpc_buffer_desc *desc)
> +{
> + struct vpu_iface_ops *ops = vpu_core_get_iface(inst->core);
> +
> + WARN_ON(inst->id < 0);
> + if (!ops || !ops->get_stream_buffer_desc)
> + return -EINVAL;
> +
> + if (!desc)
> + return 0;
> +
> + return ops->get_stream_buffer_desc(inst->core->iface, inst->id, desc);
> +}
> +
> +static inline u32 vpu_iface_get_version(struct vpu_core *core)
> +{
> + struct vpu_iface_ops *ops = vpu_core_get_iface(core);
> +
> + if (!ops || !ops->get_version)
> + return 0;
> +
> + return ops->get_version(core->iface);
> +}
> +
> +static inline u32 vpu_iface_get_max_instance_count(struct vpu_core *core)
> +{
> + struct vpu_iface_ops *ops = vpu_core_get_iface(core);
> +
> + if (!ops || !ops->get_max_instance_count)
> + return 0;
> +
> + return ops->get_max_instance_count(core->iface);
> +}
> +
> +static inline int vpu_iface_set_encode_params(struct vpu_inst *inst,
> + struct vpu_encode_params *params, u32 update)
> +{
> + struct vpu_iface_ops *ops = vpu_core_get_iface(inst->core);
> +
> + WARN_ON(inst->id < 0);
> + if (!ops || !ops->set_encode_params)
> + return -EINVAL;
> +
> + return ops->set_encode_params(inst->core->iface, inst->id, params, update);
> +}
> +
> +static inline int vpu_iface_set_decode_params(struct vpu_inst *inst,
> + struct vpu_decode_params *params, u32 update)
> +{
> + struct vpu_iface_ops *ops = vpu_core_get_iface(inst->core);
> +
> + WARN_ON(inst->id < 0);
> + if (!ops || !ops->set_decode_params)
> + return -EINVAL;
> +
> + return ops->set_decode_params(inst->core->iface, inst->id, params, update);
> +}
> +
> +static inline int vpu_iface_add_scode(struct vpu_inst *inst, u32 scode_type)
> +{
> + struct vpu_iface_ops *ops = vpu_core_get_iface(inst->core);
> +
> + WARN_ON(inst->id < 0);
> + if (!ops || !ops->add_scode)
> + return -EINVAL;
> +
> + return ops->add_scode(inst->core->iface, inst->id,
> + &inst->stream_buffer,
> + inst->out_format.pixfmt,
> + scode_type);
> +}
> +
> +static inline int vpu_iface_pre_send_cmd(struct vpu_inst *inst)
> +{
> + struct vpu_iface_ops *ops = vpu_core_get_iface(inst->core);
> +
> + WARN_ON(inst->id < 0);
> + if (ops && ops->pre_send_cmd)
> + return ops->pre_send_cmd(inst->core->iface, inst->id);
> + return 0;
> +}
> +
> +static inline int vpu_iface_post_send_cmd(struct vpu_inst *inst)
> +{
> + struct vpu_iface_ops *ops = vpu_core_get_iface(inst->core);
> +
> + WARN_ON(inst->id < 0);
> + if (ops && ops->post_send_cmd)
> + return ops->post_send_cmd(inst->core->iface, inst->id);
> + return 0;
> +}
> +
> +static inline int vpu_iface_init_instance(struct vpu_inst *inst)
> +{
> + struct vpu_iface_ops *ops = vpu_core_get_iface(inst->core);
> +
> + WARN_ON(inst->id < 0);
> + if (ops && ops->init_instance)
> + return ops->init_instance(inst->core->iface, inst->id);
> +
> + return 0;
> +}
> +
> +#endif
>


2021-12-02 09:14:02

by Ming Qian

Subject: RE: [EXT] Re: [PATCH v13 04/13] media: amphion: add vpu core driver

> -----Original Message-----
> From: Hans Verkuil [mailto:[email protected]]
> Sent: Thursday, December 2, 2021 5:05 PM
> To: Ming Qian <[email protected]>; [email protected];
> [email protected]; [email protected]; [email protected]
> Cc: [email protected]; [email protected]; dl-linux-imx
> <[email protected]>; Aisheng Dong <[email protected]>;
> [email protected]; [email protected];
> [email protected]; [email protected]
> Subject: [EXT] Re: [PATCH v13 04/13] media: amphion: add vpu core driver
>
> On 30/11/2021 10:48, Ming Qian wrote:
> > The vpu supports encoder and decoder.
> > it needs mu core to handle it.
>
> "mu core"? Do you mean "vpu core"? If not, then what is a "mu core"?
>
> Regards,
>
> Hans

Yes, it means "vpu core"; we often call it "mu" internally.

>
> > core will run either encoder or decoder firmware.
> >
> > This driver is for support the vpu core.
> >
> > Signed-off-by: Ming Qian <[email protected]>
> > Signed-off-by: Shijie Qin <[email protected]>
> > Signed-off-by: Zhou Peng <[email protected]>
> > ---
> > drivers/media/platform/amphion/vpu_codec.h | 67 ++
> > drivers/media/platform/amphion/vpu_core.c | 906
> +++++++++++++++++++++
> > drivers/media/platform/amphion/vpu_core.h | 15 +
> > drivers/media/platform/amphion/vpu_dbg.c | 495 +++++++++++
> > drivers/media/platform/amphion/vpu_rpc.c | 279 +++++++
> > drivers/media/platform/amphion/vpu_rpc.h | 464 +++++++++++
> > 6 files changed, 2226 insertions(+)
> > create mode 100644 drivers/media/platform/amphion/vpu_codec.h
> > create mode 100644 drivers/media/platform/amphion/vpu_core.c
> > create mode 100644 drivers/media/platform/amphion/vpu_core.h
> > create mode 100644 drivers/media/platform/amphion/vpu_dbg.c
> > create mode 100644 drivers/media/platform/amphion/vpu_rpc.c
> > create mode 100644 drivers/media/platform/amphion/vpu_rpc.h
> >
> > diff --git a/drivers/media/platform/amphion/vpu_codec.h
> b/drivers/media/platform/amphion/vpu_codec.h
> > new file mode 100644
> > index 000000000000..bf8920e9f6d7
> > --- /dev/null
> > +++ b/drivers/media/platform/amphion/vpu_codec.h
> > @@ -0,0 +1,67 @@
> > +/* SPDX-License-Identifier: GPL-2.0 */
> > +/*
> > + * Copyright 2020-2021 NXP
> > + */
> > +
> > +#ifndef _AMPHION_VPU_CODEC_H
> > +#define _AMPHION_VPU_CODEC_H
> > +
> > +struct vpu_encode_params {
> > + u32 input_format;
> > + u32 codec_format;
> > + u32 profile;
> > + u32 tier;
> > + u32 level;
> > + struct v4l2_fract frame_rate;
> > + u32 src_stride;
> > + u32 src_width;
> > + u32 src_height;
> > + struct v4l2_rect crop;
> > + u32 out_width;
> > + u32 out_height;
> > +
> > + u32 gop_length;
> > + u32 bframes;
> > +
> > + u32 rc_mode;
> > + u32 bitrate;
> > + u32 bitrate_min;
> > + u32 bitrate_max;
> > +
> > + u32 i_frame_qp;
> > + u32 p_frame_qp;
> > + u32 b_frame_qp;
> > + u32 qp_min;
> > + u32 qp_max;
> > + u32 qp_min_i;
> > + u32 qp_max_i;
> > +
> > + struct {
> > + u32 enable;
> > + u32 idc;
> > + u32 width;
> > + u32 height;
> > + } sar;
> > +
> > + struct {
> > + u32 primaries;
> > + u32 transfer;
> > + u32 matrix;
> > + u32 full_range;
> > + } color;
> > +};
> > +
> > +struct vpu_decode_params {
> > + u32 codec_format;
> > + u32 output_format;
> > + u32 b_dis_reorder;
> > + u32 b_non_frame;
> > + u32 frame_count;
> > + u32 end_flag;
> > + struct {
> > + u32 base;
> > + u32 size;
> > + } udata;
> > +};
> > +
> > +#endif
> > diff --git a/drivers/media/platform/amphion/vpu_core.c
> b/drivers/media/platform/amphion/vpu_core.c
> > new file mode 100644
> > index 000000000000..0dbfd1c84f75
> > --- /dev/null
> > +++ b/drivers/media/platform/amphion/vpu_core.c
> > @@ -0,0 +1,906 @@
> > +// SPDX-License-Identifier: GPL-2.0
> > +/*
> > + * Copyright 2020-2021 NXP
> > + */
> > +
> > +#include <linux/init.h>
> > +#include <linux/interconnect.h>
> > +#include <linux/ioctl.h>
> > +#include <linux/list.h>
> > +#include <linux/kernel.h>
> > +#include <linux/module.h>
> > +#include <linux/of_device.h>
> > +#include <linux/of_address.h>
> > +#include <linux/platform_device.h>
> > +#include <linux/slab.h>
> > +#include <linux/types.h>
> > +#include <linux/pm_runtime.h>
> > +#include <linux/pm_domain.h>
> > +#include <linux/firmware.h>
> > +#include "vpu.h"
> > +#include "vpu_defs.h"
> > +#include "vpu_core.h"
> > +#include "vpu_mbox.h"
> > +#include "vpu_msgs.h"
> > +#include "vpu_rpc.h"
> > +#include "vpu_cmds.h"
> > +
> > +void csr_writel(struct vpu_core *core, u32 reg, u32 val)
> > +{
> > + writel(val, core->base + reg);
> > +}
> > +
> > +u32 csr_readl(struct vpu_core *core, u32 reg)
> > +{
> > + return readl(core->base + reg);
> > +}
> > +
> > +static int vpu_core_load_firmware(struct vpu_core *core)
> > +{
> > + const struct firmware *pfw = NULL;
> > + int ret = 0;
> > +
> > + WARN_ON(!core || !core->res || !core->res->fwname);
> > + if (!core->fw.virt) {
> > + dev_err(core->dev, "firmware buffer is not ready\n");
> > + return -EINVAL;
> > + }
> > +
> > + ret = request_firmware(&pfw, core->res->fwname, core->dev);
> > + dev_dbg(core->dev, "request_firmware %s : %d\n", core->res->fwname, ret);
> > + if (ret) {
> > + dev_err(core->dev, "request firmware %s failed, ret = %d\n",
> > + core->res->fwname, ret);
> > + return ret;
> > + }
> > +
> > + if (core->fw.length < pfw->size) {
> > + dev_err(core->dev, "firmware buffer size want %zu, but %d\n",
> > + pfw->size, core->fw.length);
> > + ret = -EINVAL;
> > + goto exit;
> > + }
> > +
> > + memset_io(core->fw.virt, 0, core->fw.length);
> > + memcpy(core->fw.virt, pfw->data, pfw->size);
> > + core->fw.bytesused = pfw->size;
> > + ret = vpu_iface_on_firmware_loaded(core);
> > +exit:
> > + release_firmware(pfw);
> > + pfw = NULL;
> > +
> > + return ret;
> > +}
> > +
> > +static int vpu_core_boot_done(struct vpu_core *core)
> > +{
> > + u32 fw_version;
> > +
> > + fw_version = vpu_iface_get_version(core);
> > + dev_info(core->dev, "%s firmware version : %d.%d.%d\n",
> > + vpu_core_type_desc(core->type),
> > + (fw_version >> 16) & 0xff,
> > + (fw_version >> 8) & 0xff,
> > + fw_version & 0xff);
> > + core->supported_instance_count = vpu_iface_get_max_instance_count(core);
> > + if (core->res->act_size) {
> > + u32 count = core->act.length / core->res->act_size;
> > +
> > + core->supported_instance_count = min(core->supported_instance_count, count);
> > + }
> > + core->fw_version = fw_version;
> > + core->state = VPU_CORE_ACTIVE;
> > +
> > + return 0;
> > +}
> > +
> > +static int vpu_core_wait_boot_done(struct vpu_core *core)
> > +{
> > + int ret;
> > +
> > + ret = wait_for_completion_timeout(&core->cmp, VPU_TIMEOUT);
> > + if (!ret) {
> > + dev_err(core->dev, "boot timeout\n");
> > + return -EINVAL;
> > + }
> > + return vpu_core_boot_done(core);
> > +}
> > +
> > +static int vpu_core_boot(struct vpu_core *core, bool load)
> > +{
> > + int ret;
> > +
> > + WARN_ON(!core);
> > +
> > + if (!core->res->standalone)
> > + return 0;
> > +
> > + reinit_completion(&core->cmp);
> > + if (load) {
> > + ret = vpu_core_load_firmware(core);
> > + if (ret)
> > + return ret;
> > + }
> > +
> > + vpu_iface_boot_core(core);
> > + return vpu_core_wait_boot_done(core);
> > +}
> > +
> > +static int vpu_core_shutdown(struct vpu_core *core)
> > +{
> > + if (!core->res->standalone)
> > + return 0;
> > + return vpu_iface_shutdown_core(core);
> > +}
> > +
> > +static int vpu_core_restore(struct vpu_core *core)
> > +{
> > + int ret;
> > +
> > + if (!core->res->standalone)
> > + return 0;
> > + ret = vpu_core_sw_reset(core);
> > + if (ret)
> > + return ret;
> > +
> > + vpu_core_boot_done(core);
> > + return vpu_iface_restore_core(core);
> > +}
> > +
> > +static int __vpu_alloc_dma(struct device *dev, struct vpu_buffer *buf)
> > +{
> > + gfp_t gfp = GFP_KERNEL | GFP_DMA32;
> > +
> > + WARN_ON(!dev || !buf);
> > +
> > + if (!buf->length)
> > + return 0;
> > +
> > + buf->virt = dma_alloc_coherent(dev, buf->length, &buf->phys, gfp);
> > + if (!buf->virt)
> > + return -ENOMEM;
> > +
> > + buf->dev = dev;
> > +
> > + return 0;
> > +}
> > +
> > +void vpu_free_dma(struct vpu_buffer *buf)
> > +{
> > + WARN_ON(!buf);
> > +
> > + if (!buf->virt || !buf->dev)
> > + return;
> > +
> > + dma_free_coherent(buf->dev, buf->length, buf->virt, buf->phys);
> > + buf->virt = NULL;
> > + buf->phys = 0;
> > + buf->length = 0;
> > + buf->bytesused = 0;
> > + buf->dev = NULL;
> > +}
> > +
> > +int vpu_alloc_dma(struct vpu_core *core, struct vpu_buffer *buf)
> > +{
> > + WARN_ON(!core || !buf);
> > +
> > + return __vpu_alloc_dma(core->dev, buf);
> > +}
> > +
> > +static void vpu_core_check_hang(struct vpu_core *core)
> > +{
> > + if (core->hang_mask)
> > + core->state = VPU_CORE_HANG;
> > +}
> > +
> > +static struct vpu_core *vpu_core_find_proper_by_type(struct vpu_dev *vpu, u32 type)
> > +{
> > + struct vpu_core *core = NULL;
> > + int request_count = INT_MAX;
> > + struct vpu_core *c;
> > +
> > + WARN_ON(!vpu);
> > +
> > + list_for_each_entry(c, &vpu->cores, list) {
> > + dev_dbg(c->dev, "instance_mask = 0x%lx, state = %d\n",
> > + c->instance_mask,
> > + c->state);
> > + if (c->type != type)
> > + continue;
> > + if (c->state == VPU_CORE_DEINIT) {
> > + core = c;
> > + break;
> > + }
> > + vpu_core_check_hang(c);
> > + if (c->state != VPU_CORE_ACTIVE)
> > + continue;
> > + if (c->request_count < request_count) {
> > + request_count = c->request_count;
> > + core = c;
> > + }
> > + if (!request_count)
> > + break;
> > + }
> > +
> > + return core;
> > +}
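For reference, the core-selection policy above (prefer an unbooted core, otherwise the active core with the fewest outstanding requests) reduces to a small pure function. A userspace sketch, with hypothetical `core`/`core_state` stand-ins for the driver's structures:

```c
#include <limits.h>
#include <stddef.h>

/* Mirrors the states vpu_core_find_proper_by_type() distinguishes. */
enum core_state { STATE_DEINIT, STATE_ACTIVE, STATE_HANG };

struct core {
	enum core_state state;
	int request_count;
};

static struct core *find_core(struct core *cores, size_t n)
{
	struct core *best = NULL;
	int request_count = INT_MAX;
	size_t i;

	for (i = 0; i < n; i++) {
		struct core *c = &cores[i];

		if (c->state == STATE_DEINIT)
			return c;	/* an idle core can be booted on demand */
		if (c->state != STATE_ACTIVE)
			continue;	/* skip hung cores */
		if (c->request_count < request_count) {
			request_count = c->request_count;
			best = c;
		}
		if (!request_count)
			break;		/* a completely free core is good enough */
	}
	return best;
}
```

Hung cores are never picked; a DEINIT core wins immediately because it will be booted in vpu_request_core().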
> > +
> > +static bool vpu_core_is_exist(struct vpu_dev *vpu, struct vpu_core *core)
> > +{
> > + struct vpu_core *c;
> > +
> > + list_for_each_entry(c, &vpu->cores, list) {
> > + if (c == core)
> > + return true;
> > + }
> > +
> > + return false;
> > +}
> > +
> > +static void vpu_core_get_vpu(struct vpu_core *core)
> > +{
> > + core->vpu->get_vpu(core->vpu);
> > + if (core->type == VPU_CORE_TYPE_ENC)
> > + core->vpu->get_enc(core->vpu);
> > + if (core->type == VPU_CORE_TYPE_DEC)
> > + core->vpu->get_dec(core->vpu);
> > +}
> > +
> > +static int vpu_core_register(struct device *dev, struct vpu_core *core)
> > +{
> > + struct vpu_dev *vpu = dev_get_drvdata(dev);
> > + int ret = 0;
> > +
> > + dev_dbg(core->dev, "register core %s\n", vpu_core_type_desc(core->type));
> > + if (vpu_core_is_exist(vpu, core))
> > + return 0;
> > +
> > + core->workqueue = alloc_workqueue("vpu", WQ_UNBOUND | WQ_MEM_RECLAIM, 1);
> > + if (!core->workqueue) {
> > + dev_err(core->dev, "fail to alloc workqueue\n");
> > + return -ENOMEM;
> > + }
> > + INIT_WORK(&core->msg_work, vpu_msg_run_work);
> > + INIT_DELAYED_WORK(&core->msg_delayed_work, vpu_msg_delayed_work);
> > + core->msg_buffer_size = roundup_pow_of_two(VPU_MSG_BUFFER_SIZE);
> > + core->msg_buffer = vzalloc(core->msg_buffer_size);
> > + if (!core->msg_buffer) {
> > + dev_err(core->dev, "failed allocate buffer for fifo\n");
> > + ret = -ENOMEM;
> > + goto error;
> > + }
> > + ret = kfifo_init(&core->msg_fifo, core->msg_buffer, core->msg_buffer_size);
> > + if (ret) {
> > + dev_err(core->dev, "failed init kfifo\n");
> > + goto error;
> > + }
> > +
> > + list_add_tail(&core->list, &vpu->cores);
> > +
> > + vpu_core_get_vpu(core);
> > +
> > + if (vpu_iface_get_power_state(core))
> > + ret = vpu_core_restore(core);
> > + if (ret)
> > + goto error;
> > +
> > + return 0;
> > +error:
> > + if (core->msg_buffer) {
> > + vfree(core->msg_buffer);
> > + core->msg_buffer = NULL;
> > + }
> > + if (core->workqueue) {
> > + destroy_workqueue(core->workqueue);
> > + core->workqueue = NULL;
> > + }
> > + return ret;
> > +}
> > +
> > +static void vpu_core_put_vpu(struct vpu_core *core)
> > +{
> > + if (core->type == VPU_CORE_TYPE_ENC)
> > + core->vpu->put_enc(core->vpu);
> > + if (core->type == VPU_CORE_TYPE_DEC)
> > + core->vpu->put_dec(core->vpu);
> > + core->vpu->put_vpu(core->vpu);
> > +}
> > +
> > +static int vpu_core_unregister(struct device *dev, struct vpu_core *core)
> > +{
> > + list_del_init(&core->list);
> > +
> > + vpu_core_put_vpu(core);
> > + core->vpu = NULL;
> > + vfree(core->msg_buffer);
> > + core->msg_buffer = NULL;
> > +
> > + if (core->workqueue) {
> > + cancel_work_sync(&core->msg_work);
> > + cancel_delayed_work_sync(&core->msg_delayed_work);
> > + destroy_workqueue(core->workqueue);
> > + core->workqueue = NULL;
> > + }
> > +
> > + return 0;
> > +}
> > +
> > +static int vpu_core_acquire_instance(struct vpu_core *core)
> > +{
> > + int id;
> > +
> > + WARN_ON(!core);
> > +
> > + id = ffz(core->instance_mask);
> > + if (id >= core->supported_instance_count)
> > + return -EINVAL;
> > +
> > + set_bit(id, &core->instance_mask);
> > +
> > + return id;
> > +}
> > +
> > +static void vpu_core_release_instance(struct vpu_core *core, int id)
> > +{
> > + WARN_ON(!core);
> > +
> > + if (id < 0 || id >= core->supported_instance_count)
> > + return;
> > +
> > + clear_bit(id, &core->instance_mask);
> > +}
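The acquire/release pair above is a plain bitmask ID allocator. A userspace sketch of the same logic, with hypothetical stand-ins for the kernel's ffz()/set_bit()/clear_bit() operating on a single unsigned long:

```c
#include <limits.h>

/* Find first zero bit, like the kernel's ffz() for one word. */
static int mask_ffz(unsigned long mask)
{
	int id;

	for (id = 0; id < (int)(sizeof(mask) * CHAR_BIT); id++)
		if (!(mask & (1UL << id)))
			return id;
	return (int)(sizeof(mask) * CHAR_BIT);
}

static int acquire_instance(unsigned long *mask, int max_instances)
{
	int id = mask_ffz(*mask);

	if (id >= max_instances)
		return -1;		/* driver returns -EINVAL here */
	*mask |= 1UL << id;		/* set_bit(id, mask) */
	return id;
}

static void release_instance(unsigned long *mask, int id, int max_instances)
{
	if (id < 0 || id >= max_instances)
		return;
	*mask &= ~(1UL << id);		/* clear_bit(id, mask) */
}
```

Released IDs are reused lowest-first, which keeps inst->id dense within supported_instance_count.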
> > +
> > +struct vpu_inst *vpu_inst_get(struct vpu_inst *inst)
> > +{
> > + if (!inst)
> > + return NULL;
> > +
> > + atomic_inc(&inst->ref_count);
> > +
> > + return inst;
> > +}
> > +
> > +void vpu_inst_put(struct vpu_inst *inst)
> > +{
> > + if (!inst)
> > + return;
> > + if (atomic_dec_and_test(&inst->ref_count)) {
> > + if (inst->release)
> > + inst->release(inst);
> > + }
> > +}
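vpu_inst_get()/vpu_inst_put() implement the usual atomic refcount-with-release pattern: the release callback fires exactly once, when the last reference drops. A minimal userspace sketch using C11 atomics (the `released` flag stands in for the driver's inst->release() callback):

```c
#include <stdatomic.h>
#include <stddef.h>

struct inst {
	atomic_int ref_count;
	int released;		/* stand-in for the release() callback firing */
};

static struct inst *inst_get(struct inst *inst)
{
	if (!inst)
		return NULL;
	atomic_fetch_add(&inst->ref_count, 1);
	return inst;
}

static void inst_put(struct inst *inst)
{
	if (!inst)
		return;
	/* fetch_sub returns the old value: 1 means we dropped the last ref,
	 * mirroring atomic_dec_and_test() in the driver. */
	if (atomic_fetch_sub(&inst->ref_count, 1) == 1)
		inst->released = 1;
}
```

Every successful get must be paired with exactly one put, or the instance leaks (or is freed early).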
> > +
> > +struct vpu_core *vpu_request_core(struct vpu_dev *vpu, enum vpu_core_type type)
> > +{
> > + struct vpu_core *core = NULL;
> > + int ret;
> > +
> > + mutex_lock(&vpu->lock);
> > +
> > + core = vpu_core_find_proper_by_type(vpu, type);
> > + if (!core)
> > + goto exit;
> > +
> > + mutex_lock(&core->lock);
> > + pm_runtime_get_sync(core->dev);
> > +
> > + if (core->state == VPU_CORE_DEINIT) {
> > + ret = vpu_core_boot(core, true);
> > + if (ret) {
> > + pm_runtime_put_sync(core->dev);
> > + mutex_unlock(&core->lock);
> > + core = NULL;
> > + goto exit;
> > + }
> > + }
> > +
> > + core->request_count++;
> > +
> > + mutex_unlock(&core->lock);
> > +exit:
> > + mutex_unlock(&vpu->lock);
> > +
> > + return core;
> > +}
> > +
> > +void vpu_release_core(struct vpu_core *core)
> > +{
> > + if (!core)
> > + return;
> > +
> > + mutex_lock(&core->lock);
> > + pm_runtime_put_sync(core->dev);
> > + if (core->request_count)
> > + core->request_count--;
> > + mutex_unlock(&core->lock);
> > +}
> > +
> > +int vpu_inst_register(struct vpu_inst *inst)
> > +{
> > + struct vpu_dev *vpu;
> > + struct vpu_core *core;
> > + int ret = 0;
> > +
> > + WARN_ON(!inst || !inst->vpu);
> > +
> > + vpu = inst->vpu;
> > + core = inst->core;
> > + if (!core) {
> > + core = vpu_request_core(vpu, inst->type);
> > + if (!core) {
> > + dev_err(vpu->dev, "there is no vpu core for %s\n",
> > + vpu_core_type_desc(inst->type));
> > + return -EINVAL;
> > + }
> > + inst->core = core;
> > + inst->dev = get_device(core->dev);
> > + }
> > +
> > + mutex_lock(&core->lock);
> > + if (inst->id >= 0 && inst->id < core->supported_instance_count)
> > + goto exit;
> > +
> > + ret = vpu_core_acquire_instance(core);
> > + if (ret < 0)
> > + goto exit;
> > +
> > + vpu_trace(inst->dev, "[%d] %p\n", ret, inst);
> > + inst->id = ret;
> > + list_add_tail(&inst->list, &core->instances);
> > + ret = 0;
> > + if (core->res->act_size) {
> > + inst->act.phys = core->act.phys + core->res->act_size * inst->id;
> > + inst->act.virt = core->act.virt + core->res->act_size * inst->id;
> > + inst->act.length = core->res->act_size;
> > + }
> > + vpu_inst_create_dbgfs_file(inst);
> > +exit:
> > + mutex_unlock(&core->lock);
> > +
> > + if (ret)
> > + dev_err(core->dev, "register instance fail\n");
> > + return ret;
> > +}
> > +
> > +int vpu_inst_unregister(struct vpu_inst *inst)
> > +{
> > + struct vpu_core *core;
> > +
> > + WARN_ON(!inst);
> > +
> > + if (!inst->core)
> > + return 0;
> > +
> > + core = inst->core;
> > + vpu_clear_request(inst);
> > + mutex_lock(&core->lock);
> > + if (inst->id >= 0 && inst->id < core->supported_instance_count) {
> > + vpu_inst_remove_dbgfs_file(inst);
> > + list_del_init(&inst->list);
> > + vpu_core_release_instance(core, inst->id);
> > + inst->id = VPU_INST_NULL_ID;
> > + }
> > + vpu_core_check_hang(core);
> > + if (core->state == VPU_CORE_HANG && !core->instance_mask) {
> > + dev_info(core->dev, "reset hang core\n");
> > + if (!vpu_core_sw_reset(core)) {
> > + core->state = VPU_CORE_ACTIVE;
> > + core->hang_mask = 0;
> > + }
> > + }
> > + mutex_unlock(&core->lock);
> > +
> > + return 0;
> > +}
> > +
> > +struct vpu_inst *vpu_core_find_instance(struct vpu_core *core, u32 index)
> > +{
> > + struct vpu_inst *inst = NULL;
> > + struct vpu_inst *tmp;
> > +
> > + mutex_lock(&core->lock);
> > + if (!test_bit(index, &core->instance_mask))
> > + goto exit;
> > + list_for_each_entry(tmp, &core->instances, list) {
> > + if (tmp->id == index) {
> > + inst = vpu_inst_get(tmp);
> > + break;
> > + }
> > + }
> > +exit:
> > + mutex_unlock(&core->lock);
> > +
> > + return inst;
> > +}
> > +
> > +const struct vpu_core_resources *vpu_get_resource(struct vpu_inst *inst)
> > +{
> > + struct vpu_dev *vpu;
> > + struct vpu_core *core = NULL;
> > + const struct vpu_core_resources *res = NULL;
> > +
> > + if (!inst || !inst->vpu)
> > + return NULL;
> > +
> > + if (inst->core && inst->core->res)
> > + return inst->core->res;
> > +
> > + vpu = inst->vpu;
> > + mutex_lock(&vpu->lock);
> > + list_for_each_entry(core, &vpu->cores, list) {
> > + if (core->type == inst->type) {
> > + res = core->res;
> > + break;
> > + }
> > + }
> > + mutex_unlock(&vpu->lock);
> > +
> > + return res;
> > +}
> > +
> > +static int vpu_core_parse_dt(struct vpu_core *core, struct device_node *np)
> > +{
> > + struct device_node *node;
> > + struct resource res;
> > +
> > + if (of_count_phandle_with_args(np, "memory-region", NULL) < 2) {
> > + dev_err(core->dev, "need 2 memory-region for boot and rpc\n");
> > + return -ENODEV;
> > + }
> > +
> > + node = of_parse_phandle(np, "memory-region", 0);
> > + if (!node) {
> > + dev_err(core->dev, "boot-region of_parse_phandle error\n");
> > + return -ENODEV;
> > + }
> > + if (of_address_to_resource(node, 0, &res)) {
> > + dev_err(core->dev, "boot-region of_address_to_resource error\n");
> > + return -EINVAL;
> > + }
> > + core->fw.phys = res.start;
> > + core->fw.length = resource_size(&res);
> > +
> > + node = of_parse_phandle(np, "memory-region", 1);
> > + if (!node) {
> > + dev_err(core->dev, "rpc-region of_parse_phandle error\n");
> > + return -ENODEV;
> > + }
> > + if (of_address_to_resource(node, 0, &res)) {
> > + dev_err(core->dev, "rpc-region of_address_to_resource error\n");
> > + return -EINVAL;
> > + }
> > + core->rpc.phys = res.start;
> > + core->rpc.length = resource_size(&res);
> > +
> > + if (core->rpc.length < core->res->rpc_size + core->res->fwlog_size) {
> > + dev_err(core->dev, "the rpc-region <%pad, 0x%x> is not enough\n",
> > + &core->rpc.phys, core->rpc.length);
> > + return -EINVAL;
> > + }
> > +
> > + core->fw.virt = ioremap_wc(core->fw.phys, core->fw.length);
> > + core->rpc.virt = ioremap_wc(core->rpc.phys, core->rpc.length);
> > + memset_io(core->rpc.virt, 0, core->rpc.length);
> > +
> > + if (vpu_iface_check_memory_region(core,
> > + core->rpc.phys,
> > + core->rpc.length) != VPU_CORE_MEMORY_UNCACHED) {
> > + dev_err(core->dev, "rpc region<%pad, 0x%x> isn't uncached\n",
> > + &core->rpc.phys, core->rpc.length);
> > + return -EINVAL;
> > + }
> > +
> > + core->log.phys = core->rpc.phys + core->res->rpc_size;
> > + core->log.virt = core->rpc.virt + core->res->rpc_size;
> > + core->log.length = core->res->fwlog_size;
> > + core->act.phys = core->log.phys + core->log.length;
> > + core->act.virt = core->log.virt + core->log.length;
> > + core->act.length = core->rpc.length - core->res->rpc_size - core->log.length;
> > + core->rpc.length = core->res->rpc_size;
> > +
> > + return 0;
> > +}
> > +
> > +static int vpu_core_probe(struct platform_device *pdev)
> > +{
> > + struct device *dev = &pdev->dev;
> > + struct vpu_core *core;
> > + struct vpu_dev *vpu = dev_get_drvdata(dev->parent);
> > + struct vpu_shared_addr *iface;
> > + u32 iface_data_size;
> > + int ret;
> > +
> > + dev_dbg(dev, "probe\n");
> > + if (!vpu)
> > + return -EINVAL;
> > + core = devm_kzalloc(dev, sizeof(*core), GFP_KERNEL);
> > + if (!core)
> > + return -ENOMEM;
> > +
> > + core->pdev = pdev;
> > + core->dev = dev;
> > + platform_set_drvdata(pdev, core);
> > + core->vpu = vpu;
> > + INIT_LIST_HEAD(&core->instances);
> > + mutex_init(&core->lock);
> > + mutex_init(&core->cmd_lock);
> > + init_completion(&core->cmp);
> > + init_waitqueue_head(&core->ack_wq);
> > + core->state = VPU_CORE_DEINIT;
> > +
> > + core->res = of_device_get_match_data(dev);
> > + if (!core->res)
> > + return -ENODEV;
> > +
> > + core->type = core->res->type;
> > + core->id = of_alias_get_id(dev->of_node, "vpu_core");
> > + if (core->id < 0) {
> > + dev_err(dev, "can't get vpu core id\n");
> > + return core->id;
> > + }
> > + dev_info(core->dev, "[%d] = %s\n", core->id, vpu_core_type_desc(core->type));
> > + ret = vpu_core_parse_dt(core, dev->of_node);
> > + if (ret)
> > + return ret;
> > +
> > + core->base = devm_platform_ioremap_resource(pdev, 0);
> > + if (IS_ERR(core->base))
> > + return PTR_ERR(core->base);
> > +
> > + if (!vpu_iface_check_codec(core)) {
> > + dev_err(core->dev, "is not supported\n");
> > + return -EINVAL;
> > + }
> > +
> > + ret = vpu_mbox_init(core);
> > + if (ret)
> > + return ret;
> > +
> > + iface = devm_kzalloc(dev, sizeof(*iface), GFP_KERNEL);
> > + if (!iface)
> > + return -ENOMEM;
> > +
> > + iface_data_size = vpu_iface_get_data_size(core);
> > + if (iface_data_size) {
> > + iface->priv = devm_kzalloc(dev, iface_data_size, GFP_KERNEL);
> > + if (!iface->priv)
> > + return -ENOMEM;
> > + }
> > +
> > + ret = vpu_iface_init(core, iface, &core->rpc, core->fw.phys);
> > + if (ret) {
> > + dev_err(core->dev, "init iface fail, ret = %d\n", ret);
> > + return ret;
> > + }
> > +
> > + vpu_iface_config_system(core, vpu->res->mreg_base, vpu->base);
> > + vpu_iface_set_log_buf(core, &core->log);
> > +
> > + pm_runtime_enable(dev);
> > + ret = pm_runtime_get_sync(dev);
> > + if (ret) {
> > + pm_runtime_put_noidle(dev);
> > + pm_runtime_set_suspended(dev);
> > + goto err_runtime_disable;
> > + }
> > +
> > + ret = vpu_core_register(dev->parent, core);
> > + if (ret)
> > + goto err_core_register;
> > + core->parent = dev->parent;
> > +
> > + pm_runtime_put_sync(dev);
> > + vpu_core_create_dbgfs_file(core);
> > +
> > + return 0;
> > +
> > +err_core_register:
> > + pm_runtime_put_sync(dev);
> > +err_runtime_disable:
> > + pm_runtime_disable(dev);
> > +
> > + return ret;
> > +}
> > +
> > +static int vpu_core_remove(struct platform_device *pdev)
> > +{
> > + struct device *dev = &pdev->dev;
> > + struct vpu_core *core = platform_get_drvdata(pdev);
> > + int ret;
> > +
> > + vpu_core_remove_dbgfs_file(core);
> > + ret = pm_runtime_get_sync(dev);
> > + WARN_ON(ret < 0);
> > +
> > + vpu_core_shutdown(core);
> > + pm_runtime_put_sync(dev);
> > + pm_runtime_disable(dev);
> > +
> > + vpu_core_unregister(core->parent, core);
> > + iounmap(core->fw.virt);
> > + iounmap(core->rpc.virt);
> > + mutex_destroy(&core->lock);
> > + mutex_destroy(&core->cmd_lock);
> > +
> > + return 0;
> > +}
> > +
> > +static int __maybe_unused vpu_core_runtime_resume(struct device *dev)
> > +{
> > + struct vpu_core *core = dev_get_drvdata(dev);
> > +
> > + return vpu_mbox_request(core);
> > +}
> > +
> > +static int __maybe_unused vpu_core_runtime_suspend(struct device *dev)
> > +{
> > + struct vpu_core *core = dev_get_drvdata(dev);
> > +
> > + vpu_mbox_free(core);
> > + return 0;
> > +}
> > +
> > +static void vpu_core_cancel_work(struct vpu_core *core)
> > +{
> > + struct vpu_inst *inst = NULL;
> > +
> > + cancel_work_sync(&core->msg_work);
> > + cancel_delayed_work_sync(&core->msg_delayed_work);
> > +
> > + mutex_lock(&core->lock);
> > + list_for_each_entry(inst, &core->instances, list)
> > + cancel_work_sync(&inst->msg_work);
> > + mutex_unlock(&core->lock);
> > +}
> > +
> > +static void vpu_core_resume_work(struct vpu_core *core)
> > +{
> > + struct vpu_inst *inst = NULL;
> > + unsigned long delay = msecs_to_jiffies(10);
> > +
> > + queue_work(core->workqueue, &core->msg_work);
> > + queue_delayed_work(core->workqueue, &core->msg_delayed_work, delay);
> > +
> > + mutex_lock(&core->lock);
> > + list_for_each_entry(inst, &core->instances, list)
> > + queue_work(inst->workqueue, &inst->msg_work);
> > + mutex_unlock(&core->lock);
> > +}
> > +
> > +static int __maybe_unused vpu_core_resume(struct device *dev)
> > +{
> > + struct vpu_core *core = dev_get_drvdata(dev);
> > + int ret = 0;
> > +
> > + if (!core->res->standalone)
> > + return 0;
> > +
> > + mutex_lock(&core->lock);
> > + pm_runtime_get_sync(dev);
> > + vpu_core_get_vpu(core);
> > + if (core->state != VPU_CORE_SNAPSHOT)
> > + goto exit;
> > +
> > + if (!vpu_iface_get_power_state(core)) {
> > + if (!list_empty(&core->instances)) {
> > + ret = vpu_core_boot(core, false);
> > + if (ret) {
> > + dev_err(core->dev, "%s boot fail\n", __func__);
> > + core->state = VPU_CORE_DEINIT;
> > + goto exit;
> > + }
> > + } else {
> > + core->state = VPU_CORE_DEINIT;
> > + }
> > + } else {
> > + if (!list_empty(&core->instances)) {
> > + ret = vpu_core_sw_reset(core);
> > + if (ret) {
> > + dev_err(core->dev, "%s sw_reset fail\n", __func__);
> > + core->state = VPU_CORE_HANG;
> > + goto exit;
> > + }
> > + }
> > + core->state = VPU_CORE_ACTIVE;
> > + }
> > +
> > +exit:
> > + pm_runtime_put_sync(dev);
> > + mutex_unlock(&core->lock);
> > +
> > + vpu_core_resume_work(core);
> > + return ret;
> > +}
> > +
> > +static int __maybe_unused vpu_core_suspend(struct device *dev)
> > +{
> > + struct vpu_core *core = dev_get_drvdata(dev);
> > + int ret = 0;
> > +
> > + if (!core->res->standalone)
> > + return 0;
> > +
> > + mutex_lock(&core->lock);
> > + if (core->state == VPU_CORE_ACTIVE) {
> > + if (!list_empty(&core->instances)) {
> > + ret = vpu_core_snapshot(core);
> > + if (ret) {
> > + mutex_unlock(&core->lock);
> > + return ret;
> > + }
> > + }
> > +
> > + core->state = VPU_CORE_SNAPSHOT;
> > + }
> > + mutex_unlock(&core->lock);
> > +
> > + vpu_core_cancel_work(core);
> > +
> > + mutex_lock(&core->lock);
> > + vpu_core_put_vpu(core);
> > + mutex_unlock(&core->lock);
> > + return ret;
> > +}
> > +
> > +static const struct dev_pm_ops vpu_core_pm_ops = {
> > + SET_RUNTIME_PM_OPS(vpu_core_runtime_suspend, vpu_core_runtime_resume, NULL)
> > + SET_SYSTEM_SLEEP_PM_OPS(vpu_core_suspend, vpu_core_resume)
> > +};
> > +
> > +static struct vpu_core_resources imx8q_enc = {
> > + .type = VPU_CORE_TYPE_ENC,
> > + .fwname = "vpu/vpu_fw_imx8_enc.bin",
> > + .stride = 16,
> > + .max_width = 1920,
> > + .max_height = 1920,
> > + .min_width = 64,
> > + .min_height = 48,
> > + .step_width = 2,
> > + .step_height = 2,
> > + .rpc_size = 0x80000,
> > + .fwlog_size = 0x80000,
> > + .act_size = 0xc0000,
> > + .standalone = true,
> > +};
> > +
> > +static struct vpu_core_resources imx8q_dec = {
> > + .type = VPU_CORE_TYPE_DEC,
> > + .fwname = "vpu/vpu_fw_imx8_dec.bin",
> > + .stride = 256,
> > + .max_width = 8188,
> > + .max_height = 8188,
> > + .min_width = 16,
> > + .min_height = 16,
> > + .step_width = 1,
> > + .step_height = 1,
> > + .rpc_size = 0x80000,
> > + .fwlog_size = 0x80000,
> > + .standalone = true,
> > +};
> > +
> > +static const struct of_device_id vpu_core_dt_match[] = {
> > + { .compatible = "nxp,imx8q-vpu-encoder", .data = &imx8q_enc },
> > + { .compatible = "nxp,imx8q-vpu-decoder", .data = &imx8q_dec },
> > + {}
> > +};
> > +MODULE_DEVICE_TABLE(of, vpu_core_dt_match);
> > +
> > +static struct platform_driver amphion_vpu_core_driver = {
> > + .probe = vpu_core_probe,
> > + .remove = vpu_core_remove,
> > + .driver = {
> > + .name = "amphion-vpu-core",
> > + .of_match_table = vpu_core_dt_match,
> > + .pm = &vpu_core_pm_ops,
> > + },
> > +};
> > +
> > +int __init vpu_core_driver_init(void)
> > +{
> > + return platform_driver_register(&amphion_vpu_core_driver);
> > +}
> > +
> > +void __exit vpu_core_driver_exit(void)
> > +{
> > + platform_driver_unregister(&amphion_vpu_core_driver);
> > +}
> > diff --git a/drivers/media/platform/amphion/vpu_core.h b/drivers/media/platform/amphion/vpu_core.h
> > new file mode 100644
> > index 000000000000..00a662997da4
> > --- /dev/null
> > +++ b/drivers/media/platform/amphion/vpu_core.h
> > @@ -0,0 +1,15 @@
> > +/* SPDX-License-Identifier: GPL-2.0 */
> > +/*
> > + * Copyright 2020-2021 NXP
> > + */
> > +
> > +#ifndef _AMPHION_VPU_CORE_H
> > +#define _AMPHION_VPU_CORE_H
> > +
> > +void csr_writel(struct vpu_core *core, u32 reg, u32 val);
> > +u32 csr_readl(struct vpu_core *core, u32 reg);
> > +int vpu_alloc_dma(struct vpu_core *core, struct vpu_buffer *buf);
> > +void vpu_free_dma(struct vpu_buffer *buf);
> > +struct vpu_inst *vpu_core_find_instance(struct vpu_core *core, u32 index);
> > +
> > +#endif
> > diff --git a/drivers/media/platform/amphion/vpu_dbg.c b/drivers/media/platform/amphion/vpu_dbg.c
> > new file mode 100644
> > index 000000000000..2e7e11101f99
> > --- /dev/null
> > +++ b/drivers/media/platform/amphion/vpu_dbg.c
> > @@ -0,0 +1,495 @@
> > +// SPDX-License-Identifier: GPL-2.0
> > +/*
> > + * Copyright 2020-2021 NXP
> > + */
> > +
> > +#include <linux/init.h>
> > +#include <linux/device.h>
> > +#include <linux/ioctl.h>
> > +#include <linux/list.h>
> > +#include <linux/module.h>
> > +#include <linux/kernel.h>
> > +#include <linux/types.h>
> > +#include <linux/pm_runtime.h>
> > +#include <media/v4l2-device.h>
> > +#include <linux/debugfs.h>
> > +#include "vpu.h"
> > +#include "vpu_defs.h"
> > +#include "vpu_helpers.h"
> > +#include "vpu_cmds.h"
> > +#include "vpu_rpc.h"
> > +
> > +struct print_buf_desc {
> > + u32 start_h_phy;
> > + u32 start_h_vir;
> > + u32 start_m;
> > + u32 bytes;
> > + u32 read;
> > + u32 write;
> > + char buffer[0];
> > +};
> > +
> > +static char *vb2_stat_name[] = {
> > + [VB2_BUF_STATE_DEQUEUED] = "dequeued",
> > + [VB2_BUF_STATE_IN_REQUEST] = "in_request",
> > + [VB2_BUF_STATE_PREPARING] = "preparing",
> > + [VB2_BUF_STATE_QUEUED] = "queued",
> > + [VB2_BUF_STATE_ACTIVE] = "active",
> > + [VB2_BUF_STATE_DONE] = "done",
> > + [VB2_BUF_STATE_ERROR] = "error",
> > +};
> > +
> > +static char *vpu_stat_name[] = {
> > + [VPU_BUF_STATE_IDLE] = "idle",
> > + [VPU_BUF_STATE_INUSE] = "inuse",
> > + [VPU_BUF_STATE_DECODED] = "decoded",
> > + [VPU_BUF_STATE_READY] = "ready",
> > + [VPU_BUF_STATE_SKIP] = "skip",
> > + [VPU_BUF_STATE_ERROR] = "error",
> > +};
> > +
> > +static int vpu_dbg_instance(struct seq_file *s, void *data)
> > +{
> > + struct vpu_inst *inst = s->private;
> > + char str[128];
> > + int num;
> > + struct vb2_queue *vq;
> > + int i;
> > +
> > + num = scnprintf(str, sizeof(str), "[%s]\n", vpu_core_type_desc(inst->type));
> > + if (seq_write(s, str, num))
> > + return 0;
> > +
> > + num = scnprintf(str, sizeof(str), "tgid = %d, pid = %d\n", inst->tgid, inst->pid);
> > + if (seq_write(s, str, num))
> > + return 0;
> > + num = scnprintf(str, sizeof(str), "state = %d\n", inst->state);
> > + if (seq_write(s, str, num))
> > + return 0;
> > + num = scnprintf(str, sizeof(str),
> > + "min_buffer_out = %d, min_buffer_cap = %d\n",
> > + inst->min_buffer_out, inst->min_buffer_cap);
> > + if (seq_write(s, str, num))
> > + return 0;
> > +
> > + vq = v4l2_m2m_get_src_vq(inst->fh.m2m_ctx);
> > + num = scnprintf(str, sizeof(str),
> > + "output (%2d, %2d): fmt = %c%c%c%c %d x %d, %d;",
> > + vb2_is_streaming(vq),
> > + vq->num_buffers,
> > + inst->out_format.pixfmt,
> > + inst->out_format.pixfmt >> 8,
> > + inst->out_format.pixfmt >> 16,
> > + inst->out_format.pixfmt >> 24,
> > + inst->out_format.width,
> > + inst->out_format.height,
> > + vq->last_buffer_dequeued);
> > + if (seq_write(s, str, num))
> > + return 0;
> > + for (i = 0; i < inst->out_format.num_planes; i++) {
> > + num = scnprintf(str, sizeof(str), " %d(%d)",
> > + inst->out_format.sizeimage[i],
> > + inst->out_format.bytesperline[i]);
> > + if (seq_write(s, str, num))
> > + return 0;
> > + }
> > + if (seq_write(s, "\n", 1))
> > + return 0;
> > +
> > + vq = v4l2_m2m_get_dst_vq(inst->fh.m2m_ctx);
> > + num = scnprintf(str, sizeof(str),
> > + "capture(%2d, %2d): fmt = %c%c%c%c %d x %d, %d;",
> > + vb2_is_streaming(vq),
> > + vq->num_buffers,
> > + inst->cap_format.pixfmt,
> > + inst->cap_format.pixfmt >> 8,
> > + inst->cap_format.pixfmt >> 16,
> > + inst->cap_format.pixfmt >> 24,
> > + inst->cap_format.width,
> > + inst->cap_format.height,
> > + vq->last_buffer_dequeued);
> > + if (seq_write(s, str, num))
> > + return 0;
> > + for (i = 0; i < inst->cap_format.num_planes; i++) {
> > + num = scnprintf(str, sizeof(str), " %d(%d)",
> > + inst->cap_format.sizeimage[i],
> > + inst->cap_format.bytesperline[i]);
> > + if (seq_write(s, str, num))
> > + return 0;
> > + }
> > + if (seq_write(s, "\n", 1))
> > + return 0;
> > + num = scnprintf(str, sizeof(str), "crop: (%d, %d) %d x %d\n",
> > + inst->crop.left,
> > + inst->crop.top,
> > + inst->crop.width,
> > + inst->crop.height);
> > + if (seq_write(s, str, num))
> > + return 0;
> > +
> > + vq = v4l2_m2m_get_src_vq(inst->fh.m2m_ctx);
> > + for (i = 0; i < vq->num_buffers; i++) {
> > + struct vb2_buffer *vb = vq->bufs[i];
> > + struct vb2_v4l2_buffer *vbuf = to_vb2_v4l2_buffer(vb);
> > + struct vpu_vb2_buffer *vpu_buf = to_vpu_vb2_buffer(vbuf);
> > +
> > + if (vb->state == VB2_BUF_STATE_DEQUEUED)
> > + continue;
> > + num = scnprintf(str, sizeof(str),
> > + "output [%2d] state = %10s, %8s\n",
> > + i, vb2_stat_name[vb->state],
> > + vpu_stat_name[vpu_buf->state]);
> > + if (seq_write(s, str, num))
> > + return 0;
> > + }
> > +
> > + vq = v4l2_m2m_get_dst_vq(inst->fh.m2m_ctx);
> > + for (i = 0; i < vq->num_buffers; i++) {
> > + struct vb2_buffer *vb = vq->bufs[i];
> > + struct vb2_v4l2_buffer *vbuf = to_vb2_v4l2_buffer(vb);
> > + struct vpu_vb2_buffer *vpu_buf = to_vpu_vb2_buffer(vbuf);
> > +
> > + if (vb->state == VB2_BUF_STATE_DEQUEUED)
> > + continue;
> > + num = scnprintf(str, sizeof(str),
> > + "capture[%2d] state = %10s, %8s\n",
> > + i, vb2_stat_name[vb->state],
> > + vpu_stat_name[vpu_buf->state]);
> > + if (seq_write(s, str, num))
> > + return 0;
> > + }
> > +
> > + num = scnprintf(str, sizeof(str), "sequence = %d\n", inst->sequence);
> > + if (seq_write(s, str, num))
> > + return 0;
> > +
> > + if (inst->use_stream_buffer) {
> > + num = scnprintf(str, sizeof(str), "stream_buffer = %d / %d, <%pad, 0x%x>\n",
> > + vpu_helper_get_used_space(inst),
> > + inst->stream_buffer.length,
> > + &inst->stream_buffer.phys,
> > + inst->stream_buffer.length);
> > + if (seq_write(s, str, num))
> > + return 0;
> > + }
> > + num = scnprintf(str, sizeof(str), "kfifo len = 0x%x\n", kfifo_len(&inst->msg_fifo));
> > + if (seq_write(s, str, num))
> > + return 0;
> > +
> > + num = scnprintf(str, sizeof(str), "flow :\n");
> > + if (seq_write(s, str, num))
> > + return 0;
> > +
> > + mutex_lock(&inst->core->cmd_lock);
> > + for (i = 0; i < ARRAY_SIZE(inst->flows); i++) {
> > + u32 idx = (inst->flow_idx + i) % (ARRAY_SIZE(inst->flows));
> > +
> > + if (!inst->flows[idx])
> > + continue;
> > + num = scnprintf(str, sizeof(str), "\t[%s]0x%x\n",
> > + inst->flows[idx] >= VPU_MSG_ID_NOOP ? "M" : "C",
> > + inst->flows[idx]);
> > + if (seq_write(s, str, num)) {
> > + mutex_unlock(&inst->core->cmd_lock);
> > + return 0;
> > + }
> > + }
> > + mutex_unlock(&inst->core->cmd_lock);
> > +
> > + i = 0;
> > + while (true) {
> > + num = call_vop(inst, get_debug_info, str, sizeof(str), i++);
> > + if (num <= 0)
> > + break;
> > + if (seq_write(s, str, num))
> > + return 0;
> > + }
> > +
> > + return 0;
> > +}
> > +
> > +static int vpu_dbg_core(struct seq_file *s, void *data)
> > +{
> > + struct vpu_core *core = s->private;
> > + struct vpu_shared_addr *iface = core->iface;
> > + char str[128];
> > + int num;
> > +
> > + num = scnprintf(str, sizeof(str), "[%s]\n", vpu_core_type_desc(core->type));
> > + if (seq_write(s, str, num))
> > + return 0;
> > +
> > + num = scnprintf(str, sizeof(str), "boot_region = <%pad, 0x%x>\n",
> > + &core->fw.phys, core->fw.length);
> > + if (seq_write(s, str, num))
> > + return 0;
> > + num = scnprintf(str, sizeof(str), "rpc_region = <%pad, 0x%x> used = 0x%x\n",
> > + &core->rpc.phys, core->rpc.length, core->rpc.bytesused);
> > + if (seq_write(s, str, num))
> > + return 0;
> > + num = scnprintf(str, sizeof(str), "fwlog_region = <%pad, 0x%x>\n",
> > + &core->log.phys, core->log.length);
> > + if (seq_write(s, str, num))
> > + return 0;
> > +
> > + num = scnprintf(str, sizeof(str), "state = %d\n", core->state);
> > + if (seq_write(s, str, num))
> > + return 0;
> > + if (core->state == VPU_CORE_DEINIT)
> > + return 0;
> > + num = scnprintf(str, sizeof(str), "fw version = %d.%d.%d\n",
> > + (core->fw_version >> 16) & 0xff,
> > + (core->fw_version >> 8) & 0xff,
> > + core->fw_version & 0xff);
> > + if (seq_write(s, str, num))
> > + return 0;
> > + num = scnprintf(str, sizeof(str), "instances = %d/%d (0x%02lx), %d\n",
> > + hweight32(core->instance_mask),
> > + core->supported_instance_count,
> > + core->instance_mask,
> > + core->request_count);
> > + if (seq_write(s, str, num))
> > + return 0;
> > + num = scnprintf(str, sizeof(str), "kfifo len = 0x%x\n", kfifo_len(&core->msg_fifo));
> > + if (seq_write(s, str, num))
> > + return 0;
> > + num = scnprintf(str, sizeof(str),
> > + "cmd_buf:[0x%x, 0x%x], wptr = 0x%x, rptr = 0x%x\n",
> > + iface->cmd_desc->start,
> > + iface->cmd_desc->end,
> > + iface->cmd_desc->wptr,
> > + iface->cmd_desc->rptr);
> > + if (seq_write(s, str, num))
> > + return 0;
> > + num = scnprintf(str, sizeof(str),
> > + "msg_buf:[0x%x, 0x%x], wptr = 0x%x, rptr = 0x%x\n",
> > + iface->msg_desc->start,
> > + iface->msg_desc->end,
> > + iface->msg_desc->wptr,
> > + iface->msg_desc->rptr);
> > + if (seq_write(s, str, num))
> > + return 0;
> > +
> > + return 0;
> > +}
> > +
> > +static int vpu_dbg_fwlog(struct seq_file *s, void *data)
> > +{
> > + struct vpu_core *core = s->private;
> > + struct print_buf_desc *print_buf;
> > + int length;
> > + u32 rptr;
> > + u32 wptr;
> > + int ret = 0;
> > +
> > + if (!core->log.virt || core->state == VPU_CORE_DEINIT)
> > + return 0;
> > +
> > + print_buf = core->log.virt;
> > + rptr = print_buf->read;
> > + wptr = print_buf->write;
> > +
> > + if (rptr == wptr)
> > + return 0;
> > + else if (rptr < wptr)
> > + length = wptr - rptr;
> > + else
> > + length = print_buf->bytes + wptr - rptr;
> > +
> > + if (s->count + length >= s->size) {
> > + s->count = s->size;
> > + return 0;
> > + }
> > +
> > + if (rptr + length >= print_buf->bytes) {
> > + int num = print_buf->bytes - rptr;
> > +
> > + if (seq_write(s, print_buf->buffer + rptr, num))
> > + ret = -1;
> > + length -= num;
> > + rptr = 0;
> > + }
> > +
> > + if (length) {
> > + if (seq_write(s, print_buf->buffer + rptr, length))
> > + ret = -1;
> > + rptr += length;
> > + }
> > + if (!ret)
> > + print_buf->read = rptr;
> > +
> > + return 0;
> > +}
> > +
> > +static int vpu_dbg_inst_open(struct inode *inode, struct file *filp)
> > +{
> > + return single_open(filp, vpu_dbg_instance, inode->i_private);
> > +}
> > +
> > +static ssize_t vpu_dbg_inst_write(struct file *file,
> > + const char __user *user_buf, size_t size, loff_t *ppos)
> > +{
> > + struct seq_file *s = file->private_data;
> > + struct vpu_inst *inst = s->private;
> > +
> > + vpu_session_debug(inst);
> > +
> > + return size;
> > +}
> > +
> > +static ssize_t vpu_dbg_core_write(struct file *file,
> > + const char __user *user_buf, size_t size, loff_t *ppos)
> > +{
> > + struct seq_file *s = file->private_data;
> > + struct vpu_core *core = s->private;
> > +
> > + pm_runtime_get_sync(core->dev);
> > + mutex_lock(&core->lock);
> > + if (core->state != VPU_CORE_DEINIT && !core->instance_mask) {
> > + dev_info(core->dev, "reset\n");
> > + if (!vpu_core_sw_reset(core)) {
> > + core->state = VPU_CORE_ACTIVE;
> > + core->hang_mask = 0;
> > + }
> > + }
> > + mutex_unlock(&core->lock);
> > + pm_runtime_put_sync(core->dev);
> > +
> > + return size;
> > +}
> > +
> > +static int vpu_dbg_core_open(struct inode *inode, struct file *filp)
> > +{
> > + return single_open(filp, vpu_dbg_core, inode->i_private);
> > +}
> > +
> > +static int vpu_dbg_fwlog_open(struct inode *inode, struct file *filp)
> > +{
> > + return single_open(filp, vpu_dbg_fwlog, inode->i_private);
> > +}
> > +
> > +static const struct file_operations vpu_dbg_inst_fops = {
> > + .owner = THIS_MODULE,
> > + .open = vpu_dbg_inst_open,
> > + .release = single_release,
> > + .read = seq_read,
> > + .write = vpu_dbg_inst_write,
> > +};
> > +
> > +static const struct file_operations vpu_dbg_core_fops = {
> > + .owner = THIS_MODULE,
> > + .open = vpu_dbg_core_open,
> > + .release = single_release,
> > + .read = seq_read,
> > + .write = vpu_dbg_core_write,
> > +};
> > +
> > +static const struct file_operations vpu_dbg_fwlog_fops = {
> > + .owner = THIS_MODULE,
> > + .open = vpu_dbg_fwlog_open,
> > + .release = single_release,
> > + .read = seq_read,
> > +};
> > +
> > +int vpu_inst_create_dbgfs_file(struct vpu_inst *inst)
> > +{
> > + struct vpu_dev *vpu;
> > + char name[64];
> > +
> > + if (!inst || !inst->core || !inst->core->vpu)
> > + return -EINVAL;
> > +
> > + vpu = inst->core->vpu;
> > + if (!vpu->debugfs)
> > + return -EINVAL;
> > +
> > + if (inst->debugfs)
> > + return 0;
> > +
> > + scnprintf(name, sizeof(name), "instance.%d.%d",
> > + inst->core->id, inst->id);
> > + inst->debugfs = debugfs_create_file((const char *)name,
> > + VERIFY_OCTAL_PERMISSIONS(0644),
> > + vpu->debugfs,
> > + inst,
> > + &vpu_dbg_inst_fops);
> > + if (!inst->debugfs) {
> > + dev_err(inst->dev, "vpu create debugfs %s fail\n", name);
> > + return -EINVAL;
> > + }
> > +
> > + return 0;
> > +}
> > +
> > +int vpu_inst_remove_dbgfs_file(struct vpu_inst *inst)
> > +{
> > + if (!inst)
> > + return 0;
> > +
> > + debugfs_remove(inst->debugfs);
> > + inst->debugfs = NULL;
> > +
> > + return 0;
> > +}
> > +
> > +int vpu_core_create_dbgfs_file(struct vpu_core *core)
> > +{
> > + struct vpu_dev *vpu;
> > + char name[64];
> > +
> > + if (!core || !core->vpu)
> > + return -EINVAL;
> > +
> > + vpu = core->vpu;
> > + if (!vpu->debugfs)
> > + return -EINVAL;
> > +
> > + if (!core->debugfs) {
> > + scnprintf(name, sizeof(name), "core.%d", core->id);
> > + core->debugfs = debugfs_create_file((const char *)name,
> > + VERIFY_OCTAL_PERMISSIONS(0644),
> > + vpu->debugfs,
> > + core,
> > + &vpu_dbg_core_fops);
> > + if (!core->debugfs) {
> > + dev_err(core->dev, "vpu create debugfs %s fail\n", name);
> > + return -EINVAL;
> > + }
> > + }
> > + if (!core->debugfs_fwlog) {
> > + scnprintf(name, sizeof(name), "fwlog.%d", core->id);
> > + core->debugfs_fwlog = debugfs_create_file((const char *)name,
> > + VERIFY_OCTAL_PERMISSIONS(0444),
> > + vpu->debugfs,
> > + core,
> > + &vpu_dbg_fwlog_fops);
> > + if (!core->debugfs_fwlog) {
> > + dev_err(core->dev, "vpu create debugfs %s fail\n", name);
> > + return -EINVAL;
> > + }
> > + }
> > +
> > + return 0;
> > +}
> > +
> > +int vpu_core_remove_dbgfs_file(struct vpu_core *core)
> > +{
> > + if (!core)
> > + return 0;
> > + debugfs_remove(core->debugfs);
> > + core->debugfs = NULL;
> > + debugfs_remove(core->debugfs_fwlog);
> > + core->debugfs_fwlog = NULL;
> > +
> > + return 0;
> > +}
> > +
> > +void vpu_inst_record_flow(struct vpu_inst *inst, u32 flow)
> > +{
> > + if (!inst)
> > + return;
> > +
> > + inst->flows[inst->flow_idx] = flow;
> > + inst->flow_idx = (inst->flow_idx + 1) % (ARRAY_SIZE(inst->flows));
> > +}
> > diff --git a/drivers/media/platform/amphion/vpu_rpc.c b/drivers/media/platform/amphion/vpu_rpc.c
> > new file mode 100644
> > index 000000000000..7b5e9177e010
> > --- /dev/null
> > +++ b/drivers/media/platform/amphion/vpu_rpc.c
> > @@ -0,0 +1,279 @@
> > +// SPDX-License-Identifier: GPL-2.0
> > +/*
> > + * Copyright 2020-2021 NXP
> > + */
> > +
> > +#include <linux/init.h>
> > +#include <linux/interconnect.h>
> > +#include <linux/ioctl.h>
> > +#include <linux/list.h>
> > +#include <linux/kernel.h>
> > +#include <linux/module.h>
> > +#include <linux/of_device.h>
> > +#include <linux/of_address.h>
> > +#include <linux/platform_device.h>
> > +#include <linux/firmware/imx/ipc.h>
> > +#include <linux/firmware/imx/svc/misc.h>
> > +#include "vpu.h"
> > +#include "vpu_rpc.h"
> > +#include "vpu_imx8q.h"
> > +#include "vpu_windsor.h"
> > +#include "vpu_malone.h"
> > +
> > +u32 vpu_iface_check_memory_region(struct vpu_core *core, dma_addr_t addr, u32 size)
> > +{
> > + struct vpu_iface_ops *ops = vpu_core_get_iface(core);
> > +
> > + if (!ops || !ops->check_memory_region)
> > + return VPU_CORE_MEMORY_INVALID;
> > +
> > + return ops->check_memory_region(core->fw.phys, addr, size);
> > +}
> > +
> > +static u32 vpu_rpc_check_buffer_space(struct vpu_rpc_buffer_desc *desc, bool write)
> > +{
> > + u32 ptr1;
> > + u32 ptr2;
> > + u32 size;
> > +
> > + WARN_ON(!desc);
> > +
> > + size = desc->end - desc->start;
> > + if (write) {
> > + ptr1 = desc->wptr;
> > + ptr2 = desc->rptr;
> > + } else {
> > + ptr1 = desc->rptr;
> > + ptr2 = desc->wptr;
> > + }
> > +
> > + if (ptr1 == ptr2) {
> > + if (!write)
> > + return 0;
> > + else
> > + return size;
> > + }
> > +
> > + return (ptr2 + size - ptr1) % size;
> > +}
> > +
> > +static int vpu_rpc_send_cmd_buf(struct vpu_shared_addr *shared,
> > + struct vpu_rpc_event *cmd)
> > +{
> > + struct vpu_rpc_buffer_desc *desc;
> > + u32 space = 0;
> > + u32 *data;
> > + u32 wptr;
> > + u32 i;
> > +
> > + WARN_ON(!shared || !shared->cmd_mem_vir || !cmd);
> > +
> > + desc = shared->cmd_desc;
> > + space = vpu_rpc_check_buffer_space(desc, true);
> > + if (space < (((cmd->hdr.num + 1) << 2) + 16)) {
> > + pr_err("No space in cmd buffer for [%d] %d\n",
> > + cmd->hdr.index, cmd->hdr.id);
> > + return -EINVAL;
> > + }
> > + wptr = desc->wptr;
> > + data = (u32 *)(shared->cmd_mem_vir + desc->wptr - desc->start);
> > + *data = 0;
> > + *data |= ((cmd->hdr.index & 0xff) << 24);
> > + *data |= ((cmd->hdr.num & 0xff) << 16);
> > + *data |= (cmd->hdr.id & 0x3fff);
> > + wptr += 4;
> > + data++;
> > + if (wptr >= desc->end) {
> > + wptr = desc->start;
> > + data = shared->cmd_mem_vir;
> > + }
> > +
> > + for (i = 0; i < cmd->hdr.num; i++) {
> > + *data = cmd->data[i];
> > + wptr += 4;
> > + data++;
> > + if (wptr >= desc->end) {
> > + wptr = desc->start;
> > + data = shared->cmd_mem_vir;
> > + }
> > + }
> > +
> > + /* update wptr after data is written */
> > + mb();
> > + desc->wptr = wptr;
> > +
> > + return 0;
> > +}
> > +
> > +static bool vpu_rpc_check_msg(struct vpu_shared_addr *shared)
> > +{
> > + struct vpu_rpc_buffer_desc *desc;
> > + u32 space = 0;
> > + u32 msgword;
> > + u32 msgnum;
> > +
> > + WARN_ON(!shared || !shared->msg_desc);
> > +
> > + desc = shared->msg_desc;
> > + space = vpu_rpc_check_buffer_space(desc, 0);
> > + space = (space >> 2);
> > +
> > + if (space) {
> > + msgword = *(u32 *)(shared->msg_mem_vir + desc->rptr - desc->start);
> > + msgnum = (msgword & 0xff0000) >> 16;
> > + if (msgnum <= space)
> > + return true;
> > + }
> > +
> > + return false;
> > +}
> > +
> > +static int vpu_rpc_receive_msg_buf(struct vpu_shared_addr *shared, struct vpu_rpc_event *msg)
> > +{
> > + struct vpu_rpc_buffer_desc *desc;
> > + u32 *data;
> > + u32 msgword;
> > + u32 rptr;
> > + u32 i;
> > +
> > + WARN_ON(!shared || !shared->msg_desc || !msg);
> > +
> > + if (!vpu_rpc_check_msg(shared))
> > + return -EINVAL;
> > +
> > + desc = shared->msg_desc;
> > + data = (u32 *)(shared->msg_mem_vir + desc->rptr - desc->start);
> > + rptr = desc->rptr;
> > + msgword = *data;
> > + data++;
> > + rptr += 4;
> > + if (rptr >= desc->end) {
> > + rptr = desc->start;
> > + data = shared->msg_mem_vir;
> > + }
> > +
> > + msg->hdr.index = (msgword >> 24) & 0xff;
> > + msg->hdr.num = (msgword >> 16) & 0xff;
> > + msg->hdr.id = msgword & 0x3fff;
> > +
> > + if (msg->hdr.num > ARRAY_SIZE(msg->data)) {
> > + pr_err("msg(%d) data length(%d) is out of range\n",
> > + msg->hdr.id, msg->hdr.num);
> > + return -EINVAL;
> > + }
> > +
> > + for (i = 0; i < msg->hdr.num; i++) {
> > + msg->data[i] = *data;
> > + data++;
> > + rptr += 4;
> > + if (rptr >= desc->end) {
> > + rptr = desc->start;
> > + data = shared->msg_mem_vir;
> > + }
> > + }
> > +
> > + /* update rptr after data is read */
> > + mb();
> > + desc->rptr = rptr;
> > +
> > + return 0;
> > +}
> > +
> > +struct vpu_iface_ops imx8q_rpc_ops[] = {
> > + [VPU_CORE_TYPE_ENC] = {
> > + .check_codec = vpu_imx8q_check_codec,
> > + .check_fmt = vpu_imx8q_check_fmt,
> > + .boot_core = vpu_imx8q_boot_core,
> > + .get_power_state = vpu_imx8q_get_power_state,
> > + .on_firmware_loaded = vpu_imx8q_on_firmware_loaded,
> > + .get_data_size = vpu_windsor_get_data_size,
> > + .check_memory_region = vpu_imx8q_check_memory_region,
> > + .init_rpc = vpu_windsor_init_rpc,
> > + .set_log_buf = vpu_windsor_set_log_buf,
> > + .set_system_cfg = vpu_windsor_set_system_cfg,
> > + .get_version = vpu_windsor_get_version,
> > + .send_cmd_buf = vpu_rpc_send_cmd_buf,
> > + .receive_msg_buf = vpu_rpc_receive_msg_buf,
> > + .pack_cmd = vpu_windsor_pack_cmd,
> > + .convert_msg_id = vpu_windsor_convert_msg_id,
> > + .unpack_msg_data = vpu_windsor_unpack_msg_data,
> > + .config_memory_resource = vpu_windsor_config_memory_resource,
> > + .get_stream_buffer_size = vpu_windsor_get_stream_buffer_size,
> > + .config_stream_buffer = vpu_windsor_config_stream_buffer,
> > + .get_stream_buffer_desc = vpu_windsor_get_stream_buffer_desc,
> > + .update_stream_buffer = vpu_windsor_update_stream_buffer,
> > + .set_encode_params = vpu_windsor_set_encode_params,
> > + .input_frame = vpu_windsor_input_frame,
> > + .get_max_instance_count = vpu_windsor_get_max_instance_count,
> > + },
> > + [VPU_CORE_TYPE_DEC] = {
> > + .check_codec = vpu_imx8q_check_codec,
> > + .check_fmt = vpu_imx8q_check_fmt,
> > + .boot_core = vpu_imx8q_boot_core,
> > + .get_power_state = vpu_imx8q_get_power_state,
> > + .on_firmware_loaded = vpu_imx8q_on_firmware_loaded,
> > + .get_data_size = vpu_malone_get_data_size,
> > + .check_memory_region = vpu_imx8q_check_memory_region,
> > + .init_rpc = vpu_malone_init_rpc,
> > + .set_log_buf = vpu_malone_set_log_buf,
> > + .set_system_cfg = vpu_malone_set_system_cfg,
> > + .get_version = vpu_malone_get_version,
> > + .send_cmd_buf = vpu_rpc_send_cmd_buf,
> > + .receive_msg_buf = vpu_rpc_receive_msg_buf,
> > + .get_stream_buffer_size = vpu_malone_get_stream_buffer_size,
> > + .config_stream_buffer = vpu_malone_config_stream_buffer,
> > + .set_decode_params = vpu_malone_set_decode_params,
> > + .pack_cmd = vpu_malone_pack_cmd,
> > + .convert_msg_id = vpu_malone_convert_msg_id,
> > + .unpack_msg_data = vpu_malone_unpack_msg_data,
> > + .get_stream_buffer_desc = vpu_malone_get_stream_buffer_desc,
> > + .update_stream_buffer = vpu_malone_update_stream_buffer,
> > + .add_scode = vpu_malone_add_scode,
> > + .input_frame = vpu_malone_input_frame,
> > + .pre_send_cmd = vpu_malone_pre_cmd,
> > + .post_send_cmd = vpu_malone_post_cmd,
> > + .init_instance = vpu_malone_init_instance,
> > + .get_max_instance_count = vpu_malone_get_max_instance_count,
> > + },
> > +};
> > +
> > +static struct vpu_iface_ops *vpu_get_iface(struct vpu_dev *vpu, enum vpu_core_type type)
> > +{
> > + struct vpu_iface_ops *rpc_ops = NULL;
> > + u32 size = 0;
> > +
> > + WARN_ON(!vpu || !vpu->res);
> > +
> > + switch (vpu->res->plat_type) {
> > + case IMX8QXP:
> > + case IMX8QM:
> > + rpc_ops = imx8q_rpc_ops;
> > + size = ARRAY_SIZE(imx8q_rpc_ops);
> > + break;
> > + default:
> > + return NULL;
> > + }
> > +
> > + if (type >= size)
> > + return NULL;
> > +
> > + return &rpc_ops[type];
> > +}
> > +
> > +struct vpu_iface_ops *vpu_core_get_iface(struct vpu_core *core)
> > +{
> > + WARN_ON(!core || !core->vpu);
> > +
> > + return vpu_get_iface(core->vpu, core->type);
> > +}
> > +
> > +struct vpu_iface_ops *vpu_inst_get_iface(struct vpu_inst *inst)
> > +{
> > + WARN_ON(!inst || !inst->vpu);
> > +
> > + if (inst->core)
> > + return vpu_core_get_iface(inst->core);
> > +
> > + return vpu_get_iface(inst->vpu, inst->type);
> > +}
> > diff --git a/drivers/media/platform/amphion/vpu_rpc.h
> b/drivers/media/platform/amphion/vpu_rpc.h
> > new file mode 100644
> > index 000000000000..abe998e5a5be
> > --- /dev/null
> > +++ b/drivers/media/platform/amphion/vpu_rpc.h
> > @@ -0,0 +1,464 @@
> > +/* SPDX-License-Identifier: GPL-2.0 */
> > +/*
> > + * Copyright 2020-2021 NXP
> > + */
> > +
> > +#ifndef _AMPHION_VPU_RPC_H
> > +#define _AMPHION_VPU_RPC_H
> > +
> > +#include <media/videobuf2-core.h>
> > +#include "vpu_codec.h"
> > +
> > +struct vpu_rpc_buffer_desc {
> > + u32 wptr;
> > + u32 rptr;
> > + u32 start;
> > + u32 end;
> > +};
> > +
> > +struct vpu_shared_addr {
> > + void *iface;
> > + struct vpu_rpc_buffer_desc *cmd_desc;
> > + void *cmd_mem_vir;
> > + struct vpu_rpc_buffer_desc *msg_desc;
> > + void *msg_mem_vir;
> > +
> > + unsigned long boot_addr;
> > + struct vpu_core *core;
> > + void *priv;
> > +};
> > +
> > +struct vpu_rpc_event_header {
> > + u32 index;
> > + u32 id;
> > + u32 num;
> > +};
> > +
> > +struct vpu_rpc_event {
> > + struct vpu_rpc_event_header hdr;
> > + u32 data[128];
> > +};
> > +
> > +struct vpu_iface_ops {
> > + bool (*check_codec)(enum vpu_core_type type);
> > + bool (*check_fmt)(enum vpu_core_type type, u32 pixelfmt);
> > + u32 (*get_data_size)(void);
> > + u32 (*check_memory_region)(dma_addr_t base, dma_addr_t addr, u32 size);
> > + int (*boot_core)(struct vpu_core *core);
> > + int (*shutdown_core)(struct vpu_core *core);
> > + int (*restore_core)(struct vpu_core *core);
> > + int (*get_power_state)(struct vpu_core *core);
> > + int (*on_firmware_loaded)(struct vpu_core *core);
> > + void (*init_rpc)(struct vpu_shared_addr *shared,
> > + struct vpu_buffer *rpc, dma_addr_t boot_addr);
> > + void (*set_log_buf)(struct vpu_shared_addr *shared,
> > + struct vpu_buffer *log);
> > + void (*set_system_cfg)(struct vpu_shared_addr *shared,
> > + u32 regs_base, void __iomem *regs, u32 index);
> > + void (*set_stream_cfg)(struct vpu_shared_addr *shared, u32 index);
> > + u32 (*get_version)(struct vpu_shared_addr *shared);
> > + u32 (*get_max_instance_count)(struct vpu_shared_addr *shared);
> > + int (*get_stream_buffer_size)(struct vpu_shared_addr *shared);
> > + int (*send_cmd_buf)(struct vpu_shared_addr *shared,
> > + struct vpu_rpc_event *cmd);
> > + int (*receive_msg_buf)(struct vpu_shared_addr *shared,
> > + struct vpu_rpc_event *msg);
> > + int (*pack_cmd)(struct vpu_rpc_event *pkt, u32 index, u32 id, void *data);
> > + int (*convert_msg_id)(u32 msg_id);
> > + int (*unpack_msg_data)(struct vpu_rpc_event *pkt, void *data);
> > + int (*input_frame)(struct vpu_shared_addr *shared,
> > + struct vpu_inst *inst, struct vb2_buffer *vb);
> > + int (*config_memory_resource)(struct vpu_shared_addr *shared,
> > + u32 instance,
> > + u32 type,
> > + u32 index,
> > + struct vpu_buffer *buf);
> > + int (*config_stream_buffer)(struct vpu_shared_addr *shared,
> > + u32 instance,
> > + struct vpu_buffer *buf);
> > + int (*update_stream_buffer)(struct vpu_shared_addr *shared,
> > + u32 instance, u32 ptr, bool write);
> > + int (*get_stream_buffer_desc)(struct vpu_shared_addr *shared,
> > + u32 instance,
> > + struct vpu_rpc_buffer_desc *desc);
> > + int (*set_encode_params)(struct vpu_shared_addr *shared,
> > + u32 instance,
> > + struct vpu_encode_params *params, u32 update);
> > + int (*set_decode_params)(struct vpu_shared_addr *shared,
> > + u32 instance,
> > + struct vpu_decode_params *params, u32 update);
> > + int (*add_scode)(struct vpu_shared_addr *shared,
> > + u32 instance,
> > + struct vpu_buffer *stream_buffer,
> > + u32 pixelformat,
> > + u32 scode_type);
> > + int (*pre_send_cmd)(struct vpu_shared_addr *shared, u32 instance);
> > + int (*post_send_cmd)(struct vpu_shared_addr *shared, u32 instance);
> > + int (*init_instance)(struct vpu_shared_addr *shared, u32 instance);
> > +};
> > +
> > +enum {
> > + VPU_CORE_MEMORY_INVALID = 0,
> > + VPU_CORE_MEMORY_CACHED,
> > + VPU_CORE_MEMORY_UNCACHED
> > +};
> > +
> > +struct vpu_rpc_region_t {
> > + dma_addr_t start;
> > + dma_addr_t end;
> > + dma_addr_t type;
> > +};
> > +
> > +struct vpu_iface_ops *vpu_core_get_iface(struct vpu_core *core);
> > +struct vpu_iface_ops *vpu_inst_get_iface(struct vpu_inst *inst);
> > +u32 vpu_iface_check_memory_region(struct vpu_core *core, dma_addr_t addr, u32 size);
> > +
> > +static inline bool vpu_iface_check_codec(struct vpu_core *core)
> > +{
> > + struct vpu_iface_ops *ops = vpu_core_get_iface(core);
> > +
> > + if (ops && ops->check_codec)
> > + return ops->check_codec(core->type);
> > +
> > + return true;
> > +}
> > +
> > +static inline bool vpu_iface_check_format(struct vpu_inst *inst, u32 pixelfmt)
> > +{
> > + struct vpu_iface_ops *ops = vpu_inst_get_iface(inst);
> > +
> > + if (ops && ops->check_fmt)
> > + return ops->check_fmt(inst->type, pixelfmt);
> > +
> > + return true;
> > +}
> > +
> > +static inline int vpu_iface_boot_core(struct vpu_core *core)
> > +{
> > + struct vpu_iface_ops *ops = vpu_core_get_iface(core);
> > +
> > + if (ops && ops->boot_core)
> > + return ops->boot_core(core);
> > + return 0;
> > +}
> > +
> > +static inline int vpu_iface_get_power_state(struct vpu_core *core)
> > +{
> > + struct vpu_iface_ops *ops = vpu_core_get_iface(core);
> > +
> > + if (ops && ops->get_power_state)
> > + return ops->get_power_state(core);
> > + return 1;
> > +}
> > +
> > +static inline int vpu_iface_shutdown_core(struct vpu_core *core)
> > +{
> > + struct vpu_iface_ops *ops = vpu_core_get_iface(core);
> > +
> > + if (ops && ops->shutdown_core)
> > + return ops->shutdown_core(core);
> > + return 0;
> > +}
> > +
> > +static inline int vpu_iface_restore_core(struct vpu_core *core)
> > +{
> > + struct vpu_iface_ops *ops = vpu_core_get_iface(core);
> > +
> > + if (ops && ops->restore_core)
> > + return ops->restore_core(core);
> > + return 0;
> > +}
> > +
> > +static inline int vpu_iface_on_firmware_loaded(struct vpu_core *core)
> > +{
> > + struct vpu_iface_ops *ops = vpu_core_get_iface(core);
> > +
> > + if (ops && ops->on_firmware_loaded)
> > + return ops->on_firmware_loaded(core);
> > +
> > + return 0;
> > +}
> > +
> > +static inline u32 vpu_iface_get_data_size(struct vpu_core *core)
> > +{
> > + struct vpu_iface_ops *ops = vpu_core_get_iface(core);
> > +
> > + if (!ops || !ops->get_data_size)
> > + return 0;
> > +
> > + return ops->get_data_size();
> > +}
> > +
> > +static inline int vpu_iface_init(struct vpu_core *core,
> > + struct vpu_shared_addr *shared,
> > + struct vpu_buffer *rpc,
> > + dma_addr_t boot_addr)
> > +{
> > + struct vpu_iface_ops *ops = vpu_core_get_iface(core);
> > +
> > + if (!ops || !ops->init_rpc)
> > + return -EINVAL;
> > +
> > + ops->init_rpc(shared, rpc, boot_addr);
> > + core->iface = shared;
> > + shared->core = core;
> > + if (rpc->bytesused > rpc->length)
> > + return -ENOSPC;
> > + return 0;
> > +}
> > +
> > +static inline int vpu_iface_set_log_buf(struct vpu_core *core,
> > + struct vpu_buffer *log)
> > +{
> > + struct vpu_iface_ops *ops = vpu_core_get_iface(core);
> > +
> > + if (!ops)
> > + return -EINVAL;
> > +
> > + if (ops->set_log_buf)
> > + ops->set_log_buf(core->iface, log);
> > +
> > + return 0;
> > +}
> > +
> > +static inline int vpu_iface_config_system(struct vpu_core *core,
> > + u32 regs_base, void __iomem *regs)
> > +{
> > + struct vpu_iface_ops *ops = vpu_core_get_iface(core);
> > +
> > + if (!ops)
> > + return -EINVAL;
> > + if (ops->set_system_cfg)
> > + ops->set_system_cfg(core->iface, regs_base, regs, core->id);
> > +
> > + return 0;
> > +}
> > +
> > +static inline int vpu_iface_get_stream_buffer_size(struct vpu_core *core)
> > +{
> > + struct vpu_iface_ops *ops = vpu_core_get_iface(core);
> > +
> > + if (!ops || !ops->get_stream_buffer_size)
> > + return 0;
> > +
> > + return ops->get_stream_buffer_size(core->iface);
> > +}
> > +
> > +static inline int vpu_iface_config_stream(struct vpu_inst *inst)
> > +{
> > + struct vpu_iface_ops *ops = vpu_core_get_iface(inst->core);
> > +
> > + WARN_ON(inst->id < 0);
> > + if (!ops)
> > + return -EINVAL;
> > + if (ops->set_stream_cfg)
> > + ops->set_stream_cfg(inst->core->iface, inst->id);
> > + return 0;
> > +}
> > +
> > +static inline int vpu_iface_send_cmd(struct vpu_core *core, struct vpu_rpc_event *cmd)
> > +{
> > + struct vpu_iface_ops *ops = vpu_core_get_iface(core);
> > +
> > + if (!ops || !ops->send_cmd_buf)
> > + return -EINVAL;
> > +
> > + return ops->send_cmd_buf(core->iface, cmd);
> > +}
> > +
> > +static inline int vpu_iface_receive_msg(struct vpu_core *core, struct vpu_rpc_event *msg)
> > +{
> > + struct vpu_iface_ops *ops = vpu_core_get_iface(core);
> > +
> > + if (!ops || !ops->receive_msg_buf)
> > + return -EINVAL;
> > +
> > + return ops->receive_msg_buf(core->iface, msg);
> > +}
> > +
> > +static inline int vpu_iface_pack_cmd(struct vpu_core *core,
> > + struct vpu_rpc_event *pkt,
> > + u32 index, u32 id, void *data)
> > +{
> > + struct vpu_iface_ops *ops = vpu_core_get_iface(core);
> > +
> > + if (!ops || !ops->pack_cmd)
> > + return -EINVAL;
> > + return ops->pack_cmd(pkt, index, id, data);
> > +}
> > +
> > +static inline int vpu_iface_convert_msg_id(struct vpu_core *core, u32 msg_id)
> > +{
> > + struct vpu_iface_ops *ops = vpu_core_get_iface(core);
> > +
> > + if (!ops || !ops->convert_msg_id)
> > + return -EINVAL;
> > +
> > + return ops->convert_msg_id(msg_id);
> > +}
> > +
> > +static inline int vpu_iface_unpack_msg_data(struct vpu_core *core,
> > + struct vpu_rpc_event *pkt, void *data)
> > +{
> > + struct vpu_iface_ops *ops = vpu_core_get_iface(core);
> > +
> > + if (!ops || !ops->unpack_msg_data)
> > + return -EINVAL;
> > +
> > + return ops->unpack_msg_data(pkt, data);
> > +}
> > +
> > +static inline int vpu_iface_input_frame(struct vpu_inst *inst,
> > + struct vb2_buffer *vb)
> > +{
> > + struct vpu_iface_ops *ops = vpu_core_get_iface(inst->core);
> > +
> > + if (!ops || !ops->input_frame)
> > + return -EINVAL;
> > +
> > + return ops->input_frame(inst->core->iface, inst, vb);
> > +}
> > +
> > +static inline int vpu_iface_config_memory_resource(struct vpu_inst *inst,
> > + u32 type, u32 index, struct vpu_buffer *buf)
> > +{
> > + struct vpu_iface_ops *ops = vpu_core_get_iface(inst->core);
> > +
> > + WARN_ON(inst->id < 0);
> > + if (!ops || !ops->config_memory_resource)
> > + return -EINVAL;
> > +
> > + return ops->config_memory_resource(inst->core->iface,
> > + inst->id,
> > + type, index, buf);
> > +}
> > +
> > +static inline int vpu_iface_config_stream_buffer(struct vpu_inst *inst,
> > + struct vpu_buffer *buf)
> > +{
> > + struct vpu_iface_ops *ops = vpu_core_get_iface(inst->core);
> > +
> > + WARN_ON(inst->id < 0);
> > + if (!ops || !ops->config_stream_buffer)
> > + return -EINVAL;
> > +
> > + return ops->config_stream_buffer(inst->core->iface, inst->id, buf);
> > +}
> > +
> > +static inline int vpu_iface_update_stream_buffer(struct vpu_inst *inst,
> > + u32 ptr, bool write)
> > +{
> > + struct vpu_iface_ops *ops = vpu_core_get_iface(inst->core);
> > +
> > + WARN_ON(inst->id < 0);
> > + if (!ops || !ops->update_stream_buffer)
> > + return -EINVAL;
> > +
> > + return ops->update_stream_buffer(inst->core->iface, inst->id, ptr, write);
> > +}
> > +
> > +static inline int vpu_iface_get_stream_buffer_desc(struct vpu_inst *inst,
> > + struct vpu_rpc_buffer_desc *desc)
> > +{
> > + struct vpu_iface_ops *ops = vpu_core_get_iface(inst->core);
> > +
> > + WARN_ON(inst->id < 0);
> > + if (!ops || !ops->get_stream_buffer_desc)
> > + return -EINVAL;
> > +
> > + if (!desc)
> > + return 0;
> > +
> > + return ops->get_stream_buffer_desc(inst->core->iface, inst->id, desc);
> > +}
> > +
> > +static inline u32 vpu_iface_get_version(struct vpu_core *core)
> > +{
> > + struct vpu_iface_ops *ops = vpu_core_get_iface(core);
> > +
> > + if (!ops || !ops->get_version)
> > + return 0;
> > +
> > + return ops->get_version(core->iface);
> > +}
> > +
> > +static inline u32 vpu_iface_get_max_instance_count(struct vpu_core *core)
> > +{
> > + struct vpu_iface_ops *ops = vpu_core_get_iface(core);
> > +
> > + if (!ops || !ops->get_max_instance_count)
> > + return 0;
> > +
> > + return ops->get_max_instance_count(core->iface);
> > +}
> > +
> > +static inline int vpu_iface_set_encode_params(struct vpu_inst *inst,
> > + struct vpu_encode_params *params, u32 update)
> > +{
> > + struct vpu_iface_ops *ops = vpu_core_get_iface(inst->core);
> > +
> > + WARN_ON(inst->id < 0);
> > + if (!ops || !ops->set_encode_params)
> > + return -EINVAL;
> > +
> > + return ops->set_encode_params(inst->core->iface, inst->id, params, update);
> > +}
> > +
> > +static inline int vpu_iface_set_decode_params(struct vpu_inst *inst,
> > + struct vpu_decode_params *params, u32 update)
> > +{
> > + struct vpu_iface_ops *ops = vpu_core_get_iface(inst->core);
> > +
> > + WARN_ON(inst->id < 0);
> > + if (!ops || !ops->set_decode_params)
> > + return -EINVAL;
> > +
> > + return ops->set_decode_params(inst->core->iface, inst->id, params, update);
> > +}
> > +
> > +static inline int vpu_iface_add_scode(struct vpu_inst *inst, u32 scode_type)
> > +{
> > + struct vpu_iface_ops *ops = vpu_core_get_iface(inst->core);
> > +
> > + WARN_ON(inst->id < 0);
> > + if (!ops || !ops->add_scode)
> > + return -EINVAL;
> > +
> > + return ops->add_scode(inst->core->iface, inst->id,
> > + &inst->stream_buffer,
> > + inst->out_format.pixfmt,
> > + scode_type);
> > +}
> > +
> > +static inline int vpu_iface_pre_send_cmd(struct vpu_inst *inst)
> > +{
> > + struct vpu_iface_ops *ops = vpu_core_get_iface(inst->core);
> > +
> > + WARN_ON(inst->id < 0);
> > + if (ops && ops->pre_send_cmd)
> > + return ops->pre_send_cmd(inst->core->iface, inst->id);
> > + return 0;
> > +}
> > +
> > +static inline int vpu_iface_post_send_cmd(struct vpu_inst *inst)
> > +{
> > + struct vpu_iface_ops *ops = vpu_core_get_iface(inst->core);
> > +
> > + WARN_ON(inst->id < 0);
> > + if (ops && ops->post_send_cmd)
> > + return ops->post_send_cmd(inst->core->iface, inst->id);
> > + return 0;
> > +}
> > +
> > +static inline int vpu_iface_init_instance(struct vpu_inst *inst)
> > +{
> > + struct vpu_iface_ops *ops = vpu_core_get_iface(inst->core);
> > +
> > + WARN_ON(inst->id < 0);
> > + if (ops && ops->init_instance)
> > + return ops->init_instance(inst->core->iface, inst->id);
> > +
> > + return 0;
> > +}
> > +
> > +#endif
> >

2021-12-02 09:24:50

by Ming Qian

[permalink] [raw]
Subject: RE: [EXT] Re: [PATCH v13 04/13] media: amphion: add vpu core driver

> -----Original Message-----
> From: Hans Verkuil [mailto:[email protected]]
> Sent: Thursday, December 2, 2021 5:05 PM
> To: Ming Qian <[email protected]>; [email protected];
> [email protected]; [email protected]; [email protected]
> Cc: [email protected]; [email protected]; dl-linux-imx
> <[email protected]>; Aisheng Dong <[email protected]>;
> [email protected]; [email protected];
> [email protected]; [email protected]
> Subject: [EXT] Re: [PATCH v13 04/13] media: amphion: add vpu core driver
>
> Caution: EXT Email
>
> On 30/11/2021 10:48, Ming Qian wrote:
> > The vpu supports encoder and decoder.
> > it needs mu core to handle it.
>
> "mu core"? Do you mean "vpu core"? If not, then what is a "mu core"?
>
> Regards,
>
> Hans

Yes, it means "vpu core"; we often call it "mu" internally.
I'm sorry my wording caused confusion.

>
> > core will run either encoder or decoder firmware.
> >
> > This driver is for support the vpu core.
> >
> > Signed-off-by: Ming Qian <[email protected]>
> > Signed-off-by: Shijie Qin <[email protected]>
> > Signed-off-by: Zhou Peng <[email protected]>
> > ---
> > drivers/media/platform/amphion/vpu_codec.h | 67 ++
> > drivers/media/platform/amphion/vpu_core.c | 906 +++++++++++++++++++++
> > drivers/media/platform/amphion/vpu_core.h | 15 +
> > drivers/media/platform/amphion/vpu_dbg.c | 495 +++++++++++
> > drivers/media/platform/amphion/vpu_rpc.c | 279 +++++++
> > drivers/media/platform/amphion/vpu_rpc.h | 464 +++++++++++
> > 6 files changed, 2226 insertions(+)
> > create mode 100644 drivers/media/platform/amphion/vpu_codec.h
> > create mode 100644 drivers/media/platform/amphion/vpu_core.c
> > create mode 100644 drivers/media/platform/amphion/vpu_core.h
> > create mode 100644 drivers/media/platform/amphion/vpu_dbg.c
> > create mode 100644 drivers/media/platform/amphion/vpu_rpc.c
> > create mode 100644 drivers/media/platform/amphion/vpu_rpc.h
> >

2021-12-02 09:39:53

by Hans Verkuil

Subject: Re: [PATCH v13 02/13] media:Add nv12mt_8l128 and nv12mt_10be_8l128 video format.

On 30/11/2021 10:48, Ming Qian wrote:
> nv12mt_8l128 is 8-bit tiled nv12 format used by amphion decoder.
> nv12mt_10be_8l128 is 10-bit tiled format used by amphion decoder.
> The tile size is 8x128
>
> Signed-off-by: Ming Qian <[email protected]>
> Signed-off-by: Shijie Qin <[email protected]>
> Signed-off-by: Zhou Peng <[email protected]>
> ---
> .../userspace-api/media/v4l/pixfmt-yuv-planar.rst | 15 +++++++++++++++
> drivers/media/v4l2-core/v4l2-ioctl.c | 2 ++
> include/uapi/linux/videodev2.h | 2 ++
> 3 files changed, 19 insertions(+)
>
> diff --git a/Documentation/userspace-api/media/v4l/pixfmt-yuv-planar.rst b/Documentation/userspace-api/media/v4l/pixfmt-yuv-planar.rst
> index 3a09d93d405b..fc3baa2753ab 100644
> --- a/Documentation/userspace-api/media/v4l/pixfmt-yuv-planar.rst
> +++ b/Documentation/userspace-api/media/v4l/pixfmt-yuv-planar.rst
> @@ -257,6 +257,8 @@ of the luma plane.
> .. _V4L2-PIX-FMT-NV12-4L4:
> .. _V4L2-PIX-FMT-NV12-16L16:
> .. _V4L2-PIX-FMT-NV12-32L32:
> +.. _V4L2_PIX_FMT_NV12MT_8L128:
> +.. _V4L2_PIX_FMT_NV12MT_10BE_8L128:
>
> Tiled NV12
> ----------
> @@ -296,6 +298,19 @@ tiles linearly in memory. The line stride and image height must be
> aligned to a multiple of 32. The layouts of the luma and chroma planes are
> identical.
>
> +``V4L2_PIX_FMT_NV12MT_8L128`` is similar to ``V4L2_PIX_FMT_NV12M`` but stores
> +pixel in 2D 8x128 tiles, and stores tiles linearly in memory.

pixel -> pixels (note: also wrong in the text V4L2_PIX_FMT_NV12_4L4/16L16/32L32)

Shouldn't this be called V4L2_PIX_FMT_NV12M_8L128? The 'MT' suffix seems to be specific
to macroblock tiles and not linear tiles.

> +The image height must be aligned to a multiple of 128.
> +The layouts of the luma and chroma planes are identical.
> +
> +``V4L2_PIX_FMT_NV12MT_10BE_8L128`` is similar to ``V4L2_PIX_FMT_NV12M`` but stores
> +10 bits pixel in 2D 8x128 tiles, and stores tiles linearly in memory.
> +the data is arranged at the big end.

at the big end -> in big endian order

I assume the 10-bit pixels are packed? So 5 bytes contain 4 10-bit pixels,
laid out like this (for luma):

byte 0: Y0(bits 9-2)
byte 1: Y0(bits 1-0) Y1(bits 9-4)
byte 2: Y1(bits 3-0) Y2(bits 9-6)
byte 3: Y2(bits 5-0) Y3(bits 9-8)
byte 4: Y3(bits 7-0)
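For reference, the byte layout Hans is assuming can be sketched as a small
host-side unpack routine. This is only an illustration of that assumed
MSB-first packing, not the driver's or hardware's actual code; the function
name is made up:

```c
#include <stdint.h>

/* Unpack 4 10-bit luma samples from 5 bytes, MSB-first, per the layout
 * described above. Whether the Amphion hardware really packs pixels this
 * way is exactly the open question in this review. */
static void unpack_4x10_be(const uint8_t b[5], uint16_t y[4])
{
	y[0] = (uint16_t)((b[0] << 2) | (b[1] >> 6));
	y[1] = (uint16_t)(((b[1] & 0x3f) << 4) | (b[2] >> 4));
	y[2] = (uint16_t)(((b[2] & 0x0f) << 6) | (b[3] >> 2));
	y[3] = (uint16_t)(((b[3] & 0x03) << 8) | b[4]);
}
```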

> +The image height must be aligned to a multiple of 128.
> +The layouts of the luma and chroma planes are identical.
> +Note the tile size is 8bytes multiplied by 128 bytes,
> +it means that the low bits and high bits of one pixel may be in differnt tiles.

differnt -> different

> +
> .. _nv12mt:
>
> .. kernel-figure:: nv12mt.svg
> diff --git a/drivers/media/v4l2-core/v4l2-ioctl.c b/drivers/media/v4l2-core/v4l2-ioctl.c
> index 69b74d0e8a90..400eec0157a7 100644
> --- a/drivers/media/v4l2-core/v4l2-ioctl.c
> +++ b/drivers/media/v4l2-core/v4l2-ioctl.c
> @@ -1388,6 +1388,8 @@ static void v4l_fill_fmtdesc(struct v4l2_fmtdesc *fmt)
> case V4L2_META_FMT_VIVID: descr = "Vivid Metadata"; break;
> case V4L2_META_FMT_RK_ISP1_PARAMS: descr = "Rockchip ISP1 3A Parameters"; break;
> case V4L2_META_FMT_RK_ISP1_STAT_3A: descr = "Rockchip ISP1 3A Statistics"; break;
> + case V4L2_PIX_FMT_NV12MT_8L128: descr = "NV12M (8x128 Linear)"; break;
> + case V4L2_PIX_FMT_NV12MT_10BE_8L128: descr = "NV12M 10BE(8x128 Linear)"; break;

"10-bit NV12M (8x128 Linear, BE)"

>
> default:
> /* Compressed formats */
> diff --git a/include/uapi/linux/videodev2.h b/include/uapi/linux/videodev2.h
> index f118fe7a9f58..9443c3109928 100644
> --- a/include/uapi/linux/videodev2.h
> +++ b/include/uapi/linux/videodev2.h
> @@ -632,6 +632,8 @@ struct v4l2_pix_format {
> /* Tiled YUV formats, non contiguous planes */
> #define V4L2_PIX_FMT_NV12MT v4l2_fourcc('T', 'M', '1', '2') /* 12 Y/CbCr 4:2:0 64x32 tiles */
> #define V4L2_PIX_FMT_NV12MT_16X16 v4l2_fourcc('V', 'M', '1', '2') /* 12 Y/CbCr 4:2:0 16x16 tiles */
> +#define V4L2_PIX_FMT_NV12MT_8L128 v4l2_fourcc('N', 'A', '1', '2') /* Y/CbCr 4:2:0 8x128 tiles */
> +#define V4L2_PIX_FMT_NV12MT_10BE_8L128 v4l2_fourcc('N', 'T', '1', '2') /* Y/CbCr 4:2:0 10-bit 8x128 tiles */

Use v4l2_fourcc_be to denote that this is a BE format.
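For context, the difference between the two macros (as defined in
include/uapi/linux/videodev2.h) is just bit 31, which is how V4L2 marks the
big-endian variant of a pixel format:

```c
#include <stdint.h>

/* v4l2_fourcc() packs four characters little-endian into a u32;
 * v4l2_fourcc_be() is the same fourcc with bit 31 set to flag the
 * big-endian variant of the format. */
#define v4l2_fourcc(a, b, c, d) \
	((uint32_t)(a) | ((uint32_t)(b) << 8) | \
	 ((uint32_t)(c) << 16) | ((uint32_t)(d) << 24))
#define v4l2_fourcc_be(a, b, c, d) (v4l2_fourcc(a, b, c, d) | (1U << 31))
```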

>
> /* Bayer formats - see http://www.siliconimaging.com/RGB%20Bayer.htm */
> #define V4L2_PIX_FMT_SBGGR8 v4l2_fourcc('B', 'A', '8', '1') /* 8 BGBG.. GRGR.. */
>

Regards,

Hans

2021-12-02 09:44:56

by Hans Verkuil

Subject: Re: [PATCH v13 03/13] media: amphion: add amphion vpu device driver

On 30/11/2021 10:48, Ming Qian wrote:
> The amphion vpu codec ip contains encoder and decoder.
> Windsor is the encoder, it supports to encode H.264.
> Malone is the decoder, it features a powerful
> video processing unit able to decode many foramts,

foramts -> formats

> such as H.264, HEVC, and other foramts.

ditto

>
> This Driver is for this IP that is based on the v4l2 mem2mem framework.
>
> Supported SoCs are: IMX8QXP, IMX8QM
>
> Signed-off-by: Ming Qian <[email protected]>
> Signed-off-by: Shijie Qin <[email protected]>
> Signed-off-by: Zhou Peng <[email protected]>
> Reported-by: kernel test robot <[email protected]>
> ---
> arch/arm64/configs/defconfig | 1 +
> drivers/media/platform/Kconfig | 19 ++
> drivers/media/platform/Makefile | 2 +
> drivers/media/platform/amphion/Makefile | 20 ++
> drivers/media/platform/amphion/vpu.h | 357 +++++++++++++++++++++
> drivers/media/platform/amphion/vpu_defs.h | 186 +++++++++++
> drivers/media/platform/amphion/vpu_drv.c | 265 +++++++++++++++
> drivers/media/platform/amphion/vpu_imx8q.c | 271 ++++++++++++++++
> drivers/media/platform/amphion/vpu_imx8q.h | 116 +++++++
> 9 files changed, 1237 insertions(+)
> create mode 100644 drivers/media/platform/amphion/Makefile
> create mode 100644 drivers/media/platform/amphion/vpu.h
> create mode 100644 drivers/media/platform/amphion/vpu_defs.h
> create mode 100644 drivers/media/platform/amphion/vpu_drv.c
> create mode 100644 drivers/media/platform/amphion/vpu_imx8q.c
> create mode 100644 drivers/media/platform/amphion/vpu_imx8q.h
>
> diff --git a/arch/arm64/configs/defconfig b/arch/arm64/configs/defconfig
> index f2e2b9bdd702..cc3633112f3f 100644
> --- a/arch/arm64/configs/defconfig
> +++ b/arch/arm64/configs/defconfig
> @@ -657,6 +657,7 @@ CONFIG_V4L_PLATFORM_DRIVERS=y
> CONFIG_VIDEO_RCAR_CSI2=m
> CONFIG_VIDEO_RCAR_VIN=m
> CONFIG_VIDEO_SUN6I_CSI=m
> +CONFIG_VIDEO_AMPHION_VPU=m
> CONFIG_V4L_MEM2MEM_DRIVERS=y
> CONFIG_VIDEO_SAMSUNG_S5P_JPEG=m
> CONFIG_VIDEO_SAMSUNG_S5P_MFC=m
> diff --git a/drivers/media/platform/Kconfig b/drivers/media/platform/Kconfig
> index 9fbdba0fd1e7..7d4a8cd52a9e 100644
> --- a/drivers/media/platform/Kconfig
> +++ b/drivers/media/platform/Kconfig
> @@ -216,6 +216,25 @@ config VIDEO_RCAR_ISP
> To compile this driver as a module, choose M here: the
> module will be called rcar-isp.
>
> +config VIDEO_AMPHION_VPU
> + tristate "Amphion VPU(Video Processing Unit) Codec IP"

Add space before (

> + depends on ARCH_MXC

Add: || COMPILE_TEST

It should always be possible to compile test drivers, even on other architectures.

> + depends on MEDIA_SUPPORT
> + depends on VIDEO_DEV
> + depends on VIDEO_V4L2
> + select MEDIA_CONTROLLER
> + select V4L2_MEM2MEM_DEV
> + select VIDEOBUF2_DMA_CONTIG
> + select VIDEOBUF2_VMALLOC
> + help
> + Amphion VPU Codec IP contains two parts: Windsor and Malone.
> + Windsor is encoder that supports H.264, and Malone is decoder
> + that supports H.264, HEVC, and other video formats.
> + This is a V4L2 driver for NXP MXC 8Q video accelerator hardware.
> + It accelerates encoding and decoding operations on
> + various NXP SoCs.
> + To compile this driver as a module choose m here.
> +
> endif # V4L_PLATFORM_DRIVERS
>
> menuconfig V4L_MEM2MEM_DRIVERS

Regards,

Hans

2021-12-02 09:58:19

by Hans Verkuil

Subject: Re: [PATCH v13 04/13] media: amphion: add vpu core driver

On 30/11/2021 10:48, Ming Qian wrote:
> The vpu supports encoder and decoder.
> it needs mu core to handle it.
> core will run either encoder or decoder firmware.
>
> This driver is for support the vpu core.
>
> Signed-off-by: Ming Qian <[email protected]>
> Signed-off-by: Shijie Qin <[email protected]>
> Signed-off-by: Zhou Peng <[email protected]>
> ---
> drivers/media/platform/amphion/vpu_codec.h | 67 ++
> drivers/media/platform/amphion/vpu_core.c | 906 +++++++++++++++++++++
> drivers/media/platform/amphion/vpu_core.h | 15 +
> drivers/media/platform/amphion/vpu_dbg.c | 495 +++++++++++
> drivers/media/platform/amphion/vpu_rpc.c | 279 +++++++
> drivers/media/platform/amphion/vpu_rpc.h | 464 +++++++++++
> 6 files changed, 2226 insertions(+)
> create mode 100644 drivers/media/platform/amphion/vpu_codec.h
> create mode 100644 drivers/media/platform/amphion/vpu_core.c
> create mode 100644 drivers/media/platform/amphion/vpu_core.h
> create mode 100644 drivers/media/platform/amphion/vpu_dbg.c
> create mode 100644 drivers/media/platform/amphion/vpu_rpc.c
> create mode 100644 drivers/media/platform/amphion/vpu_rpc.h
>
> diff --git a/drivers/media/platform/amphion/vpu_codec.h b/drivers/media/platform/amphion/vpu_codec.h
> new file mode 100644
> index 000000000000..bf8920e9f6d7
> --- /dev/null
> +++ b/drivers/media/platform/amphion/vpu_codec.h
> @@ -0,0 +1,67 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * Copyright 2020-2021 NXP
> + */
> +
> +#ifndef _AMPHION_VPU_CODEC_H
> +#define _AMPHION_VPU_CODEC_H
> +
> +struct vpu_encode_params {
> + u32 input_format;
> + u32 codec_format;
> + u32 profile;
> + u32 tier;
> + u32 level;
> + struct v4l2_fract frame_rate;
> + u32 src_stride;
> + u32 src_width;
> + u32 src_height;
> + struct v4l2_rect crop;
> + u32 out_width;
> + u32 out_height;
> +
> + u32 gop_length;
> + u32 bframes;
> +
> + u32 rc_mode;
> + u32 bitrate;
> + u32 bitrate_min;
> + u32 bitrate_max;
> +
> + u32 i_frame_qp;
> + u32 p_frame_qp;
> + u32 b_frame_qp;
> + u32 qp_min;
> + u32 qp_max;
> + u32 qp_min_i;
> + u32 qp_max_i;
> +
> + struct {
> + u32 enable;
> + u32 idc;
> + u32 width;
> + u32 height;
> + } sar;
> +
> + struct {
> + u32 primaries;
> + u32 transfer;
> + u32 matrix;
> + u32 full_range;
> + } color;
> +};
> +
> +struct vpu_decode_params {
> + u32 codec_format;
> + u32 output_format;
> + u32 b_dis_reorder;
> + u32 b_non_frame;
> + u32 frame_count;
> + u32 end_flag;
> + struct {
> + u32 base;
> + u32 size;
> + } udata;
> +};
> +
> +#endif
> diff --git a/drivers/media/platform/amphion/vpu_core.c b/drivers/media/platform/amphion/vpu_core.c
> new file mode 100644
> index 000000000000..0dbfd1c84f75
> --- /dev/null
> +++ b/drivers/media/platform/amphion/vpu_core.c
> @@ -0,0 +1,906 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Copyright 2020-2021 NXP
> + */
> +
> +#include <linux/init.h>
> +#include <linux/interconnect.h>
> +#include <linux/ioctl.h>
> +#include <linux/list.h>
> +#include <linux/kernel.h>
> +#include <linux/module.h>
> +#include <linux/of_device.h>
> +#include <linux/of_address.h>
> +#include <linux/platform_device.h>
> +#include <linux/slab.h>
> +#include <linux/types.h>
> +#include <linux/pm_runtime.h>
> +#include <linux/pm_domain.h>
> +#include <linux/firmware.h>
> +#include "vpu.h"
> +#include "vpu_defs.h"
> +#include "vpu_core.h"
> +#include "vpu_mbox.h"
> +#include "vpu_msgs.h"
> +#include "vpu_rpc.h"
> +#include "vpu_cmds.h"
> +
> +void csr_writel(struct vpu_core *core, u32 reg, u32 val)
> +{
> + writel(val, core->base + reg);
> +}
> +
> +u32 csr_readl(struct vpu_core *core, u32 reg)
> +{
> + return readl(core->base + reg);
> +}
> +
> +static int vpu_core_load_firmware(struct vpu_core *core)
> +{
> + const struct firmware *pfw = NULL;
> + int ret = 0;
> +
> + WARN_ON(!core || !core->res || !core->res->fwname);

Either do:

if (WARN_ON(!core || !core->res || !core->res->fwname))
return -EINVAL;

or just drop it. You'll get an oops with a backtrace soon enough.

Same elsewhere in this driver.
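The suggested pattern relies on WARN_ON() evaluating to the condition, so it
can drive an early return. A host-side sketch (WARN_ON() mocked here, the
struct and function are hypothetical stand-ins for the driver's code; the
real kernel macro also dumps a backtrace):

```c
#include <stdio.h>

/* Mock of the kernel's WARN_ON(): prints a warning and evaluates to the
 * normalized condition, like the real macro. */
#define WARN_ON(cond) ({ int __c = !!(cond); \
	if (__c) fprintf(stderr, "WARNING at %s:%d\n", __FILE__, __LINE__); \
	__c; })

struct fake_core { const char *fwname; };

/* Warn once, then bail out, instead of warning and then running on with a
 * NULL pointer anyway. */
static int load_firmware(struct fake_core *core)
{
	if (WARN_ON(!core || !core->fwname))
		return -22; /* -EINVAL */
	return 0;
}
```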

> + if (!core->fw.virt) {
> + dev_err(core->dev, "firmware buffer is not ready\n");
> + return -EINVAL;
> + }
> +
> + ret = request_firmware(&pfw, core->res->fwname, core->dev);
> + dev_dbg(core->dev, "request_firmware %s : %d\n", core->res->fwname, ret);
> + if (ret) {
> + dev_err(core->dev, "request firmware %s failed, ret = %d\n",
> + core->res->fwname, ret);
> + return ret;
> + }
> +
> + if (core->fw.length < pfw->size) {
> + dev_err(core->dev, "firmware buffer size want %zu, but %d\n",
> + pfw->size, core->fw.length);
> + ret = -EINVAL;
> + goto exit;
> + }
> +
> + memset_io(core->fw.virt, 0, core->fw.length);
> + memcpy(core->fw.virt, pfw->data, pfw->size);
> + core->fw.bytesused = pfw->size;
> + ret = vpu_iface_on_firmware_loaded(core);
> +exit:
> + release_firmware(pfw);
> + pfw = NULL;
> +
> + return ret;
> +}
> +
> +static int vpu_core_boot_done(struct vpu_core *core)
> +{
> + u32 fw_version;
> +
> + fw_version = vpu_iface_get_version(core);
> + dev_info(core->dev, "%s firmware version : %d.%d.%d\n",
> + vpu_core_type_desc(core->type),
> + (fw_version >> 16) & 0xff,
> + (fw_version >> 8) & 0xff,
> + fw_version & 0xff);
> + core->supported_instance_count = vpu_iface_get_max_instance_count(core);
> + if (core->res->act_size) {
> + u32 count = core->act.length / core->res->act_size;
> +
> + core->supported_instance_count = min(core->supported_instance_count, count);
> + }
> + core->fw_version = fw_version;
> + core->state = VPU_CORE_ACTIVE;
> +
> + return 0;
> +}
> +
> +static int vpu_core_wait_boot_done(struct vpu_core *core)
> +{
> + int ret;
> +
> + ret = wait_for_completion_timeout(&core->cmp, VPU_TIMEOUT);
> + if (!ret) {
> + dev_err(core->dev, "boot timeout\n");
> + return -EINVAL;
> + }
> + return vpu_core_boot_done(core);
> +}
> +
> +static int vpu_core_boot(struct vpu_core *core, bool load)
> +{
> + int ret;
> +
> + WARN_ON(!core);
> +
> + if (!core->res->standalone)
> + return 0;
> +
> + reinit_completion(&core->cmp);
> + if (load) {
> + ret = vpu_core_load_firmware(core);
> + if (ret)
> + return ret;
> + }
> +
> + vpu_iface_boot_core(core);
> + return vpu_core_wait_boot_done(core);
> +}
> +
> +static int vpu_core_shutdown(struct vpu_core *core)
> +{
> + if (!core->res->standalone)
> + return 0;
> + return vpu_iface_shutdown_core(core);
> +}
> +
> +static int vpu_core_restore(struct vpu_core *core)
> +{
> + int ret;
> +
> + if (!core->res->standalone)
> + return 0;
> + ret = vpu_core_sw_reset(core);
> + if (ret)
> + return ret;
> +
> + vpu_core_boot_done(core);
> + return vpu_iface_restore_core(core);
> +}
> +
> +static int __vpu_alloc_dma(struct device *dev, struct vpu_buffer *buf)
> +{
> + gfp_t gfp = GFP_KERNEL | GFP_DMA32;
> +
> + WARN_ON(!dev || !buf);
> +
> + if (!buf->length)
> + return 0;
> +
> + buf->virt = dma_alloc_coherent(dev, buf->length, &buf->phys, gfp);
> + if (!buf->virt)
> + return -ENOMEM;
> +
> + buf->dev = dev;
> +
> + return 0;
> +}
> +
> +void vpu_free_dma(struct vpu_buffer *buf)
> +{
> + WARN_ON(!buf);
> +
> + if (!buf->virt || !buf->dev)
> + return;
> +
> + dma_free_coherent(buf->dev, buf->length, buf->virt, buf->phys);
> + buf->virt = NULL;
> + buf->phys = 0;
> + buf->length = 0;
> + buf->bytesused = 0;
> + buf->dev = NULL;
> +}
> +
> +int vpu_alloc_dma(struct vpu_core *core, struct vpu_buffer *buf)
> +{
> + WARN_ON(!core || !buf);
> +
> + return __vpu_alloc_dma(core->dev, buf);
> +}
> +
> +static void vpu_core_check_hang(struct vpu_core *core)
> +{
> + if (core->hang_mask)
> + core->state = VPU_CORE_HANG;
> +}
> +
> +static struct vpu_core *vpu_core_find_proper_by_type(struct vpu_dev *vpu, u32 type)
> +{
> + struct vpu_core *core = NULL;
> + int request_count = INT_MAX;
> + struct vpu_core *c;
> +
> + WARN_ON(!vpu);
> +
> + list_for_each_entry(c, &vpu->cores, list) {
> + dev_dbg(c->dev, "instance_mask = 0x%lx, state = %d\n",
> + c->instance_mask,
> + c->state);
> + if (c->type != type)
> + continue;
> + if (c->state == VPU_CORE_DEINIT) {
> + core = c;
> + break;
> + }
> + vpu_core_check_hang(c);
> + if (c->state != VPU_CORE_ACTIVE)
> + continue;
> + if (c->request_count < request_count) {
> + request_count = c->request_count;
> + core = c;
> + }
> + if (!request_count)
> + break;
> + }
> +
> + return core;
> +}
> +
> +static bool vpu_core_is_exist(struct vpu_dev *vpu, struct vpu_core *core)
> +{
> + struct vpu_core *c;
> +
> + list_for_each_entry(c, &vpu->cores, list) {
> + if (c == core)
> + return true;
> + }
> +
> + return false;
> +}
> +
> +static void vpu_core_get_vpu(struct vpu_core *core)
> +{
> + core->vpu->get_vpu(core->vpu);
> + if (core->type == VPU_CORE_TYPE_ENC)
> + core->vpu->get_enc(core->vpu);
> + if (core->type == VPU_CORE_TYPE_DEC)
> + core->vpu->get_dec(core->vpu);
> +}
> +
> +static int vpu_core_register(struct device *dev, struct vpu_core *core)
> +{
> + struct vpu_dev *vpu = dev_get_drvdata(dev);
> + int ret = 0;
> +
> + dev_dbg(core->dev, "register core %s\n", vpu_core_type_desc(core->type));
> + if (vpu_core_is_exist(vpu, core))
> + return 0;
> +
> + core->workqueue = alloc_workqueue("vpu", WQ_UNBOUND | WQ_MEM_RECLAIM, 1);
> + if (!core->workqueue) {
> + dev_err(core->dev, "fail to alloc workqueue\n");
> + return -ENOMEM;
> + }
> + INIT_WORK(&core->msg_work, vpu_msg_run_work);
> + INIT_DELAYED_WORK(&core->msg_delayed_work, vpu_msg_delayed_work);
> + core->msg_buffer_size = roundup_pow_of_two(VPU_MSG_BUFFER_SIZE);
> + core->msg_buffer = vzalloc(core->msg_buffer_size);
> + if (!core->msg_buffer) {
> + dev_err(core->dev, "failed allocate buffer for fifo\n");
> + ret = -ENOMEM;
> + goto error;
> + }
> + ret = kfifo_init(&core->msg_fifo, core->msg_buffer, core->msg_buffer_size);
> + if (ret) {
> + dev_err(core->dev, "failed init kfifo\n");
> + goto error;
> + }
> +
> + list_add_tail(&core->list, &vpu->cores);
> +
> + vpu_core_get_vpu(core);
> +
> + if (vpu_iface_get_power_state(core))
> + ret = vpu_core_restore(core);
> + if (ret)
> + goto error;
> +
> + return 0;
> +error:
> + if (core->msg_buffer) {
> + vfree(core->msg_buffer);
> + core->msg_buffer = NULL;
> + }
> + if (core->workqueue) {
> + destroy_workqueue(core->workqueue);
> + core->workqueue = NULL;
> + }
> + return ret;
> +}
> +
> +static void vpu_core_put_vpu(struct vpu_core *core)
> +{
> + if (core->type == VPU_CORE_TYPE_ENC)
> + core->vpu->put_enc(core->vpu);
> + if (core->type == VPU_CORE_TYPE_DEC)
> + core->vpu->put_dec(core->vpu);
> + core->vpu->put_vpu(core->vpu);
> +}
> +
> +static int vpu_core_unregister(struct device *dev, struct vpu_core *core)
> +{
> + list_del_init(&core->list);
> +
> + vpu_core_put_vpu(core);
> + core->vpu = NULL;
> + vfree(core->msg_buffer);
> + core->msg_buffer = NULL;
> +
> + if (core->workqueue) {
> + cancel_work_sync(&core->msg_work);
> + cancel_delayed_work_sync(&core->msg_delayed_work);
> + destroy_workqueue(core->workqueue);
> + core->workqueue = NULL;
> + }
> +
> + return 0;
> +}
> +
> +static int vpu_core_acquire_instance(struct vpu_core *core)
> +{
> + int id;
> +
> + WARN_ON(!core);
> +
> + id = ffz(core->instance_mask);
> + if (id >= core->supported_instance_count)
> + return -EINVAL;
> +
> + set_bit(id, &core->instance_mask);
> +
> + return id;
> +}
> +
> +static void vpu_core_release_instance(struct vpu_core *core, int id)
> +{
> + WARN_ON(!core);
> +
> + if (id < 0 || id >= core->supported_instance_count)
> + return;
> +
> + clear_bit(id, &core->instance_mask);
> +}
> +
> +struct vpu_inst *vpu_inst_get(struct vpu_inst *inst)
> +{
> + if (!inst)
> + return NULL;
> +
> + atomic_inc(&inst->ref_count);
> +
> + return inst;
> +}
> +
> +void vpu_inst_put(struct vpu_inst *inst)
> +{
> + if (!inst)
> + return;
> + if (atomic_dec_and_test(&inst->ref_count)) {
> + if (inst->release)
> + inst->release(inst);
> + }
> +}
> +
> +struct vpu_core *vpu_request_core(struct vpu_dev *vpu, enum vpu_core_type type)
> +{
> + struct vpu_core *core = NULL;
> + int ret;
> +
> + mutex_lock(&vpu->lock);
> +
> + core = vpu_core_find_proper_by_type(vpu, type);
> + if (!core)
> + goto exit;
> +
> + mutex_lock(&core->lock);
> + pm_runtime_get_sync(core->dev);
> +
> + if (core->state == VPU_CORE_DEINIT) {
> + ret = vpu_core_boot(core, true);
> + if (ret) {
> + pm_runtime_put_sync(core->dev);
> + mutex_unlock(&core->lock);
> + core = NULL;
> + goto exit;
> + }
> + }
> +
> + core->request_count++;
> +
> + mutex_unlock(&core->lock);
> +exit:
> + mutex_unlock(&vpu->lock);
> +
> + return core;
> +}
> +
> +void vpu_release_core(struct vpu_core *core)
> +{
> + if (!core)
> + return;
> +
> + mutex_lock(&core->lock);
> + pm_runtime_put_sync(core->dev);
> + if (core->request_count)
> + core->request_count--;
> + mutex_unlock(&core->lock);
> +}
> +
> +int vpu_inst_register(struct vpu_inst *inst)
> +{
> + struct vpu_dev *vpu;
> + struct vpu_core *core;
> + int ret = 0;
> +
> + WARN_ON(!inst || !inst->vpu);
> +
> + vpu = inst->vpu;
> + core = inst->core;
> + if (!core) {
> + core = vpu_request_core(vpu, inst->type);
> + if (!core) {
> + dev_err(vpu->dev, "there is no vpu core for %s\n",
> + vpu_core_type_desc(inst->type));
> + return -EINVAL;
> + }
> + inst->core = core;
> + inst->dev = get_device(core->dev);
> + }
> +
> + mutex_lock(&core->lock);
> + if (inst->id >= 0 && inst->id < core->supported_instance_count)
> + goto exit;
> +
> + ret = vpu_core_acquire_instance(core);
> + if (ret < 0)
> + goto exit;
> +
> + vpu_trace(inst->dev, "[%d] %p\n", ret, inst);
> + inst->id = ret;
> + list_add_tail(&inst->list, &core->instances);
> + ret = 0;
> + if (core->res->act_size) {
> + inst->act.phys = core->act.phys + core->res->act_size * inst->id;
> + inst->act.virt = core->act.virt + core->res->act_size * inst->id;
> + inst->act.length = core->res->act_size;
> + }
> + vpu_inst_create_dbgfs_file(inst);
> +exit:
> + mutex_unlock(&core->lock);
> +
> + if (ret)
> + dev_err(core->dev, "register instance fail\n");
> + return ret;
> +}
> +
> +int vpu_inst_unregister(struct vpu_inst *inst)
> +{
> + struct vpu_core *core;
> +
> + WARN_ON(!inst);
> +
> + if (!inst->core)
> + return 0;
> +
> + core = inst->core;
> + vpu_clear_request(inst);
> + mutex_lock(&core->lock);
> + if (inst->id >= 0 && inst->id < core->supported_instance_count) {
> + vpu_inst_remove_dbgfs_file(inst);
> + list_del_init(&inst->list);
> + vpu_core_release_instance(core, inst->id);
> + inst->id = VPU_INST_NULL_ID;
> + }
> + vpu_core_check_hang(core);
> + if (core->state == VPU_CORE_HANG && !core->instance_mask) {
> + dev_info(core->dev, "reset hang core\n");
> + if (!vpu_core_sw_reset(core)) {
> + core->state = VPU_CORE_ACTIVE;
> + core->hang_mask = 0;
> + }
> + }
> + mutex_unlock(&core->lock);
> +
> + return 0;
> +}
> +
> +struct vpu_inst *vpu_core_find_instance(struct vpu_core *core, u32 index)
> +{
> + struct vpu_inst *inst = NULL;
> + struct vpu_inst *tmp;
> +
> + mutex_lock(&core->lock);
> + if (!test_bit(index, &core->instance_mask))
> + goto exit;
> + list_for_each_entry(tmp, &core->instances, list) {
> + if (tmp->id == index) {
> + inst = vpu_inst_get(tmp);
> + break;
> + }
> + }
> +exit:
> + mutex_unlock(&core->lock);
> +
> + return inst;
> +}
> +
> +const struct vpu_core_resources *vpu_get_resource(struct vpu_inst *inst)
> +{
> + struct vpu_dev *vpu;
> + struct vpu_core *core = NULL;
> + const struct vpu_core_resources *res = NULL;
> +
> + if (!inst || !inst->vpu)
> + return NULL;
> +
> + if (inst->core && inst->core->res)
> + return inst->core->res;
> +
> + vpu = inst->vpu;
> + mutex_lock(&vpu->lock);
> + list_for_each_entry(core, &vpu->cores, list) {
> + if (core->type == inst->type) {
> + res = core->res;
> + break;
> + }
> + }
> + mutex_unlock(&vpu->lock);
> +
> + return res;
> +}
> +
> +static int vpu_core_parse_dt(struct vpu_core *core, struct device_node *np)
> +{
> + struct device_node *node;
> + struct resource res;
> +
> + if (of_count_phandle_with_args(np, "memory-region", NULL) < 2) {
> + dev_err(core->dev, "need 2 memory-region for boot and rpc\n");
> + return -ENODEV;
> + }
> +
> + node = of_parse_phandle(np, "memory-region", 0);
> + if (!node) {
> + dev_err(core->dev, "boot-region of_parse_phandle error\n");
> + return -ENODEV;
> + }
> + if (of_address_to_resource(node, 0, &res)) {
> + dev_err(core->dev, "boot-region of_address_to_resource error\n");
> + return -EINVAL;
> + }
> + core->fw.phys = res.start;
> + core->fw.length = resource_size(&res);
> +
> + node = of_parse_phandle(np, "memory-region", 1);
> + if (!node) {
> + dev_err(core->dev, "rpc-region of_parse_phandle error\n");
> + return -ENODEV;
> + }
> + if (of_address_to_resource(node, 0, &res)) {
> + dev_err(core->dev, "rpc-region of_address_to_resource error\n");
> + return -EINVAL;
> + }
> + core->rpc.phys = res.start;
> + core->rpc.length = resource_size(&res);
> +
> + if (core->rpc.length < core->res->rpc_size + core->res->fwlog_size) {
> + dev_err(core->dev, "the rpc-region <%pad, 0x%x> is not enough\n",
> + &core->rpc.phys, core->rpc.length);
> + return -EINVAL;
> + }
> +
> + core->fw.virt = ioremap_wc(core->fw.phys, core->fw.length);
> + core->rpc.virt = ioremap_wc(core->rpc.phys, core->rpc.length);
> + memset_io(core->rpc.virt, 0, core->rpc.length);
> +
> + if (vpu_iface_check_memory_region(core,
> + core->rpc.phys,
> + core->rpc.length) != VPU_CORE_MEMORY_UNCACHED) {
> + dev_err(core->dev, "rpc region<%pad, 0x%x> isn't uncached\n",
> + &core->rpc.phys, core->rpc.length);
> + return -EINVAL;
> + }
> +
> + core->log.phys = core->rpc.phys + core->res->rpc_size;
> + core->log.virt = core->rpc.virt + core->res->rpc_size;
> + core->log.length = core->res->fwlog_size;
> + core->act.phys = core->log.phys + core->log.length;
> + core->act.virt = core->log.virt + core->log.length;
> + core->act.length = core->rpc.length - core->res->rpc_size - core->log.length;
> + core->rpc.length = core->res->rpc_size;
> +
> + return 0;
> +}
> +
> +static int vpu_core_probe(struct platform_device *pdev)
> +{
> + struct device *dev = &pdev->dev;
> + struct vpu_core *core;
> + struct vpu_dev *vpu = dev_get_drvdata(dev->parent);
> + struct vpu_shared_addr *iface;
> + u32 iface_data_size;
> + int ret;
> +
> + dev_dbg(dev, "probe\n");
> + if (!vpu)
> + return -EINVAL;
> + core = devm_kzalloc(dev, sizeof(*core), GFP_KERNEL);
> + if (!core)
> + return -ENOMEM;
> +
> + core->pdev = pdev;
> + core->dev = dev;
> + platform_set_drvdata(pdev, core);
> + core->vpu = vpu;
> + INIT_LIST_HEAD(&core->instances);
> + mutex_init(&core->lock);
> + mutex_init(&core->cmd_lock);
> + init_completion(&core->cmp);
> + init_waitqueue_head(&core->ack_wq);
> + core->state = VPU_CORE_DEINIT;
> +
> + core->res = of_device_get_match_data(dev);
> + if (!core->res)
> + return -ENODEV;
> +
> + core->type = core->res->type;
> + core->id = of_alias_get_id(dev->of_node, "vpu_core");
> + if (core->id < 0) {
> + dev_err(dev, "can't get vpu core id\n");
> + return core->id;
> + }
> + dev_info(core->dev, "[%d] = %s\n", core->id, vpu_core_type_desc(core->type));
> + ret = vpu_core_parse_dt(core, dev->of_node);
> + if (ret)
> + return ret;
> +
> + core->base = devm_platform_ioremap_resource(pdev, 0);
> + if (IS_ERR(core->base))
> + return PTR_ERR(core->base);
> +
> + if (!vpu_iface_check_codec(core)) {
> + dev_err(core->dev, "is not supported\n");
> + return -EINVAL;
> + }
> +
> + ret = vpu_mbox_init(core);
> + if (ret)
> + return ret;
> +
> + iface = devm_kzalloc(dev, sizeof(*iface), GFP_KERNEL);
> + if (!iface)
> + return -ENOMEM;
> +
> + iface_data_size = vpu_iface_get_data_size(core);
> + if (iface_data_size) {
> + iface->priv = devm_kzalloc(dev, iface_data_size, GFP_KERNEL);
> + if (!iface->priv)
> + return -ENOMEM;
> + }
> +
> + ret = vpu_iface_init(core, iface, &core->rpc, core->fw.phys);
> + if (ret) {
> + dev_err(core->dev, "init iface fail, ret = %d\n", ret);
> + return ret;
> + }
> +
> + vpu_iface_config_system(core, vpu->res->mreg_base, vpu->base);
> + vpu_iface_set_log_buf(core, &core->log);
> +
> + pm_runtime_enable(dev);
> + ret = pm_runtime_get_sync(dev);

Use pm_runtime_resume_and_get() instead and drop the pm_runtime_put_noidle()
in the 'if' below. The use of pm_runtime_resume_and_get is preferred over
the rather confusing pm_runtime_get_sync().

If it is used elsewhere in this series as well (I haven't checked this),
then make the same changes.
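The point of the recommended helper is that it undoes the usage-count bump
itself on failure. A host-side sketch with stubbed runtime-PM calls (the
fake_* names and struct are made up for illustration; the real helpers live
in include/linux/pm_runtime.h):

```c
/* Stubs standing in for the runtime-PM helpers. */
struct fake_dev { int resume_err; int usage_count; };

static int fake_pm_runtime_get_sync(struct fake_dev *dev)
{
	dev->usage_count++;	/* get_sync bumps the count even when resume fails */
	return dev->resume_err;
}

static void fake_pm_runtime_put_noidle(struct fake_dev *dev)
{
	dev->usage_count--;
}

/* What pm_runtime_resume_and_get() does: on failure it drops the usage
 * count itself, so callers no longer need the easy-to-forget
 * pm_runtime_put_noidle() in their error paths. */
static int fake_pm_runtime_resume_and_get(struct fake_dev *dev)
{
	int ret = fake_pm_runtime_get_sync(dev);

	if (ret < 0) {
		fake_pm_runtime_put_noidle(dev);
		return ret;
	}
	return 0;
}
```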

> + if (ret) {
> + pm_runtime_put_noidle(dev);
> + pm_runtime_set_suspended(dev);
> + goto err_runtime_disable;
> + }
> +
> + ret = vpu_core_register(dev->parent, core);
> + if (ret)
> + goto err_core_register;
> + core->parent = dev->parent;
> +
> + pm_runtime_put_sync(dev);
> + vpu_core_create_dbgfs_file(core);
> +
> + return 0;
> +
> +err_core_register:
> + pm_runtime_put_sync(dev);
> +err_runtime_disable:
> + pm_runtime_disable(dev);
> +
> + return ret;
> +}
> +
> +static int vpu_core_remove(struct platform_device *pdev)
> +{
> + struct device *dev = &pdev->dev;
> + struct vpu_core *core = platform_get_drvdata(pdev);
> + int ret;
> +
> + vpu_core_remove_dbgfs_file(core);
> + ret = pm_runtime_get_sync(dev);

Ah, same here.

> + WARN_ON(ret < 0);
> +
> + vpu_core_shutdown(core);
> + pm_runtime_put_sync(dev);
> + pm_runtime_disable(dev);
> +
> + vpu_core_unregister(core->parent, core);
> + iounmap(core->fw.virt);
> + iounmap(core->rpc.virt);
> + mutex_destroy(&core->lock);
> + mutex_destroy(&core->cmd_lock);
> +
> + return 0;
> +}
> +
> +static int __maybe_unused vpu_core_runtime_resume(struct device *dev)
> +{
> + struct vpu_core *core = dev_get_drvdata(dev);
> +
> + return vpu_mbox_request(core);
> +}
> +
> +static int __maybe_unused vpu_core_runtime_suspend(struct device *dev)
> +{
> + struct vpu_core *core = dev_get_drvdata(dev);
> +
> + vpu_mbox_free(core);
> + return 0;
> +}
> +
> +static void vpu_core_cancel_work(struct vpu_core *core)
> +{
> + struct vpu_inst *inst = NULL;
> +
> + cancel_work_sync(&core->msg_work);
> + cancel_delayed_work_sync(&core->msg_delayed_work);
> +
> + mutex_lock(&core->lock);
> + list_for_each_entry(inst, &core->instances, list)
> + cancel_work_sync(&inst->msg_work);
> + mutex_unlock(&core->lock);
> +}
> +
> +static void vpu_core_resume_work(struct vpu_core *core)
> +{
> + struct vpu_inst *inst = NULL;
> + unsigned long delay = msecs_to_jiffies(10);
> +
> + queue_work(core->workqueue, &core->msg_work);
> + queue_delayed_work(core->workqueue, &core->msg_delayed_work, delay);
> +
> + mutex_lock(&core->lock);
> + list_for_each_entry(inst, &core->instances, list)
> + queue_work(inst->workqueue, &inst->msg_work);
> + mutex_unlock(&core->lock);
> +}
> +
> +static int __maybe_unused vpu_core_resume(struct device *dev)
> +{
> + struct vpu_core *core = dev_get_drvdata(dev);
> + int ret = 0;
> +
> + if (!core->res->standalone)
> + return 0;
> +
> + mutex_lock(&core->lock);
> + pm_runtime_get_sync(dev);
> + vpu_core_get_vpu(core);
> + if (core->state != VPU_CORE_SNAPSHOT)
> + goto exit;
> +
> + if (!vpu_iface_get_power_state(core)) {
> + if (!list_empty(&core->instances)) {
> + ret = vpu_core_boot(core, false);
> + if (ret) {
> + dev_err(core->dev, "%s boot fail\n", __func__);
> + core->state = VPU_CORE_DEINIT;
> + goto exit;
> + }
> + } else {
> + core->state = VPU_CORE_DEINIT;
> + }
> + } else {
> + if (!list_empty(&core->instances)) {
> + ret = vpu_core_sw_reset(core);
> + if (ret) {
> + dev_err(core->dev, "%s sw_reset fail\n", __func__);
> + core->state = VPU_CORE_HANG;
> + goto exit;
> + }
> + }
> + core->state = VPU_CORE_ACTIVE;
> + }
> +
> +exit:
> + pm_runtime_put_sync(dev);
> + mutex_unlock(&core->lock);
> +
> + vpu_core_resume_work(core);
> + return ret;
> +}
> +
> +static int __maybe_unused vpu_core_suspend(struct device *dev)
> +{
> + struct vpu_core *core = dev_get_drvdata(dev);
> + int ret = 0;
> +
> + if (!core->res->standalone)
> + return 0;
> +
> + mutex_lock(&core->lock);
> + if (core->state == VPU_CORE_ACTIVE) {
> + if (!list_empty(&core->instances)) {
> + ret = vpu_core_snapshot(core);
> + if (ret) {
> + mutex_unlock(&core->lock);
> + return ret;
> + }
> + }
> +
> + core->state = VPU_CORE_SNAPSHOT;
> + }
> + mutex_unlock(&core->lock);
> +
> + vpu_core_cancel_work(core);
> +
> + mutex_lock(&core->lock);
> + vpu_core_put_vpu(core);
> + mutex_unlock(&core->lock);
> + return ret;
> +}
> +
> +static const struct dev_pm_ops vpu_core_pm_ops = {
> + SET_RUNTIME_PM_OPS(vpu_core_runtime_suspend, vpu_core_runtime_resume, NULL)
> + SET_SYSTEM_SLEEP_PM_OPS(vpu_core_suspend, vpu_core_resume)
> +};
> +
> +static struct vpu_core_resources imx8q_enc = {
> + .type = VPU_CORE_TYPE_ENC,
> + .fwname = "vpu/vpu_fw_imx8_enc.bin",
> + .stride = 16,
> + .max_width = 1920,
> + .max_height = 1920,
> + .min_width = 64,
> + .min_height = 48,
> + .step_width = 2,
> + .step_height = 2,
> + .rpc_size = 0x80000,
> + .fwlog_size = 0x80000,
> + .act_size = 0xc0000,
> + .standalone = true,
> +};
> +
> +static struct vpu_core_resources imx8q_dec = {
> + .type = VPU_CORE_TYPE_DEC,
> + .fwname = "vpu/vpu_fw_imx8_dec.bin",
> + .stride = 256,
> + .max_width = 8188,
> + .max_height = 8188,
> + .min_width = 16,
> + .min_height = 16,
> + .step_width = 1,
> + .step_height = 1,
> + .rpc_size = 0x80000,
> + .fwlog_size = 0x80000,
> + .standalone = true,
> +};
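The min/max/step and stride fields above bound what each core accepts: the encoder is limited to 1920x1920 with 16-byte strides, the decoder goes up to 8188x8188 with 256-byte strides. As a plain-C sketch of how such constraints are typically applied (hypothetical helper names, not the driver's API):

```c
/* Userspace sketch: clamp a requested dimension into [min, max] on a
 * step grid, and round a width up to the hardware stride. Illustrative
 * only; the driver uses its own helpers for this. */
static unsigned int clamp_align(unsigned int v, unsigned int min,
				unsigned int max, unsigned int step)
{
	if (v < min)
		v = min;
	if (v > max)
		v = max;
	return v - (v % step);	/* round down onto the step grid */
}

static unsigned int align_stride(unsigned int width, unsigned int stride)
{
	return ((width + stride - 1) / stride) * stride;	/* round up */
}
```

For the decoder resource above, a 1080-pixel-wide line would occupy 1280 bytes once padded to the 256-byte stride.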
> +
> +static const struct of_device_id vpu_core_dt_match[] = {
> + { .compatible = "nxp,imx8q-vpu-encoder", .data = &imx8q_enc },
> + { .compatible = "nxp,imx8q-vpu-decoder", .data = &imx8q_dec },
> + {}
> +};
> +MODULE_DEVICE_TABLE(of, vpu_core_dt_match);
> +
> +static struct platform_driver amphion_vpu_core_driver = {
> + .probe = vpu_core_probe,
> + .remove = vpu_core_remove,
> + .driver = {
> + .name = "amphion-vpu-core",
> + .of_match_table = vpu_core_dt_match,
> + .pm = &vpu_core_pm_ops,
> + },
> +};
> +
> +int __init vpu_core_driver_init(void)
> +{
> + return platform_driver_register(&amphion_vpu_core_driver);
> +}
> +
> +void __exit vpu_core_driver_exit(void)
> +{
> + platform_driver_unregister(&amphion_vpu_core_driver);
> +}
> diff --git a/drivers/media/platform/amphion/vpu_core.h b/drivers/media/platform/amphion/vpu_core.h
> new file mode 100644
> index 000000000000..00a662997da4
> --- /dev/null
> +++ b/drivers/media/platform/amphion/vpu_core.h
> @@ -0,0 +1,15 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * Copyright 2020-2021 NXP
> + */
> +
> +#ifndef _AMPHION_VPU_CORE_H
> +#define _AMPHION_VPU_CORE_H
> +
> +void csr_writel(struct vpu_core *core, u32 reg, u32 val);
> +u32 csr_readl(struct vpu_core *core, u32 reg);
> +int vpu_alloc_dma(struct vpu_core *core, struct vpu_buffer *buf);
> +void vpu_free_dma(struct vpu_buffer *buf);
> +struct vpu_inst *vpu_core_find_instance(struct vpu_core *core, u32 index);
> +
> +#endif
> diff --git a/drivers/media/platform/amphion/vpu_dbg.c b/drivers/media/platform/amphion/vpu_dbg.c
> new file mode 100644
> index 000000000000..2e7e11101f99
> --- /dev/null
> +++ b/drivers/media/platform/amphion/vpu_dbg.c
> @@ -0,0 +1,495 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Copyright 2020-2021 NXP
> + */
> +
> +#include <linux/init.h>
> +#include <linux/device.h>
> +#include <linux/ioctl.h>
> +#include <linux/list.h>
> +#include <linux/module.h>
> +#include <linux/kernel.h>
> +#include <linux/types.h>
> +#include <linux/pm_runtime.h>
> +#include <media/v4l2-device.h>
> +#include <linux/debugfs.h>
> +#include "vpu.h"
> +#include "vpu_defs.h"
> +#include "vpu_helpers.h"
> +#include "vpu_cmds.h"
> +#include "vpu_rpc.h"
> +
> +struct print_buf_desc {
> + u32 start_h_phy;
> + u32 start_h_vir;
> + u32 start_m;
> + u32 bytes;
> + u32 read;
> + u32 write;
> + char buffer[];
> +};
> +
> +static const char * const vb2_stat_name[] = {
> + [VB2_BUF_STATE_DEQUEUED] = "dequeued",
> + [VB2_BUF_STATE_IN_REQUEST] = "in_request",
> + [VB2_BUF_STATE_PREPARING] = "preparing",
> + [VB2_BUF_STATE_QUEUED] = "queued",
> + [VB2_BUF_STATE_ACTIVE] = "active",
> + [VB2_BUF_STATE_DONE] = "done",
> + [VB2_BUF_STATE_ERROR] = "error",
> +};
> +
> +static const char * const vpu_stat_name[] = {
> + [VPU_BUF_STATE_IDLE] = "idle",
> + [VPU_BUF_STATE_INUSE] = "inuse",
> + [VPU_BUF_STATE_DECODED] = "decoded",
> + [VPU_BUF_STATE_READY] = "ready",
> + [VPU_BUF_STATE_SKIP] = "skip",
> + [VPU_BUF_STATE_ERROR] = "error",
> +};
> +
> +static int vpu_dbg_instance(struct seq_file *s, void *data)
> +{
> + struct vpu_inst *inst = s->private;
> + char str[128];
> + int num;
> + struct vb2_queue *vq;
> + int i;
> +
> + num = scnprintf(str, sizeof(str), "[%s]\n", vpu_core_type_desc(inst->type));
> + if (seq_write(s, str, num))
> + return 0;
> +
> + num = scnprintf(str, sizeof(str), "tgid = %d, pid = %d\n", inst->tgid, inst->pid);
> + if (seq_write(s, str, num))
> + return 0;
> + num = scnprintf(str, sizeof(str), "state = %d\n", inst->state);
> + if (seq_write(s, str, num))
> + return 0;
> + num = scnprintf(str, sizeof(str),
> + "min_buffer_out = %d, min_buffer_cap = %d\n",
> + inst->min_buffer_out, inst->min_buffer_cap);
> + if (seq_write(s, str, num))
> + return 0;
> +
> + vq = v4l2_m2m_get_src_vq(inst->fh.m2m_ctx);
> + num = scnprintf(str, sizeof(str),
> + "output (%2d, %2d): fmt = %c%c%c%c %d x %d, %d;",
> + vb2_is_streaming(vq),
> + vq->num_buffers,
> + inst->out_format.pixfmt,
> + inst->out_format.pixfmt >> 8,
> + inst->out_format.pixfmt >> 16,
> + inst->out_format.pixfmt >> 24,
> + inst->out_format.width,
> + inst->out_format.height,
> + vq->last_buffer_dequeued);
> + if (seq_write(s, str, num))
> + return 0;
> + for (i = 0; i < inst->out_format.num_planes; i++) {
> + num = scnprintf(str, sizeof(str), " %d(%d)",
> + inst->out_format.sizeimage[i],
> + inst->out_format.bytesperline[i]);
> + if (seq_write(s, str, num))
> + return 0;
> + }
> + if (seq_write(s, "\n", 1))
> + return 0;
> +
> + vq = v4l2_m2m_get_dst_vq(inst->fh.m2m_ctx);
> + num = scnprintf(str, sizeof(str),
> + "capture(%2d, %2d): fmt = %c%c%c%c %d x %d, %d;",
> + vb2_is_streaming(vq),
> + vq->num_buffers,
> + inst->cap_format.pixfmt,
> + inst->cap_format.pixfmt >> 8,
> + inst->cap_format.pixfmt >> 16,
> + inst->cap_format.pixfmt >> 24,
> + inst->cap_format.width,
> + inst->cap_format.height,
> + vq->last_buffer_dequeued);
> + if (seq_write(s, str, num))
> + return 0;
> + for (i = 0; i < inst->cap_format.num_planes; i++) {
> + num = scnprintf(str, sizeof(str), " %d(%d)",
> + inst->cap_format.sizeimage[i],
> + inst->cap_format.bytesperline[i]);
> + if (seq_write(s, str, num))
> + return 0;
> + }
> + if (seq_write(s, "\n", 1))
> + return 0;
> + num = scnprintf(str, sizeof(str), "crop: (%d, %d) %d x %d\n",
> + inst->crop.left,
> + inst->crop.top,
> + inst->crop.width,
> + inst->crop.height);
> + if (seq_write(s, str, num))
> + return 0;
> +
> + vq = v4l2_m2m_get_src_vq(inst->fh.m2m_ctx);
> + for (i = 0; i < vq->num_buffers; i++) {
> + struct vb2_buffer *vb = vq->bufs[i];
> + struct vb2_v4l2_buffer *vbuf = to_vb2_v4l2_buffer(vb);
> + struct vpu_vb2_buffer *vpu_buf = to_vpu_vb2_buffer(vbuf);
> +
> + if (vb->state == VB2_BUF_STATE_DEQUEUED)
> + continue;
> + num = scnprintf(str, sizeof(str),
> + "output [%2d] state = %10s, %8s\n",
> + i, vb2_stat_name[vb->state],
> + vpu_stat_name[vpu_buf->state]);
> + if (seq_write(s, str, num))
> + return 0;
> + }
> +
> + vq = v4l2_m2m_get_dst_vq(inst->fh.m2m_ctx);
> + for (i = 0; i < vq->num_buffers; i++) {
> + struct vb2_buffer *vb = vq->bufs[i];
> + struct vb2_v4l2_buffer *vbuf = to_vb2_v4l2_buffer(vb);
> + struct vpu_vb2_buffer *vpu_buf = to_vpu_vb2_buffer(vbuf);
> +
> + if (vb->state == VB2_BUF_STATE_DEQUEUED)
> + continue;
> + num = scnprintf(str, sizeof(str),
> + "capture[%2d] state = %10s, %8s\n",
> + i, vb2_stat_name[vb->state],
> + vpu_stat_name[vpu_buf->state]);
> + if (seq_write(s, str, num))
> + return 0;
> + }
> +
> + num = scnprintf(str, sizeof(str), "sequence = %d\n", inst->sequence);
> + if (seq_write(s, str, num))
> + return 0;
> +
> + if (inst->use_stream_buffer) {
> + num = scnprintf(str, sizeof(str), "stream_buffer = %d / %d, <%pad, 0x%x>\n",
> + vpu_helper_get_used_space(inst),
> + inst->stream_buffer.length,
> + &inst->stream_buffer.phys,
> + inst->stream_buffer.length);
> + if (seq_write(s, str, num))
> + return 0;
> + }
> + num = scnprintf(str, sizeof(str), "kfifo len = 0x%x\n", kfifo_len(&inst->msg_fifo));
> + if (seq_write(s, str, num))
> + return 0;
> +
> + num = scnprintf(str, sizeof(str), "flow:\n");
> + if (seq_write(s, str, num))
> + return 0;
> +
> + mutex_lock(&inst->core->cmd_lock);
> + for (i = 0; i < ARRAY_SIZE(inst->flows); i++) {
> + u32 idx = (inst->flow_idx + i) % (ARRAY_SIZE(inst->flows));
> +
> + if (!inst->flows[idx])
> + continue;
> + num = scnprintf(str, sizeof(str), "\t[%s]0x%x\n",
> + inst->flows[idx] >= VPU_MSG_ID_NOOP ? "M" : "C",
> + inst->flows[idx]);
> + if (seq_write(s, str, num)) {
> + mutex_unlock(&inst->core->cmd_lock);
> + return 0;
> + }
> + }
> + mutex_unlock(&inst->core->cmd_lock);
> +
> + i = 0;
> + while (true) {
> + num = call_vop(inst, get_debug_info, str, sizeof(str), i++);
> + if (num <= 0)
> + break;
> + if (seq_write(s, str, num))
> + return 0;
> + }
> +
> + return 0;
> +}
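The `%c%c%c%c` formatting above prints a V4L2 fourcc, which stores four ASCII characters little-endian in a u32 and is unpacked with successive 8-bit shifts. A minimal userspace sketch of that encoding (illustrative helper names):

```c
#include <string.h>

/* Decode a fourcc u32 into its four characters, as the debugfs code
 * above does with shifts of 0, 8, 16 and 24 bits. */
static void fourcc_to_str(unsigned int fourcc, char out[5])
{
	out[0] = fourcc & 0xff;
	out[1] = (fourcc >> 8) & 0xff;
	out[2] = (fourcc >> 16) & 0xff;
	out[3] = (fourcc >> 24) & 0xff;
	out[4] = '\0';
}

/* Inverse operation, mirroring the v4l2_fourcc() macro's layout. */
static unsigned int str_to_fourcc(const char *s)
{
	return (unsigned int)s[0] | ((unsigned int)s[1] << 8) |
	       ((unsigned int)s[2] << 16) | ((unsigned int)s[3] << 24);
}

static int fourcc_roundtrip_ok(const char *s)
{
	char buf[5];

	fourcc_to_str(str_to_fourcc(s), buf);
	return strcmp(buf, s) == 0;
}
```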
> +
> +static int vpu_dbg_core(struct seq_file *s, void *data)
> +{
> + struct vpu_core *core = s->private;
> + struct vpu_shared_addr *iface = core->iface;
> + char str[128];
> + int num;
> +
> + num = scnprintf(str, sizeof(str), "[%s]\n", vpu_core_type_desc(core->type));
> + if (seq_write(s, str, num))
> + return 0;
> +
> + num = scnprintf(str, sizeof(str), "boot_region = <%pad, 0x%x>\n",
> + &core->fw.phys, core->fw.length);
> + if (seq_write(s, str, num))
> + return 0;
> + num = scnprintf(str, sizeof(str), "rpc_region = <%pad, 0x%x> used = 0x%x\n",
> + &core->rpc.phys, core->rpc.length, core->rpc.bytesused);
> + if (seq_write(s, str, num))
> + return 0;
> + num = scnprintf(str, sizeof(str), "fwlog_region = <%pad, 0x%x>\n",
> + &core->log.phys, core->log.length);
> + if (seq_write(s, str, num))
> + return 0;
> +
> + num = scnprintf(str, sizeof(str), "state = %d\n", core->state);
> + if (seq_write(s, str, num))
> + return 0;
> + if (core->state == VPU_CORE_DEINIT)
> + return 0;
> + num = scnprintf(str, sizeof(str), "fw version = %d.%d.%d\n",
> + (core->fw_version >> 16) & 0xff,
> + (core->fw_version >> 8) & 0xff,
> + core->fw_version & 0xff);
> + if (seq_write(s, str, num))
> + return 0;
> + num = scnprintf(str, sizeof(str), "instances = %d/%d (0x%02lx), %d\n",
> + hweight32(core->instance_mask),
> + core->supported_instance_count,
> + core->instance_mask,
> + core->request_count);
> + if (seq_write(s, str, num))
> + return 0;
> + num = scnprintf(str, sizeof(str), "kfifo len = 0x%x\n", kfifo_len(&core->msg_fifo));
> + if (seq_write(s, str, num))
> + return 0;
> + num = scnprintf(str, sizeof(str),
> + "cmd_buf:[0x%x, 0x%x], wptr = 0x%x, rptr = 0x%x\n",
> + iface->cmd_desc->start,
> + iface->cmd_desc->end,
> + iface->cmd_desc->wptr,
> + iface->cmd_desc->rptr);
> + if (seq_write(s, str, num))
> + return 0;
> + num = scnprintf(str, sizeof(str),
> + "msg_buf:[0x%x, 0x%x], wptr = 0x%x, rptr = 0x%x\n",
> + iface->msg_desc->start,
> + iface->msg_desc->end,
> + iface->msg_desc->wptr,
> + iface->msg_desc->rptr);
> + if (seq_write(s, str, num))
> + return 0;
> +
> + return 0;
> +}
> +
> +static int vpu_dbg_fwlog(struct seq_file *s, void *data)
> +{
> + struct vpu_core *core = s->private;
> + struct print_buf_desc *print_buf;
> + int length;
> + u32 rptr;
> + u32 wptr;
> + int ret = 0;
> +
> + if (!core->log.virt || core->state == VPU_CORE_DEINIT)
> + return 0;
> +
> + print_buf = core->log.virt;
> + rptr = print_buf->read;
> + wptr = print_buf->write;
> +
> + if (rptr == wptr)
> + return 0;
> + else if (rptr < wptr)
> + length = wptr - rptr;
> + else
> + length = print_buf->bytes + wptr - rptr;
> +
> + if (s->count + length >= s->size) {
> + s->count = s->size;
> + return 0;
> + }
> +
> + if (rptr + length >= print_buf->bytes) {
> + int num = print_buf->bytes - rptr;
> +
> + if (seq_write(s, print_buf->buffer + rptr, num))
> + ret = -1;
> + length -= num;
> + rptr = 0;
> + }
> +
> + if (length) {
> + if (seq_write(s, print_buf->buffer + rptr, length))
> + ret = -1;
> + rptr += length;
> + }
> + if (!ret)
> + print_buf->read = rptr;
> +
> + return 0;
> +}
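The fwlog reader above drains a circular byte buffer written by the firmware: it computes the pending length from the read/write pointers and copies in at most two linear chunks when the data wraps past the end. A self-contained userspace sketch of that wrap-around read (illustrative names, not the driver's API):

```c
#include <string.h>

/* Copy up to out_size pending bytes from a ring of `bytes` bytes,
 * advancing *rptr; returns the number of bytes copied. Mirrors the
 * two-chunk copy in vpu_dbg_fwlog(). */
static int ring_read(const char *buf, unsigned int bytes,
		     unsigned int *rptr, unsigned int wptr,
		     char *out, unsigned int out_size)
{
	unsigned int r = *rptr, len, copied = 0;

	/* pending data; equal pointers mean the ring is empty */
	len = (r <= wptr) ? wptr - r : bytes + wptr - r;
	if (len > out_size)
		len = out_size;

	if (r + len > bytes) {		/* first chunk, up to the end */
		unsigned int n = bytes - r;

		memcpy(out, buf + r, n);
		copied = n;
		len -= n;
		r = 0;
	}
	memcpy(out + copied, buf + r, len);	/* remainder from the start */
	*rptr = (r + len) % bytes;
	return (int)(copied + len);
}

static int ring_read_demo(void)
{
	char ring[8];
	char out[8];
	unsigned int rptr = 6;
	int n;

	memcpy(ring, "abcdefgh", 8);
	/* wptr = 2: four pending bytes, wrapping from index 6 to 1 */
	n = ring_read(ring, 8, &rptr, 2, out, sizeof(out));
	return n == 4 && rptr == 2 && memcmp(out, "ghab", 4) == 0;
}
```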
> +
> +static int vpu_dbg_inst_open(struct inode *inode, struct file *filp)
> +{
> + return single_open(filp, vpu_dbg_instance, inode->i_private);
> +}
> +
> +static ssize_t vpu_dbg_inst_write(struct file *file,
> + const char __user *user_buf, size_t size, loff_t *ppos)
> +{
> + struct seq_file *s = file->private_data;
> + struct vpu_inst *inst = s->private;
> +
> + vpu_session_debug(inst);
> +
> + return size;
> +}
> +
> +static ssize_t vpu_dbg_core_write(struct file *file,
> + const char __user *user_buf, size_t size, loff_t *ppos)
> +{
> + struct seq_file *s = file->private_data;
> + struct vpu_core *core = s->private;
> +
> + pm_runtime_get_sync(core->dev);
> + mutex_lock(&core->lock);
> + if (core->state != VPU_CORE_DEINIT && !core->instance_mask) {
> + dev_info(core->dev, "reset\n");
> + if (!vpu_core_sw_reset(core)) {
> + core->state = VPU_CORE_ACTIVE;
> + core->hang_mask = 0;
> + }
> + }
> + mutex_unlock(&core->lock);
> + pm_runtime_put_sync(core->dev);
> +
> + return size;
> +}
> +
> +static int vpu_dbg_core_open(struct inode *inode, struct file *filp)
> +{
> + return single_open(filp, vpu_dbg_core, inode->i_private);
> +}
> +
> +static int vpu_dbg_fwlog_open(struct inode *inode, struct file *filp)
> +{
> + return single_open(filp, vpu_dbg_fwlog, inode->i_private);
> +}
> +
> +static const struct file_operations vpu_dbg_inst_fops = {
> + .owner = THIS_MODULE,
> + .open = vpu_dbg_inst_open,
> + .release = single_release,
> + .read = seq_read,
> + .write = vpu_dbg_inst_write,
> +};
> +
> +static const struct file_operations vpu_dbg_core_fops = {
> + .owner = THIS_MODULE,
> + .open = vpu_dbg_core_open,
> + .release = single_release,
> + .read = seq_read,
> + .write = vpu_dbg_core_write,
> +};
> +
> +static const struct file_operations vpu_dbg_fwlog_fops = {
> + .owner = THIS_MODULE,
> + .open = vpu_dbg_fwlog_open,
> + .release = single_release,
> + .read = seq_read,
> +};
> +
> +int vpu_inst_create_dbgfs_file(struct vpu_inst *inst)
> +{
> + struct vpu_dev *vpu;
> + char name[64];
> +
> + if (!inst || !inst->core || !inst->core->vpu)
> + return -EINVAL;
> +
> + vpu = inst->core->vpu;
> + if (!vpu->debugfs)
> + return -EINVAL;
> +
> + if (inst->debugfs)
> + return 0;
> +
> + scnprintf(name, sizeof(name), "instance.%d.%d",
> + inst->core->id, inst->id);
> + inst->debugfs = debugfs_create_file(name,
> + VERIFY_OCTAL_PERMISSIONS(0644),
> + vpu->debugfs,
> + inst,
> + &vpu_dbg_inst_fops);
> + if (!inst->debugfs) {
> + dev_err(inst->dev, "failed to create debugfs file %s\n", name);
> + return -EINVAL;
> + }
> +
> + return 0;
> +}
> +
> +int vpu_inst_remove_dbgfs_file(struct vpu_inst *inst)
> +{
> + if (!inst)
> + return 0;
> +
> + debugfs_remove(inst->debugfs);
> + inst->debugfs = NULL;
> +
> + return 0;
> +}
> +
> +int vpu_core_create_dbgfs_file(struct vpu_core *core)
> +{
> + struct vpu_dev *vpu;
> + char name[64];
> +
> + if (!core || !core->vpu)
> + return -EINVAL;
> +
> + vpu = core->vpu;
> + if (!vpu->debugfs)
> + return -EINVAL;
> +
> + if (!core->debugfs) {
> + scnprintf(name, sizeof(name), "core.%d", core->id);
> + core->debugfs = debugfs_create_file(name,
> + VERIFY_OCTAL_PERMISSIONS(0644),
> + vpu->debugfs,
> + core,
> + &vpu_dbg_core_fops);
> + if (!core->debugfs) {
> + dev_err(core->dev, "failed to create debugfs file %s\n", name);
> + return -EINVAL;
> + }
> + }
> + if (!core->debugfs_fwlog) {
> + scnprintf(name, sizeof(name), "fwlog.%d", core->id);
> + core->debugfs_fwlog = debugfs_create_file(name,
> + VERIFY_OCTAL_PERMISSIONS(0444),
> + vpu->debugfs,
> + core,
> + &vpu_dbg_fwlog_fops);
> + if (!core->debugfs_fwlog) {
> + dev_err(core->dev, "failed to create debugfs file %s\n", name);
> + return -EINVAL;
> + }
> + }
> +
> + return 0;
> +}
> +
> +int vpu_core_remove_dbgfs_file(struct vpu_core *core)
> +{
> + if (!core)
> + return 0;
> + debugfs_remove(core->debugfs);
> + core->debugfs = NULL;
> + debugfs_remove(core->debugfs_fwlog);
> + core->debugfs_fwlog = NULL;
> +
> + return 0;
> +}
> +
> +void vpu_inst_record_flow(struct vpu_inst *inst, u32 flow)
> +{
> + if (!inst)
> + return;
> +
> + inst->flows[inst->flow_idx] = flow;
> + inst->flow_idx = (inst->flow_idx + 1) % (ARRAY_SIZE(inst->flows));
> +}
> diff --git a/drivers/media/platform/amphion/vpu_rpc.c b/drivers/media/platform/amphion/vpu_rpc.c
> new file mode 100644
> index 000000000000..7b5e9177e010
> --- /dev/null
> +++ b/drivers/media/platform/amphion/vpu_rpc.c
> @@ -0,0 +1,279 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Copyright 2020-2021 NXP
> + */
> +
> +#include <linux/init.h>
> +#include <linux/interconnect.h>
> +#include <linux/ioctl.h>
> +#include <linux/list.h>
> +#include <linux/kernel.h>
> +#include <linux/module.h>
> +#include <linux/of_device.h>
> +#include <linux/of_address.h>
> +#include <linux/platform_device.h>
> +#include <linux/firmware/imx/ipc.h>
> +#include <linux/firmware/imx/svc/misc.h>
> +#include "vpu.h"
> +#include "vpu_rpc.h"
> +#include "vpu_imx8q.h"
> +#include "vpu_windsor.h"
> +#include "vpu_malone.h"
> +
> +u32 vpu_iface_check_memory_region(struct vpu_core *core, dma_addr_t addr, u32 size)
> +{
> + struct vpu_iface_ops *ops = vpu_core_get_iface(core);
> +
> + if (!ops || !ops->check_memory_region)
> + return VPU_CORE_MEMORY_INVALID;
> +
> + return ops->check_memory_region(core->fw.phys, addr, size);
> +}
> +
> +static u32 vpu_rpc_check_buffer_space(struct vpu_rpc_buffer_desc *desc, bool write)
> +{
> + u32 ptr1;
> + u32 ptr2;
> + u32 size;
> +
> + WARN_ON(!desc);
> +
> + size = desc->end - desc->start;
> + if (write) {
> + ptr1 = desc->wptr;
> + ptr2 = desc->rptr;
> + } else {
> + ptr1 = desc->rptr;
> + ptr2 = desc->wptr;
> + }
> +
> + if (ptr1 == ptr2) {
> + if (!write)
> + return 0;
> + else
> + return size;
> + }
> +
> + return (ptr2 + size - ptr1) % size;
> +}
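The accounting above resolves the ambiguity of equal pointers in a ring: for a writer they mean the whole ring is free, for a reader that nothing is pending; otherwise the distance is taken modulo the ring size. The same computation as a pure userspace function (illustrative name):

```c
/* Free space (write = 1) or pending data (write = 0) in a ring
 * described by [start, end) with current wptr/rptr. Mirrors
 * vpu_rpc_check_buffer_space() above. */
static unsigned int rpc_buffer_space(unsigned int start, unsigned int end,
				     unsigned int wptr, unsigned int rptr,
				     int write)
{
	unsigned int size = end - start;
	unsigned int p1 = write ? wptr : rptr;
	unsigned int p2 = write ? rptr : wptr;

	if (p1 == p2)
		return write ? size : 0;	/* empty ring */
	return (p2 + size - p1) % size;
}
```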
> +
> +static int vpu_rpc_send_cmd_buf(struct vpu_shared_addr *shared,
> + struct vpu_rpc_event *cmd)
> +{
> + struct vpu_rpc_buffer_desc *desc;
> + u32 space = 0;
> + u32 *data;
> + u32 wptr;
> + u32 i;
> +
> + WARN_ON(!shared || !shared->cmd_mem_vir || !cmd);
> +
> + desc = shared->cmd_desc;
> + space = vpu_rpc_check_buffer_space(desc, true);
> + if (space < (((cmd->hdr.num + 1) << 2) + 16)) {
> + pr_err("no space in cmd buffer for [%d] %d\n",
> + cmd->hdr.index, cmd->hdr.id);
> + return -EINVAL;
> + }
> + wptr = desc->wptr;
> + data = (u32 *)(shared->cmd_mem_vir + desc->wptr - desc->start);
> + *data = 0;
> + *data |= ((cmd->hdr.index & 0xff) << 24);
> + *data |= ((cmd->hdr.num & 0xff) << 16);
> + *data |= (cmd->hdr.id & 0x3fff);
> + wptr += 4;
> + data++;
> + if (wptr >= desc->end) {
> + wptr = desc->start;
> + data = shared->cmd_mem_vir;
> + }
> +
> + for (i = 0; i < cmd->hdr.num; i++) {
> + *data = cmd->data[i];
> + wptr += 4;
> + data++;
> + if (wptr >= desc->end) {
> + wptr = desc->start;
> + data = shared->cmd_mem_vir;
> + }
> + }
> +
> + /* update wptr after data is written */
> + mb();
> + desc->wptr = wptr;
> +
> + return 0;
> +}
> +
> +static bool vpu_rpc_check_msg(struct vpu_shared_addr *shared)
> +{
> + struct vpu_rpc_buffer_desc *desc;
> + u32 space = 0;
> + u32 msgword;
> + u32 msgnum;
> +
> + WARN_ON(!shared || !shared->msg_desc);
> +
> + desc = shared->msg_desc;
> + space = vpu_rpc_check_buffer_space(desc, false);
> + space = (space >> 2);
> +
> + if (space) {
> + msgword = *(u32 *)(shared->msg_mem_vir + desc->rptr - desc->start);
> + msgnum = (msgword & 0xff0000) >> 16;
> + if (msgnum <= space)
> + return true;
> + }
> +
> + return false;
> +}
> +
> +static int vpu_rpc_receive_msg_buf(struct vpu_shared_addr *shared, struct vpu_rpc_event *msg)
> +{
> + struct vpu_rpc_buffer_desc *desc;
> + u32 *data;
> + u32 msgword;
> + u32 rptr;
> + u32 i;
> +
> + WARN_ON(!shared || !shared->msg_desc || !msg);
> +
> + if (!vpu_rpc_check_msg(shared))
> + return -EINVAL;
> +
> + desc = shared->msg_desc;
> + data = (u32 *)(shared->msg_mem_vir + desc->rptr - desc->start);
> + rptr = desc->rptr;
> + msgword = *data;
> + data++;
> + rptr += 4;
> + if (rptr >= desc->end) {
> + rptr = desc->start;
> + data = shared->msg_mem_vir;
> + }
> +
> + msg->hdr.index = (msgword >> 24) & 0xff;
> + msg->hdr.num = (msgword >> 16) & 0xff;
> + msg->hdr.id = msgword & 0x3fff;
> +
> + if (msg->hdr.num > ARRAY_SIZE(msg->data)) {
> + pr_err("msg(%d) data length(%d) is out of range\n",
> + msg->hdr.id, msg->hdr.num);
> + return -EINVAL;
> + }
> +
> + for (i = 0; i < msg->hdr.num; i++) {
> + msg->data[i] = *data;
> + data++;
> + rptr += 4;
> + if (rptr >= desc->end) {
> + rptr = desc->start;
> + data = shared->msg_mem_vir;
> + }
> + }
> +
> + /* update rptr after data is read */
> + mb();
> + desc->rptr = rptr;
> +
> + return 0;
> +}
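Both directions above share one 32-bit header word: instance index in bits 31..24, payload word count in bits 23..16, and message/command id in bits 13..0. A round-trip sketch of that layout in plain C (illustrative helper names):

```c
/* Pack and unpack the RPC header word exactly as
 * vpu_rpc_send_cmd_buf() and vpu_rpc_receive_msg_buf() do. */
static unsigned int rpc_pack_hdr(unsigned int index, unsigned int num,
				 unsigned int id)
{
	return ((index & 0xff) << 24) | ((num & 0xff) << 16) | (id & 0x3fff);
}

static void rpc_unpack_hdr(unsigned int word, unsigned int *index,
			   unsigned int *num, unsigned int *id)
{
	*index = (word >> 24) & 0xff;
	*num = (word >> 16) & 0xff;
	*id = word & 0x3fff;
}

static int rpc_hdr_roundtrip_ok(unsigned int index, unsigned int num,
				unsigned int id)
{
	unsigned int i, n, d;

	rpc_unpack_hdr(rpc_pack_hdr(index, num, id), &i, &n, &d);
	return i == index && n == num && d == id;
}
```

Note that the id field is masked to 14 bits, which is why `receive_msg_buf` can trust `hdr.num` only after the explicit range check against `ARRAY_SIZE(msg->data)`.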
> +
> +struct vpu_iface_ops imx8q_rpc_ops[] = {
> + [VPU_CORE_TYPE_ENC] = {
> + .check_codec = vpu_imx8q_check_codec,
> + .check_fmt = vpu_imx8q_check_fmt,
> + .boot_core = vpu_imx8q_boot_core,
> + .get_power_state = vpu_imx8q_get_power_state,
> + .on_firmware_loaded = vpu_imx8q_on_firmware_loaded,
> + .get_data_size = vpu_windsor_get_data_size,
> + .check_memory_region = vpu_imx8q_check_memory_region,
> + .init_rpc = vpu_windsor_init_rpc,
> + .set_log_buf = vpu_windsor_set_log_buf,
> + .set_system_cfg = vpu_windsor_set_system_cfg,
> + .get_version = vpu_windsor_get_version,
> + .send_cmd_buf = vpu_rpc_send_cmd_buf,
> + .receive_msg_buf = vpu_rpc_receive_msg_buf,
> + .pack_cmd = vpu_windsor_pack_cmd,
> + .convert_msg_id = vpu_windsor_convert_msg_id,
> + .unpack_msg_data = vpu_windsor_unpack_msg_data,
> + .config_memory_resource = vpu_windsor_config_memory_resource,
> + .get_stream_buffer_size = vpu_windsor_get_stream_buffer_size,
> + .config_stream_buffer = vpu_windsor_config_stream_buffer,
> + .get_stream_buffer_desc = vpu_windsor_get_stream_buffer_desc,
> + .update_stream_buffer = vpu_windsor_update_stream_buffer,
> + .set_encode_params = vpu_windsor_set_encode_params,
> + .input_frame = vpu_windsor_input_frame,
> + .get_max_instance_count = vpu_windsor_get_max_instance_count,
> + },
> + [VPU_CORE_TYPE_DEC] = {
> + .check_codec = vpu_imx8q_check_codec,
> + .check_fmt = vpu_imx8q_check_fmt,
> + .boot_core = vpu_imx8q_boot_core,
> + .get_power_state = vpu_imx8q_get_power_state,
> + .on_firmware_loaded = vpu_imx8q_on_firmware_loaded,
> + .get_data_size = vpu_malone_get_data_size,
> + .check_memory_region = vpu_imx8q_check_memory_region,
> + .init_rpc = vpu_malone_init_rpc,
> + .set_log_buf = vpu_malone_set_log_buf,
> + .set_system_cfg = vpu_malone_set_system_cfg,
> + .get_version = vpu_malone_get_version,
> + .send_cmd_buf = vpu_rpc_send_cmd_buf,
> + .receive_msg_buf = vpu_rpc_receive_msg_buf,
> + .get_stream_buffer_size = vpu_malone_get_stream_buffer_size,
> + .config_stream_buffer = vpu_malone_config_stream_buffer,
> + .set_decode_params = vpu_malone_set_decode_params,
> + .pack_cmd = vpu_malone_pack_cmd,
> + .convert_msg_id = vpu_malone_convert_msg_id,
> + .unpack_msg_data = vpu_malone_unpack_msg_data,
> + .get_stream_buffer_desc = vpu_malone_get_stream_buffer_desc,
> + .update_stream_buffer = vpu_malone_update_stream_buffer,
> + .add_scode = vpu_malone_add_scode,
> + .input_frame = vpu_malone_input_frame,
> + .pre_send_cmd = vpu_malone_pre_cmd,
> + .post_send_cmd = vpu_malone_post_cmd,
> + .init_instance = vpu_malone_init_instance,
> + .get_max_instance_count = vpu_malone_get_max_instance_count,
> + },
> +};
> +
> +static struct vpu_iface_ops *vpu_get_iface(struct vpu_dev *vpu, enum vpu_core_type type)
> +{
> + struct vpu_iface_ops *rpc_ops = NULL;
> + u32 size = 0;
> +
> + WARN_ON(!vpu || !vpu->res);
> +
> + switch (vpu->res->plat_type) {
> + case IMX8QXP:
> + case IMX8QM:
> + rpc_ops = imx8q_rpc_ops;
> + size = ARRAY_SIZE(imx8q_rpc_ops);
> + break;
> + default:
> + return NULL;
> + }
> +
> + if (type >= size)
> + return NULL;
> +
> + return &rpc_ops[type];
> +}
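The lookup above feeds the inline wrappers in vpu_rpc.h: an ops table indexed by core type, with each wrapper tolerating an unset callback by falling back to a safe default. A compact userspace sketch of that dispatch pattern (types and names are illustrative stand-ins, not the driver's):

```c
#include <stddef.h>

enum core_type { TYPE_ENC, TYPE_DEC, TYPE_COUNT };

struct iface_ops {
	int (*get_power_state)(void);
};

/* Demo callback: this core reports "powered off" (0). */
static int dec_power_state(void)
{
	return 0;
}

/* Only the decoder entry provides the callback. */
static const struct iface_ops demo_ops[TYPE_COUNT] = {
	[TYPE_DEC] = { .get_power_state = dec_power_state },
};

/* Missing callback falls back to a default of 1 ("powered"), as
 * vpu_iface_get_power_state() does above. */
static int get_power_state(enum core_type t)
{
	const struct iface_ops *ops =
		(t < TYPE_COUNT) ? &demo_ops[t] : NULL;

	if (ops && ops->get_power_state)
		return ops->get_power_state();
	return 1;
}
```

This keeps the windsor (encoder) and malone (decoder) backends behind one interface while letting either omit hooks the other needs.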
> +
> +struct vpu_iface_ops *vpu_core_get_iface(struct vpu_core *core)
> +{
> + WARN_ON(!core || !core->vpu);
> +
> + return vpu_get_iface(core->vpu, core->type);
> +}
> +
> +struct vpu_iface_ops *vpu_inst_get_iface(struct vpu_inst *inst)
> +{
> + WARN_ON(!inst || !inst->vpu);
> +
> + if (inst->core)
> + return vpu_core_get_iface(inst->core);
> +
> + return vpu_get_iface(inst->vpu, inst->type);
> +}
> diff --git a/drivers/media/platform/amphion/vpu_rpc.h b/drivers/media/platform/amphion/vpu_rpc.h
> new file mode 100644
> index 000000000000..abe998e5a5be
> --- /dev/null
> +++ b/drivers/media/platform/amphion/vpu_rpc.h
> @@ -0,0 +1,464 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * Copyright 2020-2021 NXP
> + */
> +
> +#ifndef _AMPHION_VPU_RPC_H
> +#define _AMPHION_VPU_RPC_H
> +
> +#include <media/videobuf2-core.h>
> +#include "vpu_codec.h"
> +
> +struct vpu_rpc_buffer_desc {
> + u32 wptr;
> + u32 rptr;
> + u32 start;
> + u32 end;
> +};
> +
> +struct vpu_shared_addr {
> + void *iface;
> + struct vpu_rpc_buffer_desc *cmd_desc;
> + void *cmd_mem_vir;
> + struct vpu_rpc_buffer_desc *msg_desc;
> + void *msg_mem_vir;
> +
> + unsigned long boot_addr;
> + struct vpu_core *core;
> + void *priv;
> +};
> +
> +struct vpu_rpc_event_header {
> + u32 index;
> + u32 id;
> + u32 num;
> +};
> +
> +struct vpu_rpc_event {
> + struct vpu_rpc_event_header hdr;
> + u32 data[128];
> +};
> +
> +struct vpu_iface_ops {
> + bool (*check_codec)(enum vpu_core_type type);
> + bool (*check_fmt)(enum vpu_core_type type, u32 pixelfmt);
> + u32 (*get_data_size)(void);
> + u32 (*check_memory_region)(dma_addr_t base, dma_addr_t addr, u32 size);
> + int (*boot_core)(struct vpu_core *core);
> + int (*shutdown_core)(struct vpu_core *core);
> + int (*restore_core)(struct vpu_core *core);
> + int (*get_power_state)(struct vpu_core *core);
> + int (*on_firmware_loaded)(struct vpu_core *core);
> + void (*init_rpc)(struct vpu_shared_addr *shared,
> + struct vpu_buffer *rpc, dma_addr_t boot_addr);
> + void (*set_log_buf)(struct vpu_shared_addr *shared,
> + struct vpu_buffer *log);
> + void (*set_system_cfg)(struct vpu_shared_addr *shared,
> + u32 regs_base, void __iomem *regs, u32 index);
> + void (*set_stream_cfg)(struct vpu_shared_addr *shared, u32 index);
> + u32 (*get_version)(struct vpu_shared_addr *shared);
> + u32 (*get_max_instance_count)(struct vpu_shared_addr *shared);
> + int (*get_stream_buffer_size)(struct vpu_shared_addr *shared);
> + int (*send_cmd_buf)(struct vpu_shared_addr *shared,
> + struct vpu_rpc_event *cmd);
> + int (*receive_msg_buf)(struct vpu_shared_addr *shared,
> + struct vpu_rpc_event *msg);
> + int (*pack_cmd)(struct vpu_rpc_event *pkt, u32 index, u32 id, void *data);
> + int (*convert_msg_id)(u32 msg_id);
> + int (*unpack_msg_data)(struct vpu_rpc_event *pkt, void *data);
> + int (*input_frame)(struct vpu_shared_addr *shared,
> + struct vpu_inst *inst, struct vb2_buffer *vb);
> + int (*config_memory_resource)(struct vpu_shared_addr *shared,
> + u32 instance,
> + u32 type,
> + u32 index,
> + struct vpu_buffer *buf);
> + int (*config_stream_buffer)(struct vpu_shared_addr *shared,
> + u32 instance,
> + struct vpu_buffer *buf);
> + int (*update_stream_buffer)(struct vpu_shared_addr *shared,
> + u32 instance, u32 ptr, bool write);
> + int (*get_stream_buffer_desc)(struct vpu_shared_addr *shared,
> + u32 instance,
> + struct vpu_rpc_buffer_desc *desc);
> + int (*set_encode_params)(struct vpu_shared_addr *shared,
> + u32 instance,
> + struct vpu_encode_params *params, u32 update);
> + int (*set_decode_params)(struct vpu_shared_addr *shared,
> + u32 instance,
> + struct vpu_decode_params *params, u32 update);
> + int (*add_scode)(struct vpu_shared_addr *shared,
> + u32 instance,
> + struct vpu_buffer *stream_buffer,
> + u32 pixelformat,
> + u32 scode_type);
> + int (*pre_send_cmd)(struct vpu_shared_addr *shared, u32 instance);
> + int (*post_send_cmd)(struct vpu_shared_addr *shared, u32 instance);
> + int (*init_instance)(struct vpu_shared_addr *shared, u32 instance);
> +};
> +
> +enum {
> + VPU_CORE_MEMORY_INVALID = 0,
> + VPU_CORE_MEMORY_CACHED,
> + VPU_CORE_MEMORY_UNCACHED
> +};
> +
> +struct vpu_rpc_region_t {
> + dma_addr_t start;
> + dma_addr_t end;
> + dma_addr_t type;
> +};
> +
> +struct vpu_iface_ops *vpu_core_get_iface(struct vpu_core *core);
> +struct vpu_iface_ops *vpu_inst_get_iface(struct vpu_inst *inst);
> +u32 vpu_iface_check_memory_region(struct vpu_core *core, dma_addr_t addr, u32 size);
> +
> +static inline bool vpu_iface_check_codec(struct vpu_core *core)
> +{
> + struct vpu_iface_ops *ops = vpu_core_get_iface(core);
> +
> + if (ops && ops->check_codec)
> + return ops->check_codec(core->type);
> +
> + return true;
> +}
> +
> +static inline bool vpu_iface_check_format(struct vpu_inst *inst, u32 pixelfmt)
> +{
> + struct vpu_iface_ops *ops = vpu_inst_get_iface(inst);
> +
> + if (ops && ops->check_fmt)
> + return ops->check_fmt(inst->type, pixelfmt);
> +
> + return true;
> +}
> +
> +static inline int vpu_iface_boot_core(struct vpu_core *core)
> +{
> + struct vpu_iface_ops *ops = vpu_core_get_iface(core);
> +
> + if (ops && ops->boot_core)
> + return ops->boot_core(core);
> + return 0;
> +}
> +
> +static inline int vpu_iface_get_power_state(struct vpu_core *core)
> +{
> + struct vpu_iface_ops *ops = vpu_core_get_iface(core);
> +
> + if (ops && ops->get_power_state)
> + return ops->get_power_state(core);
> + return 1;
> +}
> +
> +static inline int vpu_iface_shutdown_core(struct vpu_core *core)
> +{
> + struct vpu_iface_ops *ops = vpu_core_get_iface(core);
> +
> + if (ops && ops->shutdown_core)
> + return ops->shutdown_core(core);
> + return 0;
> +}
> +
> +static inline int vpu_iface_restore_core(struct vpu_core *core)
> +{
> + struct vpu_iface_ops *ops = vpu_core_get_iface(core);
> +
> + if (ops && ops->restore_core)
> + return ops->restore_core(core);
> + return 0;
> +}
> +
> +static inline int vpu_iface_on_firmware_loaded(struct vpu_core *core)
> +{
> + struct vpu_iface_ops *ops = vpu_core_get_iface(core);
> +
> + if (ops && ops->on_firmware_loaded)
> + return ops->on_firmware_loaded(core);
> +
> + return 0;
> +}
> +
> +static inline u32 vpu_iface_get_data_size(struct vpu_core *core)
> +{
> + struct vpu_iface_ops *ops = vpu_core_get_iface(core);
> +
> + if (!ops || !ops->get_data_size)
> + return 0;
> +
> + return ops->get_data_size();
> +}
> +
> +static inline int vpu_iface_init(struct vpu_core *core,
> + struct vpu_shared_addr *shared,
> + struct vpu_buffer *rpc,
> + dma_addr_t boot_addr)
> +{
> + struct vpu_iface_ops *ops = vpu_core_get_iface(core);
> +
> + if (!ops || !ops->init_rpc)
> + return -EINVAL;
> +
> + ops->init_rpc(shared, rpc, boot_addr);
> + core->iface = shared;
> + shared->core = core;
> + if (rpc->bytesused > rpc->length)
> + return -ENOSPC;
> + return 0;
> +}
> +
> +static inline int vpu_iface_set_log_buf(struct vpu_core *core,
> + struct vpu_buffer *log)
> +{
> + struct vpu_iface_ops *ops = vpu_core_get_iface(core);
> +
> + if (!ops)
> + return -EINVAL;
> +
> + if (ops->set_log_buf)
> + ops->set_log_buf(core->iface, log);
> +
> + return 0;
> +}
> +
> +static inline int vpu_iface_config_system(struct vpu_core *core,
> + u32 regs_base, void __iomem *regs)
> +{
> + struct vpu_iface_ops *ops = vpu_core_get_iface(core);
> +
> + if (!ops)
> + return -EINVAL;
> + if (ops->set_system_cfg)
> + ops->set_system_cfg(core->iface, regs_base, regs, core->id);
> +
> + return 0;
> +}
> +
> +static inline int vpu_iface_get_stream_buffer_size(struct vpu_core *core)
> +{
> + struct vpu_iface_ops *ops = vpu_core_get_iface(core);
> +
> + if (!ops || !ops->get_stream_buffer_size)
> + return 0;
> +
> + return ops->get_stream_buffer_size(core->iface);
> +}
> +
> +static inline int vpu_iface_config_stream(struct vpu_inst *inst)
> +{
> + struct vpu_iface_ops *ops = vpu_core_get_iface(inst->core);
> +
> + WARN_ON(inst->id < 0);
> + if (!ops)
> + return -EINVAL;
> + if (ops->set_stream_cfg)
> + ops->set_stream_cfg(inst->core->iface, inst->id);
> + return 0;
> +}
> +
> +static inline int vpu_iface_send_cmd(struct vpu_core *core, struct vpu_rpc_event *cmd)
> +{
> + struct vpu_iface_ops *ops = vpu_core_get_iface(core);
> +
> + if (!ops || !ops->send_cmd_buf)
> + return -EINVAL;
> +
> + return ops->send_cmd_buf(core->iface, cmd);
> +}
> +
> +static inline int vpu_iface_receive_msg(struct vpu_core *core, struct vpu_rpc_event *msg)
> +{
> + struct vpu_iface_ops *ops = vpu_core_get_iface(core);
> +
> + if (!ops || !ops->receive_msg_buf)
> + return -EINVAL;
> +
> + return ops->receive_msg_buf(core->iface, msg);
> +}
> +
> +static inline int vpu_iface_pack_cmd(struct vpu_core *core,
> + struct vpu_rpc_event *pkt,
> + u32 index, u32 id, void *data)
> +{
> + struct vpu_iface_ops *ops = vpu_core_get_iface(core);
> +
> + if (!ops || !ops->pack_cmd)
> + return -EINVAL;
> + return ops->pack_cmd(pkt, index, id, data);
> +}
> +
> +static inline int vpu_iface_convert_msg_id(struct vpu_core *core, u32 msg_id)
> +{
> + struct vpu_iface_ops *ops = vpu_core_get_iface(core);
> +
> + if (!ops || !ops->convert_msg_id)
> + return -EINVAL;
> +
> + return ops->convert_msg_id(msg_id);
> +}
> +
> +static inline int vpu_iface_unpack_msg_data(struct vpu_core *core,
> + struct vpu_rpc_event *pkt, void *data)
> +{
> + struct vpu_iface_ops *ops = vpu_core_get_iface(core);
> +
> + if (!ops || !ops->unpack_msg_data)
> + return -EINVAL;
> +
> + return ops->unpack_msg_data(pkt, data);
> +}
> +
> +static inline int vpu_iface_input_frame(struct vpu_inst *inst,
> + struct vb2_buffer *vb)
> +{
> + struct vpu_iface_ops *ops = vpu_core_get_iface(inst->core);
> +
> + if (!ops || !ops->input_frame)
> + return -EINVAL;
> +
> + return ops->input_frame(inst->core->iface, inst, vb);
> +}
> +
> +static inline int vpu_iface_config_memory_resource(struct vpu_inst *inst,
> + u32 type, u32 index, struct vpu_buffer *buf)
> +{
> + struct vpu_iface_ops *ops = vpu_core_get_iface(inst->core);
> +
> + WARN_ON(inst->id < 0);
> + if (!ops || !ops->config_memory_resource)
> + return -EINVAL;
> +
> + return ops->config_memory_resource(inst->core->iface,
> + inst->id,
> + type, index, buf);
> +}
> +
> +static inline int vpu_iface_config_stream_buffer(struct vpu_inst *inst,
> + struct vpu_buffer *buf)
> +{
> + struct vpu_iface_ops *ops = vpu_core_get_iface(inst->core);
> +
> + WARN_ON(inst->id < 0);
> + if (!ops || !ops->config_stream_buffer)
> + return -EINVAL;
> +
> + return ops->config_stream_buffer(inst->core->iface, inst->id, buf);
> +}
> +
> +static inline int vpu_iface_update_stream_buffer(struct vpu_inst *inst,
> + u32 ptr, bool write)
> +{
> + struct vpu_iface_ops *ops = vpu_core_get_iface(inst->core);
> +
> + WARN_ON(inst->id < 0);
> + if (!ops || !ops->update_stream_buffer)
> + return -EINVAL;
> +
> + return ops->update_stream_buffer(inst->core->iface, inst->id, ptr, write);
> +}
> +
> +static inline int vpu_iface_get_stream_buffer_desc(struct vpu_inst *inst,
> + struct vpu_rpc_buffer_desc *desc)
> +{
> + struct vpu_iface_ops *ops = vpu_core_get_iface(inst->core);
> +
> + WARN_ON(inst->id < 0);
> + if (!ops || !ops->get_stream_buffer_desc)
> + return -EINVAL;
> +
> + if (!desc)
> + return 0;
> +
> + return ops->get_stream_buffer_desc(inst->core->iface, inst->id, desc);
> +}
> +
> +static inline u32 vpu_iface_get_version(struct vpu_core *core)
> +{
> + struct vpu_iface_ops *ops = vpu_core_get_iface(core);
> +
> + if (!ops || !ops->get_version)
> + return 0;
> +
> + return ops->get_version(core->iface);
> +}
> +
> +static inline u32 vpu_iface_get_max_instance_count(struct vpu_core *core)
> +{
> + struct vpu_iface_ops *ops = vpu_core_get_iface(core);
> +
> + if (!ops || !ops->get_max_instance_count)
> + return 0;
> +
> + return ops->get_max_instance_count(core->iface);
> +}
> +
> +static inline int vpu_iface_set_encode_params(struct vpu_inst *inst,
> + struct vpu_encode_params *params, u32 update)
> +{
> + struct vpu_iface_ops *ops = vpu_core_get_iface(inst->core);
> +
> + WARN_ON(inst->id < 0);
> + if (!ops || !ops->set_encode_params)
> + return -EINVAL;
> +
> + return ops->set_encode_params(inst->core->iface, inst->id, params, update);
> +}
> +
> +static inline int vpu_iface_set_decode_params(struct vpu_inst *inst,
> + struct vpu_decode_params *params, u32 update)
> +{
> + struct vpu_iface_ops *ops = vpu_core_get_iface(inst->core);
> +
> + WARN_ON(inst->id < 0);
> + if (!ops || !ops->set_decode_params)
> + return -EINVAL;
> +
> + return ops->set_decode_params(inst->core->iface, inst->id, params, update);
> +}
> +
> +static inline int vpu_iface_add_scode(struct vpu_inst *inst, u32 scode_type)
> +{
> + struct vpu_iface_ops *ops = vpu_core_get_iface(inst->core);
> +
> + WARN_ON(inst->id < 0);
> + if (!ops || !ops->add_scode)
> + return -EINVAL;
> +
> + return ops->add_scode(inst->core->iface, inst->id,
> + &inst->stream_buffer,
> + inst->out_format.pixfmt,
> + scode_type);
> +}
> +
> +static inline int vpu_iface_pre_send_cmd(struct vpu_inst *inst)
> +{
> + struct vpu_iface_ops *ops = vpu_core_get_iface(inst->core);
> +
> + WARN_ON(inst->id < 0);
> + if (ops && ops->pre_send_cmd)
> + return ops->pre_send_cmd(inst->core->iface, inst->id);
> + return 0;
> +}
> +
> +static inline int vpu_iface_post_send_cmd(struct vpu_inst *inst)
> +{
> + struct vpu_iface_ops *ops = vpu_core_get_iface(inst->core);
> +
> + WARN_ON(inst->id < 0);
> + if (ops && ops->post_send_cmd)
> + return ops->post_send_cmd(inst->core->iface, inst->id);
> + return 0;
> +}
> +
> +static inline int vpu_iface_init_instance(struct vpu_inst *inst)
> +{
> + struct vpu_iface_ops *ops = vpu_core_get_iface(inst->core);
> +
> + WARN_ON(inst->id < 0);
> + if (ops && ops->init_instance)
> + return ops->init_instance(inst->core->iface, inst->id);
> +
> + return 0;
> +}
> +
> +#endif
>
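[Side note for readers following the series: the vpu_iface_* helpers above all share one dispatch idiom — look up an ops table, NULL-check the table and the callback, then forward. A self-contained userspace sketch of that idiom; the names here are illustrative stand-ins, not the driver's actual types:]

```c
#include <stddef.h>

/* Hypothetical ops table mirroring the vpu_iface_* wrapper style:
 * every entry is optional, and each wrapper NULL-checks both the
 * table pointer and the callback before dispatching. */
struct iface_ops {
	int (*restore_core)(int core_id);
	int (*send_cmd)(int core_id, int cmd);
};

static int do_restore(int core_id) { return core_id; }

/* Optional hook: a missing callback is not an error, just a no-op. */
static int iface_restore_core(const struct iface_ops *ops, int core_id)
{
	if (ops && ops->restore_core)
		return ops->restore_core(core_id);
	return 0;
}

/* Mandatory hook: a missing callback is a usage error. */
static int iface_send_cmd(const struct iface_ops *ops, int core_id, int cmd)
{
	if (!ops || !ops->send_cmd)
		return -22; /* -EINVAL */
	return ops->send_cmd(core_id, cmd);
}
```

The split between the two guard styles matches what the patch does: hooks like restore_core/pre_send_cmd silently succeed when absent, while hooks like send_cmd_buf return -EINVAL.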

Regards,

Hans

2021-12-02 10:03:40

by Ming Qian

[permalink] [raw]
Subject: RE: [EXT] Re: [PATCH v13 02/13] media:Add nv12mt_8l128 and nv12mt_10be_8l128 video format.


> -----Original Message-----
> From: Hans Verkuil [mailto:[email protected]]
> Sent: Thursday, December 2, 2021 5:40 PM
> To: Ming Qian <[email protected]>; [email protected];
> [email protected]; [email protected]; [email protected]
> Cc: [email protected]; [email protected]; dl-linux-imx
> <[email protected]>; Aisheng Dong <[email protected]>;
> [email protected]; [email protected];
> [email protected]; [email protected]
> Subject: [EXT] Re: [PATCH v13 02/13] media:Add nv12mt_8l128 and
> nv12mt_10be_8l128 video format.
>
> Caution: EXT Email
>
> On 30/11/2021 10:48, Ming Qian wrote:
> > nv12mt_8l128 is 8-bit tiled nv12 format used by amphion decoder.
> > nv12mt_10be_8l128 is 10-bit tiled format used by amphion decoder.
> > The tile size is 8x128
> >
> > Signed-off-by: Ming Qian <[email protected]>
> > Signed-off-by: Shijie Qin <[email protected]>
> > Signed-off-by: Zhou Peng <[email protected]>
> > ---
> > .../userspace-api/media/v4l/pixfmt-yuv-planar.rst | 15 +++++++++++++++
> > drivers/media/v4l2-core/v4l2-ioctl.c | 2 ++
> > include/uapi/linux/videodev2.h | 2 ++
> > 3 files changed, 19 insertions(+)
> >
> > diff --git a/Documentation/userspace-api/media/v4l/pixfmt-yuv-planar.rst b/Documentation/userspace-api/media/v4l/pixfmt-yuv-planar.rst
> > index 3a09d93d405b..fc3baa2753ab 100644
> > --- a/Documentation/userspace-api/media/v4l/pixfmt-yuv-planar.rst
> > +++ b/Documentation/userspace-api/media/v4l/pixfmt-yuv-planar.rst
> > @@ -257,6 +257,8 @@ of the luma plane.
> > .. _V4L2-PIX-FMT-NV12-4L4:
> > .. _V4L2-PIX-FMT-NV12-16L16:
> > .. _V4L2-PIX-FMT-NV12-32L32:
> > +.. _V4L2_PIX_FMT_NV12MT_8L128:
> > +.. _V4L2_PIX_FMT_NV12MT_10BE_8L128:
> >
> > Tiled NV12
> > ----------
> > @@ -296,6 +298,19 @@ tiles linearly in memory. The line stride and
> > image height must be aligned to a multiple of 32. The layouts of the
> > luma and chroma planes are identical.
> >
> > +``V4L2_PIX_FMT_NV12MT_8L128`` is similar to ``V4L2_PIX_FMT_NV12M``
> > +but stores pixel in 2D 8x128 tiles, and stores tiles linearly in memory.
>
> pixel -> pixels (note: also wrong in the text
> V4L2_PIX_FMT_NV12_4L4/16L16/32L32)
>
> Shouldn't this be called V4L2_PIX_FMT_NV12M_8L128? The 'MT' suffix seems
> to be specific to macroblock tiles and not linear tiles.

I'll change it; I thought the 'T' meant tiled

>
> > +The image height must be aligned to a multiple of 128.
> > +The layouts of the luma and chroma planes are identical.
> > +
> > +``V4L2_PIX_FMT_NV12MT_10BE_8L128`` is similar to
> > +``V4L2_PIX_FMT_NV12M`` but stores
> > +10 bits pixel in 2D 8x128 tiles, and stores tiles linearly in memory.
> > +the data is arranged at the big end.
>
> at the big end -> in big endian order
>
> I assume the 10-bit pixels are packed? So 5 bytes contain 4 10-bit pixels,
> laid out like this (for luma):
>
> byte 0: Y0(bits 9-2)
> byte 1: Y0(bits 1-0) Y1(bits 9-4)
> byte 2: Y1(bits 3-0) Y2(bits 9-6)
> byte 3: Y2(bits 5-0) Y3(bits 9-8)
> byte 4: Y3(bits 7-0)
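[That byte layout can be written out as code. A minimal sketch of the unpacking, assuming the packing Hans describes above — which is itself an assumption he is asking the author to confirm:]

```c
#include <stdint.h>

/* Unpack 4 packed 10-bit big-endian luma samples from 5 bytes,
 * following the byte layout in the review comment:
 *   byte 0: Y0(9-2), byte 1: Y0(1-0) Y1(9-4), byte 2: Y1(3-0) Y2(9-6),
 *   byte 3: Y2(5-0) Y3(9-8), byte 4: Y3(7-0). */
static void unpack_10be(const uint8_t b[5], uint16_t y[4])
{
	y[0] = (uint16_t)((b[0] << 2) | (b[1] >> 6));
	y[1] = (uint16_t)(((b[1] & 0x3f) << 4) | (b[2] >> 4));
	y[2] = (uint16_t)(((b[2] & 0x0f) << 6) | (b[3] >> 2));
	y[3] = (uint16_t)(((b[3] & 0x03) << 8) | b[4]);
}
```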
>
> > +The image height must be aligned to a multiple of 128.
> > +The layouts of the luma and chroma planes are identical.
> > +Note the tile size is 8bytes multiplied by 128 bytes, it means that
> > +the low bits and high bits of one pixel may be in differnt tiles.
>
> differnt -> different
>

Got it

> > +
> > .. _nv12mt:
> >
> > .. kernel-figure:: nv12mt.svg
> > diff --git a/drivers/media/v4l2-core/v4l2-ioctl.c b/drivers/media/v4l2-core/v4l2-ioctl.c
> > index 69b74d0e8a90..400eec0157a7 100644
> > --- a/drivers/media/v4l2-core/v4l2-ioctl.c
> > +++ b/drivers/media/v4l2-core/v4l2-ioctl.c
> > @@ -1388,6 +1388,8 @@ static void v4l_fill_fmtdesc(struct v4l2_fmtdesc *fmt)
> > 	case V4L2_META_FMT_VIVID:	descr = "Vivid Metadata"; break;
> > 	case V4L2_META_FMT_RK_ISP1_PARAMS:	descr = "Rockchip ISP1 3A Parameters"; break;
> > 	case V4L2_META_FMT_RK_ISP1_STAT_3A:	descr = "Rockchip ISP1 3A Statistics"; break;
> > +	case V4L2_PIX_FMT_NV12MT_8L128:	descr = "NV12M (8x128 Linear)"; break;
> > +	case V4L2_PIX_FMT_NV12MT_10BE_8L128:	descr = "NV12M 10BE(8x128 Linear)"; break;
>
> "10-bit NV12M (8x128 Linear, BE)"
>
> >
> > default:
> > 	/* Compressed formats */
> >
> > diff --git a/include/uapi/linux/videodev2.h b/include/uapi/linux/videodev2.h
> > index f118fe7a9f58..9443c3109928 100644
> > --- a/include/uapi/linux/videodev2.h
> > +++ b/include/uapi/linux/videodev2.h
> > @@ -632,6 +632,8 @@ struct v4l2_pix_format {
> > /* Tiled YUV formats, non contiguous planes */
> > #define V4L2_PIX_FMT_NV12MT	v4l2_fourcc('T', 'M', '1', '2') /* 12  Y/CbCr 4:2:0 64x32 tiles */
> > #define V4L2_PIX_FMT_NV12MT_16X16	v4l2_fourcc('V', 'M', '1', '2') /* 12  Y/CbCr 4:2:0 16x16 tiles */
> > +#define V4L2_PIX_FMT_NV12MT_8L128	v4l2_fourcc('N', 'A', '1', '2') /* Y/CbCr 4:2:0 8x128 tiles */
> > +#define V4L2_PIX_FMT_NV12MT_10BE_8L128	v4l2_fourcc('N', 'T', '1', '2') /* Y/CbCr 4:2:0 10-bit 8x128 tiles */
>
> Use v4l2_fourcc_be to denote that this is a BE format.
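[For readers unfamiliar with the helpers being discussed: v4l2_fourcc() packs four ASCII characters little-endian into a u32, and v4l2_fourcc_be() additionally sets bit 31 to mark the big-endian variant of the same layout. A standalone sketch with local copies of the macros (check include/uapi/linux/videodev2.h for the authoritative definitions):]

```c
#include <stdint.h>

/* Local stand-ins for the videodev2.h fourcc helpers: four ASCII
 * characters packed little-endian into a 32-bit code; the _be form
 * sets bit 31 to flag a big-endian pixel format. */
#define v4l2_fourcc(a, b, c, d) \
	((uint32_t)(a) | ((uint32_t)(b) << 8) | \
	 ((uint32_t)(c) << 16) | ((uint32_t)(d) << 24))
#define v4l2_fourcc_be(a, b, c, d) (v4l2_fourcc(a, b, c, d) | (1U << 31))
```

So 'N','A','1','2' yields 0x3231414E, and the BE form of 'N','T','1','2' is the same code with the top bit set — which is what Hans is asking for here.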
>
> >
> > /* Bayer formats - see http://www.siliconimaging.com/RGB%20Bayer.htm */
> > #define V4L2_PIX_FMT_SBGGR8	v4l2_fourcc('B', 'A', '8', '1') /* 8  BGBG.. GRGR.. */
> >
>
> Regards,
>
> Hans

2021-12-02 10:04:28

by Ming Qian

[permalink] [raw]
Subject: RE: [EXT] Re: [PATCH v13 03/13] media: amphion: add amphion vpu device driver

> -----Original Message-----
> From: Hans Verkuil [mailto:[email protected]]
> Sent: Thursday, December 2, 2021 5:45 PM
> To: Ming Qian <[email protected]>; [email protected];
> [email protected]; [email protected]; [email protected]
> Cc: [email protected]; [email protected]; dl-linux-imx
> <[email protected]>; Aisheng Dong <[email protected]>;
> [email protected]; [email protected];
> [email protected]; [email protected]
> Subject: [EXT] Re: [PATCH v13 03/13] media: amphion: add amphion vpu
> device driver
>
> On 30/11/2021 10:48, Ming Qian wrote:
> > The amphion vpu codec ip contains encoder and decoder.
> > Windsor is the encoder, it supports to encode H.264.
> > Malone is the decoder, it features a powerful video processing unit
> > able to decode many foramts,
>
> foramts -> formats
>
> > such as H.264, HEVC, and other foramts.
>
> ditto
>

Got it, I'll fix it

> >
> > This Driver is for this IP that is based on the v4l2 mem2mem framework.
> >
> > Supported SoCs are: IMX8QXP, IMX8QM
> >
> > Signed-off-by: Ming Qian <[email protected]>
> > Signed-off-by: Shijie Qin <[email protected]>
> > Signed-off-by: Zhou Peng <[email protected]>
> > Reported-by: kernel test robot <[email protected]>
> > ---
> > arch/arm64/configs/defconfig | 1 +
> > drivers/media/platform/Kconfig | 19 ++
> > drivers/media/platform/Makefile | 2 +
> > drivers/media/platform/amphion/Makefile | 20 ++
> > drivers/media/platform/amphion/vpu.h | 357
> +++++++++++++++++++++
> > drivers/media/platform/amphion/vpu_defs.h | 186 +++++++++++
> > drivers/media/platform/amphion/vpu_drv.c | 265 +++++++++++++++
> > drivers/media/platform/amphion/vpu_imx8q.c | 271 ++++++++++++++++
> > drivers/media/platform/amphion/vpu_imx8q.h | 116 +++++++
> > 9 files changed, 1237 insertions(+)
> > create mode 100644 drivers/media/platform/amphion/Makefile
> > create mode 100644 drivers/media/platform/amphion/vpu.h
> > create mode 100644 drivers/media/platform/amphion/vpu_defs.h
> > create mode 100644 drivers/media/platform/amphion/vpu_drv.c
> > create mode 100644 drivers/media/platform/amphion/vpu_imx8q.c
> > create mode 100644 drivers/media/platform/amphion/vpu_imx8q.h
> >
> > diff --git a/arch/arm64/configs/defconfig b/arch/arm64/configs/defconfig
> > index f2e2b9bdd702..cc3633112f3f 100644
> > --- a/arch/arm64/configs/defconfig
> > +++ b/arch/arm64/configs/defconfig
> > @@ -657,6 +657,7 @@ CONFIG_V4L_PLATFORM_DRIVERS=y
> > CONFIG_VIDEO_RCAR_CSI2=m CONFIG_VIDEO_RCAR_VIN=m
> > CONFIG_VIDEO_SUN6I_CSI=m
> > +CONFIG_VIDEO_AMPHION_VPU=m
> > CONFIG_V4L_MEM2MEM_DRIVERS=y
> > CONFIG_VIDEO_SAMSUNG_S5P_JPEG=m
> > CONFIG_VIDEO_SAMSUNG_S5P_MFC=m
> > diff --git a/drivers/media/platform/Kconfig b/drivers/media/platform/Kconfig
> > index 9fbdba0fd1e7..7d4a8cd52a9e 100644
> > --- a/drivers/media/platform/Kconfig
> > +++ b/drivers/media/platform/Kconfig
> > @@ -216,6 +216,25 @@ config VIDEO_RCAR_ISP
> > To compile this driver as a module, choose M here: the
> > module will be called rcar-isp.
> >
> > +config VIDEO_AMPHION_VPU
> > + tristate "Amphion VPU(Video Processing Unit) Codec IP"
>
> Add space before (
>
> > + depends on ARCH_MXC
>
> Add: || COMPILE_TEST
>
> It should always be possible to compile test drivers, even on other
> architectures.
>
> > + depends on MEDIA_SUPPORT
> > + depends on VIDEO_DEV
> > + depends on VIDEO_V4L2
> > + select MEDIA_CONTROLLER
> > + select V4L2_MEM2MEM_DEV
> > + select VIDEOBUF2_DMA_CONTIG
> > + select VIDEOBUF2_VMALLOC
> > + help
> > + Amphion VPU Codec IP contains two parts: Windsor and Malone.
> > + Windsor is encoder that supports H.264, and Malone is decoder
> > + that supports H.264, HEVC, and other video formats.
> > + This is a V4L2 driver for NXP MXC 8Q video accelerator hardware.
> > + It accelerates encoding and decoding operations on
> > + various NXP SoCs.
> > + To compile this driver as a module choose m here.
> > +
> > endif # V4L_PLATFORM_DRIVERS
> >
> > menuconfig V4L_MEM2MEM_DRIVERS
>
> Regards,
>
> Hans

2021-12-02 10:07:11

by Ming Qian

[permalink] [raw]
Subject: RE: [EXT] Re: [PATCH v13 04/13] media: amphion: add vpu core driver

> -----Original Message-----
> From: Hans Verkuil [mailto:[email protected]]
> Sent: Thursday, December 2, 2021 5:54 PM
> To: Ming Qian <[email protected]>; [email protected];
> [email protected]; [email protected]; [email protected]
> Cc: [email protected]; [email protected]; dl-linux-imx
> <[email protected]>; Aisheng Dong <[email protected]>;
> [email protected]; [email protected];
> [email protected]; [email protected]
> Subject: [EXT] Re: [PATCH v13 04/13] media: amphion: add vpu core driver
>
> On 30/11/2021 10:48, Ming Qian wrote:
> > The vpu supports encoder and decoder.
> > it needs mu core to handle it.
> > core will run either encoder or decoder firmware.
> >
> > This driver is for support the vpu core.
> >
> > Signed-off-by: Ming Qian <[email protected]>
> > Signed-off-by: Shijie Qin <[email protected]>
> > Signed-off-by: Zhou Peng <[email protected]>
> > ---
> > drivers/media/platform/amphion/vpu_codec.h | 67 ++
> > drivers/media/platform/amphion/vpu_core.c | 906
> +++++++++++++++++++++
> > drivers/media/platform/amphion/vpu_core.h | 15 +
> > drivers/media/platform/amphion/vpu_dbg.c | 495 +++++++++++
> > drivers/media/platform/amphion/vpu_rpc.c | 279 +++++++
> > drivers/media/platform/amphion/vpu_rpc.h | 464 +++++++++++
> > 6 files changed, 2226 insertions(+)
> > create mode 100644 drivers/media/platform/amphion/vpu_codec.h
> > create mode 100644 drivers/media/platform/amphion/vpu_core.c
> > create mode 100644 drivers/media/platform/amphion/vpu_core.h
> > create mode 100644 drivers/media/platform/amphion/vpu_dbg.c
> > create mode 100644 drivers/media/platform/amphion/vpu_rpc.c
> > create mode 100644 drivers/media/platform/amphion/vpu_rpc.h
> >
> > diff --git a/drivers/media/platform/amphion/vpu_codec.h b/drivers/media/platform/amphion/vpu_codec.h
> > new file mode 100644
> > index 000000000000..bf8920e9f6d7
> > --- /dev/null
> > +++ b/drivers/media/platform/amphion/vpu_codec.h
> > @@ -0,0 +1,67 @@
> > +/* SPDX-License-Identifier: GPL-2.0 */
> > +/*
> > + * Copyright 2020-2021 NXP
> > + */
> > +
> > +#ifndef _AMPHION_VPU_CODEC_H
> > +#define _AMPHION_VPU_CODEC_H
> > +
> > +struct vpu_encode_params {
> > + u32 input_format;
> > + u32 codec_format;
> > + u32 profile;
> > + u32 tier;
> > + u32 level;
> > + struct v4l2_fract frame_rate;
> > + u32 src_stride;
> > + u32 src_width;
> > + u32 src_height;
> > + struct v4l2_rect crop;
> > + u32 out_width;
> > + u32 out_height;
> > +
> > + u32 gop_length;
> > + u32 bframes;
> > +
> > + u32 rc_mode;
> > + u32 bitrate;
> > + u32 bitrate_min;
> > + u32 bitrate_max;
> > +
> > + u32 i_frame_qp;
> > + u32 p_frame_qp;
> > + u32 b_frame_qp;
> > + u32 qp_min;
> > + u32 qp_max;
> > + u32 qp_min_i;
> > + u32 qp_max_i;
> > +
> > + struct {
> > + u32 enable;
> > + u32 idc;
> > + u32 width;
> > + u32 height;
> > + } sar;
> > +
> > + struct {
> > + u32 primaries;
> > + u32 transfer;
> > + u32 matrix;
> > + u32 full_range;
> > + } color;
> > +};
> > +
> > +struct vpu_decode_params {
> > + u32 codec_format;
> > + u32 output_format;
> > + u32 b_dis_reorder;
> > + u32 b_non_frame;
> > + u32 frame_count;
> > + u32 end_flag;
> > + struct {
> > + u32 base;
> > + u32 size;
> > + } udata;
> > +};
> > +
> > +#endif
> > diff --git a/drivers/media/platform/amphion/vpu_core.c b/drivers/media/platform/amphion/vpu_core.c
> > new file mode 100644
> > index 000000000000..0dbfd1c84f75
> > --- /dev/null
> > +++ b/drivers/media/platform/amphion/vpu_core.c
> > @@ -0,0 +1,906 @@
> > +// SPDX-License-Identifier: GPL-2.0
> > +/*
> > + * Copyright 2020-2021 NXP
> > + */
> > +
> > +#include <linux/init.h>
> > +#include <linux/interconnect.h>
> > +#include <linux/ioctl.h>
> > +#include <linux/list.h>
> > +#include <linux/kernel.h>
> > +#include <linux/module.h>
> > +#include <linux/of_device.h>
> > +#include <linux/of_address.h>
> > +#include <linux/platform_device.h>
> > +#include <linux/slab.h>
> > +#include <linux/types.h>
> > +#include <linux/pm_runtime.h>
> > +#include <linux/pm_domain.h>
> > +#include <linux/firmware.h>
> > +#include "vpu.h"
> > +#include "vpu_defs.h"
> > +#include "vpu_core.h"
> > +#include "vpu_mbox.h"
> > +#include "vpu_msgs.h"
> > +#include "vpu_rpc.h"
> > +#include "vpu_cmds.h"
> > +
> > +void csr_writel(struct vpu_core *core, u32 reg, u32 val)
> > +{
> > + writel(val, core->base + reg);
> > +}
> > +
> > +u32 csr_readl(struct vpu_core *core, u32 reg)
> > +{
> > + return readl(core->base + reg);
> > +}
> > +
> > +static int vpu_core_load_firmware(struct vpu_core *core)
> > +{
> > + const struct firmware *pfw = NULL;
> > + int ret = 0;
> > +
> > + WARN_ON(!core || !core->res || !core->res->fwname);
>
> Either do:
>
> if (WARN_ON(!core || !core->res || !core->res->fwname))
> return -EINVAL;
>
> or just drop it. You'll get a oops with backtrace soon enough.
>
> Same elsewhere in this driver.

Got it, I'll check and fix them
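[For anyone following: the kernel's WARN_ON() evaluates to the condition it tests, which is what makes the guarded early-return style Hans suggests possible. A userspace sketch of the pattern — WARN_ON here is a mock, and -22 stands in for -EINVAL:]

```c
#include <stdio.h>

/* Userspace stand-in for the kernel's WARN_ON(): warn once on the
 * condition and evaluate to it, so it can guard an early return. */
#define WARN_ON(cond) \
	({ int __c = !!(cond); \
	   if (__c) fprintf(stderr, "WARN_ON hit at %s:%d\n", __FILE__, __LINE__); \
	   __c; })

struct core { const char *fwname; }; /* hypothetical, mirrors vpu_core */

/* Guarded style: bail out with an error instead of oopsing later on
 * the NULL dereference. */
static int load_firmware(struct core *core)
{
	if (WARN_ON(!core || !core->fwname))
		return -22; /* -EINVAL */
	return 0;
}
```

A bare `WARN_ON(!core)` followed by `core->...` still dereferences NULL; the `if (WARN_ON(...)) return` form is what actually prevents the crash.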

>
> > + if (!core->fw.virt) {
> > + dev_err(core->dev, "firmware buffer is not ready\n");
> > + return -EINVAL;
> > + }
> > +
> > + ret = request_firmware(&pfw, core->res->fwname, core->dev);
> > + dev_dbg(core->dev, "request_firmware %s : %d\n",
> core->res->fwname, ret);
> > + if (ret) {
> > + dev_err(core->dev, "request firmware %s failed, ret = %d\n",
> > + core->res->fwname, ret);
> > + return ret;
> > + }
> > +
> > + if (core->fw.length < pfw->size) {
> > + dev_err(core->dev, "firmware buffer size want %zu,
> but %d\n",
> > + pfw->size, core->fw.length);
> > + ret = -EINVAL;
> > + goto exit;
> > + }
> > +
> > + memset_io(core->fw.virt, 0, core->fw.length);
> > + memcpy(core->fw.virt, pfw->data, pfw->size);
> > + core->fw.bytesused = pfw->size;
> > + ret = vpu_iface_on_firmware_loaded(core);
> > +exit:
> > + release_firmware(pfw);
> > + pfw = NULL;
> > +
> > + return ret;
> > +}
> > +
> > +static int vpu_core_boot_done(struct vpu_core *core)
> > +{
> > + u32 fw_version;
> > +
> > + fw_version = vpu_iface_get_version(core);
> > + dev_info(core->dev, "%s firmware version : %d.%d.%d\n",
> > + vpu_core_type_desc(core->type),
> > + (fw_version >> 16) & 0xff,
> > + (fw_version >> 8) & 0xff,
> > + fw_version & 0xff);
> > + core->supported_instance_count =
> vpu_iface_get_max_instance_count(core);
> > + if (core->res->act_size) {
> > + u32 count = core->act.length / core->res->act_size;
> > +
> > + core->supported_instance_count =
> min(core->supported_instance_count, count);
> > + }
> > + core->fw_version = fw_version;
> > + core->state = VPU_CORE_ACTIVE;
> > +
> > + return 0;
> > +}
> > +
> > +static int vpu_core_wait_boot_done(struct vpu_core *core)
> > +{
> > + int ret;
> > +
> > + ret = wait_for_completion_timeout(&core->cmp, VPU_TIMEOUT);
> > + if (!ret) {
> > + dev_err(core->dev, "boot timeout\n");
> > + return -EINVAL;
> > + }
> > + return vpu_core_boot_done(core);
> > +}
> > +
> > +static int vpu_core_boot(struct vpu_core *core, bool load)
> > +{
> > + int ret;
> > +
> > + WARN_ON(!core);
> > +
> > + if (!core->res->standalone)
> > + return 0;
> > +
> > + reinit_completion(&core->cmp);
> > + if (load) {
> > + ret = vpu_core_load_firmware(core);
> > + if (ret)
> > + return ret;
> > + }
> > +
> > + vpu_iface_boot_core(core);
> > + return vpu_core_wait_boot_done(core);
> > +}
> > +
> > +static int vpu_core_shutdown(struct vpu_core *core)
> > +{
> > + if (!core->res->standalone)
> > + return 0;
> > + return vpu_iface_shutdown_core(core);
> > +}
> > +
> > +static int vpu_core_restore(struct vpu_core *core)
> > +{
> > + int ret;
> > +
> > + if (!core->res->standalone)
> > + return 0;
> > + ret = vpu_core_sw_reset(core);
> > + if (ret)
> > + return ret;
> > +
> > + vpu_core_boot_done(core);
> > + return vpu_iface_restore_core(core);
> > +}
> > +
> > +static int __vpu_alloc_dma(struct device *dev, struct vpu_buffer *buf)
> > +{
> > + gfp_t gfp = GFP_KERNEL | GFP_DMA32;
> > +
> > + WARN_ON(!dev || !buf);
> > +
> > + if (!buf->length)
> > + return 0;
> > +
> > + buf->virt = dma_alloc_coherent(dev, buf->length, &buf->phys, gfp);
> > + if (!buf->virt)
> > + return -ENOMEM;
> > +
> > + buf->dev = dev;
> > +
> > + return 0;
> > +}
> > +
> > +void vpu_free_dma(struct vpu_buffer *buf)
> > +{
> > + WARN_ON(!buf);
> > +
> > + if (!buf->virt || !buf->dev)
> > + return;
> > +
> > + dma_free_coherent(buf->dev, buf->length, buf->virt, buf->phys);
> > + buf->virt = NULL;
> > + buf->phys = 0;
> > + buf->length = 0;
> > + buf->bytesused = 0;
> > + buf->dev = NULL;
> > +}
> > +
> > +int vpu_alloc_dma(struct vpu_core *core, struct vpu_buffer *buf)
> > +{
> > + WARN_ON(!core || !buf);
> > +
> > + return __vpu_alloc_dma(core->dev, buf);
> > +}
> > +
> > +static void vpu_core_check_hang(struct vpu_core *core)
> > +{
> > + if (core->hang_mask)
> > + core->state = VPU_CORE_HANG;
> > +}
> > +
> > +static struct vpu_core *vpu_core_find_proper_by_type(struct vpu_dev *vpu,
> u32 type)
> > +{
> > + struct vpu_core *core = NULL;
> > + int request_count = INT_MAX;
> > + struct vpu_core *c;
> > +
> > + WARN_ON(!vpu);
> > +
> > + list_for_each_entry(c, &vpu->cores, list) {
> > + dev_dbg(c->dev, "instance_mask = 0x%lx, state = %d\n",
> > + c->instance_mask,
> > + c->state);
> > + if (c->type != type)
> > + continue;
> > + if (c->state == VPU_CORE_DEINIT) {
> > + core = c;
> > + break;
> > + }
> > + vpu_core_check_hang(c);
> > + if (c->state != VPU_CORE_ACTIVE)
> > + continue;
> > + if (c->request_count < request_count) {
> > + request_count = c->request_count;
> > + core = c;
> > + }
> > + if (!request_count)
> > + break;
> > + }
> > +
> > + return core;
> > +}
> > +
> > +static bool vpu_core_is_exist(struct vpu_dev *vpu, struct vpu_core *core)
> > +{
> > + struct vpu_core *c;
> > +
> > + list_for_each_entry(c, &vpu->cores, list) {
> > + if (c == core)
> > + return true;
> > + }
> > +
> > + return false;
> > +}
> > +
> > +static void vpu_core_get_vpu(struct vpu_core *core)
> > +{
> > + core->vpu->get_vpu(core->vpu);
> > + if (core->type == VPU_CORE_TYPE_ENC)
> > + core->vpu->get_enc(core->vpu);
> > + if (core->type == VPU_CORE_TYPE_DEC)
> > + core->vpu->get_dec(core->vpu);
> > +}
> > +
> > +static int vpu_core_register(struct device *dev, struct vpu_core *core)
> > +{
> > + struct vpu_dev *vpu = dev_get_drvdata(dev);
> > + int ret = 0;
> > +
> > + dev_dbg(core->dev, "register core %s\n",
> vpu_core_type_desc(core->type));
> > + if (vpu_core_is_exist(vpu, core))
> > + return 0;
> > +
> > + core->workqueue = alloc_workqueue("vpu", WQ_UNBOUND |
> WQ_MEM_RECLAIM, 1);
> > + if (!core->workqueue) {
> > + dev_err(core->dev, "fail to alloc workqueue\n");
> > + return -ENOMEM;
> > + }
> > + INIT_WORK(&core->msg_work, vpu_msg_run_work);
> > + INIT_DELAYED_WORK(&core->msg_delayed_work,
> vpu_msg_delayed_work);
> > + core->msg_buffer_size =
> roundup_pow_of_two(VPU_MSG_BUFFER_SIZE);
> > + core->msg_buffer = vzalloc(core->msg_buffer_size);
> > + if (!core->msg_buffer) {
> > + dev_err(core->dev, "failed allocate buffer for fifo\n");
> > + ret = -ENOMEM;
> > + goto error;
> > + }
> > + ret = kfifo_init(&core->msg_fifo, core->msg_buffer,
> core->msg_buffer_size);
> > + if (ret) {
> > + dev_err(core->dev, "failed init kfifo\n");
> > + goto error;
> > + }
> > +
> > + list_add_tail(&core->list, &vpu->cores);
> > +
> > + vpu_core_get_vpu(core);
> > +
> > + if (vpu_iface_get_power_state(core))
> > + ret = vpu_core_restore(core);
> > + if (ret)
> > + goto error;
> > +
> > + return 0;
> > +error:
> > + if (core->msg_buffer) {
> > + vfree(core->msg_buffer);
> > + core->msg_buffer = NULL;
> > + }
> > + if (core->workqueue) {
> > + destroy_workqueue(core->workqueue);
> > + core->workqueue = NULL;
> > + }
> > + return ret;
> > +}
> > +
> > +static void vpu_core_put_vpu(struct vpu_core *core)
> > +{
> > + if (core->type == VPU_CORE_TYPE_ENC)
> > + core->vpu->put_enc(core->vpu);
> > + if (core->type == VPU_CORE_TYPE_DEC)
> > + core->vpu->put_dec(core->vpu);
> > + core->vpu->put_vpu(core->vpu);
> > +}
> > +
> > +static int vpu_core_unregister(struct device *dev, struct vpu_core *core)
> > +{
> > + list_del_init(&core->list);
> > +
> > + vpu_core_put_vpu(core);
> > + core->vpu = NULL;
> > + vfree(core->msg_buffer);
> > + core->msg_buffer = NULL;
> > +
> > + if (core->workqueue) {
> > + cancel_work_sync(&core->msg_work);
> > + cancel_delayed_work_sync(&core->msg_delayed_work);
> > + destroy_workqueue(core->workqueue);
> > + core->workqueue = NULL;
> > + }
> > +
> > + return 0;
> > +}
> > +
> > +static int vpu_core_acquire_instance(struct vpu_core *core)
> > +{
> > + int id;
> > +
> > + WARN_ON(!core);
> > +
> > + id = ffz(core->instance_mask);
> > + if (id >= core->supported_instance_count)
> > + return -EINVAL;
> > +
> > + set_bit(id, &core->instance_mask);
> > +
> > + return id;
> > +}
> > +
> > +static void vpu_core_release_instance(struct vpu_core *core, int id)
> > +{
> > + WARN_ON(!core);
> > +
> > + if (id < 0 || id >= core->supported_instance_count)
> > + return;
> > +
> > + clear_bit(id, &core->instance_mask);
> > +}
> > +
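[Annotation: the two functions above implement a small bitmap id allocator — ffz() (find first zero bit) picks the lowest free instance id, set_bit/clear_bit mark it busy or free. A userspace sketch of the same idiom; ffz_ul and the error value -22 (-EINVAL) are illustrative stand-ins for the kernel helpers:]

```c
/* Lowest-free-id bitmap allocator, mirroring
 * vpu_core_acquire_instance()/vpu_core_release_instance(). */
static unsigned long instance_mask;

/* find first zero bit; caller must ensure the mask is not all-ones */
static int ffz_ul(unsigned long mask)
{
	return __builtin_ctzl(~mask);
}

static int acquire_instance(unsigned int supported_count)
{
	int id = ffz_ul(instance_mask);

	if (id >= (int)supported_count)
		return -22; /* -EINVAL: all instance slots busy */
	instance_mask |= 1UL << id;
	return id;
}

static void release_instance(unsigned int supported_count, int id)
{
	if (id < 0 || id >= (int)supported_count)
		return;
	instance_mask &= ~(1UL << id);
}
```

Releasing an id makes it the next one handed out again, since ffz always returns the lowest clear bit.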
> > +struct vpu_inst *vpu_inst_get(struct vpu_inst *inst)
> > +{
> > + if (!inst)
> > + return NULL;
> > +
> > + atomic_inc(&inst->ref_count);
> > +
> > + return inst;
> > +}
> > +
> > +void vpu_inst_put(struct vpu_inst *inst)
> > +{
> > + if (!inst)
> > + return;
> > + if (atomic_dec_and_test(&inst->ref_count)) {
> > + if (inst->release)
> > + inst->release(inst);
> > + }
> > +}
> > +
> > +struct vpu_core *vpu_request_core(struct vpu_dev *vpu, enum
> vpu_core_type type)
> > +{
> > + struct vpu_core *core = NULL;
> > + int ret;
> > +
> > + mutex_lock(&vpu->lock);
> > +
> > + core = vpu_core_find_proper_by_type(vpu, type);
> > + if (!core)
> > + goto exit;
> > +
> > + mutex_lock(&core->lock);
> > + pm_runtime_get_sync(core->dev);
> > +
> > + if (core->state == VPU_CORE_DEINIT) {
> > + ret = vpu_core_boot(core, true);
> > + if (ret) {
> > + pm_runtime_put_sync(core->dev);
> > + mutex_unlock(&core->lock);
> > + core = NULL;
> > + goto exit;
> > + }
> > + }
> > +
> > + core->request_count++;
> > +
> > + mutex_unlock(&core->lock);
> > +exit:
> > + mutex_unlock(&vpu->lock);
> > +
> > + return core;
> > +}
> > +
> > +void vpu_release_core(struct vpu_core *core)
> > +{
> > + if (!core)
> > + return;
> > +
> > + mutex_lock(&core->lock);
> > + pm_runtime_put_sync(core->dev);
> > + if (core->request_count)
> > + core->request_count--;
> > + mutex_unlock(&core->lock);
> > +}
> > +
> > +int vpu_inst_register(struct vpu_inst *inst)
> > +{
> > + struct vpu_dev *vpu;
> > + struct vpu_core *core;
> > + int ret = 0;
> > +
> > + WARN_ON(!inst || !inst->vpu);
> > +
> > + vpu = inst->vpu;
> > + core = inst->core;
> > + if (!core) {
> > + core = vpu_request_core(vpu, inst->type);
> > + if (!core) {
> > + dev_err(vpu->dev, "there is no vpu core for %s\n",
> > + vpu_core_type_desc(inst->type));
> > + return -EINVAL;
> > + }
> > + inst->core = core;
> > + inst->dev = get_device(core->dev);
> > + }
> > +
> > + mutex_lock(&core->lock);
> > + if (inst->id >= 0 && inst->id < core->supported_instance_count)
> > + goto exit;
> > +
> > + ret = vpu_core_acquire_instance(core);
> > + if (ret < 0)
> > + goto exit;
> > +
> > + vpu_trace(inst->dev, "[%d] %p\n", ret, inst);
> > + inst->id = ret;
> > + list_add_tail(&inst->list, &core->instances);
> > + ret = 0;
> > + if (core->res->act_size) {
> > + inst->act.phys = core->act.phys + core->res->act_size *
> inst->id;
> > + inst->act.virt = core->act.virt + core->res->act_size * inst->id;
> > + inst->act.length = core->res->act_size;
> > + }
> > + vpu_inst_create_dbgfs_file(inst);
> > +exit:
> > + mutex_unlock(&core->lock);
> > +
> > + if (ret)
> > + dev_err(core->dev, "register instance fail\n");
> > + return ret;
> > +}
> > +
> > +int vpu_inst_unregister(struct vpu_inst *inst)
> > +{
> > + struct vpu_core *core;
> > +
> > + WARN_ON(!inst);
> > +
> > + if (!inst->core)
> > + return 0;
> > +
> > + core = inst->core;
> > + vpu_clear_request(inst);
> > + mutex_lock(&core->lock);
> > + if (inst->id >= 0 && inst->id < core->supported_instance_count) {
> > + vpu_inst_remove_dbgfs_file(inst);
> > + list_del_init(&inst->list);
> > + vpu_core_release_instance(core, inst->id);
> > + inst->id = VPU_INST_NULL_ID;
> > + }
> > + vpu_core_check_hang(core);
> > + if (core->state == VPU_CORE_HANG && !core->instance_mask) {
> > + dev_info(core->dev, "reset hang core\n");
> > + if (!vpu_core_sw_reset(core)) {
> > + core->state = VPU_CORE_ACTIVE;
> > + core->hang_mask = 0;
> > + }
> > + }
> > + mutex_unlock(&core->lock);
> > +
> > + return 0;
> > +}
> > +
> > +struct vpu_inst *vpu_core_find_instance(struct vpu_core *core, u32 index)
> > +{
> > + struct vpu_inst *inst = NULL;
> > + struct vpu_inst *tmp;
> > +
> > + mutex_lock(&core->lock);
> > + if (!test_bit(index, &core->instance_mask))
> > + goto exit;
> > + list_for_each_entry(tmp, &core->instances, list) {
> > + if (tmp->id == index) {
> > + inst = vpu_inst_get(tmp);
> > + break;
> > + }
> > + }
> > +exit:
> > + mutex_unlock(&core->lock);
> > +
> > + return inst;
> > +}
> > +
> > +const struct vpu_core_resources *vpu_get_resource(struct vpu_inst *inst)
> > +{
> > + struct vpu_dev *vpu;
> > + struct vpu_core *core = NULL;
> > + const struct vpu_core_resources *res = NULL;
> > +
> > + if (!inst || !inst->vpu)
> > + return NULL;
> > +
> > + if (inst->core && inst->core->res)
> > + return inst->core->res;
> > +
> > + vpu = inst->vpu;
> > + mutex_lock(&vpu->lock);
> > + list_for_each_entry(core, &vpu->cores, list) {
> > + if (core->type == inst->type) {
> > + res = core->res;
> > + break;
> > + }
> > + }
> > + mutex_unlock(&vpu->lock);
> > +
> > + return res;
> > +}
> > +
> > +static int vpu_core_parse_dt(struct vpu_core *core, struct device_node *np)
> > +{
> > + struct device_node *node;
> > + struct resource res;
> > +
> > + if (of_count_phandle_with_args(np, "memory-region", NULL) < 2) {
> > + dev_err(core->dev, "need 2 memory-region for boot and rpc\n");
> > + return -ENODEV;
> > + }
> > +
> > + node = of_parse_phandle(np, "memory-region", 0);
> > + if (!node) {
> > + dev_err(core->dev, "boot-region of_parse_phandle error\n");
> > + return -ENODEV;
> > + }
> > + if (of_address_to_resource(node, 0, &res)) {
> > + dev_err(core->dev, "boot-region of_address_to_resource error\n");
> > + return -EINVAL;
> > + }
> > + core->fw.phys = res.start;
> > + core->fw.length = resource_size(&res);
> > +
> > + node = of_parse_phandle(np, "memory-region", 1);
> > + if (!node) {
> > + dev_err(core->dev, "rpc-region of_parse_phandle error\n");
> > + return -ENODEV;
> > + }
> > + if (of_address_to_resource(node, 0, &res)) {
> > + dev_err(core->dev, "rpc-region of_address_to_resource error\n");
> > + return -EINVAL;
> > + }
> > + core->rpc.phys = res.start;
> > + core->rpc.length = resource_size(&res);
> > +
> > + if (core->rpc.length < core->res->rpc_size + core->res->fwlog_size) {
> > + dev_err(core->dev, "the rpc-region <%pad, 0x%x> is not enough\n",
> > + &core->rpc.phys, core->rpc.length);
> > + return -EINVAL;
> > + }
> > +
> > + core->fw.virt = ioremap_wc(core->fw.phys, core->fw.length);
> > + core->rpc.virt = ioremap_wc(core->rpc.phys, core->rpc.length);
> > + memset_io(core->rpc.virt, 0, core->rpc.length);
> > +
> > + if (vpu_iface_check_memory_region(core,
> > + core->rpc.phys,
> > + core->rpc.length) != VPU_CORE_MEMORY_UNCACHED) {
> > + dev_err(core->dev, "rpc region<%pad, 0x%x> isn't uncached\n",
> > + &core->rpc.phys, core->rpc.length);
> > + return -EINVAL;
> > + }
> > +
> > + core->log.phys = core->rpc.phys + core->res->rpc_size;
> > + core->log.virt = core->rpc.virt + core->res->rpc_size;
> > + core->log.length = core->res->fwlog_size;
> > + core->act.phys = core->log.phys + core->log.length;
> > + core->act.virt = core->log.virt + core->log.length;
> > + core->act.length = core->rpc.length - core->res->rpc_size - core->log.length;
> > + core->rpc.length = core->res->rpc_size;
> > +
> > + return 0;
> > +}
> > +
> > +static int vpu_core_probe(struct platform_device *pdev)
> > +{
> > + struct device *dev = &pdev->dev;
> > + struct vpu_core *core;
> > + struct vpu_dev *vpu = dev_get_drvdata(dev->parent);
> > + struct vpu_shared_addr *iface;
> > + u32 iface_data_size;
> > + int ret;
> > +
> > + dev_dbg(dev, "probe\n");
> > + if (!vpu)
> > + return -EINVAL;
> > + core = devm_kzalloc(dev, sizeof(*core), GFP_KERNEL);
> > + if (!core)
> > + return -ENOMEM;
> > +
> > + core->pdev = pdev;
> > + core->dev = dev;
> > + platform_set_drvdata(pdev, core);
> > + core->vpu = vpu;
> > + INIT_LIST_HEAD(&core->instances);
> > + mutex_init(&core->lock);
> > + mutex_init(&core->cmd_lock);
> > + init_completion(&core->cmp);
> > + init_waitqueue_head(&core->ack_wq);
> > + core->state = VPU_CORE_DEINIT;
> > +
> > + core->res = of_device_get_match_data(dev);
> > + if (!core->res)
> > + return -ENODEV;
> > +
> > + core->type = core->res->type;
> > + core->id = of_alias_get_id(dev->of_node, "vpu_core");
> > + if (core->id < 0) {
> > + dev_err(dev, "can't get vpu core id\n");
> > + return core->id;
> > + }
> > + dev_info(core->dev, "[%d] = %s\n", core->id, vpu_core_type_desc(core->type));
> > + ret = vpu_core_parse_dt(core, dev->of_node);
> > + if (ret)
> > + return ret;
> > +
> > + core->base = devm_platform_ioremap_resource(pdev, 0);
> > + if (IS_ERR(core->base))
> > + return PTR_ERR(core->base);
> > +
> > + if (!vpu_iface_check_codec(core)) {
> > + dev_err(core->dev, "is not supported\n");
> > + return -EINVAL;
> > + }
> > +
> > + ret = vpu_mbox_init(core);
> > + if (ret)
> > + return ret;
> > +
> > + iface = devm_kzalloc(dev, sizeof(*iface), GFP_KERNEL);
> > + if (!iface)
> > + return -ENOMEM;
> > +
> > + iface_data_size = vpu_iface_get_data_size(core);
> > + if (iface_data_size) {
> > + iface->priv = devm_kzalloc(dev, iface_data_size, GFP_KERNEL);
> > + if (!iface->priv)
> > + return -ENOMEM;
> > + }
> > +
> > + ret = vpu_iface_init(core, iface, &core->rpc, core->fw.phys);
> > + if (ret) {
> > + dev_err(core->dev, "init iface fail, ret = %d\n", ret);
> > + return ret;
> > + }
> > +
> > + vpu_iface_config_system(core, vpu->res->mreg_base, vpu->base);
> > + vpu_iface_set_log_buf(core, &core->log);
> > +
> > + pm_runtime_enable(dev);
> > + ret = pm_runtime_get_sync(dev);
>
> Use pm_runtime_resume_and_get() instead and drop the
> pm_runtime_put_noidle()
> in the 'if' below. The use of pm_runtime_resume_and_get is preferred over
> the rather confusing pm_runtime_get_sync().
>
> If it is used elsewhere in this series as well (I haven't checked this),
> then make the same changes.
>
> > + if (ret) {
> > + pm_runtime_put_noidle(dev);
> > + pm_runtime_set_suspended(dev);
> > + goto err_runtime_disable;
> > + }
> > +
> > + ret = vpu_core_register(dev->parent, core);
> > + if (ret)
> > + goto err_core_register;
> > + core->parent = dev->parent;
> > +
> > + pm_runtime_put_sync(dev);
> > + vpu_core_create_dbgfs_file(core);
> > +
> > + return 0;
> > +
> > +err_core_register:
> > + pm_runtime_put_sync(dev);
> > +err_runtime_disable:
> > + pm_runtime_disable(dev);
> > +
> > + return ret;
> > +}
> > +
> > +static int vpu_core_remove(struct platform_device *pdev)
> > +{
> > + struct device *dev = &pdev->dev;
> > + struct vpu_core *core = platform_get_drvdata(pdev);
> > + int ret;
> > +
> > + vpu_core_remove_dbgfs_file(core);
> > + ret = pm_runtime_get_sync(dev);
>
> Ah, same here.
>
> > + WARN_ON(ret < 0);
> > +
> > + vpu_core_shutdown(core);
> > + pm_runtime_put_sync(dev);
> > + pm_runtime_disable(dev);
> > +
> > + vpu_core_unregister(core->parent, core);
> > + iounmap(core->fw.virt);
> > + iounmap(core->rpc.virt);
> > + mutex_destroy(&core->lock);
> > + mutex_destroy(&core->cmd_lock);
> > +
> > + return 0;
> > +}
> > +
> > +static int __maybe_unused vpu_core_runtime_resume(struct device *dev)
> > +{
> > + struct vpu_core *core = dev_get_drvdata(dev);
> > +
> > + return vpu_mbox_request(core);
> > +}
> > +
> > +static int __maybe_unused vpu_core_runtime_suspend(struct device *dev)
> > +{
> > + struct vpu_core *core = dev_get_drvdata(dev);
> > +
> > + vpu_mbox_free(core);
> > + return 0;
> > +}
> > +
> > +static void vpu_core_cancel_work(struct vpu_core *core)
> > +{
> > + struct vpu_inst *inst = NULL;
> > +
> > + cancel_work_sync(&core->msg_work);
> > + cancel_delayed_work_sync(&core->msg_delayed_work);
> > +
> > + mutex_lock(&core->lock);
> > + list_for_each_entry(inst, &core->instances, list)
> > + cancel_work_sync(&inst->msg_work);
> > + mutex_unlock(&core->lock);
> > +}
> > +
> > +static void vpu_core_resume_work(struct vpu_core *core)
> > +{
> > + struct vpu_inst *inst = NULL;
> > + unsigned long delay = msecs_to_jiffies(10);
> > +
> > + queue_work(core->workqueue, &core->msg_work);
> > + queue_delayed_work(core->workqueue, &core->msg_delayed_work, delay);
> > +
> > + mutex_lock(&core->lock);
> > + list_for_each_entry(inst, &core->instances, list)
> > + queue_work(inst->workqueue, &inst->msg_work);
> > + mutex_unlock(&core->lock);
> > +}
> > +
> > +static int __maybe_unused vpu_core_resume(struct device *dev)
> > +{
> > + struct vpu_core *core = dev_get_drvdata(dev);
> > + int ret = 0;
> > +
> > + if (!core->res->standalone)
> > + return 0;
> > +
> > + mutex_lock(&core->lock);
> > + pm_runtime_get_sync(dev);
> > + vpu_core_get_vpu(core);
> > + if (core->state != VPU_CORE_SNAPSHOT)
> > + goto exit;
> > +
> > + if (!vpu_iface_get_power_state(core)) {
> > + if (!list_empty(&core->instances)) {
> > + ret = vpu_core_boot(core, false);
> > + if (ret) {
> > + dev_err(core->dev, "%s boot fail\n", __func__);
> > + core->state = VPU_CORE_DEINIT;
> > + goto exit;
> > + }
> > + } else {
> > + core->state = VPU_CORE_DEINIT;
> > + }
> > + } else {
> > + if (!list_empty(&core->instances)) {
> > + ret = vpu_core_sw_reset(core);
> > + if (ret) {
> > + dev_err(core->dev, "%s sw_reset fail\n", __func__);
> > + core->state = VPU_CORE_HANG;
> > + goto exit;
> > + }
> > + }
> > + core->state = VPU_CORE_ACTIVE;
> > + }
> > +
> > +exit:
> > + pm_runtime_put_sync(dev);
> > + mutex_unlock(&core->lock);
> > +
> > + vpu_core_resume_work(core);
> > + return ret;
> > +}
> > +
> > +static int __maybe_unused vpu_core_suspend(struct device *dev)
> > +{
> > + struct vpu_core *core = dev_get_drvdata(dev);
> > + int ret = 0;
> > +
> > + if (!core->res->standalone)
> > + return 0;
> > +
> > + mutex_lock(&core->lock);
> > + if (core->state == VPU_CORE_ACTIVE) {
> > + if (!list_empty(&core->instances)) {
> > + ret = vpu_core_snapshot(core);
> > + if (ret) {
> > + mutex_unlock(&core->lock);
> > + return ret;
> > + }
> > + }
> > +
> > + core->state = VPU_CORE_SNAPSHOT;
> > + }
> > + mutex_unlock(&core->lock);
> > +
> > + vpu_core_cancel_work(core);
> > +
> > + mutex_lock(&core->lock);
> > + vpu_core_put_vpu(core);
> > + mutex_unlock(&core->lock);
> > + return ret;
> > +}
> > +
> > +static const struct dev_pm_ops vpu_core_pm_ops = {
> > + SET_RUNTIME_PM_OPS(vpu_core_runtime_suspend, vpu_core_runtime_resume, NULL)
> > + SET_SYSTEM_SLEEP_PM_OPS(vpu_core_suspend, vpu_core_resume)
> > +};
> > +
> > +static struct vpu_core_resources imx8q_enc = {
> > + .type = VPU_CORE_TYPE_ENC,
> > + .fwname = "vpu/vpu_fw_imx8_enc.bin",
> > + .stride = 16,
> > + .max_width = 1920,
> > + .max_height = 1920,
> > + .min_width = 64,
> > + .min_height = 48,
> > + .step_width = 2,
> > + .step_height = 2,
> > + .rpc_size = 0x80000,
> > + .fwlog_size = 0x80000,
> > + .act_size = 0xc0000,
> > + .standalone = true,
> > +};
> > +
> > +static struct vpu_core_resources imx8q_dec = {
> > + .type = VPU_CORE_TYPE_DEC,
> > + .fwname = "vpu/vpu_fw_imx8_dec.bin",
> > + .stride = 256,
> > + .max_width = 8188,
> > + .max_height = 8188,
> > + .min_width = 16,
> > + .min_height = 16,
> > + .step_width = 1,
> > + .step_height = 1,
> > + .rpc_size = 0x80000,
> > + .fwlog_size = 0x80000,
> > + .standalone = true,
> > +};
> > +
> > +static const struct of_device_id vpu_core_dt_match[] = {
> > + { .compatible = "nxp,imx8q-vpu-encoder", .data = &imx8q_enc },
> > + { .compatible = "nxp,imx8q-vpu-decoder", .data = &imx8q_dec },
> > + {}
> > +};
> > +MODULE_DEVICE_TABLE(of, vpu_core_dt_match);
> > +
> > +static struct platform_driver amphion_vpu_core_driver = {
> > + .probe = vpu_core_probe,
> > + .remove = vpu_core_remove,
> > + .driver = {
> > + .name = "amphion-vpu-core",
> > + .of_match_table = vpu_core_dt_match,
> > + .pm = &vpu_core_pm_ops,
> > + },
> > +};
> > +
> > +int __init vpu_core_driver_init(void)
> > +{
> > + return platform_driver_register(&amphion_vpu_core_driver);
> > +}
> > +
> > +void __exit vpu_core_driver_exit(void)
> > +{
> > + platform_driver_unregister(&amphion_vpu_core_driver);
> > +}
> > diff --git a/drivers/media/platform/amphion/vpu_core.h b/drivers/media/platform/amphion/vpu_core.h
> > new file mode 100644
> > index 000000000000..00a662997da4
> > --- /dev/null
> > +++ b/drivers/media/platform/amphion/vpu_core.h
> > @@ -0,0 +1,15 @@
> > +/* SPDX-License-Identifier: GPL-2.0 */
> > +/*
> > + * Copyright 2020-2021 NXP
> > + */
> > +
> > +#ifndef _AMPHION_VPU_CORE_H
> > +#define _AMPHION_VPU_CORE_H
> > +
> > +void csr_writel(struct vpu_core *core, u32 reg, u32 val);
> > +u32 csr_readl(struct vpu_core *core, u32 reg);
> > +int vpu_alloc_dma(struct vpu_core *core, struct vpu_buffer *buf);
> > +void vpu_free_dma(struct vpu_buffer *buf);
> > +struct vpu_inst *vpu_core_find_instance(struct vpu_core *core, u32 index);
> > +
> > +#endif
> > diff --git a/drivers/media/platform/amphion/vpu_dbg.c b/drivers/media/platform/amphion/vpu_dbg.c
> > new file mode 100644
> > index 000000000000..2e7e11101f99
> > --- /dev/null
> > +++ b/drivers/media/platform/amphion/vpu_dbg.c
> > @@ -0,0 +1,495 @@
> > +// SPDX-License-Identifier: GPL-2.0
> > +/*
> > + * Copyright 2020-2021 NXP
> > + */
> > +
> > +#include <linux/init.h>
> > +#include <linux/device.h>
> > +#include <linux/ioctl.h>
> > +#include <linux/list.h>
> > +#include <linux/module.h>
> > +#include <linux/kernel.h>
> > +#include <linux/types.h>
> > +#include <linux/pm_runtime.h>
> > +#include <media/v4l2-device.h>
> > +#include <linux/debugfs.h>
> > +#include "vpu.h"
> > +#include "vpu_defs.h"
> > +#include "vpu_helpers.h"
> > +#include "vpu_cmds.h"
> > +#include "vpu_rpc.h"
> > +
> > +struct print_buf_desc {
> > + u32 start_h_phy;
> > + u32 start_h_vir;
> > + u32 start_m;
> > + u32 bytes;
> > + u32 read;
> > + u32 write;
> > + char buffer[0];
> > +};
> > +
> > +static char *vb2_stat_name[] = {
> > + [VB2_BUF_STATE_DEQUEUED] = "dequeued",
> > + [VB2_BUF_STATE_IN_REQUEST] = "in_request",
> > + [VB2_BUF_STATE_PREPARING] = "preparing",
> > + [VB2_BUF_STATE_QUEUED] = "queued",
> > + [VB2_BUF_STATE_ACTIVE] = "active",
> > + [VB2_BUF_STATE_DONE] = "done",
> > + [VB2_BUF_STATE_ERROR] = "error",
> > +};
> > +
> > +static char *vpu_stat_name[] = {
> > + [VPU_BUF_STATE_IDLE] = "idle",
> > + [VPU_BUF_STATE_INUSE] = "inuse",
> > + [VPU_BUF_STATE_DECODED] = "decoded",
> > + [VPU_BUF_STATE_READY] = "ready",
> > + [VPU_BUF_STATE_SKIP] = "skip",
> > + [VPU_BUF_STATE_ERROR] = "error",
> > +};
> > +
> > +static int vpu_dbg_instance(struct seq_file *s, void *data)
> > +{
> > + struct vpu_inst *inst = s->private;
> > + char str[128];
> > + int num;
> > + struct vb2_queue *vq;
> > + int i;
> > +
> > + num = scnprintf(str, sizeof(str), "[%s]\n", vpu_core_type_desc(inst->type));
> > + if (seq_write(s, str, num))
> > + return 0;
> > +
> > + num = scnprintf(str, sizeof(str), "tgid = %d,pid = %d\n", inst->tgid, inst->pid);
> > + if (seq_write(s, str, num))
> > + return 0;
> > + num = scnprintf(str, sizeof(str), "state = %d\n", inst->state);
> > + if (seq_write(s, str, num))
> > + return 0;
> > + num = scnprintf(str, sizeof(str),
> > + "min_buffer_out = %d, min_buffer_cap = %d\n",
> > + inst->min_buffer_out, inst->min_buffer_cap);
> > + if (seq_write(s, str, num))
> > + return 0;
> > +
> > +
> > + vq = v4l2_m2m_get_src_vq(inst->fh.m2m_ctx);
> > + num = scnprintf(str, sizeof(str),
> > + "output (%2d, %2d): fmt = %c%c%c%c %d x %d, %d;",
> > + vb2_is_streaming(vq),
> > + vq->num_buffers,
> > + inst->out_format.pixfmt,
> > + inst->out_format.pixfmt >> 8,
> > + inst->out_format.pixfmt >> 16,
> > + inst->out_format.pixfmt >> 24,
> > + inst->out_format.width,
> > + inst->out_format.height,
> > + vq->last_buffer_dequeued);
> > + if (seq_write(s, str, num))
> > + return 0;
> > + for (i = 0; i < inst->out_format.num_planes; i++) {
> > + num = scnprintf(str, sizeof(str), " %d(%d)",
> > + inst->out_format.sizeimage[i],
> > + inst->out_format.bytesperline[i]);
> > + if (seq_write(s, str, num))
> > + return 0;
> > + }
> > + if (seq_write(s, "\n", 1))
> > + return 0;
> > +
> > + vq = v4l2_m2m_get_dst_vq(inst->fh.m2m_ctx);
> > + num = scnprintf(str, sizeof(str),
> > + "capture(%2d, %2d): fmt = %c%c%c%c %d x %d, %d;",
> > + vb2_is_streaming(vq),
> > + vq->num_buffers,
> > + inst->cap_format.pixfmt,
> > + inst->cap_format.pixfmt >> 8,
> > + inst->cap_format.pixfmt >> 16,
> > + inst->cap_format.pixfmt >> 24,
> > + inst->cap_format.width,
> > + inst->cap_format.height,
> > + vq->last_buffer_dequeued);
> > + if (seq_write(s, str, num))
> > + return 0;
> > + for (i = 0; i < inst->cap_format.num_planes; i++) {
> > + num = scnprintf(str, sizeof(str), " %d(%d)",
> > + inst->cap_format.sizeimage[i],
> > + inst->cap_format.bytesperline[i]);
> > + if (seq_write(s, str, num))
> > + return 0;
> > + }
> > + if (seq_write(s, "\n", 1))
> > + return 0;
> > + num = scnprintf(str, sizeof(str), "crop: (%d, %d) %d x %d\n",
> > + inst->crop.left,
> > + inst->crop.top,
> > + inst->crop.width,
> > + inst->crop.height);
> > + if (seq_write(s, str, num))
> > + return 0;
> > +
> > + vq = v4l2_m2m_get_src_vq(inst->fh.m2m_ctx);
> > + for (i = 0; i < vq->num_buffers; i++) {
> > + struct vb2_buffer *vb = vq->bufs[i];
> > + struct vb2_v4l2_buffer *vbuf = to_vb2_v4l2_buffer(vb);
> > + struct vpu_vb2_buffer *vpu_buf = to_vpu_vb2_buffer(vbuf);
> > +
> > + if (vb->state == VB2_BUF_STATE_DEQUEUED)
> > + continue;
> > + num = scnprintf(str, sizeof(str),
> > + "output [%2d] state = %10s, %8s\n",
> > + i, vb2_stat_name[vb->state],
> > + vpu_stat_name[vpu_buf->state]);
> > + if (seq_write(s, str, num))
> > + return 0;
> > + }
> > +
> > + vq = v4l2_m2m_get_dst_vq(inst->fh.m2m_ctx);
> > + for (i = 0; i < vq->num_buffers; i++) {
> > + struct vb2_buffer *vb = vq->bufs[i];
> > + struct vb2_v4l2_buffer *vbuf = to_vb2_v4l2_buffer(vb);
> > + struct vpu_vb2_buffer *vpu_buf = to_vpu_vb2_buffer(vbuf);
> > +
> > + if (vb->state == VB2_BUF_STATE_DEQUEUED)
> > + continue;
> > + num = scnprintf(str, sizeof(str),
> > + "capture[%2d] state = %10s, %8s\n",
> > + i, vb2_stat_name[vb->state],
> > + vpu_stat_name[vpu_buf->state]);
> > + if (seq_write(s, str, num))
> > + return 0;
> > + }
> > +
> > + num = scnprintf(str, sizeof(str), "sequence = %d\n", inst->sequence);
> > + if (seq_write(s, str, num))
> > + return 0;
> > +
> > + if (inst->use_stream_buffer) {
> > + num = scnprintf(str, sizeof(str), "stream_buffer = %d / %d, <%pad, 0x%x>\n",
> > + vpu_helper_get_used_space(inst),
> > + inst->stream_buffer.length,
> > + &inst->stream_buffer.phys,
> > + inst->stream_buffer.length);
> > + if (seq_write(s, str, num))
> > + return 0;
> > + }
> > + num = scnprintf(str, sizeof(str), "kfifo len = 0x%x\n", kfifo_len(&inst->msg_fifo));
> > + if (seq_write(s, str, num))
> > + return 0;
> > +
> > + num = scnprintf(str, sizeof(str), "flow :\n");
> > + if (seq_write(s, str, num))
> > + return 0;
> > +
> > + mutex_lock(&inst->core->cmd_lock);
> > + for (i = 0; i < ARRAY_SIZE(inst->flows); i++) {
> > + u32 idx = (inst->flow_idx + i) % (ARRAY_SIZE(inst->flows));
> > +
> > + if (!inst->flows[idx])
> > + continue;
> > + num = scnprintf(str, sizeof(str), "\t[%s]0x%x\n",
> > + inst->flows[idx] >= VPU_MSG_ID_NOOP ? "M" : "C",
> > + inst->flows[idx]);
> > + if (seq_write(s, str, num)) {
> > + mutex_unlock(&inst->core->cmd_lock);
> > + return 0;
> > + }
> > + }
> > + mutex_unlock(&inst->core->cmd_lock);
> > +
> > + i = 0;
> > + while (true) {
> > + num = call_vop(inst, get_debug_info, str, sizeof(str), i++);
> > + if (num <= 0)
> > + break;
> > + if (seq_write(s, str, num))
> > + return 0;
> > + }
> > +
> > + return 0;
> > +}
> > +
> > +static int vpu_dbg_core(struct seq_file *s, void *data)
> > +{
> > + struct vpu_core *core = s->private;
> > + struct vpu_shared_addr *iface = core->iface;
> > + char str[128];
> > + int num;
> > +
> > + num = scnprintf(str, sizeof(str), "[%s]\n", vpu_core_type_desc(core->type));
> > + if (seq_write(s, str, num))
> > + return 0;
> > +
> > + num = scnprintf(str, sizeof(str), "boot_region = <%pad, 0x%x>\n",
> > + &core->fw.phys, core->fw.length);
> > + if (seq_write(s, str, num))
> > + return 0;
> > + num = scnprintf(str, sizeof(str), "rpc_region = <%pad, 0x%x> used = 0x%x\n",
> > + &core->rpc.phys, core->rpc.length, core->rpc.bytesused);
> > + if (seq_write(s, str, num))
> > + return 0;
> > + num = scnprintf(str, sizeof(str), "fwlog_region = <%pad, 0x%x>\n",
> > + &core->log.phys, core->log.length);
> > + if (seq_write(s, str, num))
> > + return 0;
> > +
> > + num = scnprintf(str, sizeof(str), "state = %d\n", core->state);
> > + if (seq_write(s, str, num))
> > + return 0;
> > + if (core->state == VPU_CORE_DEINIT)
> > + return 0;
> > + num = scnprintf(str, sizeof(str), "fw version = %d.%d.%d\n",
> > + (core->fw_version >> 16) & 0xff,
> > + (core->fw_version >> 8) & 0xff,
> > + core->fw_version & 0xff);
> > + if (seq_write(s, str, num))
> > + return 0;
> > + num = scnprintf(str, sizeof(str), "instances = %d/%d (0x%02lx), %d\n",
> > + hweight32(core->instance_mask),
> > + core->supported_instance_count,
> > + core->instance_mask,
> > + core->request_count);
> > + if (seq_write(s, str, num))
> > + return 0;
> > + num = scnprintf(str, sizeof(str), "kfifo len = 0x%x\n", kfifo_len(&core->msg_fifo));
> > + if (seq_write(s, str, num))
> > + return 0;
> > + num = scnprintf(str, sizeof(str),
> > + "cmd_buf:[0x%x, 0x%x], wptr = 0x%x, rptr = 0x%x\n",
> > + iface->cmd_desc->start,
> > + iface->cmd_desc->end,
> > + iface->cmd_desc->wptr,
> > + iface->cmd_desc->rptr);
> > + if (seq_write(s, str, num))
> > + return 0;
> > + num = scnprintf(str, sizeof(str),
> > + "msg_buf:[0x%x, 0x%x], wptr = 0x%x, rptr = 0x%x\n",
> > + iface->msg_desc->start,
> > + iface->msg_desc->end,
> > + iface->msg_desc->wptr,
> > + iface->msg_desc->rptr);
> > + if (seq_write(s, str, num))
> > + return 0;
> > +
> > + return 0;
> > +}
> > +
> > +static int vpu_dbg_fwlog(struct seq_file *s, void *data)
> > +{
> > + struct vpu_core *core = s->private;
> > + struct print_buf_desc *print_buf;
> > + int length;
> > + u32 rptr;
> > + u32 wptr;
> > + int ret = 0;
> > +
> > + if (!core->log.virt || core->state == VPU_CORE_DEINIT)
> > + return 0;
> > +
> > + print_buf = core->log.virt;
> > + rptr = print_buf->read;
> > + wptr = print_buf->write;
> > +
> > + if (rptr == wptr)
> > + return 0;
> > + else if (rptr < wptr)
> > + length = wptr - rptr;
> > + else
> > + length = print_buf->bytes + wptr - rptr;
> > +
> > + if (s->count + length >= s->size) {
> > + s->count = s->size;
> > + return 0;
> > + }
> > +
> > + if (rptr + length >= print_buf->bytes) {
> > + int num = print_buf->bytes - rptr;
> > +
> > + if (seq_write(s, print_buf->buffer + rptr, num))
> > + ret = -1;
> > + length -= num;
> > + rptr = 0;
> > + }
> > +
> > + if (length) {
> > + if (seq_write(s, print_buf->buffer + rptr, length))
> > + ret = -1;
> > + rptr += length;
> > + }
> > + if (!ret)
> > + print_buf->read = rptr;
> > +
> > + return 0;
> > +}
> > +
> > +static int vpu_dbg_inst_open(struct inode *inode, struct file *filp)
> > +{
> > + return single_open(filp, vpu_dbg_instance, inode->i_private);
> > +}
> > +
> > +static ssize_t vpu_dbg_inst_write(struct file *file,
> > + const char __user *user_buf, size_t size, loff_t *ppos)
> > +{
> > + struct seq_file *s = file->private_data;
> > + struct vpu_inst *inst = s->private;
> > +
> > + vpu_session_debug(inst);
> > +
> > + return size;
> > +}
> > +
> > +static ssize_t vpu_dbg_core_write(struct file *file,
> > + const char __user *user_buf, size_t size, loff_t *ppos)
> > +{
> > + struct seq_file *s = file->private_data;
> > + struct vpu_core *core = s->private;
> > +
> > + pm_runtime_get_sync(core->dev);
> > + mutex_lock(&core->lock);
> > + if (core->state != VPU_CORE_DEINIT && !core->instance_mask) {
> > + dev_info(core->dev, "reset\n");
> > + if (!vpu_core_sw_reset(core)) {
> > + core->state = VPU_CORE_ACTIVE;
> > + core->hang_mask = 0;
> > + }
> > + }
> > + mutex_unlock(&core->lock);
> > + pm_runtime_put_sync(core->dev);
> > +
> > + return size;
> > +}
> > +
> > +static int vpu_dbg_core_open(struct inode *inode, struct file *filp)
> > +{
> > + return single_open(filp, vpu_dbg_core, inode->i_private);
> > +}
> > +
> > +static int vpu_dbg_fwlog_open(struct inode *inode, struct file *filp)
> > +{
> > + return single_open(filp, vpu_dbg_fwlog, inode->i_private);
> > +}
> > +
> > +static const struct file_operations vpu_dbg_inst_fops = {
> > + .owner = THIS_MODULE,
> > + .open = vpu_dbg_inst_open,
> > + .release = single_release,
> > + .read = seq_read,
> > + .write = vpu_dbg_inst_write,
> > +};
> > +
> > +static const struct file_operations vpu_dbg_core_fops = {
> > + .owner = THIS_MODULE,
> > + .open = vpu_dbg_core_open,
> > + .release = single_release,
> > + .read = seq_read,
> > + .write = vpu_dbg_core_write,
> > +};
> > +
> > +static const struct file_operations vpu_dbg_fwlog_fops = {
> > + .owner = THIS_MODULE,
> > + .open = vpu_dbg_fwlog_open,
> > + .release = single_release,
> > + .read = seq_read,
> > +};
> > +
> > +int vpu_inst_create_dbgfs_file(struct vpu_inst *inst)
> > +{
> > + struct vpu_dev *vpu;
> > + char name[64];
> > +
> > + if (!inst || !inst->core || !inst->core->vpu)
> > + return -EINVAL;
> > +
> > + vpu = inst->core->vpu;
> > + if (!vpu->debugfs)
> > + return -EINVAL;
> > +
> > + if (inst->debugfs)
> > + return 0;
> > +
> > + scnprintf(name, sizeof(name), "instance.%d.%d",
> > + inst->core->id, inst->id);
> > + inst->debugfs = debugfs_create_file((const char *)name,
> > + VERIFY_OCTAL_PERMISSIONS(0644),
> > + vpu->debugfs,
> > + inst,
> > + &vpu_dbg_inst_fops);
> > + if (!inst->debugfs) {
> > + dev_err(inst->dev, "vpu create debugfs %s fail\n", name);
> > + return -EINVAL;
> > + }
> > +
> > + return 0;
> > +}
> > +
> > +int vpu_inst_remove_dbgfs_file(struct vpu_inst *inst)
> > +{
> > + if (!inst)
> > + return 0;
> > +
> > + debugfs_remove(inst->debugfs);
> > + inst->debugfs = NULL;
> > +
> > + return 0;
> > +}
> > +
> > +int vpu_core_create_dbgfs_file(struct vpu_core *core)
> > +{
> > + struct vpu_dev *vpu;
> > + char name[64];
> > +
> > + if (!core || !core->vpu)
> > + return -EINVAL;
> > +
> > + vpu = core->vpu;
> > + if (!vpu->debugfs)
> > + return -EINVAL;
> > +
> > + if (!core->debugfs) {
> > + scnprintf(name, sizeof(name), "core.%d", core->id);
> > + core->debugfs = debugfs_create_file((const char *)name,
> > + VERIFY_OCTAL_PERMISSIONS(0644),
> > + vpu->debugfs,
> > + core,
> > + &vpu_dbg_core_fops);
> > + if (!core->debugfs) {
> > + dev_err(core->dev, "vpu create debugfs %s fail\n", name);
> > + return -EINVAL;
> > + }
> > + }
> > + if (!core->debugfs_fwlog) {
> > + scnprintf(name, sizeof(name), "fwlog.%d", core->id);
> > + core->debugfs_fwlog = debugfs_create_file((const char *)name,
> > + VERIFY_OCTAL_PERMISSIONS(0444),
> > + vpu->debugfs,
> > + core,
> > + &vpu_dbg_fwlog_fops);
> > + if (!core->debugfs_fwlog) {
> > + dev_err(core->dev, "vpu create debugfs %s fail\n", name);
> > + return -EINVAL;
> > + }
> > + }
> > +
> > + return 0;
> > +}
> > +
> > +int vpu_core_remove_dbgfs_file(struct vpu_core *core)
> > +{
> > + if (!core)
> > + return 0;
> > + debugfs_remove(core->debugfs);
> > + core->debugfs = NULL;
> > + debugfs_remove(core->debugfs_fwlog);
> > + core->debugfs_fwlog = NULL;
> > +
> > + return 0;
> > +}
> > +
> > +void vpu_inst_record_flow(struct vpu_inst *inst, u32 flow)
> > +{
> > + if (!inst)
> > + return;
> > +
> > + inst->flows[inst->flow_idx] = flow;
> > + inst->flow_idx = (inst->flow_idx + 1) % (ARRAY_SIZE(inst->flows));
> > +}
> > diff --git a/drivers/media/platform/amphion/vpu_rpc.c b/drivers/media/platform/amphion/vpu_rpc.c
> > new file mode 100644
> > index 000000000000..7b5e9177e010
> > --- /dev/null
> > +++ b/drivers/media/platform/amphion/vpu_rpc.c
> > @@ -0,0 +1,279 @@
> > +// SPDX-License-Identifier: GPL-2.0
> > +/*
> > + * Copyright 2020-2021 NXP
> > + */
> > +
> > +#include <linux/init.h>
> > +#include <linux/interconnect.h>
> > +#include <linux/ioctl.h>
> > +#include <linux/list.h>
> > +#include <linux/kernel.h>
> > +#include <linux/module.h>
> > +#include <linux/of_device.h>
> > +#include <linux/of_address.h>
> > +#include <linux/platform_device.h>
> > +#include <linux/firmware/imx/ipc.h>
> > +#include <linux/firmware/imx/svc/misc.h>
> > +#include "vpu.h"
> > +#include "vpu_rpc.h"
> > +#include "vpu_imx8q.h"
> > +#include "vpu_windsor.h"
> > +#include "vpu_malone.h"
> > +
> > +u32 vpu_iface_check_memory_region(struct vpu_core *core, dma_addr_t addr, u32 size)
> > +{
> > + struct vpu_iface_ops *ops = vpu_core_get_iface(core);
> > +
> > + if (!ops || !ops->check_memory_region)
> > + return VPU_CORE_MEMORY_INVALID;
> > +
> > + return ops->check_memory_region(core->fw.phys, addr, size);
> > +}
> > +
> > +static u32 vpu_rpc_check_buffer_space(struct vpu_rpc_buffer_desc *desc, bool write)
> > +{
> > + u32 ptr1;
> > + u32 ptr2;
> > + u32 size;
> > +
> > + WARN_ON(!desc);
> > +
> > + size = desc->end - desc->start;
> > + if (write) {
> > + ptr1 = desc->wptr;
> > + ptr2 = desc->rptr;
> > + } else {
> > + ptr1 = desc->rptr;
> > + ptr2 = desc->wptr;
> > + }
> > +
> > + if (ptr1 == ptr2) {
> > + if (!write)
> > + return 0;
> > + else
> > + return size;
> > + }
> > +
> > + return (ptr2 + size - ptr1) % size;
> > +}
> > +
> > +static int vpu_rpc_send_cmd_buf(struct vpu_shared_addr *shared,
> > + struct vpu_rpc_event *cmd)
> > +{
> > + struct vpu_rpc_buffer_desc *desc;
> > + u32 space = 0;
> > + u32 *data;
> > + u32 wptr;
> > + u32 i;
> > +
> > + WARN_ON(!shared || !shared->cmd_mem_vir || !cmd);
> > +
> > + desc = shared->cmd_desc;
> > + space = vpu_rpc_check_buffer_space(desc, true);
> > + if (space < (((cmd->hdr.num + 1) << 2) + 16)) {
> > + pr_err("Cmd Buffer is no space for [%d] %d\n",
> > + cmd->hdr.index, cmd->hdr.id);
> > + return -EINVAL;
> > + }
> > + wptr = desc->wptr;
> > + data = (u32 *)(shared->cmd_mem_vir + desc->wptr - desc->start);
> > + *data = 0;
> > + *data |= ((cmd->hdr.index & 0xff) << 24);
> > + *data |= ((cmd->hdr.num & 0xff) << 16);
> > + *data |= (cmd->hdr.id & 0x3fff);
> > + wptr += 4;
> > + data++;
> > + if (wptr >= desc->end) {
> > + wptr = desc->start;
> > + data = shared->cmd_mem_vir;
> > + }
> > +
> > + for (i = 0; i < cmd->hdr.num; i++) {
> > + *data = cmd->data[i];
> > + wptr += 4;
> > + data++;
> > + if (wptr >= desc->end) {
> > + wptr = desc->start;
> > + data = shared->cmd_mem_vir;
> > + }
> > + }
> > +
> > + /*update wptr after data is written*/
> > + mb();
> > + desc->wptr = wptr;
> > +
> > + return 0;
> > +}
> > +
> > +static bool vpu_rpc_check_msg(struct vpu_shared_addr *shared)
> > +{
> > + struct vpu_rpc_buffer_desc *desc;
> > + u32 space = 0;
> > + u32 msgword;
> > + u32 msgnum;
> > +
> > + WARN_ON(!shared || !shared->msg_desc);
> > +
> > + desc = shared->msg_desc;
> > + space = vpu_rpc_check_buffer_space(desc, 0);
> > + space = (space >> 2);
> > +
> > + if (space) {
> > + msgword = *(u32 *)(shared->msg_mem_vir + desc->rptr - desc->start);
> > + msgnum = (msgword & 0xff0000) >> 16;
> > + if (msgnum <= space)
> > + return true;
> > + }
> > +
> > + return false;
> > +}
> > +
> > +static int vpu_rpc_receive_msg_buf(struct vpu_shared_addr *shared, struct vpu_rpc_event *msg)
> > +{
> > + struct vpu_rpc_buffer_desc *desc;
> > + u32 *data;
> > + u32 msgword;
> > + u32 rptr;
> > + u32 i;
> > +
> > + WARN_ON(!shared || !shared->msg_desc || !msg);
> > +
> > + if (!vpu_rpc_check_msg(shared))
> > + return -EINVAL;
> > +
> > + desc = shared->msg_desc;
> > + data = (u32 *)(shared->msg_mem_vir + desc->rptr - desc->start);
> > + rptr = desc->rptr;
> > + msgword = *data;
> > + data++;
> > + rptr += 4;
> > + if (rptr >= desc->end) {
> > + rptr = desc->start;
> > + data = shared->msg_mem_vir;
> > + }
> > +
> > + msg->hdr.index = (msgword >> 24) & 0xff;
> > + msg->hdr.num = (msgword >> 16) & 0xff;
> > + msg->hdr.id = msgword & 0x3fff;
> > +
> > + if (msg->hdr.num > ARRAY_SIZE(msg->data)) {
> > + pr_err("msg(%d) data length(%d) is out of range\n",
> > + msg->hdr.id, msg->hdr.num);
> > + return -EINVAL;
> > + }
> > +
> > + for (i = 0; i < msg->hdr.num; i++) {
> > + msg->data[i] = *data;
> > + data++;
> > + rptr += 4;
> > + if (rptr >= desc->end) {
> > + rptr = desc->start;
> > + data = shared->msg_mem_vir;
> > + }
> > + }
> > +
> > + /* update rptr after data is read */
> > + mb();
> > + desc->rptr = rptr;
> > +
> > + return 0;
> > +}
> > +
> > +struct vpu_iface_ops imx8q_rpc_ops[] = {
> > + [VPU_CORE_TYPE_ENC] = {
> > + .check_codec = vpu_imx8q_check_codec,
> > + .check_fmt = vpu_imx8q_check_fmt,
> > + .boot_core = vpu_imx8q_boot_core,
> > + .get_power_state = vpu_imx8q_get_power_state,
> > + .on_firmware_loaded = vpu_imx8q_on_firmware_loaded,
> > + .get_data_size = vpu_windsor_get_data_size,
> > + .check_memory_region = vpu_imx8q_check_memory_region,
> > + .init_rpc = vpu_windsor_init_rpc,
> > + .set_log_buf = vpu_windsor_set_log_buf,
> > + .set_system_cfg = vpu_windsor_set_system_cfg,
> > + .get_version = vpu_windsor_get_version,
> > + .send_cmd_buf = vpu_rpc_send_cmd_buf,
> > + .receive_msg_buf = vpu_rpc_receive_msg_buf,
> > + .pack_cmd = vpu_windsor_pack_cmd,
> > + .convert_msg_id = vpu_windsor_convert_msg_id,
> > + .unpack_msg_data = vpu_windsor_unpack_msg_data,
> > + .config_memory_resource = vpu_windsor_config_memory_resource,
> > + .get_stream_buffer_size = vpu_windsor_get_stream_buffer_size,
> > + .config_stream_buffer = vpu_windsor_config_stream_buffer,
> > + .get_stream_buffer_desc = vpu_windsor_get_stream_buffer_desc,
> > + .update_stream_buffer = vpu_windsor_update_stream_buffer,
> > + .set_encode_params = vpu_windsor_set_encode_params,
> > + .input_frame = vpu_windsor_input_frame,
> > + .get_max_instance_count = vpu_windsor_get_max_instance_count,
> > + },
> > + [VPU_CORE_TYPE_DEC] = {
> > + .check_codec = vpu_imx8q_check_codec,
> > + .check_fmt = vpu_imx8q_check_fmt,
> > + .boot_core = vpu_imx8q_boot_core,
> > + .get_power_state = vpu_imx8q_get_power_state,
> > + .on_firmware_loaded = vpu_imx8q_on_firmware_loaded,
> > + .get_data_size = vpu_malone_get_data_size,
> > + .check_memory_region = vpu_imx8q_check_memory_region,
> > + .init_rpc = vpu_malone_init_rpc,
> > + .set_log_buf = vpu_malone_set_log_buf,
> > + .set_system_cfg = vpu_malone_set_system_cfg,
> > + .get_version = vpu_malone_get_version,
> > + .send_cmd_buf = vpu_rpc_send_cmd_buf,
> > + .receive_msg_buf = vpu_rpc_receive_msg_buf,
> > + .get_stream_buffer_size = vpu_malone_get_stream_buffer_size,
> > + .config_stream_buffer = vpu_malone_config_stream_buffer,
> > + .set_decode_params = vpu_malone_set_decode_params,
> > + .pack_cmd = vpu_malone_pack_cmd,
> > + .convert_msg_id = vpu_malone_convert_msg_id,
> > + .unpack_msg_data = vpu_malone_unpack_msg_data,
> > + .get_stream_buffer_desc = vpu_malone_get_stream_buffer_desc,
> > + .update_stream_buffer = vpu_malone_update_stream_buffer,
> > + .add_scode = vpu_malone_add_scode,
> > + .input_frame = vpu_malone_input_frame,
> > + .pre_send_cmd = vpu_malone_pre_cmd,
> > + .post_send_cmd = vpu_malone_post_cmd,
> > + .init_instance = vpu_malone_init_instance,
> > + .get_max_instance_count = vpu_malone_get_max_instance_count,
> > + },
> > +};
> > +
> > +static struct vpu_iface_ops *vpu_get_iface(struct vpu_dev *vpu, enum vpu_core_type type)
> > +{
> > + struct vpu_iface_ops *rpc_ops = NULL;
> > + u32 size = 0;
> > +
> > + WARN_ON(!vpu || !vpu->res);
> > +
> > + switch (vpu->res->plat_type) {
> > + case IMX8QXP:
> > + case IMX8QM:
> > + rpc_ops = imx8q_rpc_ops;
> > + size = ARRAY_SIZE(imx8q_rpc_ops);
> > + break;
> > + default:
> > + return NULL;
> > + }
> > +
> > + if (type >= size)
> > + return NULL;
> > +
> > + return &rpc_ops[type];
> > +}
> > +
> > +struct vpu_iface_ops *vpu_core_get_iface(struct vpu_core *core)
> > +{
> > + WARN_ON(!core || !core->vpu);
> > +
> > + return vpu_get_iface(core->vpu, core->type);
> > +}
> > +
> > +struct vpu_iface_ops *vpu_inst_get_iface(struct vpu_inst *inst)
> > +{
> > + WARN_ON(!inst || !inst->vpu);
> > +
> > + if (inst->core)
> > + return vpu_core_get_iface(inst->core);
> > +
> > + return vpu_get_iface(inst->vpu, inst->type);
> > +}
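
The dispatch above is a fixed per-core-type vtable with a bounds check before the table lookup. A minimal standalone sketch of the pattern (all names here are illustrative stand-ins, not driver symbols):

```c
#include <assert.h>
#include <stddef.h>

/* One ops entry per core type, indexed by a small enum. */
enum core_type { TYPE_ENC, TYPE_DEC, TYPE_COUNT };

struct iface_ops {
	int (*boot)(void);
};

/* Stand-ins for the real per-firmware callbacks. */
static int enc_boot(void) { return 1; }
static int dec_boot(void) { return 2; }

static struct iface_ops demo_ops[TYPE_COUNT] = {
	[TYPE_ENC] = { .boot = enc_boot },
	[TYPE_DEC] = { .boot = dec_boot },
};

static struct iface_ops *get_iface(unsigned int type)
{
	if (type >= TYPE_COUNT)
		return NULL; /* unknown type: caller must handle NULL */
	return &demo_ops[type];
}
```

The driver adds one more level on top of this: the platform switch selects which table to index before the type bounds check.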
> > diff --git a/drivers/media/platform/amphion/vpu_rpc.h b/drivers/media/platform/amphion/vpu_rpc.h
> > new file mode 100644
> > index 000000000000..abe998e5a5be
> > --- /dev/null
> > +++ b/drivers/media/platform/amphion/vpu_rpc.h
> > @@ -0,0 +1,464 @@
> > +/* SPDX-License-Identifier: GPL-2.0 */
> > +/*
> > + * Copyright 2020-2021 NXP
> > + */
> > +
> > +#ifndef _AMPHION_VPU_RPC_H
> > +#define _AMPHION_VPU_RPC_H
> > +
> > +#include <media/videobuf2-core.h>
> > +#include "vpu_codec.h"
> > +
> > +struct vpu_rpc_buffer_desc {
> > + u32 wptr;
> > + u32 rptr;
> > + u32 start;
> > + u32 end;
> > +};
> > +
> > +struct vpu_shared_addr {
> > + void *iface;
> > + struct vpu_rpc_buffer_desc *cmd_desc;
> > + void *cmd_mem_vir;
> > + struct vpu_rpc_buffer_desc *msg_desc;
> > + void *msg_mem_vir;
> > +
> > + unsigned long boot_addr;
> > + struct vpu_core *core;
> > + void *priv;
> > +};
> > +
> > +struct vpu_rpc_event_header {
> > + u32 index;
> > + u32 id;
> > + u32 num;
> > +};
> > +
> > +struct vpu_rpc_event {
> > + struct vpu_rpc_event_header hdr;
> > + u32 data[128];
> > +};
> > +
> > +struct vpu_iface_ops {
> > + bool (*check_codec)(enum vpu_core_type type);
> > + bool (*check_fmt)(enum vpu_core_type type, u32 pixelfmt);
> > + u32 (*get_data_size)(void);
> > + u32 (*check_memory_region)(dma_addr_t base, dma_addr_t addr, u32 size);
> > + int (*boot_core)(struct vpu_core *core);
> > + int (*shutdown_core)(struct vpu_core *core);
> > + int (*restore_core)(struct vpu_core *core);
> > + int (*get_power_state)(struct vpu_core *core);
> > + int (*on_firmware_loaded)(struct vpu_core *core);
> > + void (*init_rpc)(struct vpu_shared_addr *shared,
> > + struct vpu_buffer *rpc, dma_addr_t boot_addr);
> > + void (*set_log_buf)(struct vpu_shared_addr *shared,
> > + struct vpu_buffer *log);
> > + void (*set_system_cfg)(struct vpu_shared_addr *shared,
> > + u32 regs_base, void __iomem *regs, u32 index);
> > + void (*set_stream_cfg)(struct vpu_shared_addr *shared, u32 index);
> > + u32 (*get_version)(struct vpu_shared_addr *shared);
> > + u32 (*get_max_instance_count)(struct vpu_shared_addr *shared);
> > + int (*get_stream_buffer_size)(struct vpu_shared_addr *shared);
> > + int (*send_cmd_buf)(struct vpu_shared_addr *shared,
> > + struct vpu_rpc_event *cmd);
> > + int (*receive_msg_buf)(struct vpu_shared_addr *shared,
> > + struct vpu_rpc_event *msg);
> > + int (*pack_cmd)(struct vpu_rpc_event *pkt, u32 index, u32 id, void *data);
> > + int (*convert_msg_id)(u32 msg_id);
> > + int (*unpack_msg_data)(struct vpu_rpc_event *pkt, void *data);
> > + int (*input_frame)(struct vpu_shared_addr *shared,
> > + struct vpu_inst *inst, struct vb2_buffer *vb);
> > + int (*config_memory_resource)(struct vpu_shared_addr *shared,
> > + u32 instance,
> > + u32 type,
> > + u32 index,
> > + struct vpu_buffer *buf);
> > + int (*config_stream_buffer)(struct vpu_shared_addr *shared,
> > + u32 instance,
> > + struct vpu_buffer *buf);
> > + int (*update_stream_buffer)(struct vpu_shared_addr *shared,
> > + u32 instance, u32 ptr, bool write);
> > + int (*get_stream_buffer_desc)(struct vpu_shared_addr *shared,
> > + u32 instance,
> > + struct vpu_rpc_buffer_desc
> *desc);
> > + int (*set_encode_params)(struct vpu_shared_addr *shared,
> > + u32 instance,
> > + struct vpu_encode_params *params, u32 update);
> > + int (*set_decode_params)(struct vpu_shared_addr *shared,
> > + u32 instance,
> > + struct vpu_decode_params *params, u32 update);
> > + int (*add_scode)(struct vpu_shared_addr *shared,
> > + u32 instance,
> > + struct vpu_buffer *stream_buffer,
> > + u32 pixelformat,
> > + u32 scode_type);
> > + int (*pre_send_cmd)(struct vpu_shared_addr *shared, u32 instance);
> > + int (*post_send_cmd)(struct vpu_shared_addr *shared, u32 instance);
> > + int (*init_instance)(struct vpu_shared_addr *shared, u32 instance);
> > +};
> > +
> > +enum {
> > + VPU_CORE_MEMORY_INVALID = 0,
> > + VPU_CORE_MEMORY_CACHED,
> > + VPU_CORE_MEMORY_UNCACHED
> > +};
> > +
> > +struct vpu_rpc_region_t {
> > + dma_addr_t start;
> > + dma_addr_t end;
> > + dma_addr_t type;
> > +};
> > +
> > +struct vpu_iface_ops *vpu_core_get_iface(struct vpu_core *core);
> > +struct vpu_iface_ops *vpu_inst_get_iface(struct vpu_inst *inst);
> > +u32 vpu_iface_check_memory_region(struct vpu_core *core, dma_addr_t addr, u32 size);
> > +
> > +static inline bool vpu_iface_check_codec(struct vpu_core *core)
> > +{
> > + struct vpu_iface_ops *ops = vpu_core_get_iface(core);
> > +
> > + if (ops && ops->check_codec)
> > + return ops->check_codec(core->type);
> > +
> > + return true;
> > +}
> > +
> > +static inline bool vpu_iface_check_format(struct vpu_inst *inst, u32 pixelfmt)
> > +{
> > + struct vpu_iface_ops *ops = vpu_inst_get_iface(inst);
> > +
> > + if (ops && ops->check_fmt)
> > + return ops->check_fmt(inst->type, pixelfmt);
> > +
> > + return true;
> > +}
> > +
> > +static inline int vpu_iface_boot_core(struct vpu_core *core)
> > +{
> > + struct vpu_iface_ops *ops = vpu_core_get_iface(core);
> > +
> > + if (ops && ops->boot_core)
> > + return ops->boot_core(core);
> > + return 0;
> > +}
> > +
> > +static inline int vpu_iface_get_power_state(struct vpu_core *core)
> > +{
> > + struct vpu_iface_ops *ops = vpu_core_get_iface(core);
> > +
> > + if (ops && ops->get_power_state)
> > + return ops->get_power_state(core);
> > + return 1;
> > +}
> > +
> > +static inline int vpu_iface_shutdown_core(struct vpu_core *core)
> > +{
> > + struct vpu_iface_ops *ops = vpu_core_get_iface(core);
> > +
> > + if (ops && ops->shutdown_core)
> > + return ops->shutdown_core(core);
> > + return 0;
> > +}
> > +
> > +static inline int vpu_iface_restore_core(struct vpu_core *core)
> > +{
> > + struct vpu_iface_ops *ops = vpu_core_get_iface(core);
> > +
> > + if (ops && ops->restore_core)
> > + return ops->restore_core(core);
> > + return 0;
> > +}
> > +
> > +static inline int vpu_iface_on_firmware_loaded(struct vpu_core *core)
> > +{
> > + struct vpu_iface_ops *ops = vpu_core_get_iface(core);
> > +
> > + if (ops && ops->on_firmware_loaded)
> > + return ops->on_firmware_loaded(core);
> > +
> > + return 0;
> > +}
> > +
> > +static inline u32 vpu_iface_get_data_size(struct vpu_core *core)
> > +{
> > + struct vpu_iface_ops *ops = vpu_core_get_iface(core);
> > +
> > + if (!ops || !ops->get_data_size)
> > + return 0;
> > +
> > + return ops->get_data_size();
> > +}
> > +
> > +static inline int vpu_iface_init(struct vpu_core *core,
> > + struct vpu_shared_addr *shared,
> > + struct vpu_buffer *rpc,
> > + dma_addr_t boot_addr)
> > +{
> > + struct vpu_iface_ops *ops = vpu_core_get_iface(core);
> > +
> > + if (!ops || !ops->init_rpc)
> > + return -EINVAL;
> > +
> > + ops->init_rpc(shared, rpc, boot_addr);
> > + core->iface = shared;
> > + shared->core = core;
> > + if (rpc->bytesused > rpc->length)
> > + return -ENOSPC;
> > + return 0;
> > +}
> > +
> > +static inline int vpu_iface_set_log_buf(struct vpu_core *core,
> > + struct vpu_buffer *log)
> > +{
> > + struct vpu_iface_ops *ops = vpu_core_get_iface(core);
> > +
> > + if (!ops)
> > + return -EINVAL;
> > +
> > + if (ops->set_log_buf)
> > + ops->set_log_buf(core->iface, log);
> > +
> > + return 0;
> > +}
> > +
> > +static inline int vpu_iface_config_system(struct vpu_core *core,
> > + u32 regs_base, void __iomem *regs)
> > +{
> > + struct vpu_iface_ops *ops = vpu_core_get_iface(core);
> > +
> > + if (!ops)
> > + return -EINVAL;
> > + if (ops->set_system_cfg)
> > + ops->set_system_cfg(core->iface, regs_base, regs, core->id);
> > +
> > + return 0;
> > +}
> > +
> > +static inline int vpu_iface_get_stream_buffer_size(struct vpu_core *core)
> > +{
> > + struct vpu_iface_ops *ops = vpu_core_get_iface(core);
> > +
> > + if (!ops || !ops->get_stream_buffer_size)
> > + return 0;
> > +
> > + return ops->get_stream_buffer_size(core->iface);
> > +}
> > +
> > +static inline int vpu_iface_config_stream(struct vpu_inst *inst)
> > +{
> > + struct vpu_iface_ops *ops = vpu_core_get_iface(inst->core);
> > +
> > + WARN_ON(inst->id < 0);
> > + if (!ops)
> > + return -EINVAL;
> > + if (ops->set_stream_cfg)
> > + ops->set_stream_cfg(inst->core->iface, inst->id);
> > + return 0;
> > +}
> > +
> > +static inline int vpu_iface_send_cmd(struct vpu_core *core, struct vpu_rpc_event *cmd)
> > +{
> > + struct vpu_iface_ops *ops = vpu_core_get_iface(core);
> > +
> > + if (!ops || !ops->send_cmd_buf)
> > + return -EINVAL;
> > +
> > + return ops->send_cmd_buf(core->iface, cmd);
> > +}
> > +
> > +static inline int vpu_iface_receive_msg(struct vpu_core *core, struct vpu_rpc_event *msg)
> > +{
> > + struct vpu_iface_ops *ops = vpu_core_get_iface(core);
> > +
> > + if (!ops || !ops->receive_msg_buf)
> > + return -EINVAL;
> > +
> > + return ops->receive_msg_buf(core->iface, msg);
> > +}
> > +
> > +static inline int vpu_iface_pack_cmd(struct vpu_core *core,
> > + struct vpu_rpc_event *pkt,
> > + u32 index, u32 id, void *data)
> > +{
> > + struct vpu_iface_ops *ops = vpu_core_get_iface(core);
> > +
> > + if (!ops || !ops->pack_cmd)
> > + return -EINVAL;
> > + return ops->pack_cmd(pkt, index, id, data);
> > +}
> > +
> > +static inline int vpu_iface_convert_msg_id(struct vpu_core *core, u32 msg_id)
> > +{
> > + struct vpu_iface_ops *ops = vpu_core_get_iface(core);
> > +
> > + if (!ops || !ops->convert_msg_id)
> > + return -EINVAL;
> > +
> > + return ops->convert_msg_id(msg_id);
> > +}
> > +
> > +static inline int vpu_iface_unpack_msg_data(struct vpu_core *core,
> > + struct vpu_rpc_event *pkt, void *data)
> > +{
> > + struct vpu_iface_ops *ops = vpu_core_get_iface(core);
> > +
> > + if (!ops || !ops->unpack_msg_data)
> > + return -EINVAL;
> > +
> > + return ops->unpack_msg_data(pkt, data);
> > +}
> > +
> > +static inline int vpu_iface_input_frame(struct vpu_inst *inst,
> > + struct vb2_buffer *vb)
> > +{
> > + struct vpu_iface_ops *ops = vpu_core_get_iface(inst->core);
> > +
> > + if (!ops || !ops->input_frame)
> > + return -EINVAL;
> > +
> > + return ops->input_frame(inst->core->iface, inst, vb);
> > +}
> > +
> > +static inline int vpu_iface_config_memory_resource(struct vpu_inst *inst,
> > + u32 type, u32 index, struct vpu_buffer *buf)
> > +{
> > + struct vpu_iface_ops *ops = vpu_core_get_iface(inst->core);
> > +
> > + WARN_ON(inst->id < 0);
> > + if (!ops || !ops->config_memory_resource)
> > + return -EINVAL;
> > +
> > + return ops->config_memory_resource(inst->core->iface,
> > + inst->id,
> > + type, index, buf);
> > +}
> > +
> > +static inline int vpu_iface_config_stream_buffer(struct vpu_inst *inst,
> > + struct vpu_buffer *buf)
> > +{
> > + struct vpu_iface_ops *ops = vpu_core_get_iface(inst->core);
> > +
> > + WARN_ON(inst->id < 0);
> > + if (!ops || !ops->config_stream_buffer)
> > + return -EINVAL;
> > +
> > + return ops->config_stream_buffer(inst->core->iface, inst->id, buf);
> > +}
> > +
> > +static inline int vpu_iface_update_stream_buffer(struct vpu_inst *inst,
> > + u32 ptr, bool write)
> > +{
> > + struct vpu_iface_ops *ops = vpu_core_get_iface(inst->core);
> > +
> > + WARN_ON(inst->id < 0);
> > + if (!ops || !ops->update_stream_buffer)
> > + return -EINVAL;
> > +
> > + return ops->update_stream_buffer(inst->core->iface, inst->id, ptr, write);
> > +}
> > +
> > +static inline int vpu_iface_get_stream_buffer_desc(struct vpu_inst *inst,
> > + struct vpu_rpc_buffer_desc *desc)
> > +{
> > + struct vpu_iface_ops *ops = vpu_core_get_iface(inst->core);
> > +
> > + WARN_ON(inst->id < 0);
> > + if (!ops || !ops->get_stream_buffer_desc)
> > + return -EINVAL;
> > +
> > + if (!desc)
> > + return 0;
> > +
> > + return ops->get_stream_buffer_desc(inst->core->iface, inst->id, desc);
> > +}
> > +
> > +static inline u32 vpu_iface_get_version(struct vpu_core *core)
> > +{
> > + struct vpu_iface_ops *ops = vpu_core_get_iface(core);
> > +
> > + if (!ops || !ops->get_version)
> > + return 0;
> > +
> > + return ops->get_version(core->iface);
> > +}
> > +
> > +static inline u32 vpu_iface_get_max_instance_count(struct vpu_core *core)
> > +{
> > + struct vpu_iface_ops *ops = vpu_core_get_iface(core);
> > +
> > + if (!ops || !ops->get_max_instance_count)
> > + return 0;
> > +
> > + return ops->get_max_instance_count(core->iface);
> > +}
> > +
> > +static inline int vpu_iface_set_encode_params(struct vpu_inst *inst,
> > + struct vpu_encode_params *params, u32 update)
> > +{
> > + struct vpu_iface_ops *ops = vpu_core_get_iface(inst->core);
> > +
> > + WARN_ON(inst->id < 0);
> > + if (!ops || !ops->set_encode_params)
> > + return -EINVAL;
> > +
> > + return ops->set_encode_params(inst->core->iface, inst->id, params, update);
> > +}
> > +
> > +static inline int vpu_iface_set_decode_params(struct vpu_inst *inst,
> > + struct vpu_decode_params *params, u32 update)
> > +{
> > + struct vpu_iface_ops *ops = vpu_core_get_iface(inst->core);
> > +
> > + WARN_ON(inst->id < 0);
> > + if (!ops || !ops->set_decode_params)
> > + return -EINVAL;
> > +
> > + return ops->set_decode_params(inst->core->iface, inst->id, params, update);
> > +}
> > +
> > +static inline int vpu_iface_add_scode(struct vpu_inst *inst, u32 scode_type)
> > +{
> > + struct vpu_iface_ops *ops = vpu_core_get_iface(inst->core);
> > +
> > + WARN_ON(inst->id < 0);
> > + if (!ops || !ops->add_scode)
> > + return -EINVAL;
> > +
> > + return ops->add_scode(inst->core->iface, inst->id,
> > + &inst->stream_buffer,
> > + inst->out_format.pixfmt,
> > + scode_type);
> > +}
> > +
> > +static inline int vpu_iface_pre_send_cmd(struct vpu_inst *inst)
> > +{
> > + struct vpu_iface_ops *ops = vpu_core_get_iface(inst->core);
> > +
> > + WARN_ON(inst->id < 0);
> > + if (ops && ops->pre_send_cmd)
> > + return ops->pre_send_cmd(inst->core->iface, inst->id);
> > + return 0;
> > +}
> > +
> > +static inline int vpu_iface_post_send_cmd(struct vpu_inst *inst)
> > +{
> > + struct vpu_iface_ops *ops = vpu_core_get_iface(inst->core);
> > +
> > + WARN_ON(inst->id < 0);
> > + if (ops && ops->post_send_cmd)
> > + return ops->post_send_cmd(inst->core->iface, inst->id);
> > + return 0;
> > +}
> > +
> > +static inline int vpu_iface_init_instance(struct vpu_inst *inst)
> > +{
> > + struct vpu_iface_ops *ops = vpu_core_get_iface(inst->core);
> > +
> > + WARN_ON(inst->id < 0);
> > + if (ops && ops->init_instance)
> > + return ops->init_instance(inst->core->iface, inst->id);
> > +
> > + return 0;
> > +}
> > +
> > +#endif
> >
>
> Regards,
>
> Hans

2021-12-02 10:29:55

by Hans Verkuil

[permalink] [raw]
Subject: Re: [PATCH v13 06/13] media: amphion: add vpu v4l2 m2m support

On 30/11/2021 10:48, Ming Qian wrote:
> vpu_v4l2.c implements the v4l2 m2m driver methods.
> vpu_helpers.c implements the common helper functions
> vpu_color.c converts the v4l2 colorspace with iso

iso?

>
> Signed-off-by: Ming Qian <[email protected]>
> Signed-off-by: Shijie Qin <[email protected]>
> Signed-off-by: Zhou Peng <[email protected]>
> Reported-by: kernel test robot <[email protected]>
> ---
> drivers/media/platform/amphion/vpu_color.c | 190 +++++
> drivers/media/platform/amphion/vpu_helpers.c | 436 ++++++++++++
> drivers/media/platform/amphion/vpu_helpers.h | 71 ++
> drivers/media/platform/amphion/vpu_v4l2.c | 703 +++++++++++++++++++
> drivers/media/platform/amphion/vpu_v4l2.h | 54 ++
> 5 files changed, 1454 insertions(+)
> create mode 100644 drivers/media/platform/amphion/vpu_color.c
> create mode 100644 drivers/media/platform/amphion/vpu_helpers.c
> create mode 100644 drivers/media/platform/amphion/vpu_helpers.h
> create mode 100644 drivers/media/platform/amphion/vpu_v4l2.c
> create mode 100644 drivers/media/platform/amphion/vpu_v4l2.h
>
> diff --git a/drivers/media/platform/amphion/vpu_color.c b/drivers/media/platform/amphion/vpu_color.c
> new file mode 100644
> index 000000000000..c3f45dd9ee30
> --- /dev/null
> +++ b/drivers/media/platform/amphion/vpu_color.c
> @@ -0,0 +1,190 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Copyright 2020-2021 NXP
> + */
> +
> +#include <linux/init.h>
> +#include <linux/device.h>
> +#include <linux/ioctl.h>
> +#include <linux/list.h>
> +#include <linux/module.h>
> +#include <linux/kernel.h>
> +#include <linux/slab.h>
> +#include <linux/delay.h>
> +#include <linux/types.h>
> +#include <media/v4l2-device.h>
> +#include "vpu.h"
> +#include "vpu_helpers.h"
> +
> +static const u8 colorprimaries[] = {
> + 0,
> + V4L2_COLORSPACE_REC709, /*Rec. ITU-R BT.709-6*/
> + 0,
> + 0,
> + V4L2_COLORSPACE_470_SYSTEM_M, /*Rec. ITU-R BT.470-6 System M*/
> + V4L2_COLORSPACE_470_SYSTEM_BG,/*Rec. ITU-R BT.470-6 System B, G*/
> + V4L2_COLORSPACE_SMPTE170M, /*SMPTE170M*/
> + V4L2_COLORSPACE_SMPTE240M, /*SMPTE240M*/
> + 0, /*Generic film*/
> + V4L2_COLORSPACE_BT2020, /*Rec. ITU-R BT.2020-2*/
> + 0, /*SMPTE ST 428-1*/

Add space after /* and before */

> +};
> +
> +static const u8 colortransfers[] = {
> + 0,
> + V4L2_XFER_FUNC_709, /*Rec. ITU-R BT.709-6*/
> + 0,
> + 0,
> + 0, /*Rec. ITU-R BT.470-6 System M*/
> + 0, /*Rec. ITU-R BT.470-6 System B, G*/
> + V4L2_XFER_FUNC_709, /*SMPTE170M*/
> + V4L2_XFER_FUNC_SMPTE240M,/*SMPTE240M*/
> + V4L2_XFER_FUNC_NONE, /*Linear transfer characteristics*/
> + 0,
> + 0,
> + 0, /*IEC 61966-2-4*/
> + 0, /*Rec. ITU-R BT.1361-0 extended colour gamut*/
> + V4L2_XFER_FUNC_SRGB, /*IEC 61966-2-1 sRGB or sYCC*/
> + V4L2_XFER_FUNC_709, /*Rec. ITU-R BT.2020-2 (10 bit system)*/
> + V4L2_XFER_FUNC_709, /*Rec. ITU-R BT.2020-2 (12 bit system)*/
> + V4L2_XFER_FUNC_SMPTE2084,/*SMPTE ST 2084*/
> + 0, /*SMPTE ST 428-1*/
> + 0 /*Rec. ITU-R BT.2100-0 hybrid log-gamma (HLG)*/

Ditto here and elsewhere.

> +};
> +
> +static const u8 colormatrixcoefs[] = {
> + 0,
> + V4L2_YCBCR_ENC_709, /*Rec. ITU-R BT.709-6*/
> + 0,
> + 0,
> + 0, /*Title 47 Code of Federal Regulations*/
> + V4L2_YCBCR_ENC_601, /*Rec. ITU-R BT.601-7 625*/
> + V4L2_YCBCR_ENC_601, /*Rec. ITU-R BT.601-7 525*/
> + V4L2_YCBCR_ENC_SMPTE240M, /*SMPTE240M*/
> + 0,
> + V4L2_YCBCR_ENC_BT2020, /*Rec. ITU-R BT.2020-2*/
> + V4L2_YCBCR_ENC_BT2020_CONST_LUM /*Rec. ITU-R BT.2020-2 constant*/
> +};
> +
> +u32 vpu_color_cvrt_primaries_v2i(u32 primaries)
> +{
> + return VPU_ARRAY_FIND(colorprimaries, primaries);
> +}
> +
> +u32 vpu_color_cvrt_primaries_i2v(u32 primaries)
> +{
> + return VPU_ARRAY_AT(colorprimaries, primaries);
> +}
> +
> +u32 vpu_color_cvrt_transfers_v2i(u32 transfers)
> +{
> + return VPU_ARRAY_FIND(colortransfers, transfers);
> +}
> +
> +u32 vpu_color_cvrt_transfers_i2v(u32 transfers)
> +{
> + return VPU_ARRAY_AT(colortransfers, transfers);
> +}
> +
> +u32 vpu_color_cvrt_matrix_v2i(u32 matrix)
> +{
> + return VPU_ARRAY_FIND(colormatrixcoefs, matrix);
> +}
> +
> +u32 vpu_color_cvrt_matrix_i2v(u32 matrix)
> +{
> + return VPU_ARRAY_AT(colormatrixcoefs, matrix);
> +}
> +
> +u32 vpu_color_cvrt_full_range_v2i(u32 full_range)
> +{
> + return (full_range == V4L2_QUANTIZATION_FULL_RANGE);
> +}
> +
> +u32 vpu_color_cvrt_full_range_i2v(u32 full_range)
> +{
> + if (full_range)
> + return V4L2_QUANTIZATION_FULL_RANGE;
> +
> + return V4L2_QUANTIZATION_LIM_RANGE;
> +}
> +
> +int vpu_color_check_primaries(u32 primaries)
> +{
> + return vpu_color_cvrt_primaries_v2i(primaries) ? 0 : -EINVAL;
> +}
> +
> +int vpu_color_check_transfers(u32 transfers)
> +{
> + return vpu_color_cvrt_transfers_v2i(transfers) ? 0 : -EINVAL;
> +}
> +
> +int vpu_color_check_matrix(u32 matrix)
> +{
> + return vpu_color_cvrt_matrix_v2i(matrix) ? 0 : -EINVAL;
> +}
> +
> +int vpu_color_check_full_range(u32 full_range)
> +{
> + int ret = -EINVAL;
> +
> + switch (full_range) {
> + case V4L2_QUANTIZATION_FULL_RANGE:
> + case V4L2_QUANTIZATION_LIM_RANGE:
> + ret = 0;
> + break;
> + default:
> + break;
> +
> + }
> +
> + return ret;
> +}
> +
> +int vpu_color_get_default(u32 primaries,
> + u32 *ptransfers, u32 *pmatrix, u32 *pfull_range)
> +{
> + u32 transfers;
> + u32 matrix;
> + u32 full_range;
> +
> + switch (primaries) {
> + case V4L2_COLORSPACE_REC709:
> + transfers = V4L2_XFER_FUNC_709;
> + matrix = V4L2_YCBCR_ENC_709;
> + full_range = V4L2_QUANTIZATION_LIM_RANGE;
> + break;
> + case V4L2_COLORSPACE_470_SYSTEM_M:
> + case V4L2_COLORSPACE_470_SYSTEM_BG:
> + case V4L2_COLORSPACE_SMPTE170M:
> + transfers = V4L2_XFER_FUNC_709;
> + matrix = V4L2_YCBCR_ENC_601;
> + full_range = V4L2_QUANTIZATION_LIM_RANGE;
> + break;
> + case V4L2_COLORSPACE_SMPTE240M:
> + transfers = V4L2_XFER_FUNC_SMPTE240M;
> + matrix = V4L2_YCBCR_ENC_SMPTE240M;
> + full_range = V4L2_QUANTIZATION_LIM_RANGE;
> + break;
> + case V4L2_COLORSPACE_BT2020:
> + transfers = V4L2_XFER_FUNC_709;
> + matrix = V4L2_YCBCR_ENC_BT2020;
> + full_range = V4L2_QUANTIZATION_LIM_RANGE;
> + break;
> + default:
> + transfers = V4L2_XFER_FUNC_709;
> + matrix = V4L2_YCBCR_ENC_709;
> + full_range = V4L2_QUANTIZATION_LIM_RANGE;
> + break;

You can use V4L2_MAP_XFER_FUNC_DEFAULT and V4L2_MAP_YCBCR_ENC_DEFAULT
here.

Do you even need to provide the quantization range? Isn't it always lim range
anyway?

> + }
> +
> + if (ptransfers)
> + *ptransfers = transfers;
> + if (pmatrix)
> + *pmatrix = matrix;
> + if (pfull_range)
> + *pfull_range = full_range;
> +
> +
> + return 0;
> +}
> diff --git a/drivers/media/platform/amphion/vpu_helpers.c b/drivers/media/platform/amphion/vpu_helpers.c
> new file mode 100644
> index 000000000000..4b9fb82f24fd
> --- /dev/null
> +++ b/drivers/media/platform/amphion/vpu_helpers.c
> @@ -0,0 +1,436 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Copyright 2020-2021 NXP
> + */
> +
> +#include <linux/init.h>
> +#include <linux/interconnect.h>
> +#include <linux/ioctl.h>
> +#include <linux/list.h>
> +#include <linux/kernel.h>
> +#include <linux/module.h>
> +#include <linux/platform_device.h>
> +#include "vpu.h"
> +#include "vpu_core.h"
> +#include "vpu_rpc.h"
> +#include "vpu_helpers.h"
> +
> +int vpu_helper_find_in_array_u8(const u8 *array, u32 size, u32 x)
> +{
> + int i;
> +
> + for (i = 0; i < size; i++) {
> + if (array[i] == x)
> + return i;
> + }
> +
> + return 0;
> +}
> +
> +bool vpu_helper_check_type(struct vpu_inst *inst, u32 type)
> +{
> + const struct vpu_format *pfmt;
> +
> + for (pfmt = inst->formats; pfmt->pixfmt; pfmt++) {
> + if (!vpu_iface_check_format(inst, pfmt->pixfmt))
> + continue;
> + if (pfmt->type == type)
> + return true;
> + }
> +
> + return false;
> +}
> +
> +const struct vpu_format *vpu_helper_find_format(struct vpu_inst *inst, u32 type, u32 pixelfmt)
> +{
> + const struct vpu_format *pfmt;
> +
> + if (!inst || !inst->formats)
> + return NULL;
> +
> + if (!vpu_iface_check_format(inst, pixelfmt))
> + return NULL;
> +
> + for (pfmt = inst->formats; pfmt->pixfmt; pfmt++) {
> + if (pfmt->pixfmt == pixelfmt && (!type || type == pfmt->type))
> + return pfmt;
> + }
> +
> + return NULL;
> +}
> +
> +const struct vpu_format *vpu_helper_enum_format(struct vpu_inst *inst, u32 type, int index)
> +{
> + const struct vpu_format *pfmt;
> + int i = 0;
> +
> + if (!inst || !inst->formats)
> + return NULL;
> +
> + for (pfmt = inst->formats; pfmt->pixfmt; pfmt++) {
> + if (!vpu_iface_check_format(inst, pfmt->pixfmt))
> + continue;
> +
> + if (pfmt->type == type) {
> + if (index == i)
> + return pfmt;
> + i++;
> + }
> + }
> +
> + return NULL;
> +}
> +
> +u32 vpu_helper_valid_frame_width(struct vpu_inst *inst, u32 width)
> +{
> + const struct vpu_core_resources *res;
> +
> + if (!inst)
> + return width;
> +
> + res = vpu_get_resource(inst);
> + if (!res)
> + return width;
> + if (res->max_width)
> + width = clamp(width, res->min_width, res->max_width);
> + if (res->step_width)
> + width = ALIGN(width, res->step_width);
> +
> + return width;
> +}
> +
> +u32 vpu_helper_valid_frame_height(struct vpu_inst *inst, u32 height)
> +{
> + const struct vpu_core_resources *res;
> +
> + if (!inst)
> + return height;
> +
> + res = vpu_get_resource(inst);
> + if (!res)
> + return height;
> + if (res->max_height)
> + height = clamp(height, res->min_height, res->max_height);
> + if (res->step_height)
> + height = ALIGN(height, res->step_height);
> +
> + return height;
> +}
> +
> +static u32 get_nv12_plane_size(u32 width, u32 height, int plane_no,
> + u32 stride, u32 interlaced, u32 *pbl)
> +{
> + u32 bytesperline;
> + u32 size = 0;
> +
> + bytesperline = ALIGN(width, stride);
> + if (pbl)
> + bytesperline = max(bytesperline, *pbl);
> + height = ALIGN(height, 2);
> + if (plane_no == 0)
> + size = bytesperline * height;
> + else if (plane_no == 1)
> + size = bytesperline * height >> 1;
> + if (pbl)
> + *pbl = bytesperline;
> +
> + return size;
> +}
> +
> +static u32 get_tiled_8l128_plane_size(u32 fmt, u32 width, u32 height, int plane_no,
> + u32 stride, u32 interlaced, u32 *pbl)
> +{
> + u32 ws = 3;
> + u32 hs = 7;
> + u32 bitdepth = 8;
> + u32 bytesperline;
> + u32 size = 0;
> +
> + if (interlaced)
> + hs++;
> + if (fmt == V4L2_PIX_FMT_NV12MT_10BE_8L128)
> + bitdepth = 10;
> + bytesperline = DIV_ROUND_UP(width * bitdepth, BITS_PER_BYTE);
> + bytesperline = ALIGN(bytesperline, 1 << ws);
> + bytesperline = ALIGN(bytesperline, stride);
> + if (pbl)
> + bytesperline = max(bytesperline, *pbl);
> + height = ALIGN(height, 1 << hs);
> + if (plane_no == 0)
> + size = bytesperline * height;
> + else if (plane_no == 1)
> + size = (bytesperline * ALIGN(height, 1 << (hs + 1))) >> 1;
> + if (pbl)
> + *pbl = bytesperline;
> +
> + return size;
> +}
> +
> +static u32 get_default_plane_size(u32 width, u32 height, int plane_no,
> + u32 stride, u32 interlaced, u32 *pbl)
> +{
> + u32 bytesperline;
> + u32 size = 0;
> +
> + bytesperline = ALIGN(width, stride);
> + if (pbl)
> + bytesperline = max(bytesperline, *pbl);
> + if (plane_no == 0)
> + size = bytesperline * height;
> + if (pbl)
> + *pbl = bytesperline;
> +
> + return size;
> +}
> +
> +u32 vpu_helper_get_plane_size(u32 fmt, u32 w, u32 h, int plane_no,
> + u32 stride, u32 interlaced, u32 *pbl)
> +{
> + switch (fmt) {
> + case V4L2_PIX_FMT_NV12M:
> + return get_nv12_plane_size(w, h, plane_no, stride, interlaced, pbl);
> + case V4L2_PIX_FMT_NV12MT_8L128:
> + case V4L2_PIX_FMT_NV12MT_10BE_8L128:
> + return get_tiled_8l128_plane_size(fmt, w, h, plane_no, stride, interlaced, pbl);
> + default:
> + return get_default_plane_size(w, h, plane_no, stride, interlaced, pbl);
> + }
> +}
> +
> +u32 vpu_helper_copy_from_stream_buffer(struct vpu_buffer *stream_buffer,
> + u32 *rptr, u32 size, void *dst)
> +{
> + u32 offset;
> + u32 start;
> + u32 end;
> + void *virt;
> +
> + if (!stream_buffer || !rptr || !dst)
> + return -EINVAL;
> +
> + if (!size)
> + return 0;
> +
> + offset = *rptr;
> + start = stream_buffer->phys;
> + end = start + stream_buffer->length;
> + virt = stream_buffer->virt;
> +
> + if (offset < start || offset > end)
> + return -EINVAL;
> +
> + if (offset + size <= end) {
> + memcpy(dst, virt + (offset - start), size);
> + } else {
> + memcpy(dst, virt + (offset - start), end - offset);
> + memcpy(dst + end - offset, virt, size + offset - end);
> + }
> +
> + *rptr = vpu_helper_step_walk(stream_buffer, offset, size);
> + return size;
> +}
> +
> +u32 vpu_helper_copy_to_stream_buffer(struct vpu_buffer *stream_buffer,
> + u32 *wptr, u32 size, void *src)
> +{
> + u32 offset;
> + u32 start;
> + u32 end;
> + void *virt;
> +
> + if (!stream_buffer || !wptr || !src)
> + return -EINVAL;
> +
> + if (!size)
> + return 0;
> +
> + offset = *wptr;
> + start = stream_buffer->phys;
> + end = start + stream_buffer->length;
> + virt = stream_buffer->virt;
> + if (offset < start || offset > end)
> + return -EINVAL;
> +
> + if (offset + size <= end) {
> + memcpy(virt + (offset - start), src, size);
> + } else {
> + memcpy(virt + (offset - start), src, end - offset);
> + memcpy(virt, src + end - offset, size + offset - end);
> + }
> +
> + *wptr = vpu_helper_step_walk(stream_buffer, offset, size);
> +
> + return size;
> +}
> +
> +u32 vpu_helper_memset_stream_buffer(struct vpu_buffer *stream_buffer,
> + u32 *wptr, u8 val, u32 size)
> +{
> + u32 offset;
> + u32 start;
> + u32 end;
> + void *virt;
> +
> + if (!stream_buffer || !wptr)
> + return -EINVAL;
> +
> + if (!size)
> + return 0;
> +
> + offset = *wptr;
> + start = stream_buffer->phys;
> + end = start + stream_buffer->length;
> + virt = stream_buffer->virt;
> + if (offset < start || offset > end)
> + return -EINVAL;
> +
> + if (offset + size <= end) {
> + memset(virt + (offset - start), val, size);
> + } else {
> + memset(virt + (offset - start), val, end - offset);
> + memset(virt, val, size + offset - end);
> + }
> +
> + offset += size;
> + if (offset >= end)
> + offset -= stream_buffer->length;
> +
> + *wptr = offset;
> +
> + return size;
> +}
> +
> +u32 vpu_helper_get_free_space(struct vpu_inst *inst)
> +{
> + struct vpu_rpc_buffer_desc desc;
> +
> + if (vpu_iface_get_stream_buffer_desc(inst, &desc))
> + return 0;
> +
> + if (desc.rptr > desc.wptr)
> + return desc.rptr - desc.wptr;
> + else if (desc.rptr < desc.wptr)
> + return (desc.end - desc.start + desc.rptr - desc.wptr);
> + else
> + return desc.end - desc.start;
> +}
> +
> +u32 vpu_helper_get_used_space(struct vpu_inst *inst)
> +{
> + struct vpu_rpc_buffer_desc desc;
> +
> + if (vpu_iface_get_stream_buffer_desc(inst, &desc))
> + return 0;
> +
> + if (desc.wptr > desc.rptr)
> + return desc.wptr - desc.rptr;
> + else if (desc.wptr < desc.rptr)
> + return (desc.end - desc.start + desc.wptr - desc.rptr);
> + else
> + return 0;
> +}
> +
> +int vpu_helper_g_volatile_ctrl(struct v4l2_ctrl *ctrl)
> +{
> + struct vpu_inst *inst = ctrl_to_inst(ctrl);
> +
> + switch (ctrl->id) {
> + case V4L2_CID_MIN_BUFFERS_FOR_CAPTURE:
> + ctrl->val = inst->min_buffer_cap;
> + break;
> + case V4L2_CID_MIN_BUFFERS_FOR_OUTPUT:
> + ctrl->val = inst->min_buffer_out;
> + break;
> + default:
> + return -EINVAL;
> + }
> +
> + return 0;
> +}
> +
> +u32 vpu_helper_calc_coprime(u32 *a, u32 *b)
> +{
> + int m = *a;
> + int n = *b;
> +
> + if (m == 0)
> + return n;
> + if (n == 0)
> + return m;
> +
> + while (n != 0) {
> + int tmp = m % n;
> +
> + m = n;
> + n = tmp;
> + }
> + *a = (*a) / m;
> + *b = (*b) / m;
> +
> + return m;
> +}
> +
> +#define READ_BYTE(buffer, pos) (*(u8 *)((buffer)->virt + ((pos) % buffer->length)))

Add a newline. Also split the define over two lines:

#define READ_BYTE(buffer, pos) \
(*(u8 *)((buffer)->virt + ((pos) % buffer->length)))

> +int vpu_helper_find_startcode(struct vpu_buffer *stream_buffer,
> + u32 pixelformat, u32 offset, u32 bytesused)
> +{
> + u32 start_code;
> + int start_code_size;
> + u32 val = 0;
> + int i;
> + int ret = -EINVAL;
> +
> + if (!stream_buffer || !stream_buffer->virt)
> + return -EINVAL;
> +
> + switch (pixelformat) {
> + case V4L2_PIX_FMT_H264:
> + start_code_size = 4;
> + start_code = 0x00000001;
> + break;
> + default:
> + return 0;
> + }
> +
> + for (i = 0; i < bytesused; i++) {
> + val = (val << 8) | READ_BYTE(stream_buffer, offset + i);
> + if (i < start_code_size - 1)
> + continue;
> + if (val == start_code) {
> + ret = i + 1 - start_code_size;
> + break;
> + }
> + }
> +
> + return ret;
> +}
> +
> +int vpu_find_dst_by_src(struct vpu_pair *pairs, u32 cnt, u32 src)
> +{
> + u32 i;
> +
> + if (!pairs || !cnt)
> + return -EINVAL;
> +
> + for (i = 0; i < cnt; i++) {
> + if (pairs[i].src == src)
> + return pairs[i].dst;
> + }
> +
> + return -EINVAL;
> +}
> +
> +int vpu_find_src_by_dst(struct vpu_pair *pairs, u32 cnt, u32 dst)
> +{
> + u32 i;
> +
> + if (!pairs || !cnt)
> + return -EINVAL;
> +
> + for (i = 0; i < cnt; i++) {
> + if (pairs[i].dst == dst)
> + return pairs[i].src;
> + }
> +
> + return -EINVAL;
> +}
> diff --git a/drivers/media/platform/amphion/vpu_helpers.h b/drivers/media/platform/amphion/vpu_helpers.h
> new file mode 100644
> index 000000000000..65d4451ad8a1
> --- /dev/null
> +++ b/drivers/media/platform/amphion/vpu_helpers.h
> @@ -0,0 +1,71 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * Copyright 2020-2021 NXP
> + */
> +
> +#ifndef _AMPHION_VPU_HELPERS_H
> +#define _AMPHION_VPU_HELPERS_H
> +
> +struct vpu_pair {
> + u32 src;
> + u32 dst;
> +};
> +
> +#define MAKE_TIMESTAMP(s, ns) (((s32)(s) * NSEC_PER_SEC) + (ns))
> +#define VPU_INVALID_TIMESTAMP MAKE_TIMESTAMP(-1, 0)
> +#define VPU_ARRAY_AT(array, i) (((i) < ARRAY_SIZE(array)) ? array[i] : 0)
> +#define VPU_ARRAY_FIND(array, x) vpu_helper_find_in_array_u8(array, ARRAY_SIZE(array), x)
> +
> +int vpu_helper_find_in_array_u8(const u8 *array, u32 size, u32 x);
> +bool vpu_helper_check_type(struct vpu_inst *inst, u32 type);
> +const struct vpu_format *vpu_helper_find_format(struct vpu_inst *inst, u32 type, u32 pixelfmt);
> +const struct vpu_format *vpu_helper_enum_format(struct vpu_inst *inst, u32 type, int index);
> +u32 vpu_helper_valid_frame_width(struct vpu_inst *inst, u32 width);
> +u32 vpu_helper_valid_frame_height(struct vpu_inst *inst, u32 height);
> +u32 vpu_helper_get_plane_size(u32 fmt, u32 width, u32 height, int plane_no,
> + u32 stride, u32 interlaced, u32 *pbl);
> +u32 vpu_helper_copy_from_stream_buffer(struct vpu_buffer *stream_buffer,
> + u32 *rptr, u32 size, void *dst);
> +u32 vpu_helper_copy_to_stream_buffer(struct vpu_buffer *stream_buffer,
> + u32 *wptr, u32 size, void *src);
> +u32 vpu_helper_memset_stream_buffer(struct vpu_buffer *stream_buffer,
> + u32 *wptr, u8 val, u32 size);
> +u32 vpu_helper_get_free_space(struct vpu_inst *inst);
> +u32 vpu_helper_get_used_space(struct vpu_inst *inst);
> +int vpu_helper_g_volatile_ctrl(struct v4l2_ctrl *ctrl);
> +u32 vpu_helper_calc_coprime(u32 *a, u32 *b);
> +void vpu_helper_get_kmp_next(const u8 *pattern, int *next, int size);
> +int vpu_helper_kmp_search(u8 *s, int s_len, const u8 *p, int p_len, int *next);
> +int vpu_helper_kmp_search_in_stream_buffer(struct vpu_buffer *stream_buffer,
> + u32 offset, int bytesused,
> + const u8 *p, int p_len, int *next);
> +int vpu_helper_find_startcode(struct vpu_buffer *stream_buffer,
> + u32 pixelformat, u32 offset, u32 bytesused);
> +
> +static inline u32 vpu_helper_step_walk(struct vpu_buffer *stream_buffer, u32 pos, u32 step)
> +{
> + pos += step;
> + if (pos > stream_buffer->phys + stream_buffer->length)
> + pos -= stream_buffer->length;
> +
> + return pos;
> +}
> +
> +int vpu_color_check_primaries(u32 primaries);
> +int vpu_color_check_transfers(u32 transfers);
> +int vpu_color_check_matrix(u32 matrix);
> +int vpu_color_check_full_range(u32 full_range);
> +u32 vpu_color_cvrt_primaries_v2i(u32 primaries);
> +u32 vpu_color_cvrt_primaries_i2v(u32 primaries);
> +u32 vpu_color_cvrt_transfers_v2i(u32 transfers);
> +u32 vpu_color_cvrt_transfers_i2v(u32 transfers);
> +u32 vpu_color_cvrt_matrix_v2i(u32 matrix);
> +u32 vpu_color_cvrt_matrix_i2v(u32 matrix);
> +u32 vpu_color_cvrt_full_range_v2i(u32 full_range);
> +u32 vpu_color_cvrt_full_range_i2v(u32 full_range);
> +int vpu_color_get_default(u32 primaries,
> + u32 *ptransfers, u32 *pmatrix, u32 *pfull_range);
> +
> +int vpu_find_dst_by_src(struct vpu_pair *pairs, u32 cnt, u32 src);
> +int vpu_find_src_by_dst(struct vpu_pair *pairs, u32 cnt, u32 dst);
> +#endif
> diff --git a/drivers/media/platform/amphion/vpu_v4l2.c b/drivers/media/platform/amphion/vpu_v4l2.c
> new file mode 100644
> index 000000000000..909a94d5aa8a
> --- /dev/null
> +++ b/drivers/media/platform/amphion/vpu_v4l2.c
> @@ -0,0 +1,703 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Copyright 2020-2021 NXP
> + */
> +
> +#include <linux/init.h>
> +#include <linux/interconnect.h>
> +#include <linux/ioctl.h>
> +#include <linux/list.h>
> +#include <linux/kernel.h>
> +#include <linux/module.h>
> +#include <linux/pm_runtime.h>
> +#include <linux/videodev2.h>
> +#include <media/v4l2-device.h>
> +#include <media/v4l2-event.h>
> +#include <media/v4l2-mem2mem.h>
> +#include <media/v4l2-ioctl.h>
> +#include <media/videobuf2-v4l2.h>
> +#include <media/videobuf2-dma-contig.h>
> +#include <media/videobuf2-vmalloc.h>
> +#include "vpu.h"
> +#include "vpu_core.h"
> +#include "vpu_v4l2.h"
> +#include "vpu_msgs.h"
> +#include "vpu_helpers.h"
> +
> +void vpu_inst_lock(struct vpu_inst *inst)
> +{
> + mutex_lock(&inst->lock);
> +}
> +
> +void vpu_inst_unlock(struct vpu_inst *inst)
> +{
> + mutex_unlock(&inst->lock);
> +}
> +
> +dma_addr_t vpu_get_vb_phy_addr(struct vb2_buffer *vb, u32 plane_no)
> +{
> + if (plane_no >= vb->num_planes)
> + return 0;
> + return vb2_dma_contig_plane_dma_addr(vb, plane_no) +
> + vb->planes[plane_no].data_offset;
> +}
> +
> +unsigned int vpu_get_vb_length(struct vb2_buffer *vb, u32 plane_no)
> +{
> + if (plane_no >= vb->num_planes)
> + return 0;
> + return vb2_plane_size(vb, plane_no) - vb->planes[plane_no].data_offset;
> +}
> +
> +void vpu_v4l2_set_error(struct vpu_inst *inst)
> +{
> + struct vb2_queue *src_q = v4l2_m2m_get_src_vq(inst->fh.m2m_ctx);
> + struct vb2_queue *dst_q = v4l2_m2m_get_dst_vq(inst->fh.m2m_ctx);
> +
> + dev_err(inst->dev, "some error occurs in codec\n");
> + if (src_q)
> + src_q->error = 1;
> + if (dst_q)
> + dst_q->error = 1;
> +}
> +
> +int vpu_notify_eos(struct vpu_inst *inst)
> +{
> + const struct v4l2_event ev = {

Can be static.

> + .id = 0,
> + .type = V4L2_EVENT_EOS
> + };
> +
> + vpu_trace(inst->dev, "[%d]\n", inst->id);
> + v4l2_event_queue_fh(&inst->fh, &ev);
> +
> + return 0;
> +}
> +
> +int vpu_notify_source_change(struct vpu_inst *inst)
> +{
> + const struct v4l2_event ev = {
> + .id = 0,
> + .type = V4L2_EVENT_SOURCE_CHANGE,
> + .u.src_change.changes = V4L2_EVENT_SRC_CH_RESOLUTION
> + };

Ditto.

> +
> + vpu_trace(inst->dev, "[%d]\n", inst->id);
> + v4l2_event_queue_fh(&inst->fh, &ev);
> + return 0;
> +}
> +
> +int vpu_set_last_buffer_dequeued(struct vpu_inst *inst)
> +{
> + struct vb2_queue *q;
> +
> + if (!inst || !inst->fh.m2m_ctx)
> + return -EINVAL;
> +
> + q = v4l2_m2m_get_dst_vq(inst->fh.m2m_ctx);
> + if (!list_empty(&q->done_list))
> + return -EINVAL;
> +
> + vpu_trace(inst->dev, "last buffer dequeued\n");
> + q->last_buffer_dequeued = true;
> + wake_up(&q->done_wq);
> + vpu_notify_eos(inst);
> + return 0;
> +}
> +
> +const struct vpu_format *vpu_try_fmt_common(struct vpu_inst *inst,
> + struct v4l2_format *f)
> +{
> + struct v4l2_pix_format_mplane *pixmp = &f->fmt.pix_mp;
> + u32 type = f->type;
> + u32 stride = 1;
> + u32 bytesperline;
> + u32 sizeimage;
> + const struct vpu_format *fmt;
> + const struct vpu_core_resources *res;
> + int i;
> +
> + fmt = vpu_helper_find_format(inst, type, pixmp->pixelformat);
> + if (!fmt) {
> + fmt = vpu_helper_enum_format(inst, type, 0);
> + if (!fmt)
> + return NULL;
> + pixmp->pixelformat = fmt->pixfmt;
> + }
> +
> + res = vpu_get_resource(inst);
> + if (res)
> + stride = res->stride;
> + if (pixmp->width)
> + pixmp->width = vpu_helper_valid_frame_width(inst, pixmp->width);
> + if (pixmp->height)
> + pixmp->height = vpu_helper_valid_frame_height(inst, pixmp->height);
> + pixmp->flags = fmt->flags;
> + pixmp->num_planes = fmt->num_planes;
> + if (pixmp->field == V4L2_FIELD_ANY)
> + pixmp->field = V4L2_FIELD_NONE;
> + for (i = 0; i < pixmp->num_planes; i++) {
> + bytesperline = max_t(s32, pixmp->plane_fmt[i].bytesperline, 0);
> + sizeimage = vpu_helper_get_plane_size(pixmp->pixelformat,
> + pixmp->width, pixmp->height, i, stride,
> + pixmp->field == V4L2_FIELD_INTERLACED ? 1 : 0,
> + &bytesperline);
> + sizeimage = max_t(s32, pixmp->plane_fmt[i].sizeimage, sizeimage);
> + pixmp->plane_fmt[i].bytesperline = bytesperline;
> + pixmp->plane_fmt[i].sizeimage = sizeimage;
> + }
> +
> + return fmt;
> +}
> +
> +static bool vpu_check_ready(struct vpu_inst *inst, u32 type)
> +{
> + if (!inst)
> + return false;
> + if (inst->state == VPU_CODEC_STATE_DEINIT || inst->id < 0)
> + return false;
> + if (!inst->ops->check_ready)
> + return true;
> + return call_vop(inst, check_ready, type);
> +}
> +
> +int vpu_process_output_buffer(struct vpu_inst *inst)
> +{
> + struct v4l2_m2m_buffer *buf = NULL;
> + struct vpu_vb2_buffer *vpu_buf = NULL;
> +
> + if (!inst)
> + return -EINVAL;
> +
> + if (!vpu_check_ready(inst, inst->out_format.type))
> + return -EINVAL;
> +
> + v4l2_m2m_for_each_src_buf(inst->fh.m2m_ctx, buf) {
> + vpu_buf = container_of(buf, struct vpu_vb2_buffer, m2m_buf);
> + if (vpu_buf->state == VPU_BUF_STATE_IDLE)
> + break;
> + vpu_buf = NULL;
> + }
> +
> + if (!vpu_buf)
> + return -EINVAL;
> +
> + dev_dbg(inst->dev, "[%d]frame id = %d / %d\n",
> + inst->id, vpu_buf->m2m_buf.vb.sequence, inst->sequence);
> + return call_vop(inst, process_output, &vpu_buf->m2m_buf.vb.vb2_buf);
> +}
> +
> +int vpu_process_capture_buffer(struct vpu_inst *inst)
> +{
> + struct v4l2_m2m_buffer *buf = NULL;
> + struct vpu_vb2_buffer *vpu_buf = NULL;
> +
> + if (!inst)
> + return -EINVAL;
> +
> + if (!vpu_check_ready(inst, inst->cap_format.type))
> + return -EINVAL;
> +
> + v4l2_m2m_for_each_dst_buf(inst->fh.m2m_ctx, buf) {
> + vpu_buf = container_of(buf, struct vpu_vb2_buffer, m2m_buf);
> + if (vpu_buf->state == VPU_BUF_STATE_IDLE)
> + break;
> + vpu_buf = NULL;
> + }
> + if (!vpu_buf)
> + return -EINVAL;
> +
> + return call_vop(inst, process_capture, &vpu_buf->m2m_buf.vb.vb2_buf);
> +}
> +
> +struct vb2_v4l2_buffer *vpu_find_buf_by_sequence(struct vpu_inst *inst,
> + u32 type, u32 sequence)
> +{
> + struct v4l2_m2m_buffer *buf = NULL;
> + struct vb2_v4l2_buffer *vbuf = NULL;
> +
> + if (V4L2_TYPE_IS_OUTPUT(type)) {
> + v4l2_m2m_for_each_src_buf(inst->fh.m2m_ctx, buf) {
> + vbuf = &buf->vb;
> + if (vbuf->sequence == sequence)
> + break;
> + vbuf = NULL;
> + }
> + } else {
> + v4l2_m2m_for_each_dst_buf(inst->fh.m2m_ctx, buf) {
> + vbuf = &buf->vb;
> + if (vbuf->sequence == sequence)
> + break;
> + vbuf = NULL;
> + }
> + }
> +
> + return vbuf;
> +}
> +
> +struct vb2_v4l2_buffer *vpu_find_buf_by_idx(struct vpu_inst *inst,
> + u32 type, u32 idx)
> +{
> + struct v4l2_m2m_buffer *buf = NULL;
> + struct vb2_v4l2_buffer *vbuf = NULL;
> +
> + if (V4L2_TYPE_IS_OUTPUT(type)) {
> + v4l2_m2m_for_each_src_buf(inst->fh.m2m_ctx, buf) {
> + vbuf = &buf->vb;
> + if (vbuf->vb2_buf.index == idx)
> + break;
> + vbuf = NULL;
> + }
> + } else {
> + v4l2_m2m_for_each_dst_buf(inst->fh.m2m_ctx, buf) {
> + vbuf = &buf->vb;
> + if (vbuf->vb2_buf.index == idx)
> + break;
> + vbuf = NULL;
> + }
> + }
> +
> + return vbuf;
> +}
> +
> +int vpu_get_num_buffers(struct vpu_inst *inst, u32 type)
> +{
> + struct vb2_queue *q;
> +
> + if (!inst || !inst->fh.m2m_ctx)
> + return -EINVAL;
> + if (V4L2_TYPE_IS_OUTPUT(type))
> + q = v4l2_m2m_get_src_vq(inst->fh.m2m_ctx);
> + else
> + q = v4l2_m2m_get_dst_vq(inst->fh.m2m_ctx);
> +
> + return q->num_buffers;
> +}
> +
> +static void vpu_m2m_device_run(void *priv)
> +{
> +}
> +
> +static void vpu_m2m_job_abort(void *priv)
> +{
> + struct vpu_inst *inst = priv;
> + struct v4l2_m2m_ctx *m2m_ctx = inst->fh.m2m_ctx;
> +
> + v4l2_m2m_job_finish(m2m_ctx->m2m_dev, m2m_ctx);
> +}
> +
> +static const struct v4l2_m2m_ops vpu_m2m_ops = {
> + .device_run = vpu_m2m_device_run,
> + .job_abort = vpu_m2m_job_abort
> +};
> +
> +static int vpu_vb2_queue_setup(struct vb2_queue *vq,
> + unsigned int *buf_count,
> + unsigned int *plane_count,
> + unsigned int psize[],
> + struct device *allocators[])
> +{
> + struct vpu_inst *inst = vb2_get_drv_priv(vq);
> + struct vpu_format *cur_fmt;
> + int i;
> +
> + cur_fmt = vpu_get_format(inst, vq->type);
> +
> + if (*plane_count) {
> + if (inst->state == VPU_CODEC_STATE_DYAMIC_RESOLUTION_CHANGE) {
> + for (i = 0; i < *plane_count; i++) {
> + if (!psize[i])
> + psize[i] = cur_fmt->sizeimage[i];
> + }
> + return 0;
> + }
> + if (*plane_count != cur_fmt->num_planes)
> + return -EINVAL;
> + for (i = 0; i < cur_fmt->num_planes; i++) {
> + if (psize[i] < cur_fmt->sizeimage[i])
> + return -EINVAL;
> + }
> + return 0;
> + }
> +
> + *plane_count = cur_fmt->num_planes;
> + for (i = 0; i < cur_fmt->num_planes; i++)
> + psize[i] = cur_fmt->sizeimage[i];
> +
> + return 0;
> +}
> +
> +static int vpu_vb2_buf_init(struct vb2_buffer *vb)
> +{
> + struct vb2_v4l2_buffer *vbuf = to_vb2_v4l2_buffer(vb);
> + struct vpu_vb2_buffer *vpu_buf = to_vpu_vb2_buffer(vbuf);
> +
> + vpu_buf->state = VPU_BUF_STATE_IDLE;
> +
> + return 0;
> +}
> +
> +static void vpu_vb2_buf_cleanup(struct vb2_buffer *vb)
> +{
> +}

Unless this is filled in by a later patch, you can just drop this empty callback.

> +
> +static int vpu_vb2_buf_prepare(struct vb2_buffer *vb)
> +{
> + struct vpu_inst *inst = vb2_get_drv_priv(vb->vb2_queue);
> + struct vb2_v4l2_buffer *vbuf = to_vb2_v4l2_buffer(vb);
> + struct vpu_vb2_buffer *vpu_buf = to_vpu_vb2_buffer(vbuf);
> + struct vpu_format *cur_fmt;
> + u32 i;
> +
> + cur_fmt = vpu_get_format(inst, vb->type);
> + if (vb->num_planes != cur_fmt->num_planes)
> + return -EINVAL;
> + for (i = 0; i < cur_fmt->num_planes; i++) {
> + if (vpu_get_vb_length(vb, i) < cur_fmt->sizeimage[i]) {
> + dev_dbg(inst->dev, "[%d] %s buf[%d] is invalid\n",
> + inst->id,
> + vpu_type_name(vb->type),
> + vb->index);
> + vpu_buf->state = VPU_BUF_STATE_ERROR;
> + }
> + }
> +
> + return 0;
> +}
> +
> +static void vpu_vb2_buf_finish(struct vb2_buffer *vb)
> +{
> + struct vb2_v4l2_buffer *vbuf = to_vb2_v4l2_buffer(vb);
> + struct vpu_inst *inst = vb2_get_drv_priv(vb->vb2_queue);
> + struct vb2_queue *q = vb->vb2_queue;
> +
> + if (vbuf->flags & V4L2_BUF_FLAG_LAST)
> + vpu_notify_eos(inst);
> +
> + if (list_empty(&q->done_list))
> + call_vop(inst, on_queue_empty, q->type);
> +}
> +
> +void vpu_vb2_buffers_return(struct vpu_inst *inst,
> + unsigned int type, enum vb2_buffer_state state)
> +{
> + struct vb2_v4l2_buffer *buf;
> +
> + if (!inst || !inst->fh.m2m_ctx)
> + return;
> +
> + if (V4L2_TYPE_IS_OUTPUT(type)) {
> + while ((buf = v4l2_m2m_src_buf_remove(inst->fh.m2m_ctx)))
> + v4l2_m2m_buf_done(buf, state);
> + } else {
> + while ((buf = v4l2_m2m_dst_buf_remove(inst->fh.m2m_ctx)))
> + v4l2_m2m_buf_done(buf, state);
> + }
> +}
> +
> +static int vpu_vb2_start_streaming(struct vb2_queue *q, unsigned int count)
> +{
> + struct vpu_inst *inst = vb2_get_drv_priv(q);
> + struct vpu_format *fmt = vpu_get_format(inst, q->type);
> + int ret;
> +
> + vpu_inst_unlock(inst);
> + ret = vpu_inst_register(inst);
> + vpu_inst_lock(inst);
> + if (ret) {
> + vpu_vb2_buffers_return(inst, q->type, VB2_BUF_STATE_QUEUED);
> + return ret;
> + }
> +
> + vpu_trace(inst->dev, "[%d] %s %c%c%c%c %dx%d %u(%u) %u(%u) %u(%u) %d\n",
> + inst->id, vpu_type_name(q->type),
> + fmt->pixfmt,
> + fmt->pixfmt >> 8,
> + fmt->pixfmt >> 16,
> + fmt->pixfmt >> 24,
> + fmt->width, fmt->height,
> + fmt->sizeimage[0], fmt->bytesperline[0],
> + fmt->sizeimage[1], fmt->bytesperline[1],
> + fmt->sizeimage[2], fmt->bytesperline[2],
> + q->num_buffers);
> + call_vop(inst, start, q->type);
> + vb2_clear_last_buffer_dequeued(q);
> +
> + return 0;
> +}
> +
> +static void vpu_vb2_stop_streaming(struct vb2_queue *q)
> +{
> + struct vpu_inst *inst = vb2_get_drv_priv(q);
> +
> + vpu_trace(inst->dev, "[%d] %s\n", inst->id, vpu_type_name(q->type));
> +
> + call_vop(inst, stop, q->type);
> + vpu_vb2_buffers_return(inst, q->type, VB2_BUF_STATE_ERROR);
> + if (V4L2_TYPE_IS_OUTPUT(q->type))
> + inst->sequence = 0;
> +}
> +
> +static void vpu_vb2_buf_queue(struct vb2_buffer *vb)
> +{
> + struct vb2_v4l2_buffer *vbuf = to_vb2_v4l2_buffer(vb);
> + struct vpu_inst *inst = vb2_get_drv_priv(vb->vb2_queue);
> +
> + if (V4L2_TYPE_IS_OUTPUT(vb->type)) {
> + vbuf->sequence = inst->sequence++;
> + if ((s64)vb->timestamp < 0)
> + vb->timestamp = VPU_INVALID_TIMESTAMP;
> + }
> +
> + v4l2_m2m_buf_queue(inst->fh.m2m_ctx, vbuf);
> + vpu_process_output_buffer(inst);
> + vpu_process_capture_buffer(inst);
> +}
> +
> +static struct vb2_ops vpu_vb2_ops = {
> + .queue_setup = vpu_vb2_queue_setup,
> + .buf_init = vpu_vb2_buf_init,
> + .buf_cleanup = vpu_vb2_buf_cleanup,
> + .buf_prepare = vpu_vb2_buf_prepare,
> + .buf_finish = vpu_vb2_buf_finish,
> + .start_streaming = vpu_vb2_start_streaming,
> + .stop_streaming = vpu_vb2_stop_streaming,
> + .buf_queue = vpu_vb2_buf_queue,
> + .wait_prepare = vb2_ops_wait_prepare,
> + .wait_finish = vb2_ops_wait_finish,
> +};
> +
> +static int vpu_m2m_queue_init(void *priv, struct vb2_queue *src_vq,
> + struct vb2_queue *dst_vq)
> +{
> + struct vpu_inst *inst = priv;
> + int ret;
> +
> + inst->out_format.type = src_vq->type = V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE;
> + src_vq->io_modes = VB2_MMAP | VB2_USERPTR | VB2_DMABUF;

I would drop VB2_USERPTR. It is not desired for new drivers unless there is a
really good reason. Ditto for dst_vq below.

> + src_vq->timestamp_flags = V4L2_BUF_FLAG_TIMESTAMP_COPY;
> + src_vq->ops = &vpu_vb2_ops;
> + src_vq->mem_ops = &vb2_dma_contig_memops;
> + if (inst->type == VPU_CORE_TYPE_DEC && inst->use_stream_buffer)
> + src_vq->mem_ops = &vb2_vmalloc_memops;
> + src_vq->drv_priv = inst;
> + src_vq->buf_struct_size = sizeof(struct vpu_vb2_buffer);
> + src_vq->allow_zero_bytesused = 1;

Do you need allow_zero_bytesused? Unless there is a really good reason for it,
I would drop it. Same for dst_vq.

> + src_vq->min_buffers_needed = 1;
> + src_vq->dev = inst->vpu->dev;
> + src_vq->lock = &inst->lock;
> + ret = vb2_queue_init(src_vq);
> + if (ret)
> + return ret;
> +
> + inst->cap_format.type = dst_vq->type = V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE;
> + dst_vq->io_modes = VB2_MMAP | VB2_USERPTR | VB2_DMABUF;
> + dst_vq->timestamp_flags = V4L2_BUF_FLAG_TIMESTAMP_COPY;
> + dst_vq->ops = &vpu_vb2_ops;
> + dst_vq->mem_ops = &vb2_dma_contig_memops;
> + if (inst->type == VPU_CORE_TYPE_ENC && inst->use_stream_buffer)
> + dst_vq->mem_ops = &vb2_vmalloc_memops;
> + dst_vq->drv_priv = inst;
> + dst_vq->buf_struct_size = sizeof(struct vpu_vb2_buffer);
> + dst_vq->allow_zero_bytesused = 1;
> + dst_vq->min_buffers_needed = 1;
> + dst_vq->dev = inst->vpu->dev;
> + dst_vq->lock = &inst->lock;
> + ret = vb2_queue_init(dst_vq);
> + if (ret) {
> + vb2_queue_release(src_vq);
> + return ret;
> + }
> +
> + return 0;
> +}
> +
> +static int vpu_v4l2_release(struct vpu_inst *inst)
> +{
> + vpu_trace(inst->vpu->dev, "%p\n", inst);
> +
> + vpu_release_core(inst->core);
> + put_device(inst->dev);
> +
> + if (inst->workqueue) {
> + cancel_work_sync(&inst->msg_work);
> + destroy_workqueue(inst->workqueue);
> + inst->workqueue = NULL;
> + }
> + if (inst->fh.m2m_ctx) {
> + v4l2_m2m_ctx_release(inst->fh.m2m_ctx);
> + inst->fh.m2m_ctx = NULL;
> + }
> +
> + v4l2_ctrl_handler_free(&inst->ctrl_handler);
> + mutex_destroy(&inst->lock);
> + v4l2_fh_del(&inst->fh);
> + v4l2_fh_exit(&inst->fh);
> +
> + call_vop(inst, cleanup);
> +
> + return 0;
> +}
> +
> +int vpu_v4l2_open(struct file *file, struct vpu_inst *inst)
> +{
> + struct vpu_dev *vpu = video_drvdata(file);
> + struct vpu_func *func;
> + int ret = 0;
> +
> + WARN_ON(!file || !inst || !inst->ops);
> +
> + if (inst->type == VPU_CORE_TYPE_ENC)
> + func = &vpu->encoder;
> + else
> + func = &vpu->decoder;
> +
> + atomic_set(&inst->ref_count, 0);
> + vpu_inst_get(inst);
> + inst->vpu = vpu;
> + inst->core = vpu_request_core(vpu, inst->type);
> + if (inst->core)
> + inst->dev = get_device(inst->core->dev);
> + mutex_init(&inst->lock);
> + INIT_LIST_HEAD(&inst->cmd_q);
> + inst->id = VPU_INST_NULL_ID;
> + inst->release = vpu_v4l2_release;
> + inst->pid = current->pid;
> + inst->tgid = current->tgid;
> + inst->min_buffer_cap = 2;
> + inst->min_buffer_out = 2;

Assuming this means the minimum number of buffers needed, why is
min_buffers_needed set to 1 when initializing the vb2_queue structs?

> + v4l2_fh_init(&inst->fh, func->vfd);
> + v4l2_fh_add(&inst->fh);
> +
> + ret = call_vop(inst, ctrl_init);
> + if (ret)
> + goto error;
> +
> + inst->fh.m2m_ctx = v4l2_m2m_ctx_init(func->m2m_dev,
> + inst, vpu_m2m_queue_init);
> + if (IS_ERR(inst->fh.m2m_ctx)) {
> + dev_err(vpu->dev, "v4l2_m2m_ctx_init fail\n");
> + ret = PTR_ERR(func->m2m_dev);
> + goto error;
> + }
> +
> + inst->fh.ctrl_handler = &inst->ctrl_handler;
> + file->private_data = &inst->fh;
> + inst->state = VPU_CODEC_STATE_DEINIT;
> + inst->workqueue = alloc_workqueue("vpu_inst", WQ_UNBOUND | WQ_MEM_RECLAIM, 1);
> + if (inst->workqueue) {
> + INIT_WORK(&inst->msg_work, vpu_inst_run_work);
> + ret = kfifo_init(&inst->msg_fifo,
> + inst->msg_buffer,
> + roundup_pow_of_two(sizeof(inst->msg_buffer)));
> + if (ret) {
> + destroy_workqueue(inst->workqueue);
> + inst->workqueue = NULL;
> + }
> + }
> + vpu_trace(vpu->dev, "tgid = %d, pid = %d, type = %s, inst = %p\n",
> + inst->tgid, inst->pid, vpu_core_type_desc(inst->type), inst);
> +
> + return 0;
> +error:
> + vpu_inst_put(inst);
> + return ret;
> +}
> +
> +int vpu_v4l2_close(struct file *file)
> +{
> + struct vpu_dev *vpu = video_drvdata(file);
> + struct vpu_inst *inst = to_inst(file);
> + struct vb2_queue *src_q;
> + struct vb2_queue *dst_q;
> +
> + vpu_trace(vpu->dev, "tgid = %d, pid = %d, inst = %p\n",
> + inst->tgid, inst->pid, inst);
> + src_q = v4l2_m2m_get_src_vq(inst->fh.m2m_ctx);
> + dst_q = v4l2_m2m_get_dst_vq(inst->fh.m2m_ctx);
> + vpu_inst_lock(inst);
> + if (vb2_is_streaming(src_q))
> + v4l2_m2m_streamoff(file, inst->fh.m2m_ctx, src_q->type);
> + if (vb2_is_streaming(dst_q))
> + v4l2_m2m_streamoff(file, inst->fh.m2m_ctx, dst_q->type);

This looks very wrong. I expect a call to v4l2_m2m_ctx_release() here,
and that will take care of any streaming.

> + vpu_inst_unlock(inst);
> +
> + call_vop(inst, release);
> + vpu_inst_unregister(inst);
> + vpu_inst_put(inst);
> +
> + return 0;
> +}
> +
> +int vpu_add_func(struct vpu_dev *vpu, struct vpu_func *func)
> +{
> + struct video_device *vfd;
> + int ret;
> +
> + if (!vpu || !func)
> + return -EINVAL;
> +
> + if (func->vfd)
> + return 0;
> +
> + vfd = video_device_alloc();
> + if (!vfd) {
> + dev_err(vpu->dev, "alloc vpu decoder video device fail\n");
> + return -ENOMEM;
> + }
> + vfd->release = video_device_release;
> + vfd->vfl_dir = VFL_DIR_M2M;
> + vfd->v4l2_dev = &vpu->v4l2_dev;
> + vfd->device_caps = V4L2_CAP_VIDEO_M2M_MPLANE | V4L2_CAP_STREAMING;
> + if (func->type == VPU_CORE_TYPE_ENC) {
> + strscpy(vfd->name, "amphion-vpu-encoder", sizeof(vfd->name));
> + vfd->fops = venc_get_fops();
> + vfd->ioctl_ops = venc_get_ioctl_ops();
> + } else {
> + strscpy(vfd->name, "amphion-vpu-decoder", sizeof(vfd->name));
> + vfd->fops = vdec_get_fops();
> + vfd->ioctl_ops = vdec_get_ioctl_ops();
> + }
> +
> + ret = video_register_device(vfd, VFL_TYPE_VIDEO, -1);
> + if (ret) {
> + video_device_release(vfd);
> + return ret;
> + }
> + video_set_drvdata(vfd, vpu);
> + func->vfd = vfd;
> + func->m2m_dev = v4l2_m2m_init(&vpu_m2m_ops);

This should be done before video_register_device() to avoid creating
device nodes while the device isn't fully initialized yet.

> + if (IS_ERR(func->m2m_dev)) {
> + dev_err(vpu->dev, "v4l2_m2m_init fail\n");
> + video_unregister_device(func->vfd);
> + func->vfd = NULL;
> + return PTR_ERR(func->m2m_dev);
> + }
> +
> + ret = v4l2_m2m_register_media_controller(func->m2m_dev, func->vfd, func->function);
> + if (ret) {
> + v4l2_m2m_release(func->m2m_dev);
> + func->m2m_dev = NULL;
> + video_unregister_device(func->vfd);
> + func->vfd = NULL;
> + return ret;
> + }
> +
> + return 0;
> +}
> +
> +void vpu_remove_func(struct vpu_func *func)
> +{
> + if (!func)
> + return;
> +
> + if (func->m2m_dev) {
> + v4l2_m2m_unregister_media_controller(func->m2m_dev);
> + v4l2_m2m_release(func->m2m_dev);
> + func->m2m_dev = NULL;
> + }
> + if (func->vfd) {
> + video_unregister_device(func->vfd);
> + func->vfd = NULL;
> + }
> +}
> diff --git a/drivers/media/platform/amphion/vpu_v4l2.h b/drivers/media/platform/amphion/vpu_v4l2.h
> new file mode 100644
> index 000000000000..c9ed7aec637a
> --- /dev/null
> +++ b/drivers/media/platform/amphion/vpu_v4l2.h
> @@ -0,0 +1,54 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * Copyright 2020-2021 NXP
> + */
> +
> +#ifndef _AMPHION_VPU_V4L2_H
> +#define _AMPHION_VPU_V4L2_H
> +
> +#include <linux/videodev2.h>
> +
> +void vpu_inst_lock(struct vpu_inst *inst);
> +void vpu_inst_unlock(struct vpu_inst *inst);
> +
> +int vpu_v4l2_open(struct file *file, struct vpu_inst *inst);
> +int vpu_v4l2_close(struct file *file);
> +
> +const struct vpu_format *vpu_try_fmt_common(struct vpu_inst *inst, struct v4l2_format *f);
> +int vpu_process_output_buffer(struct vpu_inst *inst);
> +int vpu_process_capture_buffer(struct vpu_inst *inst);
> +struct vb2_v4l2_buffer *vpu_find_buf_by_sequence(struct vpu_inst *inst, u32 type, u32 sequence);
> +struct vb2_v4l2_buffer *vpu_find_buf_by_idx(struct vpu_inst *inst, u32 type, u32 idx);
> +void vpu_v4l2_set_error(struct vpu_inst *inst);
> +int vpu_notify_eos(struct vpu_inst *inst);
> +int vpu_notify_source_change(struct vpu_inst *inst);
> +int vpu_set_last_buffer_dequeued(struct vpu_inst *inst);
> +void vpu_vb2_buffers_return(struct vpu_inst *inst,
> + unsigned int type, enum vb2_buffer_state state);
> +int vpu_get_num_buffers(struct vpu_inst *inst, u32 type);
> +
> +dma_addr_t vpu_get_vb_phy_addr(struct vb2_buffer *vb, u32 plane_no);
> +unsigned int vpu_get_vb_length(struct vb2_buffer *vb, u32 plane_no);
> +static inline struct vpu_format *vpu_get_format(struct vpu_inst *inst, u32 type)
> +{
> + if (V4L2_TYPE_IS_OUTPUT(type))
> + return &inst->out_format;
> + else
> + return &inst->cap_format;
> +}
> +
> +static inline char *vpu_type_name(u32 type)
> +{
> + return V4L2_TYPE_IS_OUTPUT(type) ? "output" : "capture";
> +}
> +
> +static inline int vpu_vb_is_codecconfig(struct vb2_v4l2_buffer *vbuf)
> +{
> +#ifdef V4L2_BUF_FLAG_CODECCONFIG
> + return (vbuf->flags & V4L2_BUF_FLAG_CODECCONFIG) ? 1 : 0;
> +#else
> + return 0;
> +#endif
> +}
> +
> +#endif
>

Regards,

Hans

2021-12-02 10:52:49

by Hans Verkuil

Subject: Re: [PATCH v13 07/13] media: amphion: add v4l2 m2m vpu encoder stateful driver

On 30/11/2021 10:48, Ming Qian wrote:
> This consists of video encoder implementation plus encoder controls.
>
> Signed-off-by: Ming Qian <[email protected]>
> Signed-off-by: Shijie Qin <[email protected]>
> Signed-off-by: Zhou Peng <[email protected]>
> ---
> drivers/media/platform/amphion/venc.c | 1351 +++++++++++++++++++++++++
> 1 file changed, 1351 insertions(+)
> create mode 100644 drivers/media/platform/amphion/venc.c
>
> diff --git a/drivers/media/platform/amphion/venc.c b/drivers/media/platform/amphion/venc.c
> new file mode 100644
> index 000000000000..468608a76b78
> --- /dev/null
> +++ b/drivers/media/platform/amphion/venc.c
> @@ -0,0 +1,1351 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Copyright 2020-2021 NXP
> + */
> +
> +#include <linux/init.h>
> +#include <linux/interconnect.h>
> +#include <linux/ioctl.h>
> +#include <linux/list.h>
> +#include <linux/kernel.h>
> +#include <linux/module.h>
> +#include <linux/delay.h>
> +#include <linux/videodev2.h>
> +#include <linux/ktime.h>
> +#include <media/v4l2-device.h>
> +#include <media/v4l2-event.h>
> +#include <media/v4l2-mem2mem.h>
> +#include <media/v4l2-ioctl.h>
> +#include <media/videobuf2-v4l2.h>
> +#include <media/videobuf2-dma-contig.h>
> +#include <media/videobuf2-vmalloc.h>
> +#include "vpu.h"
> +#include "vpu_defs.h"
> +#include "vpu_core.h"
> +#include "vpu_helpers.h"
> +#include "vpu_v4l2.h"
> +#include "vpu_cmds.h"
> +#include "vpu_rpc.h"
> +
> +#define VENC_OUTPUT_ENABLE (1 << 0)
> +#define VENC_CAPTURE_ENABLE (1 << 1)
> +#define VENC_ENABLE_MASK (VENC_OUTPUT_ENABLE | VENC_CAPTURE_ENABLE)
> +#define VENC_MAX_BUF_CNT 8
> +
> +struct venc_t {
> + struct vpu_encode_params params;
> + u32 request_key_frame;
> + u32 input_ready;
> + u32 cpb_size;
> + bool bitrate_change;
> +
> + struct vpu_buffer enc[VENC_MAX_BUF_CNT];
> + struct vpu_buffer ref[VENC_MAX_BUF_CNT];
> + struct vpu_buffer act[VENC_MAX_BUF_CNT];
> + struct list_head frames;
> + u32 frame_count;
> + u32 encode_count;
> + u32 ready_count;
> + u32 enable;
> + u32 stopped;
> +
> + u32 skipped_count;
> + u32 skipped_bytes;
> +
> + wait_queue_head_t wq;
> +};
> +
> +struct venc_frame_t {
> + struct list_head list;
> + struct vpu_enc_pic_info info;
> + u32 bytesused;
> + s64 timestamp;
> +};
> +
> +static const struct vpu_format venc_formats[] = {
> + {
> + .pixfmt = V4L2_PIX_FMT_NV12M,
> + .num_planes = 2,
> + .type = V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE,
> + },
> + {
> + .pixfmt = V4L2_PIX_FMT_H264,
> + .num_planes = 1,
> + .type = V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE,
> + },
> + {0, 0, 0, 0},
> +};
> +
> +static int venc_querycap(struct file *file, void *fh, struct v4l2_capability *cap)
> +{
> + strscpy(cap->driver, "amphion-vpu", sizeof(cap->driver));
> + strscpy(cap->card, "amphion vpu encoder", sizeof(cap->card));
> + strscpy(cap->bus_info, "platform: amphion-vpu", sizeof(cap->bus_info));
> +
> + return 0;
> +}
> +
> +static int venc_enum_fmt(struct file *file, void *fh, struct v4l2_fmtdesc *f)
> +{
> + struct vpu_inst *inst = to_inst(file);
> + const struct vpu_format *fmt;
> +
> + memset(f->reserved, 0, sizeof(f->reserved));
> + fmt = vpu_helper_enum_format(inst, f->type, f->index);
> + if (!fmt)
> + return -EINVAL;
> +
> + f->pixelformat = fmt->pixfmt;
> + f->flags = fmt->flags;
> +
> + return 0;
> +}
> +
> +static int venc_enum_framesizes(struct file *file, void *fh, struct v4l2_frmsizeenum *fsize)
> +{
> + struct vpu_inst *inst = to_inst(file);
> + const struct vpu_core_resources *res;
> +
> + if (!fsize || fsize->index)
> + return -EINVAL;
> +
> + if (!vpu_helper_find_format(inst, 0, fsize->pixel_format))
> + return -EINVAL;
> +
> + res = vpu_get_resource(inst);
> + if (!res)
> + return -EINVAL;
> + fsize->type = V4L2_FRMSIZE_TYPE_STEPWISE;
> + fsize->stepwise.max_width = res->max_width;
> + fsize->stepwise.max_height = res->max_height;
> + fsize->stepwise.min_width = res->min_width;
> + fsize->stepwise.min_height = res->min_height;
> + fsize->stepwise.step_width = res->step_width;
> + fsize->stepwise.step_height = res->step_height;
> +
> + return 0;
> +}
> +
> +static int venc_enum_frameintervals(struct file *file, void *fh, struct v4l2_frmivalenum *fival)
> +{
> + struct vpu_inst *inst = to_inst(file);
> + const struct vpu_core_resources *res;
> +
> + if (!fival || fival->index)
> + return -EINVAL;
> +
> + if (!vpu_helper_find_format(inst, 0, fival->pixel_format))
> + return -EINVAL;
> +
> + if (!fival->width || !fival->height)
> + return -EINVAL;
> +
> + res = vpu_get_resource(inst);
> + if (!res)
> + return -EINVAL;
> + if (fival->width < res->min_width ||
> + fival->width > res->max_width ||
> + fival->height < res->min_height ||
> + fival->height > res->max_height)
> + return -EINVAL;
> +
> + fival->type = V4L2_FRMIVAL_TYPE_CONTINUOUS;
> + fival->stepwise.min.numerator = 1;
> + fival->stepwise.min.denominator = USHRT_MAX;
> + fival->stepwise.max.numerator = USHRT_MAX;
> + fival->stepwise.max.denominator = 1;
> + fival->stepwise.step.numerator = 1;
> + fival->stepwise.step.denominator = 1;
> +
> + return 0;
> +}
> +
> +static int venc_g_fmt(struct file *file, void *fh, struct v4l2_format *f)
> +{
> + struct vpu_inst *inst = to_inst(file);
> + struct venc_t *venc = inst->priv;
> + struct v4l2_pix_format_mplane *pixmp = &f->fmt.pix_mp;
> + struct vpu_format *cur_fmt;
> + int i;
> +
> + cur_fmt = vpu_get_format(inst, f->type);
> +
> + pixmp->pixelformat = cur_fmt->pixfmt;
> + pixmp->num_planes = cur_fmt->num_planes;
> + pixmp->width = cur_fmt->width;
> + pixmp->height = cur_fmt->height;
> + pixmp->field = cur_fmt->field;
> + pixmp->flags = cur_fmt->flags;
> + for (i = 0; i < pixmp->num_planes; i++) {
> + pixmp->plane_fmt[i].bytesperline = cur_fmt->bytesperline[i];
> + pixmp->plane_fmt[i].sizeimage = cur_fmt->sizeimage[i];
> + }
> +
> + f->fmt.pix_mp.colorspace = venc->params.color.primaries;
> + f->fmt.pix_mp.xfer_func = venc->params.color.transfer;
> + f->fmt.pix_mp.ycbcr_enc = venc->params.color.matrix;
> + f->fmt.pix_mp.quantization = venc->params.color.full_range;
> +
> + return 0;
> +}
> +
> +static int venc_try_fmt(struct file *file, void *fh, struct v4l2_format *f)
> +{
> + struct vpu_inst *inst = to_inst(file);
> +
> + vpu_try_fmt_common(inst, f);
> +
> + return 0;
> +}
> +
> +static int venc_s_fmt(struct file *file, void *fh, struct v4l2_format *f)
> +{
> + struct vpu_inst *inst = to_inst(file);
> + const struct vpu_format *fmt;
> + struct vpu_format *cur_fmt;
> + struct vb2_queue *q;
> + struct venc_t *venc = inst->priv;
> + struct v4l2_pix_format_mplane *pix_mp = &f->fmt.pix_mp;
> + int i;
> +
> + q = v4l2_m2m_get_vq(inst->fh.m2m_ctx, f->type);
> + if (!q)
> + return -EINVAL;
> + if (vb2_is_streaming(q))
> + return -EBUSY;
> +
> + fmt = vpu_try_fmt_common(inst, f);
> + if (!fmt)
> + return -EINVAL;
> +
> + cur_fmt = vpu_get_format(inst, f->type);
> +
> + cur_fmt->pixfmt = fmt->pixfmt;
> + cur_fmt->num_planes = fmt->num_planes;
> + cur_fmt->flags = fmt->flags;
> + cur_fmt->width = pix_mp->width;
> + cur_fmt->height = pix_mp->height;
> + for (i = 0; i < fmt->num_planes; i++) {
> + cur_fmt->sizeimage[i] = pix_mp->plane_fmt[i].sizeimage;
> + cur_fmt->bytesperline[i] = pix_mp->plane_fmt[i].bytesperline;
> + }
> +
> + if (pix_mp->field != V4L2_FIELD_ANY)
> + cur_fmt->field = pix_mp->field;
> +
> + if (V4L2_TYPE_IS_OUTPUT(f->type)) {
> + venc->params.input_format = cur_fmt->pixfmt;
> + venc->params.src_stride = cur_fmt->bytesperline[0];
> + venc->params.src_width = cur_fmt->width;
> + venc->params.src_height = cur_fmt->height;
> + venc->params.crop.left = 0;
> + venc->params.crop.top = 0;
> + venc->params.crop.width = cur_fmt->width;
> + venc->params.crop.height = cur_fmt->height;
> + } else {
> + venc->params.codec_format = cur_fmt->pixfmt;
> + venc->params.out_width = cur_fmt->width;
> + venc->params.out_height = cur_fmt->height;
> + }
> +
> + if (V4L2_TYPE_IS_OUTPUT(f->type)) {
> + if (!vpu_color_check_primaries(pix_mp->colorspace)) {
> + venc->params.color.primaries = pix_mp->colorspace;
> + vpu_color_get_default(venc->params.color.primaries,
> + &venc->params.color.transfer,
> + &venc->params.color.matrix,
> + &venc->params.color.full_range);
> + }
> + if (!vpu_color_check_transfers(pix_mp->xfer_func))
> + venc->params.color.transfer = pix_mp->xfer_func;
> + if (!vpu_color_check_matrix(pix_mp->ycbcr_enc))
> + venc->params.color.matrix = pix_mp->ycbcr_enc;
> + if (!vpu_color_check_full_range(pix_mp->quantization))
> + venc->params.color.full_range = pix_mp->quantization;
> + }
> +
> + pix_mp->colorspace = venc->params.color.primaries;
> + pix_mp->xfer_func = venc->params.color.transfer;
> + pix_mp->ycbcr_enc = venc->params.color.matrix;
> + pix_mp->quantization = venc->params.color.full_range;
> +
> + return 0;
> +}
> +
> +static int venc_g_parm(struct file *file, void *fh, struct v4l2_streamparm *parm)
> +{
> + struct vpu_inst *inst = to_inst(file);
> + struct venc_t *venc = inst->priv;
> + struct v4l2_fract *timeperframe = &parm->parm.capture.timeperframe;
> +
> + if (!parm)
> + return -EINVAL;
> +
> + if (!vpu_helper_check_type(inst, parm->type))
> + return -EINVAL;
> +
> + parm->parm.capture.capability = V4L2_CAP_TIMEPERFRAME;
> + parm->parm.capture.readbuffers = 0;
> + timeperframe->numerator = venc->params.frame_rate.numerator;
> + timeperframe->denominator = venc->params.frame_rate.denominator;
> +
> + return 0;
> +}
> +
> +static int venc_s_parm(struct file *file, void *fh, struct v4l2_streamparm *parm)
> +{
> + struct vpu_inst *inst = to_inst(file);
> + struct venc_t *venc = inst->priv;
> + struct v4l2_fract *timeperframe = &parm->parm.capture.timeperframe;
> +
> + if (!parm)
> + return -EINVAL;
> +
> + if (!vpu_helper_check_type(inst, parm->type))
> + return -EINVAL;
> +
> + if (!timeperframe->numerator)
> + timeperframe->numerator = venc->params.frame_rate.numerator;
> + if (!timeperframe->denominator)
> + timeperframe->denominator = venc->params.frame_rate.denominator;
> +
> + venc->params.frame_rate.numerator = timeperframe->numerator;
> + venc->params.frame_rate.denominator = timeperframe->denominator;
> +
> + vpu_helper_calc_coprime(&venc->params.frame_rate.numerator,
> + &venc->params.frame_rate.denominator);

You can use this function instead: rational_best_approximation().
See e.g. drivers/media/v4l2-core/v4l2-dv-timings.c.

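For reference, reducing timeperframe to lowest terms is a plain Euclid GCD divide-through; a minimal userspace sketch of what such a helper boils down to (function names here are illustrative, not the driver's):

```c
#include <assert.h>

/* Euclid's algorithm: the core of any coprime reduction. */
static unsigned int gcd_u32(unsigned int a, unsigned int b)
{
	while (b) {
		unsigned int t = a % b;

		a = b;
		b = t;
	}
	return a;
}

/* Reduce a frame-rate fraction in place, e.g. 60/2 -> 30/1. */
static void calc_coprime(unsigned int *num, unsigned int *den)
{
	unsigned int g = gcd_u32(*num, *den);

	if (g) {
		*num /= g;
		*den /= g;
	}
}
```

The advantage of rational_best_approximation() over the plain divide-through is that it also bounds the resulting numerator/denominator, which matters when userspace hands in values larger than the firmware can represent.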
> +
> + parm->parm.capture.capability = V4L2_CAP_TIMEPERFRAME;
> + memset(parm->parm.capture.reserved,
> + 0, sizeof(parm->parm.capture.reserved));
> +
> + return 0;
> +}
> +
> +static int venc_g_selection(struct file *file, void *fh, struct v4l2_selection *s)
> +{
> + struct vpu_inst *inst = to_inst(file);
> + struct venc_t *venc = inst->priv;
> +
> + if (s->type != V4L2_BUF_TYPE_VIDEO_OUTPUT && s->type != V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE)
> + return -EINVAL;
> +
> + switch (s->target) {
> + case V4L2_SEL_TGT_CROP_DEFAULT:
> + case V4L2_SEL_TGT_CROP_BOUNDS:
> + s->r.left = 0;
> + s->r.top = 0;
> + s->r.width = inst->out_format.width;
> + s->r.height = inst->out_format.height;
> + break;
> + case V4L2_SEL_TGT_CROP:
> + s->r = venc->params.crop;
> + break;
> + default:
> + return -EINVAL;
> + }
> +
> + return 0;
> +}
> +
> +static int venc_valid_crop(struct venc_t *venc, const struct vpu_core_resources *res)
> +{
> + struct v4l2_rect *rect = NULL;
> + u32 min_width;
> + u32 min_height;
> + u32 src_width;
> + u32 src_height;
> +
> + rect = &venc->params.crop;
> + min_width = res->min_width;
> + min_height = res->min_height;
> + src_width = venc->params.src_width;
> + src_height = venc->params.src_height;
> +
> + if (rect->width == 0 || rect->height == 0)
> + return -EINVAL;
> + if (rect->left > src_width - min_width ||
> + rect->top > src_height - min_height)
> + return -EINVAL;
> +
> + rect->width = min(rect->width, src_width - rect->left);
> + rect->width = max_t(u32, rect->width, min_width);
> +
> + rect->height = min(rect->height, src_height - rect->top);
> + rect->height = max_t(u32, rect->height, min_height);
> +
> + return 0;
> +}
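The clamp performed here is: reject a crop whose offset leaves less than the minimum frame size, then pin width/height into [min, src - offset]. A standalone userspace sketch of the same logic (struct and helper names are mine, not the driver's; note the min-then-max pair in the driver is equivalent to a single clamp once the offset check has passed):

```c
#include <assert.h>

struct rect { unsigned int left, top, width, height; };

static unsigned int clamp_u32(unsigned int v, unsigned int lo, unsigned int hi)
{
	return v < lo ? lo : (v > hi ? hi : v);
}

/* Returns 0 and clamps *r on success, -1 when the crop cannot fit at all. */
static int valid_crop(struct rect *r, unsigned int min_w, unsigned int min_h,
		      unsigned int src_w, unsigned int src_h)
{
	if (!r->width || !r->height)
		return -1;
	/* offset check guarantees src_w - r->left >= min_w below */
	if (r->left > src_w - min_w || r->top > src_h - min_h)
		return -1;

	r->width = clamp_u32(r->width, min_w, src_w - r->left);
	r->height = clamp_u32(r->height, min_h, src_h - r->top);
	return 0;
}
```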
> +
> +static int venc_s_selection(struct file *file, void *fh, struct v4l2_selection *s)
> +{
> + struct vpu_inst *inst = to_inst(file);
> + const struct vpu_core_resources *res;
> + struct venc_t *venc = inst->priv;
> +
> + res = vpu_get_resource(inst);
> + if (!res)
> + return -EINVAL;
> +
> + if (s->type != V4L2_BUF_TYPE_VIDEO_OUTPUT && s->type != V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE)
> + return -EINVAL;
> + if (s->target != V4L2_SEL_TGT_CROP)
> + return -EINVAL;
> +
> + venc->params.crop.left = ALIGN(s->r.left, res->step_width);
> + venc->params.crop.top = ALIGN(s->r.top, res->step_height);
> + venc->params.crop.width = ALIGN(s->r.width, res->step_width);
> + venc->params.crop.height = ALIGN(s->r.height, res->step_height);
> + if (venc_valid_crop(venc, res)) {
> + venc->params.crop.left = 0;
> + venc->params.crop.top = 0;
> + venc->params.crop.width = venc->params.src_width;
> + venc->params.crop.height = venc->params.src_height;
> + }
> +
> + inst->crop = venc->params.crop;
> +
> + return 0;
> +}
> +
> +static int venc_drain(struct vpu_inst *inst)
> +{
> + struct venc_t *venc = inst->priv;
> + int ret;
> +
> + if (inst->state != VPU_CODEC_STATE_DRAIN)
> + return 0;
> +
> + if (v4l2_m2m_num_src_bufs_ready(inst->fh.m2m_ctx))
> + return 0;
> +
> + if (!venc->input_ready)
> + return 0;
> +
> + venc->input_ready = false;
> + vpu_trace(inst->dev, "[%d]\n", inst->id);
> + ret = vpu_session_stop(inst);
> + if (ret)
> + return ret;
> + inst->state = VPU_CODEC_STATE_STOP;
> + wake_up_all(&venc->wq);
> +
> + return 0;
> +}
> +
> +static int venc_request_eos(struct vpu_inst *inst)
> +{
> + inst->state = VPU_CODEC_STATE_DRAIN;
> + venc_drain(inst);
> +
> + return 0;
> +}
> +
> +static int venc_encoder_cmd(struct file *file, void *fh, struct v4l2_encoder_cmd *cmd)
> +{
> + struct vpu_inst *inst = to_inst(file);
> + int ret;
> +
> + ret = v4l2_m2m_ioctl_try_encoder_cmd(file, fh, cmd);
> + if (ret)
> + return ret;
> +
> + vpu_inst_lock(inst);
> + if (cmd->cmd == V4L2_ENC_CMD_STOP) {
> + if (inst->state == VPU_CODEC_STATE_DEINIT)
> + vpu_set_last_buffer_dequeued(inst);
> + else
> + venc_request_eos(inst);
> + }
> + vpu_inst_unlock(inst);
> +
> + return 0;
> +}
> +
> +static int venc_subscribe_event(struct v4l2_fh *fh, const struct v4l2_event_subscription *sub)
> +{
> + switch (sub->type) {
> + case V4L2_EVENT_EOS:
> + return v4l2_event_subscribe(fh, sub, 0, NULL);
> + case V4L2_EVENT_CTRL:
> + return v4l2_ctrl_subscribe_event(fh, sub);
> + default:
> + return -EINVAL;
> + }
> +}
> +
> +static const struct v4l2_ioctl_ops venc_ioctl_ops = {
> + .vidioc_querycap = venc_querycap,
> + .vidioc_enum_fmt_vid_cap = venc_enum_fmt,
> + .vidioc_enum_fmt_vid_out = venc_enum_fmt,
> + .vidioc_enum_framesizes = venc_enum_framesizes,
> + .vidioc_enum_frameintervals = venc_enum_frameintervals,
> + .vidioc_g_fmt_vid_cap_mplane = venc_g_fmt,
> + .vidioc_g_fmt_vid_out_mplane = venc_g_fmt,
> + .vidioc_try_fmt_vid_cap_mplane = venc_try_fmt,
> + .vidioc_try_fmt_vid_out_mplane = venc_try_fmt,
> + .vidioc_s_fmt_vid_cap_mplane = venc_s_fmt,
> + .vidioc_s_fmt_vid_out_mplane = venc_s_fmt,
> + .vidioc_g_parm = venc_g_parm,
> + .vidioc_s_parm = venc_s_parm,
> + .vidioc_g_selection = venc_g_selection,
> + .vidioc_s_selection = venc_s_selection,
> + .vidioc_try_encoder_cmd = v4l2_m2m_ioctl_try_encoder_cmd,
> + .vidioc_encoder_cmd = venc_encoder_cmd,
> + .vidioc_subscribe_event = venc_subscribe_event,
> + .vidioc_unsubscribe_event = v4l2_event_unsubscribe,
> + .vidioc_reqbufs = v4l2_m2m_ioctl_reqbufs,
> + .vidioc_querybuf = v4l2_m2m_ioctl_querybuf,
> + .vidioc_create_bufs = v4l2_m2m_ioctl_create_bufs,
> + .vidioc_prepare_buf = v4l2_m2m_ioctl_prepare_buf,
> + .vidioc_qbuf = v4l2_m2m_ioctl_qbuf,
> + .vidioc_expbuf = v4l2_m2m_ioctl_expbuf,
> + .vidioc_dqbuf = v4l2_m2m_ioctl_dqbuf,
> + .vidioc_streamon = v4l2_m2m_ioctl_streamon,
> + .vidioc_streamoff = v4l2_m2m_ioctl_streamoff,
> +};
> +
> +static int venc_op_s_ctrl(struct v4l2_ctrl *ctrl)
> +{
> + struct vpu_inst *inst = ctrl_to_inst(ctrl);
> + struct venc_t *venc = inst->priv;
> + int ret = 0;
> +
> + vpu_inst_lock(inst);
> + switch (ctrl->id) {
> + case V4L2_CID_MPEG_VIDEO_H264_PROFILE:
> + venc->params.profile = ctrl->val;
> + break;
> + case V4L2_CID_MPEG_VIDEO_H264_LEVEL:
> + venc->params.level = ctrl->val;
> + break;
> + case V4L2_CID_MPEG_VIDEO_BITRATE_MODE:
> + venc->params.rc_mode = ctrl->val;
> + break;
> + case V4L2_CID_MPEG_VIDEO_BITRATE:
> + if (ctrl->val != venc->params.bitrate)
> + venc->bitrate_change = true;
> + venc->params.bitrate = ctrl->val;
> + break;
> + case V4L2_CID_MPEG_VIDEO_GOP_SIZE:
> + venc->params.gop_length = ctrl->val;
> + break;
> + case V4L2_CID_MPEG_VIDEO_B_FRAMES:
> + venc->params.bframes = ctrl->val;
> + break;
> + case V4L2_CID_MPEG_VIDEO_H264_I_FRAME_QP:
> + venc->params.i_frame_qp = ctrl->val;
> + break;
> + case V4L2_CID_MPEG_VIDEO_H264_P_FRAME_QP:
> + venc->params.p_frame_qp = ctrl->val;
> + break;
> + case V4L2_CID_MPEG_VIDEO_H264_B_FRAME_QP:
> + venc->params.b_frame_qp = ctrl->val;
> + break;
> + case V4L2_CID_MPEG_VIDEO_FORCE_KEY_FRAME:
> + venc->request_key_frame = 1;
> + break;
> + case V4L2_CID_MPEG_VIDEO_H264_CPB_SIZE:
> + venc->cpb_size = ctrl->val * 1024;
> + break;
> + case V4L2_CID_MPEG_VIDEO_H264_VUI_SAR_ENABLE:
> + venc->params.sar.enable = ctrl->val;
> + break;
> + case V4L2_CID_MPEG_VIDEO_H264_VUI_SAR_IDC:
> + venc->params.sar.idc = ctrl->val;
> + break;
> + case V4L2_CID_MPEG_VIDEO_H264_VUI_EXT_SAR_WIDTH:
> + venc->params.sar.width = ctrl->val;
> + break;
> + case V4L2_CID_MPEG_VIDEO_H264_VUI_EXT_SAR_HEIGHT:
> + venc->params.sar.height = ctrl->val;
> + break;
> + case V4L2_CID_MPEG_VIDEO_HEADER_MODE:
> + break;
> + default:
> + ret = -EINVAL;
> + break;
> + }
> + vpu_inst_unlock(inst);
> +
> + return ret;
> +}
> +
> +static const struct v4l2_ctrl_ops venc_ctrl_ops = {
> + .s_ctrl = venc_op_s_ctrl,
> + .g_volatile_ctrl = vpu_helper_g_volatile_ctrl,
> +};
> +
> +static int venc_ctrl_init(struct vpu_inst *inst)
> +{
> + struct v4l2_ctrl *ctrl;
> + int ret;
> +
> + ret = v4l2_ctrl_handler_init(&inst->ctrl_handler, 20);
> + if (ret)
> + return ret;
> +
> + v4l2_ctrl_new_std_menu(&inst->ctrl_handler, &venc_ctrl_ops,
> + V4L2_CID_MPEG_VIDEO_H264_PROFILE,
> + V4L2_MPEG_VIDEO_H264_PROFILE_HIGH,
> + ~((1 << V4L2_MPEG_VIDEO_H264_PROFILE_BASELINE) |
> + (1 << V4L2_MPEG_VIDEO_H264_PROFILE_MAIN) |
> + (1 << V4L2_MPEG_VIDEO_H264_PROFILE_HIGH)),
> + V4L2_MPEG_VIDEO_H264_PROFILE_HIGH);
> +
> + v4l2_ctrl_new_std_menu(&inst->ctrl_handler, &venc_ctrl_ops,
> + V4L2_CID_MPEG_VIDEO_H264_LEVEL,
> + V4L2_MPEG_VIDEO_H264_LEVEL_5_1,
> + 0x0,
> + V4L2_MPEG_VIDEO_H264_LEVEL_4_0);
> +
> + v4l2_ctrl_new_std_menu(&inst->ctrl_handler, &venc_ctrl_ops,
> + V4L2_CID_MPEG_VIDEO_BITRATE_MODE,
> + V4L2_MPEG_VIDEO_BITRATE_MODE_CBR,
> + 0x0,
> + V4L2_MPEG_VIDEO_BITRATE_MODE_CBR);
> +
> + v4l2_ctrl_new_std(&inst->ctrl_handler, &venc_ctrl_ops,
> + V4L2_CID_MPEG_VIDEO_BITRATE,
> + BITRATE_MIN,
> + BITRATE_MAX,
> + BITRATE_STEP,
> + BITRATE_DEFAULT);
> +
> + v4l2_ctrl_new_std(&inst->ctrl_handler, &venc_ctrl_ops,
> + V4L2_CID_MPEG_VIDEO_GOP_SIZE, 0, (1 << 16) - 1, 1, 30);
> +
> + v4l2_ctrl_new_std(&inst->ctrl_handler, &venc_ctrl_ops,
> + V4L2_CID_MPEG_VIDEO_B_FRAMES, 0, 4, 1, 0);
> +
> + v4l2_ctrl_new_std(&inst->ctrl_handler, &venc_ctrl_ops,
> + V4L2_CID_MPEG_VIDEO_H264_I_FRAME_QP, 1, 51, 1, 26);
> + v4l2_ctrl_new_std(&inst->ctrl_handler, &venc_ctrl_ops,
> + V4L2_CID_MPEG_VIDEO_H264_P_FRAME_QP, 1, 51, 1, 28);
> + v4l2_ctrl_new_std(&inst->ctrl_handler, &venc_ctrl_ops,
> + V4L2_CID_MPEG_VIDEO_H264_B_FRAME_QP, 1, 51, 1, 30);
> + v4l2_ctrl_new_std(&inst->ctrl_handler, &venc_ctrl_ops,
> + V4L2_CID_MPEG_VIDEO_FORCE_KEY_FRAME, 0, 0, 0, 0);
> + ctrl = v4l2_ctrl_new_std(&inst->ctrl_handler, &venc_ctrl_ops,
> + V4L2_CID_MIN_BUFFERS_FOR_CAPTURE, 1, 32, 1, 2);
> + if (ctrl)
> + ctrl->flags |= V4L2_CTRL_FLAG_VOLATILE;
> + ctrl = v4l2_ctrl_new_std(&inst->ctrl_handler, &venc_ctrl_ops,
> + V4L2_CID_MIN_BUFFERS_FOR_OUTPUT, 1, 32, 1, 2);
> + if (ctrl)
> + ctrl->flags |= V4L2_CTRL_FLAG_VOLATILE;
> +
> + v4l2_ctrl_new_std(&inst->ctrl_handler, &venc_ctrl_ops,
> + V4L2_CID_MPEG_VIDEO_H264_CPB_SIZE, 64, 10240, 1, 1024);
> +
> + v4l2_ctrl_new_std(&inst->ctrl_handler, &venc_ctrl_ops,
> + V4L2_CID_MPEG_VIDEO_H264_VUI_SAR_ENABLE, 0, 1, 1, 1);
> + v4l2_ctrl_new_std_menu(&inst->ctrl_handler, &venc_ctrl_ops,
> + V4L2_CID_MPEG_VIDEO_H264_VUI_SAR_IDC,
> + V4L2_MPEG_VIDEO_H264_VUI_SAR_IDC_EXTENDED,
> + 0x0,
> + V4L2_MPEG_VIDEO_H264_VUI_SAR_IDC_1x1);
> + v4l2_ctrl_new_std(&inst->ctrl_handler, &venc_ctrl_ops,
> + V4L2_CID_MPEG_VIDEO_H264_VUI_EXT_SAR_WIDTH,
> + 0, USHRT_MAX, 1, 1);
> + v4l2_ctrl_new_std(&inst->ctrl_handler, &venc_ctrl_ops,
> + V4L2_CID_MPEG_VIDEO_H264_VUI_EXT_SAR_HEIGHT,
> + 0, USHRT_MAX, 1, 1);
> + v4l2_ctrl_new_std_menu(&inst->ctrl_handler, &venc_ctrl_ops,
> + V4L2_CID_MPEG_VIDEO_HEADER_MODE,
> + V4L2_MPEG_VIDEO_HEADER_MODE_JOINED_WITH_1ST_FRAME,
> + ~(1 << V4L2_MPEG_VIDEO_HEADER_MODE_JOINED_WITH_1ST_FRAME),
> + V4L2_MPEG_VIDEO_HEADER_MODE_JOINED_WITH_1ST_FRAME);
> +
> + ret = v4l2_ctrl_handler_setup(&inst->ctrl_handler);
> + if (ret) {
> + dev_err(inst->dev, "[%d] setup ctrls fail, ret = %d\n", inst->id, ret);
> + v4l2_ctrl_handler_free(&inst->ctrl_handler);
> + return ret;
> + }
> +
> + return 0;
> +}
> +
> +static bool venc_check_ready(struct vpu_inst *inst, unsigned int type)
> +{
> + struct venc_t *venc = inst->priv;
> +
> + if (V4L2_TYPE_IS_OUTPUT(type)) {
> + if (vpu_helper_get_free_space(inst) < venc->cpb_size)
> + return false;
> + return venc->input_ready;
> + }
> +
> + if (list_empty(&venc->frames))
> + return false;
> + return true;
> +}
> +
> +static u32 venc_get_enable_mask(u32 type)
> +{
> + if (V4L2_TYPE_IS_OUTPUT(type))
> + return VENC_OUTPUT_ENABLE;
> + else
> + return VENC_CAPTURE_ENABLE;
> +}
> +
> +static void venc_set_enable(struct venc_t *venc, u32 type, int enable)
> +{
> + u32 mask = venc_get_enable_mask(type);
> +
> + if (enable)
> + venc->enable |= mask;
> + else
> + venc->enable &= ~mask;
> +}
> +
> +static u32 venc_get_enable(struct venc_t *venc, u32 type)
> +{
> + return venc->enable & venc_get_enable_mask(type);
> +}
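These helpers implement a two-bit gate: encoding only starts once both the OUTPUT and CAPTURE queues have set their bit in VENC_ENABLE_MASK, and only fully stops once both have cleared it. The bookkeeping reduces to this standalone sketch (userspace, illustrative names):

```c
#include <assert.h>

#define OUT_EN  (1u << 0)	/* mirrors VENC_OUTPUT_ENABLE */
#define CAP_EN  (1u << 1)	/* mirrors VENC_CAPTURE_ENABLE */
#define EN_MASK (OUT_EN | CAP_EN)

static unsigned int enable_mask(int is_output)
{
	return is_output ? OUT_EN : CAP_EN;
}

static void set_enable(unsigned int *state, int is_output, int enable)
{
	unsigned int mask = enable_mask(is_output);

	if (enable)
		*state |= mask;
	else
		*state &= ~mask;
}

/* The session is only (re)configured once both queues are streaming. */
static int both_enabled(unsigned int state)
{
	return (state & EN_MASK) == EN_MASK;
}
```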
> +
> +static void venc_input_done(struct vpu_inst *inst)
> +{
> + struct venc_t *venc = inst->priv;
> +
> + vpu_inst_lock(inst);
> + venc->input_ready = true;
> + vpu_process_output_buffer(inst);
> + if (inst->state == VPU_CODEC_STATE_DRAIN)
> + venc_drain(inst);
> + vpu_inst_unlock(inst);
> +}
> +
> +/*
> + * Due to a hardware limitation, there may be several redundant bytes
> + * at the beginning of a frame.
> + * On the Android platform, this redundant data can cause CTS test
> + * failures, so the driver strips it.
> + */
> +static int venc_precheck_encoded_frame(struct vpu_inst *inst, struct venc_frame_t *frame)
> +{
> + struct venc_t *venc;
> + int skipped;
> +
> + if (!inst || !frame || !frame->bytesused)
> + return -EINVAL;
> +
> + venc = inst->priv;
> + skipped = vpu_helper_find_startcode(&inst->stream_buffer,
> + inst->cap_format.pixfmt,
> + frame->info.wptr - inst->stream_buffer.phys,
> + frame->bytesused);
> + if (skipped > 0) {
> + frame->bytesused -= skipped;
> + frame->info.wptr = vpu_helper_step_walk(&inst->stream_buffer,
> + frame->info.wptr, skipped);
> + venc->skipped_bytes += skipped;
> + venc->skipped_count++;
> + }
> +
> + return 0;
> +}
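The stripping described in the comment boils down to locating the first Annex-B start code and skipping everything before it. Roughly, for a linear buffer (the real helper additionally walks the ring buffer and dispatches on the pixel format):

```c
#include <assert.h>
#include <stddef.h>

/*
 * Linear scan for an H.264 Annex-B start code (00 00 01).
 * Returns the number of leading bytes to skip, or -1 if none found.
 * Illustrative sketch only, not the driver's vpu_helper_find_startcode().
 */
static int find_startcode(const unsigned char *buf, size_t len)
{
	size_t i;

	for (i = 0; i + 3 <= len; i++) {
		if (buf[i] == 0x00 && buf[i + 1] == 0x00 && buf[i + 2] == 0x01)
			return (int)i;
	}
	return -1;
}
```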
> +
> +static int venc_get_one_encoded_frame(struct vpu_inst *inst,
> + struct venc_frame_t *frame,
> + struct vb2_v4l2_buffer *vbuf)
> +{
> + struct venc_t *venc = inst->priv;
> + struct vpu_vb2_buffer *vpu_buf;
> +
> + if (!vbuf)
> + return -EAGAIN;
> +
> + if (!venc_get_enable(inst->priv, vbuf->vb2_buf.type)) {
> + v4l2_m2m_buf_done(vbuf, VB2_BUF_STATE_ERROR);
> + return 0;
> + }
> + vpu_buf = to_vpu_vb2_buffer(vbuf);
> + if (frame->bytesused > vbuf->vb2_buf.planes[0].length) {
> + v4l2_m2m_buf_done(vbuf, VB2_BUF_STATE_ERROR);
> + return -ENOMEM;
> + }
> +
> + venc_precheck_encoded_frame(inst, frame);
> +
> + if (frame->bytesused) {
> + u32 rptr = frame->info.wptr;
> + void *dst = vb2_plane_vaddr(&vbuf->vb2_buf, 0);
> +
> + vpu_helper_copy_from_stream_buffer(&inst->stream_buffer,
> + &rptr, frame->bytesused, dst);
> + vpu_iface_update_stream_buffer(inst, rptr, 0);
> + }
> + vb2_set_plane_payload(&vbuf->vb2_buf, 0, frame->bytesused);
> + vbuf->sequence = frame->info.frame_id;
> + vbuf->vb2_buf.timestamp = frame->info.timestamp;
> + vbuf->field = inst->cap_format.field;
> + vbuf->flags |= frame->info.pic_type;
> + vpu_buf->state = VPU_BUF_STATE_IDLE;
> + dev_dbg(inst->dev, "[%d][OUTPUT TS]%32lld\n", inst->id, frame->info.timestamp);
> + v4l2_m2m_buf_done(vbuf, VB2_BUF_STATE_DONE);
> + venc->ready_count++;
> +
> + if (vbuf->flags & V4L2_BUF_FLAG_KEYFRAME)
> + dev_dbg(inst->dev, "[%d][%d]key frame\n", inst->id, frame->info.frame_id);
> +
> + return 0;
> +}
> +
> +static int venc_get_encoded_frames(struct vpu_inst *inst)
> +{
> + struct venc_t *venc;
> + struct venc_frame_t *frame;
> + struct venc_frame_t *tmp;
> +
> + if (!inst || !inst->priv)
> + return -EINVAL;
> +
> + venc = inst->priv;
> + list_for_each_entry_safe(frame, tmp, &venc->frames, list) {
> + if (venc_get_one_encoded_frame(inst, frame,
> + v4l2_m2m_dst_buf_remove(inst->fh.m2m_ctx)))
> + break;
> + list_del_init(&frame->list);
> + vfree(frame);
> + }
> +
> + return 0;
> +}
> +
> +static int venc_frame_encoded(struct vpu_inst *inst, void *arg)
> +{
> + struct vpu_enc_pic_info *info = arg;
> + struct venc_frame_t *frame;
> + struct venc_t *venc;
> + int ret = 0;
> +
> + if (!inst || !info)
> + return -EINVAL;
> + venc = inst->priv;
> + frame = vzalloc(sizeof(*frame));
> + if (!frame)
> + return -ENOMEM;
> +
> + memcpy(&frame->info, info, sizeof(frame->info));
> + frame->bytesused = info->frame_size;
> +
> + vpu_inst_lock(inst);
> + list_add_tail(&frame->list, &venc->frames);
> + venc->encode_count++;
> + venc_get_encoded_frames(inst);
> + vpu_inst_unlock(inst);
> +
> + return ret;
> +}
> +
> +static void venc_buf_done(struct vpu_inst *inst, struct vpu_frame_info *frame)
> +{
> + struct vb2_v4l2_buffer *vbuf;
> + struct vpu_vb2_buffer *vpu_buf;
> +
> + if (!inst || !frame)
> + return;
> +
> + vpu_inst_lock(inst);
> + if (!venc_get_enable(inst->priv, frame->type))
> + goto exit;
> + vbuf = vpu_find_buf_by_sequence(inst, frame->type, frame->sequence);
> + if (!vbuf) {
> + dev_err(inst->dev, "[%d] can't find buf: type %d, sequence %d\n",
> + inst->id, frame->type, frame->sequence);
> + goto exit;
> + }
> +
> + vpu_buf = to_vpu_vb2_buffer(vbuf);
> + vpu_buf->state = VPU_BUF_STATE_IDLE;
> + if (V4L2_TYPE_IS_OUTPUT(frame->type))
> + v4l2_m2m_src_buf_remove_by_buf(inst->fh.m2m_ctx, vbuf);
> + else
> + v4l2_m2m_dst_buf_remove_by_buf(inst->fh.m2m_ctx, vbuf);
> + v4l2_m2m_buf_done(vbuf, VB2_BUF_STATE_DONE);
> +exit:
> + vpu_inst_unlock(inst);
> +}
> +
> +static void venc_set_last_buffer_dequeued(struct vpu_inst *inst)
> +{
> + struct venc_t *venc = inst->priv;
> +
> + if (venc->stopped && list_empty(&venc->frames))
> + vpu_set_last_buffer_dequeued(inst);
> +}
> +
> +static void venc_stop_done(struct vpu_inst *inst)
> +{
> + struct venc_t *venc = inst->priv;
> +
> + vpu_inst_lock(inst);
> + venc->stopped = true;
> + venc_set_last_buffer_dequeued(inst);
> + vpu_inst_unlock(inst);
> +
> + wake_up_all(&venc->wq);
> +}
> +
> +static void venc_event_notify(struct vpu_inst *inst, u32 event, void *data)
> +{
> +}
> +
> +static void venc_release(struct vpu_inst *inst)
> +{
> +}
> +
> +static void venc_cleanup(struct vpu_inst *inst)
> +{
> + struct venc_t *venc;
> +
> + if (!inst)
> + return;
> +
> + venc = inst->priv;
> + vfree(venc);
> + inst->priv = NULL;
> + vfree(inst);
> +}
> +
> +static int venc_start_session(struct vpu_inst *inst, u32 type)
> +{
> + struct venc_t *venc = inst->priv;
> + int stream_buffer_size;
> + int ret;
> +
> + venc_set_enable(venc, type, 1);
> + if ((venc->enable & VENC_ENABLE_MASK) != VENC_ENABLE_MASK)
> + return 0;
> +
> + vpu_iface_init_instance(inst);
> + stream_buffer_size = vpu_iface_get_stream_buffer_size(inst->core);
> + if (stream_buffer_size > 0) {
> + inst->stream_buffer.length = max_t(u32, stream_buffer_size, venc->cpb_size * 3);
> + ret = vpu_alloc_dma(inst->core, &inst->stream_buffer);
> + if (ret)
> + goto error;
> +
> + inst->use_stream_buffer = true;
> + vpu_iface_config_stream_buffer(inst, &inst->stream_buffer);
> + }
> +
> + ret = vpu_iface_set_encode_params(inst, &venc->params, 0);
> + if (ret)
> + goto error;
> + ret = vpu_session_configure_codec(inst);
> + if (ret)
> + goto error;
> +
> + inst->state = VPU_CODEC_STATE_CONFIGURED;
> + /* vpu_iface_config_memory_resource */
> +
> + /* configure encoder expert mode parameters */
> + ret = vpu_iface_set_encode_params(inst, &venc->params, 1);
> + if (ret)
> + goto error;
> +
> + ret = vpu_session_start(inst);
> + if (ret)
> + goto error;
> + inst->state = VPU_CODEC_STATE_STARTED;
> +
> + venc->bitrate_change = false;
> + venc->input_ready = true;
> + venc->frame_count = 0;
> + venc->encode_count = 0;
> + venc->ready_count = 0;
> + venc->stopped = false;
> + vpu_process_output_buffer(inst);
> + if (venc->frame_count == 0)
> + dev_err(inst->dev, "[%d] there is no input when starting\n", inst->id);
> +
> + return 0;
> +error:
> + venc_set_enable(venc, type, 0);
> + inst->state = VPU_CODEC_STATE_DEINIT;
> +
> + vpu_free_dma(&inst->stream_buffer);
> + return ret;
> +}
> +
> +static void venc_cleanup_mem_resource(struct vpu_inst *inst)
> +{
> + struct venc_t *venc;
> + u32 i;
> +
> + WARN_ON(!inst || !inst->priv);
> +
> + venc = inst->priv;
> +
> + for (i = 0; i < ARRAY_SIZE(venc->enc); i++)
> + vpu_free_dma(&venc->enc[i]);
> + for (i = 0; i < ARRAY_SIZE(venc->ref); i++)
> + vpu_free_dma(&venc->ref[i]);
> + for (i = 0; i < ARRAY_SIZE(venc->act); i++)
> + vpu_free_dma(&venc->act[i]);
> +}
> +
> +static void venc_request_mem_resource(struct vpu_inst *inst,
> + u32 enc_frame_size,
> + u32 enc_frame_num,
> + u32 ref_frame_size,
> + u32 ref_frame_num,
> + u32 act_frame_size,
> + u32 act_frame_num)
> +{
> + struct venc_t *venc;
> + u32 i;
> + int ret;
> +
> + WARN_ON(!inst || !inst->priv || !inst->core);
> +
> + venc = inst->priv;
> +
> + if (enc_frame_num > ARRAY_SIZE(venc->enc)) {
> + dev_err(inst->dev, "[%d] enc num(%d) is out of range\n",
> + inst->id, enc_frame_num);
> + return;
> + }
> + if (ref_frame_num > ARRAY_SIZE(venc->ref)) {
> + dev_err(inst->dev, "[%d] ref num(%d) is out of range\n",
> + inst->id, ref_frame_num);
> + return;
> + }
> + if (act_frame_num > ARRAY_SIZE(venc->act)) {
> + dev_err(inst->dev, "[%d] act num(%d) is out of range\n",
> + inst->id, act_frame_num);
> + return;
> + }
> +
> + for (i = 0; i < enc_frame_num; i++) {
> + venc->enc[i].length = enc_frame_size;
> + ret = vpu_alloc_dma(inst->core, &venc->enc[i]);
> + if (ret) {
> + venc_cleanup_mem_resource(inst);
> + return;
> + }
> + }
> + for (i = 0; i < ref_frame_num; i++) {
> + venc->ref[i].length = ref_frame_size;
> + ret = vpu_alloc_dma(inst->core, &venc->ref[i]);
> + if (ret) {
> + venc_cleanup_mem_resource(inst);
> + return;
> + }
> + }
> + if (act_frame_num != 1 || act_frame_size > inst->act.length) {
> + venc_cleanup_mem_resource(inst);
> + return;
> + }
> + venc->act[0].length = act_frame_size;
> + venc->act[0].phys = inst->act.phys;
> + venc->act[0].virt = inst->act.virt;
> +
> + for (i = 0; i < enc_frame_num; i++)
> + vpu_iface_config_memory_resource(inst, MEM_RES_ENC, i, &venc->enc[i]);
> + for (i = 0; i < ref_frame_num; i++)
> + vpu_iface_config_memory_resource(inst, MEM_RES_REF, i, &venc->ref[i]);
> + for (i = 0; i < act_frame_num; i++)
> + vpu_iface_config_memory_resource(inst, MEM_RES_ACT, i, &venc->act[i]);
> +}
> +
> +static void venc_cleanup_frames(struct venc_t *venc)
> +{
> + struct venc_frame_t *frame;
> + struct venc_frame_t *tmp;
> +
> + list_for_each_entry_safe(frame, tmp, &venc->frames, list) {
> + list_del_init(&frame->list);
> + vfree(frame);
> + }
> +}
> +
> +static int venc_stop_session(struct vpu_inst *inst, u32 type)
> +{
> + struct venc_t *venc = inst->priv;
> +
> + venc_set_enable(venc, type, 0);
> + if (venc->enable & VENC_ENABLE_MASK)
> + return 0;
> +
> + if (inst->state == VPU_CODEC_STATE_DEINIT)
> + return 0;
> +
> + if (inst->state != VPU_CODEC_STATE_STOP)
> + venc_request_eos(inst);
> +
> + call_vop(inst, wait_prepare);
> + if (!wait_event_timeout(venc->wq, venc->stopped, VPU_TIMEOUT)) {
> + set_bit(inst->id, &inst->core->hang_mask);
> + vpu_session_debug(inst);
> + }
> + call_vop(inst, wait_finish);
> +
> + inst->state = VPU_CODEC_STATE_DEINIT;
> + venc_cleanup_frames(inst->priv);
> + vpu_free_dma(&inst->stream_buffer);
> + venc_cleanup_mem_resource(inst);
> +
> + return 0;
> +}
> +
> +static int venc_process_output(struct vpu_inst *inst, struct vb2_buffer *vb)
> +{
> + struct venc_t *venc = inst->priv;
> + struct vb2_v4l2_buffer *vbuf;
> + struct vpu_vb2_buffer *vpu_buf = NULL;
> + u32 flags;
> +
> + if (inst->state == VPU_CODEC_STATE_DEINIT)
> + return -EINVAL;
> +
> + vbuf = to_vb2_v4l2_buffer(vb);
> + vpu_buf = to_vpu_vb2_buffer(vbuf);
> + if (inst->state == VPU_CODEC_STATE_STARTED)
> + inst->state = VPU_CODEC_STATE_ACTIVE;
> +
> + flags = vbuf->flags;
> + if (venc->request_key_frame) {
> + vbuf->flags |= V4L2_BUF_FLAG_KEYFRAME;
> + venc->request_key_frame = 0;
> + }
> + if (venc->bitrate_change) {
> + vpu_session_update_parameters(inst, &venc->params);
> + venc->bitrate_change = false;
> + }
> + dev_dbg(inst->dev, "[%d][INPUT TS]%32lld\n", inst->id, vb->timestamp);
> + vpu_iface_input_frame(inst, vb);
> + vbuf->flags = flags;
> + venc->input_ready = false;
> + venc->frame_count++;
> + vpu_buf->state = VPU_BUF_STATE_INUSE;
> +
> + return 0;
> +}
> +
> +static int venc_process_capture(struct vpu_inst *inst, struct vb2_buffer *vb)
> +{
> + struct venc_t *venc;
> + struct venc_frame_t *frame = NULL;
> + struct vb2_v4l2_buffer *vbuf;
> + int ret;
> +
> + venc = inst->priv;
> + if (list_empty(&venc->frames))
> + return -EINVAL;
> +
> + frame = list_first_entry(&venc->frames, struct venc_frame_t, list);
> + vbuf = to_vb2_v4l2_buffer(vb);
> + v4l2_m2m_dst_buf_remove_by_buf(inst->fh.m2m_ctx, vbuf);
> + ret = venc_get_one_encoded_frame(inst, frame, vbuf);
> + if (ret)
> + return ret;
> +
> + list_del_init(&frame->list);
> + vfree(frame);
> + return 0;
> +}
> +
> +static void venc_on_queue_empty(struct vpu_inst *inst, u32 type)
> +{
> + struct venc_t *venc = inst->priv;
> +
> + if (V4L2_TYPE_IS_OUTPUT(type))
> + return;
> +
> + if (venc->stopped)
> + venc_set_last_buffer_dequeued(inst);
> +}
> +
> +static int venc_get_debug_info(struct vpu_inst *inst, char *str, u32 size, u32 i)
> +{
> + struct venc_t *venc = inst->priv;
> + int num = -1;
> +
> + switch (i) {
> + case 0:
> + num = scnprintf(str, size, "profile = %d\n", venc->params.profile);
> + break;
> + case 1:
> + num = scnprintf(str, size, "level = %d\n", venc->params.level);
> + break;
> + case 2:
> + num = scnprintf(str, size, "fps = %d/%d\n",
> + venc->params.frame_rate.numerator,
> + venc->params.frame_rate.denominator);
> + break;
> + case 3:
> + num = scnprintf(str, size, "%d x %d -> %d x %d\n",
> + venc->params.src_width,
> + venc->params.src_height,
> + venc->params.out_width,
> + venc->params.out_height);
> + break;
> + case 4:
> + num = scnprintf(str, size, "(%d, %d) %d x %d\n",
> + venc->params.crop.left,
> + venc->params.crop.top,
> + venc->params.crop.width,
> + venc->params.crop.height);
> + break;
> + case 5:
> + num = scnprintf(str, size,
> + "enable = 0x%x, input = %d, encode = %d, ready = %d, stopped = %d\n",
> + venc->enable,
> + venc->frame_count, venc->encode_count,
> + venc->ready_count,
> + venc->stopped);
> + break;
> + case 6:
> + num = scnprintf(str, size, "gop = %d\n", venc->params.gop_length);
> + break;
> + case 7:
> + num = scnprintf(str, size, "bframes = %d\n", venc->params.bframes);
> + break;
> + case 8:
> + num = scnprintf(str, size, "rc: mode = %d, bitrate = %d, qp = %d\n",
> + venc->params.rc_mode,
> + venc->params.bitrate,
> + venc->params.i_frame_qp);
> + break;
> + case 9:
> + num = scnprintf(str, size, "sar: enable = %d, idc = %d, %d x %d\n",
> + venc->params.sar.enable,
> + venc->params.sar.idc,
> + venc->params.sar.width,
> + venc->params.sar.height);
> +
> + break;
> + case 10:
> + num = scnprintf(str, size,
> + "colorspace: primaries = %d, transfer = %d, matrix = %d, full_range = %d\n",
> + venc->params.color.primaries,
> + venc->params.color.transfer,
> + venc->params.color.matrix,
> + venc->params.color.full_range);
> + break;
> + case 11:
> + num = scnprintf(str, size, "skipped: count = %d, bytes = %d\n",
> + venc->skipped_count, venc->skipped_bytes);
> + break;
> + default:
> + break;
> + }
> +
> + return num;
> +}
> +
> +static struct vpu_inst_ops venc_inst_ops = {
> + .ctrl_init = venc_ctrl_init,
> + .check_ready = venc_check_ready,
> + .input_done = venc_input_done,
> + .get_one_frame = venc_frame_encoded,
> + .buf_done = venc_buf_done,
> + .stop_done = venc_stop_done,
> + .event_notify = venc_event_notify,
> + .release = venc_release,
> + .cleanup = venc_cleanup,
> + .start = venc_start_session,
> + .mem_request = venc_request_mem_resource,
> + .stop = venc_stop_session,
> + .process_output = venc_process_output,
> + .process_capture = venc_process_capture,
> + .on_queue_empty = venc_on_queue_empty,
> + .get_debug_info = venc_get_debug_info,
> + .wait_prepare = vpu_inst_unlock,
> + .wait_finish = vpu_inst_lock,
> +};
> +
> +static void venc_init(struct file *file)
> +{
> + struct vpu_inst *inst = to_inst(file);
> + struct venc_t *venc;
> + struct v4l2_format f;
> + struct v4l2_streamparm parm;
> +
> + venc = inst->priv;
> + venc->params.qp_min = 1;
> + venc->params.qp_max = 51;
> + venc->params.qp_min_i = 1;
> + venc->params.qp_max_i = 51;
> + venc->params.bitrate_max = BITRATE_MAX;
> + venc->params.bitrate_min = BITRATE_MIN;
> +
> + memset(&f, 0, sizeof(f));
> + f.type = V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE;
> + f.fmt.pix_mp.pixelformat = V4L2_PIX_FMT_NV12M;
> + f.fmt.pix_mp.width = 1280;
> + f.fmt.pix_mp.height = 720;
> + f.fmt.pix_mp.field = V4L2_FIELD_NONE;
> + f.fmt.pix_mp.colorspace = V4L2_COLORSPACE_REC709;
> + venc_s_fmt(file, &inst->fh, &f);
> +
> + memset(&f, 0, sizeof(f));
> + f.type = V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE;
> + f.fmt.pix_mp.pixelformat = V4L2_PIX_FMT_H264;
> + f.fmt.pix_mp.width = 1280;
> + f.fmt.pix_mp.height = 720;
> + f.fmt.pix_mp.field = V4L2_FIELD_NONE;
> + venc_s_fmt(file, &inst->fh, &f);
> +
> + memset(&parm, 0, sizeof(parm));
> + parm.type = V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE;
> + parm.parm.capture.timeperframe.numerator = 1;
> + parm.parm.capture.timeperframe.denominator = 30;
> + venc_s_parm(file, &inst->fh, &parm);
> +}
> +
> +static int venc_open(struct file *file)
> +{
> + struct vpu_inst *inst;
> + struct venc_t *venc;
> + int ret;
> +
> + inst = vzalloc(sizeof(*inst));
> + if (!inst)
> + return -ENOMEM;
> +
> + venc = vzalloc(sizeof(*venc));
> + if (!venc) {
> + vfree(inst);
> + return -ENOMEM;
> + }
> +
> + inst->ops = &venc_inst_ops;
> + inst->formats = venc_formats;
> + inst->type = VPU_CORE_TYPE_ENC;
> + inst->priv = venc;
> + INIT_LIST_HEAD(&venc->frames);
> + init_waitqueue_head(&venc->wq);
> +
> + ret = vpu_v4l2_open(file, inst);
> + if (ret)
> + return ret;
> +
> + venc_init(file);
> +
> + return 0;
> +}
> +
> +static const struct v4l2_file_operations venc_fops = {
> + .owner = THIS_MODULE,
> + .open = venc_open,
> + .release = vpu_v4l2_close,
> + .unlocked_ioctl = video_ioctl2,
> + .poll = v4l2_m2m_fop_poll,
> + .mmap = v4l2_m2m_fop_mmap,
> +};
> +
> +const struct v4l2_ioctl_ops *venc_get_ioctl_ops(void)
> +{
> + return &venc_ioctl_ops;
> +}
> +
> +const struct v4l2_file_operations *venc_get_fops(void)
> +{
> + return &venc_fops;
> +}
>

Regards,

Hans

2021-12-03 01:54:30

by Ming Qian

Subject: RE: [EXT] Re: [PATCH v13 06/13] media: amphion: add vpu v4l2 m2m support

> > +
> > +int vpu_v4l2_open(struct file *file, struct vpu_inst *inst)
> > +{
> > + struct vpu_dev *vpu = video_drvdata(file);
> > + struct vpu_func *func;
> > + int ret = 0;
> > +
> > + WARN_ON(!file || !inst || !inst->ops);
> > +
> > + if (inst->type == VPU_CORE_TYPE_ENC)
> > + func = &vpu->encoder;
> > + else
> > + func = &vpu->decoder;
> > +
> > + atomic_set(&inst->ref_count, 0);
> > + vpu_inst_get(inst);
> > + inst->vpu = vpu;
> > + inst->core = vpu_request_core(vpu, inst->type);
> > + if (inst->core)
> > + inst->dev = get_device(inst->core->dev);
> > + mutex_init(&inst->lock);
> > + INIT_LIST_HEAD(&inst->cmd_q);
> > + inst->id = VPU_INST_NULL_ID;
> > + inst->release = vpu_v4l2_release;
> > + inst->pid = current->pid;
> > + inst->tgid = current->tgid;
> > + inst->min_buffer_cap = 2;
> > + inst->min_buffer_out = 2;
>
> Assuming this means the minimum number of buffers needed, why is
> min_buffers_needed set to 1 when initializing the vb2_queue structs?

In my opinion, min_buffers_needed determines when vb2_start_streaming() is called,
as in the following code:
if (q->queued_count >= q->min_buffers_needed) {
... ...
ret = vb2_start_streaming(q);
... ...
}
I want the driver to start a vpu instance as soon as one frame is queued, so I set min_buffers_needed to 1.
min_buffer_cap, on the other hand, is the minimum vb2 buffer count, which changes according to the stream.
I just set the default value to 2; it is updated after the vpu has parsed the stream information.
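To make the rule I'm relying on concrete, here is a tiny standalone sketch; fake_vb2_queue and can_start_streaming are illustrative stand-ins, not the real vb2 types:

```c
#include <stdbool.h>

/* Illustrative stand-in for the vb2 check quoted above: streaming
 * starts only once queued_count reaches min_buffers_needed. */
struct fake_vb2_queue {
	unsigned int queued_count;
	unsigned int min_buffers_needed;
};

static bool can_start_streaming(const struct fake_vb2_queue *q)
{
	return q->queued_count >= q->min_buffers_needed;
}
```

With min_buffers_needed set to 1, the very first queued buffer is enough to trigger vb2_start_streaming().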

>
> > + v4l2_fh_init(&inst->fh, func->vfd);
> > + v4l2_fh_add(&inst->fh);
> > +
> > + ret = call_vop(inst, ctrl_init);
> > + if (ret)
> > + goto error;
> > +
> > + inst->fh.m2m_ctx = v4l2_m2m_ctx_init(func->m2m_dev,
> > + inst, vpu_m2m_queue_init);
> > + if (IS_ERR(inst->fh.m2m_ctx)) {
> > + dev_err(vpu->dev, "v4l2_m2m_ctx_init fail\n");
> > + ret = PTR_ERR(func->m2m_dev);
> > + goto error;
> > + }
> > +
> > + inst->fh.ctrl_handler = &inst->ctrl_handler;
> > + file->private_data = &inst->fh;
> > + inst->state = VPU_CODEC_STATE_DEINIT;
> > + inst->workqueue = alloc_workqueue("vpu_inst", WQ_UNBOUND | WQ_MEM_RECLAIM, 1);
> > + if (inst->workqueue) {
> > + INIT_WORK(&inst->msg_work, vpu_inst_run_work);
> > + ret = kfifo_init(&inst->msg_fifo,
> > + inst->msg_buffer,
> > + roundup_pow_of_two(sizeof(inst->msg_buffer)));
> > + if (ret) {
> > + destroy_workqueue(inst->workqueue);
> > + inst->workqueue = NULL;
> > + }
> > + }
> > + vpu_trace(vpu->dev, "tgid = %d, pid = %d, type = %s, inst = %p\n",
> > + inst->tgid, inst->pid, vpu_core_type_desc(inst->type), inst);
> > +
> > + return 0;
> > +error:
> > + vpu_inst_put(inst);
> > + return ret;
> > +}
> > +

2021-12-03 04:48:54

by Nicolas Dufresne

[permalink] [raw]
Subject: Re: [PATCH v13 06/13] media: amphion: add vpu v4l2 m2m support

On Tuesday, 30 November 2021 at 17:48 +0800, Ming Qian wrote:
> vpu_v4l2.c implements the v4l2 m2m driver methods.
> vpu_helpers.c implements the common helper functions.
> vpu_color.c converts between the v4l2 colorspace values and the corresponding ISO codes.
>
> Signed-off-by: Ming Qian <[email protected]>
> Signed-off-by: Shijie Qin <[email protected]>
> Signed-off-by: Zhou Peng <[email protected]>
> Reported-by: kernel test robot <[email protected]>
> ---
> drivers/media/platform/amphion/vpu_color.c | 190 +++++
> drivers/media/platform/amphion/vpu_helpers.c | 436 ++++++++++++
> drivers/media/platform/amphion/vpu_helpers.h | 71 ++
> drivers/media/platform/amphion/vpu_v4l2.c | 703 +++++++++++++++++++
> drivers/media/platform/amphion/vpu_v4l2.h | 54 ++
> 5 files changed, 1454 insertions(+)
> create mode 100644 drivers/media/platform/amphion/vpu_color.c
> create mode 100644 drivers/media/platform/amphion/vpu_helpers.c
> create mode 100644 drivers/media/platform/amphion/vpu_helpers.h
> create mode 100644 drivers/media/platform/amphion/vpu_v4l2.c
> create mode 100644 drivers/media/platform/amphion/vpu_v4l2.h
>
> diff --git a/drivers/media/platform/amphion/vpu_color.c b/drivers/media/platform/amphion/vpu_color.c
> new file mode 100644
> index 000000000000..c3f45dd9ee30
> --- /dev/null
> +++ b/drivers/media/platform/amphion/vpu_color.c
> @@ -0,0 +1,190 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Copyright 2020-2021 NXP
> + */
> +
> +#include <linux/init.h>
> +#include <linux/device.h>
> +#include <linux/ioctl.h>
> +#include <linux/list.h>
> +#include <linux/module.h>
> +#include <linux/kernel.h>
> +#include <linux/slab.h>
> +#include <linux/delay.h>
> +#include <linux/types.h>
> +#include <media/v4l2-device.h>
> +#include "vpu.h"
> +#include "vpu_helpers.h"
> +
> +static const u8 colorprimaries[] = {
> + 0,
> + V4L2_COLORSPACE_REC709, /*Rec. ITU-R BT.709-6*/
> + 0,
> + 0,
> + V4L2_COLORSPACE_470_SYSTEM_M, /*Rec. ITU-R BT.470-6 System M*/
> + V4L2_COLORSPACE_470_SYSTEM_BG,/*Rec. ITU-R BT.470-6 System B, G*/
> + V4L2_COLORSPACE_SMPTE170M, /*SMPTE170M*/
> + V4L2_COLORSPACE_SMPTE240M, /*SMPTE240M*/
> + 0, /*Generic film*/
> + V4L2_COLORSPACE_BT2020, /*Rec. ITU-R BT.2020-2*/
> + 0, /*SMPTE ST 428-1*/
> +};
> +
> +static const u8 colortransfers[] = {
> + 0,
> + V4L2_XFER_FUNC_709, /*Rec. ITU-R BT.709-6*/
> + 0,
> + 0,
> + 0, /*Rec. ITU-R BT.470-6 System M*/
> + 0, /*Rec. ITU-R BT.470-6 System B, G*/
> + V4L2_XFER_FUNC_709, /*SMPTE170M*/
> + V4L2_XFER_FUNC_SMPTE240M,/*SMPTE240M*/
> + V4L2_XFER_FUNC_NONE, /*Linear transfer characteristics*/
> + 0,
> + 0,
> + 0, /*IEC 61966-2-4*/
> + 0, /*Rec. ITU-R BT.1361-0 extended colour gamut*/
> + V4L2_XFER_FUNC_SRGB, /*IEC 61966-2-1 sRGB or sYCC*/
> + V4L2_XFER_FUNC_709, /*Rec. ITU-R BT.2020-2 (10 bit system)*/
> + V4L2_XFER_FUNC_709, /*Rec. ITU-R BT.2020-2 (12 bit system)*/
> + V4L2_XFER_FUNC_SMPTE2084,/*SMPTE ST 2084*/
> + 0, /*SMPTE ST 428-1*/
> + 0 /*Rec. ITU-R BT.2100-0 hybrid log-gamma (HLG)*/
> +};
> +
> +static const u8 colormatrixcoefs[] = {
> + 0,
> + V4L2_YCBCR_ENC_709, /*Rec. ITU-R BT.709-6*/
> + 0,
> + 0,
> + 0, /*Title 47 Code of Federal Regulations*/
> + V4L2_YCBCR_ENC_601, /*Rec. ITU-R BT.601-7 625*/
> + V4L2_YCBCR_ENC_601, /*Rec. ITU-R BT.601-7 525*/
> + V4L2_YCBCR_ENC_SMPTE240M, /*SMPTE240M*/
> + 0,
> + V4L2_YCBCR_ENC_BT2020, /*Rec. ITU-R BT.2020-2*/
> + V4L2_YCBCR_ENC_BT2020_CONST_LUM /*Rec. ITU-R BT.2020-2 constant*/
> +};
> +
> +u32 vpu_color_cvrt_primaries_v2i(u32 primaries)
> +{
> + return VPU_ARRAY_FIND(colorprimaries, primaries);
> +}
> +
> +u32 vpu_color_cvrt_primaries_i2v(u32 primaries)
> +{
> + return VPU_ARRAY_AT(colorprimaries, primaries);
> +}
> +
> +u32 vpu_color_cvrt_transfers_v2i(u32 transfers)
> +{
> + return VPU_ARRAY_FIND(colortransfers, transfers);
> +}
> +
> +u32 vpu_color_cvrt_transfers_i2v(u32 transfers)
> +{
> + return VPU_ARRAY_AT(colortransfers, transfers);
> +}
> +
> +u32 vpu_color_cvrt_matrix_v2i(u32 matrix)
> +{
> + return VPU_ARRAY_FIND(colormatrixcoefs, matrix);
> +}
> +
> +u32 vpu_color_cvrt_matrix_i2v(u32 matrix)
> +{
> + return VPU_ARRAY_AT(colormatrixcoefs, matrix);
> +}
> +
> +u32 vpu_color_cvrt_full_range_v2i(u32 full_range)
> +{
> + return (full_range == V4L2_QUANTIZATION_FULL_RANGE);
> +}
> +
> +u32 vpu_color_cvrt_full_range_i2v(u32 full_range)
> +{
> + if (full_range)
> + return V4L2_QUANTIZATION_FULL_RANGE;
> +
> + return V4L2_QUANTIZATION_LIM_RANGE;
> +}
> +
> +int vpu_color_check_primaries(u32 primaries)
> +{
> + return vpu_color_cvrt_primaries_v2i(primaries) ? 0 : -EINVAL;
> +}
> +
> +int vpu_color_check_transfers(u32 transfers)
> +{
> + return vpu_color_cvrt_transfers_v2i(transfers) ? 0 : -EINVAL;
> +}
> +
> +int vpu_color_check_matrix(u32 matrix)
> +{
> + return vpu_color_cvrt_matrix_v2i(matrix) ? 0 : -EINVAL;
> +}
> +
> +int vpu_color_check_full_range(u32 full_range)
> +{
> + int ret = -EINVAL;
> +
> + switch (full_range) {
> + case V4L2_QUANTIZATION_FULL_RANGE:
> + case V4L2_QUANTIZATION_LIM_RANGE:
> + ret = 0;
> + break;
> + default:
> + break;
> +
> + }
> +
> + return ret;
> +}
> +
> +int vpu_color_get_default(u32 primaries,
> + u32 *ptransfers, u32 *pmatrix, u32 *pfull_range)
> +{
> + u32 transfers;
> + u32 matrix;
> + u32 full_range;
> +
> + switch (primaries) {
> + case V4L2_COLORSPACE_REC709:
> + transfers = V4L2_XFER_FUNC_709;
> + matrix = V4L2_YCBCR_ENC_709;
> + full_range = V4L2_QUANTIZATION_LIM_RANGE;
> + break;
> + case V4L2_COLORSPACE_470_SYSTEM_M:
> + case V4L2_COLORSPACE_470_SYSTEM_BG:
> + case V4L2_COLORSPACE_SMPTE170M:
> + transfers = V4L2_XFER_FUNC_709;
> + matrix = V4L2_YCBCR_ENC_601;
> + full_range = V4L2_QUANTIZATION_LIM_RANGE;
> + break;
> + case V4L2_COLORSPACE_SMPTE240M:
> + transfers = V4L2_XFER_FUNC_SMPTE240M;
> + matrix = V4L2_YCBCR_ENC_SMPTE240M;
> + full_range = V4L2_QUANTIZATION_LIM_RANGE;
> + break;
> + case V4L2_COLORSPACE_BT2020:
> + transfers = V4L2_XFER_FUNC_709;
> + matrix = V4L2_YCBCR_ENC_BT2020;
> + full_range = V4L2_QUANTIZATION_LIM_RANGE;
> + break;
> + default:
> + transfers = V4L2_XFER_FUNC_709;
> + matrix = V4L2_YCBCR_ENC_709;
> + full_range = V4L2_QUANTIZATION_LIM_RANGE;
> + break;
> + }
> +
> + if (ptransfers)
> + *ptransfers = transfers;
> + if (pmatrix)
> + *pmatrix = matrix;
> + if (pfull_range)
> + *pfull_range = full_range;
> +
> +
> + return 0;
> +}
> diff --git a/drivers/media/platform/amphion/vpu_helpers.c b/drivers/media/platform/amphion/vpu_helpers.c
> new file mode 100644
> index 000000000000..4b9fb82f24fd
> --- /dev/null
> +++ b/drivers/media/platform/amphion/vpu_helpers.c
> @@ -0,0 +1,436 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Copyright 2020-2021 NXP
> + */
> +
> +#include <linux/init.h>
> +#include <linux/interconnect.h>
> +#include <linux/ioctl.h>
> +#include <linux/list.h>
> +#include <linux/kernel.h>
> +#include <linux/module.h>
> +#include <linux/platform_device.h>
> +#include "vpu.h"
> +#include "vpu_core.h"
> +#include "vpu_rpc.h"
> +#include "vpu_helpers.h"
> +
> +int vpu_helper_find_in_array_u8(const u8 *array, u32 size, u32 x)
> +{
> + int i;
> +
> + for (i = 0; i < size; i++) {
> + if (array[i] == x)
> + return i;
> + }
> +
> + return 0;
> +}
> +
> +bool vpu_helper_check_type(struct vpu_inst *inst, u32 type)
> +{
> + const struct vpu_format *pfmt;
> +
> + for (pfmt = inst->formats; pfmt->pixfmt; pfmt++) {
> + if (!vpu_iface_check_format(inst, pfmt->pixfmt))
> + continue;
> + if (pfmt->type == type)
> + return true;
> + }
> +
> + return false;
> +}
> +
> +const struct vpu_format *vpu_helper_find_format(struct vpu_inst *inst, u32 type, u32 pixelfmt)
> +{
> + const struct vpu_format *pfmt;
> +
> + if (!inst || !inst->formats)
> + return NULL;
> +
> + if (!vpu_iface_check_format(inst, pixelfmt))
> + return NULL;
> +
> + for (pfmt = inst->formats; pfmt->pixfmt; pfmt++) {
> + if (pfmt->pixfmt == pixelfmt && (!type || type == pfmt->type))
> + return pfmt;
> + }
> +
> + return NULL;
> +}
> +
> +const struct vpu_format *vpu_helper_enum_format(struct vpu_inst *inst, u32 type, int index)
> +{
> + const struct vpu_format *pfmt;
> + int i = 0;
> +
> + if (!inst || !inst->formats)
> + return NULL;
> +
> + for (pfmt = inst->formats; pfmt->pixfmt; pfmt++) {
> + if (!vpu_iface_check_format(inst, pfmt->pixfmt))
> + continue;
> +
> + if (pfmt->type == type) {
> + if (index == i)
> + return pfmt;
> + i++;
> + }
> + }
> +
> + return NULL;
> +}
> +
> +u32 vpu_helper_valid_frame_width(struct vpu_inst *inst, u32 width)
> +{
> + const struct vpu_core_resources *res;
> +
> + if (!inst)
> + return width;
> +
> + res = vpu_get_resource(inst);
> + if (!res)
> + return width;
> + if (res->max_width)
> + width = clamp(width, res->min_width, res->max_width);
> + if (res->step_width)
> + width = ALIGN(width, res->step_width);
> +
> + return width;
> +}
> +
> +u32 vpu_helper_valid_frame_height(struct vpu_inst *inst, u32 height)
> +{
> + const struct vpu_core_resources *res;
> +
> + if (!inst)
> + return height;
> +
> + res = vpu_get_resource(inst);
> + if (!res)
> + return height;
> + if (res->max_height)
> + height = clamp(height, res->min_height, res->max_height);
> + if (res->step_height)
> + height = ALIGN(height, res->step_height);
> +
> + return height;
> +}
> +
> +static u32 get_nv12_plane_size(u32 width, u32 height, int plane_no,
> + u32 stride, u32 interlaced, u32 *pbl)
> +{
> + u32 bytesperline;
> + u32 size = 0;
> +
> + bytesperline = ALIGN(width, stride);
> + if (pbl)
> + bytesperline = max(bytesperline, *pbl);
> + height = ALIGN(height, 2);
> + if (plane_no == 0)
> + size = bytesperline * height;
> + else if (plane_no == 1)
> + size = bytesperline * height >> 1;
> + if (pbl)
> + *pbl = bytesperline;
> +
> + return size;
> +}
> +
> +static u32 get_tiled_8l128_plane_size(u32 fmt, u32 width, u32 height, int plane_no,
> + u32 stride, u32 interlaced, u32 *pbl)
> +{
> + u32 ws = 3;
> + u32 hs = 7;
> + u32 bitdepth = 8;
> + u32 bytesperline;
> + u32 size = 0;
> +
> + if (interlaced)
> + hs++;

As discussed earlier, when producing tiled output, this driver should negotiate
V4L2_FIELD_ALTERNATE and ensure the sequence numbers for paired fields match
as per spec. The height being halved is common with this type of interlaced
output.

Note that this is a bit of a hole in the GStreamer fluster integration (there is
nothing to merge fields); you can probably just move NV12 up, which according to
its frame size will be interleaved, by placing that format first in your
enum_fmt implementation. Though that conflicts with the intent to place the
most "native" format first. I can also provide a patch that will force NV12 in fluster
for current mainline testing.

> + if (fmt == V4L2_PIX_FMT_NV12MT_10BE_8L128)
> + bitdepth = 10;
> + bytesperline = DIV_ROUND_UP(width * bitdepth, BITS_PER_BYTE);
> + bytesperline = ALIGN(bytesperline, 1 << ws);
> + bytesperline = ALIGN(bytesperline, stride);
> + if (pbl)
> + bytesperline = max(bytesperline, *pbl);
> + height = ALIGN(height, 1 << hs);
> + if (plane_no == 0)
> + size = bytesperline * height;
> + else if (plane_no == 1)
> + size = (bytesperline * ALIGN(height, 1 << (hs + 1))) >> 1;
> + if (pbl)
> + *pbl = bytesperline;
> +
> + return size;
> +}
> +
> +static u32 get_default_plane_size(u32 width, u32 height, int plane_no,
> + u32 stride, u32 interlaced, u32 *pbl)
> +{
> + u32 bytesperline;
> + u32 size = 0;
> +
> + bytesperline = ALIGN(width, stride);
> + if (pbl)
> + bytesperline = max(bytesperline, *pbl);
> + if (plane_no == 0)
> + size = bytesperline * height;
> + if (pbl)
> + *pbl = bytesperline;
> +
> + return size;
> +}
> +
> +u32 vpu_helper_get_plane_size(u32 fmt, u32 w, u32 h, int plane_no,
> + u32 stride, u32 interlaced, u32 *pbl)
> +{
> + switch (fmt) {
> + case V4L2_PIX_FMT_NV12M:
> + return get_nv12_plane_size(w, h, plane_no, stride, interlaced, pbl);
> + case V4L2_PIX_FMT_NV12MT_8L128:
> + case V4L2_PIX_FMT_NV12MT_10BE_8L128:
> + return get_tiled_8l128_plane_size(fmt, w, h, plane_no, stride, interlaced, pbl);
> + default:
> + return get_default_plane_size(w, h, plane_no, stride, interlaced, pbl);
> + }
> +}
> +
> +u32 vpu_helper_copy_from_stream_buffer(struct vpu_buffer *stream_buffer,
> + u32 *rptr, u32 size, void *dst)
> +{
> + u32 offset;
> + u32 start;
> + u32 end;
> + void *virt;
> +
> + if (!stream_buffer || !rptr || !dst)
> + return -EINVAL;
> +
> + if (!size)
> + return 0;
> +
> + offset = *rptr;
> + start = stream_buffer->phys;
> + end = start + stream_buffer->length;
> + virt = stream_buffer->virt;
> +
> + if (offset < start || offset > end)
> + return -EINVAL;
> +
> + if (offset + size <= end) {
> + memcpy(dst, virt + (offset - start), size);
> + } else {
> + memcpy(dst, virt + (offset - start), end - offset);
> + memcpy(dst + end - offset, virt, size + offset - end);
> + }
> +
> + *rptr = vpu_helper_step_walk(stream_buffer, offset, size);
> + return size;
> +}
> +
> +u32 vpu_helper_copy_to_stream_buffer(struct vpu_buffer *stream_buffer,
> + u32 *wptr, u32 size, void *src)
> +{
> + u32 offset;
> + u32 start;
> + u32 end;
> + void *virt;
> +
> + if (!stream_buffer || !wptr || !src)
> + return -EINVAL;
> +
> + if (!size)
> + return 0;
> +
> + offset = *wptr;
> + start = stream_buffer->phys;
> + end = start + stream_buffer->length;
> + virt = stream_buffer->virt;
> + if (offset < start || offset > end)
> + return -EINVAL;
> +
> + if (offset + size <= end) {
> + memcpy(virt + (offset - start), src, size);
> + } else {
> + memcpy(virt + (offset - start), src, end - offset);
> + memcpy(virt, src + end - offset, size + offset - end);
> + }
> +
> + *wptr = vpu_helper_step_walk(stream_buffer, offset, size);
> +
> + return size;
> +}
> +
> +u32 vpu_helper_memset_stream_buffer(struct vpu_buffer *stream_buffer,
> + u32 *wptr, u8 val, u32 size)
> +{
> + u32 offset;
> + u32 start;
> + u32 end;
> + void *virt;
> +
> + if (!stream_buffer || !wptr)
> + return -EINVAL;
> +
> + if (!size)
> + return 0;
> +
> + offset = *wptr;
> + start = stream_buffer->phys;
> + end = start + stream_buffer->length;
> + virt = stream_buffer->virt;
> + if (offset < start || offset > end)
> + return -EINVAL;
> +
> + if (offset + size <= end) {
> + memset(virt + (offset - start), val, size);
> + } else {
> + memset(virt + (offset - start), val, end - offset);
> + memset(virt, val, size + offset - end);
> + }
> +
> + offset += size;
> + if (offset >= end)
> + offset -= stream_buffer->length;
> +
> + *wptr = offset;
> +
> + return size;
> +}
> +
> +u32 vpu_helper_get_free_space(struct vpu_inst *inst)
> +{
> + struct vpu_rpc_buffer_desc desc;
> +
> + if (vpu_iface_get_stream_buffer_desc(inst, &desc))
> + return 0;
> +
> + if (desc.rptr > desc.wptr)
> + return desc.rptr - desc.wptr;
> + else if (desc.rptr < desc.wptr)
> + return (desc.end - desc.start + desc.rptr - desc.wptr);
> + else
> + return desc.end - desc.start;
> +}
> +
> +u32 vpu_helper_get_used_space(struct vpu_inst *inst)
> +{
> + struct vpu_rpc_buffer_desc desc;
> +
> + if (vpu_iface_get_stream_buffer_desc(inst, &desc))
> + return 0;
> +
> + if (desc.wptr > desc.rptr)
> + return desc.wptr - desc.rptr;
> + else if (desc.wptr < desc.rptr)
> + return (desc.end - desc.start + desc.wptr - desc.rptr);
> + else
> + return 0;
> +}
> +
> +int vpu_helper_g_volatile_ctrl(struct v4l2_ctrl *ctrl)
> +{
> + struct vpu_inst *inst = ctrl_to_inst(ctrl);
> +
> + switch (ctrl->id) {
> + case V4L2_CID_MIN_BUFFERS_FOR_CAPTURE:
> + ctrl->val = inst->min_buffer_cap;
> + break;
> + case V4L2_CID_MIN_BUFFERS_FOR_OUTPUT:
> + ctrl->val = inst->min_buffer_out;
> + break;
> + default:
> + return -EINVAL;
> + }
> +
> + return 0;
> +}
> +
> +u32 vpu_helper_calc_coprime(u32 *a, u32 *b)
> +{
> + int m = *a;
> + int n = *b;
> +
> + if (m == 0)
> + return n;
> + if (n == 0)
> + return m;
> +
> + while (n != 0) {
> + int tmp = m % n;
> +
> + m = n;
> + n = tmp;
> + }
> + *a = (*a) / m;
> + *b = (*b) / m;
> +
> + return m;
> +}
> +
> +#define READ_BYTE(buffer, pos) (*(u8 *)((buffer)->virt + ((pos) % buffer->length)))
> +int vpu_helper_find_startcode(struct vpu_buffer *stream_buffer,
> + u32 pixelformat, u32 offset, u32 bytesused)
> +{
> + u32 start_code;
> + int start_code_size;
> + u32 val = 0;
> + int i;
> + int ret = -EINVAL;
> +
> + if (!stream_buffer || !stream_buffer->virt)
> + return -EINVAL;
> +
> + switch (pixelformat) {
> + case V4L2_PIX_FMT_H264:
> + start_code_size = 4;
> + start_code = 0x00000001;
> + break;
> + default:
> + return 0;
> + }
> +
> + for (i = 0; i < bytesused; i++) {
> + val = (val << 8) | READ_BYTE(stream_buffer, offset + i);
> + if (i < start_code_size - 1)
> + continue;
> + if (val == start_code) {
> + ret = i + 1 - start_code_size;
> + break;
> + }
> + }
> +
> + return ret;
> +}
> +
> +int vpu_find_dst_by_src(struct vpu_pair *pairs, u32 cnt, u32 src)
> +{
> + u32 i;
> +
> + if (!pairs || !cnt)
> + return -EINVAL;
> +
> + for (i = 0; i < cnt; i++) {
> + if (pairs[i].src == src)
> + return pairs[i].dst;
> + }
> +
> + return -EINVAL;
> +}
> +
> +int vpu_find_src_by_dst(struct vpu_pair *pairs, u32 cnt, u32 dst)
> +{
> + u32 i;
> +
> + if (!pairs || !cnt)
> + return -EINVAL;
> +
> + for (i = 0; i < cnt; i++) {
> + if (pairs[i].dst == dst)
> + return pairs[i].src;
> + }
> +
> + return -EINVAL;
> +}
> diff --git a/drivers/media/platform/amphion/vpu_helpers.h b/drivers/media/platform/amphion/vpu_helpers.h
> new file mode 100644
> index 000000000000..65d4451ad8a1
> --- /dev/null
> +++ b/drivers/media/platform/amphion/vpu_helpers.h
> @@ -0,0 +1,71 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * Copyright 2020-2021 NXP
> + */
> +
> +#ifndef _AMPHION_VPU_HELPERS_H
> +#define _AMPHION_VPU_HELPERS_H
> +
> +struct vpu_pair {
> + u32 src;
> + u32 dst;
> +};
> +
> +#define MAKE_TIMESTAMP(s, ns) (((s32)(s) * NSEC_PER_SEC) + (ns))
> +#define VPU_INVALID_TIMESTAMP MAKE_TIMESTAMP(-1, 0)
> +#define VPU_ARRAY_AT(array, i) (((i) < ARRAY_SIZE(array)) ? array[i] : 0)
> +#define VPU_ARRAY_FIND(array, x) vpu_helper_find_in_array_u8(array, ARRAY_SIZE(array), x)
> +
> +int vpu_helper_find_in_array_u8(const u8 *array, u32 size, u32 x);
> +bool vpu_helper_check_type(struct vpu_inst *inst, u32 type);
> +const struct vpu_format *vpu_helper_find_format(struct vpu_inst *inst, u32 type, u32 pixelfmt);
> +const struct vpu_format *vpu_helper_enum_format(struct vpu_inst *inst, u32 type, int index);
> +u32 vpu_helper_valid_frame_width(struct vpu_inst *inst, u32 width);
> +u32 vpu_helper_valid_frame_height(struct vpu_inst *inst, u32 height);
> +u32 vpu_helper_get_plane_size(u32 fmt, u32 width, u32 height, int plane_no,
> + u32 stride, u32 interlaced, u32 *pbl);
> +u32 vpu_helper_copy_from_stream_buffer(struct vpu_buffer *stream_buffer,
> + u32 *rptr, u32 size, void *dst);
> +u32 vpu_helper_copy_to_stream_buffer(struct vpu_buffer *stream_buffer,
> + u32 *wptr, u32 size, void *src);
> +u32 vpu_helper_memset_stream_buffer(struct vpu_buffer *stream_buffer,
> + u32 *wptr, u8 val, u32 size);
> +u32 vpu_helper_get_free_space(struct vpu_inst *inst);
> +u32 vpu_helper_get_used_space(struct vpu_inst *inst);
> +int vpu_helper_g_volatile_ctrl(struct v4l2_ctrl *ctrl);
> +u32 vpu_helper_calc_coprime(u32 *a, u32 *b);
> +void vpu_helper_get_kmp_next(const u8 *pattern, int *next, int size);
> +int vpu_helper_kmp_search(u8 *s, int s_len, const u8 *p, int p_len, int *next);
> +int vpu_helper_kmp_search_in_stream_buffer(struct vpu_buffer *stream_buffer,
> + u32 offset, int bytesused,
> + const u8 *p, int p_len, int *next);
> +int vpu_helper_find_startcode(struct vpu_buffer *stream_buffer,
> + u32 pixelformat, u32 offset, u32 bytesused);
> +
> +static inline u32 vpu_helper_step_walk(struct vpu_buffer *stream_buffer, u32 pos, u32 step)
> +{
> + pos += step;
> + if (pos > stream_buffer->phys + stream_buffer->length)
> + pos -= stream_buffer->length;
> +
> + return pos;
> +}
> +
> +int vpu_color_check_primaries(u32 primaries);
> +int vpu_color_check_transfers(u32 transfers);
> +int vpu_color_check_matrix(u32 matrix);
> +int vpu_color_check_full_range(u32 full_range);
> +u32 vpu_color_cvrt_primaries_v2i(u32 primaries);
> +u32 vpu_color_cvrt_primaries_i2v(u32 primaries);
> +u32 vpu_color_cvrt_transfers_v2i(u32 transfers);
> +u32 vpu_color_cvrt_transfers_i2v(u32 transfers);
> +u32 vpu_color_cvrt_matrix_v2i(u32 matrix);
> +u32 vpu_color_cvrt_matrix_i2v(u32 matrix);
> +u32 vpu_color_cvrt_full_range_v2i(u32 full_range);
> +u32 vpu_color_cvrt_full_range_i2v(u32 full_range);
> +int vpu_color_get_default(u32 primaries,
> + u32 *ptransfers, u32 *pmatrix, u32 *pfull_range);
> +
> +int vpu_find_dst_by_src(struct vpu_pair *pairs, u32 cnt, u32 src);
> +int vpu_find_src_by_dst(struct vpu_pair *pairs, u32 cnt, u32 dst);
> +#endif
> diff --git a/drivers/media/platform/amphion/vpu_v4l2.c b/drivers/media/platform/amphion/vpu_v4l2.c
> new file mode 100644
> index 000000000000..909a94d5aa8a
> --- /dev/null
> +++ b/drivers/media/platform/amphion/vpu_v4l2.c
> @@ -0,0 +1,703 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Copyright 2020-2021 NXP
> + */
> +
> +#include <linux/init.h>
> +#include <linux/interconnect.h>
> +#include <linux/ioctl.h>
> +#include <linux/list.h>
> +#include <linux/kernel.h>
> +#include <linux/module.h>
> +#include <linux/pm_runtime.h>
> +#include <linux/videodev2.h>
> +#include <media/v4l2-device.h>
> +#include <media/v4l2-event.h>
> +#include <media/v4l2-mem2mem.h>
> +#include <media/v4l2-ioctl.h>
> +#include <media/videobuf2-v4l2.h>
> +#include <media/videobuf2-dma-contig.h>
> +#include <media/videobuf2-vmalloc.h>
> +#include "vpu.h"
> +#include "vpu_core.h"
> +#include "vpu_v4l2.h"
> +#include "vpu_msgs.h"
> +#include "vpu_helpers.h"
> +
> +void vpu_inst_lock(struct vpu_inst *inst)
> +{
> + mutex_lock(&inst->lock);
> +}
> +
> +void vpu_inst_unlock(struct vpu_inst *inst)
> +{
> + mutex_unlock(&inst->lock);
> +}
> +
> +dma_addr_t vpu_get_vb_phy_addr(struct vb2_buffer *vb, u32 plane_no)
> +{
> + if (plane_no >= vb->num_planes)
> + return 0;
> + return vb2_dma_contig_plane_dma_addr(vb, plane_no) +
> + vb->planes[plane_no].data_offset;
> +}
> +
> +unsigned int vpu_get_vb_length(struct vb2_buffer *vb, u32 plane_no)
> +{
> + if (plane_no >= vb->num_planes)
> + return 0;
> + return vb2_plane_size(vb, plane_no) - vb->planes[plane_no].data_offset;
> +}
> +
> +void vpu_v4l2_set_error(struct vpu_inst *inst)
> +{
> + struct vb2_queue *src_q = v4l2_m2m_get_src_vq(inst->fh.m2m_ctx);
> + struct vb2_queue *dst_q = v4l2_m2m_get_dst_vq(inst->fh.m2m_ctx);
> +
> + dev_err(inst->dev, "some error occurs in codec\n");
> + if (src_q)
> + src_q->error = 1;
> + if (dst_q)
> + dst_q->error = 1;
> +}
> +
> +int vpu_notify_eos(struct vpu_inst *inst)
> +{
> + const struct v4l2_event ev = {
> + .id = 0,
> + .type = V4L2_EVENT_EOS
> + };
> +
> + vpu_trace(inst->dev, "[%d]\n", inst->id);
> + v4l2_event_queue_fh(&inst->fh, &ev);
> +
> + return 0;
> +}
> +
> +int vpu_notify_source_change(struct vpu_inst *inst)
> +{
> + const struct v4l2_event ev = {
> + .id = 0,
> + .type = V4L2_EVENT_SOURCE_CHANGE,
> + .u.src_change.changes = V4L2_EVENT_SRC_CH_RESOLUTION
> + };
> +
> + vpu_trace(inst->dev, "[%d]\n", inst->id);
> + v4l2_event_queue_fh(&inst->fh, &ev);
> + return 0;
> +}
> +
> +int vpu_set_last_buffer_dequeued(struct vpu_inst *inst)
> +{
> + struct vb2_queue *q;
> +
> + if (!inst || !inst->fh.m2m_ctx)
> + return -EINVAL;
> +
> + q = v4l2_m2m_get_dst_vq(inst->fh.m2m_ctx);
> + if (!list_empty(&q->done_list))
> + return -EINVAL;
> +
> + vpu_trace(inst->dev, "last buffer dequeued\n");
> + q->last_buffer_dequeued = true;
> + wake_up(&q->done_wq);
> + vpu_notify_eos(inst);
> + return 0;
> +}
> +
> +const struct vpu_format *vpu_try_fmt_common(struct vpu_inst *inst,
> + struct v4l2_format *f)
> +{
> + struct v4l2_pix_format_mplane *pixmp = &f->fmt.pix_mp;
> + u32 type = f->type;
> + u32 stride = 1;
> + u32 bytesperline;
> + u32 sizeimage;
> + const struct vpu_format *fmt;
> + const struct vpu_core_resources *res;
> + int i;
> +
> + fmt = vpu_helper_find_format(inst, type, pixmp->pixelformat);
> + if (!fmt) {
> + fmt = vpu_helper_enum_format(inst, type, 0);
> + if (!fmt)
> + return NULL;
> + pixmp->pixelformat = fmt->pixfmt;
> + }
> +
> + res = vpu_get_resource(inst);
> + if (res)
> + stride = res->stride;
> + if (pixmp->width)
> + pixmp->width = vpu_helper_valid_frame_width(inst, pixmp->width);
> + if (pixmp->height)
> + pixmp->height = vpu_helper_valid_frame_height(inst, pixmp->height);
> + pixmp->flags = fmt->flags;
> + pixmp->num_planes = fmt->num_planes;
> + if (pixmp->field == V4L2_FIELD_ANY)
> + pixmp->field = V4L2_FIELD_NONE;
> + for (i = 0; i < pixmp->num_planes; i++) {
> + bytesperline = max_t(s32, pixmp->plane_fmt[i].bytesperline, 0);
> + sizeimage = vpu_helper_get_plane_size(pixmp->pixelformat,
> + pixmp->width, pixmp->height, i, stride,
> + pixmp->field == V4L2_FIELD_INTERLACED ? 1 : 0,
> + &bytesperline);
> + sizeimage = max_t(s32, pixmp->plane_fmt[i].sizeimage, sizeimage);
> + pixmp->plane_fmt[i].bytesperline = bytesperline;
> + pixmp->plane_fmt[i].sizeimage = sizeimage;
> + }
> +
> + return fmt;
> +}
> +
> +static bool vpu_check_ready(struct vpu_inst *inst, u32 type)
> +{
> + if (!inst)
> + return false;
> + if (inst->state == VPU_CODEC_STATE_DEINIT || inst->id < 0)
> + return false;
> + if (!inst->ops->check_ready)
> + return true;
> + return call_vop(inst, check_ready, type);
> +}
> +
> +int vpu_process_output_buffer(struct vpu_inst *inst)
> +{
> + struct v4l2_m2m_buffer *buf = NULL;
> + struct vpu_vb2_buffer *vpu_buf = NULL;
> +
> + if (!inst)
> + return -EINVAL;
> +
> + if (!vpu_check_ready(inst, inst->out_format.type))
> + return -EINVAL;
> +
> + v4l2_m2m_for_each_src_buf(inst->fh.m2m_ctx, buf) {
> + vpu_buf = container_of(buf, struct vpu_vb2_buffer, m2m_buf);
> + if (vpu_buf->state == VPU_BUF_STATE_IDLE)
> + break;
> + vpu_buf = NULL;
> + }
> +
> + if (!vpu_buf)
> + return -EINVAL;
> +
> + dev_dbg(inst->dev, "[%d]frame id = %d / %d\n",
> + inst->id, vpu_buf->m2m_buf.vb.sequence, inst->sequence);
> + return call_vop(inst, process_output, &vpu_buf->m2m_buf.vb.vb2_buf);
> +}
> +
> +int vpu_process_capture_buffer(struct vpu_inst *inst)
> +{
> + struct v4l2_m2m_buffer *buf = NULL;
> + struct vpu_vb2_buffer *vpu_buf = NULL;
> +
> + if (!inst)
> + return -EINVAL;
> +
> + if (!vpu_check_ready(inst, inst->cap_format.type))
> + return -EINVAL;
> +
> + v4l2_m2m_for_each_dst_buf(inst->fh.m2m_ctx, buf) {
> + vpu_buf = container_of(buf, struct vpu_vb2_buffer, m2m_buf);
> + if (vpu_buf->state == VPU_BUF_STATE_IDLE)
> + break;
> + vpu_buf = NULL;
> + }
> + if (!vpu_buf)
> + return -EINVAL;
> +
> + return call_vop(inst, process_capture, &vpu_buf->m2m_buf.vb.vb2_buf);
> +}
> +
> +struct vb2_v4l2_buffer *vpu_find_buf_by_sequence(struct vpu_inst *inst,
> + u32 type, u32 sequence)
> +{
> + struct v4l2_m2m_buffer *buf = NULL;
> + struct vb2_v4l2_buffer *vbuf = NULL;
> +
> + if (V4L2_TYPE_IS_OUTPUT(type)) {
> + v4l2_m2m_for_each_src_buf(inst->fh.m2m_ctx, buf) {
> + vbuf = &buf->vb;
> + if (vbuf->sequence == sequence)
> + break;
> + vbuf = NULL;
> + }
> + } else {
> + v4l2_m2m_for_each_dst_buf(inst->fh.m2m_ctx, buf) {
> + vbuf = &buf->vb;
> + if (vbuf->sequence == sequence)
> + break;
> + vbuf = NULL;
> + }
> + }
> +
> + return vbuf;
> +}
> +
> +struct vb2_v4l2_buffer *vpu_find_buf_by_idx(struct vpu_inst *inst,
> + u32 type, u32 idx)
> +{
> + struct v4l2_m2m_buffer *buf = NULL;
> + struct vb2_v4l2_buffer *vbuf = NULL;
> +
> + if (V4L2_TYPE_IS_OUTPUT(type)) {
> + v4l2_m2m_for_each_src_buf(inst->fh.m2m_ctx, buf) {
> + vbuf = &buf->vb;
> + if (vbuf->vb2_buf.index == idx)
> + break;
> + vbuf = NULL;
> + }
> + } else {
> + v4l2_m2m_for_each_dst_buf(inst->fh.m2m_ctx, buf) {
> + vbuf = &buf->vb;
> + if (vbuf->vb2_buf.index == idx)
> + break;
> + vbuf = NULL;
> + }
> + }
> +
> + return vbuf;
> +}
> +
> +int vpu_get_num_buffers(struct vpu_inst *inst, u32 type)
> +{
> + struct vb2_queue *q;
> +
> + if (!inst || !inst->fh.m2m_ctx)
> + return -EINVAL;
> + if (V4L2_TYPE_IS_OUTPUT(type))
> + q = v4l2_m2m_get_src_vq(inst->fh.m2m_ctx);
> + else
> + q = v4l2_m2m_get_dst_vq(inst->fh.m2m_ctx);
> +
> + return q->num_buffers;
> +}
> +
> +static void vpu_m2m_device_run(void *priv)
> +{
> +}
> +
> +static void vpu_m2m_job_abort(void *priv)
> +{
> + struct vpu_inst *inst = priv;
> + struct v4l2_m2m_ctx *m2m_ctx = inst->fh.m2m_ctx;
> +
> + v4l2_m2m_job_finish(m2m_ctx->m2m_dev, m2m_ctx);
> +}
> +
> +static const struct v4l2_m2m_ops vpu_m2m_ops = {
> + .device_run = vpu_m2m_device_run,
> + .job_abort = vpu_m2m_job_abort
> +};
> +
> +static int vpu_vb2_queue_setup(struct vb2_queue *vq,
> + unsigned int *buf_count,
> + unsigned int *plane_count,
> + unsigned int psize[],
> + struct device *allocators[])
> +{
> + struct vpu_inst *inst = vb2_get_drv_priv(vq);
> + struct vpu_format *cur_fmt;
> + int i;
> +
> + cur_fmt = vpu_get_format(inst, vq->type);
> +
> + if (*plane_count) {
> + if (inst->state == VPU_CODEC_STATE_DYAMIC_RESOLUTION_CHANGE) {
> + for (i = 0; i < *plane_count; i++) {
> + if (!psize[i])
> + psize[i] = cur_fmt->sizeimage[i];
> + }
> + return 0;
> + }
> + if (*plane_count != cur_fmt->num_planes)
> + return -EINVAL;
> + for (i = 0; i < cur_fmt->num_planes; i++) {
> + if (psize[i] < cur_fmt->sizeimage[i])
> + return -EINVAL;
> + }
> + return 0;
> + }
> +
> + *plane_count = cur_fmt->num_planes;
> + for (i = 0; i < cur_fmt->num_planes; i++)
> + psize[i] = cur_fmt->sizeimage[i];
> +
> + return 0;
> +}
> +
> +static int vpu_vb2_buf_init(struct vb2_buffer *vb)
> +{
> + struct vb2_v4l2_buffer *vbuf = to_vb2_v4l2_buffer(vb);
> + struct vpu_vb2_buffer *vpu_buf = to_vpu_vb2_buffer(vbuf);
> +
> + vpu_buf->state = VPU_BUF_STATE_IDLE;
> +
> + return 0;
> +}
> +
> +static void vpu_vb2_buf_cleanup(struct vb2_buffer *vb)
> +{
> +}
> +
> +static int vpu_vb2_buf_prepare(struct vb2_buffer *vb)
> +{
> + struct vpu_inst *inst = vb2_get_drv_priv(vb->vb2_queue);
> + struct vb2_v4l2_buffer *vbuf = to_vb2_v4l2_buffer(vb);
> + struct vpu_vb2_buffer *vpu_buf = to_vpu_vb2_buffer(vbuf);
> + struct vpu_format *cur_fmt;
> + u32 i;
> +
> + cur_fmt = vpu_get_format(inst, vb->type);
> + if (vb->num_planes != cur_fmt->num_planes)
> + return -EINVAL;
> + for (i = 0; i < cur_fmt->num_planes; i++) {
> + if (vpu_get_vb_length(vb, i) < cur_fmt->sizeimage[i]) {
> + dev_dbg(inst->dev, "[%d] %s buf[%d] is invalid\n",
> + inst->id,
> + vpu_type_name(vb->type),
> + vb->index);
> + vpu_buf->state = VPU_BUF_STATE_ERROR;
> + }
> + }
> +
> + return 0;
> +}
> +
> +static void vpu_vb2_buf_finish(struct vb2_buffer *vb)
> +{
> + struct vb2_v4l2_buffer *vbuf = to_vb2_v4l2_buffer(vb);
> + struct vpu_inst *inst = vb2_get_drv_priv(vb->vb2_queue);
> + struct vb2_queue *q = vb->vb2_queue;
> +
> + if (vbuf->flags & V4L2_BUF_FLAG_LAST)
> + vpu_notify_eos(inst);
> +
> + if (list_empty(&q->done_list))
> + call_vop(inst, on_queue_empty, q->type);
> +}
> +
> +void vpu_vb2_buffers_return(struct vpu_inst *inst,
> + unsigned int type, enum vb2_buffer_state state)
> +{
> + struct vb2_v4l2_buffer *buf;
> +
> + if (!inst || !inst->fh.m2m_ctx)
> + return;
> +
> + if (V4L2_TYPE_IS_OUTPUT(type)) {
> + while ((buf = v4l2_m2m_src_buf_remove(inst->fh.m2m_ctx)))
> + v4l2_m2m_buf_done(buf, state);
> + } else {
> + while ((buf = v4l2_m2m_dst_buf_remove(inst->fh.m2m_ctx)))
> + v4l2_m2m_buf_done(buf, state);
> + }
> +}
> +
> +static int vpu_vb2_start_streaming(struct vb2_queue *q, unsigned int count)
> +{
> + struct vpu_inst *inst = vb2_get_drv_priv(q);
> + struct vpu_format *fmt = vpu_get_format(inst, q->type);
> + int ret;
> +
> + vpu_inst_unlock(inst);
> + ret = vpu_inst_register(inst);
> + vpu_inst_lock(inst);
> + if (ret) {
> + vpu_vb2_buffers_return(inst, q->type, VB2_BUF_STATE_QUEUED);
> + return ret;
> + }
> +
> + vpu_trace(inst->dev, "[%d] %s %c%c%c%c %dx%d %u(%u) %u(%u) %u(%u) %d\n",
> + inst->id, vpu_type_name(q->type),
> + fmt->pixfmt,
> + fmt->pixfmt >> 8,
> + fmt->pixfmt >> 16,
> + fmt->pixfmt >> 24,
> + fmt->width, fmt->height,
> + fmt->sizeimage[0], fmt->bytesperline[0],
> + fmt->sizeimage[1], fmt->bytesperline[1],
> + fmt->sizeimage[2], fmt->bytesperline[2],
> + q->num_buffers);
> + call_vop(inst, start, q->type);
> + vb2_clear_last_buffer_dequeued(q);
> +
> + return 0;
> +}
> +
> +static void vpu_vb2_stop_streaming(struct vb2_queue *q)
> +{
> + struct vpu_inst *inst = vb2_get_drv_priv(q);
> +
> + vpu_trace(inst->dev, "[%d] %s\n", inst->id, vpu_type_name(q->type));
> +
> + call_vop(inst, stop, q->type);
> + vpu_vb2_buffers_return(inst, q->type, VB2_BUF_STATE_ERROR);
> + if (V4L2_TYPE_IS_OUTPUT(q->type))
> + inst->sequence = 0;
> +}
> +
> +static void vpu_vb2_buf_queue(struct vb2_buffer *vb)
> +{
> + struct vb2_v4l2_buffer *vbuf = to_vb2_v4l2_buffer(vb);
> + struct vpu_inst *inst = vb2_get_drv_priv(vb->vb2_queue);
> +
> + if (V4L2_TYPE_IS_OUTPUT(vb->type)) {
> + vbuf->sequence = inst->sequence++;
> + if ((s64)vb->timestamp < 0)
> + vb->timestamp = VPU_INVALID_TIMESTAMP;
> + }
> +
> + v4l2_m2m_buf_queue(inst->fh.m2m_ctx, vbuf);
> + vpu_process_output_buffer(inst);
> + vpu_process_capture_buffer(inst);
> +}
> +
> +static const struct vb2_ops vpu_vb2_ops = {
> + .queue_setup = vpu_vb2_queue_setup,
> + .buf_init = vpu_vb2_buf_init,
> + .buf_cleanup = vpu_vb2_buf_cleanup,
> + .buf_prepare = vpu_vb2_buf_prepare,
> + .buf_finish = vpu_vb2_buf_finish,
> + .start_streaming = vpu_vb2_start_streaming,
> + .stop_streaming = vpu_vb2_stop_streaming,
> + .buf_queue = vpu_vb2_buf_queue,
> + .wait_prepare = vb2_ops_wait_prepare,
> + .wait_finish = vb2_ops_wait_finish,
> +};
> +
> +static int vpu_m2m_queue_init(void *priv, struct vb2_queue *src_vq,
> + struct vb2_queue *dst_vq)
> +{
> + struct vpu_inst *inst = priv;
> + int ret;
> +
> + inst->out_format.type = src_vq->type = V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE;
> + src_vq->io_modes = VB2_MMAP | VB2_USERPTR | VB2_DMABUF;
> + src_vq->timestamp_flags = V4L2_BUF_FLAG_TIMESTAMP_COPY;
> + src_vq->ops = &vpu_vb2_ops;
> + src_vq->mem_ops = &vb2_dma_contig_memops;
> + if (inst->type == VPU_CORE_TYPE_DEC && inst->use_stream_buffer)
> + src_vq->mem_ops = &vb2_vmalloc_memops;
> + src_vq->drv_priv = inst;
> + src_vq->buf_struct_size = sizeof(struct vpu_vb2_buffer);
> + src_vq->allow_zero_bytesused = 1;
> + src_vq->min_buffers_needed = 1;
> + src_vq->dev = inst->vpu->dev;
> + src_vq->lock = &inst->lock;
> + ret = vb2_queue_init(src_vq);
> + if (ret)
> + return ret;
> +
> + inst->cap_format.type = dst_vq->type = V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE;
> + dst_vq->io_modes = VB2_MMAP | VB2_USERPTR | VB2_DMABUF;
> + dst_vq->timestamp_flags = V4L2_BUF_FLAG_TIMESTAMP_COPY;
> + dst_vq->ops = &vpu_vb2_ops;
> + dst_vq->mem_ops = &vb2_dma_contig_memops;
> + if (inst->type == VPU_CORE_TYPE_ENC && inst->use_stream_buffer)
> + dst_vq->mem_ops = &vb2_vmalloc_memops;
> + dst_vq->drv_priv = inst;
> + dst_vq->buf_struct_size = sizeof(struct vpu_vb2_buffer);
> + dst_vq->allow_zero_bytesused = 1;
> + dst_vq->min_buffers_needed = 1;
> + dst_vq->dev = inst->vpu->dev;
> + dst_vq->lock = &inst->lock;
> + ret = vb2_queue_init(dst_vq);
> + if (ret) {
> + vb2_queue_release(src_vq);
> + return ret;
> + }
> +
> + return 0;
> +}
> +
> +static int vpu_v4l2_release(struct vpu_inst *inst)
> +{
> + vpu_trace(inst->vpu->dev, "%p\n", inst);
> +
> + vpu_release_core(inst->core);
> + put_device(inst->dev);
> +
> + if (inst->workqueue) {
> + cancel_work_sync(&inst->msg_work);
> + destroy_workqueue(inst->workqueue);
> + inst->workqueue = NULL;
> + }
> + if (inst->fh.m2m_ctx) {
> + v4l2_m2m_ctx_release(inst->fh.m2m_ctx);
> + inst->fh.m2m_ctx = NULL;
> + }
> +
> + v4l2_ctrl_handler_free(&inst->ctrl_handler);
> + mutex_destroy(&inst->lock);
> + v4l2_fh_del(&inst->fh);
> + v4l2_fh_exit(&inst->fh);
> +
> + call_vop(inst, cleanup);
> +
> + return 0;
> +}
> +
> +int vpu_v4l2_open(struct file *file, struct vpu_inst *inst)
> +{
> + struct vpu_dev *vpu = video_drvdata(file);
> + struct vpu_func *func;
> + int ret = 0;
> +
> + WARN_ON(!file || !inst || !inst->ops);
> +
> + if (inst->type == VPU_CORE_TYPE_ENC)
> + func = &vpu->encoder;
> + else
> + func = &vpu->decoder;
> +
> + atomic_set(&inst->ref_count, 0);
> + vpu_inst_get(inst);
> + inst->vpu = vpu;
> + inst->core = vpu_request_core(vpu, inst->type);
> + if (inst->core)
> + inst->dev = get_device(inst->core->dev);
> + mutex_init(&inst->lock);
> + INIT_LIST_HEAD(&inst->cmd_q);
> + inst->id = VPU_INST_NULL_ID;
> + inst->release = vpu_v4l2_release;
> + inst->pid = current->pid;
> + inst->tgid = current->tgid;
> + inst->min_buffer_cap = 2;
> + inst->min_buffer_out = 2;
> + v4l2_fh_init(&inst->fh, func->vfd);
> + v4l2_fh_add(&inst->fh);
> +
> + ret = call_vop(inst, ctrl_init);
> + if (ret)
> + goto error;
> +
> + inst->fh.m2m_ctx = v4l2_m2m_ctx_init(func->m2m_dev,
> + inst, vpu_m2m_queue_init);
> + if (IS_ERR(inst->fh.m2m_ctx)) {
> + dev_err(vpu->dev, "v4l2_m2m_ctx_init fail\n");
> + ret = PTR_ERR(inst->fh.m2m_ctx);
> + goto error;
> + }
> +
> + inst->fh.ctrl_handler = &inst->ctrl_handler;
> + file->private_data = &inst->fh;
> + inst->state = VPU_CODEC_STATE_DEINIT;
> + inst->workqueue = alloc_workqueue("vpu_inst", WQ_UNBOUND | WQ_MEM_RECLAIM, 1);
> + if (inst->workqueue) {
> + INIT_WORK(&inst->msg_work, vpu_inst_run_work);
> + ret = kfifo_init(&inst->msg_fifo,
> + inst->msg_buffer,
> + rounddown_pow_of_two(sizeof(inst->msg_buffer)));
> + if (ret) {
> + destroy_workqueue(inst->workqueue);
> + inst->workqueue = NULL;
> + }
> + }
> + vpu_trace(vpu->dev, "tgid = %d, pid = %d, type = %s, inst = %p\n",
> + inst->tgid, inst->pid, vpu_core_type_desc(inst->type), inst);
> +
> + return 0;
> +error:
> + vpu_inst_put(inst);
> + return ret;
> +}
> +
> +int vpu_v4l2_close(struct file *file)
> +{
> + struct vpu_dev *vpu = video_drvdata(file);
> + struct vpu_inst *inst = to_inst(file);
> + struct vb2_queue *src_q;
> + struct vb2_queue *dst_q;
> +
> + vpu_trace(vpu->dev, "tgid = %d, pid = %d, inst = %p\n",
> + inst->tgid, inst->pid, inst);
> + src_q = v4l2_m2m_get_src_vq(inst->fh.m2m_ctx);
> + dst_q = v4l2_m2m_get_dst_vq(inst->fh.m2m_ctx);
> + vpu_inst_lock(inst);
> + if (vb2_is_streaming(src_q))
> + v4l2_m2m_streamoff(file, inst->fh.m2m_ctx, src_q->type);
> + if (vb2_is_streaming(dst_q))
> + v4l2_m2m_streamoff(file, inst->fh.m2m_ctx, dst_q->type);
> + vpu_inst_unlock(inst);
> +
> + call_vop(inst, release);
> + vpu_inst_unregister(inst);
> + vpu_inst_put(inst);
> +
> + return 0;
> +}
> +
> +int vpu_add_func(struct vpu_dev *vpu, struct vpu_func *func)
> +{
> + struct video_device *vfd;
> + int ret;
> +
> + if (!vpu || !func)
> + return -EINVAL;
> +
> + if (func->vfd)
> + return 0;
> +
> + vfd = video_device_alloc();
> + if (!vfd) {
> + dev_err(vpu->dev, "alloc vpu video device fail\n");
> + return -ENOMEM;
> + }
> + vfd->release = video_device_release;
> + vfd->vfl_dir = VFL_DIR_M2M;
> + vfd->v4l2_dev = &vpu->v4l2_dev;
> + vfd->device_caps = V4L2_CAP_VIDEO_M2M_MPLANE | V4L2_CAP_STREAMING;
> + if (func->type == VPU_CORE_TYPE_ENC) {
> + strscpy(vfd->name, "amphion-vpu-encoder", sizeof(vfd->name));
> + vfd->fops = venc_get_fops();
> + vfd->ioctl_ops = venc_get_ioctl_ops();
> + } else {
> + strscpy(vfd->name, "amphion-vpu-decoder", sizeof(vfd->name));
> + vfd->fops = vdec_get_fops();
> + vfd->ioctl_ops = vdec_get_ioctl_ops();
> + }
> +
> + ret = video_register_device(vfd, VFL_TYPE_VIDEO, -1);
> + if (ret) {
> + video_device_release(vfd);
> + return ret;
> + }
> + video_set_drvdata(vfd, vpu);
> + func->vfd = vfd;
> + func->m2m_dev = v4l2_m2m_init(&vpu_m2m_ops);
> + if (IS_ERR(func->m2m_dev)) {
> + dev_err(vpu->dev, "v4l2_m2m_init fail\n");
> + video_unregister_device(func->vfd);
> + func->vfd = NULL;
> + return PTR_ERR(func->m2m_dev);
> + }
> +
> + ret = v4l2_m2m_register_media_controller(func->m2m_dev, func->vfd, func->function);
> + if (ret) {
> + v4l2_m2m_release(func->m2m_dev);
> + func->m2m_dev = NULL;
> + video_unregister_device(func->vfd);
> + func->vfd = NULL;
> + return ret;
> + }
> +
> + return 0;
> +}
> +
> +void vpu_remove_func(struct vpu_func *func)
> +{
> + if (!func)
> + return;
> +
> + if (func->m2m_dev) {
> + v4l2_m2m_unregister_media_controller(func->m2m_dev);
> + v4l2_m2m_release(func->m2m_dev);
> + func->m2m_dev = NULL;
> + }
> + if (func->vfd) {
> + video_unregister_device(func->vfd);
> + func->vfd = NULL;
> + }
> +}
> diff --git a/drivers/media/platform/amphion/vpu_v4l2.h b/drivers/media/platform/amphion/vpu_v4l2.h
> new file mode 100644
> index 000000000000..c9ed7aec637a
> --- /dev/null
> +++ b/drivers/media/platform/amphion/vpu_v4l2.h
> @@ -0,0 +1,54 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * Copyright 2020-2021 NXP
> + */
> +
> +#ifndef _AMPHION_VPU_V4L2_H
> +#define _AMPHION_VPU_V4L2_H
> +
> +#include <linux/videodev2.h>
> +
> +void vpu_inst_lock(struct vpu_inst *inst);
> +void vpu_inst_unlock(struct vpu_inst *inst);
> +
> +int vpu_v4l2_open(struct file *file, struct vpu_inst *inst);
> +int vpu_v4l2_close(struct file *file);
> +
> +const struct vpu_format *vpu_try_fmt_common(struct vpu_inst *inst, struct v4l2_format *f);
> +int vpu_process_output_buffer(struct vpu_inst *inst);
> +int vpu_process_capture_buffer(struct vpu_inst *inst);
> +struct vb2_v4l2_buffer *vpu_find_buf_by_sequence(struct vpu_inst *inst, u32 type, u32 sequence);
> +struct vb2_v4l2_buffer *vpu_find_buf_by_idx(struct vpu_inst *inst, u32 type, u32 idx);
> +void vpu_v4l2_set_error(struct vpu_inst *inst);
> +int vpu_notify_eos(struct vpu_inst *inst);
> +int vpu_notify_source_change(struct vpu_inst *inst);
> +int vpu_set_last_buffer_dequeued(struct vpu_inst *inst);
> +void vpu_vb2_buffers_return(struct vpu_inst *inst,
> + unsigned int type, enum vb2_buffer_state state);
> +int vpu_get_num_buffers(struct vpu_inst *inst, u32 type);
> +
> +dma_addr_t vpu_get_vb_phy_addr(struct vb2_buffer *vb, u32 plane_no);
> +unsigned int vpu_get_vb_length(struct vb2_buffer *vb, u32 plane_no);
> +static inline struct vpu_format *vpu_get_format(struct vpu_inst *inst, u32 type)
> +{
> + if (V4L2_TYPE_IS_OUTPUT(type))
> + return &inst->out_format;
> + else
> + return &inst->cap_format;
> +}
> +
> +static inline char *vpu_type_name(u32 type)
> +{
> + return V4L2_TYPE_IS_OUTPUT(type) ? "output" : "capture";
> +}
> +
> +static inline int vpu_vb_is_codecconfig(struct vb2_v4l2_buffer *vbuf)
> +{
> +#ifdef V4L2_BUF_FLAG_CODECCONFIG
> + return (vbuf->flags & V4L2_BUF_FLAG_CODECCONFIG) ? 1 : 0;
> +#else
> + return 0;
> +#endif
> +}
> +
> +#endif


2021-12-03 04:55:43

by Nicolas Dufresne

Subject: Re: [PATCH v13 08/13] media: amphion: add v4l2 m2m vpu decoder stateful driver

On Tuesday, 30 November 2021 at 17:48 +0800, Ming Qian wrote:
> This consists of video decoder implementation plus decoder controls.
>
> Signed-off-by: Ming Qian <[email protected]>
> Signed-off-by: Shijie Qin <[email protected]>
> Signed-off-by: Zhou Peng <[email protected]>
> ---
> drivers/media/platform/amphion/vdec.c | 1680 +++++++++++++++++++++++++
> 1 file changed, 1680 insertions(+)
> create mode 100644 drivers/media/platform/amphion/vdec.c
>
> diff --git a/drivers/media/platform/amphion/vdec.c b/drivers/media/platform/amphion/vdec.c
> new file mode 100644
> index 000000000000..a66d34d02a50
> --- /dev/null
> +++ b/drivers/media/platform/amphion/vdec.c
> @@ -0,0 +1,1680 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Copyright 2020-2021 NXP
> + */
> +
> +#include <linux/init.h>
> +#include <linux/interconnect.h>
> +#include <linux/ioctl.h>
> +#include <linux/list.h>
> +#include <linux/kernel.h>
> +#include <linux/module.h>
> +#include <linux/videodev2.h>
> +#include <media/v4l2-device.h>
> +#include <media/v4l2-event.h>
> +#include <media/v4l2-mem2mem.h>
> +#include <media/v4l2-ioctl.h>
> +#include <media/videobuf2-v4l2.h>
> +#include <media/videobuf2-dma-contig.h>
> +#include <media/videobuf2-vmalloc.h>
> +#include "vpu.h"
> +#include "vpu_defs.h"
> +#include "vpu_core.h"
> +#include "vpu_helpers.h"
> +#include "vpu_v4l2.h"
> +#include "vpu_cmds.h"
> +#include "vpu_rpc.h"
> +
> +#define VDEC_FRAME_DEPTH 256
> +#define VDEC_MIN_BUFFER_CAP 8
> +
> +struct vdec_fs_info {
> + char name[8];
> + u32 type;
> + u32 max_count;
> + u32 req_count;
> + u32 count;
> + u32 index;
> + u32 size;
> + struct vpu_buffer buffer[32];
> + u32 tag;
> +};
> +
> +struct vdec_t {
> + u32 seq_hdr_found;
> + struct vpu_buffer udata;
> + struct vpu_decode_params params;
> + struct vpu_dec_codec_info codec_info;
> + enum vpu_codec_state state;
> +
> + struct vpu_vb2_buffer *slots[VB2_MAX_FRAME];
> + u32 req_frame_count;
> + struct vdec_fs_info mbi;
> + struct vdec_fs_info dcp;
> + u32 seq_tag;
> +
> + bool reset_codec;
> + bool fixed_fmt;
> + u32 decoded_frame_count;
> + u32 display_frame_count;
> + u32 sequence;
> + u32 eos_received;
> + bool is_source_changed;
> + u32 source_change;
> + u32 drain;
> + u32 ts_pre_count;
> + u32 frame_depth;
> + s64 ts_start;
> + s64 ts_input;
> + s64 timestamp;
> +};
> +
> +static const struct vpu_format vdec_formats[] = {
> + {
> + .pixfmt = V4L2_PIX_FMT_NV12MT_8L128,
> + .num_planes = 2,
> + .type = V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE,
> + },
> + {
> + .pixfmt = V4L2_PIX_FMT_NV12MT_10BE_8L128,
> + .num_planes = 2,
> + .type = V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE,
> + },
> + {
> + .pixfmt = V4L2_PIX_FMT_H264,
> + .num_planes = 1,
> + .type = V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE,
> + .flags = V4L2_FMT_FLAG_DYN_RESOLUTION
> + },
> + {
> + .pixfmt = V4L2_PIX_FMT_H264_MVC,
> + .num_planes = 1,
> + .type = V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE,
> + .flags = V4L2_FMT_FLAG_DYN_RESOLUTION
> + },
> + {
> + .pixfmt = V4L2_PIX_FMT_HEVC,
> + .num_planes = 1,
> + .type = V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE,
> + .flags = V4L2_FMT_FLAG_DYN_RESOLUTION
> + },
> + {
> + .pixfmt = V4L2_PIX_FMT_VC1_ANNEX_G,
> + .num_planes = 1,
> + .type = V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE,
> + .flags = V4L2_FMT_FLAG_DYN_RESOLUTION
> + },
> + {
> + .pixfmt = V4L2_PIX_FMT_VC1_ANNEX_L,
> + .num_planes = 1,
> + .type = V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE,
> + .flags = V4L2_FMT_FLAG_DYN_RESOLUTION
> + },
> + {
> + .pixfmt = V4L2_PIX_FMT_MPEG2,
> + .num_planes = 1,
> + .type = V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE,
> + .flags = V4L2_FMT_FLAG_DYN_RESOLUTION
> + },
> + {
> + .pixfmt = V4L2_PIX_FMT_MPEG4,
> + .num_planes = 1,
> + .type = V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE,
> + .flags = V4L2_FMT_FLAG_DYN_RESOLUTION
> + },
> + {
> + .pixfmt = V4L2_PIX_FMT_XVID,
> + .num_planes = 1,
> + .type = V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE,
> + .flags = V4L2_FMT_FLAG_DYN_RESOLUTION
> + },
> + {
> + .pixfmt = V4L2_PIX_FMT_VP8,
> + .num_planes = 1,
> + .type = V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE,
> + .flags = V4L2_FMT_FLAG_DYN_RESOLUTION
> + },
> + {
> + .pixfmt = V4L2_PIX_FMT_H263,
> + .num_planes = 1,
> + .type = V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE,
> + .flags = V4L2_FMT_FLAG_DYN_RESOLUTION
> + },
> + {0, 0, 0, 0},
> +};
> +
> +static const struct v4l2_ctrl_ops vdec_ctrl_ops = {
> + .g_volatile_ctrl = vpu_helper_g_volatile_ctrl,
> +};
> +
> +static int vdec_ctrl_init(struct vpu_inst *inst)
> +{
> + struct v4l2_ctrl *ctrl;
> + int ret;
> +
> + ret = v4l2_ctrl_handler_init(&inst->ctrl_handler, 20);
> + if (ret)
> + return ret;
> +
> + ctrl = v4l2_ctrl_new_std(&inst->ctrl_handler, &vdec_ctrl_ops,
> + V4L2_CID_MIN_BUFFERS_FOR_CAPTURE, 1, 32, 1, 2);
> + if (ctrl)
> + ctrl->flags |= V4L2_CTRL_FLAG_VOLATILE;
> +
> + ctrl = v4l2_ctrl_new_std(&inst->ctrl_handler, &vdec_ctrl_ops,
> + V4L2_CID_MIN_BUFFERS_FOR_OUTPUT, 1, 32, 1, 2);
> + if (ctrl)
> + ctrl->flags |= V4L2_CTRL_FLAG_VOLATILE;
> +
> + ret = v4l2_ctrl_handler_setup(&inst->ctrl_handler);
> + if (ret) {
> + dev_err(inst->dev, "[%d] setup ctrls fail, ret = %d\n", inst->id, ret);
> + v4l2_ctrl_handler_free(&inst->ctrl_handler);
> + return ret;
> + }
> +
> + return 0;
> +}
> +
> +static void vdec_set_last_buffer_dequeued(struct vpu_inst *inst)
> +{
> + struct vdec_t *vdec = inst->priv;
> +
> + if (vdec->eos_received) {
> + if (!vpu_set_last_buffer_dequeued(inst))
> + vdec->eos_received--;
> + }
> +}
> +
> +static void vdec_handle_resolution_change(struct vpu_inst *inst)
> +{
> + struct vdec_t *vdec = inst->priv;
> + struct vb2_queue *q;
> +
> + if (inst->state != VPU_CODEC_STATE_DYAMIC_RESOLUTION_CHANGE)
> + return;
> + if (!vdec->source_change)
> + return;
> +
> + q = v4l2_m2m_get_dst_vq(inst->fh.m2m_ctx);
> + if (!list_empty(&q->done_list))
> + return;
> +
> + vdec->source_change--;
> + vpu_notify_source_change(inst);
> +}
> +
> +static int vdec_update_state(struct vpu_inst *inst,
> + enum vpu_codec_state state, u32 force)
> +{
> + struct vdec_t *vdec = inst->priv;
> + enum vpu_codec_state pre_state = inst->state;
> +
> + if (state == VPU_CODEC_STATE_SEEK) {
> + if (inst->state == VPU_CODEC_STATE_DYAMIC_RESOLUTION_CHANGE)
> + vdec->state = inst->state;
> + else
> + vdec->state = VPU_CODEC_STATE_ACTIVE;
> + }
> + if (inst->state != VPU_CODEC_STATE_SEEK || force)
> + inst->state = state;
> + else if (state == VPU_CODEC_STATE_DYAMIC_RESOLUTION_CHANGE)
> + vdec->state = VPU_CODEC_STATE_DYAMIC_RESOLUTION_CHANGE;
> +
> + if (inst->state != pre_state)
> + vpu_trace(inst->dev, "[%d] %d -> %d\n", inst->id, pre_state, inst->state);
> +
> + if (inst->state == VPU_CODEC_STATE_DYAMIC_RESOLUTION_CHANGE)
> + vdec_handle_resolution_change(inst);
> +
> + return 0;
> +}
> +
> +static int vdec_querycap(struct file *file, void *fh, struct v4l2_capability *cap)
> +{
> + strscpy(cap->driver, "amphion-vpu", sizeof(cap->driver));
> + strscpy(cap->card, "amphion vpu decoder", sizeof(cap->card));
> + strscpy(cap->bus_info, "platform: amphion-vpu", sizeof(cap->bus_info));
> +
> + return 0;
> +}
> +
> +static int vdec_enum_fmt(struct file *file, void *fh, struct v4l2_fmtdesc *f)
> +{
> + struct vpu_inst *inst = to_inst(file);
> + struct vdec_t *vdec = inst->priv;
> + const struct vpu_format *fmt;
> + int ret = -EINVAL;
> +
> + vpu_inst_lock(inst);
> + if (!V4L2_TYPE_IS_OUTPUT(f->type) && vdec->fixed_fmt) {
> + if (f->index == 0) {
> + f->pixelformat = inst->cap_format.pixfmt;
> + f->flags = inst->cap_format.flags;
> + ret = 0;
> + }
> + } else {
> + fmt = vpu_helper_enum_format(inst, f->type, f->index);
> + memset(f->reserved, 0, sizeof(f->reserved));
> + if (!fmt)
> + goto exit;
> +
> + f->pixelformat = fmt->pixfmt;
> + f->flags = fmt->flags;
> + ret = 0;
> + }
> +
> +exit:
> + vpu_inst_unlock(inst);
> + return ret;
> +}
> +
> +static int vdec_g_fmt(struct file *file, void *fh, struct v4l2_format *f)
> +{
> + struct vpu_inst *inst = to_inst(file);
> + struct vdec_t *vdec = inst->priv;
> + struct v4l2_pix_format_mplane *pixmp = &f->fmt.pix_mp;
> + struct vpu_format *cur_fmt;
> + int i;
> +
> + cur_fmt = vpu_get_format(inst, f->type);
> +
> + pixmp->pixelformat = cur_fmt->pixfmt;
> + pixmp->num_planes = cur_fmt->num_planes;
> + pixmp->width = cur_fmt->width;
> + pixmp->height = cur_fmt->height;
> + pixmp->field = cur_fmt->field;
> + pixmp->flags = cur_fmt->flags;
> + for (i = 0; i < pixmp->num_planes; i++) {
> + pixmp->plane_fmt[i].bytesperline = cur_fmt->bytesperline[i];
> + pixmp->plane_fmt[i].sizeimage = cur_fmt->sizeimage[i];
> + }
> +
> + f->fmt.pix_mp.colorspace = vdec->codec_info.color_primaries;
> + f->fmt.pix_mp.xfer_func = vdec->codec_info.transfer_chars;
> + f->fmt.pix_mp.ycbcr_enc = vdec->codec_info.matrix_coeffs;
> + f->fmt.pix_mp.quantization = vdec->codec_info.full_range;
> +
> + return 0;
> +}
> +
> +static int vdec_try_fmt(struct file *file, void *fh, struct v4l2_format *f)
> +{
> + struct vpu_inst *inst = to_inst(file);
> + struct vdec_t *vdec = inst->priv;
> +
> + vpu_try_fmt_common(inst, f);
> +
> + vpu_inst_lock(inst);
> + if (vdec->fixed_fmt) {
> + f->fmt.pix_mp.colorspace = vdec->codec_info.color_primaries;
> + f->fmt.pix_mp.xfer_func = vdec->codec_info.transfer_chars;
> + f->fmt.pix_mp.ycbcr_enc = vdec->codec_info.matrix_coeffs;
> + f->fmt.pix_mp.quantization = vdec->codec_info.full_range;
> + } else {
> + f->fmt.pix_mp.colorspace = V4L2_COLORSPACE_DEFAULT;
> + f->fmt.pix_mp.xfer_func = V4L2_XFER_FUNC_DEFAULT;
> + f->fmt.pix_mp.ycbcr_enc = V4L2_YCBCR_ENC_DEFAULT;
> + f->fmt.pix_mp.quantization = V4L2_QUANTIZATION_DEFAULT;
> + }
> + vpu_inst_unlock(inst);
> +
> + return 0;
> +}
> +
> +static int vdec_s_fmt_common(struct vpu_inst *inst, struct v4l2_format *f)
> +{
> + struct v4l2_pix_format_mplane *pixmp = &f->fmt.pix_mp;
> + const struct vpu_format *fmt;
> + struct vpu_format *cur_fmt;
> + struct vb2_queue *q;
> + struct vdec_t *vdec = inst->priv;
> + int i;
> +
> + q = v4l2_m2m_get_vq(inst->fh.m2m_ctx, f->type);
> + if (!q)
> + return -EINVAL;
> + if (vb2_is_streaming(q))
> + return -EBUSY;
> +
> + fmt = vpu_try_fmt_common(inst, f);
> + if (!fmt)
> + return -EINVAL;
> +
> + cur_fmt = vpu_get_format(inst, f->type);
> + if (V4L2_TYPE_IS_OUTPUT(f->type) && inst->state != VPU_CODEC_STATE_DEINIT) {
> + if (cur_fmt->pixfmt != fmt->pixfmt ||
> + (pixmp->width && cur_fmt->width != pixmp->width) ||
> + (pixmp->height && cur_fmt->height != pixmp->height)) {
> + vdec->reset_codec = true;
> + vdec->fixed_fmt = false;
> + }
> + }
> + cur_fmt->pixfmt = fmt->pixfmt;
> + if (V4L2_TYPE_IS_OUTPUT(f->type) || !vdec->fixed_fmt) {
> + cur_fmt->num_planes = fmt->num_planes;
> + cur_fmt->flags = fmt->flags;
> + cur_fmt->width = pixmp->width;
> + cur_fmt->height = pixmp->height;
> + for (i = 0; i < fmt->num_planes; i++) {
> + cur_fmt->sizeimage[i] = pixmp->plane_fmt[i].sizeimage;
> + cur_fmt->bytesperline[i] = pixmp->plane_fmt[i].bytesperline;
> + }
> + if (pixmp->field != V4L2_FIELD_ANY)
> + cur_fmt->field = pixmp->field;
> + } else {
> + pixmp->num_planes = cur_fmt->num_planes;
> + pixmp->width = cur_fmt->width;
> + pixmp->height = cur_fmt->height;
> + for (i = 0; i < pixmp->num_planes; i++) {
> + pixmp->plane_fmt[i].bytesperline = cur_fmt->bytesperline[i];
> + pixmp->plane_fmt[i].sizeimage = cur_fmt->sizeimage[i];
> + }
> + pixmp->field = cur_fmt->field;
> + }
> +
> + if (!vdec->fixed_fmt) {
> + if (V4L2_TYPE_IS_OUTPUT(f->type)) {
> + vdec->params.codec_format = cur_fmt->pixfmt;
> + vdec->codec_info.color_primaries = f->fmt.pix_mp.colorspace;
> + vdec->codec_info.transfer_chars = f->fmt.pix_mp.xfer_func;
> + vdec->codec_info.matrix_coeffs = f->fmt.pix_mp.ycbcr_enc;
> + vdec->codec_info.full_range = f->fmt.pix_mp.quantization;
> + } else {
> + vdec->params.output_format = cur_fmt->pixfmt;
> + inst->crop.left = 0;
> + inst->crop.top = 0;
> + inst->crop.width = cur_fmt->width;
> + inst->crop.height = cur_fmt->height;
> + }
> + }
> +
> + return 0;
> +}
> +
> +static int vdec_s_fmt(struct file *file, void *fh, struct v4l2_format *f)
> +{
> + struct vpu_inst *inst = to_inst(file);
> + struct v4l2_pix_format_mplane *pixmp = &f->fmt.pix_mp;
> + struct vdec_t *vdec = inst->priv;
> + int ret = 0;
> +
> + vpu_inst_lock(inst);
> + ret = vdec_s_fmt_common(inst, f);
> + if (ret)
> + goto exit;
> +
> + if (V4L2_TYPE_IS_OUTPUT(f->type) && !vdec->fixed_fmt) {
> + struct v4l2_format fc;
> +
> + memset(&fc, 0, sizeof(fc));
> + fc.type = inst->cap_format.type;
> + fc.fmt.pix_mp.pixelformat = inst->cap_format.pixfmt;
> + fc.fmt.pix_mp.width = pixmp->width;
> + fc.fmt.pix_mp.height = pixmp->height;
> + vdec_s_fmt_common(inst, &fc);
> + }
> +
> + f->fmt.pix_mp.colorspace = vdec->codec_info.color_primaries;
> + f->fmt.pix_mp.xfer_func = vdec->codec_info.transfer_chars;
> + f->fmt.pix_mp.ycbcr_enc = vdec->codec_info.matrix_coeffs;
> + f->fmt.pix_mp.quantization = vdec->codec_info.full_range;
> +
> +exit:
> + vpu_inst_unlock(inst);
> + return ret;
> +}
> +
> +static int vdec_g_selection(struct file *file, void *fh,
> + struct v4l2_selection *s)
> +{
> + struct vpu_inst *inst = to_inst(file);
> +
> + if (s->type != V4L2_BUF_TYPE_VIDEO_CAPTURE &&
> + s->type != V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE)
> + return -EINVAL;
> +
> + switch (s->target) {
> + case V4L2_SEL_TGT_COMPOSE:
> + case V4L2_SEL_TGT_COMPOSE_DEFAULT:
> + case V4L2_SEL_TGT_COMPOSE_PADDED:
> + s->r = inst->crop;
> + break;
> + case V4L2_SEL_TGT_COMPOSE_BOUNDS:
> + s->r.left = 0;
> + s->r.top = 0;
> + s->r.width = inst->cap_format.width;
> + s->r.height = inst->cap_format.height;
> + break;
> + default:
> + return -EINVAL;
> + }
> +
> + return 0;
> +}
> +
> +static int vdec_drain(struct vpu_inst *inst)
> +{
> + struct vdec_t *vdec = inst->priv;
> +
> + if (!vdec->drain)
> + return 0;
> +
> + if (v4l2_m2m_num_src_bufs_ready(inst->fh.m2m_ctx))
> + return 0;
> +
> + if (!vdec->params.frame_count) {
> + vpu_set_last_buffer_dequeued(inst);
> + return 0;
> + }
> +
> + vpu_iface_add_scode(inst, SCODE_PADDING_EOS);
> + vdec->params.end_flag = 1;
> + vpu_iface_set_decode_params(inst, &vdec->params, 1);
> + vdec->drain = 0;
> + vpu_trace(inst->dev, "[%d] frame_count = %d\n", inst->id, vdec->params.frame_count);
> +
> + return 0;
> +}
> +
> +static int vdec_cmd_start(struct vpu_inst *inst)
> +{
> + if (inst->state == VPU_CODEC_STATE_DYAMIC_RESOLUTION_CHANGE)
> + vdec_update_state(inst, VPU_CODEC_STATE_ACTIVE, 0);
> + vpu_process_capture_buffer(inst);
> + return 0;
> +}
> +
> +static int vdec_cmd_stop(struct vpu_inst *inst)
> +{
> + struct vdec_t *vdec = inst->priv;
> +
> + vpu_trace(inst->dev, "[%d]\n", inst->id);
> +
> + if (inst->state == VPU_CODEC_STATE_DEINIT) {
> + vpu_set_last_buffer_dequeued(inst);
> + } else {
> + vdec->drain = 1;
> + vdec_drain(inst);
> + }
> +
> + return 0;
> +}
> +
> +static int vdec_decoder_cmd(struct file *file,
> + void *fh,
> + struct v4l2_decoder_cmd *cmd)
> +{
> + struct vpu_inst *inst = to_inst(file);
> + int ret;
> +
> + ret = v4l2_m2m_ioctl_try_decoder_cmd(file, fh, cmd);
> + if (ret)
> + return ret;
> +
> + vpu_inst_lock(inst);
> + switch (cmd->cmd) {
> + case V4L2_DEC_CMD_START:
> + vdec_cmd_start(inst);
> + break;
> + case V4L2_DEC_CMD_STOP:
> + vdec_cmd_stop(inst);
> + break;
> + default:
> + break;
> + }
> + vpu_inst_unlock(inst);
> +
> + return 0;
> +}
> +
> +static int vdec_subscribe_event(struct v4l2_fh *fh,
> + const struct v4l2_event_subscription *sub)
> +{
> + switch (sub->type) {
> + case V4L2_EVENT_EOS:
> + return v4l2_event_subscribe(fh, sub, 0, NULL);
> + case V4L2_EVENT_SOURCE_CHANGE:
> + return v4l2_src_change_event_subscribe(fh, sub);
> + case V4L2_EVENT_CTRL:
> + return v4l2_ctrl_subscribe_event(fh, sub);
> + default:
> + return -EINVAL;
> + }
> +
> + return 0;
> +}
> +
> +static const struct v4l2_ioctl_ops vdec_ioctl_ops = {
> + .vidioc_querycap = vdec_querycap,
> + .vidioc_enum_fmt_vid_cap = vdec_enum_fmt,
> + .vidioc_enum_fmt_vid_out = vdec_enum_fmt,
> + .vidioc_g_fmt_vid_cap_mplane = vdec_g_fmt,
> + .vidioc_g_fmt_vid_out_mplane = vdec_g_fmt,
> + .vidioc_try_fmt_vid_cap_mplane = vdec_try_fmt,
> + .vidioc_try_fmt_vid_out_mplane = vdec_try_fmt,
> + .vidioc_s_fmt_vid_cap_mplane = vdec_s_fmt,
> + .vidioc_s_fmt_vid_out_mplane = vdec_s_fmt,
> + .vidioc_g_selection = vdec_g_selection,
> + .vidioc_try_decoder_cmd = v4l2_m2m_ioctl_try_decoder_cmd,
> + .vidioc_decoder_cmd = vdec_decoder_cmd,
> + .vidioc_subscribe_event = vdec_subscribe_event,
> + .vidioc_unsubscribe_event = v4l2_event_unsubscribe,
> + .vidioc_reqbufs = v4l2_m2m_ioctl_reqbufs,
> + .vidioc_create_bufs = v4l2_m2m_ioctl_create_bufs,
> + .vidioc_prepare_buf = v4l2_m2m_ioctl_prepare_buf,
> + .vidioc_querybuf = v4l2_m2m_ioctl_querybuf,
> + .vidioc_qbuf = v4l2_m2m_ioctl_qbuf,
> + .vidioc_expbuf = v4l2_m2m_ioctl_expbuf,
> + .vidioc_dqbuf = v4l2_m2m_ioctl_dqbuf,
> + .vidioc_streamon = v4l2_m2m_ioctl_streamon,
> + .vidioc_streamoff = v4l2_m2m_ioctl_streamoff,
> +};
> +
> +static bool vdec_check_ready(struct vpu_inst *inst, unsigned int type)
> +{
> + struct vdec_t *vdec = inst->priv;
> +
> + if (V4L2_TYPE_IS_OUTPUT(type)) {
> + if (vdec->ts_pre_count >= vdec->frame_depth)
> + return false;
> + return true;
> + }
> +
> + if (vdec->req_frame_count)
> + return true;
> +
> + return false;
> +}
> +
> +static int vdec_frame_decoded(struct vpu_inst *inst, void *arg)
> +{
> + struct vdec_t *vdec = inst->priv;
> + struct vpu_dec_pic_info *info = arg;
> + struct vpu_vb2_buffer *vpu_buf;
> + int ret = 0;
> +
> + if (!info || info->id >= ARRAY_SIZE(vdec->slots))
> + return -EINVAL;
> +
> + vpu_inst_lock(inst);
> + vpu_buf = vdec->slots[info->id];
> + if (!vpu_buf) {
> + dev_err(inst->dev, "[%d] decoded invalid frame[%d]\n", inst->id, info->id);
> + ret = -EINVAL;
> + goto exit;
> + }
> + if (vpu_buf->state == VPU_BUF_STATE_DECODED)
> + dev_info(inst->dev, "[%d] buf[%d] has been decoded\n", inst->id, info->id);
> + vpu_buf->state = VPU_BUF_STATE_DECODED;
> + vdec->decoded_frame_count++;
> + if (vdec->ts_pre_count >= info->consumed_count)
> + vdec->ts_pre_count -= info->consumed_count;
> + else
> + vdec->ts_pre_count = 0;
> +exit:
> + vpu_inst_unlock(inst);
> +
> + return ret;
> +}
> +
> +static struct vpu_vb2_buffer *vdec_find_buffer(struct vpu_inst *inst, u32 luma)
> +{
> + struct vdec_t *vdec = inst->priv;
> + int i;
> +
> + for (i = 0; i < ARRAY_SIZE(vdec->slots); i++) {
> + if (!vdec->slots[i])
> + continue;
> + if (luma == vdec->slots[i]->luma)
> + return vdec->slots[i];
> + }
> +
> + return NULL;
> +}
> +
> +static void vdec_buf_done(struct vpu_inst *inst, struct vpu_frame_info *frame)
> +{
> + struct vdec_t *vdec = inst->priv;
> + struct vpu_vb2_buffer *vpu_buf;
> + struct vb2_v4l2_buffer *vbuf;
> + u32 sequence;
> +
> + if (!frame)
> + return;
> +
> + vpu_inst_lock(inst);
> + sequence = vdec->sequence++;
> + vpu_buf = vdec_find_buffer(inst, frame->luma);
> + vpu_inst_unlock(inst);
> + if (!vpu_buf) {
> + dev_err(inst->dev, "[%d] can't find buffer, id = %d, addr = 0x%x\n",
> + inst->id, frame->id, frame->luma);
> + return;
> + }
> + if (frame->skipped) {
> + dev_dbg(inst->dev, "[%d] frame skip\n", inst->id);
> + return;
> + }
> +
> + vbuf = &vpu_buf->m2m_buf.vb;
> + if (vbuf->vb2_buf.index != frame->id)
> + dev_err(inst->dev, "[%d] buffer id(%d, %d) mismatch\n",
> + inst->id, vbuf->vb2_buf.index, frame->id);
> +
> + if (vpu_buf->state != VPU_BUF_STATE_DECODED)
> + dev_err(inst->dev, "[%d] buffer(%d) is ready but not decoded\n",
> + inst->id, frame->id);
> + vpu_buf->state = VPU_BUF_STATE_READY;
> + vb2_set_plane_payload(&vbuf->vb2_buf, 0, inst->cap_format.sizeimage[0]);
> + vb2_set_plane_payload(&vbuf->vb2_buf, 1, inst->cap_format.sizeimage[1]);
> + vbuf->vb2_buf.timestamp = frame->timestamp;
> + vbuf->field = inst->cap_format.field;
> + vbuf->sequence = sequence;
> + dev_dbg(inst->dev, "[%d][OUTPUT TS]%32lld\n", inst->id, frame->timestamp);
> +
> + v4l2_m2m_buf_done(vbuf, VB2_BUF_STATE_DONE);
> + vpu_inst_lock(inst);
> + vdec->timestamp = frame->timestamp;
> + vdec->display_frame_count++;
> + vpu_inst_unlock(inst);
> + dev_dbg(inst->dev, "[%d] decoded : %d, display : %d, sequence : %d\n",
> + inst->id,
> + vdec->decoded_frame_count,
> + vdec->display_frame_count,
> + vdec->sequence);
> +}
> +
> +static void vdec_stop_done(struct vpu_inst *inst)
> +{
> + struct vdec_t *vdec = inst->priv;
> +
> + vpu_inst_lock(inst);
> + vdec_update_state(inst, VPU_CODEC_STATE_DEINIT, 0);
> + vdec->seq_hdr_found = 0;
> + vdec->req_frame_count = 0;
> + vdec->reset_codec = false;
> + vdec->fixed_fmt = false;
> + vdec->params.end_flag = 0;
> + vdec->drain = 0;
> + vdec->ts_pre_count = 0;
> + vdec->timestamp = VPU_INVALID_TIMESTAMP;
> + vdec->ts_start = VPU_INVALID_TIMESTAMP;
> + vdec->ts_input = VPU_INVALID_TIMESTAMP;
> + vdec->params.frame_count = 0;
> + vdec->decoded_frame_count = 0;
> + vdec->display_frame_count = 0;
> + vdec->sequence = 0;
> + vdec->eos_received = 0;
> + vdec->is_source_changed = false;
> + vdec->source_change = 0;
> + vpu_inst_unlock(inst);
> +}
> +
> +static bool vdec_check_source_change(struct vpu_inst *inst)
> +{
> + struct vdec_t *vdec = inst->priv;
> + const struct vpu_format *fmt;
> + int i;
> +
> + if (!vb2_is_streaming(v4l2_m2m_get_dst_vq(inst->fh.m2m_ctx)))
> + return true;
> + fmt = vpu_helper_find_format(inst, inst->cap_format.type, vdec->codec_info.pixfmt);
> + if (inst->cap_format.pixfmt != vdec->codec_info.pixfmt)
> + return true;
> + if (inst->cap_format.width != vdec->codec_info.decoded_width)
> + return true;
> + if (inst->cap_format.height != vdec->codec_info.decoded_height)
> + return true;
> + if (vpu_get_num_buffers(inst, inst->cap_format.type) < inst->min_buffer_cap)
> + return true;
> + if (inst->crop.left != vdec->codec_info.offset_x)
> + return true;
> + if (inst->crop.top != vdec->codec_info.offset_y)
> + return true;
> + if (inst->crop.width != vdec->codec_info.width)
> + return true;
> + if (inst->crop.height != vdec->codec_info.height)
> + return true;
> + if (fmt && inst->cap_format.num_planes != fmt->num_planes)
> + return true;
> + for (i = 0; i < inst->cap_format.num_planes; i++) {
> + if (inst->cap_format.bytesperline[i] != vdec->codec_info.bytesperline[i])
> + return true;
> + if (inst->cap_format.sizeimage[i] != vdec->codec_info.sizeimage[i])
> + return true;
> + }
> +
> + return false;
> +}
> +
> +static void vdec_init_fmt(struct vpu_inst *inst)
> +{
> + struct vdec_t *vdec = inst->priv;
> + const struct vpu_format *fmt;
> + int i;
> +
> + fmt = vpu_helper_find_format(inst, inst->cap_format.type, vdec->codec_info.pixfmt);
> + inst->out_format.width = vdec->codec_info.width;
> + inst->out_format.height = vdec->codec_info.height;
> + inst->cap_format.width = vdec->codec_info.decoded_width;
> + inst->cap_format.height = vdec->codec_info.decoded_height;
> + inst->cap_format.pixfmt = vdec->codec_info.pixfmt;
> + if (fmt) {
> + inst->cap_format.num_planes = fmt->num_planes;
> + inst->cap_format.flags = fmt->flags;
> + }
> + for (i = 0; i < inst->cap_format.num_planes; i++) {
> + inst->cap_format.bytesperline[i] = vdec->codec_info.bytesperline[i];
> + inst->cap_format.sizeimage[i] = vdec->codec_info.sizeimage[i];
> + }
> + if (vdec->codec_info.progressive)
> + inst->cap_format.field = V4L2_FIELD_NONE;
> + else
> + inst->cap_format.field = V4L2_FIELD_INTERLACED;

As a followup, this should be made conditional on the chosen pixel format. If I
understood correctly, interlaced content is only produced for linear NV12; for
the tiled format, the two fields are output separately, each in its respective
v4l2_buffer. Not sure where yet, but the V4L2 spec requires you to pair the
fields by using the same sequence number on both.

> + if (vdec->codec_info.color_primaries == V4L2_COLORSPACE_DEFAULT)
> + vdec->codec_info.color_primaries = V4L2_COLORSPACE_REC709;
> + if (vdec->codec_info.transfer_chars == V4L2_XFER_FUNC_DEFAULT)
> + vdec->codec_info.transfer_chars = V4L2_XFER_FUNC_709;
> + if (vdec->codec_info.matrix_coeffs == V4L2_YCBCR_ENC_DEFAULT)
> + vdec->codec_info.matrix_coeffs = V4L2_YCBCR_ENC_709;
> + if (vdec->codec_info.full_range == V4L2_QUANTIZATION_DEFAULT)
> + vdec->codec_info.full_range = V4L2_QUANTIZATION_LIM_RANGE;
> +}
> +
> +static void vdec_init_crop(struct vpu_inst *inst)
> +{
> + struct vdec_t *vdec = inst->priv;
> +
> + inst->crop.left = vdec->codec_info.offset_x;
> + inst->crop.top = vdec->codec_info.offset_y;
> + inst->crop.width = vdec->codec_info.width;
> + inst->crop.height = vdec->codec_info.height;
> +}
> +
> +static void vdec_init_mbi(struct vpu_inst *inst)
> +{
> + struct vdec_t *vdec = inst->priv;
> +
> + vdec->mbi.size = vdec->codec_info.mbi_size;
> + vdec->mbi.max_count = ARRAY_SIZE(vdec->mbi.buffer);
> + scnprintf(vdec->mbi.name, sizeof(vdec->mbi.name), "mbi");
> + vdec->mbi.type = MEM_RES_MBI;
> + vdec->mbi.tag = vdec->seq_tag;
> +}
> +
> +static void vdec_init_dcp(struct vpu_inst *inst)
> +{
> + struct vdec_t *vdec = inst->priv;
> +
> + vdec->dcp.size = vdec->codec_info.dcp_size;
> + vdec->dcp.max_count = ARRAY_SIZE(vdec->dcp.buffer);
> + scnprintf(vdec->dcp.name, sizeof(vdec->dcp.name), "dcp");
> + vdec->dcp.type = MEM_RES_DCP;
> + vdec->dcp.tag = vdec->seq_tag;
> +}
> +
> +static void vdec_request_one_fs(struct vdec_fs_info *fs)
> +{
> + WARN_ON(!fs);
> +
> + fs->req_count++;
> + if (fs->req_count > fs->max_count)
> + fs->req_count = fs->max_count;
> +}
> +
> +static int vdec_alloc_fs_buffer(struct vpu_inst *inst, struct vdec_fs_info *fs)
> +{
> + struct vpu_buffer *buffer;
> +
> + if (!inst || !fs || !fs->size)
> + return -EINVAL;
> +
> + if (fs->count >= fs->req_count)
> + return -EINVAL;
> +
> + buffer = &fs->buffer[fs->count];
> + if (buffer->virt && buffer->length >= fs->size)
> + return 0;
> +
> + vpu_free_dma(buffer);
> + buffer->length = fs->size;
> + return vpu_alloc_dma(inst->core, buffer);
> +}
> +
> +static void vdec_alloc_fs(struct vpu_inst *inst, struct vdec_fs_info *fs)
> +{
> + int ret;
> +
> + while (fs->count < fs->req_count) {
> + ret = vdec_alloc_fs_buffer(inst, fs);
> + if (ret)
> + break;
> + fs->count++;
> + }
> +}
> +
> +static void vdec_clear_fs(struct vdec_fs_info *fs)
> +{
> + u32 i;
> +
> + if (!fs)
> + return;
> +
> + for (i = 0; i < ARRAY_SIZE(fs->buffer); i++)
> + vpu_free_dma(&fs->buffer[i]);
> + memset(fs, 0, sizeof(*fs));
> +}
> +
> +static int vdec_response_fs(struct vpu_inst *inst, struct vdec_fs_info *fs)
> +{
> + struct vpu_fs_info info;
> + int ret;
> +
> + if (fs->index >= fs->count)
> + return 0;
> +
> + memset(&info, 0, sizeof(info));
> + info.id = fs->index;
> + info.type = fs->type;
> + info.tag = fs->tag;
> + info.luma_addr = fs->buffer[fs->index].phys;
> + info.luma_size = fs->buffer[fs->index].length;
> + ret = vpu_session_alloc_fs(inst, &info);
> + if (ret)
> + return ret;
> +
> + fs->index++;
> + return 0;
> +}
> +
> +static int vdec_response_frame_abnormal(struct vpu_inst *inst)
> +{
> + struct vdec_t *vdec = inst->priv;
> + struct vpu_fs_info info;
> +
> + if (!vdec->req_frame_count)
> + return 0;
> +
> + memset(&info, 0, sizeof(info));
> + info.type = MEM_RES_FRAME;
> + info.tag = vdec->seq_tag + 0xf0;
> + vpu_session_alloc_fs(inst, &info);
> + vdec->req_frame_count--;
> +
> + return 0;
> +}
> +
> +static int vdec_response_frame(struct vpu_inst *inst, struct vb2_v4l2_buffer *vbuf)
> +{
> + struct vdec_t *vdec = inst->priv;
> + struct vpu_vb2_buffer *vpu_buf;
> + struct vpu_fs_info info;
> + int ret;
> +
> + if (inst->state != VPU_CODEC_STATE_ACTIVE)
> + return -EINVAL;
> +
> + if (!vdec->req_frame_count)
> + return -EINVAL;
> +
> + if (!vbuf)
> + return -EINVAL;
> +
> + if (vdec->slots[vbuf->vb2_buf.index]) {
> + dev_err(inst->dev, "[%d] repeat alloc fs %d\n",
> + inst->id, vbuf->vb2_buf.index);
> + return -EINVAL;
> + }
> +
> + dev_dbg(inst->dev, "[%d] state = %d, alloc fs %d, tag = 0x%x\n",
> + inst->id, inst->state, vbuf->vb2_buf.index, vdec->seq_tag);
> + vpu_buf = to_vpu_vb2_buffer(vbuf);
> +
> + memset(&info, 0, sizeof(info));
> + info.id = vbuf->vb2_buf.index;
> + info.type = MEM_RES_FRAME;
> + info.tag = vdec->seq_tag;
> + info.luma_addr = vpu_get_vb_phy_addr(&vbuf->vb2_buf, 0);
> + info.luma_size = inst->cap_format.sizeimage[0];
> + info.chroma_addr = vpu_get_vb_phy_addr(&vbuf->vb2_buf, 1);
> + info.chromau_size = inst->cap_format.sizeimage[1];
> + info.bytesperline = inst->cap_format.bytesperline[0];
> + ret = vpu_session_alloc_fs(inst, &info);
> + if (ret)
> + return ret;
> +
> + vpu_buf->tag = info.tag;
> + vpu_buf->luma = info.luma_addr;
> + vpu_buf->chroma_u = info.chroma_addr;
> + vpu_buf->chroma_v = 0;
> + vpu_buf->state = VPU_BUF_STATE_INUSE;
> + vdec->slots[info.id] = vpu_buf;
> + vdec->req_frame_count--;
> +
> + return 0;
> +}
> +
> +static void vdec_response_fs_request(struct vpu_inst *inst, bool force)
> +{
> + struct vdec_t *vdec = inst->priv;
> + int i;
> + int ret;
> +
> + if (force) {
> + for (i = vdec->req_frame_count; i > 0; i--)
> + vdec_response_frame_abnormal(inst);
> + return;
> + }
> +
> + for (i = vdec->req_frame_count; i > 0; i--) {
> + ret = vpu_process_capture_buffer(inst);
> + if (ret)
> + break;
> + if (vdec->eos_received)
> + break;
> + }
> +
> + for (i = vdec->mbi.index; i < vdec->mbi.count; i++) {
> + if (vdec_response_fs(inst, &vdec->mbi))
> + break;
> + if (vdec->eos_received)
> + break;
> + }
> + for (i = vdec->dcp.index; i < vdec->dcp.count; i++) {
> + if (vdec_response_fs(inst, &vdec->dcp))
> + break;
> + if (vdec->eos_received)
> + break;
> + }
> +}
> +
> +static void vdec_response_fs_release(struct vpu_inst *inst, u32 id, u32 tag)
> +{
> + struct vpu_fs_info info;
> +
> + memset(&info, 0, sizeof(info));
> + info.id = id;
> + info.tag = tag;
> + vpu_session_release_fs(inst, &info);
> +}
> +
> +static void vdec_recycle_buffer(struct vpu_inst *inst, struct vb2_v4l2_buffer *vbuf)
> +{
> + if (!inst || !vbuf)
> + return;
> +
> + if (vbuf->vb2_buf.state != VB2_BUF_STATE_ACTIVE)
> + return;
> + if (vpu_find_buf_by_idx(inst, vbuf->vb2_buf.type, vbuf->vb2_buf.index))
> + return;
> + v4l2_m2m_buf_queue(inst->fh.m2m_ctx, vbuf);
> +}
> +
> +static void vdec_clear_slots(struct vpu_inst *inst)
> +{
> + struct vdec_t *vdec = inst->priv;
> + struct vpu_vb2_buffer *vpu_buf;
> + struct vb2_v4l2_buffer *vbuf;
> + int i;
> +
> + for (i = 0; i < ARRAY_SIZE(vdec->slots); i++) {
> + if (!vdec->slots[i])
> + continue;
> +
> + vpu_buf = vdec->slots[i];
> + vbuf = &vpu_buf->m2m_buf.vb;
> +
> + vdec_response_fs_release(inst, i, vpu_buf->tag);
> + vdec_recycle_buffer(inst, vbuf);
> + vdec->slots[i]->state = VPU_BUF_STATE_IDLE;
> + vdec->slots[i] = NULL;
> + }
> +}
> +
> +static void vdec_event_seq_hdr(struct vpu_inst *inst,
> + struct vpu_dec_codec_info *hdr)
> +{
> + struct vdec_t *vdec = inst->priv;
> +
> + vpu_inst_lock(inst);
> + memcpy(&vdec->codec_info, hdr, sizeof(vdec->codec_info));
> +
> + vpu_trace(inst->dev, "[%d] %d x %d, crop : (%d, %d) %d x %d, %d, %d\n",
> + inst->id,
> + vdec->codec_info.decoded_width,
> + vdec->codec_info.decoded_height,
> + vdec->codec_info.offset_x,
> + vdec->codec_info.offset_y,
> + vdec->codec_info.width,
> + vdec->codec_info.height,
> + hdr->num_ref_frms,
> + hdr->num_dpb_frms);
> + inst->min_buffer_cap = hdr->num_ref_frms + hdr->num_dpb_frms;
> + vdec->is_source_changed = vdec_check_source_change(inst);
> + vdec_init_fmt(inst);
> + vdec_init_crop(inst);
> + vdec_init_mbi(inst);
> + vdec_init_dcp(inst);
> + if (!vdec->seq_hdr_found) {
> + vdec->seq_tag = vdec->codec_info.tag;
> + if (vdec->is_source_changed) {
> + vdec_update_state(inst, VPU_CODEC_STATE_DYAMIC_RESOLUTION_CHANGE, 0);
> + vpu_notify_source_change(inst);
> + vdec->is_source_changed = false;
> + }
> + }
> + if (vdec->seq_tag != vdec->codec_info.tag) {
> + vdec_response_fs_request(inst, true);
> + vpu_trace(inst->dev, "[%d] seq tag change: %d -> %d\n",
> + inst->id, vdec->seq_tag, vdec->codec_info.tag);
> + }
> + vdec->seq_hdr_found++;
> + vdec->fixed_fmt = true;
> + vpu_inst_unlock(inst);
> +}
> +
> +static void vdec_event_resolution_change(struct vpu_inst *inst)
> +{
> + struct vdec_t *vdec = inst->priv;
> +
> + vpu_trace(inst->dev, "[%d]\n", inst->id);
> + vpu_inst_lock(inst);
> + vdec->seq_tag = vdec->codec_info.tag;
> + vdec_clear_fs(&vdec->mbi);
> + vdec_clear_fs(&vdec->dcp);
> + vdec_clear_slots(inst);
> + vdec_init_mbi(inst);
> + vdec_init_dcp(inst);
> + if (vdec->is_source_changed) {
> + vdec_update_state(inst, VPU_CODEC_STATE_DYAMIC_RESOLUTION_CHANGE, 0);
> + vdec->source_change++;
> + vdec_handle_resolution_change(inst);
> + vdec->is_source_changed = false;
> + }
> + vpu_inst_unlock(inst);
> +}
> +
> +static void vdec_event_req_fs(struct vpu_inst *inst, struct vpu_fs_info *fs)
> +{
> + struct vdec_t *vdec = inst->priv;
> +
> + if (!fs)
> + return;
> +
> + vpu_inst_lock(inst);
> +
> + switch (fs->type) {
> + case MEM_RES_FRAME:
> + vdec->req_frame_count++;
> + break;
> + case MEM_RES_MBI:
> + vdec_request_one_fs(&vdec->mbi);
> + break;
> + case MEM_RES_DCP:
> + vdec_request_one_fs(&vdec->dcp);
> + break;
> + default:
> + break;
> + }
> +
> + vdec_alloc_fs(inst, &vdec->mbi);
> + vdec_alloc_fs(inst, &vdec->dcp);
> +
> + vdec_response_fs_request(inst, false);
> +
> + vpu_inst_unlock(inst);
> +}
> +
> +static void vdec_evnet_rel_fs(struct vpu_inst *inst, struct vpu_fs_info *fs)
> +{
> + struct vdec_t *vdec = inst->priv;
> + struct vpu_vb2_buffer *vpu_buf;
> + struct vb2_v4l2_buffer *vbuf;
> +
> + if (!fs || fs->id >= ARRAY_SIZE(vdec->slots))
> + return;
> + if (fs->type != MEM_RES_FRAME)
> + return;
> +
> + if (fs->id >= vpu_get_num_buffers(inst, inst->cap_format.type)) {
> + dev_err(inst->dev, "[%d] invalid fs(%d) to release\n", inst->id, fs->id);
> + return;
> + }
> +
> + vpu_inst_lock(inst);
> + vpu_buf = vdec->slots[fs->id];
> + vdec->slots[fs->id] = NULL;
> +
> + if (!vpu_buf) {
> + dev_dbg(inst->dev, "[%d] fs[%d] has been released\n", inst->id, fs->id);
> + goto exit;
> + }
> +
> + if (vpu_buf->state == VPU_BUF_STATE_DECODED) {
> + dev_dbg(inst->dev, "[%d] frame skip\n", inst->id);
> + vdec->sequence++;
> + }
> +
> + vdec_response_fs_release(inst, fs->id, vpu_buf->tag);
> + vbuf = &vpu_buf->m2m_buf.vb;
> + if (vpu_buf->state != VPU_BUF_STATE_READY)
> + vdec_recycle_buffer(inst, vbuf);
> +
> + vpu_buf->state = VPU_BUF_STATE_IDLE;
> + vpu_process_capture_buffer(inst);
> +
> +exit:
> + vpu_inst_unlock(inst);
> +}
> +
> +static void vdec_event_eos(struct vpu_inst *inst)
> +{
> + struct vdec_t *vdec = inst->priv;
> +
> + vpu_trace(inst->dev, "[%d] input : %d, decoded : %d, display : %d, sequence : %d\n",
> + inst->id,
> + vdec->params.frame_count,
> + vdec->decoded_frame_count,
> + vdec->display_frame_count,
> + vdec->sequence);
> + vpu_inst_lock(inst);
> + vdec->eos_received++;
> + vdec->fixed_fmt = false;
> + inst->min_buffer_cap = VDEC_MIN_BUFFER_CAP;
> + vdec_update_state(inst, VPU_CODEC_STATE_DRAIN, 0);
> + vdec_set_last_buffer_dequeued(inst);
> + vpu_inst_unlock(inst);
> +}
> +
> +static void vdec_event_notify(struct vpu_inst *inst, u32 event, void *data)
> +{
> + switch (event) {
> + case VPU_MSG_ID_SEQ_HDR_FOUND:
> + vdec_event_seq_hdr(inst, data);
> + break;
> + case VPU_MSG_ID_RES_CHANGE:
> + vdec_event_resolution_change(inst);
> + break;
> + case VPU_MSG_ID_FRAME_REQ:
> + vdec_event_req_fs(inst, data);
> + break;
> + case VPU_MSG_ID_FRAME_RELEASE:
> + vdec_evnet_rel_fs(inst, data);
> + break;
> + case VPU_MSG_ID_PIC_EOS:
> + vdec_event_eos(inst);
> + break;
> + default:
> + break;
> + }
> +}
> +
> +static int vdec_process_output(struct vpu_inst *inst, struct vb2_buffer *vb)
> +{
> + struct vdec_t *vdec = inst->priv;
> + struct vb2_v4l2_buffer *vbuf;
> + struct vpu_vb2_buffer *vpu_buf;
> + struct vpu_rpc_buffer_desc desc;
> + s64 timestamp;
> + u32 free_space;
> + int ret;
> +
> + vbuf = to_vb2_v4l2_buffer(vb);
> + vpu_buf = to_vpu_vb2_buffer(vbuf);
> + dev_dbg(inst->dev, "[%d] dec output [%d] %d : %ld\n",
> + inst->id, vbuf->sequence, vb->index, vb2_get_plane_payload(vb, 0));
> +
> + if (inst->state == VPU_CODEC_STATE_DEINIT)
> + return -EINVAL;
> + if (vdec->reset_codec)
> + return -EINVAL;
> +
> + if (inst->state == VPU_CODEC_STATE_STARTED)
> + vdec_update_state(inst, VPU_CODEC_STATE_ACTIVE, 0);
> +
> + ret = vpu_iface_get_stream_buffer_desc(inst, &desc);
> + if (ret)
> + return ret;
> +
> + free_space = vpu_helper_get_free_space(inst);
> + if (free_space < vb2_get_plane_payload(vb, 0) + 0x40000)
> + return -ENOMEM;
> +
> + timestamp = vb->timestamp;
> + if (timestamp >= 0 && vdec->ts_start < 0)
> + vdec->ts_start = timestamp;
> + if (vdec->ts_input < timestamp)
> + vdec->ts_input = timestamp;
> +
> + ret = vpu_iface_input_frame(inst, vb);
> + if (ret < 0)
> + return -ENOMEM;
> +
> + dev_dbg(inst->dev, "[%d][INPUT TS]%32lld\n", inst->id, vb->timestamp);
> + vdec->ts_pre_count++;
> + vdec->params.frame_count++;
> +
> + v4l2_m2m_src_buf_remove_by_buf(inst->fh.m2m_ctx, vbuf);
> + vpu_buf->state = VPU_BUF_STATE_IDLE;
> + v4l2_m2m_buf_done(vbuf, VB2_BUF_STATE_DONE);
> +
> + if (vdec->drain)
> + vdec_drain(inst);
> +
> + return 0;
> +}
> +
> +static int vdec_process_capture(struct vpu_inst *inst, struct vb2_buffer *vb)
> +{
> + struct vdec_t *vdec = inst->priv;
> + struct vb2_v4l2_buffer *vbuf = to_vb2_v4l2_buffer(vb);
> + int ret;
> +
> + if (inst->state == VPU_CODEC_STATE_DYAMIC_RESOLUTION_CHANGE)
> + return -EINVAL;
> + if (vdec->reset_codec)
> + return -EINVAL;
> +
> + ret = vdec_response_frame(inst, vbuf);
> + if (ret)
> + return ret;
> + v4l2_m2m_dst_buf_remove_by_buf(inst->fh.m2m_ctx, vbuf);
> + return 0;
> +}
> +
> +static void vdec_on_queue_empty(struct vpu_inst *inst, u32 type)
> +{
> + struct vdec_t *vdec = inst->priv;
> +
> + if (V4L2_TYPE_IS_OUTPUT(type))
> + return;
> +
> + vdec_handle_resolution_change(inst);
> + if (vdec->eos_received)
> + vdec_set_last_buffer_dequeued(inst);
> +}
> +
> +static void vdec_abort(struct vpu_inst *inst)
> +{
> + struct vdec_t *vdec = inst->priv;
> + struct vpu_rpc_buffer_desc desc;
> + int ret;
> +
> + vpu_trace(inst->dev, "[%d] state = %d\n", inst->id, inst->state);
> + vpu_iface_add_scode(inst, SCODE_PADDING_ABORT);
> + vdec->params.end_flag = 1;
> + vpu_iface_set_decode_params(inst, &vdec->params, 1);
> +
> + vpu_session_abort(inst);
> +
> + ret = vpu_iface_get_stream_buffer_desc(inst, &desc);
> + if (!ret)
> + vpu_iface_update_stream_buffer(inst, desc.rptr, 1);
> +
> + vpu_session_rst_buf(inst);
> + vpu_trace(inst->dev, "[%d] input : %d, decoded : %d, display : %d, sequence : %d\n",
> + inst->id,
> + vdec->params.frame_count,
> + vdec->decoded_frame_count,
> + vdec->display_frame_count,
> + vdec->sequence);
> + vdec->params.end_flag = 0;
> + vdec->drain = 0;
> + vdec->ts_pre_count = 0;
> + vdec->timestamp = VPU_INVALID_TIMESTAMP;
> + vdec->ts_start = VPU_INVALID_TIMESTAMP;
> + vdec->ts_input = VPU_INVALID_TIMESTAMP;
> + vdec->params.frame_count = 0;
> + vdec->decoded_frame_count = 0;
> + vdec->display_frame_count = 0;
> + vdec->sequence = 0;
> +}
> +
> +static void vdec_stop(struct vpu_inst *inst, bool free)
> +{
> + struct vdec_t *vdec = inst->priv;
> +
> + vdec_clear_slots(inst);
> + if (inst->state != VPU_CODEC_STATE_DEINIT)
> + vpu_session_stop(inst);
> + vdec_clear_fs(&vdec->mbi);
> + vdec_clear_fs(&vdec->dcp);
> + if (free) {
> + vpu_free_dma(&vdec->udata);
> + vpu_free_dma(&inst->stream_buffer);
> + }
> + vdec_update_state(inst, VPU_CODEC_STATE_DEINIT, 1);
> + vdec->reset_codec = false;
> +}
> +
> +static void vdec_release(struct vpu_inst *inst)
> +{
> + if (inst->id != VPU_INST_NULL_ID)
> + vpu_trace(inst->dev, "[%d]\n", inst->id);
> + vpu_inst_lock(inst);
> + vdec_stop(inst, true);
> + vpu_inst_unlock(inst);
> +}
> +
> +static void vdec_cleanup(struct vpu_inst *inst)
> +{
> + struct vdec_t *vdec;
> +
> + if (!inst)
> + return;
> +
> + vdec = inst->priv;
> + vfree(vdec);
> + inst->priv = NULL;
> + vfree(inst);
> +}
> +
> +static void vdec_init_params(struct vdec_t *vdec)
> +{
> + vdec->params.frame_count = 0;
> + vdec->params.end_flag = 0;
> +}
> +
> +static int vdec_start(struct vpu_inst *inst)
> +{
> + struct vdec_t *vdec = inst->priv;
> + int stream_buffer_size;
> + int ret;
> +
> + if (inst->state != VPU_CODEC_STATE_DEINIT)
> + return 0;
> +
> + vpu_trace(inst->dev, "[%d]\n", inst->id);
> + if (!vdec->udata.virt) {
> + vdec->udata.length = 0x1000;
> + ret = vpu_alloc_dma(inst->core, &vdec->udata);
> + if (ret) {
> + dev_err(inst->dev, "[%d] alloc udata fail\n", inst->id);
> + goto error;
> + }
> + }
> +
> + if (!inst->stream_buffer.virt) {
> + stream_buffer_size = vpu_iface_get_stream_buffer_size(inst->core);
> + if (stream_buffer_size > 0) {
> + inst->stream_buffer.length = stream_buffer_size;
> + ret = vpu_alloc_dma(inst->core, &inst->stream_buffer);
> + if (ret) {
> + dev_err(inst->dev, "[%d] alloc stream buffer fail\n", inst->id);
> + goto error;
> + }
> + inst->use_stream_buffer = true;
> + }
> + }
> +
> + if (inst->use_stream_buffer)
> + vpu_iface_config_stream_buffer(inst, &inst->stream_buffer);
> + vpu_iface_init_instance(inst);
> + vdec->params.udata.base = vdec->udata.phys;
> + vdec->params.udata.size = vdec->udata.length;
> + ret = vpu_iface_set_decode_params(inst, &vdec->params, 0);
> + if (ret) {
> + dev_err(inst->dev, "[%d] set decode params fail\n", inst->id);
> + goto error;
> + }
> +
> + vdec_init_params(vdec);
> + ret = vpu_session_start(inst);
> + if (ret) {
> + dev_err(inst->dev, "[%d] start fail\n", inst->id);
> + goto error;
> + }
> +
> + vdec_update_state(inst, VPU_CODEC_STATE_STARTED, 0);
> +
> + return 0;
> +error:
> + vpu_free_dma(&vdec->udata);
> + vpu_free_dma(&inst->stream_buffer);
> + return ret;
> +}
> +
> +static int vdec_start_session(struct vpu_inst *inst, u32 type)
> +{
> + struct vdec_t *vdec = inst->priv;
> + int ret = 0;
> +
> + if (V4L2_TYPE_IS_OUTPUT(type)) {
> + if (vdec->reset_codec)
> + vdec_stop(inst, false);
> + if (inst->state == VPU_CODEC_STATE_DEINIT) {
> + ret = vdec_start(inst);
> + if (ret)
> + return ret;
> + }
> + }
> +
> + if (V4L2_TYPE_IS_OUTPUT(type)) {
> + if (inst->state == VPU_CODEC_STATE_SEEK)
> + vdec_update_state(inst, vdec->state, 1);
> + vdec->eos_received = 0;
> + vpu_process_output_buffer(inst);
> + } else {
> + vdec_cmd_start(inst);
> + }
> + if (inst->state == VPU_CODEC_STATE_ACTIVE)
> + vdec_response_fs_request(inst, false);
> +
> + return ret;
> +}
> +
> +static int vdec_stop_session(struct vpu_inst *inst, u32 type)
> +{
> + struct vdec_t *vdec = inst->priv;
> +
> + if (inst->state == VPU_CODEC_STATE_DEINIT)
> + return 0;
> +
> + if (V4L2_TYPE_IS_OUTPUT(type)) {
> + vdec_update_state(inst, VPU_CODEC_STATE_SEEK, 0);
> + vdec->drain = 0;
> + } else {
> + if (inst->state != VPU_CODEC_STATE_DYAMIC_RESOLUTION_CHANGE)
> + vdec_abort(inst);
> +
> + vdec->eos_received = 0;
> + vdec_clear_slots(inst);
> + }
> +
> + return 0;
> +}
> +
> +static int vdec_get_debug_info(struct vpu_inst *inst, char *str, u32 size, u32 i)
> +{
> + struct vdec_t *vdec = inst->priv;
> + int num = -1;
> +
> + switch (i) {
> + case 0:
> + num = scnprintf(str, size,
> + "req_frame_count = %d\ninterlaced = %d\n",
> + vdec->req_frame_count,
> + vdec->codec_info.progressive ? 0 : 1);
> + break;
> + case 1:
> + num = scnprintf(str, size,
> + "mbi: size = 0x%x request = %d, alloc = %d, response = %d\n",
> + vdec->mbi.size,
> + vdec->mbi.req_count,
> + vdec->mbi.count,
> + vdec->mbi.index);
> + break;
> + case 2:
> + num = scnprintf(str, size,
> + "dcp: size = 0x%x request = %d, alloc = %d, response = %d\n",
> + vdec->dcp.size,
> + vdec->dcp.req_count,
> + vdec->dcp.count,
> + vdec->dcp.index);
> + break;
> + case 3:
> + num = scnprintf(str, size, "input_frame_count = %d\n", vdec->params.frame_count);
> + break;
> + case 4:
> + num = scnprintf(str, size, "decoded_frame_count = %d\n", vdec->decoded_frame_count);
> + break;
> + case 5:
> + num = scnprintf(str, size, "display_frame_count = %d\n", vdec->display_frame_count);
> + break;
> + case 6:
> + num = scnprintf(str, size, "sequence = %d\n", vdec->sequence);
> + break;
> + case 7:
> + num = scnprintf(str, size, "drain = %d, eos = %d, source_change = %d\n",
> + vdec->drain, vdec->eos_received, vdec->source_change);
> + break;
> + case 8:
> + num = scnprintf(str, size, "ts_pre_count = %d, frame_depth = %d\n",
> + vdec->ts_pre_count, vdec->frame_depth);
> + break;
> + case 9:
> + num = scnprintf(str, size, "fps = %d/%d\n",
> + vdec->codec_info.frame_rate.numerator,
> + vdec->codec_info.frame_rate.denominator);
> + break;
> + case 10:
> + {
> + s64 timestamp = vdec->timestamp;
> + s64 ts_start = vdec->ts_start;
> + s64 ts_input = vdec->ts_input;
> +
> + num = scnprintf(str, size, "timestamp = %9lld.%09lld(%9lld.%09lld, %9lld.%09lld)\n",
> + timestamp / NSEC_PER_SEC,
> + timestamp % NSEC_PER_SEC,
> + ts_start / NSEC_PER_SEC,
> + ts_start % NSEC_PER_SEC,
> + ts_input / NSEC_PER_SEC,
> + ts_input % NSEC_PER_SEC);
> + }
> + break;
> + default:
> + break;
> + }
> +
> + return num;
> +}
> +
> +static struct vpu_inst_ops vdec_inst_ops = {
> + .ctrl_init = vdec_ctrl_init,
> + .check_ready = vdec_check_ready,
> + .buf_done = vdec_buf_done,
> + .get_one_frame = vdec_frame_decoded,
> + .stop_done = vdec_stop_done,
> + .event_notify = vdec_event_notify,
> + .release = vdec_release,
> + .cleanup = vdec_cleanup,
> + .start = vdec_start_session,
> + .stop = vdec_stop_session,
> + .process_output = vdec_process_output,
> + .process_capture = vdec_process_capture,
> + .on_queue_empty = vdec_on_queue_empty,
> + .get_debug_info = vdec_get_debug_info,
> + .wait_prepare = vpu_inst_unlock,
> + .wait_finish = vpu_inst_lock,
> +};
> +
> +static void vdec_init(struct file *file)
> +{
> + struct vpu_inst *inst = to_inst(file);
> + struct vdec_t *vdec;
> + struct v4l2_format f;
> +
> + vdec = inst->priv;
> + vdec->frame_depth = VDEC_FRAME_DEPTH;
> + vdec->timestamp = VPU_INVALID_TIMESTAMP;
> + vdec->ts_start = VPU_INVALID_TIMESTAMP;
> + vdec->ts_input = VPU_INVALID_TIMESTAMP;
> +
> + memset(&f, 0, sizeof(f));
> + f.type = V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE;
> + f.fmt.pix_mp.pixelformat = V4L2_PIX_FMT_H264;
> + f.fmt.pix_mp.width = 1280;
> + f.fmt.pix_mp.height = 720;
> + f.fmt.pix_mp.field = V4L2_FIELD_NONE;
> + vdec_s_fmt(file, &inst->fh, &f);
> +
> + memset(&f, 0, sizeof(f));
> + f.type = V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE;
> + f.fmt.pix_mp.pixelformat = V4L2_PIX_FMT_NV12MT_8L128;
> + f.fmt.pix_mp.width = 1280;
> + f.fmt.pix_mp.height = 720;
> + f.fmt.pix_mp.field = V4L2_FIELD_NONE;
> + vdec_s_fmt(file, &inst->fh, &f);
> +}
> +
> +static int vdec_open(struct file *file)
> +{
> + struct vpu_inst *inst;
> + struct vdec_t *vdec;
> + int ret;
> +
> + inst = vzalloc(sizeof(*inst));
> + if (!inst)
> + return -ENOMEM;
> +
> + vdec = vzalloc(sizeof(*vdec));
> + if (!vdec) {
> + vfree(inst);
> + return -ENOMEM;
> + }
> +
> + inst->ops = &vdec_inst_ops;
> + inst->formats = vdec_formats;
> + inst->type = VPU_CORE_TYPE_DEC;
> + inst->priv = vdec;
> +
> + ret = vpu_v4l2_open(file, inst);
> + if (ret)
> + return ret;
> +
> + vdec->fixed_fmt = false;
> + inst->min_buffer_cap = VDEC_MIN_BUFFER_CAP;
> + vdec_init(file);
> +
> + return 0;
> +}
> +
> +static __poll_t vdec_poll(struct file *file, poll_table *wait)
> +{
> + struct vpu_inst *inst = to_inst(file);
> + struct vb2_queue *src_q, *dst_q;
> + __poll_t ret;
> +
> + ret = v4l2_m2m_fop_poll(file, wait);
> + src_q = v4l2_m2m_get_src_vq(inst->fh.m2m_ctx);
> + dst_q = v4l2_m2m_get_dst_vq(inst->fh.m2m_ctx);
> + if (vb2_is_streaming(src_q) && !vb2_is_streaming(dst_q))
> + ret &= (~EPOLLERR);
> + if (!src_q->error && !dst_q->error &&
> + (vb2_is_streaming(src_q) && list_empty(&src_q->queued_list)) &&
> + (vb2_is_streaming(dst_q) && list_empty(&dst_q->queued_list)))
> + ret &= (~EPOLLERR);
> +
> + return ret;
> +}
> +
> +static const struct v4l2_file_operations vdec_fops = {
> + .owner = THIS_MODULE,
> + .open = vdec_open,
> + .release = vpu_v4l2_close,
> + .unlocked_ioctl = video_ioctl2,
> + .poll = vdec_poll,
> + .mmap = v4l2_m2m_fop_mmap,
> +};
> +
> +const struct v4l2_ioctl_ops *vdec_get_ioctl_ops(void)
> +{
> + return &vdec_ioctl_ops;
> +}
> +
> +const struct v4l2_file_operations *vdec_get_fops(void)
> +{
> + return &vdec_fops;
> +}


2021-12-03 05:43:04

by Ming Qian

[permalink] [raw]
Subject: RE: [EXT] Re: [PATCH v13 08/13] media: amphion: add v4l2 m2m vpu decoder stateful driver

> -----Original Message-----
> From: Nicolas Dufresne [mailto:[email protected]]
> Sent: Friday, December 3, 2021 12:56 PM
> To: Ming Qian <[email protected]>; [email protected];
> [email protected]; [email protected]; [email protected]
> Cc: [email protected]; [email protected]; [email protected];
> dl-linux-imx <[email protected]>; Aisheng Dong <[email protected]>;
> [email protected]; [email protected];
> [email protected]; [email protected]
> Subject: [EXT] Re: [PATCH v13 08/13] media: amphion: add v4l2 m2m vpu
> decoder stateful driver
>
> Caution: EXT Email
>
> On Tuesday 30 November 2021 at 17:48 +0800, Ming Qian wrote:
> > This consists of video decoder implementation plus decoder controls.
> >
> > Signed-off-by: Ming Qian <[email protected]>
> > Signed-off-by: Shijie Qin <[email protected]>
> > Signed-off-by: Zhou Peng <[email protected]>
> > ---
> > drivers/media/platform/amphion/vdec.c | 1680
> +++++++++++++++++++++++++


> > +
> > +static void vdec_init_fmt(struct vpu_inst *inst)
> > +{
> > + struct vdec_t *vdec = inst->priv;
> > + const struct vpu_format *fmt;
> > + int i;
> > +
> > + fmt = vpu_helper_find_format(inst, inst->cap_format.type,
> vdec->codec_info.pixfmt);
> > + inst->out_format.width = vdec->codec_info.width;
> > + inst->out_format.height = vdec->codec_info.height;
> > + inst->cap_format.width = vdec->codec_info.decoded_width;
> > + inst->cap_format.height = vdec->codec_info.decoded_height;
> > + inst->cap_format.pixfmt = vdec->codec_info.pixfmt;
> > + if (fmt) {
> > + inst->cap_format.num_planes = fmt->num_planes;
> > + inst->cap_format.flags = fmt->flags;
> > + }
> > + for (i = 0; i < inst->cap_format.num_planes; i++) {
> > + inst->cap_format.bytesperline[i] =
> vdec->codec_info.bytesperline[i];
> > + inst->cap_format.sizeimage[i] =
> vdec->codec_info.sizeimage[i];
> > + }
> > + if (vdec->codec_info.progressive)
> > + inst->cap_format.field = V4L2_FIELD_NONE;
> > + else
> > + inst->cap_format.field = V4L2_FIELD_INTERLACED;
>
> As a follow-up, this should be conditional on the chosen pixel format. If I
> understood correctly, interlaced output is only produced for linear NV12; for
> tiled formats the fields are output separately in their respective
> v4l2_buffer. Not sure where yet, but the V4L2 spec requires you to pair the
> fields by using the same seq_num on both.

The amphion VPU stores the two fields in one v4l2_buffer,
so I'll change V4L2_FIELD_INTERLACED to V4L2_FIELD_SEQ_TB.
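
With V4L2_FIELD_SEQ_TB the two fields share one buffer: the top field occupies the first height/2 lines of each plane and the bottom field follows it. A minimal sketch of the resulting offset (hypothetical helper, not part of the driver):

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical helper, not driver code: byte offset of the bottom field
 * within one plane for V4L2_FIELD_SEQ_TB. The top field fills the first
 * height/2 lines, so the bottom field starts right after them. */
static size_t seq_tb_bottom_field_offset(size_t bytesperline, size_t height)
{
	return bytesperline * (height / 2);
}
```

For a 1280x720 NV12 luma plane with bytesperline = 1280, the bottom field would start at byte 460800.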

>
> > + if (vdec->codec_info.color_primaries == V4L2_COLORSPACE_DEFAULT)
> > + vdec->codec_info.color_primaries =
> V4L2_COLORSPACE_REC709;
> > + if (vdec->codec_info.transfer_chars == V4L2_XFER_FUNC_DEFAULT)
> > + vdec->codec_info.transfer_chars = V4L2_XFER_FUNC_709;
> > + if (vdec->codec_info.matrix_coeffs == V4L2_YCBCR_ENC_DEFAULT)
> > + vdec->codec_info.matrix_coeffs = V4L2_YCBCR_ENC_709;
> > + if (vdec->codec_info.full_range == V4L2_QUANTIZATION_DEFAULT)
> > + vdec->codec_info.full_range =
> V4L2_QUANTIZATION_LIM_RANGE;
> > +}
> > +

2021-12-03 06:01:58

by Ming Qian

[permalink] [raw]
Subject: RE: [EXT] Re: [PATCH v13 08/13] media: amphion: add v4l2 m2m vpu decoder stateful driver

> -----Original Message-----
> From: Ming Qian
> Sent: Friday, December 3, 2021 1:43 PM
> To: Nicolas Dufresne <[email protected]>; [email protected];
> [email protected]; [email protected]; [email protected]
> Cc: [email protected]; [email protected]; [email protected];
> dl-linux-imx <[email protected]>; Aisheng Dong <[email protected]>;
> [email protected]; [email protected];
> [email protected]; [email protected]
> Subject: RE: [EXT] Re: [PATCH v13 08/13] media: amphion: add v4l2 m2m vpu
> decoder stateful driver
>
> > -----Original Message-----
> > From: Nicolas Dufresne [mailto:[email protected]]
> > Sent: Friday, December 3, 2021 12:56 PM
> > To: Ming Qian <[email protected]>; [email protected];
> > [email protected]; [email protected]; [email protected]
> > Cc: [email protected]; [email protected];
> > [email protected]; dl-linux-imx <[email protected]>; Aisheng Dong
> > <[email protected]>; [email protected];
> > [email protected]; [email protected];
> > [email protected]
> > Subject: [EXT] Re: [PATCH v13 08/13] media: amphion: add v4l2 m2m vpu
> > decoder stateful driver
> >
> >
> > On Tuesday 30 November 2021 at 17:48 +0800, Ming Qian wrote:
> > > This consists of video decoder implementation plus decoder controls.
> > >
> > > Signed-off-by: Ming Qian <[email protected]>
> > > Signed-off-by: Shijie Qin <[email protected]>
> > > Signed-off-by: Zhou Peng <[email protected]>
> > > ---
> > > drivers/media/platform/amphion/vdec.c | 1680
> > +++++++++++++++++++++++++
>
>
> > > +
> > > +static void vdec_init_fmt(struct vpu_inst *inst) {
> > > + struct vdec_t *vdec = inst->priv;
> > > + const struct vpu_format *fmt;
> > > + int i;
> > > +
> > > + fmt = vpu_helper_find_format(inst, inst->cap_format.type,
> > vdec->codec_info.pixfmt);
> > > + inst->out_format.width = vdec->codec_info.width;
> > > + inst->out_format.height = vdec->codec_info.height;
> > > + inst->cap_format.width = vdec->codec_info.decoded_width;
> > > + inst->cap_format.height = vdec->codec_info.decoded_height;
> > > + inst->cap_format.pixfmt = vdec->codec_info.pixfmt;
> > > + if (fmt) {
> > > + inst->cap_format.num_planes = fmt->num_planes;
> > > + inst->cap_format.flags = fmt->flags;
> > > + }
> > > + for (i = 0; i < inst->cap_format.num_planes; i++) {
> > > + inst->cap_format.bytesperline[i] =
> > vdec->codec_info.bytesperline[i];
> > > + inst->cap_format.sizeimage[i] =
> > vdec->codec_info.sizeimage[i];
> > > + }
> > > + if (vdec->codec_info.progressive)
> > > + inst->cap_format.field = V4L2_FIELD_NONE;
> > > + else
> > > + inst->cap_format.field = V4L2_FIELD_INTERLACED;
> >
> > As a follow-up, this should be conditional on the chosen pixel format.
> > If I understood correctly, interlaced output is only produced for
> > linear NV12; for tiled formats the fields are output separately in
> > their respective v4l2_buffer. Not sure where yet, but the V4L2 spec
> > requires you to pair the fields by using the same seq_num on both.
>
> The amphion VPU stores the two fields in one v4l2_buffer, so I'll change
> V4L2_FIELD_INTERLACED to V4L2_FIELD_SEQ_TB
>

Hi Nicolas,
Seems gstreamer doesn't support V4L2_FIELD_SEQ_TB yet.

switch (fmt.fmt.pix.field) {
case V4L2_FIELD_ANY:
case V4L2_FIELD_NONE:
interlace_mode = GST_VIDEO_INTERLACE_MODE_PROGRESSIVE;
break;
case V4L2_FIELD_INTERLACED:
case V4L2_FIELD_INTERLACED_TB:
case V4L2_FIELD_INTERLACED_BT:
interlace_mode = GST_VIDEO_INTERLACE_MODE_INTERLEAVED;
break;
case V4L2_FIELD_ALTERNATE:
interlace_mode = GST_VIDEO_INTERLACE_MODE_ALTERNATE;
break;
default:
goto unsupported_field;
}
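
For reference, the mapping above could be extended to cover the sequential layouts once supported; a sketch using local constants that mirror enum v4l2_field from linux/videodev2.h (the mode names here are made up for illustration, not GStreamer's):

```c
#include <assert.h>

/* Numeric values mirror enum v4l2_field in linux/videodev2.h. */
enum { FIELD_ANY = 0, FIELD_NONE = 1, FIELD_INTERLACED = 4,
       FIELD_SEQ_TB = 5, FIELD_SEQ_BT = 6, FIELD_ALTERNATE = 7,
       FIELD_INTERLACED_TB = 8, FIELD_INTERLACED_BT = 9 };

enum interlace_mode {
	MODE_PROGRESSIVE,  /* full progressive frames */
	MODE_INTERLEAVED,  /* both fields woven line-by-line in one buffer */
	MODE_SEQUENTIAL,   /* both fields in one buffer, stored back to back */
	MODE_ALTERNATE,    /* one field per buffer */
	MODE_UNSUPPORTED,
};

static enum interlace_mode map_field(int field)
{
	switch (field) {
	case FIELD_ANY:
	case FIELD_NONE:
		return MODE_PROGRESSIVE;
	case FIELD_INTERLACED:
	case FIELD_INTERLACED_TB:
	case FIELD_INTERLACED_BT:
		return MODE_INTERLEAVED;
	case FIELD_SEQ_TB:
	case FIELD_SEQ_BT:
		return MODE_SEQUENTIAL;
	case FIELD_ALTERNATE:
		return MODE_ALTERNATE;
	default:
		return MODE_UNSUPPORTED;
	}
}
```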

> >
> > > + if (vdec->codec_info.color_primaries ==
> V4L2_COLORSPACE_DEFAULT)
> > > + vdec->codec_info.color_primaries =
> > V4L2_COLORSPACE_REC709;
> > > + if (vdec->codec_info.transfer_chars == V4L2_XFER_FUNC_DEFAULT)
> > > + vdec->codec_info.transfer_chars = V4L2_XFER_FUNC_709;
> > > + if (vdec->codec_info.matrix_coeffs == V4L2_YCBCR_ENC_DEFAULT)
> > > + vdec->codec_info.matrix_coeffs = V4L2_YCBCR_ENC_709;
> > > + if (vdec->codec_info.full_range == V4L2_QUANTIZATION_DEFAULT)
> > > + vdec->codec_info.full_range =
> > V4L2_QUANTIZATION_LIM_RANGE;
> > > +}
> > > +

2021-12-03 15:09:59

by Nicolas Dufresne

[permalink] [raw]
Subject: Re: [EXT] Re: [PATCH v13 08/13] media: amphion: add v4l2 m2m vpu decoder stateful driver

On Friday 3 December 2021 at 06:01 +0000, Ming Qian wrote:
> > -----Original Message-----
> > From: Ming Qian
> > Sent: Friday, December 3, 2021 1:43 PM
> > To: Nicolas Dufresne <[email protected]>; [email protected];
> > [email protected]; [email protected]; [email protected]
> > Cc: [email protected]; [email protected]; [email protected];
> > dl-linux-imx <[email protected]>; Aisheng Dong <[email protected]>;
> > [email protected]; [email protected];
> > [email protected]; [email protected]
> > Subject: RE: [EXT] Re: [PATCH v13 08/13] media: amphion: add v4l2 m2m vpu
> > decoder stateful driver
> >
> > > -----Original Message-----
> > > From: Nicolas Dufresne [mailto:[email protected]]
> > > Sent: Friday, December 3, 2021 12:56 PM
> > > To: Ming Qian <[email protected]>; [email protected];
> > > [email protected]; [email protected]; [email protected]
> > > Cc: [email protected]; [email protected];
> > > [email protected]; dl-linux-imx <[email protected]>; Aisheng Dong
> > > <[email protected]>; [email protected];
> > > [email protected]; [email protected];
> > > [email protected]
> > > Subject: [EXT] Re: [PATCH v13 08/13] media: amphion: add v4l2 m2m vpu
> > > decoder stateful driver
> > >
> > >
> > > On Tuesday 30 November 2021 at 17:48 +0800, Ming Qian wrote:
> > > > This consists of video decoder implementation plus decoder controls.
> > > >
> > > > Signed-off-by: Ming Qian <[email protected]>
> > > > Signed-off-by: Shijie Qin <[email protected]>
> > > > Signed-off-by: Zhou Peng <[email protected]>
> > > > ---
> > > >  drivers/media/platform/amphion/vdec.c | 1680
> > > +++++++++++++++++++++++++
> >
> >
> > > > +
> > > > +static void vdec_init_fmt(struct vpu_inst *inst) {
> > > > + struct vdec_t *vdec = inst->priv;
> > > > + const struct vpu_format *fmt;
> > > > + int i;
> > > > +
> > > > + fmt = vpu_helper_find_format(inst, inst->cap_format.type,
> > > vdec->codec_info.pixfmt);
> > > > + inst->out_format.width = vdec->codec_info.width;
> > > > + inst->out_format.height = vdec->codec_info.height;
> > > > + inst->cap_format.width = vdec->codec_info.decoded_width;
> > > > + inst->cap_format.height = vdec->codec_info.decoded_height;
> > > > + inst->cap_format.pixfmt = vdec->codec_info.pixfmt;
> > > > + if (fmt) {
> > > > + inst->cap_format.num_planes = fmt->num_planes;
> > > > + inst->cap_format.flags = fmt->flags;
> > > > + }
> > > > + for (i = 0; i < inst->cap_format.num_planes; i++) {
> > > > + inst->cap_format.bytesperline[i] =
> > > vdec->codec_info.bytesperline[i];
> > > > + inst->cap_format.sizeimage[i] =
> > > vdec->codec_info.sizeimage[i];
> > > > + }
> > > > + if (vdec->codec_info.progressive)
> > > > + inst->cap_format.field = V4L2_FIELD_NONE;
> > > > + else
> > > > + inst->cap_format.field = V4L2_FIELD_INTERLACED;
> > >
> > > As a follow-up, this should be conditional on the chosen pixel format.
> > > If I understood correctly, interlaced output is only produced for
> > > linear NV12; for tiled formats the fields are output separately in
> > > their respective v4l2_buffer. Not sure where yet, but the V4L2 spec
> > > requires you to pair the fields by using the same seq_num on both.
> >
> > The amphion VPU stores the two fields in one v4l2_buffer, so I'll change
> > V4L2_FIELD_INTERLACED to V4L2_FIELD_SEQ_TB
> >
>
> Hi Nicolas,
>     Seems gstreamer doesn't support V4L2_FIELD_SEQ_TB yet.
>
>   switch (fmt.fmt.pix.field) {
>     case V4L2_FIELD_ANY:
>     case V4L2_FIELD_NONE:
>       interlace_mode = GST_VIDEO_INTERLACE_MODE_PROGRESSIVE;
>       break;
>     case V4L2_FIELD_INTERLACED:
>     case V4L2_FIELD_INTERLACED_TB:
>     case V4L2_FIELD_INTERLACED_BT:
>       interlace_mode = GST_VIDEO_INTERLACE_MODE_INTERLEAVED;
>       break;
>     case V4L2_FIELD_ALTERNATE:
>       interlace_mode = GST_VIDEO_INTERLACE_MODE_ALTERNATE;
>       break;
>     default:
>       goto unsupported_field;
>   }

This is correct; I have never had the chance to implement it. So far the IMX6
camera pipeline is the only producer of this that I know of, and it is rarely
used in practice. What matters here is that your driver reports the right
information so that userspace doesn't get fooled into thinking it's interleaved.
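
For ALTERNATE mode specifically, the pairing rule mentioned earlier in the thread (both field buffers of one frame carry the same sequence number) could be checked on the consumer side; a minimal sketch with a hypothetical helper, field codes mirroring enum v4l2_field (2 = TOP, 3 = BOTTOM):

```c
#include <assert.h>

/* Hypothetical consumer-side check, not driver code: in
 * V4L2_FIELD_ALTERNATE mode two dequeued field buffers belong to the
 * same frame when they carry the same v4l2_buffer.sequence and
 * complementary fields (one top, one bottom). */
static int fields_pair(unsigned int seq_a, int field_a,
		       unsigned int seq_b, int field_b)
{
	if (seq_a != seq_b)
		return 0;
	return (field_a == 2 && field_b == 3) ||
	       (field_a == 3 && field_b == 2);
}
```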

>
> > >
> > > > + if (vdec->codec_info.color_primaries ==
> > V4L2_COLORSPACE_DEFAULT)
> > > > + vdec->codec_info.color_primaries =
> > > V4L2_COLORSPACE_REC709;
> > > > + if (vdec->codec_info.transfer_chars == V4L2_XFER_FUNC_DEFAULT)
> > > > + vdec->codec_info.transfer_chars = V4L2_XFER_FUNC_709;
> > > > + if (vdec->codec_info.matrix_coeffs == V4L2_YCBCR_ENC_DEFAULT)
> > > > + vdec->codec_info.matrix_coeffs = V4L2_YCBCR_ENC_709;
> > > > + if (vdec->codec_info.full_range == V4L2_QUANTIZATION_DEFAULT)
> > > > + vdec->codec_info.full_range =
> > > V4L2_QUANTIZATION_LIM_RANGE;
> > > > +}
> > > > +


2021-12-04 02:39:32

by Ming Qian

[permalink] [raw]
Subject: RE: [EXT] Re: [PATCH v13 08/13] media: amphion: add v4l2 m2m vpu decoder stateful driver


> -----Original Message-----
> From: Nicolas Dufresne [mailto:[email protected]]
> Sent: Friday, December 3, 2021 11:10 PM
> To: Ming Qian <[email protected]>; [email protected];
> [email protected]; [email protected]; [email protected]
> Cc: [email protected]; [email protected]; [email protected];
> dl-linux-imx <[email protected]>; Aisheng Dong <[email protected]>;
> [email protected]; [email protected];
> [email protected]; [email protected]
> Subject: Re: [EXT] Re: [PATCH v13 08/13] media: amphion: add v4l2 m2m vpu
> decoder stateful driver
>
>
> On Friday 3 December 2021 at 06:01 +0000, Ming Qian wrote:
> > > -----Original Message-----
> > > From: Ming Qian
> > > Sent: Friday, December 3, 2021 1:43 PM
> > > To: Nicolas Dufresne <[email protected]>; [email protected];
> > > [email protected]; [email protected]; [email protected]
> > > Cc: [email protected]; [email protected];
> > > [email protected]; dl-linux-imx <[email protected]>; Aisheng Dong
> > > <[email protected]>; [email protected];
> > > [email protected]; [email protected];
> > > [email protected]
> > > Subject: RE: [EXT] Re: [PATCH v13 08/13] media: amphion: add v4l2
> > > m2m vpu decoder stateful driver
> > >
> > > > -----Original Message-----
> > > > From: Nicolas Dufresne [mailto:[email protected]]
> > > > Sent: Friday, December 3, 2021 12:56 PM
> > > > To: Ming Qian <[email protected]>; [email protected];
> > > > [email protected]; [email protected]; [email protected]
> > > > Cc: [email protected]; [email protected];
> > > > [email protected]; dl-linux-imx <[email protected]>; Aisheng Dong
> > > > <[email protected]>; [email protected];
> > > > [email protected]; [email protected];
> > > > [email protected]
> > > > Subject: [EXT] Re: [PATCH v13 08/13] media: amphion: add v4l2 m2m
> > > > vpu decoder stateful driver
> > > >
> > > >
> > > > On Tuesday 30 November 2021 at 17:48 +0800, Ming Qian wrote:
> > > > > This consists of video decoder implementation plus decoder controls.
> > > > >
> > > > > Signed-off-by: Ming Qian <[email protected]>
> > > > > Signed-off-by: Shijie Qin <[email protected]>
> > > > > Signed-off-by: Zhou Peng <[email protected]>
> > > > > ---
> > > > > drivers/media/platform/amphion/vdec.c | 1680
> > > > +++++++++++++++++++++++++
> > >
> > >
> > > > > +
> > > > > +static void vdec_init_fmt(struct vpu_inst *inst) {
> > > > > + struct vdec_t *vdec = inst->priv;
> > > > > + const struct vpu_format *fmt;
> > > > > + int i;
> > > > > +
> > > > > + fmt = vpu_helper_find_format(inst, inst->cap_format.type,
> > > > vdec->codec_info.pixfmt);
> > > > > + inst->out_format.width = vdec->codec_info.width;
> > > > > + inst->out_format.height = vdec->codec_info.height;
> > > > > + inst->cap_format.width = vdec->codec_info.decoded_width;
> > > > > + inst->cap_format.height = vdec->codec_info.decoded_height;
> > > > > + inst->cap_format.pixfmt = vdec->codec_info.pixfmt;
> > > > > + if (fmt) {
> > > > > + inst->cap_format.num_planes = fmt->num_planes;
> > > > > + inst->cap_format.flags = fmt->flags;
> > > > > + }
> > > > > + for (i = 0; i < inst->cap_format.num_planes; i++) {
> > > > > + inst->cap_format.bytesperline[i] =
> > > > vdec->codec_info.bytesperline[i];
> > > > > + inst->cap_format.sizeimage[i] =
> > > > vdec->codec_info.sizeimage[i];
> > > > > + }
> > > > > + if (vdec->codec_info.progressive)
> > > > > + inst->cap_format.field = V4L2_FIELD_NONE;
> > > > > + else
> > > > > + inst->cap_format.field = V4L2_FIELD_INTERLACED;
> > > >
> > > > As a follow-up, this should be conditional on the chosen pixel format.
> > > > If I understood correctly, interlaced output is only produced
> > > > for linear NV12; for tiled formats the fields are output separately
> > > > in their respective v4l2_buffer. Not sure where yet, but the V4L2
> > > > spec requires you to pair the fields by using the same seq_num on both.
> > >
> > > The amphion VPU stores the two fields in one v4l2_buffer, so I'll
> > > change V4L2_FIELD_INTERLACED to V4L2_FIELD_SEQ_TB
> > >
> >
> > Hi Nicolas,
> > Seems gstreamer doesn't support V4L2_FIELD_SEQ_TB yet.
> >
> > switch (fmt.fmt.pix.field) {
> > case V4L2_FIELD_ANY:
> > case V4L2_FIELD_NONE:
> > interlace_mode = GST_VIDEO_INTERLACE_MODE_PROGRESSIVE;
> > break;
> > case V4L2_FIELD_INTERLACED:
> > case V4L2_FIELD_INTERLACED_TB:
> > case V4L2_FIELD_INTERLACED_BT:
> > interlace_mode = GST_VIDEO_INTERLACE_MODE_INTERLEAVED;
> > break;
> > case V4L2_FIELD_ALTERNATE:
> > interlace_mode = GST_VIDEO_INTERLACE_MODE_ALTERNATE;
> > break;
> > default:
> > goto unsupported_field;
> > }
>
> This is correct; I have never had the chance to implement it. So far the
> IMX6 camera pipeline is the only producer of this that I know of, and it
> is rarely used in practice. What matters here is that your driver reports
> the right information so that userspace doesn't get fooled into thinking
> it's interleaved.
>
OK, then no problem.

> >
> > > >
> > > > > + if (vdec->codec_info.color_primaries ==
> > > V4L2_COLORSPACE_DEFAULT)
> > > > > + vdec->codec_info.color_primaries =
> > > > V4L2_COLORSPACE_REC709;
> > > > > + if (vdec->codec_info.transfer_chars ==
> V4L2_XFER_FUNC_DEFAULT)
> > > > > + vdec->codec_info.transfer_chars =
> V4L2_XFER_FUNC_709;
> > > > > + if (vdec->codec_info.matrix_coeffs ==
> V4L2_YCBCR_ENC_DEFAULT)
> > > > > + vdec->codec_info.matrix_coeffs =
> V4L2_YCBCR_ENC_709;
> > > > > + if (vdec->codec_info.full_range ==
> V4L2_QUANTIZATION_DEFAULT)
> > > > > + vdec->codec_info.full_range =
> > > > V4L2_QUANTIZATION_LIM_RANGE;
> > > > > +}
> > > > > +