From: Sidraya <[email protected]>
This series of patches implements a V4L2-based decoder driver for the H264,
H265 and MJPEG decoding standards. The driver is for the D5520 hardware
decoder on the DRA8x SoC of the J721e platform.
This driver has been tested on the v5.14-rc6 kernel for the following
decoding standards with the V4L2-based GStreamer 1.16 plug-in:
1. H264
2. H265
3. MJPEG
Note:
The driver currently uses custom list, map and queue data structure APIs
and a custom IOMMU framework. We are working on replacing these customised
APIs with the generic Linux kernel framework APIs; a rough sketch of the
intended direction is shown below. In the meantime, we would like to address
review comments from reviewers before merging into the main media/platform
subsystem.
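For illustration only, a minimal sketch of the kind of conversion intended
for the custom list code, using the generic <linux/list.h> API; the structure
and function names below are placeholders invented for this example and are
not taken from the series:

#include <linux/errno.h>
#include <linux/list.h>
#include <linux/slab.h>

struct dec_buf {
	unsigned int index;
	struct list_head link;		/* replaces the custom list node */
};

static LIST_HEAD(pending_bufs);		/* replaces the custom list head */

static int queue_buf(unsigned int index)
{
	struct dec_buf *buf = kzalloc(sizeof(*buf), GFP_KERNEL);

	if (!buf)
		return -ENOMEM;
	buf->index = index;
	list_add_tail(&buf->link, &pending_bufs);	/* FIFO ordering */
	return 0;
}

static struct dec_buf *dequeue_buf(void)
{
	struct dec_buf *buf;

	buf = list_first_entry_or_null(&pending_bufs, struct dec_buf, link);
	if (buf)
		list_del(&buf->link);
	return buf;
}
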
Sidraya (30):
dt-bindings: Add binding for img,d5500-vxd for DRA8x
v4l: vxd-dec: Create mmu programming helper library
v4l: vxd-dec: Create vxd_dec Mem Manager helper library
v4l: vxd-dec: Add vxd helper library
v4l: vxd-dec: Add IMG VXD Video Decoder mem to mem driver
v4l: vxd-dec: Add hardware control modules
v4l: vxd-dec: Add vxd core module
v4l: vxd-dec: Add translation control modules
v4l: vxd-dec: Add idgen api modules
v4l: vxd-dec: Add utility modules
v4l: vxd-dec: Add TALMMU module
v4l: vxd-dec: Add VDEC MMU wrapper
v4l: vxd-dec: Add Bitstream Preparser (BSPP) module
v4l: vxd-dec: Add common headers
v4l: vxd-dec: Add firmware interface headers
v4l: vxd-dec: Add pool api modules
v4l: vxd-dec: Add resource management component
v4l: vxd-dec: Add pixel processing library
v4l: vxd-dec: Add vdecdd utility library
v4l: vxd-dec: Add decoder resource component
v4l: vxd-dec: Add decoder core component
v4l: vxd-dec: Add vdecdd headers
v4l: vxd-dec: Add decoder component
v4l: vxd-dec: Add resource manager
v4l: videodev2: Add 10bit definitions for NV12 and NV16 color formats
media: Kconfig: Add Video decoder kconfig and Makefile entries
media: platform: vxd: Kconfig: Add Video decoder Kconfig and Makefile
IMG DEC V4L2 Interface function implementations
arm64: dts: dra82: Add v4l2 vxd_dec device node
ARM64: ti_sdk_arm64_release_defconfig: Enable d5520 video decoder
driver
.../bindings/media/img,d5520-vxd.yaml | 52 +
MAINTAINERS | 114 +
arch/arm64/boot/dts/ti/k3-j721e-main.dtsi | 9 +
.../configs/ti_sdk_arm64_release_defconfig | 7407 +++++++++++++++++
drivers/media/v4l2-core/v4l2-ioctl.c | 2 +
drivers/staging/media/Kconfig | 2 +
drivers/staging/media/Makefile | 1 +
drivers/staging/media/vxd/common/addr_alloc.c | 499 ++
drivers/staging/media/vxd/common/addr_alloc.h | 238 +
drivers/staging/media/vxd/common/dq.c | 248 +
drivers/staging/media/vxd/common/dq.h | 36 +
drivers/staging/media/vxd/common/hash.c | 481 ++
drivers/staging/media/vxd/common/hash.h | 86 +
drivers/staging/media/vxd/common/idgen_api.c | 449 +
drivers/staging/media/vxd/common/idgen_api.h | 59 +
drivers/staging/media/vxd/common/img_errors.h | 104 +
drivers/staging/media/vxd/common/img_mem.h | 43 +
.../staging/media/vxd/common/img_mem_man.c | 1124 +++
.../staging/media/vxd/common/img_mem_man.h | 231 +
.../media/vxd/common/img_mem_unified.c | 276 +
drivers/staging/media/vxd/common/imgmmu.c | 782 ++
drivers/staging/media/vxd/common/imgmmu.h | 180 +
drivers/staging/media/vxd/common/lst.c | 119 +
drivers/staging/media/vxd/common/lst.h | 37 +
drivers/staging/media/vxd/common/pool.c | 228 +
drivers/staging/media/vxd/common/pool.h | 66 +
drivers/staging/media/vxd/common/pool_api.c | 709 ++
drivers/staging/media/vxd/common/pool_api.h | 113 +
drivers/staging/media/vxd/common/ra.c | 972 +++
drivers/staging/media/vxd/common/ra.h | 200 +
drivers/staging/media/vxd/common/resource.c | 576 ++
drivers/staging/media/vxd/common/resource.h | 66 +
drivers/staging/media/vxd/common/rman_api.c | 620 ++
drivers/staging/media/vxd/common/rman_api.h | 66 +
drivers/staging/media/vxd/common/talmmu_api.c | 753 ++
drivers/staging/media/vxd/common/talmmu_api.h | 246 +
drivers/staging/media/vxd/common/vid_buf.h | 42 +
drivers/staging/media/vxd/common/work_queue.c | 188 +
drivers/staging/media/vxd/common/work_queue.h | 66 +
drivers/staging/media/vxd/decoder/Kconfig | 13 +
drivers/staging/media/vxd/decoder/Makefile | 129 +
drivers/staging/media/vxd/decoder/bspp.c | 2479 ++++++
drivers/staging/media/vxd/decoder/bspp.h | 363 +
drivers/staging/media/vxd/decoder/bspp_int.h | 514 ++
drivers/staging/media/vxd/decoder/core.c | 3656 ++++++++
drivers/staging/media/vxd/decoder/core.h | 72 +
.../staging/media/vxd/decoder/dec_resources.c | 554 ++
.../staging/media/vxd/decoder/dec_resources.h | 46 +
drivers/staging/media/vxd/decoder/decoder.c | 4622 ++++++++++
drivers/staging/media/vxd/decoder/decoder.h | 375 +
.../staging/media/vxd/decoder/fw_interface.h | 818 ++
drivers/staging/media/vxd/decoder/h264_idx.h | 60 +
.../media/vxd/decoder/h264_secure_parser.c | 3051 +++++++
.../media/vxd/decoder/h264_secure_parser.h | 278 +
drivers/staging/media/vxd/decoder/h264_vlc.h | 604 ++
.../staging/media/vxd/decoder/h264fw_data.h | 652 ++
.../media/vxd/decoder/h264fw_data_shared.h | 759 ++
.../media/vxd/decoder/hevc_secure_parser.c | 2895 +++++++
.../media/vxd/decoder/hevc_secure_parser.h | 455 +
.../staging/media/vxd/decoder/hevcfw_data.h | 472 ++
.../media/vxd/decoder/hevcfw_data_shared.h | 767 ++
.../staging/media/vxd/decoder/hw_control.c | 1211 +++
.../staging/media/vxd/decoder/hw_control.h | 144 +
.../media/vxd/decoder/img_dec_common.h | 278 +
.../media/vxd/decoder/img_msvdx_cmds.h | 279 +
.../media/vxd/decoder/img_msvdx_core_regs.h | 22 +
.../media/vxd/decoder/img_msvdx_vdmc_regs.h | 26 +
.../media/vxd/decoder/img_msvdx_vec_regs.h | 60 +
.../staging/media/vxd/decoder/img_pixfmts.h | 195 +
.../media/vxd/decoder/img_profiles_levels.h | 33 +
.../media/vxd/decoder/img_pvdec_core_regs.h | 60 +
.../media/vxd/decoder/img_pvdec_pixel_regs.h | 35 +
.../media/vxd/decoder/img_pvdec_test_regs.h | 39 +
.../media/vxd/decoder/img_vdec_fw_msg.h | 192 +
.../vxd/decoder/img_video_bus4_mmu_regs.h | 120 +
.../media/vxd/decoder/jpeg_secure_parser.c | 645 ++
.../media/vxd/decoder/jpeg_secure_parser.h | 37 +
.../staging/media/vxd/decoder/jpegfw_data.h | 83 +
.../media/vxd/decoder/jpegfw_data_shared.h | 84 +
drivers/staging/media/vxd/decoder/mem_io.h | 42 +
drivers/staging/media/vxd/decoder/mmu_defs.h | 42 +
drivers/staging/media/vxd/decoder/pixel_api.c | 895 ++
drivers/staging/media/vxd/decoder/pixel_api.h | 152 +
.../media/vxd/decoder/pvdec_entropy_regs.h | 33 +
drivers/staging/media/vxd/decoder/pvdec_int.h | 82 +
.../media/vxd/decoder/pvdec_vec_be_regs.h | 35 +
drivers/staging/media/vxd/decoder/reg_io2.h | 74 +
.../staging/media/vxd/decoder/scaler_setup.h | 59 +
drivers/staging/media/vxd/decoder/swsr.c | 1657 ++++
drivers/staging/media/vxd/decoder/swsr.h | 278 +
.../media/vxd/decoder/translation_api.c | 1725 ++++
.../media/vxd/decoder/translation_api.h | 42 +
drivers/staging/media/vxd/decoder/vdec_defs.h | 548 ++
.../media/vxd/decoder/vdec_mmu_wrapper.c | 829 ++
.../media/vxd/decoder/vdec_mmu_wrapper.h | 174 +
.../staging/media/vxd/decoder/vdecdd_defs.h | 446 +
.../staging/media/vxd/decoder/vdecdd_utils.c | 95 +
.../staging/media/vxd/decoder/vdecdd_utils.h | 93 +
.../media/vxd/decoder/vdecdd_utils_buf.c | 897 ++
.../staging/media/vxd/decoder/vdecfw_share.h | 36 +
.../staging/media/vxd/decoder/vdecfw_shared.h | 893 ++
drivers/staging/media/vxd/decoder/vxd_core.c | 1683 ++++
drivers/staging/media/vxd/decoder/vxd_dec.c | 185 +
drivers/staging/media/vxd/decoder/vxd_dec.h | 477 ++
drivers/staging/media/vxd/decoder/vxd_ext.h | 74 +
drivers/staging/media/vxd/decoder/vxd_int.c | 1137 +++
drivers/staging/media/vxd/decoder/vxd_int.h | 128 +
.../staging/media/vxd/decoder/vxd_mmu_defs.h | 30 +
drivers/staging/media/vxd/decoder/vxd_props.h | 80 +
drivers/staging/media/vxd/decoder/vxd_pvdec.c | 1745 ++++
.../media/vxd/decoder/vxd_pvdec_priv.h | 126 +
.../media/vxd/decoder/vxd_pvdec_regs.h | 779 ++
drivers/staging/media/vxd/decoder/vxd_v4l2.c | 2129 +++++
include/uapi/linux/videodev2.h | 2 +
114 files changed, 62369 insertions(+)
create mode 100644 Documentation/devicetree/bindings/media/img,d5520-vxd.yaml
create mode 100644 arch/arm64/configs/ti_sdk_arm64_release_defconfig
create mode 100644 drivers/staging/media/vxd/common/addr_alloc.c
create mode 100644 drivers/staging/media/vxd/common/addr_alloc.h
create mode 100644 drivers/staging/media/vxd/common/dq.c
create mode 100644 drivers/staging/media/vxd/common/dq.h
create mode 100644 drivers/staging/media/vxd/common/hash.c
create mode 100644 drivers/staging/media/vxd/common/hash.h
create mode 100644 drivers/staging/media/vxd/common/idgen_api.c
create mode 100644 drivers/staging/media/vxd/common/idgen_api.h
create mode 100644 drivers/staging/media/vxd/common/img_errors.h
create mode 100644 drivers/staging/media/vxd/common/img_mem.h
create mode 100644 drivers/staging/media/vxd/common/img_mem_man.c
create mode 100644 drivers/staging/media/vxd/common/img_mem_man.h
create mode 100644 drivers/staging/media/vxd/common/img_mem_unified.c
create mode 100644 drivers/staging/media/vxd/common/imgmmu.c
create mode 100644 drivers/staging/media/vxd/common/imgmmu.h
create mode 100644 drivers/staging/media/vxd/common/lst.c
create mode 100644 drivers/staging/media/vxd/common/lst.h
create mode 100644 drivers/staging/media/vxd/common/pool.c
create mode 100644 drivers/staging/media/vxd/common/pool.h
create mode 100644 drivers/staging/media/vxd/common/pool_api.c
create mode 100644 drivers/staging/media/vxd/common/pool_api.h
create mode 100644 drivers/staging/media/vxd/common/ra.c
create mode 100644 drivers/staging/media/vxd/common/ra.h
create mode 100644 drivers/staging/media/vxd/common/resource.c
create mode 100644 drivers/staging/media/vxd/common/resource.h
create mode 100644 drivers/staging/media/vxd/common/rman_api.c
create mode 100644 drivers/staging/media/vxd/common/rman_api.h
create mode 100644 drivers/staging/media/vxd/common/talmmu_api.c
create mode 100644 drivers/staging/media/vxd/common/talmmu_api.h
create mode 100644 drivers/staging/media/vxd/common/vid_buf.h
create mode 100644 drivers/staging/media/vxd/common/work_queue.c
create mode 100644 drivers/staging/media/vxd/common/work_queue.h
create mode 100644 drivers/staging/media/vxd/decoder/Kconfig
create mode 100644 drivers/staging/media/vxd/decoder/Makefile
create mode 100644 drivers/staging/media/vxd/decoder/bspp.c
create mode 100644 drivers/staging/media/vxd/decoder/bspp.h
create mode 100644 drivers/staging/media/vxd/decoder/bspp_int.h
create mode 100644 drivers/staging/media/vxd/decoder/core.c
create mode 100644 drivers/staging/media/vxd/decoder/core.h
create mode 100644 drivers/staging/media/vxd/decoder/dec_resources.c
create mode 100644 drivers/staging/media/vxd/decoder/dec_resources.h
create mode 100644 drivers/staging/media/vxd/decoder/decoder.c
create mode 100644 drivers/staging/media/vxd/decoder/decoder.h
create mode 100644 drivers/staging/media/vxd/decoder/fw_interface.h
create mode 100644 drivers/staging/media/vxd/decoder/h264_idx.h
create mode 100644 drivers/staging/media/vxd/decoder/h264_secure_parser.c
create mode 100644 drivers/staging/media/vxd/decoder/h264_secure_parser.h
create mode 100644 drivers/staging/media/vxd/decoder/h264_vlc.h
create mode 100644 drivers/staging/media/vxd/decoder/h264fw_data.h
create mode 100644 drivers/staging/media/vxd/decoder/h264fw_data_shared.h
create mode 100644 drivers/staging/media/vxd/decoder/hevc_secure_parser.c
create mode 100644 drivers/staging/media/vxd/decoder/hevc_secure_parser.h
create mode 100644 drivers/staging/media/vxd/decoder/hevcfw_data.h
create mode 100644 drivers/staging/media/vxd/decoder/hevcfw_data_shared.h
create mode 100644 drivers/staging/media/vxd/decoder/hw_control.c
create mode 100644 drivers/staging/media/vxd/decoder/hw_control.h
create mode 100644 drivers/staging/media/vxd/decoder/img_dec_common.h
create mode 100644 drivers/staging/media/vxd/decoder/img_msvdx_cmds.h
create mode 100644 drivers/staging/media/vxd/decoder/img_msvdx_core_regs.h
create mode 100644 drivers/staging/media/vxd/decoder/img_msvdx_vdmc_regs.h
create mode 100644 drivers/staging/media/vxd/decoder/img_msvdx_vec_regs.h
create mode 100644 drivers/staging/media/vxd/decoder/img_pixfmts.h
create mode 100644 drivers/staging/media/vxd/decoder/img_profiles_levels.h
create mode 100644 drivers/staging/media/vxd/decoder/img_pvdec_core_regs.h
create mode 100644 drivers/staging/media/vxd/decoder/img_pvdec_pixel_regs.h
create mode 100644 drivers/staging/media/vxd/decoder/img_pvdec_test_regs.h
create mode 100644 drivers/staging/media/vxd/decoder/img_vdec_fw_msg.h
create mode 100644 drivers/staging/media/vxd/decoder/img_video_bus4_mmu_regs.h
create mode 100644 drivers/staging/media/vxd/decoder/jpeg_secure_parser.c
create mode 100644 drivers/staging/media/vxd/decoder/jpeg_secure_parser.h
create mode 100644 drivers/staging/media/vxd/decoder/jpegfw_data.h
create mode 100644 drivers/staging/media/vxd/decoder/jpegfw_data_shared.h
create mode 100644 drivers/staging/media/vxd/decoder/mem_io.h
create mode 100644 drivers/staging/media/vxd/decoder/mmu_defs.h
create mode 100644 drivers/staging/media/vxd/decoder/pixel_api.c
create mode 100644 drivers/staging/media/vxd/decoder/pixel_api.h
create mode 100644 drivers/staging/media/vxd/decoder/pvdec_entropy_regs.h
create mode 100644 drivers/staging/media/vxd/decoder/pvdec_int.h
create mode 100644 drivers/staging/media/vxd/decoder/pvdec_vec_be_regs.h
create mode 100644 drivers/staging/media/vxd/decoder/reg_io2.h
create mode 100644 drivers/staging/media/vxd/decoder/scaler_setup.h
create mode 100644 drivers/staging/media/vxd/decoder/swsr.c
create mode 100644 drivers/staging/media/vxd/decoder/swsr.h
create mode 100644 drivers/staging/media/vxd/decoder/translation_api.c
create mode 100644 drivers/staging/media/vxd/decoder/translation_api.h
create mode 100644 drivers/staging/media/vxd/decoder/vdec_defs.h
create mode 100644 drivers/staging/media/vxd/decoder/vdec_mmu_wrapper.c
create mode 100644 drivers/staging/media/vxd/decoder/vdec_mmu_wrapper.h
create mode 100644 drivers/staging/media/vxd/decoder/vdecdd_defs.h
create mode 100644 drivers/staging/media/vxd/decoder/vdecdd_utils.c
create mode 100644 drivers/staging/media/vxd/decoder/vdecdd_utils.h
create mode 100644 drivers/staging/media/vxd/decoder/vdecdd_utils_buf.c
create mode 100644 drivers/staging/media/vxd/decoder/vdecfw_share.h
create mode 100644 drivers/staging/media/vxd/decoder/vdecfw_shared.h
create mode 100644 drivers/staging/media/vxd/decoder/vxd_core.c
create mode 100644 drivers/staging/media/vxd/decoder/vxd_dec.c
create mode 100644 drivers/staging/media/vxd/decoder/vxd_dec.h
create mode 100644 drivers/staging/media/vxd/decoder/vxd_ext.h
create mode 100644 drivers/staging/media/vxd/decoder/vxd_int.c
create mode 100644 drivers/staging/media/vxd/decoder/vxd_int.h
create mode 100644 drivers/staging/media/vxd/decoder/vxd_mmu_defs.h
create mode 100644 drivers/staging/media/vxd/decoder/vxd_props.h
create mode 100644 drivers/staging/media/vxd/decoder/vxd_pvdec.c
create mode 100644 drivers/staging/media/vxd/decoder/vxd_pvdec_priv.h
create mode 100644 drivers/staging/media/vxd/decoder/vxd_pvdec_regs.h
create mode 100644 drivers/staging/media/vxd/decoder/vxd_v4l2.c
--
2.17.1
From: Sidraya <[email protected]>
The IMG D5520 has an MMU which needs to be programmed with all
memory which it needs access to. This includes input buffers,
output buffers and parameter buffers for each decode instance,
as well as common buffers for firmware, etc.
Functions are provided for creating MMU directories (each stream
will have its own MMU context), retrieving the directory page,
and mapping/unmapping a buffer into the MMU for a specific MMU context.
Helpers are also provided for querying the capabilities of the MMU.
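A minimal, hypothetical usage sketch of the API added by this patch (not part
of the patch itself): the allocator callbacks my_page_alloc()/my_page_free(),
the device-virtual base address and the read-only map flag are illustrative
assumptions only.

#include <linux/err.h>
#include <linux/kernel.h>
#include <linux/scatterlist.h>
#include "imgmmu.h"

/* Placeholder allocator callbacks, assumed to be provided elsewhere. */
struct mmu_page_cfg *my_page_alloc(void *alloc_ctx);
void my_page_free(struct mmu_page_cfg *page);

static int map_stream_buffer(struct sg_table *sgt, unsigned int nbytes)
{
	struct mmu_info cfg = {
		.pfn_page_alloc = my_page_alloc,
		.pfn_page_free  = my_page_free,
		/* pfn_page_write left NULL: the library default is used */
	};
	struct mmu_heap_alloc va = {
		.virt_addr  = 0x40000000,	/* assumed device-virtual base */
		.alloc_size = ALIGN(nbytes, MMU_PAGE_SIZE),
	};
	struct mmu_directory *dir;
	struct mmu_page_cfg *dir_page;
	struct mmu_map *map;

	/* one MMU context (directory) per stream */
	dir = mmu_create_directory(&cfg);
	if (IS_ERR_VALUE((unsigned long)dir))
		return (int)(long)dir;

	/* physical page to be written to the hardware MMU base register */
	dir_page = mmu_directory_get_page(dir);
	pr_debug("MMU directory page at 0x%llx\n", dir_page->phys_addr);

	/* 0x4: read-only mapping, per the flag description in this patch */
	map = mmu_directory_map_sg(dir, sgt->sgl, &va, 0x4);
	if (IS_ERR_VALUE((unsigned long)map)) {
		mmu_destroy_directory(dir);
		return (int)(long)map;
	}

	/* ... program dir_page->phys_addr into the core and decode ... */

	mmu_directory_unmap(map);
	mmu_destroy_directory(dir);
	return 0;
}
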
Signed-off-by: Buddy Liong <[email protected]>
Signed-off-by: Angela Stegmaier <[email protected]>
Signed-off-by: Sidraya <[email protected]>
---
MAINTAINERS | 2 +
drivers/staging/media/vxd/common/imgmmu.c | 782 ++++++++++++++++++++++
drivers/staging/media/vxd/common/imgmmu.h | 180 +++++
3 files changed, 964 insertions(+)
create mode 100644 drivers/staging/media/vxd/common/imgmmu.c
create mode 100644 drivers/staging/media/vxd/common/imgmmu.h
diff --git a/MAINTAINERS b/MAINTAINERS
index 163b3176ccf9..2e921650a14c 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -19537,6 +19537,8 @@ M: Sidraya Jayagond <[email protected]>
L: [email protected]
S: Maintained
F: Documentation/devicetree/bindings/media/img,d5520-vxd.yaml
+F: drivers/staging/media/vxd/common/imgmmu.c
+F: drivers/staging/media/vxd/common/imgmmu.h
VIDEO I2C POLLING DRIVER
M: Matt Ranostay <[email protected]>
diff --git a/drivers/staging/media/vxd/common/imgmmu.c b/drivers/staging/media/vxd/common/imgmmu.c
new file mode 100644
index 000000000000..ce2f41f72485
--- /dev/null
+++ b/drivers/staging/media/vxd/common/imgmmu.c
@@ -0,0 +1,782 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * IMG DEC MMU function implementations
+ *
+ * Copyright (c) Imagination Technologies Ltd.
+ * Copyright (c) 2021 Texas Instruments Incorporated - http://www.ti.com/
+ *
+ * Authors:
+ * Angela Stegmaier <[email protected]>
+ *
+ * Re-written for upstreaming
+ * Sidraya Jayagond <[email protected]>
+ */
+
+#include <linux/dma-mapping.h>
+#include <media/v4l2-ctrls.h>
+#include <media/v4l2-device.h>
+#include <media/v4l2-mem2mem.h>
+#include "img_mem_man.h"
+#include "imgmmu.h"
+
+/**
+ * struct mmu_directory - the MMU directory information
+ * @dir_page: pointer to the mmu_page_cfg_table (physical table used) which
+ * this mmu_directory belongs to
+ * @dir_page_table: All the page table structures in a static array of pointers
+ * @mmu_info_cfg: Functions used to manage page allocation, freeing and
+ * writing
+ * @num_mapping: number of mappings using this directory
+ */
+struct mmu_directory {
+ struct mmu_page_cfg *dir_page;
+ struct mmu_page_cfg_table **dir_page_table;
+ struct mmu_info mmu_info_cfg;
+ unsigned int num_mapping;
+};
+
+/*
+ * struct mmu_map - the MMU mapping information
+ * @mmu_dir: pointer to the mmu_directory which this mmu_map belongs to
+ * @dev_virt_addr: device virtual address root associated with this mapping
+ * @used_flag: flag used when allocating
+ * @n_entries: number of entries mapped
+ */
+struct mmu_map {
+ struct mmu_directory *mmu_dir;
+ struct mmu_heap_alloc dev_virt_addr;
+ unsigned int used_flag;
+ unsigned int n_entries;
+};
+
+/*
+ * struct mmu_page_cfg_table - the MMU page table information.
+ * One page table of the directory.
+ * @mmu_dir: pointer to the mmu_directory which this mmu_page_cfg_table
+ * belongs to
+ * @page: page used to store this mapping in the MMU
+ * @valid_entries: number of valid entries in this page
+ */
+struct mmu_page_cfg_table {
+ struct mmu_directory *mmu_dir;
+ struct mmu_page_cfg *page;
+ unsigned int valid_entries;
+};
+
+/*
+ * mmu_pgt_destroy() - Destruction of a page table (does not follow the
+ * child pointer)
+ * @pgt: pointer to the MMU page table information
+ *
+ * Warning: Does not verify if pages are still valid or not
+ */
+static void mmu_pgt_destroy(struct mmu_page_cfg_table *pgt)
+{
+ if (!pgt->mmu_dir ||
+ !pgt->mmu_dir->mmu_info_cfg.pfn_page_free ||
+ !pgt->page) {
+ return;
+ }
+
+ pr_debug("%s:%d Destroy page table (phys addr %llu)\n",
+ __func__, __LINE__, pgt->page->phys_addr);
+
+ pgt->mmu_dir->mmu_info_cfg.pfn_page_free(pgt->page);
+ pgt->page = NULL;
+
+ kfree(pgt);
+}
+
+/*
+ * mmu_dir_entry() - Extract the directory index from a virtual address
+ * @vaddr: virtual address
+ */
+static inline unsigned int mmu_dir_entry(unsigned long vaddr)
+{
+ return (unsigned int)((vaddr & VIRT_DIR_IDX_MASK) >> MMU_DIR_SHIFT);
+}
+
+/*
+ * mmu_pg_entry() - Extract the page table index from a virtual address
+ * @vaddr: virtual address
+ */
+static inline unsigned int mmu_pg_entry(unsigned long vaddr)
+{
+ return (unsigned int)((vaddr & VIRT_PAGE_TBL_MASK) >> MMU_PAGE_SHIFT);
+}
+
+/*
+ * mmu_pg_wr() - Default function used when a mmu_info structure has an empty
+ * pfn_page_write pointer
+ * @mmu_page: pointer to the mmu_page to update
+ * @offset: offset into the directory
+ * @pa_to_write: physical address value to add to the entry
+ * @mmu_flag: mmu flag(s) to set
+ */
+static void mmu_pg_wr(struct mmu_page_cfg *mmu_page, unsigned int offset,
+ unsigned long long pa_to_write, unsigned int mmu_flag)
+{
+ unsigned int *dir_mem = NULL;
+ unsigned long long cur_pa = pa_to_write;
+
+ if (!mmu_page)
+ return;
+
+ dir_mem = (unsigned int *)mmu_page->cpu_virt_addr;
+ /*
+ * assumes that the MMU HW has the extra-bits enabled (this default
+ * function has no way of knowing)
+ */
+ if ((MMU_PHYS_SIZE - MMU_VIRT_SIZE) > 0)
+ cur_pa >>= (MMU_PHYS_SIZE - MMU_VIRT_SIZE);
+ /*
+ * The bottom MMU_PAGE_SHIFT bits of the physical address are zero
+ * because of page-aligned allocation, and the low
+ * MMU_PAGE_SHIFT-(MMU_PHYS_SIZE-MMU_VIRT_SIZE) bits are used for
+ * flags, so no masking is needed
+ */
+ dir_mem[offset] = (unsigned int)cur_pa | (mmu_flag);
+}
+
+/*
+ * mmu_pgt_create() - Create a page table
+ * @mmu_dir: pointer to the mmu_directory in which to create the new page table
+ * structure
+ *
+ * Return: A pointer to the new page table structure in case of success.
+ * A negative error value cast to (void *) in case of error
+ */
+static struct mmu_page_cfg_table *mmu_pgt_create(struct mmu_directory *mmu_dir)
+{
+ struct mmu_page_cfg_table *neo = NULL;
+ unsigned int i;
+
+ if (!mmu_dir || !mmu_dir->mmu_info_cfg.pfn_page_alloc ||
+ !mmu_dir->mmu_info_cfg.pfn_page_write)
+ return (void *)(-EINVAL);
+
+ neo = kmalloc(sizeof(*neo), GFP_KERNEL);
+ if (!neo)
+ return (void *)(-ENOMEM);
+
+ neo->mmu_dir = mmu_dir;
+
+ neo->page =
+ mmu_dir->mmu_info_cfg.pfn_page_alloc(mmu_dir->mmu_info_cfg.alloc_ctx);
+ if (!neo->page) {
+ pr_err("%s:%d failed to allocate Page Table physical page\n",
+ __func__, __LINE__);
+ kfree(neo);
+ return (void *)(-ENOMEM);
+ }
+ pr_debug("%s:%d Create page table (phys addr 0x%llx CPU Virt 0x%lx)\n",
+ __func__, __LINE__, neo->page->phys_addr,
+ neo->page->cpu_virt_addr);
+
+ /* invalidate all pages */
+ for (i = 0; i < MMU_N_PAGE; i++) {
+ mmu_dir->mmu_info_cfg.pfn_page_write(neo->page, i, 0,
+ MMU_FLAG_INVALID);
+ }
+
+ /*
+ * When non-UMA need to update the device memory after setting
+ * it to 0
+ */
+ if (mmu_dir->mmu_info_cfg.pfn_page_update)
+ mmu_dir->mmu_info_cfg.pfn_page_update(neo->page);
+
+ return neo;
+}
+
+/*
+ * mmu_create_directory - Create a directory entry based on a given directory
+ * configuration
+ * @mmu_info_ops: contains the functions to use to manage page table memory.
+ * Is copied and not modified.
+ *
+ * @warning Creation of the directory allocates memory - do not call
+ * while interrupts are disabled
+ *
+ * @return The opaque handle to the mmu_directory object on success
+ * @return A negative error value cast to (void *) in case of an error:
+ * @li -EINVAL if mmu_info configuration is NULL or does not
+ * contain function pointers
+ * @li -ENOMEM if an internal allocation failed
+ * @li -ENOMEM if the given mmu_pfn_page_alloc returned NULL
+ */
+struct mmu_directory *mmu_create_directory(const struct mmu_info *mmu_info_ops)
+{
+ struct mmu_directory *neo = NULL;
+ unsigned int i;
+
+ /*
+ * invalid information in the directory config:
+ * - invalid page allocator and dealloc (page write can be NULL)
+ * - invalid virtual address representation
+ * - invalid page size
+ * - invalid MMU size
+ */
+ if (!mmu_info_ops || !mmu_info_ops->pfn_page_alloc || !mmu_info_ops->pfn_page_free) {
+ pr_err("%s:%d invalid MMU configuration\n", __func__, __LINE__);
+ return (void *)(-EINVAL);
+ }
+
+ neo = kzalloc(sizeof(*neo), GFP_KERNEL);
+ if (!neo)
+ return (void *)(-ENOMEM);
+
+ neo->dir_page_table = kcalloc(MMU_N_TABLE, sizeof(struct mmu_page_cfg_table *),
+ GFP_KERNEL);
+ if (!neo->dir_page_table) {
+ kfree(neo);
+ return (void *)(-ENOMEM);
+ }
+
+ memcpy(&neo->mmu_info_cfg, mmu_info_ops, sizeof(struct mmu_info));
+ if (!mmu_info_ops->pfn_page_write) {
+ pr_debug("%s:%d using default MMU write\n", __func__, __LINE__);
+ /* use internal function */
+ neo->mmu_info_cfg.pfn_page_write = &mmu_pg_wr;
+ }
+
+ neo->dir_page = mmu_info_ops->pfn_page_alloc(mmu_info_ops->alloc_ctx);
+ if (!neo->dir_page) {
+ kfree(neo->dir_page_table);
+ kfree(neo);
+ return (void *)(-ENOMEM);
+ }
+
+ pr_debug("%s:%d (phys page 0x%llx; CPU virt 0x%lx)\n", __func__,
+ __LINE__, neo->dir_page->phys_addr,
+ neo->dir_page->cpu_virt_addr);
+ /* now we have a valid mmu_directory structure */
+
+ /* invalidate all entries */
+ for (i = 0; i < MMU_N_TABLE; i++) {
+ neo->mmu_info_cfg.pfn_page_write(neo->dir_page, i, 0,
+ MMU_FLAG_INVALID);
+ }
+
+ /* when non-UMA need to update the device memory */
+ if (neo->mmu_info_cfg.pfn_page_update)
+ neo->mmu_info_cfg.pfn_page_update(neo->dir_page);
+
+ return neo;
+}
+
+/*
+ * mmu_destroy_directory - Destroy the mmu_directory - assumes that the HW is
+ * not going to access the memory anymore
+ * @mmu_dir: pointer to the mmu directory to destroy
+ *
+ * Does not invalidate any memory because it assumes that nothing is
+ * used anymore
+ */
+int mmu_destroy_directory(struct mmu_directory *mmu_dir)
+{
+ unsigned int i;
+
+ if (!mmu_dir) {
+ /* could be an assert */
+ pr_err("%s:%d mmu_dir is NULL\n", __func__, __LINE__);
+ return -EINVAL;
+ }
+
+ if (mmu_dir->num_mapping > 0)
+ /* mappings should have been destroyed! */
+ pr_err("%s:%d directory still has %u mapping attached to it\n",
+ __func__, __LINE__, mmu_dir->num_mapping);
+ /*
+ * not exiting because clearing the page table map is more
+ * important than losing a few structures
+ */
+
+ if (!mmu_dir->mmu_info_cfg.pfn_page_free || !mmu_dir->dir_page_table)
+ return -EINVAL;
+
+ pr_debug("%s:%d destroy MMU dir (phys page 0x%llx)\n",
+ __func__, __LINE__, mmu_dir->dir_page->phys_addr);
+
+ /* first we destroy the directory entry */
+ mmu_dir->mmu_info_cfg.pfn_page_free(mmu_dir->dir_page);
+ mmu_dir->dir_page = NULL;
+
+ /* destroy every mapping that still exists */
+ for (i = 0; i < MMU_N_TABLE; i++) {
+ if (mmu_dir->dir_page_table[i]) {
+ mmu_pgt_destroy(mmu_dir->dir_page_table[i]);
+ mmu_dir->dir_page_table[i] = NULL;
+ }
+ }
+
+ kfree(mmu_dir->dir_page_table);
+ kfree(mmu_dir);
+ return 0;
+}
+
+/*
+ * mmu_directory_get_page - Get access to the page table structure used in the
+ * directory (to be able to write it to registers)
+ * @mmu_dir: pointer to the mmu directory
+ *
+ * @return the directory page structure used, or NULL if mmu_dir is NULL
+ */
+struct mmu_page_cfg *mmu_directory_get_page(struct mmu_directory *mmu_dir)
+{
+ if (!mmu_dir)
+ return NULL;
+
+ return mmu_dir->dir_page;
+}
+
+static struct mmu_map *mmu_directory_map(struct mmu_directory *mmu_dir,
+ const struct mmu_heap_alloc *dev_va,
+ unsigned int ui_map_flags,
+ int (*phys_iter_next)(void *arg,
+ unsigned long long *next),
+ void *phys_iter_arg)
+{
+ unsigned int first_dir = 0;
+ unsigned int first_pg = 0;
+ unsigned int dir_off = 0;
+ unsigned int pg_off = 0;
+ unsigned int n_entries = 0;
+ unsigned int i;
+ unsigned int d;
+ const unsigned int duplicate = PAGE_SIZE / mmu_get_page_size();
+ int res = 0;
+ struct mmu_map *neo = NULL;
+ struct mmu_page_cfg_table **dir_pgtbl = NULL;
+
+ /*
+ * in non-UMA, updates on pages need to be done - store the index of
+ * directory entry pages to update
+ */
+ unsigned int *to_update;
+ /*
+ * number of pages in to_update (will be at least 1 for the first_pg to
+ * update)
+ */
+ unsigned int n_pgs_to_update = 0;
+ /*
+ * to know if we also need to update the directory page (creation of new
+ * page)
+ */
+ unsigned char dir_modified = FALSE;
+
+ if (!mmu_dir || !dev_va || duplicate < 1)
+ return (void *)(-EINVAL);
+
+ dir_pgtbl = mmu_dir->dir_page_table;
+
+ n_entries = dev_va->alloc_size / PAGE_SIZE;
+ if (dev_va->alloc_size % MMU_PAGE_SIZE != 0 || n_entries == 0) {
+ pr_err("%s:%d invalid allocation size\n", __func__, __LINE__);
+ return (void *)(-EINVAL);
+ }
+
+ if ((ui_map_flags & MMU_FLAG_VALID) != 0) {
+ pr_err("%s:%d valid flag (0x%x) is set in the falgs 0x%x\n",
+ __func__, __LINE__, MMU_FLAG_VALID, ui_map_flags);
+ return (void *)(-EINVAL);
+ }
+
+ /*
+ * has to be dynamically allocated because it is bigger than 1k (max
+ * stack in the kernel)
+ * MMU_N_TABLE is 1024 for 4096B pages, that's a 4k allocation (1 page)
+ * - if it gets bigger, IMG_BIGALLOC may need to be used
+ */
+ to_update = kcalloc(MMU_N_TABLE, sizeof(unsigned int), GFP_KERNEL);
+ if (!to_update)
+ return (void *)(-ENOMEM);
+
+ /* manage multiple page table mapping */
+
+ first_dir = mmu_dir_entry(dev_va->virt_addr);
+ first_pg = mmu_pg_entry(dev_va->virt_addr);
+
+ if (first_dir >= MMU_N_TABLE || first_pg >= MMU_N_PAGE) {
+ kfree(to_update);
+ return (void *)(-EINVAL);
+ }
+
+ /* verify that the pages that should be used are available */
+ dir_off = first_dir;
+ pg_off = first_pg;
+
+ /*
+ * loop over the number of entries given by CPU allocator but CPU page
+ * size can be > than MMU page size therefore it may need to "duplicate"
+ * entries by creating a fake physical address
+ */
+ for (i = 0; i < n_entries * duplicate; i++) {
+ if (pg_off >= MMU_N_PAGE) {
+ dir_off++; /* move to next directory */
+ if (dir_off >= MMU_N_TABLE) {
+ res = -EINVAL;
+ break;
+ }
+ pg_off = 0; /* using its first page */
+ }
+
+ /*
+ * if dir_pgtbl[dir_off] == NULL not yet
+ * allocated it means all entries are available
+ */
+ if (dir_pgtbl[dir_off]) {
+ /*
+ * inside a page table - verify that the required entry
+ * is not already valid
+ */
+ struct mmu_page_cfg_table *tbl = dir_pgtbl[dir_off];
+ unsigned int *page_mem = (unsigned int *)tbl->page->cpu_virt_addr;
+
+ if ((page_mem[pg_off] & MMU_FLAG_VALID) != 0) {
+ pr_err("%s:%d one of the required page is currently in use\n",
+ __func__, __LINE__);
+ res = -EPERM;
+ break;
+ }
+ }
+ /* PageTable struct exists */
+ pg_off++;
+ } /* for all needed entries */
+
+ /* it means one entry was not invalid or not enough pages were given */
+ if (res != 0) {
+ /*
+ * message already printed
+ * -EPERM when an entry is not invalid
+ * -EINVAL when not enough pages are given
+ * (or too many)
+ */
+ kfree(to_update);
+ return (void *)(unsigned long)(res);
+ }
+
+ neo = kmalloc(sizeof(*neo), GFP_KERNEL);
+ if (!neo) {
+ kfree(to_update);
+ return (void *)(-ENOMEM);
+ }
+ neo->mmu_dir = mmu_dir;
+ neo->dev_virt_addr = *dev_va;
+ memcpy(&neo->dev_virt_addr, dev_va, sizeof(struct mmu_heap_alloc));
+ neo->used_flag = ui_map_flags;
+
+ /* we now know that all pages are available */
+ dir_off = first_dir;
+ pg_off = first_pg;
+
+ to_update[n_pgs_to_update] = first_dir;
+ n_pgs_to_update++;
+
+ for (i = 0; i < n_entries; i++) {
+ unsigned long long cur_phys_addr;
+
+ if (phys_iter_next(phys_iter_arg, &cur_phys_addr) != 0) {
+ pr_err("%s:%d not enough entries in physical address array\n",
+ __func__, __LINE__);
+ kfree(neo);
+ kfree(to_update);
+ return (void *)(-EBUSY);
+ }
+ for (d = 0; d < duplicate; d++) {
+ if (pg_off >= MMU_N_PAGE) {
+ /* move to next directory */
+ dir_off++;
+ /* using its first page */
+ pg_off = 0;
+
+ to_update[n_pgs_to_update] = dir_off;
+ n_pgs_to_update++;
+ }
+
+ /* this page table object does not exist, create it */
+ if (!dir_pgtbl[dir_off]) {
+ dir_pgtbl[dir_off] = mmu_pgt_create(mmu_dir);
+ if (IS_ERR_VALUE((unsigned long)dir_pgtbl[dir_off])) {
+ dir_pgtbl[dir_off] = NULL;
+ goto cleanup_fail;
+ }
+ /*
+ * make this page table valid
+ * should be dir_off
+ */
+ mmu_dir->mmu_info_cfg.pfn_page_write(mmu_dir->dir_page,
+ dir_off,
+ dir_pgtbl[dir_off]->page->phys_addr,
+ MMU_FLAG_VALID);
+ dir_modified = TRUE;
+ }
+
+ /*
+ * map this particular page in the page table
+ * use d*(MMU page size) to add additional entries from
+ * the given physical address with the correct offset
+ * for the MMU
+ */
+ mmu_dir->mmu_info_cfg.pfn_page_write(dir_pgtbl[dir_off]->page,
+ pg_off,
+ cur_phys_addr + d *
+ mmu_get_page_size(),
+ neo->used_flag |
+ MMU_FLAG_VALID);
+ dir_pgtbl[dir_off]->valid_entries++;
+
+ pg_off++;
+ } /* for duplicate */
+ } /* for entries */
+
+ neo->n_entries = n_entries * duplicate;
+ /* one more mapping is related to this directory */
+ mmu_dir->num_mapping++;
+
+ /* if non UMA we need to update device memory */
+ if (mmu_dir->mmu_info_cfg.pfn_page_update) {
+ while (n_pgs_to_update > 0) {
+ unsigned int idx = to_update[n_pgs_to_update - 1];
+ struct mmu_page_cfg_table *tbl = dir_pgtbl[idx];
+
+ mmu_dir->mmu_info_cfg.pfn_page_update(tbl->page);
+ n_pgs_to_update--;
+ }
+ if (dir_modified)
+ mmu_dir->mmu_info_cfg.pfn_page_update(mmu_dir->dir_page);
+ }
+
+ kfree(to_update);
+ return neo;
+
+cleanup_fail:
+ pr_err("%s:%d failed to create a non-existing page table\n", __func__, __LINE__);
+
+ /*
+ * invalidate all already mapped pages -
+ * do not destroy the created pages
+ */
+ while (i > 1) {
+ if (d == 0) {
+ i--;
+ d = duplicate;
+ }
+ d--;
+
+ if (pg_off == 0) {
+ pg_off = MMU_N_PAGE;
+ if (!dir_off)
+ continue;
+ dir_off--;
+ }
+
+ pg_off--;
+
+ /* it should have been used before */
+ if (!dir_pgtbl[dir_off])
+ continue;
+
+ mmu_dir->mmu_info_cfg.pfn_page_write(dir_pgtbl[dir_off]->page,
+ pg_off, 0,
+ MMU_FLAG_INVALID);
+ dir_pgtbl[dir_off]->valid_entries--;
+ }
+
+ kfree(neo);
+ kfree(to_update);
+ return (void *)(-ENOMEM);
+}
+
+/*
+ * scatter-gather physical page iterator
+ */
+struct sg_phys_iter {
+ void *sgl;
+ unsigned int offset;
+};
+
+static int sg_phys_iter_next(void *arg, unsigned long long *next)
+{
+ struct sg_phys_iter *iter = arg;
+
+ if (!iter->sgl)
+ return -ENOENT;
+
+ *next = sg_phys(iter->sgl) + iter->offset; /* phys_addr to dma_addr? */
+ iter->offset += PAGE_SIZE;
+
+ if (iter->offset == img_mmu_get_sgl_length(iter->sgl)) {
+ iter->sgl = sg_next(iter->sgl);
+ iter->offset = 0;
+ }
+
+ return 0;
+}
+
+/*
+ * mmu_directory_map_sg - Create a page table mapping for a list of physical
+ * pages and device virtual address
+ *
+ * @mmu_dir: directory to use for the mapping
+ * @phys_page_sg: sorted array of physical addresses (ascending order). The
+ * number of elements is dev_va->alloc_size/MMU_PAGE_SIZE
+ * @note This array can potentially be big, the caller may need to use vmalloc
+ * if running the linux kernel (e.g. mapping a 1080p NV12 is 760 entries, 6080
+ * Bytes - 2 CPU pages needed, fine with kmalloc; 4k NV12 is 3038 entries,
+ * 24304 Bytes - 6 CPU pages needed, kmalloc would try to find 8 contiguous
+ * pages which may be problematic if memory is fragmented)
+ * @dev_va: associated device virtual address. Given structure is copied
+ * @map_flag: flags to apply on the page (typically 0x2 for Write Only,
+ * 0x4 for Read Only) - the flags must not set bit 0, as 0x1 is the
+ * valid flag.
+ *
+ * @warning Mapping can cause memory allocation (missing pages) - do not call
+ * while interrupts are disabled
+ *
+ * @return The opaque handle to the mmu_map object and result to 0
+ * @return (void *) in case of an error with the following values:
+ * @li -EINVAL if the allocation size is not a multiple of MMU_PAGE_SIZE,
+ * if the given list of pages is too long or not long enough for the
+ * mapping, or if the given flags set the valid bit
+ * @li -EPERM if the virtual memory is already mapped
+ * @li -ENOMEM if an internal allocation failed
+ * @li -ENOMEM if a page creation failed
+ */
+struct mmu_map *mmu_directory_map_sg(struct mmu_directory *mmu_dir,
+ void *phys_page_sg,
+ const struct mmu_heap_alloc *dev_va,
+ unsigned int map_flag)
+{
+ struct sg_phys_iter arg = { phys_page_sg };
+
+ return mmu_directory_map(mmu_dir, dev_va, map_flag,
+ sg_phys_iter_next, &arg);
+}
+
+/*
+ * mmu_directory_unmap - Un-map the mapped pages (invalidate their entries) and
+ * destroy the mapping object
+ * @map: pointer to the pages to un-map
+ *
+ * This does not destroy the created Page Table (even if they are becoming
+ * un-used) and does not change the Directory valid bits.
+ *
+ * @return 0 on success or a negative error code
+ */
+int mmu_directory_unmap(struct mmu_map *map)
+{
+ unsigned int first_dir = 0;
+ unsigned int first_pg = 0;
+ unsigned int dir_offset = 0;
+ unsigned int pg_offset = 0;
+ unsigned int i;
+ struct mmu_directory *mmu_dir = NULL;
+
+ /*
+ * in non-UMA, updates on pages need to be done - store the index of
+ * directory entry pages to update
+ */
+ unsigned int *to_update;
+ unsigned int n_pgs_to_update = 0;
+
+ if (!map || map->n_entries <= 0 || !map->mmu_dir)
+ return -EINVAL;
+
+ mmu_dir = map->mmu_dir;
+
+ /*
+ * has to be dynamically allocated because it is bigger than 1k (max
+ * stack in the kernel)
+ */
+ to_update = kcalloc(MMU_N_TABLE, sizeof(unsigned int), GFP_KERNEL);
+ if (!to_update)
+ return -ENOMEM;
+
+ first_dir = mmu_dir_entry(map->dev_virt_addr.virt_addr);
+ first_pg = mmu_pg_entry(map->dev_virt_addr.virt_addr);
+
+ /* verify that the pages that should be used are available */
+ dir_offset = first_dir;
+ pg_offset = first_pg;
+
+ to_update[n_pgs_to_update] = first_dir;
+ n_pgs_to_update++;
+
+ for (i = 0; i < map->n_entries; i++) {
+ if (pg_offset >= MMU_N_PAGE) {
+ /* move to next directory */
+ dir_offset++;
+ /* using its first page */
+ pg_offset = 0;
+
+ to_update[n_pgs_to_update] = dir_offset;
+ n_pgs_to_update++;
+ }
+
+ /*
+ * only touch the page table object if it still exists; if it is
+ * NULL, something destroyed it while the mapping still needed it
+ */
+ if (mmu_dir->dir_page_table[dir_offset]) {
+ mmu_dir->mmu_info_cfg.pfn_page_write
+ (mmu_dir->dir_page_table[dir_offset]->page,
+ pg_offset, 0,
+ MMU_FLAG_INVALID);
+ mmu_dir->dir_page_table[dir_offset]->valid_entries--;
+ }
+
+ pg_offset++;
+ }
+
+ mmu_dir->num_mapping--;
+
+ if (mmu_dir->mmu_info_cfg.pfn_page_update)
+ while (n_pgs_to_update > 0) {
+ unsigned int idx = to_update[n_pgs_to_update - 1];
+ struct mmu_page_cfg_table *tbl = mmu_dir->dir_page_table[idx];
+
+ mmu_dir->mmu_info_cfg.pfn_page_update(tbl->page);
+ n_pgs_to_update--;
+ }
+
+ /* mapping does not own the given virtual address */
+ kfree(map);
+ kfree(to_update);
+ return 0;
+}
+
+unsigned int mmu_directory_get_pagetable_entry(struct mmu_directory *mmu_dir,
+ unsigned long dev_virt_addr)
+{
+ unsigned int dir_entry = 0;
+ unsigned int table_entry = 0;
+ struct mmu_page_cfg_table *tbl;
+ struct mmu_page_cfg_table **dir_pgtbl = NULL;
+ unsigned int *page_mem;
+
+ if (!mmu_dir) {
+ pr_err("mmu directory table is NULL\n");
+ return 0xFFFFFF;
+ }
+
+ dir_pgtbl = mmu_dir->dir_page_table;
+
+ dir_entry = mmu_dir_entry(dev_virt_addr);
+ table_entry = mmu_pg_entry(dev_virt_addr);
+
+ tbl = dir_pgtbl[dir_entry];
+ if (!tbl) {
+ pr_err("page table entry is NULL\n");
+ return 0xFFFFFF;
+ }
+
+ page_mem = (unsigned int *)tbl->page->cpu_virt_addr;
+
+#if defined(DEBUG_DECODER_DRIVER) || defined(DEBUG_ENCODER_DRIVER)
+ pr_info("Page table value@dir_entry:table_entry[%d : %d] = %x\n",
+ dir_entry, table_entry, page_mem[table_entry]);
+#endif
+
+ return page_mem[table_entry];
+}
diff --git a/drivers/staging/media/vxd/common/imgmmu.h b/drivers/staging/media/vxd/common/imgmmu.h
new file mode 100644
index 000000000000..b35256d09e24
--- /dev/null
+++ b/drivers/staging/media/vxd/common/imgmmu.h
@@ -0,0 +1,180 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * IMG DEC MMU Library
+ *
+ * Copyright (c) Imagination Technologies Ltd.
+ * Copyright (c) 2021 Texas Instruments Incorporated - http://www.ti.com/
+ *
+ * Authors:
+ * Angela Stegmaier <[email protected]>
+ *
+ * Re-written for upstreaming
+ * Sidraya Jayagond <[email protected]>
+ */
+
+#ifndef IMG_DEC_MMU_MMU_H
+#define IMG_DEC_MMU_MMU_H
+
+#include <linux/types.h>
+
+#ifndef MMU_PHYS_SIZE
+/* @brief MMU physical address size in bits */
+#define MMU_PHYS_SIZE 40
+#endif
+
+#ifndef MMU_VIRT_SIZE
+/* @brief MMU virtual address size in bits */
+#define MMU_VIRT_SIZE 32
+#endif
+
+#ifndef MMU_PAGE_SIZE
+/* @brief Page size in bytes */
+#define MMU_PAGE_SIZE 4096u
+#define MMU_PAGE_SHIFT 12
+#define MMU_DIR_SHIFT 22
+#endif
+
+#if MMU_VIRT_SIZE == 32
+/* @brief max number of page tables that can be stored in the directory */
+#define MMU_N_TABLE (MMU_PAGE_SIZE / 4u)
+/* @brief max number of page mapping in the pagetable */
+#define MMU_N_PAGE (MMU_PAGE_SIZE / 4u)
+#endif
+
+/* @brief Memory flags used to mark a page mapping as valid or invalid */
+#define MMU_FLAG_VALID 0x1
+#define MMU_FLAG_INVALID 0x0
+
+/*
+ * This type defines MMU variant.
+ */
+enum mmu_etype {
+ MMU_TYPE_NONE = 0,
+ MMU_TYPE_32BIT,
+ MMU_TYPE_36BIT,
+ MMU_TYPE_40BIT,
+ MMU_TYPE_FORCE32BITS = 0x7FFFFFFFU
+};
+
+/* @brief Page offset mask in virtual address - bottom bits */
+static const unsigned long VIRT_PAGE_OFF_MASK = ((1 << MMU_PAGE_SHIFT) - 1);
+/* @brief Page table index mask in virtual address - middle bits */
+static const unsigned long VIRT_PAGE_TBL_MASK =
+ (((1 << MMU_DIR_SHIFT) - 1) & ~(((1 << MMU_PAGE_SHIFT) - 1)));
+/* @brief Directory index mask in virtual address - high bits */
+static const unsigned long VIRT_DIR_IDX_MASK = (~((1 << MMU_DIR_SHIFT) - 1));
+
+/*
+ * struct mmu_heap_alloc - information about a virtual mem heap allocation
+ * @virt_addr: pointer to start of the allocation
+ * @alloc_size: size in bytes
+ */
+struct mmu_heap_alloc {
+ unsigned long virt_addr;
+ unsigned long alloc_size;
+};
+
+/*
+ * struct mmu_page_cfg - mmu_page configuration
+ * @phys_addr: physical address - unsigned long long is used to support extended physical
+ * addresses on 32-bit systems
+ * @cpu_virt_addr: CPU virtual address pointer
+ */
+struct mmu_page_cfg {
+ unsigned long long phys_addr;
+ unsigned long cpu_virt_addr;
+};
+
+/*
+ * typedef mmu_pfn_page_alloc - page table allocation function
+ *
+ * Pointer to a function implemented by the allocator in use to create one
+ * physical page (used for the MMU mapping - directory page and mapping page)
+ *
+ * Return:
+ * * A populated mmu_page_cfg structure with the result of the page alloc.
+ * * NULL if the allocation failed.
+ */
+typedef struct mmu_page_cfg *(*mmu_pfn_page_alloc) (void *);
+
+/*
+ * typedef mmu_pfn_page_free
+ * @arg1: pointer to the mmu_page_cfg that is allocated using mmu_pfn_page_alloc
+ *
+ * Pointer to a function to free the allocated page table used for MMU mapping.
+ *
+ * @return void
+ */
+typedef void (*mmu_pfn_page_free) (struct mmu_page_cfg *arg1);
+
+/*
+ * typedef mmu_pfn_page_update
+ * @arg1: pointer to the mmu_page_cfg that is allocated using mmu_pfn_page_alloc
+ *
+ * Pointer to a function to update device memory on non-unified memory (non-UMA)
+ *
+ * @return void
+ */
+typedef void (*mmu_pfn_page_update) (struct mmu_page_cfg *arg1);
+
+/*
+ * typedef mmu_pfn_page_write
+ * @mmu_page: MMU page configuration to be written
+ * @offset: offset in entries (32b word)
+ * @pa_to_write: physical address to write
+ * @flags: bottom part of the entry, used as flags for the MMU (including
+ * the valid flag)
+ *
+ * Pointer to a function to write to a device address
+ *
+ * @return void
+ */
+typedef void (*mmu_pfn_page_write) (struct mmu_page_cfg *mmu_page,
+ unsigned int offset,
+ unsigned long long pa_to_write, unsigned int flags);
+
+/*
+ * struct mmu_info
+ * @pfn_page_alloc: function pointer for allocating a physical page used in
+ * MMU mapping
+ * @alloc_ctx: allocation context handler
+ * @pfn_page_free: function pointer for freeing a physical page used in
+ * MMU mapping
+ * @pfn_page_write: function pointer to write a physical address onto a page.
+ * If NULL, then internal function is used. Internal function
+ * assumes that MMU_PHYS_SIZE is the MMU size.
+ * @pfn_page_update: function pointer to update a physical page on device if
+ * non UMA.
+ */
+struct mmu_info {
+ mmu_pfn_page_alloc pfn_page_alloc;
+ void *alloc_ctx;
+ mmu_pfn_page_free pfn_page_free;
+ mmu_pfn_page_write pfn_page_write;
+ mmu_pfn_page_update pfn_page_update;
+};
+
+/*
+ * mmu_get_page_size() - Return the compile-time page size of the
+ * MMU (in bytes)
+ */
+static inline unsigned long mmu_get_page_size(void)
+{
+ return MMU_PAGE_SIZE;
+}
+
+struct mmu_directory *mmu_create_directory(const struct mmu_info *mmu_info_ops);
+int mmu_destroy_directory(struct mmu_directory *mmu_dir);
+
+struct mmu_page_cfg *mmu_directory_get_page(struct mmu_directory *mmu_dir);
+
+struct mmu_map *mmu_directory_map_sg(struct mmu_directory *mmu_dir,
+ void *phys_page_sg,
+ const struct mmu_heap_alloc *dev_va,
+ unsigned int map_flag);
+int mmu_directory_unmap(struct mmu_map *map);
+
+unsigned int mmu_directory_get_pagetable_entry(struct mmu_directory *mmu_dir,
+ unsigned long dev_virt_addr);
+
+#endif /* IMG_DEC_MMU_MMU_H */
--
2.17.1
From: Sidraya <[email protected]>
Add the dt-binding for the img,d5500-vxd node for DRA8x.
Signed-off-by: Angela Stegmaier <[email protected]>
Signed-off-by: Sidraya <[email protected]>
---
.../bindings/media/img,d5520-vxd.yaml | 52 +++++++++++++++++++
MAINTAINERS | 7 +++
2 files changed, 59 insertions(+)
create mode 100644 Documentation/devicetree/bindings/media/img,d5520-vxd.yaml
diff --git a/Documentation/devicetree/bindings/media/img,d5520-vxd.yaml b/Documentation/devicetree/bindings/media/img,d5520-vxd.yaml
new file mode 100644
index 000000000000..812a431336a2
--- /dev/null
+++ b/Documentation/devicetree/bindings/media/img,d5520-vxd.yaml
@@ -0,0 +1,52 @@
+# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/media/img,d5520-vxd.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: Imagination D5520-VXD Driver
+
+maintainers:
+ - Sidraya Jayagond <[email protected]>
+ - Prashanth Kumar Amai <[email protected]>
+
+description: |
+ The IMG VXD video decode driver for the D5500-VXD is a video decoder for
+ multiple video formats including H.264 and HEVC on the TI J721E family
+ of SoCs.
+
+properties:
+ compatible:
+ const: img,d5500-vxd
+
+ reg:
+ maxItems: 2
+
+ interrupts:
+ maxItems: 1
+
+ power-domains:
+ maxItems: 1
+
+required:
+ - compatible
+ - reg
+ - interrupts
+
+additionalProperties: false
+
+examples:
+ - |
+ #include <dt-bindings/interrupt-controller/irq.h>
+ #include <dt-bindings/interrupt-controller/arm-gic.h>
+
+ d5520: video-decoder@4300000 {
+ /* IMG D5520 driver configuration */
+ compatible = "img,d5500-vxd";
+ reg = <0x00 0x04300000>,
+ <0x00 0x100000>;
+ power-domains = <&k3_pds 144>;
+ interrupts = <GIC_SPI 180 IRQ_TYPE_LEVEL_HIGH>;
+ };
+
+...
diff --git a/MAINTAINERS b/MAINTAINERS
index fd25e4ecf0b9..163b3176ccf9 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -19531,6 +19531,13 @@ W: https://linuxtv.org
T: git git://linuxtv.org/media_tree.git
F: drivers/media/test-drivers/vicodec/*
+VIDEO DECODER DRIVER FOR TI DRA8XX/J721E
+M: Prashant Amai <[email protected]>
+M: Sidraya Jayagond <[email protected]>
+L: [email protected]
+S: Maintained
+F: Documentation/devicetree/bindings/media/img,d5520-vxd.yaml
+
VIDEO I2C POLLING DRIVER
M: Matt Ranostay <[email protected]>
L: [email protected]
--
2.17.1
From: Sidraya <[email protected]>
This patch prepares the picture commands for the firmware.
It includes the reconstructed and alternate picture commands.
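For illustration only (not part of the patch): the stride-code lookup added
below maps a 64-byte-aligned row stride to an index in the hardware table.
The sketch copies the amsvdx_64byte_row_stride values from this patch;
the helper name stride_to_code() is a placeholder for this example.

#include <linux/kernel.h>

static const unsigned int row_strides[] = {
	384, 768, 1280, 1920, 512, 1024, 2048, 4096
};

static int stride_to_code(unsigned int row_stride)
{
	unsigned int i;

	for (i = 0; i < ARRAY_SIZE(row_strides); i++)
		if (row_strides[i] == row_stride)
			return i;	/* e.g. 1920 -> stride code 3 */
	return -1;			/* unsupported stride */
}
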
Signed-off-by: Amit Makani <[email protected]>
Signed-off-by: Sidraya <[email protected]>
---
MAINTAINERS | 2 +
drivers/staging/media/vxd/decoder/vxd_int.c | 1137 +++++++++++++++++++
drivers/staging/media/vxd/decoder/vxd_int.h | 128 +++
3 files changed, 1267 insertions(+)
create mode 100644 drivers/staging/media/vxd/decoder/vxd_int.c
create mode 100644 drivers/staging/media/vxd/decoder/vxd_int.h
diff --git a/MAINTAINERS b/MAINTAINERS
index 2327ea12caa6..7b21ebfc61d4 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -19548,6 +19548,8 @@ F: drivers/staging/media/vxd/decoder/img_dec_common.h
F: drivers/staging/media/vxd/decoder/vxd_core.c
F: drivers/staging/media/vxd/decoder/vxd_dec.c
F: drivers/staging/media/vxd/decoder/vxd_dec.h
+F: drivers/staging/media/vxd/decoder/vxd_int.c
+F: drivers/staging/media/vxd/decoder/vxd_int.h
F: drivers/staging/media/vxd/decoder/vxd_pvdec.c
F: drivers/staging/media/vxd/decoder/vxd_pvdec_priv.h
F: drivers/staging/media/vxd/decoder/vxd_pvdec_regs.h
diff --git a/drivers/staging/media/vxd/decoder/vxd_int.c b/drivers/staging/media/vxd/decoder/vxd_int.c
new file mode 100644
index 000000000000..c75aef6deed1
--- /dev/null
+++ b/drivers/staging/media/vxd/decoder/vxd_int.c
@@ -0,0 +1,1137 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * VXD DEC Common low level core interface component
+ *
+ * Copyright (c) Imagination Technologies Ltd.
+ * Copyright (c) 2021 Texas Instruments Incorporated - http://www.ti.com/
+ *
+ * Authors:
+ * Amit Makani <[email protected]>
+ *
+ * Re-written for upstreaming
+ * Sidraya Jayagond <[email protected]>
+ * Prashanth Kumar Amai <[email protected]>
+ */
+
+#include <linux/types.h>
+#include <linux/dma-mapping.h>
+#include <media/v4l2-ctrls.h>
+#include <media/v4l2-device.h>
+#include <media/v4l2-mem2mem.h>
+
+#include "bspp.h"
+#include "fw_interface.h"
+#include "h264fw_data.h"
+#include "img_errors.h"
+#include "img_dec_common.h"
+#include "img_pvdec_core_regs.h"
+#include "img_pvdec_pixel_regs.h"
+#include "img_pvdec_test_regs.h"
+#include "img_vdec_fw_msg.h"
+#include "img_video_bus4_mmu_regs.h"
+#include "img_msvdx_core_regs.h"
+#include "img_msvdx_cmds.h"
+#include "reg_io2.h"
+#include "scaler_setup.h"
+#include "vdecdd_defs.h"
+#include "vdecdd_utils.h"
+#include "vdecfw_shared.h"
+#include "vdec_defs.h"
+#include "vxd_ext.h"
+#include "vxd_int.h"
+#include "vxd_props.h"
+
+#define MSVDX_CACHE_REF_OFFSET_V100 (72L)
+#define MSVDX_CACHE_ROW_OFFSET_V100 (4L)
+
+#define MSVDX_CACHE_REF_OFFSET_V550 (144L)
+#define MSVDX_CACHE_ROW_OFFSET_V550 (8L)
+
+#define GET_BITS(v, lb, n) (((v) >> (lb)) & ((1 << (n)) - 1))
+#define IS_PVDEC_PIPELINE(std) ((std) == VDEC_STD_HEVC ? 1 : 0)
+
+static int amsvdx_codecmode[VDEC_STD_MAX] = {
+ /* Invalid */
+ -1,
+ /* MPEG2 */
+ 3,
+ /* MPEG4 */
+ 4,
+ /* H263 */
+ 4,
+ /* H264 */
+ 1,
+ /* VC1 */
+ 2,
+ /* AVS */
+ 5,
+ /* RealVideo (8) */
+ 8,
+ /* JPEG */
+ 0,
+ /* On2 VP6 */
+ 10,
+ /* On2 VP8 */
+ 11,
+ /* Invalid */
+#ifdef HAS_VP9
+ /* On2 VP9 */
+ 13,
+#endif
+ /* Sorenson */
+ 4,
+ /* HEVC */
+ 12,
+};
+
+struct msvdx_scaler_coeff_cmds {
+ unsigned int acmd_horizluma_coeff[VDECFW_NUM_SCALE_COEFFS];
+ unsigned int acmd_vertluma_coeff[VDECFW_NUM_SCALE_COEFFS];
+ unsigned int acmd_horizchroma_coeff[VDECFW_NUM_SCALE_COEFFS];
+ unsigned int acmd_vertchroma_coeff[VDECFW_NUM_SCALE_COEFFS];
+};
+
+static struct vxd_vidstd_props astd_props[] = {
+ { VDEC_STD_MPEG2, CORE_REVISION(7, 0, 0), 64, 16, 4096, 4096, 0, 8, 8,
+ PIXEL_FORMAT_420 },
+ { VDEC_STD_MPEG4, CORE_REVISION(7, 0, 0), 64, 16, 4096, 4096, 0, 8, 8,
+ PIXEL_FORMAT_420 },
+ { VDEC_STD_H263, CORE_REVISION(7, 0, 0), 64, 16, 4096, 4096, 0, 8, 8,
+ PIXEL_FORMAT_420 },
+ { VDEC_STD_H264, CORE_REVISION(7, 0, 0), 64, 16, 4096, 4096, 0x10000, 8,
+ 8, PIXEL_FORMAT_420 },
+ { VDEC_STD_VC1, CORE_REVISION(7, 0, 0), 80, 16, 4096, 4096, 0, 8, 8,
+ PIXEL_FORMAT_420 },
+ { VDEC_STD_AVS, CORE_REVISION(7, 0, 0), 64, 16, 4096, 4096, 0, 8, 8,
+ PIXEL_FORMAT_420 },
+ { VDEC_STD_REAL, CORE_REVISION(7, 0, 0), 64, 16, 4096, 4096, 0, 8, 8,
+ PIXEL_FORMAT_420 },
+ { VDEC_STD_JPEG, CORE_REVISION(7, 0, 0), 64, 16, 32768, 32768, 0, 8, 8,
+ PIXEL_FORMAT_444 },
+ { VDEC_STD_VP6, CORE_REVISION(7, 0, 0), 64, 16, 4096, 4096, 0, 8, 8,
+ PIXEL_FORMAT_420 },
+ { VDEC_STD_VP8, CORE_REVISION(7, 0, 0), 64, 16, 4096, 4096, 0, 8, 8,
+ PIXEL_FORMAT_420 },
+ { VDEC_STD_SORENSON, CORE_REVISION(7, 0, 0), 64, 16, 4096, 4096, 0, 8,
+ 8, PIXEL_FORMAT_420 },
+ { VDEC_STD_HEVC, CORE_REVISION(7, 0, 0), 64, 16, 8192, 8192, 0, 8, 8,
+ PIXEL_FORMAT_420 },
+};
+
+enum vdec_msvdx_async_mode {
+ VDEC_MSVDX_ASYNC_NORMAL,
+ VDEC_MSVDX_ASYNC_VDMC,
+ VDEC_MSVDX_ASYNC_VDEB,
+ VDEC_MSVDX_ASYNC_FORCE32BITS = 0x7FFFFFFFU
+};
+
+/* MSVDX row strides for video buffers. */
+static const unsigned int amsvdx_64byte_row_stride[] = {
+ 384, 768, 1280, 1920, 512, 1024, 2048, 4096
+};
+
+/* MSVDX row strides for jpeg buffers. */
+static const unsigned int amsvdx_jpeg_row_stride[] = {
+ 256, 384, 512, 768, 1024, 1536, 2048, 3072, 4096, 6144, 8192, 12288, 16384, 24576, 32768
+};
+
+/* VXD Core major revision. */
+static unsigned int maj_rev;
+/* VXD Core minor revision. */
+static unsigned int min_rev;
+/* VXD Core maintenance revision. */
+static unsigned int maint_rev;
+
+static int get_stride_code(enum vdec_vid_std vidstd, unsigned int row_stride)
+{
+ unsigned int i;
+
+ if (vidstd == VDEC_STD_JPEG) {
+ for (i = 0; i < (sizeof(amsvdx_jpeg_row_stride) /
+ sizeof(amsvdx_jpeg_row_stride[0])); i++) {
+ if (amsvdx_jpeg_row_stride[i] == row_stride)
+ return i;
+ }
+ } else {
+ for (i = 0; i < (sizeof(amsvdx_64byte_row_stride) /
+ sizeof(amsvdx_64byte_row_stride[0])); i++) {
+ if (amsvdx_64byte_row_stride[i] == row_stride)
+ return i;
+ }
+ }
+
+ return -1;
+}
+
+/* Obtains the hardware defined video profile. */
+static unsigned int vxd_getprofile(enum vdec_vid_std vidstd, unsigned int std_profile)
+{
+ unsigned int profile = 0;
+
+ switch (vidstd) {
+ case VDEC_STD_H264:
+ switch (std_profile) {
+ case H264_PROFILE_BASELINE:
+ profile = 0;
+ break;
+
+ /*
+ * Extended may be attempted as Baseline or
+ * Main depending on the constraint_set_flags
+ */
+ case H264_PROFILE_EXTENDED:
+ case H264_PROFILE_MAIN:
+ profile = 1;
+ break;
+
+ case H264_PROFILE_HIGH:
+ case H264_PROFILE_HIGH444:
+ case H264_PROFILE_HIGH422:
+ case H264_PROFILE_HIGH10:
+ case H264_PROFILE_CAVLC444:
+ case H264_PROFILE_MVC_HIGH:
+ case H264_PROFILE_MVC_STEREO:
+ profile = 2;
+ break;
+ default:
+ profile = 2;
+ break;
+ }
+ break;
+
+ default:
+ profile = 0;
+ break;
+ }
+
+ return profile;
+}
+
+static int vxd_getcoreproperties(struct vxd_coreprops *coreprops,
+ unsigned int corerev,
+ unsigned int pvdec_coreid, unsigned int mmu_config0,
+ unsigned int mmu_config1, unsigned int *pixel_pipecfg,
+ unsigned int *pixel_misccfg, unsigned int max_framecfg)
+{
+ unsigned int group_id;
+ unsigned int core_id;
+ unsigned int core_config;
+ unsigned int extended_address_range;
+ unsigned char group_size = 0;
+ unsigned char pipe_minus1 = 0;
+ unsigned int max_h264_hw_chromaformat = 0;
+ unsigned int max_hevc_hw_chromaformat = 0;
+ unsigned int max_bitdepth_luma = 0;
+ unsigned int i;
+
+ struct pvdec_core_rev core_rev;
+
+ if (!coreprops || !pixel_pipecfg || !pixel_misccfg)
+ return IMG_ERROR_INVALID_PARAMETERS;
+
+ /* PVDEC Core Revision Information */
+ core_rev.maj_rev = REGIO_READ_FIELD(corerev, PVDEC_CORE, CR_PVDEC_CORE_REV,
+ CR_PVDEC_MAJOR_REV);
+ core_rev.min_rev = REGIO_READ_FIELD(corerev, PVDEC_CORE, CR_PVDEC_CORE_REV,
+ CR_PVDEC_MINOR_REV);
+ core_rev.maint_rev = REGIO_READ_FIELD(corerev, PVDEC_CORE, CR_PVDEC_CORE_REV,
+ CR_PVDEC_MAINT_REV);
+
+ /* core id */
+ group_id = REGIO_READ_FIELD(pvdec_coreid, PVDEC_CORE, CR_PVDEC_CORE_ID, CR_GROUP_ID);
+ core_id = REGIO_READ_FIELD(pvdec_coreid, PVDEC_CORE, CR_PVDEC_CORE_ID, CR_CORE_ID);
+
+ /* Ensure that the core is IMG Video Decoder (PVDEC). */
+ if (group_id != 3 || core_id != 3)
+ return IMG_ERROR_DEVICE_NOT_FOUND;
+
+ core_config = REGIO_READ_FIELD(pvdec_coreid, PVDEC_CORE,
+ CR_PVDEC_CORE_ID, CR_PVDEC_CORE_CONFIG);
+
+ memset(coreprops, 0, sizeof(*(coreprops)));
+
+ /* Construct core version name. */
+ snprintf(coreprops->aversion, VER_STR_LEN, "%d.%d.%d",
+ core_rev.maj_rev, core_rev.min_rev, core_rev.maint_rev);
+
+ coreprops->mmu_support_stride_per_context =
+ REGIO_READ_FIELD(mmu_config1, IMG_VIDEO_BUS4_MMU,
+ MMU_CONFIG1,
+ SUPPORT_STRIDE_PER_CONTEXT) == 1 ? 1 : 0;
+
+ coreprops->mmu_support_secure = REGIO_READ_FIELD(mmu_config1, IMG_VIDEO_BUS4_MMU,
+ MMU_CONFIG1, SUPPORT_SECURE) == 1 ? 1 : 0;
+
+ extended_address_range = REGIO_READ_FIELD(mmu_config0, IMG_VIDEO_BUS4_MMU,
+ MMU_CONFIG0, EXTENDED_ADDR_RANGE);
+
+ switch (extended_address_range) {
+ case 0:
+ coreprops->mmu_type = MMU_TYPE_32BIT;
+ break;
+ case 4:
+ coreprops->mmu_type = MMU_TYPE_36BIT;
+ break;
+ case 8:
+ coreprops->mmu_type = MMU_TYPE_40BIT;
+ break;
+ default:
+ return IMG_ERROR_NOT_SUPPORTED;
+ }
+
+ group_size += REGIO_READ_FIELD(mmu_config0, IMG_VIDEO_BUS4_MMU,
+ MMU_CONFIG0, GROUP_OVERRIDE_SIZE);
+
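+	/*
+	 * Entropy and pixel pipe counts are packed into the low byte of the
+	 * core config word.
+	 */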
+ coreprops->num_entropy_pipes = core_config & 0xF;
+ coreprops->num_pixel_pipes = core_config >> 4 & 0xF;
+#ifdef DEBUG_DECODER_DRIVER
+ pr_info("PVDEC revision %08x detected, id %08x.\n", corerev, core_id);
+ pr_info("Found %d entropy pipe(s), %d pixel pipe(s), %d group size",
+ coreprops->num_entropy_pipes, coreprops->num_pixel_pipes,
+ group_size);
+#endif
+
+ /* Set global rev info variables used by macros */
+ maj_rev = core_rev.maj_rev;
+ min_rev = core_rev.min_rev;
+ maint_rev = core_rev.maint_rev;
+
+ /* Default settings */
+ for (i = 0; i < ARRAY_SIZE(astd_props); i++) {
+ struct vxd_vidstd_props *pvidstd_props =
+ &coreprops->vidstd_props[astd_props[i].vidstd];
+		/*
+		 * Update the video standard properties if the core is at
+		 * least the specified version and the properties are for
+		 * newer cores than those previously applied.
+		 */
+ if (FROM_REV(MAJOR_REVISION((int)astd_props[i].core_rev),
+ MINOR_REVISION((int)astd_props[i].core_rev),
+ MAINT_REVISION((int)astd_props[i].core_rev), int) &&
+ astd_props[i].core_rev >= pvidstd_props->core_rev) {
+ *pvidstd_props = astd_props[i];
+
+ if (pvidstd_props->vidstd != VDEC_STD_JPEG &&
+ (FROM_REV(8, 0, 0, int)) && (pvidstd_props->vidstd ==
+ VDEC_STD_HEVC ? 1 : 0)) {
+ /*
+ * override default values with values
+ * specified in HW (register does not
+ * exist in previous cores)
+ */
+ pvidstd_props->max_width =
+ 2 << REGIO_READ_FIELD(max_framecfg,
+ PVDEC_PIXEL,
+ CR_MAX_FRAME_CONFIG,
+ CR_PVDEC_HOR_MSB);
+
+ pvidstd_props->max_height =
+ 2 << REGIO_READ_FIELD(max_framecfg,
+ PVDEC_PIXEL,
+ CR_MAX_FRAME_CONFIG,
+ CR_PVDEC_VER_MSB);
+ } else if (pvidstd_props->vidstd != VDEC_STD_JPEG &&
+ (FROM_REV(8, 0, 0, int))) {
+ pvidstd_props->max_width =
+ 2 << REGIO_READ_FIELD(max_framecfg,
+ PVDEC_PIXEL,
+ CR_MAX_FRAME_CONFIG,
+ CR_MSVDX_HOR_MSB);
+
+ pvidstd_props->max_height =
+ 2 << REGIO_READ_FIELD(max_framecfg,
+ PVDEC_PIXEL,
+ CR_MAX_FRAME_CONFIG,
+ CR_MSVDX_VER_MSB);
+ }
+ }
+ }
+
+ /* Populate the core properties. */
+ if (GET_BITS(core_config, 11, 1))
+ coreprops->hd_support = 1;
+
+ for (pipe_minus1 = 0; pipe_minus1 < coreprops->num_pixel_pipes;
+ pipe_minus1++) {
+ unsigned int current_bitdepth =
+ GET_BITS(pixel_misccfg[pipe_minus1], 4, 3) + 8;
+ unsigned int current_h264_hw_chromaformat =
+ GET_BITS(pixel_misccfg[pipe_minus1], 0, 2);
+ unsigned int current_hevc_hw_chromaformat =
+ GET_BITS(pixel_misccfg[pipe_minus1], 2, 2);
+#ifdef DEBUG_DECODER_DRIVER
+ pr_info("cur_bitdepth: %d cur_h264_hw_chromaformat: %d",
+ current_bitdepth, current_h264_hw_chromaformat);
+ pr_info("cur_hevc_hw_chromaformat: %d pipe_minus1: %d\n",
+ current_hevc_hw_chromaformat, pipe_minus1);
+#endif
+
+ if (GET_BITS(pixel_misccfg[pipe_minus1], 8, 1))
+ coreprops->rotation_support[pipe_minus1] = 1;
+
+ if (GET_BITS(pixel_misccfg[pipe_minus1], 9, 1))
+ coreprops->scaling_support[pipe_minus1] = 1;
+
+ coreprops->num_streams[pipe_minus1] =
+ GET_BITS(pixel_misccfg[pipe_minus1], 12, 2) + 1;
+
+ /* Video standards. */
+ coreprops->mpeg2[pipe_minus1] =
+ GET_BITS(pixel_pipecfg[pipe_minus1], 0, 1) ? 1 : 0;
+ coreprops->mpeg4[pipe_minus1] =
+ GET_BITS(pixel_pipecfg[pipe_minus1], 1, 1) ? 1 : 0;
+ coreprops->h264[pipe_minus1] =
+ GET_BITS(pixel_pipecfg[pipe_minus1], 2, 1) ? 1 : 0;
+ coreprops->vc1[pipe_minus1] =
+ GET_BITS(pixel_pipecfg[pipe_minus1], 3, 1) ? 1 : 0;
+ coreprops->jpeg[pipe_minus1] =
+ GET_BITS(pixel_pipecfg[pipe_minus1], 5, 1) ? 1 : 0;
+ coreprops->avs[pipe_minus1] =
+ GET_BITS(pixel_pipecfg[pipe_minus1], 7, 1) ? 1 : 0;
+ coreprops->real[pipe_minus1] =
+ GET_BITS(pixel_pipecfg[pipe_minus1], 8, 1) ? 1 : 0;
+ coreprops->vp6[pipe_minus1] =
+ GET_BITS(pixel_pipecfg[pipe_minus1], 9, 1) ? 1 : 0;
+ coreprops->vp8[pipe_minus1] =
+ GET_BITS(pixel_pipecfg[pipe_minus1], 10, 1) ? 1 : 0;
+ coreprops->hevc[pipe_minus1] =
+ GET_BITS(pixel_pipecfg[pipe_minus1], 22, 1) ? 1 : 0;
+
+ max_bitdepth_luma = (max_bitdepth_luma > current_bitdepth ?
+ max_bitdepth_luma : current_bitdepth);
+ max_h264_hw_chromaformat = (max_h264_hw_chromaformat >
+ current_h264_hw_chromaformat ? max_h264_hw_chromaformat
+ : current_h264_hw_chromaformat);
+ max_hevc_hw_chromaformat = (max_hevc_hw_chromaformat >
+ current_hevc_hw_chromaformat ? max_hevc_hw_chromaformat
+ : current_hevc_hw_chromaformat);
+ }
+
+ /* Override default bit-depth with value signalled explicitly by core. */
+ coreprops->vidstd_props[0].max_luma_bitdepth = max_bitdepth_luma;
+ coreprops->vidstd_props[0].max_chroma_bitdepth =
+ coreprops->vidstd_props[0].max_luma_bitdepth;
+
+ for (i = 1; i < VDEC_STD_MAX; i++) {
+ coreprops->vidstd_props[i].max_luma_bitdepth =
+ coreprops->vidstd_props[0].max_luma_bitdepth;
+ coreprops->vidstd_props[i].max_chroma_bitdepth =
+ coreprops->vidstd_props[0].max_chroma_bitdepth;
+ }
+
+ switch (max_h264_hw_chromaformat) {
+ case 1:
+ coreprops->vidstd_props[VDEC_STD_H264].max_chroma_format =
+ PIXEL_FORMAT_420;
+ break;
+
+ case 2:
+ coreprops->vidstd_props[VDEC_STD_H264].max_chroma_format =
+ PIXEL_FORMAT_422;
+ break;
+
+ case 3:
+ coreprops->vidstd_props[VDEC_STD_H264].max_chroma_format =
+ PIXEL_FORMAT_444;
+ break;
+
+ default:
+ break;
+ }
+
+ switch (max_hevc_hw_chromaformat) {
+ case 1:
+ coreprops->vidstd_props[VDEC_STD_HEVC].max_chroma_format =
+ PIXEL_FORMAT_420;
+ break;
+
+ case 2:
+ coreprops->vidstd_props[VDEC_STD_HEVC].max_chroma_format =
+ PIXEL_FORMAT_422;
+ break;
+
+ case 3:
+ coreprops->vidstd_props[VDEC_STD_HEVC].max_chroma_format =
+ PIXEL_FORMAT_444;
+ break;
+
+ default:
+ break;
+ }
+
+ return 0;
+}
+
+static unsigned char vxd_is_supported_byatleast_onepipe(const unsigned char *features,
+ unsigned int num_pipes)
+{
+ unsigned int i;
+
+ VDEC_ASSERT(features);
+ VDEC_ASSERT(num_pipes <= VDEC_MAX_PIXEL_PIPES);
+
+ for (i = 0; i < num_pipes; i++) {
+ if (features[i])
+ return 1;
+ }
+
+ return 0;
+}
+
+void vxd_set_reconpictcmds(const struct vdecdd_str_unit *str_unit,
+ const struct vdec_str_configdata *str_configdata,
+ const struct vdec_str_opconfig *output_config,
+ const struct vxd_coreprops *coreprops,
+ const struct vxd_buffers *buffers,
+ unsigned int *pict_cmds)
+{
+ struct pixel_pixinfo *pixel_info;
+ unsigned int row_stride_code;
+ unsigned char benable_auxline_buf = 1;
+
+ unsigned int coded_height;
+ unsigned int coded_width;
+ unsigned int disp_height;
+ unsigned int disp_width;
+ unsigned int profile;
+ unsigned char plane;
+ unsigned int y_stride;
+ unsigned int uv_stride;
+ unsigned int v_stride;
+ unsigned int cache_ref_offset;
+ unsigned int cache_row_offset;
+
+ if (str_configdata->vid_std == VDEC_STD_JPEG) {
+ disp_height = 0;
+ disp_width = 0;
+ coded_height = 0;
+ coded_width = 0;
+ } else {
+ coded_height = ALIGN(str_unit->pict_hdr_info->coded_frame_size.height,
+ (str_unit->pict_hdr_info->field) ?
+ 2 * VDEC_MB_DIMENSION : VDEC_MB_DIMENSION);
+ /* Hardware field is coded size - 1 */
+ coded_height -= 1;
+
+ coded_width = ALIGN(str_unit->pict_hdr_info->coded_frame_size.width,
+ VDEC_MB_DIMENSION);
+ /* Hardware field is coded size - 1 */
+ coded_width -= 1;
+
+ disp_height = str_unit->pict_hdr_info->disp_info.enc_disp_region.height
+ + str_unit->pict_hdr_info->disp_info.enc_disp_region.left_offset - 1;
+ disp_width = str_unit->pict_hdr_info->disp_info.enc_disp_region.width +
+ str_unit->pict_hdr_info->disp_info.enc_disp_region.top_offset - 1;
+ }
+	/*
+	 * Display picture size (DISPLAY_PICTURE)
+	 * The display size to be written is not the actual video size to be
+	 * displayed but a number that has to differ from the coded pixel size
+	 * by less than one macroblock (coded_size - display_size <= 0x0F).
+	 * Because H264 can have a different display size, we need to check and
+	 * write the coded_size again in the display_size register if this
+	 * condition is not fulfilled.
+	 */
+ if (str_configdata->vid_std != VDEC_STD_VC1 && ((coded_height - disp_height) > 0x0F)) {
+ REGIO_WRITE_FIELD_LITE(pict_cmds[VDECFW_CMD_DISPLAY_PICTURE],
+ MSVDX_CMDS, DISPLAY_PICTURE_SIZE,
+ DISPLAY_PICTURE_HEIGHT,
+ coded_height, unsigned int);
+ } else {
+ REGIO_WRITE_FIELD_LITE(pict_cmds[VDECFW_CMD_DISPLAY_PICTURE],
+ MSVDX_CMDS, DISPLAY_PICTURE_SIZE,
+ DISPLAY_PICTURE_HEIGHT,
+ disp_height, unsigned int);
+ }
+
+ if (((coded_width - disp_width) > 0x0F)) {
+ REGIO_WRITE_FIELD_LITE(pict_cmds[VDECFW_CMD_DISPLAY_PICTURE],
+ MSVDX_CMDS, DISPLAY_PICTURE_SIZE,
+ DISPLAY_PICTURE_WIDTH,
+ coded_width, unsigned int);
+ } else {
+ REGIO_WRITE_FIELD_LITE(pict_cmds[VDECFW_CMD_DISPLAY_PICTURE],
+ MSVDX_CMDS, DISPLAY_PICTURE_SIZE,
+ DISPLAY_PICTURE_WIDTH,
+ disp_width, unsigned int);
+ }
+
+ REGIO_WRITE_FIELD_LITE(pict_cmds[VDECFW_CMD_CODED_PICTURE],
+ MSVDX_CMDS, CODED_PICTURE_SIZE,
+ CODED_PICTURE_HEIGHT,
+ coded_height, unsigned int);
+ REGIO_WRITE_FIELD_LITE(pict_cmds[VDECFW_CMD_CODED_PICTURE],
+ MSVDX_CMDS, CODED_PICTURE_SIZE,
+ CODED_PICTURE_WIDTH,
+ coded_width, unsigned int);
+
+ /*
+ * For standards where dpb_diff != 1 and chroma format != 420
+ * cache_ref_offset has to be calculated in the F/W.
+ */
+ if (str_configdata->vid_std != VDEC_STD_HEVC && str_configdata->vid_std != VDEC_STD_H264) {
+ unsigned int log2_size, cache_size, luma_size;
+ unsigned char is_hevc_supported, is_hevc444_supported = 0;
+
+ is_hevc_supported =
+ vxd_is_supported_byatleast_onepipe(coreprops->hevc,
+ coreprops->num_pixel_pipes);
+
+ if (is_hevc_supported) {
+ is_hevc444_supported =
+ coreprops->vidstd_props[VDEC_STD_HEVC].max_chroma_format ==
+ PIXEL_FORMAT_444 ? 1 : 0;
+ }
+
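+		/*
+		 * Derive the MC cache layout from the core capabilities: the
+		 * cache is twice as large on HEVC-capable cores, and twice as
+		 * large again when HEVC 4:4:4 is supported. Luma occupies two
+		 * thirds of it and the reference offset is rounded up to a
+		 * multiple of 8.
+		 */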
+ log2_size = 9 + (is_hevc_supported ? 1 : 0) + (is_hevc444_supported ? 1 : 0);
+ cache_size = 3 << log2_size;
+ luma_size = (cache_size * 2) / 3;
+ cache_ref_offset = (luma_size * 15) / 32;
+ cache_ref_offset = (cache_ref_offset + 7) & (~7);
+ cache_row_offset = 0x0C;
+
+ REGIO_WRITE_FIELD(pict_cmds[VDECFW_CMD_MC_CACHE_CONFIGURATION],
+ MSVDX_CMDS, MC_CACHE_CONFIGURATION,
+ CONFIG_REF_CHROMA_ADJUST, 1,
+ unsigned int, unsigned int);
+ REGIO_WRITE_FIELD(pict_cmds[VDECFW_CMD_MC_CACHE_CONFIGURATION],
+ MSVDX_CMDS, MC_CACHE_CONFIGURATION,
+ CONFIG_REF_OFFSET, cache_ref_offset,
+ unsigned int, unsigned int);
+ REGIO_WRITE_FIELD(pict_cmds[VDECFW_CMD_MC_CACHE_CONFIGURATION],
+ MSVDX_CMDS, MC_CACHE_CONFIGURATION,
+ CONFIG_ROW_OFFSET, cache_row_offset,
+ unsigned int, unsigned int);
+ }
+
+ REGIO_WRITE_FIELD(pict_cmds[VDECFW_CMD_OPERATING_MODE],
+ MSVDX_CMDS, OPERATING_MODE, CODEC_MODE,
+ amsvdx_codecmode[str_configdata->vid_std],
+ unsigned int, unsigned int);
+
+ profile = str_unit->seq_hdr_info->com_sequ_hdr_info.codec_profile;
+ REGIO_WRITE_FIELD(pict_cmds[VDECFW_CMD_OPERATING_MODE],
+ MSVDX_CMDS, OPERATING_MODE, CODEC_PROFILE,
+ vxd_getprofile(str_configdata->vid_std, profile),
+ unsigned int, unsigned int);
+
+ plane = str_unit->seq_hdr_info->com_sequ_hdr_info.separate_chroma_planes;
+ pixel_info = &str_unit->seq_hdr_info->com_sequ_hdr_info.pixel_info;
+ REGIO_WRITE_FIELD(pict_cmds[VDECFW_CMD_OPERATING_MODE],
+ MSVDX_CMDS, OPERATING_MODE, CHROMA_FORMAT, plane ?
+ 0 : pixel_info->chroma_fmt, unsigned int, int);
+
+ if (str_configdata->vid_std != VDEC_STD_JPEG) {
+ REGIO_WRITE_FIELD(pict_cmds[VDECFW_CMD_EXT_OP_MODE],
+ MSVDX_CMDS, EXT_OP_MODE, CHROMA_FORMAT_IDC, plane ?
+ 0 : pixel_get_hw_chroma_format_idc
+ (pixel_info->chroma_fmt_idc),
+ unsigned int, int);
+
+ REGIO_WRITE_FIELD(pict_cmds[VDECFW_CMD_EXT_OP_MODE],
+ MSVDX_CMDS, EXT_OP_MODE, MEMORY_PACKING,
+ output_config->pixel_info.mem_pkg ==
+ PIXEL_BIT10_MP ? 1 : 0, unsigned int, int);
+
+ REGIO_WRITE_FIELD(pict_cmds[VDECFW_CMD_EXT_OP_MODE],
+ MSVDX_CMDS, EXT_OP_MODE, BIT_DEPTH_LUMA_MINUS8,
+ pixel_info->bitdepth_y - 8,
+ unsigned int, unsigned int);
+
+ if (pixel_info->chroma_fmt_idc == PIXEL_FORMAT_MONO) {
+ /*
+ * For monochrome streams use the same bit depth for
+ * chroma and luma.
+ */
+ REGIO_WRITE_FIELD(pict_cmds[VDECFW_CMD_EXT_OP_MODE],
+ MSVDX_CMDS, EXT_OP_MODE,
+ BIT_DEPTH_CHROMA_MINUS8,
+ pixel_info->bitdepth_y - 8,
+ unsigned int, unsigned int);
+ } else {
+ /*
+ * For normal streams use the appropriate bit depth for chroma.
+ */
+ REGIO_WRITE_FIELD(pict_cmds[VDECFW_CMD_EXT_OP_MODE], MSVDX_CMDS,
+ EXT_OP_MODE, BIT_DEPTH_CHROMA_MINUS8,
+ pixel_info->bitdepth_c - 8,
+ unsigned int, unsigned int);
+ }
+ } else {
+ pict_cmds[VDECFW_CMD_EXT_OP_MODE] = 0;
+ }
+
+ if (str_configdata->vid_std != VDEC_STD_JPEG) {
+ REGIO_WRITE_FIELD(pict_cmds[VDECFW_CMD_OPERATING_MODE], MSVDX_CMDS,
+ OPERATING_MODE, CHROMA_INTERLEAVED,
+ PIXEL_GET_HW_CHROMA_INTERLEAVED
+ (output_config->pixel_info.chroma_interleave),
+ unsigned int, int);
+ }
+
+ if (str_configdata->vid_std == VDEC_STD_JPEG) {
+ REGIO_WRITE_FIELD(pict_cmds[VDECFW_CMD_OPERATING_MODE],
+ MSVDX_CMDS, OPERATING_MODE, ASYNC_MODE,
+ VDEC_MSVDX_ASYNC_VDMC,
+ unsigned int, unsigned int);
+ }
+
+ if (str_configdata->vid_std == VDEC_STD_H264) {
+ REGIO_WRITE_FIELD(pict_cmds[VDECFW_CMD_OPERATING_MODE], MSVDX_CMDS,
+ OPERATING_MODE, ASYNC_MODE,
+ str_unit->pict_hdr_info->discontinuous_mbs ?
+ VDEC_MSVDX_ASYNC_VDMC : VDEC_MSVDX_ASYNC_NORMAL,
+ unsigned int, int);
+ }
+
+ y_stride = buffers->recon_pict->rend_info.plane_info[VDEC_PLANE_VIDEO_Y].stride;
+ uv_stride = buffers->recon_pict->rend_info.plane_info[VDEC_PLANE_VIDEO_UV].stride;
+ v_stride = buffers->recon_pict->rend_info.plane_info[VDEC_PLANE_VIDEO_V].stride;
+
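+	/*
+	 * Use the extended 64-byte-granule row stride registers when all
+	 * plane strides are multiples of the default alignment; otherwise
+	 * fall back to the legacy row stride code lookup.
+	 */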
+ if (((y_stride % (VDEC_VXD_EXT_STRIDE_ALIGNMENT_DEFAULT)) == 0) &&
+ ((uv_stride % (VDEC_VXD_EXT_STRIDE_ALIGNMENT_DEFAULT)) == 0) &&
+ ((v_stride % (VDEC_VXD_EXT_STRIDE_ALIGNMENT_DEFAULT)) == 0)) {
+ REGIO_WRITE_FIELD(pict_cmds[VDECFW_CMD_OPERATING_MODE],
+ MSVDX_CMDS, OPERATING_MODE,
+ USE_EXT_ROW_STRIDE, 1, unsigned int, int);
+
+ REGIO_WRITE_FIELD(pict_cmds[VDECFW_CMD_EXTENDED_ROW_STRIDE],
+ MSVDX_CMDS, EXTENDED_ROW_STRIDE,
+ EXT_ROW_STRIDE, y_stride >> 6, unsigned int, unsigned int);
+
+ REGIO_WRITE_FIELD(pict_cmds[VDECFW_CMD_CHROMA_ROW_STRIDE],
+ MSVDX_CMDS, CHROMA_ROW_STRIDE,
+ CHROMA_ROW_STRIDE, uv_stride >> 6, unsigned int, unsigned int);
+ } else {
+ row_stride_code = get_stride_code(str_configdata->vid_std, y_stride);
+
+ REGIO_WRITE_FIELD(pict_cmds[VDECFW_CMD_OPERATING_MODE],
+ MSVDX_CMDS, OPERATING_MODE, ROW_STRIDE,
+ row_stride_code & 0x7, unsigned int, unsigned int);
+
+ if (str_configdata->vid_std == VDEC_STD_JPEG) {
+ /*
+ * Use the unused chroma interleaved flag
+ * to hold MSB of row stride code
+ */
+ IMG_ASSERT(row_stride_code < 16);
+ REGIO_WRITE_FIELD(pict_cmds[VDECFW_CMD_OPERATING_MODE],
+ MSVDX_CMDS, OPERATING_MODE,
+ CHROMA_INTERLEAVED,
+ row_stride_code >> 3, unsigned int, unsigned int);
+ } else {
+ IMG_ASSERT(row_stride_code < 8);
+ }
+ }
+ pict_cmds[VDECFW_CMD_LUMA_RECONSTRUCTED_PICTURE_BASE_ADDRESS] =
+ (unsigned int)GET_HOST_ADDR(&buffers->recon_pict->pict_buf->ddbuf_info) +
+ buffers->recon_pict->rend_info.plane_info[0].offset;
+
+ pict_cmds[VDECFW_CMD_CHROMA_RECONSTRUCTED_PICTURE_BASE_ADDRESS] =
+ (unsigned int)GET_HOST_ADDR(&buffers->recon_pict->pict_buf->ddbuf_info) +
+ buffers->recon_pict->rend_info.plane_info[1].offset;
+
+ pict_cmds[VDECFW_CMD_CHROMA2_RECONSTRUCTED_PICTURE_BASE_ADDRESS] =
+ (unsigned int)GET_HOST_ADDR(&buffers->recon_pict->pict_buf->ddbuf_info) +
+ buffers->recon_pict->rend_info.plane_info[2].offset;
+
+ pict_cmds[VDECFW_CMD_LUMA_ERROR_PICTURE_BASE_ADDRESS] = 0;
+ pict_cmds[VDECFW_CMD_CHROMA_ERROR_PICTURE_BASE_ADDRESS] = 0;
+
+#ifdef ERROR_CONCEALMENT
+ /* update error concealment frame info if available */
+ if (buffers->err_pict_bufinfo) {
+ pict_cmds[VDECFW_CMD_LUMA_ERROR_PICTURE_BASE_ADDRESS] =
+ (unsigned int)GET_HOST_ADDR(buffers->err_pict_bufinfo) +
+ buffers->recon_pict->rend_info.plane_info[0].offset;
+
+ pict_cmds[VDECFW_CMD_CHROMA_ERROR_PICTURE_BASE_ADDRESS] =
+ (unsigned int)GET_HOST_ADDR(buffers->err_pict_bufinfo) +
+ buffers->recon_pict->rend_info.plane_info[1].offset;
+ }
+#endif
+
+ pict_cmds[VDECFW_CMD_INTRA_BUFFER_BASE_ADDRESS] =
+ (unsigned int)GET_HOST_ADDR(buffers->intra_bufinfo);
+ pict_cmds[VDECFW_CMD_INTRA_BUFFER_PLANE_SIZE] =
+ buffers->intra_bufsize_per_pipe / 3;
+ pict_cmds[VDECFW_CMD_INTRA_BUFFER_SIZE_PER_PIPE] =
+ buffers->intra_bufsize_per_pipe;
+ pict_cmds[VDECFW_CMD_AUX_LINE_BUFFER_BASE_ADDRESS] =
+ (unsigned int)GET_HOST_ADDR(buffers->auxline_bufinfo);
+ pict_cmds[VDECFW_CMD_AUX_LINE_BUFFER_SIZE_PER_PIPE] =
+ buffers->auxline_bufsize_per_pipe;
+
+	/*
+	 * For PVDEC we need to set these registers even if we don't
+	 * use the alternative output.
+	 */
+ REGIO_WRITE_FIELD(pict_cmds[VDECFW_CMD_ALTERNATIVE_OUTPUT_CONTROL],
+ MSVDX_CMDS, ALTERNATIVE_OUTPUT_CONTROL,
+ ALT_BIT_DEPTH_CHROMA_MINUS8,
+ output_config->pixel_info.bitdepth_c - 8, unsigned int, unsigned int);
+ REGIO_WRITE_FIELD(pict_cmds[VDECFW_CMD_ALTERNATIVE_OUTPUT_CONTROL],
+ MSVDX_CMDS, ALTERNATIVE_OUTPUT_CONTROL,
+ ALT_BIT_DEPTH_LUMA_MINUS8,
+ output_config->pixel_info.bitdepth_y - 8, unsigned int, unsigned int);
+
+	/*
+	 * This causes corruption in RV40 and VC1 streams with
+	 * scaling/rotation enabled on Coral, so set it to 0 for those
+	 * standards.
+	 */
+ benable_auxline_buf = benable_auxline_buf &&
+ (str_configdata->vid_std != VDEC_STD_REAL) &&
+ (str_configdata->vid_std != VDEC_STD_VC1);
+
+ REGIO_WRITE_FIELD(pict_cmds[VDECFW_CMD_ALTERNATIVE_OUTPUT_PICTURE_ROTATION],
+ MSVDX_CMDS, ALTERNATIVE_OUTPUT_PICTURE_ROTATION,
+ USE_AUX_LINE_BUF, benable_auxline_buf ? 1 : 0, unsigned int, int);
+}
+
+void vxd_set_altpictcmds(const struct vdecdd_str_unit *str_unit,
+ const struct vdec_str_configdata *str_configdata,
+ const struct vdec_str_opconfig *output_config,
+ const struct vxd_coreprops *coreprops,
+ const struct vxd_buffers *buffers,
+ unsigned int *pict_cmds)
+{
+ unsigned int row_stride_code;
+ unsigned int y_stride;
+ unsigned int uv_stride;
+ unsigned int v_stride;
+
+ y_stride = buffers->alt_pict->rend_info.plane_info[VDEC_PLANE_VIDEO_Y].stride;
+ uv_stride = buffers->alt_pict->rend_info.plane_info[VDEC_PLANE_VIDEO_UV].stride;
+ v_stride = buffers->alt_pict->rend_info.plane_info[VDEC_PLANE_VIDEO_V].stride;
+
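+	/*
+	 * As for the reconstructed picture, prefer the extended (64-byte
+	 * granule) rotation row stride when all plane strides are aligned;
+	 * otherwise use the row stride code.
+	 */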
+ if (((y_stride % (VDEC_VXD_EXT_STRIDE_ALIGNMENT_DEFAULT)) == 0) &&
+ ((uv_stride % (VDEC_VXD_EXT_STRIDE_ALIGNMENT_DEFAULT)) == 0) &&
+ ((v_stride % (VDEC_VXD_EXT_STRIDE_ALIGNMENT_DEFAULT)) == 0)) {
+ REGIO_WRITE_FIELD(pict_cmds[VDECFW_CMD_ALTERNATIVE_OUTPUT_PICTURE_ROTATION],
+ MSVDX_CMDS, ALTERNATIVE_OUTPUT_PICTURE_ROTATION,
+ USE_EXT_ROT_ROW_STRIDE, 1, unsigned int, int);
+
+ /* 64-byte (min) aligned luma stride value. */
+ REGIO_WRITE_FIELD(pict_cmds[VDECFW_CMD_ALTERNATIVE_OUTPUT_PICTURE_ROTATION],
+ MSVDX_CMDS,
+ ALTERNATIVE_OUTPUT_PICTURE_ROTATION,
+ EXT_ROT_ROW_STRIDE, y_stride >> 6,
+ unsigned int, unsigned int);
+
+ /* 64-byte (min) aligned chroma stride value. */
+ REGIO_WRITE_FIELD(pict_cmds[VDECFW_CMD_CHROMA_ROW_STRIDE],
+ MSVDX_CMDS, CHROMA_ROW_STRIDE,
+ ALT_CHROMA_ROW_STRIDE, uv_stride >> 6,
+ unsigned int, unsigned int);
+ } else {
+ /*
+ * Obtain the code for buffer stride
+ * (must be less than 8, i.e. not JPEG strides)
+ */
+ row_stride_code =
+ get_stride_code(str_configdata->vid_std, y_stride);
+
+ REGIO_WRITE_FIELD(pict_cmds[VDECFW_CMD_ALTERNATIVE_OUTPUT_PICTURE_ROTATION],
+ MSVDX_CMDS,
+ ALTERNATIVE_OUTPUT_PICTURE_ROTATION,
+ ROTATION_ROW_STRIDE, row_stride_code & 0x7,
+ unsigned int, unsigned int);
+ }
+
+ REGIO_WRITE_FIELD(pict_cmds[VDECFW_CMD_ALTERNATIVE_OUTPUT_PICTURE_ROTATION],
+ MSVDX_CMDS, ALTERNATIVE_OUTPUT_PICTURE_ROTATION,
+ SCALE_INPUT_SIZE_SEL,
+ ((output_config->pixel_info.chroma_fmt_idc !=
+ str_unit->seq_hdr_info->com_sequ_hdr_info.pixel_info.chroma_fmt_idc)) ?
+ 1 : 0, unsigned int, int);
+
+ REGIO_WRITE_FIELD(pict_cmds[VDECFW_CMD_ALTERNATIVE_OUTPUT_PICTURE_ROTATION],
+ MSVDX_CMDS, ALTERNATIVE_OUTPUT_PICTURE_ROTATION,
+ PACKED_422_OUTPUT,
+ (output_config->pixel_info.chroma_fmt_idc ==
+ PIXEL_FORMAT_422 &&
+ output_config->pixel_info.num_planes == 1) ? 1 : 0,
+ unsigned int, int);
+
+ REGIO_WRITE_FIELD(pict_cmds[VDECFW_CMD_ALTERNATIVE_OUTPUT_CONTROL],
+ MSVDX_CMDS, ALTERNATIVE_OUTPUT_CONTROL,
+ ALT_OUTPUT_FORMAT,
+ str_unit->seq_hdr_info->com_sequ_hdr_info.separate_chroma_planes ?
+ 0 : pixel_get_hw_chroma_format_idc
+ (output_config->pixel_info.chroma_fmt_idc),
+ unsigned int, int);
+
+ REGIO_WRITE_FIELD(pict_cmds[VDECFW_CMD_ALTERNATIVE_OUTPUT_CONTROL],
+ MSVDX_CMDS, ALTERNATIVE_OUTPUT_CONTROL,
+ ALT_BIT_DEPTH_CHROMA_MINUS8,
+ output_config->pixel_info.bitdepth_c - 8,
+ unsigned int, unsigned int);
+ REGIO_WRITE_FIELD(pict_cmds[VDECFW_CMD_ALTERNATIVE_OUTPUT_CONTROL],
+ MSVDX_CMDS, ALTERNATIVE_OUTPUT_CONTROL,
+ ALT_BIT_DEPTH_LUMA_MINUS8,
+ output_config->pixel_info.bitdepth_y - 8,
+ unsigned int, unsigned int);
+ REGIO_WRITE_FIELD(pict_cmds[VDECFW_CMD_ALTERNATIVE_OUTPUT_CONTROL],
+ MSVDX_CMDS, ALTERNATIVE_OUTPUT_CONTROL,
+ ALT_MEMORY_PACKING,
+ (output_config->pixel_info.mem_pkg ==
+ PIXEL_BIT10_MP) ? 1 : 0, unsigned int, int);
+
+ pict_cmds[VDECFW_CMD_LUMA_ALTERNATIVE_PICTURE_BASE_ADDRESS] =
+ (unsigned int)GET_HOST_ADDR(&buffers->alt_pict->pict_buf->ddbuf_info) +
+ buffers->alt_pict->rend_info.plane_info[0].offset;
+
+ pict_cmds[VDECFW_CMD_CHROMA_ALTERNATIVE_PICTURE_BASE_ADDRESS] =
+ (unsigned int)GET_HOST_ADDR(&buffers->alt_pict->pict_buf->ddbuf_info) +
+ buffers->alt_pict->rend_info.plane_info[1].offset;
+
+ pict_cmds[VDECFW_CMD_CHROMA2_ALTERNATIVE_PICTURE_BASE_ADDRESS] =
+ (unsigned int)GET_HOST_ADDR(&buffers->alt_pict->pict_buf->ddbuf_info) +
+ buffers->alt_pict->rend_info.plane_info[2].offset;
+}
+
+int vxd_getscalercmds(const struct scaler_config *scaler_config,
+ const struct scaler_pitch *pitch,
+ const struct scaler_filter *filter,
+ const struct pixel_pixinfo *out_loop_pixel_info,
+ struct scaler_params *params,
+ unsigned int *pict_cmds)
+{
+ const struct vxd_coreprops *coreprops = scaler_config->coreprops;
+	/*
+	 * Indirectly detect the decoder core type (if HEVC is supported, it
+	 * has to be a PVDEC core) and decide whether to force luma
+	 * re-sampling.
+	 */
+ unsigned char bforce_luma_resampling = coreprops->hevc[0];
+
+ REGIO_WRITE_FIELD(pict_cmds[VDECFW_CMD_ALTERNATIVE_OUTPUT_CONTROL],
+ MSVDX_CMDS, ALTERNATIVE_OUTPUT_CONTROL,
+ ALT_OUTPUT_FORMAT,
+ scaler_config->bseparate_chroma_planes ? 0 :
+ pixel_get_hw_chroma_format_idc(out_loop_pixel_info->chroma_fmt_idc),
+ unsigned int, int);
+
+ REGIO_WRITE_FIELD(pict_cmds[VDECFW_CMD_ALTERNATIVE_OUTPUT_CONTROL],
+ MSVDX_CMDS, ALTERNATIVE_OUTPUT_CONTROL,
+ SCALE_CHROMA_RESAMP_ONLY, bforce_luma_resampling ? 0 :
+ (pitch->horiz_luma == FIXED(1, HIGHP)) &&
+ (pitch->vert_luma == FIXED(1, HIGHP)), unsigned int, int);
+
+ REGIO_WRITE_FIELD(pict_cmds[VDECFW_CMD_ALTERNATIVE_OUTPUT_CONTROL],
+ MSVDX_CMDS, ALTERNATIVE_OUTPUT_CONTROL, ALT_MEMORY_PACKING,
+ pixel_get_hw_memory_packing(out_loop_pixel_info->mem_pkg),
+ unsigned int, int);
+
+ REGIO_WRITE_FIELD(pict_cmds[VDECFW_CMD_ALTERNATIVE_OUTPUT_CONTROL],
+ MSVDX_CMDS, ALTERNATIVE_OUTPUT_CONTROL,
+ ALT_BIT_DEPTH_LUMA_MINUS8,
+ out_loop_pixel_info->bitdepth_y - 8,
+ unsigned int, unsigned int);
+
+ REGIO_WRITE_FIELD(pict_cmds[VDECFW_CMD_ALTERNATIVE_OUTPUT_CONTROL],
+ MSVDX_CMDS, ALTERNATIVE_OUTPUT_CONTROL,
+ ALT_BIT_DEPTH_CHROMA_MINUS8,
+ out_loop_pixel_info->bitdepth_c - 8,
+ unsigned int, unsigned int);
+
+ /* Scale luma bifilter is always 0 for now */
+ REGIO_WRITE_FIELD(pict_cmds[VDECFW_CMD_ALTERNATIVE_OUTPUT_CONTROL],
+ MSVDX_CMDS, ALTERNATIVE_OUTPUT_CONTROL,
+ SCALE_LUMA_BIFILTER_HORIZ,
+ 0, unsigned int, int);
+
+ REGIO_WRITE_FIELD(pict_cmds[VDECFW_CMD_ALTERNATIVE_OUTPUT_CONTROL],
+ MSVDX_CMDS, ALTERNATIVE_OUTPUT_CONTROL,
+ SCALE_LUMA_BIFILTER_VERT,
+ 0, unsigned int, int);
+
+ REGIO_WRITE_FIELD(pict_cmds[VDECFW_CMD_ALTERNATIVE_OUTPUT_CONTROL],
+ MSVDX_CMDS, ALTERNATIVE_OUTPUT_CONTROL,
+ SCALE_CHROMA_BIFILTER_HORIZ,
+ filter->bhoriz_bilinear ? 1 : 0,
+ unsigned int, int);
+
+ REGIO_WRITE_FIELD(pict_cmds[VDECFW_CMD_ALTERNATIVE_OUTPUT_CONTROL],
+ MSVDX_CMDS, ALTERNATIVE_OUTPUT_CONTROL,
+ SCALE_CHROMA_BIFILTER_VERT,
+ filter->bvert_bilinear ? 1 : 0, unsigned int, int);
+
+ /* for cores 7.x.x and more, precision 3.13 */
+ params->fixed_point_shift = 13;
+
+ /* Calculate the fixed-point versions for use by the hardware. */
+ params->vert_pitch = (int)((pitch->vert_luma +
+ (1 << (HIGHP - params->fixed_point_shift - 1))) >>
+ (HIGHP - params->fixed_point_shift));
+ params->vert_startpos = params->vert_pitch >> 1;
+ params->vert_pitch_chroma = (int)((pitch->vert_chroma +
+ (1 << (HIGHP - params->fixed_point_shift - 1))) >>
+ (HIGHP - params->fixed_point_shift));
+ params->vert_startpos_chroma = params->vert_pitch_chroma >> 1;
+ params->horz_pitch = (int)(pitch->horiz_luma >>
+ (HIGHP - params->fixed_point_shift));
+ params->horz_startpos = params->horz_pitch >> 1;
+ params->horz_pitch_chroma = (int)(pitch->horiz_chroma >>
+ (HIGHP - params->fixed_point_shift));
+ params->horz_startpos_chroma = params->horz_pitch_chroma >> 1;
+
+#ifdef HAS_HEVC
+ if (scaler_config->vidstd == VDEC_STD_HEVC) {
+ REGIO_WRITE_FIELD(pict_cmds[VDECFW_CMD_SCALED_DISPLAY_SIZE],
+ MSVDX_CMDS, PVDEC_SCALED_DISPLAY_SIZE,
+ PVDEC_SCALE_DISPLAY_WIDTH,
+ scaler_config->recon_width - 1,
+ unsigned int, unsigned int);
+ REGIO_WRITE_FIELD(pict_cmds[VDECFW_CMD_SCALED_DISPLAY_SIZE],
+ MSVDX_CMDS, PVDEC_SCALED_DISPLAY_SIZE,
+ PVDEC_SCALE_DISPLAY_HEIGHT,
+ scaler_config->recon_height - 1,
+ unsigned int, unsigned int);
+ } else {
+ REGIO_WRITE_FIELD(pict_cmds[VDECFW_CMD_SCALED_DISPLAY_SIZE],
+ MSVDX_CMDS, SCALED_DISPLAY_SIZE,
+ SCALE_DISPLAY_WIDTH,
+ scaler_config->recon_width - 1,
+ unsigned int, unsigned int);
+ REGIO_WRITE_FIELD(pict_cmds[VDECFW_CMD_SCALED_DISPLAY_SIZE],
+ MSVDX_CMDS, SCALED_DISPLAY_SIZE,
+ SCALE_DISPLAY_HEIGHT,
+ scaler_config->recon_height - 1,
+ unsigned int, unsigned int);
+ }
+#else
+ REGIO_WRITE_FIELD(pict_cmds[VDECFW_CMD_SCALED_DISPLAY_SIZE],
+ MSVDX_CMDS, SCALED_DISPLAY_SIZE,
+ SCALE_DISPLAY_WIDTH,
+ scaler_config->recon_width - 1,
+ unsigned int, unsigned int);
+ REGIO_WRITE_FIELD(pict_cmds[VDECFW_CMD_SCALED_DISPLAY_SIZE],
+ MSVDX_CMDS, SCALED_DISPLAY_SIZE, SCALE_DISPLAY_HEIGHT,
+ scaler_config->recon_height - 1,
+ unsigned int, unsigned int);
+#endif
+
+ REGIO_WRITE_FIELD(pict_cmds[VDECFW_CMD_SCALE_OUTPUT_SIZE],
+ MSVDX_CMDS, SCALE_OUTPUT_SIZE,
+ SCALE_OUTPUT_WIDTH_MIN1,
+ scaler_config->scale_width - 1,
+ unsigned int, unsigned int);
+ REGIO_WRITE_FIELD(pict_cmds[VDECFW_CMD_SCALE_OUTPUT_SIZE],
+ MSVDX_CMDS, SCALE_OUTPUT_SIZE,
+ SCALE_OUTPUT_HEIGHT_MIN1,
+ scaler_config->scale_height - 1,
+ unsigned int, unsigned int);
+
+ REGIO_WRITE_FIELD(pict_cmds[VDECFW_CMD_HORIZONTAL_SCALE_CONTROL],
+ MSVDX_CMDS, HORIZONTAL_SCALE_CONTROL,
+ HORIZONTAL_SCALE_PITCH, params->horz_pitch,
+ unsigned int, unsigned int);
+ REGIO_WRITE_FIELD(pict_cmds[VDECFW_CMD_HORIZONTAL_SCALE_CONTROL],
+ MSVDX_CMDS, HORIZONTAL_SCALE_CONTROL,
+ HORIZONTAL_INITIAL_POS, params->horz_startpos,
+ unsigned int, unsigned int);
+
+ REGIO_WRITE_FIELD(pict_cmds[VDECFW_CMD_SCALE_HORIZONTAL_CHROMA],
+ MSVDX_CMDS, SCALE_HORIZONTAL_CHROMA,
+ CHROMA_HORIZONTAL_PITCH, params->horz_pitch_chroma,
+ unsigned int, unsigned int);
+ REGIO_WRITE_FIELD(pict_cmds[VDECFW_CMD_SCALE_HORIZONTAL_CHROMA],
+ MSVDX_CMDS, SCALE_HORIZONTAL_CHROMA,
+ CHROMA_HORIZONTAL_INITIAL,
+ params->horz_startpos_chroma,
+ unsigned int, unsigned int);
+
+ REGIO_WRITE_FIELD(pict_cmds[VDECFW_CMD_VERTICAL_SCALE_CONTROL],
+ MSVDX_CMDS, VERTICAL_SCALE_CONTROL,
+ VERTICAL_SCALE_PITCH, params->vert_pitch,
+ unsigned int, unsigned int);
+ REGIO_WRITE_FIELD(pict_cmds[VDECFW_CMD_VERTICAL_SCALE_CONTROL],
+ MSVDX_CMDS, VERTICAL_SCALE_CONTROL,
+ VERTICAL_INITIAL_POS, params->vert_startpos,
+ unsigned int, unsigned int);
+
+ REGIO_WRITE_FIELD(pict_cmds[VDECFW_CMD_SCALE_VERTICAL_CHROMA],
+ MSVDX_CMDS, SCALE_VERTICAL_CHROMA,
+ CHROMA_VERTICAL_PITCH, params->vert_pitch_chroma,
+ unsigned int, unsigned int);
+ REGIO_WRITE_FIELD(pict_cmds[VDECFW_CMD_SCALE_VERTICAL_CHROMA],
+ MSVDX_CMDS, SCALE_VERTICAL_CHROMA,
+ CHROMA_VERTICAL_INITIAL,
+ params->vert_startpos_chroma,
+ unsigned int, unsigned int);
+ return 0;
+}
+
+unsigned int vxd_get_codedpicsize(unsigned short width_min1, unsigned short height_min1)
+{
+ unsigned int reg = 0;
+
+ REGIO_WRITE_FIELD_LITE(reg, MSVDX_CMDS, CODED_PICTURE_SIZE,
+ CODED_PICTURE_WIDTH, width_min1,
+ unsigned short);
+ REGIO_WRITE_FIELD_LITE(reg, MSVDX_CMDS, CODED_PICTURE_SIZE,
+ CODED_PICTURE_HEIGHT, height_min1,
+ unsigned short);
+
+ return reg;
+}
+
+unsigned char vxd_get_codedmode(enum vdec_vid_std vidstd)
+{
+ return (unsigned char)amsvdx_codecmode[vidstd];
+}
+
+void vxd_get_coreproperties(void *hndl_coreproperties,
+ struct vxd_coreprops *vxd_coreprops)
+{
+ struct vxd_core_props *props =
+ (struct vxd_core_props *)hndl_coreproperties;
+
+ vxd_getcoreproperties(vxd_coreprops, props->core_rev,
+ props->pvdec_core_id,
+ props->mmu_config0,
+ props->mmu_config1,
+ props->pixel_pipe_cfg,
+ props->pixel_misc_cfg,
+ props->pixel_max_frame_cfg);
+}
+
+int vxd_get_pictattrs(unsigned int flags, struct vxd_pict_attrs *pict_attrs)
+{
+ if (flags & (VXD_FW_MSG_FLAG_DWR | VXD_FW_MSG_FLAG_FATAL))
+ pict_attrs->dwrfired = 1;
+ if (flags & VXD_FW_MSG_FLAG_MMU_FAULT)
+ pict_attrs->mmufault = 1;
+ if (flags & VXD_FW_MSG_FLAG_DEV_ERR)
+ pict_attrs->deverror = 1;
+
+ return 0;
+}
+
+int vxd_get_msgerrattr(unsigned int flags, enum vxd_msg_attr *msg_attr)
+{
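+	/* Any error flag other than 'canceled' is treated as fatal. */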
+ if ((flags & ~VXD_FW_MSG_FLAG_CANCELED))
+ *msg_attr = VXD_MSG_ATTR_FATAL;
+ else if ((flags & VXD_FW_MSG_FLAG_CANCELED))
+ *msg_attr = VXD_MSG_ATTR_CANCELED;
+ else
+ *msg_attr = VXD_MSG_ATTR_NONE;
+
+ return 0;
+}
+
+int vxd_set_msgflag(enum vxd_msg_flag input_flag, unsigned int *flags)
+{
+ switch (input_flag) {
+ case VXD_MSG_FLAG_DROP:
+ *flags |= VXD_FW_MSG_FLAG_DROP;
+ break;
+ case VXD_MSG_FLAG_EXCL:
+ *flags |= VXD_FW_MSG_FLAG_EXCL;
+ break;
+ default:
+ return IMG_ERROR_FATAL;
+ }
+
+ return 0;
+}
diff --git a/drivers/staging/media/vxd/decoder/vxd_int.h b/drivers/staging/media/vxd/decoder/vxd_int.h
new file mode 100644
index 000000000000..a294e0d6044f
--- /dev/null
+++ b/drivers/staging/media/vxd/decoder/vxd_int.h
@@ -0,0 +1,128 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * VXD DEC Common low level core interface component
+ *
+ * Copyright (c) Imagination Technologies Ltd.
+ * Copyright (c) 2021 Texas Instruments Incorporated - http://www.ti.com/
+ *
+ * Authors:
+ * Amit Makani <[email protected]>
+ *
+ * Re-written for upstreaming
+ * Sidraya Jayagond <[email protected]>
+ * Prashanth Kumar Amai <[email protected]>
+ */
+#ifndef _VXD_INT_H
+#define _VXD_INT_H
+
+#include "fw_interface.h"
+#include "scaler_setup.h"
+#include "vdecdd_defs.h"
+#include "vdecfw_shared.h"
+#include "vdec_defs.h"
+#include "vxd_ext.h"
+#include "vxd_props.h"
+
+/*
+ * Size of buffer used for batching messages
+ */
+#define BATCH_MSG_BUFFER_SIZE (8 * 4096)
+
+#define INTRA_BUF_SIZE (1024 * 32)
+#define AUX_LINE_BUFFER_SIZE (512 * 1024)
+
+#define MAX_PICTURE_WIDTH (4096)
+#define MAX_PICTURE_HEIGHT (4096)
+
+/*
+ * This macro returns the host address of a device buffer.
+ */
+#define GET_HOST_ADDR(buf) ((buf)->dev_virt)
+
+#define GET_HOST_ADDR_OFFSET(buf, offset) (((buf)->dev_virt) + (offset))
+
+/*
+ * The extended stride alignment for VXD.
+ */
+#define VDEC_VXD_EXT_STRIDE_ALIGNMENT_DEFAULT (64)
+
+struct vxd_buffers {
+ struct vdecdd_ddpict_buf *recon_pict;
+ struct vdecdd_ddpict_buf *alt_pict;
+ struct vidio_ddbufinfo *intra_bufinfo;
+ struct vidio_ddbufinfo *auxline_bufinfo;
+ struct vidio_ddbufinfo *err_pict_bufinfo;
+ unsigned int intra_bufsize_per_pipe;
+ unsigned int auxline_bufsize_per_pipe;
+ struct vidio_ddbufinfo *msb_bufinfo;
+ unsigned char btwopass;
+};
+
+struct pvdec_core_rev {
+ unsigned int maj_rev;
+ unsigned int min_rev;
+ unsigned int maint_rev;
+ unsigned int int_rev;
+};
+
+/*
+ * this has all that it needs to translate a Stream Unit for a picture
+ * into a transaction.
+ */
+void vxd_set_altpictcmds(const struct vdecdd_str_unit *str_unit,
+ const struct vdec_str_configdata *str_configdata,
+ const struct vdec_str_opconfig *output_config,
+ const struct vxd_coreprops *coreprops,
+ const struct vxd_buffers *buffers,
+ unsigned int *pict_cmds);
+
+/*
+ * this has all that it needs to translate a Stream Unit for
+ * a picture into a transaction.
+ */
+void vxd_set_reconpictcmds(const struct vdecdd_str_unit *str_unit,
+ const struct vdec_str_configdata *str_configdata,
+ const struct vdec_str_opconfig *output_config,
+ const struct vxd_coreprops *coreprops,
+ const struct vxd_buffers *buffers,
+ unsigned int *pict_cmds);
+
+int vxd_getscalercmds(const struct scaler_config *scaler_config,
+ const struct scaler_pitch *pitch,
+ const struct scaler_filter *filter,
+ const struct pixel_pixinfo *out_loop_pixel_info,
+ struct scaler_params *params,
+ unsigned int *pict_cmds);
+
+/*
+ * This creates the value of the MSVDX_CMDS_CODED_PICTURE_SIZE register.
+ */
+unsigned int vxd_get_codedpicsize(unsigned short width_min1, unsigned short height_min1);
+
+/*
+ * returns the HW codec mode based on the video standard.
+ */
+unsigned char vxd_get_codedmode(enum vdec_vid_std vidstd);
+
+/*
+ * translates core properties into the struct vxd_coreprops form.
+ */
+void vxd_get_coreproperties(void *hndl_coreproperties,
+ struct vxd_coreprops *vxd_coreprops);
+
+/*
+ * translates picture attributes into the struct vxd_pict_attrs form.
+ */
+int vxd_get_pictattrs(unsigned int flags, struct vxd_pict_attrs *pict_attrs);
+
+/*
+ * translates message attributes into the enum vxd_msg_attr form.
+ */
+int vxd_get_msgerrattr(unsigned int flags, enum vxd_msg_attr *msg_attr);
+
+/*
+ * sets a message flag.
+ */
+int vxd_set_msgflag(enum vxd_msg_flag input_flag, unsigned int *flags);
+
+#endif /* _VXD_INT_H */
--
2.17.1
From: Sidraya <[email protected]>
This patch adds the control allocation buffer for the firmware.
It gets the data from the decoder module and sends it to the firmware
through the hardware control module.
It prepares all the standard headers, DMA transfer commands and
VLC table information, and sends them to the firmware.
Signed-off-by: Amit Makani <[email protected]>
Signed-off-by: Sidraya <[email protected]>
---
MAINTAINERS | 2 +
.../media/vxd/decoder/translation_api.c | 1725 +++++++++++++++++
.../media/vxd/decoder/translation_api.h | 42 +
3 files changed, 1769 insertions(+)
create mode 100644 drivers/staging/media/vxd/decoder/translation_api.c
create mode 100644 drivers/staging/media/vxd/decoder/translation_api.h
diff --git a/MAINTAINERS b/MAINTAINERS
index 7b21ebfc61d4..538faa644d13 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -19545,6 +19545,8 @@ F: drivers/staging/media/vxd/common/imgmmu.h
F: drivers/staging/media/vxd/decoder/hw_control.c
F: drivers/staging/media/vxd/decoder/hw_control.h
F: drivers/staging/media/vxd/decoder/img_dec_common.h
+F: drivers/staging/media/vxd/decoder/translation_api.c
+F: drivers/staging/media/vxd/decoder/translation_api.h
F: drivers/staging/media/vxd/decoder/vxd_core.c
F: drivers/staging/media/vxd/decoder/vxd_dec.c
F: drivers/staging/media/vxd/decoder/vxd_dec.h
diff --git a/drivers/staging/media/vxd/decoder/translation_api.c b/drivers/staging/media/vxd/decoder/translation_api.c
new file mode 100644
index 000000000000..af8924bb5173
--- /dev/null
+++ b/drivers/staging/media/vxd/decoder/translation_api.c
@@ -0,0 +1,1725 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * VDECDD translation APIs.
+ *
+ * Copyright (c) Imagination Technologies Ltd.
+ * Copyright (c) 2021 Texas Instruments Incorporated - http://www.ti.com/
+ *
+ * Authors:
+ * Amit Makani <[email protected]>
+ *
+ * Re-written for upstreaming
+ * Sidraya Jayagond <[email protected]>
+ * Prashanth Kumar Amai <[email protected]>
+ */
+
+/* As of now we are defining HAS_H264 */
+#define HAS_H264
+#define VDEC_USE_PVDEC
+
+#include <linux/types.h>
+#include <linux/dma-mapping.h>
+#include <media/v4l2-ctrls.h>
+#include <media/v4l2-device.h>
+#include <media/v4l2-mem2mem.h>
+
+#include "fw_interface.h"
+#ifdef HAS_H264
+#include "h264fw_data.h"
+#endif /* HAS_H264 */
+#include "hw_control.h"
+#include "img_errors.h"
+#include "img_msvdx_cmds.h"
+#include "img_msvdx_vec_regs.h"
+#ifdef VDEC_USE_PVDEC
+#include "pvdec_int.h"
+#include "img_pvdec_core_regs.h"
+#endif
+#include "img_video_bus4_mmu_regs.h"
+#include "lst.h"
+#include "reg_io2.h"
+#include "rman_api.h"
+#include "translation_api.h"
+#include "vdecdd_defs.h"
+#include "vdecdd_utils.h"
+#include "vdecfw_share.h"
+#include "vxd_int.h"
+#include "vxd_props.h"
+
+#ifdef HAS_HEVC
+#include "hevcfw_data.h"
+#include "pvdec_entropy_regs.h"
+#include "pvdec_vec_be_regs.h"
+#endif
+
+#ifdef HAS_JPEG
+#include "jpegfw_data.h"
+#endif /* HAS_JPEG */
+
+#define NO_VALUE 0
+
+/*
+ * Discontinuity in layout of VEC_VLC_TABLE* registers.
+ * Address of VEC_VLC_TABLE_ADDR16 does not immediately follow
+ * VEC_VLC_TABLE_ADDR15, see TRM.
+ */
+#define VEC_VLC_TABLE_ADDR_PT1_SIZE 16 /* in 32-bit words */
+#define VEC_VLC_TABLE_ADDR_DISCONT (VEC_VLC_TABLE_ADDR_PT1_SIZE * \
+ PVDECIO_VLC_IDX_ADDR_PARTS)
+
+/*
+ * This can now be done by VXD_GetCodecMode.
+ * The standard is implied from OperatingMode.
+ * As of now only H264 is supported through this file.
+ */
+#define CODEC_MODE_JPEG 0x0
+#define CODEC_MODE_H264 0x1
+#define CODEC_MODE_REAL8 0x8
+#define CODEC_MODE_REAL9 0x9
+
+/*
+ * This enum defines values of ENTDEC_BE_MODE field of VEC_ENTDEC_BE_CONTROL
+ * register and ENTDEC_FE_MODE field of VEC_ENTDEC_FE_CONTROL register.
+ */
+enum decode_mode {
+ /* JPEG */
+ VDEC_ENTDEC_MODE_JPEG = 0x0,
+ /* H264 (MPEG4/AVC) */
+ VDEC_ENTDEC_MODE_H264 = 0x1,
+ VDEC_ENTDEC_MODE_FORCE32BITS = 0x7FFFFFFFU
+};
+
+/*
+ * This has all that it needs to translate a Stream Unit for a picture into a
+ * transaction.
+ */
+static int translation_set_buffer(struct vdecdd_ddpict_buf *picbuf,
+ struct vdecfw_image_buffer *image_buffer)
+{
+ unsigned int i;
+
+ for (i = 0; i < VDEC_PLANE_MAX; i++) {
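+	/*
+	 * Record the device virtual address of every picture plane in the
+	 * firmware image buffer structure.
+	 */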
+ image_buffer->byte_offset[i] =
+ (unsigned int)GET_HOST_ADDR(&picbuf->pict_buf->ddbuf_info) +
+ picbuf->rend_info.plane_info[i].offset;
+ pr_debug("%s image_buffer->byte_offset[%d] = 0x%x\n",
+ __func__, i, image_buffer->byte_offset[i]);
+ }
+ return IMG_SUCCESS;
+}
+
+#ifdef HAS_HEVC
+/*
+ * @Function translation_hevc_header
+ */
+static int translation_hevc_header(struct vdecdd_picture *picture,
+ struct dec_decpict *dec_pict,
+ struct hevcfw_headerdata *header_data)
+{
+ translation_set_buffer(dec_pict->recon_pict, &header_data->primary);
+
+ if (dec_pict->alt_pict)
+ translation_set_buffer(dec_pict->alt_pict, &header_data->alternate);
+
+ VDEC_ASSERT(picture);
+ VDEC_ASSERT(picture->pict_res_int);
+ VDEC_ASSERT(picture->pict_res_int->mb_param_buf);
+ header_data->temporal_outaddr = (unsigned int)GET_HOST_ADDR
+ (&picture->pict_res_int->mb_param_buf->ddbuf_info);
+
+ return IMG_SUCCESS;
+}
+#endif
+
+#ifdef HAS_H264
+static int translation_h264header(struct vdecdd_picture *pspicture,
+ struct dec_decpict *dec_pict,
+ struct h264fw_header_data *psheaderdata,
+ struct vdec_str_configdata *psstrconfigdata)
+{
+ psheaderdata->two_pass_flag = dec_pict->pict_hdr_info->discontinuous_mbs;
+ psheaderdata->disable_mvc = psstrconfigdata->disable_mvc;
+
+	/*
+	 * As of now the mb params base address is commented out as it is not
+	 * used; if needed in future, please uncomment it and make the
+	 * allocation for pict_res_int.
+	 */
+ /* Obtain the MB parameter address from the stream unit. */
+ if (pspicture->pict_res_int->mb_param_buf) {
+ psheaderdata->mbparams_base_address =
+ (unsigned int)GET_HOST_ADDR(&pspicture->pict_res_int->mb_param_buf->ddbuf_info);
+ psheaderdata->mbparams_size_per_plane =
+ pspicture->pict_res_int->mb_param_buf->ddbuf_info.buf_size / 3;
+ } else {
+ psheaderdata->mbparams_base_address = 0;
+ psheaderdata->mbparams_size_per_plane = 0;
+ }
+ psheaderdata->slicegroupmap_base_address =
+ (unsigned int)GET_HOST_ADDR(&dec_pict->cur_pict_dec_res->h264_sgm_buf);
+
+ translation_set_buffer(dec_pict->recon_pict, &psheaderdata->primary);
+
+ if (dec_pict->alt_pict)
+ translation_set_buffer(dec_pict->alt_pict, &psheaderdata->alternate);
+
+ /* Signal whether we have PPS for the second field. */
+ if (pspicture->dec_pict_aux_info.second_pps_id == BSPP_INVALID)
+ psheaderdata->second_pps = 0;
+ else
+ psheaderdata->second_pps = 1;
+
+ return IMG_SUCCESS;
+}
+#endif /* HAS_H264 */
+
+#ifdef HAS_JPEG
+
+static int translation_jpegheader(const struct bspp_sequ_hdr_info *seq,
+ const struct dec_decpict *dec_pict,
+ const struct bspp_pict_hdr_info *pict_hdrinfo,
+ struct jpegfw_header_data *header_data)
+{
+ unsigned int i;
+
+ /* Output picture planes addresses */
+ for (i = 0; i < seq->com_sequ_hdr_info.pixel_info.num_planes; i++) {
+ header_data->plane_offsets[i] =
+ (unsigned int)GET_HOST_ADDR(&dec_pict->recon_pict->pict_buf->ddbuf_info) +
+ dec_pict->recon_pict->rend_info.plane_info[i].offset;
+ }
+
+ /* copy the expected SOS fields number */
+ header_data->hdr_sos_count = pict_hdrinfo->sos_count;
+
+ translation_set_buffer(dec_pict->recon_pict, &header_data->primary);
+
+ return IMG_SUCCESS;
+}
+#endif /* HAS_JPEG */
+/*
+ * This function translates the host video standard enum (vdec_vid_std) into
+ * the firmware codec enum (vdecfw_codectype).
+ */
+static int translation_get_codec(enum vdec_vid_std evidstd,
+ enum vdecfw_codectype *pecodec)
+{
+ enum vdecfw_codectype ecodec = VDEC_CODEC_NONE;
+ unsigned int result = IMG_ERROR_NOT_SUPPORTED;
+
+ /* Translate from video standard to firmware codec. */
+ switch (evidstd) {
+ #ifdef HAS_H264
+ case VDEC_STD_H264:
+ ecodec = VDECFW_CODEC_H264;
+ result = IMG_SUCCESS;
+ break;
+ #endif /* HAS_H264 */
+#ifdef HAS_HEVC
+ case VDEC_STD_HEVC:
+ ecodec = VDECFW_CODEC_HEVC;
+ result = IMG_SUCCESS;
+ break;
+#endif /* HAS_HEVC */
+#ifdef HAS_JPEG
+ case VDEC_STD_JPEG:
+ ecodec = VDECFW_CODEC_JPEG;
+ result = IMG_SUCCESS;
+ break;
+#endif
+ default:
+ result = IMG_ERROR_NOT_SUPPORTED;
+ break;
+ }
+ *pecodec = ecodec;
+ return result;
+}
+
+/*
+ * This function is used to obtain buffer for sequence header.
+ */
+static int translation_get_seqhdr(struct vdecdd_str_unit *psstrunit,
+ struct dec_decpict *psdecpict,
+ unsigned int *puipseqaddr)
+{
+	/*
+	 * Send sequence info only if it is the first picture of a sequence,
+	 * or the start of a closed GOP.
+	 */
+ if (psstrunit->pict_hdr_info->first_pic_of_sequence || psstrunit->closed_gop) {
+ struct vdecdd_ddbuf_mapinfo *ddbuf_map_info;
+ /* Get access to map info context */
+ int result = rman_get_resource(psstrunit->seq_hdr_info->bufmap_id,
+ VDECDD_BUFMAP_TYPE_ID,
+ (void **)&ddbuf_map_info, NULL);
+ VDEC_ASSERT(result == IMG_SUCCESS);
+ if (result != IMG_SUCCESS)
+ return result;
+
+ *puipseqaddr = GET_HOST_ADDR_OFFSET(&ddbuf_map_info->ddbuf_info,
+ psstrunit->seq_hdr_info->buf_offset);
+ } else {
+ *puipseqaddr = 0;
+ }
+ return IMG_SUCCESS;
+}
+
+/*
+ * This function is used to obtain buffer for picture parameter set.
+ */
+static int translation_get_ppshdr(struct vdecdd_str_unit *psstrunit,
+ struct dec_decpict *psdecpict,
+ unsigned int *puipppsaddr)
+{
+ if (psstrunit->pict_hdr_info->pict_aux_data.id != BSPP_INVALID) {
+ struct vdecdd_ddbuf_mapinfo *ddbuf_map_info;
+ int result;
+
+ VDEC_ASSERT(psstrunit->pict_hdr_info->pict_aux_data.pic_data);
+ /* Get access to map info context */
+ result = rman_get_resource(psstrunit->pict_hdr_info->pict_aux_data.bufmap_id,
+ VDECDD_BUFMAP_TYPE_ID,
+ (void **)&ddbuf_map_info, NULL);
+ VDEC_ASSERT(result == IMG_SUCCESS);
+
+ if (result != IMG_SUCCESS)
+ return result;
+ *puipppsaddr =
+ GET_HOST_ADDR_OFFSET(&ddbuf_map_info->ddbuf_info,
+ psstrunit->pict_hdr_info->pict_aux_data.buf_offset);
+ } else {
+ *puipppsaddr = 0;
+ }
+ return IMG_SUCCESS;
+}
+
+/*
+ * This function is used to obtain buffer for second picture parameter set.
+ */
+static int translation_getsecond_ppshdr(struct vdecdd_str_unit *psstrunit,
+ unsigned int *puisecond_ppshdr)
+{
+ if (psstrunit->pict_hdr_info->second_pict_aux_data.id !=
+ BSPP_INVALID) {
+ struct vdecdd_ddbuf_mapinfo *ddbuf_map_info;
+ int result;
+ void *pic_data =
+ psstrunit->pict_hdr_info->second_pict_aux_data.pic_data;
+
+ VDEC_ASSERT(pic_data);
+ result = rman_get_resource(psstrunit->pict_hdr_info->second_pict_aux_data.bufmap_id,
+ VDECDD_BUFMAP_TYPE_ID,
+ (void **)&ddbuf_map_info, NULL);
+ VDEC_ASSERT(result == IMG_SUCCESS);
+
+ if (result != IMG_SUCCESS)
+ return result;
+
+ *puisecond_ppshdr =
+ GET_HOST_ADDR_OFFSET
+ (&ddbuf_map_info->ddbuf_info,
+ psstrunit->pict_hdr_info->second_pict_aux_data.buf_offset);
+ } else {
+ *puisecond_ppshdr = 0;
+ }
+ return IMG_SUCCESS;
+}
+
+/*
+ * Returns address from which FW should download its shared context.
+ */
+static unsigned int translation_getctx_loadaddr(struct dec_decpict *psdecpict)
+{
+ if (psdecpict->prev_pict_dec_res)
+ return GET_HOST_ADDR(&psdecpict->prev_pict_dec_res->fw_ctx_buf);
+
+	/*
+	 * No previous context exists; using the current context leads to
+	 * problems on replay, so just tell the FW to use a clean one.
+	 * Return NULL as an integer to avoid pointer size warnings due
+	 * to type casting.
+	 */
+ return 0;
+}
+
+static void translation_setup_std_header
+ (struct vdec_str_configdata *str_configdata,
+ struct dec_decpict *dec_pict,
+ struct vdecdd_str_unit *str_unit, unsigned int *psr_hdrsize,
+ struct vdecdd_picture *picture, unsigned int *picture_cmds,
+ enum vdecfw_parsermode *parser_mode)
+{
+ switch (str_configdata->vid_std) {
+#ifdef HAS_H264
+ case VDEC_STD_H264:
+ {
+ struct h264fw_header_data *header_data =
+ (struct h264fw_header_data *)
+ dec_pict->hdr_info->ddbuf_info->cpu_virt;
+ *parser_mode = str_unit->pict_hdr_info->parser_mode;
+
+		if (str_unit->pict_hdr_info->parser_mode !=
+			VDECFW_SCP_ONLY) {
+			pr_warn("Only VDECFW_SCP_ONLY parser mode is supported by PVDEC FW\n");
+		}
+ /* Reset header data. */
+ memset(header_data, 0, sizeof(*(header_data)));
+
+ /* Prepare active parameter sets. */
+ translation_h264header(picture, dec_pict, header_data, str_configdata);
+
+ /* Setup header size in the transaction. */
+ *psr_hdrsize = sizeof(struct h264fw_header_data);
+ break;
+ }
+#endif /* HAS_H264 */
+
+#ifdef HAS_HEVC
+ case VDEC_STD_HEVC:
+ {
+ struct hevcfw_headerdata *header_data =
+ (struct hevcfw_headerdata *)dec_pict->hdr_info->ddbuf_info->cpu_virt;
+ *parser_mode = str_unit->pict_hdr_info->parser_mode;
+
+ /* Reset header data. */
+ memset(header_data, 0, sizeof(*header_data));
+
+ /* Prepare active parameter sets. */
+ translation_hevc_header(picture, dec_pict, header_data);
+
+ /* Setup header size in the transaction. */
+ *psr_hdrsize = sizeof(struct hevcfw_headerdata);
+ break;
+ }
+#endif
+#ifdef HAS_JPEG
+ case VDEC_STD_JPEG:
+ {
+ struct jpegfw_header_data *header_data =
+ (struct jpegfw_header_data *)dec_pict->hdr_info->ddbuf_info->cpu_virt;
+ const struct bspp_sequ_hdr_info *seq = str_unit->seq_hdr_info;
+ const struct bspp_pict_hdr_info *pict_hdr_info = str_unit->pict_hdr_info;
+
+ /* Reset header data. */
+ memset(header_data, 0, sizeof(*(header_data)));
+
+ /* Prepare active parameter sets. */
+ translation_jpegheader(seq, dec_pict, pict_hdr_info, header_data);
+
+ /* Setup header size in the transaction. */
+ *psr_hdrsize = sizeof(struct jpegfw_header_data);
+ break;
+ }
+#endif
+ default:
+ VDEC_ASSERT(NULL == "Unknown standard!");
+ *psr_hdrsize = 0;
+ break;
+ }
+}
+
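+/* DEVA DMA command sizes, in 32-bit words. */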
+#define VDEC_INITIAL_DEVA_DMA_CMD_SIZE 3
+#define VDEC_SINLGE_DEVA_DMA_CMD_SIZE 2
+
+#ifdef VDEC_USE_PVDEC
+/*
+ * Creates the DEVA bitstream segments command and saves it to the control
+ * allocation buffer.
+ */
+static int translation_pvdec_adddma_transfers
+ (struct lst_t *decpic_seglist, unsigned int **dma_cmdbuf,
+ int cmd_bufsize, struct dec_decpict *psdecpict, int eop)
+{
+ /*
+	 * DEVA's bitstream DMA command is made out of chunks with the
+	 * following layout (the '+' sign marks actual words in the command):
+ *
+ * + Bitstream HDR, type unsigned int, consists of:
+ * - command id (CMD_BITSTREAM_SEGMENTS),
+ * - number of segments in this chunk,
+ * - optional CMD_BITSTREAM_SEGMENTS_MORE_FOLLOW_MASK
+ *
+ * + Bitstream total size, type unsigned int,
+ * represents size of all segments in all chunks
+ *
+	 * Segments of the following type (can repeat up to
+	 * CMD_BITSTREAM_SEGMENTS_MINUS1_MASK + 1 times):
+ *
+ * + Bitstream segment address, type unsigned int
+ *
+ * + Bitstream segment size, type unsigned int
+ *
+ * Subsequent chunks are present when
+ * CMD_BITSTREAM_SEGMENTS_MORE_FOLLOW_MASK flag is set in Bitstream HDR.
+ */
+ struct dec_decpict_seg *dec_picseg = (struct dec_decpict_seg *)lst_first(decpic_seglist);
+ unsigned int *cmd = *dma_cmdbuf;
+ unsigned int *dma_hdr = cmd;
+ unsigned int segcount = 0;
+ unsigned int bitstream_size = 0;
+
+	/*
+	 * Two words for the DMA command header (set up later, once we know
+	 * the count of BS segments).
+	 */
+ cmd += CMD_BITSTREAM_HDR_DW_SIZE;
+ cmd_bufsize -= CMD_BITSTREAM_HDR_DW_SIZE;
+ if (cmd_bufsize < 0) {
+ pr_err("Buffer for DMA command too small.\n");
+ return IMG_ERROR_INVALID_PARAMETERS;
+ }
+
+ if (!dec_picseg) {
+		/* No segments to be sent to FW: prepare a fake one */
+ cmd_bufsize -= VDEC_SINLGE_DEVA_DMA_CMD_SIZE;
+ if (cmd_bufsize < 0) {
+ pr_err("Buffer for DMA command too small.\n");
+ return IMG_ERROR_INVALID_PARAMETERS;
+ }
+ segcount++;
+
+ /* zeroing bitstream size and bitstream offset */
+ *(cmd++) = 0;
+ *(cmd++) = 0;
+ }
+
+ /* Loop through all bitstream segments */
+ while (dec_picseg) {
+ if (dec_picseg->bstr_seg && (dec_picseg->bstr_seg->bstr_seg_flag
+ & VDECDD_BSSEG_SKIP) == 0) {
+ unsigned int result;
+ struct vdecdd_ddbuf_mapinfo *ddbuf_map_info;
+
+ segcount++;
+ /* Two words for each added bitstream segment */
+ cmd_bufsize -= VDEC_SINLGE_DEVA_DMA_CMD_SIZE;
+ if (cmd_bufsize < 0) {
+ pr_err("Buffer for DMA command too small.\n");
+ return IMG_ERROR_INVALID_PARAMETERS;
+ }
+ /* Insert SCP/SC if needed */
+ if (dec_picseg->bstr_seg->bstr_seg_flag &
+ VDECDD_BSSEG_INSERTSCP) {
+ unsigned int startcode_length =
+ psdecpict->start_code_bufinfo->buf_size;
+
+ if (dec_picseg->bstr_seg->bstr_seg_flag &
+ VDECDD_BSSEG_INSERT_STARTCODE) {
+ unsigned char *start_code =
+ psdecpict->start_code_bufinfo->cpu_virt;
+ start_code[startcode_length - 1] =
+ dec_picseg->bstr_seg->start_code_suffix;
+ } else {
+ startcode_length -= 1;
+ }
+
+ segcount++;
+ *(cmd++) = startcode_length;
+ bitstream_size += startcode_length;
+
+ *(cmd++) = psdecpict->start_code_bufinfo->dev_virt;
+
+ if (((segcount %
+ (CMD_BITSTREAM_SEGMENTS_MINUS1_MASK + 1)) == 0))
+				/*
+				 * we have reached the max number of
+				 * bitstream segments for the current
+				 * command; make cmd point to the next
+				 * BS command header
+				 */
+ cmd += CMD_BITSTREAM_HDR_DW_SIZE;
+ }
+ /* Get access to map info context */
+ result = rman_get_resource(dec_picseg->bstr_seg->bufmap_id,
+ VDECDD_BUFMAP_TYPE_ID,
+ (void **)&ddbuf_map_info, NULL);
+ VDEC_ASSERT(result == IMG_SUCCESS);
+ if (result != IMG_SUCCESS)
+ return result;
+
+ *(cmd++) = (dec_picseg->bstr_seg->data_size);
+ bitstream_size += dec_picseg->bstr_seg->data_size;
+
+ *(cmd++) = ddbuf_map_info->ddbuf_info.dev_virt +
+ dec_picseg->bstr_seg->data_byte_offset;
+
+ if (((segcount %
+ (CMD_BITSTREAM_SEGMENTS_MINUS1_MASK + 1)) == 0) &&
+ (lst_next(dec_picseg)))
+				/*
+				 * we have reached the max number of bitstream
+				 * segments for the current command; make cmd
+				 * point to the next BS command header
+				 */
+ cmd += CMD_BITSTREAM_HDR_DW_SIZE;
+ }
+ dec_picseg = lst_next(dec_picseg);
+ }
+
+ if (segcount > CMD_BITSTREAM_SEGMENTS_MAX_NUM) {
+ pr_err("Too many bitstream segments to transfer.\n");
+ return IMG_ERROR_INVALID_PARAMETERS;
+ }
+
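+	/*
+	 * Now that the segment count and total bitstream size are known, fill
+	 * in the chunk headers: full chunks first, then the final (possibly
+	 * partial) chunk, which also carries the end-of-picture flag.
+	 */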
+ while (segcount > (CMD_BITSTREAM_SEGMENTS_MINUS1_MASK + 1)) {
+ *dma_hdr++ = CMD_BITSTREAM_SEGMENTS |
+ CMD_BITSTREAM_SEGMENTS_MORE_FOLLOW_MASK |
+ CMD_BITSTREAM_SEGMENTS_MINUS1_MASK;
+ *dma_hdr++ = bitstream_size;
+		/*
+		 * make dma_hdr point to the next chunk by skipping the
+		 * bitstream segments
+		 */
+ dma_hdr += (2 * (CMD_BITSTREAM_SEGMENTS_MINUS1_MASK + 1));
+ segcount -= (CMD_BITSTREAM_SEGMENTS_MINUS1_MASK + 1);
+ }
+ *dma_hdr = eop ? CMD_BITSTREAM_EOP_MASK : 0;
+ *dma_hdr++ |= CMD_BITSTREAM_SEGMENTS | (segcount - 1);
+ *dma_hdr = bitstream_size;
+
+	/*
+	 * Let the caller know where we finished: a pointer to the location
+	 * one word after the end of our command buffer.
+	 */
+ *dma_cmdbuf = cmd;
+ return IMG_SUCCESS;
+}
+
+/*
+ * Creates DEVA control allocation buffer header.
+ */
+static void translation_pvdec_ctrl_setuphdr
+ (struct ctrl_alloc_header *ctrlalloc_hdr,
+ unsigned int *pic_cmds)
+{
+ ctrlalloc_hdr->cmd_additional_params = CMD_CTRL_ALLOC_HEADER;
+ ctrlalloc_hdr->ext_opmode = pic_cmds[VDECFW_CMD_EXT_OP_MODE];
+ ctrlalloc_hdr->chroma_strides =
+ pic_cmds[VDECFW_CMD_CHROMA_ROW_STRIDE];
+ ctrlalloc_hdr->alt_output_addr[0] =
+ pic_cmds[VDECFW_CMD_LUMA_ALTERNATIVE_PICTURE_BASE_ADDRESS];
+ ctrlalloc_hdr->alt_output_addr[1] =
+ pic_cmds[VDECFW_CMD_CHROMA_ALTERNATIVE_PICTURE_BASE_ADDRESS];
+ ctrlalloc_hdr->alt_output_flags =
+ pic_cmds[VDECFW_CMD_ALTERNATIVE_OUTPUT_PICTURE_ROTATION];
+}
+
+/*
+ * Creates the DEVA VLC DMA command and saves it to the control allocation
+ * buffer.
+ */
+static int translation_pvdecsetup_vlcdma
+ (struct vidio_ddbufinfo *vlctables_bufinfo,
+ unsigned int **dmacmd_buf, unsigned int cmdbuf_size)
+{
+ unsigned int cmd_dma;
+ unsigned int *cmd = *dmacmd_buf;
+
+ /* Check if VLC tables fit in one DMA transfer */
+ if (vlctables_bufinfo->buf_size > CMD_DMA_DMA_SIZE_MASK) {
+ pr_err("VLC tables won't fit into one DMA transfer!\n");
+ return IMG_ERROR_INVALID_PARAMETERS;
+ }
+
+ /* Check if we have enough space in control allocation buffer. */
+ if (cmdbuf_size < VDEC_SINLGE_DEVA_DMA_CMD_SIZE) {
+ pr_err("Buffer for DMA command too small.\n");
+ return IMG_ERROR_INVALID_PARAMETERS;
+ }
+
+ /* Construct DMA command */
+ cmd_dma = CMD_DMA | CMD_DMA_TYPE_VLC_TABLE |
+ vlctables_bufinfo->buf_size;
+
+ /* Add command to control allocation */
+ *cmd++ = cmd_dma;
+ *cmd++ = vlctables_bufinfo->dev_virt;
+
+	/*
+	 * Let the caller know where we finished: a pointer to the location
+	 * one word after the end of our command buffer.
+	 */
+ *dmacmd_buf = cmd;
+ return IMG_SUCCESS;
+}
+
+/*
+ * Creates DEVA commands for configuring VLC tables and saves them into
+ * control allocation buffer.
+ */
+static int translation_pvdecsetup_vlctables
+ (unsigned short vlc_index_data[][3], unsigned int num_tables,
+ unsigned int **ctrl_allocbuf, unsigned int ctrl_allocsize,
+ unsigned int msvdx_vecoffset)
+{
+ unsigned int i;
+ unsigned int word_count;
+ unsigned int reg_val;
+ unsigned int *ctrl_allochdr;
+
+ unsigned int *ctrl_alloc = *ctrl_allocbuf;
+
+ /* Calculate the number of words needed for VLC control allocations. */
+ /*
+ * 3 words for control allocation headers (we are writing 3 chunks:
+ * addresses, widths, opcodes)
+ */
+ unsigned int req_elems = 3 +
+ (ALIGN(num_tables, PVDECIO_VLC_IDX_WIDTH_PARTS) /
+ PVDECIO_VLC_IDX_WIDTH_PARTS) +
+ (ALIGN(num_tables, PVDECIO_VLC_IDX_ADDR_PARTS) /
+ PVDECIO_VLC_IDX_ADDR_PARTS) +
+ (ALIGN(num_tables, PVDECIO_VLC_IDX_OPCODE_PARTS) /
+ PVDECIO_VLC_IDX_OPCODE_PARTS);
+
+	/*
+	 * The addresses chunk has to be split in two if the number of tables
+	 * exceeds VEC_VLC_TABLE_ADDR_DISCONT (see the layout of the
+	 * VEC_VLC_TABLE_ADDR* registers in the TRM).
+	 */
+ if (num_tables > VEC_VLC_TABLE_ADDR_DISCONT)
+ /* We need additional control allocation header */
+ req_elems += 1;
+
+ if (ctrl_allocsize < req_elems) {
+ pr_err("Buffer for VLC IDX commands too small.\n");
+ return IMG_ERROR_INVALID_PARAMETERS;
+ }
+
+ /*
+ * Write VLC IDX addresses. Chunks for VEC_VLC_TABLE_ADDR[0-15] and
+ * VEC_VLC_TABLE_ADDR[16-18] registers.
+ */
+ ctrl_allochdr = ctrl_alloc++;
+ *ctrl_allochdr = CMD_REGISTER_BLOCK | CMD_REGISTER_BLOCK_FLAG_VLC_DATA |
+ (MSVDX_VEC_CR_VEC_VLC_TABLE_ADDR0_OFFSET + msvdx_vecoffset);
+ /* Reset the word count. */
+ word_count = 0;
+
+ /* Process VLC index table. */
+ i = 0;
+ reg_val = 0;
+ while (i < num_tables) {
+ VDEC_ASSERT((vlc_index_data[i][PVDECIO_VLC_IDX_ADDR_ID] &
+ ~PVDECIO_VLC_IDX_ADDR_MASK) == 0);
+ /* Pack the addresses into a word. */
+ reg_val |= ((vlc_index_data[i][PVDECIO_VLC_IDX_ADDR_ID] &
+ PVDECIO_VLC_IDX_ADDR_MASK) <<
+ ((i % PVDECIO_VLC_IDX_ADDR_PARTS) *
+ PVDECIO_VLC_IDX_ADDR_SHIFT));
+
+ /* If we reached the end of VEC_VLC_TABLE_ADDR[0-15] area... */
+ if (i == VEC_VLC_TABLE_ADDR_DISCONT) {
+ /*
+ * Finalize command header for VEC_VLC_TABLE_ADDR[0-15]
+ * register chunk.
+ */
+ *ctrl_allochdr |= word_count << 16;
+ /*
+ * Reserve and preset command header for
+ * VEC_VLC_TABLE_ADDR[16-18] register chunk.
+ */
+ ctrl_allochdr = ctrl_alloc++;
+ *ctrl_allochdr = CMD_REGISTER_BLOCK |
+ CMD_REGISTER_BLOCK_FLAG_VLC_DATA |
+ (MSVDX_VEC_CR_VEC_VLC_TABLE_ADDR16_OFFSET +
+ msvdx_vecoffset);
+ /* Reset the word count. */
+ word_count = 0;
+ }
+
+ /*
+ * If all the addresses are packed in this word or that's the
+ * last iteration
+ */
+ if (((i % PVDECIO_VLC_IDX_ADDR_PARTS) ==
+ (PVDECIO_VLC_IDX_ADDR_PARTS - 1)) ||
+ (i == (num_tables - 1))) {
+ /*
+ * Add VLC table address to this chunk and increase
+ * words count.
+ */
+ *ctrl_alloc++ = reg_val;
+ word_count++;
+ /* Reset address value. */
+ reg_val = 0;
+ }
+
+ i++;
+ }
+
+ /*
+ * Finalize the current command header for VEC_VLC_TABLE_ADDR register
+ * chunk.
+ */
+ *ctrl_allochdr |= word_count << 16;
+
+ /*
+ * Start new commands chunk for VEC_VLC_TABLE_INITIAL_WIDTH[0-3]
+ * registers.
+ */
+
+ /*
+ * Reserve and preset command header for
+ * VEC_VLC_TABLE_INITIAL_WIDTH[0-3] register chunk.
+ */
+ ctrl_allochdr = ctrl_alloc++;
+ *ctrl_allochdr = CMD_REGISTER_BLOCK | CMD_REGISTER_BLOCK_FLAG_VLC_DATA |
+ (MSVDX_VEC_CR_VEC_VLC_TABLE_INITIAL_WIDTH0_OFFSET +
+ msvdx_vecoffset);
+ /* Reset the word count. */
+ word_count = 0;
+
+ /* Process VLC index table. */
+ i = 0;
+ reg_val = 0;
+
+ while (i < num_tables) {
+ VDEC_ASSERT((vlc_index_data[i][PVDECIO_VLC_IDX_WIDTH_ID] &
+ ~PVDECIO_VLC_IDX_WIDTH_MASK) == 0);
+ /* Pack the widths into a word. */
+ reg_val |= ((vlc_index_data[i][PVDECIO_VLC_IDX_WIDTH_ID] &
+ PVDECIO_VLC_IDX_WIDTH_MASK) <<
+ (i % PVDECIO_VLC_IDX_WIDTH_PARTS) *
+ PVDECIO_VLC_IDX_WIDTH_SHIFT);
+
+ /*
+ * If all the widths are packed in this word or that's the last
+ * iteration.
+ */
+ if (((i % PVDECIO_VLC_IDX_WIDTH_PARTS) ==
+ (PVDECIO_VLC_IDX_WIDTH_PARTS - 1)) ||
+ (i == (num_tables - 1))) {
+ /*
+ * Add VLC table width to this chunk and increase words
+ * count.
+ */
+ *ctrl_alloc++ = reg_val;
+ word_count++;
+ /* Reset width value. */
+ reg_val = 0;
+ }
+ i++;
+ }
+
+ /*
+ * Finalize command header for VEC_VLC_TABLE_INITIAL_WIDTH[0-3] register
+ * chunk.
+ */
+ *ctrl_allochdr |= word_count << 16;
+
+ /*
+ * Start new commands chunk for VEC_VLC_TABLE_INITIAL_OPCODE[0-2]
+ * registers.
+ * Reserve and preset command header for
+ * VEC_VLC_TABLE_INITIAL_OPCODE[0-2] register chunk
+ */
+ ctrl_allochdr = ctrl_alloc++;
+ *ctrl_allochdr = CMD_REGISTER_BLOCK | CMD_REGISTER_BLOCK_FLAG_VLC_DATA |
+ (MSVDX_VEC_CR_VEC_VLC_TABLE_INITIAL_OPCODE0_OFFSET +
+ msvdx_vecoffset);
+ /* Reset the word count. */
+ word_count = 0;
+
+ /* Process VLC index table. */
+ i = 0;
+ reg_val = 0;
+
+ while (i < num_tables) {
+ VDEC_ASSERT((vlc_index_data[i][PVDECIO_VLC_IDX_OPCODE_ID] &
+ ~PVDECIO_VLC_IDX_OPCODE_MASK) == 0);
+ /* Pack the opcodes into a word. */
+ reg_val |= ((vlc_index_data[i][PVDECIO_VLC_IDX_OPCODE_ID] &
+ PVDECIO_VLC_IDX_OPCODE_MASK) <<
+ (i % PVDECIO_VLC_IDX_OPCODE_PARTS) *
+ PVDECIO_VLC_IDX_OPCODE_SHIFT);
+
+ /*
+ * If all the opcodes are packed in this word or that's the last
+ * iteration.
+ */
+ if (((i % PVDECIO_VLC_IDX_OPCODE_PARTS) ==
+ (PVDECIO_VLC_IDX_OPCODE_PARTS - 1)) ||
+ (i == (num_tables - 1))) {
+ /*
+ * Add VLC table opcodes to this chunk and increase
+ * words count.
+ */
+ *ctrl_alloc++ = reg_val;
+ word_count++;
+			/* Reset opcode value. */
+ reg_val = 0;
+ }
+ i++;
+ }
+
+ /*
+ * Finalize command header for VEC_VLC_TABLE_INITIAL_OPCODE[0-2]
+ * register chunk.
+ */
+ *ctrl_allochdr |= word_count << 16;
+
+ /* Update caller with current location of control allocation pointer */
+ *ctrl_allocbuf = ctrl_alloc;
+ return IMG_SUCCESS;
+}
+
+/*
+ * Fills in a rendec command chunk in the command buffer.
+ */
+static void fill_rendec_chunk(int num, ...)
+{
+ va_list valist;
+ unsigned int i, j = 0;
+ unsigned int chunk_word_count = 0;
+ unsigned int used_word_count = 0;
+ int aux_array_size = 0;
+ unsigned int *pic_cmds;
+ unsigned int **ctrl_allocbuf;
+ unsigned int ctrl_allocsize;
+ unsigned int vdmc_cmd_offset;
+ unsigned int offset;
+ unsigned int *buf;
+	/* 5 is the number of fixed arguments passed to fill_rendec_chunk */
+ enum vdecfw_picture_cmds *aux_array = kmalloc((sizeof(unsigned int) *
+ (num - 5)), GFP_KERNEL);
+ if (!aux_array)
+ return;
+
+ /* initialize valist for num number of arguments */
+ va_start(valist, num);
+
+ pic_cmds = va_arg(valist, unsigned int *);
+ ctrl_allocbuf = va_arg(valist, unsigned int **);
+ ctrl_allocsize = va_arg(valist, unsigned int);
+ vdmc_cmd_offset = va_arg(valist, unsigned int);
+ offset = va_arg(valist, unsigned int);
+ buf = *ctrl_allocbuf;
+
+ aux_array_size = (sizeof(unsigned int) * (num - 5));
+ /*
+	 * Access the remaining arguments assigned to valist; the first
+	 * five have already been read above.
+ */
+ for (i = 6, j = 0; i <= num; i++, j++)
+ aux_array[j] = (enum vdecfw_picture_cmds)va_arg(valist, int);
+
+ /* clean memory reserved for valist */
+ va_end(valist);
+ chunk_word_count = aux_array_size /
+ sizeof(enum vdecfw_picture_cmds);
+ if ((chunk_word_count + 1) > (ctrl_allocsize - used_word_count)) {
+ kfree(aux_array);
+ return;
+ }
+ if ((chunk_word_count & ~(CMD_RENDEC_WORD_COUNT_MASK >>
+ CMD_RENDEC_WORD_COUNT_SHIFT)) != 0) {
+ kfree(aux_array);
+ return;
+ }
+ used_word_count += chunk_word_count + 1;
+ *buf++ = CMD_RENDEC_BLOCK | (chunk_word_count << 16) |
+ (vdmc_cmd_offset + offset);
+
+ for (i = 0; i < chunk_word_count; i++)
+ *buf++ = pic_cmds[aux_array[i]];
+
+ *ctrl_allocbuf = buf;
+ /* free the memory */
+ kfree(aux_array);
+}
+
+/*
+ * Creates DEVA commands for configuring rendec and writes them into control
+ * allocation buffer.
+ */
+static void translation_pvdec_setup_commands(unsigned int *pic_cmds,
+ unsigned int **ctrl_allocbuf,
+ unsigned int ctrl_allocsize,
+ unsigned int vdmc_cmd_offset)
+{
+ unsigned int codec_mode;
+
+ codec_mode = REGIO_READ_FIELD(pic_cmds[VDECFW_CMD_OPERATING_MODE],
+ MSVDX_CMDS, OPERATING_MODE, CODEC_MODE);
+
+ if (codec_mode != CODEC_MODE_H264)
+ /* chunk with cache settings at 0x01C */
+ /*
+		 * The first argument (6) gives the number of arguments that
+		 * follow it in this fill_rendec_chunk call.
+ */
+ fill_rendec_chunk(6, pic_cmds, ctrl_allocbuf, ctrl_allocsize,
+ vdmc_cmd_offset,
+ MSVDX_CMDS_MC_CACHE_CONFIGURATION_OFFSET,
+ VDECFW_CMD_MC_CACHE_CONFIGURATION);
+
+ /* chunk with extended row stride at 0x03C */
+ /*
+	 * The first argument (6) gives the number of arguments that
+	 * follow it in this fill_rendec_chunk call.
+ */
+ fill_rendec_chunk(6, pic_cmds, ctrl_allocbuf, ctrl_allocsize,
+ vdmc_cmd_offset,
+ MSVDX_CMDS_EXTENDED_ROW_STRIDE_OFFSET,
+ VDECFW_CMD_EXTENDED_ROW_STRIDE);
+
+ /* chunk with alternative output control at 0x1B4 */
+ /*
+	 * The first argument (6) gives the number of arguments that
+	 * follow it in this fill_rendec_chunk call.
+ */
+ fill_rendec_chunk(6, pic_cmds, ctrl_allocbuf, ctrl_allocsize,
+ vdmc_cmd_offset,
+ MSVDX_CMDS_ALTERNATIVE_OUTPUT_CONTROL_OFFSET,
+ VDECFW_CMD_ALTERNATIVE_OUTPUT_CONTROL);
+
+ /* scaling chunks */
+ if (pic_cmds[VDECFW_CMD_SCALED_DISPLAY_SIZE]) {
+ if (codec_mode != CODEC_MODE_REAL8 && codec_mode != CODEC_MODE_REAL9) {
+ /*
+ * chunk with scale display size, scale H/V control at
+ * 0x0050
+ */
+ /*
+			 * The first argument (8) gives the number of
+			 * arguments that follow it in this call.
+ */
+ fill_rendec_chunk(8, pic_cmds, ctrl_allocbuf,
+ ctrl_allocsize, vdmc_cmd_offset,
+ MSVDX_CMDS_SCALED_DISPLAY_SIZE_OFFSET,
+ VDECFW_CMD_SCALED_DISPLAY_SIZE,
+ VDECFW_CMD_HORIZONTAL_SCALE_CONTROL,
+ VDECFW_CMD_VERTICAL_SCALE_CONTROL);
+
+ /* chunk with luma/chorma H/V coeffs at 0x0060 */
+ /*
+			 * The first argument (21) gives the number of
+			 * arguments that follow it in this call.
+ */
+ fill_rendec_chunk(21, pic_cmds, ctrl_allocbuf,
+ ctrl_allocsize, vdmc_cmd_offset,
+ MSVDX_CMDS_HORIZONTAL_LUMA_COEFFICIENTS_OFFSET,
+ VDECFW_CMD_HORIZONTAL_LUMA_COEFFICIENTS_0,
+ VDECFW_CMD_HORIZONTAL_LUMA_COEFFICIENTS_1,
+ VDECFW_CMD_HORIZONTAL_LUMA_COEFFICIENTS_2,
+ VDECFW_CMD_HORIZONTAL_LUMA_COEFFICIENTS_3,
+ VDECFW_CMD_VERTICAL_LUMA_COEFFICIENTS_0,
+ VDECFW_CMD_VERTICAL_LUMA_COEFFICIENTS_1,
+ VDECFW_CMD_VERTICAL_LUMA_COEFFICIENTS_2,
+ VDECFW_CMD_VERTICAL_LUMA_COEFFICIENTS_3,
+ VDECFW_CMD_HORIZONTAL_CHROMA_COEFFICIENTS_0,
+ VDECFW_CMD_HORIZONTAL_CHROMA_COEFFICIENTS_1,
+ VDECFW_CMD_HORIZONTAL_CHROMA_COEFFICIENTS_2,
+ VDECFW_CMD_HORIZONTAL_CHROMA_COEFFICIENTS_3,
+ VDECFW_CMD_VERTICAL_CHROMA_COEFFICIENTS_0,
+ VDECFW_CMD_VERTICAL_CHROMA_COEFFICIENTS_1,
+ VDECFW_CMD_VERTICAL_CHROMA_COEFFICIENTS_2,
+ VDECFW_CMD_VERTICAL_CHROMA_COEFFICIENTS_3);
+
+ /*
+ * chunk with scale output size, scale H/V chroma at
+ * 0x01B8
+ */
+ /*
+			 * The first argument (8) gives the number of
+			 * arguments that follow it in this call.
+ */
+ fill_rendec_chunk(8, pic_cmds, ctrl_allocbuf,
+ ctrl_allocsize, vdmc_cmd_offset,
+ MSVDX_CMDS_SCALE_OUTPUT_SIZE_OFFSET,
+ VDECFW_CMD_SCALE_OUTPUT_SIZE,
+ VDECFW_CMD_SCALE_HORIZONTAL_CHROMA,
+ VDECFW_CMD_SCALE_VERTICAL_CHROMA);
+ }
+ }
+}
+
+#ifdef HAS_HEVC
+/*
+ * @Function translation_pvdec_setup_pvdec_commands
+ */
+static int translation_pvdec_setup_pvdec_commands(struct vdecdd_picture *picture,
+ struct dec_decpict *dec_pict,
+ struct vdecdd_str_unit *str_unit,
+ struct decoder_regsoffsets *regs_offsets,
+ unsigned int **ctrl_allocbuf,
+ unsigned int ctrl_alloc_size,
+ unsigned int *mem_to_reg_host_part,
+ unsigned int *pict_cmds)
+{
+ const unsigned int genc_buf_cnt = 4;
+	/* Two chunks are needed: GENC buffer sizes and their addresses */
+ const unsigned int genc_conf_items = 2;
+ const unsigned int pipe = 0xf << 16; /* Instruct H/W to write to current pipe */
+ /* We need to configure address and size of each GENC buffer */
+ const unsigned int genc_words_cnt = genc_buf_cnt * genc_conf_items;
+ struct vdecdd_ddbuf_mapinfo **genc_buffers =
+ picture->pict_res_int->seq_resint->genc_buffers;
+ unsigned int memto_reg_used; /* in bytes */
+ unsigned int i;
+ unsigned int *ctrl_alloc = *ctrl_allocbuf;
+ unsigned int *mem_to_reg = (unsigned int *)dec_pict->pvdec_info->ddbuf_info->cpu_virt;
+ unsigned int reg = 0;
+
+ if (ctrl_alloc_size < genc_words_cnt + genc_conf_items) {
+ pr_err("Buffer for GENC config too small.");
+ return IMG_ERROR_INVALID_PARAMETERS;
+ }
+
+ /* Insert command header for GENC buffers sizes */
+ *ctrl_alloc++ = CMD_REGISTER_BLOCK | (genc_buf_cnt << 16) |
+ (PVDEC_ENTROPY_CR_GENC_BUFFER_SIZE_OFFSET + regs_offsets->entropy_offset);
+ for (i = 0; i < genc_buf_cnt; i++)
+ *ctrl_alloc++ = genc_buffers[i]->ddbuf_info.buf_size;
+
+ /* Insert command header for GENC buffers addresses */
+ *ctrl_alloc++ = CMD_REGISTER_BLOCK | (genc_buf_cnt << 16) |
+ (PVDEC_ENTROPY_CR_GENC_BUFFER_BASE_ADDRESS_OFFSET + regs_offsets->entropy_offset);
+ for (i = 0; i < genc_buf_cnt; i++)
+ *ctrl_alloc++ = genc_buffers[i]->ddbuf_info.dev_virt;
+
+ /* Insert GENC fragment buffer address */
+ *ctrl_alloc++ = CMD_REGISTER_BLOCK | (1 << 16) |
+ (PVDEC_ENTROPY_CR_GENC_FRAGMENT_BASE_ADDRESS_OFFSET + regs_offsets->entropy_offset);
+ *ctrl_alloc++ = picture->pict_res_int->genc_fragment_buf->ddbuf_info.dev_virt;
+
+ /* Return current location in control allocation buffer to caller */
+ *ctrl_allocbuf = ctrl_alloc;
+
+ reg = 0;
+ REGIO_WRITE_FIELD_LITE
+ (reg,
+ MSVDX_CMDS, PVDEC_DISPLAY_PICTURE_SIZE, PVDEC_DISPLAY_PICTURE_WIDTH_MIN1,
+ str_unit->pict_hdr_info->coded_frame_size.width - 1, unsigned int);
+ REGIO_WRITE_FIELD_LITE
+ (reg,
+ MSVDX_CMDS, PVDEC_DISPLAY_PICTURE_SIZE, PVDEC_DISPLAY_PICTURE_HEIGHT_MIN1,
+ str_unit->pict_hdr_info->coded_frame_size.height - 1, unsigned int);
+
+ /*
+ * Pvdec operating mode needs to be submitted before any other commands.
+ * This will be set in FW. Make sure it's the first command in Mem2Reg buffer.
+ */
+ VDEC_ASSERT((unsigned int *)dec_pict->pvdec_info->ddbuf_info->cpu_virt == mem_to_reg);
+
+ *mem_to_reg++ = pipe |
+ (MSVDX_CMDS_PVDEC_OPERATING_MODE_OFFSET + regs_offsets->vdmc_cmd_offset);
+ *mem_to_reg++ = 0x0; /* has to be updated in the F/W */
+
+ *mem_to_reg++ = pipe |
+ (MSVDX_CMDS_MC_CACHE_CONFIGURATION_OFFSET + regs_offsets->vdmc_cmd_offset);
+ *mem_to_reg++ = 0x0; /* has to be updated in the F/W */
+
+ *mem_to_reg++ = pipe |
+ (MSVDX_CMDS_PVDEC_DISPLAY_PICTURE_SIZE_OFFSET + regs_offsets->vdmc_cmd_offset);
+ *mem_to_reg++ = reg;
+
+ *mem_to_reg++ = pipe |
+ (MSVDX_CMDS_PVDEC_CODED_PICTURE_SIZE_OFFSET + regs_offsets->vdmc_cmd_offset);
+ *mem_to_reg++ = reg;
+
+ /* scaling configuration */
+ if (pict_cmds[VDECFW_CMD_SCALED_DISPLAY_SIZE]) {
+ *mem_to_reg++ = pipe |
+ (MSVDX_CMDS_PVDEC_SCALED_DISPLAY_SIZE_OFFSET +
+ regs_offsets->vdmc_cmd_offset);
+ *mem_to_reg++ = pict_cmds[VDECFW_CMD_SCALED_DISPLAY_SIZE];
+
+ *mem_to_reg++ = pipe |
+ (MSVDX_CMDS_HORIZONTAL_SCALE_CONTROL_OFFSET +
+ regs_offsets->vdmc_cmd_offset);
+ *mem_to_reg++ = pict_cmds[VDECFW_CMD_HORIZONTAL_SCALE_CONTROL];
+ *mem_to_reg++ = pipe |
+ (MSVDX_CMDS_VERTICAL_SCALE_CONTROL_OFFSET + regs_offsets->vdmc_cmd_offset);
+ *mem_to_reg++ = pict_cmds[VDECFW_CMD_VERTICAL_SCALE_CONTROL];
+
+ *mem_to_reg++ = pipe |
+ (MSVDX_CMDS_SCALE_OUTPUT_SIZE_OFFSET + regs_offsets->vdmc_cmd_offset);
+ *mem_to_reg++ = pict_cmds[VDECFW_CMD_SCALE_OUTPUT_SIZE];
+
+ *mem_to_reg++ = pipe |
+ (MSVDX_CMDS_SCALE_HORIZONTAL_CHROMA_OFFSET + regs_offsets->vdmc_cmd_offset);
+ *mem_to_reg++ = pict_cmds[VDECFW_CMD_SCALE_HORIZONTAL_CHROMA];
+ *mem_to_reg++ = pipe |
+ (MSVDX_CMDS_SCALE_VERTICAL_CHROMA_OFFSET + regs_offsets->vdmc_cmd_offset);
+ *mem_to_reg++ = pict_cmds[VDECFW_CMD_SCALE_VERTICAL_CHROMA];
+
+ *mem_to_reg++ = pipe |
+ (MSVDX_CMDS_HORIZONTAL_LUMA_COEFFICIENTS_OFFSET +
+ regs_offsets->vdmc_cmd_offset);
+ *mem_to_reg++ = pict_cmds[VDECFW_CMD_HORIZONTAL_LUMA_COEFFICIENTS_0];
+ *mem_to_reg++ = pipe |
+ (4 + MSVDX_CMDS_HORIZONTAL_LUMA_COEFFICIENTS_OFFSET +
+ regs_offsets->vdmc_cmd_offset);
+ *mem_to_reg++ = pict_cmds[VDECFW_CMD_HORIZONTAL_LUMA_COEFFICIENTS_1];
+ *mem_to_reg++ = pipe |
+ (8 + MSVDX_CMDS_HORIZONTAL_LUMA_COEFFICIENTS_OFFSET +
+ regs_offsets->vdmc_cmd_offset);
+ *mem_to_reg++ = pict_cmds[VDECFW_CMD_HORIZONTAL_LUMA_COEFFICIENTS_2];
+ *mem_to_reg++ = pipe |
+ (12 + MSVDX_CMDS_HORIZONTAL_LUMA_COEFFICIENTS_OFFSET +
+ regs_offsets->vdmc_cmd_offset);
+ *mem_to_reg++ = pict_cmds[VDECFW_CMD_HORIZONTAL_LUMA_COEFFICIENTS_3];
+
+ *mem_to_reg++ = pipe |
+ (MSVDX_CMDS_VERTICAL_LUMA_COEFFICIENTS_OFFSET +
+ regs_offsets->vdmc_cmd_offset);
+ *mem_to_reg++ = pict_cmds[VDECFW_CMD_VERTICAL_LUMA_COEFFICIENTS_0];
+ *mem_to_reg++ = pipe |
+ (4 + MSVDX_CMDS_VERTICAL_LUMA_COEFFICIENTS_OFFSET +
+ regs_offsets->vdmc_cmd_offset);
+ *mem_to_reg++ = pict_cmds[VDECFW_CMD_VERTICAL_LUMA_COEFFICIENTS_1];
+ *mem_to_reg++ = pipe |
+ (8 + MSVDX_CMDS_VERTICAL_LUMA_COEFFICIENTS_OFFSET +
+ regs_offsets->vdmc_cmd_offset);
+ *mem_to_reg++ = pict_cmds[VDECFW_CMD_VERTICAL_LUMA_COEFFICIENTS_2];
+ *mem_to_reg++ = pipe |
+ (12 + MSVDX_CMDS_VERTICAL_LUMA_COEFFICIENTS_OFFSET +
+ regs_offsets->vdmc_cmd_offset);
+ *mem_to_reg++ = pict_cmds[VDECFW_CMD_VERTICAL_LUMA_COEFFICIENTS_3];
+
+ *mem_to_reg++ = pipe |
+ (MSVDX_CMDS_HORIZONTAL_CHROMA_COEFFICIENTS_OFFSET +
+ regs_offsets->vdmc_cmd_offset);
+ *mem_to_reg++ = pict_cmds[VDECFW_CMD_HORIZONTAL_CHROMA_COEFFICIENTS_0];
+ *mem_to_reg++ = pipe |
+ (4 + MSVDX_CMDS_HORIZONTAL_CHROMA_COEFFICIENTS_OFFSET +
+ regs_offsets->vdmc_cmd_offset);
+ *mem_to_reg++ = pict_cmds[VDECFW_CMD_HORIZONTAL_CHROMA_COEFFICIENTS_1];
+ *mem_to_reg++ = pipe |
+ (8 + MSVDX_CMDS_HORIZONTAL_CHROMA_COEFFICIENTS_OFFSET +
+ regs_offsets->vdmc_cmd_offset);
+ *mem_to_reg++ = pict_cmds[VDECFW_CMD_HORIZONTAL_CHROMA_COEFFICIENTS_2];
+ *mem_to_reg++ = pipe |
+ (12 + MSVDX_CMDS_HORIZONTAL_CHROMA_COEFFICIENTS_OFFSET +
+ regs_offsets->vdmc_cmd_offset);
+ *mem_to_reg++ = pict_cmds[VDECFW_CMD_HORIZONTAL_CHROMA_COEFFICIENTS_3];
+
+ *mem_to_reg++ = pipe |
+ (MSVDX_CMDS_VERTICAL_CHROMA_COEFFICIENTS_OFFSET +
+ regs_offsets->vdmc_cmd_offset);
+ *mem_to_reg++ = pict_cmds[VDECFW_CMD_VERTICAL_CHROMA_COEFFICIENTS_0];
+ *mem_to_reg++ = pipe |
+ (4 + MSVDX_CMDS_VERTICAL_CHROMA_COEFFICIENTS_OFFSET +
+ regs_offsets->vdmc_cmd_offset);
+ *mem_to_reg++ = pict_cmds[VDECFW_CMD_VERTICAL_CHROMA_COEFFICIENTS_1];
+ *mem_to_reg++ = pipe |
+ (8 + MSVDX_CMDS_VERTICAL_CHROMA_COEFFICIENTS_OFFSET +
+ regs_offsets->vdmc_cmd_offset);
+ *mem_to_reg++ = pict_cmds[VDECFW_CMD_VERTICAL_CHROMA_COEFFICIENTS_2];
+ *mem_to_reg++ = pipe |
+ (12 + MSVDX_CMDS_VERTICAL_CHROMA_COEFFICIENTS_OFFSET +
+ regs_offsets->vdmc_cmd_offset);
+ *mem_to_reg++ = pict_cmds[VDECFW_CMD_VERTICAL_CHROMA_COEFFICIENTS_3];
+ }
+
+ *mem_to_reg++ = pipe |
+ (MSVDX_CMDS_EXTENDED_ROW_STRIDE_OFFSET + regs_offsets->vdmc_cmd_offset);
+ *mem_to_reg++ = pict_cmds[VDECFW_CMD_EXTENDED_ROW_STRIDE];
+
+ *mem_to_reg++ = pipe |
+ (MSVDX_CMDS_ALTERNATIVE_OUTPUT_CONTROL_OFFSET + regs_offsets->vdmc_cmd_offset);
+ *mem_to_reg++ = pict_cmds[VDECFW_CMD_ALTERNATIVE_OUTPUT_CONTROL];
+
+ *mem_to_reg++ = pipe |
+ (MSVDX_CMDS_ALTERNATIVE_OUTPUT_PICTURE_ROTATION_OFFSET +
+ regs_offsets->vdmc_cmd_offset);
+ *mem_to_reg++ = pict_cmds[VDECFW_CMD_ALTERNATIVE_OUTPUT_PICTURE_ROTATION];
+
+ *mem_to_reg++ = pipe |
+ (MSVDX_CMDS_CHROMA_ROW_STRIDE_OFFSET + regs_offsets->vdmc_cmd_offset);
+ *mem_to_reg++ = pict_cmds[VDECFW_CMD_CHROMA_ROW_STRIDE];
+
+ /* Setup MEM_TO_REG buffer */
+ for (i = 0; i < genc_buf_cnt; i++) {
+ *mem_to_reg++ = pipe | (PVDEC_VEC_BE_CR_GENC_BUFFER_SIZE_OFFSET +
+ regs_offsets->vec_be_regs_offset + i * sizeof(unsigned int));
+ *mem_to_reg++ = genc_buffers[i]->ddbuf_info.buf_size;
+ *mem_to_reg++ = pipe | (PVDEC_VEC_BE_CR_GENC_BUFFER_BASE_ADDRESS_OFFSET +
+ regs_offsets->vec_be_regs_offset + i * sizeof(unsigned int));
+ *mem_to_reg++ = genc_buffers[i]->ddbuf_info.dev_virt;
+ }
+
+ *mem_to_reg++ = pipe |
+ (PVDEC_VEC_BE_CR_GENC_FRAGMENT_BASE_ADDRESS_OFFSET +
+ regs_offsets->vec_be_regs_offset);
+ *mem_to_reg++ = picture->pict_res_int->genc_fragment_buf->ddbuf_info.dev_virt;
+
+ *mem_to_reg++ = pipe |
+ (PVDEC_VEC_BE_CR_ABOVE_PARAM_BASE_ADDRESS_OFFSET +
+ regs_offsets->vec_be_regs_offset);
+
+ *mem_to_reg++ = dec_pict->pvdec_info->ddbuf_info->dev_virt +
+ MEM_TO_REG_BUF_SIZE + SLICE_PARAMS_BUF_SIZE;
+
+ *mem_to_reg++ = pipe |
+ (MSVDX_CMDS_LUMA_RECONSTRUCTED_PICTURE_BASE_ADDRESSES_OFFSET +
+ regs_offsets->vdmc_cmd_offset);
+ *mem_to_reg++ = pict_cmds[VDECFW_CMD_LUMA_RECONSTRUCTED_PICTURE_BASE_ADDRESS];
+
+ *mem_to_reg++ = pipe |
+ (MSVDX_CMDS_CHROMA_RECONSTRUCTED_PICTURE_BASE_ADDRESSES_OFFSET +
+ regs_offsets->vdmc_cmd_offset);
+ *mem_to_reg++ = pict_cmds[VDECFW_CMD_CHROMA_RECONSTRUCTED_PICTURE_BASE_ADDRESS];
+
+ /* alternative picture configuration */
+ if (dec_pict->alt_pict) {
+ *mem_to_reg++ = pipe |
+ (MSVDX_CMDS_VC1_LUMA_RANGE_MAPPING_BASE_ADDRESS_OFFSET +
+ regs_offsets->vdmc_cmd_offset);
+ *mem_to_reg++ = pict_cmds[VDECFW_CMD_LUMA_ALTERNATIVE_PICTURE_BASE_ADDRESS];
+
+ *mem_to_reg++ = pipe |
+ (MSVDX_CMDS_VC1_CHROMA_RANGE_MAPPING_BASE_ADDRESS_OFFSET +
+ regs_offsets->vdmc_cmd_offset);
+ *mem_to_reg++ = pict_cmds[VDECFW_CMD_CHROMA_ALTERNATIVE_PICTURE_BASE_ADDRESS];
+ }
+
+ *mem_to_reg++ = pipe |
+ (MSVDX_CMDS_AUX_LINE_BUFFER_BASE_ADDRESS_OFFSET + regs_offsets->vdmc_cmd_offset);
+ *mem_to_reg++ = pict_cmds[VDECFW_CMD_AUX_LINE_BUFFER_BASE_ADDRESS];
+
+ *mem_to_reg++ = pipe |
+ (MSVDX_CMDS_INTRA_BUFFER_BASE_ADDRESS_OFFSET + regs_offsets->vdmc_cmd_offset);
+ *mem_to_reg++ = pict_cmds[VDECFW_CMD_INTRA_BUFFER_BASE_ADDRESS];
+
+ /* Make sure we fit in buffer */
+ memto_reg_used = (unsigned long)mem_to_reg -
+ (unsigned long)dec_pict->pvdec_info->ddbuf_info->cpu_virt;
+
+ VDEC_ASSERT(memto_reg_used < MEM_TO_REG_BUF_SIZE);
+
+ *mem_to_reg_host_part = memto_reg_used / sizeof(unsigned int);
+
+ return IMG_SUCCESS;
+}
+#endif
+
+/*
+ * Fills in the VDEC extension command (CMD_VDEC_EXT) with picture setup
+ * data for the firmware.
+ */
+static int translation_pvdecsetup_vdecext
+ (struct vdec_ext_cmd *vdec_ext,
+ struct dec_decpict *dec_pict, unsigned int *pic_cmds,
+ struct vdecdd_str_unit *str_unit, enum vdec_vid_std vid_std,
+ enum vdecfw_parsermode parser_mode)
+{
+ int result;
+ unsigned int trans_id = dec_pict->transaction_id;
+
+ VDEC_ASSERT(dec_pict->recon_pict);
+
+ vdec_ext->cmd = CMD_VDEC_EXT;
+ vdec_ext->trans_id = trans_id;
+
+ result = translation_get_seqhdr(str_unit, dec_pict, &vdec_ext->seq_addr);
+ VDEC_ASSERT(result == IMG_SUCCESS);
+ if (result != IMG_SUCCESS)
+ return result;
+
+ result = translation_get_ppshdr(str_unit, dec_pict, &vdec_ext->pps_addr);
+ VDEC_ASSERT(result == IMG_SUCCESS);
+ if (result != IMG_SUCCESS)
+ return result;
+
+ result = translation_getsecond_ppshdr(str_unit, &vdec_ext->pps_2addr);
+ if (result != IMG_SUCCESS)
+ return result;
+
+ vdec_ext->hdr_addr = GET_HOST_ADDR(dec_pict->hdr_info->ddbuf_info);
+
+ vdec_ext->ctx_load_addr = translation_getctx_loadaddr(dec_pict);
+ vdec_ext->ctx_save_addr = GET_HOST_ADDR(&dec_pict->cur_pict_dec_res->fw_ctx_buf);
+ vdec_ext->buf_ctrl_addr = GET_HOST_ADDR(&dec_pict->pict_ref_res->fw_ctrlbuf);
+ if (dec_pict->prev_pict_dec_res) {
+ /*
+ * Copy the previous firmware context to the current one in case
+ * picture management fails in firmware.
+ */
+ memcpy(dec_pict->cur_pict_dec_res->fw_ctx_buf.cpu_virt,
+ dec_pict->prev_pict_dec_res->fw_ctx_buf.cpu_virt,
+ dec_pict->prev_pict_dec_res->fw_ctx_buf.buf_size);
+ }
+
+ vdec_ext->last_luma_recon =
+ pic_cmds[VDECFW_CMD_LUMA_RECONSTRUCTED_PICTURE_BASE_ADDRESS];
+ vdec_ext->last_chroma_recon =
+ pic_cmds[VDECFW_CMD_CHROMA_RECONSTRUCTED_PICTURE_BASE_ADDRESS];
+
+ vdec_ext->luma_err_base =
+ pic_cmds[VDECFW_CMD_LUMA_ERROR_PICTURE_BASE_ADDRESS];
+ vdec_ext->chroma_err_base =
+ pic_cmds[VDECFW_CMD_CHROMA_ERROR_PICTURE_BASE_ADDRESS];
+
+ vdec_ext->scaled_display_size =
+ pic_cmds[VDECFW_CMD_SCALED_DISPLAY_SIZE];
+ vdec_ext->horz_scale_control =
+ pic_cmds[VDECFW_CMD_HORIZONTAL_SCALE_CONTROL];
+ vdec_ext->vert_scale_control =
+ pic_cmds[VDECFW_CMD_VERTICAL_SCALE_CONTROL];
+ vdec_ext->scale_output_size = pic_cmds[VDECFW_CMD_SCALE_OUTPUT_SIZE];
+
+ vdec_ext->intra_buf_base_addr =
+ pic_cmds[VDECFW_CMD_INTRA_BUFFER_BASE_ADDRESS];
+ vdec_ext->intra_buf_size_per_pipe =
+ pic_cmds[VDECFW_CMD_INTRA_BUFFER_SIZE_PER_PIPE];
+ vdec_ext->intra_buf_size_per_plane =
+ pic_cmds[VDECFW_CMD_INTRA_BUFFER_PLANE_SIZE];
+ vdec_ext->aux_line_buffer_base_addr =
+ pic_cmds[VDECFW_CMD_AUX_LINE_BUFFER_BASE_ADDRESS];
+ vdec_ext->aux_line_buf_size_per_pipe =
+ pic_cmds[VDECFW_CMD_AUX_LINE_BUFFER_SIZE_PER_PIPE];
+ vdec_ext->alt_output_pict_rotation =
+ pic_cmds[VDECFW_CMD_ALTERNATIVE_OUTPUT_PICTURE_ROTATION];
+ vdec_ext->chroma2reconstructed_addr =
+ pic_cmds[VDECFW_CMD_CHROMA2_RECONSTRUCTED_PICTURE_BASE_ADDRESS];
+ vdec_ext->luma_alt_addr =
+ pic_cmds[VDECFW_CMD_LUMA_ALTERNATIVE_PICTURE_BASE_ADDRESS];
+ vdec_ext->chroma_alt_addr =
+ pic_cmds[VDECFW_CMD_CHROMA_ALTERNATIVE_PICTURE_BASE_ADDRESS];
+ vdec_ext->chroma2alt_addr =
+ pic_cmds[VDECFW_CMD_CHROMA2_ALTERNATIVE_PICTURE_BASE_ADDRESS];
+
+ if (vid_std == VDEC_STD_VC1) {
+ struct vidio_ddbufinfo *vlc_idx_tables_bufinfo =
+ dec_pict->vlc_idx_tables_bufinfo;
+ struct vidio_ddbufinfo *vlc_tables_bufinfo =
+ dec_pict->vlc_tables_bufinfo;
+
+ vdec_ext->vlc_idx_table_size = vlc_idx_tables_bufinfo->buf_size;
+ vdec_ext->vlc_idx_table_addr = vlc_idx_tables_bufinfo->buf_size;
+ vdec_ext->vlc_tables_size = vlc_tables_bufinfo->buf_size;
+ vdec_ext->vlc_tables_size = vlc_tables_bufinfo->buf_size;
+ } else {
+ vdec_ext->vlc_idx_table_size = 0;
+ vdec_ext->vlc_idx_table_addr = 0;
+ vdec_ext->vlc_tables_size = 0;
+ vdec_ext->vlc_tables_size = 0;
+ }
+
+ vdec_ext->display_picture_size = pic_cmds[VDECFW_CMD_DISPLAY_PICTURE];
+ vdec_ext->parser_mode = parser_mode;
+
+ /* miscellaneous flags */
+ vdec_ext->is_chromainterleaved =
+ REGIO_READ_FIELD(pic_cmds[VDECFW_CMD_OPERATING_MODE], MSVDX_CMDS, OPERATING_MODE,
+ CHROMA_INTERLEAVED);
+ vdec_ext->is_discontinuousmbs =
+ dec_pict->pict_hdr_info->discontinuous_mbs;
+
+#ifdef HAS_HEVC
+ if (dec_pict->pvdec_info) {
+ vdec_ext->mem_to_reg_addr = dec_pict->pvdec_info->ddbuf_info->dev_virt;
+ vdec_ext->slice_params_addr = dec_pict->pvdec_info->ddbuf_info->dev_virt +
+ MEM_TO_REG_BUF_SIZE;
+ vdec_ext->slice_params_size = SLICE_PARAMS_BUF_SIZE;
+ }
+ if (vid_std == VDEC_STD_HEVC) {
+ struct vdecdd_picture *picture = (struct vdecdd_picture *)str_unit->dd_pict_data;
+
+ VDEC_ASSERT(picture);
+ /* 10-bit packed output format indicator */
+ vdec_ext->is_packedformat = picture->op_config.pixel_info.mem_pkg ==
+ PIXEL_BIT10_MP ? 1 : 0;
+ }
+#endif
+ return IMG_SUCCESS;
+}
+
+/*
+ * NOTE :
+ * translation_configure_tiling is not supported as of now.
+ */
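+/*
+ * Overall shape of the control allocation assembled below: bitstream
+ * segment DMA commands, a ctrl_alloc_header, a vdec_ext_cmd, optional
+ * VLC table DMA and VLC index register chunks, then rendec register
+ * chunks (non-HEVC) or PVDEC/mem-to-reg setup (HEVC), terminated by
+ * CMD_COMPLETION.
+ */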
+int translation_ctrl_alloc_prepare(struct vdec_str_configdata *pstr_config_data,
+ struct vdecdd_str_unit *str_unit,
+ struct dec_decpict *dec_pict,
+ const struct vxd_coreprops *core_props,
+ struct decoder_regsoffsets *regs_offset)
+{
+ int result;
+ unsigned int *cmd_buf;
+ unsigned int hdr_size = 0;
+ unsigned int pict_cmds[VDECFW_CMD_MAX];
+ enum vdecfw_codectype codec;
+ struct vxd_buffers buffers;
+ struct vdec_ext_cmd *vdec_ext;
+ enum vdecfw_parsermode parser_mode = VDECFW_SCP_ONLY;
+ struct vidio_ddbufinfo *batch_msgbuf_info =
+ dec_pict->batch_msginfo->ddbuf_info;
+ struct lst_t *decpic_seg_list = &dec_pict->dec_pict_seg_list;
+ unsigned int memto_reg_host_part = 0;
+
+ unsigned long ctrl_alloc = (unsigned long)batch_msgbuf_info->cpu_virt;
+ unsigned long ctrl_alloc_end = ctrl_alloc + batch_msgbuf_info->buf_size;
+
+ struct vdecdd_picture *picture =
+ (struct vdecdd_picture *)str_unit->dd_pict_data;
+
+ memset(pict_cmds, 0, sizeof(pict_cmds));
+ memset(&buffers, 0, sizeof(buffers));
+
+ VDEC_ASSERT(batch_msgbuf_info->buf_size >= CTRL_ALLOC_MAX_SEGMENT_SIZE);
+ memset(batch_msgbuf_info->cpu_virt, 0, batch_msgbuf_info->buf_size);
+
+ /* Construct transaction based on new picture. */
+ VDEC_ASSERT(str_unit->str_unit_type == VDECDD_STRUNIT_PICTURE_START);
+
+ /* Obtain picture data. */
+ picture = (struct vdecdd_picture *)str_unit->dd_pict_data;
+ dec_pict->recon_pict = &picture->disp_pict_buf;
+
+ result = translation_get_codec(pstr_config_data->vid_std, &codec);
+ if (result != IMG_SUCCESS)
+ return result;
+
+ translation_setup_std_header(pstr_config_data, dec_pict, str_unit, &hdr_size, picture,
+ pict_cmds, &parser_mode);
+
+ buffers.recon_pict = dec_pict->recon_pict;
+ buffers.alt_pict = dec_pict->alt_pict;
+
+#ifdef HAS_HEVC
+ /* Set pipe offsets to device buffers */
+ if (pstr_config_data->vid_std == VDEC_STD_HEVC) {
+		/* FW in multipipe requires these buffers to be allocated per stream */
+ if (picture->pict_res_int && picture->pict_res_int->seq_resint &&
+ picture->pict_res_int->seq_resint->intra_buffer &&
+ picture->pict_res_int->seq_resint->aux_buffer) {
+ buffers.intra_bufinfo =
+ &picture->pict_res_int->seq_resint->intra_buffer->ddbuf_info;
+ buffers.auxline_bufinfo =
+ &picture->pict_res_int->seq_resint->aux_buffer->ddbuf_info;
+ }
+ } else {
+ buffers.intra_bufinfo = dec_pict->intra_bufinfo;
+ buffers.auxline_bufinfo = dec_pict->auxline_bufinfo;
+ }
+
+ if (buffers.intra_bufinfo)
+ buffers.intra_bufsize_per_pipe = buffers.intra_bufinfo->buf_size /
+ core_props->num_pixel_pipes;
+ if (buffers.auxline_bufinfo)
+ buffers.auxline_bufsize_per_pipe = buffers.auxline_bufinfo->buf_size /
+ core_props->num_pixel_pipes;
+#endif
+
+#ifdef ERROR_CONCEALMENT
+ if (picture->pict_res_int && picture->pict_res_int->seq_resint)
+ if (picture->pict_res_int->seq_resint->err_pict_buf)
+ buffers.err_pict_bufinfo =
+ &picture->pict_res_int->seq_resint->err_pict_buf->ddbuf_info;
+#endif
+
+ /*
+ * Prepare Reconstructed Picture Configuration
+	 * Note: we are obtaining register values prepared based on header
+	 * files generated from MSVDX *dev files.
+	 * That is allowed, as the layout of the registers
+	 * MSVDX_CMDS_OPERATING_MODE, MSVDX_CMDS_EXTENDED_ROW_STRIDE,
+	 * MSVDX_CMDS_ALTERNATIVE_OUTPUT_PICTURE_ROTATION and
+	 * MSVDX_CMDS_CHROMA_ROW_STRIDE is the same for both MSVDX and PVDEC.
+ */
+ vxd_set_reconpictcmds(str_unit, pstr_config_data, &picture->op_config, core_props,
+ &buffers, pict_cmds);
+
+ /* Alternative Picture Configuration */
+ if (dec_pict->alt_pict) {
+ dec_pict->twopass = picture->op_config.force_oold;
+ buffers.btwopass = dec_pict->twopass;
+ /*
+ * Alternative Picture Configuration
+		 * Note: we are obtaining register values prepared based
+		 * on header files generated from MSVDX *dev files.
+		 * That is allowed, as the layout of the registers
+		 * MSVDX_CMDS_OPERATING_MODE, MSVDX_CMDS_EXTENDED_ROW_STRIDE,
+		 * MSVDX_CMDS_ALTERNATIVE_OUTPUT_PICTURE_ROTATION and
+		 * MSVDX_CMDS_CHROMA_ROW_STRIDE is the same for both MSVDX
+		 * and PVDEC.
+ */
+ /*
+ * Configure second buffer for out-of-loop processing
+ * (e.g. scaling etc.).
+ */
+ vxd_set_altpictcmds(str_unit, pstr_config_data, &picture->op_config, core_props,
+ &buffers, pict_cmds);
+ }
+
+ /*
+ * Setup initial simple bitstream configuration to be used by parser
+ * task
+ */
+ cmd_buf = (unsigned int *)ctrl_alloc;
+ result = translation_pvdec_adddma_transfers
+ (decpic_seg_list, &cmd_buf,
+ (ctrl_alloc_end - (unsigned long)cmd_buf) / sizeof(unsigned int),
+ dec_pict, str_unit->eop);
+ if (result != IMG_SUCCESS)
+ return result;
+
+ if ((unsigned long)(cmd_buf + (sizeof(struct ctrl_alloc_header) +
+ sizeof(struct vdec_ext_cmd)) / sizeof(unsigned int)) >=
+ ctrl_alloc_end)
+ return IMG_ERROR_INVALID_PARAMETERS;
+
+ /*
+ * Setup regular control allocation message. Start with control
+ * allocation header
+ */
+ translation_pvdec_ctrl_setuphdr((struct ctrl_alloc_header *)cmd_buf, pict_cmds);
+ /* Setup additional params for VP8 */
+ cmd_buf += sizeof(struct ctrl_alloc_header) / sizeof(unsigned int);
+
+ /* Reserve space for VDEC extension command and fill it */
+ vdec_ext = (struct vdec_ext_cmd *)cmd_buf;
+ cmd_buf += sizeof(struct vdec_ext_cmd) / sizeof(unsigned int);
+
+ result = translation_pvdecsetup_vdecext(vdec_ext, dec_pict, pict_cmds,
+ str_unit,
+ pstr_config_data->vid_std,
+ parser_mode);
+ if (result != IMG_SUCCESS)
+ return result;
+
+ vdec_ext->hdr_size = hdr_size;
+
+ /* Add VLC tables to control allocation, skip when VC1 */
+ if (pstr_config_data->vid_std != VDEC_STD_VC1 &&
+ dec_pict->vlc_idx_tables_bufinfo &&
+ dec_pict->vlc_idx_tables_bufinfo->cpu_virt) {
+ unsigned short *vlc_idx_tables = (unsigned short *)
+ dec_pict->vlc_idx_tables_bufinfo->cpu_virt;
+ /*
+ * Get count of elements in VLC idx table. Each element is made
+ * of 3 IMG_UINT16, see e.g. mpeg2_idx.c
+ */
+ unsigned int vlc_idx_count =
+ dec_pict->vlc_idx_tables_bufinfo->buf_size /
+ (3 * sizeof(unsigned short));
+
+ /* Add command to DMA VLC */
+ result = translation_pvdecsetup_vlcdma
+ (dec_pict->vlc_tables_bufinfo, &cmd_buf,
+ (ctrl_alloc_end - (unsigned long)cmd_buf) / sizeof(unsigned int));
+
+ if (result != IMG_SUCCESS)
+ return result;
+
+ /* Add command to configure VLC tables */
+ result = translation_pvdecsetup_vlctables
+ ((unsigned short (*)[3])vlc_idx_tables, vlc_idx_count, &cmd_buf,
+ (ctrl_alloc_end - (unsigned long)cmd_buf) / sizeof(unsigned int),
+ regs_offset->vec_offset);
+
+ if (result != IMG_SUCCESS)
+ return result;
+ }
+
+ /* Setup commands for standards other than HEVC */
+ if (pstr_config_data->vid_std != VDEC_STD_HEVC) {
+ translation_pvdec_setup_commands
+ (pict_cmds, &cmd_buf,
+ (ctrl_alloc_end - (unsigned long)cmd_buf) / sizeof(unsigned int),
+ regs_offset->vdmc_cmd_offset);
+ }
+
+ /* Setup commands for HEVC */
+ vdec_ext->mem_to_reg_size = 0;
+
+#ifdef HAS_HEVC
+ if (pstr_config_data->vid_std == VDEC_STD_HEVC) {
+ result = translation_pvdec_setup_pvdec_commands
+ (picture, dec_pict, str_unit,
+ regs_offset, &cmd_buf,
+ (ctrl_alloc_end - (unsigned long)cmd_buf) / sizeof(unsigned int),
+ &memto_reg_host_part, pict_cmds);
+ if (result != IMG_SUCCESS) {
+ pr_err("Failed to setup VDMC & VDEB firmware commands.");
+ return result;
+ }
+
+ /* Set size of MemToReg buffer in VDEC extension command */
+ VDEC_ASSERT(MEM_TO_REG_BUF_SIZE <
+ (MEM2REG_SIZE_BUF_TOTAL_MASK >> MEM2REG_SIZE_BUF_TOTAL_SHIFT));
+ VDEC_ASSERT(memto_reg_host_part <
+ (MEM2REG_SIZE_HOST_PART_MASK >> MEM2REG_SIZE_HOST_PART_SHIFT));
+
+ vdec_ext->mem_to_reg_size = (MEM_TO_REG_BUF_SIZE << MEM2REG_SIZE_BUF_TOTAL_SHIFT) |
+ (memto_reg_host_part << MEM2REG_SIZE_HOST_PART_SHIFT);
+
+ dec_pict->genc_id = picture->pict_res_int->seq_resint->genc_buf_id;
+ dec_pict->genc_bufs = picture->pict_res_int->seq_resint->genc_buffers;
+ }
+#endif
+ /* Finally mark end of commands */
+ *(cmd_buf++) = CMD_COMPLETION;
+
+ /* Print message for debugging */
+ {
+ int i;
+
+ for (i = 0; i < ((unsigned long)cmd_buf - ctrl_alloc) / sizeof(unsigned int); i++)
+ pr_debug("ctrl_alloc_buf[%d] == %08x\n", i,
+ ((unsigned int *)ctrl_alloc)[i]);
+ }
+ /* Transfer control allocation command to device memory */
+ dec_pict->ctrl_alloc_bytes = ((unsigned long)cmd_buf - ctrl_alloc);
+ dec_pict->ctrl_alloc_offset = dec_pict->ctrl_alloc_bytes;
+ dec_pict->operating_op = pict_cmds[VDECFW_CMD_OPERATING_MODE];
+
+ /*
+ * NOTE : Nothing related to tiling will be used.
+ * result = translation_ConfigureTiling(psStrUnit, psDecPict,
+ * psCoreProps);
+ */
+
+ return result;
+}
+
+int translation_fragment_prepare(struct dec_decpict *dec_pict,
+ struct lst_t *decpic_seg_list, int eop,
+ struct dec_pict_fragment *pict_fragement)
+{
+ int result;
+ unsigned int *cmd_buf;
+ struct vidio_ddbufinfo *batchmsg_bufinfo;
+ unsigned long ctrl_alloc;
+ unsigned long ctrl_alloc_end;
+
+ if (!dec_pict || !dec_pict->batch_msginfo ||
+ !decpic_seg_list || !pict_fragement)
+ return IMG_ERROR_INVALID_PARAMETERS;
+
+ batchmsg_bufinfo = dec_pict->batch_msginfo->ddbuf_info;
+
+ ctrl_alloc = (unsigned long)batchmsg_bufinfo->cpu_virt +
+ dec_pict->ctrl_alloc_offset;
+ ctrl_alloc_end = (unsigned long)batchmsg_bufinfo->cpu_virt +
+ batchmsg_bufinfo->buf_size;
+
+ /*
+ * Setup initial simple bitstream configuration to be used by parser
+ * task
+ */
+ cmd_buf = (unsigned int *)ctrl_alloc;
+ result = translation_pvdec_adddma_transfers
+ (decpic_seg_list, &cmd_buf,
+ (ctrl_alloc_end - (unsigned long)cmd_buf) / sizeof(unsigned int),
+ dec_pict, eop);
+
+ if (result != IMG_SUCCESS)
+ return result;
+
+ /* Finally mark end of commands */
+ *(cmd_buf++) = CMD_COMPLETION;
+
+ /* Transfer control allocation command to device memory */
+ pict_fragement->ctrl_alloc_offset = dec_pict->ctrl_alloc_offset;
+ pict_fragement->ctrl_alloc_bytes =
+ ((unsigned long)cmd_buf - ctrl_alloc);
+
+ dec_pict->ctrl_alloc_offset += pict_fragement->ctrl_alloc_bytes;
+
+ return result;
+}
+#endif /* VDEC_USE_PVDEC */
diff --git a/drivers/staging/media/vxd/decoder/translation_api.h b/drivers/staging/media/vxd/decoder/translation_api.h
new file mode 100644
index 000000000000..43c570760d57
--- /dev/null
+++ b/drivers/staging/media/vxd/decoder/translation_api.h
@@ -0,0 +1,42 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * VDECDD translation APIs.
+ *
+ * Copyright (c) Imagination Technologies Ltd.
+ * Copyright (c) 2021 Texas Instruments Incorporated - http://www.ti.com/
+ *
+ * Authors:
+ * Amit Makani <[email protected]>
+ *
+ * Re-written for upstreaming
+ * Sidraya Jayagond <[email protected]>
+ * Prashanth Kumar Amai <[email protected]>
+ */
+#ifndef __TRANSLATION_API_H__
+#define __TRANSLATION_API_H__
+
+#include "decoder.h"
+#include "hw_control.h"
+#include "vdecdd_defs.h"
+#include "vdec_defs.h"
+#include "vxd_props.h"
+
+/*
+ * This function submits a stream unit for translation
+ * into a control allocation buffer used in PVDEC operation.
+ */
+int translation_ctrl_alloc_prepare
+ (struct vdec_str_configdata *psstr_config_data,
+ struct vdecdd_str_unit *psstrunit,
+ struct dec_decpict *psdecpict,
+ const struct vxd_coreprops *core_props,
+ struct decoder_regsoffsets *regs_offset);
+
+/*
+ * This function prepares the control allocation for a picture fragment.
+ */
+int translation_fragment_prepare(struct dec_decpict *psdecpict,
+ struct lst_t *decpic_seg_list, int eop,
+ struct dec_pict_fragment *pict_fragement);
+
+#endif /* __TRANSLATION_API_H__ */
--
2.17.1
From: Sidraya <[email protected]>
The memory manager helper library provides functions for managing
the memory context of a decode stream. Each stream will have its
own memory context with associated mmu context and heap
allocations. The memory manager tracks the allocations and mappings
per context, as well as providing a wrapper around the MMU library.
It also provides functions for the driver to query information about
the page table directory for a particular memory context.
In addition, the memory manager provides the ability to plug in
different heaps (unified, carveout, dmabuf, etc.) that the caller
can use when doing memory allocations.
By default, the "unified" heap functionality is supported. No other
types of heaps are supported at this time, though the framework is
present to add more heap types in the future, if needed. This
heap is used only for allocating internal buffers used for communication
with the hardware, and for loading the firmware.
Functions are provided for creating/destroying a memory context,
creating/destroying an MMU context, mapping and unmapping buffers in
the device MMU, allocating and freeing buffers from specified available
heaps, and retrieving information about those allocations.
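A minimal usage sketch of the helpers added in this patch (illustrative
only: return-value checks are omitted, "dev" and "size" are placeholders,
and only .type is set in the heap_config for brevity):
	struct heap_config cfg = { .type = MEM_HEAP_TYPE_UNIFIED };
	struct mem_ctx *mem_ctx;
	struct mmu_ctx *mmu_ctx;
	int heap_id, buf_id;
	img_mem_add_heap(&cfg, &heap_id);
	img_mem_create_ctx(&mem_ctx);
	img_mmu_ctx_create(dev, 40, mem_ctx, heap_id, NULL, NULL, &mmu_ctx);
	img_mem_alloc(dev, mem_ctx, heap_id, size, (enum mem_attr)0, &buf_id);
	img_mem_map_km(mem_ctx, buf_id);
	/* fill the buffer via img_mem_get_kptr(mem_ctx, buf_id), then... */
	img_mem_sync_cpu_to_device(mem_ctx, buf_id);
	img_mem_free(mem_ctx, buf_id);
	img_mmu_ctx_destroy(mmu_ctx);
	img_mem_destroy_ctx(mem_ctx);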
Signed-off-by: Buddy Liong <[email protected]>
Signed-off-by: Angela Stegmaier <[email protected]>
Signed-off-by: Sidraya <[email protected]>
---
MAINTAINERS | 3 +
.../staging/media/vxd/common/img_mem_man.c | 1124 +++++++++++++++++
.../staging/media/vxd/common/img_mem_man.h | 231 ++++
.../media/vxd/common/img_mem_unified.c | 276 ++++
4 files changed, 1634 insertions(+)
create mode 100644 drivers/staging/media/vxd/common/img_mem_man.c
create mode 100644 drivers/staging/media/vxd/common/img_mem_man.h
create mode 100644 drivers/staging/media/vxd/common/img_mem_unified.c
diff --git a/MAINTAINERS b/MAINTAINERS
index 2e921650a14c..150272927839 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -19537,6 +19537,9 @@ M: Sidraya Jayagond <[email protected]>
L: [email protected]
S: Maintained
F: Documentation/devicetree/bindings/media/img,d5520-vxd.yaml
+F: drivers/staging/media/vxd/common/img_mem_man.c
+F: drivers/staging/media/vxd/common/img_mem_man.h
+F: drivers/staging/media/vxd/common/img_mem_unified.c
F: drivers/staging/media/vxd/common/imgmmu.c
F: drivers/staging/media/vxd/common/imgmmu.h
diff --git a/drivers/staging/media/vxd/common/img_mem_man.c b/drivers/staging/media/vxd/common/img_mem_man.c
new file mode 100644
index 000000000000..cf9792d9a1a9
--- /dev/null
+++ b/drivers/staging/media/vxd/common/img_mem_man.c
@@ -0,0 +1,1124 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * IMG DEC Memory Manager
+ *
+ * Copyright (c) Imagination Technologies Ltd.
+ * Copyright (c) 2021 Texas Instruments Incorporated - http://www.ti.com/
+ *
+ * Authors:
+ * Angela Stegmaier <[email protected]>
+ *
+ * Re-written for upstreaming
+ * Sidraya Jayagond <[email protected]>
+ */
+
+#include <linux/idr.h>
+#include <linux/slab.h>
+#include <linux/dma-mapping.h>
+#include <linux/printk.h>
+#include <linux/mutex.h>
+#include <media/v4l2-ctrls.h>
+#include <media/v4l2-device.h>
+#include <media/v4l2-mem2mem.h>
+
+#include "imgmmu.h"
+#include "img_mem_man.h"
+
+#define VXD_MMU_SHIFT 8 /* assume 40-bit MMU */
+/* heaps ids (global) */
+#define MIN_HEAP 1
+#define MAX_HEAP 16
+
+/*
+ * struct mem_man - the device memory manager
+ * @dev: device that owns this memory manager
+ * @heaps: idr list of heaps for the device memory manager
+ * @mem_ctxs: list of memory contexts (mem_ctx) attached to this device
+ * @mutex: mutex for this device
+ */
+struct mem_man {
+ void *dev;
+ struct idr *heaps;
+ struct list_head mem_ctxs;
+ struct mutex *mutex; /* mutex for this device */
+};
+
+static struct mem_man mem_man_data = {0};
+
+/**
+ * struct mmu_page - the mmu page information for the buffer
+ * @buffer: buffer pointer for the particular mmu_page
+ * @page_cfg: mmu page configuration of physical and virtual addr
+ * @addr_shift: address shifting information
+ */
+struct mmu_page {
+ struct buffer *buffer;
+ struct mmu_page_cfg page_cfg;
+ unsigned int addr_shift;
+};
+
+static void _img_mem_free(struct buffer *buffer);
+static void _img_mmu_unmap(struct mmu_ctx_mapping *mapping);
+static void _img_mmu_ctx_destroy(struct mmu_ctx *ctx);
+
+#if defined(DEBUG_DECODER_DRIVER)
+static unsigned char *get_heap_name(enum heap_type type)
+{
+ switch (type) {
+ case MEM_HEAP_TYPE_UNIFIED:
+ return "unified";
+ default:
+ return "unknown";
+ }
+}
+#endif
+
+int img_mem_add_heap(const struct heap_config *heap_cfg, int *heap_id)
+{
+ struct mem_man *mem_man = &mem_man_data;
+ struct heap *heap;
+ int (*init_fn)(const struct heap_config *heap_cfg, struct heap *heap);
+ int ret;
+
+ switch (heap_cfg->type) {
+ case MEM_HEAP_TYPE_UNIFIED:
+ init_fn = img_mem_unified_init;
+ break;
+ default:
+ dev_err(mem_man->dev, "%s: heap type %d unknown\n", __func__,
+ heap_cfg->type);
+ return -EINVAL;
+ }
+
+ heap = kmalloc(sizeof(*heap), GFP_KERNEL);
+ if (!heap)
+ return -ENOMEM;
+
+ ret = mutex_lock_interruptible_nested(mem_man->mutex, SUBCLASS_IMGMEM);
+ if (ret)
+ goto lock_failed;
+
+ ret = idr_alloc(mem_man->heaps, heap, MIN_HEAP, MAX_HEAP, GFP_KERNEL);
+ if (ret < 0) {
+ dev_err(mem_man->dev, "%s: idr_alloc failed\n", __func__);
+ goto alloc_id_failed;
+ }
+
+ heap->id = ret;
+ heap->type = heap_cfg->type;
+ heap->options = heap_cfg->options;
+ heap->to_dev_addr = heap_cfg->to_dev_addr;
+ heap->priv = NULL;
+
+ ret = init_fn(heap_cfg, heap);
+ if (ret) {
+ dev_err(mem_man->dev, "%s: heap init failed\n", __func__);
+ goto heap_init_failed;
+ }
+
+ *heap_id = heap->id;
+ mutex_unlock(mem_man->mutex);
+
+#ifdef DEBUG_DECODER_DRIVER
+ dev_info(mem_man->dev, "%s created heap %d type %d (%s)\n",
+ __func__, *heap_id, heap_cfg->type, get_heap_name(heap->type));
+#endif
+ return 0;
+
+heap_init_failed:
+ idr_remove(mem_man->heaps, heap->id);
+alloc_id_failed:
+ mutex_unlock(mem_man->mutex);
+lock_failed:
+ kfree(heap);
+ return ret;
+}
+
+static void _img_mem_del_heap(struct heap *heap)
+{
+ struct mem_man *mem_man = &mem_man_data;
+
+ if (heap->ops->destroy)
+ heap->ops->destroy(heap);
+
+ idr_remove(mem_man->heaps, heap->id);
+}
+
+void img_mem_del_heap(int heap_id)
+{
+ struct mem_man *mem_man = &mem_man_data;
+ struct heap *heap;
+
+ mutex_lock_nested(mem_man->mutex, SUBCLASS_IMGMEM);
+
+ heap = idr_find(mem_man->heaps, heap_id);
+ if (!heap) {
+ dev_warn(mem_man->dev, "%s heap %d not found!\n", __func__,
+ heap_id);
+ mutex_unlock(mem_man->mutex);
+ return;
+ }
+
+ _img_mem_del_heap(heap);
+
+ mutex_unlock(mem_man->mutex);
+
+ kfree(heap);
+}
+
+int img_mem_create_ctx(struct mem_ctx **new_ctx)
+{
+ struct mem_man *mem_man = &mem_man_data;
+ struct mem_ctx *ctx;
+
+ ctx = kzalloc(sizeof(*ctx), GFP_KERNEL);
+ if (!ctx)
+ return -ENOMEM;
+
+ ctx->buffers = kzalloc(sizeof(*ctx->buffers), GFP_KERNEL);
+	if (!ctx->buffers) {
+		kfree(ctx);
+		return -ENOMEM;
+	}
+ idr_init(ctx->buffers);
+
+ INIT_LIST_HEAD(&ctx->mmu_ctxs);
+
+ mutex_lock_nested(mem_man->mutex, SUBCLASS_IMGMEM);
+ list_add(&ctx->mem_man_entry, &mem_man->mem_ctxs);
+ mutex_unlock(mem_man->mutex);
+
+ *new_ctx = ctx;
+ return 0;
+}
+
+static void _img_mem_destroy_ctx(struct mem_ctx *ctx)
+{
+ struct mem_man *mem_man = &mem_man_data;
+ struct buffer *buffer;
+ int buff_id;
+
+ /* free derelict mmu contexts */
+ while (!list_empty(&ctx->mmu_ctxs)) {
+ struct mmu_ctx *mc;
+
+ mc = list_first_entry(&ctx->mmu_ctxs,
+ struct mmu_ctx, mem_ctx_entry);
+ dev_warn(mem_man->dev, "%s: found derelict mmu context %p\n",
+ __func__, mc);
+ _img_mmu_ctx_destroy(mc);
+ kfree(mc);
+ }
+
+ /* free derelict buffers */
+ buff_id = MEM_MAN_MIN_BUFFER;
+ buffer = idr_get_next(ctx->buffers, &buff_id);
+ while (buffer) {
+ dev_warn(mem_man->dev, "%s: found derelict buffer %d\n",
+ __func__, buff_id);
+ if (buffer->heap)
+ _img_mem_free(buffer);
+ else
+ idr_remove(ctx->buffers, buffer->id);
+ kfree(buffer);
+ buff_id = MEM_MAN_MIN_BUFFER;
+ buffer = idr_get_next(ctx->buffers, &buff_id);
+ }
+
+ idr_destroy(ctx->buffers);
+ kfree(ctx->buffers);
+ __list_del_entry(&ctx->mem_man_entry);
+}
+
+void img_mem_destroy_ctx(struct mem_ctx *ctx)
+{
+ struct mem_man *mem_man = &mem_man_data;
+
+ mutex_lock_nested(mem_man->mutex, SUBCLASS_IMGMEM);
+ _img_mem_destroy_ctx(ctx);
+ mutex_unlock(mem_man->mutex);
+
+ kfree(ctx);
+}
+
+static int _img_mem_alloc(void *device, struct mem_ctx *ctx,
+ struct heap *heap, unsigned long size,
+ enum mem_attr attr, struct buffer **buffer_new)
+{
+ struct buffer *buffer;
+ int ret;
+
+ if (size == 0) {
+ dev_err(device, "%s: buffer size is zero\n", __func__);
+ return -EINVAL;
+ }
+
+ if (!heap->ops || !heap->ops->alloc) {
+ dev_err(device, "%s: no alloc function in heap %d!\n",
+ __func__, heap->id);
+ return -EINVAL;
+ }
+
+ buffer = kzalloc(sizeof(*buffer), GFP_KERNEL);
+ if (!buffer)
+ return -ENOMEM;
+
+ ret = idr_alloc(ctx->buffers, buffer,
+ MEM_MAN_MIN_BUFFER, MEM_MAN_MAX_BUFFER, GFP_KERNEL);
+ if (ret < 0) {
+ dev_err(device, "%s: idr_alloc failed\n", __func__);
+ goto idr_alloc_failed;
+ }
+
+ buffer->id = ret;
+ buffer->request_size = size;
+ buffer->actual_size = ((size + PAGE_SIZE - 1) / PAGE_SIZE) * PAGE_SIZE;
+ buffer->device = device;
+ buffer->mem_ctx = ctx;
+ buffer->heap = heap;
+ INIT_LIST_HEAD(&buffer->mappings);
+ buffer->kptr = NULL;
+ buffer->priv = NULL;
+
+ ret = heap->ops->alloc(device, heap, buffer->actual_size, attr,
+ buffer);
+ if (ret) {
+ dev_err(device, "%s: heap %d alloc failed\n", __func__,
+ heap->id);
+ goto heap_alloc_failed;
+ }
+
+ *buffer_new = buffer;
+
+ dev_dbg(device, "%s heap %p ctx %p created buffer %d (%p) actual_size %zu\n",
+ __func__, heap, ctx, buffer->id, buffer, buffer->actual_size);
+ return 0;
+
+heap_alloc_failed:
+ idr_remove(ctx->buffers, buffer->id);
+idr_alloc_failed:
+ kfree(buffer);
+ return ret;
+}
+
+int img_mem_alloc(void *device, struct mem_ctx *ctx, int heap_id,
+ unsigned long size, enum mem_attr attr, int *buf_id)
+{
+ struct mem_man *mem_man = &mem_man_data;
+ struct heap *heap;
+ struct buffer *buffer;
+ int ret;
+
+ dev_dbg(device, "%s heap %d ctx %p size %zu\n", __func__, heap_id,
+ ctx, size);
+
+ ret = mutex_lock_interruptible_nested(mem_man->mutex, SUBCLASS_IMGMEM);
+ if (ret)
+ return ret;
+
+ heap = idr_find(mem_man->heaps, heap_id);
+ if (!heap) {
+ dev_err(device, "%s: heap id %d not found\n", __func__,
+ heap_id);
+ mutex_unlock(mem_man->mutex);
+ return -EINVAL;
+ }
+
+ ret = _img_mem_alloc(device, ctx, heap, size, attr, &buffer);
+ if (ret) {
+ mutex_unlock(mem_man->mutex);
+ return ret;
+ }
+
+ *buf_id = buffer->id;
+ mutex_unlock(mem_man->mutex);
+
+ dev_dbg(device, "%s heap %d ctx %p created buffer %d (%p) size %zu\n",
+ __func__, heap_id, ctx, *buf_id, buffer, size);
+ return ret;
+}
+
+static int _img_mem_import(void *device, struct mem_ctx *ctx,
+ unsigned long size, enum mem_attr attr, struct buffer **buffer_new)
+{
+ struct buffer *buffer;
+ int ret;
+
+ if (size == 0) {
+ dev_err(device, "%s: buffer size is zero\n", __func__);
+ return -EINVAL;
+ }
+
+ buffer = kzalloc(sizeof(*buffer), GFP_KERNEL);
+ if (!buffer)
+ return -ENOMEM;
+
+ ret = idr_alloc(ctx->buffers, buffer,
+ MEM_MAN_MIN_BUFFER, MEM_MAN_MAX_BUFFER, GFP_KERNEL);
+ if (ret < 0) {
+ dev_err(device, "%s: idr_alloc failed\n", __func__);
+ goto idr_alloc_failed;
+ }
+
+ buffer->id = ret;
+ buffer->request_size = size;
+ buffer->actual_size = ((size + PAGE_SIZE - 1) / PAGE_SIZE) * PAGE_SIZE;
+ buffer->device = device;
+ buffer->mem_ctx = ctx;
+ buffer->heap = NULL;
+ INIT_LIST_HEAD(&buffer->mappings);
+ buffer->kptr = NULL;
+ buffer->priv = NULL;
+
+ *buffer_new = buffer;
+
+ dev_dbg(device, "%s ctx %p created buffer %d (%p) actual_size %zu\n",
+ __func__, ctx, buffer->id, buffer, buffer->actual_size);
+ return 0;
+
+idr_alloc_failed:
+ kfree(buffer);
+ return ret;
+}
+
+int img_mem_import(void *device, struct mem_ctx *ctx,
+ unsigned long size, enum mem_attr attr, int *buf_id)
+{
+ struct mem_man *mem_man = &mem_man_data;
+ struct buffer *buffer;
+ int ret;
+
+ dev_dbg(device, "%s ctx %p size %zu\n", __func__, ctx, size);
+
+ ret = mutex_lock_interruptible_nested(mem_man->mutex, SUBCLASS_IMGMEM);
+ if (ret)
+ return ret;
+
+ ret = _img_mem_import(device, ctx, size, attr, &buffer);
+ if (ret) {
+ mutex_unlock(mem_man->mutex);
+ return ret;
+ }
+
+ *buf_id = buffer->id;
+ mutex_unlock(mem_man->mutex);
+
+ dev_dbg(device, "%s ctx %p created buffer %d (%p) size %zu\n",
+ __func__, ctx, *buf_id, buffer, size);
+ return ret;
+}
+
+static void _img_mem_free(struct buffer *buffer)
+{
+ void *dev = buffer->device;
+ struct heap *heap = buffer->heap;
+ struct mem_ctx *ctx = buffer->mem_ctx;
+
+ if (!heap->ops || !heap->ops->free) {
+ dev_err(dev, "%s: no free function in heap %d!\n",
+ __func__, heap->id);
+ return;
+ }
+
+ while (!list_empty(&buffer->mappings)) {
+ struct mmu_ctx_mapping *map;
+
+ map = list_first_entry(&buffer->mappings,
+ struct mmu_ctx_mapping, buffer_entry);
+ dev_warn(dev, "%s: found mapping for buffer %d (size %zu)\n",
+ __func__, map->buffer->id, map->buffer->actual_size);
+
+ _img_mmu_unmap(map);
+
+ kfree(map);
+ }
+
+ heap->ops->free(heap, buffer);
+
+ idr_remove(ctx->buffers, buffer->id);
+}
+
+void img_mem_free(struct mem_ctx *ctx, int buff_id)
+{
+ struct mem_man *mem_man = &mem_man_data;
+ struct buffer *buffer;
+
+ mutex_lock_nested(mem_man->mutex, SUBCLASS_IMGMEM);
+
+ buffer = idr_find(ctx->buffers, buff_id);
+ if (!buffer) {
+ dev_err(mem_man->dev, "%s: buffer id %d not found\n",
+ __func__, buff_id);
+ mutex_unlock(mem_man->mutex);
+ return;
+ }
+
+ _img_mem_free(buffer);
+
+ mutex_unlock(mem_man->mutex);
+
+ kfree(buffer);
+}
+
+void img_mem_free_bufid(struct mem_ctx *ctx, int buff_id)
+{
+ struct mem_man *mem_man = &mem_man_data;
+ struct buffer *buffer;
+
+ mutex_lock_nested(mem_man->mutex, SUBCLASS_IMGMEM);
+
+ buffer = idr_find(ctx->buffers, buff_id);
+ if (!buffer) {
+ dev_err(mem_man->dev, "%s: buffer id %d not found\n",
+ __func__, buff_id);
+ mutex_unlock(mem_man->mutex);
+ return;
+ }
+
+ idr_remove(ctx->buffers, buffer->id);
+
+ mutex_unlock(mem_man->mutex);
+
+ kfree(buffer);
+}
+
+static int _img_mem_map_km(struct buffer *buffer)
+{
+ void *dev = buffer->device;
+ struct heap *heap = buffer->heap;
+
+ if (!heap->ops || !heap->ops->map_km) {
+ dev_err(dev, "%s: no map_km in heap %d!\n", __func__, heap->id);
+ return -EINVAL;
+ }
+
+ return heap->ops->map_km(heap, buffer);
+}
+
+int img_mem_map_km(struct mem_ctx *ctx, int buff_id)
+{
+ struct mem_man *mem_man = &mem_man_data;
+ struct buffer *buffer;
+ int ret;
+
+ mutex_lock_nested(mem_man->mutex, SUBCLASS_IMGMEM);
+ buffer = idr_find(ctx->buffers, buff_id);
+ if (!buffer) {
+ dev_err(mem_man->dev, "%s: buffer id %d not found\n",
+ __func__, buff_id);
+ mutex_unlock(mem_man->mutex);
+ return -EINVAL;
+ }
+
+ ret = _img_mem_map_km(buffer);
+
+ mutex_unlock(mem_man->mutex);
+
+ return ret;
+}
+
+void *img_mem_get_kptr(struct mem_ctx *ctx, int buff_id)
+{
+ struct mem_man *mem_man = &mem_man_data;
+ struct buffer *buffer;
+ void *kptr;
+
+ mutex_lock_nested(mem_man->mutex, SUBCLASS_IMGMEM);
+ buffer = idr_find(ctx->buffers, buff_id);
+ if (!buffer) {
+ dev_err(mem_man->dev, "%s: buffer id %d not found\n", __func__,
+ buff_id);
+ mutex_unlock(mem_man->mutex);
+ return NULL;
+ }
+ kptr = buffer->kptr;
+ mutex_unlock(mem_man->mutex);
+ return kptr;
+}
+
+static void _img_mem_sync_cpu_to_device(struct buffer *buffer)
+{
+ struct heap *heap = buffer->heap;
+
+ if (heap->ops && heap->ops->sync_cpu_to_dev)
+ heap->ops->sync_cpu_to_dev(heap, buffer);
+
+ /* sync to device memory */
+ mb();
+}
+
+int img_mem_sync_cpu_to_device(struct mem_ctx *ctx, int buff_id)
+{
+ struct mem_man *mem_man = &mem_man_data;
+ struct buffer *buffer;
+
+ mutex_lock_nested(mem_man->mutex, SUBCLASS_IMGMEM);
+ buffer = idr_find(ctx->buffers, buff_id);
+ if (!buffer) {
+ dev_err(mem_man->dev, "%s: buffer id %d not found\n", __func__,
+ buff_id);
+ mutex_unlock(mem_man->mutex);
+ return -EINVAL;
+ }
+
+ _img_mem_sync_cpu_to_device(buffer);
+
+ mutex_unlock(mem_man->mutex);
+ return 0;
+}
+
+static void _img_mem_sync_device_to_cpu(struct buffer *buffer)
+{
+ struct heap *heap = buffer->heap;
+
+ if (heap->ops && heap->ops->sync_dev_to_cpu)
+ heap->ops->sync_dev_to_cpu(heap, buffer);
+}
+
+int img_mem_sync_device_to_cpu(struct mem_ctx *ctx, int buff_id)
+{
+ struct mem_man *mem_man = &mem_man_data;
+ struct buffer *buffer;
+
+ mutex_lock_nested(mem_man->mutex, SUBCLASS_IMGMEM);
+ buffer = idr_find(ctx->buffers, buff_id);
+ if (!buffer) {
+ dev_err(mem_man->dev, "%s: buffer id %d not found\n", __func__,
+ buff_id);
+ mutex_unlock(mem_man->mutex);
+ return -EINVAL;
+ }
+
+ _img_mem_sync_device_to_cpu(buffer);
+
+ mutex_unlock(mem_man->mutex);
+ return 0;
+}
+
+static struct mmu_page_cfg *mmu_page_alloc(void *arg)
+{
+ struct mem_man *mem_man = &mem_man_data;
+ struct mmu_ctx *mmu_ctx = arg;
+ struct mmu_page *page;
+ struct buffer *buffer;
+ struct heap *heap;
+ int ret;
+
+ dev_dbg(mmu_ctx->device, "%s:%d arg %p\n", __func__, __LINE__, arg);
+
+ WARN_ON(!mutex_is_locked(mem_man->mutex));
+
+ page = kzalloc(sizeof(*page), GFP_KERNEL);
+ if (!page)
+ return NULL;
+
+ ret = _img_mem_alloc(mmu_ctx->device, mmu_ctx->mem_ctx,
+ mmu_ctx->heap, PAGE_SIZE, (enum mem_attr)0, &buffer);
+ if (ret) {
+ dev_err(mmu_ctx->device, "%s: img_mem_alloc failed (%d)\n",
+ __func__, ret);
+ goto free_page;
+ }
+
+ ret = _img_mem_map_km(buffer);
+ if (ret) {
+ dev_err(mmu_ctx->device, "%s: img_mem_map_km failed (%d)\n",
+ __func__, ret);
+ goto free_buffer;
+ }
+
+ page->addr_shift = mmu_ctx->mmu_config_addr_width - 32;
+ page->buffer = buffer;
+ page->page_cfg.cpu_virt_addr = (unsigned long)buffer->kptr;
+
+ heap = buffer->heap;
+ if (heap->ops && heap->ops->get_sg_table) {
+ void *sgt;
+
+ ret = heap->ops->get_sg_table(heap, buffer, &sgt);
+ if (ret) {
+ dev_err(mmu_ctx->device,
+ "%s: heap %d buffer %d no sg_table!\n",
+ __func__, heap->id, buffer->id);
+ ret = -EINVAL;
+ goto free_buffer;
+ }
+ page->page_cfg.phys_addr = sg_phys(img_mmu_get_sgl(sgt));
+ } else {
+ dev_err(mmu_ctx->device, "%s: heap %d buffer %d no get_sg!\n",
+ __func__, heap->id, buffer->id);
+ ret = -EINVAL;
+ goto free_buffer;
+ }
+
+ dev_dbg(mmu_ctx->device, "%s:%d virt addr %#lx\n", __func__, __LINE__,
+ page->page_cfg.cpu_virt_addr);
+ dev_dbg(mmu_ctx->device, "%s:%d phys addr %#llx\n", __func__, __LINE__,
+ page->page_cfg.phys_addr);
+ return &page->page_cfg;
+
+free_buffer:
+ _img_mem_free(buffer);
+ kfree(buffer);
+free_page:
+ kfree(page);
+ return NULL;
+}
+
+static void mmu_page_free(struct mmu_page_cfg *arg)
+{
+ struct mem_man *mem_man = &mem_man_data;
+ struct mmu_page *page;
+
+ page = container_of(arg, struct mmu_page, page_cfg);
+
+ WARN_ON(!mutex_is_locked(mem_man->mutex));
+
+ _img_mem_free(page->buffer);
+ kfree(page->buffer);
+ kfree(page);
+}
+
+static void mmu_page_write(struct mmu_page_cfg *page_cfg,
+ unsigned int offset, unsigned long long addr,
+ unsigned int flags)
+{
+ unsigned int *mem = (unsigned int *)page_cfg->cpu_virt_addr;
+ struct mmu_page *mmu_page;
+ struct heap *heap;
+
+ mmu_page = container_of(page_cfg, struct mmu_page, page_cfg);
+ heap = mmu_page->buffer->heap;
+
+ /* skip translation when flags are zero, assuming address is invalid */
+ if (flags && heap->to_dev_addr)
+ addr = heap->to_dev_addr(&heap->options, addr);
+ addr >>= mmu_page->addr_shift;
+
+ mem[offset] = addr | flags;
+}
+
+static void mmu_update_page(struct mmu_page_cfg *arg)
+{
+ struct mem_man *mem_man = &mem_man_data;
+ struct mmu_page *page;
+
+ page = container_of(arg, struct mmu_page, page_cfg);
+
+ WARN_ON(!mutex_is_locked(mem_man->mutex));
+
+ _img_mem_sync_cpu_to_device(page->buffer);
+}
+
+int img_mmu_ctx_create(void *device, unsigned int mmu_config_addr_width,
+ struct mem_ctx *mem_ctx, int heap_id,
+ void (*callback_fn)(enum mmu_callback_type type,
+ int buff_id, void *data),
+ void *callback_data, struct mmu_ctx **mmu_ctx)
+{
+ struct mem_man *mem_man = &mem_man_data;
+
+ static struct mmu_info mmu_functions = {
+ .pfn_page_alloc = mmu_page_alloc,
+ .pfn_page_free = mmu_page_free,
+ .pfn_page_write = mmu_page_write,
+ .pfn_page_update = mmu_update_page,
+ };
+ struct mmu_ctx *ctx;
+ int ret;
+
+ if (mmu_config_addr_width < 32) {
+ dev_err(device,
+ "%s: invalid addr_width (%d) must be >= 32 !\n",
+ __func__, mmu_config_addr_width);
+ return -EINVAL;
+ }
+
+ ctx = kzalloc(sizeof(*ctx), GFP_KERNEL);
+ if (!ctx)
+ return -ENOMEM;
+
+ ctx->device = device;
+ ctx->mem_ctx = mem_ctx;
+ ctx->mmu_config_addr_width = mmu_config_addr_width;
+
+ mutex_lock_nested(mem_man->mutex, SUBCLASS_IMGMEM);
+
+ ctx->heap = idr_find(mem_man->heaps, heap_id);
+ if (!ctx->heap) {
+ dev_err(device, "%s: invalid heap_id (%d)!\n", __func__,
+ heap_id);
+ mutex_unlock(mem_man->mutex);
+ kfree(ctx);
+ return -EINVAL;
+ }
+
+ mmu_functions.alloc_ctx = ctx;
+ ctx->mmu_dir = mmu_create_directory(&mmu_functions);
+ if (IS_ERR_VALUE((unsigned long)ctx->mmu_dir)) {
+ ret = (long)(ctx->mmu_dir);
+ dev_err(device, "%s: directory create failed (%d)!\n", __func__,
+ ret);
+ ctx->mmu_dir = NULL;
+ mutex_unlock(mem_man->mutex);
+ kfree(ctx);
+ return ret;
+ }
+
+ list_add(&ctx->mem_ctx_entry, &mem_ctx->mmu_ctxs);
+ INIT_LIST_HEAD(&ctx->mappings);
+
+ ctx->callback_fn = callback_fn;
+ ctx->callback_data = callback_data;
+
+ *mmu_ctx = ctx;
+
+ mutex_unlock(mem_man->mutex);
+
+ return 0;
+}
+
+static void _img_mmu_ctx_destroy(struct mmu_ctx *ctx)
+{
+ struct mem_man *mem_man = &mem_man_data;
+ int ret;
+
+ while (!list_empty(&ctx->mappings)) {
+ struct mmu_ctx_mapping *map;
+
+ map = list_first_entry(&ctx->mappings,
+ struct mmu_ctx_mapping, mmu_ctx_entry);
+#ifdef DEBUG_DECODER_DRIVER
+ dev_info(ctx->device,
+ "%s: found mapped buffer %d (size %zu)\n",
+ __func__, map->buffer->id, map->buffer->request_size);
+#endif
+
+ _img_mmu_unmap(map);
+
+ kfree(map);
+ }
+
+ ret = mmu_destroy_directory(ctx->mmu_dir);
+ if (ret)
+ dev_err(mem_man->dev, "mmu_destroy_directory failed (%d)!\n",
+ ret);
+ __list_del_entry(&ctx->mem_ctx_entry);
+}
+
+void img_mmu_ctx_destroy(struct mmu_ctx *ctx)
+{
+ struct mem_man *mem_man = &mem_man_data;
+
+ mutex_lock_nested(mem_man->mutex, SUBCLASS_IMGMEM);
+ _img_mmu_ctx_destroy(ctx);
+ mutex_unlock(mem_man->mutex);
+
+ kfree(ctx);
+}
+
+int img_mmu_map_sg(struct mmu_ctx *mmu_ctx, struct mem_ctx *mem_ctx,
+ int buff_id, void *sgt, unsigned int virt_addr,
+ unsigned int map_flags)
+{
+ struct mem_man *mem_man = &mem_man_data;
+ struct mmu_ctx_mapping *mapping;
+ struct mmu_heap_alloc heap_alloc;
+ struct buffer *buffer;
+ int ret = 0;
+
+ dev_dbg(mmu_ctx->device, "%s sgt %p virt_addr %#x\n", __func__,
+ sgt, virt_addr);
+
+ mapping = kzalloc(sizeof(*mapping), GFP_KERNEL);
+ if (!mapping)
+ return -ENOMEM;
+
+ mutex_lock_nested(mem_man->mutex, SUBCLASS_IMGMEM);
+ buffer = idr_find(mem_ctx->buffers, buff_id);
+ if (!buffer) {
+ dev_err(mmu_ctx->device, "%s: buffer id %d not found\n",
+ __func__, buff_id);
+ ret = -EINVAL;
+ goto error;
+ }
+ dev_dbg(mmu_ctx->device, "%s buffer %d 0x%p size %zu virt_addr %#x\n",
+ __func__, buff_id, buffer, buffer->request_size, virt_addr);
+
+ heap_alloc.virt_addr = virt_addr;
+ heap_alloc.alloc_size = buffer->actual_size;
+
+ mapping->mmu_ctx = mmu_ctx;
+ mapping->buffer = buffer;
+ mapping->virt_addr = virt_addr;
+
+ if (sgt) {
+ struct sg_table *sgt_new = sgt;
+
+ mapping->map = mmu_directory_map_sg(mmu_ctx->mmu_dir, sgt_new->sgl,
+ &heap_alloc, map_flags);
+ if (IS_ERR_VALUE((unsigned long)mapping->map)) {
+ ret = (long)(mapping->map);
+ mapping->map = NULL;
+ }
+ } else {
+ dev_err(mmu_ctx->device, "%s: buffer %d no get_sg!\n",
+ __func__, buffer->id);
+ ret = -EINVAL;
+ goto error;
+ }
+ if (ret) {
+ dev_err(mmu_ctx->device, "mmu_directory_map_sg failed (%d)!\n",
+ ret);
+ goto error;
+ }
+
+ list_add(&mapping->mmu_ctx_entry, &mmu_ctx->mappings);
+ list_add(&mapping->buffer_entry, &mapping->buffer->mappings);
+
+ if (mmu_ctx->callback_fn)
+ mmu_ctx->callback_fn(MMU_CALLBACK_MAP, buffer->id,
+ mmu_ctx->callback_data);
+
+ mutex_unlock(mem_man->mutex);
+ return 0;
+
+error:
+ mutex_unlock(mem_man->mutex);
+ kfree(mapping);
+ return ret;
+}
+
+int img_mmu_map(struct mmu_ctx *mmu_ctx, struct mem_ctx *mem_ctx,
+ int buff_id, unsigned int virt_addr, unsigned int map_flags)
+{
+ struct mem_man *mem_man = &mem_man_data;
+ struct mmu_ctx_mapping *mapping;
+ struct mmu_heap_alloc heap_alloc;
+ struct buffer *buffer;
+ struct heap *heap;
+ int ret;
+
+ dev_dbg(mmu_ctx->device, "%s buffer %d virt_addr %#x\n", __func__,
+ buff_id, virt_addr);
+
+ mapping = kzalloc(sizeof(*mapping), GFP_KERNEL);
+ if (!mapping)
+ return -ENOMEM;
+
+ mutex_lock_nested(mem_man->mutex, SUBCLASS_IMGMEM);
+ buffer = idr_find(mem_ctx->buffers, buff_id);
+ if (!buffer) {
+ dev_err(mmu_ctx->device, "%s: buffer id %d not found\n",
+ __func__, buff_id);
+ ret = -EINVAL;
+ goto error;
+ }
+ dev_dbg(mmu_ctx->device, "%s buffer %d 0x%p size %zu virt_addr %#x\n",
+ __func__, buff_id, buffer, buffer->request_size, virt_addr);
+
+ heap_alloc.virt_addr = virt_addr;
+ heap_alloc.alloc_size = buffer->actual_size;
+
+ mapping->mmu_ctx = mmu_ctx;
+ mapping->buffer = buffer;
+ mapping->virt_addr = virt_addr;
+
+ heap = buffer->heap;
+ if (heap->ops && heap->ops->get_sg_table) {
+ void *sgt;
+
+ ret = heap->ops->get_sg_table(heap, buffer, &sgt);
+ if (ret) {
+ dev_err(mmu_ctx->device,
+ "%s: heap %d buffer %d no sg_table!\n",
+ __func__, heap->id, buffer->id);
+ goto error;
+ }
+
+ mapping->map = mmu_directory_map_sg(mmu_ctx->mmu_dir, img_mmu_get_sgl(sgt),
+ &heap_alloc, map_flags);
+ if (IS_ERR_VALUE((unsigned long)mapping->map)) {
+ ret = (long)(mapping->map);
+ mapping->map = NULL;
+ }
+ } else {
+ dev_err(mmu_ctx->device, "%s: heap %d buffer %d no get_sg!\n",
+ __func__, heap->id, buffer->id);
+ ret = -EINVAL;
+ goto error;
+ }
+ if (ret) {
+ dev_err(mmu_ctx->device, "mmu_directory_map failed (%d)!\n",
+ ret);
+ goto error;
+ }
+
+ list_add(&mapping->mmu_ctx_entry, &mmu_ctx->mappings);
+ list_add(&mapping->buffer_entry, &mapping->buffer->mappings);
+
+ if (mmu_ctx->callback_fn)
+ mmu_ctx->callback_fn(MMU_CALLBACK_MAP, buffer->id,
+ mmu_ctx->callback_data);
+
+ mutex_unlock(mem_man->mutex);
+ return 0;
+
+error:
+ mutex_unlock(mem_man->mutex);
+ kfree(mapping);
+ return ret;
+}
+
+static void _img_mmu_unmap(struct mmu_ctx_mapping *mapping)
+{
+ struct mmu_ctx *ctx = mapping->mmu_ctx;
+ int res;
+
+ dev_dbg(ctx->device, "%s:%d mapping %p buffer %d\n", __func__,
+ __LINE__, mapping, mapping->buffer->id);
+
+ res = mmu_directory_unmap(mapping->map);
+ if (res)
+ dev_warn(ctx->device, "mmu_directory_unmap failed (%d)!\n",
+ res);
+
+ __list_del_entry(&mapping->mmu_ctx_entry);
+ __list_del_entry(&mapping->buffer_entry);
+
+ if (ctx->callback_fn)
+ ctx->callback_fn(MMU_CALLBACK_UNMAP, mapping->buffer->id,
+ ctx->callback_data);
+}
+
+int img_mmu_unmap(struct mmu_ctx *mmu_ctx, struct mem_ctx *mem_ctx,
+ int buff_id)
+{
+ struct mem_man *mem_man = &mem_man_data;
+ struct mmu_ctx_mapping *mapping;
+ struct list_head *lst;
+
+ dev_dbg(mmu_ctx->device, "%s:%d buffer %d\n", __func__, __LINE__,
+ buff_id);
+
+ mutex_lock_nested(mem_man->mutex, SUBCLASS_IMGMEM);
+
+ mapping = NULL;
+ list_for_each(lst, &mmu_ctx->mappings) {
+ struct mmu_ctx_mapping *m;
+
+ m = list_entry(lst, struct mmu_ctx_mapping, mmu_ctx_entry);
+ if (m->buffer->id == buff_id) {
+ mapping = m;
+ break;
+ }
+ }
+
+ if (!mapping) {
+ dev_err(mmu_ctx->device, "%s: buffer id %d not found\n",
+ __func__, buff_id);
+ mutex_unlock(mem_man->mutex);
+ return -EINVAL;
+ }
+
+ _img_mmu_unmap(mapping);
+
+ mutex_unlock(mem_man->mutex);
+ kfree(mapping);
+ return 0;
+}
+
+int img_mmu_get_ptd(const struct mmu_ctx *ctx, unsigned int *ptd)
+{
+ struct mem_man *mem_man = &mem_man_data;
+ struct mmu_page_cfg *page_cfg;
+ unsigned long long addr;
+
+ mutex_lock_nested(mem_man->mutex, SUBCLASS_IMGMEM);
+
+ page_cfg = mmu_directory_get_page(ctx->mmu_dir);
+ if (!page_cfg) {
+ mutex_unlock(mem_man->mutex);
+ return -EINVAL;
+ }
+
+ addr = page_cfg->phys_addr;
+ if (ctx->heap->to_dev_addr)
+ addr = ctx->heap->to_dev_addr(&ctx->heap->options, addr);
+
+ mutex_unlock(mem_man->mutex);
+
+ *ptd = (unsigned int)(addr >> VXD_MMU_SHIFT);
+
+ dev_dbg(ctx->device, "%s: addr %#llx ptd %#x\n", __func__,
+ page_cfg->phys_addr, *ptd);
+ return 0;
+}
+
+int img_mmu_get_pagetable_entry(const struct mmu_ctx *ctx, unsigned long dev_virt_addr)
+{
+ if (!ctx)
+ return 0xFFFFFF;
+
+ return mmu_directory_get_pagetable_entry(ctx->mmu_dir, dev_virt_addr);
+}
+
+/*
+ * Initialisation
+ */
+int img_mem_init(void *dev)
+{
+ struct mem_man *mem_man = &mem_man_data;
+
+ mem_man->dev = dev;
+ mem_man->heaps = kzalloc(sizeof(*mem_man->heaps), GFP_KERNEL);
+ if (!mem_man->heaps)
+ return -ENOMEM;
+ idr_init(mem_man->heaps);
+ INIT_LIST_HEAD(&mem_man->mem_ctxs);
+ mem_man->mutex = kzalloc(sizeof(*mem_man->mutex), GFP_KERNEL);
+ if (!mem_man->mutex) {
+ pr_err("Memory allocation failed for mutex\n");
+ idr_destroy(mem_man->heaps);
+ kfree(mem_man->heaps);
+ return -ENOMEM;
+ }
+ mutex_init(mem_man->mutex);
+
+ return 0;
+}
+
+void img_mem_exit(void)
+{
+ struct mem_man *mem_man = &mem_man_data;
+ struct heap *heap;
+ int heap_id;
+
+ /* keeps mutex checks (WARN_ON) happy, this will never actually wait */
+ mutex_lock_nested(mem_man->mutex, SUBCLASS_IMGMEM);
+
+ while (!list_empty(&mem_man->mem_ctxs)) {
+ struct mem_ctx *mc;
+
+ mc = list_first_entry(&mem_man->mem_ctxs,
+ struct mem_ctx, mem_man_entry);
+ dev_warn(mem_man->dev, "%s derelict memory context %p!\n",
+ __func__, mc);
+ _img_mem_destroy_ctx(mc);
+ kfree(mc);
+ }
+
+ heap_id = MIN_HEAP;
+ heap = idr_get_next(mem_man->heaps, &heap_id);
+ while (heap) {
+ dev_warn(mem_man->dev, "%s derelict heap %d!\n", __func__,
+ heap_id);
+ _img_mem_del_heap(heap);
+ kfree(heap);
+ heap_id = MIN_HEAP;
+ heap = idr_get_next(mem_man->heaps, &heap_id);
+ }
+ idr_destroy(mem_man->heaps);
+ kfree(mem_man->heaps);
+
+ mutex_unlock(mem_man->mutex);
+
+ mutex_destroy(mem_man->mutex);
+ kfree(mem_man->mutex);
+ mem_man->mutex = NULL;
+}
diff --git a/drivers/staging/media/vxd/common/img_mem_man.h b/drivers/staging/media/vxd/common/img_mem_man.h
new file mode 100644
index 000000000000..1a10ad994d6e
--- /dev/null
+++ b/drivers/staging/media/vxd/common/img_mem_man.h
@@ -0,0 +1,231 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * IMG DEC Memory Manager header file
+ *
+ * Copyright (c) Imagination Technologies Ltd.
+ * Copyright (c) 2021 Texas Instruments Incorporated - http://www.ti.com/
+ *
+ * Authors:
+ * Angela Stegmaier <[email protected]>
+ *
+ * Re-written for upstreaming
+ * Sidraya Jayagond <[email protected]>
+ */
+
+#ifndef _IMG_DEC_MEM_MGR_H
+#define _IMG_DEC_MEM_MGR_H
+
+#include <linux/types.h>
+
+/* buffer ids (per memory context) */
+#define MEM_MAN_MIN_BUFFER 1
+#define MEM_MAN_MAX_BUFFER 16384
+
+enum mem_attr {
+ MEM_ATTR_CACHED = 0x00000001,
+ MEM_ATTR_UNCACHED = 0x00000002,
+ MEM_ATTR_WRITECOMBINE = 0x00000004,
+ MEM_ATTR_SECURE = 0x00000010,
+ MEM_ATTR_FORCE32BITS = 0x7FFFFFFFU
+};
+
+enum mmu_callback_type {
+ MMU_CALLBACK_MAP = 1,
+ MMU_CALLBACK_UNMAP,
+ MMU_CALLBACK_FORCE32BITS = 0x7FFFFFFFU
+};
+
+enum heap_type {
+ MEM_HEAP_TYPE_UNIFIED = 1,
+ MEM_HEAP_TYPE_FORCE32BITS = 0x7FFFFFFFU
+};
+
+union heap_options {
+ struct {
+ long long gfp_type; /* pool and flags for buffer allocations */
+ } unified;
+};
+
+/**
+ * struct heap_config - contains heap configuration structure
+ * @type: enumeration of heap_type
+ * @options: pool and flags for buffer allocations, e.g. GFP_KERNEL
+ * @to_dev_addr: function pointer for retrieving device addr
+ */
+struct heap_config {
+ enum heap_type type;
+ union heap_options options;
+ unsigned long long (*to_dev_addr)(union heap_options *opts, unsigned long long addr);
+};
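+
+/*
+ * Illustrative example only: a minimal unified-heap configuration, assuming
+ * GFP_KERNEL allocations and no extra device address translation
+ * (example_heap_cfg is a hypothetical name):
+ *
+ *	static struct heap_config example_heap_cfg = {
+ *		.type = MEM_HEAP_TYPE_UNIFIED,
+ *		.options.unified.gfp_type = GFP_KERNEL,
+ *		.to_dev_addr = NULL,
+ *	};
+ */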
+
+/*
+ * struct mmu_heap - the MMU heap information
+ * @virt_addr_start: start of the device virtual address
+ * @alloc_atom: atom allocation in bytes
+ * @size: total size of the heap in bytes
+ */
+struct mmu_heap {
+ unsigned long virt_addr_start;
+ unsigned long alloc_atom;
+ unsigned long size;
+};
+
+/*
+ * struct mem_ctx - the memory context
+ * @buffers: idr list of buffers
+ * @mmu_ctxs: contains linked lists of struct mmu_ctx
+ * @mem_man_entry: the entry list for the mem_man:mem_ctxs linked list
+ */
+struct mem_ctx {
+ struct idr *buffers;
+ struct list_head mmu_ctxs;
+ struct list_head mem_man_entry;
+};
+
+/*
+ * struct mmu_ctx_mapping - the mmu context mapping information
+ * @mmu_ctx: pointer to the mmu_ctx to which this mmu mapping information
+ * belongs
+ * @buffer: pointer to the buffer which this mmu_ctx_mapping is for
+ * @map: pointer to the mmu_map to which this mmu_ctx_mapping belongs
+ * @virt_addr: virtual address
+ * @mmu_ctx_entry: the entry list for the mmu_ctx:mappings linked list.
+ * @buffer_entry: the entry list for buffer:mappings linked list.
+ */
+struct mmu_ctx_mapping {
+ struct mmu_ctx *mmu_ctx;
+ struct buffer *buffer;
+ struct mmu_map *map;
+ unsigned int virt_addr;
+ struct list_head mmu_ctx_entry;
+ struct list_head buffer_entry;
+};
+
+/*
+ * struct mmu_ctx - the mmu context information - one per stream
+ * @device: pointer to the device
+ * @mmu_config_addr_width: the address width for the mmu config
+ * @mem_ctx: pointer to the mem_ctx to which this mmu_ctx belongs
+ * @heap: pointer to the struct heap to which this mmu_ctx belongs
+ * @mmu_dir: pointer to the mmu_directory this mmu_ctx belongs to
+ * @mappings: contains linked list of struct mmu_ctx_mapping
+ * @mem_ctx_entry: the entry list for mem_ctx:mmu_ctxs
+ * @callback_fn: pointer to function callback
+ * @callback_data: pointer to the callback data
+ */
+struct mmu_ctx {
+ void *device;
+ unsigned int mmu_config_addr_width;
+ struct mem_ctx *mem_ctx;
+ struct heap *heap;
+ struct mmu_directory *mmu_dir;
+ struct list_head mappings;
+ struct list_head mem_ctx_entry;
+ void (*callback_fn)(enum mmu_callback_type type, int buff_id,
+ void *data);
+ void *callback_data;
+};
+
+/*
+ * struct buffer - the buffer information - one per allocation
+ * @id: buffer identification
+ * @request_size: requested size for the allocation
+ * @actual_size: size of the allocation aligned to PAGE_SIZE
+ * @device: pointer to the device
+ * @mem_ctx: pointer to the struct mem_ctx to which this buffer belongs
+ * @heap: pointer to the struct heap to which this buffer belongs
+ * @mappings: contains linked lists of struct mmu_ctx_mapping
+ * @kptr: pointer to the virtual mapping of the buffer object in kernel address
+ * space
+ * @priv: pointer to private data used for scatterlist table info
+ */
+struct buffer {
+ int id; /* Generated in <mem_ctx:buffers> */
+ unsigned long request_size;
+ unsigned long actual_size;
+ void *device;
+ struct mem_ctx *mem_ctx;
+ struct heap *heap;
+ struct list_head mappings; /* contains <struct mmu_ctx_mapping> */
+ void *kptr;
+ void *priv;
+};
+
+struct heap_ops {
+ int (*alloc)(void *device, struct heap *heap,
+ unsigned long size, enum mem_attr attr,
+ struct buffer *buffer);
+ void (*free)(struct heap *heap, struct buffer *buffer);
+ int (*map_km)(struct heap *heap, struct buffer *buffer);
+ int (*get_sg_table)(struct heap *heap, struct buffer *buffer,
+ void **sg_table);
+ void (*sync_cpu_to_dev)(struct heap *heap, struct buffer *buffer);
+ void (*sync_dev_to_cpu)(struct heap *heap, struct buffer *buffer);
+ void (*destroy)(struct heap *heap);
+};
+
+struct heap {
+ int id; /* Generated in <mem_man:heaps> */
+ enum heap_type type;
+ struct heap_ops *ops;
+ union heap_options options;
+ unsigned long long (*to_dev_addr)(union heap_options *opts, unsigned long long addr);
+ void *priv;
+};
+
+int img_mem_init(void *dev);
+void img_mem_exit(void);
+
+int img_mem_create_ctx(struct mem_ctx **new_ctx);
+void img_mem_destroy_ctx(struct mem_ctx *ctx);
+
+int img_mem_import(void *device, struct mem_ctx *ctx,
+ unsigned long size, enum mem_attr attr, int *buf_id);
+
+int img_mem_alloc(void *device, struct mem_ctx *ctx, int heap_id,
+ unsigned long size, enum mem_attr attributes, int *buf_id);
+void img_mem_free(struct mem_ctx *ctx, int buff_id);
+
+void img_mem_free_bufid(struct mem_ctx *ctx, int buf_id);
+
+int img_mem_map_km(struct mem_ctx *ctx, int buf_id);
+void *img_mem_get_kptr(struct mem_ctx *ctx, int buff_id);
+
+int img_mem_sync_cpu_to_device(struct mem_ctx *ctx, int buf_id);
+int img_mem_sync_device_to_cpu(struct mem_ctx *ctx, int buf_id);
+
+int img_mmu_ctx_create(void *device, unsigned int mmu_config_addr_width,
+ struct mem_ctx *mem_ctx, int heap_id,
+ void (*callback_fn)(enum mmu_callback_type type,
+ int buff_id, void *data),
+ void *callback_data, struct mmu_ctx **mmu_ctx);
+void img_mmu_ctx_destroy(struct mmu_ctx *ctx);
+
+int img_mmu_map(struct mmu_ctx *mmu_ctx, struct mem_ctx *mem_ctx,
+ int buff_id, unsigned int virt_addr, unsigned int map_flags);
+int img_mmu_map_sg(struct mmu_ctx *mmu_ctx, struct mem_ctx *mem_ctx,
+ int buff_id, void *sgt, unsigned int virt_addr,
+ unsigned int map_flags);
+int img_mmu_unmap(struct mmu_ctx *mmu_ctx, struct mem_ctx *mem_ctx,
+ int buff_id);
+
+int img_mmu_get_ptd(const struct mmu_ctx *ctx, unsigned int *ptd);
+
+int img_mmu_get_pagetable_entry(const struct mmu_ctx *ctx, unsigned long dev_virt_addr);
+
+int img_mem_add_heap(const struct heap_config *heap_cfg, int *heap_id);
+void img_mem_del_heap(int heap_id);
+
+/* Heap operation related function */
+int img_mem_unified_init(const struct heap_config *config,
+ struct heap *heap);
+
+/* page and sg list related functions */
+void img_mmu_get_pages(void **page_args, void *sgt_args);
+unsigned int img_mmu_get_orig_nents(void *sgt_args);
+void img_mmu_set_sgt_nents(void *sgt_args, int ret);
+void img_mmu_set_sg_table(void **sg_table_args, void *buffer);
+unsigned int img_mmu_get_sgl_length(void *sgl_args);
+void *img_mmu_get_sgl(void *sgt_args);
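+
+/*
+ * Typical call order (illustrative sketch only, using the functions declared
+ * above; heap_cfg, heap_id, mem_ctx, size and buf_id are hypothetical locals):
+ *
+ *	img_mem_init(dev);
+ *	img_mem_add_heap(&heap_cfg, &heap_id);
+ *	img_mem_create_ctx(&mem_ctx);
+ *	img_mem_alloc(dev, mem_ctx, heap_id, size, MEM_ATTR_UNCACHED, &buf_id);
+ *	img_mem_map_km(mem_ctx, buf_id);
+ *	... use img_mem_get_kptr() and img_mem_sync_*() while the buffer is live ...
+ *	img_mem_free(mem_ctx, buf_id);
+ *	img_mem_destroy_ctx(mem_ctx);
+ *	img_mem_exit();
+ */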
+
+#endif /* _IMG_DEC_MEM_MGR_H */
diff --git a/drivers/staging/media/vxd/common/img_mem_unified.c b/drivers/staging/media/vxd/common/img_mem_unified.c
new file mode 100644
index 000000000000..30108b25d8b0
--- /dev/null
+++ b/drivers/staging/media/vxd/common/img_mem_unified.c
@@ -0,0 +1,276 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * IMG DEC Memory Manager for unified memory
+ *
+ * Copyright (c) Imagination Technologies Ltd.
+ * Copyright (c) 2021 Texas Instruments Incorporated - http://www.ti.com/
+ *
+ * Authors:
+ * Angela Stegmaier <[email protected]>
+ *
+ * Re-written for upstreaming
+ * Sidraya Jayagond <[email protected]>
+ */
+
+#include <linux/dma-mapping.h>
+#include <media/v4l2-ctrls.h>
+#include <media/v4l2-device.h>
+#include <media/v4l2-mem2mem.h>
+
+#include "img_mem_man.h"
+
+void img_mmu_get_pages(void **page_args, void *sgt_args)
+{
+ struct page **pages = (struct page **)page_args;
+ struct sg_table *sgt = sgt_args;
+ struct scatterlist *sgl = sgt->sgl;
+ int i;
+
+ i = 0;
+ while (sgl) {
+ pages[i++] = sg_page(sgl);
+ sgl = sg_next(sgl);
+ }
+}
+
+unsigned int img_mmu_get_orig_nents(void *sgt_args)
+{
+ struct sg_table *sgt = sgt_args;
+
+ return sgt->orig_nents;
+}
+
+void img_mmu_set_sgt_nents(void *sgt_args, int ret)
+{
+ struct sg_table *sgt = sgt_args;
+
+ sgt->nents = ret;
+}
+
+void img_mmu_set_sg_table(void **sg_table_args, void *buffer)
+{
+ struct sg_table **sg_table = (struct sg_table **)sg_table_args;
+
+ *sg_table = buffer;
+}
+
+unsigned int img_mmu_get_sgl_length(void *sgl_args)
+{
+ struct scatterlist *sgl = (struct scatterlist *)sgl_args;
+
+ return sgl->length;
+}
+
+void *img_mmu_get_sgl(void *sgt_args)
+{
+ struct sg_table *sgt = sgt_args;
+
+ return sgt->sgl;
+}
+
+static int unified_alloc(void *device, struct heap *heap,
+ unsigned long size, enum mem_attr attr,
+ struct buffer *buffer)
+{
+ struct sg_table *sgt;
+ void *sgl;
+ int pages;
+ int ret;
+
+ dev_dbg(device, "%s:%d buffer %d (0x%p)\n", __func__, __LINE__,
+ buffer->id, buffer);
+
+ sgt = kmalloc(sizeof(*sgt), GFP_KERNEL);
+ if (!sgt)
+ return -ENOMEM;
+
+ pages = (size + PAGE_SIZE - 1) / PAGE_SIZE;
+
+ ret = sg_alloc_table(sgt, pages, GFP_KERNEL);
+ if (ret)
+ goto sg_alloc_table_failed;
+
+ sgl = img_mmu_get_sgl(sgt);
+ while (sgl) {
+ void *page;
+ unsigned long long dma_addr;
+
+ page = alloc_page(heap->options.unified.gfp_type);
+ if (!page) {
+ dev_err(device, "%s alloc_page failed!\n", __func__);
+ ret = -ENOMEM;
+ goto alloc_page_failed;
+ }
+
+ /*
+ * dma_map_page() is likely to fail if the allocation flags include
+ * GFP_HIGHMEM, since such a page is not mapped for the CPU. This
+ * should never happen in practice, because memory of this sort cannot
+ * be used for DMA anyway. To check whether this is the case, build
+ * with debug enabled and verify that page_address() does not return
+ * NULL for the page.
+ */
+ dma_addr = dma_map_page(device, page, 0, PAGE_SIZE, DMA_BIDIRECTIONAL);
+ if (dma_mapping_error(device, dma_addr)) {
+ __free_page(page);
+ dev_err(device, "%s dma_map_page failed!\n", __func__);
+ ret = -EIO;
+ goto alloc_page_failed;
+ }
+ dma_unmap_page(device, dma_addr, PAGE_SIZE, DMA_BIDIRECTIONAL);
+
+ sg_set_page(sgl, page, PAGE_SIZE, 0);
+
+ sgl = sg_next(sgl);
+ }
+
+ buffer->priv = sgt;
+ return 0;
+
+alloc_page_failed:
+ sgl = img_mmu_get_sgl(sgt);
+ while (sgl) {
+ void *page = sg_page(sgl);
+
+ if (page)
+ __free_page(page);
+
+ sgl = sg_next(sgl);
+ }
+ sg_free_table(sgt);
+sg_alloc_table_failed:
+ kfree(sgt);
+ return ret;
+}
+
+static void unified_free(struct heap *heap, struct buffer *buffer)
+{
+ void *dev = buffer->device;
+ void *sgt = buffer->priv;
+ void *sgl;
+
+ dev_dbg(dev, "%s:%d buffer %d (0x%p)\n", __func__, __LINE__,
+ buffer->id, buffer);
+
+ if (buffer->kptr) {
+ dev_dbg(dev, "%s vunmap 0x%p\n", __func__, buffer->kptr);
+ dma_unmap_sg(dev, img_mmu_get_sgl(sgt), img_mmu_get_orig_nents(sgt),
+ DMA_FROM_DEVICE);
+ vunmap(buffer->kptr);
+ }
+
+ sgl = img_mmu_get_sgl(sgt);
+ while (sgl) {
+ __free_page(sg_page(sgl));
+ sgl = sg_next(sgl);
+ }
+ sg_free_table(sgt);
+ kfree(sgt);
+}
+
+static int unified_map_km(struct heap *heap, struct buffer *buffer)
+{
+ void *dev = buffer->device;
+ void *sgt = buffer->priv;
+ void *sgl = img_mmu_get_sgl(sgt);
+ unsigned int num_pages = sg_nents(sgl);
+ unsigned int orig_nents = img_mmu_get_orig_nents(sgt);
+ void **pages;
+ int ret;
+ pgprot_t prot;
+
+ dev_dbg(dev, "%s:%d buffer %d (0x%p)\n", __func__, __LINE__, buffer->id, buffer);
+
+ if (buffer->kptr) {
+ dev_warn(dev, "%s called for already mapped buffer %d\n", __func__, buffer->id);
+ return 0;
+ }
+
+ pages = kmalloc_array(num_pages, sizeof(void *), GFP_KERNEL);
+ if (!pages)
+ return -ENOMEM;
+
+ img_mmu_get_pages(pages, sgt);
+
+ prot = PAGE_KERNEL;
+ prot = pgprot_writecombine(prot);
+ buffer->kptr = vmap((struct page **)pages, num_pages, VM_MAP, prot);
+ kfree(pages);
+ if (!buffer->kptr) {
+ dev_err(dev, "%s vmap failed!\n", __func__);
+ return -EFAULT;
+ }
+
+ ret = dma_map_sg(dev, sgl, orig_nents, DMA_FROM_DEVICE);
+
+ if (ret <= 0) {
+ dev_err(dev, "%s dma_map_sg failed!\n", __func__);
+ vunmap(buffer->kptr);
+ return -EFAULT;
+ }
+ dev_dbg(dev, "%s:%d buffer %d orig_nents %d nents %d\n", __func__,
+ __LINE__, buffer->id, orig_nents, ret);
+
+ img_mmu_set_sgt_nents(sgt, ret);
+
+ dev_dbg(dev, "%s:%d buffer %d vmap to 0x%p\n", __func__, __LINE__,
+ buffer->id, buffer->kptr);
+
+ return 0;
+}
+
+static int unified_get_sg_table(struct heap *heap, struct buffer *buffer, void **sg_table)
+{
+ img_mmu_set_sg_table(sg_table, buffer->priv);
+ return 0;
+}
+
+static void unified_sync_cpu_to_dev(struct heap *heap, struct buffer *buffer)
+{
+ void *dev = buffer->device;
+ void *sgt = buffer->priv;
+
+ if (!buffer->kptr)
+ return;
+
+ dev_dbg(dev, "%s:%d buffer %d (0x%p)\n", __func__, __LINE__, buffer->id, buffer);
+
+ dma_sync_sg_for_device(dev, img_mmu_get_sgl(sgt), img_mmu_get_orig_nents(sgt),
+ DMA_TO_DEVICE);
+}
+
+static void unified_sync_dev_to_cpu(struct heap *heap, struct buffer *buffer)
+{
+ void *dev = buffer->device;
+ void *sgt = buffer->priv;
+
+ if (!buffer->kptr)
+ return;
+
+ dev_dbg(dev, "%s:%d buffer %d (0x%p)\n", __func__, __LINE__,
+ buffer->id, buffer);
+
+ dma_sync_sg_for_cpu(dev, img_mmu_get_sgl(sgt), img_mmu_get_orig_nents(sgt),
+ DMA_FROM_DEVICE);
+}
+
+static void unified_heap_destroy(struct heap *heap)
+{
+}
+
+static struct heap_ops unified_heap_ops = {
+ .alloc = unified_alloc,
+ .free = unified_free,
+ .map_km = unified_map_km,
+ .get_sg_table = unified_get_sg_table,
+ .sync_cpu_to_dev = unified_sync_cpu_to_dev,
+ .sync_dev_to_cpu = unified_sync_dev_to_cpu,
+ .destroy = unified_heap_destroy,
+};
+
+int img_mem_unified_init(const struct heap_config *heap_cfg,
+ struct heap *heap)
+{
+ heap->ops = &unified_heap_ops;
+ return 0;
+}
--
2.17.1
From: Sidraya <[email protected]>
The IMG VXD Video Decoder uses the IMG D5520 to provide video
decoding for various codecs including H.264 and HEVC.
Scaling and rotation are also supported by the hardware
and driver. The driver also supports multiple simultaneous
video decodes.
Each mem2mem context is a single stream decode session.
Each session creates its own vxd context and associated
mem mgr and mmu contexts. Firmware loading, firmware messaging,
and hardware power management (reset) are supported, as well as MMU
programming of the HW.
This patch adds the framework for the v4l2 IMG VXD video decoder
driver, supporting HW initialization, MMU mapping and buffer management.
The decoding functionality is not yet implemented.
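As a rough sketch (error handling omitted; fw_addr, rendec_addr and
internal_heap_id stand in for the driver-global values), a session is
set up by vxd_create_ctx() roughly as follows:

	img_mem_create_ctx(&ctx->mem_ctx);
	img_mmu_ctx_create(vxd->dev, vxd->mmu_config_addr_width,
			   ctx->mem_ctx, internal_heap_id,
			   img_mmu_callback, vxd, &ctx->mmu_ctx);
	img_mmu_map(ctx->mmu_ctx, vxd->mem_ctx, vxd->firmware.buf_id,
		    fw_addr, VXD_MMU_PTD_FLAG_READ_ONLY);
	img_mmu_map(ctx->mmu_ctx, vxd->mem_ctx, vxd->rendec_buf_id,
		    rendec_addr, VXD_MMU_PTD_FLAG_NONE);
	img_mmu_get_ptd(ctx->mmu_ctx, &ctx->ptd);
	vxd_make_hw_on_locked(vxd, ctx->ptd);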
Signed-off-by: Buddy Liong <[email protected]>
Signed-off-by: Angela Stegmaier <[email protected]>
Signed-off-by: Sidraya <[email protected]>
---
MAINTAINERS | 3 +
drivers/staging/media/vxd/decoder/vxd_core.c | 1683 ++++++++++++++++++
drivers/staging/media/vxd/decoder/vxd_dec.c | 185 ++
drivers/staging/media/vxd/decoder/vxd_dec.h | 477 +++++
4 files changed, 2348 insertions(+)
create mode 100644 drivers/staging/media/vxd/decoder/vxd_core.c
create mode 100644 drivers/staging/media/vxd/decoder/vxd_dec.c
create mode 100644 drivers/staging/media/vxd/decoder/vxd_dec.h
diff --git a/MAINTAINERS b/MAINTAINERS
index 0f8154b69a91..47067f907539 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -19543,6 +19543,9 @@ F: drivers/staging/media/vxd/common/img_mem_unified.c
F: drivers/staging/media/vxd/common/imgmmu.c
F: drivers/staging/media/vxd/common/imgmmu.h
F: drivers/staging/media/vxd/decoder/img_dec_common.h
+F: drivers/staging/media/vxd/decoder/vxd_core.c
+F: drivers/staging/media/vxd/decoder/vxd_dec.c
+F: drivers/staging/media/vxd/decoder/vxd_dec.h
F: drivers/staging/media/vxd/decoder/vxd_pvdec.c
F: drivers/staging/media/vxd/decoder/vxd_pvdec_priv.h
F: drivers/staging/media/vxd/decoder/vxd_pvdec_regs.h
diff --git a/drivers/staging/media/vxd/decoder/vxd_core.c b/drivers/staging/media/vxd/decoder/vxd_core.c
new file mode 100644
index 000000000000..b502c33e6456
--- /dev/null
+++ b/drivers/staging/media/vxd/decoder/vxd_core.c
@@ -0,0 +1,1683 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * IMG DEC VXD Core component function implementations
+ *
+ * Copyright (c) Imagination Technologies Ltd.
+ * Copyright (c) 2021 Texas Instruments Incorporated - http://www.ti.com/
+ *
+ * Authors:
+ * Angela Stegmaier <[email protected]>
+ *
+ * Re-written for upstreaming
+ * Sidraya Jayagond <[email protected]>
+ * Prashanth Kumar Amai <[email protected]>
+ */
+
+#include <linux/firmware.h>
+#include <linux/completion.h>
+#include <linux/slab.h>
+#include <linux/idr.h>
+#include <linux/platform_device.h>
+#include <linux/interrupt.h>
+#include <linux/printk.h>
+#include <linux/mutex.h>
+#include <linux/delay.h>
+#include <linux/jiffies.h>
+#include <linux/time64.h>
+#include <linux/dma-mapping.h>
+#include <media/v4l2-ctrls.h>
+#include <media/v4l2-device.h>
+#include <media/v4l2-mem2mem.h>
+
+#include "img_dec_common.h"
+#include "vxd_pvdec_priv.h"
+
+#define VXD_RENDEC_SIZE (5 * 1024 * 1024)
+
+#define VXD_MSG_CNT_SHIFT 8
+#define VXD_MSG_CNT_MASK 0xff00
+#define VXD_MAX_MSG_CNT ((1 << VXD_MSG_CNT_SHIFT) - 1)
+#define VXD_MSG_STR_MASK 0xff
+#define VXD_INVALID_ID (-1)
+
+#define MAP_FIRMWARE_TO_STREAM 1
+
+/* Has to be used with VXD->mutex acquired! */
+#define VXD_GEN_MSG_ID(VXD, STR_ID, MSG_ID, vxd_type, str_type) \
+ do { \
+ vxd_type __VXD = VXD; \
+ str_type __STR_ID = STR_ID; \
+ WARN_ON((__STR_ID) > VXD_MSG_STR_MASK); \
+ (__VXD)->msg_cnt = ((__VXD)->msg_cnt + 1) % (VXD_MAX_MSG_CNT); \
+ (MSG_ID) = ((__VXD)->msg_cnt << VXD_MSG_CNT_SHIFT) | \
+ ((__STR_ID) & VXD_MSG_STR_MASK); \
+ } while (0)
+
+/* Has to be used with VXD->mutex acquired! */
+#define VXD_RET_MSG_ID(VXD) ((VXD)->msg_cnt--)
+
+#define VXD_MSG_ID_GET_STR_ID(MSG_ID) \
+ ((MSG_ID) & VXD_MSG_STR_MASK)
+
+#define VXD_MSG_ID_GET_CNT(MSG_ID) \
+ (((MSG_ID) & VXD_MSG_CNT_MASK) >> VXD_MSG_CNT_SHIFT)
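+
+/*
+ * Illustrative example: with an (incremented) msg_cnt of 0x12 and stream
+ * id 0x05, VXD_GEN_MSG_ID() yields 0x1205; VXD_MSG_ID_GET_STR_ID(0x1205)
+ * returns 0x05 and VXD_MSG_ID_GET_CNT(0x1205) returns 0x12.
+ */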
+
+static const unsigned char *drv_fw_name = "pvdec_full_bin.fw";
+
+/* Driver context */
+static struct {
+ /* Available memory heaps. List of <struct vxd_heap> */
+ struct list_head heaps;
+ /* heap id for all internal allocations (rendec, firmware) */
+ int internal_heap_id;
+
+ /* Memory Management context for driver */
+ struct mem_ctx *mem_ctx;
+
+ /* List of associated <struct vxd_dev> */
+ struct list_head devices;
+
+ /* Virtual addresses of shared buffers, common for all streams. */
+ struct {
+ unsigned int fw_addr; /* Firmware blob */
+ unsigned int rendec_addr; /* Rendec buffer */
+ } virt_space;
+
+ int initialised;
+} vxd_drv;
+
+/*
+ * struct vxd_heap - node for heaps list
+ * @id: heap id
+ * @list: Entry in <struct vxd_drv:heaps>
+ */
+struct vxd_heap {
+ int id;
+ struct list_head list;
+};
+
+static void img_mmu_callback(enum mmu_callback_type callback_type,
+ int buff_id, void *data)
+{
+ struct vxd_dev *vxd = data;
+
+ if (!vxd)
+ return;
+
+ if (callback_type == MMU_CALLBACK_MAP)
+ return;
+
+ if (vxd->hw_on)
+ vxd_pvdec_mmu_flush(vxd->dev, vxd->reg_base);
+}
+
+static int vxd_is_apm_required(struct vxd_dev *vxd)
+{
+ return vxd->hw_on;
+}
+
+/*
+ * Power on the HW.
+ * Call with vxd->mutex acquired.
+ */
+static int vxd_make_hw_on_locked(struct vxd_dev *vxd, unsigned int fw_ptd)
+{
+ unsigned int fw_size;
+ struct vxd_fw_hdr *fw_hdr;
+ struct vxd_ena_params ena_params;
+ int ret;
+
+#ifdef DEBUG_DECODER_DRIVER
+ dev_dbg(vxd->dev, "%s:%d\n", __func__, __LINE__);
+#endif
+ if (vxd->hw_on)
+ return 0;
+
+#ifdef DEBUG_DECODER_DRIVER
+ dev_dbg(vxd->dev, "%s: enabling HW\n", __func__);
+#endif
+
+ fw_size = vxd->firmware.fw_size;
+ fw_hdr = vxd->firmware.hdr;
+ if (!fw_size || !fw_hdr) {
+ dev_err(vxd->dev, "%s: firmware missing!\n", __func__);
+ return -ENOENT;
+ }
+
+ memset(&ena_params, 0, sizeof(struct vxd_ena_params));
+
+ ena_params.fw_buf_size = fw_size - sizeof(struct vxd_fw_hdr);
+ ena_params.fw_buf_virt_addr = vxd_drv.virt_space.fw_addr;
+ ena_params.ptd = fw_ptd;
+ ena_params.boot_poll.msleep_cycles = 50;
+ ena_params.crc = 0;
+ ena_params.rendec_addr = vxd_drv.virt_space.rendec_addr;
+ ena_params.rendec_size = (VXD_NUM_PIX_PIPES(vxd->props) *
+ VXD_RENDEC_SIZE) / 4096u;
+
+ ena_params.secure = 0;
+ ena_params.wait_dbg_fifo = 0;
+ ena_params.mem_staller.data = NULL;
+ ena_params.mem_staller.size = 0;
+
+ ret = vxd_pvdec_ena(vxd->dev, vxd->reg_base, &ena_params,
+ fw_hdr, &vxd->freq_khz);
+ /*
+ * Ignore the return code, proceed as usual, it will be returned anyway.
+ * The HW is turned on, so we can perform post mortem analysis,
+ * and collect the fw logs when available.
+ */
+
+ vxd->hw_on = 1;
+
+ return ret;
+}
+
+/*
+ * Power off the HW.
+ * Call with vxd->mutex acquired.
+ */
+static void vxd_make_hw_off_locked(struct vxd_dev *vxd, unsigned char suspending)
+{
+ int ret;
+
+ if (!vxd->hw_on)
+ return;
+
+#ifdef DEBUG_DECODER_DRIVER
+ dev_dbg(vxd->dev, "%s:%d\n", __func__, __LINE__);
+#endif
+
+ ret = vxd_pvdec_dis(vxd->dev, vxd->reg_base);
+ vxd->hw_on = 0;
+ if (ret)
+ dev_err(vxd->dev, "%s: failed to power off the VXD!\n", __func__);
+}
+
+/*
+ * Moves all valid items from the queue of items being currently processed to
+ * the pending queue.
+ * Call with vxd->mutex locked
+ */
+static void vxd_rewind_msgs_locked(struct vxd_dev *vxd)
+{
+ struct vxd_item *item, *tmp;
+
+ if (list_empty(&vxd->msgs))
+ return;
+
+ list_for_each_entry_safe(item, tmp, &vxd->msgs, list)
+ list_move(&item->list, &vxd->pend);
+}
+
+static void vxd_report_item_locked(struct vxd_dev *vxd,
+ struct vxd_item *item,
+ unsigned int flags)
+{
+ struct vxd_stream *stream;
+
+ __list_del_entry(&item->list);
+ stream = idr_find(vxd->streams, item->stream_id);
+ if (!stream) {
+ /*
+ * Failed to find associated stream. Probably it was
+ * already destroyed -- drop the item
+ */
+#ifdef DEBUG_DECODER_DRIVER
+ dev_dbg(vxd->dev, "%s: drop item %p [0x%x]\n", __func__, item, item->msg_id);
+#endif
+ kfree(item);
+ } else {
+ item->msg.out_flags |= flags;
+ list_add_tail(&item->list, &stream->ctx->items_done);
+#ifdef DEBUG_DECODER_DRIVER
+ dev_dbg(vxd->dev, "%s: waking %p\n", __func__, stream->ctx);
+
+ dev_info(vxd->dev, "%s: signaling worker for %p\n", __func__, stream->ctx);
+#endif
+ schedule_work(stream->ctx->work);
+ }
+}
+
+/*
+ * Rewind all items to the pending queue and report those to listener.
+ * Postpone the reset.
+ * Call with vxd->mutex acquired.
+ */
+static void vxd_emrg_reset_locked(struct vxd_dev *vxd, unsigned int flags)
+{
+ cancel_delayed_work(vxd->dwork);
+
+ vxd->emergency = 1;
+
+#ifdef ERROR_RECOVERY_SIMULATION
+ if (disable_fw_irq_value != 0) {
+ /*
+ * The IRQ was disabled earlier; enable it again. This
+ * condition occurs only when firmware non-responsiveness
+ * is detected on the vxd_worker thread. Once the issue is
+ * reproduced, the IRQ is re-enabled so that the code flow continues.
+ */
+ enable_irq(g_module_irq);
+ }
+#endif
+
+ /*
+ * If the firmware sends more than one reply per item, it's possible
+ * that corresponding item was already removed from vxd-msgs, but the
+ * HW was still processing it and MMU page fault could happen and
+ * trigger execution of this function. So make sure that vxd->msgs
+ * is not empty before rewinding items.
+ */
+ if (!list_empty(&vxd->msgs))
+ /* Move all valid items to the pending queue */
+ vxd_rewind_msgs_locked(vxd);
+
+ {
+ struct vxd_item *item, *tmp;
+
+ list_for_each_entry_safe(item, tmp, &vxd->pend, list) {
+ /*
+ * Exclusive items that were on the pending list
+ * must be reported as canceled
+ */
+ if ((item->msg.out_flags & VXD_FW_MSG_FLAG_EXCL) && !item->msg_id)
+ item->msg.out_flags |= VXD_FW_MSG_FLAG_CANCELED;
+
+ vxd_report_item_locked(vxd, item, flags);
+ }
+ }
+}
+
+static void vxd_handle_io_error_locked(struct vxd_dev *vxd)
+{
+ struct vxd_item *item, *tmp;
+ unsigned int pend_flags = !vxd->hw_on ? VXD_FW_MSG_FLAG_DEV_ERR :
+ VXD_FW_MSG_FLAG_CANCELED;
+
+ list_for_each_entry_safe(item, tmp, &vxd->msgs, list)
+ vxd_report_item_locked(vxd, item, VXD_FW_MSG_FLAG_DEV_ERR);
+
+ list_for_each_entry_safe(item, tmp, &vxd->pend, list)
+ vxd_report_item_locked(vxd, item, pend_flags);
+}
+
+static void vxd_sched_worker_locked(struct vxd_dev *vxd, unsigned int delay_ms)
+{
+ unsigned long long work_at = jiffies + msecs_to_jiffies(delay_ms);
+ int ret;
+
+ /*
+ * Try to queue the work.
+ * This may also be called from the worker context,
+ * so we need to re-arm anyway in case of error.
+ */
+ ret = schedule_delayed_work(vxd->dwork, work_at - jiffies);
+ if (ret) {
+ /* Work is already in the queue */
+ /*
+ * Check if the newly requested time is before
+ * the last time we scheduled this work at;
+ * if not, do nothing, as the worker will
+ * recalculate APM/DWR afterwards.
+ */
+ if (time_before((unsigned long)work_at, (unsigned long)vxd->work_sched_at)) {
+ /*
+ * Canceling & rescheduling might be problematic,
+ * so just modify it, when needed
+ */
+ ret = mod_delayed_work(system_wq, vxd->dwork, work_at - jiffies);
+ if (!ret)
+ dev_err(vxd->dev, "%s: failed to modify work!\n", __func__);
+ /*
+ * Record the 'time' this work
+ * has been rescheduled at
+ */
+ vxd->work_sched_at = work_at;
+ }
+ } else {
+ /* Record the 'time' this work has been scheduled at */
+ vxd->work_sched_at = work_at;
+ }
+}
+
+static void vxd_monitor_locked(struct vxd_dev *vxd)
+{
+ /* HW is dead, not much sense in rescheduling */
+ if (vxd->hw_dead)
+ return;
+
+ /*
+ * We are not processing anything, but the pending list is not empty;
+ * probably the message FIFO is full, so retrigger the worker.
+ */
+ if (!list_empty(&vxd->pend) && list_empty(&vxd->msgs))
+ vxd_sched_worker_locked(vxd, 1);
+
+ if (list_empty(&vxd->pend) && list_empty(&vxd->msgs) && vxd_is_apm_required(vxd)) {
+#ifdef DEBUG_DECODER_DRIVER
+ dev_dbg(vxd->dev, "%s: scheduling APM work (%d ms)!\n", __func__, vxd->hw_pm_delay);
+#endif
+ /*
+ * No items to process and no items being processed -
+ * disable the HW
+ */
+ vxd->pm_start = jiffies;
+ vxd_sched_worker_locked(vxd, vxd->hw_pm_delay);
+ return;
+ }
+
+ if (vxd->hw_dwr_period > 0 && !list_empty(&vxd->msgs)) {
+#ifdef DEBUG_DECODER_DRIVER
+ dev_dbg(vxd->dev, "%s: scheduling DWR work (%d ms)!\n",
+ __func__, vxd->hw_dwr_period);
+#endif
+ vxd->dwr_start = jiffies;
+ vxd_sched_worker_locked(vxd, vxd->hw_dwr_period);
+ }
+}
+
+/*
+ * Take first item from pending list and submit it to the hardware.
+ * Has to be called with vxd->mutex locked.
+ */
+static int vxd_sched_single_locked(struct vxd_dev *vxd)
+{
+ struct vxd_item *item = NULL;
+ unsigned long msg_size;
+ int ret;
+
+ item = list_first_entry(&vxd->pend, struct vxd_item, list);
+
+ msg_size = item->msg.payload_size / sizeof(unsigned int);
+
+#ifdef DEBUG_DECODER_DRIVER
+ dev_dbg(vxd->dev, "%s: checking msg_size: %zu, item: %p\n", __func__, msg_size, item);
+#endif
+
+ /*
+ * In case of exclusive item check if hw/fw is
+ * currently processing anything.
+ * If so we need to wait until items are returned back.
+ */
+ if ((item->msg.out_flags & VXD_FW_MSG_FLAG_EXCL) && !list_empty(&vxd->msgs) &&
+ /*
+ * We can move forward if message
+ * is about to be dropped.
+ */
+ !(item->msg.out_flags & VXD_FW_MSG_FLAG_DROP))
+
+ ret = -EBUSY;
+ else
+ /*
+ * Check if there's enough space
+ * in comms RAM to submit the message.
+ */
+ ret = vxd_pvdec_msg_fit(vxd->dev, vxd->reg_base, msg_size);
+
+ if (ret == 0) {
+ unsigned short msg_id;
+
+ VXD_GEN_MSG_ID(vxd, item->stream_id, msg_id, struct vxd_dev*, unsigned int);
+
+ /* submit the message to the hardware */
+ ret = vxd_pvdec_send_msg(vxd->dev, vxd->reg_base,
+ (unsigned int *)item->msg.payload, msg_size,
+ msg_id, vxd);
+ if (ret) {
+ dev_err(vxd->dev, "%s: failed to send msg!\n", __func__);
+ VXD_RET_MSG_ID(vxd);
+ } else {
+ if (item->msg.out_flags & VXD_FW_MSG_FLAG_DROP) {
+ __list_del_entry(&item->list);
+ kfree(item);
+#ifdef DEBUG_DECODER_DRIVER
+ dev_dbg(vxd->dev, "%s: drop msg 0x%x! (user requested)\n",
+ __func__, msg_id);
+#endif
+ } else {
+ item->msg_id = msg_id;
+#ifdef DEBUG_DECODER_DRIVER
+ dev_dbg(vxd->dev,
+ "%s: moving item %p, id 0x%x to msgs\n",
+ __func__, item, item->msg_id);
+#endif
+ list_move(&item->list, &vxd->msgs);
+ }
+
+ vxd_monitor_locked(vxd);
+ }
+
+ } else if (ret == -EINVAL) {
+ dev_warn(vxd->dev, "%s: invalid msg!\n", __func__);
+ vxd_report_item_locked(vxd, item, VXD_FW_MSG_FLAG_INV);
+ /*
+ * HW is ok, the message was invalid, so don't return an
+ * error
+ */
+ ret = 0;
+ } else if (ret == -EBUSY) {
+ /*
+ * Not enough space. Message is already in the pending queue,
+ * so it will be submitted once we've got space. Delayed work
+ * might have been canceled (if we are currently processing
+ * threaded irq), so make sure that DWR will trigger if it's
+ * enabled.
+ */
+ vxd_monitor_locked(vxd);
+ } else {
+ dev_err(vxd->dev, "%s: failed to check space for msg!\n", __func__);
+ }
+
+ return ret;
+}
+
+/*
+ * Take items from pending list and submit them to the hardware, if space is
+ * available in the ring buffer.
+ * Call with vxd->mutex locked
+ */
+static void vxd_schedule_locked(struct vxd_dev *vxd)
+{
+ unsigned char emergency = vxd->emergency;
+ int ret;
+
+ /* if HW is dead, inform the UM and skip */
+ if (vxd->hw_dead) {
+ vxd_handle_io_error_locked(vxd);
+ return;
+ }
+
+ if (!vxd->hw_on && !list_empty(&vxd->msgs))
+ dev_err(vxd->dev, "%s: msgs not empty when the HW is off!\n", __func__);
+
+ if (list_empty(&vxd->pend)) {
+ vxd_monitor_locked(vxd);
+ return;
+ }
+
+ /*
+ * If the emergency routine was fired, the HW was left ON, so the UM
+ * could do the post-mortem analysis before submitting the next items.
+ * Now we can switch off the hardware.
+ */
+ if (emergency) {
+ vxd->emergency = 0;
+ vxd_make_hw_off_locked(vxd, FALSE);
+ usleep_range(1000, 2000);
+ }
+
+ /* Try to schedule */
+ ret = 0;
+ while (!list_empty(&vxd->pend) && ret == 0) {
+ struct vxd_item *item;
+ struct vxd_stream *stream;
+
+ item = list_first_entry(&vxd->pend, struct vxd_item, list);
+ stream = idr_find(vxd->streams, item->stream_id);
+
+ ret = vxd_make_hw_on_locked(vxd, stream->ptd);
+ if (ret) {
+ dev_err(vxd->dev, "%s: failed to start HW!\n", __func__);
+ vxd->hw_dead = 1;
+ vxd_handle_io_error_locked(vxd);
+ return;
+ }
+
+ ret = vxd_sched_single_locked(vxd);
+ }
+
+ if (ret != 0 && ret != -EBUSY) {
+ dev_err(vxd->dev, "%s: failed to schedule, emrg: %d!\n", __func__, emergency);
+ if (emergency) {
+ /*
+ * Failed to schedule in the emergency mode --
+ * there's no hope. Power off the HW, mark all
+ * items as failed and return them.
+ */
+ vxd_handle_io_error_locked(vxd);
+ return;
+ }
+ /* Let worker try to handle it */
+ vxd_sched_worker_locked(vxd, 0);
+ }
+}
+
+static void stream_worker(void *work)
+{
+ struct vxd_dec_ctx *ctx = NULL;
+ struct vxd_dev *vxd = NULL;
+ struct vxd_item *item;
+
+ work = get_work_buff(work, FALSE);
+ ctx = container_of(work, struct vxd_dec_ctx, work);
+ vxd = ctx->dev;
+
+#ifdef DEBUG_DECODER_DRIVER
+ dev_dbg(vxd->dev, "%s: got work for ctx %p\n", __func__, ctx);
+#endif
+
+ mutex_lock_nested(ctx->mutex, SUBCLASS_VXD_CORE);
+
+ while (!list_empty(&ctx->items_done)) {
+ item = list_first_entry(&ctx->items_done, struct vxd_item, list);
+
+ item->msg.out_flags &= VXD_FW_MSG_RD_FLAGS_MASK;
+
+#ifdef DEBUG_DECODER_DRIVER
+ dev_info(vxd->dev, "%s: item: %p, payload_size: %d, flags: 0x%x\n",
+ __func__, item, item->msg.payload_size,
+ item->msg.out_flags);
+#endif
+
+ if (ctx->cb)
+ ctx->cb(ctx->res_str_id, item->msg.payload,
+ item->msg.payload_size, item->msg.out_flags);
+
+ __list_del_entry(&item->list);
+ kfree(item);
+ }
+ mutex_unlock(ctx->mutex);
+}
+
+int vxd_create_ctx(struct vxd_dev *vxd, struct vxd_dec_ctx *ctx)
+{
+ int ret = 0;
+ unsigned int fw_load_retries = 2 * 1000;
+
+ while (!vxd->firmware.ready && fw_load_retries > 0) {
+ usleep_range(1000, 2000);
+ fw_load_retries--;
+ }
+ if (vxd->firmware.buf_id == 0) {
+ dev_err(vxd->dev, "%s: request fw not yet done!\n", __func__);
+ return -EAGAIN;
+ }
+
+ /* Create memory management context for HW buffers */
+ ret = img_mem_create_ctx(&ctx->mem_ctx);
+ if (ret) {
+ dev_err(vxd->dev, "%s: failed to create mem context (err:%d)!\n", __func__, ret);
+ return ret;
+ }
+
+ ret = img_mmu_ctx_create(vxd->dev, vxd->mmu_config_addr_width,
+ ctx->mem_ctx, vxd_drv.internal_heap_id,
+ img_mmu_callback, vxd, &ctx->mmu_ctx);
+ if (ret) {
+ dev_err(vxd->dev, "%s:%d: failed to create mmu ctx\n", __func__, __LINE__);
+ ret = -EPERM;
+ goto out_destroy_ctx;
+ }
+
+ ret = img_mmu_map(ctx->mmu_ctx, vxd->mem_ctx, vxd->firmware.buf_id,
+ vxd_drv.virt_space.fw_addr,
+ VXD_MMU_PTD_FLAG_READ_ONLY);
+ if (ret) {
+ dev_err(vxd->dev, "%s:%d: failed to map firmware buffer\n", __func__, __LINE__);
+ ret = -EPERM;
+ goto out_destroy_mmu_ctx;
+ }
+
+ ret = img_mmu_map(ctx->mmu_ctx, vxd->mem_ctx, vxd->rendec_buf_id,
+ vxd_drv.virt_space.rendec_addr,
+ VXD_MMU_PTD_FLAG_NONE);
+ if (ret) {
+ dev_err(vxd->dev, "%s:%d: failed to map rendec buffer\n", __func__, __LINE__);
+ ret = -EPERM;
+ goto out_unmap_fw;
+ }
+
+ ret = img_mmu_get_ptd(ctx->mmu_ctx, &ctx->ptd);
+ if (ret) {
+ dev_err(vxd->dev, "%s:%d: failed to get PTD\n", __func__, __LINE__);
+ ret = -EPERM;
+ goto out_unmap_rendec;
+ }
+
+ /* load fw - turned Hw on */
+ ret = vxd_make_hw_on_locked(vxd, ctx->ptd);
+ if (ret) {
+ dev_err(vxd->dev, "%s:%d: failed to start HW\n", __func__, __LINE__);
+ ret = -EPERM;
+ vxd->hw_on = FALSE;
+ goto out_unmap_rendec;
+ }
+
+ init_work(&ctx->work, stream_worker, HWA_DECODER);
+ if (!ctx->work) {
+ ret = -ENOMEM;
+ goto out_unmap_rendec;
+ }
+
+ vxd->fw_refcnt++;
+
+ return ret;
+
+out_unmap_rendec:
+ img_mmu_unmap(ctx->mmu_ctx, vxd->mem_ctx, vxd->rendec_buf_id);
+out_unmap_fw:
+ img_mmu_unmap(ctx->mmu_ctx, vxd->mem_ctx, vxd->firmware.buf_id);
+
+out_destroy_mmu_ctx:
+ img_mmu_ctx_destroy(ctx->mmu_ctx);
+out_destroy_ctx:
+ img_mem_destroy_ctx(ctx->mem_ctx);
+ return ret;
+}
+
+void vxd_destroy_ctx(struct vxd_dev *vxd, struct vxd_dec_ctx *ctx)
+{
+ vxd->fw_refcnt--;
+
+ flush_work(ctx->work);
+
+ img_mmu_unmap(ctx->mmu_ctx, vxd->mem_ctx, vxd->rendec_buf_id);
+
+ img_mmu_unmap(ctx->mmu_ctx, vxd->mem_ctx, vxd->firmware.buf_id);
+
+ img_mmu_ctx_destroy(ctx->mmu_ctx);
+
+ img_mem_destroy_ctx(ctx->mem_ctx);
+
+ if (vxd->fw_refcnt == 0) {
+#ifdef DEBUG_DECODER_DRIVER
+ dev_info(vxd->dev, "FW: put %s\n", drv_fw_name);
+#endif
+ /* Poke the monitor to finally switch off the hw, when needed */
+ vxd_monitor_locked(vxd);
+ }
+}
+
+/* Top half */
+irqreturn_t vxd_handle_irq(void *dev)
+{
+ struct vxd_dev *vxd = ((const struct device *)dev)->driver_data;
+ struct vxd_hw_state *hw_state;
+ int ret;
+
+ if (!vxd)
+ return IRQ_NONE;
+
+ hw_state = &vxd->state.hw_state;
+
+ ret = vxd_pvdec_clear_int(vxd->reg_base, &hw_state->irq_status);
+
+ if (!hw_state->irq_status || ret == IRQ_NONE)
+ dev_warn(dev, "Got spurious interrupt!\n");
+
+ return (irqreturn_t)ret;
+}
+
+static void vxd_drop_msg_locked(const struct vxd_dev *vxd)
+{
+ int ret;
+
+ ret = vxd_pvdec_recv_msg(vxd->dev, vxd->reg_base, NULL, 0, (struct vxd_dev *)vxd);
+ if (ret)
+ dev_warn(vxd->dev, "%s: failed to receive msg!\n", __func__);
+}
+
+#ifdef DEBUG_DECODER_DRIVER
+static void vxd_dbg_dump_msg(const void *dev, const unsigned char *func,
+ const unsigned int *payload,
+ unsigned long msg_size)
+{
+ unsigned int i;
+
+ for (i = 0; i < msg_size; i++)
+ dev_dbg(dev, "%s: msg %d: 0x%08x\n", func, i, payload[i]);
+}
+#endif
+
+static struct vxd_item *vxd_get_orphaned_item_locked(struct vxd_dev *vxd,
+ unsigned short msg_id,
+ unsigned long msg_size)
+{
+ struct vxd_stream *stream;
+ struct vxd_item *item;
+ unsigned short str_id = VXD_MSG_ID_GET_STR_ID(msg_id);
+
+ /* Try to find associated stream */
+ stream = idr_find(vxd->streams, str_id);
+ if (!stream) {
+ /* Failed to find associated stream. */
+#ifdef DEBUG_DECODER_DRIVER
+ dev_dbg(vxd->dev, "%s: failed to find str_id: %u\n", __func__, str_id);
+#endif
+ return NULL;
+ }
+
+ item = kzalloc(sizeof(*item) + (msg_size * sizeof(unsigned int)), GFP_KERNEL);
+ if (!item)
+ return NULL;
+
+ item->msg.out_flags = 0;
+ item->stream_id = str_id;
+ item->msg.payload_size = msg_size * sizeof(unsigned int);
+ if (vxd_pvdec_recv_msg(vxd->dev, vxd->reg_base, item->msg.payload, msg_size, vxd)) {
+ dev_err(vxd->dev, "%s: failed to receive msg from VXD!\n", __func__);
+ item->msg.out_flags |= VXD_FW_MSG_FLAG_DEV_ERR;
+ }
+#ifdef DEBUG_DECODER_DRIVER
+ dev_dbg(vxd->dev, "%s: item: %p str_id: %u\n", __func__, item, str_id);
+#endif
+ /*
+ * Need to put this item on the vxd->msgs list.
+ * It will be removed after.
+ */
+ list_add_tail(&item->list, &vxd->msgs);
+
+#ifdef DEBUG_DECODER_DRIVER
+ vxd_dbg_dump_msg(vxd->dev, __func__, item->msg.payload, msg_size);
+#endif
+
+ return item;
+}
+
+/*
+ * Fetch and process a single message from the MTX->host ring buffer.
+ * <no_more> parameter is used to indicate if there are more messages pending.
+ * <fatal> parameter indicates if there is some serious situation detected.
+ * Has to be called with vxd->mutex locked.
+ */
+static void vxd_handle_single_msg_locked(struct vxd_dev *vxd,
+ unsigned char *no_more,
+ unsigned char *fatal)
+{
+ int ret;
+ unsigned short msg_id, str_id;
+ unsigned long msg_size; /* size in dwords */
+ struct vxd_item *item = NULL, *tmp, *it;
+ struct vxd_stream *stream;
+ void *dev = vxd->dev;
+ unsigned char not_last_msg;
+
+ /* get the message size and id */
+ ret = vxd_pvdec_pend_msg_info(dev, vxd->reg_base, &msg_size, &msg_id,
+ ¬_last_msg);
+ if (ret) {
+ dev_err(dev, "%s: failed to get pending msg size!\n", __func__);
+ *no_more = TRUE; /* worker will handle the HW failure */
+ return;
+ }
+
+ if (msg_size == 0) {
+ *no_more = TRUE;
+ return;
+ }
+ *no_more = FALSE;
+
+ str_id = VXD_MSG_ID_GET_STR_ID(msg_id);
+#ifdef DEBUG_DECODER_DRIVER
+ dev_dbg(dev, "%s: [msg] size: %zu, cnt: %u, str_id: %u, id: 0x%x\n",
+ __func__, msg_size, VXD_MSG_ID_GET_CNT(msg_id),
+ str_id, msg_id);
+ dev_dbg(dev, "%s: [msg] not last: %u\n", __func__, not_last_msg);
+#endif
+
+ cancel_delayed_work(vxd->dwork);
+
+ /* Find associated item */
+ list_for_each_entry_safe_reverse(it, tmp, &vxd->msgs, list) {
+#ifdef DEBUG_DECODER_DRIVER
+ dev_dbg(dev, "%s: checking item %p [0x%x] [des: %d]\n",
+ __func__, it, it->msg_id, it->destroy);
+#endif
+ if (it->msg_id == msg_id) {
+ item = it;
+ break;
+ }
+ }
+
+#ifdef DEBUG_DECODER_DRIVER
+ dev_dbg(dev, "%s: found item %p [destroy: %d]\n",
+ __func__, item, item ? item->destroy : VXD_INVALID_ID);
+#endif
+
+ /* Find associated stream */
+ stream = idr_find(vxd->streams, str_id);
+ /*
+ * Check for firmware condition in case
+ * when unexpected item is received.
+ */
+ if (!item && !stream && vxd_pvdec_check_fw_status(dev, vxd->reg_base)) {
+ struct vxd_item *orphan;
+ /*
+ * Let's forward the fatal info to listeners first, relying
+ * on the head of the msg queue.
+ */
+ /* TODO: forward fatal info to all attached processes */
+ item = list_entry(vxd->msgs.prev, struct vxd_item, list);
+ orphan = vxd_get_orphaned_item_locked(vxd, item->msg_id, msg_size);
+ if (!orphan) {
+ dev_warn(dev, "%s: drop msg 0x%x! (no orphan)\n", __func__, item->msg_id);
+ vxd_drop_msg_locked(vxd);
+ }
+
+ *fatal = TRUE;
+ return;
+ }
+
+ if ((item && item->destroy) || !stream) {
+ /*
+ * Item was marked for destruction or we failed to find
+ * associated stream. Probably it was already destroyed --
+ * just ignore the message.
+ */
+ if (item) {
+ __list_del_entry(&item->list);
+ kfree(item);
+ item = NULL;
+ }
+ dev_warn(dev, "%s: drop msg 0x%x! (no owner)\n", __func__, msg_id);
+ vxd_drop_msg_locked(vxd);
+ return;
+ }
+
+ /* Remove item from vxd->msgs list */
+ if (item && item->msg_id == msg_id && !not_last_msg)
+ __list_del_entry(&item->list);
+
+ /*
+ * If there's no such item on a <being processed> list, or the one
+ * found is too small to fit the output, or it's not supposed to be
+ * released, allocate a new one.
+ */
+ if (!item || (msg_size * sizeof(unsigned int) > item->msg.payload_size) || not_last_msg) {
+ struct vxd_item *new_item;
+
+ new_item = kzalloc(sizeof(*new_item) +
+ (msg_size * sizeof(unsigned int)), GFP_KERNEL);
+ if (item) {
+ if (!new_item) {
+ /*
+ * Failed to allocate new item. Mark item as
+ * errored and continue best effort, provide
+ * only part of the message to the userspace
+ */
+ dev_err(dev, "%s: failed to alloc new item!\n", __func__);
+ msg_size = item->msg.payload_size / sizeof(unsigned int);
+ item->msg.out_flags |= VXD_FW_MSG_FLAG_DRV_ERR;
+ } else {
+ *new_item = *item;
+ /*
+ * Do not free the old item if subsequent
+ * messages are expected (it also wasn't
+ * removed from the vxd->msgs list, so we are
+ * not losing a pointer here).
+ */
+ if (!not_last_msg)
+ kfree(item);
+ item = new_item;
+ }
+ } else {
+ if (!new_item) {
+ /*
+ * We have no place to put the message, we have
+ * to drop it
+ */
+ dev_err(dev, "%s: drop msg 0x%08x! (no mem)\n", __func__, msg_id);
+ vxd_drop_msg_locked(vxd);
+ return;
+ }
+ /*
+ * There was no corresponding item on the
+ * <being processed> list and we've allocated
+ * a new one. Initialize it
+ */
+ new_item->msg.out_flags = 0;
+ new_item->stream_id = str_id;
+ item = new_item;
+ }
+ }
+ ret = vxd_pvdec_recv_msg(dev, vxd->reg_base, item->msg.payload, msg_size, vxd);
+ if (ret) {
+ dev_err(dev, "%s: failed to receive msg from VXD!\n", __func__);
+ item->msg.out_flags |= VXD_FW_MSG_FLAG_DEV_ERR;
+ }
+ item->msg.payload_size = msg_size * sizeof(unsigned int);
+
+#ifdef DEBUG_DECODER_DRIVER
+ vxd_dbg_dump_msg(dev, __func__, item->msg.payload, msg_size);
+
+ dev_dbg(dev, "%s: adding to done list, item: %p, msg_size: %zu\n",
+ __func__, item, msg_size);
+#endif
+ list_add_tail(&item->list, &stream->ctx->items_done);
+
+#ifdef DEBUG_DECODER_DRIVER
+ dev_info(dev, "%s: signaling worker for %p\n", __func__, stream->ctx);
+#endif
+ schedule_work(stream->ctx->work);
+}
+
+/* Bottom half */
+irqreturn_t vxd_handle_thread_irq(void *dev)
+{
+ unsigned char no_more = FALSE;
+ unsigned char fatal = FALSE;
+ struct vxd_dev *vxd = ((const struct device *)dev)->driver_data;
+ struct vxd_hw_state *hw_state;
+ irqreturn_t ret = IRQ_HANDLED;
+
+ if (!vxd)
+ return IRQ_NONE;
+
+ hw_state = &vxd->state.hw_state;
+
+ mutex_lock(vxd->mutex);
+
+ /* Spurious interrupt? */
+ if (unlikely(!vxd->hw_on || vxd->hw_dead)) {
+ ret = IRQ_NONE;
+ goto out_unlock;
+ }
+
+ /* Check for critical exception - only MMU faults for now */
+ if (vxd_pvdec_check_irq(dev, vxd->reg_base, hw_state->irq_status) < 0) {
+#ifdef DEBUG_DECODER_DRIVER
+ dev_info(vxd->dev, "device MMU fault: resetting!!!\n");
+#endif
+ vxd_emrg_reset_locked(vxd, VXD_FW_MSG_FLAG_MMU_FAULT);
+ goto out_unlock;
+ }
+
+ /*
+ * Single interrupt can correspond to multiple messages, handle them
+ * all.
+ */
+ while (!no_more)
+ vxd_handle_single_msg_locked(vxd, &no_more, &fatal);
+
+ if (fatal) {
+#ifdef DEBUG_DECODER_DRIVER
+ dev_info(vxd->dev, "fw fatal condition: resetting!!!\n");
+#endif
+ /* Try to recover ... */
+ vxd_emrg_reset_locked(vxd, VXD_FW_MSG_FLAG_FATAL);
+ } else {
+ /* Try to submit items to the HW */
+ vxd_schedule_locked(vxd);
+ }
+
+out_unlock:
+ hw_state->irq_status = 0;
+ mutex_unlock(vxd->mutex);
+
+ return ret;
+}
+
+static void vxd_worker(void *work)
+{
+ struct vxd_dev *vxd = NULL;
+ struct vxd_hw_state state = { 0 };
+ struct vxd_item *item_tail;
+
+ work = get_delayed_work_buff(work, FALSE);
+ vxd = container_of(work, struct vxd_dev, dwork);
+ mutex_lock(vxd->mutex);
+
+#ifdef DEBUG_DECODER_DRIVER
+ dev_dbg(vxd->dev, "%s: jif: %lu, pm: %llu dwr: %llu\n", __func__,
+ jiffies, vxd->pm_start, vxd->dwr_start);
+#endif
+
+ /*
+ * Disable the hardware if it has been idle for vxd->hw_pm_delay
+ * milliseconds. Or simply leave the function without doing anything
+ * if the HW is not supposed to be turned off.
+ */
+ if (list_empty(&vxd->pend) && list_empty(&vxd->msgs)) {
+ if (vxd_is_apm_required(vxd)) {
+ unsigned long long dst = vxd->pm_start +
+ msecs_to_jiffies(vxd->hw_pm_delay);
+
+ if (time_is_before_eq_jiffies((unsigned long)dst)) {
+#ifdef DEBUG_DECODER_DRIVER
+ dev_dbg(vxd->dev, "%s: pm, power off\n", __func__);
+#endif
+ vxd_make_hw_off_locked(vxd, FALSE);
+ } else {
+ unsigned long long targ = dst - jiffies;
+
+#ifdef DEBUG_DECODER_DRIVER
+ dev_dbg(vxd->dev, "%s: pm, reschedule: %llu\n", __func__, targ);
+#endif
+ vxd_sched_worker_locked(vxd, jiffies_to_msecs(targ));
+ }
+ }
+ goto out_unlock;
+ }
+
+ /*
+ * We are not processing anything, but the pending list is not empty (if
+ * it were, we would have entered the <if statement> above). This can
+ * happen under specific conditions, when an input message occupies almost
+ * the whole host->MTX ring buffer and is followed by a large padding message.
+ */
+ if (list_empty(&vxd->msgs)) {
+ vxd_schedule_locked(vxd);
+ goto out_unlock;
+ }
+
+ /* Skip emergency reset if it's disabled. */
+ if (vxd->hw_dwr_period <= 0) {
+#ifdef DEBUG_DECODER_DRIVER
+ dev_dbg(vxd->dev, "%s: skip watchdog\n", __func__);
+#endif
+ goto out_unlock;
+ } else {
+ /* Recalculate DWR when needed */
+ unsigned long long dst = vxd->dwr_start +
+ msecs_to_jiffies(vxd->hw_dwr_period);
+
+ if (time_is_after_jiffies((unsigned long)dst)) {
+ unsigned long long targ = dst - jiffies;
+
+#ifdef DEBUG_DECODER_DRIVER
+ dev_dbg(vxd->dev, "%s: dwr, reschedule: %llu\n", __func__, targ);
+#endif
+ vxd_sched_worker_locked(vxd, jiffies_to_msecs(targ));
+ goto out_unlock;
+ }
+ }
+
+ /* Get ID of the oldest item being processed by the HW */
+ item_tail = list_entry(vxd->msgs.prev, struct vxd_item, list);
+
+#ifdef DEBUG_DECODER_DRIVER
+ dev_dbg(vxd->dev, "%s: tail_item: %p, id: 0x%x\n", __func__, item_tail,
+ item_tail->msg_id);
+#endif
+
+ /* Get HW and firmware state */
+ vxd_pvdec_get_state(vxd->dev, vxd->reg_base, VXD_NUM_PIX_PIPES(vxd->props), &state);
+
+ if (vxd->state.msg_id_tail == item_tail->msg_id &&
+ !memcmp(&state, &vxd->state.hw_state,
+ sizeof(struct vxd_hw_state))) {
+ vxd->state.msg_id_tail = 0;
+ memset(&vxd->state.hw_state, 0, sizeof(vxd->state.hw_state));
+ dev_err(vxd->dev, "device DWR(%ums) expired: resetting!!!\n",
+ vxd->hw_dwr_period);
+ vxd_emrg_reset_locked(vxd, VXD_FW_MSG_FLAG_DWR);
+ } else {
+ /* Record current state */
+ vxd->state.msg_id_tail = item_tail->msg_id;
+ vxd->state.hw_state = state;
+
+ /* Submit items to the HW, if space is available. */
+ vxd_schedule_locked(vxd);
+
+#ifdef DEBUG_DECODER_DRIVER
+ dev_dbg(vxd->dev, "%s: scheduling DWR work (%d ms)!\n",
+ __func__, vxd->hw_dwr_period);
+#endif
+ vxd_sched_worker_locked(vxd, vxd->hw_dwr_period);
+ }
+
+out_unlock:
+ mutex_unlock(vxd->mutex);
+}
+
+/*
+ * Lazy initialization of the main driver context (done when the first core
+ * is probed -- we need the heap configuration from sysdev to allocate the
+ * firmware buffers).
+ */
+int vxd_init(void *dev, struct vxd_dev *vxd,
+ const struct heap_config heap_configs[], int heaps)
+{
+ int ret, i;
+
+ INIT_LIST_HEAD(&vxd_drv.heaps);
+ vxd_drv.internal_heap_id = VXD_INVALID_ID;
+
+ vxd_drv.mem_ctx = NULL;
+
+ INIT_LIST_HEAD(&vxd_drv.devices);
+
+ vxd_drv.virt_space.fw_addr = 0x42000;
+ vxd_drv.virt_space.rendec_addr = 0xe0000000;
+
+ vxd_drv.initialised = 0;
+
+#ifdef DEBUG_DECODER_DRIVER
+ dev_dbg(dev, "%s: vxd drv init, params:\n", __func__);
+#endif
+
+ /* Initialise memory management component */
+ for (i = 0; i < heaps; i++) {
+ struct vxd_heap *heap;
+
+#ifdef DEBUG_DECODER_DRIVER
+ dev_dbg(dev, "%s: adding heap of type %d\n",
+ __func__, heap_configs[i].type);
+#endif
+
+ heap = kzalloc(sizeof(*heap), GFP_KERNEL);
+ if (!heap) {
+ ret = -ENOMEM;
+ goto heap_add_failed;
+ }
+
+ ret = img_mem_add_heap(&heap_configs[i], &heap->id);
+ if (ret < 0) {
+ dev_err(dev, "%s: failed to init heap (type %d)!\n",
+ __func__, heap_configs[i].type);
+ kfree(heap);
+ goto heap_add_failed;
+ }
+ list_add(&heap->list, &vxd_drv.heaps);
+
+ /* Implicitly, first heap is used for internal allocations */
+ if (vxd_drv.internal_heap_id < 0) {
+ vxd_drv.internal_heap_id = heap->id;
+#ifdef DEBUG_DECODER_DRIVER
+ dev_dbg(dev, "%s: using heap %d for internal alloc\n",
+ __func__, vxd_drv.internal_heap_id);
+#endif
+ }
+ }
+
+ /* Do not proceed if internal heap not defined */
+ if (vxd_drv.internal_heap_id < 0) {
+ dev_err(dev, "%s: failed to locate heap for internal alloc\n", __func__);
+ ret = -EINVAL;
+ /* Loop registered heaps just for sanity */
+ goto heap_add_failed;
+ }
+
+ /* Create memory management context for HW buffers */
+ ret = img_mem_create_ctx(&vxd_drv.mem_ctx);
+ if (ret) {
+ dev_err(dev, "%s: failed to create mem context (err:%d)!\n", __func__, ret);
+ goto create_mem_context_failed;
+ }
+
+ vxd->mem_ctx = vxd_drv.mem_ctx;
+
+ /* Allocate rendec buffer */
+ ret = img_mem_alloc(dev, vxd_drv.mem_ctx, vxd_drv.internal_heap_id,
+ VXD_RENDEC_SIZE * VXD_NUM_PIX_PIPES(vxd->props),
+ (enum mem_attr)0, &vxd->rendec_buf_id);
+ if (ret) {
+ dev_err(dev, "%s: alloc rendec buffer failed (err:%d)!\n", __func__, ret);
+ goto create_mem_context_failed;
+ }
+
+ init_delayed_work(&vxd->dwork, vxd_worker, HWA_DECODER);
+ if (!vxd->dwork) {
+		ret = -ENOMEM;
+ goto create_mem_context_failed;
+ }
+
+ vxd_drv.initialised = 1;
+#ifdef DEBUG_DECODER_DRIVER
+ dev_dbg(dev, "%s: vxd drv init done\n", __func__);
+#endif
+ return 0;
+
+create_mem_context_failed:
+heap_add_failed:
+ while (!list_empty(&vxd_drv.heaps)) {
+ struct vxd_heap *heap;
+
+ heap = list_first_entry(&vxd_drv.heaps, struct vxd_heap, list);
+ __list_del_entry(&heap->list);
+ img_mem_del_heap(heap->id);
+ kfree(heap);
+ }
+ vxd_drv.internal_heap_id = VXD_INVALID_ID;
+ return ret;
+}
+
+/*
+ * Get internal_heap_id
+ * Note: the only error indication is a negative value, so returning the
+ * stored value directly still conveys an error to the caller.
+ * The caller must check for errors.
+ */
+int vxd_g_internal_heap_id(void)
+{
+ return vxd_drv.internal_heap_id;
+}
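+
+/*
+ * Illustrative caller-side sketch (not part of the driver): since the only
+ * error indication from vxd_g_internal_heap_id() is a negative value, a
+ * caller is expected to check it before use, e.g.:
+ *
+ *	int heap_id = vxd_g_internal_heap_id();
+ *
+ *	if (heap_id < 0)
+ *		return -EINVAL;
+ */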
+
+void vxd_deinit(struct vxd_dev *vxd)
+{
+ cancel_delayed_work_sync(vxd->dwork);
+ vxd_make_hw_off_locked(vxd, FALSE);
+
+ /* Destroy memory management context */
+ if (vxd_drv.mem_ctx) {
+ /* Deallocate rendec buffer */
+ img_mem_free(vxd_drv.mem_ctx, vxd->rendec_buf_id);
+
+ img_mem_destroy_ctx(vxd_drv.mem_ctx);
+ vxd_drv.mem_ctx = NULL;
+ }
+
+ /* Deinitialize memory management component */
+ while (!list_empty(&vxd_drv.heaps)) {
+ struct vxd_heap *heap;
+
+ heap = list_first_entry(&vxd_drv.heaps, struct vxd_heap, list);
+ __list_del_entry(&heap->list);
+ img_mem_del_heap(heap->id);
+ kfree(heap);
+ }
+
+ vxd_drv.internal_heap_id = VXD_INVALID_ID;
+ vxd_drv.mem_ctx = NULL;
+ vxd_drv.virt_space.fw_addr = 0x0;
+ vxd_drv.virt_space.rendec_addr = 0x0;
+ vxd_drv.initialised = 0;
+
+#ifdef ERROR_RECOVERY_SIMULATION
+ /* free the kernel object created to debug */
+ kobject_put(vxd_dec_kobject);
+#endif
+}
+
+static void vxd_fw_loaded(const struct firmware *fw, void *context)
+{
+ struct vxd_dev *vxd = context;
+ unsigned long bin_size;
+ int buf_id;
+ struct vxd_fw_hdr *hdr;
+ void *buf_kptr;
+ int ret;
+ unsigned long size = 0;
+ const unsigned char *data = NULL;
+
+ if (!fw) {
+ dev_err(vxd->dev, "Firmware binary is not present\n");
+ vxd->no_fw = 1;
+ return;
+ }
+
+ size = fw->size;
+ data = fw->data;
+
+#ifdef DEBUG_DECODER_DRIVER
+	dev_info(vxd->dev, "FW: acquired %s size %lu\n", drv_fw_name, size);
+#endif
+
+ /* Sanity verification of the firmware */
+ if (size < sizeof(struct vxd_fw_hdr)) {
+ dev_err(vxd->dev, "%s: firmware file too small!\n", __func__);
+ goto out;
+ }
+
+ bin_size = size - sizeof(struct vxd_fw_hdr);
+ ret = img_mem_alloc(vxd->dev, vxd_drv.mem_ctx, vxd_drv.internal_heap_id,
+ bin_size, (enum mem_attr)0, &buf_id);
+ if (ret) {
+ dev_err(vxd->dev, "%s: failed to alloc fw buffer (err:%d)!\n", __func__, ret);
+ goto out;
+ }
+
+ hdr = kzalloc(sizeof(*hdr), GFP_KERNEL);
+ if (!hdr)
+ goto out_release_buf;
+
+ /* Store firmware header in vxd context */
+ memcpy(hdr, data, sizeof(struct vxd_fw_hdr));
+
+#ifdef DEBUG_DECODER_DRIVER
+ dev_info(vxd->dev, "FW: info cs: %u, bs: %u, id: 0x%08x, ts: %u\n",
+ hdr->core_size, hdr->blob_size,
+ hdr->firmware_id, hdr->timestamp);
+#endif
+
+ /* Check if header is consistent */
+ if (hdr->core_size > bin_size || hdr->blob_size > bin_size) {
+ dev_err(vxd->dev, "%s: got invalid firmware!\n", __func__);
+ goto out_release_hdr;
+ }
+
+ /* Map the firmware buffer to CPU */
+ ret = img_mem_map_km(vxd_drv.mem_ctx, buf_id);
+ if (ret) {
+ dev_err(vxd->dev, "%s: failed to map FW buf to cpu! (%d)\n", __func__, ret);
+ goto out_release_hdr;
+ }
+
+ /* Copy firmware to device buffer */
+ buf_kptr = img_mem_get_kptr(vxd_drv.mem_ctx, buf_id);
+ memcpy(buf_kptr, data + sizeof(struct vxd_fw_hdr), size - sizeof(struct vxd_fw_hdr));
+#ifdef DEBUG_DECODER_DRIVER
+ dev_dbg(vxd->dev, "%s: FW: copied to buffer %d kptr 0x%p\n", __func__, buf_id, buf_kptr);
+#endif
+
+ img_mem_sync_cpu_to_device(vxd_drv.mem_ctx, buf_id);
+
+ vxd->firmware.fw_size = size;
+ vxd->firmware.buf_id = buf_id;
+ vxd->firmware.hdr = hdr;
+ vxd->firmware.ready = TRUE;
+
+ release_firmware(fw);
+ complete_all(vxd->firmware_loading_complete);
+ pr_debug("Firmware loaded successfully ..!!\n");
+ return;
+
+out_release_hdr:
+ kfree(hdr);
+out_release_buf:
+ img_mem_free(vxd_drv.mem_ctx, buf_id);
+out:
+ release_firmware(fw);
+ complete_all(vxd->firmware_loading_complete);
+ kfree(vxd->firmware_loading_complete);
+ vxd->firmware_loading_complete = NULL;
+}
+
+/*
+ * Takes the firmware from the file system and allocates a buffer
+ */
+int vxd_prepare_fw(struct vxd_dev *vxd)
+{
+ int ret;
+
+ /* Fetch firmware from the file system */
+ struct completion **firmware_loading_complete =
+ (struct completion **)&vxd->firmware_loading_complete;
+
+	*firmware_loading_complete = kmalloc(sizeof(**firmware_loading_complete), GFP_KERNEL);
+ if (!(*firmware_loading_complete)) {
+ pr_err("Memory allocation failed for init_completion\n");
+ return -ENOMEM;
+ }
+ init_completion(*firmware_loading_complete);
+
+ if (!vxd->firmware_loading_complete)
+ return -ENOMEM;
+
+ vxd->firmware.ready = FALSE;
+ ret = request_firmware_nowait(THIS_MODULE, FW_ACTION_UEVENT,
+ drv_fw_name, vxd->dev, GFP_KERNEL, vxd,
+ vxd_fw_loaded);
+ if (ret < 0) {
+ dev_err(vxd->dev, "request_firmware_nowait err: %d\n", ret);
+ complete_all(vxd->firmware_loading_complete);
+ kfree(vxd->firmware_loading_complete);
+ vxd->firmware_loading_complete = NULL;
+ }
+
+ return ret;
+}
+
+/*
+ * Cleans firmware resources
+ */
+void vxd_clean_fw_resources(struct vxd_dev *vxd)
+{
+#ifdef DEBUG_DECODER_DRIVER
+ dev_dbg(vxd->dev, "%s:%d\n", __func__, __LINE__);
+#endif
+
+ wait_for_completion(vxd->firmware_loading_complete);
+ kfree(vxd->firmware_loading_complete);
+ vxd->firmware_loading_complete = NULL;
+
+ if (vxd->firmware.fw_size) {
+ img_mem_free(vxd_drv.mem_ctx, vxd->firmware.buf_id);
+ kfree(vxd->firmware.hdr);
+ vxd->firmware.hdr = NULL;
+#ifdef DEBUG_DECODER_DRIVER
+ dev_info(vxd->dev, "FW: released %s\n", drv_fw_name);
+#endif
+ vxd->firmware.buf_id = VXD_INVALID_ID;
+ }
+}
+
+/*
+ * Submit a message to the VXD.
+ * <ctx> is used to verify that requested stream id (item->stream_id) is valid
+ * for this ctx
+ */
+int vxd_send_msg(struct vxd_dec_ctx *ctx, struct vxd_fw_msg *msg)
+{
+ struct vxd_dev *vxd = ctx->dev;
+ unsigned long msg_size;
+ struct vxd_item *item;
+ struct vxd_stream *stream;
+ int ret;
+
+ if (msg->payload_size < VXD_MIN_INPUT_SIZE)
+ return -EINVAL;
+
+ if (msg->payload_size % sizeof(unsigned int)) {
+ dev_err(vxd->dev, "msg size not aligned! (%u)\n",
+ msg->payload_size);
+ return -EINVAL;
+ }
+
+ msg_size = VXD_MSG_SIZE(*msg);
+
+ if (msg_size > VXD_MAX_INPUT_SIZE)
+ return -EINVAL;
+
+ /* Verify that the gap was left for stream PTD */
+ if (msg->payload[VXD_PTD_MSG_OFFSET] != 0) {
+ dev_err(vxd->dev, "%s: PTD gap missing!\n", __func__);
+ return -EINVAL;
+ }
+
+ ret = mutex_lock_interruptible_nested(ctx->mutex, SUBCLASS_VXD_CORE);
+ if (ret)
+ return ret;
+
+ stream = idr_find(vxd->streams, ctx->stream.id);
+ if (!stream) {
+ dev_warn(vxd->dev, "%s: invalid stream id requested! (%u)\n",
+ __func__, ctx->stream.id);
+
+ ret = -EINVAL;
+ goto out_unlock;
+ }
+
+ item = kmalloc(sizeof(*item) + msg->payload_size, GFP_KERNEL);
+ if (!item) {
+ ret = -ENOMEM;
+ goto out_unlock;
+ }
+
+ memcpy(&item->msg, msg, msg_size);
+
+ msg->out_flags &= VXD_FW_MSG_WR_FLAGS_MASK;
+ item->stream_id = ctx->stream.id;
+ item->msg_id = 0;
+ item->msg.out_flags = msg->out_flags;
+ item->destroy = 0;
+
+ /*
+ * Inject the stream PTD into the message. It was already verified that
+ * there is enough space.
+ */
+ item->msg.payload[VXD_PTD_MSG_OFFSET] = stream->ptd;
+
+ list_add_tail(&item->list, &vxd->pend);
+#ifdef DEBUG_DECODER_DRIVER
+ dev_dbg(vxd->dev,
+ "%s: added item %p to pend, ptd: 0x%x, str: %u flags: 0x%x\n",
+ __func__, item, stream->ptd, stream->id, item->msg.out_flags);
+#endif
+
+ vxd_schedule_locked(vxd);
+
+out_unlock:
+ mutex_unlock(ctx->mutex);
+
+ return ret;
+}
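+
+/*
+ * Illustrative caller-side sketch (not part of the driver): vxd_send_msg()
+ * expects a word-aligned payload of at least VXD_MIN_INPUT_SIZE with
+ * payload[VXD_PTD_MSG_OFFSET] left as zero, so the stream PTD can be
+ * injected here, e.g.:
+ *
+ *	msg->payload_size = ALIGN(input_size, sizeof(unsigned int));
+ *	msg->payload[VXD_PTD_MSG_OFFSET] = 0;
+ *	ret = vxd_send_msg(ctx, msg);
+ */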
+
+int vxd_suspend_dev(void *dev)
+{
+ struct vxd_dev *vxd = platform_get_drvdata(to_platform_device(dev));
+
+ mutex_lock(vxd->mutex);
+#ifdef DEBUG_DECODER_DRIVER
+ dev_dbg(dev, "%s: taking a nap!\n", __func__);
+#endif
+
+ /* Cancel the worker first */
+ cancel_delayed_work(vxd->dwork);
+
+ /* Forcing hardware disable */
+ vxd_make_hw_off_locked(vxd, TRUE);
+
+ /* Move all valid items to the pending queue */
+ vxd_rewind_msgs_locked(vxd);
+
+ mutex_unlock(vxd->mutex);
+
+ return 0;
+}
+
+int vxd_resume_dev(void *dev)
+{
+ struct vxd_dev *vxd = platform_get_drvdata(to_platform_device(dev));
+ int ret = 0;
+
+ mutex_lock(vxd->mutex);
+#ifdef DEBUG_DECODER_DRIVER
+ dev_dbg(dev, "%s: waking up!\n", __func__);
+#endif
+
+ mutex_unlock(vxd->mutex);
+
+ return ret;
+}
+
+int vxd_map_buffer_sg(struct vxd_dev *vxd, struct vxd_dec_ctx *ctx,
+ unsigned int str_id,
+ unsigned int buff_id,
+ void *sgt, unsigned int virt_addr,
+ unsigned int map_flags)
+{
+ struct vxd_stream *stream;
+ unsigned int flags = VXD_MMU_PTD_FLAG_NONE;
+ int ret;
+
+ ret = mutex_lock_interruptible_nested(ctx->mutex, SUBCLASS_VXD_CORE);
+ if (ret)
+ return ret;
+
+ stream = idr_find(vxd->streams, str_id);
+ if (!stream) {
+ dev_err(vxd->dev, "%s: stream %d not found!\n", __func__, str_id);
+ ret = -EINVAL;
+ goto out_unlock;
+ }
+
+ if ((map_flags & (VXD_MAP_FLAG_READ_ONLY | VXD_MAP_FLAG_WRITE_ONLY))
+ == (VXD_MAP_FLAG_READ_ONLY | VXD_MAP_FLAG_WRITE_ONLY)) {
+ dev_err(vxd->dev, "%s: Bogus mapping flags 0x%x!\n", __func__,
+ map_flags);
+ ret = -EINVAL;
+ goto out_unlock;
+ }
+
+ /* Convert permission flags to internal definitions */
+ if (map_flags & VXD_MAP_FLAG_READ_ONLY)
+ flags |= VXD_MMU_PTD_FLAG_READ_ONLY;
+
+ if (map_flags & VXD_MAP_FLAG_WRITE_ONLY)
+ flags |= VXD_MMU_PTD_FLAG_WRITE_ONLY;
+
+ ret = img_mmu_map_sg(stream->mmu_ctx, ctx->mem_ctx, buff_id, sgt, virt_addr, flags);
+ if (ret) {
+ dev_err(vxd->dev, "%s: map failed!\n", __func__);
+ goto out_unlock;
+ }
+
+#ifdef DEBUG_DECODER_DRIVER
+ dev_dbg(vxd->dev,
+ "%s: mapped buf %u to 0x%08x, str_id: %u flags: 0x%x\n",
+ __func__, buff_id, virt_addr, str_id, flags);
+#endif
+
+out_unlock:
+ mutex_unlock(ctx->mutex);
+ return ret;
+}
+
+int vxd_map_buffer(struct vxd_dev *vxd, struct vxd_dec_ctx *ctx, unsigned int str_id,
+ unsigned int buff_id,
+ unsigned int virt_addr,
+ unsigned int map_flags)
+{
+ struct vxd_stream *stream;
+ unsigned int flags = VXD_MMU_PTD_FLAG_NONE;
+ int ret;
+
+ ret = mutex_lock_interruptible_nested(ctx->mutex, SUBCLASS_VXD_CORE);
+ if (ret)
+ return ret;
+
+ stream = idr_find(vxd->streams, str_id);
+ if (!stream) {
+ dev_err(vxd->dev, "%s: stream %d not found!\n", __func__, str_id);
+ ret = -EINVAL;
+ goto out_unlock;
+ }
+
+ if ((map_flags & (VXD_MAP_FLAG_READ_ONLY | VXD_MAP_FLAG_WRITE_ONLY))
+ == (VXD_MAP_FLAG_READ_ONLY | VXD_MAP_FLAG_WRITE_ONLY)) {
+ dev_err(vxd->dev, "%s: Bogus mapping flags 0x%x!\n", __func__, map_flags);
+ ret = -EINVAL;
+ goto out_unlock;
+ }
+
+ /* Convert permission flags to internal definitions */
+ if (map_flags & VXD_MAP_FLAG_READ_ONLY)
+ flags |= VXD_MMU_PTD_FLAG_READ_ONLY;
+
+ if (map_flags & VXD_MAP_FLAG_WRITE_ONLY)
+ flags |= VXD_MMU_PTD_FLAG_WRITE_ONLY;
+
+ ret = img_mmu_map(stream->mmu_ctx, ctx->mem_ctx, buff_id, virt_addr, flags);
+ if (ret) {
+ dev_err(vxd->dev, "%s: map failed!\n", __func__);
+ goto out_unlock;
+ }
+
+#ifdef DEBUG_DECODER_DRIVER
+ dev_dbg(vxd->dev,
+ "%s: mapped buf %u to 0x%08x, str_id: %u flags: 0x%x\n",
+ __func__, buff_id, virt_addr, str_id, flags);
+#endif
+
+out_unlock:
+ mutex_unlock(ctx->mutex);
+ return ret;
+}
+
+int vxd_unmap_buffer(struct vxd_dev *vxd, struct vxd_dec_ctx *ctx,
+ unsigned int str_id, unsigned int buff_id)
+{
+ struct vxd_stream *stream;
+ int ret;
+
+ ret = mutex_lock_interruptible_nested(ctx->mutex, SUBCLASS_VXD_CORE);
+ if (ret)
+ return ret;
+
+ stream = idr_find(vxd->streams, str_id);
+ if (!stream) {
+ dev_err(vxd->dev, "%s: stream %d not found!\n", __func__, str_id);
+ ret = -EINVAL;
+ goto out_unlock;
+ }
+
+ ret = img_mmu_unmap(stream->mmu_ctx, ctx->mem_ctx, buff_id);
+ if (ret) {
+		dev_err(vxd->dev, "%s: unmap failed!\n", __func__);
+ goto out_unlock;
+ }
+
+#ifdef DEBUG_DECODER_DRIVER
+ dev_dbg(vxd->dev, "%s: unmapped buf %u str_id: %u\n", __func__, buff_id, str_id);
+#endif
+
+out_unlock:
+	mutex_unlock(ctx->mutex);
+ return ret;
+}
diff --git a/drivers/staging/media/vxd/decoder/vxd_dec.c b/drivers/staging/media/vxd/decoder/vxd_dec.c
new file mode 100644
index 000000000000..cf3cf9b7b6f0
--- /dev/null
+++ b/drivers/staging/media/vxd/decoder/vxd_dec.c
@@ -0,0 +1,185 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * IMG DEC SYSDEV and UI Interface function implementations
+ *
+ * Copyright (c) Imagination Technologies Ltd.
+ * Copyright (c) 2021 Texas Instruments Incorporated - http://www.ti.com/
+ *
+ * Authors:
+ * Amit Makani <[email protected]>
+ *
+ * Re-written for upstreaming
+ * Sidraya Jayagond <[email protected]>
+ * Prashanth Kumar Amai <[email protected]>
+ */
+
+#include <linux/dma-mapping.h>
+#include <media/v4l2-ctrls.h>
+#include <media/v4l2-device.h>
+#include <media/v4l2-mem2mem.h>
+
+#include "core.h"
+#include "h264fw_data.h"
+#include "hevcfw_data.h"
+#include "img_dec_common.h"
+#include "vxd_pvdec_priv.h"
+
+unsigned int get_nbuffers(enum vdec_vid_std std, int w, int h,
+ unsigned int max_num_ref_frames)
+{
+ unsigned int nbuffers;
+
+ switch (std) {
+ case VDEC_STD_H264:
+		/*
+		 * Request the number of buffers based on the bspp header
+		 * information, using the formula N + display lag.
+		 * The parser passes (2 * N).
+		 */
+ if (max_num_ref_frames == 0) {
+ nbuffers = DISPLAY_LAG + min(MAX_CAPBUFS_H264,
+ (184320 / ((w / 16) * (h / 16))));
+ } else {
+ nbuffers = max_num_ref_frames + DISPLAY_LAG;
+ }
+ break;
+ case VDEC_STD_HEVC:
+ if (max_num_ref_frames == 0) {
+ if ((w * h) <= (HEVC_MAX_LUMA_PS >> 2))
+ nbuffers = 16;
+ else if ((w * h) <= (HEVC_MAX_LUMA_PS >> 1))
+ nbuffers = 12;
+ else if ((w * h) <= ((3 * HEVC_MAX_LUMA_PS) >> 2))
+ nbuffers = 8;
+ else
+ nbuffers = 6;
+ nbuffers += DISPLAY_LAG;
+ } else {
+ nbuffers = max_num_ref_frames + DISPLAY_LAG;
+ }
+ break;
+#ifdef HAS_JPEG
+ case VDEC_STD_JPEG:
+ /*
+ * Request number of output buffers based on h264 spec
+ * + display delay
+ */
+ nbuffers = DISPLAY_LAG + min(MAX_CAPBUFS_H264,
+ (184320 / ((w / 16) * (h / 16))));
+ break;
+#endif
+ default:
+ nbuffers = 0;
+ }
+
+ return nbuffers;
+}
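+
+/*
+ * Worked example (illustrative only): for a 1920x1080 HEVC stream with
+ * max_num_ref_frames == 0, w * h = 2073600 <= (HEVC_MAX_LUMA_PS >> 2),
+ * so get_nbuffers() returns 16 + DISPLAY_LAG = 19 buffers.
+ */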
+
+int vxd_dec_alloc_bspp_resource(struct vxd_dec_ctx *ctx, enum vdec_vid_std vid_std)
+{
+ struct vxd_dev *vxd_dev = ctx->dev;
+ struct device *dev = vxd_dev->v4l2_dev.dev;
+ struct vdec_buf_info buf_info;
+ struct bspp_ddbuf_array_info *fw_sequ = ctx->fw_sequ;
+ struct bspp_ddbuf_array_info *fw_pps = ctx->fw_pps;
+ int attributes = 0, heap_id = 0, size = 0;
+ int i, ret = 0;
+
+ attributes = SYS_MEMATTRIB_UNCACHED | SYS_MEMATTRIB_WRITECOMBINE |
+ SYS_MEMATTRIB_INTERNAL | SYS_MEMATTRIB_CPU_WRITE;
+ heap_id = vxd_g_internal_heap_id();
+
+ size = vid_std == VDEC_STD_HEVC ?
+ sizeof(struct hevcfw_sequence_ps) : sizeof(struct h264fw_sequence_ps);
+
+#ifdef HAS_JPEG
+ if (vid_std == VDEC_STD_JPEG)
+ size = sizeof(struct vdec_jpeg_sequ_hdr_info);
+#endif
+
+ for (i = 0; i < MAX_SEQUENCES; i++) {
+ ret = img_mem_alloc(vxd_dev->dev, ctx->mem_ctx, heap_id,
+ size, (enum mem_attr)attributes,
+ (int *)&fw_sequ[i].ddbuf_info.buf_id);
+ if (ret) {
+ dev_err(dev, "Couldn't allocate sequ buffer %d\n", i);
+ return -ENOMEM;
+ }
+ ret = img_mem_map_km(ctx->mem_ctx, fw_sequ[i].ddbuf_info.buf_id);
+ if (ret) {
+ dev_err(dev, "Couldn't map sequ buffer %d\n", i);
+ return -ENOMEM;
+ }
+		fw_sequ[i].ddbuf_info.cpu_virt_addr =
+			img_mem_get_kptr(ctx->mem_ctx, fw_sequ[i].ddbuf_info.buf_id);
+ fw_sequ[i].buf_offset = 0;
+ fw_sequ[i].buf_element_size = size;
+ fw_sequ[i].ddbuf_info.buf_size = size;
+ fw_sequ[i].ddbuf_info.mem_attrib = (enum sys_emem_attrib)attributes;
+ memset(fw_sequ[i].ddbuf_info.cpu_virt_addr, 0, size);
+
+ buf_info.cpu_linear_addr =
+ fw_sequ[i].ddbuf_info.cpu_virt_addr;
+ buf_info.buf_size = size;
+ buf_info.fd = -1;
+ buf_info.buf_id = fw_sequ[i].ddbuf_info.buf_id;
+ buf_info.mem_attrib =
+ (enum sys_emem_attrib)(SYS_MEMATTRIB_UNCACHED | SYS_MEMATTRIB_WRITECOMBINE |
+ SYS_MEMATTRIB_INPUT | SYS_MEMATTRIB_CPU_WRITE);
+
+ ret = core_stream_map_buf(ctx->res_str_id, VDEC_BUFTYPE_BITSTREAM, &buf_info,
+ &fw_sequ[i].ddbuf_info.bufmap_id);
+ if (ret) {
+ dev_err(dev, "sps core_stream_map_buf failed\n");
+ return ret;
+ }
+ }
+
+#ifdef HAS_JPEG
+ if (vid_std == VDEC_STD_JPEG)
+ return 0;
+#endif
+
+ size = vid_std == VDEC_STD_HEVC ?
+ sizeof(struct hevcfw_picture_ps) : sizeof(struct h264fw_picture_ps);
+
+ for (i = 0; i < MAX_PPSS; i++) {
+ ret = img_mem_alloc(vxd_dev->dev, ctx->mem_ctx, heap_id, size,
+ (enum mem_attr)attributes,
+ (int *)&fw_pps[i].ddbuf_info.buf_id);
+ if (ret) {
+			dev_err(dev, "Couldn't allocate pps buffer %d\n", i);
+ return -ENOMEM;
+ }
+ ret = img_mem_map_km(ctx->mem_ctx, fw_pps[i].ddbuf_info.buf_id);
+ if (ret) {
+			dev_err(dev, "Couldn't map pps buffer %d\n", i);
+ return -ENOMEM;
+ }
+ fw_pps[i].ddbuf_info.cpu_virt_addr = img_mem_get_kptr(ctx->mem_ctx,
+ fw_pps[i].ddbuf_info.buf_id);
+ fw_pps[i].buf_offset = 0;
+ fw_pps[i].buf_element_size = size;
+ fw_pps[i].ddbuf_info.buf_size = size;
+ fw_pps[i].ddbuf_info.mem_attrib = (enum sys_emem_attrib)attributes;
+ memset(fw_pps[i].ddbuf_info.cpu_virt_addr, 0, size);
+
+ buf_info.cpu_linear_addr =
+ fw_pps[i].ddbuf_info.cpu_virt_addr;
+ buf_info.buf_size = size;
+ buf_info.fd = -1;
+ buf_info.buf_id = fw_pps[i].ddbuf_info.buf_id;
+ buf_info.mem_attrib =
+ (enum sys_emem_attrib)(SYS_MEMATTRIB_UNCACHED | SYS_MEMATTRIB_WRITECOMBINE |
+ SYS_MEMATTRIB_INPUT | SYS_MEMATTRIB_CPU_WRITE);
+
+ ret = core_stream_map_buf(ctx->res_str_id, VDEC_BUFTYPE_BITSTREAM, &buf_info,
+ &fw_pps[i].ddbuf_info.bufmap_id);
+ if (ret) {
+ dev_err(dev, "pps core_stream_map_buf failed\n");
+ return ret;
+ }
+ }
+ return 0;
+}
diff --git a/drivers/staging/media/vxd/decoder/vxd_dec.h b/drivers/staging/media/vxd/decoder/vxd_dec.h
new file mode 100644
index 000000000000..a8d409bc4212
--- /dev/null
+++ b/drivers/staging/media/vxd/decoder/vxd_dec.h
@@ -0,0 +1,477 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * IMG DEC SYSDEV and UI Interface header
+ *
+ * Copyright (c) Imagination Technologies Ltd.
+ * Copyright (c) 2021 Texas Instruments Incorporated - http://www.ti.com/
+ *
+ * Authors:
+ * Amit Makani <[email protected]>
+ *
+ * Re-written for upstreaming
+ * Sidraya Jayagond <[email protected]>
+ * Prashanth Kumar Amai <[email protected]>
+ */
+
+#ifndef _VXD_DEC_H
+#define _VXD_DEC_H
+
+#include <linux/interrupt.h>
+#include <linux/dma-mapping.h>
+#include <media/v4l2-ctrls.h>
+#include <media/v4l2-device.h>
+#include <media/v4l2-mem2mem.h>
+#include <linux/types.h>
+
+#include "bspp.h"
+#include "img_dec_common.h"
+#include "img_mem_man.h"
+#include "img_pixfmts.h"
+#include "pixel_api.h"
+#include "vdecdd_defs.h"
+#include "vdec_defs.h"
+#include "work_queue.h"
+
+#define VXD_MIN_STREAM_ID 1
+#define VXD_MAX_STREAMS_PER_DEV 254
+#define VXD_MAX_STREAM_ID (VXD_MIN_STREAM_ID + VXD_MAX_STREAMS_PER_DEV)
+
+#define CODEC_NONE -1
+#define CODEC_H264_DEC 0
+#define CODEC_MPEG4_DEC 1
+#define CODEC_VP8_DEC 2
+#define CODEC_VC1_DEC 3
+#define CODEC_MPEG2_DEC 4
+#define CODEC_JPEG_DEC 5
+#define CODEC_VP9_DEC 6
+#define CODEC_HEVC_DEC 7
+
+#define MAX_SEGMENTS 6
+#define HW_ALIGN 64
+
+#define MAX_BUF_TRACE 30
+
+#define MAX_CAPBUFS_H264 16
+#define DISPLAY_LAG 3
+#define HEVC_MAX_LUMA_PS 35651584
+
+#define MAX_PLANES 3
+
+enum {
+ Q_DATA_SRC = 0,
+ Q_DATA_DST = 1,
+ Q_DATA_FORCE32BITS = 0x7FFFFFFFU
+};
+
+enum {
+ IMG_DEC_FMT_TYPE_CAPTURE = 0x01,
+ IMG_DEC_FMT_TYPE_OUTPUT = 0x10,
+ IMG_DEC_FMT_TYPE_FORCE32BITS = 0x7FFFFFFFU
+};
+
+enum vxd_map_flags {
+ VXD_MAP_FLAG_NONE = 0x0,
+ VXD_MAP_FLAG_READ_ONLY = 0x1,
+ VXD_MAP_FLAG_WRITE_ONLY = 0x2,
+ VXD_MAP_FLAG_FORCE32BITS = 0x7FFFFFFFU
+};
+
+/*
+ * struct vxd_fw_msg - This structure holds the information about the message
+ * exchanged in read/write between Kernel and firmware.
+ *
+ * @out_flags: indicating the type of message
+ * @payload_size: size of payload in bytes
+ * @payload: data which is sent to the firmware
+ */
+struct vxd_fw_msg {
+ unsigned int out_flags;
+ unsigned int payload_size;
+ unsigned int payload[0];
+};
+
+/* HW state */
+struct vxd_hw_state {
+ unsigned int fw_counter;
+ unsigned int fe_status[VXD_MAX_PIPES];
+ unsigned int be_status[VXD_MAX_PIPES];
+ unsigned int dmac_status[VXD_MAX_PIPES][2]; /* Cover DMA chan 2/3*/
+ unsigned int irq_status;
+};
+
+/*
+ * struct vxd_state - contains VXD HW state
+ *
+ * @hw_state: HW state
+ * @msg_id_tail: msg id of the oldest item being processed
+ */
+struct vxd_state {
+ struct vxd_hw_state hw_state;
+ unsigned short msg_id_tail;
+};
+
+/*
+ * struct vxd_dec_fmt - contains info for each of the supported video format
+ *
+ * @fourcc: V4L2 pixel format FCC identifier
+ * @num_planes: number of planes required for luma and chroma
+ * @type: CAPTURE or OUTPUT
+ * @std: VDEC video standard
+ * @pixfmt: IMG pixel format
+ * @interleave: Chroma interleave order
+ * @idc: Chroma format
+ * @size_num: Numerator used to calculate image size
+ * @size_den: Denominator used to calculate image size
+ * @bytes_pp: Bytes per pixel for this format
+ */
+struct vxd_dec_fmt {
+ unsigned int fourcc;
+ unsigned int num_planes;
+ unsigned char type;
+ enum vdec_vid_std std;
+ enum img_pixfmt pixfmt;
+ enum pixel_chroma_interleaved interleave;
+ enum pixel_fmt_idc idc;
+ int size_num;
+ int size_den;
+ int bytes_pp;
+};
+
+/*
+ * struct vxd_item - contains information about the item sent to fw
+ *
+ * @list: list head used to link the item into items_done, msgs or pend
+ * @stream_id: stream id
+ * @msg_id: message id
+ * @destroy: item belongs to the stream which is destroyed
+ * @msg: contains msg between kernel and fw
+ */
+struct vxd_item {
+ struct list_head list;
+ unsigned int stream_id;
+ unsigned int msg_id;
+ struct {
+		unsigned int destroy : 1;
+ };
+ struct vxd_fw_msg msg;
+};
+
+enum vxd_cb_type {
+ VXD_CB_STRUNIT_PROCESSED,
+ VXD_CB_SPS_RELEASE,
+ VXD_CB_PPS_RELEASE,
+ VXD_CB_PICT_DECODED,
+ VXD_CB_PICT_DISPLAY,
+ VXD_CB_PICT_RELEASE,
+ VXD_CB_PICT_END,
+ VXD_CB_STR_END,
+ VXD_CB_ERROR_FATAL,
+ VXD_CB_FORCE32BITS = 0x7FFFFFFFU
+};
+
+/*
+ * vxd_cb - Return a resource to vxd
+ *
+ * @ctx: the vxd stream context
+ * @type: the type of message
+ * @buf_map_id: the buf_map_id of the resource being returned
+ */
+typedef void (*vxd_cb)(void *ctx, enum vxd_cb_type type, unsigned int buf_map_id);
+
+/*
+ * struct vxd_return - contains information about items returning from core
+ *
+ * @work: work item used to handle this return
+ * @ctx: the vxd_dec_ctx to which this return belongs
+ * @type: Type of item being returned
+ * @buf_map_id: mmu mapped id of buffer being returned
+ */
+struct vxd_return {
+ void *work;
+ struct vxd_dec_ctx *ctx;
+ enum vxd_cb_type type;
+ unsigned int buf_map_id;
+};
+
+/*
+ * struct vxd_dec_q_data - contains queue data information
+ *
+ * @fmt: format info
+ * @width: frame width
+ * @height: frame height
+ * @bytesperline: bytes per line in memory
+ * @size_image: image size in memory
+ */
+struct vxd_dec_q_data {
+ struct vxd_dec_fmt *fmt;
+ unsigned int width;
+ unsigned int height;
+ unsigned int bytesperline[MAX_PLANES];
+ unsigned int size_image[MAX_PLANES];
+};
+
+/*
+ * struct time_prof - contains time taken by decoding information
+ *
+ * @id: id info
+ * @start_time: start time
+ * @end_time: end time
+ */
+struct time_prof {
+ unsigned int id;
+ long long start_time;
+ long long end_time;
+};
+
+/*
+ * struct vxd_dev - The struct containing decoder driver internal parameters.
+ *
+ * @v4l2_dev: main struct of V4L2 device drivers
+ * @dev: platform device driver
+ * @vfd_dec: video device structure to create and manage the V4L2 device node.
+ * @plat_dev: linux platform device
+ * @m2m_dev: mem2mem device
+ * @mutex: mutex to protect certain ongoing operation.
+ * @module_irq: a threaded request IRQ for the device
+ * @reg_base: base address of the IMG VXD hw registers
+ * @props: contains HW properties
+ * @mmu_config_addr_width: indicates the number of extended address bits
+ * (above 32) that the external memory interface
+ * uses, based on EXTENDED_ADDR_RANGE field of
+ * MMU_CONFIG0
+ * @rendec_buf_id: buffer id for rendec buffer allocation
+ * @firmware: firmware information based on vxd_dev_fw structure
+ * @firmware_loading_complete: loading completion
+ * @no_fw: set when the firmware could not be found in /lib/firmware
+ * @fw_refcnt: firmware reference counter
+ * @hw_on: indication if hw is on or off
+ * @hw_dead: indication if hw is dead
+ * @lock: basic primitive for locking through spinlock
+ * @state: internal state handling of vxd state
+ * @msgs: linked list of msgs with vxd_item
+ * @pend: linked list of pending msgs to be sent to fw
+ * @msg_cnt: counter of messages submitted to VXD. Wraps every VXD_MSG_ID_MASK
+ * @freq_khz: Core clock frequency measured during boot of firmware
+ * @streams: unique id for the stream
+ * @mem_ctx: memory management context for HW buffers
+ * @dwork: use for Power Management and Watchdog
+ * @work_sched_at: the time at which the last work item was scheduled
+ * @emergency: indicates if emergency condition occurred
+ * @dbgfs_ctx: pointer to debug FS context.
+ * @hw_pm_delay: delay before performing PM
+ * @hw_dwr_period: period for checking for dwr
+ * @pm_start: time, in jiffies, when core become idle
+ * @dwr_start: time, in jiffies, when dwr has been started
+ */
+struct vxd_dev {
+ struct v4l2_device v4l2_dev;
+ void *dev;
+ struct video_device *vfd_dec;
+ struct platform_device *plat_dev;
+ struct v4l2_m2m_dev *m2m_dev;
+ struct mutex *mutex; /* Per device mutex */
+ int module_irq;
+ void __iomem *reg_base;
+ struct vxd_core_props props;
+ unsigned int mmu_config_addr_width;
+ int rendec_buf_id;
+ struct vxd_dev_fw firmware;
+ void *firmware_loading_complete;
+ unsigned char no_fw;
+ unsigned char fw_refcnt;
+ unsigned int hw_on;
+ unsigned int hw_dead;
+ void *lock; /* basic device level spinlock */
+ struct vxd_state state;
+ struct list_head msgs;
+ struct list_head pend;
+ int msg_cnt;
+ unsigned int freq_khz;
+ struct idr *streams;
+ struct mem_ctx *mem_ctx;
+ void *dwork;
+ unsigned long long work_sched_at;
+ unsigned int emergency;
+ void *dbgfs_ctx;
+ unsigned int hw_pm_delay;
+ unsigned int hw_dwr_period;
+ unsigned long long pm_start;
+ unsigned long long dwr_start;
+ struct time_prof time_fw[MAX_BUF_TRACE];
+ struct time_prof time_drv[MAX_BUF_TRACE];
+
+ /* The variables defined below are used in RTOS only. */
+ /* This variable holds queue handler */
+ void *vxd_worker_queue_handle;
+ void *vxd_worker_queue_sem_handle;
+};
+
+/*
+ * struct vxd_stream - holds stream-related info
+ *
+ * @ctx: associated vxd_dec_ctx
+ * @mmu_ctx: MMU context for this stream
+ * @ptd: ptd for the stream
+ * @id: unique stream id
+ */
+struct vxd_stream {
+ struct vxd_dec_ctx *ctx;
+ struct mmu_ctx *mmu_ctx;
+ unsigned int ptd;
+ unsigned int id;
+};
+
+/*
+ * struct vxd_buffer - holds per buffer info.
+ * @buffer: the vb2_v4l2_buffer
+ * @list: list head for gathering in linked list
+ * @mapped: is this buffer mapped yet
+ * @reuse: is the buffer ready for reuse
+ * @buf_map_id: the mapped buffer id
+ * @buf_info: the buffer info for submitting to map
+ * @bstr_info: the buffer info for submitting to bspp
+ * @seq_unit: the str_unit for submitting sps
+ * @pic_unit: the str_unit for submitting pps and segments
+ * @end_unit: the str_unit for submitting picture_end
+ * @preparsed_data: the data pre-parsed by bspp for this buffer
+ */
+struct vxd_buffer {
+ struct v4l2_m2m_buffer buffer;
+ struct list_head list;
+ unsigned char mapped;
+ unsigned char reuse;
+ unsigned int buf_map_id;
+ struct vdec_buf_info buf_info;
+ struct bspp_ddbuf_info bstr_info;
+ struct vdecdd_str_unit seq_unit;
+ struct vdecdd_str_unit pic_unit;
+ struct vdecdd_str_unit end_unit;
+ struct bspp_preparsed_data preparsed_data;
+};
+
+typedef void (*decode_cb)(int res_str_id, unsigned int *msg, unsigned int msg_size,
+ unsigned int msg_flags);
+
+/*
+ * struct vxd_dec_ctx - holds per stream data. Each playback has its own
+ * vxd_dec_ctx
+ *
+ * @fh: V4L2 file handler
+ * @dev: pointer to the device main information.
+ * @ctrl_hdl_dec: v4l2 custom control command for video decoder
+ * @mem_ctx: mem context for this stream
+ * @mmu_ctx: MMU context for this stream
+ * @ptd: page table information
+ * @items_done: linked list of items is ready
+ * @width: frame width
+ * @height: frame height
+ * @width_orig: original frame width (before padding)
+ * @height_orig: original frame height (before padding)
+ * @q_data: Queue data information of src[0] and dst[1]
+ * @stream: stream-related info
+ * @work: work queue for message handling
+ * @return_queue: list of resources returned from core
+ * @out_buffers: list of all output buffers
+ * @cap_buffers: list of all capture buffers except those in reuse_queue
+ * @reuse_queue: list of capture buffers waiting for core to signal reuse
+ * @res_str_id: Core stream id
+ * @stream_created: Core stream is created
+ * @stream_configured: Core stream is configured
+ * @opconfig_pending: Core opconfig is pending stream_create
+ * @src_streaming: V4L2 src stream is streaming
+ * @dst_streaming: V4L2 dst stream is streaming
+ * @core_streaming: core is streaming
+ * @aborting: signal job abort on next irq
+ * @str_opcfg: core output config
+ * @pict_bufcfg: core picture buffer config
+ * @bspp_context: BSPP Stream context handle
+ * @seg_list: list of bspp_bitstr_seg for submitting to BSPP
+ * @fw_sequ: BSPP sps resource
+ * @fw_pps: BSPP pps resource
+ * @cb: registered callback for incoming messages
+ * @mutex: mutex to protect context specific state machine
+ */
+struct vxd_dec_ctx {
+ struct v4l2_fh fh;
+ struct vxd_dev *dev;
+ struct mem_ctx *mem_ctx;
+ struct mmu_ctx *mmu_ctx;
+ unsigned int ptd;
+ struct list_head items_done;
+ unsigned int width;
+ unsigned int height;
+ unsigned int width_orig;
+ unsigned int height_orig;
+ struct vxd_dec_q_data q_data[2];
+ struct vxd_stream stream;
+ void *work;
+ struct list_head return_queue;
+ struct list_head out_buffers;
+ struct list_head cap_buffers;
+ struct list_head reuse_queue;
+ unsigned int res_str_id;
+ unsigned char stream_created;
+ unsigned char stream_configured;
+ unsigned char opconfig_pending;
+ unsigned char src_streaming;
+ unsigned char dst_streaming;
+ unsigned char core_streaming;
+ unsigned char aborting;
+ unsigned char eos;
+ unsigned char stop_initiated;
+ unsigned char flag_last;
+ unsigned char num_decoding;
+ unsigned int max_num_ref_frames;
+ struct vdec_str_opconfig str_opcfg;
+ struct vdec_pict_bufconfig pict_bufcfg;
+ void *bspp_context;
+ struct bspp_bitstr_seg bstr_segments[MAX_SEGMENTS];
+ struct lst_t seg_list;
+ struct bspp_ddbuf_array_info fw_sequ[MAX_SEQUENCES];
+ struct bspp_ddbuf_array_info fw_pps[MAX_PPSS];
+ decode_cb cb;
+ struct mutex *mutex; /* Per stream mutex */
+
+	/* The variables below are used only in RTOS */
+	void *mm_return_resource; /* Placeholder for CB to application */
+ void *stream_worker_queue_handle;
+ void *stream_worker_queue_sem_handle;
+	/* lock is used to synchronize the stream worker and the process function */
+	void *lock;
+	/* sem_eos is used to wait until all frames are decoded */
+ void *sem_eos;
+};
+
+irqreturn_t vxd_handle_irq(void *dev);
+irqreturn_t vxd_handle_thread_irq(void *dev);
+int vxd_init(void *dev, struct vxd_dev *vxd, const struct heap_config heap_configs[], int heaps);
+int vxd_g_internal_heap_id(void);
+void vxd_deinit(struct vxd_dev *vxd);
+int vxd_prepare_fw(struct vxd_dev *vxd);
+void vxd_clean_fw_resources(struct vxd_dev *vxd);
+int vxd_send_msg(struct vxd_dec_ctx *ctx, struct vxd_fw_msg *msg);
+int vxd_suspend_dev(void *dev);
+int vxd_resume_dev(void *dev);
+
+int vxd_create_ctx(struct vxd_dev *vxd, struct vxd_dec_ctx *ctx);
+void vxd_destroy_ctx(struct vxd_dev *vxd, struct vxd_dec_ctx *ctx);
+
+int vxd_map_buffer_sg(struct vxd_dev *vxd, struct vxd_dec_ctx *ctx,
+ unsigned int str_id, unsigned int buff_id,
+ void *sgt, unsigned int virt_addr,
+ unsigned int map_flags);
+int vxd_map_buffer(struct vxd_dev *vxd, struct vxd_dec_ctx *ctx, unsigned int str_id,
+ unsigned int buff_id, unsigned int virt_addr, unsigned int map_flags);
+int vxd_unmap_buffer(struct vxd_dev *vxd, struct vxd_dec_ctx *ctx,
+ unsigned int str_id, unsigned int buff_id);
+
+unsigned int get_nbuffers(enum vdec_vid_std std, int w, int h, unsigned int max_num_ref_frames);
+
+int vxd_dec_alloc_bspp_resource(struct vxd_dec_ctx *ctx, enum vdec_vid_std vid_std);
+
+#ifdef ERROR_RECOVERY_SIMULATION
+/* sysfs read write functions */
+ssize_t vxd_sysfs_show(struct kobject *vxd_dec_kobject,
+ struct kobj_attribute *attr, char *buf);
+
+ssize_t vxd_sysfs_store(struct kobject *vxd_dec_kobject,
+ struct kobj_attribute *attr, const char *buf, unsigned long count);
+#endif
+#endif /* _VXD_DEC_H */
--
2.17.1
From: Sidraya <[email protected]>
This patch creates and destroys the context for idgen. It returns IDs
and manages the resources based on those IDs.
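
A minimal usage sketch of the API (illustrative only, not part of this
patch; return codes are omitted and my_handle is a hypothetical client
pointer):

	void *idgen, *handle_out;
	unsigned int id;

	idgen_createcontext(128, 16, 0, &idgen);
	idgen_allocid(idgen, my_handle, &id);
	idgen_gethandle(idgen, id, &handle_out);
	idgen_freeid(idgen, id);
	idgen_destroycontext(idgen);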
Signed-off-by: Amit Makani <[email protected]>
Signed-off-by: Sidraya <[email protected]>
---
MAINTAINERS | 2 +
drivers/staging/media/vxd/common/idgen_api.c | 449 +++++++++++++++++++
drivers/staging/media/vxd/common/idgen_api.h | 59 +++
3 files changed, 510 insertions(+)
create mode 100644 drivers/staging/media/vxd/common/idgen_api.c
create mode 100644 drivers/staging/media/vxd/common/idgen_api.h
diff --git a/MAINTAINERS b/MAINTAINERS
index 538faa644d13..0468aaac3b7d 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -19537,6 +19537,8 @@ M: Sidraya Jayagond <[email protected]>
L: [email protected]
S: Maintained
F: Documentation/devicetree/bindings/media/img,d5520-vxd.yaml
+F: drivers/staging/media/vxd/common/idgen_api.c
+F: drivers/staging/media/vxd/common/idgen_api.h
F: drivers/staging/media/vxd/common/img_mem_man.c
F: drivers/staging/media/vxd/common/img_mem_man.h
F: drivers/staging/media/vxd/common/img_mem_unified.c
diff --git a/drivers/staging/media/vxd/common/idgen_api.c b/drivers/staging/media/vxd/common/idgen_api.c
new file mode 100644
index 000000000000..abc8660d7a4a
--- /dev/null
+++ b/drivers/staging/media/vxd/common/idgen_api.c
@@ -0,0 +1,449 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * ID generation manager API.
+ *
+ * Copyright (c) Imagination Technologies Ltd.
+ * Copyright (c) 2021 Texas Instruments Incorporated - http://www.ti.com/
+ *
+ * Authors:
+ * Amit Makani <[email protected]>
+ *
+ * Re-written for upstreaming
+ * Sidraya Jayagond <[email protected]>
+ * Prashanth Kumar Amai <[email protected]>
+ */
+
+#include <linux/dma-mapping.h>
+#include <media/v4l2-ctrls.h>
+#include <media/v4l2-device.h>
+#include <media/v4l2-mem2mem.h>
+
+#include "idgen_api.h"
+#include "lst.h"
+
+/*
+ * This structure contains ID context.
+ */
+struct idgen_context {
+ /* List of handle block structures */
+ struct lst_t hdlblklst;
+	/* Max ID - set by idgen_createcontext(). */
+ unsigned int maxid;
+ /*
+ * The number of handle per block. In case of
+ * incrementing ids, size of the Hash table.
+ */
+ unsigned int blksize;
+ /* Next free slot. */
+ unsigned int freeslot;
+ /* Max slot+1 for which we have allocated blocks. */
+ unsigned int maxslotplus1;
+ /* Incrementing ID's */
+ /* API needed to return incrementing IDs */
+ int incids;
+ /* Latest ID given back */
+ unsigned int latestincnumb;
+ /* Array of list to hold IDGEN_sHdlId */
+ struct lst_t *incidlist;
+};
+
+/*
+ * This structure represents internal representation of an Incrementing ID.
+ */
+struct idgen_id {
+ void **link; /* to be part of single linked list */
+ /* Incrementing ID returned */
+ unsigned int incid;
+ void *hid;
+};
+
+/*
+ * Structure contains the ID context.
+ */
+struct idgen_hdblk {
+ void **link; /* to be part of single linked list */
+ /* Array of handles in this block. */
+ void *ahhandles[1];
+};
+
+/*
+ * A hashing function could go here. Currently just makes a circular list of
+ * max number of concurrent Ids (idgen_context->blksize) in the system.
+ */
+static unsigned int idgen_func(struct idgen_context *idcontext, unsigned int id)
+{
+ return ((id - 1) % idcontext->blksize);
+}
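+
+/*
+ * Illustrative example: with blksize == 16, IDs 1..16 map to hash buckets
+ * 0..15 and ID 17 wraps back to bucket 0.
+ */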
+
+int idgen_createcontext(unsigned int maxid, unsigned int blksize,
+ int incid, void **idgenhandle)
+{
+ struct idgen_context *idcontext;
+
+ /* Create context structure */
+ idcontext = kzalloc(sizeof(*idcontext), GFP_KERNEL);
+ if (!idcontext)
+ return IMG_ERROR_OUT_OF_MEMORY;
+
+	/* Initialise the context */
+ lst_init(&idcontext->hdlblklst);
+ idcontext->maxid = maxid;
+ idcontext->blksize = blksize;
+
+ /* If we need incrementing Ids */
+ idcontext->incids = incid;
+ idcontext->latestincnumb = 0;
+ idcontext->incidlist = NULL;
+ if (idcontext->incids) {
+ unsigned int i = 0;
+		/* Initialise the hash table of lists of length blksize */
+ idcontext->incidlist = kzalloc((sizeof(*idcontext->incidlist) *
+ idcontext->blksize), GFP_KERNEL);
+ if (!idcontext->incidlist) {
+ kfree(idcontext);
+ return IMG_ERROR_OUT_OF_MEMORY;
+ }
+
+ /* Initialise all the lists in the hash table */
+ for (i = 0; i < idcontext->blksize; i++)
+ lst_init(&idcontext->incidlist[i]);
+ }
+
+ /* Return context structure as handle */
+ *idgenhandle = idcontext;
+
+ return IMG_SUCCESS;
+}
+
+int idgen_destroycontext(void *idgenhandle)
+{
+ struct idgen_context *idcontext = (struct idgen_context *)idgenhandle;
+ struct idgen_hdblk *hdblk;
+
+ if (!idcontext)
+ return IMG_ERROR_INVALID_PARAMETERS;
+
+ /* If incrementing Ids, free the List of Incrementing Ids */
+ if (idcontext->incids) {
+ struct idgen_id *id;
+ unsigned int i = 0;
+
+ for (i = 0; i < idcontext->blksize; i++) {
+ id = lst_removehead(&idcontext->incidlist[i]);
+ while (id) {
+ kfree(id);
+ id = lst_removehead(&idcontext->incidlist[i]);
+ }
+ }
+ kfree(idcontext->incidlist);
+ }
+
+ /* Remove and free all handle blocks */
+ hdblk = (struct idgen_hdblk *)lst_removehead(&idcontext->hdlblklst);
+ while (hdblk) {
+ kfree(hdblk);
+ hdblk = (struct idgen_hdblk *)
+ lst_removehead(&idcontext->hdlblklst);
+ }
+
+ /* Free context structure */
+ kfree(idcontext);
+
+ return IMG_SUCCESS;
+}
+
+static int idgen_findnextfreeslot(void *idgenhandle, unsigned int prevfreeslot)
+{
+ struct idgen_context *idcontext = (struct idgen_context *)idgenhandle;
+ struct idgen_hdblk *hdblk;
+ unsigned int freslotblk;
+ unsigned int freeslot;
+
+ if (!idcontext)
+ return IMG_ERROR_INVALID_PARAMETERS;
+
+ /* Find the block containing the current free slot */
+ freeslot = prevfreeslot;
+ freslotblk = prevfreeslot;
+ hdblk = (struct idgen_hdblk *)lst_first(&idcontext->hdlblklst);
+ if (!hdblk)
+ return IMG_ERROR_FATAL;
+
+ while (freslotblk >= idcontext->blksize) {
+ freslotblk -= idcontext->blksize;
+ hdblk = (struct idgen_hdblk *)lst_next(hdblk);
+ }
+
+ /* Locate the next free slot */
+ while (hdblk) {
+ while (freslotblk < idcontext->blksize) {
+ if (!hdblk->ahhandles[freslotblk]) {
+ /* Found */
+ idcontext->freeslot = freeslot;
+ return IMG_SUCCESS;
+ }
+ freeslot++;
+ freslotblk++;
+ }
+ freslotblk = 0;
+ hdblk = (struct idgen_hdblk *)lst_next(hdblk);
+ }
+
+ /* Beyond the last block */
+ idcontext->freeslot = freeslot;
+ return IMG_SUCCESS;
+}
+
+/*
+ * This function returns the ID structure associated with the given
+ * incrementing ID, or NULL if it is not found.
+ */
+static struct idgen_id *idgen_getid(struct lst_t *idlist, unsigned int id)
+{
+ struct idgen_id *idstruct;
+
+ idstruct = lst_first(idlist);
+ while (idstruct) {
+ if (idstruct->incid == id)
+ break;
+
+ idstruct = lst_next(idstruct);
+ }
+ return idstruct;
+}
+
+/*
+ * This function does IDGEN allocation.
+ */
+int idgen_allocid(void *idgenhandle, void *handle, unsigned int *id)
+{
+ struct idgen_context *idcontext = (struct idgen_context *)idgenhandle;
+ struct idgen_hdblk *hdblk;
+ unsigned int size = 0;
+ unsigned int freeslot = 0;
+ unsigned int result = 0;
+
+ if (!idcontext || !handle)
+ return IMG_ERROR_INVALID_PARAMETERS;
+
+ if (!idcontext->incids) {
+ /* If the free slot is >= to the max id */
+ if (idcontext->freeslot >= idcontext->maxid) {
+ result = IMG_ERROR_INVALID_ID;
+ goto error;
+ }
+
+ /* If all of the allocated Ids have been used */
+ if (idcontext->freeslot >= idcontext->maxslotplus1) {
+ /* Allocate a stream context */
+ size = sizeof(*hdblk) + (sizeof(void *) *
+ (idcontext->blksize - 1));
+ hdblk = kzalloc(size, GFP_KERNEL);
+ if (!hdblk) {
+ result = IMG_ERROR_OUT_OF_MEMORY;
+ goto error;
+ }
+
+ lst_add(&idcontext->hdlblklst, hdblk);
+ idcontext->maxslotplus1 += idcontext->blksize;
+ }
+
+ /* Find the block containing the next free slot */
+ freeslot = idcontext->freeslot;
+ hdblk = (struct idgen_hdblk *)lst_first(&idcontext->hdlblklst);
+ if (!hdblk) {
+ result = IMG_ERROR_FATAL;
+ goto error;
+ }
+ while (freeslot >= idcontext->blksize) {
+ freeslot -= idcontext->blksize;
+ hdblk = (struct idgen_hdblk *)lst_next(hdblk);
+ if (!hdblk) {
+ result = IMG_ERROR_FATAL;
+ goto error;
+ }
+ }
+
+ /* Put handle in the next free slot */
+ hdblk->ahhandles[freeslot] = handle;
+
+ *id = idcontext->freeslot + 1;
+
+ /* Find a new free slot */
+ result = idgen_findnextfreeslot(idcontext, idcontext->freeslot);
+ if (result != 0)
+ goto error;
+ /*
+ * If incrementing IDs, just add the ID node to the correct hash table
+ * list.
+ */
+ } else {
+ struct idgen_id *psid;
+ unsigned int currentincnum, funcid;
+ /*
+ * If incrementing IDs, increment the id for returning back,and
+ * save the ID node in the list of ids, indexed by hash function
+ * (idgen_func). We might want to use a better hashing function
+ */
+ currentincnum = (idcontext->latestincnumb + 1) %
+ idcontext->maxid;
+
+		/* Increment the id. Wraps if greater than Max Id */
+ if (currentincnum == 0)
+ currentincnum++;
+
+ idcontext->latestincnumb = currentincnum;
+
+ result = IMG_ERROR_INVALID_ID;
+ do {
+ /* Add to list in the correct hash table entry */
+ funcid = idgen_func(idcontext, idcontext->latestincnumb);
+ if (idgen_getid(&idcontext->incidlist[funcid],
+ idcontext->latestincnumb) == NULL) {
+ psid = kmalloc(sizeof(*psid), GFP_KERNEL);
+ if (!psid) {
+ result = IMG_ERROR_OUT_OF_MEMORY;
+ goto error;
+ }
+
+ psid->incid = idcontext->latestincnumb;
+ psid->hid = handle;
+
+ funcid = idgen_func(idcontext,
+ idcontext->latestincnumb);
+ lst_add(&idcontext->incidlist[funcid],
+ psid);
+
+ result = IMG_SUCCESS;
+ } else {
+ idcontext->latestincnumb =
+ (idcontext->latestincnumb + 1) %
+ idcontext->maxid;
+ if (idcontext->latestincnumb == 0) {
+ /* Do not want to have zero as pic id */
+ idcontext->latestincnumb++;
+ }
+ /*
+ * We have reached a point where we have wrapped
+ * allowed Ids (MaxId) and we want to overwrite
+ * ID still not released
+ */
+ if (idcontext->latestincnumb == currentincnum)
+ goto error;
+ }
+ } while (result != IMG_SUCCESS);
+
+ *id = psid->incid;
+ }
+ return IMG_SUCCESS;
+error:
+ return result;
+}
+
+int idgen_freeid(void *idgenhandle, unsigned int id)
+{
+ struct idgen_context *idcontext = (struct idgen_context *)idgenhandle;
+ struct idgen_hdblk *hdblk;
+ unsigned int origslot;
+ unsigned int slot;
+
+ if (idcontext->incids) {
+ /*
+ * Find the slot in the correct hash table entry, and
+ * remove the ID.
+ */
+ struct idgen_id *psid;
+
+ psid = idgen_getid(&idcontext->incidlist
+ [idgen_func(idcontext, id)], id);
+ if (psid) {
+ lst_remove(&idcontext->incidlist
+ [idgen_func(idcontext, id)], psid);
+ kfree(psid);
+ } else {
+ return IMG_ERROR_INVALID_ID;
+ }
+ } else {
+ /* If not incrementing id */
+ slot = id - 1;
+ origslot = slot;
+
+ if (slot >= idcontext->maxslotplus1)
+ return IMG_ERROR_INVALID_ID;
+
+ /* Find the block containing the id */
+ hdblk = (struct idgen_hdblk *)lst_first(&idcontext->hdlblklst);
+ if (!hdblk)
+ return IMG_ERROR_FATAL;
+
+ while (slot >= idcontext->blksize) {
+ slot -= idcontext->blksize;
+ hdblk = (struct idgen_hdblk *)lst_next(hdblk);
+ if (!hdblk)
+ return IMG_ERROR_FATAL;
+ }
+
+ /* Slot should be occupied */
+ if (!hdblk->ahhandles[slot])
+ return IMG_ERROR_INVALID_ID;
+
+ /* Free slot */
+ hdblk->ahhandles[slot] = NULL;
+
+ /* If this slot is before the previous free slot */
+ if ((origslot) < idcontext->freeslot)
+ idcontext->freeslot = origslot;
+ }
+ return IMG_SUCCESS;
+}
+
+int idgen_gethandle(void *idgenhandle, unsigned int id, void **handle)
+{
+ struct idgen_context *idcontext = (struct idgen_context *)idgenhandle;
+ struct idgen_hdblk *hdblk;
+ unsigned int slot;
+
+ if (!idcontext)
+ return IMG_ERROR_INVALID_PARAMETERS;
+
+ if (idcontext->incids) {
+ /*
+ * Find the slot in the correct hash table entry, and return
+ * the handles.
+ */
+ struct idgen_id *psid;
+
+ psid = idgen_getid(&idcontext->incidlist
+ [idgen_func(idcontext, id)], id);
+ if (psid)
+ *handle = psid->hid;
+
+ else
+ return IMG_ERROR_INVALID_ID;
+ } else {
+ /* If not incrementing IDs */
+ slot = id - 1;
+ if (slot >= idcontext->maxslotplus1)
+ return IMG_ERROR_INVALID_ID;
+
+ /* Find the block containing the id */
+ hdblk = (struct idgen_hdblk *)lst_first(&idcontext->hdlblklst);
+ if (!hdblk)
+ return IMG_ERROR_INVALID_PARAMETERS;
+
+ while (slot >= idcontext->blksize) {
+ slot -= idcontext->blksize;
+ hdblk = (struct idgen_hdblk *)lst_next(hdblk);
+ if (!hdblk)
+ return IMG_ERROR_INVALID_PARAMETERS;
+ }
+
+ /* Slot should be occupied */
+ if (!hdblk->ahhandles[slot])
+ return IMG_ERROR_INVALID_ID;
+
+ /* Return the handle */
+ *handle = hdblk->ahhandles[slot];
+ }
+
+ return IMG_SUCCESS;
+}
diff --git a/drivers/staging/media/vxd/common/idgen_api.h b/drivers/staging/media/vxd/common/idgen_api.h
new file mode 100644
index 000000000000..6c894343f1fb
--- /dev/null
+++ b/drivers/staging/media/vxd/common/idgen_api.h
@@ -0,0 +1,59 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * ID generation manager API.
+ *
+ * Copyright (c) Imagination Technologies Ltd.
+ * Copyright (c) 2021 Texas Instruments Incorporated - http://www.ti.com/
+ *
+ * Authors:
+ * Amit Makani <[email protected]>
+ *
+ * Re-written for upstreaming
+ * Sidraya Jayagond <[email protected]>
+ * Prashanth Kumar Amai <[email protected]>
+ */
+#ifndef __IDGENAPI_H__
+#define __IDGENAPI_H__
+
+#include <linux/types.h>
+
+#include "img_errors.h"
+
+/*
+ * This function is used to create Id generation context.
+ * NOTE: Should only be called once to setup the context structure.
+ * NOTE: The client is responsible for providing thread/process safe locks on
+ * the context structure to maintain coherence.
+ */
+int idgen_createcontext(unsigned int maxid, unsigned int blksize,
+ int incid, void **idgenhandle);
+
+/*
+ * This function is used to destroy an Id generation context. This function
+ * discards any handle blocks associated with the context.
+ * NOTE: The client is responsible for providing thread/process safe locks on
+ * the context structure to maintain coherence.
+ */
+int idgen_destroycontext(void *idgenhandle);
+
+/*
+ * This function is used to associate a handle with an Id.
+ * NOTE: The client is responsible for providing thread/process safe locks on
+ * the context structure to maintain coherency.
+ */
+int idgen_allocid(void *idgenhandle, void *handle, unsigned int *id);
+
+/*
+ * This function is used to free an Id.
+ * NOTE: The client is responsible for providing thread/process safe locks on
+ * the context structure to maintain coherency.
+ */
+int idgen_freeid(void *idgenhandle, unsigned int id);
+
+/*
+ * This function is used to get the handle associated with an Id.
+ * NOTE: The client is responsible for providing thread/process safe locks on
+ * the context structure to maintain coherency.
+ */
+int idgen_gethandle(void *idgenhandle, unsigned int id, void **handle);
+#endif /* __IDGENAPI_H__ */
--
2.17.1
From: Sidraya <[email protected]>
It contains the implementation of the address allocation management
APIs, list processing primitives, a generic resource allocator,
self-scaling hash tables and an object pool memory allocator, which
are needed for the TALMMU functionality.
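
A minimal sketch of the intended flow for defining the default region
(illustrative only, not part of this patch; the base_addr/size values
are placeholders and return codes are omitted):

	static struct addr_region region = {
		.name = NULL,	/* NULL selects the default region */
		.base_addr = 0,
		.size = 0x10000000,
	};

	addr_initialise();
	addr_define_mem_region(&region);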
Signed-off-by: Lakshmi Sankar <[email protected]>
Signed-off-by: Sidraya <[email protected]>
---
MAINTAINERS | 10 +
drivers/staging/media/vxd/common/addr_alloc.c | 499 +++++++++
drivers/staging/media/vxd/common/addr_alloc.h | 238 +++++
drivers/staging/media/vxd/common/hash.c | 481 +++++++++
drivers/staging/media/vxd/common/hash.h | 86 ++
drivers/staging/media/vxd/common/pool.c | 228 ++++
drivers/staging/media/vxd/common/pool.h | 66 ++
drivers/staging/media/vxd/common/ra.c | 972 ++++++++++++++++++
drivers/staging/media/vxd/common/ra.h | 200 ++++
drivers/staging/media/vxd/common/talmmu_api.c | 753 ++++++++++++++
drivers/staging/media/vxd/common/talmmu_api.h | 246 +++++
11 files changed, 3779 insertions(+)
create mode 100644 drivers/staging/media/vxd/common/addr_alloc.c
create mode 100644 drivers/staging/media/vxd/common/addr_alloc.h
create mode 100644 drivers/staging/media/vxd/common/hash.c
create mode 100644 drivers/staging/media/vxd/common/hash.h
create mode 100644 drivers/staging/media/vxd/common/pool.c
create mode 100644 drivers/staging/media/vxd/common/pool.h
create mode 100644 drivers/staging/media/vxd/common/ra.c
create mode 100644 drivers/staging/media/vxd/common/ra.h
create mode 100644 drivers/staging/media/vxd/common/talmmu_api.c
create mode 100644 drivers/staging/media/vxd/common/talmmu_api.h
diff --git a/MAINTAINERS b/MAINTAINERS
index 2668eeb89a34..2b0d0708d852 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -19537,8 +19537,12 @@ M: Sidraya Jayagond <[email protected]>
L: [email protected]
S: Maintained
F: Documentation/devicetree/bindings/media/img,d5520-vxd.yaml
+F: drivers/staging/media/vxd/common/addr_alloc.c
+F: drivers/staging/media/vxd/common/addr_alloc.h
F: drivers/staging/media/vxd/common/dq.c
F: drivers/staging/media/vxd/common/dq.h
+F: drivers/staging/media/vxd/common/hash.c
+F: drivers/staging/media/vxd/common/hash.h
F: drivers/staging/media/vxd/common/idgen_api.c
F: drivers/staging/media/vxd/common/idgen_api.h
F: drivers/staging/media/vxd/common/img_mem_man.c
@@ -19548,6 +19552,12 @@ F: drivers/staging/media/vxd/common/imgmmu.c
F: drivers/staging/media/vxd/common/imgmmu.h
F: drivers/staging/media/vxd/common/lst.c
F: drivers/staging/media/vxd/common/lst.h
+F: drivers/staging/media/vxd/common/pool.c
+F: drivers/staging/media/vxd/common/pool.h
+F: drivers/staging/media/vxd/common/ra.c
+F: drivers/staging/media/vxd/common/ra.h
+F: drivers/staging/media/vxd/common/talmmu_api.c
+F: drivers/staging/media/vxd/common/talmmu_api.h
F: drivers/staging/media/vxd/common/work_queue.c
F: drivers/staging/media/vxd/common/work_queue.h
F: drivers/staging/media/vxd/decoder/hw_control.c
diff --git a/drivers/staging/media/vxd/common/addr_alloc.c b/drivers/staging/media/vxd/common/addr_alloc.c
new file mode 100644
index 000000000000..393d309b2c0c
--- /dev/null
+++ b/drivers/staging/media/vxd/common/addr_alloc.c
@@ -0,0 +1,499 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Address allocation APIs - used to manage address allocation
+ * with a number of predefined regions.
+ *
+ * Copyright (c) Imagination Technologies Ltd.
+ * Copyright (c) 2021 Texas Instruments Incorporated - http://www.ti.com/
+ *
+ * Authors:
+ * Lakshmi Sankar <[email protected]>
+ *
+ * Re-written for upstream
+ * Sidraya Jayagond <[email protected]>
+ */
+
+#include <linux/slab.h>
+#include <linux/printk.h>
+#include <linux/mutex.h>
+#include <linux/dma-mapping.h>
+#include <media/v4l2-ctrls.h>
+#include <media/v4l2-device.h>
+#include <media/v4l2-mem2mem.h>
+
+#include "addr_alloc.h"
+#include "hash.h"
+#include "img_errors.h"
+
+/* Global context. */
+static struct addr_context global_ctx = {0};
+/* Sub-system initialized. */
+static int global_initialized;
+/* Count of contexts. */
+static unsigned int num_ctx;
+/* Global mutex */
+static struct mutex *global_lock;
+
+/**
+ * addr_initialise - Initialise the global address allocation context
+ */
+
+int addr_initialise(void)
+{
+ unsigned int result = IMG_ERROR_ALREADY_INITIALISED;
+
+ /* If we are not initialized */
+ if (!global_initialized)
+ result = addr_cx_initialise(&global_ctx);
+ return result;
+}
+
+int addr_cx_initialise(struct addr_context * const context)
+{
+ unsigned int result = IMG_ERROR_FATAL;
+
+ if (!context)
+ return IMG_ERROR_INVALID_PARAMETERS;
+
+ if (!global_initialized) {
+ /* Initialise context */
+ memset(context, 0x00, sizeof(struct addr_context));
+
+ /* If no mutex associated with this resource */
+ if (!global_lock) {
+ /* Create one */
+
+ global_lock = kzalloc(sizeof(*global_lock), GFP_KERNEL);
+ if (!global_lock)
+ return -ENOMEM;
+
+ mutex_init(global_lock);
+ }
+
+ mutex_lock_nested(global_lock, SUBCLASS_ADDR_ALLOC);
+
+ /* Initialise the hash functions. */
+ result = vid_hash_initialise();
+ if (result != IMG_SUCCESS) {
+ mutex_unlock(global_lock);
+ return IMG_ERROR_UNEXPECTED_STATE;
+ }
+
+ /* Initialise the arena functions */
+ result = vid_ra_initialise();
+ if (result != IMG_SUCCESS) {
+ mutex_unlock(global_lock);
+ result = vid_hash_finalise();
+ return IMG_ERROR_UNEXPECTED_STATE;
+ }
+
+ /* We are now initialized */
+ global_initialized = TRUE;
+ result = IMG_SUCCESS;
+ } else {
+ mutex_lock_nested(global_lock, SUBCLASS_ADDR_ALLOC);
+ result = IMG_SUCCESS;
+ }
+
+ num_ctx++;
+ mutex_unlock(global_lock);
+
+ return result;
+}
+
+int addr_deinitialise(void)
+{
+ return addr_cx_deinitialise(&global_ctx);
+}
+
+int addr_cx_deinitialise(struct addr_context * const context)
+{
+ struct addr_region *tmp_region = NULL;
+ unsigned int result = IMG_ERROR_FATAL;
+
+ if (!context)
+ return IMG_ERROR_INVALID_PARAMETERS;
+
+ if (global_initialized) {
+ mutex_lock_nested(global_lock, SUBCLASS_ADDR_ALLOC);
+
+ tmp_region = context->regions;
+
+ /* Delete all arena structure */
+ if (context->default_region)
+ result = vid_ra_delete(context->default_region->arena);
+
+ while (tmp_region) {
+ result = vid_ra_delete(tmp_region->arena);
+ tmp_region = tmp_region->nxt_region;
+ }
+
+ if (num_ctx != 0)
+ num_ctx--;
+
+ result = IMG_SUCCESS;
+ if (num_ctx == 0) {
+ /* Free off resources */
+ result = vid_hash_finalise();
+ result = vid_ra_deinit();
+ global_initialized = FALSE;
+
+ mutex_unlock(global_lock);
+ mutex_destroy(global_lock);
+ kfree(global_lock);
+ global_lock = NULL;
+ } else {
+ mutex_unlock(global_lock);
+ }
+ }
+
+ return result;
+}
+
+int addr_define_mem_region(struct addr_region * const region)
+{
+ return addr_cx_define_mem_region(&global_ctx, region);
+}
+
+int addr_cx_define_mem_region(struct addr_context * const context,
+ struct addr_region * const region)
+{
+ struct addr_region *tmp_region = NULL;
+ unsigned int result = IMG_SUCCESS;
+
+ if (!context || !region)
+ return IMG_ERROR_INVALID_PARAMETERS;
+
+ mutex_lock_nested(global_lock, SUBCLASS_ADDR_ALLOC);
+
+ tmp_region = context->regions;
+
+ /* Ensure the link to the next is NULL */
+ region->nxt_region = NULL;
+
+ /* If this is the default memory region */
+ if (!region->name) {
+ /* Should not previously have been defined */
+ if (context->default_region) {
+ mutex_unlock(global_lock);
+ return IMG_ERROR_UNEXPECTED_STATE;
+ }
+
+ context->default_region = region;
+ context->no_regions++;
+
+ /*
+ * Create an arena for memory allocation
+ * name of resource arena for debug
+ * start of resource
+ * size of resource
+ * allocation quantum
+ * import allocator
+ * import deallocator
+ * import handle
+ */
+ result = vid_ra_create("memory",
+ region->base_addr,
+ region->size,
+ 1,
+ NULL,
+ NULL,
+ NULL,
+ ®ion->arena);
+
+ if (result != IMG_SUCCESS) {
+ mutex_unlock(global_lock);
+ return IMG_ERROR_UNEXPECTED_STATE;
+ }
+ } else {
+ /*
+ * Run down the list of existing named regions
+ * to check if there is a region with this name
+ */
+ while (tmp_region &&
+ (strcmp(region->name, tmp_region->name) != 0) &&
+ tmp_region->nxt_region) {
+ tmp_region = tmp_region->nxt_region;
+ }
+
+ /* If we have items in the list */
+ if (tmp_region) {
+ /*
+ * Check we didn't stop because the name
+ * clashes with one already defined.
+ */
+
+ if (strcmp(region->name, tmp_region->name) == 0 ||
+ tmp_region->nxt_region) {
+ mutex_unlock(global_lock);
+ return IMG_ERROR_UNEXPECTED_STATE;
+ }
+
+ /* Add to end of list */
+ tmp_region->nxt_region = region;
+ } else {
+ /* Add to head of list */
+ context->regions = region;
+ }
+
+ context->no_regions++;
+
+ /*
+ * Create an arena for memory allocation
+ * name of resource arena for debug
+ * start of resource
+ * size of resource
+ * allocation quantum
+ * import allocator
+ * import deallocator
+ * import handle
+ */
+ result = vid_ra_create(region->name,
+ region->base_addr,
+ region->size,
+ 1,
+ NULL,
+ NULL,
+ NULL,
+ ®ion->arena);
+
+ if (result != IMG_SUCCESS) {
+ mutex_unlock(global_lock);
+ return IMG_ERROR_UNEXPECTED_STATE;
+ }
+ }
+
+ mutex_unlock(global_lock);
+
+ /* Check the arena was created OK */
+ if (!region->arena)
+ return IMG_ERROR_UNEXPECTED_STATE;
+
+ return result;
+}
+
+int addr_malloc(const unsigned char * const name,
+ unsigned long long size,
+ unsigned long long * const base_adr)
+{
+ return addr_cx_malloc(&global_ctx, name, size, base_adr);
+}
+
+int addr_cx_malloc(struct addr_context * const context,
+ const unsigned char * const name,
+ unsigned long long size,
+ unsigned long long * const base_adr)
+{
+ unsigned int result = IMG_ERROR_FATAL;
+ struct addr_region *tmp_region = NULL;
+
+ if (!context || !base_adr || !name)
+ return IMG_ERROR_INVALID_PARAMETERS;
+
+ *(base_adr) = (unsigned long long)-1LL;
+
+ mutex_lock_nested(global_lock, SUBCLASS_ADDR_ALLOC);
+
+ tmp_region = context->regions;
+
+ /*
+ * Run down the list of existing named
+ * regions to locate this
+ */
+ while (tmp_region && (strcmp(name, tmp_region->name) != 0) && (tmp_region->nxt_region))
+ tmp_region = tmp_region->nxt_region;
+
+ /* If there was no match. */
+ if (!tmp_region || (strcmp(name, tmp_region->name) != 0)) {
+ /* Use the default */
+ if (!context->default_region) {
+ mutex_unlock(global_lock);
+ return IMG_ERROR_UNEXPECTED_STATE;
+ }
+
+ tmp_region = context->default_region;
+ }
+
+ if (!tmp_region) {
+ mutex_unlock(global_lock);
+ return IMG_ERROR_UNEXPECTED_STATE;
+ }
+
+ /* Allocate size + guard band */
+ result = vid_ra_alloc(tmp_region->arena,
+ size + tmp_region->guard_band,
+ NULL,
+ NULL,
+ SEQUENTIAL_ALLOCATION,
+ 1,
+ base_adr);
+ if (result != IMG_SUCCESS) {
+ mutex_unlock(global_lock);
+ return IMG_ERROR_OUT_OF_MEMORY;
+ }
+
+ mutex_unlock(global_lock);
+
+ return result;
+}
+
+int addr_cx_malloc_res(struct addr_context * const context,
+ const unsigned char * const name,
+ unsigned long long size,
+ unsigned long long * const base_adr)
+{
+ unsigned int result = IMG_ERROR_FATAL;
+ struct addr_region *tmp_region = NULL;
+
+ if (!context || !base_adr || !name)
+ return IMG_ERROR_INVALID_PARAMETERS;
+
+ mutex_lock_nested(global_lock, SUBCLASS_ADDR_ALLOC);
+
+ tmp_region = context->regions;
+ /*
+ * Run down the list of existing named
+ * regions to locate this
+ */
+ while (tmp_region && (strcmp(name, tmp_region->name) != 0) && (tmp_region->nxt_region))
+ tmp_region = tmp_region->nxt_region;
+
+ /* If there was no match. */
+ if (!tmp_region || (strcmp(name, tmp_region->name) != 0)) {
+ /* Use the default */
+ if (!context->default_region) {
+ mutex_unlock(global_lock);
+ return IMG_ERROR_UNEXPECTED_STATE;
+ }
+ tmp_region = context->default_region;
+ }
+ if (!tmp_region) {
+ mutex_unlock(global_lock);
+ return IMG_ERROR_UNEXPECTED_STATE;
+ }
+ /* Allocate size + guard band */
+ result = vid_ra_alloc(tmp_region->arena, size + tmp_region->guard_band,
+ NULL, NULL, SEQUENTIAL_ALLOCATION, 1, base_adr);
+ if (result != IMG_SUCCESS) {
+ mutex_unlock(global_lock);
+ return IMG_ERROR_OUT_OF_MEMORY;
+ }
+ mutex_unlock(global_lock);
+
+ return result;
+}
+
+int addr_cx_malloc_align_res(struct addr_context * const context,
+ const unsigned char * const name,
+ unsigned long long size,
+ unsigned long long alignment,
+ unsigned long long * const base_adr)
+{
+ unsigned int result;
+ struct addr_region *tmp_region = NULL;
+
+ if (!context || !base_adr || !name)
+ return IMG_ERROR_INVALID_PARAMETERS;
+
+ mutex_lock_nested(global_lock, SUBCLASS_ADDR_ALLOC);
+
+ tmp_region = context->regions;
+
+ /*
+ * Run down the list of existing named
+ * regions to locate this
+ */
+ while (tmp_region &&
+ (strcmp(name, tmp_region->name) != 0) &&
+ (tmp_region->nxt_region)) {
+ tmp_region = tmp_region->nxt_region;
+ }
+ /* If there was no match. */
+ if (!tmp_region ||
+ (strcmp(name, tmp_region->name) != 0)) {
+ /* Use the default */
+ if (!context->default_region) {
+ mutex_unlock(global_lock);
+ return IMG_ERROR_UNEXPECTED_STATE;
+ }
+
+ tmp_region = context->default_region;
+ }
+
+ if (!tmp_region) {
+ mutex_unlock(global_lock);
+ return IMG_ERROR_UNEXPECTED_STATE;
+ }
+ /* Allocate size + guard band */
+ result = vid_ra_alloc(tmp_region->arena,
+ size + tmp_region->guard_band,
+ NULL,
+ NULL,
+ SEQUENTIAL_ALLOCATION,
+ alignment,
+ base_adr);
+ if (result != IMG_SUCCESS) {
+ mutex_unlock(global_lock);
+ return IMG_ERROR_OUT_OF_MEMORY;
+ }
+
+ mutex_unlock(global_lock);
+
+ return result;
+}
+
+int addr_free(const unsigned char * const name, unsigned long long addr)
+{
+ return addr_cx_free(&global_ctx, name, addr);
+}
+
+int addr_cx_free(struct addr_context * const context,
+ const unsigned char * const name,
+ unsigned long long addr)
+{
+ struct addr_region *tmp_region;
+ unsigned int result;
+
+ if (!context)
+ return IMG_ERROR_INVALID_PARAMETERS;
+
+ tmp_region = context->regions;
+
+ mutex_lock_nested(global_lock, SUBCLASS_ADDR_ALLOC);
+
+ /* If the allocation is for the default region */
+ if (!name) {
+ if (!context->default_region) {
+ result = IMG_ERROR_INVALID_PARAMETERS;
+ goto error;
+ }
+ tmp_region = context->default_region;
+ } else {
+ /*
+ * Run down the list of existing named
+ * regions to locate this
+ */
+ while (tmp_region &&
+ (strcmp(name, tmp_region->name) != 0) &&
+ tmp_region->nxt_region) {
+ tmp_region = tmp_region->nxt_region;
+ }
+
+ /* If there was no match */
+ if (!tmp_region || (strcmp(name, tmp_region->name) != 0)) {
+ /* Use the default */
+ if (!context->default_region) {
+ result = IMG_ERROR_INVALID_PARAMETERS;
+ goto error;
+ }
+ tmp_region = context->default_region;
+ }
+ }
+
+ /* Free the address */
+ result = vid_ra_free(tmp_region->arena, addr);
+
+error:
+ mutex_unlock(global_lock);
+ return result;
+}
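
For reviewers, a minimal usage sketch of the global-context API implemented above (illustrative only, not part of the patch; the region name, addresses and sizes are made up, and the region must live in static storage as the header below requires):

#include "addr_alloc.h"
#include "img_errors.h"

static struct addr_region fw_region = {
	.name       = (unsigned char *)"firmware", /* NULL would select the default region */
	.base_addr  = 0x100000,                    /* illustrative device-virtual base */
	.size       = 0x200000,                    /* illustrative 2 MiB span */
	.guard_band = 0x1000,                      /* pads allocations to catch overruns */
};

static int addr_alloc_example(void)
{
	unsigned long long dev_addr;
	int ret;

	ret = addr_initialise();                   /* only the first call initialises */
	if (ret != IMG_SUCCESS && ret != IMG_ERROR_ALREADY_INITIALISED)
		return ret;

	ret = addr_define_mem_region(&fw_region);
	if (ret != IMG_SUCCESS)
		return ret;

	ret = addr_malloc(fw_region.name, 0x4000, &dev_addr);
	if (ret != IMG_SUCCESS)
		return ret;

	/* ... program dev_addr into the decoder MMU ... */

	addr_free(fw_region.name, dev_addr);
	return addr_deinitialise();
}
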
diff --git a/drivers/staging/media/vxd/common/addr_alloc.h b/drivers/staging/media/vxd/common/addr_alloc.h
new file mode 100644
index 000000000000..387418b124e4
--- /dev/null
+++ b/drivers/staging/media/vxd/common/addr_alloc.h
@@ -0,0 +1,238 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Address allocation management API.
+ *
+ * Copyright (c) Imagination Technologies Ltd.
+ * Copyright (c) 2021 Texas Instruments Incorporated - http://www.ti.com/
+ *
+ * Authors:
+ * Lakshmi Sankar <[email protected]>
+ *
+ * Re-written for upstream
+ * Sidraya Jayagond <[email protected]>
+ */
+#ifndef __ADDR_ALLOC_H__
+#define __ADDR_ALLOC_H__
+
+#include <linux/types.h>
+#include "ra.h"
+
+/* Defines whether sequential or random allocation is used */
+enum {
+ SEQUENTIAL_ALLOCATION,
+ RANDOM_ALLOCATION,
+ RANDOM_FORCE32BITS = 0x7FFFFFFFU
+};
+
+/**
+ * struct addr_region - Memory region structure
+ *@name: A pointer to a string containing the name of the region.
+ * NULL for the default memory region.
+ *@base_addr: The base address of the memory region.
+ *@size: The size of the memory region.
+ *@guard_band: The size of any guard band to be used.
+ * Guard bands can be useful in separating block allocations
+ * and allows the caller to detect erroneous accesses
+ * into these areas.
+ *@nxt_region: Used internally by the ADDR API. A pointer used to point
+ * to the next memory region.
+ *@arena: Used internally by the ADDR API. A pointer to a structure used to
+ * maintain and perform address allocation.
+ *
+ * This structure contains information about the memory region.
+ */
+struct addr_region {
+ unsigned char *name;
+ unsigned long long base_addr;
+ unsigned long long size;
+ unsigned int guard_band;
+ struct addr_region *nxt_region;
+ void *arena;
+};
+
+/*
+ * This structure contains the context for allocation.
+ *@regions: Pointer to the first region in the list.
+ *@default_region: Pointer to the default region.
+ *@no_regions: Number of regions currently available (including default)
+ */
+struct addr_context {
+ struct addr_region *regions;
+ struct addr_region *default_region;
+ unsigned int no_regions;
+};
+
+/*
+ * @Function ADDR_Initialise
+ * @Description
+ * This function is used to initialise the address allocation sub-system.
+ * NOTE: This function may be called multiple times. The initialisation only
+ * happens the first time it is called.
+ * @Return IMG_SUCCESS or an error code.
+ */
+int addr_initialise(void);
+
+/*
+ * @Function addr_deinitialise
+ * @Description
+ * This function is used to de-initialise the address allocation sub-system.
+ * @Return IMG_SUCCESS or an error code.
+ */
+int addr_deinitialise(void);
+
+/*
+ * @Function addr_define_mem_region
+ * @Description
+ * This function is used to define a memory region.
+ * NOTE: The region structure MUST be defined in static memory as this
+ * is retained and used by the ADDR sub-system.
+ * NOTE: Invalid parameters are trapped by asserts.
+ * @Input region: A pointer to a region structure.
+ * @Return IMG_RESULT : IMG_SUCCESS or an error code.
+ */
+int addr_define_mem_region(struct addr_region * const region);
+
+/*
+ * @Function addr_malloc
+ * @Description
+ * This function is used to allocate space within a memory region.
+ * NOTE: Allocation failures or invalid parameters are trapped by asserts.
+ * @Input name: A pointer to the name of the memory region.
+ * NULL can be used to allocate space from the
+ * default memory region.
+ * @Input size: The size (in bytes) of the allocation.
+ * @Output base_adr : The address of the allocated space.
+ * @Return IMG_SUCCESS or an error code.
+ */
+int addr_malloc(const unsigned char *const name,
+ unsigned long long size,
+ unsigned long long *const base_adr);
+
+/*
+ * @Function addr_free
+ * @Description
+ * This function is used to free previously allocated space within
+ * a memory region.
+ * NOTE: Invalid parameters are trapped by asserts.
+ * @Input name: A pointer to the name of the memory region.
+ * NULL is used to free space from the default memory region.
+ *@Input addr: The address allocated.
+ *@Return IMG_SUCCESS or an error code.
+ */
+int addr_free(const unsigned char * const name, unsigned long long addr);
+
+/*
+ * @Function addr_cx_initialise
+ * @Description
+ * This function is used to initialise the address allocation sub-system with
+ * an external context structure.
+ * NOTE: This function should be called only once for the context.
+ * @Input context : Pointer to context structure.
+ * @Return IMG_SUCCESS or an error code.
+ */
+int addr_cx_initialise(struct addr_context * const context);
+
+/*
+ * @Function addr_cx_deinitialise
+ * @Description
+ * This function is used to de-initialise the address allocation
+ * sub-system with an external context structure.
+ * @Input context : Pointer to context structure.
+ * @Return IMG_SUCCESS or an error code.
+ */
+int addr_cx_deinitialise(struct addr_context * const context);
+
+/*
+ * @Function addr_cx_define_mem_region
+ * @Description
+ * This function is used to define a memory region with an external
+ * context structure.
+ * NOTE: The region structure MUST be defined in static memory as this
+ * is retained and used by the ADDR sub-system.
+ * NOTE: Invalid parameters are trapped by asserts.
+ * @Input context : Pointer to context structure.
+ * @Input region : A pointer to a region structure.
+ * @Return IMG_SUCCESS or an error code.
+ */
+int addr_cx_define_mem_region(struct addr_context *const context,
+ struct addr_region *const region);
+
+/*
+ * @Function addr_cx_malloc
+ * @Description
+ * This function is used to allocate space within a memory region with
+ * an external context structure.
+ * NOTE: Allocation failures or invalid parameters are trapped by asserts.
+ * @Input context : Pointer to context structure.
+ * @Input name : A pointer to the name of the memory region.
+ * NULL can be used to allocate space from the
+ * default memory region.
+ * @Input size : The size (in bytes) of the allocation.
+ * @Output base_adr : The address of the allocated space.
+ * @Return IMG_SUCCESS or an error code.
+ */
+int addr_cx_malloc(struct addr_context * const context,
+ const unsigned char *const name,
+ unsigned long long size,
+ unsigned long long *const base_adr);
+
+/*
+ * @Function addr_cx_malloc_res
+ * @Description
+ * This function is used to allocate space within a memory region with
+ * an external context structure.
+ * NOTE: Allocation failures are returned in IMG_RESULT, however invalid
+ * parameters are trapped by asserts.
+ * @Input context : Pointer to context structure.
+ * @Input name : A pointer to the name of the memory region.
+ * NULL can be used to allocate space from the
+ * default memory region.
+ * @Input size : The size (in bytes) of the allocation.
+ * @Input base_adr : Pointer to the address of the allocated space.
+ * @Return IMG_SUCCESS or an error code.
+ */
+int addr_cx_malloc_res(struct addr_context *const context,
+ const unsigned char *const name,
+ unsigned long long size,
+ unsigned long long * const base_adr);
+
+/*
+ * @Function addr_cx_malloc_align_res
+ * @Description
+ * This function is used to allocate space within a memory region with
+ * an external context structure.
+ * NOTE: Allocation failures are returned in IMG_RESULT, however invalid
+ * parameters are trapped by asserts.
+ * @Input context : Pointer to context structure.
+ * @Input name : A pointer to the name of the memory region.
+ * NULL can be used to allocate space from the
+ * default memory region.
+ * @Input size : The size (in bytes) of the allocation.
+ * @Input alignment : The required byte alignment (1, 2, 4, 8, 16 etc).
+ * @Input base_adr : Pointer to the address of the allocated space.
+ * @Return IMG_SUCCESS or an error code.
+ */
+int addr_cx_malloc_align_res(struct addr_context *const context,
+ const unsigned char *const name,
+ unsigned long long size,
+ unsigned long long alignment,
+ unsigned long long *const base_adr);
+
+/*
+ * @Function addr_cx_free
+ * @Description
+ * This function is used to free previously allocated space within a memory region
+ * with an external context structure.
+ * NOTE: Invalid parameters are trapped by asserts.
+ * @Input context : Pointer to context structure.
+ * @Input name : A pointer to the name of the memory region.
+ * NULL is used to free space from the
+ * default memory region.
+ * @Input addr : The address allocated.
+ * @Return IMG_SUCCESS or an error code.
+ */
+int addr_cx_free(struct addr_context *const context,
+ const unsigned char *const name,
+ unsigned long long addr);
+
+#endif /* __ADDR_ALLOC_H__ */
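
A similar hedged sketch for the external-context variants declared above, showing a named region and an aligned allocation (the context, region and the 4 KiB alignment are illustrative only, not part of the patch):

#include "addr_alloc.h"
#include "img_errors.h"

static struct addr_context dec_ctx;
static struct addr_region mmu_region = {                /* must be static storage */
	.name      = (unsigned char *)"mmu_heap",       /* illustrative region name */
	.base_addr = 0x40000000,                        /* illustrative base */
	.size      = 0x00400000,                        /* illustrative 4 MiB span */
};

static int addr_cx_example(void)
{
	unsigned long long addr;
	int ret;

	ret = addr_cx_initialise(&dec_ctx);
	if (ret != IMG_SUCCESS)
		return ret;

	ret = addr_cx_define_mem_region(&dec_ctx, &mmu_region);
	if (ret != IMG_SUCCESS)
		goto out;

	/* 8 KiB allocation, 4 KiB aligned, from the named region */
	ret = addr_cx_malloc_align_res(&dec_ctx, mmu_region.name,
				       0x2000, 0x1000, &addr);
	if (ret == IMG_SUCCESS)
		addr_cx_free(&dec_ctx, mmu_region.name, addr);
out:
	addr_cx_deinitialise(&dec_ctx);
	return ret;
}
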
diff --git a/drivers/staging/media/vxd/common/hash.c b/drivers/staging/media/vxd/common/hash.c
new file mode 100644
index 000000000000..1a03aecc34ef
--- /dev/null
+++ b/drivers/staging/media/vxd/common/hash.c
@@ -0,0 +1,481 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Self scaling hash tables.
+ *
+ * Copyright (c) Imagination Technologies Ltd.
+ * Copyright (c) 2021 Texas Instruments Incorporated - http://www.ti.com/
+ *
+ * Authors:
+ * Lakshmi Sankar <[email protected]>
+ *
+ * Re-written for upstream
+ * Sidraya Jayagond <[email protected]>
+ */
+
+#include <linux/dma-mapping.h>
+#include <media/v4l2-ctrls.h>
+#include <media/v4l2-device.h>
+#include <media/v4l2-mem2mem.h>
+
+#include "hash.h"
+#include "img_errors.h"
+#include "pool.h"
+
+/* pool of struct hash objects */
+static struct pool *global_hashpool;
+
+/* pool of struct bucket objects */
+static struct pool *global_bucketpool;
+
+static int global_initialized;
+
+/* Each entry in a hash table is placed into a bucket */
+struct bucket {
+ struct bucket *next;
+ unsigned long long key;
+ unsigned long long value;
+};
+
+struct hash {
+ struct bucket **table;
+ unsigned int size;
+ unsigned int count;
+ unsigned int minimum_size;
+};
+
+/**
+ * hash_func - Hash function intended for hashing addresses.
+ * @value : The key to hash.
+ * @size : The size of the hash table
+ */
+static unsigned int hash_func(unsigned long long value,
+ unsigned int size)
+{
+ unsigned int hash = (unsigned int)(value);
+
+ hash += (hash << 12);
+ hash ^= (hash >> 22);
+ hash += (hash << 4);
+ hash ^= (hash >> 9);
+ hash += (hash << 10);
+ hash ^= (hash >> 2);
+ hash += (hash << 7);
+ hash ^= (hash >> 12);
+ hash &= (size - 1);
+ return hash;
+}
+
+/*
+ * @Function hash_chain_insert
+ * @Description
+ * Hash function intended for hashing addresses.
+ * @Input bucket : The bucket
+ * @Input table : The hash table
+ * @Input size : The size of the hash table
+ * @Return IMG_SUCCESS or an error code.
+ */
+static int hash_chain_insert(struct bucket *bucket,
+ struct bucket **table,
+ unsigned int size)
+{
+ unsigned int idx;
+ unsigned int result = IMG_ERROR_FATAL;
+
+ if (!bucket || !table || !size) {
+ result = IMG_ERROR_INVALID_PARAMETERS;
+ return result;
+ }
+
+ idx = hash_func(bucket->key, size);
+
+ if (idx < size) {
+ result = IMG_SUCCESS;
+ bucket->next = table[idx];
+ table[idx] = bucket;
+ }
+
+ return result;
+}
+
+/*
+ * @Function hash_rehash
+ * @Description
+ * Iterate over every entry in an old hash table and rehash into the new table.
+ * @Input old_table : The old hash table
+ * @Input old_size : The size of the old hash table
+ * @Input new_table : The new hash table
+ * @Input new_sz : The size of the new hash table
+ * @Return IMG_SUCCESS or an error code.
+ */
+static int hash_rehash(struct bucket **old_table,
+ unsigned int old_size,
+ struct bucket **new_table,
+ unsigned int new_sz)
+{
+ unsigned int idx;
+ unsigned int result = IMG_ERROR_FATAL;
+
+ if (!old_table || !new_table) {
+ result = IMG_ERROR_INVALID_PARAMETERS;
+ return result;
+ }
+
+ for (idx = 0; idx < old_size; idx++) {
+ struct bucket *bucket;
+ struct bucket *nex_bucket;
+
+ bucket = old_table[idx];
+ while (bucket) {
+ nex_bucket = bucket->next;
+ result = hash_chain_insert(bucket, new_table, new_sz);
+ if (result != IMG_SUCCESS) {
+ result = IMG_ERROR_UNEXPECTED_STATE;
+ return result;
+ }
+ bucket = nex_bucket;
+ }
+ }
+ result = IMG_SUCCESS;
+
+ return result;
+}
+
+/*
+ * @Function hash_resize
+ * @Description
+ * Attempt to resize a hash table, failure to allocate a new larger hash table
+ * is not considered a hard failure. We simply continue and allow the table to
+ * fill up, the effect is to allow hash chains to become longer.
+ * @Input hash_arg : Pointer to the hash table
+ * @Input new_sz : The size of the new hash table
+ * @Return IMG_SUCCESS or an error code.
+ */
+static int hash_resize(struct hash *hash_arg,
+ unsigned int new_sz)
+{
+ unsigned int malloc_sz = 0;
+ unsigned int result = IMG_ERROR_FATAL;
+ unsigned int idx;
+
+ if (!hash_arg) {
+ result = IMG_ERROR_INVALID_PARAMETERS;
+ return result;
+ }
+
+ if (new_sz != hash_arg->size) {
+ struct bucket **new_bkt_table;
+
+ malloc_sz = (sizeof(struct bucket *) * new_sz);
+ new_bkt_table = kmalloc(malloc_sz, GFP_KERNEL);
+
+ if (!new_bkt_table) {
+ result = IMG_ERROR_MALLOC_FAILED;
+ return result;
+ }
+
+ for (idx = 0; idx < new_sz; idx++)
+ new_bkt_table[idx] = NULL;
+
+ result = hash_rehash(hash_arg->table,
+ hash_arg->size,
+ new_bkt_table,
+ new_sz);
+
+ if (result != IMG_SUCCESS) {
+ kfree(new_bkt_table);
+ new_bkt_table = NULL;
+ result = IMG_ERROR_UNEXPECTED_STATE;
+ return result;
+ }
+
+ kfree(hash_arg->table);
+ hash_arg->table = new_bkt_table;
+ hash_arg->size = new_sz;
+ }
+ result = IMG_SUCCESS;
+
+ return result;
+}
+
+static unsigned int private_max(unsigned int a, unsigned int b)
+{
+ unsigned int ret = (a > b) ? a : b;
+ return ret;
+}
+
+/*
+ * @Function vid_hash_initialise
+ * @Description
+ * To initialise the hash module.
+ * @Input None
+ * @Return IMG_SUCCESS or an error code.
+ */
+int vid_hash_initialise(void)
+{
+ unsigned int result = IMG_ERROR_ALREADY_COMPLETE;
+
+ if (!global_initialized) {
+ if (global_hashpool || global_bucketpool) {
+ result = IMG_ERROR_UNEXPECTED_STATE;
+ return result;
+ }
+
+ result = pool_create("img-hash",
+ sizeof(struct hash),
+ &global_hashpool);
+
+ if (result != IMG_SUCCESS) {
+ result = IMG_ERROR_UNEXPECTED_STATE;
+ return result;
+ }
+
+ result = pool_create("img-sBucket",
+ sizeof(struct bucket),
+ &global_bucketpool);
+ if (result != IMG_SUCCESS) {
+ if (global_bucketpool) {
+ result = pool_delete(global_bucketpool);
+ global_bucketpool = NULL;
+ }
+ result = IMG_ERROR_UNEXPECTED_STATE;
+ return result;
+ }
+ global_initialized = true;
+ result = IMG_SUCCESS;
+ }
+ return result;
+}
+
+/*
+ * @Function vid_hash_finalise
+ * @Description
+ * To finalise the hash module. All allocated hash tables should
+ * be deleted before calling this function.
+ * @Input None
+ * @Return IMG_SUCCESS or an error code.
+ */
+int vid_hash_finalise(void)
+{
+ unsigned int result = IMG_ERROR_FATAL;
+
+ if (global_initialized) {
+ if (global_hashpool) {
+ result = pool_delete(global_hashpool);
+ if (result != IMG_SUCCESS)
+ return result;
+
+ global_hashpool = NULL;
+ }
+
+ if (global_bucketpool) {
+ result = pool_delete(global_bucketpool);
+ if (result != IMG_SUCCESS)
+ return result;
+
+ global_bucketpool = NULL;
+ }
+ global_initialized = false;
+ result = IMG_SUCCESS;
+ }
+
+ return result;
+}
+
+/*
+ * @Function vid_hash_create
+ * @Description
+ * Create a self scaling hash table.
+ * @Input initial_size : Initial and minimum size of the hash table.
+ * @Output hash_arg : Will contain the hash table handle or NULL.
+ * @Return IMG_SUCCESS or an error code.
+ */
+int vid_hash_create(unsigned int initial_size,
+ struct hash ** const hash_arg)
+{
+ unsigned int idx;
+ unsigned int tbl_sz = 0;
+ unsigned int result = IMG_ERROR_FATAL;
+ struct hash *local_hash = NULL;
+
+ if (!hash_arg) {
+ result = IMG_ERROR_INVALID_PARAMETERS;
+ return result;
+ }
+
+ if (global_initialized) {
+ pool_alloc(global_hashpool, ((void **)&local_hash));
+ if (!local_hash) {
+ result = IMG_ERROR_UNEXPECTED_STATE;
+ *hash_arg = NULL;
+ return result;
+ }
+
+ local_hash->count = 0;
+ local_hash->size = initial_size;
+ local_hash->minimum_size = initial_size;
+
+ tbl_sz = (sizeof(struct bucket *) * local_hash->size);
+ local_hash->table = kmalloc(tbl_sz, GFP_KERNEL);
+ if (!local_hash->table) {
+ result = pool_free(global_hashpool, local_hash);
+ if (result != IMG_SUCCESS)
+ result = IMG_ERROR_UNEXPECTED_STATE;
+ result |= IMG_ERROR_MALLOC_FAILED;
+ *hash_arg = NULL;
+ return result;
+ }
+
+ for (idx = 0; idx < local_hash->size; idx++)
+ local_hash->table[idx] = NULL;
+
+ *hash_arg = local_hash;
+ result = IMG_SUCCESS;
+ }
+ return result;
+}
+
+/*
+ * @Function vid_hash_delete
+ * @Description
+ * To delete a hash table, all entries in the table should be
+ * removed before calling this function.
+ * @Input hash_arg : Hash table pointer
+ * @Return IMG_SUCCESS or an error code.
+ */
+int vid_hash_delete(struct hash * const hash_arg)
+{
+ unsigned int result = IMG_ERROR_FATAL;
+
+ if (!hash_arg) {
+ result = IMG_ERROR_INVALID_PARAMETERS;
+ return result;
+ }
+
+ if (global_initialized) {
+ if (hash_arg->count != 0) {
+ result = IMG_ERROR_UNEXPECTED_STATE;
+ return result;
+ }
+
+ kfree(hash_arg->table);
+ hash_arg->table = NULL;
+
+ result = pool_free(global_hashpool, hash_arg);
+ if (result != IMG_SUCCESS) {
+ result = IMG_ERROR_UNEXPECTED_STATE;
+ return result;
+ }
+ }
+ return result;
+}
+
+/*
+ * @Function vid_hash_insert
+ * @Description
+ * To insert a key value pair into a hash table.
+ * @Input hash_arg : Hash table pointer
+ * @Input key : Key value
+ * @Input value : The value associated with the key.
+ * @Return IMG_SUCCESS or an error code.
+ */
+int vid_hash_insert(struct hash * const hash_arg,
+ unsigned long long key,
+ unsigned long long value)
+{
+ struct bucket *ps_bucket = NULL;
+ unsigned int result = IMG_ERROR_FATAL;
+
+ if (!hash_arg) {
+ result = IMG_ERROR_INVALID_PARAMETERS;
+ return result;
+ }
+
+ if (global_initialized) {
+ result = pool_alloc(global_bucketpool, ((void **)&ps_bucket));
+ if (result != IMG_SUCCESS || !ps_bucket) {
+ result = IMG_ERROR_UNEXPECTED_STATE;
+ return result;
+ }
+ ps_bucket->next = NULL;
+ ps_bucket->key = key;
+ ps_bucket->value = value;
+
+ result = hash_chain_insert(ps_bucket,
+ hash_arg->table,
+ hash_arg->size);
+
+ if (result != IMG_SUCCESS) {
+ pool_free(global_bucketpool, ps_bucket);
+ result = IMG_ERROR_UNEXPECTED_STATE;
+ return result;
+ }
+
+ hash_arg->count++;
+
+ /* check if we need to think about re-balancing */
+ if ((hash_arg->count << 1) > hash_arg->size) {
+ result = hash_resize(hash_arg, (hash_arg->size << 1));
+ if (result != IMG_SUCCESS) {
+ result = IMG_ERROR_UNEXPECTED_STATE;
+ return result;
+ }
+ }
+ result = IMG_SUCCESS;
+ }
+ return result;
+}
+
+/*
+ * @Function vid_hash_remove
+ * @Description
+ * To remove a key value pair from a hash table
+ * @Input hash_arg : Hash table pointer
+ * @Input key : Key value
+ * @Output ret_result : Receives the value associated with the key,
+ * if found.
+ * @Return IMG_SUCCESS or an error code.
+ */
+int vid_hash_remove(struct hash * const hash_arg,
+ unsigned long long key,
+ unsigned long * const ret_result)
+{
+ unsigned int idx;
+ unsigned int tmp1 = 0;
+ unsigned int tmp2 = 0;
+ unsigned int result = IMG_ERROR_FATAL;
+ struct bucket **bucket = NULL;
+
+ if (!hash_arg) {
+ result = IMG_ERROR_INVALID_PARAMETERS;
+ return result;
+ }
+
+ idx = hash_func(key, hash_arg->size);
+
+ for (bucket = &hash_arg->table[idx]; (*bucket) != NULL;
+ bucket = &((*bucket)->next)) {
+ if ((*bucket)->key == key) {
+ struct bucket *ps_bucket = (*bucket);
+
+ unsigned long long value = ps_bucket->value;
+
+ *bucket = ps_bucket->next;
+ result = pool_free(global_bucketpool, ps_bucket);
+
+ hash_arg->count--;
+
+ /* check if we need to think about re-balancing */
+ if (hash_arg->size > (hash_arg->count << 2) &&
+ hash_arg->size > hash_arg->minimum_size) {
+ tmp1 = (hash_arg->size >> 1);
+ tmp2 = hash_arg->minimum_size;
+ result = hash_resize(hash_arg,
+ private_max(tmp1, tmp2));
+ }
+ *ret_result = value;
+ result = IMG_SUCCESS;
+ break;
+ }
+ }
+ return result;
+}
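
To make the re-balancing thresholds above concrete (a worked example, not taken from the patch): with an initial table size of 8 buckets, vid_hash_insert() doubles the table to 16 when the fifth entry is added, since (5 << 1) > 8, and vid_hash_remove() halves it again only once the count falls to 3, since 16 > (3 << 2), and never shrinks it below the caller's minimum size.
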
diff --git a/drivers/staging/media/vxd/common/hash.h b/drivers/staging/media/vxd/common/hash.h
new file mode 100644
index 000000000000..91034d1ba441
--- /dev/null
+++ b/drivers/staging/media/vxd/common/hash.h
@@ -0,0 +1,86 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Self scaling hash tables.
+ *
+ * Copyright (c) Imagination Technologies Ltd.
+ * Copyright (c) 2021 Texas Instruments Incorporated - http://www.ti.com/
+ *
+ * Authors:
+ * Lakshmi Sankar <[email protected]>
+ *
+ * Re-written for upstream
+ * Sidraya Jayagond <[email protected]>
+ */
+#ifndef _HASH_H_
+#define _HASH_H_
+
+#include <linux/types.h>
+struct hash;
+
+/**
+ * vid_hash_initialise - Initialise the hash module.
+ *
+ * Return IMG_SUCCESS or an error code.
+ */
+int vid_hash_initialise(void);
+
+/*
+ * @Function VID_HASH_Finalise
+ * @Description
+ * To finalise the hash module. All allocated hash tables should
+ * be deleted before calling this function.
+ * @Input None
+ * @Return IMG_SUCCESS or an error code.
+ */
+int vid_hash_finalise(void);
+
+/*
+ * @Function VID_HASH_Create
+ * @Description
+ * Create a self scaling hash table.
+ * @Input initial_size : Initial and minimum size of the hash table.
+ * @Output hash : Hash table handle or NULL.
+ * @Return IMG_SUCCESS or an error code.
+ */
+int vid_hash_create(unsigned int initial_size,
+ struct hash ** const hash_hndl);
+
+/*
+ * @Function VID_HASH_Delete
+ * @Description
+ * To delete a hash table, all entries in the table should be
+ * removed before calling this function.
+ * @Input hash : Hash table pointer
+ * @Return IMG_SUCCESS or an error code.
+ */
+int vid_hash_delete(struct hash * const ps_hash);
+
+/*
+ * @Function VID_HASH_Insert
+ * @Description
+ * To insert a key value pair into a hash table.
+ * @Input ps_hash : Hash table pointer
+ * @Input key : Key value
+ * @Input value : The value associated with the key.
+ * @Return IMG_SUCCESS or an error code.
+ */
+int vid_hash_insert(struct hash * const ps_hash,
+ unsigned long long key,
+ unsigned long long value);
+
+/*
+ * @Function VID_HASH_Remove
+ * @Description
+ * To remove a key value pair from a hash table
+ * @Input ps_hash : Hash table pointer
+ * @Input key : Key value
+ * @Output result : Receives the value associated with the key,
+ * if found.
+ * @Return IMG_SUCCESS or an error code.
+ */
+int vid_hash_remove(struct hash * const ps_hash,
+ unsigned long long key,
+ unsigned long * const result);
+
+#endif /* _HASH_H_ */
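
A minimal, illustrative caller of the hash API declared above (not part of the patch; the key/value pair and the initial size of 16 are arbitrary):

#include "hash.h"
#include "img_errors.h"

static int hash_example(void)
{
	struct hash *tbl;
	unsigned long val;
	int ret;

	ret = vid_hash_initialise();
	if (ret != IMG_SUCCESS && ret != IMG_ERROR_ALREADY_COMPLETE)
		return ret;

	ret = vid_hash_create(16, &tbl);                /* initial/minimum size */
	if (ret != IMG_SUCCESS)
		return ret;

	ret = vid_hash_insert(tbl, 0x1000ULL, 0xdeadULL);       /* key -> value */
	if (ret == IMG_SUCCESS)
		ret = vid_hash_remove(tbl, 0x1000ULL, &val);    /* val = 0xdead */

	vid_hash_delete(tbl);           /* the table must be empty at this point */
	return vid_hash_finalise();
}
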
diff --git a/drivers/staging/media/vxd/common/pool.c b/drivers/staging/media/vxd/common/pool.c
new file mode 100644
index 000000000000..c0cb1e465c50
--- /dev/null
+++ b/drivers/staging/media/vxd/common/pool.c
@@ -0,0 +1,228 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Object Pool Memory Allocator
+ *
+ * Copyright (c) Imagination Technologies Ltd.
+ * Copyright (c) 2021 Texas Instruments Incorporated - http://www.ti.com/
+ *
+ * Authors:
+ * Lakshmi Sankar <[email protected]>
+ *
+ * Re-written for upstream
+ * Sidraya Jayagond <[email protected]>
+ */
+
+#include <linux/dma-mapping.h>
+#include <media/v4l2-ctrls.h>
+#include <media/v4l2-device.h>
+#include <media/v4l2-mem2mem.h>
+
+#include "img_errors.h"
+#include "pool.h"
+
+#define BUFF_MAX_SIZE 4096
+#define BUFF_MAX_GROW 32
+
+/* 64 bits */
+#define ALIGN_SIZE (sizeof(long long) - 1)
+
+struct pool {
+ unsigned char *name;
+ unsigned int size;
+ unsigned int grow;
+ struct buffer *buffers;
+ struct object *objects;
+};
+
+struct buffer {
+ struct buffer *next;
+};
+
+struct object {
+ struct object *next;
+};
+
+static inline unsigned char *strdup_cust(const unsigned char *str)
+{
+ unsigned char *r = kmalloc(strlen(str) + 1, GFP_KERNEL);
+
+ if (r)
+ strcpy(r, str);
+ return r;
+}
+
+/*
+ * pool_create - Create an object pool
+ * @name: Name of the object pool for diagnostic purposes
+ * @obj_size: size of each object in the pool in bytes
+ * @pool_hdnl: Will contain NULL or the object pool handle
+ *
+ * This function creates an object pool.
+ */
+
+int pool_create(const unsigned char * const name,
+ unsigned int obj_size,
+ struct pool ** const pool_hdnl)
+{
+ struct pool *local_pool = NULL;
+ unsigned int result = IMG_ERROR_FATAL;
+
+ if (!name || !pool_hdnl) {
+ result = IMG_ERROR_INVALID_PARAMETERS;
+ return result;
+ }
+
+ local_pool = kmalloc((sizeof(*local_pool)), GFP_KERNEL);
+ if (!local_pool) {
+ result = IMG_ERROR_MALLOC_FAILED;
+ return result;
+ }
+
+ local_pool->name = strdup_cust((unsigned char *)name);
+ local_pool->size = obj_size;
+ local_pool->buffers = NULL;
+ local_pool->objects = NULL;
+ local_pool->grow =
+ (BUFF_MAX_SIZE - sizeof(struct buffer)) /
+ (obj_size + ALIGN_SIZE);
+
+ if (local_pool->grow == 0)
+ local_pool->grow = 1;
+ else if (local_pool->grow > BUFF_MAX_GROW)
+ local_pool->grow = BUFF_MAX_GROW;
+
+ *pool_hdnl = local_pool;
+ result = IMG_SUCCESS;
+
+ return result;
+}
+
+/*
+ * @Function pool_delete
+ * @Description
+ * Delete an object pool. All objects allocated from the pool must
+ * be freed with pool_free() before deleting the object pool.
+ * @Input pool : Object Pool pointer
+ * @Return IMG_SUCCESS or an error code.
+ */
+int pool_delete(struct pool * const pool_arg)
+{
+ struct buffer *local_buf = NULL;
+ unsigned int result = IMG_ERROR_FATAL;
+
+ if (!pool_arg) {
+ result = IMG_ERROR_INVALID_PARAMETERS;
+ return result;
+ }
+
+ local_buf = pool_arg->buffers;
+ while (local_buf) {
+ local_buf = local_buf->next;
+ kfree(pool_arg->buffers);
+ pool_arg->buffers = local_buf;
+ }
+
+ kfree(pool_arg->name);
+ pool_arg->name = NULL;
+
+ kfree(pool_arg);
+ result = IMG_SUCCESS;
+
+ return result;
+}
+
+/*
+ * @Function pool_alloc
+ * @Description
+ * Allocate an object from an object pool.
+ * @Input pool_arg : Object Pool
+ * @Output obj_hndl : Pointer containing the handle to the
+ * object created or IMG_NULL
+ * @Return IMG_SUCCESS or an error code.
+ */
+int pool_alloc(struct pool * const pool_arg,
+ void ** const obj_hndl)
+{
+ struct object *local_obj1 = NULL;
+ struct buffer *local_buf = NULL;
+ unsigned int idx = 0;
+ unsigned int sz = 0;
+ unsigned int result = IMG_ERROR_FATAL;
+
+ if (!pool_arg || !obj_hndl) {
+ result = IMG_ERROR_INVALID_PARAMETERS;
+ return result;
+ }
+
+ if (!pool_arg->objects) {
+ sz = (pool_arg->size + ALIGN_SIZE);
+ sz *= (pool_arg->grow + sizeof(struct buffer));
+ local_buf = kmalloc(sz, GFP_KERNEL);
+ if (!local_buf) {
+ result = IMG_ERROR_MALLOC_FAILED;
+ return result;
+ }
+
+ local_buf->next = pool_arg->buffers;
+ pool_arg->buffers = local_buf;
+
+ for (idx = 0; idx < pool_arg->grow; idx++) {
+ struct object *local_obj2;
+ unsigned char *temp_ptr = NULL;
+
+ local_obj2 = (struct object *)(((unsigned char *)(local_buf + 1))
+ + (idx * (pool_arg->size + ALIGN_SIZE)));
+
+ temp_ptr = (unsigned char *)local_obj2;
+ if ((unsigned long)temp_ptr & ALIGN_SIZE) {
+ temp_ptr += ((ALIGN_SIZE + 1)
+ - ((unsigned long)temp_ptr & ALIGN_SIZE));
+ local_obj2 = (struct object *)temp_ptr;
+ }
+
+ local_obj2->next = pool_arg->objects;
+ pool_arg->objects = local_obj2;
+ }
+ }
+
+ if (!pool_arg->objects) {
+ result = IMG_ERROR_UNEXPECTED_STATE;
+ return result;
+ }
+
+ local_obj1 = pool_arg->objects;
+ pool_arg->objects = local_obj1->next;
+
+ *obj_hndl = (void *)(local_obj1);
+ result = IMG_SUCCESS;
+
+ return result;
+}
+
+/*
+ * @Function pool_free
+ * @Description
+ * Free an object previously allocated from an object pool.
+ * @Input pool_arg : Object Pool pointer.
+ * @Input obj_hndl : Handle to the object to be freed.
+ * @Return IMG_SUCCESS or an error code.
+ */
+int pool_free(struct pool * const pool_arg,
+ void * const obj_hndl)
+{
+ struct object *object = NULL;
+ unsigned int result = IMG_ERROR_FATAL;
+
+ if (!pool_arg || !obj_hndl) {
+ result = IMG_ERROR_INVALID_PARAMETERS;
+ return result;
+ }
+
+ object = (struct object *)obj_hndl;
+ object->next = pool_arg->objects;
+ pool_arg->objects = object;
+
+ result = IMG_SUCCESS;
+
+ return result;
+}
diff --git a/drivers/staging/media/vxd/common/pool.h b/drivers/staging/media/vxd/common/pool.h
new file mode 100644
index 000000000000..d22d15a2af54
--- /dev/null
+++ b/drivers/staging/media/vxd/common/pool.h
@@ -0,0 +1,66 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Object Pool Memory Allocator header
+ *
+ * Copyright (c) Imagination Technologies Ltd.
+ * Copyright (c) 2021 Texas Instruments Incorporated - http://www.ti.com/
+ *
+ * Authors:
+ * Lakshmi Sankar <[email protected]>
+ *
+ * Re-written for upstream
+ * Sidraya Jayagond <[email protected]>
+ */
+#ifndef _pool_h_
+#define _pool_h_
+
+#include <linux/types.h>
+
+struct pool;
+
+/**
+ * pool_create - Create an object pool
+ * @name: Name of the object pool for diagnostic purposes
+ * @obj_size: size of each object in the pool in bytes
+ * @pool: Will contain NULL or the object pool handle
+ *
+ * Return IMG_SUCCESS or an error code.
+ */
+int pool_create(const unsigned char * const name,
+ unsigned int obj_size,
+ struct pool ** const pool);
+
+/*
+ * @Function pool_delete
+ * @Description
+ * Delete an object pool. All objects allocated from the pool must
+ * be freed with pool_free() before deleting the object pool.
+ * @Input pool : Object Pool pointer
+ * @Return IMG_SUCCESS or an error code.
+ */
+int pool_delete(struct pool * const pool);
+
+/*
+ * @Function pool_alloc
+ * @Description
+ * Allocate an Object from an Object pool.
+ * @Input pool : Object Pool
+ * @Output obj_hdnl : Pointer containing the handle to the
+ * object created or IMG_NULL
+ * @Return IMG_SUCCESS or an error code.
+ */
+int pool_alloc(struct pool * const pool,
+ void ** const obj_hdnl);
+
+/*
+ * @Function pool_free
+ * @Description
+ * Free an object previously allocated from an object pool.
+ * @Input pool : Object Pool pointer.
+ * @Input obj_hdnl : Handle to the object to be freed.
+ * @Return IMG_SUCCESS or an error code.
+ */
+int pool_free(struct pool * const pool,
+ void * const obj_hdnl);
+
+#endif /* _pool_h_ */
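
Finally, a hedged sketch of the object pool declared above (the pooled structure and pool name are illustrative and not part of the patch):

#include "pool.h"
#include "img_errors.h"

struct my_obj {                         /* illustrative pooled object */
	unsigned int id;
};

static int pool_example(void)
{
	struct pool *pool;
	struct my_obj *obj = NULL;
	int ret;

	ret = pool_create("my-objs", sizeof(struct my_obj), &pool);
	if (ret != IMG_SUCCESS)
		return ret;

	ret = pool_alloc(pool, (void **)&obj);
	if (ret == IMG_SUCCESS) {
		obj->id = 1;
		pool_free(pool, obj);   /* return the object to the free list */
	}

	return pool_delete(pool);       /* all objects must be freed beforehand */
}
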
diff --git a/drivers/staging/media/vxd/common/ra.c b/drivers/staging/media/vxd/common/ra.c
new file mode 100644
index 000000000000..ac07737f351b
--- /dev/null
+++ b/drivers/staging/media/vxd/common/ra.c
@@ -0,0 +1,972 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Implements generic resource allocation.
+ *
+ * Copyright (c) Imagination Technologies Ltd.
+ * Copyright (c) 2021 Texas Instruments Incorporated - http://www.ti.com/
+ *
+ * Authors:
+ * Lakshmi Sankar <[email protected]>
+ *
+ * Re-written for upstream
+ * Sidraya Jayagond <[email protected]>
+ */
+
+#include <linux/dma-mapping.h>
+#include <media/v4l2-ctrls.h>
+#include <media/v4l2-device.h>
+#include <media/v4l2-mem2mem.h>
+
+#include "hash.h"
+#include "img_errors.h"
+#include "pool.h"
+#include "ra.h"
+
+static unsigned char global_init;
+
+/* pool of struct arena's */
+static struct pool *global_pool_arena;
+
+/* pool of struct boundary tag */
+static struct pool *global_pool_bt;
+
+/**
+ * ra_request_alloc_fail - ra_request_alloc_fail
+ * @import_hdnl : Callback handle.
+ * @requested_size : Requested allocation size.
+ * @ref : Pointer to user reference data.
+ * @alloc_flags : Allocation flags.
+ * @actual_size : Pointer to contain the actual allocated size.
+ * @base_addr : Allocation base (always 0, since this allocator always fails).
+ *
+ * Default callback allocator used if no callback is specified, always fails
+ * to allocate further resources to the arena.
+ */
+static int ra_request_alloc_fail(void *import_hdnl,
+ unsigned long long requested_size,
+ unsigned long long *actual_size,
+ void **ref,
+ unsigned int alloc_flags,
+ unsigned long long *base_addr)
+{
+ if (base_addr)
+ *base_addr = 0;
+
+ return IMG_SUCCESS;
+}
+
+/*
+ * @Function ra_log2
+ * @Description
+ * Calculates the Log2(n) with n being a 64-bit value.
+ *
+ * @Input value : Input value.
+ * @Output None
+ * @Return result : Log2(value).
+ */
+
+static unsigned int ra_log2(unsigned long long value)
+{
+ int res = 0;
+
+ value >>= 1;
+ while (value > 0) {
+ value >>= 1;
+ res++;
+ }
+ return res;
+}
+
+/*
+ * @Function ra_segment_list_insert_after
+ * @Description Insert a boundary tag into an arena segment list after a
+ * specified boundary tag.
+ * @Input arena_arg : Pointer to the input arena.
+ * @Input bt_here_arg : The boundary tag after which bt_to_insert_arg
+ * will be added.
+ * @Input bt_to_insert_arg : The boundary tag to insert.
+ * @Output None
+ * @Return None
+ */
+static void ra_segment_list_insert_after(struct arena *arena_arg,
+ struct btag *bt_here_arg,
+ struct btag *bt_to_insert_arg)
+{
+ bt_to_insert_arg->nxt_seg = bt_here_arg->nxt_seg;
+ bt_to_insert_arg->prv_seg = bt_here_arg;
+
+ if (!bt_here_arg->nxt_seg)
+ arena_arg->tail_seg = bt_to_insert_arg;
+ else
+ bt_here_arg->nxt_seg->prv_seg = bt_to_insert_arg;
+
+ bt_here_arg->nxt_seg = bt_to_insert_arg;
+}
+
+/*
+ * @Function ra_segment_list_insert
+ * @Description
+ * Insert a boundary tag into an arena segment list at the appropriate point.
+ * @Input arena_arg : Pointer to the input arena.
+ * @Input bt_to_insert_arg : The boundary tag to insert.
+ * @Output None
+ * @Return None
+ */
+static void ra_segment_list_insert(struct arena *arena_arg,
+ struct btag *bt_to_insert_arg)
+{
+ /* insert into the segment chain */
+ if (!arena_arg->head_seg) {
+ arena_arg->head_seg = bt_to_insert_arg;
+ arena_arg->tail_seg = bt_to_insert_arg;
+ bt_to_insert_arg->nxt_seg = NULL;
+ bt_to_insert_arg->prv_seg = NULL;
+ } else {
+ struct btag *bt_scan = arena_arg->head_seg;
+
+ while (bt_scan->nxt_seg &&
+ bt_to_insert_arg->base >=
+ bt_scan->nxt_seg->base) {
+ bt_scan = bt_scan->nxt_seg;
+ }
+ ra_segment_list_insert_after(arena_arg,
+ bt_scan,
+ bt_to_insert_arg);
+ }
+}
+
+/*
+ * @Function ra_segment_list_remove
+ * @Description
+ * Remove a boundary tag from an arena segment list.
+ * @Input arena_arg : Pointer to the input arena.
+ * @Input bt_to_remove_arg : The boundary tag to remove.
+ * @Output None
+ * @Return None
+ */
+static void ra_segment_list_remove(struct arena *arena_arg,
+ struct btag *bt_to_remove_arg)
+{
+ if (!bt_to_remove_arg->prv_seg)
+ arena_arg->head_seg = bt_to_remove_arg->nxt_seg;
+ else
+ bt_to_remove_arg->prv_seg->nxt_seg = bt_to_remove_arg->nxt_seg;
+
+ if (!bt_to_remove_arg->nxt_seg)
+ arena_arg->tail_seg = bt_to_remove_arg->prv_seg;
+ else
+ bt_to_remove_arg->nxt_seg->prv_seg = bt_to_remove_arg->prv_seg;
+}
+
+/*
+ * @Function ra_segment_split
+ * @Description
+ * Split a segment into two, maintain the arena segment list.
+ * The boundary tag should not be in the free table. Neither the original nor
+ * the new neighbour boundary tag will be in the free table.
+ * @Input arena_arg : Pointer to the input arena.
+ * @Input bt_to_split_arg : The boundary tag to split.
+ * @Input size : The required size of the first segment after the split.
+ * @Output None
+ * @Return btag *: New boundary tag.
+ */
+static struct btag *ra_segment_split(struct arena *arena_arg,
+ struct btag *bt_to_split_arg,
+ unsigned long long size)
+{
+ struct btag *local_bt_neighbour = NULL;
+ int res = IMG_ERROR_FATAL;
+
+ res = pool_alloc(global_pool_bt, ((void **)&local_bt_neighbour));
+ if (res != IMG_SUCCESS)
+ return NULL;
+
+ local_bt_neighbour->prv_seg = bt_to_split_arg;
+ local_bt_neighbour->nxt_seg = bt_to_split_arg->nxt_seg;
+ local_bt_neighbour->bt_type = RA_BOUNDARY_TAG_TYPE_FREE;
+ local_bt_neighbour->size = (bt_to_split_arg->size - size);
+ local_bt_neighbour->base = (bt_to_split_arg->base + size);
+ local_bt_neighbour->nxt_free = NULL;
+ local_bt_neighbour->prv_free = NULL;
+ local_bt_neighbour->ref = bt_to_split_arg->ref;
+
+ if (!bt_to_split_arg->nxt_seg)
+ arena_arg->tail_seg = local_bt_neighbour;
+ else
+ bt_to_split_arg->nxt_seg->prv_seg = local_bt_neighbour;
+
+ bt_to_split_arg->nxt_seg = local_bt_neighbour;
+ bt_to_split_arg->size = size;
+
+ return local_bt_neighbour;
+}
+
+/*
+ * @Function ra_free_list_insert
+ * @Description
+ * Insert a boundary tag into an arena free table.
+ * @Input arena_arg : Pointer to the input arena.
+ * @Input bt_arg : The boundary tag to insert into an arena
+ * free table.
+ * @Output None
+ * @Return None
+ */
+static void ra_free_list_insert(struct arena *arena_arg,
+ struct btag *bt_arg)
+{
+ unsigned int index = ra_log2(bt_arg->size);
+
+ bt_arg->bt_type = RA_BOUNDARY_TAG_TYPE_FREE;
+ if (index < FREE_TABLE_LIMIT)
+ bt_arg->nxt_free = arena_arg->head_free[index];
+ else
+ bt_arg->nxt_free = NULL;
+
+ bt_arg->prv_free = NULL;
+
+ if (index < FREE_TABLE_LIMIT) {
+ if (arena_arg->head_free[index])
+ arena_arg->head_free[index]->prv_free = bt_arg;
+ }
+
+ if (index < FREE_TABLE_LIMIT)
+ arena_arg->head_free[index] = bt_arg;
+}
+
+/*
+ * @Function ra_free_list_remove
+ * @Description
+ * Remove a boundary tag from an arena free table.
+ * @Input arena_arg : Pointer to the input arena.
+ * @Input bt_arg : The boundary tag to remove from
+ * an arena free table.
+ * @Output None
+ * @Return None
+ */
+static void ra_free_list_remove(struct arena *arena_arg,
+ struct btag *bt_arg)
+{
+ unsigned int index = ra_log2(bt_arg->size);
+
+ if (bt_arg->nxt_free)
+ bt_arg->nxt_free->prv_free = bt_arg->prv_free;
+
+ if (!bt_arg->prv_free && index < FREE_TABLE_LIMIT)
+ arena_arg->head_free[index] = bt_arg->nxt_free;
+ else if (bt_arg->prv_free)
+ bt_arg->prv_free->nxt_free = bt_arg->nxt_free;
+}
+
+/*
+ * @Function ra_build_span_marker
+ * @Description
+ * Construct a span marker boundary tag.
+ * @Input base : The base of the boundary tag.
+ * @Output None
+ * @Return btag * : New span marker boundary tag
+ */
+static struct btag *ra_build_span_marker(unsigned long long base)
+{
+ struct btag *local_bt = NULL;
+ int res = IMG_ERROR_FATAL;
+
+ res = pool_alloc(global_pool_bt, ((void **)&local_bt));
+ if (res != IMG_SUCCESS)
+ return NULL;
+
+ local_bt->bt_type = RA_BOUNDARY_TAG_TYPE_SPAN;
+ local_bt->base = base;
+ local_bt->size = 0;
+ local_bt->nxt_seg = NULL;
+ local_bt->prv_seg = NULL;
+ local_bt->nxt_free = NULL;
+ local_bt->prv_free = NULL;
+ local_bt->ref = NULL;
+
+ return local_bt;
+}
+
+/*
+ * @Function ra_build_bt
+ * @Description
+ * Construct a boundary tag for a free segment.
+ * @Input base : The base of the resource segment.
+ * @Input size : The extent of the resource segment.
+ * @Output None
+ * @Return btag * : New boundary tag
+ */
+static struct btag *ra_build_bt(unsigned long long base, unsigned long long size)
+{
+ struct btag *local_bt = NULL;
+ int res = IMG_ERROR_FATAL;
+
+ res = pool_alloc(global_pool_bt, ((void **)&local_bt));
+
+ if (res != IMG_SUCCESS)
+ return local_bt;
+
+ local_bt->bt_type = RA_BOUNDARY_TAG_TYPE_FREE;
+ local_bt->base = base;
+ local_bt->size = size;
+ local_bt->nxt_seg = NULL;
+ local_bt->prv_seg = NULL;
+ local_bt->nxt_free = NULL;
+ local_bt->prv_free = NULL;
+ local_bt->ref = NULL;
+
+ return local_bt;
+}
+
+/*
+ * @Function ra_insert_resource
+ * @Description
+ * Add a free resource segment to an arena.
+ * @Input base : The base of the resource segment.
+ * @Input size : The size of the resource segment.
+ * @Output None
+ * @Return IMG_SUCCESS or an error code.
+ */
+static int ra_insert_resource(struct arena *arena_arg,
+ unsigned long long base,
+ unsigned long long size)
+{
+ struct btag *local_bt = NULL;
+
+ local_bt = ra_build_bt(base, size);
+ if (!local_bt)
+ return IMG_ERROR_UNEXPECTED_STATE;
+
+ ra_segment_list_insert(arena_arg, local_bt);
+ ra_free_list_insert(arena_arg, local_bt);
+ arena_arg->max_idx = ra_log2(size);
+ if (1ULL << arena_arg->max_idx < size)
+ arena_arg->max_idx++;
+
+ return IMG_SUCCESS;
+}
+
+/*
+ * @Function ra_insert_resource_span
+ * @Description
+ * Add a free resource span to an arena, complete with span markers.
+ * @Input arena_arg : Pointer to the input arena.
+ * @Input base : The base of the resource segment.
+ * @Input size : The size of the resource segment.
+ * @Output None
+ * @Return btag * : The boundary tag representing
+ * the free resource segment.
+ */
+static struct btag *ra_insert_resource_span(struct arena *arena_arg,
+ unsigned long long base,
+ unsigned long long size)
+{
+ struct btag *local_bt = NULL;
+ struct btag *local_bt_span_start = NULL;
+ struct btag *local_bt_span_end = NULL;
+
+ local_bt_span_start = ra_build_span_marker(base);
+ if (!local_bt_span_start)
+ return NULL;
+
+ local_bt_span_end = ra_build_span_marker(base + size);
+ if (!local_bt_span_end) {
+ pool_free(global_pool_bt, local_bt_span_start);
+ return NULL;
+ }
+
+ local_bt = ra_build_bt(base, size);
+ if (!local_bt) {
+ pool_free(global_pool_bt, local_bt_span_end);
+ pool_free(global_pool_bt, local_bt_span_start);
+ return NULL;
+ }
+
+ ra_segment_list_insert(arena_arg, local_bt_span_start);
+ ra_segment_list_insert_after(arena_arg,
+ local_bt_span_start,
+ local_bt);
+ ra_free_list_insert(arena_arg, local_bt);
+ ra_segment_list_insert_after(arena_arg,
+ local_bt,
+ local_bt_span_end);
+
+ return local_bt;
+}
+
+/*
+ * @Function ra_free_bt
+ * @Description
+ * Free a boundary tag taking care of the segment list and the
+ * boundary tag free table.
+ * @Input arena_arg : Pointer to the input arena.
+ * @Input bt_arg : The boundary tag to free.
+ * @Output None
+ * @Return None
+ */
+static void ra_free_bt(struct arena *arena_arg,
+ struct btag *bt_arg)
+{
+ struct btag *bt_neibr;
+
+ /* try to coalesce with the left neighbour */
+ bt_neibr = bt_arg->prv_seg;
+ if (bt_neibr &&
+ bt_neibr->bt_type == RA_BOUNDARY_TAG_TYPE_FREE &&
+ bt_neibr->base + bt_neibr->size == bt_arg->base) {
+ ra_free_list_remove(arena_arg, bt_neibr);
+ ra_segment_list_remove(arena_arg, bt_neibr);
+ bt_arg->base = bt_neibr->base;
+ bt_arg->size += bt_neibr->size;
+ pool_free(global_pool_bt, bt_neibr);
+ }
+
+ /* try to coalesce with the right neighbour */
+ bt_neibr = bt_arg->nxt_seg;
+ if (bt_neibr &&
+ bt_neibr->bt_type == RA_BOUNDARY_TAG_TYPE_FREE &&
+ bt_arg->base + bt_arg->size == bt_neibr->base) {
+ ra_free_list_remove(arena_arg, bt_neibr);
+ ra_segment_list_remove(arena_arg, bt_neibr);
+ bt_arg->size += bt_neibr->size;
+ pool_free(global_pool_bt, bt_neibr);
+ }
+
+ if (bt_arg->nxt_seg &&
+ bt_arg->nxt_seg->bt_type == RA_BOUNDARY_TAG_TYPE_SPAN &&
+ bt_arg->prv_seg && bt_arg->prv_seg->bt_type ==
+ RA_BOUNDARY_TAG_TYPE_SPAN) {
+ struct btag *ps_bt_nxt = bt_arg->nxt_seg;
+ struct btag *ps_bt_prev = bt_arg->prv_seg;
+
+ ra_segment_list_remove(arena_arg, ps_bt_nxt);
+ ra_segment_list_remove(arena_arg, ps_bt_prev);
+ ra_segment_list_remove(arena_arg, bt_arg);
+ arena_arg->import_free_fxn(arena_arg->import_hdnl,
+ bt_arg->base,
+ bt_arg->ref);
+ pool_free(global_pool_bt, ps_bt_nxt);
+ pool_free(global_pool_bt, ps_bt_prev);
+ pool_free(global_pool_bt, bt_arg);
+ } else {
+ ra_free_list_insert(arena_arg, bt_arg);
+ }
+}
+
+static int ra_check_btag(struct arena *arena_arg,
+ unsigned long long size_arg,
+ void **ref,
+ struct btag *bt_arg,
+ unsigned long long align_arg,
+ unsigned long long *base_arg,
+ unsigned int align_log2)
+{
+ unsigned long long local_align_base;
+ int res = IMG_ERROR_FATAL;
+
+ while (bt_arg) {
+ if (align_arg > 1ULL)
+ local_align_base = ((bt_arg->base + align_arg - 1)
+ >> align_log2) << align_log2;
+ else
+ local_align_base = bt_arg->base;
+
+ if ((bt_arg->base + bt_arg->size) >=
+ (local_align_base + size_arg)) {
+ ra_free_list_remove(arena_arg, bt_arg);
+
+ /*
+ * with align_arg we might need to discard the front of
+ * this segment
+ */
+ if (local_align_base > bt_arg->base) {
+ struct btag *btneighbor;
+
+ btneighbor = ra_segment_split(arena_arg,
+ bt_arg,
+ (local_align_base -
+ bt_arg->base));
+ /*
+ * Partition the buffer, create a new boundary
+ * tag
+ */
+ if (!btneighbor)
+ return IMG_ERROR_UNEXPECTED_STATE;
+
+ ra_free_list_insert(arena_arg, bt_arg);
+ bt_arg = btneighbor;
+ }
+
+ /*
+ * The segment might be too big, if so, discard the back
+ * of the segment
+ */
+ if (bt_arg->size > size_arg) {
+ struct btag *btneighbor;
+
+ btneighbor = ra_segment_split(arena_arg,
+ bt_arg,
+ size_arg);
+ /*
+ * Partition the buffer, create a new boundary
+ * tag
+ */
+ if (!btneighbor)
+ return IMG_ERROR_UNEXPECTED_STATE;
+
+ ra_free_list_insert(arena_arg, btneighbor);
+ }
+
+ bt_arg->bt_type = RA_BOUNDARY_TAG_TYPE_LIVE;
+
+ res = vid_hash_insert(arena_arg->hash_tbl,
+ bt_arg->base,
+ (unsigned long)bt_arg);
+ if (res != IMG_SUCCESS) {
+ ra_free_bt(arena_arg, bt_arg);
+ *base_arg = 0;
+ return IMG_ERROR_UNEXPECTED_STATE;
+ }
+
+ if (ref)
+ *ref = bt_arg->ref;
+
+ *base_arg = bt_arg->base;
+ return IMG_SUCCESS;
+ }
+ bt_arg = bt_arg->nxt_free;
+ }
+
+ return res;
+}
+
+/*
+ * @Function ra_attempt_alloc_aligned
+ * @Description Attempt to allocate from an arena
+ * @Input arena_arg: Pointer to the input arena
+ * @Input size_arg: The requested allocation size
+ * @Input ref: The user references associated with the allocated
+ * segment
+ * @Input align_arg: Required alignment
+ * @Output base_arg: Base address of the allocated resource
+ * @Return IMG_SUCCESS or an error code
+ */
+static int ra_attempt_alloc_aligned(struct arena *arena_arg,
+ unsigned long long size_arg,
+ void **ref,
+ unsigned long long align_arg,
+ unsigned long long *base_arg)
+{
+ unsigned int index;
+ unsigned int align_log2;
+ int res = IMG_ERROR_FATAL;
+
+ if (!arena_arg || !base_arg)
+ return IMG_ERROR_INVALID_PARAMETERS;
+
+ /*
+ * Take the log of the alignment to get number of bits to shift
+ * left/right for multiply/divide. The assumption made here is that
+ * the alignment is a power-of-2 value.
+ */
+ align_log2 = ra_log2(align_arg);
+
+ /*
+ * Search for a near fit free boundary tag, start looking at the
+ * log2 free table for our required size and work on up the table.
+ */
+ index = ra_log2(size_arg);
+
+ /*
+ * If the size required is exactly 2**n then use bucket n, because
+ * we know that every free block in that bucket is at least 2**n,
+ * otherwise start at the next bucket up.
+ */
+ if (size_arg > (1ull << index))
+ index++;
+
+ while ((index < FREE_TABLE_LIMIT) && !arena_arg->head_free[index])
+ index++;
+
+ if (index >= FREE_TABLE_LIMIT) {
+ pr_err("requested allocation size doesn't fit in the arena. Increase MMU HEAP Size\n");
+ return IMG_ERROR_OUT_OF_MEMORY;
+ }
+
+ while (index < FREE_TABLE_LIMIT) {
+ if (arena_arg->head_free[index]) {
+ /* we have a cached free boundary tag */
+ struct btag *local_bt =
+ arena_arg->head_free[index];
+
+ res = ra_check_btag(arena_arg,
+ size_arg,
+ ref,
+ local_bt,
+ align_arg,
+ base_arg,
+ align_log2);
+ /*
+ * Stop as soon as a bucket satisfies the request; otherwise
+ * keep working up the free table.
+ */
+ if (res == IMG_SUCCESS)
+ return IMG_SUCCESS;
+ }
+ index++;
+ }
+
+ return res;
+}
+
+/*
+ * @Function vid_ra_initialise
+ * @Description Initializes the RA module. Must be called before any other
+ * ra API function
+ * @Return IMG_SUCCESS or an error code
+ *
+ */
+int vid_ra_initialise(void)
+{
+ int res = IMG_ERROR_FATAL;
+
+ if (!global_init) {
+ res = pool_create("img-arena",
+ sizeof(struct arena),
+ &global_pool_arena);
+ if (res != IMG_SUCCESS)
+ return IMG_ERROR_UNEXPECTED_STATE;
+
+ res = pool_create("img-bt",
+ sizeof(struct btag),
+ &global_pool_bt);
+ if (res != IMG_SUCCESS) {
+ res = pool_delete(global_pool_arena);
+ global_pool_arena = NULL;
+ return IMG_ERROR_UNEXPECTED_STATE;
+ }
+ global_init = 1;
+ res = IMG_SUCCESS;
+ }
+
+ return res;
+}
+
+/*
+ * @Function vid_ra_deinit
+ * @Description Deinitializes the RA module
+ * @Return IMG_SUCCESS or an error code
+ *
+ */
+int vid_ra_deinit(void)
+{
+ int res = IMG_ERROR_FATAL;
+
+ if (global_init) {
+ if (global_pool_arena) {
+ res = pool_delete(global_pool_arena);
+ global_pool_arena = NULL;
+ }
+ if (global_pool_bt) {
+ res = pool_delete(global_pool_bt);
+ global_pool_bt = NULL;
+ }
+ global_init = 0;
+ res = IMG_SUCCESS;
+ }
+ return res;
+}
+
+/*
+ * @Function vid_ra_create
+ * @Description Used to create a resource arena.
+ * @Input name: The name of the arena for diagnostic purposes
+ * @Input base_arg: The base of an initial resource span or 0
+ * @Input size_arg: The size of an initial resource span or 0
+ * @Input quantum: The arena allocation quantum
+ * @Input (*import_alloc_fxn): A resource allocation callback or NULL
+ * @Input (*import_free_fxn): A resource de-allocation callback or NULL
+ * @Input import_hdnl: Handle passed to alloc and free or NULL
+ * @Output arena_hndl: The handle for the arena being created, or NULL
+ * @Return IMG_SUCCESS or an error code
+ *
+ */
+int vid_ra_create(const unsigned char * const name,
+ unsigned long long base_arg,
+ unsigned long long size_arg,
+ unsigned long quantum,
+ int (*import_alloc_fxn)(void * const import_hdnl,
+ unsigned long long req_sz,
+ unsigned long long * const actl_sz,
+ void ** const ref,
+ unsigned int alloc_flags,
+ unsigned long long * const base_arg),
+ int (*import_free_fxn)(void * const import_hdnl,
+ unsigned long long import_base,
+ void * const import_ref),
+ void *import_hdnl,
+ void **arena_hndl)
+{
+ struct arena *local_arena = NULL;
+ unsigned int idx = 0;
+ int res = IMG_ERROR_FATAL;
+
+ if (!arena_hndl)
+ return IMG_ERROR_INVALID_PARAMETERS;
+
+ *(arena_hndl) = NULL;
+
+ if (global_init) {
+ res = pool_alloc(global_pool_arena, ((void **)&local_arena));
+ if (!local_arena || res != IMG_SUCCESS)
+ return IMG_ERROR_UNEXPECTED_STATE;
+
+ local_arena->name = NULL;
+ if (name)
+ local_arena->name = kstrdup((const signed char *)name,
+ GFP_KERNEL);
+ if (import_alloc_fxn)
+ local_arena->import_alloc_fxn = import_alloc_fxn;
+ else
+ local_arena->import_alloc_fxn = ra_request_alloc_fail;
+
+ local_arena->import_free_fxn = import_free_fxn;
+ local_arena->import_hdnl = import_hdnl;
+
+ for (idx = 0; idx < FREE_TABLE_LIMIT; idx++)
+ local_arena->head_free[idx] = NULL;
+
+ local_arena->head_seg = NULL;
+ local_arena->tail_seg = NULL;
+ local_arena->quantum = quantum;
+
+ res = vid_hash_create(MINIMUM_HASH_SIZE,
+ &local_arena->hash_tbl);
+
+ if (res != IMG_SUCCESS || !local_arena->hash_tbl) {
+ kfree(local_arena->name);
+ local_arena->name = NULL;
+ pool_free(global_pool_arena, local_arena);
+ return IMG_ERROR_UNEXPECTED_STATE;
+ }
+
+ if (size_arg > 0ULL) {
+ size_arg = (size_arg + quantum - 1) / quantum * quantum;
+
+ res = ra_insert_resource(local_arena,
+ base_arg,
+ size_arg);
+ if (res != IMG_SUCCESS) {
+ vid_hash_delete(local_arena->hash_tbl);
+ kfree(local_arena->name);
+ local_arena->name = NULL;
+ pool_free(global_pool_arena, local_arena);
+ return IMG_ERROR_UNEXPECTED_STATE;
+ }
+ }
+ *(arena_hndl) = local_arena;
+ res = IMG_SUCCESS;
+ }
+
+ return res;
+}
+
+/*
+ * @Function vid_ra_delete
+ * @Description Used to delete a resource arena. All resources allocated from
+ * the arena must be freed before deleting the arena
+ * @Input arena_hndl: The handle to the arena to delete
+ * @Return IMG_SUCCESS or an error code
+ *
+ */
+int vid_ra_delete(void * const arena_hndl)
+{
+ int res = IMG_ERROR_FATAL;
+ struct arena *local_arena = NULL;
+ unsigned int idx;
+
+ if (!arena_hndl)
+ return IMG_ERROR_INVALID_PARAMETERS;
+
+ if (global_init) {
+ local_arena = (struct arena *)arena_hndl;
+ kfree(local_arena->name);
+ local_arena->name = NULL;
+ for (idx = 0; idx < FREE_TABLE_LIMIT; idx++)
+ local_arena->head_free[idx] = NULL;
+
+ while (local_arena->head_seg) {
+ struct btag *local_bt = local_arena->head_seg;
+
+ ra_segment_list_remove(local_arena, local_bt);
+ }
+ res = vid_hash_delete(local_arena->hash_tbl);
+ if (res != IMG_SUCCESS)
+ return IMG_ERROR_UNEXPECTED_STATE;
+
+ res = pool_free(global_pool_arena, local_arena);
+ if (res != IMG_SUCCESS)
+ return IMG_ERROR_UNEXPECTED_STATE;
+ }
+
+ return res;
+}
+
+/*
+ * @Function vid_ra_add
+ * @Description Used to add a resource span to an arena. The span must not
+ * overlap with any span previously added to the arena
+ * @Input arena_hndl: The handle to the arena to add the span to
+ * @Input base_arg: The base of the span
+ * @Input size_arg: The size of the span
+ * @Return IMG_SUCCESS or an error code
+ *
+ */
+int vid_ra_add(void * const arena_hndl, unsigned long long base_arg, unsigned long long size_arg)
+{
+ int res = IMG_ERROR_FATAL;
+ struct arena *local_arena = NULL;
+
+ if (!arena_hndl)
+ return IMG_ERROR_INVALID_PARAMETERS;
+
+ if (global_init) {
+ local_arena = (struct arena *)arena_hndl;
+ size_arg = (size_arg + local_arena->quantum - 1) /
+ local_arena->quantum * local_arena->quantum;
+
+ res = ra_insert_resource(local_arena, base_arg, size_arg);
+ if (res != IMG_SUCCESS)
+ return IMG_ERROR_INVALID_PARAMETERS;
+ }
+
+ return res;
+}
+
+/*
+ * @Function vid_ra_alloc
+ * @Description Used to allocate resource from an arena
+ * @Input arena_hndl: The handle to the arena to create the resource
+ * @Input request_size: The requested size of resource segment
+ * @Output actl_sz: The actual (quantum-rounded) size of the resource
+ * segment
+ * @Input ref: The user reference associated with allocated resource
+ * span
+ * @Input alloc_flags: Allocation flags influencing allocation policy
+ * @Input align_arg: The alignment constraint required for the allocated
+ * segment
+ * @Output base_arg: The base of the allocated resource
+ * @Return IMG_SUCCESS or an error code
+ *
+ */
+int vid_ra_alloc(void * const arena_hndl,
+ unsigned long long request_size,
+ unsigned long long * const actl_sz,
+ void ** const ref,
+ unsigned int alloc_flags,
+ unsigned long long alignarg,
+ unsigned long long * const basearg)
+{
+ int res = IMG_ERROR_FATAL;
+ struct arena *arn_ctx = NULL;
+ unsigned long long loc_size = request_size;
+
+ if (!arena_hndl)
+ return IMG_ERROR_INVALID_PARAMETERS;
+
+ if (global_init) {
+ arn_ctx = (struct arena *)arena_hndl;
+ loc_size = ((loc_size + arn_ctx->quantum - 1) /
+ arn_ctx->quantum) * arn_ctx->quantum;
+
+ if (actl_sz)
+ *actl_sz = loc_size;
+
+ /*
+ * Attempt the allocation. If it fails we may have an import source
+ * that can provide more resource; otherwise the allocation fails
+ * back to the caller.
+ */
+ if (alloc_flags == RA_SEQUENTIAL_ALLOCATION)
+ res = ra_attempt_alloc_aligned(arn_ctx,
+ loc_size,
+ ref,
+ alignarg,
+ basearg);
+
+ if (res != IMG_SUCCESS) {
+ void *import_ref = NULL;
+ unsigned long long import_base = 0ULL;
+ unsigned long long locimprt_reqsz = loc_size;
+ unsigned long long locimprt_actsz = 0ULL;
+
+ res = arn_ctx->import_alloc_fxn(arn_ctx->import_hdnl,
+ locimprt_reqsz,
+ &locimprt_actsz,
+ &import_ref,
+ alloc_flags,
+ &import_base);
+
+ if (res == IMG_SUCCESS) {
+ struct btag *local_bt =
+ ra_insert_resource_span(arn_ctx,
+ import_base,
+ locimprt_actsz);
+
+ /*
+ * Successfully import more resource, create a
+ * span to represent it and retry the allocation
+ * attempt
+ */
+ if (!local_bt) {
+ /*
+ * Insufficient resources to insert the
+ * newly acquired span, so free it back
+ */
+ arn_ctx->import_free_fxn(arn_ctx->import_hdnl,
+ import_base,
+ import_ref);
+ return IMG_ERROR_UNEXPECTED_STATE;
+ }
+ local_bt->ref = import_ref;
+ if (alloc_flags == RA_SEQUENTIAL_ALLOCATION) {
+ res = ra_attempt_alloc_aligned(arn_ctx,
+ loc_size,
+ ref,
+ alignarg,
+ basearg);
+ }
+ }
+ }
+ }
+
+ return res;
+}
+
+/*
+ * @Function vid_ra_free
+ * @Description Used to free a resource segment
+ * @Input arena_hndl: The arena the segment was originally allocated from
+ * @Input base_arg: The base of the span
+ * @Return IMG_SUCCESS or an error code
+ *
+ */
+int vid_ra_free(void * const arena_hndl, unsigned long long base_arg)
+{
+ int res = IMG_ERROR_FATAL;
+ struct arena *local_arena = NULL;
+ struct btag *local_bt = NULL;
+ unsigned long uip_res;
+
+ if (!arena_hndl)
+ return IMG_ERROR_INVALID_PARAMETERS;
+
+ if (global_init) {
+ local_arena = (struct arena *)arena_hndl;
+
+ res = vid_hash_remove(local_arena->hash_tbl,
+ base_arg,
+ &uip_res);
+ if (res != IMG_SUCCESS)
+ return res;
+ local_bt = (struct btag *)uip_res;
+
+ ra_free_bt(local_arena, local_bt);
+ }
+
+ return res;
+}
diff --git a/drivers/staging/media/vxd/common/ra.h b/drivers/staging/media/vxd/common/ra.h
new file mode 100644
index 000000000000..a4d529d635d7
--- /dev/null
+++ b/drivers/staging/media/vxd/common/ra.h
@@ -0,0 +1,200 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Implements generic resource allocation.
+ *
+ * Copyright (c) Imagination Technologies Ltd.
+ * Copyright (c) 2021 Texas Instruments Incorporated - http://www.ti.com/
+ *
+ * Authors:
+ * Lakshmi Sankar <[email protected]>
+ *
+ * Re-written for upstream
+ * Sidraya Jayagond <[email protected]>
+ */
+#ifndef _RA_H_
+#define _RA_H_
+
+#define MINIMUM_HASH_SIZE (64)
+#define FREE_TABLE_LIMIT (64)
+
+/* Defines whether sequential or random allocation is used */
+enum {
+ RA_SEQUENTIAL_ALLOCATION = 0,
+ RA_RANDOM_ALLOCATION,
+ RA_FORCE32BITS = 0x7FFFFFFFU
+};
+
+/* Defines boundary tag type */
+enum eboundary_tag_type {
+ RA_BOUNDARY_TAG_TYPE_SPAN = 0,
+ RA_BOUNDARY_TAG_TYPE_FREE,
+ RA_BOUNDARY_TAG_TYPE_LIVE,
+ RA_BOUNDARY_TAG_TYPE_MAX,
+ RA_BOUNDARY_FORCE32BITS = 0x7FFFFFFFU
+};
+
+/*
+ * @Description
+ * Boundary tags, used to describe a resource segment
+ *
+ * @enum0: span markers
+ * @enum1: free resource segment
+ * @enum2: allocated resource segment
+ * @enum3: max
+ * @base,size: The base resource of this segment and extent of this segment
+ * @nxt_seg, prv_seg: doubly linked ordered list of all segments
+ * within the arena
+ * @nxt_free, prv_free: doubly linked un-ordered list of free segments
+ * @ref: a user reference associated with this span; user
+ * references are currently only provided in
+ * the callback mechanism
+ */
+struct btag {
+ unsigned int bt_type;
+ unsigned long long base;
+ unsigned long long size;
+ struct btag *nxt_seg;
+ struct btag *prv_seg;
+ struct btag *nxt_free;
+ struct btag *prv_free;
+ void *ref;
+};
+
+/*
+ * @Description
+ * resource allocation arena
+ *
+ * @name: arena for diagnostics output
+ * @quantum: allocations within this arena are quantum sized
+ * @max_idx: index of the last position in the head_free table
+ * with available free space
+ * @import_alloc_fxn: import interface, if provided
+ * @import_free_fxn: import interface, if provided
+ * @import_hdnl: import interface, if provided
+ * @head_free: heads of the free boundary tag lists, indexed by log2
+ * of the boundary tag size (a power-of-two table of free lists)
+ * @head_seg, tail_seg : resource ordered segment list
+ * @hash_tbl: segment address to boundary tag hash table
+ */
+struct arena {
+ unsigned char *name;
+ unsigned long quantum;
+ unsigned int max_idx;
+ int (*import_alloc_fxn)(void *import_hdnl,
+ unsigned long long requested_size,
+ unsigned long long *actual_size,
+ void **ref,
+ unsigned int alloc_flags,
+ unsigned long long *base_addr);
+ int (*import_free_fxn)(void *import_hdnl,
+ unsigned long long base,
+ void *ref);
+ void *import_hdnl;
+ struct btag *head_free[FREE_TABLE_LIMIT];
+ struct btag *head_seg;
+ struct btag *tail_seg;
+ struct hash *hash_tbl;
+};
+
+/*
+ * @Function vid_ra_initialise
+ * @Description Initializes the RA module. Must be called before any other
+ * ra API function
+ * @Return IMG_SUCCESS or an error code
+ *
+ */
+int vid_ra_initialise(void);
+
+/*
+ * @Function vid_ra_deinit
+ * @Description Deinitializes the RA module
+ * @Return IMG_SUCCESS or an error code
+ *
+ */
+int vid_ra_deinit(void);
+
+/*
+ * @Function vid_ra_create
+ * @Description Used to create a resource arena.
+ * @Input name: The name of the arena for diagnostic purposes
+ * @Input base_arg: The base of an initial resource span or 0
+ * @Input size_arg: The size of an initial resource span or 0
+ * @Input quantum: The arena allocation quantum
+ * @Input (*import_alloc_fxn): A resource allocation callback or NULL
+ * @Input (*import_free_fxn): A resource de-allocation callback or NULL
+ * @Input import_hdnl: Handle passed to alloc and free or NULL
+ * @Output arena_hndl: The handle for the arena being created, or NULL
+ * @Return IMG_SUCCESS or an error code
+ *
+ */
+int vid_ra_create(const unsigned char * const name,
+ unsigned long long base_arg,
+ unsigned long long size_arg,
+ unsigned long quantum,
+ int (*import_alloc_fxn)(void * const import_hdnl,
+ unsigned long long req_sz,
+ unsigned long long * const actl_sz,
+ void ** const ref,
+ unsigned int alloc_flags,
+ unsigned long long * const base_arg),
+ int (*import_free_fxn)(void * const import_hdnl,
+ unsigned long long import_base,
+ void * const import_ref),
+ void *import_hdnl,
+ void **arena_hndl);
+
+/*
+ * @Function vid_ra_delete
+ * @Description Used to delete a resource arena. All resources allocated from
+ * the arena must be freed before deleting the arena
+ * @Input arena_hndl: The handle to the arena to delete
+ * @Return IMG_SUCCESS or an error code
+ *
+ */
+int vid_ra_delete(void * const arena_hndl);
+
+/*
+ * @Function vid_ra_add
+ * @Description Used to add a resource span to an arena. The span must not
+ * overlap with any span previously added to the arena
+ * @Input arena_hndl: The handle to the arena to add the span to
+ * @Input base_arg: The base of the span
+ * @Input size_arg: The size of the span
+ * @Return IMG_SUCCESS or an error code
+ *
+ */
+int vid_ra_add(void * const arena_hndl, unsigned long long base_arg, unsigned long long size_arg);
+
+/*
+ * @Function vid_ra_alloc
+ * @Description Used to allocate resource from an arena
+ * @Input arena_hndl: The handle to the arena to create the resource
+ * @Input request_size: The requested size of resource segment
+ * @Output actl_sz: The actual (quantum-rounded) size of the resource
+ * segment
+ * @Input ref: The user reference associated with allocated resource
+ * span
+ * @Input alloc_flags: Allocation flags influencing allocation policy
+ * @Input align_arg: The alignment constraint required for the allocated
+ * segment
+ * @Output base_arg: The base of the allocated resource
+ * @Return IMG_SUCCESS or an error code
+ *
+ */
+int vid_ra_alloc(void * const arena_hndl,
+ unsigned long long request_size,
+ unsigned long long * const actl_sz,
+ void ** const ref,
+ unsigned int alloc_flags,
+ unsigned long long align_arg,
+ unsigned long long * const base_arg);
+
+/*
+ * @Function vid_ra_free
+ * @Description Used to free a resource segment
+ * @Input arena_hndl: The arena the segment was originally allocated from
+ * @Input base_arg: The base of the span
+ * @Return IMG_SUCCESS or an error code
+ *
+ */
+int vid_ra_free(void * const arena_hndl, unsigned long long base_arg);
+
+#endif
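For review context, the following is a minimal usage sketch of the RA API declared above. It is illustrative only and not part of the patch: the arena name, span base/size, quantum and alignment values are made up, error handling is trimmed, and no import callbacks are registered (vid_ra_create() substitutes its internal failure handler in that case).

static int ra_usage_example(void)
{
	void *arena = NULL;
	unsigned long long base = 0ULL, actual = 0ULL;
	int ret;

	ret = vid_ra_initialise();
	if (ret != IMG_SUCCESS)
		return ret;

	/* Arena covering a 4 MB span with a 4 KB quantum, no import callbacks */
	ret = vid_ra_create((const unsigned char *)"example-arena",
			    0ULL, 0x400000ULL, 4096,
			    NULL, NULL, NULL, &arena);
	if (ret != IMG_SUCCESS)
		goto deinit;

	/* Allocate 64 KB, 4 KB aligned; no user reference is requested */
	ret = vid_ra_alloc(arena, 0x10000ULL, &actual, NULL,
			   RA_SEQUENTIAL_ALLOCATION, 4096ULL, &base);
	if (ret == IMG_SUCCESS)
		vid_ra_free(arena, base);

	vid_ra_delete(arena);
deinit:
	vid_ra_deinit();
	return ret;
}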
diff --git a/drivers/staging/media/vxd/common/talmmu_api.c b/drivers/staging/media/vxd/common/talmmu_api.c
new file mode 100644
index 000000000000..04ddcc33505c
--- /dev/null
+++ b/drivers/staging/media/vxd/common/talmmu_api.c
@@ -0,0 +1,753 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * TAL MMU Extensions.
+ *
+ * Copyright (c) Imagination Technologies Ltd.
+ * Copyright (c) 2021 Texas Instruments Incorporated - http://www.ti.com/
+ *
+ * Authors:
+ * Lakshmi Sankar <[email protected]>
+ *
+ * Re-written for upstream
+ * Sidraya Jayagond <[email protected]>
+ */
+#include <linux/slab.h>
+#include <linux/printk.h>
+#include <linux/mutex.h>
+#include <linux/dma-mapping.h>
+#include <media/v4l2-ctrls.h>
+#include <media/v4l2-device.h>
+#include <media/v4l2-mem2mem.h>
+
+#include "img_errors.h"
+#include "lst.h"
+#include "talmmu_api.h"
+
+static int global_init;
+static struct lst_t gl_dmtmpl_lst = {0};
+static struct mutex *global_lock;
+
+static int talmmu_devmem_free(void *mem_hndl)
+{
+ struct talmmu_memory *mem = mem_hndl;
+ struct talmmu_devmem_heap *mem_heap;
+
+ if (!mem_hndl)
+ return IMG_ERROR_INVALID_PARAMETERS;
+
+ mem_heap = mem->devmem_heap;
+
+ if (!mem->ext_dev_virtaddr)
+ addr_cx_free(&mem_heap->ctx, "", mem->dev_virtoffset);
+
+ mutex_lock_nested(global_lock, SUBCLASS_TALMMU);
+
+ lst_remove(&mem_heap->memory_list, mem);
+
+ mutex_unlock(global_lock);
+
+ kfree(mem);
+
+ return IMG_SUCCESS;
+}
+
+/*
+ * talmmu_devmem_heap_empty - empty the device memory heap list
+ * @devmem_heap_hndl: device memory heap handle
+ *
+ * This function is used for emptying the device memory heap list
+ */
+int talmmu_devmem_heap_empty(void *devmem_heap_hndl)
+{
+ struct talmmu_devmem_heap *devmem_heap = devmem_heap_hndl;
+
+ if (!devmem_heap)
+ return IMG_ERROR_INVALID_PARAMETERS;
+
+ while (!lst_empty(&devmem_heap->memory_list))
+ talmmu_devmem_free(lst_first(&devmem_heap->memory_list));
+
+ addr_cx_deinitialise(&devmem_heap->ctx);
+
+ return IMG_SUCCESS;
+}
+
+/*
+ * @Function talmmu_devmem_heap_destroy
+ *
+ * @Description This function is used for freeing the device memory heap
+ *
+ * @Input devmem_heap_hndl: device memory heap handle
+ *
+ * @Return None
+ *
+ */
+static void talmmu_devmem_heap_destroy(void *devmem_heap_hndl)
+{
+ struct talmmu_devmem_heap *devmem_heap = devmem_heap_hndl;
+
+ talmmu_devmem_heap_empty(devmem_heap_hndl);
+ kfree(devmem_heap);
+}
+
+/*
+ * @Function talmmu_init
+ *
+ * @Description This function is used to initialize the TALMMU component.
+ *
+ * @Input None.
+ *
+ * @Return IMG_SUCCESS or an error code
+ *
+ */
+int talmmu_init(void)
+{
+ if (!global_init) {
+ /* If no mutex associated with this resource */
+ if (!global_lock) {
+ /* Create one */
+ global_lock = kzalloc(sizeof(*global_lock), GFP_KERNEL);
+ if (!global_lock)
+ return IMG_ERROR_OUT_OF_MEMORY;
+
+ mutex_init(global_lock);
+ }
+
+ lst_init(&gl_dmtmpl_lst);
+ global_init = 1;
+ }
+
+ return IMG_SUCCESS;
+}
+
+/*
+ * @Function talmmu_deinit
+ *
+ * @Description This function is used to de-initialize the TALMMU component.
+ *
+ * @Input None.
+ *
+ * @Return IMG_SUCCESS or an error code
+ *
+ */
+int talmmu_deinit(void)
+{
+ struct talmmu_dm_tmpl *t;
+
+ if (global_init) {
+ while (!lst_empty(&gl_dmtmpl_lst)) {
+ t = (struct talmmu_dm_tmpl *)lst_first(&gl_dmtmpl_lst);
+ talmmu_devmem_template_destroy((void *)t);
+ }
+ mutex_destroy(global_lock);
+ kfree(global_lock);
+ global_lock = NULL;
+ global_init = 0;
+ }
+
+ return IMG_SUCCESS;
+}
+
+/*
+ * @Function talmmu_devmem_template_create
+ *
+ * @Description This function is used to create a device memory template
+ *
+ * @Input devmem_info: A pointer to a talmmu_devmem_info structure.
+ *
+ * @Output devmem_template_hndl: A pointer used to return the template
+ * handle
+ *
+ * @Return IMG_SUCCESS or an error code
+ *
+ */
+int talmmu_devmem_template_create(struct talmmu_devmem_info *devmem_info,
+ void **devmem_template_hndl)
+{
+ struct talmmu_dm_tmpl *devmem_template;
+
+ if (!devmem_info)
+ return IMG_ERROR_INVALID_PARAMETERS;
+
+ devmem_template = kzalloc(sizeof(*devmem_template), GFP_KERNEL);
+ if (!devmem_template)
+ return IMG_ERROR_OUT_OF_MEMORY;
+
+ devmem_template->devmem_info = *devmem_info;
+
+ lst_init(&devmem_template->devmem_ctx_list);
+
+ mutex_lock_nested(global_lock, SUBCLASS_TALMMU);
+
+ devmem_template->page_num_shift = 12;
+ devmem_template->byte_in_pagemask = 0xFFF;
+ devmem_template->heap_alignment = 0x400000;
+ devmem_template->pagetable_entries_perpage =
+ (devmem_template->devmem_info.page_size / sizeof(unsigned int));
+ devmem_template->pagetable_num_shift = 10;
+ devmem_template->index_in_pagetable_mask = 0x3FF;
+ devmem_template->pagedir_num_shift = 22;
+
+ lst_add(&gl_dmtmpl_lst, devmem_template);
+
+ mutex_unlock(global_lock);
+
+ *devmem_template_hndl = devmem_template;
+
+ return IMG_SUCCESS;
+}
+
+/*
+ * @Function talmmu_devmem_template_destroy
+ *
+ * @Description This function is used to obtain the template from the list and
+ * destroy
+ *
+ * @Input devmem_tmplt_hndl: Device memory template handle
+ *
+ * @Return IMG_SUCCESS or an error code
+ *
+ */
+int talmmu_devmem_template_destroy(void *devmem_tmplt_hndl)
+{
+ struct talmmu_dm_tmpl *dm_tmpl = devmem_tmplt_hndl;
+ unsigned int i;
+
+ if (!devmem_tmplt_hndl)
+ return IMG_ERROR_INVALID_PARAMETERS;
+
+ while (!lst_empty(&dm_tmpl->devmem_ctx_list))
+ talmmu_devmem_ctx_destroy(lst_first(&dm_tmpl->devmem_ctx_list));
+
+ for (i = 0; i < dm_tmpl->num_heaps; i++)
+ talmmu_devmem_heap_destroy(dm_tmpl->devmem_heap[i]);
+
+ mutex_lock_nested(global_lock, SUBCLASS_TALMMU);
+
+ lst_remove(&gl_dmtmpl_lst, dm_tmpl);
+
+ mutex_unlock(global_lock);
+
+ kfree(dm_tmpl);
+
+ return IMG_SUCCESS;
+}
+
+/*
+ * @Function talmmu_create_heap
+ *
+ * @Description This function is used to create a device memory heap
+ *
+ * @Input
+ *
+ * @Output
+ *
+ * @Return IMG_SUCCESS or an error code
+ *
+ */
+static int talmmu_create_heap(void *devmem_tmplt_hndl,
+ struct talmmu_heap_info *heap_info_arg,
+ unsigned char isfull,
+ struct talmmu_devmem_heap **devmem_heap_arg)
+{
+ struct talmmu_dm_tmpl *devmem_template = devmem_tmplt_hndl;
+ struct talmmu_devmem_heap *devmem_heap;
+
+ /* Allocating memory for device memory heap */
+ devmem_heap = kzalloc(sizeof(*devmem_heap), GFP_KERNEL);
+ if (!devmem_heap)
+ return IMG_ERROR_OUT_OF_MEMORY;
+
+ /*
+ * Update the device memory heap structure members
+ * Update the device memory template
+ */
+ devmem_heap->devmem_template = devmem_template;
+ /* Update the device memory heap information */
+ devmem_heap->heap_info = *heap_info_arg;
+
+ /* Initialize the device memory heap list */
+ lst_init(&devmem_heap->memory_list);
+
+ /* If full structure required */
+ if (isfull) {
+ addr_cx_initialise(&devmem_heap->ctx);
+ devmem_heap->regions.base_addr = 0;
+ devmem_heap->regions.size = devmem_heap->heap_info.size;
+ addr_cx_define_mem_region(&devmem_heap->ctx,
+ &devmem_heap->regions);
+ }
+
+ *devmem_heap_arg = devmem_heap;
+
+ return IMG_SUCCESS;
+}
+
+/*
+ * @Function talmmu_devmem_heap_add
+ *
+ * @Description This function is for creating and adding the heap to the
+ * device memory template
+ *
+ * @Input devmem_tmplt_hndl: device memory template handle
+ *
+ * @Input heap_info_arg: pointer to the heap info structure
+ *
+ * @Return IMG_SUCCESS or an error code
+ *
+ */
+int talmmu_devmem_heap_add(void *devmem_tmplt_hndl,
+ struct talmmu_heap_info *heap_info_arg)
+{
+ struct talmmu_dm_tmpl *devmem_template = devmem_tmplt_hndl;
+ struct talmmu_devmem_heap *devmem_heap;
+ unsigned int res;
+
+ if (!devmem_tmplt_hndl)
+ return IMG_ERROR_INVALID_PARAMETERS;
+
+ if (!heap_info_arg)
+ return IMG_ERROR_INVALID_PARAMETERS;
+
+ res = talmmu_create_heap(devmem_tmplt_hndl,
+ heap_info_arg,
+ 1,
+ &devmem_heap);
+ if (res != IMG_SUCCESS)
+ return res;
+
+ devmem_template->devmem_heap[devmem_template->num_heaps] = devmem_heap;
+ devmem_template->num_heaps++;
+
+ return IMG_SUCCESS;
+}
+
+/*
+ * @Function talmmu_devmem_ctx_create
+ *
+ * @Description This function is used to create a device memory context
+ *
+ * @Input devmem_tmplt_hndl: pointer to the device memory template handle
+ *
+ * @Input mmu_ctx_id: MMU context ID used with the TAL
+ *
+ * @Output devmem_ctx_hndl: pointer to the device memory context handle
+ *
+ * @Return IMG_SUCCESS or an error code
+ *
+ */
+int talmmu_devmem_ctx_create(void *devmem_tmplt_hndl,
+ unsigned int mmu_ctx_id,
+ void **devmem_ctx_hndl)
+{
+ struct talmmu_dm_tmpl *dm_tmpl = devmem_tmplt_hndl;
+ struct talmmu_devmem_ctx *dm_ctx;
+ struct talmmu_devmem_heap *dm_heap;
+ int i;
+ unsigned int res = IMG_SUCCESS;
+
+ if (!devmem_tmplt_hndl)
+ return IMG_ERROR_INVALID_PARAMETERS;
+
+ /* Allocate memory for device memory context */
+ dm_ctx = kzalloc((sizeof(struct talmmu_devmem_ctx)), GFP_KERNEL);
+ if (!dm_ctx)
+ return IMG_ERROR_OUT_OF_MEMORY;
+
+ /*
+ * Update the device memory context structure members
+ * Update the device memory template
+ */
+ dm_ctx->devmem_template = dm_tmpl;
+ /* Update MMU context ID */
+ dm_ctx->mmu_ctx_id = mmu_ctx_id;
+
+ /*
+ * If no PTD alignment was specified, default it to the page size
+ * (the alignment must be a multiple of the page size).
+ */
+ if (dm_tmpl->devmem_info.ptd_alignment == 0)
+ dm_tmpl->devmem_info.ptd_alignment =
+ dm_tmpl->devmem_info.page_size;
+
+ /* Reference or create heaps for this context */
+ for (i = 0; i < dm_tmpl->num_heaps; i++) {
+ dm_heap = dm_tmpl->devmem_heap[i];
+ if (!dm_heap)
+ goto error_heap_create;
+
+ switch (dm_heap->heap_info.heap_type) {
+ case TALMMU_HEAP_PERCONTEXT:
+ res = talmmu_create_heap(dm_tmpl,
+ &dm_heap->heap_info,
+ 1,
+ &dm_ctx->devmem_heap[i]);
+ if (res != IMG_SUCCESS)
+ goto error_heap_create;
+ break;
+
+ default:
+ break;
+ }
+
+ dm_ctx->num_heaps++;
+ }
+
+ mutex_lock_nested(global_lock, SUBCLASS_TALMMU);
+
+ /* Add the device memory context to the list */
+ lst_add(&dm_tmpl->devmem_ctx_list, dm_ctx);
+
+ dm_tmpl->num_ctxs++;
+
+ mutex_unlock(global_lock);
+
+ *devmem_ctx_hndl = dm_ctx;
+
+ return IMG_SUCCESS;
+
+error_heap_create:
+ /* Destroy the device memory heaps which were already created */
+ for (i--; i >= 0; i--) {
+ dm_heap = dm_ctx->devmem_heap[i];
+ if (dm_heap && dm_heap->heap_info.heap_type == TALMMU_HEAP_PERCONTEXT)
+ talmmu_devmem_heap_destroy(dm_heap);
+
+ dm_ctx->num_heaps--;
+ }
+ kfree(dm_ctx);
+ return res;
+}
+
+/*
+ * @Function talmmu_devmem_ctx_destroy
+ *
+ * @Description This function is used to get the device memory context from
+ * the list and destroy
+ *
+ * @Input devmem_ctx_hndl: device memory context handle
+ *
+ * @Return IMG_SUCCESS or an error code
+ *
+ */
+int talmmu_devmem_ctx_destroy(void *devmem_ctx_hndl)
+{
+ struct talmmu_devmem_ctx *devmem_ctx = devmem_ctx_hndl;
+ struct talmmu_dm_tmpl *devmem_template;
+ struct talmmu_devmem_heap *devmem_heap;
+ unsigned int i;
+
+ if (!devmem_ctx_hndl)
+ return IMG_ERROR_INVALID_PARAMETERS;
+
+ devmem_template = devmem_ctx->devmem_template;
+
+ for (i = 0; i < devmem_ctx->num_heaps; i++) {
+ devmem_heap = devmem_ctx->devmem_heap[i];
+ if (!devmem_heap)
+ return IMG_ERROR_INVALID_PARAMETERS;
+
+ talmmu_devmem_heap_destroy(devmem_heap);
+ }
+
+ devmem_ctx->pagedir = NULL;
+
+ mutex_lock_nested(global_lock, SUBCLASS_TALMMU);
+
+ lst_remove(&devmem_template->devmem_ctx_list, devmem_ctx);
+
+ devmem_ctx->devmem_template->num_ctxs--;
+
+ mutex_unlock(global_lock);
+
+ kfree(devmem_ctx);
+
+ return IMG_SUCCESS;
+}
+
+/*
+ * @Function talmmu_get_heap_handle
+ *
+ * @Description This function is used to get the device memory heap handle
+ *
+ * @Input hid: heap id
+ *
+ * @Input devmem_ctx_hndl: device memory context handle
+ *
+ * @Output devmem_heap_hndl: pointer to the device memory heap handle
+ *
+ * @Return IMG_SUCCESS or an error code
+ *
+ */
+int talmmu_get_heap_handle(unsigned int hid,
+ void *devmem_ctx_hndl,
+ void **devmem_heap_hndl)
+{
+ struct talmmu_devmem_ctx *devmem_ctx = devmem_ctx_hndl;
+ unsigned int i;
+
+ if (!devmem_ctx_hndl)
+ return IMG_ERROR_INVALID_PARAMETERS;
+
+ for (i = 0; i < devmem_ctx->num_heaps; i++) {
+ /*
+ * Checking for requested heap id match and return the device
+ * memory heap handle
+ */
+ if (devmem_ctx->devmem_heap[i]->heap_info.heap_id == hid) {
+ *devmem_heap_hndl = devmem_ctx->devmem_heap[i];
+ return IMG_SUCCESS;
+ }
+ }
+
+ return IMG_ERROR_GENERIC_FAILURE;
+}
+
+/*
+ * @Function talmmu_devmem_heap_options
+ *
+ * @Description This function is used to set additional heap options
+ *
+ * @Input devmem_heap_hndl: Handle for heap
+ *
+ * @Input heap_opt_id: Heap options ID
+ *
+ * @Input heap_options: Heap options
+ *
+ * @Return IMG_SUCCESS or an error code
+ *
+ */
+void talmmu_devmem_heap_options(void *devmem_heap_hndl,
+ enum talmmu_heap_option_id heap_opt_id,
+ union talmmu_heap_options heap_options)
+{
+ struct talmmu_devmem_heap *dm_heap = devmem_heap_hndl;
+
+ switch (heap_opt_id) {
+ case TALMMU_HEAP_OPT_ADD_GUARD_BAND:
+ dm_heap->guardband = heap_options.guardband_opt.guardband;
+ break;
+ default:
+ break;
+ }
+}
+
+/*
+ * @Function talmmu_devmem_alloc_nonmap
+ *
+ * @Description Allocates device virtual address space from the given heap
+ * (no host memory is mapped)
+ *
+ * @Input devmem_ctx_hndl: device memory context handle
+ * @Input devmem_heap_hndl: device memory heap handle
+ * @Input size: size of the allocation in bytes
+ * @Input align: required alignment in bytes
+ * @Input ext_dev_vaddr: true if the device virtual address is allocated
+ * externally by the caller
+ *
+ * @Output mem_hndl: pointer used to return the memory handle
+ *
+ * @Return IMG_SUCCESS or an error code
+ *
+ */
+static int talmmu_devmem_alloc_nonmap(void *devmem_ctx_hndl,
+ void *devmem_heap_hndl,
+ unsigned int size,
+ unsigned int align,
+ unsigned int dev_virt_ofset,
+ unsigned char ext_dev_vaddr,
+ void **mem_hndl)
+{
+ struct talmmu_devmem_ctx *dm_ctx = devmem_ctx_hndl;
+ struct talmmu_dm_tmpl *dm_tmpl;
+ struct talmmu_devmem_heap *dm_heap = devmem_heap_hndl;
+ struct talmmu_memory *mem;
+ unsigned long long ui64_dev_offset = 0;
+ int res = IMG_SUCCESS;
+
+ if (!dm_ctx)
+ return IMG_ERROR_INVALID_PARAMETERS;
+
+ if (!devmem_heap_hndl)
+ return IMG_ERROR_INVALID_PARAMETERS;
+
+ dm_tmpl = dm_ctx->devmem_template;
+
+ /* Allocate memory for memory structure */
+ mem = kzalloc((sizeof(struct talmmu_memory)), GFP_KERNEL);
+ if (!mem)
+ return IMG_ERROR_OUT_OF_MEMORY;
+
+ mem->devmem_heap = dm_heap;
+ mem->devmem_ctx = dm_ctx;
+ mem->ext_dev_virtaddr = ext_dev_vaddr;
+
+ /* We always align to at least the page size */
+ if (align >= dm_tmpl->devmem_info.page_size)
+ /*
+ * alignment is larger than page size - make sure alignment is
+ * a multiple of page size
+ */
+ mem->alignment = align;
+ else
+ /*
+ * alignment is smaller than page size - make sure page size is
+ * a multiple of alignment. Now round up alignment to one page
+ */
+ mem->alignment = dm_tmpl->devmem_info.page_size;
+
+ /* Round size up to next multiple of physical pages */
+ if ((size % dm_tmpl->devmem_info.page_size) != 0)
+ mem->size = ((size / dm_tmpl->devmem_info.page_size)
+ + 1) * dm_tmpl->devmem_info.page_size;
+ else
+ mem->size = size;
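+ /*
+ * For example (illustrative): with a 4 KB page size, a request of
+ * 6000 bytes is rounded up to 8192 bytes (two pages).
+ */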
+
+ /* If the device virtual address was externally defined */
+ if (mem->ext_dev_virtaddr) {
+ res = IMG_ERROR_INVALID_PARAMETERS;
+ goto free_mem;
+ }
+
+ res = addr_cx_malloc_align_res(&dm_heap->ctx, "",
+ (mem->size + dm_heap->guardband),
+ mem->alignment,
+ &ui64_dev_offset);
+
+ mem->dev_virtoffset = (unsigned int)ui64_dev_offset;
+ if (res != IMG_SUCCESS)
+ /*
+ * If heap space is unavailable, return the error; the caller must
+ * handle this condition
+ */
+ goto free_mem;
+
+ mutex_lock_nested(global_lock, SUBCLASS_TALMMU);
+
+ /*
+ * Add memory allocation to the list for this heap...
+ * If the heap is empty...
+ */
+ if (lst_empty(&dm_heap->memory_list))
+ /*
+ * Save flag to indicate whether the device virtual address
+ * is allocated internally or externally...
+ */
+ dm_heap->ext_dev_virtaddr = mem->ext_dev_virtaddr;
+
+ /*
+ * Once we have started allocating in one way ensure that we continue
+ * to do this...
+ */
+ lst_add(&dm_heap->memory_list, mem);
+
+ mutex_unlock(global_lock);
+
+ *mem_hndl = mem;
+
+ return IMG_SUCCESS;
+
+free_mem:
+ kfree(mem);
+
+ return res;
+}
+
+/*
+ * @Function talmmu_devmem_addr_alloc
+ *
+ * @Description Allocates device virtual address space from a heap by
+ * calling talmmu_devmem_alloc_nonmap()
+ *
+ * @Input devmem_ctx_hndl: device memory context handle
+ * @Input devmem_heap_hndl: device memory heap handle
+ * @Input size: size of the allocation in bytes
+ * @Input align: required alignment in bytes
+ *
+ * @Output mem_hndl: pointer used to return the memory handle
+ *
+ * @Return IMG_SUCCESS or an error code
+ *
+ */
+int talmmu_devmem_addr_alloc(void *devmem_ctx_hndl,
+ void *devmem_heap_hndl,
+ unsigned int size,
+ unsigned int align,
+ void **mem_hndl)
+{
+ unsigned int res;
+ void *mem;
+
+ res = talmmu_devmem_alloc_nonmap(devmem_ctx_hndl,
+ devmem_heap_hndl,
+ size,
+ align,
+ 0,
+ 0,
+ &mem);
+ if (res != IMG_SUCCESS)
+ return res;
+
+ *mem_hndl = mem;
+
+ return IMG_SUCCESS;
+}
+
+/*
+ * @Function talmmu_devmem_addr_free
+ *
+ * @Description This function is used to free device memory allocated using
+ * talmmu_devmem_addr_alloc().
+ *
+ * @Input mem_hndl : Handle for the memory object
+ *
+ * @Return IMG_SUCCESS or an error code
+ *
+ */
+int talmmu_devmem_addr_free(void *mem_hndl)
+{
+ unsigned int res;
+
+ if (!mem_hndl)
+ return IMG_ERROR_INVALID_PARAMETERS;
+
+ /* free device memory allocated by calling talmmu_devmem_free() */
+ res = talmmu_devmem_free(mem_hndl);
+
+ return res;
+}
+
+/*
+ * @Function talmmu_get_dev_virt_addr
+ *
+ * @Description This function is used to obtain the device (virtual) memory
+ * address, which may be required as a device virtual address
+ * in some of the TAL image functions
+ *
+ * @Input mem_hndl : Handle for the memory object
+ *
+ * @Output dev_virt: A pointer used to return the device virtual address
+ *
+ * @Return IMG_SUCCESS or an error code
+ *
+ */
+int talmmu_get_dev_virt_addr(void *mem_hndl,
+ unsigned int *dev_virt)
+{
+ struct talmmu_memory *mem = mem_hndl;
+ struct talmmu_devmem_heap *devmem_heap;
+
+ if (!mem_hndl)
+ return IMG_ERROR_INVALID_PARAMETERS;
+
+ devmem_heap = mem->devmem_heap;
+
+ /*
+ * Device virtual address is addition of the specific device virtual
+ * offset and the base device virtual address from the heap information
+ */
+ *dev_virt = (devmem_heap->heap_info.basedev_virtaddr +
+ mem->dev_virtoffset);
+
+ return IMG_SUCCESS;
+}
diff --git a/drivers/staging/media/vxd/common/talmmu_api.h b/drivers/staging/media/vxd/common/talmmu_api.h
new file mode 100644
index 000000000000..f37f78394d54
--- /dev/null
+++ b/drivers/staging/media/vxd/common/talmmu_api.h
@@ -0,0 +1,246 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * TAL MMU Extensions.
+ *
+ * Copyright (c) Imagination Technologies Ltd.
+ * Copyright (c) 2021 Texas Instruments Incorporated - http://www.ti.com/
+ *
+ * Authors:
+ * Lakshmi Sankar <[email protected]>
+ *
+ * Re-written for upstream
+ * Sidraya Jayagond <[email protected]>
+ */
+#include "addr_alloc.h"
+#include "ra.h"
+#include "lst.h"
+
+#ifndef __TALMMU_API_H__
+#define __TALMMU_API_H__
+
+#define TALMMU_MAX_DEVICE_HEAPS (32)
+#define TALMMU_MAX_TEMPLATES (32)
+
+/* MMU type */
+enum talmmu_mmu_type {
+ /* 4kb pages and 32-bit address range */
+ TALMMU_MMUTYPE_4K_PAGES_32BIT_ADDR = 0x1,
+ /* variable size pages and 32-bit address */
+ TALMMU_MMUTYPE_VAR_PAGES_32BIT_ADDR,
+ /* 4kb pages and 36-bit address range */
+ TALMMU_MMUTYPE_4K_PAGES_36BIT_ADDR,
+ /* 4kb pages and 40-bit address range */
+ TALMMU_MMUTYPE_4K_PAGES_40BIT_ADDR,
+ /* variable size pages and 40-bit address range */
+ TALMMU_MMUTYPE_VP_40BIT,
+ TALMMU_MMUTYPE_FORCE32BITS = 0x7FFFFFFFU
+};
+
+/* Device flags */
+enum talmmu_dev_flags {
+ TALMMU_DEVFLAGS_NONE = 0x0,
+ TALMMU_DEVFLAGS_FORCE32BITS = 0x7FFFFFFFU
+};
+
+/* Heap type */
+enum talmmu_heap_type {
+ TALMMU_HEAP_SHARED_EXPORTED,
+ TALMMU_HEAP_PERCONTEXT,
+ TALMMU_HEAP_FORCE32BITS = 0x7FFFFFFFU
+};
+
+/* Heap flags */
+enum talmmu_eheapflags {
+ TALMMU_HEAPFLAGS_NONE = 0x0,
+ TALMMU_HEAPFLAGS_SET_CACHE_CONSISTENCY = 0x00000001,
+ TALMMU_HEAPFLAGS_128BYTE_INTERLEAVE = 0x00000002,
+ TALMMU_HEAPFLAGS_256BYTE_INTERLEAVE = 0x00000004,
+ TALMMU_HEAPFLAGS_FORCE32BITS = 0x7FFFFFFFU
+};
+
+/* Contains the device memory information */
+struct talmmu_devmem_info {
+ /* device id */
+ unsigned int device_id;
+ /* mmu type */
+ enum talmmu_mmu_type mmu_type;
+ /* Device flags - bit flags that can be combined */
+ enum talmmu_dev_flags dev_flags;
+ /* Name of the memory space for page directory allocations */
+ unsigned char *pagedir_memspace_name;
+ /* Name of the memory space for page table allocations */
+ unsigned char *pagetable_memspace_name;
+ /* Page size in bytes */
+ unsigned int page_size;
+ /* PTD alignment, must be multiple of Page size */
+ unsigned int ptd_alignment;
+};
+
+struct talmmu_heap_info {
+ /* heap id */
+ unsigned int heap_id;
+ /* heap type */
+ enum talmmu_heap_type heap_type;
+ /* heap flags - bit flags that can be combined */
+ enum talmmu_eheapflags heap_flags;
+ /* Name of the memory space for memory allocations */
+ unsigned char *memspace_name;
+ /* Base device virtual address */
+ unsigned int basedev_virtaddr;
+ /* size in bytes */
+ unsigned int size;
+};
+
+/* Device memory template information */
+struct talmmu_dm_tmpl {
+ /* list */
+ struct lst_t list;
+ /* Copy of device memory info structure */
+ struct talmmu_devmem_info devmem_info;
+ /* Memory space ID for PTD allocations */
+ void *ptd_memspace_hndl;
+ /* Memory space ID for Page Table allocations */
+ void *ptentry_memspace_hndl;
+ /* number of heaps */
+ unsigned int num_heaps;
+ /* Array of heap pointers */
+ struct talmmu_devmem_heap *devmem_heap[TALMMU_MAX_DEVICE_HEAPS];
+ /* Number of active contexts */
+ unsigned int num_ctxs;
+ /* List of device memory context created from this template */
+ struct lst_t devmem_ctx_list;
+ /* Number of bits to shift right to obtain page number */
+ unsigned int page_num_shift;
+ /* Mask to extract byte-within-page */
+ unsigned int byte_in_pagemask;
+ /* Heap alignment */
+ unsigned int heap_alignment;
+ /* Page table entries/page */
+ unsigned int pagetable_entries_perpage;
+ /* Number of bits to shift right to obtain page table number */
+ unsigned int pagetable_num_shift;
+ /* Mask to extract index-within-page-table */
+ unsigned int index_in_pagetable_mask;
+ /* Number of bits to shift right to obtain page dir number */
+ unsigned int pagedir_num_shift;
+};
+
+/* Device memory heap information */
+struct talmmu_devmem_heap {
+ /* list item */
+ struct lst_t list;
+ /* Copy of the heap info structure */
+ struct talmmu_heap_info heap_info;
+ /* Pointer to the device memory template */
+ struct talmmu_dm_tmpl *devmem_template;
+ /* true if device virtual address offset allocated externally by user */
+ unsigned int ext_dev_virtaddr;
+ /* list of memory allocations */
+ struct lst_t memory_list;
+ /* Memory space ID for memory allocations */
+ void *memspace_hndl;
+ /* Address context structure */
+ struct addr_context ctx;
+ /* Regions structure */
+ struct addr_region regions;
+ /* size of heap guard band */
+ unsigned int guardband;
+};
+
+struct talmmu_devmem_ctx {
+ /* list item */
+ struct lst_t list;
+ /* Pointer to device template */
+ struct talmmu_dm_tmpl *devmem_template;
+ /* No. of heaps */
+ unsigned int num_heaps;
+ /* Array of heap pointers */
+ struct talmmu_devmem_heap *devmem_heap[TALMMU_MAX_DEVICE_HEAPS];
+ /* The MMU context id */
+ unsigned int mmu_ctx_id;
+ /* Pointer to the memory that represents Page directory */
+ unsigned int *pagedir;
+};
+
+struct talmmu_memory {
+ /* list item */
+ struct lst_t list;
+ /* Heap from which memory was allocated */
+ struct talmmu_devmem_heap *devmem_heap;
+ /* Context through which memory was allocated */
+ struct talmmu_devmem_ctx *devmem_ctx;
+ /* size */
+ unsigned int size;
+ /* alignment */
+ unsigned int alignment;
+ /* device virtual offset of allocation */
+ unsigned int dev_virtoffset;
+ /* true if device virtual address offset allocated externally by user */
+ unsigned int ext_dev_virtaddr;
+};
+
+/* This type defines the event types for the TALMMU callbacks */
+enum talmmu_event {
+ /* Function to flush the cache. */
+ TALMMU_EVENT_FLUSH_CACHE,
+ /*! Function to write the page directory address to the device */
+ TALMMU_EVENT_WRITE_PAGE_DIRECTORY_REF,
+ /* Placeholder*/
+ TALMMU_NO_OF_EVENTS
+};
+
+enum talmmu_heap_option_id {
+ /* Add guard band to all mallocs */
+ TALMMU_HEAP_OPT_ADD_GUARD_BAND,
+ TALMMU_HEAP_OPT_SET_MEM_ATTRIB,
+ TALMMU_HEAP_OPT_SET_MEM_POOL,
+
+ /* Placeholder */
+ TALMMU_NO_OF_OPTIONS,
+ TALMMU_NO_OF_FORCE32BITS = 0x7FFFFFFFU
+};
+
+struct talmmu_guardband_options {
+ unsigned int guardband;
+};
+
+union talmmu_heap_options {
+ /* Guardband parameters */
+ struct talmmu_guardband_options guardband_opt;
+};
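+/*
+ * Illustrative use of the heap options (the guard band value is made up):
+ *
+ *	union talmmu_heap_options opt = {
+ *		.guardband_opt.guardband = 0x1000,
+ *	};
+ *
+ *	talmmu_devmem_heap_options(heap_hndl, TALMMU_HEAP_OPT_ADD_GUARD_BAND, opt);
+ */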
+
+int talmmu_init(void);
+int talmmu_deinit(void);
+int talmmu_devmem_template_create(struct talmmu_devmem_info *devmem_info,
+ void **devmem_template_hndl);
+int talmmu_devmem_heap_add(void *devmem_tmplt_hndl,
+ struct talmmu_heap_info *heap_info_arg);
+int talmmu_devmem_template_destroy(void *devmem_tmplt_hndl);
+int talmmu_devmem_ctx_create(void *devmem_tmplt_hndl,
+ unsigned int mmu_ctx_id,
+ void **devmem_ctx_hndl);
+int talmmu_devmem_ctx_destroy(void *devmem_ctx_hndl);
+int talmmu_get_heap_handle(unsigned int hid,
+ void *devmem_ctx_hndl,
+ void **devmem_heap_hndl);
+/**
+ * talmmu_devmem_heap_empty - empty the device memory heap list
+ * @devmem_heap_hndl: device memory heap handle
+ *
+ * This function is used for emptying the device memory heap list
+ */
+int talmmu_devmem_heap_empty(void *devmem_heap_hndl);
+void talmmu_devmem_heap_options(void *devmem_heap_hndl,
+ enum talmmu_heap_option_id heap_opt_id,
+ union talmmu_heap_options heap_options);
+int talmmu_devmem_addr_alloc(void *devmem_ctx_hndl,
+ void *devmem_heap_hndl,
+ unsigned int size,
+ unsigned int align,
+ void **mem_hndl);
+int talmmu_devmem_addr_free(void *mem_hndl);
+int talmmu_get_dev_virt_addr(void *mem_hndl,
+ unsigned int *dev_virt);
+
+#endif /* __TALMMU_API_H__ */
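For review context, a minimal lifecycle sketch of the TALMMU API above (illustrative only, not part of the patch; it assumes talmmu_api.h is included, the device/heap ids, sizes and memory space names are made up, and error handling is trimmed):

static int talmmu_usage_example(void)
{
	struct talmmu_devmem_info dev_info = {
		.device_id = 0,
		.mmu_type = TALMMU_MMUTYPE_4K_PAGES_32BIT_ADDR,
		.dev_flags = TALMMU_DEVFLAGS_NONE,
		.pagedir_memspace_name = (unsigned char *)"MEM",   /* made-up name */
		.pagetable_memspace_name = (unsigned char *)"MEM", /* made-up name */
		.page_size = 4096,
		.ptd_alignment = 0,	/* defaulted to page_size by the API */
	};
	struct talmmu_heap_info heap_info = {
		.heap_id = 0,				/* made-up id */
		.heap_type = TALMMU_HEAP_PERCONTEXT,
		.heap_flags = TALMMU_HEAPFLAGS_NONE,
		.memspace_name = (unsigned char *)"MEM",	/* made-up name */
		.basedev_virtaddr = 0x00400000,		/* made-up base */
		.size = 0x00400000,			/* 4 MB heap */
	};
	void *tmpl = NULL, *ctx = NULL, *heap = NULL, *mem = NULL;
	unsigned int dev_virt = 0;
	int ret;

	talmmu_init();
	ret = talmmu_devmem_template_create(&dev_info, &tmpl);
	if (ret != IMG_SUCCESS)
		goto deinit;

	talmmu_devmem_heap_add(tmpl, &heap_info);
	talmmu_devmem_ctx_create(tmpl, 0 /* MMU context id */, &ctx);
	talmmu_get_heap_handle(heap_info.heap_id, ctx, &heap);

	/* Reserve 64 KB of device virtual space, page aligned */
	ret = talmmu_devmem_addr_alloc(ctx, heap, 0x10000, 4096, &mem);
	if (ret == IMG_SUCCESS) {
		talmmu_get_dev_virt_addr(mem, &dev_virt);
		talmmu_devmem_addr_free(mem);
	}

	talmmu_devmem_ctx_destroy(ctx);
	talmmu_devmem_template_destroy(tmpl);
deinit:
	talmmu_deinit();
	return ret;
}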
--
2.17.1
From: Sidraya <[email protected]>
This patch includes all the common headers related to the H264, HEVC and
MJPEG firmware data, the error defines and the pixel formats.
Signed-off-by: Amit Makani <[email protected]>
Signed-off-by: Sidraya <[email protected]>
---
MAINTAINERS | 14 +
drivers/staging/media/vxd/common/img_errors.h | 104 +++
drivers/staging/media/vxd/common/img_mem.h | 43 +
.../staging/media/vxd/decoder/fw_interface.h | 818 ++++++++++++++++++
.../staging/media/vxd/decoder/h264fw_data.h | 652 ++++++++++++++
.../staging/media/vxd/decoder/hevcfw_data.h | 472 ++++++++++
.../staging/media/vxd/decoder/img_pixfmts.h | 195 +++++
.../media/vxd/decoder/img_profiles_levels.h | 33 +
.../staging/media/vxd/decoder/jpegfw_data.h | 83 ++
drivers/staging/media/vxd/decoder/mmu_defs.h | 42 +
.../staging/media/vxd/decoder/scaler_setup.h | 59 ++
drivers/staging/media/vxd/decoder/vdec_defs.h | 548 ++++++++++++
drivers/staging/media/vxd/decoder/vxd_ext.h | 74 ++
.../staging/media/vxd/decoder/vxd_mmu_defs.h | 30 +
drivers/staging/media/vxd/decoder/vxd_props.h | 80 ++
15 files changed, 3247 insertions(+)
create mode 100644 drivers/staging/media/vxd/common/img_errors.h
create mode 100644 drivers/staging/media/vxd/common/img_mem.h
create mode 100644 drivers/staging/media/vxd/decoder/fw_interface.h
create mode 100644 drivers/staging/media/vxd/decoder/h264fw_data.h
create mode 100644 drivers/staging/media/vxd/decoder/hevcfw_data.h
create mode 100644 drivers/staging/media/vxd/decoder/img_pixfmts.h
create mode 100644 drivers/staging/media/vxd/decoder/img_profiles_levels.h
create mode 100644 drivers/staging/media/vxd/decoder/jpegfw_data.h
create mode 100644 drivers/staging/media/vxd/decoder/mmu_defs.h
create mode 100644 drivers/staging/media/vxd/decoder/scaler_setup.h
create mode 100644 drivers/staging/media/vxd/decoder/vdec_defs.h
create mode 100644 drivers/staging/media/vxd/decoder/vxd_ext.h
create mode 100644 drivers/staging/media/vxd/decoder/vxd_mmu_defs.h
create mode 100644 drivers/staging/media/vxd/decoder/vxd_props.h
diff --git a/MAINTAINERS b/MAINTAINERS
index baf1f19e21f7..c7a6a0974415 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -19545,6 +19545,8 @@ F: drivers/staging/media/vxd/common/hash.c
F: drivers/staging/media/vxd/common/hash.h
F: drivers/staging/media/vxd/common/idgen_api.c
F: drivers/staging/media/vxd/common/idgen_api.h
+F: drivers/staging/media/vxd/common/img_errors.h
+F: drivers/staging/media/vxd/common/img_mem.h
F: drivers/staging/media/vxd/common/img_mem_man.c
F: drivers/staging/media/vxd/common/img_mem_man.h
F: drivers/staging/media/vxd/common/img_mem_unified.c
@@ -19563,26 +19565,38 @@ F: drivers/staging/media/vxd/common/work_queue.h
F: drivers/staging/media/vxd/decoder/bspp.c
F: drivers/staging/media/vxd/decoder/bspp.h
F: drivers/staging/media/vxd/decoder/bspp_int.h
+F: drivers/staging/media/vxd/decoder/fw_interface.h
F: drivers/staging/media/vxd/decoder/h264_secure_parser.c
F: drivers/staging/media/vxd/decoder/h264_secure_parser.h
+F: drivers/staging/media/vxd/decoder/h264fw_data.h
F: drivers/staging/media/vxd/decoder/hevc_secure_parser.c
F: drivers/staging/media/vxd/decoder/hevc_secure_parser.h
+F: drivers/staging/media/vxd/decoder/hevcfw_data.h
F: drivers/staging/media/vxd/decoder/hw_control.c
F: drivers/staging/media/vxd/decoder/hw_control.h
F: drivers/staging/media/vxd/decoder/img_dec_common.h
+F: drivers/staging/media/vxd/decoder/img_pixfmts.h
+F: drivers/staging/media/vxd/decoder/img_profiles_levels.h
F: drivers/staging/media/vxd/decoder/jpeg_secure_parser.c
F: drivers/staging/media/vxd/decoder/jpeg_secure_parser.h
+F: drivers/staging/media/vxd/decoder/jpegfw_data.h
+F: drivers/staging/media/vxd/decoder/mmu_defs.h
+F: drivers/staging/media/vxd/decoder/scaler_setup.h
F: drivers/staging/media/vxd/decoder/swsr.c
F: drivers/staging/media/vxd/decoder/swsr.h
F: drivers/staging/media/vxd/decoder/translation_api.c
F: drivers/staging/media/vxd/decoder/translation_api.h
+F: drivers/staging/media/vxd/decoder/vdec_defs.h
F: drivers/staging/media/vxd/decoder/vdec_mmu_wrapper.c
F: drivers/staging/media/vxd/decoder/vdec_mmu_wrapper.h
F: drivers/staging/media/vxd/decoder/vxd_core.c
F: drivers/staging/media/vxd/decoder/vxd_dec.c
F: drivers/staging/media/vxd/decoder/vxd_dec.h
+F: drivers/staging/media/vxd/decoder/vxd_ext.h
F: drivers/staging/media/vxd/decoder/vxd_int.c
F: drivers/staging/media/vxd/decoder/vxd_int.h
+F: drivers/staging/media/vxd/decoder/vxd_mmu_defs.h
+F: drivers/staging/media/vxd/decoder/vxd_props.h
F: drivers/staging/media/vxd/decoder/vxd_pvdec.c
F: drivers/staging/media/vxd/decoder/vxd_pvdec_priv.h
F: drivers/staging/media/vxd/decoder/vxd_pvdec_regs.h
diff --git a/drivers/staging/media/vxd/common/img_errors.h b/drivers/staging/media/vxd/common/img_errors.h
new file mode 100644
index 000000000000..1f583b0284dc
--- /dev/null
+++ b/drivers/staging/media/vxd/common/img_errors.h
@@ -0,0 +1,104 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Error codes.
+ *
+ * Copyright (c) Imagination Technologies Ltd.
+ * Copyright (c) 2021 Texas Instruments Incorporated - http://www.ti.com/
+ *
+ * Authors:
+ * Amit Makani <[email protected]>
+ *
+ * Re-written for upstreaming
+ * Sidraya Jayagond <[email protected]>
+ */
+#ifndef __IMG_ERRORS__
+#define __IMG_ERRORS__
+
+#include <linux/dma-mapping.h>
+#include <media/v4l2-ctrls.h>
+#include <media/v4l2-device.h>
+#include <media/v4l2-mem2mem.h>
+
+#define IMG_DBG_ASSERT(expected) ({WARN_ON(!(expected)); 0; })
+
+/**
+ * IMG_SUCCESS - Operation completed successfully
+ */
+#define IMG_SUCCESS (0)
+/* @brief Timeout */
+#define IMG_ERROR_TIMEOUT (1)
+/* @brief memory allocation failed */
+#define IMG_ERROR_MALLOC_FAILED (2)
+/* @brief Unspecified fatal error */
+#define IMG_ERROR_FATAL (3)
+/* @brief Memory allocation failed */
+#define IMG_ERROR_OUT_OF_MEMORY (4)
+/* @brief Device is not found */
+#define IMG_ERROR_DEVICE_NOT_FOUND (5)
+/* @brief Device is not available/in use */
+#define IMG_ERROR_DEVICE_UNAVAILABLE (6)
+/* @brief Generic/unspecified failure */
+#define IMG_ERROR_GENERIC_FAILURE (7)
+/* @brief Operation was interrupted - retry */
+#define IMG_ERROR_INTERRUPTED (8)
+/* @brief Invalid id */
+#define IMG_ERROR_INVALID_ID (9)
+/* @brief A signature value was found to be incorrect */
+#define IMG_ERROR_SIGNATURE_INCORRECT (10)
+/* @brief The provided parameters were inconsistent/incorrect */
+#define IMG_ERROR_INVALID_PARAMETERS (11)
+/* @brief A list/pool has run dry */
+#define IMG_ERROR_STORAGE_TYPE_EMPTY (12)
+/* @brief A list is full */
+#define IMG_ERROR_STORAGE_TYPE_FULL (13)
+/* @brief Something has already occurred which the code thinks has not */
+#define IMG_ERROR_ALREADY_COMPLETE (14)
+/* @brief A state machine is in an unexpected/illegal state */
+#define IMG_ERROR_UNEXPECTED_STATE (15)
+/* @brief A required resource could not be created/locked */
+#define IMG_ERROR_COULD_NOT_OBTAIN_RESOURCE (16)
+/*
+ * @brief An attempt to access a structure/resource was
+ * made before it was initialised
+ */
+#define IMG_ERROR_NOT_INITIALISED (17)
+/*
+ * @brief An attempt to initialise a structure/resource
+ * was made when it has already been initialised
+ */
+#define IMG_ERROR_ALREADY_INITIALISED (18)
+/* @brief A provided value exceeded stated bounds */
+#define IMG_ERROR_VALUE_OUT_OF_RANGE (19)
+/* @brief The operation has been cancelled */
+#define IMG_ERROR_CANCELLED (20)
+/* @brief A specified minimum has not been met */
+#define IMG_ERROR_MINIMUM_LIMIT_NOT_MET (21)
+/* @brief The requested feature or mode is not supported */
+#define IMG_ERROR_NOT_SUPPORTED (22)
+/* @brief A device or process was idle */
+#define IMG_ERROR_IDLE (23)
+/* @brief A device or process was busy */
+#define IMG_ERROR_BUSY (24)
+/* @brief The device or resource has been disabled */
+#define IMG_ERROR_DISABLED (25)
+/* @brief The requested operation is not permitted at this time */
+#define IMG_ERROR_OPERATION_PROHIBITED (26)
+/* @brief The entry read from the MMU page directory is invalid */
+#define IMG_ERROR_MMU_PAGE_DIRECTORY_FAULT (27)
+/* @brief The entry read from an MMU page table is invalid */
+#define IMG_ERROR_MMU_PAGE_TABLE_FAULT (28)
+/* @brief The entry read from an MMU page catalogue is invalid */
+#define IMG_ERROR_MMU_PAGE_CATALOGUE_FAULT (29)
+/* @brief Memory can not be freed as it is still been used */
+#define IMG_ERROR_MEMORY_IN_USE (30)
+/* @brief A mismatch has unexpectedly occurred in data */
+#define IMG_ERROR_TEST_MISMATCH (31)
+
+#define IMG_ERROR_INVALID_CONTEXT (32)
+
+#define IMG_ERROR_RETRY (33)
+#define IMG_ERROR_UNDEFINED (34)
+#define IMG_ERROR_INVALID_SIZE (35)
+#define IMG_ERROR_SURFACE_LOCKED (36)
+
+#endif /* __IMG_ERRORS__ */
diff --git a/drivers/staging/media/vxd/common/img_mem.h b/drivers/staging/media/vxd/common/img_mem.h
new file mode 100644
index 000000000000..c8b5e2311d14
--- /dev/null
+++ b/drivers/staging/media/vxd/common/img_mem.h
@@ -0,0 +1,43 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Typedefs for memory pool and attributes
+ *
+ * Copyright (c) Imagination Technologies Ltd.
+ * Copyright (c) 2021 Texas Instruments Incorporated - http://www.ti.com/
+ *
+ * Authors:
+ * Amit Makani <[email protected]>
+ *
+ * Re-written for upstreaming
+ * Sidraya Jayagond <[email protected]>
+ */
+#ifndef __IMG_MEM__
+#define __IMG_MEM__
+
+/*
+ * This type defines the memory attributes.
+ * @SYS_MEMATTRIB_CACHED: Memory to be allocated as cached
+ * @SYS_MEMATTRIB_UNCACHED: Memory to be allocated as uncached
+ * @SYS_MEMATTRIB_WRITECOMBINE: Memory to be allocated as write-combined
+ * (or equivalent buffered/burst writes mechanism)
+ * @SYS_MEMATTRIB_CORE_READ_ONLY: Memory can be read only by the core
+ * @SYS_MEMATTRIB_CORE_WRITE_ONLY: Memory can be written only by the core
+ * @SYS_MEMATTRIB_CPU_READ: Memory should be readable by the CPU
+ * @SYS_MEMATTRIB_CPU_WRITE: Memory should be writable by the CPU
+ */
+enum sys_emem_attrib {
+ SYS_MEMATTRIB_CACHED = 0x00000001,
+ SYS_MEMATTRIB_UNCACHED = 0x00000002,
+ SYS_MEMATTRIB_WRITECOMBINE = 0x00000004,
+ SYS_MEMATTRIB_SECURE = 0x00000010,
+ SYS_MEMATTRIB_INPUT = 0x00000100,
+ SYS_MEMATTRIB_OUTPUT = 0x00000200,
+ SYS_MEMATTRIB_INTERNAL = 0x00000400,
+ SYS_MEMATTRIB_CORE_READ_ONLY = 0x00001000,
+ SYS_MEMATTRIB_CORE_WRITE_ONLY = 0x00002000,
+ SYS_MEMATTRIB_CPU_READ = 0x00010000,
+ SYS_MEMATTRIB_CPU_WRITE = 0x00020000,
+ SYS_MEMATTRIB_FORCE32BITS = 0x7FFFFFFFU
+};
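+/*
+ * The attributes are bit flags and are intended to be combined, for
+ * example (illustrative):
+ * (SYS_MEMATTRIB_UNCACHED | SYS_MEMATTRIB_CPU_READ | SYS_MEMATTRIB_CPU_WRITE)
+ */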
+
+#endif /* __IMG_MEM__ */
diff --git a/drivers/staging/media/vxd/decoder/fw_interface.h b/drivers/staging/media/vxd/decoder/fw_interface.h
new file mode 100644
index 000000000000..6da3d835b950
--- /dev/null
+++ b/drivers/staging/media/vxd/decoder/fw_interface.h
@@ -0,0 +1,818 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * VXD decoder firmware interface
+ * This file contains the host/firmware message and comms RAM definitions
+ *
+ * Copyright (c) Imagination Technologies Ltd.
+ * Copyright (c) 2021 Texas Instruments Incorporated - http://www.ti.com/
+ *
+ * Authors:
+ * Amit Makani <[email protected]>
+ *
+ * Re-written for upstreaming
+ * Sidraya Jayagond <[email protected]>
+ */
+
+#ifndef FW_INTERFACE_H_
+#define FW_INTERFACE_H_
+
+/* TODO: this macro is always defined for now; revisit whether it should stay enabled */
+#define VDEC_USE_PVDEC_COMPATIBILITY 1
+
+#define MSG_TYPE_PADDING (0x00)
+/* Start of parser specific Host->MTX messages */
+#define MSG_TYPE_START_PSR_HOSTMTX_MSG (0x80)
+/* Start of parser specific MTX->Host message */
+#define MSG_TYPE_START_PSR_MTXHOST_MSG (0xC0)
+
+enum {
+ FW_DEVA_INIT = MSG_TYPE_START_PSR_HOSTMTX_MSG,
+ FW_DEVA_DECODE_FE,
+ FW_DEVA_RES_0,
+ FW_DEVA_RES_1,
+ FW_DEVA_DECODE_BE,
+ FW_DEVA_HOST_BE_OPP,
+ FW_DEVA_DEBLOCK,
+ FW_DEVA_INTRA_OOLD,
+ FW_DEVA_ENDFRAME,
+
+ FW_DEVA_PARSE,
+ FW_DEVA_PARSE_FRAGMENT,
+ FW_DEVA_BEGINFRAME,
+
+#ifdef VDEC_USE_PVDEC_COMPATIBILITY
+#ifdef VDEC_USE_PVDEC_SEC
+ FWBSP_INIT,
+ FWBSP_PARSE_BITSTREAM,
+ FWDEC_DECODE,
+#endif /* VDEC_USE_PVDEC_SEC */
+#endif /* VDEC_USE_PVDEC_COMPATIBILITY */
+
+ /* Sent by the firmware on the MTX to the host. */
+ FW_DEVA_COMPLETED = MSG_TYPE_START_PSR_MTXHOST_MSG,
+#ifndef VDEC_USE_PVDEC_COMPATIBILITY
+ FW_DEVA_RES_2,
+ FW_DEVA_RES_3,
+ FW_DEVA_RES_4,
+ FW_DEVA_RES_5,
+
+ FW_DEVA_RES_6,
+ FW_DEVA_CONTIGUITY_WARNING,
+ FW_DEVA_PANIC,
+ FW_DEVA_RES_7,
+ FW_DEVA_RES_8,
+#else /* ndef VDEC_USE_PVDEC_COMPATIBILITY */
+ FW_DEVA_PANIC,
+ FW_ASSERT,
+ FW_PERF,
+ /* An empty completion message sent by new vxd driver */
+ FW_VXD_EMPTY_COMPL,
+ FW_DEC_REQ_RECEIVED,
+ FW_SO,
+#ifdef VDEC_USE_PVDEC_SEC
+ FWBSP_NEW_SEQ,
+ FWBSP_NEW_PIC,
+ FWBSP_BUF_EMPTY,
+ FWBSP_ERROR,
+ FWDEC_COMPLETED,
+#endif /* VDEC_USE_PVDEC_SEC */
+#endif /* VDEC_USE_PVDEC_COMPATIBILITY */
+ FW_DEVA_SIGNATURES_LEGACY = 0xD0,
+ FW_DEVA_SIGNATURES_HEVC = 0xE0,
+ FW_DEVA_SIGNATURES_FORCE32BITS = 0x7FFFFFFFU
+};
+
+/* Defines the Host/Firmware communication area */
+#ifndef VDEC_USE_PVDEC_COMPATIBILITY
+#define COMMS_HEADER_SIZE (0x34)
+#else /* def VDEC_USE_PVDEC_COMPATIBILITY */
+#define COMMS_HEADER_SIZE (0x40)
+#endif /* def VDEC_USE_PVDEC_COMPATIBILITY */
+/* dwords */
+#define PVDEC_COM_RAM_FW_STATUS_OFFSET 0x00
+#define PVDEC_COM_RAM_TASK_STATUS_OFFSET 0x04
+#define PVDEC_COM_RAM_FW_ID_OFFSET 0x08
+#define PVDEC_COM_RAM_FW_MTXPC_OFFSET 0x0c
+#define PVDEC_COM_RAM_MSG_COUNTER_OFFSET 0x10
+#define PVDEC_COM_RAM_SIGNATURE_OFFSET 0x14
+#define PVDEC_COM_RAM_TO_HOST_BUF_SIZE_AND_OFFSET_OFFSET 0x18
+#define PVDEC_COM_RAM_TO_HOST_RD_INDEX_OFFSET 0x1c
+#define PVDEC_COM_RAM_TO_HOST_WRT_INDEX_OFFSET 0x20
+#define PVDEC_COM_RAM_TO_MTX_BUF_SIZE_AND_OFFSET_OFFSET 0x24
+#define PVDEC_COM_RAM_TO_MTX_RD_INDEX_OFFSET 0x28
+#define PVDEC_COM_RAM_FLAGS_OFFSET 0x2c
+#define PVDEC_COM_RAM_TO_MTX_WRT_INDEX_OFFSET 0x30
+#ifdef VDEC_USE_PVDEC_COMPATIBILITY
+#define PVDEC_COM_RAM_STATE_BUF_SIZE_AND_OFFSET_OFFSET 0x34
+#define PVDEC_COM_RAM_FW_MMU_REPORT_OFFSET 0x38
+#endif /* VDEC_USE_PVDEC_COMPATIBILITY */
+/* fields */
+#define PVDEC_COM_RAM_TO_HOST_BUF_SIZE_AND_OFFSET_SIZE_MASK 0xFFFF
+#define PVDEC_COM_RAM_TO_HOST_BUF_SIZE_AND_OFFSET_SIZE_SHIFT 0
+#define PVDEC_COM_RAM_TO_HOST_BUF_SIZE_AND_OFFSET_OFFSET_MASK 0xFFFF0000
+#define PVDEC_COM_RAM_TO_HOST_BUF_SIZE_AND_OFFSET_OFFSET_SHIFT 16
+
+#define PVDEC_COM_RAM_TO_MTX_BUF_SIZE_AND_OFFSET_SIZE_MASK 0xFFFF
+#define PVDEC_COM_RAM_TO_MTX_BUF_SIZE_AND_OFFSET_SIZE_SHIFT 0
+#define PVDEC_COM_RAM_TO_MTX_BUF_SIZE_AND_OFFSET_OFFSET_MASK 0xFFFF0000
+#define PVDEC_COM_RAM_TO_MTX_BUF_SIZE_AND_OFFSET_OFFSET_SHIFT 16
+#ifdef VDEC_USE_PVDEC_COMPATIBILITY
+#define PVDEC_COM_RAM_STATE_BUF_SIZE_AND_OFFSET_SIZE_MASK 0xFFFF
+#define PVDEC_COM_RAM_STATE_BUF_SIZE_AND_OFFSET_SIZE_SHIFT 0
+#define PVDEC_COM_RAM_STATE_BUF_SIZE_AND_OFFSET_OFFSET_MASK 0xFFFF0000
+#define PVDEC_COM_RAM_STATE_BUF_SIZE_AND_OFFSET_OFFSET_SHIFT 16
+#endif /* VDEC_USE_PVDEC_COMPATIBILITY */
+#define PVDEC_COM_RAM_BUF_GET_SIZE(_reg_, _name_) \
+ (((_reg_) & PVDEC_COM_RAM_ ## _name_ ## _BUF_SIZE_AND_OFFSET_SIZE_MASK) >> \
+ PVDEC_COM_RAM_ ## _name_ ## _BUF_SIZE_AND_OFFSET_SIZE_SHIFT)
+#define PVDEC_COM_RAM_BUF_GET_OFFSET(_reg_, _name_) \
+ (((_reg_) & \
+ PVDEC_COM_RAM_ ## _name_ ## _BUF_SIZE_AND_OFFSET_OFFSET_MASK) >> \
+ PVDEC_COM_RAM_ ## _name_ ## _BUF_SIZE_AND_OFFSET_OFFSET_SHIFT)
+#define PVDEC_COM_RAM_BUF_SET_SIZE_AND_OFFSET(_name_, _size_, _offset_) \
+ ((((_size_) << \
+ PVDEC_COM_RAM_ ## _name_ ## _BUF_SIZE_AND_OFFSET_SIZE_SHIFT) \
+ & PVDEC_COM_RAM_ ## _name_ ## _BUF_SIZE_AND_OFFSET_SIZE_MASK) | \
+ (((_offset_) << \
+ PVDEC_COM_RAM_ ## _name_ ## _BUF_SIZE_AND_OFFSET_OFFSET_SHIFT) \
+ & PVDEC_COM_RAM_ ## _name_ ## _BUF_SIZE_AND_OFFSET_OFFSET_MASK))
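+/*
+ * Usage sketch (illustrative only, not part of the interface): unpacking
+ * and packing a ring-buffer size/offset word from the comms header. The
+ * "val", "size" and "offs" names below are hypothetical.
+ *
+ *	size = PVDEC_COM_RAM_BUF_GET_SIZE(val, TO_HOST);
+ *	offs = PVDEC_COM_RAM_BUF_GET_OFFSET(val, TO_HOST);
+ *	val  = PVDEC_COM_RAM_BUF_SET_SIZE_AND_OFFSET(TO_MTX, size, offs);
+ */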
+/* values */
+/* Firmware ready signature value */
+ #define FW_READY_SIGNATURE (0xA5A5A5A5)
+
+/* Firmware status values */
+ #define FW_STATUS_BUSY 0
+ #define FW_STATUS_IDLE 1
+ #define FW_STATUS_PANIC 2
+ #define FW_STATUS_ASSERT 3
+ #define FW_STATUS_GAMEOVER 4
+ #define FW_STATUS_FEWATCHDOG 5
+ #define FW_STATUS_EPWATCHDOG 6
+ #define FW_STATUS_BEWATCHDOG 7
+#ifdef VDEC_USE_PVDEC_COMPATIBILITY
+ #define FW_STATUS_SO 8
+ #define FW_STATUS_INIT 0xF
+#endif
+
+/* Decode Message Flags */
+ #define FW_DEVA_RENDER_IS_FIRST_SLICE (0x00000001)
+/* This is H264 Mbaff - required for state store */
+ #define FW_DEVA_FORCE_RECON_WRITE_DISABLE (0x00000002)
+ #define FW_DEVA_RENDER_IS_LAST_SLICE (0x00000004)
+/* Prevents insertion of end of picture or flush at VEC EOS */
+ #define FW_DEVA_DECODE_DISABLE_EOF_DETECTION (0x00000008)
+
+ #define FW_DEVA_CONTEXT_BUFFER_INVALID (0x00000010)
+ #define FW_DEVA_FORCE_ALT_OUTPUT (0x00000020)
+ #define FW_SECURE_STREAM (0x00000040)
+ #define FW_LOW_LATENCY (0x00000080)
+
+ #define FW_DEVA_CONTIGUITY_DETECTION (0x00000100)
+ #define FW_DEVA_FORCE_INIT_CMDS (0x00000200)
+ #define FW_DEVA_DEBLOCK_ENABLE (0x00000400)
+#ifdef VDEC_USE_PVDEC_COMPATIBILITY
+ #define FW_VDEC_SEND_SIGNATURES (0x00000800)
+#else
+/* (0x00000800) */
+#endif /* VDEC_USE_PVDEC_COMPATIBILITY */
+
+ #define FW_DEVA_FORCE_AUX_LINE_BUF_DISABLE (0x00001000)
+/*
+ * Cause no response message to be sent, and no interrupt
+ * generation on successful completion
+ */
+ #define FW_DEVA_RENDER_NO_RESPONSE_MSG (0x00002000)
+/*
+ * Cause an interrupt if a response message is generated
+ * on successful completion
+ */
+ #define FW_DEVA_RENDER_HOST_INT (0x00004000)
+/* Report contiguity errors to host */
+ #define FW_DEVA_CONTIGUITY_REPORTING (0x00008000)
+
+ #define FW_DEVA_VC1_SKIPPED_PICTURE (0x00010000)
+ #define FW_INTERNAL_RENDER_SWITCH (0x00020000)
+ #define FW_DEVA_UNSUPPORTED (0x00040000)
+ #define DEBLOCKING_FORCED_OFF (0x00080000)
+#ifdef VDEC_USE_PVDEC_COMPATIBILITY
+ #define FW_VDEC_CMD_PENDING (0x00100000)
+#else
+/* (0x00100000) */
+#endif
+/* Only for debug */
+ #define DETECTED_RENDEC_FULL (0x00200000)
+/* Only for debug */
+ #define DETECTED_RENDEC_EMPTY (0x00400000)
+ #define FW_ONE_PASS_PARSE (0x00800000)
+
+ #define FW_DEVA_EARLY_COMPLETE (0x01000000)
+ #define FW_DEVA_FE_EP_SIGNATURES_READY (0x02000000)
+ #define FW_VEC_EOS (0x04000000)
+/* hardware has reported an error relating to this command */
+ #define FW_DEVA_ERROR_DETECTED_ENT (0x08000000)
+
+ #define FW_DEVA_ERROR_DETECTED_PIX (0x10000000)
+ #define FW_DEVA_MP_SYNC (0x20000000)
+ #define MORE_THAN_ONE_MB (0x40000000)
+ #define REATTEMPT_SINGLEPIPE (0x80000000)
+/* end of message flags */
+#ifdef VDEC_USE_PVDEC_COMPATIBILITY
+/* VDEC Decode Message Flags */
+/*
+ * H.264/H.265 are to be configured in SIZE_DELIMITED mode rather than SCP mode.
+ */
+#define FW_VDEC_NAL_SIZE_DELIM (0x00000001)
+/* Indicates if MMU cache shall be flushed. */
+#define FW_VDEC_MMU_FLUSH_CACHE (0x00000002)
+/* end of message flags */
+#endif /* VDEC_USE_PVDEC_COMPATIBILITY */
+
+/* FW flags */
+/* TODO : Temporary for HW testing */
+ #define FWFLAG_DISABLE_VDEB_PRELOAD (0x00000001)
+ #define FWFLAG_BIG_TO_HOST_BUFFER (0x00000002)
+/* FS is the default regardless of this flag */
+ #define FWFLAG_FORCE_FS_FLOW (0x00000004)
+ #define FWFLAG_DISABLE_WATCHDOG_TIMERS (0x00000008)
+
+ #define FWFLAG_DISABLE_AEH (0x00000020)
+ #define FWFLAG_DISABLE_AUTONOMOUS_RESET (0x00000040)
+ #define FWFLAG_NON_ACCUMULATING_HWSIGS (0x00000080)
+
+ #define FWFLAG_DISABLE_2PASS_DEBLOCK (0x00000100)
+ #define FWFLAG_NO_INT_ON_TOHOST_FULL (0x00000200)
+ #define FWFLAG_RETURN_VDEB_CR (0x00000800)
+
+ #define FWFLAG_DISABLE_AUTOCLOCKGATING (0x00001000)
+ #define FWFLAG_DISABLE_IDLE_GPIO (0x00002000)
+ #define FWFLAG_XPL (0x00004000)
+ #define FWFLAG_INFINITE_MTX_TIMEOUT (0x00008000)
+
+ #define FWFLAG_DECOUPLE_BE_FE (0x00010000)
+ #define FWFLAG_ENABLE_SECURITY (0x00080000)
+
+ #define FWFLAG_ENABLE_CONCEALMENT (0x00100000)
+/* Not currently supported */
+/* #define FWFLAG_PREEMPT (0x00200000) */
+/* NA in FS */
+ #define FWFLAG_FORCE_FLUSHING (0x00400000)
+/* NA in FS */
+ #define FWFLAG_DISABLE_GENC_FLUSHING (0x00800000)
+
+ #define FWFLAG_DISABLE_COREWDT_TIMERS (0x01000000)
+ #define FWFLAG_DISABLE_RENDEC_AUTOFLUSH (0x02000000)
+ #define FWFLAG_FORCE_STRICT_SINGLEPIPE (0x04000000)
+ #define FWFLAG_CONSISTENT_MULTIPIPE_FLOW (0x08000000)
+
+ #define FWFLAG_DISABLE_IDLE_FAST_EVAL (0x10000000)
+ #define FWFLAG_FAKE_COMPLETION (0x20000000)
+ #define FWFLAG_MAN_PP_CLK (0x40000000)
+ #define FWFLAG_STACK_CHK (0x80000000)
+
+/* end of FW flags */
+
+#ifdef FW_STACK_USAGE_TRACKING
+/* FW task identifiers */
+enum task_id {
+ TASK_ID_RX = 0,
+ TASK_ID_TX,
+ TASK_ID_EP1,
+ TASK_ID_FE1,
+ TASK_ID_FE2,
+ TASK_ID_FE3,
+ TASK_ID_BE1,
+ TASK_ID_BE2,
+ TASK_ID_BE3,
+ TASK_ID_PARSER,
+ TASK_ID_MAX,
+ TASK_ID_FORCE32BITS = 0x7FFFFFFFU
+};
+
+/* FW task stack info utility macros */
+#define TASK_STACK_SIZE_MASK 0xFFFF
+#define TASK_STACK_SIZE_SHIFT 0
+#define TASK_STACK_USED_MASK 0xFFFF0000
+#define TASK_STACK_USED_SHIFT 16
+#define TASK_STACK_SET_INFO(_task_id_, _stack_info_, _size_, _used_) \
+ (_stack_info_[_task_id_] = \
+ ((_size_) << TASK_STACK_SIZE_SHIFT) | \
+ ((_used_) << TASK_STACK_USED_SHIFT))
+#define TASK_STACK_GET_SIZE(_task_id_, _stack_info_) \
+ ((_stack_info_[_task_id_] & TASK_STACK_SIZE_MASK) >> \
+ TASK_STACK_SIZE_SHIFT)
+#define TASK_STACK_GET_USED(_task_id_, _stack_info_) \
+ ((_stack_info_[_task_id_] & TASK_STACK_USED_MASK) >> \
+ TASK_STACK_USED_SHIFT)
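+/*
+ * Usage sketch (illustrative only): recording and reading back stack usage
+ * for one firmware task. The "stack_info", "size" and "used" names are
+ * hypothetical.
+ *
+ *	unsigned int stack_info[TASK_ID_MAX];
+ *
+ *	TASK_STACK_SET_INFO(TASK_ID_PARSER, stack_info, 1024, 256);
+ *	size = TASK_STACK_GET_SIZE(TASK_ID_PARSER, stack_info);
+ *	used = TASK_STACK_GET_USED(TASK_ID_PARSER, stack_info);
+ */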
+#endif /* FW_STACK_USAGE_TRACKING */
+
+/* Control Allocation */
+#define CMD_MASK (0xF0000000)
+
+/* Ctrl Allocation Header */
+#define CMD_CTRL_ALLOC_HEADER (0x90000000)
+
+struct ctrl_alloc_header {
+ unsigned int cmd_additional_params;
+ unsigned int slice_params;
+ union {
+ unsigned int vp8_probability_data;
+ unsigned int h264_pipeintra_buffersize;
+ };
+ unsigned int chroma_strides;
+ unsigned int slice_first_mb_yx;
+ unsigned int pic_last_mb_yx;
+ /* VC1 only : Store Range Map flags in bottom bits of [0] */
+ unsigned int alt_output_addr[2];
+ unsigned int alt_output_flags;
+ /* H264 Only : Extended Operating Mode */
+ unsigned int ext_opmode;
+};
+
+#define CMD_CTRL_ALLOC_HEADER_DWSIZE \
+ (sizeof(struct ctrl_alloc_header) / sizeof(unsigned int))
+
+/* Additional Parameter flags */
+#define VC1_PARSEHDR_MASK (0x00000001)
+#define VC1_SKIPPIC_MASK (0x00000002)
+
+#define VP6_BUFFOFFSET_MASK (0x0000ffff)
+#define VP6_MULTISTREAM_MASK (0x01000000)
+#define VP6_FRAMETYPE_MASK (0x02000000)
+
+#define VP8_BUFFOFFSET_MASK (0x00ffffff)
+#define VP8_PARTITIONSCOUNT_MASK (0x0f000000)
+#define VP8_PARTITIONSCOUNT_SHIFT (24)
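+/*
+ * Illustrative packing of cmd_additional_params for a VP8 stream, inferred
+ * from the masks above (the exact field consumed by the firmware is an
+ * assumption; variable names are hypothetical):
+ *
+ *	hdr->cmd_additional_params =
+ *		(buf_offset & VP8_BUFFOFFSET_MASK) |
+ *		((num_partitions << VP8_PARTITIONSCOUNT_SHIFT) &
+ *		 VP8_PARTITIONSCOUNT_MASK);
+ */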
+
+/* Nop Command */
+#define CMD_NOP (0x00000000)
+#define CMD_NOP_DWSIZE (1)
+
+/* Register Block */
+#define CMD_REGISTER_BLOCK (0x10000000)
+#define CMD_REGISTER_BLOCK_PATCHING_REQUIRED (0x01000000)
+#define CMD_REGISTER_BLOCK_FLAG_PRELOAD (0x04000000)
+#define CMD_REGISTER_BLOCK_FLAG_VLC_DATA (0x08000000)
+
+/* Rendec Command */
+#define CMD_RENDEC_BLOCK (0x50000000)
+#define CMD_RENDEC_BLOCK_FLAG_MASK (0x0F000000)
+#define CMD_RENDEC_FORCE (0x08000000)
+#define CMD_RENDEC_PATCHING_REQUIRED (0x01000000)
+#define CMD_RENDEC_WORD_COUNT_MASK (0x00ff0000)
+#define CMD_RENDEC_WORD_COUNT_SHIFT (16)
+#define CMD_RENDEC_ADDRESS_MASK (0x0000ffff)
+#define CMD_RENDEC_ADDRESS_SHIFT (0)
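+/*
+ * Illustrative composition of a rendec command word, inferred from the
+ * field masks above (not a definitive encoding; names are hypothetical):
+ *
+ *	cmd_word = CMD_RENDEC_BLOCK |
+ *		   ((num_words << CMD_RENDEC_WORD_COUNT_SHIFT) &
+ *		    CMD_RENDEC_WORD_COUNT_MASK) |
+ *		   ((rendec_addr << CMD_RENDEC_ADDRESS_SHIFT) &
+ *		    CMD_RENDEC_ADDRESS_MASK);
+ */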
+
+#ifndef VDEC_USE_PVDEC_SEC
+/* Deblock */
+#define CMD_DEBLOCK (0x70000000)
+#define CMD_DEBLOCK_TYPE_STD (0x00000000)
+#define CMD_DEBLOCK_TYPE_OOLD (0x00000001)
+#define CMD_DEBLOCK_TYPE_SKIP (0x00000002)
+/* End Of Frame */
+#define CMD_DEBLOCK_TYPE_EF (0x00000003)
+
+struct deblock_cmd {
+ unsigned int cmd; /* 0x70000000 */
+ unsigned int source_mb_data;
+ unsigned int address_a[2];
+};
+
+#define CMD_DEBLOCK_DWSIZE (sizeof(struct deblock_cmd) / sizeof(unsigned int))
+#endif /* !VDEC_USE_PVDEC_SEC */
+
+/* Skip */
+#define CMD_CONDITIONAL_SKIP (0x80000000)
+#define CMD_CONDITIONAL_SKIP_DWSIZE (1)
+#define CMD_CONDITIONAL_SKIP_DWORDS (0x0000ffff)
+#define CMD_CONDITIONAL_SKIP_CONTEXT_SWITCH BIT(20)
+
+/* DMA */
+#define CMD_DMA (0xE0000000)
+#define CMD_DMA_DMA_TYPE_MASK (0x03000000)
+#define CMD_DMA_DMA_TYPE_SHIFT (24)
+#define CMD_DMA_FLAG_MASK (0x00100000)
+#define CMD_DMA_FLAG_SHIFT (20)
+#define CMD_DMA_DMA_SIZE_MASK (0x000fffff)
+
+#define CMD_DMA_OFFSET_FLAG (0x00100000)
+
+#define CMD_DMA_MAX_OFFSET (0xFFF)
+#define CMD_DMA_TYPE_VLC_TABLE (0 << CMD_DMA_DMA_TYPE_SHIFT)
+#define CMD_DMA_TYPE_PROBABILITY_DATA BIT(CMD_DMA_DMA_TYPE_SHIFT)
+
+struct dma_cmd {
+ unsigned int cmd;
+ unsigned int dev_virt_add;
+};
+
+#define CMD_DMA_DWSIZE (sizeof(struct dma_cmd) / sizeof(unsigned int))
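+/*
+ * Illustrative use of the DMA command, based on the masks above (whether
+ * the transfer size is expressed in bytes or dwords is an assumption;
+ * names are hypothetical):
+ *
+ *	struct dma_cmd dma;
+ *
+ *	dma.cmd = CMD_DMA | CMD_DMA_TYPE_VLC_TABLE |
+ *		  (xfer_size & CMD_DMA_DMA_SIZE_MASK);
+ *	dma.dev_virt_add = vlc_table_dev_virt_addr;
+ */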
+
+struct dma_cmd_offset_dwsize {
+ unsigned int cmd;
+ unsigned int dev_virt_add;
+ unsigned int byte_offset;
+};
+
+#define CMD_DMA_OFFSET_DWSIZE \
+	(sizeof(struct dma_cmd_offset_dwsize) / sizeof(unsigned int))
+
+/* HOST COPY */
+#define CMD_HOST_COPY (0xF0000000)
+#define CMD_HOST_COPY_SIZE_MASK (0x000fffff)
+
+struct host_copy_cmd {
+ unsigned int cmd;
+ unsigned int src_dev_virt_add;
+ unsigned int dst_dev_virt_add;
+};
+
+#define CMD_HOST_COPY_DWSIZE (sizeof(struct host_copy_cmd) / sizeof(unsigned int))
+
+/* Shift register setup and Bitstream DMA */
+#define CMD_SR_SETUP (0xB0000000)
+#define CMD_SR_ENABLE_RBDU_EXTRACTION (0x00000001)
+#define CMD_SR_ENABLE_AES_COUNTER (0x00000002)
+#define CMD_SR_VERIFY_STARTCODE (0x00000004)
+#define CMD_SR_BITSTR_ADDR_DEREF (0x00000008)
+#define CMD_SR_BITSTR_PARSE_KEY (0x00000010)
+
+struct sr_setup_cmd {
+ unsigned int cmd;
+ unsigned int bitstream_offset_bits;
+ unsigned int bitstream_size_bytes;
+};
+
+#define CMD_SR_DWSIZE (sizeof(struct sr_setup_cmd) / sizeof(unsigned int))
+
+#define CMD_BITSTREAM_DMA (0xA0000000)
+#define CMD_BITSTREAM_DMA_DWSIZE (2)
+/* VC1 Parse Header Command */
+#define CMD_PARSE_HEADER (0x30000000)
+#define CMD_PARSE_HEADER_CONTEXT_MASK (0x000000ff)
+#define CMD_PARSE_HEADER_NEWSLICE (0x00000001)
+#define CMD_PARSE_HEADER_SKIP_PIC (0x00000002)
+#define CMD_PARSE_HEADER_ONEPASSPARSE (0x00000004)
+#define CMD_PARSE_HEADER_NUMSLICE_MINUS1 (0x00ffff00)
+
+struct parse_header_cmd {
+ unsigned int cmd;
+ unsigned int seq_hdr_data;
+ unsigned int pic_dimensions;
+ unsigned int bitplane_addr[3];
+ unsigned int vlc_table_addr;
+};
+
+#define CMD_PARSE_DWSIZE (sizeof(struct parse_header_cmd) / sizeof(unsigned int))
+
+#define CMD_SLICE_INFO (0x20000000)
+#define CMD_SLICE_INFO_SLICENUM (0xff000000)
+#define CMD_SLICE_INFO_FIRSTMBY (0x00ff0000)
+#define CMD_SLICE_INFO_MBBITOFFSET (0x0000ffff)
+
+struct slice_info {
+ unsigned char slice_num;
+ unsigned char slice_first_mby;
+ unsigned short slice_mb_bitoffset;
+};
+
+#ifdef VDEC_USE_PVDEC_COMPATIBILITY
+/* VDEC extension */
+#define CMD_VDEC_EXT (0xC0000000)
+#ifdef VDEC_USE_PVDEC_SEC
+/*
+ * Used only between firmware secure modules FWBSP->FWDEC,
+ * thus the structure is defined in firmware structures.h
+ */
+#define CMD_VDEC_SECURE_EXT (0x40000000)
+#endif/* VDEC_USE_PVDEC_SEC */
+
+#define MEM2REG_SIZE_HOST_PART_MASK 0x0000FFFF
+#define MEM2REG_SIZE_HOST_PART_SHIFT 0
+
+#define MEM2REG_SIZE_BUF_TOTAL_MASK 0xFFFF0000
+#define MEM2REG_SIZE_BUF_TOTAL_SHIFT 16
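+/*
+ * Illustrative packing of vdec_ext_cmd.mem_to_reg_size, following the field
+ * comment on that member below (variable names are hypothetical):
+ *
+ *	mem_to_reg_size =
+ *		((buf_total_dwords << MEM2REG_SIZE_BUF_TOTAL_SHIFT) &
+ *		 MEM2REG_SIZE_BUF_TOTAL_MASK) |
+ *		((host_filled_dwords << MEM2REG_SIZE_HOST_PART_SHIFT) &
+ *		 MEM2REG_SIZE_HOST_PART_MASK);
+ */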
+
+struct vdec_ext_cmd {
+ unsigned int cmd;
+ unsigned int trans_id;
+ unsigned int hdr_addr;
+ unsigned int hdr_size;
+ unsigned int ctx_save_addr;
+ unsigned int ctx_load_addr;
+ unsigned int buf_ctrl_addr;
+ unsigned int seq_addr;
+ unsigned int pps_addr;
+ unsigned int pps_2addr;
+ unsigned int mem_to_reg_addr;
+ /* 31-16: buff size, 15-0: size filled by host; dwords */
+ unsigned int mem_to_reg_size;
+ unsigned int slice_params_addr;
+ unsigned int slice_params_size; /* dwords */
+ unsigned int last_luma_recon;
+ unsigned int last_chroma_recon;
+ unsigned int luma_err_base;
+ unsigned int chroma_err_base;
+ unsigned int scaled_display_size;
+ unsigned int horz_scale_control;
+ unsigned int vert_scale_control;
+ unsigned int scale_output_size;
+ unsigned int vlc_idx_table_size;
+ unsigned int vlc_idx_table_addr;
+ unsigned int vlc_tables_size;
+ unsigned int vlc_tables_addr;
+ unsigned int display_picture_size;
+ unsigned int parser_mode;
+ /* needed for separate colour planes */
+ unsigned int intra_buf_base_addr;
+ unsigned int intra_buf_size_per_plane;
+ unsigned int intra_buf_size_per_pipe;
+ unsigned int chroma2reconstructed_addr;
+ unsigned int luma_alt_addr;
+ unsigned int chroma_alt_addr;
+ unsigned int chroma2alt_addr;
+ unsigned int aux_line_buf_size_per_pipe;
+ unsigned int aux_line_buffer_base_addr;
+ unsigned int alt_output_pict_rotation;
+ /* miscellaneous flags */
+ struct {
+ unsigned is_chromainterleaved : 1;
+ unsigned is_packedformat : 1;
+ unsigned is_discontinuousmbs : 1;
+ };
+};
+
+#define CMD_VDEC_EXT_DWSIZE (sizeof(struct vdec_ext_cmd) / sizeof(unsigned int))
+#endif /* VDEC_USE_PVDEC_COMPATIBILITY */
+
+/* Completion */
+#define CMD_COMPLETION (0x60000000)
+#define CMD_COMPLETION_DWSIZE (1)
+
+#ifdef VDEC_USE_PVDEC_SEC
+/* Slice done */
+#define CMD_SLICE_DONE (0x70000000)
+#define CMD_SLICE_DONE_DWSIZE (1)
+#endif /* VDEC_USE_PVDEC_SEC */
+
+/* Bitstream segments */
+#define CMD_BITSTREAM_SEGMENTS (0xD0000000)
+#define CMD_BITSTREAM_SEGMENTS_MINUS1_MASK (0x0000001F)
+#define CMD_BITSTREAM_PARSE_BLK_MASK (0x0000FF00)
+#ifdef VDEC_USE_PVDEC_COMPATIBILITY
+#define CMD_BITSTREAM_SEGMENTS_MORE_FOLLOW_MASK (0x00000020)
+#define CMD_BITSTREAM_EOP_MASK (0x00000040)
+#define CMD_BITSTREAM_BS_TOT_SIZE_WORD_OFFSET (1)
+#define CMD_BITSTREAM_BS_SEG_LIST_WORD_OFFSET (2)
+#define CMD_BITSTREAM_HDR_DW_SIZE CMD_BITSTREAM_BS_SEG_LIST_WORD_OFFSET
+
+#define CMD_BITSTREAM_SEGMENTS_MAX_NUM (60)
+#endif /* VDEC_USE_PVDEC_COMPATIBILITY */
+
+#ifdef VDEC_USE_PVDEC_COMPATIBILITY
+/* Signatures */
+/* Signature set ids (see hwSignatureModules.c for exact order). */
+/* -- FRONT END/ENTROPY_PIPE ----------------------------------- */
+/*
+ * Signature group 0:
+ * REG(PVDEC_ENTROPY, CR_SR_SIGNATURE)
+ * REG(MSVDX_VEC, CR_SR_CRC)
+ */
+#define PVDEC_SIGNATURE_GROUP_0 BIT(0)
+/*
+ * Signature group 1:
+ * REG(PVDEC_ENTROPY, CR_HEVC_PARSER_SIGNATURE)
+ * REG(PVDEC_ENTROPY, CR_ENCAP_SIGNATURE)
+ */
+#define PVDEC_SIGNATURE_GROUP_1 BIT(1)
+/*
+ * Signature group 2:
+ * REG(PVDEC_ENTROPY, CR_GENC_ENGINE_OUTPUT_SIGNATURE)
+ */
+#define PVDEC_SIGNATURE_GROUP_2 BIT(2)
+/*
+ * Signature group 3:
+ * REGREP(PVDEC_ENTROPY, CR_GENC_BUFFER_SIGNATURE, 0)
+ * REGREP(PVDEC_ENTROPY, CR_GENC_BUFFER_SIGNATURE, 1)
+ * REGREP(PVDEC_ENTROPY, CR_GENC_BUFFER_SIGNATURE, 2)
+ * REGREP(PVDEC_ENTROPY, CR_GENC_BUFFER_SIGNATURE, 3)
+ * REG( PVDEC_ENTROPY, CR_GENC_FRAGMENT_SIGNATURE)
+ * REG( PVDEC_ENTROPY, CR_GENC_FRAGMENT_READ_SIGNATURE)
+ * REG( PVDEC_ENTROPY, CR_GENC_FRAGMENT_WRADDR_SIGNATURE)
+ */
+#define PVDEC_SIGNATURE_GROUP_3 BIT(3)
+/* -- GENC_DEC -------------------------------------------------- */
+/*
+ * Signature group 4:
+ * REG( PVDEC_VEC_BE, CR_GDEC_FRAGMENT_REQ_SIGNATURE)
+ * REG( PVDEC_VEC_BE, CR_GDEC_SYS_WR_SIGNATURE)
+ * REG( PVDEC_VEC_BE, CR_GDEC_MEM2REG_SYS_WR_SIGNATURE)
+ * REG( PVDEC_VEC_BE, CR_SLICE_STRUCTURE_REQ_SIGNATURE)
+ * REG( PVDEC_VEC_BE, CR_SLICE_STRUCTURE_OVER1K_REQ_SIGNATURE)
+ * REG( PVDEC_VEC_BE, CR_MEM_STRUCTURE_REQ_SIGNATURE)
+ * REGREP(PVDEC_VEC_BE, CR_GDEC_DATA_REQ_SIGNATURE, 0)
+ * REGREP(PVDEC_VEC_BE, CR_GDEC_DATA_REQ_SIGNATURE, 1)
+ * REGREP(PVDEC_VEC_BE, CR_GDEC_DATA_REQ_SIGNATURE, 2)
+ * REGREP(PVDEC_VEC_BE, CR_GDEC_DATA_REQ_SIGNATURE, 3)
+ */
+#define PVDEC_SIGNATURE_GROUP_4 BIT(4)
+/*
+ * Signature group 5:
+ * REG( PVDEC_VEC_BE, CR_GDEC_FRAGMENT_SIGNATURE)
+ * REG( PVDEC_VEC_BE, CR_SLICE_STRUCTURE_SIGNATURE)
+ * REG( PVDEC_VEC_BE, CR_SLICE_STRUCTURE_OVER1K_SIGNATURE)
+ * REG( PVDEC_VEC_BE, CR_MEM_STRUCTURE_SIGNATURE)
+ * REGREP(PVDEC_VEC_BE, CR_GDEC_BUFFER_SIGNATURE, 0)
+ * REGREP(PVDEC_VEC_BE, CR_GDEC_BUFFER_SIGNATURE, 1)
+ * REGREP(PVDEC_VEC_BE, CR_GDEC_BUFFER_SIGNATURE, 2)
+ * REGREP(PVDEC_VEC_BE, CR_GDEC_BUFFER_SIGNATURE, 3)
+ */
+#define PVDEC_SIGNATURE_GROUP_5 BIT(5)
+/* -- RESIDUAL AND COMMAND DEBUG--------------------------------- */
+/*
+ * Signature group 12:
+ * REG(PVDEC_VEC_BE, CR_DECODE_TO_COMMAND_PRIME_SIGNATURE)
+ * REG(PVDEC_VEC_BE, CR_DECODE_TO_COMMAND_SECOND_SIGNATURE)
+ */
+#define PVDEC_SIGNATURE_GROUP_12 BIT(12)
+/*
+ * Signature group 13:
+ * REG(PVDEC_VEC_BE, CR_DECODE_TO_RESIDUAL_PRIME_SIGNATURE)
+ * REG(PVDEC_VEC_BE, CR_DECODE_TO_RESIDUAL_SECOND_SIGNATURE)
+ */
+#define PVDEC_SIGNATURE_GROUP_13 BIT(13)
+/*
+ * Signature group 14:
+ * REG(PVDEC_VEC_BE, CR_COMMAND_ABOVE_READ_SIGNATURE)
+ * REG(PVDEC_VEC_BE, CR_COMMAND_ABOVE_WRITE_SIGNATURE)
+ */
+#define PVDEC_SIGNATURE_GROUP_14 BIT(14)
+/*
+ * Signature group 15:
+ * REG(PVDEC_VEC_BE, CR_TEMPORAL_READ_SIGNATURE)
+ * REG(PVDEC_VEC_BE, CR_TEMPORAL_WRITE_SIGNATURE)
+ */
+#define PVDEC_SIGNATURE_GROUP_15 BIT(15)
+/* --VEC--------------------------------------------------------- */
+/*
+ * Signature group 16:
+ * REG(PVDEC_VEC_BE, CR_COMMAND_OUTPUT_SIGNATURE)
+ * REG(MSVDX_VEC, CR_VEC_IXFORM_SIGNATURE)
+ */
+#define PVDEC_SIGNATURE_GROUP_16 BIT(16)
+/*
+ * Signature group 17:
+ * REG(PVDEC_VEC_BE, CR_RESIDUAL_OUTPUT_SIGNATURE)
+ * REG(MSVDX_VEC, CR_VEC_COMMAND_SIGNATURE)
+ */
+#define PVDEC_SIGNATURE_GROUP_17 BIT(17)
+/* --VDMC-------------------------------------------------------- */
+/*
+ * Signature group 18:
+ * REG(MSVDX_VDMC, CR_VDMC_REFERENCE_CACHE_SIGNATURE)
+ * REG(MSVDX_VDMC, CR_VDMC_REFERENCE_CACHE_MEM_WADDR_SIGNATURE)
+ * REG(MSVDX_VDMC, CR_VDMC_REFERENCE_CACHE_MEM_RADDR_SIGNATURE)
+ * REG(MSVDX_VDMC, CR_VDMC_REFERENCE_CACHE_MEM_WDATA_SIGNATURE)
+ */
+#define PVDEC_SIGNATURE_GROUP_18 BIT(18)
+/*
+ * Signature group 19:
+ * REG(MSVDX_VDMC, CR_VDMC_2D_FILTER_PIPELINE_SIGNATURE)
+ */
+#define PVDEC_SIGNATURE_GROUP_19 BIT(19)
+/*
+ * Signature group 20:
+ * REG(MSVDX_VDMC, CR_VDMC_PIXEL_RECONSTRUCTION_SIGNATURE)
+ */
+#define PVDEC_SIGNATURE_GROUP_20 BIT(20)
+/*
+ * Signature group 21:
+ * REG(MSVDX_VDMC, CR_VDMC_MCU_SIGNATURE)
+ */
+#define PVDEC_SIGNATURE_GROUP_21 BIT(21)
+/* ---VDEB------------------------------------------------------- */
+/*
+ * Signature group 22:
+ * REG(MSVDX_VDEB, CR_VDEB_SYS_MEM_RDATA_LUMA_SIGNATURE)
+ * REG(MSVDX_VDEB, CR_VDEB_SYS_MEM_RDATA_CHROMA_SIGNATURE)
+ */
+#define PVDEC_SIGNATURE_GROUP_22 BIT(22)
+/*
+ * Signature group 23:
+ * REG(MSVDX_VDEB, CR_VDEB_SYS_MEM_ADDR_SIGNATURE)
+ */
+#define PVDEC_SIGNATURE_GROUP_23 BIT(23)
+/*
+ * Signature group 24:
+ * REG(MSVDX_VDEB, CR_VDEB_SYS_MEM_WDATA_SIGNATURE)
+ */
+#define PVDEC_SIGNATURE_GROUP_24 BIT(24)
+/* ---SCALER----------------------------------------------------- */
+/*
+ * Signature group 25:
+ * REG(MSVDX_VDEB, CR_VDEB_SCALE_ADDR_SIGNATURE)
+ */
+#define PVDEC_SIGNATURE_GROUP_25 BIT(25)
+/*
+ * Signature group 26:
+ * REG(MSVDX_VDEB, CR_VDEB_SCALE_WDATA_SIGNATURE)
+ */
+#define PVDEC_SIGNATURE_GROUP_26 BIT(26)
+/* ---PICTURE CHECKSUM------------------------------------------- */
+/*
+ * Signature group 27:
+ * REG(MSVDX_VDEB, CR_VDEB_HEVC_CHECKSUM_LUMA)
+ * REG(MSVDX_VDEB, CR_VDEB_HEVC_CHECKSUM_CB)
+ * REG(MSVDX_VDEB, CR_VDEB_HEVC_CHECKSUM_CR)
+ */
+#define PVDEC_SIGNATURE_GROUP_27 BIT(27)
+#define PVDEC_SIGNATURE_NEW_METHOD BIT(31)
+
+/* Debug messages */
+#define DEBUG_DATA_TYPE_MASK 0xF
+#define DEBUG_DATA_TYPE_SHIFT 28
+
+#define DEBUG_DATA_MSG_TYPE_MASK 0x1
+#define DEBUG_DATA_MSG_TYPE_SHIFT 15
+
+#define DEBUG_DATA_MSG_ARG_COUNT_MASK 0x7
+#define DEBUG_DATA_MSG_ARG_COUNT_SHIFT 12
+
+#define DEBUG_DATA_MSG_LINE_NO_MASK 0xFFF
+#define DEBUG_DATA_MSG_LINE_NO_SHIFT 0
+
+#define DEBUG_DATA_TYPE_HEADER (0)
+#define DEBUG_DATA_TYPE_STRING (1)
+#define DEBUG_DATA_TYPE_PARAMS (2)
+#define DEBUG_DATA_TYPE_MSG (3)
+#define DEBUG_DATA_TYPE_PERF (6)
+
+#define DEBUG_DATA_MSG_TYPE_LOG 0
+#define DEBUG_DATA_MSG_TYPE_ASSERT 1
+
+#define DEBUG_DATA_TYPE_PERF_INC_TIME_MASK 0x1
+#define DEBUG_DATA_TYPE_PERF_INC_TIME_SHIFT 28
+#define DEBUG_DATA_TYPE_PERF_INC_TIME 0x1
+
+#define DEBUG_DATA_SET_TYPE(val, type, data_type) \
+ ({ \
+ data_type __val = val; \
+ ((__val) = (__val & ~(DEBUG_DATA_TYPE_MASK << DEBUG_DATA_TYPE_SHIFT)) | \
+ ((type) << DEBUG_DATA_TYPE_SHIFT)); })
+
+#define DEBUG_DATA_MSG_SET_ARG_COUNT(val, ac, data_type) \
+ ({ \
+ data_type __val = val; \
+ (__val = (__val & \
+ ~(DEBUG_DATA_MSG_ARG_COUNT_MASK << DEBUG_DATA_MSG_ARG_COUNT_SHIFT)) \
+ | ((ac) << DEBUG_DATA_MSG_ARG_COUNT_SHIFT)); })
+
+#define DEBUG_DATA_MSG_SET_LINE_NO(val, ln, type) \
+ ({ \
+ type __val = val; \
+ (__val = (__val & \
+ ~(DEBUG_DATA_MSG_LINE_NO_MASK << DEBUG_DATA_MSG_LINE_NO_SHIFT)) \
+ | ((ln) << DEBUG_DATA_MSG_LINE_NO_SHIFT)); })
+
+#define DEBUG_DATA_MSG_SET_TYPE(val, tp, type) \
+ ({ \
+ type __val = val; \
+ (__val = (__val & \
+ ~(DEBUG_DATA_MSG_TYPE_MASK << DEBUG_DATA_MSG_TYPE_SHIFT)) \
+ | ((tp) << DEBUG_DATA_MSG_TYPE_SHIFT)); })
+
+#define DEBUG_DATA_GET_TYPE(val) \
+ (((val) >> DEBUG_DATA_TYPE_SHIFT) & DEBUG_DATA_TYPE_MASK)
+#define DEBUG_DATA_TYPE_PERF_IS_INC_TIME(val) \
+ (((val) >> DEBUG_DATA_TYPE_PERF_INC_TIME_SHIFT) \
+	& DEBUG_DATA_TYPE_PERF_INC_TIME_MASK)
+#define DEBUG_DATA_MSG_GET_ARG_COUNT(val) \
+ (((val) >> DEBUG_DATA_MSG_ARG_COUNT_SHIFT) \
+ & DEBUG_DATA_MSG_ARG_COUNT_MASK)
+#define DEBUG_DATA_MSG_GET_LINE_NO(val) \
+ (((val) >> DEBUG_DATA_MSG_LINE_NO_SHIFT) \
+ & DEBUG_DATA_MSG_LINE_NO_MASK)
+#define DEBUG_DATA_MSG_GET_TYPE(val) \
+ (((val) >> DEBUG_DATA_MSG_TYPE_SHIFT) & DEBUG_DATA_MSG_TYPE_MASK)
+#define DEBUG_DATA_MSG_TYPE_IS_ASSERT(val) \
+ (DEBUG_DATA_MSG_GET_TYPE(val) == DEBUG_DATA_MSG_TYPE_ASSERT \
+ ? IMG_TRUE : IMG_FALSE)
+#define DEBUG_DATA_MSG_TYPE_IS_LOG(val) \
+ (DEBUG_DATA_MSG_GET_TYPE(val) == DEBUG_DATA_MSG_TYPE_LOG ? \
+ IMG_TRUE : IMG_FALSE)
+
+#define DEBUG_DATA_MSG_LAT(ln, ac, tp) \
+ (((ln) << DEBUG_DATA_MSG_LINE_NO_SHIFT) | \
+ ((ac) << DEBUG_DATA_MSG_ARG_COUNT_SHIFT) | \
+ ((tp) << DEBUG_DATA_MSG_TYPE_SHIFT))
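+/*
+ * Usage sketch (illustrative only): decoding a debug word received from the
+ * firmware. "word" and "line" are hypothetical names.
+ *
+ *	if (DEBUG_DATA_GET_TYPE(word) == DEBUG_DATA_TYPE_MSG &&
+ *	    DEBUG_DATA_MSG_TYPE_IS_ASSERT(word))
+ *		line = DEBUG_DATA_MSG_GET_LINE_NO(word);
+ */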
+/* FWBSP-mode specific defines. */
+#ifdef VDEC_USE_PVDEC_SEC
+/**
+ * FWBSP_ENC_BSTR_BUF_QUEUE_LEN - Suggested number of bitstream buffers submitted (queued)
+ * to firmware for processing at the same time.
+ */
+#define FWBSP_ENC_BSTR_BUF_QUEUE_LEN 1
+
+#endif /* VDEC_USE_PVDEC_SEC */
+
+#endif /* VDEC_USE_PVDEC_COMPATIBILITY */
+#endif /* FW_INTERFACE_H_ */
diff --git a/drivers/staging/media/vxd/decoder/h264fw_data.h b/drivers/staging/media/vxd/decoder/h264fw_data.h
new file mode 100644
index 000000000000..e098d27948d0
--- /dev/null
+++ b/drivers/staging/media/vxd/decoder/h264fw_data.h
@@ -0,0 +1,652 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Public data structures for the h264 parser firmware module.
+ *
+ * Copyright (c) Imagination Technologies Ltd.
+ * Copyright (c) 2021 Texas Instruments Incorporated - http://www.ti.com/
+ *
+ * Authors:
+ * Amit Makani <[email protected]>
+ *
+ * Re-written for upstreaming
+ * Sidraya Jayagond <[email protected]>
+ */
+
+/* Include shared header version here to replace the standard version */
+#include "h264fw_data_shared.h"
+
+#ifndef _H264FW_DATA_H_
+#define _H264FW_DATA_H_
+
+#include "vdecfw_shared.h"
+
+/* Maximum number of alternative CPB specifications in the stream */
+#define H264_MAXIMUMVALUEOFCPB_CNT 32
+
+/*
+ * The maximum DPB size is related to the number of MVC views supported.
+ * The size is defined in section H.10.2 of the H.264 spec.
+ * If the number of views is changed, the DPB size must be changed to match.
+ * The limits are as follows:
+ * NumViews:      1,  2,  4,  8, 16
+ * MaxDpbFrames: 16, 16, 32, 48, 64
+ */
+#ifdef H264_ENABLE_MVC
+#define H264FW_MAX_NUM_VIEWS 4
+#define H264FW_MAX_DPB_SIZE 32
+#define H264FW_MAX_NUM_MVC_REFS 16
+#else
+#define H264FW_MAX_NUM_VIEWS 1
+#define H264FW_MAX_DPB_SIZE 16
+#define H264FW_MAX_NUM_MVC_REFS 1
+#endif
+
+/* Maximum value for num_ref_frames_in_pic_order_cnt_cycle */
+#define H264FW_MAX_CYCLE_REF_FRAMES 256
+
+/* 4x4 scaling list size */
+#define H264FW_4X4_SIZE 16
+/* 8x8 scaling list size */
+#define H264FW_8X8_SIZE 64
+/* Number of 4x4 scaling lists */
+#define H264FW_NUM_4X4_LISTS 6
+/* Number of 8x8 scaling lists */
+#define H264FW_NUM_8X8_LISTS 6
+
+/* Number of reference picture lists */
+#define H264FW_MAX_REFPIC_LISTS 2
+
+/*
+ * The maximum number of slice groups
+ * remove if slice group map is prepared on the host
+ */
+#define H264FW_MAX_SLICE_GROUPS 8
+
+/* The maximum number of planes for 4:4:4 separate color plane streams */
+#define H264FW_MAX_PLANES 3
+
+#define H264_MAX_SGM_SIZE 8196
+
+#define IS_H264_HIGH_PROFILE(profile_idc, type) \
+ ({ \
+ type __profile_idc = profile_idc; \
+ ((__profile_idc) == H264_PROFILE_HIGH) || \
+ ((__profile_idc) == H264_PROFILE_HIGH10) || \
+ ((__profile_idc) == H264_PROFILE_HIGH422) || \
+ ((__profile_idc) == H264_PROFILE_HIGH444) || \
+ ((__profile_idc) == H264_PROFILE_CAVLC444) || \
+ ((__profile_idc) == H264_PROFILE_MVC_HIGH) || \
+		((__profile_idc) == H264_PROFILE_MVC_STEREO); })
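+/*
+ * Usage sketch (illustrative only): checking a parsed SPS for a High-family
+ * profile. The "sps" and "is_high" names are hypothetical.
+ *
+ *	is_high = IS_H264_HIGH_PROFILE(sps->profile_idc, unsigned char);
+ */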
+
+/*
+ * This type describes the H.264 NAL unit types
+ */
+enum h264_enaltype {
+ H264FW_NALTYPE_SLICE = 1,
+ H264FW_NALTYPE_IDRSLICE = 5,
+ H264FW_NALTYPE_SEI = 6,
+ H264FW_NALTYPE_SPS = 7,
+ H264FW_NALTYPE_PPS = 8,
+ H264FW_NALTYPE_AUD = 9,
+ H264FW_NALTYPE_EOSEQ = 10,
+ H264FW_NALTYPE_EOSTR = 11,
+ H264FW_NALTYPE_PREFIX = 14,
+ H264FW_NALTYPE_SUBSET_SPS = 15,
+ H264FW_NALTYPE_AUXILIARY_SLICE = 19,
+ H264FW_NALTYPE_EXTSLICE = 20,
+ H264FW_NALTYPE_EXTSLICE_DEPTH_VIEW = 21,
+ H264FW_NALTYPE_FORCE32BITS = 0x7FFFFFFFU
+};
+
+/*
+ * AVC Profile IDC definitions
+ */
+enum h264_eprofileidc {
+	/* YUV 4:4:4/14 "CAVLC 4:4:4" */
+	H264_PROFILE_CAVLC444 = 44,
+	/* YUV 4:2:0/8 "Baseline" */
+	H264_PROFILE_BASELINE = 66,
+	/* YUV 4:2:0/8 "Main" */
+	H264_PROFILE_MAIN = 77,
+ /* YUV 4:2:0/8 "Scalable" */
+ H264_PROFILE_SCALABLE = 83,
+ /* YUV 4:2:0/8 "Extended" */
+ H264_PROFILE_EXTENDED = 88,
+ /* YUV 4:2:0/8 "High" */
+ H264_PROFILE_HIGH = 100,
+ /* YUV 4:2:0/10 "High 10" */
+ H264_PROFILE_HIGH10 = 110,
+ /* YUV 4:2:0/8 "Multiview High" */
+ H264_PROFILE_MVC_HIGH = 118,
+ /* YUV 4:2:2/10 "High 4:2:2" */
+ H264_PROFILE_HIGH422 = 122,
+ /* YUV 4:2:0/8 "Stereo High" */
+ H264_PROFILE_MVC_STEREO = 128,
+ /* YUV 4:4:4/14 "High 4:4:4" */
+ H264_PROFILE_HIGH444 = 244,
+ H264_PROFILE_FORCE32BITS = 0x7FFFFFFFU
+};
+
+/*
+ * This type defines the constraint set flags
+ */
+enum h264fw_econstraint_flag {
+ /* Compatible with Baseline profile */
+ H264FW_CONSTRAINT_BASELINE_SHIFT = 7,
+ /* Compatible with Main profile */
+ H264FW_CONSTRAINT_MAIN_SHIFT = 6,
+ /* Compatible with Extended profile */
+ H264FW_CONSTRAINT_EXTENDED_SHIFT = 5,
+ /* Compatible with Intra profiles */
+ H264FW_CONSTRAINT_INTRA_SHIFT = 4,
+ /* Compatible with Multiview High profile */
+ H264FW_CONSTRAINT_MULTIHIGH_SHIFT = 3,
+ /* Compatible with Stereo High profile */
+ H264FW_CONSTRAINT_STEREOHIGH_SHIFT = 2,
+ /* Reserved flag */
+ H264FW_CONSTRAINT_RESERVED6_SHIFT = 1,
+ /* Reserved flag */
+ H264FW_CONSTRAINT_RESERVED7_SHIFT = 0,
+ H264FW_CONSTRAINT_FORCE32BITS = 0x7FFFFFFFU
+};
+
+/*
+ * This enum describes the reference status of an H.264 picture.
+ * Unpaired fields should have all eRefStatusX set to the same value
+ *
+ * For Frame, Mbaff, and Pair types individual fields and frame ref status
+ * should be set accordingly.
+ *
+ * eRefStatusFrame eRefStatusTop eRefStatusBottom
+ * UNUSED UNUSED UNUSED
+ * SHORTTERM SHORTTERM SHORTTERM
+ * LONGTERM LONGTERM LONGTERM
+ *
+ * UNUSED SHORT/LONGTERM UNUSED
+ * UNUSED UNUSED SHORT/LONGTERM
+ *
+ * SHORTTERM LONGTERM SHORTTERM
+ * SHORTTERM SHORTTERM LONGTERM
+ * NB: It is not clear from the spec if the Frame should be marked as short
+ * or long term in this case
+ */
+enum h264fw_ereference {
+ /* Picture is unused for reference */
+ H264FW_REF_UNUSED = 0,
+ /* used for short term reference */
+ H264FW_REF_SHORTTERM,
+ /* used for Long Term reference */
+ H264FW_REF_LONGTERM,
+ H264FW_REF_FORCE32BITS = 0x7FFFFFFFU
+};
+
+/*
+ * This type defines the picture structure.
+ */
+enum h264fw_epicture_type {
+ /* No valid picture */
+ H264FW_TYPE_NONE = 0,
+ /* Picture contains the top (even) lines of the frame */
+ H264FW_TYPE_TOP,
+ /* Picture contains the bottom (odd) lines of the frame */
+ H264FW_TYPE_BOTTOM,
+ /* Picture contains the entire frame */
+ H264FW_TYPE_FRAME,
+ /* Picture contains an MBAFF frame */
+ H264FW_TYPE_MBAFF,
+ /* Picture contains top and bottom lines of the frame */
+ H264FW_TYPE_PAIR,
+ H264FW_TYPE_FORCE32BITS = 0x7FFFFFFFU
+};
+
+/*
+ * This describes the SPS header data required by the H264 firmware that should
+ * be supplied by the Host.
+ */
+struct h264fw_sequence_ps {
+ /* syntax elements from SPS */
+ /* syntax element from bitstream - 8 bit */
+ unsigned char profile_idc;
+ /* syntax element from bitstream - 2 bit */
+ unsigned char chroma_format_idc;
+ /* syntax element from bitstream - 1 bit */
+ unsigned char separate_colour_plane_flag;
+ /* syntax element from bitstream - 3 bit */
+ unsigned char bit_depth_luma_minus8;
+ /* syntax element from bitstream - 3 bit */
+ unsigned char bit_depth_chroma_minus8;
+ /* syntax element from bitstream - 1 bit */
+ unsigned char delta_pic_order_always_zero_flag;
+ /* syntax element from bitstream - 4+ bit */
+ unsigned char log2_max_pic_order_cnt_lsb;
+ /* syntax element from bitstream - 5 bit */
+ unsigned char max_num_ref_frames;
+ /* syntax element from bitstream - 4+ bit */
+ unsigned char log2_max_frame_num;
+ /* syntax element from bitstream - 2 bit */
+ unsigned char pic_order_cnt_type;
+ /* syntax element from bitstream - 1 bit */
+ unsigned char frame_mbs_only_flag;
+ /* syntax element from bitstream - 1 bit */
+ unsigned char gaps_in_frame_num_value_allowed_flag;
+
+ /*
+	 * constraint_set0 to constraint_set7 flags as they occur in the
+	 * bitstream (including reserved values)
+ */
+ unsigned char constraint_set_flags;
+ /* syntax element from bitstream - 8 bit */
+ unsigned char level_idc;
+ /* syntax element from bitstream - 8 bit */
+ unsigned char num_ref_frames_in_pic_order_cnt_cycle;
+ /* syntax element from bitstream - 1 bit */
+ unsigned char mb_adaptive_frame_field_flag;
+ /* syntax element from bitstream - 32 bit */
+ int offset_for_non_ref_pic;
+ /* syntax element from bitstream - 32 bit */
+ int offset_for_top_to_bottom_field;
+
+ /* syntax element from bitstream */
+ unsigned int pic_width_in_mbs_minus1;
+ /* syntax element from bitstream */
+ unsigned int pic_height_in_map_units_minus1;
+ /* syntax element from bitstream - 1 bit */
+ unsigned char direct_8x8_inference_flag;
+ /* syntax element from bitstream */
+ unsigned char qpprime_y_zero_transform_bypass_flag;
+
+ /* syntax element from bitstream - 32 bit each */
+ int offset_for_ref_frame[H264FW_MAX_CYCLE_REF_FRAMES];
+
+ /* From VUI information */
+ unsigned char num_reorder_frames;
+ /*
+ * From VUI/MVC SEI, 0 indicates not set, any actual 0 value will be
+ * inferred by the firmware
+ */
+ unsigned char max_dec_frame_buffering;
+
+ /* From SPS MVC Extension - for the current view_id */
+ /* Number of views in this stream */
+ unsigned char num_views;
+	/* map of view_ids, ordered by VOIdx */
+ unsigned short view_ids[H264FW_MAX_NUM_VIEWS];
+
+ /* Disable VDMC horizontal/vertical filtering */
+ unsigned char disable_vdmc_filt;
+ /* Disable CABAC 4:4:4 4x4 transform as not available */
+ unsigned char transform4x4_mb_not_available;
+
+ /* anchor reference list */
+ unsigned short anchor_inter_view_reference_id_list[2]
+ [H264FW_MAX_NUM_VIEWS][H264FW_MAX_NUM_MVC_REFS];
+ /* nonanchor reference list */
+ unsigned short non_anchor_inter_view_reference_id_list[2]
+ [H264FW_MAX_NUM_VIEWS][H264FW_MAX_NUM_MVC_REFS];
+ /* number of elements in aui16AnchorInterViewReferenceIndiciesLX[] */
+ unsigned short num_anchor_refsx[2][H264FW_MAX_NUM_VIEWS];
+ /* number of elements in aui16NonAnchorInterViewReferenceIndiciesLX[] */
+ unsigned short num_non_anchor_refsx[2][H264FW_MAX_NUM_VIEWS];
+};
+
+/*
+ * This structure represents HRD parameters.
+ */
+struct h264fw_hrd {
+ /* cpb_cnt_minus1 */
+ unsigned char cpb_cnt_minus1;
+ /* bit_rate_scale */
+ unsigned char bit_rate_scale;
+ /* cpb_size_scale */
+ unsigned char cpb_size_scale;
+ /* bit_rate_value_minus1 */
+ unsigned int bit_rate_value_minus1[H264_MAXIMUMVALUEOFCPB_CNT];
+ /* cpb_size_value_minus1 */
+ unsigned int cpb_size_value_minus1[H264_MAXIMUMVALUEOFCPB_CNT];
+ /* cbr_flag */
+ unsigned char cbr_flag[H264_MAXIMUMVALUEOFCPB_CNT];
+ /* initial_cpb_removal_delay_length_minus1 */
+ unsigned char initial_cpb_removal_delay_length_minus1;
+ /* cpb_removal_delay_length_minus1 */
+ unsigned char cpb_removal_delay_length_minus1;
+ /* dpb_output_delay_length_minus1 */
+ unsigned char dpb_output_delay_length_minus1;
+ /* time_offset_length */
+ unsigned char time_offset_length;
+};
+
+/*
+ * This structure represents the VUI parameters data.
+ */
+struct h264fw_vui {
+ int aspect_ratio_info_present_flag;
+ unsigned char aspect_ratio_idc;
+ unsigned short sar_width;
+ unsigned short sar_height;
+ int overscan_info_present_flag;
+ int overscan_appropriate_flag;
+ int video_signal_type_present_flag;
+ unsigned char video_format;
+ int video_full_range_flag;
+ int colour_description_present_flag;
+ unsigned char colour_primaries;
+ unsigned char transfer_characteristics;
+ unsigned char matrix_coefficients;
+ int chroma_location_info_present_flag;
+ unsigned int chroma_sample_loc_type_top_field;
+ unsigned int chroma_sample_loc_type_bottom_field;
+ int timing_info_present_flag;
+ unsigned int num_units_in_tick;
+ unsigned int time_scale;
+ int fixed_frame_rate_flag;
+ int nal_hrd_parameters_present_flag;
+ struct h264fw_hrd nal_hrd_params;
+ int vcl_hrd_parameters_present_flag;
+ struct h264fw_hrd vcl_hrd_params;
+ int low_delay_hrd_flag;
+ int pic_struct_present_flag;
+ int bitstream_restriction_flag;
+ int motion_vectors_over_pic_boundaries_flag;
+ unsigned int max_bytes_per_pic_denom;
+ unsigned int max_bits_per_mb_denom;
+ unsigned int log2_max_mv_length_vertical;
+ unsigned int log2_max_mv_length_horizontal;
+ unsigned int num_reorder_frames;
+ unsigned int max_dec_frame_buffering;
+};
+
+/*
+ * This describes the HW specific SPS header data required by the H264
+ * firmware that should be supplied by the Host.
+ */
+struct h264fw_ddsequence_ps {
+ /* pre-packed registers derived from SPS */
+ /* Value for CR_VEC_ENTDEC_FE_CONTROL */
+ unsigned int regentdec_control;
+
+ /* NB: This register should contain the 4-bit SGM flag */
+ /* Value for CR_VEC_H264_FE_SPS0 & CR_VEC_H264_BE_SPS0 combined */
+ unsigned int reg_sps0;
+ /* Value of CR_VEC_H264_BE_INTRA_8x8 */
+ unsigned int reg_beintra;
+ /* Value of CR_VEC_H264_FE_CABAC444 */
+ unsigned int reg_fecaabac444;
+ /* Treat CABAC 4:4:4 4x4 transform as not available */
+	unsigned char transform4x4_mb_not_available;
+ /* Disable VDMC horizontal/vertical filtering */
+ unsigned char disable_vdmcfilt;
+};
+
+/*
+ * This describes the PPS header data required by the H264 firmware that should
+ * be supplied by the Host.
+ */
+struct h264fw_picture_ps {
+ /* syntax elements from the PPS */
+ /* syntax element from bitstream - 1 bit */
+ unsigned char deblocking_filter_control_present_flag;
+ /* syntax element from bitstream - 1 bit */
+ unsigned char transform_8x8_mode_flag;
+ /* syntax element from bitstream - 1 bit */
+ unsigned char entropy_coding_mode_flag;
+ /* syntax element from bitstream - 1 bit */
+ unsigned char redundant_pic_cnt_present_flag;
+
+ /* syntax element from bitstream - 2 bit */
+ unsigned char weighted_bipred_idc;
+ /* syntax element from bitstream - 1 bit */
+ unsigned char weighted_pred_flag;
+ /* syntax element from bitstream - 1 bit */
+ unsigned char pic_order_present_flag;
+
+ /* 26 + syntax element from bitstream - 7 bit */
+ unsigned char pic_init_qp;
+ /* syntax element from bitstream - 1 bit */
+ unsigned char constrained_intra_pred_flag;
+ /* syntax element from bitstream - 5 bit each */
+ unsigned char num_ref_lx_active_minus1[H264FW_MAX_REFPIC_LISTS];
+
+ /* syntax element from bitstream - 3 bit */
+ unsigned char slice_group_map_type;
+ /* syntax element from bitstream - 3 bit */
+ unsigned char num_slice_groups_minus1;
+ /* syntax element from bitstream - 13 bit */
+ unsigned short slice_group_change_rate_minus1;
+
+ /* syntax element from bitstream */
+ unsigned int chroma_qp_index_offset;
+ /* syntax element from bitstream */
+ unsigned int second_chroma_qp_index_offset;
+
+ /* scaling lists are derived from both SPS and PPS information */
+ /* but will change whenever the PPS changes */
+ /* The derived set of tables are associated here with the PPS */
+ /* NB: These are in H.264 order */
+ /* derived from SPS and PPS - 8 bit each */
+ unsigned char scalinglist4x4[H264FW_NUM_4X4_LISTS][H264FW_4X4_SIZE];
+ /* derived from SPS and PPS - 8 bit each */
+ unsigned char scalinglist8x8[H264FW_NUM_8X8_LISTS][H264FW_8X8_SIZE];
+};
+
+/*
+ * This describes the HW specific PPS header data required by the H264
+ * firmware that should be supplied by the Host.
+ */
+struct h264fw_dd_picture_ps {
+ /* values derived from the PPS */
+ /* Value for MSVDX_CMDS_SLICE_PARAMS_MODE_CONFIG */
+ unsigned char vdmc_mode_config;
+
+ /* pre-packed registers derived from the PPS */
+ /* Value for CR_VEC_H264_FE_PPS0 & CR_VEC_H264_BE_PPS0 combined */
+ unsigned int reg_pps0;
+
+ /*
+ * scaling lists are derived from both SPS and PPS information
+ * but will change whenever the PPS changes
+ * The derived set of tables are associated here with the PPS
+ * But this will become invalid if the SPS changes and will have to be
+ * recalculated
+ * These tables MUST be aligned on a 32-bit boundary
+ * NB: These are in MSVDX order
+ */
+ /* derived from SPS and PPS - 8 bit each */
+ unsigned char scalinglist4x4[H264FW_NUM_4X4_LISTS][H264FW_4X4_SIZE];
+ /* derived from SPS and PPS - 8 bit each */
+ unsigned char scalinglist8x8[H264FW_NUM_8X8_LISTS][H264FW_8X8_SIZE];
+};
+
+/*
+ * This describes the H.264 parser component "Header data", shown in the
+ * Firmware Memory Layout diagram. This data is required by the H264 firmware
+ * and should be supplied by the Host.
+ */
+struct h264fw_header_data {
+ /* Decode buffers and output control for the current picture */
+ /* Primary decode buffer base addresses */
+ struct vdecfw_image_buffer primary;
+ /* buffer base addresses for alternate output */
+ struct vdecfw_image_buffer alternate;
+ /* Output control: rotation, scaling, oold, etc. */
+ unsigned int pic_cmds[VDECFW_CMD_MAX];
+ /* Macroblock parameters base address for the picture */
+ unsigned int mbparams_base_address;
+
+ unsigned int mbparams_size_per_plane;
+
+ /* Buffers for context preload for colour plane switching (6.x.x) */
+	unsigned int preload_buffer_base_address[H264FW_MAX_PLANES];
+
+ /*
+ * slice group map should be calculated on Host
+ * (using some slice params) and base address provided here
+ */
+ /* Base address of active slice group map */
+ unsigned int slicegroupmap_base_address;
+
+ /* H264 specific control */
+ /* do second pass Intra Deblock on frame */
+	unsigned int do_old __attribute__((aligned(4)));
+	/* set to IMG_FALSE to disable second-pass deblock */
+	unsigned int two_pass_flag __attribute__((aligned(4)));
+	/* set to IMG_TRUE to disable MVC */
+	unsigned int disable_mvc __attribute__((aligned(4)));
+	/*
+	 * Do we have second PPS in uipSecondPPSInfoSource provided for the
+	 * second field
+	 */
+	unsigned int second_pps __attribute__((aligned(4)));
+};
+
+/*
+ * This describes an H.264 picture. It is part of the Context data
+ */
+struct h264fw_picture {
+ /* Primary (reconstructed) picture buffers */
+ struct vdecfw_image_buffer primary;
+ /* Secondary (alternative) picture buffers */
+ struct vdecfw_image_buffer alternate;
+ /* Macroblock parameters base address for the picture */
+ unsigned int mbparams_base_address;
+
+ /* Unique ID for this picture */
+ unsigned int transaction_id;
+ /* Picture type */
+	enum h264fw_epicture_type picture_type;
+
+ /* Reference status of the picture */
+ enum h264fw_ereference ref_status_bottom;
+ /* Reference status of the picture */
+ enum h264fw_ereference ref_status_top;
+ /* Reference status of the picture */
+ enum h264fw_ereference ref_status_frame;
+
+ /* Frame Number */
+ unsigned int frame_number;
+ /* Short term reference info */
+	int frame_number_wrap;
+ /* long term reference number - should be 8-bit */
+ unsigned int longterm_frame_idx;
+
+ /* Top field order count for this picture */
+ int top_field_order_count;
+ /* Bottom field order count for this picture */
+ int bottom_field_order_count;
+ /* MVC view_id */
+ unsigned short view_id;
+ /*
+	 * When the picture is in the DPB: the offset to use into the MSVDX
+	 * DPB register table when the current picture is in the same view as
+	 * this one.
+ */
+ unsigned char view_dpb_offset;
+ /* Flags for this picture for the display process */
+ unsigned char display_flags;
+
+ /* IMG_FALSE if sent to display, or otherwise not needed for display */
+ unsigned char needed_for_output;
+};
+
+/*
+ * This structure describes frame data for POC calculation
+ */
+struct h264fw_poc_picture_data {
+ /* type 0,1,2 */
+ unsigned char mmco_5_flag;
+
+ /* type 0 */
+ unsigned char bottom_field_flag;
+ unsigned short pic_order_cnt_lsb;
+ int top_field_order_count;
+ int pic_order_count_msb;
+
+ /* type 1,2 */
+	short frame_num;
+ int frame_num_offset;
+
+ /* output */
+	int bottom_field_order_count;
+};
+
+/*
+ * This structure describes picture data for determining Complementary
+ * Field Pairs
+ */
+struct h264fw_last_pic_data {
+ /* Unique ID for this picture */
+ unsigned int transaction_id;
+ /* Picture type */
+ enum h264fw_epicture_type picture_type;
+ /* Reference status of the picture */
+ enum h264fw_ereference ref_status_frame;
+ /* Frame Number */
+ unsigned int frame_number;
+
+ unsigned int luma_recon;
+ unsigned int chroma_recon;
+ unsigned int chroma_2_recon;
+ unsigned int luma_alter;
+ unsigned int chroma_alter;
+ unsigned int chroma_2_alter;
+ struct vdecfw_image_buffer primary;
+ struct vdecfw_image_buffer alternate;
+ unsigned int mbparams_base_address;
+ /* Top field order count for this picture */
+ int top_field_order_count;
+ /* Bottom field order count for this picture */
+ int bottom_field_order_count;
+};
+
+/*
+ * This describes the H.264 parser component "Context data", shown in the
+ * Firmware Memory Layout diagram. This data is the state preserved across
+ * pictures. It is loaded and saved by the Firmware, but requires the host to
+ * provide buffer(s) for this.
+ */
+struct h264fw_context_data {
+ /* Decoded Picture Buffer */
+ struct h264fw_picture dpb[H264FW_MAX_DPB_SIZE];
+ /*
+	 * Inter-view reference components - also holds details of the previous
+	 * picture for any particular view, which can be used to determine
+	 * complementary field pairs
+ */
+ struct h264fw_picture interview_prediction_ref[H264FW_MAX_NUM_VIEWS];
+ /* previous ref pic for type0, previous pic for type1&2 */
+ struct h264fw_poc_picture_data prev_poc_pic_data[H264FW_MAX_NUM_VIEWS];
+ /* previous picture information to detect complementary field pairs */
+ struct h264fw_last_pic_data last_pic_data[H264FW_MAX_NUM_VIEWS];
+ struct h264fw_last_pic_data last_displayed_pic_data
+ [H264FW_MAX_NUM_VIEWS];
+
+ /* previous reference frame number for each view */
+ unsigned short prev_ref_frame_num[H264FW_MAX_NUM_VIEWS];
+ /* Bitmap of used slots in each view DPB */
+ unsigned short dpb_bitmap[H264FW_MAX_NUM_VIEWS];
+
+ /* DPB size */
+ unsigned int dpb_size;
+ /* Number of pictures in DPB */
+ unsigned int dpb_fullness;
+
+ unsigned char prev_display_flags;
+ int prev_display;
+ int prev_release;
+ /* Active parameter sets */
+ /* Sequence Parameter Set data */
+ struct h264fw_sequence_ps sps;
+ /* Picture Parameter Set data */
+ struct h264fw_picture_ps pps;
+ /*
+ * Picture Parameter Set data for second field when in the same buffer
+ */
+ struct h264fw_picture_ps second_pps;
+
+ /* Set if stream is MVC */
+ int mvc;
+ /* DPB long term reference information */
+ int max_longterm_frame_idx[H264FW_MAX_NUM_VIEWS];
+};
+
+#endif /* _H264FW_DATA_H_ */
diff --git a/drivers/staging/media/vxd/decoder/hevcfw_data.h b/drivers/staging/media/vxd/decoder/hevcfw_data.h
new file mode 100644
index 000000000000..cdfe8d067d90
--- /dev/null
+++ b/drivers/staging/media/vxd/decoder/hevcfw_data.h
@@ -0,0 +1,472 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Public data structures for the hevc parser firmware module.
+ *
+ * Copyright (c) Imagination Technologies Ltd.
+ * Copyright (c) 2021 Texas Instruments Incorporated - http://www.ti.com/
+ *
+ * Authors:
+ * Angela Stegmaier <[email protected]>
+ *
+ * Re-written for upstreaming
+ * Sidraya Jayagond <[email protected]>
+ */
+
+/* Include shared header version here to replace the standard version. */
+#include "hevcfw_data_shared.h"
+
+#ifndef _HEVCFW_DATA_H_
+#define _HEVCFW_DATA_H_
+
+#include "vdecfw_shared.h"
+
+#define HEVC_MAX_SPS_COUNT 16
+#define HEVC_MAX_PPS_COUNT 64
+
+#define HEVCFW_MAX_NUM_PROFILE_IDC 32
+
+#define HEVCFW_MAX_NUM_REF_PICS 16
+#define HEVCFW_MAX_NUM_ST_REF_PIC_SETS 65
+#define HEVCFW_MAX_NUM_LT_REF_PICS 32
+#define HEVCFW_MAX_NUM_SUBLAYERS 7
+#define HEVCFW_SCALING_LISTS_BUFSIZE 256
+#define HEVCFW_MAX_TILE_COLS 20
+#define HEVCFW_MAX_TILE_ROWS 22
+
+#define HEVCFW_MAX_CHROMA_QP 6
+
+#define HEVCFW_MAX_DPB_SIZE HEVCFW_MAX_NUM_REF_PICS
+#define HEVCFW_REF_PIC_LIST0 0
+#define HEVCFW_REF_PIC_LIST1 1
+#define HEVCFW_NUM_REF_PIC_LISTS 2
+#define HEVCFW_NUM_DPB_DIFF_REGS 4
+
+/* non-critical errors */
+#define HEVC_ERR_INVALID_VALUE (20)
+#define HEVC_ERR_CORRECTION_VALIDVALUE (21)
+
+#define HEVC_IS_ERR_CRITICAL(err) \
+ ((err) > HEVC_ERR_CORRECTION_VALIDVALUE ? 1 : 0)
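+/*
+ * For example (per the threshold above and the critical codes below):
+ *
+ *	HEVC_IS_ERR_CRITICAL(HEVC_ERR_INVALID_VALUE)    evaluates to 0
+ *	HEVC_IS_ERR_CRITICAL(HEVC_ERR_NO_SEQUENCE_HDR)  evaluates to 1
+ */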
+
+/* critical errors */
+#define HEVC_ERR_INV_VIDEO_DIMENSION (22)
+#define HEVC_ERR_NO_SEQUENCE_HDR (23)
+#define HEVC_ERR_SPS_EXT_UNSUPP (24 | VDECFW_UNSUPPORTED_CODE_BASE)
+#define HEVC_ERR_PPS_EXT_UNSUPP (25 | VDECFW_UNSUPPORTED_CODE_BASE)
+
+#define HEVC_ERR_FAILED_TO_STORE_VPS (100)
+#define HEVC_ERR_FAILED_TO_STORE_SPS (101)
+#define HEVC_ERR_FAILED_TO_STORE_PPS (102)
+
+#define HEVC_ERR_FAILED_TO_FETCH_VPS (103)
+#define HEVC_ERR_FAILED_TO_FETCH_SPS (104)
+#define HEVC_ERR_FAILED_TO_FETCH_PPS (105)
+/* HEVC Scaling Lists (all values are maximum possible ones) */
+#define HEVCFW_SCALING_LIST_NUM_SIZES 4
+#define HEVCFW_SCALING_LIST_NUM_MATRICES 6
+#define HEVCFW_SCALING_LIST_MATRIX_SIZE 64
+
+struct hevcfw_scaling_listdata {
+ unsigned char dc_coeffs
+ [HEVCFW_SCALING_LIST_NUM_SIZES - 2]
+ [HEVCFW_SCALING_LIST_NUM_MATRICES];
+ unsigned char lists
+ [HEVCFW_SCALING_LIST_NUM_SIZES]
+ [HEVCFW_SCALING_LIST_NUM_MATRICES]
+ [HEVCFW_SCALING_LIST_MATRIX_SIZE];
+};
+
+/* HEVC Video Profile_Tier_Level */
+struct hevcfw_profile_tier_level {
+ unsigned char general_profile_space;
+ unsigned char general_tier_flag;
+ unsigned char general_profile_idc;
+ unsigned char general_profile_compatibility_flag[HEVCFW_MAX_NUM_PROFILE_IDC];
+ unsigned char general_progressive_source_flag;
+ unsigned char general_interlaced_source_flag;
+ unsigned char general_non_packed_constraint_flag;
+ unsigned char general_frame_only_constraint_flag;
+ unsigned char general_max_12bit_constraint_flag;
+ unsigned char general_max_10bit_constraint_flag;
+ unsigned char general_max_8bit_constraint_flag;
+ unsigned char general_max_422chroma_constraint_flag;
+ unsigned char general_max_420chroma_constraint_flag;
+ unsigned char general_max_monochrome_constraint_flag;
+ unsigned char general_intra_constraint_flag;
+ unsigned char general_one_picture_only_constraint_flag;
+ unsigned char general_lower_bit_rate_constraint_flag;
+ unsigned char general_level_idc;
+ unsigned char sub_layer_profile_present_flag[HEVCFW_MAX_NUM_SUBLAYERS - 1];
+ unsigned char sub_layer_level_present_flag[HEVCFW_MAX_NUM_SUBLAYERS - 1];
+ unsigned char sub_layer_profile_space[HEVCFW_MAX_NUM_SUBLAYERS - 1];
+ unsigned char sub_layer_tier_flag[HEVCFW_MAX_NUM_SUBLAYERS - 1];
+ unsigned char sub_layer_profile_idc[HEVCFW_MAX_NUM_SUBLAYERS - 1];
+ unsigned char sub_layer_profile_compatibility_flag[HEVCFW_MAX_NUM_SUBLAYERS -
+ 1][HEVCFW_MAX_NUM_PROFILE_IDC];
+ unsigned char sub_layer_progressive_source_flag[HEVCFW_MAX_NUM_SUBLAYERS - 1];
+ unsigned char sub_layer_interlaced_source_flag[HEVCFW_MAX_NUM_SUBLAYERS - 1];
+ unsigned char sub_layer_non_packed_constraint_flag[HEVCFW_MAX_NUM_SUBLAYERS - 1];
+ unsigned char sub_layer_frame_only_constraint_flag[HEVCFW_MAX_NUM_SUBLAYERS - 1];
+ unsigned char sub_layer_max_12bit_constraint_flag[HEVCFW_MAX_NUM_SUBLAYERS - 1];
+ unsigned char sub_layer_max_10bit_constraint_flag[HEVCFW_MAX_NUM_SUBLAYERS - 1];
+ unsigned char sub_layer_max_8bit_constraint_flag[HEVCFW_MAX_NUM_SUBLAYERS - 1];
+ unsigned char sub_layer_max_422chroma_constraint_flag[HEVCFW_MAX_NUM_SUBLAYERS - 1];
+ unsigned char sub_layer_max_420chroma_constraint_flag[HEVCFW_MAX_NUM_SUBLAYERS - 1];
+ unsigned char sub_layer_max_monochrome_constraint_flag[HEVCFW_MAX_NUM_SUBLAYERS - 1];
+ unsigned char sub_layer_intra_constraint_flag[HEVCFW_MAX_NUM_SUBLAYERS - 1];
+ unsigned char sub_layer_one_picture_only_constraint_flag[HEVCFW_MAX_NUM_SUBLAYERS - 1];
+ unsigned char sub_layer_lower_bit_rate_constraint_flag[HEVCFW_MAX_NUM_SUBLAYERS - 1];
+ unsigned char sub_layer_level_idc[HEVCFW_MAX_NUM_SUBLAYERS - 1];
+};
+
+struct hevcfw_video_ps {
+ int is_different;
+ int is_sent;
+ int is_available;
+ unsigned char vps_video_parameter_set_id;
+ unsigned char vps_reserved_three_2bits;
+ unsigned char vps_max_layers_minus1;
+ unsigned char vps_max_sub_layers_minus1;
+ unsigned char vps_temporal_id_nesting_flag;
+ unsigned short vps_reserved_0xffff_16bits;
+ struct hevcfw_profile_tier_level profile_tier_level;
+};
+
+/* HEVC Video Usability Information */
+struct hevcfw_vui_params {
+ unsigned char aspect_ratio_info_present_flag;
+ unsigned char aspect_ratio_idc;
+ unsigned short sar_width;
+ unsigned short sar_height;
+ unsigned char overscan_info_present_flag;
+ unsigned char overscan_appropriate_flag;
+ unsigned char video_signal_type_present_flag;
+ unsigned char video_format;
+ unsigned char video_full_range_flag;
+ unsigned char colour_description_present_flag;
+ unsigned char colour_primaries;
+ unsigned char transfer_characteristics;
+ unsigned char matrix_coeffs;
+ unsigned char chroma_loc_info_present_flag;
+ unsigned char chroma_sample_loc_type_top_field;
+ unsigned char chroma_sample_loc_type_bottom_field;
+ unsigned char neutral_chroma_indication_flag;
+ unsigned char field_seq_flag;
+ unsigned char frame_field_info_present_flag;
+ unsigned char default_display_window_flag;
+ unsigned short def_disp_win_left_offset;
+ unsigned short def_disp_win_right_offset;
+ unsigned short def_disp_win_top_offset;
+ unsigned short def_disp_win_bottom_offset;
+ unsigned char vui_timing_info_present_flag;
+ unsigned int vui_num_units_in_tick;
+ unsigned int vui_time_scale;
+};
+
+/* HEVC Short Term Reference Picture Set */
+struct hevcfw_short_term_ref_picset {
+ unsigned char num_negative_pics;
+ unsigned char num_positive_pics;
+ short delta_poc_s0[HEVCFW_MAX_NUM_REF_PICS];
+ short delta_poc_s1[HEVCFW_MAX_NUM_REF_PICS];
+ unsigned char used_bycurr_pic_s0[HEVCFW_MAX_NUM_REF_PICS];
+ unsigned char used_bycurr_pic_s1[HEVCFW_MAX_NUM_REF_PICS];
+ unsigned char num_delta_pocs;
+};
+
+/*
+ * This describes the SPS header data required by the HEVC firmware that should
+ * be supplied by the Host.
+ */
+struct hevcfw_sequence_ps {
+ /* syntax elements from SPS */
+ unsigned short pic_width_in_luma_samples;
+ unsigned short pic_height_in_luma_samples;
+ unsigned char num_short_term_ref_pic_sets;
+ unsigned char num_long_term_ref_pics_sps;
+ unsigned short lt_ref_pic_poc_lsb_sps[HEVCFW_MAX_NUM_LT_REF_PICS];
+ unsigned char used_by_curr_pic_lt_sps_flag[HEVCFW_MAX_NUM_LT_REF_PICS];
+ struct hevcfw_short_term_ref_picset st_rps_list[HEVCFW_MAX_NUM_ST_REF_PIC_SETS];
+ unsigned char sps_max_sub_layers_minus1;
+ unsigned char sps_max_dec_pic_buffering_minus1[HEVCFW_MAX_NUM_SUBLAYERS];
+ unsigned char sps_max_num_reorder_pics[HEVCFW_MAX_NUM_SUBLAYERS];
+ unsigned int sps_max_latency_increase_plus1[HEVCFW_MAX_NUM_SUBLAYERS];
+ unsigned char max_transform_hierarchy_depth_inter;
+ unsigned char max_transform_hierarchy_depth_intra;
+ unsigned char log2_diff_max_min_transform_block_size;
+ unsigned char log2_min_transform_block_size_minus2;
+ unsigned char log2_diff_max_min_luma_coding_block_size;
+ unsigned char log2_min_luma_coding_block_size_minus3;
+ unsigned char chroma_format_idc;
+ unsigned char separate_colour_plane_flag;
+ unsigned char num_extra_slice_header_bits;
+ unsigned char log2_max_pic_order_cnt_lsb_minus4;
+ unsigned char long_term_ref_pics_present_flag;
+ unsigned char sample_adaptive_offset_enabled_flag;
+ unsigned char sps_temporal_mvp_enabled_flag;
+ unsigned char bit_depth_luma_minus8;
+ unsigned char bit_depth_chroma_minus8;
+ unsigned char pcm_sample_bit_depth_luma_minus1;
+ unsigned char pcm_sample_bit_depth_chroma_minus1;
+ unsigned char log2_min_pcm_luma_coding_block_size_minus3;
+ unsigned char log2_diff_max_min_pcm_luma_coding_block_size;
+ unsigned char pcm_loop_filter_disabled_flag;
+ unsigned char amp_enabled_flag;
+ unsigned char pcm_enabled_flag;
+ unsigned char strong_intra_smoothing_enabled_flag;
+ unsigned char scaling_list_enabled_flag;
+ unsigned char transform_skip_rotation_enabled_flag;
+ unsigned char transform_skip_context_enabled_flag;
+ unsigned char implicit_rdpcm_enabled_flag;
+ unsigned char explicit_rdpcm_enabled_flag;
+ unsigned char extended_precision_processing_flag;
+ unsigned char intra_smoothing_disabled_flag;
+ unsigned char high_precision_offsets_enabled_flag;
+ unsigned char persistent_rice_adaptation_enabled_flag;
+ unsigned char cabac_bypass_alignment_enabled_flag;
+ /* derived elements */
+ unsigned int pic_size_in_ctbs_y;
+ unsigned short pic_height_in_ctbs_y;
+ unsigned short pic_width_in_ctbs_y;
+ unsigned char ctb_size_y;
+ unsigned char ctb_log2size_y;
+ int max_pic_order_cnt_lsb;
+ unsigned int sps_max_latency_pictures[HEVCFW_MAX_NUM_SUBLAYERS];
+ unsigned char pps_seq_parameter_set_id;
+ unsigned char sps_video_parameter_set_id;
+ unsigned char sps_temporal_id_nesting_flag;
+ unsigned char sps_seq_parameter_set_id;
+ /* local */
+ unsigned char conformance_window_flag;
+ unsigned short conf_win_left_offset;
+ unsigned short conf_win_right_offset;
+ unsigned short conf_win_top_offset;
+ unsigned short conf_win_bottom_offset;
+ unsigned char sps_sub_layer_ordering_info_present_flag;
+ unsigned char sps_scaling_list_data_present_flag;
+ unsigned char vui_parameters_present_flag;
+ unsigned char sps_extension_present_flag;
+ struct hevcfw_vui_params vui_params;
+ /* derived elements */
+ unsigned char sub_width_c;
+ unsigned char sub_height_c;
+ struct hevcfw_profile_tier_level profile_tier_level;
+ struct hevcfw_scaling_listdata scaling_listdata;
+};
+
+/*
+ * This describes the HEVC parser component "Header data", shown in the
+ * Firmware Memory Layout diagram. This data is required by the HEVC firmware
+ * and should be supplied by the Host.
+ */
+struct hevcfw_headerdata {
+ /* Decode buffers and output control for the current picture */
+ /* Primary decode buffer base addresses */
+ struct vdecfw_image_buffer primary;
+ /* buffer base addresses for alternate output */
+ struct vdecfw_image_buffer alternate;
+ /* address of buffer for temporal mv params */
+ unsigned int temporal_outaddr;
+};
+
+/*
+ * This describes the PPS header data required by the HEVC firmware that should
+ * be supplied by the Host.
+ */
+struct hevcfw_picture_ps {
+ /* syntax elements from the PPS */
+ unsigned char pps_pic_parameter_set_id;
+ unsigned char num_tile_columns_minus1;
+ unsigned char num_tile_rows_minus1;
+ unsigned char diff_cu_qp_delta_depth;
+ unsigned char init_qp_minus26;
+ unsigned char pps_beta_offset_div2;
+ unsigned char pps_tc_offset_div2;
+ unsigned char pps_cb_qp_offset;
+ unsigned char pps_cr_qp_offset;
+ unsigned char log2_parallel_merge_level_minus2;
+ unsigned char dependent_slice_segments_enabled_flag;
+ unsigned char output_flag_present_flag;
+ unsigned char num_extra_slice_header_bits;
+ unsigned char lists_modification_present_flag;
+ unsigned char cabac_init_present_flag;
+ unsigned char weighted_pred_flag;
+ unsigned char weighted_bipred_flag;
+ unsigned char pps_slice_chroma_qp_offsets_present_flag;
+ unsigned char deblocking_filter_override_enabled_flag;
+ unsigned char tiles_enabled_flag;
+ unsigned char entropy_coding_sync_enabled_flag;
+ unsigned char slice_segment_header_extension_present_flag;
+ unsigned char transquant_bypass_enabled_flag;
+ unsigned char cu_qp_delta_enabled_flag;
+ unsigned char transform_skip_enabled_flag;
+ unsigned char sign_data_hiding_enabled_flag;
+ unsigned char num_ref_idx_l0_default_active_minus1;
+ unsigned char num_ref_idx_l1_default_active_minus1;
+ unsigned char constrained_intra_pred_flag;
+ unsigned char pps_deblocking_filter_disabled_flag;
+ unsigned char pps_loop_filter_across_slices_enabled_flag;
+ unsigned char loop_filter_across_tiles_enabled_flag;
+ /* rewritten from SPS, maybe at some point we could get rid of this */
+ unsigned char scaling_list_enabled_flag;
+ unsigned char log2_max_transform_skip_block_size_minus2;
+ unsigned char cross_component_prediction_enabled_flag;
+ unsigned char chroma_qp_offset_list_enabled_flag;
+ unsigned char diff_cu_chroma_qp_offset_depth;
+ /*
+	 * PVDEC derived elements. HEVCFW_SCALING_LISTS_BUFSIZE is
+	 * multiplied by 2 to ensure that there is space for the address of
+	 * each element. These addresses are filled in by the lower layer.
+ */
+ unsigned int scaling_lists[HEVCFW_SCALING_LISTS_BUFSIZE * 2];
+ /* derived elements */
+ unsigned short col_bd[HEVCFW_MAX_TILE_COLS + 1];
+ unsigned short row_bd[HEVCFW_MAX_TILE_ROWS + 1];
+
+ unsigned char chroma_qp_offset_list_len_minus1;
+ unsigned char cb_qp_offset_list[HEVCFW_MAX_CHROMA_QP];
+ unsigned char cr_qp_offset_list[HEVCFW_MAX_CHROMA_QP];
+
+ unsigned char uniform_spacing_flag;
+ unsigned char column_width_minus1[HEVCFW_MAX_TILE_COLS];
+ unsigned char row_height_minus1[HEVCFW_MAX_TILE_ROWS];
+
+ unsigned char pps_seq_parameter_set_id;
+ unsigned char deblocking_filter_control_present_flag;
+ unsigned char pps_scaling_list_data_present_flag;
+ unsigned char pps_extension_present_flag;
+
+ struct hevcfw_scaling_listdata scaling_list;
+};
+
+/* This enum determines reference picture status */
+enum hevcfw_reference_type {
+ HEVCFW_REF_UNUSED = 0,
+ HEVCFW_REF_SHORTTERM,
+ HEVCFW_REF_LONGTERM,
+ HEVCFW_REF_FORCE32BITS = 0x7FFFFFFFU
+};
+
+/* This describes an HEVC picture. It is part of the Context data */
+struct hevcfw_picture {
+ /* Primary (reconstructed) picture buffers */
+ struct vdecfw_image_buffer primary;
+ /* Secondary (alternative) picture buffers */
+ struct vdecfw_image_buffer alternate;
+ /* Unique ID for this picture */
+ unsigned int transaction_id;
+	/* NAL unit type of the first slice segment header; determines the picture type */
+ unsigned char nalunit_type;
+ /* Picture Order Count (frame number) */
+ int pic_order_cnt_val;
+ /* Slice Picture Order Count Lsb */
+ int slice_pic_ordercnt_lsb;
+ unsigned char pic_output_flag;
+ /* information about long-term pictures */
+ unsigned short dpb_longterm_flags;
+ unsigned int dpb_pic_order_diff[HEVCFW_NUM_DPB_DIFF_REGS];
+ /* address of buffer for temporal mv params */
+ unsigned int temporal_outaddr;
+ /* worst case Dpb diff for the current pic */
+ unsigned int dpb_diff;
+};
+
+/*
+ * This is a wrapper for a picture to hold it in a Decoded Picture Buffer
+ * for further reference
+ */
+struct hevcfw_picture_in_dpb {
+ /* DPB data about the picture */
+ enum hevcfw_reference_type ref_type;
+ unsigned char valid;
+ unsigned char needed_for_output;
+ unsigned char pic_latency_count;
+ /* Picture itself */
+ struct hevcfw_picture picture;
+};
+
+/*
+ * This describes an HEVC's Decoded Picture Buffer (DPB).
+ * It is part of the Context data
+ */
+#define HEVCFW_DPB_IDX_INVALID -1
+
+struct hevcfw_decoded_picture_buffer {
+ /* reference pictures */
+ struct hevcfw_picture_in_dpb pictures[HEVCFW_MAX_DPB_SIZE];
+ /* organizational data of DPB */
+ unsigned int fullness;
+};
+
+/*
+ * This describes an HEVC's Reference Picture Set (RPS).
+ * It is part of the Context data
+ */
+struct hevcfw_reference_picture_set {
+ /* sizes of poc lists */
+ unsigned char num_pocst_curr_before;
+ unsigned char num_pocst_curr_after;
+ unsigned char num_pocst_foll;
+ unsigned char num_poclt_curr;
+ unsigned char num_poclt_foll;
+ /* poc lists */
+ int pocst_curr_before[HEVCFW_MAX_NUM_REF_PICS];
+ int pocst_curr_after[HEVCFW_MAX_NUM_REF_PICS];
+ int pocst_foll[HEVCFW_MAX_NUM_REF_PICS];
+ int poclt_curr[HEVCFW_MAX_NUM_REF_PICS];
+ int poclt_foll[HEVCFW_MAX_NUM_REF_PICS];
+ /* derived elements */
+ unsigned char curr_delta_pocmsb_presentflag[HEVCFW_MAX_NUM_REF_PICS];
+ unsigned char foll_delta_pocmsb_presentflag[HEVCFW_MAX_NUM_REF_PICS];
+ /* reference picture sets: indices in DPB */
+ unsigned char ref_picsetlt_curr[HEVCFW_MAX_NUM_REF_PICS];
+ unsigned char ref_picsetlt_foll[HEVCFW_MAX_NUM_REF_PICS];
+ unsigned char ref_picsetst_curr_before[HEVCFW_MAX_NUM_REF_PICS];
+ unsigned char ref_picsetst_curr_after[HEVCFW_MAX_NUM_REF_PICS];
+ unsigned char ref_picsetst_foll[HEVCFW_MAX_NUM_REF_PICS];
+};
+
+/*
+ * This describes the HEVC parser component "Context data", shown in the
+ * Firmware Memory Layout diagram. This data is the state preserved across
+ * pictures. It is loaded and saved by the Firmware, but requires the host to
+ * provide buffer(s) for this.
+ */
+struct hevcfw_ctx_data {
+ struct hevcfw_sequence_ps sps;
+ struct hevcfw_picture_ps pps;
+ /*
+ * data from last picture with TemporalId = 0 that is not a RASL, RADL
+ * or sub-layer non-reference picture
+ */
+ int prev_pic_order_cnt_lsb;
+ int prev_pic_order_cnt_msb;
+ unsigned char last_irapnorasl_output_flag;
+ /*
+ * Decoded Pictures Buffer holds information about decoded pictures
+ * needed for further INTER decoding
+ */
+ struct hevcfw_decoded_picture_buffer dpb;
+ /* Reference Picture Set is determined on per-picture basis */
+ struct hevcfw_reference_picture_set rps;
+ /*
+ * Reference Picture List is determined using data from Reference
+ * Picture Set and from Slice (Segment) Header on per-slice basis
+ */
+ unsigned char ref_pic_list[HEVCFW_NUM_REF_PIC_LISTS][HEVCFW_MAX_NUM_REF_PICS];
+ /*
+	 * Reference Picture List used to send the reflist to the host; the
+	 * only difference is that missing references are marked
+	 * with HEVCFW_DPB_IDX_INVALID
+ */
+ unsigned char ref_pic_listhlp[HEVCFW_NUM_REF_PIC_LISTS][HEVCFW_MAX_NUM_REF_PICS];
+
+ unsigned int pic_count;
+ unsigned int slice_segment_count;
+	/* An EOS NAL was detected and no new picture has been seen yet */
+ int eos_detected;
+	/* This is the first picture after an EOS NAL */
+ int first_after_eos;
+};
+
+#endif /* _HEVCFW_DATA_H_ */
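For illustration, a minimal sketch (not part of this patch) of how a host-side consumer might walk the helper reference list above, treating HEVCFW_DPB_IDX_INVALID entries as missing references. The helper name count_valid_l0_refs is hypothetical, and the signed cast assumes the invalid marker is stored as 0xFF in the unsigned array.

/* Hypothetical helper: count usable L0 references in the helper reflist. */
static unsigned int count_valid_l0_refs(const struct hevcfw_ctx_data *ctx)
{
	unsigned int i, valid = 0;

	for (i = 0; i < HEVCFW_MAX_NUM_REF_PICS; i++) {
		/* Missing references are marked with HEVCFW_DPB_IDX_INVALID. */
		signed char idx = (signed char)ctx->ref_pic_listhlp[0][i];

		if (idx == HEVCFW_DPB_IDX_INVALID || idx >= HEVCFW_MAX_DPB_SIZE)
			continue;
		if (ctx->dpb.pictures[idx].valid)
			valid++;
	}

	return valid;
}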
diff --git a/drivers/staging/media/vxd/decoder/img_pixfmts.h b/drivers/staging/media/vxd/decoder/img_pixfmts.h
new file mode 100644
index 000000000000..f21c8f9da4b5
--- /dev/null
+++ b/drivers/staging/media/vxd/decoder/img_pixfmts.h
@@ -0,0 +1,195 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * VXD DEC SYSDEV and UI Interface header
+ *
+ * Copyright (c) Imagination Technologies Ltd.
+ * Copyright (c) 2021 Texas Instruments Incorporated - http://www.ti.com/
+ *
+ * Authors:
+ * Amit Makani <[email protected]>
+ */
+
+#ifndef __IMG_PIXFMTS_H__
+#define __IMG_PIXFMTS_H__
+/*
+ * @brief Old pixel format definition
+ *
+ * @note These definitions differ from the HW documentation (current vs. HW doc):
+ * @li PL8 is defined as PL111
+ * @li PL12 is sometimes used wrongly for monochrome formats instead of PL_Y
+ */
+enum img_pixfmt {
+ IMG_PIXFMT_CLUT1 = 0,
+ IMG_PIXFMT_CLUT2 = 1,
+ IMG_PIXFMT_CLUT4 = 2,
+ IMG_PIXFMT_I4A4 = 3,
+ IMG_PIXFMT_I8A8 = 4,
+ IMG_PIXFMT_A8I8 = 51,
+ IMG_PIXFMT_RGB8 = 5,
+ IMG_PIXFMT_RGB332 = 6,
+ IMG_PIXFMT_RGB555 = 7,
+ IMG_PIXFMT_ARGB4444 = 8,
+ IMG_PIXFMT_ABGR4444 = 57,
+ IMG_PIXFMT_RGBA4444 = 58,
+ IMG_PIXFMT_BGRA4444 = 59,
+ IMG_PIXFMT_ARGB1555 = 9,
+ IMG_PIXFMT_ABGR1555 = 60,
+ IMG_PIXFMT_RGBA5551 = 61,
+ IMG_PIXFMT_BGRA5551 = 62,
+ IMG_PIXFMT_RGB565 = 10,
+ IMG_PIXFMT_BGR565 = 63,
+ IMG_PIXFMT_RGB888 = 11,
+ IMG_PIXFMT_RSGSBS888 = 68,
+ IMG_PIXFMT_ARGB8888 = 12,
+ IMG_PIXFMT_ABGR8888 = 41,
+ IMG_PIXFMT_BGRA8888 = 42,
+ IMG_PIXFMT_RGBA8888 = 56,
+ IMG_PIXFMT_ARGB8332 = 43,
+ IMG_PIXFMT_ARGB8161616 = 64,
+ IMG_PIXFMT_ARGB2101010 = 67,
+ IMG_PIXFMT_UYVY8888 = 13,
+ IMG_PIXFMT_VYUY8888 = 14,
+ IMG_PIXFMT_YVYU8888 = 15,
+ IMG_PIXFMT_YUYV8888 = 16,
+ IMG_PIXFMT_UYVY10101010 = 17,
+ IMG_PIXFMT_VYAUYA8888 = 18,
+ IMG_PIXFMT_YUV101010 = 19,
+ IMG_PIXFMT_AYUV4444 = 20,
+ IMG_PIXFMT_YUV888 = 21,
+ IMG_PIXFMT_AYUV8888 = 22,
+ IMG_PIXFMT_AYUV2101010 = 23,
+ IMG_PIXFMT_411PL111YUV8 = 120,
+ IMG_PIXFMT_411PL12YUV8 = 121,
+ IMG_PIXFMT_411PL12YVU8 = 24,
+ IMG_PIXFMT_420PL12YUV8 = 25,
+ IMG_PIXFMT_420PL12YVU8 = 26,
+ IMG_PIXFMT_422PL12YUV8 = 27,
+ IMG_PIXFMT_422PL12YVU8 = 28,
+ IMG_PIXFMT_420PL8YUV8 = 47,
+ IMG_PIXFMT_422PL8YUV8 = 48,
+ IMG_PIXFMT_420PL12YUV8_A8 = 31,
+ IMG_PIXFMT_422PL12YUV8_A8 = 32,
+ IMG_PIXFMT_PL12Y8 = 33,
+ IMG_PIXFMT_PL12YV8 = 35,
+ IMG_PIXFMT_PL12IMC2 = 36,
+ IMG_PIXFMT_A4 = 37,
+ IMG_PIXFMT_A8 = 38,
+ IMG_PIXFMT_YUV8 = 39,
+ IMG_PIXFMT_CVBS10 = 40,
+ IMG_PIXFMT_PL12YV12 = 44,
+#if ((!defined(METAG) && !defined(MTXG)) || defined(__linux__))
+ IMG_PIXFMT_F16 = 52,
+ IMG_PIXFMT_F32 = 53,
+ IMG_PIXFMT_F16F16F16F16 = 65,
+#endif
+ IMG_PIXFMT_L16 = 54,
+ IMG_PIXFMT_L32 = 55,
+ IMG_PIXFMT_Y1 = 66,
+ IMG_PIXFMT_444PL111YUV8 = 69,
+ IMG_PIXFMT_444PL12YUV8 = 137,
+ IMG_PIXFMT_444PL12YVU8 = 138,
+ IMG_PIXFMT_PL12Y10 = 34,
+ IMG_PIXFMT_PL12Y10_LSB = 96,
+ IMG_PIXFMT_PL12Y10_MSB = 97,
+ IMG_PIXFMT_420PL8YUV10 = 49,
+ IMG_PIXFMT_420PL111YUV10_LSB = 71,
+ IMG_PIXFMT_420PL111YUV10_MSB = 72,
+ IMG_PIXFMT_420PL12YUV10 = 29,
+ IMG_PIXFMT_420PL12YUV10_LSB = 74,
+ IMG_PIXFMT_420PL12YUV10_MSB = 75,
+ IMG_PIXFMT_420PL12YVU10 = 45,
+ IMG_PIXFMT_420PL12YVU10_LSB = 77,
+ IMG_PIXFMT_420PL12YVU10_MSB = 78,
+ IMG_PIXFMT_422PL8YUV10 = 50,
+ IMG_PIXFMT_422PL111YUV10_LSB = 122,
+ IMG_PIXFMT_422PL111YUV10_MSB = 123,
+ IMG_PIXFMT_422PL12YUV10 = 30,
+ IMG_PIXFMT_422PL12YUV10_LSB = 80,
+ IMG_PIXFMT_422PL12YUV10_MSB = 81,
+ IMG_PIXFMT_422PL12YVU10 = 46,
+ IMG_PIXFMT_422PL12YVU10_LSB = 83,
+ IMG_PIXFMT_422PL12YVU10_MSB = 84,
+ IMG_PIXFMT_444PL111YUV10 = 85,
+ IMG_PIXFMT_444PL111YUV10_LSB = 86,
+ IMG_PIXFMT_444PL111YUV10_MSB = 87,
+ IMG_PIXFMT_444PL12YUV10 = 139,
+ IMG_PIXFMT_444PL12YUV10_LSB = 141,
+ IMG_PIXFMT_444PL12YUV10_MSB = 142,
+ IMG_PIXFMT_444PL12YVU10 = 140,
+ IMG_PIXFMT_444PL12YVU10_LSB = 143,
+ IMG_PIXFMT_444PL12YVU10_MSB = 144,
+ IMG_PIXFMT_420PL12Y8UV10 = 88,
+ IMG_PIXFMT_420PL12Y8UV10_LSB = 98,
+ IMG_PIXFMT_420PL12Y8UV10_MSB = 99,
+ IMG_PIXFMT_420PL12Y8VU10 = 89,
+ IMG_PIXFMT_420PL12Y8VU10_LSB = 100,
+ IMG_PIXFMT_420PL12Y8VU10_MSB = 101,
+ IMG_PIXFMT_420PL111Y8UV10 = 70,
+ IMG_PIXFMT_420PL111Y8UV10_LSB = 127,
+ IMG_PIXFMT_420PL111Y8UV10_MSB = 125,
+ IMG_PIXFMT_422PL12Y8UV10 = 90,
+ IMG_PIXFMT_422PL12Y8UV10_LSB = 102,
+ IMG_PIXFMT_422PL12Y8UV10_MSB = 103,
+ IMG_PIXFMT_422PL12Y8VU10 = 91,
+ IMG_PIXFMT_422PL12Y8VU10_LSB = 104,
+ IMG_PIXFMT_422PL12Y8VU10_MSB = 105,
+ IMG_PIXFMT_444PL12Y8UV10 = 151,
+ IMG_PIXFMT_444PL12Y8UV10_LSB = 153,
+ IMG_PIXFMT_444PL12Y8UV10_MSB = 154,
+ IMG_PIXFMT_444PL12Y8VU10 = 152,
+ IMG_PIXFMT_444PL12Y8VU10_LSB = 155,
+ IMG_PIXFMT_444PL12Y8VU10_MSB = 156,
+ IMG_PIXFMT_420PL12Y10UV8 = 92,
+ IMG_PIXFMT_420PL12Y10UV8_LSB = 106,
+ IMG_PIXFMT_420PL12Y10UV8_MSB = 107,
+
+ IMG_PIXFMT_420PL12Y10VU8 = 93,
+ IMG_PIXFMT_420PL12Y10VU8_LSB = 108,
+ IMG_PIXFMT_420PL12Y10VU8_MSB = 109,
+
+ IMG_PIXFMT_420PL111Y10UV8 = 129,
+ IMG_PIXFMT_420PL111Y10UV8_LSB = 133,
+ IMG_PIXFMT_420PL111Y10UV8_MSB = 131,
+ IMG_PIXFMT_422PL12Y10UV8 = 94,
+ IMG_PIXFMT_422PL12Y10UV8_LSB = 110,
+ IMG_PIXFMT_422PL12Y10UV8_MSB = 111,
+ IMG_PIXFMT_422PL12Y10VU8 = 95,
+ IMG_PIXFMT_422PL12Y10VU8_LSB = 112,
+ IMG_PIXFMT_422PL12Y10VU8_MSB = 113,
+
+ IMG_PIXFMT_444PL111Y10UV8 = 114,
+ IMG_PIXFMT_444PL111Y10UV8_LSB = 115,
+ IMG_PIXFMT_444PL111Y10UV8_MSB = 116,
+ IMG_PIXFMT_444PL111Y8UV10 = 117,
+ IMG_PIXFMT_444PL111Y8UV10_LSB = 118,
+ IMG_PIXFMT_444PL111Y8UV10_MSB = 119,
+ IMG_PIXFMT_444PL12Y10UV8 = 145,
+ IMG_PIXFMT_444PL12Y10UV8_LSB = 147,
+ IMG_PIXFMT_444PL12Y10UV8_MSB = 148,
+ IMG_PIXFMT_444PL12Y10VU8 = 146,
+ IMG_PIXFMT_444PL12Y10VU8_LSB = 149,
+ IMG_PIXFMT_444PL12Y10VU8_MSB = 150,
+ IMG_PIXFMT_422PL111Y8UV10 = 124,
+ IMG_PIXFMT_422PL111Y8UV10_MSB = 126,
+ IMG_PIXFMT_422PL111Y8UV10_LSB = 128,
+
+ IMG_PIXFMT_422PL111Y10UV8 = 130,
+ IMG_PIXFMT_422PL111Y10UV8_LSB = 134,
+ IMG_PIXFMT_422PL111Y10UV8_MSB = 132,
+ IMG_PIXFMT_420PL8YUV12 = 160,
+ IMG_PIXFMT_422PL8YUV12 = 161,
+ IMG_PIXFMT_444PL8YUV12 = 162,
+ IMG_PIXFMT_420PL8YUV14 = 163,
+ IMG_PIXFMT_422PL8YUV14 = 164,
+ IMG_PIXFMT_444PL8YUV14 = 165,
+ IMG_PIXFMT_420PL8YUV16 = 166,
+ IMG_PIXFMT_422PL8YUV16 = 167,
+ IMG_PIXFMT_444PL8YUV16 = 168,
+ IMG_PIXFMT_UNDEFINED = 255,
+
+ IMG_PIXFMT_ARBPLANAR8 = 65536,
+ IMG_PIXFMT_ARBPLANAR8_LAST = IMG_PIXFMT_ARBPLANAR8 + 0xffff,
+ IMG_PIXFMT_FORCE32BITS = 0x7FFFFFFFU
+};
+
+#endif
diff --git a/drivers/staging/media/vxd/decoder/img_profiles_levels.h b/drivers/staging/media/vxd/decoder/img_profiles_levels.h
new file mode 100644
index 000000000000..710b429f7f3e
--- /dev/null
+++ b/drivers/staging/media/vxd/decoder/img_profiles_levels.h
@@ -0,0 +1,33 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * VXD DEC SYSDEV and UI Interface header
+ *
+ * Copyright (c) Imagination Technologies Ltd.
+ * Copyright (c) 2021 Texas Instruments Incorporated - http://www.ti.com/
+ *
+ * Authors:
+ * Amit Makani <[email protected]>
+ *
+ * Re-written for upstreaming
+ * Sidraya Jayagond <[email protected]>
+ */
+
+#ifndef __IMG_PROFILES_LEVELS_H
+#define __IMG_PROFILES_LEVELS_H
+
+#include "vdecdd_utils.h"
+
+/* Minimum level value for h.264 */
+#define H264_LEVEL_MIN (9)
+/* Maximum level value for h.264 */
+#define H264_LEVEL_MAX (52)
+/* Number of major levels for h.264 (5 + 1 for special levels) */
+#define H264_LEVEL_MAJOR_NUM (6)
+/* Number of minor levels for h.264 */
+#define H264_LEVEL_MINOR_NUM (4)
+/* Number of major levels for HEVC */
+#define HEVC_LEVEL_MAJOR_NUM (6)
+/* Number of minor levels for HEVC */
+#define HEVC_LEVEL_MINOR_NUM (3)
+
+#endif /*__IMG_PROFILES_LEVELS_H */
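For illustration, a hypothetical helper (not part of this patch) that assumes the H.264 level constants above encode level_idc-style values, i.e. level x.y stored as 10*x + y, with 9 covering the special 1b level.

/* Hypothetical: split a level value such as 31 (level 3.1) into parts. */
static int h264_split_level(unsigned int level, unsigned int *major,
			    unsigned int *minor)
{
	if (level < H264_LEVEL_MIN || level > H264_LEVEL_MAX)
		return -1;

	*major = level / 10;	/* 0 for the special 1b level (value 9) */
	*minor = level % 10;

	return 0;
}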
diff --git a/drivers/staging/media/vxd/decoder/jpegfw_data.h b/drivers/staging/media/vxd/decoder/jpegfw_data.h
new file mode 100644
index 000000000000..d84e5d73f844
--- /dev/null
+++ b/drivers/staging/media/vxd/decoder/jpegfw_data.h
@@ -0,0 +1,83 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Public data structures for the JPEG parser firmware module.
+ *
+ * Copyright (c) Imagination Technologies Ltd.
+ * Copyright (c) 2021 Texas Instruments Incorporated - http://www.ti.com/
+ *
+ * Authors:
+ * Sunita Nadampalli <[email protected]>
+ *
+ * Re-written for upstreaming
+ * Sidraya Jayagond <[email protected]>
+ */
+
+#include "jpegfw_data_shared.h"
+
+#ifndef _JPEGFW_DATA_H_
+#define _JPEGFW_DATA_H_
+
+#define JPEG_VDEC_8x8_DCT_SIZE 64 //!< Number of elements in 8x8 DCT
+#define JPEG_VDEC_MAX_COMPONENTS 4 //!< Maximum number of components in JPEG
+#define JPEG_VDEC_MAX_SETS_HUFFMAN_TABLES 2 //!< Maximum number of Huffman table sets in JPEG
+#define JPEG_VDEC_MAX_QUANT_TABLES 4 //!< Maximum number of quantisation tables in JPEG
+#define JPEG_VDEC_TABLE_CLASS_NUM 2 //!< Number of Huffman table classes in JPEG
+#define JPEG_VDEC_PLANE_MAX 4 //!< Maximum number of planes
+
+struct hentry {
+ unsigned short code;
+ unsigned char codelen;
+ unsigned char value;
+};
+
+/**
+ * struct vdec_jpeg_huffman_tableinfo - This structure contains the JPEG Huffman table
+ * @bits: number of bits
+ * @values: codeword value
+ *
+ * NOTE: Should only contain JPEG specific information.
+ * JPEG Huffman Table Information
+ */
+struct vdec_jpeg_huffman_tableinfo {
+ /* number of bits */
+ unsigned char bits[16];
+ /* codeword value */
+ unsigned char values[256];
+};
+
+/*
+ * This structure contains the JPEG dequantisation table
+ * NOTE: Should only contain JPEG specific information.
+ * @brief JPEG Dequantisation Table Information
+ */
+struct vdec_jpeg_de_quant_tableinfo {
+	/* Quantisation precision */
+	unsigned char precision;
+	/* Quantisation values for 8x8 DCT */
+ unsigned short elements[64];
+};
+
+/*
+ * This describes the JPEG parser component "Header data", shown in the
+ * Firmware Memory Layout diagram. This data is required by the JPEG firmware
+ * and should be supplied by the Host.
+ */
+struct jpegfw_header_data {
+ /* Primary decode buffer base addresses */
+ struct vdecfw_image_buffer primary;
+ /* Reference (output) picture base addresses */
+ unsigned int plane_offsets[JPEG_VDEC_PLANE_MAX];
+ /* SOS fields count value */
+ unsigned char hdr_sos_count;
+};
+
+/*
+ * This describes the JPEG parser component "Context data".
+ * JPEG does not need any data to be saved between pictures, this structure
+ * is needed only to fit in firmware framework.
+ */
+struct jpegfw_context_data {
+ unsigned int dummy;
+};
+
+#endif /* _JPEGFW_DATA_H_ */
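For illustration, a small sketch (not part of this patch) assuming the Huffman table fields follow the standard JPEG DHT layout, where bits[i] holds the number of codes of length i + 1 and only that many entries of values[] are meaningful. The helper name is hypothetical.

/* Hypothetical helper: total number of codewords described by the table. */
static unsigned int jpeg_huff_num_codes(const struct vdec_jpeg_huffman_tableinfo *tbl)
{
	unsigned int i, total = 0;

	for (i = 0; i < 16; i++)
		total += tbl->bits[i];	/* codes of length i + 1 */

	return total;	/* at most 256, the size of tbl->values[] */
}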
diff --git a/drivers/staging/media/vxd/decoder/mmu_defs.h b/drivers/staging/media/vxd/decoder/mmu_defs.h
new file mode 100644
index 000000000000..0ea65509071d
--- /dev/null
+++ b/drivers/staging/media/vxd/decoder/mmu_defs.h
@@ -0,0 +1,42 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * V-DEC MMU Definitions
+ *
+ * Copyright (c) Imagination Technologies Ltd.
+ * Copyright (c) 2021 Texas Instruments Incorporated - http://www.ti.com/
+ *
+ * Authors:
+ * Amit Makani <[email protected]>
+ */
+
+#ifndef _VXD_MMU_DEF_H_
+#define _VXD_MMU_DEF_H_
+
+/*
+ * This type defines MMU variant.
+ */
+enum mmu_etype {
+ MMU_TYPE_NONE = 0,
+ MMU_TYPE_32BIT,
+ MMU_TYPE_36BIT,
+ MMU_TYPE_40BIT,
+ MMU_TYPE_FORCE32BITS = 0x7FFFFFFFU
+};
+
+/**
+ * enum mmu_eheap_id - This type defines the MMU heaps.
+ * @MMU_HEAP_IMAGE_BUFFERS_UNTILED: Heap for untiled video buffers
+ * @MMU_HEAP_BITSTREAM_BUFFERS : Heap for bitstream buffers
+ * @MMU_HEAP_STREAM_BUFFERS : Heap for Stream buffers
+ * @MMU_HEAP_MAX : Number of heaps
+ * @MMU_HEAP_FORCE32BITS: MMU_HEAP_FORCE32BITS
+ */
+enum mmu_eheap_id {
+ MMU_HEAP_IMAGE_BUFFERS_UNTILED = 0x00,
+ MMU_HEAP_BITSTREAM_BUFFERS,
+ MMU_HEAP_STREAM_BUFFERS,
+ MMU_HEAP_MAX,
+ MMU_HEAP_FORCE32BITS = 0x7FFFFFFFU
+};
+
+#endif /* _VXD_MMU_DEFS_H_ */
diff --git a/drivers/staging/media/vxd/decoder/scaler_setup.h b/drivers/staging/media/vxd/decoder/scaler_setup.h
new file mode 100644
index 000000000000..55dc000e07a2
--- /dev/null
+++ b/drivers/staging/media/vxd/decoder/scaler_setup.h
@@ -0,0 +1,59 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * VXD DEC constants calculation and scaling coefficients
+ *
+ * Copyright (c) Imagination Technologies Ltd.
+ * Copyright (c) 2021 Texas Instruments Incorporated - http://www.ti.com/
+ *
+ * Authors:
+ * Amit Makani <[email protected]>
+ */
+
+#ifndef _SCALER_SETUP_H
+#define _SCALER_SETUP_H
+
+#define LOWP 11
+#define HIGHP 14
+
+#define FIXED(a, digits) ((int)((a) * (1 << (digits))))
+
+struct scaler_params {
+ unsigned int vert_pitch;
+ unsigned int vert_startpos;
+ unsigned int vert_pitch_chroma;
+ unsigned int vert_startpos_chroma;
+ unsigned int horz_pitch;
+ unsigned int horz_startpos;
+ unsigned int horz_pitch_chroma;
+ unsigned int horz_startpos_chroma;
+ unsigned char fixed_point_shift;
+};
+
+struct scaler_filter {
+ unsigned char bhoriz_bilinear;
+ unsigned char bvert_bilinear;
+};
+
+struct scaler_pitch {
+ int horiz_luma;
+ int vert_luma;
+ int horiz_chroma;
+ int vert_chroma;
+};
+
+struct scaler_config {
+ enum vdec_vid_std vidstd;
+ const struct vxd_coreprops *coreprops;
+ struct pixel_pixinfo *in_pixel_info;
+ const struct pixel_pixinfo *out_pixel_info;
+ unsigned char bfield_coded;
+ unsigned char bseparate_chroma_planes;
+ unsigned int recon_width;
+ unsigned int recon_height;
+ unsigned int mb_width;
+ unsigned int mb_height;
+ unsigned int scale_width;
+ unsigned int scale_height;
+};
+
+#endif /* _SCALER_SETUP_H */
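For illustration, the FIXED() macro above converts a scaling ratio into the two fixed-point precisions used by the scaler; the worked values below are examples, not driver code, and the function name is hypothetical.

/* Hypothetical usage: a 1.5x ratio in both precisions. */
static void scaler_fixed_example(void)
{
	int ratio_lowp  = FIXED(1.5, LOWP);	/* 1.5 * 2^11 = 3072  */
	int ratio_highp = FIXED(1.5, HIGHP);	/* 1.5 * 2^14 = 24576 */

	(void)ratio_lowp;
	(void)ratio_highp;
}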
diff --git a/drivers/staging/media/vxd/decoder/vdec_defs.h b/drivers/staging/media/vxd/decoder/vdec_defs.h
new file mode 100644
index 000000000000..d7f182fd96d3
--- /dev/null
+++ b/drivers/staging/media/vxd/decoder/vdec_defs.h
@@ -0,0 +1,548 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * VXD Decoder common header
+ *
+ * Copyright (c) Imagination Technologies Ltd.
+ * Copyright (c) 2021 Texas Instruments Incorporated - http://www.ti.com/
+ *
+ * Authors:
+ * Amit Makani <[email protected]>
+ *
+ * Re-written for upstreaming
+ * Sidraya Jayagond <[email protected]>
+ */
+
+#ifndef __VDEC_DEFS_H__
+#define __VDEC_DEFS_H__
+
+#include "img_mem.h"
+#include "img_pixfmts.h"
+#ifdef HAS_JPEG
+#include "jpegfw_data.h"
+#endif
+#include "pixel_api.h"
+#include "vdecfw_shared.h"
+
+#define VDEC_MAX_PANSCAN_WINDOWS 4
+#define VDEC_MB_DIMENSION (16)
+
+#define MAX_PICS_IN_SYSTEM (8)
+#define SEQUENCE_SLOTS (8)
+#define PPS_SLOTS (8)
+/* Only for HEVC */
+#define VPS_SLOTS (16)
+#define MAX_VPSS (MAX_PICS_IN_SYSTEM + VPS_SLOTS)
+#define MAX_SEQUENCES (MAX_PICS_IN_SYSTEM + SEQUENCE_SLOTS)
+#define MAX_PPSS (MAX_PICS_IN_SYSTEM + PPS_SLOTS)
+
+#define VDEC_H264_MAXIMUMVALUEOFCPB_CNT 32
+#define VDEC_H264_MVC_MAX_VIEWS (H264FW_MAX_NUM_VIEWS)
+
+#define VDEC_ASSERT(expected) ({ WARN_ON(!(expected)); 0; })
+
+#define VDEC_ALIGN_SIZE(_val, _alignment, val_type, align_type) \
+ ({ \
+ val_type val = _val; \
+ align_type alignment = _alignment; \
+ (((val) + (alignment) - 1) & ~((alignment) - 1)); })
+
+/*
+ * This type defines the video standard.
+ * @brief VDEC Video Standards
+ */
+enum vdec_vid_std {
+ VDEC_STD_UNDEFINED = 0,
+ VDEC_STD_MPEG2,
+ VDEC_STD_MPEG4,
+ VDEC_STD_H263,
+ VDEC_STD_H264,
+ VDEC_STD_VC1,
+ VDEC_STD_AVS,
+ VDEC_STD_REAL,
+ VDEC_STD_JPEG,
+ VDEC_STD_VP6,
+ VDEC_STD_VP8,
+ VDEC_STD_SORENSON,
+ VDEC_STD_HEVC,
+ VDEC_STD_MAX,
+ VDEC_STD_FORCE32BITS = 0x7FFFFFFFU
+};
+
+/*
+ * This type defines the bitstream format. It should be set at the
+ * start of decoding.
+ * @brief VDEC Bitstream Format
+ */
+enum vdec_bstr_format {
+ VDEC_BSTRFORMAT_UNDEFINED = 0,
+ VDEC_BSTRFORMAT_ELEMENTARY,
+ VDEC_BSTRFORMAT_DEMUX_BYTESTREAM,
+ VDEC_BSTRFORMAT_DEMUX_SIZEDELIMITED,
+ VDEC_BSTRFORMAT_MAX,
+ VDEC_BSTRFORMAT_FORCE32BITS = 0x7FFFFFFFU
+};
+
+/*
+ * This type defines the type of payload. It can change with every buffer.
+ * @brief VDEC Bitstream Element Type
+ */
+enum vdec_bstr_element_type {
+ VDEC_BSTRELEMENT_UNDEFINED = 0,
+ VDEC_BSTRELEMENT_UNSPECIFIED,
+ VDEC_BSTRELEMENT_CODEC_CONFIG,
+ VDEC_BSTRELEMENT_PICTURE_DATA,
+ VDEC_BSTRELEMENT_MAX,
+ VDEC_BSTRELEMENT_FORCE32BITS = 0x7FFFFFFFU
+};
+
+/*
+ * This structure contains the stream configuration details.
+ * @brief VDEC Stream Configuration Information
+ */
+struct vdec_str_configdata {
+ enum vdec_vid_std vid_std;
+ enum vdec_bstr_format bstr_format;
+ unsigned int user_str_id;
+ unsigned char update_yuv;
+ unsigned char bandwidth_efficient;
+ unsigned char disable_mvc;
+ unsigned char full_scan;
+ unsigned char immediate_decode;
+ unsigned char intra_frame_closed_gop;
+};
+
+/*
+ * This type defines the buffer type categories.
+ * @brief Buffer Types
+ */
+enum vdec_buf_type {
+ VDEC_BUFTYPE_BITSTREAM,
+ VDEC_BUFTYPE_PICTURE,
+ VDEC_BUFTYPE_ALL,
+ VDEC_BUFTYPE_MAX,
+ VDEC_BUFTYPE_FORCE32BITS = 0x7FFFFFFFU
+};
+
+/*
+ * This structure contains information related to a picture plane.
+ * @brief Picture Plane Information
+ */
+struct vdec_plane_info {
+ unsigned int offset;
+ unsigned int stride;
+ unsigned int size;
+};
+
+/*
+ * This structure describes the VDEC picture dimensions.
+ * @brief VDEC Picture Size
+ */
+struct vdec_pict_size {
+ unsigned int width;
+ unsigned int height;
+};
+
+/*
+ * This enumeration defines the colour plane indices.
+ * @brief Colour Plane Indices
+ */
+enum vdec_color_planes {
+ VDEC_PLANE_VIDEO_Y = 0,
+ VDEC_PLANE_VIDEO_YUV = 0,
+ VDEC_PLANE_VIDEO_U = 1,
+ VDEC_PLANE_VIDEO_UV = 1,
+ VDEC_PLANE_VIDEO_V = 2,
+ VDEC_PLANE_VIDEO_A = 3,
+ VDEC_PLANE_LIGHT_R = 0,
+ VDEC_PLANE_LIGHT_G = 1,
+ VDEC_PLANE_LIGHT_B = 2,
+ VDEC_PLANE_INK_C = 0,
+ VDEC_PLANE_INK_M = 1,
+ VDEC_PLANE_INK_Y = 2,
+ VDEC_PLANE_INK_K = 3,
+ VDEC_PLANE_MAX = 4,
+ VDEC_PLANE_FORCE32BITS = 0x7FFFFFFFU
+};
+
+/*
+ * This structure describes the rendered region of a picture buffer (i.e. where
+ * the image data is written).
+ * @brief Picture Buffer Render Information
+ */
+struct vdec_pict_rendinfo {
+ unsigned int rendered_size;
+ struct vdec_plane_info plane_info[VDEC_PLANE_MAX];
+ unsigned int stride_alignment;
+ struct vdec_pict_size rend_pict_size;
+};
+
+/*
+ * This structure contains information required to configure the picture
+ * buffers
+ * @brief Picture Buffer Configuration
+ */
+struct vdec_pict_bufconfig {
+ unsigned int coded_width;
+ unsigned int coded_height;
+ enum img_pixfmt pixel_fmt;
+ unsigned int stride[IMG_MAX_NUM_PLANES];
+ unsigned int stride_alignment;
+ unsigned char byte_interleave;
+ unsigned int buf_size;
+ unsigned char packed;
+ unsigned int chroma_offset[IMG_MAX_NUM_PLANES];
+ unsigned int plane_size[IMG_MAX_NUM_PLANES];
+};
+
+/*
+ * This structure describes the VDEC Display Rectangle.
+ * @brief VDEC Display Rectangle
+ */
+struct vdec_rect {
+ unsigned int top_offset;
+ unsigned int left_offset;
+ unsigned int width;
+ unsigned int height;
+};
+
+/*
+ * This structure contains the Color Space Description that may be present
+ * in SequenceDisplayExtn (MPEG2), VUI parameters (H264), Visual Object (MPEG4)
+ * for the application to use.
+ * @brief Stream Color Space Properties
+ */
+struct vdec_color_space_desc {
+ unsigned char is_present;
+ unsigned char color_primaries;
+ unsigned char transfer_characteristics;
+ unsigned char matrix_coefficients;
+};
+
+/*
+ * This structure contains common (standard agnostic) sequence header
+ * information, which is required for image buffer allocation and display.
+ * @brief Sequence Header Information (common)
+ */
+struct vdec_comsequ_hdrinfo {
+ unsigned int codec_profile;
+ unsigned int codec_level;
+ unsigned int bitrate;
+ long frame_rate;
+ unsigned int frame_rate_num;
+ unsigned int frame_rate_den;
+ unsigned int aspect_ratio_num;
+ unsigned int aspect_ratio_den;
+ unsigned char interlaced_frames;
+ struct pixel_pixinfo pixel_info;
+ struct vdec_pict_size max_frame_size;
+ unsigned int max_ref_frame_num;
+ struct vdec_pict_size frame_size;
+ unsigned char field_codec_mblocks;
+ unsigned int min_pict_buf_num;
+ unsigned char picture_reordering;
+ unsigned char post_processing;
+ struct vdec_rect orig_display_region;
+ struct vdec_rect raw_display_region;
+ unsigned int num_views;
+ unsigned int max_reorder_picts;
+ unsigned char separate_chroma_planes;
+ unsigned char not_dpb_flush;
+ struct vdec_color_space_desc color_space_info;
+};
+
+/*
+ * This structure contains the standard specific codec configuration
+ * @brief Codec configuration
+ */
+struct vdec_codec_config {
+ unsigned int default_height;
+ unsigned int default_width;
+};
+
+/*
+ * This structure describes the decoded picture attributes (relative to the
+ * encoded, where necessary, e.g. rotation angle).
+ * @brief Stream Output Configuration
+ */
+struct vdec_str_opconfig {
+ struct pixel_pixinfo pixel_info;
+ unsigned char force_oold;
+};
+
+/*
+ * This type defines the "play" mode.
+ * @brief Play Mode
+ */
+enum vdec_play_mode {
+ VDEC_PLAYMODE_PARSE_ONLY,
+ VDEC_PLAYMODE_NORMAL_DECODE,
+ VDEC_PLAYMODE_MAX,
+ VDEC_PLAYMODE_FORCE32BITS = 0x7FFFFFFFU
+};
+
+/*
+ * This type defines the bitstream processing error info.
+ * @brief Bitstream Processing Error Info
+ */
+struct vdec_bstr_err_info {
+ unsigned int sequence_err;
+ unsigned int picture_err;
+ unsigned int other_err;
+};
+
+/*
+ * This structure describes the VDEC Pan Scan Window.
+ * @brief VDEC Pan Scan Window
+ */
+struct vdec_window {
+ unsigned int ui32topoffset;
+ unsigned int ui32leftoffset;
+ unsigned int ui32width;
+ unsigned int ui32height;
+};
+
+/*
+ * This structure contains the VDEC picture display properties.
+ * @brief VDEC Picture Display Properties
+ */
+struct vdec_pict_disp_info {
+ struct vdec_rect enc_disp_region;
+ struct vdec_rect disp_region;
+ struct vdec_rect raw_disp_region;
+ unsigned char top_fld_first;
+ unsigned char out_top_fld_first;
+ unsigned int max_frm_repeat;
+ unsigned int repeat_first_fld;
+ unsigned int num_pan_scan_windows;
+ struct vdec_window pan_scan_windows[VDEC_MAX_PANSCAN_WINDOWS];
+};
+
+/*
+ * This structure contains VXD hardware signatures.
+ * @brief VXD Hardware signatures
+ */
+struct vdec_pict_hwcrc {
+ unsigned char first_fld_rcvd;
+ unsigned int crc_vdmc_pix_recon;
+ unsigned int vdeb_sysmem_wrdata;
+};
+
+struct vdec_features {
+ unsigned char valid;
+ unsigned char mpeg2;
+ unsigned char mpeg4;
+ unsigned char h264;
+ unsigned char vc1;
+ unsigned char avs;
+ unsigned char real;
+ unsigned char jpeg;
+ unsigned char vp6;
+ unsigned char vp8;
+ unsigned char hevc;
+ unsigned char hd;
+ unsigned char rotation;
+ unsigned char scaling;
+ unsigned char scaling_oold;
+ unsigned char scaling_extnd_strides;
+};
+
+/*
+ * This type defines the auxiliary info for picture queued for decoding.
+ * @brief Auxiliary Decoding Picture Info
+ */
+struct vdec_dec_pict_auxinfo {
+ unsigned int seq_hdr_id;
+ unsigned int pps_id;
+ unsigned int second_pps_id;
+ unsigned char not_decoded;
+};
+
+/*
+ * This type defines the decoded picture state.
+ * @brief Decoded Picture State
+ */
+enum vdec_pict_state {
+ VDEC_PICT_STATE_NOT_DECODED,
+ VDEC_PICT_STATE_DECODED,
+ VDEC_PICT_STATE_TERMINATED,
+ VDEC_PICT_STATE_MAX,
+ VDEC_PICT_STATE_FORCE32BITS = 0x7FFFFFFFU
+};
+
+/*
+ * This type defines the container for various picture tags.
+ * @brief Picture Tag Container
+ */
+struct vdec_pict_tag_container {
+ enum img_buffer_type pict_type;
+ unsigned long long pict_tag_param;
+ unsigned long long sideband_info;
+ struct vdec_pict_hwcrc pict_hwcrc;
+};
+
+/*
+ * This structure describes raw bitstream data chunk.
+ * @brief Raw Bitstream Data Chunk
+ */
+struct vdec_raw_bstr_data {
+ unsigned int size;
+ unsigned int bit_offset;
+ unsigned char *data;
+ struct vdec_raw_bstr_data *next;
+};
+
+/*
+ * This type defines the supplementary picture data.
+ * @brief Supplementary Picture Data
+ */
+struct vdec_pict_supl_data {
+ struct vdec_raw_bstr_data *raw_vui_data;
+ struct vdec_raw_bstr_data *raw_sei_list_first_fld;
+ struct vdec_raw_bstr_data *raw_sei_list_second_fld;
+ union {
+ struct h264_pict_supl_data {
+ unsigned char nal_ref_idc;
+ unsigned short frame_num;
+ } data;
+ };
+};
+
+/*
+ * This structure contains decoded picture information for display.
+ * @brief Decoded Picture Information
+ */
+struct vdec_dec_pict_info {
+ enum vdec_pict_state pict_state;
+ enum img_buffer_type buf_type;
+ unsigned char interlaced_flds;
+ unsigned int err_flags;
+ unsigned int err_level;
+ struct vdec_pict_tag_container first_fld_tag_container;
+ struct vdec_pict_tag_container second_fld_tag_container;
+ struct vdec_str_opconfig op_config;
+ struct vdec_pict_rendinfo rend_info;
+ struct vdec_pict_disp_info disp_info;
+ unsigned int last_in_seq;
+ unsigned int decode_id;
+ unsigned int id_for_hwcrc_chk;
+ unsigned short view_id;
+ unsigned int timestamp;
+ struct vdec_pict_supl_data pict_supl_data;
+};
+
+struct vdec_pict_rend_config {
+ struct vdec_pict_size coded_pict_size;
+ unsigned char packed;
+ unsigned char byte_interleave;
+ unsigned int stride_alignment;
+};
+
+/*
+ * This structure contains unsupported feature flags.
+ * @brief Unsupported Feature Flags
+ */
+struct vdec_unsupp_flags {
+ unsigned int str_cfg;
+ unsigned int str_opcfg;
+ unsigned int op_bufcfg;
+ unsigned int seq_hdr;
+ unsigned int pict_hdr;
+};
+
+/*
+ * This type defines the error type: error in parsing, error in decoding, etc.
+ * @brief VDEC parsing/decoding error Information
+ */
+enum vdec_error_type {
+ VDEC_ERROR_NONE = (0),
+ VDEC_ERROR_SR_ERROR = (1 << 0),
+ VDEC_ERROR_FEHW_TIMEOUT = (1 << 1),
+ VDEC_ERROR_FEHW_DECODE = (1 << 2),
+ VDEC_ERROR_BEHW_TIMEOUT = (1 << 3),
+ VDEC_ERROR_SERVICE_TIMER_EXPIRY = (1 << 4),
+ VDEC_ERROR_MISSING_REFERENCES = (1 << 5),
+ VDEC_ERROR_MMU_FAULT = (1 << 6),
+ VDEC_ERROR_DEVICE = (1 << 7),
+ VDEC_ERROR_CORRUPTED_REFERENCE = (1 << 8),
+ VDEC_ERROR_MMCO = (1 << 9),
+ VDEC_ERROR_MBS_DROPPED = (1 << 10),
+ VDEC_ERROR_MAX = (1 << 11),
+ VDEC_ERROR_FORCE32BITS = 0x7FFFFFFFU
+};
+
+/*
+ * This structure contains information relating to a buffer.
+ * @brief Buffer Information
+ */
+struct vdec_buf_info {
+ void *cpu_linear_addr;
+ unsigned int buf_id;
+ struct vdec_pict_bufconfig pictbuf_cfg;
+ int fd;
+ /* The following are fields used internally within VDEC... */
+ unsigned int buf_size;
+ enum sys_emem_attrib mem_attrib;
+ void *buf_alloc_handle;
+ void *buf_map_handle;
+};
+
+#ifdef HAS_JPEG
+/*
+ * This structure contains JPEG sequence header information.
+ * NOTE: Should only contain JPEG specific information.
+ * @brief JPEG sequence header Information
+ */
+struct vdec_jpeg_sequ_hdr_info {
+	/* total number of components in the JPEG */
+ unsigned char num_component;
+ /* precision */
+ unsigned char precision;
+};
+
+/*
+ * This structure contains JPEG start of frame segment header
+ * NOTE: Should only contain JPEG specific information.
+ * @brief JPEG SOF header Information
+ */
+struct vdec_jpeg_sof_component_hdr {
+ /* component identifier. */
+ unsigned char identifier;
+ /* Horizontal scaling. */
+ unsigned char horz_factor;
+	/* Vertical scaling */
+	unsigned char vert_factor;
+	/* Quantisation table. */
+ unsigned char quant_table;
+};
+
+/*
+ * This structure contains JPEG start of scan segment header
+ * NOTE: Should only contain JPEG specific information.
+ * @brief JPEG SOS header Information
+ */
+struct vdec_jpeg_sos_component_hdr {
+ /* component identifier. */
+ unsigned char component_index;
+ /* Huffman DC tables. */
+ unsigned char dc_table;
+	/* Huffman AC table. */
+ unsigned char ac_table;
+};
+
+struct vdec_jpeg_pict_hdr_info {
+ /* Start of frame component header */
+ struct vdec_jpeg_sof_component_hdr sof_comp[JPEG_VDEC_MAX_COMPONENTS];
+ /* Start of Scan component header */
+ struct vdec_jpeg_sos_component_hdr sos_comp[JPEG_VDEC_MAX_COMPONENTS];
+ /* Huffman tables */
+ struct vdec_jpeg_huffman_tableinfo huff_tables[JPEG_VDEC_TABLE_CLASS_NUM]
+ [JPEG_VDEC_MAX_SETS_HUFFMAN_TABLES];
+ /* Quantization tables */
+ struct vdec_jpeg_de_quant_tableinfo quant_tables[JPEG_VDEC_MAX_QUANT_TABLES];
+ /* Number of MCU in the restart interval */
+ unsigned short interval;
+ unsigned int test;
+};
+#endif
+
+#endif
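For illustration, a hypothetical wrapper (not part of this patch) showing how the VDEC_ALIGN_SIZE() macro above rounds a dimension up to whole macroblocks of VDEC_MB_DIMENSION pixels.

/* Hypothetical: 1921 -> 1936, 1920 -> 1920 with 16-pixel macroblocks. */
static unsigned int vdec_aligned_width(unsigned int width)
{
	return VDEC_ALIGN_SIZE(width, VDEC_MB_DIMENSION,
			       unsigned int, unsigned int);
}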
diff --git a/drivers/staging/media/vxd/decoder/vxd_ext.h b/drivers/staging/media/vxd/decoder/vxd_ext.h
new file mode 100644
index 000000000000..fa92c9001c73
--- /dev/null
+++ b/drivers/staging/media/vxd/decoder/vxd_ext.h
@@ -0,0 +1,74 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * VXD DEC Low-level device interface component
+ *
+ * Copyright (c) Imagination Technologies Ltd.
+ * Copyright (c) 2021 Texas Instruments Incorporated - http://www.ti.com/
+ *
+ * Authors:
+ * Amit Makani <[email protected]>
+ */
+
+#ifndef _VXD_EXT_H
+#define _VXD_EXT_H
+
+#define VLR_COMPLETION_COMMS_AREA_SIZE 476
+
+/* Word Size of buffer used to pass messages between LISR and HISR */
+#define VXD_SIZE_MSG_BUFFER (1 * 1024)
+
+/* This structure describes macroblock coordinates. */
+struct vxd_mb_coords {
+ unsigned int x;
+ unsigned int y;
+};
+
+/* This structure contains firmware and decoding pipe state information. */
+struct vxd_pipestate {
+ unsigned char is_pipe_present;
+ unsigned char cur_codec;
+ unsigned int acheck_point[VDECFW_CHECKPOINT_MAX];
+ unsigned int firmware_action;
+ unsigned int fe_slices;
+ unsigned int be_slices;
+ unsigned int fe_errored_slices;
+ unsigned int be_errored_slices;
+ unsigned int be_mbs_dropped;
+ unsigned int be_mbs_recovered;
+ struct vxd_mb_coords fe_mb;
+ struct vxd_mb_coords be_mb;
+};
+
+/* This structure contains firmware and decoder core state information. */
+struct vxd_firmware_state {
+ unsigned int fw_step;
+ struct vxd_pipestate pipe_state[VDECFW_MAX_DP];
+};
+
+/* This structure contains the video decoder device state. */
+struct vxd_states {
+ struct vxd_firmware_state fw_state;
+};
+
+struct vxd_pict_attrs {
+ unsigned int dwrfired;
+ unsigned int mmufault;
+ unsigned int deverror;
+};
+
+/* This type defines the message attributes. */
+enum vxd_msg_attr {
+ VXD_MSG_ATTR_NONE = 0,
+ VXD_MSG_ATTR_DECODED = 1,
+ VXD_MSG_ATTR_FATAL = 2,
+ VXD_MSG_ATTR_CANCELED = 3,
+ VXD_MSG_ATTR_FORCE32BITS = 0x7FFFFFFFU
+};
+
+enum vxd_msg_flag {
+ VXD_MSG_FLAG_DROP = 0,
+ VXD_MSG_FLAG_EXCL = 1,
+ VXD_MSG_FLAG_FORCE32BITS = 0x7FFFFFFFU
+};
+
+#endif /* VXD_EXT_H */
diff --git a/drivers/staging/media/vxd/decoder/vxd_mmu_defs.h b/drivers/staging/media/vxd/decoder/vxd_mmu_defs.h
new file mode 100644
index 000000000000..77d493cb39f2
--- /dev/null
+++ b/drivers/staging/media/vxd/decoder/vxd_mmu_defs.h
@@ -0,0 +1,30 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * V-DEC MMU Definitions
+ *
+ * Copyright (c) Imagination Technologies Ltd.
+ * Copyright (c) 2021 Texas Instruments Incorporated - http://www.ti.com/
+ *
+ * Authors:
+ * Amit Makani <[email protected]>
+ */
+
+#ifndef _VXD_MMU_DEF_H_
+#define _VXD_MMU_DEF_H_
+
+/*
+ * This type defines the MMU heaps.
+ * @MMU_HEAP_IMAGE_BUFFERS_UNTILED: Heap for untiled video buffers
+ * @MMU_HEAP_BITSTREAM_BUFFERS: Heap for bitstream buffers
+ * @MMU_HEAP_STREAM_BUFFERS: Heap for stream buffers
+ * @MMU_HEAP_MAX: Number of heaps
+ */
+enum mmu_eheap_id {
+ MMU_HEAP_IMAGE_BUFFERS_UNTILED = 0x00,
+ MMU_HEAP_BITSTREAM_BUFFERS,
+ MMU_HEAP_STREAM_BUFFERS,
+ MMU_HEAP_MAX,
+ MMU_HEAP_FORCE32BITS = 0x7FFFFFFFU
+};
+
+#endif /* _VXD_MMU_DEFS_H_ */
diff --git a/drivers/staging/media/vxd/decoder/vxd_props.h b/drivers/staging/media/vxd/decoder/vxd_props.h
new file mode 100644
index 000000000000..bdab182859a7
--- /dev/null
+++ b/drivers/staging/media/vxd/decoder/vxd_props.h
@@ -0,0 +1,80 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Low-level VXD interface component
+ *
+ * Copyright (c) Imagination Technologies Ltd.
+ * Copyright (c) 2021 Texas Instruments Incorporated - http://www.ti.com/
+ *
+ * Authors:
+ * Amit Makani <[email protected]>
+ *
+ * Re-written for upstreaming
+ * Sidraya Jayagond <[email protected]>
+ */
+
+#ifndef _VXD_PROPS_H
+#define _VXD_PROPS_H
+
+#include "vdec_defs.h"
+#include "imgmmu.h"
+
+#define VDEC_MAX_PIXEL_PIPES 2
+
+#define VXD_MAX_CORES 1
+#define VER_STR_LEN 64
+
+#define CORE_REVISION(maj, min, maint) \
+ ((((maj) & 0xff) << 16) | (((min) & 0xff) << 8) | (((maint) & 0xff)))
+#define MAJOR_REVISION(rev) (((rev) >> 16) & 0xff)
+#define MINOR_REVISION(rev) (((rev) >> 8) & 0xff)
+#define MAINT_REVISION(rev) ((rev) & 0xff)
+
+#define FROM_REV(maj, min, maint, type) \
+ ({ \
+ type __maj = maj; \
+ type __min = min; \
+ (((maj_rev) > (__maj)) || \
+ (((maj_rev) == (__maj)) && ((min_rev) > (__min))) || \
+ (((maj_rev) == (__maj)) && ((min_rev) == (__min)) && \
+ ((int)(maint_rev) >= (maint)))); })
+
+struct vxd_vidstd_props {
+ enum vdec_vid_std vidstd;
+ unsigned int core_rev;
+ unsigned int min_width;
+ unsigned int min_height;
+ unsigned int max_width;
+ unsigned int max_height;
+ unsigned int max_macroblocks;
+ unsigned int max_luma_bitdepth;
+ unsigned int max_chroma_bitdepth;
+ enum pixel_fmt_idc max_chroma_format;
+};
+
+struct vxd_coreprops {
+ unsigned char aversion[VER_STR_LEN];
+ unsigned char mpeg2[VDEC_MAX_PIXEL_PIPES];
+ unsigned char mpeg4[VDEC_MAX_PIXEL_PIPES];
+ unsigned char h264[VDEC_MAX_PIXEL_PIPES];
+ unsigned char vc1[VDEC_MAX_PIXEL_PIPES];
+ unsigned char avs[VDEC_MAX_PIXEL_PIPES];
+ unsigned char real[VDEC_MAX_PIXEL_PIPES];
+ unsigned char jpeg[VDEC_MAX_PIXEL_PIPES];
+ unsigned char vp6[VDEC_MAX_PIXEL_PIPES];
+ unsigned char vp8[VDEC_MAX_PIXEL_PIPES];
+ unsigned char hevc[VDEC_MAX_PIXEL_PIPES];
+ unsigned char rotation_support[VDEC_MAX_PIXEL_PIPES];
+ unsigned char scaling_support[VDEC_MAX_PIXEL_PIPES];
+ unsigned char hd_support;
+ unsigned int num_streams[VDEC_MAX_PIXEL_PIPES];
+ unsigned int num_entropy_pipes;
+ unsigned int num_pixel_pipes;
+ struct vxd_vidstd_props vidstd_props[VDEC_STD_MAX];
+ enum mmu_etype mmu_type;
+ unsigned char mmu_support_stride_per_context;
+ unsigned char mmu_support_secure;
+ /* Range extensions supported by hw -> used only by hevc */
+ unsigned char hevc_range_ext[VDEC_MAX_PIXEL_PIPES];
+};
+
+#endif /* _VXD_PROPS_H */
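For illustration, a worked example (not part of this patch) of the revision packing macros above; revision 8.1.2 packs to 0x00080102. The function name is hypothetical.

/* Hypothetical usage of the CORE_REVISION helpers. */
static void core_rev_example(void)
{
	unsigned int rev   = CORE_REVISION(8, 1, 2);	/* 0x00080102 */
	unsigned int maj_r = MAJOR_REVISION(rev);	/* 8 */
	unsigned int min_r = MINOR_REVISION(rev);	/* 1 */
	unsigned int mnt_r = MAINT_REVISION(rev);	/* 2 */

	(void)maj_r; (void)min_r; (void)mnt_r;
}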
--
2.17.1
From: Sidraya <[email protected]>
The vxd helper provides the functionality for firmware blob preparation
and loading, power management (core reset, etc.), firmware messaging,
interrupt handling, managing the hardware status, and error handling.
The vxd helper also interacts with the memory manager helper to create
a context for each stream and associate it with the mmu context. The
common mappings are done during this creation for the firmware and
rendec buffers.
Signed-off-by: Buddy Liong <[email protected]>
Signed-off-by: Angela Stegmaier <[email protected]>
Signed-off-by: Sidraya <[email protected]>
---
MAINTAINERS | 4 +
.../media/vxd/decoder/img_dec_common.h | 278 +++
drivers/staging/media/vxd/decoder/vxd_pvdec.c | 1745 +++++++++++++++++
.../media/vxd/decoder/vxd_pvdec_priv.h | 126 ++
.../media/vxd/decoder/vxd_pvdec_regs.h | 779 ++++++++
5 files changed, 2932 insertions(+)
create mode 100644 drivers/staging/media/vxd/decoder/img_dec_common.h
create mode 100644 drivers/staging/media/vxd/decoder/vxd_pvdec.c
create mode 100644 drivers/staging/media/vxd/decoder/vxd_pvdec_priv.h
create mode 100644 drivers/staging/media/vxd/decoder/vxd_pvdec_regs.h
diff --git a/MAINTAINERS b/MAINTAINERS
index 150272927839..0f8154b69a91 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -19542,6 +19542,10 @@ F: drivers/staging/media/vxd/common/img_mem_man.h
F: drivers/staging/media/vxd/common/img_mem_unified.c
F: drivers/staging/media/vxd/common/imgmmu.c
F: drivers/staging/media/vxd/common/imgmmu.h
+F: drivers/staging/media/vxd/decoder/img_dec_common.h
+F: drivers/staging/media/vxd/decoder/vxd_pvdec.c
+F: drivers/staging/media/vxd/decoder/vxd_pvdec_priv.h
+F: drivers/staging/media/vxd/decoder/vxd_pvdec_regs.h
VIDEO I2C POLLING DRIVER
M: Matt Ranostay <[email protected]>
diff --git a/drivers/staging/media/vxd/decoder/img_dec_common.h b/drivers/staging/media/vxd/decoder/img_dec_common.h
new file mode 100644
index 000000000000..7bb3bd6d6e78
--- /dev/null
+++ b/drivers/staging/media/vxd/decoder/img_dec_common.h
@@ -0,0 +1,278 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * IMG DEC common header
+ *
+ * Copyright (c) Imagination Technologies Ltd.
+ * Copyright (c) 2021 Texas Instruments Incorporated - http://www.ti.com/
+ *
+ * Authors:
+ * Amit Makani <[email protected]>
+ *
+ * Re-written for upstreaming
+ * Sidraya Jayagond <[email protected]>
+ * Prashanth Kumar Amai <[email protected]>
+ */
+
+#ifndef _IMG_DEC_COMMON_H
+#define _IMG_DEC_COMMON_H
+
+#include <linux/types.h>
+
+#define VXD_MAX_PIPES 2
+#define MAX_DST_BUFFERS 32
+
+/* Helpers for parsing core properties. Based on HW registers layout. */
+#define VXD_GET_BITS(v, lb, rb, type) \
+ ({ \
+ type __rb = (rb); \
+ (((v) >> (__rb)) & ((1 << ((lb) - __rb + 1)) - 1)); })
+#define VXD_GET_BIT(v, b) (((v) >> (b)) & 1)
+
+/* Get major core revision. */
+#define VXD_MAJ_REV(props) (VXD_GET_BITS((props).core_rev, 23, 16, unsigned int))
+/* Get minor core revision. */
+#define VXD_MIN_REV(props) (VXD_GET_BITS((props).core_rev, 15, 8, unsigned int))
+/* Get maint core revision. */
+#define VXD_MAINT_REV(props) (VXD_GET_BITS((props).core_rev, 7, 0, unsigned int))
+/* Get number of entropy pipes available (HEVC). */
+#define VXD_NUM_ENT_PIPES(props) ((props).pvdec_core_id & 0xF)
+/* Get number of pixel pipes available (other standards). */
+#define VXD_NUM_PIX_PIPES(props) (((props).pvdec_core_id & 0xF0) >> 4)
+/* Get number of bits used by external memory interface. */
+#define VXD_EXTRN_ADDR_WIDTH(props) ((((props).mmu_config0 & 0xF0) >> 4) + 32)
+
+/* Check whether specific standard is supported by the pixel pipe. */
+#define VXD_HAS_MPEG2(props, pipe) VXD_GET_BIT(props.pixel_pipe_cfg[pipe], 0)
+#define VXD_HAS_MPEG4(props, pipe) VXD_GET_BIT(props.pixel_pipe_cfg[pipe], 1)
+#define VXD_HAS_H264(props, pipe) VXD_GET_BIT(props.pixel_pipe_cfg[pipe], 2)
+#define VXD_HAS_VC1(props, pipe) VXD_GET_BIT(props.pixel_pipe_cfg[pipe], 3)
+#define VXD_HAS_WMV9(props, pipe) VXD_GET_BIT(props.pixel_pipe_cfg[pipe], 4)
+#define VXD_HAS_JPEG(props, pipe) VXD_GET_BIT(props.pixel_pipe_cfg[pipe], 5)
+#define VXD_HAS_MPEG4_DATA_PART(props, pipe) \
+ VXD_GET_BIT(props.pixel_pipe_cfg[pipe], 6)
+#define VXD_HAS_AVS(props, pipe) VXD_GET_BIT(props.pixel_pipe_cfg[pipe], 7)
+#define VXD_HAS_REAL(props, pipe) VXD_GET_BIT(props.pixel_pipe_cfg[pipe], 8)
+#define VXD_HAS_VP6(props, pipe) VXD_GET_BIT(props.pixel_pipe_cfg[pipe], 9)
+#define VXD_HAS_VP8(props, pipe) VXD_GET_BIT(props.pixel_pipe_cfg[pipe], 10)
+#define VXD_HAS_SORENSON(props, pipe) \
+ VXD_GET_BIT(props.pixel_pipe_cfg[pipe], 11)
+#define VXD_HAS_HEVC(props, pipe) VXD_GET_BIT(props.pixel_pipe_cfg[pipe], 22)
+
+/* Check whether specific feature is supported by the pixel pipe */
+
+/*
+ * Max picture size for HEVC still picture profile is 64k wide and/or 64k
+ * high.
+ */
+#define VXD_HAS_HEVC_64K_STILL(props, pipe) \
+ (VXD_GET_BIT((props).pixel_misc_cfg[pipe], 24))
+
+/* Pixel processing pipe index. */
+#define VXD_PIX_PIPE_ID(props, pipe) \
+ (VXD_GET_BITS((props).pixel_misc_cfg[pipe], 18, 16, unsigned int))
+
+/* Number of streams supported by the pixel pipe DMAC and shift register. */
+#define VXD_PIX_NUM_STRS(props, pipe) \
+ (VXD_GET_BITS((props).pixel_misc_cfg[pipe], 13, 12, unsigned int) + 1)
+
+/* Is scaling supported. */
+#define VXD_HAS_SCALING(props, pipe) \
+ (VXD_GET_BIT((props).pixel_misc_cfg[pipe], 9))
+
+/* Is rotation supported. */
+#define VXD_HAS_ROTATION(props, pipe) \
+ (VXD_GET_BIT((props).pixel_misc_cfg[pipe], 8))
+
+/* Are HEVC range extensions supported. */
+#define VXD_HAS_HEVC_REXT(props, pipe) \
+ (VXD_GET_BIT((props).pixel_misc_cfg[pipe], 7))
+
+/* Maximum bit depth supported by the pipe. */
+#define VXD_MAX_BIT_DEPTH(props, pipe) \
+ (VXD_GET_BITS((props).pixel_misc_cfg[pipe], 6, 4, unsigned int) + 8)
+
+/*
+ * Maximum chroma format supported by the pipe in HEVC mode.
+ * 0x1 - 4:2:0
+ * 0x2 - 4:2:2
+ * 0x3 - 4:4:4
+ */
+#define VXD_MAX_HEVC_CHROMA_FMT(props, pipe) \
+ (VXD_GET_BITS((props).pixel_misc_cfg[pipe], 3, 2, unsigned int))
+
+/*
+ * Maximum chroma format supported by the pipe in H264 mode.
+ * 0x1 - 4:2:0
+ * 0x2 - 4:2:2
+ * 0x3 - 4:4:4
+ */
+#define VXD_MAX_H264_CHROMA_FMT(props, pipe) \
+ (VXD_GET_BITS((props).pixel_misc_cfg[pipe], 1, 0, unsigned int))
+
+/*
+ * Maximum frame width and height supported in MSVDX pipeline.
+ */
+#define VXD_MAX_WIDTH_MSVDX(props) \
+ (2 << (VXD_GET_BITS((props).pixel_max_frame_cfg, 4, 0, unsigned int)))
+#define VXD_MAX_HEIGHT_MSVDX(props) \
+ (2 << (VXD_GET_BITS((props).pixel_max_frame_cfg, 12, 8, unsigned int)))
+
+/*
+ * Maximum frame width and height supported in PVDEC pipeline.
+ */
+#define VXD_MAX_WIDTH_PVDEC(props) \
+ (2 << (VXD_GET_BITS((props).pixel_max_frame_cfg, 20, 16, unsigned int)))
+#define VXD_MAX_HEIGHT_PVDEC(props) \
+ (2 << (VXD_GET_BITS((props).pixel_max_frame_cfg, 28, 24, unsigned int)))
+
+#define PVDEC_COMMS_RAM_OFFSET 0x00002000
+#define PVDEC_COMMS_RAM_SIZE 0x00001000
+#define PVDEC_ENTROPY_OFFSET 0x00003000
+#define PVDEC_ENTROPY_SIZE 0x1FF
+#define PVDEC_VEC_BE_OFFSET 0x00005000
+#define PVDEC_VEC_BE_SIZE 0x3FF
+#define PVDEC_VEC_BE_CODEC_OFFSET 0x00005400
+#define MSVDX_VEC_OFFSET 0x00006000
+#define MSVDX_VEC_SIZE 0x7FF
+#define MSVDX_CMD_OFFSET 0x00007000
+
+/*
+ * Virtual memory heap address ranges for tiled
+ * and non-tiled buffers. Addresses within each
+ * range should be assigned to the appropriate
+ * buffers by the UM driver and mapped into the
+ * device using the corresponding KM driver ioctl.
+ */
+#define PVDEC_HEAP_UNTILED_START 0x00400000ul
+#define PVDEC_HEAP_UNTILED_SIZE 0x3FC00000ul
+#define PVDEC_HEAP_TILE512_START 0x40000000ul
+#define PVDEC_HEAP_TILE512_SIZE 0x10000000ul
+#define PVDEC_HEAP_TILE1024_START 0x50000000ul
+#define PVDEC_HEAP_TILE1024_SIZE 0x20000000ul
+#define PVDEC_HEAP_TILE2048_START 0x70000000ul
+#define PVDEC_HEAP_TILE2048_SIZE 0x30000000ul
+#define PVDEC_HEAP_TILE4096_START 0xA0000000ul
+#define PVDEC_HEAP_TILE4096_SIZE 0x30000000ul
+#define PVDEC_HEAP_BITSTREAM_START 0xD2000000ul
+#define PVDEC_HEAP_BITSTREAM_SIZE 0x0A000000ul
+#define PVDEC_HEAP_STREAM_START 0xE4000000ul
+#define PVDEC_HEAP_STREAM_SIZE 0x1C000000ul
+
+/*
+ * Max size of the message payload, in bytes. There are 7 bits used to encode
+ * the message size in the firmware interface.
+ */
+#define VXD_MAX_PAYLOAD_SIZE (127 * sizeof(unsigned int))
+/* Max size of the input message in bytes. */
+#define VXD_MAX_INPUT_SIZE (VXD_MAX_PAYLOAD_SIZE + sizeof(struct vxd_fw_msg))
+/*
+ * Min size of the input message. Two words needed for message header and
+ * stream PTD
+ */
+#define VXD_MIN_INPUT_SIZE 2
+/*
+ * Offset of the stream PTD within the message. This word has to be left null
+ * in the submitted message; the driver will fill it in with an appropriate value.
+ */
+#define VXD_PTD_MSG_OFFSET 1
+
+/* Read flags */
+#define VXD_FW_MSG_RD_FLAGS_MASK 0xffff
+/* Driver watchdog interrupted processing of the message. */
+#define VXD_FW_MSG_FLAG_DWR 0x1
+/* VXD MMU fault occurred when the message was processed. */
+#define VXD_FW_MSG_FLAG_MMU_FAULT 0x2
+/* Invalid input message, e.g. the message was too large. */
+#define VXD_FW_MSG_FLAG_INV 0x4
+/* I/O error occurred when the message was processed. */
+#define VXD_FW_MSG_FLAG_DEV_ERR 0x8
+/*
+ * Driver error occurred when the message was processed, e.g. failed to
+ * allocate memory.
+ */
+#define VXD_FW_MSG_FLAG_DRV_ERR 0x10
+/*
+ * Item was canceled without being fully processed,
+ * i.e. the corresponding stream was destroyed.
+ */
+#define VXD_FW_MSG_FLAG_CANCELED 0x20
+/* Firmware internal error occurred when the message was processed */
+#define VXD_FW_MSG_FLAG_FATAL 0x40
+
+/* Write flags */
+#define VXD_FW_MSG_WR_FLAGS_MASK 0xffff0000
+/* Indicates that the message shall be dropped after sending it to the firmware. */
+#define VXD_FW_MSG_FLAG_DROP 0x10000
+/*
+ * Indicates that the message shall be handled exclusively by
+ * the firmware/hardware. Any other pending messages are
+ * blocked until this message is handled.
+ */
+#define VXD_FW_MSG_FLAG_EXCL 0x20000
+
+#define VXD_MSG_SIZE(msg) (sizeof(struct vxd_fw_msg) + ((msg).payload_size))
+
+/* Header included at the beginning of firmware binary */
+struct vxd_fw_hdr {
+ unsigned int core_size;
+ unsigned int blob_size;
+ unsigned int firmware_id;
+ unsigned int timestamp;
+};
+
+/*
+ * struct vxd_dev_fw - Core component will allocate a buffer for firmware.
+ * This structure holds the information about the firmware
+ * binary.
+ * @buf_id: ID of the allocated firmware buffer
+ * @hdr: firmware header information
+ * @fw_size: The size of the fw. Set after successful firmware request.
+ */
+struct vxd_dev_fw {
+ int buf_id;
+ struct vxd_fw_hdr *hdr;
+ unsigned int fw_size;
+ unsigned char ready;
+};
+
+/*
+ * struct vxd_core_props - contains HW core properties
+ * @core_rev: Core revision based on register CR_PVDEC_CORE_REV
+ * @pvdec_core_id: PVDEC Core id based on register CR_PVDEC_CORE_ID
+ * @mmu_config0: MMU configuration 0 based on register MMU_CONFIG0
+ * @mmu_config1: MMU configuration 1 based on register MMU_CONFIG1
+ * @mtx_ram_size: size of the MTX RAM based on register CR_PROC_DEBUG
+ * @pixel_max_frame_cfg: indicates the max frame height and width for
+ * PVDEC pipeline and MSVDX pipeline based on register
+ * MAX_FRAME_CONFIG
+ * @pixel_pipe_cfg: pipe configuration which codecs are supported in a
+ * Pixel Processing Pipe, based on register
+ * PIXEL_PIPE_CONFIG
+ * @pixel_misc_cfg: Additional pipe configuration, e.g. supported scaling
+ * or rotation, based on register PIXEL_MISC_CONFIG
+ * @dbg_fifo_size: contains the depth of the Debug FIFO, based on
+ * register CR_PROC_DEBUG_FIFO_SIZE
+ */
+struct vxd_core_props {
+ unsigned int core_rev;
+ unsigned int pvdec_core_id;
+ unsigned int mmu_config0;
+ unsigned int mmu_config1;
+ unsigned int mtx_ram_size;
+ unsigned int pixel_max_frame_cfg;
+ unsigned int pixel_pipe_cfg[VXD_MAX_PIPES];
+ unsigned int pixel_misc_cfg[VXD_MAX_PIPES];
+ unsigned int dbg_fifo_size;
+};
+
+struct vxd_alloc_data {
+ unsigned int heap_id; /* [IN] Heap ID of allocator */
+ unsigned int size; /* [IN] Size of device memory (in bytes) */
+ unsigned int attributes; /* [IN] Attributes of buffer */
+ unsigned int buf_id; /* [OUT] Generated buffer ID */
+};
+
+struct vxd_free_data {
+ unsigned int buf_id; /* [IN] ID of device buffer to free */
+};
+#endif /* _IMG_DEC_COMMON_H */
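For illustration, a minimal sketch (not part of this patch) of how the property helpers above might be used to report the core configuration. dump_core_props is a hypothetical function and assumes a kernel context where pr_info() is available.

/* Hypothetical: decode raw register values captured in vxd_core_props. */
static void dump_core_props(const struct vxd_core_props *props)
{
	pr_info("PVDEC core %u.%u.%u: %u pixel pipe(s), %u entropy pipe(s)\n",
		VXD_MAJ_REV(*props), VXD_MIN_REV(*props), VXD_MAINT_REV(*props),
		VXD_NUM_PIX_PIPES(*props), VXD_NUM_ENT_PIPES(*props));
}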
diff --git a/drivers/staging/media/vxd/decoder/vxd_pvdec.c b/drivers/staging/media/vxd/decoder/vxd_pvdec.c
new file mode 100644
index 000000000000..c2b59c3dd164
--- /dev/null
+++ b/drivers/staging/media/vxd/decoder/vxd_pvdec.c
@@ -0,0 +1,1745 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * IMG DEC PVDEC function implementations
+ *
+ * Copyright (c) Imagination Technologies Ltd.
+ * Copyright (c) 2021 Texas Instruments Incorporated - http://www.ti.com/
+ *
+ * Authors:
+ * Amit Makani <[email protected]>
+ *
+ * Re-written for upstreaming
+ * Sidraya Jayagond <[email protected]>
+ * Prashanth Kumar Amai <[email protected]>
+ */
+
+#include <linux/interrupt.h>
+#include <linux/time64.h>
+#include <linux/slab.h>
+#include <linux/delay.h>
+#include <linux/jiffies.h>
+#include <linux/dma-mapping.h>
+#include <media/v4l2-ctrls.h>
+#include <media/v4l2-device.h>
+#include <media/v4l2-mem2mem.h>
+
+#include "img_dec_common.h"
+#include "img_pvdec_test_regs.h"
+#include "img_video_bus4_mmu_regs.h"
+#include "vxd_pvdec_priv.h"
+#include "vxd_pvdec_regs.h"
+
+#ifdef PVDEC_SINGLETHREADED_IO
+static DEFINE_SPINLOCK(pvdec_irq_lock);
+static ulong pvdec_irq_flags;
+#endif
+
+static const ulong vxd_plat_poll_udelay = 100;
+
+/* Returns the remainder of *n divided by base and leaves the quotient in *n */
+static inline unsigned int do_divide(unsigned long long *n, unsigned int base)
+{
+ unsigned int remainder = *n % base;
+ *n = *n / base;
+ return remainder;
+}
+
+/*
+ * Reads PROC_DEBUG register and provides number of MTX RAM banks
+ * and their size
+ */
+static int pvdec_get_mtx_ram_info(void __iomem *reg_base, int *bank_cnt,
+ unsigned long *bank_size,
+ unsigned long *last_bank_size)
+{
+ unsigned int ram_bank_count, reg;
+
+ reg = VXD_RD_REG(reg_base, PVDEC_CORE, PROC_DEBUG);
+ ram_bank_count = VXD_RD_REG_FIELD(reg, PVDEC_CORE, PROC_DEBUG, MTX_RAM_BANKS);
+ if (!ram_bank_count)
+ return -EIO;
+
+ if (bank_cnt)
+ *bank_cnt = ram_bank_count;
+
+ if (bank_size) {
+ unsigned int ram_bank_size = VXD_RD_REG_FIELD(reg, PVDEC_CORE,
+ PROC_DEBUG, MTX_RAM_BANK_SIZE);
+ *bank_size = 1 << (ram_bank_size + 2);
+ }
+
+ if (last_bank_size) {
+ unsigned int last_bank = VXD_RD_REG_FIELD(reg, PVDEC_CORE, PROC_DEBUG,
+ MTX_LAST_RAM_BANK_SIZE);
+ unsigned char new_representation = VXD_RD_REG_FIELD(reg,
+ PVDEC_CORE, PROC_DEBUG, MTX_RAM_NEW_REPRESENTATION);
+ if (new_representation) {
+ *last_bank_size = 1024 * last_bank;
+ } else {
+ *last_bank_size = 1 << (last_bank + 2);
+ if (bank_cnt && last_bank == 13 && *bank_cnt == 4) {
+ /*
+ * VXD hardware ambiguity:
+				 * old cores confuse 120k and 128k,
+				 * so assume the worst case.
+ */
+ *last_bank_size -= 0x2000;
+ }
+ }
+ }
+
+ return 0;
+}
+
+/* Provides size of MTX RAM in bytes */
+static int pvdec_get_mtx_ram_size(void __iomem *reg_base, unsigned int *ram_size)
+{
+ int bank_cnt, ret;
+ unsigned long bank_size, last_bank_size;
+
+ ret = pvdec_get_mtx_ram_info(reg_base, &bank_cnt, &bank_size, &last_bank_size);
+ if (ret)
+ return ret;
+
+ *ram_size = (bank_cnt - 1) * bank_size + last_bank_size;
+
+ return 0;
+}
+
+/* Poll for single register-based transfer to/from MTX to complete */
+static unsigned int pvdec_wait_mtx_reg_access(void __iomem *reg_base, unsigned int *mtx_fault)
+{
+ unsigned int pvdec_timeout = PVDEC_TIMEOUT_COUNTER, reg;
+
+ do {
+ /* Check MTX is OK */
+ reg = VXD_RD_REG(reg_base, MTX_CORE, MTX_FAULT0);
+ if (reg != 0) {
+ *mtx_fault = reg;
+ return -EIO;
+ }
+
+ pvdec_timeout--;
+ reg = VXD_RD_REG(reg_base, MTX_CORE, MTX_REG_READ_WRITE_REQUEST);
+ } while ((VXD_RD_REG_FIELD(reg, MTX_CORE,
+ MTX_REG_READ_WRITE_REQUEST,
+ MTX_DREADY) == 0) &&
+ (pvdec_timeout != 0));
+
+ if (pvdec_timeout == 0)
+ return -EIO;
+
+ return 0;
+}
+
+static void pvdec_mtx_status_dump(void __iomem *reg_base, unsigned int *status)
+{
+ unsigned int reg;
+
+ pr_debug("%s: *** dumping status ***\n", __func__);
+
+#define READ_MTX_REG(_NAME_) \
+ do { \
+ unsigned int val; \
+ VXD_WR_REG(reg_base, MTX_CORE, \
+ MTX_REG_READ_WRITE_REQUEST, reg); \
+		if (pvdec_wait_mtx_reg_access(reg_base, &reg)) { \
+ pr_debug("%s: " \
+ "MTX REG RD fault: 0x%08x\n", __func__, reg); \
+ break; \
+ } \
+ val = VXD_RD_REG(reg_base, MTX_CORE, MTX_REG_READ_WRITE_DATA); \
+ if (status) \
+ *status++ = val; \
+ pr_debug("%s: " _NAME_ ": 0x%08x\n", __func__, val); \
+ } while (0)
+
+ reg = 0;
+ reg = VXD_WR_REG_FIELD(reg, MTX_CORE, /* Read */
+ MTX_REG_READ_WRITE_REQUEST, MTX_RNW, 1);
+ reg = VXD_WR_REG_FIELD(reg, MTX_CORE, /* PC or PCX */
+ MTX_REG_READ_WRITE_REQUEST, MTX_USPECIFIER, 5);
+ reg = VXD_WR_REG_FIELD(reg, MTX_CORE, /* PC */
+ MTX_REG_READ_WRITE_REQUEST, MTX_RSPECIFIER, 0);
+ READ_MTX_REG("MTX PC");
+
+ reg = 0;
+ reg = VXD_WR_REG_FIELD(reg, MTX_CORE, /* Read */
+ MTX_REG_READ_WRITE_REQUEST, MTX_RNW, 1);
+ reg = VXD_WR_REG_FIELD(reg, MTX_CORE, /* PC or PCX */
+ MTX_REG_READ_WRITE_REQUEST, MTX_USPECIFIER, 5);
+ reg = VXD_WR_REG_FIELD(reg, MTX_CORE, /* PCX */
+ MTX_REG_READ_WRITE_REQUEST, MTX_RSPECIFIER, 1);
+ READ_MTX_REG("MTX PCX");
+
+ reg = 0;
+ reg = VXD_WR_REG_FIELD(reg, MTX_CORE, /* Read */
+ MTX_REG_READ_WRITE_REQUEST, MTX_RNW, 1);
+ reg = VXD_WR_REG_FIELD(reg, MTX_CORE, /* A0StP */
+ MTX_REG_READ_WRITE_REQUEST, MTX_USPECIFIER, 3);
+ reg = VXD_WR_REG_FIELD(reg, MTX_CORE,
+ MTX_REG_READ_WRITE_REQUEST, MTX_RSPECIFIER, 0);
+ READ_MTX_REG("MTX A0STP");
+
+ reg = 0;
+ reg = VXD_WR_REG_FIELD(reg, MTX_CORE, /* Read */
+ MTX_REG_READ_WRITE_REQUEST, MTX_RNW, 1);
+ reg = VXD_WR_REG_FIELD(reg, MTX_CORE, /* A0FrP */
+ MTX_REG_READ_WRITE_REQUEST, MTX_USPECIFIER, 3);
+ reg = VXD_WR_REG_FIELD(reg, MTX_CORE, MTX_REG_READ_WRITE_REQUEST, MTX_RSPECIFIER, 1);
+ READ_MTX_REG("MTX A0FRP");
+#undef READ_MTX_REG
+
+ pr_debug("%s: *** status dump done ***\n", __func__);
+}
+
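+/*
+ * Prepare the core for firmware upload: reset the MTX, clear the COMMS RAM
+ * header, program the MMU (extended addressing, PTD base, bank 0), point the
+ * DMAC at the firmware blob and enable the MTX timer at full rate so the
+ * upload can be used to measure the core clock.
+ */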
+static void pvdec_prep_fw_upload(const void *dev,
+ void __iomem *reg_base,
+ struct vxd_ena_params *ena_params,
+ unsigned char dma_channel)
+{
+ unsigned int fw_vxd_virt_addr = ena_params->fw_buf_virt_addr;
+ unsigned int vxd_ptd_addr = ena_params->ptd;
+ unsigned int reg = 0;
+ int i;
+ unsigned int flags = PVDEC_FWFLAG_FORCE_FS_FLOW |
+ PVDEC_FWFLAG_DISABLE_GENC_FLUSHING |
+ PVDEC_FWFLAG_DISABLE_AUTONOMOUS_RESET |
+ PVDEC_FWFLAG_DISABLE_IDLE_GPIO |
+ PVDEC_FWFLAG_ENABLE_ERROR_CONCEALMENT;
+
+ if (ena_params->secure)
+ flags |= PVDEC_FWFLAG_BIG_TO_HOST_BUFFER;
+
+#ifdef DEBUG_DECODER_DRIVER
+ dev_dbg(dev, "%s: fw_virt: 0x%x, ptd: 0x%x, dma ch: %u, flags: 0x%x\n",
+ __func__, fw_vxd_virt_addr, vxd_ptd_addr, dma_channel, flags);
+#endif
+
+ /* Reset MTX */
+ reg = VXD_WR_REG_FIELD(reg, MTX_CORE, MTX_SOFT_RESET, MTX_RESET, 1);
+ VXD_WR_REG(reg_base, MTX_CORE, MTX_SOFT_RESET, reg);
+	/*
+	 * NOTE: The MTX reset bit is WRITE ONLY, so we cannot check that the
+	 * reset procedure has finished; beware of placing any MTX_CORE*
+	 * access immediately after this line.
+	 */
+
+ /* Clear COMMS RAM header */
+ for (i = 0; i < PVDEC_FW_COMMS_HDR_SIZE; i++)
+ VXD_WR_REG_ABS(reg_base, VLR_OFFSET + i * sizeof(unsigned int), 0);
+
+ VXD_WR_REG_ABS(reg_base, VLR_OFFSET + PVDEC_FW_FLAGS_OFFSET, flags);
+ /* Do not wait for debug FIFO flag - set it only when requested */
+ VXD_WR_REG_ABS(reg_base, VLR_OFFSET + PVDEC_FW_SIGNATURE_OFFSET,
+ !ena_params->wait_dbg_fifo);
+
+ /*
+ * Clear the bypass bits and enable extended addressing in MMU.
+ * Firmware depends on this configuration, so we have to set it,
+ * even if firmware is being uploaded via registers.
+ */
+ reg = 0;
+ reg = VXD_WR_REG_FIELD(reg, IMG_VIDEO_BUS4_MMU, MMU_ADDRESS_CONTROL, UPPER_ADDR_FIXED, 0);
+ reg = VXD_WR_REG_FIELD(reg, IMG_VIDEO_BUS4_MMU, MMU_ADDRESS_CONTROL, MMU_ENA_EXT_ADDR, 1);
+ reg = VXD_WR_REG_FIELD(reg, IMG_VIDEO_BUS4_MMU, MMU_ADDRESS_CONTROL, MMU_BYPASS, 0);
+ VXD_WR_REG(reg_base, IMG_VIDEO_BUS4_MMU, MMU_ADDRESS_CONTROL, reg);
+
+	/*
+	 * Buffer device virtual address.
+	 * This is the address of the firmware blob; the firmware reads this
+	 * base address from the DMAC_SETUP register and uses it to load the
+	 * modules, so it has to be set even when uploading the FW via
+	 * registers.
+	 */
+ VXD_WR_RPT_REG(reg_base, DMAC, DMAC_SETUP, fw_vxd_virt_addr, dma_channel);
+
+ /*
+ * Set base address of PTD. Same as before, has to be configured even
+ * when uploading the firmware via regs, FW uses it to execute DMA
+ * before switching to stream MMU context.
+ */
+ VXD_WR_REG(reg_base, IMG_VIDEO_BUS4_MMU, MMU_DIR_BASE_ADDR, vxd_ptd_addr);
+
+ /* Configure MMU bank index - Use bank 0 */
+ VXD_WR_REG(reg_base, IMG_VIDEO_BUS4_MMU, MMU_BANK_INDEX, 0);
+
+ /* Set the MTX timer divider register */
+ reg = 0;
+ reg = VXD_WR_REG_FIELD(reg, MTX_CORE, MTX_SYSC_TIMERDIV, TIMER_EN, 1);
+ /*
+ * Setting max freq - divide by 1 for better measurement accuracy
+ * during fw upload stage
+ */
+ reg = VXD_WR_REG_FIELD(reg, MTX_CORE, MTX_SYSC_TIMERDIV, TIMER_DIV, 0);
+ VXD_WR_REG(reg_base, MTX_CORE, MTX_SYSC_TIMERDIV, reg);
+}
+
+static int pvdec_check_fw_sig(void __iomem *reg_base)
+{
+ unsigned int fw_sig = VXD_RD_REG_ABS(reg_base, VLR_OFFSET +
+ PVDEC_FW_SIGNATURE_OFFSET);
+
+ if (fw_sig != PVDEC_FW_READY_SIG)
+ return -EIO;
+
+ return 0;
+}
+
+static void pvdec_kick_mtx(void __iomem *reg_base)
+{
+ unsigned int reg = 0;
+
+ reg = VXD_WR_REG_FIELD(reg, MTX_CORE, MTX_KICKI, MTX_KICKI, 1);
+ VXD_WR_REG(reg_base, MTX_CORE, MTX_KICKI, reg);
+}
+
+static int pvdec_write_vlr(void __iomem *reg_base, const unsigned int *buf,
+ unsigned long size_dwrds, int off_dwrds)
+{
+ unsigned int i;
+
+ if (((off_dwrds + size_dwrds) * sizeof(unsigned int)) > VLR_SIZE)
+ return -EINVAL;
+
+ for (i = 0; i < size_dwrds; i++) {
+ int off = (off_dwrds + i) * sizeof(unsigned int);
+
+ VXD_WR_REG_ABS(reg_base, (VLR_OFFSET + off), *buf);
+ buf++;
+ }
+
+ return 0;
+}
+
+static int pvdec_poll_fw_boot(void __iomem *reg_base, struct vxd_boot_poll_params *poll_params)
+{
+ unsigned int i;
+
+ for (i = 0; i < 25; i++) {
+ if (!pvdec_check_fw_sig(reg_base))
+ return 0;
+ usleep_range(100, 110);
+ }
+ for (i = 0; i < poll_params->msleep_cycles; i++) {
+ if (!pvdec_check_fw_sig(reg_base))
+ return 0;
+ msleep(100);
+ }
+ return -EIO;
+}
+
+static int pvdec_read_vlr(void __iomem *reg_base, unsigned int *buf,
+ unsigned long size_dwrds, int off_dwrds)
+{
+ unsigned int i;
+
+ if (((off_dwrds + size_dwrds) * sizeof(unsigned int)) > VLR_SIZE)
+ return -EINVAL;
+
+ for (i = 0; i < size_dwrds; i++) {
+ int off = (off_dwrds + i) * sizeof(unsigned int);
+ *buf++ = VXD_RD_REG_ABS(reg_base, (VLR_OFFSET + off));
+ }
+
+ return 0;
+}
+
+/* Get configuration of a ring buffer used to send messages to the MTX */
+static int pvdec_get_to_mtx_cfg(void __iomem *reg_base, unsigned long *size, int *off,
+ unsigned int *wr_idx, unsigned int *rd_idx)
+{
+ unsigned int to_mtx_cfg;
+ int to_mtx_off, ret;
+
+ ret = pvdec_check_fw_sig(reg_base);
+ if (ret)
+ return ret;
+
+ to_mtx_cfg = VXD_RD_REG_ABS(reg_base, VLR_OFFSET + PVDEC_FW_TO_MTX_BUF_CONF_OFFSET);
+
+ *size = PVDEC_FW_COM_BUF_SIZE(to_mtx_cfg);
+ to_mtx_off = PVDEC_FW_COM_BUF_OFF(to_mtx_cfg);
+
+ if (to_mtx_off % 4)
+ return -EIO;
+
+ to_mtx_off /= sizeof(unsigned int);
+ *off = to_mtx_off;
+
+ *wr_idx = VXD_RD_REG_ABS(reg_base, VLR_OFFSET + PVDEC_FW_TO_MTX_WR_IDX_OFFSET);
+ *rd_idx = VXD_RD_REG_ABS(reg_base, VLR_OFFSET + PVDEC_FW_TO_MTX_RD_IDX_OFFSET);
+
+ if ((*rd_idx >= *size) || (*wr_idx >= *size))
+ return -EIO;
+
+ return 0;
+}
+
+/* Submit a padding message to the host->MTX ring buffer */
+static int pvdec_send_pad_msg(void __iomem *reg_base)
+{
+ int ret, pad_size, to_mtx_off; /* offset in dwords */
+	unsigned int wr_idx, rd_idx; /* indices in dwords */
+ unsigned long pad_msg_size = 1, to_mtx_size; /* size in dwords */
+ const unsigned long max_msg_size = VXD_MAX_PAYLOAD_SIZE / sizeof(unsigned int);
+ unsigned int pad_msg;
+
+ ret = pvdec_get_to_mtx_cfg(reg_base, &to_mtx_size, &to_mtx_off, &wr_idx, &rd_idx);
+ if (ret)
+ return ret;
+
+ pad_size = to_mtx_size - wr_idx; /* size in dwords */
+
+ if (pad_size <= 0) {
+ VXD_WR_REG_ABS(reg_base, VLR_OFFSET + PVDEC_FW_TO_MTX_WR_IDX_OFFSET, 0);
+ return 0;
+ }
+
+ while (pad_size > 0) {
+ int cur_pad_size = pad_size > max_msg_size ?
+ max_msg_size : pad_size;
+
+ pad_msg = 0;
+ pad_msg = VXD_WR_REG_FIELD(pad_msg, PVDEC_FW, DEVA_GENMSG, MSG_SIZE, cur_pad_size);
+ pad_msg = VXD_WR_REG_FIELD(pad_msg, PVDEC_FW, DEVA_GENMSG,
+ MSG_TYPE, PVDEC_FW_MSG_TYPE_PADDING);
+
+ ret = pvdec_write_vlr(reg_base, &pad_msg, pad_msg_size, to_mtx_off + wr_idx);
+ if (ret)
+ return ret;
+
+ wr_idx += cur_pad_size;
+
+ VXD_WR_REG_ABS(reg_base, VLR_OFFSET + PVDEC_FW_TO_MTX_WR_IDX_OFFSET, wr_idx);
+
+ pad_size -= cur_pad_size;
+
+ pvdec_kick_mtx(reg_base);
+ }
+
+ wr_idx = 0;
+ VXD_WR_REG_ABS(reg_base, VLR_OFFSET + PVDEC_FW_TO_MTX_WR_IDX_OFFSET, wr_idx);
+
+ return 0;
+}
+
+/*
+ * Check if there is enough space in comms RAM to submit a <msg_size>
+ * dwords long message. Submit a padding message if necessary and requested.
+ *
+ * Returns 0 if there is space for a message.
+ * Returns -EINVAL when msg is too big or empty.
+ * Returns -EIO when there was a problem accessing the HW.
+ * Returns -EBUSY when there is not enough space.
+ */
+static int pvdec_check_comms_space(void __iomem *reg_base, unsigned long msg_size,
+ unsigned char send_padding)
+{
+ int ret, to_mtx_off; /* offset in dwords */
+	unsigned int wr_idx, rd_idx; /* indices in dwords */
+ unsigned long to_mtx_size; /* size in dwords */
+
+ ret = pvdec_get_to_mtx_cfg(reg_base, &to_mtx_size, &to_mtx_off, &wr_idx, &rd_idx);
+ if (ret)
+ return ret;
+
+ /* Enormous or empty message, won't fit */
+ if (msg_size >= to_mtx_size || !msg_size)
+ return -EINVAL;
+
+ /* Buffer does not wrap */
+ if (wr_idx >= rd_idx) {
+ /* Is there enough space to put the message? */
+ if (wr_idx + msg_size < to_mtx_size)
+ return 0;
+
+ if (!send_padding)
+ return -EBUSY;
+
+ /* Check if it's ok to send a padding message */
+ if (rd_idx == 0)
+ return -EBUSY;
+
+ /* Send a padding message */
+ ret = pvdec_send_pad_msg(reg_base);
+ if (ret)
+ return ret;
+
+ /*
+ * And check if there's enough space at the beginning
+ * of a buffer
+ */
+ if (msg_size >= rd_idx)
+ return -EBUSY; /* Not enough space at the beginning */
+
+ } else { /* Buffer wraps */
+ if (wr_idx + msg_size >= rd_idx)
+ return -EBUSY; /* Not enough space! */
+ }
+
+ return 0;
+}
+
+/* Get configuration of a ring buffer used to receive messages from the MTX */
+static int pvdec_get_to_host_cfg(void __iomem *reg_base, unsigned long *size, int *off,
+ unsigned int *wr_idx, unsigned int *rd_idx)
+{
+ unsigned int to_host_cfg;
+ int to_host_off, ret;
+
+ ret = pvdec_check_fw_sig(reg_base);
+ if (ret)
+ return ret;
+
+ to_host_cfg = VXD_RD_REG_ABS(reg_base, VLR_OFFSET + PVDEC_FW_TO_HOST_BUF_CONF_OFFSET);
+
+ *size = PVDEC_FW_COM_BUF_SIZE(to_host_cfg);
+ to_host_off = PVDEC_FW_COM_BUF_OFF(to_host_cfg);
+
+ if (to_host_off % 4)
+ return -EIO;
+
+ to_host_off /= sizeof(unsigned int);
+ *off = to_host_off;
+
+ *wr_idx = VXD_RD_REG_ABS(reg_base, VLR_OFFSET + PVDEC_FW_TO_HOST_WR_IDX_OFFSET);
+ *rd_idx = VXD_RD_REG_ABS(reg_base, VLR_OFFSET + PVDEC_FW_TO_HOST_RD_IDX_OFFSET);
+
+ if ((*rd_idx >= *size) || (*wr_idx >= *size))
+ return -EIO;
+
+ return 0;
+}
+
+static void pvdec_select_pipe(void __iomem *reg_base, unsigned char pipe)
+{
+ unsigned int reg = 0;
+
+ reg = VXD_WR_REG_FIELD(reg, PVDEC_CORE, PVDEC_HOST_PIPE_SELECT, PIPE_SEL, pipe);
+ VXD_WR_REG(reg_base, PVDEC_CORE, PVDEC_HOST_PIPE_SELECT, reg);
+}
+
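+/* Apply the optional memory staller configuration before the firmware boots */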
+static void pvdec_pre_boot_setup(const void *dev,
+ void __iomem *reg_base,
+ struct vxd_ena_params *ena_params)
+{
+ /* Memory staller pre boot settings */
+ if (ena_params->mem_staller.data) {
+ unsigned char size = ena_params->mem_staller.size;
+
+ if (size == PVDEC_CORE_MEMSTALLER_ELEMENTS) {
+ unsigned int *data = ena_params->mem_staller.data;
+
+#ifdef DEBUG_DECODER_DRIVER
+ dev_dbg(dev, "%s: Setting up memory staller", __func__);
+#endif
+ /*
+ * Data structure represents PVDEC_TEST memory staller
+			 * registers according to TRM section 5.25
+ */
+ VXD_WR_REG(reg_base, PVDEC_TEST, MEM_READ_LATENCY, data[0]);
+ VXD_WR_REG(reg_base, PVDEC_TEST, MEM_WRITE_RESPONSE_LATENCY, data[1]);
+ VXD_WR_REG(reg_base, PVDEC_TEST, MEM_CTRL, data[2]);
+ VXD_WR_REG(reg_base, PVDEC_TEST, RAND_STL_MEM_CMD_CONFIG, data[3]);
+ VXD_WR_REG(reg_base, PVDEC_TEST, RAND_STL_MEM_WDATA_CONFIG, data[4]);
+ VXD_WR_REG(reg_base, PVDEC_TEST, RAND_STL_MEM_WRESP_CONFIG, data[5]);
+ VXD_WR_REG(reg_base, PVDEC_TEST, RAND_STL_MEM_RDATA_CONFIG, data[6]);
+ } else {
+ dev_warn(dev, "%s: Wrong layout of mem staller config (%u)!",
+ __func__, size);
+ }
+ }
+}
+
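+/*
+ * Final setup after firmware boot: configure the MMU tiling scheme and tile
+ * heap address ranges, then re-program the MTX timer divider for the
+ * measured (or default) core clock.
+ */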
+static void pvdec_post_boot_setup(const void *dev,
+ void __iomem *reg_base,
+ unsigned int freq_khz)
+{
+ int reg;
+
+ /*
+ * Configure VXD MMU to use video tiles (256x16) and unique
+ * strides per context as default. There is currently no
+ * override mechanism.
+ */
+ reg = VXD_RD_REG(reg_base, IMG_VIDEO_BUS4_MMU, MMU_CONTROL0);
+ reg = VXD_WR_REG_FIELD(reg, IMG_VIDEO_BUS4_MMU, MMU_CONTROL0,
+ MMU_TILING_SCHEME, 0);
+ reg = VXD_WR_REG_FIELD(reg, IMG_VIDEO_BUS4_MMU, MMU_CONTROL0,
+ USE_TILE_STRIDE_PER_CTX, 1);
+ VXD_WR_REG(reg_base, IMG_VIDEO_BUS4_MMU, MMU_CONTROL0, reg);
+
+ /*
+ * Setup VXD MMU with the tile heap device virtual address
+ * ranges.
+ */
+ VXD_WR_RPT_REG(reg_base, IMG_VIDEO_BUS4_MMU, MMU_TILE_MIN_ADDR,
+ PVDEC_HEAP_TILE512_START, 0);
+ VXD_WR_RPT_REG(reg_base, IMG_VIDEO_BUS4_MMU, MMU_TILE_MAX_ADDR,
+ PVDEC_HEAP_TILE512_START + PVDEC_HEAP_TILE512_SIZE - 1, 0);
+ VXD_WR_RPT_REG(reg_base, IMG_VIDEO_BUS4_MMU, MMU_TILE_MIN_ADDR,
+ PVDEC_HEAP_TILE1024_START, 1);
+ VXD_WR_RPT_REG(reg_base, IMG_VIDEO_BUS4_MMU, MMU_TILE_MAX_ADDR,
+ PVDEC_HEAP_TILE1024_START + PVDEC_HEAP_TILE1024_SIZE - 1, 1);
+ VXD_WR_RPT_REG(reg_base, IMG_VIDEO_BUS4_MMU, MMU_TILE_MIN_ADDR,
+ PVDEC_HEAP_TILE2048_START, 2);
+ VXD_WR_RPT_REG(reg_base, IMG_VIDEO_BUS4_MMU, MMU_TILE_MAX_ADDR,
+ PVDEC_HEAP_TILE2048_START + PVDEC_HEAP_TILE2048_SIZE - 1, 2);
+ VXD_WR_RPT_REG(reg_base, IMG_VIDEO_BUS4_MMU, MMU_TILE_MIN_ADDR,
+ PVDEC_HEAP_TILE4096_START, 3);
+ VXD_WR_RPT_REG(reg_base, IMG_VIDEO_BUS4_MMU, MMU_TILE_MAX_ADDR,
+ PVDEC_HEAP_TILE4096_START + PVDEC_HEAP_TILE4096_SIZE - 1, 3);
+
+ /* Disable timer */
+ VXD_WR_REG(reg_base, MTX_CORE, MTX_SYSC_TIMERDIV, 0);
+
+ reg = 0;
+ if (freq_khz)
+ reg = VXD_WR_REG_FIELD(reg, MTX_CORE, MTX_SYSC_TIMERDIV, TIMER_DIV,
+ PVDEC_CALC_TIMER_DIV(freq_khz / 1000));
+ else
+ reg = VXD_WR_REG_FIELD(reg, MTX_CORE, MTX_SYSC_TIMERDIV,
+ TIMER_DIV, PVDEC_CLK_MHZ_DEFAULT - 1);
+
+ /* Enable the MTX timer with final settings */
+ reg = VXD_WR_REG_FIELD(reg, MTX_CORE, MTX_SYSC_TIMERDIV, TIMER_EN, 1);
+ VXD_WR_REG(reg_base, MTX_CORE, MTX_SYSC_TIMERDIV, reg);
+}
+
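+/*
+ * Latch the host wall-clock time and the MTX tick counter with interrupts
+ * disabled so both samples refer to (almost) the same instant.
+ */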
+static void pvdec_clock_measure(void __iomem *reg_base,
+ struct timespec64 *start_time,
+ unsigned int *start_ticks)
+{
+ local_irq_disable();
+ ktime_get_real_ts64(start_time);
+ *start_ticks = VXD_RD_REG(reg_base, MTX_CORE, MTX_SYSC_TXTIMER);
+ local_irq_enable();
+}
+
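+/*
+ * Take a second host time/MTX tick sample and derive the MTX clock frequency
+ * in kHz: elapsed ticks * 10^6 / elapsed nanoseconds.
+ */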
+static int pvdec_clock_calculate(const void *dev,
+ void __iomem *reg_base,
+ struct timespec64 *start_time,
+ unsigned int start_ticks,
+ unsigned int *freq_khz)
+{
+ struct timespec64 end_time, dif_time;
+ long long span_nsec = 0;
+ unsigned int stop_ticks, tot_ticks;
+
+ local_irq_disable();
+ ktime_get_real_ts64(&end_time);
+
+ stop_ticks = VXD_RD_REG(reg_base, MTX_CORE, MTX_SYSC_TXTIMER);
+ local_irq_enable();
+
+	dif_time = timespec64_sub(end_time, *start_time);
+
+ span_nsec = timespec64_to_ns((const struct timespec64 *)&dif_time);
+
+ /* Sanity check for mtx timer */
+ if (!stop_ticks || stop_ticks < start_ticks) {
+ dev_err(dev, "%s: invalid ticks (0x%x -> 0x%x)\n",
+ __func__, start_ticks, stop_ticks);
+ return -EIO;
+ }
+ tot_ticks = stop_ticks - start_ticks;
+
+ if (span_nsec) {
+ unsigned long long res = (unsigned long long)tot_ticks * 1000000UL;
+
+ do_divide(&res, span_nsec);
+ *freq_khz = (unsigned int)res;
+ if (*freq_khz < 1000)
+ *freq_khz = 1000; /* 1MHz */
+ } else {
+ dev_err(dev, "%s: generic failure!\n", __func__);
+ *freq_khz = 0;
+ return -ERANGE;
+ }
+
+ return 0;
+}
+
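+/*
+ * Poll the DMAC transfer counter until it reaches zero; the timeout is
+ * re-armed whenever the counter makes progress, so only a stalled transfer
+ * fails.
+ */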
+static int pvdec_wait_dma_done(const void *dev,
+ void __iomem *reg_base,
+ unsigned long size,
+ unsigned char dma_channel)
+{
+ unsigned int reg, timeout = PVDEC_TIMEOUT_COUNTER, prev_count, count = size;
+
+ do {
+ usleep_range(300, 310);
+ prev_count = count;
+ reg = VXD_RD_RPT_REG(reg_base, DMAC, DMAC_COUNT, dma_channel);
+ count = VXD_RD_REG_FIELD(reg, DMAC, DMAC_COUNT, CNT);
+ /* Check for dma progress */
+ if (count == prev_count) {
+ /* There could be a bus lag, protect against that */
+ timeout--;
+ if (timeout == 0) {
+ dev_err(dev, "%s FW DMA failed! (0x%x)\n", __func__, count);
+ return -EIO;
+ }
+ } else {
+ /* Reset timeout counter */
+ timeout = PVDEC_TIMEOUT_COUNTER;
+ }
+ } while (count > 0);
+
+ return 0;
+}
+
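+/*
+ * Upload the firmware blob into MTX RAM via the DMAC and boot the MTX once
+ * the transfer completes; the MTX timer is sampled around the transfer to
+ * estimate the core clock frequency.
+ */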
+static int pvdec_start_fw_dma(const void *dev,
+ void __iomem *reg_base,
+ unsigned char dma_channel,
+ unsigned long fw_buf_size,
+ unsigned int *freq_khz)
+{
+ unsigned int reg = 0;
+ int ret = 0;
+
+ fw_buf_size = fw_buf_size / sizeof(unsigned int);
+#ifdef DEBUG_DECODER_DRIVER
+	dev_dbg(dev, "%s: dma FW upload, fw_buf_size: %lu (dwords)\n", __func__, fw_buf_size);
+#endif
+
+ pvdec_select_pipe(reg_base, 1);
+
+ reg = VXD_RD_REG(reg_base, PVDEC_PIXEL, PIXEL_MAN_CLK_ENA);
+ reg = VXD_WR_REG_FIELD(reg, PVDEC_PIXEL, PIXEL_MAN_CLK_ENA, PIXEL_DMAC_MAN_CLK_ENA, 1);
+ reg = VXD_WR_REG_FIELD(reg, PVDEC_PIXEL, PIXEL_MAN_CLK_ENA, PIXEL_REG_MAN_CLK_ENA, 1);
+ VXD_WR_REG(reg_base, PVDEC_PIXEL, PIXEL_MAN_CLK_ENA, reg);
+
+ /*
+ * Setup MTX to receive DMA
+ * DMA transfers to/from the MTX have to be 32-bit aligned and
+ * in multiples of 32 bits
+ */
+ VXD_WR_REG(reg_base, MTX_CORE, MTX_SYSC_CDMAA, 0); /* MTX: 0x80900000 */
+
+ reg = 0;
+ /* Burst size in multiples of 64 bits (allowed values are 2 or 4) */
+ reg = VXD_WR_REG_FIELD(reg, MTX_CORE, MTX_SYSC_CDMAC, BURSTSIZE, 0);
+ /* 0 - write to MTX memory */
+ reg = VXD_WR_REG_FIELD(reg, MTX_CORE, MTX_SYSC_CDMAC, RNW, 0);
+ /* Begin transfer */
+ reg = VXD_WR_REG_FIELD(reg, MTX_CORE, MTX_SYSC_CDMAC, ENABLE, 1);
+ /* Transfer size */
+ reg = VXD_WR_REG_FIELD(reg, MTX_CORE, MTX_SYSC_CDMAC, LENGTH,
+ ((fw_buf_size + 7) & (~7)) + 8);
+ VXD_WR_REG(reg_base, MTX_CORE, MTX_SYSC_CDMAC, reg);
+
+ /* Boot MTX once transfer is done */
+ reg = 0;
+ reg = VXD_WR_REG_FIELD(reg, PVDEC_CORE, PROC_DMAC_CONTROL,
+ BOOT_ON_DMA_CH0, 1);
+ VXD_WR_REG(reg_base, PVDEC_CORE, PROC_DMAC_CONTROL, reg);
+
+ /* Toggle channel 0 usage between MTX and other PVDEC peripherals */
+ reg = 0;
+ reg = VXD_WR_REG_FIELD(reg, PVDEC_PIXEL, PIXEL_CONTROL_0,
+ DMAC_CH_SEL_FOR_MTX, 0);
+ VXD_WR_REG(reg_base, PVDEC_PIXEL, PIXEL_CONTROL_0, reg);
+
+ /* Reset DMA channel first */
+ reg = 0;
+ reg = VXD_WR_REG_FIELD(reg, DMAC, DMAC_COUNT, SRST, 1);
+ VXD_WR_RPT_REG(reg_base, DMAC, DMAC_COUNT, reg, dma_channel);
+
+ reg = VXD_WR_REG_FIELD(reg, DMAC, DMAC_COUNT, LIST_EN, 0);
+ reg = VXD_WR_REG_FIELD(reg, DMAC, DMAC_COUNT, CNT, 0);
+ reg = VXD_WR_REG_FIELD(reg, DMAC, DMAC_COUNT, EN, 0);
+ VXD_WR_RPT_REG(reg_base, DMAC, DMAC_COUNT, reg, dma_channel);
+
+ reg = VXD_WR_REG_FIELD(reg, DMAC, DMAC_COUNT, SRST, 0);
+ VXD_WR_RPT_REG(reg_base, DMAC, DMAC_COUNT, reg, dma_channel);
+
+ /*
+ * Setup a Simple DMA for Ch0
+ * Specify the holdover period to use for the channel
+ */
+ reg = 0;
+ reg = VXD_WR_REG_FIELD(reg, DMAC, DMAC_PER_HOLD, PER_HOLD, 7);
+ VXD_WR_RPT_REG(reg_base, DMAC, DMAC_PER_HOLD, reg, dma_channel);
+
+ /* Clear the DMAC Stats */
+ VXD_WR_RPT_REG(reg_base, DMAC, DMAC_IRQ_STAT, 0, dma_channel);
+
+ reg = 0;
+ reg = VXD_WR_REG_FIELD(reg, DMAC, DMAC_PERIPH_ADDR, ADDR,
+ MTX_CORE_MTX_SYSC_CDMAT_OFFSET);
+ VXD_WR_RPT_REG(reg_base, DMAC, DMAC_PERIPH_ADDR, reg, dma_channel);
+
+ /* Clear peripheral register address */
+ reg = 0;
+ reg = VXD_WR_REG_FIELD(reg, DMAC, DMAC_PERIPH, ACC_DEL, 0);
+ reg = VXD_WR_REG_FIELD(reg, DMAC, DMAC_PERIPH, INCR, DMAC_INCR_OFF);
+ reg = VXD_WR_REG_FIELD(reg, DMAC, DMAC_PERIPH, BURST, DMAC_BURST_1);
+ reg = VXD_WR_REG_FIELD(reg, DMAC, DMAC_PERIPH, EXT_BURST, DMAC_EXT_BURST_0);
+ reg = VXD_WR_REG_FIELD(reg, DMAC, DMAC_PERIPH, EXT_SA, 0);
+ VXD_WR_RPT_REG(reg_base, DMAC, DMAC_PERIPH, reg, dma_channel);
+
+ /*
+ * Now start the transfer by setting the list enable bit in
+ * the count register
+ */
+ reg = 0;
+ reg = VXD_WR_REG_FIELD(reg, DMAC, DMAC_COUNT, TRANSFER_IEN, 1);
+ reg = VXD_WR_REG_FIELD(reg, DMAC, DMAC_COUNT, PW, DMAC_PWIDTH_32_BIT);
+ reg = VXD_WR_REG_FIELD(reg, DMAC, DMAC_COUNT, DIR, DMAC_MEM_TO_VXD);
+ reg = VXD_WR_REG_FIELD(reg, DMAC, DMAC_COUNT, PI, DMAC_INCR_ON);
+ reg = VXD_WR_REG_FIELD(reg, DMAC, DMAC_COUNT, LIST_FIN_CTL, 0);
+ reg = VXD_WR_REG_FIELD(reg, DMAC, DMAC_COUNT, LIST_EN, 0);
+ reg = VXD_WR_REG_FIELD(reg, DMAC, DMAC_COUNT, ENABLE_2D_MODE, 0);
+ reg = VXD_WR_REG_FIELD(reg, DMAC, DMAC_COUNT, CNT, fw_buf_size);
+ VXD_WR_RPT_REG(reg_base, DMAC, DMAC_COUNT, reg, dma_channel);
+
+ reg = VXD_WR_REG_FIELD(reg, DMAC, DMAC_COUNT, EN, 1);
+ VXD_WR_RPT_REG(reg_base, DMAC, DMAC_COUNT, reg, dma_channel);
+
+ /* NOTE: The MTX timer starts once DMA boot is triggered */
+ {
+ struct timespec64 host_time;
+ unsigned int mtx_time;
+
+ pvdec_clock_measure(reg_base, &host_time, &mtx_time);
+
+ ret = pvdec_wait_dma_done(dev, reg_base, fw_buf_size, dma_channel);
+ if (!ret) {
+ if (pvdec_clock_calculate(dev, reg_base, &host_time, mtx_time,
+ freq_khz) < 0)
+ dev_dbg(dev, "%s: measure info not available!\n", __func__);
+ }
+ }
+
+ return ret;
+}
+
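+/* Enable the core/register clocks first, then write the requested clock mask */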
+static int pvdec_set_clocks(void __iomem *reg_base, unsigned int req_clocks)
+{
+ unsigned int clocks = 0, reg;
+ unsigned int pvdec_timeout;
+
+ /* Turn on core clocks only */
+ clocks = VXD_WR_REG_FIELD(clocks, PVDEC_CORE, PVDEC_MAN_CLK_ENA,
+ PVDEC_REG_MAN_CLK_ENA, 1);
+ clocks = VXD_WR_REG_FIELD(clocks, PVDEC_CORE, PVDEC_MAN_CLK_ENA, CORE_MAN_CLK_ENA, 1);
+
+ /* Wait until core clocks set */
+ pvdec_timeout = PVDEC_TIMEOUT_COUNTER;
+ do {
+ VXD_WR_REG(reg_base, PVDEC_CORE, PVDEC_MAN_CLK_ENA, clocks);
+ udelay(vxd_plat_poll_udelay);
+ reg = VXD_RD_REG(reg_base, PVDEC_CORE, PVDEC_MAN_CLK_ENA);
+ pvdec_timeout--;
+ } while (reg != clocks && pvdec_timeout != 0);
+
+ if (pvdec_timeout == 0)
+ return -EIO;
+
+ /* Write requested clocks */
+ VXD_WR_REG(reg_base, PVDEC_CORE, PVDEC_MAN_CLK_ENA, req_clocks);
+
+ return 0;
+}
+
+static int pvdec_enable_clocks(void __iomem *reg_base)
+{
+ unsigned int clocks = 0;
+
+ clocks = VXD_WR_REG_FIELD(clocks, PVDEC_CORE, PVDEC_MAN_CLK_ENA,
+ PVDEC_REG_MAN_CLK_ENA, 1);
+ clocks = VXD_WR_REG_FIELD(clocks, PVDEC_CORE, PVDEC_MAN_CLK_ENA,
+ CORE_MAN_CLK_ENA, 1);
+ clocks = VXD_WR_REG_FIELD(clocks, PVDEC_CORE, PVDEC_MAN_CLK_ENA,
+ MEM_MAN_CLK_ENA, 1);
+ clocks = VXD_WR_REG_FIELD(clocks, PVDEC_CORE, PVDEC_MAN_CLK_ENA,
+ PROC_MAN_CLK_ENA, 1);
+ clocks = VXD_WR_REG_FIELD(clocks, PVDEC_CORE, PVDEC_MAN_CLK_ENA,
+ PIXEL_PROC_MAN_CLK_ENA, 1);
+
+ return pvdec_set_clocks(reg_base, clocks);
+}
+
+static int pvdec_disable_clocks(void __iomem *reg_base)
+{
+ return pvdec_set_clocks(reg_base, 0);
+}
+
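+/* Enable MTX->host and MMU fault interrupts towards the host */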
+static void pvdec_ena_mtx_int(void __iomem *reg_base)
+{
+ unsigned int reg = VXD_RD_REG(reg_base, PVDEC_CORE, PVDEC_HOST_INT_ENA);
+
+ reg = VXD_WR_REG_FIELD(reg, PVDEC_CORE, PVDEC_INT_STAT, HOST_PROC_IRQ, 1);
+ reg = VXD_WR_REG_FIELD(reg, PVDEC_CORE, PVDEC_INT_STAT, HOST_MMU_FAULT_IRQ, 1);
+ VXD_WR_REG(reg_base, PVDEC_CORE, PVDEC_HOST_INT_ENA, reg);
+}
+
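+/*
+ * Poll until the MMU reports no outstanding memory requests <mmu_checks>
+ * times, retrying up to <max_attempts> iterations.
+ */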
+static void pvdec_check_mmu_requests(void __iomem *reg_base,
+ unsigned int mmu_checks,
+ unsigned int max_attempts)
+{
+ unsigned int reg, i, checks = 0;
+
+ for (i = 0; i < max_attempts; i++) {
+ reg = VXD_RD_REG(reg_base,
+ IMG_VIDEO_BUS4_MMU, MMU_MEM_REQ);
+ reg = VXD_RD_REG_FIELD(reg, IMG_VIDEO_BUS4_MMU, MMU_MEM_REQ, TAG_OUTSTANDING);
+ if (reg) {
+ udelay(vxd_plat_poll_udelay);
+ continue;
+ }
+
+ /* Read READ_WORDS_OUTSTANDING */
+ reg = VXD_RD_REG(reg_base, IMG_VIDEO_BUS4_MMU, MMU_MEM_EXT_OUTSTANDING);
+ reg = VXD_RD_REG_FIELD(reg, IMG_VIDEO_BUS4_MMU, MMU_MEM_EXT_OUTSTANDING,
+ READ_WORDS);
+ if (!reg) {
+ checks++;
+ if (checks == mmu_checks)
+ break;
+ } else { /* Reset the counter and continue */
+ checks = 0;
+ }
+ }
+
+ if (checks != mmu_checks)
+ pr_warn("Checking for MMU outstanding requests failed!\n");
+}
+
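+/*
+ * Full soft-reset sequence: quiesce the MMU, reset the pixel/entropy units,
+ * then the MMU itself, and finally the whole PVDEC core.
+ */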
+static int pvdec_reset(void __iomem *reg_base, unsigned char skip_pipe_clocks)
+{
+ unsigned int reg = 0;
+ unsigned char pipe, num_ent_pipes, num_pix_pipes;
+ unsigned int core_id, pvdec_timeout;
+
+ core_id = VXD_RD_REG(reg_base, PVDEC_CORE, PVDEC_CORE_ID);
+
+ num_ent_pipes = VXD_RD_REG_FIELD(core_id, PVDEC_CORE, PVDEC_CORE_ID, ENT_PIPES);
+ num_pix_pipes = VXD_RD_REG_FIELD(core_id, PVDEC_CORE, PVDEC_CORE_ID, PIX_PIPES);
+
+ if (num_pix_pipes == 0 || num_pix_pipes > VXD_MAX_PIPES)
+ return -EINVAL;
+
+ /* Clear interrupt enabled flag */
+ VXD_WR_REG(reg_base, PVDEC_CORE, PVDEC_HOST_INT_ENA, 0);
+
+ /* Clear any pending interrupt flags */
+ reg = 0;
+ reg = VXD_WR_REG_FIELD(reg, PVDEC_CORE, PVDEC_INT_CLEAR, IRQ_CLEAR, 0xFFFF);
+ VXD_WR_REG(reg_base, PVDEC_CORE, PVDEC_INT_CLEAR, reg);
+
+ /* Turn all clocks on - don't touch reserved bits! */
+ pvdec_set_clocks(reg_base, 0xFFFF0113);
+
+ if (!skip_pipe_clocks) {
+ for (pipe = 1; pipe <= num_pix_pipes; pipe++) {
+ pvdec_select_pipe(reg_base, pipe);
+ /* Turn all available clocks on - skip reserved bits! */
+ VXD_WR_REG(reg_base, PVDEC_PIXEL, PIXEL_MAN_CLK_ENA, 0xFFBF0FFF);
+ }
+
+ for (pipe = 1; pipe <= num_ent_pipes; pipe++) {
+ pvdec_select_pipe(reg_base, pipe);
+ /* Turn all available clocks on - skip reserved bits! */
+ VXD_WR_REG(reg_base, PVDEC_ENTROPY, ENTROPY_MAN_CLK_ENA, 0x5);
+ }
+ }
+
+ /* 1st MMU outstanding requests check */
+ pvdec_check_mmu_requests(reg_base, 1000, 2000);
+
+ /* Make sure MMU is not under reset MMU_SOFT_RESET -> 0 */
+ pvdec_timeout = PVDEC_TIMEOUT_COUNTER;
+ do {
+ reg = VXD_RD_REG(reg_base, IMG_VIDEO_BUS4_MMU, MMU_CONTROL1);
+ reg = VXD_RD_REG_FIELD(reg, IMG_VIDEO_BUS4_MMU, MMU_CONTROL1, MMU_SOFT_RESET);
+ udelay(vxd_plat_poll_udelay);
+ pvdec_timeout--;
+ } while (reg != 0 && pvdec_timeout != 0);
+
+ if (pvdec_timeout == 0) {
+ pr_err("Waiting for MMU soft reset(1) timed out!\n");
+ pvdec_mtx_status_dump(reg_base, NULL);
+ }
+
+ /* Write 1 to MMU_PAUSE_SET */
+ reg = 0;
+ reg = VXD_WR_REG_FIELD(reg, IMG_VIDEO_BUS4_MMU, MMU_CONTROL1, MMU_PAUSE_SET, 1);
+ VXD_WR_REG(reg_base, IMG_VIDEO_BUS4_MMU, MMU_CONTROL1, reg);
+
+ /* 2nd MMU outstanding requests check */
+ pvdec_check_mmu_requests(reg_base, 100, 1000);
+
+ /* Issue software reset for all but MMU/core */
+ reg = 0;
+ reg = VXD_WR_REG_FIELD(reg, PVDEC_CORE, PVDEC_SOFT_RST, PVDEC_PIXEL_PROC_SOFT_RST, 0xFF);
+ reg = VXD_WR_REG_FIELD(reg, PVDEC_CORE, PVDEC_SOFT_RST, PVDEC_ENTROPY_SOFT_RST, 0xFF);
+ VXD_WR_REG(reg_base, PVDEC_CORE, PVDEC_SOFT_RST, reg);
+
+ VXD_RD_REG(reg_base, PVDEC_CORE, PVDEC_SOFT_RST);
+ VXD_WR_REG(reg_base, PVDEC_CORE, PVDEC_SOFT_RST, 0);
+
+ /* Write 1 to MMU_PAUSE_CLEAR in MMU_CONTROL1 reg */
+ reg = 0;
+ reg = VXD_WR_REG_FIELD(reg, IMG_VIDEO_BUS4_MMU, MMU_CONTROL1, MMU_PAUSE_CLEAR, 1);
+ VXD_WR_REG(reg_base, IMG_VIDEO_BUS4_MMU, MMU_CONTROL1, reg);
+
+ /* Confirm MMU_PAUSE_SET is cleared */
+ pvdec_timeout = PVDEC_TIMEOUT_COUNTER;
+ do {
+ reg = VXD_RD_REG(reg_base, IMG_VIDEO_BUS4_MMU, MMU_CONTROL1);
+ reg = VXD_RD_REG_FIELD(reg, IMG_VIDEO_BUS4_MMU, MMU_CONTROL1, MMU_PAUSE_SET);
+ udelay(vxd_plat_poll_udelay);
+ pvdec_timeout--;
+ } while (reg != 0 && pvdec_timeout != 0);
+
+ if (pvdec_timeout == 0) {
+ pr_err("Waiting for MMU pause clear timed out!\n");
+ pvdec_mtx_status_dump(reg_base, NULL);
+ return -EIO;
+ }
+
+ /* Issue software reset for MMU */
+ reg = 0;
+ reg = VXD_WR_REG_FIELD(reg, IMG_VIDEO_BUS4_MMU, MMU_CONTROL1, MMU_SOFT_RESET, 1);
+ VXD_WR_REG(reg_base, IMG_VIDEO_BUS4_MMU, MMU_CONTROL1, reg);
+
+ /* Wait until MMU_SOFT_RESET -> 0 */
+ pvdec_timeout = PVDEC_TIMEOUT_COUNTER;
+ do {
+ reg = VXD_RD_REG(reg_base, IMG_VIDEO_BUS4_MMU, MMU_CONTROL1);
+ reg = VXD_RD_REG_FIELD(reg, IMG_VIDEO_BUS4_MMU, MMU_CONTROL1, MMU_SOFT_RESET);
+ udelay(vxd_plat_poll_udelay);
+ pvdec_timeout--;
+ } while (reg != 0 && pvdec_timeout != 0);
+
+ if (pvdec_timeout == 0) {
+ pr_err("Waiting for MMU soft reset(2) timed out!\n");
+ pvdec_mtx_status_dump(reg_base, NULL);
+ }
+
+ /* Issue software reset for entire PVDEC */
+ reg = 0;
+ reg = VXD_WR_REG_FIELD(reg, PVDEC_CORE, PVDEC_SOFT_RST, PVDEC_SOFT_RST, 0x1);
+ VXD_WR_REG(reg_base, PVDEC_CORE, PVDEC_SOFT_RST, reg);
+
+ /* Waiting for reset bit to be cleared */
+ pvdec_timeout = PVDEC_TIMEOUT_COUNTER;
+ do {
+ reg = VXD_RD_REG(reg_base, PVDEC_CORE, PVDEC_SOFT_RST);
+ reg = VXD_RD_REG_FIELD(reg, PVDEC_CORE, PVDEC_SOFT_RST, PVDEC_SOFT_RST);
+ udelay(vxd_plat_poll_udelay);
+ pvdec_timeout--;
+ } while (reg != 0 && pvdec_timeout != 0);
+
+ if (pvdec_timeout == 0) {
+ pr_err("Waiting for PVDEC soft reset timed out!\n");
+ pvdec_mtx_status_dump(reg_base, NULL);
+ return -EIO;
+ }
+
+ /* Clear interrupt enabled flag */
+ VXD_WR_REG(reg_base, PVDEC_CORE, PVDEC_HOST_INT_ENA, 0);
+
+ /* Clear any pending interrupt flags */
+ reg = 0;
+ reg = VXD_WR_REG_FIELD(reg, PVDEC_CORE, PVDEC_INT_CLEAR, IRQ_CLEAR, 0xFFFF);
+ VXD_WR_REG(reg_base, PVDEC_CORE, PVDEC_INT_CLEAR, reg);
+ return 0;
+}
+
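+/*
+ * Read the core revision, core ID, MMU configuration and per-pipe
+ * configuration registers into <props>.
+ */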
+static int pvdec_get_properties(void __iomem *reg_base,
+ struct vxd_core_props *props)
+{
+ unsigned int major, minor, maint, group_id, core_id;
+ unsigned char num_pix_pipes, pipe;
+
+ if (!props)
+ return -EINVAL;
+
+ /* PVDEC Core Revision Information */
+ props->core_rev = VXD_RD_REG(reg_base, PVDEC_CORE, PVDEC_CORE_REV);
+ major = VXD_RD_REG_FIELD(props->core_rev, PVDEC_CORE, PVDEC_CORE_REV, PVDEC_MAJOR_REV);
+ minor = VXD_RD_REG_FIELD(props->core_rev, PVDEC_CORE, PVDEC_CORE_REV, PVDEC_MINOR_REV);
+ maint = VXD_RD_REG_FIELD(props->core_rev, PVDEC_CORE, PVDEC_CORE_REV, PVDEC_MAINT_REV);
+
+ /* Core ID */
+ props->pvdec_core_id = VXD_RD_REG(reg_base, PVDEC_CORE, PVDEC_CORE_ID);
+ group_id = VXD_RD_REG_FIELD(props->pvdec_core_id, PVDEC_CORE, PVDEC_CORE_ID, GROUP_ID);
+ core_id = VXD_RD_REG_FIELD(props->pvdec_core_id, PVDEC_CORE, PVDEC_CORE_ID, CORE_ID);
+
+ /* Ensure that the core is IMG Video Decoder (PVDEC). */
+ if (group_id != 3 || core_id != 3) {
+ pr_err("Wrong core revision %d.%d.%d !!!\n", major, minor, maint);
+ return -EIO;
+ }
+
+ props->mmu_config0 = VXD_RD_REG(reg_base, IMG_VIDEO_BUS4_MMU, MMU_CONFIG0);
+ props->mmu_config1 = VXD_RD_REG(reg_base, IMG_VIDEO_BUS4_MMU, MMU_CONFIG1);
+
+ num_pix_pipes = VXD_NUM_PIX_PIPES(*props);
+
+ if (unlikely(num_pix_pipes > VXD_MAX_PIPES)) {
+ pr_warn("Too many pipes detected!\n");
+ num_pix_pipes = VXD_MAX_PIPES;
+ }
+
+ for (pipe = 1; pipe <= num_pix_pipes; ++pipe) {
+ pvdec_select_pipe(reg_base, pipe);
+ if (pipe < VXD_MAX_PIPES) {
+ props->pixel_pipe_cfg[pipe - 1] =
+ VXD_RD_REG(reg_base, PVDEC_PIXEL, PIXEL_PIPE_CONFIG);
+ props->pixel_misc_cfg[pipe - 1] =
+ VXD_RD_REG(reg_base, PVDEC_PIXEL, PIXEL_MISC_CONFIG);
+ /*
+ * Detect pipe access problems.
+ * Pipe config shall always indicate
+ * a non zero value (at least one standard supported)!
+ */
+ if (!props->pixel_pipe_cfg[pipe - 1])
+ pr_warn("Pipe config info is wrong!\n");
+ }
+ }
+
+ pvdec_select_pipe(reg_base, 1);
+ props->pixel_max_frame_cfg = VXD_RD_REG(reg_base, PVDEC_PIXEL, MAX_FRAME_CONFIG);
+
+ {
+ unsigned int fifo_ctrl = VXD_RD_REG(reg_base, PVDEC_CORE, PROC_DBG_FIFO_CTRL0);
+
+ props->dbg_fifo_size = VXD_RD_REG_FIELD(fifo_ctrl,
+ PVDEC_CORE,
+ PROC_DBG_FIFO_CTRL0,
+ PROC_DBG_FIFO_SIZE);
+ }
+
+ return 0;
+}
+
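+/* Bring the core up: enable clocks, perform a full reset and enable host interrupts */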
+int vxd_pvdec_init(const void *dev, void __iomem *reg_base)
+{
+ int ret;
+
+#ifdef DEBUG_DECODER_DRIVER
+ dev_dbg(dev, "%s: trying to reset VXD, reg base: %p\n", __func__, reg_base);
+#endif
+
+ ret = pvdec_enable_clocks(reg_base);
+ if (ret) {
+ dev_err(dev, "%s: failed to enable clocks!\n", __func__);
+ return ret;
+ }
+
+ ret = pvdec_reset(reg_base, FALSE);
+ if (ret) {
+ dev_err(dev, "%s: VXD reset failed!\n", __func__);
+ return ret;
+ }
+
+ pvdec_ena_mtx_int(reg_base);
+
+ return 0;
+}
+
+/* Send <msg_size> dwords long message */
+int vxd_pvdec_send_msg(const void *dev,
+ void __iomem *reg_base,
+ unsigned int *msg,
+ unsigned long msg_size,
+ unsigned short msg_id,
+ struct vxd_dev *ctx)
+{
+ int ret, to_mtx_off; /* offset in dwords */
+	unsigned int wr_idx, rd_idx; /* indices in dwords */
+ unsigned long to_mtx_size; /* size in dwords */
+ unsigned int msg_wrd;
+ struct timespec64 time;
+ static int cnt;
+
+ ktime_get_real_ts64(&time);
+
+ ctx->time_fw[cnt].start_time = timespec64_to_ns((const struct timespec64 *)&time);
+ ctx->time_fw[cnt].id = msg_id;
+ cnt++;
+
+ if (cnt >= ARRAY_SIZE(ctx->time_fw))
+ cnt = 0;
+
+ ret = pvdec_get_to_mtx_cfg(reg_base, &to_mtx_size, &to_mtx_off, &wr_idx, &rd_idx);
+ if (ret) {
+ dev_err(dev, "%s: failed to obtain mtx ring buffer config!\n", __func__);
+ return ret;
+ }
+
+ /* populate the size and id fields in the message header */
+ msg_wrd = VXD_RD_MSG_WRD(msg, PVDEC_FW, DEVA_GENMSG);
+ msg_wrd = VXD_WR_REG_FIELD(msg_wrd, PVDEC_FW, DEVA_GENMSG, MSG_SIZE, msg_size);
+ msg_wrd = VXD_WR_REG_FIELD(msg_wrd, PVDEC_FW, DEVA_GENMSG, MSG_ID, msg_id);
+ VXD_WR_MSG_WRD(msg, PVDEC_FW, DEVA_GENMSG, msg_wrd);
+
+#ifdef DEBUG_DECODER_DRIVER
+	dev_dbg(dev, "%s: [msg out] size: %lu, id: 0x%x, type: 0x%x\n", __func__, msg_size, msg_id,
+		VXD_RD_REG_FIELD(msg_wrd, PVDEC_FW, DEVA_GENMSG, MSG_TYPE));
+	dev_dbg(dev, "%s: to_mtx: (%lu @ %d), wr_idx: %d, rd_idx: %d\n",
+		__func__, to_mtx_size, to_mtx_off, wr_idx, rd_idx);
+#endif
+
+ ret = pvdec_check_comms_space(reg_base, msg_size, FALSE);
+ if (ret) {
+ dev_err(dev, "%s: invalid message or not enough space (%d)!\n", __func__, ret);
+ return ret;
+ }
+
+ ret = pvdec_write_vlr(reg_base, msg, msg_size, to_mtx_off + wr_idx);
+ if (ret) {
+ dev_err(dev, "%s: failed to write msg to vlr!\n", __func__);
+ return ret;
+ }
+
+ wr_idx += msg_size;
+ if (wr_idx == to_mtx_size)
+ wr_idx = 0;
+ VXD_WR_REG_ABS(reg_base, VLR_OFFSET +
+ PVDEC_FW_TO_MTX_WR_IDX_OFFSET, wr_idx);
+
+ pvdec_kick_mtx(reg_base);
+
+ return 0;
+}
+
+/* Fetch size (in dwords) of message pending from MTX */
+int vxd_pvdec_pend_msg_info(const void *dev, void __iomem *reg_base,
+ unsigned long *size,
+ unsigned short *msg_id,
+ unsigned char *not_last_msg)
+{
+ int ret, to_host_off; /* offset in dwords */
+	unsigned int wr_idx, rd_idx; /* indices in dwords */
+ unsigned long to_host_size; /* size in dwords */
+ unsigned int val = 0;
+
+ ret = pvdec_get_to_host_cfg(reg_base, &to_host_size, &to_host_off, &wr_idx, &rd_idx);
+ if (ret) {
+ dev_err(dev, "%s: failed to obtain host ring buffer config!\n", __func__);
+ return ret;
+ }
+
+#ifdef DEBUG_DECODER_DRIVER
+	dev_dbg(dev, "%s: to host: (%lu @ %d), wr: %u, rd: %u\n", __func__,
+		to_host_size, to_host_off, wr_idx, rd_idx);
+#endif
+
+ if (wr_idx == rd_idx) {
+ *size = 0;
+ *msg_id = 0;
+ return 0;
+ }
+
+ ret = pvdec_read_vlr(reg_base, &val, 1, to_host_off + rd_idx);
+ if (ret) {
+ dev_err(dev, "%s: failed to read first word!\n", __func__);
+ return ret;
+ }
+
+ *size = VXD_RD_REG_FIELD(val, PVDEC_FW, DEVA_GENMSG, MSG_SIZE);
+ *msg_id = VXD_RD_REG_FIELD(val, PVDEC_FW, DEVA_GENMSG, MSG_ID);
+ *not_last_msg = VXD_RD_REG_FIELD(val, PVDEC_FW, DEVA_GENMSG, NOT_LAST_MSG);
+
+#ifdef DEBUG_DECODER_DRIVER
+	dev_dbg(dev, "%s: [msg in] rd_idx: %d, size: %lu, id: 0x%04x, type: 0x%x\n",
+		__func__, rd_idx, *size, *msg_id,
+		VXD_RD_REG_FIELD(val, PVDEC_FW, DEVA_GENMSG, MSG_TYPE));
+#endif
+
+ return 0;
+}
+
+/*
+ * Receive a message from the MTX and place it in a <buf_size> dwords long
+ * buffer. If the provided buffer is too small to hold the message, only part
+ * of it will be placed in the buffer, but the ring buffer read index will
+ * still be advanced so that the message is no longer available.
+ */
+int vxd_pvdec_recv_msg(const void *dev, void __iomem *reg_base,
+ unsigned int *buf,
+ unsigned long buf_size,
+ struct vxd_dev *vxd)
+{
+ int ret, to_host_off; /* offset in dwords */
+	unsigned int wr_idx, rd_idx; /* indices in dwords */
+ unsigned long to_host_size, msg_size, to_read; /* sizes in dwords */
+ unsigned int val = 0;
+ struct timespec64 time;
+ unsigned short msg_id;
+ int loop;
+
+ ret = pvdec_get_to_host_cfg(reg_base, &to_host_size,
+ &to_host_off, &wr_idx, &rd_idx);
+ if (ret) {
+ dev_err(dev, "%s: failed to obtain host ring buffer config!\n", __func__);
+ return ret;
+ }
+
+#ifdef DEBUG_DECODER_DRIVER
+	dev_dbg(dev, "%s: to host: (%lu @ %d), wr: %u, rd: %u\n", __func__,
+		to_host_size, to_host_off, wr_idx, rd_idx);
+#endif
+
+ /* Obtain the message size */
+ ret = pvdec_read_vlr(reg_base, &val, 1, to_host_off + rd_idx);
+ if (ret) {
+ dev_err(dev, "%s: failed to read first word!\n", __func__);
+ return ret;
+ }
+ msg_size = VXD_RD_REG_FIELD(val, PVDEC_FW, DEVA_GENMSG, MSG_SIZE);
+
+ to_read = (msg_size > buf_size) ? buf_size : msg_size;
+
+ /* Does the message wrap? */
+ if (to_read + rd_idx > to_host_size) {
+ unsigned long chunk_size = to_host_size - rd_idx;
+
+ ret = pvdec_read_vlr(reg_base, buf, chunk_size, to_host_off + rd_idx);
+ if (ret) {
+ dev_err(dev, "%s: failed to read chunk before wrap!\n", __func__);
+ return ret;
+ }
+ to_read -= chunk_size;
+ buf += chunk_size;
+ rd_idx = 0;
+ msg_size -= chunk_size;
+ }
+
+	/*
+	 * If the message wrapped, read the second chunk.
+	 * If it didn't, read the first and only chunk.
+	 */
+ ret = pvdec_read_vlr(reg_base, buf, to_read, to_host_off + rd_idx);
+ if (ret) {
+ dev_err(dev, "%s: failed to read message from vlr!\n", __func__);
+ return ret;
+ }
+
+ /* Update read index in the ring buffer */
+ rd_idx = (rd_idx + msg_size) % to_host_size;
+ VXD_WR_REG_ABS(reg_base, VLR_OFFSET +
+ PVDEC_FW_TO_HOST_RD_IDX_OFFSET, rd_idx);
+
+ msg_id = VXD_RD_REG_FIELD(val, PVDEC_FW, DEVA_GENMSG, MSG_ID);
+
+ ktime_get_real_ts64(&time);
+ for (loop = 0; loop < ARRAY_SIZE(vxd->time_fw); loop++) {
+ if (vxd->time_fw[loop].id == msg_id) {
+ vxd->time_fw[loop].end_time =
+ timespec64_to_ns((const struct timespec64 *)&time);
+#ifdef DEBUG_DECODER_DRIVER
+			dev_info(dev, "fw decode time is %lld us for msg_id 0x%x\n",
+				 div_s64(vxd->time_fw[loop].end_time -
+				 vxd->time_fw[loop].start_time, 1000), msg_id);
+#endif
+ break;
+ }
+ }
+
+ if (loop == ARRAY_SIZE(vxd->time_fw))
+		dev_err(dev, "fw decode time for msg_id 0x%x is not measured\n", msg_id);
+
+ return 0;
+}
+
+int vxd_pvdec_check_fw_status(const void *dev, void __iomem *reg_base)
+{
+ int ret;
+ unsigned int val = 0;
+
+ /* Obtain current fw status */
+ ret = pvdec_read_vlr(reg_base, &val, 1, PVDEC_FW_STATUS_OFFSET);
+ if (ret) {
+ dev_err(dev, "%s: failed to read fw status!\n", __func__);
+ return ret;
+ }
+
+ /* Check for fatal condition */
+ if (val == PVDEC_FW_STATUS_PANIC || val == PVDEC_FW_STATUS_ASSERT ||
+ val == PVDEC_FW_STATUS_SO)
+ return -1;
+
+ return 0;
+}
+
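+/*
+ * Build the firmware INIT message (rendec buffer, HEVC config, signature
+ * select, partial frame timer and watchdog) and submit it to the MTX.
+ */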
+static int pvdec_send_init_msg(const void *dev,
+ void __iomem *reg_base,
+ struct vxd_ena_params *ena_params)
+{
+ unsigned short msg_id = 0;
+ unsigned int msg[PVDEC_FW_DEVA_INIT_MSG_WRDS] = { 0 }, msg_wrd = 0;
+ struct vxd_dev *vxd;
+ int ret;
+
+#ifdef DEBUG_DECODER_DRIVER
+ dev_dbg(dev, "%s: rendec: %d@0x%x, crc: 0x%x\n", __func__,
+ ena_params->rendec_size, ena_params->rendec_addr, ena_params->crc);
+#endif
+
+ vxd = kzalloc(sizeof(*vxd), GFP_KERNEL);
+ if (!vxd)
+		return -ENOMEM;
+
+ /* message type */
+ msg_wrd = VXD_WR_REG_FIELD(msg_wrd, PVDEC_FW, DEVA_GENMSG, MSG_TYPE,
+ PVDEC_FW_MSG_TYPE_INIT);
+ VXD_WR_MSG_WRD(msg, PVDEC_FW, DEVA_GENMSG, msg_wrd);
+
+ /* rendec address */
+ VXD_WR_MSG_WRD(msg, PVDEC_FW_DEVA_INIT, RENDEC_ADDR0, ena_params->rendec_addr);
+
+ /* rendec size */
+ msg_wrd = 0;
+ msg_wrd = VXD_WR_REG_FIELD(msg_wrd, PVDEC_FW, DEVA_INIT, RENDEC_SIZE0,
+ ena_params->rendec_size);
+ VXD_WR_MSG_WRD(msg, PVDEC_FW_DEVA_INIT, RENDEC_SIZE0, msg_wrd);
+
+ /* HEVC configuration */
+ msg_wrd = 0;
+ msg_wrd = VXD_WR_REG_FIELD(msg_wrd, PVDEC_FW, DEVA_INIT,
+ HEVC_CFG_MAX_H_FOR_PIPE_WAIT, 0xFFFF);
+ VXD_WR_MSG_WRD(msg, PVDEC_FW_DEVA_INIT, HEVC_CFG, msg_wrd);
+
+ /* signature select */
+ VXD_WR_MSG_WRD(msg, PVDEC_FW_DEVA_INIT, SIG_SELECT, ena_params->crc);
+
+ /* partial frame notification timer divider */
+ msg_wrd = 0;
+ msg_wrd = VXD_WR_REG_FIELD(msg_wrd, PVDEC_FW, DEVA_INIT, PFNT_DIV, PVDEC_PFNT_DIV);
+ VXD_WR_MSG_WRD(msg, PVDEC_FW_DEVA_INIT, PFNT_DIV, msg_wrd);
+
+ /* firmware watchdog timeout value */
+ msg_wrd = VXD_WR_REG_FIELD(msg_wrd, PVDEC_FW, DEVA_INIT, FWWDT_MS, ena_params->fwwdt_ms);
+ VXD_WR_MSG_WRD(msg, PVDEC_FW_DEVA_INIT, FWWDT_MS, msg_wrd);
+
+ ret = vxd_pvdec_send_msg(dev, reg_base, msg, ARRAY_SIZE(msg), msg_id, vxd);
+ kfree(vxd);
+
+ return ret;
+}
+
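+/*
+ * Boot sequence: reset the core, DMA the firmware into MTX RAM, wait for the
+ * firmware ready signature and send the INIT message.
+ */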
+int vxd_pvdec_ena(const void *dev, void __iomem *reg_base,
+ struct vxd_ena_params *ena_params,
+ struct vxd_fw_hdr *fw_hdr,
+ unsigned int *freq_khz)
+{
+ int ret;
+ unsigned int mtx_ram_size = 0;
+ unsigned char dma_channel = 0;
+
+ ret = vxd_pvdec_init(dev, reg_base);
+ if (ret) {
+ dev_err(dev, "%s: PVDEC init failed!\n", __func__);
+ return ret;
+ }
+
+ ret = pvdec_get_mtx_ram_size(reg_base, &mtx_ram_size);
+ if (ret) {
+ dev_err(dev, "%s: failed to get MTX RAM size!\n", __func__);
+ return ret;
+ }
+
+ if (mtx_ram_size < fw_hdr->core_size) {
+ dev_err(dev, "%s: FW larger than MTX RAM size (%u < %d)!\n",
+ __func__, mtx_ram_size, fw_hdr->core_size);
+ return -EINVAL;
+ }
+
+ /* Apply pre boot settings - if any */
+ pvdec_pre_boot_setup(dev, reg_base, ena_params);
+
+ pvdec_prep_fw_upload(dev, reg_base, ena_params, dma_channel);
+
+ ret = pvdec_start_fw_dma(dev, reg_base, dma_channel, fw_hdr->core_size, freq_khz);
+
+ if (ret) {
+ dev_err(dev, "%s: failed to load FW! (%d)", __func__, ret);
+ pvdec_mtx_status_dump(reg_base, NULL);
+ return ret;
+ }
+
+ /* Apply final settings - if any */
+ pvdec_post_boot_setup(dev, reg_base, *freq_khz);
+
+ ret = pvdec_poll_fw_boot(reg_base, &ena_params->boot_poll);
+ if (ret) {
+ dev_err(dev, "%s: FW failed to boot! (%d)!\n", __func__, ret);
+ return ret;
+ }
+
+ ret = pvdec_send_init_msg(dev, reg_base, ena_params);
+ if (ret) {
+ dev_err(dev, "%s: failed to send init message! (%d)!\n", __func__, ret);
+ return ret;
+ }
+
+ return 0;
+}
+
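+/* Disable the core: re-enable clocks, reset the core, then gate the clocks off */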
+int vxd_pvdec_dis(const void *dev, void __iomem *reg_base)
+{
+ int ret = pvdec_enable_clocks(reg_base);
+
+ if (ret) {
+ dev_err(dev, "%s: failed to enable clocks! (%d)\n", __func__, ret);
+ return ret;
+ }
+
+ ret = pvdec_reset(reg_base, TRUE);
+ if (ret) {
+ dev_err(dev, "%s: VXD reset failed! (%d)\n", __func__, ret);
+ return ret;
+ }
+
+ ret = pvdec_disable_clocks(reg_base);
+ if (ret) {
+ dev_err(dev, "%s: VXD disable clocks failed! (%d)\n", __func__, ret);
+ return ret;
+ }
+
+ return 0;
+}
+
+/*
+ * Invalidate VXD's MMU cache.
+ */
+int vxd_pvdec_mmu_flush(const void *dev, void __iomem *reg_base)
+{
+ unsigned int reg = VXD_RD_REG(reg_base, IMG_VIDEO_BUS4_MMU, MMU_CONTROL1);
+
+ if (reg == PVDEC_INVALID_HW_STATE) {
+ dev_err(dev, "%s: invalid HW state!\n", __func__);
+ return -EIO;
+ }
+
+ reg = VXD_WR_REG_FIELD(reg, IMG_VIDEO_BUS4_MMU, MMU_CONTROL1, MMU_INVALDC, 0xF);
+ VXD_WR_REG(reg_base, IMG_VIDEO_BUS4_MMU, MMU_CONTROL1, reg);
+
+#ifdef DEBUG_DECODER_DRIVER
+ dev_dbg(dev, "%s: device MMU cache invalidated!\n", __func__);
+#endif
+
+ return 0;
+}
+
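+/*
+ * Read and acknowledge pending interrupts; MMU fault interrupts are also
+ * masked here, since clearing them alone is not enough. Returns
+ * IRQ_WAKE_THREAD when the threaded handler needs to run.
+ */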
+irqreturn_t vxd_pvdec_clear_int(void __iomem *reg_base, unsigned int *irq_status)
+{
+ irqreturn_t ret = IRQ_NONE;
+ unsigned int enabled;
+ unsigned int status = VXD_RD_REG(reg_base, PVDEC_CORE, PVDEC_INT_STAT);
+
+ enabled = VXD_RD_REG(reg_base, PVDEC_CORE, PVDEC_HOST_INT_ENA);
+
+ status &= enabled;
+ /* Store the last irq status */
+ *irq_status |= status;
+
+ if (status & (PVDEC_CORE_PVDEC_INT_STAT_HOST_MMU_FAULT_IRQ_MASK |
+ PVDEC_CORE_PVDEC_INT_STAT_HOST_PROC_IRQ_MASK))
+ ret = IRQ_WAKE_THREAD;
+
+ /* Disable MMU interrupts - clearing is not enough */
+ if (status & PVDEC_CORE_PVDEC_INT_STAT_HOST_MMU_FAULT_IRQ_MASK) {
+ enabled &= ~PVDEC_CORE_PVDEC_INT_STAT_HOST_MMU_FAULT_IRQ_MASK;
+ VXD_WR_REG(reg_base, PVDEC_CORE, PVDEC_HOST_INT_ENA, enabled);
+ }
+
+ VXD_WR_REG(reg_base, PVDEC_CORE, PVDEC_INT_CLEAR, status);
+
+ return ret;
+}
+
+/*
+ * Check if there is enough space in comms RAM to submit a <msg_size> dwords
+ * long message. This function also submits a padding message first if that
+ * is necessary for this particular message.
+ *
+ * return 0 if there is enough space,
+ * return -EBUSY if there is not enough space,
+ * return another fault code in case of an error.
+ */
+int vxd_pvdec_msg_fit(const void *dev, void __iomem *reg_base, unsigned long msg_size)
+{
+ int ret = pvdec_check_comms_space(reg_base, msg_size, TRUE);
+
+	/*
+	 * In some environments, when the to_mtx buffer is small and the
+	 * messages userspace is submitting are large (e.g. the FWBSP flow),
+	 * it is possible that the firmware consumes the padding message sent
+	 * above immediately. Retry the check.
+	 */
+ if (ret == -EBUSY) {
+ unsigned int flags = VXD_RD_REG_ABS(reg_base,
+ VLR_OFFSET + PVDEC_FW_FLAGS_OFFSET) |
+ PVDEC_FWFLAG_FAKE_COMPLETION;
+
+#ifdef DEBUG_DECODER_DRIVER
+ dev_dbg(dev, "comms space full, asking fw to send empty msg when space is available");
+#endif
+
+ VXD_WR_REG_ABS(reg_base, VLR_OFFSET + PVDEC_FW_FLAGS_OFFSET, flags);
+ ret = pvdec_check_comms_space(reg_base, msg_size, FALSE);
+ }
+
+ return ret;
+}
+
+void vxd_pvdec_get_state(const void *dev, void __iomem *reg_base,
+ unsigned int num_pipes,
+ struct vxd_hw_state *state)
+{
+ unsigned char pipe;
+#ifdef DEBUG_DECODER_DRIVER
+ unsigned int state_cfg = VXD_RD_REG_ABS(reg_base, (VLR_OFFSET +
+ PVDEC_FW_STATE_BUF_CFG_OFFSET));
+
+ unsigned short state_size = PVDEC_FW_COM_BUF_SIZE(state_cfg);
+ unsigned short state_off = PVDEC_FW_COM_BUF_OFF(state_cfg);
+
+ /*
+ * The generic fw progress counter
+ * is the first element in the fw state
+ */
+ dev_dbg(dev, "%s: state off: 0x%x, size: 0x%x\n", __func__, state_off, state_size);
+ state->fw_counter = VXD_RD_REG_ABS(reg_base, (VLR_OFFSET + state_off));
+ dev_dbg(dev, "%s: fw_counter: 0x%x\n", __func__, state->fw_counter);
+#endif
+
+ /* We just combine the macroblocks being processed by the HW */
+ for (pipe = 0; pipe < num_pipes; pipe++) {
+ unsigned int p_off = VXD_GET_PIPE_OFF(num_pipes, pipe + 1);
+ unsigned int reg_val;
+
+ /* Front-end */
+ unsigned int reg_off = VXD_GET_REG_OFF(PVDEC_ENTROPY, ENTROPY_LAST_MB);
+
+ state->fe_status[pipe] = VXD_RD_REG_ABS(reg_base, reg_off + p_off);
+
+ reg_off = VXD_GET_REG_OFF(MSVDX_VEC, VEC_ENTDEC_INFORMATION);
+ state->fe_status[pipe] |= VXD_RD_REG_ABS(reg_base, reg_off + p_off);
+
+ /* Back-end */
+ reg_off = VXD_GET_REG_OFF(PVDEC_VEC_BE, VEC_BE_STATUS);
+ state->be_status[pipe] = VXD_RD_REG_ABS(reg_base, reg_off + p_off);
+ reg_off = VXD_GET_REG_OFF(MSVDX_VDMC, VDMC_MACROBLOCK_NUMBER);
+ state->be_status[pipe] |= VXD_RD_REG_ABS(reg_base, reg_off + p_off);
+
+ /*
+ * Take DMAC channels 2/3 into consideration to cover
+ * parser progress on SR1/2
+ */
+ reg_off = VXD_GET_RPT_REG_OFF(DMAC, DMAC_COUNT, 2);
+ reg_val = VXD_RD_REG_ABS(reg_base, reg_off + p_off);
+ state->dmac_status[pipe][0] = VXD_RD_REG_FIELD(reg_val, DMAC, DMAC_COUNT, CNT);
+ reg_off = VXD_GET_RPT_REG_OFF(DMAC, DMAC_COUNT, 3);
+ reg_val = VXD_RD_REG_ABS(reg_base, reg_off + p_off);
+ state->dmac_status[pipe][1] = VXD_RD_REG_FIELD(reg_val, DMAC, DMAC_COUNT, CNT);
+ }
+}
+
+/*
+ * Check for the source of the last interrupt.
+ *
+ * return 0 if nothing serious happened,
+ * return -EFAULT if there was a critical interrupt detected.
+ */
+int vxd_pvdec_check_irq(const void *dev, void __iomem *reg_base, unsigned int irq_status)
+{
+ if (irq_status & PVDEC_CORE_PVDEC_INT_STAT_HOST_MMU_FAULT_IRQ_MASK) {
+ unsigned int status0 =
+ VXD_RD_REG(reg_base, IMG_VIDEO_BUS4_MMU, MMU_STATUS0);
+ unsigned int status1 =
+ VXD_RD_REG(reg_base, IMG_VIDEO_BUS4_MMU, MMU_STATUS1);
+
+ unsigned int addr = VXD_RD_REG_FIELD(status0, IMG_VIDEO_BUS4_MMU,
+ MMU_STATUS0, MMU_FAULT_ADDR) << 12;
+ unsigned char reason = VXD_RD_REG_FIELD(status0, IMG_VIDEO_BUS4_MMU,
+ MMU_STATUS0, MMU_PF_N_RW);
+ unsigned char requestor = VXD_RD_REG_FIELD(status1, IMG_VIDEO_BUS4_MMU,
+ MMU_STATUS1, MMU_FAULT_REQ_ID);
+ unsigned char type = VXD_RD_REG_FIELD(status1, IMG_VIDEO_BUS4_MMU,
+ MMU_STATUS1, MMU_FAULT_RNW);
+ unsigned char secure = VXD_RD_REG_FIELD(status0, IMG_VIDEO_BUS4_MMU,
+ MMU_STATUS0, MMU_SECURE_FAULT);
+
+#ifdef DEBUG_DECODER_DRIVER
+ dev_dbg(dev, "%s: MMU Page Fault s0:%08x s1:%08x", __func__, status0, status1);
+#endif
+
+ dev_err(dev, "%s: MMU %s fault from %s while %s @ 0x%08X", __func__,
+ (reason) ? "Page" : "Protection",
+ (requestor & (0x1)) ? "dmac" :
+ (requestor & (0x2)) ? "vec" :
+ (requestor & (0x4)) ? "vdmc" :
+ (requestor & (0x8)) ? "vdeb" : "unknown source",
+ (type) ? "reading" : "writing", addr);
+
+ if (secure)
+ dev_err(dev, "%s: MMU security policy violation detected!", __func__);
+
+ return -EFAULT;
+ }
+
+ return 0;
+}
+
+/*
+ * This functions enables the clocks, fetches the core properties, stores them
+ * in the <props> structure and DISABLES the clocks. Do not call when hardware
+ * is busy!
+ */
+int vxd_pvdec_get_props(const void *dev, void __iomem *reg_base, struct vxd_core_props *props)
+{
+#ifdef DEBUG_DECODER_DRIVER
+ unsigned char num_pix_pipes, pipe;
+#endif
+ int ret = pvdec_enable_clocks(reg_base);
+
+ if (ret) {
+ dev_err(dev, "%s: failed to enable clocks!\n", __func__);
+ return ret;
+ }
+
+ ret = pvdec_get_mtx_ram_size(reg_base, &props->mtx_ram_size);
+ if (ret) {
+ dev_err(dev, "%s: failed to get MTX ram size!\n", __func__);
+ return ret;
+ }
+
+ ret = pvdec_get_properties(reg_base, props);
+ if (ret) {
+ dev_err(dev, "%s: failed to get VXD props!\n", __func__);
+ return ret;
+ }
+
+ if (pvdec_disable_clocks(reg_base))
+ dev_err(dev, "%s: failed to disable clocks!\n", __func__);
+
+#ifdef DEBUG_DECODER_DRIVER
+ num_pix_pipes = VXD_NUM_PIX_PIPES(*props);
+
+ /* Warning already raised in pvdec_get_properties() */
+ if (unlikely(num_pix_pipes > VXD_MAX_PIPES))
+ num_pix_pipes = VXD_MAX_PIPES;
+ dev_dbg(dev, "%s: core_rev: 0x%08x\n", __func__, props->core_rev);
+ dev_dbg(dev, "%s: pvdec_core_id: 0x%08x\n", __func__, props->pvdec_core_id);
+ dev_dbg(dev, "%s: mmu_config0: 0x%08x\n", __func__, props->mmu_config0);
+ dev_dbg(dev, "%s: mmu_config1: 0x%08x\n", __func__, props->mmu_config1);
+ dev_dbg(dev, "%s: mtx_ram_size: %u\n", __func__, props->mtx_ram_size);
+ dev_dbg(dev, "%s: pix max frame: 0x%08x\n", __func__, props->pixel_max_frame_cfg);
+
+ for (pipe = 1; pipe <= num_pix_pipes; ++pipe)
+ dev_dbg(dev, "%s: pipe %u, 0x%08x, misc 0x%08x\n",
+ __func__, pipe, props->pixel_pipe_cfg[pipe - 1],
+ props->pixel_misc_cfg[pipe - 1]);
+ dev_dbg(dev, "%s: dbg fifo size: %u\n", __func__, props->dbg_fifo_size);
+#endif
+ return 0;
+}
diff --git a/drivers/staging/media/vxd/decoder/vxd_pvdec_priv.h b/drivers/staging/media/vxd/decoder/vxd_pvdec_priv.h
new file mode 100644
index 000000000000..6cc9aef45904
--- /dev/null
+++ b/drivers/staging/media/vxd/decoder/vxd_pvdec_priv.h
@@ -0,0 +1,126 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * VXD PVDEC Private header file
+ *
+ * Copyright (c) Imagination Technologies Ltd.
+ * Copyright (c) 2021 Texas Instruments Incorporated - http://www.ti.com/
+ *
+ * Authors:
+ * Amit Makani <[email protected]>
+ *
+ * Re-written for upstreaming
+ * Sidraya Jayagond <[email protected]>
+ * Prashanth Kumar Amai <[email protected]>
+ */
+
+#ifndef _VXD_PVDEC_PRIV_H
+#define _VXD_PVDEC_PRIV_H
+#include <linux/interrupt.h>
+
+#include "img_dec_common.h"
+#include "vxd_pvdec_regs.h"
+#include "vxd_dec.h"
+
+#ifdef ERROR_RECOVERY_SIMULATION
+/* kernel object used to debug. Declared in v4l2_int.c */
+extern struct kobject *vxd_dec_kobject;
+extern int disable_fw_irq_value;
+extern int g_module_irq;
+#endif
+
+struct vxd_boot_poll_params {
+ unsigned int msleep_cycles;
+};
+
+struct vxd_ena_params {
+ struct vxd_boot_poll_params boot_poll;
+
+	unsigned long fw_buf_size;
+	/* VXD's MMU virtual address of the firmware buffer */
+	unsigned int fw_buf_virt_addr;
+	unsigned int ptd; /* Shifted physical address of PTD */
+
+ /* Required for firmware upload via registers. */
+ struct {
+ const unsigned char *buf; /* Firmware blob buffer */
+
+ } regs_data;
+
+ struct {
+ unsigned secure : 1; /* Secure flow indicator. */
+ unsigned wait_dbg_fifo : 1; /*
+ * Indicates that fw shall use
+ * blocking mode when putting logs
+ * into debug fifo
+ */
+ };
+
+ /* Structure containing memory staller configuration */
+ struct {
+ unsigned int *data; /* Configuration data array */
+ unsigned char size; /* Configuration size in dwords */
+
+ } mem_staller;
+
+ unsigned int fwwdt_ms; /* Firmware software watchdog timeout value */
+
+ unsigned int crc; /* HW signatures to be enabled by firmware */
+ unsigned int rendec_addr; /* VXD's virtual address of a rendec buffer */
+ unsigned short rendec_size; /* Size of a rendec buffer in 4K pages */
+};
+
+int vxd_pvdec_init(const void *dev, void __iomem *reg_base);
+
+int vxd_pvdec_ena(const void *dev, void __iomem *reg_base,
+ struct vxd_ena_params *ena_params, struct vxd_fw_hdr *hdr,
+ unsigned int *freq_khz);
+
+int vxd_pvdec_dis(const void *dev, void __iomem *reg_base);
+
+int vxd_pvdec_mmu_flush(const void *dev, void __iomem *reg_base);
+
+int vxd_pvdec_send_msg(const void *dev, void __iomem *reg_base,
+ unsigned int *msg, unsigned long msg_size, unsigned short msg_id,
+ struct vxd_dev *ctx);
+
+int vxd_pvdec_pend_msg_info(const void *dev, void __iomem *reg_base,
+ unsigned long *size, unsigned short *msg_id,
+ unsigned char *not_last_msg);
+
+int vxd_pvdec_recv_msg(const void *dev, void __iomem *reg_base,
+ unsigned int *buf, unsigned long buf_size, struct vxd_dev *ctx);
+
+int vxd_pvdec_check_fw_status(const void *dev, void __iomem *reg_base);
+
+unsigned long vxd_pvdec_peek_mtx_fifo(const void *dev,
+ void __iomem *reg_base);
+
+unsigned long vxd_pvdec_read_mtx_fifo(const void *dev, void __iomem *reg_base,
+ unsigned int *buf, unsigned long size);
+
+irqreturn_t vxd_pvdec_clear_int(void __iomem *reg_base, unsigned int *irq_status);
+
+int vxd_pvdec_check_irq(const void *dev, void __iomem *reg_base,
+ unsigned int irq_status);
+
+int vxd_pvdec_msg_fit(const void *dev, void __iomem *reg_base,
+ unsigned long msg_size);
+
+void vxd_pvdec_get_state(const void *dev, void __iomem *reg_base,
+ unsigned int num_pipes, struct vxd_hw_state *state);
+
+int vxd_pvdec_get_props(const void *dev, void __iomem *reg_base,
+ struct vxd_core_props *props);
+
+unsigned long vxd_pvdec_get_dbg_fifo_size(void __iomem *reg_base);
+
+int vxd_pvdec_dump_mtx_ram(const void *dev, void __iomem *reg_base,
+ unsigned int addr, unsigned int count, unsigned int *buf);
+
+int vxd_pvdec_dump_mtx_status(const void *dev, void __iomem *reg_base,
+ unsigned int *array, unsigned int array_size);
+
+#endif /* _VXD_PVDEC_PRIV_H */
diff --git a/drivers/staging/media/vxd/decoder/vxd_pvdec_regs.h b/drivers/staging/media/vxd/decoder/vxd_pvdec_regs.h
new file mode 100644
index 000000000000..2d8cf9ef8df7
--- /dev/null
+++ b/drivers/staging/media/vxd/decoder/vxd_pvdec_regs.h
@@ -0,0 +1,779 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * VXD PVDEC registers header file
+ *
+ * Copyright (c) Imagination Technologies Ltd.
+ * Copyright (c) 2021 Texas Instruments Incorporated - http://www.ti.com/
+ *
+ * Authors:
+ * Angela Stegmaier <[email protected]>
+ *
+ * Re-written for upstreaming
+ * Sidraya Jayagond <[email protected]>
+ * Prashanth Kumar Amai <[email protected]>
+ */
+
+#ifndef VXD_PVDEC_REGS_H
+#define VXD_PVDEC_REGS_H
+
+/* ************************* VXD-specific values *************************** */
+/* 0x10 for code, 0x18 for data. */
+#define PVDEC_MTX_CORE_MEM 0x18
+/* Iteration time out counter for MTX I/0. */
+#define PVDEC_TIMEOUT_COUNTER 1000
+/* Partial frame notification timer divider. */
+#define PVDEC_PFNT_DIV 0
+/* Value returned by register reads when HW enters invalid state (FPGA) */
+#define PVDEC_INVALID_HW_STATE 0x000dead1
+
+/* Default core clock for pvdec */
+#define PVDEC_CLK_MHZ_DEFAULT 200
+
+/* Offsets of registers groups within VXD. */
+#define PVDEC_PROC_OFFSET 0x0000
+/* 0x34c: Skip DMA registers when running against CSIM (virtual platform) */
+#define PVDEC_PROC_SIZE 0x34C /* 0x3FF */
+
+#define PVDEC_CORE_OFFSET 0x0400
+#define PVDEC_CORE_SIZE 0x3FF
+
+#define MTX_CORE_OFFSET PVDEC_PROC_OFFSET
+#define MTX_CORE_SIZE PVDEC_PROC_SIZE
+
+#define VIDEO_BUS4_MMU_OFFSET 0x1000
+#define VIDEO_BUS4_MMU_SIZE 0x1FF
+
+#define IMG_VIDEO_BUS4_MMU_OFFSET VIDEO_BUS4_MMU_OFFSET
+#define IMG_VIDEO_BUS4_MMU_SIZE VIDEO_BUS4_MMU_SIZE
+
+#define VLR_OFFSET 0x2000
+#define VLR_SIZE 0x1000
+
+/* PVDEC_ENTROPY defined in uapi/vxd_pvdec.h */
+
+#define PVDEC_PIXEL_OFFSET 0x4000
+#define PVDEC_PIXEL_SIZE 0x1FF
+
+/* PVDEC_VEC_BE defined in uapi/vxd_pvdec.h */
+
+/* MSVDX_VEC defined in uapi/vxd_pvdec.h */
+
+#define MSVDX_VDMC_OFFSET 0x6800
+#define MSVDX_VDMC_SIZE 0x7F
+
+#define DMAC_OFFSET 0x6A00
+#define DMAC_SIZE 0x1FF
+
+#define PVDEC_TEST_OFFSET 0xFF00
+#define PVDEC_TEST_SIZE 0xFF
+
+/* *********************** firmware specific values ************************* */
+
+/* layout of COMMS RAM */
+
+#define PVDEC_FW_COMMS_HDR_SIZE 0x38
+
+#define PVDEC_FW_STATUS_OFFSET 0x00
+#define PVDEC_FW_TASK_STATUS_OFFSET 0x04
+#define PVDEC_FW_ID_OFFSET 0x08
+#define PVDEC_FW_MTXPC_OFFSET 0x0c
+#define PVDEC_FW_MSG_COUNTER_OFFSET 0x10
+#define PVDEC_FW_SIGNATURE_OFFSET 0x14
+#define PVDEC_FW_TO_HOST_BUF_CONF_OFFSET 0x18
+#define PVDEC_FW_TO_HOST_RD_IDX_OFFSET 0x1c
+#define PVDEC_FW_TO_HOST_WR_IDX_OFFSET 0x20
+#define PVDEC_FW_TO_MTX_BUF_CONF_OFFSET 0x24
+#define PVDEC_FW_TO_MTX_RD_IDX_OFFSET 0x28
+#define PVDEC_FW_FLAGS_OFFSET 0x2c
+#define PVDEC_FW_TO_MTX_WR_IDX_OFFSET 0x30
+#define PVDEC_FW_STATE_BUF_CFG_OFFSET 0x34
+
+/* firmware status */
+
+#define PVDEC_FW_STATUS_PANIC 0x2
+#define PVDEC_FW_STATUS_ASSERT 0x3
+#define PVDEC_FW_STATUS_SO 0x8
+
+/* firmware flags */
+
+#define PVDEC_FWFLAG_BIG_TO_HOST_BUFFER 0x00000002
+#define PVDEC_FWFLAG_FORCE_FS_FLOW 0x00000004
+#define PVDEC_FWFLAG_DISABLE_WATCHDOGS 0x00000008
+#define PVDEC_FWFLAG_DISABLE_AUTONOMOUS_RESET 0x00000040
+#define PVDEC_FWFLAG_DISABLE_IDLE_GPIO 0x00002000
+#define PVDEC_FWFLAG_ENABLE_ERROR_CONCEALMENT 0x00100000
+#define PVDEC_FWFLAG_DISABLE_GENC_FLUSHING 0x00800000
+#define PVDEC_FWFLAG_FAKE_COMPLETION 0x20000000
+#define PVDEC_FWFLAG_DISABLE_COREWDT_TIMERS 0x01000000
+
+/* firmware message header */
+
+#define PVDEC_FW_DEVA_GENMSG_OFFSET 0
+
+#define PVDEC_FW_DEVA_GENMSG_MSG_ID_MASK 0xFFFF0000
+#define PVDEC_FW_DEVA_GENMSG_MSG_ID_SHIFT 16
+
+#define PVDEC_FW_DEVA_GENMSG_MSG_TYPE_MASK 0xFF00
+#define PVDEC_FW_DEVA_GENMSG_MSG_TYPE_SHIFT 8
+
+#define PVDEC_FW_DEVA_GENMSG_NOT_LAST_MSG_MASK 0x80
+#define PVDEC_FW_DEVA_GENMSG_NOT_LAST_MSG_SHIFT 7
+
+#define PVDEC_FW_DEVA_GENMSG_MSG_SIZE_MASK 0x7F
+#define PVDEC_FW_DEVA_GENMSG_MSG_SIZE_SHIFT 0
+
+/* firmware init message */
+
+#define PVDEC_FW_DEVA_INIT_MSG_WRDS 9
+
+#define PVDEC_FW_DEVA_INIT_RENDEC_ADDR0_OFFSET 0xC
+
+#define PVDEC_FW_DEVA_INIT_RENDEC_SIZE0_OFFSET 0x10
+#define PVDEC_FW_DEVA_INIT_RENDEC_SIZE0_MASK 0xFFFF
+#define PVDEC_FW_DEVA_INIT_RENDEC_SIZE0_SHIFT 0
+
+#define PVDEC_FW_DEVA_INIT_HEVC_CFG_OFFSET 0x14
+#define PVDEC_FW_DEVA_INIT_HEVC_CFG_MAX_H_FOR_PIPE_WAIT_MASK 0xFFFF0000
+#define PVDEC_FW_DEVA_INIT_HEVC_CFG_MAX_H_FOR_PIPE_WAIT_SHIFT 16
+#define PVDEC_FW_DEVA_INIT_HEVC_CFG_MIN_H_FOR_DUAL_PIPE_MASK 0xFFFF
+#define PVDEC_FW_DEVA_INIT_HEVC_CFG_MIN_H_FOR_DUAL_PIPE_SHIFT 0
+
+#define PVDEC_FW_DEVA_INIT_SIG_SELECT_OFFSET 0x18
+
+#define PVDEC_FW_DEVA_INIT_DBG_DELAYS_OFFSET 0x1C
+
+#define PVDEC_FW_DEVA_INIT_PFNT_DIV_OFFSET 0x20
+#define PVDEC_FW_DEVA_INIT_PFNT_DIV_MASK 0xFFFF0000
+#define PVDEC_FW_DEVA_INIT_PFNT_DIV_SHIFT 16
+
+#define PVDEC_FW_DEVA_INIT_FWWDT_MS_OFFSET 0x20
+#define PVDEC_FW_DEVA_INIT_FWWDT_MS_MASK 0xFFFF
+#define PVDEC_FW_DEVA_INIT_FWWDT_MS_SHIFT 0
+
+/* firmware message types */
+#define PVDEC_FW_MSG_TYPE_PADDING 0
+#define PVDEC_FW_MSG_TYPE_INIT 0x80
+
+/* miscellaneous */
+
+#define PVDEC_FW_READY_SIG 0xa5a5a5a5
+
+#define PVDEC_FW_COM_BUF_SIZE(cfg) ((cfg) & 0x0000ffff)
+#define PVDEC_FW_COM_BUF_OFF(cfg) (((cfg) & 0xffff0000) >> 16)
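+/*
+ * Illustrative example (the configuration value is hypothetical): for a
+ * buffer configuration word cfg == 0x00380400, such as one read from
+ * PVDEC_FW_TO_HOST_BUF_CONF_OFFSET, PVDEC_FW_COM_BUF_SIZE(cfg) yields
+ * 0x0400 and PVDEC_FW_COM_BUF_OFF(cfg) yields 0x0038.
+ */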
+
+/*
+ * Timer divider calculation macro.
+ * NOTE: The timer divider is only an 8-bit field, so we set it up for
+ * a 2 MHz timer base to cover the wider range of core frequencies seen
+ * on real platforms (freq > 255 MHz).
+ */
+#define PVDEC_CALC_TIMER_DIV(val) (((val) - 1) / 2)
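+/*
+ * Illustrative example: with the default core clock of
+ * PVDEC_CLK_MHZ_DEFAULT (200 MHz), PVDEC_CALC_TIMER_DIV(200) == 99;
+ * assuming the hardware divides by (N + 1), this gives the ~2 MHz
+ * timer base described above.
+ */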
+
+#define MTX_CORE_STATUS_ELEMENTS 4
+
+#define PVDEC_CORE_MEMSTALLER_ELEMENTS 7
+
+/* ********************** PVDEC_CORE registers group ************************ */
+
+/* register PVDEC_SOFT_RESET */
+#define PVDEC_CORE_PVDEC_SOFT_RST_OFFSET 0x0000
+
+#define PVDEC_CORE_PVDEC_SOFT_RST_PVDEC_PIXEL_PROC_SOFT_RST_MASK 0xFF000000
+#define PVDEC_CORE_PVDEC_SOFT_RST_PVDEC_PIXEL_PROC_SOFT_RST_SHIFT 24
+
+#define PVDEC_CORE_PVDEC_SOFT_RST_PVDEC_ENTROPY_SOFT_RST_MASK 0x00FF0000
+#define PVDEC_CORE_PVDEC_SOFT_RST_PVDEC_ENTROPY_SOFT_RST_SHIFT 16
+
+#define PVDEC_CORE_PVDEC_SOFT_RST_PVDEC_MMU_SOFT_RST_MASK 0x00000002
+#define PVDEC_CORE_PVDEC_SOFT_RST_PVDEC_MMU_SOFT_RST_SHIFT 1
+
+#define PVDEC_CORE_PVDEC_SOFT_RST_PVDEC_SOFT_RST_MASK 0x00000001
+#define PVDEC_CORE_PVDEC_SOFT_RST_PVDEC_SOFT_RST_SHIFT 0
+
+/* register PVDEC_HOST_INTERRUPT_STATUS */
+#define PVDEC_CORE_PVDEC_INT_STAT_OFFSET 0x0010
+
+#define PVDEC_CORE_PVDEC_INT_STAT_HOST_SYS_WDT_MASK 0x10000000
+#define PVDEC_CORE_PVDEC_INT_STAT_HOST_SYS_WDT_SHIFT 28
+
+#define PVDEC_CORE_PVDEC_INT_STAT_HOST_READ_TIMEOUT_PROC_IRQ_MASK 0x08000000
+#define PVDEC_CORE_PVDEC_INT_STAT_HOST_READ_TIMEOUT_PROC_IRQ_SHIFT 27
+
+#define PVDEC_CORE_PVDEC_INT_STAT_HOST_COMMAND_TIMEOUT_PROC_IRQ_MASK 0x04000000
+#define PVDEC_CORE_PVDEC_INT_STAT_HOST_COMMAND_TIMEOUT_PROC_IRQ_SHIFT 26
+
+#define PVDEC_CORE_PVDEC_INT_STAT_HOST_READ_TIMEOUT_HOST_IRQ_MASK 0x02000000
+#define PVDEC_CORE_PVDEC_INT_STAT_HOST_READ_TIMEOUT_HOST_IRQ_SHIFT 25
+
+#define PVDEC_CORE_PVDEC_INT_STAT_HOST_COMMAND_TIMEOUT_HOST_IRQ_MASK 0x01000000
+#define PVDEC_CORE_PVDEC_INT_STAT_HOST_COMMAND_TIMEOUT_HOST_IRQ_SHIFT 24
+
+#define PVDEC_CORE_PVDEC_INT_STAT_HOST_PROC_GPIO_IRQ_MASK 0x00200000
+#define PVDEC_CORE_PVDEC_INT_STAT_HOST_PROC_GPIO_IRQ_SHIFT 21
+
+#define PVDEC_CORE_PVDEC_INT_STAT_HOST_PROC_IRQ_MASK 0x00100000
+#define PVDEC_CORE_PVDEC_INT_STAT_HOST_PROC_IRQ_SHIFT 20
+
+#define PVDEC_CORE_PVDEC_INT_STAT_HOST_MMU_FAULT_IRQ_MASK 0x00010000
+#define PVDEC_CORE_PVDEC_INT_STAT_HOST_MMU_FAULT_IRQ_SHIFT 16
+
+#define PVDEC_CORE_PVDEC_INT_STAT_HOST_PIXEL_PROCESSING_IRQ_MASK 0x0000FF00
+#define PVDEC_CORE_PVDEC_INT_STAT_HOST_PIXEL_PROCESSING_IRQ_SHIFT 8
+
+#define PVDEC_CORE_PVDEC_INT_STAT_HOST_ENTROPY_PIPE_IRQ_MASK 0x000000FF
+#define PVDEC_CORE_PVDEC_INT_STAT_HOST_ENTROPY_PIPE_IRQ_SHIFT 0
+
+/* register PVDEC_INTERRUPT_CLEAR */
+#define PVDEC_CORE_PVDEC_INT_CLEAR_OFFSET 0x0014
+
+#define PVDEC_CORE_PVDEC_INT_CLEAR_IRQ_CLEAR_MASK 0xFFFF0000
+#define PVDEC_CORE_PVDEC_INT_CLEAR_IRQ_CLEAR_SHIFT 16
+
+/* register PVDEC_HOST_INTERRUPT_ENABLE */
+#define PVDEC_CORE_PVDEC_HOST_INT_ENA_OFFSET 0x0018
+
+#define PVDEC_CORE_PVDEC_HOST_INT_ENA_HOST_IRQ_ENABLE_MASK 0xFFFF0000
+#define PVDEC_CORE_PVDEC_HOST_INT_ENA_HOST_IRQ_ENABLE_SHIFT 16
+
+/* Register PVDEC_MAN_CLK_ENABLE */
+#define PVDEC_CORE_PVDEC_MAN_CLK_ENA_OFFSET 0x0040
+
+#define PVDEC_CORE_PVDEC_MAN_CLK_ENA_PIXEL_PROC_MAN_CLK_ENA_MASK 0xFF000000
+#define PVDEC_CORE_PVDEC_MAN_CLK_ENA_PIXEL_PROC_MAN_CLK_ENA_SHIFT 24
+
+#define PVDEC_CORE_PVDEC_MAN_CLK_ENA_ENTROPY_PIPE_MAN_CLK_ENA_MASK 0x00FF0000
+#define PVDEC_CORE_PVDEC_MAN_CLK_ENA_ENTROPY_PIPE_MAN_CLK_ENA_SHIFT 16
+
+#define PVDEC_CORE_PVDEC_MAN_CLK_ENA_MEM_MAN_CLK_ENA_MASK 0x00000100
+#define PVDEC_CORE_PVDEC_MAN_CLK_ENA_MEM_MAN_CLK_ENA_SHIFT 8
+
+#define PVDEC_CORE_PVDEC_MAN_CLK_ENA_PVDEC_REG_MAN_CLK_ENA_MASK 0x00000010
+#define PVDEC_CORE_PVDEC_MAN_CLK_ENA_PVDEC_REG_MAN_CLK_ENA_SHIFT 4
+
+#define PVDEC_CORE_PVDEC_MAN_CLK_ENA_PROC_MAN_CLK_ENA_MASK 0x00000002
+#define PVDEC_CORE_PVDEC_MAN_CLK_ENA_PROC_MAN_CLK_ENA_SHIFT 1
+
+#define PVDEC_CORE_PVDEC_MAN_CLK_ENA_CORE_MAN_CLK_ENA_MASK 0x00000001
+#define PVDEC_CORE_PVDEC_MAN_CLK_ENA_CORE_MAN_CLK_ENA_SHIFT 0
+
+/* register PVDEC_HOST_PIPE_SELECT */
+#define PVDEC_CORE_PVDEC_HOST_PIPE_SELECT_OFFSET 0x0060
+
+#define PVDEC_CORE_PVDEC_HOST_PIPE_SELECT_PIPE_SEL_MASK 0x0000000F
+#define PVDEC_CORE_PVDEC_HOST_PIPE_SELECT_PIPE_SEL_SHIFT 0
+
+/* register PROC_DEBUG */
+#define PVDEC_CORE_PROC_DEBUG_OFFSET 0x0100
+
+#define PVDEC_CORE_PROC_DEBUG_MTX_LAST_RAM_BANK_SIZE_MASK 0xFF000000
+#define PVDEC_CORE_PROC_DEBUG_MTX_LAST_RAM_BANK_SIZE_SHIFT 24
+
+#define PVDEC_CORE_PROC_DEBUG_MTX_RAM_BANK_SIZE_MASK 0x000F0000
+#define PVDEC_CORE_PROC_DEBUG_MTX_RAM_BANK_SIZE_SHIFT 16
+
+#define PVDEC_CORE_PROC_DEBUG_MTX_RAM_BANKS_MASK 0x00000F00
+#define PVDEC_CORE_PROC_DEBUG_MTX_RAM_BANKS_SHIFT 8
+
+#define PVDEC_CORE_PROC_DEBUG_MTX_RAM_NEW_REPRESENTATION_MASK 0x00000080
+#define PVDEC_CORE_PROC_DEBUG_MTX_RAM_NEW_REPRESENTATION_SHIFT 7
+
+#define PVDEC_CORE_PROC_DEBUG_PROC_DBG_GPIO_OUT_MASK 0x00000018
+#define PVDEC_CORE_PROC_DEBUG_PROC_DBG_GPIO_OUT_SHIFT 3
+
+#define PVDEC_CORE_PROC_DEBUG_PROC_DBG_IS_SLAVE_MASK 0x00000004
+#define PVDEC_CORE_PROC_DEBUG_PROC_DBG_IS_SLAVE_SHIFT 2
+
+#define PVDEC_CORE_PROC_DEBUG_PROC_DBG_GPIO_IN_MASK 0x00000003
+#define PVDEC_CORE_PROC_DEBUG_PROC_DBG_GPIO_IN_SHIFT 0
+
+/* register PROC_DMAC_CONTROL */
+#define PVDEC_CORE_PROC_DMAC_CONTROL_OFFSET 0x0104
+
+#define PVDEC_CORE_PROC_DMAC_CONTROL_BOOT_ON_DMA_CH0_MASK 0x80000000
+#define PVDEC_CORE_PROC_DMAC_CONTROL_BOOT_ON_DMA_CH0_SHIFT 31
+
+/* register PROC_DEBUG_FIFO */
+#define PVDEC_CORE_PROC_DBG_FIFO_OFFSET 0x0108
+
+#define PVDEC_CORE_PROC_DBG_FIFO_PROC_DBG_FIFO_MASK 0xFFFFFFFF
+#define PVDEC_CORE_PROC_DBG_FIFO_PROC_DBG_FIFO_SHIFT 0
+
+/* register PROC_DEBUG_FIFO_CTRL_0 */
+#define PVDEC_CORE_PROC_DBG_FIFO_CTRL0_OFFSET 0x010C
+
+#define PVDEC_CORE_PROC_DBG_FIFO_CTRL0_PROC_DBG_FIFO_COUNT_MASK 0xFFFF0000
+#define PVDEC_CORE_PROC_DBG_FIFO_CTRL0_PROC_DBG_FIFO_COUNT_SHIFT 16
+
+#define PVDEC_CORE_PROC_DBG_FIFO_CTRL0_PROC_DBG_FIFO_SIZE_MASK 0x0000FFFF
+#define PVDEC_CORE_PROC_DBG_FIFO_CTRL0_PROC_DBG_FIFO_SIZE_SHIFT 0
+
+/* register PVDEC_CORE_ID */
+#define PVDEC_CORE_PVDEC_CORE_ID_OFFSET 0x0230
+
+#define PVDEC_CORE_PVDEC_CORE_ID_GROUP_ID_MASK 0xFF000000
+#define PVDEC_CORE_PVDEC_CORE_ID_GROUP_ID_SHIFT 24
+
+#define PVDEC_CORE_PVDEC_CORE_ID_CORE_ID_MASK 0x00FF0000
+#define PVDEC_CORE_PVDEC_CORE_ID_CORE_ID_SHIFT 16
+
+#define PVDEC_CORE_PVDEC_CORE_ID_PVDEC_CORE_CONFIG_MASK 0x0000FFFF
+#define PVDEC_CORE_PVDEC_CORE_ID_PVDEC_CORE_CONFIG_SHIFT 0
+
+#define PVDEC_CORE_PVDEC_CORE_ID_ENT_PIPES_MASK 0x0000000F
+#define PVDEC_CORE_PVDEC_CORE_ID_ENT_PIPES_SHIFT 0
+
+#define PVDEC_CORE_PVDEC_CORE_ID_PIX_PIPES_MASK 0x000000F0
+#define PVDEC_CORE_PVDEC_CORE_ID_PIX_PIPES_SHIFT 4
+
+/* register PVDEC_CORE_REV */
+#define PVDEC_CORE_PVDEC_CORE_REV_OFFSET 0x0240
+
+#define PVDEC_CORE_PVDEC_CORE_REV_PVDEC_DESIGNER_MASK 0xFF000000
+#define PVDEC_CORE_PVDEC_CORE_REV_PVDEC_DESIGNER_SHIFT 24
+
+#define PVDEC_CORE_PVDEC_CORE_REV_PVDEC_MAJOR_REV_MASK 0x00FF0000
+#define PVDEC_CORE_PVDEC_CORE_REV_PVDEC_MAJOR_REV_SHIFT 16
+
+#define PVDEC_CORE_PVDEC_CORE_REV_PVDEC_MINOR_REV_MASK 0x0000FF00
+#define PVDEC_CORE_PVDEC_CORE_REV_PVDEC_MINOR_REV_SHIFT 8
+
+#define PVDEC_CORE_PVDEC_CORE_REV_PVDEC_MAINT_REV_MASK 0x000000FF
+#define PVDEC_CORE_PVDEC_CORE_REV_PVDEC_MAINT_REV_SHIFT 0
+
+/* *********************** MTX_CORE registers group ************************* */
+
+/* register MTX_ENABLE */
+#define MTX_CORE_MTX_ENABLE_OFFSET 0x0000
+
+/* register MTX_SYSC_TXTIMER. Note: it's not defined in PVDEC TRM. */
+#define MTX_CORE_MTX_SYSC_TXTIMER_OFFSET 0x0010
+
+/* register MTX_KICKI */
+#define MTX_CORE_MTX_KICKI_OFFSET 0x0088
+
+#define MTX_CORE_MTX_KICKI_MTX_KICKI_MASK 0x0000FFFF
+#define MTX_CORE_MTX_KICKI_MTX_KICKI_SHIFT 0
+
+/* register MTX_FAULT0 */
+#define MTX_CORE_MTX_FAULT0_OFFSET 0x0090
+
+/* register MTX_REGISTER_READ_WRITE_DATA */
+#define MTX_CORE_MTX_REG_READ_WRITE_DATA_OFFSET 0x00F8
+
+/* register MTX_REGISTER_READ_WRITE_REQUEST */
+#define MTX_CORE_MTX_REG_READ_WRITE_REQUEST_OFFSET 0x00FC
+
+#define MTX_CORE_MTX_REG_READ_WRITE_REQUEST_MTX_DREADY_MASK 0x80000000
+#define MTX_CORE_MTX_REG_READ_WRITE_REQUEST_MTX_DREADY_SHIFT 31
+
+#define MTX_CORE_MTX_REG_READ_WRITE_REQUEST_MTX_RNW_MASK 0x00010000
+#define MTX_CORE_MTX_REG_READ_WRITE_REQUEST_MTX_RNW_SHIFT 16
+
+#define MTX_CORE_MTX_REG_READ_WRITE_REQUEST_MTX_RSPECIFIER_MASK 0x00000070
+#define MTX_CORE_MTX_REG_READ_WRITE_REQUEST_MTX_RSPECIFIER_SHIFT 4
+
+#define MTX_CORE_MTX_REG_READ_WRITE_REQUEST_MTX_USPECIFIER_MASK 0x0000000F
+#define MTX_CORE_MTX_REG_READ_WRITE_REQUEST_MTX_USPECIFIER_SHIFT 0
+
+/* register MTX_RAM_ACCESS_DATA_EXCHANGE */
+#define MTX_CORE_MTX_RAM_ACCESS_DATA_EXCHANGE_OFFSET 0x0100
+
+/* register MTX_RAM_ACCESS_DATA_TRANSFER */
+#define MTX_CORE_MTX_RAM_ACCESS_DATA_TRANSFER_OFFSET 0x0104
+
+/* register MTX_RAM_ACCESS_CONTROL */
+#define MTX_CORE_MTX_RAM_ACCESS_CONTROL_OFFSET 0x0108
+
+#define MTX_CORE_MTX_RAM_ACCESS_CONTROL_MTX_MCMID_MASK 0x0FF00000
+#define MTX_CORE_MTX_RAM_ACCESS_CONTROL_MTX_MCMID_SHIFT 20
+
+#define MTX_CORE_MTX_RAM_ACCESS_CONTROL_MTX_MCM_ADDR_MASK 0x000FFFFC
+#define MTX_CORE_MTX_RAM_ACCESS_CONTROL_MTX_MCM_ADDR_SHIFT 2
+
+#define MTX_CORE_MTX_RAM_ACCESS_CONTROL_MTX_MCMAI_MASK 0x00000002
+#define MTX_CORE_MTX_RAM_ACCESS_CONTROL_MTX_MCMAI_SHIFT 1
+
+#define MTX_CORE_MTX_RAM_ACCESS_CONTROL_MTX_MCMR_MASK 0x00000001
+#define MTX_CORE_MTX_RAM_ACCESS_CONTROL_MTX_MCMR_SHIFT 0
+
+/* register MTX_RAM_ACCESS_STATUS */
+#define MTX_CORE_MTX_RAM_ACCESS_STATUS_OFFSET 0x010C
+
+#define MTX_CORE_MTX_RAM_ACCESS_STATUS_MTX_MTX_MCM_STAT_MASK 0x00000001
+#define MTX_CORE_MTX_RAM_ACCESS_STATUS_MTX_MTX_MCM_STAT_SHIFT 0
+
+/* register MTX_SOFT_RESET */
+#define MTX_CORE_MTX_SOFT_RESET_OFFSET 0x0200
+
+#define MTX_CORE_MTX_SOFT_RESET_MTX_RESET_MASK 0x00000001
+#define MTX_CORE_MTX_SOFT_RESET_MTX_RESET_SHIFT 0
+
+/* register MTX_SYSC_TIMERDIV */
+#define MTX_CORE_MTX_SYSC_TIMERDIV_OFFSET 0x0208
+
+#define MTX_CORE_MTX_SYSC_TIMERDIV_TIMER_EN_MASK 0x00010000
+#define MTX_CORE_MTX_SYSC_TIMERDIV_TIMER_EN_SHIFT 16
+
+#define MTX_CORE_MTX_SYSC_TIMERDIV_TIMER_DIV_MASK 0x000000FF
+#define MTX_CORE_MTX_SYSC_TIMERDIV_TIMER_DIV_SHIFT 0
+
+/* register MTX_SYSC_CDMAA */
+#define MTX_CORE_MTX_SYSC_CDMAA_OFFSET 0x0344
+
+#define MTX_CORE_MTX_SYSC_CDMAA_CDMAA_ADDRESS_MASK 0x03FFFFFC
+#define MTX_CORE_MTX_SYSC_CDMAA_CDMAA_ADDRESS_SHIFT 2
+
+/* register MTX_SYSC_CDMAC */
+#define MTX_CORE_MTX_SYSC_CDMAC_OFFSET 0x0340
+
+#define MTX_CORE_MTX_SYSC_CDMAC_BURSTSIZE_MASK 0x07000000
+#define MTX_CORE_MTX_SYSC_CDMAC_BURSTSIZE_SHIFT 24
+
+#define MTX_CORE_MTX_SYSC_CDMAC_RNW_MASK 0x00020000
+#define MTX_CORE_MTX_SYSC_CDMAC_RNW_SHIFT 17
+
+#define MTX_CORE_MTX_SYSC_CDMAC_ENABLE_MASK 0x00010000
+#define MTX_CORE_MTX_SYSC_CDMAC_ENABLE_SHIFT 16
+
+#define MTX_CORE_MTX_SYSC_CDMAC_LENGTH_MASK 0x0000FFFF
+#define MTX_CORE_MTX_SYSC_CDMAC_LENGTH_SHIFT 0
+
+/* register MTX_SYSC_CDMAT */
+#define MTX_CORE_MTX_SYSC_CDMAT_OFFSET 0x0350
+
+/* ****************** IMG_VIDEO_BUS4_MMU registers group ******************** */
+
+/* register MMU_CONTROL0_ */
+#define IMG_VIDEO_BUS4_MMU_MMU_CONTROL0_USE_TILE_STRIDE_PER_CTX_MASK 0x00010000
+#define IMG_VIDEO_BUS4_MMU_MMU_CONTROL0_USE_TILE_STRIDE_PER_CTX_SHIFT 16
+
+#define IMG_VIDEO_BUS4_MMU_MMU_ADDRESS_CONTROL_MMU_ENA_EXT_ADDR_MASK 0x00000010
+#define IMG_VIDEO_BUS4_MMU_MMU_ADDRESS_CONTROL_MMU_ENA_EXT_ADDR_SHIFT 4
+
+#define IMG_VIDEO_BUS4_MMU_MMU_ADDRESS_CONTROL_UPPER_ADDR_FIXED_MASK 0x00FF0000
+#define IMG_VIDEO_BUS4_MMU_MMU_ADDRESS_CONTROL_UPPER_ADDR_FIXED_SHIFT 16
+
+#define IMG_VIDEO_BUS4_MMU_MMU_MEM_EXT_OUTSTANDING_READ_WORDS_MASK 0x0000FFFF
+#define IMG_VIDEO_BUS4_MMU_MMU_MEM_EXT_OUTSTANDING_READ_WORDS_SHIFT 0
+
+/* *************************** MMU-related values ************************** */
+
+/* MMU page size */
+
+enum {
+ VXD_MMU_SOFT_PAGE_SIZE_PAGE_64K = 0x4,
+ VXD_MMU_SOFT_PAGE_SIZE_PAGE_16K = 0x2,
+ VXD_MMU_SOFT_PAGE_SIZE_PAGE_4K = 0x0,
+ VXD_MMU_SOFT_PAGE_SIZE_FORCE32BITS = 0x7FFFFFFFU
+};
+
+/* MMU PTD entry flags */
+enum {
+ VXD_MMU_PTD_FLAG_NONE = 0x0,
+ VXD_MMU_PTD_FLAG_VALID = 0x1,
+ VXD_MMU_PTD_FLAG_WRITE_ONLY = 0x2,
+ VXD_MMU_PTD_FLAG_READ_ONLY = 0x4,
+ VXD_MMU_PTD_FLAG_CACHE_COHERENCY = 0x8,
+ VXD_MMU_PTD_FLAG_FORCE32BITS = 0x7FFFFFFFU
+};
+
+/* ********************* PVDEC_PIXEL registers group *********************** */
+
+/* register PVDEC_PIXEL_PIXEL_CONTROL_0 */
+#define PVDEC_PIXEL_PIXEL_CONTROL_0_OFFSET 0x0004
+
+#define PVDEC_PIXEL_PIXEL_CONTROL_0_DMAC_CH_SEL_FOR_MTX_MASK 0x0000000E
+#define PVDEC_PIXEL_PIXEL_CONTROL_0_DMAC_CH_SEL_FOR_MTX_SHIFT 1
+
+#define PVDEC_PIXEL_PIXEL_CONTROL_0_PROC_DMAC_CH0_SEL_MASK 0x00000001
+#define PVDEC_PIXEL_PIXEL_CONTROL_0_PROC_DMAC_CH0_SEL_SHIFT 0
+
+/* register PVDEC_PIXEL_MAN_CLK_ENABLE */
+#define PVDEC_PIXEL_PIXEL_MAN_CLK_ENA_OFFSET 0x0020
+
+#define PVDEC_PIXEL_PIXEL_MAN_CLK_ENA_PIXEL_REG_MAN_CLK_ENA_MASK 0x00020000
+#define PVDEC_PIXEL_PIXEL_MAN_CLK_ENA_PIXEL_REG_MAN_CLK_ENA_SHIFT 17
+
+#define PVDEC_PIXEL_PIXEL_MAN_CLK_ENA_PIXEL_DMAC_MAN_CLK_ENA_MASK 0x00010000
+#define PVDEC_PIXEL_PIXEL_MAN_CLK_ENA_PIXEL_DMAC_MAN_CLK_ENA_SHIFT 16
+
+/* register PIXEL_PIPE_CONFIG */
+#define PVDEC_PIXEL_PIXEL_PIPE_CONFIG_OFFSET 0x00C0
+
+/* register PIXEL_MISC_CONFIG */
+#define PVDEC_PIXEL_PIXEL_MISC_CONFIG_OFFSET 0x00C4
+
+/* register MAX_FRAME_CONFIG */
+#define PVDEC_PIXEL_MAX_FRAME_CONFIG_OFFSET 0x00C8
+
+/* ********************* PVDEC_ENTROPY registers group ********************* */
+
+/* Register PVDEC_ENTROPY_MAN_CLK_ENABLE */
+#define PVDEC_ENTROPY_ENTROPY_MAN_CLK_ENA_OFFSET 0x0020
+
+/* Register PVDEC_ENTROPY_LAST_LAST_MB */
+#define PVDEC_ENTROPY_ENTROPY_LAST_MB_OFFSET 0x00BC
+
+/* ********************* PVDEC_VEC_BE registers group ********************** */
+
+/* Register PVDEC_VEC_BE_VEC_BE_STATUS */
+#define PVDEC_VEC_BE_VEC_BE_STATUS_OFFSET 0x0018
+
+/* ********************* MSVDX_VEC registers group ************************* */
+
+/* Register MSVDX_VEC_VEC_ENTDEC_INFORMATION */
+#define MSVDX_VEC_VEC_ENTDEC_INFORMATION_OFFSET 0x00AC
+
+/* ********************* MSVDX_VDMC registers group ************************ */
+
+/* Register MSVDX_VDMC_VDMC_MACROBLOCK_NUMBER */
+#define MSVDX_VDMC_VDMC_MACROBLOCK_NUMBER_OFFSET 0x0048
+
+/* ************************** DMAC registers group ************************* */
+
+/* register DMAC_SETUP */
+#define DMAC_DMAC_SETUP_OFFSET 0x0000
+#define DMAC_DMAC_SETUP_STRIDE 32
+#define DMAC_DMAC_SETUP_NO_ENTRIES 6
+
+/* register DMAC_COUNT */
+#define DMAC_DMAC_COUNT_OFFSET 0x0004
+#define DMAC_DMAC_COUNT_STRIDE 32
+#define DMAC_DMAC_COUNT_NO_ENTRIES 6
+
+#define DMAC_DMAC_COUNT_LIST_IEN_MASK 0x80000000
+#define DMAC_DMAC_COUNT_LIST_IEN_SHIFT 31
+
+#define DMAC_DMAC_COUNT_BSWAP_MASK 0x40000000
+#define DMAC_DMAC_COUNT_BSWAP_SHIFT 30
+
+#define DMAC_DMAC_COUNT_TRANSFER_IEN_MASK 0x20000000
+#define DMAC_DMAC_COUNT_TRANSFER_IEN_SHIFT 29
+
+#define DMAC_DMAC_COUNT_PW_MASK 0x18000000
+#define DMAC_DMAC_COUNT_PW_SHIFT 27
+
+#define DMAC_DMAC_COUNT_DIR_MASK 0x04000000
+#define DMAC_DMAC_COUNT_DIR_SHIFT 26
+
+#define DMAC_DMAC_COUNT_PI_MASK 0x03000000
+#define DMAC_DMAC_COUNT_PI_SHIFT 24
+
+#define DMAC_DMAC_COUNT_LIST_FIN_CTL_MASK 0x00400000
+#define DMAC_DMAC_COUNT_LIST_FIN_CTL_SHIFT 22
+
+#define DMAC_DMAC_COUNT_DREQ_MASK 0x00100000
+#define DMAC_DMAC_COUNT_DREQ_SHIFT 20
+
+#define DMAC_DMAC_COUNT_SRST_MASK 0x00080000
+#define DMAC_DMAC_COUNT_SRST_SHIFT 19
+
+#define DMAC_DMAC_COUNT_LIST_EN_MASK 0x00040000
+#define DMAC_DMAC_COUNT_LIST_EN_SHIFT 18
+
+#define DMAC_DMAC_COUNT_ENABLE_2D_MODE_MASK 0x00020000
+#define DMAC_DMAC_COUNT_ENABLE_2D_MODE_SHIFT 17
+
+#define DMAC_DMAC_COUNT_EN_MASK 0x00010000
+#define DMAC_DMAC_COUNT_EN_SHIFT 16
+
+#define DMAC_DMAC_COUNT_CNT_MASK 0x0000FFFF
+#define DMAC_DMAC_COUNT_CNT_SHIFT 0
+
+/* register DMAC_PERIPH */
+#define DMAC_DMAC_PERIPH_OFFSET 0x0008
+#define DMAC_DMAC_PERIPH_STRIDE 32
+#define DMAC_DMAC_PERIPH_NO_ENTRIES 6
+
+#define DMAC_DMAC_PERIPH_ACC_DEL_MASK 0xE0000000
+#define DMAC_DMAC_PERIPH_ACC_DEL_SHIFT 29
+
+#define DMAC_DMAC_PERIPH_INCR_MASK 0x08000000
+#define DMAC_DMAC_PERIPH_INCR_SHIFT 27
+
+#define DMAC_DMAC_PERIPH_BURST_MASK 0x07000000
+#define DMAC_DMAC_PERIPH_BURST_SHIFT 24
+
+#define DMAC_DMAC_PERIPH_EXT_BURST_MASK 0x000F0000
+#define DMAC_DMAC_PERIPH_EXT_BURST_SHIFT 16
+
+#define DMAC_DMAC_PERIPH_EXT_SA_MASK 0x0000000F
+#define DMAC_DMAC_PERIPH_EXT_SA_SHIFT 0
+
+/* register DMAC_IRQ_STAT */
+#define DMAC_DMAC_IRQ_STAT_OFFSET 0x000C
+#define DMAC_DMAC_IRQ_STAT_STRIDE 32
+#define DMAC_DMAC_IRQ_STAT_NO_ENTRIES 6
+
+/* register DMAC_PERIPHERAL_ADDR */
+#define DMAC_DMAC_PERIPH_ADDR_OFFSET 0x0014
+#define DMAC_DMAC_PERIPH_ADDR_STRIDE 32
+#define DMAC_DMAC_PERIPH_ADDR_NO_ENTRIES 6
+
+#define DMAC_DMAC_PERIPH_ADDR_ADDR_MASK 0x007FFFFF
+#define DMAC_DMAC_PERIPH_ADDR_ADDR_SHIFT 0
+
+/* register DMAC_PER_HOLD */
+#define DMAC_DMAC_PER_HOLD_OFFSET 0x0018
+#define DMAC_DMAC_PER_HOLD_STRIDE 32
+#define DMAC_DMAC_PER_HOLD_NO_ENTRIES 6
+
+#define DMAC_DMAC_PER_HOLD_PER_HOLD_MASK 0x0000001F
+#define DMAC_DMAC_PER_HOLD_PER_HOLD_SHIFT 0
+
+#define DMAC_DMAC_SOFT_RESET_OFFSET 0x00C0
+
+/* ************************** DMAC-related values *************************** */
+
+/*
+ * This type defines whether the peripheral address is static or
+ * auto-incremented. (see the TRM "Transfer Sequence Linked-list - INCR")
+ */
+enum {
+ DMAC_INCR_OFF = 0, /* No action, no increment. */
+ DMAC_INCR_ON = 1, /* Generate address increment. */
+ DMAC_INCR_FORCE32BITS = 0x7FFFFFFFU
+};
+
+/* Burst size settings (see the TRM "Transfer Sequence Linked-list - BURST"). */
+enum {
+ DMAC_BURST_0 = 0x0, /* burst size of 0 */
+ DMAC_BURST_1 = 0x1, /* burst size of 1 */
+ DMAC_BURST_2 = 0x2, /* burst size of 2 */
+ DMAC_BURST_3 = 0x3, /* burst size of 3 */
+ DMAC_BURST_4 = 0x4, /* burst size of 4 */
+ DMAC_BURST_5 = 0x5, /* burst size of 5 */
+ DMAC_BURST_6 = 0x6, /* burst size of 6 */
+ DMAC_BURST_7 = 0x7, /* burst size of 7 */
+ DMAC_BURST_8 = 0x8, /* burst size of 8 */
+ DMAC_BURST_FORCE32BITS = 0x7FFFFFFFU
+};
+
+/*
+ * Extended burst size settings (see TRM "Transfer Sequence Linked-list -
+ * EXT_BURST").
+ */
+enum {
+ DMAC_EXT_BURST_0 = 0x0, /* no extension */
+ DMAC_EXT_BURST_1 = 0x1, /* extension of 8 */
+ DMAC_EXT_BURST_2 = 0x2, /* extension of 16 */
+ DMAC_EXT_BURST_3 = 0x3, /* extension of 24 */
+ DMAC_EXT_BURST_4 = 0x4, /* extension of 32 */
+ DMAC_EXT_BURST_5 = 0x5, /* extension of 40 */
+ DMAC_EXT_BURST_6 = 0x6, /* extension of 48 */
+ DMAC_EXT_BURST_7 = 0x7, /* extension of 56 */
+ DMAC_EXT_BURST_8 = 0x8, /* extension of 64 */
+ DMAC_EXT_BURST_9 = 0x9, /* extension of 72 */
+ DMAC_EXT_BURST_10 = 0xa, /* extension of 80 */
+ DMAC_EXT_BURST_11 = 0xb, /* extension of 88 */
+ DMAC_EXT_BURST_12 = 0xc, /* extension of 96 */
+ DMAC_EXT_BURST_13 = 0xd, /* extension of 104 */
+ DMAC_EXT_BURST_14 = 0xe, /* extension of 112 */
+ DMAC_EXT_BURST_15 = 0xf, /* extension of 120 */
+ DMAC_EXT_BURST_FORCE32BITS = 0x7FFFFFFFU
+};
+
+/* Transfer direction. */
+enum {
+ DMAC_MEM_TO_VXD = 0x0,
+ DMAC_VXD_TO_MEM = 0x1,
+ DMAC_VXD_TO_FORCE32BITS = 0x7FFFFFFFU
+};
+
+/* How much to increment the peripheral address. */
+enum {
+ DMAC_PI_1 = 0x2, /* increment by 1 */
+ DMAC_PI_2 = 0x1, /* increment by 2 */
+ DMAC_PI_4 = 0x0, /* increment by 4 */
+ DMAC_PI_FORCE32BITS = 0x7FFFFFFFU
+};
+
+/* Peripheral width settings (see TRM "Transfer Sequence Linked-list - PW"). */
+enum {
+ DMAC_PWIDTH_32_BIT = 0x0, /* Peripheral width 32-bit. */
+ DMAC_PWIDTH_16_BIT = 0x1, /* Peripheral width 16-bit. */
+ DMAC_PWIDTH_8_BIT = 0x2, /* Peripheral width 8-bit. */
+ DMAC_PWIDTH_FORCE32BITS = 0x7FFFFFFFU
+};
+
+/* ******************************* macros ********************************** */
+
+#ifdef PVDEC_SINGLETHREADED_IO
+/* Write to the register */
+#define VXD_WR_REG_ABS(base, addr, val) \
+ ({ spin_lock_irqsave(&pvdec_irq_lock, pvdec_irq_flags); \
+ iowrite32((val), (addr) + (base)); \
+ spin_unlock_irqrestore(&pvdec_irq_lock, (unsigned long)pvdec_irq_flags); })
+
+/* Read the register */
+#define VXD_RD_REG_ABS(base, addr) \
+ ({ unsigned int reg; \
+ spin_lock_irqsave(&pvdec_irq_lock, pvdec_irq_flags); \
+ reg = ioread32((addr) + (base)); \
+ spin_unlock_irqrestore(&pvdec_irq_lock, (unsigned long)pvdec_irq_flags); \
+ reg; })
+#else /* ndef PVDEC_SINGLETHREADED_IO */
+
+/* Write to the register */
+#define VXD_WR_REG_ABS(base, addr, val) \
+ (iowrite32((val), (addr) + (base)))
+
+/* Read the register */
+#define VXD_RD_REG_ABS(base, addr) \
+ (ioread32((addr) + (base)))
+
+#endif
+
+/* Get offset of a register */
+#define VXD_GET_REG_OFF(group, reg) \
+ (group ## _OFFSET + group ## _ ## reg ## _OFFSET)
+
+/* Get offset of a repeated register */
+#define VXD_GET_RPT_REG_OFF(group, reg, index) \
+ (VXD_GET_REG_OFF(group, reg) + ((index) * group ## _ ## reg ## _STRIDE))
+
+/* Extract field from a register */
+#define VXD_RD_REG_FIELD(val, group, reg, field) \
+ (((val) & group ## _ ## reg ## _ ## field ## _MASK) >> \
+ group ## _ ## reg ## _ ## field ## _SHIFT)
+
+/* Shift provided value by number of bits relevant to register specification */
+#define VXD_ENC_REG_FIELD(group, reg, field, val) \
+ ((unsigned int)(val) << (group ## _ ## reg ## _ ## field ## _SHIFT))
+
+/* Update the field in a register */
+#define VXD_WR_REG_FIELD(reg_val, group, reg, field, val) \
+ (((reg_val) & ~(group ## _ ## reg ## _ ## field ## _MASK)) | \
+ (VXD_ENC_REG_FIELD(group, reg, field, val) & \
+ (group ## _ ## reg ## _ ## field ## _MASK)))
+
+/* Write to a register */
+#define VXD_WR_REG(base, group, reg, val) \
+ VXD_WR_REG_ABS(base, VXD_GET_REG_OFF(group, reg), val)
+
+/* Write to a repeated register */
+#define VXD_WR_RPT_REG(base, group, reg, val, index) \
+ VXD_WR_REG_ABS(base, VXD_GET_RPT_REG_OFF(group, reg, index), val)
+
+/* Read a register */
+#define VXD_RD_REG(base, group, reg) \
+ VXD_RD_REG_ABS(base, VXD_GET_REG_OFF(group, reg))
+
+/* Read a repeated register */
+#define VXD_RD_RPT_REG(base, group, reg, index) \
+ VXD_RD_REG_ABS(base, VXD_GET_RPT_REG_OFF(group, reg, index))
+
+/* Insert word into the message buffer */
+#define VXD_WR_MSG_WRD(buf, msg_type, wrd, val) \
+ (((unsigned int *)buf)[(msg_type ## _ ## wrd ## _OFFSET) / sizeof(unsigned int)] = \
+ val)
+
+/* Get a word from the message buffer */
+#define VXD_RD_MSG_WRD(buf, msg_type, wrd) \
+ (((unsigned int *)buf)[(msg_type ## _ ## wrd ## _OFFSET) / sizeof(unsigned int)])
+
+/* Get offset for pipe register */
+#define VXD_GET_PIPE_OFF(num_pipes, pipe) \
+ ((num_pipes) > 1 ? ((pipe) << 16) : 0)
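+/*
+ * Illustrative usage of the accessors above. "reg_base" stands for the
+ * ioremapped register base and is an assumption of this sketch, not
+ * something defined in this header:
+ *
+ *	unsigned int core_id, pix_pipes, cnt = 0;
+ *
+ *	core_id = VXD_RD_REG(reg_base, PVDEC_CORE, PVDEC_CORE_ID);
+ *	pix_pipes = VXD_RD_REG_FIELD(core_id, PVDEC_CORE, PVDEC_CORE_ID,
+ *				     PIX_PIPES);
+ *
+ *	cnt = VXD_WR_REG_FIELD(cnt, DMAC, DMAC_COUNT, EN, 1);
+ *	cnt = VXD_WR_REG_FIELD(cnt, DMAC, DMAC_COUNT, PW, DMAC_PWIDTH_32_BIT);
+ *	VXD_WR_RPT_REG(reg_base, DMAC, DMAC_COUNT, cnt, 0);
+ */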
+
+#endif /* VXD_PVDEC_REGS_H */
--
2.17.1
From: Sidraya <[email protected]>
Contains utility modules for doubly linked queues, singly linked
lists and work queues.
Signed-off-by: Lakshmi Sankar <[email protected]>
Signed-off-by: Sidraya <[email protected]>
---
MAINTAINERS | 6 +
drivers/staging/media/vxd/common/dq.c | 248 ++++++++++++++++++
drivers/staging/media/vxd/common/dq.h | 36 +++
drivers/staging/media/vxd/common/lst.c | 119 +++++++++
drivers/staging/media/vxd/common/lst.h | 37 +++
drivers/staging/media/vxd/common/work_queue.c | 188 +++++++++++++
drivers/staging/media/vxd/common/work_queue.h | 66 +++++
7 files changed, 700 insertions(+)
create mode 100644 drivers/staging/media/vxd/common/dq.c
create mode 100644 drivers/staging/media/vxd/common/dq.h
create mode 100644 drivers/staging/media/vxd/common/lst.c
create mode 100644 drivers/staging/media/vxd/common/lst.h
create mode 100644 drivers/staging/media/vxd/common/work_queue.c
create mode 100644 drivers/staging/media/vxd/common/work_queue.h
diff --git a/MAINTAINERS b/MAINTAINERS
index 0468aaac3b7d..2668eeb89a34 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -19537,6 +19537,8 @@ M: Sidraya Jayagond <[email protected]>
L: [email protected]
S: Maintained
F: Documentation/devicetree/bindings/media/img,d5520-vxd.yaml
+F: drivers/staging/media/vxd/common/dq.c
+F: drivers/staging/media/vxd/common/dq.h
F: drivers/staging/media/vxd/common/idgen_api.c
F: drivers/staging/media/vxd/common/idgen_api.h
F: drivers/staging/media/vxd/common/img_mem_man.c
@@ -19544,6 +19546,10 @@ F: drivers/staging/media/vxd/common/img_mem_man.h
F: drivers/staging/media/vxd/common/img_mem_unified.c
F: drivers/staging/media/vxd/common/imgmmu.c
F: drivers/staging/media/vxd/common/imgmmu.h
+F: drivers/staging/media/vxd/common/lst.c
+F: drivers/staging/media/vxd/common/lst.h
+F: drivers/staging/media/vxd/common/work_queue.c
+F: drivers/staging/media/vxd/common/work_queue.h
F: drivers/staging/media/vxd/decoder/hw_control.c
F: drivers/staging/media/vxd/decoder/hw_control.h
F: drivers/staging/media/vxd/decoder/img_dec_common.h
diff --git a/drivers/staging/media/vxd/common/dq.c b/drivers/staging/media/vxd/common/dq.c
new file mode 100644
index 000000000000..890be5ed00e7
--- /dev/null
+++ b/drivers/staging/media/vxd/common/dq.c
@@ -0,0 +1,248 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Utility module for doubly linked queues.
+ *
+ * Copyright (c) Imagination Technologies Ltd.
+ * Copyright (c) 2021 Texas Instruments Incorporated - http://www.ti.com/
+ *
+ * Authors:
+ * Lakshmi Sankar <[email protected]>
+ *
+ * Re-written for upstreaming
+ * Sidraya Jayagond <[email protected]>
+ */
+
+#include <linux/types.h>
+#include <linux/dma-mapping.h>
+#include <media/v4l2-ctrls.h>
+#include <media/v4l2-device.h>
+#include <media/v4l2-mem2mem.h>
+
+#include "dq.h"
+#include "img_errors.h"
+
+void dq_init(struct dq_linkage_t *queue)
+{
+ queue->fwd = (struct dq_linkage_t *)queue;
+ queue->back = (struct dq_linkage_t *)queue;
+}
+
+void dq_addhead(struct dq_linkage_t *queue, void *item)
+{
+ IMG_DBG_ASSERT(((struct dq_linkage_t *)queue)->back);
+ IMG_DBG_ASSERT(((struct dq_linkage_t *)queue)->fwd);
+
+ if (!((struct dq_linkage_t *)queue)->back ||
+ !((struct dq_linkage_t *)queue)->fwd)
+ return;
+
+ ((struct dq_linkage_t *)item)->back = (struct dq_linkage_t *)queue;
+ ((struct dq_linkage_t *)item)->fwd =
+ ((struct dq_linkage_t *)queue)->fwd;
+ ((struct dq_linkage_t *)queue)->fwd->back = (struct dq_linkage_t *)item;
+ ((struct dq_linkage_t *)queue)->fwd = (struct dq_linkage_t *)item;
+}
+
+void dq_addtail(struct dq_linkage_t *queue, void *item)
+{
+ IMG_DBG_ASSERT(((struct dq_linkage_t *)queue)->back);
+ IMG_DBG_ASSERT(((struct dq_linkage_t *)queue)->fwd);
+
+ if (!((struct dq_linkage_t *)queue)->back ||
+ !((struct dq_linkage_t *)queue)->fwd)
+ return;
+
+ ((struct dq_linkage_t *)item)->fwd = (struct dq_linkage_t *)queue;
+ ((struct dq_linkage_t *)item)->back =
+ ((struct dq_linkage_t *)queue)->back;
+ ((struct dq_linkage_t *)queue)->back->fwd = (struct dq_linkage_t *)item;
+ ((struct dq_linkage_t *)queue)->back = (struct dq_linkage_t *)item;
+}
+
+int dq_empty(struct dq_linkage_t *queue)
+{
+ IMG_DBG_ASSERT(((struct dq_linkage_t *)queue)->back);
+ IMG_DBG_ASSERT(((struct dq_linkage_t *)queue)->fwd);
+
+ if (!((struct dq_linkage_t *)queue)->back ||
+ !((struct dq_linkage_t *)queue)->fwd)
+ return 1;
+
+ return ((queue)->fwd == (struct dq_linkage_t *)(queue));
+}
+
+void *dq_first(struct dq_linkage_t *queue)
+{
+ struct dq_linkage_t *temp = queue->fwd;
+
+ IMG_DBG_ASSERT(((struct dq_linkage_t *)queue)->back);
+ IMG_DBG_ASSERT(((struct dq_linkage_t *)queue)->fwd);
+
+ if (!((struct dq_linkage_t *)queue)->back ||
+ !((struct dq_linkage_t *)queue)->fwd)
+ return NULL;
+
+ return temp == (struct dq_linkage_t *)queue ? NULL : temp;
+}
+
+void *dq_last(struct dq_linkage_t *queue)
+{
+ struct dq_linkage_t *temp = queue->back;
+
+ IMG_DBG_ASSERT(((struct dq_linkage_t *)queue)->back);
+ IMG_DBG_ASSERT(((struct dq_linkage_t *)queue)->fwd);
+
+ if (!((struct dq_linkage_t *)queue)->back ||
+ !((struct dq_linkage_t *)queue)->fwd)
+ return NULL;
+
+ return temp == (struct dq_linkage_t *)queue ? NULL : temp;
+}
+
+void *dq_next(void *item)
+{
+ IMG_DBG_ASSERT(((struct dq_linkage_t *)item)->back);
+ IMG_DBG_ASSERT(((struct dq_linkage_t *)item)->fwd);
+
+ if (!((struct dq_linkage_t *)item)->back ||
+ !((struct dq_linkage_t *)item)->fwd)
+ return NULL;
+
+ return ((struct dq_linkage_t *)item)->fwd;
+}
+
+void *dq_previous(void *item)
+{
+ IMG_DBG_ASSERT(((struct dq_linkage_t *)item)->back);
+ IMG_DBG_ASSERT(((struct dq_linkage_t *)item)->fwd);
+
+ if (!((struct dq_linkage_t *)item)->back ||
+ !((struct dq_linkage_t *)item)->fwd)
+ return NULL;
+
+ return ((struct dq_linkage_t *)item)->back;
+}
+
+void dq_remove(void *item)
+{
+ IMG_DBG_ASSERT(((struct dq_linkage_t *)item)->back);
+ IMG_DBG_ASSERT(((struct dq_linkage_t *)item)->fwd);
+
+ if (!((struct dq_linkage_t *)item)->back ||
+ !((struct dq_linkage_t *)item)->fwd)
+ return;
+
+ ((struct dq_linkage_t *)item)->fwd->back =
+ ((struct dq_linkage_t *)item)->back;
+ ((struct dq_linkage_t *)item)->back->fwd =
+ ((struct dq_linkage_t *)item)->fwd;
+
+ /* make item linkages safe for "orphan" removes */
+ ((struct dq_linkage_t *)item)->fwd = item;
+ ((struct dq_linkage_t *)item)->back = item;
+}
+
+void *dq_removehead(struct dq_linkage_t *queue)
+{
+ struct dq_linkage_t *temp;
+
+ IMG_DBG_ASSERT(((struct dq_linkage_t *)queue)->back);
+ IMG_DBG_ASSERT(((struct dq_linkage_t *)queue)->fwd);
+
+ if (!((struct dq_linkage_t *)queue)->back ||
+ !((struct dq_linkage_t *)queue)->fwd)
+ return NULL;
+
+ if ((queue)->fwd == (struct dq_linkage_t *)(queue))
+ return NULL;
+
+ temp = ((struct dq_linkage_t *)queue)->fwd;
+ temp->fwd->back = temp->back;
+ temp->back->fwd = temp->fwd;
+
+ /* make item linkages safe for "orphan" removes */
+ temp->fwd = temp;
+ temp->back = temp;
+ return temp;
+}
+
+void *dq_removetail(struct dq_linkage_t *queue)
+{
+ struct dq_linkage_t *temp;
+
+ IMG_DBG_ASSERT(((struct dq_linkage_t *)queue)->back);
+ IMG_DBG_ASSERT(((struct dq_linkage_t *)queue)->fwd);
+
+ if (!((struct dq_linkage_t *)queue)->back ||
+ !((struct dq_linkage_t *)queue)->fwd)
+ return NULL;
+
+ if ((queue)->fwd == (struct dq_linkage_t *)(queue))
+ return NULL;
+
+ temp = ((struct dq_linkage_t *)queue)->back;
+ temp->fwd->back = temp->back;
+ temp->back->fwd = temp->fwd;
+
+ /* make item linkages safe for "orphan" removes */
+ temp->fwd = temp;
+ temp->back = temp;
+
+ return temp;
+}
+
+void dq_addbefore(void *successor, void *item)
+{
+ IMG_DBG_ASSERT(((struct dq_linkage_t *)successor)->back);
+ IMG_DBG_ASSERT(((struct dq_linkage_t *)successor)->fwd);
+
+ if (!((struct dq_linkage_t *)successor)->back ||
+ !((struct dq_linkage_t *)successor)->fwd)
+ return;
+
+ ((struct dq_linkage_t *)item)->fwd = (struct dq_linkage_t *)successor;
+ ((struct dq_linkage_t *)item)->back =
+ ((struct dq_linkage_t *)successor)->back;
+ ((struct dq_linkage_t *)item)->back->fwd = (struct dq_linkage_t *)item;
+ ((struct dq_linkage_t *)successor)->back = (struct dq_linkage_t *)item;
+}
+
+void dq_addafter(void *predecessor, void *item)
+{
+ IMG_DBG_ASSERT(((struct dq_linkage_t *)predecessor)->back);
+ IMG_DBG_ASSERT(((struct dq_linkage_t *)predecessor)->fwd);
+
+ if (!((struct dq_linkage_t *)predecessor)->back ||
+ !((struct dq_linkage_t *)predecessor)->fwd)
+ return;
+
+ ((struct dq_linkage_t *)item)->fwd =
+ ((struct dq_linkage_t *)predecessor)->fwd;
+ ((struct dq_linkage_t *)item)->back =
+ (struct dq_linkage_t *)predecessor;
+ ((struct dq_linkage_t *)item)->fwd->back = (struct dq_linkage_t *)item;
+ ((struct dq_linkage_t *)predecessor)->fwd = (struct dq_linkage_t *)item;
+}
+
+void dq_move(struct dq_linkage_t *from, struct dq_linkage_t *to)
+{
+ IMG_DBG_ASSERT(((struct dq_linkage_t *)from)->back);
+ IMG_DBG_ASSERT(((struct dq_linkage_t *)from)->fwd);
+ IMG_DBG_ASSERT(((struct dq_linkage_t *)to)->back);
+ IMG_DBG_ASSERT(((struct dq_linkage_t *)to)->fwd);
+
+ if (!((struct dq_linkage_t *)from)->back ||
+ !((struct dq_linkage_t *)from)->fwd ||
+ !((struct dq_linkage_t *)to)->back ||
+ !((struct dq_linkage_t *)to)->fwd)
+ return;
+
+ if ((from)->fwd == (struct dq_linkage_t *)(from)) {
+ dq_init(to);
+ } else {
+ *to = *from;
+ to->fwd->back = (struct dq_linkage_t *)to;
+ to->back->fwd = (struct dq_linkage_t *)to;
+ dq_init(from);
+ }
+}
diff --git a/drivers/staging/media/vxd/common/dq.h b/drivers/staging/media/vxd/common/dq.h
new file mode 100644
index 000000000000..4663a92aaf7a
--- /dev/null
+++ b/drivers/staging/media/vxd/common/dq.h
@@ -0,0 +1,36 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Utility module for doubly linked queues.
+ *
+ * Copyright (c) Imagination Technologies Ltd.
+ * Copyright (c) 2021 Texas Instruments Incorporated - http://www.ti.com/
+ *
+ * Authors:
+ * Lakshmi Sankar <[email protected]>
+ */
+#ifndef DQ_H
+#define DQ_H
+
+/* dq structure */
+struct dq_linkage_t {
+ struct dq_linkage_t *fwd;
+ struct dq_linkage_t *back;
+};
+
+/* Function Prototypes */
+void dq_addafter(void *predecessor, void *item);
+void dq_addbefore(void *successor, void *item);
+void dq_addhead(struct dq_linkage_t *queue, void *item);
+void dq_addtail(struct dq_linkage_t *queue, void *item);
+int dq_empty(struct dq_linkage_t *queue);
+void *dq_first(struct dq_linkage_t *queue);
+void *dq_last(struct dq_linkage_t *queue);
+void dq_init(struct dq_linkage_t *queue);
+void dq_move(struct dq_linkage_t *from, struct dq_linkage_t *to);
+void *dq_next(void *item);
+void *dq_previous(void *item);
+void dq_remove(void *item);
+void *dq_removehead(struct dq_linkage_t *queue);
+void *dq_removetail(struct dq_linkage_t *queue);
+
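+/*
+ * Illustrative usage (struct my_item is hypothetical). The queue handles
+ * plain void pointers, so each item must embed a struct dq_linkage_t as
+ * its very first member:
+ *
+ *	struct my_item {
+ *		struct dq_linkage_t link;
+ *		int payload;
+ *	};
+ *
+ *	struct dq_linkage_t queue;
+ *	struct my_item item = { .payload = 42 };
+ *
+ *	dq_init(&queue);
+ *	dq_addtail(&queue, &item);
+ *	struct my_item *first = dq_first(&queue);	// == &item
+ *	dq_remove(&item);
+ */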
+#endif /* #define DQ_H */
diff --git a/drivers/staging/media/vxd/common/lst.c b/drivers/staging/media/vxd/common/lst.c
new file mode 100644
index 000000000000..bb047ab6d598
--- /dev/null
+++ b/drivers/staging/media/vxd/common/lst.c
@@ -0,0 +1,119 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * List processing primitives.
+ *
+ * Copyright (c) Imagination Technologies Ltd.
+ * Copyright (c) 2021 Texas Instruments Incorporated - http://www.ti.com/
+ *
+ * Author:
+ * Lakshmi Sankar <[email protected]>
+ */
+
+#include "lst.h"
+
+#ifndef NULL
+#define NULL ((void *)0)
+#endif
+
+void lst_add(struct lst_t *list, void *item)
+{
+ if (!list->first) {
+ list->first = item;
+ list->last = item;
+ } else {
+ *list->last = item;
+ list->last = item;
+ }
+ *((void **)item) = NULL;
+}
+
+void lst_addhead(struct lst_t *list, void *item)
+{
+ if (!list->first) {
+ list->first = item;
+ list->last = item;
+ *((void **)item) = NULL;
+ } else {
+ *((void **)item) = list->first;
+ list->first = item;
+ }
+}
+
+int lst_empty(struct lst_t *list)
+{
+ if (!list->first)
+ return 1;
+ else
+ return 0;
+}
+
+void *lst_first(struct lst_t *list)
+{
+ return list->first;
+}
+
+void lst_init(struct lst_t *list)
+{
+ list->first = NULL;
+ list->last = NULL;
+}
+
+void *lst_last(struct lst_t *list)
+{
+ return list->last;
+}
+
+void *lst_next(void *item)
+{
+ return *((void **)item);
+}
+
+void *lst_removehead(struct lst_t *list)
+{
+ void **temp = list->first;
+
+ if (temp) {
+ list->first = *temp;
+ if (!list->first)
+ list->last = NULL;
+ }
+ return temp;
+}
+
+void *lst_remove(struct lst_t *list, void *item)
+{
+ void **p;
+ void **q;
+
+ p = (void **)list;
+ q = *p;
+ while (q) {
+ if (q == item) {
+ *p = *q;
+ if (list->last == q)
+ list->last = p;
+ return item;
+ }
+ p = q;
+ q = *p;
+ }
+
+ return NULL;
+}
+
+int lst_check(struct lst_t *list, void *item)
+{
+ void **p;
+ void **q;
+
+ p = (void **)list;
+ q = *p;
+ while (q) {
+ if (q == item)
+ return 1;
+ p = q;
+ q = *p;
+ }
+
+ return 0;
+}
diff --git a/drivers/staging/media/vxd/common/lst.h b/drivers/staging/media/vxd/common/lst.h
new file mode 100644
index 000000000000..ccf6eed19019
--- /dev/null
+++ b/drivers/staging/media/vxd/common/lst.h
@@ -0,0 +1,37 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * List processing primitives.
+ *
+ * Copyright (c) Imagination Technologies Ltd.
+ * Copyright (c) 2021 Texas Instruments Incorporated - http://www.ti.com/
+ *
+ * Author:
+ * Lakshmi Sankar <[email protected]>
+ */
+#ifndef __LIST_H__
+#define __LIST_H__
+
+#include <linux/types.h>
+
+struct lst_t {
+ void **first;
+ void **last;
+};
+
+void lst_add(struct lst_t *list, void *item);
+void lst_addhead(struct lst_t *list, void *item);
+
+/**
+ * lst_empty - check whether the list is empty
+ * @list: pointer to list
+ */
+int lst_empty(struct lst_t *list);
+void *lst_first(struct lst_t *list);
+void lst_init(struct lst_t *list);
+void *lst_last(struct lst_t *list);
+void *lst_next(void *item);
+void *lst_remove(struct lst_t *list, void *item);
+void *lst_removehead(struct lst_t *list);
+int lst_check(struct lst_t *list, void *item);
+
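+/*
+ * Illustrative usage (struct my_entry is hypothetical). The list stores
+ * plain void pointers and uses the first pointer-sized member of each
+ * item as the "next" link, so that member must come first:
+ *
+ *	struct my_entry {
+ *		void *next;	// used internally by lst_*()
+ *		int payload;
+ *	};
+ *
+ *	struct lst_t list;
+ *	struct my_entry entry = { .payload = 1 };
+ *
+ *	lst_init(&list);
+ *	lst_add(&list, &entry);			// append at the tail
+ *	struct my_entry *it = lst_first(&list);
+ *	while (it)
+ *		it = lst_next(it);
+ *	lst_remove(&list, &entry);
+ */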
+#endif /* __LIST_H__ */
diff --git a/drivers/staging/media/vxd/common/work_queue.c b/drivers/staging/media/vxd/common/work_queue.c
new file mode 100644
index 000000000000..6bd91a7fdbf4
--- /dev/null
+++ b/drivers/staging/media/vxd/common/work_queue.c
@@ -0,0 +1,188 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Work Queue Handling for Linux
+ *
+ * Copyright (c) Imagination Technologies Ltd.
+ * Copyright (c) 2021 Texas Instruments Incorporated - http://www.ti.com/
+ *
+ * Authors:
+ * Lakshmi Sankar <[email protected]>
+ *
+ * Re-written for upstream
+ * Prashanth Kumar Amai <[email protected]>
+ */
+
+#include <linux/slab.h>
+#include <linux/printk.h>
+#include <linux/mutex.h>
+
+#include "work_queue.h"
+
+/* Define and initialize the mutex protecting the work item lists. */
+DEFINE_MUTEX(mutex);
+
+#define false 0
+#define true 1
+
+struct node {
+ void **key;
+ struct node *next;
+};
+
+struct node *work_head;
+struct node *delayed_work_head;
+
+void init_work(void **work_args, void *work_fn, uint8_t hwa_id)
+{
+ struct work_struct **work = (struct work_struct **)work_args;
+	//create a link
+	struct node *link = kmalloc(sizeof(*link), GFP_KERNEL);
+
+	if (!link) {
+		pr_err("Memory allocation failed for work_queue node\n");
+		return;
+	}
+
+	/* allocate the work item itself, not just a pointer to it */
+	*work = kzalloc(sizeof(**work), GFP_KERNEL);
+	if (!(*work)) {
+		pr_err("Memory allocation failed for work_queue\n");
+		kfree(link);
+		return;
+	}
+ INIT_WORK(*work, work_fn);
+
+ link->key = (void **)work;
+ mutex_lock(&mutex);
+ //point it to old first node
+ link->next = work_head;
+
+ //point first to new first node
+ work_head = link;
+ mutex_unlock(&mutex);
+}
+
+void init_delayed_work(void **work_args, void *work_fn, uint8_t hwa_id)
+{
+ struct delayed_work **work = (struct delayed_work **)work_args;
+	//create a link
+	struct node *link = kmalloc(sizeof(*link), GFP_KERNEL);
+
+	if (!link) {
+		pr_err("Memory allocation failed for delayed_work_queue node\n");
+		return;
+	}
+
+	/* allocate the delayed_work item itself, not just a pointer to it */
+	*work = kzalloc(sizeof(**work), GFP_KERNEL);
+	if (!(*work)) {
+		pr_err("Memory allocation failed for delayed_work_queue\n");
+		kfree(link);
+		return;
+	}
+ INIT_DELAYED_WORK(*work, work_fn);
+
+ link->key = (void **)work;
+ mutex_lock(&mutex);
+ //point it to old first node
+ link->next = delayed_work_head;
+
+ //point first to new first node
+ delayed_work_head = link;
+ mutex_unlock(&mutex);
+}
+
+/**
+ * get_work_buff - look up the work buffer saved for @key
+ * @key: the work_struct pointer used as the lookup key
+ * @flag: if non-zero, also unlink and free the matching node
+ */
+
+void *get_work_buff(void *key, signed char flag)
+{
+ struct node *data = NULL;
+ void *work_new = NULL;
+ struct node *temp = NULL;
+ struct node *previous = NULL;
+ struct work_struct **work = NULL;
+
+ //start from the first link
+ mutex_lock(&mutex);
+ temp = work_head;
+
+ //if list is empty
+ if (!work_head) {
+ mutex_unlock(&mutex);
+ return NULL;
+ }
+
+ work = ((struct work_struct **)(temp->key));
+ //navigate through list
+ while (*work != key) {
+ //if it is last node
+ if (!temp->next) {
+ mutex_unlock(&mutex);
+ return NULL;
+ }
+ //store reference to current link
+ previous = temp;
+ //move to next link
+ temp = temp->next;
+ work = ((struct work_struct **)(temp->key));
+ }
+
+ if (flag) {
+ //found a match, update the link
+ if (temp == work_head) {
+ //change first to point to next link
+ work_head = work_head->next;
+ } else {
+ //bypass the current link
+ previous->next = temp->next;
+ }
+ }
+
+ mutex_unlock(&mutex);
+ //return temp;
+ data = temp;
+ if (data) {
+ work_new = data->key;
+ if (flag)
+ kfree(data);
+ }
+ return work_new;
+}
+
+void *get_delayed_work_buff(void *key, signed char flag)
+{
+ struct node *data = NULL;
+ void *dwork_new = NULL;
+ struct node *temp = NULL;
+ struct node *previous = NULL;
+ struct delayed_work **dwork = NULL;
+
+ if (flag) {
+		/* This condition is true when the kernel module is removed */
+ return delayed_work_head;
+ }
+ //start from the first link
+ mutex_lock(&mutex);
+ temp = delayed_work_head;
+
+ //if list is empty
+ if (!delayed_work_head) {
+ mutex_unlock(&mutex);
+ return NULL;
+ }
+
+ dwork = ((struct delayed_work **)(temp->key));
+ //navigate through list
+ while (&(*dwork)->work != key) {
+ //if it is last node
+ if (!temp->next) {
+ mutex_unlock(&mutex);
+ return NULL;
+ }
+ //store reference to current link
+ previous = temp;
+ //move to next link
+ temp = temp->next;
+ dwork = ((struct delayed_work **)(temp->key));
+ }
+
+ mutex_unlock(&mutex);
+ data = temp;
+ if (data) {
+ dwork_new = data->key;
+ if (flag)
+ kfree(data);
+ }
+ return dwork_new;
+}
diff --git a/drivers/staging/media/vxd/common/work_queue.h b/drivers/staging/media/vxd/common/work_queue.h
new file mode 100644
index 000000000000..44ed423334e2
--- /dev/null
+++ b/drivers/staging/media/vxd/common/work_queue.h
@@ -0,0 +1,66 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Work Queue Related Definitions
+ *
+ * Copyright (c) Imagination Technologies Ltd.
+ * Copyright (c) 2021 Texas Instruments Incorporated - http://www.ti.com/
+ *
+ * Authors:
+ * Lakshmi Sankar <[email protected]>
+ *
+ * Re-written for upstream
+ * Prashanth Kumar Amai <[email protected]>
+ */
+
+#ifndef WORKQUEUE_H_
+#define WORKQUEUE_H_
+
+#include <linux/types.h>
+
+enum {
+ HWA_DECODER = 0,
+ HWA_ENCODER = 1,
+ HWA_FORCE32BITS = 0x7FFFFFFFU
+};
+
+/*
+ * init_work - This function provides the necessary initialization
+ * and saving given pointer(work_args) in linked list.
+ * @work_args: structure for the initialization
+ * @work_fn: work function pointer
+ *
+ * This function provides the necessary initialization
+ * and setting of the handler function (passed by the user).
+ */
+void init_work(void **work_args, void *work_fn, uint8_t hwa_id);
+
+/*
+ * init_delayed_work - This function provides the necessary initialization.
+ * and saving given pointer(work_args) in linked list.
+ * @work_args: structure for the initialization
+ * @work_fn: work function pointer
+ *
+ * This function provides the necessary initialization
+ * and setting of the handler function (passed by the user).
+ */
+void init_delayed_work(void **work_args, void *work_fn, uint8_t hwa_id);
+
+/*
+ * get_delayed_work_buff - return the base address of the given pointer
+ * @key: The given work struct pointer
+ * @flag: If TRUE, delete the node from the linked list.
+ *
+ * Return: Base address of the given input buffer.
+ */
+void *get_delayed_work_buff(void *key, signed char flag);
+
+/**
+ * get_work_buff - return the base address of the given pointer
+ * @key: The given work struct pointer
+ * @flag: If TRUE, delete the node from the linked list.
+ *
+ * Return: Base address of the given input buffer.
+ */
+void *get_work_buff(void *key, signed char flag);
+
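+/*
+ * Illustrative usage (my_handler is hypothetical). init_work() allocates
+ * the work_struct itself, so the caller only supplies a pointer to fill:
+ *
+ *	static void my_handler(struct work_struct *work);
+ *
+ *	struct work_struct *work = NULL;
+ *
+ *	init_work((void **)&work, my_handler, HWA_DECODER);
+ *	if (work)
+ *		schedule_work(work);
+ *
+ *	// later, e.g. on teardown, look up and drop the bookkeeping node
+ *	get_work_buff(work, 1);
+ */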
+#endif /* WORKQUEUE_H_ */
--
2.17.1
From: Sidraya <[email protected]>
The TI video decoder uses the IMG D5520 to provide video
decoding for the H.264 codec. This patch handles message
transactions with the firmware and prepares the batch and
fragment messages sent to it.
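For reviewers, a minimal sketch (not part of the driver) of how the
generic message header word can be unpacked with the mask/shift pairs
from vxd_pvdec_regs.h; the msg pointer and local variables are
illustrative, and the driver itself goes through the MEMIO_READ_FIELD()
helpers:

	unsigned int hdr = VXD_RD_MSG_WRD(msg, PVDEC_FW_DEVA, GENMSG);
	unsigned short msg_id = (hdr & PVDEC_FW_DEVA_GENMSG_MSG_ID_MASK) >>
				PVDEC_FW_DEVA_GENMSG_MSG_ID_SHIFT;
	unsigned char msg_type = (hdr & PVDEC_FW_DEVA_GENMSG_MSG_TYPE_MASK) >>
				 PVDEC_FW_DEVA_GENMSG_MSG_TYPE_SHIFT;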
Signed-off-by: Amit Makani <[email protected]>
Signed-off-by: Sidraya <[email protected]>
---
MAINTAINERS | 2 +
.../staging/media/vxd/decoder/hw_control.c | 1211 +++++++++++++++++
.../staging/media/vxd/decoder/hw_control.h | 144 ++
3 files changed, 1357 insertions(+)
create mode 100644 drivers/staging/media/vxd/decoder/hw_control.c
create mode 100644 drivers/staging/media/vxd/decoder/hw_control.h
diff --git a/MAINTAINERS b/MAINTAINERS
index 47067f907539..2327ea12caa6 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -19542,6 +19542,8 @@ F: drivers/staging/media/vxd/common/img_mem_man.h
F: drivers/staging/media/vxd/common/img_mem_unified.c
F: drivers/staging/media/vxd/common/imgmmu.c
F: drivers/staging/media/vxd/common/imgmmu.h
+F: drivers/staging/media/vxd/decoder/hw_control.c
+F: drivers/staging/media/vxd/decoder/hw_control.h
F: drivers/staging/media/vxd/decoder/img_dec_common.h
F: drivers/staging/media/vxd/decoder/vxd_core.c
F: drivers/staging/media/vxd/decoder/vxd_dec.c
diff --git a/drivers/staging/media/vxd/decoder/hw_control.c b/drivers/staging/media/vxd/decoder/hw_control.c
new file mode 100644
index 000000000000..049d9bbcd52c
--- /dev/null
+++ b/drivers/staging/media/vxd/decoder/hw_control.c
@@ -0,0 +1,1211 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * VXD DEC Hardware control implementation
+ *
+ * Copyright (c) Imagination Technologies Ltd.
+ * Copyright (c) 2021 Texas Instruments Incorporated - http://www.ti.com/
+ *
+ * Authors:
+ * Amit Makani <[email protected]>
+ *
+ * Re-written for upstreaming
+ * Sidraya Jayagond <[email protected]>
+ * Prashanth Kumar Amai <[email protected]>
+ */
+
+#include <linux/types.h>
+#include <linux/dma-mapping.h>
+#include <media/v4l2-ctrls.h>
+#include <media/v4l2-device.h>
+#include <media/v4l2-mem2mem.h>
+
+#include "decoder.h"
+#include "hw_control.h"
+#include "img_msvdx_vdmc_regs.h"
+#include "img_pvdec_core_regs.h"
+#include "img_pvdec_pixel_regs.h"
+#include "img_pvdec_test_regs.h"
+#include "img_vdec_fw_msg.h"
+#include "img_video_bus4_mmu_regs.h"
+#include "img_msvdx_core_regs.h"
+#include "reg_io2.h"
+#include "vdecdd_defs.h"
+#include "vxd_dec.h"
+#include "vxd_ext.h"
+#include "vxd_int.h"
+#include "vxd_pvdec_priv.h"
+
+#define MSG_GROUP_MASK 0xf0
+
+struct hwctrl_ctx {
+ unsigned int is_initialised;
+ unsigned int is_on_seq_replay;
+ unsigned int replay_tid;
+ unsigned int num_pipes;
+ struct vdecdd_dd_devconfig devconfig;
+ void *hndl_vxd;
+ void *dec_core;
+ void *comp_init_userdata;
+ struct vidio_ddbufinfo dev_ptd_bufinfo;
+ struct lst_t pend_pict_list;
+ struct hwctrl_msgstatus host_msg_status;
+ void *hmsg_task_event;
+ void *hmsg_task_kick;
+ void *hmsg_task;
+ unsigned int is_msg_task_active;
+ struct hwctrl_state state;
+ struct hwctrl_state prev_state;
+ unsigned int is_prev_hw_state_set;
+ unsigned int is_fatal_state;
+};
+
+struct vdeckm_context {
+ unsigned int core_num;
+ struct vxd_coreprops props;
+ unsigned short current_msgid;
+ unsigned char reader_active;
+ void *comms_ram_addr;
+ unsigned int state_offset;
+ unsigned int state_size;
+};
+
+/*
+ * Panic reason identifier.
+ */
+enum pvdec_panic_reason {
+ PANIC_REASON_OTHER = 0,
+ PANIC_REASON_WDT,
+ PANIC_REASON_READ_TIMEOUT,
+ PANIC_REASON_CMD_TIMEOUT,
+ PANIC_REASON_MMU_FAULT,
+ PANIC_REASON_MAX,
+ PANIC_REASON_FORCE32BITS = 0x7FFFFFFFU
+};
+
+/*
+ * Panic reason strings.
+ * NOTE: Should match the pvdec_panic_reason ids.
+ */
+static unsigned char *apanic_reason[PANIC_REASON_MAX] = {
+ [PANIC_REASON_OTHER] = "Other",
+ [PANIC_REASON_WDT] = "Watch Dog Timeout",
+ [PANIC_REASON_READ_TIMEOUT] = "Read Timeout",
+ [PANIC_REASON_CMD_TIMEOUT] = "Command Timeout",
+ [PANIC_REASON_MMU_FAULT] = "MMU Page Fault"
+};
+
+/*
+ * Maximum length of the panic reason string.
+ */
+#define PANIC_REASON_LEN (255)
+
+static struct vdeckm_context acore_ctx[VXD_MAX_CORES] = {0};
+
+static int vdeckm_getregsoffsets(const void *hndl_vxd,
+ struct decoder_regsoffsets *regs_offsets)
+{
+ struct vdeckm_context *core_ctx = (struct vdeckm_context *)hndl_vxd;
+
+ if (!core_ctx)
+ return IMG_ERROR_INVALID_PARAMETERS;
+
+ regs_offsets->vdmc_cmd_offset = MSVDX_CMD_OFFSET;
+ regs_offsets->vec_offset = MSVDX_VEC_OFFSET;
+ regs_offsets->entropy_offset = PVDEC_ENTROPY_OFFSET;
+ regs_offsets->vec_be_regs_offset = PVDEC_VEC_BE_OFFSET;
+ regs_offsets->vdec_be_codec_regs_offset = PVDEC_VEC_BE_CODEC_OFFSET;
+
+ return IMG_SUCCESS;
+}
+
+static int vdeckm_send_message(const void *hndl_vxd,
+ struct hwctrl_to_kernel_msg *to_kernelmsg,
+ void *vxd_dec_ctx)
+{
+ struct vdeckm_context *core_ctx = (struct vdeckm_context *)hndl_vxd;
+ unsigned int count = 0;
+ unsigned int *msg;
+
+ if (!core_ctx || !to_kernelmsg)
+ return IMG_ERROR_INVALID_PARAMETERS;
+
+ msg = kzalloc(VXD_SIZE_MSG_BUFFER, GFP_KERNEL);
+ if (!msg)
+ return IMG_ERROR_OUT_OF_MEMORY;
+
+ msg[count++] = to_kernelmsg->flags;
+ msg[count++] = to_kernelmsg->msg_size;
+
+ memcpy(&msg[count], to_kernelmsg->msg_hdr, to_kernelmsg->msg_size);
+
+ core_ctx->reader_active = 1;
+
+ if (!(to_kernelmsg->msg_hdr)) {
+ kfree(msg);
+ return IMG_ERROR_INVALID_PARAMETERS;
+ }
+
+ pr_debug("[HWCTRL] adding message to vxd queue\n");
+ vxd_send_msg(vxd_dec_ctx, (struct vxd_fw_msg *)msg);
+
+ kfree(msg);
+
+ return 0;
+}
+
+static void vdeckm_return_msg(const void *hndl_vxd,
+ struct hwctrl_to_kernel_msg *to_kernelmsg)
+{
+ if (to_kernelmsg)
+ kfree(to_kernelmsg->msg_hdr);
+}
+
+static int vdeckm_handle_mtxtohost_msg(unsigned int *msg, struct lst_t *pend_pict_list,
+ enum vxd_msg_attr *msg_attr,
+ struct dec_decpict **decpict,
+ unsigned char msg_type,
+ unsigned int trans_id)
+{
+ struct dec_decpict *pdec_pict;
+
+ switch (msg_type) {
+ case FW_DEVA_COMPLETED:
+ {
+ struct dec_pict_attrs *pict_attrs = NULL;
+ unsigned short error_flags = 0;
+ unsigned int no_bewdts = 0;
+ unsigned int mbs_dropped = 0;
+ unsigned int mbs_recovered = 0;
+ unsigned char flag = 0;
+
+ pr_debug("Received message from firmware\n");
+ error_flags = MEMIO_READ_FIELD(msg, FW_DEVA_COMPLETED_ERROR_FLAGS);
+
+ no_bewdts = MEMIO_READ_FIELD(msg, FW_DEVA_COMPLETED_NUM_BEWDTS);
+
+ mbs_dropped = MEMIO_READ_FIELD(msg, FW_DEVA_COMPLETED_NUM_MBSDROPPED);
+
+ mbs_recovered = MEMIO_READ_FIELD(msg, FW_DEVA_COMPLETED_NUM_MBSRECOVERED);
+
+ pdec_pict = lst_first(pend_pict_list);
+ while (pdec_pict) {
+ if (pdec_pict->transaction_id == trans_id)
+ break;
+ pdec_pict = lst_next(pdec_pict);
+ }
+ /*
+ * We must have a picture in the list that matches
+ * the transaction id
+ */
+ if (!pdec_pict)
+ return IMG_ERROR_FATAL;
+
+ if (!(pdec_pict->first_fld_fwmsg) || !(pdec_pict->second_fld_fwmsg))
+ return IMG_ERROR_FATAL;
+
+ flag = pdec_pict->first_fld_fwmsg->pict_attrs.first_fld_rcvd;
+ if (flag) {
+ pict_attrs = &pdec_pict->second_fld_fwmsg->pict_attrs;
+ } else {
+ pict_attrs = &pdec_pict->first_fld_fwmsg->pict_attrs;
+ flag = 1;
+ }
+
+ pict_attrs->fe_err = (unsigned int)error_flags;
+ pict_attrs->no_be_wdt = no_bewdts;
+ pict_attrs->mbs_dropped = mbs_dropped;
+ pict_attrs->mbs_recovered = mbs_recovered;
+ /*
+		 * We may have successfully replayed the picture,
+ * so reset the error flags
+ */
+ pict_attrs->pict_attrs.dwrfired = 0;
+ pict_attrs->pict_attrs.mmufault = 0;
+ pict_attrs->pict_attrs.deverror = 0;
+
+ *msg_attr = VXD_MSG_ATTR_DECODED;
+ *decpict = pdec_pict;
+ break;
+ }
+
+ case FW_DEVA_PANIC:
+ {
+ unsigned int panic_info = MEMIO_READ_FIELD(msg, FW_DEVA_PANIC_ERROR_INT);
+ unsigned char panic_reason[PANIC_REASON_LEN] = "Reason(s): ";
+ unsigned char is_panic_reson_identified = 0;
+ /*
+ * Create panic reason string.
+ */
+ if (REGIO_READ_FIELD(panic_info, PVDEC_CORE, CR_PVDEC_HOST_INTERRUPT_STATUS,
+ CR_HOST_SYS_WDT)) {
+ strncat(panic_reason, apanic_reason[PANIC_REASON_WDT],
+ PANIC_REASON_LEN - 1);
+ is_panic_reson_identified = 1;
+ }
+ if (REGIO_READ_FIELD(panic_info, PVDEC_CORE, CR_PVDEC_HOST_INTERRUPT_STATUS,
+ CR_HOST_READ_TIMEOUT_PROC_IRQ)) {
+ strncat(panic_reason, apanic_reason[PANIC_REASON_READ_TIMEOUT],
+ PANIC_REASON_LEN - 1);
+ is_panic_reson_identified = 1;
+ }
+ if (REGIO_READ_FIELD(panic_info, PVDEC_CORE, CR_PVDEC_HOST_INTERRUPT_STATUS,
+ CR_HOST_COMMAND_TIMEOUT_PROC_IRQ)) {
+ strncat(panic_reason, apanic_reason[PANIC_REASON_CMD_TIMEOUT],
+ PANIC_REASON_LEN - 1);
+ is_panic_reson_identified = 1;
+ }
+ if (!is_panic_reson_identified) {
+ strncat(panic_reason, apanic_reason[PANIC_REASON_OTHER],
+ PANIC_REASON_LEN - 1);
+ }
+ panic_reason[strlen(panic_reason) - 2] = 0;
+ if (trans_id != 0)
+ pr_err("TID=0x%08X [FIRMWARE PANIC %s]\n", trans_id, panic_reason);
+ else
+ pr_err("TID=NULL [GENERAL FIRMWARE PANIC %s]\n", panic_reason);
+
+ break;
+ }
+
+ case FW_ASSERT:
+ {
+ unsigned int fwfile_namehash = MEMIO_READ_FIELD(msg, FW_ASSERT_FILE_NAME_HASH);
+ unsigned int fwfile_line = MEMIO_READ_FIELD(msg, FW_ASSERT_FILE_LINE);
+
+ pr_err("ASSERT file name hash:0x%08X line number:%d\n",
+ fwfile_namehash, fwfile_line);
+ break;
+ }
+
+ case FW_SO:
+ {
+ unsigned int task_name = MEMIO_READ_FIELD(msg, FW_SO_TASK_NAME);
+ unsigned char sztaskname[sizeof(unsigned int) + 1];
+
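+		/*
+		 * The task name is packed into a 32-bit word, most
+		 * significant byte first; unpack it into a NUL-terminated
+		 * string for logging.
+		 */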
+ sztaskname[0] = task_name >> 24;
+ sztaskname[1] = (task_name >> 16) & 0xff;
+ sztaskname[2] = (task_name >> 8) & 0xff;
+ sztaskname[3] = task_name & 0xff;
+ if (sztaskname[3] != 0)
+ sztaskname[4] = 0;
+ pr_warn("STACK OVERFLOW for %s task\n", sztaskname);
+ break;
+ }
+
+ case FW_VXD_EMPTY_COMPL:
+ /*
+		 * Empty completion message sent as a response to init,
+		 * configure, etc. The architecture of the vxd.ko module
+ * requires the firmware to send a reply for every
+ * message submitted by the user space.
+ */
+ break;
+
+ default:
+ break;
+ }
+
+ return 0;
+}
+
+static int vdeckm_handle_hosttomtx_msg(unsigned int *msg, struct lst_t *pend_pict_list,
+ enum vxd_msg_attr *msg_attr,
+ struct dec_decpict **decpict,
+ unsigned char msg_type,
+ unsigned int trans_id,
+ unsigned int msg_flags)
+{
+ struct dec_decpict *pdec_pict;
+
+ pr_debug("Received message from HOST\n");
+
+ switch (msg_type) {
+ case FW_DEVA_PARSE:
+ {
+ struct dec_pict_attrs *pict_attrs = NULL;
+ unsigned char flag = 0;
+
+ pdec_pict = lst_first(pend_pict_list);
+ while (pdec_pict) {
+ if (pdec_pict->transaction_id == trans_id)
+ break;
+
+ pdec_pict = lst_next(pdec_pict);
+ }
+
+ /*
+ * We must have a picture in the list that matches
+ * the transaction id
+ */
+		if (!pdec_pict) {
+			pr_err("Firmware decoded message received with no pending picture\n");
+			return IMG_ERROR_FATAL;
+		}
+
+ if (!(pdec_pict->first_fld_fwmsg) || !(pdec_pict->second_fld_fwmsg)) {
+ pr_err("invalid pending picture struct\n");
+ return IMG_ERROR_FATAL;
+ }
+
+ flag = pdec_pict->first_fld_fwmsg->pict_attrs.first_fld_rcvd;
+ if (flag) {
+ pict_attrs = &pdec_pict->second_fld_fwmsg->pict_attrs;
+ } else {
+ pict_attrs = &pdec_pict->first_fld_fwmsg->pict_attrs;
+ flag = 1;
+ }
+
+ /*
+ * The below info is fetched from firmware state
+ * afterwards, so just set this to zero for now.
+ */
+ pict_attrs->fe_err = 0;
+ pict_attrs->no_be_wdt = 0;
+ pict_attrs->mbs_dropped = 0;
+ pict_attrs->mbs_recovered = 0;
+
+ vxd_get_pictattrs(msg_flags, &pict_attrs->pict_attrs);
+ vxd_get_msgerrattr(msg_flags, msg_attr);
+
+ if (*msg_attr == VXD_MSG_ATTR_FATAL)
+ pr_err("[TID=0x%08X] [DECODE_FAILED]\n", trans_id);
+ if (*msg_attr == VXD_MSG_ATTR_CANCELED)
+ pr_err("[TID=0x%08X] [DECODE_CANCELED]\n", trans_id);
+
+ *decpict = pdec_pict;
+ break;
+ }
+
+ case FW_DEVA_PARSE_FRAGMENT:
+ /*
+ * Do nothing - Picture holds the list of fragments.
+ * So, in case of any error those would be replayed
+ * anyway.
+ */
+ break;
+ default:
+ pr_warn("Unknown message received 0x%02x\n", msg_type);
+ break;
+ }
+
+ return 0;
+}
+
+static int vdeckm_process_msg(const void *hndl_vxd, unsigned int *msg,
+ struct lst_t *pend_pict_list,
+ unsigned int msg_flags,
+ enum vxd_msg_attr *msg_attr,
+ struct dec_decpict **decpict)
+{
+ struct vdeckm_context *core_ctx = (struct vdeckm_context *)hndl_vxd;
+ unsigned char msg_type;
+ unsigned char msg_group;
+ unsigned int trans_id = 0;
+ struct vdec_pict_hwcrc *pict_hwcrc = NULL;
+ struct dec_decpict *pdec_pict;
+
+ if (!core_ctx || !msg || !msg_attr || !pend_pict_list || !decpict)
+ return IMG_ERROR_INVALID_PARAMETERS;
+
+ *msg_attr = VXD_MSG_ATTR_NONE;
+ *decpict = NULL;
+
+ trans_id = MEMIO_READ_FIELD(msg, FW_DEVA_GENMSG_TRANS_ID);
+ msg_type = MEMIO_READ_FIELD(msg, FW_DEVA_GENMSG_MSG_TYPE);
+ msg_group = msg_type & MSG_GROUP_MASK;
+
+ switch (msg_group) {
+ case MSG_TYPE_START_PSR_MTXHOST_MSG:
+ vdeckm_handle_mtxtohost_msg(msg, pend_pict_list, msg_attr,
+ decpict, msg_type, trans_id);
+ break;
+ /*
+ * Picture decode has been returned as unprocessed.
+ * Locate the picture with corresponding TID and mark
+ * it as decoded with errors.
+ */
+ case MSG_TYPE_START_PSR_HOSTMTX_MSG:
+ vdeckm_handle_hosttomtx_msg(msg, pend_pict_list, msg_attr,
+ decpict, msg_type, trans_id,
+ msg_flags);
+ break;
+
+ case FW_DEVA_SIGNATURES_HEVC:
+ case FW_DEVA_SIGNATURES_LEGACY:
+ {
+ unsigned int *signatures = msg + (FW_DEVA_SIGNATURES_SIGNATURES_OFFSET /
+ sizeof(unsigned int));
+ unsigned char sigcount = MEMIO_READ_FIELD(msg, FW_DEVA_SIGNATURES_MSG_SIZE) -
+ ((FW_DEVA_SIGNATURES_SIZE / sizeof(unsigned int)) - 1);
+ unsigned int selected = MEMIO_READ_FIELD(msg, FW_DEVA_SIGNATURES_SIGNATURE_SELECT);
+ unsigned char i, j = 0;
+
+ pdec_pict = lst_first(pend_pict_list);
+ while (pdec_pict) {
+ if (pdec_pict->transaction_id == trans_id)
+ break;
+ pdec_pict = lst_next(pdec_pict);
+ }
+
+ /* We must have a picture in the list that matches the tid */
+ VDEC_ASSERT(pdec_pict);
+ if (!pdec_pict) {
+ pr_err("Firmware signatures message received with no pending picture\n");
+ return IMG_ERROR_FATAL;
+ }
+
+ VDEC_ASSERT(pdec_pict->first_fld_fwmsg);
+ VDEC_ASSERT(pdec_pict->second_fld_fwmsg);
+ if (!pdec_pict->first_fld_fwmsg || !pdec_pict->second_fld_fwmsg) {
+ pr_err("Invalid pending picture struct\n");
+ return IMG_ERROR_FATAL;
+ }
+ if (pdec_pict->first_fld_fwmsg->pict_hwcrc.first_fld_rcvd) {
+ pict_hwcrc = &pdec_pict->second_fld_fwmsg->pict_hwcrc;
+ } else {
+ pict_hwcrc = &pdec_pict->first_fld_fwmsg->pict_hwcrc;
+ if (selected & (PVDEC_SIGNATURE_GROUP_20 | PVDEC_SIGNATURE_GROUP_24))
+ pdec_pict->first_fld_fwmsg->pict_hwcrc.first_fld_rcvd = TRUE;
+ }
+
+ for (i = 0; i < 32; i++) {
+ unsigned int group = selected & (1 << i);
+
+ switch (group) {
+ case PVDEC_SIGNATURE_GROUP_20:
+ pict_hwcrc->crc_vdmc_pix_recon = signatures[j++];
+ break;
+
+ case PVDEC_SIGNATURE_GROUP_24:
+ pict_hwcrc->vdeb_sysmem_wrdata = signatures[j++];
+ break;
+
+ default:
+ break;
+ }
+ }
+
+ /* sanity check */
+ sigcount -= j;
+ VDEC_ASSERT(sigcount == 0);
+
+ /*
+ * suppress PVDEC_SIGNATURE_GROUP_1 and notify
+ * only about groups used for verification
+ */
+#ifdef DEBUG_DECODER_DRIVER
+ if (selected & (PVDEC_SIGNATURE_GROUP_20 | PVDEC_SIGNATURE_GROUP_24))
+ pr_info("[TID=0x%08X] [SIGNATURES]\n", trans_id);
+#endif
+
+ *decpict = pdec_pict;
+
+ break;
+ }
+
+ default: {
+#ifdef DEBUG_DECODER_DRIVER
+ unsigned short msg_size, i;
+
+ pr_warn("Unknown message type received: 0x%x", msg_type);
+
+ msg_size = MEMIO_READ_FIELD(msg, FW_DEVA_GENMSG_MSG_SIZE);
+
+ for (i = 0; i < msg_size; i++)
+ pr_info("0x%04x: 0x%08x\n", i, msg[i]);
+#endif
+ break;
+ }
+ }
+
+ return 0;
+}
+
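+/*
+ * Copy a block of data in 32-bit words (used to read the firmware state
+ * from VEC local RAM). The size is given in bytes; any remainder beyond a
+ * multiple of four bytes is not copied.
+ */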
+static void vdeckm_vlr_copy(void *dst, void *src, unsigned int size)
+{
+ unsigned int *pdst = (unsigned int *)dst;
+ unsigned int *psrc = (unsigned int *)src;
+
+ size /= 4;
+ while (size--)
+ *pdst++ = *psrc++;
+}
+
+static int vdeckm_get_core_state(const void *hndl_vxd, struct vxd_states *state)
+{
+ struct vdeckm_context *core_ctx = (struct vdeckm_context *)hndl_vxd;
+ struct vdecfw_pvdecfirmwarestate firmware_state;
+ unsigned char pipe = 0;
+
+#ifdef ERROR_RECOVERY_SIMULATION
+	/*
+	 * If disable_fw_irq_value is not zero, return an error. The interrupt
+	 * has been ignored, so processing any further would dereference
+	 * comms_ram_addr and crash the kernel.
+	 */
+ if (disable_fw_irq_value != 0)
+ return IMG_ERROR_INVALID_PARAMETERS;
+#endif
+
+ if (!core_ctx || !state)
+ return IMG_ERROR_INVALID_PARAMETERS;
+
+ /*
+ * If state is requested for the first time.
+ */
+ if (core_ctx->state_size == 0) {
+ unsigned int regval;
+ /*
+ * get the state buffer info.
+ */
+ regval = *((unsigned int *)core_ctx->comms_ram_addr +
+ (PVDEC_COM_RAM_STATE_BUF_SIZE_AND_OFFSET_OFFSET / sizeof(unsigned int)));
+ core_ctx->state_size = PVDEC_COM_RAM_BUF_GET_SIZE(regval, STATE);
+ core_ctx->state_offset = PVDEC_COM_RAM_BUF_GET_OFFSET(regval, STATE);
+ }
+
+ /*
+ * If state buffer is available.
+ */
+ if (core_ctx->state_size) {
+ /*
+ * Determine the latest transaction to have passed each
+ * checkpoint in the firmware.
+ * Read the firmware state from VEC Local RAM
+ */
+ vdeckm_vlr_copy(&firmware_state, (unsigned char *)core_ctx->comms_ram_addr +
+ core_ctx->state_offset, core_ctx->state_size);
+
+ for (pipe = 0; pipe < core_ctx->props.num_pixel_pipes; pipe++) {
+ /*
+ * Set pipe presence.
+ */
+ state->fw_state.pipe_state[pipe].is_pipe_present = 1;
+
+ /*
+ * For checkpoints copy message ids here. These will
+ * be translated into transaction ids later.
+ */
+ memcpy(state->fw_state.pipe_state[pipe].acheck_point,
+ firmware_state.pipestate[pipe].check_point,
+ sizeof(state->fw_state.pipe_state[pipe].acheck_point));
+ state->fw_state.pipe_state[pipe].firmware_action =
+ firmware_state.pipestate[pipe].firmware_action;
+ state->fw_state.pipe_state[pipe].cur_codec =
+ firmware_state.pipestate[pipe].curr_codec;
+ state->fw_state.pipe_state[pipe].fe_slices =
+ firmware_state.pipestate[pipe].fe_slices;
+ state->fw_state.pipe_state[pipe].be_slices =
+ firmware_state.pipestate[pipe].be_slices;
+ state->fw_state.pipe_state[pipe].fe_errored_slices =
+ firmware_state.pipestate[pipe].fe_errored_slices;
+ state->fw_state.pipe_state[pipe].be_errored_slices =
+ firmware_state.pipestate[pipe].be_errored_slices;
+ state->fw_state.pipe_state[pipe].be_mbs_dropped =
+ firmware_state.pipestate[pipe].be_mbs_dropped;
+ state->fw_state.pipe_state[pipe].be_mbs_recovered =
+ firmware_state.pipestate[pipe].be_mbs_recovered;
+ state->fw_state.pipe_state[pipe].fe_mb.x =
+ firmware_state.pipestate[pipe].last_fe_mb_xy & 0xFF;
+ state->fw_state.pipe_state[pipe].fe_mb.y =
+ (firmware_state.pipestate[pipe].last_fe_mb_xy >> 16) & 0xFF;
+ state->fw_state.pipe_state[pipe].be_mb.x =
+ REGIO_READ_FIELD(firmware_state.pipestate[pipe].last_be_mb_xy,
+ MSVDX_VDMC,
+ CR_VDMC_MACROBLOCK_NUMBER,
+ CR_VDMC_MACROBLOCK_X_OFFSET);
+ state->fw_state.pipe_state[pipe].be_mb.y =
+ REGIO_READ_FIELD(firmware_state.pipestate[pipe].last_be_mb_xy,
+ MSVDX_VDMC,
+ CR_VDMC_MACROBLOCK_NUMBER,
+ CR_VDMC_MACROBLOCK_Y_OFFSET);
+ }
+ }
+
+ return 0;
+}
+
+static int vdeckm_prepare_batch(struct vdeckm_context *core_ctx,
+ const struct hwctrl_batch_msgdata *batch_msgdata,
+ unsigned char **msg)
+{
+ unsigned char vdec_flags = 0;
+ unsigned short flags = 0;
+ unsigned char *pmsg = kzalloc(FW_DEVA_DECODE_SIZE, GFP_KERNEL);
+ struct vidio_ddbufinfo *pbatch_msg_bufinfo = batch_msgdata->batchmsg_bufinfo;
+
+ if (!pmsg)
+ return IMG_ERROR_MALLOC_FAILED;
+
+ if (batch_msgdata->size_delimited_mode)
+ vdec_flags |= FW_VDEC_NAL_SIZE_DELIM;
+
+ flags |= FW_DEVA_RENDER_HOST_INT;
+
+ /*
+ * Message type and stream ID
+ */
+ MEMIO_WRITE_FIELD(pmsg, FW_DEVA_GENMSG_MSG_TYPE, FW_DEVA_PARSE, unsigned char*);
+
+ MEMIO_WRITE_FIELD(pmsg, FW_DEVA_DECODE_CTRL_ALLOC_ADDR,
+ (unsigned int)pbatch_msg_bufinfo->dev_virt, unsigned char*);
+
+ MEMIO_WRITE_FIELD(pmsg, FW_DEVA_DECODE_BUFFER_SIZE,
+ batch_msgdata->ctrl_alloc_bytes / sizeof(unsigned int), unsigned char*);
+
+ /*
+ * Operating mode and decode flags
+ */
+ MEMIO_WRITE_FIELD(pmsg, FW_DEVA_DECODE_OPERATING_MODE, batch_msgdata->operating_mode,
+ unsigned char*);
+
+ MEMIO_WRITE_FIELD(pmsg, FW_DEVA_DECODE_FLAGS, flags, unsigned char*);
+
+ MEMIO_WRITE_FIELD(pmsg, FW_DEVA_DECODE_VDEC_FLAGS, vdec_flags, unsigned char*);
+
+ MEMIO_WRITE_FIELD(pmsg, FW_DEVA_DECODE_GENC_ID, batch_msgdata->genc_id, unsigned char*);
+
+ MEMIO_WRITE_FIELD(pmsg, FW_DEVA_DECODE_MB_LOAD, batch_msgdata->mb_load, unsigned char*);
+
+ MEMIO_WRITE_FIELD(pmsg, FW_DEVA_DECODE_STREAMID,
+ GET_STREAM_ID(batch_msgdata->transaction_id), unsigned char*);
+
+ MEMIO_WRITE_FIELD(pmsg, FW_DEVA_DECODE_EXT_STATE_BUFFER,
+ (unsigned int)batch_msgdata->pvdec_fwctx->dev_virt, unsigned char*);
+
+ MEMIO_WRITE_FIELD(pmsg, FW_DEVA_DECODE_MSG_ID, ++core_ctx->current_msgid,
+ unsigned char*);
+
+ MEMIO_WRITE_FIELD(pmsg, FW_DEVA_DECODE_TRANS_ID, batch_msgdata->transaction_id,
+ unsigned char*);
+
+ MEMIO_WRITE_FIELD(pmsg, FW_DEVA_DECODE_TILE_CFG, batch_msgdata->tile_cfg, unsigned char*);
+
+ /*
+ * size of message
+ */
+ MEMIO_WRITE_FIELD(pmsg, FW_DEVA_GENMSG_MSG_SIZE,
+ FW_DEVA_DECODE_SIZE / sizeof(unsigned int), unsigned char*);
+
+ *msg = pmsg;
+
+ return 0;
+}
+
+static int vdeckm_prepare_fragment(struct vdeckm_context *core_ctx,
+ const struct hwctrl_fragment_msgdata
+ *fragment_msgdata,
+ unsigned char **msg)
+{
+ struct vidio_ddbufinfo *pbatch_msg_bufinfo = NULL;
+ unsigned char *pmsg = NULL;
+
+ pbatch_msg_bufinfo = fragment_msgdata->batchmsg_bufinfo;
+
+ if (!(fragment_msgdata->batchmsg_bufinfo)) {
+ pr_err("Batch message info missing!\n");
+ return IMG_ERROR_INVALID_PARAMETERS;
+ }
+
+ pmsg = kzalloc(FW_DEVA_DECODE_FRAGMENT_SIZE, GFP_KERNEL);
+ if (!pmsg)
+ return IMG_ERROR_MALLOC_FAILED;
+ /*
+ * message type and stream id
+ */
+ MEMIO_WRITE_FIELD(pmsg, FW_DEVA_GENMSG_MSG_TYPE,
+ FW_DEVA_PARSE_FRAGMENT, unsigned char*);
+ MEMIO_WRITE_FIELD(pmsg, FW_DEVA_DECODE_MSG_ID, ++core_ctx->current_msgid, unsigned char*);
+
+ MEMIO_WRITE_FIELD(pmsg, FW_DEVA_DECODE_FRAGMENT_CTRL_ALLOC_ADDR,
+ (unsigned int)pbatch_msg_bufinfo->dev_virt
+ + fragment_msgdata->ctrl_alloc_offset, unsigned char*);
+ MEMIO_WRITE_FIELD(pmsg, FW_DEVA_DECODE_FRAGMENT_BUFFER_SIZE,
+ fragment_msgdata->ctrl_alloc_bytes / sizeof(unsigned int),
+ unsigned char*);
+
+ /*
+ * size of message
+ */
+ MEMIO_WRITE_FIELD(pmsg, FW_DEVA_GENMSG_MSG_SIZE,
+ FW_DEVA_DECODE_FRAGMENT_SIZE / sizeof(unsigned int), unsigned char*);
+
+ *msg = pmsg;
+
+ return 0;
+}
+
+static int vdeckm_get_message(const void *hndl_vxd, const enum hwctrl_msgid msgid,
+ const struct hwctrl_msgdata *msgdata,
+ struct hwctrl_to_kernel_msg *to_kernelmsg)
+{
+ unsigned int result = 0;
+ struct vdeckm_context *core_ctx = (struct vdeckm_context *)hndl_vxd;
+
+ if (!core_ctx || !to_kernelmsg || !msgdata)
+ return IMG_ERROR_INVALID_PARAMETERS;
+
+ switch (msgid) {
+ case HWCTRL_MSGID_BATCH:
+ result = vdeckm_prepare_batch(core_ctx, &msgdata->batch_msgdata,
+ &to_kernelmsg->msg_hdr);
+ break;
+
+ case HWCTRL_MSGID_FRAGMENT:
+ result = vdeckm_prepare_fragment(core_ctx, &msgdata->fragment_msgdata,
+ &to_kernelmsg->msg_hdr);
+ vxd_set_msgflag(VXD_MSG_FLAG_DROP, &to_kernelmsg->flags);
+ break;
+
+ default:
+ result = IMG_ERROR_GENERIC_FAILURE;
+ pr_err("got a message that is not supported by PVDEC");
+ break;
+ }
+
+ if (result == 0) {
+ /* Set the stream ID for the next message to be sent. */
+ to_kernelmsg->km_str_id = msgdata->km_str_id;
+ to_kernelmsg->msg_size = MEMIO_READ_FIELD(to_kernelmsg->msg_hdr,
+ FW_DEVA_GENMSG_MSG_SIZE) *
+ sizeof(unsigned int);
+ }
+
+ return result;
+}
+
+static void hwctrl_dump_state(struct vxd_states *prev_state,
+ struct vxd_states *cur_state,
+ unsigned char pipe_minus1)
+{
+ pr_info("Back-End MbX [% 10d]",
+ prev_state->fw_state.pipe_state[pipe_minus1].be_mb.x);
+ pr_info("Back-End MbY [% 10d]",
+ prev_state->fw_state.pipe_state[pipe_minus1].be_mb.y);
+ pr_info("Front-End MbX [% 10d]",
+ prev_state->fw_state.pipe_state[pipe_minus1].fe_mb.x);
+ pr_info("Front-End MbY [% 10d]",
+ prev_state->fw_state.pipe_state[pipe_minus1].fe_mb.y);
+ pr_info("VDECFW_CHECKPOINT_BE_PICTURE_COMPLETE [0x%08X]",
+ cur_state->fw_state.pipe_state[pipe_minus1].acheck_point
+ [VDECFW_CHECKPOINT_BE_PICTURE_COMPLETE]);
+ pr_info("VDECFW_CHECKPOINT_BE_1SLICE_DONE [0x%08X]",
+ cur_state->fw_state.pipe_state[pipe_minus1].acheck_point
+ [VDECFW_CHECKPOINT_BE_1SLICE_DONE]);
+ pr_info("VDECFW_CHECKPOINT_BE_PICTURE_STARTED [0x%08X]",
+ cur_state->fw_state.pipe_state[pipe_minus1].acheck_point
+ [VDECFW_CHECKPOINT_BE_PICTURE_STARTED]);
+ pr_info("VDECFW_CHECKPOINT_FE_PICTURE_COMPLETE [0x%08X]",
+ cur_state->fw_state.pipe_state[pipe_minus1].acheck_point
+ [VDECFW_CHECKPOINT_FE_PICTURE_COMPLETE]);
+ pr_info("VDECFW_CHECKPOINT_FE_PARSE_DONE [0x%08X]",
+ cur_state->fw_state.pipe_state[pipe_minus1].acheck_point
+ [VDECFW_CHECKPOINT_FE_PARSE_DONE]);
+ pr_info("VDECFW_CHECKPOINT_FE_1SLICE_DONE [0x%08X]",
+ cur_state->fw_state.pipe_state[pipe_minus1].acheck_point
+ [VDECFW_CHECKPOINT_FE_1SLICE_DONE]);
+ pr_info("VDECFW_CHECKPOINT_ENTDEC_STARTED [0x%08X]",
+ cur_state->fw_state.pipe_state[pipe_minus1].acheck_point
+ [VDECFW_CHECKPOINT_ENTDEC_STARTED]);
+ pr_info("VDECFW_CHECKPOINT_FIRMWARE_SAVED [0x%08X]",
+ cur_state->fw_state.pipe_state[pipe_minus1].acheck_point
+ [VDECFW_CHECKPOINT_FIRMWARE_SAVED]);
+ pr_info("VDECFW_CHECKPOINT_PICMAN_COMPLETE [0x%08X]",
+ cur_state->fw_state.pipe_state[pipe_minus1].acheck_point
+ [VDECFW_CHECKPOINT_PICMAN_COMPLETE]);
+ pr_info("VDECFW_CHECKPOINT_FIRMWARE_READY [0x%08X]",
+ cur_state->fw_state.pipe_state[pipe_minus1].acheck_point
+ [VDECFW_CHECKPOINT_FIRMWARE_READY]);
+ pr_info("VDECFW_CHECKPOINT_PICTURE_STARTED [0x%08X]",
+ cur_state->fw_state.pipe_state[pipe_minus1].acheck_point
+ [VDECFW_CHECKPOINT_PICTURE_STARTED]);
+}
+
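+/*
+ * The picture load is the number of 16x16 macroblocks in the coded frame,
+ * rounding each dimension up to a whole macroblock (e.g. a 1920x1080 frame
+ * gives 120 * 68 = 8160 macroblocks).
+ */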
+static unsigned int hwctrl_calculate_load(struct bspp_pict_hdr_info *pict_hdr_info)
+{
+ return (((pict_hdr_info->coded_frame_size.width + 15) / 16)
+ * ((pict_hdr_info->coded_frame_size.height + 15) / 16));
+}
+
+static int hwctrl_send_batch_message(struct hwctrl_ctx *hwctx,
+ struct dec_decpict *decpict,
+ void *vxd_dec_ctx)
+{
+ int result;
+ struct hwctrl_to_kernel_msg to_kernelmsg = {0};
+ struct vidio_ddbufinfo *batchmsg_bufinfo =
+ decpict->batch_msginfo->ddbuf_info;
+ struct hwctrl_msgdata msg_data;
+ struct hwctrl_batch_msgdata *batch_msgdata = &msg_data.batch_msgdata;
+
+ memset(&msg_data, 0, sizeof(msg_data));
+
+ msg_data.km_str_id = GET_STREAM_ID(decpict->transaction_id);
+
+ batch_msgdata->batchmsg_bufinfo = batchmsg_bufinfo;
+
+ batch_msgdata->transaction_id = decpict->transaction_id;
+ batch_msgdata->pvdec_fwctx = decpict->str_pvdec_fw_ctxbuf;
+ batch_msgdata->ctrl_alloc_bytes = decpict->ctrl_alloc_bytes;
+ batch_msgdata->operating_mode = decpict->operating_op;
+ batch_msgdata->genc_id = decpict->genc_id;
+ batch_msgdata->mb_load = hwctrl_calculate_load(decpict->pict_hdr_info);
+ batch_msgdata->size_delimited_mode =
+ (decpict->pict_hdr_info->parser_mode != VDECFW_SCP_ONLY) ?
+ (1) : (0);
+
+ result = vdeckm_get_message(hwctx->hndl_vxd, HWCTRL_MSGID_BATCH,
+ &msg_data, &to_kernelmsg);
+ if (result != 0) {
+ pr_err("failed to get decode message\n");
+ return result;
+ }
+
+ pr_debug("[HWCTRL] send batch message\n");
+ result = vdeckm_send_message(hwctx->hndl_vxd, &to_kernelmsg,
+ vxd_dec_ctx);
+ if (result != 0)
+ return result;
+
+ vdeckm_return_msg(hwctx->hndl_vxd, &to_kernelmsg);
+
+ return 0;
+}
+
+int hwctrl_process_msg(void *hndl_hwctx, unsigned int msg_flags, unsigned int *msg,
+ struct dec_decpict **decpict)
+{
+ int result;
+ struct hwctrl_ctx *hwctx;
+ enum vxd_msg_attr msg_attr = VXD_MSG_ATTR_NONE;
+ struct dec_decpict *pdecpict = NULL;
+ unsigned int val_first = 0;
+ unsigned int val_sec = 0;
+
+ if (!hndl_hwctx || !msg || !decpict) {
+ VDEC_ASSERT(0);
+ return IMG_ERROR_INVALID_PARAMETERS;
+ }
+
+ hwctx = (struct hwctrl_ctx *)hndl_hwctx;
+
+ *decpict = NULL;
+
+ pr_debug("[HWCTRL] : process message\n");
+ result = vdeckm_process_msg(hwctx->hndl_vxd, msg, &hwctx->pend_pict_list, msg_flags,
+ &msg_attr, &pdecpict);
+
+ /* validate pointers before using them */
+ if (!pdecpict || !pdecpict->first_fld_fwmsg || !pdecpict->second_fld_fwmsg) {
+ VDEC_ASSERT(0);
+ return -EIO;
+ }
+
+ val_first = pdecpict->first_fld_fwmsg->pict_attrs.pict_attrs.deverror;
+ val_sec = pdecpict->second_fld_fwmsg->pict_attrs.pict_attrs.deverror;
+
+ if (val_first || val_sec)
+ pr_err("device signaled critical error!!!\n");
+
+ if (msg_attr == VXD_MSG_ATTR_DECODED) {
+ pdecpict->state = DECODER_PICTURE_STATE_DECODED;
+		/*
+		 * We have successfully decoded a picture, either normally or
+		 * after a replay.
+		 * Mark the HW as being in a good state.
+		 */
+ hwctx->is_fatal_state = 0;
+ } else if (msg_attr == VXD_MSG_ATTR_FATAL) {
+ struct hwctrl_state state;
+ unsigned char pipe_minus1 = 0;
+
+ memset(&state, 0, sizeof(state));
+
+ result = hwctrl_get_core_status(hwctx, &state);
+ if (result == 0) {
+ hwctx->is_prev_hw_state_set = 1;
+ memcpy(&hwctx->prev_state, &state, sizeof(struct hwctrl_state));
+
+ for (pipe_minus1 = 0; pipe_minus1 < hwctx->num_pipes;
+ pipe_minus1++) {
+ hwctrl_dump_state(&state.core_state, &state.core_state,
+ pipe_minus1);
+ }
+ }
+ }
+ *decpict = pdecpict;
+
+ return 0;
+}
+
+int hwctrl_getcore_cached_status(void *hndl_hwctx, struct hwctrl_state *state)
+{
+ struct hwctrl_ctx *hwctx = (struct hwctrl_ctx *)hndl_hwctx;
+
+ if (hwctx->is_prev_hw_state_set)
+ memcpy(state, &hwctx->prev_state, sizeof(struct hwctrl_state));
+ else
+ return IMG_ERROR_UNEXPECTED_STATE;
+
+ return 0;
+}
+
+int hwctrl_get_core_status(void *hndl_hwctx, struct hwctrl_state *state)
+{
+ struct hwctrl_ctx *hwctx = (struct hwctrl_ctx *)hndl_hwctx;
+ unsigned int result = IMG_ERROR_GENERIC_FAILURE;
+
+ if (!hwctx->is_fatal_state && state) {
+ struct vxd_states *pcorestate = NULL;
+
+ pcorestate = &state->core_state;
+
+ memset(pcorestate, 0, sizeof(*(pcorestate)));
+
+ result = vdeckm_get_core_state(hwctx->hndl_vxd, pcorestate);
+ }
+
+ return result;
+}
+
+int hwctrl_is_on_seq_replay(void *hndl_hwctx)
+{
+ struct hwctrl_ctx *hwctx = (struct hwctrl_ctx *)hndl_hwctx;
+
+ return hwctx->is_on_seq_replay;
+}
+
+int hwctrl_picture_submitbatch(void *hndl_hwctx, struct dec_decpict *decpict, void *vxd_dec_ctx)
+{
+ struct hwctrl_ctx *hwctx = (struct hwctrl_ctx *)hndl_hwctx;
+
+ if (hwctx->is_initialised) {
+ lst_add(&hwctx->pend_pict_list, decpict);
+ if (!hwctx->is_on_seq_replay)
+ return hwctrl_send_batch_message(hwctx, decpict, vxd_dec_ctx);
+ }
+
+ return 0;
+}
+
+int hwctrl_getpicpend_pictlist(void *hndl_hwctx, unsigned int transaction_id,
+ struct dec_decpict **decpict)
+{
+ struct hwctrl_ctx *hwctx = (struct hwctrl_ctx *)hndl_hwctx;
+ struct dec_decpict *dec_pic;
+
+ dec_pic = lst_first(&hwctx->pend_pict_list);
+ while (dec_pic) {
+ if (dec_pic->transaction_id == transaction_id) {
+ *decpict = dec_pic;
+ break;
+ }
+ dec_pic = lst_next(dec_pic);
+ }
+
+ if (!dec_pic)
+ return IMG_ERROR_INVALID_ID;
+
+ return 0;
+}
+
+int hwctrl_peekheadpiclist(void *hndl_hwctx, struct dec_decpict **decpict)
+{
+ struct hwctrl_ctx *hwctx = (struct hwctrl_ctx *)hndl_hwctx;
+
+ if (hwctx)
+ *decpict = lst_first(&hwctx->pend_pict_list);
+
+ if (*decpict)
+ return 0;
+
+ return IMG_ERROR_GENERIC_FAILURE;
+}
+
+int hwctrl_getdecodedpicture(void *hndl_hwctx, struct dec_decpict **decpict)
+{
+ struct hwctrl_ctx *hwctx = (struct hwctrl_ctx *)hndl_hwctx;
+
+ if (hwctx) {
+ struct dec_decpict *cur_decpict;
+ /*
+ * Ensure that this picture is in the list.
+ */
+ cur_decpict = lst_first(&hwctx->pend_pict_list);
+ while (cur_decpict) {
+ if (cur_decpict->state == DECODER_PICTURE_STATE_DECODED) {
+ *decpict = cur_decpict;
+ return 0;
+ }
+
+ cur_decpict = lst_next(cur_decpict);
+ }
+ }
+
+ return IMG_ERROR_VALUE_OUT_OF_RANGE;
+}
+
+void hwctrl_removefrom_piclist(void *hndl_hwctx, struct dec_decpict *decpict)
+{
+ struct hwctrl_ctx *hwctx = (struct hwctrl_ctx *)hndl_hwctx;
+
+ if (hwctx) {
+ struct dec_decpict *cur_decpict;
+ /*
+ * Ensure that this picture is in the list.
+ */
+ cur_decpict = lst_first(&hwctx->pend_pict_list);
+ while (cur_decpict) {
+ if (cur_decpict == decpict) {
+ lst_remove(&hwctx->pend_pict_list, decpict);
+ break;
+ }
+
+ cur_decpict = lst_next(cur_decpict);
+ }
+ }
+}
+
+int hwctrl_getregsoffset(void *hndl_hwctx, struct decoder_regsoffsets *regs_offsets)
+{
+ struct hwctrl_ctx *hwctx = (struct hwctrl_ctx *)hndl_hwctx;
+
+ return vdeckm_getregsoffsets(hwctx->hndl_vxd, regs_offsets);
+}
+
+static int pvdec_create(struct vxd_dev *vxd, struct vxd_coreprops *core_props,
+ void **hndl_vdeckm_context)
+{
+ struct vdeckm_context *corectx;
+ struct vxd_core_props hndl_core_props;
+ int result;
+
+ if (!hndl_vdeckm_context || !core_props)
+ return IMG_ERROR_INVALID_PARAMETERS;
+
+ /*
+ * Obtain core context.
+ */
+ corectx = &acore_ctx[0];
+
+ memset(corectx, 0, sizeof(*corectx));
+
+ corectx->core_num = 0;
+
+ result = vxd_pvdec_get_props(vxd->dev, vxd->reg_base, &hndl_core_props);
+ if (result != 0)
+ return result;
+
+ vxd_get_coreproperties(&hndl_core_props, &corectx->props);
+
+ memcpy(core_props, &corectx->props, sizeof(*core_props));
+
+ *hndl_vdeckm_context = corectx;
+
+ return 0;
+}
+
+int hwctrl_deinitialise(void *hndl_hwctx)
+{
+ struct hwctrl_ctx *hwctx = (struct hwctrl_ctx *)hndl_hwctx;
+
+ if (hwctx->is_initialised) {
+ kfree(hwctx);
+ hwctx = NULL;
+ }
+
+ return 0;
+}
+
+int hwctrl_initialise(void *dec_core, void *comp_int_userdata,
+ const struct vdecdd_dd_devconfig *dd_devconfig,
+ struct vxd_coreprops *core_props, void **hndl_hwctx)
+{
+ struct hwctrl_ctx *hwctx = (struct hwctrl_ctx *)*hndl_hwctx;
+ int result;
+
+ if (!hwctx) {
+ hwctx = kzalloc(sizeof(*(hwctx)), GFP_KERNEL);
+ if (!hwctx)
+ return IMG_ERROR_OUT_OF_MEMORY;
+
+ *hndl_hwctx = hwctx;
+ }
+
+ if (!hwctx->is_initialised) {
+ hwctx->hndl_vxd = ((struct dec_core_ctx *)dec_core)->dec_ctx->dev_handle;
+ result = pvdec_create(hwctx->hndl_vxd, core_props, &hwctx->hndl_vxd);
+ if (result != 0)
+ goto error;
+
+ lst_init(&hwctx->pend_pict_list);
+
+ hwctx->devconfig = *dd_devconfig;
+ hwctx->num_pipes = core_props->num_pixel_pipes;
+ hwctx->comp_init_userdata = comp_int_userdata;
+ hwctx->dec_core = dec_core;
+ hwctx->is_initialised = 1;
+ hwctx->is_on_seq_replay = 0;
+ hwctx->is_fatal_state = 0;
+ }
+
+ return 0;
+error:
+ hwctrl_deinitialise(*hndl_hwctx);
+
+ return result;
+}
+
+static int hwctrl_send_fragment_message(struct hwctrl_ctx *hwctx,
+ struct dec_pict_fragment *pict_fragment,
+ struct dec_decpict *decpict,
+ void *vxd_dec_ctx)
+{
+ int result;
+ struct hwctrl_to_kernel_msg to_kernelmsg = {0};
+ struct hwctrl_msgdata msg_data;
+ struct hwctrl_fragment_msgdata *pfragment_msgdata =
+ &msg_data.fragment_msgdata;
+
+ msg_data.km_str_id = GET_STREAM_ID(decpict->transaction_id);
+
+ pfragment_msgdata->ctrl_alloc_bytes = pict_fragment->ctrl_alloc_bytes;
+
+ pfragment_msgdata->ctrl_alloc_offset = pict_fragment->ctrl_alloc_offset;
+
+ pfragment_msgdata->batchmsg_bufinfo = decpict->batch_msginfo->ddbuf_info;
+
+ result = vdeckm_get_message(hwctx->hndl_vxd, HWCTRL_MSGID_FRAGMENT, &msg_data,
+ &to_kernelmsg);
+ if (result != 0) {
+ pr_err("Failed to get decode message\n");
+ return result;
+ }
+
+ result = vdeckm_send_message(hwctx->hndl_vxd, &to_kernelmsg, vxd_dec_ctx);
+ if (result != 0)
+ return result;
+
+ vdeckm_return_msg(hwctx->hndl_vxd, &to_kernelmsg);
+
+ return 0;
+}
+
+int hwctrl_picture_submit_fragment(void *hndl_hwctx,
+ struct dec_pict_fragment *pict_fragment,
+ struct dec_decpict *decpict,
+ void *vxd_dec_ctx)
+{
+ struct hwctrl_ctx *hwctx = (struct hwctrl_ctx *)hndl_hwctx;
+ unsigned int result = 0;
+
+ if (hwctx->is_initialised) {
+ result = hwctrl_send_fragment_message(hwctx, pict_fragment,
+ decpict, vxd_dec_ctx);
+ if (result != 0)
+			pr_err("Failed to send fragment message to firmware!\n");
+ }
+
+ return result;
+}
diff --git a/drivers/staging/media/vxd/decoder/hw_control.h b/drivers/staging/media/vxd/decoder/hw_control.h
new file mode 100644
index 000000000000..3f430969b998
--- /dev/null
+++ b/drivers/staging/media/vxd/decoder/hw_control.h
@@ -0,0 +1,144 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * VXD DEC Hardware control implementation
+ *
+ * Copyright (c) Imagination Technologies Ltd.
+ * Copyright (c) 2021 Texas Instruments Incorporated - http://www.ti.com/
+ *
+ * Authors:
+ * Amit Makani <[email protected]>
+ *
+ * Re-written for upstreaming
+ * Sidraya Jayagond <[email protected]>
+ * Prashanth Kumar Amai <[email protected]>
+ */
+
+#ifndef _HW_CONTROL_H
+#define _HW_CONTROL_H
+
+#include "bspp.h"
+#include "decoder.h"
+#include "fw_interface.h"
+#include "img_dec_common.h"
+#include "img_errors.h"
+#include "lst.h"
+#include "mem_io.h"
+#include "vdecdd_defs.h"
+#include "vdecfw_shared.h"
+#include "vid_buf.h"
+#include "vxd_ext.h"
+#include "vxd_props.h"
+
+/* Size of additional buffers needed for each HEVC picture */
+#ifdef HAS_HEVC
+
+/* Empirically defined */
+#define MEM_TO_REG_BUF_SIZE 0x2000
+
+/*
+ * Max. no. of slices found in stream db: approx. 2200,
+ * set MAX_SLICES to 2368 to get buffer size page aligned
+ */
+#define MAX_SLICES 2368
+#define SLICE_PARAMS_SIZE 64
+#define SLICE_PARAMS_BUF_SIZE (MAX_SLICES * SLICE_PARAMS_SIZE)
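+/* 2368 * 64 = 0x25000 bytes, i.e. exactly 37 whole 4K pages. */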
+
+/*
+ * Size of buffer for "above params" structure, sufficient for stream of width 8192
+ * 192 * (8192/64) == 0x6000, see "above_param_size" in TRM
+ */
+#define ABOVE_PARAMS_BUF_SIZE 0x6000
+#endif
+
+enum hwctrl_msgid {
+ HWCTRL_MSGID_BATCH = 0,
+ HWCTRL_MSGID_FRAGMENT = 1,
+ CORE_MSGID_MAX,
+ CORE_MSGID_FORCE32BITS = 0x7FFFFFFFU
+};
+
+struct hwctrl_to_kernel_msg {
+ unsigned int msg_size;
+ unsigned int km_str_id;
+ unsigned int flags;
+ unsigned char *msg_hdr;
+};
+
+struct hwctrl_batch_msgdata {
+ struct vidio_ddbufinfo *batchmsg_bufinfo;
+ struct vidio_ddbufinfo *pvdec_fwctx;
+ unsigned int ctrl_alloc_bytes;
+ unsigned int operating_mode;
+ unsigned int transaction_id;
+ unsigned int tile_cfg;
+ unsigned int genc_id;
+ unsigned int mb_load;
+ unsigned int size_delimited_mode;
+};
+
+struct hwctrl_fragment_msgdata {
+ struct vidio_ddbufinfo *batchmsg_bufinfo;
+ unsigned int ctrl_alloc_offset;
+ unsigned int ctrl_alloc_bytes;
+};
+
+struct hwctrl_msgdata {
+ unsigned int km_str_id;
+ struct hwctrl_batch_msgdata batch_msgdata;
+ struct hwctrl_fragment_msgdata fragment_msgdata;
+};
+
+/*
+ * This structure contains MSVDX Message information.
+ */
+struct hwctrl_msgstatus {
+ unsigned char control_fence_id[VDECFW_MSGID_CONTROL_TYPES];
+ unsigned char decode_fence_id[VDECFW_MSGID_DECODE_TYPES];
+ unsigned char completion_fence_id[VDECFW_MSGID_COMPLETION_TYPES];
+};
+
+/*
+ * This structure contains the HWCTRL core state.
+ */
+struct hwctrl_state {
+ struct vxd_states core_state;
+ struct hwctrl_msgstatus fwmsg_status;
+ struct hwctrl_msgstatus hostmsg_status;
+};
+
+int hwctrl_picture_submit_fragment(void *hndl_hwctx,
+ struct dec_pict_fragment *pict_fragment,
+ struct dec_decpict *decpict,
+ void *vxd_dec_ctx);
+
+int hwctrl_process_msg(void *hndl_hwctx, unsigned int msg_flags, unsigned int *msg,
+ struct dec_decpict **decpict);
+
+int hwctrl_getcore_cached_status(void *hndl_hwctx, struct hwctrl_state *state);
+
+int hwctrl_get_core_status(void *hndl_hwctx, struct hwctrl_state *state);
+
+int hwctrl_is_on_seq_replay(void *hndl_hwctx);
+
+int hwctrl_picture_submitbatch(void *hndl_hwctx, struct dec_decpict *decpict,
+ void *vxd_dec_ctx);
+
+int hwctrl_getpicpend_pictlist(void *hndl_hwctx, unsigned int transaction_id,
+ struct dec_decpict **decpict);
+
+int hwctrl_peekheadpiclist(void *hndl_hwctx, struct dec_decpict **decpict);
+
+int hwctrl_getdecodedpicture(void *hndl_hwctx, struct dec_decpict **decpict);
+
+void hwctrl_removefrom_piclist(void *hndl_hwctx, struct dec_decpict *decpict);
+
+int hwctrl_getregsoffset(void *hndl_hwctx,
+ struct decoder_regsoffsets *regs_offsets);
+
+int hwctrl_initialise(void *dec_core, void *comp_int_userdata,
+ const struct vdecdd_dd_devconfig *dd_devconfig,
+ struct vxd_coreprops *core_props, void **hndl_hwctx);
+
+int hwctrl_deinitialise(void *hndl_hwctx);
+
+#endif /* _HW_CONTROL_H */
--
2.17.1
From: Sidraya <[email protected]>
This patch contains the VDEC MMU APIs, which are used for buffer mapping and
for generating device virtual addresses. It uses the TALMMU APIs. A minimal,
illustrative call-flow sketch is included below.
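
The sketch assumes a single stream and a single internal buffer; the stream
ID (1), memory heap ID (0), memory attribute and buffer size are placeholders,
and error handling is abbreviated:

  #include "vdec_mmu_wrapper.h"

  static int example_map_one_buffer(void *vxd_dec_ctx)
  {
  	struct vidio_ddbufinfo buf_info = {0};
  	void *mmu_dev, *mmu_str;
  	int ret;

  	ret = mmu_device_create(MMU_TYPE_40BIT, DEV_MMU_PAGE_ALIGNMENT, &mmu_dev);
  	if (ret != IMG_SUCCESS)
  		return ret;

  	/* One stream context per decode stream (stream ID 1 is a placeholder). */
  	ret = mmu_stream_create(mmu_dev, 1, vxd_dec_ctx, &mmu_str);
  	if (ret != IMG_SUCCESS)
  		goto out_dev;

  	/* Allocate one page from the stream heap and map it for the device. */
  	ret = mmu_stream_alloc(mmu_str, MMU_HEAP_STREAM_BUFFERS, 0,
  			       SYS_MEMATTRIB_UNCACHED, DEV_MMU_PAGE_SIZE,
  			       DEV_MMU_PAGE_ALIGNMENT, &buf_info);
  	if (ret == IMG_SUCCESS)
  		mmu_free_mem(mmu_str, &buf_info);

  	mmu_stream_destroy(mmu_str);
  out_dev:
  	mmu_device_destroy(mmu_dev);
  	return ret;
  }

Externally allocated buffers follow the same pattern through
mmu_stream_map_ext()/mmu_stream_map_ext_sg() and mmu_free_mem()/
mmu_free_mem_sg().
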
Signed-off-by: Lakshmi Sankar <[email protected]>
Signed-off-by: Sidraya <[email protected]>
---
MAINTAINERS | 2 +
.../media/vxd/decoder/vdec_mmu_wrapper.c | 829 ++++++++++++++++++
.../media/vxd/decoder/vdec_mmu_wrapper.h | 174 ++++
3 files changed, 1005 insertions(+)
create mode 100644 drivers/staging/media/vxd/decoder/vdec_mmu_wrapper.c
create mode 100644 drivers/staging/media/vxd/decoder/vdec_mmu_wrapper.h
diff --git a/MAINTAINERS b/MAINTAINERS
index 2b0d0708d852..6c3f7a55ce9b 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -19565,6 +19565,8 @@ F: drivers/staging/media/vxd/decoder/hw_control.h
F: drivers/staging/media/vxd/decoder/img_dec_common.h
F: drivers/staging/media/vxd/decoder/translation_api.c
F: drivers/staging/media/vxd/decoder/translation_api.h
+F: drivers/staging/media/vxd/decoder/vdec_mmu_wrapper.c
+F: drivers/staging/media/vxd/decoder/vdec_mmu_wrapper.h
F: drivers/staging/media/vxd/decoder/vxd_core.c
F: drivers/staging/media/vxd/decoder/vxd_dec.c
F: drivers/staging/media/vxd/decoder/vxd_dec.h
diff --git a/drivers/staging/media/vxd/decoder/vdec_mmu_wrapper.c b/drivers/staging/media/vxd/decoder/vdec_mmu_wrapper.c
new file mode 100644
index 000000000000..384ce840b4dc
--- /dev/null
+++ b/drivers/staging/media/vxd/decoder/vdec_mmu_wrapper.c
@@ -0,0 +1,829 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * VDEC MMU Functions
+ *
+ * Copyright (c) Imagination Technologies Ltd.
+ * Copyright (c) 2021 Texas Instruments Incorporated - http://www.ti.com/
+ *
+ * Authors:
+ * Lakshmi Sankar <[email protected]>
+ *
+ * Re-written for upstream
+ * Sidraya Jayagond <[email protected]>
+ */
+
+#include "img_dec_common.h"
+#include "lst.h"
+#include "talmmu_api.h"
+#include "vdec_defs.h"
+#include "vdec_mmu_wrapper.h"
+#include "vxd_dec.h"
+
+#define GUARD_BAND 0x1000
+
+struct mmuheap {
+ unsigned char *name;
+ enum mmu_eheap_id heap_id;
+ enum talmmu_heap_type heap_type;
+ unsigned int start_offset;
+ unsigned int size;
+ unsigned char *mem_space;
+ unsigned char use_guard_band;
+ unsigned char image_buffers;
+};
+
+static const struct mmuheap mmu_heaps[MMU_HEAP_MAX] = {
+ { "Image untiled", MMU_HEAP_IMAGE_BUFFERS_UNTILED,
+ TALMMU_HEAP_PERCONTEXT, PVDEC_HEAP_UNTILED_START,
+ PVDEC_HEAP_UNTILED_SIZE, "MEMBE", 1, 1 },
+
+ { "Bitstream", MMU_HEAP_BITSTREAM_BUFFERS,
+ TALMMU_HEAP_PERCONTEXT, PVDEC_HEAP_BITSTREAM_START,
+ PVDEC_HEAP_BITSTREAM_SIZE, "MEMDMAC_02", 1, 0 },
+
+ { "Stream", MMU_HEAP_STREAM_BUFFERS,
+ TALMMU_HEAP_PERCONTEXT, PVDEC_HEAP_STREAM_START,
+ PVDEC_HEAP_STREAM_SIZE, "MEM", 1, 0 },
+};
+
+/*
+ * @Heap ID
+ * @Heap type
+ * @Heap flags
+ * @Memory space name
+ * @Start address (virtual)
+ * @Size of heap, in bytes
+ */
+static struct talmmu_heap_info heap_info = {
+ MMU_HEAP_IMAGE_BUFFERS_UNTILED,
+ TALMMU_HEAP_PERCONTEXT,
+ TALMMU_HEAPFLAGS_NONE,
+ "MEMBE",
+ 0,
+ 0,
+};
+
+/*
+ * This structure contains the device context.
+ * @brief VDECDD MMU Device Context
+ * @devmem_template_hndl: Handle for MMU template.
+ * @devmem_ctx_hndl: Handle for MMU context.
+ * @str_list: List of streams.
+ */
+struct mmu_dev_context {
+ void *devmem_template_hndl;
+ void *devmem_ctx_hndl;
+ struct lst_t str_list;
+ unsigned int ctx_id;
+ unsigned int next_ctx_id;
+};
+
+/*
+ * This structure contains the stream context.
+ * @brief VDECDD MMU Stream Context
+ * @link: List link (allows the structure to be part of a MeOS list).
+ * @devmem_ctx_hndl: Handle for MMU context.
+ * @dev_ctx: Pointer to device context.
+ * @ctx_id: MMU context Id.
+ * @km_str_id: Stream ID used in communication with the new KM interface.
+ */
+struct mmu_str_context {
+ void **link;
+ void *devmem_ctx_hndl;
+ struct mmu_dev_context *dev_ctx;
+ unsigned int ctx_id;
+ void *ptd_memspace_hndl;
+ unsigned int int_reg_num;
+ unsigned int km_str_id;
+ struct vxd_dec_ctx *vxd_dec_context;
+};
+
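+/*
+ * Translate the VDEC sys_emem_attrib flags into the attribute flags
+ * understood by the img_mem allocator.
+ */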
+static unsigned int set_attributes(enum sys_emem_attrib mem_attrib)
+{
+ unsigned int attrib = 0;
+
+ if (mem_attrib & SYS_MEMATTRIB_CACHED)
+ attrib |= MEM_ATTR_CACHED;
+
+ if (mem_attrib & SYS_MEMATTRIB_UNCACHED)
+ attrib |= MEM_ATTR_UNCACHED;
+
+ if (mem_attrib & SYS_MEMATTRIB_WRITECOMBINE)
+ attrib |= MEM_ATTR_WRITECOMBINE;
+
+ if (mem_attrib & SYS_MEMATTRIB_SECURE)
+ attrib |= MEM_ATTR_SECURE;
+
+ return attrib;
+}
+
+/*
+ * @Function mmu_devmem_context_create
+ */
+static int mmu_devmem_context_create(struct mmu_dev_context *dev_ctx, void **mmu_ctx_hndl)
+{
+ int result;
+ void *devmem_heap_hndl;
+ union talmmu_heap_options heap_opt1;
+ unsigned int i;
+ unsigned char use_guardband;
+ enum talmmu_heap_option_id heap_option_id;
+
+ dev_ctx->next_ctx_id++;
+
+ /* Create a context from the template */
+ result = talmmu_devmem_ctx_create(dev_ctx->devmem_template_hndl, dev_ctx->next_ctx_id,
+ mmu_ctx_hndl);
+ if (result != IMG_SUCCESS)
+ return result;
+
+ /* Apply options to heaps. */
+ heap_opt1.guardband_opt.guardband = GUARD_BAND;
+
+ for (i = 0; i < MMU_HEAP_MAX; i++) {
+ result = talmmu_get_heap_handle(mmu_heaps[i].heap_id, *mmu_ctx_hndl,
+ &devmem_heap_hndl);
+ if (result != IMG_SUCCESS)
+ return result;
+
+ use_guardband = mmu_heaps[i].use_guard_band;
+ heap_option_id = TALMMU_HEAP_OPT_ADD_GUARD_BAND;
+ if (use_guardband)
+ talmmu_devmem_heap_options(devmem_heap_hndl, heap_option_id, heap_opt1);
+ }
+
+ return IMG_SUCCESS;
+}
+
+/*
+ * @Function mmu_device_create
+ */
+int mmu_device_create(enum mmu_etype mmu_type_arg,
+ unsigned int ptd_alignment,
+ void **mmudev_handle)
+{
+ int result = IMG_SUCCESS;
+ enum talmmu_mmu_type talmmu_type =
+ TALMMU_MMUTYPE_4K_PAGES_32BIT_ADDR;
+ unsigned int i;
+ struct mmu_dev_context *dev_ctx;
+ struct talmmu_devmem_info dev_mem_info;
+
+ /* Set the TAL MMU type. */
+ switch (mmu_type_arg) {
+ case MMU_TYPE_32BIT:
+ talmmu_type = TALMMU_MMUTYPE_4K_PAGES_32BIT_ADDR;
+ break;
+
+ case MMU_TYPE_36BIT:
+ talmmu_type = TALMMU_MMUTYPE_4K_PAGES_36BIT_ADDR;
+ break;
+
+ case MMU_TYPE_40BIT:
+ talmmu_type = TALMMU_MMUTYPE_4K_PAGES_40BIT_ADDR;
+ break;
+
+ default:
+ return IMG_ERROR_INVALID_PARAMETERS;
+ }
+
+ /* Allocate a device context structure */
+ dev_ctx = kzalloc(sizeof(*dev_ctx), GFP_KERNEL);
+ if (!dev_ctx)
+ return IMG_ERROR_OUT_OF_MEMORY;
+
+ /* Initialise stream list. */
+ lst_init(&dev_ctx->str_list);
+
+ /* Initialise TALMMU. */
+ result = talmmu_init();
+ if (result != IMG_SUCCESS)
+ goto error_tal_init;
+
+ dev_mem_info.device_id = 0;
+ dev_mem_info.mmu_type = talmmu_type;
+ dev_mem_info.dev_flags = TALMMU_DEVFLAGS_NONE;
+ dev_mem_info.pagedir_memspace_name = "MEM";
+ dev_mem_info.pagetable_memspace_name = NULL;
+ dev_mem_info.page_size = DEV_MMU_PAGE_SIZE;
+ dev_mem_info.ptd_alignment = ptd_alignment;
+
+ result = talmmu_devmem_template_create(&dev_mem_info, &dev_ctx->devmem_template_hndl);
+ if (result != IMG_SUCCESS)
+ goto error_tal_template;
+
+ /* Add heaps to template */
+ for (i = 0; i < MMU_HEAP_MAX; i++) {
+ heap_info.heap_id = mmu_heaps[i].heap_id;
+ heap_info.heap_type = mmu_heaps[i].heap_type;
+ heap_info.memspace_name = mmu_heaps[i].name;
+ heap_info.size = mmu_heaps[i].size;
+ heap_info.basedev_virtaddr = mmu_heaps[i].start_offset;
+
+ result = talmmu_devmem_heap_add(dev_ctx->devmem_template_hndl, &heap_info);
+ if (result != IMG_SUCCESS)
+ goto error_tal_heap;
+ }
+
+ /* Create the device context. */
+ result = mmu_devmem_context_create(dev_ctx, &dev_ctx->devmem_ctx_hndl);
+ if (result != IMG_SUCCESS)
+ goto error_mmu_context;
+
+ dev_ctx->ctx_id = dev_ctx->next_ctx_id;
+
+ /* Return the device context. */
+ *mmudev_handle = dev_ctx;
+
+ return IMG_SUCCESS;
+
+ /* Roll back in case of errors. */
+error_mmu_context:
+error_tal_heap:
+ talmmu_devmem_template_destroy(dev_ctx->devmem_template_hndl);
+error_tal_template:
+ talmmu_deinit();
+error_tal_init:
+ kfree(dev_ctx);
+ return result;
+}
+
+/*
+ * @Function mmu_device_destroy
+ */
+int mmu_device_destroy(void *mmudev_handle)
+{
+ struct mmu_dev_context *dev_ctx = mmudev_handle;
+ unsigned int result;
+ struct mmu_str_context *str_ctx;
+
+ /* Validate inputs. */
+ if (!mmudev_handle)
+ return IMG_ERROR_INVALID_PARAMETERS;
+
+ /* Destroy all streams associated with the device. */
+ str_ctx = lst_first(&dev_ctx->str_list);
+ while (str_ctx) {
+ result = mmu_stream_destroy(str_ctx);
+ if (result != IMG_SUCCESS)
+ return result;
+ /* See if there are more streams. */
+ str_ctx = lst_first(&dev_ctx->str_list);
+ }
+
+ /* Destroy the device context */
+ result = talmmu_devmem_ctx_destroy(dev_ctx->devmem_ctx_hndl);
+ if (result != IMG_SUCCESS)
+ return result;
+
+ /* Destroy the template. */
+ result = talmmu_devmem_template_destroy(dev_ctx->devmem_template_hndl);
+ if (result != IMG_SUCCESS)
+ return result;
+
+ talmmu_deinit();
+
+ kfree(dev_ctx);
+ return IMG_SUCCESS;
+}
+
+/*
+ * @Function mmu_stream_create
+ * @Description
+ * This function is used to create and initialise the MMU stream context.
+ * @Input mmudev_handle : The MMU device handle.
+ * @Input km_str_id : Stream Id used in communication with KM driver.
+ * @Output mmu_str_hndl : A pointer used to return the MMU stream
+ * handle.
+ * @Return IMG_SUCCESS or an error code.
+ */
+int mmu_stream_create(void *mmudev_handle,
+ unsigned int km_str_id,
+ void *vxd_dec_ctx_arg,
+ void **mmu_str_hndl)
+{
+ struct mmu_dev_context *dev_ctx = mmudev_handle;
+ struct mmu_str_context *str_ctx;
+ int res;
+
+ /* Validate inputs. */
+ if (!mmudev_handle)
+ return IMG_ERROR_INVALID_PARAMETERS;
+
+ /* Allocate a stream context structure */
+ str_ctx = kzalloc(sizeof(*str_ctx), GFP_KERNEL);
+ if (!str_ctx)
+ return IMG_ERROR_OUT_OF_MEMORY;
+
+ str_ctx->km_str_id = km_str_id;
+ str_ctx->dev_ctx = dev_ctx;
+ str_ctx->int_reg_num = 32;
+ str_ctx->vxd_dec_context = (struct vxd_dec_ctx *)vxd_dec_ctx_arg;
+
+ /* Create a stream context. */
+ res = mmu_devmem_context_create(dev_ctx, &str_ctx->devmem_ctx_hndl);
+ if (res != IMG_SUCCESS) {
+ kfree(str_ctx);
+ return res;
+ }
+
+ str_ctx->ctx_id = dev_ctx->next_ctx_id;
+
+ /* Add stream to list. */
+ lst_add(&dev_ctx->str_list, str_ctx);
+
+ *mmu_str_hndl = str_ctx;
+
+ return IMG_SUCCESS;
+}
+
+/*
+ * @Function mmu_stream_destroy
+ * @Description
+ * This function is used to create and initialise the MMU stream context.
+ * This function is used to destroy the MMU stream context.
+ * NOTE: Destroy automatically frees any memory allocated using
+ * mmu_stream_alloc().
+ * @Return IMG_SUCCESS or an error code.
+ */
+int mmu_stream_destroy(void *mmu_str_hndl)
+{
+ struct mmu_str_context *str_ctx = mmu_str_hndl;
+ int res;
+
+ /* Validate inputs. */
+ if (!mmu_str_hndl)
+ return IMG_ERROR_INVALID_PARAMETERS;
+
+	/* Remove stream from list. */
+ lst_remove(&str_ctx->dev_ctx->str_list, str_ctx);
+
+	/* Destroy the stream's device memory context. */
+ res = talmmu_devmem_ctx_destroy(str_ctx->devmem_ctx_hndl);
+ if (res != IMG_SUCCESS)
+ return res;
+
+ kfree(str_ctx);
+
+ return IMG_SUCCESS;
+}
+
+/*
+ * @Function mmu_malloc
+ * @Function mmu_alloc
+static int mmu_alloc(void *devmem_ctx_hndl,
+ struct vxd_dec_ctx *vxd_dec_ctx_arg,
+ enum mmu_eheap_id heap_id,
+ unsigned int mem_heap_id,
+ enum sys_emem_attrib mem_attrib,
+ unsigned int size,
+ unsigned int alignment,
+ struct vidio_ddbufinfo *ddbuf_info)
+{
+ int result;
+ void *devmem_heap_hndl;
+ struct vxd_free_data free_data;
+ struct vxd_dec_ctx *ctx;
+ struct vxd_dev *vxd;
+ struct vxd_alloc_data alloc_data;
+ unsigned int flags;
+
+ if (!devmem_ctx_hndl)
+ return IMG_ERROR_INVALID_PARAMETERS;
+
+ /* Set buffer size. */
+ ddbuf_info->buf_size = size;
+
+ /* Round size up to next multiple of physical pages */
+ if ((size % HOST_MMU_PAGE_SIZE) != 0)
+ size = ((size / HOST_MMU_PAGE_SIZE) + 1) * HOST_MMU_PAGE_SIZE;
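+	/*
+	 * The open-coded round-up above is equivalent to
+	 * ALIGN(size, HOST_MMU_PAGE_SIZE).
+	 */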
+
+ /* Allocate memory */
+ ctx = vxd_dec_ctx_arg;
+ vxd = ctx->dev;
+
+ alloc_data.heap_id = mem_heap_id;
+ alloc_data.size = ddbuf_info->buf_size;
+
+ alloc_data.attributes = set_attributes(mem_attrib);
+
+ result = img_mem_alloc(vxd->dev, ctx->mem_ctx, alloc_data.heap_id, alloc_data.size,
+ (enum mem_attr)alloc_data.attributes,
+ (int *)&ddbuf_info->buff_id);
+ if (result != IMG_SUCCESS)
+ goto error_alloc;
+
+ ddbuf_info->is_internal = 1;
+
+ if (mem_attrib & SYS_MEMATTRIB_SECURE) {
+ ddbuf_info->cpu_virt = NULL;
+ } else {
+ /* Map the buffer to CPU */
+ result = img_mem_map_km(ctx->mem_ctx, ddbuf_info->buff_id);
+ if (result) {
+ dev_err(vxd->dev, "%s: failed to map buf to cpu!(%d)\n", __func__, result);
+ goto error_get_heap_handle;
+ }
+ ddbuf_info->cpu_virt = img_mem_get_kptr(ctx->mem_ctx, ddbuf_info->buff_id);
+ }
+
+ /* Get heap handle */
+ result = talmmu_get_heap_handle(heap_id, devmem_ctx_hndl, &devmem_heap_hndl);
+ if (result != IMG_SUCCESS)
+ goto error_get_heap_handle;
+
+ /* Allocate device "virtual" memory. */
+ result = talmmu_devmem_addr_alloc(devmem_ctx_hndl, devmem_heap_hndl, size, alignment,
+ &ddbuf_info->hndl_memory);
+ if (result != IMG_SUCCESS)
+ goto error_mem_map_ext_mem;
+
+ /* Get the device virtual address. */
+ result = talmmu_get_dev_virt_addr(ddbuf_info->hndl_memory, &ddbuf_info->dev_virt);
+ if (result != IMG_SUCCESS)
+ goto error_get_dev_virt_addr;
+
+ flags = VXD_MAP_FLAG_NONE;
+
+ if (mem_attrib & SYS_MEMATTRIB_CORE_READ_ONLY)
+ flags |= VXD_MAP_FLAG_READ_ONLY;
+
+ if (mem_attrib & SYS_MEMATTRIB_CORE_WRITE_ONLY)
+ flags |= VXD_MAP_FLAG_WRITE_ONLY;
+
+ result = vxd_map_buffer(vxd, ctx, ddbuf_info->kmstr_id, ddbuf_info->buff_id,
+ ddbuf_info->dev_virt,
+ flags);
+
+ if (result != IMG_SUCCESS)
+ goto error_map_dev;
+
+ return IMG_SUCCESS;
+
+error_map_dev:
+error_get_dev_virt_addr:
+ talmmu_devmem_addr_free(ddbuf_info->hndl_memory);
+ ddbuf_info->hndl_memory = NULL;
+error_mem_map_ext_mem:
+error_get_heap_handle:
+ free_data.buf_id = ddbuf_info->buff_id;
+ img_mem_free(ctx->mem_ctx, free_data.buf_id);
+error_alloc:
+ return result;
+}
+
+/*
+ * @Function mmu_stream_alloc
+ */
+int mmu_stream_alloc(void *mmu_str_hndl,
+ enum mmu_eheap_id heap_id,
+ unsigned int mem_heap_id,
+ enum sys_emem_attrib mem_attrib,
+ unsigned int size,
+ unsigned int alignment,
+ struct vidio_ddbufinfo *ddbuf_info)
+{
+ struct mmu_str_context *str_ctx =
+ (struct mmu_str_context *)mmu_str_hndl;
+ int result;
+
+ /* Validate inputs. */
+ if (!mmu_str_hndl)
+ return IMG_ERROR_INVALID_PARAMETERS;
+
+ /* Check if device level heap. */
+ switch (heap_id) {
+ case MMU_HEAP_IMAGE_BUFFERS_UNTILED:
+ case MMU_HEAP_BITSTREAM_BUFFERS:
+ case MMU_HEAP_STREAM_BUFFERS:
+ break;
+
+ default:
+ return IMG_ERROR_INVALID_PARAMETERS;
+ }
+
+ ddbuf_info->kmstr_id = str_ctx->km_str_id;
+
+ /* Allocate device memory. */
+ result = mmu_alloc(str_ctx->devmem_ctx_hndl, str_ctx->vxd_dec_context, heap_id, mem_heap_id,
+ mem_attrib, size, alignment, ddbuf_info);
+ if (result != IMG_SUCCESS)
+ return result;
+
+ return IMG_SUCCESS;
+}
+
+/*
+ * @Function mmu_stream_map_ext_sg
+ */
+int mmu_stream_map_ext_sg(void *mmu_str_hndl,
+ enum mmu_eheap_id heap_id,
+ void *sgt,
+ unsigned int size,
+ unsigned int alignment,
+ enum sys_emem_attrib mem_attrib,
+ void *cpu_linear_addr,
+ struct vidio_ddbufinfo *ddbuf_info,
+ unsigned int *buff_id)
+{
+ struct mmu_str_context *str_ctx =
+ (struct mmu_str_context *)mmu_str_hndl;
+ int result;
+ void *devmem_heap_hndl;
+ unsigned int flags;
+
+ struct vxd_dec_ctx *ctx = str_ctx->vxd_dec_context;
+ struct vxd_dev *vxd = ctx->dev;
+
+ /* Validate inputs. */
+ if (!mmu_str_hndl)
+ return IMG_ERROR_INVALID_PARAMETERS;
+
+ /* Check if device level heap. */
+ switch (heap_id) {
+ case MMU_HEAP_IMAGE_BUFFERS_UNTILED:
+ case MMU_HEAP_BITSTREAM_BUFFERS:
+ case MMU_HEAP_STREAM_BUFFERS:
+ break;
+
+ default:
+ return IMG_ERROR_INVALID_PARAMETERS;
+ }
+
+ if (!str_ctx->devmem_ctx_hndl)
+ return IMG_ERROR_INVALID_PARAMETERS;
+
+ /* Set buffer size. */
+ ddbuf_info->buf_size = size;
+
+ /* Round size up to next multiple of physical pages */
+ if ((size % HOST_MMU_PAGE_SIZE) != 0)
+ size = ((size / HOST_MMU_PAGE_SIZE) + 1) * HOST_MMU_PAGE_SIZE;
+
+ result = img_mem_import(vxd->dev, ctx->mem_ctx, ddbuf_info->buf_size,
+ (enum mem_attr)set_attributes(mem_attrib),
+ (int *)buff_id);
+ if (result != IMG_SUCCESS)
+ return result;
+
+ if (mem_attrib & SYS_MEMATTRIB_SECURE)
+ ddbuf_info->cpu_virt = NULL;
+
+ ddbuf_info->buff_id = *buff_id;
+ ddbuf_info->is_internal = 0;
+
+ ddbuf_info->kmstr_id = str_ctx->km_str_id;
+
+ /* Set buffer size. */
+ ddbuf_info->buf_size = size;
+
+ /* Ensure the address of the buffer is at least page aligned. */
+ ddbuf_info->cpu_virt = cpu_linear_addr;
+
+ /* Get heap handle */
+ result = talmmu_get_heap_handle(heap_id, str_ctx->devmem_ctx_hndl, &devmem_heap_hndl);
+ if (result != IMG_SUCCESS)
+ return result;
+
+ /* Allocate device "virtual" memory. */
+ result = talmmu_devmem_addr_alloc(str_ctx->devmem_ctx_hndl, devmem_heap_hndl, size,
+ alignment,
+ &ddbuf_info->hndl_memory);
+ if (result != IMG_SUCCESS)
+ return result;
+
+ /* Get the device virtual address. */
+ result = talmmu_get_dev_virt_addr(ddbuf_info->hndl_memory, &ddbuf_info->dev_virt);
+ if (result != IMG_SUCCESS)
+ goto error_get_dev_virt_addr;
+
+ /* Map memory to the device */
+ flags = VXD_MAP_FLAG_NONE;
+
+ if (mem_attrib & SYS_MEMATTRIB_CORE_READ_ONLY)
+ flags |= VXD_MAP_FLAG_READ_ONLY;
+
+ if (mem_attrib & SYS_MEMATTRIB_CORE_WRITE_ONLY)
+ flags |= VXD_MAP_FLAG_WRITE_ONLY;
+
+ result = vxd_map_buffer_sg(vxd, ctx, ddbuf_info->kmstr_id, ddbuf_info->buff_id, sgt,
+ ddbuf_info->dev_virt,
+ flags);
+
+ if (result != IMG_SUCCESS)
+ goto error_map_dev;
+
+ return IMG_SUCCESS;
+
+error_map_dev:
+error_get_dev_virt_addr:
+ talmmu_devmem_addr_free(ddbuf_info->hndl_memory);
+ ddbuf_info->hndl_memory = NULL;
+ return result;
+}
+
+/*
+ * @Function mmu_stream_map_ext
+ */
+int mmu_stream_map_ext(void *mmu_str_hndl,
+ enum mmu_eheap_id heap_id,
+ unsigned int buff_id,
+ unsigned int size,
+ unsigned int alignment,
+ enum sys_emem_attrib mem_attrib,
+ void *cpu_linear_addr,
+ struct vidio_ddbufinfo *ddbuf_info)
+{
+ struct mmu_str_context *str_ctx =
+ (struct mmu_str_context *)mmu_str_hndl;
+ int result;
+ void *devmem_heap_hndl;
+ struct vxd_dec_ctx *ctx;
+ struct vxd_dev *vxd;
+ unsigned int flags;
+
+ /* Validate inputs. */
+ if (!mmu_str_hndl)
+ return IMG_ERROR_INVALID_PARAMETERS;
+
+ /* Check if device level heap. */
+ switch (heap_id) {
+ case MMU_HEAP_IMAGE_BUFFERS_UNTILED:
+ case MMU_HEAP_BITSTREAM_BUFFERS:
+ case MMU_HEAP_STREAM_BUFFERS:
+ break;
+
+ default:
+ return IMG_ERROR_INVALID_PARAMETERS;
+ }
+
+ /* Round size up to next multiple of physical pages */
+ if ((size % HOST_MMU_PAGE_SIZE) != 0)
+ size = ((size / HOST_MMU_PAGE_SIZE) + 1) * HOST_MMU_PAGE_SIZE;
+
+ ddbuf_info->buff_id = buff_id;
+ ddbuf_info->is_internal = 0;
+
+ ddbuf_info->kmstr_id = str_ctx->km_str_id;
+
+ /* Set buffer size. */
+ ddbuf_info->buf_size = size;
+
+ /* Ensure the address of the buffer is at least page aligned. */
+ ddbuf_info->cpu_virt = cpu_linear_addr;
+
+ /* Get heap handle */
+ result = talmmu_get_heap_handle(heap_id, str_ctx->devmem_ctx_hndl, &devmem_heap_hndl);
+ if (result != IMG_SUCCESS)
+ return result;
+
+ /* Allocate device "virtual" memory. */
+ result = talmmu_devmem_addr_alloc(str_ctx->devmem_ctx_hndl, devmem_heap_hndl, size,
+ alignment,
+ &ddbuf_info->hndl_memory);
+ if (result != IMG_SUCCESS)
+ return result;
+
+ /* Get the device virtual address. */
+ result = talmmu_get_dev_virt_addr(ddbuf_info->hndl_memory, &ddbuf_info->dev_virt);
+ if (result != IMG_SUCCESS)
+ return result;
+
+ /*
+ * Map device memory (allocated from outside VDEC)
+ * into the stream PTD.
+ */
+ ctx = str_ctx->vxd_dec_context;
+ vxd = ctx->dev;
+
+ flags = VXD_MAP_FLAG_NONE;
+
+ if (mem_attrib & SYS_MEMATTRIB_CORE_READ_ONLY)
+ flags |= VXD_MAP_FLAG_READ_ONLY;
+
+ if (mem_attrib & SYS_MEMATTRIB_CORE_WRITE_ONLY)
+ flags |= VXD_MAP_FLAG_WRITE_ONLY;
+
+ result = vxd_map_buffer(vxd, ctx, ddbuf_info->kmstr_id, ddbuf_info->buff_id,
+ ddbuf_info->dev_virt,
+ flags);
+ if (result != IMG_SUCCESS)
+ return result;
+
+ return IMG_SUCCESS;
+}
+
+/*
+ * @Function mmu_free_mem
+ */
+int mmu_free_mem(void *mmustr_hndl, struct vidio_ddbufinfo *ddbuf_info)
+{
+ int tmp_result;
+ int result = IMG_SUCCESS;
+ struct vxd_dec_ctx *ctx;
+ struct vxd_dev *vxd;
+
+ struct mmu_str_context *str_ctx =
+ (struct mmu_str_context *)mmustr_hndl;
+
+ /* Validate inputs. */
+ if (!ddbuf_info)
+ return IMG_ERROR_INVALID_PARAMETERS;
+
+ if (!str_ctx)
+ return IMG_ERROR_INVALID_PARAMETERS;
+
+ /* Unmap the memory mapped to the device */
+ ctx = str_ctx->vxd_dec_context;
+ vxd = ctx->dev;
+
+ tmp_result = vxd_unmap_buffer(vxd, ctx, ddbuf_info->kmstr_id, ddbuf_info->buff_id);
+ if (tmp_result != IMG_SUCCESS)
+ result = tmp_result;
+
+ /*
+ * Unmapping the memory mapped to the device - done
+ * Free the memory.
+ */
+ tmp_result = talmmu_devmem_addr_free(ddbuf_info->hndl_memory);
+ if (tmp_result != IMG_SUCCESS)
+ result = tmp_result;
+
+ if (ddbuf_info->is_internal) {
+ struct vxd_free_data free_data = { ddbuf_info->buff_id };
+
+ img_mem_free(ctx->mem_ctx, free_data.buf_id);
+ }
+
+ return result;
+}
+
+/*
+ * @Function mmu_free_mem_sg
+ */
+int mmu_free_mem_sg(void *mmustr_hndl, struct vidio_ddbufinfo *ddbuf_info)
+{
+ int tmp_result;
+ int result = IMG_SUCCESS;
+ struct vxd_dec_ctx *ctx;
+ struct vxd_dev *vxd;
+ struct vxd_free_data free_data;
+
+ struct mmu_str_context *str_ctx =
+ (struct mmu_str_context *)mmustr_hndl;
+
+ /* Validate inputs. */
+ if (!ddbuf_info)
+ return IMG_ERROR_INVALID_PARAMETERS;
+
+ if (!str_ctx)
+ return IMG_ERROR_INVALID_PARAMETERS;
+
+ free_data.buf_id = ddbuf_info->buff_id;
+ /* Unmap the memory mapped to the device */
+ ctx = str_ctx->vxd_dec_context;
+ vxd = ctx->dev;
+
+ tmp_result = vxd_unmap_buffer(vxd, ctx, ddbuf_info->kmstr_id, ddbuf_info->buff_id);
+ if (tmp_result != IMG_SUCCESS)
+ result = tmp_result;
+
+ /*
+ * Unmapping the memory mapped to the device - done
+ * Free the memory.
+ */
+ tmp_result = talmmu_devmem_addr_free(ddbuf_info->hndl_memory);
+ if (tmp_result != IMG_SUCCESS)
+ result = tmp_result;
+
+ /*
+	 * For external memory manager buffers, just clean up the IDR list and
+	 * the buffer objects.
+ */
+ img_mem_free_bufid(ctx->mem_ctx, free_data.buf_id);
+
+ return result;
+}
+
+/*
+ * @Function mmu_get_heap
+ */
+int mmu_get_heap(unsigned int image_stride, enum mmu_eheap_id *heap_id)
+{
+ unsigned int i;
+ unsigned char found = FALSE;
+
+ for (i = 0; i < MMU_HEAP_MAX; i++) {
+ if (mmu_heaps[i].image_buffers) {
+ *heap_id = mmu_heaps[i].heap_id;
+ found = TRUE;
+ break;
+ }
+ }
+
+ VDEC_ASSERT(found);
+ if (!found)
+ return IMG_ERROR_COULD_NOT_OBTAIN_RESOURCE;
+
+ return IMG_SUCCESS;
+}
diff --git a/drivers/staging/media/vxd/decoder/vdec_mmu_wrapper.h b/drivers/staging/media/vxd/decoder/vdec_mmu_wrapper.h
new file mode 100644
index 000000000000..50bed98240a6
--- /dev/null
+++ b/drivers/staging/media/vxd/decoder/vdec_mmu_wrapper.h
@@ -0,0 +1,174 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * VDEC MMU Functions
+ *
+ * Copyright (c) Imagination Technologies Ltd.
+ * Copyright (c) 2021 Texas Instruments Incorporated - http://www.ti.com/
+ *
+ * Authors:
+ * Lakshmi Sankar <[email protected]>
+ *
+ * Re-written for upstream
+ * Sidraya Jayagond <[email protected]>
+ */
+
+#include <linux/dma-mapping.h>
+#include <media/v4l2-ctrls.h>
+#include <media/v4l2-device.h>
+#include <media/v4l2-mem2mem.h>
+
+#include "img_errors.h"
+#include "img_mem.h"
+#include "lst.h"
+#include "mmu_defs.h"
+#include "vid_buf.h"
+
+#ifndef _VXD_MMU_H_
+#define _VXD_MMU_H_
+
+/* Page size of the device MMU */
+#define DEV_MMU_PAGE_SIZE (0x1000)
+/* Page alignment of the device MMU */
+#define DEV_MMU_PAGE_ALIGNMENT (0x1000)
+
+#define HOST_MMU_PAGE_SIZE PAGE_SIZE
+
+/*
+ * @Function mmu_stream_get_ptd_handle
+ * @Description
+ * This function is used to obtain the stream PTD (Page Table Directory)handle
+ * @Input mmu_str_handle : MMU stream handle.
+ * @Output str_ptd : Pointer to stream PTD handle.
+ * @Return IMG_SUCCESS or an error code.
+ */
+int mmu_stream_get_ptd_handle(void *mmu_str_handle, void **str_ptd);
+
+/*
+ * @Function mmu_device_create
+ * @Description
+ * This function is used to create and initialise the MMU device context.
+ * @Input mmu_type : MMU type.
+ * @Input ptd_alignment : Alignment of Page Table directory.
+ * @Output mmudev_hndl : A pointer used to return the
+ * MMU device handle.
+ * @Return IMG_SUCCESS or an error code.
+ */
+int mmu_device_create(enum mmu_etype mmu_type,
+ unsigned int ptd_alignment,
+ void **mmudev_hndl);
+
+/*
+ * @Function mmu_device_destroy
+ * @Description
+ * This function is used to destroy the MMU device context.
+ * NOTE: Destroying the device automatically destroys any streams and frees
+ * any memory allocated using mmu_stream_alloc().
+ * @Input mmudev_hndl : The MMU device handle.
+ * @Return IMG_SUCCESS or an error code.
+ */
+int mmu_device_destroy(void *mmudev_hndl);
+
+/*
+ * @Function mmu_stream_create
+ * @Description
+ * This function is used to create and initialise the MMU stream context.
+ * @Input mmudev_hndl : The MMU device handle.
+ * @Input km_str_id : Stream Id used in communication with KM driver.
+ * @Output mmustr_hndl : A pointer used to return the MMU stream handle.
+ * @Return IMG_SUCCESS or an error code.
+ */
+int mmu_stream_create(void *mmudev_hndl, unsigned int km_str_id, void *vxd_dec_ctx,
+ void **mmustr_hndl);
+
+/**
+ * mmu_stream_destroy - This function is used to destroy the MMU stream context.
+ * @mmustr_hndl : The MMU stream handle.
+ * Return IMG_SUCCESS or an error code.
+ *
+ * NOTE: Destroy automatically frees any memory allocated using
+ * mmu_stream_alloc().
+ */
+int mmu_stream_destroy(void *mmustr_hndl);
+
+/*
+ * @Function mmu_stream_alloc
+ * @Description
+ * This function is used to allocate stream memory.
+ * @Input mmustr_hndl : The MMU stream handle.
+ * @Input heap_id : The MMU heap Id.
+ * @Input mem_heap_id : Memory heap id
+ * @Input mem_attrib : Memory attributes
+ * @Input size : The size, in bytes, to be allocated.
+ * @Input alignment : The required byte alignment
+ * (1, 2, 4, 8, 16 etc).
+ * @Output ddbuf_info : A pointer to a #vidio_ddbufinfo structure
+ * used to return the buffer info.
+ * @Return IMG_SUCCESS or an error code.
+ */
+int mmu_stream_alloc(void *mmustr_hndl,
+ enum mmu_eheap_id heap_id,
+ unsigned int mem_heap_id,
+ enum sys_emem_attrib mem_attrib,
+ unsigned int size,
+ unsigned int alignment,
+ struct vidio_ddbufinfo *ddbuf_info);
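+
+/*
+ * Example (illustrative only): a typical stream buffer allocation, assuming
+ * an MMU stream handle and a memory heap id obtained elsewhere:
+ *
+ *	struct vidio_ddbufinfo buf_info;
+ *	int ret;
+ *
+ *	ret = mmu_stream_alloc(mmustr_hndl, MMU_HEAP_STREAM_BUFFERS,
+ *			       mem_heap_id,
+ *			       (enum sys_emem_attrib)(SYS_MEMATTRIB_UNCACHED |
+ *						      SYS_MEMATTRIB_WRITECOMBINE),
+ *			       4096, DEV_MMU_PAGE_ALIGNMENT, &buf_info);
+ *	if (ret != IMG_SUCCESS)
+ *		return ret;
+ */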
+
+/*
+ * @Function mmu_stream_map_ext
+ * @Description
+ * This function is used to allocate device (virtual) memory and map it to
+ * memory that has already been allocated externally.
+ * NOTE: Memory can be freed using mmu_free_mem(). However, this does not
+ * free the memory provided by the caller via cpu_linear_addr.
+ * @Input mmustr_hndl : The MMU stream handle.
+ * @Input heap_id : The heap Id.
+ * @Input buff_id : The buffer Id.
+ * @Input size : The size, in bytes, to be allocated.
+ * @Input alignment : The required byte alignment (1, 2, 4, 8, 16 etc).
+ * @Input mem_attrib : Memory attributes
+ * @Input cpu_linear_addr : CPU linear address of the memory
+ * to be allocated for the device.
+ * @Output ddbuf_info : A pointer to a #vidio_ddbufinfo structure
+ * used to return the buffer info.
+ * @Return IMG_SUCCESS or an error code.
+ */
+int mmu_stream_map_ext(void *mmustr_hndl,
+ enum mmu_eheap_id heap_id,
+ unsigned int buff_id,
+ unsigned int size,
+ unsigned int alignment,
+ enum sys_emem_attrib mem_attrib,
+ void *cpu_linear_addr,
+ struct vidio_ddbufinfo *ddbuf_info);
+
+/*
+ * @Function mmu_stream_map_ext_sg
+ * @Description
+ * This function is used to map memory that has already been allocated
+ * externally, described by a scatter-gather table, into device (virtual)
+ * memory.
+ * @Input mmustr_hndl : The MMU stream handle.
+ * @Input heap_id : The heap Id.
+ * @Input sgt : The scatter-gather table describing the memory.
+ * @Input size : The size, in bytes, to be mapped.
+ * @Input alignment : The required byte alignment (1, 2, 4, 8, 16 etc).
+ * @Input mem_attrib : Memory attributes
+ * @Input cpu_linear_addr : CPU linear address of the memory.
+ * @Output ddbuf_info : A pointer to a #vidio_ddbufinfo structure
+ * used to return the buffer info.
+ * @Output buff_id : A pointer used to return the buffer Id.
+ * @Return IMG_SUCCESS or an error code.
+ */
+int mmu_stream_map_ext_sg(void *mmustr_hndl,
+ enum mmu_eheap_id heap_id,
+ void *sgt,
+ unsigned int size,
+ unsigned int alignment,
+ enum sys_emem_attrib mem_attrib,
+ void *cpu_linear_addr,
+ struct vidio_ddbufinfo *ddbuf_info,
+ unsigned int *buff_id);
+
+/*
+ * @Function mmu_free_mem
+ * @Description
+ * This function is used to free device memory.
+ * @Input mmustr_hndl : The MMU stream handle.
+ * @Input ddbuf_info : A pointer to a #vidio_ddbufinfo structure.
+ * @Return IMG_SUCCESS or an error code.
+ */
+int mmu_free_mem(void *mmustr_hndl, struct vidio_ddbufinfo *ddbuf_info);
+
+/*
+ * @Function mmu_free_mem_sg
+ * @Description
+ * This function is used to free device memory mapped via
+ * mmu_stream_map_ext_sg().
+ * @Input mmustr_hndl : The MMU stream handle.
+ * @Input ddbuf_info : A pointer to a #vidio_ddbufinfo structure.
+ * @Return IMG_SUCCESS or an error code.
+ */
+int mmu_free_mem_sg(void *mmustr_hndl, struct vidio_ddbufinfo *ddbuf_info);
+
+/*
+ * @Function mmu_get_heap
+ * @Description
+ * This function is used to select the MMU heap to be used for image buffers.
+ * @Input image_stride : The image stride.
+ * @Output heap_id : A pointer used to return the MMU heap Id.
+ * @Return IMG_SUCCESS or an error code.
+ */
+int mmu_get_heap(unsigned int image_stride, enum mmu_eheap_id *heap_id);
+
+#endif /* _VXD_MMU_H_ */
--
2.17.1
From: Sidraya <[email protected]>
The decoder resource component is responsible for
allocating all the internal auxiliary resources
required by the decoder and for maintaining their
life cycle.
Signed-off-by: Sunita Nadampalli <[email protected]>
Signed-off-by: Sidraya <[email protected]>
---
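A minimal usage sketch of the life cycle (illustrative only, not part
of the diff below), assuming the MMU handle, core properties and decode
picture come from the core/MMU layers and using arbitrary values for
the slot count and memory heap id:

	static int example_res_lifecycle(void *mmu_handle,
					 struct vxd_coreprops *core_props,
					 struct dec_decpict *dec_pict)
	{
		void *res_ctx;
		int ret;

		/* Allocate all auxiliary buffers and pools up front. */
		ret = dec_res_create(mmu_handle, core_props, 2, 0, &res_ctx);
		if (ret != IMG_SUCCESS)
			return ret;

		/* Attach per-picture resources before submitting to HW. */
		ret = dec_res_picture_attach(&res_ctx, VDEC_STD_H264, dec_pict);
		if (ret == IMG_SUCCESS)
			/* ... decode, then return the resources ... */
			ret = dec_res_picture_detach(&res_ctx, dec_pict);

		/* Tear everything down when the stream is destroyed. */
		dec_res_destroy(mmu_handle, res_ctx);
		return ret;
	}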
MAINTAINERS | 2 +
.../staging/media/vxd/decoder/dec_resources.c | 554 ++++++++++++++++++
.../staging/media/vxd/decoder/dec_resources.h | 46 ++
3 files changed, 602 insertions(+)
create mode 100644 drivers/staging/media/vxd/decoder/dec_resources.c
create mode 100644 drivers/staging/media/vxd/decoder/dec_resources.h
diff --git a/MAINTAINERS b/MAINTAINERS
index c7edc60f4d5b..6dadec058ab3 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -19569,6 +19569,8 @@ F: drivers/staging/media/vxd/common/work_queue.h
F: drivers/staging/media/vxd/decoder/bspp.c
F: drivers/staging/media/vxd/decoder/bspp.h
F: drivers/staging/media/vxd/decoder/bspp_int.h
+F: drivers/staging/media/vxd/decoder/dec_resources.c
+F: drivers/staging/media/vxd/decoder/dec_resources.h
F: drivers/staging/media/vxd/decoder/fw_interface.h
F: drivers/staging/media/vxd/decoder/h264_idx.h
F: drivers/staging/media/vxd/decoder/h264_secure_parser.c
diff --git a/drivers/staging/media/vxd/decoder/dec_resources.c b/drivers/staging/media/vxd/decoder/dec_resources.c
new file mode 100644
index 000000000000..e993a45eb540
--- /dev/null
+++ b/drivers/staging/media/vxd/decoder/dec_resources.c
@@ -0,0 +1,554 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * VXD Decoder resource allocation and tracking function implementations
+ *
+ * Copyright (c) Imagination Technologies Ltd.
+ * Copyright (c) 2021 Texas Instruments Incorporated - http://www.ti.com/
+ *
+ * Authors:
+ * Sunita Nadampalli <[email protected]>
+ *
+ * Re-written for upstream
+ * Sidraya Jayagond <[email protected]>
+ * Prashanth Kumar Amai <[email protected]>
+ */
+
+#include <linux/dma-mapping.h>
+#include <media/v4l2-ctrls.h>
+#include <media/v4l2-device.h>
+#include <media/v4l2-mem2mem.h>
+
+#include "decoder.h"
+#include "dec_resources.h"
+#include "hw_control.h"
+#include "h264fw_data.h"
+#include "h264_idx.h"
+#include "h264_vlc.h"
+#include "img_mem.h"
+#include "pool_api.h"
+#include "vdecdd_utils.h"
+#include "vdec_mmu_wrapper.h"
+#include "vid_buf.h"
+#include "vxd_mmu_defs.h"
+
+#define DECODER_END_BYTES_SIZE 40
+
+#define BATCH_MSG_BUFFER_SIZE (8 * 4096)
+#define INTRA_BUF_SIZE (1024 * 32)
+#define AUX_LINE_BUFFER_SIZE (512 * 1024)
+
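+/*
+ * Each unpacked VLC table entry is three 16-bit values (opcode, width,
+ * symbol) which are packed into a single 16-bit word for the hardware:
+ * opcode in bits 14:12, width in bits 11:9 and symbol in bits 8:0.
+ * For example (illustrative values only), the triplet {2, 3, 0x1F}
+ * packs to (2 << 12) | (3 << 9) | 0x1F = 0x261F.
+ */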
+static void decres_pack_vlc_tables(unsigned short *packed,
+ unsigned short *unpacked,
+ unsigned short size)
+{
+ unsigned short i, j;
+
+ for (i = 0; i < size; i++) {
+ j = i * 3;
+ /*
+ * opcode 14:12
+ * width 11:9
+ * symbol 8:0
+ */
+ packed[i] = 0 | ((unpacked[j]) << 12) |
+ ((unpacked[j + 1]) << 9) | (unpacked[j + 2]);
+ }
+}
+
+struct dec_vlctable {
+ void *data;
+ unsigned int num_entries;
+ void *index_table;
+ unsigned int num_tables;
+};
+
+/*
+ * Union of the firmware parser header structures. Dec_resources uses the
+ * size of the largest member to allocate the header buffer.
+ */
+union decres_fw_hdrs {
+ struct h264fw_header_data h264_header;
+};
+
+/*
+ * This array contains the size of each resource allocation.
+ * @brief Resource Allocation Sizes
+ * NOTE: This should be kept in step with #dec_res_type.
+ */
+static const unsigned int res_size[DECODER_RESTYPE_MAX] = {
+ sizeof(struct vdecfw_transaction),
+ sizeof(union decres_fw_hdrs),
+ BATCH_MSG_BUFFER_SIZE,
+#ifdef HAS_HEVC
+ MEM_TO_REG_BUF_SIZE + SLICE_PARAMS_BUF_SIZE + ABOVE_PARAMS_BUF_SIZE,
+#endif
+};
+
+static const unsigned char start_code[] = {
+ 0x00, 0x00, 0x01, 0x00,
+};
+
+static void decres_get_vlc_data(struct dec_vlctable *vlc_table,
+ enum vdec_vid_std vid_std)
+{
+ switch (vid_std) {
+ case VDEC_STD_H264:
+ vlc_table->data = h264_vlc_table_data;
+ vlc_table->num_entries = h264_vlc_table_size;
+ vlc_table->index_table = h264_vlc_index_data;
+ vlc_table->num_tables = h264_vlc_index_size;
+ break;
+
+ default:
+ memset(vlc_table, 0x0, sizeof(*vlc_table));
+ break;
+ }
+}
+
+static void decres_fnbuf_info_destructor(void *param, void *cb_handle)
+{
+ struct vidio_ddbufinfo *dd_bufinfo = (struct vidio_ddbufinfo *)param;
+ int ret;
+ void *mmu_handle = cb_handle;
+
+ VDEC_ASSERT(dd_bufinfo);
+
+ ret = mmu_free_mem(mmu_handle, dd_bufinfo);
+ VDEC_ASSERT(ret == IMG_SUCCESS);
+
+ kfree(dd_bufinfo);
+ dd_bufinfo = NULL;
+}
+
+int dec_res_picture_detach(void **res_ctx, struct dec_decpict *dec_pict)
+{
+ struct dec_res_ctx *local_res_ctx;
+
+ VDEC_ASSERT(res_ctx);
+ VDEC_ASSERT(res_ctx && *res_ctx);
+ VDEC_ASSERT(dec_pict);
+ VDEC_ASSERT(dec_pict && dec_pict->transaction_info);
+
+ if (!res_ctx || !(*res_ctx) || !dec_pict ||
+ !dec_pict->transaction_info) {
+ pr_err("Invalid parameters\n");
+ return IMG_ERROR_INVALID_PARAMETERS;
+ }
+
+ local_res_ctx = (struct dec_res_ctx *)*res_ctx;
+
+ /* return transaction buffer */
+ lst_add(&local_res_ctx->pool_data_list[DECODER_RESTYPE_TRANSACTION],
+ dec_pict->transaction_info);
+ pool_resfree(dec_pict->transaction_info->res);
+
+ /* return picture header information buffer */
+ lst_add(&local_res_ctx->pool_data_list[DECODER_RESTYPE_HDR],
+ dec_pict->hdr_info);
+ pool_resfree(dec_pict->hdr_info->res);
+
+ /* return batch message buffer */
+ lst_add(&local_res_ctx->pool_data_list[DECODER_RESTYPE_BATCH_MSG],
+ dec_pict->batch_msginfo);
+ pool_resfree(dec_pict->batch_msginfo->res);
+
+#ifdef HAS_HEVC
+ if (dec_pict->pvdec_info) {
+ lst_add(&local_res_ctx->pool_data_list[DECODER_RESTYPE_PVDEC_BUF],
+ dec_pict->pvdec_info);
+ pool_resfree(dec_pict->pvdec_info->res);
+ }
+#endif
+
+ return IMG_SUCCESS;
+}
+
+static int decres_get_resource(struct dec_res_ctx *res_ctx,
+ enum dec_res_type res_type,
+ struct res_resinfo **res_info,
+ unsigned char fill_zeros)
+{
+ struct res_resinfo *local_res_info = NULL;
+ unsigned int ret = IMG_SUCCESS;
+
+ VDEC_ASSERT(res_ctx);
+ VDEC_ASSERT(res_info);
+
+ local_res_info = lst_removehead(&res_ctx->pool_data_list[res_type]);
+ VDEC_ASSERT(local_res_info);
+ if (local_res_info) {
+ VDEC_ASSERT(local_res_info->ddbuf_info);
+ if (local_res_info->ddbuf_info) {
+ ret = pool_resalloc(res_ctx->res_pool[res_type], local_res_info->res);
+ VDEC_ASSERT(ret == IMG_SUCCESS);
+ if (ret != IMG_SUCCESS) {
+ ret = IMG_ERROR_COULD_NOT_OBTAIN_RESOURCE;
+ return ret;
+ }
+
+ if (fill_zeros)
+ memset(local_res_info->ddbuf_info->cpu_virt, 0,
+ local_res_info->ddbuf_info->buf_size);
+
+ *res_info = local_res_info;
+ } else {
+ ret = IMG_ERROR_FATAL;
+ return ret;
+ }
+ } else {
+ ret = IMG_ERROR_COULD_NOT_OBTAIN_RESOURCE;
+ return ret;
+ }
+
+ return ret;
+}
+
+int dec_res_picture_attach(void **res_ctx, enum vdec_vid_std vid_std,
+ struct dec_decpict *dec_pict)
+{
+ struct dec_res_ctx *local_res_ctx;
+ int ret;
+
+ VDEC_ASSERT(res_ctx);
+ VDEC_ASSERT(res_ctx && *res_ctx);
+ VDEC_ASSERT(dec_pict);
+ if (!res_ctx || !(*res_ctx) || !dec_pict) {
+		pr_err("Invalid parameters\n");
+ return IMG_ERROR_INVALID_PARAMETERS;
+ }
+
+ local_res_ctx = (struct dec_res_ctx *)*res_ctx;
+
+ /* Obtain transaction buffer. */
+ ret = decres_get_resource(local_res_ctx, DECODER_RESTYPE_TRANSACTION,
+ &dec_pict->transaction_info, TRUE);
+
+ VDEC_ASSERT(ret == IMG_SUCCESS);
+ if (ret != IMG_SUCCESS)
+ return ret;
+
+ /* Obtain picture header information buffer */
+ ret = decres_get_resource(local_res_ctx, DECODER_RESTYPE_HDR,
+ &dec_pict->hdr_info, TRUE);
+ VDEC_ASSERT(ret == IMG_SUCCESS);
+ if (ret != IMG_SUCCESS)
+ return ret;
+
+#ifdef HAS_HEVC
+ /* Obtain HEVC buffer */
+ if (vid_std == VDEC_STD_HEVC) {
+ ret = decres_get_resource(local_res_ctx, DECODER_RESTYPE_PVDEC_BUF,
+ &dec_pict->pvdec_info, TRUE);
+
+ VDEC_ASSERT(ret == IMG_SUCCESS);
+ if (ret != IMG_SUCCESS)
+ return ret;
+ }
+#endif
+ /* Obtain picture batch message buffer */
+ ret = decres_get_resource(local_res_ctx, DECODER_RESTYPE_BATCH_MSG,
+ &dec_pict->batch_msginfo, TRUE);
+ VDEC_ASSERT(ret == IMG_SUCCESS);
+ if (ret != IMG_SUCCESS)
+ return ret;
+
+ dec_pict->intra_bufinfo = &local_res_ctx->intra_bufinfo;
+ dec_pict->auxline_bufinfo = &local_res_ctx->auxline_bufinfo;
+ dec_pict->vlc_tables_bufinfo =
+ &local_res_ctx->vlc_tables_bufinfo[vid_std];
+ dec_pict->vlc_idx_tables_bufinfo =
+ &local_res_ctx->vlc_idxtables_bufinfo[vid_std];
+ dec_pict->start_code_bufinfo = &local_res_ctx->start_code_bufinfo;
+
+ return IMG_SUCCESS;
+}
+
+int dec_res_create(void *mmu_handle, struct vxd_coreprops *core_props,
+ unsigned int num_dec_slots,
+ unsigned int mem_heap_id, void **resources)
+{
+ struct dec_res_ctx *local_res_ctx;
+ int ret;
+ unsigned int i = 0;
+ struct dec_vlctable vlc_table;
+ enum sys_emem_attrib mem_attrib;
+
+ VDEC_ASSERT(core_props);
+ VDEC_ASSERT(resources);
+ if (!core_props || !resources) {
+		pr_err("Invalid parameters\n");
+ return IMG_ERROR_INVALID_PARAMETERS;
+ }
+
+ mem_attrib = (enum sys_emem_attrib)(SYS_MEMATTRIB_UNCACHED | SYS_MEMATTRIB_WRITECOMBINE);
+ mem_attrib |= (enum sys_emem_attrib)SYS_MEMATTRIB_INTERNAL;
+
+ local_res_ctx = kzalloc(sizeof(*local_res_ctx), GFP_KERNEL);
+ VDEC_ASSERT(local_res_ctx);
+ if (!local_res_ctx)
+ return IMG_ERROR_OUT_OF_MEMORY;
+
+ /* Allocate Intra buffer. */
+#ifdef DEBUG_DECODER_DRIVER
+ pr_info("%s:%d call MMU_StreamMalloc", __func__, __LINE__);
+#endif
+
+ ret = mmu_stream_alloc(mmu_handle, MMU_HEAP_STREAM_BUFFERS, mem_heap_id,
+ mem_attrib,
+ core_props->num_pixel_pipes *
+ INTRA_BUF_SIZE * 3,
+ DEV_MMU_PAGE_ALIGNMENT,
+ &local_res_ctx->intra_bufinfo);
+ VDEC_ASSERT(ret == IMG_SUCCESS);
+ if (ret != IMG_SUCCESS)
+ goto error;
+
+ /* Allocate aux line buffer. */
+#ifdef DEBUG_DECODER_DRIVER
+ pr_info("%s:%d call MMU_StreamMalloc", __func__, __LINE__);
+#endif
+ ret = mmu_stream_alloc(mmu_handle, MMU_HEAP_STREAM_BUFFERS, mem_heap_id,
+ mem_attrib,
+ AUX_LINE_BUFFER_SIZE * 3 *
+ core_props->num_pixel_pipes,
+ DEV_MMU_PAGE_ALIGNMENT,
+ &local_res_ctx->auxline_bufinfo);
+ VDEC_ASSERT(ret == IMG_SUCCESS);
+ if (ret != IMG_SUCCESS)
+ goto error;
+
+ /* Allocate standard-specific buffers. */
+ for (i = VDEC_STD_UNDEFINED + 1; i < VDEC_STD_MAX; i++) {
+ decres_get_vlc_data(&vlc_table, (enum vdec_vid_std)i);
+
+ if (vlc_table.num_tables > 0) {
+ /*
+ * Size of VLC IDX table in bytes. Has to be aligned
+ * to 4, so transfer to MTX succeeds.
+ * (VLC IDX is copied to local RAM of MTX)
+ */
+ unsigned int vlc_idxtable_sz =
+ ALIGN((sizeof(unsigned short) * vlc_table.num_tables * 3), 4);
+
+#ifdef DEBUG_DECODER_DRIVER
+ pr_info(" %s:%d calling MMU_StreamMalloc", __func__, __LINE__);
+#endif
+
+ ret = mmu_stream_alloc(mmu_handle,
+ MMU_HEAP_STREAM_BUFFERS,
+ mem_heap_id, (enum sys_emem_attrib)(mem_attrib |
+ SYS_MEMATTRIB_CORE_READ_ONLY |
+ SYS_MEMATTRIB_CPU_WRITE),
+ sizeof(unsigned short) * vlc_table.num_entries,
+ DEV_MMU_PAGE_ALIGNMENT,
+ &local_res_ctx->vlc_tables_bufinfo[i]);
+ VDEC_ASSERT(ret == IMG_SUCCESS);
+ if (ret != IMG_SUCCESS)
+ goto error;
+
+ if (vlc_table.data)
+ decres_pack_vlc_tables
+ (local_res_ctx->vlc_tables_bufinfo[i].cpu_virt,
+ vlc_table.data,
+ vlc_table.num_entries);
+
+ /* VLC index table */
+#ifdef DEBUG_DECODER_DRIVER
+ pr_info("%s:%d calling MMU_StreamMalloc",
+ __func__, __LINE__);
+#endif
+ ret = mmu_stream_alloc(mmu_handle,
+ MMU_HEAP_STREAM_BUFFERS,
+ mem_heap_id, (enum sys_emem_attrib)(mem_attrib |
+ SYS_MEMATTRIB_CORE_READ_ONLY |
+ SYS_MEMATTRIB_CPU_WRITE),
+ vlc_idxtable_sz,
+ DEV_MMU_PAGE_ALIGNMENT,
+ &local_res_ctx->vlc_idxtables_bufinfo[i]);
+ VDEC_ASSERT(ret == IMG_SUCCESS);
+ if (ret != IMG_SUCCESS)
+ goto error;
+
+ if (vlc_table.index_table)
+ memcpy(local_res_ctx->vlc_idxtables_bufinfo[i].cpu_virt,
+ vlc_table.index_table,
+ local_res_ctx->vlc_idxtables_bufinfo[i].buf_size);
+ }
+ }
+
+ /* Start code */
+#ifdef DEBUG_DECODER_DRIVER
+ pr_info("%s:%d calling MMU_StreamMalloc", __func__, __LINE__);
+#endif
+ ret = mmu_stream_alloc(mmu_handle, MMU_HEAP_STREAM_BUFFERS, mem_heap_id,
+ (enum sys_emem_attrib)(mem_attrib |
+ SYS_MEMATTRIB_CORE_READ_ONLY |
+ SYS_MEMATTRIB_CPU_WRITE),
+ sizeof(start_code),
+ DEV_MMU_PAGE_ALIGNMENT,
+ &local_res_ctx->start_code_bufinfo);
+ VDEC_ASSERT(ret == IMG_SUCCESS);
+ if (ret != IMG_SUCCESS)
+ goto error;
+
+ memcpy(local_res_ctx->start_code_bufinfo.cpu_virt, start_code, sizeof(start_code));
+
+ for (i = 0; i < DECODER_RESTYPE_MAX; i++) {
+ unsigned int j;
+
+ ret = pool_api_create(&local_res_ctx->res_pool[i]);
+ VDEC_ASSERT(ret == IMG_SUCCESS);
+ if (ret != IMG_SUCCESS)
+ goto error;
+
+ lst_init(&local_res_ctx->pool_data_list[i]);
+
+ for (j = 0; j < num_dec_slots; j++) {
+ struct res_resinfo *local_res_info;
+
+ local_res_info = kzalloc(sizeof(*local_res_info), GFP_KERNEL);
+
+ VDEC_ASSERT(local_res_info);
+ if (!local_res_info) {
+ pr_err("Failed to allocate memory\n");
+ ret = IMG_ERROR_OUT_OF_MEMORY;
+ goto error_local_res_info_alloc;
+ }
+
+ local_res_info->ddbuf_info = kzalloc(sizeof(*local_res_info->ddbuf_info),
+ GFP_KERNEL);
+ VDEC_ASSERT(local_res_info->ddbuf_info);
+ if (!local_res_info->ddbuf_info) {
+ pr_err("Failed to allocate memory for resource buffer information structure");
+ ret = IMG_ERROR_OUT_OF_MEMORY;
+ goto error_local_dd_buf_alloc;
+ }
+
+#ifdef DEBUG_DECODER_DRIVER
+ pr_info("%s:%d calling MMU_StreamMalloc", __func__, __LINE__);
+#endif
+ ret = mmu_stream_alloc(mmu_handle, MMU_HEAP_STREAM_BUFFERS,
+ mem_heap_id, (enum sys_emem_attrib)(mem_attrib |
+ SYS_MEMATTRIB_CPU_READ |
+ SYS_MEMATTRIB_CPU_WRITE),
+ res_size[i],
+ DEV_MMU_PAGE_ALIGNMENT,
+ local_res_info->ddbuf_info);
+ VDEC_ASSERT(ret == IMG_SUCCESS);
+ if (ret != IMG_SUCCESS)
+ goto error_local_res_alloc;
+
+ /* Register with the buffer pool */
+ ret = pool_resreg(local_res_ctx->res_pool[i],
+ decres_fnbuf_info_destructor,
+ local_res_info->ddbuf_info,
+ sizeof(*local_res_info->ddbuf_info),
+ FALSE, NULL,
+ &local_res_info->res, mmu_handle);
+ VDEC_ASSERT(ret == IMG_SUCCESS);
+ if (ret != IMG_SUCCESS)
+ goto error_local_res_register;
+
+ lst_add(&local_res_ctx->pool_data_list[i],
+ local_res_info);
+ continue;
+
+/* Roll back in case of local errors. */
+error_local_res_register: mmu_free_mem(mmu_handle, local_res_info->ddbuf_info);
+
+error_local_res_alloc: kfree(local_res_info->ddbuf_info);
+
+error_local_dd_buf_alloc: kfree(local_res_info);
+
+error_local_res_info_alloc: goto error;
+ }
+ }
+
+ *resources = (void *)local_res_ctx;
+
+ return IMG_SUCCESS;
+
+/* Roll back in case of errors. */
+error: dec_res_destroy(mmu_handle, (void *)local_res_ctx);
+
+ return ret;
+}
+
+/*
+ * @Function dec_res_destroy
+ * @Description
+ * This function destroys the decoder resource context and frees all
+ * resources allocated by dec_res_create().
+ */
+int dec_res_destroy(void *mmudev_handle, void *res_ctx)
+{
+ int ret = IMG_SUCCESS;
+ int ret1 = IMG_SUCCESS;
+ unsigned int i = 0;
+ struct res_resinfo *local_res_info;
+ struct res_resinfo *next_res_info;
+
+ struct dec_res_ctx *local_res_ctx = (struct dec_res_ctx *)res_ctx;
+
+ if (!local_res_ctx) {
+		pr_err("Invalid parameters\n");
+ return IMG_ERROR_INVALID_PARAMETERS;
+ }
+
+ if (local_res_ctx->intra_bufinfo.hndl_memory) {
+ ret1 = mmu_free_mem(mmudev_handle, &local_res_ctx->intra_bufinfo);
+ VDEC_ASSERT(ret1 == IMG_SUCCESS);
+ if (ret1 != IMG_SUCCESS)
+ ret = ret1;
+ }
+
+ if (local_res_ctx->auxline_bufinfo.hndl_memory) {
+ ret1 = mmu_free_mem(mmudev_handle, &local_res_ctx->auxline_bufinfo);
+ VDEC_ASSERT(ret1 == IMG_SUCCESS);
+ if (ret1 != IMG_SUCCESS)
+ ret = ret1;
+ }
+
+ for (i = 0; i < VDEC_STD_MAX; i++) {
+ if (local_res_ctx->vlc_tables_bufinfo[i].hndl_memory) {
+ ret1 = mmu_free_mem(mmudev_handle, &local_res_ctx->vlc_tables_bufinfo[i]);
+ VDEC_ASSERT(ret1 == IMG_SUCCESS);
+ if (ret1 != IMG_SUCCESS)
+ ret = ret1;
+ }
+
+ if (local_res_ctx->vlc_idxtables_bufinfo[i].hndl_memory) {
+ ret1 = mmu_free_mem(mmudev_handle,
+ &local_res_ctx->vlc_idxtables_bufinfo[i]);
+ VDEC_ASSERT(ret1 == IMG_SUCCESS);
+ if (ret1 != IMG_SUCCESS)
+ ret = ret1;
+ }
+ }
+
+ if (local_res_ctx->start_code_bufinfo.hndl_memory) {
+ ret1 = mmu_free_mem(mmudev_handle, &local_res_ctx->start_code_bufinfo);
+ VDEC_ASSERT(ret1 == IMG_SUCCESS);
+ if (ret1 != IMG_SUCCESS)
+ ret = ret1;
+ }
+
+ for (i = 0; i < DECODER_RESTYPE_MAX; i++) {
+ if (local_res_ctx->res_pool[i]) {
+ local_res_info =
+ lst_first(&local_res_ctx->pool_data_list[i]);
+ while (local_res_info) {
+ next_res_info = lst_next(local_res_info);
+ lst_remove(&local_res_ctx->pool_data_list[i], local_res_info);
+ ret1 = pool_resdestroy(local_res_info->res, TRUE);
+ VDEC_ASSERT(ret1 == IMG_SUCCESS);
+ if (ret1 != IMG_SUCCESS)
+ ret = ret1;
+ kfree(local_res_info);
+ local_res_info = next_res_info;
+ }
+ pool_destroy(local_res_ctx->res_pool[i]);
+ }
+ }
+
+ kfree(local_res_ctx);
+ return ret;
+}
diff --git a/drivers/staging/media/vxd/decoder/dec_resources.h b/drivers/staging/media/vxd/decoder/dec_resources.h
new file mode 100644
index 000000000000..d068ca57d147
--- /dev/null
+++ b/drivers/staging/media/vxd/decoder/dec_resources.h
@@ -0,0 +1,46 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * VXD Decoder resource allocation and destroy Interface header
+ *
+ * Copyright (c) Imagination Technologies Ltd.
+ * Copyright (c) 2021 Texas Instruments Incorporated - http://www.ti.com/
+ *
+ * Authors:
+ * Sunita Nadampalli <[email protected]>
+ *
+ * Re-written for upstream
+ * Sidraya Jayagond <[email protected]>
+ */
+
+#ifndef _DEC_RESOURCES_H_
+#define _DEC_RESOURCES_H_
+
+#include "decoder.h"
+#include "lst.h"
+
+/*
+ * This structure contains the core resources.
+ * @brief Decoder Core Resources
+ */
+struct dec_res_ctx {
+ struct vidio_ddbufinfo intra_bufinfo;
+ struct vidio_ddbufinfo auxline_bufinfo;
+ struct vidio_ddbufinfo start_code_bufinfo;
+ struct vidio_ddbufinfo vlc_tables_bufinfo[VDEC_STD_MAX];
+ struct vidio_ddbufinfo vlc_idxtables_bufinfo[VDEC_STD_MAX];
+ void *res_pool[DECODER_RESTYPE_MAX];
+ struct lst_t pool_data_list[DECODER_RESTYPE_MAX];
+};
+
+int dec_res_picture_detach(void **res_ctx, struct dec_decpict *dec_pict);
+
+int dec_res_picture_attach(void **res_ctx, enum vdec_vid_std vid_std,
+ struct dec_decpict *dec_pict);
+
+int dec_res_create(void *mmudev_handle,
+ struct vxd_coreprops *core_props, unsigned int num_dec_slots,
+ unsigned int mem_heap_id, void **resources);
+
+int dec_res_destroy(void *mmudev_handle, void *res_ctx);
+
+#endif
--
2.17.1
From: Sidraya <[email protected]>
This patch contains the implementation of the bitstream pre-parser
(BSPP). At present it supports pre-parsing of H264, HEVC and MJPEG
bitstreams. It uses a software shift-register implementation for
bitstream parsing.
Signed-off-by: Lakshmi Sankar <[email protected]>
Signed-off-by: Sidraya <[email protected]>
---
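A minimal usage sketch (illustrative only, not part of the diff below),
assuming a pre-parser stream context created elsewhere, a mapped
bitstream buffer described by a bspp_ddbuf_info, and a valid element
type (anything other than VDEC_BSTRELEMENT_UNDEFINED or
VDEC_BSTRELEMENT_MAX):

	static int example_queue_buffer(void *str_ctx,
					struct bspp_ddbuf_info *buf_info,
					unsigned int bufmap_id,
					unsigned int data_size,
					void *pict_tag,
					enum vdec_bstr_element_type type)
	{
		/* Chain the buffer for pre-parsing; units may span buffers. */
		return bspp_stream_submit_buffer(str_ctx, buf_info, bufmap_id,
						 data_size, pict_tag, type);
	}

Once the corresponding picture has been decoded,
bspp_submit_picture_decoded() is called so that the pre-parser can
release the sequence/PPS references it holds for that picture.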
MAINTAINERS | 11 +
drivers/staging/media/vxd/decoder/bspp.c | 2479 ++++++++++++++
drivers/staging/media/vxd/decoder/bspp.h | 363 ++
drivers/staging/media/vxd/decoder/bspp_int.h | 514 +++
.../media/vxd/decoder/h264_secure_parser.c | 3051 +++++++++++++++++
.../media/vxd/decoder/h264_secure_parser.h | 278 ++
.../media/vxd/decoder/hevc_secure_parser.c | 2895 ++++++++++++++++
.../media/vxd/decoder/hevc_secure_parser.h | 455 +++
.../media/vxd/decoder/jpeg_secure_parser.c | 645 ++++
.../media/vxd/decoder/jpeg_secure_parser.h | 37 +
drivers/staging/media/vxd/decoder/swsr.c | 1657 +++++++++
drivers/staging/media/vxd/decoder/swsr.h | 278 ++
12 files changed, 12663 insertions(+)
create mode 100644 drivers/staging/media/vxd/decoder/bspp.c
create mode 100644 drivers/staging/media/vxd/decoder/bspp.h
create mode 100644 drivers/staging/media/vxd/decoder/bspp_int.h
create mode 100644 drivers/staging/media/vxd/decoder/h264_secure_parser.c
create mode 100644 drivers/staging/media/vxd/decoder/h264_secure_parser.h
create mode 100644 drivers/staging/media/vxd/decoder/hevc_secure_parser.c
create mode 100644 drivers/staging/media/vxd/decoder/hevc_secure_parser.h
create mode 100644 drivers/staging/media/vxd/decoder/jpeg_secure_parser.c
create mode 100644 drivers/staging/media/vxd/decoder/jpeg_secure_parser.h
create mode 100644 drivers/staging/media/vxd/decoder/swsr.c
create mode 100644 drivers/staging/media/vxd/decoder/swsr.h
diff --git a/MAINTAINERS b/MAINTAINERS
index 6c3f7a55ce9b..baf1f19e21f7 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -19560,9 +19560,20 @@ F: drivers/staging/media/vxd/common/talmmu_api.c
F: drivers/staging/media/vxd/common/talmmu_api.h
F: drivers/staging/media/vxd/common/work_queue.c
F: drivers/staging/media/vxd/common/work_queue.h
+F: drivers/staging/media/vxd/decoder/bspp.c
+F: drivers/staging/media/vxd/decoder/bspp.h
+F: drivers/staging/media/vxd/decoder/bspp_int.h
+F: drivers/staging/media/vxd/decoder/h264_secure_parser.c
+F: drivers/staging/media/vxd/decoder/h264_secure_parser.h
+F: drivers/staging/media/vxd/decoder/hevc_secure_parser.c
+F: drivers/staging/media/vxd/decoder/hevc_secure_parser.h
F: drivers/staging/media/vxd/decoder/hw_control.c
F: drivers/staging/media/vxd/decoder/hw_control.h
F: drivers/staging/media/vxd/decoder/img_dec_common.h
+F: drivers/staging/media/vxd/decoder/jpeg_secure_parser.c
+F: drivers/staging/media/vxd/decoder/jpeg_secure_parser.h
+F: drivers/staging/media/vxd/decoder/swsr.c
+F: drivers/staging/media/vxd/decoder/swsr.h
F: drivers/staging/media/vxd/decoder/translation_api.c
F: drivers/staging/media/vxd/decoder/translation_api.h
F: drivers/staging/media/vxd/decoder/vdec_mmu_wrapper.c
diff --git a/drivers/staging/media/vxd/decoder/bspp.c b/drivers/staging/media/vxd/decoder/bspp.c
new file mode 100644
index 000000000000..b3a03e99a2c2
--- /dev/null
+++ b/drivers/staging/media/vxd/decoder/bspp.c
@@ -0,0 +1,2479 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * VXD Bitstream Buffer Pre-Parser
+ *
+ * Copyright (c) Imagination Technologies Ltd.
+ * Copyright (c) 2021 Texas Instruments Incorporated - http://www.ti.com/
+ *
+ * Authors:
+ * Lakshmi Sankar <[email protected]>
+ *
+ * Re-written for upstream
+ * Prashanth Kumar Amai <[email protected]>
+ * Sidraya Jayagond <[email protected]>
+ */
+
+#include <linux/slab.h>
+#include <linux/printk.h>
+#include <linux/mutex.h>
+#include <linux/dma-mapping.h>
+#include <media/v4l2-ctrls.h>
+#include <media/v4l2-device.h>
+#include <media/v4l2-mem2mem.h>
+
+#include "bspp.h"
+#include "h264_secure_parser.h"
+#include "hevc_secure_parser.h"
+#ifdef HAS_JPEG
+#include "jpeg_secure_parser.h"
+#endif
+#include "lst.h"
+#include "swsr.h"
+#include "vdecdd_defs.h"
+
+#define BSPP_ERR_MSG_LENGTH 1024
+
+/*
+ * This type defines the exception flags used to catch errors. If more
+ * catch blocks are required to catch different kinds of errors, more enum
+ * values can be added.
+ * @brief BSPP Exception Handler
+ */
+enum bspp_exception_handler {
+ /* BSPP parse exception handler */
+ BSPP_EXCEPTION_HANDLER_NONE = 0x00,
+ /* Jump at exception (external use) */
+ BSPP_EXCEPTION_HANDLER_JUMP,
+ BSPP_EXCEPTION_FORCE32BITS = 0x7FFFFFFFU
+};
+
+/*
+ * This structure contains bitstream buffer information.
+ * @brief BSPP Bitstream Buffer Information
+ */
+struct bspp_bitstream_buffer {
+ void **lst_link;
+ struct bspp_ddbuf_info ddbuf_info;
+ unsigned int data_size;
+ unsigned int bufmap_id;
+ enum vdec_bstr_element_type bstr_element_type;
+ unsigned long long bytes_read;
+ void *pict_tag_param;
+};
+
+/*
+ * This structure contains shift-register state.
+ * @brief BSPP Shift-register State
+ */
+struct bspp_parse_ctx {
+ void *swsr_context;
+ enum swsr_exception exception;
+};
+
+/*
+ * This structure contains context for the current picture.
+ * @brief BSPP Picture Context
+ */
+struct bspp_pict_ctx {
+ struct bspp_sequence_hdr_info *sequ_hdr_info;
+ int closed_gop;
+ struct bspp_pict_hdr_info pict_hdr_info[VDEC_H264_MVC_MAX_VIEWS];
+ struct bspp_sequence_hdr_info *ext_sequ_hdr_info;
+ int present;
+ int invalid;
+ int unsupported;
+ int finished;
+ unsigned int new_pict_signalled;
+};
+
+/*
+ * This structure contains resources allocated for the stream.
+ * @brief BSPP Stream Resource Allocations
+ */
+struct bspp_stream_alloc_data {
+ struct lst_t sequence_data_list[SEQUENCE_SLOTS];
+ struct lst_t pps_data_list[PPS_SLOTS];
+ struct lst_t available_sequence_list;
+ struct lst_t available_ppss_list;
+ struct lst_t raw_data_list_available;
+ struct lst_t raw_data_list_used;
+ struct lst_t vps_data_list[VPS_SLOTS];
+ struct lst_t raw_sei_alloc_list;
+ struct lst_t available_vps_list;
+};
+
+struct bspp_raw_sei_alloc {
+ void **lst_link;
+ struct vdec_raw_bstr_data raw_sei_data;
+};
+
+/*
+ * This structure contains bitstream parsing state information for the current
+ * group of buffers.
+ * @brief BSPP Bitstream Parsing State Information
+ */
+struct bspp_grp_bstr_ctx {
+ enum vdec_vid_std vid_std;
+ int disable_mvc;
+ int delim_present;
+ void *swsr_context;
+ enum bspp_unit_type unit_type;
+ enum bspp_unit_type last_unit_type;
+ int not_pic_unit_yet;
+ int not_ext_pic_unit_yet;
+ unsigned int total_data_size;
+ unsigned int total_bytes_read;
+ struct lst_t buffer_chain;
+ struct lst_t in_flight_bufs;
+ struct lst_t *pre_pict_seg_list[3];
+ struct lst_t *pict_seg_list[3];
+ void **pict_tag_param_array[3];
+ struct lst_t *segment_list;
+ void **pict_tag_param;
+ struct lst_t *free_segments;
+ unsigned int segment_offset;
+ int insert_start_code;
+ unsigned char start_code_suffix;
+ unsigned char current_view_idx;
+};
+
+/*
+ * This structure contains the stream context information.
+ * @brief BSPP Stream Context Information
+ */
+struct bspp_str_context {
+ enum vdec_vid_std vid_std;
+ int disable_mvc;
+ int full_scan;
+ int immediate_decode;
+ enum vdec_bstr_format bstr_format;
+ struct vdec_codec_config codec_config;
+ unsigned int user_str_id;
+ struct bspp_vid_std_features vid_std_features;
+ struct bspp_swsr_ctx swsr_ctx;
+ struct bspp_parser_callbacks parser_callbacks;
+ struct bspp_stream_alloc_data str_alloc;
+ unsigned int sequ_hdr_id;
+ unsigned char *sequ_hdr_info;
+ unsigned char *secure_sequence_info;
+ unsigned char *pps_info;
+ unsigned char *secure_pps_info;
+ unsigned char *raw_data;
+ struct bspp_grp_bstr_ctx grp_bstr_ctx;
+ struct bspp_parse_ctx parse_ctx;
+ struct bspp_inter_pict_data inter_pict_data;
+ struct lst_t decoded_pictures_list;
+ /* Mutex for secure access */
+ struct mutex *bspp_mutex;
+ int intra_frame_closed_gop;
+ struct bspp_pict_ctx pict_ctx;
+ struct bspp_parse_state parse_state;
+};
+
+/*
+ * This structure contains the standard related parser functions.
+ * @brief BSPP Standard Related Functions
+ */
+struct bspp_parser_functions {
+ /* Pointer to standard-specific parser configuration function */
+ bspp_cb_set_parser_config set_parser_config;
+ /* Pointer to standard-specific unit type determining function */
+ bspp_cb_determine_unit_type determine_unit_type;
+};
+
+static struct bspp_parser_functions parser_fxns[VDEC_STD_MAX] = {
+ /* VDEC_STD_UNDEFINED */
+ { NULL, NULL },
+ /* VDEC_STD_MPEG2 */
+ { NULL, NULL },
+ /* VDEC_STD_MPEG4 */
+ { NULL, NULL },
+ /* VDEC_STD_H263 */
+ { NULL, NULL },
+ /* VDEC_STD_H264 */
+ { bspp_h264_set_parser_config, bspp_h264_determine_unittype },
+ /* VDEC_STD_VC1 */
+ { NULL, NULL },
+ /* VDEC_STD_AVS */
+ { NULL, NULL },
+ /* VDEC_STD_REAL */
+ { NULL, NULL },
+ /* VDEC_STD_JPEG */
+#ifdef HAS_JPEG
+ { bspp_jpeg_setparser_config, bspp_jpeg_determine_unit_type },
+#else
+ { NULL, NULL },
+#endif
+ /* VDEC_STD_VP6 */
+ { NULL, NULL },
+ /* VDEC_STD_VP8 */
+ { NULL, NULL },
+ /* VDEC_STD_SORENSON */
+ { NULL, NULL },
+ /* VDEC_STD_HEVC */
+ { bspp_hevc_set_parser_config, bspp_hevc_determine_unittype },
+};
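+
+/*
+ * A new standard is hooked into the pre-parser by filling its slot in this
+ * table with the standard-specific set_parser_config and determine_unit_type
+ * callbacks (hypothetically, { bspp_xyz_set_parser_config,
+ * bspp_xyz_determine_unittype } at the VDEC_STD_XYZ index); unsupported
+ * standards keep { NULL, NULL }.
+ */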
+
+/*
+ * @Function bspp_get_pps_hdr
+ * @Description Obtains the most recent PPS header of a given Id.
+ */
+struct bspp_pps_info *bspp_get_pps_hdr(void *str_res_handle, unsigned int pps_id)
+{
+ struct bspp_stream_alloc_data *alloc_data =
+ (struct bspp_stream_alloc_data *)str_res_handle;
+
+ if (pps_id >= PPS_SLOTS || !alloc_data)
+ return NULL;
+
+ return lst_last(&alloc_data->pps_data_list[pps_id]);
+}
+
+/*
+ * @Function bspp_get_sequ_hdr
+ * @Description Obtains the most recent sequence header of a given Id.
+ */
+struct bspp_sequence_hdr_info *bspp_get_sequ_hdr(void *str_res_handle,
+ unsigned int sequ_id)
+{
+ struct bspp_stream_alloc_data *alloc_data =
+ (struct bspp_stream_alloc_data *)str_res_handle;
+ if (sequ_id >= SEQUENCE_SLOTS || !alloc_data)
+ return NULL;
+
+ return lst_last(&alloc_data->sequence_data_list[sequ_id]);
+}
+
+/*
+ * @Function bspp_free_bitstream_elem
+ * @Description Frees a bitstream chain element.
+ */
+static void bspp_free_bitstream_elem(struct bspp_bitstream_buffer *bstr_buf)
+{
+ memset(bstr_buf, 0, sizeof(struct bspp_bitstream_buffer));
+
+ kfree(bstr_buf);
+}
+
+/*
+ * @Function bspp_create_segment
+ * @Description Constructs a bitstream segment for the current unit and adds
+ * it to the list.
+ */
+static int bspp_create_segment(struct bspp_grp_bstr_ctx *grp_btsr_ctx,
+ struct bspp_bitstream_buffer *cur_buf)
+{
+ struct bspp_bitstr_seg *segment;
+ unsigned int result;
+
+ /*
+ * Only create a segment when data (not in a previous segment) has been
+ * parsed from the buffer.
+ */
+ if (cur_buf->bytes_read != grp_btsr_ctx->segment_offset) {
+		/* Take a segment from the pool of free segments */
+ segment = lst_removehead(grp_btsr_ctx->free_segments);
+ if (!segment) {
+ result = IMG_ERROR_COULD_NOT_OBTAIN_RESOURCE;
+ goto error;
+ }
+ memset(segment, 0, sizeof(struct bspp_bitstr_seg));
+
+ segment->bufmap_id = cur_buf->bufmap_id;
+ segment->data_size = (unsigned int)cur_buf->bytes_read
+ - grp_btsr_ctx->segment_offset;
+ segment->data_byte_offset = grp_btsr_ctx->segment_offset;
+
+ if (cur_buf->bytes_read == cur_buf->data_size) {
+ /* This is the last segment in the buffer. */
+ segment->bstr_seg_flag |= VDECDD_BSSEG_LASTINBUFF;
+ }
+
+ /*
+ * Next segment will start part way through the buffer
+ * (current read position).
+ */
+ grp_btsr_ctx->segment_offset = (unsigned int)cur_buf->bytes_read;
+
+ if (grp_btsr_ctx->insert_start_code) {
+ segment->bstr_seg_flag |= VDECDD_BSSEG_INSERT_STARTCODE;
+ segment->start_code_suffix = grp_btsr_ctx->start_code_suffix;
+ grp_btsr_ctx->insert_start_code = 0;
+ }
+
+ lst_add(grp_btsr_ctx->segment_list, segment);
+
+ /*
+ * If multiple segments correspond to the same (picture)
+ * stream-unit, update it only the first time
+ */
+ if (cur_buf->pict_tag_param && grp_btsr_ctx->pict_tag_param &&
+ (grp_btsr_ctx->segment_list ==
+ grp_btsr_ctx->pict_seg_list[0] ||
+ grp_btsr_ctx->segment_list ==
+ grp_btsr_ctx->pict_seg_list[1] ||
+ grp_btsr_ctx->segment_list ==
+ grp_btsr_ctx->pict_seg_list[2]))
+ *grp_btsr_ctx->pict_tag_param = cur_buf->pict_tag_param;
+ }
+
+ return IMG_SUCCESS;
+error:
+ return result;
+}
+
+/*
+ * @Function bspp_determine_unit_type
+ *
+ */
+static int bspp_determine_unit_type(enum vdec_vid_std vid_std,
+ unsigned char unit_type,
+ int disable_mvc,
+ enum bspp_unit_type *unit_type_enum)
+{
+ /* Determine the unit type from the NAL type. */
+ if (vid_std < VDEC_STD_MAX && parser_fxns[vid_std].determine_unit_type)
+ parser_fxns[vid_std].determine_unit_type(unit_type, disable_mvc, unit_type_enum);
+ else
+ return IMG_ERROR_INVALID_PARAMETERS;
+
+ return IMG_SUCCESS;
+}
+
+/*
+ * @Function bspp_shift_reg_cb
+ *
+ */
+static void bspp_shift_reg_cb(enum swsr_cbevent event,
+ struct bspp_grp_bstr_ctx *grp_btsr_ctx,
+ unsigned char nal_type,
+ unsigned char **data_buffer,
+ unsigned long long *data_size)
+{
+ unsigned int result;
+
+ switch (event) {
+ case SWSR_EVENT_INPUT_BUFFER_START: {
+ struct bspp_bitstream_buffer *next_buf;
+
+ /* Take the next bitstream buffer for use in shift-register. */
+ next_buf = lst_removehead(&grp_btsr_ctx->buffer_chain);
+
+ if (next_buf && data_buffer && data_size) {
+ lst_add(&grp_btsr_ctx->in_flight_bufs, next_buf);
+
+ *data_buffer = next_buf->ddbuf_info.cpu_virt_addr;
+ *data_size = next_buf->data_size;
+
+ next_buf->bytes_read = 0;
+ } else {
+ goto error;
+ }
+ }
+ break;
+ case SWSR_EVENT_OUTPUT_BUFFER_END: {
+ struct bspp_bitstream_buffer *cur_buf;
+
+ cur_buf = lst_removehead(&grp_btsr_ctx->in_flight_bufs);
+
+ if (cur_buf) {
+ /*
+ * Indicate that the whole buffer content has been
+ * used.
+ */
+ cur_buf->bytes_read = cur_buf->data_size;
+ grp_btsr_ctx->total_bytes_read += (unsigned int)cur_buf->bytes_read;
+
+ /*
+ * Construct segment for current buffer and add to
+ * active list.
+ */
+ result = bspp_create_segment(grp_btsr_ctx, cur_buf);
+ if (result != IMG_SUCCESS)
+ goto error;
+
+ /*
+ * Next segment will start at the beginning of the next
+ * buffer.
+ */
+ grp_btsr_ctx->segment_offset = 0;
+
+ /* Destroy the bitstream element. */
+ bspp_free_bitstream_elem(cur_buf);
+ } else {
+ goto error;
+ }
+ }
+ break;
+
+ case SWSR_EVENT_DELIMITER_NAL_TYPE:
+ /*
+		 * Initialise the unit type from the last one (unclassified or
+		 * unsupported types are not retained since they never update
+		 * the last unit type).
+ */
+ grp_btsr_ctx->unit_type = grp_btsr_ctx->last_unit_type;
+
+ /*
+ * Determine the unit type without consuming any data (start
+ * code) from shift-register. Segments are created automatically
+ * when a new buffer is requested by the shift-register so the
+ * unit type must be known in order to switch over the segment
+ * list.
+ */
+ result = bspp_determine_unit_type(grp_btsr_ctx->vid_std, nal_type,
+ grp_btsr_ctx->disable_mvc,
+ &grp_btsr_ctx->unit_type);
+
+ /*
+ * Only look to change bitstream segment list when the unit type
+ * is different and the current unit contains data that could be
+ * placed in a new list.
+ */
+ if (grp_btsr_ctx->last_unit_type != grp_btsr_ctx->unit_type &&
+ grp_btsr_ctx->unit_type != BSPP_UNIT_UNSUPPORTED &&
+ grp_btsr_ctx->unit_type != BSPP_UNIT_UNCLASSIFIED) {
+ int prev_pict_data;
+ int curr_pict_data;
+
+ prev_pict_data = (grp_btsr_ctx->last_unit_type == BSPP_UNIT_PICTURE ||
+ grp_btsr_ctx->last_unit_type ==
+ BSPP_UNIT_SKIP_PICTURE) ? 1 : 0;
+
+ curr_pict_data = (grp_btsr_ctx->unit_type == BSPP_UNIT_PICTURE ||
+ grp_btsr_ctx->unit_type ==
+ BSPP_UNIT_SKIP_PICTURE) ? 1 : 0;
+
+ /*
+ * When switching between picture and non-picture
+ * units.
+ */
+ if ((prev_pict_data && !curr_pict_data) ||
+ (!prev_pict_data && curr_pict_data)) {
+ /*
+ * Only delimit unit change when we're not the
+ * first unit and we're not already in the last
+ * segment list.
+ */
+ if (grp_btsr_ctx->last_unit_type != BSPP_UNIT_NONE &&
+ grp_btsr_ctx->segment_list !=
+ grp_btsr_ctx->pict_seg_list[2]) {
+ struct bspp_bitstream_buffer *cur_buf =
+ lst_first(&grp_btsr_ctx->in_flight_bufs);
+ if (!cur_buf)
+ goto error;
+
+ /*
+ * Update the offset within current buf.
+ */
+ swsr_get_byte_offset_curbuf(grp_btsr_ctx->swsr_context,
+ &cur_buf->bytes_read);
+
+ /*
+ * Create the last segment of the
+ * previous type (which may split a
+ * buffer into two). If the unit is
+ * exactly at the start of a buffer this
+ * will not create a zero-byte segment.
+ */
+ result = bspp_create_segment(grp_btsr_ctx, cur_buf);
+ if (result != IMG_SUCCESS)
+ goto error;
+ }
+
+ /* Point at the next segment list. */
+ if (grp_btsr_ctx->segment_list
+ == grp_btsr_ctx->pre_pict_seg_list[0]) {
+ grp_btsr_ctx->segment_list =
+ grp_btsr_ctx->pict_seg_list[0];
+ grp_btsr_ctx->pict_tag_param =
+ grp_btsr_ctx->pict_tag_param_array[0];
+ } else if (grp_btsr_ctx->segment_list
+ == grp_btsr_ctx->pict_seg_list[0])
+ grp_btsr_ctx->segment_list =
+ grp_btsr_ctx->pre_pict_seg_list[1];
+ else if (grp_btsr_ctx->segment_list
+ == grp_btsr_ctx->pre_pict_seg_list[1]) {
+ grp_btsr_ctx->segment_list =
+ grp_btsr_ctx->pict_seg_list[1];
+ grp_btsr_ctx->pict_tag_param =
+ grp_btsr_ctx->pict_tag_param_array[1];
+ } else if (grp_btsr_ctx->segment_list
+ == grp_btsr_ctx->pict_seg_list[1])
+ grp_btsr_ctx->segment_list =
+ grp_btsr_ctx->pre_pict_seg_list[2];
+ else if (grp_btsr_ctx->segment_list
+ == grp_btsr_ctx->pre_pict_seg_list[2]) {
+ grp_btsr_ctx->segment_list =
+ grp_btsr_ctx->pict_seg_list[2];
+ grp_btsr_ctx->pict_tag_param =
+ grp_btsr_ctx->pict_tag_param_array[2];
+ }
+ }
+
+ grp_btsr_ctx->last_unit_type = grp_btsr_ctx->unit_type;
+ }
+ break;
+
+ default:
+ break;
+ }
+
+error:
+ return;
+}
+
+/*
+ * @Function bspp_exception_handler
+ *
+ */
+static void bspp_exception_handler(enum swsr_exception exception, void *parse_ctx_handle)
+{
+ struct bspp_parse_ctx *parse_ctx = (struct bspp_parse_ctx *)parse_ctx_handle;
+
+ /* Store the exception. */
+ parse_ctx->exception = exception;
+
+ switch (parse_ctx->exception) {
+ case SWSR_EXCEPT_NO_EXCEPTION:
+ break;
+ case SWSR_EXCEPT_ENCAPULATION_ERROR1:
+ break;
+ case SWSR_EXCEPT_ENCAPULATION_ERROR2:
+ break;
+ case SWSR_EXCEPT_ACCESS_INTO_SCP:
+ break;
+ case SWSR_EXCEPT_ACCESS_BEYOND_EOD:
+ break;
+ case SWSR_EXCEPT_EXPGOULOMB_ERROR:
+ break;
+ case SWSR_EXCEPT_WRONG_CODEWORD_ERROR:
+ break;
+ case SWSR_EXCEPT_NO_SCP:
+ break;
+ case SWSR_EXCEPT_INVALID_CONTEXT:
+ break;
+
+ default:
+ break;
+ }
+
+ /* Clear the exception. */
+ swsr_check_exception(parse_ctx->swsr_context);
+}
+
+/*
+ * @Function bspp_reset_sequence
+ *
+ */
+static void bspp_reset_sequence(struct bspp_str_context *str_ctx,
+ struct bspp_sequence_hdr_info *sequ_hdr_info)
+{
+ /* Temporarily store relevant sequence fields. */
+ struct bspp_ddbuf_array_info aux_fw_sequence = sequ_hdr_info->fw_sequence;
+ void *aux_secure_sequence_info_hndl = sequ_hdr_info->secure_sequence_info;
+
+ struct bspp_ddbuf_array_info *tmp = &sequ_hdr_info->fw_sequence;
+
+ /* Reset all related structures. */
+ memset(((unsigned char *)tmp->ddbuf_info.cpu_virt_addr + tmp->buf_offset), 0x00,
+ sequ_hdr_info->fw_sequence.buf_element_size);
+
+ if (str_ctx->parser_callbacks.reset_data_cb)
+ str_ctx->parser_callbacks.reset_data_cb(BSPP_UNIT_SEQUENCE,
+ sequ_hdr_info->secure_sequence_info);
+ else
+ memset(aux_secure_sequence_info_hndl, 0, str_ctx->vid_std_features.seq_size);
+
+ memset(sequ_hdr_info, 0, sizeof(*sequ_hdr_info));
+
+ /* Restore relevant sequence fields. */
+ sequ_hdr_info->fw_sequence = aux_fw_sequence;
+ sequ_hdr_info->sequ_hdr_info.bufmap_id = aux_fw_sequence.ddbuf_info.bufmap_id;
+ sequ_hdr_info->sequ_hdr_info.buf_offset = aux_fw_sequence.buf_offset;
+ sequ_hdr_info->secure_sequence_info = aux_secure_sequence_info_hndl;
+}
+
+/*
+ * @Function bspp_reset_pps
+ *
+ */
+static void bspp_reset_pps(struct bspp_str_context *str_ctx,
+ struct bspp_pps_info *pps_info)
+{
+ /* Temporarily store relevant PPS fields. */
+ struct bspp_ddbuf_array_info aux_fw_pps = pps_info->fw_pps;
+ void *aux_secure_pps_info_hndl = pps_info->secure_pps_info;
+ struct bspp_ddbuf_array_info *tmp = &pps_info->fw_pps;
+
+ /* Reset all related structures. */
+ memset(((unsigned char *)tmp->ddbuf_info.cpu_virt_addr + tmp->buf_offset), 0x00,
+ pps_info->fw_pps.buf_element_size);
+
+ /* Reset the parser specific data. */
+ if (str_ctx->parser_callbacks.reset_data_cb)
+ str_ctx->parser_callbacks.reset_data_cb(BSPP_UNIT_PPS, pps_info->secure_pps_info);
+
+ /* Reset the common data. */
+ memset(pps_info, 0, sizeof(*pps_info));
+
+ /* Restore relevant PPS fields. */
+ pps_info->fw_pps = aux_fw_pps;
+ pps_info->bufmap_id = aux_fw_pps.ddbuf_info.bufmap_id;
+ pps_info->buf_offset = aux_fw_pps.buf_offset;
+ pps_info->secure_pps_info = aux_secure_pps_info_hndl;
+}
+
+/*
+ * @Function bspp_stream_submit_buffer
+ *
+ */
+int bspp_stream_submit_buffer(void *str_context_handle,
+ const struct bspp_ddbuf_info *ddbuf_info,
+ unsigned int bufmap_id,
+ unsigned int data_size,
+ void *pict_tag_param,
+ enum vdec_bstr_element_type bstr_element_type)
+{
+ struct bspp_str_context *str_ctx = (struct bspp_str_context *)str_context_handle;
+ struct bspp_bitstream_buffer *bstr_buf;
+ unsigned int result = IMG_SUCCESS;
+
+ if (!str_context_handle) {
+ result = IMG_ERROR_INVALID_PARAMETERS;
+ goto error;
+ }
+
+ if (bstr_element_type == VDEC_BSTRELEMENT_UNDEFINED ||
+ bstr_element_type >= VDEC_BSTRELEMENT_MAX) {
+ result = IMG_ERROR_INVALID_PARAMETERS;
+ goto error;
+ }
+
+ /*
+ * Check that the new bitstream buffer is compatible with those
+ * before.
+ */
+ bstr_buf = lst_last(&str_ctx->grp_bstr_ctx.buffer_chain);
+ if (bstr_buf && bstr_buf->bstr_element_type != bstr_element_type) {
+ result = IMG_ERROR_INVALID_PARAMETERS;
+ goto error;
+ }
+
+ /* Allocate a bitstream buffer chain element structure */
+ bstr_buf = kmalloc(sizeof(*bstr_buf), GFP_KERNEL);
+ if (!bstr_buf) {
+ result = IMG_ERROR_OUT_OF_MEMORY;
+ goto error;
+ }
+ memset(bstr_buf, 0, sizeof(*bstr_buf));
+
+ /* Queue buffer in a chain since units might span buffers. */
+ if (ddbuf_info)
+ bstr_buf->ddbuf_info = *ddbuf_info;
+
+ bstr_buf->data_size = data_size;
+ bstr_buf->bstr_element_type = bstr_element_type;
+ bstr_buf->pict_tag_param = pict_tag_param;
+ bstr_buf->bufmap_id = bufmap_id;
+ lst_add(&str_ctx->grp_bstr_ctx.buffer_chain, bstr_buf);
+
+ str_ctx->grp_bstr_ctx.total_data_size += data_size;
+
+error:
+ return result;
+}
+
+/*
+ * @Function bspp_obtain_sequence_hdr
+ *
+ */
+static struct bspp_sequence_hdr_info *bspp_obtain_sequence_hdr(struct bspp_str_context *str_ctx)
+{
+ struct bspp_stream_alloc_data *str_alloc = &str_ctx->str_alloc;
+ struct bspp_sequence_hdr_info *sequ_hdr_info;
+
+ /*
+ * Obtain any partially filled sequence data else provide a new one
+ * (always new for H.264 and HEVC)
+ */
+ sequ_hdr_info = lst_last(&str_alloc->sequence_data_list[BSPP_DEFAULT_SEQUENCE_ID]);
+ if (!sequ_hdr_info || sequ_hdr_info->ref_count > 0 || str_ctx->vid_std == VDEC_STD_H264 ||
+ str_ctx->vid_std == VDEC_STD_HEVC) {
+ /* Get Sequence resource. */
+ sequ_hdr_info = lst_removehead(&str_alloc->available_sequence_list);
+ if (sequ_hdr_info) {
+ bspp_reset_sequence(str_ctx, sequ_hdr_info);
+ sequ_hdr_info->sequ_hdr_info.sequ_hdr_id = BSPP_INVALID;
+ }
+ }
+
+ return sequ_hdr_info;
+}
+
+/*
+ * @Function bspp_submit_picture_decoded
+ *
+ */
+int bspp_submit_picture_decoded(void *str_context_handle,
+ struct bspp_picture_decoded *picture_decoded)
+{
+ struct bspp_picture_decoded *picture_decoded_elem;
+ struct bspp_str_context *str_ctx = (struct bspp_str_context *)str_context_handle;
+
+ /* Validate input arguments. */
+ if (!str_context_handle)
+ return IMG_ERROR_INVALID_PARAMETERS;
+
+ picture_decoded_elem = kmalloc(sizeof(*picture_decoded_elem), GFP_KERNEL);
+ if (!picture_decoded_elem)
+ return IMG_ERROR_MALLOC_FAILED;
+
+ *picture_decoded_elem = *picture_decoded;
+
+ /* Lock access to the list for adding a picture - HIGH PRIORITY */
+ mutex_lock_nested(str_ctx->bspp_mutex, SUBCLASS_BSPP);
+
+ lst_add(&str_ctx->decoded_pictures_list, picture_decoded_elem);
+
+ /* Unlock access to the list for adding a picture - HIGH PRIORITY */
+ mutex_unlock(str_ctx->bspp_mutex);
+
+ return IMG_SUCCESS;
+}
+
+/*
+ * @Function bspp_check_and_detach_pps_info
+ *
+ */
+static void bspp_check_and_detach_pps_info(struct bspp_stream_alloc_data *str_alloc,
+ unsigned int pps_id)
+{
+ if (pps_id != BSPP_INVALID) {
+ struct bspp_pps_info *pps_info = lst_first(&str_alloc->pps_data_list[pps_id]);
+
+ if (!pps_info) /* Invalid id */
+ return;
+
+ pps_info->ref_count--;
+ /* If nothing references it any more */
+ if (pps_info->ref_count == 0) {
+ struct bspp_pps_info *next_pps_info = lst_next(pps_info);
+
+ /*
+			 * If it is not the last PPS in the slot list,
+			 * remove it and return it to the pool-list
+ */
+ if (next_pps_info) {
+ lst_remove(&str_alloc->pps_data_list[pps_id], pps_info);
+ lst_addhead(&str_alloc->available_ppss_list, pps_info);
+ }
+ }
+ }
+}
+
+/*
+ * @Function bspp_picture_decoded
+ *
+ */
+static int bspp_picture_decoded(struct bspp_str_context *str_ctx,
+ struct bspp_picture_decoded *picture_decoded)
+{
+ struct bspp_stream_alloc_data *str_alloc = &str_ctx->str_alloc;
+
+ /* Manage Sequence */
+ if (picture_decoded->sequ_hdr_id != BSPP_INVALID) {
+ struct bspp_sequence_hdr_info *seq =
+ lst_first(&str_alloc->sequence_data_list[picture_decoded->sequ_hdr_id]);
+
+ if (!seq)
+ return IMG_ERROR_INVALID_ID;
+
+ if (picture_decoded->not_decoded) {
+ /* Release sequence data. */
+ if (str_ctx->parser_callbacks.release_data_cb)
+ str_ctx->parser_callbacks.release_data_cb((void *)str_alloc,
+ BSPP_UNIT_SEQUENCE, seq->secure_sequence_info);
+ }
+
+ seq->ref_count--;
+ /* If nothing references it any more */
+ if (seq->ref_count == 0) {
+ struct bspp_sequence_hdr_info *next_sequ_hdr_info = lst_next(seq);
+
+ /*
+ * If it is not the last sequence in the slot list
+ * remove it and return it to the pool-list
+ */
+ if (next_sequ_hdr_info) {
+ lst_remove(&str_alloc->sequence_data_list
+ [picture_decoded->sequ_hdr_id], seq);
+ /* Release sequence data. */
+ if (str_ctx->parser_callbacks.release_data_cb)
+ str_ctx->parser_callbacks.release_data_cb((void *)str_alloc,
+ BSPP_UNIT_SEQUENCE, seq->secure_sequence_info);
+
+ lst_addhead(&str_alloc->available_sequence_list, seq);
+ }
+ }
+ }
+
+ /*
+ * Expect at least one valid PPS for H.264 and always invalid for all
+ * others
+ */
+ bspp_check_and_detach_pps_info(str_alloc, picture_decoded->pps_id);
+ bspp_check_and_detach_pps_info(str_alloc, picture_decoded->second_pps_id);
+
+ return IMG_SUCCESS;
+}
+
+/*
+ * @Function bspp_service_pictures_decoded
+ *
+ */
+static int bspp_service_pictures_decoded(struct bspp_str_context *str_ctx)
+{
+ struct bspp_picture_decoded *picture_decoded;
+
+ while (1) {
+ /*
+ * Lock access to the list for removing a picture -
+ * LOW PRIORITY
+ */
+ mutex_lock_nested(str_ctx->bspp_mutex, SUBCLASS_BSPP);
+
+ picture_decoded = lst_removehead(&str_ctx->decoded_pictures_list);
+
+ /*
+ * Unlock access to the list for removing a picture -
+ * LOW PRIORITY
+ */
+ mutex_unlock(str_ctx->bspp_mutex);
+
+ if (!picture_decoded)
+ break;
+
+ bspp_picture_decoded(str_ctx, picture_decoded);
+ kfree(picture_decoded);
+ }
+
+ return IMG_SUCCESS;
+}
+
+static void bspp_remove_unused_vps(struct bspp_str_context *str_ctx, unsigned int vps_id)
+{
+ struct bspp_stream_alloc_data *str_alloc = &str_ctx->str_alloc;
+ struct bspp_vps_info *temp_vps_info = NULL;
+ struct bspp_vps_info *next_temp_vps_info = NULL;
+
+ /*
+	 * Check the whole VPS slot list for any unused VPSs
+	 * BEFORE ADDING THE NEW ONE; if any are found, remove them
+ */
+ next_temp_vps_info = lst_first(&str_alloc->vps_data_list[vps_id]);
+ while (next_temp_vps_info) {
+ /* Set Temp, it is the one which we will potentially remove */
+ temp_vps_info = next_temp_vps_info;
+ /*
+ * Set Next Temp, it is the one for the next iteration
+ * (we cannot ask for next after removing it)
+ */
+ next_temp_vps_info = lst_next(temp_vps_info);
+ /* If it is not used remove it */
+ if (temp_vps_info->ref_count == 0 && next_temp_vps_info) {
+ /* Return resource to the available pool */
+ lst_remove(&str_alloc->vps_data_list[vps_id], temp_vps_info);
+ lst_addhead(&str_alloc->available_vps_list, temp_vps_info);
+ }
+ }
+}
+
+static void bspp_remove_unused_pps(struct bspp_str_context *str_ctx, unsigned int pps_id)
+{
+ struct bspp_stream_alloc_data *str_alloc = &str_ctx->str_alloc;
+ struct bspp_pps_info *temp_pps_info = NULL;
+ struct bspp_pps_info *next_temp_pps_info = NULL;
+
+ /*
+ * Check the whole PPS slot list for any unused PPSs BEFORE ADDING
+ * THE NEW ONE, if found remove them
+ */
+ next_temp_pps_info = lst_first(&str_alloc->pps_data_list[pps_id]);
+ while (next_temp_pps_info) {
+ /* Set Temp, it is the one which we will potentially remove */
+ temp_pps_info = next_temp_pps_info;
+ /*
+ * Set Next Temp, it is the one for the next iteration
+ * (we cannot ask for next after removing it)
+ */
+ next_temp_pps_info = lst_next(temp_pps_info);
+ /* If it is not used remove it */
+ if (temp_pps_info->ref_count == 0 && next_temp_pps_info) {
+ /* Return resource to the available pool */
+ lst_remove(&str_alloc->pps_data_list[pps_id], temp_pps_info);
+ lst_addhead(&str_alloc->available_ppss_list, temp_pps_info);
+ }
+ }
+}
+
+static void bspp_remove_unused_sequence(struct bspp_str_context *str_ctx, unsigned int sps_id)
+{
+ struct bspp_stream_alloc_data *str_alloc = &str_ctx->str_alloc;
+ struct bspp_sequence_hdr_info *seq = NULL;
+ struct bspp_sequence_hdr_info *next_seq = NULL;
+
+ /*
+ * Check the whole sequence slot list for any unused sequences,
+ * if found remove them
+ */
+ next_seq = lst_first(&str_alloc->sequence_data_list[sps_id]);
+ while (next_seq) {
+ /* Set Temp, it is the one which we will potentially remove */
+ seq = next_seq;
+ /*
+ * Set Next Temp, it is the one for the next iteration (we
+ * cannot ask for next after removing it)
+ */
+ next_seq = lst_next(seq);
+
+ /*
+ * If the head is no longer used and there is something after,
+ * remove it
+ */
+ if (seq->ref_count == 0 && next_seq) {
+ /* Return resource to the pool-list */
+ lst_remove(&str_alloc->sequence_data_list[sps_id], seq);
+ if (str_ctx->parser_callbacks.release_data_cb) {
+ str_ctx->parser_callbacks.release_data_cb
+ ((void *)str_alloc,
+ BSPP_UNIT_SEQUENCE,
+ seq->secure_sequence_info);
+ }
+ lst_addhead(&str_alloc->available_sequence_list, seq);
+ }
+ }
+}
+
+/*
+ * @Function bspp_return_or_store_sequence_hdr
+ *
+ */
+static int bspp_return_or_store_sequence_hdr(struct bspp_str_context *str_ctx,
+ enum bspp_error_type parse_error,
+ struct bspp_sequence_hdr_info *sequ_hdr_info)
+{
+ struct bspp_stream_alloc_data *str_alloc = &str_ctx->str_alloc;
+ struct bspp_sequence_hdr_info *prev_sequ_hdr_info;
+
+ if (((parse_error & BSPP_ERROR_UNRECOVERABLE) || (parse_error & BSPP_ERROR_UNSUPPORTED)) &&
+ sequ_hdr_info->sequ_hdr_info.sequ_hdr_id != BSPP_INVALID) {
+ prev_sequ_hdr_info =
+ lst_last(&str_alloc->sequence_data_list
+ [sequ_hdr_info->sequ_hdr_info.sequ_hdr_id]);
+
+ /* check if it's not the same pointer */
+ if (prev_sequ_hdr_info && prev_sequ_hdr_info != sequ_hdr_info) {
+ /*
+ * Throw away corrupted sequence header if a previous "good" one exists.
+ */
+ sequ_hdr_info->sequ_hdr_info.sequ_hdr_id = BSPP_INVALID;
+ }
+ }
+
+ /* Store or return Sequence resource. */
+ if (sequ_hdr_info->sequ_hdr_info.sequ_hdr_id != BSPP_INVALID) {
+ /* Only add when not already in list. */
+ if (sequ_hdr_info != lst_last(&str_alloc->sequence_data_list
+ [sequ_hdr_info->sequ_hdr_info.sequ_hdr_id])) {
+ /*
+ * Add new sequence header (not already in list) to end
+ * of the slot-list.
+ */
+ lst_add(&str_alloc->sequence_data_list
+ [sequ_hdr_info->sequ_hdr_info.sequ_hdr_id], sequ_hdr_info);
+ }
+
+ bspp_remove_unused_sequence(str_ctx, sequ_hdr_info->sequ_hdr_info.sequ_hdr_id);
+ } else {
+ /*
+	 * if the unit was not a sequence info, add the resource to the
+ * pool-list
+ */
+ lst_addhead(&str_alloc->available_sequence_list, sequ_hdr_info);
+ }
+
+ return IMG_SUCCESS;
+}
+
+/*
+ * @Function bspp_get_resource
+ *
+ */
+static int bspp_get_resource(struct bspp_str_context *str_ctx,
+ struct bspp_pict_hdr_info *pict_hdr_info,
+ struct bspp_unit_data *unit_data)
+{
+ int result = IMG_SUCCESS;
+ struct bspp_stream_alloc_data *str_alloc = &str_ctx->str_alloc;
+
+ switch (unit_data->unit_type) {
+ case BSPP_UNIT_VPS:
+ /* Get VPS resource (HEVC only). */
+ if (unit_data->vid_std != VDEC_STD_HEVC)
+ break;
+ unit_data->out.vps_info = lst_removehead(&str_alloc->available_vps_list);
+ if (!unit_data->out.vps_info) {
+ result = IMG_ERROR_COULD_NOT_OBTAIN_RESOURCE;
+ } else {
+ unit_data->out.vps_info->vps_id = BSPP_INVALID;
+ unit_data->out.vps_info->ref_count = 0;
+ }
+ break;
+ case BSPP_UNIT_SEQUENCE:
+ unit_data->out.sequ_hdr_info = bspp_obtain_sequence_hdr(str_ctx);
+ if (!unit_data->out.sequ_hdr_info)
+ result = IMG_ERROR_COULD_NOT_OBTAIN_RESOURCE;
+
+ break;
+
+ case BSPP_UNIT_PPS:
+ /* Get PPS resource (H.264 only). */
+ unit_data->out.pps_info = lst_removehead(&str_alloc->available_ppss_list);
+ /* allocate and return extra resources */
+ if (!unit_data->out.pps_info) {
+ result = IMG_ERROR_COULD_NOT_OBTAIN_RESOURCE;
+ } else {
+ bspp_reset_pps(str_ctx, unit_data->out.pps_info);
+ unit_data->out.pps_info->pps_id = BSPP_INVALID;
+ }
+ break;
+
+ case BSPP_UNIT_PICTURE:
+ case BSPP_UNIT_SKIP_PICTURE:
+ unit_data->out.pict_hdr_info = pict_hdr_info;
+#ifdef HAS_JPEG
+ if (unit_data->vid_std == VDEC_STD_JPEG) {
+ unit_data->impl_sequ_hdr_info = bspp_obtain_sequence_hdr(str_ctx);
+ if (!unit_data->impl_sequ_hdr_info)
+ result = IMG_ERROR_COULD_NOT_OBTAIN_RESOURCE;
+ }
+#endif
+ break;
+
+ default:
+ break;
+ }
+
+ return result;
+}
+
+/*
+ * @Function bspp_file_resource
+ * @Description Stores or returns all resources provided to parse unit.
+ */
+static int bspp_file_resource(struct bspp_str_context *str_ctx, struct bspp_unit_data *unit_data)
+{
+ unsigned int result = IMG_SUCCESS;
+ struct bspp_stream_alloc_data *str_alloc = &str_ctx->str_alloc;
+
+ switch (unit_data->unit_type) {
+ case BSPP_UNIT_VPS:
+ /* Store or return VPS resource (HEVC only) */
+ if (unit_data->vid_std != VDEC_STD_HEVC)
+ break;
+
+ if (unit_data->out.vps_info->vps_id != BSPP_INVALID) {
+ lst_add(&str_alloc->vps_data_list[unit_data->out.vps_info->vps_id],
+ unit_data->out.vps_info);
+
+ bspp_remove_unused_vps(str_ctx, unit_data->out.vps_info->vps_id);
+ } else {
+ lst_addhead(&str_alloc->available_vps_list, unit_data->out.vps_info);
+ }
+ break;
+ case BSPP_UNIT_SEQUENCE:
+ result = bspp_return_or_store_sequence_hdr(str_ctx, unit_data->parse_error,
+ unit_data->out.sequ_hdr_info);
+ VDEC_ASSERT(result == IMG_SUCCESS);
+ break;
+
+ case BSPP_UNIT_PPS:
+ /* Store or return PPS resource (H.264 only). */
+ if (unit_data->out.pps_info->pps_id != BSPP_INVALID) {
+ /*
+			 * if unit was a PPS info, add resource to the slot-list
+			 * AFTER REMOVING THE UNUSED ONES, otherwise this will be
+			 * removed along with the rest unless a special provision
+			 * for the last one is made
+ */
+ lst_add(&str_alloc->pps_data_list[unit_data->out.pps_info->pps_id],
+ unit_data->out.pps_info);
+
+ bspp_remove_unused_pps(str_ctx, unit_data->out.pps_info->pps_id);
+ } else {
+ /*
+ * if unit was not a PPS info, add resource to the
+ * pool-list
+ */
+ lst_addhead(&str_alloc->available_ppss_list, unit_data->out.pps_info);
+ }
+ break;
+
+ case BSPP_UNIT_PICTURE:
+ case BSPP_UNIT_SKIP_PICTURE:
+#ifdef HAS_JPEG
+ if (unit_data->vid_std == VDEC_STD_JPEG) {
+ result = bspp_return_or_store_sequence_hdr(str_ctx,
+ unit_data->parse_error,
+ unit_data->impl_sequ_hdr_info);
+ VDEC_ASSERT(result == IMG_SUCCESS);
+ }
+#endif
+ break;
+
+ default:
+ break;
+ }
+
+ return result;
+}
+
+/*
+ * @Function bspp_process_unit
+ * @Description Consumes the unit delimiter, parses a single bitstream unit and updates the picture context.
+ */
+static int bspp_process_unit(struct bspp_str_context *str_ctx,
+ unsigned int size_delim_bits,
+ struct bspp_pict_ctx *pict_ctx,
+ struct bspp_parse_state *parse_state)
+{
+ struct bspp_unit_data unit_data;
+ unsigned long long unit_size = 0; /* Unit size (in bytes, size delimited only). */
+ unsigned int result;
+ unsigned char vidx = str_ctx->grp_bstr_ctx.current_view_idx;
+ struct bspp_pict_hdr_info *curr_pict_hdr_info;
+
+	/*
+	 * Setup default unit data; unit_size is filled in during the call
+	 * to swsr_consume_delim() below.
+	 */
+ memset(&unit_data, 0, sizeof(struct bspp_unit_data));
+
+ if (str_ctx->grp_bstr_ctx.delim_present) {
+ /* Consume delimiter and catch any exceptions. */
+ /*
+ * Consume the bitstream unit delimiter (size or
+ * start code prefix).
+ * When size-delimited the unit size is also returned
+ * so that the next unit can be found.
+ */
+ result = swsr_consume_delim(str_ctx->swsr_ctx.swsr_context,
+ str_ctx->swsr_ctx.emulation_prevention,
+ size_delim_bits, &unit_size);
+ if (result != IMG_SUCCESS)
+ goto error;
+ }
+
+ unit_data.unit_type = str_ctx->grp_bstr_ctx.unit_type;
+ unit_data.vid_std = str_ctx->vid_std;
+ unit_data.delim_present = str_ctx->grp_bstr_ctx.delim_present;
+ unit_data.codec_config = &str_ctx->codec_config;
+ unit_data.parse_state = parse_state;
+ unit_data.pict_sequ_hdr_id = str_ctx->sequ_hdr_id;
+ unit_data.str_res_handle = &str_ctx->str_alloc;
+ unit_data.unit_data_size = str_ctx->grp_bstr_ctx.total_data_size;
+ unit_data.intra_frm_as_closed_gop = str_ctx->intra_frame_closed_gop;
+
+	/* point to picture headers, check boundaries */
+ curr_pict_hdr_info = vidx < VDEC_H264_MVC_MAX_VIEWS ?
+ &pict_ctx->pict_hdr_info[vidx] : NULL;
+ unit_data.parse_state->next_pict_hdr_info =
+ vidx + 1 < VDEC_H264_MVC_MAX_VIEWS ?
+ &pict_ctx->pict_hdr_info[vidx + 1] : NULL;
+ unit_data.parse_state->is_prefix = 0;
+
+ /* Obtain output data containers. */
+ result = bspp_get_resource(str_ctx, curr_pict_hdr_info, &unit_data);
+ if (result != IMG_SUCCESS)
+ return result;
+
+ /* Process Unit and catch any exceptions. */
+ /*
+ * Call the standard-specific function to parse the bitstream
+ * unit.
+ */
+ result = str_ctx->parser_callbacks.parse_unit_cb(str_ctx->swsr_ctx.swsr_context,
+ &unit_data);
+ if (result != IMG_SUCCESS) {
+ pr_err("Failed to process unit, error = %d", unit_data.parse_error);
+ goto error;
+ }
+
+ if (unit_data.parse_error != BSPP_ERROR_NONE)
+ pr_err("Issues found while processing unit, error = %d\n", unit_data.parse_error);
+
+ /* Store or return resource used for parsing unit. */
+ result = bspp_file_resource(str_ctx, &unit_data);
+
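+	/*
+	 * If no closed GOP has been seen yet, treat the first intra-coded
+	 * picture slice (for standards other than H.264) as the start of a
+	 * new closed GOP.
+	 */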
+ if (!str_ctx->inter_pict_data.seen_closed_gop &&
+ str_ctx->grp_bstr_ctx.unit_type == BSPP_UNIT_PICTURE &&
+ unit_data.slice &&
+ (unit_data.out.pict_hdr_info &&
+ unit_data.out.pict_hdr_info->intra_coded) &&
+ str_ctx->vid_std != VDEC_STD_H264)
+ unit_data.new_closed_gop = 1;
+
+ if (unit_data.new_closed_gop) {
+ str_ctx->inter_pict_data.seen_closed_gop = 1;
+ str_ctx->inter_pict_data.new_closed_gop = 1;
+ }
+
+	/*
+	 * Post-process unit (use local context in case the parse function
+	 * tried to change the unit type).
+	 */
+ if (str_ctx->grp_bstr_ctx.unit_type == BSPP_UNIT_PICTURE ||
+ str_ctx->grp_bstr_ctx.unit_type == BSPP_UNIT_SKIP_PICTURE) {
+ if (str_ctx->inter_pict_data.new_closed_gop) {
+ pict_ctx->closed_gop = 1;
+ str_ctx->inter_pict_data.new_closed_gop = 0;
+ }
+
+ if (unit_data.ext_slice && str_ctx->grp_bstr_ctx.not_ext_pic_unit_yet &&
+ unit_data.pict_sequ_hdr_id != BSPP_INVALID) {
+ unsigned int id = unit_data.pict_sequ_hdr_id;
+
+ str_ctx->grp_bstr_ctx.not_ext_pic_unit_yet = 0;
+ pict_ctx->ext_sequ_hdr_info =
+ lst_last(&str_ctx->str_alloc.sequence_data_list[id]);
+ }
+
+ if (unit_data.slice) {
+ if (!curr_pict_hdr_info) {
+ VDEC_ASSERT(0);
+ return -EINVAL;
+ }
+ if (str_ctx->grp_bstr_ctx.not_pic_unit_yet &&
+ unit_data.pict_sequ_hdr_id != BSPP_INVALID) {
+ str_ctx->grp_bstr_ctx.not_pic_unit_yet = 0;
+
+ /*
+				 * Picture presence depends upon the picture
+				 * header being populated (in addition to the
+				 * slice data).
+ */
+ pict_ctx->present = 1;
+
+ /*
+ * Update the picture context from the last unit parsed.
+ * This context must be stored since a non-picture unit may follow.
+ * Obtain current instance of sequence data for given ID.
+ */
+ if (!pict_ctx->sequ_hdr_info) {
+ unsigned int id = unit_data.pict_sequ_hdr_id;
+
+ pict_ctx->sequ_hdr_info =
+ lst_last(&str_ctx->str_alloc.sequence_data_list[id]);
+
+ /* Do the sequence flagging/reference-counting */
+ pict_ctx->sequ_hdr_info->ref_count++;
+ }
+
+				/* Override the parser mode based on the delimiter configuration. */
+ if (str_ctx->swsr_ctx.sr_config.delim_type == SWSR_DELIM_NONE) {
+ if (str_ctx->grp_bstr_ctx.unit_type ==
+ BSPP_UNIT_SKIP_PICTURE) {
+ /* VDECFW_SKIPPED_PICTURE; */
+ curr_pict_hdr_info->parser_mode =
+ VDECFW_SKIPPED_PICTURE;
+ curr_pict_hdr_info->pic_data_size = 0;
+ } else {
+ /* VDECFW_SIZE_SIDEBAND; */
+ curr_pict_hdr_info->parser_mode =
+ VDECFW_SIZE_SIDEBAND;
+ curr_pict_hdr_info->pic_data_size =
+ str_ctx->grp_bstr_ctx.total_data_size;
+ }
+ } else if (str_ctx->swsr_ctx.sr_config.delim_type ==
+ SWSR_DELIM_SIZE) {
+ if (str_ctx->swsr_ctx.sr_config.delim_length <= 8)
+ /* VDECFW_SIZE_DELIMITED_1_ONLY; */
+ curr_pict_hdr_info->parser_mode =
+ VDECFW_SIZE_DELIMITED_1_ONLY;
+ else if (str_ctx->swsr_ctx.sr_config.delim_length <= 16)
+ /* VDECFW_SIZE_DELIMITED_2_ONLY; */
+ curr_pict_hdr_info->parser_mode =
+ VDECFW_SIZE_DELIMITED_2_ONLY;
+ else if (str_ctx->swsr_ctx.sr_config.delim_length <= 32)
+ /* VDECFW_SIZE_DELIMITED_4_ONLY; */
+ curr_pict_hdr_info->parser_mode =
+ VDECFW_SIZE_DELIMITED_4_ONLY;
+
+ curr_pict_hdr_info->pic_data_size +=
+ ((unsigned int)unit_size
+ + (size_delim_bits / 8));
+ } else if (str_ctx->swsr_ctx.sr_config.delim_type == SWSR_DELIM_SCP)
+ /* VDECFW_SCP_ONLY; */
+ curr_pict_hdr_info->parser_mode = VDECFW_SCP_ONLY;
+ }
+
+ /*
+		 * for MVC, the slice extension should also have the
+		 * same parser mode as the base view.
+ */
+ if (unit_data.parse_state->next_pict_hdr_info) {
+ unit_data.parse_state->next_pict_hdr_info->parser_mode =
+ curr_pict_hdr_info->parser_mode;
+ }
+
+ if (unit_data.parse_error & BSPP_ERROR_UNSUPPORTED) {
+ pict_ctx->invalid = 1;
+ pict_ctx->unsupported = 1;
+ } else if (!str_ctx->full_scan) {
+ /*
+ * Only parse up to and including the first
+ * valid video slice unless full scanning.
+ */
+ pict_ctx->finished = 1;
+ }
+ }
+ }
+
+ if (unit_data.extracted_all_data) {
+ enum swsr_found found;
+
+ swsr_byte_align(str_ctx->swsr_ctx.swsr_context);
+
+ found = swsr_check_delim_or_eod(str_ctx->swsr_ctx.swsr_context);
+ if (found != SWSR_FOUND_DELIM && found != SWSR_FOUND_EOD) {
+ /*
+ * Should already be at the next delimiter or EOD.
+ * Any bits left at the end of the unit could indicate
+ * corrupted syntax or erroneous parsing.
+ */
+ }
+ }
+
+ return IMG_SUCCESS;
+
+error:
+ if (unit_data.unit_type == BSPP_UNIT_PICTURE ||
+ unit_data.unit_type == BSPP_UNIT_SKIP_PICTURE)
+ pict_ctx->invalid = 1;
+
+ /*
+ * Tidy-up resources.
+ * Store or return resource used for parsing unit.
+ */
+ bspp_file_resource(str_ctx, &unit_data);
+
+ return result;
+}
+
+/*
+ * @Function bspp_terminate_buffer
+ * @Description Adds all remaining data in the buffer to a segment and frees the bitstream element.
+ */
+static int bspp_terminate_buffer(struct bspp_grp_bstr_ctx *grp_btsr_ctx,
+ struct bspp_bitstream_buffer *buf)
+{
+ int result = -1;
+
+ /* Indicate that all the data in buffer should be added to segment. */
+ buf->bytes_read = buf->data_size;
+
+ result = bspp_create_segment(grp_btsr_ctx, buf);
+ if (result != IMG_SUCCESS)
+ return result;
+
+ /* Next segment will start at the beginning of the next buffer. */
+ grp_btsr_ctx->segment_offset = 0;
+
+ bspp_free_bitstream_elem(buf);
+
+ return result;
+}
+
+/*
+ * @Function bspp_jump_to_next_view
+ * @Description Closes the segments of the current view and switches the segment lists over to the next (MVC) view.
+ */
+static int bspp_jump_to_next_view(struct bspp_grp_bstr_ctx *grp_btsr_ctx,
+ struct bspp_preparsed_data *preparsed_data,
+ struct bspp_parse_state *parse_state)
+{
+ struct bspp_bitstream_buffer *cur_buf;
+ int result;
+ unsigned int i;
+ unsigned char vidx;
+
+ if (!grp_btsr_ctx || !parse_state || !preparsed_data) {
+ result = IMG_ERROR_INVALID_PARAMETERS;
+ goto error;
+ }
+
+ vidx = grp_btsr_ctx->current_view_idx;
+
+ if (vidx >= VDEC_H264_MVC_MAX_VIEWS) {
+ result = IMG_ERROR_NOT_SUPPORTED;
+ goto error;
+ }
+
+ /* get current buffer */
+ cur_buf = (struct bspp_bitstream_buffer *)lst_first(&grp_btsr_ctx->in_flight_bufs);
+ if (!cur_buf) {
+ result = IMG_ERROR_CANCELLED;
+ goto error;
+ }
+
+ if (cur_buf->bufmap_id != parse_state->prev_buf_map_id) {
+		/*
+		 * If we moved to the next buffer while parsing the slice
+		 * header of the new view, we have to reduce the size of
+		 * the last segment up to the beginning of the new view slice
+		 * and create a new segment from that point up to the end of
+		 * the buffer. The new segment should belong to the new view.
+		 * THIS ONLY WORKS IF THE SLICE HEADER DOES NOT SPAN MORE THAN
+		 * TWO BUFFERS. If we want to support the case where the slice
+		 * header of the new view spans multiple buffers, we either
+		 * have to remove all the segments here up to the point where
+		 * we find the buffer we are looking for, then adjust the size
+		 * of this segment and add the segments we removed to the next
+		 * view list, or we can implement a mechanism like the one
+		 * that peeks for the NAL unit type and delimits the next view
+		 * segment before parsing the first slice of the view.
+		 */
+ struct bspp_bitstr_seg *segment;
+
+ segment = lst_last(grp_btsr_ctx->segment_list);
+ if (segment && segment->bufmap_id == parse_state->prev_buf_map_id) {
+ struct bspp_bitstream_buffer prev_buf;
+
+ segment->data_size -= parse_state->prev_buf_data_size
+ - parse_state->prev_byte_offset_buf;
+ segment->bstr_seg_flag &= ~VDECDD_BSSEG_LASTINBUFF;
+
+			/*
+			 * Replace the segment_offset value with the value it
+			 * would have had if we had delimited the segment
+			 * correctly beforehand.
+			 */
+ grp_btsr_ctx->segment_offset = parse_state->prev_byte_offset_buf;
+
+ /* set lists of segments to new view... */
+ for (i = 0; i < BSPP_MAX_PICTURES_PER_BUFFER; i++) {
+ grp_btsr_ctx->pre_pict_seg_list[i] =
+ &preparsed_data->ext_pictures_data[vidx].pre_pict_seg_list
+ [i];
+ grp_btsr_ctx->pict_seg_list[i] =
+ &preparsed_data->ext_pictures_data[vidx].pict_seg_list[i];
+
+ lst_init(grp_btsr_ctx->pre_pict_seg_list[i]);
+ lst_init(grp_btsr_ctx->pict_seg_list[i]);
+ }
+ /* and current segment list */
+ grp_btsr_ctx->segment_list = grp_btsr_ctx->pict_seg_list[0];
+
+ memset(&prev_buf, 0, sizeof(struct bspp_bitstream_buffer));
+ prev_buf.bufmap_id = segment->bufmap_id;
+ prev_buf.data_size = parse_state->prev_buf_data_size;
+ prev_buf.bytes_read = prev_buf.data_size;
+
+			/* Create the segment for the first part of the next view */
+ result = bspp_create_segment(grp_btsr_ctx, &prev_buf);
+ if (result != IMG_SUCCESS)
+ goto error;
+ } else {
+ result = IMG_ERROR_NOT_SUPPORTED;
+ goto error;
+ }
+ } else {
+ /*
+ * the data just parsed belongs to new view, so use previous byte
+ * offset
+ */
+ cur_buf->bytes_read = parse_state->prev_byte_offset_buf;
+
+ /* Create the segment for previous view */
+ result = bspp_create_segment(grp_btsr_ctx, cur_buf);
+ if (result != IMG_SUCCESS)
+ goto error;
+
+ /* set lists of segments to new view */
+ for (i = 0; i < BSPP_MAX_PICTURES_PER_BUFFER; i++) {
+ grp_btsr_ctx->pre_pict_seg_list[i] =
+ &preparsed_data->ext_pictures_data[vidx].pre_pict_seg_list[i];
+ grp_btsr_ctx->pict_seg_list[i] =
+ &preparsed_data->ext_pictures_data[vidx].pict_seg_list[i];
+
+ lst_init(grp_btsr_ctx->pre_pict_seg_list[i]);
+ lst_init(grp_btsr_ctx->pict_seg_list[i]);
+ }
+ /* and current segment list */
+ grp_btsr_ctx->segment_list = grp_btsr_ctx->pict_seg_list[0];
+ }
+
+ /* update prefix flag */
+ preparsed_data->ext_pictures_data[vidx].is_prefix = parse_state->is_prefix;
+ /* and view index */
+ grp_btsr_ctx->current_view_idx++;
+
+ /* set number of extended pictures */
+ preparsed_data->num_ext_pictures = grp_btsr_ctx->current_view_idx;
+
+error:
+ return result;
+}
+
+static void bspp_reset_pict_state(struct bspp_str_context *str_ctx, struct bspp_pict_ctx *pict_ctx,
+ struct bspp_parse_state *parse_state)
+{
+ memset(pict_ctx, 0, sizeof(struct bspp_pict_ctx));
+ memset(parse_state, 0, sizeof(struct bspp_parse_state));
+
+ /* Setup group buffer processing state. */
+ parse_state->inter_pict_ctx = &str_ctx->inter_pict_data;
+ parse_state->prev_bottom_pic_flag = (unsigned char)BSPP_INVALID;
+ parse_state->next_pic_is_new = 1;
+ parse_state->prev_frame_num = BSPP_INVALID;
+ parse_state->second_field_flag = 0;
+ parse_state->first_chunk = 1;
+}
+
+/*
+ * @Function bspp_stream_preparse_buffers
+ * @Description The buffer list cannot be processed immediately since units in
+ * the last buffer may be incomplete. Processing must wait until a buffer is
+ * provided with end-of-picture signalled. Once the buffer indicates that units
+ * will not span into the next buffer, the bitstream buffer chain is processed.
+ */
+int bspp_stream_preparse_buffers(void *str_context_handle,
+ const struct bspp_ddbuf_info *contig_buf_info,
+ unsigned int contig_buf_map_id, struct lst_t *segments,
+ struct bspp_preparsed_data *preparsed_data,
+ int end_of_pic)
+{
+ struct bspp_str_context *str_ctx = (struct bspp_str_context *)str_context_handle;
+ struct bspp_pict_ctx *pict_ctx = &str_ctx->pict_ctx;
+ struct bspp_parse_state *parse_state = &str_ctx->parse_state;
+ int i;
+ unsigned int unit_count = 0, num_arrays = 0;
+ unsigned int size_delim_bits = 0;
+ enum swsr_found found = SWSR_FOUND_NONE;
+ unsigned int result;
+ struct bspp_bitstr_seg *segment;
+ struct lst_t temp_list;
+
+	if (!str_context_handle)
+		return IMG_ERROR_INVALID_PARAMETERS;
+
+	if (!segments || !preparsed_data)
+		return IMG_ERROR_INVALID_PARAMETERS;
+
+	/*
+	 * Since this is a new picture, reset the context status to the
+	 * beginning.
+	 */
+	/* TODO: revisit this */
+	pict_ctx->finished = 0;
+	pict_ctx->new_pict_signalled = 0;
+
+ /* Check that bitstream buffers have been registered. */
+ if (!lst_last(&str_ctx->grp_bstr_ctx.buffer_chain))
+ return IMG_ERROR_OPERATION_PROHIBITED;
+
+ /* Initialise the output data. */
+ memset(preparsed_data, 0, sizeof(struct bspp_preparsed_data));
+
+ if (!parse_state->initialised) {
+ bspp_reset_pict_state(str_ctx, pict_ctx, parse_state);
+ parse_state->initialised = 1;
+ }
+
+ for (i = 0; i < 3; i++) {
+ lst_init(&preparsed_data->picture_data.pre_pict_seg_list[i]);
+ lst_init(&preparsed_data->picture_data.pict_seg_list[i]);
+ }
+
+ /* Initialise parsing for this video standard. */
+ if (str_ctx->parser_callbacks.initialise_parsing_cb && parse_state->first_chunk)
+ str_ctx->parser_callbacks.initialise_parsing_cb(parse_state);
+
+ parse_state->first_chunk = 0;
+
+ for (i = 0; i < VDEC_H264_MVC_MAX_VIEWS; i++) {
+ pict_ctx->pict_hdr_info[i].pict_aux_data.id = BSPP_INVALID;
+ pict_ctx->pict_hdr_info[i].second_pict_aux_data.id = BSPP_INVALID;
+ }
+
+ /* Setup buffer group bitstream context. */
+ str_ctx->grp_bstr_ctx.vid_std = str_ctx->vid_std;
+ str_ctx->grp_bstr_ctx.disable_mvc = str_ctx->disable_mvc;
+ str_ctx->grp_bstr_ctx.delim_present = 1;
+ str_ctx->grp_bstr_ctx.swsr_context = str_ctx->swsr_ctx.swsr_context;
+ str_ctx->grp_bstr_ctx.unit_type = BSPP_UNIT_NONE;
+ str_ctx->grp_bstr_ctx.last_unit_type = BSPP_UNIT_NONE;
+ str_ctx->grp_bstr_ctx.not_pic_unit_yet = 1;
+ str_ctx->grp_bstr_ctx.not_ext_pic_unit_yet = 1;
+ str_ctx->grp_bstr_ctx.total_bytes_read = 0;
+ str_ctx->grp_bstr_ctx.current_view_idx = 0;
+
+ for (i = 0; i < 3; i++) {
+ str_ctx->grp_bstr_ctx.pre_pict_seg_list[i] =
+ &preparsed_data->picture_data.pre_pict_seg_list[i];
+ str_ctx->grp_bstr_ctx.pict_seg_list[i] =
+ &preparsed_data->picture_data.pict_seg_list[i];
+ str_ctx->grp_bstr_ctx.pict_tag_param_array[i] =
+ &preparsed_data->picture_data.pict_tag_param[i];
+ }
+ str_ctx->grp_bstr_ctx.segment_list = str_ctx->grp_bstr_ctx.pre_pict_seg_list[0];
+ str_ctx->grp_bstr_ctx.pict_tag_param = str_ctx->grp_bstr_ctx.pict_tag_param_array[0];
+ str_ctx->grp_bstr_ctx.free_segments = segments;
+ str_ctx->grp_bstr_ctx.segment_offset = 0;
+ str_ctx->grp_bstr_ctx.insert_start_code = 0;
+
+	/*
+	 * Before processing the units, service all the picture decoded events
+	 * to free the resources.
+	 */
+ bspp_service_pictures_decoded(str_ctx);
+
+	/*
+	 * A picture currently being parsed is already decoded (this may
+	 * happen after dwr in low latency mode) and its resources were freed.
+	 * Skip the rest of the picture.
+	 */
+ if (pict_ctx->sequ_hdr_info && pict_ctx->sequ_hdr_info->ref_count == 0) {
+ pict_ctx->present = 0;
+ pict_ctx->finished = 1;
+ }
+
+ /*
+ * For bitstreams without unit delimiters treat all the buffers as
+ * a single unit whose type is defined by the first buffer element.
+ */
+ if (str_ctx->swsr_ctx.sr_config.delim_type == SWSR_DELIM_NONE) {
+ struct bspp_bitstream_buffer *cur_buf =
+ lst_first(&str_ctx->grp_bstr_ctx.buffer_chain);
+
+		/* If there is no picture data, the picture must be skipped. */
+ if (!cur_buf || cur_buf->data_size == 0) {
+ str_ctx->grp_bstr_ctx.unit_type = BSPP_UNIT_SKIP_PICTURE;
+ } else if (cur_buf->bstr_element_type == VDEC_BSTRELEMENT_CODEC_CONFIG) {
+ str_ctx->grp_bstr_ctx.unit_type = BSPP_UNIT_SEQUENCE;
+ } else if (cur_buf->bstr_element_type == VDEC_BSTRELEMENT_PICTURE_DATA ||
+ cur_buf->bstr_element_type == VDEC_BSTRELEMENT_UNSPECIFIED) {
+ str_ctx->grp_bstr_ctx.unit_type = BSPP_UNIT_PICTURE;
+ str_ctx->grp_bstr_ctx.segment_list = str_ctx->grp_bstr_ctx.pict_seg_list[0];
+ }
+
+ str_ctx->grp_bstr_ctx.delim_present = 0;
+ }
+
+	/*
+	 * Load the first section (buffer) of the bitstream into the software
+	 * shift-register. BSPP maps "buffer" to "section" and allows for
+	 * contiguous parsing of all buffers since unit boundaries are not
+	 * known up-front. Unit parsing and segment creation happen in a
+	 * single pass.
+	 */
+ result = swsr_start_bitstream(str_ctx->swsr_ctx.swsr_context,
+ &str_ctx->swsr_ctx.sr_config,
+ str_ctx->grp_bstr_ctx.total_data_size,
+ str_ctx->swsr_ctx.emulation_prevention);
+
+ /* Seek for next delimiter or end of data and catch any exceptions. */
+ if (str_ctx->grp_bstr_ctx.delim_present) {
+ /* Locate the first bitstream unit. */
+ found = swsr_seek_delim_or_eod(str_ctx->swsr_ctx.swsr_context);
+ }
+
+ if (str_ctx->swsr_ctx.sr_config.delim_type == SWSR_DELIM_SIZE) {
+ struct bspp_bitstream_buffer *cur_buf =
+ lst_first(&str_ctx->grp_bstr_ctx.in_flight_bufs);
+
+		if (cur_buf && cur_buf->bstr_element_type == VDEC_BSTRELEMENT_CODEC_CONFIG &&
+ str_ctx->parser_callbacks.parse_codec_config_cb) {
+ /* Parse codec config header and catch any exceptions */
+ str_ctx->parser_callbacks.parse_codec_config_cb
+ (str_ctx->swsr_ctx.swsr_context,
+ &unit_count,
+ &num_arrays,
+ &str_ctx->swsr_ctx.sr_config.delim_length,
+ &size_delim_bits);
+ } else {
+ size_delim_bits = str_ctx->swsr_ctx.sr_config.delim_length;
+ }
+ }
+
+ /* Process all the bitstream units until the picture is located. */
+ while (found != SWSR_FOUND_EOD && !pict_ctx->finished) {
+ struct bspp_bitstream_buffer *cur_buf =
+ lst_first(&str_ctx->grp_bstr_ctx.in_flight_bufs);
+
+ if (!cur_buf) {
+ pr_err("%s: cur_buf pointer is NULL\n", __func__);
+ result = IMG_ERROR_INVALID_PARAMETERS;
+ goto error;
+ }
+
+ if (str_ctx->swsr_ctx.sr_config.delim_type ==
+ SWSR_DELIM_SIZE && cur_buf->bstr_element_type ==
+ VDEC_BSTRELEMENT_CODEC_CONFIG &&
+ str_ctx->parser_callbacks.update_unit_counts_cb) {
+ /*
+ * Parse middle part of codec config header and catch
+ * any exceptions.
+ */
+ str_ctx->parser_callbacks.update_unit_counts_cb
+ (str_ctx->swsr_ctx.swsr_context,
+ &unit_count,
+ &num_arrays);
+ }
+
+ /* Process the next unit. */
+ result = bspp_process_unit(str_ctx, size_delim_bits, pict_ctx, parse_state);
+ if (result == IMG_ERROR_NOT_SUPPORTED)
+ goto error;
+
+ if (str_ctx->swsr_ctx.sr_config.delim_type != SWSR_DELIM_NONE)
+ str_ctx->grp_bstr_ctx.delim_present = 1;
+
+ /* jump to the next view */
+ if (parse_state->new_view) {
+ result = bspp_jump_to_next_view(&str_ctx->grp_bstr_ctx,
+ preparsed_data,
+ parse_state);
+ if (result != IMG_SUCCESS)
+ goto error;
+
+ parse_state->new_view = 0;
+ }
+
+ if (!pict_ctx->finished) {
+ /*
+ * Seek for next delimiter or end of data and catch any
+ * exceptions.
+ */
+ /* Locate the next bitstream unit or end of data */
+ found = swsr_seek_delim_or_eod(str_ctx->swsr_ctx.swsr_context);
+
+ {
+ struct bspp_bitstream_buffer *buf;
+ /* Update the offset within current buffer. */
+ swsr_get_byte_offset_curbuf(str_ctx->grp_bstr_ctx.swsr_context,
+ &parse_state->prev_byte_offset_buf);
+ buf = lst_first(&str_ctx->grp_bstr_ctx.in_flight_bufs);
+ if (buf) {
+ parse_state->prev_buf_map_id = buf->bufmap_id;
+ parse_state->prev_buf_data_size = buf->data_size;
+ }
+ }
+ }
+ }
+
+ /* Finalize parsing for this video standard. */
+ if (str_ctx->parser_callbacks.finalise_parsing_cb && end_of_pic) {
+ str_ctx->parser_callbacks.finalise_parsing_cb((void *)&str_ctx->str_alloc,
+ parse_state);
+ }
+
+ /*
+ * Create segments for each buffer held by the software shift register
+ * (and not yet processed).
+ */
+ while (lst_first(&str_ctx->grp_bstr_ctx.in_flight_bufs)) {
+ struct bspp_bitstream_buffer *buf =
+ lst_removehead(&str_ctx->grp_bstr_ctx.in_flight_bufs);
+
+ result = bspp_terminate_buffer(&str_ctx->grp_bstr_ctx, buf);
+ }
+
+ /*
+ * Create segments for each buffer not yet requested by the shift
+ * register.
+ */
+ while (lst_first(&str_ctx->grp_bstr_ctx.buffer_chain)) {
+ struct bspp_bitstream_buffer *buf =
+ lst_removehead(&str_ctx->grp_bstr_ctx.buffer_chain);
+
+ result = bspp_terminate_buffer(&str_ctx->grp_bstr_ctx, buf);
+ }
+
+ /*
+ * Populate the parsed data information for picture only if one is
+ * present. The anonymous data has already been added to the
+ * appropriate segment list.
+ */
+ if (pict_ctx->present && !pict_ctx->invalid) {
+ if (!pict_ctx->new_pict_signalled) {
+ /*
+			 * Provide data about the sequence used by the picture.
+			 * Signal "new sequence" if the sequence header is new
+			 * or has changed. Always switch sequences when changing
+			 * between base and additional views.
+ */
+ if (pict_ctx->sequ_hdr_info) {
+ if (pict_ctx->sequ_hdr_info->sequ_hdr_info.sequ_hdr_id !=
+ str_ctx->sequ_hdr_id ||
+ pict_ctx->sequ_hdr_info->ref_count == 1 ||
+ pict_ctx->ext_sequ_hdr_info ||
+ pict_ctx->closed_gop) {
+ preparsed_data->new_sequence = 1;
+ preparsed_data->sequ_hdr_info =
+ pict_ctx->sequ_hdr_info->sequ_hdr_info;
+ }
+ }
+
+ /* Signal "new subsequence" and its common header information. */
+ if (pict_ctx->ext_sequ_hdr_info) {
+ preparsed_data->new_sub_sequence = 1;
+ preparsed_data->ext_sequ_hdr_info =
+ pict_ctx->ext_sequ_hdr_info->sequ_hdr_info;
+
+ for (i = 0; i < VDEC_H264_MVC_MAX_VIEWS - 1;
+ i++) {
+ /*
+					 * the prefix is always the last one;
+					 * do not attach any header info to it
+ */
+ if (preparsed_data->ext_pictures_data[i].is_prefix)
+ break;
+
+ /* attach headers */
+ preparsed_data->ext_pictures_data[i].sequ_hdr_id =
+ pict_ctx->ext_sequ_hdr_info->sequ_hdr_info.sequ_hdr_id;
+ pict_ctx->ext_sequ_hdr_info->ref_count++;
+ preparsed_data->ext_pictures_data[i].pict_hdr_info =
+ pict_ctx->pict_hdr_info[i + 1];
+ }
+
+ preparsed_data->ext_pictures_data
+ [0].pict_hdr_info.first_pic_of_sequence =
+ preparsed_data->new_sub_sequence;
+
+ /*
+ * Update the base view common sequence info
+ * with the number of views that the stream has.
+ * Otherwise the number of views is inconsistent
+ * between base view sequence and dependent view
+ * sequences. Also base view sequence appears
+ * with one view and the driver calculates the
+ * wrong number of resources.
+ */
+ preparsed_data->sequ_hdr_info.com_sequ_hdr_info.num_views =
+ preparsed_data->ext_sequ_hdr_info.com_sequ_hdr_info.num_views;
+ }
+
+ /* Signal if this picture is the first in a closed GOP */
+ if (pict_ctx->closed_gop) {
+ preparsed_data->closed_gop = 1;
+ preparsed_data->sequ_hdr_info.com_sequ_hdr_info.not_dpb_flush =
+ str_ctx->inter_pict_data.not_dpb_flush;
+ }
+
+ /*
+ * Signal "new picture" and its common header
+ * information.
+ */
+ preparsed_data->new_picture = 1;
+ if (pict_ctx->sequ_hdr_info) {
+ preparsed_data->picture_data.sequ_hdr_id =
+ pict_ctx->sequ_hdr_info->sequ_hdr_info.sequ_hdr_id;
+ }
+ preparsed_data->picture_data.pict_hdr_info = pict_ctx->pict_hdr_info[0];
+
+ preparsed_data->picture_data.pict_hdr_info.first_pic_of_sequence =
+ preparsed_data->new_sequence;
+ if (contig_buf_info)
+ preparsed_data->picture_data.pict_hdr_info.fragmented_data = 1;
+ else
+ preparsed_data->picture_data.pict_hdr_info.fragmented_data = 0;
+
+ str_ctx->sequ_hdr_id = preparsed_data->picture_data.sequ_hdr_id;
+
+ pict_ctx->new_pict_signalled = 1;
+
+ /*
+			 * ASO/FMO is supported only when a frame is submitted
+			 * as a whole
+ */
+ if (parse_state->discontinuous_mb && !end_of_pic)
+ result = IMG_ERROR_NOT_SUPPORTED;
+ } else {
+ preparsed_data->new_fragment = 1;
+
+ if (parse_state->discontinuous_mb)
+ result = IMG_ERROR_NOT_SUPPORTED;
+ }
+
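+		/*
+		 * Prepend any picture prefix segments (held over from the
+		 * previous call in the inter-picture data) to the first
+		 * picture segment list: park the current segments on a
+		 * temporary list, move the prefix segments in first, then
+		 * append the parked segments back.
+		 */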
+ lst_init(&temp_list);
+
+ segment = lst_removehead(&preparsed_data->picture_data.pict_seg_list[0]);
+ while (segment) {
+ lst_add(&temp_list, segment);
+ segment = lst_removehead(&preparsed_data->picture_data.pict_seg_list[0]);
+ }
+
+ segment = lst_removehead(&str_ctx->inter_pict_data.pic_prefix_seg);
+ while (segment) {
+ lst_add(&preparsed_data->picture_data.pict_seg_list[0],
+ segment);
+ segment = lst_removehead(&str_ctx->inter_pict_data.pic_prefix_seg);
+ }
+
+ segment = lst_removehead(&temp_list);
+ while (segment) {
+ lst_add(&preparsed_data->picture_data.pict_seg_list[0],
+ segment);
+ segment = lst_removehead(&temp_list);
+ }
+
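+		/*
+		 * If the last extended picture is only a prefix for the next
+		 * frame, park its segments on the inter-picture prefix list
+		 * (to be prepended to that frame) and exclude it from the
+		 * extended picture count.
+		 */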
+ for (i = 0; i < VDEC_H264_MVC_MAX_VIEWS; i++) {
+ unsigned int j;
+ struct bspp_picture_data *ext_pic_data =
+ &preparsed_data->ext_pictures_data[i];
+
+ if (preparsed_data->ext_pictures_data[i].is_prefix) {
+ for (j = 0; j < BSPP_MAX_PICTURES_PER_BUFFER;
+ j++) {
+ segment = lst_removehead(&ext_pic_data->pict_seg_list[j]);
+ while (segment) {
+ lst_add(&str_ctx->inter_pict_data.pic_prefix_seg,
+ segment);
+ segment = lst_removehead
+ (&ext_pic_data->pict_seg_list[j]);
+ }
+ }
+ preparsed_data->num_ext_pictures--;
+ break;
+ }
+ }
+ } else if (pict_ctx->present && pict_ctx->sequ_hdr_info) {
+ /*
+ * Reduce the reference count since this picture will not be
+ * decoded.
+ */
+ pict_ctx->sequ_hdr_info->ref_count--;
+ /* Release sequence data. */
+ if (str_ctx->parser_callbacks.release_data_cb) {
+ str_ctx->parser_callbacks.release_data_cb((void *)&str_ctx->str_alloc,
+ BSPP_UNIT_SEQUENCE,
+ pict_ctx->sequ_hdr_info->secure_sequence_info);
+ }
+ }
+
+ /* Reset the group bitstream context */
+ lst_init(&str_ctx->grp_bstr_ctx.buffer_chain);
+ memset(&str_ctx->grp_bstr_ctx, 0, sizeof(str_ctx->grp_bstr_ctx));
+
+ /*
+	 * for now, return IMG_ERROR_NOT_SUPPORTED only if it was explicitly
+	 * set by the parser
+ */
+ result = (result == IMG_ERROR_NOT_SUPPORTED) ?
+ IMG_ERROR_NOT_SUPPORTED : IMG_SUCCESS;
+
+ if (end_of_pic)
+ parse_state->initialised = 0;
+
+ return result;
+
+error:
+ /* Free the SWSR list of buffers */
+ while (lst_first(&str_ctx->grp_bstr_ctx.in_flight_bufs))
+ lst_removehead(&str_ctx->grp_bstr_ctx.in_flight_bufs);
+
+ return result;
+}
+
+/*
+ * @Function bspp_stream_destroy
+ *
+ */
+int bspp_stream_destroy(void *str_context_handle)
+{
+ struct bspp_str_context *str_ctx = (struct bspp_str_context *)str_context_handle;
+ unsigned int i;
+ unsigned int sps_id;
+ unsigned int pps_id;
+ struct bspp_sequence_hdr_info *sequ_hdr_info;
+ struct bspp_pps_info *pps_info;
+ unsigned int result;
+
+ /* Validate input arguments. */
+ if (!str_context_handle) {
+ result = IMG_ERROR_INVALID_PARAMETERS;
+ goto error;
+ }
+
+ swsr_deinitialise(str_ctx->swsr_ctx.swsr_context);
+
+ /*
+ * Service all the picture decoded events and free any unused
+ * resources.
+ */
+ bspp_service_pictures_decoded(str_ctx);
+ for (sps_id = 0; sps_id < SEQUENCE_SLOTS; sps_id++)
+ bspp_remove_unused_sequence(str_ctx, sps_id);
+
+ if (str_ctx->vid_std_features.uses_pps) {
+ for (pps_id = 0; pps_id < PPS_SLOTS; pps_id++)
+ bspp_remove_unused_pps(str_ctx, pps_id);
+ }
+
+ if (str_ctx->vid_std_features.uses_vps) {
+ struct bspp_vps_info *vps_info;
+
+ for (i = 0; i < VPS_SLOTS; ++i) {
+ vps_info = lst_removehead(&str_ctx->str_alloc.vps_data_list[i]);
+
+ if (vps_info)
+ lst_add(&str_ctx->str_alloc.available_vps_list, vps_info);
+
+ /*
+			 * when we are done with the stream we should have a MAXIMUM of 1 VPS
+			 * per slot, so after removing this one we should have none.
+			 * In case of "decoded frames" this is not true because we send more
+			 * pictures for decode than what we expect to receive back, which
+			 * means that potentially additional sequences/PPSs are in the list
+ */
+ vps_info = lst_removehead(&str_ctx->str_alloc.vps_data_list[i]);
+ if (vps_info) {
+ do {
+ lst_add(&str_ctx->str_alloc.available_vps_list, vps_info);
+ vps_info =
+ lst_removehead(&str_ctx->str_alloc.vps_data_list[i]);
+ } while (vps_info);
+ }
+ VDEC_ASSERT(lst_empty(&str_ctx->str_alloc.vps_data_list[i]));
+ }
+
+ vps_info = NULL;
+ for (i = 0; i < MAX_VPSS; ++i) {
+ VDEC_ASSERT(!lst_empty(&str_ctx->str_alloc.available_vps_list));
+ vps_info = lst_removehead(&str_ctx->str_alloc.available_vps_list);
+ if (vps_info) {
+ kfree(vps_info->secure_vpsinfo);
+ kfree(vps_info);
+ } else {
+ VDEC_ASSERT(vps_info);
+ pr_err("vps still active at shutdown\n");
+ }
+ }
+ VDEC_ASSERT(lst_empty(&str_ctx->str_alloc.available_vps_list));
+ }
+
+ /* Free the memory required for this stream. */
+ for (i = 0; i < SEQUENCE_SLOTS; i++) {
+ sequ_hdr_info = lst_removehead(&str_ctx->str_alloc.sequence_data_list[i]);
+ if (sequ_hdr_info) {
+ if (str_ctx->parser_callbacks.release_data_cb)
+ str_ctx->parser_callbacks.release_data_cb
+ ((void *)&str_ctx->str_alloc,
+ BSPP_UNIT_SEQUENCE,
+ sequ_hdr_info->secure_sequence_info);
+ lst_add(&str_ctx->str_alloc.available_sequence_list,
+ sequ_hdr_info);
+ }
+
+ /*
+		 * when we are done with the stream we should have a MAXIMUM of
+		 * 1 sequence per slot, so after removing this one we should
+		 * have none. In case of "decoded frames" this is not true
+		 * because we send more pictures for decode than what we expect
+		 * to receive back, which means that potentially additional
+		 * sequences/PPSs are in the list
+ */
+ sequ_hdr_info = lst_removehead(&str_ctx->str_alloc.sequence_data_list[i]);
+ if (sequ_hdr_info) {
+ unsigned int count_extra_sequences = 0;
+
+ do {
+ count_extra_sequences++;
+ if (str_ctx->parser_callbacks.release_data_cb) {
+ str_ctx->parser_callbacks.release_data_cb
+ ((void *)&str_ctx->str_alloc,
+ BSPP_UNIT_SEQUENCE,
+ sequ_hdr_info->secure_sequence_info);
+ }
+ lst_add(&str_ctx->str_alloc.available_sequence_list,
+ sequ_hdr_info);
+ sequ_hdr_info =
+ lst_removehead(&str_ctx->str_alloc.sequence_data_list[i]);
+ } while (sequ_hdr_info);
+ }
+ }
+
+ if (str_ctx->vid_std_features.uses_pps) {
+ for (i = 0; i < PPS_SLOTS; i++) {
+ pps_info = lst_removehead(&str_ctx->str_alloc.pps_data_list[i]);
+ if (pps_info)
+ lst_add(&str_ctx->str_alloc.available_ppss_list, pps_info);
+
+ /*
+			 * when we are done with the stream we should have a
+			 * MAXIMUM of 1 PPS per slot, so after removing this one
+			 * we should have none.
+			 * In case of "decoded frames" this is not true because
+			 * we send more pictures for decode than what we expect
+			 * to receive back, which means that potentially
+			 * additional sequences/PPSs are in the list
+ */
+ pps_info = lst_removehead(&str_ctx->str_alloc.pps_data_list[i]);
+ if (pps_info) {
+ unsigned int count_extra_ppss = 0;
+
+ do {
+ count_extra_ppss++;
+ lst_add(&str_ctx->str_alloc.available_ppss_list,
+ pps_info);
+ pps_info =
+ lst_removehead(&str_ctx->str_alloc.pps_data_list[i]);
+ } while (pps_info);
+ }
+ }
+ }
+
+ for (i = 0; i < MAX_SEQUENCES; i++) {
+ sequ_hdr_info = lst_removehead(&str_ctx->str_alloc.available_sequence_list);
+ if (sequ_hdr_info && str_ctx->parser_callbacks.destroy_data_cb)
+ str_ctx->parser_callbacks.destroy_data_cb
+ (BSPP_UNIT_SEQUENCE, sequ_hdr_info->secure_sequence_info);
+ }
+
+ kfree(str_ctx->secure_sequence_info);
+ str_ctx->secure_sequence_info = NULL;
+ kfree(str_ctx->sequ_hdr_info);
+ str_ctx->sequ_hdr_info = NULL;
+
+ if (str_ctx->vid_std_features.uses_pps) {
+ for (i = 0; i < MAX_PPSS; i++) {
+ pps_info = lst_removehead(&str_ctx->str_alloc.available_ppss_list);
+ if (pps_info && str_ctx->parser_callbacks.destroy_data_cb)
+ str_ctx->parser_callbacks.destroy_data_cb
+ (BSPP_UNIT_PPS, pps_info->secure_pps_info);
+ }
+
+ kfree(str_ctx->secure_pps_info);
+ str_ctx->secure_pps_info = NULL;
+ kfree(str_ctx->pps_info);
+ str_ctx->pps_info = NULL;
+ }
+
+ /* destroy mutex */
+ mutex_destroy(str_ctx->bspp_mutex);
+ kfree(str_ctx->bspp_mutex);
+ str_ctx->bspp_mutex = NULL;
+
+ kfree(str_ctx);
+
+ return IMG_SUCCESS;
+error:
+ return result;
+}
+
+/*
+ * @Function bspp_set_codec_config
+ *
+ */
+int bspp_set_codec_config(const void *str_context_handle,
+ const struct vdec_codec_config *codec_config)
+{
+ struct bspp_str_context *str_ctx = (struct bspp_str_context *)str_context_handle;
+ unsigned int result = IMG_SUCCESS;
+
+ /* Validate input arguments. */
+ if (!str_context_handle || !codec_config) {
+ result = IMG_ERROR_INVALID_PARAMETERS;
+ goto error;
+ }
+
+ switch (str_ctx->vid_std) {
+ default:
+ result = IMG_ERROR_NOT_SUPPORTED;
+ break;
+ }
+error:
+ return result;
+}
+
+/*
+ * @Function bspp_stream_create
+ *
+ */
+int bspp_stream_create(const struct vdec_str_configdata *str_config_data,
+ void **str_ctx_handle,
+ struct bspp_ddbuf_array_info fw_sequence[],
+ struct bspp_ddbuf_array_info fw_pps[])
+{
+ struct bspp_str_context *str_ctx;
+ unsigned int result = IMG_SUCCESS;
+ unsigned int i;
+ struct bspp_sequence_hdr_info *sequ_hdr_info;
+ struct bspp_pps_info *pps_info;
+ struct bspp_parse_state *parse_state;
+
+ /* Allocate a stream structure */
+ str_ctx = kmalloc(sizeof(*str_ctx), GFP_KERNEL);
+ if (!str_ctx) {
+ result = IMG_ERROR_OUT_OF_MEMORY;
+ goto error;
+ }
+ memset(str_ctx, 0, sizeof(*str_ctx));
+
+ /* Initialise the stream context structure. */
+ str_ctx->sequ_hdr_id = BSPP_INVALID;
+ str_ctx->vid_std = str_config_data->vid_std;
+ str_ctx->bstr_format = str_config_data->bstr_format;
+ str_ctx->disable_mvc = str_config_data->disable_mvc;
+ str_ctx->full_scan = str_config_data->full_scan;
+ str_ctx->immediate_decode = str_config_data->immediate_decode;
+ str_ctx->intra_frame_closed_gop = str_config_data->intra_frame_closed_gop;
+
+ parse_state = &str_ctx->parse_state;
+
+ /* Setup group buffer processing state. */
+ parse_state->inter_pict_ctx = &str_ctx->inter_pict_data;
+ parse_state->prev_bottom_pic_flag = (unsigned char)BSPP_INVALID;
+ parse_state->next_pic_is_new = 1;
+ parse_state->prev_frame_num = BSPP_INVALID;
+ parse_state->second_field_flag = 0;
+
+ lst_init(&str_ctx->grp_bstr_ctx.buffer_chain);
+
+ if (str_ctx->vid_std < VDEC_STD_MAX && parser_fxns[str_ctx->vid_std].set_parser_config) {
+ parser_fxns[str_ctx->vid_std].set_parser_config(str_ctx->bstr_format,
+ &str_ctx->vid_std_features,
+ &str_ctx->swsr_ctx,
+ &str_ctx->parser_callbacks,
+ &str_ctx->inter_pict_data);
+ } else {
+ result = IMG_ERROR_NOT_SUPPORTED;
+ goto error;
+ }
+
+ /* Allocate the memory required for this stream for Sequence/PPS info */
+ lst_init(&str_ctx->str_alloc.available_sequence_list);
+
+ str_ctx->sequ_hdr_info = kmalloc((MAX_SEQUENCES * sizeof(struct bspp_sequence_hdr_info)),
+ GFP_KERNEL);
+ if (!str_ctx->sequ_hdr_info) {
+ result = IMG_ERROR_OUT_OF_MEMORY;
+ goto error;
+ }
+ memset(str_ctx->sequ_hdr_info, 0x00,
+ (MAX_SEQUENCES * sizeof(struct bspp_sequence_hdr_info)));
+
+ str_ctx->secure_sequence_info =
+ kmalloc((MAX_SEQUENCES * str_ctx->vid_std_features.seq_size),
+ GFP_KERNEL);
+ if (!str_ctx->secure_sequence_info) {
+ result = IMG_ERROR_OUT_OF_MEMORY;
+ goto error;
+ }
+ memset(str_ctx->secure_sequence_info, 0x00,
+ (MAX_SEQUENCES * str_ctx->vid_std_features.seq_size));
+
+ sequ_hdr_info = (struct bspp_sequence_hdr_info *)(str_ctx->sequ_hdr_info);
+ for (i = 0; i < MAX_SEQUENCES; i++) {
+ /* Deal with the device memory for FW SPS data */
+ sequ_hdr_info->fw_sequence = fw_sequence[i];
+ sequ_hdr_info->sequ_hdr_info.bufmap_id =
+ fw_sequence[i].ddbuf_info.bufmap_id;
+ sequ_hdr_info->sequ_hdr_info.buf_offset =
+ fw_sequence[i].buf_offset;
+ sequ_hdr_info->secure_sequence_info = (void *)(str_ctx->secure_sequence_info +
+ (i * str_ctx->vid_std_features.seq_size));
+
+ lst_add(&str_ctx->str_alloc.available_sequence_list,
+ sequ_hdr_info);
+ sequ_hdr_info++;
+ }
+
+ if (str_ctx->vid_std_features.uses_pps) {
+ lst_init(&str_ctx->str_alloc.available_ppss_list);
+ str_ctx->pps_info = kmalloc((MAX_PPSS * sizeof(struct bspp_pps_info)), GFP_KERNEL);
+ if (!str_ctx->pps_info) {
+ result = IMG_ERROR_OUT_OF_MEMORY;
+ goto error;
+ }
+ memset(str_ctx->pps_info, 0x00, (MAX_PPSS * sizeof(struct bspp_pps_info)));
+ str_ctx->secure_pps_info = kmalloc((MAX_PPSS * str_ctx->vid_std_features.pps_size),
+ GFP_KERNEL);
+ if (!str_ctx->secure_pps_info) {
+ result = IMG_ERROR_OUT_OF_MEMORY;
+ goto error;
+ }
+ memset(str_ctx->secure_pps_info, 0x00,
+ (MAX_PPSS * str_ctx->vid_std_features.pps_size));
+
+ pps_info = (struct bspp_pps_info *)(str_ctx->pps_info);
+ for (i = 0; i < MAX_PPSS; i++) {
+ /* Deal with the device memory for FW PPS data */
+ pps_info->fw_pps = fw_pps[i];
+ pps_info->bufmap_id = fw_pps[i].ddbuf_info.bufmap_id;
+ pps_info->buf_offset = fw_pps[i].buf_offset;
+
+ /*
+ * We have no container for the PPS that passes down to the kernel,
+ * for this reason the h264 secure parser needs to populate that
+ * info into the picture header (Second)PictAuxData.
+ */
+ pps_info->secure_pps_info = (void *)(str_ctx->secure_pps_info + (i *
+ str_ctx->vid_std_features.pps_size));
+
+ lst_add(&str_ctx->str_alloc.available_ppss_list, pps_info);
+ pps_info++;
+ }
+
+ /* As only standards that use PPS also use VUI, initialise
+ * the appropriate data structures here.
+ * Initialise the list of raw bitstream data containers.
+ */
+ lst_init(&str_ctx->str_alloc.raw_data_list_available);
+ lst_init(&str_ctx->str_alloc.raw_data_list_used);
+ }
+
+ if (str_ctx->vid_std_features.uses_vps) {
+ struct bspp_vps_info *vps_info;
+
+ lst_init(&str_ctx->str_alloc.available_vps_list);
+ for (i = 0; i < MAX_VPSS; ++i) {
+ vps_info = kmalloc(sizeof(*vps_info), GFP_KERNEL);
+ VDEC_ASSERT(vps_info);
+ if (!vps_info) {
+ result = IMG_ERROR_OUT_OF_MEMORY;
+ goto error;
+ }
+
+ memset(vps_info, 0x00, sizeof(struct bspp_vps_info));
+ /*
+ * for VPS we do not allocate device memory since (at least for now)
+ * there is no need to pass any data from VPS directly to FW
+ */
+ /* Allocate memory for BSPP local VPS data structure. */
+ vps_info->secure_vpsinfo =
+ kmalloc(str_ctx->vid_std_features.vps_size, GFP_KERNEL);
+
+ VDEC_ASSERT(vps_info->secure_vpsinfo);
+ if (!vps_info->secure_vpsinfo) {
+ result = IMG_ERROR_OUT_OF_MEMORY;
+ goto error;
+ }
+ memset(vps_info->secure_vpsinfo, 0, str_ctx->vid_std_features.vps_size);
+
+ lst_add(&str_ctx->str_alloc.available_vps_list, vps_info);
+ }
+ }
+
+ /* ... and initialise the lists that will use this data */
+ for (i = 0; i < SEQUENCE_SLOTS; i++)
+ lst_init(&str_ctx->str_alloc.sequence_data_list[i]);
+
+ if (str_ctx->vid_std_features.uses_pps)
+ for (i = 0; i < PPS_SLOTS; i++)
+ lst_init(&str_ctx->str_alloc.pps_data_list[i]);
+
+ str_ctx->bspp_mutex = kzalloc(sizeof(*str_ctx->bspp_mutex), GFP_KERNEL);
+ if (!str_ctx->bspp_mutex) {
+ result = -ENOMEM;
+ goto error;
+ }
+ mutex_init(str_ctx->bspp_mutex);
+
+ /* Initialise the software shift-register */
+ swsr_initialise(bspp_exception_handler, &str_ctx->parse_ctx,
+ (swsr_callback_fxn) bspp_shift_reg_cb,
+ &str_ctx->grp_bstr_ctx,
+ &str_ctx->swsr_ctx.swsr_context);
+
+ /* Setup the parse context */
+ str_ctx->parse_ctx.swsr_context = str_ctx->swsr_ctx.swsr_context;
+
+ *str_ctx_handle = str_ctx;
+
+ return IMG_SUCCESS;
+
+error:
+ if (str_ctx) {
+ kfree(str_ctx->sequ_hdr_info);
+ kfree(str_ctx->secure_sequence_info);
+ kfree(str_ctx->pps_info);
+ kfree(str_ctx->secure_pps_info);
+ kfree(str_ctx);
+ }
+
+ return result;
+}
+
+void bspp_freeraw_sei_datacontainer(const void *str_res,
+ struct vdec_raw_bstr_data *rawsei_datacontainer)
+{
+ struct bspp_raw_sei_alloc *rawsei_alloc = NULL;
+
+ /* Check input params. */
+ if (str_res && rawsei_datacontainer) {
+ struct bspp_stream_alloc_data *alloc_data =
+ (struct bspp_stream_alloc_data *)str_res;
+
+ rawsei_alloc = container_of(rawsei_datacontainer,
+ struct bspp_raw_sei_alloc,
+ raw_sei_data);
+ memset(&rawsei_alloc->raw_sei_data, 0, sizeof(rawsei_alloc->raw_sei_data));
+ lst_remove(&alloc_data->raw_sei_alloc_list, rawsei_alloc);
+ kfree(rawsei_alloc);
+ }
+}
+
+void bspp_freeraw_sei_datalist(const void *str_res, struct vdec_raw_bstr_data *rawsei_datalist)
+{
+ /* Check input params. */
+ if (rawsei_datalist && str_res) {
+ struct vdec_raw_bstr_data *sei_raw_datacurr = NULL;
+
+		/* Start from the first element... */
+ sei_raw_datacurr = rawsei_datalist;
+ /* Free all the linked raw SEI data containers. */
+ while (sei_raw_datacurr) {
+ struct vdec_raw_bstr_data *seiraw_datanext =
+ sei_raw_datacurr->next;
+ bspp_freeraw_sei_datacontainer(str_res, sei_raw_datacurr);
+ sei_raw_datacurr = seiraw_datanext;
+ }
+ }
+}
+
+void bspp_streamrelese_rawbstrdataplain(const void *str_res, const void *rawdata)
+{
+ struct bspp_stream_alloc_data *str_alloc =
+ (struct bspp_stream_alloc_data *)str_res;
+ struct bspp_raw_bitstream_data *rawbstrdata =
+ (struct bspp_raw_bitstream_data *)rawdata;
+
+ if (rawbstrdata) {
+ /* Decrement the raw bitstream data reference count. */
+ rawbstrdata->ref_count--;
+ /* If no entity is referencing the raw
+ * bitstream data any more
+ */
+ if (rawbstrdata->ref_count == 0) {
+			/* ... free the raw bitstream data buffer... */
+ kfree(rawbstrdata->raw_bitstream_data.data);
+ memset(&rawbstrdata->raw_bitstream_data, 0,
+ sizeof(rawbstrdata->raw_bitstream_data));
+ /* ...and return it to the list. */
+ lst_remove(&str_alloc->raw_data_list_used, rawbstrdata);
+ lst_add(&str_alloc->raw_data_list_available, rawbstrdata);
+ }
+ }
+}
+
+struct bspp_vps_info *bspp_get_vpshdr(void *str_res, unsigned int vps_id)
+{
+ struct bspp_stream_alloc_data *alloc_data =
+ (struct bspp_stream_alloc_data *)str_res;
+
+ if (vps_id >= VPS_SLOTS || !alloc_data)
+ return NULL;
+
+ return lst_last(&alloc_data->vps_data_list[vps_id]);
+}
diff --git a/drivers/staging/media/vxd/decoder/bspp.h b/drivers/staging/media/vxd/decoder/bspp.h
new file mode 100644
index 000000000000..2198d9d6966e
--- /dev/null
+++ b/drivers/staging/media/vxd/decoder/bspp.h
@@ -0,0 +1,363 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * VXD Bitstream Buffer Pre-Parser
+ *
+ * Copyright (c) Imagination Technologies Ltd.
+ * Copyright (c) 2021 Texas Instruments Incorporated - http://www.ti.com/
+ *
+ * Authors:
+ * Lakshmi Sankar <[email protected]>
+ * Re-written for upstreaming
+ * Prashanth Kumar Amai <[email protected]>
+ * Sidraya Jayagond <[email protected]>
+ */
+#ifndef __BSPP_H__
+#define __BSPP_H__
+
+#include <linux/types.h>
+
+#include "h264fw_data.h"
+#include "lst.h"
+#include "vdec_defs.h"
+
+/*
+ * There are up to 2 pictures in each buffer
+ * (plus trailing data for the next picture, e.g. PPS).
+ */
+#define BSPP_MAX_PICTURES_PER_BUFFER 3
+
+#define BSPP_INVALID ((unsigned int)(-1))
+
+/*
+ * This enables signalling of a closed GOP at every I-frame, adding resilience
+ * to the seeking functionality.
+ */
+#define I_FRAME_SIGNALS_CLOSED_GOP
+
+/*
+ * enum bspp_error_type - enumeration of parsing errors; a different error
+ * flag is used for each type of data unit
+ */
+enum bspp_error_type {
+ /* No Error in parsing. */
+ BSPP_ERROR_NONE = (0),
+ /* Correction in VSH, Replaced VSH with faulty one */
+ BSPP_ERROR_CORRECTION_VSH = (1 << 0),
+ /*
+ * Correction in parsed Value, clamp the value if it goes beyond
+ * the limit
+ */
+ BSPP_ERROR_CORRECTION_VALIDVALUE = (1 << 1),
+ /* Error in Aux data (i.e. PPS in H.264) parsing */
+ BSPP_ERROR_AUXDATA = (1 << 2),
+ /* Error in parsing, more data remains in VSH data unit after parsing */
+ BSPP_ERROR_DATA_REMAINS = (1 << 3),
+ /* Error in parsing, parsed codeword is invalid */
+ BSPP_ERROR_INVALID_VALUE = (1 << 4),
+ /* Error in parsing, parsing error */
+ BSPP_ERROR_DECODE = (1 << 5),
+ /* reference frame is not available for decoding */
+ BSPP_ERROR_NO_REF_FRAME = (1 << 6),
+ /* Non IDR frame loss detected */
+ BSPP_ERROR_NONIDR_FRAME_LOSS = (1 << 7),
+ /* IDR frame loss detected */
+ BSPP_ERROR_IDR_FRAME_LOSS = (1 << 8),
+ /* Error in parsing, insufficient data to complete parsing */
+ BSPP_ERROR_INSUFFICIENT_DATA = (1 << 9),
+	/* Severe error indicating there is no support for this picture data */
+	BSPP_ERROR_UNSUPPORTED = (1 << 10),
+	/* Severe error from which parsing could not recover */
+	BSPP_ERROR_UNRECOVERABLE = (1 << 11),
+	/* Severe error indicating that the NAL header is absent after an SCP */
+ BSPP_ERROR_NO_NALHEADER = (1 << 12),
+ BSPP_ERROR_NO_SEQUENCE_HDR = (1 << 13),
+ BSPP_ERROR_SIGNALED_IN_STREAM = (1 << 14),
+ BSPP_ERROR_UNKNOWN_DATAUNIT_DETECTED = (1 << 15),
+ BSPP_ERROR_NO_PPS = (1 << 16),
+ BSPP_ERROR_NO_VPS = (1 << 17),
+ BSPP_ERROR_OUT_OF_MEMORY = (1 << 18),
+ /* The shift value of the last error bit */
+ BSPP_ERROR_MAX_SHIFT = 18,
+ BSPP_ERROR_FORCE32BITS = 0x7FFFFFFFU
+};
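+
+/*
+ * Illustrative sketch only (not part of the driver API): callers typically
+ * treat a unit as unusable when either of the severe error bits is set, as
+ * done for sequence headers in bspp_return_or_store_sequence_hdr(). The
+ * helper name below is hypothetical.
+ */
+static inline int bspp_error_is_severe(unsigned int parse_error)
+{
+	return (parse_error & (BSPP_ERROR_UNRECOVERABLE |
+			       BSPP_ERROR_UNSUPPORTED)) != 0;
+}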
+
+/*
+ * struct bspp_ddbuf_info - Buffer info
+ * @buf_size: The size of the buffer (in bytes)
+ * @cpu_virt_addr: The CPU virtual address (mapped into the local cpu MMU)
+ * @mem_attrib: Memory attributes
+ * @buf_id: buffer identifier
+ * @bufmap_id: buffer mapping identifier
+ */
+struct bspp_ddbuf_info {
+ unsigned int buf_size;
+ void *cpu_virt_addr;
+ enum sys_emem_attrib mem_attrib;
+ unsigned int buf_id;
+ unsigned int bufmap_id;
+};
+
+/*
+ * struct bspp_ddbuf_array_info - Buffer array info
+ * @ddbuf_info: Buffer info (container)
+ * @buf_element_size: Size of each element
+ * @buf_offset: Offset for each element
+ */
+struct bspp_ddbuf_array_info {
+ struct bspp_ddbuf_info ddbuf_info;
+ unsigned int buf_element_size;
+ unsigned int buf_offset;
+};
+
+/**
+ * struct bspp_bitstr_seg - Bitstream segment
+ * @lst_padding: padding reserved for list linkage
+ * @data_size: Size of data
+ * @data_byte_offset: Offset for data
+ * @bstr_seg_flag: flag indicating the bitstream segment type
+ * @start_code_suffix: start code suffix
+ * @bufmap_id: Buffer map ID
+ */
+struct bspp_bitstr_seg {
+ void *lst_padding;
+ unsigned int data_size;
+ unsigned int data_byte_offset;
+ unsigned int bstr_seg_flag;
+ unsigned char start_code_suffix;
+ unsigned int bufmap_id;
+};
+
+/*
+ * struct bspp_pict_data - Picture Header Data Information
+ * @bufmap_id: Buffer ID to use inside kernel #VXDIO_sDdBufInfo
+ * @buf_offset: Buffer offset (for packed device buffers, e.g. PPS)
+ * @pic_data: Picture data
+ * @size: Size (in bytes) of data.
+ * @id: Data identifier.
+ */
+struct bspp_pict_data {
+ unsigned int bufmap_id;
+ unsigned int buf_offset;
+ void *pic_data;
+ unsigned int size;
+ unsigned int id;
+};
+
+/*
+ * struct bspp_pict_hdr_info - Picture Header Information
+ */
+struct bspp_pict_hdr_info {
+ /*
+ * Picture is entirely intra-coded and doesn't use any reference data.
+ * NOTE: should be IMG_FALSE if this cannot be determined.
+ */
+ int intra_coded;
+ /* Picture might be referenced by subsequent pictures. */
+ int ref;
+ /* Picture is a field as part of a frame. */
+ int field;
+ /* Emulation prevention bytes are present in picture data. */
+ int emulation_prevention;
+ /* Post Processing */
+ int post_processing;
+ /* Macroblocks within the picture may not occur in raster-scan order */
+ int discontinuous_mbs;
+	/* Flag to indicate data is spread across multiple buffers. */
+ int fragmented_data;
+ /* SOS fields count value */
+ unsigned char sos_count;
+	/* Indicates whether this picture is the first of the sequence */
+ int first_pic_of_sequence;
+
+ enum vdecfw_parsermode parser_mode;
+ /* Total size of picture data which is going to be submitted. */
+ unsigned int pic_data_size;
+ /* Size of coded frame as specified in the bitstream. */
+ struct vdec_pict_size coded_frame_size;
+ /* Display information for picture */
+ struct vdec_pict_disp_info disp_info;
+
+ /* Picture auxiliary data (e.g. H.264 SPS/PPS) */
+ struct bspp_pict_data pict_aux_data;
+ /* Picture auxiliary data (e.g. H.264 SPS/PPS) for 2nd picture */
+ struct bspp_pict_data second_pict_aux_data;
+ /* Slice group-map data. */
+ struct bspp_pict_data pict_sgm_data;
+#ifdef HAS_JPEG
+ /* JPEG specific picture header information.*/
+ struct vdec_jpeg_pict_hdr_info jpeg_pict_hdr_info;
+#endif
+
+ struct h264_pict_hdr_info {
+ void *raw_vui_data;
+ void *raw_sei_data_list_first_field;
+ void *raw_sei_data_list_second_field;
+ unsigned char nal_ref_idc;
+ unsigned short frame_num;
+ } h264_pict_hdr_info;
+
+ struct { /* HEVC specific frame information.*/
+ int range_ext_present;
+ int is_full_range_ext;
+ void *raw_vui_data;
+ void *raw_sei_datalist_firstfield;
+ void *raw_sei_datalist_secondfield;
+ } hevc_pict_hdr_info;
+};
+
+/*
+ * struct bspp_sequ_hdr_info - Sequence header information
+ */
+struct bspp_sequ_hdr_info {
+ unsigned int sequ_hdr_id;
+ unsigned int ref_count;
+ struct vdec_comsequ_hdrinfo com_sequ_hdr_info;
+ unsigned int bufmap_id;
+ unsigned int buf_offset;
+};
+
+/*
+ * struct bspp_picture_data - Picture data
+ */
+struct bspp_picture_data {
+ /* Anonymous */
+ /*
+ * Bitstream segments that contain other (non-picture) data before
+ * the picture in the buffer (elements of type #VDECDD_sBitStrSeg).
+ */
+ struct lst_t pre_pict_seg_list[BSPP_MAX_PICTURES_PER_BUFFER];
+ /* Picture */
+ unsigned int sequ_hdr_id;
+ struct bspp_pict_hdr_info pict_hdr_info;
+ /*
+	 * Bitstream segments that contain picture data, one for each field
+	 * (if present) in the same group of buffers (elements of type
+	 * #VDECDD_sBitStrSeg).
+ */
+ struct lst_t pict_seg_list[BSPP_MAX_PICTURES_PER_BUFFER];
+ void *pict_tag_param[BSPP_MAX_PICTURES_PER_BUFFER];
+ int is_prefix;
+};
+
+/*
+ * struct bspp_preparsed_data - Pre-parsed buffer information
+ */
+struct bspp_preparsed_data {
+ /* Sequence */
+ int new_sequence;
+ struct bspp_sequ_hdr_info sequ_hdr_info;
+ int sequence_end;
+
+ /* Closed GOP */
+ int closed_gop;
+
+ /* Picture */
+ int new_picture;
+ int new_fragment;
+ struct bspp_picture_data picture_data;
+
+ /* Additional pictures (MVC extension) */
+ int new_sub_sequence;
+ struct bspp_sequ_hdr_info ext_sequ_hdr_info;
+ /* non-base view pictures + picture prefix for next frame */
+ struct bspp_picture_data ext_pictures_data[VDEC_H264_MVC_MAX_VIEWS];
+ unsigned int num_ext_pictures;
+
+ /*
+ * Additional information
+ * Flags word to indicate error in parsing/decoding - see
+ * #VDEC_eErrorType
+ */
+ unsigned int error_flags;
+};
+
+/*
+ * struct bspp_picture_decoded - used to store picture-decoded information for
+ * resource handling (sequences/PPSs)
+ */
+struct bspp_picture_decoded {
+ void **lst_link;
+ unsigned int sequ_hdr_id;
+ unsigned int pps_id;
+ unsigned int second_pps_id;
+ int not_decoded;
+ struct vdec_raw_bstr_data *sei_raw_data_first_field;
+ struct vdec_raw_bstr_data *sei_raw_data_second_field;
+};
+
+/*
+ * @Function bspp_stream_create
+ * @Description Creates a stream context for which to pre-parse bitstream
+ * buffers. The following allocations will take place:
+ * - Local storage for high-level header parameters (secure)
+ * - Host memory for common sequence information (insecure)
+ * - Device memory for Sequence information (secure)
+ * - Device memory for PPS (secure, H.264 only)
+ * @Input str_config_data : config data corresponding to the bitstream
+ * @Output str_context : A pointer used to return the stream context handle
+ * @Input fw_sequ: FW sequence data
+ * @Input fw_pps: FW pps data
+ * @Return This function returns either IMG_SUCCESS or an error code.
+ */
+int bspp_stream_create(const struct vdec_str_configdata *str_config_data,
+ void **str_context,
+ struct bspp_ddbuf_array_info fw_sequ[],
+ struct bspp_ddbuf_array_info fw_pps[]);
+
+/*
+ * @Function bspp_set_codec_config
+ * @Description This function is used to set the out-of-band codec config data.
+ * @Input str_context_handle : Stream context handle.
+ * @Input codec_config : Codec-config data
+ * @Return This function returns either IMG_SUCCESS or an error code.
+ */
+int bspp_set_codec_config(const void *str_context_handle,
+ const struct vdec_codec_config *codec_config);
+
+/*
+ * @Function bspp_stream_destroy
+ * @Description Destroys a stream context used to pre-parse bitstream buffers.
+ * @Input str_context_handle : Stream context handle.
+ * @Return This function returns either IMG_SUCCESS or an error code.
+ */
+int bspp_stream_destroy(void *str_context_handle);
+
+/*
+ * @Function bspp_submit_picture_decoded
+ */
+int bspp_submit_picture_decoded(void *str_context_handle,
+ struct bspp_picture_decoded *picture_decoded);
+
+/*
+ * @Function bspp_stream_submit_buffer
+ */
+int bspp_stream_submit_buffer(void *str_context_handle,
+ const struct bspp_ddbuf_info *ddbuf_info,
+ unsigned int bufmap_id,
+ unsigned int data_size,
+ void *pict_tag_param,
+ enum vdec_bstr_element_type bstr_element_type);
+
+/*
+ * @Function bspp_stream_preparse_buffers
+ * @Description Pre-parses a bitstream buffer and returns picture information
+ * in a structure that also signals when the buffer is last in picture.
+ * @Input str_context_handle: Stream context handle.
+ * @Input contiguous_buf_info : Contiguous buffer information (covering
+ * multiple segments that may be non-contiguous in memory)
+ * @Input contiguous_buf_map_id : Contiguous Buffer Map id
+ * @Input segments: Pointer to a list of segments (see #VDECDD_sBitStrSeg)
+ * @Output preparsed_data: Container to return picture information. Only
+ * provide when buffer is last in picture (see #bForceEop in
+ * function #VDEC_StreamSubmitBstrBuf)
+ * @Input eos_flag: flag that indicates end of stream
+ * @Return int : This function returns either IMG_SUCCESS or an error code.
+ */
+int bspp_stream_preparse_buffers
+ (void *str_context_handle,
+ const struct bspp_ddbuf_info *contiguous_buf_info,
+ unsigned int contiguous_buf_map_id,
+ struct lst_t *segments,
+ struct bspp_preparsed_data *preparsed_data,
+ int eos_flag);
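+
+/*
+ * Illustrative usage sketch only (not used by the driver): a minimal stream
+ * lifecycle built from the API declared above. The helper name, the single
+ * submitted buffer and the assumption that IMG_SUCCESS is zero are all
+ * illustrative; the fw_sequ/fw_pps arrays are assumed to be sized and
+ * populated by the caller as required by bspp_stream_create().
+ */
+static inline int bspp_example_stream_lifecycle(const struct vdec_str_configdata *cfg,
+						struct bspp_ddbuf_array_info fw_sequ[],
+						struct bspp_ddbuf_array_info fw_pps[],
+						const struct bspp_ddbuf_info *buf,
+						unsigned int bufmap_id,
+						unsigned int data_size,
+						struct lst_t *segments,
+						struct bspp_preparsed_data *preparsed)
+{
+	void *str_ctx = NULL;
+	int ret;
+
+	/* Create a pre-parser context for the stream. */
+	ret = bspp_stream_create(cfg, &str_ctx, fw_sequ, fw_pps);
+	if (ret)
+		return ret;
+
+	/* Submit one bitstream buffer for the current picture... */
+	ret = bspp_stream_submit_buffer(str_ctx, buf, bufmap_id, data_size,
+					NULL, VDEC_BSTRELEMENT_UNSPECIFIED);
+	if (ret)
+		goto out_destroy;
+
+	/* ...and pre-parse it once the picture is complete (eos_flag = 1). */
+	ret = bspp_stream_preparse_buffers(str_ctx, NULL, 0, segments,
+					   preparsed, 1);
+
+out_destroy:
+	bspp_stream_destroy(str_ctx);
+	return ret;
+}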
+
+#endif /* __BSPP_H__ */
diff --git a/drivers/staging/media/vxd/decoder/bspp_int.h b/drivers/staging/media/vxd/decoder/bspp_int.h
new file mode 100644
index 000000000000..e37c8c9c415b
--- /dev/null
+++ b/drivers/staging/media/vxd/decoder/bspp_int.h
@@ -0,0 +1,514 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * VXD Bitstream Buffer Pre-Parser Internal
+ *
+ * Copyright (c) Imagination Technologies Ltd.
+ * Copyright (c) 2021 Texas Instruments Incorporated - http://www.ti.com/
+ *
+ * Authors:
+ * Amit Makani <[email protected]>
+ *
+ * Re-written for upstreaming
+ * Sidraya Jayagond <[email protected]>
+ */
+#ifndef __BSPP_INT_H__
+#define __BSPP_INT_H__
+
+#include "bspp.h"
+#include "swsr.h"
+
+#define VDEC_MB_DIMENSION (16)
+#define MAX_COMPONENTS (4)
+
+#define print_value(a, ...)
+
+#define BSPP_DEFAULT_SEQUENCE_ID (0)
+
+enum bspp_unit_type {
+ BSPP_UNIT_NONE = 0,
+ /* Only relevant for HEVC. */
+ BSPP_UNIT_VPS,
+ /* Only relevant for h.264 and HEVC */
+ BSPP_UNIT_SEQUENCE, BSPP_UNIT_PPS,
+ /*
+	 * Data from these units should be placed in non-picture bitstream
+ * segment lists. In conformant streams these units should not occur
+ * in-between the picture data.
+ */
+ BSPP_UNIT_PICTURE,
+ BSPP_UNIT_SKIP_PICTURE,
+ BSPP_UNIT_NON_PICTURE,
+ BSPP_UNIT_UNCLASSIFIED,
+ /* Unit is unsupported, don't change segment list */
+ BSPP_UNIT_UNSUPPORTED,
+ BSPP_UNIT_MAX,
+ BSPP_UNIT_FORCE32BITS = 0x7FFFFFFFU
+};
+
+struct bspp_raw_bitstream_data {
+ void **lst_link;
+ unsigned int ref_count;
+ struct vdec_raw_bstr_data raw_bitstream_data;
+};
+
+/*
+ * struct bspp_h264_inter_pict_ctx
+ * @Brief: This structure contains H264 state to be retained between pictures.
+ */
+struct bspp_h264_inter_pict_ctx {
+ /*
+ * The following get applied to every picture until updated
+ * (bitstream properties)
+ */
+ int disable_vdmc_filt;
+ int b4x4transform_mb_unavailable;
+ /*
+ * The following get applied to the next picture only
+ * (picture properties)
+ */
+ int repeat_first_field;
+ unsigned int max_frm_repeat;
+ /*
+ * Control variable to decide when to attach the SEI info
+ * (picture properties) to a picture
+ */
+ int sei_info_attached_to_pic;
+ /*
+	 * The following variable is an approximation because we cannot
+	 * parse out-of-order; it takes its value as described:
+ * 1) Initially it is BSPP_INVALID
+ * 2) The first SPS sets it to its SPSid
+ * 3) The last bspp_H264SeiBufferingPeriod sets it, and it is used
+ * for every SEI parsing until updated by another
+ * bspp_H264SeiBufferingPeriod message
+ */
+ unsigned int active_sps_for_sei_parsing;
+ unsigned short current_view_id;
+ struct vdec_raw_bstr_data *sei_raw_data_list;
+};
+
+/* This structure contains HEVC state to be retained between pictures. */
+struct bspp_hevc_inter_pict_ctx {
+ /* Picture count in a sequence */
+ unsigned int seq_pic_count;
+ struct {
+ /* There was EOS NAL detected and no new picture yet */
+ unsigned eos_detected : 1;
+ /* This is first picture after EOS NAL */
+ unsigned first_after_eos : 1;
+ };
+
+ /* control variable to decide when to attach the SEI info
+ * (picture properties) to a picture.
+ */
+ unsigned char sei_info_attached_to_pic;
+ /* Raw SEI list to be attached to a picture. */
+ struct vdec_raw_bstr_data *sei_rawdata_list;
+ /* Handle to a picture header field to attach the raw SEI list to. */
+ void **hndl_pichdr_sei_rawdata_list;
+};
+
+/*
+ * struct bspp_inter_pict_data
+ * @Brief This structure contains state to be retained between pictures.
+ */
+struct bspp_inter_pict_data {
+ /* A closed GOP has occurred in the bitstream. */
+ int seen_closed_gop;
+ /* Closed GOP has been signaled by a unit before the next picture */
+ int new_closed_gop;
+ /* Indicates whether or not DPB flush is needed */
+ int not_dpb_flush;
+ struct lst_t pic_prefix_seg;
+ union {
+ struct bspp_h264_inter_pict_ctx h264_ctx;
+ struct bspp_hevc_inter_pict_ctx hevc_ctx;
+ };
+};
+
+/*
+ * struct bspp_parse_state
+ * @Brief This structure contains parse state
+ */
+struct bspp_parse_state {
+ struct bspp_inter_pict_data *inter_pict_ctx;
+ int initialised;
+
+ /* Input/Output (H264 etc. state). */
+ /* For SCP ASO detection we need to log 3 components */
+ unsigned int prev_first_mb_in_slice[MAX_COMPONENTS];
+ struct bspp_pict_hdr_info *next_pict_hdr_info;
+ unsigned char prev_bottom_pic_flag;
+ unsigned char second_field_flag;
+ unsigned char next_pic_is_new;
+ unsigned int prev_frame_num;
+ unsigned int prev_pps_id;
+ unsigned int prev_field_pic_flag;
+ unsigned int prev_nal_ref_idc;
+ unsigned int prev_pic_order_cnt_lsb;
+ int prev_delta_pic_order_cnt_bottom;
+ int prev_delta_pic_order_cnt[2];
+ int prev_nal_unit_type;
+ int prev_idr_pic_id;
+ int discontinuous_mb;
+ /* Position in bitstream before parsing a unit */
+ unsigned long long prev_byte_offset_buf;
+ unsigned int prev_buf_map_id;
+ unsigned int prev_buf_data_size;
+ /*
+	 * Flags word to indicate error in parsing/decoding
+ * - see #VDEC_eErrorType.
+ */
+ unsigned int error_flags;
+ /* Outputs. */
+ int new_closed_gop;
+ unsigned char new_view;
+ unsigned char is_prefix;
+ int first_chunk;
+};
+
+/*
+ * struct bspp_pps_info
+ * @Brief Contains PPS information
+ */
+struct bspp_pps_info {
+ void **lst_link;
+ /* PPS Id. INSECURE MEMORY HOST */
+ unsigned int pps_id;
+ /* Reference count for PPS. INSECURE MEMORY HOST */
+ unsigned int ref_count;
+ struct bspp_ddbuf_array_info fw_pps;
+ /* Buffer ID to be used in Kernel */
+ unsigned int bufmap_id;
+ /* Parsing Info. SECURE MEMORY HOST */
+ void *secure_pps_info;
+ /* Buffer Offset to be used in kernel */
+ unsigned int buf_offset;
+};
+
+/*
+ * struct bspp_sequence_hdr_info
+ * @Brief Contains SPS information
+ */
+struct bspp_sequence_hdr_info {
+ void **lst_link;
+ /* Reference count for sequence header */
+ unsigned int ref_count;
+ struct bspp_sequ_hdr_info sequ_hdr_info;
+ struct bspp_ddbuf_array_info fw_sequence;
+ /* Parsing Info. SECURE MEMORY HOST */
+ void *secure_sequence_info;
+};
+
+enum bspp_element_status {
+ BSPP_UNALLOCATED = 0,
+ BSPP_AVAILABLE,
+ BSPP_UNAVAILABLE,
+ BSPP_STATUSMAX,
+ BSPP_FORCE32BITS = 0x7FFFFFFFU
+};
+
+struct bspp_vps_info {
+ void **lst_link;
+ /* VPS Id INSECURE MEMORY HOST */
+ unsigned int vps_id;
+ /* Reference count for video header. INSECURE MEMORY HOST */
+ unsigned int ref_count;
+	/* Parsing Info. SECURE MEMORY HOST */
+ void *secure_vpsinfo;
+};
+
+/*
+ * struct bspp_unit_data
+ * @Brief Contains bitstream unit data
+ */
+struct bspp_unit_data {
+ /* Input. */
+ /* Indicates which output data to populate */
+ enum bspp_unit_type unit_type;
+ /* Video Standard of unit to parse */
+ enum vdec_vid_std vid_std;
+ /* Indicates whether delimiter is present for unit */
+ int delim_present;
+ /* Codec configuration used by this stream */
+ const struct vdec_codec_config *codec_config;
+ void *str_res_handle;
+ /* Needed for calculating the size of the last fragment */
+ unsigned int unit_data_size;
+ /* Input/Output. */
+ struct bspp_parse_state *parse_state;
+ /* Output */
+	/* vid_std == VDEC_STD_H263 && BSPP_UNIT_PICTURE. */
+ struct bspp_sequence_hdr_info *impl_sequ_hdr_info;
+ /* Union of output data for each of the unit types. */
+ union {
+ /* BSPP_UNIT_SEQUENCE. */
+ struct bspp_sequence_hdr_info *sequ_hdr_info;
+ /* BSPP_UNIT_PPS. */
+ struct bspp_pps_info *pps_info;
+ /* BSPP_UNIT_PICTURE. */
+ struct bspp_pict_hdr_info *pict_hdr_info;
+ /* For Video Header (HEVC) */
+ struct bspp_vps_info *vps_info;
+ } out;
+
+ /*
+ * For picture it should give the SequenceHdrId, for anything
+ * else it should contain BSPP_INVALID. This value is pre-loaded
+ * with the sequence ID of the last picture.
+ */
+ unsigned int pict_sequ_hdr_id;
+ /* State: output. */
+ /*
+ * Picture unit (BSPP_UNIT_PICTURE) contains slice data.
+ * Picture header information must be populated once this unit has been
+ * parsed.
+ */
+ int slice;
+ int ext_slice; /* Current slice belongs to non-base view (MVC only) */
+ /*
+ * True if we meet a unit that signifies closed gop, different
+ * for each standard.
+ */
+ int new_closed_gop;
+ /* True if the end of a sequence of pictures has been reached. */
+ int sequence_end;
+ /*
+ * Extracted all data from unit whereby shift-register should now
+ * be at the next delimiter or end of data (when byte-aligned).
+ */
+ int extracted_all_data;
+ /* Indicates the presence of any errors while processing this unit. */
+ enum bspp_error_type parse_error;
+ /* To turn on/off considering I-Frames as ClosedGop boundaries. */
+ int intra_frm_as_closed_gop;
+};
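+
+/*
+ * Sketch of how a bspp_cb_parse_unit implementation is expected to consume
+ * this structure (pseudo-flow only; no particular callback is implied):
+ *
+ *	switch (unit_data->unit_type) {
+ *	case BSPP_UNIT_SEQUENCE:
+ *		...populate unit_data->out.sequ_hdr_info...
+ *		break;
+ *	case BSPP_UNIT_PPS:
+ *		...populate unit_data->out.pps_info...
+ *		break;
+ *	case BSPP_UNIT_PICTURE:
+ *		...populate unit_data->out.pict_hdr_info and set
+ *		unit_data->slice when slice data is present...
+ *		break;
+ *	default:
+ *		break;
+ *	}
+ */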
+
+/*
+ * struct bspp_swsr_ctx
+ * @brief BSPP Software Shift Register Context Information
+ */
+struct bspp_swsr_ctx {
+ /*
+ * Default configuration for the shift-register for this
+ * stream. The delimiter type may be adjusted for each unit
+ * where the buffer requires it. Information about how to
+ * process each unit will be passed down with the picture
+ * header information.
+ */
+ struct swsr_config sr_config;
+ /*
+ * Emulation prevention scheme present in bitstream. This is
+ * sometimes not ascertained (e.g. VC-1) until the first
+ * bitstream buffer (often codec configuration) has been
+ * received.
+ */
+ enum swsr_emprevent emulation_prevention;
+ /* Software shift-register context. */
+ void *swsr_context;
+};
+
+/*
+ * struct bspp_vid_std_features
+ * @brief BSPP Video Standard Specific Features and Information
+ */
+struct bspp_vid_std_features {
+ /* The size of the sequence header structure for this video standard */
+ unsigned long seq_size;
+ /* This video standard uses Picture Parameter Sets. */
+ int uses_pps;
+ /*
+ * The size of the Picture Parameter Sets structure for
+ * this video standard.
+ */
+ unsigned long pps_size;
+ /* This video standard uses Video Parameter Sets. */
+ int uses_vps;
+ /*
+ * The size of the Video Parameter Sets structure for
+ * this video standard
+ */
+ unsigned long vps_size;
+};
+
+/*
+ * @Function bspp_cb_parse_unit
+ * @Description Function prototype for the parse unit callback functions.
+ * @Input swsr_context_handle: A handle to software shift-register context
+ * @InOut unit_data: A pointer to unit data which includes input & output
+ * parameters as defined by structure.
+ * @Return int : This function returns either IMG_SUCCESS or an error code.
+ */
+typedef int (*bspp_cb_parse_unit)(void *swsr_context_handle,
+ struct bspp_unit_data *unit_data);
+
+/*
+ * @Function	bspp_cb_release_data
+ * @Description This is a function prototype for the data releasing callback
+ * functions.
+ * @Input str_alloc_handle : A handle to stream related resources.
+ * @Input data_type : A type of data which is to be released.
+ * @Input data_handle : A handle for data which is to be released.
+ * @Return int : This function returns either IMG_SUCCESS or an error code.
+ */
+typedef int (*bspp_cb_release_data)(void *str_alloc_handle,
+ enum bspp_unit_type data_type,
+ void *data_handle);
+
+/*
+ * @Function bspp_cb_reset_data
+ * @Description This is a function prototype for the data resetting callback
+ * functions.
+ * @Input data_type : A type of data which is to be reset.
+ * @InOut data_handle : A handle for data which is to be reset.
+ * @Return int : This function returns either IMG_SUCCESS or an error code.
+ */
+typedef int (*bspp_cb_reset_data)(enum bspp_unit_type data_type,
+ void *data_handle);
+
+/*
+ * @Function bspp_cb_destroy_data
+ * @Description This is a function prototype for the data destruction callback
+ * functions.
+ * @Input data_type : A type of data which is to be destroyed.
+ * @InOut data_handle : A handle for data which is to be destroyed.
+ * @Return int : This function returns either IMG_SUCCESS or an error code.
+ */
+typedef int (*bspp_cb_destroy_data)(enum bspp_unit_type data_type,
+ void *data_handle);
+
+/*
+ * @Function bspp_cb_parse_codec_config
+ * @Description This is a function prototype for parsing codec config bitstream
+ * element for size delimited bitstreams.
+ * @Input	swsr_context_handle: A handle to the Shift Register processing
+ *		the current bitstream.
+ * @Output unit_count: A pointer to variable in which to return unit count.
+ * @Output unit_array_count: A pointer to variable in which to return unit
+ * array count.
+ * @Output delim_length: A pointer to variable in which to return NAL
+ * delimiter length in bits.
+ * @Output size_delim_length: A pointer to variable in which to return size
+ * delimiter length in bits.
+ * @Return None.
+ */
+typedef void (*bspp_cb_parse_codec_config)(void *swsr_context_handle,
+ unsigned int *unit_count,
+ unsigned int *unit_array_count,
+ unsigned int *delim_length,
+ unsigned int *size_delim_length);
+
+/*
+ * @Function bspp_cb_update_unit_counts
+ * @Description This is a function prototype for updating unit counts for size
+ * delimited bitstreams.
+ * @Input	swsr_context_handle: A handle to the Shift Register processing
+ *		the current bitstream.
+ * @InOut unit_count: A pointer to variable holding current unit count
+ * @InOut unit_array_count: A pointer to variable holding current unit
+ * array count.
+ * @Return None.
+ */
+typedef void (*bspp_cb_update_unit_counts)(void *swsr_context_handle,
+ unsigned int *unit_count,
+ unsigned int *unit_array_count);
+
+/*
+ * @Function bspp_cb_initialise_parsing
+ * @Description This prototype is for unit group parsing initialization.
+ * @InOut parse_state: The current unit group parsing state.
+ * @Return None.
+ */
+typedef void (*bspp_cb_initialise_parsing)(struct bspp_parse_state *parse_state);
+
+/*
+ * @Function bspp_cb_finalise_parsing
+ * @Description This is a function prototype for unit group parsing
+ *              finalization.
+ * @Input str_alloc_handle: A handle to stream related resources.
+ * @InOut parse_state: The current unit group parsing state.
+ * @Return None.
+ */
+typedef void (*bspp_cb_finalise_parsing)(void *str_alloc_handle,
+ struct bspp_parse_state *parse_state);
+
+/*
+ * struct bspp_parser_callbacks
+ * @brief BSPP Standard Related Parser Callback Functions
+ */
+struct bspp_parser_callbacks {
+ /* Pointer to standard-specific unit parsing callback function. */
+ bspp_cb_parse_unit parse_unit_cb;
+ /* Pointer to standard-specific data releasing callback function. */
+ bspp_cb_release_data release_data_cb;
+ /* Pointer to standard-specific data resetting callback function. */
+ bspp_cb_reset_data reset_data_cb;
+ /* Pointer to standard-specific data destruction callback function. */
+ bspp_cb_destroy_data destroy_data_cb;
+ /* Pointer to standard-specific codec config parsing callback function */
+ bspp_cb_parse_codec_config parse_codec_config_cb;
+ /* Pointer to standard-specific unit count updating callback function */
+ bspp_cb_update_unit_counts update_unit_counts_cb;
+ /*
+ * Pointer to standard-specific unit group parsing initialization
+ * function.
+ */
+ bspp_cb_initialise_parsing initialise_parsing_cb;
+ /*
+ * Pointer to standard-specific unit group parsing finalization
+ * function
+ */
+ bspp_cb_finalise_parsing finalise_parsing_cb;
+};
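+
+/*
+ * Illustrative initialisation of this table (a sketch only): a
+ * standard-specific back-end would normally fill it in from its
+ * bspp_cb_set_parser_config implementation. The callback names below are
+ * hypothetical placeholders, not functions defined by this driver.
+ *
+ *	parser_callbacks->parse_unit_cb   = example_parse_unit;
+ *	parser_callbacks->release_data_cb = example_release_data;
+ *	parser_callbacks->reset_data_cb   = example_reset_data;
+ *	parser_callbacks->destroy_data_cb = example_destroy_data;
+ */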
+
+/*
+ * @Function bspp_cb_set_parser_config
+ * @Description This is a function prototype for the parser configuration
+ *              setting callback functions.
+ * @Input bstr_format: Input bitstream format.
+ * @Output vid_std_features: Features of video standard for this bitstream.
+ * @Output swsr_ctx: Software Shift Register settings for this bitstream.
+ * @Output parser_callbacks: Parser functions to be used for parsing this
+ * bitstream.
+ * @Output inter_pict_data: Inter-picture settings specific for this
+ * bitstream.
+ * @Return int : This function returns either IMG_SUCCESS or an error code.
+ */
+typedef int (*bspp_cb_set_parser_config)(enum vdec_bstr_format bstr_format,
+ struct bspp_vid_std_features *vid_std_features,
+ struct bspp_swsr_ctx *swsr_ctx,
+ struct bspp_parser_callbacks *parser_callbacks,
+ struct bspp_inter_pict_data *inter_pict_data);
+
+/*
+ * @Function bspp_cb_determine_unit_type
+ * @Description This is a function prototype for determining the BSPP unit type
+ * based on the bitstream (video standard specific) unit type
+ * callback functions.
+ * @Input bitstream_unit_type: Bitstream (video standard specific) unit
+ * type.
+ * @Input disable_mvc: Skip MVC related units (relevant for standards
+ * that support it).
+ * @InOut bspp_unit_type *: Last BSPP unit type on input. Current BSPP
+ * unit type on output.
+ * @Return None.
+ */
+typedef void (*bspp_cb_determine_unit_type)(unsigned char bitstream_unit_type,
+ int disable_mvc,
+ enum bspp_unit_type *bspp_unit_type);
+
+struct bspp_pps_info *bspp_get_pps_hdr(void *str_res_handle, unsigned int pps_id);
+
+struct bspp_sequence_hdr_info *bspp_get_sequ_hdr(void *str_res_handle,
+ unsigned int sequ_id);
+
+struct bspp_vps_info *bspp_get_vpshdr(void *str_res, unsigned int vps_id);
+
+void bspp_streamrelese_rawbstrdataplain(const void *str_res,
+ const void *rawdata);
+
+void bspp_freeraw_sei_datacontainer(const void *str_res,
+ struct vdec_raw_bstr_data *rawsei_datacontainer);
+
+void bspp_freeraw_sei_datalist(const void *str_res,
+ struct vdec_raw_bstr_data *rawsei_datalist);
+
+#endif /* __BSPP_INT_H__ */
diff --git a/drivers/staging/media/vxd/decoder/h264_secure_parser.c b/drivers/staging/media/vxd/decoder/h264_secure_parser.c
new file mode 100644
index 000000000000..3973749eac58
--- /dev/null
+++ b/drivers/staging/media/vxd/decoder/h264_secure_parser.c
@@ -0,0 +1,3051 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * h.264 secure data unit parsing API.
+ *
+ * Copyright (c) Imagination Technologies Ltd.
+ * Copyright (c) 2021 Texas Instruments Incorporated - http://www.ti.com/
+ *
+ * Authors:
+ * Lakshmi Sankar <[email protected]>
+ *	Re-written for upstreaming
+ * Prashanth Kumar Amai <[email protected]>
+ * Sidraya Jayagond <[email protected]>
+ */
+
+#include <linux/dma-mapping.h>
+#include <media/v4l2-ctrls.h>
+#include <media/v4l2-device.h>
+#include <media/v4l2-mem2mem.h>
+
+#include "bspp.h"
+#include "bspp_int.h"
+#include "h264_secure_parser.h"
+#include "pixel_api.h"
+#include "swsr.h"
+#include "vdec_defs.h"
+
+/*
+ * Reduce DPB to 1 when no pic reordering.
+ */
+#define SL_MAX_REF_IDX 32
+#define VUI_CPB_CNT_MAX 32
+#define MAX_SPS_COUNT 32
+#define MAX_PPS_COUNT 256
+/* changed from 810 */
+#define MAX_SLICE_GROUPMBS 65536
+#define MAX_SLICEGROUP_COUNT 8
+#define MAX_WIDTH_IN_MBS 256
+#define MAX_HEIGHT_IN_MBS 256
+#define MAX_COLOR_PLANE 4
+#define H264_MAX_SGM_SIZE 8196
+
+#define H264_MAX_CHROMA_QP_INDEX_OFFSET (12)
+#define H264_MIN_CHROMA_QP_INDEX_OFFSET (-12)
+
+/*
+ * AVC Profile IDC definitions
+ */
+enum h264_profile_idc {
+ h264_profile_cavlc444 = 44, /* YUV 4:4:4/14 "CAVLC 4:4:4" */
+ h264_profile_baseline = 66, /* YUV 4:2:0/8 "Baseline" */
+ h264_profile_main = 77, /* YUV 4:2:0/8 "Main" */
+ h264_profile_scalable = 83, /* YUV 4:2:0/8 "Scalable" */
+ h264_profile_extended = 88, /* YUV 4:2:0/8 "Extended" */
+ h264_profile_high = 100, /* YUV 4:2:0/8 "High" */
+ h264_profile_hig10 = 110, /* YUV 4:2:0/10 "High 10" */
+ h264_profile_mvc_high = 118, /* YUV 4:2:0/8 "Multiview High" */
+ h264_profile_high422 = 122, /* YUV 4:2:2/10 "High 4:2:2" */
+ h264_profile_mvc_stereo = 128, /* YUV 4:2:0/8 "Stereo High" */
+ h264_profile_high444 = 244, /* YUV 4:4:4/14 "High 4:4:4" */
+ h264_profile_FORCE32BITS = 0x7FFFFFFFU
+};
+
+/*
+ * Remap H.264 colour format into internal representation.
+ */
+static const enum pixel_fmt_idc pixel_format_idc[] = {
+ PIXEL_FORMAT_MONO,
+ PIXEL_FORMAT_420,
+ PIXEL_FORMAT_422,
+ PIXEL_FORMAT_444,
+};
+
+/*
+ * Pixel Aspect Ratio
+ */
+static const unsigned short pixel_aspect[17][2] = {
+ { 0, 1 },
+ { 1, 1 },
+ { 12, 11 },
+ { 10, 11 },
+ { 16, 11 },
+ { 40, 33 },
+ { 24, 11 },
+ { 20, 11 },
+ { 32, 11 },
+ { 80, 33 },
+ { 18, 11 },
+ { 15, 11 },
+ { 64, 33 },
+ { 160, 99 },
+ { 4, 3 },
+ { 3, 2 },
+ { 2, 1 },
+};
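+
+/*
+ * Worked example: aspect_ratio_idc == 4 selects pixel_aspect[4], i.e. a
+ * 16:11 sample aspect ratio, which bspp_h264_vui_parser() copies into
+ * sar_width/sar_height whenever aspect_ratio_idc < 17.
+ */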
+
+/*
+ * Table 7-3, 7-4: Default Scaling lists
+ */
+static const unsigned char default_4x4_intra[16] = {
+ 6, 13, 13, 20,
+ 20, 20, 28, 28,
+ 28, 28, 32, 32,
+ 32, 37, 37, 42
+};
+
+static const unsigned char default_4x4_inter[16] = {
+ 10, 14, 14, 20,
+ 20, 20, 24, 24,
+ 24, 24, 27, 27,
+ 27, 30, 30, 34
+};
+
+static const unsigned char default_8x8_intra[64] = {
+ 6, 10, 10, 13, 11, 13, 16, 16,
+ 16, 16, 18, 18, 18, 18, 18, 23,
+ 23, 23, 23, 23, 23, 25, 25, 25,
+ 25, 25, 25, 25, 27, 27, 27, 27,
+ 27, 27, 27, 27, 29, 29, 29, 29,
+ 29, 29, 29, 31, 31, 31, 31, 31,
+ 31, 33, 33, 33, 33, 33, 36, 36,
+ 36, 36, 38, 38, 38, 40, 40, 42
+};
+
+static const unsigned char default_8x8_inter[64] = {
+ 9, 13, 13, 15, 13, 15, 17, 17,
+ 17, 17, 19, 19, 19, 19, 19, 21,
+ 21, 21, 21, 21, 21, 22, 22, 22,
+ 22, 22, 22, 22, 24, 24, 24, 24,
+ 24, 24, 24, 24, 25, 25, 25, 25,
+ 25, 25, 25, 27, 27, 27, 27, 27,
+ 27, 28, 28, 28, 28, 28, 30, 30,
+ 30, 30, 32, 32, 32, 33, 33, 35
+};
+
+/*
+ * to be used if no quantization matrix is chosen
+ */
+static const unsigned char default_4x4_org[16] = {
+ 16, 16, 16, 16,
+ 16, 16, 16, 16,
+ 16, 16, 16, 16,
+ 16, 16, 16, 16
+};
+
+/*
+ * to be used if no quantization matrix is chosen
+ */
+static const unsigned char default_8x8_org[64] = {
+ 16, 16, 16, 16, 16, 16, 16, 16,
+ 16, 16, 16, 16, 16, 16, 16, 16,
+ 16, 16, 16, 16, 16, 16, 16, 16,
+ 16, 16, 16, 16, 16, 16, 16, 16,
+ 16, 16, 16, 16, 16, 16, 16, 16,
+ 16, 16, 16, 16, 16, 16, 16, 16,
+ 16, 16, 16, 16, 16, 16, 16, 16,
+ 16, 16, 16, 16, 16, 16, 16, 16
+};
+
+/*
+ * source: ITU-T H.264 2010/03, page 20 Table 6-1
+ */
+static const int bspp_subheightc[] = { -1, 2, 1, 1 };
+
+/*
+ * source: ITU-T H.264 2010/03, page 20 Table 6-1
+ */
+static const int bspp_subwidthc[] = { -1, 2, 2, 1 };
+
+/*
+ * Inline helpers for minimum and maximum values
+ */
+static inline unsigned int umin(unsigned int a, unsigned int b)
+{
+	return a < b ? a : b;
+}
+
+static inline int smin(int a, int b)
+{
+	return a < b ? a : b;
+}
+
+static inline int smax(int a, int b)
+{
+	return a > b ? a : b;
+}
+
+static void set_if_not_determined_yet(int *determined,
+ unsigned char condition,
+ int *target,
+ unsigned int value)
+{
+ if ((!(*determined)) && (condition)) {
+ *target = value;
+ *determined = 1;
+ }
+}
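+
+/*
+ * Usage sketch (hypothetical variable names): the first call made with a
+ * true condition latches *target and sets *determined, so later calls
+ * leave the target untouched.
+ *
+ *	int determined = 0, target = 0;
+ *
+ *	set_if_not_determined_yet(&determined, first_match, &target, 10);
+ *	set_if_not_determined_yet(&determined, second_match, &target, 20);
+ */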
+
+static int bspp_h264_get_subwidthc(int chroma_format_idc, int separate_colour_plane_flag)
+{
+ return bspp_subwidthc[chroma_format_idc];
+}
+
+static int bspp_h264_get_subheightc(int chroma_format_idc, int separate_colour_plane_flag)
+{
+ return bspp_subheightc[chroma_format_idc];
+}
+
+static unsigned int h264ceillog2(unsigned int value)
+{
+ unsigned int status = 0;
+
+ value -= 1;
+ while (value > 0) {
+ value >>= 1;
+ status++;
+ }
+ return status;
+}
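+
+/*
+ * Worked example: h264ceillog2(1) == 0, h264ceillog2(8) == 3 and
+ * h264ceillog2(9) == 4, i.e. the number of bits needed to represent
+ * (value - 1). bspp_h264_pps_parser() uses this to size each
+ * slice_group_id entry from num_slice_groups_minus1 + 1.
+ */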
+
+/*
+ * @Function bspp_h264_set_default_vui
+ * @Description Sets default values of the VUI info
+ */
+static void bspp_h264_set_default_vui(struct bspp_h264_vui_info *vui_info)
+{
+ unsigned int *nal_hrd_bitrate_valueminus1 = NULL;
+ unsigned int *vcl_hrd_bitrate_valueminus1 = NULL;
+ unsigned int *nal_hrd_cpbsize_valueminus1 = NULL;
+ unsigned int *vcl_hrd_cpbsize_valueminus1 = NULL;
+ unsigned char *nal_hrd_cbr_flag = NULL;
+ unsigned char *vcl_hrd_cbr_flag = NULL;
+
+ /* Saving pointers */
+ nal_hrd_bitrate_valueminus1 = vui_info->nal_hrd_parameters.bit_rate_value_minus1;
+ vcl_hrd_bitrate_valueminus1 = vui_info->vcl_hrd_parameters.bit_rate_value_minus1;
+
+ nal_hrd_cpbsize_valueminus1 = vui_info->nal_hrd_parameters.cpb_size_value_minus1;
+ vcl_hrd_cpbsize_valueminus1 = vui_info->vcl_hrd_parameters.cpb_size_value_minus1;
+
+ nal_hrd_cbr_flag = vui_info->nal_hrd_parameters.cbr_flag;
+ vcl_hrd_cbr_flag = vui_info->vcl_hrd_parameters.cbr_flag;
+
+ /* Cleaning sVUIInfo */
+ if (vui_info->nal_hrd_parameters.bit_rate_value_minus1)
+ memset(vui_info->nal_hrd_parameters.bit_rate_value_minus1, 0x00,
+ VDEC_H264_MAXIMUMVALUEOFCPB_CNT * sizeof(unsigned int));
+
+ if (vui_info->nal_hrd_parameters.cpb_size_value_minus1)
+ memset(vui_info->nal_hrd_parameters.cpb_size_value_minus1, 0x00,
+ VDEC_H264_MAXIMUMVALUEOFCPB_CNT * sizeof(unsigned int));
+
+ if (vui_info->vcl_hrd_parameters.cpb_size_value_minus1)
+ memset(vui_info->vcl_hrd_parameters.cpb_size_value_minus1, 0x00,
+ VDEC_H264_MAXIMUMVALUEOFCPB_CNT * sizeof(unsigned int));
+
+ if (vui_info->nal_hrd_parameters.cbr_flag)
+ memset(vui_info->nal_hrd_parameters.cbr_flag, 0x00,
+ VDEC_H264_MAXIMUMVALUEOFCPB_CNT * sizeof(unsigned char));
+
+ if (vui_info->vcl_hrd_parameters.cbr_flag)
+ memset(vui_info->vcl_hrd_parameters.cbr_flag, 0x00,
+ VDEC_H264_MAXIMUMVALUEOFCPB_CNT * sizeof(unsigned char));
+
+ /* Make sure you set default for everything */
+ memset(vui_info, 0, sizeof(*vui_info));
+ vui_info->video_format = 5;
+ vui_info->colour_primaries = 2;
+ vui_info->transfer_characteristics = 2;
+ vui_info->matrix_coefficients = 2;
+ vui_info->motion_vectors_over_pic_boundaries_flag = 1;
+ vui_info->max_bytes_per_pic_denom = 2;
+ vui_info->max_bits_per_mb_denom = 1;
+ vui_info->log2_max_mv_length_horizontal = 16;
+ vui_info->log2_max_mv_length_vertical = 16;
+
+#ifdef REDUCED_DPB_NO_PIC_REORDERING
+ vui_info->max_dec_frame_buffering = 1;
+ vui_info->num_reorder_frames = 0;
+#else
+ vui_info->max_dec_frame_buffering = 0;
+ vui_info->num_reorder_frames = vui_info->max_dec_frame_buffering;
+#endif
+
+ /* Restoring pointers */
+ vui_info->nal_hrd_parameters.bit_rate_value_minus1 = nal_hrd_bitrate_valueminus1;
+ vui_info->vcl_hrd_parameters.bit_rate_value_minus1 = vcl_hrd_bitrate_valueminus1;
+
+ vui_info->nal_hrd_parameters.cpb_size_value_minus1 = nal_hrd_cpbsize_valueminus1;
+ vui_info->vcl_hrd_parameters.cpb_size_value_minus1 = vcl_hrd_cpbsize_valueminus1;
+
+ vui_info->nal_hrd_parameters.cbr_flag = nal_hrd_cbr_flag;
+ vui_info->vcl_hrd_parameters.cbr_flag = vcl_hrd_cbr_flag;
+}
+
+/*
+ * @Function bspp_h264_hrd_param_parser
+ * @Description Parse the HRD parameter
+ */
+static enum bspp_error_type
+bspp_h264_hrd_param_parser(void *swsr_context,
+			   struct bspp_h264_hrdparam_info *h264_hrd_param_info)
+{
+ unsigned int sched_sel_idx;
+
+ VDEC_ASSERT(swsr_context);
+ h264_hrd_param_info->cpb_cnt_minus1 = swsr_read_unsigned_expgoulomb(swsr_context);
+
+ if (h264_hrd_param_info->cpb_cnt_minus1 >= 32)
+		pr_info("cpb_cnt_minus1 is not within the range");
+
+ h264_hrd_param_info->bit_rate_scale = swsr_read_bits(swsr_context, 4);
+ h264_hrd_param_info->cpb_size_scale = swsr_read_bits(swsr_context, 4);
+
+ if (!h264_hrd_param_info->bit_rate_value_minus1) {
+ h264_hrd_param_info->bit_rate_value_minus1 = kcalloc
+ (VDEC_H264_MAXIMUMVALUEOFCPB_CNT,
+ sizeof(unsigned int), GFP_KERNEL);
+ VDEC_ASSERT(h264_hrd_param_info->bit_rate_value_minus1);
+ if (!h264_hrd_param_info->bit_rate_value_minus1)
+ return BSPP_ERROR_OUT_OF_MEMORY;
+ }
+
+ if (!h264_hrd_param_info->cpb_size_value_minus1) {
+ h264_hrd_param_info->cpb_size_value_minus1 = kcalloc
+ (VDEC_H264_MAXIMUMVALUEOFCPB_CNT,
+ sizeof(unsigned int),
+ GFP_KERNEL);
+ VDEC_ASSERT(h264_hrd_param_info->cpb_size_value_minus1);
+ if (!h264_hrd_param_info->cpb_size_value_minus1)
+ return BSPP_ERROR_OUT_OF_MEMORY;
+ }
+
+ if (!h264_hrd_param_info->cbr_flag) {
+ h264_hrd_param_info->cbr_flag =
+ kcalloc(VDEC_H264_MAXIMUMVALUEOFCPB_CNT, sizeof(unsigned char), GFP_KERNEL);
+ VDEC_ASSERT(h264_hrd_param_info->cbr_flag);
+ if (!h264_hrd_param_info->cbr_flag)
+ return BSPP_ERROR_OUT_OF_MEMORY;
+ }
+
+ for (sched_sel_idx = 0; sched_sel_idx <= h264_hrd_param_info->cpb_cnt_minus1;
+ sched_sel_idx++) {
+ h264_hrd_param_info->bit_rate_value_minus1[sched_sel_idx] =
+ swsr_read_unsigned_expgoulomb(swsr_context);
+ h264_hrd_param_info->cpb_size_value_minus1[sched_sel_idx] =
+ swsr_read_unsigned_expgoulomb(swsr_context);
+
+ if (h264_hrd_param_info->cpb_size_value_minus1[sched_sel_idx] == 0xffffffff)
+			/*
+			 * A 65-bit Exp-Golomb pattern (32 zeros, a 1, then
+			 * 32 zeros) decodes to 0xffffffff; treat it as 0.
+			 */
+ h264_hrd_param_info->cpb_size_value_minus1[sched_sel_idx] = 0;
+
+ h264_hrd_param_info->cbr_flag[sched_sel_idx] = swsr_read_bits(swsr_context, 1);
+ }
+
+ h264_hrd_param_info->initial_cpb_removal_delay_length_minus1 = swsr_read_bits(swsr_context,
+ 5);
+ h264_hrd_param_info->cpb_removal_delay_length_minus1 = swsr_read_bits(swsr_context, 5);
+ h264_hrd_param_info->dpb_output_delay_length_minus1 = swsr_read_bits(swsr_context, 5);
+ h264_hrd_param_info->time_offset_length = swsr_read_bits(swsr_context, 5);
+
+ return BSPP_ERROR_NONE;
+}
+
+/*
+ * @Function bspp_h264_get_default_hrd_param
+ * @Description Get default value of the HRD parameter
+ */
+static void bspp_h264_get_default_hrd_param(struct bspp_h264_hrdparam_info *h264_hrd_param_info)
+{
+ /* other parameters already set to '0' */
+ h264_hrd_param_info->initial_cpb_removal_delay_length_minus1 = 23;
+ h264_hrd_param_info->cpb_removal_delay_length_minus1 = 23;
+ h264_hrd_param_info->dpb_output_delay_length_minus1 = 23;
+ h264_hrd_param_info->time_offset_length = 24;
+}
+
+/*
+ * @Function bspp_h264_vui_parser
+ * @Description Parse the VUI info
+ */
+static enum bspp_error_type bspp_h264_vui_parser(void *swsr_context,
+ struct bspp_h264_vui_info *vui_info,
+ struct bspp_h264_sps_info *sps_info)
+{
+ enum bspp_error_type vui_parser_error = BSPP_ERROR_NONE;
+
+ vui_info->aspect_ratio_info_present_flag = swsr_read_bits(swsr_context, 1);
+ if (vui_info->aspect_ratio_info_present_flag) {
+ vui_info->aspect_ratio_idc = swsr_read_bits(swsr_context, 8);
+ /* Extended SAR */
+ if (vui_info->aspect_ratio_idc == 255) {
+ vui_info->sar_width = swsr_read_bits(swsr_context, 16);
+ vui_info->sar_height = swsr_read_bits(swsr_context, 16);
+ } else if (vui_info->aspect_ratio_idc < 17) {
+ vui_info->sar_width = pixel_aspect[vui_info->aspect_ratio_idc][0];
+ vui_info->sar_height = pixel_aspect[vui_info->aspect_ratio_idc][1];
+ } else {
+			/* we can consider this error as an aux data error */
+ vui_parser_error |= BSPP_ERROR_INVALID_VALUE;
+ }
+ }
+
+ vui_info->overscan_info_present_flag = swsr_read_bits(swsr_context, 1);
+ if (vui_info->overscan_info_present_flag)
+ vui_info->overscan_appropriate_flag = swsr_read_bits(swsr_context, 1);
+
+ vui_info->video_signal_type_present_flag = swsr_read_bits(swsr_context, 1);
+ if (vui_info->video_signal_type_present_flag) {
+ vui_info->video_format = swsr_read_bits(swsr_context, 3);
+ vui_info->video_full_range_flag = swsr_read_bits(swsr_context, 1);
+ vui_info->colour_description_present_flag = swsr_read_bits(swsr_context, 1);
+ if (vui_info->colour_description_present_flag) {
+ vui_info->colour_primaries = swsr_read_bits(swsr_context, 8);
+ vui_info->transfer_characteristics = swsr_read_bits(swsr_context, 8);
+ vui_info->matrix_coefficients = swsr_read_bits(swsr_context, 8);
+ }
+ }
+
+ vui_info->chroma_location_info_present_flag = swsr_read_bits(swsr_context, 1);
+ if (vui_info->chroma_location_info_present_flag) {
+ vui_info->chroma_sample_loc_type_top_field = swsr_read_unsigned_expgoulomb
+ (swsr_context);
+ vui_info->chroma_sample_loc_type_bottom_field = swsr_read_unsigned_expgoulomb
+ (swsr_context);
+ }
+
+ vui_info->timing_info_present_flag = swsr_read_bits(swsr_context, 1);
+ if (vui_info->timing_info_present_flag) {
+ vui_info->num_units_in_tick = swsr_read_bits(swsr_context, 16);
+ vui_info->num_units_in_tick <<= 16; /* SR can only do up to 31 bit reads */
+ vui_info->num_units_in_tick |= swsr_read_bits(swsr_context, 16);
+ vui_info->time_scale = swsr_read_bits(swsr_context, 16);
+ vui_info->time_scale <<= 16; /* SR can only do up to 31 bit reads */
+ vui_info->time_scale |= swsr_read_bits(swsr_context, 16);
+ if (!vui_info->num_units_in_tick || !vui_info->time_scale)
+ vui_parser_error |= BSPP_ERROR_INVALID_VALUE;
+
+ vui_info->fixed_frame_rate_flag = swsr_read_bits(swsr_context, 1);
+ }
+
+ /* no default values */
+ vui_info->nal_hrd_parameters_present_flag = swsr_read_bits(swsr_context, 1);
+ if (vui_info->nal_hrd_parameters_present_flag)
+ vui_parser_error |= bspp_h264_hrd_param_parser(swsr_context,
+ &vui_info->nal_hrd_parameters);
+ else
+ bspp_h264_get_default_hrd_param(&vui_info->nal_hrd_parameters);
+
+ vui_info->vcl_hrd_parameters_present_flag = swsr_read_bits(swsr_context, 1);
+
+ if (vui_info->vcl_hrd_parameters_present_flag)
+ vui_parser_error |= bspp_h264_hrd_param_parser(swsr_context,
+ &vui_info->vcl_hrd_parameters);
+ else
+ bspp_h264_get_default_hrd_param(&vui_info->vcl_hrd_parameters);
+
+ if (vui_info->nal_hrd_parameters_present_flag || vui_info->vcl_hrd_parameters_present_flag)
+ vui_info->low_delay_hrd_flag = swsr_read_bits(swsr_context, 1);
+
+ vui_info->pic_struct_present_flag = swsr_read_bits(swsr_context, 1);
+ vui_info->bitstream_restriction_flag = swsr_read_bits(swsr_context, 1);
+ if (vui_info->bitstream_restriction_flag) {
+ vui_info->motion_vectors_over_pic_boundaries_flag = swsr_read_bits(swsr_context, 1);
+ vui_info->max_bytes_per_pic_denom = swsr_read_unsigned_expgoulomb(swsr_context);
+ vui_info->max_bits_per_mb_denom = swsr_read_unsigned_expgoulomb(swsr_context);
+ vui_info->log2_max_mv_length_horizontal =
+ swsr_read_unsigned_expgoulomb(swsr_context);
+ vui_info->log2_max_mv_length_vertical = swsr_read_unsigned_expgoulomb(swsr_context);
+ vui_info->num_reorder_frames = swsr_read_unsigned_expgoulomb(swsr_context);
+ vui_info->max_dec_frame_buffering = swsr_read_unsigned_expgoulomb(swsr_context);
+ }
+
+ if ((sps_info->profile_idc == h264_profile_baseline ||
+ sps_info->profile_idc == h264_profile_extended) &&
+ sps_info->max_num_ref_frames == 1) {
+ vui_info->bitstream_restriction_flag = 1;
+ vui_info->num_reorder_frames = 0;
+ vui_info->max_dec_frame_buffering = 1;
+ }
+
+ if (vui_info->num_reorder_frames > 32)
+ vui_parser_error |= BSPP_ERROR_UNSUPPORTED;
+
+ return vui_parser_error;
+}
+
+/*
+ * Parse scaling list
+ */
+static enum bspp_error_type bspp_h264_scl_listparser(void *swsr_context,
+ unsigned char *scaling_list,
+ unsigned char sizeof_scaling_list,
+ unsigned char *usedefaultscalingmatrixflag)
+{
+ enum bspp_error_type parse_error = BSPP_ERROR_NONE;
+ int delta_scale;
+ unsigned int lastscale = 8;
+ unsigned int nextscale = 8;
+ unsigned int j;
+
+ VDEC_ASSERT(swsr_context);
+ VDEC_ASSERT(scaling_list);
+ VDEC_ASSERT(usedefaultscalingmatrixflag);
+
+ if (!scaling_list || !swsr_context || !usedefaultscalingmatrixflag) {
+ parse_error = BSPP_ERROR_UNRECOVERABLE;
+ return parse_error;
+ }
+
+ /* 7.3.2.1.1 */
+ for (j = 0; j < sizeof_scaling_list; j++) {
+ if (nextscale != 0) {
+ delta_scale = swsr_read_signed_expgoulomb(swsr_context);
+			if (delta_scale < -128 || delta_scale > 127)
+ parse_error |= BSPP_ERROR_INVALID_VALUE;
+ nextscale = (lastscale + delta_scale + 256) & 0xff;
+ *usedefaultscalingmatrixflag = (j == 0 && nextscale == 0);
+ }
+ scaling_list[j] = (nextscale == 0) ? lastscale : nextscale;
+ lastscale = scaling_list[j];
+ }
+ return parse_error;
+}
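+
+/*
+ * Worked example of the 7.3.2.1.1 update rule above, with hypothetical
+ * delta_scale values: starting from lastscale = 8, delta_scale = +4 gives
+ * nextscale = (8 + 4 + 256) & 0xff = 12, so scaling_list[0] = 12; a later
+ * delta_scale of -12 gives nextscale = 0, after which the remaining entries
+ * repeat the last explicit scale value.
+ */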
+
+/*
+ * Parse the SPS NAL unit
+ */
+static enum bspp_error_type bspp_h264_sps_parser(void *swsr_context,
+ void *str_res,
+ struct bspp_h264_seq_hdr_info *h264_seq_hdr_info)
+{
+ unsigned int i;
+ unsigned char scaling_list_num;
+ struct bspp_h264_sps_info *sps_info;
+ struct bspp_h264_vui_info *vui_info;
+ enum bspp_error_type sps_parser_error = BSPP_ERROR_NONE;
+ enum bspp_error_type vui_parser_error = BSPP_ERROR_NONE;
+
+ sps_info = &h264_seq_hdr_info->sps_info;
+ vui_info = &h264_seq_hdr_info->vui_info;
+
+	/* Always set the default VUI/MVCExt; their values
+	 * may be used even if VUI/MVCExt is not present
+ */
+ bspp_h264_set_default_vui(vui_info);
+#ifdef DEBUG_DECODER_DRIVER
+ pr_info("Parsing Sequence Parameter Set");
+#endif
+ sps_info->profile_idc = swsr_read_bits(swsr_context, 8);
+ if (sps_info->profile_idc != H264_PROFILE_BASELINE &&
+ sps_info->profile_idc != H264_PROFILE_MAIN &&
+ sps_info->profile_idc != H264_PROFILE_SCALABLE &&
+ sps_info->profile_idc != H264_PROFILE_EXTENDED &&
+ sps_info->profile_idc != H264_PROFILE_HIGH &&
+ sps_info->profile_idc != H264_PROFILE_HIGH10 &&
+ sps_info->profile_idc != H264_PROFILE_MVC_HIGH &&
+ sps_info->profile_idc != H264_PROFILE_HIGH422 &&
+ sps_info->profile_idc != H264_PROFILE_CAVLC444 &&
+ sps_info->profile_idc != H264_PROFILE_MVC_STEREO &&
+ sps_info->profile_idc != H264_PROFILE_HIGH444) {
+		pr_err("Invalid Profile ID [%d], parsed by BSPP", sps_info->profile_idc);
+ return BSPP_ERROR_UNSUPPORTED;
+ }
+ sps_info->constraint_set_flags = swsr_read_bits(swsr_context, 8);
+ sps_info->level_idc = swsr_read_bits(swsr_context, 8);
+
+ /* sequence parameter set id */
+ sps_info->seq_parameter_set_id = swsr_read_unsigned_expgoulomb(swsr_context);
+ if (sps_info->seq_parameter_set_id >= MAX_SPS_COUNT) {
+ pr_err("SPS ID [%d] goes beyond the limit", sps_info->seq_parameter_set_id);
+ return BSPP_ERROR_UNSUPPORTED;
+ }
+
+ /* High profile settings */
+ if (sps_info->profile_idc == H264_PROFILE_HIGH ||
+ sps_info->profile_idc == H264_PROFILE_HIGH10 ||
+ sps_info->profile_idc == H264_PROFILE_HIGH422 ||
+ sps_info->profile_idc == H264_PROFILE_HIGH444 ||
+ sps_info->profile_idc == H264_PROFILE_CAVLC444 ||
+ sps_info->profile_idc == H264_PROFILE_MVC_HIGH ||
+ sps_info->profile_idc == H264_PROFILE_MVC_STEREO) {
+#ifdef DEBUG_DECODER_DRIVER
+ pr_info("This is High Profile Bitstream");
+#endif
+ sps_info->chroma_format_idc = swsr_read_unsigned_expgoulomb(swsr_context);
+ if (sps_info->chroma_format_idc > 3) {
+ pr_err("chroma_format_idc[%d] is not within the range",
+ sps_info->chroma_format_idc);
+ sps_parser_error |= BSPP_ERROR_INVALID_VALUE;
+ }
+ if (sps_info->chroma_format_idc == 3)
+ sps_info->separate_colour_plane_flag = swsr_read_bits(swsr_context, 1);
+ else
+ sps_info->separate_colour_plane_flag = 0;
+
+ sps_info->bit_depth_luma_minus8 = swsr_read_unsigned_expgoulomb(swsr_context);
+ if (sps_info->bit_depth_luma_minus8 > 6)
+ sps_parser_error |= BSPP_ERROR_INVALID_VALUE;
+
+ sps_info->bit_depth_chroma_minus8 = swsr_read_unsigned_expgoulomb(swsr_context);
+ if (sps_info->bit_depth_chroma_minus8 > 6)
+ sps_parser_error |= BSPP_ERROR_INVALID_VALUE;
+
+ sps_info->qpprime_y_zero_transform_bypass_flag = swsr_read_bits(swsr_context, 1);
+ sps_info->seq_scaling_matrix_present_flag = swsr_read_bits(swsr_context, 1);
+ if (sps_info->seq_scaling_matrix_present_flag) {
+#ifdef DEBUG_DECODER_DRIVER
+ pr_info("seq_scaling_matrix_present_flag is available");
+#endif
+ scaling_list_num = (sps_info->chroma_format_idc != 3) ? 8 : 12;
+
+ if (!sps_info->scllst4x4seq) {
+ sps_info->scllst4x4seq =
+ kmalloc((sizeof(unsigned char[H264FW_NUM_4X4_LISTS]
+ [H264FW_4X4_SIZE])), GFP_KERNEL);
+ if (!sps_info->scllst4x4seq) {
+ sps_parser_error |= BSPP_ERROR_OUT_OF_MEMORY;
+ } else {
+ VDEC_ASSERT(sps_info->scllst4x4seq);
+ memset(sps_info->scllst4x4seq, 0x00,
+ sizeof(unsigned char[H264FW_NUM_4X4_LISTS]
+ [H264FW_4X4_SIZE]));
+ }
+ }
+ if (!sps_info->scllst8x8seq) {
+ sps_info->scllst8x8seq =
+ kmalloc((sizeof(unsigned char[H264FW_NUM_8X8_LISTS]
+ [H264FW_8X8_SIZE])), GFP_KERNEL);
+ if (!sps_info->scllst8x8seq) {
+ sps_parser_error |= BSPP_ERROR_OUT_OF_MEMORY;
+ } else {
+ VDEC_ASSERT(sps_info->scllst8x8seq);
+ memset(sps_info->scllst8x8seq, 0x00,
+ sizeof(unsigned char[H264FW_NUM_8X8_LISTS]
+ [H264FW_8X8_SIZE]));
+ }
+ }
+
+ {
+ unsigned char(*scllst4x4seq)[H264FW_NUM_4X4_LISTS]
+ [H264FW_4X4_SIZE] =
+ (unsigned char (*)[H264FW_NUM_4X4_LISTS][H264FW_4X4_SIZE])
+ sps_info->scllst4x4seq;
+ unsigned char(*scllst8x8seq)[H264FW_NUM_8X8_LISTS]
+ [H264FW_8X8_SIZE] =
+ (unsigned char (*)[H264FW_NUM_8X8_LISTS]
+ [H264FW_8X8_SIZE])
+ sps_info->scllst8x8seq;
+
+ for (i = 0; i < scaling_list_num; i++) {
+ unsigned char *ptr =
+ &sps_info->usedefaultscalingmatrixflag_seq[i];
+
+ sps_info->seq_scaling_list_present_flag[i] =
+ swsr_read_bits(swsr_context, 1);
+ if (sps_info->seq_scaling_list_present_flag[i]) {
+ if (i < 6) {
+ sps_parser_error |=
+ bspp_h264_scl_listparser
+ (swsr_context,
+ (*scllst4x4seq)[i], 16,
+ ptr);
+ } else {
+ sps_parser_error |=
+ bspp_h264_scl_listparser
+ (swsr_context,
+ (*scllst8x8seq)[i - 6], 64,
+ ptr);
+ }
+ }
+ }
+ }
+ }
+ } else {
+ /* default values in here */
+ sps_info->chroma_format_idc = 1;
+ sps_info->bit_depth_luma_minus8 = 0;
+ sps_info->bit_depth_chroma_minus8 = 0;
+ sps_info->qpprime_y_zero_transform_bypass_flag = 0;
+ sps_info->seq_scaling_matrix_present_flag = 0;
+ }
+
+ sps_info->log2_max_frame_num_minus4 = swsr_read_unsigned_expgoulomb(swsr_context);
+ if (sps_info->log2_max_frame_num_minus4 > 12) {
+ pr_err("log2_max_frame_num_minus4[%d] is not within range [0 - 12]",
+ sps_info->log2_max_frame_num_minus4);
+ sps_parser_error |= BSPP_ERROR_INVALID_VALUE;
+ }
+
+ sps_info->pic_order_cnt_type = swsr_read_unsigned_expgoulomb(swsr_context);
+ if (sps_info->pic_order_cnt_type > 2) {
+ pr_err("pic_order_cnt_type[%d] is not within range [0 - 2]",
+ sps_info->pic_order_cnt_type);
+ sps_parser_error |= BSPP_ERROR_INVALID_VALUE;
+ }
+
+ if (sps_info->pic_order_cnt_type == 0) {
+ sps_info->log2_max_pic_order_cnt_lsb_minus4 = swsr_read_unsigned_expgoulomb
+ (swsr_context);
+ if (sps_info->log2_max_pic_order_cnt_lsb_minus4 > 12) {
+ pr_err("log2_max_pic_order_cnt_lsb_minus4[%d] is not within range [0 - 12]",
+ sps_info->log2_max_pic_order_cnt_lsb_minus4);
+ sps_info->log2_max_pic_order_cnt_lsb_minus4 = 12;
+ sps_parser_error |= BSPP_ERROR_CORRECTION_VALIDVALUE;
+ }
+ } else if (sps_info->pic_order_cnt_type == 1) {
+ sps_info->delta_pic_order_always_zero_flag = swsr_read_bits(swsr_context, 1);
+ sps_info->offset_for_non_ref_pic = swsr_read_signed_expgoulomb(swsr_context);
+ sps_info->offset_for_top_to_bottom_field = swsr_read_signed_expgoulomb
+ (swsr_context);
+ sps_info->num_ref_frames_in_pic_order_cnt_cycle = swsr_read_unsigned_expgoulomb
+ (swsr_context);
+ if (sps_info->num_ref_frames_in_pic_order_cnt_cycle > 255) {
+			pr_err("num_ref_frames_in_pic_order_cnt_cycle[%d] is not within range [0 - 255]",
+ sps_info->num_ref_frames_in_pic_order_cnt_cycle);
+ sps_parser_error |= BSPP_ERROR_INVALID_VALUE;
+ }
+
+ if (!sps_info->offset_for_ref_frame) {
+ sps_info->offset_for_ref_frame =
+ kmalloc((H264FW_MAX_CYCLE_REF_FRAMES * sizeof(unsigned int)),
+ GFP_KERNEL);
+ if (!sps_info->offset_for_ref_frame) {
+ pr_err("out of memory");
+ sps_parser_error |= BSPP_ERROR_OUT_OF_MEMORY;
+ }
+ }
+
+ if (sps_info->offset_for_ref_frame) {
+ VDEC_ASSERT(sps_info->num_ref_frames_in_pic_order_cnt_cycle <=
+ H264FW_MAX_CYCLE_REF_FRAMES);
+ memset(sps_info->offset_for_ref_frame, 0x00,
+ (H264FW_MAX_CYCLE_REF_FRAMES * sizeof(unsigned int)));
+ for (i = 0; i < sps_info->num_ref_frames_in_pic_order_cnt_cycle; i++) {
+				/* Check the max value and exit the loop if it is exceeded */
+ sps_info->offset_for_ref_frame[i] = swsr_read_signed_expgoulomb
+ (swsr_context);
+ }
+ }
+ } else if (sps_info->pic_order_cnt_type != 2) {
+ sps_parser_error |= BSPP_ERROR_INVALID_VALUE;
+ }
+ sps_info->max_num_ref_frames = swsr_read_unsigned_expgoulomb(swsr_context);
+
+ if (sps_info->max_num_ref_frames > 16) {
+ pr_err("num_ref_frames[%d] is not within range [0 - 16]",
+ sps_info->max_num_ref_frames);
+ sps_parser_error |= BSPP_ERROR_INVALID_VALUE;
+ }
+ sps_info->gaps_in_frame_num_value_allowed_flag = swsr_read_bits(swsr_context, 1);
+ sps_info->pic_width_in_mbs_minus1 = swsr_read_unsigned_expgoulomb(swsr_context);
+ if (sps_info->pic_width_in_mbs_minus1 >= MAX_WIDTH_IN_MBS) {
+ pr_err("pic_width_in_mbs_minus1[%d] is not within range",
+ sps_info->pic_width_in_mbs_minus1);
+ sps_parser_error |= BSPP_ERROR_INVALID_VALUE;
+ }
+ sps_info->pic_height_in_map_units_minus1 = swsr_read_unsigned_expgoulomb(swsr_context);
+ if (sps_info->pic_height_in_map_units_minus1 >= MAX_HEIGHT_IN_MBS) {
+ pr_err("pic_height_in_map_units_minus1[%d] is not within range",
+ sps_info->pic_height_in_map_units_minus1);
+ sps_parser_error |= BSPP_ERROR_INVALID_VALUE;
+ }
+
+ sps_info->frame_mbs_only_flag = swsr_read_bits(swsr_context, 1);
+ if (!sps_info->frame_mbs_only_flag)
+ sps_info->mb_adaptive_frame_field_flag = swsr_read_bits(swsr_context, 1);
+ else
+ sps_info->mb_adaptive_frame_field_flag = 0;
+
+ sps_info->direct_8x8_inference_flag = swsr_read_bits(swsr_context, 1);
+
+ sps_info->frame_cropping_flag = swsr_read_bits(swsr_context, 1);
+ if (sps_info->frame_cropping_flag) {
+ sps_info->frame_crop_left_offset = swsr_read_unsigned_expgoulomb(swsr_context);
+ sps_info->frame_crop_right_offset = swsr_read_unsigned_expgoulomb(swsr_context);
+ sps_info->frame_crop_top_offset = swsr_read_unsigned_expgoulomb(swsr_context);
+ sps_info->frame_crop_bottom_offset = swsr_read_unsigned_expgoulomb(swsr_context);
+ } else {
+ sps_info->frame_crop_left_offset = 0;
+ sps_info->frame_crop_right_offset = 0;
+ sps_info->frame_crop_top_offset = 0;
+ sps_info->frame_crop_bottom_offset = 0;
+ }
+
+ sps_info->vui_parameters_present_flag = swsr_read_bits(swsr_context, 1);
+ /* initialise matrix_coefficients to 2 (unspecified) */
+ vui_info->matrix_coefficients = 2;
+
+ if (sps_info->vui_parameters_present_flag) {
+#ifdef DEBUG_DECODER_DRIVER
+ pr_info("vui_parameters_present_flag is available");
+#endif
+ /* save the SPS parse error in temp variable */
+ vui_parser_error = bspp_h264_vui_parser(swsr_context, vui_info, sps_info);
+ if (vui_parser_error != BSPP_ERROR_NONE)
+ sps_parser_error |= BSPP_ERROR_AUXDATA;
+
+#ifdef REDUCED_DPB_NO_PIC_REORDERING
+ vui_info->max_dec_frame_buffering = 1;
+ vui_info->num_reorder_frames = 0;
+#endif
+ }
+
+ if (sps_info->profile_idc == H264_PROFILE_MVC_HIGH ||
+ sps_info->profile_idc == H264_PROFILE_MVC_STEREO) {
+ pr_err("No MVC Support for this version\n");
+ }
+
+ if (swsr_check_exception(swsr_context) != SWSR_EXCEPT_NO_EXCEPTION)
+ sps_parser_error |= BSPP_ERROR_INSUFFICIENT_DATA;
+
+ return sps_parser_error;
+}
+
+/*
+ * Parse the PPS NAL unit
+ */
+static enum bspp_error_type bspp_h264_pps_parser(void *swsr_context,
+ void *str_res,
+ struct bspp_h264_pps_info *h264_pps_info)
+{
+ int i, group, chroma_format_idc;
+ unsigned int number_bits_per_slicegroup_id;
+ unsigned char n_scaling_list;
+ unsigned char more_rbsp_data;
+ unsigned int result;
+ enum bspp_error_type pps_parse_error = BSPP_ERROR_NONE;
+
+ VDEC_ASSERT(swsr_context);
+
+ h264_pps_info->pps_id = swsr_read_unsigned_expgoulomb(swsr_context);
+ if (h264_pps_info->pps_id >= MAX_PPS_COUNT) {
+ pr_err("Picture Parameter Set(PPS) ID is not within the range");
+ h264_pps_info->pps_id = (int)BSPP_INVALID;
+ return BSPP_ERROR_UNSUPPORTED;
+ }
+ h264_pps_info->seq_parameter_set_id = swsr_read_unsigned_expgoulomb(swsr_context);
+ if (h264_pps_info->seq_parameter_set_id >= MAX_SPS_COUNT) {
+ pr_err("Sequence Parameter Set(SPS) ID is not within the range");
+ h264_pps_info->seq_parameter_set_id = (int)BSPP_INVALID;
+ return BSPP_ERROR_UNSUPPORTED;
+ }
+
+ {
+ /*
+		 * Get the chroma_format_idc from the SPS. Because MVC shares
+		 * SPS and subset SPS ids (H.7.4.1.2.1), at this point it is
+		 * not clear whether this PPS refers to an SPS or a subset
+		 * SPS. For chroma_format_idc it should be fine, however, to
+		 * try to locate a subset SPS if there isn't a normal one.
+ */
+ struct bspp_h264_seq_hdr_info *h264_seq_hdr_info;
+ struct bspp_sequence_hdr_info *seq_hdr_info;
+
+ seq_hdr_info = bspp_get_sequ_hdr(str_res, h264_pps_info->seq_parameter_set_id);
+
+ if (!seq_hdr_info) {
+ seq_hdr_info = bspp_get_sequ_hdr(str_res,
+ h264_pps_info->seq_parameter_set_id + 32);
+ if (!seq_hdr_info)
+ return BSPP_ERROR_NO_SEQUENCE_HDR;
+ }
+
+ h264_seq_hdr_info =
+ (struct bspp_h264_seq_hdr_info *)seq_hdr_info->secure_sequence_info;
+
+ chroma_format_idc = h264_seq_hdr_info->sps_info.chroma_format_idc;
+ }
+
+ h264_pps_info->entropy_coding_mode_flag = swsr_read_bits(swsr_context, 1);
+ h264_pps_info->pic_order_present_flag = swsr_read_bits(swsr_context, 1);
+ h264_pps_info->num_slice_groups_minus1 = swsr_read_unsigned_expgoulomb(swsr_context);
+ if ((h264_pps_info->num_slice_groups_minus1 + 1) >
+ MAX_SLICEGROUP_COUNT) {
+ h264_pps_info->num_slice_groups_minus1 =
+ MAX_SLICEGROUP_COUNT - 1;
+ pps_parse_error |= BSPP_ERROR_UNRECOVERABLE;
+ }
+
+ if (h264_pps_info->num_slice_groups_minus1 > 0) {
+ h264_pps_info->slice_group_map_type = swsr_read_unsigned_expgoulomb(swsr_context);
+		pr_err("slice_group_map_type is %d, parsed by BSPP",
+		       h264_pps_info->slice_group_map_type);
+ if (h264_pps_info->slice_group_map_type > 6) {
+ pr_err("slice_group_map_type [%d] is not within the range [ 0- 6 ]",
+ h264_pps_info->slice_group_map_type);
+ pps_parse_error |= BSPP_ERROR_UNRECOVERABLE;
+ }
+
+ if (h264_pps_info->slice_group_map_type == 0) {
+ for (group = 0; group <= h264_pps_info->num_slice_groups_minus1; group++) {
+ h264_pps_info->run_length_minus1[group] =
+ swsr_read_unsigned_expgoulomb(swsr_context);
+ }
+ } else if (h264_pps_info->slice_group_map_type == 2) {
+ for (group = 0; group < h264_pps_info->num_slice_groups_minus1; group++) {
+ h264_pps_info->top_left[group] = swsr_read_unsigned_expgoulomb
+ (swsr_context);
+ h264_pps_info->bottom_right[group] =
+ swsr_read_unsigned_expgoulomb(swsr_context);
+ }
+ } else if (h264_pps_info->slice_group_map_type == 3 ||
+ h264_pps_info->slice_group_map_type == 4 ||
+ h264_pps_info->slice_group_map_type == 5) {
+ h264_pps_info->slice_group_change_direction_flag = swsr_read_bits
+ (swsr_context, 1);
+ h264_pps_info->slice_group_change_rate_minus1 =
+ swsr_read_unsigned_expgoulomb(swsr_context);
+ } else if (h264_pps_info->slice_group_map_type == 6) {
+ h264_pps_info->pic_size_in_map_unit = swsr_read_unsigned_expgoulomb
+ (swsr_context);
+ if (h264_pps_info->pic_size_in_map_unit >= H264_MAX_SGM_SIZE) {
+ pr_err("pic_size_in_map_units_minus1 [%d] is not within the range",
+ h264_pps_info->pic_size_in_map_unit);
+ pps_parse_error |= BSPP_ERROR_UNRECOVERABLE;
+ }
+ number_bits_per_slicegroup_id = h264ceillog2
+ (h264_pps_info->num_slice_groups_minus1 + 1);
+
+ if ((h264_pps_info->pic_size_in_map_unit + 1) >
+ h264_pps_info->h264_ppssgm_info.slicegroupidnum) {
+ unsigned char *slice_group_id =
+ kmalloc(((h264_pps_info->pic_size_in_map_unit + 1) *
+ sizeof(unsigned char)),
+ GFP_KERNEL);
+ if (!slice_group_id) {
+ pr_err("out of memory");
+ pps_parse_error |= BSPP_ERROR_OUT_OF_MEMORY;
+ } else {
+ pr_err("reallocating SGM info from size %lu bytes to size %lu bytes",
+ h264_pps_info->h264_ppssgm_info.slicegroupidnum *
+ sizeof(unsigned char),
+ (h264_pps_info->pic_size_in_map_unit + 1) *
+ sizeof(unsigned char));
+ if (h264_pps_info->h264_ppssgm_info.slice_group_id) {
+ memcpy
+ (slice_group_id,
+ h264_pps_info->h264_ppssgm_info.slice_group_id,
+ h264_pps_info->h264_ppssgm_info.slicegroupidnum *
+ sizeof(unsigned char));
+ kfree
+ (h264_pps_info->h264_ppssgm_info.slice_group_id);
+ }
+ h264_pps_info->h264_ppssgm_info.slicegroupidnum =
+ (h264_pps_info->pic_size_in_map_unit + 1);
+ h264_pps_info->h264_ppssgm_info.slice_group_id =
+ slice_group_id;
+ }
+ }
+
+ VDEC_ASSERT((h264_pps_info->pic_size_in_map_unit + 1) <=
+ h264_pps_info->h264_ppssgm_info.slicegroupidnum);
+ for (i = 0; i <= h264_pps_info->pic_size_in_map_unit; i++)
+ h264_pps_info->h264_ppssgm_info.slice_group_id[i] =
+ swsr_read_bits(swsr_context, number_bits_per_slicegroup_id);
+ }
+ }
+
+ for (i = 0; i < H264FW_MAX_REFPIC_LISTS; i++) {
+ h264_pps_info->num_ref_idx_lx_active_minus1[i] = swsr_read_unsigned_expgoulomb
+ (swsr_context);
+ if (h264_pps_info->num_ref_idx_lx_active_minus1[i] >=
+ SL_MAX_REF_IDX) {
+ pr_err("num_ref_idx_lx_active_minus1[%d] [%d] is not within the range",
+ i, h264_pps_info->num_ref_idx_lx_active_minus1[i]);
+ pps_parse_error |= BSPP_ERROR_UNRECOVERABLE;
+ }
+ }
+
+ h264_pps_info->weighted_pred_flag = swsr_read_bits(swsr_context, 1);
+ h264_pps_info->weighted_bipred_idc = swsr_read_bits(swsr_context, 2);
+ h264_pps_info->pic_init_qp_minus26 = swsr_read_signed_expgoulomb(swsr_context);
+ if (h264_pps_info->pic_init_qp_minus26 > 26)
+ pr_err("pic_init_qp_minus26[%d] is not within the range [-25 , 26]",
+ h264_pps_info->pic_init_qp_minus26);
+
+ h264_pps_info->pic_init_qs_minus26 = swsr_read_signed_expgoulomb(swsr_context);
+ if (h264_pps_info->pic_init_qs_minus26 > 26)
+ pr_err("pic_init_qs_minus26[%d] is not within the range [-25 , 26]",
+ h264_pps_info->pic_init_qs_minus26);
+
+ h264_pps_info->chroma_qp_index_offset = swsr_read_signed_expgoulomb(swsr_context);
+ if (h264_pps_info->chroma_qp_index_offset > H264_MAX_CHROMA_QP_INDEX_OFFSET)
+ h264_pps_info->chroma_qp_index_offset = H264_MAX_CHROMA_QP_INDEX_OFFSET;
+
+ else if (h264_pps_info->chroma_qp_index_offset < H264_MIN_CHROMA_QP_INDEX_OFFSET)
+ h264_pps_info->chroma_qp_index_offset = H264_MIN_CHROMA_QP_INDEX_OFFSET;
+
+ h264_pps_info->deblocking_filter_control_present_flag = swsr_read_bits(swsr_context, 1);
+ h264_pps_info->constrained_intra_pred_flag = swsr_read_bits(swsr_context, 1);
+ h264_pps_info->redundant_pic_cnt_present_flag = swsr_read_bits(swsr_context, 1);
+
+ /* Check for more rbsp data. */
+ result = swsr_check_more_rbsp_data(swsr_context, &more_rbsp_data);
+ if (result == 0 && more_rbsp_data) {
+#ifdef DEBUG_DECODER_DRIVER
+ pr_info("More RBSP data is available");
+#endif
+ /* Fidelity Range Extensions Stuff */
+ h264_pps_info->transform_8x8_mode_flag = swsr_read_bits(swsr_context, 1);
+ h264_pps_info->pic_scaling_matrix_present_flag = swsr_read_bits(swsr_context, 1);
+ if (h264_pps_info->pic_scaling_matrix_present_flag) {
+ if (!h264_pps_info->scllst4x4pic) {
+ h264_pps_info->scllst4x4pic =
+ kmalloc((sizeof(unsigned char[H264FW_NUM_4X4_LISTS]
+ [H264FW_4X4_SIZE])), GFP_KERNEL);
+ if (!h264_pps_info->scllst4x4pic) {
+ pps_parse_error |= BSPP_ERROR_OUT_OF_MEMORY;
+ } else {
+ VDEC_ASSERT(h264_pps_info->scllst4x4pic);
+ memset(h264_pps_info->scllst4x4pic, 0x00,
+ sizeof(unsigned char[H264FW_NUM_4X4_LISTS]
+ [H264FW_4X4_SIZE]));
+ }
+ }
+ if (!h264_pps_info->scllst8x8pic) {
+ h264_pps_info->scllst8x8pic =
+ kmalloc((sizeof(unsigned char[H264FW_NUM_8X8_LISTS]
+ [H264FW_8X8_SIZE])), GFP_KERNEL);
+ if (!h264_pps_info->scllst8x8pic) {
+ pps_parse_error |= BSPP_ERROR_OUT_OF_MEMORY;
+ } else {
+ VDEC_ASSERT(h264_pps_info->scllst8x8pic);
+ memset(h264_pps_info->scllst8x8pic, 0x00,
+ sizeof(unsigned char[H264FW_NUM_8X8_LISTS]
+ [H264FW_8X8_SIZE]));
+ }
+ }
+ {
+ unsigned char(*scllst4x4pic)[H264FW_NUM_4X4_LISTS][H264FW_4X4_SIZE] =
+ (unsigned char (*)[H264FW_NUM_4X4_LISTS][H264FW_4X4_SIZE])
+ h264_pps_info->scllst4x4pic;
+ unsigned char(*scllst8x8pic)[H264FW_NUM_8X8_LISTS][H264FW_8X8_SIZE] =
+ (unsigned char (*)[H264FW_NUM_8X8_LISTS][H264FW_8X8_SIZE])
+ h264_pps_info->scllst8x8pic;
+
+ /*
+				 * For chroma_format_idc = 3 (YUV 4:4:4) the total number
+				 * of lists is 12 if transform_8x8_mode_flag is enabled, else 6.
+ */
+ n_scaling_list = 6 + (chroma_format_idc != 3 ? 2 : 6) *
+ h264_pps_info->transform_8x8_mode_flag;
+ if (n_scaling_list > 12)
+ pps_parse_error |= BSPP_ERROR_UNRECOVERABLE;
+
+ VDEC_ASSERT(h264_pps_info->scllst4x4pic);
+ VDEC_ASSERT(h264_pps_info->scllst8x8pic);
+ for (i = 0; i < n_scaling_list; i++) {
+ unsigned char *ptr =
+ &h264_pps_info->usedefaultscalingmatrixflag_pic[i];
+
+ h264_pps_info->pic_scaling_list_present_flag[i] =
+ swsr_read_bits(swsr_context, 1);
+ if (h264_pps_info->pic_scaling_list_present_flag[i]) {
+ if (i < 6)
+ pps_parse_error |=
+ bspp_h264_scl_listparser
+ (swsr_context,
+ (*scllst4x4pic)[i], 16, ptr);
+ else
+ pps_parse_error |=
+ bspp_h264_scl_listparser
+ (swsr_context,
+ (*scllst8x8pic)[i - 6], 64, ptr);
+ }
+ }
+ }
+ }
+ h264_pps_info->second_chroma_qp_index_offset = swsr_read_signed_expgoulomb
+ (swsr_context);
+
+ if (h264_pps_info->second_chroma_qp_index_offset > H264_MAX_CHROMA_QP_INDEX_OFFSET)
+ h264_pps_info->second_chroma_qp_index_offset =
+ H264_MAX_CHROMA_QP_INDEX_OFFSET;
+ else if (h264_pps_info->second_chroma_qp_index_offset <
+ H264_MIN_CHROMA_QP_INDEX_OFFSET)
+ h264_pps_info->second_chroma_qp_index_offset =
+ H264_MIN_CHROMA_QP_INDEX_OFFSET;
+ } else {
+ h264_pps_info->second_chroma_qp_index_offset =
+ h264_pps_info->chroma_qp_index_offset;
+ }
+
+ if (swsr_check_exception(swsr_context) != SWSR_EXCEPT_NO_EXCEPTION)
+ pps_parse_error |= BSPP_ERROR_INSUFFICIENT_DATA;
+
+ return pps_parse_error;
+}
+
+static int bspp_h264_release_sequ_hdr_info(void *str_alloc, void *secure_sps_info)
+{
+ struct bspp_h264_seq_hdr_info *h264_seq_hdr_info =
+ (struct bspp_h264_seq_hdr_info *)secure_sps_info;
+
+ if (!h264_seq_hdr_info)
+ return IMG_ERROR_INVALID_PARAMETERS;
+
+ return 0;
+}
+
+static int bspp_h264_reset_seq_hdr_info(void *secure_sps_info)
+{
+ struct bspp_h264_seq_hdr_info *h264_seq_hdr_info = NULL;
+ unsigned int *nal_hrd_bitrate_valueminus1 = NULL;
+ unsigned int *vcl_hrd_bitrate_valueminus1 = NULL;
+ unsigned int *nal_hrd_cpbsize_valueminus1 = NULL;
+ unsigned int *vcl_hrd_cpbsize_valueminus1 = NULL;
+ unsigned char *nal_hrd_cbrflag = NULL;
+ unsigned char *vcl_hrd_cbrflag = NULL;
+ unsigned int *offset_for_ref_frame = NULL;
+ unsigned char *scllst4x4seq = NULL;
+ unsigned char *scllst8x8seq = NULL;
+
+ if (!secure_sps_info)
+ return IMG_ERROR_INVALID_PARAMETERS;
+
+ h264_seq_hdr_info = (struct bspp_h264_seq_hdr_info *)secure_sps_info;
+
+ offset_for_ref_frame = h264_seq_hdr_info->sps_info.offset_for_ref_frame;
+ scllst4x4seq = h264_seq_hdr_info->sps_info.scllst4x4seq;
+ scllst8x8seq = h264_seq_hdr_info->sps_info.scllst8x8seq;
+ nal_hrd_bitrate_valueminus1 =
+ h264_seq_hdr_info->vui_info.nal_hrd_parameters.bit_rate_value_minus1;
+ vcl_hrd_bitrate_valueminus1 =
+ h264_seq_hdr_info->vui_info.vcl_hrd_parameters.bit_rate_value_minus1;
+ nal_hrd_cpbsize_valueminus1 =
+ h264_seq_hdr_info->vui_info.nal_hrd_parameters.cpb_size_value_minus1;
+ vcl_hrd_cpbsize_valueminus1 =
+ h264_seq_hdr_info->vui_info.vcl_hrd_parameters.cpb_size_value_minus1;
+ nal_hrd_cbrflag = h264_seq_hdr_info->vui_info.nal_hrd_parameters.cbr_flag;
+ vcl_hrd_cbrflag = h264_seq_hdr_info->vui_info.vcl_hrd_parameters.cbr_flag;
+
+ /* Cleaning vui_info */
+ if (h264_seq_hdr_info->vui_info.nal_hrd_parameters.bit_rate_value_minus1)
+ memset(h264_seq_hdr_info->vui_info.nal_hrd_parameters.bit_rate_value_minus1,
+ 0x00, VDEC_H264_MAXIMUMVALUEOFCPB_CNT * sizeof(unsigned int));
+
+ if (h264_seq_hdr_info->vui_info.nal_hrd_parameters.cpb_size_value_minus1)
+ memset(h264_seq_hdr_info->vui_info.nal_hrd_parameters.cpb_size_value_minus1,
+ 0x00, VDEC_H264_MAXIMUMVALUEOFCPB_CNT * sizeof(unsigned int));
+
+ if (h264_seq_hdr_info->vui_info.vcl_hrd_parameters.cpb_size_value_minus1)
+ memset(h264_seq_hdr_info->vui_info.vcl_hrd_parameters.cpb_size_value_minus1,
+ 0x00, VDEC_H264_MAXIMUMVALUEOFCPB_CNT * sizeof(unsigned int));
+
+ if (h264_seq_hdr_info->vui_info.nal_hrd_parameters.cbr_flag)
+ memset(h264_seq_hdr_info->vui_info.nal_hrd_parameters.cbr_flag,
+ 0x00, VDEC_H264_MAXIMUMVALUEOFCPB_CNT * sizeof(unsigned char));
+
+ if (h264_seq_hdr_info->vui_info.vcl_hrd_parameters.cbr_flag)
+ memset(h264_seq_hdr_info->vui_info.vcl_hrd_parameters.cbr_flag,
+ 0x00, VDEC_H264_MAXIMUMVALUEOFCPB_CNT * sizeof(unsigned char));
+
+ /* Cleaning sps_info */
+ if (h264_seq_hdr_info->sps_info.offset_for_ref_frame)
+ memset(h264_seq_hdr_info->sps_info.offset_for_ref_frame, 0x00,
+ H264FW_MAX_CYCLE_REF_FRAMES * sizeof(unsigned int));
+
+ if (h264_seq_hdr_info->sps_info.scllst4x4seq)
+ memset(h264_seq_hdr_info->sps_info.scllst4x4seq, 0x00,
+ sizeof(unsigned char[H264FW_NUM_4X4_LISTS][H264FW_4X4_SIZE]));
+
+ if (h264_seq_hdr_info->sps_info.scllst8x8seq)
+ memset(h264_seq_hdr_info->sps_info.scllst8x8seq, 0x00,
+ sizeof(unsigned char[H264FW_NUM_8X8_LISTS][H264FW_8X8_SIZE]));
+
+ /* Erasing the structure */
+ memset(h264_seq_hdr_info, 0, sizeof(*h264_seq_hdr_info));
+
+ /* Restoring pointers */
+ h264_seq_hdr_info->sps_info.offset_for_ref_frame = offset_for_ref_frame;
+ h264_seq_hdr_info->sps_info.scllst4x4seq = scllst4x4seq;
+ h264_seq_hdr_info->sps_info.scllst8x8seq = scllst8x8seq;
+
+ h264_seq_hdr_info->vui_info.nal_hrd_parameters.bit_rate_value_minus1 =
+ nal_hrd_bitrate_valueminus1;
+ h264_seq_hdr_info->vui_info.vcl_hrd_parameters.bit_rate_value_minus1 =
+ vcl_hrd_bitrate_valueminus1;
+
+ h264_seq_hdr_info->vui_info.nal_hrd_parameters.cpb_size_value_minus1 =
+ nal_hrd_cpbsize_valueminus1;
+ h264_seq_hdr_info->vui_info.vcl_hrd_parameters.cpb_size_value_minus1 =
+ vcl_hrd_cpbsize_valueminus1;
+
+ h264_seq_hdr_info->vui_info.nal_hrd_parameters.cbr_flag = nal_hrd_cbrflag;
+ h264_seq_hdr_info->vui_info.vcl_hrd_parameters.cbr_flag = vcl_hrd_cbrflag;
+
+ return 0;
+}
+
+static int bspp_h264_reset_pps_info(void *secure_pps_info)
+{
+ struct bspp_h264_pps_info *h264_pps_info = NULL;
+ unsigned short slicegroupidnum = 0;
+ unsigned char *slice_group_id = NULL;
+ unsigned char *scllst4x4pic = NULL;
+ unsigned char *scllst8x8pic = NULL;
+
+ if (!secure_pps_info)
+ return IMG_ERROR_INVALID_PARAMETERS;
+
+ h264_pps_info = (struct bspp_h264_pps_info *)secure_pps_info;
+
+	/*
+	 * Store temporary values (we want to keep the SGM structure
+	 * as it may be useful again instead of reallocating it later).
+	 */
+ slice_group_id = h264_pps_info->h264_ppssgm_info.slice_group_id;
+ slicegroupidnum = h264_pps_info->h264_ppssgm_info.slicegroupidnum;
+ scllst4x4pic = h264_pps_info->scllst4x4pic;
+ scllst8x8pic = h264_pps_info->scllst8x8pic;
+
+ if (h264_pps_info->h264_ppssgm_info.slice_group_id)
+ memset(h264_pps_info->h264_ppssgm_info.slice_group_id, 0x00,
+ h264_pps_info->h264_ppssgm_info.slicegroupidnum * sizeof(unsigned char));
+
+ if (h264_pps_info->scllst4x4pic)
+ memset(h264_pps_info->scllst4x4pic, 0x00,
+ sizeof(unsigned char[H264FW_NUM_4X4_LISTS][H264FW_4X4_SIZE]));
+
+ if (h264_pps_info->scllst8x8pic)
+ memset(h264_pps_info->scllst8x8pic, 0x00,
+ sizeof(unsigned char[H264FW_NUM_8X8_LISTS][H264FW_8X8_SIZE]));
+
+ /* Erasing the structure */
+ memset(h264_pps_info, 0x00, sizeof(*h264_pps_info));
+
+ /* Copy the temp variable back */
+ h264_pps_info->h264_ppssgm_info.slicegroupidnum = slicegroupidnum;
+ h264_pps_info->h264_ppssgm_info.slice_group_id = slice_group_id;
+ h264_pps_info->scllst4x4pic = scllst4x4pic;
+ h264_pps_info->scllst8x8pic = scllst8x8pic;
+
+ return 0;
+}
+
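+/*
+ * Parse only the slice_header() fields (ITU-T H.264, 7.3.3) needed to
+ * detect picture boundaries and, for FMO streams (slice group map types
+ * 3..5), the slice_group_change_cycle; the remaining syntax elements are
+ * read and discarded.
+ */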
+static enum bspp_error_type bspp_h264_pict_hdr_parser
+ (void *swsr_context, void *str_res,
+ struct bspp_h264_slice_hdr_info *h264_slice_hdr_info,
+ struct bspp_pps_info **pps_info,
+ struct bspp_sequence_hdr_info **seq_hdr_info,
+ enum h264_nalunittype nal_unit_type,
+ unsigned char nal_ref_idc)
+{
+ enum bspp_error_type slice_parse_error = BSPP_ERROR_NONE;
+ struct bspp_h264_pps_info *h264_pps_info;
+ struct bspp_pps_info *pps_info_loc;
+ struct bspp_h264_seq_hdr_info *h264_seq_hdr_info;
+ struct bspp_sequence_hdr_info *seq_hdr_info_loc;
+ int id_loc;
+
+ VDEC_ASSERT(swsr_context);
+
+ memset(h264_slice_hdr_info, 0, sizeof(*h264_slice_hdr_info));
+
+ h264_slice_hdr_info->first_mb_in_slice = swsr_read_unsigned_expgoulomb(swsr_context);
+ h264_slice_hdr_info->slice_type = (enum bspp_h264_slice_type)swsr_read_unsigned_expgoulomb
+ (swsr_context);
+ if ((unsigned int)h264_slice_hdr_info->slice_type > 9) {
+ pr_err("Slice Type [%d] invalid, set to P", h264_slice_hdr_info->slice_type);
+ h264_slice_hdr_info->slice_type = (enum bspp_h264_slice_type)0;
+ slice_parse_error |= BSPP_ERROR_CORRECTION_VALIDVALUE;
+ }
+ h264_slice_hdr_info->slice_type =
+ (enum bspp_h264_slice_type)(h264_slice_hdr_info->slice_type % 5);
+
+ h264_slice_hdr_info->pps_id = swsr_read_unsigned_expgoulomb(swsr_context);
+ if (h264_slice_hdr_info->pps_id >= MAX_PPS_COUNT) {
+ pr_err("Picture Parameter ID [%d] invalid, set to 0", h264_slice_hdr_info->pps_id);
+ h264_slice_hdr_info->pps_id = 0;
+ slice_parse_error |= BSPP_ERROR_CORRECTION_VALIDVALUE;
+ }
+
+ /* Set relevant PPS and SPS */
+ pps_info_loc = bspp_get_pps_hdr(str_res, h264_slice_hdr_info->pps_id);
+
+ if (!pps_info_loc) {
+ slice_parse_error |= BSPP_ERROR_NO_PPS;
+ goto error;
+ }
+ h264_pps_info = (struct bspp_h264_pps_info *)pps_info_loc->secure_pps_info;
+ if (!h264_pps_info) {
+ slice_parse_error |= BSPP_ERROR_NO_PPS;
+ goto error;
+ }
+ VDEC_ASSERT(h264_pps_info->pps_id == h264_slice_hdr_info->pps_id);
+ *pps_info = pps_info_loc;
+
+ /* seq_parameter_set_id is always in range 0-31,
+ * so we can add offset indicating subsequence header
+ */
+ id_loc = h264_pps_info->seq_parameter_set_id;
+ id_loc = (nal_unit_type == H264_NALTYPE_SLICE_SCALABLE ||
+ nal_unit_type == H264_NALTYPE_SLICE_IDR_SCALABLE ||
+ nal_unit_type == H264_NALTYPE_SUBSET_SPS) ? id_loc + 32 : id_loc;
+
+ seq_hdr_info_loc = bspp_get_sequ_hdr(str_res, id_loc);
+
+ if (!seq_hdr_info_loc) {
+ slice_parse_error |= BSPP_ERROR_NO_SEQUENCE_HDR;
+ goto error;
+ }
+ h264_seq_hdr_info = (struct bspp_h264_seq_hdr_info *)seq_hdr_info_loc->secure_sequence_info;
+ VDEC_ASSERT((unsigned int)h264_seq_hdr_info->sps_info.seq_parameter_set_id ==
+ h264_pps_info->seq_parameter_set_id);
+ *seq_hdr_info = seq_hdr_info_loc;
+
+	/*
+	 * For MINIMAL parsing in secure mode, slice header parsing can stop
+	 * here; this may be problematic with field-coded streams and
+	 * splitting fields.
+	 */
+ if (h264_seq_hdr_info->sps_info.separate_colour_plane_flag)
+ h264_slice_hdr_info->colour_plane_id = swsr_read_bits(swsr_context, 2);
+
+ else
+ h264_slice_hdr_info->colour_plane_id = 0;
+
+ h264_slice_hdr_info->frame_num = swsr_read_bits
+ (swsr_context,
+ h264_seq_hdr_info->sps_info.log2_max_frame_num_minus4
+ + 4);
+
+ VDEC_ASSERT(h264_slice_hdr_info->frame_num <
+ (1UL << (h264_seq_hdr_info->sps_info.log2_max_frame_num_minus4 + 4)));
+
+ if (!h264_seq_hdr_info->sps_info.frame_mbs_only_flag) {
+ if (h264_slice_hdr_info->slice_type == B_SLICE &&
+ !h264_seq_hdr_info->sps_info.direct_8x8_inference_flag)
+ slice_parse_error |= BSPP_ERROR_INVALID_VALUE;
+
+ h264_slice_hdr_info->field_pic_flag = swsr_read_bits(swsr_context, 1);
+ if (h264_slice_hdr_info->field_pic_flag)
+ h264_slice_hdr_info->bottom_field_flag = swsr_read_bits(swsr_context, 1);
+ else
+ h264_slice_hdr_info->bottom_field_flag = 0;
+ } else {
+ h264_slice_hdr_info->field_pic_flag = 0;
+ h264_slice_hdr_info->bottom_field_flag = 0;
+ }
+
+ /*
+ * At this point we have everything we need, but we still lack all the
+ * conditions for detecting new pictures (needed for error cases)
+ */
+ if (nal_unit_type == H264_NALTYPE_IDR_SLICE)
+ h264_slice_hdr_info->idr_pic_id = swsr_read_unsigned_expgoulomb(swsr_context);
+
+ if (h264_seq_hdr_info->sps_info.pic_order_cnt_type == 0) {
+ h264_slice_hdr_info->pic_order_cnt_lsb = swsr_read_bits
+ (swsr_context,
+ h264_seq_hdr_info->sps_info.log2_max_pic_order_cnt_lsb_minus4 + 4);
+ if (h264_pps_info->pic_order_present_flag && !h264_slice_hdr_info->field_pic_flag)
+ h264_slice_hdr_info->delta_pic_order_cnt_bottom =
+ swsr_read_signed_expgoulomb(swsr_context);
+ }
+
+ if (h264_seq_hdr_info->sps_info.pic_order_cnt_type == 1 &&
+ !h264_seq_hdr_info->sps_info.delta_pic_order_always_zero_flag) {
+ h264_slice_hdr_info->delta_pic_order_cnt[0] = swsr_read_signed_expgoulomb
+ (swsr_context);
+ if (h264_pps_info->pic_order_present_flag && !h264_slice_hdr_info->field_pic_flag)
+ h264_slice_hdr_info->delta_pic_order_cnt[1] = swsr_read_signed_expgoulomb
+ (swsr_context);
+ }
+
+ if (h264_pps_info->redundant_pic_cnt_present_flag)
+ h264_slice_hdr_info->redundant_pic_cnt =
+ swsr_read_unsigned_expgoulomb(swsr_context);
+
+ /* For FMO streams, we need to go further */
+ if (h264_pps_info->num_slice_groups_minus1 != 0 &&
+ h264_pps_info->slice_group_map_type >= 3 &&
+ h264_pps_info->slice_group_map_type <= 5) {
+ if (h264_slice_hdr_info->slice_type == B_SLICE)
+ swsr_read_bits(swsr_context, 1);
+
+ if (h264_slice_hdr_info->slice_type == P_SLICE ||
+ h264_slice_hdr_info->slice_type == SP_SLICE ||
+ h264_slice_hdr_info->slice_type == B_SLICE) {
+ h264_slice_hdr_info->num_ref_idx_active_override_flag =
+ swsr_read_bits(swsr_context, 1);
+ if (h264_slice_hdr_info->num_ref_idx_active_override_flag) {
+ h264_slice_hdr_info->num_ref_idx_lx_active_minus1[0] =
+ swsr_read_unsigned_expgoulomb(swsr_context);
+ if (h264_slice_hdr_info->slice_type == B_SLICE)
+ h264_slice_hdr_info->num_ref_idx_lx_active_minus1[1] =
+ swsr_read_unsigned_expgoulomb(swsr_context);
+ }
+ }
+
+ if (h264_slice_hdr_info->slice_type != SI_SLICE &&
+ h264_slice_hdr_info->slice_type != I_SLICE) {
+ /* Reference picture list modification */
+ /* parse reordering info and pack into commands */
+ unsigned int i;
+ unsigned int cmd_num, list_num;
+ unsigned int command;
+
+ i = (h264_slice_hdr_info->slice_type == B_SLICE) ? 2 : 1;
+
+ for (list_num = 0; list_num < i; list_num++) {
+ cmd_num = 0;
+ if (swsr_read_bits(swsr_context, 1)) {
+ do {
+ command =
+ swsr_read_unsigned_expgoulomb(swsr_context);
+ if (command != 3) {
+ swsr_read_unsigned_expgoulomb(swsr_context);
+ cmd_num++;
+ }
+ } while (command != 3 && cmd_num <= SL_MAX_REF_IDX);
+ }
+ }
+ }
+
+ if ((h264_pps_info->weighted_pred_flag &&
+ h264_slice_hdr_info->slice_type == P_SLICE) ||
+ (h264_pps_info->weighted_bipred_idc &&
+ h264_slice_hdr_info->slice_type == B_SLICE)) {
+ int mono_chrome;
+ unsigned int list, i, j, k;
+
+ mono_chrome = (!h264_seq_hdr_info->sps_info.chroma_format_idc) ? 1 : 0;
+
+ swsr_read_unsigned_expgoulomb(swsr_context);
+ if (!mono_chrome)
+ swsr_read_unsigned_expgoulomb(swsr_context);
+
+ k = (h264_slice_hdr_info->slice_type == B_SLICE) ? 2 : 1;
+
+ for (list = 0; list < k; list++) {
+ for (i = 0;
+ i <=
+ h264_slice_hdr_info->num_ref_idx_lx_active_minus1[list];
+ i++) {
+ if (swsr_read_bits(swsr_context, 1)) {
+ swsr_read_signed_expgoulomb(swsr_context);
+ swsr_read_signed_expgoulomb(swsr_context);
+ }
+
+ if (!mono_chrome && (swsr_read_bits(swsr_context, 1))) {
+ for (j = 0; j < 2; j++) {
+ swsr_read_signed_expgoulomb
+ (swsr_context);
+ swsr_read_signed_expgoulomb
+ (swsr_context);
+ }
+ }
+ }
+ }
+ }
+
+ if (nal_ref_idc != 0) {
+ unsigned int memmanop;
+
+ if (nal_unit_type == H264_NALTYPE_IDR_SLICE) {
+ swsr_read_bits(swsr_context, 1);
+ swsr_read_bits(swsr_context, 1);
+ }
+ if (swsr_read_bits(swsr_context, 1)) {
+ do {
+ /* clamp 0--6 */
+ memmanop = swsr_read_unsigned_expgoulomb
+ (swsr_context);
+ if (memmanop != 0 && memmanop != 5) {
+ if (memmanop == 3) {
+ swsr_read_unsigned_expgoulomb
+ (swsr_context);
+ swsr_read_unsigned_expgoulomb
+ (swsr_context);
+ } else {
+ swsr_read_unsigned_expgoulomb
+ (swsr_context);
+ }
+ }
+ } while (memmanop != 0);
+ }
+ }
+
+ if (h264_pps_info->entropy_coding_mode_flag &&
+ h264_slice_hdr_info->slice_type != I_SLICE)
+ swsr_read_unsigned_expgoulomb(swsr_context);
+
+ swsr_read_signed_expgoulomb(swsr_context);
+
+ if (h264_slice_hdr_info->slice_type == SP_SLICE ||
+ h264_slice_hdr_info->slice_type == SI_SLICE) {
+ if (h264_slice_hdr_info->slice_type == SP_SLICE)
+ swsr_read_bits(swsr_context, 1);
+
+ /* slice_qs_delta */
+ swsr_read_signed_expgoulomb(swsr_context);
+ }
+
+ if (h264_pps_info->deblocking_filter_control_present_flag) {
+ if (swsr_read_unsigned_expgoulomb(swsr_context) != 1) {
+ swsr_read_signed_expgoulomb(swsr_context);
+ swsr_read_signed_expgoulomb(swsr_context);
+ }
+ }
+
+ if (h264_pps_info->slice_group_map_type >= 3 &&
+ h264_pps_info->slice_group_map_type <= 5) {
+ unsigned int num_slice_group_map_units =
+ (h264_seq_hdr_info->sps_info.pic_height_in_map_units_minus1 + 1) *
+ (h264_seq_hdr_info->sps_info.pic_width_in_mbs_minus1 + 1);
+
+ unsigned short slice_group_change_rate =
+ (h264_pps_info->slice_group_change_rate_minus1 + 1);
+
+ unsigned int width = h264ceillog2(num_slice_group_map_units /
+ slice_group_change_rate +
+ (num_slice_group_map_units % slice_group_change_rate ==
+ 0 ? 0 : 1) + 1); /* (7-32) */
+ h264_slice_hdr_info->slice_group_change_cycle = swsr_read_bits(swsr_context,
+ width);
+ }
+ }
+
+error:
+ return slice_parse_error;
+}
+
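+/*
+ * Select the scaling lists used for this picture: PPS lists take precedence
+ * over SPS lists. Lists that are not transmitted are derived using the
+ * fall-back rules of ITU-T H.264 Table 7-2: rule A starts from the default
+ * lists, rule B starts from the SPS lists, and later lists inherit from the
+ * previously derived list of the same size. Flat lists are used when no
+ * scaling matrix is present at all.
+ */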
+static void bspp_h264_select_scaling_list(struct h264fw_picture_ps *h264fw_pps_info,
+ struct bspp_h264_pps_info *h264_pps_info,
+ struct bspp_h264_seq_hdr_info *h264_seq_hdr_info)
+{
+ unsigned int num8x8_lists;
+ unsigned int i;
+ const unsigned char *quant_matrix = NULL;
+ unsigned char (*scllst4x4pic)[H264FW_NUM_4X4_LISTS][H264FW_4X4_SIZE] =
+ (unsigned char (*)[H264FW_NUM_4X4_LISTS][H264FW_4X4_SIZE])h264_pps_info->scllst4x4pic;
+ unsigned char (*scllst8x8pic)[H264FW_NUM_8X8_LISTS][H264FW_8X8_SIZE] =
+ (unsigned char (*)[H264FW_NUM_8X8_LISTS][H264FW_8X8_SIZE])h264_pps_info->scllst8x8pic;
+
+ unsigned char (*scllst4x4seq)[H264FW_NUM_4X4_LISTS][H264FW_4X4_SIZE] =
+ (unsigned char (*)[H264FW_NUM_4X4_LISTS][H264FW_4X4_SIZE])
+ h264_seq_hdr_info->sps_info.scllst4x4seq;
+ unsigned char (*scllst8x8seq)[H264FW_NUM_8X8_LISTS][H264FW_8X8_SIZE] =
+ (unsigned char (*)[H264FW_NUM_8X8_LISTS][H264FW_8X8_SIZE])
+ h264_seq_hdr_info->sps_info.scllst8x8seq;
+
+ if (h264_seq_hdr_info->sps_info.seq_scaling_matrix_present_flag) {
+ VDEC_ASSERT(h264_seq_hdr_info->sps_info.scllst4x4seq);
+ VDEC_ASSERT(h264_seq_hdr_info->sps_info.scllst8x8seq);
+ }
+
+ if (h264_pps_info->pic_scaling_matrix_present_flag) {
+ for (i = 0; i < H264FW_NUM_4X4_LISTS; i++) {
+ if (h264_pps_info->pic_scaling_list_present_flag[i]) {
+ if (h264_pps_info->usedefaultscalingmatrixflag_pic[i])
+ quant_matrix =
+ (i > 2) ? default_4x4_inter : default_4x4_intra;
+ else
+ quant_matrix = (*scllst4x4pic)[i];
+
+ } else {
+ if (h264_seq_hdr_info->sps_info.seq_scaling_matrix_present_flag) {
+ /* SPS matrix present - use fallback rule B */
+ /* first 4x4 Intra list */
+ if (i == 0) {
+ if
+ (h264_seq_hdr_info->sps_info.seq_scaling_list_present_flag[i] &&
+ !h264_seq_hdr_info->sps_info.usedefaultscalingmatrixflag_seq[i]) {
+ VDEC_ASSERT
+ (h264_seq_hdr_info->sps_info.scllst4x4seq);
+ if (scllst4x4seq)
+ quant_matrix = (*scllst4x4seq)[i];
+ } else {
+ quant_matrix = default_4x4_intra;
+ }
+ }
+ /* first 4x4 Inter list */
+ else if (i == 3) {
+ if
+ (h264_seq_hdr_info->sps_info.seq_scaling_list_present_flag[i] &&
+ !h264_seq_hdr_info->sps_info.usedefaultscalingmatrixflag_seq[i]) {
+ VDEC_ASSERT
+ (h264_seq_hdr_info->sps_info.scllst4x4seq);
+ if (scllst4x4seq)
+ quant_matrix = (*scllst4x4seq)[i];
+ } else {
+ quant_matrix = default_4x4_inter;
+ }
+ } else {
+ quant_matrix =
+ h264fw_pps_info->scalinglist4x4[i - 1];
+ }
+ } else {
+ /* SPS matrix not present - use fallback rule A */
+ /* first 4x4 Intra list */
+ if (i == 0)
+ quant_matrix = default_4x4_intra;
+ /* first 4x4 Interlist */
+ else if (i == 3)
+ quant_matrix = default_4x4_inter;
+ else
+ quant_matrix =
+ h264fw_pps_info->scalinglist4x4[i - 1];
+ }
+ }
+ if (!quant_matrix) {
+ VDEC_ASSERT(0);
+ return;
+ }
+ /* copy correct 4x4 list to output - as selected by PPS */
+ memcpy(h264fw_pps_info->scalinglist4x4[i], quant_matrix,
+ sizeof(h264fw_pps_info->scalinglist4x4[i]));
+ }
+ } else {
+ /* PPS matrix not present, use SPS information */
+ if (h264_seq_hdr_info->sps_info.seq_scaling_matrix_present_flag) {
+ for (i = 0; i < H264FW_NUM_4X4_LISTS; i++) {
+ if (h264_seq_hdr_info->sps_info.seq_scaling_list_present_flag[i]) {
+ if
+ (h264_seq_hdr_info->sps_info.usedefaultscalingmatrixflag_seq
+ [i]) {
+ quant_matrix = (i > 2) ? default_4x4_inter
+ : default_4x4_intra;
+ } else {
+ VDEC_ASSERT
+ (h264_seq_hdr_info->sps_info.scllst4x4seq);
+ if (scllst4x4seq)
+ quant_matrix = (*scllst4x4seq)[i];
+ }
+ } else {
+ /* SPS list not present - use fallback rule A */
+ /* first 4x4 Intra list */
+ if (i == 0)
+ quant_matrix = default_4x4_intra;
+ else if (i == 3) /* first 4x4 Inter list */
+ quant_matrix = default_4x4_inter;
+ else
+ quant_matrix =
+ h264fw_pps_info->scalinglist4x4[i - 1];
+ }
+ if (quant_matrix) {
+ /* copy correct 4x4 list to output - as selected by SPS */
+ memcpy(h264fw_pps_info->scalinglist4x4[i], quant_matrix,
+ sizeof(h264fw_pps_info->scalinglist4x4[i]));
+ }
+ }
+ } else {
+ /* SPS matrix not present - use flat lists */
+ quant_matrix = default_4x4_org;
+ for (i = 0; i < H264FW_NUM_4X4_LISTS; i++)
+ memcpy(h264fw_pps_info->scalinglist4x4[i], quant_matrix,
+ sizeof(h264fw_pps_info->scalinglist4x4[i]));
+ }
+ }
+
+ /* 8x8 matrices */
+ num8x8_lists = (h264_seq_hdr_info->sps_info.chroma_format_idc == 3) ? 6 : 2;
+ if (h264_pps_info->transform_8x8_mode_flag) {
+ unsigned char *seq_scllstflg =
+ h264_seq_hdr_info->sps_info.seq_scaling_list_present_flag;
+ unsigned char *def_sclmatflg_seq =
+ h264_seq_hdr_info->sps_info.usedefaultscalingmatrixflag_seq;
+
+ if (h264_pps_info->pic_scaling_matrix_present_flag) {
+ for (i = 0; i < num8x8_lists; i++) {
+ if (h264_pps_info->pic_scaling_list_present_flag[i +
+ H264FW_NUM_4X4_LISTS]) {
+ if (h264_pps_info->usedefaultscalingmatrixflag_pic[i +
+ H264FW_NUM_4X4_LISTS]) {
+ quant_matrix = (i & 0x1) ? default_8x8_inter
+ : default_8x8_intra;
+ } else {
+ VDEC_ASSERT(h264_pps_info->scllst8x8pic);
+ if (scllst8x8pic)
+ quant_matrix = (*scllst8x8pic)[i];
+ }
+ } else {
+ if
+ (h264_seq_hdr_info->sps_info.seq_scaling_matrix_present_flag) {
+ /* SPS matrix present - use fallback rule B */
+ /* list 6 - first 8x8 Intra list */
+ if (i == 0) {
+ if (seq_scllstflg[i +
+ H264FW_NUM_4X4_LISTS] &&
+ !def_sclmatflg_seq[i +
+ H264FW_NUM_4X4_LISTS]) {
+ VDEC_ASSERT
+ (h264_seq_hdr_info->sps_info.scllst8x8seq);
+ if (scllst8x8seq)
+ quant_matrix = (*scllst8x8seq)[i];
+ } else {
+ quant_matrix = default_8x8_intra;
+ }
+ /* list 7 - first 8x8 Inter list */
+ } else if (i == 1) {
+ if (seq_scllstflg[i +
+ H264FW_NUM_4X4_LISTS] &&
+ !def_sclmatflg_seq[i +
+ H264FW_NUM_4X4_LISTS]) {
+ VDEC_ASSERT
+ (h264_seq_hdr_info->sps_info.scllst8x8seq);
+ if (scllst8x8seq)
+ quant_matrix = (*scllst8x8seq)[i];
+ } else {
+ quant_matrix = default_8x8_inter;
+ }
+ } else {
+ quant_matrix =
+ h264fw_pps_info->scalinglist8x8[i - 2];
+ }
+ } else {
+ /* SPS matrix not present - use fallback rule A */
+ /* list 6 - first 8x8 Intra list */
+ if (i == 0)
+ quant_matrix = default_8x8_intra;
+ /* list 7 - first 8x8 Inter list */
+ else if (i == 1)
+ quant_matrix = default_8x8_inter;
+ else
+ quant_matrix =
+ h264fw_pps_info->scalinglist8x8[i - 2];
+ }
+ }
+ if (quant_matrix) {
+ /* copy correct 8x8 list to output - as selected by PPS */
+ memcpy(h264fw_pps_info->scalinglist8x8[i], quant_matrix,
+ sizeof(h264fw_pps_info->scalinglist8x8[i]));
+ }
+ }
+ } else {
+ /* PPS matrix not present, use SPS information */
+ if (h264_seq_hdr_info->sps_info.seq_scaling_matrix_present_flag) {
+ for (i = 0; i < num8x8_lists; i++) {
+ if (seq_scllstflg[i + H264FW_NUM_4X4_LISTS] &&
+ def_sclmatflg_seq[i + H264FW_NUM_4X4_LISTS]) {
+ quant_matrix =
+ (i & 0x1) ? default_8x8_inter :
+ default_8x8_intra;
+ } else if ((seq_scllstflg[i + H264FW_NUM_4X4_LISTS]) &&
+ !(def_sclmatflg_seq[i + H264FW_NUM_4X4_LISTS])) {
+ VDEC_ASSERT
+ (h264_seq_hdr_info->sps_info.scllst8x8seq);
+ if (scllst8x8seq)
+ quant_matrix = (*scllst8x8seq)[i];
+ } else if (!(seq_scllstflg[i + H264FW_NUM_4X4_LISTS]) &&
+ (i == 0)) {
+ /* SPS list not present - use fallback rule A */
+ /* list 6 - first 8x8 Intra list */
+ quant_matrix = default_8x8_intra;
+ } else if (!(seq_scllstflg[i + H264FW_NUM_4X4_LISTS]) &&
+ (i == 1)) {
+ /* list 7 - first 8x8 Inter list */
+ quant_matrix = default_8x8_inter;
+ } else {
+ quant_matrix =
+ h264fw_pps_info->scalinglist8x8
+ [i - 2];
+ }
+ if (quant_matrix) {
+ /* copy correct 8x8 list to output -
+ * as selected by SPS
+ */
+ memcpy(h264fw_pps_info->scalinglist8x8[i],
+ quant_matrix,
+ sizeof(h264fw_pps_info->scalinglist8x8[i]));
+ }
+ }
+ } else {
+ /* SPS matrix not present - use flat lists */
+ quant_matrix = default_8x8_org;
+ for (i = 0; i < num8x8_lists; i++)
+ memcpy(h264fw_pps_info->scalinglist8x8[i], quant_matrix,
+ sizeof(h264fw_pps_info->scalinglist8x8[i]));
+ }
+ }
+ }
+}
+
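+/*
+ * Copy the parsed PPS fields into the firmware PPS structure; note that
+ * pic_init_qp is stored as the absolute QP (pic_init_qp_minus26 + 26).
+ */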
+static void bspp_h264_fwpps_populate(struct bspp_h264_pps_info *h264_pps_info,
+ struct h264fw_picture_ps *h264fw_pps_info)
+{
+ h264fw_pps_info->deblocking_filter_control_present_flag =
+ h264_pps_info->deblocking_filter_control_present_flag;
+ h264fw_pps_info->transform_8x8_mode_flag = h264_pps_info->transform_8x8_mode_flag;
+ h264fw_pps_info->entropy_coding_mode_flag = h264_pps_info->entropy_coding_mode_flag;
+ h264fw_pps_info->redundant_pic_cnt_present_flag =
+ h264_pps_info->redundant_pic_cnt_present_flag;
+ h264fw_pps_info->weighted_bipred_idc = h264_pps_info->weighted_bipred_idc;
+ h264fw_pps_info->weighted_pred_flag = h264_pps_info->weighted_pred_flag;
+ h264fw_pps_info->pic_order_present_flag = h264_pps_info->pic_order_present_flag;
+ h264fw_pps_info->pic_init_qp = h264_pps_info->pic_init_qp_minus26 + 26;
+ h264fw_pps_info->constrained_intra_pred_flag = h264_pps_info->constrained_intra_pred_flag;
+ VDEC_ASSERT(sizeof(h264fw_pps_info->num_ref_lx_active_minus1) ==
+ sizeof(h264_pps_info->num_ref_idx_lx_active_minus1));
+ VDEC_ASSERT(sizeof(h264fw_pps_info->num_ref_lx_active_minus1) ==
+ sizeof(unsigned char) * H264FW_MAX_REFPIC_LISTS);
+ memcpy(h264fw_pps_info->num_ref_lx_active_minus1,
+ h264_pps_info->num_ref_idx_lx_active_minus1,
+ sizeof(h264fw_pps_info->num_ref_lx_active_minus1));
+ h264fw_pps_info->slice_group_map_type = h264_pps_info->slice_group_map_type;
+ h264fw_pps_info->num_slice_groups_minus1 = h264_pps_info->num_slice_groups_minus1;
+ h264fw_pps_info->slice_group_change_rate_minus1 =
+ h264_pps_info->slice_group_change_rate_minus1;
+ h264fw_pps_info->chroma_qp_index_offset = h264_pps_info->chroma_qp_index_offset;
+ h264fw_pps_info->second_chroma_qp_index_offset =
+ h264_pps_info->second_chroma_qp_index_offset;
+}
+
+static void bspp_h264_fwseq_hdr_populate(struct bspp_h264_seq_hdr_info *h264_seq_hdr_info,
+ struct h264fw_sequence_ps *h264_fwseq_hdr_info)
+{
+ /* Basic SPS */
+ h264_fwseq_hdr_info->profile_idc = h264_seq_hdr_info->sps_info.profile_idc;
+ h264_fwseq_hdr_info->chroma_format_idc = h264_seq_hdr_info->sps_info.chroma_format_idc;
+ h264_fwseq_hdr_info->separate_colour_plane_flag =
+ h264_seq_hdr_info->sps_info.separate_colour_plane_flag;
+ h264_fwseq_hdr_info->bit_depth_luma_minus8 =
+ h264_seq_hdr_info->sps_info.bit_depth_luma_minus8;
+ h264_fwseq_hdr_info->bit_depth_chroma_minus8 =
+ h264_seq_hdr_info->sps_info.bit_depth_chroma_minus8;
+ h264_fwseq_hdr_info->delta_pic_order_always_zero_flag =
+ h264_seq_hdr_info->sps_info.delta_pic_order_always_zero_flag;
+ h264_fwseq_hdr_info->log2_max_pic_order_cnt_lsb =
+ h264_seq_hdr_info->sps_info.log2_max_pic_order_cnt_lsb_minus4 + 4;
+ h264_fwseq_hdr_info->max_num_ref_frames = h264_seq_hdr_info->sps_info.max_num_ref_frames;
+ h264_fwseq_hdr_info->log2_max_frame_num =
+ h264_seq_hdr_info->sps_info.log2_max_frame_num_minus4 + 4;
+ h264_fwseq_hdr_info->pic_order_cnt_type = h264_seq_hdr_info->sps_info.pic_order_cnt_type;
+ h264_fwseq_hdr_info->frame_mbs_only_flag = h264_seq_hdr_info->sps_info.frame_mbs_only_flag;
+ h264_fwseq_hdr_info->gaps_in_frame_num_value_allowed_flag =
+ h264_seq_hdr_info->sps_info.gaps_in_frame_num_value_allowed_flag;
+ h264_fwseq_hdr_info->constraint_set_flags =
+ h264_seq_hdr_info->sps_info.constraint_set_flags;
+ h264_fwseq_hdr_info->level_idc = h264_seq_hdr_info->sps_info.level_idc;
+ h264_fwseq_hdr_info->num_ref_frames_in_pic_order_cnt_cycle =
+ h264_seq_hdr_info->sps_info.num_ref_frames_in_pic_order_cnt_cycle;
+ h264_fwseq_hdr_info->mb_adaptive_frame_field_flag =
+ h264_seq_hdr_info->sps_info.mb_adaptive_frame_field_flag;
+ h264_fwseq_hdr_info->offset_for_non_ref_pic =
+ h264_seq_hdr_info->sps_info.offset_for_non_ref_pic;
+ h264_fwseq_hdr_info->offset_for_top_to_bottom_field =
+ h264_seq_hdr_info->sps_info.offset_for_top_to_bottom_field;
+ h264_fwseq_hdr_info->pic_width_in_mbs_minus1 =
+ h264_seq_hdr_info->sps_info.pic_width_in_mbs_minus1;
+ h264_fwseq_hdr_info->pic_height_in_map_units_minus1 =
+ h264_seq_hdr_info->sps_info.pic_height_in_map_units_minus1;
+ h264_fwseq_hdr_info->direct_8x8_inference_flag =
+ h264_seq_hdr_info->sps_info.direct_8x8_inference_flag;
+ h264_fwseq_hdr_info->qpprime_y_zero_transform_bypass_flag =
+ h264_seq_hdr_info->sps_info.qpprime_y_zero_transform_bypass_flag;
+
+ if (h264_seq_hdr_info->sps_info.offset_for_ref_frame)
+ memcpy(h264_fwseq_hdr_info->offset_for_ref_frame,
+ h264_seq_hdr_info->sps_info.offset_for_ref_frame,
+ sizeof(h264_fwseq_hdr_info->offset_for_ref_frame));
+ else
+ memset(h264_fwseq_hdr_info->offset_for_ref_frame, 0x00,
+ sizeof(h264_fwseq_hdr_info->offset_for_ref_frame));
+
+ memset(h264_fwseq_hdr_info->anchor_inter_view_reference_id_list, 0x00,
+ sizeof(h264_fwseq_hdr_info->anchor_inter_view_reference_id_list));
+ memset(h264_fwseq_hdr_info->non_anchor_inter_view_reference_id_list, 0x00,
+ sizeof(h264_fwseq_hdr_info->non_anchor_inter_view_reference_id_list));
+
+#ifdef REDUCED_DPB_NO_PIC_REORDERING
+ /* From VUI */
+ h264_fwseq_hdr_info->max_dec_frame_buffering =
+ h264_seq_hdr_info->vui_info.max_dec_frame_buffering;
+ h264_fwseq_hdr_info->num_reorder_frames = h264_seq_hdr_info->vui_info.num_reorder_frames;
+#else
+ /* From VUI */
+ if (h264_seq_hdr_info->vui_info.bitstream_restriction_flag) {
+ VDEC_ASSERT(h264_seq_hdr_info->sps_info.vui_parameters_present_flag);
+ h264_fwseq_hdr_info->max_dec_frame_buffering =
+ h264_seq_hdr_info->vui_info.max_dec_frame_buffering;
+ h264_fwseq_hdr_info->num_reorder_frames =
+ h264_seq_hdr_info->vui_info.num_reorder_frames;
+ } else {
+ h264_fwseq_hdr_info->max_dec_frame_buffering = 1;
+ h264_fwseq_hdr_info->num_reorder_frames = 16;
+ }
+#endif
+}
+
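+/*
+ * Derive the codec-agnostic sequence header from the parsed SPS/VUI:
+ * frame rate from the VUI timing info, pixel format from chroma_format_idc
+ * and the bit depths, the maximum frame size from the MB/map-unit counts,
+ * and the display region from the frame cropping offsets (equations
+ * (7-18)..(7-21) of ITU-T H.264).
+ */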
+static void bspp_h264_commonseq_hdr_populate(struct bspp_h264_seq_hdr_info *h264_seq_hdr_info,
+ struct vdec_comsequ_hdrinfo *comseq_hdr_info)
+{
+ struct bspp_h264_sps_info *sps_info = &h264_seq_hdr_info->sps_info;
+ struct bspp_h264_vui_info *vui_info = &h264_seq_hdr_info->vui_info;
+
+ comseq_hdr_info->codec_profile = sps_info->profile_idc;
+ comseq_hdr_info->codec_level = sps_info->level_idc;
+
+ if (sps_info->vui_parameters_present_flag && vui_info->timing_info_present_flag) {
+ comseq_hdr_info->frame_rate_num = vui_info->time_scale;
+ comseq_hdr_info->frame_rate_den = 2 * vui_info->num_units_in_tick;
+ comseq_hdr_info->frame_rate = ((long)comseq_hdr_info->frame_rate_num) /
+ ((long)comseq_hdr_info->frame_rate_den);
+ }
+
+	/*
+	 * The colour space description was present in the VUI parameters.
+	 * Copy it into the CommonSeqHdr info for use by the application.
+	 */
+	if (vui_info->video_signal_type_present_flag && vui_info->colour_description_present_flag) {
+ comseq_hdr_info->color_space_info.is_present = TRUE;
+ comseq_hdr_info->color_space_info.color_primaries = vui_info->colour_primaries;
+ comseq_hdr_info->color_space_info.transfer_characteristics =
+ vui_info->transfer_characteristics;
+ comseq_hdr_info->color_space_info.matrix_coefficients =
+ vui_info->matrix_coefficients;
+ }
+
+ if (vui_info->aspect_ratio_info_present_flag) {
+ comseq_hdr_info->aspect_ratio_num = vui_info->sar_width;
+ comseq_hdr_info->aspect_ratio_den = vui_info->sar_height;
+ }
+
+ comseq_hdr_info->interlaced_frames = sps_info->frame_mbs_only_flag ? 0 : 1;
+
+ /* pixel_info populate */
+ VDEC_ASSERT(sps_info->chroma_format_idc < 4);
+ comseq_hdr_info->pixel_info.chroma_fmt = (sps_info->chroma_format_idc == 0) ? 0 : 1;
+ comseq_hdr_info->pixel_info.chroma_fmt_idc = pixel_format_idc[sps_info->chroma_format_idc];
+ comseq_hdr_info->pixel_info.chroma_interleave =
+ ((sps_info->chroma_format_idc == 0) ||
+ (sps_info->chroma_format_idc == 3 && sps_info->separate_colour_plane_flag)) ?
+ PIXEL_INVALID_CI : PIXEL_UV_ORDER;
+ comseq_hdr_info->pixel_info.num_planes =
+ (sps_info->chroma_format_idc == 0) ? 1 :
+ (sps_info->chroma_format_idc == 3 && sps_info->separate_colour_plane_flag) ? 3 : 2;
+ comseq_hdr_info->pixel_info.bitdepth_y = sps_info->bit_depth_luma_minus8 + 8;
+ comseq_hdr_info->pixel_info.bitdepth_c = sps_info->bit_depth_chroma_minus8 + 8;
+ comseq_hdr_info->pixel_info.mem_pkg =
+ (comseq_hdr_info->pixel_info.bitdepth_y > 8 ||
+ comseq_hdr_info->pixel_info.bitdepth_c > 8) ?
+ PIXEL_BIT10_MSB_MP : PIXEL_BIT8_MP;
+ comseq_hdr_info->pixel_info.pixfmt =
+ pixel_get_pixfmt(comseq_hdr_info->pixel_info.chroma_fmt_idc,
+ comseq_hdr_info->pixel_info.chroma_interleave,
+ comseq_hdr_info->pixel_info.mem_pkg,
+ comseq_hdr_info->pixel_info.bitdepth_y,
+ comseq_hdr_info->pixel_info.bitdepth_c,
+ comseq_hdr_info->pixel_info.num_planes);
+
+ /* max_frame_size populate */
+ comseq_hdr_info->max_frame_size.width = (sps_info->pic_width_in_mbs_minus1 + 1) * 16;
+	/*
+	 * The H264 coded size is always MB aligned. For sequences which *may* have field-coded
+	 * pictures, as described by the frame_mbs_only_flag, pic_height_in_map_units_minus1
+	 * refers to the field height in MBs, so to find the actual frame height we need to do
+	 * Field_MBs_InHeight * 32.
+	 */
+ comseq_hdr_info->max_frame_size.height = (sps_info->pic_height_in_map_units_minus1 + 1) *
+ (sps_info->frame_mbs_only_flag ? 1 : 2) * 16;
+
+ /* Passing 2*N to vxd_dec so that get_nbuffers can use formula N+3 for all codecs*/
+ comseq_hdr_info->max_ref_frame_num = 2 * sps_info->max_num_ref_frames;
+
+ comseq_hdr_info->field_codec_mblocks = sps_info->mb_adaptive_frame_field_flag;
+ comseq_hdr_info->min_pict_buf_num = vui_info->max_dec_frame_buffering;
+
+ /* orig_display_region populate */
+ if (sps_info->frame_cropping_flag) {
+ int sub_width_c, sub_height_c, crop_unit_x, crop_unit_y;
+ int frame_crop_left, frame_crop_right, frame_crop_top, frame_crop_bottom;
+
+ sub_width_c = bspp_h264_get_subwidthc(sps_info->chroma_format_idc,
+ sps_info->separate_colour_plane_flag);
+
+ sub_height_c = bspp_h264_get_subheightc(sps_info->chroma_format_idc,
+ sps_info->separate_colour_plane_flag);
+
+ /* equation source: ITU-T H.264 2010/03, page 77 */
+ /* ChromaArrayType == 0 */
+ if (sps_info->separate_colour_plane_flag || sps_info->chroma_format_idc == 0) {
+ /* (7-18) */
+ crop_unit_x = 1;
+ /* (7-19) */
+ crop_unit_y = 2 - sps_info->frame_mbs_only_flag;
+ /* ChromaArrayType == chroma_format_idc */
+ } else {
+ /* (7-20) */
+ crop_unit_x = sub_width_c;
+ /* (7-21) */
+ crop_unit_y = sub_height_c * (2 - sps_info->frame_mbs_only_flag);
+ }
+
+ VDEC_ASSERT(sps_info->frame_crop_left_offset <=
+ (comseq_hdr_info->max_frame_size.width / crop_unit_x) -
+ (sps_info->frame_crop_right_offset + 1));
+
+ VDEC_ASSERT(sps_info->frame_crop_top_offset <=
+ (comseq_hdr_info->max_frame_size.height / crop_unit_y) -
+ (sps_info->frame_crop_bottom_offset + 1));
+ frame_crop_left = crop_unit_x * sps_info->frame_crop_left_offset;
+ frame_crop_right = comseq_hdr_info->max_frame_size.width -
+ (crop_unit_x * sps_info->frame_crop_right_offset);
+ frame_crop_top = crop_unit_y * sps_info->frame_crop_top_offset;
+ frame_crop_bottom = comseq_hdr_info->max_frame_size.height -
+ (crop_unit_y * sps_info->frame_crop_bottom_offset);
+ comseq_hdr_info->orig_display_region.left_offset = (unsigned int)frame_crop_left;
+ comseq_hdr_info->orig_display_region.top_offset = (unsigned int)frame_crop_top;
+ comseq_hdr_info->orig_display_region.width = (frame_crop_right - frame_crop_left);
+ comseq_hdr_info->orig_display_region.height = (frame_crop_bottom - frame_crop_top);
+ } else {
+ comseq_hdr_info->orig_display_region.left_offset = 0;
+ comseq_hdr_info->orig_display_region.top_offset = 0;
+ comseq_hdr_info->orig_display_region.width = comseq_hdr_info->max_frame_size.width;
+ comseq_hdr_info->orig_display_region.height =
+ comseq_hdr_info->max_frame_size.height;
+ }
+
+#ifdef REDUCED_DPB_NO_PIC_REORDERING
+ comseq_hdr_info->max_reorder_picts = vui_info->max_dec_frame_buffering;
+#else
+ if (sps_info->vui_parameters_present_flag && vui_info->bitstream_restriction_flag)
+ comseq_hdr_info->max_reorder_picts = vui_info->max_dec_frame_buffering;
+ else
+ comseq_hdr_info->max_reorder_picts = 0;
+#endif
+ comseq_hdr_info->separate_chroma_planes =
+ h264_seq_hdr_info->sps_info.separate_colour_plane_flag ? 1 : 0;
+}
+
+static void bspp_h264_pict_hdr_populate(enum h264_nalunittype nal_unit_type,
+ struct bspp_h264_slice_hdr_info *h264_slice_hdr_info,
+ struct vdec_comsequ_hdrinfo *comseq_hdr_info,
+ struct bspp_pict_hdr_info *pict_hdr_info)
+{
+	/*
+	 * H264 has a slice coding type, not a picture one. Contrary to the rest of the
+	 * standards, bReference is set explicitly from the NAL externally (see just below
+	 * the call to bspp_h264_pict_hdr_populate); pict_hdr_info->bReference is set
+	 * externally for H264.
+	 */
+ pict_hdr_info->intra_coded = (nal_unit_type == H264_NALTYPE_IDR_SLICE) ? 1 : 0;
+ pict_hdr_info->field = h264_slice_hdr_info->field_pic_flag;
+
+ pict_hdr_info->post_processing = 0;
+ /* For H264 Maximum and Coded sizes are the same */
+ pict_hdr_info->coded_frame_size.width = comseq_hdr_info->max_frame_size.width;
+ /* For H264 Maximum and Coded sizes are the same */
+ pict_hdr_info->coded_frame_size.height = comseq_hdr_info->max_frame_size.height;
+ /*
+ * For H264 Encoded Display size has been precomputed as part of the
+ * common sequence info
+ */
+ pict_hdr_info->disp_info.enc_disp_region = comseq_hdr_info->orig_display_region;
+ /*
+ * For H264 there is no resampling, so encoded and actual display
+ * regions are the same
+ */
+ pict_hdr_info->disp_info.disp_region = comseq_hdr_info->orig_display_region;
+ /* H264 does not have that */
+ pict_hdr_info->disp_info.num_pan_scan_windows = 0;
+ memset(pict_hdr_info->disp_info.pan_scan_windows, 0,
+ sizeof(pict_hdr_info->disp_info.pan_scan_windows));
+}
+
+static int bspp_h264_destroy_seq_hdr_info(const void *secure_sps_info)
+{
+ struct bspp_h264_seq_hdr_info *h264_seq_hdr_info = NULL;
+
+ if (!secure_sps_info)
+ return IMG_ERROR_INVALID_PARAMETERS;
+
+ h264_seq_hdr_info = (struct bspp_h264_seq_hdr_info *)secure_sps_info;
+
+ /* Cleaning vui_info */
+ kfree(h264_seq_hdr_info->vui_info.nal_hrd_parameters.bit_rate_value_minus1);
+ kfree(h264_seq_hdr_info->vui_info.nal_hrd_parameters.cpb_size_value_minus1);
+ kfree(h264_seq_hdr_info->vui_info.nal_hrd_parameters.cbr_flag);
+ kfree(h264_seq_hdr_info->vui_info.vcl_hrd_parameters.bit_rate_value_minus1);
+ kfree(h264_seq_hdr_info->vui_info.vcl_hrd_parameters.cpb_size_value_minus1);
+ kfree(h264_seq_hdr_info->vui_info.vcl_hrd_parameters.cbr_flag);
+
+ /* Cleaning sps_info */
+ kfree(h264_seq_hdr_info->sps_info.offset_for_ref_frame);
+ kfree(h264_seq_hdr_info->sps_info.scllst4x4seq);
+ kfree(h264_seq_hdr_info->sps_info.scllst8x8seq);
+
+ return 0;
+}
+
+static int bspp_h264_destroy_pps_info(const void *secure_pps_info)
+{
+ struct bspp_h264_pps_info *h264_pps_info = NULL;
+
+ if (!secure_pps_info)
+ return IMG_ERROR_INVALID_PARAMETERS;
+
+ h264_pps_info = (struct bspp_h264_pps_info *)secure_pps_info;
+ kfree(h264_pps_info->h264_ppssgm_info.slice_group_id);
+ h264_pps_info->h264_ppssgm_info.slicegroupidnum = 0;
+ kfree(h264_pps_info->scllst4x4pic);
+ kfree(h264_pps_info->scllst8x8pic);
+
+ return 0;
+}
+
+static int bspp_h264_destroy_data(enum bspp_unit_type data_type, void *data_handle)
+{
+ int result = 0;
+
+ if (!data_handle)
+ return IMG_ERROR_INVALID_PARAMETERS;
+
+ switch (data_type) {
+ case BSPP_UNIT_SEQUENCE:
+ result = bspp_h264_destroy_seq_hdr_info(data_handle);
+ break;
+ case BSPP_UNIT_PPS:
+ result = bspp_h264_destroy_pps_info(data_handle);
+ break;
+ default:
+ break;
+ }
+ return result;
+}
+
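+/*
+ * Build the map-unit-to-slice-group map for FMO as described in ITU-T
+ * H.264, 8.2.2: map types 0..6 correspond to interleaved, dispersed,
+ * foreground with left-over, box-out, raster scan, wipe and explicit
+ * slice group maps respectively.
+ */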
+static void bspp_h264_generate_slice_groupmap(struct bspp_h264_slice_hdr_info *h264_slice_hdr_info,
+ struct bspp_h264_seq_hdr_info *h264_seq_hdr_info,
+ struct bspp_h264_pps_info *h264_pps_info,
+ unsigned char *map_unit_to_slice_groupmap,
+ unsigned int map_size)
+{
+ int group;
+ unsigned int num_slice_group_mapunits;
+ unsigned int i = 0, j, k = 0;
+ unsigned char num_slice_groups = h264_pps_info->num_slice_groups_minus1 + 1;
+ unsigned int pic_width_in_mbs = h264_seq_hdr_info->sps_info.pic_width_in_mbs_minus1 + 1;
+ unsigned int pic_height_in_map_units =
+ h264_seq_hdr_info->sps_info.pic_height_in_map_units_minus1 + 1;
+
+ num_slice_group_mapunits = map_size;
+ if (h264_pps_info->slice_group_map_type == 6) {
+ if ((unsigned int)num_slice_groups != num_slice_group_mapunits) {
+ VDEC_ASSERT
+ ("wrong pps->num_slice_group_map_units_minus1 for used SPS and FMO type 6"
+ ==
+ NULL);
+ if (num_slice_group_mapunits >
+ h264_pps_info->h264_ppssgm_info.slicegroupidnum)
+ num_slice_group_mapunits =
+ h264_pps_info->h264_ppssgm_info.slicegroupidnum;
+ }
+ }
+
+ /* only one slice group */
+ if (h264_pps_info->num_slice_groups_minus1 == 0) {
+ memset(map_unit_to_slice_groupmap, 0, map_size * sizeof(unsigned char));
+ return;
+ }
+ if (h264_pps_info->num_slice_groups_minus1 >= MAX_SLICEGROUP_COUNT) {
+ memset(map_unit_to_slice_groupmap, 0, map_size * sizeof(unsigned char));
+ return;
+ }
+ if (h264_pps_info->slice_group_map_type == 0) {
+ do {
+ for (group =
+ 0;
+ group <= h264_pps_info->num_slice_groups_minus1 &&
+ i < num_slice_group_mapunits;
+ i += h264_pps_info->run_length_minus1[group++] + 1) {
+ for (j = 0;
+ j <= h264_pps_info->run_length_minus1[group] &&
+ i + j < num_slice_group_mapunits;
+ j++)
+ map_unit_to_slice_groupmap[i + j] = group;
+ }
+ } while (i < num_slice_group_mapunits);
+ } else if (h264_pps_info->slice_group_map_type == 1) {
+ for (i = 0; i < num_slice_group_mapunits; i++) {
+ map_unit_to_slice_groupmap[i] = ((i % pic_width_in_mbs) +
+ (((i / pic_width_in_mbs) *
+ (h264_pps_info->num_slice_groups_minus1 + 1)) / 2)) %
+ (h264_pps_info->num_slice_groups_minus1 + 1);
+ }
+ } else if (h264_pps_info->slice_group_map_type == 2) {
+ unsigned int y_top_left, x_top_left, y_bottom_right, x_bottom_right, x, y;
+
+ for (i = 0; i < num_slice_group_mapunits; i++)
+ map_unit_to_slice_groupmap[i] = h264_pps_info->num_slice_groups_minus1;
+
+ for (group = h264_pps_info->num_slice_groups_minus1 - 1; group >= 0; group--) {
+ y_top_left = h264_pps_info->top_left[group] / pic_width_in_mbs;
+ x_top_left = h264_pps_info->top_left[group] % pic_width_in_mbs;
+ y_bottom_right = h264_pps_info->bottom_right[group] / pic_width_in_mbs;
+ x_bottom_right = h264_pps_info->bottom_right[group] % pic_width_in_mbs;
+ for (y = y_top_left; y <= y_bottom_right; y++)
+ for (x = x_top_left; x <= x_bottom_right; x++) {
+ if (h264_pps_info->top_left[group] >
+ h264_pps_info->bottom_right[group] ||
+ h264_pps_info->bottom_right[group] >=
+ num_slice_group_mapunits)
+ continue;
+ map_unit_to_slice_groupmap[y * pic_width_in_mbs +
+ x] = group;
+ }
+ }
+ } else if (h264_pps_info->slice_group_map_type == 3) {
+ int left_bound, top_bound, right_bound, bottom_bound;
+ int x, y, x_dir, y_dir;
+ int map_unit_vacant;
+
+ unsigned int mapunits_in_slicegroup_0 =
+ umin((unsigned int)((h264_pps_info->slice_group_change_rate_minus1 + 1) *
+ h264_slice_hdr_info->slice_group_change_cycle),
+ (unsigned int)num_slice_group_mapunits);
+
+ for (i = 0; i < num_slice_group_mapunits; i++)
+ map_unit_to_slice_groupmap[i] = 2;
+
+ x = (pic_width_in_mbs - h264_pps_info->slice_group_change_direction_flag) / 2;
+ y = (pic_height_in_map_units - h264_pps_info->slice_group_change_direction_flag) /
+ 2;
+
+ left_bound = x;
+ top_bound = y;
+ right_bound = x;
+ bottom_bound = y;
+
+ x_dir = h264_pps_info->slice_group_change_direction_flag - 1;
+ y_dir = h264_pps_info->slice_group_change_direction_flag;
+
+ for (k = 0; k < num_slice_group_mapunits; k += map_unit_vacant) {
+ map_unit_vacant =
+ (map_unit_to_slice_groupmap[y * pic_width_in_mbs + x] ==
+ 2);
+ if (map_unit_vacant)
+ map_unit_to_slice_groupmap[y * pic_width_in_mbs + x] =
+ (k >= mapunits_in_slicegroup_0);
+
+ if (x_dir == -1 && x == left_bound) {
+ left_bound = smax(left_bound - 1, 0);
+ x = left_bound;
+ x_dir = 0;
+ y_dir = 2 * h264_pps_info->slice_group_change_direction_flag - 1;
+ } else if (x_dir == 1 && x == right_bound) {
+ right_bound = smin(right_bound + 1, (int)pic_width_in_mbs - 1);
+ x = right_bound;
+ x_dir = 0;
+ y_dir = 1 - 2 * h264_pps_info->slice_group_change_direction_flag;
+ } else if (y_dir == -1 && y == top_bound) {
+ top_bound = smax(top_bound - 1, 0);
+ y = top_bound;
+ x_dir = 1 - 2 * h264_pps_info->slice_group_change_direction_flag;
+ y_dir = 0;
+ } else if (y_dir == 1 && y == bottom_bound) {
+ bottom_bound = smin(bottom_bound + 1,
+ (int)pic_height_in_map_units - 1);
+ y = bottom_bound;
+ x_dir = 2 * h264_pps_info->slice_group_change_direction_flag - 1;
+ y_dir = 0;
+ } else {
+ x = x + x_dir;
+ y = y + y_dir;
+ }
+ }
+ } else if (h264_pps_info->slice_group_map_type == 4) {
+ unsigned int mapunits_in_slicegroup_0 =
+ umin((unsigned int)((h264_pps_info->slice_group_change_rate_minus1 + 1) *
+ h264_slice_hdr_info->slice_group_change_cycle),
+ (unsigned int)num_slice_group_mapunits);
+ unsigned int sizeof_upper_left_group =
+ h264_pps_info->slice_group_change_direction_flag ?
+ (num_slice_group_mapunits -
+ mapunits_in_slicegroup_0) : mapunits_in_slicegroup_0;
+ for (i = 0; i < num_slice_group_mapunits; i++) {
+ if (i < sizeof_upper_left_group)
+ map_unit_to_slice_groupmap[i] =
+ h264_pps_info->slice_group_change_direction_flag;
+
+ else
+ map_unit_to_slice_groupmap[i] = 1 -
+ h264_pps_info->slice_group_change_direction_flag;
+ }
+ } else if (h264_pps_info->slice_group_map_type == 5) {
+ unsigned int mapunits_in_slicegroup_0 =
+ umin((unsigned int)((h264_pps_info->slice_group_change_rate_minus1 + 1) *
+ h264_slice_hdr_info->slice_group_change_cycle),
+ (unsigned int)num_slice_group_mapunits);
+ unsigned int sizeof_upper_left_group =
+ h264_pps_info->slice_group_change_direction_flag ?
+ (num_slice_group_mapunits -
+ mapunits_in_slicegroup_0) : mapunits_in_slicegroup_0;
+
+ for (j = 0; j < (unsigned int)pic_width_in_mbs; j++) {
+ for (i = 0; i < (unsigned int)pic_height_in_map_units; i++) {
+ if (k++ < sizeof_upper_left_group)
+ map_unit_to_slice_groupmap[i * pic_width_in_mbs + j] =
+ h264_pps_info->slice_group_change_direction_flag;
+ else
+ map_unit_to_slice_groupmap[i * pic_width_in_mbs + j] =
+ 1 -
+ h264_pps_info->slice_group_change_direction_flag;
+ }
+ }
+ } else if (h264_pps_info->slice_group_map_type == 6) {
+ VDEC_ASSERT(num_slice_group_mapunits <=
+ h264_pps_info->h264_ppssgm_info.slicegroupidnum);
+ for (i = 0; i < num_slice_group_mapunits; i++)
+ map_unit_to_slice_groupmap[i] =
+ h264_pps_info->h264_ppssgm_info.slice_group_id[i];
+ }
+}
+
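+/*
+ * Read the nal_unit_header_mvc_extension (ITU-T H.264, Annex H) that
+ * follows an svc_extension_flag equal to 0: skip non_idr_flag/priority_id
+ * (7 bits), store view_id (10 bits) and skip temporal_id, anchor_pic_flag,
+ * inter_view_flag and the reserved bit (6 bits). Returns 1 when an MVC
+ * extension was found.
+ */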
+static int bspp_h264_parse_mvc_slice_extension(void *swsr_context,
+ struct bspp_h264_inter_pict_ctx *inter_pict_ctx)
+{
+ if (!swsr_read_bits(swsr_context, 1)) {
+ swsr_read_bits(swsr_context, 7);
+ inter_pict_ctx->current_view_id = swsr_read_bits(swsr_context, 10);
+ swsr_read_bits(swsr_context, 6);
+ return 1;
+ }
+
+ return 0;
+}
+
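+/*
+ * Allocate and populate the slice group map (SGM) auxiliary data for
+ * pictures that use FMO, one byte per map unit.
+ */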
+static int bspp_h264_unitparser_compile_sgmdata
+ (struct bspp_h264_slice_hdr_info *h264_slice_hdr_info,
+ struct bspp_h264_seq_hdr_info *h264_seq_hdr_info,
+ struct bspp_h264_pps_info *h264_pps_info,
+ struct bspp_pict_hdr_info *pict_hdr_info)
+{
+	memset(&pict_hdr_info->pict_sgm_data, 0, sizeof(pict_hdr_info->pict_sgm_data));
+
+ pict_hdr_info->pict_sgm_data.id = 1;
+
+ /* Allocate memory for SGM. */
+ pict_hdr_info->pict_sgm_data.size =
+ (h264_seq_hdr_info->sps_info.pic_height_in_map_units_minus1 + 1) *
+ (h264_seq_hdr_info->sps_info.pic_width_in_mbs_minus1 + 1);
+
+ pict_hdr_info->pict_sgm_data.pic_data = kmalloc((pict_hdr_info->pict_sgm_data.size),
+ GFP_KERNEL);
+ VDEC_ASSERT(pict_hdr_info->pict_sgm_data.pic_data);
+ if (!pict_hdr_info->pict_sgm_data.pic_data) {
+ pict_hdr_info->pict_sgm_data.id = BSPP_INVALID;
+ return IMG_ERROR_OUT_OF_MEMORY;
+ }
+
+ bspp_h264_generate_slice_groupmap(h264_slice_hdr_info, h264_seq_hdr_info, h264_pps_info,
+ pict_hdr_info->pict_sgm_data.pic_data,
+ pict_hdr_info->pict_sgm_data.size);
+
+	/* check the discontinuous_mbs flag in the current frame for FMO */
+ /* NO FMO support */
+ pict_hdr_info->discontinuous_mbs = 0;
+
+ return 0;
+}
+
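+/*
+ * Top-level H264 unit parser: the software shift-register is positioned
+ * on the NAL header byte, so forbidden_zero_bit, nal_ref_idc and
+ * nal_unit_type are read first and the unit is then dispatched according
+ * to the unit type reported by the caller.
+ */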
+static int bspp_h264_unit_parser(void *swsr_context, struct bspp_unit_data *unit_data)
+{
+ unsigned int result = 0;
+ enum bspp_error_type parse_error = BSPP_ERROR_NONE;
+ enum h264_nalunittype nal_unit_type;
+ unsigned char nal_ref_idc;
+ struct bspp_h264_inter_pict_ctx *interpicctx;
+ struct bspp_sequence_hdr_info *out_seq_info;
+ unsigned char id;
+
+ interpicctx = &unit_data->parse_state->inter_pict_ctx->h264_ctx;
+ out_seq_info = unit_data->out.sequ_hdr_info;
+
+ /* At this point we should be EXACTLY at the NALTYPE byte */
+ /* parse the nal header type */
+ swsr_read_bits(swsr_context, 1);
+ nal_ref_idc = swsr_read_bits(swsr_context, 2);
+ nal_unit_type = (enum h264_nalunittype)swsr_read_bits(swsr_context, 5);
+
+ switch (unit_data->unit_type) {
+ case BSPP_UNIT_SEQUENCE:
+ VDEC_ASSERT(nal_unit_type == H264_NALTYPE_SEQUENCE_PARAMETER_SET ||
+ nal_unit_type == H264_NALTYPE_SUBSET_SPS);
+ {
+ unsigned char id_loc;
+ /* Parse SPS structure */
+ struct bspp_h264_seq_hdr_info *h264_seq_hdr_info =
+ (struct bspp_h264_seq_hdr_info *)(out_seq_info->secure_sequence_info);
+ /* FW SPS Data structure */
+ struct bspp_ddbuf_array_info *tmp = &out_seq_info->fw_sequence;
+ struct h264fw_sequence_ps *h264_fwseq_hdr_info =
+ (struct h264fw_sequence_ps *)((unsigned char *)tmp->ddbuf_info.cpu_virt_addr
+ + tmp->buf_offset);
+ /* Common Sequence Header Info */
+ struct vdec_comsequ_hdrinfo *comseq_hdr_info =
+ &out_seq_info->sequ_hdr_info.com_sequ_hdr_info;
+
+#ifdef DEBUG_DECODER_DRIVER
+ pr_info("Unit Parser:Found SEQUENCE_PARAMETER_SET NAL unit");
+#endif
+ VDEC_ASSERT(h264_seq_hdr_info);
+ VDEC_ASSERT(h264_fwseq_hdr_info);
+ if (!h264_seq_hdr_info)
+ return IMG_ERROR_ALREADY_COMPLETE;
+
+ if (!h264_fwseq_hdr_info)
+ return IMG_ERROR_ALREADY_COMPLETE;
+
+ /* Call SPS parser to populate the "Parse SPS Structure" */
+ unit_data->parse_error |=
+ bspp_h264_sps_parser(swsr_context, unit_data->str_res_handle,
+ h264_seq_hdr_info);
+ /* From "Parse SPS Structure" populate the "FW SPS Data Structure" */
+ bspp_h264_fwseq_hdr_populate(h264_seq_hdr_info, h264_fwseq_hdr_info);
+ /*
+ * From "Parse SPS Structure" populate the
+ * "Common Sequence Header Info"
+ */
+ bspp_h264_commonseq_hdr_populate(h264_seq_hdr_info, comseq_hdr_info);
+ /* Set the SPS ID */
+ /*
+ * seq_parameter_set_id is always in range 0-31, so we can
+ * add offset indicating subsequence header
+ */
+ id_loc = h264_seq_hdr_info->sps_info.seq_parameter_set_id;
+ out_seq_info->sequ_hdr_info.sequ_hdr_id =
+ (nal_unit_type == H264_NALTYPE_SLICE_SCALABLE ||
+ nal_unit_type == H264_NALTYPE_SLICE_IDR_SCALABLE ||
+ nal_unit_type == H264_NALTYPE_SUBSET_SPS) ? id_loc + 32 : id_loc;
+
+ /*
+ * Set the first SPS ID as Active SPS ID for SEI parsing
+ * to cover the case of not having SeiBufferingPeriod to
+ * give us the SPS ID
+ */
+ if (interpicctx->active_sps_for_sei_parsing == BSPP_INVALID)
+ interpicctx->active_sps_for_sei_parsing =
+ h264_seq_hdr_info->sps_info.seq_parameter_set_id;
+ }
+ break;
+
+ case BSPP_UNIT_PPS:
+ VDEC_ASSERT(nal_unit_type == H264_NALTYPE_PICTURE_PARAMETER_SET);
+ {
+ /* Parse PPS structure */
+ struct bspp_h264_pps_info *h264_pps_info =
+ (struct bspp_h264_pps_info *)(unit_data->out.pps_info->secure_pps_info);
+ /* FW PPS Data structure */
+ struct bspp_ddbuf_array_info *tmp = &unit_data->out.pps_info->fw_pps;
+ struct h264fw_picture_ps *h264fw_pps_info =
+ (struct h264fw_picture_ps *)((unsigned char *)
+ tmp->ddbuf_info.cpu_virt_addr + tmp->buf_offset);
+
+#ifdef DEBUG_DECODER_DRIVER
+ pr_info("Unit Parser:Found PICTURE_PARAMETER_SET NAL unit");
+#endif
+ VDEC_ASSERT(h264_pps_info);
+ VDEC_ASSERT(h264fw_pps_info);
+
+ /* Call PPS parser to populate the "Parse PPS Structure" */
+ unit_data->parse_error |=
+ bspp_h264_pps_parser(swsr_context, unit_data->str_res_handle,
+ h264_pps_info);
+ /* From "Parse PPS Structure" populate the "FW PPS Data Structure"
+ * - the scaling lists
+ */
+ bspp_h264_fwpps_populate(h264_pps_info, h264fw_pps_info);
+ /* Set the PPS ID */
+ unit_data->out.pps_info->pps_id = h264_pps_info->pps_id;
+ }
+ break;
+
+ case BSPP_UNIT_PICTURE:
+ if (nal_unit_type == H264_NALTYPE_SLICE_PREFIX) {
+ if (bspp_h264_parse_mvc_slice_extension(swsr_context, interpicctx))
+ pr_err("%s: No MVC support\n", __func__);
+ } else if (nal_unit_type == H264_NALTYPE_SLICE_SCALABLE ||
+ nal_unit_type == H264_NALTYPE_SLICE_IDR_SCALABLE ||
+ nal_unit_type == H264_NALTYPE_SLICE ||
+ nal_unit_type == H264_NALTYPE_IDR_SLICE) {
+ struct bspp_h264_slice_hdr_info h264_slice_hdr_info;
+ struct bspp_h264_pps_info *h264_pps_info;
+ struct bspp_pps_info *pps_info;
+ struct h264fw_picture_ps *h264fw_pps_info;
+ struct h264fw_sequence_ps *h264_fwseq_hdr_info;
+ struct bspp_h264_seq_hdr_info *h264_seq_hdr_info;
+ struct bspp_sequence_hdr_info *sequ_hdr_info;
+ struct bspp_ddbuf_array_info *tmp1;
+ struct bspp_ddbuf_array_info *tmp2;
+ int current_pic_is_new = 0;
+ int determined = 0;
+ int id_loc;
+
+#ifdef DEBUG_DECODER_DRIVER
+ pr_info("Unit Parser:Found PICTURE DATA unit");
+#endif
+
+ unit_data->slice = 1;
+ unit_data->ext_slice = 0;
+
+ if (nal_unit_type == H264_NALTYPE_SLICE_SCALABLE ||
+ nal_unit_type == H264_NALTYPE_SLICE_IDR_SCALABLE) {
+ pr_err("%s: No SVC support\n", __func__);
+ }
+
+ VDEC_ASSERT(unit_data->out.pict_hdr_info);
+ if (!unit_data->out.pict_hdr_info)
+ return IMG_ERROR_CANCELLED;
+
+ /* Default */
+ unit_data->out.pict_hdr_info->discontinuous_mbs = 0;
+
+ /*
+ * Parse the Pic Header, return Parse SPS/PPS
+ * structures
+ */
+ parse_error = bspp_h264_pict_hdr_parser(swsr_context,
+ unit_data->str_res_handle,
+ &h264_slice_hdr_info,
+ &pps_info,
+ &sequ_hdr_info,
+ nal_unit_type,
+ nal_ref_idc);
+
+ if (parse_error) {
+ unit_data->parse_error |= parse_error;
+ return IMG_ERROR_CANCELLED;
+ }
+
+				/*
+				 * We signal a closed GOP at every I frame.
+				 * This does not conform 100% to the
+				 * specification but ensures that seeking
+				 * always works.
+				 */
+ unit_data->new_closed_gop = h264_slice_hdr_info.slice_type ==
+ I_SLICE ? 1 : 0;
+
+ /*
+ * Now pps_info and sequ_hdr_info contain the
+ * PPS/SPS info related to this picture
+ */
+ h264_pps_info = (struct bspp_h264_pps_info *)pps_info->secure_pps_info;
+ h264_seq_hdr_info =
+ (struct bspp_h264_seq_hdr_info *)sequ_hdr_info->secure_sequence_info;
+
+ tmp1 = &pps_info->fw_pps;
+ tmp2 = &sequ_hdr_info->fw_sequence;
+
+ h264fw_pps_info = (struct h264fw_picture_ps *)((unsigned char *)
+ tmp1->ddbuf_info.cpu_virt_addr + tmp1->buf_offset);
+ h264_fwseq_hdr_info = (struct h264fw_sequence_ps *)((unsigned char *)
+ tmp2->ddbuf_info.cpu_virt_addr + tmp2->buf_offset);
+ VDEC_ASSERT(h264_slice_hdr_info.pps_id == h264_pps_info->pps_id);
+ VDEC_ASSERT(h264_pps_info->seq_parameter_set_id ==
+ (unsigned int)h264_seq_hdr_info->sps_info.seq_parameter_set_id);
+
+			/*
+			 * Update the decoding-related FW SPS info for the current picture
+			 * with the SEI data that were potentially received and also relate
+			 * to it. Until we receive the picture we do not know which sequence
+			 * to update with the SEI data.
+			 * Set from the last SEI, needed for decoding.
+			 */
+ h264_fwseq_hdr_info->disable_vdmc_filt = interpicctx->disable_vdmc_filt;
+ h264_fwseq_hdr_info->transform4x4_mb_not_available =
+ interpicctx->b4x4transform_mb_unavailable;
+
+ /*
+ * Determine if current slice is a new picture, and update the related
+ * params for future reference
+ * Order of checks is important
+ */
+ {
+ struct bspp_parse_state *state = unit_data->parse_state;
+
+ set_if_not_determined_yet(&determined, state->new_view,
+								  &current_pic_is_new, 1);
+ set_if_not_determined_yet(&determined, state->next_pic_is_new,
+								  &current_pic_is_new, 1);
+ set_if_not_determined_yet
+ (&determined,
+ (h264_slice_hdr_info.redundant_pic_cnt > 0),
+						 &current_pic_is_new, 0);
+ set_if_not_determined_yet
+ (&determined,
+ (state->prev_frame_num !=
+ h264_slice_hdr_info.frame_num),
+						 &current_pic_is_new, 1);
+ set_if_not_determined_yet
+ (&determined,
+ (state->prev_pps_id != h264_slice_hdr_info.pps_id),
+						 &current_pic_is_new, 1);
+ set_if_not_determined_yet
+ (&determined,
+ (state->prev_field_pic_flag !=
+ h264_slice_hdr_info.field_pic_flag),
+						 &current_pic_is_new, 1);
+ set_if_not_determined_yet
+ (&determined,
+ ((h264_slice_hdr_info.field_pic_flag) &&
+ (state->prev_bottom_pic_flag !=
+ h264_slice_hdr_info.bottom_field_flag)),
+						 &current_pic_is_new, 1);
+ set_if_not_determined_yet
+ (&determined,
+ ((state->prev_nal_ref_idc == 0 || nal_ref_idc == 0) &&
+ (state->prev_nal_ref_idc != nal_ref_idc)),
+						 &current_pic_is_new, 1);
+ set_if_not_determined_yet
+ (&determined,
+ ((h264_seq_hdr_info->sps_info.pic_order_cnt_type == 0) &&
+ ((state->prev_pic_order_cnt_lsb !=
+ h264_slice_hdr_info.pic_order_cnt_lsb) ||
+ (state->prev_delta_pic_order_cnt_bottom !=
+ h264_slice_hdr_info.delta_pic_order_cnt_bottom))),
+						 &current_pic_is_new, 1);
+ set_if_not_determined_yet
+ (&determined,
+ ((h264_seq_hdr_info->sps_info.pic_order_cnt_type == 1) &&
+ ((state->prev_delta_pic_order_cnt[0] !=
+ h264_slice_hdr_info.delta_pic_order_cnt[0]) ||
+ (state->prev_delta_pic_order_cnt[1] !=
+ h264_slice_hdr_info.delta_pic_order_cnt[1]))),
+						 &current_pic_is_new, 1);
+ set_if_not_determined_yet
+ (&determined,
+ ((state->prev_nal_unit_type ==
+ (int)H264_NALTYPE_IDR_SLICE ||
+ nal_unit_type == (int)H264_NALTYPE_IDR_SLICE) &&
+ (state->prev_nal_unit_type !=
+ (int)nal_unit_type)),
+						 &current_pic_is_new, 1);
+ set_if_not_determined_yet(&determined,
+ ((state->prev_nal_unit_type ==
+ (int)H264_NALTYPE_IDR_SLICE) &&
+ (state->prev_idr_pic_id !=
+ h264_slice_hdr_info.idr_pic_id)),
+								  &current_pic_is_new, 1);
+
+ /*
+ * Update whatever is not updated already in different places of
+ * the code or just needs to be updated here
+ */
+ state->prev_frame_num = h264_slice_hdr_info.frame_num;
+ state->prev_pps_id = h264_slice_hdr_info.pps_id;
+ state->prev_field_pic_flag =
+ h264_slice_hdr_info.field_pic_flag;
+ state->prev_nal_ref_idc = nal_ref_idc;
+ state->prev_pic_order_cnt_lsb =
+ h264_slice_hdr_info.pic_order_cnt_lsb;
+ state->prev_delta_pic_order_cnt_bottom =
+ h264_slice_hdr_info.delta_pic_order_cnt_bottom;
+ state->prev_delta_pic_order_cnt[0] =
+ h264_slice_hdr_info.delta_pic_order_cnt[0];
+ state->prev_delta_pic_order_cnt[1] =
+ h264_slice_hdr_info.delta_pic_order_cnt[1];
+ state->prev_nal_unit_type = (int)nal_unit_type;
+ state->prev_idr_pic_id = h264_slice_hdr_info.idr_pic_id;
+ }
+
+ /* Detect second field and manage the prev_bottom_pic_flag flag */
+ if (h264_slice_hdr_info.field_pic_flag && current_pic_is_new) {
+ unit_data->parse_state->prev_bottom_pic_flag =
+ h264_slice_hdr_info.bottom_field_flag;
+ }
+
+			/* Detect ASO: we have just met a new picture */
+ id = h264_slice_hdr_info.colour_plane_id;
+ if (current_pic_is_new) {
+ unsigned int i;
+
+ for (i = 0; i < MAX_COMPONENTS; i++)
+ unit_data->parse_state->prev_first_mb_in_slice[i] = 0;
+ } else if (unit_data->parse_state->prev_first_mb_in_slice[id] >
+ h264_slice_hdr_info.first_mb_in_slice) {
+ /* We just found ASO */
+ unit_data->parse_state->discontinuous_mb = 1;
+ }
+ unit_data->parse_state->prev_first_mb_in_slice[id] =
+ h264_slice_hdr_info.first_mb_in_slice;
+
+			/* We may already know the picture has discontinuous MBs */
+ if (unit_data->parse_state->discontinuous_mb)
+ unit_data->out.pict_hdr_info->discontinuous_mbs =
+ unit_data->parse_state->discontinuous_mb;
+
+			/*
+			 * We want to calculate the scaling lists only once per picture/field,
+			 * not every slice. We also want to populate the VDEC Picture Header
+			 * Info only once per picture/field, not every slice.
+			 */
+ if (current_pic_is_new) {
+ /* Common Sequence Header Info fetched */
+ struct vdec_comsequ_hdrinfo *comseq_hdr_info =
+ &sequ_hdr_info->sequ_hdr_info.com_sequ_hdr_info;
+ struct bspp_pict_data *type_pict_aux_data;
+
+ unit_data->parse_state->next_pic_is_new = 0;
+
+ /* Generate SGM for this picture */
+ if (h264_pps_info->num_slice_groups_minus1 != 0 &&
+ h264_pps_info->slice_group_map_type <= 6) {
+ bspp_h264_unitparser_compile_sgmdata
+ (&h264_slice_hdr_info,
+ h264_seq_hdr_info,
+ h264_pps_info,
+ unit_data->out.pict_hdr_info);
+ } else {
+ unit_data->out.pict_hdr_info->pict_sgm_data.pic_data = NULL;
+ unit_data->out.pict_hdr_info->pict_sgm_data.bufmap_id = 0;
+ unit_data->out.pict_hdr_info->pict_sgm_data.buf_offset = 0;
+ unit_data->out.pict_hdr_info->pict_sgm_data.id =
+ BSPP_INVALID;
+ unit_data->out.pict_hdr_info->pict_sgm_data.size = 0;
+ }
+
+ unit_data->parse_state->discontinuous_mb =
+ unit_data->out.pict_hdr_info->discontinuous_mbs;
+
+ /*
+ * Select the scaling lists based on h264_pps_info and
+ * h264_seq_hdr_info and pass them to h264fw_pps_info
+ */
+ bspp_h264_select_scaling_list(h264fw_pps_info,
+ h264_pps_info,
+ h264_seq_hdr_info);
+
+ /*
+ * Uses the common sequence/SINGLE-slice info to populate the
+ * VDEC Picture Header Info
+ */
+ bspp_h264_pict_hdr_populate(nal_unit_type, &h264_slice_hdr_info,
+ comseq_hdr_info,
+ unit_data->out.pict_hdr_info);
+
+ /* Store some raw bitstream fields for output. */
+ unit_data->out.pict_hdr_info->h264_pict_hdr_info.frame_num =
+ h264_slice_hdr_info.frame_num;
+ unit_data->out.pict_hdr_info->h264_pict_hdr_info.nal_ref_idc =
+ nal_ref_idc;
+
+ /*
+ * Update the display-related picture header information with
+		 * the related parsed SEI data. The display-related SEI is
+		 * used only for the first picture after the SEI.
+ */
+ if (!interpicctx->sei_info_attached_to_pic) {
+ interpicctx->sei_info_attached_to_pic = 1;
+ if (interpicctx->active_sps_for_sei_parsing !=
+ h264_seq_hdr_info->sps_info.seq_parameter_set_id) {
+ /*
+ * We tried to guess the SPS ID that we should use
+ * to parse the SEI, but we guessed wrong
+ */
+				pr_err("Parsed SEI with wrong SPS, data may be parsed incorrectly");
+ }
+ unit_data->out.pict_hdr_info->disp_info.repeat_first_fld =
+ interpicctx->repeat_first_field;
+ unit_data->out.pict_hdr_info->disp_info.max_frm_repeat =
+ interpicctx->max_frm_repeat;
+ /* SEI - Not supported */
+ }
+
+ /*
+		 * For IDR slices update the active
+		 * sequence ID for SEI parsing
+		 * (error resilience).
+ */
+ if (nal_unit_type == H264_NALTYPE_IDR_SLICE)
+ interpicctx->active_sps_for_sei_parsing =
+ h264_seq_hdr_info->sps_info.seq_parameter_set_id;
+
+ /*
+ * Choose the appropriate auxiliary data
+ * structure to populate.
+ */
+ if (unit_data->parse_state->second_field_flag)
+ type_pict_aux_data =
+ &unit_data->out.pict_hdr_info->second_pict_aux_data;
+
+ else
+ type_pict_aux_data =
+ &unit_data->out.pict_hdr_info->pict_aux_data;
+
+ /*
+ * We have no container for the PPS that
+ * passes down to the kernel, for this
+ * reason the h264 secure parser needs
+ * to populate that info into the
+ * picture header (Second)PictAuxData.
+ */
+ type_pict_aux_data->bufmap_id = pps_info->bufmap_id;
+ type_pict_aux_data->buf_offset = pps_info->buf_offset;
+ type_pict_aux_data->pic_data = (void *)h264fw_pps_info;
+ type_pict_aux_data->id = h264_pps_info->pps_id;
+ type_pict_aux_data->size = sizeof(struct h264fw_picture_ps);
+
+ pps_info->ref_count++;
+
+ /* This info comes from NAL directly */
+ unit_data->out.pict_hdr_info->ref = (nal_ref_idc == 0) ? 0 : 1;
+ }
+ if (nal_unit_type == H264_NALTYPE_IDR_SLICE)
+ unit_data->new_closed_gop = 1;
+
+ /* Return the SPS ID */
+ /*
+ * seq_parameter_set_id is always in range 0-31,
+ * so we can add offset indicating subsequence header
+ */
+ id_loc = h264_pps_info->seq_parameter_set_id;
+ unit_data->pict_sequ_hdr_id =
+ (nal_unit_type == H264_NALTYPE_SLICE_SCALABLE ||
+ nal_unit_type ==
+ H264_NALTYPE_SLICE_IDR_SCALABLE) ? id_loc + 32 : id_loc;
+
+ } else if (nal_unit_type == H264_NALTYPE_SLICE_PARTITION_A ||
+ nal_unit_type == H264_NALTYPE_SLICE_PARTITION_B ||
+ nal_unit_type == H264_NALTYPE_SLICE_PARTITION_C) {
+ unit_data->slice = 1;
+
+ pr_err("Unsupported Slice NAL type: %d", nal_unit_type);
+ unit_data->parse_error = BSPP_ERROR_UNSUPPORTED;
+ }
+ break;
+
+ case BSPP_UNIT_UNCLASSIFIED:
+ if (nal_unit_type == H264_NALTYPE_ACCESS_UNIT_DELIMITER) {
+ unit_data->parse_state->next_pic_is_new = 1;
+ } else if (nal_unit_type == H264_NALTYPE_SLICE_PREFIX ||
+ nal_unit_type == H264_NALTYPE_SUBSET_SPS) {
+ /* if mvc disabled do nothing */
+ } else {
+ /* Should not have any other type of unclassified data. */
+ pr_err("unclassified data detected!\n");
+ }
+ break;
+
+ case BSPP_UNIT_NON_PICTURE:
+ if (nal_unit_type == H264_NALTYPE_END_OF_SEQUENCE ||
+ nal_unit_type == H264_NALTYPE_END_OF_STREAM) {
+ unit_data->parse_state->next_pic_is_new = 1;
+ } else if (nal_unit_type == H264_NALTYPE_FILLER_DATA ||
+ nal_unit_type == H264_NALTYPE_SEQUENCE_PARAMETER_SET_EXTENSION ||
+ nal_unit_type == H264_NALTYPE_AUXILIARY_SLICE) {
+ } else if (nal_unit_type == H264_NALTYPE_SLICE_SCALABLE ||
+ nal_unit_type == H264_NALTYPE_SLICE_IDR_SCALABLE) {
+ /* if mvc disabled do nothing */
+ } else {
+ /* Should not have any other type of non-picture data. */
+ VDEC_ASSERT(0);
+ }
+ break;
+
+ case BSPP_UNIT_UNSUPPORTED:
+ pr_err("Unsupported NAL type: %d", nal_unit_type);
+ unit_data->parse_error = BSPP_ERROR_UNKNOWN_DATAUNIT_DETECTED;
+ break;
+
+ default:
+ VDEC_ASSERT(0);
+ break;
+ }
+
+ return result;
+}
+
+static int bspp_h264releasedata(void *str_alloc, enum bspp_unit_type data_type, void *data_handle)
+{
+ int result = 0;
+
+ if (!data_handle)
+ return IMG_ERROR_INVALID_PARAMETERS;
+
+ switch (data_type) {
+ case BSPP_UNIT_SEQUENCE:
+ result = bspp_h264_release_sequ_hdr_info(str_alloc, data_handle);
+ break;
+ default:
+ break;
+ }
+
+ return result;
+}
+
+static int bspp_h264resetdata(enum bspp_unit_type data_type, void *data_handle)
+{
+ int result = 0;
+
+ if (!data_handle)
+ return IMG_ERROR_INVALID_PARAMETERS;
+
+ switch (data_type) {
+ case BSPP_UNIT_SEQUENCE:
+ result = bspp_h264_reset_seq_hdr_info(data_handle);
+ break;
+ case BSPP_UNIT_PPS:
+ result = bspp_h264_reset_pps_info(data_handle);
+ break;
+ default:
+ break;
+ }
+
+ return result;
+}
+
+static void bspp_h264parse_codecconfig(void *swsr_ctx,
+ unsigned int *unitcount,
+ unsigned int *unit_arraycount,
+ unsigned int *delimlength,
+ unsigned int *size_delimlength)
+{
+ unsigned long long value = 6;
+
+ /*
+ * Set the shift-register up to provide next 6 bytes
+ * without emulation prevention detection.
+ */
+ swsr_consume_delim(swsr_ctx, SWSR_EMPREVENT_NONE, 0, &value);
+
+ /*
+ * Codec config header must be read for size delimited data (H.264)
+ * to get to the start of each unit.
+ * This parsing follows section 5.2.4.1.1 of ISO/IEC 14496-15:2004(E).
+ */
+ /* Configuration version. */
+ swsr_read_bits(swsr_ctx, 8);
+ /* AVC Profile Indication. */
+ swsr_read_bits(swsr_ctx, 8);
+ /* Profile compatibility. */
+ swsr_read_bits(swsr_ctx, 8);
+ /* AVC Level Indication. */
+ swsr_read_bits(swsr_ctx, 8);
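+	/*
+	 * The next byte carries lengthSizeMinusOne in its two LSBs (NAL
+	 * length field size = value + 1 bytes); the byte after carries
+	 * numOfSequenceParameterSets in its five LSBs.
+	 */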
+ *delimlength = ((swsr_read_bits(swsr_ctx, 8) & 0x3) + 1) * 8;
+ *unitcount = swsr_read_bits(swsr_ctx, 8) & 0x1f;
+
+ /* Size delimiter is only 2 bytes for H.264 codec configuration. */
+ *size_delimlength = 2 * 8;
+}
+
+static void bspp_h264update_unitcounts(void *swsr_ctx,
+ unsigned int *unitcount,
+ unsigned int *unit_arraycount)
+{
+ if (*unitcount == 0) {
+ unsigned long long value = 1;
+
+ /*
+ * Set the shift-register up to provide next 1 byte without
+ * emulation prevention detection.
+ */
+ swsr_consume_delim(swsr_ctx, SWSR_EMPREVENT_NONE, 0, &value);
+
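+		/*
+		 * The next byte is expected to hold the count of the following
+		 * unit array (i.e. numOfPictureParameterSets after the SPS
+		 * array in the ISO/IEC 14496-15 codec configuration).
+		 */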
+ *unitcount = swsr_read_bits(swsr_ctx, 8);
+ }
+
+ (*unitcount)--;
+}
+
+/*
+ * Sets the parser configuration
+ */
+int bspp_h264_set_parser_config(enum vdec_bstr_format bstr_format,
+ struct bspp_vid_std_features *pvidstd_features,
+ struct bspp_swsr_ctx *pswsr_ctx,
+ struct bspp_parser_callbacks *pparser_callbacks,
+ struct bspp_inter_pict_data *pinterpict_data)
+{
+	/* Set h.264 parser callbacks. */
+ pparser_callbacks->parse_unit_cb = bspp_h264_unit_parser;
+ pparser_callbacks->release_data_cb = bspp_h264releasedata;
+ pparser_callbacks->reset_data_cb = bspp_h264resetdata;
+ pparser_callbacks->destroy_data_cb = bspp_h264_destroy_data;
+ pparser_callbacks->parse_codec_config_cb = bspp_h264parse_codecconfig;
+ pparser_callbacks->update_unit_counts_cb = bspp_h264update_unitcounts;
+
+	/* Set h.264 specific features. */
+ pvidstd_features->seq_size = sizeof(struct bspp_h264_seq_hdr_info);
+ pvidstd_features->uses_pps = 1;
+ pvidstd_features->pps_size = sizeof(struct bspp_h264_pps_info);
+
+	/* Set h.264 specific shift register config. */
+ pswsr_ctx->emulation_prevention = SWSR_EMPREVENT_00000300;
+ pinterpict_data->h264_ctx.active_sps_for_sei_parsing = BSPP_INVALID;
+
+ if (bstr_format == VDEC_BSTRFORMAT_DEMUX_BYTESTREAM ||
+ bstr_format == VDEC_BSTRFORMAT_ELEMENTARY) {
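+		/*
+		 * Byte-stream input: delimit NAL units on the 0x000001
+		 * (Annex B) start-code prefix.
+		 */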
+ pswsr_ctx->sr_config.delim_type = SWSR_DELIM_SCP;
+ pswsr_ctx->sr_config.delim_length = 3 * 8;
+ pswsr_ctx->sr_config.scp_value = 0x000001;
+ } else if (bstr_format == VDEC_BSTRFORMAT_DEMUX_SIZEDELIMITED) {
+ pswsr_ctx->sr_config.delim_type = SWSR_DELIM_SIZE;
+ /* Set the default size-delimiter number of bits */
+ pswsr_ctx->sr_config.delim_length = 4 * 8;
+ } else {
+ VDEC_ASSERT(0);
+ return IMG_ERROR_NOT_SUPPORTED;
+ }
+
+ return 0;
+}
+
+/*
+ * This function determines the BSPP unit type based on the
+ * provided bitstream (H264 specific) unit type
+ */
+void bspp_h264_determine_unittype(unsigned char bitstream_unittype,
+ int disable_mvc,
+ enum bspp_unit_type *bspp_unittype)
+{
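+	/* The NAL unit type occupies the low five bits of the NAL header byte. */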
+ unsigned char type = bitstream_unittype & 0x1f;
+
+ switch (type) {
+ case H264_NALTYPE_SLICE_PREFIX:
+ *bspp_unittype = disable_mvc ? BSPP_UNIT_UNCLASSIFIED : BSPP_UNIT_PICTURE;
+ break;
+ case H264_NALTYPE_SUBSET_SPS:
+ *bspp_unittype = disable_mvc ? BSPP_UNIT_UNCLASSIFIED : BSPP_UNIT_SEQUENCE;
+ break;
+ case H264_NALTYPE_SLICE_SCALABLE:
+ case H264_NALTYPE_SLICE_IDR_SCALABLE:
+ *bspp_unittype = disable_mvc ? BSPP_UNIT_NON_PICTURE : BSPP_UNIT_PICTURE;
+ break;
+ case H264_NALTYPE_SEQUENCE_PARAMETER_SET:
+ *bspp_unittype = BSPP_UNIT_SEQUENCE;
+ break;
+ case H264_NALTYPE_PICTURE_PARAMETER_SET:
+ *bspp_unittype = BSPP_UNIT_PPS;
+ break;
+ case H264_NALTYPE_SLICE:
+ case H264_NALTYPE_SLICE_PARTITION_A:
+ case H264_NALTYPE_SLICE_PARTITION_B:
+ case H264_NALTYPE_SLICE_PARTITION_C:
+ case H264_NALTYPE_IDR_SLICE:
+ *bspp_unittype = BSPP_UNIT_PICTURE;
+ break;
+ case H264_NALTYPE_ACCESS_UNIT_DELIMITER:
+ case H264_NALTYPE_SUPPLEMENTAL_ENHANCEMENT_INFO:
+ /*
+ * Each of these NAL units should not change unit type if
+ * current is picture, since they can occur anywhere, any number
+ * of times
+ */
+ *bspp_unittype = BSPP_UNIT_UNCLASSIFIED;
+ break;
+ case H264_NALTYPE_END_OF_SEQUENCE:
+ case H264_NALTYPE_END_OF_STREAM:
+ case H264_NALTYPE_FILLER_DATA:
+ case H264_NALTYPE_SEQUENCE_PARAMETER_SET_EXTENSION:
+ case H264_NALTYPE_AUXILIARY_SLICE:
+ *bspp_unittype = BSPP_UNIT_NON_PICTURE;
+ break;
+ default:
+ *bspp_unittype = BSPP_UNIT_UNSUPPORTED;
+ break;
+ }
+}
diff --git a/drivers/staging/media/vxd/decoder/h264_secure_parser.h b/drivers/staging/media/vxd/decoder/h264_secure_parser.h
new file mode 100644
index 000000000000..68789dfcc439
--- /dev/null
+++ b/drivers/staging/media/vxd/decoder/h264_secure_parser.h
@@ -0,0 +1,278 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * h.264 secure data unit parsing API.
+ *
+ * Copyright (c) Imagination Technologies Ltd.
+ * Copyright (c) 2021 Texas Instruments Incorporated - http://www.ti.com/
+ *
+ * Authors:
+ * Lakshmi Sankar <[email protected]>
+ *	Re-written for upstreaming
+ * Prashanth Kumar Amai <[email protected]>
+ * Sidraya Jayagond <[email protected]>
+ */
+#ifndef __H264SECUREPARSER_H__
+#define __H264SECUREPARSER_H__
+
+#include "bspp_int.h"
+#include "vdec_defs.h"
+
+/*
+ * enum h264_nalunittype
+ * @Description Contains H264 NAL unit types
+ */
+enum h264_nalunittype {
+ H264_NALTYPE_UNSPECIFIED = 0,
+ H264_NALTYPE_SLICE = 1,
+ H264_NALTYPE_SLICE_PARTITION_A = 2,
+ H264_NALTYPE_SLICE_PARTITION_B = 3,
+ H264_NALTYPE_SLICE_PARTITION_C = 4,
+ H264_NALTYPE_IDR_SLICE = 5,
+ H264_NALTYPE_SUPPLEMENTAL_ENHANCEMENT_INFO = 6,
+ H264_NALTYPE_SEQUENCE_PARAMETER_SET = 7,
+ H264_NALTYPE_PICTURE_PARAMETER_SET = 8,
+ H264_NALTYPE_ACCESS_UNIT_DELIMITER = 9,
+ H264_NALTYPE_END_OF_SEQUENCE = 10,
+ H264_NALTYPE_END_OF_STREAM = 11,
+ H264_NALTYPE_FILLER_DATA = 12,
+ H264_NALTYPE_SEQUENCE_PARAMETER_SET_EXTENSION = 13,
+ H264_NALTYPE_SLICE_PREFIX = 14,
+ H264_NALTYPE_SUBSET_SPS = 15,
+ H264_NALTYPE_AUXILIARY_SLICE = 19,
+ H264_NALTYPE_SLICE_SCALABLE = 20,
+ H264_NALTYPE_SLICE_IDR_SCALABLE = 21,
+ H264_NALTYPE_MAX = 31,
+ H264_NALTYPE_FORCE32BITS = 0x7FFFFFFFU
+};
+
+/*
+ * struct bspp_h264_sps_info
+ * @Description H264 SPS parsed information
+ */
+struct bspp_h264_sps_info {
+ unsigned int profile_idc;
+ unsigned int constraint_set_flags;
+ unsigned int level_idc;
+ unsigned char seq_parameter_set_id;
+ unsigned char chroma_format_idc;
+ int separate_colour_plane_flag;
+ unsigned int bit_depth_luma_minus8;
+ unsigned int bit_depth_chroma_minus8;
+ unsigned char qpprime_y_zero_transform_bypass_flag;
+ int seq_scaling_matrix_present_flag;
+ unsigned char seq_scaling_list_present_flag[12];
+ unsigned int log2_max_frame_num_minus4;
+ unsigned int pic_order_cnt_type;
+ unsigned int log2_max_pic_order_cnt_lsb_minus4;
+ int delta_pic_order_always_zero_flag;
+ int offset_for_non_ref_pic;
+ int offset_for_top_to_bottom_field;
+ unsigned int num_ref_frames_in_pic_order_cnt_cycle;
+ unsigned int *offset_for_ref_frame;
+ unsigned int max_num_ref_frames;
+ int gaps_in_frame_num_value_allowed_flag;
+ unsigned int pic_width_in_mbs_minus1;
+ unsigned int pic_height_in_map_units_minus1;
+ int frame_mbs_only_flag;
+ int mb_adaptive_frame_field_flag;
+ int direct_8x8_inference_flag;
+ int frame_cropping_flag;
+ unsigned int frame_crop_left_offset;
+ unsigned int frame_crop_right_offset;
+ unsigned int frame_crop_top_offset;
+ unsigned int frame_crop_bottom_offset;
+ int vui_parameters_present_flag;
+ /* mvc_vui_parameters_present_flag; UNUSED */
+ int bmvcvuiparameterpresentflag;
+ /*
+	 * Scaling lists are derived from both SPS and PPS information,
+	 * but will change whenever the PPS changes.
+	 * The derived set of tables is associated here with the PPS.
+	 * NB: these are in H.264 order.
+ */
+ /* derived from SPS and PPS - 8 bit each */
+ unsigned char *scllst4x4seq;
+ /* derived from SPS and PPS - 8 bit each */
+ unsigned char *scllst8x8seq;
+	/* This is not directly parsed data, though it is extracted */
+ unsigned char usedefaultscalingmatrixflag_seq[12];
+};
+
+struct bspp_h264_hrdparam_info {
+ unsigned char cpb_cnt_minus1;
+ unsigned char bit_rate_scale;
+ unsigned char cpb_size_scale;
+ unsigned int *bit_rate_value_minus1;
+ unsigned int *cpb_size_value_minus1;
+ unsigned char *cbr_flag;
+ unsigned char initial_cpb_removal_delay_length_minus1;
+ unsigned char cpb_removal_delay_length_minus1;
+ unsigned char dpb_output_delay_length_minus1;
+ unsigned char time_offset_length;
+};
+
+struct bspp_h264_vui_info {
+ unsigned char aspect_ratio_info_present_flag;
+ unsigned int aspect_ratio_idc;
+ unsigned int sar_width;
+ unsigned int sar_height;
+ unsigned char overscan_info_present_flag;
+ unsigned char overscan_appropriate_flag;
+ unsigned char video_signal_type_present_flag;
+ unsigned int video_format;
+ unsigned char video_full_range_flag;
+ unsigned char colour_description_present_flag;
+ unsigned int colour_primaries;
+ unsigned int transfer_characteristics;
+ unsigned int matrix_coefficients;
+ unsigned char chroma_location_info_present_flag;
+ unsigned int chroma_sample_loc_type_top_field;
+ unsigned int chroma_sample_loc_type_bottom_field;
+ unsigned char timing_info_present_flag;
+ unsigned int num_units_in_tick;
+ unsigned int time_scale;
+ unsigned char fixed_frame_rate_flag;
+ unsigned char nal_hrd_parameters_present_flag;
+ struct bspp_h264_hrdparam_info nal_hrd_parameters;
+ unsigned char vcl_hrd_parameters_present_flag;
+ struct bspp_h264_hrdparam_info vcl_hrd_parameters;
+ unsigned char low_delay_hrd_flag;
+ unsigned char pic_struct_present_flag;
+ unsigned char bitstream_restriction_flag;
+ unsigned char motion_vectors_over_pic_boundaries_flag;
+ unsigned int max_bytes_per_pic_denom;
+ unsigned int max_bits_per_mb_denom;
+ unsigned int log2_max_mv_length_vertical;
+ unsigned int log2_max_mv_length_horizontal;
+ unsigned int num_reorder_frames;
+ unsigned int max_dec_frame_buffering;
+};
+
+/*
+ * struct bspp_h264_seq_hdr_info
+ * @Description Contains everything parsed from the Sequence Header.
+ */
+struct bspp_h264_seq_hdr_info {
+ /* Video sequence header information */
+ struct bspp_h264_sps_info sps_info;
+ /* VUI sequence header information. */
+ struct bspp_h264_vui_info vui_info;
+};
+
+/**
+ * struct bspp_h264_ppssgm_info - This structure contains H264 PPS parse data.
+ * @slice_group_id: pointer to the slice group ID map
+ * @slicegroupidnum: number of entries in the slice group ID map
+ */
+struct bspp_h264_ppssgm_info {
+ unsigned char *slice_group_id;
+ unsigned short slicegroupidnum;
+};
+
+/*
+ * struct bspp_h264_pps_info
+ * @Description This structure contains H264 PPS parse data.
+ */
+struct bspp_h264_pps_info {
+ /* pic_parameter_set_id: defines the PPS ID of the current PPS */
+ int pps_id;
+ /* seq_parameter_set_id: defines the SPS that current PPS points to */
+ int seq_parameter_set_id;
+ int entropy_coding_mode_flag;
+ int pic_order_present_flag;
+ unsigned char num_slice_groups_minus1;
+ unsigned char slice_group_map_type;
+ unsigned short run_length_minus1[8];
+ unsigned short top_left[8];
+ unsigned short bottom_right[8];
+ int slice_group_change_direction_flag;
+ unsigned short slice_group_change_rate_minus1;
+ unsigned short pic_size_in_map_unit;
+ struct bspp_h264_ppssgm_info h264_ppssgm_info;
+ unsigned char num_ref_idx_lx_active_minus1[H264FW_MAX_REFPIC_LISTS];
+ int weighted_pred_flag;
+ unsigned char weighted_bipred_idc;
+ int pic_init_qp_minus26;
+ int pic_init_qs_minus26;
+ int chroma_qp_index_offset;
+ int deblocking_filter_control_present_flag;
+ int constrained_intra_pred_flag;
+ int redundant_pic_cnt_present_flag;
+ int transform_8x8_mode_flag;
+ int pic_scaling_matrix_present_flag;
+ unsigned char pic_scaling_list_present_flag[12];
+ int second_chroma_qp_index_offset;
+
+ /*
+	 * Scaling lists are derived from both SPS and PPS information,
+	 * but will change whenever the PPS changes.
+	 * The derived set of tables is associated here with the PPS.
+	 * NB: these are in H.264 order.
+ */
+ /* derived from SPS and PPS - 8 bit each */
+ unsigned char *scllst4x4pic;
+ /* derived from SPS and PPS - 8 bit each */
+ unsigned char *scllst8x8pic;
+	/* This is not directly parsed data, though it is extracted */
+ unsigned char usedefaultscalingmatrixflag_pic[12];
+};
+
+/*
+ * enum bspp_h264_slice_type
+ * @Description contains H264 slice types
+ */
+enum bspp_h264_slice_type {
+ P_SLICE = 0,
+ B_SLICE,
+ I_SLICE,
+ SP_SLICE,
+ SI_SLICE,
+ SLICE_TYPE_FORCE32BITS = 0x7FFFFFFFU
+};
+
+/*
+ * struct bspp_h264_slice_hdr_info
+ * @Description This structure contains H264 slice header information
+ */
+struct bspp_h264_slice_hdr_info {
+ unsigned short first_mb_in_slice;
+ enum bspp_h264_slice_type slice_type;
+
+ /* data to ID new picture */
+ unsigned int pps_id;
+ unsigned int frame_num;
+ unsigned char colour_plane_id;
+ unsigned char field_pic_flag;
+ unsigned char bottom_field_flag;
+ unsigned int idr_pic_id;
+ unsigned int pic_order_cnt_lsb;
+ int delta_pic_order_cnt_bottom;
+ int delta_pic_order_cnt[2];
+ unsigned int redundant_pic_cnt;
+
+ /* Things we need to read out when doing In Secure */
+ unsigned char num_ref_idx_active_override_flag;
+ unsigned char num_ref_idx_lx_active_minus1[2];
+ unsigned short slice_group_change_cycle;
+};
+
+/*
+ * @Function bspp_h264_set_parser_config
+ * @Description Sets the parser configuration
+ */
+int bspp_h264_set_parser_config(enum vdec_bstr_format bstr_format,
+ struct bspp_vid_std_features *pvidstd_features,
+ struct bspp_swsr_ctx *pswsr_ctx,
+ struct bspp_parser_callbacks *pparser_callbacks,
+ struct bspp_inter_pict_data *pinterpict_data);
+
+/*
+ * @Function bspp_h264_determine_unittype
+ * @Description This function determines the BSPP unit type based on the
+ * provided bitstream (H264 specific) unit type
+ */
+void bspp_h264_determine_unittype(unsigned char bitstream_unittype,
+ int disable_mvc,
+ enum bspp_unit_type *pbsppunittype);
+
+#endif /*__H264SECUREPARSER_H__ */
diff --git a/drivers/staging/media/vxd/decoder/hevc_secure_parser.c b/drivers/staging/media/vxd/decoder/hevc_secure_parser.c
new file mode 100644
index 000000000000..35fbd7155420
--- /dev/null
+++ b/drivers/staging/media/vxd/decoder/hevc_secure_parser.c
@@ -0,0 +1,2895 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * hevc secure data unit parsing API.
+ *
+ * Copyright (c) Imagination Technologies Ltd.
+ * Copyright (c) 2021 Texas Instruments Incorporated - http://www.ti.com/
+ *
+ * Authors:
+ * Angela Stegmaier <[email protected]>
+ *	Re-written for upstreaming
+ * Prashanth Kumar Amai <[email protected]>
+ * Sidraya Jayagond <[email protected]>
+ */
+
+#include <linux/dma-mapping.h>
+#include <media/v4l2-ctrls.h>
+#include <media/v4l2-device.h>
+#include <media/v4l2-mem2mem.h>
+
+#include "bspp_int.h"
+#include "hevc_secure_parser.h"
+#include "hevcfw_data.h"
+#include "pixel_api.h"
+#include "swsr.h"
+#include "vdec_defs.h"
+#include "vdecdd_utils.h"
+
+#if defined(DEBUG_DECODER_DRIVER)
+#define BSPP_HEVC_SYNTAX(fmt, ...) pr_info("[hevc] " fmt, ## __VA_ARGS__)
+
+#else
+
+#define BSPP_HEVC_SYNTAX(fmt, ...)
+#endif
+
+static void HEVC_SWSR_U1(unsigned char *what, unsigned char *where, void *swsr_ctx)
+{
+ *where = swsr_read_bits(swsr_ctx, 1);
+#ifdef DEBUG_DECODER_DRIVER
+ pr_info("%s, u(1) : %u", what, *where);
+#endif
+}
+
+static void HEVC_SWSR_UN(unsigned char *what, unsigned int *where,
+ unsigned char numbits, void *swsr_ctx)
+{
+ *where = swsr_read_bits(swsr_ctx, numbits);
+#ifdef DEBUG_DECODER_DRIVER
+ pr_info("%s, u(%u) : %u", what, numbits, *where);
+#endif
+}
+
+static void HEVC_SWSR_UE(unsigned char *what, unsigned int *where, void *swsr_ctx)
+{
+ *where = swsr_read_unsigned_expgoulomb(swsr_ctx);
+#ifdef DEBUG_DECODER_DRIVER
+ pr_info("%s, ue(v) : %u", what, *where);
+#endif
+}
+
+static void HEVC_SWSR_SE(unsigned char *what, int *where, void *swsr_ctx)
+{
+ *where = swsr_read_signed_expgoulomb(swsr_ctx);
+#ifdef DEBUG_DECODER_DRIVER
+	pr_info("%s, se(v) : %d", what, *where);
+#endif
+}
+
+static void HEVC_SWSR_FN(unsigned char *what, unsigned char *where,
+ unsigned char numbits, unsigned char pattern,
+ enum bspp_error_type *bspperror, void *swsr_ctx)
+{
+ *where = swsr_read_bits(swsr_ctx, numbits);
+#ifdef DEBUG_DECODER_DRIVER
+ pr_info("%s, f(%u) : %u", what, numbits, *where);
+#endif
+ if (*where != pattern) {
+ *bspperror |= BSPP_ERROR_INVALID_VALUE;
+ pr_warn("Invalid value of %s (f(%u), expected: %u, got: %u)",
+ what, numbits, pattern, *where);
+ }
+}
+
+static void HEVC_UCHECK(unsigned char *what, unsigned int val,
+ unsigned int expected,
+ enum bspp_error_type *bspperror)
+{
+ if (val != expected) {
+ *bspperror |= BSPP_ERROR_INVALID_VALUE;
+ pr_warn("Invalid value of %s (expected: %u, got: %u)",
+ what, expected, val);
+ }
+}
+
+static void HEVC_RANGEUCHECK(unsigned char *what, unsigned int val,
+ unsigned int min, unsigned int max,
+ enum bspp_error_type *bspperror)
+{
+ if ((min > 0 && val < min) || val > max) {
+ *bspperror |= BSPP_ERROR_INVALID_VALUE;
+ pr_warn("Value of %s out of range (expected: [%u, %u], got: %u)",
+ what, min, max, val);
+ }
+}
+
+static void HEVC_RANGESCHECK(unsigned char *what, int val, int min, int max,
+ enum bspp_error_type *bspperror)
+{
+ if (val < min || val > max) {
+ *bspperror |= BSPP_ERROR_INVALID_VALUE;
+ pr_warn("Value of %s out of range (expected: [%d, %d], got: %d)",
+ what, min, max, val);
+ }
+}
+
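+/* Compile-time assertion: the array size becomes negative when expr is false. */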
+#define HEVC_STATIC_ASSERT(expr) ((void)sizeof(unsigned char[1 - 2 * !(expr)]))
+
+#define HEVC_MIN(a, b, type) ({ \
+ type __a = a; \
+ type __b = b; \
+ (((__a) <= (__b)) ? (__a) : (__b)); })
+#define HEVC_MAX(a, b, type) ({ \
+ type __a = a; \
+ type __b = b; \
+ (((__a) >= (__b)) ? (__a) : (__b)); })
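+/* Note: assumes _alignment is a power of two. */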
+#define HEVC_ALIGN(_val, _alignment, type) ({ \
+ type val = _val; \
+ type alignment = _alignment; \
+ (((val) + (alignment) - 1) & ~((alignment) - 1)); })
+
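+/* Maps chroma_format_idc (0: mono, 1: 4:2:0, 2: 4:2:2, 3: 4:4:4) to pixel format. */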
+static const enum pixel_fmt_idc pixelformat_idc[] = {
+ PIXEL_FORMAT_MONO,
+ PIXEL_FORMAT_420,
+ PIXEL_FORMAT_422,
+ PIXEL_FORMAT_444
+};
+
+static enum bspp_error_type bspp_hevc_parse_vps(void *sr_ctx, struct bspp_hevc_vps *vps);
+
+static void bspp_hevc_sublayhrdparams(void *sr_ctx,
+ struct bspp_hevc_hrd_parameters *hrdparams,
+ unsigned char sublayer_id);
+
+static void bspp_hevc_parsehrdparams(void *sr_ctx,
+ struct bspp_hevc_hrd_parameters *hrdparams,
+ unsigned char common_infpresent,
+ unsigned char max_numsublayers_minus1);
+
+static enum bspp_error_type bspp_hevc_parsesps(void *sr_ctx,
+ void *str_res,
+ struct bspp_hevc_sps *sps);
+
+static enum bspp_error_type bspp_hevc_parsepps(void *sr_ctx, void *str_res,
+ struct bspp_hevc_pps *pps);
+
+static int bspp_hevc_reset_ppsinfo(void *secure_ppsinfo);
+
+static void bspp_hevc_dotilecalculations(struct bspp_hevc_sps *sps,
+ struct bspp_hevc_pps *pps);
+
+static enum bspp_error_type bspp_hevc_parse_slicesegmentheader
+ (void *sr_ctx, void *str_res,
+ struct bspp_hevc_slice_segment_header *ssh,
+ unsigned char nalunit_type,
+ struct bspp_vps_info **vpsinfo,
+ struct bspp_sequence_hdr_info **spsinfo,
+ struct bspp_pps_info **ppsinfo);
+
+static enum bspp_error_type bspp_hevc_parse_profiletierlevel
+ (void *sr_ctx,
+ struct bspp_hevc_profile_tierlevel *ptl,
+ unsigned char vps_maxsublayers_minus1);
+
+static void bspp_hevc_getdefault_scalinglist(unsigned char size_id, unsigned char matrix_id,
+ const unsigned char **default_scalinglist,
+ unsigned int *size);
+
+static enum bspp_error_type bspp_hevc_parse_scalinglistdata
+ (void *sr_ctx,
+ struct bspp_hevc_scalinglist_data *scaling_listdata);
+
+static void bspp_hevc_usedefault_scalinglists(struct bspp_hevc_scalinglist_data *scaling_listdata);
+
+static enum bspp_error_type bspp_hevc_parse_shortterm_refpicset
+ (void *sr_ctx,
+ struct bspp_hevc_shortterm_refpicset *st_refpicset,
+ unsigned char st_rps_idx,
+ unsigned char in_slice_header);
+
+static void bspp_hevc_fillcommonseqhdr(struct bspp_hevc_sps *sps,
+ struct vdec_comsequ_hdrinfo *common_seq);
+
+static void bspp_hevc_fillpicturehdr(struct vdec_comsequ_hdrinfo *common_seq,
+ enum hevc_nalunittype nalunit_type,
+ struct bspp_pict_hdr_info *picture_hdr,
+ struct bspp_hevc_sps *sps,
+ struct bspp_hevc_pps *pps,
+ struct bspp_hevc_vps *vps);
+
+static void bspp_hevc_fill_fwsps(struct bspp_hevc_sps *sps,
+ struct hevcfw_sequence_ps *fwsps);
+
+static void bspp_hevc_fill_fwst_rps(struct bspp_hevc_shortterm_refpicset *strps,
+ struct hevcfw_short_term_ref_picset *fwstrps);
+
+static void bspp_hevc_fill_fwpps(struct bspp_hevc_pps *pps,
+ struct hevcfw_picture_ps *fw_pps);
+
+static void bspp_hevc_fill_fw_scaling_lists(struct bspp_hevc_pps *pps,
+ struct bspp_hevc_sps *sps,
+ struct hevcfw_picture_ps *fw_pps);
+
+static unsigned int bspp_ceil_log2(unsigned int linear_val);
+
+static unsigned char bspp_hevc_picture_is_irap(enum hevc_nalunittype nalunit_type);
+
+static unsigned char bspp_hevc_picture_is_cra(enum hevc_nalunittype nalunit_type);
+
+static unsigned char bspp_hevc_picture_is_idr(enum hevc_nalunittype nalunit_type);
+
+static unsigned char bspp_hevc_picture_is_bla(enum hevc_nalunittype nalunit_type);
+
+static unsigned char bspp_hevc_picture_getnorasl_outputflag
+ (enum hevc_nalunittype nalunit_type,
+ struct bspp_hevc_inter_pict_ctx *inter_pict_ctx);
+
+static unsigned char bspp_hevc_range_extensions_is_enabled
+ (struct bspp_hevc_profile_tierlevel *profile_tierlevel);
+
+static int bspp_hevc_unitparser(void *swsr_ctx, struct bspp_unit_data *unitdata)
+{
+ void *sr_ctx = swsr_ctx;
+ int result = 0;
+ enum bspp_error_type parse_err = BSPP_ERROR_NONE;
+ struct bspp_inter_pict_data *inter_pict_ctx =
+ unitdata->parse_state->inter_pict_ctx;
+ unsigned char forbidden_zero_bit = 0;
+ unsigned char nal_unit_type = 0;
+ unsigned char nuh_layer_id = 0;
+ unsigned char nuh_temporal_id_plus1 = 0;
+
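+	/*
+	 * Two-byte HEVC NAL unit header: forbidden_zero_bit(1),
+	 * nal_unit_type(6), nuh_layer_id(6), nuh_temporal_id_plus1(3).
+	 */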
+ HEVC_SWSR_FN("forbidden_zero_bit", &forbidden_zero_bit, 1, 0, &parse_err, sr_ctx);
+ HEVC_SWSR_UN("nal_unit_type", (unsigned int *)&nal_unit_type, 6, sr_ctx);
+ /* for current version of HEVC nuh_layer_id "shall be equal to 0" */
+ HEVC_SWSR_FN("nuh_layer_id", &nuh_layer_id, 6, 0, &parse_err, sr_ctx);
+ HEVC_SWSR_UN("nuh_temporal_id_plus1", (unsigned int *)&nuh_temporal_id_plus1, 3, sr_ctx);
+
+ switch (unitdata->unit_type) {
+ case BSPP_UNIT_VPS:
+ {
+ struct bspp_hevc_vps *vps =
+ (struct bspp_hevc_vps *)unitdata->out.vps_info->secure_vpsinfo;
+
+ unitdata->parse_error |= bspp_hevc_parse_vps(sr_ctx, vps);
+ unitdata->out.vps_info->vps_id =
+ vps->vps_video_parameter_set_id;
+ }
+ break;
+
+ case BSPP_UNIT_SEQUENCE:
+ {
+ struct bspp_ddbuf_array_info *tmp;
+ struct hevcfw_sequence_ps *fwsps;
+ struct vdec_comsequ_hdrinfo *common_seq;
+ struct bspp_hevc_sps *sps =
+ (struct bspp_hevc_sps *)unitdata->out.sequ_hdr_info->secure_sequence_info;
+
+ unitdata->parse_error |= bspp_hevc_parsesps(sr_ctx,
+ unitdata->str_res_handle,
+ sps);
+ unitdata->out.sequ_hdr_info->sequ_hdr_info.sequ_hdr_id =
+ sps->sps_seq_parameter_set_id;
+
+ tmp = &unitdata->out.sequ_hdr_info->fw_sequence;
+ /* handle firmware headers */
+ fwsps =
+ (struct hevcfw_sequence_ps *)((unsigned char *)tmp->ddbuf_info.cpu_virt_addr +
+ tmp->buf_offset);
+
+ bspp_hevc_fill_fwsps(sps, fwsps);
+
+ /* handle common sequence header */
+ common_seq =
+ &unitdata->out.sequ_hdr_info->sequ_hdr_info.com_sequ_hdr_info;
+
+ bspp_hevc_fillcommonseqhdr(sps, common_seq);
+ }
+ break;
+
+ case BSPP_UNIT_PPS:
+ {
+ struct bspp_ddbuf_array_info *tmp;
+ struct hevcfw_picture_ps *fw_pps;
+ struct bspp_hevc_pps *pps =
+ (struct bspp_hevc_pps *)unitdata->out.pps_info->secure_pps_info;
+
+ unitdata->parse_error |= bspp_hevc_parsepps(sr_ctx,
+ unitdata->str_res_handle,
+ pps);
+ unitdata->out.pps_info->pps_id = pps->pps_pic_parameter_set_id;
+
+ tmp = &unitdata->out.pps_info->fw_pps;
+ /* handle firmware headers */
+ fw_pps =
+ (struct hevcfw_picture_ps *)((unsigned char *)tmp->ddbuf_info.cpu_virt_addr +
+ tmp->buf_offset);
+ bspp_hevc_fill_fwpps(pps, fw_pps);
+ }
+ break;
+
+ case BSPP_UNIT_PICTURE:
+ {
+ struct bspp_hevc_slice_segment_header ssh;
+ struct bspp_vps_info *vps_info = NULL;
+ struct bspp_sequence_hdr_info *sequ_hdr_info = NULL;
+ struct bspp_hevc_sps *hevc_sps = NULL;
+ struct bspp_pps_info *ppsinfo = NULL;
+ enum bspp_error_type parse_error;
+ struct bspp_ddbuf_array_info *tmp;
+ struct hevcfw_picture_ps *fw_pps;
+ struct bspp_pict_data *pictdata;
+ struct bspp_hevc_pps *pps;
+
+ /*
+ * EOS has to be attached to picture data, so it can be used
+ * for NoRaslOutputFlag calculation in FW
+ */
+ inter_pict_ctx->hevc_ctx.eos_detected = 0;
+ if (nal_unit_type == HEVC_NALTYPE_EOS) {
+ inter_pict_ctx->hevc_ctx.eos_detected = 1;
+ break;
+ }
+
+ parse_error = bspp_hevc_parse_slicesegmentheader(sr_ctx,
+ unitdata->str_res_handle,
+ &ssh,
+ nal_unit_type,
+ &vps_info,
+ &sequ_hdr_info,
+ &ppsinfo);
+ unitdata->parse_error |= parse_error;
+ unitdata->slice = 1;
+
+ if (parse_error != BSPP_ERROR_NONE &&
+ parse_error != BSPP_ERROR_CORRECTION_VALIDVALUE) {
+ result = IMG_ERROR_CANCELLED;
+ break;
+ }
+
+		/* if we just started a new picture. */
+ if (ssh.first_slice_segment_in_pic_flag) {
+ tmp = &ppsinfo->fw_pps;
+ /* handle firmware headers */
+ fw_pps =
+ (struct hevcfw_picture_ps *)((unsigned char *)tmp->ddbuf_info.cpu_virt_addr
+ + tmp->buf_offset);
+
+ inter_pict_ctx->hevc_ctx.first_after_eos = 0;
+ if (inter_pict_ctx->hevc_ctx.eos_detected) {
+ inter_pict_ctx->hevc_ctx.first_after_eos = 1;
+ inter_pict_ctx->hevc_ctx.eos_detected = 0;
+ }
+
+ /* fill common picture header */
+ bspp_hevc_fillpicturehdr(&sequ_hdr_info->sequ_hdr_info.com_sequ_hdr_info,
+ (enum hevc_nalunittype)nal_unit_type,
+ unitdata->out.pict_hdr_info,
+ (struct bspp_hevc_sps *)
+ sequ_hdr_info->secure_sequence_info,
+ (struct bspp_hevc_pps *)ppsinfo->secure_pps_info,
+ (struct bspp_hevc_vps *)vps_info->secure_vpsinfo);
+
+ bspp_hevc_fill_fw_scaling_lists(ppsinfo->secure_pps_info,
+ sequ_hdr_info->secure_sequence_info,
+ fw_pps);
+
+ pictdata = &unitdata->out.pict_hdr_info->pict_aux_data;
+ /*
+ * We have no container for the PPS that passes down
+ * to the kernel, for this reason the hevc secure parser
+ * needs to populate that info into the picture
+ * header PictAuxData.
+ */
+ pictdata->bufmap_id = ppsinfo->bufmap_id;
+ pictdata->buf_offset = ppsinfo->buf_offset;
+ pictdata->pic_data = fw_pps;
+ pictdata->id = fw_pps->pps_pic_parameter_set_id;
+ pictdata->size = sizeof(*fw_pps);
+
+ ppsinfo->ref_count++;
+
+ /* new Coded Video Sequence indication */
+ if (nal_unit_type == HEVC_NALTYPE_IDR_W_RADL ||
+ nal_unit_type == HEVC_NALTYPE_IDR_N_LP ||
+ nal_unit_type == HEVC_NALTYPE_BLA_N_LP ||
+ nal_unit_type == HEVC_NALTYPE_BLA_W_RADL ||
+ nal_unit_type == HEVC_NALTYPE_BLA_W_LP ||
+ nal_unit_type == HEVC_NALTYPE_CRA) {
+ unitdata->new_closed_gop = 1;
+ inter_pict_ctx->hevc_ctx.seq_pic_count = 0;
+ }
+
+ /* Attach SEI data to the picture. */
+ if (!inter_pict_ctx->hevc_ctx.sei_info_attached_to_pic) {
+ /*
+ * If there is already a non-empty SEI list
+ * available
+ */
+ if (inter_pict_ctx->hevc_ctx.sei_rawdata_list) {
+ /* attach it to the picture header. */
+ unitdata->out.pict_hdr_info->hevc_pict_hdr_info.raw_sei_datalist_firstfield
+ =
+ (void *)inter_pict_ctx->hevc_ctx.sei_rawdata_list;
+ inter_pict_ctx->hevc_ctx.sei_info_attached_to_pic = 1;
+ } else {
+					/* Otherwise expose a handle to a picture header
+					 * field to attach the SEI list later.
+ */
+ inter_pict_ctx->hevc_ctx.hndl_pichdr_sei_rawdata_list =
+ &unitdata->out.pict_hdr_info->hevc_pict_hdr_info.raw_sei_datalist_firstfield;
+ }
+ }
+
+ /* Attach raw VUI data to the picture header. */
+ hevc_sps = (struct bspp_hevc_sps *)sequ_hdr_info->secure_sequence_info;
+ if (hevc_sps->vui_raw_data) {
+ hevc_sps->vui_raw_data->ref_count++;
+ unitdata->out.pict_hdr_info->hevc_pict_hdr_info.raw_vui_data =
+ (void *)hevc_sps->vui_raw_data;
+ }
+
+ inter_pict_ctx->hevc_ctx.seq_pic_count++;
+
+ /* NoOutputOfPriorPicsFlag */
+ inter_pict_ctx->not_dpb_flush = 0;
+ if (unitdata->new_closed_gop &&
+ bspp_hevc_picture_is_irap((enum hevc_nalunittype)nal_unit_type) &&
+ bspp_hevc_picture_getnorasl_outputflag((enum hevc_nalunittype)
+ nal_unit_type,
+ &inter_pict_ctx->hevc_ctx)) {
+ if (bspp_hevc_picture_is_cra((enum hevc_nalunittype)nal_unit_type))
+ inter_pict_ctx->not_dpb_flush = 1;
+ else
+ inter_pict_ctx->not_dpb_flush =
+ ssh.no_output_of_prior_pics_flag;
+ }
+
+ unitdata->parse_state->next_pic_is_new = 0;
+ }
+
+ pps = (struct bspp_hevc_pps *)ppsinfo->secure_pps_info;
+ unitdata->pict_sequ_hdr_id = pps->pps_seq_parameter_set_id;
+ }
+ break;
+
+ case BSPP_UNIT_UNCLASSIFIED:
+ case BSPP_UNIT_NON_PICTURE:
+ case BSPP_UNIT_UNSUPPORTED:
+ break;
+
+ default:
+ VDEC_ASSERT("Unknown BSPP Unit Type" == NULL);
+ break;
+ }
+
+ return result;
+}
+
+static void bspp_hevc_initialiseparsing(struct bspp_parse_state *parse_state)
+{
+ /* Indicate that SEI info has not yet been attached to this picture. */
+ parse_state->inter_pict_ctx->hevc_ctx.sei_info_attached_to_pic = 0;
+}
+
+static void bspp_hevc_finaliseparsing(void *str_alloc, struct bspp_parse_state *parse_state)
+{
+ /*
+ * If SEI info has not yet been attached to the picture and
+ * there is anything to be attached.
+ */
+ if (!parse_state->inter_pict_ctx->hevc_ctx.sei_info_attached_to_pic &&
+ parse_state->inter_pict_ctx->hevc_ctx.sei_rawdata_list) {
+ /* attach the SEI list if there is a handle provided for that. */
+ if (parse_state->inter_pict_ctx->hevc_ctx.hndl_pichdr_sei_rawdata_list) {
+ /* Attach the raw SEI list to the picture. */
+ *parse_state->inter_pict_ctx->hevc_ctx.hndl_pichdr_sei_rawdata_list =
+ (void *)parse_state->inter_pict_ctx->hevc_ctx.sei_rawdata_list;
+ /* Reset the inter-picture data. */
+ parse_state->inter_pict_ctx->hevc_ctx.hndl_pichdr_sei_rawdata_list = NULL;
+ } else {
+ /* Nowhere to attach the raw SEI list, so just free it. */
+ bspp_freeraw_sei_datalist
+ (str_alloc, parse_state->inter_pict_ctx->hevc_ctx.sei_rawdata_list);
+ }
+ }
+
+ /* Indicate that SEI info has been attached to the picture. */
+ parse_state->inter_pict_ctx->hevc_ctx.sei_info_attached_to_pic = 1;
+ /* Reset the inter-picture SEI list. */
+ parse_state->inter_pict_ctx->hevc_ctx.sei_rawdata_list = NULL;
+}
+
+static enum bspp_error_type bspp_hevc_parse_vps(void *sr_ctx, struct bspp_hevc_vps *vps)
+{
+ unsigned int parse_err = BSPP_ERROR_NONE;
+ unsigned int i, j;
+
+ VDEC_ASSERT(vps);
+ VDEC_ASSERT(sr_ctx);
+
+ memset(vps, 0, sizeof(struct bspp_hevc_vps));
+
+ HEVC_SWSR_UN("vps_video_parameter_set_id",
+ (unsigned int *)&vps->vps_video_parameter_set_id, 4, sr_ctx);
+ HEVC_SWSR_UN("vps_reserved_three_2bits",
+ (unsigned int *)&vps->vps_reserved_three_2bits, 2, sr_ctx);
+ HEVC_SWSR_UN("vps_max_layers_minus1",
+ (unsigned int *)&vps->vps_max_layers_minus1, 6, sr_ctx);
+ HEVC_SWSR_UN("vps_max_sub_layers_minus1",
+ (unsigned int *)&vps->vps_max_sub_layers_minus1, 3, sr_ctx);
+ HEVC_RANGEUCHECK("vps_max_sub_layers_minus1", vps->vps_max_sub_layers_minus1, 0,
+ HEVC_MAX_NUM_SUBLAYERS - 1, &parse_err);
+ HEVC_SWSR_U1("vps_temporal_id_nesting_flag",
+ &vps->vps_temporal_id_nesting_flag, sr_ctx);
+ HEVC_SWSR_UN("vps_reserved_0xffff_16bits",
+ (unsigned int *)&vps->vps_reserved_0xffff_16bits, 16, sr_ctx);
+
+ if (vps->vps_max_sub_layers_minus1 == 0)
+ HEVC_UCHECK("vps_temporal_id_nesting_flag",
+ vps->vps_temporal_id_nesting_flag, 1, &parse_err);
+
+ parse_err |= bspp_hevc_parse_profiletierlevel(sr_ctx, &vps->profiletierlevel,
+ vps->vps_max_sub_layers_minus1);
+
+ HEVC_SWSR_U1("vps_sub_layer_ordering_info_present_flag",
+ &vps->vps_sub_layer_ordering_info_present_flag, sr_ctx);
+ for (i = vps->vps_sub_layer_ordering_info_present_flag ?
+ 0 : vps->vps_max_sub_layers_minus1;
+ i <= vps->vps_max_sub_layers_minus1; ++i) {
+ HEVC_SWSR_UE("vps_max_dec_pic_buffering_minus1",
+ (unsigned int *)&vps->vps_max_dec_pic_buffering_minus1[i], sr_ctx);
+ HEVC_SWSR_UE("vps_max_num_reorder_pics",
+ (unsigned int *)&vps->vps_max_num_reorder_pics[i], sr_ctx);
+ HEVC_SWSR_UE("vps_max_latency_increase_plus1",
+ (unsigned int *)&vps->vps_max_latency_increase_plus1[i], sr_ctx);
+ }
+
+ HEVC_SWSR_UN("vps_max_layer_id", (unsigned int *)&vps->vps_max_layer_id, 6, sr_ctx);
+ HEVC_SWSR_UE("vps_num_layer_sets_minus1",
+ (unsigned int *)&vps->vps_num_layer_sets_minus1, sr_ctx);
+
+ for (i = 1; i <= vps->vps_num_layer_sets_minus1; ++i) {
+ for (j = 0; j <= vps->vps_max_layer_id; ++j) {
+ HEVC_SWSR_U1("layer_id_included_flag",
+ &vps->layer_id_included_flag[i][j], sr_ctx);
+ }
+ }
+
+ HEVC_SWSR_U1("vps_timing_info_present_flag", &vps->vps_timing_info_present_flag, sr_ctx);
+ if (vps->vps_timing_info_present_flag) {
+ HEVC_SWSR_UN("vps_num_units_in_tick",
+ (unsigned int *)&vps->vps_num_units_in_tick, 32, sr_ctx);
+ HEVC_SWSR_UN("vps_time_scale",
+ (unsigned int *)&vps->vps_time_scale, 32, sr_ctx);
+ HEVC_SWSR_U1("vps_poc_proportional_to_timing_flag",
+ &vps->vps_poc_proportional_to_timing_flag, sr_ctx);
+ if (vps->vps_poc_proportional_to_timing_flag)
+ HEVC_SWSR_UE("vps_num_ticks_poc_diff_one_minus1",
+ (unsigned int *)&vps->vps_num_ticks_poc_diff_one_minus1,
+ sr_ctx);
+
+ HEVC_SWSR_UE("vps_num_hrd_parameters",
+ (unsigned int *)&vps->vps_num_hrd_parameters, sr_ctx);
+
+ /* consume hrd_parameters */
+ for (i = 0; i < vps->vps_num_hrd_parameters; i++) {
+ unsigned short hrd_layer_set_idx;
+ unsigned char cprms_present_flag = 1;
+ struct bspp_hevc_hrd_parameters hrdparams;
+
+ HEVC_SWSR_UE("hrd_layer_set_idx",
+ (unsigned int *)&hrd_layer_set_idx, sr_ctx);
+ if (i > 0)
+ HEVC_SWSR_U1("cprms_present_flag", &cprms_present_flag, sr_ctx);
+
+ bspp_hevc_parsehrdparams(sr_ctx, &hrdparams,
+ cprms_present_flag,
+ vps->vps_max_sub_layers_minus1);
+ }
+ }
+ HEVC_SWSR_U1("vps_extension_flag", &vps->vps_extension_flag, sr_ctx);
+
+ return (enum bspp_error_type)parse_err;
+}
+
+static void bspp_hevc_sublayhrdparams(void *sr_ctx,
+ struct bspp_hevc_hrd_parameters *hrdparams,
+ unsigned char sublayer_id)
+{
+ unsigned char i;
+ unsigned char cpb_cnt = hrdparams->cpb_cnt_minus1[sublayer_id];
+ struct bspp_hevc_sublayer_hrd_parameters *sublay_hrdparams =
+ &hrdparams->sublayhrdparams[sublayer_id];
+
+ VDEC_ASSERT(sr_ctx);
+ VDEC_ASSERT(hrdparams);
+ VDEC_ASSERT(cpb_cnt < HEVC_MAX_CPB_COUNT);
+ VDEC_ASSERT(sublayer_id < HEVC_MAX_NUM_SUBLAYERS);
+
+ for (i = 0; i <= cpb_cnt; i++) {
+ HEVC_SWSR_UE("bit_rate_value_minus1",
+ (unsigned int *)&sublay_hrdparams->bit_rate_value_minus1[i], sr_ctx);
+ HEVC_SWSR_UE("cpb_size_value_minus1",
+ (unsigned int *)&sublay_hrdparams->cpb_size_value_minus1[i], sr_ctx);
+ if (hrdparams->sub_pic_hrd_params_present_flag) {
+ HEVC_SWSR_UE("cpb_size_du_value_minus1",
+ (unsigned int *)
+ &sublay_hrdparams->cpb_size_du_value_minus1[i],
+ sr_ctx);
+ HEVC_SWSR_UE("bit_rate_du_value_minus1",
+ (unsigned int *)
+ &sublay_hrdparams->bit_rate_du_value_minus1[i],
+ sr_ctx);
+ }
+ HEVC_SWSR_U1("cbr_flag", &sublay_hrdparams->cbr_flag[i], sr_ctx);
+ }
+}
+
+static void bspp_hevc_parsehrdparams(void *sr_ctx,
+ struct bspp_hevc_hrd_parameters *hrdparams,
+ unsigned char common_infpresent,
+ unsigned char max_numsublayers_minus1)
+{
+ unsigned char i;
+
+ VDEC_ASSERT(sr_ctx);
+ VDEC_ASSERT(hrdparams);
+ VDEC_ASSERT(max_numsublayers_minus1 < HEVC_MAX_NUM_SUBLAYERS);
+
+ memset(hrdparams, 0, sizeof(struct bspp_hevc_hrd_parameters));
+
+ if (common_infpresent) {
+ HEVC_SWSR_U1("nal_hrd_parameters_present_flag",
+ &hrdparams->nal_hrd_parameters_present_flag, sr_ctx);
+ HEVC_SWSR_U1("vcl_hrd_parameters_present_flag",
+ &hrdparams->vcl_hrd_parameters_present_flag, sr_ctx);
+ if (hrdparams->nal_hrd_parameters_present_flag ||
+ hrdparams->vcl_hrd_parameters_present_flag) {
+ HEVC_SWSR_U1("sub_pic_hrd_params_present_flag",
+ &hrdparams->sub_pic_hrd_params_present_flag,
+ sr_ctx);
+ if (hrdparams->sub_pic_hrd_params_present_flag) {
+ HEVC_SWSR_UN("tick_divisor_minus2",
+ (unsigned int *)&hrdparams->tick_divisor_minus2,
+ 8, sr_ctx);
+ HEVC_SWSR_UN
+ ("du_cpb_removal_delay_increment_length_minus1",
+ (unsigned int *)
+ &hrdparams->du_cpb_removal_delay_increment_length_minus1,
+ 5, sr_ctx);
+ HEVC_SWSR_U1("sub_pic_cpb_params_in_pic_timing_sei_flag",
+ &hrdparams->sub_pic_cpb_params_in_pic_timing_sei_flag,
+ sr_ctx);
+ HEVC_SWSR_UN("dpb_output_delay_du_length_minus1",
+ (unsigned int *)
+ &hrdparams->dpb_output_delay_du_length_minus1,
+ 5, sr_ctx);
+ }
+ HEVC_SWSR_UN("bit_rate_scale",
+ (unsigned int *)&hrdparams->bit_rate_scale, 4, sr_ctx);
+ HEVC_SWSR_UN("cpb_size_scale",
+ (unsigned int *)&hrdparams->cpb_size_scale, 4, sr_ctx);
+ if (hrdparams->sub_pic_hrd_params_present_flag)
+ HEVC_SWSR_UN("cpb_size_du_scale",
+ (unsigned int *)&hrdparams->cpb_size_du_scale,
+ 4, sr_ctx);
+
+ HEVC_SWSR_UN("initial_cpb_removal_delay_length_minus1",
+ (unsigned int *)
+ &hrdparams->initial_cpb_removal_delay_length_minus1,
+ 5, sr_ctx);
+ HEVC_SWSR_UN("au_cpb_removal_delay_length_minus1",
+ (unsigned int *)&hrdparams->au_cpb_removal_delay_length_minus1,
+ 5, sr_ctx);
+ HEVC_SWSR_UN("dpb_output_delay_length_minus1",
+ (unsigned int *)&hrdparams->dpb_output_delay_length_minus1,
+ 5, sr_ctx);
+ }
+ }
+ for (i = 0; i <= max_numsublayers_minus1; i++) {
+ HEVC_SWSR_U1("fixed_pic_rate_general_flag",
+ &hrdparams->fixed_pic_rate_general_flag[i], sr_ctx);
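+		/*
+		 * When fixed_pic_rate_general_flag is set, the within-CVS
+		 * flag is inferred to be equal to it, as per the HEVC spec.
+		 */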
+ hrdparams->fixed_pic_rate_within_cvs_flag[i] =
+ hrdparams->fixed_pic_rate_general_flag[i];
+ if (!hrdparams->fixed_pic_rate_general_flag[i])
+ HEVC_SWSR_U1("fixed_pic_rate_within_cvs_flag",
+ &hrdparams->fixed_pic_rate_within_cvs_flag[i],
+ sr_ctx);
+
+ if (hrdparams->fixed_pic_rate_within_cvs_flag[i])
+ HEVC_SWSR_UE("elemental_duration_in_tc_minus1",
+ (unsigned int *)&hrdparams->elemental_duration_in_tc_minus1[i],
+ sr_ctx);
+ else
+ HEVC_SWSR_U1("low_delay_hrd_flag",
+ &hrdparams->low_delay_hrd_flag[i], sr_ctx);
+
+ if (!hrdparams->low_delay_hrd_flag[i])
+ HEVC_SWSR_UE("cpb_cnt_minus1",
+ (unsigned int *)&hrdparams->cpb_cnt_minus1[i], sr_ctx);
+
+ if (hrdparams->nal_hrd_parameters_present_flag)
+ bspp_hevc_sublayhrdparams(sr_ctx, hrdparams, i);
+
+ if (hrdparams->vcl_hrd_parameters_present_flag)
+ bspp_hevc_sublayhrdparams(sr_ctx, hrdparams, i);
+ }
+}
+
+static enum bspp_error_type bspp_hevc_parsevui_parameters
+ (void *sr_ctx,
+ struct bspp_hevc_vui_params *vui_params,
+ unsigned char sps_max_sub_layers_minus1)
+{
+ enum bspp_error_type parse_err = BSPP_ERROR_NONE;
+
+ VDEC_ASSERT(sr_ctx);
+ VDEC_ASSERT(vui_params);
+
+ memset(vui_params, 0, sizeof(struct bspp_hevc_vui_params));
+
+ HEVC_SWSR_U1("aspect_ratio_info_present_flag",
+ &vui_params->aspect_ratio_info_present_flag, sr_ctx);
+ if (vui_params->aspect_ratio_info_present_flag) {
+ HEVC_SWSR_UN("aspect_ratio_idc",
+ (unsigned int *)&vui_params->aspect_ratio_idc, 8, sr_ctx);
+ if (vui_params->aspect_ratio_idc == HEVC_EXTENDED_SAR) {
+ HEVC_SWSR_UN("sar_width",
+ (unsigned int *)&vui_params->sar_width, 16, sr_ctx);
+ HEVC_SWSR_UN("sar_height",
+ (unsigned int *)&vui_params->sar_height, 16, sr_ctx);
+ }
+ }
+ HEVC_SWSR_U1("overscan_info_present_flag",
+ &vui_params->overscan_info_present_flag, sr_ctx);
+
+ if (vui_params->overscan_info_present_flag)
+ HEVC_SWSR_U1("overscan_appropriate_flag",
+ &vui_params->overscan_appropriate_flag, sr_ctx);
+
+ HEVC_SWSR_U1("video_signal_type_present_flag",
+ &vui_params->video_signal_type_present_flag, sr_ctx);
+
+ if (vui_params->video_signal_type_present_flag) {
+ HEVC_SWSR_UN("video_format",
+ (unsigned int *)&vui_params->video_format, 3, sr_ctx);
+ HEVC_SWSR_U1("video_full_range_flag",
+ &vui_params->video_full_range_flag, sr_ctx);
+ HEVC_SWSR_U1("colour_description_present_flag",
+ &vui_params->colour_description_present_flag,
+ sr_ctx);
+ if (vui_params->colour_description_present_flag) {
+ HEVC_SWSR_UN("colour_primaries",
+ (unsigned int *)&vui_params->colour_primaries, 8, sr_ctx);
+ HEVC_SWSR_UN("transfer_characteristics",
+ (unsigned int *)&vui_params->transfer_characteristics,
+ 8, sr_ctx);
+ HEVC_SWSR_UN("matrix_coeffs",
+ (unsigned int *)&vui_params->matrix_coeffs, 8, sr_ctx);
+ }
+ }
+
+ HEVC_SWSR_U1("chroma_loc_info_present_flag",
+ &vui_params->chroma_loc_info_present_flag, sr_ctx);
+ if (vui_params->chroma_loc_info_present_flag) {
+ HEVC_SWSR_UE("chroma_sample_loc_type_top_field",
+ (unsigned int *)&vui_params->chroma_sample_loc_type_top_field,
+ sr_ctx);
+ HEVC_RANGEUCHECK("chroma_sample_loc_type_top_field",
+ vui_params->chroma_sample_loc_type_top_field,
+ 0, 5, &parse_err);
+ HEVC_SWSR_UE("chroma_sample_loc_type_bottom_field",
+ (unsigned int *)&vui_params->chroma_sample_loc_type_bottom_field,
+ sr_ctx);
+ HEVC_RANGEUCHECK("chroma_sample_loc_type_bottom_field",
+ vui_params->chroma_sample_loc_type_bottom_field,
+ 0, 5, &parse_err);
+ }
+ HEVC_SWSR_U1("neutral_chroma_indication_flag",
+ &vui_params->neutral_chroma_indication_flag, sr_ctx);
+ HEVC_SWSR_U1("field_seq_flag",
+ &vui_params->field_seq_flag, sr_ctx);
+ HEVC_SWSR_U1("frame_field_info_present_flag",
+ &vui_params->frame_field_info_present_flag, sr_ctx);
+ HEVC_SWSR_U1("default_display_window_flag",
+ &vui_params->default_display_window_flag, sr_ctx);
+ if (vui_params->default_display_window_flag) {
+ HEVC_SWSR_UE("def_disp_win_left_offset",
+ (unsigned int *)&vui_params->def_disp_win_left_offset, sr_ctx);
+ HEVC_SWSR_UE("def_disp_win_right_offset",
+ (unsigned int *)&vui_params->def_disp_win_right_offset, sr_ctx);
+ HEVC_SWSR_UE("def_disp_win_top_offset",
+ (unsigned int *)&vui_params->def_disp_win_top_offset, sr_ctx);
+ HEVC_SWSR_UE("def_disp_win_bottom_offset",
+ (unsigned int *)&vui_params->def_disp_win_bottom_offset, sr_ctx);
+ }
+ HEVC_SWSR_U1("vui_timing_info_present_flag",
+ &vui_params->vui_timing_info_present_flag, sr_ctx);
+ if (vui_params->vui_timing_info_present_flag) {
+ HEVC_SWSR_UN("vui_num_units_in_tick",
+ (unsigned int *)&vui_params->vui_num_units_in_tick, 32, sr_ctx);
+ HEVC_SWSR_UN("vui_time_scale",
+ (unsigned int *)&vui_params->vui_time_scale, 32, sr_ctx);
+ HEVC_SWSR_U1("vui_poc_proportional_to_timing_flag",
+ &vui_params->vui_poc_proportional_to_timing_flag,
+ sr_ctx);
+ if (vui_params->vui_poc_proportional_to_timing_flag)
+ HEVC_SWSR_UE("vui_num_ticks_poc_diff_one_minus1",
+ (unsigned int *)&vui_params->vui_num_ticks_poc_diff_one_minus1,
+ sr_ctx);
+
+ HEVC_SWSR_U1("vui_hrd_parameters_present_flag",
+ &vui_params->vui_hrd_parameters_present_flag,
+ sr_ctx);
+ if (vui_params->vui_hrd_parameters_present_flag)
+ bspp_hevc_parsehrdparams(sr_ctx, &vui_params->vui_hrd_params,
+ 1, sps_max_sub_layers_minus1);
+ }
+ HEVC_SWSR_U1("bitstream_restriction_flag",
+ &vui_params->bitstream_restriction_flag, sr_ctx);
+
+ if (vui_params->bitstream_restriction_flag) {
+ HEVC_SWSR_U1("tiles_fixed_structure_flag",
+ &vui_params->tiles_fixed_structure_flag, sr_ctx);
+ HEVC_SWSR_U1("motion_vectors_over_pic_boundaries_flag",
+ &vui_params->motion_vectors_over_pic_boundaries_flag,
+ sr_ctx);
+ HEVC_SWSR_U1("restricted_ref_pic_lists_flag",
+ &vui_params->restricted_ref_pic_lists_flag, sr_ctx);
+
+ HEVC_SWSR_UE("min_spatial_segmentation_idc",
+ (unsigned int *)&vui_params->min_spatial_segmentation_idc, sr_ctx);
+ HEVC_RANGEUCHECK("min_spatial_segmentation_idc",
+ vui_params->min_spatial_segmentation_idc,
+ 0, 4095, &parse_err);
+
+ HEVC_SWSR_UE("max_bytes_per_pic_denom",
+ (unsigned int *)&vui_params->max_bytes_per_pic_denom, sr_ctx);
+ HEVC_RANGEUCHECK("max_bytes_per_pic_denom", vui_params->max_bytes_per_pic_denom,
+ 0, 16, &parse_err);
+
+ HEVC_SWSR_UE("max_bits_per_min_cu_denom",
+ (unsigned int *)&vui_params->max_bits_per_min_cu_denom, sr_ctx);
+ HEVC_RANGEUCHECK("max_bits_per_min_cu_denom", vui_params->max_bits_per_min_cu_denom,
+ 0, 16, &parse_err);
+
+ HEVC_SWSR_UE("log2_max_mv_length_horizontal",
+ (unsigned int *)&vui_params->log2_max_mv_length_horizontal, sr_ctx);
+ HEVC_RANGEUCHECK("log2_max_mv_length_horizontal",
+ vui_params->log2_max_mv_length_horizontal,
+ 0, 16, &parse_err);
+
+ HEVC_SWSR_UE("log2_max_mv_length_vertical",
+ (unsigned int *)&vui_params->log2_max_mv_length_vertical, sr_ctx);
+ HEVC_RANGEUCHECK("log2_max_mv_length_vertical",
+ vui_params->log2_max_mv_length_vertical,
+ 0, 15, &parse_err);
+ }
+
+ return parse_err;
+}
+
+static enum bspp_error_type bspp_hevc_parse_spsrange_extensions
+ (void *sr_ctx,
+ struct bspp_hevc_sps_range_exts *range_exts)
+{
+ enum bspp_error_type parse_err = BSPP_ERROR_NONE;
+
+ VDEC_ASSERT(sr_ctx);
+ VDEC_ASSERT(range_exts);
+
+ memset(range_exts, 0, sizeof(struct bspp_hevc_sps_range_exts));
+
+ HEVC_SWSR_U1("transform_skip_rotation_enabled_flag",
+ &range_exts->transform_skip_rotation_enabled_flag, sr_ctx);
+ HEVC_SWSR_U1("transform_skip_context_enabled_flag",
+ &range_exts->transform_skip_context_enabled_flag, sr_ctx);
+ HEVC_SWSR_U1("implicit_rdpcm_enabled_flag",
+ &range_exts->implicit_rdpcm_enabled_flag, sr_ctx);
+ HEVC_SWSR_U1("explicit_rdpcm_enabled_flag",
+ &range_exts->explicit_rdpcm_enabled_flag, sr_ctx);
+ HEVC_SWSR_U1("extended_precision_processing_flag",
+ &range_exts->extended_precision_processing_flag, sr_ctx);
+ HEVC_UCHECK("extended_precision_processing_flag",
+ range_exts->extended_precision_processing_flag,
+ 0, &parse_err);
+ HEVC_SWSR_U1("intra_smoothing_disabled_flag",
+ &range_exts->intra_smoothing_disabled_flag, sr_ctx);
+ HEVC_SWSR_U1("high_precision_offsets_enabled_flag",
+ &range_exts->high_precision_offsets_enabled_flag, sr_ctx);
+ HEVC_SWSR_U1("persistent_rice_adaptation_enabled_flag",
+ &range_exts->persistent_rice_adaptation_enabled_flag,
+ sr_ctx);
+ HEVC_SWSR_U1("cabac_bypass_alignment_enabled_flag",
+ &range_exts->cabac_bypass_alignment_enabled_flag, sr_ctx);
+
+ return parse_err;
+}
+
+static unsigned char
+bspp_hevc_checksps_range_extensions(struct bspp_hevc_sps_range_exts *range_exts)
+{
+ VDEC_ASSERT(range_exts);
+
+ if (range_exts->transform_skip_rotation_enabled_flag ||
+ range_exts->transform_skip_context_enabled_flag ||
+ range_exts->implicit_rdpcm_enabled_flag ||
+ range_exts->explicit_rdpcm_enabled_flag ||
+ range_exts->extended_precision_processing_flag ||
+ range_exts->intra_smoothing_disabled_flag ||
+ range_exts->persistent_rice_adaptation_enabled_flag ||
+ range_exts->cabac_bypass_alignment_enabled_flag)
+ return 1;
+ /*
+	 * Note: high_precision_offsets_enabled_flag is supported even
+	 * if the hw capability (bHevcRangeExt) is not set.
+ */
+ return 0;
+}
+
+static enum bspp_error_type bspp_hevc_parsesps(void *sr_ctx,
+ void *str_res,
+ struct bspp_hevc_sps *sps)
+{
+ enum bspp_error_type parse_err = BSPP_ERROR_NONE;
+ unsigned char i;
+ unsigned int min_cblog2_size_y;
+
+ if (!sr_ctx || !sps) {
+ VDEC_ASSERT(0);
+ return BSPP_ERROR_INVALID_VALUE;
+ }
+
+ memset(sps, 0, sizeof(struct bspp_hevc_sps));
+
+ HEVC_SWSR_UN("sps_video_parameter_set_id",
+ (unsigned int *)&sps->sps_video_parameter_set_id, 4, sr_ctx);
+ HEVC_SWSR_UN("sps_max_sub_layers_minus1",
+ (unsigned int *)&sps->sps_max_sub_layers_minus1, 3, sr_ctx);
+ HEVC_RANGEUCHECK("sps_max_sub_layers_minus1", sps->sps_max_sub_layers_minus1, 0,
+ HEVC_MAX_NUM_SUBLAYERS - 1, &parse_err);
+ HEVC_SWSR_U1("sps_temporal_id_nesting_flag",
+ &sps->sps_temporal_id_nesting_flag, sr_ctx);
+
+ if (sps->sps_max_sub_layers_minus1 == 0)
+ HEVC_UCHECK("sps_temporal_id_nesting_flag",
+ sps->sps_temporal_id_nesting_flag, 1, &parse_err);
+
+ parse_err |= bspp_hevc_parse_profiletierlevel
+ (sr_ctx, &sps->profile_tier_level,
+ sps->sps_max_sub_layers_minus1);
+
+ HEVC_SWSR_UE("sps_seq_parameter_set_id",
+ (unsigned int *)&sps->sps_seq_parameter_set_id, sr_ctx);
+ HEVC_RANGEUCHECK("sps_seq_parameter_set_id", sps->sps_seq_parameter_set_id, 0,
+ HEVC_MAX_SPS_COUNT - 1, &parse_err);
+
+ HEVC_SWSR_UE("chroma_format_idc", (unsigned int *)&sps->chroma_format_idc, sr_ctx);
+ HEVC_RANGEUCHECK("chroma_format_idc", sps->chroma_format_idc, 0, 3, &parse_err);
+
+ if (sps->chroma_format_idc == 3)
+ HEVC_SWSR_U1("separate_colour_plane_flag",
+ &sps->separate_colour_plane_flag, sr_ctx);
+
+ HEVC_SWSR_UE("pic_width_in_luma_samples",
+ (unsigned int *)&sps->pic_width_in_luma_samples, sr_ctx);
+ HEVC_SWSR_UE("pic_height_in_luma_samples",
+ (unsigned int *)&sps->pic_height_in_luma_samples, sr_ctx);
+
+ HEVC_SWSR_U1("conformance_window_flag", &sps->conformance_window_flag, sr_ctx);
+
+ if (sps->pic_width_in_luma_samples == 0 ||
+ sps->pic_height_in_luma_samples == 0) {
+ pr_warn("Invalid video dimensions (%u, %u)",
+ sps->pic_width_in_luma_samples,
+ sps->pic_height_in_luma_samples);
+ parse_err |= BSPP_ERROR_UNRECOVERABLE;
+ }
+
+ if (sps->conformance_window_flag) {
+ HEVC_SWSR_UE("conf_win_left_offset",
+ (unsigned int *)&sps->conf_win_left_offset, sr_ctx);
+ HEVC_SWSR_UE("conf_win_right_offset",
+ (unsigned int *)&sps->conf_win_right_offset, sr_ctx);
+ HEVC_SWSR_UE("conf_win_top_offset",
+ (unsigned int *)&sps->conf_win_top_offset, sr_ctx);
+ HEVC_SWSR_UE("conf_win_bottom_offset",
+ (unsigned int *)&sps->conf_win_bottom_offset, sr_ctx);
+ }
+
+ HEVC_SWSR_UE("bit_depth_luma_minus8",
+ (unsigned int *)&sps->bit_depth_luma_minus8, sr_ctx);
+ HEVC_RANGEUCHECK("bit_depth_luma_minus8",
+ sps->bit_depth_luma_minus8, 0, 6, &parse_err);
+ HEVC_SWSR_UE("bit_depth_chroma_minus8",
+ (unsigned int *)&sps->bit_depth_chroma_minus8, sr_ctx);
+ HEVC_RANGEUCHECK("bit_depth_chroma_minus8", sps->bit_depth_chroma_minus8,
+ 0, 6, &parse_err);
+
+ HEVC_SWSR_UE("log2_max_pic_order_cnt_lsb_minus4",
+ (unsigned int *)&sps->log2_max_pic_order_cnt_lsb_minus4, sr_ctx);
+ HEVC_RANGEUCHECK("log2_max_pic_order_cnt_lsb_minus4",
+ sps->log2_max_pic_order_cnt_lsb_minus4,
+ 0, 12, &parse_err);
+
+ HEVC_SWSR_U1("sps_sub_layer_ordering_info_present_flag",
+ &sps->sps_sub_layer_ordering_info_present_flag, sr_ctx);
+ for (i = (sps->sps_sub_layer_ordering_info_present_flag ?
+ 0 : sps->sps_max_sub_layers_minus1);
+ i <= sps->sps_max_sub_layers_minus1; ++i) {
+ HEVC_SWSR_UE("sps_max_dec_pic_buffering_minus1",
+ (unsigned int *)&sps->sps_max_dec_pic_buffering_minus1[i], sr_ctx);
+ HEVC_SWSR_UE("sps_max_num_reorder_pics",
+ (unsigned int *)&sps->sps_max_num_reorder_pics[i], sr_ctx);
+ HEVC_SWSR_UE("sps_max_latency_increase_plus1",
+ (unsigned int *)&sps->sps_max_latency_increase_plus1[i], sr_ctx);
+ }
+
+ HEVC_SWSR_UE("log2_min_luma_coding_block_size_minus3",
+ (unsigned int *)&sps->log2_min_luma_coding_block_size_minus3, sr_ctx);
+ HEVC_SWSR_UE("log2_diff_max_min_luma_coding_block_size",
+ (unsigned int *)&sps->log2_diff_max_min_luma_coding_block_size, sr_ctx);
+ HEVC_SWSR_UE("log2_min_transform_block_size_minus2",
+ (unsigned int *)&sps->log2_min_transform_block_size_minus2, sr_ctx);
+ HEVC_SWSR_UE("log2_diff_max_min_transform_block_size",
+ (unsigned int *)&sps->log2_diff_max_min_transform_block_size, sr_ctx);
+ HEVC_SWSR_UE("max_transform_hierarchy_depth_inter",
+ (unsigned int *)&sps->max_transform_hierarchy_depth_inter, sr_ctx);
+ HEVC_SWSR_UE("max_transform_hierarchy_depth_intra",
+ (unsigned int *)&sps->max_transform_hierarchy_depth_intra, sr_ctx);
+
+ HEVC_SWSR_U1("scaling_list_enabled_flag", &sps->scaling_list_enabled_flag, sr_ctx);
+
+ if (sps->scaling_list_enabled_flag) {
+ HEVC_SWSR_U1("sps_scaling_list_data_present_flag",
+ &sps->sps_scaling_list_data_present_flag, sr_ctx);
+ if (sps->sps_scaling_list_data_present_flag)
+ parse_err |= bspp_hevc_parse_scalinglistdata(sr_ctx,
+ &sps->scalinglist_data);
+ else
+ bspp_hevc_usedefault_scalinglists(&sps->scalinglist_data);
+ }
+
+ HEVC_SWSR_U1("amp_enabled_flag", &sps->amp_enabled_flag, sr_ctx);
+ HEVC_SWSR_U1("sample_adaptive_offset_enabled_flag",
+ &sps->sample_adaptive_offset_enabled_flag, sr_ctx);
+ HEVC_SWSR_U1("pcm_enabled_flag", &sps->pcm_enabled_flag, sr_ctx);
+
+ if (sps->pcm_enabled_flag) {
+ HEVC_SWSR_UN("pcm_sample_bit_depth_luma_minus1",
+ (unsigned int *)&sps->pcm_sample_bit_depth_luma_minus1,
+ 4, sr_ctx);
+ HEVC_SWSR_UN("pcm_sample_bit_depth_chroma_minus1",
+ (unsigned int *)&sps->pcm_sample_bit_depth_chroma_minus1,
+ 4, sr_ctx);
+ HEVC_SWSR_UE("log2_min_pcm_luma_coding_block_size_minus3",
+ (unsigned int *)&sps->log2_min_pcm_luma_coding_block_size_minus3,
+ sr_ctx);
+ HEVC_SWSR_UE("log2_diff_max_min_pcm_luma_coding_block_size",
+ (unsigned int *)&sps->log2_diff_max_min_pcm_luma_coding_block_size,
+ sr_ctx);
+ HEVC_SWSR_U1("pcm_loop_filter_disabled_flag",
+ &sps->pcm_loop_filter_disabled_flag, sr_ctx);
+ } else {
+ sps->pcm_sample_bit_depth_luma_minus1 = 7;
+ sps->pcm_sample_bit_depth_chroma_minus1 = 7;
+ sps->log2_min_pcm_luma_coding_block_size_minus3 = 0;
+ sps->log2_diff_max_min_pcm_luma_coding_block_size = 2;
+ }
+
+ HEVC_SWSR_UE("num_short_term_ref_pic_sets",
+ (unsigned int *)&sps->num_short_term_ref_pic_sets, sr_ctx);
+ HEVC_RANGEUCHECK("num_short_term_ref_pic_sets", sps->num_short_term_ref_pic_sets, 0,
+ HEVC_MAX_NUM_ST_REF_PIC_SETS - 1, &parse_err);
+
+ for (i = 0; i < sps->num_short_term_ref_pic_sets; ++i) {
+ parse_err |= bspp_hevc_parse_shortterm_refpicset(sr_ctx,
+ sps->rps_list,
+ i,
+ 0);
+ }
+
+ HEVC_SWSR_U1("long_term_ref_pics_present_flag",
+ &sps->long_term_ref_pics_present_flag, sr_ctx);
+ if (sps->long_term_ref_pics_present_flag) {
+ HEVC_SWSR_UE("num_long_term_ref_pics_sps",
+ (unsigned int *)&sps->num_long_term_ref_pics_sps, sr_ctx);
+ HEVC_RANGEUCHECK("num_long_term_ref_pics_sps",
+ sps->num_long_term_ref_pics_sps, 0,
+ HEVC_MAX_NUM_LT_REF_PICS, &parse_err);
+ for (i = 0; i < sps->num_long_term_ref_pics_sps; ++i) {
+ HEVC_SWSR_UN("lt_ref_pic_poc_lsb_sps",
+ (unsigned int *)&sps->lt_ref_pic_poc_lsb_sps[i],
+ sps->log2_max_pic_order_cnt_lsb_minus4 + 4,
+ sr_ctx);
+ HEVC_SWSR_U1("used_by_curr_pic_lt_sps_flag",
+ &sps->used_by_curr_pic_lt_sps_flag[i],
+ sr_ctx);
+ }
+ }
+
+ HEVC_SWSR_U1("sps_temporal_mvp_enabled_flag", &sps->sps_temporal_mvp_enabled_flag, sr_ctx);
+ HEVC_SWSR_U1("strong_intra_smoothing_enabled_flag",
+ &sps->strong_intra_smoothing_enabled_flag, sr_ctx);
+ HEVC_SWSR_U1("vui_parameters_present_flag", &sps->vui_parameters_present_flag, sr_ctx);
+
+ if (sps->vui_parameters_present_flag)
+ bspp_hevc_parsevui_parameters(sr_ctx, &sps->vui_params,
+ sps->sps_max_sub_layers_minus1);
+
+ HEVC_SWSR_U1("sps_extension_present_flag", &sps->sps_extension_present_flag, sr_ctx);
+ if (sps->sps_extension_present_flag &&
+ bspp_hevc_range_extensions_is_enabled(&sps->profile_tier_level)) {
+ HEVC_SWSR_U1("sps_range_extensions_flag", &sps->sps_range_extensions_flag, sr_ctx);
+
+ HEVC_SWSR_UN("sps_extension_7bits", (unsigned int *)&sps->sps_extension_7bits, 7,
+ sr_ctx);
+		/*
+		 * Ignore the extension data, but report if any
+		 * non-zero data was found.
+		 */
+ HEVC_UCHECK("sps_extension_7bits", sps->sps_extension_7bits, 0, &parse_err);
+		/*
+		 * TODO ?: the newest HEVC spec (10/2014) splits
+		 * "sps_extension_7bits" into sps_multilayer_extension_flag (1)
+		 * and sps_extension_6bits (6)
+		 */
+ if (sps->sps_range_extensions_flag)
+ parse_err |= bspp_hevc_parse_spsrange_extensions
+ (sr_ctx, &sps->range_exts);
+ }
+ /*
+ * calculate "derived" variables needed further in the parsing process
+ * (of other headers) and save them for later use
+ */
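+	/*
+	 * chroma_format_idc: 0 = monochrome, 1 = 4:2:0, 2 = 4:2:2, 3 = 4:4:4.
+	 * SubWidthC/SubHeightC are set accordingly (2x2 subsampling for 4:2:0,
+	 * horizontal-only subsampling for 4:2:2, none otherwise).
+	 */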
+ sps->sub_width_c = 1;
+ sps->sub_height_c = 1;
+ if (sps->chroma_format_idc == 2) {
+ sps->sub_width_c = 2;
+ } else if (sps->chroma_format_idc == 1) {
+ sps->sub_width_c = 2;
+ sps->sub_height_c = 2;
+ }
+
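+	/* derive the CTB geometry: MinCbLog2SizeY, CtbLog2SizeY, CtbSizeY */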
+ min_cblog2_size_y = sps->log2_min_luma_coding_block_size_minus3 + 3;
+ sps->ctb_log2size_y =
+ min_cblog2_size_y + sps->log2_diff_max_min_luma_coding_block_size;
+ sps->ctb_size_y = 1 << sps->ctb_log2size_y;
+
+ if (sps->ctb_size_y > 0) {
+ /* use integer division with rounding up */
+ sps->pic_width_in_ctbs_y =
+ (sps->pic_width_in_luma_samples + sps->ctb_size_y - 1)
+ / sps->ctb_size_y;
+ sps->pic_height_in_ctbs_y =
+ (sps->pic_height_in_luma_samples + sps->ctb_size_y - 1)
+ / sps->ctb_size_y;
+ } else {
+ parse_err |= BSPP_ERROR_INVALID_VALUE;
+ }
+
+ sps->pic_size_in_ctbs_y =
+ sps->pic_width_in_ctbs_y * sps->pic_height_in_ctbs_y;
+
+ sps->max_pic_order_cnt_lsb =
+ 1 << (sps->log2_max_pic_order_cnt_lsb_minus4 + 4);
+
+ for (i = 0; i <= sps->sps_max_sub_layers_minus1; ++i) {
+ sps->sps_max_latency_pictures[i] =
+ sps->sps_max_num_reorder_pics[i] +
+ sps->sps_max_latency_increase_plus1[i] - 1;
+ }
+
+ BSPP_HEVC_SYNTAX("ctb_size_y: %u", sps->ctb_size_y);
+ BSPP_HEVC_SYNTAX("pic_width_in_ctbs_y: %u", sps->pic_width_in_ctbs_y);
+ BSPP_HEVC_SYNTAX("pic_height_in_ctbs_y: %u", sps->pic_height_in_ctbs_y);
+ BSPP_HEVC_SYNTAX("pic_size_in_ctbs_y: %u", sps->pic_size_in_ctbs_y);
+
+ return parse_err;
+}
+
+static int bspp_hevc_release_sequhdrinfo(void *str_alloc, void *secure_spsinfo)
+{
+ struct bspp_hevc_sps *hevc_sps = (struct bspp_hevc_sps *)secure_spsinfo;
+
+ if (!hevc_sps)
+ return IMG_ERROR_INVALID_PARAMETERS;
+
+	/* Release the raw VUI data. */
+ bspp_streamrelese_rawbstrdataplain(str_alloc, (void *)hevc_sps->vui_raw_data);
+ return 0;
+}
+
+static int bspp_hevc_releasedata(void *str_alloc, enum bspp_unit_type data_type,
+ void *data_handle)
+{
+ int result = 0;
+
+ if (!data_handle)
+ return IMG_ERROR_INVALID_PARAMETERS;
+
+ switch (data_type) {
+ case BSPP_UNIT_SEQUENCE:
+ result = bspp_hevc_release_sequhdrinfo(str_alloc, data_handle);
+ break;
+ default:
+ break;
+ }
+
+ return result;
+}
+
+static int bspp_hevc_reset_ppsinfo(void *secure_ppsinfo)
+{
+ struct bspp_hevc_pps *hevc_pps = NULL;
+
+ if (!secure_ppsinfo)
+ return IMG_ERROR_INVALID_PARAMETERS;
+
+ hevc_pps = (struct bspp_hevc_pps *)secure_ppsinfo;
+
+ memset(hevc_pps, 0, sizeof(*hevc_pps));
+
+ return 0;
+}
+
+static int bspp_hevc_resetdata(enum bspp_unit_type data_type, void *data_handle)
+{
+ int result = 0;
+
+ switch (data_type) {
+ case BSPP_UNIT_PPS:
+ result = bspp_hevc_reset_ppsinfo(data_handle);
+ break;
+ default:
+ break;
+ }
+ return result;
+}
+
+static enum bspp_error_type bspp_hevc_parsepps_range_extensions
+ (void *sr_ctx,
+ struct bspp_hevc_pps_range_exts *range_exts,
+ unsigned char transform_skip_enabled_flag,
+ unsigned char log2_diff_max_min_luma_coding_block_size)
+{
+ enum bspp_error_type parse_err = BSPP_ERROR_NONE;
+
+ VDEC_ASSERT(sr_ctx);
+ VDEC_ASSERT(range_exts);
+
+ memset(range_exts, 0, sizeof(struct bspp_hevc_pps_range_exts));
+
+ if (transform_skip_enabled_flag)
+ HEVC_SWSR_UE("log2_max_transform_skip_block_size_minus2",
+ (unsigned int *)&range_exts->log2_max_transform_skip_block_size_minus2,
+ sr_ctx);
+
+ HEVC_SWSR_U1("cross_component_prediction_enabled_flag",
+ &range_exts->cross_component_prediction_enabled_flag,
+ sr_ctx);
+ HEVC_UCHECK("cross_component_prediction_enabled_flag",
+ range_exts->cross_component_prediction_enabled_flag, 0,
+ &parse_err);
+
+ HEVC_SWSR_U1("chroma_qp_offset_list_enabled_flag",
+ &range_exts->chroma_qp_offset_list_enabled_flag, sr_ctx);
+
+ if (range_exts->chroma_qp_offset_list_enabled_flag) {
+ unsigned char i;
+
+ HEVC_SWSR_UE("diff_cu_chroma_qp_offset_depth",
+ (unsigned int *)&range_exts->diff_cu_chroma_qp_offset_depth,
+ sr_ctx);
+ HEVC_RANGEUCHECK("diff_cu_chroma_qp_offset_depth",
+ range_exts->diff_cu_chroma_qp_offset_depth, 0,
+ log2_diff_max_min_luma_coding_block_size,
+ &parse_err);
+
+ HEVC_SWSR_UE("chroma_qp_offset_list_len_minus1",
+ (unsigned int *)&range_exts->chroma_qp_offset_list_len_minus1,
+ sr_ctx);
+ HEVC_RANGEUCHECK("chroma_qp_offset_list_len_minus1",
+ range_exts->chroma_qp_offset_list_len_minus1,
+ 0, HEVC_MAX_CHROMA_QP - 1, &parse_err);
+ for (i = 0; i <= range_exts->chroma_qp_offset_list_len_minus1; i++) {
+ HEVC_SWSR_SE("cb_qp_offset_list",
+ (int *)&range_exts->cb_qp_offset_list[i], sr_ctx);
+ HEVC_RANGESCHECK("cb_qp_offset_list", range_exts->cb_qp_offset_list[i],
+ -12, 12, &parse_err);
+ HEVC_SWSR_SE("cr_qp_offset_list",
+ (int *)&range_exts->cr_qp_offset_list[i], sr_ctx);
+ HEVC_RANGESCHECK("cr_qp_offset_list", range_exts->cr_qp_offset_list[i],
+ -12, 12, &parse_err);
+ }
+ }
+ HEVC_SWSR_UE("log2_sao_offset_scale_luma",
+ (unsigned int *)&range_exts->log2_sao_offset_scale_luma, sr_ctx);
+ HEVC_UCHECK("log2_sao_offset_scale_luma",
+ range_exts->log2_sao_offset_scale_luma, 0, &parse_err);
+ HEVC_SWSR_UE("log2_sao_offset_scale_chroma",
+ (unsigned int *)&range_exts->log2_sao_offset_scale_chroma, sr_ctx);
+ HEVC_UCHECK("log2_sao_offset_scale_chroma",
+ range_exts->log2_sao_offset_scale_chroma, 0, &parse_err);
+
+ return parse_err;
+}
+
+static unsigned char bspp_hevc_checkppsrangeextensions
+ (struct bspp_hevc_pps_range_exts *range_exts)
+{
+ VDEC_ASSERT(range_exts);
+
+ if (range_exts->log2_max_transform_skip_block_size_minus2 ||
+ range_exts->cross_component_prediction_enabled_flag)
+ return 1;
+	/*
+	 * Note: chroma_qp_offset_list_enabled_flag is supported even
+	 * if the hw range extension capability (bHevcRangeExt) is not set
+	 */
+ return 0;
+}
+
+static enum bspp_error_type bspp_hevc_parsepps
+ (void *sr_ctx, void *str_res,
+ struct bspp_hevc_pps *pps)
+{
+ enum bspp_error_type parse_err = BSPP_ERROR_NONE;
+ struct bspp_sequence_hdr_info *spsinfo = NULL;
+ struct bspp_hevc_sps *sps = NULL;
+
+ VDEC_ASSERT(sr_ctx);
+ VDEC_ASSERT(pps);
+ memset(pps, 0, sizeof(struct bspp_hevc_pps));
+
+ HEVC_SWSR_UE("pps_pic_parameter_set_id",
+ (unsigned int *)&pps->pps_pic_parameter_set_id, sr_ctx);
+ HEVC_RANGEUCHECK("pps_pic_parameter_set_id", pps->pps_pic_parameter_set_id, 0,
+ HEVC_MAX_PPS_COUNT - 1, &parse_err);
+ HEVC_SWSR_UE("pps_seq_parameter_set_id",
+ (unsigned int *)&pps->pps_seq_parameter_set_id, sr_ctx);
+ HEVC_RANGEUCHECK("pps_seq_parameter_set_id", pps->pps_seq_parameter_set_id, 0,
+ HEVC_MAX_SPS_COUNT - 1, &parse_err);
+
+ spsinfo = bspp_get_sequ_hdr(str_res, pps->pps_seq_parameter_set_id);
+ if (!spsinfo) {
+ parse_err |= BSPP_ERROR_NO_SEQUENCE_HDR;
+ } else {
+ sps = (struct bspp_hevc_sps *)spsinfo->secure_sequence_info;
+ VDEC_ASSERT(sps->sps_seq_parameter_set_id ==
+ pps->pps_seq_parameter_set_id);
+ }
+
+ HEVC_SWSR_U1("dependent_slice_segments_enabled_flag",
+ &pps->dependent_slice_segments_enabled_flag, sr_ctx);
+ HEVC_SWSR_U1("output_flag_present_flag",
+ &pps->output_flag_present_flag, sr_ctx);
+ HEVC_SWSR_UN("num_extra_slice_header_bits",
+ (unsigned int *)&pps->num_extra_slice_header_bits, 3, sr_ctx);
+ HEVC_SWSR_U1("sign_data_hiding_enabled_flag", &pps->sign_data_hiding_enabled_flag, sr_ctx);
+ HEVC_SWSR_U1("cabac_init_present_flag", &pps->cabac_init_present_flag, sr_ctx);
+ HEVC_SWSR_UE("num_ref_idx_l0_default_active_minus1",
+ (unsigned int *)&pps->num_ref_idx_l0_default_active_minus1, sr_ctx);
+ HEVC_RANGEUCHECK("num_ref_idx_l0_default_active_minus1",
+ pps->num_ref_idx_l0_default_active_minus1, 0, 14, &parse_err);
+ HEVC_SWSR_UE("num_ref_idx_l1_default_active_minus1",
+ (unsigned int *)&pps->num_ref_idx_l1_default_active_minus1, sr_ctx);
+ HEVC_RANGEUCHECK("num_ref_idx_l1_default_active_minus1",
+ pps->num_ref_idx_l1_default_active_minus1, 0, 14, &parse_err);
+ HEVC_SWSR_SE("init_qp_minus26", (int *)&pps->init_qp_minus26, sr_ctx);
+
+ if (sps)
+ HEVC_RANGESCHECK("init_qp_minus26", pps->init_qp_minus26,
+ -(26 + (6 * sps->bit_depth_luma_minus8)), 25, &parse_err);
+
+ HEVC_SWSR_U1("constrained_intra_pred_flag", &pps->constrained_intra_pred_flag, sr_ctx);
+ HEVC_SWSR_U1("transform_skip_enabled_flag", &pps->transform_skip_enabled_flag, sr_ctx);
+
+ HEVC_SWSR_U1("cu_qp_delta_enabled_flag", &pps->cu_qp_delta_enabled_flag, sr_ctx);
+
+ if (pps->cu_qp_delta_enabled_flag)
+ HEVC_SWSR_UE("diff_cu_qp_delta_depth",
+ (unsigned int *)&pps->diff_cu_qp_delta_depth, sr_ctx);
+
+ HEVC_SWSR_SE("pps_cb_qp_offset", (int *)&pps->pps_cb_qp_offset, sr_ctx);
+ HEVC_RANGESCHECK("pps_cb_qp_offset", pps->pps_cb_qp_offset, -12, 12, &parse_err);
+ HEVC_SWSR_SE("pps_cr_qp_offset", (int *)&pps->pps_cr_qp_offset, sr_ctx);
+ HEVC_RANGESCHECK("pps_cr_qp_offset", pps->pps_cr_qp_offset, -12, 12, &parse_err);
+ HEVC_SWSR_U1("pps_slice_chroma_qp_offsets_present_flag",
+ &pps->pps_slice_chroma_qp_offsets_present_flag, sr_ctx);
+ HEVC_SWSR_U1("weighted_pred_flag", &pps->weighted_pred_flag, sr_ctx);
+ HEVC_SWSR_U1("weighted_bipred_flag", &pps->weighted_bipred_flag, sr_ctx);
+ HEVC_SWSR_U1("transquant_bypass_enabled_flag",
+ &pps->transquant_bypass_enabled_flag, sr_ctx);
+ HEVC_SWSR_U1("tiles_enabled_flag", &pps->tiles_enabled_flag, sr_ctx);
+ HEVC_SWSR_U1("entropy_coding_sync_enabled_flag",
+ &pps->entropy_coding_sync_enabled_flag, sr_ctx);
+
+ if (pps->tiles_enabled_flag) {
+ HEVC_SWSR_UE("num_tile_columns_minus1",
+ (unsigned int *)&pps->num_tile_columns_minus1, sr_ctx);
+ HEVC_RANGEUCHECK("num_tile_columns_minus1", pps->num_tile_columns_minus1, 0,
+ HEVC_MAX_TILE_COLS - 1, &parse_err);
+
+		if (pps->num_tile_columns_minus1 >= HEVC_MAX_TILE_COLS)
+			pps->num_tile_columns_minus1 = HEVC_MAX_TILE_COLS - 1;
+
+ HEVC_SWSR_UE("num_tile_rows_minus1", (unsigned int *)&pps->num_tile_rows_minus1,
+ sr_ctx);
+ HEVC_RANGEUCHECK("num_tile_rows_minus1", pps->num_tile_rows_minus1, 0,
+ HEVC_MAX_TILE_ROWS - 1, &parse_err);
+
+		if (pps->num_tile_rows_minus1 >= HEVC_MAX_TILE_ROWS)
+			pps->num_tile_rows_minus1 = HEVC_MAX_TILE_ROWS - 1;
+
+ HEVC_SWSR_U1("uniform_spacing_flag", &pps->uniform_spacing_flag, sr_ctx);
+
+ if (!pps->uniform_spacing_flag) {
+ unsigned char i = 0;
+
+ for (i = 0; i < pps->num_tile_columns_minus1; ++i)
+ HEVC_SWSR_UE("column_width_minus1",
+ (unsigned int *)&pps->column_width_minus1[i],
+ sr_ctx);
+
+ for (i = 0; i < pps->num_tile_rows_minus1; ++i)
+ HEVC_SWSR_UE("row_height_minus1",
+ (unsigned int *)&pps->row_height_minus1[i],
+ sr_ctx);
+ }
+ HEVC_SWSR_U1("loop_filter_across_tiles_enabled_flag",
+ &pps->loop_filter_across_tiles_enabled_flag, sr_ctx);
+ } else {
+ pps->loop_filter_across_tiles_enabled_flag = 1;
+ }
+
+ HEVC_SWSR_U1("pps_loop_filter_across_slices_enabled_flag",
+ &pps->pps_loop_filter_across_slices_enabled_flag, sr_ctx);
+
+ HEVC_SWSR_U1("deblocking_filter_control_present_flag",
+ &pps->deblocking_filter_control_present_flag, sr_ctx);
+
+ if (pps->deblocking_filter_control_present_flag) {
+ HEVC_SWSR_U1("deblocking_filter_override_enabled_flag",
+ &pps->deblocking_filter_override_enabled_flag, sr_ctx);
+ HEVC_SWSR_U1("pps_deblocking_filter_disabled_flag",
+ &pps->pps_deblocking_filter_disabled_flag, sr_ctx);
+ if (!pps->pps_deblocking_filter_disabled_flag) {
+ HEVC_SWSR_SE("pps_beta_offset_div2", (int *)&pps->pps_beta_offset_div2,
+ sr_ctx);
+ HEVC_RANGESCHECK("pps_beta_offset_div2", pps->pps_beta_offset_div2, -6, 6,
+ &parse_err);
+ HEVC_SWSR_SE("pps_tc_offset_div2", (int *)&pps->pps_tc_offset_div2, sr_ctx);
+ HEVC_RANGESCHECK("pps_tc_offset_div2", pps->pps_tc_offset_div2, -6, 6,
+ &parse_err);
+ }
+ }
+
+ HEVC_SWSR_U1("pps_scaling_list_data_present_flag",
+ &pps->pps_scaling_list_data_present_flag, sr_ctx);
+ if (pps->pps_scaling_list_data_present_flag)
+ parse_err |= bspp_hevc_parse_scalinglistdata(sr_ctx, &pps->scaling_list);
+
+ HEVC_SWSR_U1("lists_modification_present_flag",
+ &pps->lists_modification_present_flag, sr_ctx);
+ HEVC_SWSR_UE("log2_parallel_merge_level_minus2",
+ (unsigned int *)&pps->log2_parallel_merge_level_minus2, sr_ctx);
+ HEVC_SWSR_U1("slice_segment_header_extension_present_flag",
+ &pps->slice_segment_header_extension_present_flag, sr_ctx);
+
+ HEVC_SWSR_U1("pps_extension_present_flag", &pps->pps_extension_present_flag, sr_ctx);
+	if (pps->pps_extension_present_flag && sps &&
+	    bspp_hevc_range_extensions_is_enabled(&sps->profile_tier_level)) {
+ HEVC_SWSR_U1("pps_range_extensions_flag",
+ &pps->pps_range_extensions_flag, sr_ctx);
+ HEVC_SWSR_UN("pps_extension_7bits",
+ (unsigned int *)&pps->pps_extension_7bits, 7, sr_ctx);
+		/*
+		 * Ignore the extension data, but report if any
+		 * non-zero data was found.
+		 */
+ HEVC_UCHECK("pps_extension_7bits", pps->pps_extension_7bits, 0, &parse_err);
+
+		/*
+		 * TODO ?: the newest HEVC spec (10/2014) splits "pps_extension_7bits" into
+		 * pps_multilayer_extension_flag (1) and
+		 * pps_extension_6bits (6)
+		 */
+ if (pps->pps_range_extensions_flag && sps) {
+ parse_err |= bspp_hevc_parsepps_range_extensions
+ (sr_ctx,
+ &pps->range_exts,
+ pps->transform_skip_enabled_flag,
+ sps->log2_diff_max_min_luma_coding_block_size);
+ }
+ }
+
+ /* calculate derived elements */
+ if (pps->tiles_enabled_flag && sps)
+ bspp_hevc_dotilecalculations(sps, pps);
+
+ return parse_err;
+}
+
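+/*
+ * Compute tile column/row boundaries (col_bd/row_bd) and the maximum tile
+ * height in CTBs, for both uniform and explicitly signalled tile spacing.
+ */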
+static void bspp_hevc_dotilecalculations(struct bspp_hevc_sps *sps,
+ struct bspp_hevc_pps *pps)
+{
+ unsigned short colwidth[HEVC_MAX_TILE_COLS];
+ unsigned short rowheight[HEVC_MAX_TILE_ROWS];
+ unsigned char i;
+
+ if (!pps->tiles_enabled_flag) {
+ pps->max_tile_height_in_ctbs_y = sps->pic_height_in_ctbs_y;
+ return;
+ }
+
+ if (pps->uniform_spacing_flag) {
+ for (i = 0; i <= pps->num_tile_columns_minus1; ++i) {
+ colwidth[i] = ((i + 1) * sps->pic_width_in_ctbs_y) /
+ (pps->num_tile_columns_minus1 + 1) -
+ (i * sps->pic_width_in_ctbs_y) /
+ (pps->num_tile_columns_minus1 + 1);
+ }
+
+ for (i = 0; i <= pps->num_tile_rows_minus1; ++i) {
+ rowheight[i] = ((i + 1) * sps->pic_height_in_ctbs_y) /
+ (pps->num_tile_rows_minus1 + 1) -
+ (i * sps->pic_height_in_ctbs_y) /
+ (pps->num_tile_rows_minus1 + 1);
+ }
+
+ pps->max_tile_height_in_ctbs_y = rowheight[0];
+ } else {
+ pps->max_tile_height_in_ctbs_y = 0;
+
+ colwidth[pps->num_tile_columns_minus1] = sps->pic_width_in_ctbs_y;
+ for (i = 0; i <= pps->num_tile_columns_minus1; ++i) {
+ colwidth[i] = pps->column_width_minus1[i] + 1;
+ colwidth[pps->num_tile_columns_minus1] -= colwidth[i];
+ }
+
+ rowheight[pps->num_tile_rows_minus1] = sps->pic_height_in_ctbs_y;
+ for (i = 0; i <= pps->num_tile_rows_minus1; ++i) {
+ rowheight[i] = pps->row_height_minus1[i] + 1;
+ rowheight[pps->num_tile_rows_minus1] -= rowheight[i];
+
+ if (rowheight[i] > pps->max_tile_height_in_ctbs_y)
+ pps->max_tile_height_in_ctbs_y = rowheight[i];
+ }
+
+ if (rowheight[pps->num_tile_rows_minus1] > pps->max_tile_height_in_ctbs_y)
+ pps->max_tile_height_in_ctbs_y =
+ rowheight[pps->num_tile_rows_minus1];
+ }
+
+ for (i = 0; i <= pps->num_tile_columns_minus1; ++i)
+ pps->col_bd[i + 1] = pps->col_bd[i] + colwidth[i];
+
+ for (i = 0; i <= pps->num_tile_rows_minus1; ++i)
+ pps->row_bd[i + 1] = pps->row_bd[i] + rowheight[i];
+}
+
+static enum bspp_error_type bspp_hevc_parse_slicesegmentheader
+ (void *sr_ctx, void *str_res,
+ struct bspp_hevc_slice_segment_header *ssh,
+ unsigned char nalunit_type,
+ struct bspp_vps_info **vpsinfo,
+ struct bspp_sequence_hdr_info **spsinfo,
+ struct bspp_pps_info **ppsinfo)
+{
+ enum bspp_error_type parse_err = BSPP_ERROR_NONE;
+ struct bspp_hevc_pps *pps = NULL;
+ struct bspp_hevc_sps *sps = NULL;
+ struct bspp_hevc_vps *vps = NULL;
+
+ VDEC_ASSERT(sr_ctx);
+ VDEC_ASSERT(ssh);
+ VDEC_ASSERT(vpsinfo);
+ VDEC_ASSERT(spsinfo);
+ VDEC_ASSERT(ppsinfo);
+
+ memset(ssh, 0, sizeof(struct bspp_hevc_slice_segment_header));
+
+ HEVC_SWSR_U1("first_slice_segment_in_pic_flag",
+ &ssh->first_slice_segment_in_pic_flag, sr_ctx);
+
+ if (bspp_hevc_picture_is_irap((enum hevc_nalunittype)nalunit_type))
+ HEVC_SWSR_U1("no_output_of_prior_pics_flag",
+ &ssh->no_output_of_prior_pics_flag, sr_ctx);
+
+ HEVC_SWSR_UE("slice_pic_parameter_set_id", (unsigned int *)&ssh->slice_pic_parameter_set_id,
+ sr_ctx);
+ HEVC_RANGEUCHECK("slice_pic_parameter_set_id", ssh->slice_pic_parameter_set_id, 0,
+ HEVC_MAX_PPS_COUNT - 1, &parse_err);
+
+ if (ssh->slice_pic_parameter_set_id >= HEVC_MAX_PPS_COUNT) {
+		pr_warn("PPS Id invalid (%u), setting to 0\n",
+			ssh->slice_pic_parameter_set_id);
+ ssh->slice_pic_parameter_set_id = 0;
+ parse_err &= ~BSPP_ERROR_INVALID_VALUE;
+ parse_err |= BSPP_ERROR_CORRECTION_VALIDVALUE;
+ }
+
+ /* set PPS */
+ *ppsinfo = bspp_get_pps_hdr(str_res, ssh->slice_pic_parameter_set_id);
+ if (!(*ppsinfo)) {
+ parse_err |= BSPP_ERROR_NO_PPS;
+ goto error;
+ }
+ pps = (struct bspp_hevc_pps *)(*ppsinfo)->secure_pps_info;
+ if (!pps) {
+ parse_err |= BSPP_ERROR_NO_PPS;
+ goto error;
+ }
+ VDEC_ASSERT(pps->pps_pic_parameter_set_id == ssh->slice_pic_parameter_set_id);
+
+ *spsinfo = bspp_get_sequ_hdr(str_res, pps->pps_seq_parameter_set_id);
+ if (!(*spsinfo)) {
+ parse_err |= BSPP_ERROR_NO_SEQUENCE_HDR;
+ goto error;
+ }
+ sps = (struct bspp_hevc_sps *)(*spsinfo)->secure_sequence_info;
+ VDEC_ASSERT(sps->sps_seq_parameter_set_id == pps->pps_seq_parameter_set_id);
+
+ *vpsinfo = bspp_get_vpshdr(str_res, sps->sps_video_parameter_set_id);
+ if (!(*vpsinfo)) {
+ parse_err |= BSPP_ERROR_NO_VPS;
+ goto error;
+ }
+ vps = (struct bspp_hevc_vps *)(*vpsinfo)->secure_vpsinfo;
+ VDEC_ASSERT(vps->vps_video_parameter_set_id == sps->sps_video_parameter_set_id);
+
+ if (!ssh->first_slice_segment_in_pic_flag) {
+ if (pps->dependent_slice_segments_enabled_flag)
+ HEVC_SWSR_U1("dependent_slice_segment_flag",
+ &ssh->dependent_slice_segment_flag, sr_ctx);
+
+ HEVC_SWSR_UN("slice_segment_address",
+ (unsigned int *)&ssh->slice_segment_address,
+ bspp_ceil_log2(sps->pic_size_in_ctbs_y), sr_ctx);
+ }
+
+error:
+ return parse_err;
+}
+
+static enum bspp_error_type bspp_hevc_parse_profiletierlevel
+ (void *sr_ctx,
+ struct bspp_hevc_profile_tierlevel *ptl,
+ unsigned char vps_maxsublayers_minus1)
+{
+ enum bspp_error_type parse_err = BSPP_ERROR_NONE;
+ unsigned char i, j;
+ unsigned int res = 0;
+
+ VDEC_ASSERT(sr_ctx);
+ VDEC_ASSERT(ptl);
+ VDEC_ASSERT(vps_maxsublayers_minus1 < HEVC_MAX_NUM_SUBLAYERS);
+
+ memset(ptl, 0, sizeof(struct bspp_hevc_profile_tierlevel));
+
+ HEVC_SWSR_UN("general_profile_space", (unsigned int *)&ptl->general_profile_space, 2,
+ sr_ctx);
+ HEVC_SWSR_U1("general_tier_flag", &ptl->general_tier_flag, sr_ctx);
+ HEVC_SWSR_UN("general_profile_idc", (unsigned int *)&ptl->general_profile_idc, 5, sr_ctx);
+
+ for (j = 0; j < HEVC_MAX_NUM_PROFILE_IDC; ++j) {
+ HEVC_SWSR_U1("general_profile_compatibility_flag",
+ &ptl->general_profile_compatibility_flag[j],
+ sr_ctx);
+ }
+
+ HEVC_SWSR_U1("general_progressive_source_flag",
+ &ptl->general_progressive_source_flag, sr_ctx);
+ HEVC_SWSR_U1("general_interlaced_source_flag",
+ &ptl->general_interlaced_source_flag, sr_ctx);
+ HEVC_SWSR_U1("general_non_packed_constraint_flag",
+ &ptl->general_non_packed_constraint_flag, sr_ctx);
+ HEVC_SWSR_U1("general_frame_only_constraint_flag",
+ &ptl->general_frame_only_constraint_flag, sr_ctx);
+
+ if (ptl->general_profile_idc == 4 ||
+ ptl->general_profile_compatibility_flag[4]) {
+ HEVC_SWSR_U1("general_max_12bit_constraint_flag",
+ &ptl->general_max_12bit_constraint_flag, sr_ctx);
+ HEVC_SWSR_U1("general_max_10bit_constraint_flag",
+ &ptl->general_max_10bit_constraint_flag, sr_ctx);
+ HEVC_SWSR_U1("general_max_8bit_constraint_flag",
+ &ptl->general_max_8bit_constraint_flag, sr_ctx);
+ HEVC_SWSR_U1("general_max_422chroma_constraint_flag",
+ &ptl->general_max_422chroma_constraint_flag,
+ sr_ctx);
+ HEVC_SWSR_U1("general_max_420chroma_constraint_flag",
+ &ptl->general_max_420chroma_constraint_flag,
+ sr_ctx);
+ HEVC_SWSR_U1("general_max_monochrome_constraint_flag",
+ &ptl->general_max_monochrome_constraint_flag,
+ sr_ctx);
+ HEVC_SWSR_U1("general_intra_constraint_flag",
+ &ptl->general_intra_constraint_flag, sr_ctx);
+ HEVC_SWSR_U1("general_one_picture_only_constraint_flag",
+ &ptl->general_one_picture_only_constraint_flag,
+ sr_ctx);
+ HEVC_SWSR_U1("general_lower_bit_rate_constraint_flag",
+ &ptl->general_lower_bit_rate_constraint_flag,
+ sr_ctx);
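+		/*
+		 * the 35 reserved bits are consumed in two reads
+		 * (32 + 3) and each part is checked to be zero
+		 */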
+ HEVC_SWSR_UN("general_reserved_zero_35bits", &res, 32, sr_ctx);
+ HEVC_UCHECK("general_reserved_zero_35bits", res, 0, &parse_err);
+ HEVC_SWSR_UN("general_reserved_zero_35bits", &res, 3, sr_ctx);
+ HEVC_UCHECK("general_reserved_zero_35bits", res, 0, &parse_err);
+ } else {
+ HEVC_SWSR_UN("general_reserved_zero_44bits (1)", &res, 32, sr_ctx);
+ HEVC_UCHECK("general_reserved_zero_44bits (1)", res, 0, &parse_err);
+ HEVC_SWSR_UN("general_reserved_zero_44bits (2)", &res, 12, sr_ctx);
+ HEVC_UCHECK("general_reserved_zero_44bits (2)", res, 0, &parse_err);
+ }
+
+ HEVC_SWSR_UN("general_level_idc", (unsigned int *)&ptl->general_level_idc, 8, sr_ctx);
+ HEVC_RANGEUCHECK("general_level_idc", ptl->general_level_idc,
+ HEVC_LEVEL_IDC_MIN, HEVC_LEVEL_IDC_MAX, &parse_err);
+
+ for (i = 0; i < vps_maxsublayers_minus1; ++i) {
+ HEVC_SWSR_U1("sub_layer_profile_present_flag",
+ &ptl->sub_layer_profile_present_flag[i], sr_ctx);
+ HEVC_SWSR_U1("sub_layer_level_present_flag",
+ &ptl->sub_layer_level_present_flag[i], sr_ctx);
+ }
+
+ if (vps_maxsublayers_minus1 > 0) {
+ for (i = vps_maxsublayers_minus1; i < 8; ++i) {
+ HEVC_SWSR_UN("reserved_zero_2bits", &res, 2, sr_ctx);
+ HEVC_UCHECK("reserved_zero_2bits", res, 0, &parse_err);
+ }
+ }
+
+ for (i = 0; i < vps_maxsublayers_minus1; ++i) {
+ if (ptl->sub_layer_profile_present_flag[i]) {
+ HEVC_SWSR_UN("sub_layer_profile_space",
+ (unsigned int *)&ptl->sub_layer_profile_space[i], 2, sr_ctx);
+ HEVC_SWSR_U1("sub_layer_tier_flag", &ptl->sub_layer_tier_flag[i], sr_ctx);
+ HEVC_SWSR_UN("sub_layer_profile_idc",
+ (unsigned int *)&ptl->sub_layer_profile_idc[i], 5, sr_ctx);
+ for (j = 0; j < HEVC_MAX_NUM_PROFILE_IDC; ++j)
+ HEVC_SWSR_U1("sub_layer_profile_compatibility_flag",
+ &ptl->sub_layer_profile_compatibility_flag[i][j],
+ sr_ctx);
+
+ HEVC_SWSR_U1("sub_layer_progressive_source_flag",
+ &ptl->sub_layer_progressive_source_flag[i],
+ sr_ctx);
+ HEVC_SWSR_U1("sub_layer_interlaced_source_flag",
+ &ptl->sub_layer_interlaced_source_flag[i],
+ sr_ctx);
+ HEVC_SWSR_U1("sub_layer_non_packed_constraint_flag",
+ &ptl->sub_layer_non_packed_constraint_flag[i],
+ sr_ctx);
+ HEVC_SWSR_U1("sub_layer_frame_only_constraint_flag",
+ &ptl->sub_layer_frame_only_constraint_flag[i],
+ sr_ctx);
+
+ if (ptl->sub_layer_profile_idc[i] == 4 ||
+ ptl->sub_layer_profile_compatibility_flag[i][4]) {
+ HEVC_SWSR_U1("sub_layer_max_12bit_constraint_flag",
+ &ptl->sub_layer_max_12bit_constraint_flag[i],
+ sr_ctx);
+ HEVC_SWSR_U1("sub_layer_max_10bit_constraint_flag",
+ &ptl->sub_layer_max_10bit_constraint_flag[i],
+ sr_ctx);
+ HEVC_SWSR_U1("sub_layer_max_8bit_constraint_flag",
+ &ptl->sub_layer_max_8bit_constraint_flag[i],
+ sr_ctx);
+ HEVC_SWSR_U1("sub_layer_max_422chroma_constraint_flag",
+ &ptl->sub_layer_max_422chroma_constraint_flag[i],
+ sr_ctx);
+ HEVC_SWSR_U1("sub_layer_max_420chroma_constraint_flag",
+ &ptl->sub_layer_max_420chroma_constraint_flag[i],
+ sr_ctx);
+ HEVC_SWSR_U1("sub_layer_max_monochrome_constraint_flag",
+ &ptl->sub_layer_max_monochrome_constraint_flag[i],
+ sr_ctx);
+ HEVC_SWSR_U1("sub_layer_intra_constraint_flag",
+ &ptl->sub_layer_intra_constraint_flag[i],
+ sr_ctx);
+ HEVC_SWSR_U1("sub_layer_one_picture_only_constraint_flag",
+ &ptl->sub_layer_one_picture_only_constraint_flag[i],
+ sr_ctx);
+ HEVC_SWSR_U1("sub_layer_lower_bit_rate_constraint_flag",
+ &ptl->sub_layer_lower_bit_rate_constraint_flag[i],
+ sr_ctx);
+ HEVC_SWSR_UN("sub_layer_reserved_zero_35bits",
+ &res, 32, sr_ctx);
+ HEVC_UCHECK("sub_layer_reserved_zero_35bits",
+ res, 0, &parse_err);
+ HEVC_SWSR_UN("sub_layer_reserved_zero_35bits",
+ &res, 3, sr_ctx);
+ HEVC_UCHECK("sub_layer_reserved_zero_35bits",
+ res, 0, &parse_err);
+ } else {
+ HEVC_SWSR_UN("sub_layer_reserved_zero_44bits (1)",
+ &res, 32, sr_ctx);
+ HEVC_UCHECK("sub_layer_reserved_zero_44bits (1)",
+ res, 0, &parse_err);
+ HEVC_SWSR_UN("sub_layer_reserved_zero_44bits (2)",
+ &res, 12, sr_ctx);
+ HEVC_UCHECK("sub_layer_reserved_zero_44bits (2)",
+ res, 0, &parse_err);
+ }
+ }
+ if (ptl->sub_layer_level_present_flag[i])
+ HEVC_SWSR_UN("sub_layer_level_idc",
+ (unsigned int *)&ptl->sub_layer_level_idc[i], 8, sr_ctx);
+ }
+ return parse_err;
+}
+
+/* Default scaling lists */
+#define HEVC_SCALING_LIST_0_SIZE 16
+#define HEVC_SCALING_LIST_123_SIZE 64
+
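+/*
+ * These values match the HEVC default scaling list tables; the 16x16 and
+ * 32x32 sizes reuse the 8x8 intra/inter defaults.
+ */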
+static const unsigned char def_4x4[HEVC_SCALING_LIST_0_SIZE] = {
+ 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16
+};
+
+static const unsigned char def_8x8_intra[HEVC_SCALING_LIST_123_SIZE] = {
+ 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 17, 16, 17, 16, 17, 18,
+ 17, 18, 18, 17, 18, 21, 19, 20, 21, 20, 19, 21, 24, 22, 22, 24,
+ 24, 22, 22, 24, 25, 25, 27, 30, 27, 25, 25, 29, 31, 35, 35, 31,
+ 29, 36, 41, 44, 41, 36, 47, 54, 54, 47, 65, 70, 65, 88, 88, 115
+};
+
+static const unsigned char def_8x8_inter[HEVC_SCALING_LIST_123_SIZE] = {
+ 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 17, 17, 17, 17, 17, 18,
+ 18, 18, 18, 18, 18, 20, 20, 20, 20, 20, 20, 20, 24, 24, 24, 24,
+ 24, 24, 24, 24, 25, 25, 25, 25, 25, 25, 25, 28, 28, 28, 28, 28,
+ 28, 33, 33, 33, 33, 33, 41, 41, 41, 41, 54, 54, 54, 71, 71, 91
+};
+
+/*
+ * Scan order mapping when translating scaling lists from bitstream order
+ * to PVDEC order
+ */
+static const unsigned char HEVC_INV_ZZ_SCAN4[HEVC_SCALING_LIST_MATRIX_SIZE / 4] = {
+ 0, 1, 2, 4, 3, 6, 7, 10, 5, 8, 9, 12, 11, 13, 14, 15
+};
+
+static const unsigned char HEVC_INV_ZZ_SCAN8[HEVC_SCALING_LIST_MATRIX_SIZE] = {
+ 0, 1, 2, 4, 3, 6, 7, 11, 5, 8, 9, 13, 12, 17, 18, 24,
+ 10, 15, 16, 22, 21, 28, 29, 36, 23, 30, 31, 38, 37, 43, 44, 49,
+ 14, 19, 20, 26, 25, 32, 33, 40, 27, 34, 35, 42, 41, 47, 48, 53,
+ 39, 45, 46, 51, 50, 54, 55, 58, 52, 56, 57, 60, 59, 61, 62, 63
+};
+
+static void bspp_hevc_getdefault_scalinglist
+ (unsigned char size_id, unsigned char matrix_id,
+ const unsigned char **default_scalinglist,
+ unsigned int *size)
+{
+ static const unsigned char *defaultlists
+ [HEVC_SCALING_LIST_NUM_SIZES][HEVC_SCALING_LIST_NUM_MATRICES] = {
+ { def_4x4, def_4x4, def_4x4, def_4x4, def_4x4, def_4x4 },
+ { def_8x8_intra, def_8x8_intra, def_8x8_intra,
+ def_8x8_inter, def_8x8_inter, def_8x8_inter },
+ { def_8x8_intra, def_8x8_intra, def_8x8_intra,
+ def_8x8_inter, def_8x8_inter, def_8x8_inter },
+ { def_8x8_intra, def_8x8_inter, NULL, NULL, NULL, NULL }
+ };
+
+ static const unsigned int lists_sizes
+ [HEVC_SCALING_LIST_NUM_SIZES][HEVC_SCALING_LIST_NUM_MATRICES] = {
+ { sizeof(def_4x4), sizeof(def_4x4), sizeof(def_4x4),
+ sizeof(def_4x4), sizeof(def_4x4), sizeof(def_4x4) },
+ { sizeof(def_8x8_intra), sizeof(def_8x8_intra),
+ sizeof(def_8x8_intra), sizeof(def_8x8_inter),
+ sizeof(def_8x8_inter), sizeof(def_8x8_inter) },
+ { sizeof(def_8x8_intra), sizeof(def_8x8_intra),
+ sizeof(def_8x8_intra), sizeof(def_8x8_inter),
+ sizeof(def_8x8_inter), sizeof(def_8x8_inter) },
+ { sizeof(def_8x8_intra), sizeof(def_8x8_inter), 0, 0, 0, 0 }
+ };
+
+	/* assert that the input to this function is valid */
+ VDEC_ASSERT(size_id < 4);
+ VDEC_ASSERT(size_id < 3 ? (matrix_id < 6) : (matrix_id < 2));
+
+ *default_scalinglist = defaultlists[size_id][matrix_id];
+ *size = lists_sizes[size_id][matrix_id];
+}
+
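+/*
+ * Parse scaling_list_data(): each list is either predicted (copied from the
+ * default list or from a previously decoded list selected by
+ * scaling_list_pred_matrix_id_delta) or signalled explicitly as a DC
+ * coefficient plus delta-coded coefficients.
+ */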
+static enum bspp_error_type bspp_hevc_parse_scalinglistdata
+ (void *sr_ctx,
+ struct bspp_hevc_scalinglist_data *scaling_listdata)
+{
+ enum bspp_error_type parse_err = BSPP_ERROR_NONE;
+ unsigned char size_id, matrix_id;
+
+ for (size_id = 0; size_id < HEVC_SCALING_LIST_NUM_SIZES; ++size_id) {
+ for (matrix_id = 0; matrix_id < ((size_id == 3) ? 2 : 6);
+ ++matrix_id) {
+ /*
+ * Select scaling list on which we will operate in
+ * the iteration
+ */
+ unsigned char *scalinglist = scaling_listdata->lists[size_id][matrix_id];
+
+ unsigned char scaling_list_pred_mode_flag = 0;
+
+ HEVC_SWSR_U1("scaling_list_pred_mode_flag",
+ &scaling_list_pred_mode_flag, sr_ctx);
+ if (!scaling_list_pred_mode_flag) {
+ unsigned char scaling_list_pred_matrix_id_delta = 0;
+ const unsigned char *defaultlist = NULL;
+ unsigned int listsize = 0;
+
+ HEVC_SWSR_UE("scaling_list_pred_matrixid_delta",
+ (unsigned int *)&scaling_list_pred_matrix_id_delta,
+ sr_ctx);
+
+ bspp_hevc_getdefault_scalinglist(size_id,
+ matrix_id,
+ &defaultlist,
+ &listsize);
+
+ if (scaling_list_pred_matrix_id_delta == 0) {
+ /* use default one */
+ memcpy(scalinglist, defaultlist, listsize);
+ if (size_id > 1)
+ scaling_listdata->dccoeffs[size_id -
+ 2][matrix_id] = 8 + 8;
+ } else {
+ unsigned char ref_matrix_id =
+ matrix_id - scaling_list_pred_matrix_id_delta;
+ unsigned char *refscalinglist =
+ scaling_listdata->lists[size_id][ref_matrix_id];
+ /*
+ * use reference list given by
+ * scaling_list_pred_matrix_id_delta
+ */
+ memcpy(scalinglist, refscalinglist, listsize);
+ if (size_id > 1)
+ scaling_listdata->dccoeffs[size_id - 2][matrix_id] =
+ scaling_listdata->dccoeffs[size_id -
+ 2][ref_matrix_id];
+ }
+ } else {
+ /*
+ * scaling list coefficients
+ * signalled explicitly
+ */
+ static const short coef_startvalue = 8;
+ static const unsigned char matrix_max_coef_num = 64;
+
+ short next_coef = coef_startvalue;
+ unsigned char coef_num =
+ HEVC_MIN(matrix_max_coef_num,
+ (1 << (4 + (size_id << 1))), unsigned char);
+
+ unsigned char i;
+
+ if (size_id > 1) {
+ short scaling_list_dc_coef_minus8 = 0;
+
+ HEVC_SWSR_SE("scaling_list_dc_coef_minus8",
+ (int *)&scaling_list_dc_coef_minus8,
+ sr_ctx);
+ HEVC_RANGESCHECK("scaling_list_dc_coef_minus8",
+ scaling_list_dc_coef_minus8,
+ -7, 247, &parse_err);
+
+ next_coef = scaling_list_dc_coef_minus8 + 8;
+ scaling_listdata->dccoeffs[size_id - 2][matrix_id] =
+ (unsigned char)next_coef;
+ }
+ for (i = 0; i < coef_num; ++i) {
+ short scaling_list_delta_coef = 0;
+
+ HEVC_SWSR_SE("scaling_list_delta_coef",
+ (int *)&scaling_list_delta_coef, sr_ctx);
+ HEVC_RANGESCHECK("scaling_list_delta_coef",
+ scaling_list_delta_coef, -128, 127,
+ &parse_err);
+
+ next_coef = (next_coef + scaling_list_delta_coef + 256) &
+ 0xFF;
+ scalinglist[i] = next_coef;
+ }
+ }
+ }
+ }
+
+#ifdef DEBUG_DECODER_DRIVER
+ /* print calculated scaling lists */
+ for (size_id = 0; size_id < HEVC_SCALING_LIST_NUM_SIZES; ++size_id) {
+ for (matrix_id = 0; matrix_id < ((size_id == 3) ? 2 : 6);
+ ++matrix_id) {
+ unsigned char i = 0;
+ /*
+ * Select scaling list on which we will operate
+ * in the iteration
+ */
+ unsigned char *scalinglist = scaling_listdata->lists[size_id][matrix_id];
+
+ for (; i < ((size_id == 0) ? 16 : 64); ++i) {
+ BSPP_HEVC_SYNTAX("scalinglist[%u][%u][%u] = %u",
+ size_id,
+ matrix_id,
+ i,
+ scalinglist[i]);
+ }
+ }
+ }
+#endif
+
+ return parse_err;
+}
+
+static void
+bspp_hevc_usedefault_scalinglists(struct bspp_hevc_scalinglist_data *scaling_listdata)
+{
+ unsigned char size_id, matrix_id;
+
+ for (size_id = 0; size_id < HEVC_SCALING_LIST_NUM_SIZES; ++size_id) {
+ for (matrix_id = 0; matrix_id < ((size_id == 3) ? 2 : 6);
+ ++matrix_id) {
+ unsigned char *list = scaling_listdata->lists[size_id][matrix_id];
+ const unsigned char *defaultlist = NULL;
+ unsigned int listsize = 0;
+
+ bspp_hevc_getdefault_scalinglist(size_id, matrix_id, &defaultlist,
+ &listsize);
+
+ memcpy(list, defaultlist, listsize);
+ }
+ }
+
+ memset(scaling_listdata->dccoeffs, 8 + 8, sizeof(scaling_listdata->dccoeffs));
+}
+
+static enum bspp_error_type bspp_hevc_parse_shortterm_refpicset
+ (void *sr_ctx,
+ struct bspp_hevc_shortterm_refpicset *st_refpicset,
+ unsigned char st_rps_idx,
+ unsigned char in_slice_header)
+{
+	/*
+	 * Note: unfortunately the short term ref pic set has to be
+	 * "partially decoded" while it is parsed, because its derived
+	 * syntax elements are used for prediction of subsequent
+	 * short term ref pic sets.
+	 */
+ enum bspp_error_type parse_err = BSPP_ERROR_NONE;
+
+ struct bspp_hevc_shortterm_refpicset *strps =
+ &st_refpicset[st_rps_idx];
+ unsigned char inter_ref_pic_set_prediction_flag = 0;
+ unsigned int i = 0;
+
+ memset(strps, 0, sizeof(*strps));
+
+ if (st_rps_idx != 0) {
+ HEVC_SWSR_U1("inter_ref_pic_set_prediction_flag",
+ &inter_ref_pic_set_prediction_flag, sr_ctx);
+ }
+
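+	/*
+	 * derive the current set from the reference set (st_rps_idx -
+	 * (delta_idx_minus1 + 1)) by applying delta_rps to its delta POC
+	 * values and keeping only the entries marked by use_delta_flag
+	 */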
+ if (inter_ref_pic_set_prediction_flag) {
+ signed char j = 0;
+ unsigned char j_8 = 0;
+ unsigned char ref_rps_idx = 0;
+ int delta_rps = 0;
+ unsigned char i = 0;
+ unsigned char delta_idx_minus1 = 0;
+ unsigned char delta_rps_sign = 0;
+ unsigned short abs_delta_rps_minus1 = 0;
+ unsigned char used_by_curr_pic_flag[HEVC_MAX_NUM_REF_PICS];
+ unsigned char use_delta_flag[HEVC_MAX_NUM_REF_PICS];
+
+ struct bspp_hevc_shortterm_refpicset *ref_strps = NULL;
+
+ if (in_slice_header) {
+ HEVC_SWSR_UE("delta_idx_minus1", (unsigned int *)&delta_idx_minus1, sr_ctx);
+ HEVC_RANGEUCHECK("delta_idx_minus1", delta_idx_minus1, 0, st_rps_idx - 1,
+ &parse_err);
+ }
+
+ HEVC_SWSR_U1("delta_rps_sign", &delta_rps_sign, sr_ctx);
+ HEVC_SWSR_UE("abs_delta_rps_minus1", (unsigned int *)&abs_delta_rps_minus1, sr_ctx);
+ HEVC_RANGEUCHECK("abs_delta_rps_minus1", abs_delta_rps_minus1, 0, ((1 << 15) - 1),
+ &parse_err);
+
+ ref_rps_idx = st_rps_idx - (delta_idx_minus1 + 1);
+ ref_strps = &st_refpicset[ref_rps_idx];
+
+ memset(use_delta_flag, 1, sizeof(use_delta_flag));
+
+ for (j_8 = 0; j_8 <= ref_strps->num_delta_pocs; ++j_8) {
+ HEVC_SWSR_U1("used_by_curr_pic_flag", &used_by_curr_pic_flag[j_8], sr_ctx);
+ if (!used_by_curr_pic_flag[j_8])
+ HEVC_SWSR_U1("use_delta_flag", &use_delta_flag[j_8], sr_ctx);
+ }
+
+ delta_rps =
+ (1 - 2 * delta_rps_sign) * (abs_delta_rps_minus1 + 1);
+
+ /*
+ * predict delta POC values of current strps from
+ * reference strps
+ */
+ for (j = ref_strps->num_positive_pics - 1; j >= 0; --j) {
+ int dpoc = ref_strps->delta_poc_s1[j] + delta_rps;
+
+ if (dpoc < 0 && use_delta_flag[ref_strps->num_negative_pics + j]) {
+ strps->delta_poc_s0[i] = dpoc;
+ strps->used_bycurr_pic_s0[i++] =
+ used_by_curr_pic_flag[ref_strps->num_negative_pics + j];
+ }
+ }
+
+ if (delta_rps < 0 && use_delta_flag[ref_strps->num_delta_pocs]) {
+ strps->delta_poc_s0[i] = delta_rps;
+ strps->used_bycurr_pic_s0[i++] =
+ used_by_curr_pic_flag[ref_strps->num_delta_pocs];
+ }
+
+ for (j_8 = 0; j_8 < ref_strps->num_negative_pics; ++j_8) {
+ int dpoc = ref_strps->delta_poc_s0[j_8] + delta_rps;
+
+ if (dpoc < 0 && use_delta_flag[j_8]) {
+ strps->delta_poc_s0[i] = dpoc;
+ strps->used_bycurr_pic_s0[i++] = used_by_curr_pic_flag[j_8];
+ }
+ }
+
+ strps->num_negative_pics = i;
+
+ i = 0;
+ for (j = ref_strps->num_negative_pics - 1; j >= 0; --j) {
+ int dpoc = ref_strps->delta_poc_s0[j] + delta_rps;
+
+ if (dpoc > 0 && use_delta_flag[j]) {
+ strps->delta_poc_s1[i] = dpoc;
+ strps->used_bycurr_pic_s1[i++] =
+ used_by_curr_pic_flag[j];
+ }
+ }
+
+ if (delta_rps > 0 && use_delta_flag[ref_strps->num_delta_pocs]) {
+ strps->delta_poc_s1[i] = delta_rps;
+ strps->used_bycurr_pic_s1[i++] =
+ used_by_curr_pic_flag[ref_strps->num_delta_pocs];
+ }
+
+ for (j_8 = 0; j_8 < ref_strps->num_positive_pics; ++j_8) {
+ int dpoc = ref_strps->delta_poc_s1[j_8] + delta_rps;
+
+ if (dpoc > 0 && use_delta_flag[ref_strps->num_negative_pics + j_8]) {
+ strps->delta_poc_s1[i] = dpoc;
+ strps->used_bycurr_pic_s1[i++] =
+ used_by_curr_pic_flag[ref_strps->num_negative_pics + j_8];
+ }
+ }
+
+ strps->num_positive_pics = i;
+ strps->num_delta_pocs = strps->num_negative_pics + strps->num_positive_pics;
+ if (strps->num_delta_pocs > (HEVC_MAX_NUM_REF_PICS - 1)) {
+ strps->num_delta_pocs = HEVC_MAX_NUM_REF_PICS - 1;
+ parse_err |= BSPP_ERROR_CORRECTION_VALIDVALUE;
+ }
+ } else {
+ unsigned char num_negative_pics = 0;
+ unsigned char num_positive_pics = 0;
+ unsigned short delta_poc_s0_minus1[HEVC_MAX_NUM_REF_PICS];
+ unsigned char used_by_curr_pic_s0_flag[HEVC_MAX_NUM_REF_PICS];
+ unsigned short delta_poc_s1_minus1[HEVC_MAX_NUM_REF_PICS];
+ unsigned char used_by_curr_pic_s1_flag[HEVC_MAX_NUM_REF_PICS];
+ unsigned char j = 0;
+
+ HEVC_SWSR_UE("num_negative_pics", (unsigned int *)&num_negative_pics, sr_ctx);
+ if (num_negative_pics > HEVC_MAX_NUM_REF_PICS) {
+ num_negative_pics = HEVC_MAX_NUM_REF_PICS;
+ parse_err |= BSPP_ERROR_CORRECTION_VALIDVALUE;
+ }
+ HEVC_SWSR_UE("num_positive_pics", (unsigned int *)&num_positive_pics, sr_ctx);
+ if (num_positive_pics > HEVC_MAX_NUM_REF_PICS) {
+ num_positive_pics = HEVC_MAX_NUM_REF_PICS;
+ parse_err |= BSPP_ERROR_CORRECTION_VALIDVALUE;
+ }
+
+ for (j = 0; j < num_negative_pics; ++j) {
+ HEVC_SWSR_UE("delta_poc_s0_minus1",
+ (unsigned int *)&delta_poc_s0_minus1[j], sr_ctx);
+ HEVC_RANGEUCHECK("delta_poc_s0_minus1", delta_poc_s0_minus1[j], 0,
+ ((1 << 15) - 1), &parse_err);
+ HEVC_SWSR_U1("used_by_curr_pic_s0_flag",
+ &used_by_curr_pic_s0_flag[j], sr_ctx);
+
+ if (j == 0)
+ strps->delta_poc_s0[j] =
+ -(delta_poc_s0_minus1[j] + 1);
+ else
+ strps->delta_poc_s0[j] = strps->delta_poc_s0[j - 1] -
+ (delta_poc_s0_minus1[j] + 1);
+
+ strps->used_bycurr_pic_s0[j] = used_by_curr_pic_s0_flag[j];
+ }
+
+ for (j = 0; j < num_positive_pics; j++) {
+ HEVC_SWSR_UE("delta_poc_s1_minus1",
+ (unsigned int *)&delta_poc_s1_minus1[j], sr_ctx);
+ HEVC_RANGEUCHECK("delta_poc_s1_minus1", delta_poc_s1_minus1[j], 0,
+ ((1 << 15) - 1), &parse_err);
+ HEVC_SWSR_U1("used_by_curr_pic_s1_flag",
+ &used_by_curr_pic_s1_flag[j], sr_ctx);
+
+ if (j == 0)
+ strps->delta_poc_s1[j] =
+ (delta_poc_s1_minus1[j] + 1);
+ else
+ strps->delta_poc_s1[j] = strps->delta_poc_s1[j - 1] +
+ (delta_poc_s1_minus1[j] + 1);
+ strps->used_bycurr_pic_s1[j] = used_by_curr_pic_s1_flag[j];
+ }
+
+ strps->num_negative_pics = num_negative_pics;
+ strps->num_positive_pics = num_positive_pics;
+ strps->num_delta_pocs = strps->num_negative_pics + strps->num_positive_pics;
+ if (strps->num_delta_pocs > (HEVC_MAX_NUM_REF_PICS - 1)) {
+ strps->num_delta_pocs = HEVC_MAX_NUM_REF_PICS - 1;
+ parse_err |= BSPP_ERROR_CORRECTION_VALIDVALUE;
+ }
+ }
+
+ BSPP_HEVC_SYNTAX
+ ("strps[%u]: num_delta_pocs: %u (%u (num_negative_pics) + %u (num_positive_pics))",
+ st_rps_idx, strps->num_delta_pocs, strps->num_negative_pics,
+ strps->num_positive_pics);
+
+ for (i = 0; i < strps->num_negative_pics; ++i) {
+ BSPP_HEVC_SYNTAX("StRps[%u][%u]: delta_poc_s0: %d, used_bycurr_pic_s0: %u",
+ st_rps_idx, i, strps->delta_poc_s0[i],
+ strps->used_bycurr_pic_s0[i]);
+ }
+
+ for (i = 0; i < strps->num_positive_pics; ++i) {
+ BSPP_HEVC_SYNTAX("StRps[%u][%u]: delta_poc_s1: %d, used_bycurr_pic_s1: %u",
+ st_rps_idx, i, strps->delta_poc_s1[i],
+ strps->used_bycurr_pic_s1[i]);
+ }
+
+ return parse_err;
+}
+
+static void bspp_hevc_fillcommonseqhdr(struct bspp_hevc_sps *sps,
+ struct vdec_comsequ_hdrinfo *common_seq)
+{
+ struct bspp_hevc_vui_params *vui = &sps->vui_params;
+ unsigned char chroma_idc = sps->chroma_format_idc;
+ struct pixel_pixinfo *pixel_info = &common_seq->pixel_info;
+ unsigned int maxsub_layersmin1;
+ unsigned int maxdpb_size;
+ struct vdec_rect *rawdisp_region;
+
+ common_seq->codec_profile = sps->profile_tier_level.general_profile_idc;
+ common_seq->codec_level = sps->profile_tier_level.general_level_idc;
+
+ if (sps->vui_parameters_present_flag &&
+ vui->vui_timing_info_present_flag) {
+ common_seq->frame_rate_num = vui->vui_time_scale;
+ common_seq->frame_rate_den = vui->vui_num_units_in_tick;
+ common_seq->frame_rate =
+ 1 * common_seq->frame_rate_num / common_seq->frame_rate_den;
+ }
+
+ if (vui->aspect_ratio_info_present_flag) {
+ common_seq->aspect_ratio_num = vui->sar_width;
+ common_seq->aspect_ratio_den = vui->sar_height;
+ }
+
+ common_seq->interlaced_frames = 0;
+
+ /* handle pixel format definitions */
+ pixel_info->chroma_fmt = chroma_idc == 0 ? 0 : 1;
+ pixel_info->chroma_fmt_idc = pixelformat_idc[chroma_idc];
+ pixel_info->chroma_interleave =
+ chroma_idc == 0 ? PIXEL_INVALID_CI : PIXEL_UV_ORDER;
+ pixel_info->bitdepth_y = sps->bit_depth_luma_minus8 + 8;
+ pixel_info->bitdepth_c = sps->bit_depth_chroma_minus8 + 8;
+
+ pixel_info->mem_pkg = (pixel_info->bitdepth_y > 8 ||
+ (pixel_info->bitdepth_c > 8 && pixel_info->chroma_fmt)) ?
+ PIXEL_BIT10_MSB_MP : PIXEL_BIT8_MP;
+ pixel_info->num_planes =
+ chroma_idc == 0 ? 1 : (sps->separate_colour_plane_flag ? 3 : 2);
+
+ pixel_info->pixfmt = pixel_get_pixfmt(pixel_info->chroma_fmt_idc,
+ pixel_info->chroma_interleave,
+ pixel_info->mem_pkg,
+ pixel_info->bitdepth_y,
+ pixel_info->chroma_fmt ?
+ pixel_info->bitdepth_c : PIXEL_INVALID_BDC,
+ pixel_info->num_planes);
+
+ common_seq->max_frame_size.width = sps->pic_width_in_ctbs_y * sps->ctb_size_y;
+ common_seq->max_frame_size.height = sps->pic_height_in_ctbs_y * sps->ctb_size_y;
+
+ common_seq->frame_size.width = sps->pic_width_in_luma_samples;
+ common_seq->frame_size.height = sps->pic_height_in_luma_samples;
+
+ /* Get HEVC max num ref pictures and pass to bspp info*/
+ vdecddutils_ref_pic_hevc_get_maxnum(common_seq, &common_seq->max_ref_frame_num);
+
+ common_seq->field_codec_mblocks = 0;
+
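+	/*
+	 * DPB size: max of sps_max_dec_pic_buffering and
+	 * sps_max_num_reorder_pics (and max latency pictures, if signalled),
+	 * capped at HEVC_MAX_NUM_REF_IDX_ACTIVE + 1; at least 6 buffers
+	 * are requested.
+	 */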
+ maxsub_layersmin1 = sps->sps_max_sub_layers_minus1;
+ maxdpb_size =
+ HEVC_MAX(sps->sps_max_dec_pic_buffering_minus1[maxsub_layersmin1] + 1,
+ sps->sps_max_num_reorder_pics[maxsub_layersmin1], unsigned char);
+
+ if (sps->sps_max_latency_increase_plus1[maxsub_layersmin1]) {
+ maxdpb_size =
+ HEVC_MAX(maxdpb_size,
+ sps->sps_max_latency_pictures[maxsub_layersmin1], unsigned int);
+ }
+
+ maxdpb_size = HEVC_MIN(maxdpb_size,
+ HEVC_MAX_NUM_REF_IDX_ACTIVE + 1, unsigned int);
+
+ common_seq->min_pict_buf_num = HEVC_MAX(maxdpb_size, 6, unsigned int);
+
+ common_seq->picture_reordering = 1;
+ common_seq->post_processing = 0;
+
+ /* handle display region calculation */
+ rawdisp_region = &common_seq->raw_display_region;
+
+ rawdisp_region->width = sps->pic_width_in_luma_samples;
+ rawdisp_region->height = sps->pic_height_in_luma_samples;
+ rawdisp_region->top_offset = 0;
+ rawdisp_region->left_offset = 0;
+
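+	/*
+	 * when a conformance window is present, derive the cropped display
+	 * region in luma samples by scaling the conf_win offsets by
+	 * SubWidthC/SubHeightC
+	 */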
+ if (sps->conformance_window_flag) {
+ struct vdec_rect *disp_region =
+ &common_seq->orig_display_region;
+
+ disp_region->top_offset =
+ sps->sub_height_c * sps->conf_win_top_offset;
+ disp_region->left_offset =
+ sps->sub_width_c * sps->conf_win_left_offset;
+ disp_region->width =
+ sps->pic_width_in_luma_samples -
+ disp_region->left_offset -
+ sps->sub_width_c * sps->conf_win_right_offset;
+ disp_region->height =
+ sps->pic_height_in_luma_samples -
+ disp_region->top_offset -
+ sps->sub_height_c * sps->conf_win_bottom_offset;
+ } else {
+ common_seq->orig_display_region =
+ common_seq->raw_display_region;
+ }
+}
+
+static void bspp_hevc_fillpicturehdr(struct vdec_comsequ_hdrinfo *common_seq,
+ enum hevc_nalunittype nalunit_type,
+ struct bspp_pict_hdr_info *picture_hdr,
+ struct bspp_hevc_sps *sps,
+ struct bspp_hevc_pps *pps,
+ struct bspp_hevc_vps *vps)
+{
+ picture_hdr->intra_coded = (nalunit_type == HEVC_NALTYPE_IDR_W_RADL ||
+ nalunit_type == HEVC_NALTYPE_IDR_N_LP);
+ picture_hdr->field = 0;
+ picture_hdr->post_processing = 0;
+ picture_hdr->discontinuous_mbs = 0;
+ picture_hdr->pict_aux_data.id = BSPP_INVALID;
+ picture_hdr->second_pict_aux_data.id = BSPP_INVALID;
+ picture_hdr->pict_sgm_data.id = BSPP_INVALID;
+ picture_hdr->coded_frame_size.width =
+ HEVC_ALIGN(sps->pic_width_in_luma_samples, HEVC_MIN_CODED_UNIT_SIZE, unsigned int);
+ picture_hdr->coded_frame_size.height =
+ HEVC_ALIGN(sps->pic_height_in_luma_samples, HEVC_MIN_CODED_UNIT_SIZE, unsigned int);
+ picture_hdr->disp_info.enc_disp_region = common_seq->orig_display_region;
+ picture_hdr->disp_info.disp_region = common_seq->orig_display_region;
+ picture_hdr->disp_info.raw_disp_region = common_seq->raw_display_region;
+ picture_hdr->disp_info.num_pan_scan_windows = 0;
+ picture_hdr->hevc_pict_hdr_info.range_ext_present =
+ (sps->profile_tier_level.general_profile_idc == 4) ||
+ sps->profile_tier_level.general_profile_compatibility_flag[4];
+
+ picture_hdr->hevc_pict_hdr_info.is_full_range_ext = 0;
+ if (picture_hdr->hevc_pict_hdr_info.range_ext_present &&
+ (bspp_hevc_checkppsrangeextensions(&pps->range_exts) ||
+ bspp_hevc_checksps_range_extensions(&sps->range_exts)))
+ picture_hdr->hevc_pict_hdr_info.is_full_range_ext = 1;
+
+ memset(picture_hdr->disp_info.pan_scan_windows, 0,
+ sizeof(picture_hdr->disp_info.pan_scan_windows));
+}
+
+static void bspp_hevc_fill_fwsps(struct bspp_hevc_sps *sps, struct hevcfw_sequence_ps *fwsps)
+{
+ unsigned char i;
+
+ fwsps->pic_width_in_luma_samples = sps->pic_width_in_luma_samples;
+ fwsps->pic_height_in_luma_samples = sps->pic_height_in_luma_samples;
+ fwsps->num_short_term_ref_pic_sets = sps->num_short_term_ref_pic_sets;
+ fwsps->num_long_term_ref_pics_sps = sps->num_long_term_ref_pics_sps;
+ fwsps->sps_max_sub_layers_minus1 = sps->sps_max_sub_layers_minus1;
+ fwsps->max_transform_hierarchy_depth_inter =
+ sps->max_transform_hierarchy_depth_inter;
+ fwsps->max_transform_hierarchy_depth_intra =
+ sps->max_transform_hierarchy_depth_intra;
+ fwsps->log2_diff_max_min_transform_block_size =
+ sps->log2_diff_max_min_transform_block_size;
+ fwsps->log2_min_transform_block_size_minus2 =
+ sps->log2_min_transform_block_size_minus2;
+ fwsps->log2_diff_max_min_luma_coding_block_size =
+ sps->log2_diff_max_min_luma_coding_block_size;
+ fwsps->log2_min_luma_coding_block_size_minus3 =
+ sps->log2_min_luma_coding_block_size_minus3;
+
+ HEVC_STATIC_ASSERT(sizeof(sps->sps_max_dec_pic_buffering_minus1) ==
+ sizeof(fwsps->sps_max_dec_pic_buffering_minus1));
+ memcpy(fwsps->sps_max_dec_pic_buffering_minus1, sps->sps_max_dec_pic_buffering_minus1,
+ sizeof(fwsps->sps_max_dec_pic_buffering_minus1[0]) *
+ (sps->sps_max_sub_layers_minus1 + 1));
+
+ HEVC_STATIC_ASSERT(sizeof(sps->sps_max_num_reorder_pics) ==
+ sizeof(fwsps->sps_max_num_reorder_pics));
+ memcpy(fwsps->sps_max_num_reorder_pics, sps->sps_max_num_reorder_pics,
+ sizeof(fwsps->sps_max_num_reorder_pics[0]) *
+ (sps->sps_max_sub_layers_minus1 + 1));
+
+ HEVC_STATIC_ASSERT(sizeof(sps->sps_max_latency_increase_plus1) ==
+ sizeof(fwsps->sps_max_latency_increase_plus1));
+ memcpy(fwsps->sps_max_latency_increase_plus1, sps->sps_max_latency_increase_plus1,
+ sizeof(fwsps->sps_max_latency_increase_plus1[0]) *
+ (sps->sps_max_sub_layers_minus1 + 1));
+
+ fwsps->chroma_format_idc = sps->chroma_format_idc;
+ fwsps->separate_colour_plane_flag = sps->separate_colour_plane_flag;
+ fwsps->log2_max_pic_order_cnt_lsb_minus4 =
+ sps->log2_max_pic_order_cnt_lsb_minus4;
+ fwsps->long_term_ref_pics_present_flag =
+ sps->long_term_ref_pics_present_flag;
+ fwsps->sample_adaptive_offset_enabled_flag =
+ sps->sample_adaptive_offset_enabled_flag;
+ fwsps->sps_temporal_mvp_enabled_flag =
+ sps->sps_temporal_mvp_enabled_flag;
+ fwsps->bit_depth_luma_minus8 = sps->bit_depth_luma_minus8;
+ fwsps->bit_depth_chroma_minus8 = sps->bit_depth_chroma_minus8;
+ fwsps->pcm_sample_bit_depth_luma_minus1 =
+ sps->pcm_sample_bit_depth_luma_minus1;
+ fwsps->pcm_sample_bit_depth_chroma_minus1 =
+ sps->pcm_sample_bit_depth_chroma_minus1;
+ fwsps->log2_min_pcm_luma_coding_block_size_minus3 =
+ sps->log2_min_pcm_luma_coding_block_size_minus3;
+ fwsps->log2_diff_max_min_pcm_luma_coding_block_size =
+ sps->log2_diff_max_min_pcm_luma_coding_block_size;
+ fwsps->pcm_loop_filter_disabled_flag =
+ sps->pcm_loop_filter_disabled_flag;
+ fwsps->amp_enabled_flag = sps->amp_enabled_flag;
+ fwsps->pcm_enabled_flag = sps->pcm_enabled_flag;
+ fwsps->strong_intra_smoothing_enabled_flag =
+ sps->strong_intra_smoothing_enabled_flag;
+ fwsps->scaling_list_enabled_flag = sps->scaling_list_enabled_flag;
+ fwsps->transform_skip_rotation_enabled_flag =
+ sps->range_exts.transform_skip_rotation_enabled_flag;
+ fwsps->transform_skip_context_enabled_flag =
+ sps->range_exts.transform_skip_context_enabled_flag;
+ fwsps->implicit_rdpcm_enabled_flag =
+ sps->range_exts.implicit_rdpcm_enabled_flag;
+ fwsps->explicit_rdpcm_enabled_flag =
+ sps->range_exts.explicit_rdpcm_enabled_flag;
+ fwsps->extended_precision_processing_flag =
+ sps->range_exts.extended_precision_processing_flag;
+ fwsps->intra_smoothing_disabled_flag =
+ sps->range_exts.intra_smoothing_disabled_flag;
+	/*
+	 * High precision offsets make no sense for 8-bit luma and chroma,
+	 * so forward this parameter only when the bit depth is greater than 8.
+	 */
+ if (sps->bit_depth_luma_minus8 || sps->bit_depth_chroma_minus8)
+ fwsps->high_precision_offsets_enabled_flag =
+ sps->range_exts.high_precision_offsets_enabled_flag;
+
+ fwsps->persistent_rice_adaptation_enabled_flag =
+ sps->range_exts.persistent_rice_adaptation_enabled_flag;
+ fwsps->cabac_bypass_alignment_enabled_flag =
+ sps->range_exts.cabac_bypass_alignment_enabled_flag;
+
+ HEVC_STATIC_ASSERT(sizeof(sps->lt_ref_pic_poc_lsb_sps) ==
+ sizeof(fwsps->lt_ref_pic_poc_lsb_sps));
+ HEVC_STATIC_ASSERT(sizeof(sps->used_by_curr_pic_lt_sps_flag) ==
+ sizeof(fwsps->used_by_curr_pic_lt_sps_flag));
+ memcpy(fwsps->lt_ref_pic_poc_lsb_sps, sps->lt_ref_pic_poc_lsb_sps,
+ sizeof(fwsps->lt_ref_pic_poc_lsb_sps[0]) *
+ sps->num_long_term_ref_pics_sps);
+ memcpy(fwsps->used_by_curr_pic_lt_sps_flag, sps->used_by_curr_pic_lt_sps_flag,
+ sizeof(fwsps->used_by_curr_pic_lt_sps_flag[0]) * sps->num_long_term_ref_pics_sps);
+
+ for (i = 0; i < sps->num_short_term_ref_pic_sets; ++i)
+ bspp_hevc_fill_fwst_rps(&sps->rps_list[i], &fwsps->st_rps_list[i]);
+
+ /* derived elements */
+ fwsps->pic_size_in_ctbs_y = sps->pic_size_in_ctbs_y;
+ fwsps->pic_height_in_ctbs_y = sps->pic_height_in_ctbs_y;
+ fwsps->pic_width_in_ctbs_y = sps->pic_width_in_ctbs_y;
+ fwsps->ctb_size_y = sps->ctb_size_y;
+ fwsps->ctb_log2size_y = sps->ctb_log2size_y;
+ fwsps->max_pic_order_cnt_lsb = sps->max_pic_order_cnt_lsb;
+
+ HEVC_STATIC_ASSERT(sizeof(sps->sps_max_latency_pictures) ==
+ sizeof(fwsps->sps_max_latency_pictures));
+ memcpy(fwsps->sps_max_latency_pictures, sps->sps_max_latency_pictures,
+ sizeof(fwsps->sps_max_latency_pictures[0]) *
+ (sps->sps_max_sub_layers_minus1 + 1));
+}
+
+static void bspp_hevc_fill_fwst_rps(struct bspp_hevc_shortterm_refpicset *strps,
+ struct hevcfw_short_term_ref_picset *fwstrps)
+{
+ fwstrps->num_delta_pocs = strps->num_delta_pocs;
+ fwstrps->num_negative_pics = strps->num_negative_pics;
+ fwstrps->num_positive_pics = strps->num_positive_pics;
+
+ HEVC_STATIC_ASSERT(sizeof(strps->delta_poc_s0) ==
+ sizeof(fwstrps->delta_poc_s0));
+ memcpy(fwstrps->delta_poc_s0, strps->delta_poc_s0,
+ sizeof(fwstrps->delta_poc_s0[0]) * strps->num_negative_pics);
+
+ HEVC_STATIC_ASSERT(sizeof(strps->delta_poc_s1) ==
+ sizeof(fwstrps->delta_poc_s1));
+ memcpy(fwstrps->delta_poc_s1, strps->delta_poc_s1,
+ sizeof(fwstrps->delta_poc_s1[0]) * strps->num_positive_pics);
+
+ HEVC_STATIC_ASSERT(sizeof(strps->used_bycurr_pic_s0) ==
+ sizeof(fwstrps->used_bycurr_pic_s0));
+ memcpy(fwstrps->used_bycurr_pic_s0, strps->used_bycurr_pic_s0,
+ sizeof(fwstrps->used_bycurr_pic_s0[0]) * strps->num_negative_pics);
+
+ HEVC_STATIC_ASSERT(sizeof(strps->used_bycurr_pic_s1) ==
+ sizeof(fwstrps->used_bycurr_pic_s1));
+ memcpy(fwstrps->used_bycurr_pic_s1, strps->used_bycurr_pic_s1,
+ sizeof(fwstrps->used_bycurr_pic_s1[0]) * strps->num_positive_pics);
+}
+
+static void bspp_hevc_fill_fwpps(struct bspp_hevc_pps *pps, struct hevcfw_picture_ps *fw_pps)
+{
+ fw_pps->pps_pic_parameter_set_id = pps->pps_pic_parameter_set_id;
+ fw_pps->num_tile_columns_minus1 = pps->num_tile_columns_minus1;
+ fw_pps->num_tile_rows_minus1 = pps->num_tile_rows_minus1;
+ fw_pps->diff_cu_qp_delta_depth = pps->diff_cu_qp_delta_depth;
+ fw_pps->init_qp_minus26 = pps->init_qp_minus26;
+ fw_pps->pps_beta_offset_div2 = pps->pps_beta_offset_div2;
+ fw_pps->pps_tc_offset_div2 = pps->pps_tc_offset_div2;
+ fw_pps->pps_cb_qp_offset = pps->pps_cb_qp_offset;
+ fw_pps->pps_cr_qp_offset = pps->pps_cr_qp_offset;
+ fw_pps->log2_parallel_merge_level_minus2 =
+ pps->log2_parallel_merge_level_minus2;
+
+ fw_pps->dependent_slice_segments_enabled_flag =
+ pps->dependent_slice_segments_enabled_flag;
+ fw_pps->output_flag_present_flag = pps->output_flag_present_flag;
+ fw_pps->num_extra_slice_header_bits = pps->num_extra_slice_header_bits;
+ fw_pps->lists_modification_present_flag =
+ pps->lists_modification_present_flag;
+ fw_pps->cabac_init_present_flag = pps->cabac_init_present_flag;
+ fw_pps->weighted_pred_flag = pps->weighted_pred_flag;
+ fw_pps->weighted_bipred_flag = pps->weighted_bipred_flag;
+ fw_pps->pps_slice_chroma_qp_offsets_present_flag =
+ pps->pps_slice_chroma_qp_offsets_present_flag;
+ fw_pps->deblocking_filter_override_enabled_flag =
+ pps->deblocking_filter_override_enabled_flag;
+ fw_pps->tiles_enabled_flag = pps->tiles_enabled_flag;
+ fw_pps->entropy_coding_sync_enabled_flag =
+ pps->entropy_coding_sync_enabled_flag;
+ fw_pps->slice_segment_header_extension_present_flag =
+ pps->slice_segment_header_extension_present_flag;
+ fw_pps->transquant_bypass_enabled_flag =
+ pps->transquant_bypass_enabled_flag;
+ fw_pps->cu_qp_delta_enabled_flag = pps->cu_qp_delta_enabled_flag;
+ fw_pps->transform_skip_enabled_flag = pps->transform_skip_enabled_flag;
+ fw_pps->sign_data_hiding_enabled_flag =
+ pps->sign_data_hiding_enabled_flag;
+ fw_pps->num_ref_idx_l0_default_active_minus1 =
+ pps->num_ref_idx_l0_default_active_minus1;
+ fw_pps->num_ref_idx_l1_default_active_minus1 =
+ pps->num_ref_idx_l1_default_active_minus1;
+ fw_pps->constrained_intra_pred_flag = pps->constrained_intra_pred_flag;
+ fw_pps->pps_deblocking_filter_disabled_flag =
+ pps->pps_deblocking_filter_disabled_flag;
+ fw_pps->pps_loop_filter_across_slices_enabled_flag =
+ pps->pps_loop_filter_across_slices_enabled_flag;
+ fw_pps->loop_filter_across_tiles_enabled_flag =
+ pps->loop_filter_across_tiles_enabled_flag;
+ fw_pps->log2_max_transform_skip_block_size_minus2 =
+ pps->range_exts.log2_max_transform_skip_block_size_minus2;
+ fw_pps->cross_component_prediction_enabled_flag =
+ pps->range_exts.cross_component_prediction_enabled_flag;
+ fw_pps->chroma_qp_offset_list_enabled_flag =
+ pps->range_exts.chroma_qp_offset_list_enabled_flag;
+ fw_pps->diff_cu_chroma_qp_offset_depth =
+ pps->range_exts.diff_cu_chroma_qp_offset_depth;
+ fw_pps->chroma_qp_offset_list_len_minus1 =
+ pps->range_exts.chroma_qp_offset_list_len_minus1;
+ memcpy(fw_pps->cb_qp_offset_list, pps->range_exts.cb_qp_offset_list,
+ sizeof(pps->range_exts.cb_qp_offset_list));
+ memcpy(fw_pps->cr_qp_offset_list, pps->range_exts.cr_qp_offset_list,
+ sizeof(pps->range_exts.cr_qp_offset_list));
+
+ /* derived elements */
+ HEVC_STATIC_ASSERT(sizeof(pps->col_bd) == sizeof(fw_pps->col_bd));
+ HEVC_STATIC_ASSERT(sizeof(pps->row_bd) == sizeof(fw_pps->row_bd));
+ memcpy(fw_pps->col_bd, pps->col_bd, sizeof(fw_pps->col_bd));
+ memcpy(fw_pps->row_bd, pps->row_bd, sizeof(fw_pps->row_bd));
+}
+
+static void bspp_hevc_fill_fw_scaling_lists(struct bspp_hevc_pps *pps,
+ struct bspp_hevc_sps *sps,
+ struct hevcfw_picture_ps *fw_pps)
+{
+ signed char size_id, matrix_id;
+ unsigned char *scalinglist;
+ /*
+ * We are starting at 1 to leave space for addresses,
+ * filled by lower layer
+ */
+ unsigned int *scaling_lists = &fw_pps->scaling_lists[1];
+ unsigned char i;
+
+ struct bspp_hevc_scalinglist_data *scaling_listdata =
+ pps->pps_scaling_list_data_present_flag ?
+ &pps->scaling_list :
+ &sps->scalinglist_data;
+
+ if (!sps->scaling_list_enabled_flag)
+ return;
+
+ fw_pps->scaling_list_enabled_flag = sps->scaling_list_enabled_flag;
+
+ for (size_id = HEVC_SCALING_LIST_NUM_SIZES - 1;
+ size_id >= 0; --size_id) {
+ const unsigned char *zz =
+ (size_id == 0 ? HEVC_INV_ZZ_SCAN4 : HEVC_INV_ZZ_SCAN8);
+
+ for (matrix_id = 0; matrix_id < ((size_id == 3) ? 2 : 6);
+ ++matrix_id) {
+ /*
+			 * Select the scaling list to operate on in this
+			 * iteration.
+ */
+ scalinglist =
+ scaling_listdata->lists[size_id][matrix_id];
+
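+			/*
+			 * Pack four list entries per 32-bit word in inverse
+			 * zig-zag order; advance by two words to skip the
+			 * interleaved address slots filled by the lower layer.
+			 */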
+ for (i = 0; i < ((size_id == 0) ? 16 : 64); i += 4) {
+ *scaling_lists =
+ scalinglist[zz[i + 3]] << 24 |
+ scalinglist[zz[i + 2]] << 16 |
+ scalinglist[zz[i + 1]] << 8 |
+ scalinglist[zz[i]];
+ scaling_lists += 2;
+ }
+ }
+ }
+
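+	/*
+	 * Append the DC coefficients: two for the 32x32 matrices, then six
+	 * for the 16x16 matrices, again interleaved with address slots.
+	 */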
+ for (i = 0; i < 2; ++i) {
+ *scaling_lists = scaling_listdata->dccoeffs[1][i];
+ scaling_lists += 2;
+ }
+
+ for (i = 0; i < 6; ++i) {
+ *scaling_lists = scaling_listdata->dccoeffs[0][i];
+ scaling_lists += 2;
+ }
+}
+
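+/* Compute ceil(log2(linear_val)); returns 0 for inputs 0 and 1. */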
+static unsigned int bspp_ceil_log2(unsigned int linear_val)
+{
+ unsigned int log_val = 0;
+
+ if (linear_val > 0)
+ --linear_val;
+
+ while (linear_val > 0) {
+ linear_val >>= 1;
+ ++log_val;
+ }
+
+ return log_val;
+}
+
+static unsigned char bspp_hevc_picture_is_irap(enum hevc_nalunittype nalunit_type)
+{
+ return (nalunit_type >= HEVC_NALTYPE_BLA_W_LP) &&
+ (nalunit_type <= HEVC_NALTYPE_RSV_IRAP_VCL23);
+}
+
+static unsigned char bspp_hevc_picture_is_cra(enum hevc_nalunittype nalunit_type)
+{
+ return (nalunit_type == HEVC_NALTYPE_CRA);
+}
+
+static unsigned char bspp_hevc_picture_is_idr(enum hevc_nalunittype nalunit_type)
+{
+ return (nalunit_type == HEVC_NALTYPE_IDR_N_LP) ||
+ (nalunit_type == HEVC_NALTYPE_IDR_W_RADL);
+}
+
+static unsigned char bspp_hevc_picture_is_bla(enum hevc_nalunittype nalunit_type)
+{
+ return (nalunit_type >= HEVC_NALTYPE_BLA_W_LP) &&
+ (nalunit_type <= HEVC_NALTYPE_BLA_N_LP);
+}
+
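+/*
+ * Derive NoRaslOutputFlag: set for IDR and BLA pictures, for the first
+ * picture after an end-of-sequence NAL, and for a CRA that is the first
+ * picture in the sequence.
+ */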
+static unsigned char bspp_hevc_picture_getnorasl_outputflag
+ (enum hevc_nalunittype nalunit_type,
+ struct bspp_hevc_inter_pict_ctx *inter_pict_ctx)
+{
+ VDEC_ASSERT(inter_pict_ctx);
+
+ if (bspp_hevc_picture_is_idr(nalunit_type) ||
+ bspp_hevc_picture_is_bla(nalunit_type) ||
+ inter_pict_ctx->first_after_eos ||
+ (bspp_hevc_picture_is_cra(nalunit_type) && inter_pict_ctx->seq_pic_count == 1))
+ return 1;
+
+ return 0;
+}
+
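+/*
+ * Range extensions are in use when general_profile_idc >= 4 (format range
+ * extensions profile) or the corresponding compatibility flag is set.
+ */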
+static unsigned char bspp_hevc_range_extensions_is_enabled
+ (struct bspp_hevc_profile_tierlevel *profile_tierlevel)
+{
+ unsigned char is_enabled;
+
+ is_enabled = profile_tierlevel->general_profile_idc >= 4 ||
+ profile_tierlevel->general_profile_compatibility_flag[4];
+
+ return is_enabled;
+}
+
+static void bspp_hevc_parse_codec_config(void *hndl_swsr_ctx, unsigned int *unit_count,
+ unsigned int *unit_array_count,
+ unsigned int *delim_length,
+ unsigned int *size_delim_length)
+{
+ unsigned long long value = 23;
+
+ /*
+	 * Set the shift-register up to provide the next 23 bytes
+ * without emulation prevention detection.
+ */
+ swsr_consume_delim(hndl_swsr_ctx, SWSR_EMPREVENT_NONE, 0, &value);
+ /*
+ * Codec config header must be read for size delimited data (HEVC)
+ * to get to the start of each unit.
+ * This parsing follows section 8.3.3.1.2 of ISO/IEC 14496-15:2013.
+ */
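+	/* Skip the 21 bytes preceding lengthSizeMinusOne in the record. */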
+ swsr_read_bits(hndl_swsr_ctx, 8 * 4);
+ swsr_read_bits(hndl_swsr_ctx, 8 * 4);
+ swsr_read_bits(hndl_swsr_ctx, 8 * 4);
+ swsr_read_bits(hndl_swsr_ctx, 8 * 4);
+ swsr_read_bits(hndl_swsr_ctx, 8 * 4);
+ swsr_read_bits(hndl_swsr_ctx, 8);
+
+ *delim_length = ((swsr_read_bits(hndl_swsr_ctx, 8) & 0x3) + 1) * 8;
+ *unit_array_count = swsr_read_bits(hndl_swsr_ctx, 8);
+
+ /* Size delimiter is only 2 bytes for HEVC codec configuration. */
+ *size_delim_length = 2 * 8;
+}
+
+static void bspp_hevc_update_unitcounts(void *hndl_swsr_ctx, unsigned int *unit_count,
+ unsigned int *unit_array_count)
+{
+ if (*unit_array_count != 0) {
+ unsigned long long value = 3;
+
+ if (*unit_count == 0) {
+ /*
+			 * Set the shift-register up to provide the next 3 bytes
+ * without emulation prevention detection.
+ */
+ swsr_consume_delim(hndl_swsr_ctx, SWSR_EMPREVENT_NONE, 0, &value);
+
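+			/*
+			 * Each NAL unit array starts with a one-byte type
+			 * field followed by a 16-bit NAL unit count.
+			 */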
+ swsr_read_bits(hndl_swsr_ctx, 8);
+ *unit_count = swsr_read_bits(hndl_swsr_ctx, 16);
+
+ (*unit_array_count)--;
+ (*unit_count)--;
+ }
+ }
+}
+
+void bspp_hevc_determine_unittype(unsigned char bitstream_unittype,
+ int disable_mvc,
+ enum bspp_unit_type *bspp_unittype)
+{
+ /* 6 bits for NAL Unit Type in HEVC */
+ unsigned char type = (bitstream_unittype >> 1) & 0x3f;
+
+ switch (type) {
+ case HEVC_NALTYPE_VPS:
+ *bspp_unittype = BSPP_UNIT_VPS;
+ break;
+
+ case HEVC_NALTYPE_SPS:
+ *bspp_unittype = BSPP_UNIT_SEQUENCE;
+ break;
+
+ case HEVC_NALTYPE_PPS:
+ *bspp_unittype = BSPP_UNIT_PPS;
+ break;
+
+ case HEVC_NALTYPE_TRAIL_N:
+ case HEVC_NALTYPE_TRAIL_R:
+ case HEVC_NALTYPE_TSA_N:
+ case HEVC_NALTYPE_TSA_R:
+ case HEVC_NALTYPE_STSA_N:
+ case HEVC_NALTYPE_STSA_R:
+ case HEVC_NALTYPE_RADL_N:
+ case HEVC_NALTYPE_RADL_R:
+ case HEVC_NALTYPE_RASL_N:
+ case HEVC_NALTYPE_RASL_R:
+ case HEVC_NALTYPE_BLA_W_LP:
+ case HEVC_NALTYPE_BLA_W_RADL:
+ case HEVC_NALTYPE_BLA_N_LP:
+ case HEVC_NALTYPE_IDR_W_RADL:
+ case HEVC_NALTYPE_IDR_N_LP:
+ case HEVC_NALTYPE_CRA:
+ case HEVC_NALTYPE_EOS:
+ /* Attach EOS to picture data, so it can be detected in FW */
+ *bspp_unittype = BSPP_UNIT_PICTURE;
+ break;
+
+ case HEVC_NALTYPE_AUD:
+ case HEVC_NALTYPE_PREFIX_SEI:
+ case HEVC_NALTYPE_SUFFIX_SEI:
+ case HEVC_NALTYPE_EOB:
+ case HEVC_NALTYPE_FD:
+ *bspp_unittype = BSPP_UNIT_NON_PICTURE;
+ break;
+
+ default:
+ *bspp_unittype = BSPP_UNIT_UNSUPPORTED;
+ break;
+ }
+}
+
+int bspp_hevc_set_parser_config(enum vdec_bstr_format bstr_format,
+ struct bspp_vid_std_features *pvidstd_features,
+ struct bspp_swsr_ctx *pswsr_ctx,
+ struct bspp_parser_callbacks *parser_callbacks,
+ struct bspp_inter_pict_data *pinterpict_data)
+{
+	/* Set HEVC parser callbacks. */
+ parser_callbacks->parse_unit_cb = bspp_hevc_unitparser;
+ parser_callbacks->release_data_cb = bspp_hevc_releasedata;
+ parser_callbacks->reset_data_cb = bspp_hevc_resetdata;
+ parser_callbacks->parse_codec_config_cb = bspp_hevc_parse_codec_config;
+ parser_callbacks->update_unit_counts_cb = bspp_hevc_update_unitcounts;
+ parser_callbacks->initialise_parsing_cb = bspp_hevc_initialiseparsing;
+ parser_callbacks->finalise_parsing_cb = bspp_hevc_finaliseparsing;
+
+ /* Set HEVC specific features. */
+ pvidstd_features->seq_size = sizeof(struct bspp_hevc_sequ_hdr_info);
+ pvidstd_features->uses_vps = 1;
+ pvidstd_features->vps_size = sizeof(struct bspp_hevc_vps);
+ pvidstd_features->uses_pps = 1;
+ pvidstd_features->pps_size = sizeof(struct bspp_hevc_pps);
+
+ /* Set HEVC specific shift register config. */
+ pswsr_ctx->emulation_prevention = SWSR_EMPREVENT_00000300;
+
+ if (bstr_format == VDEC_BSTRFORMAT_DEMUX_BYTESTREAM ||
+ bstr_format == VDEC_BSTRFORMAT_ELEMENTARY) {
+ pswsr_ctx->sr_config.delim_type = SWSR_DELIM_SCP;
+ pswsr_ctx->sr_config.delim_length = 3 * 8;
+ pswsr_ctx->sr_config.scp_value = 0x000001;
+ } else if (bstr_format == VDEC_BSTRFORMAT_DEMUX_SIZEDELIMITED) {
+ pswsr_ctx->sr_config.delim_type = SWSR_DELIM_SIZE;
+ pswsr_ctx->sr_config.delim_length = 4 * 8;
+ } else {
+ return IMG_ERROR_NOT_SUPPORTED;
+ }
+
+ return 0;
+}
diff --git a/drivers/staging/media/vxd/decoder/hevc_secure_parser.h b/drivers/staging/media/vxd/decoder/hevc_secure_parser.h
new file mode 100644
index 000000000000..72424e8b8041
--- /dev/null
+++ b/drivers/staging/media/vxd/decoder/hevc_secure_parser.h
@@ -0,0 +1,455 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * HEVC secure data unit parsing API.
+ *
+ * Copyright (c) Imagination Technologies Ltd.
+ * Copyright (c) 2021 Texas Instruments Incorporated - http://www.ti.com/
+ *
+ * Authors:
+ * Angela Stegmaier <[email protected]>
+ *
+ * Re-written for upstreaming
+ * Sidraya Jayagond <[email protected]>
+ */
+#ifndef __HEVCSECUREPARSER_H__
+#define __HEVCSECUREPARSER_H__
+
+#include "bspp_int.h"
+
+#define HEVC_MAX_NUM_PROFILE_IDC (32)
+#define HEVC_MAX_NUM_SUBLAYERS (7)
+#define HEVC_MAX_VPS_OP_SETS_PLUS1 (1024)
+#define HEVC_MAX_VPS_NUH_RESERVED_ZERO_LAYER_ID_PLUS1 (1)
+#define HEVC_MAX_NUM_REF_PICS (16)
+#define HEVC_MAX_NUM_ST_REF_PIC_SETS (65)
+#define HEVC_MAX_NUM_LT_REF_PICS (32)
+#define HEVC_MAX_NUM_REF_IDX_ACTIVE (15)
+#define HEVC_LEVEL_IDC_MIN (30)
+#define HEVC_LEVEL_IDC_MAX (186)
+#define HEVC_1_0_PROFILE_IDC_MAX (3)
+#define HEVC_MAX_CPB_COUNT (32)
+#define HEVC_MIN_CODED_UNIT_SIZE (8)
+
+/* hevc scaling lists (all values are maximum possible ones) */
+#define HEVC_SCALING_LIST_NUM_SIZES (4)
+#define HEVC_SCALING_LIST_NUM_MATRICES (6)
+#define HEVC_SCALING_LIST_MATRIX_SIZE (64)
+
+#define HEVC_MAX_TILE_COLS (20)
+#define HEVC_MAX_TILE_ROWS (22)
+
+#define HEVC_EXTENDED_SAR (255)
+
+#define HEVC_MAX_CHROMA_QP (6)
+
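+/* NAL unit type values as defined in ITU-T H.265, Table 7-1. */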
+enum hevc_nalunittype {
+ HEVC_NALTYPE_TRAIL_N = 0,
+ HEVC_NALTYPE_TRAIL_R = 1,
+ HEVC_NALTYPE_TSA_N = 2,
+ HEVC_NALTYPE_TSA_R = 3,
+ HEVC_NALTYPE_STSA_N = 4,
+ HEVC_NALTYPE_STSA_R = 5,
+ HEVC_NALTYPE_RADL_N = 6,
+ HEVC_NALTYPE_RADL_R = 7,
+ HEVC_NALTYPE_RASL_N = 8,
+ HEVC_NALTYPE_RASL_R = 9,
+ HEVC_NALTYPE_RSV_VCL_N10 = 10,
+ HEVC_NALTYPE_RSV_VCL_R11 = 11,
+ HEVC_NALTYPE_RSV_VCL_N12 = 12,
+ HEVC_NALTYPE_RSV_VCL_R13 = 13,
+ HEVC_NALTYPE_RSV_VCL_N14 = 14,
+ HEVC_NALTYPE_RSV_VCL_R15 = 15,
+ HEVC_NALTYPE_BLA_W_LP = 16,
+ HEVC_NALTYPE_BLA_W_RADL = 17,
+ HEVC_NALTYPE_BLA_N_LP = 18,
+ HEVC_NALTYPE_IDR_W_RADL = 19,
+ HEVC_NALTYPE_IDR_N_LP = 20,
+ HEVC_NALTYPE_CRA = 21,
+ HEVC_NALTYPE_RSV_IRAP_VCL22 = 22,
+ HEVC_NALTYPE_RSV_IRAP_VCL23 = 23,
+ HEVC_NALTYPE_VPS = 32,
+ HEVC_NALTYPE_SPS = 33,
+ HEVC_NALTYPE_PPS = 34,
+ HEVC_NALTYPE_AUD = 35,
+ HEVC_NALTYPE_EOS = 36,
+ HEVC_NALTYPE_EOB = 37,
+ HEVC_NALTYPE_FD = 38,
+ HEVC_NALTYPE_PREFIX_SEI = 39,
+ HEVC_NALTYPE_SUFFIX_SEI = 40,
+ HEVC_NALTYPE_FORCE32BITS = 0x7FFFFFFFU
+};
+
+enum bspp_hevcslicetype {
+ HEVC_SLICE_B = 0,
+ HEVC_SLICE_P = 1,
+ HEVC_SLICE_I = 2,
+ HEVC_SLICE_FORCE32BITS = 0x7FFFFFFFU
+};
+
+/* HEVC NAL unit header */
+struct bspp_hevcnalheader {
+ unsigned char nal_unit_type;
+ unsigned char nuh_layer_id;
+ unsigned char nuh_temporal_id_plus1;
+};
+
+/* HEVC video profile_tier_level */
+struct bspp_hevc_profile_tierlevel {
+ unsigned char general_profile_space;
+ unsigned char general_tier_flag;
+ unsigned char general_profile_idc;
+ unsigned char general_profile_compatibility_flag[HEVC_MAX_NUM_PROFILE_IDC];
+ unsigned char general_progressive_source_flag;
+ unsigned char general_interlaced_source_flag;
+ unsigned char general_non_packed_constraint_flag;
+ unsigned char general_frame_only_constraint_flag;
+ unsigned char general_max_12bit_constraint_flag;
+ unsigned char general_max_10bit_constraint_flag;
+ unsigned char general_max_8bit_constraint_flag;
+ unsigned char general_max_422chroma_constraint_flag;
+ unsigned char general_max_420chroma_constraint_flag;
+ unsigned char general_max_monochrome_constraint_flag;
+ unsigned char general_intra_constraint_flag;
+ unsigned char general_one_picture_only_constraint_flag;
+ unsigned char general_lower_bit_rate_constraint_flag;
+ unsigned char general_level_idc;
+ unsigned char sub_layer_profile_present_flag[HEVC_MAX_NUM_SUBLAYERS - 1];
+ unsigned char sub_layer_level_present_flag[HEVC_MAX_NUM_SUBLAYERS - 1];
+ unsigned char sub_layer_profile_space[HEVC_MAX_NUM_SUBLAYERS - 1];
+ unsigned char sub_layer_tier_flag[HEVC_MAX_NUM_SUBLAYERS - 1];
+ unsigned char sub_layer_profile_idc[HEVC_MAX_NUM_SUBLAYERS - 1];
+ unsigned char sub_layer_profile_compatibility_flag[HEVC_MAX_NUM_SUBLAYERS -
+ 1][HEVC_MAX_NUM_PROFILE_IDC];
+ unsigned char sub_layer_progressive_source_flag[HEVC_MAX_NUM_SUBLAYERS - 1];
+ unsigned char sub_layer_interlaced_source_flag[HEVC_MAX_NUM_SUBLAYERS - 1];
+ unsigned char sub_layer_non_packed_constraint_flag[HEVC_MAX_NUM_SUBLAYERS - 1];
+ unsigned char sub_layer_frame_only_constraint_flag[HEVC_MAX_NUM_SUBLAYERS - 1];
+ unsigned char sub_layer_max_12bit_constraint_flag[HEVC_MAX_NUM_SUBLAYERS - 1];
+ unsigned char sub_layer_max_10bit_constraint_flag[HEVC_MAX_NUM_SUBLAYERS - 1];
+ unsigned char sub_layer_max_8bit_constraint_flag[HEVC_MAX_NUM_SUBLAYERS - 1];
+ unsigned char sub_layer_max_422chroma_constraint_flag[HEVC_MAX_NUM_SUBLAYERS - 1];
+ unsigned char sub_layer_max_420chroma_constraint_flag[HEVC_MAX_NUM_SUBLAYERS - 1];
+ unsigned char sub_layer_max_monochrome_constraint_flag[HEVC_MAX_NUM_SUBLAYERS - 1];
+ unsigned char sub_layer_intra_constraint_flag[HEVC_MAX_NUM_SUBLAYERS - 1];
+ unsigned char sub_layer_one_picture_only_constraint_flag[HEVC_MAX_NUM_SUBLAYERS - 1];
+ unsigned char sub_layer_lower_bit_rate_constraint_flag[HEVC_MAX_NUM_SUBLAYERS - 1];
+ unsigned char sub_layer_level_idc[HEVC_MAX_NUM_SUBLAYERS - 1];
+};
+
+/* HEVC sub layer HRD parameters */
+struct bspp_hevc_sublayer_hrd_parameters {
+ unsigned char bit_rate_value_minus1[HEVC_MAX_CPB_COUNT];
+ unsigned char cpb_size_value_minus1[HEVC_MAX_CPB_COUNT];
+ unsigned char cpb_size_du_value_minus1[HEVC_MAX_CPB_COUNT];
+ unsigned char bit_rate_du_value_minus1[HEVC_MAX_CPB_COUNT];
+ unsigned char cbr_flag[HEVC_MAX_CPB_COUNT];
+};
+
+/* HEVC HRD parameters */
+struct bspp_hevc_hrd_parameters {
+ unsigned char nal_hrd_parameters_present_flag;
+ unsigned char vcl_hrd_parameters_present_flag;
+ unsigned char sub_pic_hrd_params_present_flag;
+ unsigned char tick_divisor_minus2;
+ unsigned char du_cpb_removal_delay_increment_length_minus1;
+ unsigned char sub_pic_cpb_params_in_pic_timing_sei_flag;
+ unsigned char dpb_output_delay_du_length_minus1;
+ unsigned char bit_rate_scale;
+ unsigned char cpb_size_scale;
+ unsigned char cpb_size_du_scale;
+ unsigned char initial_cpb_removal_delay_length_minus1;
+ unsigned char au_cpb_removal_delay_length_minus1;
+ unsigned char dpb_output_delay_length_minus1;
+ unsigned char fixed_pic_rate_general_flag[HEVC_MAX_NUM_SUBLAYERS];
+ unsigned char fixed_pic_rate_within_cvs_flag[HEVC_MAX_NUM_SUBLAYERS];
+ unsigned char elemental_duration_in_tc_minus1[HEVC_MAX_NUM_SUBLAYERS];
+ unsigned char low_delay_hrd_flag[HEVC_MAX_NUM_SUBLAYERS];
+ unsigned char cpb_cnt_minus1[HEVC_MAX_NUM_SUBLAYERS];
+ struct bspp_hevc_sublayer_hrd_parameters sublayhrdparams[HEVC_MAX_NUM_SUBLAYERS];
+};
+
+/* HEVC video parameter set */
+struct bspp_hevc_vps {
+ unsigned char is_different;
+ unsigned char is_sent;
+ unsigned char is_available;
+ unsigned char vps_video_parameter_set_id;
+ unsigned char vps_reserved_three_2bits;
+ unsigned char vps_max_layers_minus1;
+ unsigned char vps_max_sub_layers_minus1;
+ unsigned char vps_temporal_id_nesting_flag;
+ unsigned short vps_reserved_0xffff_16bits;
+ struct bspp_hevc_profile_tierlevel profiletierlevel;
+ unsigned char vps_max_dec_pic_buffering_minus1[HEVC_MAX_NUM_SUBLAYERS];
+ unsigned char vps_max_num_reorder_pics[HEVC_MAX_NUM_SUBLAYERS];
+ unsigned char vps_max_latency_increase_plus1[HEVC_MAX_NUM_SUBLAYERS];
+ unsigned char vps_sub_layer_ordering_info_present_flag;
+ unsigned char vps_max_layer_id;
+ unsigned char vps_num_layer_sets_minus1;
+ unsigned char layer_id_included_flag[HEVC_MAX_VPS_OP_SETS_PLUS1]
+ [HEVC_MAX_VPS_NUH_RESERVED_ZERO_LAYER_ID_PLUS1];
+ unsigned char vps_timing_info_present_flag;
+ unsigned int vps_num_units_in_tick;
+ unsigned int vps_time_scale;
+ unsigned char vps_poc_proportional_to_timing_flag;
+ unsigned char vps_num_ticks_poc_diff_one_minus1;
+ unsigned char vps_num_hrd_parameters;
+ unsigned char *hrd_layer_set_idx;
+ unsigned char *cprms_present_flag;
+ unsigned char vps_extension_flag;
+ unsigned char vps_extension_data_flag;
+};
+
+/* HEVC scaling lists */
+struct bspp_hevc_scalinglist_data {
+ unsigned char dccoeffs[HEVC_SCALING_LIST_NUM_SIZES - 2][HEVC_SCALING_LIST_NUM_MATRICES];
+ unsigned char lists[HEVC_SCALING_LIST_NUM_SIZES][HEVC_SCALING_LIST_NUM_MATRICES]
+ [HEVC_SCALING_LIST_MATRIX_SIZE];
+};
+
+/* HEVC short term reference picture set */
+struct bspp_hevc_shortterm_refpicset {
+ unsigned char num_negative_pics;
+ unsigned char num_positive_pics;
+ short delta_poc_s0[HEVC_MAX_NUM_REF_PICS];
+ short delta_poc_s1[HEVC_MAX_NUM_REF_PICS];
+ unsigned char used_bycurr_pic_s0[HEVC_MAX_NUM_REF_PICS];
+ unsigned char used_bycurr_pic_s1[HEVC_MAX_NUM_REF_PICS];
+ unsigned char num_delta_pocs;
+};
+
+/* HEVC video usability information */
+struct bspp_hevc_vui_params {
+ unsigned char aspect_ratio_info_present_flag;
+ unsigned char aspect_ratio_idc;
+ unsigned short sar_width;
+ unsigned short sar_height;
+ unsigned char overscan_info_present_flag;
+ unsigned char overscan_appropriate_flag;
+ unsigned char video_signal_type_present_flag;
+ unsigned char video_format;
+ unsigned char video_full_range_flag;
+ unsigned char colour_description_present_flag;
+ unsigned char colour_primaries;
+ unsigned char transfer_characteristics;
+ unsigned char matrix_coeffs;
+ unsigned char chroma_loc_info_present_flag;
+ unsigned char chroma_sample_loc_type_top_field;
+ unsigned char chroma_sample_loc_type_bottom_field;
+ unsigned char neutral_chroma_indication_flag;
+ unsigned char field_seq_flag;
+ unsigned char frame_field_info_present_flag;
+ unsigned char default_display_window_flag;
+ unsigned short def_disp_win_left_offset;
+ unsigned short def_disp_win_right_offset;
+ unsigned short def_disp_win_top_offset;
+ unsigned short def_disp_win_bottom_offset;
+ unsigned char vui_timing_info_present_flag;
+ unsigned int vui_num_units_in_tick;
+ unsigned int vui_time_scale;
+ unsigned char vui_poc_proportional_to_timing_flag;
+ unsigned int vui_num_ticks_poc_diff_one_minus1;
+ unsigned char vui_hrd_parameters_present_flag;
+ struct bspp_hevc_hrd_parameters vui_hrd_params;
+ unsigned char bitstream_restriction_flag;
+ unsigned char tiles_fixed_structure_flag;
+ unsigned char motion_vectors_over_pic_boundaries_flag;
+ unsigned char restricted_ref_pic_lists_flag;
+ unsigned short min_spatial_segmentation_idc;
+ unsigned char max_bytes_per_pic_denom;
+ unsigned char max_bits_per_min_cu_denom;
+ unsigned char log2_max_mv_length_horizontal;
+ unsigned char log2_max_mv_length_vertical;
+};
+
+/* HEVC sps range extensions */
+struct bspp_hevc_sps_range_exts {
+ unsigned char transform_skip_rotation_enabled_flag;
+ unsigned char transform_skip_context_enabled_flag;
+ unsigned char implicit_rdpcm_enabled_flag;
+ unsigned char explicit_rdpcm_enabled_flag;
+ unsigned char extended_precision_processing_flag;
+ unsigned char intra_smoothing_disabled_flag;
+ unsigned char high_precision_offsets_enabled_flag;
+ unsigned char persistent_rice_adaptation_enabled_flag;
+ unsigned char cabac_bypass_alignment_enabled_flag;
+};
+
+/* HEVC sequence parameter set */
+struct bspp_hevc_sps {
+ unsigned char is_different;
+ unsigned char is_sent;
+ unsigned char is_available;
+ unsigned char sps_video_parameter_set_id;
+ unsigned char sps_max_sub_layers_minus1;
+ unsigned char sps_temporal_id_nesting_flag;
+ struct bspp_hevc_profile_tierlevel profile_tier_level;
+ unsigned char sps_seq_parameter_set_id;
+ unsigned char chroma_format_idc;
+ unsigned char separate_colour_plane_flag;
+ unsigned int pic_width_in_luma_samples;
+ unsigned int pic_height_in_luma_samples;
+ unsigned char conformance_window_flag;
+ unsigned short conf_win_left_offset;
+ unsigned short conf_win_right_offset;
+ unsigned short conf_win_top_offset;
+ unsigned short conf_win_bottom_offset;
+ unsigned char bit_depth_luma_minus8;
+ unsigned char bit_depth_chroma_minus8;
+ unsigned char log2_max_pic_order_cnt_lsb_minus4;
+ unsigned char sps_sub_layer_ordering_info_present_flag;
+ unsigned char sps_max_dec_pic_buffering_minus1[HEVC_MAX_NUM_SUBLAYERS];
+ unsigned char sps_max_num_reorder_pics[HEVC_MAX_NUM_SUBLAYERS];
+ unsigned int sps_max_latency_increase_plus1[HEVC_MAX_NUM_SUBLAYERS];
+ unsigned char log2_min_luma_coding_block_size_minus3;
+ unsigned char log2_diff_max_min_luma_coding_block_size;
+ unsigned char log2_min_transform_block_size_minus2;
+ unsigned char log2_diff_max_min_transform_block_size;
+ unsigned char max_transform_hierarchy_depth_inter;
+ unsigned char max_transform_hierarchy_depth_intra;
+ unsigned char scaling_list_enabled_flag;
+ unsigned char sps_scaling_list_data_present_flag;
+ struct bspp_hevc_scalinglist_data scalinglist_data;
+ unsigned char amp_enabled_flag;
+ unsigned char sample_adaptive_offset_enabled_flag;
+ unsigned char pcm_enabled_flag;
+ unsigned char pcm_sample_bit_depth_luma_minus1;
+ unsigned char pcm_sample_bit_depth_chroma_minus1;
+ unsigned char log2_min_pcm_luma_coding_block_size_minus3;
+ unsigned char log2_diff_max_min_pcm_luma_coding_block_size;
+ unsigned char pcm_loop_filter_disabled_flag;
+ unsigned char num_short_term_ref_pic_sets;
+ struct bspp_hevc_shortterm_refpicset rps_list[HEVC_MAX_NUM_ST_REF_PIC_SETS];
+ unsigned char long_term_ref_pics_present_flag;
+ unsigned char num_long_term_ref_pics_sps;
+ unsigned short lt_ref_pic_poc_lsb_sps[HEVC_MAX_NUM_LT_REF_PICS];
+ unsigned char used_by_curr_pic_lt_sps_flag[HEVC_MAX_NUM_LT_REF_PICS];
+ unsigned char sps_temporal_mvp_enabled_flag;
+ unsigned char strong_intra_smoothing_enabled_flag;
+ unsigned char vui_parameters_present_flag;
+ struct bspp_hevc_vui_params vui_params;
+ unsigned char sps_extension_present_flag;
+ unsigned char sps_range_extensions_flag;
+ struct bspp_hevc_sps_range_exts range_exts;
+ unsigned char sps_extension_7bits;
+ unsigned char sps_extension_data_flag;
+ /* derived elements */
+ unsigned char sub_width_c;
+ unsigned char sub_height_c;
+ unsigned char ctb_log2size_y;
+ unsigned char ctb_size_y;
+ unsigned int pic_width_in_ctbs_y;
+ unsigned int pic_height_in_ctbs_y;
+ unsigned int pic_size_in_ctbs_y;
+ int max_pic_order_cnt_lsb;
+ unsigned int sps_max_latency_pictures[HEVC_MAX_NUM_SUBLAYERS];
+ /* raw vui data as extracted from bitstream. */
+ struct bspp_raw_bitstream_data *vui_raw_data;
+};
+
+/**
+ * struct bspp_hevc_sequ_hdr_info - HEVC sequence header information
+ *				     (VPS, SPS, VUI); everything parsed from
+ *				     the video/sequence headers.
+ * @vps: HEVC video parameter set
+ * @sps: HEVC sequence parameter set
+ */
+struct bspp_hevc_sequ_hdr_info {
+ struct bspp_hevc_vps vps;
+ struct bspp_hevc_sps sps;
+};
+
+/* HEVC pps range extensions */
+struct bspp_hevc_pps_range_exts {
+ unsigned char log2_max_transform_skip_block_size_minus2;
+ unsigned char cross_component_prediction_enabled_flag;
+ unsigned char chroma_qp_offset_list_enabled_flag;
+ unsigned char diff_cu_chroma_qp_offset_depth;
+ unsigned char chroma_qp_offset_list_len_minus1;
+ unsigned char cb_qp_offset_list[HEVC_MAX_CHROMA_QP];
+ unsigned char cr_qp_offset_list[HEVC_MAX_CHROMA_QP];
+ unsigned char log2_sao_offset_scale_luma;
+ unsigned char log2_sao_offset_scale_chroma;
+};
+
+/* HEVC picture parameter set */
+struct bspp_hevc_pps {
+ unsigned char is_available;
+ unsigned char is_param_copied;
+ unsigned char pps_pic_parameter_set_id;
+ unsigned char pps_seq_parameter_set_id;
+ unsigned char dependent_slice_segments_enabled_flag;
+ unsigned char output_flag_present_flag;
+ unsigned char num_extra_slice_header_bits;
+ unsigned char sign_data_hiding_enabled_flag;
+ unsigned char cabac_init_present_flag;
+ unsigned char num_ref_idx_l0_default_active_minus1;
+ unsigned char num_ref_idx_l1_default_active_minus1;
+ unsigned char init_qp_minus26;
+ unsigned char constrained_intra_pred_flag;
+ unsigned char transform_skip_enabled_flag;
+ unsigned char cu_qp_delta_enabled_flag;
+ unsigned char diff_cu_qp_delta_depth;
+ int pps_cb_qp_offset;
+ int pps_cr_qp_offset;
+ unsigned char pps_slice_chroma_qp_offsets_present_flag;
+ unsigned char weighted_pred_flag;
+ unsigned char weighted_bipred_flag;
+ unsigned char transquant_bypass_enabled_flag;
+ unsigned char tiles_enabled_flag;
+ unsigned char entropy_coding_sync_enabled_flag;
+ unsigned char num_tile_columns_minus1;
+ unsigned char num_tile_rows_minus1;
+ unsigned char uniform_spacing_flag;
+ unsigned char column_width_minus1[HEVC_MAX_TILE_COLS];
+ unsigned char row_height_minus1[HEVC_MAX_TILE_ROWS];
+ unsigned char loop_filter_across_tiles_enabled_flag;
+ unsigned char pps_loop_filter_across_slices_enabled_flag;
+ unsigned char deblocking_filter_control_present_flag;
+ unsigned char deblocking_filter_override_enabled_flag;
+ unsigned char pps_deblocking_filter_disabled_flag;
+ unsigned char pps_beta_offset_div2;
+ unsigned char pps_tc_offset_div2;
+ unsigned char pps_scaling_list_data_present_flag;
+ struct bspp_hevc_scalinglist_data scaling_list;
+ unsigned char lists_modification_present_flag;
+ unsigned char log2_parallel_merge_level_minus2;
+ unsigned char slice_segment_header_extension_present_flag;
+ unsigned char pps_extension_present_flag;
+ unsigned char pps_range_extensions_flag;
+ struct bspp_hevc_pps_range_exts range_exts;
+ unsigned char pps_extension_7bits;
+ unsigned char pps_extension_data_flag;
+ /* derived elements */
+ unsigned short col_bd[HEVC_MAX_TILE_COLS + 1];
+ unsigned short row_bd[HEVC_MAX_TILE_ROWS + 1];
+ /* PVDEC derived elements */
+ unsigned int max_tile_height_in_ctbs_y;
+};
+
+/* HEVC slice segment header */
+struct bspp_hevc_slice_segment_header {
+ unsigned char bslice_is_idr;
+ unsigned char first_slice_segment_in_pic_flag;
+ unsigned char no_output_of_prior_pics_flag;
+ unsigned char slice_pic_parameter_set_id;
+ unsigned char dependent_slice_segment_flag;
+ unsigned int slice_segment_address;
+};
+
+/*
+ * @Function bspp_hevc_set_parser_config
+ * sets the parser configuration.
+ */
+int bspp_hevc_set_parser_config(enum vdec_bstr_format bstr_format,
+ struct bspp_vid_std_features *pvidstd_features,
+ struct bspp_swsr_ctx *pswsr_ctx,
+ struct bspp_parser_callbacks *pparser_callbacks,
+ struct bspp_inter_pict_data *pinterpict_data);
+
+void bspp_hevc_determine_unittype(unsigned char bitstream_unittype,
+ int disable_mvc,
+ enum bspp_unit_type *bspp_unittype);
+
+#endif /* __HEVCSECUREPARSER_H__ */
diff --git a/drivers/staging/media/vxd/decoder/jpeg_secure_parser.c b/drivers/staging/media/vxd/decoder/jpeg_secure_parser.c
new file mode 100644
index 000000000000..7effd67034be
--- /dev/null
+++ b/drivers/staging/media/vxd/decoder/jpeg_secure_parser.c
@@ -0,0 +1,645 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * JPEG secure data unit parsing API.
+ *
+ * Copyright (c) Imagination Technologies Ltd.
+ * Copyright (c) 2021 Texas Instruments Incorporated - http://www.ti.com/
+ *
+ * Authors:
+ * Sunita Nadampalli <[email protected]>
+ *
+ * Re-written for upstreaming
+ * Sidraya Jayagond <[email protected]>
+ */
+
+#include <linux/dma-mapping.h>
+#include <media/v4l2-ctrls.h>
+#include <media/v4l2-device.h>
+#include <media/v4l2-mem2mem.h>
+
+#include "bspp_int.h"
+#include "jpeg_secure_parser.h"
+#include "jpegfw_data.h"
+#include "swsr.h"
+
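+/* Minimum coded frame dimension: one 8x8 JPEG data unit. */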
+#define JPEG_MCU_SIZE 8
+
+#define JPEG_MAX_COMPONENTS 4
+#define MAX_SETS_HUFFMAN_TABLES 2
+#define MAX_QUANT_TABLES 4
+
+#define TABLE_CLASS_DC 0
+#define TABLE_CLASS_AC 1
+#define TABLE_CLASS_NUM 2
+
+/* Marker Codes */
+#define CODE_SOF_BASELINE 0xC0
+#define CODE_SOF1 0xC1
+#define CODE_SOF2 0xC2
+#define CODE_SOF3 0xC3
+#define CODE_SOF5 0xC5
+#define CODE_SOF6 0xC6
+#define CODE_SOF7 0xC7
+#define CODE_SOF8 0xC8
+#define CODE_SOF9 0xC9
+#define CODE_SOF10 0xCA
+#define CODE_SOF11 0xCB
+#define CODE_SOF13 0xCD
+#define CODE_SOF14 0xCE
+#define CODE_SOF15 0xCF
+#define CODE_DHT 0xC4
+#define CODE_RST0 0xD0
+#define CODE_RST1 0xD1
+#define CODE_RST2 0xD2
+#define CODE_RST3 0xD3
+#define CODE_RST4 0xD4
+#define CODE_RST5 0xD5
+#define CODE_RST6 0xD6
+#define CODE_RST7 0xD7
+#define CODE_SOI 0xD8
+#define CODE_EOI 0xD9
+#define CODE_SOS 0xDA
+#define CODE_DQT 0xDB
+#define CODE_DRI 0xDD
+#define CODE_APP0 0xE0
+#define CODE_APP1 0xE1
+#define CODE_APP2 0xE2
+#define CODE_APP3 0xE3
+#define CODE_APP4 0xE4
+#define CODE_APP5 0xE5
+#define CODE_APP6 0xE6
+#define CODE_APP7 0xE7
+#define CODE_APP8 0xE8
+#define CODE_APP9 0xE9
+#define CODE_APP10 0xEA
+#define CODE_APP11 0xEB
+#define CODE_APP12 0xEC
+#define CODE_APP13 0xED
+#define CODE_APP14 0xEE
+#define CODE_APP15 0xEF
+#define CODE_M_DAC 0xCC
+#define CODE_COMMENT 0xFE
+
+enum bspp_exception_handler {
+ /* BSPP parse exception handler */
+ BSPP_EXCEPTION_HANDLER_NONE = 0x00,
+ /* Jump at exception (external use) */
+ BSPP_EXCEPTION_HANDLER_JUMP,
+ BSPP_EXCEPTION_HANDLER_FORCE32BITS = 0x7FFFFFFFU
+};
+
+struct components {
+ unsigned char identifier;
+ unsigned char horz_factor;
+ unsigned char vert_factor;
+ unsigned char quant_table;
+};
+
+struct jpeg_segment_sof {
+ unsigned char precision;
+ unsigned short height;
+ unsigned short width;
+ unsigned char component;
+ struct components components[JPEG_VDEC_MAX_COMPONENTS];
+};
+
+struct jpeg_segment_header {
+ unsigned char type;
+ unsigned short payload_size;
+};
+
+/*
+ * Read bitstream data that may LOOK like SCP
+ * (but in fact is regular data and should be read as such)
+ * @return 8 bits read from the bitstream
+ */
+static unsigned char bspp_jpeg_readbyte_asdata(void *swsr_ctx)
+{
+ if (swsr_check_delim_or_eod(swsr_ctx) == SWSR_FOUND_DELIM) {
+ swsr_consume_delim(swsr_ctx, SWSR_EMPREVENT_NONE, 8, NULL);
+ return 0xFF;
+ } else {
+ return swsr_read_bits(swsr_ctx, 8);
+ }
+}
+
+/*
+ * Read bitstream data that may LOOK like SCP
+ * (but is in fact regular data and should be read as such)
+ * @return 16 bits read from the bitstream
+ */
+static unsigned short bspp_jpeg_readword_asdata(void *swsr_ctx)
+{
+ unsigned short byte1 = bspp_jpeg_readbyte_asdata(swsr_ctx);
+ unsigned short byte2 = bspp_jpeg_readbyte_asdata(swsr_ctx);
+
+ return (byte1 << 8 | byte2);
+}
+
+/*
+ * Access regular bitstream data that may LOOK like SCP
+ * (but is in fact regular data)
+ */
+static void bspp_jpeg_consume_asdata(void *swsr_ctx, int len)
+{
+ while (len > 0) {
+ bspp_jpeg_readbyte_asdata(swsr_ctx);
+ len--;
+ }
+}
+
+/*
+ * Parse SOF segment
+ */
+static enum bspp_error_type bspp_jpeg_segment_parse_sof(void *swsr_ctx,
+ struct jpeg_segment_sof *sof_header)
+{
+ unsigned char comp_ind;
+
+ sof_header->precision = swsr_read_bits(swsr_ctx, 8);
+ if (sof_header->precision != 8) {
+ pr_warn("Sample precision has invalid value %d\n",
+ sof_header->precision);
+ return BSPP_ERROR_INVALID_VALUE;
+ }
+
+ sof_header->height = bspp_jpeg_readword_asdata(swsr_ctx);
+ sof_header->width = bspp_jpeg_readword_asdata(swsr_ctx);
+ if (sof_header->height < JPEG_MCU_SIZE || sof_header->width < JPEG_MCU_SIZE) {
+ pr_warn("Sample X/Y smaller than macroblock\n");
+ return BSPP_ERROR_INVALID_VALUE;
+ }
+ sof_header->component = swsr_read_bits(swsr_ctx, 8);
+ if (sof_header->component > JPEG_MAX_COMPONENTS) {
+ pr_warn("Number of components (%d) is greater than max allowed\n",
+ sof_header->component);
+ return BSPP_ERROR_INVALID_VALUE;
+ }
+ /* parse the component */
+ for (comp_ind = 0; comp_ind < sof_header->component; comp_ind++) {
+ sof_header->components[comp_ind].identifier = swsr_read_bits(swsr_ctx, 8);
+ sof_header->components[comp_ind].horz_factor = swsr_read_bits(swsr_ctx, 4);
+ sof_header->components[comp_ind].vert_factor = swsr_read_bits(swsr_ctx, 4);
+ sof_header->components[comp_ind].quant_table = swsr_read_bits(swsr_ctx, 8);
+
+ pr_debug("components[%d]=(identifier=%d; horz_factor=%d; vert_factor=%d; quant_table=%d)",
+ comp_ind,
+ sof_header->components[comp_ind].identifier,
+ sof_header->components[comp_ind].horz_factor,
+ sof_header->components[comp_ind].vert_factor,
+ sof_header->components[comp_ind].quant_table);
+ }
+
+ return BSPP_ERROR_NONE;
+}
+
+/*
+ * Seek to a delimiter if we're not already on one
+ */
+static enum swsr_found bspp_jpeg_tryseek_delimeter(void *swsr_ctx)
+{
+ enum swsr_found was_delim_or_eod = swsr_check_delim_or_eod(swsr_ctx);
+
+ if (was_delim_or_eod != SWSR_FOUND_DELIM)
+ was_delim_or_eod = swsr_seek_delim_or_eod(swsr_ctx);
+
+ return was_delim_or_eod;
+}
+
+static enum swsr_found bspp_jpeg_tryconsume_delimeters(void *swsr_ctx)
+{
+ enum swsr_found is_delim_or_eod = swsr_check_delim_or_eod(swsr_ctx);
+
+ while (is_delim_or_eod == SWSR_FOUND_DELIM) {
+ swsr_consume_delim(swsr_ctx, SWSR_EMPREVENT_NONE, 8, NULL);
+ is_delim_or_eod = swsr_check_delim_or_eod(swsr_ctx);
+ }
+ return is_delim_or_eod;
+}
+
+static enum swsr_found bspp_jpeg_tryseek_and_consume_delimeters(void *swsr_ctx)
+{
+ enum swsr_found is_delim_or_eod;
+
+ bspp_jpeg_tryseek_delimeter(swsr_ctx);
+ is_delim_or_eod = bspp_jpeg_tryconsume_delimeters(swsr_ctx);
+ return is_delim_or_eod;
+}
+
+/*
+ * Read segment type and size
+ * @return IMG_TRUE when header is found,
+ * IMG_FALSE if it has to be called again
+ */
+static unsigned char bspp_jpeg_segment_read_header(void *swsr_ctx,
+ struct bspp_unit_data *unit_data,
+ struct jpeg_segment_header *jpeg_segment_header)
+{
+ bspp_jpeg_tryconsume_delimeters(swsr_ctx);
+ jpeg_segment_header->type = swsr_read_bits(swsr_ctx, 8);
+
+ if (jpeg_segment_header->type != 0)
+ pr_debug("NAL=0x%x\n", jpeg_segment_header->type);
+
+ jpeg_segment_header->payload_size = 0;
+
+ switch (jpeg_segment_header->type) {
+ case CODE_SOS:
+ case CODE_DRI:
+ case CODE_SOF_BASELINE:
+ case CODE_SOF1:
+ case CODE_SOF2:
+ case CODE_SOF3:
+ case CODE_SOF5:
+ case CODE_SOF6:
+ case CODE_SOF7:
+ case CODE_SOF8:
+ case CODE_SOF9:
+ case CODE_SOF10:
+ case CODE_SOF11:
+ case CODE_SOF13:
+ case CODE_SOF14:
+ case CODE_SOF15:
+ case CODE_APP0:
+ case CODE_APP1:
+ case CODE_APP2:
+ case CODE_APP3:
+ case CODE_APP4:
+ case CODE_APP5:
+ case CODE_APP6:
+ case CODE_APP7:
+ case CODE_APP8:
+ case CODE_APP9:
+ case CODE_APP10:
+ case CODE_APP11:
+ case CODE_APP12:
+ case CODE_APP13:
+ case CODE_APP14:
+ case CODE_APP15:
+ case CODE_DHT:
+ case CODE_DQT:
+ case CODE_COMMENT:
+ {
+ jpeg_segment_header->payload_size =
+ bspp_jpeg_readword_asdata(swsr_ctx) - 2;
+ }
+ break;
+ case CODE_EOI:
+ case CODE_SOI:
+ case CODE_RST0:
+ case CODE_RST1:
+ case CODE_RST2:
+ case CODE_RST3:
+ case CODE_RST4:
+ case CODE_RST5:
+ case CODE_RST6:
+ case CODE_RST7:
+ /*
+ * jpeg_segment_header->payload_size reset to 0 previously,
+ * so just break.
+ */
+ break;
+ case 0:
+ {
+ /*
+		 * Emulation prevention is OFF, which means that a 0 after
+		 * 0xff will not be swallowed and has to be treated as data.
+ */
+ bspp_jpeg_tryseek_and_consume_delimeters(swsr_ctx);
+ return 0;
+ }
+ default:
+ {
+ pr_err("BAD NAL=%#x\n", jpeg_segment_header->type);
+ unit_data->parse_error |= BSPP_ERROR_UNRECOVERABLE;
+ }
+ }
+
+ pr_debug("payloadSize=%#x\n", jpeg_segment_header->payload_size);
+ return 1;
+}
+
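+/*
+ * An MCU spans (8 * max_horz_factor) x (8 * max_vert_factor) luma samples,
+ * e.g. 16x16 for 4:2:0 content with 2x2 chroma subsampling.
+ */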
+static void bspp_jpeg_calculate_mcus(struct jpeg_segment_sof *data_sof,
+ unsigned char *alignment_width,
+ unsigned char *alignment_height)
+{
+ unsigned char i;
+ unsigned char max_horz_factor = 0;
+ unsigned char max_vert_factor = 0;
+ unsigned short mcu_width = 0;
+ unsigned short mcu_height = 0;
+
+ /* Determine maximum scale factors */
+ for (i = 0; i < data_sof->component; i++) {
+ unsigned char horz_factor = data_sof->components[i].horz_factor;
+ unsigned char vert_factor = data_sof->components[i].vert_factor;
+
+ max_horz_factor = horz_factor > max_horz_factor ? horz_factor : max_horz_factor;
+ max_vert_factor = vert_factor > max_vert_factor ? vert_factor : max_vert_factor;
+ }
+ /*
+	 * The alignment we want must be:
+	 * - a multiple of VDEC_MB_DIMENSION
+	 * - at least large enough to fit whole MCUs
+ */
+ *alignment_width =
+ VDEC_ALIGN_SIZE((8 * max_horz_factor), VDEC_MB_DIMENSION,
+ unsigned int, unsigned int);
+ *alignment_height =
+ VDEC_ALIGN_SIZE((8 * max_vert_factor), VDEC_MB_DIMENSION,
+ unsigned int, unsigned int);
+
+ /* Calculate dimensions in MCUs */
+ mcu_width += (data_sof->width + (8 * max_horz_factor) - 1) / (8 * max_horz_factor);
+ mcu_height += (data_sof->height + (8 * max_vert_factor) - 1) / (8 * max_vert_factor);
+
+#ifdef DEBUG_DECODER_DRIVER
+ pr_info("%s; w=%d; w[MCU]=%d\n", __func__, data_sof->width, mcu_width);
+ pr_info("%s; h=%d; h[MCU]=%d\n", __func__, data_sof->height, mcu_height);
+#endif
+}
+
+static int bspp_jpeg_common_seq_hdr_populate(struct jpeg_segment_sof *sof_header,
+ struct vdec_comsequ_hdrinfo *com_sequ_hdr_info,
+ unsigned char alignment_width,
+ unsigned char alignment_height)
+{
+ unsigned short i;
+ int res;
+ struct img_pixfmt_desc format_desc;
+
+ memset(&format_desc, 0, sizeof(struct img_pixfmt_desc));
+ memset(com_sequ_hdr_info, 0, sizeof(*com_sequ_hdr_info));
+
+ com_sequ_hdr_info->max_frame_size.width = VDEC_ALIGN_SIZE(sof_header->width,
+ alignment_width,
+ unsigned int, unsigned int);
+ com_sequ_hdr_info->max_frame_size.height = VDEC_ALIGN_SIZE(sof_header->height,
+ alignment_height, unsigned int,
+ unsigned int);
+ com_sequ_hdr_info->frame_size.width = sof_header->width;
+ com_sequ_hdr_info->frame_size.height = sof_header->height;
+ com_sequ_hdr_info->orig_display_region.width = sof_header->width;
+ com_sequ_hdr_info->orig_display_region.height = sof_header->height;
+
+ com_sequ_hdr_info->pixel_info.bitdepth_y = 8;
+ com_sequ_hdr_info->pixel_info.bitdepth_c = 8;
+ com_sequ_hdr_info->pixel_info.num_planes = sof_header->component;
+	/* Set the format according to the following table:
+ * H1 V1 H2 V2 H3 V3 J:a:b h/v
+ * 1 1 1 1 1 1 4:4:4 1/1
+ * 1 2 1 1 1 1 4:4:0 1/2
+ * 1 4 1 1 1 1 4:4:1* 1/4
+ * 1 4 1 2 1 2 4:4:0 1/2
+ * 2 1 1 1 1 1 4:2:2 2/1
+ * 2 2 1 1 1 1 4:2:0 2/2
+ * 2 2 2 1 2 1 4:4:0 1/2
+ * 2 4 1 1 1 1 4:2:1* 2/4
+ * 4 1 1 1 1 1 4:1:1 4/1
+ * 4 1 2 1 2 1 4:2:2 2/1
+ * 4 2 1 1 1 1 4:1:0 4/2
+ * 4 4 2 2 2 2 4:2:0 2/2
+ */
+ if (sof_header->component == (JPEG_MAX_COMPONENTS - 1)) {
+ com_sequ_hdr_info->pixel_info.chroma_fmt = PIXEL_MULTICHROME;
+ if ((sof_header->components[1].horz_factor == 1 &&
+ sof_header->components[1].vert_factor == 1) &&
+ (sof_header->components[2].horz_factor == 1 &&
+ sof_header->components[2].vert_factor == 1)) {
+ if (sof_header->components[0].horz_factor == 1 &&
+ sof_header->components[0].vert_factor == 1) {
+ com_sequ_hdr_info->pixel_info.chroma_fmt_idc = PIXEL_FORMAT_444;
+ } else if (sof_header->components[0].horz_factor == 2) {
+ if (sof_header->components[0].vert_factor == 1) {
+ com_sequ_hdr_info->pixel_info.chroma_fmt_idc =
+ PIXEL_FORMAT_422;
+ } else if (sof_header->components[0].vert_factor == 2) {
+ com_sequ_hdr_info->pixel_info.chroma_fmt_idc =
+ PIXEL_FORMAT_420;
+ } else {
+ com_sequ_hdr_info->pixel_info.chroma_fmt_idc =
+ PIXEL_FORMAT_444;
+ }
+ } else if ((sof_header->components[0].horz_factor == 4) &&
+ (sof_header->components[0].vert_factor == 1)) {
+ com_sequ_hdr_info->pixel_info.chroma_fmt_idc = PIXEL_FORMAT_411;
+ } else {
+ com_sequ_hdr_info->pixel_info.chroma_fmt_idc = PIXEL_FORMAT_444;
+ }
+ } else {
+ com_sequ_hdr_info->pixel_info.chroma_fmt_idc = PIXEL_FORMAT_444;
+ }
+ } else {
+ com_sequ_hdr_info->pixel_info.chroma_fmt = PIXEL_MONOCHROME;
+ com_sequ_hdr_info->pixel_info.chroma_fmt_idc = PIXEL_FORMAT_MONO;
+ }
+
+ for (i = 0; (i < sof_header->component) && (i < IMG_MAX_NUM_PLANES); i++) {
+ format_desc.planes[i] = 1;
+ format_desc.h_numer[i] = sof_header->components[i].horz_factor;
+ format_desc.v_numer[i] = sof_header->components[i].vert_factor;
+ }
+
+ res = pixel_gen_pixfmt(&com_sequ_hdr_info->pixel_info.pixfmt, &format_desc);
+ if (res != 0) {
+ pr_err("Failed to generate pixel format.\n");
+ return res;
+ }
+
+ return 0;
+}
+
+static void bspp_jpeg_pict_hdr_populate(struct jpeg_segment_sof *sof_header,
+ struct bspp_pict_hdr_info *pict_hdr_info)
+{
+ memset(pict_hdr_info, 0, sizeof(*pict_hdr_info));
+
+ pict_hdr_info->intra_coded = 1;
+ pict_hdr_info->ref = 0;
+
+ pict_hdr_info->coded_frame_size.width = (unsigned int)sof_header->width;
+ pict_hdr_info->coded_frame_size.height = (unsigned int)sof_header->height;
+ pict_hdr_info->disp_info.enc_disp_region.width = (unsigned int)sof_header->width;
+ pict_hdr_info->disp_info.enc_disp_region.height = (unsigned int)sof_header->height;
+
+ pict_hdr_info->pict_aux_data.id = BSPP_INVALID;
+ pict_hdr_info->second_pict_aux_data.id = BSPP_INVALID;
+ pict_hdr_info->pict_sgm_data.id = BSPP_INVALID;
+}
+
+static int bspp_jpeg_parse_picture_unit(void *swsr_ctx,
+ struct bspp_unit_data *unit_data)
+{
+ /* assume we'll be fine */
+ unit_data->parse_error = BSPP_ERROR_NONE;
+
+ while ((unit_data->parse_error == BSPP_ERROR_NONE) &&
+ !(unit_data->slice || unit_data->extracted_all_data)) {
+ struct jpeg_segment_header segment_header;
+ /*
+		 * Try hard to read the segment header. The only limit here is
+		 * EOD: if it is hit, an exception stops the loop.
+ */
+ while (!bspp_jpeg_segment_read_header(swsr_ctx, unit_data, &segment_header) &&
+ unit_data->parse_error == BSPP_ERROR_NONE)
+ ;
+
+ switch (segment_header.type) {
+ case CODE_SOF1:
+ case CODE_SOF2:
+ case CODE_SOF3:
+ case CODE_SOF5:
+ case CODE_SOF6:
+ case CODE_SOF8:
+ case CODE_SOF9:
+ case CODE_SOF10:
+ case CODE_SOF11:
+ case CODE_SOF13:
+ case CODE_SOF14:
+ case CODE_SOF15:
+ {
+ bspp_jpeg_consume_asdata(swsr_ctx, segment_header.payload_size);
+ bspp_jpeg_tryseek_delimeter(swsr_ctx);
+ unit_data->extracted_all_data = 1;
+ unit_data->slice = 1;
+ unit_data->parse_error |= BSPP_ERROR_UNSUPPORTED;
+ return IMG_ERROR_NOT_SUPPORTED;
+ }
+ case CODE_SOI:
+ {
+ /*
+ * Reinitialize context at the beginning of each image
+ */
+ }
+ break;
+ case CODE_EOI:
+ {
+ /*
+			 * More frames may be concatenated after this image,
+			 * but they are discarded for now.
+ */
+ while (bspp_jpeg_tryseek_and_consume_delimeters(swsr_ctx) != SWSR_FOUND_EOD)
+ ;
+ unit_data->extracted_all_data = 1;
+ return 0;
+ }
+ case CODE_SOF_BASELINE:
+ {
+ int res;
+ unsigned char alignment_width = 0;
+ unsigned char alignment_height = 0;
+ struct jpeg_segment_sof sof_data;
+
+ struct bspp_sequ_hdr_info *sequ_hdr_info =
+ &unit_data->impl_sequ_hdr_info->sequ_hdr_info;
+
+			memset(&sof_data, 0, sizeof(sof_data));
+
+ /* SOF is the only segment we are interested in- parse it */
+ unit_data->parse_error |= bspp_jpeg_segment_parse_sof(swsr_ctx, &sof_data);
+ /*
+ * to correctly allocate size for frame we need to have correct MCUs to
+ * get alignment info
+ */
+ bspp_jpeg_calculate_mcus(&sof_data, &alignment_width, &alignment_height);
+
+ /* fill in headers expected by BSPP framework */
+ res = bspp_jpeg_common_seq_hdr_populate(&sof_data,
+ &sequ_hdr_info->com_sequ_hdr_info,
+ alignment_width,
+ alignment_height);
+ if (res != 0) {
+ unit_data->parse_error |= BSPP_ERROR_UNRECOVERABLE;
+ return res;
+ }
+
+ bspp_jpeg_pict_hdr_populate(&sof_data, unit_data->out.pict_hdr_info);
+
+ /* fill in sequence IDs for header and picture */
+ sequ_hdr_info->sequ_hdr_id = BSPP_DEFAULT_SEQUENCE_ID;
+ unit_data->pict_sequ_hdr_id = BSPP_DEFAULT_SEQUENCE_ID;
+
+ /* reset SOS fields counter value */
+ unit_data->out.pict_hdr_info->sos_count = 0;
+ }
+ break;
+ case CODE_SOS:
+ {
+ /* increment the SOS fields counter */
+ unit_data->out.pict_hdr_info->sos_count++;
+
+ unit_data->slice = 1;
+ bspp_jpeg_consume_asdata(swsr_ctx, segment_header.payload_size);
+ return 0;
+ }
+ case CODE_DRI:
+ break;
+ default:
+ {
+#ifdef DEBUG_DECODER_DRIVER
+ pr_info("Skipping over 0x%x bytes\n", segment_header.payload_size);
+#endif
+ bspp_jpeg_consume_asdata(swsr_ctx, segment_header.payload_size);
+ }
+ break;
+ }
+ /*
+		 * After parsing a segment we should already be on a delimiter.
+		 * Consume it so header parsing can start.
+ */
+ bspp_jpeg_tryseek_and_consume_delimeters(swsr_ctx);
+ }
+ return 0;
+}
+
+int bspp_jpeg_unit_parser(void *swsr_ctx, struct bspp_unit_data *unit_data)
+{
+ int retval = 0;
+
+ switch (unit_data->unit_type) {
+ case BSPP_UNIT_PICTURE:
+ {
+ retval = bspp_jpeg_parse_picture_unit(swsr_ctx, unit_data);
+ unit_data->new_closed_gop = 1;
+ }
+ break;
+ default:
+ {
+ unit_data->parse_error = BSPP_ERROR_INVALID_VALUE;
+ }
+ break;
+ }
+
+ return retval;
+}
+
+int bspp_jpeg_setparser_config(enum vdec_bstr_format bstr_format,
+ struct bspp_vid_std_features *pvidstd_features,
+ struct bspp_swsr_ctx *pswsr_ctx,
+ struct bspp_parser_callbacks *pparser_callbacks,
+ struct bspp_inter_pict_data *pinterpict_data)
+{
+ /* Set JPEG parser callbacks. */
+ pparser_callbacks->parse_unit_cb = bspp_jpeg_unit_parser;
+
+ /* Set JPEG specific features. */
+ pvidstd_features->seq_size = sizeof(struct bspp_jpeg_sequ_hdr_info);
+ pvidstd_features->uses_vps = 0;
+ pvidstd_features->uses_pps = 0;
+
+ /* Set JPEG specific shift register config. */
+ pswsr_ctx->emulation_prevention = SWSR_EMPREVENT_NONE;
+ pswsr_ctx->sr_config.delim_type = SWSR_DELIM_SCP;
+ pswsr_ctx->sr_config.delim_length = 8;
+ pswsr_ctx->sr_config.scp_value = 0xFF;
+
+ return 0;
+}
+
+void bspp_jpeg_determine_unit_type(unsigned char bitstream_unittype,
+ int disable_mvc,
+ enum bspp_unit_type *bspp_unittype)
+{
+ *bspp_unittype = BSPP_UNIT_PICTURE;
+}
diff --git a/drivers/staging/media/vxd/decoder/jpeg_secure_parser.h b/drivers/staging/media/vxd/decoder/jpeg_secure_parser.h
new file mode 100644
index 000000000000..439a38504b96
--- /dev/null
+++ b/drivers/staging/media/vxd/decoder/jpeg_secure_parser.h
@@ -0,0 +1,37 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * JPEG secure data unit parsing API.
+ *
+ * Copyright (c) Imagination Technologies Ltd.
+ * Copyright (c) 2021 Texas Instruments Incorporated - http://www.ti.com/
+ *
+ * Authors:
+ * Sunita Nadampalli <[email protected]>
+ *
+ * Re-written for upstreaming
+ * Sidraya Jayagond <[email protected]>
+ */
+#ifndef __JPEGSECUREPARSER_H__
+#define __JPEGSECUREPARSER_H__
+
+#include "bspp_int.h"
+
+/**
+ * struct bspp_jpeg_sequ_hdr_info - dummy JPEG sequence header structure
+ * @dummy: unused placeholder member
+ */
+struct bspp_jpeg_sequ_hdr_info {
+ unsigned int dummy;
+};
+
+int bspp_jpeg_setparser_config(enum vdec_bstr_format bstr_format,
+ struct bspp_vid_std_features *pvidstd_features,
+ struct bspp_swsr_ctx *pswsr_ctx,
+ struct bspp_parser_callbacks *pparser_callbacks,
+ struct bspp_inter_pict_data *pinterpict_data);
+
+void bspp_jpeg_determine_unit_type(unsigned char bitstream_unittype,
+ int disable_mvc,
+ enum bspp_unit_type *bspp_unittype);
+
+#endif /*__JPEGSECUREPARSER_H__ */
diff --git a/drivers/staging/media/vxd/decoder/swsr.c b/drivers/staging/media/vxd/decoder/swsr.c
new file mode 100644
index 000000000000..d59f8b06b397
--- /dev/null
+++ b/drivers/staging/media/vxd/decoder/swsr.c
@@ -0,0 +1,1657 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Software Shift Register access functions
+ *
+ * Copyright (c) Imagination Technologies Ltd.
+ * Copyright (c) 2021 Texas Instruments Incorporated - http://www.ti.com/
+ *
+ * Authors:
+ * Lakshmi Sankar <[email protected]>
+ * Re-written for upstreaming
+ * Prashanth Kumar Amai <[email protected]>
+ * Sidraya Jayagond <[email protected]>
+ */
+
+#include <linux/dma-mapping.h>
+#include <media/v4l2-ctrls.h>
+#include <media/v4l2-device.h>
+#include <media/v4l2-mem2mem.h>
+
+#include "swsr.h"
+#include "vdec_defs.h"
+
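+/* Mask with the n least significant bits set. */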
+#define NBIT_8BYTE_MASK(n) ((1ULL << (n)) - 1)
+
+/* Input FIFO length (in bytes). */
+#define SWSR_INPUT_FIFO_LENGTH 8
+
+/* Output FIFO length (in bits). */
+#define SWSR_OUTPUT_FIFO_LENGTH 64
+
+#define SWSR_NALTYPE_LENGTH 8
+
+#define SWSR_MAX_SYNTAX_LENGTH 32
+
+#define SWSR_ASSERT(expected) ({WARN_ON(!(expected)); 0; })
+
+struct swsr_buffer {
+ void **lst_link;
+ /* Pointer to bitstream data. */
+ unsigned char *data;
+ /* Number of bytes of bitstream */
+ unsigned long long num_bytes;
+ /* Index (in bytes) to next data within the buffer */
+ unsigned long long byte_offset;
+ /* Number of bytes read from input FIFO */
+ unsigned long long num_bytes_read;
+};
+
+struct swsr_input {
+ /* Bitstream data (byte-based and pre emu prev) - left aligned. */
+ unsigned long long fifo;
+ /* Number of *bytes* in Input FIFO */
+ unsigned int num_bytes;
+ struct swsr_config config;
+ /* Emulation prevention mode used to process data in Input FIFO */
+ enum swsr_emprevent emprevent;
+ /* Number of bytes in emulation prevention sequence */
+ unsigned int emprev_seq_len;
+ /* Size of bitstream declared at initialisation */
+ unsigned long long bitstream_size;
+ /*
+ * Number of bytes required from input buffer before checking
+ * next emulation prevention sequence.
+ */
+ unsigned int bytes_for_next_sequ;
+ /* Byte count read from size delimiter */
+ unsigned long long byte_count;
+ unsigned long long bytes_read_since_delim;
+ /* Cumulative offset (in bytes) into input buffer data */
+ unsigned long long bitstream_offset;
+ /* Bitstream delimiter found (see #SWSR_delim_type) */
+ unsigned char delim_found;
+ /*
+ * No More Valid Data before next delimiter.
+ * Set only for SWSR_EMPREVENT_00000300.
+ */
+ unsigned char no_moredata;
+ /* Pointer to current input buffer in the context of Input FIFO */
+ struct swsr_buffer *buf;
+ /* Start offset within buffer of current delimited unit */
+ long delimited_unit_start_offset;
+ /* Size of current delimited unit (if already calculated) */
+ unsigned int delimited_unit_size;
+ /* Current bit offset within the current delimited unit */
+ unsigned int delimunit_bitofst;
+};
+
+struct swsr_output {
+ /*
+	 * Bitstream data (post emulation prevention removal and
+	 * delimiter checking) - left aligned.
+ */
+ unsigned long long fifo;
+ /* Number of *bits* in Output FIFO */
+ unsigned int num_bits;
+ unsigned long long totalbits_consumed;
+};
+
+struct swsr_buffer_ctx {
+ /*
+ * Callback function to notify event and provide/request data.
+ * See #SWSR_eCbEvent for event types and description
+ * of CB argument usage.
+ */
+ swsr_callback_fxn cb_fxn;
+ /* Caller supplied pointer for callback */
+ void *cb_param;
+ /* List of buffers */
+ struct lst_t free_buffer_list;
+ /*
+ * List of buffers (#SWSR_sBufferCtx) whose data reside
+ * in the Input/Output FIFOs.
+ */
+ struct lst_t used_buffer_list;
+};
+
+struct swsr_context {
+ /* IMG_TRUE if the context is initialised */
+ unsigned char initialised;
+ /* A pointer to an exception handler */
+ swsr_except_handler_fxn exception_handler_fxn;
+ /* Caller supplied pointer */
+ void *pexception_param;
+ /* Last recorded exception */
+ enum swsr_exception exception;
+ /* Buffer context data */
+ struct swsr_buffer_ctx buffer_ctx;
+ /* Context of shift register input. */
+ struct swsr_input input;
+ /* Context of shift register output */
+ struct swsr_output output;
+};
+
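+/*
+ * Build a 64-bit value with 'mask' in the most significant 'nbits' and all
+ * lower bits set to one.
+ */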
+static unsigned long long left_aligned_nbit_8byte_mask(unsigned int mask, unsigned int nbits)
+{
+ return (((unsigned long long)mask << (64 - nbits)) |
+ (unsigned long long)NBIT_8BYTE_MASK(64 - nbits));
+}
+
+/*
+ * Extract the next byte from the input buffer, requesting a new buffer via
+ * the callback when the current one is exhausted and there are still more
+ * bytes declared in the bitstream.
+ */
+static int swsr_extractbyte(struct swsr_context *ctx, unsigned char *byte_ext)
+{
+ struct swsr_input *input;
+ struct swsr_buffer_ctx *buf_ctx;
+ unsigned char byte = 0;
+ unsigned long long cur_byte_offset;
+ unsigned int result = 0;
+
+ if (!ctx || !byte_ext)
+ return IMG_ERROR_FATAL;
+
+ input = &ctx->input;
+ buf_ctx = &ctx->buffer_ctx;
+
+ cur_byte_offset = input->bitstream_offset;
+
+ if (input->buf && input->buf->byte_offset < input->buf->num_bytes) {
+ input->bitstream_offset++;
+ byte = input->buf->data[input->buf->byte_offset++];
+ } else if (input->bitstream_offset < input->bitstream_size) {
+ struct swsr_buffer *buffer;
+
+ buffer = lst_removehead(&buf_ctx->free_buffer_list);
+ if (!buffer)
+ return IMG_ERROR_FATAL;
+
+ buffer->num_bytes_read = 0;
+ buffer->byte_offset = 0;
+
+ buf_ctx->cb_fxn(SWSR_EVENT_INPUT_BUFFER_START,
+ buf_ctx->cb_param, 0,
+ &buffer->data, &buffer->num_bytes);
+ SWSR_ASSERT(buffer->data && buffer->num_bytes > 0);
+
+ if (buffer->data && buffer->num_bytes > 0) {
+ input->buf = buffer;
+
+ /* Add input buffer to output buffer list. */
+ lst_add(&buf_ctx->used_buffer_list, input->buf);
+
+ input->bitstream_offset++;
+ byte = input->buf->data[input->buf->byte_offset++];
+ }
+ }
+
+ {
+ struct swsr_buffer *buffer = input->buf;
+
+ if (!buffer)
+ buffer = lst_first(&buf_ctx->used_buffer_list);
+
+ if (!buffer || buffer->num_bytes_read > buffer->num_bytes) {
+ input->delimited_unit_start_offset = -1;
+ input->delimited_unit_size = 0;
+ }
+ }
+ /* If the bitstream offset hasn't increased we failed to read a byte. */
+ if (cur_byte_offset == input->bitstream_offset) {
+ input->buf = NULL;
+ result = IMG_ERROR_COULD_NOT_OBTAIN_RESOURCE;
+ }
+
+ *byte_ext = byte;
+
+ return result;
+}
+
+static unsigned char swsr_checkfor_delimiter(struct swsr_context *ctx)
+{
+ struct swsr_input *input;
+ unsigned char delim_found = 0;
+
+ input = &ctx->input;
+
+ /* Check for delimiter. */
+ if (input->config.delim_type == SWSR_DELIM_SCP) {
+ unsigned int shift = (SWSR_INPUT_FIFO_LENGTH * 8)
+ - input->config.delim_length;
+ unsigned long long sequ = input->fifo >> shift;
+
+ /*
+ * Check if the SCP value is matched outside of
+ * emulation prevention data.
+ */
+ if (sequ == input->config.scp_value && input->bytes_for_next_sequ == 0)
+ delim_found = 1;
+
+ } else if (input->config.delim_type == SWSR_DELIM_SIZE) {
+ delim_found = (input->bytes_read_since_delim >= input->byte_count) ? 1 : 0;
+ }
+
+ return delim_found;
+}
+
+static int swsr_increment_cur_bufoffset(struct swsr_context *ctx)
+{
+ struct swsr_buffer_ctx *buf_ctx;
+ struct swsr_buffer *cur_buf;
+
+ buf_ctx = &ctx->buffer_ctx;
+
+ /* Update the number of bytes read from input FIFO for current buffer */
+ cur_buf = lst_first(&buf_ctx->used_buffer_list);
+ if (cur_buf->num_bytes_read >= cur_buf->num_bytes) {
+ /* Mark current bitstream buffer as fully consumed */
+ cur_buf->num_bytes_read = cur_buf->num_bytes;
+
+ /* Notify the application that the old buffer is exhausted. */
+ buf_ctx->cb_fxn(SWSR_EVENT_OUTPUT_BUFFER_END,
+ buf_ctx->cb_param, 0,
+ NULL, NULL);
+
+ /*
+ * Discard the buffer whose data was at the head of
+ * the input FIFO.
+ */
+ cur_buf = lst_removehead(&buf_ctx->used_buffer_list);
+ /* Add the buffer container to free list. */
+ lst_add(&buf_ctx->free_buffer_list, cur_buf);
+
+ /*
+		 * Since the byte that we read actually came from the next
+		 * buffer, increment its counter.
+ */
+ cur_buf = lst_first(&buf_ctx->used_buffer_list);
+ cur_buf->num_bytes_read++;
+ } else {
+ cur_buf->num_bytes_read++;
+ }
+
+ return 0;
+}
+
+static enum swsr_found swsr_readbyte_from_inputfifo(struct swsr_context *ctx,
+ unsigned char *byte)
+{
+ struct swsr_input *input;
+ enum swsr_found found = SWSR_FOUND_NONE;
+ unsigned int result = 0;
+
+ input = &ctx->input;
+
+ input->delim_found |= swsr_checkfor_delimiter(ctx);
+
+ /*
+ * Refill the input FIFO before checking for emulation prevention etc.
+ * The only exception is when there are no more bytes left to extract
+ * from input buffer.
+ */
+ while (input->num_bytes < SWSR_INPUT_FIFO_LENGTH && result == 0) {
+ unsigned char byte;
+
+ result = swsr_extractbyte(ctx, &byte);
+ if (result == 0) {
+ input->fifo |= ((unsigned long long)byte <<
+ ((SWSR_INPUT_FIFO_LENGTH - 1 - input->num_bytes) * 8));
+ input->num_bytes += 1;
+ }
+ }
+
+ if (input->num_bytes == 0) {
+ found = SWSR_FOUND_EOD;
+ } else if (!input->delim_found) {
+ /*
+ * Check for emulation prevention when enabled and enough
+ * bytes are remaining in input FIFO.
+ */
+ if (input->emprevent != SWSR_EMPREVENT_NONE &&
+ /*
+ * Ensure you have enough bytes to check for emulation
+ * prevention.
+ */
+ input->num_bytes >= input->emprev_seq_len &&
+ (input->config.delim_type != SWSR_DELIM_SIZE ||
+ /*
+ * Ensure that you don't remove emu bytes beyond current
+ * delimited unit.
+ */
+ ((input->bytes_read_since_delim + input->emprev_seq_len) <
+ input->byte_count)) && input->bytes_for_next_sequ == 0) {
+ unsigned char emprev_removed = 0;
+ unsigned int shift = (SWSR_INPUT_FIFO_LENGTH - input->emprev_seq_len) * 8;
+ unsigned long long sequ = input->fifo >> shift;
+
+ if (input->emprevent == SWSR_EMPREVENT_00000300) {
+ if ((sequ & 0xffffff00) == 0x00000300) {
+ if ((sequ & 0x000000ff) > 0x03)
+						pr_err("Invalid start code emulation prevention bytes found\n");
+
+ /*
+ * Instead of trying to remove the emulation prevention
+ * byte from the middle of the FIFO simply make it zero
+ * and drop the next byte from the FIFO which will
+ * also be zero.
+ */
+ input->fifo &= left_aligned_nbit_8byte_mask
+ (0xffff00ff,
+ input->emprev_seq_len * 8);
+ input->fifo <<= 8;
+
+ emprev_removed = 1;
+ } else if ((sequ & 0xffffffff) == 0x00000000 ||
+ (sequ & 0xffffffff) == 0x00000001) {
+ input->no_moredata = 1;
+ }
+ } else if (input->emprevent == SWSR_EMPREVENT_ff00) {
+ if (sequ == 0xff00) {
+ /* Remove the zero byte. */
+ input->fifo <<= 8;
+ input->fifo |= (0xff00ULL << shift);
+ emprev_removed = 1;
+ }
+ } else if (input->emprevent == SWSR_EMPREVENT_000002) {
+				/*
+				 * Remove the emulation prevention byte when the
+				 * byte-aligned sequence 0x000002 (i.e. 22 zero bits
+				 * followed by "10") is found.
+				 */
+ if (sequ == 0x000002) {
+ /*
+ * Appear to "remove" the 0x02 byte by clearing
+ * it and then dropping the top (zero) byte.
+ */
+ input->fifo &= left_aligned_nbit_8byte_mask
+ (0xffff00,
+ input->emprev_seq_len * 8);
+ input->fifo <<= 8;
+ emprev_removed = 1;
+ }
+ }
+
+ if (emprev_removed) {
+ input->num_bytes--;
+ input->bytes_read_since_delim++;
+
+				/*
+				 * Increment the buffer offset for the
+				 * byte that has been removed.
+				 */
+ swsr_increment_cur_bufoffset(ctx);
+
+ /*
+ * Signal that two more new bytes in the emulation
+ * prevention sequence are required before another match
+ * can be made.
+ */
+ input->bytes_for_next_sequ = input->emprev_seq_len - 2;
+ }
+ }
+
+ if (input->bytes_for_next_sequ > 0)
+ input->bytes_for_next_sequ--;
+
+		/* Return the first byte of the data read from the FIFO. */
+ *byte = (unsigned char)(input->fifo >> ((SWSR_INPUT_FIFO_LENGTH - 1) * 8));
+ input->fifo <<= 8;
+
+ input->num_bytes--;
+ input->bytes_read_since_delim++;
+
+ /* Increment the buffer offset for byte that has been read. */
+ swsr_increment_cur_bufoffset(ctx);
+
+ found = SWSR_FOUND_DATA;
+ } else {
+ found = SWSR_FOUND_DELIM;
+ }
+
+ return found;
+}
+
+static enum swsr_found swsr_consumebyte_from_inputfifo
+ (struct swsr_context *ctx, unsigned char *byte)
+{
+ enum swsr_found found;
+
+ found = swsr_readbyte_from_inputfifo(ctx, byte);
+
+ if (found == SWSR_FOUND_DATA) {
+ /* Only whole bytes can be read from Input FIFO. */
+ ctx->output.totalbits_consumed += 8;
+ ctx->input.delimunit_bitofst += 8;
+ }
+
+ return found;
+}
+
+static int swsr_fill_outputfifo(struct swsr_context *ctx)
+{
+ unsigned char byte;
+ enum swsr_found found = SWSR_FOUND_DATA;
+
+ /* Fill output FIFO with whole bytes up to (but not over) max length */
+ while (ctx->output.num_bits <= (SWSR_OUTPUT_FIFO_LENGTH - 8) && found == SWSR_FOUND_DATA) {
+ found = swsr_readbyte_from_inputfifo(ctx, &byte);
+ if (found == SWSR_FOUND_DATA) {
+ ctx->output.fifo |= ((unsigned long long)byte <<
+ (SWSR_OUTPUT_FIFO_LENGTH - 8 - ctx->output.num_bits));
+ ctx->output.num_bits += 8;
+ }
+ }
+
+ return 0;
+}
+
+static unsigned int swsr_getbits_from_outputfifo(struct swsr_context *ctx,
+ unsigned int numbits,
+ unsigned char bconsume)
+{
+ unsigned int bitsread;
+
+ /*
+ * Fetch more bits from the input FIFO if the output FIFO
+ * doesn't have enough bits to satisfy the request on its own.
+ */
+ if (numbits > ctx->output.num_bits)
+ swsr_fill_outputfifo(ctx);
+
+	/* Ensure that there are now enough bits in the output FIFO. */
+ if (numbits > ctx->output.num_bits) {
+ /* Tried to access into an SCP or other delimiter. */
+ if (ctx->input.delim_found) {
+ ctx->exception = SWSR_EXCEPT_ACCESS_INTO_SCP;
+ } else {
+ /*
+ * Data has been exhausted if after extracting bits
+ * there are still not enough bits in the internal
+ * storage to fulfil the number requested.
+ */
+ ctx->exception = SWSR_EXCEPT_ACCESS_BEYOND_EOD;
+ }
+
+ ctx->exception_handler_fxn(ctx->exception, ctx->pexception_param);
+
+ /* Return zero if the bits couldn't be obtained */
+ bitsread = 0;
+ } else {
+ unsigned int shift;
+
+ /* Extract all the bits from the output FIFO */
+ shift = (SWSR_OUTPUT_FIFO_LENGTH - numbits);
+ bitsread = (unsigned int)(ctx->output.fifo >> shift);
+
+ if (bconsume) {
+ /* Update output FIFO. */
+ ctx->output.fifo <<= numbits;
+ ctx->output.num_bits -= numbits;
+ }
+ }
+
+ if (bconsume && ctx->exception == SWSR_EXCEPT_NO_EXCEPTION) {
+ ctx->output.totalbits_consumed += numbits;
+ ctx->input.delimunit_bitofst += numbits;
+ }
+
+ /* Return the bits */
+ return bitsread;
+}
+
+int swsr_read_signed_expgoulomb(void *ctx_hndl)
+{
+ struct swsr_context *ctx = (struct swsr_context *)ctx_hndl;
+ unsigned int exp_goulomb;
+ unsigned char unsign;
+
+ /* Validate input arguments. */
+ if (!ctx) {
+ pr_err("Invalid arguments to function: %s\n", __func__);
+ return IMG_ERROR_INVALID_PARAMETERS;
+ }
+
+ if (!ctx->initialised) {
+		pr_err("SWSR not yet initialised: %s\n", __func__);
+ return IMG_ERROR_NOT_INITIALISED;
+ }
+
+ /* Read unsigned value then convert to signed value */
+ exp_goulomb = swsr_read_unsigned_expgoulomb(ctx);
+
+ unsign = exp_goulomb & 1;
+ exp_goulomb >>= 1;
+ exp_goulomb = (unsign) ? exp_goulomb + 1 : -(int)exp_goulomb;
+
+ if (ctx->exception != SWSR_EXCEPT_NO_EXCEPTION)
+ ctx->exception_handler_fxn(ctx->exception, ctx->pexception_param);
+
+ /* Return the signed value */
+ return exp_goulomb;
+}
+
+static unsigned int swsr_readunsigned_expgoulomb(struct swsr_context *ctx)
+{
+ unsigned int numbits = 0;
+ unsigned int bitpeeked;
+ unsigned int bitread;
+ unsigned int setbits;
+ unsigned int expgoulomb;
+
+	/*
+	 * Loop until a non-zero bit is peeked or 31 leading zero bits have
+	 * been seen (peeks are limited to 31 bits to avoid an illegal
+	 * 32-bit peek).
+	 */
+ numbits = 1;
+ do {
+ bitpeeked = swsr_peekbits(ctx, numbits);
+		/* Check for a non-zero bit */
+ if (bitpeeked != 0)
+ break;
+
+ numbits++;
+
+ } while (numbits < 32);
+
+ /* Correct the number of leading zero bits */
+ numbits--;
+
+ if (bitpeeked) {
+ /* read leading zeros and 1-bit */
+ bitread = swsr_read_bits(ctx, numbits + 1);
+ if (bitread != 1)
+ ctx->exception = SWSR_EXCEPT_EXPGOULOMB_ERROR;
+ } else {
+ /*
+ * read 31 zero bits - special case to deal with 31 or 32
+ * leading zeros
+ */
+ bitread = swsr_read_bits(ctx, 31);
+ if (bitread != 0)
+ ctx->exception = SWSR_EXCEPT_EXPGOULOMB_ERROR;
+
+ /*
+ * next 3 bits make either 31 0-bit code:'1xx',
+ * or 32 0-bit code:'010'
+ */
+ /*
+ * only valid 32 0-bit code is:'0..010..0'
+ * and results in 0xffffffff
+ */
+ bitpeeked = swsr_peekbits(ctx, 3);
+
+ if (ctx->exception == SWSR_EXCEPT_NO_EXCEPTION) {
+ if (0x4 & bitpeeked) {
+ bitread = swsr_read_bits(ctx, 1);
+ numbits = 31;
+ } else {
+ if (bitpeeked != 2)
+ ctx->exception = SWSR_EXCEPT_EXPGOULOMB_ERROR;
+
+ bitread = swsr_read_bits(ctx, 3);
+ bitread = swsr_read_bits(ctx, 31);
+ if (bitread != 0)
+ ctx->exception = SWSR_EXCEPT_EXPGOULOMB_ERROR;
+
+ return 0xffffffff;
+ }
+ } else {
+ /* encountered an exception while reading code */
+ /* just return a valid value */
+ return 0;
+ }
+ }
+
+ /* read data bits */
+ bitread = 0;
+ if (numbits)
+ bitread = swsr_read_bits(ctx, numbits);
+
+ /* convert exp-goulomb to value */
+ setbits = (1 << numbits) - 1;
+ expgoulomb = setbits + bitread;
+ /* Return the value */
+ return expgoulomb;
+}
+
+unsigned int swsr_read_unsigned_expgoulomb(void *ctx_hndl)
+{
+ struct swsr_context *ctx = (struct swsr_context *)ctx_hndl;
+ unsigned int value;
+
+ /* Validate input arguments. */
+ if (!ctx) {
+ pr_err("Invalid arguments to function: %s\n", __func__);
+ return IMG_ERROR_INVALID_PARAMETERS;
+ }
+
+ if (!ctx->initialised) {
+ pr_err("SWSR not yet initialised: %s\n", __func__);
+ return IMG_ERROR_NOT_INITIALISED;
+ }
+
+ value = swsr_readunsigned_expgoulomb(ctx);
+
+ if (ctx->exception != SWSR_EXCEPT_NO_EXCEPTION)
+ ctx->exception_handler_fxn(ctx->exception, ctx->pexception_param);
+
+ return value;
+}
+
+enum swsr_exception swsr_check_exception(void *ctx_hndl)
+{
+ struct swsr_context *ctx = (struct swsr_context *)ctx_hndl;
+ enum swsr_exception exception;
+
+ /* Validate input arguments. */
+ if (!ctx) {
+ pr_err("Invalid arguments to function: %s\n", __func__);
+ return (enum swsr_exception)IMG_ERROR_INVALID_PARAMETERS;
+ }
+
+ exception = ctx->exception;
+
+ if (!ctx->initialised) {
+ pr_err("SWSR not yet initialised: %s\n", __func__);
+ return (enum swsr_exception)IMG_ERROR_NOT_INITIALISED;
+ }
+
+ ctx->exception = SWSR_EXCEPT_NO_EXCEPTION;
+ return exception;
+}
+
+int swsr_check_more_rbsp_data(void *ctx_hndl, unsigned char *more_rbsp_data)
+{
+ struct swsr_context *ctx = (struct swsr_context *)ctx_hndl;
+
+ int rembitsinbyte;
+ unsigned char currentbyte;
+ int numof_aligned_rembits;
+ unsigned long long rest_alignedbytes;
+ unsigned char moredata = 0;
+
+ /* Validate input arguments. */
+ if (!ctx) {
+ pr_err("Invalid arguments to function: %s\n", __func__);
+ return IMG_ERROR_INVALID_PARAMETERS;
+ }
+
+ if (!ctx->initialised) {
+ pr_err("SWSR not yet initialised: %s\n", __func__);
+ return IMG_ERROR_NOT_INITIALISED;
+ }
+
+ if (ctx->input.emprevent != SWSR_EMPREVENT_00000300) {
+ pr_err("SWSR cannot determine More RBSP data for a stream without SWSR_EMPREVENT_00000300: %s\n",
+ __func__);
+ return IMG_ERROR_OPERATION_PROHIBITED;
+ }
+
+ /*
+ * Always fill the output FIFO to ensure the no_moredata flag is set
+ * when there are enough remaining bytes
+ */
+
+ swsr_fill_outputfifo(ctx);
+
+ if (ctx->output.num_bits != 0) {
+ /* Calculate the number of bits in the MS byte */
+ rembitsinbyte = (ctx->output.num_bits & 0x7);
+ if (rembitsinbyte == 0)
+ rembitsinbyte = 8;
+
+ numof_aligned_rembits = (ctx->output.num_bits - rembitsinbyte);
+
+ /* Peek the value of last byte. */
+ currentbyte = swsr_peekbits(ctx, rembitsinbyte);
+ rest_alignedbytes = (ctx->output.fifo >>
+ (64 - ctx->output.num_bits)) &
+ ((1ULL << numof_aligned_rembits) - 1);
+
+ if ((currentbyte == (1 << (rembitsinbyte - 1))) &&
+ (numof_aligned_rembits == 0 || (rest_alignedbytes == 0 &&
+ ((((((unsigned int)numof_aligned_rembits >> 3)) <
+ ctx->input.emprev_seq_len) &&
+ ctx->input.num_bytes == 0) || ctx->input.no_moredata))))
+ moredata = 0;
+ else
+ moredata = 1;
+ }
+
+ *more_rbsp_data = moredata;
+
+ return 0;
+}
+
+unsigned int swsr_read_onebit(void *ctx_hndl)
+{
+ struct swsr_context *ctx = (struct swsr_context *)ctx_hndl;
+ unsigned int bitread;
+
+ /* Validate input arguments. */
+ if (!ctx_hndl) {
+ VDEC_ASSERT(0);
+ return -EIO;
+ }
+
+ ctx = (struct swsr_context *)ctx_hndl;
+
+ if (!ctx->initialised) {
+ pr_err("SWSR not yet initialised: %s\n", __func__);
+ return IMG_ERROR_NOT_INITIALISED;
+ }
+
+	/*
+	 * Read a single bit via the generic routine (a specialised inline
+	 * version could be used here).
+	 */
+ bitread = swsr_read_bits(ctx, 1);
+
+ return bitread;
+}
+
+unsigned int swsr_read_bits(void *ctx_hndl, unsigned int no_bits)
+{
+ struct swsr_context *ctx;
+
+ /* Validate input arguments. */
+ if (!ctx_hndl) {
+ VDEC_ASSERT(0);
+ return -EIO;
+ }
+
+ ctx = (struct swsr_context *)ctx_hndl;
+
+ /* Validate input arguments. */
+ if (!ctx->initialised) {
+ pr_err("%s: Invalid SWSR context\n", __func__);
+ ctx->exception = SWSR_EXCEPT_INVALID_CONTEXT;
+ ctx->exception_handler_fxn(ctx->exception, ctx->pexception_param);
+
+ return 0;
+ }
+
+ if (no_bits > SWSR_MAX_SYNTAX_LENGTH) {
+ pr_err("Maximum symbol length exceeded\n");
+ ctx->exception = SWSR_EXCEPT_WRONG_CODEWORD_ERROR;
+ ctx->exception_handler_fxn(ctx->exception, ctx->pexception_param);
+
+ return 0;
+ }
+
+ return swsr_getbits_from_outputfifo(ctx, no_bits, 1);
+}
+
+int swsr_read_signedbits(void *ctx_hndl, unsigned int no_bits)
+{
+ struct swsr_context *ctx;
+ int outbits = 0;
+
+ /* Validate input arguments. */
+ if (!ctx_hndl) {
+ VDEC_ASSERT(0);
+ return -EIO;
+ }
+
+ ctx = (struct swsr_context *)ctx_hndl;
+
+ /* Check if the context has been initialized. */
+ if (!ctx->initialised) {
+ pr_err("%s: Invalid SWSR context\n", __func__);
+ ctx->exception = SWSR_EXCEPT_INVALID_CONTEXT;
+ ctx->exception_handler_fxn(ctx->exception, ctx->pexception_param);
+
+ return 0;
+ }
+
+ if ((no_bits + 1) > SWSR_MAX_SYNTAX_LENGTH) {
+ pr_err("Maximum symbol length exceeded\n");
+ ctx->exception = SWSR_EXCEPT_WRONG_CODEWORD_ERROR;
+ ctx->exception_handler_fxn(ctx->exception, ctx->pexception_param);
+
+ return 0;
+ }
+ outbits = swsr_getbits_from_outputfifo(ctx, no_bits, 1);
+
+ return (swsr_getbits_from_outputfifo(ctx, 1, 1)) ? -outbits : outbits;
+}
+
+unsigned int swsr_peekbits(void *ctx_hndl, unsigned int no_bits)
+{
+ struct swsr_context *ctx;
+
+ /* validate input parameters */
+ if (!ctx_hndl) {
+ VDEC_ASSERT(0);
+ return -EIO;
+ }
+
+ ctx = (struct swsr_context *)ctx_hndl;
+
+ /* Validate input arguments. */
+ if (!ctx->initialised) {
+ pr_err("%s: Invalid SWSR context\n", __func__);
+ ctx->exception = SWSR_EXCEPT_INVALID_CONTEXT;
+ ctx->exception_handler_fxn(ctx->exception, ctx->pexception_param);
+
+ return 0;
+ }
+
+ if (no_bits > SWSR_MAX_SYNTAX_LENGTH) {
+ pr_err("Maximum symbol length exceeded\n");
+ ctx->exception = SWSR_EXCEPT_WRONG_CODEWORD_ERROR;
+ ctx->exception_handler_fxn(ctx->exception, ctx->pexception_param);
+
+ return 0;
+ }
+
+ return swsr_getbits_from_outputfifo(ctx, no_bits, 0);
+}
+
+int swsr_byte_align(void *ctx_hndl)
+{
+ struct swsr_context *ctx = (struct swsr_context *)ctx_hndl;
+ unsigned int numbits;
+
+ /* Validate input arguments. */
+ if (!ctx) {
+ pr_err("Invalid arguments to function: %s\n", __func__);
+ return IMG_ERROR_INVALID_PARAMETERS;
+ }
+
+ if (!ctx->initialised) {
+ pr_err("SWSR not yet initialised: %s\n", __func__);
+ return IMG_ERROR_NOT_INITIALISED;
+ }
+
+ numbits = (ctx->output.num_bits & 0x7);
+ /* Read the required number of bits if not already byte-aligned. */
+ if (numbits != 0)
+ swsr_read_bits(ctx, numbits);
+
+ SWSR_ASSERT((ctx->output.num_bits & 0x7) == 0);
+
+ return 0;
+}
+
+int swsr_get_total_bitsconsumed(void *ctx_hndl, unsigned long long *total_bitsconsumed)
+{
+ struct swsr_context *ctx = (struct swsr_context *)ctx_hndl;
+
+ /* Validate input arguments. */
+ if (!ctx || !total_bitsconsumed) {
+ pr_err("Invalid arguments to function: %s\n", __func__);
+ return IMG_ERROR_INVALID_PARAMETERS;
+ }
+
+ if (!ctx->initialised) {
+ pr_err("SWSR not yet initialised: %s\n", __func__);
+ return IMG_ERROR_NOT_INITIALISED;
+ }
+
+ *total_bitsconsumed = ctx->output.totalbits_consumed;
+
+ return 0;
+}
+
+int swsr_get_byte_offset_curbuf(void *ctx_hndl, unsigned long long *byte_offset)
+{
+ struct swsr_context *ctx = (struct swsr_context *)ctx_hndl;
+ struct swsr_buffer *outbuf;
+
+ /* Validate input arguments. */
+ if (!ctx || !byte_offset) {
+ pr_err("Invalid arguments to function: %s\n", __func__);
+ return IMG_ERROR_INVALID_PARAMETERS;
+ }
+
+ if (!ctx->initialised) {
+ pr_err("SWSR not yet initialised: %s\n", __func__);
+ return IMG_ERROR_NOT_INITIALISED;
+ }
+
+ if (ctx->output.num_bits != 0) {
+ pr_err("SWSR output FIFO not empty. First seek to next delimiter: %s\n",
+ __func__);
+ return IMG_ERROR_OPERATION_PROHIBITED;
+ }
+
+ outbuf = lst_first(&ctx->buffer_ctx.used_buffer_list);
+ if (outbuf)
+ *byte_offset = outbuf->num_bytes_read;
+ else
+ return IMG_ERROR_COULD_NOT_OBTAIN_RESOURCE;
+
+ return 0;
+}
+
+static int swsr_update_emprevent(enum swsr_emprevent emprevent,
+ struct swsr_context *ctx)
+{
+ struct swsr_input *input;
+
+ input = &ctx->input;
+
+ input->emprevent = emprevent;
+ switch (input->emprevent) {
+ case SWSR_EMPREVENT_00000300:
+ input->emprev_seq_len = 4;
+ break;
+
+ case SWSR_EMPREVENT_ff00:
+ input->emprev_seq_len = 2;
+ break;
+
+ case SWSR_EMPREVENT_000002:
+ input->emprev_seq_len = 3;
+ break;
+
+ default:
+ input->emprev_seq_len = 0;
+ break;
+ }
+
+ return 0;
+}
+
+int swsr_consume_delim(void *ctx_hndl, enum swsr_emprevent emprevent,
+ unsigned int size_delim_length, unsigned long long *byte_count)
+{
+ struct swsr_context *ctx = (struct swsr_context *)ctx_hndl;
+ struct swsr_input *input;
+ unsigned long long delimiter = 0;
+
+ /* Validate input arguments. */
+ if (!ctx || emprevent >= SWSR_EMPREVENT_MAX ||
+ (ctx->input.config.delim_type == SWSR_DELIM_SIZE &&
+ size_delim_length > SWSR_MAX_DELIM_LENGTH)) {
+ pr_err("Invalid arguments to function: %s\n", __func__);
+ return IMG_ERROR_INVALID_PARAMETERS;
+ }
+
+ if (!ctx->initialised) {
+ pr_err("SWSR not yet initialised: %s\n", __func__);
+ return IMG_ERROR_NOT_INITIALISED;
+ }
+
+ if (ctx->input.config.delim_type == SWSR_DELIM_SIZE &&
+ size_delim_length == 0 && !byte_count) {
+ pr_err("Byte count value must be provided when size delimiter is zero length: %s\n",
+ __func__);
+ return IMG_ERROR_INVALID_PARAMETERS;
+ }
+
+ input = &ctx->input;
+
+ /*
+ * Ensure that the input is at a delimiter since emulation prevention
+ * removal will not have spanned into this next unit.
+ * This allows emulation prevention detection modes to be changed.
+ * Now check for delimiter.
+ */
+ input->delim_found = swsr_checkfor_delimiter(ctx);
+
+ if (!input->delim_found)
+ return IMG_ERROR_UNEXPECTED_STATE;
+
+ /* Output bitstream FIFOs should be empty. */
+ /* NOTE: flush output queue using seek function. */
+ SWSR_ASSERT(ctx->output.num_bits == 0);
+
+ /* Only update the delimiter length for size delimiters. */
+ if (input->config.delim_type == SWSR_DELIM_SIZE)
+ input->config.delim_length = size_delim_length;
+
+ /* Update the emulation prevention detection/removal scheme */
+ swsr_update_emprevent(emprevent, ctx);
+
+ /*
+ * Peek at the NAL type and return in callback only
+ * when delimiter is in bitstream.
+ */
+ if (input->config.delim_length) {
+ unsigned int shift;
+ unsigned char naltype;
+
+ /*
+ * Peek at the next 8-bits after the delimiter that
+ * resides in internal FIFO.
+ */
+ shift = SWSR_OUTPUT_FIFO_LENGTH -
+ (input->config.delim_length + SWSR_NALTYPE_LENGTH);
+ naltype = (input->fifo >> shift) & NBIT_8BYTE_MASK(SWSR_NALTYPE_LENGTH);
+
+ /*
+ * Notify caller of NAL type so that bitstream segmentation
+ * can take place before the delimiter is consumed
+ */
+ ctx->buffer_ctx.cb_fxn(SWSR_EVENT_DELIMITER_NAL_TYPE, ctx->buffer_ctx.cb_param,
+ naltype, NULL, NULL);
+ }
+
+ /*
+ * Clear the delimiter found flag and reset bytes read to allow
+ * reading of data from input FIFO.
+ */
+ input->delim_found = 0;
+
+ if (input->config.delim_length != 0) {
+ unsigned long long scpvalue = input->config.scp_value;
+ unsigned int i;
+ unsigned char byte = 0;
+
+ /*
+ * Ensure that delimiter is not detected while delimiter
+ * is read.
+ */
+ if (input->config.delim_type == SWSR_DELIM_SIZE) {
+ input->bytes_read_since_delim = 0;
+ input->byte_count = (input->config.delim_length + 7) / 8;
+ } else if (input->config.delim_type == SWSR_DELIM_SCP) {
+ input->config.scp_value = 0xdeadbeefdeadbeefUL;
+ }
+
+ /*
+ * Fill output FIFO only with bytes at least partially
+ * used for delimiter.
+ */
+ for (i = 0; i < ((input->config.delim_length + 7) / 8); i++) {
+ swsr_readbyte_from_inputfifo(ctx, &byte);
+
+ ctx->output.fifo |= ((unsigned long long)byte <<
+ (SWSR_OUTPUT_FIFO_LENGTH - 8 - ctx->output.num_bits));
+ ctx->output.num_bits += 8;
+ }
+
+ /*
+ * Read delimiter from output FIFO leaving any remaining
+ * non-byte-aligned bits behind.
+ */
+ delimiter = swsr_getbits_from_outputfifo(ctx, input->config.delim_length, 1);
+
+ /* Restore SCP value. */
+ if (input->config.delim_type == SWSR_DELIM_SCP)
+ input->config.scp_value = scpvalue;
+ } else {
+ /*
+ * For size delimited bitstreams without a delimiter use
+ * the byte count provided.
+ */
+ SWSR_ASSERT(*byte_count > 0);
+ delimiter = *byte_count;
+ SWSR_ASSERT(input->config.delim_type == SWSR_DELIM_SIZE);
+ }
+
+ if (input->config.delim_type == SWSR_DELIM_SCP)
+ SWSR_ASSERT((delimiter & NBIT_8BYTE_MASK(input->config.delim_length)) ==
+ input->config.scp_value);
+ else if (input->config.delim_type == SWSR_DELIM_SIZE) {
+ input->byte_count = delimiter;
+
+ /* Return byte count if argument provided. */
+ if (byte_count)
+ *byte_count = input->byte_count;
+ }
+
+ input->bytes_read_since_delim = 0;
+ {
+ struct swsr_buffer *buffer = input->buf;
+
+ if (!buffer)
+ buffer = lst_first(&ctx->buffer_ctx.used_buffer_list);
+ if (buffer)
+ input->delimited_unit_start_offset = (long)buffer->num_bytes_read;
+ else
+ input->delimited_unit_start_offset = 0;
+ }
+ input->delimited_unit_size = 0;
+ input->delimunit_bitofst = 0;
+
+ input->no_moredata = 0;
+
+ return 0;
+}
+
+enum swsr_found swsr_seek_delim_or_eod(void *ctx_hndl)
+{
+ struct swsr_context *ctx = (struct swsr_context *)ctx_hndl;
+ enum swsr_found found = SWSR_FOUND_DATA;
+ unsigned char byte;
+
+ /* Validate input arguments. */
+ if (!ctx) {
+ pr_err("Invalid arguments to function: %s\n", __func__);
+ return (enum swsr_found)IMG_ERROR_INVALID_PARAMETERS;
+ }
+
+ if (!ctx->initialised) {
+ pr_err("SWSR not yet initialised: %s\n", __func__);
+ return (enum swsr_found)IMG_ERROR_NOT_INITIALISED;
+ }
+
+ /* Read the residual contents of the output FIFO */
+ swsr_byte_align(ctx);
+ while (ctx->output.num_bits > 0) {
+ SWSR_ASSERT((ctx->output.num_bits & 0x7) == 0);
+ swsr_read_bits(ctx, 8);
+ }
+ SWSR_ASSERT(ctx->output.num_bits == 0);
+ if (ctx->input.config.delim_type == SWSR_DELIM_SCP) {
+ struct swsr_input *input = &ctx->input;
+ struct swsr_output *output = &ctx->output;
+
+ while (found == SWSR_FOUND_DATA) {
+ unsigned char *offset;
+ unsigned int delimlength_inbytes;
+ unsigned char *startoffset;
+ unsigned long long mask;
+ unsigned long long scp;
+ unsigned char scpfirstbyte;
+
+ /*
+ * ensure that all the data in the input FIFO comes
+ * from the current buffer
+ */
+ if (input->buf && input->buf->byte_offset <= input->num_bytes) {
+ found = swsr_consumebyte_from_inputfifo(ctx, &byte);
+ continue;
+ }
+
+ /* consume remaining bytes from the FIFO */
+ if (!input->buf) {
+ found = swsr_consumebyte_from_inputfifo(ctx, &byte);
+ continue;
+ }
+
+ delimlength_inbytes = (input->config.delim_length + 7) / 8;
+
+			/*
+			 * Make the mask and the SCP value byte-aligned to
+			 * speed things up.
+			 */
+ mask = ((1UL << input->config.delim_length) - 1) <<
+ (8 * delimlength_inbytes - input->config.delim_length);
+ scp = input->config.scp_value <<
+ (8 * delimlength_inbytes - input->config.delim_length);
+ scpfirstbyte = (scp >> 8 * (delimlength_inbytes - 1)) & 0xFF;
+
+ /* rollback the input FIFO */
+ input->buf->byte_offset -= input->num_bytes;
+ input->buf->num_bytes_read -= input->num_bytes;
+ input->bitstream_offset -= input->num_bytes;
+ input->num_bytes = 0;
+ input->fifo = 0;
+
+ startoffset = input->buf->data + input->buf->byte_offset;
+
+ while (found == SWSR_FOUND_DATA) {
+ offset = memchr(input->buf->data + input->buf->byte_offset,
+ scpfirstbyte,
+ input->buf->num_bytes -
+ (input->buf->byte_offset + delimlength_inbytes -
+ 1));
+
+ if (offset) {
+ unsigned int i;
+
+ /*
+ * load bytes that might be SCP into
+ * the FIFO
+ */
+ for (i = 0; i < delimlength_inbytes; i++) {
+ input->fifo <<= 8;
+ input->fifo |= offset[i];
+ }
+
+ input->buf->byte_offset = offset - input->buf->data;
+
+ if ((input->fifo & mask) == scp) {
+ unsigned long long bytesread = offset
+ - startoffset;
+
+ /*
+ * Scp found, fill the rest of
+ * the FIFO
+ */
+ for (i = delimlength_inbytes;
+ i < SWSR_INPUT_FIFO_LENGTH &&
+ input->buf->byte_offset + i <
+ input->buf->num_bytes;
+ i++) {
+ input->fifo <<= 8;
+ input->fifo |= offset[i];
+ }
+
+ input->fifo <<= (SWSR_INPUT_FIFO_LENGTH - i) * 8;
+
+ input->bytes_for_next_sequ = 0;
+ input->num_bytes = i;
+
+ input->buf->byte_offset += i;
+
+ input->buf->num_bytes_read = offset -
+ input->buf->data;
+ input->bitstream_offset += bytesread + i;
+
+ output->totalbits_consumed += bytesread * 8;
+
+ input->delimunit_bitofst += bytesread * 8;
+
+ output->num_bits = 0;
+ output->fifo = 0;
+
+ SWSR_ASSERT(swsr_checkfor_delimiter(ctx));
+
+ found = SWSR_FOUND_DELIM;
+ } else {
+ input->buf->byte_offset++;
+ }
+ } else {
+ /* End of the current buffer */
+ unsigned int bytesread = input->buf->num_bytes -
+ (startoffset - input->buf->data);
+ unsigned int i;
+
+ /* update offsets */
+ input->bitstream_offset += bytesread;
+ output->totalbits_consumed += bytesread * 8;
+ input->delimunit_bitofst += bytesread * 8;
+
+ input->buf->byte_offset = input->buf->num_bytes;
+ input->buf->num_bytes_read = input->buf->num_bytes -
+ (delimlength_inbytes - 1);
+
+ /* load remaining bytes to FIFO */
+ offset = input->buf->data +
+ input->buf->num_bytes -
+ (delimlength_inbytes - 1);
+ for (i = 0; i < delimlength_inbytes - 1;
+ i++) {
+ input->fifo <<= 8;
+ input->fifo |= offset[i];
+ }
+
+ input->fifo <<= (SWSR_INPUT_FIFO_LENGTH - i) * 8;
+
+ input->bytes_for_next_sequ = 0;
+ input->num_bytes = delimlength_inbytes - 1;
+
+ output->num_bits = 0;
+ output->fifo = 0;
+
+					/*
+					 * Consume a few bytes from the next buffer
+					 * to check whether there is an SCP on the
+					 * buffer boundary.
+					 */
+ for (i = 0;
+ i < delimlength_inbytes && found == SWSR_FOUND_DATA;
+ i++) {
+ found = swsr_consumebyte_from_inputfifo(ctx, &byte);
+ SWSR_ASSERT(found != SWSR_FOUND_NONE);
+ }
+
+ break;
+ }
+ }
+ }
+ } else {
+		/*
+		 * Extract data from the input FIFO until no more data is found,
+		 * either because we have run out or an SCP has been detected.
+		 */
+ while (found == SWSR_FOUND_DATA) {
+ found = swsr_consumebyte_from_inputfifo(ctx, &byte);
+ SWSR_ASSERT(found != SWSR_FOUND_NONE);
+ }
+ }
+
+ /*
+ * When the end of data has been reached there should be no
+ * more data in the input FIFO.
+ */
+ if (found == SWSR_FOUND_EOD)
+ SWSR_ASSERT(ctx->input.num_bytes == 0);
+
+ SWSR_ASSERT(found != SWSR_FOUND_DATA);
+ return found;
+}
+
+enum swsr_found swsr_check_delim_or_eod(void *ctx_hndl)
+{
+ struct swsr_context *ctx = (struct swsr_context *)ctx_hndl;
+ enum swsr_found found = SWSR_FOUND_DATA;
+
+ /* Validate input arguments. */
+ if (!ctx) {
+ pr_err("Invalid arguments to function: %s\n", __func__);
+
+ return (enum swsr_found)IMG_ERROR_INVALID_PARAMETERS;
+ }
+
+ if (!ctx->initialised) {
+ pr_err("SWSR not yet initialised: %s\n", __func__);
+
+ return (enum swsr_found)IMG_ERROR_NOT_INITIALISED;
+ }
+
+ /*
+ * End of data when all FIFOs are empty and there is nothing left to
+ * read from the input buffers.
+ */
+ if (ctx->output.num_bits == 0 && ctx->input.num_bytes == 0 &&
+ ctx->input.bitstream_offset >= ctx->input.bitstream_size)
+ found = SWSR_FOUND_EOD;
+ else if (ctx->output.num_bits == 0 && swsr_checkfor_delimiter(ctx)) {
+ /*
+ * Output queue is empty and delimiter is at the head of
+ * input queue.
+ */
+ found = SWSR_FOUND_DELIM;
+ }
+
+ return found;
+}
+
+int swsr_start_bitstream(void *ctx_hndl, const struct swsr_config *config,
+ unsigned long long bitstream_size, enum swsr_emprevent emprevent)
+{
+ struct swsr_context *ctx = (struct swsr_context *)ctx_hndl;
+ struct swsr_buffer *buffer;
+ unsigned int result;
+
+ /* Validate input arguments. */
+ if (!ctx || !config || config->delim_type >= SWSR_DELIM_MAX ||
+ config->delim_length > SWSR_MAX_DELIM_LENGTH ||
+ config->scp_value > NBIT_8BYTE_MASK(config->delim_length) ||
+ emprevent >= SWSR_EMPREVENT_MAX) {
+ pr_err("Invalid arguments to function: %s\n", __func__);
+ return IMG_ERROR_INVALID_PARAMETERS;
+ }
+
+ if (!ctx->initialised) {
+ pr_err("SWSR not yet initialised: %s\n", __func__);
+ return IMG_ERROR_NOT_INITIALISED;
+ }
+
+ /* Move all used buffers into free list */
+ buffer = lst_removehead(&ctx->buffer_ctx.used_buffer_list);
+ while (buffer) {
+ lst_add(&ctx->buffer_ctx.free_buffer_list, buffer);
+ buffer = lst_removehead(&ctx->buffer_ctx.used_buffer_list);
+ }
+
+ /* Clear all the shift-register state (except config) */
+ memset(&ctx->input, 0, sizeof(ctx->input));
+ memset(&ctx->output, 0, sizeof(ctx->output));
+
+ /* Update input FIFO configuration */
+ ctx->input.bitstream_size = bitstream_size;
+ ctx->input.config = *config;
+ result = swsr_update_emprevent(emprevent, ctx);
+ SWSR_ASSERT(result == 0);
+
+	/*
+	 * Signal delimiter found to ensure that no data is read out of the
+	 * input FIFO while fetching the first bitstream data into it.
+	 */
+ ctx->input.delim_found = 1;
+ result = swsr_fill_outputfifo(ctx);
+ SWSR_ASSERT(result == 0);
+
+ /* Now check for delimiter. */
+ ctx->input.delim_found = swsr_checkfor_delimiter(ctx);
+
+ return 0;
+}
+
+int swsr_deinitialise(void *ctx_hndl)
+{
+ struct swsr_context *ctx = (struct swsr_context *)ctx_hndl;
+ struct swsr_buffer *buffer;
+
+ /* Validate input arguments. */
+ if (!ctx) {
+ pr_err("Invalid arguments to function: %s\n", __func__);
+ return IMG_ERROR_INVALID_PARAMETERS;
+ }
+
+ if (!ctx->initialised) {
+ pr_err("SWSR not yet initialised: %s\n", __func__);
+ return IMG_ERROR_NOT_INITIALISED;
+ }
+
+ /* Free all used buffer containers */
+ buffer = lst_removehead(&ctx->buffer_ctx.used_buffer_list);
+ while (buffer) {
+ kfree(buffer);
+ buffer = lst_removehead(&ctx->buffer_ctx.used_buffer_list);
+ }
+
+ /* Free all free buffer containers. */
+ buffer = lst_removehead(&ctx->buffer_ctx.free_buffer_list);
+ while (buffer) {
+ kfree(buffer);
+ buffer = lst_removehead(&ctx->buffer_ctx.free_buffer_list);
+ }
+
+ ctx->initialised = 0;
+ kfree(ctx);
+
+ return 0;
+}
+
+int swsr_initialise(swsr_except_handler_fxn exception_handler_fxn,
+ void *exception_cbparam, swsr_callback_fxn callback_fxn,
+ void *cb_param, void **ctx_hndl)
+{
+ struct swsr_context *ctx;
+ struct swsr_buffer *buffer;
+ unsigned int i;
+ unsigned int result;
+
+ /* Validate input arguments. */
+ if (!exception_handler_fxn || !exception_cbparam || !callback_fxn ||
+ !cb_param || !ctx_hndl) {
+ pr_err("Invalid arguments to function: %s\n", __func__);
+ return IMG_ERROR_INVALID_PARAMETERS;
+ }
+
+ /* Allocate and initialise shift-register context */
+ ctx = kzalloc(sizeof(*ctx), GFP_KERNEL);
+ if (!ctx) {
+ VDEC_ASSERT(0);
+ return -EINVAL;
+ }
+
+ /* Setup shift-register context */
+ ctx->exception_handler_fxn = exception_handler_fxn;
+ ctx->pexception_param = exception_cbparam;
+
+ ctx->buffer_ctx.cb_fxn = callback_fxn;
+ ctx->buffer_ctx.cb_param = cb_param;
+
+ /*
+ * Allocate a new buffer container for each byte in internal storage.
+ * This is the theoretical maximum number of buffers in the SWSR at
+ * any one time.
+ */
+ for (i = 0; i < SWSR_INPUT_FIFO_LENGTH + (SWSR_OUTPUT_FIFO_LENGTH / 8);
+ i++) {
+ /* Allocate a buffer container */
+ buffer = kzalloc(sizeof(*buffer), GFP_KERNEL);
+ SWSR_ASSERT(buffer);
+ if (!buffer) {
+ result = IMG_ERROR_OUT_OF_MEMORY;
+ goto error;
+ }
+
+ /* Add container to free list */
+ lst_add(&ctx->buffer_ctx.free_buffer_list, buffer);
+ }
+
+ SWSR_ASSERT(SWSR_MAX_SYNTAX_LENGTH <= (sizeof(unsigned int) * 8));
+
+ ctx->initialised = 1;
+ *ctx_hndl = ctx;
+
+ return 0;
+error:
+ buffer = lst_removehead(&ctx->buffer_ctx.free_buffer_list);
+ while (buffer) {
+ kfree(buffer);
+ buffer = lst_removehead(&ctx->buffer_ctx.free_buffer_list);
+ }
+ kfree(ctx);
+
+ return result;
+}
+
+static unsigned char swsr_israwdata_extraction_supported(struct swsr_context *ctx)
+{
+ /*
+	 * For now only H.264/HEVC-like 0x000001 SCP delimited
+	 * bitstreams are supported.
+ */
+ if (ctx->input.config.delim_type == SWSR_DELIM_SCP &&
+ ctx->input.config.delim_length == (3 * 8) &&
+ ctx->input.config.scp_value == 0x000001)
+ return 1;
+
+ return 0;
+}
+
+static int swsr_getcurrent_delimited_unitsize(struct swsr_context *ctx, unsigned int *size)
+{
+ struct swsr_buffer *buf;
+
+ buf = ctx->input.buf;
+ if (!buf)
+ buf = lst_first(&ctx->buffer_ctx.used_buffer_list);
+
+ if (buf && ctx->input.delimited_unit_start_offset >= 0 &&
+ ctx->input.delimited_unit_start_offset < buf->num_bytes) {
+ unsigned long long bufptr =
+ (unsigned long long)ctx->input.delimited_unit_start_offset;
+ unsigned int zeros = 0;
+
+ /* Scan the current buffer for the next SCP. */
+ while (1) {
+ /* Look for two consecutive 0 bytes. */
+ while ((bufptr < buf->num_bytes) && (zeros < 2)) {
+ if (buf->data[bufptr++] == 0)
+ zeros++;
+ else
+ zeros = 0;
+ }
+ /*
+ * If we're not at the end of the buffer already and
+ * the next byte is 1, we've got it.
+ */
+ /*
+ * If we're at the end of the buffer, just assume
+ * we've got it too
+ * as we do not support buffer spanning units.
+ */
+ if (bufptr < buf->num_bytes && buf->data[bufptr] == 1) {
+ break;
+ } else if (bufptr == buf->num_bytes) {
+ zeros = 0;
+ break;
+ }
+ /*
+ * Finally just decrease the number of 0s found
+ * already and go on scanning.
+ */
+ else
+ zeros = 1;
+ }
+ /* Calculate the unit size. */
+ ctx->input.delimited_unit_size = (unsigned int)(bufptr -
+ (unsigned long long)ctx->input.delimited_unit_start_offset) - zeros;
+ *size = ctx->input.delimited_unit_size;
+ } else {
+ return IMG_ERROR_COULD_NOT_OBTAIN_RESOURCE;
+ }
+
+ return 0;
+}
+
+int swsr_get_current_delimited_unitsize(void *ctx_hndl, unsigned int *size)
+{
+ struct swsr_context *ctx = (struct swsr_context *)ctx_hndl;
+
+ /* Validate input arguments. */
+ if (!ctx || !size) {
+ pr_err("Invalid arguments to function: %s\n", __func__);
+ return IMG_ERROR_INVALID_PARAMETERS;
+ }
+
+ if (!ctx->initialised) {
+ pr_err("SWSR not yet initialised: %s\n", __func__);
+ return IMG_ERROR_NOT_INITIALISED;
+ }
+
+ if (!swsr_israwdata_extraction_supported(ctx))
+ return IMG_ERROR_NOT_SUPPORTED;
+
+ return swsr_getcurrent_delimited_unitsize(ctx, size);
+}
+
+int swsr_get_current_delimited_unit(void *ctx_hndl, unsigned char *data, unsigned int *size)
+{
+ struct swsr_context *ctx = (struct swsr_context *)ctx_hndl;
+ struct swsr_buffer *buf;
+ unsigned int copysize;
+
+ /* Validate input arguments. */
+ if (!ctx || !data || !size || *size == 0) {
+ pr_err("Invalid arguments to function: %s\n", __func__);
+ return IMG_ERROR_INVALID_PARAMETERS;
+ }
+
+ if (!ctx->initialised) {
+ pr_err("SWSR not yet initialised: %s\n", __func__);
+ return IMG_ERROR_NOT_INITIALISED;
+ }
+
+ if (!swsr_israwdata_extraction_supported(ctx))
+ return IMG_ERROR_NOT_SUPPORTED;
+
+ buf = ctx->input.buf;
+ if (!buf)
+ buf = lst_first(&ctx->buffer_ctx.used_buffer_list);
+
+ if (buf && ctx->input.delimited_unit_start_offset >= 0) {
+ if (ctx->input.delimited_unit_size == 0)
+			swsr_getcurrent_delimited_unitsize(ctx, &copysize);
+
+ if (ctx->input.delimited_unit_size < *size)
+ *size = ctx->input.delimited_unit_size;
+
+ memcpy(data, buf->data + ctx->input.delimited_unit_start_offset, *size);
+ } else {
+ return IMG_ERROR_COULD_NOT_OBTAIN_RESOURCE;
+ }
+
+ return 0;
+}
+
+int swsr_get_current_delimited_unit_bit_offset(void *ctx_hndl, unsigned int *bit_offset)
+{
+ struct swsr_context *ctx = (struct swsr_context *)ctx_hndl;
+
+ /* Validate input arguments. */
+ if (!ctx || !bit_offset) {
+ pr_err("Invalid arguments to function: %s\n", __func__);
+ return IMG_ERROR_INVALID_PARAMETERS;
+ }
+
+ if (!ctx->initialised) {
+ pr_err("SWSR not yet initialised: %s\n", __func__);
+ return IMG_ERROR_NOT_INITIALISED;
+ }
+
+ if (!swsr_israwdata_extraction_supported(ctx))
+ return IMG_ERROR_NOT_SUPPORTED;
+
+ if (ctx->input.delimited_unit_start_offset >= 0)
+ *bit_offset = ctx->input.delimunit_bitofst;
+
+ return 0;
+}
diff --git a/drivers/staging/media/vxd/decoder/swsr.h b/drivers/staging/media/vxd/decoder/swsr.h
new file mode 100644
index 000000000000..5c27e8c41240
--- /dev/null
+++ b/drivers/staging/media/vxd/decoder/swsr.h
@@ -0,0 +1,278 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Software Shift Register Access functions
+ *
+ * Copyright (c) Imagination Technologies Ltd.
+ * Copyright (c) 2021 Texas Instruments Incorporated - http://www.ti.com/
+ *
+ * Authors:
+ * Lakshmi Sankar <[email protected]>
+ *
+ * Re-written for upstreaming
+ * Prashanth Kumar Amai <[email protected]>
+ * Sidraya Jayagond <[email protected]>
+ */
+#ifndef _SWSR_H
+#define _SWSR_H
+
+#include <linux/types.h>
+
+#include "img_errors.h"
+#include "lst.h"
+
+#define SWSR_MAX_DELIM_LENGTH (8 * 8)
+
+enum swsr_exception {
+ SWSR_EXCEPT_NO_EXCEPTION = 0x00,
+ SWSR_EXCEPT_ENCAPULATION_ERROR1,
+ SWSR_EXCEPT_ENCAPULATION_ERROR2,
+ SWSR_EXCEPT_ACCESS_INTO_SCP,
+ SWSR_EXCEPT_ACCESS_BEYOND_EOD,
+ SWSR_EXCEPT_EXPGOULOMB_ERROR,
+ SWSR_EXCEPT_WRONG_CODEWORD_ERROR,
+ SWSR_EXCEPT_NO_SCP,
+ SWSR_EXCEPT_INVALID_CONTEXT,
+ SWSR_EXCEPT_FORCE32BITS = 0x7FFFFFFFU
+};
+
+enum swsr_cbevent {
+ SWSR_EVENT_INPUT_BUFFER_START = 0,
+ SWSR_EVENT_OUTPUT_BUFFER_END,
+ SWSR_EVENT_DELIMITER_NAL_TYPE,
+ SWSR_EVENT_FORCE32BITS = 0x7FFFFFFFU
+};
+
+enum swsr_found {
+ SWSR_FOUND_NONE = 0,
+ SWSR_FOUND_EOD,
+ SWSR_FOUND_DELIM,
+ SWSR_FOUND_DATA,
+ SWSR_FOUND_FORCE32BITS = 0x7FFFFFFFU
+};
+
+enum swsr_delim_type {
+ SWSR_DELIM_NONE = 0,
+ SWSR_DELIM_SCP,
+ SWSR_DELIM_SIZE,
+ SWSR_DELIM_MAX,
+ SWSR_DELIM_FORCE32BITS = 0x7FFFFFFFU
+};
+
+enum swsr_emprevent {
+ SWSR_EMPREVENT_NONE = 0x00,
+ SWSR_EMPREVENT_00000300,
+ SWSR_EMPREVENT_ff00,
+ SWSR_EMPREVENT_000002,
+ SWSR_EMPREVENT_MAX,
+ SWSR_EMPREVENT_FORCE32BITS = 0x7FFFFFFFU
+};
+
+struct swsr_config {
+ enum swsr_delim_type delim_type;
+ unsigned int delim_length;
+ unsigned long long scp_value;
+};
+
+/*
+ * This is the function prototype for the caller-supplied exception handler.
+ *
+ * NOTE: The internally recorded exception is reset to #SWSR_EXCEPT_NO_EXCEPTION
+ * on return from swsr_check_exception() or after a call to the caller-supplied
+ * exception handler (see #swsr_except_handler_fxn).
+ *
+ * NOTE: By providing an exception handler the caller can handle Shift Register
+ * errors as they occur - for example, using a structured exception mechanism
+ * such as setjmp/longjmp.
+ */
+typedef void (*swsr_except_handler_fxn)(enum swsr_exception exception,
+ void *callback_param);
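+
+/*
+ * Illustrative sketch only (not part of the API): a caller-supplied handler
+ * that simply records the exception for later inspection could look like:
+ *
+ *	static void my_swsr_except(enum swsr_exception exception, void *param)
+ *	{
+ *		*(enum swsr_exception *)param = exception;
+ *	}
+ */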
+
+/*
+ * This is the function prototype for the caller-supplied callback used to
+ * retrieve bitstream data from the application.
+ */
+typedef void (*swsr_callback_fxn)(enum swsr_cbevent event,
+ void *priv_data,
+ unsigned char nal_type, unsigned char **data_buffer,
+ unsigned long long *data_size);
+
+int swsr_get_total_bitsconsumed(void *context, unsigned long long *total_bitsconsumed);
+
+/*
+ * This function is used to return the offset into the current bitstream buffer
+ * on the shift-register output FIFO. Call after swsr_seek_delim_or_eod() to
+ * determine the offset of a delimiter.
+ */
+int swsr_get_byte_offset_curbuf(void *context, unsigned long long *byte_offset);
+
+/*
+ * This function is used to read a signed Exp-Goulomb value from the Shift
+ * Register.
+ *
+ * NOTE: If this function is used to attempt to read into a Start-Code-Prefix
+ * or beyond the End-Of-Data then an exception is generated which can be
+ * handled by the caller-supplied exception handler (see
+ * #swsr_except_handler_fxn). If no exception handler has been supplied (or the
+ * exception handler returns) then the exception is recorded and can be obtained
+ * using swsr_check_exception(). In this event the function returns 0.
+ */
+int swsr_read_signed_expgoulomb(void *context);
+
+/*
+ * This function is used to read an unsigned Exp-Goulomb value from the Shift
+ * Register.
+ *
+ * NOTE: If this function is used to attempt to read into a Start-Code-Prefix
+ * or beyond the End-Of-Data then an exception is generated which can be
+ * handled by the caller-supplied exception handler (see
+ * #swsr_except_handler_fxn). If no exception handler has been supplied (or the
+ * exception handler returns) then the exception is recorded and can be obtained
+ * using swsr_check_exception(). In this event the function returns 0.
+ */
+unsigned int swsr_read_unsigned_expgoulomb(void *context);
+
+/*
+ * This function is used to check for exceptions.
+ *
+ * NOTE: The internally recorded exception is reset to #SWSR_EXCEPT_NO_EXCEPTION
+ * on return from swsr_check_exception() or after a call to the caller-supplied
+ * exception handler (see #swsr_except_handler_fxn).
+ */
+enum swsr_exception swsr_check_exception(void *context);
+
+/*
+ * This function is used to check for bitstream data with
+ * SWSR_EMPREVENT_00000300 whether more RBSP data is present.
+ */
+int swsr_check_more_rbsp_data(void *context, unsigned char *more_rbsp_data);
+
+/*
+ * This function is used to read a single bit from the Shift Register.
+ *
+ * NOTE: If this function is used to attempt to read into a Start-Code-Prefix
+ * or beyond the End-Of-Data then an exception is generated which can be
+ * handled by the caller-supplied exception handler (see
+ * #swsr_except_handler_fxn). If no exception handler has been supplied (or the
+ * exception handler returns) then the exception is recorded and can be obtained
+ * using swsr_check_exception(). In this event the function returns 0.
+ */
+unsigned int swsr_read_onebit(void *context);
+
+/*
+ * This function is used to consume a number of bits from the Shift Register.
+ *
+ * NOTE: If this function is used to attempt to read into a Start-Code-Prefix
+ * or beyond the End-Of-Data then an exception is generated which can be
+ * handled by the caller-supplied exception handler (see
+ * #swsr_except_handler_fxn). If no exception handler has been supplied (or the
+ * exception handler returns) then the exception is recorded and can be obtained
+ * using swsr_check_exception(). In this event the function returns 0.
+ */
+unsigned int swsr_read_bits(void *context, unsigned int no_bits);
+
+int swsr_read_signedbits(void *context, unsigned int no_bits);
+
+/*
+ * This function is used to peek at number of bits from the Shift Register. The
+ * bits are not consumed.
+ *
+ * NOTE: If this function is used to attempt to read into a Start-Code-Prefix
+ * or beyond the End-Of-Data then an exception is generated which can be
+ * handled by the caller-supplied exception handler (see
+ * #swsr_except_handler_fxn). If no exception handler has been supplied (or
+ * the exception handler returns) then the exception is recorded and can be
+ * obtained using swsr_check_exception(). In this event the function returns 0.
+ */
+unsigned int swsr_peekbits(void *context, unsigned int no_bits);
+
+/*
+ * Makes the shift-register output byte-aligned by consuming the remainder of
+ * the current partially read byte.
+ */
+int swsr_byte_align(void *context);
+
+/*
+ * Consume the next delimiter whose length should be specified if delimiter type
+ * is #SWSR_DELIM_SIZE. The emulation prevention detection/removal scheme can
+ * also be specified for this and subsequent units.
+ *
+ * Consumes the unit delimiter from the bitstream buffer. The delimiter type
+ * depends upon the bitstream format.
+ */
+int swsr_consume_delim(void *context,
+ enum swsr_emprevent emprevent,
+ unsigned int size_delim_length,
+ unsigned long long *byte_count);
+
+/*
+ * Seek for the next delimiter or end of bitstream data if no delimiter is
+ * found.
+ */
+enum swsr_found swsr_seek_delim_or_eod(void *context);
+
+/*
+ * Check if shift-register is at a delimiter or end of data.
+ */
+enum swsr_found swsr_check_delim_or_eod(void *context);
+
+/*
+ * This function automatically fetches the first bitstream buffer (using
+ * callback with event type #SWSR_EVENT_INPUT_BUFFER_START) before returning.
+ */
+int swsr_start_bitstream(void *context,
+ const struct swsr_config *pconfig,
+ unsigned long long bitstream_size,
+ enum swsr_emprevent emprevent);
+
+/*
+ * This function is used to de-initialise the Shift Register.
+ */
+int swsr_deinitialise(void *context);
+
+/*
+ * This function is used to initialise the Shift Register.
+ *
+ * NOTE: A caller-supplied exception handler (see #swsr_except_handler_fxn)
+ * must be provided; exceptions can additionally be polled using
+ * swsr_check_exception().
+ *
+ * NOTE: Emulation prevention (de-encapsulation) is handled internally; the
+ * detection/removal scheme is selected via swsr_start_bitstream() and
+ * swsr_consume_delim().
+ */
+int swsr_initialise(swsr_except_handler_fxn exception_handler_fxn,
+ void *exception_cbparam,
+ swsr_callback_fxn callback_fxn,
+ void *cb_param,
+ void **context);
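+
+/*
+ * Typical call sequence (illustrative sketch only; error handling is omitted
+ * and my_except, my_cb, excp, priv, bitstream_size and n are placeholders
+ * supplied by the caller):
+ *
+ *	void *swsr;
+ *	struct swsr_config cfg = { SWSR_DELIM_SCP, 3 * 8, 0x000001 };
+ *	unsigned long long byte_count = 0;
+ *
+ *	swsr_initialise(my_except, &excp, my_cb, &priv, &swsr);
+ *	swsr_start_bitstream(swsr, &cfg, bitstream_size, SWSR_EMPREVENT_00000300);
+ *	swsr_consume_delim(swsr, SWSR_EMPREVENT_00000300, 0, &byte_count);
+ *	... swsr_read_bits(swsr, n) / swsr_read_unsigned_expgoulomb(swsr) ...
+ *	swsr_seek_delim_or_eod(swsr);
+ *	swsr_deinitialise(swsr);
+ */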
+
+/*
+ * This function is used to return the size in bytes of the delimited unit
+ * that's currently being processed.
+ *
+ * NOTE: This size includes all the emulation prevention bytes present
+ * in the delimited unit.
+ */
+int swsr_get_current_delimited_unitsize(void *context, unsigned int *size);
+
+/*
+ * This function is used to copy the delimited unit that's currently being
+ * processed to the provided buffer.
+ *
+ * NOTE: This delimited unit includes all the emulation prevention bytes present
+ * in it.
+ */
+int swsr_get_current_delimited_unit(void *context, unsigned char *data, unsigned int *size);
+
+/*
+ * This function is used to return the bit offset the shift register is at
+ * in processing the current delimited unit.
+ *
+ * NOTE: This offset does not count emulation prevention bytes.
+ */
+int swsr_get_current_delimited_unit_bit_offset(void *context, unsigned int *bit_offset);
+
+#endif /* _SWSR_H */
--
2.17.1
From: Sidraya <[email protected]>
This module implements a simple resource manager: items are added to
resource lists and returned to callers from the requested list, with
usage tracked through per-item reference counts.
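
A minimal usage sketch (illustrative only; "buf" and "buf_id" are
placeholders, and the list is assumed to have been set up with lst_init()
from lst.h):

  struct lst_t buf_list;
  unsigned int refcnt = 1;	/* owner holds the initial reference */

  lst_init(&buf_list);

  /* Hand the item to the pool; the owner's reference is dropped. */
  resource_list_add_img(&buf_list, buf, buf_id, &refcnt);

  /* Pick any unused item; its reference count is incremented. */
  buf = resource_list_get_avail(&buf_list);

  /* Give the reference back once the item is no longer needed. */
  resource_item_return(&refcnt);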
Signed-off-by: Amit Makani <[email protected]>
Signed-off-by: Sidraya <[email protected]>
---
MAINTAINERS | 2 +
drivers/staging/media/vxd/common/resource.c | 576 ++++++++++++++++++++
drivers/staging/media/vxd/common/resource.h | 66 +++
3 files changed, 644 insertions(+)
create mode 100644 drivers/staging/media/vxd/common/resource.c
create mode 100644 drivers/staging/media/vxd/common/resource.h
diff --git a/MAINTAINERS b/MAINTAINERS
index 7c7c008efd97..b5875f9015ba 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -19560,6 +19560,8 @@ F: drivers/staging/media/vxd/common/pool_api.c
F: drivers/staging/media/vxd/common/pool_api.h
F: drivers/staging/media/vxd/common/ra.c
F: drivers/staging/media/vxd/common/ra.h
+F: drivers/staging/media/vxd/common/resource.c
+F: drivers/staging/media/vxd/common/resource.h
F: drivers/staging/media/vxd/common/rman_api.c
F: drivers/staging/media/vxd/common/rman_api.h
F: drivers/staging/media/vxd/common/talmmu_api.c
diff --git a/drivers/staging/media/vxd/common/resource.c b/drivers/staging/media/vxd/common/resource.c
new file mode 100644
index 000000000000..c3dd6d010d73
--- /dev/null
+++ b/drivers/staging/media/vxd/common/resource.c
@@ -0,0 +1,576 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * VXD DEC Resource manager implementations
+ *
+ * Copyright (c) Imagination Technologies Ltd.
+ * Copyright (c) 2021 Texas Instruments Incorporated - http://www.ti.com/
+ *
+ * Authors:
+ * Sunita Nadampalli <[email protected]>
+ *
+ * Re-written for upstream
+ * Sidraya Jayagond <[email protected]>
+ * Prashanth Kumar Amai <[email protected]>
+ */
+
+#include <linux/types.h>
+#include <linux/dma-mapping.h>
+#include <media/v4l2-ctrls.h>
+#include <media/v4l2-device.h>
+#include <media/v4l2-mem2mem.h>
+
+#include "dq.h"
+#include "img_errors.h"
+#include "lst.h"
+#include "resource.h"
+
+struct resource_list_elem {
+ struct dq_linkage_t link;
+ void *item;
+ unsigned int id;
+ unsigned int *refcnt;
+};
+
+/*
+ * marks an item as used by incrementing the reference count
+ */
+int resource_item_use(unsigned int *refcnt)
+{
+ if (refcnt)
+ (*refcnt)++;
+
+ return 0;
+}
+
+/*
+ * returns an item by decrementing the reference count
+ */
+void resource_item_return(unsigned int *refcnt)
+{
+ if (refcnt && *refcnt > 0)
+ (*refcnt)--;
+}
+
+/*
+ * releases an item by setting reference count to 1 (original owner)
+ */
+int resource_item_release(unsigned int *refcnt)
+{
+ if (refcnt)
+ *refcnt = 1;
+
+ return 0;
+}
+
+/*
+ * indicates whether an item is free to be used (no owners)
+ */
+int resource_item_isavailable(unsigned int *refcnt)
+{
+ if (refcnt)
+ return (*refcnt == 0) ? 1 : 0;
+ else
+ return 0;
+}
+
+/*
+ * adds an item (and associated id) to a resource list
+ */
+int resource_list_add_img(struct lst_t *list, void *item, unsigned int id, unsigned int *refcnt)
+{
+ struct resource_list_elem *listelem = NULL;
+ int bfound = 0;
+ unsigned int result = 0;
+
+ if (!list || !item) {
+ result = IMG_ERROR_INVALID_PARAMETERS;
+ goto error;
+ }
+
+ /*
+ * Decrement the reference count on the item
+ * to signal that the owner has relinquished it.
+ */
+ resource_item_return(refcnt);
+
+ /*
+ * Determine whether this buffer is already in the list.
+ */
+ listelem = lst_first(list);
+ while (listelem) {
+ if (listelem->item == item) {
+ bfound = 1;
+ break;
+ }
+
+ listelem = lst_next(listelem);
+ }
+
+ if (!bfound) {
+ /*
+		 * allocate the resource list element structure.
+ */
+ listelem = kmalloc(sizeof(*(listelem)), GFP_KERNEL);
+ if (!listelem) {
+ result = IMG_ERROR_OUT_OF_MEMORY;
+ goto error;
+ }
+ memset(listelem, 0, sizeof(*(listelem)));
+
+ /*
+ * setup the list element.
+ */
+ listelem->item = item;
+ listelem->id = id;
+ listelem->refcnt = refcnt;
+
+ /*
+ * add the element to the list.
+ */
+ lst_add(list, (void *)listelem);
+ }
+
+ return 0;
+
+error:
+ return result;
+}
+
+/*
+ * obtains pointer to item at head of resource list
+ */
+void *resource_list_pickhead(struct lst_t *list)
+{
+ struct resource_list_elem *listelem = NULL;
+ void *item = NULL;
+
+ if (!list)
+ goto error;
+ /*
+ * peek the head item of the list.
+ */
+ listelem = lst_first(list);
+ if (listelem)
+ item = listelem->item;
+
+error:
+ return item;
+}
+
+/*
+ * removes item from resource list
+ */
+int resource_list_remove(struct lst_t *list, void *item)
+{
+ struct resource_list_elem *listelem = NULL;
+ unsigned int result = 0;
+
+ if (!list || !item) {
+ result = IMG_ERROR_INVALID_PARAMETERS;
+ goto error;
+ }
+
+ /*
+ * find the specified item in the list.
+ */
+ listelem = lst_first(list);
+ while (listelem) {
+ if (listelem->item == item) {
+ if (*listelem->refcnt != 0)
+				pr_warn("item removed from list is still in use\n");
+
+ /*
+ * Remove the item from the list.
+ */
+ lst_remove(list, listelem);
+ /*
+			 * Free the list element.
+ */
+ kfree(listelem);
+ listelem = NULL;
+ return 0;
+ }
+
+ listelem = lst_next(listelem);
+ }
+
+#if defined(DEBUG_DECODER_DRIVER) || defined(DEBUG_ENCODER_DRIVER)
+ pr_info("item could not be located to remove from RESOURCE list\n");
+#endif
+
+ return IMG_ERROR_COULD_NOT_OBTAIN_RESOURCE;
+
+error:
+ return result;
+}
+
+/*
+ * resource_list_removehead - removes item at head of resource list
+ * @list: head of resource list
+ */
+void *resource_list_removehead(struct lst_t *list)
+{
+ struct resource_list_elem *listelem = NULL;
+ void *item = NULL;
+
+ if (!list)
+ goto error;
+
+ /*
+ * peek the head item of the list.
+ */
+ listelem = lst_removehead(list);
+ if (listelem) {
+ item = listelem->item;
+ kfree(listelem);
+ listelem = NULL;
+ }
+
+error:
+ return item;
+}
+
+/*
+ * removes next available item from resource list.
+ * item is freed if no longer used
+ */
+int resource_list_remove_nextavail(struct lst_t *list,
+ resource_pfn_freeitem fn_freeitem,
+ void *free_cb_param)
+{
+ struct resource_list_elem *listelem = NULL;
+ unsigned int result = IMG_ERROR_COULD_NOT_OBTAIN_RESOURCE;
+
+ if (!list) {
+ result = IMG_ERROR_INVALID_PARAMETERS;
+ goto error;
+ }
+
+ /*
+ * find the next unused item in the list.
+ */
+ listelem = lst_first(list);
+ while (listelem) {
+ if (resource_item_isavailable(listelem->refcnt)) {
+ resource_item_return(listelem->refcnt);
+
+ if (*listelem->refcnt == 0) {
+ if (fn_freeitem)
+ fn_freeitem(listelem->item, free_cb_param);
+ else
+ kfree(listelem->item);
+
+ listelem->item = NULL;
+ }
+
+ /*
+ * get the next element from the list.
+ */
+ lst_remove(list, listelem);
+
+ /*
+ * free the buffer list element.
+ */
+ kfree(listelem);
+ listelem = NULL;
+
+ result = 0;
+ break;
+ }
+
+ listelem = lst_next(listelem);
+ }
+
+ if (result == IMG_ERROR_COULD_NOT_OBTAIN_RESOURCE)
+ pr_debug("failed to locate an available resource element to remove\n");
+
+error:
+ return result;
+}
+
+/*
+ * obtains pointer to an available item from the resource list
+ */
+void *resource_list_get_avail(struct lst_t *list)
+{
+ struct resource_list_elem *listelem = NULL;
+ void *item = NULL;
+
+ if (!list)
+ goto error;
+
+ /*
+ * find the next unused item in the list.
+ */
+ listelem = lst_first(list);
+ while (listelem) {
+ if (resource_item_isavailable(listelem->refcnt)) {
+ resource_item_use(listelem->refcnt);
+ item = listelem->item;
+ break;
+ }
+ listelem = lst_next(listelem);
+ }
+
+error:
+ return item;
+}
+
+/*
+ * signal duplicate use of specified item with resource list
+ */
+void *resource_list_reuseitem(struct lst_t *list, void *item)
+{
+ struct resource_list_elem *listelem = NULL;
+ void *ret_item = NULL;
+
+ if (!list || !item)
+ goto error;
+
+ /*
+ * find the specified item in the list.
+ */
+ listelem = lst_first(list);
+
+ while (listelem) {
+ if (listelem->item == item) {
+ resource_item_use(listelem->refcnt);
+ ret_item = item;
+ break;
+ }
+
+ listelem = lst_next(listelem);
+ }
+
+error:
+ return ret_item;
+}
+
+/*
+ * obtain pointer to item from resource list with id
+ */
+void *resource_list_getbyid(struct lst_t *list, unsigned int id)
+{
+ struct resource_list_elem *listelem = NULL;
+ void *item = NULL;
+
+ if (!list)
+ goto error;
+
+ /*
+	 * find the item in the list with the matching id.
+ */
+ listelem = lst_first(list);
+ while (listelem) {
+ if (listelem->id == id) {
+ resource_item_use(listelem->refcnt);
+ item = listelem->item;
+ break;
+ }
+
+ listelem = lst_next(listelem);
+ }
+
+error:
+ return item;
+}
+
+/*
+ * obtain the number of available (unused) items within list.
+ */
+int resource_list_getnumavail(struct lst_t *list)
+{
+ struct resource_list_elem *listelem = NULL;
+ unsigned int num_items = 0;
+
+ if (!list)
+ goto error;
+
+ /*
+	 * count the available (unused) items in the list.
+ */
+ listelem = lst_first(list);
+ while (listelem) {
+ if (resource_item_isavailable(listelem->refcnt))
+ num_items++;
+
+ listelem = lst_next(listelem);
+ }
+
+error:
+ return num_items;
+}
+
+/*
+ * Obtain the number of items within list
+ */
+int resource_list_getnum(struct lst_t *list)
+{
+ struct resource_list_elem *listelem = NULL;
+ unsigned int num_items = 0;
+
+ if (!list)
+ goto error;
+
+ /*
+	 * count all the items in the list.
+ */
+ listelem = lst_first(list);
+ while (listelem) {
+ num_items++;
+ listelem = lst_next(listelem);
+ }
+
+error:
+ return num_items;
+}
+
+/*
+ * replaces an item (of specified id) within a resource list
+ */
+int resource_list_replace(struct lst_t *list, void *item, unsigned int id, unsigned int *refcnt,
+ resource_pfn_freeitem fn_freeitem,
+ void *free_cb_param)
+{
+ struct resource_list_elem *listelem = NULL;
+ unsigned int result = 0;
+
+ if (!list || !item) {
+ result = IMG_ERROR_INVALID_PARAMETERS;
+ goto error;
+ }
+
+ /*
+	 * determine whether an item with this id is already in the list
+ */
+ listelem = lst_first(list);
+ while (listelem) {
+ if (listelem->id == id) {
+ resource_item_return(listelem->refcnt);
+ if (*listelem->refcnt == 0) {
+ if (fn_freeitem)
+ fn_freeitem(listelem->item,
+ free_cb_param);
+ else
+ kfree(listelem->item);
+ listelem->item = NULL;
+ }
+
+ lst_remove(list, listelem);
+ break;
+ }
+
+ listelem = lst_next(listelem);
+ }
+
+ if (!listelem) {
+ /*
+		 * Allocate the list element structure.
+ */
+ listelem = kmalloc(sizeof(*(listelem)), GFP_KERNEL);
+ if (!listelem) {
+ result = IMG_ERROR_OUT_OF_MEMORY;
+ goto error;
+ }
+ memset(listelem, 0, sizeof(*(listelem)));
+ }
+
+ /*
+	 * setup the list element.
+ */
+ resource_item_use(refcnt);
+
+ listelem->item = item;
+ listelem->id = id;
+ listelem->refcnt = refcnt;
+
+ /*
+	 * Add the list element to the resource list.
+ */
+ lst_add(list, (void *)listelem);
+
+ return 0;
+
+error:
+ return result;
+}
+
+/*
+ * removes all items from a resource list.
+ */
+int resource_list_empty(struct lst_t *list, unsigned int release_item,
+ resource_pfn_freeitem fn_freeitem,
+ void *free_cb_param)
+{
+ struct resource_list_elem *listelem = NULL;
+ unsigned int result = 0;
+
+ if (!list) {
+ result = IMG_ERROR_INVALID_PARAMETERS;
+ goto error;
+ }
+
+ /*
+	 * remove all the list elements from the resource list.
+ */
+ listelem = lst_removehead(list);
+ while (listelem) {
+ if (release_item) {
+ resource_item_release(listelem->refcnt);
+ } else {
+ /*
+ * Return and free.
+ */
+ resource_item_return(listelem->refcnt);
+
+ if (!listelem->refcnt || *listelem->refcnt == 0) {
+ if (fn_freeitem)
+ fn_freeitem(listelem->item,
+ free_cb_param);
+ else
+ kfree(listelem->item);
+ listelem->item = NULL;
+ }
+ }
+
+		/*
+		 * Free the list element.
+		 */
+ kfree(listelem);
+ listelem = NULL;
+
+ /*
+ * Get the next element from the list.
+ */
+ listelem = lst_removehead(list);
+ }
+
+ return 0;
+
+error:
+ return result;
+}
+
+/*
+ * obtain the number of pictures within list
+ */
+int resource_getnumpict(struct lst_t *list)
+{
+ struct resource_list_elem *listelem = NULL;
+ unsigned int num_pict = 0;
+
+ if (!list)
+ goto error;
+
+	/*
+	 * Count all pictures in the list.
+	 */
+ listelem = lst_first(list);
+ while (listelem) {
+ num_pict++;
+ listelem = lst_next(listelem);
+ }
+
+error:
+ return num_pict;
+}
diff --git a/drivers/staging/media/vxd/common/resource.h b/drivers/staging/media/vxd/common/resource.h
new file mode 100644
index 000000000000..b041ff918e23
--- /dev/null
+++ b/drivers/staging/media/vxd/common/resource.h
@@ -0,0 +1,66 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * VXD DEC resource handling API header
+ *
+ * Copyright (c) Imagination Technologies Ltd.
+ * Copyright (c) 2021 Texas Instruments Incorporated - http://www.ti.com/
+ *
+ * Authors:
+ * Sunita Nadampalli <[email protected]>
+ *
+ * Re-written for upstream
+ * Sidraya Jayagond <[email protected]>
+ * Prashanth Kumar Amai <[email protected]>
+ */
+
+#ifndef _VXD_RESOURCE_H
+#define _VXD_RESOURCE_H
+
+typedef int (*resource_pfn_freeitem)(void *item, void *free_cb_param);
+
+int resource_item_use(unsigned int *refcnt);
+
+void resource_item_return(unsigned int *refcnt);
+
+int resource_item_release(unsigned int *refcnt);
+
+int resource_item_isavailable(unsigned int *refcnt);
+
+int resource_list_add_img(struct lst_t *list, void *item, unsigned int id, unsigned int *refcnt);
+
+void *resource_list_pickhead(struct lst_t *list);
+
+int resource_list_remove(struct lst_t *list, void *item);
+
+/**
+ * resource_list_removehead - remove and return the item at the head of a resource list
+ * @list: head of resource list
+ */
+void *resource_list_removehead(struct lst_t *list);
+
+int resource_list_remove_nextavail(struct lst_t *list,
+ resource_pfn_freeitem fn_freeitem,
+ void *free_cb_param);
+
+void *resource_list_get_avail(struct lst_t *list);
+
+void *resource_list_reuseitem(struct lst_t *list, void *item);
+
+void *resource_list_getbyid(struct lst_t *list, unsigned int id);
+
+int resource_list_getnumavail(struct lst_t *list);
+
+int resource_list_getnum(struct lst_t *list);
+
+int resource_list_replace(struct lst_t *list, void *item, unsigned int id, unsigned int *refcnt,
+ resource_pfn_freeitem fn_freeitem,
+ void *free_cb_param);
+
+int resource_list_empty(struct lst_t *list, unsigned int release_item,
+ resource_pfn_freeitem fn_freeitem,
+ void *free_cb_param);
+
+int resource_getnumpict(struct lst_t *list);
+
+#endif /* _VXD_RESOURCE_H */
--
2.17.1
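For reference, a minimal sketch of how the refcount-based resource list API
above is intended to be used. The function name, the item id and size, and the
assumption that the list has already been initialised via the lst helpers are
purely illustrative; <linux/slab.h>, lst.h, img_errors.h and resource.h are
assumed to be included:

	static int example_use(struct lst_t *res_list)
	{
		unsigned int refcnt = 0;
		void *item;
		int ret;

		item = kzalloc(32, GFP_KERNEL);
		if (!item)
			return IMG_ERROR_OUT_OF_MEMORY;

		/*
		 * Install the item under id 1, taking one reference on it.
		 * If an element with id 1 already existed, its old item
		 * would be freed (once unreferenced) before the new one
		 * is added.
		 */
		ret = resource_list_replace(res_list, item, 1, &refcnt,
					    NULL, NULL);
		if (ret) {
			kfree(item);
			return ret;
		}

		/*
		 * Tear the list down without marking items as released:
		 * each item's refcount is returned and, once it reaches
		 * zero, the item is freed with kfree() since no free
		 * callback is given here.
		 */
		return resource_list_empty(res_list, 0, NULL, NULL);
	}

In the driver itself the refcount is normally embedded in the owning resource
structure rather than living on the stack as it does in this sketch.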
From: Sidraya <[email protected]>
Add the IMG D5520 register definitions and firmware interface
headers, along with utilities to decode the registers.
Signed-off-by: Amit Makani <[email protected]>
Signed-off-by: Sidraya <[email protected]>
---
MAINTAINERS | 21 +
drivers/staging/media/vxd/decoder/h264_idx.h | 60 ++
drivers/staging/media/vxd/decoder/h264_vlc.h | 604 ++++++++++++
.../media/vxd/decoder/h264fw_data_shared.h | 759 +++++++++++++++
.../media/vxd/decoder/hevcfw_data_shared.h | 767 +++++++++++++++
.../media/vxd/decoder/img_msvdx_cmds.h | 279 ++++++
.../media/vxd/decoder/img_msvdx_core_regs.h | 22 +
.../media/vxd/decoder/img_msvdx_vdmc_regs.h | 26 +
.../media/vxd/decoder/img_msvdx_vec_regs.h | 60 ++
.../media/vxd/decoder/img_pvdec_core_regs.h | 60 ++
.../media/vxd/decoder/img_pvdec_pixel_regs.h | 35 +
.../media/vxd/decoder/img_pvdec_test_regs.h | 39 +
.../media/vxd/decoder/img_vdec_fw_msg.h | 192 ++++
.../vxd/decoder/img_video_bus4_mmu_regs.h | 120 +++
.../media/vxd/decoder/jpegfw_data_shared.h | 84 ++
drivers/staging/media/vxd/decoder/mem_io.h | 42 +
.../media/vxd/decoder/pvdec_entropy_regs.h | 33 +
drivers/staging/media/vxd/decoder/pvdec_int.h | 82 ++
.../media/vxd/decoder/pvdec_vec_be_regs.h | 35 +
drivers/staging/media/vxd/decoder/reg_io2.h | 74 ++
.../staging/media/vxd/decoder/vdecfw_share.h | 36 +
.../staging/media/vxd/decoder/vdecfw_shared.h | 893 ++++++++++++++++++
22 files changed, 4323 insertions(+)
create mode 100644 drivers/staging/media/vxd/decoder/h264_idx.h
create mode 100644 drivers/staging/media/vxd/decoder/h264_vlc.h
create mode 100644 drivers/staging/media/vxd/decoder/h264fw_data_shared.h
create mode 100644 drivers/staging/media/vxd/decoder/hevcfw_data_shared.h
create mode 100644 drivers/staging/media/vxd/decoder/img_msvdx_cmds.h
create mode 100644 drivers/staging/media/vxd/decoder/img_msvdx_core_regs.h
create mode 100644 drivers/staging/media/vxd/decoder/img_msvdx_vdmc_regs.h
create mode 100644 drivers/staging/media/vxd/decoder/img_msvdx_vec_regs.h
create mode 100644 drivers/staging/media/vxd/decoder/img_pvdec_core_regs.h
create mode 100644 drivers/staging/media/vxd/decoder/img_pvdec_pixel_regs.h
create mode 100644 drivers/staging/media/vxd/decoder/img_pvdec_test_regs.h
create mode 100644 drivers/staging/media/vxd/decoder/img_vdec_fw_msg.h
create mode 100644 drivers/staging/media/vxd/decoder/img_video_bus4_mmu_regs.h
create mode 100644 drivers/staging/media/vxd/decoder/jpegfw_data_shared.h
create mode 100644 drivers/staging/media/vxd/decoder/mem_io.h
create mode 100644 drivers/staging/media/vxd/decoder/pvdec_entropy_regs.h
create mode 100644 drivers/staging/media/vxd/decoder/pvdec_int.h
create mode 100644 drivers/staging/media/vxd/decoder/pvdec_vec_be_regs.h
create mode 100644 drivers/staging/media/vxd/decoder/reg_io2.h
create mode 100644 drivers/staging/media/vxd/decoder/vdecfw_share.h
create mode 100644 drivers/staging/media/vxd/decoder/vdecfw_shared.h
diff --git a/MAINTAINERS b/MAINTAINERS
index c7a6a0974415..a00ac0852b2a 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -19566,21 +19566,40 @@ F: drivers/staging/media/vxd/decoder/bspp.c
F: drivers/staging/media/vxd/decoder/bspp.h
F: drivers/staging/media/vxd/decoder/bspp_int.h
F: drivers/staging/media/vxd/decoder/fw_interface.h
+F: drivers/staging/media/vxd/decoder/h264_idx.h
F: drivers/staging/media/vxd/decoder/h264_secure_parser.c
F: drivers/staging/media/vxd/decoder/h264_secure_parser.h
+F: drivers/staging/media/vxd/decoder/h264_vlc.h
F: drivers/staging/media/vxd/decoder/h264fw_data.h
+F: drivers/staging/media/vxd/decoder/h264fw_data_shared.h
F: drivers/staging/media/vxd/decoder/hevc_secure_parser.c
F: drivers/staging/media/vxd/decoder/hevc_secure_parser.h
F: drivers/staging/media/vxd/decoder/hevcfw_data.h
+F: drivers/staging/media/vxd/decoder/hevcfw_data_shared.h
F: drivers/staging/media/vxd/decoder/hw_control.c
F: drivers/staging/media/vxd/decoder/hw_control.h
F: drivers/staging/media/vxd/decoder/img_dec_common.h
+F: drivers/staging/media/vxd/decoder/img_msvdx_cmds.h
+F: drivers/staging/media/vxd/decoder/img_msvdx_core_regs.h
+F: drivers/staging/media/vxd/decoder/img_msvdx_vdmc_regs.h
+F: drivers/staging/media/vxd/decoder/img_msvdx_vec_regs.h
F: drivers/staging/media/vxd/decoder/img_pixfmts.h
F: drivers/staging/media/vxd/decoder/img_profiles_levels.h
+F: drivers/staging/media/vxd/decoder/img_pvdec_core_regs.h
+F: drivers/staging/media/vxd/decoder/img_pvdec_pixel_regs.h
+F: drivers/staging/media/vxd/decoder/img_pvdec_test_regs.h
+F: drivers/staging/media/vxd/decoder/img_vdec_fw_msg.h
+F: drivers/staging/media/vxd/decoder/img_video_bus4_mmu_regs.h
F: drivers/staging/media/vxd/decoder/jpeg_secure_parser.c
F: drivers/staging/media/vxd/decoder/jpeg_secure_parser.h
F: drivers/staging/media/vxd/decoder/jpegfw_data.h
+F: drivers/staging/media/vxd/decoder/jpegfw_data_shared.h
+F: drivers/staging/media/vxd/decoder/mem_io.h
F: drivers/staging/media/vxd/decoder/mmu_defs.h
+F: drivers/staging/media/vxd/decoder/pvdec_entropy_regs.h
+F: drivers/staging/media/vxd/decoder/pvdec_int.h
+F: drivers/staging/media/vxd/decoder/pvdec_vec_be_regs.h
+F: drivers/staging/media/vxd/decoder/reg_io2.h
F: drivers/staging/media/vxd/decoder/scaler_setup.h
F: drivers/staging/media/vxd/decoder/swsr.c
F: drivers/staging/media/vxd/decoder/swsr.h
@@ -19589,6 +19608,8 @@ F: drivers/staging/media/vxd/decoder/translation_api.h
F: drivers/staging/media/vxd/decoder/vdec_defs.h
F: drivers/staging/media/vxd/decoder/vdec_mmu_wrapper.c
F: drivers/staging/media/vxd/decoder/vdec_mmu_wrapper.h
+F: drivers/staging/media/vxd/decoder/vdecfw_share.h
+F: drivers/staging/media/vxd/decoder/vdecfw_shared.h
F: drivers/staging/media/vxd/decoder/vxd_core.c
F: drivers/staging/media/vxd/decoder/vxd_dec.c
F: drivers/staging/media/vxd/decoder/vxd_dec.h
diff --git a/drivers/staging/media/vxd/decoder/h264_idx.h b/drivers/staging/media/vxd/decoder/h264_idx.h
new file mode 100644
index 000000000000..5fd050f664b3
--- /dev/null
+++ b/drivers/staging/media/vxd/decoder/h264_idx.h
@@ -0,0 +1,60 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * h264 idx table definitions
+ *
+ * Copyright (c) Imagination Technologies Ltd.
+ * Copyright (c) 2021 Texas Instruments Incorporated - http://www.ti.com/
+ *
+ * Authors:
+ * Angela Stegmaier <[email protected]>
+ */
+
+#ifndef __H264_IDX_H__
+#define __H264_IDX_H__
+
+#include <linux/types.h>
+
+static unsigned short h264_vlc_index_data[38][3] = {
+ { 2, 5, 0 }, /* NumCoeffTrailingOnes_Table9-5_nC_0-1.out */
+ { 0, 3, 76 }, /* NumCoeffTrailingOnes_Table9-5_nC_2-3.out */
+ { 0, 3, 160 }, /* NumCoeffTrailingOnes_Table9-5_nC_4-7.out */
+ { 0, 2, 231 }, /* NumCoeffTrailingOnesFixedLen.out */
+ { 2, 2, 244 }, /* NumCoeffTrailingOnesChromaDC_YUV420.out */
+ { 2, 5, 261 }, /* NumCoeffTrailingOnesChromaDC_YUV422.out */
+ { 2, 5, 301 }, /* TotalZeros_00.out */
+ { 0, 2, 326 }, /* TotalZeros_01.out */
+ { 0, 2, 345 }, /* TotalZeros_02.out */
+ { 0, 2, 363 }, /* TotalZeros_03.out */
+ { 0, 2, 379 }, /* TotalZeros_04.out */
+ { 0, 2, 394 }, /* TotalZeros_05.out */
+ { 0, 2, 406 }, /* TotalZeros_06.out */
+ { 0, 1, 418 }, /* TotalZeros_07.out */
+ { 0, 1, 429 }, /* TotalZeros_08.out */
+ { 0, 1, 438 }, /* TotalZeros_09.out */
+ { 2, 2, 446 }, /* TotalZeros_10.out */
+ { 2, 2, 452 }, /* TotalZeros_11.out */
+ { 2, 1, 456 }, /* TotalZeros_12.out */
+ { 0, 0, 459 }, /* TotalZeros_13.out */
+ { 0, 0, 461 }, /* TotalZeros_14.out */
+ { 2, 2, 463 }, /* TotalZerosChromaDC_YUV420_00.out */
+ { 2, 1, 467 }, /* TotalZerosChromaDC_YUV420_01.out */
+ { 0, 0, 470 }, /* TotalZerosChromaDC_YUV420_02.out */
+ { 0, 0, 472 }, /* Run_00.out */
+ { 2, 1, 474 }, /* Run_01.out */
+ { 0, 1, 477 }, /* Run_02.out */
+ { 0, 1, 481 }, /* Run_03.out */
+ { 1, 1, 487 }, /* Run_04.out */
+ { 0, 2, 494 }, /* Run_05.out */
+ { 0, 2, 502 }, /* Run_06.out */
+ { 2, 4, 520 }, /* TotalZerosChromaDC_YUV422_00.out */
+ { 2, 2, 526 }, /* TotalZerosChromaDC_YUV422_01.out */
+ { 0, 1, 530 }, /* TotalZerosChromaDC_YUV422_02.out */
+ { 1, 2, 534 }, /* TotalZerosChromaDC_YUV422_03.out */
+ { 0, 0, 538 }, /* TotalZerosChromaDC_YUV422_04.out */
+ { 0, 0, 540 }, /* TotalZerosChromaDC_YUV422_05.out */
+ { 0, 0, 542 }, /* TotalZerosChromaDC_YUV422_06.out */
+};
+
+static const unsigned char h264_vlc_index_size = 38;
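+
+/*
+ * The third field of each h264_vlc_index_data[] entry is the starting
+ * entry (counted in groups of three shorts) of the corresponding
+ * sub-table within h264_vlc_table_data[] in h264_vlc.h; the final
+ * sub-table starts at entry 542 and holds two entries, which matches
+ * h264_vlc_table_size (544).
+ */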
+
+#endif
diff --git a/drivers/staging/media/vxd/decoder/h264_vlc.h b/drivers/staging/media/vxd/decoder/h264_vlc.h
new file mode 100644
index 000000000000..7cd79ada8ecc
--- /dev/null
+++ b/drivers/staging/media/vxd/decoder/h264_vlc.h
@@ -0,0 +1,604 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * h264 vlc table definitions
+ *
+ * Copyright (c) Imagination Technologies Ltd.
+ * Copyright (c) 2021 Texas Instruments Incorporated - http://www.ti.com/
+ *
+ * Authors:
+ * Angela Stegmaier <[email protected]>
+ */
+
+#ifndef __H264_VLC_H__
+#define __H264_VLC_H__
+
+#include <linux/types.h>
+
+static unsigned short h264_vlc_table_data[] = {
+/* NumCoeffTrailingOnes_Table9-5_nC_0-1.out */
+ 4, 0, 0,
+ 4, 1, 5,
+ 4, 2, 10,
+ 2, 1, 4,
+ 2, 1, 6,
+ 0, 1, 8,
+ 0, 2, 11,
+ 4, 0, 15,
+ 4, 1, 4,
+ 4, 1, 9,
+ 4, 0, 19,
+ 4, 1, 14,
+ 4, 1, 23,
+ 4, 1, 27,
+ 4, 1, 18,
+ 4, 1, 13,
+ 4, 1, 8,
+ 2, 5, 8,
+ 0, 1, 50,
+ 0, 0, 53,
+ 0, 0, 54,
+ 4, 2, 31,
+ 4, 2, 22,
+ 4, 2, 17,
+ 4, 2, 12,
+ 0, 2, 7,
+ 0, 2, 14,
+ 0, 2, 21,
+ 0, 2, 28,
+ 0, 1, 35,
+ 4, 5, 53,
+ 3, 5, 0,
+ 4, 2, 32,
+ 4, 2, 38,
+ 4, 2, 33,
+ 4, 2, 28,
+ 4, 2, 43,
+ 4, 2, 34,
+ 4, 2, 29,
+ 4, 2, 24,
+ 4, 2, 51,
+ 4, 2, 46,
+ 4, 2, 41,
+ 4, 2, 40,
+ 4, 2, 47,
+ 4, 2, 42,
+ 4, 2, 37,
+ 4, 2, 36,
+ 4, 2, 59,
+ 4, 2, 54,
+ 4, 2, 49,
+ 4, 2, 48,
+ 4, 2, 55,
+ 4, 2, 50,
+ 4, 2, 45,
+ 4, 2, 44,
+ 4, 2, 67,
+ 4, 2, 62,
+ 4, 2, 61,
+ 4, 2, 56,
+ 4, 2, 63,
+ 4, 2, 58,
+ 4, 2, 57,
+ 4, 2, 52,
+ 4, 1, 64,
+ 4, 1, 66,
+ 4, 1, 65,
+ 4, 1, 60,
+ 4, 1, 39,
+ 4, 1, 30,
+ 4, 1, 25,
+ 4, 1, 20,
+ 4, 0, 35,
+ 4, 0, 26,
+ 4, 0, 21,
+ 4, 0, 16,
+/* NumCoeffTrailingOnes_Table9-5_nC_2-3.out */
+ 0, 2, 16,
+ 0, 1, 73,
+ 0, 1, 76,
+ 0, 0, 79,
+ 4, 3, 19,
+ 4, 3, 15,
+ 4, 2, 10,
+ 4, 2, 10,
+ 4, 1, 5,
+ 4, 1, 5,
+ 4, 1, 5,
+ 4, 1, 5,
+ 4, 1, 0,
+ 4, 1, 0,
+ 4, 1, 0,
+ 4, 1, 0,
+ 2, 5, 8,
+ 0, 1, 49,
+ 0, 0, 52,
+ 0, 0, 53,
+ 4, 2, 35,
+ 4, 2, 22,
+ 4, 2, 21,
+ 4, 2, 12,
+ 0, 2, 7,
+ 0, 2, 14,
+ 0, 2, 21,
+ 1, 1, 28,
+ 0, 1, 34,
+ 4, 5, 63,
+ 3, 5, 0,
+ 4, 2, 47,
+ 4, 2, 38,
+ 4, 2, 37,
+ 4, 2, 32,
+ 4, 2, 43,
+ 4, 2, 34,
+ 4, 2, 33,
+ 4, 2, 28,
+ 4, 2, 44,
+ 4, 2, 46,
+ 4, 2, 45,
+ 4, 2, 40,
+ 4, 2, 51,
+ 4, 2, 42,
+ 4, 2, 41,
+ 4, 2, 36,
+ 4, 2, 59,
+ 4, 2, 54,
+ 4, 2, 53,
+ 4, 2, 52,
+ 4, 2, 55,
+ 4, 2, 50,
+ 4, 2, 49,
+ 4, 2, 48,
+ 0, 1, 3,
+ 4, 1, 58,
+ 4, 1, 56,
+ 4, 1, 61,
+ 4, 1, 60,
+ 4, 1, 62,
+ 4, 1, 57,
+ 4, 1, 67,
+ 4, 1, 66,
+ 4, 1, 65,
+ 4, 1, 64,
+ 4, 1, 39,
+ 4, 1, 30,
+ 4, 1, 29,
+ 4, 1, 24,
+ 4, 0, 20,
+ 4, 0, 26,
+ 4, 0, 25,
+ 4, 0, 16,
+ 4, 1, 31,
+ 4, 1, 18,
+ 4, 1, 17,
+ 4, 1, 8,
+ 4, 1, 27,
+ 4, 1, 14,
+ 4, 1, 13,
+ 4, 1, 4,
+ 4, 0, 23,
+ 4, 0, 9,
+/* NumCoeffTrailingOnes_Table9-5_nC_4-7.out */
+ 2, 1, 16,
+ 0, 2, 50,
+ 0, 1, 57,
+ 0, 1, 60,
+ 6, 0, 10,
+ 6, 0, 8,
+ 0, 0, 61,
+ 0, 0, 62,
+ 4, 3, 31,
+ 4, 3, 27,
+ 4, 3, 23,
+ 4, 3, 19,
+ 4, 3, 15,
+ 4, 3, 10,
+ 4, 3, 5,
+ 4, 3, 0,
+ 0, 2, 3,
+ 0, 2, 10,
+ 0, 3, 17,
+ 4, 2, 51,
+ 4, 2, 46,
+ 4, 2, 41,
+ 4, 2, 36,
+ 4, 2, 47,
+ 4, 2, 42,
+ 4, 2, 37,
+ 4, 2, 32,
+ 4, 2, 48,
+ 4, 2, 54,
+ 4, 2, 49,
+ 4, 2, 44,
+ 4, 2, 55,
+ 4, 2, 50,
+ 4, 2, 45,
+ 4, 2, 40,
+ 3, 3, 0,
+ 4, 3, 64,
+ 4, 3, 67,
+ 4, 3, 66,
+ 4, 3, 65,
+ 4, 3, 60,
+ 4, 3, 63,
+ 4, 3, 62,
+ 4, 3, 61,
+ 4, 3, 56,
+ 4, 3, 59,
+ 4, 3, 58,
+ 4, 3, 57,
+ 4, 3, 52,
+ 4, 2, 53,
+ 4, 2, 53,
+ 4, 2, 28,
+ 4, 2, 24,
+ 4, 2, 38,
+ 4, 2, 20,
+ 4, 2, 43,
+ 4, 2, 34,
+ 4, 2, 33,
+ 4, 2, 16,
+ 4, 1, 12,
+ 4, 1, 30,
+ 4, 1, 29,
+ 4, 1, 8,
+ 4, 1, 39,
+ 4, 1, 26,
+ 4, 1, 25,
+ 4, 1, 4,
+ 4, 0, 13,
+ 4, 0, 35,
+ 4, 0, 14,
+ 4, 0, 9,
+/* NumCoeffTrailingOnesFixedLen.out */
+ 2, 1, 8,
+ 5, 2, 6,
+ 5, 2, 10,
+ 5, 2, 14,
+ 5, 2, 18,
+ 5, 2, 22,
+ 5, 2, 26,
+ 5, 2, 30,
+ 5, 1, 4,
+ 0, 0, 2,
+ 5, 0, 2,
+ 3, 0, 0,
+ 4, 0, 0,
+/* NumCoeffTrailingOnesChromaDC_YUV420.out */
+ 4, 0, 5,
+ 4, 1, 0,
+ 4, 2, 10,
+ 0, 2, 1,
+ 1, 1, 8,
+ 0, 0, 10,
+ 4, 2, 16,
+ 4, 2, 12,
+ 4, 2, 8,
+ 4, 2, 15,
+ 4, 2, 9,
+ 4, 2, 4,
+ 4, 0, 19,
+ 4, 1, 18,
+ 4, 1, 17,
+ 4, 0, 14,
+ 4, 0, 13,
+/* NumCoeffTrailingOnesChromaDC_YUV422.out */
+ 4, 0, 0,
+ 4, 1, 5,
+ 4, 2, 10,
+ 0, 2, 4,
+ 4, 4, 15,
+ 4, 5, 19,
+ 2, 3, 9,
+ 4, 2, 27,
+ 4, 2, 23,
+ 4, 2, 18,
+ 4, 2, 14,
+ 4, 2, 13,
+ 4, 2, 9,
+ 4, 2, 8,
+ 4, 2, 4,
+ 0, 1, 5,
+ 0, 1, 8,
+ 0, 1, 11,
+ 0, 1, 14,
+ 1, 2, 17,
+ 4, 1, 22,
+ 4, 1, 17,
+ 4, 1, 16,
+ 4, 1, 12,
+ 4, 1, 31,
+ 4, 1, 26,
+ 4, 1, 21,
+ 4, 1, 20,
+ 4, 1, 35,
+ 4, 1, 30,
+ 4, 1, 25,
+ 4, 1, 24,
+ 4, 1, 34,
+ 4, 1, 33,
+ 4, 1, 29,
+ 4, 1, 28,
+ 3, 2, 0,
+ 3, 2, 0,
+ 3, 2, 0,
+ 4, 2, 32,
+/* TotalZeros_00.out */
+ 4, 0, 0,
+ 0, 0, 6,
+ 0, 0, 7,
+ 0, 0, 8,
+ 0, 0, 9,
+ 0, 0, 10,
+ 0, 2, 11,
+ 4, 0, 2,
+ 4, 0, 1,
+ 4, 0, 4,
+ 4, 0, 3,
+ 4, 0, 6,
+ 4, 0, 5,
+ 4, 0, 8,
+ 4, 0, 7,
+ 4, 0, 10,
+ 4, 0, 9,
+ 3, 2, 0,
+ 4, 2, 15,
+ 4, 2, 14,
+ 4, 2, 13,
+ 4, 1, 12,
+ 4, 1, 12,
+ 4, 1, 11,
+ 4, 1, 11,
+/* TotalZeros_01.out */
+ 1, 1, 8,
+ 0, 0, 14,
+ 0, 0, 15,
+ 4, 2, 4,
+ 4, 2, 3,
+ 4, 2, 2,
+ 4, 2, 1,
+ 4, 2, 0,
+ 0, 1, 3,
+ 4, 1, 10,
+ 4, 1, 9,
+ 4, 1, 14,
+ 4, 1, 13,
+ 4, 1, 12,
+ 4, 1, 11,
+ 4, 0, 8,
+ 4, 0, 7,
+ 4, 0, 6,
+ 4, 0, 5,
+/* TotalZeros_02.out */
+ 0, 1, 8,
+ 0, 0, 13,
+ 0, 0, 14,
+ 4, 2, 7,
+ 4, 2, 6,
+ 4, 2, 3,
+ 4, 2, 2,
+ 4, 2, 1,
+ 0, 0, 4,
+ 4, 1, 12,
+ 4, 1, 10,
+ 4, 1, 9,
+ 4, 0, 13,
+ 4, 0, 11,
+ 4, 0, 8,
+ 4, 0, 5,
+ 4, 0, 4,
+ 4, 0, 0,
+/* TotalZeros_03.out */
+ 0, 1, 8,
+ 0, 0, 11,
+ 0, 0, 12,
+ 4, 2, 8,
+ 4, 2, 6,
+ 4, 2, 5,
+ 4, 2, 4,
+ 4, 2, 1,
+ 4, 1, 12,
+ 4, 1, 11,
+ 4, 1, 10,
+ 4, 1, 0,
+ 4, 0, 9,
+ 4, 0, 7,
+ 4, 0, 3,
+ 4, 0, 2,
+/* TotalZeros_04.out */
+ 2, 1, 8,
+ 0, 0, 10,
+ 0, 0, 11,
+ 4, 2, 7,
+ 4, 2, 6,
+ 4, 2, 5,
+ 4, 2, 4,
+ 4, 2, 3,
+ 4, 0, 10,
+ 4, 1, 9,
+ 4, 1, 11,
+ 4, 0, 8,
+ 4, 0, 2,
+ 4, 0, 1,
+ 4, 0, 0,
+/* TotalZeros_05.out */
+ 2, 2, 8,
+ 4, 2, 9,
+ 4, 2, 7,
+ 4, 2, 6,
+ 4, 2, 5,
+ 4, 2, 4,
+ 4, 2, 3,
+ 4, 2, 2,
+ 4, 0, 8,
+ 4, 1, 1,
+ 4, 2, 0,
+ 4, 2, 10,
+/* TotalZeros_06.out */
+ 2, 2, 8,
+ 4, 2, 8,
+ 4, 2, 6,
+ 4, 2, 4,
+ 4, 2, 3,
+ 4, 2, 2,
+ 4, 1, 5,
+ 4, 1, 5,
+ 4, 0, 7,
+ 4, 1, 1,
+ 4, 2, 0,
+ 4, 2, 9,
+/* TotalZeros_07.out */
+ 2, 3, 4,
+ 0, 0, 8,
+ 4, 1, 5,
+ 4, 1, 4,
+ 4, 0, 7,
+ 4, 1, 1,
+ 4, 2, 2,
+ 4, 3, 0,
+ 4, 3, 8,
+ 4, 0, 6,
+ 4, 0, 3,
+/* TotalZeros_08.out */
+ 2, 3, 4,
+ 4, 1, 6,
+ 4, 1, 4,
+ 4, 1, 3,
+ 4, 0, 5,
+ 4, 1, 2,
+ 4, 2, 7,
+ 4, 3, 0,
+ 4, 3, 1,
+/* TotalZeros_09.out */
+ 2, 2, 4,
+ 4, 1, 5,
+ 4, 1, 4,
+ 4, 1, 3,
+ 4, 0, 2,
+ 4, 1, 6,
+ 4, 2, 0,
+ 4, 2, 1,
+/* TotalZeros_10.out */
+ 4, 0, 4,
+ 0, 0, 3,
+ 4, 2, 2,
+ 5, 0, 0,
+ 4, 0, 3,
+ 4, 0, 5,
+/* TotalZeros_11.out */
+ 4, 0, 3,
+ 4, 1, 2,
+ 4, 2, 4,
+ 5, 0, 0,
+/* TotalZeros_12.out */
+ 4, 0, 2,
+ 4, 1, 3,
+ 5, 0, 0,
+/* TotalZeros_13.out */
+ 5, 0, 0,
+ 4, 0, 2,
+/* TotalZeros_14.out */
+ 4, 0, 0,
+ 4, 0, 1,
+/* TotalZerosChromaDC_YUV420_00.out */
+ 4, 0, 0,
+ 4, 1, 1,
+ 4, 2, 2,
+ 4, 2, 3,
+/* TotalZerosChromaDC_YUV420_01.out */
+ 4, 0, 0,
+ 4, 1, 1,
+ 4, 1, 2,
+/* TotalZerosChromaDC_YUV420_02.out */
+ 4, 0, 1,
+ 4, 0, 0,
+/* Run_00.out */
+ 4, 0, 1,
+ 4, 0, 0,
+/* Run_01.out */
+ 4, 0, 0,
+ 4, 1, 1,
+ 4, 1, 2,
+/* Run_02.out */
+ 4, 1, 3,
+ 4, 1, 2,
+ 4, 1, 1,
+ 4, 1, 0,
+/* Run_03.out */
+ 0, 0, 4,
+ 4, 1, 2,
+ 4, 1, 1,
+ 4, 1, 0,
+ 4, 0, 4,
+ 4, 0, 3,
+/* Run_04.out */
+ 0, 1, 3,
+ 4, 1, 1,
+ 4, 1, 0,
+ 4, 1, 5,
+ 4, 1, 4,
+ 4, 1, 3,
+ 4, 1, 2,
+/* Run_05.out */
+ 4, 2, 1,
+ 4, 2, 2,
+ 4, 2, 4,
+ 4, 2, 3,
+ 4, 2, 6,
+ 4, 2, 5,
+ 4, 1, 0,
+ 4, 1, 0,
+/* Run_06.out */
+ 2, 5, 8,
+ 4, 2, 6,
+ 4, 2, 5,
+ 4, 2, 4,
+ 4, 2, 3,
+ 4, 2, 2,
+ 4, 2, 1,
+ 4, 2, 0,
+ 4, 0, 7,
+ 4, 1, 8,
+ 4, 2, 9,
+ 4, 3, 10,
+ 4, 4, 11,
+ 4, 5, 12,
+ 2, 1, 1,
+ 4, 0, 13,
+ 4, 1, 14,
+ 3, 1, 0,
+/* TotalZerosChromaDC_YUV422_00.out */
+ 4, 0, 0,
+ 6, 0, 0,
+ 6, 0, 1,
+ 4, 3, 5,
+ 4, 4, 6,
+ 4, 4, 7,
+/* TotalZerosChromaDC_YUV422_01.out */
+ 6, 1, 1,
+ 4, 1, 1,
+ 4, 2, 2,
+ 4, 2, 0,
+/* TotalZerosChromaDC_YUV422_02.out */
+ 5, 0, 0,
+ 4, 1, 2,
+ 4, 1, 3,
+ 5, 0, 2,
+/* TotalZerosChromaDC_YUV422_03.out */
+ 6, 0, 0,
+ 4, 1, 3,
+ 4, 2, 0,
+ 4, 2, 4,
+/* TotalZerosChromaDC_YUV422_04.out */
+ 5, 0, 0,
+ 5, 0, 1,
+/* TotalZerosChromaDC_YUV422_05.out */
+ 5, 0, 0,
+ 4, 0, 2,
+/* TotalZerosChromaDC_YUV422_06.out */
+ 4, 0, 0,
+ 4, 0, 1
+};
+
+static const unsigned short h264_vlc_table_size = 544;
+
+#endif
diff --git a/drivers/staging/media/vxd/decoder/h264fw_data_shared.h b/drivers/staging/media/vxd/decoder/h264fw_data_shared.h
new file mode 100644
index 000000000000..b8efd5f4c2f5
--- /dev/null
+++ b/drivers/staging/media/vxd/decoder/h264fw_data_shared.h
@@ -0,0 +1,759 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Public data structures for the h264 parser firmware module
+ *
+ * Copyright (c) Imagination Technologies Ltd.
+ * Copyright (c) 2021 Texas Instruments Incorporated - http://www.ti.com/
+ *
+ * Authors:
+ * Amit Makani <[email protected]>
+ *
+ * Re-written for upstreaming
+ * Sidraya Jayagond <[email protected]>
+ */
+#ifdef USE_SHARING
+#endif
+
+#ifndef _H264FW_DATA_H_
+#define _H264FW_DATA_H_
+
+#include "vdecfw_share.h"
+#include "vdecfw_shared.h"
+
+#define H264_MAX_SPS_COUNT 32
+#define H264_MAX_PPS_COUNT 256
+
+#define H264_SCALING_LISTS_NUM_CHROMA_IDC_NON_3 (8)
+#define H264_SCALING_LISTS_NUM_CHROMA_IDC_3 (12)
+#define MAX_PIC_SCALING_LIST (12)
+
+/* Maximum number of alternative CPB specifications in the stream */
+#define H264_MAXIMUMVALUEOFCPB_CNT 32
+
+/*
+ * The maximum DPB size is related to the number of MVC views supported.
+ * The size is defined in H.10.2 of the H.264 spec.
+ * If the number of views is changed, the DPB size must be updated to match.
+ * The limits are as follows:
+ * NumViews:     1,  2,  4,  8, 16
+ * MaxDpbFrames: 16, 16, 32, 48, 64
+ */
+
+#define H264FW_MAX_NUM_VIEWS 1
+#define H264FW_MAX_DPB_SIZE 16
+#define H264FW_MAX_NUM_MVC_REFS 1
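+/*
+ * For example, raising H264FW_MAX_NUM_VIEWS to 4 would require
+ * H264FW_MAX_DPB_SIZE to be raised to 32, per the limits above.
+ */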
+
+/* Number of H264 VLC table configuration registers */
+#define H264FW_NUM_VLC_REG 22
+
+/* Maximum value for num_ref_frames_in_pic_order_cnt_cycle */
+#define H264FW_MAX_CYCLE_REF_FRAMES 256
+
+/* 4x4 scaling list size */
+#define H264FW_4X4_SIZE 16
+/* 8x8 scaling list size */
+#define H264FW_8X8_SIZE 64
+/* Number of 4x4 scaling lists */
+#define H264FW_NUM_4X4_LISTS 6
+/* Number of 8x8 scaling lists */
+#define H264FW_NUM_8X8_LISTS 6
+
+/* Number of reference picture lists */
+#define H264FW_MAX_REFPIC_LISTS 2
+
+/*
+ * The maximum number of slice groups.
+ * This can be removed if the slice group map is prepared on the host.
+ */
+#define H264FW_MAX_SLICE_GROUPS 8
+
+/* The maximum number of planes for 4:4:4 separate colour plane streams */
+#define H264FW_MAX_PLANES 3
+
+#define H264_MAX_SGM_SIZE 8196
+
+#define IS_H264_HIGH_PROFILE(profile_idc, type) \
+ ({ \
+ type __profile_idc = profile_idc; \
+ (__profile_idc == H264_PROFILE_HIGH) || \
+ (__profile_idc == H264_PROFILE_HIGH10) || \
+ (__profile_idc == H264_PROFILE_HIGH422) || \
+ (__profile_idc == H264_PROFILE_HIGH444) || \
+ (__profile_idc == H264_PROFILE_CAVLC444) || \
+ (__profile_idc == H264_PROFILE_MVC_HIGH) || \
+		(__profile_idc == H264_PROFILE_MVC_STEREO); })
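+/*
+ * Typical use: IS_H264_HIGH_PROFILE(sps->profile_idc, unsigned char); the
+ * type argument types the temporary used to avoid re-evaluating profile_idc.
+ */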
+
+/* This type describes the H.264 NAL unit types */
+enum h264_enaltype {
+ H264FW_NALTYPE_SLICE = 1,
+ H264FW_NALTYPE_IDRSLICE = 5,
+ H264FW_NALTYPE_SEI = 6,
+ H264FW_NALTYPE_SPS = 7,
+ H264FW_NALTYPE_PPS = 8,
+ H264FW_NALTYPE_AUD = 9,
+ H264FW_NALTYPE_EOSEQ = 10,
+ H264FW_NALTYPE_EOSTR = 11,
+ H264FW_NALTYPE_PREFIX = 14,
+ H264FW_NALTYPE_SUBSET_SPS = 15,
+ H264FW_NALTYPE_AUXILIARY_SLICE = 19,
+ H264FW_NALTYPE_EXTSLICE = 20,
+ H264FW_NALTYPE_EXTSLICE_DEPTH_VIEW = 21,
+ H264FW_NALTYPE_FORCE32BITS = 0x7FFFFFFFU
+};
+
+/* AVC Profile IDC definitions */
+enum h264_eprofileidc {
+ H264_PROFILE_CAVLC444 = 44,
+ H264_PROFILE_BASELINE = 66,
+ H264_PROFILE_MAIN = 77,
+ H264_PROFILE_SCALABLE = 83,
+ H264_PROFILE_EXTENDED = 88,
+ H264_PROFILE_HIGH = 100,
+ H264_PROFILE_HIGH10 = 110,
+ H264_PROFILE_MVC_HIGH = 118,
+ H264_PROFILE_HIGH422 = 122,
+ H264_PROFILE_MVC_STEREO = 128,
+ H264_PROFILE_HIGH444 = 244,
+ H264_PROFILE_FORCE32BITS = 0x7FFFFFFFU
+};
+
+/* This type defines the constraint set flags */
+enum h264fw_econstraint_flag {
+ H264FW_CONSTRAINT_BASELINE_SHIFT = 7,
+ H264FW_CONSTRAINT_MAIN_SHIFT = 6,
+ H264FW_CONSTRAINT_EXTENDED_SHIFT = 5,
+ H264FW_CONSTRAINT_INTRA_SHIFT = 4,
+ H264FW_CONSTRAINT_MULTIHIGH_SHIFT = 3,
+ H264FW_CONSTRAINT_STEREOHIGH_SHIFT = 2,
+ H264FW_CONSTRAINT_RESERVED6_SHIFT = 1,
+ H264FW_CONSTRAINT_RESERVED7_SHIFT = 0,
+ H264FW_CONSTRAINT_FORCE32BITS = 0x7FFFFFFFU
+};
+
+/*
+ * This enum describes the reference status of an H.264 picture.
+ *
+ * Unpaired fields should have all ref_status_* values set to the same value.
+ *
+ * For Frame, Mbaff and Pair types the individual field and frame ref status
+ * should be set accordingly.
+ *
+ * ref_status_frame   ref_status_top     ref_status_bottom
+ * UNUSED UNUSED UNUSED
+ * SHORTTERM SHORTTERM SHORTTERM
+ * LONGTERM LONGTERM LONGTERM
+ *
+ * UNUSED SHORT/LONGTERM UNUSED
+ * UNUSED UNUSED SHORT/LONGTERM
+ *
+ * SHORTTERM LONGTERM SHORTTERM
+ * SHORTTERM SHORTTERM LONGTERM
+ * - NB: It is not clear from the spec if the Frame should be marked as short
+ * or long term in this case
+ */
+enum h264fw_ereference {
+ H264FW_REF_UNUSED = 0,
+ H264FW_REF_SHORTTERM,
+ H264FW_REF_LONGTERM,
+ H264FW_REF_FORCE32BITS = 0x7FFFFFFFU
+};
+
+/* This type defines the picture structure. */
+enum h264fw_epicture_type {
+ H264FW_TYPE_NONE = 0,
+ H264FW_TYPE_TOP,
+ H264FW_TYPE_BOTTOM,
+ H264FW_TYPE_FRAME,
+ H264FW_TYPE_MBAFF,
+ H264FW_TYPE_PAIR,
+ H264FW_TYPE_FORCE32BITS = 0x7FFFFFFFU
+};
+
+/*
+ * This describes the SPS header data required by the H264 firmware that should
+ * be supplied by the Host.
+ */
+struct h264fw_sequence_ps {
+ /* syntax elements from SPS */
+
+ /* syntax element from bitstream - 8 bit */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char, profile_idc);
+ /* syntax element from bitstream - 2 bit */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char, chroma_format_idc);
+ /* syntax element from bitstream - 1 bit */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char, separate_colour_plane_flag);
+ /* syntax element from bitstream - 3 bit */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char, bit_depth_luma_minus8);
+ /* syntax element from bitstream - 3 bit */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char, bit_depth_chroma_minus8);
+ /* syntax element from bitstream - 1 bit */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char, delta_pic_order_always_zero_flag);
+ /* syntax element from bitstream - 4+ bit */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char, log2_max_pic_order_cnt_lsb);
+
+ /* syntax element from bitstream - 5 bit */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char, max_num_ref_frames);
+ /* syntax element from bitstream - 4+ bit */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char, log2_max_frame_num);
+ /* syntax element from bitstream - 2 bit */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char, pic_order_cnt_type);
+ /* syntax element from bitstream - 1 bit */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char, frame_mbs_only_flag);
+ /* syntax element from bitstream - 1 bit */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char, gaps_in_frame_num_value_allowed_flag);
+
+ /*
+ * set0--7 flags as they occur in the bitstream
+ * (including reserved values)
+ */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char, constraint_set_flags);
+ /* syntax element from bitstream - 8 bit */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char, level_idc);
+ /* syntax element from bitstream - 8 bit */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char, num_ref_frames_in_pic_order_cnt_cycle);
+
+ /* syntax element from bitstream - 1 bit */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char, mb_adaptive_frame_field_flag);
+ /* syntax element from bitstream - 32 bit */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ int, offset_for_non_ref_pic);
+ /* syntax element from bitstream - 32 bit */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ int, offset_for_top_to_bottom_field);
+
+ /* syntax element from bitstream */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned int, pic_width_in_mbs_minus1);
+ /* syntax element from bitstream */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned int, pic_height_in_map_units_minus1);
+ /* syntax element from bitstream - 1 bit */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char, direct_8x8_inference_flag);
+ /* syntax element from bitstream */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char, qpprime_y_zero_transform_bypass_flag);
+
+ /* syntax element from bitstream - 32 bit each */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ int, offset_for_ref_frame[H264FW_MAX_CYCLE_REF_FRAMES]);
+
+ /* From VUI information */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char, num_reorder_frames);
+ /*
+ * From VUI/MVC SEI, 0 indicates not set, any actual 0
+ * value will be inferred by the firmware
+ */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char, max_dec_frame_buffering);
+
+ /* From SPS MVC Extension - for the current view_id */
+
+ /* Number of views in this stream */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char, num_views);
+	/* Map of view_ids in VOIdx order */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned short, view_ids[H264FW_MAX_NUM_VIEWS]);
+
+ /* Disable VDMC horizontal/vertical filtering */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char, disable_vdmc_filt);
+ /* Disable CABAC 4:4:4 4x4 transform as not available */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char, transform4x4_mb_not_available);
+
+ /* anchor reference list */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT, unsigned short,
+ anchor_inter_view_reference_id_list[2][H264FW_MAX_NUM_VIEWS]
+ [H264FW_MAX_NUM_MVC_REFS]);
+ /* nonanchor reference list */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT, unsigned short,
+ non_anchor_inter_view_reference_id_list[2][H264FW_MAX_NUM_VIEWS]
+ [H264FW_MAX_NUM_MVC_REFS]);
+	/* number of elements used in anchor_inter_view_reference_id_list[] */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT, unsigned short,
+ num_anchor_refsx[2][H264FW_MAX_NUM_VIEWS]);
+	/* number of elements used in non_anchor_inter_view_reference_id_list[] */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT, unsigned short,
+ num_non_anchor_refsx[2][H264FW_MAX_NUM_VIEWS]);
+};
+
+/*
+ * This structure represents HRD parameters.
+ */
+struct h264fw_hrd {
+ /* cpb_cnt_minus1; */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char, cpb_cnt_minus1);
+ /* bit_rate_scale; */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char, bit_rate_scale);
+ /* cpb_size_scale; */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char, cpb_size_scale);
+ /* bit_rate_value_minus1 */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT, unsigned int,
+ bit_rate_value_minus1[H264_MAXIMUMVALUEOFCPB_CNT]);
+ /* cpb_size_value_minus1 */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT, unsigned int,
+ cpb_size_value_minus1[H264_MAXIMUMVALUEOFCPB_CNT]);
+ /* cbr_flag */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT, unsigned char,
+ cbr_flag[H264_MAXIMUMVALUEOFCPB_CNT]);
+ /* initial_cpb_removal_delay_length_minus1; */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT, unsigned char,
+ initial_cpb_removal_delay_length_minus1);
+ /* cpb_removal_delay_length_minus1; */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT, unsigned char,
+ cpb_removal_delay_length_minus1);
+ /* dpb_output_delay_length_minus1; */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT, unsigned char,
+ dpb_output_delay_length_minus1);
+ /* time_offset_length; */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char, time_offset_length);
+};
+
+/*
+ * This structure represents the VUI parameters data.
+ */
+struct h264fw_vui {
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ int, aspect_ratio_info_present_flag);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char, aspect_ratio_idc);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned short, sar_width);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned short, sar_height);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ int, overscan_info_present_flag);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ int, overscan_appropriate_flag);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ int, video_signal_type_present_flag);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char, video_format);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ int, video_full_range_flag);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ int, colour_description_present_flag);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char, colour_primaries);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char, transfer_characteristics);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char, matrix_coefficients);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ int, chroma_location_info_present_flag);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned int, chroma_sample_loc_type_top_field);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned int, chroma_sample_loc_type_bottom_field);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ int, timing_info_present_flag);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned int, num_units_in_tick);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned int, time_scale);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ int, fixed_frame_rate_flag);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ int, nal_hrd_parameters_present_flag);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ struct h264fw_hrd, nal_hrd_params);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ int, vcl_hrd_parameters_present_flag);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ struct h264fw_hrd, vcl_hrd_params);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ int, low_delay_hrd_flag);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ int, pic_struct_present_flag);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ int, bitstream_restriction_flag);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ int, motion_vectors_over_pic_boundaries_flag);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned int, max_bytes_per_pic_denom);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned int, max_bits_per_mb_denom);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned int, log2_max_mv_length_vertical);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned int, log2_max_mv_length_horizontal);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned int, num_reorder_frames);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned int, max_dec_frame_buffering);
+};
+
+/*
+ * This describes the HW specific SPS header data required by the H264
+ * firmware that should be supplied by the Host.
+ */
+struct h264fw_ddsequence_ps {
+ /* Value for CR_VEC_ENTDEC_FE_CONTROL */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned int, regentdec_control);
+
+ /* NB: This register should contain the 4-bit SGM flag */
+
+ /* Value for CR_VEC_H264_FE_SPS0 & CR_VEC_H264_BE_SPS0 combined */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT, unsigned int, reg_sps0);
+ /* Value of CR_VEC_H264_BE_INTRA_8x8 */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned int, reg_beintra);
+ /* Value of CR_VEC_H264_FE_CABAC444 */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned int, reg_fecaabac444);
+
+ /* Treat CABAC 4:4:4 4x4 transform as not available */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT, unsigned char,
+ transform4x4_mb_notavialbale);
+ /* Disable VDMC horizontal/vertical filtering */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT, unsigned char,
+ disable_vdmcfilt);
+};
+
+/*
+ * This describes the PPS header data required by the H264 firmware that should
+ * be supplied by the Host.
+ */
+struct h264fw_picture_ps {
+ /* syntax element from bitstream - 1 bit */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT, unsigned char,
+ deblocking_filter_control_present_flag);
+ /* syntax element from bitstream - 1 bit */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT, unsigned char,
+ transform_8x8_mode_flag);
+ /* syntax element from bitstream - 1 bit */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT, unsigned char,
+ entropy_coding_mode_flag);
+ /* syntax element from bitstream - 1 bit */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT, unsigned char,
+ redundant_pic_cnt_present_flag);
+
+ /* syntax element from bitstream - 2 bit */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT, unsigned char,
+ weighted_bipred_idc);
+ /* syntax element from bitstream - 1 bit */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT, unsigned char,
+ weighted_pred_flag);
+ /* syntax element from bitstream - 1 bit */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT, unsigned char,
+ pic_order_present_flag);
+
+ /* 26 + syntax element from bitstream - 7 bit */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT, unsigned char, pic_init_qp);
+ /* syntax element from bitstream - 1 bit */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT, unsigned char,
+ constrained_intra_pred_flag);
+ /* syntax element from bitstream - 5 bit each */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT, unsigned char,
+ num_ref_lx_active_minus1[H264FW_MAX_REFPIC_LISTS]);
+
+ /* syntax element from bitstream - 3 bit */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT, unsigned char,
+ slice_group_map_type);
+ /* syntax element from bitstream - 3 bit */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT, unsigned char,
+ num_slice_groups_minus1);
+ /* syntax element from bitstream - 13 bit */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT, unsigned short,
+ slice_group_change_rate_minus1);
+
+ /* syntax element from bitstream */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT, int,
+ chroma_qp_index_offset);
+ /* syntax element from bitstream */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT, int,
+ second_chroma_qp_index_offset);
+
+ /*
+ * scaling lists are derived from both SPS and PPS information
+ * but will change whenever the PPS changes
+ * The derived set of tables are associated here with the PPS
+ * NB: These are in H.264 order
+ */
+
+ /* derived from SPS and PPS - 8 bit each */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT, unsigned char,
+ scalinglist4x4[H264FW_NUM_4X4_LISTS][H264FW_4X4_SIZE]);
+ /* derived from SPS and PPS - 8 bit each */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT, unsigned char,
+ scalinglist8x8[H264FW_NUM_8X8_LISTS][H264FW_8X8_SIZE]);
+};
+
+/*
+ * This describes the HW specific PPS header data required by the H264
+ * firmware that should be supplied by the Host.
+ */
+struct h264fw_dd_picture_ps {
+ /* Value for MSVDX_CMDS_SLICE_PARAMS_MODE_CONFIG */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT, unsigned char,
+ vdmc_mode_config);
+ /* Value for CR_VEC_H264_FE_PPS0 & CR_VEC_H264_BE_PPS0 combined */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT, unsigned int, reg_pps0);
+
+ /*
+ * Scaling lists are derived from both SPS and PPS information
+ * but will change whenever the PPS changes. The derived set of tables
+ * are associated here with the PPS, but this will become invalid if
+ * the SPS changes and will have to be recalculated.
+ * These tables MUST be aligned on a 32-bit boundary
+ * NB: These are in MSVDX order
+ */
+
+ /* derived from SPS and PPS - 8 bit each */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT, unsigned char,
+ scalinglist4x4[H264FW_NUM_4X4_LISTS][H264FW_4X4_SIZE]);
+ /* derived from SPS and PPS - 8 bit each */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT, unsigned char,
+ scalinglist8x8[H264FW_NUM_8X8_LISTS][H264FW_8X8_SIZE]);
+};
+
+/*
+ * This describes the H.264 parser component "Header data", shown in the
+ * Firmware Memory Layout diagram. This data is required by the H264 firmware
+ * and should be supplied by the Host.
+ */
+struct h264fw_header_data {
+ struct vdecfw_image_buffer primary;
+ struct vdecfw_image_buffer alternate;
+
+ /* Output control: rotation, scaling, oold, etc. */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT, unsigned int,
+ pic_cmds[VDECFW_CMD_MAX]);
+ /* Macroblock parameters base address for the picture */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT, unsigned int,
+ mbparams_base_address);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT, unsigned int,
+ mbparams_size_per_plane);
+ /* Buffers for context preload for colour plane switching (6.x.x) */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT, unsigned int,
+ preload_buffer_base_address[H264FW_MAX_PLANES]);
+ /* Base address of active slice group map */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT, unsigned int,
+ slicegroupmap_base_address);
+
+ /* do second pass Intra Deblock on frame */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT, unsigned char, do_old);
+ /* set to IMG_FALSE to disable second-pass deblock */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT, unsigned char,
+ two_pass_flag);
+ /* set to IMG_TRUE to disable MVC */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT, unsigned char,
+ disable_mvc);
+	/*
+	 * Indicates whether a second PPS, for the second field, has been
+	 * provided in the PPS info source.
+	 */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT, unsigned char,
+ second_pps);
+};
+
+/* This describes an H.264 picture. It is part of the Context data */
+struct h264fw_picture {
+ /* Primary (reconstructed) picture buffers */
+ struct vdecfw_image_buffer primary;
+ /* Secondary (alternative) picture buffers */
+ struct vdecfw_image_buffer alternate;
+ /* Macroblock parameters base address for the picture */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned int, mbparams_base_address);
+
+ /* Unique ID for this picture */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned int, transaction_id);
+ /* Picture type */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ enum h264fw_epicture_type, pricture_type);
+
+ /* Reference status of the picture */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ enum h264fw_ereference, ref_status_bottom);
+ /* Reference status of the picture */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ enum h264fw_ereference, ref_status_top);
+ /* Reference status of the picture */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ enum h264fw_ereference, ref_status_frame);
+
+ /* Frame Number */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned int, frame_number);
+ /* Short term reference info */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ int, frame_number_wrap);
+ /* long term reference number - should be 8-bit */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned int, longterm_frame_idx);
+
+ /* Top field order count for this picture */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ int, top_field_order_count);
+ /* Bottom field order count for this picture */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ int, bottom_field_order_count);
+
+ /* MVC view_id */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned short, view_id);
+
+ /*
+	 * While this picture is in the DPB: the offset to use into the
+	 * MSVDX DPB register table when the current picture is in the
+	 * same view as this one.
+ */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char, view_dpb_offset);
+ /* Flags for this picture for the display process */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char, display_flags);
+
+ /* IMG_FALSE if sent to display, or otherwise not needed for display */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char, needed_for_output);
+};
+
+/* This structure describes frame data for POC calculation */
+struct h264fw_poc_picture_data {
+ /* type 0,1,2 */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char, mmco_5_flag);
+
+ /* type 0 */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char, bottom_field_flag);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned short, pic_order_cnt_lsb);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ int, top_field_order_count);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ int, pic_order_count_msb);
+
+ /* type 1,2 */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT, short, frame_num);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT, int, frame_num_offset);
+
+ /* output */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ int, bottom_filed_order_count);
+};
+
+/*
+ * This structure describes picture data for determining
+ * Complementary Field Pairs
+ */
+struct h264fw_last_pic_data {
+ /* Unique ID for this picture */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned int, transaction_id);
+ /* Picture type */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ enum h264fw_epicture_type, picture_type);
+ /* Reference status of the picture */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ enum h264fw_ereference, ref_status_frame);
+ /* Frame Number */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned int, frame_number);
+
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned int, luma_recon);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned int, chroma_recon);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned int, chroma_2_recon);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned int, luma_alter);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned int, chroma_alter);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned int, chroma_2_alter);
+
+ struct vdecfw_image_buffer primary;
+ struct vdecfw_image_buffer alternate;
+
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned int, mbparams_base_address);
+ /* Top field order count for this picture */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ int, top_field_order_count);
+ /* Bottom field order count for this picture */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ int, bottom_field_order_count);
+};
+
+/*
+ * This describes the H.264 parser component "Context data", shown in the
+ * Firmware Memory Layout diagram. This data is the state preserved across
+ * pictures. It is loaded and saved by the Firmware, but requires the host to
+ * provide buffer(s) for this.
+ */
+struct h264fw_context_data {
+ struct h264fw_picture dpb[H264FW_MAX_DPB_SIZE];
+ /*
+	 * Inter-view reference components - also hold the details of the
+	 * previous picture for any particular view, which can be used to
+	 * determine complementary field pairs
+ */
+ struct h264fw_picture interview_prediction_ref[H264FW_MAX_NUM_VIEWS];
+ /* previous ref pic for type0, previous pic for type1&2 */
+ struct h264fw_poc_picture_data prev_poc_pic_data[H264FW_MAX_NUM_VIEWS];
+ /* previous picture information to detect complementary field pairs */
+ struct h264fw_last_pic_data last_pic_data[H264FW_MAX_NUM_VIEWS];
+ struct h264fw_last_pic_data
+ last_displayed_pic_data[H264FW_MAX_NUM_VIEWS];
+
+ /* previous reference frame number for each view */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT, unsigned short,
+ prev_ref_frame_num[H264FW_MAX_NUM_VIEWS]);
+ /* Bitmap of used slots in each view DPB */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT, unsigned short,
+ dpb_bitmap[H264FW_MAX_NUM_VIEWS]);
+
+ /* DPB size */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT, unsigned int, dpb_size);
+ /* Number of pictures in DPB */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT, unsigned int,
+ dpb_fullness);
+
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT, unsigned char,
+ prev_display_flags);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT, int, prev_display);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT, int, prev_release);
+ /* Sequence Parameter Set data */
+ struct h264fw_sequence_ps sps;
+ /* Picture Parameter Set data */
+ struct h264fw_picture_ps pps;
+ /* Picture Parameter Set data for second field if in the same buffer */
+ struct h264fw_picture_ps second_pps;
+
+ /* Set if stream is MVC */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT, int, mvc);
+ /* DPB long term reference information */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT, int,
+ max_longterm_frame_idx[H264FW_MAX_NUM_VIEWS]);
+};
+
+#endif /* _H264FW_DATA_H_ */
diff --git a/drivers/staging/media/vxd/decoder/hevcfw_data_shared.h b/drivers/staging/media/vxd/decoder/hevcfw_data_shared.h
new file mode 100644
index 000000000000..d57008fd96f8
--- /dev/null
+++ b/drivers/staging/media/vxd/decoder/hevcfw_data_shared.h
@@ -0,0 +1,767 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Public data structures for the hevc parser firmware module
+ *
+ * Copyright (c) Imagination Technologies Ltd.
+ * Copyright (c) 2021 Texas Instruments Incorporated - http://www.ti.com/
+ *
+ * Authors:
+ * Angela Stegmaier <[email protected]>
+ *
+ * Re-written for upstreaming
+ * Sidraya Jayagond <[email protected]>
+ */
+#ifdef USE_SHARING
+#endif
+
+#ifndef _HEVCFW_DATA_H_
+#define _HEVCFW_DATA_H_
+
+#include "vdecfw_share.h"
+#include "vdecfw_shared.h"
+
+#define HEVC_MAX_VPS_COUNT 16
+#define HEVC_MAX_SPS_COUNT 16
+#define HEVC_MAX_PPS_COUNT 64
+
+#define HEVCFW_MAX_NUM_PROFILE_IDC 32
+#define HEVCFW_MAX_VPS_OP_SETS_PLUS1 1024
+#define HEVCFW_MAX_VPS_NUH_RESERVED_ZERO_LAYER_ID_PLUS1 1
+
+#define HEVCFW_MAX_NUM_REF_PICS 16
+#define HEVCFW_MAX_NUM_ST_REF_PIC_SETS 65
+#define HEVCFW_MAX_NUM_LT_REF_PICS 32
+#define HEVCFW_MAX_NUM_SUBLAYERS 7
+#define HEVCFW_SCALING_LISTS_BUFSIZE 256
+#define HEVCFW_MAX_TILE_COLS 20
+#define HEVCFW_MAX_TILE_ROWS 22
+
+#define HEVCFW_MAX_CHROMA_QP 6
+
+#define HEVCFW_MAX_DPB_SIZE HEVCFW_MAX_NUM_REF_PICS
+#define HEVCFW_REF_PIC_LIST0 0
+#define HEVCFW_REF_PIC_LIST1 1
+#define HEVCFW_NUM_REF_PIC_LISTS 2
+#define HEVCFW_NUM_DPB_DIFF_REGS 4
+
+/* non-critical errors */
+#define HEVC_ERR_INVALID_VALUE (20)
+#define HEVC_ERR_CORRECTION_VALIDVALUE (21)
+
+#define HEVC_IS_ERR_CRITICAL(err) \
+ ((err) > HEVC_ERR_CORRECTION_VALIDVALUE ? 1 : 0)
+
+/* critical errors */
+#define HEVC_ERR_INV_VIDEO_DIMENSION (22)
+#define HEVC_ERR_NO_SEQUENCE_HDR (23)
+#define HEVC_ERR_SPS_EXT_UNSUPP (24 | VDECFW_UNSUPPORTED_CODE_BASE)
+#define HEVC_ERR_PPS_EXT_UNSUPP (25 | VDECFW_UNSUPPORTED_CODE_BASE)
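+/*
+ * For example, HEVC_IS_ERR_CRITICAL(HEVC_ERR_INVALID_VALUE) evaluates to 0
+ * (non-critical), while HEVC_IS_ERR_CRITICAL(HEVC_ERR_NO_SEQUENCE_HDR)
+ * evaluates to 1 (critical).
+ */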
+
+#define HEVC_ERR_FAILED_TO_STORE_VPS (100)
+#define HEVC_ERR_FAILED_TO_STORE_SPS (101)
+#define HEVC_ERR_FAILED_TO_STORE_PPS (102)
+
+#define HEVC_ERR_FAILED_TO_FETCH_VPS (103)
+#define HEVC_ERR_FAILED_TO_FETCH_SPS (104)
+#define HEVC_ERR_FAILED_TO_FETCH_PPS (105)
+/* HEVC Scaling Lists (all values are maximum possible ones) */
+#define HEVCFW_SCALING_LIST_NUM_SIZES 4
+#define HEVCFW_SCALING_LIST_NUM_MATRICES 6
+#define HEVCFW_SCALING_LIST_MATRIX_SIZE 64
+
+struct hevcfw_scaling_listdata {
+ unsigned char dc_coeffs
+ [HEVCFW_SCALING_LIST_NUM_SIZES - 2]
+ [HEVCFW_SCALING_LIST_NUM_MATRICES];
+
+ unsigned char lists
+ [HEVCFW_SCALING_LIST_NUM_SIZES]
+ [HEVCFW_SCALING_LIST_NUM_MATRICES]
+ [HEVCFW_SCALING_LIST_MATRIX_SIZE];
+};
+
+/* HEVC Video Profile_Tier_Level */
+struct hevcfw_profile_tier_level {
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char, general_profile_space);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char, general_tier_flag);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char, general_profile_idc);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char,
+ general_profile_compatibility_flag
+ [HEVCFW_MAX_NUM_PROFILE_IDC]);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char, general_progressive_source_flag);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char, general_interlaced_source_flag);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char, general_non_packed_constraint_flag);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char, general_frame_only_constraint_flag);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char, general_max_12bit_constraint_flag);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char, general_max_10bit_constraint_flag);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char, general_max_8bit_constraint_flag);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char, general_max_422chroma_constraint_flag);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char, general_max_420chroma_constraint_flag);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char, general_max_monochrome_constraint_flag);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char, general_intra_constraint_flag);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char,
+ general_one_picture_only_constraint_flag);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char, general_lower_bit_rate_constraint_flag);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char, general_level_idc);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char,
+ sub_layer_profile_present_flag[HEVCFW_MAX_NUM_SUBLAYERS - 1]);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char,
+ sub_layer_level_present_flag[HEVCFW_MAX_NUM_SUBLAYERS - 1]);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char,
+ sub_layer_profile_space[HEVCFW_MAX_NUM_SUBLAYERS - 1]);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char,
+ sub_layer_tier_flag[HEVCFW_MAX_NUM_SUBLAYERS - 1]);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char,
+ sub_layer_profile_idc[HEVCFW_MAX_NUM_SUBLAYERS - 1]);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char,
+ sub_layer_profile_compatibility_flag
+ [HEVCFW_MAX_NUM_SUBLAYERS - 1][HEVCFW_MAX_NUM_PROFILE_IDC]);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char,
+ sub_layer_progressive_source_flag[HEVCFW_MAX_NUM_SUBLAYERS - 1]);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char,
+ sub_layer_interlaced_source_flag[HEVCFW_MAX_NUM_SUBLAYERS - 1]);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char,
+ sub_layer_non_packed_constraint_flag[HEVCFW_MAX_NUM_SUBLAYERS - 1]);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char,
+ sub_layer_frame_only_constraint_flag[HEVCFW_MAX_NUM_SUBLAYERS - 1]);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char,
+ sub_layer_max_12bit_constraint_flag[HEVCFW_MAX_NUM_SUBLAYERS - 1]);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char,
+ sub_layer_max_10bit_constraint_flag[HEVCFW_MAX_NUM_SUBLAYERS - 1]);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char,
+ sub_layer_max_8bit_constraint_flag[HEVCFW_MAX_NUM_SUBLAYERS - 1]);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char,
+ sub_layer_max_422chroma_constraint_flag[HEVCFW_MAX_NUM_SUBLAYERS - 1]);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char,
+ sub_layer_max_420chroma_constraint_flag[HEVCFW_MAX_NUM_SUBLAYERS - 1]);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char,
+ sub_layer_max_monochrome_constraint_flag[HEVCFW_MAX_NUM_SUBLAYERS - 1]);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char,
+ sub_layer_intra_constraint_flag[HEVCFW_MAX_NUM_SUBLAYERS - 1]);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char,
+ sub_layer_one_picture_only_constraint_flag[HEVCFW_MAX_NUM_SUBLAYERS - 1]);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char,
+ sub_layer_lower_bit_rate_constraint_flag[HEVCFW_MAX_NUM_SUBLAYERS - 1]);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char,
+ sub_layer_level_idc[HEVCFW_MAX_NUM_SUBLAYERS - 1]);
+};
+
+struct hevcfw_video_ps {
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ int, is_different);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ int, is_sent);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ int, is_available);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char, vps_video_parameter_set_id);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char, vps_reserved_three_2bits);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char, vps_max_layers_minus1);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char, vps_max_sub_layers_minus1);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char, vps_temporal_id_nesting_flag);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned short, vps_reserved_0xffff_16bits);
+ struct hevcfw_profile_tier_level profile_tier_level;
+};
+
+/* HEVC Video Usability Information */
+struct hevcfw_vui_params {
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char, aspect_ratio_info_present_flag);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char, aspect_ratio_idc);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned short, sar_width);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned short, sar_height);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char, overscan_info_present_flag);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char, overscan_appropriate_flag);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char, video_signal_type_present_flag);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char, video_format);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char, video_full_range_flag);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char, colour_description_present_flag);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char, colour_primaries);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char, transfer_characteristics);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char, matrix_coeffs);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char, chroma_loc_info_present_flag);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char, chroma_sample_loc_type_top_field);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char, chroma_sample_loc_type_bottom_field);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char, neutral_chroma_indication_flag);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char, field_seq_flag);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char, frame_field_info_present_flag);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char, default_display_window_flag);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned short, def_disp_win_left_offset);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned short, def_disp_win_right_offset);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned short, def_disp_win_top_offset);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned short, def_disp_win_bottom_offset);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char, vui_timing_info_present_flag);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned int, vui_num_units_in_tick);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned int, vui_time_scale);
+};
+
+/* HEVC Short Term Reference Picture Set */
+struct hevcfw_short_term_ref_picset {
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char, num_negative_pics);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char, num_positive_pics);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ short, delta_poc_s0[HEVCFW_MAX_NUM_REF_PICS]);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ short, delta_poc_s1[HEVCFW_MAX_NUM_REF_PICS]);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char, used_bycurr_pic_s0[HEVCFW_MAX_NUM_REF_PICS]);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char, used_bycurr_pic_s1[HEVCFW_MAX_NUM_REF_PICS]);
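+	/* derived as num_negative_pics + num_positive_pics per the HEVC spec */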
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char, num_delta_pocs);
+};
+
+/*
+ * This describes the SPS header data required by the HEVC firmware that should
+ * be supplied by the Host.
+ */
+struct hevcfw_sequence_ps {
+ /* syntax elements from SPS */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned short, pic_width_in_luma_samples);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned short, pic_height_in_luma_samples);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char, num_short_term_ref_pic_sets);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char, num_long_term_ref_pics_sps);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned short,
+ lt_ref_pic_poc_lsb_sps[HEVCFW_MAX_NUM_LT_REF_PICS]);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char,
+ used_by_curr_pic_lt_sps_flag[HEVCFW_MAX_NUM_LT_REF_PICS]);
+ struct hevcfw_short_term_ref_picset st_rps_list[HEVCFW_MAX_NUM_ST_REF_PIC_SETS];
+
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char, sps_max_sub_layers_minus1);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char,
+ sps_max_dec_pic_buffering_minus1[HEVCFW_MAX_NUM_SUBLAYERS]);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char,
+ sps_max_num_reorder_pics[HEVCFW_MAX_NUM_SUBLAYERS]);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned int,
+ sps_max_latency_increase_plus1[HEVCFW_MAX_NUM_SUBLAYERS]);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char, max_transform_hierarchy_depth_inter);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char, max_transform_hierarchy_depth_intra);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char, log2_diff_max_min_transform_block_size);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char, log2_min_transform_block_size_minus2);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char,
+ log2_diff_max_min_luma_coding_block_size);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char, log2_min_luma_coding_block_size_minus3);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char, chroma_format_idc);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char, separate_colour_plane_flag);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char, num_extra_slice_header_bits);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char, log2_max_pic_order_cnt_lsb_minus4);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char, long_term_ref_pics_present_flag);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char, sample_adaptive_offset_enabled_flag);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char, sps_temporal_mvp_enabled_flag);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char, bit_depth_luma_minus8);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char, bit_depth_chroma_minus8);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char, pcm_sample_bit_depth_luma_minus1);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char, pcm_sample_bit_depth_chroma_minus1);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char,
+ log2_min_pcm_luma_coding_block_size_minus3);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char,
+ log2_diff_max_min_pcm_luma_coding_block_size);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char, pcm_loop_filter_disabled_flag);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char, amp_enabled_flag);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char, pcm_enabled_flag);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char, strong_intra_smoothing_enabled_flag);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char, scaling_list_enabled_flag);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char, transform_skip_rotation_enabled_flag);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char, transform_skip_context_enabled_flag);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char, implicit_rdpcm_enabled_flag);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char, explicit_rdpcm_enabled_flag);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char, extended_precision_processing_flag);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char, intra_smoothing_disabled_flag);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char, high_precision_offsets_enabled_flag);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char, persistent_rice_adaptation_enabled_flag);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char, cabac_bypass_alignment_enabled_flag);
+ /* derived elements */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned int, pic_size_in_ctbs_y);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned short, pic_height_in_ctbs_y);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned short, pic_width_in_ctbs_y);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char, ctb_size_y);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char, ctb_log2size_y);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ int, max_pic_order_cnt_lsb);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned int,
+ sps_max_latency_pictures[HEVCFW_MAX_NUM_SUBLAYERS]);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char, pps_seq_parameter_set_id);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char, sps_video_parameter_set_id);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char, sps_temporal_id_nesting_flag);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char, sps_seq_parameter_set_id);
+
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char, conformance_window_flag);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned short, conf_win_left_offset);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned short, conf_win_right_offset);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned short, conf_win_top_offset);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned short, conf_win_bottom_offset);
+
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char, sps_sub_layer_ordering_info_present_flag);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char, sps_scaling_list_data_present_flag);
+
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char, vui_parameters_present_flag);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char, sps_extension_present_flag);
+
+ struct hevcfw_vui_params vui_params;
+ /* derived elements */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char, sub_width_c);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char, sub_height_c);
+
+ struct hevcfw_profile_tier_level profile_tier_level;
+ struct hevcfw_scaling_listdata scaling_listdata;
+};
+
+/*
+ * This describes the HEVC parser component "Header data", shown in the
+ * Firmware Memory Layout diagram. This data is required by the HEVC firmware
+ * and should be supplied by the Host.
+ */
+struct hevcfw_headerdata {
+ /* Decode buffers and output control for the current picture */
+ /* Primary decode buffer base addresses */
+ struct vdecfw_image_buffer primary;
+ /* buffer base addresses for alternate output */
+ struct vdecfw_image_buffer alternate;
+ /* address of buffer for temporal mv params */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned int, temporal_outaddr);
+};
+
+/*
+ * This describes the PPS header data required by the HEVC firmware that should
+ * be supplied by the Host.
+ */
+struct hevcfw_picture_ps {
+ /* syntax elements from the PPS */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char, pps_pic_parameter_set_id);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char, num_tile_columns_minus1);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char, num_tile_rows_minus1);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char, diff_cu_qp_delta_depth);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char, init_qp_minus26);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char, pps_beta_offset_div2);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char, pps_tc_offset_div2);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char, pps_cb_qp_offset);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char, pps_cr_qp_offset);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char, log2_parallel_merge_level_minus2);
+
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char, dependent_slice_segments_enabled_flag);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char, output_flag_present_flag);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char, num_extra_slice_header_bits);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char, lists_modification_present_flag);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char, cabac_init_present_flag);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char, weighted_pred_flag);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char, weighted_bipred_flag);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char,
+ pps_slice_chroma_qp_offsets_present_flag);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char,
+ deblocking_filter_override_enabled_flag);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char, tiles_enabled_flag);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char, entropy_coding_sync_enabled_flag);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char,
+ slice_segment_header_extension_present_flag);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char, transquant_bypass_enabled_flag);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char, cu_qp_delta_enabled_flag);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char, transform_skip_enabled_flag);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char, sign_data_hiding_enabled_flag);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char, num_ref_idx_l0_default_active_minus1);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char, num_ref_idx_l1_default_active_minus1);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char, constrained_intra_pred_flag);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char, pps_deblocking_filter_disabled_flag);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char,
+ pps_loop_filter_across_slices_enabled_flag);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char, loop_filter_across_tiles_enabled_flag);
+
+ /* rewritten from SPS, maybe at some point we could get rid of this */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char, scaling_list_enabled_flag);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char,
+ log2_max_transform_skip_block_size_minus2);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char,
+ cross_component_prediction_enabled_flag);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char, chroma_qp_offset_list_enabled_flag);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char, diff_cu_chroma_qp_offset_depth);
+ /*
+ * PVDEC derived elements. HEVCFW_SCALING_LISTS_BUFSIZE is
+	 * multiplied by 2 so that there is space for the address of
+	 * each element. These addresses are filled in by the lower layer.
+ */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned int,
+ scaling_lists[HEVCFW_SCALING_LISTS_BUFSIZE * 2]);
+ /* derived elements */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned short, col_bd[HEVCFW_MAX_TILE_COLS + 1]);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned short, row_bd[HEVCFW_MAX_TILE_ROWS + 1]);
+
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char, chroma_qp_offset_list_len_minus1);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char, cb_qp_offset_list[HEVCFW_MAX_CHROMA_QP]);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char, cr_qp_offset_list[HEVCFW_MAX_CHROMA_QP]);
+
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char, uniform_spacing_flag);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char,
+ column_width_minus1[HEVCFW_MAX_TILE_COLS]);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char,
+ row_height_minus1[HEVCFW_MAX_TILE_ROWS]);
+
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char, pps_seq_parameter_set_id);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char, deblocking_filter_control_present_flag);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char, pps_scaling_list_data_present_flag);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char, pps_extension_present_flag);
+
+ struct hevcfw_scaling_listdata scaling_list;
+};
+
+/* This enum determines reference picture status */
+enum hevcfw_reference_type {
+ HEVCFW_REF_UNUSED = 0,
+ HEVCFW_REF_SHORTTERM,
+ HEVCFW_REF_LONGTERM,
+ HEVCFW_REF_FORCE32BITS = 0x7FFFFFFFU
+};
+
+/* This describes an HEVC picture. It is part of the Context data */
+struct hevcfw_picture {
+ /* Primary (reconstructed) picture buffers */
+ struct vdecfw_image_buffer primary;
+ /* Secondary (alternative) picture buffers */
+ struct vdecfw_image_buffer alternate;
+ /* Unique ID for this picture */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned int, transaction_id);
+	/* NAL unit type of the first slice segment header; determines picture type */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char, nalunit_type);
+ /* Picture Order Count (frame number) */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ int, pic_order_cnt_val);
+ /* Slice Picture Order Count Lsb */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ int, slice_pic_ordercnt_lsb);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char, pic_output_flag);
+ /* information about long-term pictures */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned short, dpb_longterm_flags);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned int,
+ dpb_pic_order_diff[HEVCFW_NUM_DPB_DIFF_REGS]);
+ /* address of buffer for temporal mv params */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned int, temporal_outaddr);
+ /* worst case Dpb diff for the current pic */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned int, dpb_diff);
+};
+
+/*
+ * This is a wrapper for a picture to hold it in a Decoded Picture Buffer
+ * for further reference
+ */
+struct hevcfw_picture_in_dpb {
+ /* DPB data about the picture */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ enum hevcfw_reference_type, ref_type);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char, valid);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char, needed_for_output);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char, pic_latency_count);
+ /* Picture itself */
+ struct hevcfw_picture picture;
+};
+
+/*
+ * This describes the HEVC Decoded Picture Buffer (DPB).
+ * It is part of the Context data
+ */
+
+#define HEVCFW_DPB_IDX_INVALID -1
+
+struct hevcfw_decoded_picture_buffer {
+ /* reference pictures */
+ struct hevcfw_picture_in_dpb pictures[HEVCFW_MAX_DPB_SIZE];
+ /* organizational data of DPB */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT, unsigned int, fullness);
+};
+
+/*
+ * This describes the HEVC Reference Picture Set (RPS).
+ * It is part of the Context data
+ */
+struct hevcfw_reference_picture_set {
+ /* sizes of poc lists */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char, num_pocst_curr_before);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char, num_pocst_curr_after);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char, num_pocst_foll);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char, num_poclt_curr);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char, num_poclt_foll);
+ /* poc lists */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ int, pocst_curr_before[HEVCFW_MAX_NUM_REF_PICS]);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ int, pocst_curr_after[HEVCFW_MAX_NUM_REF_PICS]);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ int, pocst_foll[HEVCFW_MAX_NUM_REF_PICS]);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ int, poclt_curr[HEVCFW_MAX_NUM_REF_PICS]);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ int, poclt_foll[HEVCFW_MAX_NUM_REF_PICS]);
+ /* derived elements */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char,
+ curr_delta_pocmsb_presentflag[HEVCFW_MAX_NUM_REF_PICS]);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char,
+ foll_delta_pocmsb_presentflag[HEVCFW_MAX_NUM_REF_PICS]);
+ /* reference picture sets: indices in DPB */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char, ref_picsetlt_curr[HEVCFW_MAX_NUM_REF_PICS]);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char, ref_picsetlt_foll[HEVCFW_MAX_NUM_REF_PICS]);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char,
+ ref_picsetst_curr_before[HEVCFW_MAX_NUM_REF_PICS]);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char,
+ ref_picsetst_curr_after[HEVCFW_MAX_NUM_REF_PICS]);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char, ref_picsetst_foll[HEVCFW_MAX_NUM_REF_PICS]);
+};
+
+/*
+ * This describes the HEVC parser component "Context data", shown in the
+ * Firmware Memory Layout diagram. This data is the state preserved across
+ * pictures. It is loaded and saved by the Firmware, but requires the host to
+ * provide buffer(s) for this.
+ */
+struct hevcfw_ctx_data {
+ struct hevcfw_sequence_ps sps;
+ struct hevcfw_picture_ps pps;
+ /*
+	 * data from the last picture with TemporalId = 0 that is not a RASL, RADL
+ * or sub-layer non-reference picture
+ */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ int, prev_pic_order_cnt_lsb);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ int, prev_pic_order_cnt_msb);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char, last_irapnorasl_output_flag);
+ /*
+ * Decoded Pictures Buffer holds information about decoded pictures
+ * needed for further INTER decoding
+ */
+ struct hevcfw_decoded_picture_buffer dpb;
+ /* Reference Picture Set is determined on per-picture basis */
+ struct hevcfw_reference_picture_set rps;
+ /*
+ * Reference Picture List is determined using data from Reference
+ * Picture Set and from Slice (Segment) Header on per-slice basis
+ */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT, unsigned char,
+ ref_pic_list[HEVCFW_NUM_REF_PIC_LISTS][HEVCFW_MAX_NUM_REF_PICS]);
+ /*
+	 * Reference Picture List used to send the reflist to the host; the only
+	 * difference is that missing references are marked
+	 * with HEVCFW_DPB_IDX_INVALID
+ */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char,
+ ref_pic_listhlp[HEVCFW_NUM_REF_PIC_LISTS][HEVCFW_MAX_NUM_REF_PICS]);
+
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned int, pic_count);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned int, slice_segment_count);
+	/* An EOS NAL was detected and no new picture has been seen yet */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ int, eos_detected);
+	/* This is the first picture after an EOS NAL */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ int, first_after_eos);
+};
+
+#endif /* _HEVCFW_DATA_H_ */
diff --git a/drivers/staging/media/vxd/decoder/img_msvdx_cmds.h b/drivers/staging/media/vxd/decoder/img_msvdx_cmds.h
new file mode 100644
index 000000000000..2748ff44624e
--- /dev/null
+++ b/drivers/staging/media/vxd/decoder/img_msvdx_cmds.h
@@ -0,0 +1,279 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * IMG MSVDX CMDS Registers
+ * This file contains the MSVDX_CMDS_H Definitions
+ *
+ * Copyright (c) Imagination Technologies Ltd.
+ * Copyright (c) 2021 Texas Instruments Incorporated - http://www.ti.com/
+ *
+ * Authors:
+ * Amit Makani <[email protected]>
+ *
+ * Re-written for upstreaming
+ * Sidraya Jayagond <[email protected]>
+ */
+
+#ifndef _IMG_MSVDX_CMDS_H
+#define _IMG_MSVDX_CMDS_H
+
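+/*
+ * Each register field below is described by up to three macros: <FIELD>_MASK
+ * is the bit mask of the field in place within the 32-bit register,
+ * <FIELD>_LSBMASK is the same mask shifted down to bit 0 and <FIELD>_SHIFT is
+ * the bit position of the field. An illustrative access, assuming the usual
+ * mask/shift usage:
+ *
+ *	mode = (reg & MSVDX_CMDS_OPERATING_MODE_CODEC_MODE_MASK) >>
+ *	       MSVDX_CMDS_OPERATING_MODE_CODEC_MODE_SHIFT;
+ *	reg |= (val & MSVDX_CMDS_OPERATING_MODE_CODEC_MODE_LSBMASK) <<
+ *	       MSVDX_CMDS_OPERATING_MODE_CODEC_MODE_SHIFT;
+ */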
+#define MSVDX_CMDS_HORIZONTAL_LUMA_COEFFICIENTS_OFFSET (0x0060)
+#define MSVDX_CMDS_VERTICAL_LUMA_COEFFICIENTS_OFFSET (0x0070)
+/* MSVDX_CMDS, VERTICAL_LUMA_COEFFICIENTS, VER_LUMA_COEFF_0 */
+#define MSVDX_CMDS_HORIZONTAL_CHROMA_COEFFICIENTS_OFFSET (0x0080)
+/* MSVDX_CMDS, HORIZONTAL_CHROMA_COEFFICIENTS, HOR_CHROMA_COEFF_0 */
+#define MSVDX_CMDS_VERTICAL_CHROMA_COEFFICIENTS_OFFSET (0x0090)
+/* MSVDX_CMDS, DISPLAY_PICTURE_SIZE, DISPLAY_PICTURE_HEIGHT */
+#define MSVDX_CMDS_DISPLAY_PICTURE_SIZE_DISPLAY_PICTURE_HEIGHT_LSBMASK (0x00000FFF)
+#define MSVDX_CMDS_DISPLAY_PICTURE_SIZE_DISPLAY_PICTURE_HEIGHT_SHIFT (12)
+/* MSVDX_CMDS, DISPLAY_PICTURE_SIZE, DISPLAY_PICTURE_WIDTH */
+#define MSVDX_CMDS_DISPLAY_PICTURE_SIZE_DISPLAY_PICTURE_WIDTH_LSBMASK (0x00000FFF)
+#define MSVDX_CMDS_DISPLAY_PICTURE_SIZE_DISPLAY_PICTURE_WIDTH_SHIFT (0)
+#define MSVDX_CMDS_PVDEC_DISPLAY_PICTURE_SIZE_OFFSET (0x00B0)
+#define MSVDX_CMDS_PVDEC_DISPLAY_PICTURE_SIZE_PVDEC_DISPLAY_PICTURE_HEIGHT_MIN1_LSBMASK \
+ (0x0000FFFF)
+#define MSVDX_CMDS_PVDEC_DISPLAY_PICTURE_SIZE_PVDEC_DISPLAY_PICTURE_HEIGHT_MIN1_SHIFT (16)
+/* MSVDX_CMDS, PVDEC_DISPLAY_PICTURE_SIZE, PVDEC_DISPLAY_PICTURE_WIDTH_MIN1 */
+#define MSVDX_CMDS_PVDEC_DISPLAY_PICTURE_SIZE_PVDEC_DISPLAY_PICTURE_WIDTH_MIN1_LSBMASK \
+ (0x0000FFFF)
+#define MSVDX_CMDS_PVDEC_DISPLAY_PICTURE_SIZE_PVDEC_DISPLAY_PICTURE_WIDTH_MIN1_SHIFT (0)
+/* MSVDX_CMDS, CODED_PICTURE_SIZE, CODED_PICTURE_HEIGHT */
+#define MSVDX_CMDS_CODED_PICTURE_SIZE_CODED_PICTURE_HEIGHT_LSBMASK (0x00000FFF)
+#define MSVDX_CMDS_CODED_PICTURE_SIZE_CODED_PICTURE_HEIGHT_SHIFT (12)
+/* MSVDX_CMDS, CODED_PICTURE_SIZE, CODED_PICTURE_WIDTH */
+#define MSVDX_CMDS_CODED_PICTURE_SIZE_CODED_PICTURE_WIDTH_LSBMASK (0x00000FFF)
+#define MSVDX_CMDS_CODED_PICTURE_SIZE_CODED_PICTURE_WIDTH_SHIFT (0)
+#define MSVDX_CMDS_PVDEC_CODED_PICTURE_SIZE_OFFSET (0x00B4)
+/* MSVDX_CMDS, OPERATING_MODE, USE_EXT_ROW_STRIDE */
+#define MSVDX_CMDS_OPERATING_MODE_USE_EXT_ROW_STRIDE_MASK (0x10000000)
+#define MSVDX_CMDS_OPERATING_MODE_USE_EXT_ROW_STRIDE_LSBMASK (0x00000001)
+#define MSVDX_CMDS_OPERATING_MODE_USE_EXT_ROW_STRIDE_SHIFT (28)
+/* MSVDX_CMDS, OPERATING_MODE, CHROMA_INTERLEAVED */
+#define MSVDX_CMDS_OPERATING_MODE_CHROMA_INTERLEAVED_MASK (0x08000000)
+#define MSVDX_CMDS_OPERATING_MODE_CHROMA_INTERLEAVED_LSBMASK (0x00000001)
+#define MSVDX_CMDS_OPERATING_MODE_CHROMA_INTERLEAVED_SHIFT (27)
+/* MSVDX_CMDS, OPERATING_MODE, ROW_STRIDE */
+#define MSVDX_CMDS_OPERATING_MODE_ROW_STRIDE_MASK (0x07000000)
+#define MSVDX_CMDS_OPERATING_MODE_ROW_STRIDE_LSBMASK (0x00000007)
+#define MSVDX_CMDS_OPERATING_MODE_ROW_STRIDE_SHIFT (24)
+/* MSVDX_CMDS, OPERATING_MODE, CODEC_PROFILE */
+#define MSVDX_CMDS_OPERATING_MODE_CODEC_PROFILE_MASK (0x00300000)
+#define MSVDX_CMDS_OPERATING_MODE_CODEC_PROFILE_LSBMASK (0x00000003)
+#define MSVDX_CMDS_OPERATING_MODE_CODEC_PROFILE_SHIFT (20)
+/* MSVDX_CMDS, OPERATING_MODE, CODEC_MODE */
+#define MSVDX_CMDS_OPERATING_MODE_CODEC_MODE_MASK (0x000F0000)
+#define MSVDX_CMDS_OPERATING_MODE_CODEC_MODE_LSBMASK (0x0000000F)
+#define MSVDX_CMDS_OPERATING_MODE_CODEC_MODE_SHIFT (16)
+/* MSVDX_CMDS, OPERATING_MODE, ASYNC_MODE */
+#define MSVDX_CMDS_OPERATING_MODE_ASYNC_MODE_MASK (0x00006000)
+#define MSVDX_CMDS_OPERATING_MODE_ASYNC_MODE_LSBMASK (0x00000003)
+#define MSVDX_CMDS_OPERATING_MODE_ASYNC_MODE_SHIFT (13)
+/* MSVDX_CMDS, OPERATING_MODE, CHROMA_FORMAT */
+#define MSVDX_CMDS_OPERATING_MODE_CHROMA_FORMAT_MASK (0x00001000)
+#define MSVDX_CMDS_OPERATING_MODE_CHROMA_FORMAT_LSBMASK (0x00000001)
+#define MSVDX_CMDS_OPERATING_MODE_CHROMA_FORMAT_SHIFT (12)
+/* MSVDX_CMDS, OPERATING_MODE, PIC_QUANT */
+#define MSVDX_CMDS_PVDEC_OPERATING_MODE_OFFSET (0x00A0)
+/* MSVDX_CMDS, EXT_OP_MODE, BIT_DEPTH_CHROMA_MINUS8 */
+#define MSVDX_CMDS_EXT_OP_MODE_BIT_DEPTH_CHROMA_MINUS8_MASK (0x00003000)
+#define MSVDX_CMDS_EXT_OP_MODE_BIT_DEPTH_CHROMA_MINUS8_LSBMASK (0x00000003)
+#define MSVDX_CMDS_EXT_OP_MODE_BIT_DEPTH_CHROMA_MINUS8_SHIFT (12)
+/* MSVDX_CMDS, EXT_OP_MODE, BIT_DEPTH_LUMA_MINUS8 */
+#define MSVDX_CMDS_EXT_OP_MODE_BIT_DEPTH_LUMA_MINUS8_MASK (0x00000300)
+#define MSVDX_CMDS_EXT_OP_MODE_BIT_DEPTH_LUMA_MINUS8_LSBMASK (0x00000003)
+#define MSVDX_CMDS_EXT_OP_MODE_BIT_DEPTH_LUMA_MINUS8_SHIFT (8)
+/* MSVDX_CMDS, EXT_OP_MODE, MEMORY_PACKING */
+#define MSVDX_CMDS_EXT_OP_MODE_MEMORY_PACKING_MASK (0x00000008)
+#define MSVDX_CMDS_EXT_OP_MODE_MEMORY_PACKING_LSBMASK (0x00000001)
+#define MSVDX_CMDS_EXT_OP_MODE_MEMORY_PACKING_SHIFT (3)
+/* MSVDX_CMDS, EXT_OP_MODE, CHROMA_FORMAT_IDC */
+#define MSVDX_CMDS_EXT_OP_MODE_CHROMA_FORMAT_IDC_MASK (0x00000003)
+#define MSVDX_CMDS_EXT_OP_MODE_CHROMA_FORMAT_IDC_LSBMASK (0x00000003)
+#define MSVDX_CMDS_EXT_OP_MODE_CHROMA_FORMAT_IDC_SHIFT (0)
+#define MSVDX_CMDS_LUMA_RECONSTRUCTED_PICTURE_BASE_ADDRESSES_OFFSET (0x000C)
+/*
+ * MSVDX_CMDS, LUMA_RECONSTRUCTED_PICTURE_BASE_ADDRESSES,
+ * LUMA_RECON_BASE_ADDR
+ */
+#define MSVDX_CMDS_CHROMA_RECONSTRUCTED_PICTURE_BASE_ADDRESSES_OFFSET (0x0010)
+/* MSVDX_CMDS, AUX_MSB_BUFFER_BASE_ADDRESSES, AUX_MSB_BUFFER_BASE_ADDR */
+#define MSVDX_CMDS_INTRA_BUFFER_BASE_ADDRESS_OFFSET (0x0018)
+/* MSVDX_CMDS, INTRA_BUFFER_BASE_ADDRESS, INTRA_BASE_ADDR */
+
+#define MSVDX_CMDS_MC_CACHE_CONFIGURATION_OFFSET (0x001C)
+
+/* MSVDX_CMDS, MC_CACHE_CONFIGURATION, CONFIG_REF_CHROMA_ADJUST */
+#define MSVDX_CMDS_MC_CACHE_CONFIGURATION_CONFIG_REF_CHROMA_ADJUST_MASK (0x01000000)
+#define MSVDX_CMDS_MC_CACHE_CONFIGURATION_CONFIG_REF_CHROMA_ADJUST_LSBMASK (0x00000001)
+#define MSVDX_CMDS_MC_CACHE_CONFIGURATION_CONFIG_REF_CHROMA_ADJUST_SHIFT (24)
+/* MSVDX_CMDS, MC_CACHE_CONFIGURATION, CONFIG_REF_OFFSET */
+#define MSVDX_CMDS_MC_CACHE_CONFIGURATION_CONFIG_REF_OFFSET_MASK (0x00FFF000)
+#define MSVDX_CMDS_MC_CACHE_CONFIGURATION_CONFIG_REF_OFFSET_LSBMASK (0x00000FFF)
+#define MSVDX_CMDS_MC_CACHE_CONFIGURATION_CONFIG_REF_OFFSET_SHIFT (12)
+/* MSVDX_CMDS, MC_CACHE_CONFIGURATION, CONFIG_ROW_OFFSET */
+#define MSVDX_CMDS_MC_CACHE_CONFIGURATION_CONFIG_ROW_OFFSET_MASK (0x0000003F)
+#define MSVDX_CMDS_MC_CACHE_CONFIGURATION_CONFIG_ROW_OFFSET_LSBMASK (0x0000003F)
+#define MSVDX_CMDS_MC_CACHE_CONFIGURATION_CONFIG_ROW_OFFSET_SHIFT (0)
+/* MSVDX_CMDS, H264_WEIGHTED_FACTOR_DENOMINATOR, Y_LOG2_WEIGHT_DENOM */
+#define MSVDX_CMDS_VC1_LUMA_RANGE_MAPPING_BASE_ADDRESS_OFFSET (0x0028)
+/* MSVDX_CMDS, VC1_LUMA_RANGE_MAPPING_BASE_ADDRESS, LUMA_RANGE_BASE_ADDR */
+#define MSVDX_CMDS_VC1_CHROMA_RANGE_MAPPING_BASE_ADDRESS_OFFSET (0x002C)
+/* MSVDX_CMDS, VC1_RANGE_MAPPING_FLAGS, LUMA_RANGE_MAP */
+#define MSVDX_CMDS_ALTERNATIVE_OUTPUT_PICTURE_ROTATION_OFFSET (0x003C)
+/* MSVDX_CMDS, ALTERNATIVE_OUTPUT_PICTURE_ROTATION, EXT_ROT_ROW_STRIDE */
+#define MSVDX_CMDS_ALTERNATIVE_OUTPUT_PICTURE_ROTATION_EXT_ROT_ROW_STRIDE_MASK (0xFFC00000)
+#define MSVDX_CMDS_ALTERNATIVE_OUTPUT_PICTURE_ROTATION_EXT_ROT_ROW_STRIDE_LSBMASK \
+ (0x000003FF)
+#define MSVDX_CMDS_ALTERNATIVE_OUTPUT_PICTURE_ROTATION_EXT_ROT_ROW_STRIDE_SHIFT (22)
+/* MSVDX_CMDS, ALTERNATIVE_OUTPUT_PICTURE_ROTATION, PACKED_422_OUTPUT */
+#define MSVDX_CMDS_ALTERNATIVE_OUTPUT_PICTURE_ROTATION_PACKED_422_OUTPUT_MASK (0x00000800)
+#define MSVDX_CMDS_ALTERNATIVE_OUTPUT_PICTURE_ROTATION_PACKED_422_OUTPUT_LSBMASK \
+ (0x00000001)
+#define MSVDX_CMDS_ALTERNATIVE_OUTPUT_PICTURE_ROTATION_PACKED_422_OUTPUT_SHIFT (11)
+/* MSVDX_CMDS, ALTERNATIVE_OUTPUT_PICTURE_ROTATION, USE_AUX_LINE_BUF */
+#define MSVDX_CMDS_ALTERNATIVE_OUTPUT_PICTURE_ROTATION_USE_AUX_LINE_BUF_MASK (0x00000400)
+#define MSVDX_CMDS_ALTERNATIVE_OUTPUT_PICTURE_ROTATION_USE_AUX_LINE_BUF_LSBMASK (0x00000001)
+#define MSVDX_CMDS_ALTERNATIVE_OUTPUT_PICTURE_ROTATION_USE_AUX_LINE_BUF_SHIFT (10)
+/* MSVDX_CMDS, ALTERNATIVE_OUTPUT_PICTURE_ROTATION, SCALE_INPUT_SIZE_SEL */
+#define MSVDX_CMDS_ALTERNATIVE_OUTPUT_PICTURE_ROTATION_SCALE_INPUT_SIZE_SEL_MASK \
+ (0x00000200)
+#define MSVDX_CMDS_ALTERNATIVE_OUTPUT_PICTURE_ROTATION_SCALE_INPUT_SIZE_SEL_LSBMASK \
+ (0x00000001)
+#define MSVDX_CMDS_ALTERNATIVE_OUTPUT_PICTURE_ROTATION_SCALE_INPUT_SIZE_SEL_SHIFT (9)
+/* MSVDX_CMDS, ALTERNATIVE_OUTPUT_PICTURE_ROTATION, USE_EXT_ROT_ROW_STRIDE */
+#define MSVDX_CMDS_ALTERNATIVE_OUTPUT_PICTURE_ROTATION_USE_EXT_ROT_ROW_STRIDE_MASK \
+ (0x00000100)
+#define MSVDX_CMDS_ALTERNATIVE_OUTPUT_PICTURE_ROTATION_USE_EXT_ROT_ROW_STRIDE_LSBMASK \
+ (0x00000001)
+#define MSVDX_CMDS_ALTERNATIVE_OUTPUT_PICTURE_ROTATION_USE_EXT_ROT_ROW_STRIDE_SHIFT (8)
+/* MSVDX_CMDS, ALTERNATIVE_OUTPUT_PICTURE_ROTATION, ROTATION_ROW_STRIDE */
+#define MSVDX_CMDS_ALTERNATIVE_OUTPUT_PICTURE_ROTATION_ROTATION_ROW_STRIDE_MASK (0x00000070)
+#define MSVDX_CMDS_ALTERNATIVE_OUTPUT_PICTURE_ROTATION_ROTATION_ROW_STRIDE_LSBMASK \
+ (0x00000007)
+#define MSVDX_CMDS_ALTERNATIVE_OUTPUT_PICTURE_ROTATION_ROTATION_ROW_STRIDE_SHIFT (4)
+/* MSVDX_CMDS, ALTERNATIVE_OUTPUT_PICTURE_ROTATION, ROTATION_MODE */
+#define MSVDX_CMDS_EXTENDED_ROW_STRIDE_OFFSET (0x0040)
+
+/* MSVDX_CMDS, EXTENDED_ROW_STRIDE, EXT_ROW_STRIDE */
+#define MSVDX_CMDS_EXTENDED_ROW_STRIDE_EXT_ROW_STRIDE_MASK (0x0003FFC0)
+#define MSVDX_CMDS_EXTENDED_ROW_STRIDE_EXT_ROW_STRIDE_LSBMASK (0x00000FFF)
+#define MSVDX_CMDS_EXTENDED_ROW_STRIDE_EXT_ROW_STRIDE_SHIFT (6)
+/* MSVDX_CMDS, EXTENDED_ROW_STRIDE, REF_PIC_MMU_TILED */
+#define MSVDX_CMDS_CHROMA_ROW_STRIDE_OFFSET (0x01AC)
+/* MSVDX_CMDS, CHROMA_ROW_STRIDE, ALT_CHROMA_ROW_STRIDE */
+#define MSVDX_CMDS_CHROMA_ROW_STRIDE_ALT_CHROMA_ROW_STRIDE_MASK (0xFFC00000)
+#define MSVDX_CMDS_CHROMA_ROW_STRIDE_ALT_CHROMA_ROW_STRIDE_LSBMASK (0x000003FF)
+#define MSVDX_CMDS_CHROMA_ROW_STRIDE_ALT_CHROMA_ROW_STRIDE_SHIFT (22)
+/* MSVDX_CMDS, CHROMA_ROW_STRIDE, CHROMA_ROW_STRIDE */
+#define MSVDX_CMDS_CHROMA_ROW_STRIDE_CHROMA_ROW_STRIDE_MASK (0x0003FFC0)
+#define MSVDX_CMDS_CHROMA_ROW_STRIDE_CHROMA_ROW_STRIDE_LSBMASK (0x00000FFF)
+#define MSVDX_CMDS_CHROMA_ROW_STRIDE_CHROMA_ROW_STRIDE_SHIFT (6)
+/* MSVDX_CMDS, RPR_PICTURE_SIZE, RPR_PICTURE_WIDTH */
+#define MSVDX_CMDS_SCALED_DISPLAY_SIZE_OFFSET (0x0050)
+/* MSVDX_CMDS, SCALED_DISPLAY_SIZE, SCALE_DISPLAY_HEIGHT */
+#define MSVDX_CMDS_SCALED_DISPLAY_SIZE_SCALE_DISPLAY_HEIGHT_MASK (0x00FFF000)
+#define MSVDX_CMDS_SCALED_DISPLAY_SIZE_SCALE_DISPLAY_HEIGHT_LSBMASK (0x00000FFF)
+#define MSVDX_CMDS_SCALED_DISPLAY_SIZE_SCALE_DISPLAY_HEIGHT_SHIFT (12)
+/* MSVDX_CMDS, SCALED_DISPLAY_SIZE, SCALE_DISPLAY_WIDTH */
+#define MSVDX_CMDS_SCALED_DISPLAY_SIZE_SCALE_DISPLAY_WIDTH_MASK (0x00000FFF)
+#define MSVDX_CMDS_SCALED_DISPLAY_SIZE_SCALE_DISPLAY_WIDTH_LSBMASK (0x00000FFF)
+#define MSVDX_CMDS_SCALED_DISPLAY_SIZE_SCALE_DISPLAY_WIDTH_SHIFT (0)
+#define MSVDX_CMDS_PVDEC_SCALED_DISPLAY_SIZE_OFFSET (0x00B8)
+/* MSVDX_CMDS, PVDEC_SCALED_DISPLAY_SIZE, PVDEC_SCALE_DISPLAY_HEIGHT */
+#define MSVDX_CMDS_PVDEC_SCALED_DISPLAY_SIZE_PVDEC_SCALE_DISPLAY_HEIGHT_MASK (0xFFFF0000)
+#define MSVDX_CMDS_PVDEC_SCALED_DISPLAY_SIZE_PVDEC_SCALE_DISPLAY_HEIGHT_LSBMASK (0x0000FFFF)
+#define MSVDX_CMDS_PVDEC_SCALED_DISPLAY_SIZE_PVDEC_SCALE_DISPLAY_HEIGHT_SHIFT (16)
+/* MSVDX_CMDS, PVDEC_SCALED_DISPLAY_SIZE, PVDEC_SCALE_DISPLAY_WIDTH */
+#define MSVDX_CMDS_PVDEC_SCALED_DISPLAY_SIZE_PVDEC_SCALE_DISPLAY_WIDTH_MASK (0x0000FFFF)
+#define MSVDX_CMDS_PVDEC_SCALED_DISPLAY_SIZE_PVDEC_SCALE_DISPLAY_WIDTH_LSBMASK (0x0000FFFF)
+#define MSVDX_CMDS_PVDEC_SCALED_DISPLAY_SIZE_PVDEC_SCALE_DISPLAY_WIDTH_SHIFT (0)
+#define MSVDX_CMDS_HORIZONTAL_SCALE_CONTROL_OFFSET (0x0054)
+/* MSVDX_CMDS, HORIZONTAL_SCALE_CONTROL, HORIZONTAL_INITIAL_POS */
+#define MSVDX_CMDS_HORIZONTAL_SCALE_CONTROL_HORIZONTAL_INITIAL_POS_MASK (0xFFFF0000)
+#define MSVDX_CMDS_HORIZONTAL_SCALE_CONTROL_HORIZONTAL_INITIAL_POS_LSBMASK (0x0000FFFF)
+#define MSVDX_CMDS_HORIZONTAL_SCALE_CONTROL_HORIZONTAL_INITIAL_POS_SHIFT (16)
+/* MSVDX_CMDS, HORIZONTAL_SCALE_CONTROL, HORIZONTAL_SCALE_PITCH */
+#define MSVDX_CMDS_HORIZONTAL_SCALE_CONTROL_HORIZONTAL_SCALE_PITCH_MASK (0x0000FFFF)
+#define MSVDX_CMDS_HORIZONTAL_SCALE_CONTROL_HORIZONTAL_SCALE_PITCH_LSBMASK (0x0000FFFF)
+#define MSVDX_CMDS_HORIZONTAL_SCALE_CONTROL_HORIZONTAL_SCALE_PITCH_SHIFT (0)
+#define MSVDX_CMDS_VERTICAL_SCALE_CONTROL_OFFSET (0x0058)
+/* MSVDX_CMDS, VERTICAL_SCALE_CONTROL, VERTICAL_INITIAL_POS */
+#define MSVDX_CMDS_VERTICAL_SCALE_CONTROL_VERTICAL_INITIAL_POS_MASK (0xFFFF0000)
+#define MSVDX_CMDS_VERTICAL_SCALE_CONTROL_VERTICAL_INITIAL_POS_LSBMASK (0x0000FFFF)
+#define MSVDX_CMDS_VERTICAL_SCALE_CONTROL_VERTICAL_INITIAL_POS_SHIFT (16)
+/* MSVDX_CMDS, VERTICAL_SCALE_CONTROL, VERTICAL_SCALE_PITCH */
+#define MSVDX_CMDS_VERTICAL_SCALE_CONTROL_VERTICAL_SCALE_PITCH_MASK (0x0000FFFF)
+#define MSVDX_CMDS_VERTICAL_SCALE_CONTROL_VERTICAL_SCALE_PITCH_LSBMASK (0x0000FFFF)
+#define MSVDX_CMDS_VERTICAL_SCALE_CONTROL_VERTICAL_SCALE_PITCH_SHIFT (0)
+#define MSVDX_CMDS_ALTERNATIVE_OUTPUT_CONTROL_OFFSET (0x01B4)
+/* MSVDX_CMDS, ALTERNATIVE_OUTPUT_CONTROL, ALT_BIT_DEPTH_CHROMA_MINUS8 */
+#define MSVDX_CMDS_ALTERNATIVE_OUTPUT_CONTROL_ALT_BIT_DEPTH_CHROMA_MINUS8_MASK (0x00007000)
+#define MSVDX_CMDS_ALTERNATIVE_OUTPUT_CONTROL_ALT_BIT_DEPTH_CHROMA_MINUS8_LSBMASK \
+ (0x00000007)
+#define MSVDX_CMDS_ALTERNATIVE_OUTPUT_CONTROL_ALT_BIT_DEPTH_CHROMA_MINUS8_SHIFT (12)
+/* MSVDX_CMDS, ALTERNATIVE_OUTPUT_CONTROL, ALT_BIT_DEPTH_LUMA_MINUS8 */
+#define MSVDX_CMDS_ALTERNATIVE_OUTPUT_CONTROL_ALT_BIT_DEPTH_LUMA_MINUS8_MASK (0x00000700)
+#define MSVDX_CMDS_ALTERNATIVE_OUTPUT_CONTROL_ALT_BIT_DEPTH_LUMA_MINUS8_LSBMASK (0x00000007)
+#define MSVDX_CMDS_ALTERNATIVE_OUTPUT_CONTROL_ALT_BIT_DEPTH_LUMA_MINUS8_SHIFT (8)
+/* MSVDX_CMDS, ALTERNATIVE_OUTPUT_CONTROL, SCALE_LUMA_BIFILTER_HORIZ */
+#define MSVDX_CMDS_ALTERNATIVE_OUTPUT_CONTROL_SCALE_LUMA_BIFILTER_HORIZ_MASK (0x00000080)
+#define MSVDX_CMDS_ALTERNATIVE_OUTPUT_CONTROL_SCALE_LUMA_BIFILTER_HORIZ_LSBMASK (0x00000001)
+#define MSVDX_CMDS_ALTERNATIVE_OUTPUT_CONTROL_SCALE_LUMA_BIFILTER_HORIZ_SHIFT (7)
+/* MSVDX_CMDS, ALTERNATIVE_OUTPUT_CONTROL, SCALE_LUMA_BIFILTER_VERT */
+#define MSVDX_CMDS_ALTERNATIVE_OUTPUT_CONTROL_SCALE_LUMA_BIFILTER_VERT_MASK (0x00000040)
+#define MSVDX_CMDS_ALTERNATIVE_OUTPUT_CONTROL_SCALE_LUMA_BIFILTER_VERT_LSBMASK (0x00000001)
+#define MSVDX_CMDS_ALTERNATIVE_OUTPUT_CONTROL_SCALE_LUMA_BIFILTER_VERT_SHIFT (6)
+/* MSVDX_CMDS, ALTERNATIVE_OUTPUT_CONTROL, SCALE_CHROMA_BIFILTER_HORIZ */
+#define MSVDX_CMDS_ALTERNATIVE_OUTPUT_CONTROL_SCALE_CHROMA_BIFILTER_HORIZ_MASK (0x00000020)
+#define MSVDX_CMDS_ALTERNATIVE_OUTPUT_CONTROL_SCALE_CHROMA_BIFILTER_HORIZ_LSBMASK \
+ (0x00000001)
+#define MSVDX_CMDS_ALTERNATIVE_OUTPUT_CONTROL_SCALE_CHROMA_BIFILTER_HORIZ_SHIFT (5)
+/* MSVDX_CMDS, ALTERNATIVE_OUTPUT_CONTROL, SCALE_CHROMA_BIFILTER_VERT */
+#define MSVDX_CMDS_ALTERNATIVE_OUTPUT_CONTROL_SCALE_CHROMA_BIFILTER_VERT_MASK (0x00000010)
+#define MSVDX_CMDS_ALTERNATIVE_OUTPUT_CONTROL_SCALE_CHROMA_BIFILTER_VERT_LSBMASK \
+ (0x00000001)
+#define MSVDX_CMDS_ALTERNATIVE_OUTPUT_CONTROL_SCALE_CHROMA_BIFILTER_VERT_SHIFT (4)
+/* MSVDX_CMDS, ALTERNATIVE_OUTPUT_CONTROL, ALT_MEMORY_PACKING */
+#define MSVDX_CMDS_ALTERNATIVE_OUTPUT_CONTROL_ALT_MEMORY_PACKING_MASK (0x00000008)
+#define MSVDX_CMDS_ALTERNATIVE_OUTPUT_CONTROL_ALT_MEMORY_PACKING_LSBMASK (0x00000001)
+#define MSVDX_CMDS_ALTERNATIVE_OUTPUT_CONTROL_ALT_MEMORY_PACKING_SHIFT (3)
+/* MSVDX_CMDS, ALTERNATIVE_OUTPUT_CONTROL, SCALE_CHROMA_RESAMP_ONLY */
+#define MSVDX_CMDS_ALTERNATIVE_OUTPUT_CONTROL_SCALE_CHROMA_RESAMP_ONLY_MASK (0x00000004)
+#define MSVDX_CMDS_ALTERNATIVE_OUTPUT_CONTROL_SCALE_CHROMA_RESAMP_ONLY_LSBMASK (0x00000001)
+#define MSVDX_CMDS_ALTERNATIVE_OUTPUT_CONTROL_SCALE_CHROMA_RESAMP_ONLY_SHIFT (2)
+/* MSVDX_CMDS, ALTERNATIVE_OUTPUT_CONTROL, ALT_OUTPUT_FORMAT */
+#define MSVDX_CMDS_ALTERNATIVE_OUTPUT_CONTROL_ALT_OUTPUT_FORMAT_MASK (0x00000003)
+#define MSVDX_CMDS_ALTERNATIVE_OUTPUT_CONTROL_ALT_OUTPUT_FORMAT_LSBMASK (0x00000003)
+#define MSVDX_CMDS_ALTERNATIVE_OUTPUT_CONTROL_ALT_OUTPUT_FORMAT_SHIFT (0)
+#define MSVDX_CMDS_SCALE_OUTPUT_SIZE_OFFSET (0x01B8)
+/* MSVDX_CMDS, SCALE_OUTPUT_SIZE, SCALE_OUTPUT_HEIGHT_MIN1 */
+#define MSVDX_CMDS_SCALE_OUTPUT_SIZE_SCALE_OUTPUT_HEIGHT_MIN1_MASK (0xFFFF0000)
+#define MSVDX_CMDS_SCALE_OUTPUT_SIZE_SCALE_OUTPUT_HEIGHT_MIN1_LSBMASK (0x0000FFFF)
+#define MSVDX_CMDS_SCALE_OUTPUT_SIZE_SCALE_OUTPUT_HEIGHT_MIN1_SHIFT (16)
+/* MSVDX_CMDS, SCALE_OUTPUT_SIZE, SCALE_OUTPUT_WIDTH_MIN1 */
+#define MSVDX_CMDS_SCALE_OUTPUT_SIZE_SCALE_OUTPUT_WIDTH_MIN1_MASK (0x0000FFFF)
+#define MSVDX_CMDS_SCALE_OUTPUT_SIZE_SCALE_OUTPUT_WIDTH_MIN1_LSBMASK (0x0000FFFF)
+#define MSVDX_CMDS_SCALE_OUTPUT_SIZE_SCALE_OUTPUT_WIDTH_MIN1_SHIFT (0)
+#define MSVDX_CMDS_SCALE_HORIZONTAL_CHROMA_OFFSET (0x01BC)
+/* MSVDX_CMDS, SCALE_HORIZONTAL_CHROMA, CHROMA_HORIZONTAL_INITIAL */
+#define MSVDX_CMDS_SCALE_HORIZONTAL_CHROMA_CHROMA_HORIZONTAL_INITIAL_MASK (0xFFFF0000)
+#define MSVDX_CMDS_SCALE_HORIZONTAL_CHROMA_CHROMA_HORIZONTAL_INITIAL_LSBMASK (0x0000FFFF)
+#define MSVDX_CMDS_SCALE_HORIZONTAL_CHROMA_CHROMA_HORIZONTAL_INITIAL_SHIFT (16)
+#define MSVDX_CMDS_SCALE_HORIZONTAL_CHROMA_CHROMA_HORIZONTAL_PITCH_MASK (0x0000FFFF)
+#define MSVDX_CMDS_SCALE_HORIZONTAL_CHROMA_CHROMA_HORIZONTAL_PITCH_LSBMASK (0x0000FFFF)
+#define MSVDX_CMDS_SCALE_HORIZONTAL_CHROMA_CHROMA_HORIZONTAL_PITCH_SHIFT (0)
+#define MSVDX_CMDS_SCALE_VERTICAL_CHROMA_OFFSET (0x01C0)
+/* MSVDX_CMDS, SCALE_VERTICAL_CHROMA, CHROMA_VERTICAL_INITIAL */
+#define MSVDX_CMDS_SCALE_VERTICAL_CHROMA_CHROMA_VERTICAL_INITIAL_MASK (0xFFFF0000)
+#define MSVDX_CMDS_SCALE_VERTICAL_CHROMA_CHROMA_VERTICAL_INITIAL_LSBMASK (0x0000FFFF)
+#define MSVDX_CMDS_SCALE_VERTICAL_CHROMA_CHROMA_VERTICAL_INITIAL_SHIFT (16)
+/* MSVDX_CMDS, SCALE_VERTICAL_CHROMA, CHROMA_VERTICAL_PITCH */
+#define MSVDX_CMDS_SCALE_VERTICAL_CHROMA_CHROMA_VERTICAL_PITCH_MASK (0x0000FFFF)
+#define MSVDX_CMDS_SCALE_VERTICAL_CHROMA_CHROMA_VERTICAL_PITCH_LSBMASK (0x0000FFFF)
+#define MSVDX_CMDS_SCALE_VERTICAL_CHROMA_CHROMA_VERTICAL_PITCH_SHIFT (0)
+/* MSVDX_CMDS, MULTICORE_OPERATING_MODE, MBLK_ROW_OFFSET */
+#define MSVDX_CMDS_AUX_LINE_BUFFER_BASE_ADDRESS_OFFSET (0x01EC)
+
+#endif /* _IMG_MSVDX_CMDS_H */
diff --git a/drivers/staging/media/vxd/decoder/img_msvdx_core_regs.h b/drivers/staging/media/vxd/decoder/img_msvdx_core_regs.h
new file mode 100644
index 000000000000..d46d5e04e826
--- /dev/null
+++ b/drivers/staging/media/vxd/decoder/img_msvdx_core_regs.h
@@ -0,0 +1,22 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * IMG MSVDX core Registers
+ * This file contains the MSVDX_CORE_REGS_H Definitions
+ *
+ * Copyright (c) Imagination Technologies Ltd.
+ * Copyright (c) 2021 Texas Instruments Incorporated - http://www.ti.com/
+ *
+ * Authors:
+ * Amit Makani <[email protected]>
+ *
+ * Re-written for upstreaming
+ * Sidraya Jayagond <[email protected]>
+ */
+
+#ifndef _IMG_MSVDX_CORE_REGS_H
+#define _IMG_MSVDX_CORE_REGS_H
+
+#define MSVDX_CORE_CR_MMU_TILE_NO_ENTRIES (4)
+#define MSVDX_CORE_CR_MMU_TILE_EXT_NO_ENTRIES (4)
+
+#endif /* _IMG_MSVDX_CORE_REGS_H */
diff --git a/drivers/staging/media/vxd/decoder/img_msvdx_vdmc_regs.h b/drivers/staging/media/vxd/decoder/img_msvdx_vdmc_regs.h
new file mode 100644
index 000000000000..493eb7114ad7
--- /dev/null
+++ b/drivers/staging/media/vxd/decoder/img_msvdx_vdmc_regs.h
@@ -0,0 +1,26 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * IMG MSVDX VDMC Registers
+ *
+ * Copyright (c) Imagination Technologies Ltd.
+ * Copyright (c) 2021 Texas Instruments Incorporated - http://www.ti.com/
+ *
+ * Authors:
+ * Amit Makani <[email protected]>
+ *
+ * Re-written for upstreaming
+ * Sidraya Jayagond <[email protected]>
+ */
+
+#ifndef _IMG_MSVDX_VDMC_REGS_H
+#define _IMG_MSVDX_VDMC_REGS_H
+
+/* MSVDX_VDMC, CR_VDMC_MACROBLOCK_NUMBER, CR_VDMC_MACROBLOCK_X_OFFSET */
+#define MSVDX_VDMC_CR_VDMC_MACROBLOCK_NUMBER_CR_VDMC_MACROBLOCK_X_OFFSET_MASK (0x0000FFFF)
+#define MSVDX_VDMC_CR_VDMC_MACROBLOCK_NUMBER_CR_VDMC_MACROBLOCK_X_OFFSET_SHIFT (0)
+
+/* MSVDX_VDMC, CR_VDMC_MACROBLOCK_NUMBER, CR_VDMC_MACROBLOCK_Y_OFFSET */
+#define MSVDX_VDMC_CR_VDMC_MACROBLOCK_NUMBER_CR_VDMC_MACROBLOCK_Y_OFFSET_MASK (0xFFFF0000)
+#define MSVDX_VDMC_CR_VDMC_MACROBLOCK_NUMBER_CR_VDMC_MACROBLOCK_Y_OFFSET_SHIFT (16)
+
+#endif /* _IMG_MSVDX_VDMC_REGS_H */
diff --git a/drivers/staging/media/vxd/decoder/img_msvdx_vec_regs.h b/drivers/staging/media/vxd/decoder/img_msvdx_vec_regs.h
new file mode 100644
index 000000000000..58840d501b53
--- /dev/null
+++ b/drivers/staging/media/vxd/decoder/img_msvdx_vec_regs.h
@@ -0,0 +1,60 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * IMG MSVDX VEC Registers
+ *
+ * Copyright (c) Imagination Technologies Ltd.
+ * Copyright (c) 2021 Texas Instruments Incorporated - http://www.ti.com/
+ *
+ * Authors:
+ * Amit Makani <[email protected]>
+ *
+ * Re-written for upstreaming
+ * Sidraya Jayagond <[email protected]>
+ */
+
+#if !defined(__MSVDX_VEC_REGS_H__)
+#define __MSVDX_VEC_REGS_H__
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+ /* MSVDX_VEC, CR_VEC_VLR_COMMANDS_NUM, VLR_COMMANDS_STORE_NUMBER_OF_CMDS */
+#define MSVDX_VEC_CR_VEC_VLC_TABLE_ADDR0_OFFSET (0x00EC)
+
+ /* MSVDX_VEC, CR_VEC_VLC_TABLE_ADDR0, VLC_TABLE_ADDR0 */
+#define MSVDX_VEC_CR_VEC_VLC_TABLE_ADDR0_VLC_TABLE_ADDR0_MASK (0x000007FF)
+
+ /* MSVDX_VEC, CR_VEC_VLC_TABLE_ADDR15, VLC_TABLE_ADDR31 */
+#define MSVDX_VEC_CR_VEC_VLC_TABLE_ADDR16_OFFSET (0x01C0)
+
+ /* MSVDX_VEC, CR_VEC_VLC_TABLE_ADDR18, VLC_TABLE_ADDR37 */
+#define MSVDX_VEC_CR_VEC_VLC_TABLE_INITIAL_WIDTH0_OFFSET (0x012C)
+
+ /* MSVDX_VEC, CR_VEC_VLC_TABLE_ADDR0, VLC_TABLE_ADDR1 */
+#define MSVDX_VEC_CR_VEC_VLC_TABLE_ADDR0_VLC_TABLE_ADDR1_SHIFT (11)
+
+ /* MSVDX_VEC, CR_VEC_VLC_TABLE_INITIAL_WIDTH0, VLC_TABLE_INITIAL_WIDTH0 */
+#define MSVDX_VEC_CR_VEC_VLC_TABLE_INITIAL_WIDTH0_VLC_TABLE_INITIAL_WIDTH0_MASK (0x00000007)
+
+ /* MSVDX_VEC, CR_VEC_VLC_TABLE_INITIAL_WIDTH0, VLC_TABLE_INITIAL_WIDTH1 */
+#define MSVDX_VEC_CR_VEC_VLC_TABLE_INITIAL_WIDTH0_VLC_TABLE_INITIAL_WIDTH1_SHIFT (3)
+
+ /* MSVDX_VEC, CR_VEC_VLC_TABLE_INITIAL_OPCODE0, VLC_TABLE_INITIAL_OPCODE0 */
+#define MSVDX_VEC_CR_VEC_VLC_TABLE_INITIAL_OPCODE0_VLC_TABLE_INITIAL_OPCODE0_MASK \
+ (0x00000003)
+
+ /* MSVDX_VEC, CR_VEC_VLC_TABLE_INITIAL_WIDTH3, VLC_TABLE_INITIAL_WIDTH37 */
+#define MSVDX_VEC_CR_VEC_VLC_TABLE_INITIAL_OPCODE0_OFFSET (0x013C)
+
+ /* MSVDX_VEC, CR_VEC_VLC_TABLE_INITIAL_OPCODE0, VLC_TABLE_INITIAL_OPCODE1 */
+#define MSVDX_VEC_CR_VEC_VLC_TABLE_INITIAL_OPCODE0_VLC_TABLE_INITIAL_OPCODE1_SHIFT (2)
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* __MSVDX_VEC_REGS_H__ */
diff --git a/drivers/staging/media/vxd/decoder/img_pvdec_core_regs.h b/drivers/staging/media/vxd/decoder/img_pvdec_core_regs.h
new file mode 100644
index 000000000000..70bb68a3154f
--- /dev/null
+++ b/drivers/staging/media/vxd/decoder/img_pvdec_core_regs.h
@@ -0,0 +1,60 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * IMG PVDEC CORE Registers
+ *
+ * Copyright (c) Imagination Technologies Ltd.
+ * Copyright (c) 2021 Texas Instruments Incorporated - http://www.ti.com/
+ *
+ * Authors:
+ * Amit Makani <[email protected]>
+ *
+ * Re-written for upstreaming
+ * Sidraya Jayagond <[email protected]>
+ */
+
+#ifndef _IMG_PVDEC_CORE_REGS_H
+#define _IMG_PVDEC_CORE_REGS_H
+
+/* PVDEC_CORE, CR_PVDEC_HOST_INTERRUPT_STATUS, CR_HOST_SYS_WDT */
+#define PVDEC_CORE_CR_PVDEC_HOST_INTERRUPT_STATUS_CR_HOST_SYS_WDT_MASK (0x10000000)
+
+#define PVDEC_CORE_CR_PVDEC_HOST_INTERRUPT_STATUS_CR_HOST_SYS_WDT_SHIFT (28)
+
+/* PVDEC_CORE, CR_PVDEC_HOST_INTERRUPT_STATUS, CR_HOST_READ_TIMEOUT_PROC_IRQ */
+#define PVDEC_CORE_CR_PVDEC_HOST_INTERRUPT_STATUS_CR_HOST_READ_TIMEOUT_PROC_IRQ_MASK \
+ (0x08000000)
+
+/* PVDEC_CORE, CR_PVDEC_CORE_REV, CR_PVDEC_MAJOR_REV */
+#define PVDEC_CORE_CR_PVDEC_CORE_REV_CR_PVDEC_MAJOR_REV_MASK (0x00FF0000)
+#define PVDEC_CORE_CR_PVDEC_CORE_REV_CR_PVDEC_MAJOR_REV_SHIFT (16)
+
+/* PVDEC_CORE, CR_PVDEC_CORE_REV, CR_PVDEC_MINOR_REV */
+#define PVDEC_CORE_CR_PVDEC_CORE_REV_CR_PVDEC_MINOR_REV_MASK (0x0000FF00)
+#define PVDEC_CORE_CR_PVDEC_CORE_REV_CR_PVDEC_MINOR_REV_SHIFT (8)
+
+/* PVDEC_CORE, CR_PVDEC_HOST_INTERRUPT_STATUS, CR_HOST_READ_TIMEOUT_PROC_IRQ */
+#define PVDEC_CORE_CR_PVDEC_HOST_INTERRUPT_STATUS_CR_HOST_READ_TIMEOUT_PROC_IRQ_SHIFT (27)
+
+/* PVDEC_CORE, CR_PVDEC_HOST_INTERRUPT_STATUS, CR_HOST_COMMAND_TIMEOUT_PROC_IRQ */
+#define PVDEC_CORE_CR_PVDEC_HOST_INTERRUPT_STATUS_CR_HOST_COMMAND_TIMEOUT_PROC_IRQ_MASK \
+ (0x04000000)
+#define PVDEC_CORE_CR_PVDEC_HOST_INTERRUPT_STATUS_CR_HOST_COMMAND_TIMEOUT_PROC_IRQ_SHIFT \
+ (26)
+
+/* PVDEC_CORE, CR_PVDEC_CORE_ID, CR_GROUP_ID */
+#define PVDEC_CORE_CR_PVDEC_CORE_ID_CR_GROUP_ID_MASK (0xFF000000)
+#define PVDEC_CORE_CR_PVDEC_CORE_ID_CR_GROUP_ID_SHIFT (24)
+
+/* PVDEC_CORE, CR_PVDEC_CORE_REV, CR_PVDEC_MAINT_REV */
+#define PVDEC_CORE_CR_PVDEC_CORE_REV_CR_PVDEC_MAINT_REV_MASK (0x000000FF)
+#define PVDEC_CORE_CR_PVDEC_CORE_REV_CR_PVDEC_MAINT_REV_SHIFT (0)
+
+/* PVDEC_CORE, CR_PVDEC_CORE_ID, CR_CORE_ID */
+#define PVDEC_CORE_CR_PVDEC_CORE_ID_CR_CORE_ID_MASK (0x00FF0000)
+#define PVDEC_CORE_CR_PVDEC_CORE_ID_CR_CORE_ID_SHIFT (16)
+
+/* PVDEC_CORE, CR_PVDEC_CORE_ID, CR_PVDEC_CORE_CONFIG */
+#define PVDEC_CORE_CR_PVDEC_CORE_ID_CR_PVDEC_CORE_CONFIG_MASK (0x0000FFFF)
+#define PVDEC_CORE_CR_PVDEC_CORE_ID_CR_PVDEC_CORE_CONFIG_SHIFT (0)
+
+#endif /* _IMG_PVDEC_CORE_REGS_H */
diff --git a/drivers/staging/media/vxd/decoder/img_pvdec_pixel_regs.h b/drivers/staging/media/vxd/decoder/img_pvdec_pixel_regs.h
new file mode 100644
index 000000000000..be122c41d4b9
--- /dev/null
+++ b/drivers/staging/media/vxd/decoder/img_pvdec_pixel_regs.h
@@ -0,0 +1,35 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * IMG PVDEC pixel Registers
+ *
+ * Copyright (c) Imagination Technologies Ltd.
+ * Copyright (c) 2021 Texas Instruments Incorporated - http://www.ti.com/
+ *
+ * Authors:
+ * Amit Makani <[email protected]>
+ *
+ * Re-written for upstreaming
+ * Sidraya Jayagond <[email protected]>
+ */
+
+#ifndef _IMG_PVDEC_PIXEL_REGS_H
+#define _IMG_PVDEC_PIXEL_REGS_H
+
+/* PVDEC_PIXEL, CR_MAX_FRAME_CONFIG, CR_PVDEC_HOR_MSB */
+#define PVDEC_PIXEL_CR_MAX_FRAME_CONFIG_CR_PVDEC_HOR_MSB_MASK (0x001F0000)
+
+#define PVDEC_PIXEL_CR_MAX_FRAME_CONFIG_CR_PVDEC_HOR_MSB_SHIFT (16)
+
+/* PVDEC_PIXEL, CR_MAX_FRAME_CONFIG, CR_PVDEC_VER_MSB */
+#define PVDEC_PIXEL_CR_MAX_FRAME_CONFIG_CR_PVDEC_VER_MSB_MASK (0x1F000000)
+#define PVDEC_PIXEL_CR_MAX_FRAME_CONFIG_CR_PVDEC_VER_MSB_SHIFT (24)
+
+/* PVDEC_PIXEL, CR_MAX_FRAME_CONFIG, CR_MSVDX_HOR_MSB */
+#define PVDEC_PIXEL_CR_MAX_FRAME_CONFIG_CR_MSVDX_HOR_MSB_MASK (0x0000001F)
+#define PVDEC_PIXEL_CR_MAX_FRAME_CONFIG_CR_MSVDX_HOR_MSB_SHIFT (0)
+
+/* PVDEC_PIXEL, CR_MAX_FRAME_CONFIG, CR_MSVDX_VER_MSB */
+#define PVDEC_PIXEL_CR_MAX_FRAME_CONFIG_CR_MSVDX_VER_MSB_MASK (0x00001F00)
+#define PVDEC_PIXEL_CR_MAX_FRAME_CONFIG_CR_MSVDX_VER_MSB_SHIFT (8)
+
+#endif /* _IMG_PVDEC_PIXEL_REGS_H */
diff --git a/drivers/staging/media/vxd/decoder/img_pvdec_test_regs.h b/drivers/staging/media/vxd/decoder/img_pvdec_test_regs.h
new file mode 100644
index 000000000000..7cf2f2ded360
--- /dev/null
+++ b/drivers/staging/media/vxd/decoder/img_pvdec_test_regs.h
@@ -0,0 +1,39 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * IMG PVDEC test Registers
+ *
+ * Copyright (c) Imagination Technologies Ltd.
+ * Copyright (c) 2021 Texas Instruments Incorporated - http://www.ti.com/
+ *
+ * Authors:
+ * Amit Makani <[email protected]>
+ *
+ * Re-written for upstreaming
+ * Sidraya Jayagond <[email protected]>
+ */
+
+#ifndef _IMG_PVDEC_TEST_REGS_H
+#define _IMG_PVDEC_TEST_REGS_H
+
+/* PVDEC_TEST, RAND_STL_MEM_RDATA_CONFIG, STALL_ENABLE_MEM_RDATA */
+#define PVDEC_TEST_MEM_READ_LATENCY_OFFSET (0x00F0)
+
+/* PVDEC_TEST, MEM_READ_LATENCY, READ_RESPONSE_RAND_LATENCY */
+#define PVDEC_TEST_MEM_WRITE_RESPONSE_LATENCY_OFFSET (0x00F4)
+
+/* PVDEC_TEST, MEM_WRITE_RESPONSE_LATENCY, WRITE_RESPONSE_RAND_LATENCY */
+#define PVDEC_TEST_MEM_CTRL_OFFSET (0x00F8)
+
+/* PVDEC_TEST, RAND_STL_MEM_WDATA_CONFIG, STALL_ENABLE_MEM_WDATA */
+#define PVDEC_TEST_RAND_STL_MEM_WRESP_CONFIG_OFFSET (0x00E8)
+
+/* PVDEC_TEST, RAND_STL_MEM_WRESP_CONFIG, STALL_ENABLE_MEM_WRESP */
+#define PVDEC_TEST_RAND_STL_MEM_RDATA_CONFIG_OFFSET (0x00EC)
+
+/* PVDEC_TEST, MEMORY_BUS2_MONITOR_2, BUS2_ADDR */
+#define PVDEC_TEST_RAND_STL_MEM_CMD_CONFIG_OFFSET (0x00E0)
+
+/* PVDEC_TEST, RAND_STL_MEM_CMD_CONFIG, STALL_ENABLE_MEM_CMD */
+#define PVDEC_TEST_RAND_STL_MEM_WDATA_CONFIG_OFFSET (0x00E4)
+
+#endif /* _IMG_PVDEC_TEST_REGS_H */
diff --git a/drivers/staging/media/vxd/decoder/img_vdec_fw_msg.h b/drivers/staging/media/vxd/decoder/img_vdec_fw_msg.h
new file mode 100644
index 000000000000..5a655b552f14
--- /dev/null
+++ b/drivers/staging/media/vxd/decoder/img_vdec_fw_msg.h
@@ -0,0 +1,192 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * IMG VDEC firmware messages
+ *
+ * Copyright (c) Imagination Technologies Ltd.
+ * Copyright (c) 2021 Texas Instruments Incorporated - http://www.ti.com/
+ *
+ * Authors:
+ * Amit Makani <[email protected]>
+ *
+ * Re-written for upstreaming
+ * Sidraya Jayagond <[email protected]>
+ */
+
+#ifndef _IMG_VDEC_FW_MSG_H
+#define _IMG_VDEC_FW_MSG_H
+
+#include <linux/types.h>
+
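+/*
+ * Each firmware message field below is described by an _OFFSET (byte offset
+ * of the field within the message), a _TYPE (the storage type used to access
+ * it) and a _MASK/_SHIFT pair locating the field within that storage unit.
+ * An illustrative read, assuming this layout and a byte pointer 'msg' to the
+ * raw message:
+ *
+ *	id = (*(FW_DEVA_GENMSG_TRANS_ID_TYPE *)(msg + FW_DEVA_GENMSG_TRANS_ID_OFFSET) &
+ *	      FW_DEVA_GENMSG_TRANS_ID_MASK) >> FW_DEVA_GENMSG_TRANS_ID_SHIFT;
+ */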
+/* FW_DEVA_COMPLETED ERROR_FLAGS */
+#define FW_DEVA_COMPLETED_ERROR_FLAGS_TYPE unsigned short
+#define FW_DEVA_COMPLETED_ERROR_FLAGS_MASK (0xFFFF)
+#define FW_DEVA_COMPLETED_ERROR_FLAGS_SHIFT (0)
+#define FW_DEVA_COMPLETED_ERROR_FLAGS_OFFSET (0x000C)
+
+/* FW_DEVA_COMPLETED NUM_BEWDTS */
+#define FW_DEVA_COMPLETED_NUM_BEWDTS_TYPE unsigned int
+#define FW_DEVA_COMPLETED_NUM_BEWDTS_MASK (0xFFFFFFFF)
+#define FW_DEVA_COMPLETED_NUM_BEWDTS_SHIFT (0)
+#define FW_DEVA_COMPLETED_NUM_BEWDTS_OFFSET (0x0010)
+
+/* FW_DEVA_COMPLETED NUM_MBSDROPPED */
+#define FW_DEVA_COMPLETED_NUM_MBSDROPPED_TYPE unsigned int
+#define FW_DEVA_COMPLETED_NUM_MBSDROPPED_MASK (0xFFFFFFFF)
+#define FW_DEVA_COMPLETED_NUM_MBSDROPPED_SHIFT (0)
+#define FW_DEVA_COMPLETED_NUM_MBSDROPPED_OFFSET (0x0014)
+
+/* FW_DEVA_COMPLETED NUM_MBSRECOVERED */
+#define FW_DEVA_COMPLETED_NUM_MBSRECOVERED_TYPE unsigned int
+#define FW_DEVA_COMPLETED_NUM_MBSRECOVERED_MASK (0xFFFFFFFF)
+#define FW_DEVA_COMPLETED_NUM_MBSRECOVERED_SHIFT (0)
+#define FW_DEVA_COMPLETED_NUM_MBSRECOVERED_OFFSET (0x0018)
+
+/* FW_DEVA_PANIC ERROR_INT */
+#define FW_DEVA_PANIC_ERROR_INT_TYPE unsigned int
+#define FW_DEVA_PANIC_ERROR_INT_MASK (0xFFFFFFFF)
+#define FW_DEVA_PANIC_ERROR_INT_SHIFT (0)
+#define FW_DEVA_PANIC_ERROR_INT_OFFSET (0x000C)
+
+/* FW_ASSERT FILE_NAME_HASH */
+#define FW_ASSERT_FILE_NAME_HASH_TYPE unsigned int
+#define FW_ASSERT_FILE_NAME_HASH_MASK (0xFFFFFFFF)
+#define FW_ASSERT_FILE_NAME_HASH_SHIFT (0)
+#define FW_ASSERT_FILE_NAME_HASH_OFFSET (0x0004)
+
+/* FW_ASSERT FILE_LINE */
+#define FW_ASSERT_FILE_LINE_TYPE unsigned int
+#define FW_ASSERT_FILE_LINE_MASK (0xFFFFFFFE)
+#define FW_ASSERT_FILE_LINE_SHIFT (1)
+#define FW_ASSERT_FILE_LINE_OFFSET (0x0008)
+
+/* FW_SO TASK_NAME */
+#define FW_SO_TASK_NAME_TYPE unsigned int
+#define FW_SO_TASK_NAME_MASK (0xFFFFFFFF)
+#define FW_SO_TASK_NAME_SHIFT (0)
+#define FW_SO_TASK_NAME_OFFSET (0x0004)
+
+/* FW_DEVA_GENMSG TRANS_ID */
+#define FW_DEVA_GENMSG_TRANS_ID_TYPE unsigned int
+#define FW_DEVA_GENMSG_TRANS_ID_MASK (0xFFFFFFFF)
+#define FW_DEVA_GENMSG_TRANS_ID_SHIFT (0)
+#define FW_DEVA_GENMSG_TRANS_ID_OFFSET (0x0008)
+
+/* FW_DEVA_GENMSG MSG_TYPE */
+#define FW_DEVA_GENMSG_MSG_TYPE_TYPE unsigned char
+#define FW_DEVA_GENMSG_MSG_TYPE_MASK (0xFF)
+#define FW_DEVA_GENMSG_MSG_TYPE_SHIFT (0)
+#define FW_DEVA_GENMSG_MSG_TYPE_OFFSET (0x0001)
+
+/* FW_DEVA_SIGNATURES SIGNATURES */
+#define FW_DEVA_SIGNATURES_SIGNATURES_OFFSET (0x0010)
+
+/* FW_DEVA_SIGNATURES MSG_SIZE */
+#define FW_DEVA_SIGNATURES_MSG_SIZE_TYPE unsigned char
+#define FW_DEVA_SIGNATURES_MSG_SIZE_MASK (0x7F)
+#define FW_DEVA_SIGNATURES_MSG_SIZE_SHIFT (0)
+#define FW_DEVA_SIGNATURES_MSG_SIZE_OFFSET (0x0000)
+
+/* FW_DEVA_SIGNATURES SIZE */
+#define FW_DEVA_SIGNATURES_SIZE (20)
+
+/* FW_DEVA_SIGNATURES SIGNATURE_SELECT */
+#define FW_DEVA_SIGNATURES_SIGNATURE_SELECT_TYPE unsigned int
+#define FW_DEVA_SIGNATURES_SIGNATURE_SELECT_MASK (0xFFFFFFFF)
+#define FW_DEVA_SIGNATURES_SIGNATURE_SELECT_SHIFT (0)
+#define FW_DEVA_SIGNATURES_SIGNATURE_SELECT_OFFSET (0x000C)
+
+/* FW_DEVA_DECODE SIZE */
+#define FW_DEVA_DECODE_SIZE (52)
+
+/* FW_DEVA_DECODE CTRL_ALLOC_ADDR */
+#define FW_DEVA_DECODE_CTRL_ALLOC_ADDR_TYPE unsigned int
+#define FW_DEVA_DECODE_CTRL_ALLOC_ADDR_MASK (0xFFFFFFFF)
+#define FW_DEVA_DECODE_CTRL_ALLOC_ADDR_SHIFT (0)
+#define FW_DEVA_DECODE_CTRL_ALLOC_ADDR_OFFSET (0x0010)
+
+/* FW_DEVA_DECODE BUFFER_SIZE */
+#define FW_DEVA_DECODE_BUFFER_SIZE_TYPE unsigned short
+#define FW_DEVA_DECODE_BUFFER_SIZE_MASK (0xFFFF)
+#define FW_DEVA_DECODE_BUFFER_SIZE_SHIFT (0)
+#define FW_DEVA_DECODE_BUFFER_SIZE_OFFSET (0x000E)
+
+/* FW_DEVA_DECODE OPERATING_MODE */
+#define FW_DEVA_DECODE_OPERATING_MODE_TYPE unsigned int
+#define FW_DEVA_DECODE_OPERATING_MODE_MASK (0xFFFFFFFF)
+#define FW_DEVA_DECODE_OPERATING_MODE_OFFSET (0x0018)
+#define FW_DEVA_DECODE_OPERATING_MODE_SHIFT (0)
+
+/* FW_DEVA_DECODE FLAGS */
+#define FW_DEVA_DECODE_FLAGS_TYPE unsigned short
+#define FW_DEVA_DECODE_FLAGS_MASK (0xFFFF)
+#define FW_DEVA_DECODE_FLAGS_SHIFT (0)
+#define FW_DEVA_DECODE_FLAGS_OFFSET (0x000C)
+
+/* FW_DEVA_DECODE VDEC_FLAGS */
+#define FW_DEVA_DECODE_VDEC_FLAGS_TYPE unsigned char
+#define FW_DEVA_DECODE_VDEC_FLAGS_MASK (0xFF)
+#define FW_DEVA_DECODE_VDEC_FLAGS_SHIFT (0)
+#define FW_DEVA_DECODE_VDEC_FLAGS_OFFSET (0x001E)
+
+/* FW_DEVA_DECODE GENC_ID */
+#define FW_DEVA_DECODE_GENC_ID_TYPE unsigned int
+#define FW_DEVA_DECODE_GENC_ID_MASK (0xFFFFFFFF)
+#define FW_DEVA_DECODE_GENC_ID_SHIFT (0)
+#define FW_DEVA_DECODE_GENC_ID_OFFSET (0x0028)
+
+/* FW_DEVA_DECODE MB_LOAD */
+#define FW_DEVA_DECODE_MB_LOAD_TYPE unsigned int
+#define FW_DEVA_DECODE_MB_LOAD_MASK (0xFFFFFFFF)
+#define FW_DEVA_DECODE_MB_LOAD_OFFSET (0x0030)
+#define FW_DEVA_DECODE_MB_LOAD_SHIFT (0)
+
+/* FW_DEVA_DECODE_FRAGMENT SIZE */
+#define FW_DEVA_DECODE_FRAGMENT_SIZE (16)
+
+/* FW_DEVA_DECODE STREAMID */
+#define FW_DEVA_DECODE_STREAMID_TYPE unsigned char
+#define FW_DEVA_DECODE_STREAMID_MASK (0xFF)
+#define FW_DEVA_DECODE_STREAMID_OFFSET (0x001F)
+#define FW_DEVA_DECODE_STREAMID_SHIFT (0)
+
+/* FW_DEVA_DECODE EXT_STATE_BUFFER */
+#define FW_DEVA_DECODE_EXT_STATE_BUFFER_TYPE unsigned int
+#define FW_DEVA_DECODE_EXT_STATE_BUFFER_MASK (0xFFFFFFFF)
+#define FW_DEVA_DECODE_EXT_STATE_BUFFER_OFFSET (0x0020)
+#define FW_DEVA_DECODE_EXT_STATE_BUFFER_SHIFT (0)
+
+/* FW_DEVA_DECODE MSG_ID */
+#define FW_DEVA_DECODE_MSG_ID_TYPE unsigned short
+#define FW_DEVA_DECODE_MSG_ID_MASK (0xFFFF)
+#define FW_DEVA_DECODE_MSG_ID_OFFSET (0x0002)
+#define FW_DEVA_DECODE_MSG_ID_SHIFT (0)
+
+/* FW_DEVA_DECODE TRANS_ID */
+#define FW_DEVA_DECODE_TRANS_ID_TYPE unsigned int
+#define FW_DEVA_DECODE_TRANS_ID_MASK (0xFFFFFFFF)
+#define FW_DEVA_DECODE_TRANS_ID_OFFSET (0x0008)
+#define FW_DEVA_DECODE_TRANS_ID_SHIFT (0)
+
+/* FW_DEVA_DECODE TILE_CFG */
+#define FW_DEVA_DECODE_TILE_CFG_TYPE unsigned int
+#define FW_DEVA_DECODE_TILE_CFG_MASK (0xFFFFFFFF)
+#define FW_DEVA_DECODE_TILE_CFG_OFFSET (0x0024)
+#define FW_DEVA_DECODE_TILE_CFG_SHIFT (0)
+
+/* FW_DEVA_GENMSG MSG_SIZE */
+#define FW_DEVA_GENMSG_MSG_SIZE_TYPE unsigned char
+#define FW_DEVA_GENMSG_MSG_SIZE_MASK (0x7F)
+#define FW_DEVA_GENMSG_MSG_SIZE_OFFSET (0x0000)
+#define FW_DEVA_GENMSG_MSG_SIZE_SHIFT (0)
+
+/* FW_DEVA_DECODE_FRAGMENT CTRL_ALLOC_ADDR */
+#define FW_DEVA_DECODE_FRAGMENT_CTRL_ALLOC_ADDR_TYPE unsigned int
+#define FW_DEVA_DECODE_FRAGMENT_CTRL_ALLOC_ADDR_MASK (0xFFFFFFFF)
+#define FW_DEVA_DECODE_FRAGMENT_CTRL_ALLOC_ADDR_OFFSET (0x000C)
+#define FW_DEVA_DECODE_FRAGMENT_CTRL_ALLOC_ADDR_SHIFT (0)
+
+/* FW_DEVA_DECODE_FRAGMENT BUFFER_SIZE */
+#define FW_DEVA_DECODE_FRAGMENT_BUFFER_SIZE_TYPE unsigned short
+#define FW_DEVA_DECODE_FRAGMENT_BUFFER_SIZE_MASK (0xFFFF)
+#define FW_DEVA_DECODE_FRAGMENT_BUFFER_SIZE_OFFSET (0x000A)
+#define FW_DEVA_DECODE_FRAGMENT_BUFFER_SIZE_SHIFT (0)
+
+#endif /* _IMG_VDEC_FW_MSG_H */
diff --git a/drivers/staging/media/vxd/decoder/img_video_bus4_mmu_regs.h b/drivers/staging/media/vxd/decoder/img_video_bus4_mmu_regs.h
new file mode 100644
index 000000000000..34c1cf4e55ec
--- /dev/null
+++ b/drivers/staging/media/vxd/decoder/img_video_bus4_mmu_regs.h
@@ -0,0 +1,120 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * IMG video bus4 mmu registers
+ *
+ * Copyright (c) Imagination Technologies Ltd.
+ * Copyright (c) 2021 Texas Instruments Incorporated - http://www.ti.com/
+ *
+ * Authors:
+ * Amit Makani <[email protected]>
+ *
+ * Re-written for upstreaming
+ * Sidraya Jayagond <[email protected]>
+ */
+
+#ifndef _IMG_VIDEO_BUS4_MMU_REGS_H
+#define _IMG_VIDEO_BUS4_MMU_REGS_H
+
+#define IMG_VIDEO_BUS4_MMU_MMU_DIR_BASE_ADDR_OFFSET (0x0020)
+
+/* IMG_VIDEO_BUS4_MMU, MMU_ADDRESS_CONTROL, MMU_BYPASS */
+#define IMG_VIDEO_BUS4_MMU_MMU_ADDRESS_CONTROL_MMU_BYPASS_MASK (0x00000001)
+#define IMG_VIDEO_BUS4_MMU_MMU_ADDRESS_CONTROL_MMU_BYPASS_SHIFT (0)
+
+/* IMG_VIDEO_BUS4_MMU, MMU_ADDRESS_CONTROL */
+#define IMG_VIDEO_BUS4_MMU_MMU_ADDRESS_CONTROL_OFFSET (0x0070)
+
+/* IMG_VIDEO_BUS4_MMU, MMU_BANK_INDEX */
+#define IMG_VIDEO_BUS4_MMU_MMU_BANK_INDEX_OFFSET (0x0010)
+
+/* IMG_VIDEO_BUS4_MMU, MMU_CONTROL1, MMU_SOFT_RESET */
+#define IMG_VIDEO_BUS4_MMU_MMU_CONTROL1_MMU_SOFT_RESET_SHIFT (28)
+
+/* IMG_VIDEO_BUS4_MMU, MMU_CONTROL0 */
+#define IMG_VIDEO_BUS4_MMU_MMU_CONTROL0_OFFSET (0x0000)
+
+/* IMG_VIDEO_BUS4_MMU, MMU_CONTROL0, MMU_TILING_SCHEME */
+#define IMG_VIDEO_BUS4_MMU_MMU_CONTROL0_MMU_TILING_SCHEME_MASK (0x00000001)
+#define IMG_VIDEO_BUS4_MMU_MMU_CONTROL0_MMU_TILING_SCHEME_SHIFT (0)
+
+/* IMG_VIDEO_BUS4_MMU, MMU_TILE_MIN_ADDR, TILE_MIN_ADDR */
+#define IMG_VIDEO_BUS4_MMU_MMU_TILE_MIN_ADDR_STRIDE (4)
+#define IMG_VIDEO_BUS4_MMU_MMU_TILE_MIN_ADDR_OFFSET (0x0050)
+
+/* IMG_VIDEO_BUS4_MMU, MMU_TILE_MAX_ADDR, TILE_MAX_ADDR */
+#define IMG_VIDEO_BUS4_MMU_MMU_TILE_MAX_ADDR_OFFSET (0x0060)
+#define IMG_VIDEO_BUS4_MMU_MMU_TILE_MAX_ADDR_STRIDE (4)
+
+/* IMG_VIDEO_BUS4_MMU, MMU_MEM_REQ */
+#define IMG_VIDEO_BUS4_MMU_MMU_MEM_REQ_OFFSET (0x0090)
+
+/* IMG_VIDEO_BUS4_MMU, MMU_STATUS1, MMU_FAULT_RNW */
+#define IMG_VIDEO_BUS4_MMU_MMU_STATUS1_MMU_FAULT_RNW_MASK (0x10000000)
+#define IMG_VIDEO_BUS4_MMU_MMU_STATUS1_MMU_FAULT_RNW_SHIFT (28)
+
+/* IMG_VIDEO_BUS4_MMU, MMU_MEM_REQ, TAG_OUTSTANDING */
+#define IMG_VIDEO_BUS4_MMU_MMU_MEM_REQ_TAG_OUTSTANDING_MASK (0x000003FF)
+#define IMG_VIDEO_BUS4_MMU_MMU_MEM_REQ_TAG_OUTSTANDING_SHIFT (0)
+
+/* IMG_VIDEO_BUS4_MMU, MMU_CONTROL1 */
+#define IMG_VIDEO_BUS4_MMU_MMU_CONTROL1_OFFSET (0x0008)
+
+/* IMG_VIDEO_BUS4_MMU, MMU_CONTROL1, MMU_SOFT_RESET */
+#define IMG_VIDEO_BUS4_MMU_MMU_CONTROL1_MMU_SOFT_RESET_MASK (0x10000000)
+
+/* IMG_VIDEO_BUS4_MMU, MMU_CONTROL1, MMU_PAUSE_SET */
+#define IMG_VIDEO_BUS4_MMU_MMU_CONTROL1_MMU_PAUSE_SET_MASK (0x01000000)
+#define IMG_VIDEO_BUS4_MMU_MMU_CONTROL1_MMU_PAUSE_SET_SHIFT (24)
+
+/* IMG_VIDEO_BUS4_MMU, MMU_CONTROL1, MMU_PAUSE_CLEAR */
+#define IMG_VIDEO_BUS4_MMU_MMU_CONTROL1_MMU_PAUSE_CLEAR_MASK (0x02000000)
+#define IMG_VIDEO_BUS4_MMU_MMU_CONTROL1_MMU_PAUSE_CLEAR_SHIFT (25)
+
+/* IMG_VIDEO_BUS4_MMU, MMU_CONFIG0 */
+#define IMG_VIDEO_BUS4_MMU_MMU_CONFIG0_OFFSET (0x0080)
+
+/* IMG_VIDEO_BUS4_MMU, MMU_MEM_EXT_OUTSTANDING */
+#define IMG_VIDEO_BUS4_MMU_MMU_MEM_EXT_OUTSTANDING_OFFSET (0x0094)
+
+/* IMG_VIDEO_BUS4_MMU, MMU_CONFIG1 */
+#define IMG_VIDEO_BUS4_MMU_MMU_CONFIG1_OFFSET (0x0084)
+
+/* IMG_VIDEO_BUS4_MMU, MMU_CONTROL1, MMU_INVALDC */
+#define IMG_VIDEO_BUS4_MMU_MMU_CONTROL1_MMU_INVALDC_MASK (0x00000F00)
+#define IMG_VIDEO_BUS4_MMU_MMU_CONTROL1_MMU_INVALDC_SHIFT (8)
+
+/* IMG_VIDEO_BUS4_MMU, MMU_STATUS0 */
+#define IMG_VIDEO_BUS4_MMU_MMU_STATUS0_OFFSET (0x0088)
+
+/* IMG_VIDEO_BUS4_MMU, MMU_STATUS1 */
+#define IMG_VIDEO_BUS4_MMU_MMU_STATUS1_OFFSET (0x008C)
+
+/* IMG_VIDEO_BUS4_MMU, MMU_STATUS0, MMU_FAULT_ADDR */
+#define IMG_VIDEO_BUS4_MMU_MMU_STATUS0_MMU_FAULT_ADDR_MASK (0xFFFFF000)
+#define IMG_VIDEO_BUS4_MMU_MMU_STATUS0_MMU_FAULT_ADDR_SHIFT (12)
+
+/* IMG_VIDEO_BUS4_MMU, MMU_STATUS0, MMU_PF_N_RW */
+#define IMG_VIDEO_BUS4_MMU_MMU_STATUS0_MMU_PF_N_RW_MASK (0x00000001)
+#define IMG_VIDEO_BUS4_MMU_MMU_STATUS0_MMU_PF_N_RW_SHIFT (0)
+
+/* IMG_VIDEO_BUS4_MMU, MMU_STATUS1, MMU_FAULT_REQ_ID */
+#define IMG_VIDEO_BUS4_MMU_MMU_STATUS1_MMU_FAULT_REQ_ID_MASK (0x003F0000)
+#define IMG_VIDEO_BUS4_MMU_MMU_STATUS1_MMU_FAULT_REQ_ID_SHIFT (16)
+
+/* IMG_VIDEO_BUS4_MMU, MMU_STATUS0, MMU_SECURE_FAULT */
+#define IMG_VIDEO_BUS4_MMU_MMU_STATUS0_MMU_SECURE_FAULT_MASK (0x00000002)
+#define IMG_VIDEO_BUS4_MMU_MMU_STATUS0_MMU_SECURE_FAULT_SHIFT (1)
+
+/* IMG_VIDEO_BUS4_MMU, MMU_CONFIG1, SUPPORT_STRIDE_PER_CONTEXT */
+#define IMG_VIDEO_BUS4_MMU_MMU_CONFIG1_SUPPORT_STRIDE_PER_CONTEXT_MASK (0x20000000)
+#define IMG_VIDEO_BUS4_MMU_MMU_CONFIG1_SUPPORT_STRIDE_PER_CONTEXT_SHIFT (29)
+
+/* IMG_VIDEO_BUS4_MMU, MMU_CONFIG1, SUPPORT_SECURE */
+#define IMG_VIDEO_BUS4_MMU_MMU_CONFIG1_SUPPORT_SECURE_MASK (0x80000000)
+#define IMG_VIDEO_BUS4_MMU_MMU_CONFIG1_SUPPORT_SECURE_SHIFT (31)
+
+/* IMG_VIDEO_BUS4_MMU, MMU_CONFIG0, EXTENDED_ADDR_RANGE */
+#define IMG_VIDEO_BUS4_MMU_MMU_CONFIG0_EXTENDED_ADDR_RANGE_MASK (0x000000F0)
+#define IMG_VIDEO_BUS4_MMU_MMU_CONFIG0_EXTENDED_ADDR_RANGE_SHIFT (4)
+
+/* IMG_VIDEO_BUS4_MMU, MMU_CONFIG0, GROUP_OVERRIDE_SIZE */
+#define IMG_VIDEO_BUS4_MMU_MMU_CONFIG0_GROUP_OVERRIDE_SIZE_MASK (0x00000700)
+#define IMG_VIDEO_BUS4_MMU_MMU_CONFIG0_GROUP_OVERRIDE_SIZE_SHIFT (8)
+
+#endif /* _IMG_VIDEO_BUS4_MMU_REGS_H */
diff --git a/drivers/staging/media/vxd/decoder/jpegfw_data_shared.h b/drivers/staging/media/vxd/decoder/jpegfw_data_shared.h
new file mode 100644
index 000000000000..2448fde864fb
--- /dev/null
+++ b/drivers/staging/media/vxd/decoder/jpegfw_data_shared.h
@@ -0,0 +1,84 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Public data structures for the JPEG parser firmware module
+ *
+ * Copyright (c) Imagination Technologies Ltd.
+ * Copyright (c) 2021 Texas Instruments Incorporated - http://www.ti.com/
+ *
+ * Authors:
+ * Sunita Nadampalli <[email protected]>
+ *
+ * Re-written for upstream
+ * Sidraya Jayagond <[email protected]>
+ */
+#ifdef USE_SHARING
+#endif
+
+#ifndef _JPEGFW_DATA_H_
+#define _JPEGFW_DATA_H_
+
+#include "vdecfw_share.h"
+#include "vdecfw_shared.h"
+
+#define JPEG_VDEC_8x8_DCT_SIZE 64 //!< Number of elements in 8x8 DCT
+#define JPEG_VDEC_MAX_COMPONENTS 4 //!< Maximum number of components in JPEG
+#define JPEG_VDEC_MAX_SETS_HUFFMAN_TABLES 2 //!< Maximum number of Huffman table sets in JPEG
+#define JPEG_VDEC_MAX_QUANT_TABLES 4 //!< Maximum number of quantisation tables in JPEG
+#define JPEG_VDEC_TABLE_CLASS_NUM 2 //!< Number of Huffman table classes in JPEG
+#define JPEG_VDEC_PLANE_MAX 4 //!< Maximum number of planes
+
+struct hentry {
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT, unsigned short, code);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT, unsigned char, codelen);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT, unsigned char, value);
+};
+
+/*
+ * This structure contains the JPEG Huffman table
+ * NOTE: Should only contain JPEG specific information.
+ * @brief JPEG Huffman Table Information
+ */
+struct vdec_jpeg_huffman_tableinfo {
+ /* number of bits */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT, unsigned char, bits[16]);
+ /* codeword value */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT, unsigned char, values[256]);
+};
+
+/*
+ * This structure contains the JPEG dequantisation table
+ * NOTE: Should only contain JPEG specific information.
+ * @brief JPEG Dequantisation Table Information
+ */
+struct vdec_jpeg_de_quant_tableinfo {
+	/* Quantisation precision */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT, unsigned char, precision);
+	/* Quantisation values for the 8x8 DCT */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT, unsigned short, elements[64]);
+};
+
+/*
+ * This describes the JPEG parser component "Header data", shown in the
+ * Firmware Memory Layout diagram. This data is required by the JPEG firmware
+ * and should be supplied by the Host.
+ */
+struct jpegfw_header_data {
+ /* Primary decode buffer base addresses */
+ struct vdecfw_image_buffer primary;
+ /* Reference (output) picture base addresses */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT, unsigned int,
+ plane_offsets[JPEG_VDEC_PLANE_MAX]);
+ /* SOS fields count value */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT, unsigned char, hdr_sos_count);
+};
+
+/*
+ * This describes the JPEG parser component "Context data".
+ * JPEG does not need any data to be saved between pictures; this structure
+ * is needed only to fit into the firmware framework.
+ */
+struct jpegfw_context_data {
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT, unsigned int, dummy);
+};
+
+#endif /* _JPEGFW_DATA_H_ */
diff --git a/drivers/staging/media/vxd/decoder/mem_io.h b/drivers/staging/media/vxd/decoder/mem_io.h
new file mode 100644
index 000000000000..1e63f889f258
--- /dev/null
+++ b/drivers/staging/media/vxd/decoder/mem_io.h
@@ -0,0 +1,42 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Memory structure field access macros (MEMIO)
+ *
+ * Copyright (c) Imagination Technologies Ltd.
+ * Copyright (c) 2021 Texas Instruments Incorporated - http://www.ti.com/
+ *
+ * Authors:
+ * Amit Makani <[email protected]>
+ *
+ * Re-written for upstreaming
+ * Sidraya Jayagond <[email protected]>
+ */
+
+#ifndef _MEM_IO_H
+#define _MEM_IO_H
+
+#include <linux/types.h>
+
+#include "reg_io2.h"
+
+#define MEMIO_CHECK_ALIGNMENT(vpmem) \
+ IMG_ASSERT((vpmem))
+
+#define MEMIO_READ_FIELD(vpmem, field) \
+ ((((*((field ## _TYPE *)(((unsigned long)(vpmem)) + field ## _OFFSET))) & \
+ field ## _MASK) >> field ## _SHIFT))
+
+#define MEMIO_WRITE_FIELD(vpmem, field, value, type) \
+ do { \
+ type __vpmem = vpmem; \
+ MEMIO_CHECK_ALIGNMENT(__vpmem); \
+ (*((field ## _TYPE *)(((unsigned long)(__vpmem)) + \
+ field ## _OFFSET))) = \
+ (field ## _TYPE)(((*((field ## _TYPE *)(((unsigned long)(__vpmem)) + \
+ field ## _OFFSET))) & \
+ ~(field ## _TYPE)field ## _MASK) | \
+ (field ## _TYPE)(((value) << field ## _SHIFT) & \
+ field ## _MASK)); \
+	} while (0)
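+
+/*
+ * Illustrative usage sketch only (not part of the driver): the MEMIO
+ * accessors are used together with the <FIELD>_TYPE/_MASK/_SHIFT/_OFFSET
+ * definitions from img_vdec_fw_msg.h. "msg" below is a hypothetical
+ * pointer to a firmware message buffer:
+ *
+ *   unsigned char msg_type;
+ *
+ *   msg_type = MEMIO_READ_FIELD(msg, FW_DEVA_GENMSG_MSG_TYPE);
+ *   MEMIO_WRITE_FIELD(msg, FW_DEVA_DECODE_MSG_ID, 0x1234, unsigned char *);
+ */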
+
+#endif /* _MEM_IO_H */
diff --git a/drivers/staging/media/vxd/decoder/pvdec_entropy_regs.h b/drivers/staging/media/vxd/decoder/pvdec_entropy_regs.h
new file mode 100644
index 000000000000..3c495c198853
--- /dev/null
+++ b/drivers/staging/media/vxd/decoder/pvdec_entropy_regs.h
@@ -0,0 +1,33 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * VXD DEC Common low level core interface component
+ *
+ * Copyright (c) Imagination Technologies Ltd.
+ * Copyright (c) 2021 Texas Instruments Incorporated - http://www.ti.com/
+ *
+ * Authors:
+ * Angela Stegmaier <[email protected]>
+ *
+ * Re-written for upstreaming
+ * Sidraya Jayagond <[email protected]>
+ */
+
+#ifndef __PVDEC_ENTROPY_REGS_H__
+#define __PVDEC_ENTROPY_REGS_H__
+
+/*
+ * PVDEC_ENTROPY, CR_GENC_BUFFER_SIZE
+ */
+#define PVDEC_ENTROPY_CR_GENC_BUFFER_SIZE_OFFSET (0x0100)
+
+/*
+ * PVDEC_ENTROPY, CR_GENC_BUFFER_BASE_ADDRESS
+ */
+#define PVDEC_ENTROPY_CR_GENC_BUFFER_BASE_ADDRESS_OFFSET (0x0110)
+
+/*
+ * PVDEC_ENTROPY, CR_GENC_FRAGMENT_BASE_ADDRESS
+ */
+#define PVDEC_ENTROPY_CR_GENC_FRAGMENT_BASE_ADDRESS_OFFSET (0x0098)
+
+#endif
diff --git a/drivers/staging/media/vxd/decoder/pvdec_int.h b/drivers/staging/media/vxd/decoder/pvdec_int.h
new file mode 100644
index 000000000000..01f5a038e69f
--- /dev/null
+++ b/drivers/staging/media/vxd/decoder/pvdec_int.h
@@ -0,0 +1,82 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Low-level PVDEC interface component.
+ *
+ * Copyright (c) Imagination Technologies Ltd.
+ * Copyright (c) 2021 Texas Instruments Incorporated - http://www.ti.com/
+ *
+ * Authors:
+ * Amit Makani <[email protected]>
+ *
+ * Re-written for upstreaming
+ * Sidraya Jayagond <[email protected]>
+ */
+#ifndef __PVDEC_INT_H__
+#define __PVDEC_INT_H__
+
+#include "hw_control.h"
+#include "vxd_ext.h"
+#include "vxd_props.h"
+
+/* How many VLC IDX addresses fit in a single address register */
+#define PVDECIO_VLC_IDX_ADDR_PARTS 2
+
+/* How many VLC IDX initial widths fit in a single width register */
+#define PVDECIO_VLC_IDX_WIDTH_PARTS 10
+
+/* How many VLC IDX initial opcodes fit in a single opcode register */
+#define PVDECIO_VLC_IDX_OPCODE_PARTS 16
+
+/* Index of the VLC IDX address entry */
+#define PVDECIO_VLC_IDX_ADDR_ID 2
+
+/*
+ * Mask for VLC IDX address field. We're taking [0][0] here, as it corresponds
+ * to unshifted mask
+ */
+#define PVDECIO_VLC_IDX_ADDR_MASK MSVDX_VEC_CR_VEC_VLC_TABLE_ADDR0_VLC_TABLE_ADDR0_MASK
+
+/*
+ * Length (shift) of VLC IDX address field. We're taking [0][1] here, as it
+ * corresponds to shift of one element
+ */
+#define PVDECIO_VLC_IDX_ADDR_SHIFT MSVDX_VEC_CR_VEC_VLC_TABLE_ADDR0_VLC_TABLE_ADDR1_SHIFT
+
+/* Index of the VLC IDX width entry */
+#define PVDECIO_VLC_IDX_WIDTH_ID 1
+
+/*
+ * Mask for VLC IDX width field. We're taking [0][0] here, as it corresponds
+ * to unshifted mask
+ */
+#define PVDECIO_VLC_IDX_WIDTH_MASK \
+ MSVDX_VEC_CR_VEC_VLC_TABLE_INITIAL_WIDTH0_VLC_TABLE_INITIAL_WIDTH0_MASK
+
+/*
+ * Length (shift) of VLC IDX width field. We're taking [0][1] here, as it
+ * corresponds to shift of one element
+ */
+#define PVDECIO_VLC_IDX_WIDTH_SHIFT \
+ MSVDX_VEC_CR_VEC_VLC_TABLE_INITIAL_WIDTH0_VLC_TABLE_INITIAL_WIDTH1_SHIFT
+
+/* Index of the VLC IDX opcode entry */
+#define PVDECIO_VLC_IDX_OPCODE_ID 0
+
+/*
+ * Length (shift) of VLC IDX opcode field. We're taking [0][1] here, as it
+ * corresponds to shift of one element
+ */
+#define PVDECIO_VLC_IDX_OPCODE_SHIFT \
+ MSVDX_VEC_CR_VEC_VLC_TABLE_INITIAL_OPCODE0_VLC_TABLE_INITIAL_OPCODE1_SHIFT
+
+/* This comes from DEVA PVDEC FW */
+#define CTRL_ALLOC_MAX_SEGMENT_SIZE 1024
+
+/*
+ * Mask for VLC IDX opcode field. We're taking [0][0] here, as it corresponds
+ * to unshifted mask
+ */
+#define PVDECIO_VLC_IDX_OPCODE_MASK \
+ MSVDX_VEC_CR_VEC_VLC_TABLE_INITIAL_OPCODE0_VLC_TABLE_INITIAL_OPCODE0_MASK
+
+#endif /* __PVDEC_INT_H__ */
diff --git a/drivers/staging/media/vxd/decoder/pvdec_vec_be_regs.h b/drivers/staging/media/vxd/decoder/pvdec_vec_be_regs.h
new file mode 100644
index 000000000000..06593f050a97
--- /dev/null
+++ b/drivers/staging/media/vxd/decoder/pvdec_vec_be_regs.h
@@ -0,0 +1,35 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * VXD DEC Common low level core interface component
+ *
+ * Copyright (c) Imagination Technologies Ltd.
+ * Copyright (c) 2021 Texas Instruments Incorporated - http://www.ti.com/
+ *
+ * Authors:
+ * Angela Stegmaier <[email protected]>
+ *
+ * Re-written for upstreaming
+ * Sidraya Jayagond <[email protected]>
+ */
+
+#ifndef __PVDEC_VEC_BE_REGS_H__
+#define __PVDEC_VEC_BE_REGS_H__
+
+#define PVDEC_VEC_BE_CR_GENC_BUFFER_SIZE_OFFSET (0x0040)
+
+/*
+ * PVDEC_VEC_BE, CR_GENC_BUFFER_BASE_ADDRESS
+ */
+#define PVDEC_VEC_BE_CR_GENC_BUFFER_BASE_ADDRESS_OFFSET (0x0050)
+
+/*
+ * PVDEC_VEC_BE, CR_GENC_FRAGMENT_BASE_ADDRESS
+ */
+#define PVDEC_VEC_BE_CR_GENC_FRAGMENT_BASE_ADDRESS_OFFSET (0x0030)
+
+/*
+ * PVDEC_VEC_BE, CR_ABOVE_PARAM_BASE_ADDRESS
+ */
+#define PVDEC_VEC_BE_CR_ABOVE_PARAM_BASE_ADDRESS_OFFSET (0x00C0)
+
+#endif
diff --git a/drivers/staging/media/vxd/decoder/reg_io2.h b/drivers/staging/media/vxd/decoder/reg_io2.h
new file mode 100644
index 000000000000..a18ffda4efcb
--- /dev/null
+++ b/drivers/staging/media/vxd/decoder/reg_io2.h
@@ -0,0 +1,74 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Register field access macros (REGIO)
+ * This file contains the register field read/write helper definitions
+ *
+ * Copyright (c) Imagination Technologies Ltd.
+ * Copyright (c) 2021 Texas Instruments Incorporated - http://www.ti.com/
+ *
+ * Authors:
+ * Amit Makani <[email protected]>
+ *
+ * Re-written for upstreaming
+ * Sidraya Jayagond <[email protected]>
+ */
+
+#ifndef REG_IO2_H_
+#define REG_IO2_H_
+
+#define IMG_ASSERT(expected) \
+ ((void)((expected) || \
+ (pr_err("Assertion failed: %s, file %s, line %d\n", \
+ #expected, __FILE__, __LINE__), dump_stack(), 0)))
+
+/* This macro is used to extract a field from a register. */
+#define REGIO_READ_FIELD(regval, group, reg, field) \
+ (((regval) & group ## _ ## reg ## _ ## field ## _MASK) >> \
+ group ## _ ## reg ## _ ## field ## _SHIFT)
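+
+/*
+ * Informal example (illustrative only): extracting the MMU_FAULT_RNW field
+ * from a previously read copy "status1" of the MMU_STATUS1 register:
+ *
+ *   fault_rnw = REGIO_READ_FIELD(status1, IMG_VIDEO_BUS4_MMU, MMU_STATUS1,
+ *                                MMU_FAULT_RNW);
+ *
+ * This expands to (status1 & IMG_VIDEO_BUS4_MMU_MMU_STATUS1_MMU_FAULT_RNW_MASK)
+ * >> IMG_VIDEO_BUS4_MMU_MMU_STATUS1_MMU_FAULT_RNW_SHIFT.
+ */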
+
+#if (defined WIN32 || defined __linux__) && !defined NO_REGIO_CHECK_FIELD_VALUE
+/*
+ * Only provide register field range checking for Windows and Linux builds.
+ *
+ * Simple range check that ensures that if bits outside the valid field
+ * range are set, the provided value is at least consistent with a
+ * negative value (i.e. all top bits are set to 1).
+ * Cannot perform more comprehensive testing without knowing whether the
+ * field should be interpreted as signed or unsigned.
+ */
+#define REGIO_CHECK_VALUE_FITS_WITHIN_FIELD(group, reg, field, value, type) \
+ { \
+ type __value = value; \
+ unsigned int temp = (unsigned int)(__value); \
+ if (temp > group ## _ ## reg ## _ ## field ## _LSBMASK) { \
+ IMG_ASSERT((((unsigned int)__value) & \
+ (unsigned int)~(group ## _ ## reg ## _ ## field ## _LSBMASK)) == \
+ (unsigned int)~(group ## _ ## reg ## _ ## field ## _LSBMASK)); \
+ } \
+ }
+#else
+#define REGIO_CHECK_VALUE_FITS_WITHIN_FIELD(group, reg, field, value, type)
+#endif
+
+/* This macro is used to update the value of a field in a register. */
+#define REGIO_WRITE_FIELD(regval, group, reg, field, value, reg_type, val_type) \
+ { \
+ reg_type __regval = regval; \
+ val_type __value = value; \
+ REGIO_CHECK_VALUE_FITS_WITHIN_FIELD(group, reg, field, __value, val_type); \
+ (regval) = \
+ ((__regval) & ~(group ## _ ## reg ## _ ## field ## _MASK)) | \
+ (((unsigned int)(__value) << (group ## _ ## reg ## _ ## field ## _SHIFT)) & \
+ (group ## _ ## reg ## _ ## field ## _MASK)); \
+ }
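+
+/*
+ * Illustrative example only: setting the MMU_INVALDC field in a local copy
+ * "ctrl1" of the MMU_CONTROL1 register value before writing it back. This
+ * assumes the matching _MASK/_SHIFT/_LSBMASK definitions exist for the field:
+ *
+ *   REGIO_WRITE_FIELD(ctrl1, IMG_VIDEO_BUS4_MMU, MMU_CONTROL1, MMU_INVALDC,
+ *                     0xF, unsigned int, unsigned int);
+ */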
+
+/* This macro is used to update the value of a field in a register. */
+#define REGIO_WRITE_FIELD_LITE(regval, group, reg, field, value, type) \
+{ \
+ type __value = value; \
+ REGIO_CHECK_VALUE_FITS_WITHIN_FIELD(group, reg, field, __value, type); \
+ (regval) |= ((unsigned int)(__value) << (group ## _ ## reg ## _ ## field ## _SHIFT)); \
+}
+
+#endif /* REG_IO2_H_ */
diff --git a/drivers/staging/media/vxd/decoder/vdecfw_share.h b/drivers/staging/media/vxd/decoder/vdecfw_share.h
new file mode 100644
index 000000000000..7c6b9df00472
--- /dev/null
+++ b/drivers/staging/media/vxd/decoder/vdecfw_share.h
@@ -0,0 +1,36 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * VXD DEC SYSDEV and UI Interface header
+ *
+ * Copyright (c) Imagination Technologies Ltd.
+ * Copyright (c) 2021 Texas Instruments Incorporated - http://www.ti.com/
+ *
+ * Authors:
+ * Amit Makani <[email protected]>
+ *
+ * Re-written for upstreaming
+ * Sidraya Jayagond <[email protected]>
+ */
+#ifndef _VDECFW_SHARE_H_
+#define _VDECFW_SHARE_H_
+
+/*
+ * This macro sets alignment for a field structure.
+ * Parameters :
+ * a - alignment value
+ * t - field type
+ * n - field name
+ */
+#define IMG_ALIGN_FIELD(a, t, n) t n __aligned(a)
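+
+/*
+ * Usage sketch (illustrative only, "example_shared" is a hypothetical
+ * structure): a 32-bit field aligned to the default shared alignment:
+ *
+ *   struct example_shared {
+ *           IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ *                           unsigned int, transaction_id);
+ *   };
+ *
+ * expands to "unsigned int transaction_id __aligned(4)".
+ */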
+
+/* END of vdecfw_share_macros.h */
+
+/*
+ * Field alignments in shared data structures
+ */
+/* Default field alignment */
+#define VDECFW_SHARE_DEFAULT_ALIGNMENT 4
+/* Pointer field alignment */
+#define VDECFW_SHARE_PTR_ALIGNMENT 4
+
+#endif /* _VDECFW_SHARE_H_ */
diff --git a/drivers/staging/media/vxd/decoder/vdecfw_shared.h b/drivers/staging/media/vxd/decoder/vdecfw_shared.h
new file mode 100644
index 000000000000..a582987d45bb
--- /dev/null
+++ b/drivers/staging/media/vxd/decoder/vdecfw_shared.h
@@ -0,0 +1,893 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Public data structures and enums for the firmware
+ *
+ * Copyright (c) Imagination Technologies Ltd.
+ * Copyright (c) 2021 Texas Instruments Incorporated - http://www.ti.com/
+ *
+ * Authors:
+ * Amit Makani <[email protected]>
+ *
+ * Re-written for upstreaming
+ * Sidraya Jayagond <[email protected]>
+ */
+
+#ifdef USE_SHARING
+#endif
+
+#ifndef _VDECFW_H_
+#define _VDECFW_H_
+
+#include "img_msvdx_core_regs.h"
+#include "vdecfw_share.h"
+
+/* This type defines the buffer type */
+enum img_buffer_type {
+ IMG_BUFFERTYPE_FRAME = 0,
+ IMG_BUFFERTYPE_FIELD_TOP,
+ IMG_BUFFERTYPE_FIELD_BOTTOM,
+ IMG_BUFFERTYPE_PAIR,
+ IMG_BUFFERTYPE_FORCE32BITS = 0x7FFFFFFFU
+};
+
+/* Number of scaling coefficients */
+#define VDECFW_NUM_SCALE_COEFFS 4
+
+/*
+ * maximum number of pictures handled by the firmware
+ * for H.264 (largest requirement): 32 for 4 view MVC
+ */
+#define VDECFW_MAX_NUM_PICTURES 32
+#define VDECFW_MAX_NUM_VIEWS 4
+#define EMERALD_CORE 6
+
+/*
+ * maximum number of colocated pictures handled by
+ * firmware in FWBSP mode
+ */
+#define VDECFWBSP_MAX_NUM_COL_PICS 16
+
+/* Maximum number of colour planes. */
+#define VDECFW_PLANE_MAX 4
+
+#define VDECFW_NON_EXISTING_PICTURE_TID (0xffffffff)
+
+#define NO_VALUE 0
+
+/* Indicates whether a cyclic sequence number (x) has reached another (y). */
+#define HAS_X_REACHED_Y(x, y, range, type) \
+ ({ \
+ type __x = x; \
+ type __y = y; \
+ type __range = range; \
+ (((((__x) - (__y) + (__range)) % (__range)) <= \
+ (((__y) - (__x) + (__range)) % (__range))) ? TRUE : FALSE); })
+
+/* Indicates whether a cyclic sequence number (x) has passed another (y). */
+#define HAS_X_PASSED_Y(x, y, range, type) \
+ ({ \
+ type __x = x; \
+ type __y = y; \
+ type __range = range; \
+ (((((__x) - (__y) + (__range)) % (__range)) < \
+ (((__y) - (__x) + (__range)) % (__range))) ? TRUE : FALSE); })
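+
+/*
+ * Worked example (informative): with a range of 256 (an 8-bit cyclic
+ * counter), HAS_X_REACHED_Y(5, 5, 256, unsigned int) is TRUE,
+ * HAS_X_REACHED_Y(4, 5, 256, unsigned int) is FALSE (x is one step behind),
+ * and HAS_X_PASSED_Y(6, 5, 256, unsigned int) is TRUE. The modulo
+ * arithmetic keeps both macros correct across the wrap at zero, e.g.
+ * HAS_X_PASSED_Y(1, 250, 256, unsigned int) is also TRUE.
+ */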
+
+#define FWIF_BIT_MASK(num) ((1 << (num)) - 1)
+
+/*
+ * Number of bits in transaction ID used to represent picture number in stream.
+ */
+#define FWIF_NUMBITS_STREAM_PICTURE_ID 16
+/* Number of bits in transaction ID used to represent picture number in core. */
+#define FWIF_NUMBITS_CORE_PICTURE_ID 4
+/* Number of bits in transaction ID used to represent stream id. */
+#define FWIF_NUMBITS_STREAM_ID 8
+/* Number of bits in transaction ID used to represent core id. */
+#define FWIF_NUMBITS_CORE_ID 4
+
+/* Offset in transaction ID to picture number in stream. */
+#define FWIF_OFFSET_STREAM_PICTURE_ID 0
+/* Offset in transaction ID to picture number in core. */
+#define FWIF_OFFSET_CORE_PICTURE_ID \
+ (FWIF_OFFSET_STREAM_PICTURE_ID + FWIF_NUMBITS_STREAM_PICTURE_ID)
+/* Offset in transaction ID to stream id. */
+#define FWIF_OFFSET_STREAM_ID \
+ (FWIF_OFFSET_CORE_PICTURE_ID + FWIF_NUMBITS_CORE_PICTURE_ID)
+/* Offset in transaction ID to core id. */
+#define FWIF_OFFSET_CORE_ID \
+ (FWIF_OFFSET_STREAM_ID + FWIF_NUMBITS_STREAM_ID)
+
+/* Picture id (stream) from transaction id. */
+#define GET_STREAM_PICTURE_ID(transaction_id) \
+ ((transaction_id) & FWIF_BIT_MASK(FWIF_NUMBITS_STREAM_PICTURE_ID))
+/* Picture id (core) from transaction id. */
+#define GET_CORE_PICTURE_ID(transaction_id) \
+ (((transaction_id) >> FWIF_OFFSET_CORE_PICTURE_ID) & \
+ FWIF_BIT_MASK(FWIF_NUMBITS_CORE_PICTURE_ID))
+/* Stream id from transaction id. */
+#define GET_STREAM_ID(transaction_id) \
+ (((transaction_id) >> FWIF_OFFSET_STREAM_ID) & \
+ FWIF_BIT_MASK(FWIF_NUMBITS_STREAM_ID))
+/* Core id from transaction id. */
+#define GET_CORE_ID(transaction_id) \
+ (((transaction_id) >> FWIF_OFFSET_CORE_ID) & \
+ FWIF_BIT_MASK(FWIF_NUMBITS_CORE_ID))
+
+/* Picture id (stream) for transaction id. */
+#define SET_STREAM_PICTURE_ID(str_pic_id) \
+ (((str_pic_id) & FWIF_BIT_MASK(FWIF_NUMBITS_STREAM_PICTURE_ID)) << \
+ FWIF_OFFSET_STREAM_PICTURE_ID)
+/* Picture id (core) for transaction id. */
+#define SET_CORE_PICTURE_ID(core_pic_id) \
+ (((core_pic_id) % (1 << FWIF_NUMBITS_CORE_PICTURE_ID)) << \
+ FWIF_OFFSET_CORE_PICTURE_ID)
+/* Stream id for transaction id. */
+#define SET_STREAM_ID(stream_id) \
+ (((stream_id) & FWIF_BIT_MASK(FWIF_NUMBITS_STREAM_ID)) << \
+ FWIF_OFFSET_STREAM_ID)
+/* Core id for transaction id. */
+#define SET_CORE_ID(core_id) \
+ (((core_id) & FWIF_BIT_MASK(FWIF_NUMBITS_CORE_ID)) << \
+ FWIF_OFFSET_CORE_ID)
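+
+/*
+ * Informative example of the transaction ID layout built from the macros
+ * above (all values are made up): core 1, stream 3, core picture 2 and
+ * stream picture 0x0010 combine as
+ *
+ *   id = SET_CORE_ID(1) | SET_STREAM_ID(3) |
+ *        SET_CORE_PICTURE_ID(2) | SET_STREAM_PICTURE_ID(0x0010);
+ *
+ * giving id == 0x10320010, so that GET_CORE_ID(id) == 1,
+ * GET_STREAM_ID(id) == 3, GET_CORE_PICTURE_ID(id) == 2 and
+ * GET_STREAM_PICTURE_ID(id) == 0x0010.
+ */
+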
+/* flag checking */
+#define FLAG_MASK(_flagname_) ((1 << _flagname_ ## _SHIFT))
+#define FLAG_IS_SET(_flagsword_, _flagname_) \
+ (((_flagsword_) & FLAG_MASK(_flagname_)) ? TRUE : FALSE)
+
+/* This type defines the parser component types */
+enum vdecfw_codectype {
+ VDECFW_CODEC_H264 = 0, /* H.264, AVC, MVC */
+ VDECFW_CODEC_MPEG4, /* MPEG4, H.263, DivX, Sorenson */
+ VDECFW_CODEC_VP8, /* VP8 */
+
+ VDECFW_CODEC_VC1, /* VC1 (includes WMV9) */
+ VDECFW_CODEC_MPEG2, /* MPEG2 */
+
+ VDECFW_CODEC_JPEG, /* JPEG */
+
+ VDECFW_CODEC_VP6, /* VP6 */
+ VDECFW_CODEC_AVS, /* AVS */
+ VDECFW_CODEC_RV, /* RV30, RV40 */
+
+ VDECFW_CODEC_HEVC, /* HEVC/H265 */
+
+ VDECFW_CODEC_VP9, /* VP9 */
+
+ VDECFW_CODEC_MAX, /* End Marker */
+
+ VDEC_CODEC_NONE = -1, /* No codec */
+ VDEC_CODEC_FORCE32BITS = 0x7FFFFFFFU
+};
+
+/* This type defines the FW parser mode - SCP, size delimited, etc. */
+enum vdecfw_parsermode {
+ /* Every NAL is expected to have SCP */
+ VDECFW_SCP_ONLY = 0,
+	/* Every NAL is expected to be size delimited with field size 4 */
+	VDECFW_SIZE_DELIMITED_4_ONLY,
+	/* Every NAL is expected to be size delimited with field size 2 */
+	VDECFW_SIZE_DELIMITED_2_ONLY,
+	/* Every NAL is expected to be size delimited with field size 1 */
+ VDECFW_SIZE_DELIMITED_1_ONLY,
+ /* Size of NAL is provided in the picture header */
+ VDECFW_SIZE_SIDEBAND,
+ /* Unit is a skipped picture with no data to process */
+ VDECFW_SKIPPED_PICTURE,
+ VDECFW_SKIPPED_FORCE32BITS = 0x7FFFFFFFU
+};
+
+/*
+ * This enum defines values of ENTDEC_BE_MODE field of VEC_ENTDEC_BE_CONTROL
+ * register and ENTDEC_FE_MODE field of VEC_ENTDEC_FE_CONTROL register.
+ */
+enum vdecfw_msvdxentdecmode {
+ /* JPEG */
+ VDECFW_ENTDEC_MODE_JPEG = 0x0,
+ /* H264 (MPEG4/AVC) */
+ VDECFW_ENTDEC_MODE_H264 = 0x1,
+ /* VC1 */
+ VDECFW_ENTDEC_MODE_VC1 = 0x2,
+ /* MPEG2 */
+ VDECFW_ENTDEC_MODE_MPEG2 = 0x3,
+ /* MPEG4 */
+ VDECFW_ENTDEC_MODE_MPEG4 = 0x4,
+ /* AVS */
+ VDECFW_ENTDEC_MODE_AVS = 0x5,
+ /* WMV9 */
+ VDECFW_ENTDEC_MODE_WMV9 = 0x6,
+ /* MPEG1 */
+ VDECFW_ENTDEC_MODE_MPEG1 = 0x7,
+ /* RealVideo8, with ENTDEC_[BE|FE]_EXTENDED_MODE bit set */
+ VDECFW_ENTDEC_MODE_EXT_REAL8 = 0x0,
+ /* RealVideo9, with ENTDEC_[BE|FE]_EXTENDED_MODE bit set */
+ VDECFW_ENTDEC_MODE_EXT_REAL9 = 0x1,
+ /* VP6, with ENTDEC_[BE|FE]_EXTENDED_MODE bit set */
+ VDECFW_ENTDEC_MODE_EXT_VP6 = 0x2,
+ /* VP8, with ENTDEC_[BE|FE]_EXTENDED_MODE bit set */
+ VDECFW_ENTDEC_MODE_EXT_VP8 = 0x3,
+ /* SVC, with ENTDEC_[BE|FE]_EXTENDED_MODE bit set */
+ VDECFW_ENTDEC_MODE_EXT_SVC = 0x4,
+ VDECFW_ENTDEC_MODE_FORCE32BITS = 0x7FFFFFFFU
+};
+
+/*
+ * This describes the Firmware Parser checkpoints in VEC Local RAM.
+ * Each checkpoint is updated with the TransactionID of the picture as it passes
+ * that point in its decode. Together they describe the current position of
+ * pictures in the VXD/Firmware pipeline.
+ *
+ * Numbers indicate point in the "VDEC Firmware Component Timing" diagram.
+ */
+enum vdecfw_progresscheckpoint {
+ /* Decode message has been read */
+ VDECFW_CHECKPOINT_PICTURE_STARTED = 1,
+ /* Firmware has been loaded and bitstream DMA started */
+ VDECFW_CHECKPOINT_FIRMWARE_READY = 2,
+ /* Picture management operations have completed */
+ VDECFW_CHECKPOINT_PICMAN_COMPLETE = 3,
+ /* Firmware context for this picture has been saved */
+ VDECFW_CHECKPOINT_FIRMWARE_SAVED = 4,
+ /*
+ * 1st Picture/Slice header has been read,
+ * registers written and Entdec started
+ */
+ VDECFW_CHECKPOINT_ENTDEC_STARTED = 5,
+ /* 1st Slice has been completed by Entdec */
+ VDECFW_CHECKPOINT_FE_1SLICE_DONE = 6,
+ /* Parsing of picture has completed on FE */
+ VDECFW_CHECKPOINT_FE_PARSE_DONE = 7,
+ /* Picture end code has been read and picture closed */
+ VDECFW_CHECKPOINT_FE_PICTURE_COMPLETE = 8,
+ /* Picture has started decoding on VXD Backend */
+ VDECFW_CHECKPOINT_BE_PICTURE_STARTED = 9,
+ /* 1st Slice has completed on VXD Backend */
+ VDECFW_CHECKPOINT_BE_1SLICE_DONE = 10,
+ /* Picture decode has completed and done message sent to the Host */
+ VDECFW_CHECKPOINT_BE_PICTURE_COMPLETE = 11,
+#ifndef FW_STACK_USAGE_TRACKING
+ /* General purpose check point 1 */
+ VDECFW_CHECKPOINT_AUX1 = 12,
+ /* General purpose check point 2 */
+ VDECFW_CHECKPOINT_AUX2 = 13,
+ /* General purpose check point 3 */
+ VDECFW_CHECKPOINT_AUX3 = 14,
+ /* General purpose check point 4 */
+ VDECFW_CHECKPOINT_AUX4 = 15,
+#endif /* ndef FW_STACK_USAGE_TRACKING */
+ VDECFW_CHECKPOINT_MAX,
+ /*
+ * Indicate which checkpoints mark the start and end of each
+ * group (FW, FE and BE).
+ * The start and end values should be updated if new checkpoints are
+ * added before the current start or after the current end of any group.
+ */
+ VDECFW_CHECKPOINT_FW_START = VDECFW_CHECKPOINT_PICTURE_STARTED,
+ VDECFW_CHECKPOINT_FW_END = VDECFW_CHECKPOINT_FIRMWARE_SAVED,
+ VDECFW_CHECKPOINT_FE_START = VDECFW_CHECKPOINT_ENTDEC_STARTED,
+ VDECFW_CHECKPOINT_FE_END = VDECFW_CHECKPOINT_FE_PICTURE_COMPLETE,
+ VDECFW_CHECKPOINT_BE_START = VDECFW_CHECKPOINT_BE_PICTURE_STARTED,
+ VDECFW_CHECKPOINT_BE_END = VDECFW_CHECKPOINT_BE_PICTURE_COMPLETE,
+ VDECFW_CHECKPOINT_FORCE32BITS = 0x7FFFFFFFU
+};
+
+/* Number of auxiliary firmware checkpoints. */
+#define VDECFW_CHECKPOINT_AUX_COUNT 4
+/* This describes the action currently being done by the Firmware. */
+enum vdecfw_firmwareaction {
+ VDECFW_FWACT_IDLE = 1, /* Firmware is currently doing nothing */
+ VDECFW_FWACT_BASE_LOADING_PSR, /* Loading parser context */
+ VDECFW_FWACT_BASE_SAVING_PSR, /* Saving parser context */
+ VDECFW_FWACT_BASE_LOADING_BEMOD, /* Loading Backend module */
+ VDECFW_FWACT_BASE_LOADING_FEMOD, /* Loading Frontend module */
+ VDECFW_FWACT_PARSER_SLICE, /* Parser active: parsing slice */
+ VDECFW_FWACT_PARSER_PM, /* Parser active: picture management */
+ VDECFE_FWACT_BEMOD_ACTIVE, /* Backend module active */
+ VDECFE_FWACT_FEMOD_ACTIVE, /* Frontend module active */
+ VDECFW_FWACT_MAX,
+ VDECFW_FWACT_FORCE32BITS = 0x7FFFFFFFU
+};
+
+/*
+ * This describes the FE_ERR flags word in the VDECFW_MSGID_PIC_DECODED message
+ */
+enum vdecfw_msgflagdecodedfeerror {
+ /* Front-end hardware watchdog timeout (FE_WDT_CM0) */
+ VDECFW_MSGFLAG_DECODED_FEERROR_HWWDT_SHIFT = 0,
+ /* Front-end entdec error (VEC_ERROR_DETECTED_ENTDEC) */
+ VDECFW_MSGFLAG_DECODED_FEERROR_ENTDECERROR_SHIFT,
+ /* Shift-register error (VEC_ERROR_DETECTED_SR) */
+ VDECFW_MSGFLAG_DECODED_FEERROR_SRERROR_SHIFT,
+	/* For cases when a B frame comes after an I frame without a P frame. */
+ VDECFW_MSGFLAG_DECODED_MISSING_REFERENCES_SHIFT,
+ /* MMCO operation failed. */
+ VDECFW_MSGFLAG_DECODED_MMCO_ERROR_SHIFT,
+ /* Back-end WDT timeout */
+ VDECFW_MSGFLAG_DECODED_BEERROR_HWWDT_SHIFT,
+ /* Some macroblocks were dropped */
+ VDECFW_MSGFLAG_DECODED_MBS_DROPPED_ERROR_SHIFT,
+ VDECFW_MSGFLAG_DECODED_FEERROR_MAX,
+ VDECFW_MSGFLAG_DECODED_FORCE32BITS = 0x7FFFFFFFU
+};
+
+/*
+ * This type defines the IDs of the messages used to communicate with the
+ * Firmware.
+ *
+ * The Firmware has 3 message buffers, each buffer uses a different set of IDs.
+ * The buffers are:
+ * Host -> FW -Control messages(High Priority: processed in interrupt context)
+ * Host -> FW -Decode commands and associated information
+ * (Normal Priority: processed in baseloop)
+ * FW -> Host -Completion message
+ */
+enum vdecfw_message_id {
+ /* Control Messages */
+ /*
+ * Host -> FW Padding message
+ * Sent to optionally pad the message buffer
+ */
+ VDECFW_MSGID_BASE_PADDING = 0x01,
+ /*
+	 * Host -> FW Initialisation message. Initialisation should be
+	 * sent *immediately* after loading the base component,
+	 * i.e. while the FW is idle
+ */
+ VDECFW_MSGID_FIRMWARE_INIT,
+ /*
+ * Host -> FW Configuration message
+	 * Configuration should be set up after loading the base component
+	 * and before decoding the next picture, i.e. while the FW is idle
+ */
+ VDECFW_MSGID_FIRMWARE_CONFIG,
+ /*
+ * Host -> FW Control message
+	 * Firmware control command to have immediate effect
+ * eg. Stop stream, return CRCs, return Performance Data
+ */
+ VDECFW_MSGID_FIRMWARE_CONTROL,
+ VDECFW_MSGID_CONTROL_MAX,
+ /* Decode Commands */
+ /*
+ * Host -> FW Padding message
+ * Sent to optionally pad the message buffer
+ */
+ VDECFW_MSGID_PSR_PADDING = 0x40,
+ /*
+ * Host -> FW Decode message
+ * Describes the picture to decode
+ */
+ VDECFW_MSGID_DECODE_PICTURE,
+ /*
+ * Host -> FW Bitstream buffer information
+ * Information describing a bitstream buffer to DMA to VXD
+ */
+ VDECFW_MSGID_BITSTREAM_BUFFER,
+ /*
+ * Host -> FW Fence message
+ * Generate an interrupt when this is read,
+	 * Generates an interrupt when this is read; the FenceID
+	 * should be written to a location in VLR
+ VDECFW_MSGID_FENCE,
+ /*
+ * Host -> FW Batch message
+ * Contains a pointer to a host memory buffer
+ * containing a batch of decode command FW messages
+ */
+ VDECFW_MSGID_BATCH,
+ VDECFW_MSGID_DECODE_MAX,
+ /* Completion Messages */
+ /*
+ * FW -> Host Padding message
+ * Sent to optionally pad the message buffer
+ */
+ VDECFW_MSGID_BE_PADDING = 0x80,
+ /*
+ * FW -> Host Decoded Picture message
+ * Notification of decoded picture including errors recorded
+ */
+ VDECFW_MSGID_PIC_DECODED,
+ /*
+ * FW -> Host CRC message
+ * Optionally sent with Decoded Picture message, contains VXD CRCs
+ */
+ VDECFW_MSGID_PIC_CRCS,
+ /*
+ * FW -> Host Performance message
+ * Optional timestamps at the decode checkpoints and other information
+ * about the image to assist in measuring performance
+ */
+ VDECFW_MSGID_PIC_PERFORMANCE,
+ /* FW -> Host POST calculation test message */
+ VDECFW_MSGID_PIC_POST_RESP,
+ VDECFW_MSGID_COMPLETION_MAX,
+ VDECFW_MSGID_FORCE32BITS = 0x7FFFFFFFU
+};
+
+#define VDECFW_MSGID_CONTROL_TYPES \
+ (VDECFW_MSGID_CONTROL_MAX - VDECFW_MSGID_BASE_PADDING)
+#define VDECFW_MSGID_DECODE_TYPES \
+ (VDECFW_MSGID_DECODE_MAX - VDECFW_MSGID_PSR_PADDING)
+#define VDECFW_MSGID_COMPLETION_TYPES \
+ (VDECFW_MSGID_COMPLETION_MAX - VDECFW_MSGID_BE_PADDING)
+
+/* This describes the layout of PVDEC Firmware state indicators in Comms RAM. */
+
+/* Maximum number of PVDEC decoding pipes per core supported. */
+#define VDECFW_MAX_DP 3
+
+struct vdecfw_pvdecpipestate {
+ /* TransactionID at each checkpoint */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned int, check_point[VDECFW_CHECKPOINT_MAX]);
+	/* enum vdecfw_firmwareaction (unsigned int used to guarantee size) */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned int, firmware_action);
+ /* Number of FE Slices processed for the last picture in FE */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned int, fe_slices);
+ /* Number of BE Slices processed for the last picture in BE */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned int, be_slices);
+ /*
+	 * Number of FE Slices detected as errored for the last picture
+ * in FE
+ */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned int, fe_errored_slices);
+ /*
+	 * Number of BE Slices detected as errored for the last picture
+ * in BE
+ */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned int, be_errored_slices);
+ /* Number of BE macroblocks dropped for the last picture */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned int, be_mbs_dropped);
+ /* Number of BE macroblocks recovered for the last picture */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned int, be_mbs_recovered);
+ /* Number of FE macroblocks processed for the last picture in FE */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned int, last_fe_mb_xy);
+ /* Number of BE macroblocks processed for the last picture in BE */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned int, last_be_mb_xy);
+	/* enum vdecfw_codectype - codec currently loaded */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char, curr_codec);
+ /* TRUE if this pipe is available for processing */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ int, pipe_present);
+};
+
+#ifdef FW_STACK_USAGE_TRACKING
+/* Stack usage info array size. */
+#define VDECFW_STACK_INFO_SIZE (VDECFW_MAX_DP * VDECFW_CHECKPOINT_AUX_COUNT)
+#endif /* FW_STACK_USAGE_TRACKING */
+struct vdecfw_pvdecfirmwarestate {
+ /*
+ * Indicates generic progress taken by firmware
+ * (must be the first item)
+ */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT, unsigned int, fwstep);
+ /* Pipe state array. */
+ struct vdecfw_pvdecpipestate pipestate[VDECFW_MAX_DP];
+#ifdef FW_STACK_USAGE_TRACKING
+ /* Stack usage info array. */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT, unsigned int,
+ stackinfo[VDECFW_STACK_INFO_SIZE]);
+#endif /* FW_STACK_USAGE_TRACKING */
+};
+
+/*
+ * This describes the flags word in the display_flags field
+ * of struct vdecfw_buffer_control
+ */
+enum vdecfw_bufflagdisplay {
+ /* TID has been flushed with a "no display" indication */
+ VDECFW_BUFFLAG_DISPLAY_NODISPLAY_SHIFT = 0,
+ /* TID contains an unpaired field */
+ VDECFW_BUFFLAG_DISPLAY_SINGLE_FIELD_SHIFT = 1,
+ /* TID contains field coded picture(s) - single field or pair */
+ VDECFW_BUFFLAG_DISPLAY_FIELD_CODED_SHIFT = 2,
+ /* if TID contains a single field, this defines which field that is */
+ VDECFW_BUFFLAG_DISPLAY_BOTTOM_FIELD_SHIFT = 3,
+ /* if TID contains a frame with two interlaced fields */
+ VDECFW_BUFFLAG_DISPLAY_INTERLACED_FIELDS_SHIFT = 4,
+ /* End marker */
+ VDECFW_BUFFLAG_DISPLAY_MAX = 8,
+ VDECFW_BUFFLAG_DISPLAY_FORCE32BITS = 0x7FFFFFFFU
+};
+
+/*
+ * This describes the flags in the picmgmt_flags field of
+ * struct vdecfw_buffer_control
+ */
+enum vdecfw_picmgmflags {
+ /* Picture management for this picture successfully executed */
+ VDECFW_PICMGMTFLAG_PICTURE_EXECUTED_SHIFT = 0,
+ /*
+ * Picture management for the first field of this picture
+ * successfully executed
+ */
+ VDECFW_PICMGMTFLAG_1ST_FIELD_EXECUTED_SHIFT = 0,
+ /*
+ * Picture management for the second field of this picture
+ * successfully executed
+ */
+ VDECFW_PICMGMTFLAG_2ND_FIELD_EXECUTED_SHIFT = 1,
+ VDECFW_PICMGMTFLAG_FORCE32BITS = 0x7FFFFFFFU
+};
+
+/*
+ * Macro for checking if picture management was successfully executed for
+ * field coded picture
+ */
+#define VDECFW_PICMGMT_FIELD_CODED_PICTURE_EXECUTED(_flagsword_) \
+	((FLAG_IS_SET((_flagsword_), \
+	VDECFW_PICMGMTFLAG_1ST_FIELD_EXECUTED) && \
+	FLAG_IS_SET((_flagsword_), \
+	VDECFW_PICMGMTFLAG_2ND_FIELD_EXECUTED)) ? \
+ TRUE : FALSE)
+/* This describes the REAL related data for the current picture. */
+struct vdecfw_real_data {
+ /* Picture width */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned int, width);
+ /* Picture height */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned int, height);
+ /* Scaled Picture Width */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned int, scaled_width);
+ /* Scaled Picture Height */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned int, scaled_height);
+ /* Timestamp parsed in the firmware */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned int, timestamp);
+};
+
+/* This describes the HEVC related data for the current picture. */
+struct vdecfw_hevcdata {
+ /* POC */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT, int, pic_order_count);
+};
+
+/*
+ * This describes the buffer control structure that is used by the firmware to
+ * signal to the Host to control the display and release of buffers.
+ */
+struct vdecfw_buffer_control {
+ /*
+ * List of TransactionIDs indicating buffers ready to display,
+ * in display order
+ */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned int, display_list[VDECFW_MAX_NUM_PICTURES]);
+ /*
+ * List of TransactionIDs indicating buffers that are no longer
+ * required for reference
+ */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned int,
+ release_list[VDECFW_MAX_NUM_PICTURES +
+ VDECFW_MAX_NUM_VIEWS]);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned short,
+ display_view_ids[VDECFW_MAX_NUM_PICTURES]);
+ /* List of flags for each TID in the DisplayList */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char, display_flags[VDECFW_MAX_NUM_PICTURES]);
+	/* Number of TransactionIDs in display_list */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned int, display_list_length);
+	/* Number of TransactionIDs in release_list */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned int, release_list_length);
+ union {
+ struct vdecfw_real_data real_data;
+ struct vdecfw_hevcdata hevc_data;
+ };
+ /*
+ * Refers to the picture decoded with the current transaction ID
+ * (not affected by merge with field of previous transaction ID)
+ */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ enum img_buffer_type, dec_pict_type);
+ /* Set if the current field is a pair to the previous field */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char, second_field_of_pair);
+ /*
+	 * Set if, for a pair, the top field was decoded first, or
+	 * if only the top field is present
+ */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char, top_field_first);
+ /* Top field is first to be displayed */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char, out_top_field_first);
+ /* Picture management flags for this picture */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char, picmgmt_flags);
+ /*
+ * List of TransactionIDs indicating buffers used as references
+ * when decoding current picture
+ */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned int, ref_list[VDECFW_MAX_NUM_PICTURES]);
+};
+
+/*
+ * This describes an image buffer for one picture supplied to
+ * the firmware by the host
+ */
+struct vdecfw_image_buffer {
+ /* Virtual Address of each plane */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned int, byte_offset[VDECFW_PLANE_MAX]);
+};
+
+/* This type defines the picture commands that are prepared for the firmware. */
+enum vdecfw_picture_cmds {
+ /* Reconstructed buffer */
+ /* DISPLAY_PICTURE_SIZE */
+ VDECFW_CMD_DISPLAY_PICTURE,
+ /* CODED_PICTURE_SIZE */
+ VDECFW_CMD_CODED_PICTURE,
+ /* OPERATING_MODE */
+ VDECFW_CMD_OPERATING_MODE,
+ /* LUMA_RECONSTRUCTED_PICTURE_BASE_ADDRESSES */
+ VDECFW_CMD_LUMA_RECONSTRUCTED_PICTURE_BASE_ADDRESS,
+ /* CHROMA_RECONSTRUCTED_PICTURE_BASE_ADDRESSES */
+ VDECFW_CMD_CHROMA_RECONSTRUCTED_PICTURE_BASE_ADDRESS,
+ /* CHROMA2_RECONSTRUCTED_PICTURE_BASE_ADDRESSES */
+ VDECFW_CMD_CHROMA2_RECONSTRUCTED_PICTURE_BASE_ADDRESS,
+ /* VC1_LUMA_RANGE_MAPPING_BASE_ADDRESS */
+ VDECFW_CMD_LUMA_ALTERNATIVE_PICTURE_BASE_ADDRESS,
+ /* VC1_CHROMA_RANGE_MAPPING_BASE_ADDRESS */
+ VDECFW_CMD_CHROMA_ALTERNATIVE_PICTURE_BASE_ADDRESS,
+ /* CHROMA2_ALTERNATIVE_PICTURE_BASE_ADDRESS */
+ VDECFW_CMD_CHROMA2_ALTERNATIVE_PICTURE_BASE_ADDRESS,
+ /* LUMA_ERROR_PICTURE_BASE_ADDRESSES */
+ VDECFW_CMD_LUMA_ERROR_PICTURE_BASE_ADDRESS,
+ /* CHROMA_ERROR_PICTURE_BASE_ADDRESSES */
+ VDECFW_CMD_CHROMA_ERROR_PICTURE_BASE_ADDRESS,
+ /* AUX_MSB_BUFFER_BASE_ADDRESSES (VC-1 only) */
+ VDECFW_CMD_AUX_MSB_BUFFER,
+ /* INTRA_BUFFER_BASE_ADDRESS (various) */
+ VDECFW_CMD_INTRA_BUFFER_BASE_ADDRESS,
+ /* AUX_LINE_BUFFER_BASE_ADDRESS */
+ VDECFW_CMD_AUX_LINE_BUFFER_BASE_ADDRESS,
+ /* MBFLAGS_BUFFER_BASE_ADDRESSES (VP8 only) */
+ VDECFW_CMD_MBFLAGS_BUFFER_BASE_ADDRESS,
+ /* FIRST_PARTITION_BASE_ADDRESSES (VP8 only) */
+ VDECFW_CMD_FIRST_PARTITION_BUFFER_BASE_ADDRESS,
+ /* CURRENT_PICTURE_BUFFER_BASE_ADDRESSES (VP8 only) */
+ VDECFW_CMD_CURRENT_PICTURE_BUFFER_BASE_ADDRESS,
+ /* SEGMENTID_BUFFER_BASE_ADDRESSES (VP8 only) */
+ VDECFW_CMD_SEGMENTID_BASE_ADDRESS,
+ /* EXT_OP_MODE (H.264 only) */
+ VDECFW_CMD_EXT_OP_MODE,
+ /* MC_CACHE_CONFIGURATION */
+ VDECFW_CMD_MC_CACHE_CONFIGURATION,
+ /* Alternative output buffer (rotation etc.) */
+ /* ALTERNATIVE_OUTPUT_PICTURE_ROTATION */
+ VDECFW_CMD_ALTERNATIVE_OUTPUT_PICTURE_ROTATION,
+ /* EXTENDED_ROW_STRIDE */
+ VDECFW_CMD_EXTENDED_ROW_STRIDE,
+ /* CHROMA_ROW_STRIDE (H.264 only) */
+ VDECFW_CMD_CHROMA_ROW_STRIDE,
+ /* ALTERNATIVE_OUTPUT_CONTROL */
+ VDECFW_CMD_ALTERNATIVE_OUTPUT_CONTROL,
+ /* RPR specific commands */
+ /* RPR_AX_INITIAL */
+ VDECFW_CMD_RPR_AX_INITIAL,
+ /* RPR_AX_INCREMENT */
+ VDECFW_CMD_RPR_AX_INCREMENT,
+ /* RPR_AY_INITIAL */
+ VDECFW_CMD_RPR_AY_INITIAL,
+ /* RPR_AY_INCREMENT */
+ VDECFW_CMD_RPR_AY_INCREMENT,
+ /* RPR_PICTURE_SIZE */
+ VDECFW_CMD_RPR_PICTURE_SIZE,
+ /* Scaling specific params */
+ /* SCALED_DISPLAY_SIZE */
+ VDECFW_CMD_SCALED_DISPLAY_SIZE,
+ /* HORIZONTAL_SCALE_CONTROL */
+ VDECFW_CMD_HORIZONTAL_SCALE_CONTROL,
+ /* SCALE_HORIZONTAL_CHROMA (H.264 only) */
+ VDECFW_CMD_SCALE_HORIZONTAL_CHROMA,
+ /* VERTICAL_SCALE_CONTROL */
+ VDECFW_CMD_VERTICAL_SCALE_CONTROL,
+ /* SCALE_VERTICAL_CHROMA (H.264 only) */
+ VDECFW_CMD_SCALE_VERTICAL_CHROMA,
+ /* HORIZONTAL_LUMA_COEFFICIENTS_0 */
+ VDECFW_CMD_HORIZONTAL_LUMA_COEFFICIENTS_0,
+ /* HORIZONTAL_LUMA_COEFFICIENTS_1 */
+ VDECFW_CMD_HORIZONTAL_LUMA_COEFFICIENTS_1,
+ /* HORIZONTAL_LUMA_COEFFICIENTS_2 */
+ VDECFW_CMD_HORIZONTAL_LUMA_COEFFICIENTS_2,
+ /* HORIZONTAL_LUMA_COEFFICIENTS_3 */
+ VDECFW_CMD_HORIZONTAL_LUMA_COEFFICIENTS_3,
+ /* VERTICAL_LUMA_COEFFICIENTS_0 */
+ VDECFW_CMD_VERTICAL_LUMA_COEFFICIENTS_0,
+ /* VERTICAL_LUMA_COEFFICIENTS_1 */
+ VDECFW_CMD_VERTICAL_LUMA_COEFFICIENTS_1,
+ /* VERTICAL_LUMA_COEFFICIENTS_2 */
+ VDECFW_CMD_VERTICAL_LUMA_COEFFICIENTS_2,
+ /* VERTICAL_LUMA_COEFFICIENTS_3 */
+ VDECFW_CMD_VERTICAL_LUMA_COEFFICIENTS_3,
+ /* HORIZONTAL_CHROMA_COEFFICIENTS_0 */
+ VDECFW_CMD_HORIZONTAL_CHROMA_COEFFICIENTS_0,
+ /* HORIZONTAL_CHROMA_COEFFICIENTS_1 */
+ VDECFW_CMD_HORIZONTAL_CHROMA_COEFFICIENTS_1,
+ /* HORIZONTAL_CHROMA_COEFFICIENTS_2 */
+ VDECFW_CMD_HORIZONTAL_CHROMA_COEFFICIENTS_2,
+ /* HORIZONTAL_CHROMA_COEFFICIENTS_3 */
+ VDECFW_CMD_HORIZONTAL_CHROMA_COEFFICIENTS_3,
+ /* VERTICAL_CHROMA_COEFFICIENTS_0 */
+ VDECFW_CMD_VERTICAL_CHROMA_COEFFICIENTS_0,
+ /* VERTICAL_CHROMA_COEFFICIENTS_1 */
+ VDECFW_CMD_VERTICAL_CHROMA_COEFFICIENTS_1,
+ /* VERTICAL_CHROMA_COEFFICIENTS_2 */
+ VDECFW_CMD_VERTICAL_CHROMA_COEFFICIENTS_2,
+ /* VERTICAL_CHROMA_COEFFICIENTS_3 */
+ VDECFW_CMD_VERTICAL_CHROMA_COEFFICIENTS_3,
+ /* SCALE_OUTPUT_SIZE */
+ VDECFW_CMD_SCALE_OUTPUT_SIZE,
+ /* VDECFW_CMD_INTRA_BUFFER_PLANE_SIZE */
+ VDECFW_CMD_INTRA_BUFFER_PLANE_SIZE,
+ /* VDECFW_CMD_INTRA_BUFFER_SIZE_PER_PIPE */
+ VDECFW_CMD_INTRA_BUFFER_SIZE_PER_PIPE,
+ /* VDECFW_CMD_AUX_LINE_BUFFER_SIZE_PER_PIPE */
+ VDECFW_CMD_AUX_LINE_BUFFER_SIZE_PER_PIPE,
+ VDECFW_SLICE_X_MB_OFFSET,
+ VDECFW_SLICE_Y_MB_OFFSET,
+ VDECFW_SLICE_TYPE,
+ VDECFW_CMD_MAX,
+ VDECFW_CMD_FORCE32BITS = 0x7FFFFFFFU
+};
+
+/* Size of relocation data attached to the vdecfw_transaction message in words */
+#define VDECFW_RELOC_SIZE 125
+
+/* This structure defines the MMU Tile configuration. */
+struct vdecfw_mmu_tile_config {
+ /* MMU_CONTROL2 */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned char, tilig_scheme);
+ /* MMU_TILE */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned int,
+ mmu_tiling[MSVDX_CORE_CR_MMU_TILE_NO_ENTRIES]);
+ /* MMU_TILE_EXT */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned int,
+ mmu_tiling_ext[MSVDX_CORE_CR_MMU_TILE_EXT_NO_ENTRIES]);
+};
+
+/*
+ * This structure contains the transaction attributes to be given to the
+ * firmware
+ * @brief Transaction Attributes
+ */
+struct vdecfw_transaction {
+ /* Unique identifier for the picture (driver-wide). */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned int, transation_id);
+ /* Codec */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ enum vdecfw_codectype, codec);
+ /*
+	 * Flag to indicate that the stream needs to be handled
+ * in secure memory (if available)
+ */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ int, secure_stream);
+ /* Unique identifier for the current stream */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned int, stream_id);
+ /* Dictates to the FW parser how the NALs are delimited */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ enum vdecfw_parsermode, parser_mode);
+ /* Address from which to load the parser context data. */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_PTR_ALIGNMENT,
+ unsigned int, ctx_load_addr);
+ /*
+ * Address to save the parser state data including the updated
+ * "parser context data".
+ */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_PTR_ALIGNMENT,
+ unsigned int, ctx_save_addr);
+ /* Size of the parser context data in bytes. */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned int, ctx_size);
+ /* Address to save the "buffer control" data. */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_PTR_ALIGNMENT,
+ unsigned int, ctrl_save_addr);
+ /* Size of the buffer control data in bytes. */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned int, ctrl_size);
+
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned int, pict_cmds[VDECFW_CMD_MAX]);
+
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned int, pic_width_inmbs);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned int, pic_height_inmbs);
+
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned int, mbparams_base_addr);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned int, mbparams_size_per_plane);
+ /* Address of VLC table data. */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_PTR_ALIGNMENT,
+ unsigned int, vlc_tables_addr);
+ /* Size of the VLC table data in bytes. */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned int, vlc_tables_size);
+ /* Address of VLC index table data. */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_PTR_ALIGNMENT,
+ unsigned int, vlc_index_table_addr);
+ /* Size of the VLC index table data in bytes. */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned int, vlc_index_table_size);
+ /* Address of parser picture header. */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_PTR_ALIGNMENT,
+ unsigned int, psr_hdr_addr);
+ /* Size of the parser picture header in bytes. */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned int, psr_hdr_size);
+ /* Address of Sequence Info in the Host (secure) */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_PTR_ALIGNMENT,
+ unsigned int, sequence_info_source);
+ /* Address of PPS Info in the Host (secure) */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_PTR_ALIGNMENT,
+ unsigned int, pps_info_source);
+ /* Address of Second PPS Info in the Host (secure) */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_PTR_ALIGNMENT,
+ unsigned int, second_pps_info_source);
+ /* MMU Tile config comes down with each transaction. */
+ struct vdecfw_mmu_tile_config mmu_tile_config;
+};
+
+/*
+ * This structure contains the info for extracting a subset of VLC tables
+ * indexed inside the index table.
+ * vlc_table_offset is the offset to the first table inside the index table.
+ * vlc_consecutive_tables indicates the number of consecutive entries (from
+ * vlc_table_offset to vlc_table_offset + vlc_consecutive_tables) which will
+ * be copied.
+ */
+struct vdecfw_vlc_table_info {
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned short, vlc_table_offset);
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned short, vlc_consecutive_tables);
+};
+
+/* This structure defines the RENDEC buffer configuration. */
+struct vdecfw_rendec_config {
+ /* VEC_RENDEC_CONTROL0 */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned int, regvec_rendec_control0);
+ /* VEC_RENDEC_CONTROL1 */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned int, regvec_rendec_control1);
+ /* VEC_RENDEC_BASE_ADDR0 */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned int, rendec_buffer_baseaddr0);
+ /* VEC_RENDEC_BASE_ADDR1 */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned int, rendec_buffer_baseaddr1);
+ /* VEC_RENDEC_BUFFER_SIZE */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned int, regvec_rendec_buffer_size);
+ /* VEC_RENDEC_CONTEXT0 - VEC_RENDEC_CONTEXT5 */
+ IMG_ALIGN_FIELD(VDECFW_SHARE_DEFAULT_ALIGNMENT,
+ unsigned int, rendec_initial_ctx[6]);
+};
+
+#endif /* _VDECFW_H_ */
--
2.17.1
From: Sidraya <[email protected]>
The core component is the entry point for the decoder
stack and manages the decoder state machine.
Signed-off-by: Sunita Nadampalli <[email protected]>
Signed-off-by: Sidraya <[email protected]>
---
MAINTAINERS | 2 +
drivers/staging/media/vxd/decoder/core.c | 3656 ++++++++++++++++++++++
drivers/staging/media/vxd/decoder/core.h | 72 +
3 files changed, 3730 insertions(+)
create mode 100644 drivers/staging/media/vxd/decoder/core.c
create mode 100644 drivers/staging/media/vxd/decoder/core.h
diff --git a/MAINTAINERS b/MAINTAINERS
index 6dadec058ab3..41716f2916d1 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -19569,6 +19569,8 @@ F: drivers/staging/media/vxd/common/work_queue.h
F: drivers/staging/media/vxd/decoder/bspp.c
F: drivers/staging/media/vxd/decoder/bspp.h
F: drivers/staging/media/vxd/decoder/bspp_int.h
+F: drivers/staging/media/vxd/decoder/core.c
+F: drivers/staging/media/vxd/decoder/core.h
F: drivers/staging/media/vxd/decoder/dec_resources.c
F: drivers/staging/media/vxd/decoder/dec_resources.h
F: drivers/staging/media/vxd/decoder/fw_interface.h
diff --git a/drivers/staging/media/vxd/decoder/core.c b/drivers/staging/media/vxd/decoder/core.c
new file mode 100644
index 000000000000..237e7f937751
--- /dev/null
+++ b/drivers/staging/media/vxd/decoder/core.c
@@ -0,0 +1,3656 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * VXD Decoder Core component function implementations
+ *
+ * Copyright (c) Imagination Technologies Ltd.
+ * Copyright (c) 2021 Texas Instruments Incorporated - http://www.ti.com/
+ *
+ * Authors:
+ * Sunita Nadampalli <[email protected]>
+ *
+ * Re-written for upstream
+ * Sidraya Jayagond <[email protected]>
+ * Prashanth Kumar Amai <[email protected]>
+ */
+
+#include "core.h"
+#include "decoder.h"
+#include "img_errors.h"
+#include "img_pixfmts.h"
+#include "img_profiles_levels.h"
+#include "lst.h"
+#include "resource.h"
+#include "rman_api.h"
+#include "vdecdd_utils.h"
+#include "vdec_mmu_wrapper.h"
+#include "vxd_dec.h"
+
+#ifdef HAS_HEVC
+#define SEQ_RES_NEEDED
+#define GENC_BUFF_COUNT 4
+#endif
+
+/*
+ * This enum defines resource availability masks.
+ * @brief Resource Availability
+ */
+enum core_availability {
+ CORE_AVAIL_PICTBUF = (1 << 0),
+ CORE_AVAIL_PICTRES = (1 << 1),
+ CORE_AVAIL_CORE = (1 << 2),
+ CORE_AVAIL_MAX,
+ CORE_AVAIL_FORCE32BITS = 0x7FFFFFFFU
+};
+
+struct core_mbparam_alloc_info {
+ unsigned char alloc_mbparam_bufs;
+ unsigned int mbparam_size;
+ unsigned int overalloc_mbnum;
+};
+
+static struct core_mbparam_alloc_info mbparam_allocinfo[VDEC_STD_MAX - 1] = {
+ /* AllocFlag MBParamSize Overalloc */
+ /* MPEG2 */ { TRUE, 0xc8, 0 },
+ /* MPEG4 */ { TRUE, 0xc8, 0 },
+ /* H263 */ { TRUE, 0xc8, 0 },
+ /* H264 */ { TRUE, 0x80, 0 },
+ /* VC1 */ { TRUE, 0x80, (4096 * 2) / 0x80 },
+ /* AVS */ { TRUE, 0x80, 0 },
+ /* REAL */ { TRUE, 0x80, 0 },
+ /* JPEG */ { FALSE, 0x00, 0 },
+ /* VP6 */ { TRUE, 0x80, 0 },
+ /* VP8 */ { TRUE, 0x80, 0 },
+ /* SORENSON */ { TRUE, 0xc8, 0 },
+ /* HEVC */ { TRUE, 0x40, 0 },
+};
+
+struct vxdio_mempool {
+ unsigned int mem_heap_id;
+ enum sys_emem_attrib mem_attrib;
+};
+
+static unsigned int global_avail_slots;
+static unsigned char is_core_initialized;
+
+/*
+ * This structure contains the core Context.
+ * @brief core Context
+ */
+struct core_context {
+ struct vdecdd_dddev_context *dev_ctx;
+ /* List of stream context structures */
+ struct lst_t core_str_ctx;
+ vxd_cb vxd_str_processed_cb;
+};
+
+/* Global Core Context */
+static struct core_context *global_core_ctx;
+
+/*
+ * This structure contains the picture buffer size info.
+ * @brief Picture Resource Info
+ */
+struct core_pict_bufsize_info {
+ unsigned int mbparams_bufsize;
+
+#ifdef HAS_HEVC
+ union {
+ struct hevc_bufsize_pict {
+ /* Size of GENC fragment buffer for HEVC */
+ unsigned int genc_fragment_bufsize;
+ } hevc_bufsize_pict;
+ };
+#endif
+};
+
+/*
+ * This structure contains the sequence resource info.
+ * @brief Sequence Resource Info
+ */
+struct core_seq_resinfo {
+ union {
+#ifdef HAS_HEVC
+ struct hevc_bufsize_seqres {
+			unsigned int genc_bufsize; /* Size of GENC buffers for HEVC */
+			unsigned int intra_bufsize; /* Size of the intra buffer for HEVC */
+			unsigned int aux_bufsize; /* Size of the auxiliary buffer for HEVC */
+ } hevc_bufsize_seqres;
+#endif
+
+#ifndef SEQ_RES_NEEDED
+ unsigned int dummy;
+#endif
+ };
+};
+
+struct core_pict_resinfo {
+ unsigned int pict_res_num;
+ struct core_pict_bufsize_info size_info;
+ unsigned char is_valid;
+};
+
+/*
+ * This structure contains the standard specific part of plant context.
+ * @brief Standard Specific Context
+ */
+struct core_std_spec_context {
+ union {
+#ifdef HAS_HEVC
+ struct hevc_ctx {
+ /* Counts genc buffer allocations */
+ unsigned short genc_id_gen;
+ } hevc_ctx;
+#else
+ unsigned int dummy;
+#endif
+ };
+};
+
+struct core_stream_context;
+
+struct core_std_spec_operations {
+ /* Allocates standard specific picture buffers. */
+ int (*alloc_picture_buffers)(struct core_stream_context *core_strctx,
+ struct vdecdd_pict_resint *pict_resint,
+ struct vxdio_mempool mem_pool,
+ struct core_pict_resinfo *pict_res_info);
+
+ /* Frees standard specific picture buffers. */
+ int (*free_picture_resource)(struct core_stream_context *core_strctx,
+ struct vdecdd_pict_resint *pic_res_int);
+
+ /* Allocates standard specific sequence buffers. */
+ int (*alloc_sequence_buffers)(struct core_stream_context *core_strctx,
+ struct vdecdd_seq_resint *seq_res_int,
+ struct vxdio_mempool mem_pool,
+ struct core_seq_resinfo *seq_res_info);
+
+ /* Frees standard specific sequence buffers. */
+ int (*free_sequence_resource)(struct core_stream_context *core_strctx,
+ struct vdecdd_seq_resint *seq_res_int);
+
+ /* Returns buffer's sizes (common and standard specific). */
+ int (*bufs_get_size)(struct core_stream_context *core_strctx,
+ const struct vdec_comsequ_hdrinfo *seq_hdrinfo,
+ struct vdec_pict_size *max_pict_size,
+ struct core_pict_bufsize_info *size_info,
+ struct core_seq_resinfo *seq_resinfo,
+ unsigned char *resource_needed);
+
+ /* Checks whether resource is still suitable. */
+ unsigned char (*is_stream_resource_suitable)(struct core_pict_resinfo *pict_resinfo,
+ struct core_pict_resinfo *old_pict_resinfo,
+ struct core_seq_resinfo *seq_resinfo,
+ struct core_seq_resinfo *old_seq_resinfo);
+};
+
+/*
+ * This structure contains the core Stream Context.
+ * @brief core Stream Context
+ */
+struct core_stream_context {
+ void **link; /* to be part of single linked list */
+ struct core_context *core_ctx;
+ struct vdecdd_ddstr_ctx *dd_str_ctx;
+ struct vxd_dec_ctx *vxd_dec_context;
+
+ /* list of picture buffers */
+ struct lst_t pict_buf_list;
+
+ /* List of picture resources allocated for this stream */
+ struct lst_t pict_res_list;
+ struct lst_t old_pict_res_list;
+
+ struct lst_t aux_pict_res_list;
+
+#ifdef SEQ_RES_NEEDED
+ /* List of active sequence resources that are allocated for this stream. */
+ struct lst_t seq_res_list;
+ /*
+ * List of sequence resources that are allocated for this stream but no
+ * longer suitable for new sequence(s).
+ */
+ struct lst_t old_seq_res_list;
+#endif
+
+ /* List of sequence header information */
+ struct lst_t seq_hdr_list;
+ /* Queue of stream units to be processed */
+ struct lst_t str_unit_list;
+
+ struct vdec_comsequ_hdrinfo comseq_hdr_info;
+ unsigned char opcfg_set;
+ /* Picture buffer layout to use for decoding. */
+ struct vdecdd_ddpict_buf disp_pict_buf;
+ struct vdec_str_opconfig op_cfg;
+ unsigned char new_seq;
+ unsigned char new_op_cfg;
+ unsigned char no_prev_refs_used;
+ unsigned int avail_slots;
+ unsigned int res_avail;
+ unsigned char stopped;
+ struct core_pict_resinfo pict_resinfo;
+ /* Current sequence resource info. */
+ struct core_seq_resinfo seq_resinfo;
+
+ /* Reconstructed picture buffer */
+ struct vdecdd_ddpict_buf recon_pictbuf;
+ /* Coded picture size of last reconfiguration */
+ struct vdec_pict_size coded_pict_size;
+ /* Standard specific operations. */
+ struct core_std_spec_operations *std_spec_ops;
+ /* Standard specific context. */
+ struct core_std_spec_context std_spec_context;
+};
+
+#ifdef HAS_HEVC
+static int core_free_hevc_picture_resource(struct core_stream_context *core_strctx,
+ struct vdecdd_pict_resint *pic_res_int);
+
+static int core_free_hevc_sequence_resource(struct core_stream_context *core_strctx,
+ struct vdecdd_seq_resint *seq_res_int);
+
+static int core_hevc_bufs_get_size(struct core_stream_context *core_str_ctx,
+ const struct vdec_comsequ_hdrinfo *seq_hdr_info,
+ struct vdec_pict_size *max_pict_size,
+ struct core_pict_bufsize_info *size_info,
+ struct core_seq_resinfo *seq_res_info,
+ unsigned char *resource_needed);
+
+static unsigned char core_is_hevc_stream_resource_suitable
+ (struct core_pict_resinfo *pict_res_info,
+ struct core_pict_resinfo *old_pict_res_info,
+ struct core_seq_resinfo *seq_res_info,
+ struct core_seq_resinfo *old_seq_res_info);
+
+static int core_alloc_hevc_specific_seq_buffers(struct core_stream_context *core_strctx,
+ struct vdecdd_seq_resint *seq_res_int,
+ struct vxdio_mempool mempool,
+ struct core_seq_resinfo *seq_res_info);
+
+static int core_alloc_hevc_specific_pict_buffers(struct core_stream_context *core_strctx,
+ struct vdecdd_pict_resint *pict_res_int,
+ struct vxdio_mempool mempool,
+ struct core_pict_resinfo *pict_res_info);
+#endif
+
+static int
+core_common_bufs_getsize(struct core_stream_context *core_str_ctx,
+ const struct vdec_comsequ_hdrinfo *comseq_hdrinfo,
+ struct vdec_pict_size *max_pict_size,
+ struct core_pict_bufsize_info *size_info,
+ struct core_seq_resinfo *seq_res_info, unsigned char *res_needed);
+
+static struct core_std_spec_operations std_specific_ops[VDEC_STD_MAX - 1] = {
+ /* AllocPicture FreePicture AllocSeq FreeSeq BufsGetSize IsStreamResourceSuitable */
+ /* MPEG2 */ { NULL, NULL, NULL, NULL, NULL, NULL},
+ /* MPEG4 */ { NULL, NULL, NULL, NULL, NULL, NULL},
+ /* H263 */ { NULL, NULL, NULL, NULL, NULL, NULL},
+
+ /* H264 */ { NULL, NULL, NULL, NULL, core_common_bufs_getsize, NULL},
+
+ /* VC1 */ { NULL, NULL, NULL, NULL, NULL, NULL},
+
+ /* AVS */ { NULL, NULL, NULL, NULL, NULL, NULL},
+
+ /* REAL */ { NULL, NULL, NULL, NULL, NULL, NULL},
+
+ /* JPEG */ { NULL, NULL, NULL, NULL, NULL, NULL},
+
+ /* VP6 */ { NULL, NULL, NULL, NULL, NULL, NULL},
+
+ /* VP8 */ { NULL, NULL, NULL, NULL, NULL, NULL},
+
+ /* SORENSON */ { NULL, NULL, NULL, NULL, NULL, NULL},
+#ifdef HAS_HEVC
+ /* HEVC */ { core_alloc_hevc_specific_pict_buffers,
+ core_free_hevc_picture_resource,
+ core_alloc_hevc_specific_seq_buffers,
+ core_free_hevc_sequence_resource,
+ core_hevc_bufs_get_size,
+ core_is_hevc_stream_resource_suitable},
+#else
+ /* HEVC */ { NULL, NULL, NULL, NULL, NULL, NULL},
+#endif
+};
+
+#ifdef ERROR_CONCEALMENT
+/*
+ * This structure contains the Error Recovery Frame Store info.
+ * @brief Error Recovery Frame Store Info
+ */
+struct core_err_recovery_frame_info {
+ /* Flag to indicate if Error Recovery Frame Store is enabled for standard. */
+ unsigned char enabled;
+ /* Limitation for maximum frame size based on dimensions. */
+ unsigned int max_size;
+};
+
+static struct core_err_recovery_frame_info err_recovery_frame_info[VDEC_STD_MAX - 1] = {
+ /* enabled max_frame_size */
+ /* MPEG2 */ { TRUE, ~0 },
+ /* MPEG4 */ { TRUE, ~0 },
+ /* H263 */ { FALSE, 0 },
+ /* H264 */ { TRUE, ~0 },
+ /* VC1 */ { FALSE, 0 },
+ /* AVS */ { FALSE, 0 },
+ /* REAL */ { FALSE, 0 },
+ /* JPEG */ { FALSE, 0 },
+ /* VP6 */ { FALSE, 0 },
+ /* VP8 */ { FALSE, 0 },
+ /* SORENSON */ { FALSE, 0 },
+ /* HEVC */ { TRUE, ~0 },
+};
+#endif
+
+static void core_fw_response_cb(int res_str_id, unsigned int *msg, unsigned int msg_size,
+ unsigned int msg_flags)
+{
+ struct core_stream_context *core_str_ctx;
+ int ret;
+
+ /* extract core_str_ctx and dec_core_ctx from res_str_id */
+ VDEC_ASSERT(res_str_id);
+
+ /* Get access to stream context.. */
+ ret = rman_get_resource(res_str_id, VDECDD_STREAM_TYPE_ID, (void **)&core_str_ctx, NULL);
+ VDEC_ASSERT(ret == IMG_SUCCESS);
+	if (ret != IMG_SUCCESS) {
+		pr_err("could not extract core_str_context\n");
+		return;
+	}
+
+ ret = decoder_service_firmware_response(core_str_ctx->dd_str_ctx->dec_ctx,
+ msg, msg_size, msg_flags);
+ VDEC_ASSERT(ret == IMG_SUCCESS);
+ if (ret != IMG_SUCCESS)
+ pr_err("decoder_service_firmware_response failed\n");
+}
+
+/*
+ * @Function core_initialise
+ */
+int core_initialise(void *dev_handle, unsigned int int_heap_id, void *vxd_cb_ptr)
+{
+ struct vdecdd_dd_devconfig dev_cfg_local;
+ unsigned int num_pipes_local;
+ int ret;
+
+ if (is_core_initialized)
+ return IMG_ERROR_INVALID_PARAMETERS;
+
+ is_core_initialized = TRUE;
+
+ global_core_ctx = kzalloc(sizeof(*global_core_ctx), GFP_KERNEL);
+ if (!global_core_ctx) {
+ is_core_initialized = FALSE;
+ return IMG_ERROR_OUT_OF_MEMORY;
+ }
+
+ global_core_ctx->dev_ctx = kzalloc(sizeof(*global_core_ctx->dev_ctx), GFP_KERNEL);
+ if (!global_core_ctx->dev_ctx) {
+ kfree(global_core_ctx);
+ global_core_ctx = NULL;
+ is_core_initialized = FALSE;
+ return IMG_ERROR_OUT_OF_MEMORY;
+ }
+
+ /* Initialise device context. */
+	global_core_ctx->dev_ctx->dev_handle = dev_handle; /* V4L2 dev handle */
+ global_core_ctx->vxd_str_processed_cb = (vxd_cb)vxd_cb_ptr;
+
+ ret = decoder_initialise(global_core_ctx->dev_ctx, int_heap_id,
+ &dev_cfg_local, &num_pipes_local,
+ &global_core_ctx->dev_ctx->dec_context);
+ if (ret != IMG_SUCCESS)
+ goto decoder_init_error;
+
+ global_core_ctx->dev_ctx->internal_heap_id = int_heap_id;
+
+#ifdef DEBUG_DECODER_DRIVER
+ /* Dump codec config */
+ pr_info("Decode slots/core: %d", dev_cfg_local.num_slots_per_pipe);
+#endif
+
+ lst_init(&global_core_ctx->core_str_ctx);
+
+ /* Ensure the resource manager is initialised.. */
+ ret = rman_initialise();
+ VDEC_ASSERT(ret == IMG_SUCCESS);
+ if (ret != IMG_SUCCESS)
+ goto rman_init_error;
+
+ /* Create resource bucket.. */
+ ret = rman_create_bucket(&global_core_ctx->dev_ctx->res_buck_handle);
+ VDEC_ASSERT(ret == IMG_SUCCESS);
+ if (ret != IMG_SUCCESS)
+ goto create_bucket_error;
+
+ return IMG_SUCCESS;
+
+create_bucket_error:
+ rman_deinitialise();
+
+rman_init_error:
+ decoder_deinitialise(global_core_ctx->dev_ctx->dec_context);
+
+decoder_init_error:
+ kfree(global_core_ctx->dev_ctx);
+ global_core_ctx->dev_ctx = NULL;
+ kfree(global_core_ctx);
+ global_core_ctx = NULL;
+
+ is_core_initialized = FALSE;
+
+ return ret;
+}
+
+/*
+ * @Function core_check_decoder_support
+ * @Description
+ * This function determines whether Decoder supports bitstream and
+ * configuration.
+ */
+static int
+core_check_decoder_support(const struct vdecdd_dddev_context *dd_dev_ctx,
+ const struct vdec_str_configdata *str_cfg_data,
+ const struct vdec_comsequ_hdrinfo *prev_seq_hdrinfo,
+ const struct bspp_pict_hdr_info *prev_pict_hdrinfo,
+ const struct vdecdd_mapbuf_info *map_bufinfo,
+ struct vdecdd_supp_check *supp_check)
+{
+ int ret;
+ struct vdec_unsupp_flags unsupported;
+ struct vdec_pict_rendinfo disp_pict_rend_info;
+
+ memset(&disp_pict_rend_info, 0, sizeof(struct vdec_pict_rendinfo));
+
+ /*
+ * If output picture buffer information is provided create another
+ * with properties required by bitstream so that it can be compared.
+ */
+ if (supp_check->disp_pictbuf) {
+ struct vdec_pict_rend_config pict_rend_cfg;
+
+ memset(&pict_rend_cfg, 0, sizeof(pict_rend_cfg));
+
+ /*
+ * Cannot validate the display picture buffer layout without
+ * knowing the pixel format required for the output and the
+ * sequence information.
+ */
+ if (supp_check->comseq_hdrinfo && supp_check->op_cfg) {
+ pict_rend_cfg.coded_pict_size =
+ supp_check->comseq_hdrinfo->max_frame_size;
+
+ pict_rend_cfg.byte_interleave =
+ supp_check->disp_pictbuf->buf_config.byte_interleave;
+
+ pict_rend_cfg.packed =
+ supp_check->disp_pictbuf->buf_config.packed;
+
+ pict_rend_cfg.stride_alignment =
+ supp_check->disp_pictbuf->buf_config.stride_alignment;
+
+ /*
+ * Recalculate render picture layout based upon
+ * sequence and output config.
+ */
+ vdecddutils_pictbuf_getinfo(str_cfg_data,
+ &pict_rend_cfg,
+ supp_check->op_cfg,
+ &disp_pict_rend_info);
+ }
+ }
+ /* Check that the decoder supports the picture. */
+ ret = decoder_check_support(dd_dev_ctx->dec_context, str_cfg_data,
+ supp_check->op_cfg,
+ supp_check->disp_pictbuf,
+ (disp_pict_rend_info.rendered_size) ?
+ &disp_pict_rend_info : NULL,
+ supp_check->comseq_hdrinfo,
+ supp_check->pict_hdrinfo,
+ prev_seq_hdrinfo,
+ prev_pict_hdrinfo,
+ supp_check->non_cfg_req,
+ &unsupported,
+ &supp_check->features);
+ VDEC_ASSERT(ret == IMG_SUCCESS);
+ if (ret != IMG_SUCCESS) {
+ if (ret == IMG_ERROR_NOT_SUPPORTED)
+ supp_check->unsupp_flags = unsupported;
+ }
+
+ return ret;
+}
+
+/*
+ * @Function core_supported_features
+ */
+int core_supported_features(struct vdec_features *features)
+{
+ struct vdecdd_dddev_context *dd_dev_ctx;
+
+ VDEC_ASSERT(global_core_ctx);
+
+ dd_dev_ctx = global_core_ctx->dev_ctx;
+ VDEC_ASSERT(dd_dev_ctx);
+ if (!dd_dev_ctx)
+ return IMG_ERROR_NOT_INITIALISED;
+
+ return decoder_supported_features(dd_dev_ctx->dec_context, features);
+}
+
+/*
+ * @Function core_stream_stop
+ */
+int core_stream_stop(unsigned int res_str_id)
+{
+ int ret = IMG_SUCCESS;
+	struct vdecdd_str_unit *stop_unit = NULL;
+ struct vdecdd_ddstr_ctx *ddstr_ctx;
+ struct core_stream_context *core_str_ctx;
+
+ /*
+ * Stream based messages without a device context
+ * must have a stream ID.
+ */
+ VDEC_ASSERT(res_str_id);
+
+ if (res_str_id == 0) {
+ pr_err("Invalid params passed to %s\n", __func__);
+ return IMG_ERROR_INVALID_PARAMETERS;
+ }
+
+ /* Get access to stream context.. */
+ ret = rman_get_resource(res_str_id, VDECDD_STREAM_TYPE_ID,
+ (void **)&core_str_ctx, NULL);
+ VDEC_ASSERT(ret == IMG_SUCCESS);
+ if (ret != IMG_SUCCESS)
+ return ret;
+
+ VDEC_ASSERT(core_str_ctx);
+
+ ddstr_ctx = core_str_ctx->dd_str_ctx;
+
+ /* Validate input arguments */
+ VDEC_ASSERT(ddstr_ctx);
+
+ /*
+ * Disregard this stop request if the stream is currently
+ * stopped or being stopped.
+ */
+ if (ddstr_ctx->dd_str_state == VDECDD_STRSTATE_PLAYING) {
+ vdecddutils_create_strunit(&stop_unit, NULL);
+ if (!stop_unit) {
+ pr_err("Failed to allocate memory for stop unit\n");
+ return IMG_ERROR_OUT_OF_MEMORY;
+ }
+ memset(stop_unit, 0, sizeof(*stop_unit));
+
+ stop_unit->str_unit_type = VDECDD_STRUNIT_STOP;
+ stop_unit->str_unit_tag = NULL;
+ stop_unit->decode = FALSE;
+
+ /*
+		 * Since the stop is now to be passed to the decoder, signal
+ * that we're stopping.
+ */
+ ddstr_ctx->dd_str_state = VDECDD_STRSTATE_STOPPING;
+ decoder_stream_process_unit(ddstr_ctx->dec_ctx, stop_unit);
+ core_str_ctx->stopped = TRUE;
+ vdecddutils_free_strunit(stop_unit);
+ }
+
+ return ret;
+}
+
+/*
+ * @Function core_is_stream_idle
+ */
+static unsigned char core_is_stream_idle(struct vdecdd_ddstr_ctx *dd_str_ctx)
+{
+ unsigned char is_stream_idle;
+
+ is_stream_idle = decoder_is_stream_idle(dd_str_ctx->dec_ctx);
+
+ return is_stream_idle;
+}
+
+/*
+ * @Function core_stream_destroy
+ */
+int core_stream_destroy(unsigned int res_str_id)
+{
+ struct vdecdd_ddstr_ctx *ddstr_ctx;
+ struct core_stream_context *core_str_ctx;
+ int ret;
+
+ /*
+ * Stream based messages without a device context
+ * must have a stream ID.
+ */
+ VDEC_ASSERT(res_str_id);
+
+ if (res_str_id == 0) {
+ pr_err("Invalid params passed to %s\n", __func__);
+ return IMG_ERROR_INVALID_PARAMETERS;
+ }
+
+ /* Get access to stream context.. */
+ ret = rman_get_resource(res_str_id, VDECDD_STREAM_TYPE_ID,
+ (void **)&core_str_ctx, NULL);
+ VDEC_ASSERT(ret == IMG_SUCCESS);
+ if (ret != IMG_SUCCESS)
+ return ret;
+
+ VDEC_ASSERT(core_str_ctx);
+
+ ddstr_ctx = core_str_ctx->dd_str_ctx;
+
+ /* Validate input arguments */
+ VDEC_ASSERT(ddstr_ctx);
+
+ ret = core_stream_stop(res_str_id);
+ VDEC_ASSERT(ret == IMG_SUCCESS);
+ if (ret != IMG_SUCCESS)
+ return ret;
+
+ lst_remove(&global_core_ctx->core_str_ctx, core_str_ctx);
+
+ /* Destroy stream if idle otherwise wait and do it later */
+ if (core_is_stream_idle(ddstr_ctx))
+ rman_free_resource(ddstr_ctx->res_handle);
+
+	pr_debug("Core stream destroyed successfully\n");
+ /* Return success.. */
+ return IMG_SUCCESS;
+}
+
+static int
+core_picture_attach_resources(struct core_stream_context *core_str_ctx,
+ struct vdecdd_str_unit *str_unit, unsigned char check)
+{
+ unsigned int ret = IMG_SUCCESS;
+
+ /*
+ * Take sequence header from cache.
+ * Note: sequence header id must be set in PICTURE_START unit
+ */
+ str_unit->seq_hdr_info = resource_list_getbyid(&core_str_ctx->seq_hdr_list,
+ str_unit->seq_hdr_id);
+
+ /* Check is not needed e.g. when freeing resources at stream destroy */
+ if (check && !str_unit->seq_hdr_info) {
+ pr_err("[USERSID=0x%08X] Sequence header not available for current picture while attaching",
+ core_str_ctx->dd_str_ctx->str_config_data.user_str_id);
+ ret = IMG_ERROR_NOT_SUPPORTED;
+ }
+
+ return ret;
+}
+
+/*
+ * @Function core_handle_processed_unit
+ */
+static int core_handle_processed_unit(struct core_stream_context *c_str_ctx,
+ struct vdecdd_str_unit *str_unit)
+{
+ struct bspp_bitstr_seg *bstr_seg;
+ struct vdecdd_ddstr_ctx *dd_str_ctx = c_str_ctx->dd_str_ctx;
+ int ret;
+ struct core_context *g_ctx = global_core_ctx;
+
+ pr_debug("%s stream unit type = %d\n", __func__, str_unit->str_unit_type);
+ /* check for type of the unit */
+ switch (str_unit->str_unit_type) {
+ case VDECDD_STRUNIT_SEQUENCE_START:
+ /* nothing to be done as sps is maintained till it changes */
+ break;
+
+ case VDECDD_STRUNIT_PICTURE_START:
+ /* Loop over bit stream segments.. */
+ bstr_seg = (struct bspp_bitstr_seg *)
+ lst_removehead(&str_unit->bstr_seg_list);
+
+ while (bstr_seg) {
+ lst_add(&c_str_ctx->vxd_dec_context->seg_list, bstr_seg);
+ if (bstr_seg->bstr_seg_flag & VDECDD_BSSEG_LASTINBUFF &&
+ dd_str_ctx->dd_str_state != VDECDD_STRSTATE_STOPPED) {
+ struct vdecdd_ddbuf_mapinfo *ddbuf_map_info;
+ /* Get access to map info context.. */
+ ret = rman_get_resource(bstr_seg->bufmap_id, VDECDD_BUFMAP_TYPE_ID,
+ (void **)&ddbuf_map_info, NULL);
+ VDEC_ASSERT(ret == IMG_SUCCESS);
+ if (ret != IMG_SUCCESS)
+ return ret;
+
+ g_ctx->vxd_str_processed_cb(c_str_ctx->vxd_dec_context,
+ VXD_CB_STRUNIT_PROCESSED,
+ bstr_seg->bufmap_id);
+ }
+ /* Get next segment. */
+ bstr_seg = (struct bspp_bitstr_seg *)
+ lst_removehead(&str_unit->bstr_seg_list);
+ }
+ break;
+
+ case VDECDD_STRUNIT_PICTURE_END:
+ g_ctx->vxd_str_processed_cb(c_str_ctx->vxd_dec_context,
+ VXD_CB_PICT_END, 0xFFFF);
+ break;
+
+ case VDECDD_STRUNIT_STOP:
+ /*
+ * Signal that the stream has been stopped in the
+ * device driver.
+ */
+ dd_str_ctx->dd_str_state = VDECDD_STRSTATE_STOPPED;
+
+ break;
+
+ default:
+ pr_err("Invalid stream unit type passed\n");
+ return IMG_ERROR_GENERIC_FAILURE;
+ }
+
+#ifdef DEBUG_DECODER_DRIVER
+ pr_info("[SID=0x%08X] [UTYPE=0x%08X] PROCESSED",
+ dd_str_ctx->res_str_id,
+ str_unit->str_unit_type);
+#endif
+
+ /* Return success.. */
+ return IMG_SUCCESS;
+}
+
+static int
+core_handle_decoded_picture(struct core_stream_context *core_str_ctx,
+ struct vdecdd_picture *picture, unsigned int type)
+{
+ /* Pick the client image buffer. */
+ struct vdecdd_ddbuf_mapinfo *pictbuf_mapinfo = picture->disp_pict_buf.pict_buf;
+
+ VDEC_ASSERT(pictbuf_mapinfo);
+ if (!pictbuf_mapinfo)
+ return IMG_ERROR_COULD_NOT_OBTAIN_RESOURCE;
+
+ global_core_ctx->vxd_str_processed_cb(core_str_ctx->vxd_dec_context,
+ (enum vxd_cb_type)type, pictbuf_mapinfo->buf_map_id);
+ return IMG_SUCCESS;
+}
+
+static int core_stream_processed_cb(void *handle, int cb_type, void *cb_item)
+{
+ int ret;
+ struct core_stream_context *core_str_ctx =
+ (struct core_stream_context *)handle;
+ VDEC_ASSERT(core_str_ctx);
+ if (!core_str_ctx) {
+ pr_err("NULL handle passed to core callback\n");
+ return IMG_ERROR_GENERIC_FAILURE;
+ }
+
+ pr_debug("%s callback type = %d\n", __func__, cb_type);
+ /* Based on callback type, retrieve the item */
+ switch (cb_type) {
+ case VXD_CB_STRUNIT_PROCESSED:
+ {
+ struct vdecdd_str_unit *str_unit =
+ (struct vdecdd_str_unit *)cb_item;
+ VDEC_ASSERT(str_unit);
+ if (!str_unit) {
+ pr_err("NULL item passed to core callback type STRUNIT_PROCESSED\n");
+ return IMG_ERROR_GENERIC_FAILURE;
+ }
+ ret = core_handle_processed_unit(core_str_ctx, str_unit);
+ if (ret != IMG_SUCCESS) {
+ pr_err("core_handle_processed_unit returned error\n");
+ return ret;
+ }
+ break;
+ }
+
+ case VXD_CB_PICT_DECODED:
+ case VXD_CB_PICT_DISPLAY:
+ case VXD_CB_PICT_RELEASE:
+ {
+ struct vdecdd_picture *picture = (struct vdecdd_picture *)cb_item;
+
+ if (!picture) {
+ pr_err("NULL item passed to core callback type PICTURE_DECODED\n");
+ return IMG_ERROR_GENERIC_FAILURE;
+ }
+ ret = core_handle_decoded_picture(core_str_ctx, picture, cb_type);
+ break;
+ }
+
+ case VXD_CB_STR_END:
+ global_core_ctx->vxd_str_processed_cb(core_str_ctx->vxd_dec_context,
+ (enum vxd_cb_type)cb_type, 0);
+ ret = IMG_SUCCESS;
+
+ break;
+
+ case VXD_CB_ERROR_FATAL:
+		/*
+		 * A fatal error has occurred in the decoder.
+		 * Forward it to the v4l2 glue layer to be handled there.
+		 */
+ global_core_ctx->vxd_str_processed_cb(core_str_ctx->vxd_dec_context,
+ (enum vxd_cb_type)cb_type, *((unsigned int *)cb_item));
+ ret = IMG_SUCCESS;
+ break;
+
+ default:
+ return 0;
+ }
+
+ return ret;
+}
+
+static int core_decoder_queries(void *handle, int query, void *item)
+{
+ struct core_stream_context *core_str_ctx =
+ (struct core_stream_context *)handle;
+ VDEC_ASSERT(core_str_ctx);
+ if (!core_str_ctx) {
+ pr_err("NULL handle passed to %s callback\n", __func__);
+ return IMG_ERROR_GENERIC_FAILURE;
+ }
+
+ switch (query) {
+ case DECODER_CORE_GET_RES_LIMIT:
+ {
+ unsigned int num_img_bufs;
+ unsigned int num_res;
+
+ num_img_bufs = resource_list_getnum(&core_str_ctx->pict_buf_list);
+
+ /* Return the number of internal resources. */
+ num_res = core_str_ctx->pict_resinfo.pict_res_num;
+
+ /* Return the minimum of the two. */
+ *((unsigned int *)item) = vdec_size_min(num_img_bufs, num_res);
+ }
+ break;
+
+ default:
+ return IMG_ERROR_GENERIC_FAILURE;
+ }
+ return IMG_SUCCESS;
+}
+
+static int
+core_free_common_picture_resource(struct core_stream_context *core_str_ctx,
+ struct vdecdd_pict_resint *pict_resint)
+{
+ int ret = IMG_SUCCESS;
+
+ if (pict_resint->mb_param_buf && pict_resint->mb_param_buf->ddbuf_info.hndl_memory) {
+#ifdef DEBUG_DECODER_DRIVER
+ pr_info("mmu_free for buff_id[%d]\n",
+ pict_resint->mb_param_buf->ddbuf_info.buff_id);
+#endif
+ ret = mmu_free_mem(core_str_ctx->dd_str_ctx->mmu_str_handle,
+ &pict_resint->mb_param_buf->ddbuf_info);
+ if (ret != IMG_SUCCESS)
+ pr_err("MMU_Free for MBParam buffer failed with error %u", ret);
+
+ kfree(pict_resint->mb_param_buf);
+ pict_resint->mb_param_buf = NULL;
+ }
+ return ret;
+}
+
+static int core_free_resbuf(struct vdecdd_ddbuf_mapinfo **buf_handle, void *mmu_handle)
+{
+ int ret = IMG_SUCCESS;
+ struct vdecdd_ddbuf_mapinfo *buf = *buf_handle;
+
+ if (buf) {
+ if (buf->ddbuf_info.hndl_memory) {
+ ret = mmu_free_mem(mmu_handle, &buf->ddbuf_info);
+ VDEC_ASSERT(ret == IMG_SUCCESS);
+ }
+ kfree(buf);
+ *buf_handle = NULL;
+ }
+ return ret;
+}
+
+/*
+ * @Function core_free_picture_resource
+ */
+static int
+core_free_picture_resource(struct core_stream_context *core_strctx,
+ struct vdecdd_pict_resint *pict_resint)
+{
+ int result = IMG_SUCCESS;
+
+ /* Check input arguments */
+ if (!core_strctx || !pict_resint) {
+ VDEC_ASSERT(0);
+ return -EINVAL;
+ }
+
+ result = core_free_common_picture_resource(core_strctx, pict_resint);
+
+ VDEC_ASSERT(core_strctx->std_spec_ops);
+ if (core_strctx->std_spec_ops->free_picture_resource)
+ core_strctx->std_spec_ops->free_picture_resource(core_strctx,
+ pict_resint);
+
+#ifdef SEQ_RES_NEEDED
+ if (pict_resint->seq_resint) {
+ resource_item_return(&pict_resint->seq_resint->ref_count);
+ pict_resint->seq_resint = 0;
+ }
+#endif
+
+ if (result == IMG_SUCCESS)
+ kfree(pict_resint);
+
+ return result;
+}
+
+/*
+ * @Function core_free_sequence_resource
+ */
+#ifdef SEQ_RES_NEEDED
+static int
+core_free_common_sequence_resource(struct core_stream_context *core_strctx,
+ struct vdecdd_seq_resint *seqres_int)
+{
+ int result;
+
+ result = core_free_resbuf(&seqres_int->err_pict_buf,
+ core_strctx->dd_str_ctx->mmu_str_handle);
+ if (result != IMG_SUCCESS)
+		pr_err("MMU_Free for Error Recovery Frame Store buffer failed with error %u",
+		       result);
+
+ return result;
+}
+
+static void
+core_free_sequence_resource(struct core_stream_context *core_strctx,
+ struct vdecdd_seq_resint *seqres_int)
+{
+ VDEC_ASSERT(core_strctx->std_spec_ops);
+ core_free_common_sequence_resource(core_strctx, seqres_int);
+
+ if (core_strctx->std_spec_ops->free_sequence_resource)
+ core_strctx->std_spec_ops->free_sequence_resource(core_strctx, seqres_int);
+
+ kfree(seqres_int);
+}
+#endif
+
+/*
+ * @Function core_stream_resource_deprecate
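+ * @Description
+ * Marks the stream's current internal resources as deprecated: unused
+ * picture/sequence resources are freed immediately, while resources still
+ * in use are moved to the "old" lists to be freed once released.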
+ */
+static int core_stream_resource_deprecate(struct core_stream_context *core_str_ctx)
+{
+ struct vdecdd_pict_resint *picres_int;
+ int ret;
+
+ /* Free all "old" picture resources since these should now be unused. */
+ picres_int = lst_first(&core_str_ctx->old_pict_res_list);
+ while (picres_int) {
+ if (picres_int->ref_cnt != 0) {
+ pr_warn("[USERSID=0x%08X] Internal resource should be unused since it has been deprecated before",
+ core_str_ctx->dd_str_ctx->str_config_data.user_str_id);
+
+ picres_int = lst_next(picres_int);
+ } else {
+ struct vdecdd_pict_resint *picres_int_to_remove = picres_int;
+
+ picres_int = lst_next(picres_int);
+
+ lst_remove(&core_str_ctx->old_pict_res_list, picres_int_to_remove);
+ ret = core_free_picture_resource(core_str_ctx, picres_int_to_remove);
+ VDEC_ASSERT(ret == IMG_SUCCESS);
+ if (ret != IMG_SUCCESS)
+ return ret;
+ }
+ }
+
+ /* Move all "active" picture resources to the "old" list if they are still in use. */
+ picres_int = lst_removehead(&core_str_ctx->pict_res_list);
+ while (picres_int) {
+ /* Remove picture resource from the list. */
+ ret = resource_list_remove(&core_str_ctx->aux_pict_res_list, picres_int);
+
+		/*
+		 * IMG_ERROR_COULD_NOT_OBTAIN_RESOURCE is a valid return code here:
+		 * e.g. during a reconfigure we clear the sPictBufferList and then
+		 * try to remove the buffers again from the same (now empty) list
+		 * through core UNMAP_BUF messages.
+		 */
+ if (ret != IMG_SUCCESS && ret != IMG_ERROR_COULD_NOT_OBTAIN_RESOURCE) {
+ pr_err("[USERSID=0x%08X] Failed to remove picture resource",
+ core_str_ctx->dd_str_ctx->str_config_data.user_str_id);
+ return ret;
+ }
+ /*
+ * If the active resource is not being used, free now.
+ * Otherwise add to the old list to be freed later.
+ */
+ if (picres_int->ref_cnt == 0) {
+ ret = core_free_picture_resource(core_str_ctx, picres_int);
+ VDEC_ASSERT(ret == IMG_SUCCESS);
+ if (ret != IMG_SUCCESS)
+ return ret;
+ } else {
+ lst_add(&core_str_ctx->old_pict_res_list, picres_int);
+ }
+ picres_int = lst_removehead(&core_str_ctx->pict_res_list);
+ }
+
+ /* Reset the resource configuration. */
+ memset(&core_str_ctx->pict_resinfo, 0, sizeof(core_str_ctx->pict_resinfo));
+
+#ifdef SEQ_RES_NEEDED
+ {
+ struct vdecdd_seq_resint *seqres_int;
+
+ /* Free all "old" sequence resources since these should now be unused. */
+ seqres_int = lst_first(&core_str_ctx->old_seq_res_list);
+ while (seqres_int) {
+ if (seqres_int->ref_count != 0) {
+ pr_warn("[USERSID=0x%08X] Internal sequence resource should be unused since it has been deprecated before",
+ core_str_ctx->dd_str_ctx->str_config_data.user_str_id);
+ seqres_int = lst_next(seqres_int);
+ } else {
+ struct vdecdd_seq_resint *seqres_int_to_remove = seqres_int;
+
+ seqres_int = lst_next(seqres_int);
+
+ lst_remove(&core_str_ctx->old_seq_res_list, seqres_int_to_remove);
+ core_free_sequence_resource(core_str_ctx, seqres_int_to_remove);
+ }
+ }
+
+ /* Move all "active" sequence resources to the "old"
+ * list if they are still in use.
+ */
+ seqres_int = lst_removehead(&core_str_ctx->seq_res_list);
+ while (seqres_int) {
+ /*
+ * If the active resource is not being used, free now.
+ * Otherwise add to the old list to be freed later.
+ */
+			if (seqres_int->ref_count == 0)
+				core_free_sequence_resource(core_str_ctx, seqres_int);
+			else
+				lst_add(&core_str_ctx->old_seq_res_list, seqres_int);
+
+ seqres_int = lst_removehead(&core_str_ctx->seq_res_list);
+ }
+
+ /* Reset the resource configuration. */
+ memset(&core_str_ctx->seq_resinfo, 0, sizeof(core_str_ctx->seq_resinfo));
+ }
+#endif
+ return IMG_SUCCESS;
+}
+
+/*
+ * @Function core_stream_resource_destroy
+ */
+static int core_stream_resource_destroy(struct core_stream_context *core_str_ctx)
+{
+ struct vdecdd_pict_resint *picres_int;
+ int ret;
+
+ /* Remove any "active" picture resources allocated for this stream. */
+ picres_int = lst_removehead(&core_str_ctx->pict_res_list);
+ while (picres_int) {
+ ret = core_free_picture_resource(core_str_ctx, picres_int);
+ VDEC_ASSERT(ret == IMG_SUCCESS);
+ if (ret != IMG_SUCCESS)
+ return ret;
+
+ picres_int = lst_removehead(&core_str_ctx->pict_res_list);
+ }
+
+ /* Remove any "old" picture resources allocated for this stream. */
+ picres_int = lst_removehead(&core_str_ctx->old_pict_res_list);
+ while (picres_int) {
+ ret = core_free_picture_resource(core_str_ctx, picres_int);
+ VDEC_ASSERT(ret == IMG_SUCCESS);
+ if (ret != IMG_SUCCESS)
+ return ret;
+
+ picres_int = lst_removehead(&core_str_ctx->old_pict_res_list);
+ }
+
+ /* Reset the resource configuration. */
+ memset(&core_str_ctx->pict_resinfo, 0, sizeof(core_str_ctx->pict_resinfo));
+
+#ifdef SEQ_RES_NEEDED
+ {
+ struct vdecdd_seq_resint *seqres_int;
+
+ /* Remove any "active" sequence resources allocated for this stream. */
+ seqres_int = lst_removehead(&core_str_ctx->seq_res_list);
+ while (seqres_int) {
+ core_free_sequence_resource(core_str_ctx, seqres_int);
+ seqres_int = lst_removehead(&core_str_ctx->seq_res_list);
+ }
+
+ /* Remove any "old" sequence resources allocated for this stream. */
+ seqres_int = lst_removehead(&core_str_ctx->old_seq_res_list);
+ while (seqres_int) {
+ core_free_sequence_resource(core_str_ctx, seqres_int);
+ seqres_int = lst_removehead(&core_str_ctx->old_seq_res_list);
+ }
+
+ /* Reset the resource configuration. */
+ memset(&core_str_ctx->seq_resinfo, 0, sizeof(core_str_ctx->seq_resinfo));
+ }
+#endif
+ return IMG_SUCCESS;
+}
+
+/*
+ * @Function core_fn_free_stream_unit
+ */
+static int core_fn_free_stream_unit(struct vdecdd_str_unit *str_unit, void *param)
+{
+ struct core_stream_context *core_str_ctx = (struct core_stream_context *)param;
+ unsigned int ret = IMG_SUCCESS;
+
+ /* Attach picture resources where required. */
+ if (str_unit->str_unit_type == VDECDD_STRUNIT_PICTURE_START)
+ /*
+ * Do not force attachment because the resources can be
+ * unattached yet, e.g. in case of not yet processed picture
+ * units
+ */
+ ret = core_picture_attach_resources(core_str_ctx, str_unit, FALSE);
+
+ str_unit->decode = FALSE;
+
+ return ret;
+}
+
+/*
+ * @Function core_fn_free_stream
+ */
+static void core_fn_free_stream(void *param)
+{
+ int ret;
+ struct vdecdd_ddstr_ctx *dd_str_context;
+ struct vdecdd_dddev_context *dd_dev_ctx;
+ struct core_stream_context *core_str_ctx;
+
+ /* Validate input arguments */
+ VDEC_ASSERT(param);
+
+ core_str_ctx = (struct core_stream_context *)param;
+
+ dd_str_context = core_str_ctx->dd_str_ctx;
+
+ VDEC_ASSERT(dd_str_context);
+ if (!dd_str_context)
+ return;
+
+ dd_dev_ctx = dd_str_context->dd_dev_context;
+ VDEC_ASSERT(dd_dev_ctx);
+
+ if (!lst_empty(&core_str_ctx->str_unit_list)) {
+ /*
+ * Try and empty the list. Since this function is tearing down the core stream,
+ * test result using assert and continue to tidy-up as much as possible.
+ */
+ ret = resource_list_empty(&core_str_ctx->str_unit_list, FALSE,
+ (resource_pfn_freeitem)core_fn_free_stream_unit,
+ core_str_ctx);
+ VDEC_ASSERT(ret == IMG_SUCCESS);
+ }
+
+ if (!lst_empty(&core_str_ctx->pict_buf_list)) {
+ /*
+ * Try and empty the list. Since this function is tearing down the core stream,
+ * test result using assert and continue to tidy-up as much as possible.
+ */
+ ret = resource_list_empty(&core_str_ctx->pict_buf_list, TRUE, NULL, NULL);
+ VDEC_ASSERT(ret == IMG_SUCCESS);
+ }
+
+ if (!lst_empty(&core_str_ctx->aux_pict_res_list)) {
+ /*
+ * Try and empty the list. Since this function is tearing down the core stream,
+ * test result using assert and continue to tidy-up as much as possible.
+ */
+ ret = resource_list_empty(&core_str_ctx->aux_pict_res_list, TRUE, NULL, NULL);
+ VDEC_ASSERT(ret == IMG_SUCCESS);
+ }
+
+ if (!lst_empty(&core_str_ctx->seq_hdr_list)) {
+ /*
+ * Try and empty the list. Since this function is tearing down the core stream,
+ * test result using assert and continue to tidy-up as much as possible.
+ */
+ ret = resource_list_empty(&core_str_ctx->seq_hdr_list, FALSE, NULL, NULL);
+ VDEC_ASSERT(ret == IMG_SUCCESS);
+ }
+
+ /* Destroy stream in the Decoder. */
+ if (dd_str_context->dec_ctx) {
+ ret = decoder_stream_destroy(dd_str_context->dec_ctx, FALSE);
+ VDEC_ASSERT(ret == IMG_SUCCESS);
+ dd_str_context->dec_ctx = NULL;
+ }
+
+ core_stream_resource_destroy(core_str_ctx);
+
+ /* Destroy the MMU context for this stream. */
+ if (dd_str_context->mmu_str_handle) {
+ ret = mmu_stream_destroy(dd_str_context->mmu_str_handle);
+
+ VDEC_ASSERT(ret == IMG_SUCCESS);
+ dd_str_context->mmu_str_handle = NULL;
+ }
+
+ /* Destroy the stream resources. */
+ if (dd_str_context->res_buck_handle) {
+ rman_destroy_bucket(dd_str_context->res_buck_handle);
+ dd_str_context->res_buck_handle = NULL;
+ }
+
+	/* Free the device stream context. */
+ kfree(dd_str_context);
+
+	/* Free the core stream context. */
+ kfree(core_str_ctx);
+}
+
+/*
+ * @Function core_is_unsupported
+ */
+static unsigned char core_is_unsupported(struct vdec_unsupp_flags *unsupp_flags)
+{
+ unsigned char unsupported = FALSE;
+
+ if (unsupp_flags->str_cfg || unsupp_flags->seq_hdr ||
+ unsupp_flags->pict_hdr || unsupp_flags->str_opcfg ||
+ unsupp_flags->op_bufcfg)
+ unsupported = TRUE;
+
+ return unsupported;
+}
+
+int core_stream_create(void *vxd_dec_ctx_arg,
+ const struct vdec_str_configdata *str_cfg_data,
+ unsigned int *res_str_id)
+{
+ int ret;
+ struct vdecdd_ddstr_ctx *dd_str_context;
+ struct vdecdd_supp_check supp_check;
+ struct vdecdd_dddev_context *dd_dev_ctx;
+ struct core_stream_context *core_str_ctx;
+
+ /* Validate input arguments */
+ VDEC_ASSERT(str_cfg_data);
+ VDEC_ASSERT(res_str_id);
+
+ VDEC_ASSERT(global_core_ctx);
+ dd_dev_ctx = global_core_ctx->dev_ctx;
+
+ VDEC_ASSERT(dd_dev_ctx);
+ if (!dd_dev_ctx)
+ return IMG_ERROR_NOT_INITIALISED;
+
+ /* Allocate Core Stream Context */
+ core_str_ctx = kzalloc(sizeof(*core_str_ctx), GFP_KERNEL);
+ if (!core_str_ctx)
+ return IMG_ERROR_OUT_OF_MEMORY;
+
+ core_str_ctx->core_ctx = global_core_ctx;
+ core_str_ctx->vxd_dec_context = (struct vxd_dec_ctx *)vxd_dec_ctx_arg;
+ /* register callback for firmware response */
+ core_str_ctx->vxd_dec_context->cb = (decode_cb)core_fw_response_cb;
+
+ lst_init(&core_str_ctx->pict_buf_list);
+ lst_init(&core_str_ctx->pict_res_list);
+ lst_init(&core_str_ctx->old_pict_res_list);
+ lst_init(&core_str_ctx->aux_pict_res_list);
+ lst_init(&core_str_ctx->seq_hdr_list);
+ lst_init(&core_str_ctx->str_unit_list);
+
+#ifdef SEQ_RES_NEEDED
+ lst_init(&core_str_ctx->seq_res_list);
+ lst_init(&core_str_ctx->old_seq_res_list);
+#endif
+
+ /* Allocate device stream context.. */
+ dd_str_context = kzalloc(sizeof(*dd_str_context), GFP_KERNEL);
+ VDEC_ASSERT(dd_str_context);
+ if (!dd_str_context) {
+ kfree(core_str_ctx);
+ core_str_ctx = NULL;
+ return IMG_ERROR_OUT_OF_MEMORY;
+ }
+
+ dd_str_context->dd_dev_context = dd_dev_ctx;
+ core_str_ctx->dd_str_ctx = dd_str_context;
+
+ /* Check stream configuration. */
+ memset(&supp_check, 0x0, sizeof(supp_check));
+ ret = core_check_decoder_support(dd_dev_ctx, str_cfg_data, NULL, NULL, NULL, &supp_check);
+ VDEC_ASSERT(ret == IMG_SUCCESS);
+ if (ret != IMG_SUCCESS)
+ goto error;
+
+ if (core_is_unsupported(&supp_check.unsupp_flags)) {
+ ret = IMG_ERROR_NOT_SUPPORTED;
+ goto error;
+ }
+
+ /* Create a bucket for the resources.. */
+ ret = rman_create_bucket(&dd_str_context->res_buck_handle);
+ VDEC_ASSERT(ret == IMG_SUCCESS);
+ if (ret != IMG_SUCCESS)
+ goto error;
+
+ /* Register the stream as a device resource.. */
+ ret = rman_register_resource(dd_dev_ctx->res_buck_handle,
+ VDECDD_STREAM_TYPE_ID,
+ core_fn_free_stream, core_str_ctx,
+ &dd_str_context->res_handle,
+ &dd_str_context->res_str_id);
+ VDEC_ASSERT(ret == IMG_SUCCESS);
+ if (ret != IMG_SUCCESS)
+ goto error;
+
+ /* Create unique Stream Id */
+ dd_str_context->km_str_id = core_str_ctx->vxd_dec_context->stream.id;
+
+ /*
+ * Create stream in the Decoder.
+ * NOTE: this must take place first since it creates the MMU context.
+ */
+ ret = decoder_stream_create(dd_dev_ctx->dec_context, *str_cfg_data,
+ dd_str_context->km_str_id,
+ &dd_str_context->mmu_str_handle,
+ core_str_ctx->vxd_dec_context,
+ core_str_ctx, &dd_str_context->dec_ctx,
+ (void *)core_stream_processed_cb,
+ (void *)core_decoder_queries);
+ VDEC_ASSERT(ret == IMG_SUCCESS);
+ if (ret != IMG_SUCCESS)
+ goto error;
+
+ /* Setup stream context.. */
+ dd_str_context->str_config_data = *str_cfg_data;
+ dd_str_context->dd_str_state = VDECDD_STRSTATE_STOPPED;
+
+#ifdef DEBUG_DECODER_DRIVER
+ pr_info("[SID=0x%08X] New stream created [USERSID=0x%08X]",
+ dd_str_context->res_str_id, str_cfg_data->user_str_id);
+#endif
+
+ *res_str_id = dd_str_context->res_str_id;
+	if (str_cfg_data->vid_std > 0 && str_cfg_data->vid_std < VDEC_STD_MAX) {
+ core_str_ctx->std_spec_ops = &std_specific_ops[str_cfg_data->vid_std - 1];
+ } else {
+ pr_err("%s: Invalid parameters\n", __func__);
+ return IMG_ERROR_INVALID_PARAMETERS;
+ }
+
+ lst_add(&global_core_ctx->core_str_ctx, core_str_ctx);
+
+ /* Return success.. */
+ return IMG_SUCCESS;
+
+error:
+ if (dd_str_context->res_handle)
+ rman_free_resource(dd_str_context->res_handle);
+ else
+ core_fn_free_stream(core_str_ctx);
+
+ return ret;
+}
+
+static int
+core_get_resource_availability(struct core_stream_context *core_str_ctx)
+{
+ unsigned int avail = ~0;
+
+ if (resource_list_getnumavail(&core_str_ctx->pict_buf_list) == 0)
+ avail &= ~CORE_AVAIL_PICTBUF;
+
+ if (resource_list_getnumavail(&core_str_ctx->aux_pict_res_list) == 0)
+ avail &= ~CORE_AVAIL_PICTRES;
+
+ if (global_avail_slots == 0)
+ avail &= ~CORE_AVAIL_CORE;
+
+ return avail;
+}
+
+static int
+core_stream_set_pictbuf_config(struct vdecdd_ddstr_ctx *dd_str_ctx,
+ struct vdec_pict_bufconfig *pictbuf_cfg)
+{
+ int ret;
+
+ /* Validate input arguments */
+ VDEC_ASSERT(dd_str_ctx);
+ VDEC_ASSERT(pictbuf_cfg);
+
+ /*
+ * If there are no buffers mapped or the configuration is not set
+ * (only done when reconfiguring output) then calculate the output
+ * picture buffer layout.
+ */
+ if (dd_str_ctx->map_buf_info.num_buf == 0 ||
+ dd_str_ctx->disp_pict_buf.buf_config.buf_size == 0) {
+ struct vdecdd_supp_check supp_check;
+ struct vdecdd_ddpict_buf disp_pictbuf;
+
+ memset(&disp_pictbuf, 0, sizeof(disp_pictbuf));
+
+ disp_pictbuf.buf_config = *pictbuf_cfg;
+
+ /*
+ * Ensure that the external picture buffer information
+ * is compatible with the hardware and convert to internal
+ * driver representation.
+ */
+ ret = vdecddutils_convert_buffer_config(&dd_str_ctx->str_config_data,
+ &disp_pictbuf.buf_config,
+ &disp_pictbuf.rend_info);
+ VDEC_ASSERT(ret == IMG_SUCCESS);
+ if (ret != IMG_SUCCESS)
+ goto error;
+
+ /*
+ * Provide the current state for validation against the new
+ * buffer configuration.
+ */
+ memset(&supp_check, 0, sizeof(supp_check));
+ supp_check.disp_pictbuf = &disp_pictbuf;
+
+ if (dd_str_ctx->comseq_hdr_info.max_frame_size.width)
+ supp_check.comseq_hdrinfo = &dd_str_ctx->comseq_hdr_info;
+
+ if (dd_str_ctx->str_op_configured)
+ supp_check.op_cfg = &dd_str_ctx->opconfig;
+
+ ret = core_check_decoder_support(dd_str_ctx->dd_dev_context,
+ &dd_str_ctx->str_config_data,
+ &dd_str_ctx->prev_comseq_hdr_info,
+ &dd_str_ctx->prev_pict_hdr_info,
+ &dd_str_ctx->map_buf_info,
+ &supp_check);
+ VDEC_ASSERT(ret == IMG_SUCCESS);
+ if (ret != IMG_SUCCESS)
+ goto error;
+
+ if (core_is_unsupported(&supp_check.unsupp_flags)) {
+ ret = IMG_ERROR_NOT_SUPPORTED;
+ goto error;
+ }
+
+ dd_str_ctx->disp_pict_buf = disp_pictbuf;
+ } else {
+ /*
+ * Check configuration of buffer matches that for stream
+ * including any picture buffers that are already mapped.
+ */
+ if (memcmp(pictbuf_cfg, &dd_str_ctx->disp_pict_buf.buf_config,
+ sizeof(*pictbuf_cfg))) {
+ /*
+ * Configuration of output buffer doesn't match the
+ * rest.
+ */
+ pr_err("[SID=0x%08X] All output buffers must have the same properties.",
+ dd_str_ctx->res_str_id);
+ ret = IMG_ERROR_INVALID_PARAMETERS;
+ goto error;
+ }
+ }
+
+ /* Return success.. */
+ return IMG_SUCCESS;
+
+error:
+ return ret;
+}
+
+int
+core_stream_set_output_config(unsigned int res_str_id,
+ struct vdec_str_opconfig *str_opcfg,
+ struct vdec_pict_bufconfig *pict_bufcfg_handle)
+{
+ struct vdecdd_supp_check supp_check;
+ struct vdec_pict_bufconfig pict_buf_cfg;
+ struct vdec_pict_rendinfo disp_pict_rend_info;
+ int ret;
+
+ struct vdecdd_ddstr_ctx *dd_str_context;
+ struct core_stream_context *core_str_ctx;
+
+ /*
+ * Stream based messages without a device context
+ * must have a stream ID.
+ */
+ VDEC_ASSERT(res_str_id);
+
+ /* Get access to stream context */
+ ret = rman_get_resource(res_str_id, VDECDD_STREAM_TYPE_ID, (void **)&core_str_ctx, NULL);
+ VDEC_ASSERT(ret == IMG_SUCCESS);
+ if (ret != IMG_SUCCESS)
+ return ret;
+
+ dd_str_context = core_str_ctx->dd_str_ctx;
+
+ VDEC_ASSERT(dd_str_context);
+ VDEC_ASSERT(str_opcfg);
+
+ memset(&supp_check, 0, sizeof(supp_check));
+ if (core_str_ctx->new_seq)
+ supp_check.comseq_hdrinfo = &dd_str_context->comseq_hdr_info;
+ else
+ supp_check.comseq_hdrinfo = NULL;
+
+ supp_check.op_cfg = str_opcfg;
+
+ /*
+ * Validate stream output configuration against display
+ * buffer properties if no new picture buffer configuration
+ * is provided.
+ */
+ if (!pict_bufcfg_handle) {
+ VDEC_ASSERT(dd_str_context->disp_pict_buf.rend_info.rendered_size);
+ supp_check.disp_pictbuf = &dd_str_context->disp_pict_buf;
+ }
+
+ /* Validate output configuration. */
+ ret = core_check_decoder_support(dd_str_context->dd_dev_context,
+ &dd_str_context->str_config_data,
+ &dd_str_context->prev_comseq_hdr_info,
+ &dd_str_context->prev_pict_hdr_info,
+ &dd_str_context->map_buf_info,
+ &supp_check);
+ VDEC_ASSERT(ret == IMG_SUCCESS);
+ if (ret != IMG_SUCCESS)
+		return ret;
+
+ if (core_is_unsupported(&supp_check.unsupp_flags))
+ return IMG_ERROR_NOT_SUPPORTED;
+
+ /* Update the stream output configuration. */
+ dd_str_context->opconfig = *str_opcfg;
+
+ /* Mark output as configured. */
+ dd_str_context->str_op_configured = TRUE;
+
+ if (pict_bufcfg_handle) {
+ /*
+ * Clear/invalidate the latest picture buffer configuration
+ * since it is easier to reuse the set function to calculate
+ * for this new output configuration than to determine
+ * compatibility. Keep a copy beforehand just in case the new
+ * configuration is invalid.
+ */
+ if (dd_str_context->disp_pict_buf.rend_info.rendered_size != 0) {
+ pict_buf_cfg = dd_str_context->disp_pict_buf.buf_config;
+ disp_pict_rend_info = dd_str_context->disp_pict_buf.rend_info;
+
+ memset(&dd_str_context->disp_pict_buf.buf_config, 0,
+ sizeof(dd_str_context->disp_pict_buf.buf_config));
+ memset(&dd_str_context->disp_pict_buf.rend_info, 0,
+ sizeof(dd_str_context->disp_pict_buf.rend_info));
+ }
+
+ /*
+ * Recalculate the picture buffer internal layout from the
+		 * external configuration. These settings provided by the
+ * allocator should be adhered to since the display process
+ * will expect the decoder to use them.
+ * If the configuration is invalid we need to leave the
+ * decoder state as it was before.
+ */
+ ret = core_stream_set_pictbuf_config(dd_str_context, pict_bufcfg_handle);
+ VDEC_ASSERT(ret == IMG_SUCCESS);
+ if (ret != IMG_SUCCESS && dd_str_context->disp_pict_buf.rend_info.rendered_size
+ != 0) {
+ /* Restore old picture buffer configuration */
+ dd_str_context->disp_pict_buf.buf_config =
+ pict_buf_cfg;
+ dd_str_context->disp_pict_buf.rend_info =
+ disp_pict_rend_info;
+ return ret;
+ }
+ } else if (core_is_unsupported(&supp_check.unsupp_flags)) {
+ return IMG_ERROR_NOT_SUPPORTED;
+ }
+
+ /* Return success.. */
+ return ret;
+}
+
+/*
+ * @Function core_stream_play
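+ * @Description
+ * Moves the stream from the STOPPED to the PLAYING state and re-validates
+ * the current output configuration against the decoder capabilities.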
+ */
+int core_stream_play(unsigned int res_str_id)
+{
+ int ret;
+ struct vdecdd_ddstr_ctx *dd_str_context;
+ struct core_stream_context *core_str_ctx;
+ /* Picture buffer layout to use for decoding. */
+ struct vdecdd_ddpict_buf *disp_pict_buf;
+ struct vdec_str_opconfig *op_cfg;
+ struct vdecdd_supp_check supp_check;
+
+ /*
+ * Stream based messages without a device context
+ * must have a stream ID.
+ */
+ VDEC_ASSERT(res_str_id);
+
+ /* Get access to stream context.. */
+ ret = rman_get_resource(res_str_id, VDECDD_STREAM_TYPE_ID,
+ (void **)&core_str_ctx, NULL);
+
+ VDEC_ASSERT(ret == IMG_SUCCESS);
+ if (ret != IMG_SUCCESS)
+ return ret;
+
+ dd_str_context = core_str_ctx->dd_str_ctx;
+
+ VDEC_ASSERT(dd_str_context);
+
+ /* Ensure we are stopped. */
+ VDEC_ASSERT(dd_str_context->dd_str_state == VDECDD_STRSTATE_STOPPED);
+
+ /* Set "playing". */
+ dd_str_context->dd_str_state = VDECDD_STRSTATE_PLAYING;
+
+ /* set that is it not yet in closed GOP */
+ core_str_ctx->no_prev_refs_used = TRUE;
+
+ disp_pict_buf = dd_str_context->disp_pict_buf.rend_info.rendered_size ?
+ &dd_str_context->disp_pict_buf : NULL;
+ op_cfg = dd_str_context->str_op_configured ?
+ &dd_str_context->opconfig : NULL;
+
+ if (disp_pict_buf && op_cfg) {
+ VDEC_ASSERT(!disp_pict_buf->pict_buf);
+
+ if (memcmp(&core_str_ctx->op_cfg, op_cfg,
+ sizeof(core_str_ctx->op_cfg)) ||
+ memcmp(&core_str_ctx->disp_pict_buf, disp_pict_buf,
+ sizeof(core_str_ctx->disp_pict_buf)))
+ core_str_ctx->new_op_cfg = TRUE;
+
+ core_str_ctx->disp_pict_buf = *disp_pict_buf;
+ core_str_ctx->op_cfg = *op_cfg;
+
+ core_str_ctx->opcfg_set = TRUE;
+ } else {
+ core_str_ctx->opcfg_set = FALSE;
+ /* Must not be decoding without output configuration */
+ VDEC_ASSERT(0);
+ }
+
+ memset(&supp_check, 0, sizeof(supp_check));
+
+ if (vdec_size_nz(core_str_ctx->comseq_hdr_info.max_frame_size))
+ supp_check.comseq_hdrinfo = &core_str_ctx->comseq_hdr_info;
+
+ if (core_str_ctx->opcfg_set) {
+ supp_check.op_cfg = &core_str_ctx->op_cfg;
+ supp_check.disp_pictbuf = &core_str_ctx->disp_pict_buf;
+ }
+ supp_check.non_cfg_req = TRUE;
+ ret = core_check_decoder_support(dd_str_context->dd_dev_context,
+ &dd_str_context->str_config_data,
+ &dd_str_context->prev_comseq_hdr_info,
+ &dd_str_context->prev_pict_hdr_info,
+ &dd_str_context->map_buf_info,
+ &supp_check);
+ if (ret != IMG_SUCCESS)
+ return ret;
+
+ /* Return success.. */
+ return IMG_SUCCESS;
+}
+
+/*
+ * @Function core_deinitialise
+ */
+int core_deinitialise(void)
+{
+ struct vdecdd_dddev_context *dd_dev_ctx;
+ int ret;
+
+ dd_dev_ctx = global_core_ctx->dev_ctx;
+ VDEC_ASSERT(dd_dev_ctx);
+ if (!dd_dev_ctx)
+ return IMG_ERROR_NOT_INITIALISED;
+
+ ret = decoder_deinitialise(dd_dev_ctx->dec_context);
+ VDEC_ASSERT(ret == IMG_SUCCESS);
+
+ /* Free context resources.. */
+ rman_destroy_bucket(dd_dev_ctx->res_buck_handle);
+
+ rman_deinitialise();
+
+ kfree(dd_dev_ctx);
+
+ global_core_ctx->dev_ctx = NULL;
+
+ kfree(global_core_ctx);
+ global_core_ctx = NULL;
+
+ is_core_initialized = FALSE;
+
+	pr_debug("Core deinitialised successfully\n");
+ return IMG_SUCCESS;
+}
+
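+/*
+ * @Function core_get_mb_num
+ * @Description
+ * Returns the number of macroblocks covering a width x height picture,
+ * with the height rounded up to a whole macroblock pair. For example,
+ * assuming VDEC_MB_DIMENSION is 16, a 1920x1080 picture gives
+ * (1920 / 16) * (ALIGN(1080, 32) / 16) = 120 * 68 = 8160 macroblocks.
+ */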
+static int core_get_mb_num(unsigned int width, unsigned int height)
+{
+ /*
+ * Calculate the number of MBs needed for current video
+ * sequence settings.
+ */
+ unsigned int width_mb = ALIGN(width, VDEC_MB_DIMENSION) / VDEC_MB_DIMENSION;
+ unsigned int height_mb = ALIGN(height, 2 * VDEC_MB_DIMENSION) / VDEC_MB_DIMENSION;
+
+ return width_mb * height_mb;
+}
+
+static int core_common_bufs_getsize(struct core_stream_context *core_str_ctx,
+ const struct vdec_comsequ_hdrinfo *comseq_hdrinfo,
+ struct vdec_pict_size *max_pict_size,
+ struct core_pict_bufsize_info *size_info,
+ struct core_seq_resinfo *seq_res_info,
+ unsigned char *res_needed)
+{
+ enum vdec_vid_std vid_std = core_str_ctx->dd_str_ctx->str_config_data.vid_std;
+ unsigned int std_idx = vid_std - 1;
+ unsigned int mb_num = 0;
+
+ if (core_str_ctx->dd_str_ctx->str_config_data.vid_std >= VDEC_STD_MAX)
+ return IMG_ERROR_GENERIC_FAILURE;
+
+ /* Reset the MB parameters buffer size. */
+ size_info->mbparams_bufsize = 0;
+
+ if (mbparam_allocinfo[std_idx].alloc_mbparam_bufs) {
+ *res_needed = TRUE;
+
+ /*
+ * Calculate the number of MBs needed for current video
+ * sequence settings.
+ */
+ mb_num = core_get_mb_num(max_pict_size->width, max_pict_size->height);
+
+ /* Calculate the final number of MBs needed. */
+ mb_num += mbparam_allocinfo[std_idx].overalloc_mbnum;
+
+ /* Calculate the MB params buffer size. */
+ size_info->mbparams_bufsize = mb_num * mbparam_allocinfo[std_idx].mbparam_size;
+
+ /* Adjust the buffer size for MSVDX. */
+ vdecddutils_buf_vxd_adjust_size(&size_info->mbparams_bufsize);
+
+ if (comseq_hdrinfo->separate_chroma_planes)
+ size_info->mbparams_bufsize *= 3;
+ }
+
+ return IMG_SUCCESS;
+}
+
+/*
+ * @Function core_pict_res_getinfo
+ */
+static int
+core_pict_res_getinfo(struct core_stream_context *core_str_ctx,
+ const struct vdec_comsequ_hdrinfo *comseq_hdrinfo,
+ const struct vdec_str_opconfig *op_cfg,
+ const struct vdecdd_ddpict_buf *disp_pictbuf,
+ struct core_pict_resinfo *pict_resinfo,
+ struct core_seq_resinfo *seq_resinfo)
+{
+ struct vdec_pict_size coded_pict_size;
+ struct dec_ctx *decctx;
+ unsigned char res_needed = FALSE;
+ int ret;
+
+ /* Reset the picture resource info. */
+ memset(pict_resinfo, 0, sizeof(*pict_resinfo));
+
+ coded_pict_size = comseq_hdrinfo->max_frame_size;
+
+ VDEC_ASSERT(core_str_ctx->std_spec_ops);
+ if (core_str_ctx->std_spec_ops->bufs_get_size)
+ core_str_ctx->std_spec_ops->bufs_get_size(core_str_ctx, comseq_hdrinfo,
+ &coded_pict_size,
+ &pict_resinfo->size_info, seq_resinfo, &res_needed);
+
+ /* If any picture resources are needed... */
+ if (res_needed) {
+ /* Get the number of resources required. */
+ ret = vdecddutils_get_minrequired_numpicts
+ (&core_str_ctx->dd_str_ctx->str_config_data,
+ comseq_hdrinfo, op_cfg,
+ &pict_resinfo->pict_res_num);
+ VDEC_ASSERT(ret == IMG_SUCCESS);
+ if (ret != IMG_SUCCESS)
+ return ret;
+
+ decctx = (struct dec_ctx *)global_core_ctx->dev_ctx->dec_context;
+
+ if (core_str_ctx->dd_str_ctx->str_config_data.vid_std == VDEC_STD_HEVC)
+ pict_resinfo->pict_res_num += decctx->dev_cfg->num_slots_per_pipe - 1;
+ else
+ pict_resinfo->pict_res_num +=
+ decctx->num_pipes * decctx->dev_cfg->num_slots_per_pipe - 1;
+ }
+
+ return IMG_SUCCESS;
+}
+
+static int core_alloc_resbuf(struct vdecdd_ddbuf_mapinfo **buf_handle,
+ unsigned int size, void *mmu_handle,
+ struct vxdio_mempool mem_pool)
+{
+ int ret;
+ struct vdecdd_ddbuf_mapinfo *buf;
+
+ *buf_handle = kzalloc(sizeof(**buf_handle), GFP_KERNEL);
+ buf = *buf_handle;
+ VDEC_ASSERT(buf);
+ if (buf) {
+ buf->mmuheap_id = MMU_HEAP_STREAM_BUFFERS;
+#ifdef DEBUG_DECODER_DRIVER
+ pr_info("%s:%d calling MMU_StreamMalloc", __func__, __LINE__);
+#endif
+ ret = mmu_stream_alloc(mmu_handle, buf->mmuheap_id,
+ mem_pool.mem_heap_id,
+ mem_pool.mem_attrib, size,
+ DEV_MMU_PAGE_SIZE,
+ &buf->ddbuf_info);
+
+ VDEC_ASSERT(ret == IMG_SUCCESS);
+ if (ret != IMG_SUCCESS)
+ ret = IMG_ERROR_OUT_OF_MEMORY;
+ } else {
+ ret = IMG_ERROR_OUT_OF_MEMORY;
+ }
+ return ret;
+}
+
+#ifdef SEQ_RES_NEEDED
+static int core_alloc_common_sequence_buffers(struct core_stream_context *core_str_ctx,
+ struct vdecdd_seq_resint *seqres_int,
+ struct vxdio_mempool mem_pool,
+ struct core_seq_resinfo *seqres_info,
+ struct core_pict_resinfo *pictres_info,
+ const struct vdec_str_opconfig *op_cfg,
+ const struct vdecdd_ddpict_buf *disp_pict_buf)
+{
+ int ret = IMG_SUCCESS;
+#ifdef ERROR_CONCEALMENT
+ enum vdec_vid_std vid_std = core_str_ctx->dd_str_ctx->str_config_data.vid_std;
+ unsigned int std_idx = vid_std - 1;
+ struct vidio_ddbufinfo *err_buf_info;
+
+ /* Allocate error concealment pattern frame for current sequence */
+ if (err_recovery_frame_info[std_idx].enabled) {
+ struct vdec_pict_bufconfig buf_config;
+ unsigned int size;
+
+ buf_config = disp_pict_buf->buf_config;
+ size = buf_config.coded_width * buf_config.coded_height;
+
+ if (err_recovery_frame_info[std_idx].max_size > size) {
+ seqres_int->err_pict_buf = kzalloc(sizeof(*seqres_int->err_pict_buf),
+ GFP_KERNEL);
+ VDEC_ASSERT(seqres_int->err_pict_buf);
+ if (!seqres_int->err_pict_buf)
+ return IMG_ERROR_OUT_OF_MEMORY;
+
+ seqres_int->err_pict_buf->mmuheap_id = MMU_HEAP_STREAM_BUFFERS;
+
+#ifdef DEBUG_DECODER_DRIVER
+ pr_info("===== %s:%d calling MMU_StreamMalloc", __func__, __LINE__);
+#endif
+ ret = mmu_stream_alloc(core_str_ctx->dd_str_ctx->mmu_str_handle,
+ seqres_int->err_pict_buf->mmuheap_id,
+ mem_pool.mem_heap_id,
+ (enum sys_emem_attrib)(mem_pool.mem_attrib |
+ SYS_MEMATTRIB_CPU_WRITE),
+ buf_config.buf_size,
+ DEV_MMU_PAGE_ALIGNMENT,
+ &seqres_int->err_pict_buf->ddbuf_info);
+ if (ret != IMG_SUCCESS)
+ return IMG_ERROR_OUT_OF_MEMORY;
+
+ /* make grey pattern - luma & chroma at mid-rail */
+ err_buf_info = &seqres_int->err_pict_buf->ddbuf_info;
+ if (op_cfg->pixel_info.mem_pkg == PIXEL_BIT10_MP) {
+ unsigned int *out = (unsigned int *)err_buf_info->cpu_virt;
+ unsigned int i;
+
+ for (i = 0; i < err_buf_info->buf_size / sizeof(unsigned int); i++)
+ /* See PIXEL_BIT10_MP layout definition */
+ out[i] = 0x20080200;
+ } else {
+ /* Note: Setting 0x80 also gives grey pattern
+					 * for 10bit unpacked MSB format.
+ */
+ memset(err_buf_info->cpu_virt, 0x80, err_buf_info->buf_size);
+ }
+ }
+ }
+#endif
+ return ret;
+}
+#endif
+
+/*
+ * @Function core_do_resource_realloc
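+ * @Description
+ * Returns TRUE if a full reallocation of the internal resources is required
+ * for the new settings, or FALSE if the existing resources can be reused.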
+ */
+static unsigned char core_do_resource_realloc(struct core_stream_context *core_str_ctx,
+ struct core_pict_resinfo *pictres_info,
+ struct core_seq_resinfo *seqres_info)
+{
+ VDEC_ASSERT(core_str_ctx->std_spec_ops);
+ /* If buffer sizes are sufficient and only the greater number of resources is needed... */
+ if (core_str_ctx->pict_resinfo.size_info.mbparams_bufsize >=
+ pictres_info->size_info.mbparams_bufsize &&
+ (core_str_ctx->std_spec_ops->is_stream_resource_suitable ?
+ core_str_ctx->std_spec_ops->is_stream_resource_suitable(pictres_info,
+ &core_str_ctx->pict_resinfo,
+ seqres_info, &core_str_ctx->seq_resinfo) : TRUE) &&
+ core_str_ctx->pict_resinfo.pict_res_num < pictres_info->pict_res_num)
+ /* ...full internal resource reallocation is not required. */
+ return FALSE;
+
+ /* Otherwise request full internal resource reallocation. */
+ return TRUE;
+}
+
+/*
+ * @Function core_is_stream_resource_suitable
+ */
+static unsigned char core_is_stream_resource_suitable
+ (struct core_stream_context *core_str_ctx,
+ const struct vdec_comsequ_hdrinfo *comseq_hdrinfo,
+ const struct vdec_str_opconfig *op_cfg,
+ const struct vdecdd_ddpict_buf *disp_pict_buf,
+ struct core_pict_resinfo *pictres_info,
+ struct core_seq_resinfo *seqres_info_ptr)
+{
+ int ret;
+ struct core_pict_resinfo aux_pictes_info;
+ struct core_pict_resinfo *aux_pictes_info_ptr;
+ struct core_seq_resinfo seqres_info;
+
+ /* If resource info is needed externally, just use it. Otherwise use internal structure. */
+ if (pictres_info)
+ aux_pictes_info_ptr = pictres_info;
+ else
+ aux_pictes_info_ptr = &aux_pictes_info;
+
+ if (!seqres_info_ptr)
+ seqres_info_ptr = &seqres_info;
+
+ /* Get the resource info for current settings. */
+ ret = core_pict_res_getinfo(core_str_ctx, comseq_hdrinfo, op_cfg, disp_pict_buf,
+ aux_pictes_info_ptr, seqres_info_ptr);
+ VDEC_ASSERT(ret == IMG_SUCCESS);
+ if (ret != IMG_SUCCESS)
+ return FALSE;
+
+ VDEC_ASSERT(core_str_ctx->std_spec_ops);
+ if (core_str_ctx->std_spec_ops->is_stream_resource_suitable) {
+ if (!core_str_ctx->std_spec_ops->is_stream_resource_suitable
+ (aux_pictes_info_ptr,
+ &core_str_ctx->pict_resinfo,
+ seqres_info_ptr, &core_str_ctx->seq_resinfo))
+ return FALSE;
+ }
+
+ /* Check the number of picture resources required against the current number. */
+ if (aux_pictes_info_ptr->pict_res_num > core_str_ctx->pict_resinfo.pict_res_num)
+ return FALSE;
+
+ return TRUE;
+}
+
+static int core_alloc_common_pict_buffers(struct core_stream_context *core_str_ctx,
+ struct vdecdd_pict_resint *pictres_int,
+ struct vxdio_mempool mem_pool,
+ struct core_pict_resinfo *pictres_info)
+{
+ int ret = IMG_SUCCESS;
+
+ /* If MB params buffers are needed... */
+ if (pictres_info->size_info.mbparams_bufsize > 0)
+ /* Allocate the MB parameters buffer info structure. */
+ ret = core_alloc_resbuf(&pictres_int->mb_param_buf,
+ pictres_info->size_info.mbparams_bufsize,
+ core_str_ctx->dd_str_ctx->mmu_str_handle,
+ mem_pool);
+
+ return ret;
+}
+
+/*
+ * @Function core_stream_resource_create
+ */
+static int core_stream_resource_create(struct core_stream_context *core_str_ctx,
+ unsigned char closed_gop, unsigned int mem_heap_id,
+ const struct vdec_comsequ_hdrinfo *comseq_hdrinfo,
+ const struct vdec_str_opconfig *op_cfg,
+ const struct vdecdd_ddpict_buf *disp_pict_buf)
+{
+ struct vdecdd_pict_resint *pictres_int = NULL;
+ int ret = IMG_SUCCESS;
+ unsigned int i, start_cnt = 0;
+ struct core_pict_resinfo pictres_info;
+ struct vdecdd_seq_resint *seqres_int = NULL;
+ struct core_seq_resinfo seqres_info;
+ struct vxdio_mempool mem_pool;
+
+ mem_pool.mem_heap_id = mem_heap_id;
+ mem_pool.mem_attrib = (enum sys_emem_attrib)(SYS_MEMATTRIB_UNCACHED
+ | SYS_MEMATTRIB_WRITECOMBINE | SYS_MEMATTRIB_INTERNAL);
+
+#ifdef SEQ_RES_NEEDED
+ seqres_int = lst_first(&core_str_ctx->seq_res_list);
+#endif
+ /*
+ * Clear the reconstructed picture buffer layout if the previous
+ * references are no longer used. Only under these circumstances
+ * should the bitstream resolution change.
+ */
+ if (closed_gop) {
+ memset(&core_str_ctx->recon_pictbuf.rend_info, 0,
+ sizeof(core_str_ctx->recon_pictbuf.rend_info));
+ memset(&core_str_ctx->coded_pict_size, 0, sizeof(core_str_ctx->coded_pict_size));
+ } else {
+ if (vdec_size_ne(core_str_ctx->coded_pict_size, comseq_hdrinfo->max_frame_size)) {
+ VDEC_ASSERT(FALSE);
+ pr_err("Coded picture size changed within the closed GOP (i.e. mismatched references)");
+ }
+ }
+
+ /* If current buffers are not suitable for specified VSH/Output config... */
+ if (!core_is_stream_resource_suitable(core_str_ctx, comseq_hdrinfo,
+ op_cfg, disp_pict_buf, &pictres_info,
+ &seqres_info)) {
+ /* If full internal resource reallocation is needed... */
+ if (core_do_resource_realloc(core_str_ctx, &pictres_info, &seqres_info)) {
+ /*
+ * Mark all the active resources as deprecated and
+ * free-up where no longer used.
+ */
+ core_stream_resource_deprecate(core_str_ctx);
+ } else {
+ /* Use current buffer size settings. */
+ pictres_info.size_info = core_str_ctx->pict_resinfo.size_info;
+ seqres_info = core_str_ctx->seq_resinfo;
+
+ /* Set start counter to only allocate the number of
+ * resources that are missing.
+ */
+ start_cnt = core_str_ctx->pict_resinfo.pict_res_num;
+ }
+
+#ifdef SEQ_RES_NEEDED
+ /* allocate sequence resources */
+ {
+ seqres_int = kzalloc(sizeof(*seqres_int), GFP_KERNEL);
+ VDEC_ASSERT(seqres_int);
+ if (!seqres_int)
+ goto err_out_of_memory;
+
+ lst_add(&core_str_ctx->seq_res_list, seqres_int);
+ /* Allocate sequence buffers common for all standards. */
+ ret = core_alloc_common_sequence_buffers
+ (core_str_ctx, seqres_int, mem_pool,
+ &seqres_info,
+ &pictres_info, op_cfg, disp_pict_buf);
+ if (ret != IMG_SUCCESS)
+ goto err_out_of_memory;
+
+ VDEC_ASSERT(core_str_ctx->std_spec_ops);
+ if (core_str_ctx->std_spec_ops->alloc_sequence_buffers) {
+ ret = core_str_ctx->std_spec_ops->alloc_sequence_buffers
+ (core_str_ctx, seqres_int,
+ mem_pool, &seqres_info);
+ if (ret != IMG_SUCCESS)
+ goto err_out_of_memory;
+ }
+ }
+#endif
+ /* Allocate resources for current settings. */
+ for (i = start_cnt; i < pictres_info.pict_res_num; i++) {
+ /* Allocate the picture resources structure. */
+ pictres_int = kzalloc(sizeof(*pictres_int), GFP_KERNEL);
+ VDEC_ASSERT(pictres_int);
+ if (!pictres_int)
+ goto err_out_of_memory;
+
+ /* Allocate picture buffers common for all standards. */
+ ret = core_alloc_common_pict_buffers(core_str_ctx, pictres_int,
+ mem_pool, &pictres_info);
+ if (ret != IMG_SUCCESS)
+ goto err_out_of_memory;
+
+ /* Allocate standard specific picture buffers. */
+ VDEC_ASSERT(core_str_ctx->std_spec_ops);
+ if (core_str_ctx->std_spec_ops->alloc_picture_buffers) {
+ ret = core_str_ctx->std_spec_ops->alloc_picture_buffers
+ (core_str_ctx, pictres_int,
+ mem_pool, &pictres_info);
+ if (ret != IMG_SUCCESS)
+ goto err_out_of_memory;
+ }
+
+ /* attach sequence resources */
+#ifdef SEQ_RES_NEEDED
+ resource_item_use(&seqres_int->ref_count);
+ pictres_int->seq_resint = seqres_int;
+#endif
+ lst_add(&core_str_ctx->pict_res_list, pictres_int);
+ core_str_ctx->pict_resinfo.pict_res_num++;
+ }
+ }
+
+ /*
+ * When demand for picture resources reduces (in quantity) the extra buffers
+ * are still retained. Preserve the existing count in case the demand increases
+ * again, at which time these residual buffers won't need to be reallocated.
+ */
+ pictres_info.pict_res_num = core_str_ctx->pict_resinfo.pict_res_num;
+
+ /* Store the current resource config. */
+ core_str_ctx->pict_resinfo = pictres_info;
+ core_str_ctx->seq_resinfo = seqres_info;
+
+ pictres_int = lst_first(&core_str_ctx->pict_res_list);
+ while (pictres_int) {
+ /*
+ * Increment the reference count to indicate that this resource is also
+ * held by plant until it is added to the Scheduler list. If the resource has
+ * not just been created it might already be in circulation.
+ */
+ resource_item_use(&pictres_int->ref_cnt);
+#ifdef SEQ_RES_NEEDED
+ /* attach sequence resources */
+ resource_item_use(&seqres_int->ref_count);
+ pictres_int->seq_resint = seqres_int;
+#endif
+ /* Add the internal picture resources to the list. */
+ ret = resource_list_add_img(&core_str_ctx->aux_pict_res_list,
+ pictres_int, 0, &pictres_int->ref_cnt);
+
+ pictres_int = lst_next(pictres_int);
+ }
+
+ /*
+ * Set the reconstructed buffer properties if they
+ * may have been changed.
+ */
+ if (core_str_ctx->recon_pictbuf.rend_info.rendered_size == 0) {
+ core_str_ctx->recon_pictbuf.rend_info =
+ disp_pict_buf->rend_info;
+ core_str_ctx->recon_pictbuf.buf_config =
+ disp_pict_buf->buf_config;
+ core_str_ctx->coded_pict_size = comseq_hdrinfo->max_frame_size;
+ } else {
+ if (memcmp(&disp_pict_buf->rend_info,
+ &core_str_ctx->recon_pictbuf.rend_info,
+ sizeof(core_str_ctx->recon_pictbuf.rend_info))) {
+ /*
+ * Reconstructed picture buffer information has changed
+ * during a closed GOP.
+ */
+ VDEC_ASSERT
+ ("Reconstructed picture buffer information cannot change within a GOP"
+ == NULL);
+ pr_err("Reconstructed picture buffer information cannot change within a GOP.");
+ return IMG_ERROR_GENERIC_FAILURE;
+ }
+ }
+
+ return IMG_SUCCESS;
+
+ /* Handle out of memory errors. */
+err_out_of_memory:
+ /* Free resources being currently allocated. */
+ if (pictres_int) {
+ core_free_common_picture_resource(core_str_ctx, pictres_int);
+ if (core_str_ctx->std_spec_ops->free_picture_resource)
+ core_str_ctx->std_spec_ops->free_picture_resource(core_str_ctx,
+ pictres_int);
+
+ kfree(pictres_int);
+ }
+
+#ifdef SEQ_RES_NEEDED
+ if (seqres_int) {
+ core_free_common_sequence_resource(core_str_ctx, seqres_int);
+
+ if (core_str_ctx->std_spec_ops->free_sequence_resource)
+ core_str_ctx->std_spec_ops->free_sequence_resource(core_str_ctx,
+ seqres_int);
+
+ VDEC_ASSERT(lst_last(&core_str_ctx->seq_res_list) == seqres_int);
+ lst_remove(&core_str_ctx->seq_res_list, seqres_int);
+ kfree(seqres_int);
+ }
+#endif
+
+ /* Free all the other resources. */
+ core_stream_resource_destroy(core_str_ctx);
+
+ pr_err("[USERSID=0x%08X] Core not able to allocate stream resources due to lack of memory",
+ core_str_ctx->dd_str_ctx->str_config_data.user_str_id);
+
+ return IMG_ERROR_OUT_OF_MEMORY;
+}
+
+static int
+core_reconfigure_recon_pictbufs(struct core_stream_context *core_str_ctx,
+ unsigned char no_references)
+{
+ struct vdecdd_ddstr_ctx *dd_str_ctx;
+ int ret;
+
+ dd_str_ctx = core_str_ctx->dd_str_ctx;
+ VDEC_ASSERT(dd_str_ctx->str_op_configured);
+
+ /* Re-configure the internal picture buffers now that none are held. */
+ ret = core_stream_resource_create(core_str_ctx, no_references,
+ dd_str_ctx->dd_dev_context->internal_heap_id,
+ &dd_str_ctx->comseq_hdr_info,
+ &dd_str_ctx->opconfig,
+ &dd_str_ctx->disp_pict_buf);
+ return ret;
+}
+
+/*
+ * @Function core_picture_prepare
+ */
+static int core_picture_prepare(struct core_stream_context *core_str_ctx,
+ struct vdecdd_str_unit *str_unit)
+{
+ int ret = IMG_SUCCESS;
+ struct vdecdd_picture *pict_local = NULL;
+ unsigned int avail = 0;
+ unsigned char need_pict_res;
+
+ /*
+ * For normal decode, setup picture data.
+ * Preallocate the picture structure.
+ */
+ pict_local = kzalloc(sizeof(*pict_local), GFP_KERNEL);
+ if (!pict_local)
+ return IMG_ERROR_OUT_OF_MEMORY;
+
+ /* Determine whether the picture can be decoded. */
+ ret = decoder_get_load(core_str_ctx->dd_str_ctx->dec_ctx, &global_avail_slots);
+ if (ret != IMG_SUCCESS) {
+		pr_err("No resources available to decode this picture");
+ ret = IMG_ERROR_COULD_NOT_OBTAIN_RESOURCE;
+ goto unwind;
+ }
+
+ /*
+ * Load and availability is cached in stream context simply
+ * for status reporting.
+ */
+ avail = core_get_resource_availability(core_str_ctx);
+
+ if ((avail & CORE_AVAIL_CORE) == 0) {
+ /* Return straight away if the core is not available */
+ ret = IMG_ERROR_COULD_NOT_OBTAIN_RESOURCE;
+ goto unwind;
+ }
+
+ if (core_str_ctx->new_op_cfg || core_str_ctx->new_seq) {
+ /*
+ * Reconstructed buffers should be checked for reconfiguration
+ * under these conditions:
+ * 1. New output configuration,
+ * 2. New sequence.
+ * Core can decide to reset the reconstructed buffer properties
+ * if there are no previous reference pictures used
+ * (i.e. at a closed GOP). This code must go here because we
+ * may not stop when new sequence is found or references become
+ * unused.
+ */
+ ret = core_reconfigure_recon_pictbufs(core_str_ctx,
+ core_str_ctx->no_prev_refs_used);
+ VDEC_ASSERT(ret == IMG_SUCCESS);
+ if (ret != IMG_SUCCESS)
+ goto unwind;
+ }
+
+ /* Update the display information for this picture. */
+ ret = vdecddutils_get_display_region(&str_unit->pict_hdr_info->coded_frame_size,
+ &str_unit->pict_hdr_info->disp_info.enc_disp_region,
+ &str_unit->pict_hdr_info->disp_info.disp_region);
+
+ if (ret != IMG_SUCCESS)
+ goto unwind;
+
+ /* Clear internal state */
+ core_str_ctx->new_seq = FALSE;
+ core_str_ctx->new_op_cfg = FALSE;
+ core_str_ctx->no_prev_refs_used = FALSE;
+
+ /*
+ * Recalculate this since we might have just created
+ * internal resources.
+ */
+ core_str_ctx->res_avail = core_get_resource_availability(core_str_ctx);
+
+	/*
+	 * If picture resources were needed for this stream, the picture
+	 * resource list would not be empty.
+	 */
+ need_pict_res = !lst_empty(&core_str_ctx->aux_pict_res_list);
+ /* If there are resources available */
+ if ((core_str_ctx->res_avail & CORE_AVAIL_PICTBUF) &&
+ (!need_pict_res || (core_str_ctx->res_avail & CORE_AVAIL_PICTRES))) {
+ /* Pick internal picture resources. */
+ if (need_pict_res) {
+ pict_local->pict_res_int =
+ resource_list_get_avail(&core_str_ctx->aux_pict_res_list);
+
+ VDEC_ASSERT(pict_local->pict_res_int);
+ if (!pict_local->pict_res_int) {
+ ret = IMG_ERROR_COULD_NOT_OBTAIN_RESOURCE;
+ goto unwind;
+ }
+ }
+
+ /* Pick the client image buffer. */
+ pict_local->disp_pict_buf.pict_buf =
+ resource_list_get_avail(&core_str_ctx->pict_buf_list);
+ VDEC_ASSERT(pict_local->disp_pict_buf.pict_buf);
+ if (!pict_local->disp_pict_buf.pict_buf) {
+ ret = IMG_ERROR_COULD_NOT_OBTAIN_RESOURCE;
+ goto unwind;
+ }
+ } else {
+ /* Need resources to process picture start. */
+ ret = IMG_ERROR_COULD_NOT_OBTAIN_RESOURCE;
+ goto unwind;
+ }
+
+ /* Ensure that the buffer contains layout information. */
+ pict_local->disp_pict_buf.rend_info = core_str_ctx->disp_pict_buf.rend_info;
+ pict_local->disp_pict_buf.buf_config = core_str_ctx->disp_pict_buf.buf_config;
+ pict_local->op_config = core_str_ctx->op_cfg;
+ pict_local->last_pict_in_seq = str_unit->last_pict_in_seq;
+
+ str_unit->dd_pict_data = pict_local;
+
+ /* Indicate that all necessary resources are now available. */
+ if (core_str_ctx->res_avail != ~0) {
+#ifdef DEBUG_DECODER_DRIVER
+ pr_info("LAST AVAIL: 0x%08X\n", core_str_ctx->res_avail);
+#endif
+ core_str_ctx->res_avail = ~0;
+ }
+
+#ifdef DEBUG_DECODER_DRIVER
+ /* dump decoder internal resource addresses */
+ if (pict_local->pict_res_int) {
+ if (pict_local->pict_res_int->mb_param_buf) {
+ pr_info("[USERSID=0x%08X] MB parameter buffer device virtual address: 0x%08X",
+ core_str_ctx->dd_str_ctx->str_config_data.user_str_id,
+ pict_local->pict_res_int->mb_param_buf->ddbuf_info.dev_virt);
+ }
+
+ if (core_str_ctx->comseq_hdr_info.separate_chroma_planes) {
+ pr_info("[USERSID=0x%08X] Display picture virtual address: LUMA 0x%08X, CHROMA 0x%08X, CHROMA2 0x%08X",
+ core_str_ctx->dd_str_ctx->str_config_data.user_str_id,
+ pict_local->disp_pict_buf.pict_buf->ddbuf_info.dev_virt,
+ pict_local->disp_pict_buf.pict_buf->ddbuf_info.dev_virt +
+ pict_local->disp_pict_buf.rend_info.plane_info
+ [VDEC_PLANE_VIDEO_U].offset,
+ pict_local->disp_pict_buf.pict_buf->ddbuf_info.dev_virt +
+ pict_local->disp_pict_buf.rend_info.plane_info
+ [VDEC_PLANE_VIDEO_V].offset);
+ } else {
+ pr_info("[USERSID=0x%08X] Display picture virtual address: LUMA 0x%08X, CHROMA 0x%08X",
+ core_str_ctx->dd_str_ctx->str_config_data.user_str_id,
+ pict_local->disp_pict_buf.pict_buf->ddbuf_info.dev_virt,
+ pict_local->disp_pict_buf.pict_buf->ddbuf_info.dev_virt +
+ pict_local->disp_pict_buf.rend_info.plane_info
+ [VDEC_PLANE_VIDEO_UV].offset);
+ }
+ }
+#endif
+
+ ret = core_picture_attach_resources(core_str_ctx, str_unit, TRUE);
+ VDEC_ASSERT(ret == IMG_SUCCESS);
+ if (ret != IMG_SUCCESS)
+ goto unwind;
+
+ return IMG_SUCCESS;
+
+unwind:
+ if (pict_local->pict_res_int) {
+ resource_item_return(&pict_local->pict_res_int->ref_cnt);
+ pict_local->pict_res_int = NULL;
+ }
+ if (pict_local->disp_pict_buf.pict_buf) {
+ resource_item_return(&pict_local->disp_pict_buf.pict_buf->ddbuf_info.ref_count);
+ pict_local->disp_pict_buf.pict_buf = NULL;
+ }
+ kfree(pict_local);
+ return ret;
+}
+
+/*
+ * @Function core_validate_new_sequence
+ */
+static int core_validate_new_sequence(struct core_stream_context *core_str_ctx,
+ const struct vdec_comsequ_hdrinfo *comseq_hdrinfo)
+{
+ int ret;
+ struct vdecdd_supp_check supp_check;
+ struct vdecdd_ddstr_ctx *dd_str_ctx;
+ unsigned int num_req_bufs_prev, num_req_bufs_cur;
+ struct vdecdd_mapbuf_info mapbuf_info;
+
+ memset(&supp_check, 0, sizeof(supp_check));
+
+	/*
+	 * Omit the picture header from this setup since we cannot
+	 * validate it here.
+	 */
+ supp_check.comseq_hdrinfo = comseq_hdrinfo;
+
+ if (core_str_ctx->opcfg_set) {
+ supp_check.op_cfg = &core_str_ctx->op_cfg;
+ supp_check.disp_pictbuf = &core_str_ctx->disp_pict_buf;
+
+ ret = vdecddutils_get_minrequired_numpicts
+ (&core_str_ctx->dd_str_ctx->str_config_data,
+ &core_str_ctx->comseq_hdr_info,
+ &core_str_ctx->op_cfg,
+ &num_req_bufs_prev);
+ VDEC_ASSERT(ret == IMG_SUCCESS);
+ if (ret != IMG_SUCCESS)
+ return ret;
+
+ ret = vdecddutils_get_minrequired_numpicts
+ (&core_str_ctx->dd_str_ctx->str_config_data,
+ comseq_hdrinfo,
+ &core_str_ctx->op_cfg,
+ &num_req_bufs_cur);
+ VDEC_ASSERT(ret == IMG_SUCCESS);
+ if (ret != IMG_SUCCESS)
+ return ret;
+ }
+
+ /* Check if the output configuration is compatible with new VSH. */
+ dd_str_ctx = core_str_ctx->dd_str_ctx;
+ mapbuf_info = dd_str_ctx->map_buf_info;
+
+ /* Check the compatibility of the bitstream data and configuration */
+ supp_check.non_cfg_req = TRUE;
+ ret = core_check_decoder_support(dd_str_ctx->dd_dev_context,
+ &dd_str_ctx->str_config_data,
+ &dd_str_ctx->prev_comseq_hdr_info,
+ &dd_str_ctx->prev_pict_hdr_info,
+ &mapbuf_info, &supp_check);
+ if (ret != IMG_SUCCESS)
+ return ret;
+
+ core_str_ctx->new_seq = TRUE;
+
+ return IMG_SUCCESS;
+}
+
+static int
+core_validate_new_picture(struct core_stream_context *core_str_ctx,
+ const struct bspp_pict_hdr_info *pict_hdrinfo,
+ unsigned int *features)
+{
+ int ret;
+ struct vdecdd_supp_check supp_check;
+ struct vdecdd_ddstr_ctx *dd_str_ctx;
+ struct vdecdd_mapbuf_info mapbuf_info;
+
+ memset(&supp_check, 0, sizeof(supp_check));
+ supp_check.comseq_hdrinfo = &core_str_ctx->comseq_hdr_info;
+ supp_check.pict_hdrinfo = pict_hdrinfo;
+
+	/*
+	 * The sequence parameters cannot become invalid during a sequence.
+	 * However, the output configuration may signal something that
+	 * changes compatibility at a closed GOP within a sequence
+	 * (e.g. the resolution may decrease significantly within a GOP and
+	 * scaling would not be supported). Such a resolution shift is not
+	 * signalled in the sequence header (which carries only the maximum)
+	 * and is only found now, when validating the first picture in the GOP.
+	 */
+ if (core_str_ctx->opcfg_set)
+ supp_check.op_cfg = &core_str_ctx->op_cfg;
+
+ /*
+ * Check if the new picture is compatible with the
+ * current driver state.
+ */
+ dd_str_ctx = core_str_ctx->dd_str_ctx;
+ mapbuf_info = dd_str_ctx->map_buf_info;
+
+ /* Check the compatibility of the bitstream data and configuration */
+ supp_check.non_cfg_req = TRUE;
+ ret = core_check_decoder_support(dd_str_ctx->dd_dev_context,
+ &dd_str_ctx->str_config_data,
+ &dd_str_ctx->prev_comseq_hdr_info,
+ &dd_str_ctx->prev_pict_hdr_info,
+ &mapbuf_info, &supp_check);
+
+ VDEC_ASSERT(ret == IMG_SUCCESS);
+ if (ret != IMG_SUCCESS)
+ return ret;
+
+ if (supp_check.unsupp_flags.str_opcfg || supp_check.unsupp_flags.pict_hdr)
+ return IMG_ERROR_NOT_SUPPORTED;
+
+ /*
+ * Clear the reconfiguration flags unless triggered by
+ * unsupported output config.
+ */
+ *features = supp_check.features;
+
+ return IMG_SUCCESS;
+}
+
+/*
+ * @Function core_stream_submit_unit
+ */
+int core_stream_submit_unit(unsigned int res_str_id, struct vdecdd_str_unit *str_unit)
+{
+ int ret;
+ unsigned char process_str_unit = TRUE;
+
+ struct vdecdd_ddstr_ctx *dd_str_context;
+ struct core_stream_context *core_str_ctx;
+
+ /*
+ * Stream based messages without a device context
+ * must have a stream ID.
+ */
+ VDEC_ASSERT(res_str_id);
+ VDEC_ASSERT(str_unit);
+
+ if (res_str_id == 0 || !str_unit) {
+ pr_err("Invalid params passed to %s\n", __func__);
+ return IMG_ERROR_INVALID_PARAMETERS;
+ }
+
+ /* Get access to stream context.. */
+ ret = rman_get_resource(res_str_id, VDECDD_STREAM_TYPE_ID, (void **)&core_str_ctx, NULL);
+ VDEC_ASSERT(ret == IMG_SUCCESS);
+ if (ret != IMG_SUCCESS)
+ return ret;
+
+ VDEC_ASSERT(core_str_ctx);
+ dd_str_context = core_str_ctx->dd_str_ctx;
+ VDEC_ASSERT(dd_str_context);
+
+ ret = resource_list_add_img(&core_str_ctx->str_unit_list, str_unit, 0, NULL);
+ VDEC_ASSERT(ret == IMG_SUCCESS);
+
+ pr_debug("%s stream unit type = %d\n", __func__, str_unit->str_unit_type);
+ switch (str_unit->str_unit_type) {
+ case VDECDD_STRUNIT_SEQUENCE_START:
+ if (str_unit->seq_hdr_info) {
+ /* Add sequence header to cache. */
+ ret =
+ resource_list_replace(&core_str_ctx->seq_hdr_list,
+ str_unit->seq_hdr_info,
+ str_unit->seq_hdr_info->sequ_hdr_id,
+ &str_unit->seq_hdr_info->ref_count,
+ NULL, NULL);
+
+ if (ret != IMG_SUCCESS)
+ pr_err("[USERSID=0x%08X] Failed to replace resource",
+ res_str_id);
+ } else {
+ /* ...or take from cache. */
+ str_unit->seq_hdr_info =
+ resource_list_getbyid(&core_str_ctx->seq_hdr_list,
+ str_unit->seq_hdr_id);
+ }
+
+ VDEC_ASSERT(str_unit->seq_hdr_info);
+ if (!str_unit->seq_hdr_info) {
+ pr_err("Sequence header information not available for current picture");
+ break;
+ }
+ /*
+ * Check that this latest sequence header information is
+ * compatible with current state and then if no errors store
+ * as current.
+ */
+ core_str_ctx->comseq_hdr_info = str_unit->seq_hdr_info->com_sequ_hdr_info;
+
+ ret = core_validate_new_sequence(core_str_ctx,
+ &str_unit->seq_hdr_info->com_sequ_hdr_info);
+ if (ret != IMG_SUCCESS)
+ return ret;
+
+ dd_str_context->prev_comseq_hdr_info =
+ dd_str_context->comseq_hdr_info;
+ dd_str_context->comseq_hdr_info =
+ str_unit->seq_hdr_info->com_sequ_hdr_info;
+
+#ifdef DEBUG_DECODER_DRIVER
+ pr_info("[SID=0x%08X] VSH: Maximum Frame Resolution [%dx%d]",
+ dd_str_context->res_str_id,
+ dd_str_context->comseq_hdr_info.max_frame_size.width,
+ dd_str_context->comseq_hdr_info.max_frame_size.height);
+#endif
+
+ break;
+
+ case VDECDD_STRUNIT_PICTURE_START:
+ /*
+ * Check that the picture configuration is compatible
+ * with the current state.
+ */
+ ret = core_validate_new_picture(core_str_ctx,
+ str_unit->pict_hdr_info,
+ &str_unit->features);
+ if (ret != IMG_SUCCESS) {
+ if (ret == IMG_ERROR_NOT_SUPPORTED) {
+ /*
+ * Do not process stream unit since there is
+ * something unsupported.
+ */
+ process_str_unit = FALSE;
+ break;
+ }
+ }
+
+ /* Prepare picture for decoding. */
+ ret = core_picture_prepare(core_str_ctx, str_unit);
+ if (ret != IMG_SUCCESS)
+ if (ret == IMG_ERROR_COULD_NOT_OBTAIN_RESOURCE ||
+ ret == IMG_ERROR_NOT_SUPPORTED)
+ /*
+ * Do not process stream unit since there is
+ * something unsupported or resources are not
+ * available.
+ */
+ process_str_unit = FALSE;
+ break;
+
+ default:
+ /*
+ * Sequence/picture headers should only be attached to
+ * corresponding units.
+ */
+ VDEC_ASSERT(!str_unit->seq_hdr_info);
+ VDEC_ASSERT(!str_unit->pict_hdr_info);
+ break;
+ }
+
+ if (process_str_unit) {
+ /* Submit stream unit to the decoder for processing. */
+ str_unit->decode = TRUE;
+ ret = decoder_stream_process_unit(dd_str_context->dec_ctx,
+ str_unit);
+ } else {
+ ret = IMG_ERROR_GENERIC_FAILURE;
+ }
+
+ return ret;
+}
+
+/*
+ * @Function core_stream_fill_pictbuf
+ */
+int core_stream_fill_pictbuf(unsigned int buf_map_id)
+{
+ int ret;
+ struct vdecdd_ddbuf_mapinfo *ddbuf_map_info;
+ struct vdecdd_ddstr_ctx *dd_str_ctx;
+ struct core_stream_context *core_str_ctx;
+
+ /* Get access to map info context.. */
+ ret = rman_get_resource(buf_map_id, VDECDD_BUFMAP_TYPE_ID,
+ (void **)&ddbuf_map_info, NULL);
+ VDEC_ASSERT(ret == IMG_SUCCESS);
+ if (ret != IMG_SUCCESS)
+ return ret;
+
+ dd_str_ctx = ddbuf_map_info->ddstr_context;
+
+ /* Get access to stream context.. */
+ ret = rman_get_resource(dd_str_ctx->res_str_id, VDECDD_STREAM_TYPE_ID,
+ (void **)&core_str_ctx, NULL);
+ VDEC_ASSERT(ret == IMG_SUCCESS);
+ if (ret != IMG_SUCCESS)
+ return ret;
+
+ /* Check buffer type. */
+ VDEC_ASSERT(ddbuf_map_info->buf_type == VDEC_BUFTYPE_PICTURE);
+
+ /* Add the image buffer to the list */
+ ret = resource_list_add_img(&core_str_ctx->pict_buf_list, ddbuf_map_info,
+ 0, &ddbuf_map_info->ddbuf_info.ref_count);
+
+ return ret;
+}
+
+/*
+ * @Function core_fn_free_mapped
+ */
+static void core_fn_free_mapped(void *param)
+{
+ struct vdecdd_ddbuf_mapinfo *ddbuf_map_info =
+ (struct vdecdd_ddbuf_mapinfo *)param;
+
+ /* Validate input arguments */
+ VDEC_ASSERT(param);
+
+ /* Do not free the MMU mapping. It is handled by talmmu code. */
+ kfree(ddbuf_map_info);
+}
+
+/*
+ * @Function core_stream_map_buf
+ */
+int core_stream_map_buf(unsigned int res_str_id, enum vdec_buf_type buf_type,
+ struct vdec_buf_info *buf_info, unsigned int *buf_map_id)
+{
+ int ret;
+ struct vdecdd_ddstr_ctx *dd_str_ctx;
+ struct core_stream_context *core_str_ctx;
+ struct vdecdd_ddbuf_mapinfo *ddbuf_map_info;
+
+ /*
+ * Stream based messages without a device context
+ * must have a stream ID.
+ */
+ VDEC_ASSERT(res_str_id);
+ VDEC_ASSERT(buf_type < VDEC_BUFTYPE_MAX);
+ VDEC_ASSERT(buf_info);
+ VDEC_ASSERT(buf_map_id);
+
+ /* Get access to stream context.. */
+ ret = rman_get_resource(res_str_id, VDECDD_STREAM_TYPE_ID,
+ (void **)&core_str_ctx, NULL);
+
+ VDEC_ASSERT(ret == IMG_SUCCESS);
+ if (ret != IMG_SUCCESS)
+ return ret;
+
+ dd_str_ctx = core_str_ctx->dd_str_ctx;
+
+ VDEC_ASSERT(dd_str_ctx);
+
+	/* Allocate the buffer map info structure.. */
+ ddbuf_map_info = kzalloc(sizeof(*ddbuf_map_info), GFP_KERNEL);
+ VDEC_ASSERT(ddbuf_map_info);
+
+ if (!ddbuf_map_info) {
+ pr_err("[SID=0x%08X] Failed to allocate memory for DD buffer map information",
+ dd_str_ctx->res_str_id);
+
+ return IMG_ERROR_OUT_OF_MEMORY;
+ }
+
+ /* Save the stream context etc. */
+ ddbuf_map_info->ddstr_context = dd_str_ctx;
+ ddbuf_map_info->buf_type = buf_type;
+
+#ifdef DEBUG_DECODER_DRIVER
+ pr_info("%s:%d vdec2plus: vxd map buff id %d", __func__, __LINE__,
+ buf_info->buf_id);
+#endif
+ ddbuf_map_info->buf_id = buf_info->buf_id;
+
+ /* Register the allocation as a stream resource.. */
+ ret = rman_register_resource(dd_str_ctx->res_buck_handle,
+ VDECDD_BUFMAP_TYPE_ID,
+ core_fn_free_mapped,
+ ddbuf_map_info,
+ &ddbuf_map_info->res_handle,
+ buf_map_id);
+ VDEC_ASSERT(ret == IMG_SUCCESS);
+ if (ret != IMG_SUCCESS)
+ goto error;
+
+ ddbuf_map_info->buf_map_id = *buf_map_id;
+
+ if (buf_type == VDEC_BUFTYPE_PICTURE) {
+ if (dd_str_ctx->map_buf_info.num_buf == 0) {
+ dd_str_ctx->map_buf_info.buf_size = buf_info->buf_size;
+ dd_str_ctx->map_buf_info.byte_interleave =
+ buf_info->pictbuf_cfg.byte_interleave;
+#ifdef DEBUG_DECODER_DRIVER
+ pr_info("[SID=0x%08X] Mapped Buffer size: %d (bytes)",
+ dd_str_ctx->res_str_id, buf_info->buf_size);
+#endif
+ } else {
+			/*
+			 * The same byte-interleave setting must be used
+			 * for all mapped picture buffers.
+			 */
+ if (buf_info->pictbuf_cfg.byte_interleave !=
+ dd_str_ctx->map_buf_info.byte_interleave) {
+ pr_err("[SID=0x%08X] Buffer cannot be mapped since its byte interleave value (%s) is not the same as buffers already mapped (%s)",
+ dd_str_ctx->res_str_id,
+ buf_info->pictbuf_cfg.byte_interleave ?
+ "ON" : "OFF",
+ dd_str_ctx->map_buf_info.byte_interleave ?
+ "ON" : "OFF");
+ ret = IMG_ERROR_INVALID_PARAMETERS;
+ goto error;
+ }
+ }
+
+ /* Configure the buffer.. */
+ ret = core_stream_set_pictbuf_config(dd_str_ctx, &buf_info->pictbuf_cfg);
+ VDEC_ASSERT(ret == IMG_SUCCESS);
+ if (ret != IMG_SUCCESS)
+ goto error;
+ }
+
+ /* Map heap from VDEC to MMU. */
+ switch (buf_type) {
+ case VDEC_BUFTYPE_BITSTREAM:
+ ddbuf_map_info->mmuheap_id = MMU_HEAP_BITSTREAM_BUFFERS;
+ break;
+
+ case VDEC_BUFTYPE_PICTURE:
+ mmu_get_heap(buf_info->pictbuf_cfg.stride[VDEC_PLANE_VIDEO_Y],
+ &ddbuf_map_info->mmuheap_id);
+ break;
+
+ default:
+ VDEC_ASSERT(FALSE);
+ }
+
+ /* Map this buffer into the MMU. */
+#ifdef DEBUG_DECODER_DRIVER
+ pr_info("----- %s:%d calling MMU_StreamMapExt", __func__, __LINE__);
+#endif
+ ret = mmu_stream_map_ext(dd_str_ctx->mmu_str_handle,
+ (enum mmu_eheap_id)ddbuf_map_info->mmuheap_id,
+ ddbuf_map_info->buf_id,
+ buf_info->buf_size, DEV_MMU_PAGE_SIZE,
+ buf_info->mem_attrib,
+ buf_info->cpu_linear_addr,
+ &ddbuf_map_info->ddbuf_info);
+ VDEC_ASSERT(ret == IMG_SUCCESS);
+ if (ret != IMG_SUCCESS)
+ goto error;
+
+ if (buf_type == VDEC_BUFTYPE_PICTURE)
+ dd_str_ctx->map_buf_info.num_buf++;
+
+ /*
+ * Initialise the reference count to indicate that the client
+ * still holds the buffer.
+ */
+ ddbuf_map_info->ddbuf_info.ref_count = 1;
+
+ /* Return success.. */
+ return IMG_SUCCESS;
+
+error:
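+	/*
+	 * If the resource was already registered, rman_free_resource() invokes
+	 * core_fn_free_mapped(), which frees ddbuf_map_info; otherwise free it
+	 * directly.
+	 */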
+ if (ddbuf_map_info) {
+ if (ddbuf_map_info->res_handle)
+ rman_free_resource(ddbuf_map_info->res_handle);
+ else
+ kfree(ddbuf_map_info);
+ }
+
+ return ret;
+}
+
+/*
+ * @Function core_stream_map_buf_sg
+ */
+int core_stream_map_buf_sg(unsigned int res_str_id, enum vdec_buf_type buf_type,
+ struct vdec_buf_info *buf_info,
+ void *sgt, unsigned int *buf_map_id)
+{
+ int ret;
+ struct vdecdd_ddstr_ctx *dd_str_ctx;
+ struct core_stream_context *core_str_ctx;
+ struct vdecdd_ddbuf_mapinfo *ddbuf_map_info;
+
+	/*
+	 * The resource stream ID cannot be zero; proceeding with a zero ID
+	 * would break the code, so return IMG_ERROR_INVALID_ID.
+	 */
+	if (res_str_id == 0)
+		return IMG_ERROR_INVALID_ID;
+
+ VDEC_ASSERT(buf_type < VDEC_BUFTYPE_MAX);
+ VDEC_ASSERT(buf_info);
+ VDEC_ASSERT(buf_map_id);
+
+ /* Get access to stream context.. */
+ ret = rman_get_resource(res_str_id, VDECDD_STREAM_TYPE_ID, (void **)&core_str_ctx, NULL);
+
+ VDEC_ASSERT(ret == IMG_SUCCESS);
+ if (ret != IMG_SUCCESS)
+ return ret;
+
+ dd_str_ctx = core_str_ctx->dd_str_ctx;
+
+ VDEC_ASSERT(dd_str_ctx);
+
+	/* Allocate the buffer map info structure.. */
+ ddbuf_map_info = kzalloc(sizeof(*ddbuf_map_info), GFP_KERNEL);
+ VDEC_ASSERT(ddbuf_map_info);
+
+ if (!ddbuf_map_info) {
+ pr_err("[SID=0x%08X] Failed to allocate memory for DD buffer map information",
+ dd_str_ctx->res_str_id);
+
+ return IMG_ERROR_OUT_OF_MEMORY;
+ }
+
+ /* Save the stream context etc. */
+ ddbuf_map_info->ddstr_context = dd_str_ctx;
+ ddbuf_map_info->buf_type = buf_type;
+
+#ifdef DEBUG_DECODER_DRIVER
+ pr_info("%s:%d vdec2plus: vxd map buff id %d", __func__, __LINE__,
+ buf_info->buf_id);
+#endif
+ ddbuf_map_info->buf_id = buf_info->buf_id;
+
+ /* Register the allocation as a stream resource.. */
+ ret = rman_register_resource(dd_str_ctx->res_buck_handle,
+ VDECDD_BUFMAP_TYPE_ID,
+ core_fn_free_mapped, ddbuf_map_info,
+ &ddbuf_map_info->res_handle, buf_map_id);
+ VDEC_ASSERT(ret == IMG_SUCCESS);
+ if (ret != IMG_SUCCESS)
+ goto error;
+
+ ddbuf_map_info->buf_map_id = *buf_map_id;
+
+ if (buf_type == VDEC_BUFTYPE_PICTURE) {
+ if (dd_str_ctx->map_buf_info.num_buf == 0) {
+ dd_str_ctx->map_buf_info.buf_size = buf_info->buf_size;
+
+ dd_str_ctx->map_buf_info.byte_interleave =
+ buf_info->pictbuf_cfg.byte_interleave;
+
+#ifdef DEBUG_DECODER_DRIVER
+ pr_info("[SID=0x%08X] Mapped Buffer size: %d (bytes)",
+ dd_str_ctx->res_str_id, buf_info->buf_size);
+#endif
+ } else {
+			/*
+			 * The same byte-interleave setting must be used
+			 * for all mapped picture buffers.
+			 */
+ if (buf_info->pictbuf_cfg.byte_interleave !=
+ dd_str_ctx->map_buf_info.byte_interleave) {
+ pr_err("[SID=0x%08X] Buffer cannot be mapped since its byte interleave value (%s) is not the same as buffers already mapped (%s)",
+ dd_str_ctx->res_str_id,
+ buf_info->pictbuf_cfg.byte_interleave ?
+ "ON" : "OFF",
+ dd_str_ctx->map_buf_info.byte_interleave ?
+ "ON" : "OFF");
+ ret = IMG_ERROR_INVALID_PARAMETERS;
+ goto error;
+ }
+ }
+
+ /* Configure the buffer.. */
+ ret = core_stream_set_pictbuf_config(dd_str_ctx, &buf_info->pictbuf_cfg);
+ VDEC_ASSERT(ret == IMG_SUCCESS);
+ if (ret != IMG_SUCCESS)
+ goto error;
+ }
+
+ /* Map heap from VDEC to MMU. */
+ switch (buf_type) {
+ case VDEC_BUFTYPE_BITSTREAM:
+ ddbuf_map_info->mmuheap_id = MMU_HEAP_BITSTREAM_BUFFERS;
+ break;
+
+ case VDEC_BUFTYPE_PICTURE:
+ mmu_get_heap(buf_info->pictbuf_cfg.stride[VDEC_PLANE_VIDEO_Y],
+ &ddbuf_map_info->mmuheap_id);
+ break;
+
+ default:
+ VDEC_ASSERT(FALSE);
+ }
+
+ /* Map this buffer into the MMU. */
+#ifdef DEBUG_DECODER_DRIVER
+ pr_info("----- %s:%d calling MMU_StreamMapExt_sg", __func__, __LINE__);
+#endif
+ ret =
+ mmu_stream_map_ext_sg(dd_str_ctx->mmu_str_handle,
+ (enum mmu_eheap_id)ddbuf_map_info->mmuheap_id,
+ sgt, buf_info->buf_size, DEV_MMU_PAGE_SIZE,
+ buf_info->mem_attrib, buf_info->cpu_linear_addr,
+ &ddbuf_map_info->ddbuf_info,
+ &ddbuf_map_info->buf_id);
+ VDEC_ASSERT(ret == IMG_SUCCESS);
+ if (ret != IMG_SUCCESS)
+ goto error;
+
+ if (buf_type == VDEC_BUFTYPE_PICTURE)
+ dd_str_ctx->map_buf_info.num_buf++;
+
+ /*
+ * Initialise the reference count to indicate that the client
+ * still holds the buffer.
+ */
+ ddbuf_map_info->ddbuf_info.ref_count = 1;
+
+ /* Return success.. */
+ return IMG_SUCCESS;
+
+error:
+ if (ddbuf_map_info->res_handle)
+ rman_free_resource(ddbuf_map_info->res_handle);
+ else
+ kfree(ddbuf_map_info);
+
+ return ret;
+}
+
+/*
+ * @Function core_stream_unmap_buf
+ */
+int core_stream_unmap_buf(unsigned int buf_map_id)
+{
+ int ret;
+ struct vdecdd_ddbuf_mapinfo *ddbuf_map_info;
+ struct vdecdd_ddstr_ctx *dd_str_ctx;
+ struct core_stream_context *core_str_ctx;
+
+ /* Get access to map info context.. */
+ ret = rman_get_resource(buf_map_id, VDECDD_BUFMAP_TYPE_ID,
+ (void **)&ddbuf_map_info, NULL);
+ VDEC_ASSERT(ret == IMG_SUCCESS);
+ if (ret != IMG_SUCCESS)
+ return ret;
+
+ dd_str_ctx = ddbuf_map_info->ddstr_context;
+ VDEC_ASSERT(dd_str_ctx);
+
+ /* Get access to stream context.. */
+ ret = rman_get_resource(dd_str_ctx->res_str_id, VDECDD_STREAM_TYPE_ID,
+ (void **)&core_str_ctx, NULL);
+ VDEC_ASSERT(core_str_ctx);
+#ifdef DEBUG_DECODER_DRIVER
+ pr_info("UNMAP: PM [0x%p] --> VM [0x%08X - 0x%08X] (%d bytes)",
+ ddbuf_map_info->ddbuf_info.cpu_virt,
+ ddbuf_map_info->ddbuf_info.dev_virt,
+ ddbuf_map_info->ddbuf_info.dev_virt +
+ ddbuf_map_info->ddbuf_info.buf_size,
+ ddbuf_map_info->ddbuf_info.buf_size);
+#endif
+
+ /* Buffer should only be held by the client. */
+ VDEC_ASSERT(ddbuf_map_info->ddbuf_info.ref_count == 1);
+ if (ddbuf_map_info->ddbuf_info.ref_count != 1)
+ return IMG_ERROR_MEMORY_IN_USE;
+
+ ddbuf_map_info->ddbuf_info.ref_count = 0;
+ if (ddbuf_map_info->buf_type == VDEC_BUFTYPE_PICTURE) {
+ /* Remove this picture buffer from pictbuf list */
+ ret = resource_list_remove(&core_str_ctx->pict_buf_list, ddbuf_map_info);
+
+ VDEC_ASSERT(ret == IMG_SUCCESS || ret == IMG_ERROR_COULD_NOT_OBTAIN_RESOURCE);
+ if (ret != IMG_SUCCESS && ret != IMG_ERROR_COULD_NOT_OBTAIN_RESOURCE)
+ return ret;
+
+ ddbuf_map_info->ddstr_context->map_buf_info.num_buf--;
+
+ /* Clear some state if there are no more mapped buffers. */
+ if (dd_str_ctx->map_buf_info.num_buf == 0) {
+ dd_str_ctx->map_buf_info.buf_size = 0;
+ dd_str_ctx->map_buf_info.byte_interleave = FALSE;
+ }
+ }
+
+ /* Unmap this buffer from the MMU. */
+ ret = mmu_free_mem(dd_str_ctx->mmu_str_handle, &ddbuf_map_info->ddbuf_info);
+ VDEC_ASSERT(ret == IMG_SUCCESS);
+ if (ret != IMG_SUCCESS)
+ return ret;
+
+ /* Free buffer map info. */
+ rman_free_resource(ddbuf_map_info->res_handle);
+
+ /* Return success.. */
+ return IMG_SUCCESS;
+}
+
+/*
+ * @Function core_stream_unmap_buf_sg
+ */
+int core_stream_unmap_buf_sg(unsigned int buf_map_id)
+{
+ int ret;
+ struct vdecdd_ddbuf_mapinfo *ddbuf_map_info;
+ struct vdecdd_ddstr_ctx *dd_str_ctx;
+ struct core_stream_context *core_str_ctx;
+
+ /* Get access to map info context.. */
+ ret = rman_get_resource(buf_map_id, VDECDD_BUFMAP_TYPE_ID, (void **)&ddbuf_map_info, NULL);
+ VDEC_ASSERT(ret == IMG_SUCCESS);
+ if (ret != IMG_SUCCESS)
+ return ret;
+
+ dd_str_ctx = ddbuf_map_info->ddstr_context;
+ VDEC_ASSERT(dd_str_ctx);
+
+ /* Get access to stream context.. */
+ ret = rman_get_resource(dd_str_ctx->res_str_id, VDECDD_STREAM_TYPE_ID,
+ (void **)&core_str_ctx, NULL);
+ VDEC_ASSERT(core_str_ctx);
+
+#ifdef DEBUG_DECODER_DRIVER
+ pr_info("UNMAP: PM [0x%p] --> VM [0x%08X - 0x%08X] (%d bytes)",
+ ddbuf_map_info->ddbuf_info.cpu_virt,
+ ddbuf_map_info->ddbuf_info.dev_virt,
+ ddbuf_map_info->ddbuf_info.dev_virt +
+ ddbuf_map_info->ddbuf_info.buf_size,
+ ddbuf_map_info->ddbuf_info.buf_size);
+#endif
+
+ /* Buffer should only be held by the client. */
+ VDEC_ASSERT(ddbuf_map_info->ddbuf_info.ref_count == 1);
+ if (ddbuf_map_info->ddbuf_info.ref_count != 1)
+ return IMG_ERROR_MEMORY_IN_USE;
+
+ ddbuf_map_info->ddbuf_info.ref_count = 0;
+
+ if (ddbuf_map_info->buf_type == VDEC_BUFTYPE_PICTURE) {
+ /* Remove this picture buffer from pictbuf list */
+ ret = resource_list_remove(&core_str_ctx->pict_buf_list, ddbuf_map_info);
+
+ VDEC_ASSERT(ret == IMG_SUCCESS || ret == IMG_ERROR_COULD_NOT_OBTAIN_RESOURCE);
+ if (ret != IMG_SUCCESS && ret != IMG_ERROR_COULD_NOT_OBTAIN_RESOURCE)
+ return ret;
+
+ ddbuf_map_info->ddstr_context->map_buf_info.num_buf--;
+
+ /*
+ * Clear some state if there are no more
+ * mapped buffers.
+ */
+ if (dd_str_ctx->map_buf_info.num_buf == 0) {
+ dd_str_ctx->map_buf_info.buf_size = 0;
+ dd_str_ctx->map_buf_info.byte_interleave = FALSE;
+ }
+ }
+
+ /* Unmap this buffer from the MMU. */
+ ret = mmu_free_mem_sg(dd_str_ctx->mmu_str_handle, &ddbuf_map_info->ddbuf_info);
+ VDEC_ASSERT(ret == IMG_SUCCESS);
+ if (ret != IMG_SUCCESS)
+ return ret;
+
+ /* Free buffer map info. */
+ rman_free_resource(ddbuf_map_info->res_handle);
+
+ /* Return success.. */
+ return IMG_SUCCESS;
+}
+
+/*
+ * @Function core_stream_flush
+ */
+int core_stream_flush(unsigned int res_str_id, unsigned char discard_refs)
+{
+ struct vdecdd_ddstr_ctx *dd_str_ctx;
+ struct core_stream_context *core_str_ctx;
+ int ret;
+
+ /* Get access to stream context.. */
+ ret = rman_get_resource(res_str_id, VDECDD_STREAM_TYPE_ID,
+ (void **)&core_str_ctx, NULL);
+ VDEC_ASSERT(ret == IMG_SUCCESS);
+ if (ret != IMG_SUCCESS)
+ return ret;
+
+ dd_str_ctx = core_str_ctx->dd_str_ctx;
+
+ VDEC_ASSERT(dd_str_ctx);
+ VDEC_ASSERT(dd_str_ctx->dd_str_state == VDECDD_STRSTATE_STOPPED);
+
+	/*
+	 * If an unsupported sequence is found, an additional check for
+	 * the DPB flush condition is needed.
+	 */
+ if (!dd_str_ctx->comseq_hdr_info.not_dpb_flush) {
+ ret = decoder_stream_flush(dd_str_ctx->dec_ctx, discard_refs);
+ VDEC_ASSERT(ret == IMG_SUCCESS);
+ if (ret != IMG_SUCCESS)
+ return ret;
+ }
+
+ /* Return success.. */
+ return IMG_SUCCESS;
+}
+
+/*
+ * @Function core_stream_release_bufs
+ */
+int core_stream_release_bufs(unsigned int res_str_id, enum vdec_buf_type buf_type)
+{
+ int ret;
+ struct core_stream_context *core_str_ctx;
+ struct vdecdd_ddstr_ctx *dd_str_ctx;
+
+ /* Get access to stream context.. */
+ ret = rman_get_resource(res_str_id, VDECDD_STREAM_TYPE_ID, (void **)&core_str_ctx, NULL);
+ VDEC_ASSERT(ret == IMG_SUCCESS);
+ if (ret != IMG_SUCCESS)
+ return ret;
+
+ dd_str_ctx = core_str_ctx->dd_str_ctx;
+
+ VDEC_ASSERT(dd_str_ctx);
+ VDEC_ASSERT(buf_type < VDEC_BUFTYPE_MAX);
+
+ switch (buf_type) {
+ case VDEC_BUFTYPE_PICTURE:
+ {
+ /* Empty all the decoded picture related buffer lists. */
+ ret = resource_list_empty(&core_str_ctx->pict_buf_list, TRUE, NULL, NULL);
+ VDEC_ASSERT(ret == IMG_SUCCESS);
+ break;
+ }
+
+ case VDEC_BUFTYPE_BITSTREAM:
+ {
+ /* Empty the stream unit queue. */
+ ret = resource_list_empty(&core_str_ctx->str_unit_list, FALSE,
+ (resource_pfn_freeitem)core_fn_free_stream_unit,
+ core_str_ctx);
+ VDEC_ASSERT(ret == IMG_SUCCESS);
+ break;
+ }
+
+ case VDEC_BUFTYPE_ALL:
+ {
+ /* Empty all the decoded picture related buffer lists. */
+ ret = resource_list_empty(&core_str_ctx->pict_buf_list, TRUE, NULL, NULL);
+ VDEC_ASSERT(ret == IMG_SUCCESS);
+
+ /* Empty the stream unit queue. */
+ ret = resource_list_empty(&core_str_ctx->str_unit_list, FALSE,
+ (resource_pfn_freeitem)core_fn_free_stream_unit,
+ core_str_ctx);
+ VDEC_ASSERT(ret == IMG_SUCCESS);
+ break;
+ }
+
+ default:
+ {
+ ret = IMG_ERROR_INVALID_PARAMETERS;
+ VDEC_ASSERT(FALSE);
+ break;
+ }
+ }
+
+ if (buf_type == VDEC_BUFTYPE_PICTURE || buf_type == VDEC_BUFTYPE_ALL) {
+ ret = decoder_stream_release_buffers(dd_str_ctx->dec_ctx);
+ VDEC_ASSERT(ret == IMG_SUCCESS);
+ if (ret != IMG_SUCCESS)
+ return ret;
+ }
+
+ /* Return success.. */
+ return IMG_SUCCESS;
+}
+
+/*
+ * @Function core_stream_get_status
+ */
+int core_stream_get_status(unsigned int res_str_id,
+ struct vdecdd_decstr_status *str_st)
+{
+ int ret;
+ struct core_stream_context *core_str_ctx;
+ struct vdecdd_ddstr_ctx *dd_str_ctx;
+
+ /* Get access to stream context.. */
+ ret = rman_get_resource(res_str_id, VDECDD_STREAM_TYPE_ID, (void **)&core_str_ctx, NULL);
+ VDEC_ASSERT(ret == IMG_SUCCESS);
+ if (ret != IMG_SUCCESS)
+ return ret;
+
+ dd_str_ctx = core_str_ctx->dd_str_ctx;
+
+ VDEC_ASSERT(dd_str_ctx);
+ VDEC_ASSERT(str_st);
+
+ ret = decoder_stream_get_status(dd_str_ctx->dec_ctx, str_st);
+ VDEC_ASSERT(ret == IMG_SUCCESS);
+ if (ret != IMG_SUCCESS)
+ return ret;
+
+ /* Return success.. */
+ return IMG_SUCCESS;
+}
+
+#ifdef HAS_HEVC
+/*
+ * @Function core_free_hevc_picture_resource
+ */
+static int core_free_hevc_picture_resource(struct core_stream_context *core_strctx,
+ struct vdecdd_pict_resint *pic_res_int)
+{
+ int ret = IMG_SUCCESS;
+
+ ret = core_free_resbuf(&pic_res_int->genc_fragment_buf,
+ core_strctx->dd_str_ctx->mmu_str_handle);
+ if (ret != IMG_SUCCESS)
+ pr_err("MMU_Free for Genc Fragment buffer failed with error %u", ret);
+
+ return ret;
+}
+
+/*
+ * @Function core_free_hevc_sequence_resource
+ */
+static int core_free_hevc_sequence_resource(struct core_stream_context *core_strctx,
+ struct vdecdd_seq_resint *seq_res_int)
+{
+ unsigned int i;
+ int local_result = IMG_SUCCESS;
+ int ret = IMG_SUCCESS;
+
+ for (i = 0; i < GENC_BUFF_COUNT; ++i) {
+ local_result = core_free_resbuf(&seq_res_int->genc_buffers[i],
+ core_strctx->dd_str_ctx->mmu_str_handle);
+ if (local_result != IMG_SUCCESS) {
+ ret = local_result;
+ pr_warn("MMU_Free for GENC buffer %u failed with error %u", i,
+ local_result);
+ }
+ }
+
+ local_result = core_free_resbuf(&seq_res_int->intra_buffer,
+ core_strctx->dd_str_ctx->mmu_str_handle);
+ if (local_result != IMG_SUCCESS) {
+ ret = local_result;
+		pr_warn("MMU_Free for intra buffer failed with error %u", local_result);
+ }
+
+ local_result = core_free_resbuf(&seq_res_int->aux_buffer,
+ core_strctx->dd_str_ctx->mmu_str_handle);
+ if (local_result != IMG_SUCCESS) {
+ ret = local_result;
+		pr_warn("MMU_Free for aux buffer failed with error %u", local_result);
+ }
+
+ return ret;
+}
+
+/*
+ * @Function core_hevc_bufs_get_size
+ */
+static int core_hevc_bufs_get_size(struct core_stream_context *core_strctx,
+ const struct vdec_comsequ_hdrinfo *seqhdr_info,
+ struct vdec_pict_size *max_pict_size,
+ struct core_pict_bufsize_info *size_info,
+ struct core_seq_resinfo *seqres_info,
+ unsigned char *resource_needed)
+{
+ enum vdec_vid_std vid_std = core_strctx->dd_str_ctx->str_config_data.vid_std;
+ unsigned int std_idx = vid_std - 1;
+
+ static const unsigned short max_slice_segments_list
+ [HEVC_LEVEL_MAJOR_NUM][HEVC_LEVEL_MINOR_NUM] = {
+ /* level: 1.0 1.1 1.2 */
+ { 16, 0, 0, },
+ /* level: 2.0 2.1 2.2 */
+ { 16, 20, 0, },
+ /* level: 3.0 3.1 3.2 */
+ { 30, 40, 0, },
+ /* level: 4.0 4.1 4.2 */
+ { 75, 75, 0, },
+ /* level: 5.0 5.1 5.2 */
+ { 200, 200, 200, },
+ /* level: 6.0 6.1 6.2 */
+ { 600, 600, 600, }
+ };
+
+ static const unsigned char max_tile_cols_list
+ [HEVC_LEVEL_MAJOR_NUM][HEVC_LEVEL_MINOR_NUM] = {
+ /* level: 1.0 1.1 1.2 */
+ { 1, 0, 0, },
+ /* level: 2.0 2.1 2.2 */
+ { 1, 1, 0, },
+ /* level: 3.0 3.1 3.2 */
+ { 2, 3, 0, },
+ /* level: 4.0 4.1 4.2 */
+ { 5, 5, 0, },
+ /* level: 5.0 5.1 5.2 */
+ { 10, 10, 10, },
+ /* level: 6.0 6.1 6.2 */
+ { 20, 20, 20, }
+ };
+
+ /* TRM 3.11.11 */
+ static const unsigned int total_sample_per_mb[PIXEL_FORMAT_444 + 1] = {
+ 256, 384, 384, 512, 768};
+
+ static const unsigned int HEVC_LEVEL_IDC_MIN = 30;
+ static const unsigned int HEVC_LEVEL_IDC_MAX = 186;
+ static const unsigned int GENC_ALIGNMENT = 0x1000;
+ static const unsigned int mb_size = 16;
+ static const unsigned int max_mb_rows_in_ctu = 4;
+ static const unsigned int bytes_per_fragment_pointer = 16;
+
+ const unsigned int max_tile_height_in_mbs =
+ seqhdr_info->max_frame_size.height / mb_size;
+
+ signed char level_maj = seqhdr_info->codec_level / 30;
+ signed char level_min = (seqhdr_info->codec_level % 30) / 3;
+
+ /*
+ * If we are somehow able to deliver more information here (CTU size,
+ * number of tile columns/rows) then memory usage could be reduced
+ */
+ const struct pixel_pixinfo *pix_info = &seqhdr_info->pixel_info;
+ const unsigned int bit_depth = pix_info->bitdepth_y >= pix_info->bitdepth_c ?
+ pix_info->bitdepth_y : pix_info->bitdepth_c;
+ unsigned short max_slice_segments;
+ unsigned char max_tile_cols;
+ unsigned int raw_byte_per_mb;
+ unsigned int *genc_fragment_bufsize;
+ unsigned int *genc_buf_size;
+
+ /* Reset the MB parameters buffer size. */
+ size_info->mbparams_bufsize = 0;
+ *resource_needed = TRUE;
+
+ if (mbparam_allocinfo[std_idx].alloc_mbparam_bufs) {
+ /* shall be == 64 (0x40)*/
+ const unsigned int align = mbparam_allocinfo[std_idx].mbparam_size;
+ const unsigned int dpb_width = (max_pict_size->width + align * 2 - 1) / align * 2;
+ const unsigned int pic_height = (max_pict_size->height + align - 1) / align;
+ const unsigned int pic_width = (max_pict_size->width + align - 1) / align;
+
+ /* calculating for worst case: max frame size, B-frame */
+ size_info->mbparams_bufsize = (align * 2) * pic_width * pic_height +
+ align * dpb_width * pic_height;
+
+ /* Adjust the buffer size for MSVDX. */
+ vdecddutils_buf_vxd_adjust_size(&size_info->mbparams_bufsize);
+ }
+
+ if (seqhdr_info->codec_level > HEVC_LEVEL_IDC_MAX ||
+ seqhdr_info->codec_level < HEVC_LEVEL_IDC_MIN) {
+ level_maj = 6;
+ level_min = 2;
+ }
+
+ if (level_maj > 0 && level_maj <= HEVC_LEVEL_MAJOR_NUM &&
+ level_min >= 0 && level_min < HEVC_LEVEL_MINOR_NUM) {
+ max_slice_segments = max_slice_segments_list[level_maj - 1][level_min];
+ max_tile_cols = max_tile_cols_list[level_maj - 1][level_min];
+ } else {
+ pr_err("%s: Invalid parameters\n", __func__);
+ return IMG_ERROR_INVALID_PARAMETERS;
+ }
+
+ raw_byte_per_mb = total_sample_per_mb[pix_info->chroma_fmt_idc] *
+ VDEC_ALIGN_SIZE(bit_depth, 8, unsigned int, int) / 8;
+
+ genc_fragment_bufsize = &size_info->hevc_bufsize_pict.genc_fragment_bufsize;
+ genc_buf_size = &seqres_info->hevc_bufsize_seqres.genc_bufsize;
+
+ *genc_fragment_bufsize = bytes_per_fragment_pointer * (seqhdr_info->max_frame_size.height /
+ mb_size * max_tile_cols + max_slice_segments - 1) * max_mb_rows_in_ctu;
+
+	/*
+	 * The GENC buffer size formula is taken from the TRM and was derived by the
+	 * HW and CSIM teams for sensible streams, i.e. size_of_stream < size_of_output_YUV.
+	 * The video stream database contains pathological Argon streams that do not meet
+	 * this requirement, e.g. #58417, #58419, #58421, #58423. To make stream #58417 run,
+	 * the formula below would have to change from (2 * 384) * ... to (3 * 384) * ...;
+	 * this is the solution applied by DEVA.
+	 */
+ *genc_buf_size = 2 * raw_byte_per_mb * seqhdr_info->max_frame_size.width /
+ mb_size * max_tile_height_in_mbs / 4;
+
+ *genc_buf_size = VDEC_ALIGN_SIZE(*genc_buf_size, GENC_ALIGNMENT,
+ unsigned int, unsigned int);
+ *genc_fragment_bufsize = VDEC_ALIGN_SIZE(*genc_fragment_bufsize, GENC_ALIGNMENT,
+ unsigned int, unsigned int);
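+	/*
+	 * Worked example (informational only): a 1920x1088, 8-bit 4:2:0 stream
+	 * at level 4.0 (max_slice_segments = 75, max_tile_cols = 5,
+	 * raw_byte_per_mb = 384) yields a GENC fragment buffer of
+	 * 16 * (68 * 5 + 74) * 4 = 26496 bytes and a GENC buffer of
+	 * 1566720 bytes, each then rounded up to the 4 KB GENC alignment.
+	 */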
+
+#ifdef DEBUG_DECODER_DRIVER
+ pr_info("Sizes for GENC in HEVC: 0x%X (frag), 0x%X (x4)",
+ *genc_fragment_bufsize,
+ *genc_buf_size);
+#endif
+
+ seqres_info->hevc_bufsize_seqres.intra_bufsize = 4 * seqhdr_info->max_frame_size.width;
+ if (seqhdr_info->pixel_info.mem_pkg != PIXEL_BIT8_MP)
+ seqres_info->hevc_bufsize_seqres.intra_bufsize *= 2;
+
+ seqres_info->hevc_bufsize_seqres.aux_bufsize = (512 * 1024);
+
+ return IMG_SUCCESS;
+}
+
+/*
+ * @Function core_is_hevc_stream_resource_suitable
+ */
+static unsigned char
+core_is_hevc_stream_resource_suitable(struct core_pict_resinfo *pict_res_info,
+ struct core_pict_resinfo *old_pict_res_info,
+ struct core_seq_resinfo *seq_res_info,
+ struct core_seq_resinfo *old_seq_res_info)
+{
+ return (seq_res_info->hevc_bufsize_seqres.genc_bufsize <=
+ old_seq_res_info->hevc_bufsize_seqres.genc_bufsize &&
+ seq_res_info->hevc_bufsize_seqres.intra_bufsize <=
+ old_seq_res_info->hevc_bufsize_seqres.intra_bufsize &&
+ seq_res_info->hevc_bufsize_seqres.aux_bufsize <=
+ old_seq_res_info->hevc_bufsize_seqres.aux_bufsize &&
+ pict_res_info->size_info.hevc_bufsize_pict.genc_fragment_bufsize <=
+ old_pict_res_info->size_info.hevc_bufsize_pict.genc_fragment_bufsize);
+}
+
+/*
+ * @Function core_alloc_hevc_specific_seq_buffers
+ */
+static int
+core_alloc_hevc_specific_seq_buffers(struct core_stream_context *core_strctx,
+ struct vdecdd_seq_resint *seqres_int,
+ struct vxdio_mempool mempool,
+ struct core_seq_resinfo *seqres_info)
+{
+ unsigned int i;
+ int ret = IMG_SUCCESS;
+
+ /* Allocate GENC buffers */
+ for (i = 0; i < GENC_BUFF_COUNT; ++i) {
+ /* Allocate the GENC buffer info structure. */
+ ret = core_alloc_resbuf(&seqres_int->genc_buffers[i],
+ seqres_info->hevc_bufsize_seqres.genc_bufsize,
+ core_strctx->dd_str_ctx->mmu_str_handle,
+ mempool);
+ if (ret != IMG_SUCCESS)
+ return ret;
+ }
+
+ seqres_int->genc_buf_id = ++core_strctx->std_spec_context.hevc_ctx.genc_id_gen;
+
+ /* Allocate the intra buffer info structure. */
+ ret = core_alloc_resbuf(&seqres_int->intra_buffer,
+ seqres_info->hevc_bufsize_seqres.intra_bufsize,
+ core_strctx->dd_str_ctx->mmu_str_handle,
+ mempool);
+ if (ret != IMG_SUCCESS)
+ return ret;
+
+ /* Allocate the aux buffer info structure. */
+ ret = core_alloc_resbuf(&seqres_int->aux_buffer,
+ seqres_info->hevc_bufsize_seqres.aux_bufsize,
+ core_strctx->dd_str_ctx->mmu_str_handle,
+ mempool);
+ if (ret != IMG_SUCCESS)
+ return ret;
+
+ return IMG_SUCCESS;
+}
+
+/*
+ * @Function core_alloc_hevc_specific_pict_buffers
+ */
+static int
+core_alloc_hevc_specific_pict_buffers(struct core_stream_context *core_strctx,
+ struct vdecdd_pict_resint *pict_res_int,
+ struct vxdio_mempool mempool,
+ struct core_pict_resinfo *pict_res_info)
+{
+ int ret;
+
+ /* Allocate the GENC fragment buffer. */
+ ret = core_alloc_resbuf(&pict_res_int->genc_fragment_buf,
+ pict_res_info->size_info.hevc_bufsize_pict.genc_fragment_bufsize,
+ core_strctx->dd_str_ctx->mmu_str_handle,
+ mempool);
+
+ return ret;
+}
+#endif
diff --git a/drivers/staging/media/vxd/decoder/core.h b/drivers/staging/media/vxd/decoder/core.h
new file mode 100644
index 000000000000..23a2ec835a15
--- /dev/null
+++ b/drivers/staging/media/vxd/decoder/core.h
@@ -0,0 +1,72 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * VXD Decoder CORE and V4L2 Node Interface header
+ *
+ * Copyright (c) Imagination Technologies Ltd.
+ * Copyright (c) 2021 Texas Instruments Incorporated - http://www.ti.com/
+ *
+ * Authors:
+ * Sunita Nadampalli <[email protected]>
+ *
+ * Re-written for upstream
+ * Sidraya Jayagond <[email protected]>
+ * Prashanth Kumar Amai <[email protected]>
+ */
+
+#ifndef __CORE_H__
+#define __CORE_H__
+
+#include <linux/types.h>
+#include "decoder.h"
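+
+/*
+ * Rough usage sketch (not a strict contract): a stream is created with
+ * core_stream_create(); core_stream_set_output_config() is called before
+ * core_stream_play(); buffers are mapped with core_stream_map_buf() or
+ * core_stream_map_buf_sg(); output picture buffers are queued with
+ * core_stream_fill_pictbuf() and bitstream units are submitted with
+ * core_stream_submit_unit(); core_stream_unmap_buf() and
+ * core_stream_destroy() undo the above.
+ */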
+
+int core_initialise(void *dev_handle, unsigned int internal_heap_id,
+ void *cb);
+
+/**
+ * core_deinitialise - deinitialise core
+ */
+int core_deinitialise(void);
+
+int core_supported_features(struct vdec_features *features);
+
+int core_stream_create(void *vxd_dec_ctx_arg,
+ const struct vdec_str_configdata *str_cfgdata,
+ unsigned int *res_str_id);
+
+int core_stream_destroy(unsigned int res_str_id);
+
+int core_stream_play(unsigned int res_str_id);
+
+int core_stream_stop(unsigned int res_str_id);
+
+int core_stream_map_buf(unsigned int res_str_id, enum vdec_buf_type buf_type,
+ struct vdec_buf_info *buf_info, unsigned int *buf_map_id);
+
+int core_stream_map_buf_sg(unsigned int res_str_id,
+ enum vdec_buf_type buf_type,
+ struct vdec_buf_info *buf_info,
+ void *sgt, unsigned int *buf_map_id);
+
+int core_stream_unmap_buf(unsigned int buf_map_id);
+
+int core_stream_unmap_buf_sg(unsigned int buf_map_id);
+
+int core_stream_submit_unit(unsigned int res_str_id,
+ struct vdecdd_str_unit *str_unit);
+
+int core_stream_fill_pictbuf(unsigned int buf_map_id);
+
+/* This function is to be called before stream play. */
+int core_stream_set_output_config(unsigned int res_str_id,
+ struct vdec_str_opconfig *str_opcfg,
+ struct vdec_pict_bufconfig *pict_bufcg);
+
+int core_stream_flush(unsigned int res_str_id, unsigned char discard_refs);
+
+int core_stream_release_bufs(unsigned int res_str_id,
+ enum vdec_buf_type buf_type);
+
+int core_stream_get_status(unsigned int res_str_id,
+ struct vdecdd_decstr_status *str_status);
+
+#endif
--
2.17.1
From: Sidraya <[email protected]>
The default color formats support only 8bit color depth. This patch
adds 10bit definitions for NV12 and NV16.
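
For illustration, a V4L2 capture application could select the new 10-bit
NV12 format roughly as follows (sketch only; the fourcc comes from this
patch, everything else is standard V4L2, fd is assumed to be an open
handle to the capture video device, and the negotiated sizes depend on
the driver):

	struct v4l2_format fmt = { 0 };

	fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE;
	fmt.fmt.pix_mp.width = 1920;
	fmt.fmt.pix_mp.height = 1080;
	fmt.fmt.pix_mp.pixelformat = V4L2_PIX_FMT_TI1210;
	if (ioctl(fd, VIDIOC_S_FMT, &fmt) < 0)
		perror("VIDIOC_S_FMT");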
Signed-off-by: Sunita Nadampalli <[email protected]>
Signed-off-by: Sidraya <[email protected]>
---
drivers/media/v4l2-core/v4l2-ioctl.c | 2 ++
include/uapi/linux/videodev2.h | 2 ++
2 files changed, 4 insertions(+)
diff --git a/drivers/media/v4l2-core/v4l2-ioctl.c b/drivers/media/v4l2-core/v4l2-ioctl.c
index 05d5db3d85e5..445458c15168 100644
--- a/drivers/media/v4l2-core/v4l2-ioctl.c
+++ b/drivers/media/v4l2-core/v4l2-ioctl.c
@@ -1367,6 +1367,8 @@ static void v4l_fill_fmtdesc(struct v4l2_fmtdesc *fmt)
case V4L2_META_FMT_VIVID: descr = "Vivid Metadata"; break;
case V4L2_META_FMT_RK_ISP1_PARAMS: descr = "Rockchip ISP1 3A Parameters"; break;
case V4L2_META_FMT_RK_ISP1_STAT_3A: descr = "Rockchip ISP1 3A Statistics"; break;
+ case V4L2_PIX_FMT_TI1210: descr = "10-bit YUV 4:2:0 (NV12)"; break;
+ case V4L2_PIX_FMT_TI1610: descr = "10-bit YUV 4:2:2 (NV16)"; break;
default:
/* Compressed formats */
diff --git a/include/uapi/linux/videodev2.h b/include/uapi/linux/videodev2.h
index 9260791b8438..a71ffd686050 100644
--- a/include/uapi/linux/videodev2.h
+++ b/include/uapi/linux/videodev2.h
@@ -737,6 +737,8 @@ struct v4l2_pix_format {
#define V4L2_PIX_FMT_SUNXI_TILED_NV12 v4l2_fourcc('S', 'T', '1', '2') /* Sunxi Tiled NV12 Format */
#define V4L2_PIX_FMT_CNF4 v4l2_fourcc('C', 'N', 'F', '4') /* Intel 4-bit packed depth confidence information */
#define V4L2_PIX_FMT_HI240 v4l2_fourcc('H', 'I', '2', '4') /* BTTV 8-bit dithered RGB */
+#define V4L2_PIX_FMT_TI1210 v4l2_fourcc('T', 'I', '1', '2') /* TI NV12 10-bit, two bytes per channel */
+#define V4L2_PIX_FMT_TI1610 v4l2_fourcc('T', 'I', '1', '6') /* TI NV16 10-bit, two bytes per channel */
/* 10bit raw bayer packed, 32 bytes for every 25 pixels, last LSB 6 bits unused */
#define V4L2_PIX_FMT_IPU3_SBGGR10 v4l2_fourcc('i', 'p', '3', 'b') /* IPU3 packed 10-bit BGGR bayer */
--
2.17.1
From: Sidraya <[email protected]>
The decoder component is responsible for managing the decoder state
machine, resource allocation, firmware message preparation and
firmware response processing.
Signed-off-by: Sunita Nadampalli <[email protected]>
Signed-off-by: Sidraya <[email protected]>
---
MAINTAINERS | 2 +
drivers/staging/media/vxd/decoder/decoder.c | 4622 +++++++++++++++++++
drivers/staging/media/vxd/decoder/decoder.h | 375 ++
3 files changed, 4999 insertions(+)
create mode 100644 drivers/staging/media/vxd/decoder/decoder.c
create mode 100644 drivers/staging/media/vxd/decoder/decoder.h
diff --git a/MAINTAINERS b/MAINTAINERS
index fa5c69d71c3e..7c7c008efd97 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -19573,6 +19573,8 @@ F: drivers/staging/media/vxd/decoder/core.c
F: drivers/staging/media/vxd/decoder/core.h
F: drivers/staging/media/vxd/decoder/dec_resources.c
F: drivers/staging/media/vxd/decoder/dec_resources.h
+F: drivers/staging/media/vxd/decoder/decoder.c
+F: drivers/staging/media/vxd/decoder/decoder.h
F: drivers/staging/media/vxd/decoder/fw_interface.h
F: drivers/staging/media/vxd/decoder/h264_idx.h
F: drivers/staging/media/vxd/decoder/h264_secure_parser.c
diff --git a/drivers/staging/media/vxd/decoder/decoder.c b/drivers/staging/media/vxd/decoder/decoder.c
new file mode 100644
index 000000000000..f695cf9c4433
--- /dev/null
+++ b/drivers/staging/media/vxd/decoder/decoder.c
@@ -0,0 +1,4622 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * VXD Decoder Component function implementations
+ *
+ * Copyright (c) Imagination Technologies Ltd.
+ * Copyright (c) 2021 Texas Instruments Incorporated - http://www.ti.com/
+ *
+ * Authors:
+ * Sunita Nadampalli <[email protected]>
+ *
+ * Re-written for upstream
+ * Sidraya Jayagond <[email protected]>
+ * Prashanth Kumar Amai <[email protected]>
+ */
+
+#include "decoder.h"
+#include "dec_resources.h"
+#include "dq.h"
+#include "hw_control.h"
+#include "h264fw_data.h"
+#include "idgen_api.h"
+#include "img_errors.h"
+#ifdef HAS_JPEG
+#include "jpegfw_data.h"
+#endif
+#include "lst.h"
+#include "pool_api.h"
+#include "resource.h"
+#include "translation_api.h"
+#include "vdecdd_utils.h"
+#include "vdec_mmu_wrapper.h"
+#include "vxd_dec.h"
+
+#define CORE_NUM_DECODE_SLOTS 2
+
+#define MAX_PLATFORM_SUPPORTED_HEIGHT 65536
+#define MAX_PLATFORM_SUPPORTED_WIDTH 65536
+
+#define MAX_CONCURRENT_STREAMS 16
+
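+/*
+ * Picture IDs run from 1 to FWIF_BIT_MASK(FWIF_NUMBITS_STREAM_PICTURE_ID)
+ * and wrap back to 1, so a picture ID of zero is never generated.
+ */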
+static inline unsigned int get_next_picture_id(unsigned int cur_pict_id)
+{
+ return cur_pict_id == FWIF_BIT_MASK(FWIF_NUMBITS_STREAM_PICTURE_ID) ?
+ 1 : cur_pict_id + 1;
+}
+
+static inline unsigned int get_prev_picture_id(unsigned int cur_pict_id)
+{
+ return cur_pict_id == 1 ?
+ FWIF_BIT_MASK(FWIF_NUMBITS_STREAM_PICTURE_ID) :
+ cur_pict_id - 1;
+}
+
+#define H264_SGM_BUFFER_BYTES_PER_MB 1
+#define H264_SGM_MAX_MBS 3600
+
+#define CONTEXT_BUFF_SIZE (72)
+
+/*
+ * Number of bits in transaction ID used to represent
+ * picture number in stream.
+ */
+#define FWIF_NUMBITS_STREAM_PICTURE_ID 16
+/*
+ * Number of bits in transaction ID used to represent
+ * picture number in core.
+ */
+#define FWIF_NUMBITS_CORE_PICTURE_ID 4
+/*
+ * Number of bits in transaction ID used to represent
+ * stream id.
+ */
+#define FWIF_NUMBITS_STREAM_ID 8
+/* Number of bits in transaction ID used to represent core id. */
+#define FWIF_NUMBITS_CORE_ID 4
+
+/* Offset in transaction ID to picture number in stream. */
+#define FWIF_OFFSET_STREAM_PICTURE_ID 0
+/* Offset in transaction ID to picture number in core. */
+#define FWIF_OFFSET_CORE_PICTURE_ID (FWIF_OFFSET_STREAM_PICTURE_ID + \
+ FWIF_NUMBITS_STREAM_PICTURE_ID)
+/* Offset in transaction ID to stream id. */
+#define FWIF_OFFSET_STREAM_ID (FWIF_OFFSET_CORE_PICTURE_ID + \
+ FWIF_NUMBITS_CORE_PICTURE_ID)
+/* Offset in transaction ID to core id. */
+#define FWIF_OFFSET_CORE_ID (FWIF_OFFSET_STREAM_ID + \
+ FWIF_NUMBITS_STREAM_ID)
+
+/* Maximum number of unique picture ids within stream. */
+#define DECODER_MAX_PICT_ID GET_STREAM_PICTURE_ID(((1ULL << 32) - 1ULL))
+
+/* Maximum number of concurrent pictures within stream. */
+#define DECODER_MAX_CONCURRENT_PICT 0x100
+
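+/*
+ * A transaction ID packs the fields above into 32 bits:
+ * [31:28] core id | [27:20] stream id |
+ * [19:16] core picture id | [15:0] stream picture id
+ */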
+#define CREATE_TRANSACTION_ID(core_id, stream_id, core_pic, stream_pic) \
+ (SET_CORE_ID((core_id)) | SET_STREAM_ID((stream_id)) | \
+ SET_CORE_PICTURE_ID((core_pic)) | SET_STREAM_PICTURE_ID((stream_pic)))
+
+static inline struct dec_core_ctx *decoder_str_ctx_to_core_ctx(struct dec_str_ctx *decstrctx)
+{
+ if (decstrctx && decstrctx->decctx)
+ return decstrctx->decctx->dec_core_ctx;
+
+ return NULL;
+}
+
+static const struct vdecdd_dd_devconfig def_dev_cfg = {
+ CORE_NUM_DECODE_SLOTS, /* num_slots_per_pipe */
+};
+
+/*
+ * Names of the VDEC standards.
+ * Must be kept in sync with the VDEC_STD_* enumeration.
+ */
+static unsigned char *vid_std_names[] = {
+ "VDEC_STD_UNDEFINED",
+ "VDEC_STD_MPEG2",
+ "VDEC_STD_MPEG4",
+ "VDEC_STD_H263",
+ "VDEC_STD_H264",
+ "VDEC_STD_VC1",
+ "VDEC_STD_AVS",
+ "VDEC_STD_REAL",
+ "VDEC_STD_JPEG",
+ "VDEC_STD_VP6",
+ "VDEC_STD_VP8",
+ "VDEC_STD_SORENSON",
+ "VDEC_STD_HEVC"
+};
+
+#ifdef ERROR_RECOVERY_SIMULATION
+extern int fw_error_value;
+#endif
+
+/*
+ * @Function decoder_set_device_config
+ */
+static int decoder_set_device_config(const struct vdecdd_dd_devconfig **dd_dev_config)
+{
+ struct vdecdd_dd_devconfig *local_dev_cfg;
+
+ VDEC_ASSERT(dd_dev_config);
+
+ /* Allocate device configuration structure */
+ local_dev_cfg = kzalloc(sizeof(*local_dev_cfg), GFP_KERNEL);
+ VDEC_ASSERT(local_dev_cfg);
+ if (!local_dev_cfg)
+ return IMG_ERROR_OUT_OF_MEMORY;
+
+ /* Set the default device configuration */
+ *local_dev_cfg = def_dev_cfg;
+
+ *dd_dev_config = local_dev_cfg;
+
+ return IMG_SUCCESS;
+}
+
+/*
+ * @Function decoder_set_feature_flags
+ * @Description
+ * This function sets the feature flags from the core properties.
+ * @Input core_props : Pointer to core properties.
+ * @Input core_feat_flags : Pointer to core feature flags word.
+ * @Input pipe_feat_flags : Pointer to per-pipe feature flags words.
+ */
+static void decoder_set_feature_flags(struct vxd_coreprops *core_props,
+ unsigned int *core_feat_flags,
+ unsigned int *pipe_feat_flags)
+{
+ unsigned char pipe_minus_one;
+
+ VDEC_ASSERT(core_props);
+ VDEC_ASSERT(core_feat_flags);
+ VDEC_ASSERT(pipe_feat_flags);
+
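+ /*
+ * Accumulate per-pipe capabilities: each supported standard is
+ * recorded in the pipe's feature flags and also OR-ed into the
+ * core feature flags.
+ */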
+ for (pipe_minus_one = 0; pipe_minus_one < core_props->num_pixel_pipes;
+ pipe_minus_one++) {
+ *core_feat_flags |= pipe_feat_flags[pipe_minus_one] |=
+ core_props->h264[pipe_minus_one] ?
+ VDECDD_COREFEATURE_H264 : 0;
+#ifdef HAS_JPEG
+ *core_feat_flags |= pipe_feat_flags[pipe_minus_one] |=
+ core_props->jpeg[pipe_minus_one] ?
+ VDECDD_COREFEATURE_JPEG : 0;
+#endif
+#ifdef HAS_HEVC
+ *core_feat_flags |= pipe_feat_flags[pipe_minus_one] |=
+ core_props->hevc[pipe_minus_one] ?
+ VDECDD_COREFEATURE_HEVC : 0;
+#endif
+ }
+}
+
+/*
+ * @Function decoder_stream_get_context
+ * @Description
+ * This function returns the stream context structure for the given
+ * stream handle.
+ * @Return struct dec_str_ctx : This function returns a pointer
+ * to the stream
+ * context structure or NULL if not found.
+ */
+static struct dec_str_ctx *decoder_stream_get_context(void *dec_str_context)
+{
+ return (struct dec_str_ctx *)dec_str_context;
+}
+
+/*
+ * @Function decoder_core_enumerate
+ * @Description
+ * This function enumerates a decoder core and returns its handle.
+ * It must be called before any other decoder core or stream functions.
+ * @Input dec_context : Pointer to decoder context.
+ * @Input dev_cfg : Device configuration.
+ * @Output num_pipes : Returns the number of pixel pipes (optional).
+ * @Return This function returns either IMG_SUCCESS or an
+ * error code.
+ */
+static int decoder_core_enumerate(struct dec_ctx *dec_context,
+ const struct vdecdd_dd_devconfig *dev_cfg,
+ unsigned int *num_pipes)
+{
+ struct dec_core_ctx *dec_core_ctx_local;
+ unsigned int ret;
+ unsigned int ptd_align = DEV_MMU_PAGE_ALIGNMENT;
+
+ /* Create the core. */
+ dec_core_ctx_local = kzalloc(sizeof(*dec_core_ctx_local), GFP_KERNEL);
+ if (!dec_core_ctx_local)
+ return IMG_ERROR_OUT_OF_MEMORY;
+
+ dec_core_ctx_local->dec_ctx = (struct dec_ctx *)dec_context;
+
+ /* Initialise the hwctrl block here */
+ ret = hwctrl_initialise(dec_core_ctx_local, dec_context->user_data,
+ dev_cfg, &dec_core_ctx_local->core_props,
+ &dec_core_ctx_local->hw_ctx);
+ if (ret != IMG_SUCCESS)
+ goto error;
+
+ decoder_set_feature_flags(&dec_core_ctx_local->core_props,
+ &dec_core_ctx_local->core_features,
+ dec_core_ctx_local->pipe_features);
+
+ /* Perform device setup for master core. */
+ if (num_pipes)
+ *num_pipes = dec_core_ctx_local->core_props.num_pixel_pipes;
+
+ /* DEVA PVDEC FW requires PTD to be 64k aligned. */
+ ptd_align = 0x10000;
+
+ /* Create a device MMU context. */
+ ret = mmu_device_create(dec_core_ctx_local->core_props.mmu_type,
+ ptd_align, &dec_context->mmu_dev_handle);
+ VDEC_ASSERT(ret == IMG_SUCCESS);
+ if (ret != IMG_SUCCESS)
+ goto error;
+
+ dec_core_ctx_local->enumerated = TRUE;
+
+ dec_context->dec_core_ctx = dec_core_ctx_local;
+
+ return IMG_SUCCESS;
+
+error:
+ if (dec_core_ctx_local) {
+ unsigned int deinit_result = IMG_SUCCESS;
+
+ /* Destroy a device MMU context. */
+ if (dec_context->mmu_dev_handle) {
+ deinit_result =
+ mmu_device_destroy(dec_context->mmu_dev_handle);
+ VDEC_ASSERT(deinit_result == IMG_SUCCESS);
+ if (deinit_result != IMG_SUCCESS)
+ pr_warn("MMU_DeviceDestroy() failed to tidy-up after error");
+ }
+
+ kfree(dec_core_ctx_local);
+ dec_core_ctx_local = NULL;
+ }
+
+ return ret;
+}
+
+/*
+ * @Function decoder_initialise
+ */
+int decoder_initialise(void *user_init_data, unsigned int int_heap_id,
+ struct vdecdd_dd_devconfig *dd_device_config,
+ unsigned int *num_pipes,
+ void **dec_ctx_handle)
+{
+ struct dec_ctx *dec_context = (struct dec_ctx *)*dec_ctx_handle;
+ int ret;
+
+ if (!dec_context) {
+ dec_context = kzalloc(sizeof(*dec_context), GFP_KERNEL);
+ VDEC_ASSERT(dec_context);
+ if (!dec_context)
+ return IMG_ERROR_OUT_OF_MEMORY;
+
+ *dec_ctx_handle = dec_context;
+ }
+
+ /* Determine which standards are supported. */
+ memset(&dec_context->sup_stds, 0x0, sizeof(dec_context->sup_stds));
+
+ dec_context->sup_stds[VDEC_STD_H264] = TRUE;
+#ifdef HAS_HEVC
+ dec_context->sup_stds[VDEC_STD_HEVC] = TRUE;
+#endif
+ if (!dec_context->inited) {
+ /* Check and store input parameters. */
+ dec_context->user_data = user_init_data;
+ dec_context->dev_handle =
+ ((struct vdecdd_dddev_context *)user_init_data)->dev_handle;
+
+ /* Initialise the context lists. */
+ lst_init(&dec_context->str_list);
+
+ /* Make sure POOL API is initialised */
+ ret = pool_init();
+ if (ret != IMG_SUCCESS)
+ goto pool_init_error;
+
+ ret = decoder_set_device_config(&dec_context->dev_cfg);
+ VDEC_ASSERT(ret == IMG_SUCCESS);
+ if (ret != IMG_SUCCESS)
+ goto error;
+
+ dec_context->internal_heap_id = int_heap_id;
+
+ /* Enumerate the master core. */
+ ret = decoder_core_enumerate(dec_context, dec_context->dev_cfg,
+ &dec_context->num_pipes);
+ if (ret != IMG_SUCCESS)
+ goto error;
+
+ if (dd_device_config)
+ *dd_device_config = *dec_context->dev_cfg;
+
+ if (num_pipes)
+ *num_pipes = dec_context->num_pipes;
+
+ dec_context->inited = TRUE;
+ }
+
+ return IMG_SUCCESS;
+
+error:
+ pool_deinit();
+
+pool_init_error:
+ if (dec_context->dev_cfg) {
+ kfree((void *)dec_context->dev_cfg);
+ dec_context->dev_cfg = NULL;
+ }
+
+ kfree(*dec_ctx_handle);
+ *dec_ctx_handle = NULL;
+
+ return ret;
+}
+
+/*
+ * @Function decoder_supported_features
+ */
+int decoder_supported_features(void *dec_ctx, struct vdec_features *features)
+{
+ struct dec_ctx *dec_context = (struct dec_ctx *)dec_ctx;
+ struct dec_core_ctx *dec_core_ctx_local;
+
+ /* Check input parameters. */
+ VDEC_ASSERT(dec_context);
+ VDEC_ASSERT(features);
+ if (!dec_context || !features) {
+ pr_err("Invalid parameters!");
+ return IMG_ERROR_INVALID_PARAMETERS;
+ }
+
+ /* Ensure that Decoder component is initialised. */
+ VDEC_ASSERT(dec_context->inited);
+
+ /* Loop over all cores checking for support. */
+ dec_core_ctx_local = dec_context->dec_core_ctx;
+ VDEC_ASSERT(dec_core_ctx_local);
+
+ /*
+ * Determine whether the required core attribute
+ * is present to support requested feature
+ */
+ features->h264 |= (dec_core_ctx_local->core_features &
+ VDECDD_COREFEATURE_H264) ? TRUE : FALSE;
+ features->mpeg2 |= (dec_core_ctx_local->core_features &
+ VDECDD_COREFEATURE_MPEG2) ? TRUE : FALSE;
+ features->mpeg4 |= (dec_core_ctx_local->core_features &
+ VDECDD_COREFEATURE_MPEG4) ? TRUE : FALSE;
+ features->vc1 |= (dec_core_ctx_local->core_features &
+ VDECDD_COREFEATURE_VC1) ? TRUE : FALSE;
+ features->avs |= (dec_core_ctx_local->core_features &
+ VDECDD_COREFEATURE_AVS) ? TRUE : FALSE;
+ features->real |= (dec_core_ctx_local->core_features &
+ VDECDD_COREFEATURE_REAL) ? TRUE : FALSE;
+ features->jpeg |= (dec_core_ctx_local->core_features &
+ VDECDD_COREFEATURE_JPEG) ? TRUE : FALSE;
+ features->vp6 |= (dec_core_ctx_local->core_features &
+ VDECDD_COREFEATURE_VP6) ? TRUE : FALSE;
+ features->vp8 |= (dec_core_ctx_local->core_features &
+ VDECDD_COREFEATURE_VP8) ? TRUE : FALSE;
+ features->hd |= (dec_core_ctx_local->core_features &
+ VDECDD_COREFEATURE_HD_DECODE) ? TRUE : FALSE;
+ features->rotation |= (dec_core_ctx_local->core_features &
+ VDECDD_COREFEATURE_ROTATION) ? TRUE : FALSE;
+ features->scaling |= (dec_core_ctx_local->core_features &
+ VDECDD_COREFEATURE_SCALING) ? TRUE : FALSE;
+ features->hevc |= (dec_core_ctx_local->core_features &
+ VDECDD_COREFEATURE_HEVC) ? TRUE : FALSE;
+
+ return IMG_SUCCESS;
+}
+
+/*
+ * @Function decoder_stream_get_status
+ */
+int decoder_stream_get_status(void *dec_str_ctx_handle,
+ struct vdecdd_decstr_status *dec_str_st)
+{
+ struct dec_str_ctx *dec_str_ctx;
+ struct dec_decoded_pict *decoded_pict;
+ struct dec_core_ctx *dec_core_ctx;
+ unsigned int item;
+
+ VDEC_ASSERT(dec_str_st);
+ if (!dec_str_st) {
+ pr_err("Invalid decoder streams status pointer!");
+ return IMG_ERROR_INVALID_PARAMETERS;
+ }
+
+ dec_str_ctx = decoder_stream_get_context(dec_str_ctx_handle);
+ VDEC_ASSERT(dec_str_ctx);
+ if (!dec_str_ctx) {
+ pr_err("Invalid decoder stream context handle!");
+ return IMG_ERROR_INVALID_PARAMETERS;
+ }
+
+ /* Obtain the state of each core. */
+ dec_core_ctx = decoder_str_ctx_to_core_ctx(dec_str_ctx);
+ VDEC_ASSERT(dec_core_ctx);
+
+ /*
+ * Obtain the display and release list of first unprocessed
+ * picture in decoded list
+ */
+ dec_str_ctx->dec_str_st.display_pics = 0;
+ dec_str_ctx->dec_str_st.release_pics = 0;
+ decoded_pict = dq_first(&dec_str_ctx->str_decd_pict_list);
+ while (decoded_pict) {
+ /* if this is the first unprocessed picture */
+ if (!decoded_pict->processed) {
+ unsigned int idx;
+ struct vdecfw_buffer_control *buf_ctrl;
+
+ VDEC_ASSERT(decoded_pict->pict_ref_res);
+ buf_ctrl =
+ (struct vdecfw_buffer_control *)decoded_pict->pict_ref_res->fw_ctrlbuf.cpu_virt;
+ VDEC_ASSERT(buf_ctrl);
+
+ /* Get display pictures */
+ idx = decoded_pict->disp_idx;
+ item = dec_str_ctx->dec_str_st.display_pics;
+ while (idx < buf_ctrl->display_list_length &&
+ item < VDECFW_MAX_NUM_PICTURES) {
+ dec_str_ctx->dec_str_st.next_display_items[item] =
+ buf_ctrl->display_list[idx];
+ dec_str_ctx->dec_str_st.next_display_item_parent[item] =
+ decoded_pict->transaction_id;
+ idx++;
+ item++;
+ }
+ dec_str_ctx->dec_str_st.display_pics = item;
+
+ /* Get release pictures */
+ idx = decoded_pict->rel_idx;
+ item = dec_str_ctx->dec_str_st.release_pics;
+ while (idx < buf_ctrl->release_list_length &&
+ item < VDECFW_MAX_NUM_PICTURES) {
+ dec_str_ctx->dec_str_st.next_release_items[item] =
+ buf_ctrl->release_list[idx];
+ dec_str_ctx->dec_str_st.next_release_item_parent[item] =
+ decoded_pict->transaction_id;
+ idx++;
+ item++;
+ }
+ dec_str_ctx->dec_str_st.release_pics = item;
+ break;
+ }
+
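+ /*
+ * Walk the list manually; traversal stops once the tail
+ * element has been visited rather than relying on dq_next()
+ * returning NULL.
+ */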
+ if (decoded_pict != dq_last(&dec_str_ctx->str_decd_pict_list))
+ decoded_pict = dq_next(decoded_pict);
+ else
+ decoded_pict = NULL;
+ }
+
+ /* Get list of held decoded pictures */
+ item = 0;
+ decoded_pict = dq_first(&dec_str_ctx->str_decd_pict_list);
+ while (decoded_pict) {
+ dec_str_ctx->dec_str_st.decoded_picts[item] =
+ decoded_pict->transaction_id;
+ item++;
+
+ if (decoded_pict != dq_last(&dec_str_ctx->str_decd_pict_list))
+ decoded_pict = dq_next(decoded_pict);
+ else
+ decoded_pict = NULL;
+ }
+
+ VDEC_ASSERT(item == dec_str_ctx->dec_str_st.num_pict_decoded);
+ *dec_str_st = dec_str_ctx->dec_str_st;
+
+ return IMG_SUCCESS;
+}
+
+/*
+ * @Function decoder_deinitialise
+ */
+int decoder_deinitialise(void *dec_ctx)
+{
+ struct dec_ctx *dec_context = (struct dec_ctx *)dec_ctx;
+ int ret;
+ /* Remove and free all core context structures */
+ struct dec_decpict *dec_pict;
+
+ if (dec_context && dec_context->inited) {
+ struct dec_core_ctx *dec_core_ctx_local =
+ dec_context->dec_core_ctx;
+
+ if (!dec_core_ctx_local) {
+ pr_warn("%s %d NULL Decoder context passed", __func__, __LINE__);
+ VDEC_ASSERT(0);
+ return -EFAULT;
+ }
+
+ /* Stream list should be empty. */
+ if (!lst_empty(&dec_context->str_list))
+ pr_warn("%s %d stream list should be empty", __func__, __LINE__);
+
+ /*
+ * All cores should now be idle since there are no
+ * connections/streams.
+ */
+ ret = hwctrl_peekheadpiclist(dec_core_ctx_local->hw_ctx, &dec_pict);
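+ /* A successful peek here would mean a picture is still queued */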
+ VDEC_ASSERT(ret != IMG_SUCCESS);
+
+ /* Destroy a device MMU context. */
+ ret = mmu_device_destroy(dec_context->mmu_dev_handle);
+ VDEC_ASSERT(ret == IMG_SUCCESS);
+ if (ret != IMG_SUCCESS)
+ return ret;
+
+ /* Remove and free core context structure */
+ dec_core_ctx_local = dec_context->dec_core_ctx;
+ VDEC_ASSERT(dec_core_ctx_local);
+
+ hwctrl_deinitialise(dec_core_ctx_local->hw_ctx);
+
+ kfree(dec_core_ctx_local);
+ dec_core_ctx_local = NULL;
+
+ VDEC_ASSERT(dec_context->dev_cfg);
+ if (dec_context->dev_cfg)
+ kfree((void *)dec_context->dev_cfg);
+
+ dec_context->user_data = NULL;
+
+ pool_deinit();
+
+ dec_context->inited = FALSE;
+
+ kfree(dec_context);
+ } else {
+ pr_err("Decoder has not been initialised so cannot be de-initialised");
+ return IMG_ERROR_NOT_INITIALISED;
+ }
+
+ pr_debug("Decoder deinitialise successfully\n");
+ return IMG_SUCCESS;
+}
+
+/*
+ * @Function decoder_picture_destroy
+ * @Description
+ * Free the picture container and optionally release image buffer back to
+ * client.
+ * Default is to decrement the reference count held by this picture.
+ */
+static int decoder_picture_destroy(struct dec_str_ctx *dec_str_ctx,
+ unsigned int pict_id,
+ unsigned char release_image)
+{
+ struct vdecdd_picture *picture;
+ int ret;
+
+ VDEC_ASSERT(dec_str_ctx);
+
+ ret = idgen_gethandle(dec_str_ctx->pict_idgen, pict_id, (void **)&picture);
+ if (ret == IMG_SUCCESS) {
+ VDEC_ASSERT(picture);
+ ret = idgen_freeid(dec_str_ctx->pict_idgen, pict_id);
+ VDEC_ASSERT(ret == IMG_SUCCESS);
+ if (ret != IMG_SUCCESS)
+ return ret;
+
+ if (picture->dec_pict_info) {
+ /* Destroy the picture */
+ kfree(picture->dec_pict_info);
+ picture->dec_pict_info = NULL;
+ }
+
+ /* Return unused picture and internal resources */
+ if (picture->disp_pict_buf.pict_buf) {
+ if (release_image)
+ resource_item_release
+ (&picture->disp_pict_buf.pict_buf->ddbuf_info.ref_count);
+ else
+ resource_item_return
+ (&picture->disp_pict_buf.pict_buf->ddbuf_info.ref_count);
+
+ memset(&picture->disp_pict_buf, 0, sizeof(picture->disp_pict_buf));
+ }
+
+ if (picture->pict_res_int) {
+ resource_item_return(&picture->pict_res_int->ref_cnt);
+ picture->pict_res_int = NULL;
+ }
+
+ kfree(picture);
+ picture = NULL;
+ } else {
+ VDEC_ASSERT(ret == IMG_SUCCESS);
+ return ret;
+ }
+
+ return IMG_SUCCESS;
+}
+
+/*
+ * @Function decoder_decoded_picture_destroy
+ */
+static int
+decoder_decoded_picture_destroy(struct dec_str_ctx *dec_str_ctx,
+ struct dec_decoded_pict *decoded_pict,
+ unsigned char release_image)
+{
+ int ret;
+
+ VDEC_ASSERT(dec_str_ctx);
+ VDEC_ASSERT(decoded_pict);
+
+ if (decoded_pict->pict) {
+ VDEC_ASSERT(decoded_pict->pict->pict_id ==
+ GET_STREAM_PICTURE_ID(decoded_pict->transaction_id));
+
+ ret = decoder_picture_destroy(dec_str_ctx, decoded_pict->pict->pict_id,
+ release_image);
+ VDEC_ASSERT(ret == IMG_SUCCESS);
+ if (ret != IMG_SUCCESS)
+ return ret;
+
+ decoded_pict->pict = NULL;
+ }
+
+ dq_remove(decoded_pict);
+ dec_str_ctx->dec_str_st.num_pict_decoded--;
+
+ resource_item_return(&decoded_pict->pict_ref_res->ref_cnt);
+
+#ifdef DEBUG_DECODER_DRIVER
+ pr_info("[USERSID=0x%08X] [TID=0x%08X] COMPLETE",
+ GET_STREAM_ID(decoded_pict->transaction_id),
+ decoded_pict->transaction_id);
+#endif
+
+ kfree(decoded_pict->first_fld_fwmsg);
+ decoded_pict->first_fld_fwmsg = NULL;
+
+ kfree(decoded_pict->second_fld_fwmsg);
+ decoded_pict->second_fld_fwmsg = NULL;
+
+ kfree(decoded_pict);
+ decoded_pict = NULL;
+
+ return IMG_SUCCESS;
+}
+
+/*
+ * @Function decoder_stream_decode_resource_destroy
+ */
+static int decoder_stream_decode_resource_destroy(void *item, void *free_cb_param)
+{
+ struct dec_pictdec_res *pict_dec_res = item;
+ struct dec_str_ctx *dec_str_ctx_local =
+ (struct dec_str_ctx *)free_cb_param;
+ int ret;
+
+ VDEC_ASSERT(pict_dec_res);
+ VDEC_ASSERT(resource_item_isavailable(&pict_dec_res->ref_cnt));
+
+ /* Free memory (device-based) to store fw contexts for stream. */
+ ret = mmu_free_mem(dec_str_ctx_local->mmu_str_handle, &pict_dec_res->fw_ctx_buf);
+ VDEC_ASSERT(ret == IMG_SUCCESS);
+ if (ret != IMG_SUCCESS)
+ return ret;
+
+ if (pict_dec_res->h264_sgm_buf.hndl_memory) {
+ /* Free memory (device-based) to store SGM. */
+ ret = mmu_free_mem(dec_str_ctx_local->mmu_str_handle, &pict_dec_res->h264_sgm_buf);
+ VDEC_ASSERT(ret == IMG_SUCCESS);
+ if (ret != IMG_SUCCESS)
+ return ret;
+ }
+
+ kfree(pict_dec_res);
+
+ return IMG_SUCCESS;
+}
+
+/*
+ * @Function decoder_stream_release_buffers
+ */
+int decoder_stream_release_buffers(void *dec_str_ctx_handle)
+{
+ struct dec_str_ctx *dec_str_ctx;
+ struct dec_decoded_pict *decoded_pict;
+ int ret;
+
+ dec_str_ctx = decoder_stream_get_context(dec_str_ctx_handle);
+
+ /* Decoding queue should be empty since we are stopped */
+ VDEC_ASSERT(dec_str_ctx);
+ if (!dec_str_ctx) {
+ pr_err("Invalid decoder stream context handle!");
+ return IMG_ERROR_INVALID_PARAMETERS;
+ }
+ VDEC_ASSERT(lst_empty(&dec_str_ctx->pend_strunit_list));
+
+ /* Destroy all pictures in the decoded list */
+ decoded_pict = dq_first(&dec_str_ctx->str_decd_pict_list);
+ while (decoded_pict) {
+ ret = decoder_decoded_picture_destroy(dec_str_ctx, decoded_pict, TRUE);
+ VDEC_ASSERT(ret == IMG_SUCCESS);
+ if (ret != IMG_SUCCESS)
+ return ret;
+
+ decoded_pict = dq_first(&dec_str_ctx->str_decd_pict_list);
+ }
+
+ /* Only needed if the output buffers were used for reference. */
+ if (dec_str_ctx->last_be_pict_dec_res) {
+ /*
+ * Clear the firmware context so that reference pictures
+ * are no longer referred to.
+ */
+ memset(dec_str_ctx->last_be_pict_dec_res->fw_ctx_buf.cpu_virt, 0,
+ dec_str_ctx->last_be_pict_dec_res->fw_ctx_buf.buf_size);
+ }
+
+ return IMG_SUCCESS;
+}
+
+/*
+ * @Function decoder_stream_reference_resource_destroy
+ */
+static int decoder_stream_reference_resource_destroy(void *item, void *free_cb_param)
+{
+ struct dec_pictref_res *pict_ref_res = item;
+ struct dec_str_ctx *dec_str_ctx_local =
+ (struct dec_str_ctx *)free_cb_param;
+ int ret;
+
+ VDEC_ASSERT(pict_ref_res);
+ VDEC_ASSERT(resource_item_isavailable(&pict_ref_res->ref_cnt));
+
+ /* Free memory (device-based) to store fw contexts for stream */
+ ret = mmu_free_mem(dec_str_ctx_local->mmu_str_handle, &pict_ref_res->fw_ctrlbuf);
+ VDEC_ASSERT(ret == IMG_SUCCESS);
+ if (ret != IMG_SUCCESS)
+ return ret;
+
+ kfree(pict_ref_res);
+
+ return IMG_SUCCESS;
+}
+
+/*
+ * @Function decoder_stream_destroy
+ */
+int decoder_stream_destroy(void *dec_str_context, unsigned char abort)
+{
+ struct dec_str_ctx *dec_str_ctx_local;
+ struct dec_str_unit *dec_str_unit_local;
+ struct dec_decoded_pict *decoded_pict_local;
+ unsigned int i;
+ int ret;
+ unsigned int pict_id;
+ void **res_handle_local;
+
+ /* Required for getting segments from decode picture to free */
+ struct dec_decpict_seg *dec_pict_seg_local;
+ struct dec_ctx *dec_context;
+ struct dec_core_ctx *dec_core_ctx_local;
+
+ /* Get the Decoder stream context. */
+ dec_str_ctx_local = decoder_stream_get_context(dec_str_context);
+
+ VDEC_ASSERT(dec_str_ctx_local);
+ if (!dec_str_ctx_local) {
+ pr_err("Invalid decoder stream context handle!");
+ return FALSE;
+ }
+ VDEC_ASSERT(dec_str_ctx_local->decctx);
+
+ /* Decrement the stream count */
+ dec_str_ctx_local->decctx->str_cnt--;
+
+ /*
+ * Ensure that there are no pictures for this stream outstanding
+ * on any decoder cores.
+ * This check should not be removed; it is important to know if it
+ * ever happens. In practice it occurs frequently when the
+ * application times out.
+ */
+ if (!abort)
+ VDEC_ASSERT(lst_empty(&dec_str_ctx_local->pend_strunit_list));
+
+ /*
+ * At this point all resources for the stream are guaranteed to
+ * not be used and no further hardware interrupts will be received.
+ */
+
+ /* Destroy all stream units submitted for processing. */
+ dec_str_unit_local =
+ lst_removehead(&dec_str_ctx_local->pend_strunit_list);
+ while (dec_str_unit_local) {
+ /* If the unit was submitted for decoding (picture)... */
+ if (dec_str_unit_local->dec_pict) {
+ /*
+ * Explicitly remove picture from core decode queue
+ * and destroy.
+ */
+ struct dec_core_ctx *dec_core_ctx_local =
+ decoder_str_ctx_to_core_ctx(dec_str_ctx_local);
+ VDEC_ASSERT(dec_core_ctx_local);
+
+ res_handle_local = &dec_str_ctx_local->resources;
+
+ if (!dec_core_ctx_local) {
+ VDEC_ASSERT(0);
+ return -EINVAL;
+ }
+
+ hwctrl_removefrom_piclist(dec_core_ctx_local->hw_ctx,
+ dec_str_unit_local->dec_pict);
+
+ /* Free decoder picture */
+ kfree(dec_str_unit_local->dec_pict->first_fld_fwmsg);
+ dec_str_unit_local->dec_pict->first_fld_fwmsg = NULL;
+
+ kfree(dec_str_unit_local->dec_pict->second_fld_fwmsg);
+ dec_str_unit_local->dec_pict->second_fld_fwmsg = NULL;
+
+ dec_res_picture_detach(res_handle_local, dec_str_unit_local->dec_pict);
+
+ /* Free all the segments of the picture */
+ dec_pict_seg_local =
+ lst_removehead(&dec_str_unit_local->dec_pict->dec_pict_seg_list);
+ while (dec_pict_seg_local) {
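+ /*
+ * Only bitstream segments allocated internally by the
+ * decoder are freed here; client-owned segments are left
+ * untouched.
+ */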
+ if (dec_pict_seg_local->internal_seg) {
+ VDEC_ASSERT(dec_pict_seg_local->bstr_seg);
+ kfree(dec_pict_seg_local->bstr_seg);
+ dec_pict_seg_local->bstr_seg = NULL;
+ }
+
+ kfree(dec_pict_seg_local);
+ dec_pict_seg_local = NULL;
+
+ dec_pict_seg_local =
+ lst_removehead
+ (&dec_str_unit_local->dec_pict->dec_pict_seg_list);
+ }
+
+ VDEC_ASSERT(dec_str_unit_local->dec_pict->dec_str_ctx == dec_str_ctx_local);
+
+ dec_str_ctx_local->dec_str_st.num_pict_decoding--;
+ pict_id =
+ GET_STREAM_PICTURE_ID(dec_str_unit_local->dec_pict->transaction_id);
+
+ ret = decoder_picture_destroy(dec_str_ctx_local, pict_id, TRUE);
+ VDEC_ASSERT(ret == IMG_SUCCESS);
+ if (ret != IMG_SUCCESS)
+ return ret;
+
+ kfree(dec_str_unit_local->dec_pict);
+ dec_str_unit_local->dec_pict = NULL;
+ }
+
+ /* Free the picture container */
+ kfree(dec_str_unit_local);
+ dec_str_unit_local = NULL;
+
+ dec_str_unit_local = lst_removehead(&dec_str_ctx_local->pend_strunit_list);
+ }
+
+ /* Destroy all pictures in the decoded list */
+ decoded_pict_local = dq_first(&dec_str_ctx_local->str_decd_pict_list);
+ while (decoded_pict_local) {
+ ret = decoder_decoded_picture_destroy(dec_str_ctx_local,
+ decoded_pict_local,
+ TRUE);
+ VDEC_ASSERT(ret == IMG_SUCCESS);
+ if (ret != IMG_SUCCESS)
+ return ret;
+
+ decoded_pict_local = dq_first(&dec_str_ctx_local->str_decd_pict_list);
+ }
+
+ /* Ensure all picture queues are empty */
+ VDEC_ASSERT(lst_empty(&dec_str_ctx_local->pend_strunit_list));
+ VDEC_ASSERT(dq_empty(&dec_str_ctx_local->str_decd_pict_list));
+
+ /* Free memory to store stream context buffer. */
+ ret = mmu_free_mem(dec_str_ctx_local->mmu_str_handle,
+ &dec_str_ctx_local->pvdec_fw_ctx_buf);
+ VDEC_ASSERT(ret == IMG_SUCCESS);
+ if (ret != IMG_SUCCESS)
+ return ret;
+
+ /* Release any fw contexts held by stream. */
+ if (dec_str_ctx_local->prev_fe_pict_dec_res)
+ resource_item_return(&dec_str_ctx_local->prev_fe_pict_dec_res->ref_cnt);
+
+ if (dec_str_ctx_local->cur_fe_pict_dec_res)
+ resource_item_return(&dec_str_ctx_local->cur_fe_pict_dec_res->ref_cnt);
+
+ if (dec_str_ctx_local->last_be_pict_dec_res)
+ resource_item_return(&dec_str_ctx_local->last_be_pict_dec_res->ref_cnt);
+
+ /*
+ * Remove the device resources used for decoding and the two
+ * added to hold the last on front and back-end for stream.
+ */
+ for (i = 0; i < dec_str_ctx_local->num_dec_res + 2; i++) {
+ ret = resource_list_empty(&dec_str_ctx_local->dec_res_lst, FALSE,
+ decoder_stream_decode_resource_destroy,
+ dec_str_ctx_local);
+ VDEC_ASSERT(ret == IMG_SUCCESS);
+ if (ret != IMG_SUCCESS)
+ return ret;
+ }
+ VDEC_ASSERT(lst_empty(&dec_str_ctx_local->dec_res_lst));
+
+ /* Remove all stream decode resources. */
+ ret = resource_list_empty(&dec_str_ctx_local->ref_res_lst, FALSE,
+ decoder_stream_reference_resource_destroy,
+ dec_str_ctx_local);
+
+ VDEC_ASSERT(ret == IMG_SUCCESS);
+ if (ret != IMG_SUCCESS)
+ return ret;
+
+ VDEC_ASSERT(lst_empty(&dec_str_ctx_local->ref_res_lst));
+
+ idgen_destroycontext(dec_str_ctx_local->pict_idgen);
+
+ dec_context = dec_str_ctx_local->decctx;
+ dec_core_ctx_local = decoder_str_ctx_to_core_ctx(dec_str_ctx_local);
+
+ VDEC_ASSERT(dec_context);
+ VDEC_ASSERT(dec_core_ctx_local);
+
+ res_handle_local = &dec_str_ctx_local->resources;
+
+ if (*res_handle_local) {
+ ret = dec_res_destroy(dec_str_ctx_local->mmu_str_handle, *res_handle_local);
+ if (ret != IMG_SUCCESS)
+ pr_warn("resourceS_Destroy() failed to tidy-up after error");
+
+ *res_handle_local = NULL;
+ }
+
+ lst_remove(&dec_str_ctx_local->decctx->str_list, dec_str_ctx_local);
+
+ kfree(dec_str_ctx_local);
+ dec_str_ctx_local = NULL;
+
+ pr_debug("%s successfully", __func__);
+ return IMG_SUCCESS;
+}
+
+/*
+ * @Function decoder_init_avail_slots
+ */
+static int decoder_init_avail_slots(struct dec_str_ctx *dec_str_context)
+{
+ VDEC_ASSERT(dec_str_context);
+
+ switch (dec_str_context->config.vid_std) {
+ case VDEC_STD_H264:
+ /*
+ * only first pipe can be master when decoding H264 in
+ * multipipe mode (FW restriction)
+ */
+ dec_str_context->avail_slots =
+ dec_str_context->decctx->dev_cfg->num_slots_per_pipe *
+ dec_str_context->decctx->num_pipes;
+ break;
+
+ default:
+ /* all pipes by default */
+ dec_str_context->avail_slots =
+ dec_str_context->decctx->dev_cfg->num_slots_per_pipe;
+ break;
+ }
+
+ return IMG_SUCCESS;
+}
+
+/*
+ * @Function decoder_stream_decode_resource_create
+ */
+static int decoder_stream_decode_resource_create(struct dec_str_ctx *dec_str_context)
+{
+ struct dec_pictdec_res *pict_dec_res;
+ int ret;
+ unsigned int mem_heap_id;
+ enum sys_emem_attrib mem_attribs;
+
+ unsigned char fw_ctx_buf = FALSE;
+
+ /* Validate input arguments */
+ if (!dec_str_context || !dec_str_context->decctx ||
+ !dec_str_context->decctx->dev_cfg) {
+ VDEC_ASSERT(0);
+ return -EINVAL;
+ }
+
+ mem_heap_id = dec_str_context->decctx->internal_heap_id;
+ mem_attribs = (enum sys_emem_attrib)(SYS_MEMATTRIB_UNCACHED | SYS_MEMATTRIB_WRITECOMBINE);
+ mem_attribs |= (enum sys_emem_attrib)SYS_MEMATTRIB_INTERNAL;
+
+ /* Allocate the firmware context buffer info structure. */
+ pict_dec_res = kzalloc(sizeof(*pict_dec_res), GFP_KERNEL);
+ VDEC_ASSERT(pict_dec_res);
+ if (!pict_dec_res)
+ return IMG_ERROR_OUT_OF_MEMORY;
+
+ /*
+ * Allocate the firmware context buffer to contain
+ * data required for subsequent picture.
+ */
+#ifdef DEBUG_DECODER_DRIVER
+ pr_info("%s:%d calling MMU_StreamMalloc", __func__, __LINE__);
+#endif
+
+ ret = mmu_stream_alloc(dec_str_context->mmu_str_handle,
+ MMU_HEAP_STREAM_BUFFERS, mem_heap_id,
+ (enum sys_emem_attrib)(mem_attribs | SYS_MEMATTRIB_CPU_READ |
+ SYS_MEMATTRIB_CPU_WRITE),
+ sizeof(union dec_fw_contexts),
+ DEV_MMU_PAGE_ALIGNMENT,
+ &pict_dec_res->fw_ctx_buf);
+
+ VDEC_ASSERT(ret == IMG_SUCCESS);
+ if (ret != IMG_SUCCESS)
+ goto err_out_of_memory;
+
+ fw_ctx_buf = TRUE;
+
+ /*
+ * Clear the context data in preparation for first time
+ * use by the firmware.
+ */
+ memset(pict_dec_res->fw_ctx_buf.cpu_virt, 0, pict_dec_res->fw_ctx_buf.buf_size);
+
+ switch (dec_str_context->config.vid_std) {
+ case VDEC_STD_H264:
+ /* Allocate the SGM buffer */
+#ifdef DEBUG_DECODER_DRIVER
+ pr_info("%s:%d calling MMU_StreamMalloc", __func__, __LINE__);
+#endif
+ ret = mmu_stream_alloc
+ (dec_str_context->mmu_str_handle,
+ MMU_HEAP_STREAM_BUFFERS, mem_heap_id,
+ (enum sys_emem_attrib)(mem_attribs | SYS_MEMATTRIB_CPU_WRITE),
+ H264_SGM_BUFFER_BYTES_PER_MB *
+ H264_SGM_MAX_MBS,
+ DEV_MMU_PAGE_ALIGNMENT,
+ &pict_dec_res->h264_sgm_buf);
+ VDEC_ASSERT(ret == IMG_SUCCESS);
+ if (ret != IMG_SUCCESS)
+ goto err_out_of_memory;
+
+ /* Clear the SGM data. */
+ memset(pict_dec_res->h264_sgm_buf.cpu_virt, 0, pict_dec_res->h264_sgm_buf.buf_size);
+ break;
+
+ default:
+ break;
+ }
+
+ pict_dec_res->ref_cnt = 1;
+
+ ret = resource_list_add_img(&dec_str_context->dec_res_lst, pict_dec_res, 0,
+ &pict_dec_res->ref_cnt);
+
+ if (ret != IMG_SUCCESS) {
+ pr_warn("[USERSID=0x%08X] Failed to add resource",
+ dec_str_context->config.user_str_id);
+ }
+
+ return IMG_SUCCESS;
+
+err_out_of_memory:
+ if (pict_dec_res) {
+ if (fw_ctx_buf)
+ mmu_free_mem(dec_str_context->mmu_str_handle, &pict_dec_res->fw_ctx_buf);
+
+ kfree(pict_dec_res);
+ pict_dec_res = NULL;
+ }
+
+ pr_err("[USERSID=0x%08X] Failed to allocate device memory for stream decode resources",
+ dec_str_context->config.user_str_id);
+
+ return IMG_ERROR_OUT_OF_MEMORY;
+}
+
+/*
+ * @Function decoder_stream_create
+ */
+int decoder_stream_create(void *dec_ctx_arg,
+ struct vdec_str_configdata str_cfg,
+ unsigned int km_str_id, void **mmu_str_handle,
+ void *vxd_dec_ctx, void *str_user_int_data,
+ void **dec_str_ctx_arg, void *decoder_cb,
+ void *query_cb)
+{
+ struct dec_ctx *dec_context = (struct dec_ctx *)dec_ctx_arg;
+ struct dec_str_ctx *dec_str_ctx = NULL;
+ unsigned int i;
+ int ret;
+ unsigned int mem_heap_id;
+ enum sys_emem_attrib mem_attribs;
+ struct dec_core_ctx *dec_core_ctx_local;
+
+ /* Check input parameters. */
+ VDEC_ASSERT(dec_ctx_arg);
+ if (!dec_ctx_arg) {
+ pr_err("Invalid parameters!");
+ return IMG_ERROR_INVALID_PARAMETERS;
+ }
+
+ if (dec_context->str_cnt >= MAX_CONCURRENT_STREAMS) {
+ pr_err("Device has too many concurrent streams. Number of Concurrent streams allowed: %d.",
+ MAX_CONCURRENT_STREAMS);
+ return IMG_ERROR_DEVICE_UNAVAILABLE;
+ }
+
+ mem_heap_id = dec_context->internal_heap_id;
+ mem_attribs = (enum sys_emem_attrib)(SYS_MEMATTRIB_UNCACHED | SYS_MEMATTRIB_WRITECOMBINE);
+ mem_attribs |= (enum sys_emem_attrib)SYS_MEMATTRIB_INTERNAL;
+
+ /* Allocate Decoder Stream Context */
+ dec_str_ctx = kzalloc(sizeof(*dec_str_ctx), GFP_KERNEL);
+ VDEC_ASSERT(dec_str_ctx);
+ if (!dec_str_ctx)
+ return IMG_ERROR_OUT_OF_MEMORY;
+
+ /* Increment the stream counter */
+ dec_context->str_cnt++;
+
+ /*
+ * Initialise the context structure to NULL. Any non-zero
+ * default values should be set at this point.
+ */
+ dec_str_ctx->config = str_cfg;
+ dec_str_ctx->vxd_dec_ctx = vxd_dec_ctx;
+ dec_str_ctx->usr_int_data = str_user_int_data;
+ dec_str_ctx->decctx = dec_context;
+
+ decoder_init_avail_slots(dec_str_ctx);
+
+ dec_str_ctx->next_dec_pict_id = 1;
+ dec_str_ctx->next_pict_id_expected = 1;
+
+ dec_str_ctx->km_str_id = km_str_id;
+ VDEC_ASSERT(dec_str_ctx->km_str_id > 0);
+
+ lst_init(&dec_str_ctx->pend_strunit_list);
+ dq_init(&dec_str_ctx->str_decd_pict_list);
+ lst_init(&dec_str_ctx->ref_res_lst);
+ lst_init(&dec_str_ctx->dec_res_lst);
+
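+ /*
+ * Create an ID generator used to issue stream picture IDs and map
+ * them back to their picture structures.
+ */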
+ ret = idgen_createcontext(DECODER_MAX_PICT_ID + 1,
+ DECODER_MAX_CONCURRENT_PICT,
+ TRUE,
+ &dec_str_ctx->pict_idgen);
+ VDEC_ASSERT(ret == IMG_SUCCESS);
+ if (ret != IMG_SUCCESS)
+ goto error;
+
+ /* Create an MMU context for this stream. */
+ ret = mmu_stream_create(dec_context->mmu_dev_handle,
+ dec_str_ctx->km_str_id,
+ dec_str_ctx->vxd_dec_ctx,
+ &dec_str_ctx->mmu_str_handle);
+ VDEC_ASSERT(ret == IMG_SUCCESS);
+ if (ret != IMG_SUCCESS)
+ goto error;
+
+ dec_core_ctx_local = dec_context->dec_core_ctx;
+
+ VDEC_ASSERT(dec_core_ctx_local);
+
+ /* Create core resources */
+ ret = dec_res_create(dec_str_ctx->mmu_str_handle,
+ &dec_core_ctx_local->core_props,
+ dec_context->dev_cfg->num_slots_per_pipe *
+ dec_context->num_pipes,
+ dec_context->internal_heap_id,
+ &dec_str_ctx->resources);
+ VDEC_ASSERT(ret == IMG_SUCCESS);
+ if (ret != IMG_SUCCESS)
+ goto error;
+
+ /* Allocate the PVDEC firmware context buffer */
+#ifdef DEBUG_DECODER_DRIVER
+ pr_info("%s:%d calling MMU_StreamMalloc", __func__, __LINE__);
+#endif
+ ret = mmu_stream_alloc(dec_str_ctx->mmu_str_handle, MMU_HEAP_STREAM_BUFFERS,
+ mem_heap_id,
+ (enum sys_emem_attrib)(mem_attribs | SYS_MEMATTRIB_CPU_WRITE),
+ CONTEXT_BUFF_SIZE,
+ DEV_MMU_PAGE_ALIGNMENT,
+ &dec_str_ctx->pvdec_fw_ctx_buf);
+ VDEC_ASSERT(ret == IMG_SUCCESS);
+ if (ret != IMG_SUCCESS)
+ goto error;
+
+ /*
+ * Clear the context data in preparation for
+ * first time use by the firmware.
+ */
+ memset(dec_str_ctx->pvdec_fw_ctx_buf.cpu_virt, 0, dec_str_ctx->pvdec_fw_ctx_buf.buf_size);
+
+ /*
+ * Create enough device resources to hold last context on
+ * front and back-end for stream.
+ */
+ dec_str_ctx->num_dec_res =
+ dec_str_ctx->decctx->dev_cfg->num_slots_per_pipe *
+ dec_str_ctx->decctx->num_pipes;
+ for (i = 0; i < dec_str_ctx->num_dec_res + 2; i++) {
+ ret = decoder_stream_decode_resource_create(dec_str_ctx);
+ VDEC_ASSERT(ret == IMG_SUCCESS);
+ if (ret != IMG_SUCCESS)
+ goto error;
+ }
+
+ dec_str_ctx->str_processed_cb = (strunit_processed_cb)decoder_cb;
+
+ dec_str_ctx->core_query_cb = (core_gen_cb)query_cb;
+
+ lst_add(&dec_context->str_list, dec_str_ctx);
+
+ *dec_str_ctx_arg = (void *)dec_str_ctx;
+ *mmu_str_handle = dec_str_ctx->mmu_str_handle;
+
+ return IMG_SUCCESS;
+
+ /* Roll back in case of errors. */
+error:
+ decoder_stream_destroy((void *)dec_str_ctx, FALSE);
+
+ return ret;
+}
+
+/*
+ * @Function decoder_get_decoded_pict
+ */
+static struct dec_decoded_pict *decoder_get_decoded_pict(unsigned int transaction_id,
+ struct dq_linkage_t *dq_list)
+{
+ struct dec_decoded_pict *decoded_pict;
+ void *item = NULL;
+
+ VDEC_ASSERT(dq_list);
+
+ decoded_pict = dq_first(dq_list);
+ while (decoded_pict) {
+ if (decoded_pict->transaction_id == transaction_id) {
+ item = decoded_pict;
+ break;
+ }
+
+ if (decoded_pict != dq_last(dq_list))
+ decoded_pict = dq_next(decoded_pict);
+ else
+ decoded_pict = NULL;
+ }
+
+ return item;
+}
+
+/*
+ * @Function decoder_get_decoded_pict_of_stream
+ */
+static struct dec_decoded_pict *decoder_get_decoded_pict_of_stream(unsigned int pict_id,
+ struct dq_linkage_t *dq_list)
+{
+ struct dec_decoded_pict *decoded_pict;
+ void *item = NULL;
+
+ VDEC_ASSERT(dq_list);
+
+ decoded_pict = dq_first(dq_list);
+ while (decoded_pict) {
+ if (GET_STREAM_PICTURE_ID(decoded_pict->transaction_id) == pict_id) {
+ item = decoded_pict;
+ break;
+ }
+
+ if (decoded_pict != dq_last(dq_list))
+ decoded_pict = dq_next(decoded_pict);
+ else
+ decoded_pict = NULL;
+ }
+ return item;
+}
+
+/*
+ * @Function decoder_get_next_decpict_contiguous
+ */
+static struct
+dec_decoded_pict *decoder_get_next_decpict_contiguous(struct dec_decoded_pict *decoded_pict,
+ unsigned int next_dec_pict_id,
+ struct dq_linkage_t *str_decoded_pict_list)
+{
+ struct dec_decoded_pict *next_dec_pict = NULL;
+ struct dec_decoded_pict *result_dec_pict = NULL;
+
+ VDEC_ASSERT(str_decoded_pict_list);
+ if (!str_decoded_pict_list) {
+ pr_err("Invalid parameter");
+ return NULL;
+ }
+
+ if (decoded_pict) {
+ if (decoded_pict != dq_last(str_decoded_pict_list)) {
+ next_dec_pict = dq_next(decoded_pict);
+ if (!next_dec_pict) {
+ VDEC_ASSERT(0);
+ return NULL;
+ }
+
+ if (next_dec_pict->pict) {
+ /*
+ * If we have no holes in the decoded list
+ * (i.e. next decoded picture is next in
+ * bitstream decode order).
+ */
+ if (HAS_X_REACHED_Y(next_dec_pict_id, next_dec_pict->pict->pict_id,
+ 1 << FWIF_NUMBITS_STREAM_PICTURE_ID,
+ unsigned int)) {
+ result_dec_pict = next_dec_pict;
+ }
+ }
+ }
+ }
+
+ return result_dec_pict;
+}
+
+/*
+ * @Function decoder_next_picture
+ * @Description
+ * Returns the next unprocessed picture or NULL if the next picture is not
+ * next in bitstream decode order or there are no more decoded pictures in
+ * the list.
+ * @Input cur_decoded_pict : Pointer to current decoded picture.
+ * @Input next_dec_pict_d : Picture ID of next picture in decode order.
+ * @Input str_decodec_pict_list : Pointer to decoded picture list.
+ * @Return struct dec_decoded_pict * : Pointer to next decoded picture to
+ * process.
+ */
+static struct dec_decoded_pict *decoder_next_picture(struct dec_decoded_pict *cur_decoded_pict,
+ unsigned int next_dec_pict_d,
+ struct dq_linkage_t *str_decodec_pict_list)
+{
+ struct dec_decoded_pict *ret = NULL;
+
+ VDEC_ASSERT(str_decodec_pict_list);
+ if (!str_decodec_pict_list)
+ return NULL;
+
+ if (!cur_decoded_pict)
+ cur_decoded_pict = dq_first(str_decodec_pict_list);
+
+ if (cur_decoded_pict && !cur_decoded_pict->process_failed) {
+ /* Search for picture ID greater than picture in list */
+ do {
+ if (!cur_decoded_pict->processed) {
+ /*
+ * Return the current one because it has not
+ * been processed
+ */
+ ret = cur_decoded_pict;
+ break;
+ }
+ /*
+ * Obtain a pointer to the next picture in bitstream
+ * decode order.
+ */
+ cur_decoded_pict = decoder_get_next_decpict_contiguous
+ (cur_decoded_pict,
+ next_dec_pict_d,
+ str_decodec_pict_list);
+ } while (cur_decoded_pict &&
+ !cur_decoded_pict->process_failed);
+ }
+ return ret;
+}
+
+/*
+ * @Function decoder_picture_display
+ */
+static int decoder_picture_display(struct dec_str_ctx *dec_str_ctx,
+ unsigned int pict_id, unsigned char last)
+{
+ struct vdecdd_picture *picture;
+ int ret;
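+ /* Running count of displayed pictures, used only for debug logging */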
+ static unsigned int display_num;
+
+ VDEC_ASSERT(dec_str_ctx);
+
+ ret = idgen_gethandle(dec_str_ctx->pict_idgen, pict_id, (void **)&picture);
+ if (ret == IMG_SUCCESS) {
+ struct vdecdd_ddbuf_mapinfo *pict_buf;
+
+ /* validate pointers */
+ if (!picture || !picture->dec_pict_info) {
+ VDEC_ASSERT(0);
+ return -EIO;
+ }
+
+ pict_buf = picture->disp_pict_buf.pict_buf;
+ VDEC_ASSERT(pict_buf);
+
+ /*
+ * Indicate whether there are more pictures
+ * coming for display.
+ */
+ picture->dec_pict_info->last_in_seq = last;
+
+ /* Set decode order id */
+ picture->dec_pict_info->decode_id = pict_id;
+
+ /* Return the picture to the client for display */
+ dec_str_ctx->dec_str_st.total_pict_displayed++;
+ resource_item_use(&pict_buf->ddbuf_info.ref_count);
+ display_num++;
+#ifdef DEBUG_DECODER_DRIVER
+ pr_info("[USERSID=0x%08X] DISPLAY(%d): PIC_ID[%d]",
+ dec_str_ctx->config.user_str_id, display_num, pict_id);
+#endif
+
+ VDEC_ASSERT(dec_str_ctx->decctx);
+ ret = dec_str_ctx->str_processed_cb(dec_str_ctx->usr_int_data,
+ VXD_CB_PICT_DISPLAY,
+ picture);
+
+ VDEC_ASSERT(ret == IMG_SUCCESS);
+ if (ret != IMG_SUCCESS)
+ return ret;
+
+ /*
+ * All handles will be freed after actually
+ * displaying the picture.
+ * Reset them to NULL here to avoid any confusion.
+ */
+ memset(&picture->dec_pict_sup_data, 0, sizeof(picture->dec_pict_sup_data));
+ } else {
+ pr_warn("[USERSID=0x%08X] ERROR: DISPLAY PICTURE HAS AN EXPIRED ID",
+ dec_str_ctx->config.user_str_id);
+ VDEC_ASSERT(ret == IMG_SUCCESS);
+ }
+
+ return IMG_SUCCESS;
+}
+
+#ifdef ERROR_CONCEALMENT
+/*
+ * @Function decoder_get_pict_processing_info
+ */
+static unsigned char decoder_get_pict_processing_info(struct dec_core_ctx *dec_corectx,
+ struct dec_str_ctx *dec_strctx,
+ struct bspp_pict_hdr_info *pict_hdr_info,
+ struct dec_decoded_pict *decoded_pict,
+ struct dec_decpict *dec_pict,
+ unsigned int *pict_last_mb)
+{
+ int ret = IMG_SUCCESS;
+ unsigned char pipe_minus1;
+ struct hwctrl_state last_state;
+ unsigned int width_in_mb;
+ unsigned int height_in_mb;
+ unsigned int i;
+
+ memset(&last_state, 0, sizeof(last_state));
+
+ VDEC_ASSERT(pict_hdr_info);
+ width_in_mb = (pict_hdr_info->coded_frame_size.width +
+ (VDEC_MB_DIMENSION - 1)) / VDEC_MB_DIMENSION;
+ height_in_mb = (pict_hdr_info->coded_frame_size.height +
+ (VDEC_MB_DIMENSION - 1)) / VDEC_MB_DIMENSION;
+
+ VDEC_ASSERT(pict_last_mb);
+ *pict_last_mb = width_in_mb * height_in_mb;
+ VDEC_ASSERT(decoded_pict);
+
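+ /*
+ * If a DWR or an MMU fault hit either field of this picture, the
+ * firmware picture attributes are incomplete, so reconstruct the
+ * dropped and recovered macroblock counts from the cached core state.
+ */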
+ if (decoded_pict->first_fld_fwmsg->pict_attrs.pict_attrs.dwrfired ||
+ decoded_pict->second_fld_fwmsg->pict_attrs.pict_attrs.dwrfired ||
+ decoded_pict->first_fld_fwmsg->pict_attrs.pict_attrs.mmufault ||
+ decoded_pict->second_fld_fwmsg->pict_attrs.pict_attrs.mmufault) {
+ struct dec_pict_attrs *pict_attrs = &decoded_pict->first_fld_fwmsg->pict_attrs;
+ unsigned char be_found = FALSE;
+ unsigned int mbs_dropped = 0;
+ unsigned int mbs_recovered = 0;
+ unsigned int no_be_wdt = 0;
+ unsigned int max_y = 0;
+ unsigned int row_drop = 0;
+
+ VDEC_ASSERT(dec_corectx);
+ /*
+ * Obtain the last available core status, cached before the
+ * clocks were switched off.
+ */
+ ret = hwctrl_getcore_cached_status(dec_corectx->hw_ctx, &last_state);
+ if (ret != IMG_SUCCESS)
+ return FALSE;
+
+ /* Try to determine pipe where the last picture was decoded on (BE) */
+ for (pipe_minus1 = 0; pipe_minus1 < VDEC_MAX_PIXEL_PIPES; pipe_minus1++) {
+ for (i = VDECFW_CHECKPOINT_BE_END; i >= VDECFW_CHECKPOINT_BE_START; i--) {
+ struct vxd_pipestate *pipe_state =
+ &last_state.core_state.fw_state.pipe_state[pipe_minus1];
+
+ if (!pipe_state->is_pipe_present)
+ continue;
+
+ if (pipe_state->acheck_point[i] == decoded_pict->transaction_id) {
+ row_drop += width_in_mb - pipe_state->be_mb.x;
+ if (pipe_state->be_mb.y > max_y)
+ max_y = pipe_state->be_mb.y;
+
+ if (pipe_state->be_mbs_dropped > mbs_dropped)
+ mbs_dropped = pipe_state->be_mbs_dropped;
+
+ if (pipe_state->be_mbs_recovered > mbs_recovered)
+ mbs_recovered = pipe_state->be_mbs_recovered;
+
+ no_be_wdt += pipe_state->be_errored_slices;
+ be_found = TRUE;
+ }
+ }
+ if (be_found)
+ /* No need to check FE as we already have an info from BE */
+ continue;
+
+ /* If not found, the picture has probably stalled on the FE */
+ for (i = VDECFW_CHECKPOINT_FE_END; i >= VDECFW_CHECKPOINT_FE_START; i--) {
+ struct vxd_pipestate *pipe_state =
+ &last_state.core_state.fw_state.pipe_state[pipe_minus1];
+
+ if (!pipe_state->is_pipe_present)
+ continue;
+
+ if (pipe_state->acheck_point[i] == decoded_pict->transaction_id) {
+ /* Mark all MBs as dropped */
+ pict_attrs->mbs_dropped = *pict_last_mb;
+ pict_attrs->mbs_recovered = 0;
+ return TRUE;
+ }
+ }
+ }
+
+ if (be_found) {
+ /* Calculate last macroblock number processed on BE */
+ unsigned int num_mb_processed = (max_y * width_in_mb) - row_drop;
+
+ /* Sanity check, as HW may signal MbYX position
+ * beyond picture for corrupted streams
+ */
+ if (num_mb_processed > (*pict_last_mb))
+ num_mb_processed = (*pict_last_mb); /* trim */
+
+ if (((*pict_last_mb) - num_mb_processed) > mbs_dropped)
+ mbs_dropped = (*pict_last_mb) - num_mb_processed;
+
+ pict_attrs->mbs_dropped = mbs_dropped;
+ pict_attrs->mbs_recovered = num_mb_processed;
+ pict_attrs->no_be_wdt = no_be_wdt;
+ return TRUE;
+ }
+ return FALSE;
+ }
+ /* Picture was decoded without DWR, so we have already the required info */
+ return TRUE;
+}
+#endif
+
+/*
+ * @Function decoder_picture_release
+ */
+static int decoder_picture_release(struct dec_str_ctx *dec_str_ctx,
+ unsigned int pict_id,
+ unsigned char displayed,
+ unsigned char merged)
+{
+ struct vdecdd_picture *picture;
+ int ret;
+
+ /* validate input arguments */
+ if (!dec_str_ctx) {
+ VDEC_ASSERT(0);
+ return -EINVAL;
+ }
+
+ ret = idgen_gethandle(dec_str_ctx->pict_idgen, pict_id, (void **)&picture);
+ if (ret == IMG_SUCCESS) {
+ if (!picture || !picture->dec_pict_info) {
+ VDEC_ASSERT(0);
+ return -EINVAL;
+ }
+
+ /* Set decode order id */
+ picture->dec_pict_info->decode_id = pict_id;
+
+ VDEC_ASSERT(dec_str_ctx->decctx);
+
+ pr_debug("Decoder picture released pict_id = %d\n", pict_id);
+ ret = dec_str_ctx->str_processed_cb(dec_str_ctx->usr_int_data,
+ VXD_CB_PICT_RELEASE,
+ picture);
+ VDEC_ASSERT(ret == IMG_SUCCESS);
+ if (ret != IMG_SUCCESS)
+ return ret;
+
+ /*
+ * All handles will be freed after actually displaying
+ * the picture. Reset them to NULL here to avoid any
+ * confusion.
+ */
+ memset(&picture->dec_pict_sup_data, 0, sizeof(picture->dec_pict_sup_data));
+ } else {
+ pr_err("[USERSID=0x%08X] ERROR: RELEASE PICTURE HAS AN EXPIRED ID",
+ dec_str_ctx->config.user_str_id);
+ VDEC_ASSERT(ret == IMG_SUCCESS);
+ }
+
+ return IMG_SUCCESS;
+}
+
+/*
+ * @Function decoder_stream_flush_process_dpb_h264
+ */
+static int
+decoder_stream_flush_process_dpb_h264(struct dec_str_ctx *dec_str_ctx,
+ struct dec_decoded_pict *decoded_pict,
+ unsigned char discard_refs)
+{
+ int ret;
+
+ struct h264fw_context_data *ctx_data =
+ (struct h264fw_context_data *)dec_str_ctx->last_be_pict_dec_res->fw_ctx_buf.cpu_virt;
+ unsigned char found = TRUE;
+ unsigned int i;
+ int min_cnt;
+ int min_cnt_idx;
+ unsigned int num_display_pics = 0;
+ unsigned int num_pics_displayed = 0;
+ struct dec_decoded_pict *display_pict = NULL;
+ unsigned int last_disp_pict_tid;
+ unsigned int pict_id;
+
+ /* Determine how many display pictures reside in the DPB */
+ if (ctx_data->dpb_size > H264FW_MAX_DPB_SIZE || ctx_data->dpb_size <= 0) {
+#ifdef DEBUG_DECODER_DRIVER
+ pr_info("[USERSID=0x%08X] Incorrect DPB size: %d",
+ dec_str_ctx->config.user_str_id, ctx_data->dpb_size);
+#endif
+ ctx_data->dpb_size = H264FW_MAX_DPB_SIZE;
+ }
+ for (i = 0; i < ctx_data->dpb_size; i++) {
+ if (ctx_data->dpb[i].transaction_id)
+ if (ctx_data->dpb[i].needed_for_output)
+ num_display_pics++;
+ }
+
+ last_disp_pict_tid = ctx_data->last_displayed_pic_data[0].transaction_id;
+ /* Check for picture stuck outside the dpb */
+ if (last_disp_pict_tid) {
+ VDEC_ASSERT(last_disp_pict_tid != 0xffffffff);
+
+ display_pict = decoder_get_decoded_pict(last_disp_pict_tid,
+ &dec_str_ctx->str_decd_pict_list);
+
+ if (display_pict && display_pict->pict &&
+ display_pict->pict->dec_pict_info) {
+ if (FLAG_IS_SET(ctx_data->prev_display_flags,
+ VDECFW_BUFFLAG_DISPLAY_FIELD_CODED)) {
+ if (!FLAG_IS_SET(ctx_data->prev_display_flags,
+ VDECFW_BUFFLAG_DISPLAY_SINGLE_FIELD))
+ display_pict->pict->dec_pict_info->buf_type =
+ IMG_BUFFERTYPE_PAIR;
+ else
+ display_pict->pict->dec_pict_info->buf_type =
+ FLAG_IS_SET
+ (ctx_data->prev_display_flags,
+ VDECFW_BUFFLAG_DISPLAY_BOTTOM_FIELD) ?
+ IMG_BUFFERTYPE_FIELD_BOTTOM :
+ IMG_BUFFERTYPE_FIELD_TOP;
+ } else {
+ display_pict->pict->dec_pict_info->buf_type =
+ IMG_BUFFERTYPE_FRAME;
+ }
+ } else {
+ VDEC_ASSERT(display_pict);
+ VDEC_ASSERT(display_pict && display_pict->pict);
+ VDEC_ASSERT(display_pict && display_pict->pict &&
+ display_pict->pict->dec_pict_info);
+ }
+
+ if (display_pict && !display_pict->displayed) {
+#ifdef DEBUG_DECODER_DRIVER
+ pr_info("[USERSID=0x%08X] [TID=0x%08X] DISPLAY",
+ dec_str_ctx->config.user_str_id,
+ last_disp_pict_tid);
+#endif
+ display_pict->displayed = TRUE;
+ ret = decoder_picture_display
+ (dec_str_ctx, GET_STREAM_PICTURE_ID(last_disp_pict_tid),
+ TRUE);
+ VDEC_ASSERT(ret == IMG_SUCCESS);
+ if (ret != IMG_SUCCESS)
+ return ret;
+ }
+ }
+
+ while (found) {
+ min_cnt = 0x7fffffff;
+ min_cnt_idx = -1;
+ found = FALSE;
+
+ /* Loop over the DPB to find the first in order */
+ for (i = 0; i < ctx_data->dpb_size; i++) {
+ if (ctx_data->dpb[i].transaction_id &&
+ (ctx_data->dpb[i].needed_for_output ||
+ discard_refs)) {
+ if (ctx_data->dpb[i].top_field_order_count <
+ min_cnt) {
+ min_cnt =
+ ctx_data->dpb[i].top_field_order_count;
+ min_cnt_idx = i;
+ found = TRUE;
+ }
+ }
+ }
+
+ if (found) {
+ unsigned int umin_cnt_tid = ctx_data->dpb[min_cnt_idx].transaction_id;
+
+ if (ctx_data->dpb[min_cnt_idx].needed_for_output) {
+ VDEC_ASSERT(umin_cnt_tid != 0xffffffff);
+ display_pict =
+ decoder_get_decoded_pict(umin_cnt_tid,
+ &dec_str_ctx->str_decd_pict_list);
+
+ if ((display_pict && display_pict->pict &&
+ display_pict->pict->dec_pict_info) &&
+ FLAG_IS_SET(ctx_data->dpb[min_cnt_idx].display_flags,
+ VDECFW_BUFFLAG_DISPLAY_FIELD_CODED)) {
+ if (!FLAG_IS_SET(ctx_data->dpb[min_cnt_idx].display_flags,
+ VDECFW_BUFFLAG_DISPLAY_SINGLE_FIELD))
+ display_pict->pict->dec_pict_info->buf_type =
+ IMG_BUFFERTYPE_PAIR;
+ else
+ display_pict->pict->dec_pict_info->buf_type =
+ FLAG_IS_SET
+ (ctx_data->dpb
+ [min_cnt_idx].display_flags,
+ VDECFW_BUFFLAG_DISPLAY_BOTTOM_FIELD)
+ ?
+ IMG_BUFFERTYPE_FIELD_BOTTOM :
+ IMG_BUFFERTYPE_FIELD_TOP;
+ display_pict->pict->dec_pict_info->view_id =
+ ctx_data->dpb[min_cnt_idx].view_id;
+ } else if ((display_pict && display_pict->pict &&
+ display_pict->pict->dec_pict_info) &&
+ (!FLAG_IS_SET(ctx_data->dpb[min_cnt_idx].display_flags,
+ VDECFW_BUFFLAG_DISPLAY_FIELD_CODED))){
+ display_pict->pict->dec_pict_info->buf_type =
+ IMG_BUFFERTYPE_FRAME;
+ display_pict->pict->dec_pict_info->view_id =
+ ctx_data->dpb[min_cnt_idx].view_id;
+ } else {
+ VDEC_ASSERT(display_pict);
+ VDEC_ASSERT(display_pict && display_pict->pict);
+ VDEC_ASSERT(display_pict && display_pict->pict &&
+ display_pict->pict->dec_pict_info);
+ }
+
+ if (display_pict && !display_pict->displayed) {
+#ifdef DEBUG_DECODER_DRIVER
+ pr_info("[USERSID=0x%08X] [TID=0x%08X] DISPLAY",
+ dec_str_ctx->config.user_str_id,
+ umin_cnt_tid);
+#endif
+ display_pict->displayed = TRUE;
+ num_pics_displayed++;
+ ret = decoder_picture_display
+ (dec_str_ctx,
+ GET_STREAM_PICTURE_ID(umin_cnt_tid),
+ num_pics_displayed == num_display_pics ?
+ TRUE : FALSE);
+ VDEC_ASSERT(ret == IMG_SUCCESS);
+ if (ret != IMG_SUCCESS)
+ return ret;
+ }
+ ctx_data->dpb[min_cnt_idx].needed_for_output = FALSE;
+ }
+
+ if (discard_refs) {
+ decoded_pict =
+ decoder_get_decoded_pict(umin_cnt_tid,
+ &dec_str_ctx->str_decd_pict_list);
+ if (decoded_pict) {
+ /* Signal releasing this picture to upper layers. */
+ pict_id =
+ GET_STREAM_PICTURE_ID(decoded_pict->transaction_id);
+ decoder_picture_release(dec_str_ctx,
+ pict_id,
+ decoded_pict->displayed,
+ decoded_pict->merged);
+ /* Destroy the decoded picture. */
+ ret = decoder_decoded_picture_destroy(dec_str_ctx,
+ decoded_pict, FALSE);
+ VDEC_ASSERT(ret == IMG_SUCCESS);
+ if (ret != IMG_SUCCESS)
+ return ret;
+ }
+ ctx_data->dpb[min_cnt_idx].transaction_id = 0;
+ }
+ }
+ }
+
+ VDEC_ASSERT(num_pics_displayed == num_display_pics);
+
+ return IMG_SUCCESS;
+}
+
+#ifdef HAS_HEVC
+/*
+ * decoder_StreamFlushProcessDPB_HEVC
+ */
+static int decoder_stream_flush_process_dpb_hevc(struct dec_str_ctx *decstr_ctx,
+ struct dec_decoded_pict *decoded_pict,
+ unsigned char discard_refs)
+{
+ int result;
+ struct hevcfw_ctx_data *ctx =
+ (struct hevcfw_ctx_data *)decstr_ctx->last_be_pict_dec_res->fw_ctx_buf.cpu_virt;
+ struct hevcfw_decoded_picture_buffer *dpb;
+ unsigned char found = TRUE;
+ unsigned char idx;
+ int min_poc_val;
+ signed char dpb_idx;
+ unsigned char num_display_pics = 0;
+ unsigned char num_pics_displayed = 0;
+ struct dec_decoded_pict *display_pict = NULL;
+
+ /*
+ * Update the fw context for analysing the dpb in order
+ * to display or release any outstanding picture
+ */
+ dpb = &ctx->dpb;
+
+ /* Determine how many display pictures reside in the DPB. */
+ for (idx = 0; idx < HEVCFW_MAX_DPB_SIZE; ++idx) {
+ struct hevcfw_picture_in_dpb *dpb_pic = &dpb->pictures[idx];
+
+ if (dpb_pic->valid && dpb_pic->needed_for_output)
+ ++num_display_pics;
+ }
+
+ while (found) {
+ struct hevcfw_picture_in_dpb *dpb_pic;
+
+ min_poc_val = 0x7fffffff;
+ dpb_idx = HEVCFW_DPB_IDX_INVALID;
+ found = FALSE;
+
+ /* Loop over the DPB to find the first in order */
+ for (idx = 0; idx < HEVCFW_MAX_DPB_SIZE; ++idx) {
+ dpb_pic = &dpb->pictures[idx];
+ if (dpb_pic->valid && (dpb_pic->needed_for_output || discard_refs)) {
+ if (dpb_pic->picture.pic_order_cnt_val < min_poc_val) {
+ min_poc_val = dpb_pic->picture.pic_order_cnt_val;
+ dpb_idx = idx;
+ found = TRUE;
+ }
+ }
+ }
+
+ if (!found)
+ break;
+
+ dpb_pic = &dpb->pictures[dpb_idx];
+
+ if (dpb_pic->needed_for_output) {
+ unsigned int str_pic_id = GET_STREAM_PICTURE_ID
+ (dpb_pic->picture.transaction_id);
+
+ VDEC_ASSERT(dpb_pic->picture.transaction_id != 0xffffffff);
+ display_pict = decoder_get_decoded_pict(dpb_pic->picture.transaction_id,
+ &decstr_ctx->str_decd_pict_list);
+
+ if (display_pict && display_pict->pict &&
+ display_pict->pict->dec_pict_info) {
+ display_pict->pict->dec_pict_info->buf_type = IMG_BUFFERTYPE_FRAME;
+ } else {
+ VDEC_ASSERT(display_pict);
+ VDEC_ASSERT(display_pict && display_pict->pict);
+ VDEC_ASSERT(display_pict && display_pict->pict &&
+ display_pict->pict->dec_pict_info);
+
+ dpb_pic->valid = FALSE;
+ continue;
+ }
+
+ if (!display_pict->displayed) {
+ display_pict->displayed = TRUE;
+ ++num_pics_displayed;
+ result = decoder_picture_display(decstr_ctx, str_pic_id,
+ num_pics_displayed ==
+ num_display_pics);
+ VDEC_ASSERT(result == IMG_SUCCESS);
+ if (result != IMG_SUCCESS)
+ return result;
+ }
+ dpb_pic->needed_for_output = FALSE;
+ }
+
+ if (discard_refs) {
+ decoded_pict = decoder_get_decoded_pict(dpb_pic->picture.transaction_id,
+ &decstr_ctx->str_decd_pict_list);
+
+ if (decoded_pict) {
+ /* Signal releasing this picture to upper layers. */
+ decoder_picture_release(decstr_ctx,
+ GET_STREAM_PICTURE_ID
+ (decoded_pict->transaction_id),
+ decoded_pict->displayed,
+ decoded_pict->merged);
+ /* Destroy the decoded picture. */
+ result = decoder_decoded_picture_destroy(decstr_ctx, decoded_pict,
+ FALSE);
+ VDEC_ASSERT(result == IMG_SUCCESS);
+ if (result != IMG_SUCCESS)
+ return result;
+ }
+ dpb_pic->valid = FALSE;
+ }
+ }
+
+ VDEC_ASSERT(num_pics_displayed == num_display_pics);
+
+ return IMG_SUCCESS;
+}
+#endif
+
+/*
+ * @Function decoder_stream_flush_process_dpb
+ * @Description
+ * Process DPB fetched from firmware, display and release relevant pictures.
+ */
+static int decoder_stream_flush_process_dpb(struct dec_str_ctx *dec_str_ctx,
+ struct dec_decoded_pict *decoded_pict,
+ unsigned char discard_refs)
+{
+ int ret = 0;
+ /* Get oldest reference to display. */
+ decoded_pict = dq_last(&dec_str_ctx->str_decd_pict_list);
+ if (decoded_pict) {
+ switch (dec_str_ctx->config.vid_std) {
+ case VDEC_STD_H264:
+ ret = decoder_stream_flush_process_dpb_h264(dec_str_ctx, decoded_pict,
+ discard_refs);
+
+ break;
+#ifdef HAS_HEVC
+	case VDEC_STD_HEVC:
+		ret = decoder_stream_flush_process_dpb_hevc(dec_str_ctx,
+							    decoded_pict, discard_refs);
+		break;
+#endif
+
+ default:
+ break;
+ }
+ }
+
+ return ret;
+}
+
+int decoder_stream_flush(void *dec_str_ctx_arg, unsigned char discard_refs)
+{
+ struct dec_str_ctx *dec_str_ctx;
+ struct dec_str_unit *dec_str_unit;
+ struct dec_decoded_pict *decoded_pict;
+ int ret = 0;
+
+ dec_str_ctx = decoder_stream_get_context(dec_str_ctx_arg);
+ VDEC_ASSERT(dec_str_ctx);
+ if (!dec_str_ctx) {
+ pr_err("Invalid decoder stream context handle!");
+ return IMG_ERROR_INVALID_PARAMETERS;
+ }
+
+ /*
+ * Since the stream should be stopped before flushing
+ * there should be no pictures in the stream list.
+ */
+ dec_str_unit = lst_first(&dec_str_ctx->pend_strunit_list);
+ while (dec_str_unit) {
+ VDEC_ASSERT(dec_str_unit->str_unit->str_unit_type !=
+ VDECDD_STRUNIT_PICTURE_START);
+ dec_str_unit = lst_next(dec_str_unit);
+ }
+
+ decoded_pict = dq_last(&dec_str_ctx->str_decd_pict_list);
+
+ ret = decoder_stream_flush_process_dpb(dec_str_ctx, decoded_pict,
+ discard_refs);
+ if (ret != IMG_SUCCESS)
+ return ret;
+
+ if (discard_refs) {
+ while (!dq_empty(&dec_str_ctx->str_decd_pict_list)) {
+ struct dec_decoded_pict *non_decoded_pict =
+ dq_first(&dec_str_ctx->str_decd_pict_list);
+
+ if (!non_decoded_pict) {
+ VDEC_ASSERT(0);
+ return -EINVAL;
+ }
+
+#ifdef DEBUG_DECODER_DRIVER
+ pr_info("[USERSID=0x%08X] Decoded picture list contains item ID:0x%08x when DPB is empty",
+ dec_str_ctx->config.user_str_id,
+ non_decoded_pict->transaction_id);
+#endif
+
+ /* release the buffers back to vxd_decoder */
+ decoder_picture_release(dec_str_ctx, GET_STREAM_PICTURE_ID
+ (non_decoded_pict->transaction_id), FALSE,
+ FALSE);
+
+ ret = decoder_decoded_picture_destroy(dec_str_ctx, non_decoded_pict, FALSE);
+ VDEC_ASSERT(ret == IMG_SUCCESS);
+ if (ret != IMG_SUCCESS)
+ return ret;
+ }
+ VDEC_ASSERT(dq_empty(&dec_str_ctx->str_decd_pict_list));
+
+ if (dec_str_ctx->last_be_pict_dec_res)
+ /*
+ * Clear the firmware context so that reference
+ * pictures are no longer referred to.
+ */
+ memset(dec_str_ctx->last_be_pict_dec_res->fw_ctx_buf.cpu_virt, 0,
+ dec_str_ctx->last_be_pict_dec_res->fw_ctx_buf.buf_size);
+ }
+
+ pr_debug("Decoder stream flushed successfully\n");
+ return IMG_SUCCESS;
+}
+
+/*
+ * @Function decoder_stream_prepare_ctx
+ */
+int decoder_stream_prepare_ctx(void *dec_str_ctx_arg, unsigned char flush_dpb)
+{
+ struct dec_str_ctx *dec_str_ctx =
+ decoder_stream_get_context(dec_str_ctx_arg);
+ int ret;
+
+ VDEC_ASSERT(dec_str_ctx);
+ if (!dec_str_ctx)
+ return IMG_ERROR_INVALID_PARAMETERS;
+
+#ifdef DEBUG_DECODER_DRIVER
+ pr_info("[USERSID=0x%08X] [TID=0x%08X] Preparing stream context after seek",
+ dec_str_ctx->config.user_str_id,
+ dec_str_ctx->last_fe_transaction_id);
+#endif
+
+ if (flush_dpb) {
+ ret = decoder_stream_flush(dec_str_ctx, TRUE);
+ VDEC_ASSERT(ret == IMG_SUCCESS);
+ if (ret != IMG_SUCCESS)
+ return ret;
+ }
+
+ /* Reset front-end temporary pointers */
+ if (dec_str_ctx->prev_fe_pict_dec_res) {
+ resource_item_return(&dec_str_ctx->prev_fe_pict_dec_res->ref_cnt);
+ dec_str_ctx->prev_fe_pict_dec_res = NULL;
+ }
+ if (dec_str_ctx->cur_fe_pict_dec_res) {
+ resource_item_return(&dec_str_ctx->cur_fe_pict_dec_res->ref_cnt);
+ dec_str_ctx->cur_fe_pict_dec_res = NULL;
+ }
+
+ return IMG_SUCCESS;
+}
+
+/*
+ * @Function decoder_get_load
+ */
+int decoder_get_load(void *dec_str_ctx_arg, unsigned int *avail_slots)
+{
+ struct dec_str_ctx *dec_str_ctx =
+ decoder_stream_get_context(dec_str_ctx_arg);
+ struct dec_core_ctx *dec_core_ctx_local = NULL;
+
+ /* Check input parameters. */
+ VDEC_ASSERT(dec_str_ctx);
+ if (!dec_str_ctx || !avail_slots) {
+ pr_err("Invalid parameters!");
+ return IMG_ERROR_INVALID_PARAMETERS;
+ }
+
+ dec_core_ctx_local = decoder_str_ctx_to_core_ctx(dec_str_ctx);
+ if (!dec_core_ctx_local) {
+ VDEC_ASSERT(0);
+ return -EIO;
+ }
+
+ if (dec_core_ctx_local->busy)
+ *avail_slots = 0;
+ else
+ *avail_slots = dec_str_ctx->avail_slots;
+
+ return IMG_SUCCESS;
+}
+
+static int decoder_check_ref_errors(struct dec_str_ctx *dec_str_ctx,
+ struct vdecfw_buffer_control *buf_ctrl,
+ struct vdecdd_picture *picture)
+{
+ struct dec_decoded_pict *ref_pict;
+ unsigned int i;
+
+ if (!dec_str_ctx) {
+ VDEC_ASSERT(0);
+ return IMG_ERROR_INVALID_PARAMETERS;
+ }
+
+ if (!buf_ctrl || !picture) {
+ pr_err("[USERSID=0x%08X] Invalid parameters for checking reference lists.",
+ dec_str_ctx->config.user_str_id);
+ VDEC_ASSERT(0);
+ return IMG_ERROR_INVALID_PARAMETERS;
+ }
+
+ for (i = 0; i < VDECFW_MAX_NUM_PICTURES && buf_ctrl->ref_list[i];
+ i++) {
+ ref_pict = decoder_get_decoded_pict_of_stream
+ (GET_STREAM_PICTURE_ID(buf_ctrl->ref_list[i]),
+ &dec_str_ctx->str_decd_pict_list);
+ if (ref_pict && ref_pict->pict && ref_pict->pict->dec_pict_info &&
+ ref_pict->pict->dec_pict_info->err_flags) {
+ picture->dec_pict_info->err_flags |=
+ VDEC_ERROR_CORRUPTED_REFERENCE;
+ pr_warn("Picture decoded using corrupted reference: 0x%08X 0x%08X",
+ ref_pict->transaction_id,
+ ref_pict->pict->dec_pict_info->err_flags);
+ }
+ }
+
+ return IMG_SUCCESS;
+}
+
+static void decoder_clean_bitstr_segments(struct lst_t *decpict_seglist)
+{
+ struct dec_decpict_seg *dec_pict_seg;
+
+	while ((dec_pict_seg = lst_removehead(decpict_seglist)) != NULL) {
+ if (dec_pict_seg->internal_seg) {
+ VDEC_ASSERT(dec_pict_seg->bstr_seg);
+ kfree(dec_pict_seg->bstr_seg);
+ dec_pict_seg->bstr_seg = NULL;
+ }
+ kfree(dec_pict_seg);
+ }
+}
+
+static int decoder_wrap_bitstr_segments(struct lst_t *bitstr_seglist,
+ struct lst_t *decpict_seglist,
+ unsigned int user_str_id)
+{
+ /* Required for attaching segments to the decode picture */
+ struct bspp_bitstr_seg *bit_str_seg;
+ struct dec_decpict_seg *dec_pict_seg;
+
+ VDEC_ASSERT(bitstr_seglist);
+ VDEC_ASSERT(decpict_seglist);
+
+ /* Add the segments to the Decode Picture */
+ bit_str_seg = lst_first(bitstr_seglist);
+ while (bit_str_seg) {
+ dec_pict_seg = kzalloc(sizeof(*dec_pict_seg), GFP_KERNEL);
+ VDEC_ASSERT(dec_pict_seg);
+ if (!dec_pict_seg)
+ return IMG_ERROR_OUT_OF_MEMORY;
+
+ dec_pict_seg->bstr_seg = bit_str_seg;
+ dec_pict_seg->internal_seg = FALSE;
+ lst_add(decpict_seglist, dec_pict_seg);
+
+ bit_str_seg = lst_next(bit_str_seg);
+ }
+ return IMG_SUCCESS;
+}
+
+/*
+ * @Function decoder_picture_decode
+ */
+static int decoder_picture_decode(struct dec_str_ctx *dec_str_ctx,
+ struct vdecdd_str_unit *str_unit,
+ struct dec_decpict **dec_pict_ptr)
+{
+ struct vdecdd_picture *picture;
+ struct dec_core_ctx *dec_core_ctx;
+ struct dec_decpict *dec_pict;
+ int ret = IMG_SUCCESS;
+ struct decoder_regsoffsets regs_offsets;
+
+ /* Validate input arguments */
+ if (!dec_str_ctx || !str_unit || !str_unit->pict_hdr_info || !dec_pict_ptr) {
+ VDEC_ASSERT(0);
+ return -EIO;
+ }
+
+ picture = (struct vdecdd_picture *)str_unit->dd_pict_data;
+ dec_core_ctx = decoder_str_ctx_to_core_ctx(dec_str_ctx);
+
+ if (!picture || !dec_core_ctx) {
+ VDEC_ASSERT(0);
+ return IMG_ERROR_INVALID_PARAMETERS;
+ }
+
+ /* Ensure this is a new picture */
+ VDEC_ASSERT(!dec_str_ctx->cur_pict);
+ VDEC_ASSERT(str_unit->str_unit_type == VDECDD_STRUNIT_PICTURE_START);
+
+ dec_core_ctx->cum_pics++;
+
+ /* Allocate a unique id to the picture */
+ ret = idgen_allocid(dec_str_ctx->pict_idgen, picture, &picture->pict_id);
+ VDEC_ASSERT(ret == IMG_SUCCESS);
+ if (ret != IMG_SUCCESS)
+ return ret;
+
+ /* Allocate the decoded picture information structure. */
+ picture->dec_pict_info = kzalloc(sizeof(*picture->dec_pict_info), GFP_KERNEL);
+ VDEC_ASSERT(picture->dec_pict_info);
+ if (!picture->dec_pict_info)
+ return IMG_ERROR_OUT_OF_MEMORY;
+
+ /* Extract decoded information from the stream unit */
+ picture->dec_pict_info->err_flags = str_unit->err_flags;
+ picture->dec_pict_info->first_fld_tag_container.pict_tag_param =
+ (unsigned long)(str_unit->str_unit_tag);
+ picture->dec_pict_info->op_config = picture->op_config;
+ picture->dec_pict_info->rend_info = picture->disp_pict_buf.rend_info;
+ picture->dec_pict_info->disp_info = str_unit->pict_hdr_info->disp_info;
+
+ /* Extract aux picture information from the stream unit */
+ picture->dec_pict_aux_info.seq_hdr_id =
+ str_unit->seq_hdr_info->sequ_hdr_id;
+ picture->dec_pict_aux_info.pps_id =
+ str_unit->pict_hdr_info->pict_aux_data.id;
+ picture->dec_pict_aux_info.second_pps_id =
+ str_unit->pict_hdr_info->second_pict_aux_data.id;
+
+ /* Create a new decoder picture container. */
+ dec_pict = kzalloc(sizeof(*dec_pict), GFP_KERNEL);
+ VDEC_ASSERT(dec_pict);
+ if (!dec_pict) {
+ ret = IMG_ERROR_OUT_OF_MEMORY;
+ goto error_dec_pict;
+ }
+
+ /* Attach decoder/picture context information. */
+ dec_pict->dec_str_ctx = dec_str_ctx;
+
+	/*
+	 * Construct the transaction ID.
+	 * This combines the stream and core numbers with the picture number
+	 * within the stream and a 4-bit value representing the picture
+	 * number within the core.
+	 */
+ dec_pict->transaction_id =
+ CREATE_TRANSACTION_ID(0, dec_str_ctx->km_str_id, dec_core_ctx->cum_pics,
+ picture->pict_id);
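+	/*
+	 * The stream picture number is recovered from this ID later via
+	 * GET_STREAM_PICTURE_ID() (see the decoded-picture handling below),
+	 * so the packing performed by CREATE_TRANSACTION_ID() must stay in
+	 * sync with those accessor macros.
+	 */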
+
+ /* Add picture to core decode list */
+ dec_str_ctx->dec_str_st.num_pict_decoding++;
+
+ /* Fake a FW message to process when decoded. */
+ dec_pict->first_fld_fwmsg = kzalloc(sizeof(*dec_pict->first_fld_fwmsg), GFP_KERNEL);
+ VDEC_ASSERT(dec_pict->first_fld_fwmsg);
+ if (!dec_pict->first_fld_fwmsg) {
+ ret = IMG_ERROR_OUT_OF_MEMORY;
+ goto error_fw_msg;
+ }
+
+ dec_pict->second_fld_fwmsg =
+ kzalloc(sizeof(*dec_pict->second_fld_fwmsg), GFP_KERNEL);
+ VDEC_ASSERT(dec_pict->second_fld_fwmsg);
+ if (!dec_pict->second_fld_fwmsg) {
+ ret = IMG_ERROR_OUT_OF_MEMORY;
+ goto error_fw_msg;
+ }
+
+ /* Add the segments to the Decode Picture */
+ ret = decoder_wrap_bitstr_segments(&str_unit->bstr_seg_list,
+ &dec_pict->dec_pict_seg_list,
+ dec_str_ctx->config.user_str_id);
+
+ VDEC_ASSERT(ret == IMG_SUCCESS);
+ if (ret != IMG_SUCCESS)
+ goto error_segments;
+
+	/*
+	 * Shuffle the current and previous decode resources and hold a
+	 * reference to the last context used on the FE.
+	 */
+ if (dec_str_ctx->prev_fe_pict_dec_res) {
+ /* Return previous last FW context. */
+ resource_item_return(&dec_str_ctx->prev_fe_pict_dec_res->ref_cnt);
+
+ if (resource_item_isavailable(&dec_str_ctx->prev_fe_pict_dec_res->ref_cnt)) {
+ resource_list_remove(&dec_str_ctx->dec_res_lst,
+ dec_str_ctx->prev_fe_pict_dec_res);
+
+ resource_list_add_img(&dec_str_ctx->dec_res_lst,
+ dec_str_ctx->prev_fe_pict_dec_res, 0,
+ &dec_str_ctx->prev_fe_pict_dec_res->ref_cnt);
+ }
+ }
+
+ dec_str_ctx->prev_fe_pict_dec_res = dec_str_ctx->cur_fe_pict_dec_res;
+ dec_pict->prev_pict_dec_res = dec_str_ctx->prev_fe_pict_dec_res;
+
+ /* Get a new stream decode resource bundle for current picture. */
+ dec_pict->cur_pict_dec_res = resource_list_get_avail(&dec_str_ctx->dec_res_lst);
+ VDEC_ASSERT(dec_pict->cur_pict_dec_res);
+ if (!dec_pict->cur_pict_dec_res) {
+ ret = IMG_ERROR_UNEXPECTED_STATE;
+ goto error_dec_res;
+ }
+
+ if (dec_str_ctx->config.vid_std == VDEC_STD_H264) {
+ /* Copy any SGM for current picture. */
+ if (str_unit->pict_hdr_info->pict_sgm_data.id !=
+ BSPP_INVALID) {
+ VDEC_ASSERT(str_unit->pict_hdr_info->pict_sgm_data.size <=
+ dec_pict->cur_pict_dec_res->h264_sgm_buf.buf_size);
+ /* Updated in translation_api */
+ memcpy(dec_pict->cur_pict_dec_res->h264_sgm_buf.cpu_virt,
+ str_unit->pict_hdr_info->pict_sgm_data.pic_data,
+ str_unit->pict_hdr_info->pict_sgm_data.size);
+ }
+ }
+
+ dec_pict->cur_pict_dec_res->transaction_id = dec_pict->transaction_id;
+ dec_str_ctx->cur_fe_pict_dec_res = dec_pict->cur_pict_dec_res;
+ resource_item_use(&dec_str_ctx->cur_fe_pict_dec_res->ref_cnt);
+
+ /* Get a new control buffer */
+ dec_pict->pict_ref_res =
+ resource_list_get_avail(&dec_str_ctx->ref_res_lst);
+ VDEC_ASSERT(dec_pict->pict_ref_res);
+ if (!dec_pict->pict_ref_res) {
+ ret = IMG_ERROR_UNEXPECTED_STATE;
+ goto error_ref_res;
+ }
+
+ VDEC_ASSERT(dec_str_ctx->decctx);
+ VDEC_ASSERT(dec_str_ctx->decctx->dev_cfg);
+
+ dec_pict->str_pvdec_fw_ctxbuf = &dec_str_ctx->pvdec_fw_ctx_buf;
+ dec_pict->pict_hdr_info = str_unit->pict_hdr_info;
+
+ /* Obtain (core) resources for the picture */
+ ret = dec_res_picture_attach(&dec_str_ctx->resources,
+ dec_str_ctx->config.vid_std, dec_pict);
+ VDEC_ASSERT(ret == IMG_SUCCESS);
+ if (ret != IMG_SUCCESS)
+ goto error_res_attach;
+
+ /* Clear fw context data for re-use */
+ memset(dec_pict->cur_pict_dec_res->fw_ctx_buf.cpu_virt, 0,
+ dec_pict->cur_pict_dec_res->fw_ctx_buf.buf_size);
+
+ /*
+ * Clear the control data in case the picture is discarded before
+ * being prepared by firmware.
+ */
+ memset(dec_pict->pict_ref_res->fw_ctrlbuf.cpu_virt, 0,
+ dec_pict->pict_ref_res->fw_ctrlbuf.buf_size);
+
+	ret = hwctrl_getregsoffset(dec_core_ctx->hw_ctx, &regs_offsets);
+ VDEC_ASSERT(ret == IMG_SUCCESS);
+ if (ret != IMG_SUCCESS)
+ goto error_other;
+
+ ret = translation_ctrl_alloc_prepare(&dec_str_ctx->config, str_unit,
+ dec_pict,
+ &dec_core_ctx->core_props,
+					     &regs_offsets);
+
+ VDEC_ASSERT(ret == IMG_SUCCESS);
+ if (ret != IMG_SUCCESS)
+ goto error_other;
+
+ ret = hwctrl_picture_submitbatch(dec_core_ctx->hw_ctx, dec_pict,
+ dec_str_ctx->vxd_dec_ctx);
+ VDEC_ASSERT(ret == IMG_SUCCESS);
+ if (ret != IMG_SUCCESS)
+ goto error_other;
+
+ VDEC_ASSERT(dec_str_ctx->avail_slots > 0);
+ dec_str_ctx->avail_slots--;
+
+ VDEC_ASSERT(!dec_core_ctx->busy);
+ dec_core_ctx->busy = TRUE;
+ /* Store this transaction ID in stream context */
+ dec_str_ctx->last_fe_transaction_id = dec_pict->transaction_id;
+ dec_str_ctx->cur_pict = (struct dec_decpict *)dec_pict;
+
+ dec_str_ctx->dec_str_st.features = str_unit->features;
+
+ if (str_unit->eop)
+ dec_pict->eop_found = TRUE;
+
+ *dec_pict_ptr = dec_pict;
+
+ return IMG_SUCCESS;
+
+ /* Roll back in case of errors. */
+error_other:
+ dec_res_picture_detach(&dec_str_ctx->resources, dec_pict);
+error_res_attach:
+error_ref_res:
+error_dec_res:
+error_segments:
+ decoder_clean_bitstr_segments(&dec_pict->dec_pict_seg_list);
+ kfree(dec_pict->first_fld_fwmsg);
+ kfree(dec_pict->second_fld_fwmsg);
+error_fw_msg:
+ kfree(dec_pict);
+error_dec_pict:
+ kfree(picture->dec_pict_info);
+
+ return ret;
+}
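+
+/*
+ * Summary of the submission path above: the picture is given unique picture
+ * and transaction IDs, the bitstream segments from the stream unit are
+ * wrapped onto the decode picture, the previous/current front-end resource
+ * bundles are rotated, core resources are attached, the control allocation
+ * is prepared via translation_ctrl_alloc_prepare() and the batch is handed
+ * to the hardware through hwctrl_picture_submitbatch().
+ */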
+
+/*
+ * @Function decoder_stream_reference_resource_create
+ */
+static int
+decoder_stream_reference_resource_create(struct dec_str_ctx *dec_str_ctx)
+{
+ struct dec_pictref_res *pict_ref_res;
+ int ret;
+ unsigned int mem_heap_id;
+ enum sys_emem_attrib mem_attribs;
+
+ if (!dec_str_ctx || !dec_str_ctx->decctx) {
+ VDEC_ASSERT(0);
+ return -EINVAL;
+ }
+
+ mem_heap_id = dec_str_ctx->decctx->internal_heap_id;
+ mem_attribs = (enum sys_emem_attrib)(SYS_MEMATTRIB_UNCACHED | SYS_MEMATTRIB_WRITECOMBINE);
+ mem_attribs |= (enum sys_emem_attrib)SYS_MEMATTRIB_INTERNAL;
+
+ /* Allocate the firmware context buffer info structure. */
+ pict_ref_res = kzalloc(sizeof(*pict_ref_res), GFP_KERNEL);
+ VDEC_ASSERT(pict_ref_res);
+ if (!pict_ref_res)
+ return IMG_ERROR_OUT_OF_MEMORY;
+
+	/*
+	 * Allocate the firmware control buffer that carries the data
+	 * required for the subsequent picture.
+	 */
+
+#ifdef DEBUG_DECODER_DRIVER
+	pr_info("%s:%d calling mmu_stream_alloc", __func__, __LINE__);
+#endif
+ ret = mmu_stream_alloc(dec_str_ctx->mmu_str_handle, MMU_HEAP_STREAM_BUFFERS, mem_heap_id,
+ (enum sys_emem_attrib)(mem_attribs | SYS_MEMATTRIB_CPU_READ |
+ SYS_MEMATTRIB_CPU_WRITE),
+ sizeof(struct vdecfw_buffer_control),
+ DEV_MMU_PAGE_ALIGNMENT,
+ &pict_ref_res->fw_ctrlbuf);
+ VDEC_ASSERT(ret == IMG_SUCCESS);
+ if (ret != IMG_SUCCESS)
+ goto err_out_of_memory;
+
+ /*
+ * Clear the context data in preparation for first time use by
+ * the firmware.
+ */
+ memset(pict_ref_res->fw_ctrlbuf.cpu_virt, 0, pict_ref_res->fw_ctrlbuf.buf_size);
+
+ pict_ref_res->ref_cnt = 1;
+
+ ret = resource_list_add_img(&dec_str_ctx->ref_res_lst, pict_ref_res, 0,
+ &pict_ref_res->ref_cnt);
+ if (ret != IMG_SUCCESS) {
+ pr_err("[USERSID=0x%08X] Failed to add resource", dec_str_ctx->config.user_str_id);
+ return ret;
+ }
+
+ return IMG_SUCCESS;
+
+err_out_of_memory:
+
+ kfree(pict_ref_res);
+ pict_ref_res = NULL;
+
+ pr_err("[USERSID=0x%08X] Failed to allocate device memory for stream reference resources",
+ dec_str_ctx->config.user_str_id);
+
+ return IMG_ERROR_OUT_OF_MEMORY;
+}
+
+/*
+ * @Function decoder_picture_finalize
+ */
+static int decoder_picture_finalize(struct dec_str_ctx *dec_str_ctx,
+ struct vdecdd_str_unit *str_unit)
+{
+ struct dec_decpict *dec_pict;
+ struct dec_core_ctx *dec_core_ctx = NULL;
+
+ VDEC_ASSERT(dec_str_ctx);
+
+ dec_pict = dec_str_ctx->cur_pict;
+ if (!dec_pict) {
+ pr_err("[USERSID=0x%08X] Unable to get the current picture from Decoder context",
+ dec_str_ctx->config.user_str_id);
+ return IMG_ERROR_GENERIC_FAILURE;
+ }
+
+ dec_core_ctx = decoder_str_ctx_to_core_ctx(dec_str_ctx);
+
+ if (!dec_core_ctx || !dec_core_ctx->busy) {
+ VDEC_ASSERT(0);
+ return -EINVAL;
+ }
+
+ dec_core_ctx->busy = FALSE;
+
+ /* Picture data are now complete, nullify pointer */
+ dec_str_ctx->cur_pict = NULL;
+
+ return IMG_SUCCESS;
+}
+
+/*
+ * @Function decoder_submit_fragment
+ */
+static int decoder_submit_fragment(struct dec_str_ctx *dec_str_context,
+ struct vdecdd_str_unit *str_unit,
+ unsigned char eop)
+{
+ struct dec_core_ctx *dec_core_context = NULL;
+ struct lst_t dec_fragment_seg_list;
+ struct dec_decpict_seg *dec_pict_seg;
+ struct dec_pict_fragment *pict_fragment;
+ int ret = IMG_SUCCESS;
+
+ if (!dec_str_context) {
+ VDEC_ASSERT(0);
+ return IMG_ERROR_GENERIC_FAILURE;
+ }
+
+ if (!dec_str_context->cur_pict) {
+ pr_err("[USERSID=0x%08X] Unable to get the current picture from Decoder context",
+ dec_str_context->config.user_str_id);
+ VDEC_ASSERT(0);
+ return IMG_ERROR_GENERIC_FAILURE;
+ }
+
+ dec_core_context = decoder_str_ctx_to_core_ctx(dec_str_context);
+ if (!dec_core_context) {
+ VDEC_ASSERT(0);
+ return IMG_ERROR_GENERIC_FAILURE;
+ }
+
+ pict_fragment = kzalloc(sizeof(*pict_fragment), GFP_KERNEL);
+ VDEC_ASSERT(pict_fragment);
+ if (!pict_fragment)
+ return IMG_ERROR_OUT_OF_MEMORY;
+
+ lst_init(&dec_fragment_seg_list);
+
+ /* Add the segments to the temporary list */
+ ret = decoder_wrap_bitstr_segments(&str_unit->bstr_seg_list,
+ &dec_fragment_seg_list,
+ dec_str_context->config.user_str_id);
+ VDEC_ASSERT(ret == IMG_SUCCESS);
+ if (ret != IMG_SUCCESS)
+ goto error;
+
+	/* Prepare the control allocation for the fragment */
+ ret = translation_fragment_prepare(dec_str_context->cur_pict,
+ &dec_fragment_seg_list, eop,
+ pict_fragment);
+ VDEC_ASSERT(ret == IMG_SUCCESS);
+ if (ret != IMG_SUCCESS)
+ goto error;
+
+ /*
+ * Move segments of the fragment from the temporary list to the picture
+ * segment list
+ */
+ dec_pict_seg = lst_removehead(&dec_fragment_seg_list);
+ while (dec_pict_seg) {
+ lst_add(&dec_str_context->cur_pict->dec_pict_seg_list,
+ dec_pict_seg);
+ dec_pict_seg = lst_removehead(&dec_fragment_seg_list);
+ }
+
+ /* Submit fragment */
+ ret = hwctrl_picture_submit_fragment(dec_core_context->hw_ctx,
+ pict_fragment,
+ dec_str_context->cur_pict,
+ dec_str_context->vxd_dec_ctx);
+ VDEC_ASSERT(ret == IMG_SUCCESS);
+ if (ret != IMG_SUCCESS)
+ goto error;
+
+ lst_add(&dec_str_context->cur_pict->fragment_list, pict_fragment);
+
+ if (eop)
+ dec_str_context->cur_pict->eop_found = TRUE;
+
+#ifdef DEBUG_DECODER_DRIVER
+ pr_info("[USERSID=0x%08X] [TID=0x%08X] FRAGMENT",
+ dec_str_context->config.user_str_id,
+ dec_str_context->last_fe_transaction_id);
+#endif
+
+ return IMG_SUCCESS;
+error:
+ kfree(pict_fragment);
+
+ return ret;
+}
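+
+/*
+ * Fragments follow a similar path: segments are first wrapped onto a
+ * temporary list, translated into a control allocation for the fragment,
+ * then moved onto the current picture's segment list before the fragment is
+ * submitted to the hardware and queued on the picture's fragment list.
+ */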
+
+/*
+ * @Function decoder_stream_process_unit
+ */
+int decoder_stream_process_unit(void *dec_str_ctx_arg,
+ struct vdecdd_str_unit *str_unit)
+{
+ struct dec_str_ctx *dec_str_ctx =
+ decoder_stream_get_context(dec_str_ctx_arg);
+
+ struct dec_str_unit *dec_str_unit;
+ struct dec_decpict *dec_pict = NULL;
+ unsigned char processed = FALSE;
+ int ret;
+
+ VDEC_ASSERT(dec_str_ctx);
+ VDEC_ASSERT(str_unit);
+
+ if (!dec_str_ctx || !str_unit) {
+ pr_err("Invalid decoder stream context handle!\n");
+ return IMG_ERROR_INVALID_PARAMETERS;
+ }
+	pr_debug("%s: stream unit type = %d\n",
+		 __func__, str_unit->str_unit_type);
+ /* Process the stream unit */
+ switch (str_unit->str_unit_type) {
+ case VDECDD_STRUNIT_SEQUENCE_END:
+ case VDECDD_STRUNIT_ANONYMOUS:
+ case VDECDD_STRUNIT_CLOSED_GOP:
+ case VDECDD_STRUNIT_PICTURE_PORTENT:
+ case VDECDD_STRUNIT_FENCE:
+ /* Nothing more to do so mark the stream unit as processed */
+ processed = TRUE;
+ break;
+
+ case VDECDD_STRUNIT_STOP:
+ if (dec_str_ctx->cur_pict && !dec_str_ctx->cur_pict->eop_found) {
+ ret = decoder_submit_fragment(dec_str_ctx, str_unit, TRUE);
+ VDEC_ASSERT(ret == IMG_SUCCESS);
+ if (ret != IMG_SUCCESS)
+ return ret;
+
+ ret = decoder_picture_finalize(dec_str_ctx, str_unit);
+ VDEC_ASSERT(ret == IMG_SUCCESS);
+ if (ret != IMG_SUCCESS)
+ return ret;
+
+#ifdef DEBUG_DECODER_DRIVER
+ pr_info("[USERSID=0x%08X] [TID=0x%08X] FORCED END",
+ dec_str_ctx->config.user_str_id,
+ dec_str_ctx->last_fe_transaction_id);
+#endif
+ }
+
+ processed = TRUE;
+ break;
+
+ case VDECDD_STRUNIT_SEQUENCE_START:
+ {
+ unsigned int max_num_activ_pict = 0;
+
+ VDEC_ASSERT(str_unit->seq_hdr_info);
+ /*
+ * Determine how many decoded pictures can be held for
+ * reference in the decoder for this stream.
+ */
+ ret = vdecddutils_ref_pict_get_maxnum(&dec_str_ctx->config,
+ &str_unit->seq_hdr_info->com_sequ_hdr_info,
+ &max_num_activ_pict);
+ if (ret != IMG_SUCCESS)
+ return ret;
+
+ /* Double for field coding */
+ max_num_activ_pict *= 2;
+
+		/*
+		 * Ensure that there are enough resources to have pictures
+		 * filling all slots on all cores.
+		 */
+ max_num_activ_pict +=
+ dec_str_ctx->decctx->dev_cfg->num_slots_per_pipe *
+ dec_str_ctx->decctx->num_pipes;
+
+ /* Increase decoder stream resources if necessary. */
+ while (dec_str_ctx->num_ref_res < max_num_activ_pict) {
+ ret = decoder_stream_reference_resource_create(dec_str_ctx);
+ VDEC_ASSERT(ret == IMG_SUCCESS);
+ if (ret != IMG_SUCCESS)
+ return ret;
+
+ dec_str_ctx->num_ref_res++;
+ }
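+
+		/*
+		 * Illustrative sizing example: with a reported maximum of
+		 * four active reference pictures, two slots per pipe and one
+		 * pipe, 4 * 2 + 2 * 1 = 10 reference resources end up being
+		 * created for the stream.
+		 */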
+
+ /* Nothing more to do so mark the stream unit as processed */
+ processed = TRUE;
+ break;
+ }
+
+ case VDECDD_STRUNIT_PICTURE_START:
+ if (str_unit->decode) {
+ /* Prepare and submit picture to decode. */
+ ret = decoder_picture_decode(dec_str_ctx, str_unit, &dec_pict);
+ VDEC_ASSERT(ret == IMG_SUCCESS);
+ if (ret != IMG_SUCCESS)
+ return ret;
+
+#ifdef DEBUG_DECODER_DRIVER
+ pr_info("[USERSID=0x%08X] [TID=0x%08X] START",
+ dec_str_ctx->config.user_str_id,
+ dec_str_ctx->last_fe_transaction_id);
+#endif
+ } else {
+ processed = TRUE;
+ }
+ break;
+
+ case VDECDD_STRUNIT_PICTURE_END:
+ if (str_unit->decode) {
+ ret = decoder_picture_finalize(dec_str_ctx, str_unit);
+ VDEC_ASSERT(ret == IMG_SUCCESS);
+ if (ret != IMG_SUCCESS)
+ return ret;
+#ifdef DEBUG_DECODER_DRIVER
+ pr_info("[USERSID=0x%08X] [TID=0x%08X] END",
+ dec_str_ctx->config.user_str_id,
+ dec_str_ctx->last_fe_transaction_id);
+#endif
+ } else {
+ processed = TRUE;
+ }
+ break;
+
+ default:
+ VDEC_ASSERT(FALSE);
+ break;
+ }
+
+ /*
+ * If this or any preceding stream unit(s) could not be
+ * completely processed, add this unit to the queue.
+ */
+ if (!processed) {
+ /* Add unit to stream decode list */
+ dec_str_unit = kzalloc(sizeof(*dec_str_unit), GFP_KERNEL);
+ VDEC_ASSERT(dec_str_unit);
+ if (!dec_str_unit)
+ return IMG_ERROR_OUT_OF_MEMORY;
+
+ dec_str_unit->str_unit = str_unit;
+
+ /* make PICTURE_START owner of dec_pict */
+ if (dec_pict) {
+ VDEC_ASSERT(str_unit->str_unit_type == VDECDD_STRUNIT_PICTURE_START);
+ dec_str_unit->dec_pict = dec_pict;
+ }
+
+ lst_add(&dec_str_ctx->pend_strunit_list, dec_str_unit);
+ } else {
+ /*
+ * If there is nothing being decoded for this stream,
+ * immediately handle the unit (non-picture so doesn't need
+ * decoding). Report that this unit has been processed.
+ */
+ VDEC_ASSERT(dec_str_ctx->decctx);
+ ret = dec_str_ctx->str_processed_cb(dec_str_ctx->usr_int_data,
+ VXD_CB_STRUNIT_PROCESSED,
+ str_unit);
+ VDEC_ASSERT(ret == IMG_SUCCESS);
+ if (ret != IMG_SUCCESS)
+ return ret;
+ }
+
+ return IMG_SUCCESS;
+}
+
+static int
+decoder_get_required_core_features(const struct vdec_str_configdata *str_cfg,
+ const struct vdec_str_opconfig *op_cfg,
+ unsigned int *features)
+{
+ unsigned int features_local = 0;
+
+ VDEC_ASSERT(str_cfg);
+ VDEC_ASSERT(features);
+
+ /* Check Video Standard. */
+ switch (str_cfg->vid_std) {
+ case VDEC_STD_H264:
+ features_local = VDECDD_COREFEATURE_H264;
+ break;
+#ifdef HAS_JPEG
+ case VDEC_STD_JPEG:
+ features_local = VDECDD_COREFEATURE_JPEG;
+ break;
+#endif
+#ifdef HAS_HEVC
+ case VDEC_STD_HEVC:
+ features_local = VDECDD_COREFEATURE_HEVC;
+ break;
+#endif
+ default:
+ VDEC_ASSERT(FALSE);
+ break;
+ }
+
+ *features = features_local;
+
+ return IMG_SUCCESS;
+}
+
+/*
+ * @Function decoder_is_supported_by_atleast_onepipe
+ */
+static unsigned char decoder_is_supported_by_atleast_onepipe(unsigned char *features,
+ unsigned int num_pipes)
+{
+ unsigned int i;
+
+ VDEC_ASSERT(features);
+ VDEC_ASSERT(num_pipes <= VDEC_MAX_PIXEL_PIPES);
+
+ for (i = 0; i < num_pipes; i++) {
+ if (features[i])
+ return TRUE;
+ }
+
+ return FALSE;
+}
+
+/*
+ * @Function decoder_check_support
+ */
+int decoder_check_support(void *dec_ctx_arg,
+ const struct vdec_str_configdata *str_cfg,
+ const struct vdec_str_opconfig *str_op_cfg,
+ const struct vdecdd_ddpict_buf *disp_pict_buf,
+ const struct vdec_pict_rendinfo *req_pict_rendinfo,
+ const struct vdec_comsequ_hdrinfo *comseq_hdrinfo,
+ const struct bspp_pict_hdr_info *pict_hdrinfo,
+ const struct vdec_comsequ_hdrinfo *prev_comseq_hdrinfo,
+ const struct bspp_pict_hdr_info *prev_pict_hdrinfo,
+ unsigned char non_cfg_req,
+ struct vdec_unsupp_flags *unsupported,
+ unsigned int *features)
+{
+ struct dec_ctx *dec_ctx = (struct dec_ctx *)dec_ctx_arg;
+ struct dec_core_ctx *dec_core_ctx;
+ struct vxd_coreprops *core_props;
+ const struct vdec_pict_rendinfo *disp_pict_rendinfo = NULL;
+ int ret = IMG_ERROR_NOT_SUPPORTED;
+
+ /* Ensure input parameters are valid. */
+ VDEC_ASSERT(dec_ctx_arg);
+ VDEC_ASSERT(str_cfg);
+ VDEC_ASSERT(unsupported);
+
+ if (!dec_ctx_arg || !str_cfg || !unsupported) {
+ pr_err("Invalid parameters!");
+ return IMG_ERROR_INVALID_PARAMETERS;
+ }
+
+ if (disp_pict_buf)
+ disp_pict_rendinfo = &disp_pict_buf->rend_info;
+
+	/*
+	 * For now, validate compatibility between the supplied
+	 * configuration/state and the master core only (assumed to have a
+	 * superset of features). Some features may not be present on the
+	 * slave cores, which might cause poor utilisation of the hardware.
+	 */
+ memset(unsupported, 0, sizeof(*unsupported));
+
+ dec_core_ctx = dec_ctx->dec_core_ctx;
+ VDEC_ASSERT(dec_core_ctx);
+
+ core_props = &dec_core_ctx->core_props;
+ VDEC_ASSERT(core_props);
+
+ /* Check that the video standard is supported */
+ switch (str_cfg->vid_std) {
+ case VDEC_STD_H264:
+ if (!decoder_is_supported_by_atleast_onepipe(core_props->h264,
+ core_props->num_pixel_pipes)) {
+ pr_warn("[USERSID=0x%08X] UNSUPPORTED[HW]: VIDEO STANDARD (H.264)",
+ str_cfg->user_str_id);
+ unsupported->str_cfg |=
+ VDECDD_UNSUPPORTED_STRCONFIG_STD;
+ }
+
+ if (comseq_hdrinfo && (H264_PROFILE_MVC_HIGH ==
+ comseq_hdrinfo->codec_profile || H264_PROFILE_MVC_STEREO ==
+ comseq_hdrinfo->codec_profile) && comseq_hdrinfo->num_views >
+ VDEC_H264_MVC_MAX_VIEWS) {
+ pr_warn("[USERSID=0x%08X] UNSUPPORTED[SW]: NUMBER OF VIEWS",
+ str_cfg->user_str_id);
+ unsupported->seq_hdr |= VDECDD_UNSUPPORTED_SEQUHDR_NUM_OF_VIEWS;
+ }
+ break;
+#ifdef HAS_HEVC
+ case VDEC_STD_HEVC:
+ if (!decoder_is_supported_by_atleast_onepipe(core_props->hevc,
+ core_props->num_pixel_pipes)) {
+ pr_warn("[USERSID=0x%08X] UNSUPPORTED[HW]: VIDEO STANDARD (HEVC)",
+ str_cfg->user_str_id);
+ unsupported->str_cfg |= VDECDD_UNSUPPORTED_STRCONFIG_STD;
+ }
+ if (pict_hdrinfo && pict_hdrinfo->hevc_pict_hdr_info.range_ext_present)
+ if ((pict_hdrinfo->hevc_pict_hdr_info.is_full_range_ext &&
+ !decoder_is_supported_by_atleast_onepipe(core_props->hevc_range_ext,
+ core_props->num_pixel_pipes)) ||
+ (!pict_hdrinfo->hevc_pict_hdr_info.is_full_range_ext &&
+ core_props->vidstd_props[str_cfg->vid_std].max_chroma_format ==
+ PIXEL_FORMAT_420)) {
+ pr_warn("[USERSID=0x%08X] UNSUPPORTED[HW]: HEVC RANGE EXTENSIONS",
+ str_cfg->user_str_id);
+ unsupported->pict_hdr |= VDECDD_UNSUPPORTED_PICTHDR_HEVC_RANGE_EXT;
+ }
+ break;
+#endif
+#ifdef HAS_JPEG
+ case VDEC_STD_JPEG:
+ if (!decoder_is_supported_by_atleast_onepipe(core_props->jpeg,
+ core_props->num_pixel_pipes)) {
+ pr_warn("[USERSID=0x%08X] UNSUPPORTED[HW]: VIDEO STANDARD (JPEG)",
+ str_cfg->user_str_id);
+ unsupported->str_cfg |=
+ VDECDD_UNSUPPORTED_STRCONFIG_STD;
+ }
+ break;
+#endif
+ default:
+ pr_warn("[USERSID=0x%08X] UNSUPPORTED[HW]: VIDEO STANDARD (UNKNOWN)",
+ str_cfg->user_str_id);
+ unsupported->str_cfg |=
+ VDECDD_UNSUPPORTED_STRCONFIG_STD;
+ break;
+ }
+
+ if (str_op_cfg) {
+ /*
+ * Ensure that each display feature is supported by the
+ * hardware.
+ */
+ if (comseq_hdrinfo) {
+ /* Validate display pixel format */
+ if (non_cfg_req && prev_comseq_hdrinfo &&
+ vdec_size_nz(prev_comseq_hdrinfo->frame_size) &&
+ prev_comseq_hdrinfo->pixel_info.chroma_fmt_idc ==
+ str_op_cfg->pixel_info.chroma_fmt_idc &&
+ comseq_hdrinfo->pixel_info.chroma_fmt_idc !=
+ prev_comseq_hdrinfo->pixel_info.chroma_fmt_idc) {
+ /*
+ * If this is a non-configuration request and
+ * it looks like a new sequence with
+ * sub-sampling change, just indicate output
+ * format mismatch without any error messages.
+ */
+ unsupported->str_opcfg |= VDECDD_UNSUPPORTED_OUTPUTCONFIG_PIXFORMAT;
+ } else {
+ switch (str_op_cfg->pixel_info.chroma_fmt_idc) {
+ case PIXEL_FORMAT_420:
+ if (comseq_hdrinfo->pixel_info.chroma_fmt_idc ==
+ PIXEL_FORMAT_MONO) {
+ pr_warn("[USERSID=0x%08X] UNSUPPORTED[HW]: TRANSFORM PIXEL FORMAT FROM 400 TO 420",
+ str_cfg->user_str_id);
+ unsupported->str_opcfg |=
+ VDECDD_UNSUPPORTED_OUTPUTCONFIG_PIXFORMAT;
+ }
+ break;
+
+ case PIXEL_FORMAT_422:
+ if (comseq_hdrinfo->pixel_info.chroma_fmt_idc ==
+ PIXEL_FORMAT_420 &&
+ str_op_cfg->pixel_info.num_planes > 1) {
+ pr_warn("[USERSID=0x%08X] UNSUPPORTED[HW]: REQUESTED NUMBER OF PLANES FOR 422 UPSAMPLING",
+ str_cfg->user_str_id);
+ unsupported->str_opcfg |=
+ VDECDD_UNSUPPORTED_OUTPUTCONFIG_PIXFORMAT;
+ } else if (comseq_hdrinfo->pixel_info.chroma_fmt_idc ==
+ PIXEL_FORMAT_MONO) {
+ pr_warn("[USERSID=0x%08X] UNSUPPORTED[HW]: TRANSFORM PIXEL FORMAT FROM 400 TO 422",
+ str_cfg->user_str_id);
+ unsupported->str_opcfg |=
+ VDECDD_UNSUPPORTED_OUTPUTCONFIG_PIXFORMAT;
+ }
+ break;
+
+ default:
+ break;
+ }
+ }
+ }
+
+ if (str_op_cfg->pixel_info.bitdepth_y >
+ core_props->vidstd_props[str_cfg->vid_std].max_luma_bitdepth ||
+ str_op_cfg->pixel_info.bitdepth_y < 8 ||
+ str_op_cfg->pixel_info.bitdepth_y == 9) {
+ pr_warn("[USERSID=0x%08X] UNSUPPORTED[HW]: DISPLAY PICTURE LUMA BIT DEPTH %d [RANGE: 8->%d for %s]",
+ str_cfg->user_str_id,
+ str_op_cfg->pixel_info.bitdepth_y,
+ core_props->vidstd_props[str_cfg->vid_std].max_luma_bitdepth,
+ vid_std_names[str_cfg->vid_std]);
+ unsupported->str_opcfg |=
+ VDECDD_UNSUPPORTED_OUTPUTCONFIG_PIXFORMAT;
+ }
+
+ if (str_op_cfg->pixel_info.chroma_fmt_idc !=
+ PIXEL_FORMAT_MONO &&
+ (str_op_cfg->pixel_info.bitdepth_c >
+ core_props->vidstd_props[str_cfg->vid_std].max_chroma_bitdepth ||
+ str_op_cfg->pixel_info.bitdepth_c < 8 ||
+ str_op_cfg->pixel_info.bitdepth_c == 9)) {
+ pr_warn("[USERSID=0x%08X] UNSUPPORTED[HW]: DISPLAY PICTURE CHROMA BIT DEPTH %d [RANGE: 8->%d for %s]",
+ str_cfg->user_str_id,
+ str_op_cfg->pixel_info.bitdepth_c,
+ core_props->vidstd_props[str_cfg->vid_std].max_chroma_bitdepth,
+ vid_std_names[str_cfg->vid_std]);
+ unsupported->str_opcfg |=
+ VDECDD_UNSUPPORTED_OUTPUTCONFIG_PIXFORMAT;
+ }
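+
+		/*
+		 * In effect only 8-bit and 10-bit (and higher, up to the
+		 * per-standard maximum) output depths pass these checks:
+		 * anything below 8 and the value 9 are always rejected, and
+		 * e.g. a 10-bit output is accepted only when the relevant
+		 * maximum bit depth reported by the core is at least 10.
+		 */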
+
+#ifdef HAS_JPEG
+ /* Validate display configuration against existing stream configuration.*/
+ if (str_cfg->vid_std == VDEC_STD_JPEG) {
+ if (str_op_cfg->force_oold) {
+ pr_err("[USERSID=0x%08X] UNSUPPORTED[HW]: OOLD WITH JPEG\n",
+ str_cfg->user_str_id);
+ unsupported->str_opcfg |=
+ VDECDD_UNSUPPORTED_OUTPUTCONFIG_X_WITH_JPEG;
+ }
+ }
+#endif
+ }
+
+ if (disp_pict_rendinfo) {
+ unsigned int stride_alignment = VDEC_VXD_EXT_STRIDE_ALIGNMENT_DEFAULT;
+
+ if (req_pict_rendinfo) {
+ /*
+ * Picture size declared in buffer must be at least as
+ * large as that required by bitstream/output config.
+ */
+ if (!vdec_size_ge(disp_pict_rendinfo->rend_pict_size,
+ req_pict_rendinfo->rend_pict_size)) {
+ pr_warn("[USERSID=0x%08X] Picture size of output picture buffer [%d x %d] is not large enough for sequence [%d x %d]",
+ str_cfg->user_str_id,
+ disp_pict_rendinfo->rend_pict_size.width,
+ disp_pict_rendinfo->rend_pict_size.height,
+ req_pict_rendinfo->rend_pict_size.width,
+ req_pict_rendinfo->rend_pict_size.height);
+ unsupported->str_opcfg |=
+ VDECDD_UNSUPPORTED_OUTPUTBUFCONFIG_PICTURE_SIZE;
+ }
+
+ /*
+ * Size of each plane must be at least as large
+ * as that required.
+ */
+ if (disp_pict_rendinfo->plane_info[VDEC_PLANE_VIDEO_Y].size <
+ req_pict_rendinfo->plane_info[VDEC_PLANE_VIDEO_Y].size) {
+ pr_warn("[USERSID=0x%08X] Y plane of output picture buffer [%d bytes] is not large enough for bitstream/config [%d bytes]",
+ str_cfg->user_str_id,
+ disp_pict_rendinfo->plane_info[VDEC_PLANE_VIDEO_Y].size,
+ req_pict_rendinfo->plane_info[VDEC_PLANE_VIDEO_Y].size);
+ unsupported->op_bufcfg |=
+ VDECDD_UNSUPPORTED_OUTPUTBUFCONFIG_Y_SIZE;
+ }
+
+ /*
+ * Stride of each plane must be at least as large as that
+ * required.
+ */
+ if (disp_pict_rendinfo->plane_info[VDEC_PLANE_VIDEO_Y].stride <
+ req_pict_rendinfo->plane_info[VDEC_PLANE_VIDEO_Y].stride) {
+ pr_warn("[USERSID=0x%08X] Y stride of output picture buffer [%d bytes] is not large enough for bitstream/config [%d bytes]",
+ str_cfg->user_str_id,
+ disp_pict_rendinfo->plane_info[VDEC_PLANE_VIDEO_Y].stride,
+ req_pict_rendinfo->plane_info[VDEC_PLANE_VIDEO_Y].stride);
+ unsupported->op_bufcfg |=
+ VDECDD_UNSUPPORTED_OUTPUTBUFCONFIG_Y_STRIDE;
+ }
+
+ /*
+ * Size of each plane must be at least
+ * as large as that required.
+ */
+ if (disp_pict_rendinfo->plane_info[VDEC_PLANE_VIDEO_UV].size <
+ req_pict_rendinfo->plane_info[VDEC_PLANE_VIDEO_UV].size) {
+ pr_warn("[USERSID=0x%08X] UV plane of output picture buffer [%d bytes] is not large enough for bitstream/config [%d bytes]",
+ str_cfg->user_str_id,
+ disp_pict_rendinfo->plane_info[VDEC_PLANE_VIDEO_UV].size,
+ req_pict_rendinfo->plane_info[VDEC_PLANE_VIDEO_UV].size);
+ unsupported->op_bufcfg |=
+ VDECDD_UNSUPPORTED_OUTPUTBUFCONFIG_UV_SIZE;
+ }
+
+ /*
+ * Stride of each plane must be at least
+ * as large as that required.
+ */
+ if (disp_pict_rendinfo->plane_info[VDEC_PLANE_VIDEO_UV].stride <
+ req_pict_rendinfo->plane_info[VDEC_PLANE_VIDEO_UV].stride) {
+ pr_warn("[USERSID=0x%08X] UV stride of output picture buffer [%d bytes] is not large enough for bitstream/config [%d bytes]",
+ str_cfg->user_str_id,
+ disp_pict_rendinfo->plane_info[VDEC_PLANE_VIDEO_UV].stride,
+ req_pict_rendinfo->plane_info[VDEC_PLANE_VIDEO_UV].stride);
+ unsupported->op_bufcfg |=
+ VDECDD_UNSUPPORTED_OUTPUTBUFCONFIG_UV_STRIDE;
+ }
+
+ if ((req_pict_rendinfo->stride_alignment &
+ (VDEC_VXD_EXT_STRIDE_ALIGNMENT_DEFAULT - 1)) != 0) {
+ pr_warn("[USERSID=0x%08X] UNSUPPORTED[HW]: STRIDE ALIGNMENT [%d] must be a multiple of %d bytes",
+ str_cfg->user_str_id,
+ req_pict_rendinfo->stride_alignment,
+ VDEC_VXD_EXT_STRIDE_ALIGNMENT_DEFAULT);
+ unsupported->op_bufcfg |=
+ VDECDD_UNSUPPORTED_OUTPUTBUFCONFIG_64BYTE_STRIDE;
+ }
+
+ if (req_pict_rendinfo->stride_alignment > 0)
+ stride_alignment = req_pict_rendinfo->stride_alignment;
+ }
+
+ if ((disp_pict_rendinfo->plane_info[VDEC_PLANE_VIDEO_Y].stride %
+ stride_alignment) != 0) {
+ pr_warn("[USERSID=0x%08X] UNSUPPORTED[HW]: Y STRIDE [%d] must be a multiple of %d bytes",
+ str_cfg->user_str_id,
+ disp_pict_rendinfo->plane_info[VDEC_PLANE_VIDEO_Y].stride,
+ stride_alignment);
+ unsupported->op_bufcfg |=
+ VDECDD_UNSUPPORTED_OUTPUTBUFCONFIG_64BYTE_STRIDE;
+ }
+
+ if ((disp_pict_rendinfo->plane_info[VDEC_PLANE_VIDEO_UV].stride %
+ stride_alignment) != 0) {
+ pr_warn("[USERSID=0x%08X] UNSUPPORTED[HW]: UV STRIDE [%d] must be a multiple of %d bytes",
+ str_cfg->user_str_id,
+ disp_pict_rendinfo->plane_info[VDEC_PLANE_VIDEO_UV].stride,
+ stride_alignment);
+ unsupported->op_bufcfg |=
+ VDECDD_UNSUPPORTED_OUTPUTBUFCONFIG_64BYTE_STRIDE;
+ }
+
+ if ((disp_pict_rendinfo->plane_info[VDEC_PLANE_VIDEO_V].stride %
+ stride_alignment) != 0) {
+ pr_warn("[USERSID=0x%08X] UNSUPPORTED[HW]: V STRIDE [%d] must be a multiple of %d bytes",
+ str_cfg->user_str_id,
+ disp_pict_rendinfo->plane_info[VDEC_PLANE_VIDEO_V].stride,
+ stride_alignment);
+ unsupported->op_bufcfg |=
+ VDECDD_UNSUPPORTED_OUTPUTBUFCONFIG_64BYTE_STRIDE;
+ }
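+
+		/*
+		 * Worked example, assuming the default external stride
+		 * alignment is 64 bytes (as the _64BYTE_STRIDE flag name
+		 * suggests): a 1920-byte luma stride passes the check
+		 * (1920 % 64 == 0), whereas a 1912-byte stride would set
+		 * VDECDD_UNSUPPORTED_OUTPUTBUFCONFIG_64BYTE_STRIDE.
+		 */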
+
+ if (req_pict_rendinfo) {
+ if (str_op_cfg) {
+ if (str_cfg->vid_std != VDEC_STD_JPEG) {
+ if (str_op_cfg->pixel_info.num_planes <= 2)
+ /*
+ * V plane only required when chroma is
+ * separated.
+ */
+ VDEC_ASSERT(req_pict_rendinfo->plane_info
+ [VDEC_PLANE_VIDEO_V].size == 0);
+
+ if (str_op_cfg->pixel_info.num_planes <= 3)
+ /* Alpha planes should not be required. */
+ VDEC_ASSERT(req_pict_rendinfo->plane_info
+ [VDEC_PLANE_VIDEO_A].size == 0);
+ }
+ }
+
+ /* Size of buffer must be at least as large as that required. */
+ if (disp_pict_rendinfo->rendered_size <
+ req_pict_rendinfo->rendered_size) {
+ pr_warn("[USERSID=0x%08X] Output picture buffer [%d bytes] is not large enough for bitstream/config [%d bytes]",
+ str_cfg->user_str_id,
+ disp_pict_rendinfo->rendered_size,
+ req_pict_rendinfo->rendered_size);
+ unsupported->op_bufcfg |=
+ VDECDD_UNSUPPORTED_OUTPUTBUFCONFIG_BUFFER_SIZE;
+ }
+ }
+
+ if (str_op_cfg) {
+ if (comseq_hdrinfo) {
+ if (vdec_size_lt(disp_pict_rendinfo->rend_pict_size,
+ comseq_hdrinfo->max_frame_size)) {
+ pr_warn("[USERSID=0x%08X] Buffers [%d x %d] must be large enough to contain the maximum frame size [%d x %d] when not scaling",
+ str_cfg->user_str_id,
+ disp_pict_rendinfo->rend_pict_size.width,
+ disp_pict_rendinfo->rend_pict_size.height,
+ comseq_hdrinfo->max_frame_size.width,
+ comseq_hdrinfo->max_frame_size.height);
+ unsupported->op_bufcfg |=
+ VDECDD_UNSUPPORTED_OUTPUTBUFCONFIG_PICTURE_SIZE;
+ }
+ }
+ }
+ }
+
+ if (comseq_hdrinfo) {
+ unsigned int max_width =
+ vdec_size_min(core_props->vidstd_props[str_cfg->vid_std].max_width,
+ MAX_PLATFORM_SUPPORTED_WIDTH);
+
+ unsigned int max_height =
+ vdec_size_min(core_props->vidstd_props[str_cfg->vid_std].max_height,
+ MAX_PLATFORM_SUPPORTED_HEIGHT);
+
+ if (comseq_hdrinfo->max_frame_size.width > max_width ||
+ comseq_hdrinfo->max_frame_size.height > max_height) {
+ pr_warn("[USERSID=0x%08X] UNSUPPORTED[HW]: FRAME WIDTH %dpx or HEIGHT %dpx are over maximum allowed value [%d, %d]",
+ str_cfg->user_str_id,
+ comseq_hdrinfo->max_frame_size.width,
+ comseq_hdrinfo->max_frame_size.height,
+ max_width, max_height);
+ unsupported->seq_hdr |=
+ VDECDD_UNSUPPORTED_SEQUHDR_SIZE;
+ }
+
+ if (comseq_hdrinfo->pixel_info.bitdepth_y >
+ core_props->vidstd_props[str_cfg->vid_std].max_luma_bitdepth ||
+ comseq_hdrinfo->pixel_info.bitdepth_y < 8 ||
+ comseq_hdrinfo->pixel_info.bitdepth_y == 9) {
+ pr_warn("[USERSID=0x%08X] UNSUPPORTED[HW]: CODED PICTURE LUMA BIT DEPTH %d [RANGE: 8->%d for %s]",
+ str_cfg->user_str_id,
+ comseq_hdrinfo->pixel_info.bitdepth_y,
+ core_props->vidstd_props[str_cfg->vid_std].max_luma_bitdepth,
+ vid_std_names[str_cfg->vid_std]);
+ unsupported->seq_hdr |=
+ VDECDD_UNSUPPORTED_SEQUHDR_PIXFORMAT_BIT_DEPTH;
+ }
+
+ if (comseq_hdrinfo->pixel_info.chroma_fmt_idc !=
+ PIXEL_FORMAT_MONO &&
+ (comseq_hdrinfo->pixel_info.bitdepth_c >
+ core_props->vidstd_props[str_cfg->vid_std].max_chroma_bitdepth ||
+ comseq_hdrinfo->pixel_info.bitdepth_c < 8 ||
+ comseq_hdrinfo->pixel_info.bitdepth_c == 9)) {
+ pr_warn("[USERSID=0x%08X] UNSUPPORTED[HW]: CODED PICTURE CHROMA BIT DEPTH %d [RANGE: 8->%d for %s]",
+ str_cfg->user_str_id,
+ comseq_hdrinfo->pixel_info.bitdepth_c,
+ core_props->vidstd_props[str_cfg->vid_std].max_chroma_bitdepth,
+ vid_std_names[str_cfg->vid_std]);
+ unsupported->seq_hdr |=
+ VDECDD_UNSUPPORTED_SEQUHDR_PIXFORMAT_BIT_DEPTH;
+ }
+
+ if (comseq_hdrinfo->pixel_info.chroma_fmt_idc !=
+ PIXEL_FORMAT_MONO &&
+ comseq_hdrinfo->pixel_info.bitdepth_y !=
+ comseq_hdrinfo->pixel_info.bitdepth_c) {
+ pr_warn("[USERSID=0x%08X] UNSUPPORTED[HW]: CODED PICTURE MIXED BIT DEPTH [%d vs %d]",
+ str_cfg->user_str_id,
+ comseq_hdrinfo->pixel_info.bitdepth_y,
+ comseq_hdrinfo->pixel_info.bitdepth_c);
+ unsupported->seq_hdr |=
+ VDECDD_UNSUPPORTED_SEQUHDR_PIXFORMAT_BIT_DEPTH;
+ }
+
+ if (comseq_hdrinfo->pixel_info.chroma_fmt_idc >
+ core_props->vidstd_props[str_cfg->vid_std].max_chroma_format) {
+ pr_warn("[USERSID=0x%08X] UNSUPPORTED[HW]: CODED PIXEL FORMAT IDC %s [for %s]",
+ str_cfg->user_str_id,
+ comseq_hdrinfo->pixel_info.chroma_fmt_idc <
+ ARRAY_SIZE
+ (pix_fmt_idc_names) ? (unsigned char *)
+ pix_fmt_idc_names[comseq_hdrinfo->pixel_info.chroma_fmt_idc] :
+ (unsigned char *)"Invalid",
+ vid_std_names[str_cfg->vid_std]);
+ unsupported->seq_hdr |=
+ VDECDD_UNSUPPORTED_SEQUHDR_PIXEL_FORMAT;
+ }
+
+ if (comseq_hdrinfo->pixel_info.chroma_fmt_idc ==
+ PIXEL_FORMAT_INVALID) {
+ pr_warn("[USERSID=0x%08X] UNSUPPORTED[SW]: UNKNOWN CODED PIXEL FORMAT",
+ str_cfg->user_str_id);
+ unsupported->seq_hdr |=
+ VDECDD_UNSUPPORTED_SEQUHDR_PIXEL_FORMAT;
+ }
+ }
+
+ if (pict_hdrinfo && comseq_hdrinfo) {
+ unsigned int coded_cmd_width;
+ unsigned int coded_cmd_height;
+ unsigned int min_width = core_props->vidstd_props[str_cfg->vid_std].min_width;
+ unsigned int min_height =
+ ALIGN(core_props->vidstd_props[str_cfg->vid_std].min_height,
+ (pict_hdrinfo->field) ?
+ 2 * VDEC_MB_DIMENSION : VDEC_MB_DIMENSION);
+ unsigned int pict_size_in_mbs;
+ unsigned int max_height = core_props->vidstd_props[str_cfg->vid_std].max_height;
+ unsigned int max_width = core_props->vidstd_props[str_cfg->vid_std].max_width;
+ unsigned int max_mbs = core_props->vidstd_props[str_cfg->vid_std].max_macroblocks;
+
+#ifdef HAS_JPEG
+ /* For JPEG, max picture size of four plane images is 16k*16k. */
+ if (str_cfg->vid_std == VDEC_STD_JPEG) {
+ if (comseq_hdrinfo->pixel_info.num_planes >= 4) {
+ max_width = (max_width > 16 * 1024) ? 16 * 1024 : max_width;
+ max_height = (max_height > 16 * 1024) ? 16 * 1024 : max_height;
+ }
+ }
+#endif
+
+ coded_cmd_width =
+ ALIGN(pict_hdrinfo->coded_frame_size.width, VDEC_MB_DIMENSION);
+ coded_cmd_height =
+ ALIGN(pict_hdrinfo->coded_frame_size.height,
+ pict_hdrinfo->field ?
+ 2 * VDEC_MB_DIMENSION : VDEC_MB_DIMENSION);
+
+ pict_size_in_mbs = (coded_cmd_width * coded_cmd_height) /
+ (VDEC_MB_DIMENSION * VDEC_MB_DIMENSION);
+
+ if ((str_cfg->vid_std == VDEC_STD_H264 &&
+ max_mbs && pict_size_in_mbs > max_mbs) ||
+ coded_cmd_width > max_width ||
+ coded_cmd_height > max_height) {
+ pr_warn("[USERSID=0x%08X] UNSUPPORTED[HW]: CODED PICTURE SIZE %d x %d [MAX: %d x %d or %d MBs]",
+ str_cfg->user_str_id,
+ coded_cmd_width, coded_cmd_height,
+ max_width, max_height, max_mbs);
+ unsupported->pict_hdr |= VDECDD_UNSUPPORTED_PICTHDR_RESOLUTION;
+ }
+
+ if (pict_hdrinfo->coded_frame_size.width < min_width ||
+ pict_hdrinfo->coded_frame_size.height < min_height) {
+#ifdef USE_STRICT_MIN_PIC_SIZE_CHECK
+ pr_warn("[USERSID=0x%08X] UNSUPPORTED[HW]: CODED PICTURE SIZE %d x %d [MIN: %d x %d]",
+ str_cfg->user_str_id,
+ pict_hdrinfo->coded_frame_size.width,
+ pict_hdrinfo->coded_frame_size.height,
+ min_width, min_height);
+ unsupported->pict_hdr |= VDECDD_UNSUPPORTED_PICTHDR_RESOLUTION;
+#else /* ndef USE_STRICT_MIN_PIC_SIZE_CHECK */
+ pr_warn("[USERSID=0x%08X] UNSUPPORTED[HW]: CODED PICTURE SIZE %d x %d [MIN: %d x %d]",
+ str_cfg->user_str_id,
+ pict_hdrinfo->coded_frame_size.width,
+ pict_hdrinfo->coded_frame_size.height,
+ min_width, min_height);
+#endif /* ndef USE_STRICT_MIN_PIC_SIZE_CHECK */
+ }
+
+ if (pict_hdrinfo->pict_sgm_data.id !=
+ BSPP_INVALID && pict_hdrinfo->coded_frame_size.width > 1280) {
+ pr_warn("[USERSID=0x%08X] UNSUPPORTED[HW]: SGM & coded frame width > 1280",
+ str_cfg->user_str_id);
+ unsupported->pict_hdr |=
+ VDECDD_UNSUPPORTED_PICTHDR_OVERSIZED_SGM;
+ }
+
+ if (pict_hdrinfo->discontinuous_mbs)
+ pr_info("Stream has Discontinuous Macroblocks");
+
+ decoder_get_required_core_features(str_cfg, str_op_cfg, features);
+ }
+
+ if (unsupported->str_cfg == 0 && unsupported->str_opcfg == 0 &&
+ unsupported->op_bufcfg == 0 && unsupported->pict_hdr == 0)
+ ret = IMG_SUCCESS;
+
+ return ret;
+}
+
+/*
+ * @Function decoder_picture_decoded
+ */
+static int decoder_picture_decoded(struct dec_str_ctx *dec_str_ctx,
+ struct dec_core_ctx *dec_core_ctx,
+ struct vdecdd_picture *picture,
+ struct dec_decpict *dec_pict,
+ struct bspp_pict_hdr_info *pict_hdrinfo,
+ struct vdecdd_str_unit *str_unit)
+{
+ struct dec_fwmsg *first_fld_fwmsg;
+ struct dec_fwmsg *second_fld_fwmsg;
+ struct dec_pictref_res *pict_ref_res;
+ unsigned int transaction_id;
+ struct dec_decoded_pict *decoded_pict;
+ struct dec_decoded_pict *next_decoded_pict;
+ struct vdecdd_ddbuf_mapinfo *pict_buf;
+ struct dec_decoded_pict *prev_decoded_pict;
+ struct vdecfw_buffer_control *buf_control;
+ struct vdec_comsequ_hdrinfo *comseq_hdrinfo;
+ unsigned int res_limit = 0;
+ unsigned int dec_pict_num = 0;
+ unsigned int req_pict_num = 0;
+ struct dec_decoded_pict *aux_decoded_pict;
+ struct dec_decoded_pict *displayed_decoded_pict = NULL;
+ int ret;
+ unsigned int pict_id;
+ struct vdec_pict_tag_container *fld_tag_container;
+#ifdef ERROR_CONCEALMENT
+ unsigned int first_field_err_level = 0;
+ unsigned int second_field_err_level = 0;
+ unsigned int pict_last_mb = 0;
+#endif
+ struct vxd_dec_ctx *ctx;
+ unsigned int error_flag = 0;
+
+ VDEC_ASSERT(dec_str_ctx);
+ VDEC_ASSERT(str_unit);
+ VDEC_ASSERT(dec_pict);
+
+ first_fld_fwmsg = dec_pict->first_fld_fwmsg;
+ second_fld_fwmsg = dec_pict->second_fld_fwmsg;
+ pict_ref_res = dec_pict->pict_ref_res;
+ transaction_id = dec_pict->transaction_id;
+
+ VDEC_ASSERT(picture);
+ pict_buf = picture->disp_pict_buf.pict_buf;
+ VDEC_ASSERT(pict_buf);
+ comseq_hdrinfo = &pict_buf->ddstr_context->comseq_hdr_info;
+
+ /* Create a container for decoded picture. */
+ decoded_pict = kzalloc(sizeof(*decoded_pict), GFP_KERNEL);
+ VDEC_ASSERT(decoded_pict);
+ if (!decoded_pict)
+ return IMG_ERROR_OUT_OF_MEMORY;
+
+ decoded_pict->pict = picture;
+ decoded_pict->first_fld_fwmsg = first_fld_fwmsg;
+ decoded_pict->second_fld_fwmsg = second_fld_fwmsg;
+ decoded_pict->pict_ref_res = pict_ref_res;
+ decoded_pict->transaction_id = transaction_id;
+
+ /* Populate the decoded picture information structure. */
+ picture->dec_pict_info->pict_state = VDEC_PICT_STATE_DECODED;
+
+ memcpy(&picture->dec_pict_info->first_fld_tag_container.pict_hwcrc,
+ &first_fld_fwmsg->pict_hwcrc,
+ sizeof(picture->dec_pict_info->first_fld_tag_container.pict_hwcrc));
+
+ memcpy(&picture->dec_pict_info->second_fld_tag_container.pict_hwcrc,
+ &second_fld_fwmsg->pict_hwcrc,
+ sizeof(picture->dec_pict_info->second_fld_tag_container.pict_hwcrc));
+
+ buf_control =
+ (struct vdecfw_buffer_control *)decoded_pict->pict_ref_res->fw_ctrlbuf.cpu_virt;
+ if (buf_control->second_field_of_pair) {
+ /* Search the first field and fill the second_fld_tag_container */
+ unsigned int prev_dec_pict_id =
+ get_prev_picture_id(GET_STREAM_PICTURE_ID(decoded_pict->transaction_id));
+ prev_decoded_pict =
+ decoder_get_decoded_pict_of_stream(prev_dec_pict_id,
+ &dec_str_ctx->str_decd_pict_list);
+
+ if (prev_decoded_pict) {
+ memcpy(&picture->dec_pict_info->second_fld_tag_container.pict_hwcrc,
+ &prev_decoded_pict->first_fld_fwmsg->pict_hwcrc,
+ sizeof
+ (picture->dec_pict_info->second_fld_tag_container.pict_hwcrc));
+ } else {
+ pr_warn("[USERSID=0x%08X] [TID 0x%08X] Failed to find decoded picture to attach second_fld_tag_container",
+ dec_str_ctx->config.user_str_id,
+ decoded_pict->transaction_id);
+ }
+ prev_decoded_pict = NULL;
+ }
+
+ /* Report any issues in decoding */
+ if (decoded_pict->pict->dec_pict_info->err_flags)
+ pr_warn("[USERSID=0x%08X] [PID=0x%08X] BSPP reported errors [flags: 0x%08X]",
+ dec_str_ctx->config.user_str_id,
+ decoded_pict->pict->pict_id,
+ decoded_pict->pict->dec_pict_info->err_flags);
+
+ if ((decoded_pict->first_fld_fwmsg->pict_attrs.fe_err &
+ FLAG_MASK(VDECFW_MSGFLAG_DECODED_FEERROR_ENTDECERROR)) ||
+ (decoded_pict->second_fld_fwmsg->pict_attrs.fe_err &
+ FLAG_MASK(VDECFW_MSGFLAG_DECODED_FEERROR_ENTDECERROR))) {
+ pr_warn("[USERSID=0x%08X] [TID 0x%08X] Front-end HW processing terminated prematurely due to an error.",
+ dec_str_ctx->config.user_str_id,
+ decoded_pict->transaction_id);
+ picture->dec_pict_info->err_flags |= VDEC_ERROR_FEHW_DECODE;
+ }
+
+ if ((decoded_pict->first_fld_fwmsg->pict_attrs.fe_err &
+ FLAG_MASK(VDECFW_MSGFLAG_DECODED_FEERROR_SRERROR)) ||
+ (decoded_pict->second_fld_fwmsg->pict_attrs.fe_err &
+ FLAG_MASK(VDECFW_MSGFLAG_DECODED_FEERROR_SRERROR))) {
+ pr_warn("[USERSID=0x%08X] [TID 0x%08X] HW Shift Register access returned an error during FEHW parsing.",
+ dec_str_ctx->config.user_str_id,
+ decoded_pict->transaction_id);
+ picture->dec_pict_info->err_flags |= VDEC_ERROR_SR_ERROR;
+ }
+
+ if ((decoded_pict->first_fld_fwmsg->pict_attrs.fe_err &
+ FLAG_MASK(VDECFW_MSGFLAG_DECODED_FEERROR_HWWDT)) ||
+ (decoded_pict->second_fld_fwmsg->pict_attrs.fe_err &
+ FLAG_MASK(VDECFW_MSGFLAG_DECODED_FEERROR_HWWDT))) {
+ pr_warn("[USERSID=0x%08X] [TID 0x%08X] Front-end HW processing timed-out.",
+ dec_str_ctx->config.user_str_id,
+ decoded_pict->transaction_id);
+ picture->dec_pict_info->err_flags |= VDEC_ERROR_FEHW_TIMEOUT;
+ }
+
+ if ((decoded_pict->first_fld_fwmsg->pict_attrs.fe_err &
+ FLAG_MASK(VDECFW_MSGFLAG_DECODED_MISSING_REFERENCES)) ||
+ (decoded_pict->second_fld_fwmsg->pict_attrs.fe_err &
+ FLAG_MASK(VDECFW_MSGFLAG_DECODED_MISSING_REFERENCES))) {
+ pr_warn("[USERSID=0x%08X] [TID 0x%08X] There are missing references for the current frame. May have corruption",
+ dec_str_ctx->config.user_str_id,
+ decoded_pict->transaction_id);
+		/*
+		 * This is not a serious error; indicate to the host app that
+		 * the frame should be dropped as it may be corrupted.
+		 */
+ picture->dec_pict_info->err_flags |=
+ VDEC_ERROR_MISSING_REFERENCES;
+ }
+
+ if ((decoded_pict->first_fld_fwmsg->pict_attrs.fe_err &
+ FLAG_MASK(VDECFW_MSGFLAG_DECODED_MMCO_ERROR)) ||
+ (decoded_pict->second_fld_fwmsg->pict_attrs.fe_err &
+ FLAG_MASK(VDECFW_MSGFLAG_DECODED_MMCO_ERROR))) {
+		pr_warn("[USERSID=0x%08X] [TID 0x%08X] MMCO error occurred when processing the current frame. May have corruption",
+ dec_str_ctx->config.user_str_id,
+ decoded_pict->transaction_id);
+
+		/*
+		 * This is not a serious error; indicate to the host app that
+		 * the frame should be dropped as it may be corrupted.
+		 */
+ picture->dec_pict_info->err_flags |= VDEC_ERROR_MMCO;
+ }
+
+ if ((decoded_pict->first_fld_fwmsg->pict_attrs.fe_err &
+ FLAG_MASK(VDECFW_MSGFLAG_DECODED_MBS_DROPPED_ERROR)) ||
+ (decoded_pict->second_fld_fwmsg->pict_attrs.fe_err &
+ FLAG_MASK(VDECFW_MSGFLAG_DECODED_MBS_DROPPED_ERROR))) {
+ pr_warn("[USERSID=0x%08X] [TID 0x%08X] Some macroblocks were dropped when processing the current frame. May have corruption",
+ dec_str_ctx->config.user_str_id,
+ decoded_pict->transaction_id);
+
+		/*
+		 * This is not a serious error; indicate to the host app that
+		 * the frame should be dropped as it may be corrupted.
+		 */
+ picture->dec_pict_info->err_flags |= VDEC_ERROR_MBS_DROPPED;
+ }
+
+ if (decoded_pict->first_fld_fwmsg->pict_attrs.no_be_wdt > 0) {
+ pr_warn("[USERSID=0x%08X] [TID 0x%08X] Back-end HW processing timed-out. Aborted slices %d",
+ dec_str_ctx->config.user_str_id,
+ decoded_pict->transaction_id,
+ decoded_pict->first_fld_fwmsg->pict_attrs.no_be_wdt);
+ picture->dec_pict_info->err_flags |= VDEC_ERROR_BEHW_TIMEOUT;
+ }
+
+ if (decoded_pict->second_fld_fwmsg->pict_attrs.no_be_wdt > 0) {
+ pr_warn("[USERSID=0x%08X] [TID 0x%08X] Back-end HW processing timed-out. Aborted slices %d",
+ dec_str_ctx->config.user_str_id,
+ decoded_pict->transaction_id,
+ decoded_pict->second_fld_fwmsg->pict_attrs.no_be_wdt);
+ picture->dec_pict_info->err_flags |= VDEC_ERROR_BEHW_TIMEOUT;
+ }
+
+#ifdef ERROR_CONCEALMENT
+ /* Estimate error level in percentage */
+ if (decoder_get_pict_processing_info(dec_core_ctx, dec_str_ctx, pict_hdrinfo,
+ decoded_pict, dec_pict, &pict_last_mb) == TRUE) {
+ if (pict_last_mb) {
+ first_field_err_level = 100 - ((100 * (pict_last_mb -
+ decoded_pict->first_fld_fwmsg->pict_attrs.mbs_dropped +
+ decoded_pict->first_fld_fwmsg->pict_attrs.mbs_recovered)) /
+ pict_last_mb);
+
+ second_field_err_level = 100 - ((100 * (pict_last_mb -
+ decoded_pict->second_fld_fwmsg->pict_attrs.mbs_dropped +
+ decoded_pict->second_fld_fwmsg->pict_attrs.mbs_recovered)) /
+ pict_last_mb);
+ }
+
+ /* does not work properly with discontinuous mbs */
+ if (!pict_hdrinfo->discontinuous_mbs)
+ picture->dec_pict_info->err_level = first_field_err_level >
+ second_field_err_level ?
+ first_field_err_level : second_field_err_level;
+
+ VDEC_ASSERT(picture->dec_pict_info->err_level <= 100);
+ if (picture->dec_pict_info->err_level)
+ pr_warn("[USERSID=0x%08X] [TID 0x%08X] Picture error level: %d(%%)",
+ dec_str_ctx->config.user_str_id, decoded_pict->transaction_id,
+ picture->dec_pict_info->err_level);
+ }
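+
+	/*
+	 * I.e. per field the error level is estimated as
+	 *	100 - 100 * (last_mb - mbs_dropped + mbs_recovered) / last_mb
+	 * and the worse of the two field estimates is reported for the
+	 * picture (except for streams with discontinuous macroblocks).
+	 */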
+#endif
+
+ if (decoded_pict->first_fld_fwmsg->pict_attrs.pict_attrs.dwrfired ||
+ decoded_pict->second_fld_fwmsg->pict_attrs.pict_attrs.dwrfired) {
+ pr_warn("[USERSID=0x%08X] VXD Device Reset (Lockup).",
+ dec_str_ctx->config.user_str_id);
+ picture->dec_pict_info->err_flags |=
+ VDEC_ERROR_SERVICE_TIMER_EXPIRY;
+ }
+
+ if (decoded_pict->first_fld_fwmsg->pict_attrs.pict_attrs.mmufault ||
+ decoded_pict->second_fld_fwmsg->pict_attrs.pict_attrs.mmufault) {
+ pr_warn("[USERSID=0x%08X] VXD Device Reset (MMU fault).",
+ dec_str_ctx->config.user_str_id);
+ picture->dec_pict_info->err_flags |= VDEC_ERROR_MMU_FAULT;
+ }
+
+ if (decoded_pict->first_fld_fwmsg->pict_attrs.pict_attrs.deverror ||
+ decoded_pict->second_fld_fwmsg->pict_attrs.pict_attrs.deverror) {
+ pr_warn("[USERSID=0x%08X] VXD Device Error (e.g. firmware load failed).",
+ dec_str_ctx->config.user_str_id);
+ picture->dec_pict_info->err_flags |= VDEC_ERROR_DEVICE;
+ }
+
+	/*
+	 * Take a copy of the decoder error flags for use in error recovery.
+	 */
+ error_flag = picture->dec_pict_info->err_flags;
+	/*
+	 * Loop over the references; for each one find the related picture
+	 * on the decoded picture list and propagate errors if needed.
+	 */
+ ret =
+ decoder_check_ref_errors(dec_str_ctx, (struct vdecfw_buffer_control *)
+ decoded_pict->pict_ref_res->fw_ctrlbuf.cpu_virt,
+ picture);
+ VDEC_ASSERT(ret == IMG_SUCCESS);
+
+ if (dec_str_ctx->config.vid_std == VDEC_STD_H264) {
+ /* Attach the supplementary data to the decoded picture. */
+ picture->dec_pict_sup_data.raw_vui_data =
+ pict_hdrinfo->h264_pict_hdr_info.raw_vui_data;
+ pict_hdrinfo->h264_pict_hdr_info.raw_vui_data = NULL;
+
+ picture->dec_pict_sup_data.raw_sei_list_first_fld =
+ pict_hdrinfo->h264_pict_hdr_info.raw_sei_data_list_first_field;
+ pict_hdrinfo->h264_pict_hdr_info.raw_sei_data_list_first_field = NULL;
+
+ picture->dec_pict_sup_data.raw_sei_list_second_fld =
+ pict_hdrinfo->h264_pict_hdr_info.raw_sei_data_list_second_field;
+ pict_hdrinfo->h264_pict_hdr_info.raw_sei_data_list_second_field = NULL;
+
+ picture->dec_pict_sup_data.h264_pict_supl_data.nal_ref_idc =
+ pict_hdrinfo->h264_pict_hdr_info.nal_ref_idc;
+
+ picture->dec_pict_sup_data.h264_pict_supl_data.frame_num =
+ pict_hdrinfo->h264_pict_hdr_info.frame_num;
+ }
+
+#ifdef HAS_HEVC
+ if (dec_str_ctx->config.vid_std == VDEC_STD_HEVC) {
+ /* Attach the supplementary data to the decoded picture. */
+ picture->dec_pict_sup_data.raw_vui_data =
+ pict_hdrinfo->hevc_pict_hdr_info.raw_vui_data;
+
+ pict_hdrinfo->hevc_pict_hdr_info.raw_vui_data = NULL;
+
+ picture->dec_pict_sup_data.raw_sei_list_first_fld =
+ pict_hdrinfo->hevc_pict_hdr_info.raw_sei_datalist_firstfield;
+
+ pict_hdrinfo->hevc_pict_hdr_info.raw_sei_datalist_firstfield = NULL;
+
+ picture->dec_pict_sup_data.raw_sei_list_second_fld =
+ pict_hdrinfo->hevc_pict_hdr_info.raw_sei_datalist_secondfield;
+
+ pict_hdrinfo->hevc_pict_hdr_info.raw_sei_datalist_secondfield = NULL;
+
+ picture->dec_pict_sup_data.hevc_pict_supl_data.pic_order_cnt =
+ buf_control->hevc_data.pic_order_count;
+ }
+#endif
+
+ if (!((buf_control->dec_pict_type == IMG_BUFFERTYPE_PAIR &&
+ VDECFW_PICMGMT_FIELD_CODED_PICTURE_EXECUTED(buf_control->picmgmt_flags)) ||
+ FLAG_IS_SET(buf_control->picmgmt_flags, VDECFW_PICMGMTFLAG_PICTURE_EXECUTED))) {
+ pr_warn("[USERSID=0x%08X] [TID 0x%08X] Picture management was not executed for this picture; forcing display.",
+ dec_str_ctx->config.user_str_id,
+ decoded_pict->transaction_id);
+ decoded_pict->force_display = TRUE;
+ }
+
+ dec_str_ctx->dec_str_st.total_pict_finished++;
+
+ /*
+ * Use next_pict_id_expected to do this check. next_dec_pict_id could
+ * differ from the expected value at this point because we failed to
+ * process a picture the last time this function ran (this is still
+ * an error, unless doing multi-core, but not the error reported here).
+ */
+ if (picture->pict_id != dec_str_ctx->next_pict_id_expected) {
+ pr_warn("[USERSID=0x%08X] ERROR: MISSING DECODED PICTURE (%d)",
+ dec_str_ctx->config.user_str_id,
+ dec_str_ctx->next_dec_pict_id);
+ }
+
+ dec_str_ctx->next_dec_pict_id =
+ get_next_picture_id(GET_STREAM_PICTURE_ID(decoded_pict->transaction_id));
+ dec_str_ctx->next_pict_id_expected = dec_str_ctx->next_dec_pict_id;
+
+ /* Add the picture itself to the decoded list */
+ next_decoded_pict = dq_first(&dec_str_ctx->str_decd_pict_list);
+ while (next_decoded_pict &&
+ !HAS_X_REACHED_Y(GET_STREAM_PICTURE_ID(next_decoded_pict->transaction_id),
+ picture->pict_id,
+ 1 << FWIF_NUMBITS_STREAM_PICTURE_ID, unsigned int)) {
+ if (next_decoded_pict !=
+ dq_last(&dec_str_ctx->str_decd_pict_list))
+ next_decoded_pict = dq_next(next_decoded_pict);
+ else
+ next_decoded_pict = NULL;
+ }
+
+ if (next_decoded_pict)
+ dq_addbefore(next_decoded_pict, decoded_pict);
+ else
+ dq_addtail(&dec_str_ctx->str_decd_pict_list, decoded_pict);
+
+ dec_str_ctx->dec_str_st.num_pict_decoded++;
+
+ pr_debug("%s: number of pictures decoded = %d\n",
+ __func__, dec_str_ctx->dec_str_st.num_pict_decoded);
+ /* Process the decoded pictures in the encoded order */
+ decoded_pict = dq_first(&dec_str_ctx->str_decd_pict_list);
+ VDEC_ASSERT(decoded_pict);
+ if (!decoded_pict)
+ return IMG_ERROR_UNEXPECTED_STATE;
+
+ ret = dec_str_ctx->str_processed_cb((void *)dec_str_ctx->usr_int_data,
+ VXD_CB_PICT_DECODED, (void *)picture);
+
+ VDEC_ASSERT(ret == IMG_SUCCESS);
+ if (ret != IMG_SUCCESS)
+ return ret;
+ /*
+ * Loop on the unprocessed pictures until we failed to process one
+ * or we have processed them all
+ */
+ for (next_decoded_pict = decoder_next_picture(decoded_pict,
+ dec_str_ctx->next_dec_pict_id,
+ &dec_str_ctx->str_decd_pict_list);
+ next_decoded_pict;
+ next_decoded_pict = decoder_next_picture(decoded_pict,
+ dec_str_ctx->next_dec_pict_id,
+ &dec_str_ctx->str_decd_pict_list)) {
+ unsigned int i = 0;
+ struct dec_decoded_pict *display_pict = NULL;
+ struct dec_decoded_pict *release_pict = NULL;
+ unsigned char last_to_display_for_seq = FALSE;
+
+ /*
+ * Keep a reference to the picture currently being processed in
+ * decoded_pict so that its process_failed flag can be cleared
+ * before returning.
+ */
+ decoded_pict = next_decoded_pict;
+ if (!decoded_pict->force_display) {
+ struct vdecfw_buffer_control *buf_ctrl = NULL;
+
+ buf_ctrl = (struct vdecfw_buffer_control *)
+ decoded_pict->pict_ref_res->fw_ctrlbuf.cpu_virt;
+
+ if (buf_ctrl->real_data.width && buf_ctrl->real_data.height) {
+ /*
+ * Firmware sets image size as it is in
+ * bitstream.
+ */
+ picture->dec_pict_info->disp_info.disp_region.width =
+ buf_ctrl->real_data.width;
+ picture->dec_pict_info->disp_info.disp_region.height =
+ buf_ctrl->real_data.height;
+ picture->dec_pict_info->disp_info.disp_region.top_offset = 0;
+ picture->dec_pict_info->disp_info.disp_region.left_offset = 0;
+
+ picture->dec_pict_info->rend_info.rend_pict_size.width =
+ picture->dec_pict_info->disp_info.disp_region.width;
+ picture->dec_pict_info->rend_info.rend_pict_size.height =
+ picture->dec_pict_info->disp_info.disp_region.height;
+
+ /*
+ * Update the encoded size with the values coded in
+ * the bitstream so that the golden image can be
+ * loaded correctly.
+ */
+ picture->dec_pict_info->disp_info.enc_disp_region.width =
+ buf_ctrl->real_data.width;
+ picture->dec_pict_info->disp_info.enc_disp_region.height =
+ buf_ctrl->real_data.height;
+ }
+
+ decoded_pict->pict->dec_pict_info->timestamp =
+ buf_ctrl->real_data.timestamp;
+ decoded_pict->pict->dec_pict_info->disp_info.top_fld_first =
+ buf_ctrl->top_field_first;
+
+ decoded_pict->pict->dec_pict_info->id_for_hwcrc_chk =
+ GET_STREAM_PICTURE_ID(decoded_pict->transaction_id) - 1;
+ decoded_pict->pict->dec_pict_info->id_for_hwcrc_chk +=
+ dec_str_ctx->dec_str_st.flds_as_frm_decodes;
+
+ if (buf_ctrl->dec_pict_type == IMG_BUFFERTYPE_PAIR &&
+ !buf_ctrl->second_field_of_pair)
+ dec_str_ctx->dec_str_st.flds_as_frm_decodes++;
+
+ if (buf_ctrl->second_field_of_pair) {
+ /*
+ * The second field of a pair is always of the
+ * complementary type to the first field tag of
+ * the previous picture.
+ */
+ unsigned int prev_dec_pict_id =
+ get_prev_picture_id(GET_STREAM_PICTURE_ID(decoded_pict->transaction_id));
+
+ prev_decoded_pict =
+ decoder_get_decoded_pict_of_stream
+ (prev_dec_pict_id,
+ &dec_str_ctx->str_decd_pict_list);
+ if (prev_decoded_pict) {
+ fld_tag_container =
+ &prev_decoded_pict->pict->dec_pict_info->second_fld_tag_container;
+ fld_tag_container->pict_tag_param =
+ decoded_pict->pict->dec_pict_info->first_fld_tag_container.pict_tag_param;
+
+ /*
+ * Copy the first field info in the
+ * proper place
+ */
+ memcpy(&fld_tag_container->pict_hwcrc,
+ &first_fld_fwmsg->pict_hwcrc,
+ sizeof(fld_tag_container->pict_hwcrc));
+
+ /*
+ * Attach the raw SEI data list for a
+ * second field to a picture.
+ */
+ prev_decoded_pict->pict->dec_pict_sup_data.raw_sei_list_second_fld =
+ decoded_pict->pict->dec_pict_sup_data.raw_sei_list_first_fld;
+
+ prev_decoded_pict->pict->dec_pict_info->disp_info.top_fld_first =
+ buf_ctrl->top_field_first;
+
+ /* Mark this picture as merged fields. */
+ prev_decoded_pict->pict->dec_pict_sup_data.merged_flds =
+ TRUE;
+ /* Mark the picture that was merged to the previous one. */
+ decoded_pict->merged = TRUE;
+ } else {
+ pr_warn("[USERSID=0x%08X] [TID 0x%08X] Failed to find decoded picture to attach tag",
+ dec_str_ctx->config.user_str_id,
+ decoded_pict->transaction_id);
+ }
+ } else {
+ /*
+ * A picture that is not the second field of a pair
+ * correlates its tag with its type by setting the
+ * first field tag container as follows.
+ */
+ decoded_pict->pict->dec_pict_info->first_fld_tag_container.pict_type =
+ buf_ctrl->dec_pict_type;
+ memcpy(&picture->dec_pict_info->first_fld_tag_container.pict_hwcrc,
+ &first_fld_fwmsg->pict_hwcrc,
+ sizeof(picture->dec_pict_info->first_fld_tag_container.pict_hwcrc));
+ }
+
+ /*
+ * Update the id of the next picture to process. It has
+ * to be updated always (even if processing fails),
+ * because it is passed both to decoder_next_picture()
+ * (and the contiguous-picture helper inside it) and to
+ * the corner case check below.
+ */
+ dec_str_ctx->next_dec_pict_id =
+ get_next_picture_id(GET_STREAM_PICTURE_ID
+ (decoded_pict->transaction_id));
+ /*
+ * Display all the pictures in the list that have been
+ * decoded and signalled by the firmware to be displayed.
+ */
+ for (i = decoded_pict->disp_idx;
+ i < buf_ctrl->display_list_length &&
+ !decoded_pict->process_failed;
+ i++, decoded_pict->disp_idx++) {
+ /*
+ * Display picture if it has been decoded
+ * (i.e. in decoded list).
+ */
+ display_pict = decoder_get_decoded_pict
+ (buf_ctrl->display_list[i],
+ &dec_str_ctx->str_decd_pict_list);
+ if (display_pict) {
+ if (FLAG_IS_SET(buf_ctrl->display_flags[i],
+ VDECFW_BUFFLAG_DISPLAY_FIELD_CODED) &&
+ (!FLAG_IS_SET
+ (buf_ctrl->display_flags[i],
+ VDECFW_BUFFLAG_DISPLAY_SINGLE_FIELD))) {
+ display_pict->pict->dec_pict_info->buf_type =
+ IMG_BUFFERTYPE_PAIR;
+ if (FLAG_IS_SET
+ (buf_ctrl->display_flags[i],
+ VDECFW_BUFFLAG_DISPLAY_INTERLACED_FIELDS))
+ display_pict->pict->dec_pict_info->interlaced_flds =
+ TRUE;
+ } else if (FLAG_IS_SET
+ (buf_ctrl->display_flags[i],
+ VDECFW_BUFFLAG_DISPLAY_FIELD_CODED) &&
+ FLAG_IS_SET
+ (buf_ctrl->display_flags[i],
+ VDECFW_BUFFLAG_DISPLAY_SINGLE_FIELD)) {
+ display_pict->pict->dec_pict_info->buf_type =
+ FLAG_IS_SET
+ (buf_ctrl->display_flags[i],
+ VDECFW_BUFFLAG_DISPLAY_BOTTOM_FIELD) ?
+ IMG_BUFFERTYPE_FIELD_BOTTOM :
+ IMG_BUFFERTYPE_FIELD_TOP;
+ } else {
+ display_pict->pict->dec_pict_info->buf_type =
+ IMG_BUFFERTYPE_FRAME;
+ }
+
+ display_pict->pict->dec_pict_info->view_id =
+ buf_ctrl->display_view_ids[i];
+
+ /*
+ * When no reference pictures are left to
+ * display and this is the last display
+ * picture in response to the last decoded
+ * picture, signal the end of the sequence.
+ */
+ if (decoded_pict->pict->last_pict_in_seq &&
+ i == (buf_ctrl->display_list_length - 1))
+ last_to_display_for_seq = TRUE;
+
+ if (!display_pict->displayed) {
+#ifdef DEBUG_DECODER_DRIVER
+ pr_info("[USERSID=0x%08X] [TID=0x%08X] DISPLAY",
+ dec_str_ctx->config.user_str_id,
+ buf_ctrl->display_list[i]);
+#endif
+ display_pict->displayed = TRUE;
+ pict_id = GET_STREAM_PICTURE_ID
+ (buf_ctrl->display_list[i]);
+
+ ret = decoder_picture_display
+ (dec_str_ctx, pict_id,
+ last_to_display_for_seq);
+ }
+ } else {
+ /*
+ * In a single-core scenario we should
+ * never get here.
+ */
+ pr_warn("[USERSID=0x%08X] Failed to find decoded picture [TID = 0x%08X] to send for display",
+ dec_str_ctx->config.user_str_id,
+ buf_ctrl->display_list[i]);
+ }
+ VDEC_ASSERT(ret == IMG_SUCCESS);
+ if (ret != IMG_SUCCESS)
+ return ret;
+ }
+
+ /* Release all unused pictures (firmware request) */
+ for (i = decoded_pict->rel_idx;
+ i < buf_ctrl->release_list_length &&
+ !decoded_pict->process_failed;
+ i++, decoded_pict->rel_idx++) {
+ release_pict = decoder_get_decoded_pict
+ (buf_ctrl->release_list[i],
+ &dec_str_ctx->str_decd_pict_list);
+ if (release_pict) {
+#ifdef DEBUG_DECODER_DRIVER
+ pr_info("[USERSID=0x%08X] RELEASE( ): PIC_ID[%d]",
+ dec_str_ctx->config.user_str_id,
+ release_pict->pict->pict_id);
+#endif
+ /*
+ * Signal releasing this picture to upper
+ * layers.
+ */
+ decoder_picture_release(dec_str_ctx,
+ GET_STREAM_PICTURE_ID
+ (buf_ctrl->release_list[i]),
+ release_pict->displayed,
+ release_pict->merged);
+ if (release_pict->processed) {
+ /*
+ * If the decoded picture has been
+ * processed, destroy now.
+ */
+ ret = decoder_decoded_picture_destroy(dec_str_ctx,
+ release_pict,
+ FALSE);
+ } else {
+ /*
+ * If the decoded picture is not
+ * processed just destroy the
+ * containing picture.
+ */
+ pict_id = GET_STREAM_PICTURE_ID
+ (buf_ctrl->release_list[i]);
+ ret = decoder_picture_destroy(dec_str_ctx,
+ pict_id, FALSE);
+ VDEC_ASSERT(ret == IMG_SUCCESS);
+ if (ret != IMG_SUCCESS)
+ return ret;
+ release_pict->pict = NULL;
+ }
+ VDEC_ASSERT(ret == IMG_SUCCESS);
+ if (ret != IMG_SUCCESS)
+ return ret;
+ } else {
+ /*
+ * In a single-core scenario we should not
+ * get here.
+ */
+#ifdef DEBUG_DECODER_DRIVER
+ pr_info("[USERSID=0x%08X] Failed to find decoded picture [TID = 0x%08X] to release",
+ dec_str_ctx->config.user_str_id,
+ buf_ctrl->release_list[i]);
+#endif
+ }
+ }
+ } else {
+ /* Always display the picture if we have no hardware */
+ if (!decoded_pict->displayed) {
+#ifdef DEBUG_DECODER_DRIVER
+ pr_info("[USERSID=0x%08X] [TID=0x%08X] DISPLAY",
+ dec_str_ctx->config.user_str_id,
+ decoded_pict->transaction_id);
+#endif
+ decoded_pict->displayed = TRUE;
+ ret = decoder_picture_display
+ (dec_str_ctx,
+ decoded_pict->pict->pict_id,
+ decoded_pict->pict->last_pict_in_seq);
+ VDEC_ASSERT(ret == IMG_SUCCESS);
+ if (ret != IMG_SUCCESS)
+ return ret;
+ }
+
+ /* Always release the picture if we have no hardware */
+ ret = decoder_picture_destroy(dec_str_ctx,
+ decoded_pict->pict->pict_id,
+ FALSE);
+ VDEC_ASSERT(ret == IMG_SUCCESS);
+ if (ret != IMG_SUCCESS)
+ return ret;
+
+ decoded_pict->pict = NULL;
+ }
+
+ /* If processing of the current picture did not fail */
+ if (!decoded_pict->process_failed) {
+ decoded_pict->processed = TRUE;
+
+ /*
+ * If the current picture has been released then
+ * remove the container from the decoded list
+ */
+ if (!decoded_pict->pict) {
+ /*
+ * Only destroy the decoded picture once it is processed
+ * and the fw has instructed to release the picture.
+ */
+ ret = decoder_decoded_picture_destroy(dec_str_ctx,
+ decoded_pict, FALSE);
+ VDEC_ASSERT(ret == IMG_SUCCESS);
+ if (ret != IMG_SUCCESS)
+ return ret;
+
+ decoded_pict = NULL;
+ } /* end if (decoded_pict->pict == NULL) */
+ } /* end if (!decoded_pict->process_failed) */
+ } /* end for */
+
+ /*
+ * Always clear the process_failed flag to ensure that this picture
+ * will be processed on the next function call
+ */
+ if (decoded_pict)
+ decoded_pict->process_failed = FALSE;
+
+ /*
+ * Go through the list of decoded pictures to check whether any
+ * pictures that should be displayed are still outstanding due to
+ * picture management errors.
+ * First, get the minimum required number of picture buffers.
+ */
+ vdecddutils_ref_pict_get_maxnum(&dec_str_ctx->config,
+ comseq_hdrinfo, &req_pict_num);
+ req_pict_num += comseq_hdrinfo->interlaced_frames ? 2 : 1;
+
+ ret = dec_str_ctx->core_query_cb(dec_str_ctx->usr_int_data,
+ DECODER_CORE_GET_RES_LIMIT,
+ &res_limit);
+
+ /* Start the procedure only if there are enough resources available. */
+ if (res_limit >= req_pict_num) {
+ /* Allow for one picture buffer for display. */
+ res_limit--;
+
+ /*
+ * Count the number of decoded pictures that were not
+ * displayed yet.
+ */
+ aux_decoded_pict = dq_first(&dec_str_ctx->str_decd_pict_list);
+ while (aux_decoded_pict) {
+ if (aux_decoded_pict->pict) {
+ dec_pict_num++;
+ if (!displayed_decoded_pict)
+ displayed_decoded_pict =
+ aux_decoded_pict;
+ }
+ if (aux_decoded_pict !=
+ dq_last(&dec_str_ctx->str_decd_pict_list))
+ aux_decoded_pict = dq_next(aux_decoded_pict);
+ else
+ aux_decoded_pict = NULL;
+ }
+ }
+
+ /* If there is at least one not displayed picture... */
+ if (displayed_decoded_pict) {
+ /*
+ * While the number of undisplayed decoded pictures exceeds
+ * the maximum number of pictures that VDEC is allowed to
+ * hold...
+ */
+ while (dec_pict_num > res_limit) {
+ pr_warn("[USERSID=0x%08X] Number of outstanding decoded pictures exceeded number of available pictures buffers.",
+ dec_str_ctx->config.user_str_id);
+
+ if (!displayed_decoded_pict) {
+ VDEC_ASSERT(0);
+ return -EINVAL;
+ }
+ /* Find the picture with the least picture id. */
+ aux_decoded_pict = dq_next(displayed_decoded_pict);
+ while (aux_decoded_pict) {
+ if (aux_decoded_pict !=
+ dq_last(&dec_str_ctx->str_decd_pict_list)) {
+ if (aux_decoded_pict->pict &&
+ aux_decoded_pict->pict->pict_id <
+ displayed_decoded_pict->pict->pict_id)
+ displayed_decoded_pict = aux_decoded_pict;
+
+ aux_decoded_pict = dq_next(aux_decoded_pict);
+ } else {
+ if (aux_decoded_pict->pict &&
+ aux_decoded_pict->pict->pict_id <
+ displayed_decoded_pict->pict->pict_id)
+ displayed_decoded_pict = aux_decoded_pict;
+
+ aux_decoded_pict = NULL;
+ }
+ }
+
+ /* Display and release the picture with the least picture id. */
+ if (!displayed_decoded_pict->displayed) {
+ pr_warn("[USERSID=0x%08X] [TID=0x%08X] DISPLAY FORCED",
+ dec_str_ctx->config.user_str_id,
+ displayed_decoded_pict->transaction_id);
+ displayed_decoded_pict->displayed = TRUE;
+ ret = decoder_picture_display
+ (dec_str_ctx,
+ displayed_decoded_pict->pict->pict_id,
+ displayed_decoded_pict->pict->last_pict_in_seq);
+ VDEC_ASSERT(ret == IMG_SUCCESS);
+ if (ret != IMG_SUCCESS)
+ return ret;
+ }
+
+ ret = decoder_picture_destroy(dec_str_ctx,
+ displayed_decoded_pict->pict->pict_id,
+ FALSE);
+ VDEC_ASSERT(ret == IMG_SUCCESS);
+ if (ret != IMG_SUCCESS)
+ return ret;
+
+ displayed_decoded_pict->pict = NULL;
+ displayed_decoded_pict->processed = TRUE;
+
+ ret = decoder_decoded_picture_destroy(dec_str_ctx, displayed_decoded_pict,
+ FALSE);
+ VDEC_ASSERT(ret == IMG_SUCCESS);
+ if (ret != IMG_SUCCESS)
+ return ret;
+
+ displayed_decoded_pict = NULL;
+
+ /*
+ * Decrease the number of not displayed decoded
+ * pictures.
+ */
+ dec_pict_num--;
+ }
+ }
+
+#ifdef ERROR_RECOVERY_SIMULATION
+ /*
+ * This part of the code should execute only when the
+ * DEBUG_FW_ERR_RECOVERY flag is enabled. It reads the error flag
+ * attribute from user space to inject fake errors for testing the
+ * firmware error recovery.
+ */
+ if (fw_error_value != VDEC_ERROR_MAX) {
+ error_flag = error_flag | (1 << fw_error_value);
+ /* Reset it to VDEC_ERROR_MAX. */
+ fw_error_value = VDEC_ERROR_MAX;
+ }
+#endif
+
+ /*
+ * Whenever the error flag is set, handle the error case by
+ * forwarding the error to the stream processed callback.
+ */
+ if (error_flag) {
+ pr_err("%s : %d err_flags: 0x%x\n", __func__, __LINE__, error_flag);
+ ret = dec_str_ctx->str_processed_cb((void *)dec_str_ctx->usr_int_data,
+ VXD_CB_ERROR_FATAL, &error_flag);
+ }
+ /*
+ * Check for EOS on the bitstream and propagate it to the
+ * picture buffer.
+ */
+ ctx = dec_str_ctx->vxd_dec_ctx;
+ ctx->num_decoding--;
+ if (ctx->eos) {
+#ifdef DEBUG_DECODER_DRIVER
+ pr_info("EOS reached\n");
+#endif
+
+ ret = dec_str_ctx->str_processed_cb((void *)dec_str_ctx->usr_int_data,
+ VXD_CB_STR_END, NULL);
+ VDEC_ASSERT(ret == IMG_SUCCESS);
+ }
+
+ return ret;
+}
+
+/*
+ * @Function decoder_service_firmware_response
+ */
+int decoder_service_firmware_response(void *dec_str_ctx_arg, unsigned int *msg,
+ unsigned int msg_size, unsigned int msg_flags)
+{
+ int ret = IMG_SUCCESS;
+ struct dec_decpict *dec_pict = NULL;
+ unsigned char head_of_queue = TRUE;
+ struct dec_str_ctx *dec_str_ctx;
+ struct dec_str_unit *dec_str_unit;
+ unsigned char pict_start = FALSE;
+ enum vdecdd_str_unit_type str_unit_type;
+ struct vdecdd_picture *picture;
+ struct decoder_pict_fragment *pict_fragment;
+ struct dec_str_ctx *dec_strctx;
+ struct dec_core_ctx *dec_core_ctx;
+
+ /* validate input arguments */
+ if (!dec_str_ctx_arg || !msg) {
+ VDEC_ASSERT(0);
+ return IMG_ERROR_INVALID_PARAMETERS;
+ }
+
+ dec_strctx = decoder_stream_get_context(dec_str_ctx_arg);
+
+ dec_core_ctx = decoder_str_ctx_to_core_ctx(dec_strctx);
+
+ if (!dec_core_ctx) {
+ pr_err("%s: dec_core_ctx is NULL\n", __func__);
+ VDEC_ASSERT(0);
+ return IMG_ERROR_INVALID_PARAMETERS;
+ }
+
+ pr_debug("%s : process firmware response\n", __func__);
+ ret = hwctrl_process_msg(dec_core_ctx->hw_ctx, msg_flags, msg, &dec_pict);
+ VDEC_ASSERT(ret == IMG_SUCCESS);
+ if (ret != IMG_SUCCESS)
+ return ret;
+
+ if (!dec_pict || (dec_pict->state != DECODER_PICTURE_STATE_DECODED &&
+ dec_pict->state != DECODER_PICTURE_STATE_TO_DISCARD))
+ return IMG_ERROR_UNEXPECTED_STATE;
+
+ /*
+ * Try and locate the stream context in the list of active
+ * streams.
+ */
+ VDEC_ASSERT(dec_core_ctx->dec_ctx);
+ dec_str_ctx = lst_first(&dec_core_ctx->dec_ctx->str_list);
+ if (!dec_str_ctx) {
+ VDEC_ASSERT(0);
+ return -EINVAL;
+ }
+
+ while (dec_str_ctx) {
+ if (dec_str_ctx == dec_pict->dec_str_ctx)
+ break;
+
+ dec_str_ctx = lst_next(dec_str_ctx);
+ }
+
+ /*
+ * If the stream is not in the list of active streams then
+ * it must have been destroyed.
+ * This interrupt should be ignored.
+ */
+ if (dec_str_ctx != dec_pict->dec_str_ctx)
+ return IMG_SUCCESS;
+
+ /*
+ * Retrieve the picture from the head of the core decode queue
+ * primarily to obtain the correct stream context.
+ */
+ hwctrl_removefrom_piclist(dec_core_ctx->hw_ctx, dec_pict);
+
+ if (!dec_str_ctx) {
+ VDEC_ASSERT(0);
+ return -EINVAL;
+ }
+ dec_str_ctx->avail_slots++;
+ VDEC_ASSERT(dec_str_ctx->avail_slots > 0);
+
+ /*
+ * Store the stream context of the picture that has been
+ * decoded.
+ */
+ dec_str_ctx = dec_pict->dec_str_ctx;
+ VDEC_ASSERT(dec_str_ctx);
+
+ if (!dec_str_ctx)
+ return IMG_ERROR_UNEXPECTED_STATE;
+
+ /*
+ * The picture has been discarded before the EOP unit;
+ * recover the decoder to a valid state.
+ */
+ if (!dec_pict->eop_found) {
+ VDEC_ASSERT(dec_pict == dec_str_ctx->cur_pict);
+
+ dec_core_ctx->busy = FALSE;
+ dec_str_ctx->cur_pict = NULL;
+ }
+
+ /*
+ * Peek the first stream unit and validate against core
+ * queue to ensure that this really is the next picture
+ * for the stream.
+ */
+ dec_str_unit = lst_first(&dec_str_ctx->pend_strunit_list);
+ if (dec_str_unit) {
+ if (dec_str_unit->dec_pict != dec_pict) {
+ head_of_queue = FALSE;
+
+ /*
+ * For pictures to be decoded
+ * out-of-order there must be
+ * more than one decoder core.
+ */
+ VDEC_ASSERT(dec_str_ctx->decctx->num_pipes > 1);
+ while (dec_str_unit) {
+ dec_str_unit = lst_next(dec_str_unit);
+ if (dec_str_unit && dec_str_unit->dec_pict == dec_pict)
+ break;
+ }
+ }
+ VDEC_ASSERT(dec_str_unit);
+ if (!dec_str_unit)
+ return IMG_ERROR_FATAL;
+
+ VDEC_ASSERT(dec_str_unit->dec_pict == dec_pict);
+ VDEC_ASSERT(dec_str_unit->str_unit->str_unit_type ==
+ VDECDD_STRUNIT_PICTURE_START);
+ }
+
+ /*
+ * Process all units from the pending stream list until
+ * the next picture start.
+ */
+ while (dec_str_unit && !pict_start) {
+ /*
+ * Actually remove the unit now from the
+ * pending stream list.
+ */
+ lst_remove(&dec_str_ctx->pend_strunit_list, dec_str_unit);
+ if (!dec_str_unit->str_unit || !dec_pict)
+ break;
+
+ str_unit_type = dec_str_unit->str_unit->str_unit_type;
+
+ if (str_unit_type != VDECDD_STRUNIT_PICTURE_START)
+ break;
+
+ dec_str_ctx = dec_pict->dec_str_ctx;
+
+ dec_str_ctx->dec_str_st.num_pict_decoding--;
+ dec_str_ctx->dec_str_st.total_pict_decoded++;
+
+ ret = idgen_gethandle(dec_str_ctx->pict_idgen,
+ GET_STREAM_PICTURE_ID(dec_str_unit->dec_pict->transaction_id),
+ (void **)&picture);
+ VDEC_ASSERT(ret == IMG_SUCCESS);
+ if (ret != IMG_SUCCESS || !picture) {
+ pr_err("[USERSID=0x%08X] Failed to find picture from ID",
+ dec_str_ctx->config.user_str_id);
+ return IMG_ERROR_INVALID_ID;
+ }
+
+ VDEC_ASSERT(picture == dec_str_unit->str_unit->dd_pict_data);
+
+ /* Hold a reference to the last context on the BE */
+ if (dec_str_ctx->last_be_pict_dec_res && HAS_X_PASSED_Y
+ (picture->pict_id,
+ GET_STREAM_PICTURE_ID(dec_str_ctx->last_be_pict_dec_res->transaction_id),
+ 1 << FWIF_NUMBITS_STREAM_PICTURE_ID, unsigned int)) {
+ /* Return previous last FW context. */
+ resource_item_return(&dec_str_ctx->last_be_pict_dec_res->ref_cnt);
+
+ if (resource_item_isavailable(&dec_str_ctx->last_be_pict_dec_res->ref_cnt)) {
+ resource_list_remove(&dec_str_ctx->dec_res_lst,
+ dec_str_ctx->last_be_pict_dec_res);
+ resource_list_add_img(&dec_str_ctx->dec_res_lst,
+ dec_str_ctx->last_be_pict_dec_res, 0,
+ &dec_str_ctx->last_be_pict_dec_res->ref_cnt);
+ }
+ }
+ if (!dec_str_ctx->last_be_pict_dec_res ||
+ (dec_str_ctx->last_be_pict_dec_res && HAS_X_PASSED_Y
+ (picture->pict_id,
+ GET_STREAM_PICTURE_ID(dec_str_ctx->last_be_pict_dec_res->transaction_id),
+ 1 << FWIF_NUMBITS_STREAM_PICTURE_ID, unsigned int))) {
+ /* Hold onto last FW context. */
+ dec_str_ctx->last_be_pict_dec_res = dec_pict->cur_pict_dec_res;
+ resource_item_use(&dec_str_ctx->last_be_pict_dec_res->ref_cnt);
+ }
+ resource_item_return(&dec_pict->cur_pict_dec_res->ref_cnt);
+
+ if (resource_item_isavailable(&dec_pict->cur_pict_dec_res->ref_cnt)) {
+ resource_list_remove(&dec_str_ctx->dec_res_lst,
+ dec_pict->cur_pict_dec_res);
+ resource_list_add_img(&dec_str_ctx->dec_res_lst,
+ dec_pict->cur_pict_dec_res, 0,
+ &dec_pict->cur_pict_dec_res->ref_cnt);
+ }
+#ifdef DEBUG_DECODER_DRIVER
+ pr_info("[USERSID=0x%08X] [TID=0x%08X] DECODED",
+ dec_str_ctx->config.user_str_id,
+ dec_pict->transaction_id);
+#endif
+
+ ret = decoder_picture_decoded(dec_str_ctx, dec_core_ctx,
+ picture, dec_pict,
+ dec_pict->pict_hdr_info,
+ dec_str_unit->str_unit);
+
+ VDEC_ASSERT(ret == IMG_SUCCESS);
+ if (ret != IMG_SUCCESS)
+ return ret;
+
+ dec_res_picture_detach(&dec_str_ctx->resources, dec_pict);
+
+ /* Free the segments from the decode picture */
+ decoder_clean_bitstr_segments(&dec_pict->dec_pict_seg_list);
+
+ pict_fragment = lst_removehead(&dec_pict->fragment_list);
+ while (pict_fragment) {
+ kfree(pict_fragment);
+ pict_fragment =
+ lst_removehead(&dec_pict->fragment_list);
+ }
+
+ pict_start = (!head_of_queue) ? TRUE : FALSE;
+
+ ret = dec_str_ctx->str_processed_cb(dec_str_ctx->usr_int_data,
+ VXD_CB_STRUNIT_PROCESSED,
+ dec_str_unit->str_unit);
+
+ VDEC_ASSERT(ret == IMG_SUCCESS);
+ if (ret != IMG_SUCCESS) {
+ /* Free decoder picture */
+ kfree(dec_pict);
+ dec_pict = NULL;
+ return ret;
+ }
+
+ /* Destroy the Decoder stream unit wrapper */
+ kfree(dec_str_unit);
+
+ /* Peek at the next stream unit */
+ dec_str_unit = lst_first(&dec_str_ctx->pend_strunit_list);
+ if (dec_str_unit)
+ pict_start = (dec_str_unit->str_unit->str_unit_type ==
+ VDECDD_STRUNIT_PICTURE_START &&
+ dec_str_unit->dec_pict != dec_pict);
+
+ /* Free decoder picture */
+ kfree(dec_pict);
+ dec_pict = NULL;
+ }
+
+ kfree(dec_str_unit);
+ return ret;
+}
+
+/*
+ * @Function decoder_is_stream_idle
+ */
+unsigned char decoder_is_stream_idle(void *dec_str_ctx_handle)
+{
+ struct dec_str_ctx *dec_str_ctx;
+
+ dec_str_ctx = decoder_stream_get_context(dec_str_ctx_handle);
+ VDEC_ASSERT(dec_str_ctx);
+ if (!dec_str_ctx) {
+ pr_err("Invalid decoder stream context handle!");
+ return FALSE;
+ }
+
+ return lst_empty(&dec_str_ctx->pend_strunit_list);
+}
diff --git a/drivers/staging/media/vxd/decoder/decoder.h b/drivers/staging/media/vxd/decoder/decoder.h
new file mode 100644
index 000000000000..a6595fa785e4
--- /dev/null
+++ b/drivers/staging/media/vxd/decoder/decoder.h
@@ -0,0 +1,375 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * VXD Decoder Component header
+ *
+ * Copyright (c) Imagination Technologies Ltd.
+ * Copyright (c) 2021 Texas Instruments Incorporated - http://www.ti.com/
+ *
+ * Authors:
+ * Sunita Nadampalli <[email protected]>
+ *
+ * Re-written for upstream
+ * Sidraya Jayagond <[email protected]>
+ * Prashanth Kumar Amai <[email protected]>
+ */
+
+#ifndef __DECODER_H__
+#define __DECODER_H__
+
+#include "bspp.h"
+#include "dq.h"
+#ifdef HAS_JPEG
+#include "jpegfw_data.h"
+#endif
+#include "lst.h"
+#include "vdecdd_defs.h"
+#include "vdec_defs.h"
+#include "vid_buf.h"
+#include "vxd_ext.h"
+#include "vxd_props.h"
+#include "hevcfw_data.h"
+
+#define MAX_CONCURRENT_STREAMS 16
+
+enum dec_pict_states {
+ DECODER_PICTURE_STATE_TO_DECODE = 0,
+ DECODER_PICTURE_STATE_DECODED,
+ DECODER_PICTURE_STATE_TO_DISCARD,
+ DECODER_PICTURE_STATE_MAX,
+ DECODER_PICTURE_FORCE32BITS = 0x7FFFFFFFU
+};
+
+enum dec_res_type {
+ DECODER_RESTYPE_TRANSACTION = 0,
+ DECODER_RESTYPE_HDR,
+ DECODER_RESTYPE_BATCH_MSG,
+#ifdef HAS_HEVC
+ DECODER_RESTYPE_PVDEC_BUF,
+#endif
+ DECODER_RESTYPE_MAX,
+ DECODER_RESTYPE_FORCE32BITS = 0x7FFFFFFFU
+};
+
+enum dec_core_query_type {
+ DECODER_CORE_GET_RES_LIMIT = 0,
+ DECODER_CORE_FORCE32BITS = 0x7FFFFFFFU
+};
+
+/*
+ * @Function ref_pic_get_maximum
+ * @Description
+ * This is the prototype for functions calculating the maximum number
+ * of reference pictures required per video standard.
+ *
+ * @Input comseq_hdr_info : A pointer to the common VSH information
+ * structure.
+ *
+ * @Output max_ref_pict_num : A pointer used to return the maximum number
+ * of reference frames required.
+ *
+ * @Return int : This function returns either IMG_SUCCESS or
+ * an error code.
+ */
+typedef int (*ref_pic_get_maximum)(const struct vdec_comsequ_hdrinfo *comseq_hdr_info,
+ unsigned int *max_ref_pict_num);
+
+typedef int (*strunit_processed_cb)(void *handle, int cb_type, void *item);
+
+typedef int (*core_gen_cb)(void *handle, int query, void *item);
+
+struct dec_ctx;
+
+/*
+ * This structure contains the core context.
+ * @brief Decoder Core Context
+ */
+struct dec_core_ctx {
+ void **link; /* to be part of single linked list */
+ struct dec_ctx *dec_ctx;
+ unsigned char enumerated;
+ unsigned char master;
+ unsigned char configured;
+ unsigned int core_features;
+ unsigned int pipe_features[VDEC_MAX_PIXEL_PIPES];
+ struct vxd_coreprops core_props;
+ void *resources;
+ void *hw_ctx;
+ unsigned int cum_pics;
+ unsigned char busy;
+};
+
+struct dec_ctx {
+ unsigned char inited;
+ void *user_data;
+ const struct vdecdd_dd_devconfig *dev_cfg;
+ unsigned int num_pipes;
+ struct dec_core_ctx *dec_core_ctx;
+ struct lst_t str_list;
+ void *mmu_dev_handle;
+ void *dev_handle;
+ struct vidio_ddbufinfo ptd_buf_info;
+ unsigned char sup_stds[VDEC_STD_MAX];
+ unsigned int internal_heap_id;
+ unsigned int str_cnt;
+};
+
+/*
+ * This structure contains the device decode resource (used for decoding and
+ * held for subsequent decoding).
+ * @brief Decoder Device Resource
+ */
+struct dec_pictdec_res {
+ void **link; /* to be part of single linked list */
+ unsigned int transaction_id;
+ struct vidio_ddbufinfo fw_ctx_buf;
+ struct vidio_ddbufinfo h264_sgm_buf;
+ unsigned int ref_cnt;
+};
+
+struct dec_decpict;
+
+/*
+ * This structure contains the stream context.
+ * @brief Decoder Stream Context
+ */
+struct dec_str_ctx {
+ void **link; /* to be part of single linked list */
+ int km_str_id;
+ struct vdec_str_configdata config;
+ struct dec_ctx *decctx;
+ void *vxd_dec_ctx;
+ void *usr_int_data;
+ void *mmu_str_handle;
+ void *pict_idgen;
+ struct lst_t pend_strunit_list;
+ struct dq_linkage_t str_decd_pict_list;
+ unsigned int num_ref_res;
+ struct lst_t ref_res_lst;
+ unsigned int num_dec_res;
+ struct lst_t dec_res_lst;
+ unsigned int avail_pipes;
+ unsigned int avail_slots;
+ struct vdecdd_decstr_status dec_str_st;
+ struct vidio_ddbufinfo pvdec_fw_ctx_buf;
+ unsigned int last_fe_transaction_id;
+ unsigned int next_dec_pict_id;
+ unsigned int next_pict_id_expected;
+ struct dec_pictdec_res *cur_fe_pict_dec_res;
+ struct dec_pictdec_res *prev_fe_pict_dec_res;
+ struct dec_pictdec_res *last_be_pict_dec_res;
+ struct dec_decpict *cur_pict;
+ void *resources;
+ strunit_processed_cb str_processed_cb;
+ core_gen_cb core_query_cb;
+};
+
+/*
+ * Resource Structure for DECODER_sDdResourceInfo to be used with pools
+ */
+struct res_resinfo {
+ void **link; /* to be part of single linked list */
+ void *res;
+ struct vidio_ddbufinfo *ddbuf_info;
+};
+
+struct vdecdd_ddstr_ctx;
+
+/*
+ * This structure contains the Decoded attributes
+ * @brief Decoded attributes
+ */
+struct dec_pict_attrs {
+ unsigned char first_fld_rcvd;
+ unsigned int fe_err;
+ unsigned int no_be_wdt;
+ unsigned int mbs_dropped;
+ unsigned int mbs_recovered;
+ struct vxd_pict_attrs pict_attrs;
+};
+
+/*
+ * This union contains firmware contexts. Used to allocate buffers for firmware
+ * context.
+ */
+union dec_fw_contexts {
+ struct h264fw_context_data h264_context;
+#ifdef HAS_JPEG
+ struct jpegfw_context_data jpeg_context;
+#endif
+#ifdef HAS_HEVC
+ struct hevcfw_ctx_data hevc_context;
+#endif
+};
+
+/*
+ * for debug
+ */
+struct dec_fwmsg {
+ void **link;
+ struct dec_pict_attrs pict_attrs;
+ struct vdec_pict_hwcrc pict_hwcrc;
+};
+
+/*
+ * This structure contains the stream decode resource (persistent for
+ * longer than decoding).
+ * @brief Decoder Stream Resource
+ */
+struct dec_pictref_res {
+ void **link; /* to be part of single linked list */
+ struct vidio_ddbufinfo fw_ctrlbuf;
+ unsigned int ref_cnt;
+};
+
+/*
+ * This structure defines the decode picture.
+ * @brief Decoder Picture
+ */
+struct dec_decpict {
+ void **link;
+ unsigned int transaction_id;
+ void *dec_str_ctx;
+ unsigned char twopass;
+ unsigned char first_fld_rcvd;
+ struct res_resinfo *transaction_info;
+ struct res_resinfo *hdr_info;
+#ifdef HAS_HEVC
+ struct res_resinfo *pvdec_info;
+ unsigned int temporal_out_addr;
+#endif
+ struct vdecdd_ddpict_buf *recon_pict;
+ struct vdecdd_ddpict_buf *alt_pict;
+ struct res_resinfo *batch_msginfo;
+ struct vidio_ddbufinfo *intra_bufinfo;
+ struct vidio_ddbufinfo *auxline_bufinfo;
+ struct vidio_ddbufinfo *vlc_tables_bufinfo;
+ struct vidio_ddbufinfo *vlc_idx_tables_bufinfo;
+ struct vidio_ddbufinfo *start_code_bufinfo;
+ struct dec_fwmsg *first_fld_fwmsg;
+ struct dec_fwmsg *second_fld_fwmsg;
+ struct bspp_pict_hdr_info *pict_hdr_info;
+ struct dec_pictdec_res *cur_pict_dec_res;
+ struct dec_pictdec_res *prev_pict_dec_res;
+ struct dec_pictref_res *pict_ref_res;
+ struct lst_t dec_pict_seg_list;
+ struct lst_t fragment_list;
+ unsigned char eop_found;
+ unsigned int operating_op;
+ unsigned short genc_id;
+ struct vdecdd_ddbuf_mapinfo **genc_bufs;
+ struct vdecdd_ddbuf_mapinfo *genc_fragment_buf;
+ unsigned int ctrl_alloc_bytes;
+ unsigned int ctrl_alloc_offset;
+ enum dec_pict_states state;
+ struct vidio_ddbufinfo *str_pvdec_fw_ctxbuf;
+};
+
+/*
+ * This structure wraps a decode picture and its stream unit.
+ * @brief Decoder Stream Unit
+ */
+struct dec_str_unit {
+ void **link; /* to be part of single linked list */
+ struct dec_decpict *dec_pict;
+ struct vdecdd_str_unit *str_unit;
+};
+
+/*
+ * This structure defines the decoded picture.
+ * @brief Decoded Picture
+ */
+struct dec_decoded_pict {
+ struct dq_linkage_t link; /* to be part of double linked list */
+ unsigned int transaction_id;
+ unsigned char processed;
+ unsigned char process_failed;
+ unsigned char force_display;
+ unsigned char displayed;
+ unsigned char merged;
+ unsigned int disp_idx;
+ unsigned int rel_idx;
+ struct vdecdd_picture *pict;
+ struct dec_fwmsg *first_fld_fwmsg;
+ struct dec_fwmsg *second_fld_fwmsg;
+ struct dec_pictref_res *pict_ref_res;
+};
+
+struct dec_pict_fragment {
+ void **link; /* to be part of single linked list */
+ /* Control allocation size in bytes */
+ unsigned int ctrl_alloc_bytes;
+ /* Control allocation offset in bytes */
+ unsigned int ctrl_alloc_offset;
+};
+
+/*
+ * This structure contains a pointer to a picture segment.
+ * The segments could be added directly to the list in struct dec_decpict,
+ * but because a list item cannot belong to more than one list this wrapper
+ * is used and added to dec_pict_seg_list inside struct dec_decpict.
+ * @brief Decoder Picture Segment
+ */
+struct dec_decpict_seg {
+ void **link; /* to be part of single linked list */
+ struct bspp_bitstr_seg *bstr_seg;
+ unsigned char internal_seg;
+};
+
+struct decoder_regsoffsets {
+ unsigned int vdmc_cmd_offset;
+ unsigned int vec_offset;
+ unsigned int entropy_offset;
+ unsigned int vec_be_regs_offset;
+ unsigned int vdec_be_codec_regs_offset;
+};
+
+int decoder_initialise(void *init_usr_data, unsigned int internal_heap_id,
+ struct vdecdd_dd_devconfig *dd_devcfg, unsigned int *num_pipes,
+ void **dec_ctx);
+
+int decoder_deinitialise(void *dec_ctx);
+
+int decoder_supported_features(void *dec_ctx, struct vdec_features *features);
+
+int decoder_stream_destroy(void *dec_str_ctx, unsigned char abort);
+
+int decoder_stream_create(void *dec_ctx, struct vdec_str_configdata str_cfg,
+ unsigned int kmstr_id, void **mmu_str_handle,
+ void *vxd_dec_ctx, void *str_usr_int_data,
+ void **dec_str_ctx, void *decoder_cb, void *query_cb);
+
+int decoder_stream_prepare_ctx(void *dec_str_ctx, unsigned char flush_dpb);
+
+int decoder_stream_process_unit(void *dec_str_ctx,
+ struct vdecdd_str_unit *str_unit);
+
+int decoder_get_load(void *dec_str_ctx, unsigned int *avail_slots);
+
+int
+decoder_check_support(void *dec_ctx,
+ const struct vdec_str_configdata *str_cfg,
+ const struct vdec_str_opconfig *op_cfg,
+ const struct vdecdd_ddpict_buf *disp_pictbuf,
+ const struct vdec_pict_rendinfo *req_pict_rendinfo,
+ const struct vdec_comsequ_hdrinfo *comseq_hdrinfo,
+ const struct bspp_pict_hdr_info *pict_hdrinfo,
+ const struct vdec_comsequ_hdrinfo *prev_comseq_hdrinfo,
+ const struct bspp_pict_hdr_info *prev_pict_hdrinfo,
+ unsigned char non_cfg_req, struct vdec_unsupp_flags *unsupported,
+ unsigned int *features);
+
+unsigned char decoder_is_stream_idle(void *dec_str_ctx);
+
+int decoder_stream_flush(void *dec_str_ctx, unsigned char discard_refs);
+
+int decoder_stream_release_buffers(void *dec_str_ctx);
+
+int decoder_stream_get_status(void *dec_str_ctx,
+ struct vdecdd_decstr_status *dec_str_st);
+
+int decoder_service_firmware_response(void *dec_str_ctx_arg, unsigned int *msg,
+ unsigned int msg_size, unsigned int msg_flags);
+
+#endif
--
2.17.1
From: Sidraya <[email protected]>
The header file defines all the high-level data structures required
by the decoder, hwcontrol, core and v4l2 decoder components.
Signed-off-by: Sunita Nadampalli <[email protected]>
Signed-off-by: Sidraya <[email protected]>
---
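Note (illustrative only, not part of this patch): given the counters in
struct vdecdd_decstr_status introduced below, a caller holding a populated
status block could dump the stream progress roughly as follows; the helper
name and the message format are assumptions made purely for illustration:

        static void vdecdd_log_stream_status(const struct vdecdd_decstr_status *st)
        {
                /* Counters come straight from struct vdecdd_decstr_status. */
                pr_info("decoding=%u decoded=%u displayed=%u finished=%u\n",
                        st->num_pict_decoding, st->total_pict_decoded,
                        st->total_pict_displayed, st->total_pict_finished);
        }
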
MAINTAINERS | 1 +
.../staging/media/vxd/decoder/vdecdd_defs.h | 446 ++++++++++++++++++
2 files changed, 447 insertions(+)
create mode 100644 drivers/staging/media/vxd/decoder/vdecdd_defs.h
diff --git a/MAINTAINERS b/MAINTAINERS
index 41716f2916d1..fa5c69d71c3e 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -19618,6 +19618,7 @@ F: drivers/staging/media/vxd/decoder/translation_api.h
F: drivers/staging/media/vxd/decoder/vdec_defs.h
F: drivers/staging/media/vxd/decoder/vdec_mmu_wrapper.c
F: drivers/staging/media/vxd/decoder/vdec_mmu_wrapper.h
+F: drivers/staging/media/vxd/decoder/vdecdd_defs.h
F: drivers/staging/media/vxd/decoder/vdecdd_utils.c
F: drivers/staging/media/vxd/decoder/vdecdd_utils.h
F: drivers/staging/media/vxd/decoder/vdecdd_utils_buf.c
diff --git a/drivers/staging/media/vxd/decoder/vdecdd_defs.h b/drivers/staging/media/vxd/decoder/vdecdd_defs.h
new file mode 100644
index 000000000000..dc4c2695c390
--- /dev/null
+++ b/drivers/staging/media/vxd/decoder/vdecdd_defs.h
@@ -0,0 +1,446 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * VXD Decoder device driver header definitions
+ *
+ * Copyright (c) Imagination Technologies Ltd.
+ * Copyright (c) 2021 Texas Instruments Incorporated - http://www.ti.com/
+ *
+ * Authors:
+ * Sunita Nadampalli <[email protected]>
+ *
+ * Re-written for upstream
+ * Sidraya Jayagond <[email protected]>
+ * Prashanth Kumar Amai <[email protected]>
+ */
+
+#ifndef __VDECDD_DEFS_H__
+#define __VDECDD_DEFS_H__
+
+#include "bspp.h"
+#include "lst.h"
+#include "vdec_defs.h"
+#include "vdecfw_shared.h"
+#include "vid_buf.h"
+#include "vxd_mmu_defs.h"
+
+/* RMAN type for streams. */
+#define VDECDD_STREAM_TYPE_ID (0xB0B00001)
+
+/* RMAN type for buffer mappings. */
+#define VDECDD_BUFMAP_TYPE_ID (0xB0B00002)
+
+/*
+ * This type contains core feature flags.
+ * @brief Core Feature Flags
+ */
+enum vdecdd_core_feature_flags {
+ VDECDD_COREFEATURE_MPEG2 = (1 << 0),
+ VDECDD_COREFEATURE_MPEG4 = (1 << 1),
+ VDECDD_COREFEATURE_H264 = (1 << 2),
+ VDECDD_COREFEATURE_VC1 = (1 << 3),
+ VDECDD_COREFEATURE_AVS = (1 << 4),
+ VDECDD_COREFEATURE_REAL = (1 << 5),
+ VDECDD_COREFEATURE_JPEG = (1 << 6),
+ VDECDD_COREFEATURE_VP6 = (1 << 7),
+ VDECDD_COREFEATURE_VP8 = (1 << 8),
+ VDECDD_COREFEATURE_HEVC = (1 << 9),
+ VDECDD_COREFEATURE_HD_DECODE = (1 << 10),
+ VDECDD_COREFEATURE_ROTATION = (1 << 11),
+ VDECDD_COREFEATURE_SCALING = (1 << 12),
+ VDECDD_COREFEATURE_FORCE32BITS = 0x7FFFFFFFU
+};
+
+/*
+ * This structure contains configuration relating to a device.
+ * @brief Device Configuration
+ */
+struct vdecdd_dd_devconfig {
+ unsigned int num_slots_per_pipe;
+};
+
+/*
+ * This structure contains the Decoder decoding picture status.
+ * @brief Decoder Decoding Picture Status
+ */
+struct vdecdd_dec_pict_status {
+ unsigned int transaction_id;
+ enum vdecfw_progresscheckpoint fw_cp;
+ enum vdecfw_progresscheckpoint fehw_cp;
+ enum vdecfw_progresscheckpoint behw_cp;
+ unsigned int dmac_status;
+ unsigned int fe_mb_x;
+ unsigned int fe_mb_y;
+ unsigned int be_mb_x;
+ unsigned int be_mb_y;
+ unsigned char fw_control_msg[VDECFW_MSGID_CONTROL_TYPES];
+ unsigned char fw_decode_msg[VDECFW_MSGID_DECODE_TYPES];
+ unsigned char fw_completion_msg[VDECFW_MSGID_COMPLETION_TYPES];
+ unsigned char host_control_msg[VDECFW_MSGID_CONTROL_TYPES];
+ unsigned char host_decode_msg[VDECFW_MSGID_DECODE_TYPES];
+ unsigned char host_completion_msg[VDECFW_MSGID_COMPLETION_TYPES];
+};
+
+/*
+ * This structure contains the decoder core status.
+ * @brief Core Status
+ */
+struct vdecdd_core_status {
+ unsigned int mtx_pc;
+ unsigned int mtx_pcx;
+ unsigned int mtx_enable;
+ unsigned int mtx_st_bits;
+ unsigned int mtx_fault0;
+ unsigned int mtx_a0_stack_ptr;
+ unsigned int mtx_a0_frame_ptr;
+ unsigned int dma_setup[3];
+ unsigned int dma_count[3];
+ unsigned int dma_peripheral_addr[3];
+};
+
+/*
+ * This structure contains the Decoder component stream status.
+ * @brief Decoder Component Stream Status
+ */
+struct vdecdd_decstr_status {
+ unsigned int num_pict_decoding;
+ struct vdecdd_dec_pict_status dec_pict_st[VDECFW_MAX_NUM_PICTURES];
+ unsigned int num_pict_decoded;
+ unsigned int decoded_picts[VDECFW_MAX_NUM_PICTURES];
+ unsigned int features;
+ struct vdecdd_core_status core_st;
+ unsigned int display_pics;
+ unsigned int release_pics;
+ unsigned int next_display_items[VDECFW_MAX_NUM_PICTURES];
+ unsigned int next_display_item_parent[VDECFW_MAX_NUM_PICTURES];
+ unsigned int next_release_items[VDECFW_MAX_NUM_PICTURES];
+ unsigned int next_release_item_parent[VDECFW_MAX_NUM_PICTURES];
+ unsigned int flds_as_frm_decodes;
+ unsigned int total_pict_decoded;
+ unsigned int total_pict_displayed;
+ unsigned int total_pict_finished;
+};
+
+/*
+ * This structure contains the device context.
+ * @brief VDECDD Device Context
+ */
+struct vdecdd_dddev_context {
+ void *dev_handle;
+ void *dec_context;
+ unsigned int internal_heap_id;
+ void *res_buck_handle;
+};
+
+/*
+ * This type defines the stream state.
+ * @brief VDECDD Stream State
+ */
+enum vdecdd_ddstr_state {
+ VDECDD_STRSTATE_STOPPED = 0x00,
+ VDECDD_STRSTATE_PLAYING,
+ VDECDD_STRSTATE_STOPPING,
+ VDECDD_STRSTATE_MAX,
+ VDECDD_STRSTATE_FORCE32BITS = 0x7FFFFFFFU
+};
+
+/*
+ * This structure contains the mapped output buffer configuration.
+ * @brief VDECDD Mapped Output Buffer Configuration
+ */
+struct vdecdd_mapbuf_info {
+ unsigned int buf_size;
+ unsigned int num_buf;
+ unsigned char byte_interleave;
+};
+
+struct vdecdd_ddstr_ctx;
+
+/*
+ * This structure contains the map info.
+ * @brief VDECDD Map Info
+ */
+struct vdecdd_ddbuf_mapinfo {
+ void **link; /* to be part of single linked list */
+ void *res_handle;
+ unsigned int buf_id;
+ unsigned int buf_map_id;
+ struct vdecdd_ddstr_ctx *ddstr_context;
+ void *buf_cb_param;
+ enum vdec_buf_type buf_type;
+ enum mmu_eheap_id mmuheap_id;
+ struct vidio_ddbufinfo ddbuf_info;
+};
+
+/*
+ * This structure contains the information about the picture buffer
+ * and its structure.
+ * @brief VDECDD Picture Buffer Info
+ */
+struct vdecdd_ddpict_buf {
+ struct vdecdd_ddbuf_mapinfo *pict_buf;
+ struct vdec_pict_rendinfo rend_info;
+ struct vdec_pict_bufconfig buf_config;
+};
+
+/*
+ * This structure contains the stream context.
+ * @brief VDECDD Stream Context
+ */
+struct vdecdd_ddstr_ctx {
+ void **link; /* to be part of single linked list */
+ unsigned int res_str_id;
+ unsigned int km_str_id;
+ void *res_buck_handle;
+ void *res_handle;
+ struct vdecdd_dddev_context *dd_dev_context;
+ struct vdec_str_configdata str_config_data;
+ unsigned char preempt;
+ enum vdecdd_ddstr_state dd_str_state;
+ void *mmu_str_handle;
+ void *dec_ctx;
+ enum vdec_play_mode play_mode;
+ struct vdec_comsequ_hdrinfo comseq_hdr_info;
+ struct vdecdd_ddpict_buf disp_pict_buf;
+ struct vdecdd_mapbuf_info map_buf_info;
+ struct vdec_str_opconfig opconfig;
+ unsigned char str_op_configured;
+ struct vdec_comsequ_hdrinfo prev_comseq_hdr_info;
+ struct bspp_pict_hdr_info prev_pict_hdr_info;
+};
+
+/*
+ * This type defines the stream unit type.
+ * @brief Stream Unit Type
+ */
+enum vdecdd_str_unit_type {
+ VDECDD_STRUNIT_ANONYMOUS = 0,
+ VDECDD_STRUNIT_SEQUENCE_START,
+ VDECDD_STRUNIT_CLOSED_GOP,
+ VDECDD_STRUNIT_SEQUENCE_END,
+ VDECDD_STRUNIT_PICTURE_PORTENT,
+ VDECDD_STRUNIT_PICTURE_START,
+ VDECDD_STRUNIT_PICTURE_FRAGMENT,
+ VDECDD_STRUNIT_PICTURE_END,
+ VDECDD_STRUNIT_FENCE,
+ VDECDD_STRUNIT_STOP,
+ VDECDD_STRUNIT_MAX,
+ VDECDD_STRUNIT_FORCE32BITS = 0x7FFFFFFFU
+};
+
+/*
+ * This structure contains a front end stream unit.
+ * @brief Front End Stream Unit
+ */
+struct vdecdd_str_unit {
+ void *lst_padding;
+ enum vdecdd_str_unit_type str_unit_type;
+ void *str_unit_handle;
+ unsigned int err_flags;
+ struct lst_t bstr_seg_list;
+ void *dd_data;
+ struct bspp_sequ_hdr_info *seq_hdr_info;
+ unsigned int seq_hdr_id;
+ unsigned char closed_gop;
+ unsigned char eop;
+ struct bspp_pict_hdr_info *pict_hdr_info;
+ void *dd_pict_data;
+ unsigned char last_pict_in_seq;
+ void *str_unit_tag;
+ unsigned char decode;
+ unsigned int features;
+ unsigned char eos;
+};
+
+/*
+ * This structure contains the set of picture resources required at all the
+ * interim stages of decoding a picture as it flows through the driver
+ * internals. It originates in the plant.
+ * @brief Picture Resources (stream)
+ */
+struct vdecdd_pict_resint {
+ void **link; /* to be part of single linked list */
+ struct vdecdd_ddbuf_mapinfo *mb_param_buf;
+ unsigned int ref_cnt;
+
+#ifdef HAS_HEVC
+ /* GENC fragment buffer */
+ struct vdecdd_ddbuf_mapinfo *genc_fragment_buf;
+#endif
+#if defined(HAS_HEVC) || defined(ERROR_CONCEALMENT)
+ /* Sequence resources (GENC buffers) */
+ struct vdecdd_seq_resint *seq_resint;
+#endif
+};
+
+/*
+ * This structure contains the supplementary information for a decoded
+ * picture.
+ * @brief Decoded Picture Supplementary Data
+ */
+struct vdecdd_pict_sup_data {
+ void *raw_vui_data;
+ void *raw_sei_list_first_fld;
+ void *raw_sei_list_second_fld;
+ unsigned char merged_flds;
+ union {
+ struct vdecdd_h264_pict_supl_data {
+ unsigned char nal_ref_idc;
+ unsigned short frame_num;
+ } h264_pict_supl_data;
+#ifdef HAS_HEVC
+ struct vdecdd_hevc_pict_supl_data {
+ unsigned int pic_order_cnt;
+ } hevc_pict_supl_data;
+#endif
+ };
+};
+
+/*
+ * This structure contains a set of resources representing a picture
+ * at all the stages of processing it.
+ * @brief Picture
+ */
+struct vdecdd_picture {
+ unsigned int pict_id;
+ unsigned char last_pict_in_seq;
+ struct vdecdd_pict_resint *pict_res_int;
+ struct vdecdd_ddpict_buf disp_pict_buf;
+ struct vdec_str_opconfig op_config;
+ struct vdec_dec_pict_info *dec_pict_info;
+ struct vdecdd_pict_sup_data dec_pict_sup_data;
+ struct vdec_dec_pict_auxinfo dec_pict_aux_info;
+};
+
+/*
+ * This structure contains the information required to check Decoder support.
+ * Only pointers that are non-NULL will be used in validation.
+ * @brief VDECDD Support Check Information
+ */
+struct vdecdd_supp_check {
+ /* Inputs */
+ const struct vdec_comsequ_hdrinfo *comseq_hdrinfo;
+ const struct vdec_str_opconfig *op_cfg;
+ const struct vdecdd_ddpict_buf *disp_pictbuf;
+ const struct bspp_pict_hdr_info *pict_hdrinfo;
+ unsigned char non_cfg_req;
+
+ /* Outputs */
+ struct vdec_unsupp_flags unsupp_flags;
+ unsigned int features;
+};
+
+/*
+ * This type defines unsupported stream configuration features.
+ * @brief Unsupported Stream Configuration Flags
+ */
+enum vdec_unsupp_strcfg {
+ VDECDD_UNSUPPORTED_STRCONFIG_STD = (1 << 0),
+ VDECDD_UNSUPPORTED_STRCONFIG_BSTRFORMAT = (1 << 1),
+ VDECDD_UNSUPPORTED_STRCONFIG_FORCE32BITS = 0x7FFFFFFFU
+};
+
+/*
+ * This type defines unsupported output configuration features.
+ * @brief Unsupported Output Configuration Flags
+ */
+enum vdec_unsupp_opcfg {
+ VDECDD_UNSUPPORTED_OUTPUTCONFIG_ROTATION = (1 << 0),
+ VDECDD_UNSUPPORTED_OUTPUTCONFIG_ROTATION_WITH_FIELDS = (1 << 1),
+ VDECDD_UNSUPPORTED_OUTPUTCONFIG_ROTATION_WITH_SCALING = (1 << 2),
+ VDECDD_UNSUPPORTED_OUTPUTCONFIG_SCALING = (1 << 3),
+ VDECDD_UNSUPPORTED_OUTPUTCONFIG_UP_DOWNSAMPLING = (1 << 4),
+ VDECDD_UNSUPPORTED_OUTPUTCONFIG_SCALING_WITH_OOLD = (1 << 5),
+ VDECDD_UNSUPPORTED_OUTPUTCONFIG_SCALING_MONOCHROME = (1 << 6),
+ VDECDD_UNSUPPORTED_OUTPUTCONFIG_SCALING_SIZE = (1 << 7),
+ VDECDD_UNSUPPORTED_OUTPUTCONFIG_X_WITH_JPEG = (1 << 8),
+ VDECDD_UNSUPPORTED_OUTPUTCONFIG_SCALESIZE = (1 << 9),
+ VDECDD_UNSUPPORTED_OUTPUTCONFIG_PIXFORMAT = (1 << 10),
+ VDECDD_UNSUPPORTED_OUTPUTCONFIG_ROTATION_WITH_HIGH_COLOUR =
+ (1 << 11),
+ VDECDD_UNSUPPORTED_OUTPUTCONFIG_DOWNSAMPLING_WITH_ROTATION =
+ (1 << 12),
+ VDECDD_UNSUPPORTED_OUTPUTCONFIG_ROTATION_WITH_10BIT_PACKED =
+ (1 << 13),
+ VDECDD_UNSUPPORTED_OUTPUTCONFIG_FORCE32BITS = 0x7FFFFFFFU
+};
+
+/*
+ * This type defines unsupported output buffer configuration features.
+ * @brief Unsupported Output Buffer Configuration Flags
+ */
+enum vdec_unsupp_op_bufcfg {
+ VDECDD_UNSUPPORTED_OUTPUTBUFCONFIG_EXTENDED_STRIDE = (1 << 0),
+ VDECDD_UNSUPPORTED_OUTPUTBUFCONFIG_64BYTE_STRIDE = (1 << 1),
+ VDECDD_UNSUPPORTED_OUTPUTBUFCONFIG_FIXED_STRIDE = (1 << 2),
+ VDECDD_UNSUPPORTED_OUTPUTBUFCONFIG_PICTURE_SIZE = (1 << 3),
+ VDECDD_UNSUPPORTED_OUTPUTBUFCONFIG_BUFFER_SIZE = (1 << 4),
+ VDECDD_UNSUPPORTED_OUTPUTBUFCONFIG_Y_SIZE = (1 << 5),
+ VDECDD_UNSUPPORTED_OUTPUTBUFCONFIG_UV_SIZE = (1 << 6),
+ VDECDD_UNSUPPORTED_OUTPUTBUFCONFIG_Y_STRIDE = (1 << 7),
+ VDECDD_UNSUPPORTED_OUTPUTBUFCONFIG_UV_STRIDE = (1 << 8),
+ VDECDD_UNSUPPORTED_OUTPUTBUFCONFIG_FORCE32BITS = 0x7FFFFFFFU
+};
+
+/*
+ * This type defines unsupported sequence header features.
+ * @brief Unsupported Sequence Header Flags
+ */
+enum vdec_unsupp_seqhdr {
+ VDECDD_UNSUPPORTED_SEQUHDR_PIXFORMAT_BIT_DEPTH = (1 << 0),
+ VDECDD_UNSUPPORTED_SEQUHDR_SCALING = (1 << 1),
+ VDECDD_UNSUPPORTED_SEQUHDR_PIXEL_FORMAT = (1 << 2),
+ VDECDD_UNSUPPORTED_SEQUHDR_NUM_OF_VIEWS = (1 << 3),
+ VDECDD_UNSUPPORTED_SEQUHDR_CODED_HEIGHT = (1 << 4),
+ VDECDD_UNSUPPORTED_SEQUHDR_SEP_COLOUR_PLANE = (1 << 5),
+ VDECDD_UNSUPPORTED_SEQUHDR_SIZE = (1 << 6),
+ VDECDD_UNSUPPORTED_SEQUHDR_FORCE32BITS = 0x7FFFFFFFU
+};
+
+/*
+ * This type defines unsupported picture header features.
+ * @brief Unsupported Picture Header Flags
+ */
+enum vdec_unsupp_picthdr {
+ VDECDD_UNSUPPORTED_PICTHDR_UPSCALING = (1 << 0),
+ VDECDD_UNSUPPORTED_PICTHDR_OVERSIZED_SGM = (1 << 1),
+ VDECDD_UNSUPPORTED_PICTHDR_DISCONTINUOUS_MBS = (1 << 2),
+ VDECDD_UNSUPPORTED_PICTHDR_RESOLUTION = (1 << 3),
+ VDECDD_UNSUPPORTED_PICTHDR_SCALING_ORIGINALSIZE = (1 << 4),
+ VDECDD_UNSUPPORTED_PICTHDR_SCALING_SIZE = (1 << 5),
+ VDECDD_UNSUPPORTED_PICTHDR_HEVC_RANGE_EXT = (1 << 6),
+ VDECDD_UNSUPPORTED_PICTHDR_FORCE32BITS = 0x7FFFFFFFU
+};
+
+/*
+ * This type defines the Bitstream Segment type.
+ * @brief Bitstream segment type
+ */
+enum vdecdd_bstr_segtype {
+ VDECDD_BSSEG_LASTINBUFF = (1 << 0),
+ VDECDD_BSSEG_SKIP = (1 << 1),
+ VDECDD_BSSEG_INSERTSCP = (1 << 2),
+ VDECDD_BSSEG_INSERT_STARTCODE = (1 << 3) | VDECDD_BSSEG_INSERTSCP,
+ VDECDD_BSSEG_INSERT_FORCE32BITS = 0x7FFFFFFFU
+};
+
+struct vdecdd_seq_resint {
+ void **link;
+
+#ifdef HAS_HEVC
+ unsigned short genc_buf_id;
+ struct vdecdd_ddbuf_mapinfo *genc_buffers[4]; /* GENC buffers */
+ struct vdecdd_ddbuf_mapinfo *intra_buffer; /* GENC buffers */
+ struct vdecdd_ddbuf_mapinfo *aux_buffer; /* GENC buffers */
+#endif
+ struct vdecdd_ddbuf_mapinfo *err_pict_buf; /* Pointer to "Error
+ * Recovery Frame Store" buffer.
+ */
+
+ /*
+ * Ref. counter (number of users) of sequence resources
+ * NOTE: Internal buffer reference counters are not used
+ * for buffers allocated as sequence resources.
+ */
+ unsigned int ref_count;
+};
+
+#endif
--
2.17.1
From: Sidraya <[email protected]>
This library handles picture buffer pixel format checking
and layout conversion to suit HW requirements.
Signed-off-by: Sunita Nadampalli <[email protected]>
Signed-off-by: Sidraya <[email protected]>
---
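Note (illustrative only, not part of this patch): a minimal sketch of how a
caller might combine the helpers declared in vdecdd_utils.h to size its
output buffer pool and derive a picture buffer configuration. The function
name size_output_pool is hypothetical, and str_cfg, seq_hdr, rend_cfg and
op_cfg are assumed to be already populated by the caller:

        static int size_output_pool(const struct vdec_str_configdata *str_cfg,
                                    const struct vdec_comsequ_hdrinfo *seq_hdr,
                                    const struct vdec_pict_rend_config *rend_cfg,
                                    const struct vdec_str_opconfig *op_cfg,
                                    struct vdec_pict_bufconfig *buf_cfg,
                                    unsigned int *num_bufs)
        {
                unsigned int max_refs = 0;
                int ret;

                /* Maximum reference pictures for this standard/sequence. */
                ret = vdecddutils_ref_pict_get_maxnum(str_cfg, seq_hdr, &max_refs);
                if (ret != IMG_SUCCESS)
                        return ret;

                /* Extra display buffers, mirroring what decoder.c adds. */
                *num_bufs = max_refs + (seq_hdr->interlaced_frames ? 2 : 1);

                /* Translate the rendering/output config into a buffer config. */
                return vdecddutils_pictbuf_getconfig(str_cfg, rend_cfg, op_cfg,
                                                     buf_cfg);
        }
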
MAINTAINERS | 3 +
.../staging/media/vxd/decoder/vdecdd_utils.c | 95 ++
.../staging/media/vxd/decoder/vdecdd_utils.h | 93 ++
.../media/vxd/decoder/vdecdd_utils_buf.c | 897 ++++++++++++++++++
4 files changed, 1088 insertions(+)
create mode 100644 drivers/staging/media/vxd/decoder/vdecdd_utils.c
create mode 100644 drivers/staging/media/vxd/decoder/vdecdd_utils.h
create mode 100644 drivers/staging/media/vxd/decoder/vdecdd_utils_buf.c
diff --git a/MAINTAINERS b/MAINTAINERS
index bf47d48a1ec2..c7edc60f4d5b 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -19614,6 +19614,9 @@ F: drivers/staging/media/vxd/decoder/translation_api.h
F: drivers/staging/media/vxd/decoder/vdec_defs.h
F: drivers/staging/media/vxd/decoder/vdec_mmu_wrapper.c
F: drivers/staging/media/vxd/decoder/vdec_mmu_wrapper.h
+F: drivers/staging/media/vxd/decoder/vdecdd_utils.c
+F: drivers/staging/media/vxd/decoder/vdecdd_utils.h
+F: drivers/staging/media/vxd/decoder/vdecdd_utils_buf.c
F: drivers/staging/media/vxd/decoder/vdecfw_share.h
F: drivers/staging/media/vxd/decoder/vdecfw_shared.h
F: drivers/staging/media/vxd/decoder/vxd_core.c
diff --git a/drivers/staging/media/vxd/decoder/vdecdd_utils.c b/drivers/staging/media/vxd/decoder/vdecdd_utils.c
new file mode 100644
index 000000000000..7fd7a80d46ae
--- /dev/null
+++ b/drivers/staging/media/vxd/decoder/vdecdd_utils.c
@@ -0,0 +1,95 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * VXD Decoder device driver utility functions implementation
+ *
+ * Copyright (c) Imagination Technologies Ltd.
+ * Copyright (c) 2021 Texas Instruments Incorporated - http://www.ti.com/
+ *
+ * Authors:
+ * Sunita Nadampalli <[email protected]>
+ *
+ * Re-written for upstream
+ * Sidraya Jayagond <[email protected]>
+ * Prashanth Kumar Amai <[email protected]>
+ */
+
+#include <linux/dma-mapping.h>
+#include <media/v4l2-ctrls.h>
+#include <media/v4l2-device.h>
+#include <media/v4l2-mem2mem.h>
+
+#include "bspp.h"
+#include "vdecdd_utils.h"
+
+/*
+ * @Function vdecddutils_free_strunit
+ */
+int vdecddutils_free_strunit(struct vdecdd_str_unit *str_unit)
+{
+ struct bspp_bitstr_seg *bstr_seg;
+
+ /* Loop over bit stream segments */
+ bstr_seg = (struct bspp_bitstr_seg *)lst_removehead(&str_unit->bstr_seg_list);
+ while (bstr_seg) {
+ /* Free segment. */
+ kfree(bstr_seg);
+
+ /* Get next segment. */
+ bstr_seg = (struct bspp_bitstr_seg *)lst_removehead(&str_unit->bstr_seg_list);
+ }
+
+ /* Free the sequence header */
+ if (str_unit->seq_hdr_info) {
+ str_unit->seq_hdr_info->ref_count--;
+ if (str_unit->seq_hdr_info->ref_count == 0) {
+ kfree(str_unit->seq_hdr_info);
+ str_unit->seq_hdr_info = NULL;
+ }
+ }
+
+ /* Free the picture header... */
+ if (str_unit->pict_hdr_info) {
+ kfree(str_unit->pict_hdr_info->pict_sgm_data.pic_data);
+ str_unit->pict_hdr_info->pict_sgm_data.pic_data = NULL;
+
+ kfree(str_unit->pict_hdr_info);
+ str_unit->pict_hdr_info = NULL;
+ }
+
+ /* Free stream unit. */
+ kfree(str_unit);
+ str_unit = NULL;
+
+ /* Return success */
+ return IMG_SUCCESS;
+}
+
+/*
+ * @Function vdecddutils_create_strunit
+ * @Description This function allocates a structure for a complete data unit.
+ */
+int vdecddutils_create_strunit(struct vdecdd_str_unit **str_unit_handle,
+ struct lst_t *bs_list)
+{
+ struct vdecdd_str_unit *str_unit;
+ struct bspp_bitstr_seg *bstr_seg;
+
+ str_unit = kzalloc(sizeof(*str_unit), GFP_KERNEL);
+ VDEC_ASSERT(str_unit);
+ if (!str_unit)
+ return IMG_ERROR_OUT_OF_MEMORY;
+
+ if (bs_list) {
+ /* Move all bitstream segments from bs_list to this unit's list. */
+ lst_init(&str_unit->bstr_seg_list);
+ while ((bstr_seg = lst_removehead(bs_list)) != NULL)
+ lst_add(&str_unit->bstr_seg_list, bstr_seg);
+ }
+
+ *str_unit_handle = str_unit;
+
+ return IMG_SUCCESS;
+}
diff --git a/drivers/staging/media/vxd/decoder/vdecdd_utils.h b/drivers/staging/media/vxd/decoder/vdecdd_utils.h
new file mode 100644
index 000000000000..233b7c80fe10
--- /dev/null
+++ b/drivers/staging/media/vxd/decoder/vdecdd_utils.h
@@ -0,0 +1,93 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * VXD Decoder device driver utility header
+ *
+ * Copyright (c) Imagination Technologies Ltd.
+ * Copyright (c) 2021 Texas Instruments Incorporated - http://www.ti.com/
+ *
+ * Authors:
+ * Sunita Nadampalli <[email protected]>
+ *
+ * Re-written for upstream
+ * Sidraya Jayagond <[email protected]>
+ */
+
+#ifndef __VDECDD_UTILS_H__
+#define __VDECDD_UTILS_H__
+
+#include "img_errors.h"
+#include "vdecdd_defs.h"
+
+/* The picture buffer alignment (in bytes) for VXD. */
+#define VDEC_VXD_PICTBUF_ALIGNMENT (64)
+/* The buffer alignment (in bytes) for VXD. */
+#define VDEC_VXD_BUF_ALIGNMENT (4096)
+/* The extended stride alignment for VXD. */
+#define VDEC_VXD_EXT_STRIDE_ALIGNMENT_DEFAULT (64)
+/* Macroblock dimension (width and height) in pixels. */
+#define VDEC_MB_DIMENSION (16)
+
+static inline unsigned int vdec_size_min(unsigned int a, unsigned int b)
+{
+ return a <= b ? a : b;
+}
+
+static inline unsigned char vdec_size_lt(struct vdec_pict_size sa, struct vdec_pict_size sb)
+{
+ return (sa.width < sb.width && sa.height <= sb.height) ||
+ (sa.width <= sb.width && sa.height < sb.height);
+}
+
+static inline unsigned char vdec_size_ge(struct vdec_pict_size sa, struct vdec_pict_size sb)
+{
+ return sa.width >= sb.width && sa.height >= sb.height;
+}
+
+static inline unsigned char vdec_size_ne(struct vdec_pict_size sa, struct vdec_pict_size sb)
+{
+ return sa.width != sb.width || sa.height != sb.height;
+}
+
+static inline unsigned char vdec_size_nz(struct vdec_pict_size sa)
+{
+ return sa.width != 0 && sa.height != 0;
+}
+
+int vdecddutils_free_strunit(struct vdecdd_str_unit *str_unit);
+
+int vdecddutils_create_strunit(struct vdecdd_str_unit **str_unit_handle,
+ struct lst_t *bs_list);
+
+int vdecddutils_ref_pict_get_maxnum(const struct vdec_str_configdata *str_cfg_data,
+ const struct vdec_comsequ_hdrinfo *comseq_hdr_info,
+ unsigned int *num_picts);
+
+int vdecddutils_get_minrequired_numpicts(const struct vdec_str_configdata *str_cfg_data,
+ const struct vdec_comsequ_hdrinfo *comseq_hdr_info,
+ const struct vdec_str_opconfig *op_cfg,
+ unsigned int *num_picts);
+
+int vdecddutils_pictbuf_getconfig(const struct vdec_str_configdata *str_cfg_data,
+ const struct vdec_pict_rend_config *pict_rend_cfg,
+ const struct vdec_str_opconfig *str_opcfg,
+ struct vdec_pict_bufconfig *pict_bufcfg);
+
+int vdecddutils_pictbuf_getinfo(const struct vdec_str_configdata *str_cfg_data,
+ const struct vdec_pict_rend_config *pict_rend_cfg,
+ const struct vdec_str_opconfig *str_opcfg,
+ struct vdec_pict_rendinfo *pict_rend_info);
+
+int vdecddutils_convert_buffer_config(const struct vdec_str_configdata *str_cfg_data,
+ const struct vdec_pict_bufconfig *pict_bufcfg,
+ struct vdec_pict_rendinfo *pict_rend_info);
+
+int vdecddutils_get_display_region(const struct vdec_pict_size *coded_size,
+ const struct vdec_rect *orig_disp_region,
+ struct vdec_rect *disp_region);
+
+void vdecddutils_buf_vxd_adjust_size(unsigned int *buf_size);
+
+int vdecddutils_ref_pic_hevc_get_maxnum(const struct vdec_comsequ_hdrinfo *comseq_hdrinfo,
+ unsigned int *max_ref_picnum);
+
+#endif
diff --git a/drivers/staging/media/vxd/decoder/vdecdd_utils_buf.c b/drivers/staging/media/vxd/decoder/vdecdd_utils_buf.c
new file mode 100644
index 000000000000..3a6ad609f981
--- /dev/null
+++ b/drivers/staging/media/vxd/decoder/vdecdd_utils_buf.c
@@ -0,0 +1,897 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * VXD Decoder device driver buffer utility functions implementation
+ *
+ * Copyright (c) Imagination Technologies Ltd.
+ * Copyright (c) 2021 Texas Instruments Incorporated - http://www.ti.com/
+ *
+ * Authors:
+ * Sunita Nadampalli <[email protected]>
+ *
+ * Re-written for upstream
+ * Sidraya Jayagond <[email protected]>
+ * Prashanth Kumar Amai <[email protected]>
+ */
+
+#include <linux/dma-mapping.h>
+#include <media/v4l2-ctrls.h>
+#include <media/v4l2-device.h>
+#include <media/v4l2-mem2mem.h>
+
+#include "img_profiles_levels.h"
+#include "pixel_api.h"
+#include "vdecdd_utils.h"
+
+/*
+ * Tests if chroma offset (immediately after size of luma) is exactly
+ * aligned to buffer alignment constraint.
+ */
+static inline unsigned char is_packedbuf_chroma_aligned(unsigned int offset,
+ unsigned int color_plane,
+ unsigned int align)
+{
+	return (color_plane != VDEC_PLANE_VIDEO_Y ? TRUE :
+		(offset == ALIGN(offset, align) ? TRUE : FALSE));
+}
+
+/*
+ * H.264 MaxDpbMbs values per level (see Table A-1 of Rec. ITU-T H.264
+ * (03/2010)).
+ * NOTE: Level 1b will be treated as 1.1 in case of Baseline,
+ * Constrained Baseline, Main, and Extended profiles as the value of the
+ * constraint_set3_flag is not available in struct vdec_comsequ_hdrinfo.
+ */
+static unsigned int h264_max_dpb_mbs[H264_LEVEL_MAJOR_NUM][H264_LEVEL_MINOR_NUM] = {
+ /* level: n/a n/a n/a 1.0b */
+ { 396, 396, 396, 396 },
+ /* level: 1.0 1.1 1.2 1.3 */
+ { 396, 900, 2376, 2376 },
+ /* level: 2.0 2.1 2.2 n/a */
+ { 2376, 4752, 8100, 8100 },
+ /* level: 3.0 3.1 3.2 n/a */
+ { 8100, 18000, 20480, 20480},
+ /* level: 4.0 4.1 4.2 n/a */
+ { 32768, 32768, 34816, 34816},
+ /* level: 5.0 5.1 5.2 n/a */
+ { 110400, 184320, 184320, 184320}
+};
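+
+/*
+ * Worked example (illustrative): a 1920x1088 stream at level 4.1 has
+ * MaxDpbMbs = 32768 and a picture of (1920/16) * (1088/16) = 8160 MBs,
+ * so vdecddutils_ref_pic_h264_get_maxnum() below allows at most
+ * 32768 / 8160 = 4 reference pictures (clamped to 16).
+ */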
+
+typedef int (*fn_ref_pic_get_max_num)(const struct vdec_comsequ_hdrinfo
+ *comseq_hdrinfo, unsigned int *max_ref_pic_num);
+
+void vdecddutils_buf_vxd_adjust_size(unsigned int *buf_size)
+{
+ /* Align the buffer size to VXD page size. */
+ *buf_size = ALIGN(*buf_size, VDEC_VXD_BUF_ALIGNMENT);
+}
+
+static int vdecddutils_ref_pic_h264_get_maxnum
+ (const struct vdec_comsequ_hdrinfo *comseq_hdrinfo,
+ unsigned int *max_ref_pic_num)
+{
+ unsigned int pic_width_mb;
+ unsigned int pic_height_mb;
+ unsigned int lvl_major = 0;
+ unsigned int lvl_minor = 0;
+
+ /* Pre-validate level. */
+ if (comseq_hdrinfo->codec_level < H264_LEVEL_MIN ||
+ comseq_hdrinfo->codec_level > H264_LEVEL_MAX) {
+ pr_warn("Wrong H264 level value: %u",
+ comseq_hdrinfo->codec_level);
+ }
+
+ if (comseq_hdrinfo->max_reorder_picts) {
+ *max_ref_pic_num = comseq_hdrinfo->max_reorder_picts;
+ } else {
+ /* Calculate level major and minor. */
+ lvl_major = comseq_hdrinfo->codec_level / 10;
+ lvl_minor = comseq_hdrinfo->codec_level % 10;
+
+ /* Calculate picture sizes in MBs. */
+ pic_width_mb = (comseq_hdrinfo->max_frame_size.width +
+ (VDEC_MB_DIMENSION - 1)) / VDEC_MB_DIMENSION;
+ pic_height_mb = (comseq_hdrinfo->max_frame_size.height +
+ (VDEC_MB_DIMENSION - 1)) / VDEC_MB_DIMENSION;
+
+ /* Validate lvl_minor */
+ if (lvl_minor > 3) {
+ pr_warn("Wrong H264 lvl_minor level value: %u, overriding with 3",
+ lvl_minor);
+ lvl_minor = 3;
+ }
+ /* Validate lvl_major */
+ if (lvl_major > 5) {
+ pr_warn("Wrong H264 lvl_major level value: %u, overriding with 5",
+ lvl_major);
+ lvl_major = 5;
+ }
+
+ /*
+ * Calculate the maximum number of reference pictures
+ * required based on level.
+ */
+ *max_ref_pic_num = h264_max_dpb_mbs[lvl_major][lvl_minor] /
+ (pic_width_mb * pic_height_mb);
+ if (*max_ref_pic_num > 16)
+ *max_ref_pic_num = 16;
+ }
+
+ /* Return success. */
+ return IMG_SUCCESS;
+}
+
+#ifdef HAS_HEVC
+/*
+ * @Function vdecddutils_ref_pic_hevc_get_maxnum
+ */
+int vdecddutils_ref_pic_hevc_get_maxnum(const struct vdec_comsequ_hdrinfo *comseq_hdrinfo,
+ unsigned int *max_ref_picnum)
+{
+ static const unsigned int HEVC_LEVEL_IDC_MIN = 30;
+ static const unsigned int HEVC_LEVEL_IDC_MAX = 186;
+
+ static const unsigned int
+ max_luma_ps_list[HEVC_LEVEL_MAJOR_NUM][HEVC_LEVEL_MINOR_NUM] = {
+ /* level: 1.0 1.1 1.2 */
+ { 36864, 0, 0, },
+ /* level: 2.0 2.1 2.2 */
+ { 122880, 245760, 0, },
+ /* level: 3.0 3.1 3.2 */
+ { 552960, 983040, 0, },
+ /* level: 4.0 4.1 4.2 */
+ { 2228224, 2228224, 0, },
+ /* level: 5.0 5.1 5.2 */
+ { 8912896, 8912896, 8912896, },
+ /* level: 6.0 6.1 6.2 */
+ { 35651584, 35651584, 35651584, }
+ };
+
+ /* ITU-T H.265 04/2013 A.4.1 */
+
+ const unsigned int max_dpb_picbuf = 6;
+
+ /* this is rounded to whole Ctbs */
+ unsigned int pic_size_in_samples_Y = comseq_hdrinfo->frame_size.height *
+ comseq_hdrinfo->frame_size.width;
+
+ signed char level_maj, level_min;
+ unsigned int max_luma_ps;
+
+ /* some error resilience */
+ if (comseq_hdrinfo->codec_level > HEVC_LEVEL_IDC_MAX ||
+ comseq_hdrinfo->codec_level < HEVC_LEVEL_IDC_MIN) {
+ pr_warn("HEVC Codec level out of range: %u, falling back to %u",
+ comseq_hdrinfo->codec_level,
+ comseq_hdrinfo->min_pict_buf_num);
+
+ *max_ref_picnum = comseq_hdrinfo->min_pict_buf_num;
+ return IMG_SUCCESS;
+ }
+
+ level_maj = comseq_hdrinfo->codec_level / 30;
+ level_min = (comseq_hdrinfo->codec_level % 30) / 3;
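+
+	/*
+	 * Worked example (illustrative): general_level_idc = 123 (level 4.1)
+	 * gives level_maj = 4 and level_min = 1, so MaxLumaPs = 2228224. For a
+	 * 1920x1080 stream PicSizeInSamplesY = 2073600, which is above
+	 * (3 * MaxLumaPs) / 4, so the result below is max_dpb_picbuf = 6.
+	 */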
+
+ if (level_maj > 0 && level_maj <= HEVC_LEVEL_MAJOR_NUM &&
+ level_min >= 0 && level_min < HEVC_LEVEL_MINOR_NUM) {
+ max_luma_ps = max_luma_ps_list[level_maj - 1][level_min];
+ } else {
+ pr_err("%s: Invalid parameters\n", __func__);
+ return IMG_ERROR_INVALID_PARAMETERS;
+ }
+
+ if (max_luma_ps == 0) {
+ pr_err("Wrong HEVC level value: %u.%u (general_level_idc: %u)",
+ level_maj, level_min, comseq_hdrinfo->codec_level);
+
+ return IMG_ERROR_VALUE_OUT_OF_RANGE;
+ }
+
+ if (max_luma_ps < pic_size_in_samples_Y)
+ pr_warn("HEVC PicSizeInSamplesY too large for level (%u > %u)",
+ pic_size_in_samples_Y, max_luma_ps);
+
+ if (pic_size_in_samples_Y <= (max_luma_ps >> 2))
+ *max_ref_picnum = vdec_size_min(4 * max_dpb_picbuf, 16);
+ else if (pic_size_in_samples_Y <= (max_luma_ps >> 1))
+ *max_ref_picnum = vdec_size_min(2 * max_dpb_picbuf, 16);
+ else if (pic_size_in_samples_Y <= ((3 * max_luma_ps) >> 2))
+ *max_ref_picnum = vdec_size_min((4 * max_dpb_picbuf) / 3, 16);
+ else
+ *max_ref_picnum = max_dpb_picbuf;
+
+ /* Return success. */
+ return IMG_SUCCESS;
+}
+#endif
+
+#ifdef HAS_JPEG
+static int vdecddutils_ref_pic_jpeg_get_maxnum(const struct vdec_comsequ_hdrinfo *comseq_hdrinfo,
+ unsigned int *max_ref_picnum)
+{
+ /* No reference frames for JPEG. */
+ *max_ref_picnum = 0;
+
+ /* Return success. */
+ return IMG_SUCCESS;
+}
+#endif
+
+/*
+ * The array of pointers to functions calculating the maximum number
+ * of reference pictures required for each supported video standard.
+ * NOTE: The table is indexed by VDEC_STD_* video standard values minus one.
+ */
+static fn_ref_pic_get_max_num ref_pic_get_maxnum[VDEC_STD_MAX - 1] = {
+ NULL,
+ NULL,
+ NULL,
+ vdecddutils_ref_pic_h264_get_maxnum,
+ NULL,
+ NULL,
+ NULL,
+#ifdef HAS_JPEG
+ vdecddutils_ref_pic_jpeg_get_maxnum,
+#else
+ NULL,
+#endif
+ NULL,
+ NULL,
+ NULL,
+#ifdef HAS_HEVC
+ vdecddutils_ref_pic_hevc_get_maxnum
+#else
+ NULL
+#endif
+};
+
+int
+vdecddutils_ref_pict_get_maxnum(const struct vdec_str_configdata *str_cfg_data,
+ const struct vdec_comsequ_hdrinfo *comseq_hdr_info,
+ unsigned int *num_picts)
+{
+ int ret = IMG_SUCCESS;
+
+ /* Validate input params. */
+ if (str_cfg_data->vid_std == VDEC_STD_UNDEFINED || str_cfg_data->vid_std >= VDEC_STD_MAX)
+ return IMG_ERROR_VALUE_OUT_OF_RANGE;
+
+ /* Call the function related to the provided video standard. */
+ ret = ref_pic_get_maxnum[str_cfg_data->vid_std - 1](comseq_hdr_info,
+ num_picts);
+ if (ret != IMG_SUCCESS)
+ pr_warn("[USERSID=0x%08X] Failed to get number of reference pictures",
+ str_cfg_data->user_str_id);
+
+ /*
+	 * For non-conformant streams use
+	 * max(*num_picts, comseq_hdr_info->min_pict_buf_num).
+ */
+ if (*num_picts < comseq_hdr_info->min_pict_buf_num)
+ *num_picts = comseq_hdr_info->min_pict_buf_num;
+
+ /*
+ * Increase for MVC: mvcScaleFactor = 2 (H.10.2) and additional pictures
+ * for a StoreInterViewOnlyRef case (C.4.5.2)
+ */
+ if (comseq_hdr_info->num_views > 1) {
+ *num_picts *= 2;
+ *num_picts += comseq_hdr_info->num_views - 1;
+ }
+
+ return ret;
+}
+
+static void vdecddutils_update_rend_pictsize(struct vdec_pict_size pict_size,
+ struct vdec_pict_size *rend_pict_size)
+{
+ if (rend_pict_size->width == 0) {
+ rend_pict_size->width = pict_size.width;
+ } else {
+ /* Take the smallest resolution supported by all the planes */
+ rend_pict_size->width = (pict_size.width <
+ rend_pict_size->width) ?
+ pict_size.width :
+ rend_pict_size->width;
+ }
+ if (rend_pict_size->height == 0) {
+ rend_pict_size->height = pict_size.height;
+ } else {
+ /* Take the smallest resolution supported by all the planes. */
+ rend_pict_size->height = (pict_size.height <
+ rend_pict_size->height) ?
+ pict_size.height :
+ rend_pict_size->height;
+ }
+}
+
+int vdecddutils_convert_buffer_config(const struct vdec_str_configdata *str_cfg_data,
+ const struct vdec_pict_bufconfig *pict_bufcfg,
+ struct vdec_pict_rendinfo *pict_rend_info)
+{
+ const struct pixel_pixinfo *pix_info;
+ struct img_pixfmt_desc pixfmt;
+ unsigned int i;
+ unsigned int total_vert_samples = 0;
+ unsigned int vert_samples[IMG_MAX_NUM_PLANES];
+ unsigned int plane_size = 0;
+ unsigned int plane_offset = 0;
+ struct vdec_pict_size pict_size;
+
+ /* Validate inputs. */
+ VDEC_ASSERT(str_cfg_data);
+ VDEC_ASSERT(pict_bufcfg);
+ VDEC_ASSERT(pict_rend_info);
+
+ /* Reset picture buffer allocation data. */
+ memset(pict_rend_info, 0x0, sizeof(*pict_rend_info));
+
+ pr_debug("%s picture buffer pixel_fmt = %d\n", __func__, pict_bufcfg->pixel_fmt);
+ /* Get pixel format info for regular pixel formats... */
+ if (pict_bufcfg->pixel_fmt < IMG_PIXFMT_ARBPLANAR8) {
+ pix_info = pixel_get_pixinfo(pict_bufcfg->pixel_fmt);
+ pixel_yuv_get_desc((struct pixel_pixinfo *)pix_info, &pixfmt);
+ } else {
+ pixel_get_fmt_desc(pict_bufcfg->pixel_fmt, &pixfmt);
+ }
+
+ /*
+ * Construct the render region information from the picture
+ * buffer configuration.
+ */
+ for (i = 0; i < IMG_MAX_NUM_PLANES; i++) {
+ if (pixfmt.planes[i]) {
+ unsigned int plane_align = VDEC_VXD_PICTBUF_ALIGNMENT;
+
+ /*
+ * Determine the offset (in bytes) to this plane.
+ * This is zero for the first (luma) plane and at the
+ * end of the previous plane for all subsequent planes.
+ */
+ plane_offset = plane_offset + plane_size;
+
+ /*
+ * Calculate the minimum number of vertical samples
+ * for this plane.
+ */
+ vert_samples[i] =
+ ((pict_bufcfg->coded_height +
+ pixfmt.v_denom - 1) / pixfmt.v_denom) *
+ pixfmt.v_numer[i];
+
+ /*
+			 * Calculate the minimum plane size from the stride and
+ * decode picture height. Packed buffers have the luma
+ * and chroma exactly adjacent and consequently the
+ * chroma plane offset is equal to this plane size.
+ */
+ plane_size = pict_bufcfg->stride[i] * vert_samples[i];
+ plane_size = ALIGN(plane_size, plane_align);
+
+ if (!pict_bufcfg->packed && pict_bufcfg->chroma_offset[i]) {
+ unsigned int max_plane_size;
+
+ max_plane_size =
+ pict_bufcfg->chroma_offset[i] - plane_offset;
+
+ if (plane_size > max_plane_size) {
+ pr_err("Chroma offset [%d bytes] is not large enough to fit minimum plane data [%d bytes] at offset [%d]",
+ pict_bufcfg->chroma_offset[i],
+ plane_size, plane_offset);
+ return IMG_ERROR_INVALID_PARAMETERS;
+ }
+
+ plane_size = max_plane_size;
+
+ vert_samples[i] = plane_size /
+ pict_bufcfg->stride[i];
+ } else {
+ if (pict_bufcfg->chroma_offset[i] && (plane_offset + plane_size) !=
+ pict_bufcfg->chroma_offset[i]) {
+ pr_err("Chroma offset specified [%d bytes] should match that required for plane size calculated from stride and height [%d bytes]",
+ pict_bufcfg->chroma_offset[i],
+ plane_offset + plane_size);
+ return IMG_ERROR_INVALID_PARAMETERS;
+ }
+ }
+
+ pict_rend_info->plane_info[i].offset = plane_offset;
+ pict_rend_info->plane_info[i].stride =
+ pict_bufcfg->stride[i];
+ pict_rend_info->plane_info[i].size = plane_size;
+
+#ifdef DEBUG_DECODER_DRIVER
+ pr_info("VDECDDUTILS_ConvertBufferConfig() plane %d stride %u size %u offset %u",
+ i, pict_rend_info->plane_info[i].stride,
+ pict_rend_info->plane_info[i].size,
+ pict_rend_info->plane_info[i].offset);
+#endif
+
+ pict_rend_info->rendered_size +=
+ pict_rend_info->plane_info[i].size;
+
+ total_vert_samples += vert_samples[i];
+
+ /* Calculate the render region maximum picture size. */
+ pict_size.width = (pict_rend_info->plane_info[i].stride *
+ pixfmt.bop_denom) / pixfmt.bop_numer[i];
+ pict_size.height = (vert_samples[i] * pixfmt.v_denom) / pixfmt.v_numer[i];
+ vdecddutils_update_rend_pictsize(pict_size,
+ &pict_rend_info->rend_pict_size);
+ }
+ }
+#ifdef DEBUG_DECODER_DRIVER
+ pr_info("VDECDDUTILS_ConvertBufferConfig() total required %u (inc. alignment for addressing/tiling) vs. buffer %u",
+ pict_rend_info->rendered_size, pict_bufcfg->buf_size);
+#endif
+
+ /* Ensure that the buffer size is large enough to hold the data */
+ if (pict_bufcfg->buf_size < pict_rend_info->rendered_size) {
+ pr_err("Buffer size [%d bytes] should be at least as large as rendered data (inc. any enforced gap between planes) [%d bytes]",
+ pict_bufcfg->buf_size,
+ pict_rend_info->rendered_size);
+ return IMG_ERROR_INVALID_PARAMETERS;
+ }
+
+ /* Whole buffer should be marked as rendered region */
+ pict_rend_info->rendered_size = pict_bufcfg->buf_size;
+ /* Use the actual stride alignment */
+ pict_rend_info->stride_alignment = pict_bufcfg->stride_alignment;
+
+ return IMG_SUCCESS;
+}
+
+static unsigned char vdecddutils_is_secondary_op_required(const struct vdec_comsequ_hdrinfo
+ *comseq_hdr_info,
+ const struct vdec_str_opconfig
+ *op_cfg)
+{
+ unsigned char result = TRUE;
+
+ if (!op_cfg->force_oold &&
+ !comseq_hdr_info->post_processing &&
+ comseq_hdr_info->pixel_info.chroma_fmt_idc ==
+ op_cfg->pixel_info.chroma_fmt_idc &&
+ comseq_hdr_info->pixel_info.bitdepth_y ==
+ op_cfg->pixel_info.bitdepth_y &&
+ comseq_hdr_info->pixel_info.bitdepth_c ==
+ op_cfg->pixel_info.bitdepth_c)
+ /*
+		 * The secondary output is not required; even if present, it
+		 * would not be used for transformation (e.g. scaling, rotating
+		 * or up/down-sampling).
+ */
+ result = FALSE;
+
+ return result;
+}
+
+int vdecddutils_get_minrequired_numpicts(const struct vdec_str_configdata *str_cfg_data,
+ const struct vdec_comsequ_hdrinfo *comseq_hdr_info,
+ const struct vdec_str_opconfig *op_cfg,
+ unsigned int *num_picts)
+{
+ int ret;
+ unsigned int max_held_picnum;
+
+ /* If any operation requiring internal buffers is to be applied... */
+ if (vdecddutils_is_secondary_op_required(comseq_hdr_info, op_cfg)) {
+ /*
+ * Reference picture buffers will be allocated internally,
+ * but there may be a number of picture buffers to which
+ * out-of-display-order pictures will be decoded. These
+ * buffers need to be allocated externally, so there's a
+ * need to calculate the number of out-of-(display)-order
+ * pictures required for the provided video standard.
+ */
+ ret = vdecddutils_ref_pict_get_maxnum(str_cfg_data, comseq_hdr_info,
+ &max_held_picnum);
+ if (ret != IMG_SUCCESS)
+ return ret;
+ } else {
+ /*
+ * All the reference picture buffers have to be allocated
+ * externally, so there's a need to calculate the number of
+ * reference picture buffers required for the provided video
+ * standard.
+ */
+ ret = vdecddutils_ref_pict_get_maxnum(str_cfg_data, comseq_hdr_info,
+ &max_held_picnum);
+ if (ret != IMG_SUCCESS)
+ return ret;
+ }
+
+ /*
+ * Calculate the number of picture buffers required as the maximum
+ * number of picture buffers to be held onto by the driver plus the
+ * current picture buffer.
+ */
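+	/*
+	 * e.g. (illustrative): a progressive stream that can hold at most
+	 * 4 pictures out of display order needs 4 + 1 = 5 picture buffers.
+	 */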
+ *num_picts = max_held_picnum +
+ (comseq_hdr_info->interlaced_frames ? 2 : 1);
+
+ return IMG_SUCCESS;
+}
+
+static void vdecddutils_get_codedsize(const struct vdec_pict_rend_config *pict_rend_cfg,
+ struct vdec_pict_size *decoded_pict_size)
+{
+ decoded_pict_size->width = pict_rend_cfg->coded_pict_size.width;
+ decoded_pict_size->height = pict_rend_cfg->coded_pict_size.height;
+}
+
+static unsigned char vdecddutils_is_packed(const struct vdec_pict_rendinfo *pict_rend_info,
+ const struct vdec_pict_rend_config *pict_rend_cfg)
+{
+ unsigned char packed = TRUE;
+ unsigned int pict_buf_align;
+
+ /* Validate inputs. */
+ VDEC_ASSERT(pict_rend_info);
+ VDEC_ASSERT(pict_rend_cfg);
+
+ pict_buf_align = VDEC_VXD_PICTBUF_ALIGNMENT;
+
+ if (pict_rend_info->plane_info[VDEC_PLANE_VIDEO_Y].size !=
+ pict_rend_info->plane_info[VDEC_PLANE_VIDEO_UV].offset) {
+ /* Planes that are not adjacent cannot be packed */
+ packed = FALSE;
+ } else if (!is_packedbuf_chroma_aligned(pict_rend_info->plane_info
+ [VDEC_PLANE_VIDEO_UV].offset,
+ VDEC_PLANE_VIDEO_Y,
+ pict_buf_align)) {
+ /* Chroma plane must be aligned for packed buffers. */
+ VDEC_ASSERT(pict_rend_info->plane_info[VDEC_PLANE_VIDEO_Y].size ==
+ pict_rend_info->plane_info[VDEC_PLANE_VIDEO_UV].offset);
+ packed = FALSE;
+ }
+
+ return packed;
+}
+
+static int vdecddutils_get_stride
+ (const struct vdec_str_configdata *str_cfg_data,
+ const struct vdec_pict_rend_config *pict_rend_cfg,
+ unsigned int vert_samples, unsigned int *h_stride,
+ enum vdec_color_planes color_planes)
+{
+ unsigned int hw_h_stride = *h_stride;
+
+ /*
+ * If extended strides are to be used or indexed strides failed,
+ * make extended stride alignment.
+ */
+ hw_h_stride = ALIGN(hw_h_stride,
+ pict_rend_cfg->stride_alignment > 0 ?
+ pict_rend_cfg->stride_alignment :
+ VDEC_VXD_EXT_STRIDE_ALIGNMENT_DEFAULT);
+
+ /* A zero-value indicates unsupported stride */
+ if (hw_h_stride == 0)
+ /* No valid stride found */
+ return IMG_ERROR_NOT_SUPPORTED;
+
+ *h_stride = hw_h_stride;
+
+ return IMG_SUCCESS;
+}
+
+static int vdecddutils_get_render_info(const struct vdec_str_configdata *str_cfg_data,
+ const struct vdec_pict_rend_config *pict_rend_cfg,
+ const struct pixel_pixinfo *pix_info,
+ struct vdec_pict_rendinfo *pict_rend_info)
+{
+ unsigned int i;
+ struct img_pixfmt_desc pixfmt;
+ struct vdec_pict_size coded_pict_size;
+ unsigned char single_stride = FALSE;
+ unsigned int vert_sample[IMG_MAX_NUM_PLANES] = {0};
+ unsigned int total_vert_samples;
+ unsigned int largest_stride;
+ unsigned int result;
+
+ /* Reset the output structure. */
+ memset(pict_rend_info, 0, sizeof(*pict_rend_info));
+
+ /* Ensure that the coded sizes are in whole macroblocks. */
+ if ((pict_rend_cfg->coded_pict_size.width &
+ (VDEC_MB_DIMENSION - 1)) != 0 ||
+ (pict_rend_cfg->coded_pict_size.height &
+ (VDEC_MB_DIMENSION - 1)) != 0) {
+ pr_err("Invalid render configuration coded picture size [%d x %d]. It should be a whole number of MBs in each dimension",
+ pict_rend_cfg->coded_pict_size.width,
+ pict_rend_cfg->coded_pict_size.height);
+ return IMG_ERROR_INVALID_PARAMETERS;
+ }
+
+ /* Check if the stride alignment is multiple of default. */
+ if ((pict_rend_cfg->stride_alignment &
+ (VDEC_VXD_EXT_STRIDE_ALIGNMENT_DEFAULT - 1)) != 0) {
+ pr_err("Invalid stride alignment %d used. It should be multiple of %d.",
+ pict_rend_cfg->stride_alignment,
+ VDEC_VXD_EXT_STRIDE_ALIGNMENT_DEFAULT);
+ return IMG_ERROR_INVALID_PARAMETERS;
+ }
+
+ /* Get pixel format info for regular pixel formats... */
+ if (pix_info->pixfmt < IMG_PIXFMT_ARBPLANAR8)
+ pixel_yuv_get_desc((struct pixel_pixinfo *)pix_info, &pixfmt);
+ else
+ pixel_get_fmt_desc(pix_info->pixfmt, &pixfmt);
+
+ /* Get the coded size for the appropriate orientation */
+ vdecddutils_get_codedsize(pict_rend_cfg, &coded_pict_size);
+
+ /*
+ * Calculate the hardware (inc. constraints) strides and
+ * number of vertical samples for each plane.
+ */
+ total_vert_samples = 0;
+ largest_stride = 0;
+ for (i = 0; i < IMG_MAX_NUM_PLANES; i++) {
+ if (pixfmt.planes[i]) {
+ unsigned int h_stride;
+
+ /* Horizontal stride must be for a multiple of BOPs. */
+ h_stride = ((coded_pict_size.width +
+ pixfmt.bop_denom - 1) /
+ pixfmt.bop_denom) * pixfmt.bop_numer[i];
+
+ /*
+			 * The vertical count only has to be a whole number of
+			 * sample rows.
+ */
+ vert_sample[i] = ((coded_pict_size.height +
+ pixfmt.v_denom - 1) /
+ pixfmt.v_denom) * pixfmt.v_numer[i];
+
+ /*
+ * Obtain a horizontal stride supported by the hardware
+ * (inc. constraints).
+ */
+ result = vdecddutils_get_stride(str_cfg_data, pict_rend_cfg, vert_sample[i],
+ &h_stride, (enum vdec_color_planes)i);
+ if (result != IMG_SUCCESS) {
+ VDEC_ASSERT(0);
+ pr_err("No valid VXD stride found for picture with decoded dimensions [%d x %d] and min stride [%d]",
+ coded_pict_size.width, coded_pict_size.height, h_stride);
+ return result;
+ }
+
+ pict_rend_info->plane_info[i].stride = h_stride;
+ if (i == VDEC_PLANE_VIDEO_UV && (str_cfg_data->vid_std == VDEC_STD_H264 ||
+ str_cfg_data->vid_std == VDEC_STD_HEVC)) {
+ struct pixel_pixinfo *info =
+ pixel_get_pixinfo(pix_info->pixfmt);
+ VDEC_ASSERT(PIXEL_FORMAT_INVALID !=
+ info->chroma_fmt_idc);
+ }
+
+ total_vert_samples += vert_sample[i];
+ if (h_stride > largest_stride)
+ largest_stride = h_stride;
+ }
+ }
+ pict_rend_info->stride_alignment =
+ pict_rend_cfg->stride_alignment > 0 ?
+ pict_rend_cfg->stride_alignment :
+ VDEC_VXD_EXT_STRIDE_ALIGNMENT_DEFAULT;
+
+ if (pict_rend_cfg->packed)
+ single_stride = TRUE;
+
+#ifdef HAS_JPEG
+ /* JPEG hardware uses a single (luma) stride for all planes. */
+ if (str_cfg_data->vid_std == VDEC_STD_JPEG) {
+ single_stride = true;
+
+ /* Luma should be largest for this to be used for all planes. */
+ VDEC_ASSERT(largest_stride ==
+ pict_rend_info->plane_info[VDEC_PLANE_VIDEO_Y].stride);
+ }
+#endif
+
+ /* Calculate plane sizes. */
+ for (i = 0; i < IMG_MAX_NUM_PLANES; i++) {
+ if (pixfmt.planes[i]) {
+ struct vdec_pict_size pict_size;
+ unsigned int vert_samples = vert_sample[i];
+ unsigned int plane_align = VDEC_VXD_PICTBUF_ALIGNMENT;
+
+ if (single_stride)
+ pict_rend_info->plane_info[i].stride =
+ largest_stride;
+
+ pict_rend_info->plane_info[i].size =
+ pict_rend_info->plane_info[i].stride *
+ vert_samples;
+ pict_rend_info->plane_info[i].size =
+ ALIGN(pict_rend_info->plane_info[i].size, plane_align);
+ /*
+ * Ensure that the total buffer rendered size is
+ * rounded-up to the picture buffer alignment so that
+ * this plane (within this single buffer) can be
+ * correctly addressed by the hardware at this byte
+ * offset.
+ */
+ if (i == 1 && pict_rend_cfg->packed)
+ /*
+ * Packed buffers must have chroma plane
+ * already aligned since this was factored
+ * into the stride/size calculation.
+ */
+ VDEC_ASSERT(pict_rend_info->rendered_size ==
+ ALIGN(pict_rend_info->rendered_size, plane_align));
+
+ pict_rend_info->plane_info[i].offset = pict_rend_info->rendered_size;
+
+ /* Update the total buffer size (inc. this plane). */
+ pict_rend_info->rendered_size +=
+ pict_rend_info->plane_info[i].size;
+
+ /*
+ * Update the maximum render picture size supported
+ * by all planes of this buffer.
+ */
+ pict_size.width = (pict_rend_info->plane_info[i].stride *
+ pixfmt.bop_denom) / pixfmt.bop_numer[i];
+
+ pict_size.height = (vert_sample[i] * pixfmt.v_denom) / pixfmt.v_numer[i];
+
+ vdecddutils_update_rend_pictsize(pict_size,
+ &pict_rend_info->rend_pict_size);
+
+#ifdef DEBUG_DECODER_DRIVER
+ pr_info("vdecddutils_GetRenderInfo() plane %d stride %u size %u offset %u",
+ i, pict_rend_info->plane_info[i].stride,
+ pict_rend_info->plane_info[i].size,
+ pict_rend_info->plane_info[i].offset);
+#endif
+ }
+ }
+
+#ifdef DEBUG_DECODER_DRIVER
+ pr_info("vdecddutils_GetRenderInfo() total %u (inc. alignment for addressing/tiling)",
+ pict_rend_info->rendered_size);
+#endif
+
+ return IMG_SUCCESS;
+}
+
+int vdecddutils_pictbuf_getconfig(const struct vdec_str_configdata *str_cfg_data,
+ const struct vdec_pict_rend_config *pict_rend_cfg,
+ const struct vdec_str_opconfig *str_opcfg,
+ struct vdec_pict_bufconfig *pict_bufcfg)
+{
+ struct vdec_pict_rendinfo disp_pict_rendinfo;
+ struct vdec_pict_size coded_pict_size;
+ unsigned int ret, i;
+ unsigned int size0, size1;
+
+ /* Validate inputs. */
+ VDEC_ASSERT(str_cfg_data);
+ VDEC_ASSERT(pict_rend_cfg);
+ VDEC_ASSERT(str_opcfg);
+ VDEC_ASSERT(pict_bufcfg);
+
+ /* Clear the picture buffer config before populating */
+ memset(pict_bufcfg, 0, sizeof(struct vdec_pict_bufconfig));
+
+ /* Determine the rounded-up coded sizes (compatible with hardware) */
+ ret = vdecddutils_get_render_info(str_cfg_data,
+ pict_rend_cfg,
+ &str_opcfg->pixel_info,
+ &disp_pict_rendinfo);
+ if (ret != IMG_SUCCESS)
+ return ret;
+
+ /* Get the coded size for the appropriate orientation */
+ vdecddutils_get_codedsize(pict_rend_cfg, &coded_pict_size);
+
+ pict_bufcfg->coded_width = coded_pict_size.width;
+ pict_bufcfg->coded_height = coded_pict_size.height;
+
+ /*
+ * Use the luma stride for all planes in buffer.
+ * Additional chroma stride may be needed for other pixel formats.
+ */
+ for (i = 0; i < VDEC_PLANE_MAX; i++)
+ pict_bufcfg->stride[i] = disp_pict_rendinfo.plane_info[i].stride;
+
+ /*
+ * Pixel information is taken from that
+ * specified for display.
+ */
+ pict_bufcfg->pixel_fmt = str_opcfg->pixel_info.pixfmt;
+ pr_debug("picture buffer pixel_fmt = %d\n", pict_bufcfg->pixel_fmt);
+
+ /* Tiling scheme is taken from render configuration */
+ pict_bufcfg->byte_interleave = pict_rend_cfg->byte_interleave;
+ pr_debug("picture buffer byte_interleave = %d\n", pict_bufcfg->byte_interleave);
+ /* Stride alignment */
+ pict_bufcfg->stride_alignment = pict_rend_cfg->stride_alignment > 0 ?
+ pict_rend_cfg->stride_alignment : VDEC_VXD_EXT_STRIDE_ALIGNMENT_DEFAULT;
+
+ pr_debug("picture buffer stride_alignment = %d\n", pict_bufcfg->stride_alignment);
+ /* Chroma offset taken as calculated for render configuration. */
+ pict_bufcfg->chroma_offset[0] = disp_pict_rendinfo.plane_info[VDEC_PLANE_VIDEO_UV].offset;
+ pict_bufcfg->chroma_offset[1] = disp_pict_rendinfo.plane_info[VDEC_PLANE_VIDEO_V].offset;
+
+ if (pict_rend_cfg->packed && str_opcfg->pixel_info.num_planes > 1) {
+ pict_bufcfg->packed = vdecddutils_is_packed(&disp_pict_rendinfo, pict_rend_cfg);
+ if (!pict_bufcfg->packed) {
+ /* Report if unable to meet request to pack. */
+ pr_err("Request for packed buffer could not be met");
+ return IMG_ERROR_NOT_SUPPORTED;
+ }
+
+ size0 = ALIGN(pict_bufcfg->chroma_offset[0], VDEC_VXD_PICTBUF_ALIGNMENT);
+ size1 = ALIGN(pict_bufcfg->chroma_offset[1], VDEC_VXD_PICTBUF_ALIGNMENT);
+
+ if (pict_bufcfg->chroma_offset[0] != size0 ||
+ pict_bufcfg->chroma_offset[1] != size1) {
+ pr_err("Chroma plane could not be located on a %d byte boundary (investigate stride calculations)",
+ VDEC_VXD_PICTBUF_ALIGNMENT);
+ return IMG_ERROR_NOT_SUPPORTED;
+ }
+ } else {
+ pict_bufcfg->packed = FALSE;
+ }
+
+ pict_bufcfg->buf_size = disp_pict_rendinfo.rendered_size;
+
+ /* Return success */
+ return IMG_SUCCESS;
+}
+
+int vdecddutils_get_display_region(const struct vdec_pict_size *coded_size,
+ const struct vdec_rect *orig_disp_region,
+ struct vdec_rect *disp_region)
+{
+ int ret = IMG_SUCCESS;
+
+ /* Validate inputs. */
+ VDEC_ASSERT(coded_size);
+ VDEC_ASSERT(orig_disp_region);
+ VDEC_ASSERT(disp_region);
+ if (!coded_size || !orig_disp_region || !disp_region)
+ return IMG_ERROR_INVALID_PARAMETERS;
+
+ /*
+ * In the simplest case the display region is the same as
+ * that defined in the bitstream.
+ */
+ *disp_region = *orig_disp_region;
+
+ if (orig_disp_region->height == 0 || orig_disp_region->width == 0 ||
+ coded_size->height == 0 || coded_size->width == 0) {
+ pr_err("Invalid params to calculate display region:");
+ pr_err("Display Size: [%d,%d]", orig_disp_region->width, orig_disp_region->height);
+ pr_err("Coded Size : [%d,%d]", coded_size->width, coded_size->height);
+ return IMG_ERROR_INVALID_PARAMETERS;
+ }
+
+ return ret;
+}
+
+int vdecddutils_pictbuf_getinfo(const struct vdec_str_configdata *str_cfg_data,
+ const struct vdec_pict_rend_config *pict_rend_cfg,
+ const struct vdec_str_opconfig *str_op_cfg,
+ struct vdec_pict_rendinfo *pict_rend_info)
+{
+ unsigned int ret;
+
+ /* Validate inputs. */
+ VDEC_ASSERT(str_cfg_data);
+ VDEC_ASSERT(pict_rend_cfg);
+ VDEC_ASSERT(str_op_cfg);
+ VDEC_ASSERT(pict_rend_info);
+
+ ret = vdecddutils_get_render_info(str_cfg_data, pict_rend_cfg,
+ &str_op_cfg->pixel_info,
+ pict_rend_info);
+ VDEC_ASSERT(ret == IMG_SUCCESS);
+ if (ret != IMG_SUCCESS)
+ return ret;
+
+ return IMG_SUCCESS;
+}
--
2.17.1
From: Sidraya <[email protected]>
This component is used to track decoder resources and share them
across other components.
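A minimal usage sketch of the API added here (illustrative only: error
handling is omitted and MY_TYPE_ID, my_obj and my_free() are
placeholders, not part of this patch):

	void *bucket, *res_handle;
	unsigned int res_id;

	rman_initialise();
	rman_create_bucket(&bucket);

	/* Track my_obj; my_free() is called back when the resource is freed. */
	rman_register_resource(bucket, MY_TYPE_ID, my_free, my_obj,
			       &res_handle, &res_id);

	/* Another component can look the object up from its composite id. */
	rman_get_resource(res_id, MY_TYPE_ID, (void **)&my_obj, NULL);

	/* Destroying the bucket frees any remaining resources via my_free(). */
	rman_destroy_bucket(bucket);
	rman_deinitialise();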
Signed-off-by: Sunita Nadampalli <[email protected]>
Signed-off-by: Sidraya <[email protected]>
---
MAINTAINERS | 2 +
drivers/staging/media/vxd/common/rman_api.c | 620 ++++++++++++++++++++
drivers/staging/media/vxd/common/rman_api.h | 66 +++
3 files changed, 688 insertions(+)
create mode 100644 drivers/staging/media/vxd/common/rman_api.c
create mode 100644 drivers/staging/media/vxd/common/rman_api.h
diff --git a/MAINTAINERS b/MAINTAINERS
index f7e55791f355..d126162984c6 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -19560,6 +19560,8 @@ F: drivers/staging/media/vxd/common/pool_api.c
F: drivers/staging/media/vxd/common/pool_api.h
F: drivers/staging/media/vxd/common/ra.c
F: drivers/staging/media/vxd/common/ra.h
+F: drivers/staging/media/vxd/common/rman_api.c
+F: drivers/staging/media/vxd/common/rman_api.h
F: drivers/staging/media/vxd/common/talmmu_api.c
F: drivers/staging/media/vxd/common/talmmu_api.h
F: drivers/staging/media/vxd/common/work_queue.c
diff --git a/drivers/staging/media/vxd/common/rman_api.c b/drivers/staging/media/vxd/common/rman_api.c
new file mode 100644
index 000000000000..c595dccd5ed2
--- /dev/null
+++ b/drivers/staging/media/vxd/common/rman_api.c
@@ -0,0 +1,620 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * This component is used to track decoder resources,
+ * and share them across other components.
+ *
+ * Copyright (c) Imagination Technologies Ltd.
+ * Copyright (c) 2021 Texas Instruments Incorporated - http://www.ti.com/
+ *
+ * Authors:
+ * Amit Makani <[email protected]>
+ *
+ * Re-written for upstreaming
+ * Sidraya Jayagond <[email protected]>
+ */
+#include <linux/slab.h>
+#include <linux/printk.h>
+#include <linux/mutex.h>
+#include <linux/types.h>
+#include <linux/dma-mapping.h>
+#include <media/v4l2-ctrls.h>
+#include <media/v4l2-device.h>
+#include <media/v4l2-mem2mem.h>
+
+#include "dq.h"
+#include "idgen_api.h"
+#include "rman_api.h"
+
+/*
+ * The following macros are used to build/decompose the composite resource Id
+ * made up from the bucket index + 1 and the allocated resource Id.
+ */
+#define RMAN_CRESID_BUCKET_INDEX_BITS (8)
+#define RMAN_CRESID_RES_ID_BITS (32 - RMAN_CRESID_BUCKET_INDEX_BITS)
+#define RMAN_CRESID_MAX_RES_ID ((1 << RMAN_CRESID_RES_ID_BITS) - 1)
+#define RMAN_CRESID_RES_ID_MASK (RMAN_CRESID_MAX_RES_ID)
+#define RMAN_CRESID_BUCKET_SHIFT (RMAN_CRESID_RES_ID_BITS)
+#define RMAN_CRESID_MAX_BUCKET_INDEX \
+ ((1 << RMAN_CRESID_BUCKET_INDEX_BITS) - 1)
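+
+/*
+ * Example (illustrative): a resource with internal id 0x123 living in
+ * bucket index 2 gets the composite id ((2 + 1) << 24) | 0x123 = 0x03000123;
+ * rman_get_resource() undoes this split to locate the bucket and resource.
+ */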
+
+#define RMAN_MAX_ID 4096
+#define RMAN_ID_BLOCKSIZE 256
+
+/* global state variables */
+static unsigned char inited;
+static struct rman_bucket *bucket_array[RMAN_CRESID_MAX_BUCKET_INDEX] = {0};
+static struct rman_bucket *global_res_bucket;
+static struct rman_bucket *shared_res_bucket;
+static struct mutex *shared_res_mutex_handle;
+static struct mutex *global_mutex;
+
+/*
+ * This structure contains the bucket information.
+ */
+struct rman_bucket {
+ void **link; /* to be part of single linked list */
+ struct dq_linkage_t res_list;
+ unsigned int bucket_idx;
+ void *id_gen;
+ unsigned int res_cnt;
+};
+
+/*
+ * This structure contains the resource details for a resource registered with
+ * the resource manager.
+ */
+struct rman_res {
+ struct dq_linkage_t link; /* to be part of double linked list */
+ struct rman_bucket *bucket;
+ unsigned int type_id;
+ rman_fn_free fn_free;
+ void *param;
+ unsigned int res_id;
+ struct mutex *mutex_handle; /*resource mutex */
+ unsigned char *res_name;
+ struct rman_res *shared_res;
+ unsigned int ref_cnt;
+};
+
+/*
+ * initialization
+ */
+int rman_initialise(void)
+{
+ unsigned int ret;
+
+ if (!inited) {
+ shared_res_mutex_handle = kzalloc(sizeof(*shared_res_mutex_handle), GFP_KERNEL);
+ if (!shared_res_mutex_handle)
+ return IMG_ERROR_OUT_OF_MEMORY;
+
+ mutex_init(shared_res_mutex_handle);
+
+ /* Set initialised flag */
+ inited = TRUE;
+
+ /* Create the global resource bucket */
+ ret = rman_create_bucket((void **)&global_res_bucket);
+ IMG_DBG_ASSERT(ret == IMG_SUCCESS);
+ if (ret != IMG_SUCCESS)
+ return ret;
+
+ /* Create the shared resource bucket */
+ ret = rman_create_bucket((void **)&shared_res_bucket);
+ IMG_DBG_ASSERT(ret == IMG_SUCCESS);
+ if (ret != IMG_SUCCESS)
+ return ret;
+
+ global_mutex = kzalloc(sizeof(*global_mutex), GFP_KERNEL);
+ if (!global_mutex)
+ return IMG_ERROR_OUT_OF_MEMORY;
+
+ mutex_init(global_mutex);
+ }
+ return IMG_SUCCESS;
+}
+
+/*
+ * deinitialization
+ */
+void rman_deinitialise(void)
+{
+ unsigned int i;
+
+ if (inited) {
+		/* Destroy the global resource bucket */
+ rman_destroy_bucket(global_res_bucket);
+
+ /* Destroy the shared resource bucket */
+ rman_destroy_bucket(shared_res_bucket);
+
+ /* Make sure we destroy the mutex after destroying the bucket */
+ mutex_destroy(global_mutex);
+ kfree(global_mutex);
+ global_mutex = NULL;
+
+ /* Destroy mutex */
+ mutex_destroy(shared_res_mutex_handle);
+ kfree(shared_res_mutex_handle);
+ shared_res_mutex_handle = NULL;
+
+ /* Check all buckets destroyed */
+ for (i = 0; i < RMAN_CRESID_MAX_BUCKET_INDEX; i++)
+ IMG_DBG_ASSERT(!bucket_array[i]);
+
+ /* Reset initialised flag */
+ inited = FALSE;
+ }
+}
+
+int rman_create_bucket(void **res_bucket_handle)
+{
+ struct rman_bucket *bucket;
+ unsigned int i;
+ int ret;
+
+ IMG_DBG_ASSERT(inited);
+
+ /* Allocate a bucket structure */
+ bucket = kzalloc(sizeof(*bucket), GFP_KERNEL);
+ IMG_DBG_ASSERT(bucket);
+ if (!bucket)
+ return IMG_ERROR_OUT_OF_MEMORY;
+
+ /* Initialise the resource list */
+ dq_init(&bucket->res_list);
+
+ /* Then start allocating resource ids at the first */
+ ret = idgen_createcontext(RMAN_MAX_ID, RMAN_ID_BLOCKSIZE, FALSE,
+ &bucket->id_gen);
+ if (ret != IMG_SUCCESS) {
+ kfree(bucket);
+ IMG_DBG_ASSERT("failed to create IDGEN context" == NULL);
+ return ret;
+ }
+
+ /* Locate free bucket index within the table */
+ mutex_lock_nested(shared_res_mutex_handle, SUBCLASS_RMAN);
+ for (i = 0; i < RMAN_CRESID_MAX_BUCKET_INDEX; i++) {
+ if (!bucket_array[i])
+ break;
+ }
+ if (i >= RMAN_CRESID_MAX_BUCKET_INDEX) {
+ mutex_unlock(shared_res_mutex_handle);
+ idgen_destroycontext(bucket->id_gen);
+ kfree(bucket);
+ IMG_DBG_ASSERT("No free buckets left" == NULL);
+ return IMG_ERROR_GENERIC_FAILURE;
+ }
+
+ /* Allocate bucket index */
+ bucket->bucket_idx = i;
+ bucket_array[i] = bucket;
+
+ mutex_unlock(shared_res_mutex_handle);
+
+ /* Return the bucket handle */
+ *res_bucket_handle = bucket;
+
+ return IMG_SUCCESS;
+}
+
+void rman_destroy_bucket(void *res_bucket_handle)
+{
+ struct rman_bucket *bucket = (struct rman_bucket *)res_bucket_handle;
+
+ IMG_DBG_ASSERT(inited);
+
+ IMG_DBG_ASSERT(bucket);
+ if (!bucket)
+ return;
+
+ IMG_DBG_ASSERT(bucket->bucket_idx < RMAN_CRESID_MAX_BUCKET_INDEX);
+ IMG_DBG_ASSERT(bucket_array[bucket->bucket_idx]);
+
+ /* Free all resources from the bucket */
+ rman_free_resources(res_bucket_handle, RMAN_TYPE_P1);
+ rman_free_resources(res_bucket_handle, RMAN_TYPE_P2);
+ rman_free_resources(res_bucket_handle, RMAN_TYPE_P3);
+ rman_free_resources(res_bucket_handle, RMAN_ALL_TYPES);
+
+ /* free sticky resources last: other resources are dependent on them */
+ rman_free_resources(res_bucket_handle, RMAN_STICKY);
+ /* Use proper locking around global buckets. */
+ mutex_lock_nested(shared_res_mutex_handle, SUBCLASS_RMAN);
+
+ /* Free from array of bucket pointers */
+ bucket_array[bucket->bucket_idx] = NULL;
+
+ mutex_unlock(shared_res_mutex_handle);
+
+ /* Free the bucket itself */
+ idgen_destroycontext(bucket->id_gen);
+ kfree(bucket);
+}
+
+void *rman_get_global_bucket(void)
+{
+ IMG_DBG_ASSERT(inited);
+ IMG_DBG_ASSERT(global_res_bucket);
+
+ /* Return the handle of the global resource bucket */
+ return global_res_bucket;
+}
+
+int rman_register_resource(void *res_bucket_handle, unsigned int type_id,
+ rman_fn_free fnfree, void *param,
+ void **res_handle, unsigned int *res_id)
+{
+ struct rman_bucket *bucket = (struct rman_bucket *)res_bucket_handle;
+ struct rman_res *res;
+ int ret;
+
+ IMG_DBG_ASSERT(inited);
+ IMG_DBG_ASSERT(type_id != RMAN_ALL_TYPES);
+
+ IMG_DBG_ASSERT(res_bucket_handle);
+ if (!res_bucket_handle)
+ return IMG_ERROR_GENERIC_FAILURE;
+
+ /* Allocate a resource structure */
+ res = kzalloc(sizeof(*res), GFP_KERNEL);
+ IMG_DBG_ASSERT(res);
+ if (!res)
+ return IMG_ERROR_OUT_OF_MEMORY;
+
+ /* Fill in the resource structure */
+ res->bucket = bucket;
+ res->type_id = type_id;
+ res->fn_free = fnfree;
+ res->param = param;
+
+ /* Allocate resource Id */
+ mutex_lock_nested(global_mutex, SUBCLASS_RMAN);
+ ret = idgen_allocid(bucket->id_gen, res, &res->res_id);
+ mutex_unlock(global_mutex);
+ if (ret != IMG_SUCCESS) {
+ IMG_DBG_ASSERT("failed to allocate RMAN id" == NULL);
+ return ret;
+ }
+ IMG_DBG_ASSERT(res->res_id <= RMAN_CRESID_MAX_RES_ID);
+
+ /* add this resource to the bucket */
+ mutex_lock_nested(shared_res_mutex_handle, SUBCLASS_RMAN);
+ dq_addtail(&bucket->res_list, res);
+
+ /* Update count of resources */
+ bucket->res_cnt++;
+ mutex_unlock(shared_res_mutex_handle);
+
+ /* If resource handle required */
+ if (res_handle)
+ *res_handle = res;
+
+ /* If resource id required */
+ if (res_id)
+ *res_id = rman_get_resource_id(res);
+
+ return IMG_SUCCESS;
+}
+
+unsigned int rman_get_resource_id(void *res_handle)
+{
+ struct rman_res *res = res_handle;
+ unsigned int ext_res_id;
+
+ IMG_DBG_ASSERT(res_handle);
+ if (!res_handle)
+ return 0;
+
+ IMG_DBG_ASSERT(res->res_id <= RMAN_CRESID_MAX_RES_ID);
+ IMG_DBG_ASSERT(res->bucket->bucket_idx < RMAN_CRESID_MAX_BUCKET_INDEX);
+ if (res->bucket->bucket_idx >= RMAN_CRESID_MAX_BUCKET_INDEX)
+ return 0;
+
+ ext_res_id = (((res->bucket->bucket_idx + 1) <<
+ RMAN_CRESID_BUCKET_SHIFT) | res->res_id);
+
+ return ext_res_id;
+}
+
+static void *rman_getresource_int(void *res_bucket_handle, unsigned int res_id,
+ unsigned int type_id, void **res_handle)
+{
+ struct rman_bucket *bucket = (struct rman_bucket *)res_bucket_handle;
+ struct rman_res *res;
+ int ret;
+
+ IMG_DBG_ASSERT(res_id <= RMAN_CRESID_MAX_RES_ID);
+
+	/* Look up the resource with this id within the bucket */
+ mutex_lock_nested(global_mutex, SUBCLASS_RMAN);
+ ret = idgen_gethandle(bucket->id_gen, res_id, (void **)&res);
+ mutex_unlock(global_mutex);
+ if (ret != IMG_SUCCESS) {
+ IMG_DBG_ASSERT("failed to get RMAN resource" == NULL);
+ return NULL;
+ }
+
+ /* If the resource handle is required */
+ if (res_handle)
+ *res_handle = res; /* Return it */
+
+ /* If the resource was not found */
+ IMG_DBG_ASSERT(res);
+ IMG_DBG_ASSERT((void *)res != &bucket->res_list);
+ if (!res || ((void *)res == &bucket->res_list))
+ return NULL;
+
+ /* Cross check the type */
+ IMG_DBG_ASSERT(type_id == res->type_id);
+
+ /* Return the resource. */
+ return res->param;
+}
+
+int rman_get_resource(unsigned int res_id, unsigned int type_id, void **param,
+ void **res_handle)
+{
+ unsigned int bucket_idx = (res_id >> RMAN_CRESID_BUCKET_SHIFT) - 1;
+ unsigned int int_res_id = (res_id & RMAN_CRESID_RES_ID_MASK);
+ void *local_param;
+
+ IMG_DBG_ASSERT(bucket_idx < RMAN_CRESID_MAX_BUCKET_INDEX);
+ if (bucket_idx >= RMAN_CRESID_MAX_BUCKET_INDEX)
+		/* Happens when the bucket field of res_id is 0 */
+		return IMG_ERROR_INVALID_ID;
+
+ IMG_DBG_ASSERT(bucket_array[bucket_idx]);
+ if (!bucket_array[bucket_idx])
+ return IMG_ERROR_INVALID_ID;
+
+ local_param = rman_getresource_int(bucket_array[bucket_idx],
+ int_res_id, type_id,
+ res_handle);
+
+ /* If we didn't find the resource */
+ if (!local_param)
+ return IMG_ERROR_INVALID_ID;
+
+ /* Return the resource */
+ if (param)
+ *param = local_param;
+
+ return IMG_SUCCESS;
+}
+
+int rman_get_named_resource(unsigned char *res_name, rman_fn_alloc fn_alloc,
+ void *alloc_info, void *res_bucket_handle,
+ unsigned int type_id, rman_fn_free fn_free,
+ void **param, void **res_handle, unsigned int *res_id)
+{
+ struct rman_bucket *bucket = res_bucket_handle;
+ struct rman_res *res;
+ unsigned int ret;
+ void *local_param;
+ unsigned char found = FALSE;
+
+ IMG_DBG_ASSERT(inited);
+
+ IMG_DBG_ASSERT(res_bucket_handle);
+ if (!res_bucket_handle)
+ return IMG_ERROR_GENERIC_FAILURE;
+
+ /* Lock the shared resources */
+ mutex_lock_nested(shared_res_mutex_handle, SUBCLASS_RMAN);
+ res = (struct rman_res *)dq_first(&bucket->res_list);
+ while (res && ((void *)res != &bucket->res_list)) {
+ /* If resource already in the shared list */
+ if (res->res_name && (strcmp(res_name,
+ res->res_name) == 0)) {
+ IMG_DBG_ASSERT(res->fn_free == fn_free);
+ found = TRUE;
+ break;
+ }
+
+ /* Move to next resource */
+ res = (struct rman_res *)dq_next(res);
+ }
+ mutex_unlock(shared_res_mutex_handle);
+
+ /* If the named resource was not found */
+ if (!found) {
+ /* Allocate the resource */
+ ret = fn_alloc(alloc_info, &local_param);
+ IMG_DBG_ASSERT(ret == IMG_SUCCESS);
+ if (ret != IMG_SUCCESS)
+ return ret;
+
+ /* Register the named resource */
+ ret = rman_register_resource(res_bucket_handle, type_id,
+ fn_free, local_param,
+ (void **)&res, NULL);
+ IMG_DBG_ASSERT(ret == IMG_SUCCESS);
+ if (ret != IMG_SUCCESS)
+ return ret;
+
+ mutex_lock_nested(shared_res_mutex_handle, SUBCLASS_RMAN);
+ res->res_name = res_name;
+ mutex_unlock(shared_res_mutex_handle);
+ }
+
+	/* Return the param value */
+ *param = res->param;
+
+ /* If resource handle required */
+ if (res_handle)
+ *res_handle = res;
+
+ /* If resource id required */
+ if (res_id)
+ *res_id = rman_get_resource_id(res);
+
+ /* Exit */
+ return IMG_SUCCESS;
+}
+
+static void rman_free_resource_int(struct rman_res *res)
+{
+ struct rman_bucket *bucket = res->bucket;
+
+ /* Remove the resource from the active list */
+ mutex_lock_nested(shared_res_mutex_handle, SUBCLASS_RMAN);
+
+ /* Remove from list */
+ dq_remove(res);
+
+ /* Update count of resources */
+ bucket->res_cnt--;
+
+ mutex_unlock(shared_res_mutex_handle);
+
+ /* If mutex associated with the resource */
+ if (res->mutex_handle) {
+ /* Destroy mutex */
+ mutex_destroy(res->mutex_handle);
+ kfree(res->mutex_handle);
+ res->mutex_handle = NULL;
+ }
+
+ /* If this resource is not already shared */
+ if (res->shared_res) {
+ /* Lock the shared resources */
+ mutex_lock_nested(shared_res_mutex_handle, SUBCLASS_RMAN);
+
+ /* Update the reference count */
+ IMG_DBG_ASSERT(res->shared_res->ref_cnt != 0);
+ res->shared_res->ref_cnt--;
+
+ /* If this is the last free for the shared resource */
+ if (res->shared_res->ref_cnt == 0)
+ /* Free the shared resource */
+ rman_free_resource_int(res->shared_res);
+
+ /* UnLock the shared resources */
+ mutex_unlock(shared_res_mutex_handle);
+ } else {
+ /* If there is a free callback function. */
+ if (res->fn_free)
+ /* Call resource free callback */
+ res->fn_free(res->param);
+ }
+
+ /* If the resource has a name then free it */
+ kfree(res->res_name);
+
+ /* Free the resource ID. */
+ mutex_lock_nested(global_mutex, SUBCLASS_RMAN);
+ idgen_freeid(bucket->id_gen, res->res_id);
+ mutex_unlock(global_mutex);
+
+ /* Free a resource structure */
+ kfree(res);
+}
+
+void rman_free_resource(void *res_handle)
+{
+ struct rman_res *res;
+
+ IMG_DBG_ASSERT(inited);
+
+ IMG_DBG_ASSERT(res_handle);
+ if (!res_handle)
+ return;
+
+ /* Get access to the resource structure */
+ res = (struct rman_res *)res_handle;
+
+ /* Free resource */
+ rman_free_resource_int(res);
+}
+
+void rman_lock_resource(void *res_handle)
+{
+ struct rman_res *res;
+
+ IMG_DBG_ASSERT(inited);
+
+ IMG_DBG_ASSERT(res_handle);
+ if (!res_handle)
+ return;
+
+ /* Get access to the resource structure */
+ res = (struct rman_res *)res_handle;
+
+ /* If this is a shared resource */
+ if (res->shared_res)
+ /* We need to lock/unlock the underlying shared resource */
+ res = res->shared_res;
+
+ /* If no mutex associated with this resource */
+ if (!res->mutex_handle) {
+ /* Create one */
+
+ res->mutex_handle = kzalloc(sizeof(*res->mutex_handle), GFP_KERNEL);
+ if (!res->mutex_handle)
+ return;
+
+ mutex_init(res->mutex_handle);
+ }
+
+ /* lock it */
+ mutex_lock(res->mutex_handle);
+}
+
+void rman_unlock_resource(void *res_handle)
+{
+ struct rman_res *res;
+
+ IMG_DBG_ASSERT(inited);
+
+ IMG_DBG_ASSERT(res_handle);
+ if (!res_handle)
+ return;
+
+ /* Get access to the resource structure */
+ res = (struct rman_res *)res_handle;
+
+ /* If this is a shared resource */
+ if (res->shared_res)
+ /* We need to lock/unlock the underlying shared resource */
+ res = res->shared_res;
+
+ IMG_DBG_ASSERT(res->mutex_handle);
+
+ /* Unlock mutex */
+ mutex_unlock(res->mutex_handle);
+}
+
+void rman_free_resources(void *res_bucket_handle, unsigned int type_id)
+{
+ struct rman_bucket *bucket = (struct rman_bucket *)res_bucket_handle;
+ struct rman_res *res;
+
+ IMG_DBG_ASSERT(inited);
+
+ IMG_DBG_ASSERT(res_bucket_handle);
+ if (!res_bucket_handle)
+ return;
+
+ /* Scan the active list looking for the resources to be freed */
+ mutex_lock_nested(shared_res_mutex_handle, SUBCLASS_RMAN);
+ res = (struct rman_res *)dq_first(&bucket->res_list);
+ while ((res) && ((void *)res != &bucket->res_list)) {
+		/* If this resource is to be removed */
+ if ((type_id == RMAN_ALL_TYPES &&
+ res->type_id != RMAN_STICKY) ||
+ res->type_id == type_id) {
+ /* Yes, remove it, Free current resource */
+ mutex_unlock(shared_res_mutex_handle);
+ rman_free_resource_int(res);
+ mutex_lock_nested(shared_res_mutex_handle, SUBCLASS_RMAN);
+
+ /* Restart from the beginning of the list */
+ res = (struct rman_res *)dq_first(&bucket->res_list);
+ } else {
+ /* Move to next resource */
+			res = (struct rman_res *)dq_next(res);
+ }
+ }
+ mutex_unlock(shared_res_mutex_handle);
+}
diff --git a/drivers/staging/media/vxd/common/rman_api.h b/drivers/staging/media/vxd/common/rman_api.h
new file mode 100644
index 000000000000..baadc7f22eff
--- /dev/null
+++ b/drivers/staging/media/vxd/common/rman_api.h
@@ -0,0 +1,66 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * This component is used to track decoder resources,
+ * and share them across other components.
+ *
+ * Copyright (c) Imagination Technologies Ltd.
+ * Copyright (c) 2021 Texas Instruments Incorporated - http://www.ti.com/
+ *
+ * Authors:
+ * Amit Makani <[email protected]>
+ *
+ * Re-written for upstreaming
+ * Sidraya Jayagond <[email protected]>
+ */
+
+#ifndef __RMAN_API_H__
+#define __RMAN_API_H__
+
+#include <linux/types.h>
+
+#include "img_errors.h"
+#include "lst.h"
+
+#define RMAN_ALL_TYPES (0xFFFFFFFF)
+#define RMAN_TYPE_P1 (0xFFFFFFFE)
+#define RMAN_TYPE_P2 (0xFFFFFFFE)
+#define RMAN_TYPE_P3 (0xFFFFFFFE)
+#define RMAN_STICKY (0xFFFFFFFD)
+
+int rman_initialise(void);
+
+void rman_deinitialise(void);
+
+int rman_create_bucket(void **res_handle);
+
+void rman_destroy_bucket(void *res_handle);
+
+void *rman_get_global_bucket(void);
+
+typedef void (*rman_fn_free) (void *param);
+
+int rman_register_resource(void *res_handle, unsigned int type_id, rman_fn_free fn_free,
+ void *param, void **res_handle_ptr,
+ unsigned int *res_id);
+
+typedef int (*rman_fn_alloc) (void *alloc_info, void **param);
+
+int rman_get_named_resource(unsigned char *res_name, rman_fn_alloc fn_alloc,
+ void *alloc_info, void *res_bucket_handle,
+ unsigned int type_id, rman_fn_free fn_free,
+ void **param, void **res_handle, unsigned int *res_id);
+
+unsigned int rman_get_resource_id(void *res_handle);
+
+int rman_get_resource(unsigned int res_id, unsigned int type_id, void **param,
+ void **res_handle);
+
+void rman_free_resource(void *res_handle);
+
+void rman_lock_resource(void *res_handle);
+
+void rman_unlock_resource(void *res_handle);
+
+void rman_free_resources(void *res_bucket_handle, unsigned int type_id);
+
+#endif
--
2.17.1
From: Sidraya <[email protected]>
Enable v4l2 vxd_dec on dra82
Signed-off-by: Angela Stegmaier <[email protected]>
Signed-off-by: Sidraya <[email protected]>
---
arch/arm64/boot/dts/ti/k3-j721e-main.dtsi | 9 +++++++++
1 file changed, 9 insertions(+)
diff --git a/arch/arm64/boot/dts/ti/k3-j721e-main.dtsi b/arch/arm64/boot/dts/ti/k3-j721e-main.dtsi
index cf3482376c1e..a10eb7bcce74 100644
--- a/arch/arm64/boot/dts/ti/k3-j721e-main.dtsi
+++ b/arch/arm64/boot/dts/ti/k3-j721e-main.dtsi
@@ -1242,6 +1242,15 @@
power-domains = <&k3_pds 193 TI_SCI_PD_EXCLUSIVE>;
};
+ d5520: video-decoder@4300000 {
+ /* IMG D5520 driver configuration */
+ compatible = "img,d5500-vxd";
+ reg = <0x00 0x04300000>,
+ <0x00 0x100000>;
+ power-domains = <&k3_pds 144 TI_SCI_PD_EXCLUSIVE>;
+ interrupts = <GIC_SPI 180 IRQ_TYPE_LEVEL_HIGH>;
+ };
+
ufs_wrapper: ufs-wrapper@4e80000 {
compatible = "ti,j721e-ufs";
reg = <0x0 0x4e80000 0x0 0x100>;
--
2.17.1
From: Sidraya <[email protected]>
Add the low-level VXD interface component and the V4L2 wrapper
interface for the VXD driver.
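For reference, a rough sketch of how user space exercises the wrapper
through the standard V4L2 stateful-decoder sequence (illustrative only:
the device node name and the use of the multi-planar buffer types are
assumptions, and error handling is omitted):

	int fd = open("/dev/videoN", O_RDWR);

	struct v4l2_format fmt = { 0 };

	/* Compressed bitstream on the OUTPUT queue. */
	fmt.type = V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE;
	fmt.fmt.pix_mp.pixelformat = V4L2_PIX_FMT_H264;
	ioctl(fd, VIDIOC_S_FMT, &fmt);

	/* Decoded frames on the CAPTURE queue, e.g. NV12. */
	fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE;
	fmt.fmt.pix_mp.pixelformat = V4L2_PIX_FMT_NV12;
	ioctl(fd, VIDIOC_S_FMT, &fmt);

	/*
	 * VIDIOC_REQBUFS, VIDIOC_QBUF/VIDIOC_DQBUF and VIDIOC_STREAMON on
	 * both queues then follow as for any V4L2 mem2mem decoder; this is
	 * the sequence the GStreamer v4l2 plug-in drives underneath.
	 */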
Signed-off-by: Sunita Nadampalli <[email protected]>
Signed-off-by: Sidraya <[email protected]>
---
MAINTAINERS | 2 +
drivers/staging/media/vxd/common/vid_buf.h | 42 +
drivers/staging/media/vxd/decoder/vxd_v4l2.c | 2129 ++++++++++++++++++
3 files changed, 2173 insertions(+)
create mode 100644 drivers/staging/media/vxd/common/vid_buf.h
create mode 100644 drivers/staging/media/vxd/decoder/vxd_v4l2.c
diff --git a/MAINTAINERS b/MAINTAINERS
index 0616ab620135..c7b4c860f8a7 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -19566,6 +19566,7 @@ F: drivers/staging/media/vxd/common/rman_api.c
F: drivers/staging/media/vxd/common/rman_api.h
F: drivers/staging/media/vxd/common/talmmu_api.c
F: drivers/staging/media/vxd/common/talmmu_api.h
+F: drivers/staging/media/vxd/common/vid_buf.h
F: drivers/staging/media/vxd/common/work_queue.c
F: drivers/staging/media/vxd/common/work_queue.h
F: drivers/staging/media/vxd/decoder/Kconfig
@@ -19641,6 +19642,7 @@ F; drivers/staging/media/vxd/decoder/vxd_props.h
F: drivers/staging/media/vxd/decoder/vxd_pvdec.c
F: drivers/staging/media/vxd/decoder/vxd_pvdec_priv.h
F: drivers/staging/media/vxd/decoder/vxd_pvdec_regs.h
+F: drivers/staging/media/vxd/decoder/vxd_v4l2.c
VIDEO I2C POLLING DRIVER
M: Matt Ranostay <[email protected]>
diff --git a/drivers/staging/media/vxd/common/vid_buf.h b/drivers/staging/media/vxd/common/vid_buf.h
new file mode 100644
index 000000000000..ac0e4f9b4894
--- /dev/null
+++ b/drivers/staging/media/vxd/common/vid_buf.h
@@ -0,0 +1,42 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Low-level VXD interface component
+ *
+ * Copyright (c) Imagination Technologies Ltd.
+ * Copyright (c) 2021 Texas Instruments Incorporated - http://www.ti.com/
+ *
+ * Authors:
+ * Angela Stegmaier <[email protected]>
+ *
+ * Re-written for upstream
+ * Sidraya Jayagond <[email protected]>
+ */
+
+#ifndef _VID_BUF_H
+#define _VID_BUF_H
+
+/*
+ * struct vidio_ddbufinfo - video buffer address and mapping information
+ * @buf_size: the size of the buffer (in bytes).
+ * @cpu_virt: the cpu virtual address (mapped into the local cpu mmu)
+ * @dev_virt: device virtual address (pages mapped into IMG H/W mmu)
+ * @hndl_memory: handle to device mmu mapping
+ * @buff_id: buffer id used in communication with interface
+ * @is_internal: true, if the buffer is allocated internally
+ * @ref_count: reference count (number of users)
+ * @kmstr_id: stream id
+ * @core_id: core id
+ */
+struct vidio_ddbufinfo {
+ unsigned int buf_size;
+ void *cpu_virt;
+ unsigned int dev_virt;
+ void *hndl_memory;
+ unsigned int buff_id;
+ unsigned int is_internal;
+ unsigned int ref_count;
+ unsigned int kmstr_id;
+ unsigned int core_id;
+};
+
+#endif /* _VID_BUF_H */
diff --git a/drivers/staging/media/vxd/decoder/vxd_v4l2.c b/drivers/staging/media/vxd/decoder/vxd_v4l2.c
new file mode 100644
index 000000000000..292e2de0c5e0
--- /dev/null
+++ b/drivers/staging/media/vxd/decoder/vxd_v4l2.c
@@ -0,0 +1,2129 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * IMG DEC V4L2 Interface function implementations
+ *
+ * Copyright (c) Imagination Technologies Ltd.
+ * Copyright (c) 2021 Texas Instruments Incorporated - http://www.ti.com/
+ *
+ * Authors:
+ * Angela Stegmaier <[email protected]>
+ *
+ * Re-written for upstreaming
+ * Prashanth Kumar Amai <[email protected]>
+ * Sidraya Jayagond <[email protected]>
+ */
+
+#include <linux/device.h>
+#include <linux/err.h>
+#include <linux/module.h>
+#include <linux/of_address.h>
+#include <linux/of_device.h>
+#include <linux/of_irq.h>
+#include <linux/platform_device.h>
+#include <linux/pm.h>
+#include <linux/pm_runtime.h>
+#include <linux/videodev2.h>
+#include <linux/idr.h>
+#include <linux/slab.h>
+#include <linux/interrupt.h>
+#include <linux/dma-mapping.h>
+#include <linux/spinlock.h>
+#include <linux/printk.h>
+#include <linux/mutex.h>
+
+#ifdef ERROR_RECOVERY_SIMULATION
+#include <linux/sysfs.h>
+#include <linux/kobject.h>
+#include <linux/types.h>
+#endif
+
+#include <media/v4l2-common.h>
+#include <media/v4l2-ctrls.h>
+#include <media/v4l2-device.h>
+#include <media/v4l2-event.h>
+#include <media/v4l2-ioctl.h>
+#include <media/v4l2-mem2mem.h>
+#include <media/videobuf2-core.h>
+#include <media/videobuf2-dma-sg.h>
+#ifdef CAPTURE_CONTIG_ALLOC
+#include <media/videobuf2-dma-contig.h>
+#endif
+
+#include "core.h"
+#include "h264fw_data.h"
+#include "hevcfw_data.h"
+#include "img_dec_common.h"
+#include "vxd_pvdec_priv.h"
+#include "vxd_dec.h"
+
+#define VXD_DEC_SPIN_LOCK_NAME "vxd-dec"
+#define IMG_VXD_DEC_MODULE_NAME "vxd-dec"
+
+#ifdef ERROR_RECOVERY_SIMULATION
+/* This code should be executed only when the debug flag is set. */
+/*
+ * VXD decoder kernel object used to create sysfs entries for debugging error
+ * recovery and the firmware watchdog timer. This kernel object creates a
+ * directory under /sys/kernel containing two files, fw_error_value and
+ * disable_fw_irq_value.
+ */
+struct kobject *vxd_dec_kobject;
+
+/* fw_error_value is the variable used to handle fw_error_attr */
+int fw_error_value = VDEC_ERROR_MAX;
+
+/* irq for the module, stored globally so it can be accessed from sysfs */
+int g_module_irq;
+
+/*
+ * fw_error_attr. Application can set the value of this attribute, based on the
+ * firmware error that needs to be reproduced.
+ */
+struct kobj_attribute fw_error_attr =
+ __ATTR(fw_error_value, 0660, vxd_sysfs_show, vxd_sysfs_store);
+
+/* disable_fw_irq_value is variable to handle disable_fw_irq_attr */
+int disable_fw_irq_value;
+
+/*
+ * disable_fw_irq_attr. Application can set the value of this attribute. 1 to
+ * disable irq. 0 to enable irq.
+ */
+struct kobj_attribute disable_fw_irq_attr =
+ __ATTR(disable_fw_irq_value, 0660, vxd_sysfs_show, vxd_sysfs_store);
+
+/*
+ * Group attribute so that we can create and destroy all of them at once.
+ */
+struct attribute *attrs[] = {
+ &fw_error_attr.attr,
+ &disable_fw_irq_attr.attr,
+ NULL, /* Terminate list of attributes with NULL */
+};
+
+/*
+ * An unnamed attribute group will put all of the attributes directly in
+ * the kobject directory. If we specify a name, a subdirectory will be
+ * created for the attributes, with the directory name being the name of
+ * the attribute group.
+ */
+struct attribute_group attr_group = {
+ .attrs = attrs,
+};
+
+#endif
+
+static struct heap_config vxd_dec_heap_configs[] = {
+ {
+ .type = MEM_HEAP_TYPE_UNIFIED,
+ .options.unified = {
+ .gfp_type = __GFP_DMA32 | __GFP_ZERO,
+ },
+ .to_dev_addr = NULL,
+ },
+};
+
+static struct vxd_dec_fmt vxd_dec_formats[] = {
+ {
+ .fourcc = V4L2_PIX_FMT_NV12,
+ .num_planes = 1,
+ .type = IMG_DEC_FMT_TYPE_CAPTURE,
+ .std = VDEC_STD_UNDEFINED,
+ .pixfmt = IMG_PIXFMT_420PL12YUV8,
+ .interleave = PIXEL_UV_ORDER,
+ .idc = PIXEL_FORMAT_420,
+ .size_num = 3,
+ .size_den = 2,
+ .bytes_pp = 1,
+ },
+ {
+ .fourcc = V4L2_PIX_FMT_NV16,
+ .num_planes = 1,
+ .type = IMG_DEC_FMT_TYPE_CAPTURE,
+ .std = VDEC_STD_UNDEFINED,
+ .pixfmt = IMG_PIXFMT_422PL12YUV8,
+ .interleave = PIXEL_UV_ORDER,
+ .idc = PIXEL_FORMAT_422,
+ .size_num = 2,
+ .size_den = 1,
+ .bytes_pp = 1,
+ },
+ {
+ .fourcc = V4L2_PIX_FMT_TI1210,
+ .num_planes = 1,
+ .type = IMG_DEC_FMT_TYPE_CAPTURE,
+ .std = VDEC_STD_UNDEFINED,
+ .pixfmt = IMG_PIXFMT_420PL12YUV10_MSB,
+ .interleave = PIXEL_UV_ORDER,
+ .idc = PIXEL_FORMAT_420,
+ .size_num = 3,
+ .size_den = 2,
+ .bytes_pp = 2,
+ },
+ {
+ .fourcc = V4L2_PIX_FMT_TI1610,
+ .num_planes = 1,
+ .type = IMG_DEC_FMT_TYPE_CAPTURE,
+ .std = VDEC_STD_UNDEFINED,
+ .pixfmt = IMG_PIXFMT_422PL12YUV10_MSB,
+ .interleave = PIXEL_UV_ORDER,
+ .idc = PIXEL_FORMAT_422,
+ .size_num = 2,
+ .size_den = 1,
+ .bytes_pp = 2,
+ },
+ {
+ .fourcc = V4L2_PIX_FMT_H264,
+ .num_planes = 1,
+ .type = IMG_DEC_FMT_TYPE_OUTPUT,
+ .std = VDEC_STD_H264,
+ .pixfmt = IMG_PIXFMT_UNDEFINED,
+ .interleave = PIXEL_INVALID_CI,
+ .idc = PIXEL_FORMAT_INVALID,
+ .size_num = 1,
+ .size_den = 1,
+ .bytes_pp = 1,
+ },
+ {
+ .fourcc = V4L2_PIX_FMT_HEVC,
+ .num_planes = 1,
+ .type = IMG_DEC_FMT_TYPE_OUTPUT,
+ .std = VDEC_STD_HEVC,
+ .pixfmt = IMG_PIXFMT_UNDEFINED,
+ .interleave = PIXEL_INVALID_CI,
+ .idc = PIXEL_FORMAT_INVALID,
+ .size_num = 1,
+ .size_den = 1,
+ .bytes_pp = 1,
+ },
+ {
+ .fourcc = V4L2_PIX_FMT_MJPEG,
+ .num_planes = 1,
+ .type = IMG_DEC_FMT_TYPE_OUTPUT,
+ .std = VDEC_STD_JPEG,
+ .pixfmt = IMG_PIXFMT_UNDEFINED,
+ .interleave = PIXEL_INVALID_CI,
+ .idc = PIXEL_FORMAT_INVALID,
+ .size_num = 1,
+ .size_den = 1,
+ .bytes_pp = 1,
+ },
+ {
+ .fourcc = V4L2_PIX_FMT_YUV420M,
+ .num_planes = 3,
+ .type = IMG_DEC_FMT_TYPE_CAPTURE,
+ .std = VDEC_STD_UNDEFINED,
+ .pixfmt = 86031,
+ .interleave = PIXEL_UV_ORDER,
+ .idc = PIXEL_FORMAT_420,
+ .size_num = 2,
+ .size_den = 1,
+ .bytes_pp = 1,
+ },
+ {
+ .fourcc = V4L2_PIX_FMT_YUV422M,
+ .num_planes = 3,
+ .type = IMG_DEC_FMT_TYPE_CAPTURE,
+ .std = VDEC_STD_UNDEFINED,
+ .pixfmt = 81935,
+ .interleave = PIXEL_UV_ORDER,
+ .idc = PIXEL_FORMAT_422,
+ .size_num = 3,
+ .size_den = 1,
+ .bytes_pp = 1,
+ },
+};
+
+#ifdef ERROR_RECOVERY_SIMULATION
+ssize_t vxd_sysfs_show(struct kobject *vxd_dec_kobject,
+ struct kobj_attribute *attr, char *buf)
+
+{
+ int var = 0;
+
+ if (strcmp(attr->attr.name, "fw_error_value") == 0)
+ var = fw_error_value;
+
+ else
+ var = disable_fw_irq_value;
+
+ return sprintf(buf, "%d\n", var);
+}
+
+ssize_t vxd_sysfs_store(struct kobject *vxd_dec_kobject,
+ struct kobj_attribute *attr,
+ const char *buf, unsigned long count)
+{
+	int var = 0;
+
+	if (sscanf(buf, "%d", &var) != 1)
+		return -EINVAL;
+
+ if (strcmp(attr->attr.name, "fw_error_value") == 0) {
+ fw_error_value = var;
+ } else {
+ disable_fw_irq_value = var;
+ /*
+ * if disable_fw_irq_value is not zero, disable the irq to reproduce
+ * firmware non responsiveness in vxd_worker.
+ */
+ if (disable_fw_irq_value != 0) {
+ /* just ignore the irq */
+ disable_irq(g_module_irq);
+ }
+ }
+	return count;
+}
+#endif
+
+static struct vxd_dec_ctx *file2ctx(struct file *file)
+{
+ return container_of(file->private_data, struct vxd_dec_ctx, fh);
+}
+
+static irqreturn_t soft_thread_irq(int irq, void *dev_id)
+{
+ struct platform_device *pdev = (struct platform_device *)dev_id;
+
+ if (!pdev)
+ return IRQ_NONE;
+
+ return vxd_handle_thread_irq(&pdev->dev);
+}
+
+static irqreturn_t hard_isrcb(int irq, void *dev_id)
+{
+ struct platform_device *pdev = (struct platform_device *)dev_id;
+
+ if (!pdev)
+ return IRQ_NONE;
+
+ return vxd_handle_irq(&pdev->dev);
+}
+
+static struct vxd_buffer *find_buffer(unsigned int buf_map_id, struct list_head *head)
+{
+ struct list_head *list;
+ struct vxd_buffer *buf = NULL;
+
+ list_for_each(list, head) {
+ buf = list_entry(list, struct vxd_buffer, list);
+ if (buf->buf_map_id == buf_map_id)
+ break;
+ buf = NULL;
+ }
+ return buf;
+}
+
+static void return_worker(void *work)
+{
+ struct vxd_dec_ctx *ctx;
+ struct vxd_return *res;
+ struct device *dev;
+ struct timespec64 time;
+ int loop;
+
+ work = get_work_buff(work, TRUE);
+
+ res = container_of(work, struct vxd_return, work);
+ ctx = res->ctx;
+ dev = ctx->dev->dev;
+ switch (res->type) {
+ case VXD_CB_PICT_DECODED:
+ v4l2_m2m_job_finish(ctx->dev->m2m_dev, ctx->fh.m2m_ctx);
+ ktime_get_real_ts64(&time);
+ for (loop = 0; loop < ARRAY_SIZE(ctx->dev->time_drv); loop++) {
+ if (ctx->dev->time_drv[loop].id == res->buf_map_id) {
+ ctx->dev->time_drv[loop].end_time =
+ timespec64_to_ns(&time);
+#ifdef DEBUG_DECODER_DRIVER
+ dev_info(dev, "picture buf decode time is %llu us for buf_map_id 0x%x\n",
+ div_s64(ctx->dev->time_drv[loop].end_time -
+ ctx->dev->time_drv[loop].start_time, 1000),
+ res->buf_map_id);
+#endif
+ break;
+ }
+ }
+
+ if (loop == ARRAY_SIZE(ctx->dev->time_drv))
+			dev_err(dev, "picture buf decode time for buf_map_id 0x%x was not measured\n",
+				res->buf_map_id);
+ break;
+
+ default:
+ break;
+ }
+ kfree(res->work);
+ kfree(res);
+}
+
+static void vxd_error_recovery(struct vxd_dec_ctx *ctx)
+{
+ int ret = -1;
+
+	/*
+	 * A fatal error was detected while decoding the previous frame,
+	 * so reload the firmware to bring it back to a working state.
+	 */
+	pr_debug("Reloading the firmware because of previous error\n");
+	vxd_clean_fw_resources(ctx->dev);
+	ret = vxd_prepare_fw(ctx->dev);
+	if (ret)
+		pr_err("Reloading the firmware failed!\n");
+}
+
+static struct vxd_dec_q_data *get_q_data(struct vxd_dec_ctx *ctx,
+ enum v4l2_buf_type type)
+{
+ switch (type) {
+ case V4L2_BUF_TYPE_VIDEO_OUTPUT:
+ case V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE:
+ return &ctx->q_data[Q_DATA_SRC];
+ case V4L2_BUF_TYPE_VIDEO_CAPTURE:
+ case V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE:
+ return &ctx->q_data[Q_DATA_DST];
+ default:
+ return NULL;
+ }
+}
+
+static void vxd_return_resource(void *ctx_handle, enum vxd_cb_type type,
+ unsigned int buf_map_id)
+{
+ struct vxd_return *res;
+ struct vxd_buffer *buf = NULL;
+ struct vb2_v4l2_buffer *vb;
+ struct vxd_dec_ctx *ctx = (struct vxd_dec_ctx *)ctx_handle;
+ struct v4l2_event event = {};
+ struct device *dev = ctx->dev->dev;
+ int i;
+ struct vxd_dec_q_data *q_data;
+
+ if (ctx->aborting) {
+ v4l2_m2m_job_finish(ctx->dev->m2m_dev, ctx->fh.m2m_ctx);
+ ctx->aborting = 0;
+ return;
+ }
+
+ switch (type) {
+ case VXD_CB_STRUNIT_PROCESSED:
+
+ buf = find_buffer(buf_map_id, &ctx->out_buffers);
+ if (!buf) {
+ dev_err(dev, "Could not locate buf_map_id=0x%x in OUTPUT buffers list\n",
+ buf_map_id);
+ break;
+ }
+ buf->buffer.vb.field = V4L2_FIELD_NONE;
+ q_data = get_q_data(ctx, buf->buffer.vb.vb2_buf.vb2_queue->type);
+ if (!q_data)
+ return;
+
+ for (i = 0; i < q_data->fmt->num_planes; i++)
+ vb2_set_plane_payload(&buf->buffer.vb.vb2_buf, i,
+ ctx->pict_bufcfg.plane_size[i]);
+
+ v4l2_m2m_buf_done(&buf->buffer.vb, VB2_BUF_STATE_DONE);
+ break;
+ case VXD_CB_SPS_RELEASE:
+ break;
+ case VXD_CB_PPS_RELEASE:
+ break;
+ case VXD_CB_PICT_DECODED:
+ res = kzalloc(sizeof(*res), GFP_KERNEL);
+ if (!res)
+ return;
+ res->ctx = ctx;
+ res->type = type;
+ res->buf_map_id = buf_map_id;
+
+ init_work(&res->work, return_worker, HWA_DECODER);
+		if (!res->work) {
+			kfree(res);
+			return;
+		}
+
+ schedule_work(res->work);
+
+ break;
+ case VXD_CB_PICT_DISPLAY:
+ buf = find_buffer(buf_map_id, &ctx->cap_buffers);
+ if (!buf) {
+ dev_err(dev, "Could not locate buf_map_id=0x%x in CAPTURE buffers list\n",
+ buf_map_id);
+ break;
+ }
+ buf->reuse = FALSE;
+ buf->buffer.vb.field = V4L2_FIELD_NONE;
+ q_data = get_q_data(ctx, buf->buffer.vb.vb2_buf.vb2_queue->type);
+ if (!q_data)
+ return;
+
+ for (i = 0; i < q_data->fmt->num_planes; i++)
+ vb2_set_plane_payload(&buf->buffer.vb.vb2_buf, i,
+ ctx->pict_bufcfg.plane_size[i]);
+
+ v4l2_m2m_buf_done(&buf->buffer.vb, VB2_BUF_STATE_DONE);
+ break;
+ case VXD_CB_PICT_RELEASE:
+ buf = find_buffer(buf_map_id, &ctx->reuse_queue);
+ if (buf) {
+ buf->reuse = TRUE;
+ list_move_tail(&buf->list, &ctx->cap_buffers);
+
+ v4l2_m2m_buf_queue(ctx->fh.m2m_ctx, &buf->buffer.vb);
+ break;
+ }
+ buf = find_buffer(buf_map_id, &ctx->cap_buffers);
+ if (!buf) {
+ dev_err(dev, "Could not locate buf_map_id=0x%x in CAPTURE buffers list\n",
+ buf_map_id);
+
+ break;
+ }
+ buf->reuse = TRUE;
+
+ break;
+ case VXD_CB_PICT_END:
+ break;
+ case VXD_CB_STR_END:
+ event.type = V4L2_EVENT_EOS;
+ v4l2_event_queue_fh(&ctx->fh, &event);
+ if (v4l2_m2m_num_dst_bufs_ready(ctx->fh.m2m_ctx) > 0) {
+ vb = v4l2_m2m_dst_buf_remove(ctx->fh.m2m_ctx);
+ vb->flags |= V4L2_BUF_FLAG_LAST;
+
+ q_data = get_q_data(ctx, V4L2_BUF_TYPE_VIDEO_CAPTURE);
+ if (!q_data)
+ break;
+
+ for (i = 0; i < q_data->fmt->num_planes; i++)
+ vb2_set_plane_payload(&vb->vb2_buf, i, 0);
+
+ v4l2_m2m_buf_done(vb, VB2_BUF_STATE_DONE);
+ } else {
+ ctx->flag_last = TRUE;
+ }
+ break;
+ case VXD_CB_ERROR_FATAL:
+		/*
+		 * There has been a firmware error, so the firmware needs
+		 * to be reloaded.
+		 */
+		vxd_error_recovery(ctx);
+
+		/*
+		 * Send a zero-sized buffer to the V4L2 application to signal
+		 * the error condition.
+		 */
+ if (v4l2_m2m_num_dst_bufs_ready(ctx->fh.m2m_ctx) > 0) {
+ vb = v4l2_m2m_dst_buf_remove(ctx->fh.m2m_ctx);
+
+ q_data = get_q_data(ctx, V4L2_BUF_TYPE_VIDEO_CAPTURE);
+ if (!q_data)
+ break;
+
+ for (i = 0; i < q_data->fmt->num_planes; i++)
+ vb2_set_plane_payload(&vb->vb2_buf, i, 0);
+
+ v4l2_m2m_buf_done(vb, VB2_BUF_STATE_DONE);
+ } else {
+ ctx->flag_last = TRUE;
+ }
+ break;
+ default:
+ break;
+ }
+}
+
+static int vxd_dec_submit_opconfig(struct vxd_dec_ctx *ctx)
+{
+ int ret = 0;
+
+ if (ctx->stream_created) {
+ ret = core_stream_set_output_config(ctx->res_str_id,
+ &ctx->str_opcfg,
+ &ctx->pict_bufcfg);
+ if (ret) {
+ dev_err(ctx->dev->dev, "core_stream_set_output_config failed\n");
+ ctx->opconfig_pending = TRUE;
+ return ret;
+ }
+ ctx->opconfig_pending = FALSE;
+ ctx->stream_configured = TRUE;
+ } else {
+ ctx->opconfig_pending = TRUE;
+ }
+ return ret;
+}
+
+static int vxd_dec_queue_setup(struct vb2_queue *vq,
+ unsigned int *nbuffers,
+ unsigned int *nplanes,
+ unsigned int sizes[],
+ struct device *alloc_devs[])
+{
+ struct vxd_dec_ctx *ctx = vb2_get_drv_priv(vq);
+ struct vxd_dec_q_data *q_data;
+ struct vxd_dec_q_data *src_q_data;
+ int i;
+ unsigned int hw_nbuffers = 0;
+
+ q_data = get_q_data(ctx, vq->type);
+ if (!q_data)
+ return -EINVAL;
+
+ if (*nplanes) {
+ /* This is being called from CREATEBUFS, perform validation */
+ if (*nplanes != q_data->fmt->num_planes)
+ return -EINVAL;
+
+ for (i = 0; i < *nplanes; i++) {
+ if (sizes[i] != q_data->size_image[i])
+ return -EINVAL;
+ }
+
+ return 0;
+ }
+
+ *nplanes = q_data->fmt->num_planes;
+
+ if (!V4L2_TYPE_IS_OUTPUT(vq->type)) {
+ src_q_data = &ctx->q_data[Q_DATA_SRC];
+ if (src_q_data)
+ hw_nbuffers = get_nbuffers(src_q_data->fmt->std,
+ q_data->width,
+ q_data->height,
+ ctx->max_num_ref_frames);
+ }
+
+ *nbuffers = max(*nbuffers, hw_nbuffers);
+
+ for (i = 0; i < *nplanes; i++)
+ sizes[i] = q_data->size_image[i];
+
+ return 0;
+}
+
+static int vxd_dec_buf_prepare(struct vb2_buffer *vb)
+{
+ struct vxd_dec_ctx *ctx = vb2_get_drv_priv(vb->vb2_queue);
+ struct device *dev = ctx->dev->dev;
+ struct vxd_dec_q_data *q_data;
+ void *sgt;
+#ifdef CAPTURE_CONTIG_ALLOC
+ struct page *new_page;
+#else
+ void *sgl;
+#endif
+ struct sg_table *sgt_new;
+ void *sgl_new;
+ int pages;
+ int nents = 0;
+ int size = 0;
+ int plane, num_planes, ret = 0;
+ struct vxd_buffer *buf =
+ container_of(vb, struct vxd_buffer, buffer.vb.vb2_buf);
+
+ q_data = get_q_data(ctx, vb->vb2_queue->type);
+ if (!q_data)
+ return -EINVAL;
+
+ num_planes = q_data->fmt->num_planes;
+
+ for (plane = 0; plane < num_planes; plane++) {
+ if (vb2_plane_size(vb, plane) < q_data->size_image[plane]) {
+ dev_err(dev, "data will not fit into plane (%lu < %lu)\n",
+ vb2_plane_size(vb, plane),
+				(unsigned long)q_data->size_image[plane]);
+ return -EINVAL;
+ }
+ }
+
+ if (buf->mapped)
+ return 0;
+
+ buf->buf_info.cpu_linear_addr = vb2_plane_vaddr(vb, 0);
+ buf->buf_info.buf_size = vb2_plane_size(vb, 0);
+ buf->buf_info.fd = -1;
+ sgt = vb2_dma_sg_plane_desc(vb, 0);
+ if (!sgt) {
+ dev_err(dev, "Could not get sg_table from plane 0\n");
+ return -EINVAL;
+ }
+
+ if (V4L2_TYPE_IS_OUTPUT(vb->type)) {
+ ret = core_stream_map_buf_sg(ctx->res_str_id,
+ VDEC_BUFTYPE_BITSTREAM,
+ &buf->buf_info, sgt,
+ &buf->buf_map_id);
+ if (ret) {
+ dev_err(dev, "OUTPUT core_stream_map_buf_sg failed\n");
+ return ret;
+ }
+
+ buf->bstr_info.buf_size = q_data->size_image[0];
+ buf->bstr_info.cpu_virt_addr = buf->buf_info.cpu_linear_addr;
+ buf->bstr_info.mem_attrib =
+ SYS_MEMATTRIB_UNCACHED | SYS_MEMATTRIB_WRITECOMBINE |
+ SYS_MEMATTRIB_INPUT | SYS_MEMATTRIB_CPU_WRITE;
+ buf->bstr_info.bufmap_id = buf->buf_map_id;
+ lst_init(&buf->seq_unit.bstr_seg_list);
+ lst_init(&buf->pic_unit.bstr_seg_list);
+ lst_init(&buf->end_unit.bstr_seg_list);
+
+ list_add_tail(&buf->list, &ctx->out_buffers);
+ } else {
+ /* Create a single sgt from the plane(s) */
+ sgt_new = kmalloc(sizeof(*sgt_new), GFP_KERNEL);
+ if (!sgt_new)
+ return -EINVAL;
+
+ for (plane = 0; plane < num_planes; plane++) {
+ size += ALIGN(vb2_plane_size(vb, plane), PAGE_SIZE);
+ sgt = vb2_dma_sg_plane_desc(vb, plane);
+ if (!sgt) {
+ dev_err(dev, "Could not get sg_table from plane %d\n", plane);
+ kfree(sgt_new);
+ return -EINVAL;
+ }
+#ifdef CAPTURE_CONTIG_ALLOC
+ nents += 1;
+#else
+ nents += sg_nents(img_mmu_get_sgl(sgt));
+#endif
+ }
+ buf->buf_info.buf_size = size;
+
+ pages = (size + PAGE_SIZE - 1) / PAGE_SIZE;
+ ret = sg_alloc_table(sgt_new, nents, GFP_KERNEL);
+ if (ret) {
+ kfree(sgt_new);
+ return -EINVAL;
+ }
+ sgl_new = img_mmu_get_sgl(sgt_new);
+
+ for (plane = 0; plane < num_planes; plane++) {
+ sgt = vb2_dma_sg_plane_desc(vb, plane);
+ if (!sgt) {
+ dev_err(dev, "Could not get sg_table from plane %d\n", plane);
+ sg_free_table(sgt_new);
+ kfree(sgt_new);
+ return -EINVAL;
+ }
+#ifdef CAPTURE_CONTIG_ALLOC
+ new_page = phys_to_page(vb2_dma_contig_plane_dma_addr(vb, plane));
+ sg_set_page(sgl_new, new_page, ALIGN(vb2_plane_size(vb, plane),
+ PAGE_SIZE), 0);
+ sgl_new = sg_next(sgl_new);
+#else
+ sgl = img_mmu_get_sgl(sgt);
+
+ while (sgl) {
+ sg_set_page(sgl_new, sg_page(sgl), img_mmu_get_sgl_length(sgl), 0);
+ sgl = sg_next(sgl);
+ sgl_new = sg_next(sgl_new);
+ }
+#endif
+ }
+
+ buf->buf_info.pictbuf_cfg = ctx->pict_bufcfg;
+ ret = core_stream_map_buf_sg(ctx->res_str_id,
+ VDEC_BUFTYPE_PICTURE,
+ &buf->buf_info, sgt_new,
+ &buf->buf_map_id);
+ sg_free_table(sgt_new);
+ kfree(sgt_new);
+ if (ret) {
+ dev_err(dev, "CAPTURE core_stream_map_buf_sg failed\n");
+ return ret;
+ }
+ list_add_tail(&buf->list, &ctx->cap_buffers);
+ }
+ buf->mapped = TRUE;
+ buf->reuse = TRUE;
+
+ return 0;
+}
+
+static void vxd_dec_buf_queue(struct vb2_buffer *vb)
+{
+ struct vb2_v4l2_buffer *vbuf = to_vb2_v4l2_buffer(vb);
+ struct vxd_dec_ctx *ctx = vb2_get_drv_priv(vb->vb2_queue);
+ struct vxd_buffer *buf =
+ container_of(vb, struct vxd_buffer, buffer.vb.vb2_buf);
+ struct vxd_dec_q_data *q_data;
+ int i;
+
+ if (V4L2_TYPE_IS_OUTPUT(vb->type)) {
+ v4l2_m2m_buf_queue(ctx->fh.m2m_ctx, vbuf);
+ } else {
+ mutex_lock_nested(ctx->mutex, SUBCLASS_VXD_V4L2);
+ if (buf->reuse) {
+ mutex_unlock(ctx->mutex);
+ if (ctx->flag_last) {
+ q_data = get_q_data(ctx, V4L2_BUF_TYPE_VIDEO_CAPTURE);
+ vbuf->flags |= V4L2_BUF_FLAG_LAST;
+
+ for (i = 0; i < q_data->fmt->num_planes; i++)
+ vb2_set_plane_payload(&vbuf->vb2_buf, i, 0);
+
+ v4l2_m2m_buf_done(vbuf, VB2_BUF_STATE_DONE);
+ } else {
+ v4l2_m2m_buf_queue(ctx->fh.m2m_ctx, vbuf);
+ }
+ } else {
+ list_move_tail(&buf->list, &ctx->reuse_queue);
+ mutex_unlock(ctx->mutex);
+ }
+ }
+}
+
+static void vxd_dec_return_all_buffers(struct vxd_dec_ctx *ctx,
+ struct vb2_queue *q,
+ enum vb2_buffer_state state)
+{
+ struct vb2_v4l2_buffer *vb;
+ unsigned long flags;
+
+ for (;;) {
+ if (V4L2_TYPE_IS_OUTPUT(q->type))
+ vb = v4l2_m2m_src_buf_remove(ctx->fh.m2m_ctx);
+ else
+ vb = v4l2_m2m_dst_buf_remove(ctx->fh.m2m_ctx);
+
+ if (!vb)
+ break;
+
+ spin_lock_irqsave(ctx->dev->lock, flags);
+ v4l2_m2m_buf_done(vb, state);
+		spin_unlock_irqrestore(ctx->dev->lock, flags);
+ }
+}
+
+static int vxd_dec_start_streaming(struct vb2_queue *vq, unsigned int count)
+{
+ int ret = 0;
+ struct vxd_dec_ctx *ctx = vb2_get_drv_priv(vq);
+
+ if (V4L2_TYPE_IS_OUTPUT(vq->type))
+ ctx->dst_streaming = TRUE;
+ else
+ ctx->src_streaming = TRUE;
+
+ if (ctx->dst_streaming && ctx->src_streaming && !ctx->core_streaming) {
+ if (!ctx->stream_configured) {
+ vxd_dec_return_all_buffers(ctx, vq, VB2_BUF_STATE_ERROR);
+ return -EINVAL;
+ }
+ ctx->eos = FALSE;
+ ctx->stop_initiated = FALSE;
+ ctx->flag_last = FALSE;
+ ret = core_stream_play(ctx->res_str_id);
+ if (ret) {
+ vxd_dec_return_all_buffers(ctx, vq, VB2_BUF_STATE_ERROR);
+ return ret;
+ }
+ ctx->core_streaming = TRUE;
+ }
+
+ return 0;
+}
+
+static void vxd_dec_stop_streaming(struct vb2_queue *vq)
+{
+ struct vxd_dec_ctx *ctx = vb2_get_drv_priv(vq);
+ struct list_head *list;
+ struct list_head *temp;
+ struct vxd_buffer *buf = NULL;
+
+ if (V4L2_TYPE_IS_OUTPUT(vq->type))
+ ctx->dst_streaming = FALSE;
+ else
+ ctx->src_streaming = FALSE;
+
+ if (ctx->core_streaming) {
+ core_stream_stop(ctx->res_str_id);
+ ctx->core_streaming = FALSE;
+
+ core_stream_flush(ctx->res_str_id, TRUE);
+ }
+
+ /* unmap all the output and capture plane buffers */
+ if (V4L2_TYPE_IS_OUTPUT(vq->type)) {
+		list_for_each_safe(list, temp, &ctx->out_buffers) {
+ buf = list_entry(list, struct vxd_buffer, list);
+ core_stream_unmap_buf_sg(buf->buf_map_id);
+ buf->mapped = FALSE;
+ __list_del_entry(&buf->list);
+ }
+ } else {
+ list_for_each_safe(list, temp, &ctx->reuse_queue) {
+ buf = list_entry(list, struct vxd_buffer, list);
+ list_move_tail(&buf->list, &ctx->cap_buffers);
+ v4l2_m2m_buf_queue(ctx->fh.m2m_ctx, &buf->buffer.vb);
+ }
+
+		list_for_each_safe(list, temp, &ctx->cap_buffers) {
+ buf = list_entry(list, struct vxd_buffer, list);
+ core_stream_unmap_buf_sg(buf->buf_map_id);
+ buf->mapped = FALSE;
+ __list_del_entry(&buf->list);
+ }
+ }
+
+ vxd_dec_return_all_buffers(ctx, vq, VB2_BUF_STATE_ERROR);
+}
+
+static const struct vb2_ops vxd_dec_video_ops = {
+ .queue_setup = vxd_dec_queue_setup,
+ .buf_prepare = vxd_dec_buf_prepare,
+ .buf_queue = vxd_dec_buf_queue,
+ .wait_prepare = vb2_ops_wait_prepare,
+ .wait_finish = vb2_ops_wait_finish,
+ .start_streaming = vxd_dec_start_streaming,
+ .stop_streaming = vxd_dec_stop_streaming,
+};
+
+static int queue_init(void *priv, struct vb2_queue *src_vq, struct vb2_queue *dst_vq)
+{
+ struct vxd_dec_ctx *ctx = priv;
+ struct vxd_dev *vxd = ctx->dev;
+ int ret = 0;
+
+ /* src_vq */
+ memset(src_vq, 0, sizeof(*src_vq));
+ src_vq->type = V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE;
+ src_vq->io_modes = VB2_MMAP | VB2_DMABUF;
+ src_vq->drv_priv = ctx;
+ src_vq->buf_struct_size = sizeof(struct vxd_buffer);
+ src_vq->ops = &vxd_dec_video_ops;
+ src_vq->mem_ops = &vb2_dma_sg_memops;
+ src_vq->timestamp_flags = V4L2_BUF_FLAG_TIMESTAMP_COPY;
+ src_vq->lock = vxd->mutex;
+ src_vq->dev = vxd->v4l2_dev.dev;
+ ret = vb2_queue_init(src_vq);
+ if (ret)
+ return ret;
+
+ /* dst_vq */
+ memset(dst_vq, 0, sizeof(*dst_vq));
+ dst_vq->type = V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE;
+ dst_vq->io_modes = VB2_MMAP | VB2_DMABUF;
+ dst_vq->drv_priv = ctx;
+ dst_vq->buf_struct_size = sizeof(struct vxd_buffer);
+ dst_vq->ops = &vxd_dec_video_ops;
+#ifdef CAPTURE_CONTIG_ALLOC
+ dst_vq->mem_ops = &vb2_dma_contig_memops;
+#else
+ dst_vq->mem_ops = &vb2_dma_sg_memops;
+#endif
+ dst_vq->timestamp_flags = V4L2_BUF_FLAG_TIMESTAMP_COPY;
+ dst_vq->lock = vxd->mutex;
+ dst_vq->dev = vxd->v4l2_dev.dev;
+ ret = vb2_queue_init(dst_vq);
+ if (ret) {
+ vb2_queue_release(src_vq);
+ return ret;
+ }
+
+ return ret;
+}
+
+static int vxd_dec_open(struct file *file)
+{
+ struct vxd_dev *vxd = video_drvdata(file);
+ struct vxd_dec_ctx *ctx;
+ struct vxd_dec_q_data *s_q_data;
+ int i, ret = 0;
+
+ dev_dbg(vxd->dev, "%s:%d vxd %p\n", __func__, __LINE__, vxd);
+
+ if (vxd->no_fw) {
+		dev_err(vxd->dev, "firmware binary is not present\n");
+		return -ENOENT;
+ }
+
+ mutex_lock_nested(vxd->mutex, SUBCLASS_BASE);
+
+ ctx = kzalloc(sizeof(*ctx), GFP_KERNEL);
+ if (!ctx) {
+ mutex_unlock(vxd->mutex);
+ return -ENOMEM;
+ }
+ ctx->dev = vxd;
+
+ v4l2_fh_init(&ctx->fh, video_devdata(file));
+ file->private_data = &ctx->fh;
+
+ s_q_data = &ctx->q_data[Q_DATA_SRC];
+ s_q_data->fmt = &vxd_dec_formats[0];
+ s_q_data->width = 1920;
+ s_q_data->height = 1080;
+ for (i = 0; i < s_q_data->fmt->num_planes; i++) {
+ s_q_data->bytesperline[i] = s_q_data->width;
+ s_q_data->size_image[i] = s_q_data->bytesperline[i] * s_q_data->height;
+ }
+
+ ctx->q_data[Q_DATA_DST] = *s_q_data;
+
+ ctx->fh.m2m_ctx = v4l2_m2m_ctx_init(vxd->m2m_dev, ctx, &queue_init);
+ if (IS_ERR_VALUE((unsigned long)ctx->fh.m2m_ctx)) {
+ ret = (long)(ctx->fh.m2m_ctx);
+ goto exit;
+ }
+
+ v4l2_fh_add(&ctx->fh);
+
+ ret = idr_alloc_cyclic(vxd->streams, &ctx->stream, VXD_MIN_STREAM_ID, VXD_MAX_STREAM_ID,
+ GFP_KERNEL);
+ if (ret < VXD_MIN_STREAM_ID || ret > VXD_MAX_STREAM_ID) {
+ dev_err(vxd->dev, "%s: stream id creation failed!\n",
+ __func__);
+ ret = -EFAULT;
+ goto exit;
+ }
+
+ ctx->stream.id = ret;
+ ctx->stream.ctx = ctx;
+
+ ctx->stream_created = FALSE;
+ ctx->stream_configured = FALSE;
+ ctx->src_streaming = FALSE;
+ ctx->dst_streaming = FALSE;
+ ctx->core_streaming = FALSE;
+ ctx->eos = FALSE;
+ ctx->stop_initiated = FALSE;
+ ctx->flag_last = FALSE;
+
+ lst_init(&ctx->seg_list);
+ for (i = 0; i < MAX_SEGMENTS; i++)
+ lst_add(&ctx->seg_list, &ctx->bstr_segments[i]);
+
+ if (vxd_create_ctx(vxd, ctx))
+ goto out_idr_remove;
+
+ ctx->stream.mmu_ctx = ctx->mmu_ctx;
+ ctx->stream.ptd = ctx->ptd;
+
+ ctx->mutex = kzalloc(sizeof(*ctx->mutex), GFP_KERNEL);
+ if (!ctx->mutex) {
+ ret = -ENOMEM;
+ goto out_idr_remove;
+ }
+ mutex_init(ctx->mutex);
+
+ INIT_LIST_HEAD(&ctx->items_done);
+ INIT_LIST_HEAD(&ctx->reuse_queue);
+ INIT_LIST_HEAD(&ctx->return_queue);
+ INIT_LIST_HEAD(&ctx->out_buffers);
+ INIT_LIST_HEAD(&ctx->cap_buffers);
+
+ mutex_unlock(vxd->mutex);
+
+ return 0;
+
+out_idr_remove:
+ idr_remove(vxd->streams, ctx->stream.id);
+
+exit:
+ v4l2_fh_exit(&ctx->fh);
+ get_work_buff(ctx->work, TRUE);
+ kfree(ctx->work);
+ kfree(ctx);
+ mutex_unlock(vxd->mutex);
+ return ret;
+}
+
+static int vxd_dec_release(struct file *file)
+{
+ struct vxd_dev *vxd = video_drvdata(file);
+ struct vxd_dec_ctx *ctx = file2ctx(file);
+ struct bspp_ddbuf_array_info *fw_sequ = ctx->fw_sequ;
+ struct bspp_ddbuf_array_info *fw_pps = ctx->fw_pps;
+ int i, ret = 0;
+ struct vxd_dec_q_data *s_q_data;
+
+ s_q_data = &ctx->q_data[Q_DATA_SRC];
+
+ if (ctx->stream_created) {
+ bspp_stream_destroy(ctx->bspp_context);
+
+ for (i = 0; i < MAX_SEQUENCES; i++) {
+ core_stream_unmap_buf(fw_sequ[i].ddbuf_info.bufmap_id);
+ img_mem_free(ctx->mem_ctx, fw_sequ[i].ddbuf_info.buf_id);
+ }
+
+ if (s_q_data->fmt->std != VDEC_STD_JPEG) {
+ for (i = 0; i < MAX_PPSS; i++) {
+ core_stream_unmap_buf(fw_pps[i].ddbuf_info.bufmap_id);
+ img_mem_free(ctx->mem_ctx, fw_pps[i].ddbuf_info.buf_id);
+ }
+ }
+ core_stream_destroy(ctx->res_str_id);
+ ctx->stream_created = FALSE;
+ }
+
+ mutex_lock_nested(vxd->mutex, SUBCLASS_BASE);
+
+ vxd_destroy_ctx(vxd, ctx);
+
+ idr_remove(vxd->streams, ctx->stream.id);
+
+ v4l2_fh_del(&ctx->fh);
+
+ v4l2_fh_exit(&ctx->fh);
+
+ v4l2_m2m_ctx_release(ctx->fh.m2m_ctx);
+
+ mutex_destroy(ctx->mutex);
+ kfree(ctx->mutex);
+ ctx->mutex = NULL;
+
+ get_work_buff(ctx->work, TRUE);
+ kfree(ctx->work);
+ kfree(ctx);
+
+ mutex_unlock(vxd->mutex);
+
+ return ret;
+}
+
+static int vxd_dec_querycap(struct file *file, void *priv, struct v4l2_capability *cap)
+{
+ strncpy(cap->driver, IMG_VXD_DEC_MODULE_NAME, sizeof(cap->driver) - 1);
+ strncpy(cap->card, IMG_VXD_DEC_MODULE_NAME, sizeof(cap->card) - 1);
+ snprintf(cap->bus_info, sizeof(cap->bus_info), "platform:%s", IMG_VXD_DEC_MODULE_NAME);
+ cap->device_caps = V4L2_CAP_VIDEO_M2M_MPLANE | V4L2_CAP_STREAMING;
+ cap->capabilities = cap->device_caps | V4L2_CAP_DEVICE_CAPS;
+ return 0;
+}
+
+static int __enum_fmt(struct v4l2_fmtdesc *f, unsigned int type)
+{
+ int i, index;
+ struct vxd_dec_fmt *fmt = NULL;
+
+ index = 0;
+ for (i = 0; i < ARRAY_SIZE(vxd_dec_formats); ++i) {
+ if (vxd_dec_formats[i].type & type) {
+ if (index == f->index) {
+ fmt = &vxd_dec_formats[i];
+ break;
+ }
+ index++;
+ }
+ }
+
+ if (!fmt)
+ return -EINVAL;
+
+ f->pixelformat = fmt->fourcc;
+ return 0;
+}
+
+static int vxd_dec_enum_fmt(struct file *file, void *priv, struct v4l2_fmtdesc *f)
+{
+ if (V4L2_TYPE_IS_OUTPUT(f->type))
+ return __enum_fmt(f, IMG_DEC_FMT_TYPE_OUTPUT);
+
+ return __enum_fmt(f, IMG_DEC_FMT_TYPE_CAPTURE);
+}
+
+static struct vxd_dec_fmt *find_format(struct v4l2_format *f, unsigned int type)
+{
+ int i;
+
+ for (i = 0; i < ARRAY_SIZE(vxd_dec_formats); ++i) {
+ if (vxd_dec_formats[i].fourcc == f->fmt.pix_mp.pixelformat &&
+ vxd_dec_formats[i].type == type)
+ return &vxd_dec_formats[i];
+ }
+ return NULL;
+}
+
+static unsigned int get_sizeimage(int w, int h, struct vxd_dec_fmt *fmt, int plane)
+{
+ switch (fmt->fourcc) {
+ case V4L2_PIX_FMT_YUV420M:
+ return ((plane == 0) ? (w * h) : (w * h / 2));
+ case V4L2_PIX_FMT_YUV422M:
+ return (w * h);
+ default:
+ return (w * h * fmt->size_num / fmt->size_den);
+ }
+}
+
+static unsigned int get_stride(int w, struct vxd_dec_fmt *fmt)
+{
+ return (ALIGN(w, HW_ALIGN) * fmt->bytes_pp);
+}
+
+/*
+ * vxd_get_header_info() - Run BSPP stream submit and preparse once before
+ * device_run() to retrieve header information.
+ */
+static int vxd_get_header_info(void *priv)
+{
+ struct vxd_dec_ctx *ctx = priv;
+ struct vxd_dev *vxd_dev = ctx->dev;
+ struct device *dev = vxd_dev->v4l2_dev.dev;
+ struct vb2_v4l2_buffer *src_vb;
+ struct vxd_buffer *src_vxdb;
+ struct vxd_buffer *dst_vxdb;
+ struct bspp_preparsed_data *preparsed_data;
+ unsigned int data_size;
+ int ret;
+
+ /*
+ * Checking for queued buffer.
+ * If no next buffer present, do not get information from header.
+ * Else, get header information and store for later use.
+ */
+ src_vb = v4l2_m2m_next_src_buf(ctx->fh.m2m_ctx);
+ if (!src_vb) {
+ dev_warn(dev, "get_header_info Next src buffer is null\n");
+ return IMG_ERROR_INVALID_PARAMETERS;
+ }
+ mutex_lock_nested(ctx->mutex, SUBCLASS_VXD_V4L2);
+
+ src_vxdb = container_of(src_vb, struct vxd_buffer, buffer.vb);
+ /* Setting dst_vxdb to arbitrary value (using src_vb) for now */
+ dst_vxdb = container_of(src_vb, struct vxd_buffer, buffer.vb);
+
+ preparsed_data = &dst_vxdb->preparsed_data;
+
+ data_size = vb2_get_plane_payload(&src_vxdb->buffer.vb.vb2_buf, 0);
+
+ ret = bspp_stream_submit_buffer(ctx->bspp_context,
+ &src_vxdb->bstr_info,
+ src_vxdb->buf_map_id,
+ data_size, NULL,
+ VDEC_BSTRELEMENT_UNSPECIFIED);
+	if (ret) {
+		dev_err(dev, "get_header_info bspp_stream_submit_buffer failed %d\n", ret);
+		mutex_unlock(ctx->mutex);
+		return ret;
+	}
+ mutex_unlock(ctx->mutex);
+
+ ret = bspp_stream_preparse_buffers(ctx->bspp_context, NULL, 0,
+ &ctx->seg_list,
+ preparsed_data, ctx->eos);
+ if (ret) {
+ dev_err(dev, "get_header_info bspp_stream_preparse_buffers failed %d\n", ret);
+ return ret;
+ }
+
+ if (preparsed_data->sequ_hdr_info.com_sequ_hdr_info.max_frame_size.height &&
+ preparsed_data->sequ_hdr_info.com_sequ_hdr_info.max_ref_frame_num) {
+ ctx->height = preparsed_data->sequ_hdr_info.com_sequ_hdr_info.max_frame_size.height;
+ ctx->max_num_ref_frames =
+ preparsed_data->sequ_hdr_info.com_sequ_hdr_info.max_ref_frame_num;
+ } else {
+		dev_err(dev, "get_header_info: preparsed sequence data is missing\n");
+		return IMG_ERROR_INVALID_PARAMETERS;
+ }
+
+ return 0;
+}
+
+static int vxd_dec_g_fmt(struct file *file, void *priv, struct v4l2_format *f)
+{
+ struct v4l2_pix_format_mplane *pix_mp = &f->fmt.pix_mp;
+ struct vxd_dec_ctx *ctx = file2ctx(file);
+ struct vxd_dec_q_data *q_data;
+ struct vxd_dev *vxd_dev = ctx->dev;
+ unsigned int i = 0;
+ int ret = 0;
+
+ q_data = get_q_data(ctx, f->type);
+ if (!q_data)
+ return -EINVAL;
+
+ pix_mp->field = V4L2_FIELD_NONE;
+ pix_mp->pixelformat = q_data->fmt->fourcc;
+ pix_mp->num_planes = q_data->fmt->num_planes;
+
+ if (f->type == V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE) {
+ /* The buffer contains compressed image. */
+ pix_mp->width = ctx->width;
+ pix_mp->height = ctx->height;
+ pix_mp->plane_fmt[0].bytesperline = 0;
+ pix_mp->plane_fmt[0].sizeimage = q_data->size_image[0];
+ } else if (f->type == V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE) {
+ /* The buffer contains decoded YUV image. */
+ pix_mp->width = ctx->width;
+ pix_mp->height = ctx->height;
+ for (i = 0; i < q_data->fmt->num_planes; i++) {
+ pix_mp->plane_fmt[i].bytesperline = get_stride(pix_mp->width, q_data->fmt);
+ pix_mp->plane_fmt[i].sizeimage = get_sizeimage
+ (pix_mp->plane_fmt[i].bytesperline,
+ ctx->height, q_data->fmt, i);
+ }
+ } else {
+ dev_err(vxd_dev->v4l2_dev.dev, "Wrong V4L2_format type\n");
+ return -EINVAL;
+ }
+
+ return ret;
+}
+
+static int vxd_dec_try_fmt(struct file *file, void *priv, struct v4l2_format *f)
+{
+ struct vxd_dec_ctx *ctx = file2ctx(file);
+ struct vxd_dev *vxd_dev = ctx->dev;
+ struct vxd_dec_fmt *fmt;
+ struct v4l2_pix_format_mplane *pix_mp = &f->fmt.pix_mp;
+ struct v4l2_plane_pix_format *plane_fmt = pix_mp->plane_fmt;
+ unsigned int i = 0;
+ int ret = 0;
+
+ if (V4L2_TYPE_IS_OUTPUT(f->type)) {
+ fmt = find_format(f, IMG_DEC_FMT_TYPE_OUTPUT);
+ if (!fmt) {
+ dev_err(vxd_dev->v4l2_dev.dev, "Unsupported format for source.\n");
+ return -EINVAL;
+ }
+ /*
+ * Allocation for worst case input frame size:
+ * I frame with full YUV size (YUV422)
+ */
+ plane_fmt[0].sizeimage = ALIGN(pix_mp->width, HW_ALIGN) *
+ ALIGN(pix_mp->height, HW_ALIGN) * 2;
+ } else {
+ fmt = find_format(f, IMG_DEC_FMT_TYPE_CAPTURE);
+ if (!fmt) {
+ dev_err(vxd_dev->v4l2_dev.dev, "Unsupported format for dest.\n");
+ return -EINVAL;
+ }
+ for (i = 0; i < fmt->num_planes; i++) {
+ plane_fmt[i].bytesperline = get_stride(pix_mp->width, fmt);
+ plane_fmt[i].sizeimage = get_sizeimage(plane_fmt[i].bytesperline,
+ pix_mp->height, fmt, i);
+ }
+ pix_mp->num_planes = fmt->num_planes;
+ pix_mp->flags = 0;
+ }
+
+ if (pix_mp->field == V4L2_FIELD_ANY)
+ pix_mp->field = V4L2_FIELD_NONE;
+
+ return ret;
+}
+
+static int vxd_dec_s_fmt(struct file *file, void *priv, struct v4l2_format *f)
+{
+ struct v4l2_pix_format_mplane *pix_mp;
+ struct vxd_dec_ctx *ctx = file2ctx(file);
+ struct vxd_dev *vxd_dev = ctx->dev;
+ struct device *dev = vxd_dev->v4l2_dev.dev;
+ struct vxd_dec_q_data *q_data;
+ struct vb2_queue *vq;
+ struct vdec_str_configdata strcfgdata;
+ int ret = 0;
+ unsigned char i = 0, j = 0;
+
+ pix_mp = &f->fmt.pix_mp;
+
+ if (!V4L2_TYPE_IS_OUTPUT(f->type)) {
+ int res = vxd_get_header_info(ctx);
+
+ if (res == 0)
+ pix_mp->height = ctx->height;
+ }
+
+ ret = vxd_dec_try_fmt(file, priv, f);
+ if (ret)
+ return ret;
+
+ vq = v4l2_m2m_get_vq(ctx->fh.m2m_ctx, f->type);
+ if (!vq)
+ return -EINVAL;
+
+ if (vb2_is_busy(vq)) {
+ dev_err(dev, "Queue is busy\n");
+ return -EBUSY;
+ }
+
+ q_data = get_q_data(ctx, f->type);
+
+ if (!q_data)
+ return -EINVAL;
+
+	/*
+	 * Save the original dimensions to pass to GStreamer (to remove the
+	 * green padding on kmssink).
+	 */
+ ctx->width_orig = pix_mp->width;
+ ctx->height_orig = pix_mp->height;
+
+ ctx->width = pix_mp->width;
+ ctx->height = pix_mp->height;
+
+ q_data->width = pix_mp->width;
+ q_data->height = pix_mp->height;
+
+ if (V4L2_TYPE_IS_OUTPUT(f->type)) {
+ q_data->fmt = find_format(f, IMG_DEC_FMT_TYPE_OUTPUT);
+ q_data->size_image[0] = pix_mp->plane_fmt[0].sizeimage;
+
+ if (!ctx->stream_created) {
+ strcfgdata.vid_std = q_data->fmt->std;
+
+ if (strcfgdata.vid_std == VDEC_STD_UNDEFINED) {
+ dev_err(dev, "Invalid input format\n");
+ return -EINVAL;
+ }
+ strcfgdata.bstr_format = VDEC_BSTRFORMAT_ELEMENTARY;
+ strcfgdata.user_str_id = ctx->stream.id;
+ strcfgdata.update_yuv = FALSE;
+ strcfgdata.bandwidth_efficient = FALSE;
+ strcfgdata.disable_mvc = FALSE;
+ strcfgdata.full_scan = FALSE;
+ strcfgdata.immediate_decode = TRUE;
+ strcfgdata.intra_frame_closed_gop = TRUE;
+
+ ret = core_stream_create(ctx, &strcfgdata, &ctx->res_str_id);
+ if (ret) {
+ dev_err(dev, "Core stream create failed\n");
+ return -EINVAL;
+ }
+ ctx->stream_created = TRUE;
+ if (ctx->opconfig_pending) {
+ ret = vxd_dec_submit_opconfig(ctx);
+ if (ret) {
+ dev_err(dev, "Output config failed\n");
+ return -EINVAL;
+ }
+ }
+
+ vxd_dec_alloc_bspp_resource(ctx, strcfgdata.vid_std);
+ ret = bspp_stream_create(&strcfgdata,
+ &ctx->bspp_context,
+ ctx->fw_sequ,
+ ctx->fw_pps);
+ if (ret) {
+ dev_err(dev, "BSPP stream create failed %d\n", ret);
+ return ret;
+ }
+ } else if (q_data->fmt !=
+ find_format(f, IMG_DEC_FMT_TYPE_OUTPUT)) {
+ dev_err(dev, "Input format already set\n");
+ return -EBUSY;
+ }
+ } else {
+ q_data->fmt = find_format(f, IMG_DEC_FMT_TYPE_CAPTURE);
+ for (i = 0; i < q_data->fmt->num_planes; i++) {
+ q_data->size_image[i] =
+ get_sizeimage(get_stride(pix_mp->width, q_data->fmt),
+ ctx->height, q_data->fmt, i);
+ }
+
+ ctx->str_opcfg.pixel_info.pixfmt = q_data->fmt->pixfmt;
+ ctx->str_opcfg.pixel_info.chroma_interleave = q_data->fmt->interleave;
+ ctx->str_opcfg.pixel_info.chroma_fmt = TRUE;
+ ctx->str_opcfg.pixel_info.chroma_fmt_idc = q_data->fmt->idc;
+
+ if (q_data->fmt->pixfmt == IMG_PIXFMT_420PL12YUV10_MSB ||
+ q_data->fmt->pixfmt == IMG_PIXFMT_422PL12YUV10_MSB) {
+ ctx->str_opcfg.pixel_info.mem_pkg = PIXEL_BIT10_MSB_MP;
+ ctx->str_opcfg.pixel_info.bitdepth_y = 10;
+ ctx->str_opcfg.pixel_info.bitdepth_c = 10;
+ } else {
+ ctx->str_opcfg.pixel_info.mem_pkg = PIXEL_BIT8_MP;
+ ctx->str_opcfg.pixel_info.bitdepth_y = 8;
+ ctx->str_opcfg.pixel_info.bitdepth_c = 8;
+ }
+
+ ctx->str_opcfg.force_oold = FALSE;
+
+ ctx->pict_bufcfg.coded_width = pix_mp->width;
+ ctx->pict_bufcfg.coded_height = pix_mp->height;
+ ctx->pict_bufcfg.pixel_fmt = q_data->fmt->pixfmt;
+ for (i = 0; i < pix_mp->num_planes; i++) {
+ q_data->bytesperline[i] = get_stride(q_data->width, q_data->fmt);
+ if (q_data->bytesperline[i] <
+ pix_mp->plane_fmt[0].bytesperline)
+ q_data->bytesperline[i] =
+ ALIGN(pix_mp->plane_fmt[0].bytesperline, HW_ALIGN);
+ pix_mp->plane_fmt[0].bytesperline =
+ q_data->bytesperline[i];
+ ctx->pict_bufcfg.stride[i] = q_data->bytesperline[i];
+ }
+ for (j = i; j < IMG_MAX_NUM_PLANES; j++) {
+ if ((i - 1) < 0)
+ i++;
+ ctx->pict_bufcfg.stride[j] =
+ q_data->bytesperline[i - 1];
+ }
+ ctx->pict_bufcfg.stride_alignment = HW_ALIGN;
+ ctx->pict_bufcfg.byte_interleave = FALSE;
+ for (i = 0; i < pix_mp->num_planes; i++) {
+ unsigned int plane_size =
+ get_sizeimage(ctx->pict_bufcfg.stride[i],
+ ctx->pict_bufcfg.coded_height,
+ q_data->fmt, i);
+ ctx->pict_bufcfg.buf_size += ALIGN(plane_size, PAGE_SIZE);
+ ctx->pict_bufcfg.plane_size[i] = plane_size;
+ pix_mp->plane_fmt[i].sizeimage = plane_size;
+ }
+ if (q_data->fmt->pixfmt == 86031 ||
+ q_data->fmt->pixfmt == 81935) {
+ /* Handle the v4l2 multi-planar formats */
+ ctx->str_opcfg.pixel_info.num_planes = 3;
+ ctx->pict_bufcfg.packed = FALSE;
+ for (i = 0; i < pix_mp->num_planes; i++) {
+ ctx->pict_bufcfg.chroma_offset[i] =
+ ALIGN(pix_mp->plane_fmt[i].sizeimage, PAGE_SIZE);
+ ctx->pict_bufcfg.chroma_offset[i] +=
+ (i ? ctx->pict_bufcfg.chroma_offset[i - 1] : 0);
+ }
+ } else {
+ /* IMG Decoders support only multi-planar formats */
+ ctx->str_opcfg.pixel_info.num_planes = 2;
+ ctx->pict_bufcfg.packed = TRUE;
+ ctx->pict_bufcfg.chroma_offset[0] = 0;
+ ctx->pict_bufcfg.chroma_offset[1] = 0;
+ }
+
+ vxd_dec_submit_opconfig(ctx);
+ }
+
+ return ret;
+}
+
+static int vxd_dec_subscribe_event(struct v4l2_fh *fh, const struct v4l2_event_subscription *sub)
+{
+ if (sub->type != V4L2_EVENT_EOS)
+ return -EINVAL;
+
+	return v4l2_event_subscribe(fh, sub, 0, NULL);
+}
+
+static int vxd_dec_try_cmd(struct file *file, void *fh, struct v4l2_decoder_cmd *cmd)
+{
+ if (cmd->cmd != V4L2_DEC_CMD_STOP)
+ return -EINVAL;
+
+ return 0;
+}
+
+static int vxd_dec_cmd(struct file *file, void *fh, struct v4l2_decoder_cmd *cmd)
+{
+ struct vxd_dec_ctx *ctx = file2ctx(file);
+
+ if (cmd->cmd != V4L2_DEC_CMD_STOP)
+ return -EINVAL;
+
+#ifdef DEBUG_DECODER_DRIVER
+ pr_info("%s CMD_STOP\n", __func__);
+#endif
+ /*
+ * When stop command is received, notify device_run if it is
+ * scheduled to run, or tell the decoder that eos has
+ * happened.
+ */
+ mutex_lock_nested(ctx->mutex, SUBCLASS_VXD_V4L2);
+ if (v4l2_m2m_num_src_bufs_ready(ctx->fh.m2m_ctx) > 0) {
+#ifdef DEBUG_DECODER_DRIVER
+ pr_info("V4L2 src bufs not empty, set a flag to notify device_run\n");
+#endif
+ ctx->stop_initiated = TRUE;
+ mutex_unlock(ctx->mutex);
+ } else {
+ if (ctx->num_decoding) {
+#ifdef DEBUG_DECODER_DRIVER
+ pr_info("buffers are still being decoded, so just set eos flag\n");
+#endif
+ ctx->eos = TRUE;
+ mutex_unlock(ctx->mutex);
+ } else {
+ mutex_unlock(ctx->mutex);
+#ifdef DEBUG_DECODER_DRIVER
+ pr_info("All buffers are decoded, so issue dummy stream end\n");
+#endif
+ vxd_return_resource((void *)ctx, VXD_CB_STR_END, 0);
+ }
+ }
+
+ return 0;
+}
+
+static int vxd_g_selection(struct file *file, void *fh, struct v4l2_selection *s)
+{
+ struct vxd_dec_ctx *ctx = file2ctx(file);
+ bool def_bounds = true;
+
+ if (s->type != V4L2_BUF_TYPE_VIDEO_CAPTURE &&
+ s->type != V4L2_BUF_TYPE_VIDEO_OUTPUT)
+ return -EINVAL;
+
+ switch (s->target) {
+ case V4L2_SEL_TGT_COMPOSE_DEFAULT:
+ case V4L2_SEL_TGT_COMPOSE_BOUNDS:
+ if (s->type == V4L2_BUF_TYPE_VIDEO_OUTPUT)
+ return -EINVAL;
+ break;
+ case V4L2_SEL_TGT_CROP_BOUNDS:
+ case V4L2_SEL_TGT_CROP_DEFAULT:
+ if (s->type == V4L2_BUF_TYPE_VIDEO_CAPTURE)
+ return -EINVAL;
+ break;
+ case V4L2_SEL_TGT_COMPOSE:
+ if (s->type == V4L2_BUF_TYPE_VIDEO_OUTPUT)
+ return -EINVAL;
+ def_bounds = false;
+ break;
+ case V4L2_SEL_TGT_CROP:
+ if (s->type == V4L2_BUF_TYPE_VIDEO_CAPTURE)
+ return -EINVAL;
+ def_bounds = false;
+ break;
+ default:
+ return -EINVAL;
+ }
+
+ if (def_bounds) {
+ s->r.left = 0;
+ s->r.top = 0;
+ s->r.width = ctx->width_orig;
+ s->r.height = ctx->height_orig;
+ }
+
+ return 0;
+}
+
+static const struct v4l2_ioctl_ops vxd_dec_ioctl_ops = {
+ .vidioc_querycap = vxd_dec_querycap,
+
+ .vidioc_enum_fmt_vid_cap = vxd_dec_enum_fmt,
+ .vidioc_g_fmt_vid_cap_mplane = vxd_dec_g_fmt,
+ .vidioc_try_fmt_vid_cap_mplane = vxd_dec_try_fmt,
+ .vidioc_s_fmt_vid_cap_mplane = vxd_dec_s_fmt,
+
+ .vidioc_enum_fmt_vid_out = vxd_dec_enum_fmt,
+ .vidioc_g_fmt_vid_out_mplane = vxd_dec_g_fmt,
+ .vidioc_try_fmt_vid_out_mplane = vxd_dec_try_fmt,
+ .vidioc_s_fmt_vid_out_mplane = vxd_dec_s_fmt,
+
+ .vidioc_reqbufs = v4l2_m2m_ioctl_reqbufs,
+ .vidioc_querybuf = v4l2_m2m_ioctl_querybuf,
+ .vidioc_qbuf = v4l2_m2m_ioctl_qbuf,
+ .vidioc_dqbuf = v4l2_m2m_ioctl_dqbuf,
+ .vidioc_expbuf = v4l2_m2m_ioctl_expbuf,
+
+ .vidioc_streamon = v4l2_m2m_ioctl_streamon,
+ .vidioc_streamoff = v4l2_m2m_ioctl_streamoff,
+ .vidioc_log_status = v4l2_ctrl_log_status,
+ .vidioc_subscribe_event = vxd_dec_subscribe_event,
+ .vidioc_unsubscribe_event = v4l2_event_unsubscribe,
+ .vidioc_try_decoder_cmd = vxd_dec_try_cmd,
+ .vidioc_decoder_cmd = vxd_dec_cmd,
+
+ .vidioc_g_selection = vxd_g_selection,
+};
+
+static const struct v4l2_file_operations vxd_dec_fops = {
+ .owner = THIS_MODULE,
+ .open = vxd_dec_open,
+ .release = vxd_dec_release,
+ .poll = v4l2_m2m_fop_poll,
+ .unlocked_ioctl = video_ioctl2,
+ .mmap = v4l2_m2m_fop_mmap,
+};
+
+static struct video_device vxd_dec_videodev = {
+ .name = IMG_VXD_DEC_MODULE_NAME,
+ .fops = &vxd_dec_fops,
+ .ioctl_ops = &vxd_dec_ioctl_ops,
+ .minor = -1,
+ .release = video_device_release,
+ .vfl_dir = VFL_DIR_M2M,
+};
+
+static void device_run(void *priv)
+{
+ struct vxd_dec_ctx *ctx = priv;
+ struct vxd_dev *vxd_dev = ctx->dev;
+ struct device *dev = vxd_dev->v4l2_dev.dev;
+ struct vb2_v4l2_buffer *src_vb;
+ struct vb2_v4l2_buffer *dst_vb;
+ struct vxd_buffer *src_vxdb;
+ struct vxd_buffer *dst_vxdb;
+ struct bspp_bitstr_seg *item = NULL, *next = NULL;
+ struct bspp_preparsed_data *preparsed_data;
+ unsigned int data_size;
+ int ret;
+ struct timespec64 time;
+ static int cnt;
+ int i;
+
+ mutex_lock_nested(ctx->mutex, SUBCLASS_VXD_V4L2);
+ ctx->num_decoding++;
+
+ src_vb = v4l2_m2m_src_buf_remove(ctx->fh.m2m_ctx);
+ if (!src_vb)
+ dev_err(dev, "Next src buffer is null\n");
+
+ dst_vb = v4l2_m2m_dst_buf_remove(ctx->fh.m2m_ctx);
+ if (!dst_vb)
+ dev_err(dev, "Next dst buffer is null\n");
+
+ src_vxdb = container_of(src_vb, struct vxd_buffer, buffer.vb);
+ dst_vxdb = container_of(dst_vb, struct vxd_buffer, buffer.vb);
+
+ preparsed_data = &dst_vxdb->preparsed_data;
+
+ data_size = vb2_get_plane_payload(&src_vxdb->buffer.vb.vb2_buf, 0);
+
+ ret = bspp_stream_submit_buffer(ctx->bspp_context,
+ &src_vxdb->bstr_info,
+ src_vxdb->buf_map_id,
+ data_size, NULL,
+ VDEC_BSTRELEMENT_UNSPECIFIED);
+ if (ret)
+ dev_err(dev, "bspp_stream_submit_buffer failed %d\n", ret);
+
+ if (ctx->stop_initiated &&
+ (v4l2_m2m_num_src_bufs_ready(ctx->fh.m2m_ctx) == 0))
+ ctx->eos = TRUE;
+
+ mutex_unlock(ctx->mutex);
+
+ ret = bspp_stream_preparse_buffers(ctx->bspp_context, NULL, 0, &ctx->seg_list,
+ preparsed_data, ctx->eos);
+ if (ret)
+ dev_err(dev, "bspp_stream_preparse_buffers failed %d\n", ret);
+
+ ktime_get_real_ts64(&time);
+ vxd_dev->time_drv[cnt].start_time = timespec64_to_ns(&time);
+ vxd_dev->time_drv[cnt].id = dst_vxdb->buf_map_id;
+ cnt++;
+
+ if (cnt >= ARRAY_SIZE(vxd_dev->time_drv))
+ cnt = 0;
+
+ core_stream_fill_pictbuf(dst_vxdb->buf_map_id);
+
+ if (preparsed_data->new_sequence) {
+ src_vxdb->seq_unit.str_unit_type =
+ VDECDD_STRUNIT_SEQUENCE_START;
+ src_vxdb->seq_unit.str_unit_handle = ctx;
+ src_vxdb->seq_unit.err_flags = 0;
+ src_vxdb->seq_unit.dd_data = NULL;
+ src_vxdb->seq_unit.seq_hdr_info =
+ &preparsed_data->sequ_hdr_info;
+ src_vxdb->seq_unit.seq_hdr_id = 0;
+ src_vxdb->seq_unit.closed_gop = TRUE;
+ src_vxdb->seq_unit.eop = FALSE;
+ src_vxdb->seq_unit.pict_hdr_info = NULL;
+ src_vxdb->seq_unit.dd_pict_data = NULL;
+ src_vxdb->seq_unit.last_pict_in_seq = FALSE;
+ src_vxdb->seq_unit.str_unit_tag = NULL;
+ src_vxdb->seq_unit.decode = FALSE;
+ src_vxdb->seq_unit.features = 0;
+ core_stream_submit_unit(ctx->res_str_id, &src_vxdb->seq_unit);
+ }
+
+ src_vxdb->pic_unit.str_unit_type = VDECDD_STRUNIT_PICTURE_START;
+ src_vxdb->pic_unit.str_unit_handle = ctx;
+ src_vxdb->pic_unit.err_flags = 0;
+ /* Move the processed segments to the submission buffer */
+ for (i = 0; i < BSPP_MAX_PICTURES_PER_BUFFER; i++) {
+ item = lst_first(&preparsed_data->picture_data.pre_pict_seg_list[i]);
+ while (item) {
+ next = lst_next(item);
+ lst_remove(&preparsed_data->picture_data.pre_pict_seg_list[i], item);
+ lst_add(&src_vxdb->pic_unit.bstr_seg_list, item);
+ item = next;
+ }
+ /* Move the processed segments to the submission buffer */
+ item = lst_first(&preparsed_data->picture_data.pict_seg_list[i]);
+ while (item) {
+ next = lst_next(item);
+ lst_remove(&preparsed_data->picture_data.pict_seg_list[i], item);
+ lst_add(&src_vxdb->pic_unit.bstr_seg_list, item);
+ item = next;
+ }
+ }
+
+ src_vxdb->pic_unit.dd_data = NULL;
+ src_vxdb->pic_unit.seq_hdr_info = NULL;
+ src_vxdb->pic_unit.seq_hdr_id = 0;
+ if (preparsed_data->new_sequence)
+ src_vxdb->pic_unit.closed_gop = TRUE;
+ else
+ src_vxdb->pic_unit.closed_gop = FALSE;
+ src_vxdb->pic_unit.eop = TRUE;
+ src_vxdb->pic_unit.eos = ctx->eos;
+ src_vxdb->pic_unit.pict_hdr_info =
+ &preparsed_data->picture_data.pict_hdr_info;
+ src_vxdb->pic_unit.dd_pict_data = NULL;
+ src_vxdb->pic_unit.last_pict_in_seq = FALSE;
+ src_vxdb->pic_unit.str_unit_tag = NULL;
+ src_vxdb->pic_unit.decode = FALSE;
+ src_vxdb->pic_unit.features = 0;
+ core_stream_submit_unit(ctx->res_str_id, &src_vxdb->pic_unit);
+
+ src_vxdb->end_unit.str_unit_type = VDECDD_STRUNIT_PICTURE_END;
+ src_vxdb->end_unit.str_unit_handle = ctx;
+ src_vxdb->end_unit.err_flags = 0;
+ src_vxdb->end_unit.dd_data = NULL;
+ src_vxdb->end_unit.seq_hdr_info = NULL;
+ src_vxdb->end_unit.seq_hdr_id = 0;
+ src_vxdb->end_unit.closed_gop = FALSE;
+ src_vxdb->end_unit.eop = FALSE;
+ src_vxdb->end_unit.eos = ctx->eos;
+ src_vxdb->end_unit.pict_hdr_info = NULL;
+ src_vxdb->end_unit.dd_pict_data = NULL;
+ src_vxdb->end_unit.last_pict_in_seq = FALSE;
+ src_vxdb->end_unit.str_unit_tag = NULL;
+ src_vxdb->end_unit.decode = FALSE;
+ src_vxdb->end_unit.features = 0;
+ core_stream_submit_unit(ctx->res_str_id, &src_vxdb->end_unit);
+}
+
+static int job_ready(void *priv)
+{
+ struct vxd_dec_ctx *ctx = priv;
+
+ if (v4l2_m2m_num_src_bufs_ready(ctx->fh.m2m_ctx) < 1 ||
+ v4l2_m2m_num_dst_bufs_ready(ctx->fh.m2m_ctx) < 1 ||
+ !ctx->core_streaming)
+ return 0;
+
+ return 1;
+}
+
+static void job_abort(void *priv)
+{
+ struct vxd_dec_ctx *ctx = priv;
+
+ /* Cancel the transaction at next callback */
+ ctx->aborting = 1;
+}
+
+static const struct v4l2_m2m_ops m2m_ops = {
+ .device_run = device_run,
+ .job_ready = job_ready,
+ .job_abort = job_abort,
+};
+
+static const struct of_device_id vxd_dec_of_match[] = {
+ {.compatible = "img,d5500-vxd"},
+ { /* end */},
+};
+MODULE_DEVICE_TABLE(of, vxd_dec_of_match);
+
+static int vxd_dec_probe(struct platform_device *pdev)
+{
+ struct vxd_dev *vxd;
+ struct resource *res;
+ const struct of_device_id *of_dev_id;
+ int ret;
+ int module_irq;
+ struct video_device *vfd;
+
+ struct heap_config *heap_configs;
+ int num_heaps;
+	int i_heap_id;
+ /* Protect structure fields */
+ spinlock_t **lock;
+
+ of_dev_id = of_match_device(vxd_dec_of_match, &pdev->dev);
+ if (!of_dev_id) {
+ dev_err(&pdev->dev, "%s: Unable to match device\n", __func__);
+ return -ENODEV;
+ }
+
+ dma_set_mask(&pdev->dev, DMA_BIT_MASK(40));
+
+ vxd = devm_kzalloc(&pdev->dev, sizeof(*vxd), GFP_KERNEL);
+ if (!vxd)
+ return -ENOMEM;
+
+ vxd->dev = &pdev->dev;
+ vxd->plat_dev = pdev;
+
+ res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+ vxd->reg_base = devm_ioremap_resource(&pdev->dev, res);
+ if (IS_ERR_VALUE((unsigned long)vxd->reg_base))
+ return (long)(vxd->reg_base);
+
+ module_irq = platform_get_irq(pdev, 0);
+ if (module_irq < 0)
+ return -ENXIO;
+ vxd->module_irq = module_irq;
+#ifdef ERROR_RECOVERY_SIMULATION
+ g_module_irq = module_irq;
+#endif
+
+ heap_configs = vxd_dec_heap_configs;
+ num_heaps = ARRAY_SIZE(vxd_dec_heap_configs);
+
+ vxd->mutex = kzalloc(sizeof(*vxd->mutex), GFP_KERNEL);
+ if (!vxd->mutex)
+ return -ENOMEM;
+
+ mutex_init(vxd->mutex);
+ platform_set_drvdata(pdev, vxd);
+
+ pm_runtime_enable(&pdev->dev);
+ ret = pm_runtime_get_sync(&pdev->dev);
+ if (ret < 0) {
+ dev_err(&pdev->dev, "%s: failed to enable clock, status = %d\n", __func__, ret);
+ goto exit;
+ }
+
+ /* Read HW properties */
+ ret = vxd_pvdec_get_props(vxd->dev, vxd->reg_base, &vxd->props);
+ if (ret) {
+ dev_err(&pdev->dev, "%s: failed to fetch core properties!\n", __func__);
+ ret = -ENXIO;
+ goto out_put_sync;
+ }
+ vxd->mmu_config_addr_width = VXD_EXTRN_ADDR_WIDTH(vxd->props);
+#ifdef DEBUG_DECODER_DRIVER
+ dev_info(&pdev->dev, "hw:%u.%u.%u, num_pix: %d, num_ent: %d, mmu: %d, MTX RAM: %d\n",
+ VXD_MAJ_REV(vxd->props),
+ VXD_MIN_REV(vxd->props),
+ VXD_MAINT_REV(vxd->props),
+ VXD_NUM_PIX_PIPES(vxd->props),
+ VXD_NUM_ENT_PIPES(vxd->props),
+ VXD_EXTRN_ADDR_WIDTH(vxd->props),
+ vxd->props.mtx_ram_size);
+#endif
+
+ INIT_LIST_HEAD(&vxd->msgs);
+ INIT_LIST_HEAD(&vxd->pend);
+
+ /* initialize memory manager */
+ ret = img_mem_init(&pdev->dev);
+ if (ret) {
+ dev_err(&pdev->dev, "Failed to initialize memory\n");
+ ret = -ENOMEM;
+ goto out_put_sync;
+ }
+ vxd->streams = kzalloc(sizeof(*vxd->streams), GFP_KERNEL);
+ if (!vxd->streams) {
+ ret = -ENOMEM;
+ goto out_init;
+ }
+
+ idr_init(vxd->streams);
+
+ ret = vxd_init(&pdev->dev, vxd, heap_configs, num_heaps);
+ if (ret) {
+ dev_err(&pdev->dev, "%s: main component initialisation failed!\n", __func__);
+ goto out_idr_init;
+ }
+
+ /* initialize core */
+ i_heap_id = vxd_g_internal_heap_id();
+ if (i_heap_id < 0) {
+		dev_err(&pdev->dev, "%s: invalid internal heap id\n", __func__);
+		ret = -EINVAL;
+		goto out_vxd_init;
+ }
+ ret = core_initialise(vxd, i_heap_id, vxd_return_resource);
+ if (ret) {
+ dev_err(&pdev->dev, "%s: core initialization failed!", __func__);
+ goto out_vxd_init;
+ }
+
+ vxd->fw_refcnt = 0;
+ vxd->hw_on = 0;
+
+#ifdef DEBUG_DECODER_DRIVER
+ vxd->hw_pm_delay = 10000;
+ vxd->hw_dwr_period = 10000;
+#else
+ vxd->hw_pm_delay = 1000;
+ vxd->hw_dwr_period = 1000;
+#endif
+ ret = vxd_prepare_fw(vxd);
+ if (ret) {
+ dev_err(&pdev->dev, "%s fw acquire failed!", __func__);
+ goto out_core_init;
+ }
+
+ if (vxd->no_fw) {
+		dev_err(&pdev->dev, "%s: firmware acquire failed!\n", __func__);
+		ret = -ENOENT;
+		goto out_core_init;
+ }
+
+ lock = (spinlock_t **)&vxd->lock;
+ *lock = kzalloc(sizeof(spinlock_t), GFP_KERNEL);
+
+ if (!(*lock)) {
+ pr_err("Memory allocation failed for spin-lock\n");
+		ret = -ENOMEM;
+ goto out_core_init;
+ }
+ spin_lock_init(*lock);
+
+ ret = v4l2_device_register(&pdev->dev, &vxd->v4l2_dev);
+ if (ret)
+ goto out_clean_fw;
+
+#ifdef ERROR_RECOVERY_SIMULATION
+ /*
+ * create a sysfs entry here, to debug firmware error recovery.
+ */
+ vxd_dec_kobject = kobject_create_and_add("vxd_decoder", kernel_kobj);
+ if (!vxd_dec_kobject) {
+		dev_err(&pdev->dev, "Failed to create kernel object\n");
+		ret = -ENOMEM;
+		goto out_clean_fw;
+ }
+
+ ret = sysfs_create_group(vxd_dec_kobject, &attr_group);
+ if (ret) {
+ dev_err(&pdev->dev, "Failed to create sysfs files\n");
+ kobject_put(vxd_dec_kobject);
+ }
+#endif
+
+ vfd = video_device_alloc();
+ if (!vfd) {
+ dev_err(&pdev->dev, "Failed to allocate video device\n");
+ ret = -ENOMEM;
+ goto out_v4l2_device;
+ }
+
+ vxd->vfd_dec = vfd;
+ *vfd = vxd_dec_videodev;
+ vfd->v4l2_dev = &vxd->v4l2_dev;
+ vfd->device_caps = V4L2_CAP_VIDEO_M2M_MPLANE | V4L2_CAP_STREAMING;
+ vfd->lock = vxd->mutex;
+
+ video_set_drvdata(vfd, vxd);
+
+ snprintf(vfd->name, sizeof(vfd->name), "%s", vxd_dec_videodev.name);
+ ret = devm_request_threaded_irq(&pdev->dev, module_irq, (irq_handler_t)hard_isrcb,
+ (irq_handler_t)soft_thread_irq, IRQF_SHARED,
+ IMG_VXD_DEC_MODULE_NAME, pdev);
+ if (ret) {
+ dev_err(&pdev->dev, "failed to request irq\n");
+ goto out_vid_dev;
+ }
+
+ vxd->m2m_dev = v4l2_m2m_init(&m2m_ops);
+ if (IS_ERR_VALUE((unsigned long)vxd->m2m_dev)) {
+ dev_err(&pdev->dev, "Failed to init mem2mem device\n");
+ ret = -EINVAL;
+ goto out_vid_dev;
+ }
+
+ ret = video_register_device(vfd, VFL_TYPE_VIDEO, 0);
+ if (ret) {
+ dev_err(&pdev->dev, "Failed to register video device\n");
+ goto out_vid_reg;
+ }
+ v4l2_info(&vxd->v4l2_dev, "decoder registered as /dev/video%d\n", vfd->num);
+
+ return 0;
+
+out_vid_reg:
+ v4l2_m2m_release(vxd->m2m_dev);
+
+out_vid_dev:
+ video_device_release(vfd);
+
+out_v4l2_device:
+ v4l2_device_unregister(&vxd->v4l2_dev);
+
+out_clean_fw:
+ vxd_clean_fw_resources(vxd);
+
+out_core_init:
+ core_deinitialise();
+
+out_vxd_init:
+ vxd_deinit(vxd);
+
+out_idr_init:
+ idr_destroy(vxd->streams);
+ kfree(vxd->streams);
+
+out_init:
+ img_mem_exit();
+
+out_put_sync:
+ pm_runtime_put_sync(&pdev->dev);
+
+exit:
+ pm_runtime_disable(&pdev->dev);
+ mutex_destroy(vxd->mutex);
+ kfree(vxd->mutex);
+ vxd->mutex = NULL;
+
+ return ret;
+}
+
+static int vxd_dec_remove(struct platform_device *pdev)
+{
+ struct vxd_dev *vxd = platform_get_drvdata(pdev);
+
+ core_deinitialise();
+
+ vxd_clean_fw_resources(vxd);
+ vxd_deinit(vxd);
+ idr_destroy(vxd->streams);
+ kfree(vxd->streams);
+ get_delayed_work_buff(&vxd->dwork, TRUE);
+	kfree(vxd->lock);
+ img_mem_exit();
+
+ pm_runtime_put_sync(&pdev->dev);
+ pm_runtime_disable(&pdev->dev);
+ kfree(vxd->dwork);
+ mutex_destroy(vxd->mutex);
+ kfree(vxd->mutex);
+ vxd->mutex = NULL;
+
+ video_unregister_device(vxd->vfd_dec);
+ v4l2_m2m_release(vxd->m2m_dev);
+ v4l2_device_unregister(&vxd->v4l2_dev);
+
+ return 0;
+}
+
+static int __maybe_unused vxd_dec_suspend(struct device *dev)
+{
+ int ret = 0;
+
+ ret = vxd_suspend_dev(dev);
+ if (ret)
+ dev_err(dev, "failed to suspend core hw!\n");
+
+ return ret;
+}
+
+static int __maybe_unused vxd_dec_resume(struct device *dev)
+{
+ int ret = 0;
+
+ ret = vxd_resume_dev(dev);
+ if (ret)
+ dev_err(dev, "failed to resume core hw!\n");
+
+ return ret;
+}
+
+static UNIVERSAL_DEV_PM_OPS(vxd_dec_pm_ops,
+ vxd_dec_suspend, vxd_dec_resume, NULL);
+
+static struct platform_driver vxd_dec_driver = {
+ .probe = vxd_dec_probe,
+ .remove = vxd_dec_remove,
+ .driver = {
+ .name = "img_dec",
+ .pm = &vxd_dec_pm_ops,
+ .of_match_table = vxd_dec_of_match,
+ },
+};
+module_platform_driver(vxd_dec_driver);
+
+MODULE_AUTHOR("Prashanth Kumar Amai <[email protected]>");
+MODULE_AUTHOR("Sidraya Jayagond <[email protected]>");
+MODULE_LICENSE("GPL v2");
+MODULE_DESCRIPTION("IMG D5520 video decoder driver");
--
2.17.1
From: Sidraya <[email protected]>
This library is used to handle different pixel format layouts.
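
As a rough illustration (not part of this patch; the helper below is a local
example only and does not use the pixel_api.h API), this is the kind of
per-plane geometry the library centralizes, here for a semi-planar 8-bit
4:2:0 (NV12-like) layout:

#include <stdio.h>

struct plane_geom {
	unsigned int stride;	/* bytes per line */
	unsigned int size;	/* bytes per plane */
};

/* Compute Y and interleaved CbCr plane geometry for a 4:2:0 8-bit frame. */
static void nv12_geometry(unsigned int w, unsigned int h,
			  unsigned int align, struct plane_geom g[2])
{
	unsigned int stride = (w + align - 1) & ~(align - 1);

	g[0].stride = stride;		/* Y: full resolution, 1 byte/sample */
	g[0].size = stride * h;
	g[1].stride = stride;		/* CbCr interleaved: half height */
	g[1].size = stride * h / 2;
}

int main(void)
{
	struct plane_geom g[2];

	nv12_geometry(1920, 1080, 64, g);
	printf("Y %u bytes, CbCr %u bytes\n", g[0].size, g[1].size);
	return 0;
}

The real library additionally describes chroma ordering (UV/VU), bit depth
and packing for 10-bit formats, and 4:2:2 layouts, as listed in the format
table below.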
Signed-off-by: Sunita Nadampalli <[email protected]>
Signed-off-by: Sidraya <[email protected]>
---
MAINTAINERS | 2 +
drivers/staging/media/vxd/decoder/pixel_api.c | 895 ++++++++++++++++++
drivers/staging/media/vxd/decoder/pixel_api.h | 152 +++
3 files changed, 1049 insertions(+)
create mode 100644 drivers/staging/media/vxd/decoder/pixel_api.c
create mode 100644 drivers/staging/media/vxd/decoder/pixel_api.h
diff --git a/MAINTAINERS b/MAINTAINERS
index d126162984c6..bf47d48a1ec2 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -19600,6 +19600,8 @@ F: drivers/staging/media/vxd/decoder/jpegfw_data.h
F: drivers/staging/media/vxd/decoder/jpegfw_data_shared.h
F: drivers/staging/media/vxd/decoder/mem_io.h
F: drivers/staging/media/vxd/decoder/mmu_defs.h
+F: drivers/staging/media/vxd/decoder/pixel_api.c
+F: drivers/staging/media/vxd/decoder/pixel_api.h
F: drivers/staging/media/vxd/decoder/pvdec_entropy_regs.h
F: drivers/staging/media/vxd/decoder/pvdec_int.h
F: drivers/staging/media/vxd/decoder/pvdec_vec_be_regs.h
diff --git a/drivers/staging/media/vxd/decoder/pixel_api.c b/drivers/staging/media/vxd/decoder/pixel_api.c
new file mode 100644
index 000000000000..a0620662a68e
--- /dev/null
+++ b/drivers/staging/media/vxd/decoder/pixel_api.c
@@ -0,0 +1,895 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Pixel processing function implementations
+ *
+ * Copyright (c) Imagination Technologies Ltd.
+ * Copyright (c) 2021 Texas Instruments Incorporated - http://www.ti.com/
+ *
+ * Authors:
+ * Sunita Nadampalli <[email protected]>
+ *
+ * Re-written for upstream
+ * Sidraya Jayagond <[email protected]>
+ */
+
+#include <linux/dma-mapping.h>
+#include <media/v4l2-ctrls.h>
+#include <media/v4l2-device.h>
+#include <media/v4l2-mem2mem.h>
+
+#include "img_errors.h"
+#include "img_pixfmts.h"
+#include "pixel_api.h"
+#include "vdec_defs.h"
+
+#define NUM_OF_FORMATS 17
+#define PIXNAME(x) /* Pixel name support not enabled */
+#define FACT_SPEC_FORMAT_NUM_PLANES 4
+#define FACT_SPEC_FORMAT_PLANE_UNUSED 0xf
+#define FACT_SPEC_FORMAT_PLANE_CODE_BITS 4
+#define FACT_SPEC_FORMAT_PLANE_CODE_MASK 3
+#define FACT_SPEC_FORMAT_MIN_FACT_VAL 1
+
+/*
+ * @brief Pointer to the default format in the pix_fmts array.
+ * The default format is an invalid format.
+ * @note Pointer set by pixel_init_search().
+ * This pointer is also used to know whether the search has been initialized.
+ */
+static struct pixel_pixinfo *def_fmt;
+
+/*
+ * @brief Actual array storing the pixel formats information.
+ */
+static struct pixel_pixinfo pix_fmts[NUM_OF_FORMATS] = {
+ {
+ IMG_PIXFMT_420PL12YUV8,
+ PIXEL_UV_ORDER,
+ PIXEL_MULTICHROME,
+ PIXEL_BIT8_MP,
+ PIXEL_FORMAT_420,
+ 8,
+ 8,
+ 2
+ },
+
+ {
+ IMG_PIXFMT_420PL12YVU8,
+ PIXEL_VU_ORDER,
+ PIXEL_MULTICHROME,
+ PIXEL_BIT8_MP,
+ PIXEL_FORMAT_420,
+ 8,
+ 8,
+ 2
+ },
+
+ {
+ IMG_PIXFMT_420PL12YUV10,
+ PIXEL_UV_ORDER,
+ PIXEL_MULTICHROME,
+ PIXEL_BIT10_MP,
+ PIXEL_FORMAT_420,
+ 10,
+ 10,
+ 2
+ },
+
+ {
+ IMG_PIXFMT_420PL12YVU10,
+ PIXEL_VU_ORDER,
+ PIXEL_MULTICHROME,
+ PIXEL_BIT10_MP,
+ PIXEL_FORMAT_420,
+ 10,
+ 10,
+ 2
+ },
+
+ {
+ IMG_PIXFMT_420PL12YUV10_MSB,
+ PIXEL_UV_ORDER,
+ PIXEL_MULTICHROME,
+ PIXEL_BIT10_MSB_MP,
+ PIXEL_FORMAT_420,
+ 10,
+ 10,
+ 2
+ },
+
+ {
+ IMG_PIXFMT_420PL12YVU10_MSB,
+ PIXEL_VU_ORDER,
+ PIXEL_MULTICHROME,
+ PIXEL_BIT10_MSB_MP,
+ PIXEL_FORMAT_420,
+ 10,
+ 10,
+ 2
+ },
+
+ {
+ IMG_PIXFMT_420PL12YUV10_LSB,
+ PIXEL_UV_ORDER,
+ PIXEL_MULTICHROME,
+ PIXEL_BIT10_LSB_MP,
+ PIXEL_FORMAT_420,
+ 10,
+ 10,
+ 2
+ },
+
+ {
+ IMG_PIXFMT_420PL12YVU10_LSB,
+ PIXEL_VU_ORDER,
+ PIXEL_MULTICHROME,
+ PIXEL_BIT10_LSB_MP,
+ PIXEL_FORMAT_420,
+ 10,
+ 10,
+ 2
+ },
+
+ {
+ IMG_PIXFMT_422PL12YUV8,
+ PIXEL_UV_ORDER,
+ PIXEL_MULTICHROME,
+ PIXEL_BIT8_MP,
+ PIXEL_FORMAT_422,
+ 8,
+ 8,
+ 2
+ },
+
+ {
+ IMG_PIXFMT_422PL12YVU8,
+ PIXEL_VU_ORDER,
+ PIXEL_MULTICHROME,
+ PIXEL_BIT8_MP,
+ PIXEL_FORMAT_422,
+ 8,
+ 8,
+ 2
+ },
+
+ {
+ IMG_PIXFMT_422PL12YUV10,
+ PIXEL_UV_ORDER,
+ PIXEL_MULTICHROME,
+ PIXEL_BIT10_MP,
+ PIXEL_FORMAT_422,
+ 10,
+ 10,
+ 2
+ },
+
+ {
+ IMG_PIXFMT_422PL12YVU10,
+ PIXEL_VU_ORDER,
+ PIXEL_MULTICHROME,
+ PIXEL_BIT10_MP,
+ PIXEL_FORMAT_422,
+ 10,
+ 10,
+ 2
+ },
+
+ {
+ IMG_PIXFMT_422PL12YUV10_MSB,
+ PIXEL_UV_ORDER,
+ PIXEL_MULTICHROME,
+ PIXEL_BIT10_MSB_MP,
+ PIXEL_FORMAT_422,
+ 10,
+ 10,
+ 2
+ },
+
+ {
+ IMG_PIXFMT_422PL12YVU10_MSB,
+ PIXEL_VU_ORDER,
+ PIXEL_MULTICHROME,
+ PIXEL_BIT10_MSB_MP,
+ PIXEL_FORMAT_422,
+ 10,
+ 10,
+ 2
+ },
+
+ {
+ IMG_PIXFMT_422PL12YUV10_LSB,
+ PIXEL_UV_ORDER,
+ PIXEL_MULTICHROME,
+ PIXEL_BIT10_LSB_MP,
+ PIXEL_FORMAT_422,
+ 10,
+ 10,
+ 2
+ },
+
+ {
+ IMG_PIXFMT_422PL12YVU10_LSB,
+ PIXEL_VU_ORDER,
+ PIXEL_MULTICHROME,
+ PIXEL_BIT10_LSB_MP,
+ PIXEL_FORMAT_422,
+ 10,
+ 10,
+ 2
+ },
+
+ {
+ IMG_PIXFMT_UNDEFINED,
+ PIXEL_INVALID_CI,
+ 0,
+ (enum pixel_mem_packing)0,
+ PIXEL_FORMAT_INVALID,
+ 0,
+ 0,
+ 0
+ }
+};
+
+static struct pixel_pixinfo_table pixinfo_table[] = {
+ {
+ IMG_PIXFMT_420PL12YUV8_A8,
+ {
+ PIXNAME(IMG_PIXFMT_420PL12YUV8_A8)
+ 16,
+ 16,
+ 16,
+ 0,
+ 16,
+ TRUE,
+ TRUE,
+ 4,
+ TRUE
+ }
+ },
+
+ {
+ IMG_PIXFMT_422PL12YUV8_A8,
+ {
+ PIXNAME(IMG_PIXFMT_422PL12YUV8_A8)
+ 16,
+ 16,
+ 16,
+ 0,
+ 16,
+ TRUE,
+ FALSE,
+ 4,
+ TRUE
+ }
+ },
+
+ {
+ IMG_PIXFMT_420PL12YUV8,
+ {
+ PIXNAME(IMG_PIXFMT_420PL12YUV8)
+ 16,
+ 16,
+ 16,
+ 0,
+ 0,
+ TRUE,
+ TRUE,
+ 4,
+ FALSE
+ }
+ },
+
+ {
+ IMG_PIXFMT_420PL12YVU8,
+ {
+ PIXNAME(IMG_PIXFMT_420PL12YVU8)
+ 16,
+ 16,
+ 16,
+ 0,
+ 0,
+ TRUE,
+ TRUE,
+ 4,
+ FALSE
+ }
+ },
+
+ {
+ IMG_PIXFMT_420PL12YUV10,
+ {
+ PIXNAME(IMG_PIXFMT_420PL12YUV10)
+ 12,
+ 16,
+ 16,
+ 0,
+ 0,
+ TRUE,
+ TRUE,
+ 4,
+ FALSE
+ }
+ },
+
+ {
+ IMG_PIXFMT_420PL12YVU10,
+ {
+ PIXNAME(IMG_PIXFMT_420PL12YVU10)
+ 12,
+ 16,
+ 16,
+ 0,
+ 0,
+ TRUE,
+ TRUE,
+ 4,
+ FALSE
+ }
+ },
+
+ {
+ IMG_PIXFMT_420PL12YUV10_MSB,
+ {
+ PIXNAME(IMG_PIXFMT_420PL12YUV10_MSB)
+ 8,
+ 16,
+ 16,
+ 0,
+ 0,
+ TRUE,
+ TRUE,
+ 4,
+ FALSE
+ }
+ },
+
+ {
+ IMG_PIXFMT_420PL12YVU10_MSB,
+ {
+ PIXNAME(IMG_PIXFMT_420PL12YVU10_MSB)
+ 8,
+ 16,
+ 16,
+ 0,
+ 0,
+ TRUE,
+ TRUE,
+ 4,
+ FALSE
+ }
+ },
+
+ {
+ IMG_PIXFMT_422PL12YUV8,
+ {
+ PIXNAME(IMG_PIXFMT_422PL12YUV8)
+ 16,
+ 16,
+ 16,
+ 0,
+ 0,
+ TRUE,
+ FALSE,
+ 4,
+ FALSE
+ }
+ },
+
+ {
+ IMG_PIXFMT_422PL12YVU8,
+ {
+ PIXNAME(IMG_PIXFMT_422PL12YVU8)
+ 16,
+ 16,
+ 16,
+ 0,
+ 0,
+ TRUE,
+ FALSE,
+ 4,
+ FALSE
+ }
+ },
+
+ {
+ IMG_PIXFMT_422PL12YUV10,
+ {
+ PIXNAME(IMG_PIXFMT_422PL12YUV10)
+ 12,
+ 16,
+ 16,
+ 0,
+ 0,
+ TRUE,
+ FALSE,
+ 4,
+ FALSE
+ }
+ },
+
+ {
+ IMG_PIXFMT_422PL12YVU10,
+ {
+ PIXNAME(IMG_PIXFMT_422PL12YVU10)
+ 12,
+ 16,
+ 16,
+ 0,
+ 0,
+ TRUE,
+ FALSE,
+ 4,
+ FALSE
+ }
+ },
+
+ {
+ IMG_PIXFMT_422PL12YUV10_MSB,
+ {
+ PIXNAME(IMG_PIXFMT_422PL12YUV10_MSB)
+ 8,
+ 16,
+ 16,
+ 0,
+ 0,
+ TRUE,
+ FALSE,
+ 4,
+ FALSE
+ }
+ },
+
+ {
+ IMG_PIXFMT_422PL12YVU10_MSB,
+ {
+ PIXNAME(IMG_PIXFMT_422PL12YVU10_MSB)
+ 8,
+ 16,
+ 16,
+ 0,
+ 0,
+ TRUE,
+ FALSE,
+ 4,
+ FALSE
+ }
+ },
+};
+
+static struct pixel_pixinfo_table*
+pixel_get_pixelinfo_from_pixfmt(enum img_pixfmt pix_fmt)
+{
+ unsigned int i;
+ unsigned char found = FALSE;
+ struct pixel_pixinfo_table *this_pixinfo_table_entry = NULL;
+
+ for (i = 0;
+ i < (sizeof(pixinfo_table) / sizeof(struct pixel_pixinfo_table));
+ i++) {
+ if (pix_fmt == pixinfo_table[i].pix_color_fmt) {
+ /*
+ * There must only be one entry per pixel colour format
+ * in the table
+ */
+ VDEC_ASSERT(!found);
+ found = TRUE;
+ this_pixinfo_table_entry = &pixinfo_table[i];
+
+ /*
+ * We deliberately do NOT break here - scan rest of
+			 * table to ensure there are no duplicate entries
+ */
+ }
+ }
+ return this_pixinfo_table_entry;
+}
+
+/*
+ * @brief Array containing string lookup of pixel format IDC.
+ * @warning This must be kept in step with enum pixel_fmt_idc.
+ */
+unsigned char pix_fmt_idc_names[6][16] = {
+ "Monochrome",
+ "4:1:1",
+ "4:2:0",
+ "4:2:2",
+ "4:4:4",
+ "Invalid",
+};
+
+static int pixel_compare_pixfmts(const void *a, const void *b)
+{
+ return ((struct pixel_pixinfo *)a)->pixfmt -
+ ((struct pixel_pixinfo *)b)->pixfmt;
+}
+
+static struct pixel_info*
+pixel_get_bufinfo_from_pixfmt(enum img_pixfmt pix_fmt)
+{
+ struct pixel_pixinfo_table *pixinfo_table_entry = NULL;
+ struct pixel_info *pix_info = NULL;
+
+ pixinfo_table_entry = pixel_get_pixelinfo_from_pixfmt(pix_fmt);
+ VDEC_ASSERT(pixinfo_table_entry);
+ if (pixinfo_table_entry)
+ pix_info = &pixinfo_table_entry->info;
+
+ return pix_info;
+}
+
+/*
+ * @brief Search a pixel format based on its attributes rather than its format
+ * enum.
+ * @warning Use pixel_compare_pixfmts() to search by enum.
+ */
+static int pixel_compare_pixinfo(const void *a, const void *b)
+{
+ int result = 0;
+ const struct pixel_pixinfo *fmt_a = (struct pixel_pixinfo *)a;
+ const struct pixel_pixinfo *fmt_b = (struct pixel_pixinfo *)b;
+
+ result = fmt_a->chroma_fmt_idc - fmt_b->chroma_fmt_idc;
+ if (result != 0)
+ return result;
+
+ result = fmt_a->mem_pkg - fmt_b->mem_pkg;
+ if (result != 0)
+ return result;
+
+ result = fmt_a->chroma_interleave - fmt_b->chroma_interleave;
+ if (result != 0)
+ return result;
+
+ result = fmt_a->bitdepth_y - fmt_b->bitdepth_y;
+ if (result != 0)
+ return result;
+
+ result = fmt_a->bitdepth_c - fmt_b->bitdepth_c;
+ if (result != 0)
+ return result;
+
+ result = fmt_a->num_planes - fmt_b->num_planes;
+ if (result != 0)
+ return result;
+
+ return result;
+}
+
+static void pixel_init_search(void)
+{
+ static unsigned int search_inited;
+
+ search_inited++;
+ if (search_inited == 1) {
+ if (!def_fmt) {
+ int i = 0;
+
+ i = NUM_OF_FORMATS - 1;
+			while (i >= 0) {
+				if (pix_fmts[i].pixfmt ==
+				    IMG_PIXFMT_UNDEFINED) {
+					def_fmt = &pix_fmts[i];
+					break;
+				}
+				i--;
+			}
+ VDEC_ASSERT(def_fmt);
+ }
+ } else {
+ search_inited--;
+ }
+}
+
+static struct pixel_pixinfo *pixel_search_fmt(const struct pixel_pixinfo *key,
+ unsigned char enum_only)
+{
+ struct pixel_pixinfo *fmt_found = NULL;
+ int (*compar)(const void *pixfmt1, const void *pixfmt2);
+
+ if (enum_only)
+ compar = &pixel_compare_pixfmts;
+ else
+ compar = &pixel_compare_pixinfo;
+
+ {
+ unsigned int i;
+
+ for (i = 0; i < NUM_OF_FORMATS; i++) {
+ if (compar(key, &pix_fmts[i]) == 0) {
+ fmt_found = &pix_fmts[i];
+ break;
+ }
+ }
+ }
+ return fmt_found;
+}
+
+/*
+ * @brief Set a pixel format info structure to the default.
+ * @warning This MODIFIES the structure it is given, so do not
+ * call it on a pointer obtained from the library!
+ */
+static void pixel_pixinfo_defaults(struct pixel_pixinfo *to_def)
+{
+ if (!def_fmt)
+ pixel_init_search();
+
+ memcpy(to_def, def_fmt, sizeof(struct pixel_pixinfo));
+}
+
+enum img_pixfmt pixel_get_pixfmt(enum pixel_fmt_idc chroma_fmt_idc,
+ enum pixel_chroma_interleaved
+ chroma_interleaved,
+ enum pixel_mem_packing mem_pkg,
+ unsigned int bitdepth_y, unsigned int bitdepth_c,
+ unsigned int num_planes)
+{
+ unsigned int internal_num_planes = (num_planes == 0 || num_planes > 4) ? 2 :
+ num_planes;
+ struct pixel_pixinfo key;
+ struct pixel_pixinfo *fmt_found = NULL;
+
+ if (chroma_fmt_idc != PIXEL_FORMAT_MONO &&
+ chroma_fmt_idc != PIXEL_FORMAT_411 &&
+ chroma_fmt_idc != PIXEL_FORMAT_420 &&
+ chroma_fmt_idc != PIXEL_FORMAT_422 &&
+ chroma_fmt_idc != PIXEL_FORMAT_444)
+ return IMG_PIXFMT_UNDEFINED;
+
+	/* valid luma bit depths are 8, 9 or 10 */
+ if (bitdepth_y < 8 || bitdepth_y > 10)
+ return IMG_PIXFMT_UNDEFINED;
+
+	/* valid chroma bit depths are 8, 9 or 10 */
+ if (bitdepth_c < 8 || bitdepth_c > 10)
+ return IMG_PIXFMT_UNDEFINED;
+
+ key.pixfmt = IMG_PIXFMT_UNDEFINED;
+ key.chroma_fmt_idc = chroma_fmt_idc;
+ key.chroma_interleave = chroma_interleaved;
+ key.mem_pkg = mem_pkg;
+ key.bitdepth_y = bitdepth_y;
+ key.bitdepth_c = bitdepth_c;
+ key.num_planes = internal_num_planes;
+
+ /*
+ * 9 and 10 bits formats are handled in the same way, and there is only
+ * one entry in the PixelFormat table
+ */
+ if (key.bitdepth_y == 9)
+ key.bitdepth_y = 10;
+
+ /*
+ * 9 and 10 bits formats are handled in the same way, and there is only
+ * one entry in the PixelFormat table
+ */
+ if (key.bitdepth_c == 9)
+ key.bitdepth_c = 10;
+
+ pixel_init_search();
+
+ /* do not search by format */
+ fmt_found = pixel_search_fmt(&key, FALSE);
+ if (!fmt_found)
+ return IMG_PIXFMT_UNDEFINED;
+
+ return fmt_found->pixfmt;
+}
+
+static void pixel_get_internal_pixelinfo(struct pixel_pixinfo *pixinfo,
+ struct pixel_info *pix_bufinfo)
+{
+ if (pixinfo->bitdepth_y == 8 && pixinfo->bitdepth_c == 8)
+ pix_bufinfo->pixels_in_bop = 16;
+ else if (pixinfo->mem_pkg == PIXEL_BIT10_MP)
+ pix_bufinfo->pixels_in_bop = 12;
+ else
+ pix_bufinfo->pixels_in_bop = 8;
+
+ if (pixinfo->bitdepth_y == 8)
+ pix_bufinfo->ybytes_in_bop = pix_bufinfo->pixels_in_bop;
+ else
+ pix_bufinfo->ybytes_in_bop = 16;
+
+ if (pixinfo->chroma_fmt_idc == PIXEL_FORMAT_MONO) {
+ pix_bufinfo->uvbytes_in_bop = 0;
+ } else if (pixinfo->bitdepth_c == 8) {
+ pix_bufinfo->uvbytes_in_bop = pix_bufinfo->pixels_in_bop;
+ if (pixinfo->chroma_fmt_idc == PIXEL_FORMAT_422 && pixinfo->num_planes == 1) {
+ pix_bufinfo->uvbytes_in_bop = 0;
+ pix_bufinfo->pixels_in_bop = 8;
+ }
+ } else {
+ pix_bufinfo->uvbytes_in_bop = 16;
+ }
+
+ if (pixinfo->chroma_fmt_idc == PIXEL_FORMAT_444)
+ pix_bufinfo->uvbytes_in_bop *= 2;
+
+ if (pixinfo->chroma_interleave == PIXEL_INVALID_CI) {
+ pix_bufinfo->uvbytes_in_bop /= 2;
+ pix_bufinfo->vbytes_in_bop = pix_bufinfo->uvbytes_in_bop;
+ } else {
+ pix_bufinfo->vbytes_in_bop = 0;
+ }
+
+ pix_bufinfo->alphabytes_in_bop = 0;
+
+ if (pixinfo->num_planes == 1)
+ pix_bufinfo->is_planar = FALSE;
+ else
+ pix_bufinfo->is_planar = TRUE;
+
+ if (pixinfo->chroma_fmt_idc == PIXEL_FORMAT_420)
+ pix_bufinfo->uv_height_halved = TRUE;
+ else
+ pix_bufinfo->uv_height_halved = FALSE;
+
+ if (pixinfo->chroma_fmt_idc == PIXEL_FORMAT_444)
+ pix_bufinfo->uv_stride_ratio_times4 = 8;
+ else
+ pix_bufinfo->uv_stride_ratio_times4 = 4;
+
+ if (pixinfo->chroma_interleave == PIXEL_INVALID_CI)
+ pix_bufinfo->uv_stride_ratio_times4 /= 2;
+
+ pix_bufinfo->has_alpha = FALSE;
+}
+
+static void pixel_yuv_get_descriptor_int(struct pixel_info *pixinfo,
+ struct img_pixfmt_desc *pix_desc)
+{
+ pix_desc->bop_denom = pixinfo->pixels_in_bop;
+ pix_desc->h_denom = (pixinfo->uv_stride_ratio_times4 == 2 ||
+ !pixinfo->is_planar) ? 2 : 1;
+ pix_desc->v_denom = (pixinfo->uv_height_halved || !pixinfo->is_planar)
+ ? 2 : 1;
+
+ pix_desc->planes[0] = TRUE;
+ pix_desc->bop_numer[0] = pixinfo->ybytes_in_bop;
+ pix_desc->h_numer[0] = pix_desc->h_denom;
+ pix_desc->v_numer[0] = pix_desc->v_denom;
+
+ pix_desc->planes[1] = pixinfo->is_planar;
+ pix_desc->bop_numer[1] = pixinfo->uvbytes_in_bop;
+ pix_desc->h_numer[1] = (pix_desc->h_denom * pixinfo->uv_stride_ratio_times4) / 4;
+ pix_desc->v_numer[1] = 1;
+
+ pix_desc->planes[2] = (pixinfo->vbytes_in_bop > 0) ? TRUE : FALSE;
+ pix_desc->bop_numer[2] = pixinfo->vbytes_in_bop;
+ pix_desc->h_numer[2] = (pixinfo->vbytes_in_bop > 0) ? 1 : 0;
+ pix_desc->v_numer[2] = (pixinfo->vbytes_in_bop > 0) ? 1 : 0;
+
+ pix_desc->planes[3] = pixinfo->has_alpha;
+ pix_desc->bop_numer[3] = pixinfo->alphabytes_in_bop;
+ pix_desc->h_numer[3] = pix_desc->h_denom;
+ pix_desc->v_numer[3] = pix_desc->v_denom;
+}
+
+int pixel_yuv_get_desc(struct pixel_pixinfo *pix_info, struct img_pixfmt_desc *pix_desc)
+{
+ struct pixel_info int_pix_info;
+
+ struct pixel_info *int_pix_info_old = NULL;
+ enum img_pixfmt pix_fmt = pixel_get_pixfmt(pix_info->chroma_fmt_idc,
+ pix_info->chroma_interleave,
+ pix_info->mem_pkg,
+ pix_info->bitdepth_y,
+ pix_info->bitdepth_c,
+ pix_info->num_planes);
+
+ /* Validate the output from new function. */
+ if (pix_fmt != IMG_PIXFMT_UNDEFINED)
+ int_pix_info_old = pixel_get_bufinfo_from_pixfmt(pix_fmt);
+
+ pixel_get_internal_pixelinfo(pix_info, &int_pix_info);
+
+ if (int_pix_info_old) {
+ VDEC_ASSERT(int_pix_info_old->has_alpha ==
+ int_pix_info.has_alpha);
+ VDEC_ASSERT(int_pix_info_old->is_planar ==
+ int_pix_info.is_planar);
+ VDEC_ASSERT(int_pix_info_old->uv_height_halved ==
+ int_pix_info.uv_height_halved);
+ VDEC_ASSERT(int_pix_info_old->alphabytes_in_bop ==
+ int_pix_info.alphabytes_in_bop);
+ VDEC_ASSERT(int_pix_info_old->pixels_in_bop ==
+ int_pix_info.pixels_in_bop);
+ VDEC_ASSERT(int_pix_info_old->uvbytes_in_bop ==
+ int_pix_info.uvbytes_in_bop);
+ VDEC_ASSERT(int_pix_info_old->uv_stride_ratio_times4 ==
+ int_pix_info.uv_stride_ratio_times4);
+ VDEC_ASSERT(int_pix_info_old->vbytes_in_bop ==
+ int_pix_info.vbytes_in_bop);
+ VDEC_ASSERT(int_pix_info_old->ybytes_in_bop ==
+ int_pix_info.ybytes_in_bop);
+ }
+
+ pixel_yuv_get_descriptor_int(&int_pix_info, pix_desc);
+
+ return IMG_SUCCESS;
+}
+
+struct pixel_pixinfo *pixel_get_pixinfo(const enum img_pixfmt pix_fmt)
+{
+ struct pixel_pixinfo key;
+ struct pixel_pixinfo *fmt_found = NULL;
+
+ pixel_init_search();
+ pixel_pixinfo_defaults(&key);
+ key.pixfmt = pix_fmt;
+
+ fmt_found = pixel_search_fmt(&key, TRUE);
+ if (!fmt_found)
+ return def_fmt;
+ return fmt_found;
+}
+
+int pixel_get_fmt_desc(enum img_pixfmt pix_fmt, struct img_pixfmt_desc *pix_desc)
+{
+ if (pix_fmt >= IMG_PIXFMT_ARBPLANAR8 && pix_fmt <= IMG_PIXFMT_ARBPLANAR8_LAST) {
+ unsigned int i;
+ unsigned short spec;
+
+ pix_desc->bop_denom = 1;
+ pix_desc->h_denom = 1;
+ pix_desc->v_denom = 1;
+
+ spec = (pix_fmt - IMG_PIXFMT_ARBPLANAR8) & 0xffff;
+ for (i = 0; i < FACT_SPEC_FORMAT_NUM_PLANES; i++) {
+ unsigned char code = (spec >> FACT_SPEC_FORMAT_PLANE_CODE_BITS *
+ (FACT_SPEC_FORMAT_NUM_PLANES - 1 - i)) & 0xf;
+ pix_desc->bop_numer[i] = 1;
+ pix_desc->h_numer[i] = ((code >> 2) & FACT_SPEC_FORMAT_PLANE_CODE_MASK) +
+ FACT_SPEC_FORMAT_MIN_FACT_VAL;
+ pix_desc->v_numer[i] = (code & FACT_SPEC_FORMAT_PLANE_CODE_MASK) +
+ FACT_SPEC_FORMAT_MIN_FACT_VAL;
+ if (i == 0 || code != FACT_SPEC_FORMAT_PLANE_UNUSED) {
+ pix_desc->planes[i] = TRUE;
+
+ pix_desc->h_denom =
+ pix_desc->h_denom > pix_desc->h_numer[i] ?
+ pix_desc->h_denom : pix_desc->h_numer[i];
+
+ pix_desc->v_denom =
+ pix_desc->v_denom > pix_desc->v_numer[i] ?
+ pix_desc->v_denom : pix_desc->v_numer[i];
+ } else {
+ pix_desc->planes[i] = FALSE;
+ }
+ }
+ } else {
+ struct pixel_info *info =
+ pixel_get_bufinfo_from_pixfmt(pix_fmt);
+ if (!info) {
+ VDEC_ASSERT(0);
+ return -EINVAL;
+ }
+
+ pixel_yuv_get_descriptor_int(info, pix_desc);
+ }
+
+ return IMG_SUCCESS;
+}
+
+int pixel_gen_pixfmt(enum img_pixfmt *pix_fmt, struct img_pixfmt_desc *pix_desc)
+{
+ unsigned short spec = 0, i;
+ unsigned char code;
+
+ for (i = 0; i < FACT_SPEC_FORMAT_NUM_PLANES; i++) {
+ if (pix_desc->planes[i] != 1) {
+ code = FACT_SPEC_FORMAT_PLANE_UNUSED;
+ } else {
+ code = (((pix_desc->h_numer[i] - FACT_SPEC_FORMAT_MIN_FACT_VAL) &
+ FACT_SPEC_FORMAT_PLANE_CODE_MASK) << 2) |
+ ((pix_desc->v_numer[i] - FACT_SPEC_FORMAT_MIN_FACT_VAL) &
+ FACT_SPEC_FORMAT_PLANE_CODE_MASK);
+ }
+ spec |= (code << FACT_SPEC_FORMAT_PLANE_CODE_BITS *
+ (FACT_SPEC_FORMAT_NUM_PLANES - 1 - i));
+ }
+
+ *pix_fmt = (enum img_pixfmt)(IMG_PIXFMT_ARBPLANAR8 | spec);
+
+ return 0;
+}
diff --git a/drivers/staging/media/vxd/decoder/pixel_api.h b/drivers/staging/media/vxd/decoder/pixel_api.h
new file mode 100644
index 000000000000..3648c1b32ea7
--- /dev/null
+++ b/drivers/staging/media/vxd/decoder/pixel_api.h
@@ -0,0 +1,152 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Pixel processing functions header
+ *
+ * Copyright (c) Imagination Technologies Ltd.
+ * Copyright (c) 2021 Texas Instruments Incorporated - http://www.ti.com/
+ *
+ * Authors:
+ * Sunita Nadampalli <[email protected]>
+ *
+ * Re-written for upstream
+ * Sidraya Jayagond <[email protected]>
+ */
+
+#ifndef __PIXEL_API_H__
+#define __PIXEL_API_H__
+
+#include <linux/types.h>
+
+#include "img_errors.h"
+#include "img_pixfmts.h"
+
+#define PIXEL_MULTICHROME TRUE
+#define PIXEL_MONOCHROME FALSE
+#define IMG_MAX_NUM_PLANES 4
+#define PIXEL_INVALID_BDC 8
+
+extern unsigned char pix_fmt_idc_names[6][16];
+
+struct img_pixfmt_desc {
+ unsigned char planes[IMG_MAX_NUM_PLANES];
+ unsigned int bop_denom;
+ unsigned int bop_numer[IMG_MAX_NUM_PLANES];
+ unsigned int h_denom;
+ unsigned int v_denom;
+ unsigned int h_numer[IMG_MAX_NUM_PLANES];
+ unsigned int v_numer[IMG_MAX_NUM_PLANES];
+};
+
+/*
+ * @brief This type defines memory chroma interleaved order
+ */
+enum pixel_chroma_interleaved {
+ PIXEL_INVALID_CI = 0,
+ PIXEL_UV_ORDER = 1,
+ PIXEL_VU_ORDER = 2,
+ PIXEL_YAYB_ORDER = 4,
+ PIXEL_AYBY_ORDER = 8,
+ PIXEL_ORDER_FORCE32BITS = 0x7FFFFFFFU
+};
+
+/*
+ * @brief This macro translates enum pixel_chroma_interleaved values into
+ * value that can be used to write HW registers directly.
+ */
+#define PIXEL_GET_HW_CHROMA_INTERLEAVED(value) \
+ ((value) & PIXEL_VU_ORDER ? TRUE : FALSE)
+
+/*
+ * @brief This type defines memory packing types
+ */
+enum pixel_mem_packing {
+ PIXEL_BIT8_MP = 0,
+ PIXEL_BIT10_MSB_MP = 1,
+ PIXEL_BIT10_LSB_MP = 2,
+ PIXEL_BIT10_MP = 3,
+ PIXEL_DEFAULT_MP = 0xff,
+ PIXEL_DEFAULT_FORCE32BITS = 0x7FFFFFFFU
+};
+
+static inline unsigned char pixel_get_hw_memory_packing(enum pixel_mem_packing value)
+{
+ return value == PIXEL_BIT8_MP ? FALSE :
+ value == PIXEL_BIT10_MSB_MP ? FALSE :
+ value == PIXEL_BIT10_LSB_MP ? FALSE :
+ value == PIXEL_BIT10_MP ? TRUE : FALSE;
+}
+
+/*
+ * @brief This type defines chroma formats
+ */
+enum pixel_fmt_idc {
+ PIXEL_FORMAT_MONO = 0,
+ PIXEL_FORMAT_411 = 1,
+ PIXEL_FORMAT_420 = 2,
+ PIXEL_FORMAT_422 = 3,
+ PIXEL_FORMAT_444 = 4,
+ PIXEL_FORMAT_INVALID = 0xFF,
+ PIXEL_FORMAT_FORCE32BITS = 0x7FFFFFFFU
+};
+
+static inline int pixel_get_hw_chroma_format_idc(enum pixel_fmt_idc value)
+{
+ return value == PIXEL_FORMAT_MONO ? 0 :
+ value == PIXEL_FORMAT_420 ? 1 :
+ value == PIXEL_FORMAT_422 ? 2 :
+ value == PIXEL_FORMAT_444 ? 3 :
+ PIXEL_FORMAT_INVALID;
+}
+
+/*
+ * @brief This structure contains information about the pixel formats
+ */
+struct pixel_pixinfo {
+ enum img_pixfmt pixfmt;
+ enum pixel_chroma_interleaved chroma_interleave;
+ unsigned char chroma_fmt;
+ enum pixel_mem_packing mem_pkg;
+ enum pixel_fmt_idc chroma_fmt_idc;
+ unsigned int bitdepth_y;
+ unsigned int bitdepth_c;
+ unsigned int num_planes;
+};
+
+/*
+ * @brief This type defines the image in memory
+ */
+struct pixel_info {
+ unsigned int pixels_in_bop;
+ unsigned int ybytes_in_bop;
+ unsigned int uvbytes_in_bop;
+ unsigned int vbytes_in_bop;
+ unsigned int alphabytes_in_bop;
+ unsigned char is_planar;
+ unsigned char uv_height_halved;
+ unsigned int uv_stride_ratio_times4;
+ unsigned char has_alpha;
+};
+
+struct pixel_pixinfo_table {
+ enum img_pixfmt pix_color_fmt;
+ struct pixel_info info;
+};
+
+struct pixel_pixinfo *pixel_get_pixinfo(const enum img_pixfmt pixfmt);
+
+enum img_pixfmt pixel_get_pixfmt(enum pixel_fmt_idc chroma_fmt_idc,
+ enum pixel_chroma_interleaved
+ chroma_interleaved,
+ enum pixel_mem_packing mem_packing,
+ unsigned int bitdepth_y, unsigned int bitdepth_c,
+ unsigned int num_planes);
+
+int pixel_yuv_get_desc(struct pixel_pixinfo *pix_info,
+ struct img_pixfmt_desc *desc);
+
+int pixel_get_fmt_desc(enum img_pixfmt pixfmt,
+ struct img_pixfmt_desc *fmt_desc);
+
+int pixel_gen_pixfmt(enum img_pixfmt *pix_fmt, struct img_pixfmt_desc *pix_desc);
+
+#endif
--
2.17.1
From: Sidraya <[email protected]>
This patch creates and destroys pools of resources and manages
the allocation and freeing of those resources.
Signed-off-by: Amit Makani <[email protected]>
Signed-off-by: Sidraya <[email protected]>
---
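Note for reviewers (below the cut line, not part of the commit): a minimal
sketch of the intended call sequence, based only on the API declared in
pool_api.h. The resource type my_res, my_destructor() and my_pool_example()
are hypothetical names used for illustration.

#include <linux/slab.h>

#include "pool_api.h"

struct my_res {
	int dummy;
};

/* Destructor invoked for registered resources by pool_destroy() */
static void my_destructor(void *resparam, void *cb_handle)
{
	kfree(resparam);
}

static int my_pool_example(void)
{
	void *pool, *res_handle;
	unsigned int res_id;
	struct my_res *res;
	int ret;

	ret = pool_init();
	if (ret)
		return ret;

	ret = pool_api_create(&pool);
	if (ret != IMG_SUCCESS)
		goto out_deinit;

	res = kzalloc(sizeof(*res), GFP_KERNEL);
	if (!res) {
		ret = IMG_ERROR_OUT_OF_MEMORY;
		goto out_destroy;
	}

	/* Register the resource on the pool's free list (balloc = 0) */
	ret = pool_resreg(pool, my_destructor, res, sizeof(*res),
			  0, &res_id, &res_handle, NULL);
	if (ret != IMG_SUCCESS) {
		kfree(res);
		goto out_destroy;
	}

	/* Move the resource to the active list and back to the free list */
	pool_resalloc(pool, res_handle);
	pool_resfree(res_handle);

out_destroy:
	pool_destroy(pool);	/* destroys any remaining resources */
out_deinit:
	pool_deinit();
	return ret;
}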
MAINTAINERS | 2 +
drivers/staging/media/vxd/common/pool_api.c | 709 ++++++++++++++++++++
drivers/staging/media/vxd/common/pool_api.h | 113 ++++
3 files changed, 824 insertions(+)
create mode 100644 drivers/staging/media/vxd/common/pool_api.c
create mode 100644 drivers/staging/media/vxd/common/pool_api.h
diff --git a/MAINTAINERS b/MAINTAINERS
index a00ac0852b2a..f7e55791f355 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -19556,6 +19556,8 @@ F: drivers/staging/media/vxd/common/lst.c
F: drivers/staging/media/vxd/common/lst.h
F: drivers/staging/media/vxd/common/pool.c
F: drivers/staging/media/vxd/common/pool.h
+F: drivers/staging/media/vxd/common/pool_api.c
+F: drivers/staging/media/vxd/common/pool_api.h
F: drivers/staging/media/vxd/common/ra.c
F: drivers/staging/media/vxd/common/ra.h
F: drivers/staging/media/vxd/common/talmmu_api.c
diff --git a/drivers/staging/media/vxd/common/pool_api.c b/drivers/staging/media/vxd/common/pool_api.c
new file mode 100644
index 000000000000..68d960a687da
--- /dev/null
+++ b/drivers/staging/media/vxd/common/pool_api.c
@@ -0,0 +1,709 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Resource pool manager API.
+ *
+ * Copyright (c) Imagination Technologies Ltd.
+ * Copyright (c) 2021 Texas Instruments Incorporated - http://www.ti.com/
+ *
+ * Authors:
+ * Amit Makani <[email protected]>
+ *
+ * Re-written for upstreaming
+ * Sidraya Jayagond <[email protected]>
+ */
+
+#include <linux/slab.h>
+#include <linux/printk.h>
+#include <linux/mutex.h>
+#include <linux/dma-mapping.h>
+#include <media/v4l2-ctrls.h>
+#include <media/v4l2-device.h>
+#include <media/v4l2-mem2mem.h>
+#include <linux/types.h>
+
+#include "idgen_api.h"
+#include "lst.h"
+#include "pool_api.h"
+
+/*
+ * list can be modified by different instances. So please,
+ * make sure to acquire mutex lock before initializing the list.
+ */
+static struct mutex *shared_res_mutex_handle;
+
+/*
+ * Max resource IDs.
+ */
+#define POOL_IDGEN_MAX_ID (0xFFFFFFFF)
+/*
+ * Size of blocks used for IDs.
+ */
+#define POOL_IDGEN_BLOCK_SIZE (50)
+
+/*
+ * Indicates whether the pool API has been initialized:
+ * zero if not done, 1 if done.
+ */
+static int poolinitdone;
+
+/* list of resource pool */
+static struct lst_t poollist = {0};
+
+/**
+ * struct poollist - Structure containing resource pool list information.
+ * @link: allows this structure to be part of a singly linked list
+ * @pool_mutex: lock protecting the pool's resource lists
+ * @freereslst: list of free resource structures
+ * @actvreslst: list of active resource structures
+ * @pfnfree: pool free callback function
+ * @idgenhandle: ID generator context handle
+ */
+struct poollist {
+ void **link;
+ struct mutex *pool_mutex; /* Mutex lock */
+ struct lst_t freereslst;
+ struct lst_t actvreslst;
+ pfrecalbkpntr pfnfree;
+ void *idgenhandle;
+};
+
+/*
+ * This structure contains pool resource.
+ */
+struct poolres {
+	void **link; /* allows this structure to be part of a singly linked list */
+ /* Resource id */
+ unsigned int resid;
+ /* Pointer to destructor function */
+ pdestcallbkptr desfunc;
+ /* resource param */
+ void *resparam;
+ /* size of resource param in bytes */
+ unsigned int resparmsize;
+ /* pointer to resource pool list */
+ struct poollist *respoollst;
+ /* 1 if this is a clone of the original resource */
+ int isclone;
+ /* pointer to original resource */
+ struct poolres *origres;
+ /* list of cloned resource structures. Only used on the original */
+ struct lst_t clonereslst;
+ /* reference count. Only used on the original resource */
+ unsigned int refcnt;
+ void *cb_handle;
+};
+
+/*
+ * This function initializes the list if not done earlier.
+ */
+int pool_init(void)
+{
+ /* Check if list already initialized */
+ if (!poolinitdone) {
+ /*
+ * list can be modified by different instances. So please,
+ * make sure to acquire mutex lock before initializing the list.
+ */
+
+ shared_res_mutex_handle = kzalloc(sizeof(*shared_res_mutex_handle), GFP_KERNEL);
+ if (!shared_res_mutex_handle)
+ return -ENOMEM;
+
+ mutex_init(shared_res_mutex_handle);
+
+ /* initialize the list of pools */
+ lst_init(&poollist);
+		/* Set initialized flag to true */
+ poolinitdone = 1;
+ }
+
+ return 0;
+}
+
+/*
+ * This function de-initializes the list.
+ */
+void pool_deinit(void)
+{
+ struct poollist *respoollist;
+
+ /* Check if list initialized */
+ if (poolinitdone) {
+ /* destroy any active pools */
+ respoollist = (struct poollist *)lst_first(&poollist);
+ while (respoollist) {
+ pool_destroy(respoollist);
+ respoollist = (struct poollist *)lst_first(&poollist);
+ }
+
+ /* Destroy mutex */
+ mutex_destroy(shared_res_mutex_handle);
+ kfree(shared_res_mutex_handle);
+ shared_res_mutex_handle = NULL;
+
+ /* set initialized flag to 0 */
+ poolinitdone = 0;
+ }
+}
+
+/*
+ * This function creates pool.
+ */
+int pool_api_create(void **poolhndle)
+{
+ struct poollist *respoollist;
+ unsigned int result = 0;
+
+ /* Allocate a pool structure */
+ respoollist = kzalloc(sizeof(*respoollist), GFP_KERNEL);
+ if (!respoollist)
+ return IMG_ERROR_OUT_OF_MEMORY;
+
+ /* Initialize the pool info */
+ lst_init(&respoollist->freereslst);
+ lst_init(&respoollist->actvreslst);
+
+ /* Create mutex */
+ respoollist->pool_mutex = kzalloc(sizeof(*respoollist->pool_mutex), GFP_KERNEL);
+ if (!respoollist->pool_mutex) {
+		result = IMG_ERROR_OUT_OF_MEMORY;
+ goto error_create_context;
+ }
+ mutex_init(respoollist->pool_mutex);
+
+ /* Create context for the Id generator */
+ result = idgen_createcontext(POOL_IDGEN_MAX_ID,
+ POOL_IDGEN_BLOCK_SIZE, 0,
+ &respoollist->idgenhandle);
+ if (result != IMG_SUCCESS)
+ goto error_create_context;
+
+	/* Take the global pool list lock */
+ mutex_lock_nested(shared_res_mutex_handle, SUBCLASS_POOL_RES);
+
+ /* Add to list of pools */
+ lst_add(&poollist, respoollist);
+
+	/* Release the global pool list lock */
+ mutex_unlock(shared_res_mutex_handle);
+
+ /* Return handle to pool */
+ *poolhndle = respoollist;
+
+ return IMG_SUCCESS;
+
+ /* Error handling. */
+error_create_context:
+ kfree(respoollist);
+
+ return result;
+}
+
+/*
+ * This function destroys the pool.
+ */
+int pool_destroy(void *poolhndle)
+{
+ struct poollist *respoollist = poolhndle;
+ struct poolres *respool;
+ struct poolres *clonerespool;
+ unsigned int result = 0;
+
+ if (!poolinitdone || !respoollist) {
+ result = IMG_ERROR_INVALID_PARAMETERS;
+ goto error_nolock;
+ }
+
+ /* Lock the pool */
+ mutex_lock_nested(respoollist->pool_mutex, SUBCLASS_POOL);
+
+	/*
+	 * Take the global pool list lock so the pool list cannot be
+	 * modified by another context while this pool is being removed.
+	 */
+ mutex_lock_nested(shared_res_mutex_handle, SUBCLASS_POOL_RES);
+
+ /* Remove the pool from the active list */
+ lst_remove(&poollist, respoollist);
+
+	/* Release the global pool list lock */
+ mutex_unlock(shared_res_mutex_handle);
+
+ /* Destroy any resources in the free list */
+ respool = (struct poolres *)lst_removehead(&respoollist->freereslst);
+ while (respool) {
+ respool->desfunc(respool->resparam, respool->cb_handle);
+ kfree(respool);
+ respool = (struct poolres *)
+ lst_removehead(&respoollist->freereslst);
+ }
+
+ /* Destroy any resources in the active list */
+ respool = (struct poolres *)lst_removehead(&respoollist->actvreslst);
+ while (respool) {
+ clonerespool = (struct poolres *)
+ lst_removehead(&respool->clonereslst);
+ while (clonerespool) {
+ /*
+			 * If a copy of the resource's resparam was created,
+			 * free it. kfree(NULL) is safe, so no extra check
+			 * is needed.
+ */
+ kfree(clonerespool->resparam);
+
+ kfree(clonerespool);
+ clonerespool = (struct poolres *)
+ lst_removehead(&respool->clonereslst);
+ }
+
+ /* Call the resource destructor */
+ respool->desfunc(respool->resparam, respool->cb_handle);
+ kfree(respool);
+ respool = (struct poolres *)
+ lst_removehead(&respoollist->actvreslst);
+ }
+ /* Destroy the context for the Id generator */
+ if (respoollist->idgenhandle)
+ result = idgen_destroycontext(respoollist->idgenhandle);
+
+ /* Unlock the pool */
+ mutex_unlock(respoollist->pool_mutex);
+
+ /* Destroy mutex */
+ mutex_destroy(respoollist->pool_mutex);
+ kfree(respoollist->pool_mutex);
+ respoollist->pool_mutex = NULL;
+
+ /* Free the pool structure */
+ kfree(respoollist);
+
+ return IMG_SUCCESS;
+
+error_nolock:
+ return result;
+}
+
+int pool_setfreecalbck(void *poolhndle, pfrecalbkpntr pfnfree)
+{
+ struct poollist *respoollist = poolhndle;
+ struct poolres *respool;
+ unsigned int result = 0;
+
+ if (!poolinitdone || !respoollist) {
+ result = IMG_ERROR_INVALID_PARAMETERS;
+ goto error_nolock;
+ }
+
+ /* Lock the pool */
+ mutex_lock_nested(respoollist->pool_mutex, SUBCLASS_POOL);
+
+ respoollist->pfnfree = pfnfree;
+
+ /* If free callback set */
+ if (respoollist->pfnfree) {
+ /* Move resources from free to active list */
+ respool = (struct poolres *)
+ lst_removehead(&respoollist->freereslst);
+ while (respool) {
+ /* Add to active list */
+ lst_add(&respoollist->actvreslst, respool);
+ respool->refcnt++;
+
+ /* Unlock the pool */
+ mutex_unlock(respoollist->pool_mutex);
+
+ /* Call the free callback */
+ respoollist->pfnfree(respool->resid, respool->resparam);
+
+ /* Lock the pool */
+ mutex_lock_nested(respoollist->pool_mutex, SUBCLASS_POOL);
+
+ /* Get next free resource */
+ respool = (struct poolres *)
+ lst_removehead(&respoollist->freereslst);
+ }
+ }
+
+ /* Unlock the pool */
+ mutex_unlock(respoollist->pool_mutex);
+
+ /* Return IMG_SUCCESS */
+ return IMG_SUCCESS;
+
+error_nolock:
+ return result;
+}
+
+int pool_resreg(void *poolhndle, pdestcallbkptr fndestructor,
+ void *resparam, unsigned int resparamsize,
+ int balloc, unsigned int *residptr,
+ void **poolreshndle, void *cb_handle)
+{
+ struct poollist *respoollist = poolhndle;
+ struct poolres *respool;
+ unsigned int result = 0;
+
+ if (!poolinitdone || !respoollist) {
+ result = IMG_ERROR_INVALID_PARAMETERS;
+ goto error_nolock;
+ }
+
+ /* Allocate a resource structure */
+ respool = kzalloc(sizeof(*respool), GFP_KERNEL);
+ if (!respool)
+ return IMG_ERROR_OUT_OF_MEMORY;
+
+ /* Setup the resource */
+ respool->desfunc = fndestructor;
+ respool->cb_handle = cb_handle;
+ respool->resparam = resparam;
+ respool->resparmsize = resparamsize;
+ respool->respoollst = respoollist;
+ lst_init(&respool->clonereslst);
+
+ /* Lock the pool */
+ mutex_lock_nested(respoollist->pool_mutex, SUBCLASS_POOL);
+
+ /* Set resource id */
+ result = idgen_allocid(respoollist->idgenhandle,
+ (void *)respool, &respool->resid);
+ if (result != IMG_SUCCESS) {
+ kfree(respool);
+ /* Unlock the pool */
+ mutex_unlock(respoollist->pool_mutex);
+ return result;
+ }
+
+	/* If the resource is allocated or a free callback is set */
+ if (balloc || respoollist->pfnfree) {
+ /* Add to active list */
+ lst_add(&respoollist->actvreslst, respool);
+ respool->refcnt++;
+ } else {
+ /* Add to free list */
+ lst_add(&respoollist->freereslst, respool);
+ }
+
+ /* Return the resource id */
+ if (residptr)
+ *residptr = respool->resid;
+
+ /* Return the handle to the resource */
+ if (poolreshndle)
+ *poolreshndle = respool;
+
+ /* Unlock the pool */
+ mutex_unlock(respoollist->pool_mutex);
+
+ /* If free callback set */
+ if (respoollist->pfnfree) {
+ /* Call the free callback */
+ respoollist->pfnfree(respool->resid, respool->resparam);
+ }
+
+ /* Return IMG_SUCCESS */
+ return IMG_SUCCESS;
+
+error_nolock:
+ return result;
+}
+
+int pool_resdestroy(void *poolreshndle, int bforce)
+{
+ struct poolres *respool = poolreshndle;
+ struct poollist *respoollist;
+ struct poolres *origrespool;
+ unsigned int result = 0;
+
+ if (!poolinitdone || !respool) {
+ result = IMG_ERROR_INVALID_PARAMETERS;
+ goto error_nolock;
+ }
+
+ respoollist = respool->respoollst;
+
+ /* If this is a clone */
+ if (respool->isclone) {
+ /* Get access to the original */
+ origrespool = respool->origres;
+ if (!origrespool) {
+ result = IMG_ERROR_UNEXPECTED_STATE;
+ goto error_nolock;
+ }
+
+ if (origrespool->isclone) {
+ result = IMG_ERROR_UNEXPECTED_STATE;
+ goto error_nolock;
+ }
+
+ /* Remove from the clone list */
+ lst_remove(&origrespool->clonereslst, respool);
+
+ /* Free resource id */
+ result = idgen_freeid(respoollist->idgenhandle,
+ respool->resid);
+ if (result != IMG_SUCCESS)
+ return result;
+
+ /*
+		 * If a copy of the resource's resparam was created, free it.
+		 * kfree(NULL) is safe, so no extra check is needed.
+ */
+ kfree(respool->resparam);
+
+ /* Free the clone resource structure */
+ kfree(respool);
+
+ /* Set resource to be "freed" to the original */
+ respool = origrespool;
+ }
+
+ /* If there are still outstanding references */
+ if (!bforce && respool->refcnt != 0) {
+ /*
+ * We may need to mark the resource and destroy it when
+ * there are no outstanding references
+ */
+ return IMG_SUCCESS;
+ }
+
+ /* Has the resource outstanding references */
+ if (respool->refcnt != 0) {
+ /* Remove the resource from the active list */
+ lst_remove(&respoollist->actvreslst, respool);
+ } else {
+ /* Remove the resource from the free list */
+ lst_remove(&respoollist->freereslst, respool);
+ }
+
+ /* Free resource id */
+ result = idgen_freeid(respoollist->idgenhandle,
+ respool->resid);
+ if (result != IMG_SUCCESS)
+ return result;
+
+ /* Call the resource destructor */
+ respool->desfunc(respool->resparam, respool->cb_handle);
+ kfree(respool);
+
+ return IMG_SUCCESS;
+
+error_nolock:
+ return result;
+}
+
+int pool_resalloc(void *poolhndle, void *poolreshndle)
+{
+ struct poollist *respoollist = poolhndle;
+ struct poolres *respool = poolreshndle;
+ unsigned int result = 0;
+
+ if (!poolinitdone || !respoollist || !poolreshndle) {
+ result = IMG_ERROR_INVALID_PARAMETERS;
+ goto error_nolock;
+ }
+
+ /* Lock the pool */
+ mutex_lock_nested(respoollist->pool_mutex, SUBCLASS_POOL);
+
+ /* Remove resource from free list */
+ lst_remove(&respoollist->freereslst, respool);
+
+ /* Add to active list */
+ lst_add(&respoollist->actvreslst, respool);
+ respool->refcnt++;
+
+ /* Unlock the pool */
+ mutex_unlock(respoollist->pool_mutex);
+
+ /* Return IMG_SUCCESS */
+ return IMG_SUCCESS;
+
+error_nolock:
+ return result;
+}
+
+int pool_resfree(void *poolreshndle)
+{
+ struct poolres *respool = poolreshndle;
+ struct poollist *respoollist;
+ struct poolres *origrespool;
+ unsigned int result = 0;
+
+ if (!poolinitdone || !respool) {
+ result = IMG_ERROR_INVALID_PARAMETERS;
+ goto error_nolock;
+ }
+
+ respoollist = respool->respoollst;
+
+ /* Lock the pool */
+ mutex_lock_nested(respoollist->pool_mutex, SUBCLASS_POOL);
+
+ /* If this is a clone */
+ if (respool->isclone) {
+ /* Get access to the original */
+ origrespool = respool->origres;
+ if (!origrespool) {
+ mutex_unlock(respoollist->pool_mutex);
+ return IMG_ERROR_INVALID_PARAMETERS;
+ }
+
+ /* Remove from the clone list */
+ lst_remove(&origrespool->clonereslst, respool);
+
+ /* Free resource id */
+ result = idgen_freeid(respoollist->idgenhandle,
+ respool->resid);
+ if (result != IMG_SUCCESS) {
+ /* Unlock the pool */
+ mutex_unlock(respoollist->pool_mutex);
+ return result;
+ }
+
+ /*
+		 * If a copy of the resource's resparam was created, free it.
+		 * kfree(NULL) is safe, so no extra check is needed.
+ */
+ kfree(respool->resparam);
+
+ /* Free the clone resource structure */
+ kfree(respool);
+
+ /* Set resource to be "freed" to the original */
+ respool = origrespool;
+ }
+
+ /* Update the reference count */
+ respool->refcnt--;
+
+ /* If there are still outstanding references */
+ if (respool->refcnt != 0) {
+ /* Unlock the pool */
+ mutex_unlock(respoollist->pool_mutex);
+ /* Return IMG_SUCCESS */
+ return IMG_SUCCESS;
+ }
+
+ /* Remove the resource from the active list */
+ lst_remove(&respoollist->actvreslst, respool);
+
+ /* If free callback set */
+ if (respoollist->pfnfree) {
+ /* Add to active list */
+ lst_add(&respoollist->actvreslst, respool);
+ respool->refcnt++;
+ } else {
+ /* Add to free list */
+ lst_add(&respoollist->freereslst, respool);
+ }
+
+ /* Unlock the pool */
+ mutex_unlock(respoollist->pool_mutex);
+
+ /* If free callback set */
+ if (respoollist->pfnfree) {
+ /* Call the free callback */
+ respoollist->pfnfree(respool->resid, respool->resparam);
+ }
+
+ /* Return IMG_SUCCESS */
+ return IMG_SUCCESS;
+
+error_nolock:
+ return result;
+}
+
+int pool_resclone(void *poolreshndle, void **clonereshndle, void **resparam)
+{
+ struct poolres *respool = poolreshndle;
+ struct poollist *respoollist;
+ struct poolres *origrespool = respool;
+ struct poolres *clonerespool;
+ unsigned int result = 0;
+
+ if (!poolinitdone || !respool) {
+ result = IMG_ERROR_INVALID_PARAMETERS;
+ goto error_nolock;
+ }
+
+ /* Allocate a resource structure */
+ clonerespool = kzalloc(sizeof(*clonerespool), GFP_KERNEL);
+ if (!clonerespool)
+ return IMG_ERROR_OUT_OF_MEMORY;
+
+ respoollist = respool->respoollst;
+	if (!respoollist) {
+		kfree(clonerespool);
+		return IMG_ERROR_FATAL;
+	}
+
+ /* Lock the pool */
+ mutex_lock_nested(respoollist->pool_mutex, SUBCLASS_POOL);
+
+ /* Set resource id */
+ result = idgen_allocid(respoollist->idgenhandle,
+ (void *)clonerespool, &clonerespool->resid);
+ if (result != IMG_SUCCESS)
+ goto error_alloc_id;
+
+ /* If this is a clone, set the original */
+ if (respool->isclone)
+ origrespool = respool->origres;
+
+ /* Setup the cloned resource */
+ clonerespool->isclone = 1;
+ clonerespool->respoollst = respoollist;
+ clonerespool->origres = origrespool;
+
+ /* Add to clone list */
+ lst_add(&origrespool->clonereslst, clonerespool);
+ origrespool->refcnt++;
+
+	/* If resparam is not NULL */
+ if (resparam) {
+		/* If the size of the original resparam is 0 */
+ if (origrespool->resparmsize == 0) {
+ *resparam = NULL;
+ } else {
+			/*
+			 * Allocate a copy of the original resparam:
+			 * kmemdup() allocates origrespool->resparmsize bytes
+			 * and copies origrespool->resparam into the new
+			 * buffer.
+			 */
+ *resparam = kmemdup(origrespool->resparam,
+ origrespool->resparmsize,
+ GFP_KERNEL);
+ if (!(*resparam)) {
+ result = IMG_ERROR_OUT_OF_MEMORY;
+ goto error_copy_param;
+ }
+ }
+ }
+
+ /* Unlock the pool */
+ mutex_unlock(respoollist->pool_mutex);
+
+ /* Return the cloned resource */
+ *clonereshndle = clonerespool;
+
+ /* Return IMG_SUCCESS */
+ return IMG_SUCCESS;
+
+ /* Error handling. */
+error_copy_param:
+ lst_remove(&origrespool->clonereslst, clonerespool);
+ origrespool->refcnt--;
+error_alloc_id:
+ kfree(clonerespool);
+
+ /* Unlock the pool */
+ mutex_unlock(respoollist->pool_mutex);
+
+error_nolock:
+ return result;
+}
diff --git a/drivers/staging/media/vxd/common/pool_api.h b/drivers/staging/media/vxd/common/pool_api.h
new file mode 100644
index 000000000000..1e7803abb715
--- /dev/null
+++ b/drivers/staging/media/vxd/common/pool_api.h
@@ -0,0 +1,113 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Resource pool manager API.
+ *
+ * Copyright (c) Imagination Technologies Ltd.
+ * Copyright (c) 2021 Texas Instruments Incorporated - http://www.ti.com/
+ *
+ * Authors:
+ * Amit Makani <[email protected]>
+ *
+ * Re-written for upstreaming
+ * Sidraya Jayagond <[email protected]>
+ */
+#ifndef __POOLAPI_H__
+#define __POOLAPI_H__
+
+#include "img_errors.h"
+#include "lst.h"
+
+/*
+ * This is the prototype for "free" callback functions. This function
+ * is called when resources are returned to the pools list of free resources.
+ * NOTE: The "freed" resource is then allocated and passed to the callback
+ * function.
+ */
+typedef void (*pfrecalbkpntr)(unsigned int ui32resid, void *resparam);
+
+/*
+ * This is the prototype for "destructor" callback functions. This function
+ * is called when a resource registered with the resource pool manager is to
+ * be destroyed.
+ */
+typedef void (*pdestcallbkptr)(void *resparam, void *cb_handle);
+
+/*
+ * pool_init - This function initializes the resource pool manager component
+ * and should be called at start-up.
+ */
+int pool_init(void);
+
+/*
+ * This function deinitializes the resource pool manager component
+ * and would normally be called at shutdown.
+ */
+void pool_deinit(void);
+
+/*
+ * This function is used to create a resource pool into which resources can be
+ * placed.
+ */
+int pool_api_create(void **poolhndle);
+
+/*
+ * This function is used to destroy a resource pool.
+ * NOTE: Destroying a resource pool destroys all of the resources within the
+ * pool by calling the associated destructor function that was
+ * defined when the resource was registered using pool_resreg().
+ *
+ * NOTE: All of the pool's resources must be in the pool's free list - the
+ * allocated list must be empty.
+ */
+int pool_destroy(void *poolhndle);
+
+/*
+ * This function is used to set or remove a free callback function on a pool.
+ * The free callback function gets called for any resources already in the
+ * pool's free list or for any resources that subsequently get freed.
+ * NOTE: The resource passed to the callback function has been allocated before
+ * the callback is made.
+ */
+int pool_setfreecalbck(void *poolhndle, pfrecalbkpntr pfnfree);
+
+/*
+ * This function is used to register a resource within a resource pool. The
+ * resource is added to the pool's allocated or free list based on the value
+ * of balloc.
+ */
+int pool_resreg(void *poolhndle, pdestcallbkptr fndestructor,
+ void *resparam, unsigned int resparamsize,
+ int balloc, unsigned int *residptr,
+ void **poolreshndle, void *cb_handle);
+
+/*
+ * This function is used to destroy a resource.
+ */
+int pool_resdestroy(void *poolreshndle, int bforce);
+
+/*
+ * This function is used to get/allocate a resource from a pool. This moves
+ * the resource from the free list to the allocated list.
+ */
+int pool_resalloc(void *poolhndle, void *poolreshndle);
+
+/*
+ * This function is used to free a resource and return it to the pool's list
+ * of free resources.
+ * NOTE: The resource is only moved to the free list when all references to
+ * the resource have been freed.
+ */
+int pool_resfree(void *poolreshndle);
+
+/*
+ * This function is used to clone a resource - this creates an additional
+ * reference to the resource.
+ * NOTE: The resource is only moved to the free list when all references to
+ * the resource have been freed.
+ * NOTE: If this function is used to clone the resource's resparam data then
+ * the cloned data is freed when the clone of the resource is freed.
+ * The resource destructor is NOT used for this - simply a kfree().
+ */
+int pool_resclone(void *poolreshndle, void **clonereshndle, void **resparam);
+
+#endif /* __POOLAPI_H__ */
--
2.17.1
From: Sidraya <[email protected]>
Add video decoder to Makefile
Add video decoder to Kconfig
Signed-off-by: Angela Stegmaier <[email protected]>
Signed-off-by: Sidraya <[email protected]>
---
drivers/staging/media/Kconfig | 2 ++
drivers/staging/media/Makefile | 1 +
2 files changed, 3 insertions(+)
diff --git a/drivers/staging/media/Kconfig b/drivers/staging/media/Kconfig
index e3aaae920847..044763f8fe2e 100644
--- a/drivers/staging/media/Kconfig
+++ b/drivers/staging/media/Kconfig
@@ -44,4 +44,6 @@ source "drivers/staging/media/ipu3/Kconfig"
source "drivers/staging/media/av7110/Kconfig"
+source "drivers/staging/media/vxd/decoder/Kconfig"
+
endif
diff --git a/drivers/staging/media/Makefile b/drivers/staging/media/Makefile
index 5b5afc5b03a0..567aed1d2d43 100644
--- a/drivers/staging/media/Makefile
+++ b/drivers/staging/media/Makefile
@@ -11,3 +11,4 @@ obj-$(CONFIG_VIDEO_HANTRO) += hantro/
obj-$(CONFIG_VIDEO_IPU3_IMGU) += ipu3/
obj-$(CONFIG_VIDEO_ZORAN) += zoran/
obj-$(CONFIG_DVB_AV7110) += av7110/
+obj-$(CONFIG_VIDEO_IMG_VXD_DEC) += vxd/decoder/
--
2.17.1
From: Sidraya <[email protected]>
Add the video decoder's basic config to Kconfig and
select the required V4L2 modules.
Add video decoder Makefile.
Signed-off-by: Angela Stegmaier <[email protected]>
Signed-off-by: Sidraya <[email protected]>
---
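Note for reviewers (below the cut line, not part of the commit): the optional
Makefile variables below are passed to the compiler via ccflags-y, so they
would typically be consumed in the driver sources as ordinary preprocessor
guards. A hedged sketch; my_parser_setup() is a hypothetical name, not a
function from this series.

#include <linux/printk.h>

/* Hypothetical example of how the build-time options are typically tested */
static void my_parser_setup(void)
{
#ifdef HAS_HEVC
	pr_info("HEVC/H265 parsing support compiled in\n");
#endif
#ifdef HAS_JPEG
	pr_info("MJPEG parsing support compiled in\n");
#endif
#ifdef DEBUG_DECODER_DRIVER
	pr_debug("decoder debug traces enabled\n");
#endif
}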
MAINTAINERS | 2 +
drivers/staging/media/vxd/decoder/Kconfig | 13 +++
drivers/staging/media/vxd/decoder/Makefile | 129 +++++++++++++++++++++
3 files changed, 144 insertions(+)
create mode 100644 drivers/staging/media/vxd/decoder/Kconfig
create mode 100644 drivers/staging/media/vxd/decoder/Makefile
diff --git a/MAINTAINERS b/MAINTAINERS
index b5875f9015ba..0616ab620135 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -19568,6 +19568,8 @@ F: drivers/staging/media/vxd/common/talmmu_api.c
F: drivers/staging/media/vxd/common/talmmu_api.h
F: drivers/staging/media/vxd/common/work_queue.c
F: drivers/staging/media/vxd/common/work_queue.h
+F: drivers/staging/media/vxd/decoder/Kconfig
+F: drivers/staging/media/vxd/decoder/Makefile
F: drivers/staging/media/vxd/decoder/bspp.c
F: drivers/staging/media/vxd/decoder/bspp.h
F: drivers/staging/media/vxd/decoder/bspp_int.h
diff --git a/drivers/staging/media/vxd/decoder/Kconfig b/drivers/staging/media/vxd/decoder/Kconfig
new file mode 100644
index 000000000000..5ee44cc07dd8
--- /dev/null
+++ b/drivers/staging/media/vxd/decoder/Kconfig
@@ -0,0 +1,13 @@
+# SPDX-License-Identifier: GPL-2.0
+
+config VIDEO_IMG_VXD_DEC
+ tristate "IMG VXD DEC (Video Decoder) driver"
+ depends on VIDEO_DEV && VIDEO_V4L2
+ select VIDEOBUF2_CORE
+ select VIDEOBUF2_DMA_CONTIG
+ select VIDEOBUF2_DMA_SG
+ select V4L2_MEM2MEM_DEV
+ help
+ This is an IMG VXD DEC V4L2 driver that adds support for the
+ Imagination D5520 (Video Decoder) hardware.
+ The module name when built is vxd-dec.
diff --git a/drivers/staging/media/vxd/decoder/Makefile b/drivers/staging/media/vxd/decoder/Makefile
new file mode 100644
index 000000000000..80e46a7da1ab
--- /dev/null
+++ b/drivers/staging/media/vxd/decoder/Makefile
@@ -0,0 +1,129 @@
+# SPDX-License-Identifier: GPL-2.0
+
+# Optional Video feature configuration control
+
+# (1)
+# This config allows enabling or disabling of HEVC/H265 video
+# decoding functionality with IMG VXD Video decoder. If you
+# do not want HEVC decode capability, select N.
+# If unsure, select Y
+HAS_HEVC ?=y
+
+# (2)
+# This config enables error concealment with gray pattern.
+# Disable if you do not want error concealment capability.
+# If unsure, say Y
+ERROR_CONCEALMENT ?=y
+
+# (3)
+# This config, if enabled, configures H264 video decoder to
+# output frames in the decode order with no buffering and
+# picture reordering inside codec.
+# If unsure, say N
+REDUCED_DPB_NO_PIC_REORDERING ?=n
+
+# (4)
+# This config, if enabled, enables all the debug traces in
+# decoder driver. Enable it only for debug purpose
+# Keep it always disabled for release codebase
+DEBUG_DECODER_DRIVER ?=n
+
+# (5)
+# This config allows enabling or disabling of MJPEG video
+# decoding functionality with IMG VXD Video decoder. If you
+# do not want MJPEG decode capability, select N.
+# If unsure, select Y
+HAS_JPEG ?=y
+
+# (6)
+# This config allows simulation of Error recovery.
+# This config is only for testing, never enable it for release build.
+ERROR_RECOVERY_SIMULATION ?=n
+
+# (7)
+# This config enables allocation of capture buffers from
+# dma contiguous memory.
+# If unsure, say Y
+CAPTURE_CONTIG_ALLOC ?=y
+
+#VXD
+vxd-dec-y += vxd_core.o
+
+#PVDEC
+vxd-dec-y += vxd_pvdec.o
+
+#MEM_MGR
+vxd-dec-y += ../common/img_mem_man.o ../common/img_mem_unified.o
+vxd-dec-y += ../common/imgmmu.o
+
+#Utilities
+vxd-dec-y += ../common/lst.o ../common/dq.o
+vxd-dec-y += ../common/resource.o
+vxd-dec-y += dec_resources.o
+vxd-dec-y += ../common/rman_api.o
+vxd-dec-y += pixel_api.o
+vxd-dec-y += vdecdd_utils_buf.o
+vxd-dec-y += vdecdd_utils.o
+
+#MMU
+vxd-dec-y += ../common/talmmu_api.o
+vxd-dec-y += ../common/pool.o
+vxd-dec-y += ../common/hash.o
+vxd-dec-y += ../common/ra.o
+vxd-dec-y += ../common/addr_alloc.o
+vxd-dec-y += ../common/work_queue.o
+vxd-dec-y += vdec_mmu_wrapper.o
+
+#DECODER
+vxd-dec-y += ../common/pool_api.o ../common/idgen_api.o
+vxd-dec-y += hw_control.o
+vxd-dec-y += vxd_int.o
+vxd-dec-y += translation_api.o
+vxd-dec-y += decoder.o
+vxd-dec-y += core.o
+
+#BSPP
+vxd-dec-y += swsr.o
+vxd-dec-y += h264_secure_parser.o
+vxd-dec-y += bspp.o
+
+#UM INTERFACE & SYSDEV
+vxd-dec-y += vxd_dec.o
+
+vxd-dec-y += vxd_v4l2.o
+
+ifeq ($(DEBUG_DECODER_DRIVER), y)
+ccflags-y += -DDEBUG_DECODER_DRIVER
+ccflags-y += -DDEBUG
+endif
+
+ifeq ($(HAS_HEVC),y)
+ccflags-y += -DHAS_HEVC
+vxd-dec-y += hevc_secure_parser.o
+endif
+
+ifeq ($(HAS_JPEG),y)
+ccflags-y += -DHAS_JPEG
+vxd-dec-y += jpeg_secure_parser.o
+endif
+
+
+ifeq ($(ERROR_CONCEALMENT),y)
+ccflags-y += -DERROR_CONCEALMENT
+endif
+
+ifeq ($(REDUCED_DPB_NO_PIC_REORDERING),y)
+ccflags-y += -DREDUCED_DPB_NO_PIC_REORDERING
+endif
+
+ifeq ($(ERROR_RECOVERY_SIMULATION),y)
+ccflags-y += -DERROR_RECOVERY_SIMULATION
+endif
+
+ifeq ($(CAPTURE_CONTIG_ALLOC),y)
+ccflags-y += -DCAPTURE_CONTIG_ALLOC
+endif
+
+obj-$(CONFIG_VIDEO_IMG_VXD_DEC) += vxd-dec.o
+
+ccflags-y += -I$(srctree)/drivers/staging/media/vxd/common
--
2.17.1
From: Sidraya <[email protected]>
This patch enables the d5520 video decoder driver as a module
on the J721e board.
Signed-off-by: Sidraya <[email protected]>
---
MAINTAINERS | 1 +
.../configs/ti_sdk_arm64_release_defconfig | 7407 +++++++++++++++++
2 files changed, 7408 insertions(+)
create mode 100644 arch/arm64/configs/ti_sdk_arm64_release_defconfig
diff --git a/MAINTAINERS b/MAINTAINERS
index c7b4c860f8a7..db47fcafbcfc 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -19537,6 +19537,7 @@ M: Sidraya Jayagond <[email protected]>
L: [email protected]
S: Maintained
F: Documentation/devicetree/bindings/media/img,d5520-vxd.yaml
+F: arch/arm64/configs/ti_sdk_arm64_release_defconfig
F: drivers/staging/media/vxd/common/addr_alloc.c
F: drivers/staging/media/vxd/common/addr_alloc.h
F: drivers/staging/media/vxd/common/dq.c
diff --git a/arch/arm64/configs/ti_sdk_arm64_release_defconfig b/arch/arm64/configs/ti_sdk_arm64_release_defconfig
new file mode 100644
index 000000000000..a424c76d29da
--- /dev/null
+++ b/arch/arm64/configs/ti_sdk_arm64_release_defconfig
@@ -0,0 +1,7407 @@
+#
+# Automatically generated file; DO NOT EDIT.
+# Linux/arm64 5.10.41 Kernel Configuration
+#
+CONFIG_CC_VERSION_TEXT="aarch64-none-linux-gnu-gcc (GNU Toolchain for the A-profile Architecture 9.2-2019.12 (arm-9.10)) 9.2.1 20191025"
+CONFIG_CC_IS_GCC=y
+CONFIG_GCC_VERSION=90201
+CONFIG_LD_VERSION=233010000
+CONFIG_CLANG_VERSION=0
+CONFIG_LLD_VERSION=0
+CONFIG_CC_CAN_LINK=y
+CONFIG_CC_CAN_LINK_STATIC=y
+CONFIG_CC_HAS_ASM_GOTO=y
+CONFIG_CC_HAS_ASM_INLINE=y
+CONFIG_IRQ_WORK=y
+CONFIG_BUILDTIME_TABLE_SORT=y
+CONFIG_THREAD_INFO_IN_TASK=y
+
+#
+# General setup
+#
+CONFIG_INIT_ENV_ARG_LIMIT=32
+# CONFIG_COMPILE_TEST is not set
+CONFIG_LOCALVERSION=""
+CONFIG_LOCALVERSION_AUTO=y
+CONFIG_BUILD_SALT=""
+CONFIG_DEFAULT_INIT=""
+CONFIG_DEFAULT_HOSTNAME="(none)"
+CONFIG_SWAP=y
+CONFIG_SYSVIPC=y
+CONFIG_SYSVIPC_SYSCTL=y
+CONFIG_POSIX_MQUEUE=y
+CONFIG_POSIX_MQUEUE_SYSCTL=y
+# CONFIG_WATCH_QUEUE is not set
+CONFIG_CROSS_MEMORY_ATTACH=y
+# CONFIG_USELIB is not set
+# CONFIG_AUDIT is not set
+CONFIG_HAVE_ARCH_AUDITSYSCALL=y
+
+#
+# IRQ subsystem
+#
+CONFIG_GENERIC_IRQ_PROBE=y
+CONFIG_GENERIC_IRQ_SHOW=y
+CONFIG_GENERIC_IRQ_SHOW_LEVEL=y
+CONFIG_GENERIC_IRQ_EFFECTIVE_AFF_MASK=y
+CONFIG_GENERIC_IRQ_MIGRATION=y
+CONFIG_HARDIRQS_SW_RESEND=y
+CONFIG_IRQ_DOMAIN=y
+CONFIG_IRQ_DOMAIN_HIERARCHY=y
+CONFIG_GENERIC_IRQ_IPI=y
+CONFIG_GENERIC_MSI_IRQ=y
+CONFIG_GENERIC_MSI_IRQ_DOMAIN=y
+CONFIG_IRQ_MSI_IOMMU=y
+CONFIG_HANDLE_DOMAIN_IRQ=y
+CONFIG_IRQ_FORCED_THREADING=y
+CONFIG_SPARSE_IRQ=y
+# CONFIG_GENERIC_IRQ_DEBUGFS is not set
+# end of IRQ subsystem
+
+CONFIG_GENERIC_IRQ_MULTI_HANDLER=y
+CONFIG_GENERIC_TIME_VSYSCALL=y
+CONFIG_GENERIC_CLOCKEVENTS=y
+CONFIG_ARCH_HAS_TICK_BROADCAST=y
+CONFIG_GENERIC_CLOCKEVENTS_BROADCAST=y
+
+#
+# Timers subsystem
+#
+CONFIG_TICK_ONESHOT=y
+CONFIG_NO_HZ_COMMON=y
+# CONFIG_HZ_PERIODIC is not set
+CONFIG_NO_HZ_IDLE=y
+# CONFIG_NO_HZ_FULL is not set
+# CONFIG_NO_HZ is not set
+CONFIG_HIGH_RES_TIMERS=y
+# end of Timers subsystem
+
+# CONFIG_PREEMPT_NONE is not set
+# CONFIG_PREEMPT_VOLUNTARY is not set
+CONFIG_PREEMPT=y
+CONFIG_PREEMPT_COUNT=y
+CONFIG_PREEMPTION=y
+
+#
+# CPU/Task time and stats accounting
+#
+CONFIG_TICK_CPU_ACCOUNTING=y
+# CONFIG_VIRT_CPU_ACCOUNTING_GEN is not set
+CONFIG_IRQ_TIME_ACCOUNTING=y
+CONFIG_HAVE_SCHED_AVG_IRQ=y
+CONFIG_SCHED_THERMAL_PRESSURE=y
+CONFIG_BSD_PROCESS_ACCT=y
+CONFIG_BSD_PROCESS_ACCT_V3=y
+# CONFIG_TASKSTATS is not set
+# CONFIG_PSI is not set
+# end of CPU/Task time and stats accounting
+
+CONFIG_CPU_ISOLATION=y
+
+#
+# RCU Subsystem
+#
+CONFIG_TREE_RCU=y
+CONFIG_PREEMPT_RCU=y
+# CONFIG_RCU_EXPERT is not set
+CONFIG_SRCU=y
+CONFIG_TREE_SRCU=y
+CONFIG_TASKS_RCU_GENERIC=y
+CONFIG_TASKS_RCU=y
+CONFIG_RCU_STALL_COMMON=y
+CONFIG_RCU_NEED_SEGCBLIST=y
+# end of RCU Subsystem
+
+CONFIG_IKCONFIG=y
+CONFIG_IKCONFIG_PROC=y
+# CONFIG_IKHEADERS is not set
+CONFIG_LOG_BUF_SHIFT=17
+CONFIG_LOG_CPU_MAX_BUF_SHIFT=12
+CONFIG_PRINTK_SAFE_LOG_BUF_SHIFT=13
+CONFIG_GENERIC_SCHED_CLOCK=y
+
+#
+# Scheduler features
+#
+# CONFIG_UCLAMP_TASK is not set
+# end of Scheduler features
+
+CONFIG_ARCH_SUPPORTS_NUMA_BALANCING=y
+CONFIG_CC_HAS_INT128=y
+CONFIG_ARCH_SUPPORTS_INT128=y
+CONFIG_CGROUPS=y
+CONFIG_PAGE_COUNTER=y
+CONFIG_MEMCG=y
+CONFIG_MEMCG_SWAP=y
+CONFIG_MEMCG_KMEM=y
+CONFIG_BLK_CGROUP=y
+CONFIG_CGROUP_WRITEBACK=y
+CONFIG_CGROUP_SCHED=y
+CONFIG_FAIR_GROUP_SCHED=y
+CONFIG_CFS_BANDWIDTH=y
+# CONFIG_RT_GROUP_SCHED is not set
+CONFIG_CGROUP_PIDS=y
+# CONFIG_CGROUP_RDMA is not set
+CONFIG_CGROUP_FREEZER=y
+CONFIG_CGROUP_HUGETLB=y
+CONFIG_CPUSETS=y
+CONFIG_PROC_PID_CPUSET=y
+CONFIG_CGROUP_DEVICE=y
+CONFIG_CGROUP_CPUACCT=y
+CONFIG_CGROUP_PERF=y
+# CONFIG_CGROUP_DEBUG is not set
+CONFIG_NAMESPACES=y
+CONFIG_UTS_NS=y
+CONFIG_TIME_NS=y
+CONFIG_IPC_NS=y
+CONFIG_USER_NS=y
+CONFIG_PID_NS=y
+CONFIG_NET_NS=y
+CONFIG_CHECKPOINT_RESTORE=y
+CONFIG_SCHED_AUTOGROUP=y
+# CONFIG_SYSFS_DEPRECATED is not set
+# CONFIG_RELAY is not set
+CONFIG_BLK_DEV_INITRD=y
+CONFIG_INITRAMFS_SOURCE=""
+CONFIG_RD_GZIP=y
+CONFIG_RD_BZIP2=y
+CONFIG_RD_LZMA=y
+CONFIG_RD_XZ=y
+CONFIG_RD_LZO=y
+CONFIG_RD_LZ4=y
+CONFIG_RD_ZSTD=y
+# CONFIG_BOOT_CONFIG is not set
+CONFIG_CC_OPTIMIZE_FOR_PERFORMANCE=y
+# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
+CONFIG_LD_ORPHAN_WARN=y
+CONFIG_SYSCTL=y
+CONFIG_HAVE_UID16=y
+CONFIG_SYSCTL_EXCEPTION_TRACE=y
+CONFIG_BPF=y
+CONFIG_EXPERT=y
+CONFIG_UID16=y
+CONFIG_MULTIUSER=y
+# CONFIG_SGETMASK_SYSCALL is not set
+CONFIG_SYSFS_SYSCALL=y
+CONFIG_FHANDLE=y
+CONFIG_POSIX_TIMERS=y
+CONFIG_PRINTK=y
+CONFIG_PRINTK_NMI=y
+CONFIG_BUG=y
+CONFIG_ELF_CORE=y
+CONFIG_BASE_FULL=y
+CONFIG_FUTEX=y
+CONFIG_FUTEX_PI=y
+CONFIG_HAVE_FUTEX_CMPXCHG=y
+CONFIG_EPOLL=y
+CONFIG_SIGNALFD=y
+CONFIG_TIMERFD=y
+CONFIG_EVENTFD=y
+CONFIG_SHMEM=y
+CONFIG_AIO=y
+CONFIG_IO_URING=y
+CONFIG_ADVISE_SYSCALLS=y
+CONFIG_MEMBARRIER=y
+CONFIG_KALLSYMS=y
+CONFIG_KALLSYMS_ALL=y
+CONFIG_KALLSYMS_BASE_RELATIVE=y
+# CONFIG_BPF_SYSCALL is not set
+CONFIG_ARCH_WANT_DEFAULT_BPF_JIT=y
+CONFIG_BPF_JIT_DEFAULT_ON=y
+# CONFIG_USERFAULTFD is not set
+CONFIG_ARCH_HAS_MEMBARRIER_SYNC_CORE=y
+CONFIG_KCMP=y
+CONFIG_RSEQ=y
+# CONFIG_DEBUG_RSEQ is not set
+CONFIG_EMBEDDED=y
+CONFIG_HAVE_PERF_EVENTS=y
+# CONFIG_PC104 is not set
+
+#
+# Kernel Performance Events And Counters
+#
+CONFIG_PERF_EVENTS=y
+# CONFIG_DEBUG_PERF_USE_VMALLOC is not set
+# end of Kernel Performance Events And Counters
+
+CONFIG_VM_EVENT_COUNTERS=y
+# CONFIG_SLUB_DEBUG is not set
+# CONFIG_SLUB_MEMCG_SYSFS_ON is not set
+# CONFIG_COMPAT_BRK is not set
+# CONFIG_SLAB is not set
+CONFIG_SLUB=y
+# CONFIG_SLOB is not set
+CONFIG_SLAB_MERGE_DEFAULT=y
+# CONFIG_SLAB_FREELIST_RANDOM is not set
+# CONFIG_SLAB_FREELIST_HARDENED is not set
+# CONFIG_SHUFFLE_PAGE_ALLOCATOR is not set
+CONFIG_SLUB_CPU_PARTIAL=y
+CONFIG_SYSTEM_DATA_VERIFICATION=y
+# CONFIG_PROFILING is not set
+# end of General setup
+
+CONFIG_ARM64=y
+CONFIG_64BIT=y
+CONFIG_MMU=y
+CONFIG_ARM64_PAGE_SHIFT=16
+CONFIG_ARM64_CONT_PTE_SHIFT=5
+CONFIG_ARM64_CONT_PMD_SHIFT=5
+CONFIG_ARCH_MMAP_RND_BITS_MIN=14
+CONFIG_ARCH_MMAP_RND_BITS_MAX=29
+CONFIG_ARCH_MMAP_RND_COMPAT_BITS_MIN=7
+CONFIG_ARCH_MMAP_RND_COMPAT_BITS_MAX=16
+CONFIG_STACKTRACE_SUPPORT=y
+CONFIG_ILLEGAL_POINTER_VALUE=0xdead000000000000
+CONFIG_LOCKDEP_SUPPORT=y
+CONFIG_TRACE_IRQFLAGS_SUPPORT=y
+CONFIG_GENERIC_BUG=y
+CONFIG_GENERIC_BUG_RELATIVE_POINTERS=y
+CONFIG_GENERIC_HWEIGHT=y
+CONFIG_GENERIC_CSUM=y
+CONFIG_GENERIC_CALIBRATE_DELAY=y
+CONFIG_ZONE_DMA=y
+CONFIG_ZONE_DMA32=y
+CONFIG_ARCH_ENABLE_MEMORY_HOTPLUG=y
+CONFIG_ARCH_ENABLE_MEMORY_HOTREMOVE=y
+CONFIG_SMP=y
+CONFIG_KERNEL_MODE_NEON=y
+CONFIG_FIX_EARLYCON_MEM=y
+CONFIG_PGTABLE_LEVELS=3
+CONFIG_ARCH_SUPPORTS_UPROBES=y
+CONFIG_ARCH_PROC_KCORE_TEXT=y
+
+#
+# Platform selection
+#
+# CONFIG_ARCH_ACTIONS is not set
+# CONFIG_ARCH_AGILEX is not set
+# CONFIG_ARCH_SUNXI is not set
+# CONFIG_ARCH_ALPINE is not set
+# CONFIG_ARCH_BCM2835 is not set
+# CONFIG_ARCH_BCM_IPROC is not set
+# CONFIG_ARCH_BERLIN is not set
+# CONFIG_ARCH_BITMAIN is not set
+# CONFIG_ARCH_BRCMSTB is not set
+# CONFIG_ARCH_EXYNOS is not set
+# CONFIG_ARCH_SPARX5 is not set
+CONFIG_ARCH_K3=y
+# CONFIG_ARCH_LAYERSCAPE is not set
+# CONFIG_ARCH_LG1K is not set
+# CONFIG_ARCH_HISI is not set
+# CONFIG_ARCH_KEEMBAY is not set
+# CONFIG_ARCH_MEDIATEK is not set
+# CONFIG_ARCH_MESON is not set
+# CONFIG_ARCH_MVEBU is not set
+# CONFIG_ARCH_MXC is not set
+# CONFIG_ARCH_QCOM is not set
+# CONFIG_ARCH_REALTEK is not set
+# CONFIG_ARCH_RENESAS is not set
+# CONFIG_ARCH_ROCKCHIP is not set
+# CONFIG_ARCH_S32 is not set
+# CONFIG_ARCH_SEATTLE is not set
+# CONFIG_ARCH_STRATIX10 is not set
+# CONFIG_ARCH_SYNQUACER is not set
+# CONFIG_ARCH_TEGRA is not set
+# CONFIG_ARCH_SPRD is not set
+# CONFIG_ARCH_THUNDER is not set
+# CONFIG_ARCH_THUNDER2 is not set
+# CONFIG_ARCH_UNIPHIER is not set
+# CONFIG_ARCH_VEXPRESS is not set
+# CONFIG_ARCH_VISCONTI is not set
+# CONFIG_ARCH_XGENE is not set
+# CONFIG_ARCH_ZX is not set
+# CONFIG_ARCH_ZYNQMP is not set
+# end of Platform selection
+
+#
+# Kernel Features
+#
+
+#
+# ARM errata workarounds via the alternatives framework
+#
+CONFIG_ARM64_WORKAROUND_CLEAN_CACHE=y
+CONFIG_ARM64_ERRATUM_826319=y
+CONFIG_ARM64_ERRATUM_827319=y
+CONFIG_ARM64_ERRATUM_824069=y
+CONFIG_ARM64_ERRATUM_819472=y
+CONFIG_ARM64_ERRATUM_832075=y
+CONFIG_ARM64_ERRATUM_845719=y
+CONFIG_ARM64_ERRATUM_843419=y
+CONFIG_ARM64_ERRATUM_1024718=y
+CONFIG_ARM64_ERRATUM_1418040=y
+CONFIG_ARM64_WORKAROUND_SPECULATIVE_AT=y
+CONFIG_ARM64_ERRATUM_1165522=y
+CONFIG_ARM64_ERRATUM_1319367=y
+CONFIG_ARM64_ERRATUM_1530923=y
+CONFIG_ARM64_WORKAROUND_REPEAT_TLBI=y
+CONFIG_ARM64_ERRATUM_1286807=y
+CONFIG_ARM64_ERRATUM_1463225=y
+CONFIG_ARM64_ERRATUM_1542419=y
+CONFIG_ARM64_ERRATUM_1508412=y
+# CONFIG_CAVIUM_ERRATUM_22375 is not set
+# CONFIG_CAVIUM_ERRATUM_23154 is not set
+# CONFIG_CAVIUM_ERRATUM_27456 is not set
+# CONFIG_CAVIUM_ERRATUM_30115 is not set
+CONFIG_CAVIUM_TX2_ERRATUM_219=y
+CONFIG_FUJITSU_ERRATUM_010001=y
+# CONFIG_HISILICON_ERRATUM_161600802 is not set
+# CONFIG_QCOM_FALKOR_ERRATUM_1003 is not set
+# CONFIG_QCOM_FALKOR_ERRATUM_1009 is not set
+# CONFIG_QCOM_QDF2400_ERRATUM_0065 is not set
+# CONFIG_QCOM_FALKOR_ERRATUM_E1041 is not set
+CONFIG_SOCIONEXT_SYNQUACER_PREITS=y
+# end of ARM errata workarounds via the alternatives framework
+
+# CONFIG_ARM64_4K_PAGES is not set
+# CONFIG_ARM64_16K_PAGES is not set
+CONFIG_ARM64_64K_PAGES=y
+# CONFIG_ARM64_VA_BITS_42 is not set
+CONFIG_ARM64_VA_BITS_48=y
+# CONFIG_ARM64_VA_BITS_52 is not set
+CONFIG_ARM64_VA_BITS=48
+CONFIG_ARM64_PA_BITS_48=y
+# CONFIG_ARM64_PA_BITS_52 is not set
+CONFIG_ARM64_PA_BITS=48
+# CONFIG_CPU_BIG_ENDIAN is not set
+CONFIG_CPU_LITTLE_ENDIAN=y
+CONFIG_SCHED_MC=y
+CONFIG_SCHED_SMT=y
+CONFIG_NR_CPUS=256
+CONFIG_HOTPLUG_CPU=y
+# CONFIG_NUMA is not set
+CONFIG_HOLES_IN_ZONE=y
+# CONFIG_HZ_100 is not set
+CONFIG_HZ_250=y
+# CONFIG_HZ_300 is not set
+# CONFIG_HZ_1000 is not set
+CONFIG_HZ=250
+CONFIG_SCHED_HRTICK=y
+CONFIG_ARCH_SUPPORTS_DEBUG_PAGEALLOC=y
+CONFIG_ARCH_SPARSEMEM_ENABLE=y
+CONFIG_ARCH_SPARSEMEM_DEFAULT=y
+CONFIG_ARCH_SELECT_MEMORY_MODEL=y
+CONFIG_ARCH_FLATMEM_ENABLE=y
+CONFIG_HAVE_ARCH_PFN_VALID=y
+CONFIG_HW_PERF_EVENTS=y
+CONFIG_SYS_SUPPORTS_HUGETLBFS=y
+CONFIG_ARCH_HAS_CACHE_LINE_SIZE=y
+CONFIG_ARCH_ENABLE_SPLIT_PMD_PTLOCK=y
+# CONFIG_PARAVIRT is not set
+# CONFIG_PARAVIRT_TIME_ACCOUNTING is not set
+CONFIG_KEXEC=y
+CONFIG_KEXEC_FILE=y
+# CONFIG_KEXEC_SIG is not set
+# CONFIG_CRASH_DUMP is not set
+# CONFIG_XEN is not set
+CONFIG_FORCE_MAX_ZONEORDER=14
+CONFIG_UNMAP_KERNEL_AT_EL0=y
+CONFIG_RODATA_FULL_DEFAULT_ENABLED=y
+# CONFIG_ARM64_SW_TTBR0_PAN is not set
+CONFIG_ARM64_TAGGED_ADDR_ABI=y
+CONFIG_COMPAT=y
+CONFIG_KUSER_HELPERS=y
+# CONFIG_ARMV8_DEPRECATED is not set
+
+#
+# ARMv8.1 architectural features
+#
+CONFIG_ARM64_HW_AFDBM=y
+CONFIG_ARM64_PAN=y
+CONFIG_AS_HAS_LSE_ATOMICS=y
+CONFIG_ARM64_LSE_ATOMICS=y
+CONFIG_ARM64_USE_LSE_ATOMICS=y
+CONFIG_ARM64_VHE=y
+# end of ARMv8.1 architectural features
+
+#
+# ARMv8.2 architectural features
+#
+CONFIG_ARM64_UAO=y
+# CONFIG_ARM64_PMEM is not set
+CONFIG_ARM64_RAS_EXTN=y
+CONFIG_ARM64_CNP=y
+# end of ARMv8.2 architectural features
+
+#
+# ARMv8.3 architectural features
+#
+CONFIG_ARM64_PTR_AUTH=y
+CONFIG_CC_HAS_BRANCH_PROT_PAC_RET=y
+CONFIG_CC_HAS_SIGN_RETURN_ADDRESS=y
+CONFIG_AS_HAS_PAC=y
+CONFIG_AS_HAS_CFI_NEGATE_RA_STATE=y
+# end of ARMv8.3 architectural features
+
+#
+# ARMv8.4 architectural features
+#
+CONFIG_ARM64_AMU_EXTN=y
+CONFIG_AS_HAS_ARMV8_4=y
+CONFIG_ARM64_TLB_RANGE=y
+# end of ARMv8.4 architectural features
+
+#
+# ARMv8.5 architectural features
+#
+CONFIG_ARM64_BTI=y
+CONFIG_CC_HAS_BRANCH_PROT_PAC_RET_BTI=y
+CONFIG_ARM64_E0PD=y
+CONFIG_ARCH_RANDOM=y
+CONFIG_ARM64_AS_HAS_MTE=y
+CONFIG_ARM64_MTE=y
+# end of ARMv8.5 architectural features
+
+CONFIG_ARM64_SVE=y
+CONFIG_ARM64_MODULE_PLTS=y
+# CONFIG_ARM64_PSEUDO_NMI is not set
+CONFIG_RELOCATABLE=y
+CONFIG_RANDOMIZE_BASE=y
+CONFIG_RANDOMIZE_MODULE_REGION_FULL=y
+CONFIG_CC_HAVE_STACKPROTECTOR_SYSREG=y
+CONFIG_STACKPROTECTOR_PER_TASK=y
+# end of Kernel Features
+
+#
+# Boot options
+#
+CONFIG_CMDLINE=""
+# CONFIG_EFI is not set
+# end of Boot options
+
+CONFIG_SYSVIPC_COMPAT=y
+CONFIG_ARCH_ENABLE_HUGEPAGE_MIGRATION=y
+CONFIG_ARCH_ENABLE_THP_MIGRATION=y
+
+#
+# Power management options
+#
+CONFIG_SUSPEND=y
+CONFIG_SUSPEND_FREEZER=y
+# CONFIG_SUSPEND_SKIP_SYNC is not set
+CONFIG_HIBERNATE_CALLBACKS=y
+CONFIG_HIBERNATION=y
+CONFIG_HIBERNATION_SNAPSHOT_DEV=y
+CONFIG_PM_STD_PARTITION=""
+CONFIG_PM_SLEEP=y
+CONFIG_PM_SLEEP_SMP=y
+# CONFIG_PM_AUTOSLEEP is not set
+# CONFIG_PM_WAKELOCKS is not set
+CONFIG_PM=y
+# CONFIG_PM_DEBUG is not set
+CONFIG_PM_CLK=y
+CONFIG_PM_GENERIC_DOMAINS=y
+CONFIG_WQ_POWER_EFFICIENT_DEFAULT=y
+CONFIG_PM_GENERIC_DOMAINS_SLEEP=y
+CONFIG_PM_GENERIC_DOMAINS_OF=y
+CONFIG_CPU_PM=y
+CONFIG_ENERGY_MODEL=y
+CONFIG_ARCH_HIBERNATION_POSSIBLE=y
+CONFIG_ARCH_HIBERNATION_HEADER=y
+CONFIG_ARCH_SUSPEND_POSSIBLE=y
+# end of Power management options
+
+#
+# CPU Power Management
+#
+
+#
+# CPU Idle
+#
+# CONFIG_CPU_IDLE is not set
+# end of CPU Idle
+
+#
+# CPU Frequency scaling
+#
+CONFIG_CPU_FREQ=y
+CONFIG_CPU_FREQ_GOV_ATTR_SET=y
+CONFIG_CPU_FREQ_GOV_COMMON=y
+CONFIG_CPU_FREQ_STAT=y
+# CONFIG_CPU_FREQ_DEFAULT_GOV_PERFORMANCE is not set
+# CONFIG_CPU_FREQ_DEFAULT_GOV_POWERSAVE is not set
+# CONFIG_CPU_FREQ_DEFAULT_GOV_USERSPACE is not set
+# CONFIG_CPU_FREQ_DEFAULT_GOV_ONDEMAND is not set
+# CONFIG_CPU_FREQ_DEFAULT_GOV_CONSERVATIVE is not set
+CONFIG_CPU_FREQ_DEFAULT_GOV_SCHEDUTIL=y
+CONFIG_CPU_FREQ_GOV_PERFORMANCE=y
+CONFIG_CPU_FREQ_GOV_POWERSAVE=m
+CONFIG_CPU_FREQ_GOV_USERSPACE=y
+CONFIG_CPU_FREQ_GOV_ONDEMAND=y
+CONFIG_CPU_FREQ_GOV_CONSERVATIVE=m
+CONFIG_CPU_FREQ_GOV_SCHEDUTIL=y
+
+#
+# CPU frequency scaling drivers
+#
+CONFIG_CPUFREQ_DT=y
+CONFIG_CPUFREQ_DT_PLATDEV=y
+# end of CPU Frequency scaling
+# end of CPU Power Management
+
+#
+# Firmware Drivers
+#
+# CONFIG_ARM_SCMI_PROTOCOL is not set
+# CONFIG_ARM_SCPI_PROTOCOL is not set
+# CONFIG_ARM_SDE_INTERFACE is not set
+# CONFIG_FIRMWARE_MEMMAP is not set
+# CONFIG_FW_CFG_SYSFS is not set
+CONFIG_TI_SCI_PROTOCOL=y
+# CONFIG_GOOGLE_FIRMWARE is not set
+CONFIG_ARM_PSCI_FW=y
+CONFIG_HAVE_ARM_SMCCC=y
+CONFIG_HAVE_ARM_SMCCC_DISCOVERY=y
+CONFIG_ARM_SMCCC_SOC_ID=y
+
+#
+# Tegra firmware driver
+#
+# end of Tegra firmware driver
+# end of Firmware Drivers
+
+CONFIG_IRQ_BYPASS_MANAGER=y
+CONFIG_VIRTUALIZATION=y
+# CONFIG_KVM is not set
+CONFIG_ARM64_CRYPTO=y
+CONFIG_CRYPTO_SHA256_ARM64=y
+CONFIG_CRYPTO_SHA512_ARM64=m
+CONFIG_CRYPTO_SHA1_ARM64_CE=y
+CONFIG_CRYPTO_SHA2_ARM64_CE=y
+CONFIG_CRYPTO_SHA512_ARM64_CE=m
+CONFIG_CRYPTO_SHA3_ARM64=m
+CONFIG_CRYPTO_SM3_ARM64_CE=m
+# CONFIG_CRYPTO_SM4_ARM64_CE is not set
+CONFIG_CRYPTO_GHASH_ARM64_CE=y
+CONFIG_CRYPTO_CRCT10DIF_ARM64_CE=m
+CONFIG_CRYPTO_AES_ARM64=y
+CONFIG_CRYPTO_AES_ARM64_CE=y
+CONFIG_CRYPTO_AES_ARM64_CE_CCM=y
+CONFIG_CRYPTO_AES_ARM64_CE_BLK=y
+CONFIG_CRYPTO_AES_ARM64_NEON_BLK=m
+CONFIG_CRYPTO_CHACHA20_NEON=m
+# CONFIG_CRYPTO_POLY1305_NEON is not set
+# CONFIG_CRYPTO_NHPOLY1305_NEON is not set
+CONFIG_CRYPTO_AES_ARM64_BS=m
+
+#
+# General architecture-dependent options
+#
+CONFIG_CRASH_CORE=y
+CONFIG_KEXEC_CORE=y
+CONFIG_SET_FS=y
+# CONFIG_KPROBES is not set
+CONFIG_JUMP_LABEL=y
+# CONFIG_STATIC_KEYS_SELFTEST is not set
+CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS=y
+CONFIG_HAVE_KPROBES=y
+CONFIG_HAVE_KRETPROBES=y
+CONFIG_HAVE_FUNCTION_ERROR_INJECTION=y
+CONFIG_HAVE_NMI=y
+CONFIG_HAVE_ARCH_TRACEHOOK=y
+CONFIG_HAVE_DMA_CONTIGUOUS=y
+CONFIG_GENERIC_SMP_IDLE_THREAD=y
+CONFIG_GENERIC_IDLE_POLL_SETUP=y
+CONFIG_ARCH_HAS_FORTIFY_SOURCE=y
+CONFIG_ARCH_HAS_KEEPINITRD=y
+CONFIG_ARCH_HAS_SET_MEMORY=y
+CONFIG_ARCH_HAS_SET_DIRECT_MAP=y
+CONFIG_HAVE_ARCH_THREAD_STRUCT_WHITELIST=y
+CONFIG_HAVE_ASM_MODVERSIONS=y
+CONFIG_HAVE_REGS_AND_STACK_ACCESS_API=y
+CONFIG_HAVE_RSEQ=y
+CONFIG_HAVE_FUNCTION_ARG_ACCESS_API=y
+CONFIG_HAVE_HW_BREAKPOINT=y
+CONFIG_HAVE_PERF_REGS=y
+CONFIG_HAVE_PERF_USER_STACK_DUMP=y
+CONFIG_HAVE_ARCH_JUMP_LABEL=y
+CONFIG_HAVE_ARCH_JUMP_LABEL_RELATIVE=y
+CONFIG_MMU_GATHER_TABLE_FREE=y
+CONFIG_MMU_GATHER_RCU_TABLE_FREE=y
+CONFIG_ARCH_HAVE_NMI_SAFE_CMPXCHG=y
+CONFIG_HAVE_ALIGNED_STRUCT_PAGE=y
+CONFIG_HAVE_CMPXCHG_LOCAL=y
+CONFIG_HAVE_CMPXCHG_DOUBLE=y
+CONFIG_ARCH_WANT_COMPAT_IPC_PARSE_VERSION=y
+CONFIG_HAVE_ARCH_SECCOMP=y
+CONFIG_HAVE_ARCH_SECCOMP_FILTER=y
+CONFIG_SECCOMP=y
+CONFIG_SECCOMP_FILTER=y
+CONFIG_HAVE_ARCH_STACKLEAK=y
+CONFIG_HAVE_STACKPROTECTOR=y
+CONFIG_STACKPROTECTOR=y
+CONFIG_STACKPROTECTOR_STRONG=y
+CONFIG_HAVE_CONTEXT_TRACKING=y
+CONFIG_HAVE_VIRT_CPU_ACCOUNTING_GEN=y
+CONFIG_HAVE_IRQ_TIME_ACCOUNTING=y
+CONFIG_HAVE_MOVE_PMD=y
+CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE=y
+CONFIG_HAVE_ARCH_HUGE_VMAP=y
+CONFIG_HAVE_MOD_ARCH_SPECIFIC=y
+CONFIG_MODULES_USE_ELF_RELA=y
+CONFIG_ARCH_HAS_ELF_RANDOMIZE=y
+CONFIG_HAVE_ARCH_MMAP_RND_BITS=y
+CONFIG_ARCH_MMAP_RND_BITS=14
+CONFIG_HAVE_ARCH_MMAP_RND_COMPAT_BITS=y
+CONFIG_ARCH_MMAP_RND_COMPAT_BITS=7
+CONFIG_ARCH_WANT_DEFAULT_TOPDOWN_MMAP_LAYOUT=y
+CONFIG_CLONE_BACKWARDS=y
+CONFIG_OLD_SIGSUSPEND3=y
+CONFIG_COMPAT_OLD_SIGACTION=y
+CONFIG_COMPAT_32BIT_TIME=y
+CONFIG_HAVE_ARCH_VMAP_STACK=y
+CONFIG_VMAP_STACK=y
+CONFIG_ARCH_HAS_STRICT_KERNEL_RWX=y
+CONFIG_STRICT_KERNEL_RWX=y
+CONFIG_ARCH_HAS_STRICT_MODULE_RWX=y
+CONFIG_STRICT_MODULE_RWX=y
+CONFIG_HAVE_ARCH_COMPILER_H=y
+CONFIG_HAVE_ARCH_PREL32_RELOCATIONS=y
+# CONFIG_LOCK_EVENT_COUNTS is not set
+CONFIG_ARCH_HAS_RELR=y
+CONFIG_ARCH_WANT_LD_ORPHAN_WARN=y
+
+#
+# GCOV-based kernel profiling
+#
+# CONFIG_GCOV_KERNEL is not set
+CONFIG_ARCH_HAS_GCOV_PROFILE_ALL=y
+# end of GCOV-based kernel profiling
+
+CONFIG_HAVE_GCC_PLUGINS=y
+# end of General architecture-dependent options
+
+CONFIG_RT_MUTEXES=y
+CONFIG_BASE_SMALL=0
+CONFIG_MODULES=y
+CONFIG_MODULE_FORCE_LOAD=y
+CONFIG_MODULE_UNLOAD=y
+CONFIG_MODULE_FORCE_UNLOAD=y
+CONFIG_MODVERSIONS=y
+CONFIG_ASM_MODVERSIONS=y
+CONFIG_MODULE_SRCVERSION_ALL=y
+# CONFIG_MODULE_SIG is not set
+# CONFIG_MODULE_COMPRESS is not set
+# CONFIG_MODULE_ALLOW_MISSING_NAMESPACE_IMPORTS is not set
+# CONFIG_UNUSED_SYMBOLS is not set
+# CONFIG_TRIM_UNUSED_KSYMS is not set
+CONFIG_MODULES_TREE_LOOKUP=y
+CONFIG_BLOCK=y
+CONFIG_BLK_SCSI_REQUEST=y
+CONFIG_BLK_DEV_BSG=y
+CONFIG_BLK_DEV_BSGLIB=y
+CONFIG_BLK_DEV_INTEGRITY=y
+CONFIG_BLK_DEV_INTEGRITY_T10=y
+# CONFIG_BLK_DEV_ZONED is not set
+# CONFIG_BLK_DEV_THROTTLING is not set
+# CONFIG_BLK_CMDLINE_PARSER is not set
+# CONFIG_BLK_WBT is not set
+# CONFIG_BLK_CGROUP_IOLATENCY is not set
+# CONFIG_BLK_CGROUP_IOCOST is not set
+CONFIG_BLK_DEBUG_FS=y
+# CONFIG_BLK_SED_OPAL is not set
+# CONFIG_BLK_INLINE_ENCRYPTION is not set
+
+#
+# Partition Types
+#
+# CONFIG_PARTITION_ADVANCED is not set
+CONFIG_MSDOS_PARTITION=y
+CONFIG_EFI_PARTITION=y
+# end of Partition Types
+
+CONFIG_BLOCK_COMPAT=y
+CONFIG_BLK_MQ_PCI=y
+CONFIG_BLK_MQ_VIRTIO=y
+CONFIG_BLK_PM=y
+
+#
+# IO Schedulers
+#
+CONFIG_MQ_IOSCHED_DEADLINE=y
+CONFIG_MQ_IOSCHED_KYBER=y
+# CONFIG_IOSCHED_BFQ is not set
+# end of IO Schedulers
+
+CONFIG_ASN1=y
+CONFIG_UNINLINE_SPIN_UNLOCK=y
+CONFIG_ARCH_SUPPORTS_ATOMIC_RMW=y
+CONFIG_MUTEX_SPIN_ON_OWNER=y
+CONFIG_RWSEM_SPIN_ON_OWNER=y
+CONFIG_LOCK_SPIN_ON_OWNER=y
+CONFIG_ARCH_USE_QUEUED_SPINLOCKS=y
+CONFIG_QUEUED_SPINLOCKS=y
+CONFIG_ARCH_USE_QUEUED_RWLOCKS=y
+CONFIG_QUEUED_RWLOCKS=y
+CONFIG_ARCH_HAS_NON_OVERLAPPING_ADDRESS_SPACE=y
+CONFIG_ARCH_HAS_SYSCALL_WRAPPER=y
+CONFIG_FREEZER=y
+
+#
+# Executable file formats
+#
+CONFIG_BINFMT_ELF=y
+CONFIG_COMPAT_BINFMT_ELF=y
+CONFIG_ARCH_BINFMT_ELF_STATE=y
+CONFIG_ARCH_HAVE_ELF_PROT=y
+CONFIG_ARCH_USE_GNU_PROPERTY=y
+CONFIG_ELFCORE=y
+# CONFIG_CORE_DUMP_DEFAULT_ELF_HEADERS is not set
+CONFIG_BINFMT_SCRIPT=y
+# CONFIG_BINFMT_MISC is not set
+CONFIG_COREDUMP=y
+# end of Executable file formats
+
+#
+# Memory Management options
+#
+CONFIG_SELECT_MEMORY_MODEL=y
+# CONFIG_FLATMEM_MANUAL is not set
+CONFIG_SPARSEMEM_MANUAL=y
+CONFIG_SPARSEMEM=y
+CONFIG_SPARSEMEM_EXTREME=y
+CONFIG_SPARSEMEM_VMEMMAP_ENABLE=y
+CONFIG_SPARSEMEM_VMEMMAP=y
+CONFIG_HAVE_FAST_GUP=y
+CONFIG_ARCH_KEEP_MEMBLOCK=y
+CONFIG_MEMORY_ISOLATION=y
+# CONFIG_MEMORY_HOTPLUG is not set
+CONFIG_SPLIT_PTLOCK_CPUS=4
+CONFIG_MEMORY_BALLOON=y
+CONFIG_BALLOON_COMPACTION=y
+CONFIG_COMPACTION=y
+CONFIG_PAGE_REPORTING=y
+CONFIG_MIGRATION=y
+CONFIG_CONTIG_ALLOC=y
+CONFIG_PHYS_ADDR_T_64BIT=y
+CONFIG_BOUNCE=y
+CONFIG_KSM=y
+CONFIG_DEFAULT_MMAP_MIN_ADDR=4096
+CONFIG_ARCH_SUPPORTS_MEMORY_FAILURE=y
+CONFIG_MEMORY_FAILURE=y
+# CONFIG_HWPOISON_INJECT is not set
+CONFIG_TRANSPARENT_HUGEPAGE=y
+CONFIG_TRANSPARENT_HUGEPAGE_ALWAYS=y
+# CONFIG_TRANSPARENT_HUGEPAGE_MADVISE is not set
+# CONFIG_CLEANCACHE is not set
+# CONFIG_FRONTSWAP is not set
+CONFIG_CMA=y
+# CONFIG_CMA_DEBUG is not set
+# CONFIG_CMA_DEBUGFS is not set
+CONFIG_CMA_AREAS=7
+# CONFIG_ZPOOL is not set
+# CONFIG_ZBUD is not set
+# CONFIG_ZSMALLOC is not set
+CONFIG_GENERIC_EARLY_IOREMAP=y
+# CONFIG_DEFERRED_STRUCT_PAGE_INIT is not set
+# CONFIG_IDLE_PAGE_TRACKING is not set
+CONFIG_ARCH_HAS_PTE_DEVMAP=y
+CONFIG_FRAME_VECTOR=y
+CONFIG_ARCH_USES_HIGH_VMA_FLAGS=y
+# CONFIG_PERCPU_STATS is not set
+# CONFIG_GUP_BENCHMARK is not set
+# CONFIG_READ_ONLY_THP_FOR_FS is not set
+CONFIG_ARCH_HAS_PTE_SPECIAL=y
+# end of Memory Management options
+
+CONFIG_NET=y
+CONFIG_NET_INGRESS=y
+CONFIG_NET_EGRESS=y
+CONFIG_SKB_EXTENSIONS=y
+
+#
+# Networking options
+#
+CONFIG_PACKET=y
+# CONFIG_PACKET_DIAG is not set
+CONFIG_UNIX=y
+CONFIG_UNIX_SCM=y
+# CONFIG_UNIX_DIAG is not set
+# CONFIG_TLS is not set
+CONFIG_XFRM=y
+CONFIG_XFRM_ALGO=m
+CONFIG_XFRM_USER=m
+# CONFIG_XFRM_INTERFACE is not set
+# CONFIG_XFRM_SUB_POLICY is not set
+# CONFIG_XFRM_MIGRATE is not set
+# CONFIG_XFRM_STATISTICS is not set
+CONFIG_XFRM_AH=m
+CONFIG_XFRM_ESP=m
+CONFIG_XFRM_IPCOMP=m
+CONFIG_NET_KEY=m
+# CONFIG_NET_KEY_MIGRATE is not set
+CONFIG_INET=y
+CONFIG_IP_MULTICAST=y
+# CONFIG_IP_ADVANCED_ROUTER is not set
+CONFIG_IP_ROUTE_CLASSID=y
+CONFIG_IP_PNP=y
+CONFIG_IP_PNP_DHCP=y
+CONFIG_IP_PNP_BOOTP=y
+# CONFIG_IP_PNP_RARP is not set
+# CONFIG_NET_IPIP is not set
+# CONFIG_NET_IPGRE_DEMUX is not set
+CONFIG_NET_IP_TUNNEL=m
+# CONFIG_IP_MROUTE is not set
+# CONFIG_SYN_COOKIES is not set
+# CONFIG_NET_IPVTI is not set
+# CONFIG_NET_FOU is not set
+# CONFIG_NET_FOU_IP_TUNNELS is not set
+CONFIG_INET_AH=m
+CONFIG_INET_ESP=m
+# CONFIG_INET_ESP_OFFLOAD is not set
+# CONFIG_INET_ESPINTCP is not set
+CONFIG_INET_IPCOMP=m
+CONFIG_INET_XFRM_TUNNEL=m
+CONFIG_INET_TUNNEL=m
+CONFIG_INET_DIAG=y
+CONFIG_INET_TCP_DIAG=y
+# CONFIG_INET_UDP_DIAG is not set
+# CONFIG_INET_RAW_DIAG is not set
+# CONFIG_INET_DIAG_DESTROY is not set
+# CONFIG_TCP_CONG_ADVANCED is not set
+CONFIG_TCP_CONG_CUBIC=y
+CONFIG_DEFAULT_TCP_CONG="cubic"
+# CONFIG_TCP_MD5SIG is not set
+CONFIG_IPV6=m
+# CONFIG_IPV6_ROUTER_PREF is not set
+# CONFIG_IPV6_OPTIMISTIC_DAD is not set
+CONFIG_INET6_AH=m
+# CONFIG_INET6_ESP is not set
+CONFIG_INET6_IPCOMP=m
+# CONFIG_IPV6_MIP6 is not set
+# CONFIG_IPV6_ILA is not set
+CONFIG_INET6_XFRM_TUNNEL=m
+CONFIG_INET6_TUNNEL=m
+# CONFIG_IPV6_VTI is not set
+CONFIG_IPV6_SIT=m
+# CONFIG_IPV6_SIT_6RD is not set
+CONFIG_IPV6_NDISC_NODETYPE=y
+CONFIG_IPV6_TUNNEL=m
+# CONFIG_IPV6_MULTIPLE_TABLES is not set
+# CONFIG_IPV6_MROUTE is not set
+# CONFIG_IPV6_SEG6_LWTUNNEL is not set
+# CONFIG_IPV6_SEG6_HMAC is not set
+# CONFIG_IPV6_RPL_LWTUNNEL is not set
+# CONFIG_NETLABEL is not set
+# CONFIG_MPTCP is not set
+# CONFIG_NETWORK_SECMARK is not set
+CONFIG_NET_PTP_CLASSIFY=y
+# CONFIG_NETWORK_PHY_TIMESTAMPING is not set
+CONFIG_NETFILTER=y
+CONFIG_NETFILTER_ADVANCED=y
+# CONFIG_BRIDGE_NETFILTER is not set
+
+#
+# Core Netfilter Configuration
+#
+CONFIG_NETFILTER_INGRESS=y
+CONFIG_NETFILTER_FAMILY_BRIDGE=y
+CONFIG_NETFILTER_FAMILY_ARP=y
+# CONFIG_NETFILTER_NETLINK_ACCT is not set
+# CONFIG_NETFILTER_NETLINK_QUEUE is not set
+# CONFIG_NETFILTER_NETLINK_LOG is not set
+# CONFIG_NETFILTER_NETLINK_OSF is not set
+CONFIG_NF_CONNTRACK=m
+CONFIG_NF_LOG_COMMON=m
+# CONFIG_NF_LOG_NETDEV is not set
+# CONFIG_NF_CONNTRACK_MARK is not set
+# CONFIG_NF_CONNTRACK_ZONES is not set
+CONFIG_NF_CONNTRACK_PROCFS=y
+CONFIG_NF_CONNTRACK_EVENTS=y
+# CONFIG_NF_CONNTRACK_TIMEOUT is not set
+# CONFIG_NF_CONNTRACK_TIMESTAMP is not set
+# CONFIG_NF_CONNTRACK_LABELS is not set
+CONFIG_NF_CT_PROTO_DCCP=y
+CONFIG_NF_CT_PROTO_SCTP=y
+CONFIG_NF_CT_PROTO_UDPLITE=y
+# CONFIG_NF_CONNTRACK_AMANDA is not set
+# CONFIG_NF_CONNTRACK_FTP is not set
+# CONFIG_NF_CONNTRACK_H323 is not set
+# CONFIG_NF_CONNTRACK_IRC is not set
+# CONFIG_NF_CONNTRACK_NETBIOS_NS is not set
+# CONFIG_NF_CONNTRACK_SNMP is not set
+# CONFIG_NF_CONNTRACK_PPTP is not set
+# CONFIG_NF_CONNTRACK_SANE is not set
+# CONFIG_NF_CONNTRACK_SIP is not set
+# CONFIG_NF_CONNTRACK_TFTP is not set
+# CONFIG_NF_CT_NETLINK is not set
+CONFIG_NF_NAT=m
+CONFIG_NF_NAT_MASQUERADE=y
+# CONFIG_NF_TABLES is not set
+CONFIG_NETFILTER_XTABLES=m
+
+#
+# Xtables combined modules
+#
+CONFIG_NETFILTER_XT_MARK=m
+# CONFIG_NETFILTER_XT_CONNMARK is not set
+
+#
+# Xtables targets
+#
+CONFIG_NETFILTER_XT_TARGET_CHECKSUM=m
+CONFIG_NETFILTER_XT_TARGET_CLASSIFY=m
+# CONFIG_NETFILTER_XT_TARGET_CONNMARK is not set
+# CONFIG_NETFILTER_XT_TARGET_DSCP is not set
+# CONFIG_NETFILTER_XT_TARGET_HL is not set
+# CONFIG_NETFILTER_XT_TARGET_HMARK is not set
+CONFIG_NETFILTER_XT_TARGET_IDLETIMER=m
+# CONFIG_NETFILTER_XT_TARGET_LED is not set
+CONFIG_NETFILTER_XT_TARGET_LOG=m
+CONFIG_NETFILTER_XT_TARGET_MARK=m
+CONFIG_NETFILTER_XT_NAT=m
+# CONFIG_NETFILTER_XT_TARGET_NETMAP is not set
+# CONFIG_NETFILTER_XT_TARGET_NFLOG is not set
+# CONFIG_NETFILTER_XT_TARGET_NFQUEUE is not set
+# CONFIG_NETFILTER_XT_TARGET_RATEEST is not set
+# CONFIG_NETFILTER_XT_TARGET_REDIRECT is not set
+CONFIG_NETFILTER_XT_TARGET_MASQUERADE=m
+# CONFIG_NETFILTER_XT_TARGET_TEE is not set
+# CONFIG_NETFILTER_XT_TARGET_TPROXY is not set
+# CONFIG_NETFILTER_XT_TARGET_TCPMSS is not set
+# CONFIG_NETFILTER_XT_TARGET_TCPOPTSTRIP is not set
+
+#
+# Xtables matches
+#
+CONFIG_NETFILTER_XT_MATCH_ADDRTYPE=m
+# CONFIG_NETFILTER_XT_MATCH_BPF is not set
+# CONFIG_NETFILTER_XT_MATCH_CGROUP is not set
+# CONFIG_NETFILTER_XT_MATCH_CLUSTER is not set
+CONFIG_NETFILTER_XT_MATCH_COMMENT=m
+# CONFIG_NETFILTER_XT_MATCH_CONNBYTES is not set
+# CONFIG_NETFILTER_XT_MATCH_CONNLABEL is not set
+# CONFIG_NETFILTER_XT_MATCH_CONNLIMIT is not set
+# CONFIG_NETFILTER_XT_MATCH_CONNMARK is not set
+CONFIG_NETFILTER_XT_MATCH_CONNTRACK=m
+CONFIG_NETFILTER_XT_MATCH_CPU=m
+# CONFIG_NETFILTER_XT_MATCH_DCCP is not set
+# CONFIG_NETFILTER_XT_MATCH_DEVGROUP is not set
+# CONFIG_NETFILTER_XT_MATCH_DSCP is not set
+# CONFIG_NETFILTER_XT_MATCH_ECN is not set
+# CONFIG_NETFILTER_XT_MATCH_ESP is not set
+# CONFIG_NETFILTER_XT_MATCH_HASHLIMIT is not set
+# CONFIG_NETFILTER_XT_MATCH_HELPER is not set
+# CONFIG_NETFILTER_XT_MATCH_HL is not set
+# CONFIG_NETFILTER_XT_MATCH_IPCOMP is not set
+CONFIG_NETFILTER_XT_MATCH_IPRANGE=m
+# CONFIG_NETFILTER_XT_MATCH_L2TP is not set
+CONFIG_NETFILTER_XT_MATCH_LENGTH=m
+CONFIG_NETFILTER_XT_MATCH_LIMIT=m
+CONFIG_NETFILTER_XT_MATCH_MAC=m
+CONFIG_NETFILTER_XT_MATCH_MARK=m
+CONFIG_NETFILTER_XT_MATCH_MULTIPORT=m
+# CONFIG_NETFILTER_XT_MATCH_NFACCT is not set
+# CONFIG_NETFILTER_XT_MATCH_OSF is not set
+# CONFIG_NETFILTER_XT_MATCH_OWNER is not set
+CONFIG_NETFILTER_XT_MATCH_POLICY=m
+CONFIG_NETFILTER_XT_MATCH_PKTTYPE=m
+# CONFIG_NETFILTER_XT_MATCH_QUOTA is not set
+# CONFIG_NETFILTER_XT_MATCH_RATEEST is not set
+# CONFIG_NETFILTER_XT_MATCH_REALM is not set
+# CONFIG_NETFILTER_XT_MATCH_RECENT is not set
+CONFIG_NETFILTER_XT_MATCH_SCTP=m
+# CONFIG_NETFILTER_XT_MATCH_SOCKET is not set
+# CONFIG_NETFILTER_XT_MATCH_STATE is not set
+# CONFIG_NETFILTER_XT_MATCH_STATISTIC is not set
+# CONFIG_NETFILTER_XT_MATCH_STRING is not set
+# CONFIG_NETFILTER_XT_MATCH_TCPMSS is not set
+# CONFIG_NETFILTER_XT_MATCH_TIME is not set
+# CONFIG_NETFILTER_XT_MATCH_U32 is not set
+# end of Core Netfilter Configuration
+
+# CONFIG_IP_SET is not set
+# CONFIG_IP_VS is not set
+
+#
+# IP: Netfilter Configuration
+#
+CONFIG_NF_DEFRAG_IPV4=m
+# CONFIG_NF_SOCKET_IPV4 is not set
+# CONFIG_NF_TPROXY_IPV4 is not set
+# CONFIG_NF_DUP_IPV4 is not set
+# CONFIG_NF_LOG_ARP is not set
+CONFIG_NF_LOG_IPV4=m
+CONFIG_NF_REJECT_IPV4=m
+CONFIG_IP_NF_IPTABLES=m
+# CONFIG_IP_NF_MATCH_AH is not set
+# CONFIG_IP_NF_MATCH_ECN is not set
+# CONFIG_IP_NF_MATCH_RPFILTER is not set
+# CONFIG_IP_NF_MATCH_TTL is not set
+CONFIG_IP_NF_FILTER=m
+CONFIG_IP_NF_TARGET_REJECT=m
+# CONFIG_IP_NF_TARGET_SYNPROXY is not set
+CONFIG_IP_NF_NAT=m
+CONFIG_IP_NF_TARGET_MASQUERADE=m
+# CONFIG_IP_NF_TARGET_NETMAP is not set
+# CONFIG_IP_NF_TARGET_REDIRECT is not set
+CONFIG_IP_NF_MANGLE=m
+# CONFIG_IP_NF_TARGET_CLUSTERIP is not set
+# CONFIG_IP_NF_TARGET_ECN is not set
+# CONFIG_IP_NF_TARGET_TTL is not set
+# CONFIG_IP_NF_RAW is not set
+# CONFIG_IP_NF_SECURITY is not set
+CONFIG_IP_NF_ARPTABLES=m
+CONFIG_IP_NF_ARPFILTER=m
+CONFIG_IP_NF_ARP_MANGLE=m
+# end of IP: Netfilter Configuration
+
+#
+# IPv6: Netfilter Configuration
+#
+# CONFIG_NF_SOCKET_IPV6 is not set
+# CONFIG_NF_TPROXY_IPV6 is not set
+# CONFIG_NF_DUP_IPV6 is not set
+CONFIG_NF_REJECT_IPV6=m
+CONFIG_NF_LOG_IPV6=m
+CONFIG_IP6_NF_IPTABLES=m
+# CONFIG_IP6_NF_MATCH_AH is not set
+# CONFIG_IP6_NF_MATCH_EUI64 is not set
+# CONFIG_IP6_NF_MATCH_FRAG is not set
+# CONFIG_IP6_NF_MATCH_OPTS is not set
+# CONFIG_IP6_NF_MATCH_HL is not set
+# CONFIG_IP6_NF_MATCH_IPV6HEADER is not set
+# CONFIG_IP6_NF_MATCH_MH is not set
+# CONFIG_IP6_NF_MATCH_RPFILTER is not set
+# CONFIG_IP6_NF_MATCH_RT is not set
+# CONFIG_IP6_NF_MATCH_SRH is not set
+# CONFIG_IP6_NF_TARGET_HL is not set
+CONFIG_IP6_NF_FILTER=m
+CONFIG_IP6_NF_TARGET_REJECT=m
+# CONFIG_IP6_NF_TARGET_SYNPROXY is not set
+CONFIG_IP6_NF_MANGLE=m
+# CONFIG_IP6_NF_RAW is not set
+# CONFIG_IP6_NF_SECURITY is not set
+CONFIG_IP6_NF_NAT=m
+CONFIG_IP6_NF_TARGET_MASQUERADE=m
+# CONFIG_IP6_NF_TARGET_NPT is not set
+# end of IPv6: Netfilter Configuration
+
+CONFIG_NF_DEFRAG_IPV6=m
+# CONFIG_NF_CONNTRACK_BRIDGE is not set
+CONFIG_BRIDGE_NF_EBTABLES=m
+CONFIG_BRIDGE_EBT_BROUTE=m
+CONFIG_BRIDGE_EBT_T_FILTER=m
+CONFIG_BRIDGE_EBT_T_NAT=m
+CONFIG_BRIDGE_EBT_802_3=m
+CONFIG_BRIDGE_EBT_AMONG=m
+CONFIG_BRIDGE_EBT_ARP=m
+CONFIG_BRIDGE_EBT_IP=m
+CONFIG_BRIDGE_EBT_IP6=m
+CONFIG_BRIDGE_EBT_LIMIT=m
+CONFIG_BRIDGE_EBT_MARK=m
+CONFIG_BRIDGE_EBT_PKTTYPE=m
+CONFIG_BRIDGE_EBT_STP=m
+CONFIG_BRIDGE_EBT_VLAN=m
+CONFIG_BRIDGE_EBT_ARPREPLY=m
+CONFIG_BRIDGE_EBT_DNAT=m
+CONFIG_BRIDGE_EBT_MARK_T=m
+CONFIG_BRIDGE_EBT_REDIRECT=m
+CONFIG_BRIDGE_EBT_SNAT=m
+CONFIG_BRIDGE_EBT_LOG=m
+CONFIG_BRIDGE_EBT_NFLOG=m
+# CONFIG_BPFILTER is not set
+# CONFIG_IP_DCCP is not set
+CONFIG_IP_SCTP=m
+# CONFIG_SCTP_DBG_OBJCNT is not set
+CONFIG_SCTP_DEFAULT_COOKIE_HMAC_MD5=y
+# CONFIG_SCTP_DEFAULT_COOKIE_HMAC_SHA1 is not set
+# CONFIG_SCTP_DEFAULT_COOKIE_HMAC_NONE is not set
+CONFIG_SCTP_COOKIE_HMAC_MD5=y
+# CONFIG_SCTP_COOKIE_HMAC_SHA1 is not set
+CONFIG_INET_SCTP_DIAG=m
+# CONFIG_RDS is not set
+# CONFIG_TIPC is not set
+# CONFIG_ATM is not set
+# CONFIG_L2TP is not set
+CONFIG_STP=m
+CONFIG_GARP=m
+CONFIG_MRP=m
+CONFIG_BRIDGE=m
+CONFIG_BRIDGE_IGMP_SNOOPING=y
+CONFIG_BRIDGE_VLAN_FILTERING=y
+# CONFIG_BRIDGE_MRP is not set
+CONFIG_HAVE_NET_DSA=y
+CONFIG_NET_DSA=m
+# CONFIG_NET_DSA_TAG_AR9331 is not set
+# CONFIG_NET_DSA_TAG_BRCM is not set
+# CONFIG_NET_DSA_TAG_BRCM_PREPEND is not set
+# CONFIG_NET_DSA_TAG_GSWIP is not set
+# CONFIG_NET_DSA_TAG_DSA is not set
+# CONFIG_NET_DSA_TAG_EDSA is not set
+# CONFIG_NET_DSA_TAG_MTK is not set
+# CONFIG_NET_DSA_TAG_KSZ is not set
+# CONFIG_NET_DSA_TAG_RTL4_A is not set
+# CONFIG_NET_DSA_TAG_OCELOT is not set
+# CONFIG_NET_DSA_TAG_QCA is not set
+# CONFIG_NET_DSA_TAG_LAN9303 is not set
+# CONFIG_NET_DSA_TAG_SJA1105 is not set
+# CONFIG_NET_DSA_TAG_TRAILER is not set
+CONFIG_VLAN_8021Q=m
+CONFIG_VLAN_8021Q_GVRP=y
+CONFIG_VLAN_8021Q_MVRP=y
+# CONFIG_DECNET is not set
+CONFIG_LLC=m
+# CONFIG_LLC2 is not set
+# CONFIG_ATALK is not set
+# CONFIG_X25 is not set
+# CONFIG_LAPB is not set
+# CONFIG_PHONET is not set
+# CONFIG_6LOWPAN is not set
+# CONFIG_IEEE802154 is not set
+CONFIG_NET_SCHED=y
+
+#
+# Queueing/Scheduling
+#
+CONFIG_NET_SCH_CBQ=m
+CONFIG_NET_SCH_HTB=m
+CONFIG_NET_SCH_HFSC=m
+CONFIG_NET_SCH_PRIO=m
+CONFIG_NET_SCH_MULTIQ=m
+CONFIG_NET_SCH_RED=m
+CONFIG_NET_SCH_SFB=m
+CONFIG_NET_SCH_SFQ=m
+CONFIG_NET_SCH_TEQL=m
+CONFIG_NET_SCH_TBF=m
+CONFIG_NET_SCH_CBS=m
+CONFIG_NET_SCH_ETF=m
+CONFIG_NET_SCH_TAPRIO=m
+CONFIG_NET_SCH_GRED=m
+CONFIG_NET_SCH_DSMARK=m
+CONFIG_NET_SCH_NETEM=m
+CONFIG_NET_SCH_DRR=m
+CONFIG_NET_SCH_MQPRIO=m
+# CONFIG_NET_SCH_SKBPRIO is not set
+CONFIG_NET_SCH_CHOKE=m
+CONFIG_NET_SCH_QFQ=m
+CONFIG_NET_SCH_CODEL=m
+CONFIG_NET_SCH_FQ_CODEL=m
+# CONFIG_NET_SCH_CAKE is not set
+# CONFIG_NET_SCH_FQ is not set
+# CONFIG_NET_SCH_HHF is not set
+# CONFIG_NET_SCH_PIE is not set
+CONFIG_NET_SCH_INGRESS=m
+# CONFIG_NET_SCH_PLUG is not set
+# CONFIG_NET_SCH_ETS is not set
+# CONFIG_NET_SCH_DEFAULT is not set
+
+#
+# Classification
+#
+CONFIG_NET_CLS=y
+CONFIG_NET_CLS_BASIC=m
+CONFIG_NET_CLS_TCINDEX=m
+CONFIG_NET_CLS_ROUTE4=m
+CONFIG_NET_CLS_FW=m
+CONFIG_NET_CLS_U32=m
+# CONFIG_CLS_U32_PERF is not set
+CONFIG_CLS_U32_MARK=y
+CONFIG_NET_CLS_RSVP=m
+CONFIG_NET_CLS_RSVP6=m
+CONFIG_NET_CLS_FLOW=m
+# CONFIG_NET_CLS_CGROUP is not set
+# CONFIG_NET_CLS_BPF is not set
+CONFIG_NET_CLS_FLOWER=m
+# CONFIG_NET_CLS_MATCHALL is not set
+CONFIG_NET_EMATCH=y
+CONFIG_NET_EMATCH_STACK=32
+CONFIG_NET_EMATCH_CMP=m
+CONFIG_NET_EMATCH_NBYTE=m
+CONFIG_NET_EMATCH_U32=m
+CONFIG_NET_EMATCH_META=m
+CONFIG_NET_EMATCH_TEXT=m
+# CONFIG_NET_EMATCH_CANID is not set
+# CONFIG_NET_EMATCH_IPT is not set
+CONFIG_NET_CLS_ACT=y
+CONFIG_NET_ACT_POLICE=m
+CONFIG_NET_ACT_GACT=m
+CONFIG_GACT_PROB=y
+CONFIG_NET_ACT_MIRRED=m
+# CONFIG_NET_ACT_SAMPLE is not set
+CONFIG_NET_ACT_IPT=m
+CONFIG_NET_ACT_NAT=m
+CONFIG_NET_ACT_PEDIT=m
+CONFIG_NET_ACT_SIMP=m
+CONFIG_NET_ACT_SKBEDIT=m
+CONFIG_NET_ACT_CSUM=m
+# CONFIG_NET_ACT_MPLS is not set
+# CONFIG_NET_ACT_VLAN is not set
+# CONFIG_NET_ACT_BPF is not set
+# CONFIG_NET_ACT_SKBMOD is not set
+# CONFIG_NET_ACT_IFE is not set
+# CONFIG_NET_ACT_TUNNEL_KEY is not set
+CONFIG_NET_ACT_GATE=m
+# CONFIG_NET_TC_SKB_EXT is not set
+CONFIG_NET_SCH_FIFO=y
+# CONFIG_DCB is not set
+CONFIG_DNS_RESOLVER=y
+# CONFIG_BATMAN_ADV is not set
+# CONFIG_OPENVSWITCH is not set
+# CONFIG_VSOCKETS is not set
+# CONFIG_NETLINK_DIAG is not set
+# CONFIG_MPLS is not set
+# CONFIG_NET_NSH is not set
+CONFIG_HSR=m
+CONFIG_NET_SWITCHDEV=y
+# CONFIG_NET_L3_MASTER_DEV is not set
+CONFIG_QRTR=m
+CONFIG_QRTR_SMD=m
+CONFIG_QRTR_TUN=m
+# CONFIG_NET_NCSI is not set
+CONFIG_RPS=y
+CONFIG_RFS_ACCEL=y
+CONFIG_XPS=y
+# CONFIG_CGROUP_NET_PRIO is not set
+# CONFIG_CGROUP_NET_CLASSID is not set
+CONFIG_NET_RX_BUSY_POLL=y
+CONFIG_BQL=y
+CONFIG_BPF_JIT=y
+CONFIG_NET_FLOW_LIMIT=y
+
+#
+# Network testing
+#
+# CONFIG_NET_PKTGEN is not set
+# end of Network testing
+# end of Networking options
+
+# CONFIG_HAMRADIO is not set
+CONFIG_CAN=m
+CONFIG_CAN_RAW=m
+CONFIG_CAN_BCM=m
+CONFIG_CAN_GW=m
+# CONFIG_CAN_J1939 is not set
+# CONFIG_CAN_ISOTP is not set
+
+#
+# CAN Device Drivers
+#
+# CONFIG_CAN_VCAN is not set
+# CONFIG_CAN_VXCAN is not set
+# CONFIG_CAN_SLCAN is not set
+CONFIG_CAN_DEV=m
+CONFIG_CAN_CALC_BITTIMING=y
+CONFIG_CAN_FLEXCAN=m
+# CONFIG_CAN_GRCAN is not set
+# CONFIG_CAN_KVASER_PCIEFD is not set
+# CONFIG_CAN_XILINXCAN is not set
+CONFIG_CAN_C_CAN=m
+CONFIG_CAN_C_CAN_PLATFORM=m
+# CONFIG_CAN_C_CAN_PCI is not set
+# CONFIG_CAN_CC770 is not set
+# CONFIG_CAN_IFI_CANFD is not set
+CONFIG_CAN_M_CAN=m
+CONFIG_CAN_M_CAN_PLATFORM=m
+# CONFIG_CAN_M_CAN_TCAN4X5X is not set
+# CONFIG_CAN_PEAK_PCIEFD is not set
+# CONFIG_CAN_SJA1000 is not set
+# CONFIG_CAN_SOFTING is not set
+
+#
+# CAN SPI interfaces
+#
+# CONFIG_CAN_HI311X is not set
+# CONFIG_CAN_MCP251X is not set
+# CONFIG_CAN_MCP251XFD is not set
+# end of CAN SPI interfaces
+
+#
+# CAN USB interfaces
+#
+# CONFIG_CAN_8DEV_USB is not set
+# CONFIG_CAN_EMS_USB is not set
+# CONFIG_CAN_ESD_USB2 is not set
+# CONFIG_CAN_GS_USB is not set
+# CONFIG_CAN_KVASER_USB is not set
+# CONFIG_CAN_MCBA_USB is not set
+# CONFIG_CAN_PEAK_USB is not set
+# CONFIG_CAN_UCAN is not set
+# end of CAN USB interfaces
+
+# CONFIG_CAN_DEBUG_DEVICES is not set
+# end of CAN Device Drivers
+
+CONFIG_BT=m
+CONFIG_BT_BREDR=y
+# CONFIG_BT_RFCOMM is not set
+# CONFIG_BT_BNEP is not set
+CONFIG_BT_HIDP=m
+# CONFIG_BT_HS is not set
+# CONFIG_BT_LE is not set
+CONFIG_BT_LEDS=y
+# CONFIG_BT_MSFTEXT is not set
+# CONFIG_BT_DEBUGFS is not set
+# CONFIG_BT_SELFTEST is not set
+# CONFIG_BT_FEATURE_DEBUG is not set
+
+#
+# Bluetooth device drivers
+#
+CONFIG_BT_INTEL=m
+CONFIG_BT_BCM=m
+CONFIG_BT_RTL=m
+CONFIG_BT_QCA=m
+CONFIG_BT_HCIBTUSB=m
+# CONFIG_BT_HCIBTUSB_AUTOSUSPEND is not set
+CONFIG_BT_HCIBTUSB_BCM=y
+# CONFIG_BT_HCIBTUSB_MTK is not set
+CONFIG_BT_HCIBTUSB_RTL=y
+# CONFIG_BT_HCIBTSDIO is not set
+CONFIG_BT_HCIUART=m
+CONFIG_BT_HCIUART_SERDEV=y
+CONFIG_BT_HCIUART_H4=y
+# CONFIG_BT_HCIUART_NOKIA is not set
+# CONFIG_BT_HCIUART_BCSP is not set
+# CONFIG_BT_HCIUART_ATH3K is not set
+CONFIG_BT_HCIUART_LL=y
+# CONFIG_BT_HCIUART_3WIRE is not set
+# CONFIG_BT_HCIUART_INTEL is not set
+CONFIG_BT_HCIUART_BCM=y
+# CONFIG_BT_HCIUART_RTL is not set
+CONFIG_BT_HCIUART_QCA=y
+# CONFIG_BT_HCIUART_AG6XX is not set
+# CONFIG_BT_HCIUART_MRVL is not set
+# CONFIG_BT_HCIBCM203X is not set
+# CONFIG_BT_HCIBPA10X is not set
+# CONFIG_BT_HCIBFUSB is not set
+# CONFIG_BT_HCIVHCI is not set
+# CONFIG_BT_MRVL is not set
+# CONFIG_BT_ATH3K is not set
+# CONFIG_BT_MTKSDIO is not set
+# CONFIG_BT_MTKUART is not set
+# end of Bluetooth device drivers
+
+# CONFIG_AF_RXRPC is not set
+# CONFIG_AF_KCM is not set
+CONFIG_WIRELESS=y
+CONFIG_CFG80211=m
+CONFIG_NL80211_TESTMODE=y
+# CONFIG_CFG80211_DEVELOPER_WARNINGS is not set
+# CONFIG_CFG80211_CERTIFICATION_ONUS is not set
+CONFIG_CFG80211_REQUIRE_SIGNED_REGDB=y
+CONFIG_CFG80211_USE_KERNEL_REGDB_KEYS=y
+CONFIG_CFG80211_DEFAULT_PS=y
+# CONFIG_CFG80211_DEBUGFS is not set
+CONFIG_CFG80211_CRDA_SUPPORT=y
+# CONFIG_CFG80211_WEXT is not set
+CONFIG_MAC80211=m
+CONFIG_MAC80211_HAS_RC=y
+CONFIG_MAC80211_RC_MINSTREL=y
+CONFIG_MAC80211_RC_DEFAULT_MINSTREL=y
+CONFIG_MAC80211_RC_DEFAULT="minstrel_ht"
+CONFIG_MAC80211_MESH=y
+CONFIG_MAC80211_LEDS=y
+# CONFIG_MAC80211_DEBUGFS is not set
+# CONFIG_MAC80211_MESSAGE_TRACING is not set
+# CONFIG_MAC80211_DEBUG_MENU is not set
+CONFIG_MAC80211_STA_HASH_MAX_SIZE=0
+# CONFIG_WIMAX is not set
+CONFIG_RFKILL=m
+CONFIG_RFKILL_LEDS=y
+# CONFIG_RFKILL_INPUT is not set
+# CONFIG_RFKILL_GPIO is not set
+CONFIG_NET_9P=y
+CONFIG_NET_9P_VIRTIO=y
+# CONFIG_NET_9P_DEBUG is not set
+# CONFIG_CAIF is not set
+# CONFIG_RPMSG_PROTO is not set
+# CONFIG_CEPH_LIB is not set
+CONFIG_NFC=m
+# CONFIG_NFC_DIGITAL is not set
+CONFIG_NFC_NCI=m
+# CONFIG_NFC_NCI_SPI is not set
+# CONFIG_NFC_NCI_UART is not set
+# CONFIG_NFC_HCI is not set
+
+#
+# Near Field Communication (NFC) devices
+#
+# CONFIG_NFC_FDP is not set
+# CONFIG_NFC_PN533_USB is not set
+# CONFIG_NFC_PN533_I2C is not set
+# CONFIG_NFC_PN532_UART is not set
+# CONFIG_NFC_MRVL_USB is not set
+# CONFIG_NFC_ST_NCI_I2C is not set
+# CONFIG_NFC_ST_NCI_SPI is not set
+# CONFIG_NFC_NXP_NCI is not set
+CONFIG_NFC_S3FWRN5=m
+CONFIG_NFC_S3FWRN5_I2C=m
+# end of Near Field Communication (NFC) devices
+
+# CONFIG_PSAMPLE is not set
+# CONFIG_NET_IFE is not set
+# CONFIG_LWTUNNEL is not set
+CONFIG_DST_CACHE=y
+CONFIG_GRO_CELLS=y
+CONFIG_NET_DEVLINK=y
+CONFIG_FAILOVER=y
+CONFIG_ETHTOOL_NETLINK=y
+CONFIG_HAVE_EBPF_JIT=y
+
+#
+# Device Drivers
+#
+CONFIG_ARM_AMBA=y
+CONFIG_HAVE_PCI=y
+CONFIG_PCI=y
+CONFIG_PCI_DOMAINS=y
+CONFIG_PCI_DOMAINS_GENERIC=y
+CONFIG_PCI_SYSCALL=y
+CONFIG_PCIEPORTBUS=y
+# CONFIG_HOTPLUG_PCI_PCIE is not set
+# CONFIG_PCIEAER is not set
+CONFIG_PCIEASPM=y
+CONFIG_PCIEASPM_DEFAULT=y
+# CONFIG_PCIEASPM_POWERSAVE is not set
+# CONFIG_PCIEASPM_POWER_SUPERSAVE is not set
+# CONFIG_PCIEASPM_PERFORMANCE is not set
+CONFIG_PCIE_PME=y
+# CONFIG_PCIE_PTM is not set
+CONFIG_PCI_MSI=y
+CONFIG_PCI_MSI_IRQ_DOMAIN=y
+CONFIG_PCI_QUIRKS=y
+# CONFIG_PCI_DEBUG is not set
+# CONFIG_PCI_REALLOC_ENABLE_AUTO is not set
+# CONFIG_PCI_STUB is not set
+# CONFIG_PCI_PF_STUB is not set
+CONFIG_PCI_ATS=y
+CONFIG_PCI_ECAM=y
+CONFIG_PCI_IOV=y
+# CONFIG_PCI_PRI is not set
+CONFIG_PCI_PASID=y
+# CONFIG_PCIE_BUS_TUNE_OFF is not set
+CONFIG_PCIE_BUS_DEFAULT=y
+# CONFIG_PCIE_BUS_SAFE is not set
+# CONFIG_PCIE_BUS_PERFORMANCE is not set
+# CONFIG_PCIE_BUS_PEER2PEER is not set
+CONFIG_HOTPLUG_PCI=y
+# CONFIG_HOTPLUG_PCI_CPCI is not set
+# CONFIG_HOTPLUG_PCI_SHPC is not set
+
+#
+# PCI controller drivers
+#
+# CONFIG_PCI_FTPCI100 is not set
+CONFIG_PCI_HOST_COMMON=y
+CONFIG_PCI_HOST_GENERIC=y
+# CONFIG_PCIE_XILINX is not set
+# CONFIG_PCI_XGENE is not set
+CONFIG_PCIE_ALTERA=y
+CONFIG_PCIE_ALTERA_MSI=y
+CONFIG_PCI_HOST_THUNDER_PEM=y
+CONFIG_PCI_HOST_THUNDER_ECAM=y
+
+#
+# DesignWare PCI Core Support
+#
+CONFIG_PCIE_DW=y
+CONFIG_PCIE_DW_HOST=y
+CONFIG_PCIE_DW_EP=y
+# CONFIG_PCIE_DW_PLAT_HOST is not set
+# CONFIG_PCIE_DW_PLAT_EP is not set
+CONFIG_PCI_KEYSTONE=y
+CONFIG_PCI_KEYSTONE_HOST=y
+CONFIG_PCI_KEYSTONE_EP=y
+# CONFIG_PCI_HISI is not set
+# CONFIG_PCIE_KIRIN is not set
+# CONFIG_PCI_MESON is not set
+# CONFIG_PCIE_AL is not set
+# end of DesignWare PCI Core Support
+
+#
+# Mobiveil PCIe Core Support
+#
+CONFIG_PCIE_MOBIVEIL=y
+CONFIG_PCIE_MOBIVEIL_HOST=y
+CONFIG_PCIE_LAYERSCAPE_GEN4=y
+# end of Mobiveil PCIe Core Support
+
+#
+# Cadence PCIe controllers support
+#
+CONFIG_PCIE_CADENCE=y
+CONFIG_PCIE_CADENCE_HOST=y
+CONFIG_PCIE_CADENCE_EP=y
+# CONFIG_PCIE_CADENCE_PLAT_HOST is not set
+# CONFIG_PCIE_CADENCE_PLAT_EP is not set
+CONFIG_PCI_J721E=y
+CONFIG_PCI_J721E_HOST=y
+CONFIG_PCI_J721E_EP=y
+# end of Cadence PCIe controllers support
+# end of PCI controller drivers
+
+#
+# PCI Endpoint
+#
+CONFIG_PCI_ENDPOINT=y
+CONFIG_PCI_ENDPOINT_CONFIGFS=y
+CONFIG_PCI_EPF_TEST=y
+CONFIG_PCI_EPF_NTB=y
+# end of PCI Endpoint
+
+#
+# PCI switch controller drivers
+#
+# CONFIG_PCI_SW_SWITCHTEC is not set
+# end of PCI switch controller drivers
+
+# CONFIG_PCCARD is not set
+# CONFIG_RAPIDIO is not set
+
+#
+# Generic Driver Options
+#
+# CONFIG_UEVENT_HELPER is not set
+CONFIG_DEVTMPFS=y
+CONFIG_DEVTMPFS_MOUNT=y
+CONFIG_STANDALONE=y
+CONFIG_PREVENT_FIRMWARE_BUILD=y
+
+#
+# Firmware loader
+#
+CONFIG_FW_LOADER=y
+CONFIG_EXTRA_FIRMWARE=""
+# CONFIG_FW_LOADER_USER_HELPER is not set
+# CONFIG_FW_LOADER_COMPRESS is not set
+CONFIG_FW_CACHE=y
+# end of Firmware loader
+
+CONFIG_WANT_DEV_COREDUMP=y
+CONFIG_ALLOW_DEV_COREDUMP=y
+CONFIG_DEV_COREDUMP=y
+# CONFIG_DEBUG_DRIVER is not set
+# CONFIG_DEBUG_DEVRES is not set
+# CONFIG_DEBUG_TEST_DRIVER_REMOVE is not set
+# CONFIG_TEST_ASYNC_DRIVER_PROBE is not set
+CONFIG_GENERIC_CPU_AUTOPROBE=y
+CONFIG_GENERIC_CPU_VULNERABILITIES=y
+CONFIG_SOC_BUS=y
+CONFIG_REGMAP=y
+CONFIG_REGMAP_I2C=y
+CONFIG_REGMAP_SLIMBUS=m
+CONFIG_REGMAP_SPI=y
+CONFIG_REGMAP_SPMI=m
+CONFIG_REGMAP_MMIO=y
+CONFIG_REGMAP_IRQ=y
+CONFIG_REGMAP_SOUNDWIRE=m
+CONFIG_DMA_SHARED_BUFFER=y
+# CONFIG_DMA_FENCE_TRACE is not set
+CONFIG_GENERIC_ARCH_TOPOLOGY=y
+# end of Generic Driver Options
+
+#
+# Bus devices
+#
+# CONFIG_BRCMSTB_GISB_ARB is not set
+# CONFIG_MOXTET is not set
+CONFIG_SIMPLE_PM_BUS=y
+# CONFIG_VEXPRESS_CONFIG is not set
+# CONFIG_MHI_BUS is not set
+# end of Bus devices
+
+# CONFIG_CONNECTOR is not set
+# CONFIG_GNSS is not set
+CONFIG_MTD=y
+CONFIG_MTD_TESTS=m
+
+#
+# Partition parsers
+#
+# CONFIG_MTD_AR7_PARTS is not set
+CONFIG_MTD_CMDLINE_PARTS=y
+CONFIG_MTD_OF_PARTS=y
+# CONFIG_MTD_AFS_PARTS is not set
+# CONFIG_MTD_REDBOOT_PARTS is not set
+# end of Partition parsers
+
+#
+# User Modules And Translation Layers
+#
+CONFIG_MTD_BLKDEVS=y
+CONFIG_MTD_BLOCK=y
+# CONFIG_FTL is not set
+# CONFIG_NFTL is not set
+# CONFIG_INFTL is not set
+# CONFIG_RFD_FTL is not set
+# CONFIG_SSFDC is not set
+# CONFIG_SM_FTL is not set
+# CONFIG_MTD_OOPS is not set
+# CONFIG_MTD_SWAP is not set
+# CONFIG_MTD_PARTITIONED_MASTER is not set
+
+#
+# RAM/ROM/Flash chip drivers
+#
+CONFIG_MTD_CFI=y
+# CONFIG_MTD_JEDECPROBE is not set
+CONFIG_MTD_GEN_PROBE=y
+CONFIG_MTD_CFI_ADV_OPTIONS=y
+CONFIG_MTD_CFI_NOSWAP=y
+# CONFIG_MTD_CFI_BE_BYTE_SWAP is not set
+# CONFIG_MTD_CFI_LE_BYTE_SWAP is not set
+# CONFIG_MTD_CFI_GEOMETRY is not set
+CONFIG_MTD_MAP_BANK_WIDTH_1=y
+CONFIG_MTD_MAP_BANK_WIDTH_2=y
+CONFIG_MTD_MAP_BANK_WIDTH_4=y
+CONFIG_MTD_CFI_I1=y
+CONFIG_MTD_CFI_I2=y
+# CONFIG_MTD_OTP is not set
+CONFIG_MTD_CFI_INTELEXT=y
+CONFIG_MTD_CFI_AMDSTD=y
+CONFIG_MTD_CFI_STAA=y
+CONFIG_MTD_CFI_UTIL=y
+# CONFIG_MTD_RAM is not set
+# CONFIG_MTD_ROM is not set
+# CONFIG_MTD_ABSENT is not set
+# end of RAM/ROM/Flash chip drivers
+
+#
+# Mapping drivers for chip access
+#
+CONFIG_MTD_COMPLEX_MAPPINGS=y
+CONFIG_MTD_PHYSMAP=y
+# CONFIG_MTD_PHYSMAP_COMPAT is not set
+CONFIG_MTD_PHYSMAP_OF=y
+# CONFIG_MTD_PHYSMAP_VERSATILE is not set
+# CONFIG_MTD_PHYSMAP_GEMINI is not set
+# CONFIG_MTD_PHYSMAP_GPIO_ADDR is not set
+# CONFIG_MTD_PCI is not set
+# CONFIG_MTD_INTEL_VR_NOR is not set
+# CONFIG_MTD_PLATRAM is not set
+# end of Mapping drivers for chip access
+
+#
+# Self-contained MTD device drivers
+#
+# CONFIG_MTD_PMC551 is not set
+CONFIG_MTD_DATAFLASH=y
+# CONFIG_MTD_DATAFLASH_WRITE_VERIFY is not set
+# CONFIG_MTD_DATAFLASH_OTP is not set
+# CONFIG_MTD_MCHP23K256 is not set
+CONFIG_MTD_SST25L=y
+# CONFIG_MTD_SLRAM is not set
+# CONFIG_MTD_PHRAM is not set
+# CONFIG_MTD_MTDRAM is not set
+# CONFIG_MTD_BLOCK2MTD is not set
+
+#
+# Disk-On-Chip Device Drivers
+#
+# CONFIG_MTD_DOCG3 is not set
+# end of Self-contained MTD device drivers
+
+#
+# NAND
+#
+CONFIG_MTD_NAND_CORE=y
+# CONFIG_MTD_ONENAND is not set
+CONFIG_MTD_NAND_ECC_SW_HAMMING=y
+# CONFIG_MTD_NAND_ECC_SW_HAMMING_SMC is not set
+CONFIG_MTD_RAW_NAND=y
+# CONFIG_MTD_NAND_ECC_SW_BCH is not set
+
+#
+# Raw/parallel NAND flash controllers
+#
+CONFIG_MTD_NAND_DENALI=y
+# CONFIG_MTD_NAND_DENALI_PCI is not set
+CONFIG_MTD_NAND_DENALI_DT=y
+# CONFIG_MTD_NAND_CAFE is not set
+# CONFIG_MTD_NAND_BRCMNAND is not set
+# CONFIG_MTD_NAND_MXIC is not set
+# CONFIG_MTD_NAND_GPIO is not set
+# CONFIG_MTD_NAND_PLATFORM is not set
+# CONFIG_MTD_NAND_CADENCE is not set
+# CONFIG_MTD_NAND_ARASAN is not set
+
+#
+# Misc
+#
+# CONFIG_MTD_NAND_NANDSIM is not set
+# CONFIG_MTD_NAND_RICOH is not set
+# CONFIG_MTD_NAND_DISKONCHIP is not set
+# CONFIG_MTD_SPI_NAND is not set
+
+#
+# ECC engine support
+#
+CONFIG_MTD_NAND_ECC=y
+# end of ECC engine support
+# end of NAND
+
+#
+# LPDDR & LPDDR2 PCM memory drivers
+#
+# CONFIG_MTD_LPDDR is not set
+# end of LPDDR & LPDDR2 PCM memory drivers
+
+CONFIG_MTD_SPI_NOR=y
+# CONFIG_MTD_SPI_NOR_USE_4K_SECTORS is not set
+CONFIG_MTD_UBI=y
+CONFIG_MTD_UBI_WL_THRESHOLD=4096
+CONFIG_MTD_UBI_BEB_LIMIT=20
+# CONFIG_MTD_UBI_FASTMAP is not set
+# CONFIG_MTD_UBI_GLUEBI is not set
+# CONFIG_MTD_UBI_BLOCK is not set
+CONFIG_MTD_HYPERBUS=y
+CONFIG_HBMC_AM654=y
+CONFIG_DTC=y
+CONFIG_OF=y
+# CONFIG_OF_UNITTEST is not set
+CONFIG_OF_FLATTREE=y
+CONFIG_OF_EARLY_FLATTREE=y
+CONFIG_OF_KOBJ=y
+CONFIG_OF_DYNAMIC=y
+CONFIG_OF_ADDRESS=y
+CONFIG_OF_IRQ=y
+CONFIG_OF_NET=y
+CONFIG_OF_RESERVED_MEM=y
+CONFIG_OF_RESOLVE=y
+CONFIG_OF_OVERLAY=y
+# CONFIG_PARPORT is not set
+CONFIG_BLK_DEV=y
+# CONFIG_BLK_DEV_NULL_BLK is not set
+# CONFIG_BLK_DEV_PCIESSD_MTIP32XX is not set
+# CONFIG_BLK_DEV_UMEM is not set
+CONFIG_BLK_DEV_LOOP=y
+CONFIG_BLK_DEV_LOOP_MIN_COUNT=8
+# CONFIG_BLK_DEV_CRYPTOLOOP is not set
+# CONFIG_BLK_DEV_DRBD is not set
+CONFIG_BLK_DEV_NBD=m
+# CONFIG_BLK_DEV_SKD is not set
+# CONFIG_BLK_DEV_SX8 is not set
+CONFIG_BLK_DEV_RAM=y
+CONFIG_BLK_DEV_RAM_COUNT=16
+CONFIG_BLK_DEV_RAM_SIZE=4096
+# CONFIG_CDROM_PKTCDVD is not set
+# CONFIG_ATA_OVER_ETH is not set
+CONFIG_VIRTIO_BLK=y
+# CONFIG_BLK_DEV_RBD is not set
+# CONFIG_BLK_DEV_RSXX is not set
+
+#
+# NVME Support
+#
+CONFIG_NVME_CORE=m
+CONFIG_BLK_DEV_NVME=m
+# CONFIG_NVME_MULTIPATH is not set
+# CONFIG_NVME_HWMON is not set
+# CONFIG_NVME_FC is not set
+# CONFIG_NVME_TCP is not set
+# CONFIG_NVME_TARGET is not set
+# end of NVME Support
+
+#
+# Misc devices
+#
+# CONFIG_AD525X_DPOT is not set
+# CONFIG_DUMMY_IRQ is not set
+# CONFIG_PHANTOM is not set
+# CONFIG_TIFM_CORE is not set
+# CONFIG_ICS932S401 is not set
+# CONFIG_ENCLOSURE_SERVICES is not set
+# CONFIG_HP_ILO is not set
+# CONFIG_APDS9802ALS is not set
+# CONFIG_ISL29003 is not set
+# CONFIG_ISL29020 is not set
+# CONFIG_SENSORS_TSL2550 is not set
+# CONFIG_SENSORS_BH1770 is not set
+# CONFIG_SENSORS_APDS990X is not set
+# CONFIG_HMC6352 is not set
+# CONFIG_DS1682 is not set
+# CONFIG_LATTICE_ECP3_CONFIG is not set
+CONFIG_SRAM=y
+CONFIG_PCI_ENDPOINT_TEST=m
+# CONFIG_XILINX_SDFEC is not set
+# CONFIG_PVPANIC is not set
+# CONFIG_HISI_HIKEY_USB is not set
+# CONFIG_C2PORT is not set
+
+#
+# EEPROM support
+#
+CONFIG_EEPROM_AT24=m
+CONFIG_EEPROM_AT25=m
+# CONFIG_EEPROM_LEGACY is not set
+# CONFIG_EEPROM_MAX6875 is not set
+# CONFIG_EEPROM_93CX6 is not set
+CONFIG_EEPROM_93XX46=m
+# CONFIG_EEPROM_IDT_89HPESX is not set
+# CONFIG_EEPROM_EE1004 is not set
+# end of EEPROM support
+
+# CONFIG_CB710_CORE is not set
+
+#
+# Texas Instruments shared transport line discipline
+#
+# CONFIG_TI_ST is not set
+# end of Texas Instruments shared transport line discipline
+
+# CONFIG_SENSORS_LIS3_SPI is not set
+# CONFIG_SENSORS_LIS3_I2C is not set
+# CONFIG_ALTERA_STAPL is not set
+# CONFIG_GENWQE is not set
+# CONFIG_ECHO is not set
+# CONFIG_MISC_ALCOR_PCI is not set
+# CONFIG_MISC_RTSX_PCI is not set
+# CONFIG_MISC_RTSX_USB is not set
+# CONFIG_HABANA_AI is not set
+CONFIG_UACCE=m
+# end of Misc devices
+
+#
+# SCSI device support
+#
+CONFIG_SCSI_MOD=y
+CONFIG_RAID_ATTRS=m
+CONFIG_SCSI=y
+CONFIG_SCSI_DMA=y
+# CONFIG_SCSI_PROC_FS is not set
+
+#
+# SCSI support type (disk, tape, CD-ROM)
+#
+CONFIG_BLK_DEV_SD=y
+# CONFIG_CHR_DEV_ST is not set
+# CONFIG_BLK_DEV_SR is not set
+# CONFIG_CHR_DEV_SG is not set
+# CONFIG_CHR_DEV_SCH is not set
+# CONFIG_SCSI_CONSTANTS is not set
+# CONFIG_SCSI_LOGGING is not set
+# CONFIG_SCSI_SCAN_ASYNC is not set
+
+#
+# SCSI Transports
+#
+# CONFIG_SCSI_SPI_ATTRS is not set
+# CONFIG_SCSI_FC_ATTRS is not set
+# CONFIG_SCSI_ISCSI_ATTRS is not set
+CONFIG_SCSI_SAS_ATTRS=m
+CONFIG_SCSI_SAS_LIBSAS=m
+CONFIG_SCSI_SAS_ATA=y
+CONFIG_SCSI_SAS_HOST_SMP=y
+# CONFIG_SCSI_SRP_ATTRS is not set
+# end of SCSI Transports
+
+CONFIG_SCSI_LOWLEVEL=y
+# CONFIG_ISCSI_TCP is not set
+# CONFIG_ISCSI_BOOT_SYSFS is not set
+# CONFIG_SCSI_CXGB3_ISCSI is not set
+# CONFIG_SCSI_CXGB4_ISCSI is not set
+# CONFIG_SCSI_BNX2_ISCSI is not set
+# CONFIG_BE2ISCSI is not set
+# CONFIG_BLK_DEV_3W_XXXX_RAID is not set
+# CONFIG_SCSI_HPSA is not set
+# CONFIG_SCSI_3W_9XXX is not set
+# CONFIG_SCSI_3W_SAS is not set
+# CONFIG_SCSI_ACARD is not set
+# CONFIG_SCSI_AACRAID is not set
+# CONFIG_SCSI_AIC7XXX is not set
+# CONFIG_SCSI_AIC79XX is not set
+# CONFIG_SCSI_AIC94XX is not set
+CONFIG_SCSI_HISI_SAS=m
+# CONFIG_SCSI_MVSAS is not set
+# CONFIG_SCSI_MVUMI is not set
+# CONFIG_SCSI_ADVANSYS is not set
+# CONFIG_SCSI_ARCMSR is not set
+# CONFIG_SCSI_ESAS2R is not set
+# CONFIG_MEGARAID_NEWGEN is not set
+# CONFIG_MEGARAID_LEGACY is not set
+CONFIG_MEGARAID_SAS=y
+CONFIG_SCSI_MPT3SAS=m
+CONFIG_SCSI_MPT2SAS_MAX_SGE=128
+CONFIG_SCSI_MPT3SAS_MAX_SGE=128
+# CONFIG_SCSI_MPT2SAS is not set
+# CONFIG_SCSI_SMARTPQI is not set
+CONFIG_SCSI_UFSHCD=y
+# CONFIG_SCSI_UFSHCD_PCI is not set
+CONFIG_SCSI_UFSHCD_PLATFORM=y
+CONFIG_SCSI_UFS_CDNS_PLATFORM=y
+# CONFIG_SCSI_UFS_DWC_TC_PLATFORM is not set
+CONFIG_SCSI_UFS_TI_J721E=y
+CONFIG_SCSI_UFS_BSG=y
+# CONFIG_SCSI_HPTIOP is not set
+# CONFIG_SCSI_MYRB is not set
+# CONFIG_SCSI_MYRS is not set
+# CONFIG_SCSI_SNIC is not set
+# CONFIG_SCSI_DMX3191D is not set
+# CONFIG_SCSI_FDOMAIN_PCI is not set
+# CONFIG_SCSI_GDTH is not set
+# CONFIG_SCSI_IPS is not set
+# CONFIG_SCSI_INITIO is not set
+# CONFIG_SCSI_INIA100 is not set
+# CONFIG_SCSI_STEX is not set
+# CONFIG_SCSI_SYM53C8XX_2 is not set
+# CONFIG_SCSI_IPR is not set
+# CONFIG_SCSI_QLOGIC_1280 is not set
+# CONFIG_SCSI_QLA_ISCSI is not set
+# CONFIG_SCSI_DC395x is not set
+# CONFIG_SCSI_AM53C974 is not set
+# CONFIG_SCSI_WD719X is not set
+# CONFIG_SCSI_DEBUG is not set
+# CONFIG_SCSI_PMCRAID is not set
+# CONFIG_SCSI_PM8001 is not set
+# CONFIG_SCSI_VIRTIO is not set
+# CONFIG_SCSI_DH is not set
+# end of SCSI device support
+
+CONFIG_HAVE_PATA_PLATFORM=y
+CONFIG_ATA=m
+CONFIG_SATA_HOST=y
+CONFIG_ATA_VERBOSE_ERROR=y
+CONFIG_ATA_FORCE=y
+CONFIG_SATA_PMP=y
+
+#
+# Controllers with non-SFF native interface
+#
+CONFIG_SATA_AHCI=m
+CONFIG_SATA_MOBILE_LPM_POLICY=0
+CONFIG_SATA_AHCI_PLATFORM=m
+CONFIG_AHCI_CEVA=m
+CONFIG_AHCI_XGENE=m
+CONFIG_AHCI_QORIQ=m
+# CONFIG_SATA_INIC162X is not set
+# CONFIG_SATA_ACARD_AHCI is not set
+CONFIG_SATA_SIL24=m
+CONFIG_ATA_SFF=y
+
+#
+# SFF controllers with custom DMA interface
+#
+# CONFIG_PDC_ADMA is not set
+# CONFIG_SATA_QSTOR is not set
+# CONFIG_SATA_SX4 is not set
+CONFIG_ATA_BMDMA=y
+
+#
+# SATA SFF controllers with BMDMA
+#
+# CONFIG_ATA_PIIX is not set
+# CONFIG_SATA_DWC is not set
+# CONFIG_SATA_MV is not set
+# CONFIG_SATA_NV is not set
+# CONFIG_SATA_PROMISE is not set
+# CONFIG_SATA_SIL is not set
+# CONFIG_SATA_SIS is not set
+# CONFIG_SATA_SVW is not set
+# CONFIG_SATA_ULI is not set
+# CONFIG_SATA_VIA is not set
+# CONFIG_SATA_VITESSE is not set
+
+#
+# PATA SFF controllers with BMDMA
+#
+# CONFIG_PATA_ALI is not set
+# CONFIG_PATA_AMD is not set
+# CONFIG_PATA_ARTOP is not set
+# CONFIG_PATA_ATIIXP is not set
+# CONFIG_PATA_ATP867X is not set
+# CONFIG_PATA_CMD64X is not set
+# CONFIG_PATA_CYPRESS is not set
+# CONFIG_PATA_EFAR is not set
+# CONFIG_PATA_HPT366 is not set
+# CONFIG_PATA_HPT37X is not set
+# CONFIG_PATA_HPT3X2N is not set
+# CONFIG_PATA_HPT3X3 is not set
+# CONFIG_PATA_IT8213 is not set
+# CONFIG_PATA_IT821X is not set
+# CONFIG_PATA_JMICRON is not set
+# CONFIG_PATA_MARVELL is not set
+# CONFIG_PATA_NETCELL is not set
+# CONFIG_PATA_NINJA32 is not set
+# CONFIG_PATA_NS87415 is not set
+# CONFIG_PATA_OLDPIIX is not set
+# CONFIG_PATA_OPTIDMA is not set
+# CONFIG_PATA_PDC2027X is not set
+# CONFIG_PATA_PDC_OLD is not set
+# CONFIG_PATA_RADISYS is not set
+# CONFIG_PATA_RDC is not set
+# CONFIG_PATA_SCH is not set
+# CONFIG_PATA_SERVERWORKS is not set
+# CONFIG_PATA_SIL680 is not set
+# CONFIG_PATA_SIS is not set
+# CONFIG_PATA_TOSHIBA is not set
+# CONFIG_PATA_TRIFLEX is not set
+# CONFIG_PATA_VIA is not set
+# CONFIG_PATA_WINBOND is not set
+
+#
+# PIO-only SFF controllers
+#
+# CONFIG_PATA_CMD640_PCI is not set
+# CONFIG_PATA_MPIIX is not set
+# CONFIG_PATA_NS87410 is not set
+# CONFIG_PATA_OPTI is not set
+CONFIG_PATA_PLATFORM=m
+CONFIG_PATA_OF_PLATFORM=m
+# CONFIG_PATA_RZ1000 is not set
+
+#
+# Generic fallback / legacy drivers
+#
+# CONFIG_ATA_GENERIC is not set
+# CONFIG_PATA_LEGACY is not set
+CONFIG_MD=y
+CONFIG_BLK_DEV_MD=m
+# CONFIG_MD_LINEAR is not set
+# CONFIG_MD_RAID0 is not set
+# CONFIG_MD_RAID1 is not set
+# CONFIG_MD_RAID10 is not set
+# CONFIG_MD_RAID456 is not set
+# CONFIG_MD_MULTIPATH is not set
+# CONFIG_MD_FAULTY is not set
+# CONFIG_BCACHE is not set
+CONFIG_BLK_DEV_DM_BUILTIN=y
+CONFIG_BLK_DEV_DM=m
+# CONFIG_DM_DEBUG is not set
+# CONFIG_DM_UNSTRIPED is not set
+# CONFIG_DM_CRYPT is not set
+# CONFIG_DM_SNAPSHOT is not set
+# CONFIG_DM_THIN_PROVISIONING is not set
+# CONFIG_DM_CACHE is not set
+# CONFIG_DM_WRITECACHE is not set
+# CONFIG_DM_EBS is not set
+# CONFIG_DM_ERA is not set
+# CONFIG_DM_CLONE is not set
+CONFIG_DM_MIRROR=m
+# CONFIG_DM_LOG_USERSPACE is not set
+# CONFIG_DM_RAID is not set
+CONFIG_DM_ZERO=m
+# CONFIG_DM_MULTIPATH is not set
+# CONFIG_DM_DELAY is not set
+# CONFIG_DM_DUST is not set
+# CONFIG_DM_UEVENT is not set
+# CONFIG_DM_FLAKEY is not set
+# CONFIG_DM_VERITY is not set
+# CONFIG_DM_SWITCH is not set
+# CONFIG_DM_LOG_WRITES is not set
+# CONFIG_DM_INTEGRITY is not set
+# CONFIG_TARGET_CORE is not set
+# CONFIG_FUSION is not set
+
+#
+# IEEE 1394 (FireWire) support
+#
+# CONFIG_FIREWIRE is not set
+# CONFIG_FIREWIRE_NOSY is not set
+# end of IEEE 1394 (FireWire) support
+
+CONFIG_NETDEVICES=y
+CONFIG_MII=y
+CONFIG_NET_CORE=y
+# CONFIG_BONDING is not set
+# CONFIG_DUMMY is not set
+# CONFIG_WIREGUARD is not set
+# CONFIG_EQUALIZER is not set
+# CONFIG_NET_FC is not set
+# CONFIG_IFB is not set
+# CONFIG_NET_TEAM is not set
+CONFIG_MACVLAN=m
+CONFIG_MACVTAP=m
+# CONFIG_IPVLAN is not set
+# CONFIG_VXLAN is not set
+# CONFIG_GENEVE is not set
+# CONFIG_BAREUDP is not set
+# CONFIG_GTP is not set
+# CONFIG_MACSEC is not set
+# CONFIG_NETCONSOLE is not set
+CONFIG_NTB_NETDEV=m
+CONFIG_TUN=y
+CONFIG_TAP=m
+# CONFIG_TUN_VNET_CROSS_LE is not set
+CONFIG_VETH=m
+CONFIG_VIRTIO_NET=y
+# CONFIG_NLMON is not set
+# CONFIG_ARCNET is not set
+
+#
+# Distributed Switch Architecture drivers
+#
+# CONFIG_B53 is not set
+# CONFIG_NET_DSA_BCM_SF2 is not set
+# CONFIG_NET_DSA_LOOP is not set
+# CONFIG_NET_DSA_LANTIQ_GSWIP is not set
+# CONFIG_NET_DSA_MT7530 is not set
+# CONFIG_NET_DSA_MV88E6060 is not set
+# CONFIG_NET_DSA_MICROCHIP_KSZ9477 is not set
+# CONFIG_NET_DSA_MICROCHIP_KSZ8795 is not set
+# CONFIG_NET_DSA_MV88E6XXX is not set
+# CONFIG_NET_DSA_MSCC_SEVILLE is not set
+# CONFIG_NET_DSA_AR9331 is not set
+# CONFIG_NET_DSA_SJA1105 is not set
+# CONFIG_NET_DSA_QCA8K is not set
+# CONFIG_NET_DSA_REALTEK_SMI is not set
+# CONFIG_NET_DSA_SMSC_LAN9303_I2C is not set
+# CONFIG_NET_DSA_SMSC_LAN9303_MDIO is not set
+# CONFIG_NET_DSA_VITESSE_VSC73XX_SPI is not set
+# CONFIG_NET_DSA_VITESSE_VSC73XX_PLATFORM is not set
+# end of Distributed Switch Architecture drivers
+
+CONFIG_ETHERNET=y
+CONFIG_MDIO=m
+# CONFIG_NET_VENDOR_3COM is not set
+# CONFIG_NET_VENDOR_ADAPTEC is not set
+# CONFIG_NET_VENDOR_AGERE is not set
+CONFIG_NET_VENDOR_ALACRITECH=y
+# CONFIG_SLICOSS is not set
+# CONFIG_NET_VENDOR_ALTEON is not set
+# CONFIG_ALTERA_TSE is not set
+# CONFIG_NET_VENDOR_AMAZON is not set
+# CONFIG_NET_VENDOR_AMD is not set
+CONFIG_NET_VENDOR_AQUANTIA=y
+# CONFIG_AQTION is not set
+# CONFIG_NET_VENDOR_ARC is not set
+# CONFIG_NET_VENDOR_ATHEROS is not set
+CONFIG_NET_VENDOR_AURORA=y
+# CONFIG_AURORA_NB8800 is not set
+CONFIG_NET_VENDOR_BROADCOM=y
+# CONFIG_B44 is not set
+# CONFIG_BCMGENET is not set
+# CONFIG_BNX2 is not set
+# CONFIG_CNIC is not set
+CONFIG_TIGON3=m
+CONFIG_TIGON3_HWMON=y
+CONFIG_BNX2X=m
+CONFIG_BNX2X_SRIOV=y
+# CONFIG_SYSTEMPORT is not set
+# CONFIG_BNXT is not set
+# CONFIG_NET_VENDOR_BROCADE is not set
+CONFIG_NET_VENDOR_CADENCE=y
+CONFIG_MACB=y
+CONFIG_MACB_USE_HWSTAMP=y
+# CONFIG_MACB_PCI is not set
+# CONFIG_NET_VENDOR_CAVIUM is not set
+# CONFIG_NET_VENDOR_CHELSIO is not set
+# CONFIG_NET_VENDOR_CISCO is not set
+CONFIG_NET_VENDOR_CORTINA=y
+# CONFIG_GEMINI_ETHERNET is not set
+# CONFIG_DNET is not set
+# CONFIG_NET_VENDOR_DEC is not set
+# CONFIG_NET_VENDOR_DLINK is not set
+# CONFIG_NET_VENDOR_EMULEX is not set
+# CONFIG_NET_VENDOR_EZCHIP is not set
+CONFIG_NET_VENDOR_GOOGLE=y
+# CONFIG_GVE is not set
+# CONFIG_NET_VENDOR_HISILICON is not set
+CONFIG_NET_VENDOR_HUAWEI=y
+# CONFIG_HINIC is not set
+# CONFIG_NET_VENDOR_I825XX is not set
+CONFIG_NET_VENDOR_INTEL=y
+# CONFIG_E100 is not set
+CONFIG_E1000=m
+CONFIG_E1000E=m
+# CONFIG_IGB is not set
+CONFIG_IGBVF=y
+# CONFIG_IXGB is not set
+# CONFIG_IXGBE is not set
+# CONFIG_IXGBEVF is not set
+# CONFIG_I40E is not set
+# CONFIG_I40EVF is not set
+# CONFIG_ICE is not set
+# CONFIG_FM10K is not set
+# CONFIG_IGC is not set
+# CONFIG_JME is not set
+CONFIG_NET_VENDOR_MARVELL=y
+# CONFIG_MVMDIO is not set
+CONFIG_SKGE=m
+# CONFIG_SKGE_DEBUG is not set
+# CONFIG_SKGE_GENESIS is not set
+CONFIG_SKY2=y
+# CONFIG_SKY2_DEBUG is not set
+# CONFIG_OCTEONTX2_AF is not set
+# CONFIG_OCTEONTX2_PF is not set
+# CONFIG_PRESTERA is not set
+# CONFIG_NET_VENDOR_MELLANOX is not set
+CONFIG_NET_VENDOR_MICREL=y
+# CONFIG_KS8842 is not set
+# CONFIG_KS8851 is not set
+# CONFIG_KS8851_MLL is not set
+# CONFIG_KSZ884X_PCI is not set
+# CONFIG_NET_VENDOR_MICROCHIP is not set
+CONFIG_NET_VENDOR_MICROSEMI=y
+# CONFIG_MSCC_OCELOT_SWITCH is not set
+# CONFIG_NET_VENDOR_MYRI is not set
+# CONFIG_FEALNX is not set
+# CONFIG_NET_VENDOR_NATSEMI is not set
+CONFIG_NET_VENDOR_NETERION=y
+# CONFIG_S2IO is not set
+# CONFIG_VXGE is not set
+# CONFIG_NET_VENDOR_NETRONOME is not set
+CONFIG_NET_VENDOR_NI=y
+# CONFIG_NI_XGE_MANAGEMENT_ENET is not set
+# CONFIG_NET_VENDOR_NVIDIA is not set
+# CONFIG_NET_VENDOR_OKI is not set
+# CONFIG_ETHOC is not set
+CONFIG_NET_VENDOR_PACKET_ENGINES=y
+# CONFIG_HAMACHI is not set
+# CONFIG_YELLOWFIN is not set
+CONFIG_NET_VENDOR_PENSANDO=y
+# CONFIG_IONIC is not set
+# CONFIG_NET_VENDOR_QLOGIC is not set
+# CONFIG_NET_VENDOR_QUALCOMM is not set
+# CONFIG_NET_VENDOR_RDC is not set
+# CONFIG_NET_VENDOR_REALTEK is not set
+# CONFIG_NET_VENDOR_RENESAS is not set
+# CONFIG_NET_VENDOR_ROCKER is not set
+# CONFIG_NET_VENDOR_SAMSUNG is not set
+# CONFIG_NET_VENDOR_SEEQ is not set
+CONFIG_NET_VENDOR_SOLARFLARE=y
+# CONFIG_SFC is not set
+# CONFIG_SFC_FALCON is not set
+# CONFIG_NET_VENDOR_SILAN is not set
+# CONFIG_NET_VENDOR_SIS is not set
+CONFIG_NET_VENDOR_SMSC=y
+CONFIG_SMC91X=y
+# CONFIG_EPIC100 is not set
+CONFIG_SMSC911X=y
+# CONFIG_SMSC9420 is not set
+CONFIG_NET_VENDOR_SOCIONEXT=y
+# CONFIG_NET_VENDOR_STMICRO is not set
+# CONFIG_NET_VENDOR_SUN is not set
+# CONFIG_NET_VENDOR_SYNOPSYS is not set
+# CONFIG_NET_VENDOR_TEHUTI is not set
+CONFIG_NET_VENDOR_TI=y
+CONFIG_TI_DAVINCI_MDIO=y
+# CONFIG_TI_CPSW_PHY_SEL is not set
+CONFIG_TI_K3_AM65_CPSW_NUSS=y
+CONFIG_TI_K3_AM65_CPSW_SWITCHDEV=y
+CONFIG_TI_K3_AM65_CPTS=y
+CONFIG_TI_AM65_CPSW_TAS=y
+# CONFIG_TLAN is not set
+CONFIG_TI_RDEV_ETH_SWITCH_VIRT_EMAC=m
+CONFIG_TI_PRUETH=m
+CONFIG_TI_ICSS_IEP=m
+CONFIG_TI_ICSSG_PRUETH=m
+# CONFIG_NET_VENDOR_VIA is not set
+# CONFIG_NET_VENDOR_WIZNET is not set
+CONFIG_NET_VENDOR_XILINX=y
+# CONFIG_XILINX_AXI_EMAC is not set
+# CONFIG_XILINX_LL_TEMAC is not set
+# CONFIG_FDDI is not set
+# CONFIG_HIPPI is not set
+CONFIG_PHYLINK=y
+CONFIG_PHYLIB=y
+CONFIG_SWPHY=y
+# CONFIG_LED_TRIGGER_PHY is not set
+CONFIG_FIXED_PHY=y
+# CONFIG_SFP is not set
+
+#
+# MII PHY device drivers
+#
+# CONFIG_AMD_PHY is not set
+# CONFIG_ADIN_PHY is not set
+CONFIG_AQUANTIA_PHY=y
+# CONFIG_AX88796B_PHY is not set
+# CONFIG_BROADCOM_PHY is not set
+# CONFIG_BCM54140_PHY is not set
+# CONFIG_BCM7XXX_PHY is not set
+# CONFIG_BCM84881_PHY is not set
+# CONFIG_BCM87XX_PHY is not set
+# CONFIG_CICADA_PHY is not set
+# CONFIG_CORTINA_PHY is not set
+# CONFIG_DAVICOM_PHY is not set
+# CONFIG_ICPLUS_PHY is not set
+# CONFIG_LXT_PHY is not set
+# CONFIG_INTEL_XWAY_PHY is not set
+# CONFIG_LSI_ET1011C_PHY is not set
+CONFIG_MARVELL_PHY=y
+CONFIG_MARVELL_10G_PHY=m
+CONFIG_MICREL_PHY=y
+CONFIG_MICROCHIP_PHY=m
+# CONFIG_MICROCHIP_T1_PHY is not set
+CONFIG_MICROSEMI_PHY=y
+# CONFIG_NATIONAL_PHY is not set
+# CONFIG_NXP_TJA11XX_PHY is not set
+CONFIG_AT803X_PHY=y
+# CONFIG_QSEMI_PHY is not set
+CONFIG_REALTEK_PHY=m
+# CONFIG_RENESAS_PHY is not set
+CONFIG_ROCKCHIP_PHY=y
+CONFIG_SMSC_PHY=m
+# CONFIG_STE10XP is not set
+# CONFIG_TERANETICS_PHY is not set
+# CONFIG_DP83822_PHY is not set
+# CONFIG_DP83TC811_PHY is not set
+CONFIG_DP83848_PHY=y
+CONFIG_DP83867_PHY=y
+CONFIG_DP83869_PHY=y
+CONFIG_VITESSE_PHY=y
+# CONFIG_XILINX_GMII2RGMII is not set
+# CONFIG_MICREL_KS8995MA is not set
+CONFIG_MDIO_DEVICE=y
+CONFIG_MDIO_BUS=y
+CONFIG_OF_MDIO=y
+CONFIG_MDIO_DEVRES=y
+# CONFIG_MDIO_BITBANG is not set
+# CONFIG_MDIO_BCM_UNIMAC is not set
+# CONFIG_MDIO_HISI_FEMAC is not set
+# CONFIG_MDIO_MVUSB is not set
+# CONFIG_MDIO_MSCC_MIIM is not set
+# CONFIG_MDIO_OCTEON is not set
+# CONFIG_MDIO_IPQ4019 is not set
+# CONFIG_MDIO_IPQ8064 is not set
+# CONFIG_MDIO_THUNDER is not set
+
+#
+# MDIO Multiplexers
+#
+CONFIG_MDIO_BUS_MUX=y
+# CONFIG_MDIO_BUS_MUX_GPIO is not set
+CONFIG_MDIO_BUS_MUX_MULTIPLEXER=y
+CONFIG_MDIO_BUS_MUX_MMIOREG=y
+
+#
+# PCS device drivers
+#
+# CONFIG_PCS_XPCS is not set
+# end of PCS device drivers
+
+# CONFIG_PPP is not set
+# CONFIG_SLIP is not set
+
+#
+# Host-side USB support is needed for USB Network Adapter support
+#
+CONFIG_USB_NET_DRIVERS=m
+# CONFIG_USB_CATC is not set
+# CONFIG_USB_KAWETH is not set
+CONFIG_USB_PEGASUS=m
+CONFIG_USB_RTL8150=m
+CONFIG_USB_RTL8152=m
+CONFIG_USB_LAN78XX=m
+CONFIG_USB_USBNET=m
+CONFIG_USB_NET_AX8817X=m
+CONFIG_USB_NET_AX88179_178A=m
+CONFIG_USB_NET_CDCETHER=m
+CONFIG_USB_NET_CDC_EEM=m
+CONFIG_USB_NET_CDC_NCM=m
+# CONFIG_USB_NET_HUAWEI_CDC_NCM is not set
+# CONFIG_USB_NET_CDC_MBIM is not set
+CONFIG_USB_NET_DM9601=m
+# CONFIG_USB_NET_SR9700 is not set
+CONFIG_USB_NET_SR9800=m
+CONFIG_USB_NET_SMSC75XX=m
+CONFIG_USB_NET_SMSC95XX=m
+# CONFIG_USB_NET_GL620A is not set
+CONFIG_USB_NET_NET1080=m
+CONFIG_USB_NET_PLUSB=m
+CONFIG_USB_NET_MCS7830=m
+# CONFIG_USB_NET_RNDIS_HOST is not set
+CONFIG_USB_NET_CDC_SUBSET_ENABLE=m
+CONFIG_USB_NET_CDC_SUBSET=m
+# CONFIG_USB_ALI_M5632 is not set
+# CONFIG_USB_AN2720 is not set
+CONFIG_USB_BELKIN=y
+CONFIG_USB_ARMLINUX=y
+# CONFIG_USB_EPSON2888 is not set
+# CONFIG_USB_KC2190 is not set
+CONFIG_USB_NET_ZAURUS=m
+# CONFIG_USB_NET_CX82310_ETH is not set
+# CONFIG_USB_NET_KALMIA is not set
+# CONFIG_USB_NET_QMI_WWAN is not set
+# CONFIG_USB_HSO is not set
+# CONFIG_USB_NET_INT51X1 is not set
+# CONFIG_USB_IPHETH is not set
+# CONFIG_USB_SIERRA_NET is not set
+# CONFIG_USB_VL600 is not set
+# CONFIG_USB_NET_CH9200 is not set
+# CONFIG_USB_NET_AQC111 is not set
+CONFIG_WLAN=y
+# CONFIG_WIRELESS_WDS is not set
+CONFIG_WLAN_VENDOR_ADMTEK=y
+# CONFIG_ADM8211 is not set
+CONFIG_ATH_COMMON=m
+CONFIG_WLAN_VENDOR_ATH=y
+# CONFIG_ATH_DEBUG is not set
+# CONFIG_ATH5K is not set
+# CONFIG_ATH5K_PCI is not set
+# CONFIG_ATH9K is not set
+# CONFIG_ATH9K_HTC is not set
+# CONFIG_CARL9170 is not set
+# CONFIG_ATH6KL is not set
+# CONFIG_AR5523 is not set
+# CONFIG_WIL6210 is not set
+CONFIG_ATH10K=m
+CONFIG_ATH10K_CE=y
+CONFIG_ATH10K_PCI=m
+# CONFIG_ATH10K_AHB is not set
+# CONFIG_ATH10K_SDIO is not set
+# CONFIG_ATH10K_USB is not set
+# CONFIG_ATH10K_DEBUG is not set
+# CONFIG_ATH10K_DEBUGFS is not set
+# CONFIG_WCN36XX is not set
+CONFIG_WLAN_VENDOR_ATMEL=y
+# CONFIG_ATMEL is not set
+# CONFIG_AT76C50X_USB is not set
+CONFIG_WLAN_VENDOR_BROADCOM=y
+CONFIG_B43=m
+CONFIG_B43_BCMA=y
+CONFIG_B43_SSB=y
+CONFIG_B43_BUSES_BCMA_AND_SSB=y
+# CONFIG_B43_BUSES_BCMA is not set
+# CONFIG_B43_BUSES_SSB is not set
+CONFIG_B43_PCI_AUTOSELECT=y
+CONFIG_B43_PCICORE_AUTOSELECT=y
+# CONFIG_B43_SDIO is not set
+CONFIG_B43_BCMA_PIO=y
+CONFIG_B43_PIO=y
+CONFIG_B43_PHY_G=y
+CONFIG_B43_PHY_N=y
+CONFIG_B43_PHY_LP=y
+CONFIG_B43_PHY_HT=y
+CONFIG_B43_LEDS=y
+CONFIG_B43_HWRNG=y
+# CONFIG_B43_DEBUG is not set
+# CONFIG_B43LEGACY is not set
+CONFIG_BRCMUTIL=m
+# CONFIG_BRCMSMAC is not set
+CONFIG_BRCMFMAC=m
+CONFIG_BRCMFMAC_PROTO_BCDC=y
+CONFIG_BRCMFMAC_SDIO=y
+# CONFIG_BRCMFMAC_USB is not set
+# CONFIG_BRCMFMAC_PCIE is not set
+# CONFIG_BRCM_TRACING is not set
+# CONFIG_BRCMDBG is not set
+CONFIG_WLAN_VENDOR_CISCO=y
+CONFIG_WLAN_VENDOR_INTEL=y
+# CONFIG_IPW2100 is not set
+# CONFIG_IPW2200 is not set
+# CONFIG_IWL4965 is not set
+# CONFIG_IWL3945 is not set
+CONFIG_IWLWIFI=m
+CONFIG_IWLWIFI_LEDS=y
+CONFIG_IWLDVM=m
+CONFIG_IWLMVM=m
+CONFIG_IWLWIFI_OPMODE_MODULAR=y
+# CONFIG_IWLWIFI_BCAST_FILTERING is not set
+
+#
+# Debugging Options
+#
+# CONFIG_IWLWIFI_DEBUG is not set
+# end of Debugging Options
+
+CONFIG_WLAN_VENDOR_INTERSIL=y
+# CONFIG_HOSTAP is not set
+# CONFIG_HERMES is not set
+# CONFIG_P54_COMMON is not set
+# CONFIG_PRISM54 is not set
+CONFIG_WLAN_VENDOR_MARVELL=y
+# CONFIG_LIBERTAS is not set
+# CONFIG_LIBERTAS_THINFIRM is not set
+CONFIG_MWIFIEX=m
+# CONFIG_MWIFIEX_SDIO is not set
+CONFIG_MWIFIEX_PCIE=m
+# CONFIG_MWIFIEX_USB is not set
+# CONFIG_MWL8K is not set
+CONFIG_WLAN_VENDOR_MEDIATEK=y
+# CONFIG_MT7601U is not set
+# CONFIG_MT76x0U is not set
+# CONFIG_MT76x0E is not set
+# CONFIG_MT76x2E is not set
+# CONFIG_MT76x2U is not set
+# CONFIG_MT7603E is not set
+# CONFIG_MT7615E is not set
+# CONFIG_MT7663U is not set
+# CONFIG_MT7663S is not set
+# CONFIG_MT7915E is not set
+CONFIG_WLAN_VENDOR_MICROCHIP=y
+# CONFIG_WILC1000_SDIO is not set
+# CONFIG_WILC1000_SPI is not set
+CONFIG_WLAN_VENDOR_RALINK=y
+# CONFIG_RT2X00 is not set
+CONFIG_WLAN_VENDOR_REALTEK=y
+# CONFIG_RTL8180 is not set
+# CONFIG_RTL8187 is not set
+CONFIG_RTL_CARDS=m
+# CONFIG_RTL8192CE is not set
+# CONFIG_RTL8192SE is not set
+# CONFIG_RTL8192DE is not set
+# CONFIG_RTL8723AE is not set
+# CONFIG_RTL8723BE is not set
+# CONFIG_RTL8188EE is not set
+# CONFIG_RTL8192EE is not set
+# CONFIG_RTL8821AE is not set
+# CONFIG_RTL8192CU is not set
+# CONFIG_RTL8XXXU is not set
+# CONFIG_RTW88 is not set
+CONFIG_WLAN_VENDOR_RSI=y
+# CONFIG_RSI_91X is not set
+CONFIG_WLAN_VENDOR_ST=y
+# CONFIG_CW1200 is not set
+CONFIG_WLAN_VENDOR_TI=y
+# CONFIG_WL1251 is not set
+# CONFIG_WL12XX is not set
+CONFIG_WL18XX=m
+CONFIG_WLCORE=m
+# CONFIG_WLCORE_SPI is not set
+CONFIG_WLCORE_SDIO=m
+CONFIG_WILINK_PLATFORM_DATA=y
+CONFIG_WLAN_VENDOR_ZYDAS=y
+# CONFIG_USB_ZD1201 is not set
+# CONFIG_ZD1211RW is not set
+CONFIG_WLAN_VENDOR_QUANTENNA=y
+# CONFIG_QTNFMAC_PCIE is not set
+# CONFIG_MAC80211_HWSIM is not set
+# CONFIG_USB_NET_RNDIS_WLAN is not set
+# CONFIG_VIRT_WIFI is not set
+
+#
+# Enable WiMAX (Networking options) to see the WiMAX drivers
+#
+# CONFIG_WAN is not set
+# CONFIG_NETDEVSIM is not set
+CONFIG_NET_FAILOVER=y
+# CONFIG_ISDN is not set
+# CONFIG_NVM is not set
+
+#
+# Input device support
+#
+CONFIG_INPUT=y
+CONFIG_INPUT_LEDS=y
+# CONFIG_INPUT_FF_MEMLESS is not set
+# CONFIG_INPUT_POLLDEV is not set
+# CONFIG_INPUT_SPARSEKMAP is not set
+CONFIG_INPUT_MATRIXKMAP=y
+
+#
+# Userland interfaces
+#
+# CONFIG_INPUT_MOUSEDEV is not set
+# CONFIG_INPUT_JOYDEV is not set
+CONFIG_INPUT_EVDEV=y
+# CONFIG_INPUT_EVBUG is not set
+
+#
+# Input Device Drivers
+#
+CONFIG_INPUT_KEYBOARD=y
+CONFIG_KEYBOARD_ADC=m
+# CONFIG_KEYBOARD_ADP5588 is not set
+# CONFIG_KEYBOARD_ADP5589 is not set
+CONFIG_KEYBOARD_ATKBD=y
+# CONFIG_KEYBOARD_QT1050 is not set
+# CONFIG_KEYBOARD_QT1070 is not set
+# CONFIG_KEYBOARD_QT2160 is not set
+# CONFIG_KEYBOARD_DLINK_DIR685 is not set
+# CONFIG_KEYBOARD_LKKBD is not set
+CONFIG_KEYBOARD_GPIO=y
+# CONFIG_KEYBOARD_GPIO_POLLED is not set
+# CONFIG_KEYBOARD_TCA6416 is not set
+# CONFIG_KEYBOARD_TCA8418 is not set
+CONFIG_KEYBOARD_MATRIX=m
+# CONFIG_KEYBOARD_LM8323 is not set
+# CONFIG_KEYBOARD_LM8333 is not set
+# CONFIG_KEYBOARD_MAX7359 is not set
+# CONFIG_KEYBOARD_MCS is not set
+# CONFIG_KEYBOARD_MPR121 is not set
+# CONFIG_KEYBOARD_NEWTON is not set
+# CONFIG_KEYBOARD_OPENCORES is not set
+# CONFIG_KEYBOARD_SAMSUNG is not set
+# CONFIG_KEYBOARD_STOWAWAY is not set
+# CONFIG_KEYBOARD_SUNKBD is not set
+# CONFIG_KEYBOARD_OMAP4 is not set
+# CONFIG_KEYBOARD_TM2_TOUCHKEY is not set
+# CONFIG_KEYBOARD_XTKBD is not set
+CONFIG_KEYBOARD_CROS_EC=y
+# CONFIG_KEYBOARD_CAP11XX is not set
+# CONFIG_KEYBOARD_BCM is not set
+CONFIG_INPUT_MOUSE=y
+CONFIG_MOUSE_PS2=y
+CONFIG_MOUSE_PS2_ALPS=y
+CONFIG_MOUSE_PS2_BYD=y
+CONFIG_MOUSE_PS2_LOGIPS2PP=y
+CONFIG_MOUSE_PS2_SYNAPTICS=y
+CONFIG_MOUSE_PS2_SYNAPTICS_SMBUS=y
+CONFIG_MOUSE_PS2_CYPRESS=y
+CONFIG_MOUSE_PS2_TRACKPOINT=y
+# CONFIG_MOUSE_PS2_ELANTECH is not set
+# CONFIG_MOUSE_PS2_SENTELIC is not set
+# CONFIG_MOUSE_PS2_TOUCHKIT is not set
+CONFIG_MOUSE_PS2_FOCALTECH=y
+CONFIG_MOUSE_PS2_SMBUS=y
+# CONFIG_MOUSE_SERIAL is not set
+# CONFIG_MOUSE_APPLETOUCH is not set
+# CONFIG_MOUSE_BCM5974 is not set
+# CONFIG_MOUSE_CYAPA is not set
+# CONFIG_MOUSE_ELAN_I2C is not set
+# CONFIG_MOUSE_VSXXXAA is not set
+# CONFIG_MOUSE_GPIO is not set
+# CONFIG_MOUSE_SYNAPTICS_I2C is not set
+# CONFIG_MOUSE_SYNAPTICS_USB is not set
+# CONFIG_INPUT_JOYSTICK is not set
+# CONFIG_INPUT_TABLET is not set
+CONFIG_INPUT_TOUCHSCREEN=y
+CONFIG_TOUCHSCREEN_PROPERTIES=y
+# CONFIG_TOUCHSCREEN_ADS7846 is not set
+# CONFIG_TOUCHSCREEN_AD7877 is not set
+# CONFIG_TOUCHSCREEN_AD7879 is not set
+# CONFIG_TOUCHSCREEN_ADC is not set
+# CONFIG_TOUCHSCREEN_AR1021_I2C is not set
+CONFIG_TOUCHSCREEN_ATMEL_MXT=m
+# CONFIG_TOUCHSCREEN_ATMEL_MXT_T37 is not set
+# CONFIG_TOUCHSCREEN_AUO_PIXCIR is not set
+# CONFIG_TOUCHSCREEN_BU21013 is not set
+# CONFIG_TOUCHSCREEN_BU21029 is not set
+# CONFIG_TOUCHSCREEN_CHIPONE_ICN8318 is not set
+# CONFIG_TOUCHSCREEN_CY8CTMA140 is not set
+# CONFIG_TOUCHSCREEN_CY8CTMG110 is not set
+# CONFIG_TOUCHSCREEN_CYTTSP_CORE is not set
+# CONFIG_TOUCHSCREEN_CYTTSP4_CORE is not set
+# CONFIG_TOUCHSCREEN_DYNAPRO is not set
+# CONFIG_TOUCHSCREEN_HAMPSHIRE is not set
+# CONFIG_TOUCHSCREEN_EETI is not set
+# CONFIG_TOUCHSCREEN_EGALAX is not set
+# CONFIG_TOUCHSCREEN_EGALAX_SERIAL is not set
+# CONFIG_TOUCHSCREEN_EXC3000 is not set
+# CONFIG_TOUCHSCREEN_FUJITSU is not set
+CONFIG_TOUCHSCREEN_GOODIX=m
+# CONFIG_TOUCHSCREEN_HIDEEP is not set
+# CONFIG_TOUCHSCREEN_ILI210X is not set
+# CONFIG_TOUCHSCREEN_S6SY761 is not set
+# CONFIG_TOUCHSCREEN_GUNZE is not set
+# CONFIG_TOUCHSCREEN_EKTF2127 is not set
+# CONFIG_TOUCHSCREEN_ELAN is not set
+# CONFIG_TOUCHSCREEN_ELO is not set
+# CONFIG_TOUCHSCREEN_WACOM_W8001 is not set
+# CONFIG_TOUCHSCREEN_WACOM_I2C is not set
+# CONFIG_TOUCHSCREEN_MAX11801 is not set
+# CONFIG_TOUCHSCREEN_MCS5000 is not set
+# CONFIG_TOUCHSCREEN_MMS114 is not set
+# CONFIG_TOUCHSCREEN_MELFAS_MIP4 is not set
+# CONFIG_TOUCHSCREEN_MTOUCH is not set
+# CONFIG_TOUCHSCREEN_IMX6UL_TSC is not set
+# CONFIG_TOUCHSCREEN_INEXIO is not set
+# CONFIG_TOUCHSCREEN_MK712 is not set
+# CONFIG_TOUCHSCREEN_PENMOUNT is not set
+CONFIG_TOUCHSCREEN_EDT_FT5X06=m
+# CONFIG_TOUCHSCREEN_TOUCHRIGHT is not set
+# CONFIG_TOUCHSCREEN_TOUCHWIN is not set
+CONFIG_TOUCHSCREEN_TI_AM335X_TSC=m
+CONFIG_TOUCHSCREEN_PIXCIR=m
+# CONFIG_TOUCHSCREEN_WDT87XX_I2C is not set
+# CONFIG_TOUCHSCREEN_USB_COMPOSITE is not set
+# CONFIG_TOUCHSCREEN_TOUCHIT213 is not set
+# CONFIG_TOUCHSCREEN_TSC_SERIO is not set
+# CONFIG_TOUCHSCREEN_TSC2004 is not set
+# CONFIG_TOUCHSCREEN_TSC2005 is not set
+# CONFIG_TOUCHSCREEN_TSC2007 is not set
+# CONFIG_TOUCHSCREEN_RM_TS is not set
+# CONFIG_TOUCHSCREEN_SILEAD is not set
+# CONFIG_TOUCHSCREEN_SIS_I2C is not set
+# CONFIG_TOUCHSCREEN_ST1232 is not set
+# CONFIG_TOUCHSCREEN_STMFTS is not set
+# CONFIG_TOUCHSCREEN_SUR40 is not set
+# CONFIG_TOUCHSCREEN_SURFACE3_SPI is not set
+# CONFIG_TOUCHSCREEN_SX8654 is not set
+# CONFIG_TOUCHSCREEN_TPS6507X is not set
+# CONFIG_TOUCHSCREEN_ZET6223 is not set
+# CONFIG_TOUCHSCREEN_ZFORCE is not set
+# CONFIG_TOUCHSCREEN_ROHM_BU21023 is not set
+# CONFIG_TOUCHSCREEN_IQS5XX is not set
+# CONFIG_TOUCHSCREEN_ZINITIX is not set
+CONFIG_INPUT_MISC=y
+# CONFIG_INPUT_AD714X is not set
+# CONFIG_INPUT_ATMEL_CAPTOUCH is not set
+# CONFIG_INPUT_BMA150 is not set
+# CONFIG_INPUT_E3X0_BUTTON is not set
+# CONFIG_INPUT_MMA8450 is not set
+# CONFIG_INPUT_GPIO_BEEPER is not set
+CONFIG_INPUT_GPIO_DECODER=m
+# CONFIG_INPUT_GPIO_VIBRA is not set
+# CONFIG_INPUT_ATI_REMOTE2 is not set
+# CONFIG_INPUT_KEYSPAN_REMOTE is not set
+# CONFIG_INPUT_KXTJ9 is not set
+# CONFIG_INPUT_POWERMATE is not set
+# CONFIG_INPUT_YEALINK is not set
+# CONFIG_INPUT_CM109 is not set
+# CONFIG_INPUT_REGULATOR_HAPTIC is not set
+# CONFIG_INPUT_AXP20X_PEK is not set
+# CONFIG_INPUT_UINPUT is not set
+# CONFIG_INPUT_PALMAS_PWRBUTTON is not set
+# CONFIG_INPUT_PCF8574 is not set
+# CONFIG_INPUT_PWM_BEEPER is not set
+# CONFIG_INPUT_PWM_VIBRA is not set
+# CONFIG_INPUT_RK805_PWRKEY is not set
+# CONFIG_INPUT_GPIO_ROTARY_ENCODER is not set
+# CONFIG_INPUT_ADXL34X is not set
+# CONFIG_INPUT_IMS_PCU is not set
+# CONFIG_INPUT_IQS269A is not set
+# CONFIG_INPUT_CMA3000 is not set
+# CONFIG_INPUT_DRV260X_HAPTICS is not set
+# CONFIG_INPUT_DRV2665_HAPTICS is not set
+# CONFIG_INPUT_DRV2667_HAPTICS is not set
+# CONFIG_RMI4_CORE is not set
+
+#
+# Hardware I/O ports
+#
+CONFIG_SERIO=y
+# CONFIG_SERIO_SERPORT is not set
+CONFIG_SERIO_AMBAKMI=y
+# CONFIG_SERIO_PCIPS2 is not set
+CONFIG_SERIO_LIBPS2=y
+# CONFIG_SERIO_RAW is not set
+# CONFIG_SERIO_ALTERA_PS2 is not set
+# CONFIG_SERIO_PS2MULT is not set
+# CONFIG_SERIO_ARC_PS2 is not set
+# CONFIG_SERIO_APBPS2 is not set
+# CONFIG_SERIO_GPIO_PS2 is not set
+# CONFIG_USERIO is not set
+# CONFIG_GAMEPORT is not set
+# end of Hardware I/O ports
+# end of Input device support
+
+#
+# Character devices
+#
+CONFIG_TTY=y
+CONFIG_VT=y
+CONFIG_CONSOLE_TRANSLATIONS=y
+CONFIG_VT_CONSOLE=y
+CONFIG_VT_CONSOLE_SLEEP=y
+CONFIG_HW_CONSOLE=y
+CONFIG_VT_HW_CONSOLE_BINDING=y
+CONFIG_UNIX98_PTYS=y
+CONFIG_LEGACY_PTYS=y
+CONFIG_LEGACY_PTY_COUNT=16
+CONFIG_LDISC_AUTOLOAD=y
+
+#
+# Serial drivers
+#
+CONFIG_SERIAL_EARLYCON=y
+CONFIG_SERIAL_8250=y
+CONFIG_SERIAL_8250_DEPRECATED_OPTIONS=y
+CONFIG_SERIAL_8250_16550A_VARIANTS=y
+# CONFIG_SERIAL_8250_FINTEK is not set
+CONFIG_SERIAL_8250_CONSOLE=y
+CONFIG_SERIAL_8250_DMA=y
+CONFIG_SERIAL_8250_PCI=y
+CONFIG_SERIAL_8250_EXAR=y
+CONFIG_SERIAL_8250_NR_UARTS=10
+CONFIG_SERIAL_8250_RUNTIME_UARTS=10
+CONFIG_SERIAL_8250_EXTENDED=y
+# CONFIG_SERIAL_8250_MANY_PORTS is not set
+# CONFIG_SERIAL_8250_ASPEED_VUART is not set
+CONFIG_SERIAL_8250_SHARE_IRQ=y
+# CONFIG_SERIAL_8250_DETECT_IRQ is not set
+# CONFIG_SERIAL_8250_RSA is not set
+CONFIG_SERIAL_8250_FSL=y
+# CONFIG_SERIAL_8250_DW is not set
+# CONFIG_SERIAL_8250_RT288X is not set
+CONFIG_SERIAL_8250_OMAP=y
+CONFIG_SERIAL_8250_OMAP_TTYO_FIXUP=y
+CONFIG_SERIAL_8250_PRUSS=m
+CONFIG_SERIAL_OF_PLATFORM=y
+
+#
+# Non-8250 serial port support
+#
+# CONFIG_SERIAL_AMBA_PL010 is not set
+# CONFIG_SERIAL_AMBA_PL011 is not set
+# CONFIG_SERIAL_EARLYCON_ARM_SEMIHOST is not set
+# CONFIG_SERIAL_MAX3100 is not set
+# CONFIG_SERIAL_MAX310X is not set
+# CONFIG_SERIAL_UARTLITE is not set
+CONFIG_SERIAL_CORE=y
+CONFIG_SERIAL_CORE_CONSOLE=y
+# CONFIG_SERIAL_JSM is not set
+# CONFIG_SERIAL_SIFIVE is not set
+# CONFIG_SERIAL_SCCNXP is not set
+# CONFIG_SERIAL_SC16IS7XX is not set
+# CONFIG_SERIAL_ALTERA_JTAGUART is not set
+# CONFIG_SERIAL_ALTERA_UART is not set
+# CONFIG_SERIAL_IFX6X60 is not set
+# CONFIG_SERIAL_XILINX_PS_UART is not set
+# CONFIG_SERIAL_ARC is not set
+# CONFIG_SERIAL_RP2 is not set
+CONFIG_SERIAL_FSL_LPUART=y
+CONFIG_SERIAL_FSL_LPUART_CONSOLE=y
+CONFIG_SERIAL_FSL_LINFLEXUART=y
+CONFIG_SERIAL_FSL_LINFLEXUART_CONSOLE=y
+# CONFIG_SERIAL_CONEXANT_DIGICOLOR is not set
+# CONFIG_SERIAL_SPRD is not set
+# end of Serial drivers
+
+CONFIG_SERIAL_MCTRL_GPIO=y
+# CONFIG_SERIAL_NONSTANDARD is not set
+# CONFIG_N_GSM is not set
+# CONFIG_NOZOMI is not set
+# CONFIG_NULL_TTY is not set
+# CONFIG_TRACE_SINK is not set
+CONFIG_HVC_DRIVER=y
+# CONFIG_HVC_DCC is not set
+CONFIG_SERIAL_DEV_BUS=y
+CONFIG_SERIAL_DEV_CTRL_TTYPORT=y
+# CONFIG_TTY_PRINTK is not set
+CONFIG_VIRTIO_CONSOLE=y
+CONFIG_IPMI_HANDLER=m
+CONFIG_IPMI_PLAT_DATA=y
+# CONFIG_IPMI_PANIC_EVENT is not set
+CONFIG_IPMI_DEVICE_INTERFACE=m
+CONFIG_IPMI_SI=m
+# CONFIG_IPMI_SSIF is not set
+# CONFIG_IPMI_WATCHDOG is not set
+# CONFIG_IPMI_POWEROFF is not set
+CONFIG_HW_RANDOM=m
+# CONFIG_HW_RANDOM_TIMERIOMEM is not set
+# CONFIG_HW_RANDOM_BA431 is not set
+CONFIG_HW_RANDOM_OMAP=m
+# CONFIG_HW_RANDOM_VIRTIO is not set
+CONFIG_HW_RANDOM_CAVIUM=m
+CONFIG_HW_RANDOM_OPTEE=m
+# CONFIG_HW_RANDOM_CCTRNG is not set
+# CONFIG_HW_RANDOM_XIPHERA is not set
+# CONFIG_APPLICOM is not set
+CONFIG_DEVMEM=y
+# CONFIG_RAW_DRIVER is not set
+CONFIG_DEVPORT=y
+CONFIG_TCG_TPM=y
+# CONFIG_TCG_TIS is not set
+# CONFIG_TCG_TIS_SPI is not set
+# CONFIG_TCG_TIS_I2C_ATMEL is not set
+CONFIG_TCG_TIS_I2C_INFINEON=y
+# CONFIG_TCG_TIS_I2C_NUVOTON is not set
+# CONFIG_TCG_ATMEL is not set
+# CONFIG_TCG_VTPM_PROXY is not set
+# CONFIG_TCG_FTPM_TEE is not set
+# CONFIG_TCG_TIS_ST33ZP24_I2C is not set
+# CONFIG_TCG_TIS_ST33ZP24_SPI is not set
+# CONFIG_XILLYBUS is not set
+# end of Character devices
+
+# CONFIG_RANDOM_TRUST_CPU is not set
+# CONFIG_RANDOM_TRUST_BOOTLOADER is not set
+
+#
+# I2C support
+#
+CONFIG_I2C=y
+CONFIG_I2C_BOARDINFO=y
+CONFIG_I2C_COMPAT=y
+CONFIG_I2C_CHARDEV=y
+CONFIG_I2C_MUX=y
+
+#
+# Multiplexer I2C Chip support
+#
+# CONFIG_I2C_ARB_GPIO_CHALLENGE is not set
+# CONFIG_I2C_MUX_GPIO is not set
+# CONFIG_I2C_MUX_GPMUX is not set
+# CONFIG_I2C_MUX_LTC4306 is not set
+# CONFIG_I2C_MUX_PCA9541 is not set
+CONFIG_I2C_MUX_PCA954x=y
+# CONFIG_I2C_MUX_PINCTRL is not set
+# CONFIG_I2C_MUX_REG is not set
+# CONFIG_I2C_DEMUX_PINCTRL is not set
+# CONFIG_I2C_MUX_MLXCPLD is not set
+# end of Multiplexer I2C Chip support
+
+CONFIG_I2C_HELPER_AUTO=y
+CONFIG_I2C_ALGOBIT=y
+
+#
+# I2C Hardware Bus support
+#
+
+#
+# PC SMBus host controller drivers
+#
+# CONFIG_I2C_ALI1535 is not set
+# CONFIG_I2C_ALI1563 is not set
+# CONFIG_I2C_ALI15X3 is not set
+# CONFIG_I2C_AMD756 is not set
+# CONFIG_I2C_AMD8111 is not set
+# CONFIG_I2C_I801 is not set
+# CONFIG_I2C_ISCH is not set
+# CONFIG_I2C_PIIX4 is not set
+# CONFIG_I2C_NFORCE2 is not set
+# CONFIG_I2C_NVIDIA_GPU is not set
+# CONFIG_I2C_SIS5595 is not set
+# CONFIG_I2C_SIS630 is not set
+# CONFIG_I2C_SIS96X is not set
+# CONFIG_I2C_VIA is not set
+# CONFIG_I2C_VIAPRO is not set
+
+#
+# I2C system bus drivers (mostly embedded / system-on-chip)
+#
+# CONFIG_I2C_CADENCE is not set
+# CONFIG_I2C_CBUS_GPIO is not set
+# CONFIG_I2C_DESIGNWARE_PLATFORM is not set
+# CONFIG_I2C_DESIGNWARE_PCI is not set
+# CONFIG_I2C_EMEV2 is not set
+# CONFIG_I2C_GPIO is not set
+# CONFIG_I2C_NOMADIK is not set
+# CONFIG_I2C_OCORES is not set
+CONFIG_I2C_OMAP=y
+# CONFIG_I2C_PCA_PLATFORM is not set
+# CONFIG_I2C_RK3X is not set
+# CONFIG_I2C_SIMTEC is not set
+# CONFIG_I2C_THUNDERX is not set
+# CONFIG_I2C_XILINX is not set
+
+#
+# External I2C/SMBus adapter drivers
+#
+# CONFIG_I2C_DIOLAN_U2C is not set
+# CONFIG_I2C_ROBOTFUZZ_OSIF is not set
+# CONFIG_I2C_TAOS_EVM is not set
+# CONFIG_I2C_TINY_USB is not set
+
+#
+# Other I2C/SMBus bus drivers
+#
+CONFIG_I2C_CROS_EC_TUNNEL=y
+# end of I2C Hardware Bus support
+
+# CONFIG_I2C_STUB is not set
+# CONFIG_I2C_SLAVE is not set
+# CONFIG_I2C_DEBUG_CORE is not set
+# CONFIG_I2C_DEBUG_ALGO is not set
+# CONFIG_I2C_DEBUG_BUS is not set
+# end of I2C support
+
+# CONFIG_I3C is not set
+CONFIG_SPI=y
+# CONFIG_SPI_DEBUG is not set
+CONFIG_SPI_MASTER=y
+CONFIG_SPI_MEM=y
+
+#
+# SPI Master Controller Drivers
+#
+# CONFIG_SPI_ALTERA is not set
+# CONFIG_SPI_AXI_SPI_ENGINE is not set
+# CONFIG_SPI_BITBANG is not set
+# CONFIG_SPI_CADENCE is not set
+CONFIG_SPI_CADENCE_QUADSPI=y
+# CONFIG_SPI_DESIGNWARE is not set
+CONFIG_SPI_NXP_FLEXSPI=y
+# CONFIG_SPI_GPIO is not set
+# CONFIG_SPI_FSL_SPI is not set
+# CONFIG_SPI_OC_TINY is not set
+CONFIG_SPI_OMAP24XX=y
+CONFIG_SPI_PL022=y
+# CONFIG_SPI_PXA2XX is not set
+# CONFIG_SPI_ROCKCHIP is not set
+# CONFIG_SPI_SC18IS602 is not set
+# CONFIG_SPI_SIFIVE is not set
+# CONFIG_SPI_MXIC is not set
+# CONFIG_SPI_THUNDERX is not set
+# CONFIG_SPI_XCOMM is not set
+# CONFIG_SPI_XILINX is not set
+# CONFIG_SPI_ZYNQMP_GQSPI is not set
+# CONFIG_SPI_AMD is not set
+
+#
+# SPI Multiplexer support
+#
+# CONFIG_SPI_MUX is not set
+
+#
+# SPI Protocol Masters
+#
+# CONFIG_SPI_SPIDEV is not set
+# CONFIG_SPI_LOOPBACK_TEST is not set
+# CONFIG_SPI_TLE62X0 is not set
+# CONFIG_SPI_SLAVE is not set
+CONFIG_SPI_DYNAMIC=y
+CONFIG_SPMI=y
+# CONFIG_HSI is not set
+CONFIG_PPS=y
+# CONFIG_PPS_DEBUG is not set
+
+#
+# PPS clients support
+#
+# CONFIG_PPS_CLIENT_KTIMER is not set
+# CONFIG_PPS_CLIENT_LDISC is not set
+# CONFIG_PPS_CLIENT_GPIO is not set
+
+#
+# PPS generators support
+#
+
+#
+# PTP clock support
+#
+CONFIG_PTP_1588_CLOCK=y
+
+#
+# Enable PHYLIB and NETWORK_PHY_TIMESTAMPING to see the additional clocks.
+#
+# CONFIG_PTP_1588_CLOCK_IDT82P33 is not set
+# CONFIG_PTP_1588_CLOCK_IDTCM is not set
+# end of PTP clock support
+
+CONFIG_PINCTRL=y
+CONFIG_GENERIC_PINCTRL_GROUPS=y
+CONFIG_PINMUX=y
+CONFIG_GENERIC_PINMUX_FUNCTIONS=y
+CONFIG_PINCONF=y
+CONFIG_GENERIC_PINCONF=y
+# CONFIG_DEBUG_PINCTRL is not set
+# CONFIG_PINCTRL_AXP209 is not set
+# CONFIG_PINCTRL_MCP23S08 is not set
+CONFIG_PINCTRL_SINGLE=y
+# CONFIG_PINCTRL_SX150X is not set
+# CONFIG_PINCTRL_STMFX is not set
+CONFIG_PINCTRL_MAX77620=y
+# CONFIG_PINCTRL_PALMAS is not set
+# CONFIG_PINCTRL_RK805 is not set
+# CONFIG_PINCTRL_OCELOT is not set
+
+#
+# Renesas pinctrl drivers
+#
+# end of Renesas pinctrl drivers
+
+CONFIG_GPIOLIB=y
+CONFIG_GPIOLIB_FASTPATH_LIMIT=512
+CONFIG_OF_GPIO=y
+CONFIG_GPIOLIB_IRQCHIP=y
+# CONFIG_DEBUG_GPIO is not set
+CONFIG_GPIO_SYSFS=y
+CONFIG_GPIO_CDEV=y
+CONFIG_GPIO_CDEV_V1=y
+CONFIG_GPIO_GENERIC=y
+
+#
+# Memory mapped GPIO drivers
+#
+# CONFIG_GPIO_74XX_MMIO is not set
+CONFIG_GPIO_ALTERA=m
+# CONFIG_GPIO_CADENCE is not set
+CONFIG_GPIO_DAVINCI=y
+CONFIG_GPIO_DWAPB=y
+# CONFIG_GPIO_EXAR is not set
+# CONFIG_GPIO_FTGPIO010 is not set
+# CONFIG_GPIO_GENERIC_PLATFORM is not set
+# CONFIG_GPIO_GRGPIO is not set
+# CONFIG_GPIO_HLWD is not set
+# CONFIG_GPIO_LOGICVC is not set
+CONFIG_GPIO_MB86S7X=y
+CONFIG_GPIO_PL061=y
+# CONFIG_GPIO_SAMA5D2_PIOBU is not set
+# CONFIG_GPIO_SIFIVE is not set
+# CONFIG_GPIO_SYSCON is not set
+CONFIG_GPIO_WCD934X=m
+CONFIG_GPIO_XGENE=y
+# CONFIG_GPIO_XILINX is not set
+# CONFIG_GPIO_AMD_FCH is not set
+# end of Memory mapped GPIO drivers
+
+#
+# I2C GPIO expanders
+#
+# CONFIG_GPIO_ADP5588 is not set
+# CONFIG_GPIO_ADNP is not set
+# CONFIG_GPIO_GW_PLD is not set
+# CONFIG_GPIO_MAX7300 is not set
+CONFIG_GPIO_MAX732X=y
+# CONFIG_GPIO_MAX732X_IRQ is not set
+CONFIG_GPIO_PCA953X=y
+CONFIG_GPIO_PCA953X_IRQ=y
+# CONFIG_GPIO_PCA9570 is not set
+CONFIG_GPIO_PCF857X=y
+CONFIG_GPIO_TPIC2810=m
+# end of I2C GPIO expanders
+
+#
+# MFD GPIO expanders
+#
+CONFIG_GPIO_BD9571MWV=m
+CONFIG_GPIO_MAX77620=y
+# CONFIG_GPIO_PALMAS is not set
+# end of MFD GPIO expanders
+
+#
+# PCI GPIO expanders
+#
+# CONFIG_GPIO_BT8XX is not set
+# CONFIG_GPIO_PCI_IDIO_16 is not set
+# CONFIG_GPIO_PCIE_IDIO_24 is not set
+# CONFIG_GPIO_RDC321X is not set
+# end of PCI GPIO expanders
+
+#
+# SPI GPIO expanders
+#
+# CONFIG_GPIO_74X164 is not set
+# CONFIG_GPIO_MAX3191X is not set
+# CONFIG_GPIO_MAX7301 is not set
+# CONFIG_GPIO_MC33880 is not set
+CONFIG_GPIO_PISOSR=m
+# CONFIG_GPIO_XRA1403 is not set
+# end of SPI GPIO expanders
+
+#
+# USB GPIO expanders
+#
+# end of USB GPIO expanders
+
+# CONFIG_GPIO_AGGREGATOR is not set
+# CONFIG_GPIO_MOCKUP is not set
+CONFIG_W1=m
+
+#
+# 1-wire Bus Masters
+#
+# CONFIG_W1_MASTER_MATROX is not set
+# CONFIG_W1_MASTER_DS2490 is not set
+# CONFIG_W1_MASTER_DS2482 is not set
+# CONFIG_W1_MASTER_DS1WM is not set
+# CONFIG_W1_MASTER_GPIO is not set
+# CONFIG_W1_MASTER_SGI is not set
+# end of 1-wire Bus Masters
+
+#
+# 1-wire Slaves
+#
+# CONFIG_W1_SLAVE_THERM is not set
+# CONFIG_W1_SLAVE_SMEM is not set
+# CONFIG_W1_SLAVE_DS2405 is not set
+# CONFIG_W1_SLAVE_DS2408 is not set
+# CONFIG_W1_SLAVE_DS2413 is not set
+# CONFIG_W1_SLAVE_DS2406 is not set
+# CONFIG_W1_SLAVE_DS2423 is not set
+# CONFIG_W1_SLAVE_DS2805 is not set
+# CONFIG_W1_SLAVE_DS2430 is not set
+# CONFIG_W1_SLAVE_DS2431 is not set
+# CONFIG_W1_SLAVE_DS2433 is not set
+# CONFIG_W1_SLAVE_DS2438 is not set
+# CONFIG_W1_SLAVE_DS250X is not set
+# CONFIG_W1_SLAVE_DS2780 is not set
+# CONFIG_W1_SLAVE_DS2781 is not set
+# CONFIG_W1_SLAVE_DS28E04 is not set
+# CONFIG_W1_SLAVE_DS28E17 is not set
+# end of 1-wire Slaves
+
+CONFIG_POWER_RESET=y
+# CONFIG_POWER_RESET_BRCMSTB is not set
+# CONFIG_POWER_RESET_GPIO is not set
+# CONFIG_POWER_RESET_GPIO_RESTART is not set
+# CONFIG_POWER_RESET_LTC2952 is not set
+# CONFIG_POWER_RESET_RESTART is not set
+CONFIG_POWER_RESET_XGENE=y
+CONFIG_POWER_RESET_SYSCON=y
+# CONFIG_POWER_RESET_SYSCON_POWEROFF is not set
+CONFIG_REBOOT_MODE=y
+CONFIG_SYSCON_REBOOT_MODE=y
+# CONFIG_NVMEM_REBOOT_MODE is not set
+CONFIG_POWER_SUPPLY=y
+# CONFIG_POWER_SUPPLY_DEBUG is not set
+CONFIG_POWER_SUPPLY_HWMON=y
+# CONFIG_PDA_POWER is not set
+# CONFIG_GENERIC_ADC_BATTERY is not set
+# CONFIG_TEST_POWER is not set
+# CONFIG_CHARGER_ADP5061 is not set
+# CONFIG_BATTERY_CW2015 is not set
+# CONFIG_BATTERY_DS2760 is not set
+# CONFIG_BATTERY_DS2780 is not set
+# CONFIG_BATTERY_DS2781 is not set
+# CONFIG_BATTERY_DS2782 is not set
+CONFIG_BATTERY_SBS=m
+# CONFIG_CHARGER_SBS is not set
+# CONFIG_MANAGER_SBS is not set
+CONFIG_BATTERY_BQ27XXX=y
+CONFIG_BATTERY_BQ27XXX_I2C=y
+CONFIG_BATTERY_BQ27XXX_HDQ=m
+# CONFIG_BATTERY_BQ27XXX_DT_UPDATES_NVM is not set
+# CONFIG_AXP20X_POWER is not set
+# CONFIG_AXP288_FUEL_GAUGE is not set
+# CONFIG_BATTERY_MAX17040 is not set
+# CONFIG_BATTERY_MAX17042 is not set
+# CONFIG_BATTERY_MAX1721X is not set
+# CONFIG_CHARGER_ISP1704 is not set
+# CONFIG_CHARGER_MAX8903 is not set
+# CONFIG_CHARGER_LP8727 is not set
+# CONFIG_CHARGER_GPIO is not set
+# CONFIG_CHARGER_MANAGER is not set
+# CONFIG_CHARGER_LT3651 is not set
+# CONFIG_CHARGER_DETECTOR_MAX14656 is not set
+# CONFIG_CHARGER_BQ2415X is not set
+# CONFIG_CHARGER_BQ24190 is not set
+# CONFIG_CHARGER_BQ24257 is not set
+# CONFIG_CHARGER_BQ24735 is not set
+# CONFIG_CHARGER_BQ2515X is not set
+# CONFIG_CHARGER_BQ25890 is not set
+# CONFIG_CHARGER_BQ25980 is not set
+# CONFIG_CHARGER_SMB347 is not set
+# CONFIG_BATTERY_GAUGE_LTC2941 is not set
+# CONFIG_CHARGER_RT9455 is not set
+# CONFIG_CHARGER_CROS_USBPD is not set
+# CONFIG_CHARGER_UCS1002 is not set
+# CONFIG_CHARGER_BD99954 is not set
+CONFIG_HWMON=y
+# CONFIG_HWMON_DEBUG_CHIP is not set
+
+#
+# Native drivers
+#
+# CONFIG_SENSORS_AD7314 is not set
+# CONFIG_SENSORS_AD7414 is not set
+# CONFIG_SENSORS_AD7418 is not set
+# CONFIG_SENSORS_ADM1021 is not set
+# CONFIG_SENSORS_ADM1025 is not set
+# CONFIG_SENSORS_ADM1026 is not set
+# CONFIG_SENSORS_ADM1029 is not set
+# CONFIG_SENSORS_ADM1031 is not set
+# CONFIG_SENSORS_ADM1177 is not set
+# CONFIG_SENSORS_ADM9240 is not set
+# CONFIG_SENSORS_ADT7310 is not set
+# CONFIG_SENSORS_ADT7410 is not set
+# CONFIG_SENSORS_ADT7411 is not set
+# CONFIG_SENSORS_ADT7462 is not set
+# CONFIG_SENSORS_ADT7470 is not set
+# CONFIG_SENSORS_ADT7475 is not set
+# CONFIG_SENSORS_AS370 is not set
+# CONFIG_SENSORS_ASC7621 is not set
+# CONFIG_SENSORS_AXI_FAN_CONTROL is not set
+# CONFIG_SENSORS_ASPEED is not set
+# CONFIG_SENSORS_ATXP1 is not set
+# CONFIG_SENSORS_CORSAIR_CPRO is not set
+# CONFIG_SENSORS_DRIVETEMP is not set
+# CONFIG_SENSORS_DS620 is not set
+# CONFIG_SENSORS_DS1621 is not set
+# CONFIG_SENSORS_I5K_AMB is not set
+# CONFIG_SENSORS_F71805F is not set
+# CONFIG_SENSORS_F71882FG is not set
+# CONFIG_SENSORS_F75375S is not set
+# CONFIG_SENSORS_FTSTEUTATES is not set
+# CONFIG_SENSORS_GL518SM is not set
+# CONFIG_SENSORS_GL520SM is not set
+# CONFIG_SENSORS_G760A is not set
+# CONFIG_SENSORS_G762 is not set
+# CONFIG_SENSORS_GPIO_FAN is not set
+# CONFIG_SENSORS_HIH6130 is not set
+# CONFIG_SENSORS_IBMAEM is not set
+# CONFIG_SENSORS_IBMPEX is not set
+# CONFIG_SENSORS_IIO_HWMON is not set
+# CONFIG_SENSORS_IT87 is not set
+# CONFIG_SENSORS_JC42 is not set
+# CONFIG_SENSORS_POWR1220 is not set
+# CONFIG_SENSORS_LINEAGE is not set
+# CONFIG_SENSORS_LTC2945 is not set
+# CONFIG_SENSORS_LTC2947_I2C is not set
+# CONFIG_SENSORS_LTC2947_SPI is not set
+# CONFIG_SENSORS_LTC2990 is not set
+# CONFIG_SENSORS_LTC4151 is not set
+# CONFIG_SENSORS_LTC4215 is not set
+# CONFIG_SENSORS_LTC4222 is not set
+# CONFIG_SENSORS_LTC4245 is not set
+# CONFIG_SENSORS_LTC4260 is not set
+# CONFIG_SENSORS_LTC4261 is not set
+# CONFIG_SENSORS_MAX1111 is not set
+# CONFIG_SENSORS_MAX16065 is not set
+# CONFIG_SENSORS_MAX1619 is not set
+# CONFIG_SENSORS_MAX1668 is not set
+# CONFIG_SENSORS_MAX197 is not set
+# CONFIG_SENSORS_MAX31722 is not set
+# CONFIG_SENSORS_MAX31730 is not set
+# CONFIG_SENSORS_MAX6621 is not set
+# CONFIG_SENSORS_MAX6639 is not set
+# CONFIG_SENSORS_MAX6642 is not set
+# CONFIG_SENSORS_MAX6650 is not set
+# CONFIG_SENSORS_MAX6697 is not set
+# CONFIG_SENSORS_MAX31790 is not set
+# CONFIG_SENSORS_MCP3021 is not set
+# CONFIG_SENSORS_TC654 is not set
+# CONFIG_SENSORS_MR75203 is not set
+# CONFIG_SENSORS_ADCXX is not set
+# CONFIG_SENSORS_LM63 is not set
+# CONFIG_SENSORS_LM70 is not set
+# CONFIG_SENSORS_LM73 is not set
+# CONFIG_SENSORS_LM75 is not set
+# CONFIG_SENSORS_LM77 is not set
+# CONFIG_SENSORS_LM78 is not set
+# CONFIG_SENSORS_LM80 is not set
+# CONFIG_SENSORS_LM83 is not set
+# CONFIG_SENSORS_LM85 is not set
+# CONFIG_SENSORS_LM87 is not set
+CONFIG_SENSORS_LM90=m
+# CONFIG_SENSORS_LM92 is not set
+# CONFIG_SENSORS_LM93 is not set
+# CONFIG_SENSORS_LM95234 is not set
+# CONFIG_SENSORS_LM95241 is not set
+# CONFIG_SENSORS_LM95245 is not set
+# CONFIG_SENSORS_PC87360 is not set
+# CONFIG_SENSORS_PC87427 is not set
+# CONFIG_SENSORS_NTC_THERMISTOR is not set
+# CONFIG_SENSORS_NCT6683 is not set
+# CONFIG_SENSORS_NCT6775 is not set
+# CONFIG_SENSORS_NCT7802 is not set
+# CONFIG_SENSORS_NCT7904 is not set
+# CONFIG_SENSORS_NPCM7XX is not set
+# CONFIG_SENSORS_OCC_P8_I2C is not set
+# CONFIG_SENSORS_PCF8591 is not set
+# CONFIG_PMBUS is not set
+CONFIG_SENSORS_PWM_FAN=m
+# CONFIG_SENSORS_SHT15 is not set
+# CONFIG_SENSORS_SHT21 is not set
+# CONFIG_SENSORS_SHT3x is not set
+# CONFIG_SENSORS_SHTC1 is not set
+# CONFIG_SENSORS_SIS5595 is not set
+# CONFIG_SENSORS_DME1737 is not set
+# CONFIG_SENSORS_EMC1403 is not set
+# CONFIG_SENSORS_EMC2103 is not set
+# CONFIG_SENSORS_EMC6W201 is not set
+# CONFIG_SENSORS_SMSC47M1 is not set
+# CONFIG_SENSORS_SMSC47M192 is not set
+# CONFIG_SENSORS_SMSC47B397 is not set
+# CONFIG_SENSORS_SCH5627 is not set
+# CONFIG_SENSORS_SCH5636 is not set
+# CONFIG_SENSORS_STTS751 is not set
+# CONFIG_SENSORS_SMM665 is not set
+# CONFIG_SENSORS_ADC128D818 is not set
+# CONFIG_SENSORS_ADS7828 is not set
+# CONFIG_SENSORS_ADS7871 is not set
+# CONFIG_SENSORS_AMC6821 is not set
+# CONFIG_SENSORS_INA209 is not set
+CONFIG_SENSORS_INA2XX=m
+CONFIG_SENSORS_INA3221=m
+# CONFIG_SENSORS_TC74 is not set
+# CONFIG_SENSORS_THMC50 is not set
+# CONFIG_SENSORS_TMP102 is not set
+# CONFIG_SENSORS_TMP103 is not set
+# CONFIG_SENSORS_TMP108 is not set
+# CONFIG_SENSORS_TMP401 is not set
+# CONFIG_SENSORS_TMP421 is not set
+# CONFIG_SENSORS_TMP513 is not set
+# CONFIG_SENSORS_VIA686A is not set
+# CONFIG_SENSORS_VT1211 is not set
+# CONFIG_SENSORS_VT8231 is not set
+# CONFIG_SENSORS_W83773G is not set
+# CONFIG_SENSORS_W83781D is not set
+# CONFIG_SENSORS_W83791D is not set
+# CONFIG_SENSORS_W83792D is not set
+# CONFIG_SENSORS_W83793 is not set
+# CONFIG_SENSORS_W83795 is not set
+# CONFIG_SENSORS_W83L785TS is not set
+# CONFIG_SENSORS_W83L786NG is not set
+# CONFIG_SENSORS_W83627HF is not set
+# CONFIG_SENSORS_W83627EHF is not set
+CONFIG_THERMAL=y
+# CONFIG_THERMAL_NETLINK is not set
+# CONFIG_THERMAL_STATISTICS is not set
+CONFIG_THERMAL_EMERGENCY_POWEROFF_DELAY_MS=0
+CONFIG_THERMAL_HWMON=y
+CONFIG_THERMAL_OF=y
+# CONFIG_THERMAL_WRITABLE_TRIPS is not set
+CONFIG_THERMAL_DEFAULT_GOV_STEP_WISE=y
+# CONFIG_THERMAL_DEFAULT_GOV_FAIR_SHARE is not set
+# CONFIG_THERMAL_DEFAULT_GOV_USER_SPACE is not set
+# CONFIG_THERMAL_DEFAULT_GOV_POWER_ALLOCATOR is not set
+# CONFIG_THERMAL_GOV_FAIR_SHARE is not set
+CONFIG_THERMAL_GOV_STEP_WISE=y
+# CONFIG_THERMAL_GOV_BANG_BANG is not set
+# CONFIG_THERMAL_GOV_USER_SPACE is not set
+CONFIG_THERMAL_GOV_POWER_ALLOCATOR=y
+CONFIG_CPU_THERMAL=y
+CONFIG_CPU_FREQ_THERMAL=y
+# CONFIG_DEVFREQ_THERMAL is not set
+CONFIG_THERMAL_EMULATION=y
+# CONFIG_THERMAL_MMIO is not set
+CONFIG_K3_THERMAL=y
+# CONFIG_MAX77620_THERMAL is not set
+# CONFIG_TI_SOC_THERMAL is not set
+# CONFIG_GENERIC_ADC_THERMAL is not set
+CONFIG_WATCHDOG=y
+CONFIG_WATCHDOG_CORE=y
+# CONFIG_WATCHDOG_NOWAYOUT is not set
+CONFIG_WATCHDOG_HANDLE_BOOT_ENABLED=y
+CONFIG_WATCHDOG_OPEN_TIMEOUT=0
+# CONFIG_WATCHDOG_SYSFS is not set
+
+#
+# Watchdog Pretimeout Governors
+#
+# CONFIG_WATCHDOG_PRETIMEOUT_GOV is not set
+
+#
+# Watchdog Device Drivers
+#
+# CONFIG_SOFT_WATCHDOG is not set
+# CONFIG_GPIO_WATCHDOG is not set
+# CONFIG_XILINX_WATCHDOG is not set
+# CONFIG_ZIIRAVE_WATCHDOG is not set
+CONFIG_ARM_SP805_WATCHDOG=y
+CONFIG_ARM_SBSA_WATCHDOG=y
+# CONFIG_CADENCE_WATCHDOG is not set
+CONFIG_DW_WATCHDOG=y
+CONFIG_K3_RTI_WATCHDOG=m
+# CONFIG_MAX63XX_WATCHDOG is not set
+# CONFIG_MAX77620_WATCHDOG is not set
+CONFIG_ARM_SMC_WATCHDOG=y
+# CONFIG_ALIM7101_WDT is not set
+# CONFIG_I6300ESB_WDT is not set
+# CONFIG_MEN_A21_WDT is not set
+
+#
+# PCI-based Watchdog Cards
+#
+# CONFIG_PCIPCWATCHDOG is not set
+# CONFIG_WDTPCI is not set
+
+#
+# USB-based Watchdog Cards
+#
+# CONFIG_USBPCWATCHDOG is not set
+CONFIG_SSB_POSSIBLE=y
+CONFIG_SSB=m
+CONFIG_SSB_SPROM=y
+CONFIG_SSB_BLOCKIO=y
+CONFIG_SSB_PCIHOST_POSSIBLE=y
+CONFIG_SSB_PCIHOST=y
+CONFIG_SSB_B43_PCI_BRIDGE=y
+CONFIG_SSB_SDIOHOST_POSSIBLE=y
+# CONFIG_SSB_SDIOHOST is not set
+CONFIG_SSB_DRIVER_PCICORE_POSSIBLE=y
+CONFIG_SSB_DRIVER_PCICORE=y
+# CONFIG_SSB_DRIVER_GPIO is not set
+CONFIG_BCMA_POSSIBLE=y
+CONFIG_BCMA=m
+CONFIG_BCMA_BLOCKIO=y
+CONFIG_BCMA_HOST_PCI_POSSIBLE=y
+CONFIG_BCMA_HOST_PCI=y
+# CONFIG_BCMA_HOST_SOC is not set
+CONFIG_BCMA_DRIVER_PCI=y
+# CONFIG_BCMA_DRIVER_GMAC_CMN is not set
+# CONFIG_BCMA_DRIVER_GPIO is not set
+# CONFIG_BCMA_DEBUG is not set
+
+#
+# Multifunction device drivers
+#
+CONFIG_MFD_CORE=y
+# CONFIG_MFD_ACT8945A is not set
+# CONFIG_MFD_AS3711 is not set
+# CONFIG_MFD_AS3722 is not set
+# CONFIG_PMIC_ADP5520 is not set
+# CONFIG_MFD_AAT2870_CORE is not set
+# CONFIG_MFD_ATMEL_FLEXCOM is not set
+# CONFIG_MFD_ATMEL_HLCDC is not set
+# CONFIG_MFD_BCM590XX is not set
+CONFIG_MFD_BD9571MWV=y
+CONFIG_MFD_AXP20X=y
+CONFIG_MFD_AXP20X_I2C=y
+CONFIG_MFD_CROS_EC_DEV=y
+# CONFIG_MFD_MADERA is not set
+# CONFIG_PMIC_DA903X is not set
+# CONFIG_MFD_DA9052_SPI is not set
+# CONFIG_MFD_DA9052_I2C is not set
+# CONFIG_MFD_DA9055 is not set
+# CONFIG_MFD_DA9062 is not set
+# CONFIG_MFD_DA9063 is not set
+# CONFIG_MFD_DA9150 is not set
+# CONFIG_MFD_DLN2 is not set
+# CONFIG_MFD_GATEWORKS_GSC is not set
+# CONFIG_MFD_MC13XXX_SPI is not set
+# CONFIG_MFD_MC13XXX_I2C is not set
+# CONFIG_MFD_MP2629 is not set
+CONFIG_MFD_HI6421_PMIC=y
+# CONFIG_HTC_PASIC3 is not set
+# CONFIG_HTC_I2CPLD is not set
+# CONFIG_LPC_ICH is not set
+# CONFIG_LPC_SCH is not set
+# CONFIG_MFD_IQS62X is not set
+# CONFIG_MFD_JANZ_CMODIO is not set
+# CONFIG_MFD_KEMPLD is not set
+# CONFIG_MFD_88PM800 is not set
+# CONFIG_MFD_88PM805 is not set
+# CONFIG_MFD_88PM860X is not set
+# CONFIG_MFD_MAX14577 is not set
+CONFIG_MFD_MAX77620=y
+# CONFIG_MFD_MAX77650 is not set
+# CONFIG_MFD_MAX77686 is not set
+# CONFIG_MFD_MAX77693 is not set
+# CONFIG_MFD_MAX77843 is not set
+# CONFIG_MFD_MAX8907 is not set
+# CONFIG_MFD_MAX8925 is not set
+# CONFIG_MFD_MAX8997 is not set
+# CONFIG_MFD_MAX8998 is not set
+# CONFIG_MFD_MT6360 is not set
+# CONFIG_MFD_MT6397 is not set
+# CONFIG_MFD_MENF21BMC is not set
+# CONFIG_EZX_PCAP is not set
+# CONFIG_MFD_CPCAP is not set
+# CONFIG_MFD_VIPERBOARD is not set
+# CONFIG_MFD_RETU is not set
+# CONFIG_MFD_PCF50633 is not set
+# CONFIG_MFD_RDC321X is not set
+# CONFIG_MFD_RT5033 is not set
+# CONFIG_MFD_RC5T583 is not set
+CONFIG_MFD_RK808=y
+# CONFIG_MFD_RN5T618 is not set
+CONFIG_MFD_SEC_CORE=y
+# CONFIG_MFD_SI476X_CORE is not set
+# CONFIG_MFD_SM501 is not set
+# CONFIG_MFD_SKY81452 is not set
+# CONFIG_ABX500_CORE is not set
+# CONFIG_MFD_STMPE is not set
+CONFIG_MFD_SYSCON=y
+CONFIG_MFD_TI_AM335X_TSCADC=m
+# CONFIG_MFD_LP3943 is not set
+# CONFIG_MFD_LP8788 is not set
+# CONFIG_MFD_TI_LMU is not set
+CONFIG_MFD_PALMAS=y
+# CONFIG_TPS6105X is not set
+# CONFIG_TPS65010 is not set
+# CONFIG_TPS6507X is not set
+# CONFIG_MFD_TPS65086 is not set
+# CONFIG_MFD_TPS65090 is not set
+# CONFIG_MFD_TPS65217 is not set
+# CONFIG_MFD_TI_LP873X is not set
+# CONFIG_MFD_TI_LP87565 is not set
+# CONFIG_MFD_TPS65218 is not set
+# CONFIG_MFD_TPS6586X is not set
+# CONFIG_MFD_TPS65910 is not set
+# CONFIG_MFD_TPS65912_I2C is not set
+# CONFIG_MFD_TPS65912_SPI is not set
+# CONFIG_MFD_TPS80031 is not set
+# CONFIG_TWL4030_CORE is not set
+# CONFIG_TWL6040_CORE is not set
+# CONFIG_MFD_WL1273_CORE is not set
+# CONFIG_MFD_LM3533 is not set
+# CONFIG_MFD_TC3589X is not set
+# CONFIG_MFD_TQMX86 is not set
+# CONFIG_MFD_VX855 is not set
+# CONFIG_MFD_LOCHNAGAR is not set
+# CONFIG_MFD_ARIZONA_I2C is not set
+# CONFIG_MFD_ARIZONA_SPI is not set
+# CONFIG_MFD_WM8400 is not set
+# CONFIG_MFD_WM831X_I2C is not set
+# CONFIG_MFD_WM831X_SPI is not set
+# CONFIG_MFD_WM8350_I2C is not set
+# CONFIG_MFD_WM8994 is not set
+CONFIG_MFD_ROHM_BD718XX=y
+# CONFIG_MFD_ROHM_BD70528 is not set
+# CONFIG_MFD_ROHM_BD71828 is not set
+# CONFIG_MFD_STPMIC1 is not set
+# CONFIG_MFD_STMFX is not set
+CONFIG_MFD_WCD934X=m
+# CONFIG_RAVE_SP_CORE is not set
+# CONFIG_MFD_INTEL_M10_BMC is not set
+# end of Multifunction device drivers
+
+CONFIG_REGULATOR=y
+# CONFIG_REGULATOR_DEBUG is not set
+CONFIG_REGULATOR_FIXED_VOLTAGE=y
+# CONFIG_REGULATOR_VIRTUAL_CONSUMER is not set
+# CONFIG_REGULATOR_USERSPACE_CONSUMER is not set
+# CONFIG_REGULATOR_88PG86X is not set
+# CONFIG_REGULATOR_ACT8865 is not set
+# CONFIG_REGULATOR_AD5398 is not set
+CONFIG_REGULATOR_AXP20X=y
+CONFIG_REGULATOR_BD718XX=y
+CONFIG_REGULATOR_BD9571MWV=y
+# CONFIG_REGULATOR_CROS_EC is not set
+# CONFIG_REGULATOR_DA9210 is not set
+# CONFIG_REGULATOR_DA9211 is not set
+CONFIG_REGULATOR_FAN53555=y
+# CONFIG_REGULATOR_FAN53880 is not set
+CONFIG_REGULATOR_GPIO=y
+# CONFIG_REGULATOR_HI6421 is not set
+CONFIG_REGULATOR_HI6421V530=y
+# CONFIG_REGULATOR_ISL9305 is not set
+# CONFIG_REGULATOR_ISL6271A is not set
+# CONFIG_REGULATOR_LP3971 is not set
+# CONFIG_REGULATOR_LP3972 is not set
+# CONFIG_REGULATOR_LP872X is not set
+# CONFIG_REGULATOR_LP8755 is not set
+# CONFIG_REGULATOR_LTC3589 is not set
+# CONFIG_REGULATOR_LTC3676 is not set
+# CONFIG_REGULATOR_MAX1586 is not set
+CONFIG_REGULATOR_MAX77620=y
+# CONFIG_REGULATOR_MAX8649 is not set
+# CONFIG_REGULATOR_MAX8660 is not set
+# CONFIG_REGULATOR_MAX8952 is not set
+CONFIG_REGULATOR_MAX8973=y
+# CONFIG_REGULATOR_MAX77826 is not set
+# CONFIG_REGULATOR_MCP16502 is not set
+# CONFIG_REGULATOR_MP5416 is not set
+# CONFIG_REGULATOR_MP8859 is not set
+# CONFIG_REGULATOR_MP886X is not set
+# CONFIG_REGULATOR_MPQ7920 is not set
+# CONFIG_REGULATOR_MT6311 is not set
+CONFIG_REGULATOR_PALMAS=y
+CONFIG_REGULATOR_PCA9450=y
+CONFIG_REGULATOR_PFUZE100=y
+# CONFIG_REGULATOR_PV88060 is not set
+# CONFIG_REGULATOR_PV88080 is not set
+# CONFIG_REGULATOR_PV88090 is not set
+CONFIG_REGULATOR_PWM=y
+CONFIG_REGULATOR_QCOM_SPMI=y
+# CONFIG_REGULATOR_QCOM_USB_VBUS is not set
+# CONFIG_REGULATOR_RASPBERRYPI_TOUCHSCREEN_ATTINY is not set
+CONFIG_REGULATOR_RK808=y
+CONFIG_REGULATOR_ROHM=y
+# CONFIG_REGULATOR_RT4801 is not set
+# CONFIG_REGULATOR_RTMV20 is not set
+# CONFIG_REGULATOR_S2MPA01 is not set
+CONFIG_REGULATOR_S2MPS11=y
+# CONFIG_REGULATOR_S5M8767 is not set
+# CONFIG_REGULATOR_SLG51000 is not set
+# CONFIG_REGULATOR_SY8106A is not set
+# CONFIG_REGULATOR_SY8824X is not set
+# CONFIG_REGULATOR_SY8827N is not set
+# CONFIG_REGULATOR_TPS51632 is not set
+# CONFIG_REGULATOR_TPS62360 is not set
+# CONFIG_REGULATOR_TPS65023 is not set
+# CONFIG_REGULATOR_TPS6507X is not set
+# CONFIG_REGULATOR_TPS65132 is not set
+# CONFIG_REGULATOR_TPS6524X is not set
+CONFIG_REGULATOR_VCTRL=m
+# CONFIG_REGULATOR_QCOM_LABIBB is not set
+# CONFIG_RC_CORE is not set
+CONFIG_CEC_CORE=y
+CONFIG_CEC_NOTIFIER=y
+CONFIG_MEDIA_CEC_SUPPORT=y
+# CONFIG_CEC_CH7322 is not set
+# CONFIG_CEC_CROS_EC is not set
+# CONFIG_CEC_GPIO is not set
+# CONFIG_USB_PULSE8_CEC is not set
+# CONFIG_USB_RAINSHADOW_CEC is not set
+CONFIG_MEDIA_SUPPORT=y
+# CONFIG_MEDIA_SUPPORT_FILTER is not set
+# CONFIG_MEDIA_SUBDRV_AUTOSELECT is not set
+
+#
+# Media device types
+#
+CONFIG_MEDIA_CAMERA_SUPPORT=y
+CONFIG_MEDIA_ANALOG_TV_SUPPORT=y
+CONFIG_MEDIA_DIGITAL_TV_SUPPORT=y
+CONFIG_MEDIA_RADIO_SUPPORT=y
+CONFIG_MEDIA_SDR_SUPPORT=y
+CONFIG_MEDIA_PLATFORM_SUPPORT=y
+CONFIG_MEDIA_TEST_SUPPORT=y
+# end of Media device types
+
+#
+# Media core support
+#
+CONFIG_VIDEO_DEV=y
+CONFIG_MEDIA_CONTROLLER=y
+CONFIG_DVB_CORE=y
+# end of Media core support
+
+#
+# Video4Linux options
+#
+CONFIG_VIDEO_V4L2=y
+CONFIG_VIDEO_V4L2_I2C=y
+CONFIG_VIDEO_V4L2_SUBDEV_API=y
+# CONFIG_VIDEO_ADV_DEBUG is not set
+# CONFIG_VIDEO_FIXED_MINOR_RANGES is not set
+CONFIG_V4L2_MEM2MEM_DEV=m
+CONFIG_V4L2_FWNODE=m
+# end of Video4Linux options
+
+#
+# Media controller options
+#
+# CONFIG_MEDIA_CONTROLLER_DVB is not set
+# end of Media controller options
+
+#
+# Digital TV options
+#
+# CONFIG_DVB_MMAP is not set
+# CONFIG_DVB_NET is not set
+CONFIG_DVB_MAX_ADAPTERS=16
+CONFIG_DVB_DYNAMIC_MINORS=y
+# CONFIG_DVB_DEMUX_SECTION_LOSS_LOG is not set
+# CONFIG_DVB_ULE_DEBUG is not set
+# end of Digital TV options
+
+#
+# Media drivers
+#
+CONFIG_MEDIA_USB_SUPPORT=y
+
+#
+# Webcam devices
+#
+CONFIG_USB_VIDEO_CLASS=m
+CONFIG_USB_VIDEO_CLASS_INPUT_EVDEV=y
+CONFIG_USB_GSPCA=m
+# CONFIG_USB_M5602 is not set
+# CONFIG_USB_STV06XX is not set
+# CONFIG_USB_GL860 is not set
+# CONFIG_USB_GSPCA_BENQ is not set
+# CONFIG_USB_GSPCA_CONEX is not set
+# CONFIG_USB_GSPCA_CPIA1 is not set
+# CONFIG_USB_GSPCA_DTCS033 is not set
+# CONFIG_USB_GSPCA_ETOMS is not set
+# CONFIG_USB_GSPCA_FINEPIX is not set
+# CONFIG_USB_GSPCA_JEILINJ is not set
+# CONFIG_USB_GSPCA_JL2005BCD is not set
+# CONFIG_USB_GSPCA_KINECT is not set
+# CONFIG_USB_GSPCA_KONICA is not set
+# CONFIG_USB_GSPCA_MARS is not set
+# CONFIG_USB_GSPCA_MR97310A is not set
+# CONFIG_USB_GSPCA_NW80X is not set
+# CONFIG_USB_GSPCA_OV519 is not set
+# CONFIG_USB_GSPCA_OV534 is not set
+# CONFIG_USB_GSPCA_OV534_9 is not set
+# CONFIG_USB_GSPCA_PAC207 is not set
+# CONFIG_USB_GSPCA_PAC7302 is not set
+# CONFIG_USB_GSPCA_PAC7311 is not set
+# CONFIG_USB_GSPCA_SE401 is not set
+# CONFIG_USB_GSPCA_SN9C2028 is not set
+# CONFIG_USB_GSPCA_SN9C20X is not set
+# CONFIG_USB_GSPCA_SONIXB is not set
+# CONFIG_USB_GSPCA_SONIXJ is not set
+# CONFIG_USB_GSPCA_SPCA500 is not set
+# CONFIG_USB_GSPCA_SPCA501 is not set
+# CONFIG_USB_GSPCA_SPCA505 is not set
+# CONFIG_USB_GSPCA_SPCA506 is not set
+# CONFIG_USB_GSPCA_SPCA508 is not set
+# CONFIG_USB_GSPCA_SPCA561 is not set
+# CONFIG_USB_GSPCA_SPCA1528 is not set
+# CONFIG_USB_GSPCA_SQ905 is not set
+# CONFIG_USB_GSPCA_SQ905C is not set
+# CONFIG_USB_GSPCA_SQ930X is not set
+# CONFIG_USB_GSPCA_STK014 is not set
+# CONFIG_USB_GSPCA_STK1135 is not set
+# CONFIG_USB_GSPCA_STV0680 is not set
+# CONFIG_USB_GSPCA_SUNPLUS is not set
+# CONFIG_USB_GSPCA_T613 is not set
+# CONFIG_USB_GSPCA_TOPRO is not set
+# CONFIG_USB_GSPCA_TOUPTEK is not set
+# CONFIG_USB_GSPCA_TV8532 is not set
+# CONFIG_USB_GSPCA_VC032X is not set
+# CONFIG_USB_GSPCA_VICAM is not set
+# CONFIG_USB_GSPCA_XIRLINK_CIT is not set
+# CONFIG_USB_GSPCA_ZC3XX is not set
+# CONFIG_USB_PWC is not set
+# CONFIG_VIDEO_CPIA2 is not set
+# CONFIG_USB_ZR364XX is not set
+# CONFIG_USB_STKWEBCAM is not set
+# CONFIG_USB_S2255 is not set
+# CONFIG_VIDEO_USBTV is not set
+
+#
+# Analog TV USB devices
+#
+# CONFIG_VIDEO_PVRUSB2 is not set
+# CONFIG_VIDEO_HDPVR is not set
+# CONFIG_VIDEO_STK1160_COMMON is not set
+# CONFIG_VIDEO_GO7007 is not set
+
+#
+# Analog/digital TV USB devices
+#
+# CONFIG_VIDEO_AU0828 is not set
+# CONFIG_VIDEO_CX231XX is not set
+
+#
+# Digital TV USB devices
+#
+# CONFIG_DVB_USB_V2 is not set
+# CONFIG_DVB_TTUSB_BUDGET is not set
+# CONFIG_DVB_TTUSB_DEC is not set
+# CONFIG_SMS_USB_DRV is not set
+# CONFIG_DVB_B2C2_FLEXCOP_USB is not set
+# CONFIG_DVB_AS102 is not set
+
+#
+# Webcam, TV (analog/digital) USB devices
+#
+# CONFIG_VIDEO_EM28XX is not set
+
+#
+# Software defined radio USB devices
+#
+# CONFIG_USB_AIRSPY is not set
+# CONFIG_USB_HACKRF is not set
+# CONFIG_USB_MSI2500 is not set
+# CONFIG_MEDIA_PCI_SUPPORT is not set
+CONFIG_RADIO_ADAPTERS=y
+# CONFIG_RADIO_SI470X is not set
+# CONFIG_RADIO_SI4713 is not set
+# CONFIG_USB_MR800 is not set
+# CONFIG_USB_DSBR is not set
+# CONFIG_RADIO_MAXIRADIO is not set
+# CONFIG_RADIO_SHARK is not set
+# CONFIG_RADIO_SHARK2 is not set
+# CONFIG_USB_KEENE is not set
+# CONFIG_USB_RAREMONO is not set
+# CONFIG_USB_MA901 is not set
+# CONFIG_RADIO_TEA5764 is not set
+# CONFIG_RADIO_SAA7706H is not set
+# CONFIG_RADIO_TEF6862 is not set
+# CONFIG_RADIO_WL1273 is not set
+CONFIG_VIDEOBUF2_CORE=m
+CONFIG_VIDEOBUF2_V4L2=m
+CONFIG_VIDEOBUF2_MEMOPS=m
+CONFIG_VIDEOBUF2_DMA_CONTIG=m
+CONFIG_VIDEOBUF2_VMALLOC=m
+CONFIG_VIDEOBUF2_DMA_SG=m
+CONFIG_V4L_PLATFORM_DRIVERS=y
+# CONFIG_VIDEO_CAFE_CCIC is not set
+CONFIG_VIDEO_CADENCE=y
+CONFIG_VIDEO_CADENCE_CSI2RX=m
+# CONFIG_VIDEO_CADENCE_CSI2TX is not set
+# CONFIG_VIDEO_ASPEED is not set
+# CONFIG_VIDEO_MUX is not set
+# CONFIG_VIDEO_XILINX is not set
+CONFIG_VIDEO_TI_CAL=m
+CONFIG_VIDEO_TI_J721E_CSI2RX=m
+CONFIG_V4L_MEM2MEM_DRIVERS=y
+# CONFIG_VIDEO_MEM2MEM_DEINTERLACE is not set
+CONFIG_VIDEO_IMG_VXD_DEC=m
+# CONFIG_DVB_PLATFORM_DRIVERS is not set
+CONFIG_SDR_PLATFORM_DRIVERS=y
+
+#
+# MMC/SDIO DVB adapters
+#
+# CONFIG_SMS_SDIO_DRV is not set
+# CONFIG_V4L_TEST_DRIVERS is not set
+# CONFIG_DVB_TEST_DRIVERS is not set
+# end of Media drivers
+
+#
+# Media ancillary drivers
+#
+CONFIG_MEDIA_ATTACH=y
+
+#
+# Audio decoders, processors and mixers
+#
+# CONFIG_VIDEO_TVAUDIO is not set
+# CONFIG_VIDEO_TDA7432 is not set
+# CONFIG_VIDEO_TDA9840 is not set
+# CONFIG_VIDEO_TDA1997X is not set
+# CONFIG_VIDEO_TEA6415C is not set
+# CONFIG_VIDEO_TEA6420 is not set
+# CONFIG_VIDEO_MSP3400 is not set
+# CONFIG_VIDEO_CS3308 is not set
+# CONFIG_VIDEO_CS5345 is not set
+# CONFIG_VIDEO_CS53L32A is not set
+# CONFIG_VIDEO_TLV320AIC23B is not set
+# CONFIG_VIDEO_UDA1342 is not set
+# CONFIG_VIDEO_WM8775 is not set
+# CONFIG_VIDEO_WM8739 is not set
+# CONFIG_VIDEO_VP27SMPX is not set
+# CONFIG_VIDEO_SONY_BTF_MPX is not set
+# end of Audio decoders, processors and mixers
+
+#
+# RDS decoders
+#
+# CONFIG_VIDEO_SAA6588 is not set
+# end of RDS decoders
+
+#
+# Video decoders
+#
+# CONFIG_VIDEO_ADV7180 is not set
+# CONFIG_VIDEO_ADV7183 is not set
+# CONFIG_VIDEO_ADV748X is not set
+# CONFIG_VIDEO_ADV7604 is not set
+# CONFIG_VIDEO_ADV7842 is not set
+# CONFIG_VIDEO_BT819 is not set
+# CONFIG_VIDEO_BT856 is not set
+# CONFIG_VIDEO_BT866 is not set
+# CONFIG_VIDEO_KS0127 is not set
+# CONFIG_VIDEO_ML86V7667 is not set
+# CONFIG_VIDEO_SAA7110 is not set
+# CONFIG_VIDEO_SAA711X is not set
+# CONFIG_VIDEO_TC358743 is not set
+# CONFIG_VIDEO_TVP514X is not set
+# CONFIG_VIDEO_TVP5150 is not set
+# CONFIG_VIDEO_TVP7002 is not set
+# CONFIG_VIDEO_TW2804 is not set
+# CONFIG_VIDEO_TW9903 is not set
+# CONFIG_VIDEO_TW9906 is not set
+# CONFIG_VIDEO_TW9910 is not set
+# CONFIG_VIDEO_VPX3220 is not set
+# CONFIG_VIDEO_MAX9286 is not set
+
+#
+# Video and audio decoders
+#
+# CONFIG_VIDEO_SAA717X is not set
+# CONFIG_VIDEO_CX25840 is not set
+# end of Video decoders
+
+#
+# Video encoders
+#
+# CONFIG_VIDEO_SAA7127 is not set
+# CONFIG_VIDEO_SAA7185 is not set
+# CONFIG_VIDEO_ADV7170 is not set
+# CONFIG_VIDEO_ADV7175 is not set
+# CONFIG_VIDEO_ADV7343 is not set
+# CONFIG_VIDEO_ADV7393 is not set
+# CONFIG_VIDEO_ADV7511 is not set
+# CONFIG_VIDEO_AD9389B is not set
+# CONFIG_VIDEO_AK881X is not set
+# CONFIG_VIDEO_THS8200 is not set
+# end of Video encoders
+
+#
+# Video improvement chips
+#
+# CONFIG_VIDEO_UPD64031A is not set
+# CONFIG_VIDEO_UPD64083 is not set
+# end of Video improvement chips
+
+#
+# Audio/Video compression chips
+#
+# CONFIG_VIDEO_SAA6752HS is not set
+# end of Audio/Video compression chips
+
+#
+# SDR tuner chips
+#
+# CONFIG_SDR_MAX2175 is not set
+# end of SDR tuner chips
+
+#
+# Miscellaneous helper chips
+#
+# CONFIG_VIDEO_THS7303 is not set
+# CONFIG_VIDEO_M52790 is not set
+# CONFIG_VIDEO_I2C is not set
+# CONFIG_VIDEO_ST_MIPID02 is not set
+# end of Miscellaneous helper chips
+
+#
+# Camera sensor devices
+#
+# CONFIG_VIDEO_HI556 is not set
+# CONFIG_VIDEO_IMX214 is not set
+CONFIG_VIDEO_IMX219=m
+# CONFIG_VIDEO_IMX258 is not set
+# CONFIG_VIDEO_IMX274 is not set
+# CONFIG_VIDEO_IMX290 is not set
+# CONFIG_VIDEO_IMX319 is not set
+# CONFIG_VIDEO_IMX355 is not set
+# CONFIG_VIDEO_OV2640 is not set
+CONFIG_VIDEO_OV2659=m
+# CONFIG_VIDEO_OV2680 is not set
+# CONFIG_VIDEO_OV2685 is not set
+CONFIG_VIDEO_OV5640=m
+CONFIG_VIDEO_OV5645=m
+# CONFIG_VIDEO_OV5647 is not set
+# CONFIG_VIDEO_OV6650 is not set
+# CONFIG_VIDEO_OV5670 is not set
+# CONFIG_VIDEO_OV5675 is not set
+# CONFIG_VIDEO_OV5695 is not set
+# CONFIG_VIDEO_OV7251 is not set
+# CONFIG_VIDEO_OV772X is not set
+# CONFIG_VIDEO_OV7640 is not set
+# CONFIG_VIDEO_OV7670 is not set
+# CONFIG_VIDEO_OV7740 is not set
+# CONFIG_VIDEO_OV8856 is not set
+# CONFIG_VIDEO_OV9640 is not set
+# CONFIG_VIDEO_OV9650 is not set
+# CONFIG_VIDEO_OV13858 is not set
+# CONFIG_VIDEO_VS6624 is not set
+# CONFIG_VIDEO_MT9M001 is not set
+# CONFIG_VIDEO_MT9M032 is not set
+# CONFIG_VIDEO_MT9M111 is not set
+# CONFIG_VIDEO_MT9P031 is not set
+# CONFIG_VIDEO_MT9T001 is not set
+# CONFIG_VIDEO_MT9T112 is not set
+# CONFIG_VIDEO_MT9V011 is not set
+# CONFIG_VIDEO_MT9V032 is not set
+# CONFIG_VIDEO_MT9V111 is not set
+# CONFIG_VIDEO_SR030PC30 is not set
+# CONFIG_VIDEO_NOON010PC30 is not set
+# CONFIG_VIDEO_M5MOLS is not set
+# CONFIG_VIDEO_RDACM20 is not set
+# CONFIG_VIDEO_RJ54N1 is not set
+# CONFIG_VIDEO_S5K6AA is not set
+# CONFIG_VIDEO_S5K6A3 is not set
+# CONFIG_VIDEO_S5K4ECGX is not set
+# CONFIG_VIDEO_S5K5BAF is not set
+# CONFIG_VIDEO_SMIAPP is not set
+# CONFIG_VIDEO_ET8EK8 is not set
+# CONFIG_VIDEO_S5C73M3 is not set
+# end of Camera sensor devices
+
+#
+# Lens drivers
+#
+# CONFIG_VIDEO_AD5820 is not set
+# CONFIG_VIDEO_AK7375 is not set
+# CONFIG_VIDEO_DW9714 is not set
+# CONFIG_VIDEO_DW9768 is not set
+# CONFIG_VIDEO_DW9807_VCM is not set
+# end of Lens drivers
+
+#
+# Flash devices
+#
+# CONFIG_VIDEO_ADP1653 is not set
+# CONFIG_VIDEO_LM3560 is not set
+# CONFIG_VIDEO_LM3646 is not set
+# end of Flash devices
+
+#
+# SPI helper chips
+#
+# CONFIG_VIDEO_GS1662 is not set
+# end of SPI helper chips
+
+#
+# Media SPI Adapters
+#
+CONFIG_CXD2880_SPI_DRV=m
+# end of Media SPI Adapters
+
+CONFIG_MEDIA_TUNER=y
+
+#
+# Customize TV tuners
+#
+CONFIG_MEDIA_TUNER_SIMPLE=m
+CONFIG_MEDIA_TUNER_TDA18250=m
+CONFIG_MEDIA_TUNER_TDA8290=m
+CONFIG_MEDIA_TUNER_TDA827X=m
+CONFIG_MEDIA_TUNER_TDA18271=m
+CONFIG_MEDIA_TUNER_TDA9887=m
+CONFIG_MEDIA_TUNER_TEA5761=m
+CONFIG_MEDIA_TUNER_TEA5767=m
+CONFIG_MEDIA_TUNER_MSI001=m
+CONFIG_MEDIA_TUNER_MT20XX=m
+CONFIG_MEDIA_TUNER_MT2060=m
+CONFIG_MEDIA_TUNER_MT2063=m
+CONFIG_MEDIA_TUNER_MT2266=m
+CONFIG_MEDIA_TUNER_MT2131=m
+CONFIG_MEDIA_TUNER_QT1010=m
+CONFIG_MEDIA_TUNER_XC2028=m
+CONFIG_MEDIA_TUNER_XC5000=m
+CONFIG_MEDIA_TUNER_XC4000=m
+CONFIG_MEDIA_TUNER_MXL5005S=m
+CONFIG_MEDIA_TUNER_MXL5007T=m
+CONFIG_MEDIA_TUNER_MC44S803=m
+CONFIG_MEDIA_TUNER_MAX2165=m
+CONFIG_MEDIA_TUNER_TDA18218=m
+CONFIG_MEDIA_TUNER_FC0011=m
+CONFIG_MEDIA_TUNER_FC0012=m
+CONFIG_MEDIA_TUNER_FC0013=m
+CONFIG_MEDIA_TUNER_TDA18212=m
+CONFIG_MEDIA_TUNER_E4000=m
+CONFIG_MEDIA_TUNER_FC2580=m
+CONFIG_MEDIA_TUNER_M88RS6000T=m
+CONFIG_MEDIA_TUNER_TUA9001=m
+CONFIG_MEDIA_TUNER_SI2157=m
+CONFIG_MEDIA_TUNER_IT913X=m
+CONFIG_MEDIA_TUNER_R820T=m
+CONFIG_MEDIA_TUNER_MXL301RF=m
+CONFIG_MEDIA_TUNER_QM1D1C0042=m
+CONFIG_MEDIA_TUNER_QM1D1B0004=m
+# end of Customize TV tuners
+
+#
+# Customise DVB Frontends
+#
+
+#
+# Multistandard (satellite) frontends
+#
+CONFIG_DVB_STB0899=m
+CONFIG_DVB_STB6100=m
+CONFIG_DVB_STV090x=m
+CONFIG_DVB_STV0910=m
+CONFIG_DVB_STV6110x=m
+CONFIG_DVB_STV6111=m
+CONFIG_DVB_MXL5XX=m
+CONFIG_DVB_M88DS3103=m
+
+#
+# Multistandard (cable + terrestrial) frontends
+#
+CONFIG_DVB_DRXK=m
+CONFIG_DVB_TDA18271C2DD=m
+CONFIG_DVB_SI2165=m
+CONFIG_DVB_MN88472=m
+CONFIG_DVB_MN88473=m
+
+#
+# DVB-S (satellite) frontends
+#
+CONFIG_DVB_CX24110=m
+CONFIG_DVB_CX24123=m
+CONFIG_DVB_MT312=m
+CONFIG_DVB_ZL10036=m
+CONFIG_DVB_ZL10039=m
+CONFIG_DVB_S5H1420=m
+CONFIG_DVB_STV0288=m
+CONFIG_DVB_STB6000=m
+CONFIG_DVB_STV0299=m
+CONFIG_DVB_STV6110=m
+CONFIG_DVB_STV0900=m
+CONFIG_DVB_TDA8083=m
+CONFIG_DVB_TDA10086=m
+CONFIG_DVB_TDA8261=m
+CONFIG_DVB_VES1X93=m
+CONFIG_DVB_TUNER_ITD1000=m
+CONFIG_DVB_TUNER_CX24113=m
+CONFIG_DVB_TDA826X=m
+CONFIG_DVB_TUA6100=m
+CONFIG_DVB_CX24116=m
+CONFIG_DVB_CX24117=m
+CONFIG_DVB_CX24120=m
+CONFIG_DVB_SI21XX=m
+CONFIG_DVB_TS2020=m
+CONFIG_DVB_DS3000=m
+CONFIG_DVB_MB86A16=m
+CONFIG_DVB_TDA10071=m
+
+#
+# DVB-T (terrestrial) frontends
+#
+CONFIG_DVB_SP8870=m
+CONFIG_DVB_SP887X=m
+CONFIG_DVB_CX22700=m
+CONFIG_DVB_CX22702=m
+CONFIG_DVB_S5H1432=m
+CONFIG_DVB_DRXD=m
+CONFIG_DVB_L64781=m
+CONFIG_DVB_TDA1004X=m
+CONFIG_DVB_NXT6000=m
+CONFIG_DVB_MT352=m
+CONFIG_DVB_ZL10353=m
+CONFIG_DVB_DIB3000MB=m
+CONFIG_DVB_DIB3000MC=m
+CONFIG_DVB_DIB7000M=m
+CONFIG_DVB_DIB7000P=m
+CONFIG_DVB_DIB9000=m
+CONFIG_DVB_TDA10048=m
+CONFIG_DVB_AF9013=m
+CONFIG_DVB_EC100=m
+CONFIG_DVB_STV0367=m
+CONFIG_DVB_CXD2820R=m
+CONFIG_DVB_CXD2841ER=m
+CONFIG_DVB_RTL2830=m
+CONFIG_DVB_RTL2832=m
+CONFIG_DVB_RTL2832_SDR=m
+CONFIG_DVB_SI2168=m
+CONFIG_DVB_ZD1301_DEMOD=m
+CONFIG_DVB_CXD2880=m
+
+#
+# DVB-C (cable) frontends
+#
+CONFIG_DVB_VES1820=m
+CONFIG_DVB_TDA10021=m
+CONFIG_DVB_TDA10023=m
+CONFIG_DVB_STV0297=m
+
+#
+# ATSC (North American/Korean Terrestrial/Cable DTV) frontends
+#
+CONFIG_DVB_NXT200X=m
+CONFIG_DVB_OR51211=m
+CONFIG_DVB_OR51132=m
+CONFIG_DVB_BCM3510=m
+CONFIG_DVB_LGDT330X=m
+CONFIG_DVB_LGDT3305=m
+CONFIG_DVB_LGDT3306A=m
+CONFIG_DVB_LG2160=m
+CONFIG_DVB_S5H1409=m
+CONFIG_DVB_AU8522=m
+CONFIG_DVB_AU8522_DTV=m
+CONFIG_DVB_AU8522_V4L=m
+CONFIG_DVB_S5H1411=m
+
+#
+# ISDB-T (terrestrial) frontends
+#
+CONFIG_DVB_S921=m
+CONFIG_DVB_DIB8000=m
+CONFIG_DVB_MB86A20S=m
+
+#
+# ISDB-S (satellite) & ISDB-T (terrestrial) frontends
+#
+CONFIG_DVB_TC90522=m
+CONFIG_DVB_MN88443X=m
+
+#
+# Digital terrestrial only tuners/PLL
+#
+CONFIG_DVB_PLL=m
+CONFIG_DVB_TUNER_DIB0070=m
+CONFIG_DVB_TUNER_DIB0090=m
+
+#
+# SEC control devices for DVB-S
+#
+CONFIG_DVB_DRX39XYJ=m
+CONFIG_DVB_LNBH25=m
+CONFIG_DVB_LNBH29=m
+CONFIG_DVB_LNBP21=m
+CONFIG_DVB_LNBP22=m
+CONFIG_DVB_ISL6405=m
+CONFIG_DVB_ISL6421=m
+CONFIG_DVB_ISL6423=m
+CONFIG_DVB_A8293=m
+CONFIG_DVB_LGS8GL5=m
+CONFIG_DVB_LGS8GXX=m
+CONFIG_DVB_ATBM8830=m
+CONFIG_DVB_TDA665x=m
+CONFIG_DVB_IX2505V=m
+CONFIG_DVB_M88RS2000=m
+CONFIG_DVB_AF9033=m
+CONFIG_DVB_HORUS3A=m
+CONFIG_DVB_ASCOT2E=m
+CONFIG_DVB_HELENE=m
+
+#
+# Common Interface (EN50221) controller drivers
+#
+CONFIG_DVB_CXD2099=m
+CONFIG_DVB_SP2=m
+# end of Customise DVB Frontends
+
+#
+# Tools to develop new frontends
+#
+# CONFIG_DVB_DUMMY_FE is not set
+# end of Media ancillary drivers
+
+#
+# Graphics support
+#
+# CONFIG_VGA_ARB is not set
+CONFIG_DRM=y
+CONFIG_DRM_MIPI_DSI=y
+# CONFIG_DRM_DP_AUX_CHARDEV is not set
+# CONFIG_DRM_DEBUG_MM is not set
+# CONFIG_DRM_DEBUG_SELFTEST is not set
+CONFIG_DRM_KMS_HELPER=y
+CONFIG_DRM_KMS_FB_HELPER=y
+# CONFIG_DRM_DEBUG_DP_MST_TOPOLOGY_REFS is not set
+CONFIG_DRM_FBDEV_EMULATION=y
+CONFIG_DRM_FBDEV_OVERALLOC=100
+# CONFIG_DRM_FBDEV_LEAK_PHYS_SMEM is not set
+# CONFIG_DRM_LOAD_EDID_FIRMWARE is not set
+# CONFIG_DRM_DP_CEC is not set
+CONFIG_DRM_GEM_CMA_HELPER=y
+CONFIG_DRM_KMS_CMA_HELPER=y
+CONFIG_DRM_VM=y
+
+#
+# I2C encoder or helper chips
+#
+# CONFIG_DRM_I2C_CH7006 is not set
+# CONFIG_DRM_I2C_SIL164 is not set
+CONFIG_DRM_I2C_NXP_TDA998X=y
+# CONFIG_DRM_I2C_NXP_TDA9950 is not set
+# end of I2C encoder or helper chips
+
+#
+# ARM devices
+#
+# CONFIG_DRM_HDLCD is not set
+CONFIG_DRM_MALI_DISPLAY=m
+# CONFIG_DRM_KOMEDA is not set
+# end of ARM devices
+
+# CONFIG_DRM_RADEON is not set
+# CONFIG_DRM_AMDGPU is not set
+# CONFIG_DRM_NOUVEAU is not set
+# CONFIG_DRM_VGEM is not set
+# CONFIG_DRM_VKMS is not set
+# CONFIG_DRM_UDL is not set
+# CONFIG_DRM_AST is not set
+# CONFIG_DRM_MGAG200 is not set
+CONFIG_DRM_RCAR_DW_HDMI=m
+# CONFIG_DRM_RCAR_LVDS is not set
+# CONFIG_DRM_QXL is not set
+# CONFIG_DRM_BOCHS is not set
+# CONFIG_DRM_VIRTIO_GPU is not set
+CONFIG_DRM_PANEL=y
+
+#
+# Display Panels
+#
+# CONFIG_DRM_PANEL_ARM_VERSATILE is not set
+# CONFIG_DRM_PANEL_ASUS_Z00T_TM5P5_NT35596 is not set
+# CONFIG_DRM_PANEL_BOE_HIMAX8279D is not set
+# CONFIG_DRM_PANEL_BOE_TV101WUM_NL6 is not set
+CONFIG_DRM_PANEL_LVDS=m
+CONFIG_DRM_PANEL_SIMPLE=y
+# CONFIG_DRM_PANEL_ELIDA_KD35T133 is not set
+# CONFIG_DRM_PANEL_FEIXIN_K101_IM2BA02 is not set
+# CONFIG_DRM_PANEL_FEIYANG_FY07024DI26A30D is not set
+# CONFIG_DRM_PANEL_ILITEK_IL9322 is not set
+# CONFIG_DRM_PANEL_ILITEK_ILI9881C is not set
+# CONFIG_DRM_PANEL_INNOLUX_P079ZCA is not set
+# CONFIG_DRM_PANEL_JDI_LT070ME05000 is not set
+# CONFIG_DRM_PANEL_KINGDISPLAY_KD097D04 is not set
+# CONFIG_DRM_PANEL_LEADTEK_LTK050H3146W is not set
+# CONFIG_DRM_PANEL_LEADTEK_LTK500HD1829 is not set
+# CONFIG_DRM_PANEL_SAMSUNG_LD9040 is not set
+# CONFIG_DRM_PANEL_LG_LB035Q02 is not set
+# CONFIG_DRM_PANEL_LG_LG4573 is not set
+# CONFIG_DRM_PANEL_NEC_NL8048HL11 is not set
+# CONFIG_DRM_PANEL_NOVATEK_NT35510 is not set
+# CONFIG_DRM_PANEL_NOVATEK_NT39016 is not set
+# CONFIG_DRM_PANEL_MANTIX_MLAF057WE51 is not set
+# CONFIG_DRM_PANEL_OLIMEX_LCD_OLINUXINO is not set
+# CONFIG_DRM_PANEL_ORISETECH_OTM8009A is not set
+CONFIG_DRM_PANEL_OSD_OSD101T2587_53TS=y
+# CONFIG_DRM_PANEL_PANASONIC_VVX10F034N00 is not set
+# CONFIG_DRM_PANEL_RASPBERRYPI_TOUCHSCREEN is not set
+CONFIG_DRM_PANEL_RAYDIUM_RM67191=m
+# CONFIG_DRM_PANEL_RAYDIUM_RM68200 is not set
+# CONFIG_DRM_PANEL_RONBO_RB070D30 is not set
+# CONFIG_DRM_PANEL_SAMSUNG_S6D16D0 is not set
+# CONFIG_DRM_PANEL_SAMSUNG_S6E3HA2 is not set
+# CONFIG_DRM_PANEL_SAMSUNG_S6E63J0X03 is not set
+# CONFIG_DRM_PANEL_SAMSUNG_S6E63M0 is not set
+# CONFIG_DRM_PANEL_SAMSUNG_S6E88A0_AMS452EF01 is not set
+# CONFIG_DRM_PANEL_SAMSUNG_S6E8AA0 is not set
+# CONFIG_DRM_PANEL_SEIKO_43WVF1G is not set
+# CONFIG_DRM_PANEL_SHARP_LQ101R1SX01 is not set
+# CONFIG_DRM_PANEL_SHARP_LS037V7DW01 is not set
+# CONFIG_DRM_PANEL_SHARP_LS043T1LE01 is not set
+# CONFIG_DRM_PANEL_SITRONIX_ST7701 is not set
+CONFIG_DRM_PANEL_SITRONIX_ST7703=m
+# CONFIG_DRM_PANEL_SITRONIX_ST7789V is not set
+# CONFIG_DRM_PANEL_SONY_ACX424AKP is not set
+# CONFIG_DRM_PANEL_SONY_ACX565AKM is not set
+# CONFIG_DRM_PANEL_TPO_TD028TTEC1 is not set
+# CONFIG_DRM_PANEL_TPO_TD043MTEA1 is not set
+# CONFIG_DRM_PANEL_TPO_TPG110 is not set
+CONFIG_DRM_PANEL_TRULY_NT35597_WQXGA=m
+# CONFIG_DRM_PANEL_VISIONOX_RM69299 is not set
+# CONFIG_DRM_PANEL_XINPENG_XPP055C272 is not set
+# end of Display Panels
+
+CONFIG_DRM_BRIDGE=y
+CONFIG_DRM_PANEL_BRIDGE=y
+
+#
+# Display Interface Bridges
+#
+# CONFIG_DRM_CDNS_DSI is not set
+# CONFIG_DRM_CHRONTEL_CH7033 is not set
+CONFIG_DRM_DISPLAY_CONNECTOR=y
+CONFIG_DRM_LONTIUM_LT9611=m
+CONFIG_DRM_LVDS_CODEC=y
+# CONFIG_DRM_MEGACHIPS_STDPXXXX_GE_B850V3_FW is not set
+CONFIG_DRM_NWL_MIPI_DSI=m
+# CONFIG_DRM_NXP_PTN3460 is not set
+# CONFIG_DRM_PARADE_PS8622 is not set
+# CONFIG_DRM_PARADE_PS8640 is not set
+# CONFIG_DRM_SIL_SII8620 is not set
+CONFIG_DRM_SII902X=y
+# CONFIG_DRM_SII9234 is not set
+CONFIG_DRM_SIMPLE_BRIDGE=m
+CONFIG_DRM_THINE_THC63LVD1024=m
+# CONFIG_DRM_TOSHIBA_TC358762 is not set
+# CONFIG_DRM_TOSHIBA_TC358764 is not set
+CONFIG_DRM_TOSHIBA_TC358767=y
+CONFIG_DRM_TOSHIBA_TC358768=y
+# CONFIG_DRM_TOSHIBA_TC358775 is not set
+CONFIG_DRM_TI_TFP410=y
+CONFIG_DRM_TI_SN65DSI86=m
+CONFIG_DRM_TI_TPD12S015=y
+# CONFIG_DRM_ANALOGIX_ANX6345 is not set
+# CONFIG_DRM_ANALOGIX_ANX78XX is not set
+# CONFIG_DRM_I2C_ADV7511 is not set
+CONFIG_DRM_CDNS_MHDP8546=m
+CONFIG_DRM_CDNS_MHDP8546_J721E=y
+CONFIG_DRM_DW_HDMI=m
+CONFIG_DRM_DW_HDMI_AHB_AUDIO=m
+# CONFIG_DRM_DW_HDMI_I2S_AUDIO is not set
+CONFIG_DRM_DW_HDMI_CEC=m
+# end of Display Interface Bridges
+
+# CONFIG_DRM_ETNAVIV is not set
+# CONFIG_DRM_ARCPGU is not set
+# CONFIG_DRM_HISI_HIBMC is not set
+# CONFIG_DRM_HISI_KIRIN is not set
+# CONFIG_DRM_MXSFB is not set
+# CONFIG_DRM_CIRRUS_QEMU is not set
+# CONFIG_DRM_GM12U320 is not set
+# CONFIG_TINYDRM_HX8357D is not set
+# CONFIG_TINYDRM_ILI9225 is not set
+# CONFIG_TINYDRM_ILI9341 is not set
+# CONFIG_TINYDRM_ILI9486 is not set
+# CONFIG_TINYDRM_MI0283QT is not set
+# CONFIG_TINYDRM_REPAPER is not set
+# CONFIG_TINYDRM_ST7586 is not set
+# CONFIG_TINYDRM_ST7735R is not set
+# CONFIG_DRM_PL111 is not set
+# CONFIG_DRM_LIMA is not set
+# CONFIG_DRM_PANFROST is not set
+CONFIG_DRM_TIDSS=y
+CONFIG_DRM_LEGACY=y
+# CONFIG_DRM_TDFX is not set
+# CONFIG_DRM_R128 is not set
+# CONFIG_DRM_MGA is not set
+# CONFIG_DRM_VIA is not set
+# CONFIG_DRM_SAVAGE is not set
+CONFIG_DRM_PANEL_ORIENTATION_QUIRKS=y
+
+#
+# Frame buffer Devices
+#
+CONFIG_FB_CMDLINE=y
+CONFIG_FB_NOTIFY=y
+CONFIG_FB=y
+# CONFIG_FIRMWARE_EDID is not set
+CONFIG_FB_CFB_FILLRECT=y
+CONFIG_FB_CFB_COPYAREA=y
+CONFIG_FB_CFB_IMAGEBLIT=y
+CONFIG_FB_SYS_FILLRECT=y
+CONFIG_FB_SYS_COPYAREA=y
+CONFIG_FB_SYS_IMAGEBLIT=y
+# CONFIG_FB_FOREIGN_ENDIAN is not set
+CONFIG_FB_SYS_FOPS=y
+CONFIG_FB_DEFERRED_IO=y
+CONFIG_FB_BACKLIGHT=y
+CONFIG_FB_MODE_HELPERS=y
+# CONFIG_FB_TILEBLITTING is not set
+
+#
+# Frame buffer hardware drivers
+#
+# CONFIG_FB_CIRRUS is not set
+# CONFIG_FB_PM2 is not set
+# CONFIG_FB_ARMCLCD is not set
+# CONFIG_FB_CYBER2000 is not set
+# CONFIG_FB_ASILIANT is not set
+# CONFIG_FB_IMSTT is not set
+# CONFIG_FB_OPENCORES is not set
+# CONFIG_FB_S1D13XXX is not set
+# CONFIG_FB_NVIDIA is not set
+# CONFIG_FB_RIVA is not set
+# CONFIG_FB_I740 is not set
+# CONFIG_FB_MATROX is not set
+# CONFIG_FB_RADEON is not set
+# CONFIG_FB_ATY128 is not set
+# CONFIG_FB_ATY is not set
+# CONFIG_FB_S3 is not set
+# CONFIG_FB_SAVAGE is not set
+# CONFIG_FB_SIS is not set
+# CONFIG_FB_NEOMAGIC is not set
+# CONFIG_FB_KYRO is not set
+# CONFIG_FB_3DFX is not set
+# CONFIG_FB_VOODOO1 is not set
+# CONFIG_FB_VT8623 is not set
+# CONFIG_FB_TRIDENT is not set
+# CONFIG_FB_ARK is not set
+# CONFIG_FB_PM3 is not set
+# CONFIG_FB_CARMINE is not set
+# CONFIG_FB_SMSCUFX is not set
+# CONFIG_FB_UDL is not set
+# CONFIG_FB_IBM_GXT4500 is not set
+# CONFIG_FB_VIRTUAL is not set
+# CONFIG_FB_METRONOME is not set
+# CONFIG_FB_MB862XX is not set
+# CONFIG_FB_SIMPLE is not set
+CONFIG_FB_SSD1307=y
+# CONFIG_FB_SM712 is not set
+# end of Frame buffer Devices
+
+#
+# Backlight & LCD device support
+#
+# CONFIG_LCD_CLASS_DEVICE is not set
+CONFIG_BACKLIGHT_CLASS_DEVICE=y
+# CONFIG_BACKLIGHT_KTD253 is not set
+CONFIG_BACKLIGHT_PWM=y
+# CONFIG_BACKLIGHT_QCOM_WLED is not set
+# CONFIG_BACKLIGHT_ADP8860 is not set
+# CONFIG_BACKLIGHT_ADP8870 is not set
+# CONFIG_BACKLIGHT_LM3630A is not set
+# CONFIG_BACKLIGHT_LM3639 is not set
+CONFIG_BACKLIGHT_LP855X=m
+CONFIG_BACKLIGHT_GPIO=y
+# CONFIG_BACKLIGHT_LV5207LP is not set
+# CONFIG_BACKLIGHT_BD6107 is not set
+# CONFIG_BACKLIGHT_ARCXCNN is not set
+CONFIG_BACKLIGHT_LED=y
+# end of Backlight & LCD device support
+
+CONFIG_VIDEOMODE_HELPERS=y
+CONFIG_HDMI=y
+
+#
+# Console display driver support
+#
+CONFIG_DUMMY_CONSOLE=y
+CONFIG_DUMMY_CONSOLE_COLUMNS=80
+CONFIG_DUMMY_CONSOLE_ROWS=25
+CONFIG_FRAMEBUFFER_CONSOLE=y
+CONFIG_FRAMEBUFFER_CONSOLE_DETECT_PRIMARY=y
+# CONFIG_FRAMEBUFFER_CONSOLE_ROTATION is not set
+# CONFIG_FRAMEBUFFER_CONSOLE_DEFERRED_TAKEOVER is not set
+# end of Console display driver support
+
+CONFIG_LOGO=y
+# CONFIG_LOGO_LINUX_MONO is not set
+# CONFIG_LOGO_LINUX_VGA16 is not set
+CONFIG_LOGO_LINUX_CLUT224=y
+# end of Graphics support
+
+CONFIG_SOUND=y
+CONFIG_SND=y
+CONFIG_SND_TIMER=y
+CONFIG_SND_PCM=y
+CONFIG_SND_PCM_ELD=y
+CONFIG_SND_PCM_IEC958=y
+CONFIG_SND_DMAENGINE_PCM=y
+CONFIG_SND_HWDEP=m
+CONFIG_SND_RAWMIDI=m
+CONFIG_SND_JACK=y
+CONFIG_SND_JACK_INPUT_DEV=y
+# CONFIG_SND_OSSEMUL is not set
+CONFIG_SND_PCM_TIMER=y
+# CONFIG_SND_HRTIMER is not set
+# CONFIG_SND_DYNAMIC_MINORS is not set
+CONFIG_SND_SUPPORT_OLD_API=y
+CONFIG_SND_PROC_FS=y
+CONFIG_SND_VERBOSE_PROCFS=y
+# CONFIG_SND_VERBOSE_PRINTK is not set
+# CONFIG_SND_DEBUG is not set
+# CONFIG_SND_SEQUENCER is not set
+CONFIG_SND_DRIVERS=y
+# CONFIG_SND_DUMMY is not set
+# CONFIG_SND_ALOOP is not set
+# CONFIG_SND_MTPAV is not set
+# CONFIG_SND_SERIAL_U16550 is not set
+# CONFIG_SND_MPU401 is not set
+CONFIG_SND_PCI=y
+# CONFIG_SND_AD1889 is not set
+# CONFIG_SND_ALS300 is not set
+# CONFIG_SND_ALI5451 is not set
+# CONFIG_SND_ATIIXP is not set
+# CONFIG_SND_ATIIXP_MODEM is not set
+# CONFIG_SND_AU8810 is not set
+# CONFIG_SND_AU8820 is not set
+# CONFIG_SND_AU8830 is not set
+# CONFIG_SND_AW2 is not set
+# CONFIG_SND_AZT3328 is not set
+# CONFIG_SND_BT87X is not set
+# CONFIG_SND_CA0106 is not set
+# CONFIG_SND_CMIPCI is not set
+# CONFIG_SND_OXYGEN is not set
+# CONFIG_SND_CS4281 is not set
+# CONFIG_SND_CS46XX is not set
+# CONFIG_SND_CTXFI is not set
+# CONFIG_SND_DARLA20 is not set
+# CONFIG_SND_GINA20 is not set
+# CONFIG_SND_LAYLA20 is not set
+# CONFIG_SND_DARLA24 is not set
+# CONFIG_SND_GINA24 is not set
+# CONFIG_SND_LAYLA24 is not set
+# CONFIG_SND_MONA is not set
+# CONFIG_SND_MIA is not set
+# CONFIG_SND_ECHO3G is not set
+# CONFIG_SND_INDIGO is not set
+# CONFIG_SND_INDIGOIO is not set
+# CONFIG_SND_INDIGODJ is not set
+# CONFIG_SND_INDIGOIOX is not set
+# CONFIG_SND_INDIGODJX is not set
+# CONFIG_SND_EMU10K1 is not set
+# CONFIG_SND_EMU10K1X is not set
+# CONFIG_SND_ENS1370 is not set
+# CONFIG_SND_ENS1371 is not set
+# CONFIG_SND_ES1938 is not set
+# CONFIG_SND_ES1968 is not set
+# CONFIG_SND_FM801 is not set
+# CONFIG_SND_HDSP is not set
+# CONFIG_SND_HDSPM is not set
+# CONFIG_SND_ICE1712 is not set
+# CONFIG_SND_ICE1724 is not set
+# CONFIG_SND_INTEL8X0 is not set
+# CONFIG_SND_INTEL8X0M is not set
+# CONFIG_SND_KORG1212 is not set
+# CONFIG_SND_LOLA is not set
+# CONFIG_SND_LX6464ES is not set
+# CONFIG_SND_MAESTRO3 is not set
+# CONFIG_SND_MIXART is not set
+# CONFIG_SND_NM256 is not set
+# CONFIG_SND_PCXHR is not set
+# CONFIG_SND_RIPTIDE is not set
+# CONFIG_SND_RME32 is not set
+# CONFIG_SND_RME96 is not set
+# CONFIG_SND_RME9652 is not set
+# CONFIG_SND_SE6X is not set
+# CONFIG_SND_SONICVIBES is not set
+# CONFIG_SND_TRIDENT is not set
+# CONFIG_SND_VIA82XX is not set
+# CONFIG_SND_VIA82XX_MODEM is not set
+# CONFIG_SND_VIRTUOSO is not set
+# CONFIG_SND_VX222 is not set
+# CONFIG_SND_YMFPCI is not set
+
+#
+# HD-Audio
+#
+# CONFIG_SND_HDA_INTEL is not set
+# end of HD-Audio
+
+CONFIG_SND_HDA_PREALLOC_SIZE=64
+# CONFIG_SND_SPI is not set
+CONFIG_SND_USB=y
+CONFIG_SND_USB_AUDIO=m
+CONFIG_SND_USB_AUDIO_USE_MEDIA_CONTROLLER=y
+# CONFIG_SND_USB_UA101 is not set
+# CONFIG_SND_USB_CAIAQ is not set
+# CONFIG_SND_USB_6FIRE is not set
+# CONFIG_SND_USB_HIFACE is not set
+# CONFIG_SND_BCD2000 is not set
+# CONFIG_SND_USB_POD is not set
+# CONFIG_SND_USB_PODHD is not set
+# CONFIG_SND_USB_TONEPORT is not set
+# CONFIG_SND_USB_VARIAX is not set
+CONFIG_SND_SOC=y
+CONFIG_SND_SOC_GENERIC_DMAENGINE_PCM=y
+# CONFIG_SND_SOC_AMD_ACP is not set
+# CONFIG_SND_ATMEL_SOC is not set
+# CONFIG_SND_BCM63XX_I2S_WHISTLER is not set
+# CONFIG_SND_DESIGNWARE_I2S is not set
+
+#
+# SoC Audio for Freescale CPUs
+#
+
+#
+# Common SoC Audio options for Freescale CPUs:
+#
+# CONFIG_SND_SOC_FSL_ASRC is not set
+# CONFIG_SND_SOC_FSL_SAI is not set
+# CONFIG_SND_SOC_FSL_AUDMIX is not set
+# CONFIG_SND_SOC_FSL_SSI is not set
+# CONFIG_SND_SOC_FSL_SPDIF is not set
+# CONFIG_SND_SOC_FSL_ESAI is not set
+# CONFIG_SND_SOC_FSL_MICFIL is not set
+# CONFIG_SND_SOC_IMX_AUDMUX is not set
+# end of SoC Audio for Freescale CPUs
+
+# CONFIG_SND_I2S_HI6210_I2S is not set
+# CONFIG_SND_SOC_IMG is not set
+# CONFIG_SND_SOC_MTK_BTCVSD is not set
+# CONFIG_SND_SOC_SOF_TOPLEVEL is not set
+
+#
+# STMicroelectronics STM32 SOC audio support
+#
+# end of STMicroelectronics STM32 SOC audio support
+
+#
+# Audio support for Texas Instruments SoCs
+#
+CONFIG_SND_SOC_TI_EDMA_PCM=y
+CONFIG_SND_SOC_TI_SDMA_PCM=y
+CONFIG_SND_SOC_TI_UDMA_PCM=y
+
+#
+# Texas Instruments DAI support for:
+#
+CONFIG_SND_SOC_DAVINCI_MCASP=y
+
+#
+# Audio support for boards with Texas Instruments SoCs
+#
+CONFIG_SND_SOC_J721E_EVM=m
+# end of Audio support for Texas Instruments SoCs
+
+# CONFIG_SND_SOC_XILINX_I2S is not set
+# CONFIG_SND_SOC_XILINX_AUDIO_FORMATTER is not set
+# CONFIG_SND_SOC_XILINX_SPDIF is not set
+# CONFIG_SND_SOC_XTFPGA_I2S is not set
+# CONFIG_ZX_TDM is not set
+CONFIG_SND_SOC_I2C_AND_SPI=y
+
+#
+# CODEC drivers
+#
+# CONFIG_SND_SOC_AC97_CODEC is not set
+# CONFIG_SND_SOC_ADAU1701 is not set
+# CONFIG_SND_SOC_ADAU1761_I2C is not set
+# CONFIG_SND_SOC_ADAU1761_SPI is not set
+# CONFIG_SND_SOC_ADAU7002 is not set
+# CONFIG_SND_SOC_ADAU7118_HW is not set
+# CONFIG_SND_SOC_ADAU7118_I2C is not set
+# CONFIG_SND_SOC_AK4104 is not set
+# CONFIG_SND_SOC_AK4118 is not set
+# CONFIG_SND_SOC_AK4458 is not set
+# CONFIG_SND_SOC_AK4554 is not set
+# CONFIG_SND_SOC_AK4613 is not set
+# CONFIG_SND_SOC_AK4642 is not set
+# CONFIG_SND_SOC_AK5386 is not set
+# CONFIG_SND_SOC_AK5558 is not set
+# CONFIG_SND_SOC_ALC5623 is not set
+# CONFIG_SND_SOC_BD28623 is not set
+# CONFIG_SND_SOC_BT_SCO is not set
+# CONFIG_SND_SOC_CROS_EC_CODEC is not set
+# CONFIG_SND_SOC_CS35L32 is not set
+# CONFIG_SND_SOC_CS35L33 is not set
+# CONFIG_SND_SOC_CS35L34 is not set
+# CONFIG_SND_SOC_CS35L35 is not set
+# CONFIG_SND_SOC_CS35L36 is not set
+# CONFIG_SND_SOC_CS42L42 is not set
+# CONFIG_SND_SOC_CS42L51_I2C is not set
+# CONFIG_SND_SOC_CS42L52 is not set
+# CONFIG_SND_SOC_CS42L56 is not set
+# CONFIG_SND_SOC_CS42L73 is not set
+# CONFIG_SND_SOC_CS4234 is not set
+# CONFIG_SND_SOC_CS4265 is not set
+# CONFIG_SND_SOC_CS4270 is not set
+# CONFIG_SND_SOC_CS4271_I2C is not set
+# CONFIG_SND_SOC_CS4271_SPI is not set
+# CONFIG_SND_SOC_CS42XX8_I2C is not set
+# CONFIG_SND_SOC_CS43130 is not set
+# CONFIG_SND_SOC_CS4341 is not set
+# CONFIG_SND_SOC_CS4349 is not set
+# CONFIG_SND_SOC_CS53L30 is not set
+# CONFIG_SND_SOC_CX2072X is not set
+# CONFIG_SND_SOC_DA7213 is not set
+# CONFIG_SND_SOC_DMIC is not set
+CONFIG_SND_SOC_HDMI_CODEC=y
+# CONFIG_SND_SOC_ES7134 is not set
+# CONFIG_SND_SOC_ES7241 is not set
+# CONFIG_SND_SOC_ES8316 is not set
+# CONFIG_SND_SOC_ES8328_I2C is not set
+# CONFIG_SND_SOC_ES8328_SPI is not set
+# CONFIG_SND_SOC_GTM601 is not set
+# CONFIG_SND_SOC_INNO_RK3036 is not set
+# CONFIG_SND_SOC_MAX98088 is not set
+# CONFIG_SND_SOC_MAX98357A is not set
+# CONFIG_SND_SOC_MAX98504 is not set
+# CONFIG_SND_SOC_MAX9867 is not set
+# CONFIG_SND_SOC_MAX98927 is not set
+# CONFIG_SND_SOC_MAX98373_I2C is not set
+# CONFIG_SND_SOC_MAX98373_SDW is not set
+# CONFIG_SND_SOC_MAX98390 is not set
+# CONFIG_SND_SOC_MAX9860 is not set
+# CONFIG_SND_SOC_MSM8916_WCD_ANALOG is not set
+# CONFIG_SND_SOC_MSM8916_WCD_DIGITAL is not set
+# CONFIG_SND_SOC_PCM1681 is not set
+# CONFIG_SND_SOC_PCM1789_I2C is not set
+# CONFIG_SND_SOC_PCM179X_I2C is not set
+# CONFIG_SND_SOC_PCM179X_SPI is not set
+# CONFIG_SND_SOC_PCM186X_I2C is not set
+# CONFIG_SND_SOC_PCM186X_SPI is not set
+# CONFIG_SND_SOC_PCM3060_I2C is not set
+# CONFIG_SND_SOC_PCM3060_SPI is not set
+CONFIG_SND_SOC_PCM3168A=m
+CONFIG_SND_SOC_PCM3168A_I2C=m
+# CONFIG_SND_SOC_PCM3168A_SPI is not set
+# CONFIG_SND_SOC_PCM512x_I2C is not set
+# CONFIG_SND_SOC_PCM512x_SPI is not set
+# CONFIG_SND_SOC_RK3328 is not set
+# CONFIG_SND_SOC_RT1308_SDW is not set
+# CONFIG_SND_SOC_RT5616 is not set
+# CONFIG_SND_SOC_RT5631 is not set
+# CONFIG_SND_SOC_RT5682_SDW is not set
+# CONFIG_SND_SOC_RT700_SDW is not set
+# CONFIG_SND_SOC_RT711_SDW is not set
+# CONFIG_SND_SOC_RT715_SDW is not set
+# CONFIG_SND_SOC_SGTL5000 is not set
+CONFIG_SND_SOC_SIMPLE_AMPLIFIER=m
+# CONFIG_SND_SOC_SIRF_AUDIO_CODEC is not set
+# CONFIG_SND_SOC_SPDIF is not set
+# CONFIG_SND_SOC_SSM2305 is not set
+# CONFIG_SND_SOC_SSM2602_SPI is not set
+# CONFIG_SND_SOC_SSM2602_I2C is not set
+# CONFIG_SND_SOC_SSM4567 is not set
+# CONFIG_SND_SOC_STA32X is not set
+# CONFIG_SND_SOC_STA350 is not set
+# CONFIG_SND_SOC_STI_SAS is not set
+# CONFIG_SND_SOC_TAS2552 is not set
+# CONFIG_SND_SOC_TAS2562 is not set
+# CONFIG_SND_SOC_TAS2764 is not set
+# CONFIG_SND_SOC_TAS2770 is not set
+# CONFIG_SND_SOC_TAS5086 is not set
+# CONFIG_SND_SOC_TAS571X is not set
+# CONFIG_SND_SOC_TAS5720 is not set
+# CONFIG_SND_SOC_TAS6424 is not set
+# CONFIG_SND_SOC_TDA7419 is not set
+# CONFIG_SND_SOC_TFA9879 is not set
+# CONFIG_SND_SOC_TLV320AIC23_I2C is not set
+# CONFIG_SND_SOC_TLV320AIC23_SPI is not set
+CONFIG_SND_SOC_TLV320AIC31XX=m
+# CONFIG_SND_SOC_TLV320AIC32X4_I2C is not set
+# CONFIG_SND_SOC_TLV320AIC32X4_SPI is not set
+CONFIG_SND_SOC_TLV320AIC3X=m
+# CONFIG_SND_SOC_TLV320ADCX140 is not set
+# CONFIG_SND_SOC_TS3A227E is not set
+# CONFIG_SND_SOC_TSCS42XX is not set
+# CONFIG_SND_SOC_TSCS454 is not set
+# CONFIG_SND_SOC_UDA1334 is not set
+# CONFIG_SND_SOC_WCD9335 is not set
+CONFIG_SND_SOC_WCD934X=m
+# CONFIG_SND_SOC_WM8510 is not set
+# CONFIG_SND_SOC_WM8523 is not set
+# CONFIG_SND_SOC_WM8524 is not set
+# CONFIG_SND_SOC_WM8580 is not set
+# CONFIG_SND_SOC_WM8711 is not set
+# CONFIG_SND_SOC_WM8728 is not set
+# CONFIG_SND_SOC_WM8731 is not set
+# CONFIG_SND_SOC_WM8737 is not set
+# CONFIG_SND_SOC_WM8741 is not set
+# CONFIG_SND_SOC_WM8750 is not set
+# CONFIG_SND_SOC_WM8753 is not set
+# CONFIG_SND_SOC_WM8770 is not set
+# CONFIG_SND_SOC_WM8776 is not set
+# CONFIG_SND_SOC_WM8782 is not set
+# CONFIG_SND_SOC_WM8804_I2C is not set
+# CONFIG_SND_SOC_WM8804_SPI is not set
+# CONFIG_SND_SOC_WM8903 is not set
+CONFIG_SND_SOC_WM8904=m
+# CONFIG_SND_SOC_WM8960 is not set
+# CONFIG_SND_SOC_WM8962 is not set
+# CONFIG_SND_SOC_WM8974 is not set
+# CONFIG_SND_SOC_WM8978 is not set
+# CONFIG_SND_SOC_WM8985 is not set
+CONFIG_SND_SOC_WSA881X=m
+# CONFIG_SND_SOC_ZL38060 is not set
+# CONFIG_SND_SOC_ZX_AUD96P22 is not set
+# CONFIG_SND_SOC_MAX9759 is not set
+# CONFIG_SND_SOC_MT6351 is not set
+# CONFIG_SND_SOC_MT6358 is not set
+# CONFIG_SND_SOC_MT6660 is not set
+# CONFIG_SND_SOC_NAU8540 is not set
+# CONFIG_SND_SOC_NAU8810 is not set
+# CONFIG_SND_SOC_NAU8822 is not set
+# CONFIG_SND_SOC_NAU8824 is not set
+# CONFIG_SND_SOC_TPA6130A2 is not set
+# end of CODEC drivers
+
+CONFIG_SND_SIMPLE_CARD_UTILS=m
+CONFIG_SND_SIMPLE_CARD=m
+CONFIG_SND_AUDIO_GRAPH_CARD=m
+
+#
+# HID support
+#
+CONFIG_HID=y
+# CONFIG_HID_BATTERY_STRENGTH is not set
+# CONFIG_HIDRAW is not set
+# CONFIG_UHID is not set
+CONFIG_HID_GENERIC=y
+
+#
+# Special HID drivers
+#
+# CONFIG_HID_A4TECH is not set
+# CONFIG_HID_ACCUTOUCH is not set
+# CONFIG_HID_ACRUX is not set
+# CONFIG_HID_APPLE is not set
+# CONFIG_HID_APPLEIR is not set
+# CONFIG_HID_ASUS is not set
+# CONFIG_HID_AUREAL is not set
+# CONFIG_HID_BELKIN is not set
+# CONFIG_HID_BETOP_FF is not set
+# CONFIG_HID_BIGBEN_FF is not set
+# CONFIG_HID_CHERRY is not set
+# CONFIG_HID_CHICONY is not set
+# CONFIG_HID_CORSAIR is not set
+# CONFIG_HID_COUGAR is not set
+# CONFIG_HID_MACALLY is not set
+# CONFIG_HID_PRODIKEYS is not set
+# CONFIG_HID_CMEDIA is not set
+# CONFIG_HID_CREATIVE_SB0540 is not set
+# CONFIG_HID_CYPRESS is not set
+# CONFIG_HID_DRAGONRISE is not set
+# CONFIG_HID_EMS_FF is not set
+# CONFIG_HID_ELAN is not set
+# CONFIG_HID_ELECOM is not set
+# CONFIG_HID_ELO is not set
+# CONFIG_HID_EZKEY is not set
+# CONFIG_HID_GEMBIRD is not set
+# CONFIG_HID_GFRM is not set
+# CONFIG_HID_GLORIOUS is not set
+# CONFIG_HID_HOLTEK is not set
+# CONFIG_HID_GOOGLE_HAMMER is not set
+# CONFIG_HID_VIVALDI is not set
+# CONFIG_HID_GT683R is not set
+# CONFIG_HID_KEYTOUCH is not set
+# CONFIG_HID_KYE is not set
+# CONFIG_HID_UCLOGIC is not set
+# CONFIG_HID_WALTOP is not set
+# CONFIG_HID_VIEWSONIC is not set
+# CONFIG_HID_GYRATION is not set
+# CONFIG_HID_ICADE is not set
+# CONFIG_HID_ITE is not set
+# CONFIG_HID_JABRA is not set
+# CONFIG_HID_TWINHAN is not set
+# CONFIG_HID_KENSINGTON is not set
+# CONFIG_HID_LCPOWER is not set
+# CONFIG_HID_LED is not set
+# CONFIG_HID_LENOVO is not set
+# CONFIG_HID_LOGITECH is not set
+# CONFIG_HID_MAGICMOUSE is not set
+# CONFIG_HID_MALTRON is not set
+# CONFIG_HID_MAYFLASH is not set
+# CONFIG_HID_REDRAGON is not set
+# CONFIG_HID_MICROSOFT is not set
+# CONFIG_HID_MONTEREY is not set
+CONFIG_HID_MULTITOUCH=m
+# CONFIG_HID_NTI is not set
+# CONFIG_HID_NTRIG is not set
+# CONFIG_HID_ORTEK is not set
+# CONFIG_HID_PANTHERLORD is not set
+# CONFIG_HID_PENMOUNT is not set
+# CONFIG_HID_PETALYNX is not set
+# CONFIG_HID_PICOLCD is not set
+# CONFIG_HID_PLANTRONICS is not set
+# CONFIG_HID_PRIMAX is not set
+# CONFIG_HID_RETRODE is not set
+# CONFIG_HID_ROCCAT is not set
+# CONFIG_HID_SAITEK is not set
+# CONFIG_HID_SAMSUNG is not set
+# CONFIG_HID_SONY is not set
+# CONFIG_HID_SPEEDLINK is not set
+# CONFIG_HID_STEAM is not set
+# CONFIG_HID_STEELSERIES is not set
+# CONFIG_HID_SUNPLUS is not set
+# CONFIG_HID_RMI is not set
+# CONFIG_HID_GREENASIA is not set
+# CONFIG_HID_SMARTJOYPLUS is not set
+# CONFIG_HID_TIVO is not set
+# CONFIG_HID_TOPSEED is not set
+# CONFIG_HID_THINGM is not set
+# CONFIG_HID_THRUSTMASTER is not set
+# CONFIG_HID_UDRAW_PS3 is not set
+# CONFIG_HID_U2FZERO is not set
+# CONFIG_HID_WACOM is not set
+# CONFIG_HID_WIIMOTE is not set
+# CONFIG_HID_XINMO is not set
+# CONFIG_HID_ZEROPLUS is not set
+# CONFIG_HID_ZYDACRON is not set
+# CONFIG_HID_SENSOR_HUB is not set
+# CONFIG_HID_ALPS is not set
+# CONFIG_HID_MCP2221 is not set
+# end of Special HID drivers
+
+#
+# USB HID support
+#
+CONFIG_USB_HID=m
+# CONFIG_HID_PID is not set
+# CONFIG_USB_HIDDEV is not set
+
+#
+# USB HID Boot Protocol drivers
+#
+# CONFIG_USB_KBD is not set
+# CONFIG_USB_MOUSE is not set
+# end of USB HID Boot Protocol drivers
+# end of USB HID support
+
+#
+# I2C HID support
+#
+CONFIG_I2C_HID=m
+# end of I2C HID support
+# end of HID support
+
+CONFIG_USB_OHCI_LITTLE_ENDIAN=y
+CONFIG_USB_SUPPORT=y
+CONFIG_USB_COMMON=m
+# CONFIG_USB_LED_TRIG is not set
+# CONFIG_USB_ULPI_BUS is not set
+CONFIG_USB_CONN_GPIO=m
+CONFIG_USB_ARCH_HAS_HCD=y
+CONFIG_USB=m
+CONFIG_USB_PCI=y
+CONFIG_USB_ANNOUNCE_NEW_DEVICES=y
+
+#
+# Miscellaneous USB options
+#
+CONFIG_USB_DEFAULT_PERSIST=y
+# CONFIG_USB_FEW_INIT_RETRIES is not set
+# CONFIG_USB_DYNAMIC_MINORS is not set
+CONFIG_USB_OTG=y
+# CONFIG_USB_OTG_PRODUCTLIST is not set
+# CONFIG_USB_OTG_DISABLE_EXTERNAL_HUB is not set
+# CONFIG_USB_OTG_FSM is not set
+# CONFIG_USB_LEDS_TRIGGER_USBPORT is not set
+CONFIG_USB_AUTOSUSPEND_DELAY=2
+# CONFIG_USB_MON is not set
+
+#
+# USB Host Controller Drivers
+#
+# CONFIG_USB_C67X00_HCD is not set
+CONFIG_USB_XHCI_HCD=m
+# CONFIG_USB_XHCI_DBGCAP is not set
+CONFIG_USB_XHCI_PCI=m
+# CONFIG_USB_XHCI_PCI_RENESAS is not set
+CONFIG_USB_XHCI_PLATFORM=m
+CONFIG_USB_EHCI_HCD=m
+# CONFIG_USB_EHCI_ROOT_HUB_TT is not set
+CONFIG_USB_EHCI_TT_NEWSCHED=y
+CONFIG_USB_EHCI_PCI=m
+# CONFIG_USB_EHCI_FSL is not set
+CONFIG_USB_EHCI_HCD_PLATFORM=m
+# CONFIG_USB_OXU210HP_HCD is not set
+# CONFIG_USB_ISP116X_HCD is not set
+# CONFIG_USB_FOTG210_HCD is not set
+# CONFIG_USB_MAX3421_HCD is not set
+CONFIG_USB_OHCI_HCD=m
+CONFIG_USB_OHCI_HCD_PCI=m
+# CONFIG_USB_OHCI_HCD_SSB is not set
+CONFIG_USB_OHCI_HCD_PLATFORM=m
+# CONFIG_USB_UHCI_HCD is not set
+# CONFIG_USB_SL811_HCD is not set
+# CONFIG_USB_R8A66597_HCD is not set
+# CONFIG_USB_HCD_BCMA is not set
+# CONFIG_USB_HCD_SSB is not set
+# CONFIG_USB_HCD_TEST_MODE is not set
+
+#
+# USB Device Class drivers
+#
+CONFIG_USB_ACM=m
+# CONFIG_USB_PRINTER is not set
+# CONFIG_USB_WDM is not set
+# CONFIG_USB_TMC is not set
+
+#
+# NOTE: USB_STORAGE depends on SCSI but BLK_DEV_SD may
+#
+
+#
+# also be needed; see USB_STORAGE Help for more info
+#
+CONFIG_USB_STORAGE=m
+# CONFIG_USB_STORAGE_DEBUG is not set
+# CONFIG_USB_STORAGE_REALTEK is not set
+# CONFIG_USB_STORAGE_DATAFAB is not set
+# CONFIG_USB_STORAGE_FREECOM is not set
+# CONFIG_USB_STORAGE_ISD200 is not set
+# CONFIG_USB_STORAGE_USBAT is not set
+# CONFIG_USB_STORAGE_SDDR09 is not set
+# CONFIG_USB_STORAGE_SDDR55 is not set
+# CONFIG_USB_STORAGE_JUMPSHOT is not set
+# CONFIG_USB_STORAGE_ALAUDA is not set
+# CONFIG_USB_STORAGE_ONETOUCH is not set
+# CONFIG_USB_STORAGE_KARMA is not set
+# CONFIG_USB_STORAGE_CYPRESS_ATACB is not set
+# CONFIG_USB_STORAGE_ENE_UB6250 is not set
+# CONFIG_USB_UAS is not set
+
+#
+# USB Imaging devices
+#
+# CONFIG_USB_MDC800 is not set
+# CONFIG_USB_MICROTEK is not set
+# CONFIG_USBIP_CORE is not set
+CONFIG_USB_CDNS3=m
+CONFIG_USB_CDNS3_GADGET=y
+CONFIG_USB_CDNS3_HOST=y
+CONFIG_USB_CDNS3_TI=m
+CONFIG_USB_MUSB_HDRC=m
+# CONFIG_USB_MUSB_HOST is not set
+# CONFIG_USB_MUSB_GADGET is not set
+CONFIG_USB_MUSB_DUAL_ROLE=y
+
+#
+# Platform Glue Layer
+#
+
+#
+# MUSB DMA mode
+#
+# CONFIG_MUSB_PIO_ONLY is not set
+CONFIG_USB_DWC3=m
+# CONFIG_USB_DWC3_HOST is not set
+# CONFIG_USB_DWC3_GADGET is not set
+CONFIG_USB_DWC3_DUAL_ROLE=y
+
+#
+# Platform Glue Driver Support
+#
+CONFIG_USB_DWC3_HAPS=m
+CONFIG_USB_DWC3_KEYSTONE=m
+CONFIG_USB_DWC3_OF_SIMPLE=m
+# CONFIG_USB_DWC2 is not set
+# CONFIG_USB_CHIPIDEA is not set
+CONFIG_USB_ISP1760=m
+CONFIG_USB_ISP1760_HCD=y
+CONFIG_USB_ISP1761_UDC=y
+# CONFIG_USB_ISP1760_HOST_ROLE is not set
+# CONFIG_USB_ISP1760_GADGET_ROLE is not set
+CONFIG_USB_ISP1760_DUAL_ROLE=y
+
+#
+# USB port drivers
+#
+CONFIG_USB_SERIAL=m
+# CONFIG_USB_SERIAL_GENERIC is not set
+# CONFIG_USB_SERIAL_SIMPLE is not set
+# CONFIG_USB_SERIAL_AIRCABLE is not set
+# CONFIG_USB_SERIAL_ARK3116 is not set
+# CONFIG_USB_SERIAL_BELKIN is not set
+# CONFIG_USB_SERIAL_CH341 is not set
+# CONFIG_USB_SERIAL_WHITEHEAT is not set
+# CONFIG_USB_SERIAL_DIGI_ACCELEPORT is not set
+CONFIG_USB_SERIAL_CP210X=m
+# CONFIG_USB_SERIAL_CYPRESS_M8 is not set
+# CONFIG_USB_SERIAL_EMPEG is not set
+CONFIG_USB_SERIAL_FTDI_SIO=m
+# CONFIG_USB_SERIAL_VISOR is not set
+# CONFIG_USB_SERIAL_IPAQ is not set
+# CONFIG_USB_SERIAL_IR is not set
+# CONFIG_USB_SERIAL_EDGEPORT is not set
+# CONFIG_USB_SERIAL_EDGEPORT_TI is not set
+# CONFIG_USB_SERIAL_F81232 is not set
+# CONFIG_USB_SERIAL_F8153X is not set
+# CONFIG_USB_SERIAL_GARMIN is not set
+# CONFIG_USB_SERIAL_IPW is not set
+# CONFIG_USB_SERIAL_IUU is not set
+# CONFIG_USB_SERIAL_KEYSPAN_PDA is not set
+# CONFIG_USB_SERIAL_KEYSPAN is not set
+# CONFIG_USB_SERIAL_KLSI is not set
+# CONFIG_USB_SERIAL_KOBIL_SCT is not set
+# CONFIG_USB_SERIAL_MCT_U232 is not set
+# CONFIG_USB_SERIAL_METRO is not set
+# CONFIG_USB_SERIAL_MOS7720 is not set
+# CONFIG_USB_SERIAL_MOS7840 is not set
+# CONFIG_USB_SERIAL_MXUPORT is not set
+# CONFIG_USB_SERIAL_NAVMAN is not set
+CONFIG_USB_SERIAL_PL2303=m
+# CONFIG_USB_SERIAL_OTI6858 is not set
+# CONFIG_USB_SERIAL_QCAUX is not set
+# CONFIG_USB_SERIAL_QUALCOMM is not set
+# CONFIG_USB_SERIAL_SPCP8X5 is not set
+# CONFIG_USB_SERIAL_SAFE is not set
+# CONFIG_USB_SERIAL_SIERRAWIRELESS is not set
+# CONFIG_USB_SERIAL_SYMBOL is not set
+# CONFIG_USB_SERIAL_TI is not set
+# CONFIG_USB_SERIAL_CYBERJACK is not set
+# CONFIG_USB_SERIAL_XIRCOM is not set
+CONFIG_USB_SERIAL_WWAN=m
+CONFIG_USB_SERIAL_OPTION=m
+# CONFIG_USB_SERIAL_OMNINET is not set
+# CONFIG_USB_SERIAL_OPTICON is not set
+# CONFIG_USB_SERIAL_XSENS_MT is not set
+# CONFIG_USB_SERIAL_WISHBONE is not set
+# CONFIG_USB_SERIAL_SSU100 is not set
+# CONFIG_USB_SERIAL_QT2 is not set
+# CONFIG_USB_SERIAL_UPD78F0730 is not set
+# CONFIG_USB_SERIAL_DEBUG is not set
+
+#
+# USB Miscellaneous drivers
+#
+# CONFIG_USB_EMI62 is not set
+# CONFIG_USB_EMI26 is not set
+# CONFIG_USB_ADUTUX is not set
+# CONFIG_USB_SEVSEG is not set
+# CONFIG_USB_LEGOTOWER is not set
+# CONFIG_USB_LCD is not set
+# CONFIG_USB_CYPRESS_CY7C63 is not set
+# CONFIG_USB_CYTHERM is not set
+# CONFIG_USB_IDMOUSE is not set
+# CONFIG_USB_FTDI_ELAN is not set
+# CONFIG_USB_APPLEDISPLAY is not set
+# CONFIG_APPLE_MFI_FASTCHARGE is not set
+# CONFIG_USB_SISUSBVGA is not set
+# CONFIG_USB_LD is not set
+# CONFIG_USB_TRANCEVIBRATOR is not set
+# CONFIG_USB_IOWARRIOR is not set
+CONFIG_USB_TEST=m
+# CONFIG_USB_EHSET_TEST_FIXTURE is not set
+# CONFIG_USB_ISIGHTFW is not set
+# CONFIG_USB_YUREX is not set
+# CONFIG_USB_EZUSB_FX2 is not set
+# CONFIG_USB_HUB_USB251XB is not set
+CONFIG_USB_HSIC_USB3503=m
+# CONFIG_USB_HSIC_USB4604 is not set
+# CONFIG_USB_LINK_LAYER_TEST is not set
+# CONFIG_USB_CHAOSKEY is not set
+
+#
+# USB Physical Layer drivers
+#
+CONFIG_USB_PHY=y
+CONFIG_NOP_USB_XCEIV=m
+# CONFIG_USB_GPIO_VBUS is not set
+# CONFIG_USB_ISP1301 is not set
+# CONFIG_USB_ULPI is not set
+# end of USB Physical Layer drivers
+
+CONFIG_USB_GADGET=m
+# CONFIG_USB_GADGET_DEBUG is not set
+# CONFIG_USB_GADGET_DEBUG_FILES is not set
+# CONFIG_USB_GADGET_DEBUG_FS is not set
+CONFIG_USB_GADGET_VBUS_DRAW=2
+CONFIG_USB_GADGET_STORAGE_NUM_BUFFERS=32
+# CONFIG_U_SERIAL_CONSOLE is not set
+
+#
+# USB Peripheral Controller
+#
+# CONFIG_USB_FOTG210_UDC is not set
+# CONFIG_USB_GR_UDC is not set
+# CONFIG_USB_R8A66597 is not set
+# CONFIG_USB_PXA27X is not set
+# CONFIG_USB_MV_UDC is not set
+# CONFIG_USB_MV_U3D is not set
+# CONFIG_USB_SNP_UDC_PLAT is not set
+# CONFIG_USB_M66592 is not set
+# CONFIG_USB_BDC_UDC is not set
+# CONFIG_USB_AMD5536UDC is not set
+# CONFIG_USB_NET2272 is not set
+# CONFIG_USB_NET2280 is not set
+# CONFIG_USB_GOKU is not set
+# CONFIG_USB_EG20T is not set
+# CONFIG_USB_GADGET_XILINX is not set
+# CONFIG_USB_MAX3420_UDC is not set
+# CONFIG_USB_DUMMY_HCD is not set
+# end of USB Peripheral Controller
+
+CONFIG_USB_LIBCOMPOSITE=m
+CONFIG_USB_F_ACM=m
+CONFIG_USB_F_SS_LB=m
+CONFIG_USB_U_SERIAL=m
+CONFIG_USB_U_ETHER=m
+CONFIG_USB_U_AUDIO=m
+CONFIG_USB_F_SERIAL=m
+CONFIG_USB_F_OBEX=m
+CONFIG_USB_F_NCM=m
+CONFIG_USB_F_ECM=m
+CONFIG_USB_F_EEM=m
+CONFIG_USB_F_SUBSET=m
+CONFIG_USB_F_RNDIS=m
+CONFIG_USB_F_MASS_STORAGE=m
+CONFIG_USB_F_FS=m
+CONFIG_USB_F_UAC1=m
+CONFIG_USB_F_UAC2=m
+CONFIG_USB_F_UVC=m
+CONFIG_USB_F_MIDI=m
+CONFIG_USB_F_HID=m
+CONFIG_USB_F_PRINTER=m
+CONFIG_USB_CONFIGFS=m
+CONFIG_USB_CONFIGFS_SERIAL=y
+CONFIG_USB_CONFIGFS_ACM=y
+CONFIG_USB_CONFIGFS_OBEX=y
+CONFIG_USB_CONFIGFS_NCM=y
+CONFIG_USB_CONFIGFS_ECM=y
+CONFIG_USB_CONFIGFS_ECM_SUBSET=y
+CONFIG_USB_CONFIGFS_RNDIS=y
+CONFIG_USB_CONFIGFS_EEM=y
+CONFIG_USB_CONFIGFS_MASS_STORAGE=y
+CONFIG_USB_CONFIGFS_F_LB_SS=y
+CONFIG_USB_CONFIGFS_F_FS=y
+CONFIG_USB_CONFIGFS_F_UAC1=y
+# CONFIG_USB_CONFIGFS_F_UAC1_LEGACY is not set
+CONFIG_USB_CONFIGFS_F_UAC2=y
+CONFIG_USB_CONFIGFS_F_MIDI=y
+CONFIG_USB_CONFIGFS_F_HID=y
+CONFIG_USB_CONFIGFS_F_UVC=y
+CONFIG_USB_CONFIGFS_F_PRINTER=y
+
+#
+# USB Gadget precomposed configurations
+#
+CONFIG_USB_ZERO=m
+# CONFIG_USB_ZERO_HNPTEST is not set
+CONFIG_USB_AUDIO=m
+# CONFIG_GADGET_UAC1 is not set
+CONFIG_USB_ETH=m
+CONFIG_USB_ETH_RNDIS=y
+# CONFIG_USB_ETH_EEM is not set
+CONFIG_USB_G_NCM=m
+CONFIG_USB_GADGETFS=m
+CONFIG_USB_FUNCTIONFS=m
+CONFIG_USB_FUNCTIONFS_ETH=y
+CONFIG_USB_FUNCTIONFS_RNDIS=y
+CONFIG_USB_FUNCTIONFS_GENERIC=y
+CONFIG_USB_MASS_STORAGE=m
+CONFIG_USB_G_SERIAL=m
+CONFIG_USB_MIDI_GADGET=m
+CONFIG_USB_G_PRINTER=m
+CONFIG_USB_CDC_COMPOSITE=m
+CONFIG_USB_G_ACM_MS=m
+CONFIG_USB_G_MULTI=m
+CONFIG_USB_G_MULTI_RNDIS=y
+CONFIG_USB_G_MULTI_CDC=y
+CONFIG_USB_G_HID=m
+CONFIG_USB_G_DBGP=m
+# CONFIG_USB_G_DBGP_PRINTK is not set
+CONFIG_USB_G_DBGP_SERIAL=y
+CONFIG_USB_G_WEBCAM=m
+# CONFIG_USB_RAW_GADGET is not set
+# end of USB Gadget precomposed configurations
+
+CONFIG_TYPEC=m
+CONFIG_TYPEC_TCPM=m
+# CONFIG_TYPEC_TCPCI is not set
+CONFIG_TYPEC_FUSB302=m
+# CONFIG_TYPEC_UCSI is not set
+CONFIG_TYPEC_HD3SS3220=m
+# CONFIG_TYPEC_TPS6598X is not set
+# CONFIG_TYPEC_STUSB160X is not set
+
+#
+# USB Type-C Multiplexer/DeMultiplexer Switch support
+#
+# CONFIG_TYPEC_MUX_PI3USB30532 is not set
+# end of USB Type-C Multiplexer/DeMultiplexer Switch support
+
+#
+# USB Type-C Alternate Mode drivers
+#
+# CONFIG_TYPEC_DP_ALTMODE is not set
+# end of USB Type-C Alternate Mode drivers
+
+CONFIG_USB_ROLE_SWITCH=m
+CONFIG_MMC=y
+CONFIG_PWRSEQ_EMMC=y
+# CONFIG_PWRSEQ_SD8787 is not set
+CONFIG_PWRSEQ_SIMPLE=y
+CONFIG_MMC_BLOCK=y
+CONFIG_MMC_BLOCK_MINORS=32
+# CONFIG_SDIO_UART is not set
+# CONFIG_MMC_TEST is not set
+
+#
+# MMC/SD/SDIO Host Controller Drivers
+#
+# CONFIG_MMC_DEBUG is not set
+CONFIG_MMC_ARMMMCI=y
+CONFIG_MMC_STM32_SDMMC=y
+CONFIG_MMC_SDHCI=y
+CONFIG_MMC_SDHCI_IO_ACCESSORS=y
+# CONFIG_MMC_SDHCI_PCI is not set
+CONFIG_MMC_SDHCI_PLTFM=y
+CONFIG_MMC_SDHCI_OF_ARASAN=y
+# CONFIG_MMC_SDHCI_OF_ASPEED is not set
+# CONFIG_MMC_SDHCI_OF_AT91 is not set
+# CONFIG_MMC_SDHCI_OF_DWCMSHC is not set
+CONFIG_MMC_SDHCI_CADENCE=y
+CONFIG_MMC_SDHCI_F_SDH30=y
+# CONFIG_MMC_SDHCI_MILBEAUT is not set
+# CONFIG_MMC_TIFM_SD is not set
+CONFIG_MMC_SPI=y
+# CONFIG_MMC_CB710 is not set
+# CONFIG_MMC_VIA_SDMMC is not set
+# CONFIG_MMC_DW is not set
+# CONFIG_MMC_VUB300 is not set
+# CONFIG_MMC_USHC is not set
+# CONFIG_MMC_USDHI6ROL0 is not set
+CONFIG_MMC_CQHCI=y
+# CONFIG_MMC_HSQ is not set
+# CONFIG_MMC_TOSHIBA_PCI is not set
+# CONFIG_MMC_MTK is not set
+CONFIG_MMC_SDHCI_XENON=y
+CONFIG_MMC_SDHCI_OMAP=y
+CONFIG_MMC_SDHCI_AM654=y
+CONFIG_MMC_SDHCI_EXTERNAL_DMA=y
+# CONFIG_MEMSTICK is not set
+CONFIG_NEW_LEDS=y
+CONFIG_LEDS_CLASS=y
+# CONFIG_LEDS_CLASS_FLASH is not set
+# CONFIG_LEDS_CLASS_MULTICOLOR is not set
+# CONFIG_LEDS_BRIGHTNESS_HW_CHANGED is not set
+
+#
+# LED drivers
+#
+# CONFIG_LEDS_AN30259A is not set
+# CONFIG_LEDS_AW2013 is not set
+# CONFIG_LEDS_BCM6328 is not set
+# CONFIG_LEDS_BCM6358 is not set
+# CONFIG_LEDS_CR0014114 is not set
+# CONFIG_LEDS_EL15203000 is not set
+# CONFIG_LEDS_LM3530 is not set
+# CONFIG_LEDS_LM3532 is not set
+# CONFIG_LEDS_LM3642 is not set
+# CONFIG_LEDS_LM3692X is not set
+# CONFIG_LEDS_PCA9532 is not set
+CONFIG_LEDS_GPIO=y
+# CONFIG_LEDS_LP3944 is not set
+# CONFIG_LEDS_LP3952 is not set
+# CONFIG_LEDS_LP50XX is not set
+# CONFIG_LEDS_LP55XX_COMMON is not set
+# CONFIG_LEDS_LP8860 is not set
+# CONFIG_LEDS_PCA955X is not set
+# CONFIG_LEDS_PCA963X is not set
+# CONFIG_LEDS_DAC124S085 is not set
+CONFIG_LEDS_PWM=y
+# CONFIG_LEDS_REGULATOR is not set
+# CONFIG_LEDS_BD2802 is not set
+# CONFIG_LEDS_LT3593 is not set
+# CONFIG_LEDS_TCA6507 is not set
+CONFIG_LEDS_TLC591XX=y
+# CONFIG_LEDS_LM355x is not set
+# CONFIG_LEDS_IS31FL319X is not set
+# CONFIG_LEDS_IS31FL32XX is not set
+
+#
+# LED driver for blink(1) USB RGB LED is under Special HID drivers (HID_THINGM)
+#
+# CONFIG_LEDS_BLINKM is not set
+CONFIG_LEDS_SYSCON=y
+# CONFIG_LEDS_MLXREG is not set
+# CONFIG_LEDS_USER is not set
+# CONFIG_LEDS_SPI_BYTE is not set
+# CONFIG_LEDS_TI_LMU_COMMON is not set
+
+#
+# LED Triggers
+#
+CONFIG_LEDS_TRIGGERS=y
+CONFIG_LEDS_TRIGGER_TIMER=y
+# CONFIG_LEDS_TRIGGER_ONESHOT is not set
+CONFIG_LEDS_TRIGGER_DISK=y
+# CONFIG_LEDS_TRIGGER_MTD is not set
+CONFIG_LEDS_TRIGGER_HEARTBEAT=y
+# CONFIG_LEDS_TRIGGER_BACKLIGHT is not set
+CONFIG_LEDS_TRIGGER_CPU=y
+# CONFIG_LEDS_TRIGGER_ACTIVITY is not set
+# CONFIG_LEDS_TRIGGER_GPIO is not set
+CONFIG_LEDS_TRIGGER_DEFAULT_ON=y
+
+#
+# iptables trigger is under Netfilter config (LED target)
+#
+# CONFIG_LEDS_TRIGGER_TRANSIENT is not set
+# CONFIG_LEDS_TRIGGER_CAMERA is not set
+CONFIG_LEDS_TRIGGER_PANIC=y
+# CONFIG_LEDS_TRIGGER_NETDEV is not set
+# CONFIG_LEDS_TRIGGER_PATTERN is not set
+# CONFIG_LEDS_TRIGGER_AUDIO is not set
+# CONFIG_ACCESSIBILITY is not set
+# CONFIG_INFINIBAND is not set
+CONFIG_EDAC_SUPPORT=y
+CONFIG_EDAC=y
+CONFIG_EDAC_LEGACY_SYSFS=y
+# CONFIG_EDAC_DEBUG is not set
+# CONFIG_EDAC_THUNDERX is not set
+# CONFIG_EDAC_XGENE is not set
+# CONFIG_EDAC_DMC520 is not set
+CONFIG_RTC_LIB=y
+CONFIG_RTC_CLASS=y
+CONFIG_RTC_HCTOSYS=y
+CONFIG_RTC_HCTOSYS_DEVICE="rtc0"
+CONFIG_RTC_SYSTOHC=y
+CONFIG_RTC_SYSTOHC_DEVICE="rtc0"
+# CONFIG_RTC_DEBUG is not set
+CONFIG_RTC_NVMEM=y
+
+#
+# RTC interfaces
+#
+CONFIG_RTC_INTF_SYSFS=y
+CONFIG_RTC_INTF_PROC=y
+CONFIG_RTC_INTF_DEV=y
+# CONFIG_RTC_INTF_DEV_UIE_EMUL is not set
+# CONFIG_RTC_DRV_TEST is not set
+
+#
+# I2C RTC drivers
+#
+# CONFIG_RTC_DRV_ABB5ZES3 is not set
+# CONFIG_RTC_DRV_ABEOZ9 is not set
+# CONFIG_RTC_DRV_ABX80X is not set
+CONFIG_RTC_DRV_DS1307=m
+# CONFIG_RTC_DRV_DS1307_CENTURY is not set
+# CONFIG_RTC_DRV_DS1374 is not set
+# CONFIG_RTC_DRV_DS1672 is not set
+# CONFIG_RTC_DRV_HYM8563 is not set
+# CONFIG_RTC_DRV_MAX6900 is not set
+CONFIG_RTC_DRV_MAX77686=y
+CONFIG_RTC_DRV_RK808=m
+# CONFIG_RTC_DRV_RS5C372 is not set
+# CONFIG_RTC_DRV_ISL1208 is not set
+# CONFIG_RTC_DRV_ISL12022 is not set
+# CONFIG_RTC_DRV_ISL12026 is not set
+# CONFIG_RTC_DRV_X1205 is not set
+# CONFIG_RTC_DRV_PCF8523 is not set
+# CONFIG_RTC_DRV_PCF85063 is not set
+CONFIG_RTC_DRV_PCF85363=m
+# CONFIG_RTC_DRV_PCF8563 is not set
+# CONFIG_RTC_DRV_PCF8583 is not set
+# CONFIG_RTC_DRV_M41T80 is not set
+# CONFIG_RTC_DRV_BQ32K is not set
+# CONFIG_RTC_DRV_PALMAS is not set
+# CONFIG_RTC_DRV_S35390A is not set
+# CONFIG_RTC_DRV_FM3130 is not set
+# CONFIG_RTC_DRV_RX8010 is not set
+CONFIG_RTC_DRV_RX8581=m
+# CONFIG_RTC_DRV_RX8025 is not set
+# CONFIG_RTC_DRV_EM3027 is not set
+# CONFIG_RTC_DRV_RV3028 is not set
+# CONFIG_RTC_DRV_RV3032 is not set
+CONFIG_RTC_DRV_RV8803=m
+CONFIG_RTC_DRV_S5M=y
+# CONFIG_RTC_DRV_SD3078 is not set
+
+#
+# SPI RTC drivers
+#
+# CONFIG_RTC_DRV_M41T93 is not set
+# CONFIG_RTC_DRV_M41T94 is not set
+# CONFIG_RTC_DRV_DS1302 is not set
+# CONFIG_RTC_DRV_DS1305 is not set
+# CONFIG_RTC_DRV_DS1343 is not set
+# CONFIG_RTC_DRV_DS1347 is not set
+# CONFIG_RTC_DRV_DS1390 is not set
+# CONFIG_RTC_DRV_MAX6916 is not set
+# CONFIG_RTC_DRV_R9701 is not set
+# CONFIG_RTC_DRV_RX4581 is not set
+# CONFIG_RTC_DRV_RX6110 is not set
+# CONFIG_RTC_DRV_RS5C348 is not set
+# CONFIG_RTC_DRV_MAX6902 is not set
+# CONFIG_RTC_DRV_PCF2123 is not set
+# CONFIG_RTC_DRV_MCP795 is not set
+CONFIG_RTC_I2C_AND_SPI=y
+
+#
+# SPI and I2C RTC drivers
+#
+CONFIG_RTC_DRV_DS3232=y
+CONFIG_RTC_DRV_DS3232_HWMON=y
+CONFIG_RTC_DRV_PCF2127=m
+# CONFIG_RTC_DRV_RV3029C2 is not set
+
+#
+# Platform RTC drivers
+#
+# CONFIG_RTC_DRV_DS1286 is not set
+# CONFIG_RTC_DRV_DS1511 is not set
+# CONFIG_RTC_DRV_DS1553 is not set
+# CONFIG_RTC_DRV_DS1685_FAMILY is not set
+# CONFIG_RTC_DRV_DS1742 is not set
+# CONFIG_RTC_DRV_DS2404 is not set
+# CONFIG_RTC_DRV_STK17TA8 is not set
+# CONFIG_RTC_DRV_M48T86 is not set
+# CONFIG_RTC_DRV_M48T35 is not set
+# CONFIG_RTC_DRV_M48T59 is not set
+# CONFIG_RTC_DRV_MSM6242 is not set
+# CONFIG_RTC_DRV_BQ4802 is not set
+# CONFIG_RTC_DRV_RP5C01 is not set
+# CONFIG_RTC_DRV_V3020 is not set
+# CONFIG_RTC_DRV_ZYNQMP is not set
+CONFIG_RTC_DRV_CROS_EC=y
+
+#
+# on-CPU RTC drivers
+#
+# CONFIG_RTC_DRV_PL030 is not set
+CONFIG_RTC_DRV_PL031=y
+# CONFIG_RTC_DRV_CADENCE is not set
+# CONFIG_RTC_DRV_FTRTC010 is not set
+# CONFIG_RTC_DRV_R7301 is not set
+
+#
+# HID Sensor RTC drivers
+#
+CONFIG_DMADEVICES=y
+# CONFIG_DMADEVICES_DEBUG is not set
+
+#
+# DMA Devices
+#
+CONFIG_ASYNC_TX_ENABLE_CHANNEL_SWITCH=y
+CONFIG_DMA_ENGINE=y
+CONFIG_DMA_VIRTUAL_CHANNELS=y
+CONFIG_DMA_OF=y
+# CONFIG_ALTERA_MSGDMA is not set
+# CONFIG_AMBA_PL08X is not set
+# CONFIG_BCM_SBA_RAID is not set
+# CONFIG_DW_AXI_DMAC is not set
+CONFIG_FSL_EDMA=y
+# CONFIG_FSL_QDMA is not set
+# CONFIG_HISI_DMA is not set
+# CONFIG_INTEL_IDMA64 is not set
+CONFIG_MV_XOR_V2=y
+CONFIG_PL330_DMA=y
+# CONFIG_PLX_DMA is not set
+# CONFIG_XILINX_DMA is not set
+# CONFIG_XILINX_ZYNQMP_DMA is not set
+# CONFIG_XILINX_ZYNQMP_DPDMA is not set
+CONFIG_QCOM_HIDMA_MGMT=y
+CONFIG_QCOM_HIDMA=y
+# CONFIG_DW_DMAC is not set
+# CONFIG_DW_DMAC_PCI is not set
+# CONFIG_DW_EDMA is not set
+# CONFIG_DW_EDMA_PCIE is not set
+# CONFIG_SF_PDMA is not set
+CONFIG_TI_K3_UDMA=y
+CONFIG_TI_K3_UDMA_GLUE_LAYER=y
+CONFIG_TI_K3_PSIL=y
+
+#
+# DMA Clients
+#
+# CONFIG_ASYNC_TX_DMA is not set
+# CONFIG_DMATEST is not set
+CONFIG_DMA_ENGINE_RAID=y
+
+#
+# DMABUF options
+#
+CONFIG_SYNC_FILE=y
+# CONFIG_SW_SYNC is not set
+# CONFIG_UDMABUF is not set
+# CONFIG_DMABUF_MOVE_NOTIFY is not set
+# CONFIG_DMABUF_SELFTESTS is not set
+CONFIG_DMABUF_HEAPS=y
+CONFIG_DMABUF_HEAPS_SYSTEM=y
+CONFIG_DMABUF_HEAPS_CMA=y
+# end of DMABUF options
+
+# CONFIG_AUXDISPLAY is not set
+CONFIG_UIO=y
+# CONFIG_UIO_CIF is not set
+# CONFIG_UIO_PDRV_GENIRQ is not set
+# CONFIG_UIO_DMEM_GENIRQ is not set
+# CONFIG_UIO_AEC is not set
+# CONFIG_UIO_SERCOS3 is not set
+# CONFIG_UIO_PCI_GENERIC is not set
+# CONFIG_UIO_NETX is not set
+# CONFIG_UIO_PRUSS is not set
+# CONFIG_UIO_MF624 is not set
+CONFIG_VFIO_IOMMU_TYPE1=y
+CONFIG_VFIO_VIRQFD=y
+CONFIG_VFIO=y
+# CONFIG_VFIO_NOIOMMU is not set
+CONFIG_VFIO_PCI=y
+CONFIG_VFIO_PCI_MMAP=y
+CONFIG_VFIO_PCI_INTX=y
+# CONFIG_VFIO_PLATFORM is not set
+# CONFIG_VFIO_MDEV is not set
+# CONFIG_VIRT_DRIVERS is not set
+CONFIG_VIRTIO=y
+CONFIG_VIRTIO_MENU=y
+CONFIG_VIRTIO_PCI=y
+CONFIG_VIRTIO_PCI_LEGACY=y
+CONFIG_VIRTIO_BALLOON=y
+# CONFIG_VIRTIO_INPUT is not set
+CONFIG_VIRTIO_MMIO=y
+# CONFIG_VIRTIO_MMIO_CMDLINE_DEVICES is not set
+# CONFIG_VDPA is not set
+CONFIG_VHOST_MENU=y
+# CONFIG_VHOST_NET is not set
+# CONFIG_VHOST_CROSS_ENDIAN_LEGACY is not set
+
+#
+# Microsoft Hyper-V guest support
+#
+# end of Microsoft Hyper-V guest support
+
+# CONFIG_GREYBUS is not set
+CONFIG_STAGING=y
+CONFIG_STAGING_MEDIA=y
+# CONFIG_GOLDFISH is not set
+CONFIG_CHROME_PLATFORMS=y
+CONFIG_CROS_EC=y
+CONFIG_CROS_EC_I2C=y
+# CONFIG_CROS_EC_RPMSG is not set
+CONFIG_CROS_EC_SPI=y
+CONFIG_CROS_EC_PROTO=y
+CONFIG_CROS_EC_CHARDEV=m
+CONFIG_CROS_EC_LIGHTBAR=y
+CONFIG_CROS_EC_VBC=y
+CONFIG_CROS_EC_DEBUGFS=y
+CONFIG_CROS_EC_SENSORHUB=y
+CONFIG_CROS_EC_SYSFS=y
+CONFIG_CROS_EC_TYPEC=m
+CONFIG_CROS_USBPD_NOTIFY=y
+# CONFIG_MELLANOX_PLATFORM is not set
+CONFIG_HAVE_CLK=y
+CONFIG_CLKDEV_LOOKUP=y
+CONFIG_HAVE_CLK_PREPARE=y
+CONFIG_COMMON_CLK=y
+# CONFIG_COMMON_CLK_MAX77686 is not set
+# CONFIG_COMMON_CLK_MAX9485 is not set
+CONFIG_COMMON_CLK_RK808=y
+# CONFIG_COMMON_CLK_SI5341 is not set
+# CONFIG_COMMON_CLK_SI5351 is not set
+# CONFIG_COMMON_CLK_SI514 is not set
+# CONFIG_COMMON_CLK_SI544 is not set
+# CONFIG_COMMON_CLK_SI570 is not set
+# CONFIG_COMMON_CLK_CDCE706 is not set
+# CONFIG_COMMON_CLK_CDCE925 is not set
+CONFIG_COMMON_CLK_CS2000_CP=y
+CONFIG_COMMON_CLK_S2MPS11=y
+# CONFIG_CLK_QORIQ is not set
+# CONFIG_COMMON_CLK_XGENE is not set
+# CONFIG_COMMON_CLK_PALMAS is not set
+CONFIG_COMMON_CLK_PWM=y
+CONFIG_COMMON_CLK_VC5=y
+CONFIG_COMMON_CLK_BD718XX=m
+# CONFIG_COMMON_CLK_FIXED_MMIO is not set
+CONFIG_TI_SCI_CLK=y
+# CONFIG_TI_SCI_CLK_PROBE_FROM_FW is not set
+CONFIG_TI_SYSCON_CLK=y
+CONFIG_HWSPINLOCK=y
+CONFIG_HWSPINLOCK_OMAP=y
+
+#
+# Clock Source drivers
+#
+CONFIG_TIMER_OF=y
+CONFIG_TIMER_PROBE=y
+CONFIG_ARM_ARCH_TIMER=y
+CONFIG_ARM_ARCH_TIMER_EVTSTREAM=y
+CONFIG_ARM_ARCH_TIMER_OOL_WORKAROUND=y
+CONFIG_FSL_ERRATUM_A008585=y
+CONFIG_HISILICON_ERRATUM_161010101=y
+CONFIG_ARM64_ERRATUM_858921=y
+# CONFIG_MICROCHIP_PIT64B is not set
+# end of Clock Source drivers
+
+CONFIG_MAILBOX=y
+# CONFIG_ARM_MHU is not set
+# CONFIG_PLATFORM_MHU is not set
+# CONFIG_PL320_MBOX is not set
+CONFIG_OMAP2PLUS_MBOX=y
+CONFIG_OMAP_MBOX_KFIFO_SIZE=256
+# CONFIG_ALTERA_MBOX is not set
+CONFIG_TI_MESSAGE_MANAGER=y
+# CONFIG_MAILBOX_TEST is not set
+CONFIG_IOMMU_IOVA=y
+CONFIG_IOMMU_API=y
+CONFIG_IOMMU_SUPPORT=y
+
+#
+# Generic IOMMU Pagetable Support
+#
+CONFIG_IOMMU_IO_PGTABLE=y
+CONFIG_IOMMU_IO_PGTABLE_LPAE=y
+# CONFIG_IOMMU_IO_PGTABLE_LPAE_SELFTEST is not set
+# CONFIG_IOMMU_IO_PGTABLE_ARMV7S is not set
+# end of Generic IOMMU Pagetable Support
+
+# CONFIG_IOMMU_DEBUGFS is not set
+# CONFIG_IOMMU_DEFAULT_PASSTHROUGH is not set
+CONFIG_OF_IOMMU=y
+CONFIG_IOMMU_DMA=y
+CONFIG_ARM_SMMU=y
+# CONFIG_ARM_SMMU_LEGACY_DT_BINDINGS is not set
+CONFIG_ARM_SMMU_DISABLE_BYPASS_BY_DEFAULT=y
+CONFIG_ARM_SMMU_V3=y
+# CONFIG_ARM_SMMU_V3_SVA is not set
+# CONFIG_VIRTIO_IOMMU is not set
+
+#
+# Remoteproc drivers
+#
+CONFIG_REMOTEPROC=y
+# CONFIG_REMOTEPROC_CDEV is not set
+CONFIG_PRU_REMOTEPROC=m
+CONFIG_TI_K3_DSP_REMOTEPROC=m
+CONFIG_TI_K3_R5_REMOTEPROC=m
+# end of Remoteproc drivers
+
+#
+# Rpmsg drivers
+#
+CONFIG_RPMSG=y
+CONFIG_RPMSG_CHAR=m
+CONFIG_RPMSG_QCOM_GLINK=y
+CONFIG_RPMSG_QCOM_GLINK_RPM=y
+CONFIG_RPMSG_VIRTIO=m
+# end of Rpmsg drivers
+
+#
+# Rpmsg virtual device drivers
+#
+CONFIG_RPMSG_KDRV=y
+# CONFIG_RPMSG_KDRV_DEMO is not set
+CONFIG_RPMSG_KDRV_DISPLAY=y
+CONFIG_RPMSG_KDRV_ETH_SWITCH=m
+# end of Rpmsg virtual device drivers
+
+CONFIG_SOUNDWIRE=m
+
+#
+# SoundWire Devices
+#
+CONFIG_SOUNDWIRE_QCOM=m
+
+#
+# SOC (System On Chip) specific Drivers
+#
+
+#
+# Amlogic SoC drivers
+#
+# end of Amlogic SoC drivers
+
+#
+# Aspeed SoC drivers
+#
+# end of Aspeed SoC drivers
+
+#
+# Broadcom SoC drivers
+#
+# CONFIG_SOC_BRCMSTB is not set
+# end of Broadcom SoC drivers
+
+#
+# NXP/Freescale QorIQ SoC drivers
+#
+# CONFIG_QUICC_ENGINE is not set
+# CONFIG_FSL_RCPM is not set
+# end of NXP/Freescale QorIQ SoC drivers
+
+#
+# i.MX SoC drivers
+#
+# end of i.MX SoC drivers
+
+#
+# Qualcomm SoC drivers
+#
+# end of Qualcomm SoC drivers
+
+CONFIG_SOC_TI=y
+CONFIG_TI_SCI_PM_DOMAINS=y
+CONFIG_TI_K3_RINGACC=y
+CONFIG_TI_K3_SOCINFO=y
+CONFIG_TI_PRUSS=m
+CONFIG_TI_SCI_INTA_MSI_DOMAIN=y
+
+#
+# Xilinx SoC drivers
+#
+# CONFIG_XILINX_VCU is not set
+# end of Xilinx SoC drivers
+# end of SOC (System On Chip) specific Drivers
+
+CONFIG_PM_DEVFREQ=y
+
+#
+# DEVFREQ Governors
+#
+CONFIG_DEVFREQ_GOV_SIMPLE_ONDEMAND=y
+# CONFIG_DEVFREQ_GOV_PERFORMANCE is not set
+# CONFIG_DEVFREQ_GOV_POWERSAVE is not set
+# CONFIG_DEVFREQ_GOV_USERSPACE is not set
+# CONFIG_DEVFREQ_GOV_PASSIVE is not set
+
+#
+# DEVFREQ Drivers
+#
+# CONFIG_PM_DEVFREQ_EVENT is not set
+CONFIG_EXTCON=y
+
+#
+# Extcon Device Drivers
+#
+# CONFIG_EXTCON_ADC_JACK is not set
+# CONFIG_EXTCON_FSA9480 is not set
+# CONFIG_EXTCON_GPIO is not set
+# CONFIG_EXTCON_MAX3355 is not set
+CONFIG_EXTCON_PALMAS=m
+CONFIG_EXTCON_PTN5150=m
+# CONFIG_EXTCON_RT8973A is not set
+# CONFIG_EXTCON_SM5502 is not set
+CONFIG_EXTCON_USB_GPIO=m
+CONFIG_EXTCON_USBC_CROS_EC=y
+CONFIG_MEMORY=y
+# CONFIG_ARM_PL172_MPMC is not set
+CONFIG_IIO=y
+CONFIG_IIO_BUFFER=y
+# CONFIG_IIO_BUFFER_CB is not set
+# CONFIG_IIO_BUFFER_DMA is not set
+# CONFIG_IIO_BUFFER_DMAENGINE is not set
+# CONFIG_IIO_BUFFER_HW_CONSUMER is not set
+CONFIG_IIO_KFIFO_BUF=m
+CONFIG_IIO_TRIGGERED_BUFFER=m
+# CONFIG_IIO_CONFIGFS is not set
+CONFIG_IIO_TRIGGER=y
+CONFIG_IIO_CONSUMERS_PER_TRIGGER=2
+# CONFIG_IIO_SW_DEVICE is not set
+# CONFIG_IIO_SW_TRIGGER is not set
+# CONFIG_IIO_TRIGGERED_EVENT is not set
+
+#
+# Accelerometers
+#
+# CONFIG_ADIS16201 is not set
+# CONFIG_ADIS16209 is not set
+# CONFIG_ADXL345_I2C is not set
+# CONFIG_ADXL345_SPI is not set
+# CONFIG_ADXL372_SPI is not set
+# CONFIG_ADXL372_I2C is not set
+# CONFIG_BMA180 is not set
+# CONFIG_BMA220 is not set
+# CONFIG_BMA400 is not set
+# CONFIG_BMC150_ACCEL is not set
+# CONFIG_DA280 is not set
+# CONFIG_DA311 is not set
+# CONFIG_DMARD06 is not set
+# CONFIG_DMARD09 is not set
+# CONFIG_DMARD10 is not set
+# CONFIG_IIO_CROS_EC_ACCEL_LEGACY is not set
+# CONFIG_IIO_ST_ACCEL_3AXIS is not set
+# CONFIG_KXSD9 is not set
+# CONFIG_KXCJK1013 is not set
+# CONFIG_MC3230 is not set
+# CONFIG_MMA7455_I2C is not set
+# CONFIG_MMA7455_SPI is not set
+# CONFIG_MMA7660 is not set
+# CONFIG_MMA8452 is not set
+# CONFIG_MMA9551 is not set
+# CONFIG_MMA9553 is not set
+# CONFIG_MXC4005 is not set
+# CONFIG_MXC6255 is not set
+# CONFIG_SCA3000 is not set
+# CONFIG_STK8312 is not set
+# CONFIG_STK8BA50 is not set
+# end of Accelerometers
+
+#
+# Analog to digital converters
+#
+# CONFIG_AD7091R5 is not set
+# CONFIG_AD7124 is not set
+# CONFIG_AD7192 is not set
+# CONFIG_AD7266 is not set
+# CONFIG_AD7291 is not set
+# CONFIG_AD7292 is not set
+# CONFIG_AD7298 is not set
+# CONFIG_AD7476 is not set
+# CONFIG_AD7606_IFACE_PARALLEL is not set
+# CONFIG_AD7606_IFACE_SPI is not set
+# CONFIG_AD7766 is not set
+# CONFIG_AD7768_1 is not set
+# CONFIG_AD7780 is not set
+# CONFIG_AD7791 is not set
+# CONFIG_AD7793 is not set
+# CONFIG_AD7887 is not set
+# CONFIG_AD7923 is not set
+# CONFIG_AD7949 is not set
+# CONFIG_AD799X is not set
+# CONFIG_ADI_AXI_ADC is not set
+# CONFIG_AXP20X_ADC is not set
+# CONFIG_AXP288_ADC is not set
+# CONFIG_CC10001_ADC is not set
+# CONFIG_ENVELOPE_DETECTOR is not set
+# CONFIG_HI8435 is not set
+# CONFIG_HX711 is not set
+# CONFIG_INA2XX_ADC is not set
+# CONFIG_LTC2471 is not set
+# CONFIG_LTC2485 is not set
+# CONFIG_LTC2496 is not set
+# CONFIG_LTC2497 is not set
+# CONFIG_MAX1027 is not set
+# CONFIG_MAX11100 is not set
+# CONFIG_MAX1118 is not set
+# CONFIG_MAX1241 is not set
+# CONFIG_MAX1363 is not set
+CONFIG_MAX9611=m
+# CONFIG_MCP320X is not set
+# CONFIG_MCP3422 is not set
+# CONFIG_MCP3911 is not set
+# CONFIG_NAU7802 is not set
+# CONFIG_PALMAS_GPADC is not set
+CONFIG_QCOM_VADC_COMMON=m
+# CONFIG_QCOM_SPMI_IADC is not set
+# CONFIG_QCOM_SPMI_VADC is not set
+CONFIG_QCOM_SPMI_ADC5=m
+# CONFIG_SD_ADC_MODULATOR is not set
+# CONFIG_TI_ADC081C is not set
+# CONFIG_TI_ADC0832 is not set
+# CONFIG_TI_ADC084S021 is not set
+# CONFIG_TI_ADC12138 is not set
+# CONFIG_TI_ADC108S102 is not set
+# CONFIG_TI_ADC128S052 is not set
+# CONFIG_TI_ADC161S626 is not set
+# CONFIG_TI_ADS1015 is not set
+# CONFIG_TI_ADS7950 is not set
+# CONFIG_TI_ADS8344 is not set
+# CONFIG_TI_ADS8688 is not set
+# CONFIG_TI_ADS124S08 is not set
+CONFIG_TI_AM335X_ADC=m
+# CONFIG_TI_TLC4541 is not set
+# CONFIG_VF610_ADC is not set
+# CONFIG_XILINX_XADC is not set
+# end of Analog to digital converters
+
+#
+# Analog Front Ends
+#
+# CONFIG_IIO_RESCALE is not set
+# end of Analog Front Ends
+
+#
+# Amplifiers
+#
+# CONFIG_AD8366 is not set
+# CONFIG_HMC425 is not set
+# end of Amplifiers
+
+#
+# Chemical Sensors
+#
+# CONFIG_ATLAS_PH_SENSOR is not set
+# CONFIG_ATLAS_EZO_SENSOR is not set
+# CONFIG_BME680 is not set
+# CONFIG_CCS811 is not set
+# CONFIG_IAQCORE is not set
+# CONFIG_PMS7003 is not set
+# CONFIG_SCD30_CORE is not set
+# CONFIG_SENSIRION_SGP30 is not set
+# CONFIG_SPS30 is not set
+# CONFIG_VZ89X is not set
+# end of Chemical Sensors
+
+CONFIG_IIO_CROS_EC_SENSORS_CORE=m
+CONFIG_IIO_CROS_EC_SENSORS=m
+# CONFIG_IIO_CROS_EC_SENSORS_LID_ANGLE is not set
+
+#
+# Hid Sensor IIO Common
+#
+# end of Hid Sensor IIO Common
+
+#
+# SSP Sensor Common
+#
+# CONFIG_IIO_SSP_SENSORHUB is not set
+# end of SSP Sensor Common
+
+#
+# Digital to analog converters
+#
+# CONFIG_AD5064 is not set
+# CONFIG_AD5360 is not set
+# CONFIG_AD5380 is not set
+# CONFIG_AD5421 is not set
+# CONFIG_AD5446 is not set
+# CONFIG_AD5449 is not set
+# CONFIG_AD5592R is not set
+# CONFIG_AD5593R is not set
+# CONFIG_AD5504 is not set
+# CONFIG_AD5624R_SPI is not set
+# CONFIG_AD5686_SPI is not set
+# CONFIG_AD5696_I2C is not set
+# CONFIG_AD5755 is not set
+# CONFIG_AD5758 is not set
+# CONFIG_AD5761 is not set
+# CONFIG_AD5764 is not set
+# CONFIG_AD5770R is not set
+# CONFIG_AD5791 is not set
+# CONFIG_AD7303 is not set
+# CONFIG_AD8801 is not set
+# CONFIG_DPOT_DAC is not set
+# CONFIG_DS4424 is not set
+# CONFIG_LTC1660 is not set
+# CONFIG_LTC2632 is not set
+# CONFIG_M62332 is not set
+# CONFIG_MAX517 is not set
+# CONFIG_MAX5821 is not set
+# CONFIG_MCP4725 is not set
+# CONFIG_MCP4922 is not set
+# CONFIG_TI_DAC082S085 is not set
+# CONFIG_TI_DAC5571 is not set
+# CONFIG_TI_DAC7311 is not set
+# CONFIG_TI_DAC7612 is not set
+# CONFIG_VF610_DAC is not set
+# end of Digital to analog converters
+
+#
+# IIO dummy driver
+#
+# end of IIO dummy driver
+
+#
+# Frequency Synthesizers DDS/PLL
+#
+
+#
+# Clock Generator/Distribution
+#
+# CONFIG_AD9523 is not set
+# end of Clock Generator/Distribution
+
+#
+# Phase-Locked Loop (PLL) frequency synthesizers
+#
+# CONFIG_ADF4350 is not set
+# CONFIG_ADF4371 is not set
+# end of Phase-Locked Loop (PLL) frequency synthesizers
+# end of Frequency Synthesizers DDS/PLL
+
+#
+# Digital gyroscope sensors
+#
+# CONFIG_ADIS16080 is not set
+# CONFIG_ADIS16130 is not set
+# CONFIG_ADIS16136 is not set
+# CONFIG_ADIS16260 is not set
+# CONFIG_ADXRS290 is not set
+# CONFIG_ADXRS450 is not set
+# CONFIG_BMG160 is not set
+# CONFIG_FXAS21002C is not set
+# CONFIG_MPU3050_I2C is not set
+# CONFIG_IIO_ST_GYRO_3AXIS is not set
+# CONFIG_ITG3200 is not set
+# end of Digital gyroscope sensors
+
+#
+# Health Sensors
+#
+
+#
+# Heart Rate Monitors
+#
+# CONFIG_AFE4403 is not set
+# CONFIG_AFE4404 is not set
+# CONFIG_MAX30100 is not set
+# CONFIG_MAX30102 is not set
+# end of Heart Rate Monitors
+# end of Health Sensors
+
+#
+# Humidity sensors
+#
+# CONFIG_AM2315 is not set
+# CONFIG_DHT11 is not set
+# CONFIG_HDC100X is not set
+# CONFIG_HDC2010 is not set
+# CONFIG_HTS221 is not set
+# CONFIG_HTU21 is not set
+# CONFIG_SI7005 is not set
+# CONFIG_SI7020 is not set
+# end of Humidity sensors
+
+#
+# Inertial measurement units
+#
+# CONFIG_ADIS16400 is not set
+# CONFIG_ADIS16460 is not set
+# CONFIG_ADIS16475 is not set
+# CONFIG_ADIS16480 is not set
+# CONFIG_BMI160_I2C is not set
+# CONFIG_BMI160_SPI is not set
+# CONFIG_FXOS8700_I2C is not set
+# CONFIG_FXOS8700_SPI is not set
+# CONFIG_KMX61 is not set
+# CONFIG_INV_ICM42600_I2C is not set
+# CONFIG_INV_ICM42600_SPI is not set
+# CONFIG_INV_MPU6050_I2C is not set
+# CONFIG_INV_MPU6050_SPI is not set
+# CONFIG_IIO_ST_LSM6DSX is not set
+# end of Inertial measurement units
+
+#
+# Light sensors
+#
+# CONFIG_ADJD_S311 is not set
+# CONFIG_ADUX1020 is not set
+# CONFIG_AL3010 is not set
+# CONFIG_AL3320A is not set
+# CONFIG_APDS9300 is not set
+# CONFIG_APDS9960 is not set
+# CONFIG_AS73211 is not set
+# CONFIG_BH1750 is not set
+# CONFIG_BH1780 is not set
+# CONFIG_CM32181 is not set
+# CONFIG_CM3232 is not set
+# CONFIG_CM3323 is not set
+# CONFIG_CM3605 is not set
+# CONFIG_CM36651 is not set
+CONFIG_IIO_CROS_EC_LIGHT_PROX=m
+# CONFIG_GP2AP002 is not set
+# CONFIG_GP2AP020A00F is not set
+CONFIG_SENSORS_ISL29018=m
+# CONFIG_SENSORS_ISL29028 is not set
+# CONFIG_ISL29125 is not set
+# CONFIG_JSA1212 is not set
+# CONFIG_RPR0521 is not set
+# CONFIG_LTR501 is not set
+# CONFIG_LV0104CS is not set
+# CONFIG_MAX44000 is not set
+# CONFIG_MAX44009 is not set
+# CONFIG_NOA1305 is not set
+# CONFIG_OPT3001 is not set
+# CONFIG_PA12203001 is not set
+# CONFIG_SI1133 is not set
+# CONFIG_SI1145 is not set
+# CONFIG_STK3310 is not set
+# CONFIG_ST_UVIS25 is not set
+# CONFIG_TCS3414 is not set
+# CONFIG_TCS3472 is not set
+# CONFIG_SENSORS_TSL2563 is not set
+# CONFIG_TSL2583 is not set
+# CONFIG_TSL2772 is not set
+# CONFIG_TSL4531 is not set
+# CONFIG_US5182D is not set
+# CONFIG_VCNL4000 is not set
+# CONFIG_VCNL4035 is not set
+# CONFIG_VEML6030 is not set
+# CONFIG_VEML6070 is not set
+# CONFIG_VL6180 is not set
+# CONFIG_ZOPT2201 is not set
+# end of Light sensors
+
+#
+# Magnetometer sensors
+#
+# CONFIG_AK8974 is not set
+# CONFIG_AK8975 is not set
+# CONFIG_AK09911 is not set
+# CONFIG_BMC150_MAGN_I2C is not set
+# CONFIG_BMC150_MAGN_SPI is not set
+# CONFIG_MAG3110 is not set
+# CONFIG_MMC35240 is not set
+# CONFIG_IIO_ST_MAGN_3AXIS is not set
+# CONFIG_SENSORS_HMC5843_I2C is not set
+# CONFIG_SENSORS_HMC5843_SPI is not set
+# CONFIG_SENSORS_RM3100_I2C is not set
+# CONFIG_SENSORS_RM3100_SPI is not set
+# end of Magnetometer sensors
+
+#
+# Multiplexers
+#
+# CONFIG_IIO_MUX is not set
+# end of Multiplexers
+
+#
+# Inclinometer sensors
+#
+# end of Inclinometer sensors
+
+#
+# Triggers - standalone
+#
+# CONFIG_IIO_INTERRUPT_TRIGGER is not set
+# CONFIG_IIO_SYSFS_TRIGGER is not set
+# end of Triggers - standalone
+
+#
+# Linear and angular position sensors
+#
+# end of Linear and angular position sensors
+
+#
+# Digital potentiometers
+#
+# CONFIG_AD5272 is not set
+# CONFIG_DS1803 is not set
+# CONFIG_MAX5432 is not set
+# CONFIG_MAX5481 is not set
+# CONFIG_MAX5487 is not set
+# CONFIG_MCP4018 is not set
+# CONFIG_MCP4131 is not set
+# CONFIG_MCP4531 is not set
+# CONFIG_MCP41010 is not set
+# CONFIG_TPL0102 is not set
+# end of Digital potentiometers
+
+#
+# Digital potentiostats
+#
+# CONFIG_LMP91000 is not set
+# end of Digital potentiostats
+
+#
+# Pressure sensors
+#
+# CONFIG_ABP060MG is not set
+# CONFIG_BMP280 is not set
+CONFIG_IIO_CROS_EC_BARO=m
+# CONFIG_DLHL60D is not set
+# CONFIG_DPS310 is not set
+# CONFIG_HP03 is not set
+# CONFIG_ICP10100 is not set
+# CONFIG_MPL115_I2C is not set
+# CONFIG_MPL115_SPI is not set
+CONFIG_MPL3115=m
+# CONFIG_MS5611 is not set
+# CONFIG_MS5637 is not set
+# CONFIG_IIO_ST_PRESS is not set
+# CONFIG_T5403 is not set
+# CONFIG_HP206C is not set
+# CONFIG_ZPA2326 is not set
+# end of Pressure sensors
+
+#
+# Lightning sensors
+#
+# CONFIG_AS3935 is not set
+# end of Lightning sensors
+
+#
+# Proximity and distance sensors
+#
+# CONFIG_ISL29501 is not set
+# CONFIG_LIDAR_LITE_V2 is not set
+# CONFIG_MB1232 is not set
+# CONFIG_PING is not set
+# CONFIG_RFD77402 is not set
+# CONFIG_SRF04 is not set
+# CONFIG_SX9310 is not set
+# CONFIG_SX9500 is not set
+# CONFIG_SRF08 is not set
+# CONFIG_VCNL3020 is not set
+# CONFIG_VL53L0X_I2C is not set
+# end of Proximity and distance sensors
+
+#
+# Resolver to digital converters
+#
+# CONFIG_AD2S90 is not set
+# CONFIG_AD2S1200 is not set
+# end of Resolver to digital converters
+
+#
+# Temperature sensors
+#
+# CONFIG_LTC2983 is not set
+# CONFIG_MAXIM_THERMOCOUPLE is not set
+# CONFIG_MLX90614 is not set
+# CONFIG_MLX90632 is not set
+# CONFIG_TMP006 is not set
+# CONFIG_TMP007 is not set
+# CONFIG_TSYS01 is not set
+# CONFIG_TSYS02D is not set
+# CONFIG_MAX31856 is not set
+# end of Temperature sensors
+
+CONFIG_NTB=m
+# CONFIG_NTB_MSI is not set
+# CONFIG_NTB_IDT is not set
+CONFIG_NTB_EPF=m
+# CONFIG_NTB_SWITCHTEC is not set
+# CONFIG_NTB_PINGPONG is not set
+# CONFIG_NTB_TOOL is not set
+# CONFIG_NTB_PERF is not set
+CONFIG_NTB_TRANSPORT=m
+# CONFIG_VME_BUS is not set
+CONFIG_PWM=y
+CONFIG_PWM_SYSFS=y
+# CONFIG_PWM_DEBUG is not set
+CONFIG_PWM_CROS_EC=m
+# CONFIG_PWM_FSL_FTM is not set
+# CONFIG_PWM_PCA9685 is not set
+CONFIG_PWM_TIECAP=y
+CONFIG_PWM_TIEHRPWM=y
+
+#
+# IRQ chip support
+#
+CONFIG_IRQCHIP=y
+CONFIG_ARM_GIC=y
+CONFIG_ARM_GIC_MAX_NR=1
+CONFIG_ARM_GIC_V2M=y
+CONFIG_ARM_GIC_V3=y
+CONFIG_ARM_GIC_V3_ITS=y
+CONFIG_ARM_GIC_V3_ITS_PCI=y
+# CONFIG_AL_FIC is not set
+CONFIG_PARTITION_PERCPU=y
+CONFIG_TI_SCI_INTR_IRQCHIP=y
+CONFIG_TI_SCI_INTA_IRQCHIP=y
+CONFIG_TI_PRUSS_INTC=m
+# end of IRQ chip support
+
+# CONFIG_IPACK_BUS is not set
+CONFIG_RESET_CONTROLLER=y
+# CONFIG_RESET_BRCMSTB_RESCAL is not set
+# CONFIG_RESET_INTEL_GW is not set
+CONFIG_RESET_TI_SCI=y
+CONFIG_RESET_TI_SYSCON=y
+
+#
+# PHY Subsystem
+#
+CONFIG_GENERIC_PHY=y
+CONFIG_GENERIC_PHY_MIPI_DPHY=y
+CONFIG_PHY_XGENE=y
+CONFIG_PHY_CAN_TRANSCEIVER=m
+# CONFIG_BCM_KONA_USB2_PHY is not set
+CONFIG_PHY_CADENCE_TORRENT=y
+CONFIG_PHY_CADENCE_DPHY=m
+CONFIG_PHY_CADENCE_SIERRA=y
+# CONFIG_PHY_CADENCE_SALVO is not set
+# CONFIG_PHY_FSL_IMX8MQ_USB is not set
+CONFIG_PHY_MIXEL_MIPI_DPHY=m
+# CONFIG_PHY_PXA_28NM_HSIC is not set
+# CONFIG_PHY_PXA_28NM_USB2 is not set
+# CONFIG_PHY_CPCAP_USB is not set
+# CONFIG_PHY_MAPPHONE_MDM6600 is not set
+# CONFIG_PHY_OCELOT_SERDES is not set
+CONFIG_PHY_AM654_SERDES=y
+CONFIG_PHY_J721E_WIZ=y
+CONFIG_OMAP_USB2=m
+CONFIG_PHY_TI_GMII_SEL=y
+# end of PHY Subsystem
+
+# CONFIG_POWERCAP is not set
+# CONFIG_MCB is not set
+
+#
+# Performance monitor support
+#
+# CONFIG_ARM_CCI_PMU is not set
+# CONFIG_ARM_CCN is not set
+# CONFIG_ARM_CMN is not set
+CONFIG_ARM_PMU=y
+# CONFIG_ARM_DSU_PMU is not set
+# CONFIG_ARM_SPE_PMU is not set
+# end of Performance monitor support
+
+CONFIG_RAS=y
+# CONFIG_USB4 is not set
+
+#
+# Android
+#
+# CONFIG_ANDROID is not set
+# end of Android
+
+# CONFIG_LIBNVDIMM is not set
+# CONFIG_DAX is not set
+CONFIG_NVMEM=y
+CONFIG_NVMEM_SYSFS=y
+# CONFIG_NVMEM_SPMI_SDAM is not set
+
+#
+# HW tracing support
+#
+# CONFIG_STM is not set
+# CONFIG_INTEL_TH is not set
+# end of HW tracing support
+
+CONFIG_FPGA=y
+# CONFIG_ALTERA_PR_IP_CORE is not set
+# CONFIG_FPGA_MGR_ALTERA_PS_SPI is not set
+# CONFIG_FPGA_MGR_ALTERA_CVP is not set
+# CONFIG_FPGA_MGR_XILINX_SPI is not set
+# CONFIG_FPGA_MGR_ICE40_SPI is not set
+# CONFIG_FPGA_MGR_MACHXO2_SPI is not set
+CONFIG_FPGA_BRIDGE=m
+CONFIG_ALTERA_FREEZE_BRIDGE=m
+# CONFIG_XILINX_PR_DECOUPLER is not set
+CONFIG_FPGA_REGION=m
+CONFIG_OF_FPGA_REGION=m
+# CONFIG_FPGA_DFL is not set
+# CONFIG_FSI is not set
+CONFIG_TEE=y
+
+#
+# TEE drivers
+#
+CONFIG_OPTEE=y
+CONFIG_OPTEE_SHM_NUM_PRIV_PAGES=1
+# end of TEE drivers
+
+CONFIG_MULTIPLEXER=y
+
+#
+# Multiplexer drivers
+#
+# CONFIG_MUX_ADG792A is not set
+# CONFIG_MUX_ADGS1408 is not set
+CONFIG_MUX_GPIO=y
+CONFIG_MUX_MMIO=y
+# end of Multiplexer drivers
+
+CONFIG_PM_OPP=y
+# CONFIG_SIOX is not set
+CONFIG_SLIMBUS=m
+CONFIG_SLIM_QCOM_CTRL=m
+CONFIG_INTERCONNECT=y
+# CONFIG_COUNTER is not set
+# CONFIG_MOST is not set
+# end of Device Drivers
+
+#
+# File systems
+#
+CONFIG_DCACHE_WORD_ACCESS=y
+# CONFIG_VALIDATE_FS_PARSER is not set
+CONFIG_FS_IOMAP=y
+CONFIG_EXT2_FS=y
+# CONFIG_EXT2_FS_XATTR is not set
+CONFIG_EXT3_FS=y
+# CONFIG_EXT3_FS_POSIX_ACL is not set
+# CONFIG_EXT3_FS_SECURITY is not set
+CONFIG_EXT4_FS=y
+CONFIG_EXT4_FS_POSIX_ACL=y
+CONFIG_EXT4_FS_SECURITY=y
+# CONFIG_EXT4_DEBUG is not set
+CONFIG_JBD2=y
+# CONFIG_JBD2_DEBUG is not set
+CONFIG_FS_MBCACHE=y
+# CONFIG_REISERFS_FS is not set
+# CONFIG_JFS_FS is not set
+# CONFIG_XFS_FS is not set
+# CONFIG_GFS2_FS is not set
+# CONFIG_OCFS2_FS is not set
+CONFIG_BTRFS_FS=m
+CONFIG_BTRFS_FS_POSIX_ACL=y
+# CONFIG_BTRFS_FS_CHECK_INTEGRITY is not set
+# CONFIG_BTRFS_FS_RUN_SANITY_TESTS is not set
+# CONFIG_BTRFS_DEBUG is not set
+# CONFIG_BTRFS_ASSERT is not set
+# CONFIG_BTRFS_FS_REF_VERIFY is not set
+# CONFIG_NILFS2_FS is not set
+# CONFIG_F2FS_FS is not set
+# CONFIG_FS_DAX is not set
+CONFIG_FS_POSIX_ACL=y
+CONFIG_EXPORTFS=y
+# CONFIG_EXPORTFS_BLOCK_OPS is not set
+CONFIG_FILE_LOCKING=y
+CONFIG_MANDATORY_FILE_LOCKING=y
+# CONFIG_FS_ENCRYPTION is not set
+# CONFIG_FS_VERITY is not set
+CONFIG_FSNOTIFY=y
+CONFIG_DNOTIFY=y
+CONFIG_INOTIFY_USER=y
+CONFIG_FANOTIFY=y
+CONFIG_FANOTIFY_ACCESS_PERMISSIONS=y
+CONFIG_QUOTA=y
+# CONFIG_QUOTA_NETLINK_INTERFACE is not set
+CONFIG_PRINT_QUOTA_WARNING=y
+# CONFIG_QUOTA_DEBUG is not set
+# CONFIG_QFMT_V1 is not set
+# CONFIG_QFMT_V2 is not set
+CONFIG_QUOTACTL=y
+CONFIG_AUTOFS4_FS=y
+CONFIG_AUTOFS_FS=y
+CONFIG_FUSE_FS=m
+CONFIG_CUSE=m
+# CONFIG_VIRTIO_FS is not set
+CONFIG_OVERLAY_FS=m
+# CONFIG_OVERLAY_FS_REDIRECT_DIR is not set
+CONFIG_OVERLAY_FS_REDIRECT_ALWAYS_FOLLOW=y
+# CONFIG_OVERLAY_FS_INDEX is not set
+# CONFIG_OVERLAY_FS_XINO_AUTO is not set
+# CONFIG_OVERLAY_FS_METACOPY is not set
+
+#
+# Caches
+#
+# CONFIG_FSCACHE is not set
+# end of Caches
+
+#
+# CD-ROM/DVD Filesystems
+#
+# CONFIG_ISO9660_FS is not set
+# CONFIG_UDF_FS is not set
+# end of CD-ROM/DVD Filesystems
+
+#
+# DOS/FAT/EXFAT/NT Filesystems
+#
+CONFIG_FAT_FS=y
+# CONFIG_MSDOS_FS is not set
+CONFIG_VFAT_FS=y
+CONFIG_FAT_DEFAULT_CODEPAGE=437
+CONFIG_FAT_DEFAULT_IOCHARSET="iso8859-1"
+# CONFIG_FAT_DEFAULT_UTF8 is not set
+# CONFIG_EXFAT_FS is not set
+# CONFIG_NTFS_FS is not set
+# end of DOS/FAT/EXFAT/NT Filesystems
+
+#
+# Pseudo filesystems
+#
+CONFIG_PROC_FS=y
+# CONFIG_PROC_KCORE is not set
+CONFIG_PROC_SYSCTL=y
+CONFIG_PROC_PAGE_MONITOR=y
+CONFIG_PROC_CHILDREN=y
+CONFIG_KERNFS=y
+CONFIG_SYSFS=y
+CONFIG_TMPFS=y
+# CONFIG_TMPFS_POSIX_ACL is not set
+# CONFIG_TMPFS_XATTR is not set
+# CONFIG_TMPFS_INODE64 is not set
+CONFIG_HUGETLBFS=y
+CONFIG_HUGETLB_PAGE=y
+CONFIG_MEMFD_CREATE=y
+CONFIG_ARCH_HAS_GIGANTIC_PAGE=y
+CONFIG_CONFIGFS_FS=y
+# end of Pseudo filesystems
+
+CONFIG_MISC_FILESYSTEMS=y
+# CONFIG_ORANGEFS_FS is not set
+# CONFIG_ADFS_FS is not set
+# CONFIG_AFFS_FS is not set
+# CONFIG_ECRYPT_FS is not set
+# CONFIG_HFS_FS is not set
+# CONFIG_HFSPLUS_FS is not set
+# CONFIG_BEFS_FS is not set
+# CONFIG_BFS_FS is not set
+# CONFIG_EFS_FS is not set
+# CONFIG_JFFS2_FS is not set
+CONFIG_UBIFS_FS=y
+# CONFIG_UBIFS_FS_ADVANCED_COMPR is not set
+CONFIG_UBIFS_FS_LZO=y
+CONFIG_UBIFS_FS_ZLIB=y
+CONFIG_UBIFS_FS_ZSTD=y
+# CONFIG_UBIFS_ATIME_SUPPORT is not set
+CONFIG_UBIFS_FS_XATTR=y
+CONFIG_UBIFS_FS_SECURITY=y
+# CONFIG_UBIFS_FS_AUTHENTICATION is not set
+# CONFIG_CRAMFS is not set
+CONFIG_SQUASHFS=y
+CONFIG_SQUASHFS_FILE_CACHE=y
+# CONFIG_SQUASHFS_FILE_DIRECT is not set
+CONFIG_SQUASHFS_DECOMP_SINGLE=y
+# CONFIG_SQUASHFS_DECOMP_MULTI is not set
+# CONFIG_SQUASHFS_DECOMP_MULTI_PERCPU is not set
+# CONFIG_SQUASHFS_XATTR is not set
+CONFIG_SQUASHFS_ZLIB=y
+# CONFIG_SQUASHFS_LZ4 is not set
+# CONFIG_SQUASHFS_LZO is not set
+# CONFIG_SQUASHFS_XZ is not set
+# CONFIG_SQUASHFS_ZSTD is not set
+# CONFIG_SQUASHFS_4K_DEVBLK_SIZE is not set
+# CONFIG_SQUASHFS_EMBEDDED is not set
+CONFIG_SQUASHFS_FRAGMENT_CACHE_SIZE=3
+# CONFIG_VXFS_FS is not set
+# CONFIG_MINIX_FS is not set
+# CONFIG_OMFS_FS is not set
+# CONFIG_HPFS_FS is not set
+# CONFIG_QNX4FS_FS is not set
+# CONFIG_QNX6FS_FS is not set
+# CONFIG_ROMFS_FS is not set
+# CONFIG_PSTORE is not set
+# CONFIG_SYSV_FS is not set
+# CONFIG_UFS_FS is not set
+# CONFIG_EROFS_FS is not set
+CONFIG_NETWORK_FILESYSTEMS=y
+CONFIG_NFS_FS=y
+CONFIG_NFS_V2=y
+CONFIG_NFS_V3=y
+# CONFIG_NFS_V3_ACL is not set
+CONFIG_NFS_V4=y
+# CONFIG_NFS_SWAP is not set
+CONFIG_NFS_V4_1=y
+CONFIG_NFS_V4_2=y
+CONFIG_PNFS_FILE_LAYOUT=y
+CONFIG_PNFS_BLOCK=m
+CONFIG_PNFS_FLEXFILE_LAYOUT=y
+CONFIG_NFS_V4_1_IMPLEMENTATION_ID_DOMAIN="kernel.org"
+# CONFIG_NFS_V4_1_MIGRATION is not set
+CONFIG_NFS_V4_SECURITY_LABEL=y
+CONFIG_ROOT_NFS=y
+# CONFIG_NFS_USE_LEGACY_DNS is not set
+CONFIG_NFS_USE_KERNEL_DNS=y
+CONFIG_NFS_DISABLE_UDP_SUPPORT=y
+# CONFIG_NFS_V4_2_READ_PLUS is not set
+# CONFIG_NFSD is not set
+CONFIG_GRACE_PERIOD=y
+CONFIG_LOCKD=y
+CONFIG_LOCKD_V4=y
+CONFIG_NFS_COMMON=y
+CONFIG_SUNRPC=y
+CONFIG_SUNRPC_GSS=y
+CONFIG_SUNRPC_BACKCHANNEL=y
+# CONFIG_SUNRPC_DEBUG is not set
+# CONFIG_CEPH_FS is not set
+CONFIG_CIFS=m
+# CONFIG_CIFS_STATS2 is not set
+CONFIG_CIFS_ALLOW_INSECURE_LEGACY=y
+# CONFIG_CIFS_WEAK_PW_HASH is not set
+# CONFIG_CIFS_UPCALL is not set
+CONFIG_CIFS_XATTR=y
+CONFIG_CIFS_POSIX=y
+CONFIG_CIFS_DEBUG=y
+# CONFIG_CIFS_DEBUG2 is not set
+# CONFIG_CIFS_DEBUG_DUMP_KEYS is not set
+# CONFIG_CIFS_DFS_UPCALL is not set
+# CONFIG_CODA_FS is not set
+# CONFIG_AFS_FS is not set
+CONFIG_9P_FS=y
+# CONFIG_9P_FS_POSIX_ACL is not set
+# CONFIG_9P_FS_SECURITY is not set
+CONFIG_NLS=y
+CONFIG_NLS_DEFAULT="iso8859-1"
+CONFIG_NLS_CODEPAGE_437=y
+# CONFIG_NLS_CODEPAGE_737 is not set
+# CONFIG_NLS_CODEPAGE_775 is not set
+# CONFIG_NLS_CODEPAGE_850 is not set
+# CONFIG_NLS_CODEPAGE_852 is not set
+# CONFIG_NLS_CODEPAGE_855 is not set
+# CONFIG_NLS_CODEPAGE_857 is not set
+# CONFIG_NLS_CODEPAGE_860 is not set
+# CONFIG_NLS_CODEPAGE_861 is not set
+# CONFIG_NLS_CODEPAGE_862 is not set
+# CONFIG_NLS_CODEPAGE_863 is not set
+# CONFIG_NLS_CODEPAGE_864 is not set
+# CONFIG_NLS_CODEPAGE_865 is not set
+# CONFIG_NLS_CODEPAGE_866 is not set
+# CONFIG_NLS_CODEPAGE_869 is not set
+# CONFIG_NLS_CODEPAGE_936 is not set
+# CONFIG_NLS_CODEPAGE_950 is not set
+# CONFIG_NLS_CODEPAGE_932 is not set
+# CONFIG_NLS_CODEPAGE_949 is not set
+# CONFIG_NLS_CODEPAGE_874 is not set
+# CONFIG_NLS_ISO8859_8 is not set
+# CONFIG_NLS_CODEPAGE_1250 is not set
+# CONFIG_NLS_CODEPAGE_1251 is not set
+# CONFIG_NLS_ASCII is not set
+CONFIG_NLS_ISO8859_1=y
+# CONFIG_NLS_ISO8859_2 is not set
+# CONFIG_NLS_ISO8859_3 is not set
+# CONFIG_NLS_ISO8859_4 is not set
+# CONFIG_NLS_ISO8859_5 is not set
+# CONFIG_NLS_ISO8859_6 is not set
+# CONFIG_NLS_ISO8859_7 is not set
+# CONFIG_NLS_ISO8859_9 is not set
+# CONFIG_NLS_ISO8859_13 is not set
+# CONFIG_NLS_ISO8859_14 is not set
+# CONFIG_NLS_ISO8859_15 is not set
+# CONFIG_NLS_KOI8_R is not set
+# CONFIG_NLS_KOI8_U is not set
+# CONFIG_NLS_MAC_ROMAN is not set
+# CONFIG_NLS_MAC_CELTIC is not set
+# CONFIG_NLS_MAC_CENTEURO is not set
+# CONFIG_NLS_MAC_CROATIAN is not set
+# CONFIG_NLS_MAC_CYRILLIC is not set
+# CONFIG_NLS_MAC_GAELIC is not set
+# CONFIG_NLS_MAC_GREEK is not set
+# CONFIG_NLS_MAC_ICELAND is not set
+# CONFIG_NLS_MAC_INUIT is not set
+# CONFIG_NLS_MAC_ROMANIAN is not set
+# CONFIG_NLS_MAC_TURKISH is not set
+# CONFIG_NLS_UTF8 is not set
+# CONFIG_DLM is not set
+# CONFIG_UNICODE is not set
+CONFIG_IO_WQ=y
+# end of File systems
+
+#
+# Security options
+#
+CONFIG_KEYS=y
+# CONFIG_KEYS_REQUEST_CACHE is not set
+# CONFIG_PERSISTENT_KEYRINGS is not set
+# CONFIG_TRUSTED_KEYS is not set
+# CONFIG_ENCRYPTED_KEYS is not set
+# CONFIG_KEY_DH_OPERATIONS is not set
+# CONFIG_SECURITY_DMESG_RESTRICT is not set
+CONFIG_SECURITY=y
+CONFIG_SECURITYFS=y
+# CONFIG_SECURITY_NETWORK is not set
+# CONFIG_SECURITY_PATH is not set
+CONFIG_HAVE_HARDENED_USERCOPY_ALLOCATOR=y
+# CONFIG_HARDENED_USERCOPY is not set
+# CONFIG_FORTIFY_SOURCE is not set
+# CONFIG_STATIC_USERMODEHELPER is not set
+# CONFIG_SECURITY_SMACK is not set
+# CONFIG_SECURITY_TOMOYO is not set
+# CONFIG_SECURITY_APPARMOR is not set
+# CONFIG_SECURITY_LOADPIN is not set
+# CONFIG_SECURITY_YAMA is not set
+# CONFIG_SECURITY_SAFESETID is not set
+# CONFIG_SECURITY_LOCKDOWN_LSM is not set
+CONFIG_INTEGRITY=y
+# CONFIG_INTEGRITY_SIGNATURE is not set
+# CONFIG_IMA is not set
+# CONFIG_EVM is not set
+CONFIG_DEFAULT_SECURITY_DAC=y
+CONFIG_LSM="lockdown,yama,loadpin,safesetid,integrity,bpf"
+
+#
+# Kernel hardening options
+#
+
+#
+# Memory initialization
+#
+CONFIG_INIT_STACK_NONE=y
+# CONFIG_INIT_ON_ALLOC_DEFAULT_ON is not set
+# CONFIG_INIT_ON_FREE_DEFAULT_ON is not set
+# end of Memory initialization
+# end of Kernel hardening options
+# end of Security options
+
+CONFIG_XOR_BLOCKS=m
+CONFIG_CRYPTO=y
+
+#
+# Crypto core or helper
+#
+CONFIG_CRYPTO_ALGAPI=y
+CONFIG_CRYPTO_ALGAPI2=y
+CONFIG_CRYPTO_AEAD=y
+CONFIG_CRYPTO_AEAD2=y
+CONFIG_CRYPTO_SKCIPHER=y
+CONFIG_CRYPTO_SKCIPHER2=y
+CONFIG_CRYPTO_HASH=y
+CONFIG_CRYPTO_HASH2=y
+CONFIG_CRYPTO_RNG=y
+CONFIG_CRYPTO_RNG2=y
+CONFIG_CRYPTO_RNG_DEFAULT=y
+CONFIG_CRYPTO_AKCIPHER2=y
+CONFIG_CRYPTO_AKCIPHER=y
+CONFIG_CRYPTO_KPP2=y
+CONFIG_CRYPTO_KPP=m
+CONFIG_CRYPTO_ACOMP2=y
+CONFIG_CRYPTO_MANAGER=y
+CONFIG_CRYPTO_MANAGER2=y
+# CONFIG_CRYPTO_USER is not set
+CONFIG_CRYPTO_MANAGER_DISABLE_TESTS=y
+CONFIG_CRYPTO_GF128MUL=y
+CONFIG_CRYPTO_NULL=y
+CONFIG_CRYPTO_NULL2=y
+# CONFIG_CRYPTO_PCRYPT is not set
+CONFIG_CRYPTO_CRYPTD=y
+CONFIG_CRYPTO_AUTHENC=m
+CONFIG_CRYPTO_TEST=m
+CONFIG_CRYPTO_SIMD=y
+
+#
+# Public-key cryptography
+#
+CONFIG_CRYPTO_RSA=y
+# CONFIG_CRYPTO_DH is not set
+CONFIG_CRYPTO_ECC=m
+CONFIG_CRYPTO_ECDH=m
+# CONFIG_CRYPTO_ECRDSA is not set
+# CONFIG_CRYPTO_SM2 is not set
+# CONFIG_CRYPTO_CURVE25519 is not set
+
+#
+# Authenticated Encryption with Associated Data
+#
+CONFIG_CRYPTO_CCM=m
+CONFIG_CRYPTO_GCM=m
+# CONFIG_CRYPTO_CHACHA20POLY1305 is not set
+# CONFIG_CRYPTO_AEGIS128 is not set
+CONFIG_CRYPTO_SEQIV=m
+CONFIG_CRYPTO_ECHAINIV=y
+
+#
+# Block modes
+#
+CONFIG_CRYPTO_CBC=m
+# CONFIG_CRYPTO_CFB is not set
+CONFIG_CRYPTO_CTR=m
+# CONFIG_CRYPTO_CTS is not set
+CONFIG_CRYPTO_ECB=m
+# CONFIG_CRYPTO_LRW is not set
+# CONFIG_CRYPTO_OFB is not set
+# CONFIG_CRYPTO_PCBC is not set
+CONFIG_CRYPTO_XTS=m
+# CONFIG_CRYPTO_KEYWRAP is not set
+# CONFIG_CRYPTO_ADIANTUM is not set
+# CONFIG_CRYPTO_ESSIV is not set
+
+#
+# Hash modes
+#
+CONFIG_CRYPTO_CMAC=m
+CONFIG_CRYPTO_HMAC=y
+# CONFIG_CRYPTO_XCBC is not set
+# CONFIG_CRYPTO_VMAC is not set
+
+#
+# Digest
+#
+CONFIG_CRYPTO_CRC32C=y
+# CONFIG_CRYPTO_CRC32 is not set
+CONFIG_CRYPTO_XXHASH=m
+CONFIG_CRYPTO_BLAKE2B=m
+# CONFIG_CRYPTO_BLAKE2S is not set
+CONFIG_CRYPTO_CRCT10DIF=y
+CONFIG_CRYPTO_GHASH=m
+# CONFIG_CRYPTO_POLY1305 is not set
+CONFIG_CRYPTO_MD4=m
+CONFIG_CRYPTO_MD5=m
+# CONFIG_CRYPTO_MICHAEL_MIC is not set
+# CONFIG_CRYPTO_RMD128 is not set
+# CONFIG_CRYPTO_RMD160 is not set
+# CONFIG_CRYPTO_RMD256 is not set
+# CONFIG_CRYPTO_RMD320 is not set
+CONFIG_CRYPTO_SHA1=y
+CONFIG_CRYPTO_SHA256=y
+CONFIG_CRYPTO_SHA512=m
+CONFIG_CRYPTO_SHA3=m
+CONFIG_CRYPTO_SM3=m
+# CONFIG_CRYPTO_STREEBOG is not set
+# CONFIG_CRYPTO_TGR192 is not set
+# CONFIG_CRYPTO_WP512 is not set
+
+#
+# Ciphers
+#
+CONFIG_CRYPTO_AES=y
+# CONFIG_CRYPTO_AES_TI is not set
+# CONFIG_CRYPTO_ANUBIS is not set
+# CONFIG_CRYPTO_ARC4 is not set
+# CONFIG_CRYPTO_BLOWFISH is not set
+# CONFIG_CRYPTO_CAMELLIA is not set
+# CONFIG_CRYPTO_CAST5 is not set
+# CONFIG_CRYPTO_CAST6 is not set
+CONFIG_CRYPTO_DES=m
+# CONFIG_CRYPTO_FCRYPT is not set
+# CONFIG_CRYPTO_KHAZAD is not set
+# CONFIG_CRYPTO_SALSA20 is not set
+# CONFIG_CRYPTO_CHACHA20 is not set
+# CONFIG_CRYPTO_SEED is not set
+# CONFIG_CRYPTO_SERPENT is not set
+CONFIG_CRYPTO_SM4=m
+# CONFIG_CRYPTO_TEA is not set
+# CONFIG_CRYPTO_TWOFISH is not set
+
+#
+# Compression
+#
+CONFIG_CRYPTO_DEFLATE=y
+CONFIG_CRYPTO_LZO=y
+# CONFIG_CRYPTO_842 is not set
+# CONFIG_CRYPTO_LZ4 is not set
+# CONFIG_CRYPTO_LZ4HC is not set
+CONFIG_CRYPTO_ZSTD=y
+
+#
+# Random Number Generation
+#
+CONFIG_CRYPTO_ANSI_CPRNG=y
+CONFIG_CRYPTO_DRBG_MENU=y
+CONFIG_CRYPTO_DRBG_HMAC=y
+# CONFIG_CRYPTO_DRBG_HASH is not set
+# CONFIG_CRYPTO_DRBG_CTR is not set
+CONFIG_CRYPTO_DRBG=y
+CONFIG_CRYPTO_JITTERENTROPY=y
+CONFIG_CRYPTO_USER_API=m
+# CONFIG_CRYPTO_USER_API_HASH is not set
+# CONFIG_CRYPTO_USER_API_SKCIPHER is not set
+CONFIG_CRYPTO_USER_API_RNG=m
+# CONFIG_CRYPTO_USER_API_RNG_CAVP is not set
+# CONFIG_CRYPTO_USER_API_AEAD is not set
+CONFIG_CRYPTO_USER_API_ENABLE_OBSOLETE=y
+CONFIG_CRYPTO_HASH_INFO=y
+
+#
+# Crypto library routines
+#
+CONFIG_CRYPTO_LIB_AES=y
+CONFIG_CRYPTO_LIB_ARC4=m
+# CONFIG_CRYPTO_LIB_BLAKE2S is not set
+CONFIG_CRYPTO_ARCH_HAVE_LIB_CHACHA=m
+CONFIG_CRYPTO_LIB_CHACHA_GENERIC=m
+# CONFIG_CRYPTO_LIB_CHACHA is not set
+# CONFIG_CRYPTO_LIB_CURVE25519 is not set
+CONFIG_CRYPTO_LIB_DES=m
+CONFIG_CRYPTO_LIB_POLY1305_RSIZE=9
+# CONFIG_CRYPTO_LIB_POLY1305 is not set
+# CONFIG_CRYPTO_LIB_CHACHA20POLY1305 is not set
+CONFIG_CRYPTO_LIB_SHA256=y
+CONFIG_CRYPTO_HW=y
+# CONFIG_CRYPTO_DEV_ATMEL_ECC is not set
+# CONFIG_CRYPTO_DEV_ATMEL_SHA204A is not set
+# CONFIG_CRYPTO_DEV_CCP is not set
+# CONFIG_CRYPTO_DEV_NITROX_CNN55XX is not set
+# CONFIG_CRYPTO_DEV_CAVIUM_ZIP is not set
+# CONFIG_CRYPTO_DEV_VIRTIO is not set
+# CONFIG_CRYPTO_DEV_SAFEXCEL is not set
+CONFIG_CRYPTO_DEV_CCREE=m
+# CONFIG_CRYPTO_DEV_HISI_SEC is not set
+# CONFIG_CRYPTO_DEV_AMLOGIC_GXL is not set
+CONFIG_CRYPTO_DEV_SA2UL=m
+CONFIG_ASYMMETRIC_KEY_TYPE=y
+CONFIG_ASYMMETRIC_PUBLIC_KEY_SUBTYPE=y
+CONFIG_X509_CERTIFICATE_PARSER=y
+# CONFIG_PKCS8_PRIVATE_KEY_PARSER is not set
+CONFIG_PKCS7_MESSAGE_PARSER=y
+# CONFIG_PKCS7_TEST_KEY is not set
+# CONFIG_SIGNED_PE_FILE_VERIFICATION is not set
+
+#
+# Certificates for signature checking
+#
+CONFIG_SYSTEM_TRUSTED_KEYRING=y
+CONFIG_SYSTEM_TRUSTED_KEYS=""
+# CONFIG_SYSTEM_EXTRA_CERTIFICATE is not set
+# CONFIG_SECONDARY_TRUSTED_KEYRING is not set
+# CONFIG_SYSTEM_BLACKLIST_KEYRING is not set
+# end of Certificates for signature checking
+
+#
+# Library routines
+#
+CONFIG_RAID6_PQ=m
+CONFIG_RAID6_PQ_BENCHMARK=y
+CONFIG_LINEAR_RANGES=y
+# CONFIG_PACKING is not set
+CONFIG_BITREVERSE=y
+CONFIG_HAVE_ARCH_BITREVERSE=y
+CONFIG_GENERIC_STRNCPY_FROM_USER=y
+CONFIG_GENERIC_STRNLEN_USER=y
+CONFIG_GENERIC_NET_UTILS=y
+CONFIG_CORDIC=m
+# CONFIG_PRIME_NUMBERS is not set
+CONFIG_RATIONAL=y
+CONFIG_GENERIC_PCI_IOMAP=y
+CONFIG_ARCH_USE_CMPXCHG_LOCKREF=y
+CONFIG_ARCH_HAS_FAST_MULTIPLIER=y
+CONFIG_ARCH_USE_SYM_ANNOTATIONS=y
+# CONFIG_INDIRECT_PIO is not set
+# CONFIG_CRC_CCITT is not set
+CONFIG_CRC16=y
+CONFIG_CRC_T10DIF=y
+CONFIG_CRC_ITU_T=y
+CONFIG_CRC32=y
+# CONFIG_CRC32_SELFTEST is not set
+CONFIG_CRC32_SLICEBY8=y
+# CONFIG_CRC32_SLICEBY4 is not set
+# CONFIG_CRC32_SARWATE is not set
+# CONFIG_CRC32_BIT is not set
+# CONFIG_CRC64 is not set
+# CONFIG_CRC4 is not set
+CONFIG_CRC7=y
+CONFIG_LIBCRC32C=m
+# CONFIG_CRC8 is not set
+CONFIG_XXHASH=y
+CONFIG_AUDIT_ARCH_COMPAT_GENERIC=y
+# CONFIG_RANDOM32_SELFTEST is not set
+CONFIG_ZLIB_INFLATE=y
+CONFIG_ZLIB_DEFLATE=y
+CONFIG_LZO_COMPRESS=y
+CONFIG_LZO_DECOMPRESS=y
+CONFIG_LZ4_DECOMPRESS=y
+CONFIG_ZSTD_COMPRESS=y
+CONFIG_ZSTD_DECOMPRESS=y
+CONFIG_XZ_DEC=y
+CONFIG_XZ_DEC_X86=y
+CONFIG_XZ_DEC_POWERPC=y
+CONFIG_XZ_DEC_IA64=y
+CONFIG_XZ_DEC_ARM=y
+CONFIG_XZ_DEC_ARMTHUMB=y
+CONFIG_XZ_DEC_SPARC=y
+CONFIG_XZ_DEC_BCJ=y
+# CONFIG_XZ_DEC_TEST is not set
+CONFIG_DECOMPRESS_GZIP=y
+CONFIG_DECOMPRESS_BZIP2=y
+CONFIG_DECOMPRESS_LZMA=y
+CONFIG_DECOMPRESS_XZ=y
+CONFIG_DECOMPRESS_LZO=y
+CONFIG_DECOMPRESS_LZ4=y
+CONFIG_DECOMPRESS_ZSTD=y
+CONFIG_GENERIC_ALLOCATOR=y
+CONFIG_TEXTSEARCH=y
+CONFIG_TEXTSEARCH_KMP=m
+CONFIG_TEXTSEARCH_BM=m
+CONFIG_TEXTSEARCH_FSM=m
+CONFIG_XARRAY_MULTI=y
+CONFIG_ASSOCIATIVE_ARRAY=y
+CONFIG_HAS_IOMEM=y
+CONFIG_HAS_IOPORT_MAP=y
+CONFIG_HAS_DMA=y
+CONFIG_DMA_OPS=y
+CONFIG_NEED_SG_DMA_LENGTH=y
+CONFIG_NEED_DMA_MAP_STATE=y
+CONFIG_ARCH_DMA_ADDR_T_64BIT=y
+CONFIG_DMA_DECLARE_COHERENT=y
+CONFIG_ARCH_HAS_SETUP_DMA_OPS=y
+CONFIG_ARCH_HAS_TEARDOWN_DMA_OPS=y
+CONFIG_ARCH_HAS_SYNC_DMA_FOR_DEVICE=y
+CONFIG_ARCH_HAS_SYNC_DMA_FOR_CPU=y
+CONFIG_ARCH_HAS_DMA_PREP_COHERENT=y
+CONFIG_SWIOTLB=y
+CONFIG_DMA_NONCOHERENT_MMAP=y
+CONFIG_DMA_COHERENT_POOL=y
+CONFIG_DMA_REMAP=y
+CONFIG_DMA_DIRECT_REMAP=y
+CONFIG_DMA_CMA=y
+# CONFIG_DMA_PERNUMA_CMA is not set
+
+#
+# Default contiguous memory area size:
+#
+CONFIG_CMA_SIZE_MBYTES=24
+CONFIG_CMA_SIZE_SEL_MBYTES=y
+# CONFIG_CMA_SIZE_SEL_PERCENTAGE is not set
+# CONFIG_CMA_SIZE_SEL_MIN is not set
+# CONFIG_CMA_SIZE_SEL_MAX is not set
+CONFIG_CMA_ALIGNMENT=8
+# CONFIG_DMA_API_DEBUG is not set
+CONFIG_SGL_ALLOC=y
+CONFIG_CPU_RMAP=y
+CONFIG_DQL=y
+CONFIG_GLOB=y
+# CONFIG_GLOB_SELFTEST is not set
+CONFIG_NLATTR=y
+CONFIG_CLZ_TAB=y
+CONFIG_IRQ_POLL=y
+CONFIG_MPILIB=y
+CONFIG_LIBFDT=y
+CONFIG_OID_REGISTRY=y
+CONFIG_HAVE_GENERIC_VDSO=y
+CONFIG_GENERIC_GETTIMEOFDAY=y
+CONFIG_GENERIC_VDSO_TIME_NS=y
+CONFIG_FONT_SUPPORT=y
+# CONFIG_FONTS is not set
+CONFIG_FONT_8x8=y
+CONFIG_FONT_8x16=y
+CONFIG_SG_SPLIT=y
+CONFIG_SG_POOL=y
+CONFIG_ARCH_STACKWALK=y
+CONFIG_SBITMAP=y
+# CONFIG_STRING_SELFTEST is not set
+# end of Library routines
+
+#
+# Kernel hacking
+#
+
+#
+# printk and dmesg options
+#
+CONFIG_PRINTK_TIME=y
+# CONFIG_PRINTK_CALLER is not set
+CONFIG_CONSOLE_LOGLEVEL_DEFAULT=7
+CONFIG_CONSOLE_LOGLEVEL_QUIET=4
+CONFIG_MESSAGE_LOGLEVEL_DEFAULT=4
+# CONFIG_BOOT_PRINTK_DELAY is not set
+# CONFIG_DYNAMIC_DEBUG is not set
+# CONFIG_DYNAMIC_DEBUG_CORE is not set
+CONFIG_SYMBOLIC_ERRNAME=y
+CONFIG_DEBUG_BUGVERBOSE=y
+# end of printk and dmesg options
+
+#
+# Compile-time checks and compiler options
+#
+# CONFIG_DEBUG_INFO is not set
+CONFIG_ENABLE_MUST_CHECK=y
+CONFIG_FRAME_WARN=2048
+# CONFIG_STRIP_ASM_SYMS is not set
+# CONFIG_READABLE_ASM is not set
+# CONFIG_HEADERS_INSTALL is not set
+# CONFIG_DEBUG_SECTION_MISMATCH is not set
+CONFIG_SECTION_MISMATCH_WARN_ONLY=y
+# CONFIG_DEBUG_FORCE_FUNCTION_ALIGN_32B is not set
+CONFIG_ARCH_WANT_FRAME_POINTERS=y
+CONFIG_FRAME_POINTER=y
+# CONFIG_DEBUG_FORCE_WEAK_PER_CPU is not set
+# end of Compile-time checks and compiler options
+
+#
+# Generic Kernel Debugging Instruments
+#
+CONFIG_MAGIC_SYSRQ=y
+CONFIG_MAGIC_SYSRQ_DEFAULT_ENABLE=0x1
+CONFIG_MAGIC_SYSRQ_SERIAL=y
+CONFIG_MAGIC_SYSRQ_SERIAL_SEQUENCE=""
+CONFIG_DEBUG_FS=y
+CONFIG_DEBUG_FS_ALLOW_ALL=y
+# CONFIG_DEBUG_FS_DISALLOW_MOUNT is not set
+# CONFIG_DEBUG_FS_ALLOW_NONE is not set
+CONFIG_HAVE_ARCH_KGDB=y
+# CONFIG_KGDB is not set
+CONFIG_ARCH_HAS_UBSAN_SANITIZE_ALL=y
+# CONFIG_UBSAN is not set
+# end of Generic Kernel Debugging Instruments
+
+CONFIG_DEBUG_KERNEL=y
+CONFIG_DEBUG_MISC=y
+
+#
+# Memory Debugging
+#
+# CONFIG_PAGE_EXTENSION is not set
+# CONFIG_DEBUG_PAGEALLOC is not set
+# CONFIG_PAGE_OWNER is not set
+# CONFIG_PAGE_POISONING is not set
+# CONFIG_DEBUG_RODATA_TEST is not set
+CONFIG_ARCH_HAS_DEBUG_WX=y
+# CONFIG_DEBUG_WX is not set
+CONFIG_GENERIC_PTDUMP=y
+# CONFIG_PTDUMP_DEBUGFS is not set
+# CONFIG_DEBUG_OBJECTS is not set
+# CONFIG_SLUB_STATS is not set
+CONFIG_HAVE_DEBUG_KMEMLEAK=y
+# CONFIG_DEBUG_KMEMLEAK is not set
+# CONFIG_DEBUG_STACK_USAGE is not set
+# CONFIG_SCHED_STACK_END_CHECK is not set
+CONFIG_ARCH_HAS_DEBUG_VM_PGTABLE=y
+# CONFIG_DEBUG_VM is not set
+# CONFIG_DEBUG_VM_PGTABLE is not set
+CONFIG_ARCH_HAS_DEBUG_VIRTUAL=y
+# CONFIG_DEBUG_VIRTUAL is not set
+# CONFIG_DEBUG_MEMORY_INIT is not set
+# CONFIG_DEBUG_PER_CPU_MAPS is not set
+CONFIG_HAVE_ARCH_KASAN=y
+CONFIG_HAVE_ARCH_KASAN_SW_TAGS=y
+CONFIG_CC_HAS_KASAN_GENERIC=y
+CONFIG_CC_HAS_WORKING_NOSANITIZE_ADDRESS=y
+# CONFIG_KASAN is not set
+# end of Memory Debugging
+
+# CONFIG_DEBUG_SHIRQ is not set
+
+#
+# Debug Oops, Lockups and Hangs
+#
+# CONFIG_PANIC_ON_OOPS is not set
+CONFIG_PANIC_ON_OOPS_VALUE=0
+CONFIG_PANIC_TIMEOUT=0
+# CONFIG_SOFTLOCKUP_DETECTOR is not set
+# CONFIG_DETECT_HUNG_TASK is not set
+# CONFIG_WQ_WATCHDOG is not set
+# CONFIG_TEST_LOCKUP is not set
+# end of Debug Oops, Lockups and Hangs
+
+#
+# Scheduler Debugging
+#
+CONFIG_SCHED_DEBUG=y
+CONFIG_SCHED_INFO=y
+CONFIG_SCHEDSTATS=y
+# end of Scheduler Debugging
+
+# CONFIG_DEBUG_TIMEKEEPING is not set
+# CONFIG_DEBUG_PREEMPT is not set
+
+#
+# Lock Debugging (spinlocks, mutexes, etc...)
+#
+CONFIG_LOCK_DEBUGGING_SUPPORT=y
+# CONFIG_PROVE_LOCKING is not set
+# CONFIG_LOCK_STAT is not set
+# CONFIG_DEBUG_RT_MUTEXES is not set
+# CONFIG_DEBUG_SPINLOCK is not set
+# CONFIG_DEBUG_MUTEXES is not set
+# CONFIG_DEBUG_WW_MUTEX_SLOWPATH is not set
+# CONFIG_DEBUG_RWSEMS is not set
+# CONFIG_DEBUG_LOCK_ALLOC is not set
+# CONFIG_DEBUG_ATOMIC_SLEEP is not set
+# CONFIG_DEBUG_LOCKING_API_SELFTESTS is not set
+# CONFIG_LOCK_TORTURE_TEST is not set
+# CONFIG_WW_MUTEX_SELFTEST is not set
+# CONFIG_SCF_TORTURE_TEST is not set
+# CONFIG_CSD_LOCK_WAIT_DEBUG is not set
+# end of Lock Debugging (spinlocks, mutexes, etc...)
+
+# CONFIG_STACKTRACE is not set
+# CONFIG_WARN_ALL_UNSEEDED_RANDOM is not set
+# CONFIG_DEBUG_KOBJECT is not set
+CONFIG_HAVE_DEBUG_BUGVERBOSE=y
+
+#
+# Debug kernel data structures
+#
+# CONFIG_DEBUG_LIST is not set
+# CONFIG_DEBUG_PLIST is not set
+# CONFIG_DEBUG_SG is not set
+# CONFIG_DEBUG_NOTIFIERS is not set
+# CONFIG_BUG_ON_DATA_CORRUPTION is not set
+# end of Debug kernel data structures
+
+# CONFIG_DEBUG_CREDENTIALS is not set
+
+#
+# RCU Debugging
+#
+# CONFIG_RCU_SCALE_TEST is not set
+# CONFIG_RCU_TORTURE_TEST is not set
+# CONFIG_RCU_REF_SCALE_TEST is not set
+CONFIG_RCU_CPU_STALL_TIMEOUT=21
+CONFIG_RCU_TRACE=y
+# CONFIG_RCU_EQS_DEBUG is not set
+# end of RCU Debugging
+
+# CONFIG_DEBUG_WQ_FORCE_RR_CPU is not set
+# CONFIG_DEBUG_BLOCK_EXT_DEVT is not set
+# CONFIG_CPU_HOTPLUG_STATE_CONTROL is not set
+# CONFIG_LATENCYTOP is not set
+CONFIG_HAVE_FUNCTION_TRACER=y
+CONFIG_HAVE_FUNCTION_GRAPH_TRACER=y
+CONFIG_HAVE_DYNAMIC_FTRACE=y
+CONFIG_HAVE_DYNAMIC_FTRACE_WITH_REGS=y
+CONFIG_HAVE_FTRACE_MCOUNT_RECORD=y
+CONFIG_HAVE_SYSCALL_TRACEPOINTS=y
+CONFIG_HAVE_C_RECORDMCOUNT=y
+CONFIG_TRACE_CLOCK=y
+CONFIG_TRACING_SUPPORT=y
+# CONFIG_FTRACE is not set
+CONFIG_SAMPLES=y
+# CONFIG_SAMPLE_AUXDISPLAY is not set
+# CONFIG_SAMPLE_KOBJECT is not set
+# CONFIG_SAMPLE_HW_BREAKPOINT is not set
+# CONFIG_SAMPLE_KFIFO is not set
+CONFIG_SAMPLE_RPMSG_CLIENT=m
+# CONFIG_SAMPLE_CONFIGFS is not set
+# CONFIG_SAMPLE_VFIO_MDEV_MDPY_FB is not set
+# CONFIG_SAMPLE_WATCHDOG is not set
+CONFIG_ARCH_HAS_DEVMEM_IS_ALLOWED=y
+CONFIG_STRICT_DEVMEM=y
+# CONFIG_IO_STRICT_DEVMEM is not set
+
+#
+# arm64 Debugging
+#
+# CONFIG_PID_IN_CONTEXTIDR is not set
+# CONFIG_ARM64_RELOC_TEST is not set
+# CONFIG_CORESIGHT is not set
+# end of arm64 Debugging
+
+#
+# Kernel Testing and Coverage
+#
+# CONFIG_KUNIT is not set
+# CONFIG_NOTIFIER_ERROR_INJECTION is not set
+# CONFIG_FAULT_INJECTION is not set
+CONFIG_ARCH_HAS_KCOV=y
+CONFIG_CC_HAS_SANCOV_TRACE_PC=y
+# CONFIG_KCOV is not set
+CONFIG_RUNTIME_TESTING_MENU=y
+# CONFIG_LKDTM is not set
+# CONFIG_TEST_LIST_SORT is not set
+# CONFIG_TEST_MIN_HEAP is not set
+# CONFIG_TEST_SORT is not set
+# CONFIG_BACKTRACE_SELF_TEST is not set
+# CONFIG_RBTREE_TEST is not set
+# CONFIG_REED_SOLOMON_TEST is not set
+# CONFIG_INTERVAL_TREE_TEST is not set
+# CONFIG_PERCPU_TEST is not set
+# CONFIG_ATOMIC64_SELFTEST is not set
+# CONFIG_TEST_HEXDUMP is not set
+# CONFIG_TEST_STRING_HELPERS is not set
+# CONFIG_TEST_STRSCPY is not set
+# CONFIG_TEST_KSTRTOX is not set
+# CONFIG_TEST_PRINTF is not set
+# CONFIG_TEST_BITMAP is not set
+# CONFIG_TEST_UUID is not set
+# CONFIG_TEST_XARRAY is not set
+# CONFIG_TEST_OVERFLOW is not set
+# CONFIG_TEST_RHASHTABLE is not set
+# CONFIG_TEST_HASH is not set
+# CONFIG_TEST_IDA is not set
+# CONFIG_TEST_LKM is not set
+# CONFIG_TEST_BITOPS is not set
+# CONFIG_TEST_VMALLOC is not set
+# CONFIG_TEST_USER_COPY is not set
+# CONFIG_TEST_BPF is not set
+# CONFIG_TEST_BLACKHOLE_DEV is not set
+# CONFIG_FIND_BIT_BENCHMARK is not set
+# CONFIG_TEST_FIRMWARE is not set
+# CONFIG_TEST_SYSCTL is not set
+# CONFIG_TEST_UDELAY is not set
+# CONFIG_TEST_STATIC_KEYS is not set
+# CONFIG_TEST_KMOD is not set
+# CONFIG_TEST_MEMCAT_P is not set
+# CONFIG_TEST_STACKINIT is not set
+# CONFIG_TEST_MEMINIT is not set
+# CONFIG_TEST_FREE_PAGES is not set
+CONFIG_MEMTEST=y
+# end of Kernel Testing and Coverage
+# end of Kernel hacking
--
2.17.1
--
This message contains confidential information and is intended only for the
individual(s) named. If you are not the intended recipient, you are notified
that disclosing, copying, distributing or taking any action in reliance on the
contents of this mail and attached file/s is strictly prohibited. Please
notify the sender immediately and delete this e-mail from your system. E-mail
transmission cannot be guaranteed to be secured or error-free as information
could be intercepted, corrupted, lost, destroyed, arrive late or incomplete,
or contain viruses. The sender therefore does not accept liability for any
errors or omissions in the contents of this message, which arise as a result
of e-mail transmission.
On 8/18/2021 9:10 AM, [email protected] wrote:
> From: Sidraya<[email protected]>
>
> This patch enables d5520 video decoder driver as module
> on J721e board.
Nack
>
> Signed-off-by: Sidraya<[email protected]>
> ---
> MAINTAINERS | 1 +
> .../configs/ti_sdk_arm64_release_defconfig | 7407 +++++++++++++++++
This defconfig doesn't belong in mainline.
> 2 files changed, 7408 insertions(+)
> create mode 100644 arch/arm64/configs/ti_sdk_arm64_release_defconfig
>
> diff --git a/MAINTAINERS b/MAINTAINERS
> index c7b4c860f8a7..db47fcafbcfc 100644
> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -19537,6 +19537,7 @@ M: Sidraya Jayagond<[email protected]>
> L: [email protected]
> S: Maintained
> F: Documentation/devicetree/bindings/media/img,d5520-vxd.yaml
> +F: arch/arm64/configs/tisdk_j7-evm_defconfig
This is wrong.
> F: drivers/staging/media/vxd/common/addr_alloc.c
> F: drivers/staging/media/vxd/common/addr_alloc.h
> F: drivers/staging/media/vxd/common/dq.c
> diff --git a/arch/arm64/configs/ti_sdk_arm64_release_defconfig b/arch/arm64/configs/ti_sdk_arm64_release_defconfig
> new file mode 100644
On Wed, Aug 18, 2021 at 07:40:09PM +0530, [email protected] wrote:
> From: Sidraya <[email protected]>
>
> The IMG D5520 has an MMU which needs to be programmed with all
> memory which it needs access to. This includes input buffers,
> output buffers and parameter buffers for each decode instance,
> as well as common buffers for firmware, etc.
>
> Functions are provided for creating MMU directories (each stream
> will have its own MMU context), retrieving the directory page,
> and mapping/unmapping a buffer into the MMU for a specific MMU context.
>
> Also helper(s) are provided for querying the capabilities of the MMU.
>
> Signed-off-by: Buddy Liong <[email protected]>
> Signed-off-by: Angela Stegmaier <[email protected]>
> Signed-off-by: Sidraya <[email protected]>
> ---
> MAINTAINERS | 2 +
> drivers/staging/media/vxd/common/imgmmu.c | 782 ++++++++++++++++++++++
> drivers/staging/media/vxd/common/imgmmu.h | 180 +++++
Why are you adding new stuff to staging? Why can it not just go to the
"real" part of the kernel?
If it is to go into staging, we need a TODO file that lists what needs
to be done to get it out of staging.
thanks,
greg k-h
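To make the quoted commit message concrete: the flow it describes is one MMU
context (directory) per decode stream, with the directory page handed to the
hardware and every buffer mapped into that per-stream context. The sketch
below is illustrative only; every mmu_dir_*()/hw_set_mmu_base() name is a
hypothetical placeholder, not the actual imgmmu.h interface.

    /*
     * Hypothetical sketch of the per-stream MMU flow described in the
     * commit message above; none of these helper names are the real
     * imgmmu.h API.
     */
    struct stream_mmu {
            struct mmu_directory *dir;      /* one MMU context per stream */
    };

    static int stream_mmu_map(struct stream_mmu *smmu, struct buffer *buf)
    {
            if (!smmu->dir) {
                    /* Each stream gets its own MMU directory (context). */
                    smmu->dir = mmu_dir_create();
                    if (!smmu->dir)
                            return -ENOMEM;

                    /* The directory page is what the decoder HW is pointed at. */
                    hw_set_mmu_base(mmu_dir_get_page(smmu->dir));
            }

            /* Input, output and parameter buffers are mapped per context. */
            return mmu_dir_map(smmu->dir, buf);
    }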
On Wed, Aug 18, 2021 at 07:40:31PM +0530, [email protected] wrote:
> From: Sidraya <[email protected]>
>
> This module handles the resources which is added to list
> and return the resource based requested list.
That does not describe this code at all.
Why is this needed? What is wrong with the in-kernel resource manager
logic that requires this custom code instead?
Be more specific and verbose please.
thanks,
greg k-h
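For comparison, the in-kernel primitives already cover a plain resource list
with very little code; the snippet below is a generic, hypothetical sketch
(struct res_entry and res_add() are made-up names), not code from this series:

    #include <linux/list.h>
    #include <linux/mutex.h>
    #include <linux/slab.h>

    /* Track resources using only standard in-kernel helpers. */
    struct res_entry {
            struct list_head link;
            void *item;
    };

    static LIST_HEAD(res_list);
    static DEFINE_MUTEX(res_lock);

    static int res_add(void *item)
    {
            struct res_entry *e = kzalloc(sizeof(*e), GFP_KERNEL);

            if (!e)
                    return -ENOMEM;
            e->item = item;

            mutex_lock(&res_lock);
            list_add_tail(&e->link, &res_list);
            mutex_unlock(&res_lock);
            return 0;
    }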
On 19:40-20210818, [email protected] wrote:
> From: Sidraya <[email protected]>
>
> This patch enables d5520 video decoder driver as module
> on J721e board.
>
> Signed-off-by: Sidraya <[email protected]>
> ---
> MAINTAINERS | 1 +
> .../configs/ti_sdk_arm64_release_defconfig | 7407 +++++++++++++++++
> 2 files changed, 7408 insertions(+)
> create mode 100644 arch/arm64/configs/ti_sdk_arm64_release_defconfig
>
> diff --git a/MAINTAINERS b/MAINTAINERS
> index c7b4c860f8a7..db47fcafbcfc 100644
> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -19537,6 +19537,7 @@ M: Sidraya Jayagond <[email protected]>
> L: [email protected]
> S: Maintained
> F: Documentation/devicetree/bindings/media/img,d5520-vxd.yaml
> +F: arch/arm64/configs/tisdk_j7-evm_defconfig
Sigh.. no.
> F: drivers/staging/media/vxd/common/addr_alloc.c
> F: drivers/staging/media/vxd/common/addr_alloc.h
> F: drivers/staging/media/vxd/common/dq.c
> diff --git a/arch/arm64/configs/ti_sdk_arm64_release_defconfig b/arch/arm64/configs/ti_sdk_arm64_release_defconfig
> new file mode 100644
> index 000000000000..a424c76d29da
> --- /dev/null
> +++ b/arch/arm64/configs/ti_sdk_arm64_release_defconfig
> @@ -0,0 +1,7407 @@
> +#
> +# Automatically generated file; DO NOT EDIT.
> +# Linux/arm64 5.10.41 Kernel Configuration
> +#
^^ NAK. We DO NOT want to see additional configs in arm64, and definitely
not one based on 5.10
>
> This
> message contains confidential information and is intended only
> for the
> individual(s) named. If you are not the intended
If you are sending confidential information to a public mailing list,
you might want to start talking to legal folks in your company.
> recipient, you are
> notified that disclosing, copying, distributing or taking any
> action in
> reliance on the contents of this mail and attached file/s is strictly
> prohibited. Please notify the
> sender immediately and delete this e-mail
> from your system. E-mail transmission
> cannot be guaranteed to be secured or
> error-free as information could be
> intercepted, corrupted, lost, destroyed,
> arrive late or incomplete, or contain
> viruses. The sender therefore does
> not accept liability for any errors or
> omissions in the contents of this
> message, which arise as a result of e-mail
> transmission.
^^ please use a different mail provider.
--
Regards,
Nishanth Menon
Key (0xDDB5849D1736249D) / Fingerprint: F8A2 8693 54EB 8232 17A3 1A34 DDB5 849D 1736 249D
^^ $subject
you might want to run:
git log --oneline arch/arm64/boot/dts/ti/k3-j721e-main.dtsi
to see the precedent for usage.
On 19:40-20210818, [email protected] wrote:
> From: Sidraya <[email protected]>
>
> Enable v4l2 vxd_dec on dra82
s/dra82/j721e
>
> Signed-off-by: Angela Stegmaier <[email protected]>
> Signed-off-by: Sidraya <[email protected]>
> ---
> arch/arm64/boot/dts/ti/k3-j721e-main.dtsi | 9 +++++++++
> 1 file changed, 9 insertions(+)
>
> diff --git a/arch/arm64/boot/dts/ti/k3-j721e-main.dtsi b/arch/arm64/boot/dts/ti/k3-j721e-main.dtsi
> index cf3482376c1e..a10eb7bcce74 100644
> --- a/arch/arm64/boot/dts/ti/k3-j721e-main.dtsi
> +++ b/arch/arm64/boot/dts/ti/k3-j721e-main.dtsi
> @@ -1242,6 +1242,15 @@
> power-domains = <&k3_pds 193 TI_SCI_PD_EXCLUSIVE>;
> };
>
> + d5520: video-decoder@4300000 {
> + /* IMG D5520 driver configuration */
> + compatible = "img,d5500-vxd";
> + reg = <0x00 0x04300000>,
> + <0x00 0x100000>;
> + power-domains = <&k3_pds 144 TI_SCI_PD_EXCLUSIVE>;
> + interrupts = <GIC_SPI 180 IRQ_TYPE_LEVEL_HIGH>;
> + };
> +
> ufs_wrapper: ufs-wrapper@4e80000 {
> compatible = "ti,j721e-ufs";
> reg = <0x0 0x4e80000 0x0 0x100>;
> --
> 2.17.1
>
>
> --
>
>
>
>
>
>
> This
> message contains confidential information and is intended only
> for the
> individual(s) named. If you are not the intended
> recipient, you are
You might want to look up the MAINTAINERS file to see who this patch
should have been addressed to and which list it should have been sent to.
Further, I DO NOT want to see a single bit of confidential-information-based
patch content in my tree. Please discuss this with your legal department,
and since TI is mentioned in the patches, please discuss with TI's
legal team as well.
This patch series, IMHO, misrepresents TI's legal position and respect
for the upstream community. It in no way represents the quality of
patches we (TI) would like to contribute upstream, and DOES NOT
represent the quality of contributions or collaboration expected of TI
or representatives of TI.
So, sorry, NAK for the complete series.
--
Regards,
Nishanth Menon
Key (0xDDB5849D1736249D) / Fingerprint: F8A2 8693 54EB 8232 17A3 1A34 DDB5 849D 1736 249D
Hi,
I love your patch! Perhaps something to improve:
[auto build test WARNING on linuxtv-media/master]
[also build test WARNING on staging/staging-testing driver-core/driver-core-testing linus/master v5.14-rc6 next-20210818]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch]
url: https://github.com/0day-ci/linux/commits/sidraya-bj-pathpartnertech-com/TI-Video-Decoder-driver-upstreaming-to-v5-14-rc6-kernel/20210818-221811
base: git://linuxtv.org/media_tree.git master
config: m68k-allmodconfig (attached as .config)
compiler: m68k-linux-gcc (GCC) 11.2.0
reproduce (this is a W=1 build):
wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
chmod +x ~/bin/make.cross
# https://github.com/0day-ci/linux/commit/f42ae4f45639a6214f9e775d4280061bf52fc229
git remote add linux-review https://github.com/0day-ci/linux
git fetch --no-tags linux-review sidraya-bj-pathpartnertech-com/TI-Video-Decoder-driver-upstreaming-to-v5-14-rc6-kernel/20210818-221811
git checkout f42ae4f45639a6214f9e775d4280061bf52fc229
# save the attached .config to linux build tree
COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-11.2.0 make.cross ARCH=m68k
If you fix the issue, kindly add following tag as appropriate
Reported-by: kernel test robot <[email protected]>
All warnings (new ones prefixed by >>):
In file included from include/linux/printk.h:456,
from include/linux/kernel.h:19,
from include/linux/radix-tree.h:12,
from include/linux/idr.h:15,
from drivers/staging/media/vxd/decoder/../common/img_mem_man.c:15:
drivers/staging/media/vxd/decoder/../common/img_mem_man.c: In function '_img_mem_alloc':
>> drivers/staging/media/vxd/decoder/../common/img_mem_man.c:290:25: warning: format '%zu' expects argument of type 'size_t', but argument 9 has type 'long unsigned int' [-Wformat=]
290 | dev_dbg(device, "%s heap %p ctx %p created buffer %d (%p) actual_size %zu\n",
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
include/linux/dynamic_debug.h:134:29: note: in definition of macro '__dynamic_func_call'
134 | func(&id, ##__VA_ARGS__); \
| ^~~~~~~~~~~
include/linux/dynamic_debug.h:166:9: note: in expansion of macro '_dynamic_func_call'
166 | _dynamic_func_call(fmt,__dynamic_dev_dbg, \
| ^~~~~~~~~~~~~~~~~~
include/linux/dev_printk.h:123:9: note: in expansion of macro 'dynamic_dev_dbg'
123 | dynamic_dev_dbg(dev, dev_fmt(fmt), ##__VA_ARGS__)
| ^~~~~~~~~~~~~~~
include/linux/dev_printk.h:123:30: note: in expansion of macro 'dev_fmt'
123 | dynamic_dev_dbg(dev, dev_fmt(fmt), ##__VA_ARGS__)
| ^~~~~~~
drivers/staging/media/vxd/decoder/../common/img_mem_man.c:290:9: note: in expansion of macro 'dev_dbg'
290 | dev_dbg(device, "%s heap %p ctx %p created buffer %d (%p) actual_size %zu\n",
| ^~~~~~~
drivers/staging/media/vxd/decoder/../common/img_mem_man.c:290:81: note: format string is defined here
290 | dev_dbg(device, "%s heap %p ctx %p created buffer %d (%p) actual_size %zu\n",
| ~~^
| |
| unsigned int
| %lu
In file included from include/linux/printk.h:456,
from include/linux/kernel.h:19,
from include/linux/radix-tree.h:12,
from include/linux/idr.h:15,
from drivers/staging/media/vxd/decoder/../common/img_mem_man.c:15:
drivers/staging/media/vxd/decoder/../common/img_mem_man.c: In function 'img_mem_alloc':
drivers/staging/media/vxd/decoder/../common/img_mem_man.c:309:25: warning: format '%zu' expects argument of type 'size_t', but argument 7 has type 'long unsigned int' [-Wformat=]
309 | dev_dbg(device, "%s heap %d ctx %p size %zu\n", __func__, heap_id,
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
include/linux/dynamic_debug.h:134:29: note: in definition of macro '__dynamic_func_call'
134 | func(&id, ##__VA_ARGS__); \
| ^~~~~~~~~~~
include/linux/dynamic_debug.h:166:9: note: in expansion of macro '_dynamic_func_call'
166 | _dynamic_func_call(fmt,__dynamic_dev_dbg, \
| ^~~~~~~~~~~~~~~~~~
include/linux/dev_printk.h:123:9: note: in expansion of macro 'dynamic_dev_dbg'
123 | dynamic_dev_dbg(dev, dev_fmt(fmt), ##__VA_ARGS__)
| ^~~~~~~~~~~~~~~
include/linux/dev_printk.h:123:30: note: in expansion of macro 'dev_fmt'
123 | dynamic_dev_dbg(dev, dev_fmt(fmt), ##__VA_ARGS__)
| ^~~~~~~
drivers/staging/media/vxd/decoder/../common/img_mem_man.c:309:9: note: in expansion of macro 'dev_dbg'
309 | dev_dbg(device, "%s heap %d ctx %p size %zu\n", __func__, heap_id,
| ^~~~~~~
drivers/staging/media/vxd/decoder/../common/img_mem_man.c:309:51: note: format string is defined here
309 | dev_dbg(device, "%s heap %d ctx %p size %zu\n", __func__, heap_id,
| ~~^
| |
| unsigned int
| %lu
In file included from include/linux/printk.h:456,
from include/linux/kernel.h:19,
from include/linux/radix-tree.h:12,
from include/linux/idr.h:15,
from drivers/staging/media/vxd/decoder/../common/img_mem_man.c:15:
drivers/staging/media/vxd/decoder/../common/img_mem_man.c:333:25: warning: format '%zu' expects argument of type 'size_t', but argument 9 has type 'long unsigned int' [-Wformat=]
333 | dev_dbg(device, "%s heap %d ctx %p created buffer %d (%p) size %zu\n",
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
include/linux/dynamic_debug.h:134:29: note: in definition of macro '__dynamic_func_call'
134 | func(&id, ##__VA_ARGS__); \
| ^~~~~~~~~~~
include/linux/dynamic_debug.h:166:9: note: in expansion of macro '_dynamic_func_call'
166 | _dynamic_func_call(fmt,__dynamic_dev_dbg, \
| ^~~~~~~~~~~~~~~~~~
include/linux/dev_printk.h:123:9: note: in expansion of macro 'dynamic_dev_dbg'
123 | dynamic_dev_dbg(dev, dev_fmt(fmt), ##__VA_ARGS__)
| ^~~~~~~~~~~~~~~
include/linux/dev_printk.h:123:30: note: in expansion of macro 'dev_fmt'
123 | dynamic_dev_dbg(dev, dev_fmt(fmt), ##__VA_ARGS__)
| ^~~~~~~
drivers/staging/media/vxd/decoder/../common/img_mem_man.c:333:9: note: in expansion of macro 'dev_dbg'
333 | dev_dbg(device, "%s heap %d ctx %p created buffer %d (%p) size %zu\n",
| ^~~~~~~
drivers/staging/media/vxd/decoder/../common/img_mem_man.c:333:74: note: format string is defined here
333 | dev_dbg(device, "%s heap %d ctx %p created buffer %d (%p) size %zu\n",
| ~~^
| |
| unsigned int
| %lu
In file included from include/linux/printk.h:456,
from include/linux/kernel.h:19,
from include/linux/radix-tree.h:12,
from include/linux/idr.h:15,
from drivers/staging/media/vxd/decoder/../common/img_mem_man.c:15:
drivers/staging/media/vxd/decoder/../common/img_mem_man.c: In function '_img_mem_import':
drivers/staging/media/vxd/decoder/../common/img_mem_man.c:372:25: warning: format '%zu' expects argument of type 'size_t', but argument 8 has type 'long unsigned int' [-Wformat=]
372 | dev_dbg(device, "%s ctx %p created buffer %d (%p) actual_size %zu\n",
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
include/linux/dynamic_debug.h:134:29: note: in definition of macro '__dynamic_func_call'
134 | func(&id, ##__VA_ARGS__); \
| ^~~~~~~~~~~
include/linux/dynamic_debug.h:166:9: note: in expansion of macro '_dynamic_func_call'
166 | _dynamic_func_call(fmt,__dynamic_dev_dbg, \
| ^~~~~~~~~~~~~~~~~~
include/linux/dev_printk.h:123:9: note: in expansion of macro 'dynamic_dev_dbg'
123 | dynamic_dev_dbg(dev, dev_fmt(fmt), ##__VA_ARGS__)
| ^~~~~~~~~~~~~~~
--
drivers/staging/media/vxd/decoder/../common/work_queue.c: In function 'get_delayed_work_buff':
>> drivers/staging/media/vxd/decoder/../common/work_queue.c:148:22: warning: variable 'previous' set but not used [-Wunused-but-set-variable]
148 | struct node *previous = NULL;
| ^~~~~~~~
--
>> drivers/staging/media/vxd/decoder/jpeg_secure_parser.c:596:5: warning: no previous prototype for 'bspp_jpeg_unit_parser' [-Wmissing-prototypes]
596 | int bspp_jpeg_unit_parser(void *swsr_ctx, struct bspp_unit_data *unit_data)
| ^~~~~~~~~~~~~~~~~~~~~
vim +290 drivers/staging/media/vxd/decoder/../common/img_mem_man.c
76b88427fbba69 Sidraya 2021-08-18 240
76b88427fbba69 Sidraya 2021-08-18 241 static int _img_mem_alloc(void *device, struct mem_ctx *ctx,
76b88427fbba69 Sidraya 2021-08-18 242 struct heap *heap, unsigned long size,
76b88427fbba69 Sidraya 2021-08-18 243 enum mem_attr attr, struct buffer **buffer_new)
76b88427fbba69 Sidraya 2021-08-18 244 {
76b88427fbba69 Sidraya 2021-08-18 245 struct buffer *buffer;
76b88427fbba69 Sidraya 2021-08-18 246 int ret;
76b88427fbba69 Sidraya 2021-08-18 247
76b88427fbba69 Sidraya 2021-08-18 248 if (size == 0) {
76b88427fbba69 Sidraya 2021-08-18 249 dev_err(device, "%s: buffer size is zero\n", __func__);
76b88427fbba69 Sidraya 2021-08-18 250 return -EINVAL;
76b88427fbba69 Sidraya 2021-08-18 251 }
76b88427fbba69 Sidraya 2021-08-18 252
76b88427fbba69 Sidraya 2021-08-18 253 if (!heap->ops || !heap->ops->alloc) {
76b88427fbba69 Sidraya 2021-08-18 254 dev_err(device, "%s: no alloc function in heap %d!\n",
76b88427fbba69 Sidraya 2021-08-18 255 __func__, heap->id);
76b88427fbba69 Sidraya 2021-08-18 256 return -EINVAL;
76b88427fbba69 Sidraya 2021-08-18 257 }
76b88427fbba69 Sidraya 2021-08-18 258
76b88427fbba69 Sidraya 2021-08-18 259 buffer = kzalloc(sizeof(*buffer), GFP_KERNEL);
76b88427fbba69 Sidraya 2021-08-18 260 if (!buffer)
76b88427fbba69 Sidraya 2021-08-18 261 return -ENOMEM;
76b88427fbba69 Sidraya 2021-08-18 262
76b88427fbba69 Sidraya 2021-08-18 263 ret = idr_alloc(ctx->buffers, buffer,
76b88427fbba69 Sidraya 2021-08-18 264 MEM_MAN_MIN_BUFFER, MEM_MAN_MAX_BUFFER, GFP_KERNEL);
76b88427fbba69 Sidraya 2021-08-18 265 if (ret < 0) {
76b88427fbba69 Sidraya 2021-08-18 266 dev_err(device, "%s: idr_alloc failed\n", __func__);
76b88427fbba69 Sidraya 2021-08-18 267 goto idr_alloc_failed;
76b88427fbba69 Sidraya 2021-08-18 268 }
76b88427fbba69 Sidraya 2021-08-18 269
76b88427fbba69 Sidraya 2021-08-18 270 buffer->id = ret;
76b88427fbba69 Sidraya 2021-08-18 271 buffer->request_size = size;
76b88427fbba69 Sidraya 2021-08-18 272 buffer->actual_size = ((size + PAGE_SIZE - 1) / PAGE_SIZE) * PAGE_SIZE;
76b88427fbba69 Sidraya 2021-08-18 273 buffer->device = device;
76b88427fbba69 Sidraya 2021-08-18 274 buffer->mem_ctx = ctx;
76b88427fbba69 Sidraya 2021-08-18 275 buffer->heap = heap;
76b88427fbba69 Sidraya 2021-08-18 276 INIT_LIST_HEAD(&buffer->mappings);
76b88427fbba69 Sidraya 2021-08-18 277 buffer->kptr = NULL;
76b88427fbba69 Sidraya 2021-08-18 278 buffer->priv = NULL;
76b88427fbba69 Sidraya 2021-08-18 279
76b88427fbba69 Sidraya 2021-08-18 280 ret = heap->ops->alloc(device, heap, buffer->actual_size, attr,
76b88427fbba69 Sidraya 2021-08-18 281 buffer);
76b88427fbba69 Sidraya 2021-08-18 282 if (ret) {
76b88427fbba69 Sidraya 2021-08-18 283 dev_err(device, "%s: heap %d alloc failed\n", __func__,
76b88427fbba69 Sidraya 2021-08-18 284 heap->id);
76b88427fbba69 Sidraya 2021-08-18 285 goto heap_alloc_failed;
76b88427fbba69 Sidraya 2021-08-18 286 }
76b88427fbba69 Sidraya 2021-08-18 287
76b88427fbba69 Sidraya 2021-08-18 288 *buffer_new = buffer;
76b88427fbba69 Sidraya 2021-08-18 289
76b88427fbba69 Sidraya 2021-08-18 @290 dev_dbg(device, "%s heap %p ctx %p created buffer %d (%p) actual_size %zu\n",
76b88427fbba69 Sidraya 2021-08-18 291 __func__, heap, ctx, buffer->id, buffer, buffer->actual_size);
76b88427fbba69 Sidraya 2021-08-18 292 return 0;
76b88427fbba69 Sidraya 2021-08-18 293
76b88427fbba69 Sidraya 2021-08-18 294 heap_alloc_failed:
76b88427fbba69 Sidraya 2021-08-18 295 idr_remove(ctx->buffers, buffer->id);
76b88427fbba69 Sidraya 2021-08-18 296 idr_alloc_failed:
76b88427fbba69 Sidraya 2021-08-18 297 kfree(buffer);
76b88427fbba69 Sidraya 2021-08-18 298 return ret;
76b88427fbba69 Sidraya 2021-08-18 299 }
76b88427fbba69 Sidraya 2021-08-18 300
---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/[email protected]
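All of the -Wformat warnings above have the same cause: the size fields
(e.g. buffer->actual_size) are declared unsigned long while the dev_dbg()
format uses %zu, which expects size_t; on m68k the two are distinct types.
A possible fix, sketched (untested) against the excerpt above, is to print
with %lu -- or, alternatively, to declare the fields as size_t and keep %zu.
The open-coded rounding at line 272 of the excerpt can also use the standard
PAGE_ALIGN() helper, which performs the same power-of-two round-up:

    /* Sketch only, against the _img_mem_alloc() excerpt above. */
    buffer->actual_size = PAGE_ALIGN(size);
    /* ... */
    dev_dbg(device,
            "%s heap %p ctx %p created buffer %d (%p) actual_size %lu\n",
            __func__, heap, ctx, buffer->id, buffer, buffer->actual_size);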
Hi,
I love your patch! Yet something to improve:
[auto build test ERROR on linuxtv-media/master]
[also build test ERROR on staging/staging-testing driver-core/driver-core-testing linus/master v5.14-rc6 next-20210818]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch]
url: https://github.com/0day-ci/linux/commits/sidraya-bj-pathpartnertech-com/TI-Video-Decoder-driver-upstreaming-to-v5-14-rc6-kernel/20210818-221811
base: git://linuxtv.org/media_tree.git master
config: m68k-allmodconfig (attached as .config)
compiler: m68k-linux-gcc (GCC) 11.2.0
reproduce (this is a W=1 build):
wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
chmod +x ~/bin/make.cross
# https://github.com/0day-ci/linux/commit/ed83bf9b395e58893b5d92675196aee8f619efc9
git remote add linux-review https://github.com/0day-ci/linux
git fetch --no-tags linux-review sidraya-bj-pathpartnertech-com/TI-Video-Decoder-driver-upstreaming-to-v5-14-rc6-kernel/20210818-221811
git checkout ed83bf9b395e58893b5d92675196aee8f619efc9
# save the attached .config to linux build tree
COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-11.2.0 make.cross ARCH=m68k
If you fix the issue, kindly add following tag as appropriate
Reported-by: kernel test robot <[email protected]>
All error/warnings (new ones prefixed by >>):
drivers/staging/media/vxd/decoder/vxd_core.c: In function 'stream_worker':
>> drivers/staging/media/vxd/decoder/vxd_core.c:549:25: warning: variable 'vxd' set but not used [-Wunused-but-set-variable]
549 | struct vxd_dev *vxd = NULL;
| ^~~
--
drivers/staging/media/vxd/decoder/vxd_v4l2.c: In function 'vxd_dec_buf_prepare':
>> drivers/staging/media/vxd/decoder/vxd_v4l2.c:712:36: error: implicit declaration of function 'phys_to_page'; did you mean 'pfn_to_page'? [-Werror=implicit-function-declaration]
712 | new_page = phys_to_page(vb2_dma_contig_plane_dma_addr(vb, plane));
| ^~~~~~~~~~~~
| pfn_to_page
>> drivers/staging/media/vxd/decoder/vxd_v4l2.c:712:34: warning: assignment to 'struct page *' from 'int' makes pointer from integer without a cast [-Wint-conversion]
712 | new_page = phys_to_page(vb2_dma_contig_plane_dma_addr(vb, plane));
| ^
>> drivers/staging/media/vxd/decoder/vxd_v4l2.c:618:13: warning: variable 'pages' set but not used [-Wunused-but-set-variable]
618 | int pages;
| ^~~~~
In file included from drivers/staging/media/vxd/decoder/vxd_props.h:19,
from drivers/staging/media/vxd/decoder/decoder.h:29,
from drivers/staging/media/vxd/decoder/core.h:20,
from drivers/staging/media/vxd/decoder/vxd_v4l2.c:52:
At top level:
drivers/staging/media/vxd/common/imgmmu.h:65:28: warning: 'VIRT_DIR_IDX_MASK' defined but not used [-Wunused-const-variable=]
65 | static const unsigned long VIRT_DIR_IDX_MASK = (~((1 << MMU_DIR_SHIFT) - 1));
| ^~~~~~~~~~~~~~~~~
drivers/staging/media/vxd/common/imgmmu.h:62:28: warning: 'VIRT_PAGE_TBL_MASK' defined but not used [-Wunused-const-variable=]
62 | static const unsigned long VIRT_PAGE_TBL_MASK =
| ^~~~~~~~~~~~~~~~~~
drivers/staging/media/vxd/common/imgmmu.h:60:28: warning: 'VIRT_PAGE_OFF_MASK' defined but not used [-Wunused-const-variable=]
60 | static const unsigned long VIRT_PAGE_OFF_MASK = ((1 << MMU_PAGE_SHIFT) - 1);
| ^~~~~~~~~~~~~~~~~~
cc1: some warnings being treated as errors
--
drivers/staging/media/vxd/decoder/hevc_secure_parser.c: In function 'bspp_hevc_parse_vps':
>> drivers/staging/media/vxd/decoder/hevc_secure_parser.c:594:1: warning: the frame size of 1168 bytes is larger than 1024 bytes [-Wframe-larger-than=]
594 | }
| ^
vim +712 drivers/staging/media/vxd/decoder/vxd_v4l2.c
604
605 static int vxd_dec_buf_prepare(struct vb2_buffer *vb)
606 {
607 struct vxd_dec_ctx *ctx = vb2_get_drv_priv(vb->vb2_queue);
608 struct device *dev = ctx->dev->dev;
609 struct vxd_dec_q_data *q_data;
610 void *sgt;
611 #ifdef CAPTURE_CONTIG_ALLOC
612 struct page *new_page;
613 #else
614 void *sgl;
615 #endif
616 struct sg_table *sgt_new;
617 void *sgl_new;
> 618 int pages;
619 int nents = 0;
620 int size = 0;
621 int plane, num_planes, ret = 0;
622 struct vxd_buffer *buf =
623 container_of(vb, struct vxd_buffer, buffer.vb.vb2_buf);
624
625 q_data = get_q_data(ctx, vb->vb2_queue->type);
626 if (!q_data)
627 return -EINVAL;
628
629 num_planes = q_data->fmt->num_planes;
630
631 for (plane = 0; plane < num_planes; plane++) {
632 if (vb2_plane_size(vb, plane) < q_data->size_image[plane]) {
633 dev_err(dev, "data will not fit into plane (%lu < %lu)\n",
634 vb2_plane_size(vb, plane),
635 (long)q_data->size_image[plane]);
636 return -EINVAL;
637 }
638 }
639
640 if (buf->mapped)
641 return 0;
642
643 buf->buf_info.cpu_linear_addr = vb2_plane_vaddr(vb, 0);
644 buf->buf_info.buf_size = vb2_plane_size(vb, 0);
645 buf->buf_info.fd = -1;
646 sgt = vb2_dma_sg_plane_desc(vb, 0);
647 if (!sgt) {
648 dev_err(dev, "Could not get sg_table from plane 0\n");
649 return -EINVAL;
650 }
651
652 if (V4L2_TYPE_IS_OUTPUT(vb->type)) {
653 ret = core_stream_map_buf_sg(ctx->res_str_id,
654 VDEC_BUFTYPE_BITSTREAM,
655 &buf->buf_info, sgt,
656 &buf->buf_map_id);
657 if (ret) {
658 dev_err(dev, "OUTPUT core_stream_map_buf_sg failed\n");
659 return ret;
660 }
661
662 buf->bstr_info.buf_size = q_data->size_image[0];
663 buf->bstr_info.cpu_virt_addr = buf->buf_info.cpu_linear_addr;
664 buf->bstr_info.mem_attrib =
665 SYS_MEMATTRIB_UNCACHED | SYS_MEMATTRIB_WRITECOMBINE |
666 SYS_MEMATTRIB_INPUT | SYS_MEMATTRIB_CPU_WRITE;
667 buf->bstr_info.bufmap_id = buf->buf_map_id;
668 lst_init(&buf->seq_unit.bstr_seg_list);
669 lst_init(&buf->pic_unit.bstr_seg_list);
670 lst_init(&buf->end_unit.bstr_seg_list);
671
672 list_add_tail(&buf->list, &ctx->out_buffers);
673 } else {
674 /* Create a single sgt from the plane(s) */
675 sgt_new = kmalloc(sizeof(*sgt_new), GFP_KERNEL);
676 if (!sgt_new)
677 return -EINVAL;
678
679 for (plane = 0; plane < num_planes; plane++) {
680 size += ALIGN(vb2_plane_size(vb, plane), PAGE_SIZE);
681 sgt = vb2_dma_sg_plane_desc(vb, plane);
682 if (!sgt) {
683 dev_err(dev, "Could not get sg_table from plane %d\n", plane);
684 kfree(sgt_new);
685 return -EINVAL;
686 }
687 #ifdef CAPTURE_CONTIG_ALLOC
688 nents += 1;
689 #else
690 nents += sg_nents(img_mmu_get_sgl(sgt));
691 #endif
692 }
693 buf->buf_info.buf_size = size;
694
695 pages = (size + PAGE_SIZE - 1) / PAGE_SIZE;
696 ret = sg_alloc_table(sgt_new, nents, GFP_KERNEL);
697 if (ret) {
698 kfree(sgt_new);
699 return -EINVAL;
700 }
701 sgl_new = img_mmu_get_sgl(sgt_new);
702
703 for (plane = 0; plane < num_planes; plane++) {
704 sgt = vb2_dma_sg_plane_desc(vb, plane);
705 if (!sgt) {
706 dev_err(dev, "Could not get sg_table from plane %d\n", plane);
707 sg_free_table(sgt_new);
708 kfree(sgt_new);
709 return -EINVAL;
710 }
711 #ifdef CAPTURE_CONTIG_ALLOC
> 712 new_page = phys_to_page(vb2_dma_contig_plane_dma_addr(vb, plane));
713 sg_set_page(sgl_new, new_page, ALIGN(vb2_plane_size(vb, plane),
714 PAGE_SIZE), 0);
715 sgl_new = sg_next(sgl_new);
716 #else
717 sgl = img_mmu_get_sgl(sgt);
718
719 while (sgl) {
720 sg_set_page(sgl_new, sg_page(sgl), img_mmu_get_sgl_length(sgl), 0);
721 sgl = sg_next(sgl);
722 sgl_new = sg_next(sgl_new);
723 }
724 #endif
725 }
726
727 buf->buf_info.pictbuf_cfg = ctx->pict_bufcfg;
728 ret = core_stream_map_buf_sg(ctx->res_str_id,
729 VDEC_BUFTYPE_PICTURE,
730 &buf->buf_info, sgt_new,
731 &buf->buf_map_id);
732 sg_free_table(sgt_new);
733 kfree(sgt_new);
734 if (ret) {
735 dev_err(dev, "CAPTURE core_stream_map_buf_sg failed\n");
736 return ret;
737 }
738 list_add_tail(&buf->list, &ctx->cap_buffers);
739 }
740 buf->mapped = TRUE;
741 buf->reuse = TRUE;
742
743 return 0;
744 }
745
---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/[email protected]
Hi,
I love your patch! Yet something to improve:
[auto build test ERROR on linuxtv-media/master]
[also build test ERROR on staging/staging-testing driver-core/driver-core-testing linus/master v5.14-rc6 next-20210818]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch]
url: https://github.com/0day-ci/linux/commits/sidraya-bj-pathpartnertech-com/TI-Video-Decoder-driver-upstreaming-to-v5-14-rc6-kernel/20210818-221811
base: git://linuxtv.org/media_tree.git master
config: mips-allyesconfig (attached as .config)
compiler: mips-linux-gcc (GCC) 11.2.0
reproduce (this is a W=1 build):
wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
chmod +x ~/bin/make.cross
# https://github.com/0day-ci/linux/commit/f42ae4f45639a6214f9e775d4280061bf52fc229
git remote add linux-review https://github.com/0day-ci/linux
git fetch --no-tags linux-review sidraya-bj-pathpartnertech-com/TI-Video-Decoder-driver-upstreaming-to-v5-14-rc6-kernel/20210818-221811
git checkout f42ae4f45639a6214f9e775d4280061bf52fc229
# save the attached .config to linux build tree
COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-11.2.0 make.cross ARCH=mips
If you fix the issue, kindly add following tag as appropriate
Reported-by: kernel test robot <[email protected]>
All errors (new ones prefixed by >>):
drivers/staging/media/vxd/decoder/../common/img_mem_unified.c: In function 'unified_free':
>> drivers/staging/media/vxd/decoder/../common/img_mem_unified.c:159:17: error: implicit declaration of function 'vunmap'; did you mean 'kunmap'? [-Werror=implicit-function-declaration]
159 | vunmap(buffer->kptr);
| ^~~~~~
| kunmap
drivers/staging/media/vxd/decoder/../common/img_mem_unified.c: In function 'unified_map_km':
>> drivers/staging/media/vxd/decoder/../common/img_mem_unified.c:197:24: error: implicit declaration of function 'vmap'; did you mean 'kmap'? [-Werror=implicit-function-declaration]
197 | buffer->kptr = vmap((struct page **)pages, num_pages, VM_MAP, prot);
| ^~~~
| kmap
>> drivers/staging/media/vxd/decoder/../common/img_mem_unified.c:197:63: error: 'VM_MAP' undeclared (first use in this function); did you mean 'VM_MTE'?
197 | buffer->kptr = vmap((struct page **)pages, num_pages, VM_MAP, prot);
| ^~~~~~
| VM_MTE
drivers/staging/media/vxd/decoder/../common/img_mem_unified.c:197:63: note: each undeclared identifier is reported only once for each function it appears in
cc1: some warnings being treated as errors
vim +159 drivers/staging/media/vxd/decoder/../common/img_mem_unified.c
76b88427fbba69 Sidraya 2021-08-18 145
76b88427fbba69 Sidraya 2021-08-18 146 static void unified_free(struct heap *heap, struct buffer *buffer)
76b88427fbba69 Sidraya 2021-08-18 147 {
76b88427fbba69 Sidraya 2021-08-18 148 void *dev = buffer->device;
76b88427fbba69 Sidraya 2021-08-18 149 void *sgt = buffer->priv;
76b88427fbba69 Sidraya 2021-08-18 150 void *sgl;
76b88427fbba69 Sidraya 2021-08-18 151
76b88427fbba69 Sidraya 2021-08-18 152 dev_dbg(dev, "%s:%d buffer %d (0x%p)\n", __func__, __LINE__,
76b88427fbba69 Sidraya 2021-08-18 153 buffer->id, buffer);
76b88427fbba69 Sidraya 2021-08-18 154
76b88427fbba69 Sidraya 2021-08-18 155 if (buffer->kptr) {
76b88427fbba69 Sidraya 2021-08-18 156 dev_dbg(dev, "%s vunmap 0x%p\n", __func__, buffer->kptr);
76b88427fbba69 Sidraya 2021-08-18 157 dma_unmap_sg(dev, img_mmu_get_sgl(sgt), img_mmu_get_orig_nents(sgt),
76b88427fbba69 Sidraya 2021-08-18 158 DMA_FROM_DEVICE);
76b88427fbba69 Sidraya 2021-08-18 @159 vunmap(buffer->kptr);
76b88427fbba69 Sidraya 2021-08-18 160 }
76b88427fbba69 Sidraya 2021-08-18 161
76b88427fbba69 Sidraya 2021-08-18 162 sgl = img_mmu_get_sgl(sgt);
76b88427fbba69 Sidraya 2021-08-18 163 while (sgl) {
76b88427fbba69 Sidraya 2021-08-18 164 __free_page(sg_page(sgl));
76b88427fbba69 Sidraya 2021-08-18 165 sgl = sg_next(sgl);
76b88427fbba69 Sidraya 2021-08-18 166 }
76b88427fbba69 Sidraya 2021-08-18 167 sg_free_table(sgt);
76b88427fbba69 Sidraya 2021-08-18 168 kfree(sgt);
76b88427fbba69 Sidraya 2021-08-18 169 }
76b88427fbba69 Sidraya 2021-08-18 170
76b88427fbba69 Sidraya 2021-08-18 171 static int unified_map_km(struct heap *heap, struct buffer *buffer)
76b88427fbba69 Sidraya 2021-08-18 172 {
76b88427fbba69 Sidraya 2021-08-18 173 void *dev = buffer->device;
76b88427fbba69 Sidraya 2021-08-18 174 void *sgt = buffer->priv;
76b88427fbba69 Sidraya 2021-08-18 175 void *sgl = img_mmu_get_sgl(sgt);
76b88427fbba69 Sidraya 2021-08-18 176 unsigned int num_pages = sg_nents(sgl);
76b88427fbba69 Sidraya 2021-08-18 177 unsigned int orig_nents = img_mmu_get_orig_nents(sgt);
76b88427fbba69 Sidraya 2021-08-18 178 void **pages;
76b88427fbba69 Sidraya 2021-08-18 179 int ret;
76b88427fbba69 Sidraya 2021-08-18 180 pgprot_t prot;
76b88427fbba69 Sidraya 2021-08-18 181
76b88427fbba69 Sidraya 2021-08-18 182 dev_dbg(dev, "%s:%d buffer %d (0x%p)\n", __func__, __LINE__, buffer->id, buffer);
76b88427fbba69 Sidraya 2021-08-18 183
76b88427fbba69 Sidraya 2021-08-18 184 if (buffer->kptr) {
76b88427fbba69 Sidraya 2021-08-18 185 dev_warn(dev, "%s called for already mapped buffer %d\n", __func__, buffer->id);
76b88427fbba69 Sidraya 2021-08-18 186 return 0;
76b88427fbba69 Sidraya 2021-08-18 187 }
76b88427fbba69 Sidraya 2021-08-18 188
76b88427fbba69 Sidraya 2021-08-18 189 pages = kmalloc_array(num_pages, sizeof(void *), GFP_KERNEL);
76b88427fbba69 Sidraya 2021-08-18 190 if (!pages)
76b88427fbba69 Sidraya 2021-08-18 191 return -ENOMEM;
76b88427fbba69 Sidraya 2021-08-18 192
76b88427fbba69 Sidraya 2021-08-18 193 img_mmu_get_pages(pages, sgt);
76b88427fbba69 Sidraya 2021-08-18 194
76b88427fbba69 Sidraya 2021-08-18 195 prot = PAGE_KERNEL;
76b88427fbba69 Sidraya 2021-08-18 196 prot = pgprot_writecombine(prot);
76b88427fbba69 Sidraya 2021-08-18 @197 buffer->kptr = vmap((struct page **)pages, num_pages, VM_MAP, prot);
76b88427fbba69 Sidraya 2021-08-18 198 kfree(pages);
76b88427fbba69 Sidraya 2021-08-18 199 if (!buffer->kptr) {
76b88427fbba69 Sidraya 2021-08-18 200 dev_err(dev, "%s vmap failed!\n", __func__);
76b88427fbba69 Sidraya 2021-08-18 201 return -EFAULT;
76b88427fbba69 Sidraya 2021-08-18 202 }
76b88427fbba69 Sidraya 2021-08-18 203
76b88427fbba69 Sidraya 2021-08-18 204 ret = dma_map_sg(dev, sgl, orig_nents, DMA_FROM_DEVICE);
76b88427fbba69 Sidraya 2021-08-18 205
76b88427fbba69 Sidraya 2021-08-18 206 if (ret <= 0) {
76b88427fbba69 Sidraya 2021-08-18 207 dev_err(dev, "%s dma_map_sg failed!\n", __func__);
76b88427fbba69 Sidraya 2021-08-18 208 vunmap(buffer->kptr);
76b88427fbba69 Sidraya 2021-08-18 209 return -EFAULT;
76b88427fbba69 Sidraya 2021-08-18 210 }
76b88427fbba69 Sidraya 2021-08-18 211 dev_dbg(dev, "%s:%d buffer %d orig_nents %d nents %d\n", __func__,
76b88427fbba69 Sidraya 2021-08-18 212 __LINE__, buffer->id, orig_nents, ret);
76b88427fbba69 Sidraya 2021-08-18 213
76b88427fbba69 Sidraya 2021-08-18 214 img_mmu_set_sgt_nents(sgt, ret);
76b88427fbba69 Sidraya 2021-08-18 215
76b88427fbba69 Sidraya 2021-08-18 216 dev_dbg(dev, "%s:%d buffer %d vmap to 0x%p\n", __func__, __LINE__,
76b88427fbba69 Sidraya 2021-08-18 217 buffer->id, buffer->kptr);
76b88427fbba69 Sidraya 2021-08-18 218
76b88427fbba69 Sidraya 2021-08-18 219 return 0;
76b88427fbba69 Sidraya 2021-08-18 220 }
76b88427fbba69 Sidraya 2021-08-18 221
---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/[email protected]
Hi Sidraya,
On 18/08/2021 16:10, [email protected] wrote:
> From: Sidraya <[email protected]>
>
> This series of patches implements v4l2 based Decoder driver for H264,
> H265 and MJPEG decoding standards.This Driver is for D5520 H/W decoder on
> DRA8x SOC of J721e platform.
> This driver has been tested on v5.14-rc6 kernel for following
> decoding standards on v4l2 based Gstreamer 1.16 plug-in.
> 1. H264
> 2. H265
> 3. MJPEG
>
> Note:
> Currently Driver uses list, map and queue custom data structure APIs
> implementation and IOMMU custom framework.We are working on replacing
> customised APIs with Linux kernel generic framework APIs.
> Meanwhile would like to address review comments from
> reviewers before merging to main media/platform subsystem.
OK, so I consider this an RFC series rather than an actual driver submission
and I'll mark it as such in patchwork.
First of all, patch 13/30 never made it to the linux-media mailing list, so the
series as a whole will not apply. Can you repost 13/30? One possible reason why
13/30 might have been dropped is if it is a really large patch. You may have to
split it up in that case.
Did you run v4l2-compliance for this driver? Always compile v4l2-compliance
(part of https://git.linuxtv.org/v4l-utils.git/) from the git repo sources
to make sure you have the most recent tests.
I need to see the output of 'v4l2-compliance' as part of the cover letter.
There shouldn't be any failures.
I see a lot of pointless #ifdef DEBUG_DECODER_DRIVER lines. Either just drop
the debug code or use dev_dbg/pr_debug. Ditto for VDEC_ASSERT(). This really
pollutes the code.
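For illustration only (this is a sketch, not code taken from the driver; the
message and variable names are made up), the usual conversion looks like:

#ifdef DEBUG_DECODER_DRIVER
	pr_info("mapped buffer %d for stream %d\n", buf_id, str_id);
#endif

	/* becomes: always compiled, enabled per call site via dynamic debug */
	dev_dbg(dev, "mapped buffer %d for stream %d\n", buf_id, str_id);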
Can you provide a high-level description of the hardware? It seems like this
driver is a lot more complex than other decoder drivers, but it is not immediately
clear why that is. One reason might be that the hardware/firmware is a stateless
decoder, while the driver exposes a stateful decoder API. See the m2m interface
documentation for the differences between the two:
https://hverkuil.home.xs4all.nl/spec/userspace-api/v4l/dev-mem2mem.html
If that's the case, then the driver should really be a stateless driver, that
will likely make things much easier.
In any case, this driver clearly needs a lot more work.
Regards,
Hans
>
> Sidraya (30):
> dt-bindings: Add binding for img,d5500-vxd for DRA8x
> v4l: vxd-dec: Create mmu programming helper library
> v4l: vxd-dec: Create vxd_dec Mem Manager helper library
> v4l: vxd-dec: Add vxd helper library
> v4l: vxd-dec: Add IMG VXD Video Decoder mem to mem drive
> v4l: vxd-dec: Add hardware control modules
> v4l: vxd-dec: Add vxd core module
> v4l: vxd-dec: Add translation control modules
> v4l: vxd-dec: Add idgen api modules
> v4l: vxd-dec: Add utility modules
> v4l: vxd-dec: Add TALMMU module
> v4l: vxd-dec: Add VDEC MMU wrapper
> v4l: vxd-dec: Add Bistream Preparser (BSPP) module
> v4l: vxd-dec: Add common headers
> v4l: vxd-dec: Add firmware interface headers
> v4l: vxd-dec: Add pool api modules
> v4l: vxd-dec: This patch implements resource manage component
> v4l: vxd-dec: This patch implements pixel processing library
> v4l:vxd-dec:vdecdd utility library
> v4l:vxd-dec:Decoder resource component
> v4l:vxd-dec:Decoder Core Component
> v4l:vxd-dec:vdecdd headers added
> v4l:vxd-dec:Decoder Component
> v4l:vxd-dec: Add resource manager
> v4l: videodev2: Add 10bit definitions for NV12 and NV16 color formats
> media: Kconfig: Add Video decoder kconfig and Makefile entries
> media: platform: vxd: Kconfig: Add Video decoder Kconfig and Makefile
> IMG DEC V4L2 Interface function implementations
> arm64: dts: dra82: Add v4l2 vxd_dec device node
> ARM64: ti_sdk_arm64_release_defconfig: Enable d5520 video decoder
> driver
>
> .../bindings/media/img,d5520-vxd.yaml | 52 +
> MAINTAINERS | 114 +
> arch/arm64/boot/dts/ti/k3-j721e-main.dtsi | 9 +
> .../configs/ti_sdk_arm64_release_defconfig | 7407 +++++++++++++++++
> drivers/media/v4l2-core/v4l2-ioctl.c | 2 +
> drivers/staging/media/Kconfig | 2 +
> drivers/staging/media/Makefile | 1 +
> drivers/staging/media/vxd/common/addr_alloc.c | 499 ++
> drivers/staging/media/vxd/common/addr_alloc.h | 238 +
> drivers/staging/media/vxd/common/dq.c | 248 +
> drivers/staging/media/vxd/common/dq.h | 36 +
> drivers/staging/media/vxd/common/hash.c | 481 ++
> drivers/staging/media/vxd/common/hash.h | 86 +
> drivers/staging/media/vxd/common/idgen_api.c | 449 +
> drivers/staging/media/vxd/common/idgen_api.h | 59 +
> drivers/staging/media/vxd/common/img_errors.h | 104 +
> drivers/staging/media/vxd/common/img_mem.h | 43 +
> .../staging/media/vxd/common/img_mem_man.c | 1124 +++
> .../staging/media/vxd/common/img_mem_man.h | 231 +
> .../media/vxd/common/img_mem_unified.c | 276 +
> drivers/staging/media/vxd/common/imgmmu.c | 782 ++
> drivers/staging/media/vxd/common/imgmmu.h | 180 +
> drivers/staging/media/vxd/common/lst.c | 119 +
> drivers/staging/media/vxd/common/lst.h | 37 +
> drivers/staging/media/vxd/common/pool.c | 228 +
> drivers/staging/media/vxd/common/pool.h | 66 +
> drivers/staging/media/vxd/common/pool_api.c | 709 ++
> drivers/staging/media/vxd/common/pool_api.h | 113 +
> drivers/staging/media/vxd/common/ra.c | 972 +++
> drivers/staging/media/vxd/common/ra.h | 200 +
> drivers/staging/media/vxd/common/resource.c | 576 ++
> drivers/staging/media/vxd/common/resource.h | 66 +
> drivers/staging/media/vxd/common/rman_api.c | 620 ++
> drivers/staging/media/vxd/common/rman_api.h | 66 +
> drivers/staging/media/vxd/common/talmmu_api.c | 753 ++
> drivers/staging/media/vxd/common/talmmu_api.h | 246 +
> drivers/staging/media/vxd/common/vid_buf.h | 42 +
> drivers/staging/media/vxd/common/work_queue.c | 188 +
> drivers/staging/media/vxd/common/work_queue.h | 66 +
> drivers/staging/media/vxd/decoder/Kconfig | 13 +
> drivers/staging/media/vxd/decoder/Makefile | 129 +
> drivers/staging/media/vxd/decoder/bspp.c | 2479 ++++++
> drivers/staging/media/vxd/decoder/bspp.h | 363 +
> drivers/staging/media/vxd/decoder/bspp_int.h | 514 ++
> drivers/staging/media/vxd/decoder/core.c | 3656 ++++++++
> drivers/staging/media/vxd/decoder/core.h | 72 +
> .../staging/media/vxd/decoder/dec_resources.c | 554 ++
> .../staging/media/vxd/decoder/dec_resources.h | 46 +
> drivers/staging/media/vxd/decoder/decoder.c | 4622 ++++++++++
> drivers/staging/media/vxd/decoder/decoder.h | 375 +
> .../staging/media/vxd/decoder/fw_interface.h | 818 ++
> drivers/staging/media/vxd/decoder/h264_idx.h | 60 +
> .../media/vxd/decoder/h264_secure_parser.c | 3051 +++++++
> .../media/vxd/decoder/h264_secure_parser.h | 278 +
> drivers/staging/media/vxd/decoder/h264_vlc.h | 604 ++
> .../staging/media/vxd/decoder/h264fw_data.h | 652 ++
> .../media/vxd/decoder/h264fw_data_shared.h | 759 ++
> .../media/vxd/decoder/hevc_secure_parser.c | 2895 +++++++
> .../media/vxd/decoder/hevc_secure_parser.h | 455 +
> .../staging/media/vxd/decoder/hevcfw_data.h | 472 ++
> .../media/vxd/decoder/hevcfw_data_shared.h | 767 ++
> .../staging/media/vxd/decoder/hw_control.c | 1211 +++
> .../staging/media/vxd/decoder/hw_control.h | 144 +
> .../media/vxd/decoder/img_dec_common.h | 278 +
> .../media/vxd/decoder/img_msvdx_cmds.h | 279 +
> .../media/vxd/decoder/img_msvdx_core_regs.h | 22 +
> .../media/vxd/decoder/img_msvdx_vdmc_regs.h | 26 +
> .../media/vxd/decoder/img_msvdx_vec_regs.h | 60 +
> .../staging/media/vxd/decoder/img_pixfmts.h | 195 +
> .../media/vxd/decoder/img_profiles_levels.h | 33 +
> .../media/vxd/decoder/img_pvdec_core_regs.h | 60 +
> .../media/vxd/decoder/img_pvdec_pixel_regs.h | 35 +
> .../media/vxd/decoder/img_pvdec_test_regs.h | 39 +
> .../media/vxd/decoder/img_vdec_fw_msg.h | 192 +
> .../vxd/decoder/img_video_bus4_mmu_regs.h | 120 +
> .../media/vxd/decoder/jpeg_secure_parser.c | 645 ++
> .../media/vxd/decoder/jpeg_secure_parser.h | 37 +
> .../staging/media/vxd/decoder/jpegfw_data.h | 83 +
> .../media/vxd/decoder/jpegfw_data_shared.h | 84 +
> drivers/staging/media/vxd/decoder/mem_io.h | 42 +
> drivers/staging/media/vxd/decoder/mmu_defs.h | 42 +
> drivers/staging/media/vxd/decoder/pixel_api.c | 895 ++
> drivers/staging/media/vxd/decoder/pixel_api.h | 152 +
> .../media/vxd/decoder/pvdec_entropy_regs.h | 33 +
> drivers/staging/media/vxd/decoder/pvdec_int.h | 82 +
> .../media/vxd/decoder/pvdec_vec_be_regs.h | 35 +
> drivers/staging/media/vxd/decoder/reg_io2.h | 74 +
> .../staging/media/vxd/decoder/scaler_setup.h | 59 +
> drivers/staging/media/vxd/decoder/swsr.c | 1657 ++++
> drivers/staging/media/vxd/decoder/swsr.h | 278 +
> .../media/vxd/decoder/translation_api.c | 1725 ++++
> .../media/vxd/decoder/translation_api.h | 42 +
> drivers/staging/media/vxd/decoder/vdec_defs.h | 548 ++
> .../media/vxd/decoder/vdec_mmu_wrapper.c | 829 ++
> .../media/vxd/decoder/vdec_mmu_wrapper.h | 174 +
> .../staging/media/vxd/decoder/vdecdd_defs.h | 446 +
> .../staging/media/vxd/decoder/vdecdd_utils.c | 95 +
> .../staging/media/vxd/decoder/vdecdd_utils.h | 93 +
> .../media/vxd/decoder/vdecdd_utils_buf.c | 897 ++
> .../staging/media/vxd/decoder/vdecfw_share.h | 36 +
> .../staging/media/vxd/decoder/vdecfw_shared.h | 893 ++
> drivers/staging/media/vxd/decoder/vxd_core.c | 1683 ++++
> drivers/staging/media/vxd/decoder/vxd_dec.c | 185 +
> drivers/staging/media/vxd/decoder/vxd_dec.h | 477 ++
> drivers/staging/media/vxd/decoder/vxd_ext.h | 74 +
> drivers/staging/media/vxd/decoder/vxd_int.c | 1137 +++
> drivers/staging/media/vxd/decoder/vxd_int.h | 128 +
> .../staging/media/vxd/decoder/vxd_mmu_defs.h | 30 +
> drivers/staging/media/vxd/decoder/vxd_props.h | 80 +
> drivers/staging/media/vxd/decoder/vxd_pvdec.c | 1745 ++++
> .../media/vxd/decoder/vxd_pvdec_priv.h | 126 +
> .../media/vxd/decoder/vxd_pvdec_regs.h | 779 ++
> drivers/staging/media/vxd/decoder/vxd_v4l2.c | 2129 +++++
> include/uapi/linux/videodev2.h | 2 +
> 114 files changed, 62369 insertions(+)
> create mode 100644 Documentation/devicetree/bindings/media/img,d5520-vxd.yaml
> create mode 100644 arch/arm64/configs/ti_sdk_arm64_release_defconfig
> create mode 100644 drivers/staging/media/vxd/common/addr_alloc.c
> create mode 100644 drivers/staging/media/vxd/common/addr_alloc.h
> create mode 100644 drivers/staging/media/vxd/common/dq.c
> create mode 100644 drivers/staging/media/vxd/common/dq.h
> create mode 100644 drivers/staging/media/vxd/common/hash.c
> create mode 100644 drivers/staging/media/vxd/common/hash.h
> create mode 100644 drivers/staging/media/vxd/common/idgen_api.c
> create mode 100644 drivers/staging/media/vxd/common/idgen_api.h
> create mode 100644 drivers/staging/media/vxd/common/img_errors.h
> create mode 100644 drivers/staging/media/vxd/common/img_mem.h
> create mode 100644 drivers/staging/media/vxd/common/img_mem_man.c
> create mode 100644 drivers/staging/media/vxd/common/img_mem_man.h
> create mode 100644 drivers/staging/media/vxd/common/img_mem_unified.c
> create mode 100644 drivers/staging/media/vxd/common/imgmmu.c
> create mode 100644 drivers/staging/media/vxd/common/imgmmu.h
> create mode 100644 drivers/staging/media/vxd/common/lst.c
> create mode 100644 drivers/staging/media/vxd/common/lst.h
> create mode 100644 drivers/staging/media/vxd/common/pool.c
> create mode 100644 drivers/staging/media/vxd/common/pool.h
> create mode 100644 drivers/staging/media/vxd/common/pool_api.c
> create mode 100644 drivers/staging/media/vxd/common/pool_api.h
> create mode 100644 drivers/staging/media/vxd/common/ra.c
> create mode 100644 drivers/staging/media/vxd/common/ra.h
> create mode 100644 drivers/staging/media/vxd/common/resource.c
> create mode 100644 drivers/staging/media/vxd/common/resource.h
> create mode 100644 drivers/staging/media/vxd/common/rman_api.c
> create mode 100644 drivers/staging/media/vxd/common/rman_api.h
> create mode 100644 drivers/staging/media/vxd/common/talmmu_api.c
> create mode 100644 drivers/staging/media/vxd/common/talmmu_api.h
> create mode 100644 drivers/staging/media/vxd/common/vid_buf.h
> create mode 100644 drivers/staging/media/vxd/common/work_queue.c
> create mode 100644 drivers/staging/media/vxd/common/work_queue.h
> create mode 100644 drivers/staging/media/vxd/decoder/Kconfig
> create mode 100644 drivers/staging/media/vxd/decoder/Makefile
> create mode 100644 drivers/staging/media/vxd/decoder/bspp.c
> create mode 100644 drivers/staging/media/vxd/decoder/bspp.h
> create mode 100644 drivers/staging/media/vxd/decoder/bspp_int.h
> create mode 100644 drivers/staging/media/vxd/decoder/core.c
> create mode 100644 drivers/staging/media/vxd/decoder/core.h
> create mode 100644 drivers/staging/media/vxd/decoder/dec_resources.c
> create mode 100644 drivers/staging/media/vxd/decoder/dec_resources.h
> create mode 100644 drivers/staging/media/vxd/decoder/decoder.c
> create mode 100644 drivers/staging/media/vxd/decoder/decoder.h
> create mode 100644 drivers/staging/media/vxd/decoder/fw_interface.h
> create mode 100644 drivers/staging/media/vxd/decoder/h264_idx.h
> create mode 100644 drivers/staging/media/vxd/decoder/h264_secure_parser.c
> create mode 100644 drivers/staging/media/vxd/decoder/h264_secure_parser.h
> create mode 100644 drivers/staging/media/vxd/decoder/h264_vlc.h
> create mode 100644 drivers/staging/media/vxd/decoder/h264fw_data.h
> create mode 100644 drivers/staging/media/vxd/decoder/h264fw_data_shared.h
> create mode 100644 drivers/staging/media/vxd/decoder/hevc_secure_parser.c
> create mode 100644 drivers/staging/media/vxd/decoder/hevc_secure_parser.h
> create mode 100644 drivers/staging/media/vxd/decoder/hevcfw_data.h
> create mode 100644 drivers/staging/media/vxd/decoder/hevcfw_data_shared.h
> create mode 100644 drivers/staging/media/vxd/decoder/hw_control.c
> create mode 100644 drivers/staging/media/vxd/decoder/hw_control.h
> create mode 100644 drivers/staging/media/vxd/decoder/img_dec_common.h
> create mode 100644 drivers/staging/media/vxd/decoder/img_msvdx_cmds.h
> create mode 100644 drivers/staging/media/vxd/decoder/img_msvdx_core_regs.h
> create mode 100644 drivers/staging/media/vxd/decoder/img_msvdx_vdmc_regs.h
> create mode 100644 drivers/staging/media/vxd/decoder/img_msvdx_vec_regs.h
> create mode 100644 drivers/staging/media/vxd/decoder/img_pixfmts.h
> create mode 100644 drivers/staging/media/vxd/decoder/img_profiles_levels.h
> create mode 100644 drivers/staging/media/vxd/decoder/img_pvdec_core_regs.h
> create mode 100644 drivers/staging/media/vxd/decoder/img_pvdec_pixel_regs.h
> create mode 100644 drivers/staging/media/vxd/decoder/img_pvdec_test_regs.h
> create mode 100644 drivers/staging/media/vxd/decoder/img_vdec_fw_msg.h
> create mode 100644 drivers/staging/media/vxd/decoder/img_video_bus4_mmu_regs.h
> create mode 100644 drivers/staging/media/vxd/decoder/jpeg_secure_parser.c
> create mode 100644 drivers/staging/media/vxd/decoder/jpeg_secure_parser.h
> create mode 100644 drivers/staging/media/vxd/decoder/jpegfw_data.h
> create mode 100644 drivers/staging/media/vxd/decoder/jpegfw_data_shared.h
> create mode 100644 drivers/staging/media/vxd/decoder/mem_io.h
> create mode 100644 drivers/staging/media/vxd/decoder/mmu_defs.h
> create mode 100644 drivers/staging/media/vxd/decoder/pixel_api.c
> create mode 100644 drivers/staging/media/vxd/decoder/pixel_api.h
> create mode 100644 drivers/staging/media/vxd/decoder/pvdec_entropy_regs.h
> create mode 100644 drivers/staging/media/vxd/decoder/pvdec_int.h
> create mode 100644 drivers/staging/media/vxd/decoder/pvdec_vec_be_regs.h
> create mode 100644 drivers/staging/media/vxd/decoder/reg_io2.h
> create mode 100644 drivers/staging/media/vxd/decoder/scaler_setup.h
> create mode 100644 drivers/staging/media/vxd/decoder/swsr.c
> create mode 100644 drivers/staging/media/vxd/decoder/swsr.h
> create mode 100644 drivers/staging/media/vxd/decoder/translation_api.c
> create mode 100644 drivers/staging/media/vxd/decoder/translation_api.h
> create mode 100644 drivers/staging/media/vxd/decoder/vdec_defs.h
> create mode 100644 drivers/staging/media/vxd/decoder/vdec_mmu_wrapper.c
> create mode 100644 drivers/staging/media/vxd/decoder/vdec_mmu_wrapper.h
> create mode 100644 drivers/staging/media/vxd/decoder/vdecdd_defs.h
> create mode 100644 drivers/staging/media/vxd/decoder/vdecdd_utils.c
> create mode 100644 drivers/staging/media/vxd/decoder/vdecdd_utils.h
> create mode 100644 drivers/staging/media/vxd/decoder/vdecdd_utils_buf.c
> create mode 100644 drivers/staging/media/vxd/decoder/vdecfw_share.h
> create mode 100644 drivers/staging/media/vxd/decoder/vdecfw_shared.h
> create mode 100644 drivers/staging/media/vxd/decoder/vxd_core.c
> create mode 100644 drivers/staging/media/vxd/decoder/vxd_dec.c
> create mode 100644 drivers/staging/media/vxd/decoder/vxd_dec.h
> create mode 100644 drivers/staging/media/vxd/decoder/vxd_ext.h
> create mode 100644 drivers/staging/media/vxd/decoder/vxd_int.c
> create mode 100644 drivers/staging/media/vxd/decoder/vxd_int.h
> create mode 100644 drivers/staging/media/vxd/decoder/vxd_mmu_defs.h
> create mode 100644 drivers/staging/media/vxd/decoder/vxd_props.h
> create mode 100644 drivers/staging/media/vxd/decoder/vxd_pvdec.c
> create mode 100644 drivers/staging/media/vxd/decoder/vxd_pvdec_priv.h
> create mode 100644 drivers/staging/media/vxd/decoder/vxd_pvdec_regs.h
> create mode 100644 drivers/staging/media/vxd/decoder/vxd_v4l2.c
>
On Wed, Aug 18, 2021 at 07:40:10PM +0530, [email protected] wrote:
> +int img_mem_create_ctx(struct mem_ctx **new_ctx)
> +{
> + struct mem_man *mem_man = &mem_man_data;
> + struct mem_ctx *ctx;
> +
> + ctx = kzalloc(sizeof(*ctx), GFP_KERNEL);
> + if (!ctx)
> + return -ENOMEM;
> +
> + ctx->buffers = kzalloc(sizeof(*ctx->buffers), GFP_KERNEL);
> + if (!ctx->buffers)
> + return -ENOMEM;
Smatch would have caught that this needs a kfree(ctx); before returning.
It wouldn't hurt to run Smatch over this code.
git clone https://repo.or.cz/w/smatch.git
cd smatch
yum install gcc make sqlite3 sqlite-devel sqlite perl-DBD-SQLite openssl-devel perl-Try-Tiny
make
cd ~/kernel/
~/smatch/smatch_scripts/kchecker drivers/staging/media/vxd/common/img_mem_man.c
(I am the author of Smatch #BlowYourOwnTrumpet).
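Coming back to the leak itself, something along these lines would plug it
(an untested sketch, based only on the snippet quoted above):

	ctx->buffers = kzalloc(sizeof(*ctx->buffers), GFP_KERNEL);
	if (!ctx->buffers) {
		kfree(ctx);
		return -ENOMEM;
	}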
regards,
dan carpenter
On Wed, Aug 18, 2021 at 07:40:07PM +0530, [email protected] wrote:
>
> This message contains confidential information and is intended only for the
> individual(s) named. If you are not the intended recipient, you are notified
> that disclosing, copying, distributing or taking any action in reliance on
> the contents of this mail and attached file/s is strictly prohibited. Please
> notify the sender immediately and delete this e-mail from your system.
> E-mail transmission cannot be guaranteed to be secured or error-free as
> information could be intercepted, corrupted, lost, destroyed, arrive late or
> incomplete, or contain viruses. The sender therefore does not accept
> liability for any errors or omissions in the contents of this message, which
> arise as a result of e-mail transmission.
You can't have this type of footer on your email for GPL source code. :P
regards,
dan carpenter
Of course this is all staging code and we normally don't review staging
code too hard when it's first sent because we understand that staging
code needs a lot of work before it's acceptable.
One of the rules of staging is that you can't touch code outside of
staging.
But since I happened to glance at some of this code, here are
some tiny comments:
On Wed, Aug 18, 2021 at 07:40:16PM +0530, [email protected] wrote:
> +/*
> + * Structure contains the ID context.
> + */
> +struct idgen_hdblk {
> + void **link; /* to be part of single linked list */
> + /* Array of handles in this block. */
> + void *ahhandles[1];
Don't use 1 element flex arrays. Do it like this:
void *ahhandles[];
You will have to adjust all your math. But you should be using the
"struct_size(hdblk, ahhandles, nr)" macro anyway to prevent integer
overflows.
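A sketch of what that conversion looks like (the struct members come from the
quoted code; the allocation site and the 'blksize' count are assumptions):

	struct idgen_hdblk {
		void **link;        /* to be part of single linked list */
		void *ahhandles[];  /* flexible array of handles */
	};

	struct idgen_hdblk *hdblk;

	/* allocation, sized with struct_size() to avoid integer overflow */
	hdblk = kzalloc(struct_size(hdblk, ahhandles, blksize), GFP_KERNEL);
	if (!hdblk)
		return -ENOMEM;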
> +int idgen_createcontext(unsigned int maxid, unsigned int blksize,
> + int incid, void **idgenhandle)
> +{
> + struct idgen_context *idcontext;
> +
> + /* Create context structure */
> + idcontext = kzalloc(sizeof(*idcontext), GFP_KERNEL);
> + if (!idcontext)
> + return IMG_ERROR_OUT_OF_MEMORY;
We need to get rid of all these weird error codes? You can add that
to the TODO file.
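In practice that just means returning the standard errno values directly,
e.g. (a sketch of the quoted allocation):

	idcontext = kzalloc(sizeof(*idcontext), GFP_KERNEL);
	if (!idcontext)
		return -ENOMEM;	/* rather than IMG_ERROR_OUT_OF_MEMORY */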
> +int idgen_destroycontext(void *idgenhandle)
> +{
> + struct idgen_context *idcontext = (struct idgen_context *)idgenhandle;
> + struct idgen_hdblk *hdblk;
> +
> + if (!idcontext)
> + return IMG_ERROR_INVALID_PARAMETERS;
> +
> + /* If incrementing Ids, free the List of Incrementing Ids */
> + if (idcontext->incids) {
> + struct idgen_id *id;
> + unsigned int i = 0;
> +
> + for (i = 0; i < idcontext->blksize; i++) {
> + id = lst_removehead(&idcontext->incidlist[i]);
You're not allowed to invent your own list API. :P Generally there just
seem to be too many extra helper functions and wrappers. Removing this
kind of stuff is standard for staging drivers, so that's normal. Add it
to the TODO.
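The generic <linux/list.h> API already covers this; a minimal sketch of the
conversion (the embedded list_head and its field name are assumptions, only
idgen_id and incidlist come from the quoted code):

	#include <linux/list.h>

	struct idgen_id {
		struct list_head list;	/* replaces the custom lst link */
		/* ... */
	};

	/* remove the first entry, if any, from incidlist[i] */
	id = list_first_entry_or_null(&idcontext->incidlist[i],
				      struct idgen_id, list);
	if (id)
		list_del(&id->list);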
regards,
dan carpenter
Le mercredi 18 août 2021 à 19:40 +0530, [email protected] a écrit :
> From: Sidraya <[email protected]>
>
> The default color formats support only 8bit color depth. This patch
> adds 10bit definitions for NV12 and NV16.
>
> Signed-off-by: Sunita Nadampalli <[email protected]>
> Signed-off-by: Sidraya <[email protected]>
> ---
> drivers/media/v4l2-core/v4l2-ioctl.c | 2 ++
> include/uapi/linux/videodev2.h | 2 ++
> 2 files changed, 4 insertions(+)
>
> diff --git a/drivers/media/v4l2-core/v4l2-ioctl.c b/drivers/media/v4l2-core/v4l2-ioctl.c
> index 05d5db3d85e5..445458c15168 100644
> --- a/drivers/media/v4l2-core/v4l2-ioctl.c
> +++ b/drivers/media/v4l2-core/v4l2-ioctl.c
> @@ -1367,6 +1367,8 @@ static void v4l_fill_fmtdesc(struct v4l2_fmtdesc *fmt)
> case V4L2_META_FMT_VIVID: descr = "Vivid Metadata"; break;
> case V4L2_META_FMT_RK_ISP1_PARAMS: descr = "Rockchip ISP1 3A Parameters"; break;
> case V4L2_META_FMT_RK_ISP1_STAT_3A: descr = "Rockchip ISP1 3A Statistics"; break;
> + case V4L2_PIX_FMT_TI1210: descr = "10-bit YUV 4:2:0 (NV12)"; break;
> + case V4L2_PIX_FMT_TI1610: descr = "10-bit YUV 4:2:2 (NV16)"; break;
>
> default:
> /* Compressed formats */
> diff --git a/include/uapi/linux/videodev2.h b/include/uapi/linux/videodev2.h
> index 9260791b8438..a71ffd686050 100644
> --- a/include/uapi/linux/videodev2.h
> +++ b/include/uapi/linux/videodev2.h
> @@ -737,6 +737,8 @@ struct v4l2_pix_format {
> #define V4L2_PIX_FMT_SUNXI_TILED_NV12 v4l2_fourcc('S', 'T', '1', '2') /* Sunxi Tiled NV12 Format */
> #define V4L2_PIX_FMT_CNF4 v4l2_fourcc('C', 'N', 'F', '4') /* Intel 4-bit packed depth confidence information */
> #define V4L2_PIX_FMT_HI240 v4l2_fourcc('H', 'I', '2', '4') /* BTTV 8-bit dithered RGB */
> +#define V4L2_PIX_FMT_TI1210 v4l2_fourcc('T', 'I', '1', '2') /* TI NV12 10-bit, two bytes per channel */
> +#define V4L2_PIX_FMT_TI1610 v4l2_fourcc('T', 'I', '1', '6') /* TI NV16 10-bit, two bytes per channel */
As we try to avoid past mistakes (consider HM12 and the Sunxi tiled format as
examples), what makes your pixel format 100% TI specific? The only valid answer
is usually compression or a very odd pixel arrangement that only makes sense for
a specific HW. In any case, please provide all the information you can about
this format (tiling pattern, block size, compression if any). This way we can
judge whether allocating a vendor format actually makes sense. It would be sad if
this turns out to be just another 16x16 tiled format (yes, we have multiple
already).
From past experience with OMAP, TI CODECs use linear tiling very similar to all
other platforms. If that is the case, it should be added as a generic format with
enough documentation for a third party to understand it. Of course, if any
of the HW specification in this regard is public, please share or link to the
related documents.
>
> /* 10bit raw bayer packed, 32 bytes for every 25 pixels, last LSB 6 bits unused */
> #define V4L2_PIX_FMT_IPU3_SBGGR10 v4l2_fourcc('i', 'p', '3', 'b') /* IPU3 packed 10-bit BGGR bayer */
> --
> 2.17.1
>
>
On Tue, Aug 24, 2021 at 04:34:38PM +0300, Dan Carpenter wrote:
> On Wed, Aug 18, 2021 at 07:40:10PM +0530, [email protected] wrote:
> > +int img_mem_create_ctx(struct mem_ctx **new_ctx)
> > +{
> > + struct mem_man *mem_man = &mem_man_data;
> > + struct mem_ctx *ctx;
> > +
> > + ctx = kzalloc(sizeof(*ctx), GFP_KERNEL);
> > + if (!ctx)
> > + return -ENOMEM;
> > +
> > + ctx->buffers = kzalloc(sizeof(*ctx->buffers), GFP_KERNEL);
> > + if (!ctx->buffers)
> > + return -ENOMEM;
>
> Smatch would have caught that this needs a kfree(ctx); before returning.
>
> It wouldn't hurt to run Smatch over this code.
>
> git clone https://repo.or.cz/w/smatch.git
> cd smatch
> yum install gcc make sqlite3 sqlite-devel sqlite perl-DBD-SQLite openssl-devel perl-Try-Tiny
> make
> cd ~/kernel/
> ~/smatch/smatch_scripts/kchecker drivers/staging/media/vxd/common/img_mem_man.c
>
> (I am the author of Smatch #BlowYourOwnTrumpet).
>
> regards,
> dan carpenter
>
I will run Smatch and fix any similar findings.
Thank you for reviewing.
On Tue, Aug 24, 2021 at 05:00:06PM +0300, Dan Carpenter wrote:
> Of course this is all staging code and we normally don't review staging
> code too hard when it's first sent because we understand that staging
> code needs a lot of work before it's acceptable.
>
> One of the rules of staging is that you can't touch code outside of
> staging.
>
> But since I happened to glance at some of this code, here are
> some tiny comments:
>
> On Wed, Aug 18, 2021 at 07:40:16PM +0530, [email protected] wrote:
> > +/*
> > + * Structure contains the ID context.
> > + */
> > +struct idgen_hdblk {
> > + void **link; /* to be part of single linked list */
> > + /* Array of handles in this block. */
> > + void *ahhandles[1];
>
> Don't use 1 element flex arrays. Do it like this:
>
> void *ahhandles[];
>
> You will have to adjust all your math. But you should be using the
> "struct_size(hdblk, ahhandles, nr)" macro anyway to prevent integer
> overflows.
>
> > +int idgen_createcontext(unsigned int maxid, unsigned int blksize,
> > + int incid, void **idgenhandle)
> > +{
> > + struct idgen_context *idcontext;
> > +
> > + /* Create context structure */
> > + idcontext = kzalloc(sizeof(*idcontext), GFP_KERNEL);
> > + if (!idcontext)
> > + return IMG_ERROR_OUT_OF_MEMORY;
>
> We need to get rid of all these weird error codes? You can add that
> to the TODO file.
>
> > +int idgen_destroycontext(void *idgenhandle)
> > +{
> > + struct idgen_context *idcontext = (struct idgen_context *)idgenhandle;
> > + struct idgen_hdblk *hdblk;
> > +
> > + if (!idcontext)
> > + return IMG_ERROR_INVALID_PARAMETERS;
> > +
> > + /* If incrementing Ids, free the List of Incrementing Ids */
> > + if (idcontext->incids) {
> > + struct idgen_id *id;
> > + unsigned int i = 0;
> > +
> > + for (i = 0; i < idcontext->blksize; i++) {
> > + id = lst_removehead(&idcontext->incidlist[i]);
>
> You're not allowed to invent your own list API. :P Generally there just
> seem to be too many extra helper functions and wrappers. Removing this
> kind of stuff is standard for staging drivers, so that's normal. Add it
> to the TODO.
>
> regards,
> dan carpenter
>
I will replace void *ahhandles[1] with void *ahhandles[]; and will use
"struct_size" while allocating memory.
I will replace all the custom error macros with the standard Linux error codes.
Regarding the custom data structure APIs (e.g. list, queue), I am currently
working on replacing them with the Linux kernel's generic data structure APIs.
Thank you for reviewing.
On Tue, Sep 14, 2021 at 09:10:37AM +0530, Sidraya Jayagond wrote:
> This message contains confidential information and is intended only for the
> individual(s) named. If you are not the intended recipient, you are notified
> that disclosing, copying, distributing or taking any action in reliance on
> the contents of this mail and attached file/s is strictly prohibited. Please
> notify the sender immediately and delete this e-mail from your system.
> E-mail transmission cannot be guaranteed to be secured or error-free as
> information could be intercepted, corrupted, lost, destroyed, arrive late or
> incomplete, or contain viruses. The sender therefore does not accept
> liability for any errors or omissions in the contents of this message, which
> arise as a result of e-mail transmission.
>
Now deleted, this is not ok for kernel development mailing lists, sorry.
On Tue, Sep 14, 2021 at 06:35:04AM +0200, Greg KH wrote:
> On Tue, Sep 14, 2021 at 09:10:37AM +0530, Sidraya Jayagond wrote:
> > This message contains confidential information and is intended only for the
> > individual(s) named. If you are not the intended recipient, you are notified
> > that disclosing, copying, distributing or taking any action in reliance on
> > the contents of this mail and attached file/s is strictly prohibited. Please
> > notify the sender immediately and delete this e-mail from your system.
> > E-mail transmission cannot be guaranteed to be secured or error-free as
> > information could be intercepted, corrupted, lost, destroyed, arrive late or
> > incomplete, or contain viruses. The sender therefore does not accept
> > liability for any errors or omissions in the contents of this message, which
> > arise as a result of e-mail transmission.
> >
>
> Now deleted, this is not ok for kernel development mailing lists, sorry.
We were able to resolve this and have removed the confidentiality signature
from my email ID.
I apologize for the inconvenience.