2024-04-03 07:10:08

by Shawn Sung (宋孝謙)

Subject: [PATCH v4 0/9] Add mediatek-drm secure flow for SVP

From: Hsiao Chien Sung <[email protected]>

Memory Definitions:
secure memory - Memory allocated in the TEE (Trusted Execution
Environment) which is inaccessible in the REE (Rich Execution
Environment, i.e. linux kernel/userspace).
secure handle - Integer value which acts as reference to 'secure
memory'. Used in communication between TEE and REE to reference
'secure memory'.
secure buffer - 'secure memory' that is used to store decrypted,
compressed video or for other general purposes in the TEE.
secure surface - 'secure memory' that is used to store graphic buffers.

Memory Usage in SVP:
The overall flow of SVP starts with encrypted video coming in from an
outside source into the REE (sketched below):
1. The REE allocates a 'secure buffer' and sends the corresponding
'secure handle', along with the encrypted, compressed video data, to
the TEE.
2. The TEE decrypts the video and stores the result in the 'secure
buffer'.
3. The REE allocates a 'secure surface' and passes the 'secure handles'
for both the 'secure buffer' and the 'secure surface' into the TEE for
video decoding.
4. The video decoder HW decodes the contents of the 'secure buffer' and
places the result in the 'secure surface'.
5. The REE attaches the 'secure surface' to the overlay plane for
rendering of the video.
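
For illustration, a minimal REE-side sketch of this flow. Every helper
name below is a hypothetical stand-in invented for this sketch; the
real flow spans the dma-buf heap, video decoder, and DRM series listed
further down. The point is that the REE only ever handles integer
'secure handles', never the contents of the secure memory:

/* Illustrative only: none of these helpers exist in this series. */
typedef uint32_t secure_handle_t;  /* reference to TEE-owned memory */

static int svp_show_frame(const void *enc_bits, size_t enc_len)
{
        secure_handle_t buf, surf;

        buf = ree_alloc_secure_buffer(enc_len);    /* step 1 */
        tee_decrypt(buf, enc_bits, enc_len);       /* step 2 */

        surf = ree_alloc_secure_surface();         /* step 3 */
        tee_decode(buf, surf);                     /* step 4: buffer -> surface */

        return ree_attach_to_overlay_plane(surf);  /* step 5 */
}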

Everything relating to ensuring security of the actual contents of the
'secure buffer' and 'secure surface' is out of scope for the REE and
is the responsibility of the TEE.

The DRM driver handles allocation of GEM objects that are backed by a
'secure surface' and supports displaying a 'secure surface' on the
overlay plane. This series introduces a new flag for object creation,
DRM_MTK_GEM_CREATE_ENCRYPTED, which indicates that the object should be
a 'secure surface'. All changes here are in MediaTek-specific code.
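
A rough userspace usage sketch: the struct and ioctl names below are
assumptions inferred from the patch titles in this series (only the
DRM_MTK_GEM_CREATE_ENCRYPTED flag itself is taken from the series):

static int alloc_secure_surface(int drm_fd, uint64_t size, uint32_t *handle)
{
        /* assumed uapi names: struct drm_mtk_gem_create, DRM_IOCTL_MTK_GEM_CREATE */
        struct drm_mtk_gem_create create = {
                .size  = size,
                .flags = DRM_MTK_GEM_CREATE_ENCRYPTED, /* back it with secure memory */
        };

        if (drmIoctl(drm_fd, DRM_IOCTL_MTK_GEM_CREATE, &create))
                return -errno;

        *handle = create.handle; /* ordinary GEM handle; pages stay TEE-only */
        return 0;
}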
---
TODO:
1) Remove the get sec larb port interface in ddp_comp, ovl and
ovl_adaptor.
2) Verify the instructions for enabling/disabling dapc and larb ports
in the TEE, and drop the sec_engine flags in the normal world.
3) Move the DISP_REG_OVL_SECURE setting to the secure world for
mtk_disp_ovl.c.
4) Change the register address parameter of mtk_ddp_sec_write()
from "u32 addr" to "struct cmdq_client_reg *cmdq_reg".
5) Implement setting the mmsys routing table in the secure world
series.
---
Based on 5 series and 1 patch:
[1] v3 dma-buf: heaps: Add MediaTek secure heap
- https://patchwork.kernel.org/project/linux-mediatek/list/?series=809023
[2] v3 add driver to support secure video decoder
- https://patchwork.kernel.org/project/linux-mediatek/list/?series=807308
[3] v4 soc: mediatek: Add register definitions for GCE
- https://patchwork.kernel.org/project/linux-mediatek/patch/[email protected]/
[4] v2 Add CMDQ driver support for mt8188
- https://patchwork.kernel.org/project/linux-mediatek/list/?series=810302
[5] Add mediatek,gce-events definition to mediatek,gce-mailbox bindings
- https://patchwork.kernel.org/project/linux-mediatek/list/?series=810938
[6] v3 Add CMDQ secure driver for SVP
- https://patchwork.kernel.org/project/linux-mediatek/list/?series=812379
---
Changes in v4:
1. Rebase on mediatek-drm-next (278640d4d74cd) and fix the conflicts
2. This series is based on [email protected]
3. This series is based on [email protected]
4. This series is based on [email protected]

Changes in v3:
1. fix kerneldoc problems
2. fix typos in title and commit message
3. adjust naming for secure variables
4. add the missing part of the is_secure plane implementation
5. use the BIT_ULL macro to replace bit shifting
6. move the ovl_adaptor modification to the correct patch
7. add TODO list in commit message
8. add commit message for using shared memory to store the execute count

Changes in v2:

1. remove the DRIVER_RENDER flag for mtk_drm_ioctl
2. move cmdq_insert_backup_cookie into the client driver
3. move the secure gce node definition from mt8195-cherry.dtsi to mt8195.dtsi
---
CK Hu (1):
drm/mediatek: Add interface to allocate MediaTek GEM buffer.

Jason-JH.Lin (9):
drm/mediatek/uapi: Add DRM_MTK_GEM_CREATE_ENCRYPTED flag
drm/mediatek: Add secure buffer control flow to mtk_drm_gem
drm/mediatek: Add secure identify flag and function to mtk_drm_plane
drm/mediatek: Add mtk_ddp_sec_write to config secure buffer info
drm/mediatek: Add get_sec_port interface to mtk_ddp_comp
drm/mediatek: Add secure layer config support for ovl_adaptor
drm/mediatek: Add secure layer config support for ovl
drm/mediatek: Add secure flow support to mediatek-drm
drm/mediatek: Add cmdq_sec_insert_backup_cookie before secure pkt finalize

drivers/gpu/drm/mediatek/mtk_crtc.c | 273 +++++++++++++++++-
drivers/gpu/drm/mediatek/mtk_crtc.h | 1 +
drivers/gpu/drm/mediatek/mtk_ddp_comp.c | 16 +
drivers/gpu/drm/mediatek/mtk_ddp_comp.h | 13 +
drivers/gpu/drm/mediatek/mtk_disp_drv.h | 3 +
drivers/gpu/drm/mediatek/mtk_disp_ovl.c | 30 +-
.../gpu/drm/mediatek/mtk_disp_ovl_adaptor.c | 15 +
drivers/gpu/drm/mediatek/mtk_gem.c | 85 +++++-
drivers/gpu/drm/mediatek/mtk_gem.h | 4 +
drivers/gpu/drm/mediatek/mtk_mdp_rdma.c | 11 +-
drivers/gpu/drm/mediatek/mtk_mdp_rdma.h | 2 +
drivers/gpu/drm/mediatek/mtk_plane.c | 25 ++
drivers/gpu/drm/mediatek/mtk_plane.h | 2 +
include/uapi/drm/mediatek_drm.h | 1 +
14 files changed, 465 insertions(+), 16 deletions(-)

--
2.18.0



2024-04-03 07:10:37

by Shawn Sung (宋孝謙)

Subject: [PATCH v4 9/9] drm/mediatek: Add cmdq_sec_insert_backup_cookie before secure pkt finalize

From: "Jason-JH.Lin" <[email protected]>

Add cmdq_sec_insert_backup_cookie() to append some commands before EOC:
1. Get the GCE HW thread's execute count from the GCE HW register.
2. Add 1 to the execute count and store the result in shared memory.
3. Set a software event signal as the secure irq to the GCE HW.

Since the value of execute count + 1 is stored in shared memory, the
CMDQ driver in the normal world can use it to handle task completion in
the irq handler, and the CMDQ driver in the secure world can use it to
schedule the task slot for each secure thread.
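
Conceptually, the commands appended before EOC look like the sketch
below. The gce_append_*() helpers, register names, and event name are
all invented for illustration; the real implementation belongs to the
CMDQ secure driver series [6]:

/* Illustrative sketch only: the gce_append_*() helpers do not exist. */
static void backup_cookie_sketch(struct cmdq_pkt *pkt,
                                 dma_addr_t shm_pa, u32 thread)
{
        /* 1. read the HW thread's execute count into a GCE register */
        gce_append_read(pkt, GCE_THR_EXEC_CNT(thread), GCE_SPR0);

        /* 2. add 1 and store the cookie into the shared memory */
        gce_append_add_imm(pkt, GCE_SPR0, 1);
        gce_append_write(pkt, shm_pa, GCE_SPR0);

        /* 3. raise a software event that acts as the secure irq */
        gce_append_set_event(pkt, CMDQ_EVENT_SEC_IRQ);
}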

The reasons why we use shared memory to record the execute count here
are:
1. The normal world cannot access the registers of a secure GCE thread.
2. Calling a TEE invoke command in the irq handler would be expensive
and unstable. I've tested that a single TEE invoke command to the CMDQ
PTA costs 19~53 us, and it may cost more in scenarios with heavier CPU
load.

Signed-off-by: Jason-JH.Lin <[email protected]>
Signed-off-by: Hsiao Chien Sung <[email protected]>
---
drivers/gpu/drm/mediatek/mtk_crtc.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/mediatek/mtk_crtc.c b/drivers/gpu/drm/mediatek/mtk_crtc.c
index a6ba9965500f0..2fb52928a3055 100644
--- a/drivers/gpu/drm/mediatek/mtk_crtc.c
+++ b/drivers/gpu/drm/mediatek/mtk_crtc.c
@@ -186,7 +186,7 @@ void mtk_crtc_disable_secure_state(struct drm_crtc *crtc)
sec_scn = CMDQ_SEC_SCNR_SUB_DISP_DISABLE;

cmdq_sec_pkt_set_data(&mtk_crtc->sec_cmdq_handle, sec_engine, sec_engine, sec_scn);
-
+ cmdq_sec_insert_backup_cookie(&mtk_crtc->sec_cmdq_handle);
cmdq_pkt_finalize(&mtk_crtc->sec_cmdq_handle);
dma_sync_single_for_device(mtk_crtc->sec_cmdq_client.chan->mbox->dev,
mtk_crtc->sec_cmdq_handle.pa_base,
@@ -810,6 +810,8 @@ static void mtk_crtc_update_config(struct mtk_crtc *mtk_crtc, bool needs_vblank)
cmdq_pkt_clear_event(cmdq_handle, mtk_crtc->cmdq_event);
cmdq_pkt_wfe(cmdq_handle, mtk_crtc->cmdq_event, false);
mtk_crtc_ddp_config(crtc, cmdq_handle);
+ if (cmdq_handle->sec_data)
+ cmdq_sec_insert_backup_cookie(cmdq_handle);
cmdq_pkt_finalize(cmdq_handle);
dma_sync_single_for_device(cmdq_client.chan->mbox->dev,
cmdq_handle->pa_base,
--
2.18.0


2024-04-03 07:11:17

by Shawn Sung (宋孝謙)

Subject: [PATCH v4 8/9] drm/mediatek: Add secure flow support to mediatek-drm

From: "Jason-JH.Lin" <[email protected]>

To add secure flow support to mediatek-drm, each CRTC has to create a
secure cmdq mailbox channel. Cmdq packets carrying the display HW
configuration are then sent to that secure mailbox channel and applied
in the secure world.

Each CRTC also has to use the secure cmdq interface to configure some
secure settings for the display HW before sending cmdq packets to the
secure mailbox channel.

If any fb fetched from the current drm_atomic_state is secure, the CRTC
switches to the secure flow to configure the display HW; if none of the
fbs in the current drm_atomic_state are secure, it switches back to the
normal flow.
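
Condensed from mtk_crtc_plane_switch_sec_state() in the diff below, the
per-CRTC decision boils down to the following (locking and the check
that a secure mailbox channel exists are omitted here):

static bool crtc_wants_secure_flow(struct drm_atomic_state *state,
                                   struct drm_crtc *crtc)
{
        struct drm_plane_state *old_plane_state;
        struct drm_plane *plane;
        int i;

        for_each_old_plane_in_state(state, plane, old_plane_state, i)
                if (plane->state->crtc == crtc &&
                    plane->state->fb &&
                    mtk_plane_fb_is_secure(plane->state->fb))
                        return true;    /* any secure fb -> secure flow */

        return false;                   /* no secure fbs -> normal flow */
}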

TODO:
1. Remove the get sec larb port interface in ddp_comp, ovl and
ovl_adaptor.
2. Verify the instructions for enabling/disabling dapc and larb ports
in the TEE, and drop the sec_engine flags in the normal world.

Signed-off-by: Jason-JH.Lin <[email protected]>
Signed-off-by: Hsiao Chien Sung <[email protected]>
---
drivers/gpu/drm/mediatek/mtk_crtc.c | 271 ++++++++++++++++++++++++++-
drivers/gpu/drm/mediatek/mtk_crtc.h | 1 +
drivers/gpu/drm/mediatek/mtk_plane.c | 7 +
3 files changed, 269 insertions(+), 10 deletions(-)

diff --git a/drivers/gpu/drm/mediatek/mtk_crtc.c b/drivers/gpu/drm/mediatek/mtk_crtc.c
index 29d00d11224b0..a6ba9965500f0 100644
--- a/drivers/gpu/drm/mediatek/mtk_crtc.c
+++ b/drivers/gpu/drm/mediatek/mtk_crtc.c
@@ -57,6 +57,11 @@ struct mtk_crtc {
u32 cmdq_event;
u32 cmdq_vblank_cnt;
wait_queue_head_t cb_blocking_queue;
+
+ struct cmdq_client sec_cmdq_client;
+ struct cmdq_pkt sec_cmdq_handle;
+ bool sec_cmdq_working;
+ wait_queue_head_t sec_cb_blocking_queue;
#endif

struct device *mmsys_dev;
@@ -73,6 +78,8 @@ struct mtk_crtc {

struct mtk_ddp_comp *crc_provider;
struct drm_vblank_work crc_work;
+
+ bool sec_on;
};

struct mtk_crtc_state {
@@ -117,6 +124,154 @@ static void mtk_drm_finish_page_flip(struct mtk_crtc *mtk_crtc)
}
}

+void mtk_crtc_disable_secure_state(struct drm_crtc *crtc)
+{
+#if IS_REACHABLE(CONFIG_MTK_CMDQ)
+ enum cmdq_sec_scenario sec_scn = CMDQ_SEC_SCNR_MAX;
+ int i;
+ struct mtk_ddp_comp *ddp_first_comp;
+ struct mtk_crtc *mtk_crtc = to_mtk_crtc(crtc);
+ u64 sec_engine = 0; /* for hw engine write output secure fb */
+ u64 sec_port = 0; /* for larb port read input secure fb */
+
+ mutex_lock(&mtk_crtc->hw_lock);
+
+ if (!mtk_crtc->sec_cmdq_client.chan) {
+ pr_err("crtc-%d secure mbox channel is NULL\n", drm_crtc_index(crtc));
+ goto err;
+ }
+
+ if (!mtk_crtc->sec_on) {
+ pr_debug("crtc-%d is already disabled!\n", drm_crtc_index(crtc));
+ goto err;
+ }
+
+ mbox_flush(mtk_crtc->sec_cmdq_client.chan, 0);
+ mtk_crtc->sec_cmdq_handle.cmd_buf_size = 0;
+
+ if (mtk_crtc->sec_cmdq_handle.sec_data) {
+ struct cmdq_sec_data *sec_data;
+
+ sec_data = mtk_crtc->sec_cmdq_handle.sec_data;
+ sec_data->addr_metadata_cnt = 0;
+ sec_data->addr_metadatas = (uintptr_t)NULL;
+ }
+
+ /*
+ * Secure path only support DL mode, so we just wait
+ * the first path frame done here
+ */
+ cmdq_pkt_wfe(&mtk_crtc->sec_cmdq_handle, mtk_crtc->cmdq_event, false);
+
+ ddp_first_comp = mtk_crtc->ddp_comp[0];
+ for (i = 0; i < mtk_crtc->layer_nr; i++) {
+ struct drm_plane *plane = &mtk_crtc->planes[i];
+
+ sec_port |= mtk_ddp_comp_layer_get_sec_port(ddp_first_comp, i);
+
+ /* make sure secure layer off before switching secure state */
+ if (!mtk_plane_fb_is_secure(plane->state->fb)) {
+ struct mtk_plane_state *plane_state = to_mtk_plane_state(plane->state);
+
+ plane_state->pending.enable = false;
+ mtk_ddp_comp_layer_config(ddp_first_comp, i, plane_state,
+ &mtk_crtc->sec_cmdq_handle);
+ }
+ }
+
+ /* Disable secure path */
+ if (drm_crtc_index(crtc) == 0)
+ sec_scn = CMDQ_SEC_SCNR_PRIMARY_DISP_DISABLE;
+ else if (drm_crtc_index(crtc) == 1)
+ sec_scn = CMDQ_SEC_SCNR_SUB_DISP_DISABLE;
+
+ cmdq_sec_pkt_set_data(&mtk_crtc->sec_cmdq_handle, sec_engine, sec_engine, sec_scn);
+
+ cmdq_pkt_finalize(&mtk_crtc->sec_cmdq_handle);
+ dma_sync_single_for_device(mtk_crtc->sec_cmdq_client.chan->mbox->dev,
+ mtk_crtc->sec_cmdq_handle.pa_base,
+ mtk_crtc->sec_cmdq_handle.cmd_buf_size,
+ DMA_TO_DEVICE);
+
+ mtk_crtc->sec_cmdq_working = true;
+ mbox_send_message(mtk_crtc->sec_cmdq_client.chan, &mtk_crtc->sec_cmdq_handle);
+ mbox_client_txdone(mtk_crtc->sec_cmdq_client.chan, 0);
+
+ // Wait for sec state to be disabled by cmdq
+ wait_event_timeout(mtk_crtc->sec_cb_blocking_queue,
+ !mtk_crtc->sec_cmdq_working,
+ msecs_to_jiffies(500));
+
+ mtk_crtc->sec_on = false;
+ pr_debug("crtc-%d disable secure plane!\n", drm_crtc_index(crtc));
+
+err:
+ mutex_unlock(&mtk_crtc->hw_lock);
+#endif
+}
+
+#if IS_REACHABLE(CONFIG_MTK_CMDQ)
+static void mtk_crtc_enable_secure_state(struct drm_crtc *crtc)
+{
+ enum cmdq_sec_scenario sec_scn = CMDQ_SEC_SCNR_MAX;
+ int i;
+ struct mtk_ddp_comp *ddp_first_comp;
+ struct mtk_crtc *mtk_crtc = to_mtk_crtc(crtc);
+ u64 sec_engine = 0; /* for hw engine write output secure fb */
+ u64 sec_port = 0; /* for larb port read input secure fb */
+
+ cmdq_pkt_wfe(&mtk_crtc->sec_cmdq_handle, mtk_crtc->cmdq_event, false);
+
+ ddp_first_comp = mtk_crtc->ddp_comp[0];
+ for (i = 0; i < mtk_crtc->layer_nr; i++)
+ if (mtk_crtc->planes[i].type == DRM_PLANE_TYPE_CURSOR)
+ sec_port |= mtk_ddp_comp_layer_get_sec_port(ddp_first_comp, i);
+
+ if (drm_crtc_index(crtc) == 0)
+ sec_scn = CMDQ_SEC_SCNR_PRIMARY_DISP;
+ else if (drm_crtc_index(crtc) == 1)
+ sec_scn = CMDQ_SEC_SCNR_SUB_DISP;
+
+ cmdq_sec_pkt_set_data(&mtk_crtc->sec_cmdq_handle, sec_engine, sec_port, sec_scn);
+
+ pr_debug("crtc-%d enable secure plane!\n", drm_crtc_index(crtc));
+}
+#endif
+
+static void mtk_crtc_plane_switch_sec_state(struct drm_crtc *crtc,
+ struct drm_atomic_state *state)
+{
+#if IS_REACHABLE(CONFIG_MTK_CMDQ)
+ bool sec_on[MAX_CRTC] = {0};
+ int i;
+ struct drm_crtc_state *crtc_state;
+ struct mtk_crtc *mtk_crtc = to_mtk_crtc(crtc);
+ struct drm_plane *plane;
+ struct drm_plane_state *old_plane_state;
+
+ for_each_old_plane_in_state(state, plane, old_plane_state, i) {
+ if (!plane->state->crtc)
+ continue;
+
+ if (plane->state->fb &&
+ mtk_plane_fb_is_secure(plane->state->fb) &&
+ mtk_crtc->sec_cmdq_client.chan)
+ sec_on[drm_crtc_index(plane->state->crtc)] = true;
+ }
+
+ for_each_old_crtc_in_state(state, crtc, crtc_state, i) {
+ mtk_crtc = to_mtk_crtc(crtc);
+
+ if (!sec_on[i])
+ mtk_crtc_disable_secure_state(crtc);
+
+ mutex_lock(&mtk_crtc->hw_lock);
+ mtk_crtc->sec_on = true;
+ mutex_unlock(&mtk_crtc->hw_lock);
+ }
+#endif
+}
+
#if IS_REACHABLE(CONFIG_MTK_CMDQ)
static int mtk_drm_cmdq_pkt_create(struct cmdq_client *client, struct cmdq_pkt *pkt,
size_t size)
@@ -152,22 +307,33 @@ static void mtk_drm_cmdq_pkt_destroy(struct cmdq_pkt *pkt)
dma_unmap_single(client->chan->mbox->dev, pkt->pa_base, pkt->buf_size,
DMA_TO_DEVICE);
kfree(pkt->va_base);
+ kfree(pkt->sec_data);
}
#endif

static void mtk_crtc_destroy(struct drm_crtc *crtc)
{
struct mtk_crtc *mtk_crtc = to_mtk_crtc(crtc);
+ struct mtk_drm_private *priv = crtc->dev->dev_private;
int i;

+ priv = priv->all_drm_private[drm_crtc_index(crtc)];
+
mtk_mutex_put(mtk_crtc->mutex);
#if IS_REACHABLE(CONFIG_MTK_CMDQ)
mtk_drm_cmdq_pkt_destroy(&mtk_crtc->cmdq_handle);
+ mtk_drm_cmdq_pkt_destroy(&mtk_crtc->sec_cmdq_handle);

if (mtk_crtc->cmdq_client.chan) {
mbox_free_channel(mtk_crtc->cmdq_client.chan);
mtk_crtc->cmdq_client.chan = NULL;
}
+
+ if (mtk_crtc->sec_cmdq_client.chan) {
+ device_link_remove(priv->dev, mtk_crtc->sec_cmdq_client.chan->mbox->dev);
+ mbox_free_channel(mtk_crtc->sec_cmdq_client.chan);
+ mtk_crtc->sec_cmdq_client.chan = NULL;
+ }
#endif

for (i = 0; i < mtk_crtc->ddp_comp_nr; i++) {
@@ -316,6 +482,11 @@ static void ddp_cmdq_cb(struct mbox_client *cl, void *mssg)
if (data->sta < 0)
return;

+ if (!data->pkt || !data->pkt->sec_data)
+ mtk_crtc = container_of(cmdq_cl, struct mtk_crtc, cmdq_client);
+ else
+ mtk_crtc = container_of(cmdq_cl, struct mtk_crtc, sec_cmdq_client);
+
state = to_mtk_crtc_state(mtk_crtc->base.state);

state->pending_config = false;
@@ -344,6 +515,11 @@ static void ddp_cmdq_cb(struct mbox_client *cl, void *mssg)
mtk_crtc->pending_async_planes = false;
}

+ if (mtk_crtc->sec_cmdq_working) {
+ mtk_crtc->sec_cmdq_working = false;
+ wake_up(&mtk_crtc->sec_cb_blocking_queue);
+ }
+
mtk_crtc->cmdq_vblank_cnt = 0;
wake_up(&mtk_crtc->cb_blocking_queue);
}
@@ -567,7 +743,8 @@ static void mtk_crtc_ddp_config(struct drm_crtc *crtc,
static void mtk_crtc_update_config(struct mtk_crtc *mtk_crtc, bool needs_vblank)
{
#if IS_REACHABLE(CONFIG_MTK_CMDQ)
- struct cmdq_pkt *cmdq_handle = &mtk_crtc->cmdq_handle;
+ struct cmdq_client cmdq_client;
+ struct cmdq_pkt *cmdq_handle;
#endif
struct drm_crtc *crtc = &mtk_crtc->base;
struct mtk_drm_private *priv = crtc->dev->dev_private;
@@ -605,14 +782,36 @@ static void mtk_crtc_update_config(struct mtk_crtc *mtk_crtc, bool needs_vblank)
mtk_mutex_release(mtk_crtc->mutex);
}
#if IS_REACHABLE(CONFIG_MTK_CMDQ)
- if (mtk_crtc->cmdq_client.chan) {
+ if (mtk_crtc->sec_on) {
+ mbox_flush(mtk_crtc->sec_cmdq_client.chan, 0);
+ mtk_crtc->sec_cmdq_handle.cmd_buf_size = 0;
+
+ if (mtk_crtc->sec_cmdq_handle.sec_data) {
+ struct cmdq_sec_data *sec_data;
+
+ sec_data = mtk_crtc->sec_cmdq_handle.sec_data;
+ sec_data->addr_metadata_cnt = 0;
+ sec_data->addr_metadatas = (uintptr_t)NULL;
+ }
+
+ mtk_crtc_enable_secure_state(crtc);
+
+ cmdq_client = mtk_crtc->sec_cmdq_client;
+ cmdq_handle = &mtk_crtc->sec_cmdq_handle;
+ } else if (mtk_crtc->cmdq_client.chan) {
mbox_flush(mtk_crtc->cmdq_client.chan, 2000);
- cmdq_handle->cmd_buf_size = 0;
+ mtk_crtc->cmdq_handle.cmd_buf_size = 0;
+
+ cmdq_client = mtk_crtc->cmdq_client;
+ cmdq_handle = &mtk_crtc->cmdq_handle;
+ }
+
+ if (cmdq_client.chan) {
cmdq_pkt_clear_event(cmdq_handle, mtk_crtc->cmdq_event);
cmdq_pkt_wfe(cmdq_handle, mtk_crtc->cmdq_event, false);
mtk_crtc_ddp_config(crtc, cmdq_handle);
cmdq_pkt_finalize(cmdq_handle);
- dma_sync_single_for_device(mtk_crtc->cmdq_client.chan->mbox->dev,
+ dma_sync_single_for_device(cmdq_client.chan->mbox->dev,
cmdq_handle->pa_base,
cmdq_handle->cmd_buf_size,
DMA_TO_DEVICE);
@@ -625,8 +824,8 @@ static void mtk_crtc_update_config(struct mtk_crtc *mtk_crtc, bool needs_vblank)
*/
mtk_crtc->cmdq_vblank_cnt = 3;

- mbox_send_message(mtk_crtc->cmdq_client.chan, cmdq_handle);
- mbox_client_txdone(mtk_crtc->cmdq_client.chan, 0);
+ mbox_send_message(cmdq_client.chan, cmdq_handle);
+ mbox_client_txdone(cmdq_client.chan, 0);
}
#endif
mtk_crtc->config_updating = false;
@@ -835,6 +1034,8 @@ static void mtk_crtc_atomic_disable(struct drm_crtc *crtc,
if (!mtk_crtc->enabled)
return;

+ mtk_crtc_disable_secure_state(crtc);
+
/* Set all pending plane state to disabled */
for (i = 0; i < mtk_crtc->layer_nr; i++) {
struct drm_plane *plane = &mtk_crtc->planes[i];
@@ -873,6 +1074,8 @@ static void mtk_crtc_atomic_begin(struct drm_crtc *crtc,
struct mtk_crtc *mtk_crtc = to_mtk_crtc(crtc);
unsigned long flags;

+ mtk_crtc_plane_switch_sec_state(crtc, state);
+
if (mtk_crtc->event && mtk_crtc_state->base.event)
DRM_ERROR("new event while there is still a pending event\n");

@@ -1169,8 +1372,7 @@ int mtk_crtc_create(struct drm_device *drm_dev, const unsigned int *path,
if (ret) {
dev_dbg(dev, "mtk_crtc %d failed to get mediatek,gce-events property\n",
drm_crtc_index(&mtk_crtc->base));
- mbox_free_channel(mtk_crtc->cmdq_client.chan);
- mtk_crtc->cmdq_client.chan = NULL;
+ goto cmdq_err;
} else {
ret = mtk_drm_cmdq_pkt_create(&mtk_crtc->cmdq_client,
&mtk_crtc->cmdq_handle,
@@ -1178,14 +1380,63 @@ int mtk_crtc_create(struct drm_device *drm_dev, const unsigned int *path,
if (ret) {
dev_dbg(dev, "mtk_crtc %d failed to create cmdq packet\n",
drm_crtc_index(&mtk_crtc->base));
- mbox_free_channel(mtk_crtc->cmdq_client.chan);
- mtk_crtc->cmdq_client.chan = NULL;
+ goto cmdq_err;
}
}

/* for sending blocking cmd in crtc disable */
init_waitqueue_head(&mtk_crtc->cb_blocking_queue);
}
+
+ mtk_crtc->sec_cmdq_client.client.dev = mtk_crtc->mmsys_dev;
+ mtk_crtc->sec_cmdq_client.client.tx_block = false;
+ mtk_crtc->sec_cmdq_client.client.knows_txdone = true;
+ mtk_crtc->sec_cmdq_client.client.rx_callback = ddp_cmdq_cb;
+ mtk_crtc->sec_cmdq_client.chan =
+ mbox_request_channel(&mtk_crtc->sec_cmdq_client.client, i + 1);
+ if (IS_ERR(mtk_crtc->sec_cmdq_client.chan)) {
+ dev_err(dev, "mtk_crtc %d failed to create sec mailbox client\n",
+ drm_crtc_index(&mtk_crtc->base));
+ mtk_crtc->sec_cmdq_client.chan = NULL;
+ }
+
+ if (mtk_crtc->sec_cmdq_client.chan) {
+ struct device_link *link;
+
+ /* add devlink to cmdq dev to make sure suspend/resume order is correct */
+ link = device_link_add(priv->dev, mtk_crtc->sec_cmdq_client.chan->mbox->dev,
+ DL_FLAG_PM_RUNTIME | DL_FLAG_STATELESS);
+ if (!link) {
+ dev_err(priv->dev, "Unable to link dev=%s\n",
+ dev_name(mtk_crtc->sec_cmdq_client.chan->mbox->dev));
+ ret = -ENODEV;
+ goto cmdq_err;
+ }
+
+ ret = mtk_drm_cmdq_pkt_create(&mtk_crtc->sec_cmdq_client,
+ &mtk_crtc->sec_cmdq_handle,
+ PAGE_SIZE);
+ if (ret) {
+ dev_dbg(dev, "mtk_crtc %d failed to create cmdq secure packet\n",
+ drm_crtc_index(&mtk_crtc->base));
+ goto cmdq_err;
+ }
+
+ /* for sending blocking cmd in crtc disable */
+ init_waitqueue_head(&mtk_crtc->sec_cb_blocking_queue);
+ }
+
+cmdq_err:
+ if (ret) {
+ if (mtk_crtc->cmdq_client.chan) {
+ mbox_free_channel(mtk_crtc->cmdq_client.chan);
+ mtk_crtc->cmdq_client.chan = NULL;
+ }
+ if (mtk_crtc->sec_cmdq_client.chan) {
+ mbox_free_channel(mtk_crtc->sec_cmdq_client.chan);
+ mtk_crtc->sec_cmdq_client.chan = NULL;
+ }
+ }
#endif

if (conn_routes) {
diff --git a/drivers/gpu/drm/mediatek/mtk_crtc.h b/drivers/gpu/drm/mediatek/mtk_crtc.h
index a79c4611754e4..340217d6acd3c 100644
--- a/drivers/gpu/drm/mediatek/mtk_crtc.h
+++ b/drivers/gpu/drm/mediatek/mtk_crtc.h
@@ -62,5 +62,6 @@ void mtk_crtc_create_crc_cmdq(struct device *dev, struct mtk_crtc_crc *crc);
void mtk_crtc_start_crc_cmdq(struct mtk_crtc_crc *crc);
void mtk_crtc_stop_crc_cmdq(struct mtk_crtc_crc *crc);
#endif
+void mtk_crtc_disable_secure_state(struct drm_crtc *crtc);

#endif /* MTK_CRTC_H */
diff --git a/drivers/gpu/drm/mediatek/mtk_plane.c b/drivers/gpu/drm/mediatek/mtk_plane.c
index 021148d4b5d4a..9762bba23273b 100644
--- a/drivers/gpu/drm/mediatek/mtk_plane.c
+++ b/drivers/gpu/drm/mediatek/mtk_plane.c
@@ -289,6 +289,13 @@ static void mtk_plane_atomic_disable(struct drm_plane *plane,
mtk_plane_state->pending.enable = false;
wmb(); /* Make sure the above parameter is set before update */
mtk_plane_state->pending.dirty = true;
+
+ if (mtk_plane_state->pending.is_secure) {
+ struct drm_plane_state *old_state = drm_atomic_get_old_plane_state(state, plane);
+
+ if (old_state->crtc)
+ mtk_crtc_disable_secure_state(old_state->crtc);
+ }
}

static void mtk_plane_atomic_update(struct drm_plane *plane,
--
2.18.0