2020-04-30 12:48:01

by Frieder Schrempf

Subject: [RFC PATCH 0/4] Add support for i.MX8MM GPUs through Etnaviv

From: Frieder Schrempf <[email protected]>

This series contains patches to enable GPU support for the i.MX8MM.
There is currently no upstream support for the display subsystem of
the i.MX8MM, but I have a 5.4-based tree with some ported drivers for
LCDIF, DSIM bridge, etc. (see [1]) which I used to test the GPU with
glmark2.

I'm posting this as an RFC for now, as I'm not yet confident about
all of the changes. Especially patch 1 seems a bit like a hack. Maybe
someone can help me understand the underlying problem and/or come up
with a better fix.

[1] https://git.kontron-electronics.de/linux/linux/-/commits/v5.4-ktn

Frieder Schrempf (4):
drm/etnaviv: Prevent IRQ triggering at probe time on i.MX8MM
drm/etnaviv: Fix error path in etnaviv_gpu_clk_enable()
drm/etnaviv: Change order of enabling clocks to fix boot on i.MX8MM
arm64: dts: imx8mm: Add GPU nodes for 2D and 3D core using Etnaviv

arch/arm64/boot/dts/freescale/imx8mm.dtsi | 36 ++++++++++++
drivers/gpu/drm/etnaviv/etnaviv_gpu.c | 68 ++++++++++++++---------
2 files changed, 79 insertions(+), 25 deletions(-)

--
2.17.1


2020-04-30 12:48:22

by Frieder Schrempf

Subject: [RFC PATCH 1/4] drm/etnaviv: Prevent IRQ triggering at probe time on i.MX8MM

From: Frieder Schrempf <[email protected]>

On i.MX8MM there is an interrupt getting triggered immediately after
requesting the IRQ, which leads to a stall as the handler accesses
the GPU registers without the clock being enabled.

Enabling the clocks briefly seems to clear the IRQ state, so we do
this before requesting the IRQ.

Signed-off-by: Frieder Schrempf <[email protected]>
---
drivers/gpu/drm/etnaviv/etnaviv_gpu.c | 29 ++++++++++++++++++++-------
1 file changed, 22 insertions(+), 7 deletions(-)

diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gpu.c b/drivers/gpu/drm/etnaviv/etnaviv_gpu.c
index a31eeff2b297..23877c1f150a 100644
--- a/drivers/gpu/drm/etnaviv/etnaviv_gpu.c
+++ b/drivers/gpu/drm/etnaviv/etnaviv_gpu.c
@@ -1775,13 +1775,6 @@ static int etnaviv_gpu_platform_probe(struct platform_device *pdev)
return gpu->irq;
}

- err = devm_request_irq(&pdev->dev, gpu->irq, irq_handler, 0,
- dev_name(gpu->dev), gpu);
- if (err) {
- dev_err(dev, "failed to request IRQ%u: %d\n", gpu->irq, err);
- return err;
- }
-
/* Get Clocks: */
gpu->clk_reg = devm_clk_get(&pdev->dev, "reg");
DBG("clk_reg: %p", gpu->clk_reg);
@@ -1805,6 +1798,28 @@ static int etnaviv_gpu_platform_probe(struct platform_device *pdev)
gpu->clk_shader = NULL;
gpu->base_rate_shader = clk_get_rate(gpu->clk_shader);

+ /*
+ * On i.MX8MM there is an interrupt getting triggered immediately
+ * after requesting the IRQ, which leads to a stall as the handler
+ * accesses the GPU registers without the clock being enabled.
+ * Enabling the clocks briefly seems to clear the IRQ state, so we do
+ * this here before requesting the IRQ.
+ */
+ err = etnaviv_gpu_clk_enable(gpu);
+ if (err)
+ return err;
+
+ err = etnaviv_gpu_clk_disable(gpu);
+ if (err)
+ return err;
+
+ err = devm_request_irq(&pdev->dev, gpu->irq, irq_handler, 0,
+ dev_name(gpu->dev), gpu);
+ if (err) {
+ dev_err(dev, "failed to request IRQ%u: %d\n", gpu->irq, err);
+ return err;
+ }
+
/* TODO: figure out max mapped size */
dev_set_drvdata(dev, gpu);

--
2.17.1

2020-04-30 12:49:05

by Frieder Schrempf

Subject: [RFC PATCH 2/4] drm/etnaviv: Fix error path in etnaviv_gpu_clk_enable()

From: Frieder Schrempf <[email protected]>

In case enabling of the bus clock fails, etnaviv_gpu_clk_enable()
returns without disabling the already enabled reg clock. Fix this.

Fixes: 65f037e8e908 ("drm/etnaviv: add support for slave interface clock")
Signed-off-by: Frieder Schrempf <[email protected]>
---
drivers/gpu/drm/etnaviv/etnaviv_gpu.c | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gpu.c b/drivers/gpu/drm/etnaviv/etnaviv_gpu.c
index 23877c1f150a..7b138d4dd068 100644
--- a/drivers/gpu/drm/etnaviv/etnaviv_gpu.c
+++ b/drivers/gpu/drm/etnaviv/etnaviv_gpu.c
@@ -1496,7 +1496,7 @@ static int etnaviv_gpu_clk_enable(struct etnaviv_gpu *gpu)
if (gpu->clk_bus) {
ret = clk_prepare_enable(gpu->clk_bus);
if (ret)
- return ret;
+ goto disable_clk_reg;
}

if (gpu->clk_core) {
@@ -1519,6 +1519,9 @@ static int etnaviv_gpu_clk_enable(struct etnaviv_gpu *gpu)
disable_clk_bus:
if (gpu->clk_bus)
clk_disable_unprepare(gpu->clk_bus);
+disable_clk_reg:
+ if (gpu->clk_reg)
+ clk_disable_unprepare(gpu->clk_reg);

return ret;
}
--
2.17.1

2020-04-30 12:50:43

by Frieder Schrempf

Subject: [RFC PATCH 3/4] drm/etnaviv: Change order of enabling clocks to fix boot on i.MX8MM

From: Frieder Schrempf <[email protected]>

On some i.MX8MM devices the boot hangs when enabling the GPU clocks.
Changing the order of clock initialization to

core -> shader -> bus -> reg

fixes the issue. This is the same order used in the imx platform code
of the downstream GPU driver in the NXP kernel [1]. For the sake of
consistency we also adjust the order of disabling the clocks to the
reverse.

[1] https://source.codeaurora.org/external/imx/linux-imx/tree/drivers/mxc/gpu-viv/hal/os/linux/kernel/platform/freescale/gc_hal_kernel_platform_imx.c?h=imx_5.4.3_2.0.0#n1538

Signed-off-by: Frieder Schrempf <[email protected]>
---
drivers/gpu/drm/etnaviv/etnaviv_gpu.c | 42 +++++++++++++--------------
1 file changed, 21 insertions(+), 21 deletions(-)

diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gpu.c b/drivers/gpu/drm/etnaviv/etnaviv_gpu.c
index 7b138d4dd068..424b2e5951f0 100644
--- a/drivers/gpu/drm/etnaviv/etnaviv_gpu.c
+++ b/drivers/gpu/drm/etnaviv/etnaviv_gpu.c
@@ -1487,55 +1487,55 @@ static int etnaviv_gpu_clk_enable(struct etnaviv_gpu *gpu)
{
int ret;

- if (gpu->clk_reg) {
- ret = clk_prepare_enable(gpu->clk_reg);
+ if (gpu->clk_core) {
+ ret = clk_prepare_enable(gpu->clk_core);
if (ret)
return ret;
}

- if (gpu->clk_bus) {
- ret = clk_prepare_enable(gpu->clk_bus);
+ if (gpu->clk_shader) {
+ ret = clk_prepare_enable(gpu->clk_shader);
if (ret)
- goto disable_clk_reg;
+ goto disable_clk_core;
}

- if (gpu->clk_core) {
- ret = clk_prepare_enable(gpu->clk_core);
+ if (gpu->clk_bus) {
+ ret = clk_prepare_enable(gpu->clk_bus);
if (ret)
- goto disable_clk_bus;
+ goto disable_clk_shader;
}

- if (gpu->clk_shader) {
- ret = clk_prepare_enable(gpu->clk_shader);
+ if (gpu->clk_reg) {
+ ret = clk_prepare_enable(gpu->clk_reg);
if (ret)
- goto disable_clk_core;
+ goto disable_clk_bus;
}

return 0;

-disable_clk_core:
- if (gpu->clk_core)
- clk_disable_unprepare(gpu->clk_core);
disable_clk_bus:
if (gpu->clk_bus)
clk_disable_unprepare(gpu->clk_bus);
-disable_clk_reg:
- if (gpu->clk_reg)
- clk_disable_unprepare(gpu->clk_reg);
+disable_clk_shader:
+ if (gpu->clk_shader)
+ clk_disable_unprepare(gpu->clk_shader);
+disable_clk_core:
+ if (gpu->clk_core)
+ clk_disable_unprepare(gpu->clk_core);

return ret;
}

static int etnaviv_gpu_clk_disable(struct etnaviv_gpu *gpu)
{
+ if (gpu->clk_reg)
+ clk_disable_unprepare(gpu->clk_reg);
+ if (gpu->clk_bus)
+ clk_disable_unprepare(gpu->clk_bus);
if (gpu->clk_shader)
clk_disable_unprepare(gpu->clk_shader);
if (gpu->clk_core)
clk_disable_unprepare(gpu->clk_core);
- if (gpu->clk_bus)
- clk_disable_unprepare(gpu->clk_bus);
- if (gpu->clk_reg)
- clk_disable_unprepare(gpu->clk_reg);

return 0;
}
--
2.17.1

2020-04-30 12:51:25

by Frieder Schrempf

Subject: [RFC PATCH 4/4] arm64: dts: imx8mm: Add GPU nodes for 2D and 3D core using Etnaviv

From: Frieder Schrempf <[email protected]>

According to the documentation, the i.MX8M-Mini features a GC320 and a
GCNanoUltra GPU core. Etnaviv detects them as:

etnaviv-gpu 38000000.gpu: model: GC600, revision: 4653
etnaviv-gpu 38008000.gpu: model: GC520, revision: 5341

This seems to work more or less fine without any changes to the HWDB,
though entries might still be needed in the future to correct some
features, etc.

Signed-off-by: Frieder Schrempf <[email protected]>
---
arch/arm64/boot/dts/freescale/imx8mm.dtsi | 36 +++++++++++++++++++++++
1 file changed, 36 insertions(+)

diff --git a/arch/arm64/boot/dts/freescale/imx8mm.dtsi b/arch/arm64/boot/dts/freescale/imx8mm.dtsi
index cc7152ecedd9..1dd0a6e849d3 100644
--- a/arch/arm64/boot/dts/freescale/imx8mm.dtsi
+++ b/arch/arm64/boot/dts/freescale/imx8mm.dtsi
@@ -937,6 +937,42 @@
status = "disabled";
};

+ gpu_3d: gpu@38000000 {
+ compatible = "vivante,gc";
+ reg = <0x38000000 0x8000>;
+ interrupts = <GIC_SPI 3 IRQ_TYPE_LEVEL_HIGH>;
+ clocks = <&clk IMX8MM_CLK_GPU_AHB>,
+ <&clk IMX8MM_CLK_GPU_BUS_ROOT>,
+ <&clk IMX8MM_CLK_GPU3D_ROOT>;
+ clock-names = "reg", "bus", "core";
+ assigned-clocks = <&clk IMX8MM_CLK_GPU3D_SRC>,
+ <&clk IMX8MM_CLK_GPU_AXI>,
+ <&clk IMX8MM_CLK_GPU_AHB>,
+ <&clk IMX8MM_GPU_PLL_OUT>;
+ assigned-clock-parents = <&clk IMX8MM_GPU_PLL_OUT>,
+ <&clk IMX8MM_SYS_PLL1_800M>,
+ <&clk IMX8MM_SYS_PLL1_800M>;
+ assigned-clock-rates = <0>, <0>, <400000000>, <1000000000>;
+ };
+
+ gpu_2d: gpu@38008000 {
+ compatible = "vivante,gc";
+ reg = <0x38008000 0x8000>;
+ interrupts = <GIC_SPI 25 IRQ_TYPE_LEVEL_HIGH>;
+ clocks = <&clk IMX8MM_CLK_GPU_AHB>,
+ <&clk IMX8MM_CLK_GPU_BUS_ROOT>,
+ <&clk IMX8MM_CLK_GPU2D_ROOT>;
+ clock-names = "reg", "bus", "core";
+ assigned-clocks = <&clk IMX8MM_CLK_GPU2D_SRC>,
+ <&clk IMX8MM_CLK_GPU_AXI>,
+ <&clk IMX8MM_CLK_GPU_AHB>,
+ <&clk IMX8MM_GPU_PLL_OUT>;
+ assigned-clock-parents = <&clk IMX8MM_GPU_PLL_OUT>,
+ <&clk IMX8MM_SYS_PLL1_800M>,
+ <&clk IMX8MM_SYS_PLL1_800M>;
+ assigned-clock-rates = <0>, <0>, <400000000>, <1000000000>;
+ };
+
gic: interrupt-controller@38800000 {
compatible = "arm,gic-v3";
reg = <0x38800000 0x10000>, /* GIC Dist */
--
2.17.1

2020-04-30 14:26:02

by Daniel Baluta

Subject: Re: [RFC PATCH 1/4] drm/etnaviv: Prevent IRQ triggering at probe time on i.MX8MM

On 4/30/20 3:46 PM, Schrempf Frieder wrote:
>
> + /*
> + * On i.MX8MM there is an interrupt getting triggered immediately
> + * after requesting the IRQ, which leads to a stall as the handler
> + * accesses the GPU registers without the clock being enabled.
> + * Enabling the clocks briefly seems to clear the IRQ state, so we do
> + * this here before requesting the IRQ.
> + */
> + err = etnaviv_gpu_clk_enable(gpu);
> + if (err)
> + return err;
> +
> + err = etnaviv_gpu_clk_disable(gpu);
> + if (err)
> + return err;
> +
> + err = devm_request_irq(&pdev->dev, gpu->irq, irq_handler, 0,
> + dev_name(gpu->dev), gpu);
> + if (err) {
> + dev_err(dev, "failed to request IRQ%u: %d\n", gpu->irq, err);
> + return err;
> + }

Shouldn't you disable the clk after devm_request_irq is called?


2020-04-30 14:37:20

by Lucas Stach

Subject: Re: [RFC PATCH 1/4] drm/etnaviv: Prevent IRQ triggering at probe time on i.MX8MM

Hi Frieder,

Am Donnerstag, den 30.04.2020, 12:46 +0000 schrieb Schrempf Frieder:
> From: Frieder Schrempf <[email protected]>
>
> On i.MX8MM there is an interrupt getting triggered immediately after
> requesting the IRQ, which leads to a stall as the handler accesses
> the GPU registers without the clock being enabled.
>
> Enabling the clocks briefly seems to clear the IRQ state, so we do
> this before requesting the IRQ.

This is most likely caused by improper power-up sequencing. Normally
the GPC will trigger a hardware reset of the modules inside a power
domain when the domain is powered on. This requires the clocks to be
running at this point, as those resets are synchronous, so need clock
pulses to propagate through the hardware.

From what I see the i.MX8MM is still missing the power domain
controller integration, but I'm pretty confident that this problem
should be solved in the power domain code, instead of the GPU driver.
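The sequencing described above can be sketched as a toy model (hypothetical code, not the actual GPC driver; the struct and function names are made up for illustration): the module reset issued at domain power-on is synchronous, so it only completes while the clocks are running.

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Toy model of the power-up sequencing: hypothetical, not actual
 * GPC driver code. The synchronous reset at domain power-on needs
 * clock pulses to propagate through the hardware.
 */
struct power_domain {
	bool clocks_on;
	bool powered;
	bool reset_done;
};

/* Synchronous reset: only completes while the clocks are running. */
static bool domain_reset(struct power_domain *pd)
{
	if (!pd->clocks_on)
		return false;	/* reset never finishes: the stall seen on i.MX8MM */
	pd->reset_done = true;
	return true;
}

static bool domain_power_on(struct power_domain *pd)
{
	pd->clocks_on = true;		/* 1. ungate the clocks first */
	pd->powered = true;		/* 2. switch the domain on */
	return domain_reset(pd);	/* 3. reset propagates with clocks running */
}
```

In this model, powering a domain on without the clocks running leaves the reset pending, which is the symptom the GPU driver workaround papers over.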

Regards,
Lucas

> Signed-off-by: Frieder Schrempf <[email protected]>
> ---
> drivers/gpu/drm/etnaviv/etnaviv_gpu.c | 29 ++++++++++++++++++++-----
> --
> 1 file changed, 22 insertions(+), 7 deletions(-)
>
> diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gpu.c
> b/drivers/gpu/drm/etnaviv/etnaviv_gpu.c
> index a31eeff2b297..23877c1f150a 100644
> --- a/drivers/gpu/drm/etnaviv/etnaviv_gpu.c
> +++ b/drivers/gpu/drm/etnaviv/etnaviv_gpu.c
> @@ -1775,13 +1775,6 @@ static int etnaviv_gpu_platform_probe(struct
> platform_device *pdev)
> return gpu->irq;
> }
>
> - err = devm_request_irq(&pdev->dev, gpu->irq, irq_handler, 0,
> - dev_name(gpu->dev), gpu);
> - if (err) {
> - dev_err(dev, "failed to request IRQ%u: %d\n", gpu->irq,
> err);
> - return err;
> - }
> -
> /* Get Clocks: */
> gpu->clk_reg = devm_clk_get(&pdev->dev, "reg");
> DBG("clk_reg: %p", gpu->clk_reg);
> @@ -1805,6 +1798,28 @@ static int etnaviv_gpu_platform_probe(struct
> platform_device *pdev)
> gpu->clk_shader = NULL;
> gpu->base_rate_shader = clk_get_rate(gpu->clk_shader);
>
> + /*
> + * On i.MX8MM there is an interrupt getting triggered
> immediately
> + * after requesting the IRQ, which leads to a stall as the
> handler
> + * accesses the GPU registers without the clock being enabled.
> + * Enabling the clocks briefly seems to clear the IRQ state, so
> we do
> + * this here before requesting the IRQ.
> + */
> + err = etnaviv_gpu_clk_enable(gpu);
> + if (err)
> + return err;
> +
> + err = etnaviv_gpu_clk_disable(gpu);
> + if (err)
> + return err;
> +
> + err = devm_request_irq(&pdev->dev, gpu->irq, irq_handler, 0,
> + dev_name(gpu->dev), gpu);
> + if (err) {
> + dev_err(dev, "failed to request IRQ%u: %d\n", gpu->irq,
> err);
> + return err;
> + }
> +
> /* TODO: figure out max mapped size */
> dev_set_drvdata(dev, gpu);
>

2020-04-30 14:39:56

by Lucas Stach

Subject: Re: [RFC PATCH 3/4] drm/etnaviv: Change order of enabling clocks to fix boot on i.MX8MM

Am Donnerstag, den 30.04.2020, 12:46 +0000 schrieb Schrempf Frieder:
> From: Frieder Schrempf <[email protected]>
>
> On some i.MX8MM devices the boot hangs when enabling the GPU clocks.
> Changing the order of clock initialization to
>
> core -> shader -> bus -> reg
>
> fixes the issue. This is the same order used in the imx platform code
> of the downstream GPU driver in the NXP kernel [1]. For the sake of
> consistency we also adjust the order of disabling the clocks to the
> reverse.
>
> [1] https://source.codeaurora.org/external/imx/linux-imx/tree/drivers/mxc/gpu-viv/hal/os/linux/kernel/platform/freescale/gc_hal_kernel_platform_imx.c?h=imx_5.4.3_2.0.0#n1538

I don't see why the order of the clocks is important. Is this really a
GPU issue? As in: does a GPU access hang when enabling the clocks in
the wrong order? Or is this a clock driver issue with a clock access
hanging due to an upstream clock still being disabled?

Regards,
Lucas

> Signed-off-by: Frieder Schrempf <[email protected]>
> ---
> drivers/gpu/drm/etnaviv/etnaviv_gpu.c | 42 +++++++++++++--------------
> 1 file changed, 21 insertions(+), 21 deletions(-)
>
> diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gpu.c b/drivers/gpu/drm/etnaviv/etnaviv_gpu.c
> index 7b138d4dd068..424b2e5951f0 100644
> --- a/drivers/gpu/drm/etnaviv/etnaviv_gpu.c
> +++ b/drivers/gpu/drm/etnaviv/etnaviv_gpu.c
> @@ -1487,55 +1487,55 @@ static int etnaviv_gpu_clk_enable(struct etnaviv_gpu *gpu)
> {
> int ret;
>
> - if (gpu->clk_reg) {
> - ret = clk_prepare_enable(gpu->clk_reg);
> + if (gpu->clk_core) {
> + ret = clk_prepare_enable(gpu->clk_core);
> if (ret)
> return ret;
> }
>
> - if (gpu->clk_bus) {
> - ret = clk_prepare_enable(gpu->clk_bus);
> + if (gpu->clk_shader) {
> + ret = clk_prepare_enable(gpu->clk_shader);
> if (ret)
> - goto disable_clk_reg;
> + goto disable_clk_core;
> }
>
> - if (gpu->clk_core) {
> - ret = clk_prepare_enable(gpu->clk_core);
> + if (gpu->clk_bus) {
> + ret = clk_prepare_enable(gpu->clk_bus);
> if (ret)
> - goto disable_clk_bus;
> + goto disable_clk_shader;
> }
>
> - if (gpu->clk_shader) {
> - ret = clk_prepare_enable(gpu->clk_shader);
> + if (gpu->clk_reg) {
> + ret = clk_prepare_enable(gpu->clk_reg);
> if (ret)
> - goto disable_clk_core;
> + goto disable_clk_bus;
> }
>
> return 0;
>
> -disable_clk_core:
> - if (gpu->clk_core)
> - clk_disable_unprepare(gpu->clk_core);
> disable_clk_bus:
> if (gpu->clk_bus)
> clk_disable_unprepare(gpu->clk_bus);
> -disable_clk_reg:
> - if (gpu->clk_reg)
> - clk_disable_unprepare(gpu->clk_reg);
> +disable_clk_shader:
> + if (gpu->clk_shader)
> + clk_disable_unprepare(gpu->clk_shader);
> +disable_clk_core:
> + if (gpu->clk_core)
> + clk_disable_unprepare(gpu->clk_core);
>
> return ret;
> }
>
> static int etnaviv_gpu_clk_disable(struct etnaviv_gpu *gpu)
> {
> + if (gpu->clk_reg)
> + clk_disable_unprepare(gpu->clk_reg);
> + if (gpu->clk_bus)
> + clk_disable_unprepare(gpu->clk_bus);
> if (gpu->clk_shader)
> clk_disable_unprepare(gpu->clk_shader);
> if (gpu->clk_core)
> clk_disable_unprepare(gpu->clk_core);
> - if (gpu->clk_bus)
> - clk_disable_unprepare(gpu->clk_bus);
> - if (gpu->clk_reg)
> - clk_disable_unprepare(gpu->clk_reg);
>
> return 0;
> }

2020-04-30 15:33:09

by Frieder Schrempf

Subject: Re: [RFC PATCH 1/4] drm/etnaviv: Prevent IRQ triggering at probe time on i.MX8MM

On 30.04.20 16:23, Daniel Baluta wrote:
> On 4/30/20 3:46 PM, Schrempf Frieder wrote:
>> +    /*
>> +     * On i.MX8MM there is an interrupt getting triggered immediately
>> +     * after requesting the IRQ, which leads to a stall as the handler
>> +     * accesses the GPU registers without the clock being enabled.
>> +     * Enabling the clocks briefly seems to clear the IRQ state, so
>> we do
>> +     * this here before requesting the IRQ.
>> +     */
>> +    err = etnaviv_gpu_clk_enable(gpu);
>> +    if (err)
>> +        return err;
>> +
>> +    err = etnaviv_gpu_clk_disable(gpu);
>> +    if (err)
>> +        return err;
>> +
>> +    err = devm_request_irq(&pdev->dev, gpu->irq, irq_handler, 0,
>> +                   dev_name(gpu->dev), gpu);
>> +    if (err) {
>> +        dev_err(dev, "failed to request IRQ%u: %d\n", gpu->irq, err);
>> +        return err;
>> +    }
>
> Shouldn't you disable the clk after devm_request_irq is called?

That's what I first thought, too. But then I found out that merely
enabling the clocks will clear the spurious interrupt and cause the
handler not to be called during probe anymore.

2020-04-30 15:34:04

by Frieder Schrempf

Subject: Re: [RFC PATCH 1/4] drm/etnaviv: Prevent IRQ triggering at probe time on i.MX8MM

Hi Lucas,

On 30.04.20 16:32, Lucas Stach wrote:
> Hi Frieder,
>
> Am Donnerstag, den 30.04.2020, 12:46 +0000 schrieb Schrempf Frieder:
>> From: Frieder Schrempf <[email protected]>
>>
>> On i.MX8MM there is an interrupt getting triggered immediately after
>> requesting the IRQ, which leads to a stall as the handler accesses
>> the GPU registers without the clock being enabled.
>>
>> Enabling the clocks briefly seems to clear the IRQ state, so we do
>> this before requesting the IRQ.
>
> This is most likely caused by improper power-up sequencing. Normally
> the GPC will trigger a hardware reset of the modules inside a power
> domain when the domain is powered on. This requires the clocks to be
> running at this point, as those resets are synchronous, so need clock
> pulses to propagate through the hardware.

Ok, I was suspecting something like that and your explanation makes
total sense to me.

>
> From what I see the i.MX8MM is still missing the power domain
> controller integration, but I'm pretty confident that this problem
> should be solved in the power domain code, instead of the GPU driver.

Ok. I was hoping that GPU support could be added without power domain
control, but I now see that this is probably not reasonable at all.
So I will keep on hoping that NXP comes up with an upstreamable solution
for the power domain handling.

Thanks,
Frieder

>
> Regards,
> Lucas
>
>> Signed-off-by: Frieder Schrempf <[email protected]>
>> ---
>> drivers/gpu/drm/etnaviv/etnaviv_gpu.c | 29 ++++++++++++++++++++-----
>> --
>> 1 file changed, 22 insertions(+), 7 deletions(-)
>>
>> diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gpu.c
>> b/drivers/gpu/drm/etnaviv/etnaviv_gpu.c
>> index a31eeff2b297..23877c1f150a 100644
>> --- a/drivers/gpu/drm/etnaviv/etnaviv_gpu.c
>> +++ b/drivers/gpu/drm/etnaviv/etnaviv_gpu.c
>> @@ -1775,13 +1775,6 @@ static int etnaviv_gpu_platform_probe(struct
>> platform_device *pdev)
>> return gpu->irq;
>> }
>>
>> - err = devm_request_irq(&pdev->dev, gpu->irq, irq_handler, 0,
>> - dev_name(gpu->dev), gpu);
>> - if (err) {
>> - dev_err(dev, "failed to request IRQ%u: %d\n", gpu->irq,
>> err);
>> - return err;
>> - }
>> -
>> /* Get Clocks: */
>> gpu->clk_reg = devm_clk_get(&pdev->dev, "reg");
>> DBG("clk_reg: %p", gpu->clk_reg);
>> @@ -1805,6 +1798,28 @@ static int etnaviv_gpu_platform_probe(struct
>> platform_device *pdev)
>> gpu->clk_shader = NULL;
>> gpu->base_rate_shader = clk_get_rate(gpu->clk_shader);
>>
>> + /*
>> + * On i.MX8MM there is an interrupt getting triggered
>> immediately
>> + * after requesting the IRQ, which leads to a stall as the
>> handler
>> + * accesses the GPU registers without the clock being enabled.
>> + * Enabling the clocks briefly seems to clear the IRQ state, so
>> we do
>> + * this here before requesting the IRQ.
>> + */
>> + err = etnaviv_gpu_clk_enable(gpu);
>> + if (err)
>> + return err;
>> +
>> + err = etnaviv_gpu_clk_disable(gpu);
>> + if (err)
>> + return err;
>> +
>> + err = devm_request_irq(&pdev->dev, gpu->irq, irq_handler, 0,
>> + dev_name(gpu->dev), gpu);
>> + if (err) {
>> + dev_err(dev, "failed to request IRQ%u: %d\n", gpu->irq,
>> err);
>> + return err;
>> + }
>> +
>> /* TODO: figure out max mapped size */
>> dev_set_drvdata(dev, gpu);
>>
>

2020-04-30 15:40:46

by Frieder Schrempf

Subject: Re: [RFC PATCH 3/4] drm/etnaviv: Change order of enabling clocks to fix boot on i.MX8MM

On 30.04.20 16:35, Lucas Stach wrote:
> Am Donnerstag, den 30.04.2020, 12:46 +0000 schrieb Schrempf Frieder:
>> From: Frieder Schrempf <[email protected]>
>>
>> On some i.MX8MM devices the boot hangs when enabling the GPU clocks.
>> Changing the order of clock initialization to
>>
>> core -> shader -> bus -> reg
>>
>> fixes the issue. This is the same order used in the imx platform code
>> of the downstream GPU driver in the NXP kernel [1]. For the sake of
>> consistency we also adjust the order of disabling the clocks to the
>> reverse.
>>
>> [1] https://source.codeaurora.org/external/imx/linux-imx/tree/drivers/mxc/gpu-viv/hal/os/linux/kernel/platform/freescale/gc_hal_kernel_platform_imx.c?h=imx_5.4.3_2.0.0#n1538
>
> I don't see why the order of the clocks is important. Is this really a
> GPU issue? As in: does a GPU access hang when enabling the clocks in
> the wrong order? Or is this a clock driver issue with a clock access
> hanging due to an upstream clock still being disabled?

Actually you might be right about this being a clock driver issue. The
hang happens while enabling the clocks (unrelated to any GPU register
access). The strange thing is that most of the devices we have don't
care and work as-is, while some devices reliably fail each time when
enabling the clocks in the "wrong" order.

So I guess this could indeed be some clock being enabled with an
upstream PLL not having locked yet or something.
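The suspected failure mode can be illustrated with a small model (hypothetical code, not the real imx clk driver; the types and function names are invented for this sketch): a downstream gate must not be enabled before the upstream PLL has locked.

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Toy model of the suspected failure mode: hypothetical, not the
 * actual imx clk driver. Enabling a downstream gate while the
 * upstream PLL has not locked yet models the boot hang.
 */
struct pll {
	bool enabled;
	bool locked;
};

struct clk_gate {
	struct pll *parent;
	bool enabled;
};

static bool pll_enable_and_wait_lock(struct pll *p)
{
	p->enabled = true;
	p->locked = true;	/* a real driver would poll a lock bit with a timeout */
	return p->locked;
}

static bool gate_enable(struct clk_gate *g)
{
	if (!g->parent->locked)
		return false;	/* models the hang when the order is "wrong" */
	g->enabled = true;
	return true;
}
```

Under this model, the enable order matters only because it determines whether the PLL has locked by the time the gate downstream of it is touched.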

>
> Regards,
> Lucas
>
>> Signed-off-by: Frieder Schrempf <[email protected]>
>> ---
>> drivers/gpu/drm/etnaviv/etnaviv_gpu.c | 42 +++++++++++++--------------
>> 1 file changed, 21 insertions(+), 21 deletions(-)
>>
>> diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gpu.c b/drivers/gpu/drm/etnaviv/etnaviv_gpu.c
>> index 7b138d4dd068..424b2e5951f0 100644
>> --- a/drivers/gpu/drm/etnaviv/etnaviv_gpu.c
>> +++ b/drivers/gpu/drm/etnaviv/etnaviv_gpu.c
>> @@ -1487,55 +1487,55 @@ static int etnaviv_gpu_clk_enable(struct etnaviv_gpu *gpu)
>> {
>> int ret;
>>
>> - if (gpu->clk_reg) {
>> - ret = clk_prepare_enable(gpu->clk_reg);
>> + if (gpu->clk_core) {
>> + ret = clk_prepare_enable(gpu->clk_core);
>> if (ret)
>> return ret;
>> }
>>
>> - if (gpu->clk_bus) {
>> - ret = clk_prepare_enable(gpu->clk_bus);
>> + if (gpu->clk_shader) {
>> + ret = clk_prepare_enable(gpu->clk_shader);
>> if (ret)
>> - goto disable_clk_reg;
>> + goto disable_clk_core;
>> }
>>
>> - if (gpu->clk_core) {
>> - ret = clk_prepare_enable(gpu->clk_core);
>> + if (gpu->clk_bus) {
>> + ret = clk_prepare_enable(gpu->clk_bus);
>> if (ret)
>> - goto disable_clk_bus;
>> + goto disable_clk_shader;
>> }
>>
>> - if (gpu->clk_shader) {
>> - ret = clk_prepare_enable(gpu->clk_shader);
>> + if (gpu->clk_reg) {
>> + ret = clk_prepare_enable(gpu->clk_reg);
>> if (ret)
>> - goto disable_clk_core;
>> + goto disable_clk_bus;
>> }
>>
>> return 0;
>>
>> -disable_clk_core:
>> - if (gpu->clk_core)
>> - clk_disable_unprepare(gpu->clk_core);
>> disable_clk_bus:
>> if (gpu->clk_bus)
>> clk_disable_unprepare(gpu->clk_bus);
>> -disable_clk_reg:
>> - if (gpu->clk_reg)
>> - clk_disable_unprepare(gpu->clk_reg);
>> +disable_clk_shader:
>> + if (gpu->clk_shader)
>> + clk_disable_unprepare(gpu->clk_shader);
>> +disable_clk_core:
>> + if (gpu->clk_core)
>> + clk_disable_unprepare(gpu->clk_core);
>>
>> return ret;
>> }
>>
>> static int etnaviv_gpu_clk_disable(struct etnaviv_gpu *gpu)
>> {
>> + if (gpu->clk_reg)
>> + clk_disable_unprepare(gpu->clk_reg);
>> + if (gpu->clk_bus)
>> + clk_disable_unprepare(gpu->clk_bus);
>> if (gpu->clk_shader)
>> clk_disable_unprepare(gpu->clk_shader);
>> if (gpu->clk_core)
>> clk_disable_unprepare(gpu->clk_core);
>> - if (gpu->clk_bus)
>> - clk_disable_unprepare(gpu->clk_bus);
>> - if (gpu->clk_reg)
>> - clk_disable_unprepare(gpu->clk_reg);
>>
>> return 0;
>> }
>

2020-04-30 16:18:22

by Adam Ford

Subject: Re: [RFC PATCH 1/4] drm/etnaviv: Prevent IRQ triggering at probe time on i.MX8MM

On Thu, Apr 30, 2020 at 10:31 AM Schrempf Frieder
<[email protected]> wrote:
>
> Hi Lucas,
>
> On 30.04.20 16:32, Lucas Stach wrote:
> > Hi Frieder,
> >
> > Am Donnerstag, den 30.04.2020, 12:46 +0000 schrieb Schrempf Frieder:
> >> From: Frieder Schrempf <[email protected]>
> >>
> >> On i.MX8MM there is an interrupt getting triggered immediately after
> >> requesting the IRQ, which leads to a stall as the handler accesses
> >> the GPU registers without the clock being enabled.
> >>
> >> Enabling the clocks briefly seems to clear the IRQ state, so we do
> >> this before requesting the IRQ.
> >
> > This is most likely caused by improper power-up sequencing. Normally
> > the GPC will trigger a hardware reset of the modules inside a power
> > domain when the domain is powered on. This requires the clocks to be
> > running at this point, as those resets are synchronous, so need clock
> > pulses to propagate through the hardware.
>
> Ok, I was suspecting something like that and your explanation makes
> total sense to me.
>
> >
> > From what I see the i.MX8MM is still missing the power domain
> > controller integration, but I'm pretty confident that this problem
> > should be solved in the power domain code, instead of the GPU driver.
>
> Ok. I was hoping that GPU support could be added without power domain
> control, but I now see that this is probably not reasonable at all.
> So I will keep on hoping that NXP comes up with an upstreamable solution
> for the power domain handling.


There was a patch series for upstream power-domain control from NXP a few days ago:

https://patchwork.kernel.org/cover/10904511/

Can these be tested somehow to see if they help the issue with the GPU?

adam
>
> Thanks,
> Frieder
>
> >
> > Regards,
> > Lucas
> >
> >> Signed-off-by: Frieder Schrempf <[email protected]>
> >> ---
> >> drivers/gpu/drm/etnaviv/etnaviv_gpu.c | 29 ++++++++++++++++++++-----
> >> --
> >> 1 file changed, 22 insertions(+), 7 deletions(-)
> >>
> >> diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gpu.c
> >> b/drivers/gpu/drm/etnaviv/etnaviv_gpu.c
> >> index a31eeff2b297..23877c1f150a 100644
> >> --- a/drivers/gpu/drm/etnaviv/etnaviv_gpu.c
> >> +++ b/drivers/gpu/drm/etnaviv/etnaviv_gpu.c
> >> @@ -1775,13 +1775,6 @@ static int etnaviv_gpu_platform_probe(struct
> >> platform_device *pdev)
> >> return gpu->irq;
> >> }
> >>
> >> - err = devm_request_irq(&pdev->dev, gpu->irq, irq_handler, 0,
> >> - dev_name(gpu->dev), gpu);
> >> - if (err) {
> >> - dev_err(dev, "failed to request IRQ%u: %d\n", gpu->irq,
> >> err);
> >> - return err;
> >> - }
> >> -
> >> /* Get Clocks: */
> >> gpu->clk_reg = devm_clk_get(&pdev->dev, "reg");
> >> DBG("clk_reg: %p", gpu->clk_reg);
> >> @@ -1805,6 +1798,28 @@ static int etnaviv_gpu_platform_probe(struct
> >> platform_device *pdev)
> >> gpu->clk_shader = NULL;
> >> gpu->base_rate_shader = clk_get_rate(gpu->clk_shader);
> >>
> >> + /*
> >> + * On i.MX8MM there is an interrupt getting triggered
> >> immediately
> >> + * after requesting the IRQ, which leads to a stall as the
> >> handler
> >> + * accesses the GPU registers without the clock being enabled.
> >> + * Enabling the clocks briefly seems to clear the IRQ state, so
> >> we do
> >> + * this here before requesting the IRQ.
> >> + */
> >> + err = etnaviv_gpu_clk_enable(gpu);
> >> + if (err)
> >> + return err;
> >> +
> >> + err = etnaviv_gpu_clk_disable(gpu);
> >> + if (err)
> >> + return err;
> >> +
> >> + err = devm_request_irq(&pdev->dev, gpu->irq, irq_handler, 0,
> >> + dev_name(gpu->dev), gpu);
> >> + if (err) {
> >> + dev_err(dev, "failed to request IRQ%u: %d\n", gpu->irq,
> >> err);
> >> + return err;
> >> + }
> >> +
> >> /* TODO: figure out max mapped size */
> >> dev_set_drvdata(dev, gpu);
> >>
> >

2020-05-01 12:41:31

by Peng Fan

Subject: RE: [RFC PATCH 3/4] drm/etnaviv: Change order of enabling clocks to fix boot on i.MX8MM

> Subject: Re: [RFC PATCH 3/4] drm/etnaviv: Change order of enabling clocks to
> fix boot on i.MX8MM
>
> On 30.04.20 16:35, Lucas Stach wrote:
> > Am Donnerstag, den 30.04.2020, 12:46 +0000 schrieb Schrempf Frieder:
> >> From: Frieder Schrempf <[email protected]>
> >>
> >> On some i.MX8MM devices the boot hangs when enabling the GPU clocks.
> >> Changing the order of clock initialization to
> >>
> >> core -> shader -> bus -> reg
> >>
> >> fixes the issue. This is the same order used in the imx platform code
> >> of the downstream GPU driver in the NXP kernel [1]. For the sake of
> >> consistency we also adjust the order of disabling the clocks to the
> >> reverse.
> >>
> >> [1] https://source.codeaurora.org/external/imx/linux-imx/tree/drivers/mxc/gpu-viv/hal/os/linux/kernel/platform/freescale/gc_hal_kernel_platform_imx.c?h=imx_5.4.3_2.0.0#n1538
> >
> > I don't see why the order of the clocks is important. Is this really a
> > GPU issue? As in: does a GPU access hang when enabling the clocks in
> > the wrong order? Or is this a clock driver issue with a clock access
> > hanging due to an upstream clock still being disabled?
>
> Actually you might be right with this being a clock driver issue. The hanging
> happens while enabling the clocks (unrelated to any GPU register access). The
> strange thing is that most of the devices we have don't care and work as is
> and some devices reliably fail each time when enabling the clocks in the
> "wrong" order.
>
> So I guess this could indeed be some clock being enabled with an upstream
> PLL not having locked yet or something.

https://patchwork.kernel.org/cover/11433775/

Will this patchset help?

The i.MX8M CCM root mux code in Linux needs a fix.

Regards,
Peng.

>
> >
> > Regards,
> > Lucas
> >
> >> Signed-off-by: Frieder Schrempf <[email protected]>
> >> ---
> >> drivers/gpu/drm/etnaviv/etnaviv_gpu.c | 42
> +++++++++++++--------------
> >> 1 file changed, 21 insertions(+), 21 deletions(-)
> >>
> >> diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gpu.c
> >> b/drivers/gpu/drm/etnaviv/etnaviv_gpu.c
> >> index 7b138d4dd068..424b2e5951f0 100644
> >> --- a/drivers/gpu/drm/etnaviv/etnaviv_gpu.c
> >> +++ b/drivers/gpu/drm/etnaviv/etnaviv_gpu.c
> >> @@ -1487,55 +1487,55 @@ static int etnaviv_gpu_clk_enable(struct
> etnaviv_gpu *gpu)
> >> {
> >> int ret;
> >>
> >> - if (gpu->clk_reg) {
> >> - ret = clk_prepare_enable(gpu->clk_reg);
> >> + if (gpu->clk_core) {
> >> + ret = clk_prepare_enable(gpu->clk_core);
> >> if (ret)
> >> return ret;
> >> }
> >>
> >> - if (gpu->clk_bus) {
> >> - ret = clk_prepare_enable(gpu->clk_bus);
> >> + if (gpu->clk_shader) {
> >> + ret = clk_prepare_enable(gpu->clk_shader);
> >> if (ret)
> >> - goto disable_clk_reg;
> >> + goto disable_clk_core;
> >> }
> >>
> >> - if (gpu->clk_core) {
> >> - ret = clk_prepare_enable(gpu->clk_core);
> >> + if (gpu->clk_bus) {
> >> + ret = clk_prepare_enable(gpu->clk_bus);
> >> if (ret)
> >> - goto disable_clk_bus;
> >> + goto disable_clk_shader;
> >> }
> >>
> >> - if (gpu->clk_shader) {
> >> - ret = clk_prepare_enable(gpu->clk_shader);
> >> + if (gpu->clk_reg) {
> >> + ret = clk_prepare_enable(gpu->clk_reg);
> >> if (ret)
> >> - goto disable_clk_core;
> >> + goto disable_clk_bus;
> >> }
> >>
> >> return 0;
> >>
> >> -disable_clk_core:
> >> - if (gpu->clk_core)
> >> - clk_disable_unprepare(gpu->clk_core);
> >> disable_clk_bus:
> >> if (gpu->clk_bus)
> >> clk_disable_unprepare(gpu->clk_bus);
> >> -disable_clk_reg:
> >> - if (gpu->clk_reg)
> >> - clk_disable_unprepare(gpu->clk_reg);
> >> +disable_clk_shader:
> >> + if (gpu->clk_shader)
> >> + clk_disable_unprepare(gpu->clk_shader);
> >> +disable_clk_core:
> >> + if (gpu->clk_core)
> >> + clk_disable_unprepare(gpu->clk_core);
> >>
> >> return ret;
> >> }
> >>
> >> static int etnaviv_gpu_clk_disable(struct etnaviv_gpu *gpu)
> >> {
> >> + if (gpu->clk_reg)
> >> + clk_disable_unprepare(gpu->clk_reg);
> >> + if (gpu->clk_bus)
> >> + clk_disable_unprepare(gpu->clk_bus);
> >> if (gpu->clk_shader)
> >> clk_disable_unprepare(gpu->clk_shader);
> >> if (gpu->clk_core)
> >> clk_disable_unprepare(gpu->clk_core);
> >> - if (gpu->clk_bus)
> >> - clk_disable_unprepare(gpu->clk_bus);
> >> - if (gpu->clk_reg)
> >> - clk_disable_unprepare(gpu->clk_reg);
> >>
> >> return 0;
> >> }
> >

2020-05-03 14:51:50

by Adam Ford

[permalink] [raw]
Subject: Re: [RFC PATCH 4/4] arm64: dts: imx8mm: Add GPU nodes for 2D and 3D core using Etnaviv

On Thu, Apr 30, 2020 at 7:46 AM Schrempf Frieder
<[email protected]> wrote:
>
> From: Frieder Schrempf <[email protected]>
>
> According to the documents, the i.MX8M-Mini features a GC320 and a
> GCNanoUltra GPU core. Etnaviv detects them as:
>
> etnaviv-gpu 38000000.gpu: model: GC600, revision: 4653
> etnaviv-gpu 38008000.gpu: model: GC520, revision: 5341
>
> This seems to work fine more or less without any changes to the HWDB,
> which still might be needed in the future to correct some features,
> etc.
>
> Signed-off-by: Frieder Schrempf <[email protected]>
> ---
Since not everyone uses the 3D or 2D, would it make sense to mark them
as disabled by default and let people who need the 3D and 2D enable
them at their respective board files?

adam

> 2.17.1

2020-05-04 10:12:22

by Lucas Stach

[permalink] [raw]
Subject: Re: [RFC PATCH 4/4] arm64: dts: imx8mm: Add GPU nodes for 2D and 3D core using Etnaviv

Am Sonntag, den 03.05.2020, 09:49 -0500 schrieb Adam Ford:
> On Thu, Apr 30, 2020 at 7:46 AM Schrempf Frieder
> <[email protected]> wrote:
> > From: Frieder Schrempf <[email protected]>
> >
> > According to the documents, the i.MX8M-Mini features a GC320 and a
> > GCNanoUltra GPU core. Etnaviv detects them as:
> >
> > etnaviv-gpu 38000000.gpu: model: GC600, revision: 4653
> > etnaviv-gpu 38008000.gpu: model: GC520, revision: 5341
> >
> > This seems to work fine more or less without any changes to the HWDB,
> > which still might be needed in the future to correct some features,
> > etc.
> >
> > Signed-off-by: Frieder Schrempf <[email protected]>
> > ---
> Since not everyone uses the 3D or 2D, would it make sense to mark them
> as disabled by default and let people who need the 3D and 2D enable
> them at their respective board files?

No, devices on the SoC with no external dependencies should always be
enabled.

The board has much less influence over whether the GPU is being used
than the specific use-case. While the board designer may not even think
about using the GPUs (because no display connector present or something
like that) people using the board may still find uses for the GPU, like
doing video pipeline color space conversions or something like that.

Regards,
Lucas

2020-05-06 11:30:10

by Frieder Schrempf

[permalink] [raw]
Subject: Re: [RFC PATCH 3/4] drm/etnaviv: Change order of enabling clocks to fix boot on i.MX8MM

Hi Peng,

On 01.05.20 14:36, Peng Fan wrote:
>> Subject: Re: [RFC PATCH 3/4] drm/etnaviv: Change order of enabling clocks to
>> fix boot on i.MX8MM
>>
>> On 30.04.20 16:35, Lucas Stach wrote:
>>> Am Donnerstag, den 30.04.2020, 12:46 +0000 schrieb Schrempf Frieder:
>>>> From: Frieder Schrempf <[email protected]>
>>>>
>>>> On some i.MX8MM devices the boot hangs when enabling the GPU clocks.
>>>> Changing the order of clock initialization to
>>>>
>>>> core -> shader -> bus -> reg
>>>>
>>>> fixes the issue. This is the same order used in the imx platform code
>>>> of the downstream GPU driver in the NXP kernel [1]. For the sake of
>>>> consistency we also adjust the order of disabling the clocks to the
>>>> reverse.
>>>>
>>>> [1] https://source.codeaurora.org/external/imx/linux-imx/tree/drivers/mxc/gpu-viv/hal/os/linux/kernel/platform/freescale/gc_hal_kernel_platform_imx.c?h=imx_5.4.3_2.0.0#n1538
>>>
>>> I don't see why the order of the clocks is important. Is this really a
>>> GPU issue? As in: does a GPU access hang when enabling the clocks in
>>> the wrong order? Or is this a clock driver issue with a clock access
>>> hanging due to an upstream clock still being disabled?
>>
>> Actually you might be right with this being a clock driver issue. The hanging
>> happens while enabling the clocks (unrelated to any GPU register access). The
>> strange thing is that most of the devices we have don't care and work as is,
>> while some devices reliably fail each time when enabling the clocks in the
>> "wrong" order.
>>
>> So I guess this could indeed be some clock being enabled with an upstream
>> PLL not having locked yet or something.
>
> https://patchwork.kernel.org/cover/11433775/
>
> Will this patchset help?

Thanks for the pointer. Unfortunately the clock patches don't help. I
tried with 5.7-rc4 and your patches on top and the issue still persists.

Also I found out that changing the order of the clock initialization as
proposed does not fix the problem either. On some boards it helps,
others still hang when the clocks are initialized.

Thanks,
Frieder

>
> The i.MX8M CCM root mux code in Linux needs a fix.
>
> Regards,
> Peng.
>
>>
>>>
>>> Regards,
>>> Lucas
>>>
>>>> Signed-off-by: Frieder Schrempf <[email protected]>
>>>> ---
>>>> drivers/gpu/drm/etnaviv/etnaviv_gpu.c | 42
>> +++++++++++++--------------
>>>> 1 file changed, 21 insertions(+), 21 deletions(-)
>>>>
>>>> diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gpu.c
>>>> b/drivers/gpu/drm/etnaviv/etnaviv_gpu.c
>>>> index 7b138d4dd068..424b2e5951f0 100644
>>>> --- a/drivers/gpu/drm/etnaviv/etnaviv_gpu.c
>>>> +++ b/drivers/gpu/drm/etnaviv/etnaviv_gpu.c
>>>> @@ -1487,55 +1487,55 @@ static int etnaviv_gpu_clk_enable(struct
>> etnaviv_gpu *gpu)
>>>> {
>>>> int ret;
>>>>
>>>> - if (gpu->clk_reg) {
>>>> - ret = clk_prepare_enable(gpu->clk_reg);
>>>> + if (gpu->clk_core) {
>>>> + ret = clk_prepare_enable(gpu->clk_core);
>>>> if (ret)
>>>> return ret;
>>>> }
>>>>
>>>> - if (gpu->clk_bus) {
>>>> - ret = clk_prepare_enable(gpu->clk_bus);
>>>> + if (gpu->clk_shader) {
>>>> + ret = clk_prepare_enable(gpu->clk_shader);
>>>> if (ret)
>>>> - goto disable_clk_reg;
>>>> + goto disable_clk_core;
>>>> }
>>>>
>>>> - if (gpu->clk_core) {
>>>> - ret = clk_prepare_enable(gpu->clk_core);
>>>> + if (gpu->clk_bus) {
>>>> + ret = clk_prepare_enable(gpu->clk_bus);
>>>> if (ret)
>>>> - goto disable_clk_bus;
>>>> + goto disable_clk_shader;
>>>> }
>>>>
>>>> - if (gpu->clk_shader) {
>>>> - ret = clk_prepare_enable(gpu->clk_shader);
>>>> + if (gpu->clk_reg) {
>>>> + ret = clk_prepare_enable(gpu->clk_reg);
>>>> if (ret)
>>>> - goto disable_clk_core;
>>>> + goto disable_clk_bus;
>>>> }
>>>>
>>>> return 0;
>>>>
>>>> -disable_clk_core:
>>>> - if (gpu->clk_core)
>>>> - clk_disable_unprepare(gpu->clk_core);
>>>> disable_clk_bus:
>>>> if (gpu->clk_bus)
>>>> clk_disable_unprepare(gpu->clk_bus);
>>>> -disable_clk_reg:
>>>> - if (gpu->clk_reg)
>>>> - clk_disable_unprepare(gpu->clk_reg);
>>>> +disable_clk_shader:
>>>> + if (gpu->clk_shader)
>>>> + clk_disable_unprepare(gpu->clk_shader);
>>>> +disable_clk_core:
>>>> + if (gpu->clk_core)
>>>> + clk_disable_unprepare(gpu->clk_core);
>>>>
>>>> return ret;
>>>> }
>>>>
>>>> static int etnaviv_gpu_clk_disable(struct etnaviv_gpu *gpu)
>>>> {
>>>> + if (gpu->clk_reg)
>>>> + clk_disable_unprepare(gpu->clk_reg);
>>>> + if (gpu->clk_bus)
>>>> + clk_disable_unprepare(gpu->clk_bus);
>>>> if (gpu->clk_shader)
>>>> clk_disable_unprepare(gpu->clk_shader);
>>>> if (gpu->clk_core)
>>>> clk_disable_unprepare(gpu->clk_core);
>>>> - if (gpu->clk_bus)
>>>> - clk_disable_unprepare(gpu->clk_bus);
>>>> - if (gpu->clk_reg)
>>>> - clk_disable_unprepare(gpu->clk_reg);
>>>>
>>>> return 0;
>>>> }
>>>
> _______________________________________________
> linux-arm-kernel mailing list
> [email protected]
> http://lists.infradead.org/mailman/listinfo/linux-arm-kernel
>

2020-05-06 11:49:56

by Frieder Schrempf

[permalink] [raw]
Subject: Re: [RFC PATCH 4/4] arm64: dts: imx8mm: Add GPU nodes for 2D and 3D core using Etnaviv

On 03.05.20 16:49, Adam Ford wrote:
> On Thu, Apr 30, 2020 at 7:46 AM Schrempf Frieder
> <[email protected]> wrote:
>>
>> From: Frieder Schrempf <[email protected]>
>>
>> According to the documents, the i.MX8M-Mini features a GC320 and a
>> GCNanoUltra GPU core. Etnaviv detects them as:
>>
>> etnaviv-gpu 38000000.gpu: model: GC600, revision: 4653
>> etnaviv-gpu 38008000.gpu: model: GC520, revision: 5341
>>
>> This seems to work fine more or less without any changes to the HWDB,
>> which still might be needed in the future to correct some features,
>> etc.
>>
>> Signed-off-by: Frieder Schrempf <[email protected]>
>> ---
> Since not everyone uses the 3D or 2D, would it make sense to mark them
> as disabled by default and let people who need the 3D and 2D enable
> them at their respective board files?

I would rather keep it the way it has been done for other SoCs. Looking
at the i.MX6 devicetrees, they all seem to have the GPUs enabled by default.

2020-05-06 12:01:37

by Frieder Schrempf

[permalink] [raw]
Subject: Re: [RFC PATCH 4/4] arm64: dts: imx8mm: Add GPU nodes for 2D and 3D core using Etnaviv

On 06.05.20 13:45, Frieder Schrempf wrote:
> On 03.05.20 16:49, Adam Ford wrote:
>> On Thu, Apr 30, 2020 at 7:46 AM Schrempf Frieder
>> <[email protected]> wrote:
>>>
>>> From: Frieder Schrempf <[email protected]>
>>>
>>> According to the documents, the i.MX8M-Mini features a GC320 and a
>>> GCNanoUltra GPU core. Etnaviv detects them as:
>>>
>>>          etnaviv-gpu 38000000.gpu: model: GC600, revision: 4653
>>>          etnaviv-gpu 38008000.gpu: model: GC520, revision: 5341
>>>
>>> This seems to work fine more or less without any changes to the HWDB,
>>> which still might be needed in the future to correct some features,
>>> etc.
>>>
>>> Signed-off-by: Frieder Schrempf <[email protected]>
>>> ---
>> Since not everyone uses the 3D or 2D, would it make sense to mark them
>> as disabled by default and let people who need the 3D and 2D enable
>> them at their respective board files?
>
> I would rather keep it the way it has been done for other SoCs. Looking
> at the i.MX6 devicetrees, they all seem to have the GPUs enabled by
> default.

Ah, I had missed Lucas' reply. He already provided much better arguments
for keeping the GPUs enabled by default.