2024-01-17 03:12:38

by Erico Nunes

Subject: [PATCH v1 1/6] drm/lima: fix devfreq refcount imbalance for job timeouts

In case a task manages to complete but it took just long enough to also
trigger the timeout handler, the current code results in a refcount
imbalance on lima_pm_idle.

While this can be a rare occurrence, when it happens it may fill user
logs with stack traces such as:

[10136.669170] WARNING: CPU: 0 PID: 0 at drivers/gpu/drm/lima/lima_devfreq.c:205 lima_devfreq_record_idle+0xa0/0xb0
...
[10136.669459] pc : lima_devfreq_record_idle+0xa0/0xb0
...
[10136.669628] Call trace:
[10136.669634] lima_devfreq_record_idle+0xa0/0xb0
[10136.669646] lima_sched_pipe_task_done+0x5c/0xb0
[10136.669656] lima_gp_irq_handler+0xa8/0x120
[10136.669666] __handle_irq_event_percpu+0x48/0x160
[10136.669679] handle_irq_event+0x4c/0xc0

The imbalance happens because lima_sched_pipe_task_done() already calls
lima_pm_idle for this case if there was no error.
Check the error flag in the timeout handler to ensure we can never run
into this case.
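
For reference, a simplified sketch of the done path (paraphrased, not
a verbatim copy of lima_sched_pipe_task_done()) shows where the first
lima_pm_idle() call comes from:

    void lima_sched_pipe_task_done(struct lima_sched_pipe *pipe)
    {
        if (pipe->error) {
            /* error: no pm idle here, recovery work is scheduled instead */
            schedule_work(&pipe->recover_work);
        } else {
            /* success: finish the task and drop the pm reference */
            pipe->task_fini(pipe);
            dma_fence_signal(pipe->current_task->fence);
            lima_pm_idle(pipe->ldev);
        }
    }

If the timeout handler then calls lima_pm_idle() unconditionally for a
task that already completed, the reference is dropped twice.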

Signed-off-by: Erico Nunes <[email protected]>
---
drivers/gpu/drm/lima/lima_sched.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/lima/lima_sched.c b/drivers/gpu/drm/lima/lima_sched.c
index c3bf8cda8498..66317296d831 100644
--- a/drivers/gpu/drm/lima/lima_sched.c
+++ b/drivers/gpu/drm/lima/lima_sched.c
@@ -427,7 +427,8 @@ static enum drm_gpu_sched_stat lima_sched_timedout_job(struct drm_sched_job *job
pipe->current_vm = NULL;
pipe->current_task = NULL;

- lima_pm_idle(ldev);
+ if (pipe->error)
+ lima_pm_idle(ldev);

drm_sched_resubmit_jobs(&pipe->base);
drm_sched_start(&pipe->base, true);
--
2.43.0



2024-01-17 18:14:15

by Vasily Khoruzhick

Subject: Re: [PATCH v1 1/6] drm/lima: fix devfreq refcount imbalance for job timeouts

On Tue, Jan 16, 2024 at 7:12 PM Erico Nunes <[email protected]> wrote:
>
> In case a task manages to complete but it took just long enough to also
> trigger the timeout handler, the current code results in a refcount
> imbalance on lima_pm_idle.
>
> While this can be a rare occurrence, when it happens it may fill user
> logs with stack traces such as:
>
> [10136.669170] WARNING: CPU: 0 PID: 0 at drivers/gpu/drm/lima/lima_devfreq.c:205 lima_devfreq_record_idle+0xa0/0xb0
> ...
> [10136.669459] pc : lima_devfreq_record_idle+0xa0/0xb0
> ...
> [10136.669628] Call trace:
> [10136.669634] lima_devfreq_record_idle+0xa0/0xb0
> [10136.669646] lima_sched_pipe_task_done+0x5c/0xb0
> [10136.669656] lima_gp_irq_handler+0xa8/0x120
> [10136.669666] __handle_irq_event_percpu+0x48/0x160
> [10136.669679] handle_irq_event+0x4c/0xc0
>
> The imbalance happens because lima_sched_pipe_task_done() already calls
> lima_pm_idle for this case if there was no error.
> Check the error flag in the timeout handler to ensure we can never run
> into this case.
>
> Signed-off-by: Erico Nunes <[email protected]>
Reviewed-by: Vasily Khoruzhick <[email protected]>

> ---
> drivers/gpu/drm/lima/lima_sched.c | 3 ++-
> 1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/gpu/drm/lima/lima_sched.c b/drivers/gpu/drm/lima/lima_sched.c
> index c3bf8cda8498..66317296d831 100644
> --- a/drivers/gpu/drm/lima/lima_sched.c
> +++ b/drivers/gpu/drm/lima/lima_sched.c
> @@ -427,7 +427,8 @@ static enum drm_gpu_sched_stat lima_sched_timedout_job(struct drm_sched_job *job
> pipe->current_vm = NULL;
> pipe->current_task = NULL;
>
> - lima_pm_idle(ldev);
> + if (pipe->error)
> + lima_pm_idle(ldev);
>
> drm_sched_resubmit_jobs(&pipe->base);
> drm_sched_start(&pipe->base, true);
> --
> 2.43.0
>

2024-01-18 01:36:57

by Qiang Yu

Subject: Re: [PATCH v1 1/6] drm/lima: fix devfreq refcount imbalance for job timeouts

So this is caused by the same job triggering both the done and the
timeout handling?
I think a better way to solve this is to make sure only one handler
(done or timeout) processes the job, instead of just making sure
lima_pm_idle() is called only once.

Regards,
Qiang

On Wed, Jan 17, 2024 at 11:12 AM Erico Nunes <[email protected]> wrote:
>
> In case a task manages to complete but it took just long enough to also
> trigger the timeout handler, the current code results in a refcount
> imbalance on lima_pm_idle.
>
> While this can be a rare occurrence, when it happens it may fill user
> logs with stack traces such as:
>
> [10136.669170] WARNING: CPU: 0 PID: 0 at drivers/gpu/drm/lima/lima_devfreq.c:205 lima_devfreq_record_idle+0xa0/0xb0
> ...
> [10136.669459] pc : lima_devfreq_record_idle+0xa0/0xb0
> ...
> [10136.669628] Call trace:
> [10136.669634] lima_devfreq_record_idle+0xa0/0xb0
> [10136.669646] lima_sched_pipe_task_done+0x5c/0xb0
> [10136.669656] lima_gp_irq_handler+0xa8/0x120
> [10136.669666] __handle_irq_event_percpu+0x48/0x160
> [10136.669679] handle_irq_event+0x4c/0xc0
>
> The imbalance happens because lima_sched_pipe_task_done() already calls
> lima_pm_idle for this case if there was no error.
> Check the error flag in the timeout handler to ensure we can never run
> into this case.
>
> Signed-off-by: Erico Nunes <[email protected]>
> ---
> drivers/gpu/drm/lima/lima_sched.c | 3 ++-
> 1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/gpu/drm/lima/lima_sched.c b/drivers/gpu/drm/lima/lima_sched.c
> index c3bf8cda8498..66317296d831 100644
> --- a/drivers/gpu/drm/lima/lima_sched.c
> +++ b/drivers/gpu/drm/lima/lima_sched.c
> @@ -427,7 +427,8 @@ static enum drm_gpu_sched_stat lima_sched_timedout_job(struct drm_sched_job *job
> pipe->current_vm = NULL;
> pipe->current_task = NULL;
>
> - lima_pm_idle(ldev);
> + if (pipe->error)
> + lima_pm_idle(ldev);
>
> drm_sched_resubmit_jobs(&pipe->base);
> drm_sched_start(&pipe->base, true);
> --
> 2.43.0
>

2024-01-18 11:14:55

by Erico Nunes

Subject: Re: [PATCH v1 1/6] drm/lima: fix devfreq refcount imbalance for job timeouts

On Thu, Jan 18, 2024 at 2:36 AM Qiang Yu <[email protected]> wrote:
>
> So this is caused by the same job triggering both the done and the
> timeout handling?
> I think a better way to solve this is to make sure only one handler
> (done or timeout) processes the job, instead of just making sure
> lima_pm_idle() is called only once.

It's not very clear to me how to best ensure that, with the drm_sched
software timeout and the irq happening potentially at the same time.
I think patch 4 in this series describes and covers the most common
case that this would be hit. So maybe now this patch could be dropped
in favour of just that one.
But since this was a bit hard to reproduce and I'm not sure the issue
is entirely covered by that, I just decided to keep this small change
as it prevented all the stack trace reproducers I was able to come up
with.

Erico

2024-01-19 01:51:04

by Qiang Yu

Subject: Re: [PATCH v1 1/6] drm/lima: fix devfreq refcount imbalance for job timeouts

On Thu, Jan 18, 2024 at 7:14 PM Erico Nunes <[email protected]> wrote:
>
> On Thu, Jan 18, 2024 at 2:36 AM Qiang Yu <[email protected]> wrote:
> >
> > So this is caused by the same job triggering both the done and the
> > timeout handling?
> > I think a better way to solve this is to make sure only one handler
> > (done or timeout) processes the job, instead of just making sure
> > lima_pm_idle() is called only once.
>
> It's not very clear to me how to best ensure that, with the drm_sched
> software timeout and the irq happening potentially at the same time.
This could be done by stopping the scheduler from running more jobs
and disabling the GP/PP interrupts. Then, after syncing the irq, no
new irq can get in while we are handling the timeout.
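
A rough sketch of that ordering in the timeout handler (a sketch only;
lima_sched_pipe_mask_irq() and pipe->irq are placeholder names used
just for this illustration):

    static enum drm_gpu_sched_stat lima_sched_timedout_job(struct drm_sched_job *job)
    {
        struct lima_sched_pipe *pipe = to_lima_pipe(job->sched);
        struct lima_sched_task *task = to_lima_task(job);

        /* keep the scheduler from pushing new jobs to the hardware */
        drm_sched_stop(&pipe->base, &task->base);

        /* placeholder helper: mask the GP/PP irqs for this pipe */
        lima_sched_pipe_mask_irq(pipe);

        /* wait for a done handler already running on another CPU */
        synchronize_irq(pipe->irq);

        /* ... existing reset and resubmit logic, now without racing irqs ... */
    }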

> I think patch 4 in this series describes and covers the most common
> case that this would be hit. So maybe now this patch could be dropped
> in favour of just that one.
Yes.

> But since this was a bit hard to reproduce and I'm not sure the issue
> is entirely covered by that, I just decided to keep this small change
> as it prevented all the stack trace reproducers I was able to come up
> with.
>
> Erico

2024-01-23 23:19:40

by Erico Nunes

Subject: Re: [PATCH v1 1/6] drm/lima: fix devfreq refcount imbalance for job timeouts

On Fri, Jan 19, 2024 at 2:50 AM Qiang Yu <[email protected]> wrote:
>
> On Thu, Jan 18, 2024 at 7:14 PM Erico Nunes <[email protected]> wrote:
> >
> > On Thu, Jan 18, 2024 at 2:36 AM Qiang Yu <[email protected]> wrote:
> > >
> > > So this is caused by the same job triggering both the done and the
> > > timeout handling?
> > > I think a better way to solve this is to make sure only one handler
> > > (done or timeout) processes the job, instead of just making sure
> > > lima_pm_idle() is called only once.
> >
> > It's not very clear to me how to best ensure that, with the drm_sched
> > software timeout and the irq happening potentially at the same time.
> This could be done by stopping the scheduler from running more jobs
> and disabling the GP/PP interrupts. Then, after syncing the irq, no
> new irq can get in while we are handling the timeout.
>
> > I think patch 4 in this series describes and covers the most common
> > case that this would be hit. So maybe now this patch could be dropped
> > in favour of just that one.
> Yes.

After dropping the patch while testing to send v2, I managed to
reproduce this issue again.
Looking at a trace, it seems that this actually happened with the test workload:
lima_sched_timedout_job -> (the fence is not signaled and the new fence
check passes) -> irq happens, preempting lima_sched_timedout_job; the
fence is signaled and lima_pm_idle is called once from
lima_sched_pipe_task_done -> lima_sched_timedout_job continues and
calls lima_pm_idle again
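
Spelled out with the two call sites side by side (a simplified sketch,
not the literal code), the pipe->error check makes the two idle calls
mutually exclusive:

    /* done path (irq context): idles only when the task succeeded */
    if (!pipe->error)
        lima_pm_idle(ldev);

    /* timeout handler with this patch: idles only when the task failed */
    if (pipe->error)
        lima_pm_idle(ldev);

So even when the irq preempts the timeout handler as in the trace
above, only one of the two paths drops the reference.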

So I think we still need this patch to at least prevent the bug with
the current software timeout. If we move to the hardware watchdog
timeout later we might be able to remove it anyway, but that will
still require separate work and testing.

Erico

2024-01-24 01:04:24

by Qiang Yu

Subject: Re: [PATCH v1 1/6] drm/lima: fix devfreq refcount imbalance for job timeouts

On Wed, Jan 24, 2024 at 7:19 AM Erico Nunes <[email protected]> wrote:
>
> On Fri, Jan 19, 2024 at 2:50 AM Qiang Yu <[email protected]> wrote:
> >
> > On Thu, Jan 18, 2024 at 7:14 PM Erico Nunes <[email protected]> wrote:
> > >
> > > On Thu, Jan 18, 2024 at 2:36 AM Qiang Yu <[email protected]> wrote:
> > > >
> > > > So this is caused by the same job triggering both the done and the
> > > > timeout handling?
> > > > I think a better way to solve this is to make sure only one handler
> > > > (done or timeout) processes the job, instead of just making sure
> > > > lima_pm_idle() is called only once.
> > >
> > > It's not very clear to me how to best ensure that, with the drm_sched
> > > software timeout and the irq happening potentially at the same time.
> > This could be done by stopping the scheduler from running more jobs
> > and disabling the GP/PP interrupts. Then, after syncing the irq, no
> > new irq can get in while we are handling the timeout.
> >
> > > I think patch 4 in this series describes and covers the most common
> > > case that this would be hit. So maybe now this patch could be dropped
> > > in favour of just that one.
> > Yes.
>
> After dropping the patch while testing to send v2, I managed to
> reproduce this issue again.
> Looking at a trace, it seems that this actually happened with the test workload:
> lima_sched_timedout_job -> (the fence is not signaled and the new fence
> check passes) -> irq happens, preempting lima_sched_timedout_job; the
> fence is signaled and lima_pm_idle is called once from
> lima_sched_pipe_task_done -> lima_sched_timedout_job continues and
> calls lima_pm_idle again
>
> So I think we still need this patch to at least prevent the bug with
> the current software timeout. If we move to the hardware watchdog
> timeout later we might be able to remove it anyway, but that will
> still require separate work and testing.
>
Then you do need to solve the IRQ and reset race by disabling the
GP/PP IRQ and syncing the irq after drm_sched_stop(). I mean, you may
keep patch 4 for the spurious timeout case, and add a new patch to
solve the IRQ and reset race.

Skipping the pm idle call does not solve the race completely; other
reset logic may also conflict if the IRQ and reset are not sequenced.