2021-07-29 18:37:50

by Rob Clark

Subject: [PATCH] drm/msm: Disable frequency clamping on a630

From: Rob Clark <[email protected]>

The more frequent frequency transitions resulting from clamping freq to
minimum when the GPU is idle seem to be causing some issue with the bus
getting voted off when it should be on. (An enable racing with an async
disable?) This might be a problem outside of the GPU, as I can't
reproduce this on a618 which uses the same GMU fw and same mechanism to
communicate with GMU to set opp. For now, just revert to previous
devfreq behavior on a630 until the issue is understood.

Reported-by: Caleb Connolly <[email protected]>
Fixes: 9bc95570175a ("drm/msm: Devfreq tuning")
Signed-off-by: Rob Clark <[email protected]>
---
drivers/gpu/drm/msm/adreno/adreno_gpu.c | 3 +++
drivers/gpu/drm/msm/msm_gpu.h | 2 ++
drivers/gpu/drm/msm/msm_gpu_devfreq.c | 12 ++++++++++++
3 files changed, 17 insertions(+)

diff --git a/drivers/gpu/drm/msm/adreno/adreno_gpu.c b/drivers/gpu/drm/msm/adreno/adreno_gpu.c
index 748665232d29..9fd08b413010 100644
--- a/drivers/gpu/drm/msm/adreno/adreno_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/adreno_gpu.c
@@ -945,6 +945,9 @@ int adreno_gpu_init(struct drm_device *drm, struct platform_device *pdev,
pm_runtime_use_autosuspend(dev);
pm_runtime_enable(dev);

+ if (adreno_is_a630(adreno_gpu))
+ gpu->devfreq.disable_freq_clamping = true;
+
return msm_gpu_init(drm, pdev, &adreno_gpu->base, &funcs->base,
adreno_gpu->info->name, &adreno_gpu_config);
}
diff --git a/drivers/gpu/drm/msm/msm_gpu.h b/drivers/gpu/drm/msm/msm_gpu.h
index 0e4b45bff2e6..7e11b667f939 100644
--- a/drivers/gpu/drm/msm/msm_gpu.h
+++ b/drivers/gpu/drm/msm/msm_gpu.h
@@ -112,6 +112,8 @@ struct msm_gpu_devfreq {
* it is inactive.
*/
unsigned long idle_freq;
+
+ bool disable_freq_clamping;
};

struct msm_gpu {
diff --git a/drivers/gpu/drm/msm/msm_gpu_devfreq.c b/drivers/gpu/drm/msm/msm_gpu_devfreq.c
index 0a1ee20296a2..a832af436251 100644
--- a/drivers/gpu/drm/msm/msm_gpu_devfreq.c
+++ b/drivers/gpu/drm/msm/msm_gpu_devfreq.c
@@ -94,6 +94,12 @@ void msm_devfreq_init(struct msm_gpu *gpu)
if (!gpu->funcs->gpu_busy)
return;

+ /* Revert to previous polling interval if we aren't using freq clamping
+ * to preserve previous behavior
+ */
+ if (gpu->devfreq.disable_freq_clamping)
+ msm_devfreq_profile.polling_ms = 10;
+
msm_devfreq_profile.initial_freq = gpu->fast_rate;

/*
@@ -151,6 +157,9 @@ void msm_devfreq_active(struct msm_gpu *gpu)
unsigned int idle_time;
unsigned long target_freq = df->idle_freq;

+ if (gpu->devfreq.disable_freq_clamping)
+ return;
+
/*
* Hold devfreq lock to synchronize with get_dev_status()/
* target() callbacks
@@ -186,6 +195,9 @@ void msm_devfreq_idle(struct msm_gpu *gpu)
struct msm_gpu_devfreq *df = &gpu->devfreq;
unsigned long idle_freq, target_freq = 0;

+ if (gpu->devfreq.disable_freq_clamping)
+ return;
+
/*
* Hold devfreq lock to synchronize with get_dev_status()/
* target() callbacks
--
2.31.1



2021-07-29 20:08:31

by Caleb Connolly

Subject: Re: [PATCH] drm/msm: Disable frequency clamping on a630

Hi Rob,

I've done some more testing! It looks like before that patch ("drm/msm: Devfreq tuning") the GPU would never get above
the second frequency in the OPP table (342MHz) (at least, not in glxgears). With the patch applied it would more
aggressively jump up to the max frequency, which seems to be unstable at the default regulator voltages.

Hacking the pm8005 s1 regulator (which provides VDD_GFX) up to 0.988v (instead of the stock 0.516v) makes the GPU stable
at the higher frequencies.

Applying this patch reverts the behaviour, and the GPU never goes above 342MHz in glxgears, losing ~30% performance in
glxgears.

I think (?) that enabling CPR support would be the proper solution to this - that would ensure that the regulators run
at the voltage the hardware needs to be stable.

Is hacking the voltage higher (although ideally not quite that high) an acceptable short term solution until we have
CPR? Or would it be safer to just not make use of the higher frequencies on a630 for now?


On 29/07/2021 19:39, Rob Clark wrote:
> From: Rob Clark <[email protected]>
>
> The more frequent frequency transitions resulting from clamping freq to
> minimum when the GPU is idle seems to be causing some issue with the bus
> getting voted off when it should be on. (An enable racing with an async
> disable?) This might be a problem outside of the GPU, as I can't
> reproduce this on a618 which uses the same GMU fw and same mechanism to
> communicate with GMU to set opp. For now, just revert to previous
> devfreq behavior on a630 until the issue is understood.
>
> Reported-by: Caleb Connolly <[email protected]>
> Fixes: 9bc95570175a ("drm/msm: Devfreq tuning")
> Signed-off-by: Rob Clark <[email protected]>
> ---
> drivers/gpu/drm/msm/adreno/adreno_gpu.c | 3 +++
> drivers/gpu/drm/msm/msm_gpu.h | 2 ++
> drivers/gpu/drm/msm/msm_gpu_devfreq.c | 12 ++++++++++++
> 3 files changed, 17 insertions(+)
>
> diff --git a/drivers/gpu/drm/msm/adreno/adreno_gpu.c b/drivers/gpu/drm/msm/adreno/adreno_gpu.c
> index 748665232d29..9fd08b413010 100644
> --- a/drivers/gpu/drm/msm/adreno/adreno_gpu.c
> +++ b/drivers/gpu/drm/msm/adreno/adreno_gpu.c
> @@ -945,6 +945,9 @@ int adreno_gpu_init(struct drm_device *drm, struct platform_device *pdev,
> pm_runtime_use_autosuspend(dev);
> pm_runtime_enable(dev);
>
> + if (adreno_is_a630(adreno_gpu))
> + gpu->devfreq.disable_freq_clamping = true;
> +
> return msm_gpu_init(drm, pdev, &adreno_gpu->base, &funcs->base,
> adreno_gpu->info->name, &adreno_gpu_config);
> }
> diff --git a/drivers/gpu/drm/msm/msm_gpu.h b/drivers/gpu/drm/msm/msm_gpu.h
> index 0e4b45bff2e6..7e11b667f939 100644
> --- a/drivers/gpu/drm/msm/msm_gpu.h
> +++ b/drivers/gpu/drm/msm/msm_gpu.h
> @@ -112,6 +112,8 @@ struct msm_gpu_devfreq {
> * it is inactive.
> */
> unsigned long idle_freq;
> +
> + bool disable_freq_clamping;
> };
>
> struct msm_gpu {
> diff --git a/drivers/gpu/drm/msm/msm_gpu_devfreq.c b/drivers/gpu/drm/msm/msm_gpu_devfreq.c
> index 0a1ee20296a2..a832af436251 100644
> --- a/drivers/gpu/drm/msm/msm_gpu_devfreq.c
> +++ b/drivers/gpu/drm/msm/msm_gpu_devfreq.c
> @@ -94,6 +94,12 @@ void msm_devfreq_init(struct msm_gpu *gpu)
> if (!gpu->funcs->gpu_busy)
> return;
>
> + /* Revert to previous polling interval if we aren't using freq clamping
> + * to preserve previous behavior
> + */
> + if (gpu->devfreq.disable_freq_clamping)
> + msm_devfreq_profile.polling_ms = 10;
> +
> msm_devfreq_profile.initial_freq = gpu->fast_rate;
>
> /*
> @@ -151,6 +157,9 @@ void msm_devfreq_active(struct msm_gpu *gpu)
> unsigned int idle_time;
> unsigned long target_freq = df->idle_freq;
>
> + if (gpu->devfreq.disable_freq_clamping)
> + return;
> +
> /*
> * Hold devfreq lock to synchronize with get_dev_status()/
> * target() callbacks
> @@ -186,6 +195,9 @@ void msm_devfreq_idle(struct msm_gpu *gpu)
> struct msm_gpu_devfreq *df = &gpu->devfreq;
> unsigned long idle_freq, target_freq = 0;
>
> + if (gpu->devfreq.disable_freq_clamping)
> + return;
> +
> /*
> * Hold devfreq lock to synchronize with get_dev_status()/
> * target() callbacks
>

--
Kind Regards,
Caleb (they/them)

2021-07-29 20:22:06

by Rob Clark

Subject: Re: [PATCH] drm/msm: Disable frequency clamping on a630

On Thu, Jul 29, 2021 at 1:06 PM Caleb Connolly
<[email protected]> wrote:
>
> Hi Rob,
>
> I've done some more testing! It looks like before that patch ("drm/msm: Devfreq tuning") the GPU would never get above
> the second frequency in the OPP table (342MHz) (at least, not in glxgears). With the patch applied it would more
> aggressively jump up to the max frequency which seems to be unstable at the default regulator voltages.

*ohh*, yeah, ok, that would explain it

> Hacking the pm8005 s1 regulator (which provides VDD_GFX) up to 0.988v (instead of the stock 0.516v) makes the GPU stable
> at the higher frequencies.
>
> Applying this patch reverts the behaviour, and the GPU never goes above 342MHz in glxgears, losing ~30% performance in
> glxgear.
>
> I think (?) that enabling CPR support would be the proper solution to this - that would ensure that the regulators run
> at the voltage the hardware needs to be stable.
>
> Is hacking the voltage higher (although ideally not quite that high) an acceptable short term solution until we have
> CPR? Or would it be safer to just not make use of the higher frequencies on a630 for now?
>

tbh, I'm not sure about the regulator stuff and CPR.. Bjorn is already
on CC and I added sboyd, maybe one of them knows better.

In the short term, removing the higher problematic OPPs from dts might
be a better option than this patch (which I'm dropping), since there
is nothing stopping other workloads from hitting higher OPPs.

I'm slightly curious why I didn't have problems at higher OPPs on my
c630 laptop (sdm850)

BR,
-R

>
> On 29/07/2021 19:39, Rob Clark wrote:
> > From: Rob Clark <[email protected]>
> >
> > The more frequent frequency transitions resulting from clamping freq to
> > minimum when the GPU is idle seems to be causing some issue with the bus
> > getting voted off when it should be on. (An enable racing with an async
> > disable?) This might be a problem outside of the GPU, as I can't
> > reproduce this on a618 which uses the same GMU fw and same mechanism to
> > communicate with GMU to set opp. For now, just revert to previous
> > devfreq behavior on a630 until the issue is understood.
> >
> > Reported-by: Caleb Connolly <[email protected]>
> > Fixes: 9bc95570175a ("drm/msm: Devfreq tuning")
> > Signed-off-by: Rob Clark <[email protected]>
> > ---
> > drivers/gpu/drm/msm/adreno/adreno_gpu.c | 3 +++
> > drivers/gpu/drm/msm/msm_gpu.h | 2 ++
> > drivers/gpu/drm/msm/msm_gpu_devfreq.c | 12 ++++++++++++
> > 3 files changed, 17 insertions(+)
> >
> > diff --git a/drivers/gpu/drm/msm/adreno/adreno_gpu.c b/drivers/gpu/drm/msm/adreno/adreno_gpu.c
> > index 748665232d29..9fd08b413010 100644
> > --- a/drivers/gpu/drm/msm/adreno/adreno_gpu.c
> > +++ b/drivers/gpu/drm/msm/adreno/adreno_gpu.c
> > @@ -945,6 +945,9 @@ int adreno_gpu_init(struct drm_device *drm, struct platform_device *pdev,
> > pm_runtime_use_autosuspend(dev);
> > pm_runtime_enable(dev);
> >
> > + if (adreno_is_a630(adreno_gpu))
> > + gpu->devfreq.disable_freq_clamping = true;
> > +
> > return msm_gpu_init(drm, pdev, &adreno_gpu->base, &funcs->base,
> > adreno_gpu->info->name, &adreno_gpu_config);
> > }
> > diff --git a/drivers/gpu/drm/msm/msm_gpu.h b/drivers/gpu/drm/msm/msm_gpu.h
> > index 0e4b45bff2e6..7e11b667f939 100644
> > --- a/drivers/gpu/drm/msm/msm_gpu.h
> > +++ b/drivers/gpu/drm/msm/msm_gpu.h
> > @@ -112,6 +112,8 @@ struct msm_gpu_devfreq {
> > * it is inactive.
> > */
> > unsigned long idle_freq;
> > +
> > + bool disable_freq_clamping;
> > };
> >
> > struct msm_gpu {
> > diff --git a/drivers/gpu/drm/msm/msm_gpu_devfreq.c b/drivers/gpu/drm/msm/msm_gpu_devfreq.c
> > index 0a1ee20296a2..a832af436251 100644
> > --- a/drivers/gpu/drm/msm/msm_gpu_devfreq.c
> > +++ b/drivers/gpu/drm/msm/msm_gpu_devfreq.c
> > @@ -94,6 +94,12 @@ void msm_devfreq_init(struct msm_gpu *gpu)
> > if (!gpu->funcs->gpu_busy)
> > return;
> >
> > + /* Revert to previous polling interval if we aren't using freq clamping
> > + * to preserve previous behavior
> > + */
> > + if (gpu->devfreq.disable_freq_clamping)
> > + msm_devfreq_profile.polling_ms = 10;
> > +
> > msm_devfreq_profile.initial_freq = gpu->fast_rate;
> >
> > /*
> > @@ -151,6 +157,9 @@ void msm_devfreq_active(struct msm_gpu *gpu)
> > unsigned int idle_time;
> > unsigned long target_freq = df->idle_freq;
> >
> > + if (gpu->devfreq.disable_freq_clamping)
> > + return;
> > +
> > /*
> > * Hold devfreq lock to synchronize with get_dev_status()/
> > * target() callbacks
> > @@ -186,6 +195,9 @@ void msm_devfreq_idle(struct msm_gpu *gpu)
> > struct msm_gpu_devfreq *df = &gpu->devfreq;
> > unsigned long idle_freq, target_freq = 0;
> >
> > + if (gpu->devfreq.disable_freq_clamping)
> > + return;
> > +
> > /*
> > * Hold devfreq lock to synchronize with get_dev_status()/
> > * target() callbacks
> >
>
> --
> Kind Regards,
> Caleb (they/them)

2021-07-29 20:30:52

by Caleb Connolly

Subject: Re: [PATCH] drm/msm: Disable frequency clamping on a630



On 29/07/2021 21:24, Rob Clark wrote:
> On Thu, Jul 29, 2021 at 1:06 PM Caleb Connolly
> <[email protected]> wrote:
>>
>> Hi Rob,
>>
>> I've done some more testing! It looks like before that patch ("drm/msm: Devfreq tuning") the GPU would never get above
>> the second frequency in the OPP table (342MHz) (at least, not in glxgears). With the patch applied it would more
>> aggressively jump up to the max frequency which seems to be unstable at the default regulator voltages.
>
> *ohh*, yeah, ok, that would explain it
>
>> Hacking the pm8005 s1 regulator (which provides VDD_GFX) up to 0.988v (instead of the stock 0.516v) makes the GPU stable
>> at the higher frequencies.
>>
>> Applying this patch reverts the behaviour, and the GPU never goes above 342MHz in glxgears, losing ~30% performance in
>> glxgear.
>>
>> I think (?) that enabling CPR support would be the proper solution to this - that would ensure that the regulators run
>> at the voltage the hardware needs to be stable.
>>
>> Is hacking the voltage higher (although ideally not quite that high) an acceptable short term solution until we have
>> CPR? Or would it be safer to just not make use of the higher frequencies on a630 for now?
>>
>
> tbh, I'm not sure about the regulator stuff and CPR.. Bjorn is already
> on CC and I added sboyd, maybe one of them knows better.
>
> In the short term, removing the higher problematic OPPs from dts might
> be a better option than this patch (which I'm dropping), since there
> is nothing stopping other workloads from hitting higher OPPs.
Oh yeah that sounds like a more sensible workaround than mine.
>
> I'm slightly curious why I didn't have problems at higher OPPs on my
> c630 laptop (sdm850)
Perhaps you won the silicon lottery - iirc sdm850 is binned for higher clocks out of the factory.

Would it be best to drop the OPPs for all devices? Or just those affected? I guess it's possible another c630 might
crash where yours doesn't?
>
> BR,
> -R
>
>>
>> On 29/07/2021 19:39, Rob Clark wrote:
>>> From: Rob Clark <[email protected]>
>>>
>>> The more frequent frequency transitions resulting from clamping freq to
>>> minimum when the GPU is idle seems to be causing some issue with the bus
>>> getting voted off when it should be on. (An enable racing with an async
>>> disable?) This might be a problem outside of the GPU, as I can't
>>> reproduce this on a618 which uses the same GMU fw and same mechanism to
>>> communicate with GMU to set opp. For now, just revert to previous
>>> devfreq behavior on a630 until the issue is understood.
>>>
>>> Reported-by: Caleb Connolly <[email protected]>
>>> Fixes: 9bc95570175a ("drm/msm: Devfreq tuning")
>>> Signed-off-by: Rob Clark <[email protected]>
>>> ---
>>> drivers/gpu/drm/msm/adreno/adreno_gpu.c | 3 +++
>>> drivers/gpu/drm/msm/msm_gpu.h | 2 ++
>>> drivers/gpu/drm/msm/msm_gpu_devfreq.c | 12 ++++++++++++
>>> 3 files changed, 17 insertions(+)
>>>
>>> diff --git a/drivers/gpu/drm/msm/adreno/adreno_gpu.c b/drivers/gpu/drm/msm/adreno/adreno_gpu.c
>>> index 748665232d29..9fd08b413010 100644
>>> --- a/drivers/gpu/drm/msm/adreno/adreno_gpu.c
>>> +++ b/drivers/gpu/drm/msm/adreno/adreno_gpu.c
>>> @@ -945,6 +945,9 @@ int adreno_gpu_init(struct drm_device *drm, struct platform_device *pdev,
>>> pm_runtime_use_autosuspend(dev);
>>> pm_runtime_enable(dev);
>>>
>>> + if (adreno_is_a630(adreno_gpu))
>>> + gpu->devfreq.disable_freq_clamping = true;
>>> +
>>> return msm_gpu_init(drm, pdev, &adreno_gpu->base, &funcs->base,
>>> adreno_gpu->info->name, &adreno_gpu_config);
>>> }
>>> diff --git a/drivers/gpu/drm/msm/msm_gpu.h b/drivers/gpu/drm/msm/msm_gpu.h
>>> index 0e4b45bff2e6..7e11b667f939 100644
>>> --- a/drivers/gpu/drm/msm/msm_gpu.h
>>> +++ b/drivers/gpu/drm/msm/msm_gpu.h
>>> @@ -112,6 +112,8 @@ struct msm_gpu_devfreq {
>>> * it is inactive.
>>> */
>>> unsigned long idle_freq;
>>> +
>>> + bool disable_freq_clamping;
>>> };
>>>
>>> struct msm_gpu {
>>> diff --git a/drivers/gpu/drm/msm/msm_gpu_devfreq.c b/drivers/gpu/drm/msm/msm_gpu_devfreq.c
>>> index 0a1ee20296a2..a832af436251 100644
>>> --- a/drivers/gpu/drm/msm/msm_gpu_devfreq.c
>>> +++ b/drivers/gpu/drm/msm/msm_gpu_devfreq.c
>>> @@ -94,6 +94,12 @@ void msm_devfreq_init(struct msm_gpu *gpu)
>>> if (!gpu->funcs->gpu_busy)
>>> return;
>>>
>>> + /* Revert to previous polling interval if we aren't using freq clamping
>>> + * to preserve previous behavior
>>> + */
>>> + if (gpu->devfreq.disable_freq_clamping)
>>> + msm_devfreq_profile.polling_ms = 10;
>>> +
>>> msm_devfreq_profile.initial_freq = gpu->fast_rate;
>>>
>>> /*
>>> @@ -151,6 +157,9 @@ void msm_devfreq_active(struct msm_gpu *gpu)
>>> unsigned int idle_time;
>>> unsigned long target_freq = df->idle_freq;
>>>
>>> + if (gpu->devfreq.disable_freq_clamping)
>>> + return;
>>> +
>>> /*
>>> * Hold devfreq lock to synchronize with get_dev_status()/
>>> * target() callbacks
>>> @@ -186,6 +195,9 @@ void msm_devfreq_idle(struct msm_gpu *gpu)
>>> struct msm_gpu_devfreq *df = &gpu->devfreq;
>>> unsigned long idle_freq, target_freq = 0;
>>>
>>> + if (gpu->devfreq.disable_freq_clamping)
>>> + return;
>>> +
>>> /*
>>> * Hold devfreq lock to synchronize with get_dev_status()/
>>> * target() callbacks
>>>
>>
>> --
>> Kind Regards,
>> Caleb (they/them)

--
Kind Regards,
Caleb (they/them)

2021-07-29 20:51:28

by Rob Clark

Subject: Re: [PATCH] drm/msm: Disable frequency clamping on a630

On Thu, Jul 29, 2021 at 1:28 PM Caleb Connolly
<[email protected]> wrote:
>
>
>
> On 29/07/2021 21:24, Rob Clark wrote:
> > On Thu, Jul 29, 2021 at 1:06 PM Caleb Connolly
> > <[email protected]> wrote:
> >>
> >> Hi Rob,
> >>
> >> I've done some more testing! It looks like before that patch ("drm/msm: Devfreq tuning") the GPU would never get above
> >> the second frequency in the OPP table (342MHz) (at least, not in glxgears). With the patch applied it would more
> >> aggressively jump up to the max frequency which seems to be unstable at the default regulator voltages.
> >
> > *ohh*, yeah, ok, that would explain it
> >
> >> Hacking the pm8005 s1 regulator (which provides VDD_GFX) up to 0.988v (instead of the stock 0.516v) makes the GPU stable
> >> at the higher frequencies.
> >>
> >> Applying this patch reverts the behaviour, and the GPU never goes above 342MHz in glxgears, losing ~30% performance in
> >> glxgear.
> >>
> >> I think (?) that enabling CPR support would be the proper solution to this - that would ensure that the regulators run
> >> at the voltage the hardware needs to be stable.
> >>
> >> Is hacking the voltage higher (although ideally not quite that high) an acceptable short term solution until we have
> >> CPR? Or would it be safer to just not make use of the higher frequencies on a630 for now?
> >>
> >
> > tbh, I'm not sure about the regulator stuff and CPR.. Bjorn is already
> > on CC and I added sboyd, maybe one of them knows better.
> >
> > In the short term, removing the higher problematic OPPs from dts might
> > be a better option than this patch (which I'm dropping), since there
> > is nothing stopping other workloads from hitting higher OPPs.
> Oh yeah that sounds like a more sensible workaround than mine .
> >
> > I'm slightly curious why I didn't have problems at higher OPPs on my
> > c630 laptop (sdm850)
> Perhaps you won the sillicon lottery - iirc sdm850 is binned for higher clocks as is out of the factory.
>
> Would it be best to drop the OPPs for all devices? Or just those affected? I guess it's possible another c630 might
> crash where yours doesn't?

I've not heard any reports of similar issues from the handful of other
folks with c630's on #aarch64-laptops.. but I can't really say if that
is luck or not.

Maybe just remove it for affected devices? But I'll defer to Bjorn.

BR,
-R

2021-08-07 19:23:31

by Caleb Connolly

Subject: Re: [PATCH] drm/msm: Disable frequency clamping on a630

Hi Rob, Akhil,

On 29/07/2021 21:53, Rob Clark wrote:
> On Thu, Jul 29, 2021 at 1:28 PM Caleb Connolly
> <[email protected]> wrote:
>>
>>
>>
>> On 29/07/2021 21:24, Rob Clark wrote:
>>> On Thu, Jul 29, 2021 at 1:06 PM Caleb Connolly
>>> <[email protected]> wrote:
>>>>
>>>> Hi Rob,
>>>>
>>>> I've done some more testing! It looks like before that patch ("drm/msm: Devfreq tuning") the GPU would never get above
>>>> the second frequency in the OPP table (342MHz) (at least, not in glxgears). With the patch applied it would more
>>>> aggressively jump up to the max frequency which seems to be unstable at the default regulator voltages.
>>>
>>> *ohh*, yeah, ok, that would explain it
>>>
>>>> Hacking the pm8005 s1 regulator (which provides VDD_GFX) up to 0.988v (instead of the stock 0.516v) makes the GPU stable
>>>> at the higher frequencies.
>>>>
>>>> Applying this patch reverts the behaviour, and the GPU never goes above 342MHz in glxgears, losing ~30% performance in
>>>> glxgear.
>>>>
>>>> I think (?) that enabling CPR support would be the proper solution to this - that would ensure that the regulators run
>>>> at the voltage the hardware needs to be stable.
>>>>
>>>> Is hacking the voltage higher (although ideally not quite that high) an acceptable short term solution until we have
>>>> CPR? Or would it be safer to just not make use of the higher frequencies on a630 for now?
>>>>
>>>
>>> tbh, I'm not sure about the regulator stuff and CPR.. Bjorn is already
>>> on CC and I added sboyd, maybe one of them knows better.
>>>
>>> In the short term, removing the higher problematic OPPs from dts might
>>> be a better option than this patch (which I'm dropping), since there
>>> is nothing stopping other workloads from hitting higher OPPs.
>> Oh yeah that sounds like a more sensible workaround than mine .
>>>
>>> I'm slightly curious why I didn't have problems at higher OPPs on my
>>> c630 laptop (sdm850)
>> Perhaps you won the sillicon lottery - iirc sdm850 is binned for higher clocks as is out of the factory.
>>
>> Would it be best to drop the OPPs for all devices? Or just those affected? I guess it's possible another c630 might
>> crash where yours doesn't?
>
> I've not heard any reports of similar issues from the handful of other
> folks with c630's on #aarch64-laptops.. but I can't really say if that
> is luck or not.
It looks like this affects at least the OnePlus 6 and PocoPhone F1. I've done some more poking, and the following diff
seems to fix the stability issues completely; it seems the delay is required to let the update propagate.

This doesn't feel like the right fix, but hopefully it's enough to come up with a better solution than disabling the new
devfreq behaviour on a630.

diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
index d7cec7f0dde0..69e2a5e84dae 100644
--- a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
+++ b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
@@ -139,6 +139,10 @@ void a6xx_gmu_set_freq(struct msm_gpu *gpu, struct dev_pm_opp *opp)
return;
}

+ dev_pm_opp_set_opp(&gpu->pdev->dev, opp);
+
+ usleep_range(300, 500);
+
gmu_write(gmu, REG_A6XX_GMU_DCVS_ACK_OPTION, 0);

gmu_write(gmu, REG_A6XX_GMU_DCVS_PERF_SETTING,
@@ -158,7 +162,6 @@ void a6xx_gmu_set_freq(struct msm_gpu *gpu, struct dev_pm_opp *opp)
if (ret)
dev_err(gmu->dev, "GMU set GPU frequency error: %d\n", ret);

- dev_pm_opp_set_opp(&gpu->pdev->dev, opp);
pm_runtime_put(gmu->dev);
}
>
> Maybe just remove it for affected devices? But I'll defer to Bjorn.
>
> BR,
> -R
>

--
Kind Regards,
Caleb (they/them)

2021-08-07 20:05:12

by Rob Clark

Subject: Re: [PATCH] drm/msm: Disable frequency clamping on a630

On Sat, Aug 7, 2021 at 12:21 PM Caleb Connolly
<[email protected]> wrote:
>
> Hi Rob, Akhil,
>
> On 29/07/2021 21:53, Rob Clark wrote:
> > On Thu, Jul 29, 2021 at 1:28 PM Caleb Connolly
> > <[email protected]> wrote:
> >>
> >>
> >>
> >> On 29/07/2021 21:24, Rob Clark wrote:
> >>> On Thu, Jul 29, 2021 at 1:06 PM Caleb Connolly
> >>> <[email protected]> wrote:
> >>>>
> >>>> Hi Rob,
> >>>>
> >>>> I've done some more testing! It looks like before that patch ("drm/msm: Devfreq tuning") the GPU would never get above
> >>>> the second frequency in the OPP table (342MHz) (at least, not in glxgears). With the patch applied it would more
> >>>> aggressively jump up to the max frequency which seems to be unstable at the default regulator voltages.
> >>>
> >>> *ohh*, yeah, ok, that would explain it
> >>>
> >>>> Hacking the pm8005 s1 regulator (which provides VDD_GFX) up to 0.988v (instead of the stock 0.516v) makes the GPU stable
> >>>> at the higher frequencies.
> >>>>
> >>>> Applying this patch reverts the behaviour, and the GPU never goes above 342MHz in glxgears, losing ~30% performance in
> >>>> glxgear.
> >>>>
> >>>> I think (?) that enabling CPR support would be the proper solution to this - that would ensure that the regulators run
> >>>> at the voltage the hardware needs to be stable.
> >>>>
> >>>> Is hacking the voltage higher (although ideally not quite that high) an acceptable short term solution until we have
> >>>> CPR? Or would it be safer to just not make use of the higher frequencies on a630 for now?
> >>>>
> >>>
> >>> tbh, I'm not sure about the regulator stuff and CPR.. Bjorn is already
> >>> on CC and I added sboyd, maybe one of them knows better.
> >>>
> >>> In the short term, removing the higher problematic OPPs from dts might
> >>> be a better option than this patch (which I'm dropping), since there
> >>> is nothing stopping other workloads from hitting higher OPPs.
> >> Oh yeah that sounds like a more sensible workaround than mine .
> >>>
> >>> I'm slightly curious why I didn't have problems at higher OPPs on my
> >>> c630 laptop (sdm850)
> >> Perhaps you won the sillicon lottery - iirc sdm850 is binned for higher clocks as is out of the factory.
> >>
> >> Would it be best to drop the OPPs for all devices? Or just those affected? I guess it's possible another c630 might
> >> crash where yours doesn't?
> >
> > I've not heard any reports of similar issues from the handful of other
> > folks with c630's on #aarch64-laptops.. but I can't really say if that
> > is luck or not.
> It looks like this affects at least the OnePlus 6 and PocoPhone F1, I've done some more poking and the following diff
> seems to fix the stability issues completely, it seems the delay is required to let the update propagate.
>
> This doesn't feel like the right fix, but hopefully it's enough to come up with a better solution than disabling the new
> devfreq behaviour on a630.
>
> diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
> index d7cec7f0dde0..69e2a5e84dae 100644
> --- a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
> +++ b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
> @@ -139,6 +139,10 @@ void a6xx_gmu_set_freq(struct msm_gpu *gpu, struct dev_pm_opp *opp)
> return;
> }
>
> + dev_pm_opp_set_opp(&gpu->pdev->dev, opp);
> +
> + usleep_range(300, 500);
> +

Hmm, this is going to be in the critical path on idle -> active
transition (ie. think response time to user-input).. so we defn don't
want to do this unconditionally..

If I understand the problem, we just want to limit how far we jump the
gpu freq in one go.. maybe deleting the lowest (and perhaps highest)
OPP would accomplish that? Could that be done in the board(s)'s
toplevel dts files?

BR,
-R

> gmu_write(gmu, REG_A6XX_GMU_DCVS_ACK_OPTION, 0);
>
> gmu_write(gmu, REG_A6XX_GMU_DCVS_PERF_SETTING,
> @@ -158,7 +162,6 @@ void a6xx_gmu_set_freq(struct msm_gpu *gpu, struct dev_pm_opp *opp)
> if (ret)
> dev_err(gmu->dev, "GMU set GPU frequency error: %d\n", ret);
>
> - dev_pm_opp_set_opp(&gpu->pdev->dev, opp);
> pm_runtime_put(gmu->dev);
> }
> >
> > Maybe just remove it for affected devices? But I'll defer to Bjorn.
> >
> > BR,
> > -R
> >
>
> --
> Kind Regards,
> Caleb (they/them)

2021-08-08 14:36:51

by Caleb Connolly

Subject: Re: [PATCH] drm/msm: Disable frequency clamping on a630



On 07/08/2021 21:04, Rob Clark wrote:
> On Sat, Aug 7, 2021 at 12:21 PM Caleb Connolly
> <[email protected]> wrote:
>>
>> Hi Rob, Akhil,
>>
>> On 29/07/2021 21:53, Rob Clark wrote:
>>> On Thu, Jul 29, 2021 at 1:28 PM Caleb Connolly
>>> <[email protected]> wrote:
>>>>
>>>>
>>>>
>>>> On 29/07/2021 21:24, Rob Clark wrote:
>>>>> On Thu, Jul 29, 2021 at 1:06 PM Caleb Connolly
>>>>> <[email protected]> wrote:
>>>>>>
>>>>>> Hi Rob,
>>>>>>
>>>>>> I've done some more testing! It looks like before that patch ("drm/msm: Devfreq tuning") the GPU would never get above
>>>>>> the second frequency in the OPP table (342MHz) (at least, not in glxgears). With the patch applied it would more
>>>>>> aggressively jump up to the max frequency which seems to be unstable at the default regulator voltages.
>>>>>
>>>>> *ohh*, yeah, ok, that would explain it
>>>>>
>>>>>> Hacking the pm8005 s1 regulator (which provides VDD_GFX) up to 0.988v (instead of the stock 0.516v) makes the GPU stable
>>>>>> at the higher frequencies.
>>>>>>
>>>>>> Applying this patch reverts the behaviour, and the GPU never goes above 342MHz in glxgears, losing ~30% performance in
>>>>>> glxgear.
>>>>>>
>>>>>> I think (?) that enabling CPR support would be the proper solution to this - that would ensure that the regulators run
>>>>>> at the voltage the hardware needs to be stable.
>>>>>>
>>>>>> Is hacking the voltage higher (although ideally not quite that high) an acceptable short term solution until we have
>>>>>> CPR? Or would it be safer to just not make use of the higher frequencies on a630 for now?
>>>>>>
>>>>>
>>>>> tbh, I'm not sure about the regulator stuff and CPR.. Bjorn is already
>>>>> on CC and I added sboyd, maybe one of them knows better.
>>>>>
>>>>> In the short term, removing the higher problematic OPPs from dts might
>>>>> be a better option than this patch (which I'm dropping), since there
>>>>> is nothing stopping other workloads from hitting higher OPPs.
>>>> Oh yeah that sounds like a more sensible workaround than mine .
>>>>>
>>>>> I'm slightly curious why I didn't have problems at higher OPPs on my
>>>>> c630 laptop (sdm850)
>>>> Perhaps you won the sillicon lottery - iirc sdm850 is binned for higher clocks as is out of the factory.
>>>>
>>>> Would it be best to drop the OPPs for all devices? Or just those affected? I guess it's possible another c630 might
>>>> crash where yours doesn't?
>>>
>>> I've not heard any reports of similar issues from the handful of other
>>> folks with c630's on #aarch64-laptops.. but I can't really say if that
>>> is luck or not.
>> It looks like this affects at least the OnePlus 6 and PocoPhone F1, I've done some more poking and the following diff
>> seems to fix the stability issues completely, it seems the delay is required to let the update propagate.
>>
>> This doesn't feel like the right fix, but hopefully it's enough to come up with a better solution than disabling the new
>> devfreq behaviour on a630.
>>
>> diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
>> index d7cec7f0dde0..69e2a5e84dae 100644
>> --- a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
>> +++ b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
>> @@ -139,6 +139,10 @@ void a6xx_gmu_set_freq(struct msm_gpu *gpu, struct dev_pm_opp *opp)
>> return;
>> }
>>
>> + dev_pm_opp_set_opp(&gpu->pdev->dev, opp);
>> +
>> + usleep_range(300, 500);
>> +
>
> Hmm, this is going to be in the critical path on idle -> active
> transition (ie. think response time to user-input).. so we defn don't
> want to do this unconditionally..
>
> If I understand the problem, we just want to limit how far we jump the
> gpu freq in one go.. maybe deleting the lowest (and perhaps highest)
> OPP would accomplish that? Could that be done in the board(s)'s
> toplevel dts files?
That would be a workaround; however, I'd really like to avoid limiting performance as a solution if I can help it,
especially as the fix might just be "set the opp first, wait for it to apply, then set the core clock".

Is there a sensible way to get a callback from the opp notify chain? Or from rpmh directly? Or is this solution really
not the right way to go?
>
> BR,
> -R
>
>> gmu_write(gmu, REG_A6XX_GMU_DCVS_ACK_OPTION, 0);
>>
>> gmu_write(gmu, REG_A6XX_GMU_DCVS_PERF_SETTING,
>> @@ -158,7 +162,6 @@ void a6xx_gmu_set_freq(struct msm_gpu *gpu, struct dev_pm_opp *opp)
>> if (ret)
>> dev_err(gmu->dev, "GMU set GPU frequency error: %d\n", ret);
>>
>> - dev_pm_opp_set_opp(&gpu->pdev->dev, opp);
>> pm_runtime_put(gmu->dev);
>> }
>>>
>>> Maybe just remove it for affected devices? But I'll defer to Bjorn.
>>>
>>> BR,
>>> -R
>>>
>>
>> --
>> Kind Regards,
>> Caleb (they/them)

--
Kind Regards,
Caleb (they/them)

2021-08-08 16:51:45

by Rob Clark

Subject: Re: [PATCH] drm/msm: Disable frequency clamping on a630

On Sun, Aug 8, 2021 at 7:33 AM Caleb Connolly <[email protected]> wrote:
>
>
>
> On 07/08/2021 21:04, Rob Clark wrote:
> > On Sat, Aug 7, 2021 at 12:21 PM Caleb Connolly
> > <[email protected]> wrote:
> >>
> >> Hi Rob, Akhil,
> >>
> >> On 29/07/2021 21:53, Rob Clark wrote:
> >>> On Thu, Jul 29, 2021 at 1:28 PM Caleb Connolly
> >>> <[email protected]> wrote:
> >>>>
> >>>>
> >>>>
> >>>> On 29/07/2021 21:24, Rob Clark wrote:
> >>>>> On Thu, Jul 29, 2021 at 1:06 PM Caleb Connolly
> >>>>> <[email protected]> wrote:
> >>>>>>
> >>>>>> Hi Rob,
> >>>>>>
> >>>>>> I've done some more testing! It looks like before that patch ("drm/msm: Devfreq tuning") the GPU would never get above
> >>>>>> the second frequency in the OPP table (342MHz) (at least, not in glxgears). With the patch applied it would more
> >>>>>> aggressively jump up to the max frequency which seems to be unstable at the default regulator voltages.
> >>>>>
> >>>>> *ohh*, yeah, ok, that would explain it
> >>>>>
> >>>>>> Hacking the pm8005 s1 regulator (which provides VDD_GFX) up to 0.988v (instead of the stock 0.516v) makes the GPU stable
> >>>>>> at the higher frequencies.
> >>>>>>
> >>>>>> Applying this patch reverts the behaviour, and the GPU never goes above 342MHz in glxgears, losing ~30% performance in
> >>>>>> glxgear.
> >>>>>>
> >>>>>> I think (?) that enabling CPR support would be the proper solution to this - that would ensure that the regulators run
> >>>>>> at the voltage the hardware needs to be stable.
> >>>>>>
> >>>>>> Is hacking the voltage higher (although ideally not quite that high) an acceptable short term solution until we have
> >>>>>> CPR? Or would it be safer to just not make use of the higher frequencies on a630 for now?
> >>>>>>
> >>>>>
> >>>>> tbh, I'm not sure about the regulator stuff and CPR.. Bjorn is already
> >>>>> on CC and I added sboyd, maybe one of them knows better.
> >>>>>
> >>>>> In the short term, removing the higher problematic OPPs from dts might
> >>>>> be a better option than this patch (which I'm dropping), since there
> >>>>> is nothing stopping other workloads from hitting higher OPPs.
> >>>> Oh yeah that sounds like a more sensible workaround than mine .
> >>>>>
> >>>>> I'm slightly curious why I didn't have problems at higher OPPs on my
> >>>>> c630 laptop (sdm850)
> >>>> Perhaps you won the sillicon lottery - iirc sdm850 is binned for higher clocks as is out of the factory.
> >>>>
> >>>> Would it be best to drop the OPPs for all devices? Or just those affected? I guess it's possible another c630 might
> >>>> crash where yours doesn't?
> >>>
> >>> I've not heard any reports of similar issues from the handful of other
> >>> folks with c630's on #aarch64-laptops.. but I can't really say if that
> >>> is luck or not.
> >> It looks like this affects at least the OnePlus 6 and PocoPhone F1, I've done some more poking and the following diff
> >> seems to fix the stability issues completely, it seems the delay is required to let the update propagate.
> >>
> >> This doesn't feel like the right fix, but hopefully it's enough to come up with a better solution than disabling the new
> >> devfreq behaviour on a630.
> >>
> >> diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
> >> index d7cec7f0dde0..69e2a5e84dae 100644
> >> --- a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
> >> +++ b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
> >> @@ -139,6 +139,10 @@ void a6xx_gmu_set_freq(struct msm_gpu *gpu, struct dev_pm_opp *opp)
> >> return;
> >> }
> >>
> >> + dev_pm_opp_set_opp(&gpu->pdev->dev, opp);
> >> +
> >> + usleep_range(300, 500);
> >> +
> >
> > Hmm, this is going to be in the critical path on idle -> active
> > transition (ie. think response time to user-input).. so we defn don't
> > want to do this unconditionally..
> >
> > If I understand the problem, we just want to limit how far we jump the
> > gpu freq in one go.. maybe deleting the lowest (and perhaps highest)
> > OPP would accomplish that? Could that be done in the board(s)'s
> > toplevel dts files?
> That would be a workaround, however I'd really like to avoid limiting performance as a solution if I can help it,
> especially as the fix might just be "set the opp first, wait for it to apply, then set the core clock".
>
> Is there a sensible way to get a callback from the opp notify chain? Or from rpmh directly? Or is this solution really
> not the right way to go?

It does seem a bit strange to me that we are telling GMU to change
freq before calling dev_pm_opp_set_opp().. if dev_pm_opp_set_opp() is
increasing voltage, it seems like you'd want to do that *before*
increasing freq (but reverse the order when decreasing freq).. But I'm
not an expert on the ways of the GMU.. maybe Akhil or Jordan knows
better how this is supposed to work.
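As a minimal sketch of the ordering described above (raise the rail before the clock when scaling up, drop the clock
before the rail when scaling down) - purely illustrative, with set_core_clk() as a hypothetical stand-in for the GMU
DCVS request rather than the real a6xx path:

#include <linux/device.h>
#include <linux/pm_opp.h>

/* Hypothetical helper that actually changes the GPU core clock (stand-in
 * for the GMU OOB/DCVS request in a6xx_gmu_set_freq()).
 */
static int set_core_clk(struct device *dev, unsigned long freq)
{
        /* program the clock / ask the GMU here */
        return 0;
}

static int sketch_set_freq(struct device *dev, struct dev_pm_opp *opp,
                           unsigned long cur_freq)
{
        unsigned long new_freq = dev_pm_opp_get_freq(opp);
        int ret;

        if (new_freq > cur_freq) {
                /* Scaling up: raise the rail first, then the clock */
                ret = dev_pm_opp_set_opp(dev, opp);
                if (ret)
                        return ret;
                return set_core_clk(dev, new_freq);
        }

        /* Scaling down: drop the clock first, then the rail */
        ret = set_core_clk(dev, new_freq);
        if (ret)
                return ret;
        return dev_pm_opp_set_opp(dev, opp);
}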

But the delay seems like papering something over, and I'm trying to go
in the other direction and reduce latency between user input and
pageflip..

BR,
-R

> >
> > BR,
> > -R
> >
> >> gmu_write(gmu, REG_A6XX_GMU_DCVS_ACK_OPTION, 0);
> >>
> >> gmu_write(gmu, REG_A6XX_GMU_DCVS_PERF_SETTING,
> >> @@ -158,7 +162,6 @@ void a6xx_gmu_set_freq(struct msm_gpu *gpu, struct dev_pm_opp *opp)
> >> if (ret)
> >> dev_err(gmu->dev, "GMU set GPU frequency error: %d\n", ret);
> >>
> >> - dev_pm_opp_set_opp(&gpu->pdev->dev, opp);
> >> pm_runtime_put(gmu->dev);
> >> }
> >>>
> >>> Maybe just remove it for affected devices? But I'll defer to Bjorn.
> >>>
> >>> BR,
> >>> -R
> >>>
> >>
> >> --
> >> Kind Regards,
> >> Caleb (they/them)
>
> --
> Kind Regards,
> Caleb (they/them)

2021-08-09 16:13:12

by Rob Clark

Subject: Re: [PATCH] drm/msm: Disable frequency clamping on a630

On Mon, Aug 9, 2021 at 7:52 AM Akhil P Oommen <[email protected]> wrote:
>
> On 8/8/2021 10:22 PM, Rob Clark wrote:
> > On Sun, Aug 8, 2021 at 7:33 AM Caleb Connolly <[email protected]> wrote:
> >>
> >>
> >>
> >> On 07/08/2021 21:04, Rob Clark wrote:
> >>> On Sat, Aug 7, 2021 at 12:21 PM Caleb Connolly
> >>> <[email protected]> wrote:
> >>>>
> >>>> Hi Rob, Akhil,
> >>>>
> >>>> On 29/07/2021 21:53, Rob Clark wrote:
> >>>>> On Thu, Jul 29, 2021 at 1:28 PM Caleb Connolly
> >>>>> <[email protected]> wrote:
> >>>>>>
> >>>>>>
> >>>>>>
> >>>>>> On 29/07/2021 21:24, Rob Clark wrote:
> >>>>>>> On Thu, Jul 29, 2021 at 1:06 PM Caleb Connolly
> >>>>>>> <[email protected]> wrote:
> >>>>>>>>
> >>>>>>>> Hi Rob,
> >>>>>>>>
> >>>>>>>> I've done some more testing! It looks like before that patch ("drm/msm: Devfreq tuning") the GPU would never get above
> >>>>>>>> the second frequency in the OPP table (342MHz) (at least, not in glxgears). With the patch applied it would more
> >>>>>>>> aggressively jump up to the max frequency which seems to be unstable at the default regulator voltages.
> >>>>>>>
> >>>>>>> *ohh*, yeah, ok, that would explain it
> >>>>>>>
> >>>>>>>> Hacking the pm8005 s1 regulator (which provides VDD_GFX) up to 0.988v (instead of the stock 0.516v) makes the GPU stable
> >>>>>>>> at the higher frequencies.
> >>>>>>>>
> >>>>>>>> Applying this patch reverts the behaviour, and the GPU never goes above 342MHz in glxgears, losing ~30% performance in
> >>>>>>>> glxgear.
> >>>>>>>>
> >>>>>>>> I think (?) that enabling CPR support would be the proper solution to this - that would ensure that the regulators run
> >>>>>>>> at the voltage the hardware needs to be stable.
> >>>>>>>>
> >>>>>>>> Is hacking the voltage higher (although ideally not quite that high) an acceptable short term solution until we have
> >>>>>>>> CPR? Or would it be safer to just not make use of the higher frequencies on a630 for now?
> >>>>>>>>
> >>>>>>>
> >>>>>>> tbh, I'm not sure about the regulator stuff and CPR.. Bjorn is already
> >>>>>>> on CC and I added sboyd, maybe one of them knows better.
> >>>>>>>
> >>>>>>> In the short term, removing the higher problematic OPPs from dts might
> >>>>>>> be a better option than this patch (which I'm dropping), since there
> >>>>>>> is nothing stopping other workloads from hitting higher OPPs.
> >>>>>> Oh yeah that sounds like a more sensible workaround than mine .
> >>>>>>>
> >>>>>>> I'm slightly curious why I didn't have problems at higher OPPs on my
> >>>>>>> c630 laptop (sdm850)
> >>>>>> Perhaps you won the sillicon lottery - iirc sdm850 is binned for higher clocks as is out of the factory.
> >>>>>>
> >>>>>> Would it be best to drop the OPPs for all devices? Or just those affected? I guess it's possible another c630 might
> >>>>>> crash where yours doesn't?
> >>>>>
> >>>>> I've not heard any reports of similar issues from the handful of other
> >>>>> folks with c630's on #aarch64-laptops.. but I can't really say if that
> >>>>> is luck or not.
> >>>> It looks like this affects at least the OnePlus 6 and PocoPhone F1, I've done some more poking and the following diff
> >>>> seems to fix the stability issues completely, it seems the delay is required to let the update propagate.
> >>>>
> >>>> This doesn't feel like the right fix, but hopefully it's enough to come up with a better solution than disabling the new
> >>>> devfreq behaviour on a630.
> >>>>
> >>>> diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
> >>>> index d7cec7f0dde0..69e2a5e84dae 100644
> >>>> --- a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
> >>>> +++ b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
> >>>> @@ -139,6 +139,10 @@ void a6xx_gmu_set_freq(struct msm_gpu *gpu, struct dev_pm_opp *opp)
> >>>> return;
> >>>> }
> >>>>
> >>>> + dev_pm_opp_set_opp(&gpu->pdev->dev, opp);
> >>>> +
> >>>> + usleep_range(300, 500);
> >>>> +
> >>>
>
> I am a bit confused. We don't define a power domain for gpu in dt,
> correct? Then what exactly set_opp do here? Do you think this usleep is
> what is helping here somehow to mask the issue?

Hmm, I thought "opp-level = RPMH_REGULATOR_LEVEL_*" did *something*,
but tbh I'm not sure exactly what..

> I feel we should just leave the new dcvs feature (shall we call it NAP?)
> disabled for a630 (and 10ms devfreq interval), until this is root caused.

I suppose "NAP" is a reasonable name.

But I think that reverting to previous behavior would not be enough;
there is nothing stopping devfreq from jumping from min to max freq,
which AFAIU should be enough to trigger this. I guess that there just
hasn't been enough testing with different game workloads on those
phones to trigger this.

That said, I haven't seen similar issues on my sdm850 laptop, where I
defn have triggered min->max freq transitions.. I guess it would be
interesting to know if this issue could be reproduced on db845c, or if
it really is board specific?

To work around it, I think we'd need to implement some way to limit that
maximum frequency jump (and then use delayed work to continue ramping
up the freq over time until we hit the target).. which seems like a
lot of work if this is just a board(s) specific workaround and isn't
needed once CPR is supported
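A rough sketch of what such a step-limited ramp could look like - the freq_ramp struct, the set_freq() backend, and the
step/interval values are all hypothetical, this is just the shape of the idea, not msm driver code:

#include <linux/jiffies.h>
#include <linux/kernel.h>
#include <linux/workqueue.h>

#define MAX_FREQ_STEP_HZ        (200 * 1000 * 1000)     /* arbitrary step size */
#define RAMP_INTERVAL_MS        5                       /* arbitrary ramp period */

/* Hypothetical state; in the real driver this would live in msm_gpu_devfreq.
 * INIT_DELAYED_WORK(&ramp->work, freq_ramp_worker) is assumed at init time.
 */
struct freq_ramp {
        struct delayed_work work;
        unsigned long cur_freq;
        unsigned long target_freq;
};

/* Hypothetical backend that actually programs the frequency */
static void set_freq(unsigned long freq)
{
        /* GMU DCVS request would go here */
}

static void freq_ramp_worker(struct work_struct *work)
{
        struct freq_ramp *ramp =
                container_of(to_delayed_work(work), struct freq_ramp, work);
        unsigned long next = min(ramp->cur_freq + MAX_FREQ_STEP_HZ,
                                 ramp->target_freq);

        set_freq(next);
        ramp->cur_freq = next;

        /* Keep stepping until we reach the requested frequency */
        if (ramp->cur_freq < ramp->target_freq)
                schedule_delayed_work(&ramp->work,
                                      msecs_to_jiffies(RAMP_INTERVAL_MS));
}

static void freq_ramp_request(struct freq_ramp *ramp, unsigned long target)
{
        ramp->target_freq = target;

        if (target <= ramp->cur_freq) {
                /* Dropping the freq doesn't need to be rate-limited */
                cancel_delayed_work(&ramp->work);
                set_freq(target);
                ramp->cur_freq = target;
                return;
        }

        /* First step right away, the rest spread out over time */
        mod_delayed_work(system_wq, &ramp->work, 0);
}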

BR,
-R

> >>> Hmm, this is going to be in the critical path on idle -> active
> >>> transition (ie. think response time to user-input).. so we defn don't
> >>> want to do this unconditionally..
> >>>
> >>> If I understand the problem, we just want to limit how far we jump the
> >>> gpu freq in one go.. maybe deleting the lowest (and perhaps highest)
> >>> OPP would accomplish that? Could that be done in the board(s)'s
> >>> toplevel dts files?
> >> That would be a workaround, however I'd really like to avoid limiting performance as a solution if I can help it,
> >> especially as the fix might just be "set the opp first, wait for it to apply, then set the core clock".
> >>
> >> Is there a sensible way to get a callback from the opp notify chain? Or from rpmh directly? Or is this solution really
> >> not the right way to go?
> >
> > It does seem a bit strange to me that we are telling GMU to change
> > freq before calling dev_pm_opp_set_opp().. if dev_pm_opp_set_opp() is
> > increasing voltage, it seems like you'd want to do that *before*
> > increasing freq (but reverse the order when decreasing freq).. But I'm
> > not an expert on the ways of the GMU.. maybe Akhil or Jordan knows
> > better how this is supposed to work.
>
> For legacy gmu, we trigger DCVS using DCVS OOB which comes later in this
> function. But the order between regulator and clock which you mentioned
> is correct.
>
> >
> > But the delay seems like papering something over, and I'm trying to go
> > in the other direction and reduce latency between user input and
> > pageflip..
> >
> > BR,
> > -R
> >
> >>>
> >>> BR,
> >>> -R
> >>>
> >>>> gmu_write(gmu, REG_A6XX_GMU_DCVS_ACK_OPTION, 0);
> >>>>
> >>>> gmu_write(gmu, REG_A6XX_GMU_DCVS_PERF_SETTING,
> >>>> @@ -158,7 +162,6 @@ void a6xx_gmu_set_freq(struct msm_gpu *gpu, struct dev_pm_opp *opp)
> >>>> if (ret)
> >>>> dev_err(gmu->dev, "GMU set GPU frequency error: %d\n", ret);
> >>>>
> >>>> - dev_pm_opp_set_opp(&gpu->pdev->dev, opp);
> >>>> pm_runtime_put(gmu->dev);
> >>>> }
> >>>>>
> >>>>> Maybe just remove it for affected devices? But I'll defer to Bjorn.
> >>>>>
> >>>>> BR,
> >>>>> -R
> >>>>>
> >>>>
> >>>> --
> >>>> Kind Regards,
> >>>> Caleb (they/them)
> >>
> >> --
> >> Kind Regards,
> >> Caleb (they/them)
>

2021-08-09 16:21:52

by Caleb Connolly

Subject: Re: [PATCH] drm/msm: Disable frequency clamping on a630



On 09/08/2021 17:12, Rob Clark wrote:
> On Mon, Aug 9, 2021 at 7:52 AM Akhil P Oommen <[email protected]> wrote:
>>
>> On 8/8/2021 10:22 PM, Rob Clark wrote:
>>> On Sun, Aug 8, 2021 at 7:33 AM Caleb Connolly <[email protected]> wrote:
>>>>
>>>>
>>>>
>>>> On 07/08/2021 21:04, Rob Clark wrote:
>>>>> On Sat, Aug 7, 2021 at 12:21 PM Caleb Connolly
>>>>> <[email protected]> wrote:
>>>>>>
>>>>>> Hi Rob, Akhil,
>>>>>>
>>>>>> On 29/07/2021 21:53, Rob Clark wrote:
>>>>>>> On Thu, Jul 29, 2021 at 1:28 PM Caleb Connolly
>>>>>>> <[email protected]> wrote:
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> On 29/07/2021 21:24, Rob Clark wrote:
>>>>>>>>> On Thu, Jul 29, 2021 at 1:06 PM Caleb Connolly
>>>>>>>>> <[email protected]> wrote:
>>>>>>>>>>
>>>>>>>>>> Hi Rob,
>>>>>>>>>>
>>>>>>>>>> I've done some more testing! It looks like before that patch ("drm/msm: Devfreq tuning") the GPU would never get above
>>>>>>>>>> the second frequency in the OPP table (342MHz) (at least, not in glxgears). With the patch applied it would more
>>>>>>>>>> aggressively jump up to the max frequency which seems to be unstable at the default regulator voltages.
>>>>>>>>>
>>>>>>>>> *ohh*, yeah, ok, that would explain it
>>>>>>>>>
>>>>>>>>>> Hacking the pm8005 s1 regulator (which provides VDD_GFX) up to 0.988v (instead of the stock 0.516v) makes the GPU stable
>>>>>>>>>> at the higher frequencies.
>>>>>>>>>>
>>>>>>>>>> Applying this patch reverts the behaviour, and the GPU never goes above 342MHz in glxgears, losing ~30% performance in
>>>>>>>>>> glxgear.
>>>>>>>>>>
>>>>>>>>>> I think (?) that enabling CPR support would be the proper solution to this - that would ensure that the regulators run
>>>>>>>>>> at the voltage the hardware needs to be stable.
>>>>>>>>>>
>>>>>>>>>> Is hacking the voltage higher (although ideally not quite that high) an acceptable short term solution until we have
>>>>>>>>>> CPR? Or would it be safer to just not make use of the higher frequencies on a630 for now?
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>> tbh, I'm not sure about the regulator stuff and CPR.. Bjorn is already
>>>>>>>>> on CC and I added sboyd, maybe one of them knows better.
>>>>>>>>>
>>>>>>>>> In the short term, removing the higher problematic OPPs from dts might
>>>>>>>>> be a better option than this patch (which I'm dropping), since there
>>>>>>>>> is nothing stopping other workloads from hitting higher OPPs.
>>>>>>>> Oh yeah that sounds like a more sensible workaround than mine .
>>>>>>>>>
>>>>>>>>> I'm slightly curious why I didn't have problems at higher OPPs on my
>>>>>>>>> c630 laptop (sdm850)
>>>>>>>> Perhaps you won the sillicon lottery - iirc sdm850 is binned for higher clocks as is out of the factory.
>>>>>>>>
>>>>>>>> Would it be best to drop the OPPs for all devices? Or just those affected? I guess it's possible another c630 might
>>>>>>>> crash where yours doesn't?
>>>>>>>
>>>>>>> I've not heard any reports of similar issues from the handful of other
>>>>>>> folks with c630's on #aarch64-laptops.. but I can't really say if that
>>>>>>> is luck or not.
>>>>>> It looks like this affects at least the OnePlus 6 and PocoPhone F1, I've done some more poking and the following diff
>>>>>> seems to fix the stability issues completely, it seems the delay is required to let the update propagate.
>>>>>>
>>>>>> This doesn't feel like the right fix, but hopefully it's enough to come up with a better solution than disabling the new
>>>>>> devfreq behaviour on a630.
>>>>>>
>>>>>> diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
>>>>>> index d7cec7f0dde0..69e2a5e84dae 100644
>>>>>> --- a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
>>>>>> +++ b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
>>>>>> @@ -139,6 +139,10 @@ void a6xx_gmu_set_freq(struct msm_gpu *gpu, struct dev_pm_opp *opp)
>>>>>> return;
>>>>>> }
>>>>>>
>>>>>> + dev_pm_opp_set_opp(&gpu->pdev->dev, opp);
>>>>>> +
>>>>>> + usleep_range(300, 500);
>>>>>> +
>>>>>
>>
>> I am a bit confused. We don't define a power domain for gpu in dt,
>> correct? Then what exactly set_opp do here? Do you think this usleep is
>> what is helping here somehow to mask the issue?
The power domains (for cx and gx) are defined in the GMU DT, the OPPs in the GPU DT. For the sake of simplicity I'll
refer to the lowest frequency (257000000) and OPP level (RPMH_REGULATOR_LEVEL_LOW_SVS) as the "min" state, and the
highest frequency (710000000) and OPP level (RPMH_REGULATOR_LEVEL_TURBO_L1) as the "max" state. These are defined in
sdm845.dtsi under the gpu node.

The new devfreq behaviour unmasks what I think is a driver bug: it inadvertently puts much more strain on the GPU
regulators than they usually see. With the new behaviour the GPU jumps from its min state to the max state and back
again extremely rapidly under workloads as small as refreshing the UI. Where previously the GPU would rarely, if ever,
go above 342MHz when interacting with the device, it now jumps between min and max many times per second.

If my understanding is correct, the current implementation of the GMU set_freq path is the following:
- Get the OPP for the frequency to set
- Push the frequency to the GMU - immediately updating the core clock
- Call dev_pm_opp_set_opp(), which triggers a notify chain; this winds up somewhere in power management code and causes
the gx regulator level to be updated

The regulator will then take some time to reach its new voltage level and stabilise. I believe that rapid transitions
between the min and max states - in combination with the increased current load from the GPU core - lead to the
regulator becoming unstable (e.g. when it's requested to transition from its lowest to its highest level immediately
after transitioning down), and the unstable voltage causes the GPU to crash.

The silicon lottery will of course play a role here - this is very much an edge case and would definitely be different
on a per-device and even per-unit basis.
>
> Hmm, I thought "opp-level = RPMH_REGULATOR_LEVEL_*" did *something*,
> but tbh I'm not sure exactly what..
>
>> I feel we should just leave the new dcvs feature (shall we call it NAP?)
>> disabled for a630 (and 10ms devfreq interval), until this is root caused.
I believe this hacky workaround expresses the root cause of the issue quite clearly: by setting the OPP first and
allowing the gx regulator to become stable before telling the GPU to change clock speed, we avoid the edge case and
prevent the crashes.

I took some rough measurements by adding logging to msm_devfreq_idle and generating UI updates for ~20 seconds; that
function is called about 30 times per second, which means the GPU is transitioning between the min (idle) state and the
max (active/boost) state at that rate and triggering the issue I described above. It's likely that the usleep is
helping to mask this behaviour.
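For reference, this is the sort of throwaway instrumentation that measurement implies (a hypothetical helper, not the
exact logging used), which could be called from msm_devfreq_idle():

#include <linux/ktime.h>
#include <linux/printk.h>

/* Count calls and print the approximate call rate roughly once a second */
static void devfreq_idle_rate_log(void)
{
        static ktime_t last;
        static unsigned int calls;

        calls++;
        if (ktime_ms_delta(ktime_get(), last) >= 1000) {
                pr_info("msm_devfreq_idle: ~%u calls/sec\n", calls);
                calls = 0;
                last = ktime_get();
        }
}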

I hope this serves as a slightly better explanation of what I perceive to be the issue; I realise my previous
explanations were not very adequate - apologies for all the noise.
>
> I suppose "NAP" is a reasonable name.
>
> But I think that reverting to previous behavior would not be enough,
> there is nothing stopping devfreq from jumping from min to max freq,
> which AFAIU should be enough to trigger this. I guess that there just
> hasn't been enough testing with different game workloads on those
> phones to trigger this.
Ack
>
> That said, I haven't seen similar issues on my sdm850 laptop, where I
> defn have triggered mix->max freq transitions.. I guess it would be
> interesting to know if this issue could be reproduced on db845c, or if
> it really is board specific?
My db845c arrives this week; I'll definitely try to reproduce this.
>
> To workaround, I think we'd need to implement some way to limit that
> maximum frequency jump (and then use delayed work to continue ramping
> up the freq over time until we hit the target).. which seems like a
> lot of work if this is just a board(s) specific workaround and isn't
> needed once CPR is supported
Based on my reasoning above, I came up with the following: reducing thrashing by preventing rapid idle/active
transitions. The minimum active time of 30ms was just used for testing; I think some number between 2 and 4 frames
would be a sensible choice - the higher the safer.

diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
index d7cec7f0dde0..87f2d1085c3e 100644
--- a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
+++ b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
@@ -139,6 +139,8 @@ void a6xx_gmu_set_freq(struct msm_gpu *gpu, struct dev_pm_opp *opp)
return;
}

+ dev_pm_opp_set_opp(&gpu->pdev->dev, opp);
+
gmu_write(gmu, REG_A6XX_GMU_DCVS_ACK_OPTION, 0);

gmu_write(gmu, REG_A6XX_GMU_DCVS_PERF_SETTING,
@@ -158,7 +160,6 @@ void a6xx_gmu_set_freq(struct msm_gpu *gpu, struct dev_pm_opp *opp)
if (ret)
dev_err(gmu->dev, "GMU set GPU frequency error: %d\n", ret);

- dev_pm_opp_set_opp(&gpu->pdev->dev, opp);
pm_runtime_put(gmu->dev);
}

diff --git a/drivers/gpu/drm/msm/msm_gpu.h b/drivers/gpu/drm/msm/msm_gpu.h
index 0e4b45bff2e6..0e2293bcb46d 100644
--- a/drivers/gpu/drm/msm/msm_gpu.h
+++ b/drivers/gpu/drm/msm/msm_gpu.h
@@ -99,8 +99,8 @@ struct msm_gpu_devfreq {
/** time: Time of last sampling period. */
ktime_t time;

- /** idle_time: Time of last transition to idle: */
- ktime_t idle_time;
+ /** transition_time: Time of last transition between idle/active: */
+ ktime_t transition_time;

/**
* idle_freq:
diff --git a/drivers/gpu/drm/msm/msm_gpu_devfreq.c b/drivers/gpu/drm/msm/msm_gpu_devfreq.c
index 0a1ee20296a2..774a7be33e7a 100644
--- a/drivers/gpu/drm/msm/msm_gpu_devfreq.c
+++ b/drivers/gpu/drm/msm/msm_gpu_devfreq.c
@@ -157,7 +157,7 @@ void msm_devfreq_active(struct msm_gpu *gpu)
*/
mutex_lock(&df->devfreq->lock);

- idle_time = ktime_to_ms(ktime_sub(ktime_get(), df->idle_time));
+ idle_time = ktime_to_ms(ktime_sub(ktime_get(), df->transition_time));

/*
* If we've been idle for a significant fraction of a polling
@@ -168,7 +168,7 @@ void msm_devfreq_active(struct msm_gpu *gpu)
target_freq *= 2;
}

- df->idle_freq = 0;
+ df->transition_time = ktime_get();

msm_devfreq_target(&gpu->pdev->dev, &target_freq, 0);

@@ -185,6 +185,16 @@ void msm_devfreq_idle(struct msm_gpu *gpu)
{
struct msm_gpu_devfreq *df = &gpu->devfreq;
unsigned long idle_freq, target_freq = 0;
+ unsigned int active_time;
+
+ active_time = ktime_to_ms(ktime_sub(ktime_get(), df->transition_time));
+ /*
+ * Don't go back to idle unless we've been active for at least 30ms
+ * to avoid thrashing.
+ */
+ if (active_time < 30) {
+ return;
+ }

/*
* Hold devfreq lock to synchronize with get_dev_status()/
@@ -196,7 +206,7 @@ void msm_devfreq_idle(struct msm_gpu *gpu)

msm_devfreq_target(&gpu->pdev->dev, &target_freq, 0);

- df->idle_time = ktime_get();
+ df->transition_time = ktime_get();
df->idle_freq = idle_freq;

mutex_unlock(&df->devfreq->lock);
>
> BR,
> -R
>
>>>>> Hmm, this is going to be in the critical path on idle -> active
>>>>> transition (ie. think response time to user-input).. so we defn don't
>>>>> want to do this unconditionally..
>>>>>
>>>>> If I understand the problem, we just want to limit how far we jump the
>>>>> gpu freq in one go.. maybe deleting the lowest (and perhaps highest)
>>>>> OPP would accomplish that? Could that be done in the board(s)'s
>>>>> toplevel dts files?
>>>> That would be a workaround, however I'd really like to avoid limiting performance as a solution if I can help it,
>>>> especially as the fix might just be "set the opp first, wait for it to apply, then set the core clock".
>>>>
>>>> Is there a sensible way to get a callback from the opp notify chain? Or from rpmh directly? Or is this solution really
>>>> not the right way to go?
>>>
>>> It does seem a bit strange to me that we are telling GMU to change
>>> freq before calling dev_pm_opp_set_opp().. if dev_pm_opp_set_opp() is
>>> increasing voltage, it seems like you'd want to do that *before*
>>> increasing freq (but reverse the order when decreasing freq).. But I'm
>>> not an expert on the ways of the GMU.. maybe Akhil or Jordan knows
>>> better how this is supposed to work.
>>
>> For legacy gmu, we trigger DCVS using DCVS OOB which comes later in this
>> function. But the order between regulator and clock which you mentioned
>> is correct.
>>
>>>
>>> But the delay seems like papering something over, and I'm trying to go
>>> in the other direction and reduce latency between user input and
>>> pageflip..
>>>
>>> BR,
>>> -R
>>>
>>>>>
>>>>> BR,
>>>>> -R
>>>>>
>>>>>> gmu_write(gmu, REG_A6XX_GMU_DCVS_ACK_OPTION, 0);
>>>>>>
>>>>>> gmu_write(gmu, REG_A6XX_GMU_DCVS_PERF_SETTING,
>>>>>> @@ -158,7 +162,6 @@ void a6xx_gmu_set_freq(struct msm_gpu *gpu, struct dev_pm_opp *opp)
>>>>>> if (ret)
>>>>>> dev_err(gmu->dev, "GMU set GPU frequency error: %d\n", ret);
>>>>>>
>>>>>> - dev_pm_opp_set_opp(&gpu->pdev->dev, opp);
>>>>>> pm_runtime_put(gmu->dev);
>>>>>> }
>>>>>>>
>>>>>>> Maybe just remove it for affected devices? But I'll defer to Bjorn.
>>>>>>>
>>>>>>> BR,
>>>>>>> -R
>>>>>>>
>>>>>>
>>>>>> --
>>>>>> Kind Regards,
>>>>>> Caleb (they/them)
>>>>
>>>> --
>>>> Kind Regards,
>>>> Caleb (they/them)
>>

--
Kind Regards,
Caleb (they/them)

2021-08-09 17:29:44

by Akhil P Oommen

[permalink] [raw]
Subject: Re: [PATCH] drm/msm: Disable frequency clamping on a630

On 8/9/2021 9:48 PM, Caleb Connolly wrote:
>
>
> On 09/08/2021 17:12, Rob Clark wrote:
>> On Mon, Aug 9, 2021 at 7:52 AM Akhil P Oommen <[email protected]>
>> wrote:
>>>
>>> On 8/8/2021 10:22 PM, Rob Clark wrote:
>>>> On Sun, Aug 8, 2021 at 7:33 AM Caleb Connolly
>>>> <[email protected]> wrote:
>>>>>
>>>>>
>>>>>
>>>>> On 07/08/2021 21:04, Rob Clark wrote:
>>>>>> On Sat, Aug 7, 2021 at 12:21 PM Caleb Connolly
>>>>>> <[email protected]> wrote:
>>>>>>>
>>>>>>> Hi Rob, Akhil,
>>>>>>>
>>>>>>> On 29/07/2021 21:53, Rob Clark wrote:
>>>>>>>> On Thu, Jul 29, 2021 at 1:28 PM Caleb Connolly
>>>>>>>> <[email protected]> wrote:
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> On 29/07/2021 21:24, Rob Clark wrote:
>>>>>>>>>> On Thu, Jul 29, 2021 at 1:06 PM Caleb Connolly
>>>>>>>>>> <[email protected]> wrote:
>>>>>>>>>>>
>>>>>>>>>>> Hi Rob,
>>>>>>>>>>>
>>>>>>>>>>> I've done some more testing! It looks like before that patch
>>>>>>>>>>> ("drm/msm: Devfreq tuning") the GPU would never get above
>>>>>>>>>>> the second frequency in the OPP table (342MHz) (at least, not
>>>>>>>>>>> in glxgears). With the patch applied it would more
>>>>>>>>>>> aggressively jump up to the max frequency which seems to be
>>>>>>>>>>> unstable at the default regulator voltages.
>>>>>>>>>>
>>>>>>>>>> *ohh*, yeah, ok, that would explain it
>>>>>>>>>>
>>>>>>>>>>> Hacking the pm8005 s1 regulator (which provides VDD_GFX) up
>>>>>>>>>>> to 0.988v (instead of the stock 0.516v) makes the GPU stable
>>>>>>>>>>> at the higher frequencies.
>>>>>>>>>>>
>>>>>>>>>>> Applying this patch reverts the behaviour, and the GPU never
>>>>>>>>>>> goes above 342MHz in glxgears, losing ~30% performance in
>>>>>>>>>>> glxgear.
>>>>>>>>>>>
>>>>>>>>>>> I think (?) that enabling CPR support would be the proper
>>>>>>>>>>> solution to this - that would ensure that the regulators run
>>>>>>>>>>> at the voltage the hardware needs to be stable.
>>>>>>>>>>>
>>>>>>>>>>> Is hacking the voltage higher (although ideally not quite
>>>>>>>>>>> that high) an acceptable short term solution until we have
>>>>>>>>>>> CPR? Or would it be safer to just not make use of the higher
>>>>>>>>>>> frequencies on a630 for now?
>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> tbh, I'm not sure about the regulator stuff and CPR.. Bjorn is
>>>>>>>>>> already
>>>>>>>>>> on CC and I added sboyd, maybe one of them knows better.
>>>>>>>>>>
>>>>>>>>>> In the short term, removing the higher problematic OPPs from
>>>>>>>>>> dts might
>>>>>>>>>> be a better option than this patch (which I'm dropping), since
>>>>>>>>>> there
>>>>>>>>>> is nothing stopping other workloads from hitting higher OPPs.
>>>>>>>>> Oh yeah that sounds like a more sensible workaround than mine .
>>>>>>>>>>
>>>>>>>>>> I'm slightly curious why I didn't have problems at higher OPPs
>>>>>>>>>> on my
>>>>>>>>>> c630 laptop (sdm850)
>>>>>>>>> Perhaps you won the sillicon lottery - iirc sdm850 is binned
>>>>>>>>> for higher clocks as is out of the factory.
>>>>>>>>>
>>>>>>>>> Would it be best to drop the OPPs for all devices? Or just
>>>>>>>>> those affected? I guess it's possible another c630 might
>>>>>>>>> crash where yours doesn't?
>>>>>>>>
>>>>>>>> I've not heard any reports of similar issues from the handful of
>>>>>>>> other
>>>>>>>> folks with c630's on #aarch64-laptops.. but I can't really say
>>>>>>>> if that
>>>>>>>> is luck or not.
>>>>>>> It looks like this affects at least the OnePlus 6 and PocoPhone
>>>>>>> F1, I've done some more poking and the following diff
>>>>>>> seems to fix the stability issues completely, it seems the delay
>>>>>>> is required to let the update propagate.
>>>>>>>
>>>>>>> This doesn't feel like the right fix, but hopefully it's enough
>>>>>>> to come up with a better solution than disabling the new
>>>>>>> devfreq behaviour on a630.
>>>>>>>
>>>>>>> diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
>>>>>>> b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
>>>>>>> index d7cec7f0dde0..69e2a5e84dae 100644
>>>>>>> --- a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
>>>>>>> +++ b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
>>>>>>> @@ -139,6 +139,10 @@ void a6xx_gmu_set_freq(struct msm_gpu *gpu,
>>>>>>> struct dev_pm_opp *opp)
>>>>>>>                     return;
>>>>>>>             }
>>>>>>>
>>>>>>> +       dev_pm_opp_set_opp(&gpu->pdev->dev, opp);
>>>>>>> +
>>>>>>> +       usleep_range(300, 500);
>>>>>>> +
>>>>>>
>>>
>>> I am a bit confused. We don't define a power domain for gpu in dt,
>>> correct? Then what exactly set_opp do here? Do you think this usleep is
>>> what is helping here somehow to mask the issue?
> The power domains (for cx and gx) are defined in the GMU DT, the OPPs in
> the GPU DT. For the sake of simplicity I'll refer to the lowest
> frequency (257000000) and OPP level (RPMH_REGULATOR_LEVEL_LOW_SVS) as
> the "min" state, and the highest frequency (710000000) and OPP level
> (RPMH_REGULATOR_LEVEL_TURBO_L1) as the "max" state. These are defined in
> sdm845.dtsi under the gpu node.
>
> The new devfreq behaviour unmasks what I think is a driver bug, it
> inadvertently puts much more strain on the GPU regulators than they
> usually get. With the new behaviour the GPU jumps from it's min state to
> the max state and back again extremely rapidly under workloads as small
> as refreshing UI. Where previously the GPU would rarely if ever go above
> 342MHz when interacting with the device, it now jumps between min and
> max many times per second.
>
> If my understanding is correct, the current implementation of the GMU
> set freq is the following:
>  - Get OPP for frequency to set
>  - Push the frequency to the GMU - immediately updating the core clock
>  - Call dev_pm_opp_set_opp() which triggers a notify chain, this winds
> up somewhere in power management code and causes the gx regulator level
> to be updated

Nope. dev_pm_opp_set_opp() sets the bandwidth for the GPU and nothing else.
We were using a different API earlier which got deprecated -
dev_pm_opp_set_bw().
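
(For reference, a rough sketch of what that call boils down to for the GPU - this is
illustrative only, not the exact a6xx code path:)

	struct dev_pm_opp *opp;
	unsigned long freq = gpu->fast_rate;	/* example frequency */

	opp = dev_pm_opp_find_freq_ceil(&gpu->pdev->dev, &freq);
	if (!IS_ERR(opp)) {
		/*
		 * With no power domain / regulator attached to the GPU device,
		 * this only casts the bandwidth (icc) vote tied to the OPP -
		 * the same thing the old dev_pm_opp_set_bw() did.
		 */
		dev_pm_opp_set_opp(&gpu->pdev->dev, opp);
		dev_pm_opp_put(opp);
	}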

>
> The regulator will then take some time to reach it's new voltage level
> and stabilise. I believe that rapid transitions between min and max
> state - in combination with the increased current load from the GPU core
> - lead to the regulator becoming unstable (e.g. when it's requested to
> transition from it's lowest to highest levels immediately after
> transitioning down), the unstable voltage causes the GPU to crash.
>
> Sillicon lottery will of course play a role here - this is very much an
> edge case and would definitely be different on a per-device and even
> per-unit basis.
>>
>> Hmm, I thought "opp-level = RPMH_REGULATOR_LEVEL_*" did *something*,
>> but tbh I'm not sure exactly what..
>>
>>> I feel we should just leave the new dcvs feature (shall we call it NAP?)
>>> disabled for a630 (and 10ms devfreq interval), until this is root
>>> caused.
> I believe this hacky workaround expresses the root cause of the issue
> quite clearly, by setting the OPP first and allowing the gx regulator to
> become stable before telling the GPU to change clock speeds, we avoid
> the edge case and prevent the crashes.
>
> I took some rough measurements by adding logging to msm_devfreq_idle and
> causing UI updates for ~20 seconds and that function is being called
> about 30 times per second, this means the GPU is transitioning between
> min (idle) state and max (active / boost) state at that frequency and
> causing the issue I described above. It's likely that the usleep is
> helping to mask this behaviour.
>
> I hope this serves as a slightly better explanation of what I perceive
> to be the issue, I realise my previous explanations were not very
> adequate, I apologise for all the noise.
>>
>> I suppose "NAP" is a reasonable name.
>>
>> But I think that reverting to previous behavior would not be enough,
>> there is nothing stopping devfreq from jumping from min to max freq,
>> which AFAIU should be enough to trigger this.  I guess that there just
>> hasn't been enough testing with different game workloads on those
>> phones to trigger this.
> Ack
>>
>> That said, I haven't seen similar issues on my sdm850 laptop, where I
>> defn have triggered min->max freq transitions.. I guess it would be
>> interesting to know if this issue could be reproduced on db845c, or if
>> it really is board specific?
> My db845c arrives this week, I'll definitely try and reproduce this.
>>
>> To workaround, I think we'd need to implement some way to limit that
>> maximum frequency jump (and then use delayed work to continue ramping
>> up the freq over time until we hit the target).. which seems like a
>> lot of work if this is just a board(s) specific workaround and isn't
>> needed once CPR is supported
> Based on my reasoning above, I came up with the following: reducing
> thrashing by preventing rapid idle/active transitions. The minimum
> active time of 30ms was just used for testing, I think some number
> between 2 and 4 frames would be a sensible choice - the higher the safer.
>
> diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
> b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
> index d7cec7f0dde0..87f2d1085c3e 100644
> --- a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
> +++ b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
> @@ -139,6 +139,8 @@ void a6xx_gmu_set_freq(struct msm_gpu *gpu, struct
> dev_pm_opp *opp)
>                 return;
>         }
>
> +       dev_pm_opp_set_opp(&gpu->pdev->dev, opp);
> +
>         gmu_write(gmu, REG_A6XX_GMU_DCVS_ACK_OPTION, 0);
>
>         gmu_write(gmu, REG_A6XX_GMU_DCVS_PERF_SETTING,
> @@ -158,7 +160,6 @@ void a6xx_gmu_set_freq(struct msm_gpu *gpu, struct
> dev_pm_opp *opp)
>         if (ret)
>                 dev_err(gmu->dev, "GMU set GPU frequency error: %d\n",
> ret);
>
> -       dev_pm_opp_set_opp(&gpu->pdev->dev, opp);
>         pm_runtime_put(gmu->dev);
>  }
>
> diff --git a/drivers/gpu/drm/msm/msm_gpu.h b/drivers/gpu/drm/msm/msm_gpu.h
> index 0e4b45bff2e6..0e2293bcb46d 100644
> --- a/drivers/gpu/drm/msm/msm_gpu.h
> +++ b/drivers/gpu/drm/msm/msm_gpu.h
> @@ -99,8 +99,8 @@ struct msm_gpu_devfreq {
>         /** time: Time of last sampling period. */
>         ktime_t time;
>
> -       /** idle_time: Time of last transition to idle: */
> -       ktime_t idle_time;
> +       /** transition_time: Time of last transition between
> idle/active: */
> +       ktime_t transition_time;
>
>         /**
>          * idle_freq:
> diff --git a/drivers/gpu/drm/msm/msm_gpu_devfreq.c
> b/drivers/gpu/drm/msm/msm_gpu_devfreq.c
> index 0a1ee20296a2..774a7be33e7a 100644
> --- a/drivers/gpu/drm/msm/msm_gpu_devfreq.c
> +++ b/drivers/gpu/drm/msm/msm_gpu_devfreq.c
> @@ -157,7 +157,7 @@ void msm_devfreq_active(struct msm_gpu *gpu)
>          */
>         mutex_lock(&df->devfreq->lock);
>
> -       idle_time = ktime_to_ms(ktime_sub(ktime_get(), df->idle_time));
> +       idle_time = ktime_to_ms(ktime_sub(ktime_get(),
> df->transition_time));
>
>         /*
>          * If we've been idle for a significant fraction of a polling
> @@ -168,7 +168,7 @@ void msm_devfreq_active(struct msm_gpu *gpu)
>                 target_freq *= 2;
>         }
>
> -       df->idle_freq = 0;
> +       df->transition_time = ktime_get();;
>
>         msm_devfreq_target(&gpu->pdev->dev, &target_freq, 0);
>
> @@ -185,6 +185,16 @@ void msm_devfreq_idle(struct msm_gpu *gpu)
>  {
>         struct msm_gpu_devfreq *df = &gpu->devfreq;
>         unsigned long idle_freq, target_freq = 0;
> +       unsigned int active_time;
> +
> +       active_time = ktime_to_ms(ktime_sub(ktime_get(),
> df->transition_time));
> +       /*
> +        * Don't go back to idle unless we've been active for at least 30ms
> +        * to avoid thrashing.

This basically defeats the purpose of this feature! At least, we should
keep this '30' GPU-specific. Does a Kconfig make sense here? BTW, if
300us was helping you earlier why do you want it to be 30ms now?

> +        */
> +       if (active_time < 30) {
> +               return;
> +       }
>
>         /*
>          * Hold devfreq lock to synchronize with get_dev_status()/
> @@ -196,7 +206,7 @@ void msm_devfreq_idle(struct msm_gpu *gpu)
>
>         msm_devfreq_target(&gpu->pdev->dev, &target_freq, 0);
>
> -       df->idle_time = ktime_get();
> +       df->transition_time = ktime_get();
>         df->idle_freq = idle_freq;
>
>         mutex_unlock(&df->devfreq->lock);
>>
>> BR,
>> -R
>>
>>>>>> Hmm, this is going to be in the critical path on idle -> active
>>>>>> transition (ie. think response time to user-input).. so we defn don't
>>>>>> want to do this unconditionally..
>>>>>>
>>>>>> If I understand the problem, we just want to limit how far we jump
>>>>>> the
>>>>>> gpu freq in one go.. maybe deleting the lowest (and perhaps highest)
>>>>>> OPP would accomplish that?  Could that be done in the board(s)'s
>>>>>> toplevel dts files?
>>>>> That would be a workaround, however I'd really like to avoid
>>>>> limiting performance as a solution if I can help it,
>>>>> especially as the fix might just be "set the opp first, wait for it
>>>>> to apply, then set the core clock".
>>>>>
>>>>> Is there a sensible way to get a callback from the opp notify
>>>>> chain? Or from rpmh directly? Or is this solution really
>>>>> not the right way to go?
>>>>
>>>> It does seem a bit strange to me that we are telling GMU to change
>>>> freq before calling dev_pm_opp_set_opp()..  if dev_pm_opp_set_opp() is
>>>> increasing voltage, it seems like you'd want to do that *before*
>>>> increasing freq (but reverse the order when decreasing freq).. But I'm
>>>> not an expert on the ways of the GMU..  maybe Akhil or Jordan knows
>>>> better how this is supposed to work.
>>>
>>> For legacy gmu, we trigger DCVS using DCVS OOB which comes later in this
>>> function. But the order between regulator and clock which you mentioned
>>> is correct.
>>>
>>>>
>>>> But the delay seems like papering something over, and I'm trying to go
>>>> in the other direction and reduce latency between user input and
>>>> pageflip..
>>>>
>>>> BR,
>>>> -R
>>>>
>>>>>>
>>>>>> BR,
>>>>>> -R
>>>>>>
>>>>>>>             gmu_write(gmu, REG_A6XX_GMU_DCVS_ACK_OPTION, 0);
>>>>>>>
>>>>>>>             gmu_write(gmu, REG_A6XX_GMU_DCVS_PERF_SETTING,
>>>>>>> @@ -158,7 +162,6 @@ void a6xx_gmu_set_freq(struct msm_gpu *gpu,
>>>>>>> struct dev_pm_opp *opp)
>>>>>>>             if (ret)
>>>>>>>                     dev_err(gmu->dev, "GMU set GPU frequency
>>>>>>> error: %d\n", ret);
>>>>>>>
>>>>>>> -       dev_pm_opp_set_opp(&gpu->pdev->dev, opp);
>>>>>>>             pm_runtime_put(gmu->dev);
>>>>>>>      }
>>>>>>>>
>>>>>>>> Maybe just remove it for affected devices?  But I'll defer to
>>>>>>>> Bjorn.
>>>>>>>>
>>>>>>>> BR,
>>>>>>>> -R
>>>>>>>>
>>>>>>>
>>>>>>> --
>>>>>>> Kind Regards,
>>>>>>> Caleb (they/them)
>>>>>
>>>>> --
>>>>> Kind Regards,
>>>>> Caleb (they/them)
>>>
>

2021-08-09 19:32:58

by Akhil P Oommen

[permalink] [raw]
Subject: Re: [PATCH] drm/msm: Disable frequency clamping on a630

On 8/8/2021 10:22 PM, Rob Clark wrote:
> On Sun, Aug 8, 2021 at 7:33 AM Caleb Connolly <[email protected]> wrote:
>>
>>
>>
>> On 07/08/2021 21:04, Rob Clark wrote:
>>> On Sat, Aug 7, 2021 at 12:21 PM Caleb Connolly
>>> <[email protected]> wrote:
>>>>
>>>> Hi Rob, Akhil,
>>>>
>>>> On 29/07/2021 21:53, Rob Clark wrote:
>>>>> On Thu, Jul 29, 2021 at 1:28 PM Caleb Connolly
>>>>> <[email protected]> wrote:
>>>>>>
>>>>>>
>>>>>>
>>>>>> On 29/07/2021 21:24, Rob Clark wrote:
>>>>>>> On Thu, Jul 29, 2021 at 1:06 PM Caleb Connolly
>>>>>>> <[email protected]> wrote:
>>>>>>>>
>>>>>>>> Hi Rob,
>>>>>>>>
>>>>>>>> I've done some more testing! It looks like before that patch ("drm/msm: Devfreq tuning") the GPU would never get above
>>>>>>>> the second frequency in the OPP table (342MHz) (at least, not in glxgears). With the patch applied it would more
>>>>>>>> aggressively jump up to the max frequency which seems to be unstable at the default regulator voltages.
>>>>>>>
>>>>>>> *ohh*, yeah, ok, that would explain it
>>>>>>>
>>>>>>>> Hacking the pm8005 s1 regulator (which provides VDD_GFX) up to 0.988v (instead of the stock 0.516v) makes the GPU stable
>>>>>>>> at the higher frequencies.
>>>>>>>>
>>>>>>>> Applying this patch reverts the behaviour, and the GPU never goes above 342MHz in glxgears, losing ~30% performance in
>>>>>>>> glxgear.
>>>>>>>>
>>>>>>>> I think (?) that enabling CPR support would be the proper solution to this - that would ensure that the regulators run
>>>>>>>> at the voltage the hardware needs to be stable.
>>>>>>>>
>>>>>>>> Is hacking the voltage higher (although ideally not quite that high) an acceptable short term solution until we have
>>>>>>>> CPR? Or would it be safer to just not make use of the higher frequencies on a630 for now?
>>>>>>>>
>>>>>>>
>>>>>>> tbh, I'm not sure about the regulator stuff and CPR.. Bjorn is already
>>>>>>> on CC and I added sboyd, maybe one of them knows better.
>>>>>>>
>>>>>>> In the short term, removing the higher problematic OPPs from dts might
>>>>>>> be a better option than this patch (which I'm dropping), since there
>>>>>>> is nothing stopping other workloads from hitting higher OPPs.
>>>>>> Oh yeah that sounds like a more sensible workaround than mine .
>>>>>>>
>>>>>>> I'm slightly curious why I didn't have problems at higher OPPs on my
>>>>>>> c630 laptop (sdm850)
>>>>>> Perhaps you won the sillicon lottery - iirc sdm850 is binned for higher clocks as is out of the factory.
>>>>>>
>>>>>> Would it be best to drop the OPPs for all devices? Or just those affected? I guess it's possible another c630 might
>>>>>> crash where yours doesn't?
>>>>>
>>>>> I've not heard any reports of similar issues from the handful of other
>>>>> folks with c630's on #aarch64-laptops.. but I can't really say if that
>>>>> is luck or not.
>>>> It looks like this affects at least the OnePlus 6 and PocoPhone F1, I've done some more poking and the following diff
>>>> seems to fix the stability issues completely, it seems the delay is required to let the update propagate.
>>>>
>>>> This doesn't feel like the right fix, but hopefully it's enough to come up with a better solution than disabling the new
>>>> devfreq behaviour on a630.
>>>>
>>>> diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
>>>> index d7cec7f0dde0..69e2a5e84dae 100644
>>>> --- a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
>>>> +++ b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
>>>> @@ -139,6 +139,10 @@ void a6xx_gmu_set_freq(struct msm_gpu *gpu, struct dev_pm_opp *opp)
>>>> return;
>>>> }
>>>>
>>>> + dev_pm_opp_set_opp(&gpu->pdev->dev, opp);
>>>> +
>>>> + usleep_range(300, 500);
>>>> +
>>>

I am a bit confused. We don't define a power domain for gpu in dt,
correct? Then what exactly does set_opp do here? Do you think this usleep is
what is helping here somehow to mask the issue?

I feel we should just leave the new dcvs feature (shall we call it NAP?)
disabled for a630 (and 10ms devfreq interval), until this is root caused.

>>> Hmm, this is going to be in the critical path on idle -> active
>>> transition (ie. think response time to user-input).. so we defn don't
>>> want to do this unconditionally..
>>>
>>> If I understand the problem, we just want to limit how far we jump the
>>> gpu freq in one go.. maybe deleting the lowest (and perhaps highest)
>>> OPP would accomplish that? Could that be done in the board(s)'s
>>> toplevel dts files?
>> That would be a workaround, however I'd really like to avoid limiting performance as a solution if I can help it,
>> especially as the fix might just be "set the opp first, wait for it to apply, then set the core clock".
>>
>> Is there a sensible way to get a callback from the opp notify chain? Or from rpmh directly? Or is this solution really
>> not the right way to go?
>
> It does seem a bit strange to me that we are telling GMU to change
> freq before calling dev_pm_opp_set_opp().. if dev_pm_opp_set_opp() is
> increasing voltage, it seems like you'd want to do that *before*
> increasing freq (but reverse the order when decreasing freq).. But I'm
> not an expert on the ways of the GMU.. maybe Akhil or Jordan knows
> better how this is supposed to work.

For legacy gmu, we trigger DCVS using DCVS OOB which comes later in this
function. But the order between regulator and clock which you mentioned
is correct.

>
> But the delay seems like papering something over, and I'm trying to go
> in the other direction and reduce latency between user input and
> pageflip..
>
> BR,
> -R
>
>>>
>>> BR,
>>> -R
>>>
>>>> gmu_write(gmu, REG_A6XX_GMU_DCVS_ACK_OPTION, 0);
>>>>
>>>> gmu_write(gmu, REG_A6XX_GMU_DCVS_PERF_SETTING,
>>>> @@ -158,7 +162,6 @@ void a6xx_gmu_set_freq(struct msm_gpu *gpu, struct dev_pm_opp *opp)
>>>> if (ret)
>>>> dev_err(gmu->dev, "GMU set GPU frequency error: %d\n", ret);
>>>>
>>>> - dev_pm_opp_set_opp(&gpu->pdev->dev, opp);
>>>> pm_runtime_put(gmu->dev);
>>>> }
>>>>>
>>>>> Maybe just remove it for affected devices? But I'll defer to Bjorn.
>>>>>
>>>>> BR,
>>>>> -R
>>>>>
>>>>
>>>> --
>>>> Kind Regards,
>>>> Caleb (they/them)
>>
>> --
>> Kind Regards,
>> Caleb (they/them)

2021-08-09 20:31:00

by Rob Clark

[permalink] [raw]
Subject: Re: [PATCH] drm/msm: Disable frequency clamping on a630

On Mon, Aug 9, 2021 at 10:28 AM Akhil P Oommen <[email protected]> wrote:
>
> On 8/9/2021 9:48 PM, Caleb Connolly wrote:
> >
> >
> > On 09/08/2021 17:12, Rob Clark wrote:
> >> On Mon, Aug 9, 2021 at 7:52 AM Akhil P Oommen <[email protected]>
> >> wrote:
> >>>
> >>> On 8/8/2021 10:22 PM, Rob Clark wrote:
> >>>> On Sun, Aug 8, 2021 at 7:33 AM Caleb Connolly
> >>>> <[email protected]> wrote:
> >>>>>
> >>>>>
> >>>>>
> >>>>> On 07/08/2021 21:04, Rob Clark wrote:
> >>>>>> On Sat, Aug 7, 2021 at 12:21 PM Caleb Connolly
> >>>>>> <[email protected]> wrote:
> >>>>>>>
> >>>>>>> Hi Rob, Akhil,
> >>>>>>>
> >>>>>>> On 29/07/2021 21:53, Rob Clark wrote:
> >>>>>>>> On Thu, Jul 29, 2021 at 1:28 PM Caleb Connolly
> >>>>>>>> <[email protected]> wrote:
> >>>>>>>>>
> >>>>>>>>>
> >>>>>>>>>
> >>>>>>>>> On 29/07/2021 21:24, Rob Clark wrote:
> >>>>>>>>>> On Thu, Jul 29, 2021 at 1:06 PM Caleb Connolly
> >>>>>>>>>> <[email protected]> wrote:
> >>>>>>>>>>>
> >>>>>>>>>>> Hi Rob,
> >>>>>>>>>>>
> >>>>>>>>>>> I've done some more testing! It looks like before that patch
> >>>>>>>>>>> ("drm/msm: Devfreq tuning") the GPU would never get above
> >>>>>>>>>>> the second frequency in the OPP table (342MHz) (at least, not
> >>>>>>>>>>> in glxgears). With the patch applied it would more
> >>>>>>>>>>> aggressively jump up to the max frequency which seems to be
> >>>>>>>>>>> unstable at the default regulator voltages.
> >>>>>>>>>>
> >>>>>>>>>> *ohh*, yeah, ok, that would explain it
> >>>>>>>>>>
> >>>>>>>>>>> Hacking the pm8005 s1 regulator (which provides VDD_GFX) up
> >>>>>>>>>>> to 0.988v (instead of the stock 0.516v) makes the GPU stable
> >>>>>>>>>>> at the higher frequencies.
> >>>>>>>>>>>
> >>>>>>>>>>> Applying this patch reverts the behaviour, and the GPU never
> >>>>>>>>>>> goes above 342MHz in glxgears, losing ~30% performance in
> >>>>>>>>>>> glxgear.
> >>>>>>>>>>>
> >>>>>>>>>>> I think (?) that enabling CPR support would be the proper
> >>>>>>>>>>> solution to this - that would ensure that the regulators run
> >>>>>>>>>>> at the voltage the hardware needs to be stable.
> >>>>>>>>>>>
> >>>>>>>>>>> Is hacking the voltage higher (although ideally not quite
> >>>>>>>>>>> that high) an acceptable short term solution until we have
> >>>>>>>>>>> CPR? Or would it be safer to just not make use of the higher
> >>>>>>>>>>> frequencies on a630 for now?
> >>>>>>>>>>>
> >>>>>>>>>>
> >>>>>>>>>> tbh, I'm not sure about the regulator stuff and CPR.. Bjorn is
> >>>>>>>>>> already
> >>>>>>>>>> on CC and I added sboyd, maybe one of them knows better.
> >>>>>>>>>>
> >>>>>>>>>> In the short term, removing the higher problematic OPPs from
> >>>>>>>>>> dts might
> >>>>>>>>>> be a better option than this patch (which I'm dropping), since
> >>>>>>>>>> there
> >>>>>>>>>> is nothing stopping other workloads from hitting higher OPPs.
> >>>>>>>>> Oh yeah that sounds like a more sensible workaround than mine .
> >>>>>>>>>>
> >>>>>>>>>> I'm slightly curious why I didn't have problems at higher OPPs
> >>>>>>>>>> on my
> >>>>>>>>>> c630 laptop (sdm850)
> >>>>>>>>> Perhaps you won the sillicon lottery - iirc sdm850 is binned
> >>>>>>>>> for higher clocks as is out of the factory.
> >>>>>>>>>
> >>>>>>>>> Would it be best to drop the OPPs for all devices? Or just
> >>>>>>>>> those affected? I guess it's possible another c630 might
> >>>>>>>>> crash where yours doesn't?
> >>>>>>>>
> >>>>>>>> I've not heard any reports of similar issues from the handful of
> >>>>>>>> other
> >>>>>>>> folks with c630's on #aarch64-laptops.. but I can't really say
> >>>>>>>> if that
> >>>>>>>> is luck or not.
> >>>>>>> It looks like this affects at least the OnePlus 6 and PocoPhone
> >>>>>>> F1, I've done some more poking and the following diff
> >>>>>>> seems to fix the stability issues completely, it seems the delay
> >>>>>>> is required to let the update propagate.
> >>>>>>>
> >>>>>>> This doesn't feel like the right fix, but hopefully it's enough
> >>>>>>> to come up with a better solution than disabling the new
> >>>>>>> devfreq behaviour on a630.
> >>>>>>>
> >>>>>>> diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
> >>>>>>> b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
> >>>>>>> index d7cec7f0dde0..69e2a5e84dae 100644
> >>>>>>> --- a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
> >>>>>>> +++ b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
> >>>>>>> @@ -139,6 +139,10 @@ void a6xx_gmu_set_freq(struct msm_gpu *gpu,
> >>>>>>> struct dev_pm_opp *opp)
> >>>>>>> return;
> >>>>>>> }
> >>>>>>>
> >>>>>>> + dev_pm_opp_set_opp(&gpu->pdev->dev, opp);
> >>>>>>> +
> >>>>>>> + usleep_range(300, 500);
> >>>>>>> +
> >>>>>>
> >>>
> >>> I am a bit confused. We don't define a power domain for gpu in dt,
> >>> correct? Then what exactly set_opp do here? Do you think this usleep is
> >>> what is helping here somehow to mask the issue?
> > The power domains (for cx and gx) are defined in the GMU DT, the OPPs in
> > the GPU DT. For the sake of simplicity I'll refer to the lowest
> > frequency (257000000) and OPP level (RPMH_REGULATOR_LEVEL_LOW_SVS) as
> > the "min" state, and the highest frequency (710000000) and OPP level
> > (RPMH_REGULATOR_LEVEL_TURBO_L1) as the "max" state. These are defined in
> > sdm845.dtsi under the gpu node.
> >
> > The new devfreq behaviour unmasks what I think is a driver bug, it
> > inadvertently puts much more strain on the GPU regulators than they
> > usually get. With the new behaviour the GPU jumps from it's min state to
> > the max state and back again extremely rapidly under workloads as small
> > as refreshing UI. Where previously the GPU would rarely if ever go above
> > 342MHz when interacting with the device, it now jumps between min and
> > max many times per second.
> >
> > If my understanding is correct, the current implementation of the GMU
> > set freq is the following:
> > - Get OPP for frequency to set
> > - Push the frequency to the GMU - immediately updating the core clock
> > - Call dev_pm_opp_set_opp() which triggers a notify chain, this winds
> > up somewhere in power management code and causes the gx regulator level
> > to be updated
>
> Nope. dev_pm_opp_set_opp() sets the bandwidth for gpu and nothing else.
> We were using a different api earlier which got deprecated -
> dev_pm_opp_set_bw().

Hmm, ok, if this is just setting the icc vote, the order shouldn't be too important.
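
(For context, "setting the icc vote" ultimately amounts to something like the below via
the interconnect API - the path name and bandwidth numbers are made up for illustration:)

	struct icc_path *path;

	path = devm_of_icc_get(&gpu->pdev->dev, "gfx-mem");	/* assumed name */
	if (!IS_ERR_OR_NULL(path))
		/* average and peak bandwidth votes, in kBps */
		icc_set_bw(path, kBps_to_icc(1000000), kBps_to_icc(2000000));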

I guess GMU then is the one that is controlling the regulator(s) to
ensure adequate voltage for the requested freq?

But the GMU fw should be the same for a618 and a630, md5sum of what
I'm using (from linux-firmware):

ab20135f7adf48e0f344282a37da80e4 a630_gmu.bin

> >
> > The regulator will then take some time to reach it's new voltage level
> > and stabilise. I believe that rapid transitions between min and max
> > state - in combination with the increased current load from the GPU core
> > - lead to the regulator becoming unstable (e.g. when it's requested to
> > transition from it's lowest to highest levels immediately after
> > transitioning down), the unstable voltage causes the GPU to crash.
> >
> > Sillicon lottery will of course play a role here - this is very much an
> > edge case and would definitely be different on a per-device and even
> > per-unit basis.
> >>
> >> Hmm, I thought "opp-level = RPMH_REGULATOR_LEVEL_*" did *something*,
> >> but tbh I'm not sure exactly what..
> >>
> >>> I feel we should just leave the new dcvs feature (shall we call it NAP?)
> >>> disabled for a630 (and 10ms devfreq interval), until this is root
> >>> caused.
> > I believe this hacky workaround expresses the root cause of the issue
> > quite clearly, by setting the OPP first and allowing the gx regulator to
> > become stable before telling the GPU to change clock speeds, we avoid
> > the edge case and prevent the crashes.
> >
> > I took some rough measurements by adding logging to msm_devfreq_idle and
> > causing UI updates for ~20 seconds and that function is being called
> > about 30 times per second, this means the GPU is transitioning between
> > min (idle) state and max (active / boost) state at that frequency and
> > causing the issue I described above. It's likely that the usleep is
> > helping to mask this behaviour.
> >
> > I hope this serves as a slightly better explanation of what I perceive
> > to be the issue, I realise my previous explanations were not very
> > adequate, I apologise for all the noise.
> >>
> >> I suppose "NAP" is a reasonable name.
> >>
> >> But I think that reverting to previous behavior would not be enough,
> >> there is nothing stopping devfreq from jumping from min to max freq,
> >> which AFAIU should be enough to trigger this. I guess that there just
> >> hasn't been enough testing with different game workloads on those
> >> phones to trigger this.
> > Ack
> >>
> >> That said, I haven't seen similar issues on my sdm850 laptop, where I
> >> defn have triggered min->max freq transitions.. I guess it would be
> >> interesting to know if this issue could be reproduced on db845c, or if
> >> it really is board specific?
> > My db845c arrives this week, I'll definitely try and reproduce this.
> >>
> >> To workaround, I think we'd need to implement some way to limit that
> >> maximum frequency jump (and then use delayed work to continue ramping
> >> up the freq over time until we hit the target).. which seems like a
> >> lot of work if this is just a board(s) specific workaround and isn't
> >> needed once CPR is supported
> > Based on my reasoning above, I came up with the following: reducing
> > thrashing by preventing rapid idle/active transitions. The minimum
> > active time of 30ms was just used for testing, I think some number
> > between 2 and 4 frames would be a sensible choice - the higher the safer.
> >
> > diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
> > b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
> > index d7cec7f0dde0..87f2d1085c3e 100644
> > --- a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
> > +++ b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
> > @@ -139,6 +139,8 @@ void a6xx_gmu_set_freq(struct msm_gpu *gpu, struct
> > dev_pm_opp *opp)
> > return;
> > }
> >
> > + dev_pm_opp_set_opp(&gpu->pdev->dev, opp);
> > +
> > gmu_write(gmu, REG_A6XX_GMU_DCVS_ACK_OPTION, 0);
> >
> > gmu_write(gmu, REG_A6XX_GMU_DCVS_PERF_SETTING,
> > @@ -158,7 +160,6 @@ void a6xx_gmu_set_freq(struct msm_gpu *gpu, struct
> > dev_pm_opp *opp)
> > if (ret)
> > dev_err(gmu->dev, "GMU set GPU frequency error: %d\n",
> > ret);
> >
> > - dev_pm_opp_set_opp(&gpu->pdev->dev, opp);
> > pm_runtime_put(gmu->dev);
> > }
> >
> > diff --git a/drivers/gpu/drm/msm/msm_gpu.h b/drivers/gpu/drm/msm/msm_gpu.h
> > index 0e4b45bff2e6..0e2293bcb46d 100644
> > --- a/drivers/gpu/drm/msm/msm_gpu.h
> > +++ b/drivers/gpu/drm/msm/msm_gpu.h
> > @@ -99,8 +99,8 @@ struct msm_gpu_devfreq {
> > /** time: Time of last sampling period. */
> > ktime_t time;
> >
> > - /** idle_time: Time of last transition to idle: */
> > - ktime_t idle_time;
> > + /** transition_time: Time of last transition between
> > idle/active: */
> > + ktime_t transition_time;
> >
> > /**
> > * idle_freq:
> > diff --git a/drivers/gpu/drm/msm/msm_gpu_devfreq.c
> > b/drivers/gpu/drm/msm/msm_gpu_devfreq.c
> > index 0a1ee20296a2..774a7be33e7a 100644
> > --- a/drivers/gpu/drm/msm/msm_gpu_devfreq.c
> > +++ b/drivers/gpu/drm/msm/msm_gpu_devfreq.c
> > @@ -157,7 +157,7 @@ void msm_devfreq_active(struct msm_gpu *gpu)
> > */
> > mutex_lock(&df->devfreq->lock);
> >
> > - idle_time = ktime_to_ms(ktime_sub(ktime_get(), df->idle_time));
> > + idle_time = ktime_to_ms(ktime_sub(ktime_get(),
> > df->transition_time));
> >
> > /*
> > * If we've been idle for a significant fraction of a polling
> > @@ -168,7 +168,7 @@ void msm_devfreq_active(struct msm_gpu *gpu)
> > target_freq *= 2;
> > }
> >
> > - df->idle_freq = 0;
> > +       df->transition_time = ktime_get();
> >
> > msm_devfreq_target(&gpu->pdev->dev, &target_freq, 0);
> >
> > @@ -185,6 +185,16 @@ void msm_devfreq_idle(struct msm_gpu *gpu)
> > {
> > struct msm_gpu_devfreq *df = &gpu->devfreq;
> > unsigned long idle_freq, target_freq = 0;
> > + unsigned int active_time;
> > +
> > + active_time = ktime_to_ms(ktime_sub(ktime_get(),
> > df->transition_time));
> > + /*
> > + * Don't go back to idle unless we've been active for at least 30ms
> > + * to avoid thrashing.
>
> This basically defeats the purpose of this feature! At least, we should
> keep this '30' gpu specific. Does a Kconfig makes sense here?? BTW, if
> 300us was helping you earlier why do you want it to be 30ms now?

Let's not Kconfig this; a single kernel should be able to work on multiple devices.

What I'm less sure about is whether this is a630 specific (in which
case something in the gpulist table would work) or board specific (in
which case I guess it needs to somehow come from the dtb)
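
(If it does turn out to be a630 specific, the gpulist route could look something like
this - purely a hypothetical sketch, the min_active_ms field does not exist today:)

	/* hypothetical addition to the a630 entry in adreno_device.c's gpulist */
	{
		.rev = ADRENO_REV(6, 3, 0, ANY_ID),
		/* ... existing fields unchanged ... */
		.min_active_ms = 30,	/* hypothetical per-GPU minimum active time */
	},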

BR,
-R

2021-08-09 23:58:58

by Caleb Connolly

[permalink] [raw]
Subject: Re: [PATCH] drm/msm: Disable frequency clamping on a630



On 09/08/2021 18:58, Rob Clark wrote:
> On Mon, Aug 9, 2021 at 10:28 AM Akhil P Oommen <[email protected]> wrote:
>>
>> On 8/9/2021 9:48 PM, Caleb Connolly wrote:
>>>
>>>
>>> On 09/08/2021 17:12, Rob Clark wrote:
>>>> On Mon, Aug 9, 2021 at 7:52 AM Akhil P Oommen <[email protected]>
>>>> wrote:
>>>>>
>>>>> On 8/8/2021 10:22 PM, Rob Clark wrote:
>>>>>> On Sun, Aug 8, 2021 at 7:33 AM Caleb Connolly
>>>>>> <[email protected]> wrote:
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> On 07/08/2021 21:04, Rob Clark wrote:
>>>>>>>> On Sat, Aug 7, 2021 at 12:21 PM Caleb Connolly
>>>>>>>> <[email protected]> wrote:
>>>>>>>>>
>>>>>>>>> Hi Rob, Akhil,
>>>>>>>>>
>>>>>>>>> On 29/07/2021 21:53, Rob Clark wrote:
>>>>>>>>>> On Thu, Jul 29, 2021 at 1:28 PM Caleb Connolly
>>>>>>>>>> <[email protected]> wrote:
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> On 29/07/2021 21:24, Rob Clark wrote:
>>>>>>>>>>>> On Thu, Jul 29, 2021 at 1:06 PM Caleb Connolly
>>>>>>>>>>>> <[email protected]> wrote:
>>>>>>>>>>>>>
>>>>>>>>>>>>> Hi Rob,
>>>>>>>>>>>>>
>>>>>>>>>>>>> I've done some more testing! It looks like before that patch
>>>>>>>>>>>>> ("drm/msm: Devfreq tuning") the GPU would never get above
>>>>>>>>>>>>> the second frequency in the OPP table (342MHz) (at least, not
>>>>>>>>>>>>> in glxgears). With the patch applied it would more
>>>>>>>>>>>>> aggressively jump up to the max frequency which seems to be
>>>>>>>>>>>>> unstable at the default regulator voltages.
>>>>>>>>>>>>
>>>>>>>>>>>> *ohh*, yeah, ok, that would explain it
>>>>>>>>>>>>
>>>>>>>>>>>>> Hacking the pm8005 s1 regulator (which provides VDD_GFX) up
>>>>>>>>>>>>> to 0.988v (instead of the stock 0.516v) makes the GPU stable
>>>>>>>>>>>>> at the higher frequencies.
>>>>>>>>>>>>>
>>>>>>>>>>>>> Applying this patch reverts the behaviour, and the GPU never
>>>>>>>>>>>>> goes above 342MHz in glxgears, losing ~30% performance in
>>>>>>>>>>>>> glxgear.
>>>>>>>>>>>>>
>>>>>>>>>>>>> I think (?) that enabling CPR support would be the proper
>>>>>>>>>>>>> solution to this - that would ensure that the regulators run
>>>>>>>>>>>>> at the voltage the hardware needs to be stable.
>>>>>>>>>>>>>
>>>>>>>>>>>>> Is hacking the voltage higher (although ideally not quite
>>>>>>>>>>>>> that high) an acceptable short term solution until we have
>>>>>>>>>>>>> CPR? Or would it be safer to just not make use of the higher
>>>>>>>>>>>>> frequencies on a630 for now?
>>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> tbh, I'm not sure about the regulator stuff and CPR.. Bjorn is
>>>>>>>>>>>> already
>>>>>>>>>>>> on CC and I added sboyd, maybe one of them knows better.
>>>>>>>>>>>>
>>>>>>>>>>>> In the short term, removing the higher problematic OPPs from
>>>>>>>>>>>> dts might
>>>>>>>>>>>> be a better option than this patch (which I'm dropping), since
>>>>>>>>>>>> there
>>>>>>>>>>>> is nothing stopping other workloads from hitting higher OPPs.
>>>>>>>>>>> Oh yeah that sounds like a more sensible workaround than mine .
>>>>>>>>>>>>
>>>>>>>>>>>> I'm slightly curious why I didn't have problems at higher OPPs
>>>>>>>>>>>> on my
>>>>>>>>>>>> c630 laptop (sdm850)
>>>>>>>>>>> Perhaps you won the sillicon lottery - iirc sdm850 is binned
>>>>>>>>>>> for higher clocks as is out of the factory.
>>>>>>>>>>>
>>>>>>>>>>> Would it be best to drop the OPPs for all devices? Or just
>>>>>>>>>>> those affected? I guess it's possible another c630 might
>>>>>>>>>>> crash where yours doesn't?
>>>>>>>>>>
>>>>>>>>>> I've not heard any reports of similar issues from the handful of
>>>>>>>>>> other
>>>>>>>>>> folks with c630's on #aarch64-laptops.. but I can't really say
>>>>>>>>>> if that
>>>>>>>>>> is luck or not.
>>>>>>>>> It looks like this affects at least the OnePlus 6 and PocoPhone
>>>>>>>>> F1, I've done some more poking and the following diff
>>>>>>>>> seems to fix the stability issues completely, it seems the delay
>>>>>>>>> is required to let the update propagate.
>>>>>>>>>
>>>>>>>>> This doesn't feel like the right fix, but hopefully it's enough
>>>>>>>>> to come up with a better solution than disabling the new
>>>>>>>>> devfreq behaviour on a630.
>>>>>>>>>
>>>>>>>>> diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
>>>>>>>>> b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
>>>>>>>>> index d7cec7f0dde0..69e2a5e84dae 100644
>>>>>>>>> --- a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
>>>>>>>>> +++ b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
>>>>>>>>> @@ -139,6 +139,10 @@ void a6xx_gmu_set_freq(struct msm_gpu *gpu,
>>>>>>>>> struct dev_pm_opp *opp)
>>>>>>>>> return;
>>>>>>>>> }
>>>>>>>>>
>>>>>>>>> + dev_pm_opp_set_opp(&gpu->pdev->dev, opp);
>>>>>>>>> +
>>>>>>>>> + usleep_range(300, 500);
>>>>>>>>> +
>>>>>>>>
>>>>>
>>>>> I am a bit confused. We don't define a power domain for gpu in dt,
>>>>> correct? Then what exactly set_opp do here? Do you think this usleep is
>>>>> what is helping here somehow to mask the issue?
>>> The power domains (for cx and gx) are defined in the GMU DT, the OPPs in
>>> the GPU DT. For the sake of simplicity I'll refer to the lowest
>>> frequency (257000000) and OPP level (RPMH_REGULATOR_LEVEL_LOW_SVS) as
>>> the "min" state, and the highest frequency (710000000) and OPP level
>>> (RPMH_REGULATOR_LEVEL_TURBO_L1) as the "max" state. These are defined in
>>> sdm845.dtsi under the gpu node.
>>>
>>> The new devfreq behaviour unmasks what I think is a driver bug, it
>>> inadvertently puts much more strain on the GPU regulators than they
>>> usually get. With the new behaviour the GPU jumps from it's min state to
>>> the max state and back again extremely rapidly under workloads as small
>>> as refreshing UI. Where previously the GPU would rarely if ever go above
>>> 342MHz when interacting with the device, it now jumps between min and
>>> max many times per second.
>>>
>>> If my understanding is correct, the current implementation of the GMU
>>> set freq is the following:
>>> - Get OPP for frequency to set
>>> - Push the frequency to the GMU - immediately updating the core clock
>>> - Call dev_pm_opp_set_opp() which triggers a notify chain, this winds
>>> up somewhere in power management code and causes the gx regulator level
>>> to be updated
>>
>> Nope. dev_pm_opp_set_opp() sets the bandwidth for gpu and nothing else.
>> We were using a different api earlier which got deprecated -
>> dev_pm_opp_set_bw().
Huh ok, thanks for the correction. So it's the GMU writes in this function which cause the regulator to be adjusted?
>
> Hmm, ok, if this is just setting icc vote, the order shouldn't be too important.
>
> I guess GMU then is the one that is controlling the regulator(s) to
> ensure adequate voltage for the requested freq?
>
> But the GMU fw should be the same for a618 and a630, md5sum of what
> I'm using (from linux-firmware):
>
> ab20135f7adf48e0f344282a37da80e4 a630_gmu.bin
Same here.
>
>>>
>>> The regulator will then take some time to reach it's new voltage level
>>> and stabilise. I believe that rapid transitions between min and max
>>> state - in combination with the increased current load from the GPU core
>>> - lead to the regulator becoming unstable (e.g. when it's requested to
>>> transition from it's lowest to highest levels immediately after
>>> transitioning down), the unstable voltage causes the GPU to crash.
>>>
>>> Sillicon lottery will of course play a role here - this is very much an
>>> edge case and would definitely be different on a per-device and even
>>> per-unit basis.
>>>>
>>>> Hmm, I thought "opp-level = RPMH_REGULATOR_LEVEL_*" did *something*,
>>>> but tbh I'm not sure exactly what..
>>>>
>>>>> I feel we should just leave the new dcvs feature (shall we call it NAP?)
>>>>> disabled for a630 (and 10ms devfreq interval), until this is root
>>>>> caused.
>>> I believe this hacky workaround expresses the root cause of the issue
>>> quite clearly, by setting the OPP first and allowing the gx regulator to
>>> become stable before telling the GPU to change clock speeds, we avoid
>>> the edge case and prevent the crashes.
>>>
>>> I took some rough measurements by adding logging to msm_devfreq_idle and
>>> causing UI updates for ~20 seconds and that function is being called
>>> about 30 times per second, this means the GPU is transitioning between
>>> min (idle) state and max (active / boost) state at that frequency and
>>> causing the issue I described above. It's likely that the usleep is
>>> helping to mask this behaviour.
>>>
>>> I hope this serves as a slightly better explanation of what I perceive
>>> to be the issue, I realise my previous explanations were not very
>>> adequate, I apologise for all the noise.
>>>>
>>>> I suppose "NAP" is a reasonable name.
>>>>
>>>> But I think that reverting to previous behavior would not be enough,
>>>> there is nothing stopping devfreq from jumping from min to max freq,
>>>> which AFAIU should be enough to trigger this. I guess that there just
>>>> hasn't been enough testing with different game workloads on those
>>>> phones to trigger this.
>>> Ack
>>>>
>>>> That said, I haven't seen similar issues on my sdm850 laptop, where I
>>>> defn have triggered min->max freq transitions.. I guess it would be
>>>> interesting to know if this issue could be reproduced on db845c, or if
>>>> it really is board specific?
>>> My db845c arrives this week, I'll definitely try and reproduce this.
>>>>
>>>> To workaround, I think we'd need to implement some way to limit that
>>>> maximum frequency jump (and then use delayed work to continue ramping
>>>> up the freq over time until we hit the target).. which seems like a
>>>> lot of work if this is just a board(s) specific workaround and isn't
>>>> needed once CPR is supported
>>> Based on my reasoning above, I came up with the following: reducing
>>> thrashing by preventing rapid idle/active transitions. The minimum
>>> active time of 30ms was just used for testing, I think some number
>>> between 2 and 4 frames would be a sensible choice - the higher the safer.
>>>
>>> diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
>>> b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
>>> index d7cec7f0dde0..87f2d1085c3e 100644
>>> --- a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
>>> +++ b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
>>> @@ -139,6 +139,8 @@ void a6xx_gmu_set_freq(struct msm_gpu *gpu, struct
>>> dev_pm_opp *opp)
>>> return;
>>> }
>>>
>>> + dev_pm_opp_set_opp(&gpu->pdev->dev, opp);
>>> +
>>> gmu_write(gmu, REG_A6XX_GMU_DCVS_ACK_OPTION, 0);
>>>
>>> gmu_write(gmu, REG_A6XX_GMU_DCVS_PERF_SETTING,
>>> @@ -158,7 +160,6 @@ void a6xx_gmu_set_freq(struct msm_gpu *gpu, struct
>>> dev_pm_opp *opp)
>>> if (ret)
>>> dev_err(gmu->dev, "GMU set GPU frequency error: %d\n",
>>> ret);
>>>
>>> - dev_pm_opp_set_opp(&gpu->pdev->dev, opp);
>>> pm_runtime_put(gmu->dev);
>>> }
>>>
>>> diff --git a/drivers/gpu/drm/msm/msm_gpu.h b/drivers/gpu/drm/msm/msm_gpu.h
>>> index 0e4b45bff2e6..0e2293bcb46d 100644
>>> --- a/drivers/gpu/drm/msm/msm_gpu.h
>>> +++ b/drivers/gpu/drm/msm/msm_gpu.h
>>> @@ -99,8 +99,8 @@ struct msm_gpu_devfreq {
>>> /** time: Time of last sampling period. */
>>> ktime_t time;
>>>
>>> - /** idle_time: Time of last transition to idle: */
>>> - ktime_t idle_time;
>>> + /** transition_time: Time of last transition between
>>> idle/active: */
>>> + ktime_t transition_time;
>>>
>>> /**
>>> * idle_freq:
>>> diff --git a/drivers/gpu/drm/msm/msm_gpu_devfreq.c
>>> b/drivers/gpu/drm/msm/msm_gpu_devfreq.c
>>> index 0a1ee20296a2..774a7be33e7a 100644
>>> --- a/drivers/gpu/drm/msm/msm_gpu_devfreq.c
>>> +++ b/drivers/gpu/drm/msm/msm_gpu_devfreq.c
>>> @@ -157,7 +157,7 @@ void msm_devfreq_active(struct msm_gpu *gpu)
>>> */
>>> mutex_lock(&df->devfreq->lock);
>>>
>>> - idle_time = ktime_to_ms(ktime_sub(ktime_get(), df->idle_time));
>>> + idle_time = ktime_to_ms(ktime_sub(ktime_get(),
>>> df->transition_time));
>>>
>>> /*
>>> * If we've been idle for a significant fraction of a polling
>>> @@ -168,7 +168,7 @@ void msm_devfreq_active(struct msm_gpu *gpu)
>>> target_freq *= 2;
>>> }
>>>
>>> - df->idle_freq = 0;
>>> +       df->transition_time = ktime_get();
>>>
>>> msm_devfreq_target(&gpu->pdev->dev, &target_freq, 0);
>>>
>>> @@ -185,6 +185,16 @@ void msm_devfreq_idle(struct msm_gpu *gpu)
>>> {
>>> struct msm_gpu_devfreq *df = &gpu->devfreq;
>>> unsigned long idle_freq, target_freq = 0;
>>> + unsigned int active_time;
>>> +
>>> + active_time = ktime_to_ms(ktime_sub(ktime_get(),
>>> df->transition_time));
>>> + /*
>>> + * Don't go back to idle unless we've been active for at least 30ms
>>> + * to avoid thrashing.
>>
>> This basically defeats the purpose of this feature! At least, we should
>> keep this '30' gpu specific. Does a Kconfig makes sense here?? BTW, if
>> 300us was helping you earlier why do you want it to be 30ms now?
Previously I thought that the issue was related specifically to the idle/active transition, hence sleeping to let
the regulator catch up. Whilst that masked the issue it didn't *fix* it; I now think it's actually due to the repeated
transitions between idle and active states.

Enforcing that the GPU stays active for at least two frames should still achieve the intended goal of reducing latency,
and it fixes the issue more reliably.

AFAIU from reading the commit description, the goal of the devfreq tuning is to reduce latency by quickly bursting up
when there's user activity; by telling the GPU to stay active for longer we shouldn't impede this behaviour at all.
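
(Roughly what I have in mind - the 60Hz refresh rate and the 2-frame choice below are
assumptions, and this would replace the hard-coded 30 in my earlier diff:)

	/* minimum time to stay active before we allow an idle transition */
	#define MIN_ACTIVE_FRAMES	2
	#define ASSUMED_REFRESH_HZ	60	/* assumption; ideally derived from the display */
	#define MIN_ACTIVE_MS		(MIN_ACTIVE_FRAMES * MSEC_PER_SEC / ASSUMED_REFRESH_HZ)

	if (ktime_to_ms(ktime_sub(ktime_get(), df->transition_time)) < MIN_ACTIVE_MS)
		return;
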
>
> Let's not kconfig, a single kernel should be able to work on multiple devices.
>
> What I'm less sure about is whether this is a630 specific (in which
> case something in the gpulist table would work) or board specific (in
> which case I guess it needs to somehow come from the dtb)
>
> BR,
> -R
>

--
Kind Regards,
Caleb (they/them)

2021-08-10 01:30:59

by Rob Clark

[permalink] [raw]
Subject: Re: [PATCH] drm/msm: Disable frequency clamping on a630

On Mon, Aug 9, 2021 at 1:35 PM Caleb Connolly <[email protected]> wrote:
>
>
>
> On 09/08/2021 18:58, Rob Clark wrote:
> > On Mon, Aug 9, 2021 at 10:28 AM Akhil P Oommen <[email protected]> wrote:
> >>
> >> On 8/9/2021 9:48 PM, Caleb Connolly wrote:
> >>>
> >>>
> >>> On 09/08/2021 17:12, Rob Clark wrote:
> >>>> On Mon, Aug 9, 2021 at 7:52 AM Akhil P Oommen <[email protected]>
> >>>> wrote:
> >>>>>
> >>>>> On 8/8/2021 10:22 PM, Rob Clark wrote:
> >>>>>> On Sun, Aug 8, 2021 at 7:33 AM Caleb Connolly
> >>>>>> <[email protected]> wrote:
> >>>>>>>
> >>>>>>>
> >>>>>>>
> >>>>>>> On 07/08/2021 21:04, Rob Clark wrote:
> >>>>>>>> On Sat, Aug 7, 2021 at 12:21 PM Caleb Connolly
> >>>>>>>> <[email protected]> wrote:
> >>>>>>>>>
> >>>>>>>>> Hi Rob, Akhil,
> >>>>>>>>>
> >>>>>>>>> On 29/07/2021 21:53, Rob Clark wrote:
> >>>>>>>>>> On Thu, Jul 29, 2021 at 1:28 PM Caleb Connolly
> >>>>>>>>>> <[email protected]> wrote:
> >>>>>>>>>>>
> >>>>>>>>>>>
> >>>>>>>>>>>
> >>>>>>>>>>> On 29/07/2021 21:24, Rob Clark wrote:
> >>>>>>>>>>>> On Thu, Jul 29, 2021 at 1:06 PM Caleb Connolly
> >>>>>>>>>>>> <[email protected]> wrote:
> >>>>>>>>>>>>>
> >>>>>>>>>>>>> Hi Rob,
> >>>>>>>>>>>>>
> >>>>>>>>>>>>> I've done some more testing! It looks like before that patch
> >>>>>>>>>>>>> ("drm/msm: Devfreq tuning") the GPU would never get above
> >>>>>>>>>>>>> the second frequency in the OPP table (342MHz) (at least, not
> >>>>>>>>>>>>> in glxgears). With the patch applied it would more
> >>>>>>>>>>>>> aggressively jump up to the max frequency which seems to be
> >>>>>>>>>>>>> unstable at the default regulator voltages.
> >>>>>>>>>>>>
> >>>>>>>>>>>> *ohh*, yeah, ok, that would explain it
> >>>>>>>>>>>>
> >>>>>>>>>>>>> Hacking the pm8005 s1 regulator (which provides VDD_GFX) up
> >>>>>>>>>>>>> to 0.988v (instead of the stock 0.516v) makes the GPU stable
> >>>>>>>>>>>>> at the higher frequencies.
> >>>>>>>>>>>>>
> >>>>>>>>>>>>> Applying this patch reverts the behaviour, and the GPU never
> >>>>>>>>>>>>> goes above 342MHz in glxgears, losing ~30% performance in
> >>>>>>>>>>>>> glxgear.
> >>>>>>>>>>>>>
> >>>>>>>>>>>>> I think (?) that enabling CPR support would be the proper
> >>>>>>>>>>>>> solution to this - that would ensure that the regulators run
> >>>>>>>>>>>>> at the voltage the hardware needs to be stable.
> >>>>>>>>>>>>>
> >>>>>>>>>>>>> Is hacking the voltage higher (although ideally not quite
> >>>>>>>>>>>>> that high) an acceptable short term solution until we have
> >>>>>>>>>>>>> CPR? Or would it be safer to just not make use of the higher
> >>>>>>>>>>>>> frequencies on a630 for now?
> >>>>>>>>>>>>>
> >>>>>>>>>>>>
> >>>>>>>>>>>> tbh, I'm not sure about the regulator stuff and CPR.. Bjorn is
> >>>>>>>>>>>> already
> >>>>>>>>>>>> on CC and I added sboyd, maybe one of them knows better.
> >>>>>>>>>>>>
> >>>>>>>>>>>> In the short term, removing the higher problematic OPPs from
> >>>>>>>>>>>> dts might
> >>>>>>>>>>>> be a better option than this patch (which I'm dropping), since
> >>>>>>>>>>>> there
> >>>>>>>>>>>> is nothing stopping other workloads from hitting higher OPPs.
> >>>>>>>>>>> Oh yeah that sounds like a more sensible workaround than mine .
> >>>>>>>>>>>>
> >>>>>>>>>>>> I'm slightly curious why I didn't have problems at higher OPPs
> >>>>>>>>>>>> on my
> >>>>>>>>>>>> c630 laptop (sdm850)
> >>>>>>>>>>> Perhaps you won the sillicon lottery - iirc sdm850 is binned
> >>>>>>>>>>> for higher clocks as is out of the factory.
> >>>>>>>>>>>
> >>>>>>>>>>> Would it be best to drop the OPPs for all devices? Or just
> >>>>>>>>>>> those affected? I guess it's possible another c630 might
> >>>>>>>>>>> crash where yours doesn't?
> >>>>>>>>>>
> >>>>>>>>>> I've not heard any reports of similar issues from the handful of
> >>>>>>>>>> other
> >>>>>>>>>> folks with c630's on #aarch64-laptops.. but I can't really say
> >>>>>>>>>> if that
> >>>>>>>>>> is luck or not.
> >>>>>>>>> It looks like this affects at least the OnePlus 6 and PocoPhone
> >>>>>>>>> F1, I've done some more poking and the following diff
> >>>>>>>>> seems to fix the stability issues completely, it seems the delay
> >>>>>>>>> is required to let the update propagate.
> >>>>>>>>>
> >>>>>>>>> This doesn't feel like the right fix, but hopefully it's enough
> >>>>>>>>> to come up with a better solution than disabling the new
> >>>>>>>>> devfreq behaviour on a630.
> >>>>>>>>>
> >>>>>>>>> diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
> >>>>>>>>> b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
> >>>>>>>>> index d7cec7f0dde0..69e2a5e84dae 100644
> >>>>>>>>> --- a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
> >>>>>>>>> +++ b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
> >>>>>>>>> @@ -139,6 +139,10 @@ void a6xx_gmu_set_freq(struct msm_gpu *gpu,
> >>>>>>>>> struct dev_pm_opp *opp)
> >>>>>>>>> return;
> >>>>>>>>> }
> >>>>>>>>>
> >>>>>>>>> + dev_pm_opp_set_opp(&gpu->pdev->dev, opp);
> >>>>>>>>> +
> >>>>>>>>> + usleep_range(300, 500);
> >>>>>>>>> +
> >>>>>>>>
> >>>>>
> >>>>> I am a bit confused. We don't define a power domain for gpu in dt,
> >>>>> correct? Then what exactly set_opp do here? Do you think this usleep is
> >>>>> what is helping here somehow to mask the issue?
> >>> The power domains (for cx and gx) are defined in the GMU DT, the OPPs in
> >>> the GPU DT. For the sake of simplicity I'll refer to the lowest
> >>> frequency (257000000) and OPP level (RPMH_REGULATOR_LEVEL_LOW_SVS) as
> >>> the "min" state, and the highest frequency (710000000) and OPP level
> >>> (RPMH_REGULATOR_LEVEL_TURBO_L1) as the "max" state. These are defined in
> >>> sdm845.dtsi under the gpu node.
> >>>
> >>> The new devfreq behaviour unmasks what I think is a driver bug, it
> >>> inadvertently puts much more strain on the GPU regulators than they
> >>> usually get. With the new behaviour the GPU jumps from it's min state to
> >>> the max state and back again extremely rapidly under workloads as small
> >>> as refreshing UI. Where previously the GPU would rarely if ever go above
> >>> 342MHz when interacting with the device, it now jumps between min and
> >>> max many times per second.
> >>>
> >>> If my understanding is correct, the current implementation of the GMU
> >>> set freq is the following:
> >>> - Get OPP for frequency to set
> >>> - Push the frequency to the GMU - immediately updating the core clock
> >>> - Call dev_pm_opp_set_opp(), which triggers a notify chain; this winds
> >>> up somewhere in power management code and causes the gx regulator level
> >>> to be updated
> >>
> >> Nope. dev_pm_opp_set_opp() sets the bandwidth for gpu and nothing else.
> >> We were using a different api earlier which got deprecated -
> >> dev_pm_opp_set_bw().
> Huh ok, thanks for the correction. So it's the GMU writes in this function which cause the regulator to be adjusted?
> >
> > Hmm, ok, if this is just setting icc vote, the order shouldn't be too important.
> >
> > I guess GMU then is the one that is controlling the regulator(s) to
> > ensure adequate voltage for the requested freq?
> >
> > But the GMU fw should be the same for a618 and a630, md5sum of what
> > I'm using (from linux-firmware):
> >
> > ab20135f7adf48e0f344282a37da80e4 a630_gmu.bin
> Same here.
> >
> >>>
> >>> The regulator will then take some time to reach its new voltage level
> >>> and stabilise. I believe that rapid transitions between the min and max
> >>> states - in combination with the increased current load from the GPU core
> >>> - lead to the regulator becoming unstable (e.g. when it's requested to
> >>> transition from its lowest to its highest level immediately after
> >>> transitioning down), and the unstable voltage causes the GPU to crash.
> >>>
> >>> Silicon lottery will of course play a role here - this is very much an
> >>> edge case and would definitely be different on a per-device and even
> >>> per-unit basis.
> >>>>
> >>>> Hmm, I thought "opp-level = RPMH_REGULATOR_LEVEL_*" did *something*,
> >>>> but tbh I'm not sure exactly what..
> >>>>
> >>>>> I feel we should just leave the new dcvs feature (shall we call it NAP?)
> >>>>> disabled for a630 (and 10ms devfreq interval), until this is root
> >>>>> caused.
> >>> I believe this hacky workaround expresses the root cause of the issue
> >>> quite clearly: by setting the OPP first and allowing the gx regulator to
> >>> become stable before telling the GPU to change clock speeds, we avoid
> >>> the edge case and prevent the crashes.
> >>>
> >>> I took some rough measurements by adding logging to msm_devfreq_idle and
> >>> causing UI updates for ~20 seconds: that function is being called about
> >>> 30 times per second, which means the GPU is transitioning between the
> >>> min (idle) state and the max (active / boost) state at that rate,
> >>> causing the issue I described above. It's likely that the usleep is
> >>> helping to mask this behaviour.
> >>>
> >>> I hope this serves as a slightly better explanation of what I perceive
> >>> to be the issue. I realise my previous explanations were not very
> >>> adequate; I apologise for all the noise.
> >>>>
> >>>> I suppose "NAP" is a reasonable name.
> >>>>
> >>>> But I think that reverting to previous behavior would not be enough;
> >>>> there is nothing stopping devfreq from jumping from min to max freq,
> >>>> which AFAIU should be enough to trigger this. I guess that there just
> >>>> hasn't been enough testing with different game workloads on those
> >>>> phones to trigger this.
> >>> Ack
> >>>>
> >>>> That said, I haven't seen similar issues on my sdm850 laptop, where I
> >>>> definitely have triggered min->max freq transitions.. I guess it would be
> >>>> interesting to know if this issue could be reproduced on db845c, or if
> >>>> it really is board specific?
> >>> My db845c arrives this week, I'll definitely try and reproduce this.
> >>>>
> >>>> To work around this, I think we'd need to implement some way to limit that
> >>>> maximum frequency jump (and then use delayed work to continue ramping
> >>>> up the freq over time until we hit the target).. which seems like a
> >>>> lot of work if this is just a board(s) specific workaround and isn't
> >>>> needed once CPR is supported
> >>> Based on my reasoning above, I came up with the following: reducing
> >>> thrashing by preventing rapid idle/active transitions. The minimum
> >>> active time of 30ms was just used for testing; I think some number
> >>> between 2 and 4 frames would be a sensible choice - the higher the safer.
> >>>
> >>> diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
> >>> b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
> >>> index d7cec7f0dde0..87f2d1085c3e 100644
> >>> --- a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
> >>> +++ b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
> >>> @@ -139,6 +139,8 @@ void a6xx_gmu_set_freq(struct msm_gpu *gpu, struct
> >>> dev_pm_opp *opp)
> >>> return;
> >>> }
> >>>
> >>> + dev_pm_opp_set_opp(&gpu->pdev->dev, opp);
> >>> +
> >>> gmu_write(gmu, REG_A6XX_GMU_DCVS_ACK_OPTION, 0);
> >>>
> >>> gmu_write(gmu, REG_A6XX_GMU_DCVS_PERF_SETTING,
> >>> @@ -158,7 +160,6 @@ void a6xx_gmu_set_freq(struct msm_gpu *gpu, struct
> >>> dev_pm_opp *opp)
> >>> if (ret)
> >>> dev_err(gmu->dev, "GMU set GPU frequency error: %d\n",
> >>> ret);
> >>>
> >>> - dev_pm_opp_set_opp(&gpu->pdev->dev, opp);
> >>> pm_runtime_put(gmu->dev);
> >>> }
> >>>
> >>> diff --git a/drivers/gpu/drm/msm/msm_gpu.h b/drivers/gpu/drm/msm/msm_gpu.h
> >>> index 0e4b45bff2e6..0e2293bcb46d 100644
> >>> --- a/drivers/gpu/drm/msm/msm_gpu.h
> >>> +++ b/drivers/gpu/drm/msm/msm_gpu.h
> >>> @@ -99,8 +99,8 @@ struct msm_gpu_devfreq {
> >>> /** time: Time of last sampling period. */
> >>> ktime_t time;
> >>>
> >>> - /** idle_time: Time of last transition to idle: */
> >>> - ktime_t idle_time;
> >>> + /** transition_time: Time of last transition between
> >>> idle/active: */
> >>> + ktime_t transition_time;
> >>>
> >>> /**
> >>> * idle_freq:
> >>> diff --git a/drivers/gpu/drm/msm/msm_gpu_devfreq.c
> >>> b/drivers/gpu/drm/msm/msm_gpu_devfreq.c
> >>> index 0a1ee20296a2..774a7be33e7a 100644
> >>> --- a/drivers/gpu/drm/msm/msm_gpu_devfreq.c
> >>> +++ b/drivers/gpu/drm/msm/msm_gpu_devfreq.c
> >>> @@ -157,7 +157,7 @@ void msm_devfreq_active(struct msm_gpu *gpu)
> >>> */
> >>> mutex_lock(&df->devfreq->lock);
> >>>
> >>> - idle_time = ktime_to_ms(ktime_sub(ktime_get(), df->idle_time));
> >>> + idle_time = ktime_to_ms(ktime_sub(ktime_get(),
> >>> df->transition_time));
> >>>
> >>> /*
> >>> * If we've been idle for a significant fraction of a polling
> >>> @@ -168,7 +168,7 @@ void msm_devfreq_active(struct msm_gpu *gpu)
> >>> target_freq *= 2;
> >>> }
> >>>
> >>> - df->idle_freq = 0;
> >>> + df->transition_time = ktime_get();;
> >>>
> >>> msm_devfreq_target(&gpu->pdev->dev, &target_freq, 0);
> >>>
> >>> @@ -185,6 +185,16 @@ void msm_devfreq_idle(struct msm_gpu *gpu)
> >>> {
> >>> struct msm_gpu_devfreq *df = &gpu->devfreq;
> >>> unsigned long idle_freq, target_freq = 0;
> >>> + unsigned int active_time;
> >>> +
> >>> + active_time = ktime_to_ms(ktime_sub(ktime_get(),
> >>> df->transition_time));
> >>> + /*
> >>> + * Don't go back to idle unless we've been active for at least 30ms
> >>> + * to avoid thrashing.
> >>
> >> This basically defeats the purpose of this feature! At least, we should
> >> keep this '30' gpu specific. Does a Kconfig make sense here?? BTW, if
> >> 300us was helping you earlier why do you want it to be 30ms now?
> Previously I thought that the issue was specifically related to the transition from idle to active, hence sleeping to let
> the regulator catch up. Whilst that masked the issue it didn't *fix* it; I now think it's actually due to the repeated
> transitions between the idle and active states.
>
> Enforcing that the GPU stays active for at least two frames should still achieve the intended goal of reducing latency,
> while fixing the issue more reliably.
>
> AFAIU from reading the commit description, the goal of the devfreq tuning is to reduce latency by quickly bursting up
> when there's user activity; by telling the GPU to stay active for longer we shouldn't impede this behaviour at all.

Well, there are a couple of parts to it. One thing it was intended to
fix was a bad devfreq behavior I was seeing with, for example, games
that throttle themselves to 30fps, rendering one 16ms frame every
other vblank cycle. Previously devfreq would ramp up to max just as
the GPU reached the end of rendering a frame, then sit at fmax while
the GPU was doing nothing for the next 16ms, and then ramp back down
to fmin just as the GPU got more work to do. So it was nearly 180
degrees out of phase with where you'd want it to be
increasing/decreasing the GPU freq.

The longer polling interval is meant to smooth that out. Clamping to
fmin while the GPU is idle offsets the fact that it would otherwise
take the GPU longer to ramp down (and it is pointless to keep the GPU
at a high freq when it isn't doing anything), and boosting above the
freq devfreq would have picked, when the GPU has been idle for a
while, offsets the slower ramp-up on user input.
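
To illustrate, a simplified sketch of the idle/active handling
described above (not the actual driver code; locking and error
handling omitted, names follow the diffs quoted earlier in this
thread, and get_current_freq() is a stand-in helper made up for the
sketch):

static void sketch_devfreq_idle(struct msm_gpu *gpu)
{
        struct msm_gpu_devfreq *df = &gpu->devfreq;
        unsigned long target_freq = 0;

        /* remember the freq devfreq had picked before going idle
         * (get_current_freq() is a stand-in for the sketch)
         */
        df->idle_freq = get_current_freq(gpu);
        df->idle_time = ktime_get();

        /* clamp to minimum while the GPU has nothing to do */
        msm_devfreq_target(&gpu->pdev->dev, &target_freq, 0);
}

static void sketch_devfreq_active(struct msm_gpu *gpu)
{
        struct msm_gpu_devfreq *df = &gpu->devfreq;
        unsigned long target_freq = df->idle_freq;
        unsigned int idle_time;

        idle_time = ktime_to_ms(ktime_sub(ktime_get(), df->idle_time));

        /*
         * If we were idle for a significant fraction of the polling
         * interval, boost above the freq devfreq had picked to offset
         * the slower ramp-up from the longer polling interval.
         */
        if (idle_time > msm_devfreq_profile.polling_ms / 2)
                target_freq *= 2;

        df->idle_freq = 0;

        /* go back to (or above) the pre-idle frequency */
        msm_devfreq_target(&gpu->pdev->dev, &target_freq, 0);
}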

So the 30ms delay for clamping to fmin would defeat one part of that.

We could perhaps somehow disable the clamping to fmin for certain
boards and/or gpus, which would possibly lose a bit of power savings
but otherwise be ok. But I'm not clear whether this is a
board-specific issue (ie. are these phones using different PMICs
compared to the sdm850 laptops and db845c? Or is there some difference
in which power rail powers the GPU?)

I think it was mentioned earlier that CPR should help (AFAIU that is
some sort of hw closed-loop voltage regulation?), so maybe this is
just a short-term workaround?

BR,
-R

2021-09-03 19:49:58

by John Stultz

[permalink] [raw]
Subject: Re: [PATCH] drm/msm: Disable frequency clamping on a630

On Thu, Jul 29, 2021 at 1:49 PM Rob Clark <[email protected]> wrote:
> On Thu, Jul 29, 2021 at 1:28 PM Caleb Connolly
> <[email protected]> wrote:
> > On 29/07/2021 21:24, Rob Clark wrote:
> > > On Thu, Jul 29, 2021 at 1:06 PM Caleb Connolly
> > > <[email protected]> wrote:
> > >>
> > >> Hi Rob,
> > >>
> > >> I've done some more testing! It looks like before that patch ("drm/msm: Devfreq tuning") the GPU would never get above
> > >> the second frequency in the OPP table (342MHz) (at least, not in glxgears). With the patch applied it would more
> > >> aggressively jump up to the max frequency which seems to be unstable at the default regulator voltages.
> > >
> > > *ohh*, yeah, ok, that would explain it
> > >
> > >> Hacking the pm8005 s1 regulator (which provides VDD_GFX) up to 0.988v (instead of the stock 0.516v) makes the GPU stable
> > >> at the higher frequencies.
> > >>
> > >> Applying this patch reverts the behaviour, and the GPU never goes above 342MHz in glxgears, losing ~30% performance in
> > >> glxgear.
> > >>
> > >> I think (?) that enabling CPR support would be the proper solution to this - that would ensure that the regulators run
> > >> at the voltage the hardware needs to be stable.
> > >>
> > >> Is hacking the voltage higher (although ideally not quite that high) an acceptable short term solution until we have
> > >> CPR? Or would it be safer to just not make use of the higher frequencies on a630 for now?
> > >>
> > >
> > > tbh, I'm not sure about the regulator stuff and CPR.. Bjorn is already
> > > on CC and I added sboyd, maybe one of them knows better.
> > >
> > > In the short term, removing the higher problematic OPPs from dts might
> > > be a better option than this patch (which I'm dropping), since there
> > > is nothing stopping other workloads from hitting higher OPPs.
> > Oh yeah that sounds like a more sensible workaround than mine .
> > >
> > > I'm slightly curious why I didn't have problems at higher OPPs on my
> > > c630 laptop (sdm850)
> > Perhaps you won the sillicon lottery - iirc sdm850 is binned for higher clocks as is out of the factory.
> >
> > Would it be best to drop the OPPs for all devices? Or just those affected? I guess it's possible another c630 might
> > crash where yours doesn't?
>
> I've not heard any reports of similar issues from the handful of other
> folks with c630's on #aarch64-laptops.. but I can't really say if that
> is luck or not.
>
> Maybe just remove it for affected devices? But I'll defer to Bjorn.

Just as another datapoint, I was just marveling at how suddenly smooth
the UI was performing on db845c and Caleb pointed me at the "drm/msm:
Devfreq tuning" patch as the likely cause of the improvement, and
mid-discussion my board crashed into USB crash mode:
[ 146.157696][ C0] adreno 5000000.gpu: CP | AHB bus error
[ 146.163303][ C0] adreno 5000000.gpu: CP | AHB bus error
[ 146.168837][ C0] adreno 5000000.gpu: RBBM | ATB bus overflow
[ 146.174960][ C0] adreno 5000000.gpu: CP | HW fault | status=0x00000000
[ 146.181917][ C0] adreno 5000000.gpu: CP | AHB bus error
[ 146.187547][ C0] adreno 5000000.gpu: CP illegal instruction error
[ 146.194009][ C0] adreno 5000000.gpu: CP | AHB bus error
[ 146.308909][ T9] Internal error: synchronous external abort:
96000010 [#1] PREEMPT SMP
[ 146.317150][ T9] Modules linked in:
[ 146.320941][ T9] CPU: 3 PID: 9 Comm: kworker/u16:1 Tainted: G
W 5.14.0-mainline-06795-g42b258c2275c #24
[ 146.331974][ T9] Hardware name: Thundercomm Dragonboar
Format: Log Type - Time(microsec) - Message - Optional Info
Log Type: B - Since Boot(Power On Reset), D - Delta, S - Statistic
S - QC_IMAGE_VERSION_STRING=BOOT.XF.2.0-00371-SDM845LZB-1
S - IMAGE_VARIANT_STRING=SDM845LA
S - OEM_IMAGE_VERSION_STRING=TSBJ-FA-PC-02170

So Caleb sent me to this thread. :)

I'm still trying to trip it again, but it does seem like db845c is
also seeing some stability issues with Linus' HEAD.

thanks
-john

2021-09-03 20:27:20

by Rob Clark

[permalink] [raw]
Subject: Re: [PATCH] drm/msm: Disable frequency clamping on a630

On Fri, Sep 3, 2021 at 12:39 PM John Stultz <[email protected]> wrote:
>
> On Thu, Jul 29, 2021 at 1:49 PM Rob Clark <[email protected]> wrote:
> > On Thu, Jul 29, 2021 at 1:28 PM Caleb Connolly
> > <[email protected]> wrote:
> > > On 29/07/2021 21:24, Rob Clark wrote:
> > > > On Thu, Jul 29, 2021 at 1:06 PM Caleb Connolly
> > > > <[email protected]> wrote:
> > > >>
> > > >> Hi Rob,
> > > >>
> > > >> I've done some more testing! It looks like before that patch ("drm/msm: Devfreq tuning") the GPU would never get above
> > > >> the second frequency in the OPP table (342MHz) (at least, not in glxgears). With the patch applied it would more
> > > >> aggressively jump up to the max frequency which seems to be unstable at the default regulator voltages.
> > > >
> > > > *ohh*, yeah, ok, that would explain it
> > > >
> > > >> Hacking the pm8005 s1 regulator (which provides VDD_GFX) up to 0.988v (instead of the stock 0.516v) makes the GPU stable
> > > >> at the higher frequencies.
> > > >>
> > > >> Applying this patch reverts the behaviour, and the GPU never goes above 342MHz in glxgears, losing ~30% performance in
> > > >> glxgear.
> > > >>
> > > >> I think (?) that enabling CPR support would be the proper solution to this - that would ensure that the regulators run
> > > >> at the voltage the hardware needs to be stable.
> > > >>
> > > >> Is hacking the voltage higher (although ideally not quite that high) an acceptable short term solution until we have
> > > >> CPR? Or would it be safer to just not make use of the higher frequencies on a630 for now?
> > > >>
> > > >
> > > > tbh, I'm not sure about the regulator stuff and CPR.. Bjorn is already
> > > > on CC and I added sboyd, maybe one of them knows better.
> > > >
> > > > In the short term, removing the higher problematic OPPs from dts might
> > > > be a better option than this patch (which I'm dropping), since there
> > > > is nothing stopping other workloads from hitting higher OPPs.
> > > Oh yeah that sounds like a more sensible workaround than mine .
> > > >
> > > > I'm slightly curious why I didn't have problems at higher OPPs on my
> > > > c630 laptop (sdm850)
> > > Perhaps you won the sillicon lottery - iirc sdm850 is binned for higher clocks as is out of the factory.
> > >
> > > Would it be best to drop the OPPs for all devices? Or just those affected? I guess it's possible another c630 might
> > > crash where yours doesn't?
> >
> > I've not heard any reports of similar issues from the handful of other
> > folks with c630's on #aarch64-laptops.. but I can't really say if that
> > is luck or not.
> >
> > Maybe just remove it for affected devices? But I'll defer to Bjorn.
>
> Just as another datapoint, I was just marveling at how suddenly smooth
> the UI was performing on db845c and Caleb pointed me at the "drm/msm:
> Devfreq tuning" patch as the likely cause of the improvement, and
> mid-discussion my board crashed into USB crash mode:
> [ 146.157696][ C0] adreno 5000000.gpu: CP | AHB bus error
> [ 146.163303][ C0] adreno 5000000.gpu: CP | AHB bus error
> [ 146.168837][ C0] adreno 5000000.gpu: RBBM | ATB bus overflow
> [ 146.174960][ C0] adreno 5000000.gpu: CP | HW fault | status=0x00000000
> [ 146.181917][ C0] adreno 5000000.gpu: CP | AHB bus error
> [ 146.187547][ C0] adreno 5000000.gpu: CP illegal instruction error
> [ 146.194009][ C0] adreno 5000000.gpu: CP | AHB bus error
> [ 146.308909][ T9] Internal error: synchronous external abort:
> 96000010 [#1] PREEMPT SMP
> [ 146.317150][ T9] Modules linked in:
> [ 146.320941][ T9] CPU: 3 PID: 9 Comm: kworker/u16:1 Tainted: G
> W 5.14.0-mainline-06795-g42b258c2275c #24
> [ 146.331974][ T9] Hardware name: Thundercomm Dragonboar
> Format: Log Type - Time(microsec) - Message - Optional Info
> Log Type: B - Since Boot(Power On Reset), D - Delta, S - Statistic
> S - QC_IMAGE_VERSION_STRING=BOOT.XF.2.0-00371-SDM845LZB-1
> S - IMAGE_VARIANT_STRING=SDM845LA
> S - OEM_IMAGE_VERSION_STRING=TSBJ-FA-PC-02170
>
> So Caleb sent me to this thread. :)
>
> I'm still trying to trip it again, but it does seem like db845c is
> also seeing some stability issues with Linus' HEAD.
>

Caleb's original pastebin seems to have expired (or at least require
some sort of ubuntu login to access).. were the crashes he was seeing
also 'AHB bus error'?

If you have a reliable reproducer, I guess it would be worth seeing if
increasing the min_freq (ie. to limit how far we jump the freq in one
shot) "fixes" it?

I guess I could check downstream kgsl to see if they were doing
something to increase freq in smaller increments.. I don't recall that
they were but it has been a while since I dug thru that code. And I
suppose downstream it could also be done in their custom tz governor.

BR,
-R

2021-09-06 08:50:06

by Amit Pundir

[permalink] [raw]
Subject: Re: [PATCH] drm/msm: Disable frequency clamping on a630

On Sat, 4 Sept 2021 at 01:55, Rob Clark <[email protected]> wrote:
>
> On Fri, Sep 3, 2021 at 12:39 PM John Stultz <[email protected]> wrote:
> >
> > On Thu, Jul 29, 2021 at 1:49 PM Rob Clark <[email protected]> wrote:
> > > On Thu, Jul 29, 2021 at 1:28 PM Caleb Connolly
> > > <[email protected]> wrote:
> > > > On 29/07/2021 21:24, Rob Clark wrote:
> > > > > On Thu, Jul 29, 2021 at 1:06 PM Caleb Connolly
> > > > > <[email protected]> wrote:
> > > > >>
> > > > >> Hi Rob,
> > > > >>
> > > > >> I've done some more testing! It looks like before that patch ("drm/msm: Devfreq tuning") the GPU would never get above
> > > > >> the second frequency in the OPP table (342MHz) (at least, not in glxgears). With the patch applied it would more
> > > > >> aggressively jump up to the max frequency which seems to be unstable at the default regulator voltages.
> > > > >
> > > > > *ohh*, yeah, ok, that would explain it
> > > > >
> > > > >> Hacking the pm8005 s1 regulator (which provides VDD_GFX) up to 0.988v (instead of the stock 0.516v) makes the GPU stable
> > > > >> at the higher frequencies.
> > > > >>
> > > > >> Applying this patch reverts the behaviour, and the GPU never goes above 342MHz in glxgears, losing ~30% performance in
> > > > >> glxgear.
> > > > >>
> > > > >> I think (?) that enabling CPR support would be the proper solution to this - that would ensure that the regulators run
> > > > >> at the voltage the hardware needs to be stable.
> > > > >>
> > > > >> Is hacking the voltage higher (although ideally not quite that high) an acceptable short term solution until we have
> > > > >> CPR? Or would it be safer to just not make use of the higher frequencies on a630 for now?
> > > > >>
> > > > >
> > > > > tbh, I'm not sure about the regulator stuff and CPR.. Bjorn is already
> > > > > on CC and I added sboyd, maybe one of them knows better.
> > > > >
> > > > > In the short term, removing the higher problematic OPPs from dts might
> > > > > be a better option than this patch (which I'm dropping), since there
> > > > > is nothing stopping other workloads from hitting higher OPPs.
> > > > Oh yeah that sounds like a more sensible workaround than mine .
> > > > >
> > > > > I'm slightly curious why I didn't have problems at higher OPPs on my
> > > > > c630 laptop (sdm850)
> > > > Perhaps you won the sillicon lottery - iirc sdm850 is binned for higher clocks as is out of the factory.
> > > >
> > > > Would it be best to drop the OPPs for all devices? Or just those affected? I guess it's possible another c630 might
> > > > crash where yours doesn't?
> > >
> > > I've not heard any reports of similar issues from the handful of other
> > > folks with c630's on #aarch64-laptops.. but I can't really say if that
> > > is luck or not.
> > >
> > > Maybe just remove it for affected devices? But I'll defer to Bjorn.
> >
> > Just as another datapoint, I was just marveling at how suddenly smooth
> > the UI was performing on db845c and Caleb pointed me at the "drm/msm:
> > Devfreq tuning" patch as the likely cause of the improvement, and
> > mid-discussion my board crashed into USB crash mode:
> > [ 146.157696][ C0] adreno 5000000.gpu: CP | AHB bus error
> > [ 146.163303][ C0] adreno 5000000.gpu: CP | AHB bus error
> > [ 146.168837][ C0] adreno 5000000.gpu: RBBM | ATB bus overflow
> > [ 146.174960][ C0] adreno 5000000.gpu: CP | HW fault | status=0x00000000
> > [ 146.181917][ C0] adreno 5000000.gpu: CP | AHB bus error
> > [ 146.187547][ C0] adreno 5000000.gpu: CP illegal instruction error
> > [ 146.194009][ C0] adreno 5000000.gpu: CP | AHB bus error
> > [ 146.308909][ T9] Internal error: synchronous external abort:
> > 96000010 [#1] PREEMPT SMP
> > [ 146.317150][ T9] Modules linked in:
> > [ 146.320941][ T9] CPU: 3 PID: 9 Comm: kworker/u16:1 Tainted: G
> > W 5.14.0-mainline-06795-g42b258c2275c #24
> > [ 146.331974][ T9] Hardware name: Thundercomm Dragonboar
> > Format: Log Type - Time(microsec) - Message - Optional Info
> > Log Type: B - Since Boot(Power On Reset), D - Delta, S - Statistic
> > S - QC_IMAGE_VERSION_STRING=BOOT.XF.2.0-00371-SDM845LZB-1
> > S - IMAGE_VARIANT_STRING=SDM845LA
> > S - OEM_IMAGE_VERSION_STRING=TSBJ-FA-PC-02170
> >
> > So Caleb sent me to this thread. :)
> >
> > I'm still trying to trip it again, but it does seem like db845c is
> > also seeing some stability issues with Linus' HEAD.
> >
>
> Caleb's original pastebin seems to have expired (or at least require
> some sort of ubuntu login to access).. were the crashes he was seeing
> also 'AHB bus error'?

I can reproduce this hard crash
https://www.irccloud.com/pastebin/Cu6UJntE/ and a gpu lockup
https://www.irccloud.com/pastebin/6Ryd2Pug/ at times reliably, by
running antutu benchmark on pocof1.

Reverting 9bc95570175a ("drm/msm: Devfreq tuning") helps and I no
longer see these errors.

Complete dmesg for hardcrash https://pastebin.com/raw/GLZVQFQN

Regards,
Amit Pundir

>
> If you have a reliable reproducer, I guess it would be worth seeing if
> increasing the min_freq (ie. to limit how far we jump the freq in one
> shot) "fixes" it?
>
> I guess I could check downstream kgsl to see if they were doing
> something to increase freq in smaller increments.. I don't recall that
> they were but it has been a while since I dug thru that code. And I
> suppose downstream it could also be done in their custom tz governor.
>
> BR,
> -R

2021-09-06 16:27:06

by Rob Clark

[permalink] [raw]
Subject: Re: [PATCH] drm/msm: Disable frequency clamping on a630

On Mon, Sep 6, 2021 at 1:02 AM Amit Pundir <[email protected]> wrote:
>
> On Sat, 4 Sept 2021 at 01:55, Rob Clark <[email protected]> wrote:
> >
> > On Fri, Sep 3, 2021 at 12:39 PM John Stultz <[email protected]> wrote:
> > >
> > > On Thu, Jul 29, 2021 at 1:49 PM Rob Clark <[email protected]> wrote:
> > > > On Thu, Jul 29, 2021 at 1:28 PM Caleb Connolly
> > > > <[email protected]> wrote:
> > > > > On 29/07/2021 21:24, Rob Clark wrote:
> > > > > > On Thu, Jul 29, 2021 at 1:06 PM Caleb Connolly
> > > > > > <[email protected]> wrote:
> > > > > >>
> > > > > >> Hi Rob,
> > > > > >>
> > > > > >> I've done some more testing! It looks like before that patch ("drm/msm: Devfreq tuning") the GPU would never get above
> > > > > >> the second frequency in the OPP table (342MHz) (at least, not in glxgears). With the patch applied it would more
> > > > > >> aggressively jump up to the max frequency which seems to be unstable at the default regulator voltages.
> > > > > >
> > > > > > *ohh*, yeah, ok, that would explain it
> > > > > >
> > > > > >> Hacking the pm8005 s1 regulator (which provides VDD_GFX) up to 0.988v (instead of the stock 0.516v) makes the GPU stable
> > > > > >> at the higher frequencies.
> > > > > >>
> > > > > >> Applying this patch reverts the behaviour, and the GPU never goes above 342MHz in glxgears, losing ~30% performance in
> > > > > >> glxgear.
> > > > > >>
> > > > > >> I think (?) that enabling CPR support would be the proper solution to this - that would ensure that the regulators run
> > > > > >> at the voltage the hardware needs to be stable.
> > > > > >>
> > > > > >> Is hacking the voltage higher (although ideally not quite that high) an acceptable short term solution until we have
> > > > > >> CPR? Or would it be safer to just not make use of the higher frequencies on a630 for now?
> > > > > >>
> > > > > >
> > > > > > tbh, I'm not sure about the regulator stuff and CPR.. Bjorn is already
> > > > > > on CC and I added sboyd, maybe one of them knows better.
> > > > > >
> > > > > > In the short term, removing the higher problematic OPPs from dts might
> > > > > > be a better option than this patch (which I'm dropping), since there
> > > > > > is nothing stopping other workloads from hitting higher OPPs.
> > > > > Oh yeah that sounds like a more sensible workaround than mine .
> > > > > >
> > > > > > I'm slightly curious why I didn't have problems at higher OPPs on my
> > > > > > c630 laptop (sdm850)
> > > > > Perhaps you won the sillicon lottery - iirc sdm850 is binned for higher clocks as is out of the factory.
> > > > >
> > > > > Would it be best to drop the OPPs for all devices? Or just those affected? I guess it's possible another c630 might
> > > > > crash where yours doesn't?
> > > >
> > > > I've not heard any reports of similar issues from the handful of other
> > > > folks with c630's on #aarch64-laptops.. but I can't really say if that
> > > > is luck or not.
> > > >
> > > > Maybe just remove it for affected devices? But I'll defer to Bjorn.
> > >
> > > Just as another datapoint, I was just marveling at how suddenly smooth
> > > the UI was performing on db845c and Caleb pointed me at the "drm/msm:
> > > Devfreq tuning" patch as the likely cause of the improvement, and
> > > mid-discussion my board crashed into USB crash mode:
> > > [ 146.157696][ C0] adreno 5000000.gpu: CP | AHB bus error
> > > [ 146.163303][ C0] adreno 5000000.gpu: CP | AHB bus error
> > > [ 146.168837][ C0] adreno 5000000.gpu: RBBM | ATB bus overflow
> > > [ 146.174960][ C0] adreno 5000000.gpu: CP | HW fault | status=0x00000000
> > > [ 146.181917][ C0] adreno 5000000.gpu: CP | AHB bus error
> > > [ 146.187547][ C0] adreno 5000000.gpu: CP illegal instruction error
> > > [ 146.194009][ C0] adreno 5000000.gpu: CP | AHB bus error
> > > [ 146.308909][ T9] Internal error: synchronous external abort:
> > > 96000010 [#1] PREEMPT SMP
> > > [ 146.317150][ T9] Modules linked in:
> > > [ 146.320941][ T9] CPU: 3 PID: 9 Comm: kworker/u16:1 Tainted: G
> > > W 5.14.0-mainline-06795-g42b258c2275c #24
> > > [ 146.331974][ T9] Hardware name: Thundercomm Dragonboar
> > > Format: Log Type - Time(microsec) - Message - Optional Info
> > > Log Type: B - Since Boot(Power On Reset), D - Delta, S - Statistic
> > > S - QC_IMAGE_VERSION_STRING=BOOT.XF.2.0-00371-SDM845LZB-1
> > > S - IMAGE_VARIANT_STRING=SDM845LA
> > > S - OEM_IMAGE_VERSION_STRING=TSBJ-FA-PC-02170
> > >
> > > So Caleb sent me to this thread. :)
> > >
> > > I'm still trying to trip it again, but it does seem like db845c is
> > > also seeing some stability issues with Linus' HEAD.
> > >
> >
> > Caleb's original pastebin seems to have expired (or at least require
> > some sort of ubuntu login to access).. were the crashes he was seeing
> > also 'AHB bus error'?
>
> I can reproduce this hard crash
> https://www.irccloud.com/pastebin/Cu6UJntE/ and a gpu lockup
> https://www.irccloud.com/pastebin/6Ryd2Pug/ at times reliably, by
> running antutu benchmark on pocof1.
>
> Reverting 9bc95570175a ("drm/msm: Devfreq tuning") helps and I no
> longer see these errors.
>
> Complete dmesg for hardcrash https://pastebin.com/raw/GLZVQFQN
>

Does antutu trigger this issue as easily on db845c? If no, does
db845c have pmic differences compared to pocof1 and Caleb's phone?

I think we may need some help from qcom here, but I'll go back and
look at older downstream kernels to see if I can find any evidence
that we need to limit how far we change the freq in a single step.
It's not clear to me if there is some physical constraint that the
driver needs to respect, or if we have some missing/incorrect
configuration for a630. IIRC the downstream kernel is letting the GMU
do more of the freq management, so it might be handling this case for
the kernel. But the GMU is a bit of a black box to me and I don't
have any docs, so just a guess.

It would be helpful if someone who can repro this could try the
experiments I mentioned about increasing min_freq and/or decreasing
max_freq to limit the size of the freq change until the issue does not
happen.

If we have to, we can merge this hack patch to disable freq clamping
on a630.. but that isn't really a fix. The root issue is a power
issue, 9bc95570175a just made it more likely to see the problem.

BR,
-R

> Regards,
> Amit Pundir
>
> >
> > If you have a reliable reproducer, I guess it would be worth seeing if
> > increasing the min_freq (ie. to limit how far we jump the freq in one
> > shot) "fixes" it?
> >
> > I guess I could check downstream kgsl to see if they were doing
> > something to increase freq in smaller increments.. I don't recall that
> > they were but it has been a while since I dug thru that code. And I
> > suppose downstream it could also be done in their custom tz governor.
> >
> > BR,
> > -R

2021-09-06 20:09:07

by Amit Pundir

[permalink] [raw]
Subject: Re: [PATCH] drm/msm: Disable frequency clamping on a630

On Mon, 6 Sept 2021 at 21:54, Rob Clark <[email protected]> wrote:
>
> On Mon, Sep 6, 2021 at 1:02 AM Amit Pundir <[email protected]> wrote:
> >
> > On Sat, 4 Sept 2021 at 01:55, Rob Clark <[email protected]> wrote:
> > >
> > > On Fri, Sep 3, 2021 at 12:39 PM John Stultz <[email protected]> wrote:
> > > >
> > > > On Thu, Jul 29, 2021 at 1:49 PM Rob Clark <[email protected]> wrote:
> > > > > On Thu, Jul 29, 2021 at 1:28 PM Caleb Connolly
> > > > > <[email protected]> wrote:
> > > > > > On 29/07/2021 21:24, Rob Clark wrote:
> > > > > > > On Thu, Jul 29, 2021 at 1:06 PM Caleb Connolly
> > > > > > > <[email protected]> wrote:
> > > > > > >>
> > > > > > >> Hi Rob,
> > > > > > >>
> > > > > > >> I've done some more testing! It looks like before that patch ("drm/msm: Devfreq tuning") the GPU would never get above
> > > > > > >> the second frequency in the OPP table (342MHz) (at least, not in glxgears). With the patch applied it would more
> > > > > > >> aggressively jump up to the max frequency which seems to be unstable at the default regulator voltages.
> > > > > > >
> > > > > > > *ohh*, yeah, ok, that would explain it
> > > > > > >
> > > > > > >> Hacking the pm8005 s1 regulator (which provides VDD_GFX) up to 0.988v (instead of the stock 0.516v) makes the GPU stable
> > > > > > >> at the higher frequencies.
> > > > > > >>
> > > > > > >> Applying this patch reverts the behaviour, and the GPU never goes above 342MHz in glxgears, losing ~30% performance in
> > > > > > >> glxgear.
> > > > > > >>
> > > > > > >> I think (?) that enabling CPR support would be the proper solution to this - that would ensure that the regulators run
> > > > > > >> at the voltage the hardware needs to be stable.
> > > > > > >>
> > > > > > >> Is hacking the voltage higher (although ideally not quite that high) an acceptable short term solution until we have
> > > > > > >> CPR? Or would it be safer to just not make use of the higher frequencies on a630 for now?
> > > > > > >>
> > > > > > >
> > > > > > > tbh, I'm not sure about the regulator stuff and CPR.. Bjorn is already
> > > > > > > on CC and I added sboyd, maybe one of them knows better.
> > > > > > >
> > > > > > > In the short term, removing the higher problematic OPPs from dts might
> > > > > > > be a better option than this patch (which I'm dropping), since there
> > > > > > > is nothing stopping other workloads from hitting higher OPPs.
> > > > > > Oh yeah that sounds like a more sensible workaround than mine .
> > > > > > >
> > > > > > > I'm slightly curious why I didn't have problems at higher OPPs on my
> > > > > > > c630 laptop (sdm850)
> > > > > > Perhaps you won the sillicon lottery - iirc sdm850 is binned for higher clocks as is out of the factory.
> > > > > >
> > > > > > Would it be best to drop the OPPs for all devices? Or just those affected? I guess it's possible another c630 might
> > > > > > crash where yours doesn't?
> > > > >
> > > > > I've not heard any reports of similar issues from the handful of other
> > > > > folks with c630's on #aarch64-laptops.. but I can't really say if that
> > > > > is luck or not.
> > > > >
> > > > > Maybe just remove it for affected devices? But I'll defer to Bjorn.
> > > >
> > > > Just as another datapoint, I was just marveling at how suddenly smooth
> > > > the UI was performing on db845c and Caleb pointed me at the "drm/msm:
> > > > Devfreq tuning" patch as the likely cause of the improvement, and
> > > > mid-discussion my board crashed into USB crash mode:
> > > > [ 146.157696][ C0] adreno 5000000.gpu: CP | AHB bus error
> > > > [ 146.163303][ C0] adreno 5000000.gpu: CP | AHB bus error
> > > > [ 146.168837][ C0] adreno 5000000.gpu: RBBM | ATB bus overflow
> > > > [ 146.174960][ C0] adreno 5000000.gpu: CP | HW fault | status=0x00000000
> > > > [ 146.181917][ C0] adreno 5000000.gpu: CP | AHB bus error
> > > > [ 146.187547][ C0] adreno 5000000.gpu: CP illegal instruction error
> > > > [ 146.194009][ C0] adreno 5000000.gpu: CP | AHB bus error
> > > > [ 146.308909][ T9] Internal error: synchronous external abort:
> > > > 96000010 [#1] PREEMPT SMP
> > > > [ 146.317150][ T9] Modules linked in:
> > > > [ 146.320941][ T9] CPU: 3 PID: 9 Comm: kworker/u16:1 Tainted: G
> > > > W 5.14.0-mainline-06795-g42b258c2275c #24
> > > > [ 146.331974][ T9] Hardware name: Thundercomm Dragonboar
> > > > Format: Log Type - Time(microsec) - Message - Optional Info
> > > > Log Type: B - Since Boot(Power On Reset), D - Delta, S - Statistic
> > > > S - QC_IMAGE_VERSION_STRING=BOOT.XF.2.0-00371-SDM845LZB-1
> > > > S - IMAGE_VARIANT_STRING=SDM845LA
> > > > S - OEM_IMAGE_VERSION_STRING=TSBJ-FA-PC-02170
> > > >
> > > > So Caleb sent me to this thread. :)
> > > >
> > > > I'm still trying to trip it again, but it does seem like db845c is
> > > > also seeing some stability issues with Linus' HEAD.
> > > >
> > >
> > > Caleb's original pastebin seems to have expired (or at least require
> > > some sort of ubuntu login to access).. were the crashes he was seeing
> > > also 'AHB bus error'?
> >
> > I can reproduce this hard crash
> > https://www.irccloud.com/pastebin/Cu6UJntE/ and a gpu lockup
> > https://www.irccloud.com/pastebin/6Ryd2Pug/ at times reliably, by
> > running antutu benchmark on pocof1.
> >
> > Reverting 9bc95570175a ("drm/msm: Devfreq tuning") helps and I no
> > longer see these errors.
> >
> > Complete dmesg for hardcrash https://pastebin.com/raw/GLZVQFQN
> >
>
> Does antutu trigger this issue as easily on db845c? If no, does
> db845c have pmic differences compared to pocof1 and Caleb's phone?

Yes I can reproduce this hard crash with antutu on db845c as well with
linux/master at 477f70cd2a67 ("Merge tag 'drm-next-2021-08-31-1' of
git://anongit.freedesktop.org/drm/drm").

Dmesg: https://pastebin.com/raw/xXtvxk0G


>
> I think we may need some help from qcom here, but I'll go back and
> look at older downstream kernels to see if I can find any evidence
> that we need to limit how far we change the freq in a single step.
> It's not clear to me if there is some physical constraint that the
> driver needs to respect, or if we have some missing/incorrect
> configuration for a630. IIRC the downstream kernel is letting the GMU
> do more of the freq management, so it might be handling this case for
> the kernel. But the GMU is a bit of a black box to me and I don't
> have any docs, so just a guess.
>
> It would be helpful if someone who can repro this could try the
> experiments I mentioned about increasing min_freq and/or decreasing
> max_freq to limit the size of the freq change until the issue does not
> happen.
>
> If we have to, we can merge this hack patch to disable freq clamping
> on a630.. but that isn't really a fix. The root issue is a power
> issue, 9bc95570175a just made it more likely to see the problem.
>
> BR,
> -R
>
> > Regards,
> > Amit Pundir
> >
> > >
> > > If you have a reliable reproducer, I guess it would be worth seeing if
> > > increasing the min_freq (ie. to limit how far we jump the freq in one
> > > shot) "fixes" it?
> > >
> > > I guess I could check downstream kgsl to see if they were doing
> > > something to increase freq in smaller increments.. I don't recall that
> > > they were but it has been a while since I dug thru that code. And I
> > > suppose downstream it could also be done in their custom tz governor.
> > >
> > > BR,
> > > -R

2021-09-06 20:48:45

by Rob Clark

[permalink] [raw]
Subject: Re: [PATCH] drm/msm: Disable frequency clamping on a630

On Mon, Sep 6, 2021 at 12:58 PM Amit Pundir <[email protected]> wrote:
>
> On Mon, 6 Sept 2021 at 21:54, Rob Clark <[email protected]> wrote:
> >
> > On Mon, Sep 6, 2021 at 1:02 AM Amit Pundir <[email protected]> wrote:
> > >
> > > On Sat, 4 Sept 2021 at 01:55, Rob Clark <[email protected]> wrote:
> > > >
> > > > On Fri, Sep 3, 2021 at 12:39 PM John Stultz <[email protected]> wrote:
> > > > >
> > > > > On Thu, Jul 29, 2021 at 1:49 PM Rob Clark <[email protected]> wrote:
> > > > > > On Thu, Jul 29, 2021 at 1:28 PM Caleb Connolly
> > > > > > <[email protected]> wrote:
> > > > > > > On 29/07/2021 21:24, Rob Clark wrote:
> > > > > > > > On Thu, Jul 29, 2021 at 1:06 PM Caleb Connolly
> > > > > > > > <[email protected]> wrote:
> > > > > > > >>
> > > > > > > >> Hi Rob,
> > > > > > > >>
> > > > > > > >> I've done some more testing! It looks like before that patch ("drm/msm: Devfreq tuning") the GPU would never get above
> > > > > > > >> the second frequency in the OPP table (342MHz) (at least, not in glxgears). With the patch applied it would more
> > > > > > > >> aggressively jump up to the max frequency which seems to be unstable at the default regulator voltages.
> > > > > > > >
> > > > > > > > *ohh*, yeah, ok, that would explain it
> > > > > > > >
> > > > > > > >> Hacking the pm8005 s1 regulator (which provides VDD_GFX) up to 0.988v (instead of the stock 0.516v) makes the GPU stable
> > > > > > > >> at the higher frequencies.
> > > > > > > >>
> > > > > > > >> Applying this patch reverts the behaviour, and the GPU never goes above 342MHz in glxgears, losing ~30% performance in
> > > > > > > >> glxgear.
> > > > > > > >>
> > > > > > > >> I think (?) that enabling CPR support would be the proper solution to this - that would ensure that the regulators run
> > > > > > > >> at the voltage the hardware needs to be stable.
> > > > > > > >>
> > > > > > > >> Is hacking the voltage higher (although ideally not quite that high) an acceptable short term solution until we have
> > > > > > > >> CPR? Or would it be safer to just not make use of the higher frequencies on a630 for now?
> > > > > > > >>
> > > > > > > >
> > > > > > > > tbh, I'm not sure about the regulator stuff and CPR.. Bjorn is already
> > > > > > > > on CC and I added sboyd, maybe one of them knows better.
> > > > > > > >
> > > > > > > > In the short term, removing the higher problematic OPPs from dts might
> > > > > > > > be a better option than this patch (which I'm dropping), since there
> > > > > > > > is nothing stopping other workloads from hitting higher OPPs.
> > > > > > > Oh yeah that sounds like a more sensible workaround than mine .
> > > > > > > >
> > > > > > > > I'm slightly curious why I didn't have problems at higher OPPs on my
> > > > > > > > c630 laptop (sdm850)
> > > > > > > Perhaps you won the sillicon lottery - iirc sdm850 is binned for higher clocks as is out of the factory.
> > > > > > >
> > > > > > > Would it be best to drop the OPPs for all devices? Or just those affected? I guess it's possible another c630 might
> > > > > > > crash where yours doesn't?
> > > > > >
> > > > > > I've not heard any reports of similar issues from the handful of other
> > > > > > folks with c630's on #aarch64-laptops.. but I can't really say if that
> > > > > > is luck or not.
> > > > > >
> > > > > > Maybe just remove it for affected devices? But I'll defer to Bjorn.
> > > > >
> > > > > Just as another datapoint, I was just marveling at how suddenly smooth
> > > > > the UI was performing on db845c and Caleb pointed me at the "drm/msm:
> > > > > Devfreq tuning" patch as the likely cause of the improvement, and
> > > > > mid-discussion my board crashed into USB crash mode:
> > > > > [ 146.157696][ C0] adreno 5000000.gpu: CP | AHB bus error
> > > > > [ 146.163303][ C0] adreno 5000000.gpu: CP | AHB bus error
> > > > > [ 146.168837][ C0] adreno 5000000.gpu: RBBM | ATB bus overflow
> > > > > [ 146.174960][ C0] adreno 5000000.gpu: CP | HW fault | status=0x00000000
> > > > > [ 146.181917][ C0] adreno 5000000.gpu: CP | AHB bus error
> > > > > [ 146.187547][ C0] adreno 5000000.gpu: CP illegal instruction error
> > > > > [ 146.194009][ C0] adreno 5000000.gpu: CP | AHB bus error
> > > > > [ 146.308909][ T9] Internal error: synchronous external abort:
> > > > > 96000010 [#1] PREEMPT SMP
> > > > > [ 146.317150][ T9] Modules linked in:
> > > > > [ 146.320941][ T9] CPU: 3 PID: 9 Comm: kworker/u16:1 Tainted: G
> > > > > W 5.14.0-mainline-06795-g42b258c2275c #24
> > > > > [ 146.331974][ T9] Hardware name: Thundercomm Dragonboar
> > > > > Format: Log Type - Time(microsec) - Message - Optional Info
> > > > > Log Type: B - Since Boot(Power On Reset), D - Delta, S - Statistic
> > > > > S - QC_IMAGE_VERSION_STRING=BOOT.XF.2.0-00371-SDM845LZB-1
> > > > > S - IMAGE_VARIANT_STRING=SDM845LA
> > > > > S - OEM_IMAGE_VERSION_STRING=TSBJ-FA-PC-02170
> > > > >
> > > > > So Caleb sent me to this thread. :)
> > > > >
> > > > > I'm still trying to trip it again, but it does seem like db845c is
> > > > > also seeing some stability issues with Linus' HEAD.
> > > > >
> > > >
> > > > Caleb's original pastebin seems to have expired (or at least require
> > > > some sort of ubuntu login to access).. were the crashes he was seeing
> > > > also 'AHB bus error'?
> > >
> > > I can reproduce this hard crash
> > > https://www.irccloud.com/pastebin/Cu6UJntE/ and a gpu lockup
> > > https://www.irccloud.com/pastebin/6Ryd2Pug/ at times reliably, by
> > > running antutu benchmark on pocof1.
> > >
> > > Reverting 9bc95570175a ("drm/msm: Devfreq tuning") helps and I no
> > > longer see these errors.
> > >
> > > Complete dmesg for hardcrash https://pastebin.com/raw/GLZVQFQN
> > >
> >
> > Does antutu trigger this issue as easily on db845c? If no, does
> > db845c have pmic differences compared to pocof1 and Caleb's phone?
>
> Yes I can reproduce this hard crash with antutu on db845c as well with
> linux/master at 477f70cd2a67 ("Merge tag 'drm-next-2021-08-31-1' of
> git://anongit.freedesktop.org/drm/drm").
>
> Dmesg: https://pastebin.com/raw/xXtvxk0G
>

ok, I guess it is at least not a board specific thing (ie. won't need
to introduce some dt binding)..

It would be nice to know what the maximum we can safely increase freq
in one step, if we need to limit that.
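
Purely as an illustration of the kind of capping being discussed (a
hypothetical sketch; the constant and helper below are made up and not
part of the driver), the target() path could refuse to jump more than
some maximum amount in one step:

/* hypothetical cap on how far a single step may raise the freq */
#define MAX_FREQ_STEP_HZ        (200 * 1000 * 1000)     /* made-up value */

static unsigned long clamp_freq_step(unsigned long cur_freq,
                                     unsigned long target_freq)
{
        /* never jump more than MAX_FREQ_STEP_HZ above the current freq */
        if (target_freq > cur_freq &&
            target_freq - cur_freq > MAX_FREQ_STEP_HZ)
                return cur_freq + MAX_FREQ_STEP_HZ;

        return target_freq;
}

The rest of the ramp would then have to be finished from delayed work
(or the next devfreq poll), as mentioned earlier in the thread.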

BR,
-R

>
> >
> > I think we may need some help from qcom here, but I'll go back and
> > look at older downstream kernels to see if I can find any evidence
> > that we need to limit how far we change the freq in a single step.
> > It's not clear to me if there is some physical constraint that the
> > driver needs to respect, or if we have some missing/incorrect
> > configuration for a630. IIRC the downstream kernel is letting the GMU
> > do more of the freq management, so it might be handling this case for
> > the kernel. But the GMU is a bit of a black box to me and I don't
> > have any docs, so just a guess.
> >
> > It would be helpful if someone who can repro this could try the
> > experiments I mentioned about increasing min_freq and/or decreasing
> > max_freq to limit the size of the freq change until the issue does not
> > happen.
> >
> > If we have to, we can merge this hack patch to disable freq clamping
> > on a630.. but that isn't really a fix. The root issue is a power
> > issue, 9bc95570175a just made it more likely to see the problem.
> >
> > BR,
> > -R
> >
> > > Regards,
> > > Amit Pundir
> > >
> > > >
> > > > If you have a reliable reproducer, I guess it would be worth seeing if
> > > > increasing the min_freq (ie. to limit how far we jump the freq in one
> > > > shot) "fixes" it?
> > > >
> > > > I guess I could check downstream kgsl to see if they were doing
> > > > something to increase freq in smaller increments.. I don't recall that
> > > > they were but it has been a while since I dug thru that code. And I
> > > > suppose downstream it could also be done in their custom tz governor.
> > > >
> > > > BR,
> > > > -R

2021-09-06 21:25:05

by Rob Clark

[permalink] [raw]
Subject: Re: [PATCH] drm/msm: Disable frequency clamping on a630

On Mon, Sep 6, 2021 at 1:50 PM Rob Clark <[email protected]> wrote:
>
> On Mon, Sep 6, 2021 at 12:58 PM Amit Pundir <[email protected]> wrote:
> >
> > On Mon, 6 Sept 2021 at 21:54, Rob Clark <[email protected]> wrote:
> > >
> > > On Mon, Sep 6, 2021 at 1:02 AM Amit Pundir <[email protected]> wrote:
> > > >
> > > > On Sat, 4 Sept 2021 at 01:55, Rob Clark <[email protected]> wrote:
> > > > >
> > > > > On Fri, Sep 3, 2021 at 12:39 PM John Stultz <[email protected]> wrote:
> > > > > >
> > > > > > On Thu, Jul 29, 2021 at 1:49 PM Rob Clark <[email protected]> wrote:
> > > > > > > On Thu, Jul 29, 2021 at 1:28 PM Caleb Connolly
> > > > > > > <[email protected]> wrote:
> > > > > > > > On 29/07/2021 21:24, Rob Clark wrote:
> > > > > > > > > On Thu, Jul 29, 2021 at 1:06 PM Caleb Connolly
> > > > > > > > > <[email protected]> wrote:
> > > > > > > > >>
> > > > > > > > >> Hi Rob,
> > > > > > > > >>
> > > > > > > > >> I've done some more testing! It looks like before that patch ("drm/msm: Devfreq tuning") the GPU would never get above
> > > > > > > > >> the second frequency in the OPP table (342MHz) (at least, not in glxgears). With the patch applied it would more
> > > > > > > > >> aggressively jump up to the max frequency which seems to be unstable at the default regulator voltages.
> > > > > > > > >
> > > > > > > > > *ohh*, yeah, ok, that would explain it
> > > > > > > > >
> > > > > > > > >> Hacking the pm8005 s1 regulator (which provides VDD_GFX) up to 0.988v (instead of the stock 0.516v) makes the GPU stable
> > > > > > > > >> at the higher frequencies.
> > > > > > > > >>
> > > > > > > > >> Applying this patch reverts the behaviour, and the GPU never goes above 342MHz in glxgears, losing ~30% performance in
> > > > > > > > >> glxgear.
> > > > > > > > >>
> > > > > > > > >> I think (?) that enabling CPR support would be the proper solution to this - that would ensure that the regulators run
> > > > > > > > >> at the voltage the hardware needs to be stable.
> > > > > > > > >>
> > > > > > > > >> Is hacking the voltage higher (although ideally not quite that high) an acceptable short term solution until we have
> > > > > > > > >> CPR? Or would it be safer to just not make use of the higher frequencies on a630 for now?
> > > > > > > > >>
> > > > > > > > >
> > > > > > > > > tbh, I'm not sure about the regulator stuff and CPR.. Bjorn is already
> > > > > > > > > on CC and I added sboyd, maybe one of them knows better.
> > > > > > > > >
> > > > > > > > > In the short term, removing the higher problematic OPPs from dts might
> > > > > > > > > be a better option than this patch (which I'm dropping), since there
> > > > > > > > > is nothing stopping other workloads from hitting higher OPPs.
> > > > > > > > Oh yeah that sounds like a more sensible workaround than mine .
> > > > > > > > >
> > > > > > > > > I'm slightly curious why I didn't have problems at higher OPPs on my
> > > > > > > > > c630 laptop (sdm850)
> > > > > > > > Perhaps you won the sillicon lottery - iirc sdm850 is binned for higher clocks as is out of the factory.
> > > > > > > >
> > > > > > > > Would it be best to drop the OPPs for all devices? Or just those affected? I guess it's possible another c630 might
> > > > > > > > crash where yours doesn't?
> > > > > > >
> > > > > > > I've not heard any reports of similar issues from the handful of other
> > > > > > > folks with c630's on #aarch64-laptops.. but I can't really say if that
> > > > > > > is luck or not.
> > > > > > >
> > > > > > > Maybe just remove it for affected devices? But I'll defer to Bjorn.
> > > > > >
> > > > > > Just as another datapoint, I was just marveling at how suddenly smooth
> > > > > > the UI was performing on db845c and Caleb pointed me at the "drm/msm:
> > > > > > Devfreq tuning" patch as the likely cause of the improvement, and
> > > > > > mid-discussion my board crashed into USB crash mode:
> > > > > > [ 146.157696][ C0] adreno 5000000.gpu: CP | AHB bus error
> > > > > > [ 146.163303][ C0] adreno 5000000.gpu: CP | AHB bus error
> > > > > > [ 146.168837][ C0] adreno 5000000.gpu: RBBM | ATB bus overflow
> > > > > > [ 146.174960][ C0] adreno 5000000.gpu: CP | HW fault | status=0x00000000
> > > > > > [ 146.181917][ C0] adreno 5000000.gpu: CP | AHB bus error
> > > > > > [ 146.187547][ C0] adreno 5000000.gpu: CP illegal instruction error
> > > > > > [ 146.194009][ C0] adreno 5000000.gpu: CP | AHB bus error
> > > > > > [ 146.308909][ T9] Internal error: synchronous external abort:
> > > > > > 96000010 [#1] PREEMPT SMP
> > > > > > [ 146.317150][ T9] Modules linked in:
> > > > > > [ 146.320941][ T9] CPU: 3 PID: 9 Comm: kworker/u16:1 Tainted: G
> > > > > > W 5.14.0-mainline-06795-g42b258c2275c #24
> > > > > > [ 146.331974][ T9] Hardware name: Thundercomm Dragonboar
> > > > > > Format: Log Type - Time(microsec) - Message - Optional Info
> > > > > > Log Type: B - Since Boot(Power On Reset), D - Delta, S - Statistic
> > > > > > S - QC_IMAGE_VERSION_STRING=BOOT.XF.2.0-00371-SDM845LZB-1
> > > > > > S - IMAGE_VARIANT_STRING=SDM845LA
> > > > > > S - OEM_IMAGE_VERSION_STRING=TSBJ-FA-PC-02170
> > > > > >
> > > > > > So Caleb sent me to this thread. :)
> > > > > >
> > > > > > I'm still trying to trip it again, but it does seem like db845c is
> > > > > > also seeing some stability issues with Linus' HEAD.
> > > > > >
> > > > >
> > > > > Caleb's original pastebin seems to have expired (or at least require
> > > > > some sort of ubuntu login to access).. were the crashes he was seeing
> > > > > also 'AHB bus error'?
> > > >
> > > > I can reproduce this hard crash
> > > > https://www.irccloud.com/pastebin/Cu6UJntE/ and a gpu lockup
> > > > https://www.irccloud.com/pastebin/6Ryd2Pug/ at times reliably, by
> > > > running antutu benchmark on pocof1.
> > > >
> > > > Reverting 9bc95570175a ("drm/msm: Devfreq tuning") helps and I no
> > > > longer see these errors.
> > > >
> > > > Complete dmesg for hardcrash https://pastebin.com/raw/GLZVQFQN
> > > >
> > >
> > > Does antutu trigger this issue as easily on db845c? If no, does
> > > db845c have pmic differences compared to pocof1 and Caleb's phone?
> >
> > Yes I can reproduce this hard crash with antutu on db845c as well with
> > linux/master at 477f70cd2a67 ("Merge tag 'drm-next-2021-08-31-1' of
> > git://anongit.freedesktop.org/drm/drm").
> >
> > Dmesg: https://pastebin.com/raw/xXtvxk0G
> >
>
> ok, I guess it is at least not a board specific thing (ie. won't need
> to introduce some dt binding)..
>
> It would be nice to know what the maximum we can safely increase freq
> in one step, if we need to limit that.

Also, one sanity check: for android builds, are you using the same
a630_gmu.bin from linux-firmware? If not, does the linux-firmware GMU
fw change things?

For freq changes, we basically ask the GMU for the freq we want, and
it votes for the requested freq, so a GMU fw bug could be possible
here.
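
For reference, a rough sketch of that request path, based only on the
a6xx_gmu_set_freq() excerpts quoted earlier in this thread (the exact
PERF_SETTING encoding, the OOB handshake with the GMU and the ack
readback are omitted):

/* illustrative only, simplified from the excerpts above */
static void sketch_gmu_freq_request(struct a6xx_gmu *gmu, u32 perf_index)
{
        /* ask the GMU for the OPP (perf index) we want ... */
        gmu_write(gmu, REG_A6XX_GMU_DCVS_ACK_OPTION, 0);
        gmu_write(gmu, REG_A6XX_GMU_DCVS_PERF_SETTING, perf_index);

        /*
         * ... the GMU firmware then does the actual voting for the
         * core clock (and, presumably, the gx rail) to match, which
         * is why a fw bug is a possibility here.
         */
}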

BR,
-R

>
> >
> > >
> > > I think we may need some help from qcom here, but I'll go back and
> > > look at older downstream kernels to see if I can find any evidence
> > > that we need to limit how far we change the freq in a single step.
> > > It's not clear to me if there is some physical constraint that the
> > > driver needs to respect, or if we have some missing/incorrect
> > > configuration for a630. IIRC the downstream kernel is letting the GMU
> > > do more of the freq management, so it might be handling this case for
> > > the kernel. But the GMU is a bit of a black box to me and I don't
> > > have any docs, so just a guess.
> > >
> > > It would be helpful if someone who can repro this could try the
> > > experiments I mentioned about increasing min_freq and/or decreasing
> > > max_freq to limit the size of the freq change until the issue does not
> > > happen.
> > >
> > > If we have to, we can merge this hack patch to disable freq clamping
> > > on a630.. but that isn't really a fix. The root issue is a power
> > > issue, 9bc95570175a just made it more likely to see the problem.
> > >
> > > BR,
> > > -R
> > >
> > > > Regards,
> > > > Amit Pundir
> > > >
> > > > >
> > > > > If you have a reliable reproducer, I guess it would be worth seeing if
> > > > > increasing the min_freq (ie. to limit how far we jump the freq in one
> > > > > shot) "fixes" it?
> > > > >
> > > > > I guess I could check downstream kgsl to see if they were doing
> > > > > something to increase freq in smaller increments.. I don't recall that
> > > > > they were but it has been a while since I dug thru that code. And I
> > > > > suppose downstream it could also be done in their custom tz governor.
> > > > >
> > > > > BR,
> > > > > -R

2021-09-07 01:49:11

by Rob Clark

[permalink] [raw]
Subject: Re: [PATCH] drm/msm: Disable frequency clamping on a630

On Mon, Sep 6, 2021 at 12:58 PM Amit Pundir <[email protected]> wrote:
>
> On Mon, 6 Sept 2021 at 21:54, Rob Clark <[email protected]> wrote:
> >
> > On Mon, Sep 6, 2021 at 1:02 AM Amit Pundir <[email protected]> wrote:
> > >
> > > On Sat, 4 Sept 2021 at 01:55, Rob Clark <[email protected]> wrote:
> > > >
> > > > On Fri, Sep 3, 2021 at 12:39 PM John Stultz <[email protected]> wrote:
> > > > >
> > > > > On Thu, Jul 29, 2021 at 1:49 PM Rob Clark <[email protected]> wrote:
> > > > > > On Thu, Jul 29, 2021 at 1:28 PM Caleb Connolly
> > > > > > <[email protected]> wrote:
> > > > > > > On 29/07/2021 21:24, Rob Clark wrote:
> > > > > > > > On Thu, Jul 29, 2021 at 1:06 PM Caleb Connolly
> > > > > > > > <[email protected]> wrote:
> > > > > > > >>
> > > > > > > >> Hi Rob,
> > > > > > > >>
> > > > > > > >> I've done some more testing! It looks like before that patch ("drm/msm: Devfreq tuning") the GPU would never get above
> > > > > > > >> the second frequency in the OPP table (342MHz) (at least, not in glxgears). With the patch applied it would more
> > > > > > > >> aggressively jump up to the max frequency which seems to be unstable at the default regulator voltages.
> > > > > > > >
> > > > > > > > *ohh*, yeah, ok, that would explain it
> > > > > > > >
> > > > > > > >> Hacking the pm8005 s1 regulator (which provides VDD_GFX) up to 0.988v (instead of the stock 0.516v) makes the GPU stable
> > > > > > > >> at the higher frequencies.
> > > > > > > >>
> > > > > > > >> Applying this patch reverts the behaviour, and the GPU never goes above 342MHz in glxgears, losing ~30% performance in
> > > > > > > >> glxgear.
> > > > > > > >>
> > > > > > > >> I think (?) that enabling CPR support would be the proper solution to this - that would ensure that the regulators run
> > > > > > > >> at the voltage the hardware needs to be stable.
> > > > > > > >>
> > > > > > > >> Is hacking the voltage higher (although ideally not quite that high) an acceptable short term solution until we have
> > > > > > > >> CPR? Or would it be safer to just not make use of the higher frequencies on a630 for now?
> > > > > > > >>
> > > > > > > >
> > > > > > > > tbh, I'm not sure about the regulator stuff and CPR.. Bjorn is already
> > > > > > > > on CC and I added sboyd, maybe one of them knows better.
> > > > > > > >
> > > > > > > > In the short term, removing the higher problematic OPPs from dts might
> > > > > > > > be a better option than this patch (which I'm dropping), since there
> > > > > > > > is nothing stopping other workloads from hitting higher OPPs.
> > > > > > > Oh yeah that sounds like a more sensible workaround than mine .
> > > > > > > >
> > > > > > > > I'm slightly curious why I didn't have problems at higher OPPs on my
> > > > > > > > c630 laptop (sdm850)
> > > > > > > Perhaps you won the silicon lottery - iirc sdm850 is binned for higher clocks as is out of the factory.
> > > > > > >
> > > > > > > Would it be best to drop the OPPs for all devices? Or just those affected? I guess it's possible another c630 might
> > > > > > > crash where yours doesn't?
> > > > > >
> > > > > > I've not heard any reports of similar issues from the handful of other
> > > > > > folks with c630's on #aarch64-laptops.. but I can't really say if that
> > > > > > is luck or not.
> > > > > >
> > > > > > Maybe just remove it for affected devices? But I'll defer to Bjorn.
> > > > >
> > > > > Just as another datapoint, I was just marveling at how suddenly smooth
> > > > > the UI was performing on db845c and Caleb pointed me at the "drm/msm:
> > > > > Devfreq tuning" patch as the likely cause of the improvement, and
> > > > > mid-discussion my board crashed into USB crash mode:
> > > > > [ 146.157696][ C0] adreno 5000000.gpu: CP | AHB bus error
> > > > > [ 146.163303][ C0] adreno 5000000.gpu: CP | AHB bus error
> > > > > [ 146.168837][ C0] adreno 5000000.gpu: RBBM | ATB bus overflow
> > > > > [ 146.174960][ C0] adreno 5000000.gpu: CP | HW fault | status=0x00000000
> > > > > [ 146.181917][ C0] adreno 5000000.gpu: CP | AHB bus error
> > > > > [ 146.187547][ C0] adreno 5000000.gpu: CP illegal instruction error
> > > > > [ 146.194009][ C0] adreno 5000000.gpu: CP | AHB bus error
> > > > > [ 146.308909][ T9] Internal error: synchronous external abort:
> > > > > 96000010 [#1] PREEMPT SMP
> > > > > [ 146.317150][ T9] Modules linked in:
> > > > > [ 146.320941][ T9] CPU: 3 PID: 9 Comm: kworker/u16:1 Tainted: G
> > > > > W 5.14.0-mainline-06795-g42b258c2275c #24
> > > > > [ 146.331974][ T9] Hardware name: Thundercomm Dragonboar
> > > > > Format: Log Type - Time(microsec) - Message - Optional Info
> > > > > Log Type: B - Since Boot(Power On Reset), D - Delta, S - Statistic
> > > > > S - QC_IMAGE_VERSION_STRING=BOOT.XF.2.0-00371-SDM845LZB-1
> > > > > S - IMAGE_VARIANT_STRING=SDM845LA
> > > > > S - OEM_IMAGE_VERSION_STRING=TSBJ-FA-PC-02170
> > > > >
> > > > > So Caleb sent me to this thread. :)
> > > > >
> > > > > I'm still trying to trip it again, but it does seem like db845c is
> > > > > also seeing some stability issues with Linus' HEAD.
> > > > >
> > > >
> > > > Caleb's original pastebin seems to have expired (or at least require
> > > > some sort of ubuntu login to access).. were the crashes he was seeing
> > > > also 'AHB bus error'?
> > >
> > > I can reproduce this hard crash
> > > https://www.irccloud.com/pastebin/Cu6UJntE/ and a gpu lockup
> > > https://www.irccloud.com/pastebin/6Ryd2Pug/ at times reliably, by
> > > running antutu benchmark on pocof1.
> > >
> > > Reverting 9bc95570175a ("drm/msm: Devfreq tuning") helps and I no
> > > longer see these errors.
> > >
> > > Complete dmesg for hardcrash https://pastebin.com/raw/GLZVQFQN
> > >
> >
> > Does antutu trigger this issue as easily on db845c? If no, does
> > db845c have pmic differences compared to pocof1 and Caleb's phone?
>
> Yes I can reproduce this hard crash with antutu on db845c as well with
> linux/master at 477f70cd2a67 ("Merge tag 'drm-next-2021-08-31-1' of
> git://anongit.freedesktop.org/drm/drm").
>
> Dmesg: https://pastebin.com/raw/xXtvxk0G
>

One thing I thought of, which would be worth ruling out, is whether
this issue only occurs with freq changes immediately after resuming
the GPU, vs freq changes in general. Could you try the below patch.
And if it "fixes" the issue, then try reducing the delay until you
start seeing GPU hangs again.

----------
diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
index 40c9fef457a4..278b85207ea3 100644
--- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
@@ -1513,6 +1513,8 @@ static int a6xx_pm_resume(struct msm_gpu *gpu)
if (ret)
return ret;

+ msleep(5);
+
msm_devfreq_resume(gpu);

a6xx_llc_activate(a6xx_gpu);
----------

BR,
-R

>
> >
> > I think we may need some help from qcom here, but I'll go back and
> > look at older downstream kernels to see if I can find any evidence
> > that we need to limit how far we change the freq in a single step.
> > It's not clear to me if there is some physical constraint that the
> > driver needs to respect, or if we have some missing/incorrect
> > configuration for a630. IIRC the downstream kernel is letting the GMU
> > do more of the freq management, so it might be handling this case for
> > the kernel. But the GMU is a bit of a black box to me and I don't
> > have any docs, so just a guess.
> >
> > It would be helpful if someone who can repro this could try the
> > experiments I mentioned about increasing min_freq and/or decreasing
> > max_freq to limit the size of the freq change until the issue does not
> > happen.
> >
> > If we have to, we can merge this hack patch to disable freq clamping
> > on a630.. but that isn't really a fix. The root issue is a power
> > issue, 9bc95570175a just made it more likely to see the problem.
> >
> > BR,
> > -R
> >
> > > Regards,
> > > Amit Pundir
> > >
> > > >
> > > > If you have a reliable reproducer, I guess it would be worth seeing if
> > > > increasing the min_freq (ie. to limit how far we jump the freq in one
> > > > shot) "fixes" it?
> > > >
> > > > I guess I could check downstream kgsl to see if they were doing
> > > > something to increase freq in smaller increments.. I don't recall that
> > > > they were but it has been a while since I dug thru that code. And I
> > > > suppose downstream it could also be done in their custom tz governor.
> > > >
> > > > BR,
> > > > -R

2021-09-07 08:22:22

by Amit Pundir

[permalink] [raw]
Subject: Re: [PATCH] drm/msm: Disable frequency clamping on a630

On Tue, 7 Sept 2021 at 02:53, Rob Clark <[email protected]> wrote:
>
> On Mon, Sep 6, 2021 at 1:50 PM Rob Clark <[email protected]> wrote:
> >
> >
> > ok, I guess it is at least not a board specific thing (ie. won't need
> > to introduce some dt binding)..
> >
> > It would be nice to know what the maximum we can safely increase freq
> > in one step, if we need to limit that.
>
> Also, one sanity check.. for android builds, are you using the same
> a630_gmu.bin from linux-firmware? If not, does the l-f gmu fw change
> things?

We are using the same a630_gmu.bin from linux-firmware.
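
A quick way to double-check which copy is actually being loaded on the
device (the path below assumes the usual linux-firmware location;
Android builds may load it from a vendor partition instead):

md5sum /lib/firmware/qcom/a630_gmu.bin
# for comparison, Rob's copy from linux-firmware quoted elsewhere in
# this thread:
# ab20135f7adf48e0f344282a37da80e4  a630_gmu.bin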

>
> For freq changes, we basically ask gmu for the freq we want, and it
> votes for the requested freq.. so a gmu fw bug could be possible here.
>
> BR,
> -R

2021-09-07 08:36:32

by Amit Pundir

[permalink] [raw]
Subject: Re: [PATCH] drm/msm: Disable frequency clamping on a630

On Tue, 7 Sept 2021 at 07:11, Rob Clark <[email protected]> wrote:
>
> One thing I thought of, which would be worth ruling out, is whether
> this issue only occurs with freq changes immediately after resuming
> the GPU, vs freq changes in general. Could you try the below patch.
> And if it "fixes" the issue, then try reducing the delay until you
> start seeing GPU hangs again.

It doesn't fix the crash and I can still reproduce it
https://pastebin.com/raw/bxK4mAhB

>
> ----------
> diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
> b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
> index 40c9fef457a4..278b85207ea3 100644
> --- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
> +++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
> @@ -1513,6 +1513,8 @@ static int a6xx_pm_resume(struct msm_gpu *gpu)
> if (ret)
> return ret;
>
> + msleep(5);
> +
> msm_devfreq_resume(gpu);
>
> a6xx_llc_activate(a6xx_gpu);
> ----------
>
> BR,
> -R

2021-09-07 14:34:19

by Rob Clark

[permalink] [raw]
Subject: Re: [PATCH] drm/msm: Disable frequency clamping on a630

On Tue, Sep 7, 2021 at 1:25 AM Amit Pundir <[email protected]> wrote:
>
> On Tue, 7 Sept 2021 at 07:11, Rob Clark <[email protected]> wrote:
> >
> > One thing I thought of, which would be worth ruling out, is whether
> > this issue only occurs with freq changes immediately after resuming
> > the GPU, vs freq changes in general. Could you try the below patch.
> > And if it "fixes" the issue, then try reducing the delay until you
> > start seeing GPU hangs again.
>
> It doesn't fix the crash and I can still reproduce it
> https://pastebin.com/raw/bxK4mAhB

Ok, thanks for confirming. That implies the limitation is about
changing freq in general, rather than immediately after resume.. the
latter could be a new scenario after 9bc95570175a. But this confirms
that this is an issue that has been there all along.

I'm still not quite sure what the correct fix is.

BR,
-R

>
> >
> > ----------
> > diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
> > b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
> > index 40c9fef457a4..278b85207ea3 100644
> > --- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
> > +++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
> > @@ -1513,6 +1513,8 @@ static int a6xx_pm_resume(struct msm_gpu *gpu)
> > if (ret)
> > return ret;
> >
> > + msleep(5);
> > +
> > msm_devfreq_resume(gpu);
> >
> > a6xx_llc_activate(a6xx_gpu);
> > ----------
> >
> > BR,
> > -R

2021-09-07 16:03:27

by Bjorn Andersson

[permalink] [raw]
Subject: Re: [PATCH] drm/msm: Disable frequency clamping on a630

On Mon 09 Aug 14:08 PDT 2021, Rob Clark wrote:

> On Mon, Aug 9, 2021 at 1:35 PM Caleb Connolly <[email protected]> wrote:
> >
> >
> >
> > On 09/08/2021 18:58, Rob Clark wrote:
> > > On Mon, Aug 9, 2021 at 10:28 AM Akhil P Oommen <[email protected]> wrote:
> > >>
> > >> On 8/9/2021 9:48 PM, Caleb Connolly wrote:
> > >>>
> > >>>
> > >>> On 09/08/2021 17:12, Rob Clark wrote:
> > >>>> On Mon, Aug 9, 2021 at 7:52 AM Akhil P Oommen <[email protected]>
> > >>>> wrote:
> > >>>>>
> > >>>>> On 8/8/2021 10:22 PM, Rob Clark wrote:
> > >>>>>> On Sun, Aug 8, 2021 at 7:33 AM Caleb Connolly
> > >>>>>> <[email protected]> wrote:
> > >>>>>>>
> > >>>>>>>
> > >>>>>>>
> > >>>>>>> On 07/08/2021 21:04, Rob Clark wrote:
> > >>>>>>>> On Sat, Aug 7, 2021 at 12:21 PM Caleb Connolly
> > >>>>>>>> <[email protected]> wrote:
> > >>>>>>>>>
> > >>>>>>>>> Hi Rob, Akhil,
> > >>>>>>>>>
> > >>>>>>>>> On 29/07/2021 21:53, Rob Clark wrote:
> > >>>>>>>>>> On Thu, Jul 29, 2021 at 1:28 PM Caleb Connolly
> > >>>>>>>>>> <[email protected]> wrote:
> > >>>>>>>>>>>
> > >>>>>>>>>>>
> > >>>>>>>>>>>
> > >>>>>>>>>>> On 29/07/2021 21:24, Rob Clark wrote:
> > >>>>>>>>>>>> On Thu, Jul 29, 2021 at 1:06 PM Caleb Connolly
> > >>>>>>>>>>>> <[email protected]> wrote:
> > >>>>>>>>>>>>>
> > >>>>>>>>>>>>> Hi Rob,
> > >>>>>>>>>>>>>
> > >>>>>>>>>>>>> I've done some more testing! It looks like before that patch
> > >>>>>>>>>>>>> ("drm/msm: Devfreq tuning") the GPU would never get above
> > >>>>>>>>>>>>> the second frequency in the OPP table (342MHz) (at least, not
> > >>>>>>>>>>>>> in glxgears). With the patch applied it would more
> > >>>>>>>>>>>>> aggressively jump up to the max frequency which seems to be
> > >>>>>>>>>>>>> unstable at the default regulator voltages.
> > >>>>>>>>>>>>
> > >>>>>>>>>>>> *ohh*, yeah, ok, that would explain it
> > >>>>>>>>>>>>
> > >>>>>>>>>>>>> Hacking the pm8005 s1 regulator (which provides VDD_GFX) up
> > >>>>>>>>>>>>> to 0.988v (instead of the stock 0.516v) makes the GPU stable
> > >>>>>>>>>>>>> at the higher frequencies.
> > >>>>>>>>>>>>>
> > >>>>>>>>>>>>> Applying this patch reverts the behaviour, and the GPU never
> > >>>>>>>>>>>>> goes above 342MHz in glxgears, losing ~30% performance in
> > >>>>>>>>>>>>> glxgear.
> > >>>>>>>>>>>>>
> > >>>>>>>>>>>>> I think (?) that enabling CPR support would be the proper
> > >>>>>>>>>>>>> solution to this - that would ensure that the regulators run
> > >>>>>>>>>>>>> at the voltage the hardware needs to be stable.
> > >>>>>>>>>>>>>
> > >>>>>>>>>>>>> Is hacking the voltage higher (although ideally not quite
> > >>>>>>>>>>>>> that high) an acceptable short term solution until we have
> > >>>>>>>>>>>>> CPR? Or would it be safer to just not make use of the higher
> > >>>>>>>>>>>>> frequencies on a630 for now?
> > >>>>>>>>>>>>>
> > >>>>>>>>>>>>
> > >>>>>>>>>>>> tbh, I'm not sure about the regulator stuff and CPR.. Bjorn is
> > >>>>>>>>>>>> already
> > >>>>>>>>>>>> on CC and I added sboyd, maybe one of them knows better.
> > >>>>>>>>>>>>
> > >>>>>>>>>>>> In the short term, removing the higher problematic OPPs from
> > >>>>>>>>>>>> dts might
> > >>>>>>>>>>>> be a better option than this patch (which I'm dropping), since
> > >>>>>>>>>>>> there
> > >>>>>>>>>>>> is nothing stopping other workloads from hitting higher OPPs.
> > >>>>>>>>>>> Oh yeah that sounds like a more sensible workaround than mine .
> > >>>>>>>>>>>>
> > >>>>>>>>>>>> I'm slightly curious why I didn't have problems at higher OPPs
> > >>>>>>>>>>>> on my
> > >>>>>>>>>>>> c630 laptop (sdm850)
> > >>>>>>>>>>> Perhaps you won the sillicon lottery - iirc sdm850 is binned
> > >>>>>>>>>>> for higher clocks as is out of the factory.
> > >>>>>>>>>>>
> > >>>>>>>>>>> Would it be best to drop the OPPs for all devices? Or just
> > >>>>>>>>>>> those affected? I guess it's possible another c630 might
> > >>>>>>>>>>> crash where yours doesn't?
> > >>>>>>>>>>
> > >>>>>>>>>> I've not heard any reports of similar issues from the handful of
> > >>>>>>>>>> other
> > >>>>>>>>>> folks with c630's on #aarch64-laptops.. but I can't really say
> > >>>>>>>>>> if that
> > >>>>>>>>>> is luck or not.
> > >>>>>>>>> It looks like this affects at least the OnePlus 6 and PocoPhone
> > >>>>>>>>> F1, I've done some more poking and the following diff
> > >>>>>>>>> seems to fix the stability issues completely, it seems the delay
> > >>>>>>>>> is required to let the update propagate.
> > >>>>>>>>>
> > >>>>>>>>> This doesn't feel like the right fix, but hopefully it's enough
> > >>>>>>>>> to come up with a better solution than disabling the new
> > >>>>>>>>> devfreq behaviour on a630.
> > >>>>>>>>>
> > >>>>>>>>> diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
> > >>>>>>>>> b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
> > >>>>>>>>> index d7cec7f0dde0..69e2a5e84dae 100644
> > >>>>>>>>> --- a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
> > >>>>>>>>> +++ b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
> > >>>>>>>>> @@ -139,6 +139,10 @@ void a6xx_gmu_set_freq(struct msm_gpu *gpu,
> > >>>>>>>>> struct dev_pm_opp *opp)
> > >>>>>>>>> return;
> > >>>>>>>>> }
> > >>>>>>>>>
> > >>>>>>>>> + dev_pm_opp_set_opp(&gpu->pdev->dev, opp);
> > >>>>>>>>> +
> > >>>>>>>>> + usleep_range(300, 500);
> > >>>>>>>>> +
> > >>>>>>>>
> > >>>>>
> > >>>>> I am a bit confused. We don't define a power domain for gpu in dt,
> > >>>>> correct? Then what exactly set_opp do here? Do you think this usleep is
> > >>>>> what is helping here somehow to mask the issue?
> > >>> The power domains (for cx and gx) are defined in the GMU DT, the OPPs in
> > >>> the GPU DT. For the sake of simplicity I'll refer to the lowest
> > >>> frequency (257000000) and OPP level (RPMH_REGULATOR_LEVEL_LOW_SVS) as
> > >>> the "min" state, and the highest frequency (710000000) and OPP level
> > >>> (RPMH_REGULATOR_LEVEL_TURBO_L1) as the "max" state. These are defined in
> > >>> sdm845.dtsi under the gpu node.
> > >>>
> > >>> The new devfreq behaviour unmasks what I think is a driver bug, it
> > >>> inadvertently puts much more strain on the GPU regulators than they
> > >>> usually get. With the new behaviour the GPU jumps from it's min state to
> > >>> the max state and back again extremely rapidly under workloads as small
> > >>> as refreshing UI. Where previously the GPU would rarely if ever go above
> > >>> 342MHz when interacting with the device, it now jumps between min and
> > >>> max many times per second.
> > >>>
> > >>> If my understanding is correct, the current implementation of the GMU
> > >>> set freq is the following:
> > >>> - Get OPP for frequency to set
> > >>> - Push the frequency to the GMU - immediately updating the core clock
> > >>> - Call dev_pm_opp_set_opp() which triggers a notify chain, this winds
> > >>> up somewhere in power management code and causes the gx regulator level
> > >>> to be updated
> > >>
> > >> Nope. dev_pm_opp_set_opp() sets the bandwidth for gpu and nothing else.
> > >> We were using a different api earlier which got deprecated -
> > >> dev_pm_opp_set_bw().
> > Huh ok, thanks for the correction. So it's the GMU writes in this function which cause the regulator to be adjusted?
> > >
> > > Hmm, ok, if this is just setting icc vote, the order shouldn't be too important.
> > >
> > > I guess GMU then is the one that is controlling the regulator(s) to
> > > ensure adequate voltage for the requested freq?
> > >
> > > But the GMU fw should be the same for a618 and a630, md5sum of what
> > > I'm using (from linux-firmware):
> > >
> > > ab20135f7adf48e0f344282a37da80e4 a630_gmu.bin
> > Same here.
> > >
> > >>>
> > >>> The regulator will then take some time to reach it's new voltage level
> > >>> and stabilise. I believe that rapid transitions between min and max
> > >>> state - in combination with the increased current load from the GPU core
> > >>> - lead to the regulator becoming unstable (e.g. when it's requested to
> > >>> transition from it's lowest to highest levels immediately after
> > >>> transitioning down), the unstable voltage causes the GPU to crash.
> > >>>
> > >>> Silicon lottery will of course play a role here - this is very much an
> > >>> edge case and would definitely be different on a per-device and even
> > >>> per-unit basis.
> > >>>>
> > >>>> Hmm, I thought "opp-level = RPMH_REGULATOR_LEVEL_*" did *something*,
> > >>>> but tbh I'm not sure exactly what..
> > >>>>
> > >>>>> I feel we should just leave the new dcvs feature (shall we call it NAP?)
> > >>>>> disabled for a630 (and 10ms devfreq interval), until this is root
> > >>>>> caused.
> > >>> I believe this hacky workaround expresses the root cause of the issue
> > >>> quite clearly, by setting the OPP first and allowing the gx regulator to
> > >>> become stable before telling the GPU to change clock speeds, we avoid
> > >>> the edge case and prevent the crashes.
> > >>>
> > >>> I took some rough measurements by adding logging to msm_devfreq_idle and
> > >>> causing UI updates for ~20 seconds and that function is being called
> > >>> about 30 times per second. This means the GPU is transitioning between
> > >>> min (idle) state and max (active / boost) state at that frequency and
> > >>> causing the issue I described above. It's likely that the usleep is
> > >>> helping to mask this behaviour.
> > >>>
> > >>> I hope this serves as a slightly better explanation of what I perceive
> > >>> to be the issue, I realise my previous explanations were not very
> > >>> adequate, I apologise for all the noise.
> > >>>>
> > >>>> I suppose "NAP" is a reasonable name.
> > >>>>
> > >>>> But I think that reverting to previous behavior would not be enough,
> > >>>> there is nothing stopping devfreq from jumping from min to max freq,
> > >>>> which AFAIU should be enough to trigger this. I guess that there just
> > >>>> hasn't been enough testing with different game workloads on those
> > >>>> phones to trigger this.
> > >>> Ack
> > >>>>
> > >>>> That said, I haven't seen similar issues on my sdm850 laptop, where I
> > >>>> defn have triggered min->max freq transitions.. I guess it would be
> > >>>> interesting to know if this issue could be reproduced on db845c, or if
> > >>>> it really is board specific?
> > >>> My db845c arrives this week, I'll definitely try and reproduce this.
> > >>>>
> > >>>> To workaround, I think we'd need to implement some way to limit that
> > >>>> maximum frequency jump (and then use delayed work to continue ramping
> > >>>> up the freq over time until we hit the target).. which seems like a
> > >>>> lot of work if this is just a board(s) specific workaround and isn't
> > >>>> needed once CPR is supported
> > >>> Based on my reasoning above, I came up with the following: reducing
> > >>> thrashing by preventing rapid idle/active transitions. The minimum
> > >>> active time of 30ms was just used for testing, I think some number
> > >>> between 2 and 4 frames would be a sensible choice - the higher the safer.
> > >>>
> > >>> diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
> > >>> b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
> > >>> index d7cec7f0dde0..87f2d1085c3e 100644
> > >>> --- a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
> > >>> +++ b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
> > >>> @@ -139,6 +139,8 @@ void a6xx_gmu_set_freq(struct msm_gpu *gpu, struct
> > >>> dev_pm_opp *opp)
> > >>> return;
> > >>> }
> > >>>
> > >>> + dev_pm_opp_set_opp(&gpu->pdev->dev, opp);
> > >>> +
> > >>> gmu_write(gmu, REG_A6XX_GMU_DCVS_ACK_OPTION, 0);
> > >>>
> > >>> gmu_write(gmu, REG_A6XX_GMU_DCVS_PERF_SETTING,
> > >>> @@ -158,7 +160,6 @@ void a6xx_gmu_set_freq(struct msm_gpu *gpu, struct
> > >>> dev_pm_opp *opp)
> > >>> if (ret)
> > >>> dev_err(gmu->dev, "GMU set GPU frequency error: %d\n",
> > >>> ret);
> > >>>
> > >>> - dev_pm_opp_set_opp(&gpu->pdev->dev, opp);
> > >>> pm_runtime_put(gmu->dev);
> > >>> }
> > >>>
> > >>> diff --git a/drivers/gpu/drm/msm/msm_gpu.h b/drivers/gpu/drm/msm/msm_gpu.h
> > >>> index 0e4b45bff2e6..0e2293bcb46d 100644
> > >>> --- a/drivers/gpu/drm/msm/msm_gpu.h
> > >>> +++ b/drivers/gpu/drm/msm/msm_gpu.h
> > >>> @@ -99,8 +99,8 @@ struct msm_gpu_devfreq {
> > >>> /** time: Time of last sampling period. */
> > >>> ktime_t time;
> > >>>
> > >>> - /** idle_time: Time of last transition to idle: */
> > >>> - ktime_t idle_time;
> > >>> + /** transition_time: Time of last transition between
> > >>> idle/active: */
> > >>> + ktime_t transition_time;
> > >>>
> > >>> /**
> > >>> * idle_freq:
> > >>> diff --git a/drivers/gpu/drm/msm/msm_gpu_devfreq.c
> > >>> b/drivers/gpu/drm/msm/msm_gpu_devfreq.c
> > >>> index 0a1ee20296a2..774a7be33e7a 100644
> > >>> --- a/drivers/gpu/drm/msm/msm_gpu_devfreq.c
> > >>> +++ b/drivers/gpu/drm/msm/msm_gpu_devfreq.c
> > >>> @@ -157,7 +157,7 @@ void msm_devfreq_active(struct msm_gpu *gpu)
> > >>> */
> > >>> mutex_lock(&df->devfreq->lock);
> > >>>
> > >>> - idle_time = ktime_to_ms(ktime_sub(ktime_get(), df->idle_time));
> > >>> + idle_time = ktime_to_ms(ktime_sub(ktime_get(),
> > >>> df->transition_time));
> > >>>
> > >>> /*
> > >>> * If we've been idle for a significant fraction of a polling
> > >>> @@ -168,7 +168,7 @@ void msm_devfreq_active(struct msm_gpu *gpu)
> > >>> target_freq *= 2;
> > >>> }
> > >>>
> > >>> - df->idle_freq = 0;
> > >>> + df->transition_time = ktime_get();
> > >>>
> > >>> msm_devfreq_target(&gpu->pdev->dev, &target_freq, 0);
> > >>>
> > >>> @@ -185,6 +185,16 @@ void msm_devfreq_idle(struct msm_gpu *gpu)
> > >>> {
> > >>> struct msm_gpu_devfreq *df = &gpu->devfreq;
> > >>> unsigned long idle_freq, target_freq = 0;
> > >>> + unsigned int active_time;
> > >>> +
> > >>> + active_time = ktime_to_ms(ktime_sub(ktime_get(),
> > >>> df->transition_time));
> > >>> + /*
> > >>> + * Don't go back to idle unless we've been active for at least 30ms
> > >>> + * to avoid thrashing.
> > >>
> > >> This basically defeats the purpose of this feature! At least, we should
> > >> keep this '30' gpu specific. Does a Kconfig makes sense here?? BTW, if
> > >> 300us was helping you earlier why do you want it to be 30ms now?
> > Previously I thought that the issue was specifically related to the transition from idle to active, hence sleeping to let
> > the regulator catch up. Whilst that masked the issue, it didn't *fix* it; I now think it's actually due to the repeated
> > transitions between idle and active states.
> >
> > Enforcing that the GPU stay active for at least two frames should still give the intended goal of reducing latency and
> > more reliably fixes the issue.
> >
> > AFAIU from reading the commit description, the goal of the devfreq tuning is to reduce latency by quickly bursting up
> > when there's user activity; by telling the GPU to stay active for longer we shouldn't impede this behaviour at all.
>
> Well, there are a couple parts to it.. one thing it was intended to
> fix was a bad devfreq behavior I was seeing with, for example, games
> that throttle themselves to 30fps, so rendering one 16ms frame every
> other vblank cycle.. previously devfreq would ramp up to max just as
> it was at the end of rendering a frame, and then sit there at fmax
> while GPU was doing nothing for the next 16ms, and then ramp back down
> to fmin just as the GPU got some more work to do. So it was nearly
> 180deg out of phase with where you'd want it to be
> increasing/decreasing GPU freq.
>

But afaict you only change the selection of frequency, not how the
actual change is carried out. As such this issue isn't related to your
change.

> The longer polling interval is meant to smooth that out, with clamping
> to fmin while GPU is idle to offset the fact that it would take the
> GPU longer to ramp down (and it otherwise being pointless to keep the
> GPU at a high freq when it isn't doing anything), and boosting above
> what freq devfreq would have picked if the gpu had been idle for a
> while (to offset the longer ramp up on user input).
>
> So the 30ms delay for clamping to fmin would defeat one part of that.
>
> We could perhaps somehow disable the clamping to fmin for certain
> boards and/or gpus, which would possibly lose a bit of power savings
> but otherwise be ok. But I'm not clear whether this is a board
> specific issue (ie. are these phones using different PMICs compared to
> sdm850 laptops and db845c? Or is there some difference in what power
> rail is powering the GPU?)
>
> I think it was mentioned earlier that CPR should help (AFAIU that is
> some sort of hw closed loop voltage regulation?) so maybe this is just
> a short term workaround?
>

On 845 and onwards, we pick a corner which is translated to an actual
voltage by someone else, and whether CPR is involved is hidden inside
that other entity.

Regards,
Bjorn

2021-09-08 02:22:49

by Bjorn Andersson

[permalink] [raw]
Subject: Re: [PATCH] drm/msm: Disable frequency clamping on a630

On Mon 09 Aug 10:26 PDT 2021, Akhil P Oommen wrote:

> On 8/9/2021 9:48 PM, Caleb Connolly wrote:
> >
> >
> > On 09/08/2021 17:12, Rob Clark wrote:
> > > On Mon, Aug 9, 2021 at 7:52 AM Akhil P Oommen
> > > <[email protected]> wrote:
[..]
> > > > I am a bit confused. We don't define a power domain for gpu in dt,
> > > > correct? Then what exactly set_opp do here? Do you think this usleep is
> > > > what is helping here somehow to mask the issue?
> > The power domains (for cx and gx) are defined in the GMU DT, the OPPs in
> > the GPU DT. For the sake of simplicity I'll refer to the lowest
> > frequency (257000000) and OPP level (RPMH_REGULATOR_LEVEL_LOW_SVS) as
> > the "min" state, and the highest frequency (710000000) and OPP level
> > (RPMH_REGULATOR_LEVEL_TURBO_L1) as the "max" state. These are defined in
> > sdm845.dtsi under the gpu node.
> >
> > The new devfreq behaviour unmasks what I think is a driver bug, it
> > inadvertently puts much more strain on the GPU regulators than they
> > usually get. With the new behaviour the GPU jumps from it's min state to
> > the max state and back again extremely rapidly under workloads as small
> > as refreshing UI. Where previously the GPU would rarely if ever go above
> > 342MHz when interacting with the device, it now jumps between min and
> > max many times per second.
> >
> > If my understanding is correct, the current implementation of the GMU
> > set freq is the following:
> >  - Get OPP for frequency to set
> >  - Push the frequency to the GMU - immediately updating the core clock
> >  - Call dev_pm_opp_set_opp() which triggers a notify chain, this winds
> > up somewhere in power management code and causes the gx regulator level
> > to be updated
>
> Nope. dev_pm_opp_set_opp() sets the bandwidth for gpu and nothing else. We
> were using a different api earlier which got deprecated -
> dev_pm_opp_set_bw().
>

On the Lenovo Yoga C630 this is reproduced by starting alacritty, and
if I'm lucky I manage to hit a few keys before it crashes, so I spent a
few hours looking into this as well...

As you say, the dev_pm_opp_set_opp() will only cast an interconnect vote.
The opp-level is just there for show and isn't used by anything, at
least not on 845.

Furthermore, I'm missing something in my tree, so the interconnect
doesn't hit sync_state, and as such we're not actually scaling the
buses. So the problem is not that Linux doesn't turn on the buses in
time.

So I suspect that the "AHB bus error" isn't saying that we turned off
the bus, but rather that the GPU becomes unstable or something of that
sort.


Lastly, I reverted 9bc95570175a ("drm/msm: Devfreq tuning") and ran
Aquarium for 20 minutes without a problem. I then switched the gpu
devfreq governor to "userspace" and ran the following:

while true; do
echo 257000000 > /sys/class/devfreq/5000000.gpu/userspace/set_freq
echo 710000000 > /sys/class/devfreq/5000000.gpu/userspace/set_freq
done

It took 19 iterations of this loop to crash the GPU.
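
Spelled out end-to-end, the reproducer is roughly the following (the
governor switch uses the standard devfreq sysfs node, and the counter
is only there for convenience):

# keep something rendering (e.g. Aquarium) going in parallel so the
# gpu stays awake for the duration of the loop
echo userspace > /sys/class/devfreq/5000000.gpu/governor

i=0
while true; do
    i=$((i + 1)); echo "iteration $i"
    echo 257000000 > /sys/class/devfreq/5000000.gpu/userspace/set_freq
    echo 710000000 > /sys/class/devfreq/5000000.gpu/userspace/set_freq
done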

So the problem doesn't seem to be Rob's change, it's just that prior to
it the chance of hitting it is way lower. The question is still what it is
that we're triggering.

Regards,
Bjorn

2021-09-08 13:52:27

by Caleb Connolly

[permalink] [raw]
Subject: Re: [PATCH] drm/msm: Disable frequency clamping on a630



On 08/09/2021 03:21, Bjorn Andersson wrote:
> On Mon 09 Aug 10:26 PDT 2021, Akhil P Oommen wrote:
>
>> On 8/9/2021 9:48 PM, Caleb Connolly wrote:
>>>
>>>
>>> On 09/08/2021 17:12, Rob Clark wrote:
>>>> On Mon, Aug 9, 2021 at 7:52 AM Akhil P Oommen
>>>> <[email protected]> wrote:
> [..]
>>>>> I am a bit confused. We don't define a power domain for gpu in dt,
>>>>> correct? Then what exactly set_opp do here? Do you think this usleep is
>>>>> what is helping here somehow to mask the issue?
>>> The power domains (for cx and gx) are defined in the GMU DT, the OPPs in
>>> the GPU DT. For the sake of simplicity I'll refer to the lowest
>>> frequency (257000000) and OPP level (RPMH_REGULATOR_LEVEL_LOW_SVS) as
>>> the "min" state, and the highest frequency (710000000) and OPP level
>>> (RPMH_REGULATOR_LEVEL_TURBO_L1) as the "max" state. These are defined in
>>> sdm845.dtsi under the gpu node.
>>>
>>> The new devfreq behaviour unmasks what I think is a driver bug, it
>>> inadvertently puts much more strain on the GPU regulators than they
>>> usually get. With the new behaviour the GPU jumps from it's min state to
>>> the max state and back again extremely rapidly under workloads as small
>>> as refreshing UI. Where previously the GPU would rarely if ever go above
>>> 342MHz when interacting with the device, it now jumps between min and
>>> max many times per second.
>>>
>>> If my understanding is correct, the current implementation of the GMU
>>> set freq is the following:
>>>  - Get OPP for frequency to set
>>>  - Push the frequency to the GMU - immediately updating the core clock
>>>  - Call dev_pm_opp_set_opp() which triggers a notify chain, this winds
>>> up somewhere in power management code and causes the gx regulator level
>>> to be updated
>>
>> Nope. dev_pm_opp_set_opp() sets the bandwidth for gpu and nothing else. We
>> were using a different api earlier which got deprecated -
>> dev_pm_opp_set_bw().
>>
>
> On the Lenovo Yoga C630 this is reproduced by starting alacritty and if
> I'm lucky I managed to hit a few keys before it crashes, so I spent a
> few hours looking into this as well...
>
> As you say, the dev_pm_opp_set_opp() will only cast a interconnect vote.
> The opp-level is just there for show and isn't used by anything, at
> least not on 845.
>
> Further more, I'm missing something in my tree, so the interconnect
> doesn't hit sync_state, and as such we're not actually scaling the
> buses. So the problem is not that Linux doesn't turn on the buses in
> time.
>
> So I suspect that the "AHB bus error" isn't saying that we turned off
> the bus, but rather that the GPU becomes unstable or something of that
> sort.
>
>
> Lastly, I reverted 9bc95570175a ("drm/msm: Devfreq tuning") and ran
> Aquarium for 20 minutes without a problem. I then switched the gpu
> devfreq governor to "userspace" and ran the following:
>
> while true; do
> echo 257000000 > /sys/class/devfreq/5000000.gpu/userspace/set_freq
> echo 710000000 > /sys/class/devfreq/5000000.gpu/userspace/set_freq
> done
>
> It took 19 iterations of this loop to crash the GPU.
>
> So the problem doesn't seem to be Rob's change, it's just that prior to
> it the chance to hitting it is way lower. Question is still what it is
> that we're triggering.

Do the opp-levels in DTS represent how the hardware behaves? If so, then it does
appear that whatever is responsible for scaling the GX rail voltage
has no rate limiting and will attempt to switch the regulator between min/max
voltage as often as we tell it to, which is probably not something the hardware expects.
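
FWIW the switching rate can also be eyeballed from userspace, without
any kernel logging, by diffing the devfreq transition statistics over
an interval (trans_stat is the standard devfreq sysfs attribute; the
5000000.gpu path is assumed, as elsewhere in the thread):

# snapshot the per-OPP time / transition counters, poke at the UI for
# ~20 seconds, then compare the two snapshots
cat /sys/class/devfreq/5000000.gpu/trans_stat > /tmp/trans_stat.before
sleep 20    # interact with the UI while this runs
cat /sys/class/devfreq/5000000.gpu/trans_stat > /tmp/trans_stat.after
diff /tmp/trans_stat.before /tmp/trans_stat.after

A rapidly growing count in the min<->max cells of that table is the
regulator being asked to swing between its lowest and highest levels.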
>
> Regards,
> Bjorn
>

--
Kind Regards,
Caleb (they/them)

2021-09-09 14:16:02

by Amit Pundir

[permalink] [raw]
Subject: Re: [PATCH] drm/msm: Disable frequency clamping on a630

On Wed, 8 Sept 2021 at 07:50, Bjorn Andersson
<[email protected]> wrote:
>
> On Mon 09 Aug 10:26 PDT 2021, Akhil P Oommen wrote:
>
> > On 8/9/2021 9:48 PM, Caleb Connolly wrote:
> > >
> > >
> > > On 09/08/2021 17:12, Rob Clark wrote:
> > > > On Mon, Aug 9, 2021 at 7:52 AM Akhil P Oommen
> > > > <[email protected]> wrote:
> [..]
> > > > > I am a bit confused. We don't define a power domain for gpu in dt,
> > > > > correct? Then what exactly set_opp do here? Do you think this usleep is
> > > > > what is helping here somehow to mask the issue?
> > > The power domains (for cx and gx) are defined in the GMU DT, the OPPs in
> > > the GPU DT. For the sake of simplicity I'll refer to the lowest
> > > frequency (257000000) and OPP level (RPMH_REGULATOR_LEVEL_LOW_SVS) as
> > > the "min" state, and the highest frequency (710000000) and OPP level
> > > (RPMH_REGULATOR_LEVEL_TURBO_L1) as the "max" state. These are defined in
> > > sdm845.dtsi under the gpu node.
> > >
> > > The new devfreq behaviour unmasks what I think is a driver bug, it
> > > inadvertently puts much more strain on the GPU regulators than they
> > > usually get. With the new behaviour the GPU jumps from it's min state to
> > > the max state and back again extremely rapidly under workloads as small
> > > as refreshing UI. Where previously the GPU would rarely if ever go above
> > > 342MHz when interacting with the device, it now jumps between min and
> > > max many times per second.
> > >
> > > If my understanding is correct, the current implementation of the GMU
> > > set freq is the following:
> > > - Get OPP for frequency to set
> > > - Push the frequency to the GMU - immediately updating the core clock
> > > - Call dev_pm_opp_set_opp() which triggers a notify chain, this winds
> > > up somewhere in power management code and causes the gx regulator level
> > > to be updated
> >
> > Nope. dev_pm_opp_set_opp() sets the bandwidth for gpu and nothing else. We
> > were using a different api earlier which got deprecated -
> > dev_pm_opp_set_bw().
> >
>
> On the Lenovo Yoga C630 this is reproduced by starting alacritty and if
> I'm lucky I managed to hit a few keys before it crashes, so I spent a
> few hours looking into this as well...
>
> As you say, the dev_pm_opp_set_opp() will only cast a interconnect vote.
> The opp-level is just there for show and isn't used by anything, at
> least not on 845.
>
> Further more, I'm missing something in my tree, so the interconnect
> doesn't hit sync_state, and as such we're not actually scaling the
> buses. So the problem is not that Linux doesn't turn on the buses in
> time.
>
> So I suspect that the "AHB bus error" isn't saying that we turned off
> the bus, but rather that the GPU becomes unstable or something of that
> sort.
>
>
> Lastly, I reverted 9bc95570175a ("drm/msm: Devfreq tuning") and ran
> Aquarium for 20 minutes without a problem. I then switched the gpu
> devfreq governor to "userspace" and ran the following:
>
> while true; do
> echo 257000000 > /sys/class/devfreq/5000000.gpu/userspace/set_freq
> echo 710000000 > /sys/class/devfreq/5000000.gpu/userspace/set_freq
> done
>
> It took 19 iterations of this loop to crash the GPU.

Ack. With your above script, I can reproduce a crash too on db845c
(A630) running v5.14. I didn't get any crash log though, and the device
just rebooted to USB crash mode.

And same crash on RB5 (A650) too https://hastebin.com/raw/ejutetuwun

>
> So the problem doesn't seem to be Rob's change, it's just that prior to
> it the chance to hitting it is way lower. Question is still what it is
> that we're triggering.
>
> Regards,
> Bjorn

2021-09-09 16:16:49

by Amit Pundir

[permalink] [raw]
Subject: Re: [PATCH] drm/msm: Disable frequency clamping on a630

On Thu, 9 Sept 2021 at 17:47, Amit Pundir <[email protected]> wrote:
>
> On Wed, 8 Sept 2021 at 07:50, Bjorn Andersson
> <[email protected]> wrote:
> >
> > On Mon 09 Aug 10:26 PDT 2021, Akhil P Oommen wrote:
> >
> > > On 8/9/2021 9:48 PM, Caleb Connolly wrote:
> > > >
> > > >
> > > > On 09/08/2021 17:12, Rob Clark wrote:
> > > > > On Mon, Aug 9, 2021 at 7:52 AM Akhil P Oommen
> > > > > <[email protected]> wrote:
> > [..]
> > > > > > I am a bit confused. We don't define a power domain for gpu in dt,
> > > > > > correct? Then what exactly set_opp do here? Do you think this usleep is
> > > > > > what is helping here somehow to mask the issue?
> > > > The power domains (for cx and gx) are defined in the GMU DT, the OPPs in
> > > > the GPU DT. For the sake of simplicity I'll refer to the lowest
> > > > frequency (257000000) and OPP level (RPMH_REGULATOR_LEVEL_LOW_SVS) as
> > > > the "min" state, and the highest frequency (710000000) and OPP level
> > > > (RPMH_REGULATOR_LEVEL_TURBO_L1) as the "max" state. These are defined in
> > > > sdm845.dtsi under the gpu node.
> > > >
> > > > The new devfreq behaviour unmasks what I think is a driver bug, it
> > > > inadvertently puts much more strain on the GPU regulators than they
> > > > usually get. With the new behaviour the GPU jumps from it's min state to
> > > > the max state and back again extremely rapidly under workloads as small
> > > > as refreshing UI. Where previously the GPU would rarely if ever go above
> > > > 342MHz when interacting with the device, it now jumps between min and
> > > > max many times per second.
> > > >
> > > > If my understanding is correct, the current implementation of the GMU
> > > > set freq is the following:
> > > > - Get OPP for frequency to set
> > > > - Push the frequency to the GMU - immediately updating the core clock
> > > > - Call dev_pm_opp_set_opp() which triggers a notify chain, this winds
> > > > up somewhere in power management code and causes the gx regulator level
> > > > to be updated
> > >
> > > Nope. dev_pm_opp_set_opp() sets the bandwidth for gpu and nothing else. We
> > > were using a different api earlier which got deprecated -
> > > dev_pm_opp_set_bw().
> > >
> >
> > On the Lenovo Yoga C630 this is reproduced by starting alacritty and if
> > I'm lucky I managed to hit a few keys before it crashes, so I spent a
> > few hours looking into this as well...
> >
> > As you say, the dev_pm_opp_set_opp() will only cast a interconnect vote.
> > The opp-level is just there for show and isn't used by anything, at
> > least not on 845.
> >
> > Further more, I'm missing something in my tree, so the interconnect
> > doesn't hit sync_state, and as such we're not actually scaling the
> > buses. So the problem is not that Linux doesn't turn on the buses in
> > time.
> >
> > So I suspect that the "AHB bus error" isn't saying that we turned off
> > the bus, but rather that the GPU becomes unstable or something of that
> > sort.
> >
> >
> > Lastly, I reverted 9bc95570175a ("drm/msm: Devfreq tuning") and ran
> > Aquarium for 20 minutes without a problem. I then switched the gpu
> > devfreq governor to "userspace" and ran the following:
> >
> > while true; do
> > echo 257000000 > /sys/class/devfreq/5000000.gpu/userspace/set_freq
> > echo 710000000 > /sys/class/devfreq/5000000.gpu/userspace/set_freq
> > done
> >
> > It took 19 iterations of this loop to crash the GPU.
>
> Ack. With your above script, I can reproduce a crash too on db845c
> (A630) running v5.14. I didn't get any crash log though and device
> just rebooted to USB crash mode.
>
> And same crash on RB5 (A650) too https://hastebin.com/raw/ejutetuwun

fwiw I can't reproduce this crash on RB5 so far with v5.15-rc1 merge
window (HEAD: 477f70cd2a67)

>
> >
> > So the problem doesn't seem to be Rob's change, it's just that prior to
> > it the chance to hitting it is way lower. Question is still what it is
> > that we're triggering.
> >
> > Regards,
> > Bjorn

2021-09-09 20:06:22

by Akhil P Oommen

[permalink] [raw]
Subject: Re: [PATCH] drm/msm: Disable frequency clamping on a630

On 9/9/2021 9:42 PM, Amit Pundir wrote:
> On Thu, 9 Sept 2021 at 17:47, Amit Pundir <[email protected]> wrote:
>>
>> On Wed, 8 Sept 2021 at 07:50, Bjorn Andersson
>> <[email protected]> wrote:
>>>
>>> On Mon 09 Aug 10:26 PDT 2021, Akhil P Oommen wrote:
>>>
>>>> On 8/9/2021 9:48 PM, Caleb Connolly wrote:
>>>>>
>>>>>
>>>>> On 09/08/2021 17:12, Rob Clark wrote:
>>>>>> On Mon, Aug 9, 2021 at 7:52 AM Akhil P Oommen
>>>>>> <[email protected]> wrote:
>>> [..]
>>>>>>> I am a bit confused. We don't define a power domain for gpu in dt,
>>>>>>> correct? Then what exactly set_opp do here? Do you think this usleep is
>>>>>>> what is helping here somehow to mask the issue?
>>>>> The power domains (for cx and gx) are defined in the GMU DT, the OPPs in
>>>>> the GPU DT. For the sake of simplicity I'll refer to the lowest
>>>>> frequency (257000000) and OPP level (RPMH_REGULATOR_LEVEL_LOW_SVS) as
>>>>> the "min" state, and the highest frequency (710000000) and OPP level
>>>>> (RPMH_REGULATOR_LEVEL_TURBO_L1) as the "max" state. These are defined in
>>>>> sdm845.dtsi under the gpu node.
>>>>>
>>>>> The new devfreq behaviour unmasks what I think is a driver bug, it
>>>>> inadvertently puts much more strain on the GPU regulators than they
>>>>> usually get. With the new behaviour the GPU jumps from it's min state to
>>>>> the max state and back again extremely rapidly under workloads as small
>>>>> as refreshing UI. Where previously the GPU would rarely if ever go above
>>>>> 342MHz when interacting with the device, it now jumps between min and
>>>>> max many times per second.
>>>>>
>>>>> If my understanding is correct, the current implementation of the GMU
>>>>> set freq is the following:
>>>>> - Get OPP for frequency to set
>>>>> - Push the frequency to the GMU - immediately updating the core clock
>>>>> - Call dev_pm_opp_set_opp() which triggers a notify chain, this winds
>>>>> up somewhere in power management code and causes the gx regulator level
>>>>> to be updated
>>>>
>>>> Nope. dev_pm_opp_set_opp() sets the bandwidth for gpu and nothing else. We
>>>> were using a different api earlier which got deprecated -
>>>> dev_pm_opp_set_bw().
>>>>
>>>
>>> On the Lenovo Yoga C630 this is reproduced by starting alacritty and if
>>> I'm lucky I managed to hit a few keys before it crashes, so I spent a
>>> few hours looking into this as well...
>>>
>>> As you say, the dev_pm_opp_set_opp() will only cast a interconnect vote.
>>> The opp-level is just there for show and isn't used by anything, at
>>> least not on 845.
>>>
>>> Further more, I'm missing something in my tree, so the interconnect
>>> doesn't hit sync_state, and as such we're not actually scaling the
>>> buses. So the problem is not that Linux doesn't turn on the buses in
>>> time.
>>>
>>> So I suspect that the "AHB bus error" isn't saying that we turned off
>>> the bus, but rather that the GPU becomes unstable or something of that
>>> sort.
>>>
>>>
>>> Lastly, I reverted 9bc95570175a ("drm/msm: Devfreq tuning") and ran
>>> Aquarium for 20 minutes without a problem. I then switched the gpu
>>> devfreq governor to "userspace" and ran the following:
>>>
>>> while true; do
>>> echo 257000000 > /sys/class/devfreq/5000000.gpu/userspace/set_freq
>>> echo 710000000 > /sys/class/devfreq/5000000.gpu/userspace/set_freq
>>> done
>>>
>>> It took 19 iterations of this loop to crash the GPU.
>>
>> Ack. With your above script, I can reproduce a crash too on db845c
>> (A630) running v5.14. I didn't get any crash log though and device
>> just rebooted to USB crash mode.
>>
>> And same crash on RB5 (A650) too https://hastebin.com/raw/ejutetuwun

Are we sure this is the same issue? It could be, but I thought we were
seeing a bunch of random gpu errors (which may eventually lead to a device crash).

-Akhil

>
> fwiw I can't reproduce this crash on RB5 so far with v5.15-rc1 merge
> window (HEAD: 477f70cd2a67)
>
>>
>>>
>>> So the problem doesn't seem to be Rob's change, it's just that prior to
>>> it the chance to hitting it is way lower. Question is still what it is
>>> that we're triggering.
>>>
>>> Regards,
>>> Bjorn

2021-09-09 20:52:51

by Rob Clark

[permalink] [raw]
Subject: Re: [PATCH] drm/msm: Disable frequency clamping on a630

On Thu, Sep 9, 2021 at 12:50 PM Akhil P Oommen <[email protected]> wrote:
>
> On 9/9/2021 9:42 PM, Amit Pundir wrote:
> > On Thu, 9 Sept 2021 at 17:47, Amit Pundir <[email protected]> wrote:
> >>
> >> On Wed, 8 Sept 2021 at 07:50, Bjorn Andersson
> >> <[email protected]> wrote:
> >>>
> >>> On Mon 09 Aug 10:26 PDT 2021, Akhil P Oommen wrote:
> >>>
> >>>> On 8/9/2021 9:48 PM, Caleb Connolly wrote:
> >>>>>
> >>>>>
> >>>>> On 09/08/2021 17:12, Rob Clark wrote:
> >>>>>> On Mon, Aug 9, 2021 at 7:52 AM Akhil P Oommen
> >>>>>> <[email protected]> wrote:
> >>> [..]
> >>>>>>> I am a bit confused. We don't define a power domain for gpu in dt,
> >>>>>>> correct? Then what exactly set_opp do here? Do you think this usleep is
> >>>>>>> what is helping here somehow to mask the issue?
> >>>>> The power domains (for cx and gx) are defined in the GMU DT, the OPPs in
> >>>>> the GPU DT. For the sake of simplicity I'll refer to the lowest
> >>>>> frequency (257000000) and OPP level (RPMH_REGULATOR_LEVEL_LOW_SVS) as
> >>>>> the "min" state, and the highest frequency (710000000) and OPP level
> >>>>> (RPMH_REGULATOR_LEVEL_TURBO_L1) as the "max" state. These are defined in
> >>>>> sdm845.dtsi under the gpu node.
> >>>>>
> >>>>> The new devfreq behaviour unmasks what I think is a driver bug, it
> >>>>> inadvertently puts much more strain on the GPU regulators than they
> >>>>> usually get. With the new behaviour the GPU jumps from it's min state to
> >>>>> the max state and back again extremely rapidly under workloads as small
> >>>>> as refreshing UI. Where previously the GPU would rarely if ever go above
> >>>>> 342MHz when interacting with the device, it now jumps between min and
> >>>>> max many times per second.
> >>>>>
> >>>>> If my understanding is correct, the current implementation of the GMU
> >>>>> set freq is the following:
> >>>>> - Get OPP for frequency to set
> >>>>> - Push the frequency to the GMU - immediately updating the core clock
> >>>>> - Call dev_pm_opp_set_opp() which triggers a notify chain, this winds
> >>>>> up somewhere in power management code and causes the gx regulator level
> >>>>> to be updated
> >>>>
> >>>> Nope. dev_pm_opp_set_opp() sets the bandwidth for gpu and nothing else. We
> >>>> were using a different api earlier which got deprecated -
> >>>> dev_pm_opp_set_bw().
> >>>>
> >>>
> >>> On the Lenovo Yoga C630 this is reproduced by starting alacritty and if
> >>> I'm lucky I managed to hit a few keys before it crashes, so I spent a
> >>> few hours looking into this as well...
> >>>
> >>> As you say, the dev_pm_opp_set_opp() will only cast a interconnect vote.
> >>> The opp-level is just there for show and isn't used by anything, at
> >>> least not on 845.
> >>>
> >>> Further more, I'm missing something in my tree, so the interconnect
> >>> doesn't hit sync_state, and as such we're not actually scaling the
> >>> buses. So the problem is not that Linux doesn't turn on the buses in
> >>> time.
> >>>
> >>> So I suspect that the "AHB bus error" isn't saying that we turned off
> >>> the bus, but rather that the GPU becomes unstable or something of that
> >>> sort.
> >>>
> >>>
> >>> Lastly, I reverted 9bc95570175a ("drm/msm: Devfreq tuning") and ran
> >>> Aquarium for 20 minutes without a problem. I then switched the gpu
> >>> devfreq governor to "userspace" and ran the following:
> >>>
> >>> while true; do
> >>> echo 257000000 > /sys/class/devfreq/5000000.gpu/userspace/set_freq
> >>> echo 710000000 > /sys/class/devfreq/5000000.gpu/userspace/set_freq
> >>> done
> >>>
> >>> It took 19 iterations of this loop to crash the GPU.
> >>
> >> Ack. With your above script, I can reproduce a crash too on db845c
> >> (A630) running v5.14. I didn't get any crash log though and device
> >> just rebooted to USB crash mode.
> >>
> >> And same crash on RB5 (A650) too https://hastebin.com/raw/ejutetuwun
>
> Are we sure this is the same issue? It could be, but I thought we were
> seeing a bunch of random gpu errors (which may eventually hit device crash).

In the sense that async-serror often seems to be a clk issue, it
*could* be related.. but this would have to be triggered by CPU
access. The symptom does seem very different.

BR,
-R

> -Akhil
>
> >
> > fwiw I can't reproduce this crash on RB5 so far with v5.15-rc1 merge
> > window (HEAD: 477f70cd2a67)
> >
> >>
> >>>
> >>> So the problem doesn't seem to be Rob's change, it's just that prior to
> >>> it the chance to hitting it is way lower. Question is still what it is
> >>> that we're triggering.
> >>>
> >>> Regards,
> >>> Bjorn
>

2021-09-10 17:19:04

by Rob Clark

[permalink] [raw]
Subject: Re: [PATCH] drm/msm: Disable frequency clamping on a630

On Tue, Sep 7, 2021 at 7:20 PM Bjorn Andersson
<[email protected]> wrote:
>
> On Mon 09 Aug 10:26 PDT 2021, Akhil P Oommen wrote:
>
> > On 8/9/2021 9:48 PM, Caleb Connolly wrote:
> > >
> > >
> > > On 09/08/2021 17:12, Rob Clark wrote:
> > > > On Mon, Aug 9, 2021 at 7:52 AM Akhil P Oommen
> > > > <[email protected]> wrote:
> [..]
> > > > > I am a bit confused. We don't define a power domain for gpu in dt,
> > > > > correct? Then what exactly set_opp do here? Do you think this usleep is
> > > > > what is helping here somehow to mask the issue?
> > > The power domains (for cx and gx) are defined in the GMU DT, the OPPs in
> > > the GPU DT. For the sake of simplicity I'll refer to the lowest
> > > frequency (257000000) and OPP level (RPMH_REGULATOR_LEVEL_LOW_SVS) as
> > > the "min" state, and the highest frequency (710000000) and OPP level
> > > (RPMH_REGULATOR_LEVEL_TURBO_L1) as the "max" state. These are defined in
> > > sdm845.dtsi under the gpu node.
> > >
> > > The new devfreq behaviour unmasks what I think is a driver bug, it
> > > inadvertently puts much more strain on the GPU regulators than they
> > > usually get. With the new behaviour the GPU jumps from it's min state to
> > > the max state and back again extremely rapidly under workloads as small
> > > as refreshing UI. Where previously the GPU would rarely if ever go above
> > > 342MHz when interacting with the device, it now jumps between min and
> > > max many times per second.
> > >
> > > If my understanding is correct, the current implementation of the GMU
> > > set freq is the following:
> > > - Get OPP for frequency to set
> > > - Push the frequency to the GMU - immediately updating the core clock
> > > - Call dev_pm_opp_set_opp() which triggers a notify chain, this winds
> > > up somewhere in power management code and causes the gx regulator level
> > > to be updated
> >
> > Nope. dev_pm_opp_set_opp() sets the bandwidth for gpu and nothing else. We
> > were using a different api earlier which got deprecated -
> > dev_pm_opp_set_bw().
> >
>
> On the Lenovo Yoga C630 this is reproduced by starting alacritty and if
> I'm lucky I managed to hit a few keys before it crashes, so I spent a
> few hours looking into this as well...
>
> As you say, the dev_pm_opp_set_opp() will only cast a interconnect vote.
> The opp-level is just there for show and isn't used by anything, at
> least not on 845.
>
> Further more, I'm missing something in my tree, so the interconnect
> doesn't hit sync_state, and as such we're not actually scaling the
> buses. So the problem is not that Linux doesn't turn on the buses in
> time.
>
> So I suspect that the "AHB bus error" isn't saying that we turned off
> the bus, but rather that the GPU becomes unstable or something of that
> sort.
>
>
> Lastly, I reverted 9bc95570175a ("drm/msm: Devfreq tuning") and ran
> Aquarium for 20 minutes without a problem. I then switched the gpu
> devfreq governor to "userspace" and ran the following:
>
> while true; do
> echo 257000000 > /sys/class/devfreq/5000000.gpu/userspace/set_freq
> echo 710000000 > /sys/class/devfreq/5000000.gpu/userspace/set_freq
> done
>
> It took 19 iterations of this loop to crash the GPU.

I assume you still had aquarium running, to keep the gpu awake while
you ran that loop?

Fwiw, I modified this slightly to match sc7180's min/max gpu freq and
could not trigger any issue.. interestingly sc7180 has a lower min
freq (180MHz) and a higher max freq (800MHz), so it was toggling over a wider
freq range. I also tried on a device that had the higher 825MHz opp
(since I noticed that was the only opp that used
RPMH_REGULATOR_LEVEL_TURBO_L1 and wanted to rule that out), but could
not reproduce.
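
For reference, a variant of the loop that just picks up whatever
min/max the devfreq node advertises (available_frequencies is the
standard devfreq attribute), so the same script can be reused across
sc7180/sdm845 without hardcoding the frequencies:

gpu=/sys/class/devfreq/5000000.gpu    # adjust for the platform
freqs=$(cat $gpu/available_frequencies)
min=$(echo $freqs | tr ' ' '\n' | sort -n | head -1)
max=$(echo $freqs | tr ' ' '\n' | sort -n | tail -1)

echo userspace > $gpu/governor
while true; do
    echo $min > $gpu/userspace/set_freq
    echo $max > $gpu/userspace/set_freq
done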

I guess a630 (sdm845) should have higher power draw (it has 2x the
shader cores and 2x the GMEM size, but a lower max freq).. the question
is, is this the reason we see this on sdm845 and not sc7180? Or is there
some other difference? On the gpu side of this, they are both closely
related (ie. the same "sub-generation" of a6xx, same gmu fw, etc)..
I'm less sure about the other parts (icc, rpmh, etc)

BR,
-R

> So the problem doesn't seem to be Rob's change, it's just that prior to
> it the chance to hitting it is way lower. Question is still what it is
> that we're triggering.
>
> Regards,
> Bjorn

2021-09-10 17:20:08

by Rob Clark

[permalink] [raw]
Subject: Re: [PATCH] drm/msm: Disable frequency clamping on a630

On Thu, Sep 9, 2021 at 1:54 PM Rob Clark <[email protected]> wrote:
>
> On Thu, Sep 9, 2021 at 12:50 PM Akhil P Oommen <[email protected]> wrote:
> >
> > On 9/9/2021 9:42 PM, Amit Pundir wrote:
> > > On Thu, 9 Sept 2021 at 17:47, Amit Pundir <[email protected]> wrote:
> > >>
> > >> On Wed, 8 Sept 2021 at 07:50, Bjorn Andersson
> > >> <[email protected]> wrote:
> > >>>
> > >>> On Mon 09 Aug 10:26 PDT 2021, Akhil P Oommen wrote:
> > >>>
> > >>>> On 8/9/2021 9:48 PM, Caleb Connolly wrote:
> > >>>>>
> > >>>>>
> > >>>>> On 09/08/2021 17:12, Rob Clark wrote:
> > >>>>>> On Mon, Aug 9, 2021 at 7:52 AM Akhil P Oommen
> > >>>>>> <[email protected]> wrote:
> > >>> [..]
> > >>>>>>> I am a bit confused. We don't define a power domain for gpu in dt,
> > >>>>>>> correct? Then what exactly set_opp do here? Do you think this usleep is
> > >>>>>>> what is helping here somehow to mask the issue?
> > >>>>> The power domains (for cx and gx) are defined in the GMU DT, the OPPs in
> > >>>>> the GPU DT. For the sake of simplicity I'll refer to the lowest
> > >>>>> frequency (257000000) and OPP level (RPMH_REGULATOR_LEVEL_LOW_SVS) as
> > >>>>> the "min" state, and the highest frequency (710000000) and OPP level
> > >>>>> (RPMH_REGULATOR_LEVEL_TURBO_L1) as the "max" state. These are defined in
> > >>>>> sdm845.dtsi under the gpu node.
> > >>>>>
> > >>>>> The new devfreq behaviour unmasks what I think is a driver bug, it
> > >>>>> inadvertently puts much more strain on the GPU regulators than they
> > >>>>> usually get. With the new behaviour the GPU jumps from it's min state to
> > >>>>> the max state and back again extremely rapidly under workloads as small
> > >>>>> as refreshing UI. Where previously the GPU would rarely if ever go above
> > >>>>> 342MHz when interacting with the device, it now jumps between min and
> > >>>>> max many times per second.
> > >>>>>
> > >>>>> If my understanding is correct, the current implementation of the GMU
> > >>>>> set freq is the following:
> > >>>>> - Get OPP for frequency to set
> > >>>>> - Push the frequency to the GMU - immediately updating the core clock
> > >>>>> - Call dev_pm_opp_set_opp() which triggers a notify chain, this winds
> > >>>>> up somewhere in power management code and causes the gx regulator level
> > >>>>> to be updated
> > >>>>
> > >>>> Nope. dev_pm_opp_set_opp() sets the bandwidth for gpu and nothing else. We
> > >>>> were using a different api earlier which got deprecated -
> > >>>> dev_pm_opp_set_bw().
> > >>>>
> > >>>
> > >>> On the Lenovo Yoga C630 this is reproduced by starting alacritty and if
> > >>> I'm lucky I managed to hit a few keys before it crashes, so I spent a
> > >>> few hours looking into this as well...
> > >>>
> > >>> As you say, the dev_pm_opp_set_opp() will only cast an interconnect vote.
> > >>> The opp-level is just there for show and isn't used by anything, at
> > >>> least not on 845.
> > >>>
> > >>> Furthermore, I'm missing something in my tree, so the interconnect
> > >>> doesn't hit sync_state, and as such we're not actually scaling the
> > >>> buses. So the problem is not that Linux doesn't turn on the buses in
> > >>> time.
> > >>>
> > >>> So I suspect that the "AHB bus error" isn't saying that we turned off
> > >>> the bus, but rather that the GPU becomes unstable or something of that
> > >>> sort.
> > >>>
> > >>>
> > >>> Lastly, I reverted 9bc95570175a ("drm/msm: Devfreq tuning") and ran
> > >>> Aquarium for 20 minutes without a problem. I then switched the gpu
> > >>> devfreq governor to "userspace" and ran the following:
> > >>>
> > >>> while true; do
> > >>> echo 257000000 > /sys/class/devfreq/5000000.gpu/userspace/set_freq
> > >>> echo 710000000 > /sys/class/devfreq/5000000.gpu/userspace/set_freq
> > >>> done
> > >>>
> > >>> It took 19 iterations of this loop to crash the GPU.
> > >>
> > >> Ack. With your above script, I can reproduce a crash too on db845c
> > >> (A630) running v5.14. I didn't get any crash log though, and the device
> > >> just rebooted to USB crash mode.
> > >>
> > >> And same crash on RB5 (A650) too https://hastebin.com/raw/ejutetuwun
> >
> > Are we sure this is the same issue? It could be, but I thought we were
> > seeing a bunch of random gpu errors (which may eventually lead to a device crash).
>
> In the sense that an async SError often seems to be a clk issue, it
> *could* be related.. but this would have to be triggered by CPU
> access. The symptom does seem very different.
>

The more I think about it, the more I think this is a different
issue.. a650 is somewhat different wrt gmu (ie. hfi vs legacy code
paths).

Amit, could you try the same experiment (with 9bc95570175a ("drm/msm:
Devfreq tuning") reverted) while running something like webgl aquarium
to prevent the GPU from suspending? I'm kinda suspecting that the issue
you hit is more likely a suspend/resume issue.
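
Concretely, something like this (just a sketch; check that the revert
applies cleanly on your tree, and substitute RB5's own min/max opps and
devfreq node name when testing on a650):

   git revert 9bc95570175a    # "drm/msm: Devfreq tuning"
   # rebuild + boot, start webgl aquarium (or any GL client) and leave
   # it running, then in another shell:
   echo userspace > /sys/class/devfreq/5000000.gpu/governor
   # ... followed by the set_freq toggle loop from earlier in the thread

If it only crashes when the GPU is allowed to suspend between runs,
that would point at suspend/resume rather than the freq transitions
themselves.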

BR,
-R

2021-09-10 17:35:36

by Caleb Connolly

[permalink] [raw]
Subject: Re: [PATCH] drm/msm: Disable frequency clamping on a630



On 10/09/2021 18:18, Rob Clark wrote:
> On Tue, Sep 7, 2021 at 7:20 PM Bjorn Andersson
> <[email protected]> wrote:
>>
>> On Mon 09 Aug 10:26 PDT 2021, Akhil P Oommen wrote:
>>
>>> On 8/9/2021 9:48 PM, Caleb Connolly wrote:
>>>>
>>>>
>>>> On 09/08/2021 17:12, Rob Clark wrote:
>>>>> On Mon, Aug 9, 2021 at 7:52 AM Akhil P Oommen
>>>>> <[email protected]> wrote:
>> [..]
>>>>>> I am a bit confused. We don't define a power domain for gpu in dt,
>>>>>> correct? Then what exactly does set_opp do here? Do you think this usleep is
>>>>>> what is helping here somehow to mask the issue?
>>>> The power domains (for cx and gx) are defined in the GMU DT, the OPPs in
>>>> the GPU DT. For the sake of simplicity I'll refer to the lowest
>>>> frequency (257000000) and OPP level (RPMH_REGULATOR_LEVEL_LOW_SVS) as
>>>> the "min" state, and the highest frequency (710000000) and OPP level
>>>> (RPMH_REGULATOR_LEVEL_TURBO_L1) as the "max" state. These are defined in
>>>> sdm845.dtsi under the gpu node.
>>>>
>>>> The new devfreq behaviour unmasks what I think is a driver bug, it
>>>> inadvertently puts much more strain on the GPU regulators than they
>>>> usually get. With the new behaviour the GPU jumps from its min state to
>>>> the max state and back again extremely rapidly under workloads as small
>>>> as refreshing UI. Where previously the GPU would rarely if ever go above
>>>> 342MHz when interacting with the device, it now jumps between min and
>>>> max many times per second.
>>>>
>>>> If my understanding is correct, the current implementation of the GMU
>>>> set freq is the following:
>>>> - Get OPP for frequency to set
>>>> - Push the frequency to the GMU - immediately updating the core clock
>>>> - Call dev_pm_opp_set_opp() which triggers a notify chain, this winds
>>>> up somewhere in power management code and causes the gx regulator level
>>>> to be updated
>>>
>>> Nope. dev_pm_opp_set_opp() sets the bandwidth for gpu and nothing else. We
>>> were using a different api earlier which got deprecated -
>>> dev_pm_opp_set_bw().
>>>
>>
>> On the Lenovo Yoga C630 this is reproduced by starting alacritty and if
>> I'm lucky I managed to hit a few keys before it crashes, so I spent a
>> few hours looking into this as well...
>>
>> As you say, the dev_pm_opp_set_opp() will only cast an interconnect vote.
>> The opp-level is just there for show and isn't used by anything, at
>> least not on 845.
>>
>> Furthermore, I'm missing something in my tree, so the interconnect
>> doesn't hit sync_state, and as such we're not actually scaling the
>> buses. So the problem is not that Linux doesn't turn on the buses in
>> time.
>>
>> So I suspect that the "AHB bus error" isn't saying that we turned off
>> the bus, but rather that the GPU becomes unstable or something of that
>> sort.
>>
>>
>> Lastly, I reverted 9bc95570175a ("drm/msm: Devfreq tuning") and ran
>> Aquarium for 20 minutes without a problem. I then switched the gpu
>> devfreq governor to "userspace" and ran the following:
>>
>> while true; do
>> echo 257000000 > /sys/class/devfreq/5000000.gpu/userspace/set_freq
>> echo 710000000 > /sys/class/devfreq/5000000.gpu/userspace/set_freq
>> done
>>
>> It took 19 iterations of this loop to crash the GPU.
>
> I assume you still had aquarium running, to keep the gpu awake while
> you ran that loop?
>
> Fwiw, I modified this slightly to match sc7180's min/max gpu freq and
> could not trigger any issue.. interestingly sc7180 has a lower min
> freq (180) and higher max freq (800) so it was toggling over a wider
> freq range. I also tried on a device that had the higher 825MHz opp
> (since I noticed that was the only opp that used
> RPMH_REGULATOR_LEVEL_TURBO_L1 and wanted to rule that out), but could
> not reproduce.
>
> I guess a630 (sdm845) should have higher power draw (it is 2x # of
> shader cores and 2x GMEM size, but lower max freq).. the question is,
> is this the reason we see this on sdm845 and not sc7180? Or is there
> some other difference. On the gpu side of this, they are both closely
> related (ie. the same "sub-generation" of a6xx, same gmu fw, etc)..
> I'm less sure about the other parts (icc, rpmh, etc)

My guess would be power draw. Nobody has mentioned this yet, but I've realised that the vdd_gfx rail is powered by a buck
converter, which could explain a lot of the symptoms.

Buck converters depend on high frequency switching and inductors to work; this inherently leads to some lag when
changing voltages, and it also means that the behaviour of the regulator depends in part on how much current is being
drawn. Wikipedia has a pretty good explanation: https://en.wikipedia.org/wiki/Buck_converter

Even at the best of times these regulators have a known voltage ripple; under load, and when rapidly switching voltages,
this will get a lot worse.

Someone with an oscilloscope and schematics could probe the rail and probably see exactly what's going on when the GPU
crashes. Because of the lag time in the regulator changing voltage, it might be undershooting whilst the GPU is trying
to clock up and draw more current - causing instability and crashes.
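
For a rough sense of scale (these are textbook approximations, not numbers
for the pm8005 specifically): the steady-state output ripple of an ideal
buck is about

   dV_ripple ~= dI_L / (8 * f_sw * C_out)

and the droop on a fast load step is often estimated as

   dV_droop ~= dI_load / (2 * pi * f_c * C_out) + ESR * dI_load

where f_sw is the switching frequency, f_c the control loop crossover
frequency and C_out the output capacitance. A large, fast jump in GPU
current draw arriving right as the requested voltage is also changing is
pretty much the worst case for this kind of regulator.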
>
> BR,
> -R
>
>> So the problem doesn't seem to be Rob's change, it's just that prior to
>> it the chance of hitting it is way lower. Question is still what it is
>> that we're triggering.
>>
>> Regards,
>> Bjorn

--
Kind Regards,
Caleb (they/them)

2021-09-13 06:18:12

by Akhil P Oommen

[permalink] [raw]
Subject: Re: [PATCH] drm/msm: Disable frequency clamping on a630

On 9/10/2021 11:04 PM, Caleb Connolly wrote:
>
>
> On 10/09/2021 18:18, Rob Clark wrote:
>> On Tue, Sep 7, 2021 at 7:20 PM Bjorn Andersson
>> <[email protected]> wrote:
>>>
>>> On Mon 09 Aug 10:26 PDT 2021, Akhil P Oommen wrote:
>>>
>>>> On 8/9/2021 9:48 PM, Caleb Connolly wrote:
>>>>>
>>>>>
>>>>> On 09/08/2021 17:12, Rob Clark wrote:
>>>>>> On Mon, Aug 9, 2021 at 7:52 AM Akhil P Oommen
>>>>>> <[email protected]> wrote:
>>> [..]
>>>>>>> I am a bit confused. We don't define a power domain for gpu in dt,
>>>>>>> correct? Then what exactly does set_opp do here? Do you think this
>>>>>>> usleep is
>>>>>>> what is helping here somehow to mask the issue?
>>>>> The power domains (for cx and gx) are defined in the GMU DT, the
>>>>> OPPs in
>>>>> the GPU DT. For the sake of simplicity I'll refer to the lowest
>>>>> frequency (257000000) and OPP level (RPMH_REGULATOR_LEVEL_LOW_SVS) as
>>>>> the "min" state, and the highest frequency (710000000) and OPP level
>>>>> (RPMH_REGULATOR_LEVEL_TURBO_L1) as the "max" state. These are
>>>>> defined in
>>>>> sdm845.dtsi under the gpu node.
>>>>>
>>>>> The new devfreq behaviour unmasks what I think is a driver bug, it
>>>>> inadvertently puts much more strain on the GPU regulators than they
>>>>> usually get. With the new behaviour the GPU jumps from its min
>>>>> state to
>>>>> the max state and back again extremely rapidly under workloads as
>>>>> small
>>>>> as refreshing UI. Where previously the GPU would rarely if ever go
>>>>> above
>>>>> 342MHz when interacting with the device, it now jumps between min and
>>>>> max many times per second.
>>>>>
>>>>> If my understanding is correct, the current implementation of the GMU
>>>>> set freq is the following:
>>>>>    - Get OPP for frequency to set
>>>>>    - Push the frequency to the GMU - immediately updating the core
>>>>> clock
>>>>>    - Call dev_pm_opp_set_opp() which triggers a notify chain, this
>>>>> winds
>>>>> up somewhere in power management code and causes the gx regulator
>>>>> level
>>>>> to be updated
>>>>
>>>> Nope. dev_pm_opp_set_opp() sets the bandwidth for gpu and nothing
>>>> else. We
>>>> were using a different api earlier which got deprecated -
>>>> dev_pm_opp_set_bw().
>>>>
>>>
>>> On the Lenovo Yoga C630 this is reproduced by starting alacritty and if
>>> I'm lucky I managed to hit a few keys before it crashes, so I spent a
>>> few hours looking into this as well...
>>>
>>> As you say, the dev_pm_opp_set_opp() will only cast an interconnect vote.
>>> The opp-level is just there for show and isn't used by anything, at
>>> least not on 845.
>>>
>>> Furthermore, I'm missing something in my tree, so the interconnect
>>> doesn't hit sync_state, and as such we're not actually scaling the
>>> buses. So the problem is not that Linux doesn't turn on the buses in
>>> time.
>>>
>>> So I suspect that the "AHB bus error" isn't saying that we turned off
>>> the bus, but rather that the GPU becomes unstable or something of that
>>> sort.
>>>
>>>
>>> Lastly, I reverted 9bc95570175a ("drm/msm: Devfreq tuning") and ran
>>> Aquarium for 20 minutes without a problem. I then switched the gpu
>>> devfreq governor to "userspace" and ran the following:
>>>
>>> while true; do
>>>    echo 257000000 > /sys/class/devfreq/5000000.gpu/userspace/set_freq
>>>    echo 710000000 > /sys/class/devfreq/5000000.gpu/userspace/set_freq
>>> done
>>>
>>> It took 19 iterations of this loop to crash the GPU.
>>
>> I assume you still had aquarium running, to keep the gpu awake while
>> you ran that loop?
>>
>> Fwiw, I modified this slightly to match sc7180's min/max gpu freq and
>> could not trigger any issue.. interestingly sc7180 has a lower min
>> freq (180) and higher max freq (800) so it was toggling over a wider
>> freq range. I also tried on a device that had the higher 825MHz opp
>> (since I noticed that was the only opp that used
>> RPMH_REGULATOR_LEVEL_TURBO_L1 and wanted to rule that out), but could
>> not reproduce.
>>
>> I guess a630 (sdm845) should have higher power draw (it is 2x # of
>> shader cores and 2x GMEM size, but lower max freq).. the question is,
>> is this the reason we see this on sdm845 and not sc7180?  Or is there
>> some other difference.  On the gpu side of this, they are both closely
>> related (ie. the same "sub-generation" of a6xx, same gmu fw, etc)..
>> I'm less sure about the other parts (icc, rpmh, etc)
>
> My guess would be power draw. Nobody has mentioned this yet, but I've
> realised that the vdd_gfx rail is powered by a buck converter, which
> could explain a lot of the symptoms.
>
> Buck converters depend on high frequency switching and inductors to
> work; this inherently leads to some lag when changing voltages, and it
> also means that the behaviour of the regulator depends in part on how
> much current is being drawn. Wikipedia has a pretty good explanation:
> https://en.wikipedia.org/wiki/Buck_converter
>
> Even at the best of times these regulators have a known voltage ripple;
> under load, and when rapidly switching voltages, this will get a lot worse.
>
> Someone with an oscilloscope and schematics could probe the rail and
> probably see exactly what's going on when the GPU crashes. Because of
> the lag time in the regulator changing voltage, it might be
> undershooting whilst the GPU is trying to clock up and draw more current
> - causing instability and crashes.

Both of you are correct. The GPU is very similar, including the GMU (we
have the same fw for both), except for the GBIF block. As far as I am
aware, the non-gpu blocks within the SoC should be similar except for
the configs.

And yes, for this sort of issue where we suspect a power problem, the gx
rail should be probed for droops using a very high resolution
oscilloscope (these droops might last less than 1us).

I am aware of only the Dragonboard still being alive from a QC
perspective. Can someone report this issue to the DB support team, as it
is fairly easy to reproduce?

-Akhil.

>>
>> BR,
>> -R
>>
>>> So the problem doesn't seem to be Rob's change, it's just that prior to
>>> it the chance of hitting it is way lower. Question is still what it is
>>> that we're triggering.
>>>
>>> Regards,
>>> Bjorn
>