2022-10-05 10:07:27

by Akhil P Oommen

Subject: [PATCH v7 0/6] clk/qcom: Support gdsc collapse polling using 'reset' interface


Some clients, like the adreno gpu driver, would like to ensure that their gdsc
is collapsed in hardware during a gpu reset sequence. This is because the gpu
has a votable gdsc which could be kept ON due to a vote from another subsystem
like tz, hyp etc or due to an internal hardware signal. To allow
this, the gpucc driver can expose an interface to the client driver using the
reset framework. Using this, the client driver can trigger polling within
the gdsc driver.
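[Editor's note: for illustration, the consumer side described above could look roughly like this in the gpu driver's recovery path. This is a hedged sketch: the reset name "cx_collapse" and the error handling are assumptions for illustration, not lifted from the series.]

```c
#include <linux/device.h>
#include <linux/reset.h>

/* Sketch of the consumer side: during GPU recovery, trigger the
 * polling that the gpucc driver exposes through the reset framework.
 * The reset name "cx_collapse" is an assumption for illustration. */
static int gpu_wait_for_cx_collapse(struct device *dev)
{
	struct reset_control *rst;
	int ret;

	rst = devm_reset_control_get_exclusive(dev, "cx_collapse");
	if (IS_ERR(rst))
		return PTR_ERR(rst);

	/* reset_control_reset() ends up in the gpucc reset op, which
	 * polls until the CX gdsc has collapsed (or times out). */
	ret = reset_control_reset(rst);
	if (ret)
		dev_err(dev, "CX gdsc did not collapse: %d\n", ret);

	return ret;
}
```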

This series is rebased on top of qcom/linux:for-next branch.

Related discussion: https://patchwork.freedesktop.org/patch/493144/

Changes in v7:
- Update commit message (Bjorn)
- Rebased on top of qcom/linux:for-next branch.

Changes in v6:
- No code changes in this version. Just captured the Acked-by tags

Changes in v5:
- Nit: Remove a duplicate blank line (Krzysztof)

Changes in v4:
- Update gpu dt-binding schema
- Typo fix in commit text

Changes in v3:
- Use pointer to const for "struct qcom_reset_ops" in qcom_reset_map (Krzysztof)

Changes in v2:
- Return error when a particular custom reset op is not implemented. (Dmitry)

Akhil P Oommen (6):
dt-bindings: clk: qcom: Support gpu cx gdsc reset
clk: qcom: Allow custom reset ops
clk: qcom: gdsc: Add a reset op to poll gdsc collapse
clk: qcom: gpucc-sc7280: Add cx collapse reset support
dt-bindings: drm/msm/gpu: Add optional resets
arm64: dts: qcom: sc7280: Add Reset support for gpu

.../devicetree/bindings/display/msm/gpu.yaml | 6 +++++
arch/arm64/boot/dts/qcom/sc7280.dtsi | 3 +++
drivers/clk/qcom/gdsc.c | 23 ++++++++++++++----
drivers/clk/qcom/gdsc.h | 7 ++++++
drivers/clk/qcom/gpucc-sc7280.c | 10 ++++++++
drivers/clk/qcom/reset.c | 27 +++++++++++++++++++++-
drivers/clk/qcom/reset.h | 8 +++++++
include/dt-bindings/clock/qcom,gpucc-sc7280.h | 3 +++
8 files changed, 82 insertions(+), 5 deletions(-)

--
2.7.4


2022-11-07 17:49:47

by Akhil P Oommen

Subject: Re: [PATCH v7 0/6] clk/qcom: Support gdsc collapse polling using 'reset' interface

On 10/5/2022 2:36 PM, Akhil P Oommen wrote:
> Some clients like adreno gpu driver would like to ensure that its gdsc
> is collapsed at hardware during a gpu reset sequence. This is because it
> has a votable gdsc which could be ON due to a vote from another subsystem
> like tz, hyp etc or due to an internal hardware signal. To allow
> this, gpucc driver can expose an interface to the client driver using
> reset framework. Using this the client driver can trigger a polling within
> the gdsc driver.
>
> This series is rebased on top of qcom/linux:for-next branch.
>
> Related discussion: https://patchwork.freedesktop.org/patch/493144/
>
> Changes in v7:
> - Update commit message (Bjorn)
> - Rebased on top of qcom/linux:for-next branch.
>
> Changes in v6:
> - No code changes in this version. Just captured the Acked-by tags
>
> Changes in v5:
> - Nit: Remove a duplicate blank line (Krzysztof)
>
> Changes in v4:
> - Update gpu dt-binding schema
> - Typo fix in commit text
>
> Changes in v3:
> - Use pointer to const for "struct qcom_reset_ops" in qcom_reset_map (Krzysztof)
>
> Changes in v2:
> - Return error when a particular custom reset op is not implemented. (Dmitry)
>
> Akhil P Oommen (6):
> dt-bindings: clk: qcom: Support gpu cx gdsc reset
> clk: qcom: Allow custom reset ops
> clk: qcom: gdsc: Add a reset op to poll gdsc collapse
> clk: qcom: gpucc-sc7280: Add cx collapse reset support
> dt-bindings: drm/msm/gpu: Add optional resets
> arm64: dts: qcom: sc7280: Add Reset support for gpu
>
> .../devicetree/bindings/display/msm/gpu.yaml | 6 +++++
> arch/arm64/boot/dts/qcom/sc7280.dtsi | 3 +++
> drivers/clk/qcom/gdsc.c | 23 ++++++++++++++----
> drivers/clk/qcom/gdsc.h | 7 ++++++
> drivers/clk/qcom/gpucc-sc7280.c | 10 ++++++++
> drivers/clk/qcom/reset.c | 27 +++++++++++++++++++++-
> drivers/clk/qcom/reset.h | 8 +++++++
> include/dt-bindings/clock/qcom,gpucc-sc7280.h | 3 +++
> 8 files changed, 82 insertions(+), 5 deletions(-)
>
Bjorn,

The latest patchset has been on the mailing list for over a month now.
Could you please share how soon this can be picked up? That would give me
some confidence to pull these patches into our chromeos kernel tree ASAP.

-Akhil.

2022-12-01 23:11:48

by Bjorn Andersson

Subject: Re: [PATCH v7 0/6] clk/qcom: Support gdsc collapse polling using 'reset' interface

On Wed, Oct 05, 2022 at 02:36:58PM +0530, Akhil P Oommen wrote:
>

@Ulf, Akhil has a power-domain for a piece of hardware which may be
voted active by multiple different subsystems (co-processors/execution
contexts) in the system.

As such, during the powering down sequence we don't wait for the
power-domain to turn off. But in the event of an error, the recovery
mechanism relies on waiting for the hardware to settle in a powered off
state.

The proposal here is to use the reset framework to wait for this state
to be reached, before continuing with the recovery mechanism in the
client driver.

Given our other discussions on quirky behavior, do you have any
input/suggestions on this?

> Some clients like adreno gpu driver would like to ensure that its gdsc
> is collapsed at hardware during a gpu reset sequence. This is because it
> has a votable gdsc which could be ON due to a vote from another subsystem
> like tz, hyp etc or due to an internal hardware signal. To allow
> this, gpucc driver can expose an interface to the client driver using
> reset framework. Using this the client driver can trigger a polling within
> the gdsc driver.

@Akhil, this description is fairly generic. As we've reached the state
where the hardware has settled and we return to the client, what
prevents it from being powered up again?

Or is it simply a question of it hitting the powered-off state, not
necessarily staying there?

Regards,
Bjorn

>
> This series is rebased on top of qcom/linux:for-next branch.
>
> Related discussion: https://patchwork.freedesktop.org/patch/493144/
>
> Changes in v7:
> - Update commit message (Bjorn)
> - Rebased on top of qcom/linux:for-next branch.
>
> Changes in v6:
> - No code changes in this version. Just captured the Acked-by tags
>
> Changes in v5:
> - Nit: Remove a duplicate blank line (Krzysztof)
>
> Changes in v4:
> - Update gpu dt-binding schema
> - Typo fix in commit text
>
> Changes in v3:
> - Use pointer to const for "struct qcom_reset_ops" in qcom_reset_map (Krzysztof)
>
> Changes in v2:
> - Return error when a particular custom reset op is not implemented. (Dmitry)
>
> Akhil P Oommen (6):
> dt-bindings: clk: qcom: Support gpu cx gdsc reset
> clk: qcom: Allow custom reset ops
> clk: qcom: gdsc: Add a reset op to poll gdsc collapse
> clk: qcom: gpucc-sc7280: Add cx collapse reset support
> dt-bindings: drm/msm/gpu: Add optional resets
> arm64: dts: qcom: sc7280: Add Reset support for gpu
>
> .../devicetree/bindings/display/msm/gpu.yaml | 6 +++++
> arch/arm64/boot/dts/qcom/sc7280.dtsi | 3 +++
> drivers/clk/qcom/gdsc.c | 23 ++++++++++++++----
> drivers/clk/qcom/gdsc.h | 7 ++++++
> drivers/clk/qcom/gpucc-sc7280.c | 10 ++++++++
> drivers/clk/qcom/reset.c | 27 +++++++++++++++++++++-
> drivers/clk/qcom/reset.h | 8 +++++++
> include/dt-bindings/clock/qcom,gpucc-sc7280.h | 3 +++
> 8 files changed, 82 insertions(+), 5 deletions(-)
>
> --
> 2.7.4
>

2022-12-02 07:38:23

by Akhil P Oommen

Subject: Re: [Freedreno] [PATCH v7 0/6] clk/qcom: Support gdsc collapse polling using 'reset' interface

On 12/2/2022 4:27 AM, Bjorn Andersson wrote:
> On Wed, Oct 05, 2022 at 02:36:58PM +0530, Akhil P Oommen wrote:
> @Ulf, Akhil has a power-domain for a piece of hardware which may be
> voted active by multiple different subsystems (co-processors/execution
> contexts) in the system.
>
> As such, during the powering down sequence we don't wait for the
> power-domain to turn off. But in the event of an error, the recovery
> mechanism relies on waiting for the hardware to settle in a powered off
> state.
>
> The proposal here is to use the reset framework to wait for this state
> to be reached, before continuing with the recovery mechanism in the
> client driver.
>
> Given our other discussions on quirky behavior, do you have any
> input/suggestions on this?
>
>> Some clients like adreno gpu driver would like to ensure that its gdsc
>> is collapsed at hardware during a gpu reset sequence. This is because it
>> has a votable gdsc which could be ON due to a vote from another subsystem
>> like tz, hyp etc or due to an internal hardware signal. To allow
>> this, gpucc driver can expose an interface to the client driver using
>> reset framework. Using this the client driver can trigger a polling within
>> the gdsc driver.
> @Akhil, this description is fairly generic. As we've reached the state
> where the hardware has settled and we return to the client, what
> prevents it from being powered up again?
>
> Or is it simply a question of it hitting the powered-off state, not
> necessarily staying there?
Correct. It doesn't need to stay there. The intention is to hit the powered-off state at least once to clear all the internal hw states (basically a hw reset).

-Akhil.
>
> Regards,
> Bjorn
>
>> This series is rebased on top of qcom/linux:for-next branch.
>>
>> Related discussion: https://patchwork.freedesktop.org/patch/493144/
>>
>> Changes in v7:
>> - Update commit message (Bjorn)
>> - Rebased on top of qcom/linux:for-next branch.
>>
>> Changes in v6:
>> - No code changes in this version. Just captured the Acked-by tags
>>
>> Changes in v5:
>> - Nit: Remove a duplicate blank line (Krzysztof)
>>
>> Changes in v4:
>> - Update gpu dt-binding schema
>> - Typo fix in commit text
>>
>> Changes in v3:
>> - Use pointer to const for "struct qcom_reset_ops" in qcom_reset_map (Krzysztof)
>>
>> Changes in v2:
>> - Return error when a particular custom reset op is not implemented. (Dmitry)
>>
>> Akhil P Oommen (6):
>> dt-bindings: clk: qcom: Support gpu cx gdsc reset
>> clk: qcom: Allow custom reset ops
>> clk: qcom: gdsc: Add a reset op to poll gdsc collapse
>> clk: qcom: gpucc-sc7280: Add cx collapse reset support
>> dt-bindings: drm/msm/gpu: Add optional resets
>> arm64: dts: qcom: sc7280: Add Reset support for gpu
>>
>> .../devicetree/bindings/display/msm/gpu.yaml | 6 +++++
>> arch/arm64/boot/dts/qcom/sc7280.dtsi | 3 +++
>> drivers/clk/qcom/gdsc.c | 23 ++++++++++++++----
>> drivers/clk/qcom/gdsc.h | 7 ++++++
>> drivers/clk/qcom/gpucc-sc7280.c | 10 ++++++++
>> drivers/clk/qcom/reset.c | 27 +++++++++++++++++++++-
>> drivers/clk/qcom/reset.h | 8 +++++++
>> include/dt-bindings/clock/qcom,gpucc-sc7280.h | 3 +++
>> 8 files changed, 82 insertions(+), 5 deletions(-)
>>
>> --
>> 2.7.4
>>

2022-12-06 20:08:37

by Akhil P Oommen

Subject: Re: [Freedreno] [PATCH v7 0/6] clk/qcom: Support gdsc collapse polling using 'reset' interface

On 12/2/2022 12:30 PM, Akhil P Oommen wrote:
> On 12/2/2022 4:27 AM, Bjorn Andersson wrote:
>> On Wed, Oct 05, 2022 at 02:36:58PM +0530, Akhil P Oommen wrote:
>> @Ulf, Akhil has a power-domain for a piece of hardware which may be
>> voted active by multiple different subsystems (co-processors/execution
>> contexts) in the system.
>>
>> As such, during the powering down sequence we don't wait for the
>> power-domain to turn off. But in the event of an error, the recovery
>> mechanism relies on waiting for the hardware to settle in a powered off
>> state.
>>
>> The proposal here is to use the reset framework to wait for this state
>> to be reached, before continuing with the recovery mechanism in the
>> client driver.
>>
>> Given our other discussions on quirky behavior, do you have any
>> input/suggestions on this?
Ulf,

Gentle ping! Could you please share your feedback?

-Akhil.
>>
>>> Some clients like adreno gpu driver would like to ensure that its gdsc
>>> is collapsed at hardware during a gpu reset sequence. This is because it
>>> has a votable gdsc which could be ON due to a vote from another subsystem
>>> like tz, hyp etc or due to an internal hardware signal. To allow
>>> this, gpucc driver can expose an interface to the client driver using
>>> reset framework. Using this the client driver can trigger a polling within
>>> the gdsc driver.
>> @Akhil, this description is fairly generic. As we've reached the state
>> where the hardware has settled and we return to the client, what
>> prevents it from being powered up again?
>>
>> Or is it simply a question of it hitting the powered-off state, not
>> necessarily staying there?
> Correct. It doesn't need to stay there. The intention is to hit the powered-off state at least once to clear all the internal hw states (basically a hw reset).
>
> -Akhil.
>> Regards,
>> Bjorn
>>
>>> This series is rebased on top of qcom/linux:for-next branch.
>>>
>>> Related discussion: https://patchwork.freedesktop.org/patch/493144/
>>>
>>> Changes in v7:
>>> - Update commit message (Bjorn)
>>> - Rebased on top of qcom/linux:for-next branch.
>>>
>>> Changes in v6:
>>> - No code changes in this version. Just captured the Acked-by tags
>>>
>>> Changes in v5:
>>> - Nit: Remove a duplicate blank line (Krzysztof)
>>>
>>> Changes in v4:
>>> - Update gpu dt-binding schema
>>> - Typo fix in commit text
>>>
>>> Changes in v3:
>>> - Use pointer to const for "struct qcom_reset_ops" in qcom_reset_map (Krzysztof)
>>>
>>> Changes in v2:
>>> - Return error when a particular custom reset op is not implemented. (Dmitry)
>>>
>>> Akhil P Oommen (6):
>>> dt-bindings: clk: qcom: Support gpu cx gdsc reset
>>> clk: qcom: Allow custom reset ops
>>> clk: qcom: gdsc: Add a reset op to poll gdsc collapse
>>> clk: qcom: gpucc-sc7280: Add cx collapse reset support
>>> dt-bindings: drm/msm/gpu: Add optional resets
>>> arm64: dts: qcom: sc7280: Add Reset support for gpu
>>>
>>> .../devicetree/bindings/display/msm/gpu.yaml | 6 +++++
>>> arch/arm64/boot/dts/qcom/sc7280.dtsi | 3 +++
>>> drivers/clk/qcom/gdsc.c | 23 ++++++++++++++----
>>> drivers/clk/qcom/gdsc.h | 7 ++++++
>>> drivers/clk/qcom/gpucc-sc7280.c | 10 ++++++++
>>> drivers/clk/qcom/reset.c | 27 +++++++++++++++++++++-
>>> drivers/clk/qcom/reset.h | 8 +++++++
>>> include/dt-bindings/clock/qcom,gpucc-sc7280.h | 3 +++
>>> 8 files changed, 82 insertions(+), 5 deletions(-)
>>>
>>> --
>>> 2.7.4
>>>

2022-12-07 16:21:30

by Ulf Hansson

Subject: Re: [PATCH v7 0/6] clk/qcom: Support gdsc collapse polling using 'reset' interface

On Thu, 1 Dec 2022 at 23:57, Bjorn Andersson <[email protected]> wrote:
>
> On Wed, Oct 05, 2022 at 02:36:58PM +0530, Akhil P Oommen wrote:
> >
>
> @Ulf, Akhil has a power-domain for a piece of hardware which may be
> voted active by multiple different subsystems (co-processors/execution
> contexts) in the system.
>
> As such, during the powering down sequence we don't wait for the
> power-domain to turn off. But in the event of an error, the recovery
> mechanism relies on waiting for the hardware to settle in a powered off
> state.
>
> The proposal here is to use the reset framework to wait for this state
> to be reached, before continuing with the recovery mechanism in the
> client driver.

I tried to review the series (see my other replies), but I am not sure
I fully understand the consumer part.

More exactly, when and who is going to pull the reset and at what point?

>
> Given our other discussions on quirky behavior, do you have any
> input/suggestions on this?
>
> > Some clients like adreno gpu driver would like to ensure that its gdsc
> > is collapsed at hardware during a gpu reset sequence. This is because it
> > has a votable gdsc which could be ON due to a vote from another subsystem
> > like tz, hyp etc or due to an internal hardware signal. To allow
> > this, gpucc driver can expose an interface to the client driver using
> > reset framework. Using this the client driver can trigger a polling within
> > the gdsc driver.
>
> @Akhil, this description is fairly generic. As we've reached the state
> where the hardware has settled and we return to the client, what
> prevents it from being powered up again?
>
> Or is it simply a question of it hitting the powered-off state, not
> necessarily staying there?

Okay, so it's indeed the GPU driver that is going to assert/de-assert
the reset at some point. Right?

That seems like a reasonable approach to me, even if it's a bit
unclear under what conditions that could happen.

[...]

Kind regards
Uffe

2022-12-07 16:59:34

by Bjorn Andersson

Subject: Re: [PATCH v7 0/6] clk/qcom: Support gdsc collapse polling using 'reset' interface

On Wed, Dec 07, 2022 at 05:00:51PM +0100, Ulf Hansson wrote:
> On Thu, 1 Dec 2022 at 23:57, Bjorn Andersson <[email protected]> wrote:
> >
> > On Wed, Oct 05, 2022 at 02:36:58PM +0530, Akhil P Oommen wrote:
> > >
> >
> > @Ulf, Akhil has a power-domain for a piece of hardware which may be
> > voted active by multiple different subsystems (co-processors/execution
> > contexts) in the system.
> >
> > As such, during the powering down sequence we don't wait for the
> > power-domain to turn off. But in the event of an error, the recovery
> > mechanism relies on waiting for the hardware to settle in a powered off
> > state.
> >
> > The proposal here is to use the reset framework to wait for this state
> > to be reached, before continuing with the recovery mechanism in the
> > client driver.
>
> I tried to review the series (see my other replies), but I am not sure
> I fully understand the consumer part.
>
> More exactly, when and who is going to pull the reset and at what point?
>
> >
> > Given our other discussions on quirky behavior, do you have any
> > input/suggestions on this?
> >
> > > Some clients like adreno gpu driver would like to ensure that its gdsc
> > > is collapsed at hardware during a gpu reset sequence. This is because it
> > > has a votable gdsc which could be ON due to a vote from another subsystem
> > > like tz, hyp etc or due to an internal hardware signal. To allow
> > > this, gpucc driver can expose an interface to the client driver using
> > > reset framework. Using this the client driver can trigger a polling within
> > > the gdsc driver.
> >
> > @Akhil, this description is fairly generic. As we've reached the state
> > where the hardware has settled and we return to the client, what
> > prevents it from being powered up again?
> >
> > Or is it simply a question of it hitting the powered-off state, not
> > necessarily staying there?
>
> Okay, so it's indeed the GPU driver that is going to assert/de-assert
> the reset at some point. Right?
>
> That seems like a reasonable approach to me, even if it's a bit
> unclear under what conditions that could happen.
>

Generally the disable-path of the power-domain does not check that the
power-domain is actually turned off, because the status might indicate
that the hardware is voting for the power-domain to be on.

As part of the recovery of the GPU after some fatal fault, the GPU
driver does something which will cause the hardware votes for the
power-domain to be let go, and then the driver does pm_runtime_put().

But in this case the GPU driver wants to ensure that the power-domain is
actually powered down, before it does pm_runtime_get() again. To ensure
that the hardware lost its state...

The proposal here is to use a reset to reach into the power-domain
provider and wait for the hardware to be turned off, before the GPU
driver attempts turning the power-domain on again.


In other words, there is no reset. This is a hack to make a normally
asynchronous pd.power_off() to be synchronous in this particular case.
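[Editor's note: concretely, the plumbing the series adds can be pictured like this. This is a sketch of the "custom reset ops" idea only; the struct layout, field names, and the priv pointer are illustrative, not the series' exact code.]

```c
/* Sketch: the qcom reset driver's per-reset map entry grows an optional
 * ops pointer. When set, "asserting" the reset does not toggle any
 * register bit; it synchronously waits for the gdsc to collapse.
 * All names here are illustrative. */
struct qcom_reset_ops {
	int (*reset)(void *priv);	/* e.g. poll CX gdsc collapse */
};

struct qcom_reset_map {
	unsigned int reg;
	u8 bit;
	const struct qcom_reset_ops *ops;	/* NULL => plain register toggle */
	void *priv;				/* e.g. the gdsc to poll */
};
```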

Regards,
Bjorn

2022-12-08 14:21:21

by Ulf Hansson

Subject: Re: [PATCH v7 0/6] clk/qcom: Support gdsc collapse polling using 'reset' interface

On Wed, 7 Dec 2022 at 17:55, Bjorn Andersson <[email protected]> wrote:
>
> On Wed, Dec 07, 2022 at 05:00:51PM +0100, Ulf Hansson wrote:
> > On Thu, 1 Dec 2022 at 23:57, Bjorn Andersson <[email protected]> wrote:
> > >
> > > On Wed, Oct 05, 2022 at 02:36:58PM +0530, Akhil P Oommen wrote:
> > > >
> > >
> > > @Ulf, Akhil has a power-domain for a piece of hardware which may be
> > > voted active by multiple different subsystems (co-processors/execution
> > > contexts) in the system.
> > >
> > > As such, during the powering down sequence we don't wait for the
> > > power-domain to turn off. But in the event of an error, the recovery
> > > mechanism relies on waiting for the hardware to settle in a powered off
> > > state.
> > >
> > > The proposal here is to use the reset framework to wait for this state
> > > to be reached, before continuing with the recovery mechanism in the
> > > client driver.
> >
> > I tried to review the series (see my other replies), but I am not sure
> > I fully understand the consumer part.
> >
> > More exactly, when and who is going to pull the reset and at what point?
> >
> > >
> > > Given our other discussions on quirky behavior, do you have any
> > > input/suggestions on this?
> > >
> > > > Some clients like adreno gpu driver would like to ensure that its gdsc
> > > > is collapsed at hardware during a gpu reset sequence. This is because it
> > > > has a votable gdsc which could be ON due to a vote from another subsystem
> > > > like tz, hyp etc or due to an internal hardware signal. To allow
> > > > this, gpucc driver can expose an interface to the client driver using
> > > > reset framework. Using this the client driver can trigger a polling within
> > > > the gdsc driver.
> > >
> > > @Akhil, this description is fairly generic. As we've reached the state
> > > where the hardware has settled and we return to the client, what
> > > prevents it from being powered up again?
> > >
> > > Or is it simply a question of it hitting the powered-off state, not
> > > necessarily staying there?
> >
> > Okay, so it's indeed the GPU driver that is going to assert/de-assert
> > the reset at some point. Right?
> >
> > That seems like a reasonable approach to me, even if it's a bit
> > unclear under what conditions that could happen.
> >
>
> Generally the disable-path of the power-domain does not check that the
> power-domain is actually turned off, because the status might indicate
> that the hardware is voting for the power-domain to be on.

Is there a good reason why the HW needs to vote too, when the GPU
driver is already in control?

Or perhaps that depends on the running use case?

>
> As part of the recovery of the GPU after some fatal fault, the GPU
> driver does something which will cause the hardware votes for the
> power-domain to be let go, and then the driver does pm_runtime_put().

Okay. That "something", sounds like a device specific setting for the
corresponding gdsc, right?

So somehow the GPU driver needs to manage that setting, right?

>
> But in this case the GPU driver wants to ensure that the power-domain is
> actually powered down, before it does pm_runtime_get() again. To ensure
> that the hardware lost its state...

I see.

>
> The proposal here is to use a reset to reach into the power-domain
> provider and wait for the hardware to be turned off, before the GPU
> driver attempts turning the power-domain on again.
>
>
> In other words, there is no reset. This is a hack to make a normally
> asynchronous pd.power_off() to be synchronous in this particular case.

Alright, assuming I understood your clarifications above correctly
(thanks!), I think I have got a much better picture now.

Rather than abusing the reset interface, I think we should manage this
through the genpd's power on/off notifiers (GENPD_NOTIFY_OFF). The GPU
driver should register its corresponding device for them
(dev_pm_genpd_add_notifier()).

The trick however, is to make the behaviour of the power-domain for
the gdsc (the genpd->power_off() callback) conditional on whether the
HW is configured to vote or not. If the HW can vote, it should not
poll for the state - and vice versa when the HW can't vote.

Would this work?
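[Editor's note: the notifier scheme suggested here might be wired up in the gpu driver roughly as follows. The genpd notifier API (dev_pm_genpd_add_notifier, GENPD_NOTIFY_OFF) is real; the completion-based waiting is an assumption about how the driver would consume the event.]

```c
#include <linux/completion.h>
#include <linux/notifier.h>
#include <linux/pm_domain.h>

static DECLARE_COMPLETION(cx_gdsc_collapsed);

static int gpu_pd_notify(struct notifier_block *nb,
			 unsigned long action, void *data)
{
	/* genpd fires GENPD_NOTIFY_OFF once the domain has really
	 * powered off, which is the event recovery wants to wait for. */
	if (action == GENPD_NOTIFY_OFF)
		complete_all(&cx_gdsc_collapsed);
	return NOTIFY_OK;
}

static struct notifier_block gpu_pd_nb = {
	.notifier_call = gpu_pd_notify,
};

static int gpu_register_pd_notifier(struct device *dev)
{
	/* dev must be attached to the CX power domain */
	return dev_pm_genpd_add_notifier(dev, &gpu_pd_nb);
}
```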

Kind regards
Uffe

2022-12-08 15:59:37

by Akhil P Oommen

Subject: Re: [PATCH v7 0/6] clk/qcom: Support gdsc collapse polling using 'reset' interface

On 12/8/2022 7:10 PM, Ulf Hansson wrote:
> On Wed, 7 Dec 2022 at 17:55, Bjorn Andersson <[email protected]> wrote:
>> On Wed, Dec 07, 2022 at 05:00:51PM +0100, Ulf Hansson wrote:
>>> On Thu, 1 Dec 2022 at 23:57, Bjorn Andersson <[email protected]> wrote:
>>>> On Wed, Oct 05, 2022 at 02:36:58PM +0530, Akhil P Oommen wrote:
>>>> @Ulf, Akhil has a power-domain for a piece of hardware which may be
>>>> voted active by multiple different subsystems (co-processors/execution
>>>> contexts) in the system.
>>>>
>>>> As such, during the powering down sequence we don't wait for the
>>>> power-domain to turn off. But in the event of an error, the recovery
>>>> mechanism relies on waiting for the hardware to settle in a powered off
>>>> state.
>>>>
>>>> The proposal here is to use the reset framework to wait for this state
>>>> to be reached, before continuing with the recovery mechanism in the
>>>> client driver.
>>> I tried to review the series (see my other replies), but I am not sure
>>> I fully understand the consumer part.
>>>
>>> More exactly, when and who is going to pull the reset and at what point?
>>>
>>>> Given our other discussions on quirky behavior, do you have any
>>>> input/suggestions on this?
>>>>
>>>>> Some clients like adreno gpu driver would like to ensure that its gdsc
>>>>> is collapsed at hardware during a gpu reset sequence. This is because it
>>>>> has a votable gdsc which could be ON due to a vote from another subsystem
>>>>> like tz, hyp etc or due to an internal hardware signal. To allow
>>>>> this, gpucc driver can expose an interface to the client driver using
>>>>> reset framework. Using this the client driver can trigger a polling within
>>>>> the gdsc driver.
>>>> @Akhil, this description is fairly generic. As we've reached the state
>>>> where the hardware has settled and we return to the client, what
>>>> prevents it from being powered up again?
>>>>
>>>> Or is it simply a question of it hitting the powered-off state, not
>>>> necessarily staying there?
>>> Okay, so it's indeed the GPU driver that is going to assert/de-assert
>>> the reset at some point. Right?
>>>
>>> That seems like a reasonable approach to me, even if it's a bit
>>> unclear under what conditions that could happen.
>>>
>> Generally the disable-path of the power-domain does not check that the
>> power-domain is actually turned off, because the status might indicate
>> that the hardware is voting for the power-domain to be on.
> Is there a good reason why the HW needs to vote too, when the GPU
> driver is already in control?
>
> Or perhaps that depends on the running use case?
This power domain can be voted ON by other subsystems outside the Linux kernel, like the secure OS, hypervisor etc., through separate vote registers. So it is not completely under the control of the Linux clk driver. The Linux clk driver can only vote for it to be kept ON, check its current status etc., but cannot force it OFF. I believe this is why it is a votable gdsc in the Linux clk driver.

Just a general clarification: the GPU has mainly 2 power domains: (1) CX, which is shared by the GPU and its SMMU, and (2) GX, which is GPU specific and managed mostly by a power management core within the GPU. This patch series allows the gpu driver to ensure that the CX gdsc has collapsed, which in turn resets the GPU's internal state.
>
>> As part of the recovery of the GPU after some fatal fault, the GPU
>> driver does something which will cause the hardware votes for the
>> power-domain to be let go, and then the driver does pm_runtime_put().
> Okay. That "something", sounds like a device specific setting for the
> corresponding gdsc, right?
>
> So somehow the GPU driver needs to manage that setting, right?
Clarified about this above.
>
>> But in this case the GPU driver wants to ensure that the power-domain is
>> actually powered down, before it does pm_runtime_get() again. To ensure
>> that the hardware lost its state...
> I see.
>
>> The proposal here is to use a reset to reach into the power-domain
>> provider and wait for the hardware to be turned off, before the GPU
>> driver attempts turning the power-domain on again.
>>
>>
>> In other words, there is no reset. This is a hack to make a normally
>> asynchronous pd.power_off() to be synchronous in this particular case.
Not really. Because other non-Linux subsystems are involved here for the CX gdsc, we need a way to poll the gdsc register to ensure that it has indeed collapsed before the gpu driver continues with re-initialization of the gpu. It is either this approach using the 'reset' framework, or plumbing a new path from the gpu driver to the gpucc-gdsc driver to poll the collapse status. I went with the 'reset' approach as per the consensus here: https://patchwork.freedesktop.org/patch/493143/
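[Editor's note: the kind of polling described here could be sketched with the regmap helpers along these lines. The register offset, status bit, and timeout below are illustrative placeholders, not the sc7280 values.]

```c
#include <linux/bits.h>
#include <linux/regmap.h>

#define GDSC_STATUS_PWR_OFF		BIT(31)	/* illustrative status bit */
#define GDSC_COLLAPSE_TIMEOUT_US	500000	/* illustrative timeout */

/* Poll the gdsc status register until the hardware reports collapse,
 * i.e. every vote -- SW and the non-Linux HW/FW voters -- has dropped. */
static int gdsc_poll_collapse(struct regmap *regmap, unsigned int status_reg)
{
	unsigned int val;

	return regmap_read_poll_timeout(regmap, status_reg, val,
					val & GDSC_STATUS_PWR_OFF,
					100, GDSC_COLLAPSE_TIMEOUT_US);
}
```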

-Akhil.
> Alright, assuming I understood your clarifications above correctly
> (thanks!), I think I have got a much better picture now.
>
> Rather than abusing the reset interface, I think we should manage this
> through the genpd's power on/off notifiers (GENPD_NOTIFY_OFF). The GPU
> driver should register its corresponding device for them
> (dev_pm_genpd_add_notifier()).
>
> The trick however, is to make the behaviour of the power-domain for
> the gdsc (the genpd->power_off() callback) conditional on whether the
> HW is configured to vote or not. If the HW can vote, it should not
> poll for the state - and vice versa when the HW can't vote.
>
> Would this work?
>
> Kind regards
> Uffe

2022-12-08 16:26:04

by Akhil P Oommen

Subject: Re: [PATCH v7 0/6] clk/qcom: Support gdsc collapse polling using 'reset' interface

On 12/7/2022 9:30 PM, Ulf Hansson wrote:
> On Thu, 1 Dec 2022 at 23:57, Bjorn Andersson <[email protected]> wrote:
>> On Wed, Oct 05, 2022 at 02:36:58PM +0530, Akhil P Oommen wrote:
>> @Ulf, Akhil has a power-domain for a piece of hardware which may be
>> voted active by multiple different subsystems (co-processors/execution
>> contexts) in the system.
>>
>> As such, during the powering down sequence we don't wait for the
>> power-domain to turn off. But in the event of an error, the recovery
>> mechanism relies on waiting for the hardware to settle in a powered off
>> state.
>>
>> The proposal here is to use the reset framework to wait for this state
>> to be reached, before continuing with the recovery mechanism in the
>> client driver.
> I tried to review the series (see my other replies), but I am not sure
> I fully understand the consumer part.
>
> More exactly, when and who is going to pull the reset and at what point?
Explained in the other patch.

-Akhil.
>
>> Given our other discussions on quirky behavior, do you have any
>> input/suggestions on this?
>>
>>> Some clients like adreno gpu driver would like to ensure that its gdsc
>>> is collapsed at hardware during a gpu reset sequence. This is because it
>>> has a votable gdsc which could be ON due to a vote from another subsystem
>>> like tz, hyp etc or due to an internal hardware signal. To allow
>>> this, gpucc driver can expose an interface to the client driver using
>>> reset framework. Using this the client driver can trigger a polling within
>>> the gdsc driver.
>> @Akhil, this description is fairly generic. As we've reached the state
>> where the hardware has settled and we return to the client, what
>> prevents it from being powered up again?
>>
>> Or is it simply a question of it hitting the powered-off state, not
>> necessarily staying there?
> Okay, so it's indeed the GPU driver that is going to assert/de-assert
> the reset at some point. Right?
>
> That seems like a reasonable approach to me, even if it's a bit
> unclear under what conditions that could happen.
>
> [...]
>
> Kind regards
> Uffe

2022-12-08 21:21:56

by Bjorn Andersson

Subject: Re: [PATCH v7 0/6] clk/qcom: Support gdsc collapse polling using 'reset' interface

On Thu, Dec 08, 2022 at 02:40:55PM +0100, Ulf Hansson wrote:
> On Wed, 7 Dec 2022 at 17:55, Bjorn Andersson <[email protected]> wrote:
> >
> > On Wed, Dec 07, 2022 at 05:00:51PM +0100, Ulf Hansson wrote:
> > > On Thu, 1 Dec 2022 at 23:57, Bjorn Andersson <[email protected]> wrote:
> > > [...]
> >
> > Generally the disable-path of the power-domain does not check that the
> > power-domain is actually turned off, because the status might indicate
> > that the hardware is voting for the power-domain to be on.
>
> Is there a good reason why the HW needs to vote too, when the GPU
> driver is already in control?
>
> Or perhaps that depends on the running use case?
>
> >
> > As part of the recovery of the GPU after some fatal fault, the GPU
> > driver does something which will cause the hardware votes for the
> > power-domain to be let go, and then the driver does pm_runtime_put().
>
> Okay. That "something", sounds like a device specific setting for the
> corresponding gdsc, right?
>
> So somehow the GPU driver needs to manage that setting, right?
>
> >
> > But in this case the GPU driver wants to ensure that the power-domain is
> > actually powered down, before it does pm_runtime_get() again. To ensure
> > that the hardware lost its state...
>
> I see.
>
> >
> > The proposal here is to use a reset to reach into the power-domain
> > provider and wait for the hardware to be turned off, before the GPU
> > driver attempts turning the power-domain on again.
> >
> >
> > In other words, there is no reset. This is a hack to make a normally
> > asynchronous pd.power_off() to be synchronous in this particular case.
>
> Alright, assuming I understood your clarifications above correctly
> (thanks!), I think I have got a much better picture now.
>
> Rather than abusing the reset interface, I think we should manage this
> through the genpd's power on/off notifiers (GENPD_NOTIFY_OFF). The GPU
> driver should register its corresponding device for them
> (dev_pm_genpd_add_notifier()).
>
> The trick however, is to make the behaviour of the power-domain for
> the gdsc (the genpd->power_off() callback) conditional on whether the
> HW is configured to vote or not. If the HW can vote, it should not
> poll for the state - and vice versa when the HW can't vote.
>

Per Akhil's description I misunderstood who the other voters are; but
either way it's not the same "HW configured" mechanism as the one we're
already discussing.


But if, by similar means, we could control whether the power_off() op
should block, waiting for the status indication to show that the
hardware is indeed powered down, I think this would meet the needs.

And GENPD_NOTIFY_OFF seems to provide the notification that it was
successful (i.e. happened within the timeout etc).

> Would this work?
>

If we can control the behavior of the genpd, I think it would.

Thanks,
Bjorn

2022-12-09 17:52:56

by Ulf Hansson

[permalink] [raw]
Subject: Re: [PATCH v7 0/6] clk/qcom: Support gdsc collapse polling using 'reset' interface

On Thu, 8 Dec 2022 at 22:06, Bjorn Andersson <[email protected]> wrote:
> [...]
>
> Per Akhil's description I misunderstood who the other voters are; but
> either way it's not the same "HW configured" mechanism as the one we're
> already discussing.

Okay, so this is another thing then.

>
>
> But if, by similar means, we could control whether the power_off() op
> should block, waiting for the status indication to show that the
> hardware is indeed powered down, I think this would meet the needs.

Right.

>
> And GENPD_NOTIFY_OFF seems to provide the notification that it was
> successful (i.e. happened within the timeout etc).
>
> > Would this work?
> >
>
> If we can control the behavior of the genpd, I think it would.

Okay, it seems like we need a new dev_pm_genpd_* interface that
consumers can call to instruct the genpd provider that its
->power_off() callback needs to temporarily become synchronous.

I guess this could be useful for other similar cases too, where the
corresponding PM domain isn't actually being powered off, but rather
just voted for to become powered off, thus relying on the HW to do the
aggregation.

In any case, I am still a bit skeptical of the reset approach, as is
being suggested in the $subject series. Even if it's rather nice and
clean (but somewhat abusing the interface), it looks like there will
be synchronization problems between the calls to the
pm_runtime_put_sync() and reset_control_reset() in the GPU driver. The
"reset" may actually already have happened when the call to
reset_control_reset() is done, so we may fail to detect the power
collapse, right!?

Let me cook a patch for the new genpd interface that I have in mind,
then we can see how that plays out together with the other parts. I
will post it on Monday!

Kind regards
Uffe

2022-12-12 15:58:41

by Ulf Hansson

[permalink] [raw]
Subject: Re: [PATCH v7 0/6] clk/qcom: Support gdsc collapse polling using 'reset' interface

On Fri, 9 Dec 2022 at 18:36, Ulf Hansson <[email protected]> wrote:
> [...]

Below is the genpd patch that I had in mind.

As I stated above, the GPU driver would need to register for genpd's
power on/off notifiers (GENPD_NOTIFY_OFF). Then it should call the
new dev_pm_genpd_synced_poweroff() and finally pm_runtime_put().
Moreover, when the GPU driver receives the GENPD_NOTIFY_OFF
notification, it should probably just kick a completion variable,
allowing the path that calls pm_runtime_put() to wait for the
notification to arrive.

On the genpd provider side, the ->power_off() callback should be
updated to check the new genpd->synced_poweroff variable, to indicate
whether it should poll for power collapse or not.

I think this should work, but if you still prefer to use the "reset"
approach, that's entirely up to you to decide.

Kind regards
Uffe

-----

From: Ulf Hansson <[email protected]>
Date: Mon, 12 Dec 2022 16:08:05 +0100
Subject: [PATCH] PM: domains: Allow a genpd consumer to require a synced power
off

TODO: Write commit message

Signed-off-by: Ulf Hansson <[email protected]>
---
drivers/base/power/domain.c | 22 ++++++++++++++++++++++
include/linux/pm_domain.h | 1 +
2 files changed, 23 insertions(+)

diff --git a/drivers/base/power/domain.c b/drivers/base/power/domain.c
index b46aa490b4cd..3402b2ea7f61 100644
--- a/drivers/base/power/domain.c
+++ b/drivers/base/power/domain.c
@@ -494,6 +494,27 @@ void dev_pm_genpd_set_next_wakeup(struct device *dev, ktime_t next)
 }
 EXPORT_SYMBOL_GPL(dev_pm_genpd_set_next_wakeup);
 
+/**
+ * dev_pm_genpd_synced_poweroff - Next power off should be synchronous
+ *
+ * @dev: Device to handle
+ *
+ * TODO: Add description
+ */
+void dev_pm_genpd_synced_poweroff(struct device *dev)
+{
+	struct generic_pm_domain *genpd;
+
+	genpd = dev_to_genpd_safe(dev);
+	if (!genpd)
+		return;
+
+	genpd_lock(genpd);
+	genpd->synced_poweroff = true;
+	genpd_unlock(genpd);
+}
+EXPORT_SYMBOL_GPL(dev_pm_genpd_synced_poweroff);
+
 static int _genpd_power_on(struct generic_pm_domain *genpd, bool timed)
 {
 	unsigned int state_idx = genpd->state_idx;
@@ -588,6 +609,7 @@ static int _genpd_power_off(struct generic_pm_domain *genpd, bool timed)
 out:
 	raw_notifier_call_chain(&genpd->power_notifiers, GENPD_NOTIFY_OFF,
 				NULL);
+	genpd->synced_poweroff = false;
 	return 0;
 busy:
 	raw_notifier_call_chain(&genpd->power_notifiers, GENPD_NOTIFY_ON, NULL);
diff --git a/include/linux/pm_domain.h b/include/linux/pm_domain.h
index ebc351698090..09c6c67a4896 100644
--- a/include/linux/pm_domain.h
+++ b/include/linux/pm_domain.h
@@ -134,6 +134,7 @@ struct generic_pm_domain {
 	unsigned int prepared_count;	/* Suspend counter of prepared devices */
 	unsigned int performance_state;	/* Aggregated max performance state */
 	cpumask_var_t cpus;		/* A cpumask of the attached CPUs */
+	bool synced_poweroff;		/* A consumer needs a synced poweroff */
 	int (*power_off)(struct generic_pm_domain *domain);
 	int (*power_on)(struct generic_pm_domain *domain);
 	struct raw_notifier_head power_notifiers; /* Power on/off notifiers */
--
2.34.1

2022-12-12 18:04:34

by Akhil P Oommen

[permalink] [raw]
Subject: Re: [PATCH v7 0/6] clk/qcom: Support gdsc collapse polling using 'reset' interface

On 12/12/2022 9:09 PM, Ulf Hansson wrote:
> [...]
Thanks a lot, Ulf. I will try to prototype the rest of the changes on top of this.

-Akhil.

2022-12-27 18:39:24

by Bjorn Andersson

[permalink] [raw]
Subject: Re: [PATCH v7 0/6] clk/qcom: Support gdsc collapse polling using 'reset' interface

On Mon, Dec 12, 2022 at 04:39:09PM +0100, Ulf Hansson wrote:
> [...]
> > > > > > >
> > > > > > > Or is it simply a question of it hitting the powered-off state, not
> > > > > > > necessarily staying there?
> > > > > >
> > > > > > Okay, so it's indeed the GPU driver that is going to assert/de-assert
> > > > > > the reset at some point. Right?
> > > > > >
> > > > > > That seems like a reasonable approach to me, even if it's a bit
> > > > > > unclear under what conditions that could happen.
> > > > > >
> > > > >
> > > > > Generally the disable-path of the power-domain does not check that the
> > > > > power-domain is actually turned off, because the status might indicate
> > > > > that the hardware is voting for the power-domain to be on.
> > > >
> > > > Is there a good reason why the HW needs to vote too, when the GPU
> > > > driver is already in control?
> > > >
> > > > Or perhaps that depends on the running use case?
> > > >
> > > > >
> > > > > As part of the recovery of the GPU after some fatal fault, the GPU
> > > > > driver does something which will cause the hardware votes for the
> > > > > power-domain to be let go, and then the driver does pm_runtime_put().
> > > >
> > > > Okay. That "something", sounds like a device specific setting for the
> > > > corresponding gdsc, right?
> > > >
> > > > So somehow the GPU driver needs to manage that setting, right?
> > > >
> > > > >
> > > > > But in this case the GPU driver wants to ensure that the power-domain is
> > > > > actually powered down, before it does pm_runtime_get() again. To ensure
> > > > > that the hardware lost its state...
> > > >
> > > > I see.
> > > >
> > > > >
> > > > > The proposal here is to use a reset to reach into the power-domain
> > > > > provider and wait for the hardware to be turned off, before the GPU
> > > > > driver attempts turning the power-domain on again.
> > > > >
> > > > >
> > > > > In other words, there is no reset. This is a hack to make a normally
> > > > > asynchronous pd.power_off() to be synchronous in this particular case.
> > > >
> > > > Alright, assuming I understood your clarifications above correctly
> > > > (thanks!), I think I have got a much better picture now.
> > > >
> > > > Rather than abusing the reset interface, I think we should manage this
> > > > through the genpd's power on/off notifiers (GENPD_NOTIFY_OFF). The GPU
> > > > driver should register its corresponding device for them
> > > > (dev_pm_genpd_add_notifier()).
> > > >
> > > > The trick however, is to make the behaviour of the power-domain for
> > > > the gdsc (the genpd->power_off() callback) conditional on whether the
> > > > HW is configured to vote or not. If the HW can vote, it should not
> > > > poll for the state - and vice versa when the HW can't vote.
> > > >
> > >
> > > Per Akhil's description I misunderstood who the other voters are; but
> > > either way it's not the same "HW configured" mechanism as the one we're
> > > already discussing.
> >
> > Okay, so this is another thing then.
> >
> > >
> > >
> > > But if we based on similar means could control if the power_off() ops
> > > should be blocking, waiting for the status indication to show that the
> > > hardware is indeed powered down, I think this would meet the needs.
> >
> > Right.
> >
> > >
> > > And GENPD_NOTIFY_OFF seems to provide the notification that it was
> > > successful (i.e. happened within the timeout etc).
> > >
> > > > Would this work?
> > > >
> > >
> > > If we can control the behavior of the genpd, I think it would.
> >
> > Okay, it seems like we need a new dev_pm_genpd_* interface that
> > consumers can call to instruct the genpd provider, that its
> > ->power_off() callback needs to temporarily switch to become
> > synchronous.
> >
> > I guess this could be useful for other similar cases too, where the
> > corresponding PM domain isn't actually being powered off, but rather
> > just voted for to become powered off, thus relying on the HW to do the
> > aggregation.
> >
> > In any case, I am still a bit skeptical of the reset approach, as is
> > being suggested in the $subject series. Even if it's rather nice and
> > clean (but somewhat abusing the interface), it looks like there will
> > be synchronization problems between the calls to the
> > pm_runtime_put_sync() and reset_control_reset() in the GPU driver. The
> > "reset" may actually already have happened when the call to
> > reset_control_reset() is done, so we may fail to detect the power
> > collapse, right!?
> >
> > Let me cook a patch for the new genpd interface that I have in mind,
> > then we can see how that plays out together with the other parts. I
> > will post it on Monday!
>
> Below is the genpd patch that I had in mind.
>
> As I stated above, the GPU driver would need to register for genpd's
> power on/off notifiers (GENPD_NOTIFY_OFF). Then it should call the
> new dev_pm_genpd_synced_poweroff() and finally pm_runtime_put().
> Moreover, when the GPU driver receives the GENPD_NOTIFY_OFF
> notification, it should probably just kick a completion variable,
> allowing the path that calls pm_runtime_put() to wait for the
> notification to arrive.
>
> On the genpd provider side, the ->power_off() callback should be
> updated to check the new genpd->synced_poweroff variable, to indicate
> whether it should poll for power collapse or not.
>
> I think this should work, but if you still prefer to use the "reset"
> approach, that's entirely up to you to decide.
>

I find this to be conceptually much cleaner. Thanks for the proposal!

Regards,
Bjorn

> Kind regards
> Uffe
>
> -----
>
> From: Ulf Hansson <[email protected]>
> Date: Mon, 12 Dec 2022 16:08:05 +0100
> Subject: [PATCH] PM: domains: Allow a genpd consumer to require a synced power
> off
>
> TODO: Write commit message
>
> Signed-off-by: Ulf Hansson <[email protected]>
> ---
> drivers/base/power/domain.c | 22 ++++++++++++++++++++++
> include/linux/pm_domain.h | 1 +
> 2 files changed, 23 insertions(+)
>
> diff --git a/drivers/base/power/domain.c b/drivers/base/power/domain.c
> index b46aa490b4cd..3402b2ea7f61 100644
> --- a/drivers/base/power/domain.c
> +++ b/drivers/base/power/domain.c
> @@ -494,6 +494,27 @@ void dev_pm_genpd_set_next_wakeup(struct device
> *dev, ktime_t next)
> }
> EXPORT_SYMBOL_GPL(dev_pm_genpd_set_next_wakeup);
>
> +/**
> + * dev_pm_genpd_synced_poweroff - Next power off should be synchronous
> + *
> + * @dev: Device to handle
> + *
> + * TODO: Add description
> + */
> +void dev_pm_genpd_synced_poweroff(struct device *dev)
> +{
> + struct generic_pm_domain *genpd;
> +
> + genpd = dev_to_genpd_safe(dev);
> + if (!genpd)
> + return;
> +
> + genpd_lock(genpd);
> + genpd->synced_poweroff = true;
> + genpd_unlock(genpd);
> +}
> +EXPORT_SYMBOL_GPL(dev_pm_genpd_synced_poweroff);
> +
> static int _genpd_power_on(struct generic_pm_domain *genpd, bool timed)
> {
> unsigned int state_idx = genpd->state_idx;
> @@ -588,6 +609,7 @@ static int _genpd_power_off(struct
> generic_pm_domain *genpd, bool timed)
> out:
> raw_notifier_call_chain(&genpd->power_notifiers, GENPD_NOTIFY_OFF,
> NULL);
> + genpd->synced_poweroff = false;
> return 0;
> busy:
> raw_notifier_call_chain(&genpd->power_notifiers, GENPD_NOTIFY_ON, NULL);
> diff --git a/include/linux/pm_domain.h b/include/linux/pm_domain.h
> index ebc351698090..09c6c67a4896 100644
> --- a/include/linux/pm_domain.h
> +++ b/include/linux/pm_domain.h
> @@ -134,6 +134,7 @@ struct generic_pm_domain {
> unsigned int prepared_count; /* Suspend counter of prepared
> devices */
> unsigned int performance_state; /* Aggregated max performance state */
> cpumask_var_t cpus; /* A cpumask of the attached CPUs */
> + bool synced_poweroff; /* A consumer needs a synced poweroff */
> int (*power_off)(struct generic_pm_domain *domain);
> int (*power_on)(struct generic_pm_domain *domain);
> struct raw_notifier_head power_notifiers; /* Power on/off notifiers */
> --
> 2.34.1

2022-12-28 09:47:00

by Akhil P Oommen

[permalink] [raw]
Subject: Re: [PATCH v7 0/6] clk/qcom: Support gdsc collapse polling using 'reset' interface

On 12/27/2022 11:54 PM, Bjorn Andersson wrote:
> On Mon, Dec 12, 2022 at 04:39:09PM +0100, Ulf Hansson wrote:
>> On Fri, 9 Dec 2022 at 18:36, Ulf Hansson <[email protected]> wrote:
>>> On Thu, 8 Dec 2022 at 22:06, Bjorn Andersson <[email protected]> wrote:
>>>> On Thu, Dec 08, 2022 at 02:40:55PM +0100, Ulf Hansson wrote:
>>>>> On Wed, 7 Dec 2022 at 17:55, Bjorn Andersson <[email protected]> wrote:
>>>>>> On Wed, Dec 07, 2022 at 05:00:51PM +0100, Ulf Hansson wrote:
>>>>>>> On Thu, 1 Dec 2022 at 23:57, Bjorn Andersson <[email protected]> wrote:
>>>>>>>> On Wed, Oct 05, 2022 at 02:36:58PM +0530, Akhil P Oommen wrote:
>>>>>>>> @Ulf, Akhil has a power-domain for a piece of hardware which may be
>>>>>>>> voted active by multiple different subsystems (co-processors/execution
>>>>>>>> contexts) in the system.
>>>>>>>>
>>>>>>>> As such, during the powering down sequence we don't wait for the
>>>>>>>> power-domain to turn off. But in the event of an error, the recovery
>>>>>>>> mechanism relies on waiting for the hardware to settle in a powered off
>>>>>>>> state.
>>>>>>>>
>>>>>>>> The proposal here is to use the reset framework to wait for this state
>>>>>>>> to be reached, before continuing with the recovery mechanism in the
>>>>>>>> client driver.
>>>>>>> I tried to review the series (see my other replies), but I am not sure
>>>>>>> I fully understand the consumer part.
>>>>>>>
>>>>>>> More exactly, when and who is going to pull the reset and at what point?
>>>>>>>
>>>>>>>> Given our other discussions on quirky behavior, do you have any
>>>>>>>> input/suggestions on this?
>>>>>>>>
>>>>>>>>> Some clients like adreno gpu driver would like to ensure that its gdsc
>>>>>>>>> is collapsed at hardware during a gpu reset sequence. This is because it
>>>>>>>>> has a votable gdsc which could be ON due to a vote from another subsystem
>>>>>>>>> like tz, hyp etc or due to an internal hardware signal. To allow
>>>>>>>>> this, gpucc driver can expose an interface to the client driver using
>>>>>>>>> reset framework. Using this the client driver can trigger a polling within
>>>>>>>>> the gdsc driver.
>>>>>>>> @Akhil, this description is fairly generic. As we've reached the state
>>>>>>>> where the hardware has settled and we return to the client, what
>>>>>>>> prevents it from being powered up again?
>>>>>>>>
>>>>>>>> Or is it simply a question of it hitting the powered-off state, not
>>>>>>>> necessarily staying there?
>>>>>>> Okay, so it's indeed the GPU driver that is going to assert/de-assert
>>>>>>> the reset at some point. Right?
>>>>>>>
>>>>>>> That seems like a reasonable approach to me, even if it's a bit
>>>>>>> unclear under what conditions that could happen.
>>>>>>>
>>>>>> Generally the disable-path of the power-domain does not check that the
>>>>>> power-domain is actually turned off, because the status might indicate
>>>>>> that the hardware is voting for the power-domain to be on.
>>>>> Is there a good reason why the HW needs to vote too, when the GPU
>>>>> driver is already in control?
>>>>>
>>>>> Or perhaps that depends on the running use case?
>>>>>
>>>>>> As part of the recovery of the GPU after some fatal fault, the GPU
>>>>>> driver does something which will cause the hardware votes for the
>>>>>> power-domain to be let go, and then the driver does pm_runtime_put().
>>>>> Okay. That "something", sounds like a device specific setting for the
>>>>> corresponding gdsc, right?
>>>>>
>>>>> So somehow the GPU driver needs to manage that setting, right?
>>>>>
>>>>>> But in this case the GPU driver wants to ensure that the power-domain is
>>>>>> actually powered down, before it does pm_runtime_get() again. To ensure
>>>>>> that the hardware lost its state...
>>>>> I see.
>>>>>
>>>>>> The proposal here is to use a reset to reach into the power-domain
>>>>>> provider and wait for the hardware to be turned off, before the GPU
>>>>>> driver attempts turning the power-domain on again.
>>>>>>
>>>>>>
>>>>>> In other words, there is no reset. This is a hack to make a normally
>>>>>> asynchronous pd.power_off() to be synchronous in this particular case.
>>>>> Alright, assuming I understood your clarifications above correctly
>>>>> (thanks!), I think I have got a much better picture now.
>>>>>
>>>>> Rather than abusing the reset interface, I think we should manage this
>>>>> through the genpd's power on/off notifiers (GENPD_NOTIFY_OFF). The GPU
>>>>> driver should register its corresponding device for them
>>>>> (dev_pm_genpd_add_notifier()).
>>>>>
>>>>> The trick however, is to make the behaviour of the power-domain for
>>>>> the gdsc (the genpd->power_off() callback) conditional on whether the
>>>>> HW is configured to vote or not. If the HW can vote, it should not
>>>>> poll for the state - and vice versa when the HW can't vote.
>>>>>
>>>> Per Akhil's description I misunderstood who the other voters are; but
>>>> either way it's not the same "HW configured" mechanism as the one we're
>>>> already discussing.
>>> Okay, so this is another thing then.
>>>
>>>>
>>>> But if we based on similar means could control if the power_off() ops
>>>> should be blocking, waiting for the status indication to show that the
>>>> hardware is indeed powered down, I think this would meet the needs.
>>> Right.
>>>
>>>> And GENPD_NOTIFY_OFF seems to provide the notification that it was
>>>> successful (i.e. happened within the timeout etc).
>>>>
>>>>> Would this work?
>>>>>
>>>> If we can control the behavior of the genpd, I think it would.
>>> Okay, it seems like we need a new dev_pm_genpd_* interface that
>>> consumers can call to instruct the genpd provider, that its
>>> ->power_off() callback needs to temporarily switch to become
>>> synchronous.
>>>
>>> I guess this could be useful for other similar cases too, where the
>>> corresponding PM domain isn't actually being powered off, but rather
>>> just voted for to become powered off, thus relying on the HW to do the
>>> aggregation.
>>>
>>> In any case, I am still a bit skeptical of the reset approach, as is
>>> being suggested in the $subject series. Even if it's rather nice and
>>> clean (but somewhat abusing the interface), it looks like there will
>>> be synchronization problems between the calls to the
>>> pm_runtime_put_sync() and reset_control_reset() in the GPU driver. The
>>> "reset" may actually already have happened when the call to
>>> reset_control_reset() is done, so we may fail to detect the power
>>> collapse, right!?
>>>
>>> Let me cook a patch for the new genpd interface that I have in mind,
>>> then we can see how that plays out together with the other parts. I
>>> will post it on Monday!
>> Below is the genpd patch that I had in mind.
>>
>> As I stated above, the GPU driver would need to register for genpd's
>> power on/off notifiers (GENPD_NOTIFY_OFF). Then it should call the
>> new dev_pm_genpd_synced_poweroff() and finally pm_runtime_put().
>> Moreover, when the GPU driver receives the GENPD_NOTIFY_OFF
>> notification, it should probably just kick a completion variable,
>> allowing the path that calls pm_runtime_put() to wait for the
>> notification to arrive.
>>
>> On the genpd provider side, the ->power_off() callback should be
>> updated to check the new genpd->synced_poweroff variable, to indicate
>> whether it should poll for power collapse or not.
>>
>> I think this should work, but if you still prefer to use the "reset"
>> approach, that's entirely up to you to decide.
>>
> I find this to be conceptually much cleaner. Thanks for the proposal!
>
> Regards,
> Bjorn
Bjorn, this is the new series based on this proposal:
https://patchwork.freedesktop.org/series/111966/

-Akhil.