2022-04-12 20:16:47

by Dmitry Osipenko

Subject: Re: [PATCH v1] drm/scheduler: Don't kill jobs in interrupt context

On 4/12/22 19:51, Andrey Grodzovsky wrote:
>
> On 2022-04-11 18:15, Dmitry Osipenko wrote:
>> Interrupt context can't sleep. Drivers like Panfrost and MSM take a
>> mutex when a job is released, and thus that code can sleep. This results
>> in a "BUG: scheduling while atomic" if the locks are contended while the
>> job is freed. There is no good reason for releasing the scheduler's jobs
>> in IRQ context, hence use normal context to fix the trouble.
>
>
> I am not sure this is the best idea, leaving the job's sw fence signalling
> to be executed in system_wq context, which is prone to delays from the
> various work items executing around the system. It seems better to me to
> leave the fence signaling within the IRQ context and offload only the job
> freeing, or maybe handle rescheduling to thread context within the drivers'
> implementation of the .free_job cb. Not really sure which is better.

We're talking here about killing jobs when the driver destroys a context,
which doesn't feel like it needs to be a fast path. I could move the
signalling into drm_sched_entity_kill_jobs_cb() and use an unbound wq, but
do we really need this for a slow path?
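
For reference, the core of the proposed change is roughly this (a
simplified sketch of the idea, not the exact hunk): instead of freeing
the job from the dma-fence callback via irq_work, the callback only
queues a regular work item, so .free_job runs in a context that is
allowed to sleep. It assumes the job's 'work' member becomes a
struct work_struct:

static void drm_sched_entity_kill_jobs_work(struct work_struct *wrk)
{
	struct drm_sched_job *job = container_of(wrk, typeof(*job), work);

	/* Process context: .free_job may take mutexes here. */
	drm_sched_fence_finished(job->s_fence);
	job->sched->ops->free_job(job);
}

static void drm_sched_entity_kill_jobs_cb(struct dma_fence *f,
					  struct dma_fence_cb *cb)
{
	struct drm_sched_job *job = container_of(cb, struct drm_sched_job,
						 finish_cb);

	/* Called from the fence signalling (IRQ) path, so defer everything. */
	INIT_WORK(&job->work, drm_sched_entity_kill_jobs_work);
	schedule_work(&job->work);
}

schedule_work() puts this on system_wq; switching to
queue_work(system_unbound_wq, &job->work) would be a one-line change if
the delays mentioned above ever become a problem in practice.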


2022-04-12 23:17:11

by Andrey Grodzovsky

Subject: Re: [PATCH v1] drm/scheduler: Don't kill jobs in interrupt context


On 2022-04-12 14:20, Dmitry Osipenko wrote:
> On 4/12/22 19:51, Andrey Grodzovsky wrote:
>> On 2022-04-11 18:15, Dmitry Osipenko wrote:
>>> Interrupt context can't sleep. Drivers like Panfrost and MSM take a
>>> mutex when a job is released, and thus that code can sleep. This results
>>> in a "BUG: scheduling while atomic" if the locks are contended while the
>>> job is freed. There is no good reason for releasing the scheduler's jobs
>>> in IRQ context, hence use normal context to fix the trouble.
>>
>> I am not sure this is the best idea, leaving the job's sw fence signalling
>> to be executed in system_wq context, which is prone to delays from the
>> various work items executing around the system. It seems better to me to
>> leave the fence signaling within the IRQ context and offload only the job
>> freeing, or maybe handle rescheduling to thread context within the drivers'
>> implementation of the .free_job cb. Not really sure which is better.
> We're talking here about killing jobs when the driver destroys a context,
> which doesn't feel like it needs to be a fast path. I could move the
> signalling into drm_sched_entity_kill_jobs_cb() and use an unbound wq, but
> do we really need this for a slow path?


You can't move the signaling back to drm_sched_entity_kill_jobs_cb,
since this will bring back the lockdep splat that 'drm/sched: Avoid
lockdep spalt on killing a processes' was fixing.

I see your point and I guess we can go this way too. Another way would
be to add a work_item to the panfrost and msm jobs and reschedule to
thread context from within their .free_job callbacks, but that is
probably too cumbersome to be justified here.
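
A hypothetical sketch of that driver-side variant (names are invented
for illustration; foo_job_cleanup() stands in for whatever the driver
does today in .free_job):

struct foo_job {
	struct drm_sched_job base;
	struct work_struct free_work;
	/* ... driver-specific state ... */
};

static void foo_free_job_work(struct work_struct *work)
{
	struct foo_job *job = container_of(work, struct foo_job, free_work);

	/* Process context: safe to take mutexes, drop BOs, kfree(), etc. */
	foo_job_cleanup(job);
}

static void foo_free_job(struct drm_sched_job *sched_job)
{
	struct foo_job *job = container_of(sched_job, struct foo_job, base);

	/* May be called from the fence signalling path: defer the sleeping parts. */
	INIT_WORK(&job->free_work, foo_free_job_work);
	schedule_work(&job->free_work);
}

Every affected driver would have to carry the same boilerplate, which
is why handling it once in the scheduler seems preferable.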

Andrey


Reviewed-by: Andrey Grodzovsky <[email protected]>


2022-04-13 00:05:31

by Dmitry Osipenko

Subject: Re: [PATCH v1] drm/scheduler: Don't kill jobs in interrupt context

On 4/12/22 22:40, Andrey Grodzovsky wrote:
>
> On 2022-04-12 14:20, Dmitry Osipenko wrote:
>> On 4/12/22 19:51, Andrey Grodzovsky wrote:
>>> On 2022-04-11 18:15, Dmitry Osipenko wrote:
>>>> Interrupt context can't sleep. Drivers like Panfrost and MSM take a
>>>> mutex when a job is released, and thus that code can sleep. This results
>>>> in a "BUG: scheduling while atomic" if the locks are contended while the
>>>> job is freed. There is no good reason for releasing the scheduler's jobs
>>>> in IRQ context, hence use normal context to fix the trouble.
>>>
>>> I am not sure this is the best idea, leaving the job's sw fence signalling
>>> to be executed in system_wq context, which is prone to delays from the
>>> various work items executing around the system. It seems better to me to
>>> leave the fence signaling within the IRQ context and offload only the job
>>> freeing, or maybe handle rescheduling to thread context within the drivers'
>>> implementation of the .free_job cb. Not really sure which is better.
>> We're talking here about killing jobs when the driver destroys a context,
>> which doesn't feel like it needs to be a fast path. I could move the
>> signalling into drm_sched_entity_kill_jobs_cb() and use an unbound wq, but
>> do we really need this for a slow path?
>
>
> You can't move the signaling back to drm_sched_entity_kill_jobs_cb,
> since this will bring back the lockdep splat that 'drm/sched: Avoid
> lockdep spalt on killing a processes' was fixing.

Indeed

> I see your point and I guess we can go this way too. Another way would
> be to add a work_item to the panfrost and msm jobs and reschedule to
> thread context from within their .free_job callbacks, but that is
> probably too cumbersome to be justified here.

Yes, there is no clear justification for doing that.

> Andrey
>
>
> Reviewed-by: Andrey Grodzovsky <[email protected]>

Thank you!

2022-04-13 01:43:00

by Erico Nunes

Subject: Re: [PATCH v1] drm/scheduler: Don't kill jobs in interrupt context

On Tue, Apr 12, 2022 at 9:41 PM Andrey Grodzovsky
<[email protected]> wrote:
>
>
> On 2022-04-12 14:20, Dmitry Osipenko wrote:
> > On 4/12/22 19:51, Andrey Grodzovsky wrote:
> >> On 2022-04-11 18:15, Dmitry Osipenko wrote:
> >>> Interrupt context can't sleep. Drivers like Panfrost and MSM take a
> >>> mutex when a job is released, and thus that code can sleep. This results
> >>> in a "BUG: scheduling while atomic" if the locks are contended while the
> >>> job is freed. There is no good reason for releasing the scheduler's jobs
> >>> in IRQ context, hence use normal context to fix the trouble.
> >>
> >> I am not sure this is the best idea, leaving the job's sw fence signalling
> >> to be executed in system_wq context, which is prone to delays from the
> >> various work items executing around the system. It seems better to me to
> >> leave the fence signaling within the IRQ context and offload only the job
> >> freeing, or maybe handle rescheduling to thread context within the drivers'
> >> implementation of the .free_job cb. Not really sure which is better.
> > We're talking here about killing jobs when the driver destroys a context,
> > which doesn't feel like it needs to be a fast path. I could move the
> > signalling into drm_sched_entity_kill_jobs_cb() and use an unbound wq, but
> > do we really need this for a slow path?
>
>
> You can't move the signaling back to drm_sched_entity_kill_jobs_cb,
> since this will bring back the lockdep splat that 'drm/sched: Avoid
> lockdep spalt on killing a processes' was fixing.
>
> I see your point and I guess we can go this way too. Another way would
> be to add a work_item to the panfrost and msm jobs and reschedule to
> thread context from within their .free_job callbacks, but that is
> probably too cumbersome to be justified here.

FWIW, since individual drivers were mentioned: commit 'drm/sched: Avoid
lockdep spalt on killing a processes' also introduced problems for lima.
There were some occurrences in our CI:
https://gitlab.freedesktop.org/mesa/mesa/-/jobs/20980982/raw
Later I found it also reproducible during normal usage, when just closing
applications, so it may be affecting users too.

I tested this patch and it looks like it fixes things for lima.

Thanks

Erico

2022-04-13 12:20:13

by Dmitry Osipenko

Subject: Re: [PATCH v1] drm/scheduler: Don't kill jobs in interrupt context

On 4/13/22 01:59, Erico Nunes wrote:
> On Tue, Apr 12, 2022 at 9:41 PM Andrey Grodzovsky
> <[email protected]> wrote:
>>
>>
>> On 2022-04-12 14:20, Dmitry Osipenko wrote:
>>> On 4/12/22 19:51, Andrey Grodzovsky wrote:
>>>> On 2022-04-11 18:15, Dmitry Osipenko wrote:
>>>>> Interrupt context can't sleep. Drivers like Panfrost and MSM take a
>>>>> mutex when a job is released, and thus that code can sleep. This results
>>>>> in a "BUG: scheduling while atomic" if the locks are contended while the
>>>>> job is freed. There is no good reason for releasing the scheduler's jobs
>>>>> in IRQ context, hence use normal context to fix the trouble.
>>>>
>>>> I am not sure this is the best idea, leaving the job's sw fence signalling
>>>> to be executed in system_wq context, which is prone to delays from the
>>>> various work items executing around the system. It seems better to me to
>>>> leave the fence signaling within the IRQ context and offload only the job
>>>> freeing, or maybe handle rescheduling to thread context within the drivers'
>>>> implementation of the .free_job cb. Not really sure which is better.
>>> We're talking here about killing jobs when the driver destroys a context,
>>> which doesn't feel like it needs to be a fast path. I could move the
>>> signalling into drm_sched_entity_kill_jobs_cb() and use an unbound wq, but
>>> do we really need this for a slow path?
>>
>>
>> You can't move the signaling back to drm_sched_entity_kill_jobs_cb,
>> since this will bring back the lockdep splat that 'drm/sched: Avoid
>> lockdep spalt on killing a processes' was fixing.
>>
>> I see your point and I guess we can go this way too. Another way would
>> be to add a work_item to the panfrost and msm jobs and reschedule to
>> thread context from within their .free_job callbacks, but that is
>> probably too cumbersome to be justified here.
>
> FWIW, since individual drivers were mentioned: commit 'drm/sched: Avoid
> lockdep spalt on killing a processes' also introduced problems for lima.
> There were some occurrences in our CI:
> https://gitlab.freedesktop.org/mesa/mesa/-/jobs/20980982/raw
> Later I found it also reproducible during normal usage, when just closing
> applications, so it may be affecting users too.
>
> I tested this patch and it looks like it fixes things for lima.

This patch indeed should fix that lima bug. Feel free to give your
Tested-by :)

2022-04-13 19:12:37

by Erico Nunes

Subject: Re: [PATCH v1] drm/scheduler: Don't kill jobs in interrupt context

On Wed, Apr 13, 2022 at 8:05 AM Dmitry Osipenko
<[email protected]> wrote:
>
> On 4/13/22 01:59, Erico Nunes wrote:
> > On Tue, Apr 12, 2022 at 9:41 PM Andrey Grodzovsky
> > <[email protected]> wrote:
> >>
> >>
> >> On 2022-04-12 14:20, Dmitry Osipenko wrote:
> >>> On 4/12/22 19:51, Andrey Grodzovsky wrote:
> >>>> On 2022-04-11 18:15, Dmitry Osipenko wrote:
> >>>>> Interrupt context can't sleep. Drivers like Panfrost and MSM take a
> >>>>> mutex when a job is released, and thus that code can sleep. This results
> >>>>> in a "BUG: scheduling while atomic" if the locks are contended while the
> >>>>> job is freed. There is no good reason for releasing the scheduler's jobs
> >>>>> in IRQ context, hence use normal context to fix the trouble.
> >>>>
> >>>> I am not sure this is the best idea, leaving the job's sw fence signalling
> >>>> to be executed in system_wq context, which is prone to delays from the
> >>>> various work items executing around the system. It seems better to me to
> >>>> leave the fence signaling within the IRQ context and offload only the job
> >>>> freeing, or maybe handle rescheduling to thread context within the drivers'
> >>>> implementation of the .free_job cb. Not really sure which is better.
> >>> We're talking here about killing jobs when the driver destroys a context,
> >>> which doesn't feel like it needs to be a fast path. I could move the
> >>> signalling into drm_sched_entity_kill_jobs_cb() and use an unbound wq, but
> >>> do we really need this for a slow path?
> >>
> >>
> >> You can't move the signaling back to drm_sched_entity_kill_jobs_cb,
> >> since this will bring back the lockdep splat that 'drm/sched: Avoid
> >> lockdep spalt on killing a processes' was fixing.
> >>
> >> I see your point and I guess we can go this way too. Another way would
> >> be to add a work_item to the panfrost and msm jobs and reschedule to
> >> thread context from within their .free_job callbacks, but that is
> >> probably too cumbersome to be justified here.
> >
> > FWIW, since individual drivers were mentioned: commit 'drm/sched: Avoid
> > lockdep spalt on killing a processes' also introduced problems for lima.
> > There were some occurrences in our CI:
> > https://gitlab.freedesktop.org/mesa/mesa/-/jobs/20980982/raw
> > Later I found it also reproducible during normal usage, when just closing
> > applications, so it may be affecting users too.
> >
> > I tested this patch and it looks like it fixes things for lima.
>
> This patch indeed should fix that lima bug. Feel free to give your
> Tested-by :)

Sure:
Tested-by: Erico Nunes <[email protected]>

Thanks

Erico