2016-03-04 23:44:27

by Doug Anderson

Subject: [PATCH] drm: Check for connector->state NULL in drm_atomic_add_affected_connectors

On a system I'm doing development on, I hit a crash. The call trace
looked like:

PC is at drm_atomic_add_affected_connectors+0x98/0xe8
...
drm_atomic_add_affected_connectors from __drm_atomic_helper_set_config+0x218/0x344
__drm_atomic_helper_set_config from restore_fbdev_mode+0x108/0x250
restore_fbdev_mode from drm_fb_helper_restore_fbdev_mode_unlocked+0x3c/0x80
drm_fb_helper_restore_fbdev_mode_unlocked from rockchip_drm_lastclose+0x1c/0x20
rockchip_drm_lastclose from drm_lastclose+0x4c/0x104
drm_lastclose from drm_release+0x424/0x47c
drm_release from __fput+0xf8/0x1d4
__fput from ____fput+0x18/0x1c
____fput from task_work_run+0xa8/0xbc
task_work_run from do_exit+0x448/0x91c
do_exit from do_group_exit+0x5c/0xcc
do_group_exit from get_signal+0x4dc/0x57c
get_signal from do_signal+0x9c/0x3b4
do_signal from do_work_pending+0x60/0xb8
do_work_pending from slow_work_pending+0xc/0x20

I found that I could fix the crash by checking connector->state against
NULL. This isn't code I'm familiar with and I didn't dig too deep, so
I'd appreciate any opinions about whether this is a sane thing to do.
Note that I don't actually have graphics up on my system at the moment,
so perhaps this is all just a symptom of the strange state I'm in.

Signed-off-by: Douglas Anderson <[email protected]>
---
drivers/gpu/drm/drm_atomic.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/drm_atomic.c b/drivers/gpu/drm/drm_atomic.c
index 8fb469c4e4b8..3377d7ddc6d1 100644
--- a/drivers/gpu/drm/drm_atomic.c
+++ b/drivers/gpu/drm/drm_atomic.c
@@ -1147,7 +1147,7 @@ drm_atomic_add_affected_connectors(struct drm_atomic_state *state,
 	 * current configuration.
 	 */
 	drm_for_each_connector(connector, state->dev) {
-		if (connector->state->crtc != crtc)
+		if (!connector->state || connector->state->crtc != crtc)
 			continue;
 
 		conn_state = drm_atomic_get_connector_state(state, connector);
--
2.7.0.rc3.207.g0ac5344


2016-03-06 16:30:02

by Daniel Vetter

Subject: Re: [PATCH] drm: Check for connector->state NULL in drm_atomic_add_affected_connectors

On Fri, Mar 04, 2016 at 03:43:53PM -0800, Douglas Anderson wrote:
> On a system I'm doing development on, I hit a crash. The call trace
> looked like:
>
> PC is at drm_atomic_add_affected_connectors+0x98/0xe8
> ...
> drm_atomic_add_affected_connectors from __drm_atomic_helper_set_config+0x218/0x344
> __drm_atomic_helper_set_config from restore_fbdev_mode+0x108/0x250
> restore_fbdev_mode from drm_fb_helper_restore_fbdev_mode_unlocked+0x3c/0x80
> drm_fb_helper_restore_fbdev_mode_unlocked from rockchip_drm_lastclose+0x1c/0x20
> rockchip_drm_lastclose from drm_lastclose+0x4c/0x104
> drm_lastclose from drm_release+0x424/0x47c
> drm_release from __fput+0xf8/0x1d4
> __fput from ____fput+0x18/0x1c
> ____fput from task_work_run+0xa8/0xbc
> task_work_run from do_exit+0x448/0x91c
> do_exit from do_group_exit+0x5c/0xcc
> do_group_exit from get_signal+0x4dc/0x57c
> get_signal from do_signal+0x9c/0x3b4
> do_signal from do_work_pending+0x60/0xb8
> do_work_pending from slow_work_pending+0xc/0x20
>
> I found that I could fix the crash by checking connector->state against
> NULL. This isn't code I'm familiar with and I didn't dig too deep, so
> I'd appreciate any opinions about whether this is a sane thing to do.
> Note that I don't actually have graphics up on my system at the moment,
> so perhaps this is all just a symptom of the strange state I'm in.
>
> Signed-off-by: Douglas Anderson <[email protected]>

This is a driver bug - under atomic the assumption is that there is
_always_ a current software state. Most drivers set up the initial
"everything off" state in the ->reset functions.
-Daniel

> ---
> drivers/gpu/drm/drm_atomic.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/drivers/gpu/drm/drm_atomic.c b/drivers/gpu/drm/drm_atomic.c
> index 8fb469c4e4b8..3377d7ddc6d1 100644
> --- a/drivers/gpu/drm/drm_atomic.c
> +++ b/drivers/gpu/drm/drm_atomic.c
> @@ -1147,7 +1147,7 @@ drm_atomic_add_affected_connectors(struct drm_atomic_state *state,
>  	 * current configuration.
>  	 */
>  	drm_for_each_connector(connector, state->dev) {
> -		if (connector->state->crtc != crtc)
> +		if (!connector->state || connector->state->crtc != crtc)
>  			continue;
> 
>  		conn_state = drm_atomic_get_connector_state(state, connector);
> --
> 2.7.0.rc3.207.g0ac5344
>
> _______________________________________________
> dri-devel mailing list
> [email protected]
> https://lists.freedesktop.org/mailman/listinfo/dri-devel

--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch

2016-03-06 17:30:30

by Ville Syrjälä

Subject: Re: [PATCH] drm: Check for connector->state NULL in drm_atomic_add_affected_connectors

On Sun, Mar 06, 2016 at 05:29:51PM +0100, Daniel Vetter wrote:
> On Fri, Mar 04, 2016 at 03:43:53PM -0800, Douglas Anderson wrote:
> > On a system I'm doing development on, I hit a crash. The call trace
> > looked like:
> >
> > PC is at drm_atomic_add_affected_connectors+0x98/0xe8
> > ...
> > drm_atomic_add_affected_connectors from __drm_atomic_helper_set_config+0x218/0x344
> > __drm_atomic_helper_set_config from restore_fbdev_mode+0x108/0x250
> > restore_fbdev_mode from drm_fb_helper_restore_fbdev_mode_unlocked+0x3c/0x80
> > drm_fb_helper_restore_fbdev_mode_unlocked from rockchip_drm_lastclose+0x1c/0x20
> > rockchip_drm_lastclose from drm_lastclose+0x4c/0x104
> > drm_lastclose from drm_release+0x424/0x47c
> > drm_release from __fput+0xf8/0x1d4
> > __fput from ____fput+0x18/0x1c
> > ____fput from task_work_run+0xa8/0xbc
> > task_work_run from do_exit+0x448/0x91c
> > do_exit from do_group_exit+0x5c/0xcc
> > do_group_exit from get_signal+0x4dc/0x57c
> > get_signal from do_signal+0x9c/0x3b4
> > do_signal from do_work_pending+0x60/0xb8
> > do_work_pending from slow_work_pending+0xc/0x20
> >
> > I found that I could fix the crash by checking connector->state against
> > NULL. This isn't code I'm familiar with and I didn't dig too deep, so
> > I'd appreciate any opinions about whether this is a sane thing to do.
> > Note that I don't actually have graphics up on my system at the moment,
> > so perhaps this is all just a symptom of the strange state I'm in.
> >
> > Signed-off-by: Douglas Anderson <[email protected]>
>
> This is a driver bug - under atomic the assumption is that there is
> _always_ a current software state. Most drivers set up the initial
> "everything off" state in the ->reset functions.

Except if it's due to kzalloc() failing in the atomic helper reset
implementation. There's no way to propagate the error upwards, so
the current implementation just silently sets state=NULL and goes on
like nothing happened.
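
i.e. the helper reset looks roughly like this (paraphrased, not the
exact source):

	void drm_atomic_helper_connector_reset(struct drm_connector *connector)
	{
		struct drm_connector_state *conn_state =
			kzalloc(sizeof(*conn_state), GFP_KERNEL);

		/*
		 * ->reset returns void, so if the allocation failed there
		 * is no way to tell anyone; the NULL gets installed as the
		 * current state and everything carries on regardless.
		 */
		kfree(connector->state);
		connector->state = conn_state;
		if (conn_state)
			conn_state->connector = connector;
	}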

--
Ville Syrjälä
Intel OTC

2016-03-08 00:06:15

by Doug Anderson

Subject: Re: [PATCH] drm: Check for connector->state NULL in drm_atomic_add_affected_connectors

Daniel,

On Sun, Mar 6, 2016 at 8:29 AM, Daniel Vetter <[email protected]> wrote:
> On Fri, Mar 04, 2016 at 03:43:53PM -0800, Douglas Anderson wrote:
>> On a system I'm doing development on, I hit a crash. The call trace
>> looked like:
>>
>> PC is at drm_atomic_add_affected_connectors+0x98/0xe8
>> ...
>> drm_atomic_add_affected_connectors from __drm_atomic_helper_set_config+0x218/0x344
>> __drm_atomic_helper_set_config from restore_fbdev_mode+0x108/0x250
>> restore_fbdev_mode from drm_fb_helper_restore_fbdev_mode_unlocked+0x3c/0x80
>> drm_fb_helper_restore_fbdev_mode_unlocked from rockchip_drm_lastclose+0x1c/0x20
>> rockchip_drm_lastclose from drm_lastclose+0x4c/0x104
>> drm_lastclose from drm_release+0x424/0x47c
>> drm_release from __fput+0xf8/0x1d4
>> __fput from ____fput+0x18/0x1c
>> ____fput from task_work_run+0xa8/0xbc
>> task_work_run from do_exit+0x448/0x91c
>> do_exit from do_group_exit+0x5c/0xcc
>> do_group_exit from get_signal+0x4dc/0x57c
>> get_signal from do_signal+0x9c/0x3b4
>> do_signal from do_work_pending+0x60/0xb8
>> do_work_pending from slow_work_pending+0xc/0x20
>>
>> I found that I could fix the crash by checking connector->state against
>> NULL. This isn't code I'm familiar with and I didn't dig too deep, so
>> I'd appreciate any opinions about whether this is a sane thing to do.
>> Note that I don't actually have graphics up on my system at the moment,
>> so perhaps this is all just a symptom of the strange state I'm in.
>>
>> Signed-off-by: Douglas Anderson <[email protected]>
>
> This is a driver bug - under atomic the assumption is that there is
> _always_ a current software state. Most drivers set up the initial
> "everything off" state in the ->reset functions.

Ah, I see what the problem is. It looks like the main Rockchip
subsystem moved to atomic, but the patch to support that in dw_hdmi
never landed. If I cherry-pick <https://patchwork.kernel.org/patch/7868951/>
then my crash goes away. :)

In case it's not obvious, please consider $SUBJECT patch abandoned. Thanks!

-Doug

2016-03-08 00:09:19

by Doug Anderson

Subject: Re: [PATCH] drm: Check for connector->state NULL in drm_atomic_add_affected_connectors

Hi,

On Mon, Mar 7, 2016 at 4:05 PM, Doug Anderson <[email protected]> wrote:
> Daniel,
>
> On Sun, Mar 6, 2016 at 8:29 AM, Daniel Vetter <[email protected]> wrote:
>> On Fri, Mar 04, 2016 at 03:43:53PM -0800, Douglas Anderson wrote:
>>> On a system I'm doing development on, I hit a crash. The call trace
>>> looked like:
>>>
>>> PC is at drm_atomic_add_affected_connectors+0x98/0xe8
>>> ...
>>> drm_atomic_add_affected_connectors from __drm_atomic_helper_set_config+0x218/0x344
>>> __drm_atomic_helper_set_config from restore_fbdev_mode+0x108/0x250
>>> restore_fbdev_mode from drm_fb_helper_restore_fbdev_mode_unlocked+0x3c/0x80
>>> drm_fb_helper_restore_fbdev_mode_unlocked from rockchip_drm_lastclose+0x1c/0x20
>>> rockchip_drm_lastclose from drm_lastclose+0x4c/0x104
>>> drm_lastclose from drm_release+0x424/0x47c
>>> drm_release from __fput+0xf8/0x1d4
>>> __fput from ____fput+0x18/0x1c
>>> ____fput from task_work_run+0xa8/0xbc
>>> task_work_run from do_exit+0x448/0x91c
>>> do_exit from do_group_exit+0x5c/0xcc
>>> do_group_exit from get_signal+0x4dc/0x57c
>>> get_signal from do_signal+0x9c/0x3b4
>>> do_signal from do_work_pending+0x60/0xb8
>>> do_work_pending from slow_work_pending+0xc/0x20
>>>
>>> I found that I could fix the crash by checking connector->state against
>>> NULL. This isn't code I'm familiar with and I didn't dig too deep, so
>>> I'd appreciate any opinions about whether this is a sane thing to do.
>>> Note that I don't actually have graphics up on my system at the moment,
>>> so perhaps this is all just a symptom of the strange state I'm in.
>>>
>>> Signed-off-by: Douglas Anderson <[email protected]>
>>
>> This is a driver bug - under atomic the assumption is that there is
>> _always_ a current software state. Most drivers set up the initial
>> "everything off" state in the ->reset functions.
>
> Ah, I see what the problem is. It looks like the main Rockchip
> subsystem moved to atomic, but the patch to support that in dw_hdmi
> never landed. If I cherry-pick <https://patchwork.kernel.org/patch/7868951/>
> then my crash goes away. :)
>
> In case it's not obvious, please consider $SUBJECT patch abandoned. Thanks!

Argh. ...or the needed patch did land, but I didn't cherry-pick it
back into my backport. Even dumber. :(

-Doug