Subject: Re: [PATCH] drm: Check for connector->state NULL in drm_atomic_add_affected_connectors
From: Doug Anderson
To: Douglas Anderson, David Airlie, linux-kernel@vger.kernel.org, dri-devel@lists.freedesktop.org, Tomasz Figa, 姚智情, Heiko Stübner
Date: Mon, 7 Mar 2016 16:09:06 -0800
References: <1457135033-11791-1-git-send-email-dianders@chromium.org> <20160306162951.GA14170@phenom.ffwll.local>
X-Mailing-List: linux-kernel@vger.kernel.org

Hi,

On Mon, Mar 7, 2016 at 4:05 PM, Doug Anderson wrote:
> Daniel,
>
> On Sun, Mar 6, 2016 at 8:29 AM, Daniel Vetter wrote:
>> On Fri, Mar 04, 2016 at 03:43:53PM -0800, Douglas Anderson wrote:
>>> On a system I'm doing development on I found a crash. The crawl looked
>>> like:
>>>
>>> PC is at drm_atomic_add_affected_connectors+0x98/0xe8
>>> ...
>>> drm_atomic_add_affected_connectors from __drm_atomic_helper_set_config+0x218/0x344
>>> __drm_atomic_helper_set_config from restore_fbdev_mode+0x108/0x250
>>> restore_fbdev_mode from drm_fb_helper_restore_fbdev_mode_unlocked+0x3c/0x80
>>> drm_fb_helper_restore_fbdev_mode_unlocked from rockchip_drm_lastclose+0x1c/0x20
>>> rockchip_drm_lastclose from drm_lastclose+0x4c/0x104
>>> drm_lastclose from drm_release+0x424/0x47c
>>> drm_release from __fput+0xf8/0x1d4
>>> __fput from ____fput+0x18/0x1c
>>> ____fput from task_work_run+0xa8/0xbc
>>> task_work_run from do_exit+0x448/0x91c
>>> do_exit from do_group_exit+0x5c/0xcc
>>> do_group_exit from get_signal+0x4dc/0x57c
>>> get_signal from do_signal+0x9c/0x3b4
>>> do_signal from do_work_pending+0x60/0xb8
>>> do_work_pending from slow_work_pending+0xc/0x20
>>>
>>> I found that I could fix the crash by checking connector->state against
>>> NULL. This isn't code I'm familiar with and I didn't dig too deep, so
>>> I'd appreciate any opinions about whether this is a sane thing to do.
>>> Note that I don't actually have graphics up on my system at the moment,
>>> so perhaps this is all just a symptom of the strange state I'm in.
>>>
>>> Signed-off-by: Douglas Anderson
>>
>> This is a driver bug - under atomic the assumption is that there is
>> _always_ a current software state. Most drivers set up the initial
>> "everything off" state in the ->reset functions.
>
> Ah, I see what the problem is. It looks like the main Rockchip
> subsystem moved to atomic but the patch to support that in dw_hdmi
> never landed. If I pick
> then my crash goes away. :)
>
> In case it's not obvious, please consider $SUBJECT patch abandoned. Thanks!

Argh. ...or the needed patch landed but I didn't pick it back in my
backport. Even dumber. :(

-Doug