Hi Arnd,
this commit:
commit 58374713c9dfb4d231f8c56cac089f6fbdedc2ec
Author: Arnd Bergmann <[email protected]>
Date: Sat Jul 10 23:51:39 2010 +0200
drm: kill BKL from common code
moved the call to drm_lastclose (inside drm_release) under the
dev->count_lock spinlock.
drm_lastclose, however, takes dev->struct_mutex, which is now acquired
in atomic context:
BUG: sleeping function called from invalid context at /home/kronos/src/linux-2.6.git/kernel/mutex.c:94
in_atomic(): 1, irqs_disabled(): 0, pid: 3331, name: Xorg
Pid: 3331, comm: Xorg Not tainted 2.6.35-06113-gf6cec0a #272
Call Trace:
[<ffffffff8102770e>] __might_sleep+0xf8/0xfa
[<ffffffff8127cf18>] mutex_lock+0x1f/0x3e
[<ffffffffa052d1c1>] drm_lastclose+0x92/0x2ad [drm]
[<ffffffffa052dbc7>] drm_release+0x5ca/0x60d [drm]
[<ffffffff810b118f>] fput+0x130/0x1f7
[<ffffffff810ae77d>] filp_close+0x63/0x6d
[<ffffffff810ae82f>] sys_close+0xa8/0xe2
[<ffffffff8100296b>] system_call_fastpath+0x16/0x1b
Luca
On Wed, Aug 11, 2010 at 6:48 PM, Luca Tettamanti <[email protected]> wrote:
> Hi Arnd,
> this commit:
>
> commit 58374713c9dfb4d231f8c56cac089f6fbdedc2ec
> Author: Arnd Bergmann <[email protected]>
> Date:   Sat Jul 10 23:51:39 2010 +0200
>
>     drm: kill BKL from common code
>
>
> moved the call to drm_lastclose (inside drm_release) under the
> dev->count_lock spinlock.
> drm_lastclose, however, takes dev->struct_mutex, which is now acquired
> in atomic context:
I have a patch from Chris Wilson that I need to push to fix this,
basically reducing the spinlock coverage and relying on the global
mutex to handle the open race.
Dave.
>
> BUG: sleeping function called from invalid context at /home/kronos/src/linux-2.6.git/kernel/mutex.c:94
> in_atomic(): 1, irqs_disabled(): 0, pid: 3331, name: Xorg
> Pid: 3331, comm: Xorg Not tainted 2.6.35-06113-gf6cec0a #272
> Call Trace:
>  [<ffffffff8102770e>] __might_sleep+0xf8/0xfa
>  [<ffffffff8127cf18>] mutex_lock+0x1f/0x3e
>  [<ffffffffa052d1c1>] drm_lastclose+0x92/0x2ad [drm]
>  [<ffffffffa052dbc7>] drm_release+0x5ca/0x60d [drm]
>  [<ffffffff810b118f>] fput+0x130/0x1f7
>  [<ffffffff810ae77d>] filp_close+0x63/0x6d
>  [<ffffffff810ae82f>] sys_close+0xa8/0xe2
>  [<ffffffff8100296b>] system_call_fastpath+0x16/0x1b
>
>
> Luca
>
On Wednesday 11 August 2010, Dave Airlie wrote:
> On Wed, Aug 11, 2010 at 6:48 PM, Luca Tettamanti <[email protected]> wrote:
> >
> >
> > moved the call to drm_lastclose (inside drm_release) under the
> > dev->count_lock spinlock.
> > drm_lastclose, however, takes dev->struct_mutex, which is now acquired
> > in atomic context:
Yes, that's obviously been broken by me, sorry about the trouble.
I must have been trying to simplify the error handling by adding a
goto at the end of drm_release, which then happened to break
the common path.
The easiest way to fix this would be to go back to the way drm_release()
worked previously and /only/ replace {,un}lock_kernel() with
mutex_{,un}lock(&drm_global_mutex);.
> I have a patch from Chris Wilson that I need to push to fix this,
> basically reducing the spinlock coverage and relying on the global
> mutex to handle the open race.
Yes, that sounds good, it's what the code used to do before my broken
change.
You might also be able to find a way to remove drm_global_mutex from the
open/close path entirely.
Arnd
On Wed, Aug 11, 2010 at 10:50 AM, Dave Airlie <[email protected]> wrote:
> On Wed, Aug 11, 2010 at 6:48 PM, Luca Tettamanti <[email protected]> wrote:
>> Hi Arnd,
>> this commit:
>>
>> commit 58374713c9dfb4d231f8c56cac089f6fbdedc2ec
>> Author: Arnd Bergmann <[email protected]>
>> Date: Sat Jul 10 23:51:39 2010 +0200
>>
>> drm: kill BKL from common code
>>
>>
>> moved the call to drm_lastclose (inside drm_release) under the
>> dev->count_lock spinlock.
>> drm_lastclose, however, takes dev->struct_mutex, which is now acquired
>> in atomic context:
>
> I have a patch from Chris Wilson that I need to push to fix this,
> basically reducing the spinlock coverage and relying on the global
> mutex to handle the open race.
Ok, thank you :)
Luca