2015-11-01 22:56:23

by Alexei Starovoitov

Subject: Re: [PATCH] bpf: convert hashtab lock to raw lock

On Sat, Oct 31, 2015 at 09:47:36AM -0400, Steven Rostedt wrote:
> On Fri, 30 Oct 2015 17:03:58 -0700
> Alexei Starovoitov <[email protected]> wrote:
>
> > On Fri, Oct 30, 2015 at 03:16:26PM -0700, Yang Shi wrote:
> > > When running bpf samples on rt kernel, it reports the below warning:
> > >
> > > BUG: sleeping function called from invalid context at kernel/locking/rtmutex.c:917
> > > in_atomic(): 1, irqs_disabled(): 128, pid: 477, name: ping
> > > Preemption disabled at:[<ffff80000017db58>] kprobe_perf_func+0x30/0x228
> > ...
> > > diff --git a/kernel/bpf/hashtab.c b/kernel/bpf/hashtab.c
> > > index 83c209d..972b76b 100644
> > > --- a/kernel/bpf/hashtab.c
> > > +++ b/kernel/bpf/hashtab.c
> > > @@ -17,7 +17,7 @@
> > >  struct bpf_htab {
> > >  	struct bpf_map map;
> > >  	struct hlist_head *buckets;
> > > -	spinlock_t lock;
> > > +	raw_spinlock_t lock;
> >
> > How do we address such things in general?
> > I bet there are tons of places around the kernel that
> > call spin_lock from atomic context.
> > I'd hate to lose the benefits of lockdep on non-raw spin_lock
> > just to make -rt happy.
>
> > You won't lose any benefits of lockdep. Lockdep still checks
> raw_spin_lock(). The only difference between raw_spin_lock and
> spin_lock is that in -rt spin_lock turns into an rt_mutex() and
> raw_spin_lock stays a spin lock.

I see. The patch makes sense then.
Would be good to document this peculiarity of spin_lock.
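
To make that distinction concrete, here is a minimal illustrative
sketch (not from the thread or the patch; the names are hypothetical):
on PREEMPT_RT a spinlock_t is backed by an rt_mutex and may sleep, so
taking one with preemption disabled reproduces exactly the BUG quoted
above, while a raw_spinlock_t keeps true spinning semantics on both
mainline and -rt.

#include <linux/spinlock.h>

static DEFINE_SPINLOCK(demo_lock);         /* rt_mutex on -rt: may sleep */
static DEFINE_RAW_SPINLOCK(demo_raw_lock); /* real spinlock on mainline and -rt */

static void demo_called_in_atomic_context(void)
{
	unsigned long flags;

	/* On -rt this is a sleeping lock: taking it here triggers
	 * "BUG: sleeping function called from invalid context". */
	spin_lock_irqsave(&demo_lock, flags);
	spin_unlock_irqrestore(&demo_lock, flags);

	/* Valid on both kernels, and still covered by lockdep. */
	raw_spin_lock_irqsave(&demo_raw_lock, flags);
	raw_spin_unlock_irqrestore(&demo_raw_lock, flags);
}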

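Note that the one-line struct change quoted above also implies
converting every lock and unlock call on htab->lock to the raw_
variants. A hypothetical companion hunk, reconstructed from the
struct diff rather than quoted from the full patch, would look like:

-	spin_lock_init(&htab->lock);
+	raw_spin_lock_init(&htab->lock);
 	...
-	spin_lock_irqsave(&htab->lock, flags);
+	raw_spin_lock_irqsave(&htab->lock, flags);
 	...
-	spin_unlock_irqrestore(&htab->lock, flags);
+	raw_spin_unlock_irqrestore(&htab->lock, flags);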

2015-11-02 09:00:16

by Thomas Gleixner

Subject: Re: [PATCH] bpf: convert hashtab lock to raw lock

On Sun, 1 Nov 2015, Alexei Starovoitov wrote:
> ...
>
> I see. The patch makes sense then.
> Would be good to document this peculiarity of spin_lock.

I'm working on a document.

Thanks,

tglx

2015-11-02 17:09:08

by Shi, Yang

Subject: Re: [PATCH] bpf: convert hashtab lock to raw lock

On 11/2/2015 12:59 AM, Thomas Gleixner wrote:
> On Sun, 1 Nov 2015, Alexei Starovoitov wrote:
>> ...
>>
>> I see. The patch makes sense then.
>> Would be good to document this peculiarity of spin_lock.
>
> I'm working on a document.

Thanks Steven and Thomas for your elaboration and comments.

Yang
