2023-11-20 20:28:57

by Namhyung Kim

Subject: Re: [PATCH] locking/percpu-rwsem: Trigger contention tracepoints only if contended

Ping!

On Wed, Nov 8, 2023 at 1:53 PM Namhyung Kim <[email protected]> wrote:
>
> The writer path mistakenly fires the lock contention tracepoints unconditionally.
> They should be conditional on the trylock result.
>
> Signed-off-by: Namhyung Kim <[email protected]>
> ---
> kernel/locking/percpu-rwsem.c | 11 ++++++++---
> 1 file changed, 8 insertions(+), 3 deletions(-)
>
> diff --git a/kernel/locking/percpu-rwsem.c b/kernel/locking/percpu-rwsem.c
> index 185bd1c906b0..6083883c4fe0 100644
> --- a/kernel/locking/percpu-rwsem.c
> +++ b/kernel/locking/percpu-rwsem.c
> @@ -223,9 +223,10 @@ static bool readers_active_check(struct percpu_rw_semaphore *sem)
>
> void __sched percpu_down_write(struct percpu_rw_semaphore *sem)
> {
> + bool contended = false;
> +
> might_sleep();
> rwsem_acquire(&sem->dep_map, 0, 0, _RET_IP_);
> - trace_contention_begin(sem, LCB_F_PERCPU | LCB_F_WRITE);
>
> /* Notify readers to take the slow path. */
> rcu_sync_enter(&sem->rss);
> @@ -234,8 +235,11 @@ void __sched percpu_down_write(struct percpu_rw_semaphore *sem)
> * Try set sem->block; this provides writer-writer exclusion.
> * Having sem->block set makes new readers block.
> */
> - if (!__percpu_down_write_trylock(sem))
> + if (!__percpu_down_write_trylock(sem)) {
> + trace_contention_begin(sem, LCB_F_PERCPU | LCB_F_WRITE);
> percpu_rwsem_wait(sem, /* .reader = */ false);
> + contended = true;
> + }
>
> /* smp_mb() implied by __percpu_down_write_trylock() on success -- D matches A */
>
> @@ -247,7 +251,8 @@ void __sched percpu_down_write(struct percpu_rw_semaphore *sem)
>
> /* Wait for all active readers to complete. */
> rcuwait_wait_event(&sem->writer, readers_active_check(sem), TASK_UNINTERRUPTIBLE);
> - trace_contention_end(sem, 0);
> + if (contended)
> + trace_contention_end(sem, 0);
> }
> EXPORT_SYMBOL_GPL(percpu_down_write);
>
> --
> 2.42.0.869.gea05f2083d-goog
>
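
For readers following the hunks without the file at hand, the writer path of
percpu_down_write() reads roughly as follows once the patch is applied. This is
only a sketch assembled from the quoted diff: unchanged lines between the hunks
are elided, and the two extra comments around the tracepoint calls are
illustrative notes, not part of the patch itself.

    void __sched percpu_down_write(struct percpu_rw_semaphore *sem)
    {
            bool contended = false;

            might_sleep();
            rwsem_acquire(&sem->dep_map, 0, 0, _RET_IP_);

            /* Notify readers to take the slow path. */
            rcu_sync_enter(&sem->rss);

            /* ... unchanged lines elided ... */

            /*
             * Try set sem->block; this provides writer-writer exclusion.
             * Having sem->block set makes new readers block.
             */
            if (!__percpu_down_write_trylock(sem)) {
                    /* Trylock failed: the writer really is contended. */
                    trace_contention_begin(sem, LCB_F_PERCPU | LCB_F_WRITE);
                    percpu_rwsem_wait(sem, /* .reader = */ false);
                    contended = true;
            }

            /* smp_mb() implied by __percpu_down_write_trylock() on success -- D matches A */

            /* ... unchanged lines elided ... */

            /* Wait for all active readers to complete. */
            rcuwait_wait_event(&sem->writer, readers_active_check(sem), TASK_UNINTERRUPTIBLE);

            /* The end event pairs with the begin emitted above, only if we waited. */
            if (contended)
                    trace_contention_end(sem, 0);
    }
    EXPORT_SYMBOL_GPL(percpu_down_write);

The net effect is that contention_begin/contention_end fire as a pair only when
the trylock fails and the writer actually has to wait, so uncontended writer
acquisitions no longer show up in the contention data.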


2024-02-27 23:03:50

by Namhyung Kim

Subject: Re: [PATCH] locking/percpu-rwsem: Trigger contention tracepoints only if contended

Hello,

On Mon, Nov 20, 2023 at 12:28 PM Namhyung Kim <[email protected]> wrote:
>
> Ping!
>
> On Wed, Nov 8, 2023 at 1:53 PM Namhyung Kim <[email protected]> wrote:
> >
> > The writer path mistakenly fires the lock contention tracepoints unconditionally.
> > They should be conditional on the trylock result.

Can anybody take a look at this? It adds a lot of noise to
the lock contention results.

Thanks,
Namhyung

> >
> > Signed-off-by: Namhyung Kim <[email protected]>
> > ---
> > kernel/locking/percpu-rwsem.c | 11 ++++++++---
> > 1 file changed, 8 insertions(+), 3 deletions(-)
> >
> > diff --git a/kernel/locking/percpu-rwsem.c b/kernel/locking/percpu-rwsem.c
> > index 185bd1c906b0..6083883c4fe0 100644
> > --- a/kernel/locking/percpu-rwsem.c
> > +++ b/kernel/locking/percpu-rwsem.c
> > @@ -223,9 +223,10 @@ static bool readers_active_check(struct percpu_rw_semaphore *sem)
> >
> > void __sched percpu_down_write(struct percpu_rw_semaphore *sem)
> > {
> > + bool contended = false;
> > +
> > might_sleep();
> > rwsem_acquire(&sem->dep_map, 0, 0, _RET_IP_);
> > - trace_contention_begin(sem, LCB_F_PERCPU | LCB_F_WRITE);
> >
> > /* Notify readers to take the slow path. */
> > rcu_sync_enter(&sem->rss);
> > @@ -234,8 +235,11 @@ void __sched percpu_down_write(struct percpu_rw_semaphore *sem)
> > * Try set sem->block; this provides writer-writer exclusion.
> > * Having sem->block set makes new readers block.
> > */
> > - if (!__percpu_down_write_trylock(sem))
> > + if (!__percpu_down_write_trylock(sem)) {
> > + trace_contention_begin(sem, LCB_F_PERCPU | LCB_F_WRITE);
> > percpu_rwsem_wait(sem, /* .reader = */ false);
> > + contended = true;
> > + }
> >
> > /* smp_mb() implied by __percpu_down_write_trylock() on success -- D matches A */
> >
> > @@ -247,7 +251,8 @@ void __sched percpu_down_write(struct percpu_rw_semaphore *sem)
> >
> > /* Wait for all active readers to complete. */
> > rcuwait_wait_event(&sem->writer, readers_active_check(sem), TASK_UNINTERRUPTIBLE);
> > - trace_contention_end(sem, 0);
> > + if (contended)
> > + trace_contention_end(sem, 0);
> > }
> > EXPORT_SYMBOL_GPL(percpu_down_write);
> >
> > --
> > 2.42.0.869.gea05f2083d-goog
> >

2024-02-28 00:19:07

by Waiman Long

Subject: Re: [PATCH] locking/percpu-rwsem: Trigger contention tracepoints only if contended


On 2/27/24 18:02, Namhyung Kim wrote:
> Hello,
>
> On Mon, Nov 20, 2023 at 12:28 PM Namhyung Kim <[email protected]> wrote:
>> Ping!
>>
>> On Wed, Nov 8, 2023 at 1:53 PM Namhyung Kim <[email protected]> wrote:
>>> The writer path mistakenly fires the lock contention tracepoints unconditionally.
>>> They should be conditional on the trylock result.
> Can anybody take a look at this? It adds a lot of noise to
> the lock contention results.
>
> Thanks,
> Namhyung
>
>>> Signed-off-by: Namhyung Kim <[email protected]>
>>> ---
>>> kernel/locking/percpu-rwsem.c | 11 ++++++++---
>>> 1 file changed, 8 insertions(+), 3 deletions(-)
>>>
>>> diff --git a/kernel/locking/percpu-rwsem.c b/kernel/locking/percpu-rwsem.c
>>> index 185bd1c906b0..6083883c4fe0 100644
>>> --- a/kernel/locking/percpu-rwsem.c
>>> +++ b/kernel/locking/percpu-rwsem.c
>>> @@ -223,9 +223,10 @@ static bool readers_active_check(struct percpu_rw_semaphore *sem)
>>>
>>> void __sched percpu_down_write(struct percpu_rw_semaphore *sem)
>>> {
>>> + bool contended = false;
>>> +
>>> might_sleep();
>>> rwsem_acquire(&sem->dep_map, 0, 0, _RET_IP_);
>>> - trace_contention_begin(sem, LCB_F_PERCPU | LCB_F_WRITE);
>>>
>>> /* Notify readers to take the slow path. */
>>> rcu_sync_enter(&sem->rss);
>>> @@ -234,8 +235,11 @@ void __sched percpu_down_write(struct percpu_rw_semaphore *sem)
>>> * Try set sem->block; this provides writer-writer exclusion.
>>> * Having sem->block set makes new readers block.
>>> */
>>> - if (!__percpu_down_write_trylock(sem))
>>> + if (!__percpu_down_write_trylock(sem)) {
>>> + trace_contention_begin(sem, LCB_F_PERCPU | LCB_F_WRITE);
>>> percpu_rwsem_wait(sem, /* .reader = */ false);
>>> + contended = true;
>>> + }
>>>
>>> /* smp_mb() implied by __percpu_down_write_trylock() on success -- D matches A */
>>>
>>> @@ -247,7 +251,8 @@ void __sched percpu_down_write(struct percpu_rw_semaphore *sem)
>>>
>>> /* Wait for all active readers to complete. */
>>> rcuwait_wait_event(&sem->writer, readers_active_check(sem), TASK_UNINTERRUPTIBLE);
>>> - trace_contention_end(sem, 0);
>>> + if (contended)
>>> + trace_contention_end(sem, 0);
>>> }
>>> EXPORT_SYMBOL_GPL(percpu_down_write);
>>>
>>> --
>>> 2.42.0.869.gea05f2083d-goog

Yes, that makes sense. Sorry for missing this patch.

Reviewed-by: Waiman Long <[email protected]>


2024-02-28 00:50:22

by Namhyung Kim

Subject: Re: [PATCH] locking/percpu-rwsem: Trigger contention tracepoints only if contended

On Tue, Feb 27, 2024 at 4:19 PM Waiman Long <[email protected]> wrote:
>
>
> On 2/27/24 18:02, Namhyung Kim wrote:
> > Hello,
> >
> > On Mon, Nov 20, 2023 at 12:28 PM Namhyung Kim <[email protected]> wrote:
> >> Ping!
> >>
> >> On Wed, Nov 8, 2023 at 1:53 PM Namhyung Kim <[email protected]> wrote:
> >>> The writer path mistakenly fires the lock contention tracepoints unconditionally.
> >>> They should be conditional on the trylock result.
> > Can anybody take a look at this? It adds a lot of noise to
> > the lock contention results.
> >
> > Thanks,
> > Namhyung
> >
> >>> Signed-off-by: Namhyung Kim <[email protected]>
> >>> ---
> >>> kernel/locking/percpu-rwsem.c | 11 ++++++++---
> >>> 1 file changed, 8 insertions(+), 3 deletions(-)
> >>>
> >>> diff --git a/kernel/locking/percpu-rwsem.c b/kernel/locking/percpu-rwsem.c
> >>> index 185bd1c906b0..6083883c4fe0 100644
> >>> --- a/kernel/locking/percpu-rwsem.c
> >>> +++ b/kernel/locking/percpu-rwsem.c
> >>> @@ -223,9 +223,10 @@ static bool readers_active_check(struct percpu_rw_semaphore *sem)
> >>>
> >>> void __sched percpu_down_write(struct percpu_rw_semaphore *sem)
> >>> {
> >>> + bool contended = false;
> >>> +
> >>> might_sleep();
> >>> rwsem_acquire(&sem->dep_map, 0, 0, _RET_IP_);
> >>> - trace_contention_begin(sem, LCB_F_PERCPU | LCB_F_WRITE);
> >>>
> >>> /* Notify readers to take the slow path. */
> >>> rcu_sync_enter(&sem->rss);
> >>> @@ -234,8 +235,11 @@ void __sched percpu_down_write(struct percpu_rw_semaphore *sem)
> >>> * Try set sem->block; this provides writer-writer exclusion.
> >>> * Having sem->block set makes new readers block.
> >>> */
> >>> - if (!__percpu_down_write_trylock(sem))
> >>> + if (!__percpu_down_write_trylock(sem)) {
> >>> + trace_contention_begin(sem, LCB_F_PERCPU | LCB_F_WRITE);
> >>> percpu_rwsem_wait(sem, /* .reader = */ false);
> >>> + contended = true;
> >>> + }
> >>>
> >>> /* smp_mb() implied by __percpu_down_write_trylock() on success -- D matches A */
> >>>
> >>> @@ -247,7 +251,8 @@ void __sched percpu_down_write(struct percpu_rw_semaphore *sem)
> >>>
> >>> /* Wait for all active readers to complete. */
> >>> rcuwait_wait_event(&sem->writer, readers_active_check(sem), TASK_UNINTERRUPTIBLE);
> >>> - trace_contention_end(sem, 0);
> >>> + if (contended)
> >>> + trace_contention_end(sem, 0);
> >>> }
> >>> EXPORT_SYMBOL_GPL(percpu_down_write);
> >>>
> >>> --
> >>> 2.42.0.869.gea05f2083d-goog
>
> Yes, that makes sense. Sorry for missing this patch.
>
> Reviewed-by: Waiman Long <[email protected]>

Thanks for your review.
Namhyung

2024-02-28 12:51:11

by Ingo Molnar

Subject: Re: [PATCH] locking/percpu-rwsem: Trigger contention tracepoints only if contended


* Waiman Long <[email protected]> wrote:

>
> On 2/27/24 18:02, Namhyung Kim wrote:
> > Hello,
> >
> > On Mon, Nov 20, 2023 at 12:28 PM Namhyung Kim <[email protected]> wrote:
> > > Ping!
> > >
> > > On Wed, Nov 8, 2023 at 1:53 PM Namhyung Kim <[email protected]> wrote:
> > > > The writer path mistakenly fires the lock contention tracepoints unconditionally.
> > > > They should be conditional on the trylock result.
> > Can anybody take a look at this? It adds a lot of noise to
> > the lock contention results.
> >
> > Thanks,
> > Namhyung
> >
> > > > Signed-off-by: Namhyung Kim <[email protected]>
> > > > ---
> > > > kernel/locking/percpu-rwsem.c | 11 ++++++++---
> > > > 1 file changed, 8 insertions(+), 3 deletions(-)
> > > >
> > > > diff --git a/kernel/locking/percpu-rwsem.c b/kernel/locking/percpu-rwsem.c
> > > > index 185bd1c906b0..6083883c4fe0 100644
> > > > --- a/kernel/locking/percpu-rwsem.c
> > > > +++ b/kernel/locking/percpu-rwsem.c
> > > > @@ -223,9 +223,10 @@ static bool readers_active_check(struct percpu_rw_semaphore *sem)
> > > >
> > > > void __sched percpu_down_write(struct percpu_rw_semaphore *sem)
> > > > {
> > > > + bool contended = false;
> > > > +
> > > > might_sleep();
> > > > rwsem_acquire(&sem->dep_map, 0, 0, _RET_IP_);
> > > > - trace_contention_begin(sem, LCB_F_PERCPU | LCB_F_WRITE);
> > > >
> > > > /* Notify readers to take the slow path. */
> > > > rcu_sync_enter(&sem->rss);
> > > > @@ -234,8 +235,11 @@ void __sched percpu_down_write(struct percpu_rw_semaphore *sem)
> > > > * Try set sem->block; this provides writer-writer exclusion.
> > > > * Having sem->block set makes new readers block.
> > > > */
> > > > - if (!__percpu_down_write_trylock(sem))
> > > > + if (!__percpu_down_write_trylock(sem)) {
> > > > + trace_contention_begin(sem, LCB_F_PERCPU | LCB_F_WRITE);
> > > > percpu_rwsem_wait(sem, /* .reader = */ false);
> > > > + contended = true;
> > > > + }
> > > >
> > > > /* smp_mb() implied by __percpu_down_write_trylock() on success -- D matches A */
> > > >
> > > > @@ -247,7 +251,8 @@ void __sched percpu_down_write(struct percpu_rw_semaphore *sem)
> > > >
> > > > /* Wait for all active readers to complete. */
> > > > rcuwait_wait_event(&sem->writer, readers_active_check(sem), TASK_UNINTERRUPTIBLE);
> > > > - trace_contention_end(sem, 0);
> > > > + if (contended)
> > > > + trace_contention_end(sem, 0);
> > > > }
> > > > EXPORT_SYMBOL_GPL(percpu_down_write);
> > > >
> > > > --
> > > > 2.42.0.869.gea05f2083d-goog
>
> Yes, that makes sense. Sorry for missing this patch.
>
> Reviewed-by: Waiman Long <[email protected]>

Applied to tip:locking/core, thanks guys!

Ingo