2021-06-18 14:56:53

by Xiongwei Song

[permalink] [raw]
Subject: [PATCH v2 0/3] some improvements for lockdep

From: Xiongwei Song <[email protected]>

Marking the checks of graph walk return values as unlikely() will improve
performance to some degree; patch 1 and patch 2 are for this.

Patch 3 prints a warning after counting lock deps when the graph walk
hits a BFS error.

v2:
* For patch 3, avoid calling lockdep_unlock() twice after counting deps.
Please see https://lkml.org/lkml/2021/6/17/741.

Xiongwei Song (3):
locking/lockdep: Unlikely bfs_error() inside
locking/lockdep: Unlikely conditions about BFS_RMATCH
locking/lockdep: Print possible warning after counting deps

kernel/locking/lockdep.c | 55 +++++++++++++++++++---------------------
1 file changed, 26 insertions(+), 29 deletions(-)

--
2.30.2


2021-06-18 14:57:21

by Xiongwei Song

[permalink] [raw]
Subject: [PATCH v2 1/3] locking/lockdep: Unlikely bfs_error() inside

From: Xiongwei Song <[email protected]>

An error from the graph walk is a low-probability event, and bfs_error()
is called in several places during lockdep detection, so moving unlikely()
inside bfs_error() can improve performance a little bit.

Suggested-by: Waiman Long <[email protected]>
Signed-off-by: Xiongwei Song <[email protected]>
---
kernel/locking/lockdep.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index 7641bd407239..a8a66a2a9bc1 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -1540,7 +1540,7 @@ enum bfs_result {
*/
static inline bool bfs_error(enum bfs_result res)
{
- return res < 0;
+ return unlikely(res < 0);
}

/*
@@ -2089,7 +2089,7 @@ check_path(struct held_lock *target, struct lock_list *src_entry,

ret = __bfs_forwards(src_entry, target, match, skip, target_entry);

- if (unlikely(bfs_error(ret)))
+ if (bfs_error(ret))
print_bfs_bug(ret);

return ret;
@@ -2936,7 +2936,7 @@ check_prev_add(struct task_struct *curr, struct held_lock *prev,
* in the graph whose neighbours are to be checked.
*/
ret = check_noncircular(next, prev, trace);
- if (unlikely(bfs_error(ret) || ret == BFS_RMATCH))
+ if (bfs_error(ret) || unlikely(ret == BFS_RMATCH))
return 0;

if (!check_irq_usage(curr, prev, next))
--
2.30.2

2021-06-18 14:57:55

by Xiongwei Song

[permalink] [raw]
Subject: [PATCH v2 2/3] locking/lockdep: Unlikely conditions about BFS_RMATCH

From: Xiongwei Song <[email protected]>

The probability that the graph walk returns BFS_RMATCH is slim, so marking
conditions on BFS_RMATCH as unlikely() can improve performance a little bit.

Signed-off-by: Xiongwei Song <[email protected]>
---
kernel/locking/lockdep.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index a8a66a2a9bc1..cb94097014d8 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -2750,7 +2750,7 @@ check_redundant(struct held_lock *src, struct held_lock *target)
*/
ret = check_path(target, &src_entry, hlock_equal, usage_skip, &target_entry);

- if (ret == BFS_RMATCH)
+ if (unlikely(ret == BFS_RMATCH))
debug_atomic_inc(nr_redundant);

return ret;
@@ -2992,7 +2992,7 @@ check_prev_add(struct task_struct *curr, struct held_lock *prev,
ret = check_redundant(prev, next);
if (bfs_error(ret))
return 0;
- else if (ret == BFS_RMATCH)
+ else if (unlikely(ret == BFS_RMATCH))
return 2;

if (!*trace) {
--
2.30.2

2021-06-18 14:58:39

by Xiongwei Song

[permalink] [raw]
Subject: [PATCH v2 3/3] locking/lockdep: Print possible warning after counting deps

From: Xiongwei Song <[email protected]>

The graph walk might hit an error when counting dependencies. Once the
return value is negative, print a warning to remind users.

However, lockdep_unlock() would be called twice if we called print_bfs_bug()
directly in __lockdep_count_*_deps(), so, following Boqun's suggestion:
"
Here print_bfs_bug() will eventually call debug_locks_off_graph_unlock()
to release the graph lock, and the caller (lockdep_count_fowards_deps())
will also call graph_unlock() afterwards, and that means we unlock
*twice* if a BFS error happens... although in that case, lockdep should
stop working so messing up with the graph lock may not hurt anything,
but still, I don't think we want to do that.

So probably you can open-code __lockdep_count_forward_deps() into
lockdep_count_forwards_deps(), and call print_bfs_bug() or
graph_unlock() accordingly. The body of __lockdep_count_forward_deps()
is really small, so I think it's OK to open-code it into its caller.
"
we open-code __lockdep_count_*_deps() into lockdep_count_*_deps().

Suggested-by: Waiman Long <[email protected]>
Suggested-by: Boqun Feng <[email protected]>
Signed-off-by: Xiongwei Song <[email protected]>
---
kernel/locking/lockdep.c | 45 +++++++++++++++++++---------------------
1 file changed, 21 insertions(+), 24 deletions(-)

diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index cb94097014d8..c29453b1df50 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -2024,55 +2024,52 @@ static bool noop_count(struct lock_list *entry, void *data)
return false;
}

-static unsigned long __lockdep_count_forward_deps(struct lock_list *this)
-{
- unsigned long count = 0;
- struct lock_list *target_entry;
-
- __bfs_forwards(this, (void *)&count, noop_count, NULL, &target_entry);
-
- return count;
-}
unsigned long lockdep_count_forward_deps(struct lock_class *class)
{
- unsigned long ret, flags;
+ unsigned long count = 0, flags;
struct lock_list this;
+ struct lock_list *target_entry;
+ enum bfs_result result;

__bfs_init_root(&this, class);

raw_local_irq_save(flags);
lockdep_lock();
- ret = __lockdep_count_forward_deps(&this);
- lockdep_unlock();
- raw_local_irq_restore(flags);

- return ret;
-}
+ result = __bfs_forwards(&this, (void *)&count, noop_count, NULL, &target_entry);

-static unsigned long __lockdep_count_backward_deps(struct lock_list *this)
-{
- unsigned long count = 0;
- struct lock_list *target_entry;
+ if (bfs_error(result))
+ print_bfs_bug(result);
+ else
+ lockdep_unlock();

- __bfs_backwards(this, (void *)&count, noop_count, NULL, &target_entry);
+ raw_local_irq_restore(flags);

return count;
}

unsigned long lockdep_count_backward_deps(struct lock_class *class)
{
- unsigned long ret, flags;
+ unsigned long count = 0, flags;
struct lock_list this;
+ struct lock_list *target_entry;
+ enum bfs_result result;

__bfs_init_root(&this, class);

raw_local_irq_save(flags);
lockdep_lock();
- ret = __lockdep_count_backward_deps(&this);
- lockdep_unlock();
+
+ result = __bfs_backwards(&this, (void *)&count, noop_count, NULL, &target_entry);
+
+ if (bfs_error(result))
+ print_bfs_bug(result);
+ else
+ lockdep_unlock();
+
raw_local_irq_restore(flags);

- return ret;
+ return count;
}

/*
--
2.30.2

2021-06-24 08:05:04

by Xiongwei Song

[permalink] [raw]
Subject: Re: [PATCH v2 3/3] locking/lockdep: Print possible warning after counting deps

Hi experts & Boqun,

Any comments here?

I did a test for the series, got the results below:

/ # cat /proc/lockdep
all lock classes:
ffffffffa6b63c90 FD: 37 BD: 1 +.+.: cgroup_mutex
-> [ffffffffa6b78d70] pcpu_alloc_mutex
-> [ffff8a783d5e8760] lock#2
-> [ffffffffa6b8dc90] iattr_mutex
-> [ffffffffa6b8de10] kernfs_mutex
-> [ffffffffa6b63ab8] cgroup_file_kn_lock
-> [ffffffffa6b63bd8] css_set_lock
-> [ffffffffa6b65770] freezer_mutex

ffffffffa6b54fd8 FD: 1 BD: 56 -...: (console_sem).lock

ffffffffa6b54f80 FD: 68 BD: 11 +.+.: console_lock
-> [ffffffffa6a74d58] console_owner_lock
-> [ffffffffa6a74da0] console_owner
-> [ffffffffa6a6a2f8] resource_lock
-> [ffffffffa8013710] &zone->lock
-> [ffffffffa6bea758] kbd_event_lock
-> [ffffffffa6be12f8] vga_lock
-> [ffffffffa6b54fd8] (console_sem).lock
-> [ffffffffa6b54f38] syslog_lock
-> [ffffffffa6b802e0] fs_reclaim
-> [ffffffffa8042500] &x->wait#5
-> [ffffffffa6bfdc70] gdp_mutex
-> [ffffffffa80328a0] &k->list_lock
-> [ffff8a783d5e8760] lock#2
-> [ffffffffa6b8dc90] iattr_mutex
-> [ffffffffa6b8de10] kernfs_mutex
-> [ffffffffa6be2838] bus_type_sem
-> [ffffffffa6b8e018] sysfs_symlink_target_lock
-> [ffffffffa80421a0] &dev->power.lock
-> [ffffffffa6c01110] dpm_list_mtx
-> [ffffffffa6bdedd0] uevent_sock_mutex
-> [ffffffffa8032cb0] subsys mutex#7
-> [ffffffffa6c00958] req_lock
-> [ffffffffa74e4ce0] &p->pi_lock
-> [ffffffffa80423e0] &x->wait#7
-> [ffffffffa74e7f70] &rq->__lock
-> [ffffffffa8036c30] subsys mutex#19

I compared the top 3 of the deps count output; the results are exactly
the same with and without this series.

Regards,
Xiongwei


On Fri, Jun 18, 2021 at 10:56 PM Xiongwei Song <[email protected]> wrote:
>
> From: Xiongwei Song <[email protected]>
>
> [...]

2021-06-24 13:47:28

by Boqun Feng

[permalink] [raw]
Subject: Re: [PATCH v2 3/3] locking/lockdep: Print possible warning after counting deps

On Fri, Jun 18, 2021 at 10:55:34PM +0800, Xiongwei Song wrote:
> From: Xiongwei Song <[email protected]>
>
> The graph walk might hit error when counting dependencies. Once the
> return value is negative, print a warning to reminder users.
>
> However, lockdep_unlock() would be called twice if we call print_bfs_bug()
> directly in __lockdep_count_*_deps(), so as the suggestion from Boqun:
> "
> Here print_bfs_bug() will eventually call debug_locks_off_graph_unlock()
> to release the graph lock, and the caller (lockdep_count_fowards_deps())
> will also call graph_unlock() afterwards, and that means we unlock
> *twice* if a BFS error happens... although in that case, lockdep should
> stop working so messing up with the graph lock may not hurt anything,
> but still, I don't think we want to do that.
>
> So probably you can open-code __lockdep_count_forward_deps() into
> lockdep_count_forwards_deps(), and call print_bfs_bug() or
> graph_unlock() accordingly. The body of __lockdep_count_forward_deps()
> is really small, so I think it's OK to open-code it into its caller.
> "
> we put the code in __lockdep_count_*_deps() into lockdep_count_*_deps().
>
> Suggested-by: Waiman Long <[email protected]>
> Suggested-by: Boqun Feng <[email protected]>
> Signed-off-by: Xiongwei Song <[email protected]>

Reviewed-by: Boqun Feng <[email protected]>

Thanks!

Regards,
Boqun

> [...]

2021-07-12 09:05:46

by Xiongwei Song

[permalink] [raw]
Subject: Re: [PATCH v2 0/3] some improvements for lockdep

Hi Peter,

Will you pick up this series?

Regards,
Xiongwei

On Fri, Jun 18, 2021 at 10:55 PM Xiongwei Song <[email protected]> wrote:
>
> From: Xiongwei Song <[email protected]>
>
> [...]