2021-06-17 18:03:44

by Xiongwei Song

Subject: [PATCH 0/3] some improvements for lockdep

Wrapping the checks of the graph walk return values in unlikely() should
improve performance to some degree; patches 1 and 2 do this.

Patch 3 prints a warning after counting lock dependencies when a BFS
error is hit.
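
For context, unlikely() is the kernel's branch-prediction hint built on
GCC's __builtin_expect(); it asks the compiler to treat the annotated
branch as the cold path. Below is a minimal, compilable userspace sketch
of the pattern patches 1 and 2 apply. The enum values and the bfs_error()
body are simplified stand-ins for the lockdep definitions, not exact
copies.

/* sketch.c: branch hints as used by the series, simplified for userspace */
#include <stdio.h>

#define likely(x)	__builtin_expect(!!(x), 1)
#define unlikely(x)	__builtin_expect(!!(x), 0)

/* Stand-ins for lockdep's graph-walk result codes. */
enum bfs_result {
	BFS_EQUEUEFULL	= -1,	/* walk failed, e.g. queue overflow */
	BFS_RMATCH	= 0,	/* a matching (redundant) path was found */
	BFS_RNOMATCH	= 1,	/* common case: no match */
};

static inline int bfs_error(enum bfs_result res)
{
	return res < 0;
}

static int classify(enum bfs_result ret)
{
	/* Errors and matches are rare, so both branches are hinted cold. */
	if (unlikely(bfs_error(ret)))
		return 0;
	if (unlikely(ret == BFS_RMATCH))
		return 2;
	return 1;
}

int main(void)
{
	printf("%d %d %d\n", classify(BFS_RNOMATCH),
	       classify(BFS_RMATCH), classify(BFS_EQUEUEFULL));
	return 0;
}

With the hints in place, the compiler typically places the error and
match returns out of the straight-line path, so the common BFS_RNOMATCH
case falls through without a taken branch.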

Xiongwei Song (3):
locking/lockdep: unlikely bfs_error function
locking/lockdep: unlikely conditions about BFS_RMATCH
locking/lockdep: print possible warning after counting deps

kernel/locking/lockdep.c | 22 +++++++++++++++-------
1 file changed, 15 insertions(+), 7 deletions(-)

--
2.30.2


2021-06-17 18:04:20

by Xiongwei Song

Subject: [PATCH 2/3] locking/lockdep: unlikely conditions about BFS_RMATCH

From: Xiongwei Song <[email protected]>

The probability that the graph walk returns BFS_RMATCH is slim, so marking
the conditions on BFS_RMATCH as unlikely() can improve performance a little
bit.

Signed-off-by: Xiongwei Song <[email protected]>
---
kernel/locking/lockdep.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index a8a66a2a9bc1..cb94097014d8 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -2750,7 +2750,7 @@ check_redundant(struct held_lock *src, struct held_lock *target)
*/
ret = check_path(target, &src_entry, hlock_equal, usage_skip, &target_entry);

- if (ret == BFS_RMATCH)
+ if (unlikely(ret == BFS_RMATCH))
debug_atomic_inc(nr_redundant);

return ret;
@@ -2992,7 +2992,7 @@ check_prev_add(struct task_struct *curr, struct held_lock *prev,
ret = check_redundant(prev, next);
if (bfs_error(ret))
return 0;
- else if (ret == BFS_RMATCH)
+ else if (unlikely(ret == BFS_RMATCH))
return 2;

if (!*trace) {
--
2.30.2