2023-03-17 03:15:47

by Boqun Feng

Subject: [PATCH rcu 0/7] RCU-related lockdep changes for v6.4

Hi,

This series enables deadlock detection for srcu_read_lock() vs
synchronize_srcu().
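
For context, here is a minimal sketch of the kind of cycle that lockdep can
now report (srcu_a and srcu_b are hypothetical srcu_struct instances, not
anything from the series):

        /* Thread 1: holds a reader on srcu_a, waits for srcu_b's grace period. */
        idx = srcu_read_lock(&srcu_a);
        synchronize_srcu(&srcu_b);      /* blocks until all srcu_b readers finish */
        srcu_read_unlock(&srcu_a, idx);

        /* Thread 2: holds a reader on srcu_b, waits for srcu_a's grace period. */
        idx = srcu_read_lock(&srcu_b);
        synchronize_srcu(&srcu_a);      /* blocks until all srcu_a readers finish */
        srcu_read_unlock(&srcu_b, idx);

If both threads reach their synchronize_srcu() calls, neither grace period
can complete; with the update-side annotations in this series, lockdep sees
the srcu_a -> srcu_b and srcu_b -> srcu_a dependencies and reports the cycle.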

Again, this is my first time helping to prepare a PR, so please take a
careful look and yell at me if something is wrong. Thanks a lot!

You will also be able to find the series at:

https://github.com/fbq/linux rcu/lockdep.2023.03.12a

top commit is:

24390de55773

List of changes:

Boqun Feng (4):
locking/lockdep: Introduce lock_sync()
rcu: Annotate SRCU's update-side lockdep dependencies
locking: Reduce the number of locks in ww_mutex stress tests
locking/lockdep: Improve the deadlock scenario print for sync and read lock

Paul E. McKenney (3):
rcutorture: Add SRCU deadlock scenarios
rcutorture: Add RCU Tasks Trace and SRCU deadlock scenarios
rcutorture: Add srcu_lockdep.sh

include/linux/lockdep.h | 8 +-
include/linux/srcu.h | 34 +++-
kernel/locking/lockdep.c | 64 +++++-
kernel/locking/test-ww_mutex.c | 2 +-
kernel/rcu/rcutorture.c | 185 ++++++++++++++++++
kernel/rcu/srcutiny.c | 2 +
kernel/rcu/srcutree.c | 2 +
.../selftests/rcutorture/bin/srcu_lockdep.sh | 73 +++++++
8 files changed, 359 insertions(+), 11 deletions(-)
create mode 100755 tools/testing/selftests/rcutorture/bin/srcu_lockdep.sh

--
2.39.2



2023-03-17 03:15:54

by Boqun Feng

Subject: [PATCH rcu 3/7] locking: Reduce the number of locks in ww_mutex stress tests

The stress test in test_ww_mutex_init() uses 4095 locks because
lockdep::reference has 12 bits. Since we are about to reduce it to 11 bits
to make room for lock_sync(), and 2047 (2^11 - 1) is still a reasonable
maximum lock nesting level, adjust the test accordingly.
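
For reference, the bitfield change this accommodates looks roughly like the
following (field naming follows lockdep's struct held_lock; the exact layout
lands in the lock_sync() patch, so treat this as an illustrative sketch):

        /* Before: up to 2^12 - 1 = 4095 nested references to one held lock. */
        unsigned int references:12;

        /* After: one bit is presumably repurposed for lock_sync() tracking,
         * leaving 2^11 - 1 = 2047 references. */
        unsigned int references:11;
        unsigned int sync:1;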

Reported-by: kernel test robot <[email protected]>
Link: https://lore.kernel.org/oe-lkp/[email protected]
Signed-off-by: Boqun Feng <[email protected]>
---
kernel/locking/test-ww_mutex.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/locking/test-ww_mutex.c b/kernel/locking/test-ww_mutex.c
index 29dc253d03af..93cca6e69860 100644
--- a/kernel/locking/test-ww_mutex.c
+++ b/kernel/locking/test-ww_mutex.c
@@ -659,7 +659,7 @@ static int __init test_ww_mutex_init(void)
if (ret)
return ret;

- ret = stress(4095, hweight32(STRESS_ALL)*ncpus, STRESS_ALL);
+ ret = stress(2047, hweight32(STRESS_ALL)*ncpus, STRESS_ALL);
if (ret)
return ret;

--
2.39.2


2023-03-17 03:16:02

by Boqun Feng

Subject: [PATCH rcu 2/7] rcu: Annotate SRCU's update-side lockdep dependencies

Although all flavors of RCU readers are annotated correctly with
lockdep as recursive read locks, they do not set the lock_acquire
'check' parameter. This means that RCU read locks are not added to
the lockdep dependency graph, which in turn means that lockdep cannot
detect RCU-based deadlocks. This is not a problem for RCU flavors having
atomic read-side critical sections because context-based annotations can
catch these deadlocks, see for example the RCU_LOCKDEP_WARN() statement
in synchronize_rcu(). But context-based annotations are not helpful
for sleepable RCU, especially given that it is perfectly legal to do
synchronize_srcu(&srcu1) within an srcu_read_lock(&srcu2).

However, we can detect SRCU-based deadlocks by: (1) Making srcu_read_lock()
a 'check'ed recursive read lock and (2) Making synchronize_srcu() an empty
write-lock critical section. Even better, with the newly introduced
lock_sync(), we can avoid false positives involving irq-safe versus
irq-unsafe locks. This commit therefore makes it so.
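
As a concrete illustration (ssp1 and ssp2 stand for arbitrary srcu_struct
pointers, not anything defined in this patch), the following remains legal
and must not be flagged:

        idx = srcu_read_lock(ssp1);
        synchronize_srcu(ssp2);         /* different srcu_struct: no cycle by itself */
        srcu_read_unlock(ssp1, idx);

whereas this self-deadlock is now reported by lockdep rather than silently
hanging:

        idx = srcu_read_lock(ssp1);
        synchronize_srcu(ssp1);         /* waits for the very reader we are in */
        srcu_read_unlock(ssp1, idx);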

Note that NMI-safe SRCU read side critical sections are currently not
annotated, but might be annotated in the future.

Signed-off-by: Boqun Feng <[email protected]>
Signed-off-by: Paul E. McKenney <[email protected]>
[ boqun: Add comments for annotation per Waiman's suggestion ]
Signed-off-by: Boqun Feng <[email protected]>
---
include/linux/srcu.h | 34 ++++++++++++++++++++++++++++++++--
kernel/rcu/srcutiny.c | 2 ++
kernel/rcu/srcutree.c | 2 ++
3 files changed, 36 insertions(+), 2 deletions(-)

diff --git a/include/linux/srcu.h b/include/linux/srcu.h
index 74796cd7e7a9..7a745169b79a 100644
--- a/include/linux/srcu.h
+++ b/include/linux/srcu.h
@@ -102,6 +102,32 @@ static inline int srcu_read_lock_held(const struct srcu_struct *ssp)
return lock_is_held(&ssp->dep_map);
}

+/**
+ * Annotations provide deadlock detection for SRCU.
+ *
+ * Similar to other lockdep annotations, except there is an additional
+ * srcu_lock_sync(), which is basically an empty *write*-side critical section,
+ * see lock_sync() for more information.
+ */
+
+/* Annotates a srcu_read_lock() */
+static inline void srcu_lock_acquire(struct lockdep_map *map)
+{
+ lock_map_acquire_read(map);
+}
+
+/* Annotates a srcu_read_unlock() */
+static inline void srcu_lock_release(struct lockdep_map *map)
+{
+ lock_map_release(map);
+}
+
+/* Annotates a synchronize_srcu() */
+static inline void srcu_lock_sync(struct lockdep_map *map)
+{
+ lock_map_sync(map);
+}
+
#else /* #ifdef CONFIG_DEBUG_LOCK_ALLOC */

static inline int srcu_read_lock_held(const struct srcu_struct *ssp)
@@ -109,6 +135,10 @@ static inline int srcu_read_lock_held(const struct srcu_struct *ssp)
return 1;
}

+#define srcu_lock_acquire(m) do { } while (0)
+#define srcu_lock_release(m) do { } while (0)
+#define srcu_lock_sync(m) do { } while (0)
+
#endif /* #else #ifdef CONFIG_DEBUG_LOCK_ALLOC */

#define SRCU_NMI_UNKNOWN 0x0
@@ -182,7 +212,7 @@ static inline int srcu_read_lock(struct srcu_struct *ssp) __acquires(ssp)

srcu_check_nmi_safety(ssp, false);
retval = __srcu_read_lock(ssp);
- rcu_lock_acquire(&(ssp)->dep_map);
+ srcu_lock_acquire(&(ssp)->dep_map);
return retval;
}

@@ -254,7 +284,7 @@ static inline void srcu_read_unlock(struct srcu_struct *ssp, int idx)
{
WARN_ON_ONCE(idx & ~0x1);
srcu_check_nmi_safety(ssp, false);
- rcu_lock_release(&(ssp)->dep_map);
+ srcu_lock_release(&(ssp)->dep_map);
__srcu_read_unlock(ssp, idx);
}

diff --git a/kernel/rcu/srcutiny.c b/kernel/rcu/srcutiny.c
index b12fb0cec44d..336af24e0fe3 100644
--- a/kernel/rcu/srcutiny.c
+++ b/kernel/rcu/srcutiny.c
@@ -197,6 +197,8 @@ void synchronize_srcu(struct srcu_struct *ssp)
{
struct rcu_synchronize rs;

+ srcu_lock_sync(&ssp->dep_map);
+
RCU_LOCKDEP_WARN(lockdep_is_held(ssp) ||
lock_is_held(&rcu_bh_lock_map) ||
lock_is_held(&rcu_lock_map) ||
diff --git a/kernel/rcu/srcutree.c b/kernel/rcu/srcutree.c
index ab4ee58af84b..c541b82646b6 100644
--- a/kernel/rcu/srcutree.c
+++ b/kernel/rcu/srcutree.c
@@ -1307,6 +1307,8 @@ static void __synchronize_srcu(struct srcu_struct *ssp, bool do_norm)
{
struct rcu_synchronize rcu;

+ srcu_lock_sync(&ssp->dep_map);
+
RCU_LOCKDEP_WARN(lockdep_is_held(ssp) ||
lock_is_held(&rcu_bh_lock_map) ||
lock_is_held(&rcu_lock_map) ||
--
2.39.2


2023-03-17 03:16:08

by Boqun Feng

Subject: [PATCH rcu 5/7] rcutorture: Add SRCU deadlock scenarios

From: "Paul E. McKenney" <[email protected]>

In order to test the new SRCU-lockdep functionality, this commit adds
an rcutorture.test_srcu_lockdep module parameter that, when non-zero,
selects an SRCU deadlock scenario to execute. This parameter is a
five-digit number formatted as DNNL, where "D" is 1 to force a deadlock
and 0 to avoid doing so; "NN" is the test number, 0 for SRCU-based, 1
for SRCU/mutex-based, and 2 for SRCU/rwsem-based; and "L" is the number
of steps in the deadlock cycle.
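
For example, the value decodes the same way rcu_torture_init_srcu_lockdep()
does below:

        deadlock = test_srcu_lockdep / 1000;       /* D  */
        testtype = (test_srcu_lockdep / 10) % 100; /* NN */
        cyclelen = test_srcu_lockdep % 10;         /* L  */

so rcutorture.test_srcu_lockdep=1013 requests a forced (D=1) SRCU/mutex
(NN=01) deadlock cycle of length three (L=3), while =13 runs the same
scenario with the cycle broken so that no deadlock occurs.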

Note that rcutorture.test_srcu_lockdep=1001 will also force a hard hang.

If a non-zero value of rcutorture.test_srcu_lockdep does not select a
deadlock scenario, a console message is printed and testing continues.

[ paulmck: Apply kernel test robot feedback, add rwsem support. ]
[ paulmck: Apply Dan Carpenter feedback. ]

Signed-off-by: Paul E. McKenney <[email protected]>
Signed-off-by: Boqun Feng <[email protected]>
---
kernel/rcu/rcutorture.c | 151 ++++++++++++++++++++++++++++++++++++++++
1 file changed, 151 insertions(+)

diff --git a/kernel/rcu/rcutorture.c b/kernel/rcu/rcutorture.c
index 8e6c023212cb..80ff9a743d31 100644
--- a/kernel/rcu/rcutorture.c
+++ b/kernel/rcu/rcutorture.c
@@ -120,6 +120,7 @@ torture_param(int, test_boost, 1, "Test RCU prio boost: 0=no, 1=maybe, 2=yes.");
torture_param(int, test_boost_duration, 4, "Duration of each boost test, seconds.");
torture_param(int, test_boost_interval, 7, "Interval between boost tests, seconds.");
torture_param(bool, test_no_idle_hz, true, "Test support for tickless idle CPUs");
+torture_param(int, test_srcu_lockdep, 0, "Test specified SRCU deadlock scenario.");
torture_param(int, verbose, 1, "Enable verbose debugging printk()s");

static char *torture_type = "rcu";
@@ -3463,6 +3464,154 @@ static void rcutorture_sync(void)
cur_ops->sync();
}

+static DEFINE_MUTEX(mut0);
+static DEFINE_MUTEX(mut1);
+static DEFINE_MUTEX(mut2);
+static DEFINE_MUTEX(mut3);
+static DEFINE_MUTEX(mut4);
+static DEFINE_MUTEX(mut5);
+static DEFINE_MUTEX(mut6);
+static DEFINE_MUTEX(mut7);
+static DEFINE_MUTEX(mut8);
+static DEFINE_MUTEX(mut9);
+
+static DECLARE_RWSEM(rwsem0);
+static DECLARE_RWSEM(rwsem1);
+static DECLARE_RWSEM(rwsem2);
+static DECLARE_RWSEM(rwsem3);
+static DECLARE_RWSEM(rwsem4);
+static DECLARE_RWSEM(rwsem5);
+static DECLARE_RWSEM(rwsem6);
+static DECLARE_RWSEM(rwsem7);
+static DECLARE_RWSEM(rwsem8);
+static DECLARE_RWSEM(rwsem9);
+
+DEFINE_STATIC_SRCU(srcu0);
+DEFINE_STATIC_SRCU(srcu1);
+DEFINE_STATIC_SRCU(srcu2);
+DEFINE_STATIC_SRCU(srcu3);
+DEFINE_STATIC_SRCU(srcu4);
+DEFINE_STATIC_SRCU(srcu5);
+DEFINE_STATIC_SRCU(srcu6);
+DEFINE_STATIC_SRCU(srcu7);
+DEFINE_STATIC_SRCU(srcu8);
+DEFINE_STATIC_SRCU(srcu9);
+
+static int srcu_lockdep_next(const char *f, const char *fl, const char *fs, const char *fu, int i,
+ int cyclelen, int deadlock)
+{
+ int j = i + 1;
+
+ if (j >= cyclelen)
+ j = deadlock ? 0 : -1;
+ if (j >= 0)
+ pr_info("%s: %s(%d), %s(%d), %s(%d)\n", f, fl, i, fs, j, fu, i);
+ else
+ pr_info("%s: %s(%d), %s(%d)\n", f, fl, i, fu, i);
+ return j;
+}
+
+// Test lockdep on SRCU-based deadlock scenarios.
+static void rcu_torture_init_srcu_lockdep(void)
+{
+ int cyclelen;
+ int deadlock;
+ bool err = false;
+ int i;
+ int j;
+ int idx;
+ struct mutex *muts[] = { &mut0, &mut1, &mut2, &mut3, &mut4,
+ &mut5, &mut6, &mut7, &mut8, &mut9 };
+ struct rw_semaphore *rwsems[] = { &rwsem0, &rwsem1, &rwsem2, &rwsem3, &rwsem4,
+ &rwsem5, &rwsem6, &rwsem7, &rwsem8, &rwsem9 };
+ struct srcu_struct *srcus[] = { &srcu0, &srcu1, &srcu2, &srcu3, &srcu4,
+ &srcu5, &srcu6, &srcu7, &srcu8, &srcu9 };
+ int testtype;
+
+ if (!test_srcu_lockdep)
+ return;
+
+ deadlock = test_srcu_lockdep / 1000;
+ testtype = (test_srcu_lockdep / 10) % 100;
+ cyclelen = test_srcu_lockdep % 10;
+ WARN_ON_ONCE(ARRAY_SIZE(muts) != ARRAY_SIZE(srcus));
+ if (WARN_ONCE(deadlock != !!deadlock,
+ "%s: test_srcu_lockdep=%d and deadlock digit %d must be zero or one.\n",
+ __func__, test_srcu_lockdep, deadlock))
+ err = true;
+ if (WARN_ONCE(cyclelen <= 0,
+ "%s: test_srcu_lockdep=%d and cycle-length digit %d must be greater than zero.\n",
+ __func__, test_srcu_lockdep, cyclelen))
+ err = true;
+ if (err)
+ goto err_out;
+
+ if (testtype == 0) {
+ pr_info("%s: test_srcu_lockdep = %05d: SRCU %d-way %sdeadlock.\n",
+ __func__, test_srcu_lockdep, cyclelen, deadlock ? "" : "non-");
+ if (deadlock && cyclelen == 1)
+ pr_info("%s: Expect hang.\n", __func__);
+ for (i = 0; i < cyclelen; i++) {
+ j = srcu_lockdep_next(__func__, "srcu_read_lock", "synchronize_srcu",
+ "srcu_read_unlock", i, cyclelen, deadlock);
+ idx = srcu_read_lock(srcus[i]);
+ if (j >= 0)
+ synchronize_srcu(srcus[j]);
+ srcu_read_unlock(srcus[i], idx);
+ }
+ return;
+ }
+
+ if (testtype == 1) {
+ pr_info("%s: test_srcu_lockdep = %05d: SRCU/mutex %d-way %sdeadlock.\n",
+ __func__, test_srcu_lockdep, cyclelen, deadlock ? "" : "non-");
+ for (i = 0; i < cyclelen; i++) {
+ pr_info("%s: srcu_read_lock(%d), mutex_lock(%d), mutex_unlock(%d), srcu_read_unlock(%d)\n",
+ __func__, i, i, i, i);
+ idx = srcu_read_lock(srcus[i]);
+ mutex_lock(muts[i]);
+ mutex_unlock(muts[i]);
+ srcu_read_unlock(srcus[i], idx);
+
+ j = srcu_lockdep_next(__func__, "mutex_lock", "synchronize_srcu",
+ "mutex_unlock", i, cyclelen, deadlock);
+ mutex_lock(muts[i]);
+ if (j >= 0)
+ synchronize_srcu(srcus[j]);
+ mutex_unlock(muts[i]);
+ }
+ return;
+ }
+
+ if (testtype == 2) {
+ pr_info("%s: test_srcu_lockdep = %05d: SRCU/rwsem %d-way %sdeadlock.\n",
+ __func__, test_srcu_lockdep, cyclelen, deadlock ? "" : "non-");
+ for (i = 0; i < cyclelen; i++) {
+ pr_info("%s: srcu_read_lock(%d), down_read(%d), up_read(%d), srcu_read_unlock(%d)\n",
+ __func__, i, i, i, i);
+ idx = srcu_read_lock(srcus[i]);
+ down_read(rwsems[i]);
+ up_read(rwsems[i]);
+ srcu_read_unlock(srcus[i], idx);
+
+ j = srcu_lockdep_next(__func__, "down_write", "synchronize_srcu",
+ "up_write", i, cyclelen, deadlock);
+ down_write(rwsems[i]);
+ if (j >= 0)
+ synchronize_srcu(srcus[j]);
+ up_write(rwsems[i]);
+ }
+ return;
+ }
+
+err_out:
+ pr_info("%s: test_srcu_lockdep = %05d does nothing.\n", __func__, test_srcu_lockdep);
+ pr_info("%s: test_srcu_lockdep = DNNL.\n", __func__);
+ pr_info("%s: D: Deadlock if nonzero.\n", __func__);
+ pr_info("%s: NN: Test number, 0=SRCU, 1=SRCU/mutex, 2=SRCU/rwsem.\n", __func__);
+ pr_info("%s: L: Cycle length.\n", __func__);
+}
+
static int __init
rcu_torture_init(void)
{
@@ -3504,6 +3653,8 @@ rcu_torture_init(void)
if (cur_ops->init)
cur_ops->init();

+ rcu_torture_init_srcu_lockdep();
+
if (nreaders >= 0) {
nrealreaders = nreaders;
} else {
--
2.39.2


2023-03-17 18:38:37

by Paul E. McKenney

Subject: Re: [PATCH rcu 3/7] locking: Reduce the number of locks in ww_mutex stress tests

On Thu, Mar 16, 2023 at 08:13:35PM -0700, Boqun Feng wrote:
> The stress test in test_ww_mutex_init() uses 4095 locks because
> lockdep::reference has 12 bits. Since we are about to reduce it to 11 bits
> to make room for lock_sync(), and 2047 (2^11 - 1) is still a reasonable
> maximum lock nesting level, adjust the test accordingly.
>
> Reported-by: kernel test robot <[email protected]>
> Link: https://lore.kernel.org/oe-lkp/[email protected]
> Signed-off-by: Boqun Feng <[email protected]>

Tested-by: Paul E. McKenney <[email protected]>

> ---
> kernel/locking/test-ww_mutex.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/kernel/locking/test-ww_mutex.c b/kernel/locking/test-ww_mutex.c
> index 29dc253d03af..93cca6e69860 100644
> --- a/kernel/locking/test-ww_mutex.c
> +++ b/kernel/locking/test-ww_mutex.c
> @@ -659,7 +659,7 @@ static int __init test_ww_mutex_init(void)
> if (ret)
> return ret;
>
> - ret = stress(4095, hweight32(STRESS_ALL)*ncpus, STRESS_ALL);
> + ret = stress(2047, hweight32(STRESS_ALL)*ncpus, STRESS_ALL);
> if (ret)
> return ret;
>
> --
> 2.39.2
>

2023-03-17 21:26:52

by Boqun Feng

Subject: Re: [PATCH rcu 3/7] locking: Reduce the number of locks in ww_mutex stress tests

On Fri, Mar 17, 2023 at 11:38:19AM -0700, Paul E. McKenney wrote:
> On Thu, Mar 16, 2023 at 08:13:35PM -0700, Boqun Feng wrote:
> > The stress test in test_ww_mutex_init() uses 4095 locks because
> > lockdep::reference has 12 bits. Since we are about to reduce it to 11 bits
> > to make room for lock_sync(), and 2047 (2^11 - 1) is still a reasonable
> > maximum lock nesting level, adjust the test accordingly.
> >
> > Reported-by: kernel test robot <[email protected]>
> > Link: https://lore.kernel.org/oe-lkp/[email protected]
> > Signed-off-by: Boqun Feng <[email protected]>
>
> Tested-by: Paul E. McKenney <[email protected]>
>

Applied, thanks!

Regards,
Boqun

> > ---
> > kernel/locking/test-ww_mutex.c | 2 +-
> > 1 file changed, 1 insertion(+), 1 deletion(-)
> >
> > diff --git a/kernel/locking/test-ww_mutex.c b/kernel/locking/test-ww_mutex.c
> > index 29dc253d03af..93cca6e69860 100644
> > --- a/kernel/locking/test-ww_mutex.c
> > +++ b/kernel/locking/test-ww_mutex.c
> > @@ -659,7 +659,7 @@ static int __init test_ww_mutex_init(void)
> > if (ret)
> > return ret;
> >
> > - ret = stress(4095, hweight32(STRESS_ALL)*ncpus, STRESS_ALL);
> > + ret = stress(2047, hweight32(STRESS_ALL)*ncpus, STRESS_ALL);
> > if (ret)
> > return ret;
> >
> > --
> > 2.39.2
> >