2019-07-15 14:47:21

by Joel Fernandes

Subject: [PATCH 0/9] Harden list_for_each_entry_rcu() and family

Hi,
This series aims to provide lockdep checking to RCU list macros for additional
kernel hardening.

RCU has a number of primitives for "consumption" of an RCU-protected pointer.
Most of the time, these consumers make sure that such accesses are either
within an RCU reader section (such as with rcu_dereference{,_bh,_sched}()) or
under a lock (such as with rcu_dereference_protected()).

However, there are other ways to consume RCU pointers, such as through
list_for_each_entry_rcu() or hlist_for_each_entry_rcu(). Unlike the
rcu_dereference() family, these consumers do no lockdep checking at all. And
with the growing number of RCU list uses (1000+), it is possible for bugs
that lockdep checks could catch to creep in and go unnoticed.

Since the RCU consolidation efforts last year, the traditional RCU flavors
(preempt, bh, sched) have all been consolidated. In other words, any of these
flavors can cause a reader section to occur and all of them must cease before
the reader section is considered to be unlocked. Thanks to this, we can
generically check if we are in an RCU reader. This is what patch 1 does. Note
that list_for_each_entry_rcu() and family are different from the
rcu_dereference() family in that there is no _bh or _sched version of these
macros. They are used under many different RCU reader flavors, and also under SRCU.
Patch 1 adds a new internal function rcu_read_lock_any_held() which checks
if any reader section is active at all, when these macros are called. If no
reader section exists, then the optional fourth argument to
list_for_each_entry_rcu() can be a lockdep expression which is evaluated
(similar to how rcu_dereference_check() works). If no lockdep expression is
passed and we are not in a reader, then a splat occurs. To see this in
action, apply the patches, remove the lockdep expression using the following
diff, and see what happens:

+++ b/arch/x86/pci/mmconfig-shared.c
@@ -55,7 +55,7 @@ static void list_add_sorted(struct pci_mmcfg_region *new)
struct pci_mmcfg_region *cfg;

/* keep list sorted by segment and starting bus number */
- list_for_each_entry_rcu(cfg, &pci_mmcfg_list, list, pci_mmcfg_lock_held()) {
+ list_for_each_entry_rcu(cfg, &pci_mmcfg_list, list) {


The optional argument trick to list_for_each_entry_rcu() can also be used in
the future to possibly remove rcu_dereference_{,bh,sched}_protected() API and
we can pass an optional lockdep expression to rcu_dereference() itself, thus
eliminating three more RCU APIs.

Note that some list macro wrappers already do their own lockdep checking in the
caller side. These can be eliminated in favor of the built-in lockdep checking
in the list macros that this series adds. For example, the workqueue code has
an assert_rcu_or_wq_mutex() function which is called in for_each_wq(). This
series replaces that with the built-in check.

Also in the future, we can extend these checks to list_entry_rcu() and other
list macros as well, if needed.

Please note that I have kept this option default-disabled under a new config:
CONFIG_PROVE_RCU_LIST. This is so that the check stays disabled until all
users are converted to pass the optional argument. There are 1000 or so
users, and it is not possible to pass in the optional lockdep
expression in a single series since it is done on a case-by-case basis. I did
convert a few users in this series itself.

v2->v3: Simplified rcu-sync logic after rebase (Paul)
Added check for bh_map (Paul)
Refactored out more of the common code (Joel)
Added Oleg ack to rcu-sync patch.

v1->v2: Have assert_rcu_or_wq_mutex deleted (Daniel Jordan)
Simplify rcu_read_lock_any_held() (Peter Zijlstra)
Simplified rcu-sync logic (Oleg Nesterov)
Updated documentation and rculist comments.
Added GregKH ack.

RFC->v1:
Simplify list checking macro (Rasmus Villemoes)

Joel Fernandes (Google) (9):
rcu/update: Remove useless check for debug_locks (v1)
rcu: Add support for consolidated-RCU reader checking (v3)
rcu/sync: Remove custom check for reader-section (v2)
ipv4: add lockdep condition to fix for_each_entry (v1)
driver/core: Convert to use built-in RCU list checking (v1)
workqueue: Convert for_each_wq to use built-in list check (v2)
x86/pci: Pass lockdep condition to pci_mmcfg_list iterator (v1)
acpi: Use built-in RCU list checking for acpi_ioremaps list (v1)
doc: Update documentation about list_for_each_entry_rcu (v1)

Documentation/RCU/lockdep.txt | 15 ++++++++---
Documentation/RCU/whatisRCU.txt | 9 ++++++-
arch/x86/pci/mmconfig-shared.c | 5 ++--
drivers/acpi/osl.c | 6 +++--
drivers/base/base.h | 1 +
drivers/base/core.c | 10 +++++++
drivers/base/power/runtime.c | 15 +++++++----
include/linux/rcu_sync.h | 4 +--
include/linux/rculist.h | 28 +++++++++++++++----
include/linux/rcupdate.h | 7 +++++
kernel/rcu/Kconfig.debug | 11 ++++++++
kernel/rcu/update.c | 48 ++++++++++++++++++---------------
kernel/workqueue.c | 10 ++-----
net/ipv4/fib_frontend.c | 3 ++-
14 files changed, 119 insertions(+), 53 deletions(-)

--
2.22.0.510.g264f2c817a-goog


2019-07-15 14:47:24

by Joel Fernandes

Subject: [PATCH 1/9] rcu/update: Remove useless check for debug_locks (v1)

In rcu_read_lock_sched_held(), debug_locks can never be false at the
point we check it because debug_lockdep_rcu_enabled() already checked
debug_locks at the beginning. Remove the redundant check.

Signed-off-by: Joel Fernandes (Google) <[email protected]>
---
kernel/rcu/update.c | 6 +-----
1 file changed, 1 insertion(+), 5 deletions(-)

diff --git a/kernel/rcu/update.c b/kernel/rcu/update.c
index 61df2bf08563..9dd5aeef6e70 100644
--- a/kernel/rcu/update.c
+++ b/kernel/rcu/update.c
@@ -93,17 +93,13 @@ module_param(rcu_normal_after_boot, int, 0);
*/
int rcu_read_lock_sched_held(void)
{
- int lockdep_opinion = 0;
-
if (!debug_lockdep_rcu_enabled())
return 1;
if (!rcu_is_watching())
return 0;
if (!rcu_lockdep_current_cpu_online())
return 0;
- if (debug_locks)
- lockdep_opinion = lock_is_held(&rcu_sched_lock_map);
- return lockdep_opinion || !preemptible();
+ return lock_is_held(&rcu_sched_lock_map) || !preemptible();
}
EXPORT_SYMBOL(rcu_read_lock_sched_held);
#endif
--
2.22.0.510.g264f2c817a-goog

2019-07-15 14:47:38

by Joel Fernandes

Subject: [PATCH 2/9] rcu: Add support for consolidated-RCU reader checking (v3)

This patch adds support for checking RCU reader sections in list
traversal macros. Optionally, if the list macro is called under SRCU or
other lock/mutex protection, then appropriate lockdep expressions can be
passed to make the checks pass.

Existing list_for_each_entry_rcu() invocations don't need to pass the
optional fourth argument (cond) unless they are under some non-RCU
protection and need to make the lockdep check pass.

Signed-off-by: Joel Fernandes (Google) <[email protected]>
---
include/linux/rculist.h | 28 ++++++++++++++++++++-----
include/linux/rcupdate.h | 7 +++++++
kernel/rcu/Kconfig.debug | 11 ++++++++++
kernel/rcu/update.c | 44 ++++++++++++++++++++++++----------------
4 files changed, 67 insertions(+), 23 deletions(-)

diff --git a/include/linux/rculist.h b/include/linux/rculist.h
index e91ec9ddcd30..1048160625bb 100644
--- a/include/linux/rculist.h
+++ b/include/linux/rculist.h
@@ -40,6 +40,20 @@ static inline void INIT_LIST_HEAD_RCU(struct list_head *list)
*/
#define list_next_rcu(list) (*((struct list_head __rcu **)(&(list)->next)))

+/*
+ * Check during list traversal that we are within an RCU reader
+ */
+
+#ifdef CONFIG_PROVE_RCU_LIST
+#define __list_check_rcu(dummy, cond, ...) \
+ ({ \
+ RCU_LOCKDEP_WARN(!cond && !rcu_read_lock_any_held(), \
+ "RCU-list traversed in non-reader section!"); \
+ })
+#else
+#define __list_check_rcu(dummy, cond, ...) ({})
+#endif
+
/*
* Insert a new entry between two known consecutive entries.
*
@@ -343,14 +357,16 @@ static inline void list_splice_tail_init_rcu(struct list_head *list,
* @pos: the type * to use as a loop cursor.
* @head: the head for your list.
* @member: the name of the list_head within the struct.
+ * @cond: optional lockdep expression if called from non-RCU protection.
*
* This list-traversal primitive may safely run concurrently with
* the _rcu list-mutation primitives such as list_add_rcu()
* as long as the traversal is guarded by rcu_read_lock().
*/
-#define list_for_each_entry_rcu(pos, head, member) \
- for (pos = list_entry_rcu((head)->next, typeof(*pos), member); \
- &pos->member != (head); \
+#define list_for_each_entry_rcu(pos, head, member, cond...) \
+ for (__list_check_rcu(dummy, ## cond, 0), \
+ pos = list_entry_rcu((head)->next, typeof(*pos), member); \
+ &pos->member != (head); \
pos = list_entry_rcu(pos->member.next, typeof(*pos), member))

/**
@@ -616,13 +632,15 @@ static inline void hlist_add_behind_rcu(struct hlist_node *n,
* @pos: the type * to use as a loop cursor.
* @head: the head for your list.
* @member: the name of the hlist_node within the struct.
+ * @cond: optional lockdep expression if called from non-RCU protection.
*
* This list-traversal primitive may safely run concurrently with
* the _rcu list-mutation primitives such as hlist_add_head_rcu()
* as long as the traversal is guarded by rcu_read_lock().
*/
-#define hlist_for_each_entry_rcu(pos, head, member) \
- for (pos = hlist_entry_safe (rcu_dereference_raw(hlist_first_rcu(head)),\
+#define hlist_for_each_entry_rcu(pos, head, member, cond...) \
+ for (__list_check_rcu(dummy, ## cond, 0), \
+ pos = hlist_entry_safe (rcu_dereference_raw(hlist_first_rcu(head)),\
typeof(*(pos)), member); \
pos; \
pos = hlist_entry_safe(rcu_dereference_raw(hlist_next_rcu(\
diff --git a/include/linux/rcupdate.h b/include/linux/rcupdate.h
index 8f7167478c1d..f3c29efdf19a 100644
--- a/include/linux/rcupdate.h
+++ b/include/linux/rcupdate.h
@@ -221,6 +221,7 @@ int debug_lockdep_rcu_enabled(void);
int rcu_read_lock_held(void);
int rcu_read_lock_bh_held(void);
int rcu_read_lock_sched_held(void);
+int rcu_read_lock_any_held(void);

#else /* #ifdef CONFIG_DEBUG_LOCK_ALLOC */

@@ -241,6 +242,12 @@ static inline int rcu_read_lock_sched_held(void)
{
return !preemptible();
}
+
+static inline int rcu_read_lock_any_held(void)
+{
+ return !preemptible();
+}
+
#endif /* #else #ifdef CONFIG_DEBUG_LOCK_ALLOC */

#ifdef CONFIG_PROVE_RCU
diff --git a/kernel/rcu/Kconfig.debug b/kernel/rcu/Kconfig.debug
index 5ec3ea4028e2..7fbd21dbfcd0 100644
--- a/kernel/rcu/Kconfig.debug
+++ b/kernel/rcu/Kconfig.debug
@@ -8,6 +8,17 @@ menu "RCU Debugging"
config PROVE_RCU
def_bool PROVE_LOCKING

+config PROVE_RCU_LIST
+ bool "RCU list lockdep debugging"
+ depends on PROVE_RCU
+ default n
+ help
+ Enable RCU lockdep checking for list usages. By default it is
+ turned off since there are several list RCU users that still
+ need to be converted to pass a lockdep expression. To prevent
+ false-positive splats, we keep it default disabled but once all
+ users are converted, we can remove this config option.
+
config TORTURE_TEST
tristate
default n
diff --git a/kernel/rcu/update.c b/kernel/rcu/update.c
index 9dd5aeef6e70..b7a4e3b5fa98 100644
--- a/kernel/rcu/update.c
+++ b/kernel/rcu/update.c
@@ -91,14 +91,18 @@ module_param(rcu_normal_after_boot, int, 0);
* Similarly, we avoid claiming an SRCU read lock held if the current
* CPU is offline.
*/
+#define rcu_read_lock_held_common() \
+ if (!debug_lockdep_rcu_enabled()) \
+ return 1; \
+ if (!rcu_is_watching()) \
+ return 0; \
+ if (!rcu_lockdep_current_cpu_online()) \
+ return 0;
+
int rcu_read_lock_sched_held(void)
{
- if (!debug_lockdep_rcu_enabled())
- return 1;
- if (!rcu_is_watching())
- return 0;
- if (!rcu_lockdep_current_cpu_online())
- return 0;
+ rcu_read_lock_held_common();
+
return lock_is_held(&rcu_sched_lock_map) || !preemptible();
}
EXPORT_SYMBOL(rcu_read_lock_sched_held);
@@ -257,12 +261,8 @@ NOKPROBE_SYMBOL(debug_lockdep_rcu_enabled);
*/
int rcu_read_lock_held(void)
{
- if (!debug_lockdep_rcu_enabled())
- return 1;
- if (!rcu_is_watching())
- return 0;
- if (!rcu_lockdep_current_cpu_online())
- return 0;
+ rcu_read_lock_held_common();
+
return lock_is_held(&rcu_lock_map);
}
EXPORT_SYMBOL_GPL(rcu_read_lock_held);
@@ -284,16 +284,24 @@ EXPORT_SYMBOL_GPL(rcu_read_lock_held);
*/
int rcu_read_lock_bh_held(void)
{
- if (!debug_lockdep_rcu_enabled())
- return 1;
- if (!rcu_is_watching())
- return 0;
- if (!rcu_lockdep_current_cpu_online())
- return 0;
+ rcu_read_lock_held_common();
+
return in_softirq() || irqs_disabled();
}
EXPORT_SYMBOL_GPL(rcu_read_lock_bh_held);

+int rcu_read_lock_any_held(void)
+{
+ rcu_read_lock_held_common();
+
+ if (lock_is_held(&rcu_lock_map) ||
+ lock_is_held(&rcu_bh_lock_map) ||
+ lock_is_held(&rcu_sched_lock_map))
+ return 1;
+ return !preemptible();
+}
+EXPORT_SYMBOL_GPL(rcu_read_lock_any_held);
+
#endif /* #ifdef CONFIG_DEBUG_LOCK_ALLOC */

/**
--
2.22.0.510.g264f2c817a-goog

2019-07-15 15:34:46

by Joel Fernandes

Subject: [PATCH 4/9] ipv4: add lockdep condition to fix for_each_entry (v1)

Using the support added in an earlier patch, add lockdep conditions to the
RCU list usage here.

Signed-off-by: Joel Fernandes (Google) <[email protected]>
---
net/ipv4/fib_frontend.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/net/ipv4/fib_frontend.c b/net/ipv4/fib_frontend.c
index 317339cd7f03..26b0fb24e2c2 100644
--- a/net/ipv4/fib_frontend.c
+++ b/net/ipv4/fib_frontend.c
@@ -124,7 +124,8 @@ struct fib_table *fib_get_table(struct net *net, u32 id)
h = id & (FIB_TABLE_HASHSZ - 1);

head = &net->ipv4.fib_table_hash[h];
- hlist_for_each_entry_rcu(tb, head, tb_hlist) {
+ hlist_for_each_entry_rcu(tb, head, tb_hlist,
+ lockdep_rtnl_is_held()) {
if (tb->tb_id == id)
return tb;
}
--
2.22.0.510.g264f2c817a-goog

2019-07-16 18:41:05

by Paul E. McKenney

Subject: Re: [PATCH 2/9] rcu: Add support for consolidated-RCU reader checking (v3)

On Mon, Jul 15, 2019 at 10:36:58AM -0400, Joel Fernandes (Google) wrote:
> This patch adds support for checking RCU reader sections in list
> traversal macros. Optionally, if the list macro is called under SRCU or
> other lock/mutex protection, then appropriate lockdep expressions can be
> passed to make the checks pass.
>
> Existing list_for_each_entry_rcu() invocations don't need to pass the
> optional fourth argument (cond) unless they are under some non-RCU
> protection and need to make the lockdep check pass.
>
> Signed-off-by: Joel Fernandes (Google) <[email protected]>

Now that I am on the correct version, again please fold in the checks
for the extra argument. The ability to have an optional argument looks
quite helpful, especially when compared to growing the RCU API!

A few more things below.

> ---
> include/linux/rculist.h | 28 ++++++++++++++++++++-----
> include/linux/rcupdate.h | 7 +++++++
> kernel/rcu/Kconfig.debug | 11 ++++++++++
> kernel/rcu/update.c | 44 ++++++++++++++++++++++++----------------
> 4 files changed, 67 insertions(+), 23 deletions(-)
>
> diff --git a/include/linux/rculist.h b/include/linux/rculist.h
> index e91ec9ddcd30..1048160625bb 100644
> --- a/include/linux/rculist.h
> +++ b/include/linux/rculist.h
> @@ -40,6 +40,20 @@ static inline void INIT_LIST_HEAD_RCU(struct list_head *list)
> */
> #define list_next_rcu(list) (*((struct list_head __rcu **)(&(list)->next)))
>
> +/*
> + * Check during list traversal that we are within an RCU reader
> + */
> +
> +#ifdef CONFIG_PROVE_RCU_LIST

This new Kconfig option is OK temporarily, but unless there is reason to
fear malfunction that a few weeks of rcutorture, 0day, and -next won't
find, it would be better to just use CONFIG_PROVE_RCU. The overall goal
is to reduce the number of RCU knobs rather than grow them, must though
history might lead one to believe otherwise. :-/

> +#define __list_check_rcu(dummy, cond, ...) \
> + ({ \
> + RCU_LOCKDEP_WARN(!cond && !rcu_read_lock_any_held(), \
> + "RCU-list traversed in non-reader section!"); \
> + })
> +#else
> +#define __list_check_rcu(dummy, cond, ...) ({})
> +#endif
> +
> /*
> * Insert a new entry between two known consecutive entries.
> *
> @@ -343,14 +357,16 @@ static inline void list_splice_tail_init_rcu(struct list_head *list,
> * @pos: the type * to use as a loop cursor.
> * @head: the head for your list.
> * @member: the name of the list_head within the struct.
> + * @cond: optional lockdep expression if called from non-RCU protection.
> *
> * This list-traversal primitive may safely run concurrently with
> * the _rcu list-mutation primitives such as list_add_rcu()
> * as long as the traversal is guarded by rcu_read_lock().
> */
> -#define list_for_each_entry_rcu(pos, head, member) \
> - for (pos = list_entry_rcu((head)->next, typeof(*pos), member); \
> - &pos->member != (head); \
> +#define list_for_each_entry_rcu(pos, head, member, cond...) \
> + for (__list_check_rcu(dummy, ## cond, 0), \
> + pos = list_entry_rcu((head)->next, typeof(*pos), member); \
> + &pos->member != (head); \
> pos = list_entry_rcu(pos->member.next, typeof(*pos), member))
>
> /**
> @@ -616,13 +632,15 @@ static inline void hlist_add_behind_rcu(struct hlist_node *n,
> * @pos: the type * to use as a loop cursor.
> * @head: the head for your list.
> * @member: the name of the hlist_node within the struct.
> + * @cond: optional lockdep expression if called from non-RCU protection.
> *
> * This list-traversal primitive may safely run concurrently with
> * the _rcu list-mutation primitives such as hlist_add_head_rcu()
> * as long as the traversal is guarded by rcu_read_lock().
> */
> -#define hlist_for_each_entry_rcu(pos, head, member) \
> - for (pos = hlist_entry_safe (rcu_dereference_raw(hlist_first_rcu(head)),\
> +#define hlist_for_each_entry_rcu(pos, head, member, cond...) \
> + for (__list_check_rcu(dummy, ## cond, 0), \
> + pos = hlist_entry_safe (rcu_dereference_raw(hlist_first_rcu(head)),\
> typeof(*(pos)), member); \
> pos; \
> pos = hlist_entry_safe(rcu_dereference_raw(hlist_next_rcu(\
> diff --git a/include/linux/rcupdate.h b/include/linux/rcupdate.h
> index 8f7167478c1d..f3c29efdf19a 100644
> --- a/include/linux/rcupdate.h
> +++ b/include/linux/rcupdate.h
> @@ -221,6 +221,7 @@ int debug_lockdep_rcu_enabled(void);
> int rcu_read_lock_held(void);
> int rcu_read_lock_bh_held(void);
> int rcu_read_lock_sched_held(void);
> +int rcu_read_lock_any_held(void);
>
> #else /* #ifdef CONFIG_DEBUG_LOCK_ALLOC */
>
> @@ -241,6 +242,12 @@ static inline int rcu_read_lock_sched_held(void)
> {
> return !preemptible();
> }
> +
> +static inline int rcu_read_lock_any_held(void)
> +{
> + return !preemptible();
> +}
> +
> #endif /* #else #ifdef CONFIG_DEBUG_LOCK_ALLOC */
>
> #ifdef CONFIG_PROVE_RCU
> diff --git a/kernel/rcu/Kconfig.debug b/kernel/rcu/Kconfig.debug
> index 5ec3ea4028e2..7fbd21dbfcd0 100644
> --- a/kernel/rcu/Kconfig.debug
> +++ b/kernel/rcu/Kconfig.debug
> @@ -8,6 +8,17 @@ menu "RCU Debugging"
> config PROVE_RCU
> def_bool PROVE_LOCKING
>
> +config PROVE_RCU_LIST
> + bool "RCU list lockdep debugging"
> + depends on PROVE_RCU

This must also depend on RCU_EXPERT.

> + default n
> + help
> + Enable RCU lockdep checking for list usages. By default it is
> + turned off since there are several list RCU users that still
> + need to be converted to pass a lockdep expression. To prevent
> + false-positive splats, we keep it default disabled but once all
> + users are converted, we can remove this config option.
> +
> config TORTURE_TEST
> tristate
> default n
> diff --git a/kernel/rcu/update.c b/kernel/rcu/update.c
> index 9dd5aeef6e70..b7a4e3b5fa98 100644
> --- a/kernel/rcu/update.c
> +++ b/kernel/rcu/update.c
> @@ -91,14 +91,18 @@ module_param(rcu_normal_after_boot, int, 0);
> * Similarly, we avoid claiming an SRCU read lock held if the current
> * CPU is offline.
> */
> +#define rcu_read_lock_held_common() \
> + if (!debug_lockdep_rcu_enabled()) \
> + return 1; \
> + if (!rcu_is_watching()) \
> + return 0; \
> + if (!rcu_lockdep_current_cpu_online()) \
> + return 0;

Nice abstraction of common code!

Thanx, Paul

> +
> int rcu_read_lock_sched_held(void)
> {
> - if (!debug_lockdep_rcu_enabled())
> - return 1;
> - if (!rcu_is_watching())
> - return 0;
> - if (!rcu_lockdep_current_cpu_online())
> - return 0;
> + rcu_read_lock_held_common();
> +
> return lock_is_held(&rcu_sched_lock_map) || !preemptible();
> }
> EXPORT_SYMBOL(rcu_read_lock_sched_held);
> @@ -257,12 +261,8 @@ NOKPROBE_SYMBOL(debug_lockdep_rcu_enabled);
> */
> int rcu_read_lock_held(void)
> {
> - if (!debug_lockdep_rcu_enabled())
> - return 1;
> - if (!rcu_is_watching())
> - return 0;
> - if (!rcu_lockdep_current_cpu_online())
> - return 0;
> + rcu_read_lock_held_common();
> +
> return lock_is_held(&rcu_lock_map);
> }
> EXPORT_SYMBOL_GPL(rcu_read_lock_held);
> @@ -284,16 +284,24 @@ EXPORT_SYMBOL_GPL(rcu_read_lock_held);
> */
> int rcu_read_lock_bh_held(void)
> {
> - if (!debug_lockdep_rcu_enabled())
> - return 1;
> - if (!rcu_is_watching())
> - return 0;
> - if (!rcu_lockdep_current_cpu_online())
> - return 0;
> + rcu_read_lock_held_common();
> +
> return in_softirq() || irqs_disabled();
> }
> EXPORT_SYMBOL_GPL(rcu_read_lock_bh_held);
>
> +int rcu_read_lock_any_held(void)
> +{
> + rcu_read_lock_held_common();
> +
> + if (lock_is_held(&rcu_lock_map) ||
> + lock_is_held(&rcu_bh_lock_map) ||
> + lock_is_held(&rcu_sched_lock_map))
> + return 1;
> + return !preemptible();
> +}
> +EXPORT_SYMBOL_GPL(rcu_read_lock_any_held);
> +
> #endif /* #ifdef CONFIG_DEBUG_LOCK_ALLOC */
>
> /**
> --
> 2.22.0.510.g264f2c817a-goog
>

2019-07-16 18:42:50

by Paul E. McKenney

Subject: Re: [PATCH 4/9] ipv4: add lockdep condition to fix for_each_entry (v1)

On Mon, Jul 15, 2019 at 10:37:00AM -0400, Joel Fernandes (Google) wrote:
> Using the support added in an earlier patch, add lockdep conditions to the
> RCU list usage here.
>
> Signed-off-by: Joel Fernandes (Google) <[email protected]>

We need an ack or better from the subsystem maintainer for this one.

Thanx, Paul

> ---
> net/ipv4/fib_frontend.c | 3 ++-
> 1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/net/ipv4/fib_frontend.c b/net/ipv4/fib_frontend.c
> index 317339cd7f03..26b0fb24e2c2 100644
> --- a/net/ipv4/fib_frontend.c
> +++ b/net/ipv4/fib_frontend.c
> @@ -124,7 +124,8 @@ struct fib_table *fib_get_table(struct net *net, u32 id)
> h = id & (FIB_TABLE_HASHSZ - 1);
>
> head = &net->ipv4.fib_table_hash[h];
> - hlist_for_each_entry_rcu(tb, head, tb_hlist) {
> + hlist_for_each_entry_rcu(tb, head, tb_hlist,
> + lockdep_rtnl_is_held()) {
> if (tb->tb_id == id)
> return tb;
> }
> --
> 2.22.0.510.g264f2c817a-goog
>

2019-07-16 18:47:47

by Joel Fernandes

Subject: Re: [PATCH 2/9] rcu: Add support for consolidated-RCU reader checking (v3)

On Tue, Jul 16, 2019 at 11:38:33AM -0700, Paul E. McKenney wrote:
> On Mon, Jul 15, 2019 at 10:36:58AM -0400, Joel Fernandes (Google) wrote:
> > This patch adds support for checking RCU reader sections in list
> > traversal macros. Optionally, if the list macro is called under SRCU or
> > other lock/mutex protection, then appropriate lockdep expressions can be
> > passed to make the checks pass.
> >
> > Existing list_for_each_entry_rcu() invocations don't need to pass the
> > optional fourth argument (cond) unless they are under some non-RCU
> > protection and need to make the lockdep check pass.
> >
> > Signed-off-by: Joel Fernandes (Google) <[email protected]>
>
> Now that I am on the correct version, again please fold in the checks
> for the extra argument. The ability to have an optional argument looks
> quite helpful, especially when compared to growing the RCU API!

I did fold this in and replied with a pull request URL based on the /dev
branch. But we can hold off on the pull requests until we decide on the
comments below:

> A few more things below.
> > ---
> > include/linux/rculist.h | 28 ++++++++++++++++++++-----
> > include/linux/rcupdate.h | 7 +++++++
> > kernel/rcu/Kconfig.debug | 11 ++++++++++
> > kernel/rcu/update.c | 44 ++++++++++++++++++++++++----------------
> > 4 files changed, 67 insertions(+), 23 deletions(-)
> >
> > diff --git a/include/linux/rculist.h b/include/linux/rculist.h
> > index e91ec9ddcd30..1048160625bb 100644
> > --- a/include/linux/rculist.h
> > +++ b/include/linux/rculist.h
> > @@ -40,6 +40,20 @@ static inline void INIT_LIST_HEAD_RCU(struct list_head *list)
> > */
> > #define list_next_rcu(list) (*((struct list_head __rcu **)(&(list)->next)))
> >
> > +/*
> > + * Check during list traversal that we are within an RCU reader
> > + */
> > +
> > +#ifdef CONFIG_PROVE_RCU_LIST
>
> This new Kconfig option is OK temporarily, but unless there is reason to
> fear malfunction that a few weeks of rcutorture, 0day, and -next won't
> find, it would be better to just use CONFIG_PROVE_RCU. The overall goal
> is to reduce the number of RCU knobs rather than grow them, much though
> history might lead one to believe otherwise. :-/

If you want, we can try to drop this option and just use PROVE_RCU; however,
I must say there may be several warnings that need to be fixed in a short
period of time (even a few weeks may be too short) considering the 1000+
uses of RCU lists.

But I don't mind dropping it and it may just accelerate the fixing up of all
callers.

> > +#define __list_check_rcu(dummy, cond, ...) \
> > + ({ \
> > + RCU_LOCKDEP_WARN(!cond && !rcu_read_lock_any_held(), \
> > + "RCU-list traversed in non-reader section!"); \
> > + })
> > +#else
> > +#define __list_check_rcu(dummy, cond, ...) ({})
> > +#endif
> > +
> > /*
> > * Insert a new entry between two known consecutive entries.
> > *
> > @@ -343,14 +357,16 @@ static inline void list_splice_tail_init_rcu(struct list_head *list,
> > * @pos: the type * to use as a loop cursor.
> > * @head: the head for your list.
> > * @member: the name of the list_head within the struct.
> > + * @cond: optional lockdep expression if called from non-RCU protection.
> > *
> > * This list-traversal primitive may safely run concurrently with
> > * the _rcu list-mutation primitives such as list_add_rcu()
> > * as long as the traversal is guarded by rcu_read_lock().
> > */
> > -#define list_for_each_entry_rcu(pos, head, member) \
> > - for (pos = list_entry_rcu((head)->next, typeof(*pos), member); \
> > - &pos->member != (head); \
> > +#define list_for_each_entry_rcu(pos, head, member, cond...) \
> > + for (__list_check_rcu(dummy, ## cond, 0), \
> > + pos = list_entry_rcu((head)->next, typeof(*pos), member); \
> > + &pos->member != (head); \
> > pos = list_entry_rcu(pos->member.next, typeof(*pos), member))
> >
> > /**
> > @@ -616,13 +632,15 @@ static inline void hlist_add_behind_rcu(struct hlist_node *n,
> > * @pos: the type * to use as a loop cursor.
> > * @head: the head for your list.
> > * @member: the name of the hlist_node within the struct.
> > + * @cond: optional lockdep expression if called from non-RCU protection.
> > *
> > * This list-traversal primitive may safely run concurrently with
> > * the _rcu list-mutation primitives such as hlist_add_head_rcu()
> > * as long as the traversal is guarded by rcu_read_lock().
> > */
> > -#define hlist_for_each_entry_rcu(pos, head, member) \
> > - for (pos = hlist_entry_safe (rcu_dereference_raw(hlist_first_rcu(head)),\
> > +#define hlist_for_each_entry_rcu(pos, head, member, cond...) \
> > + for (__list_check_rcu(dummy, ## cond, 0), \
> > + pos = hlist_entry_safe (rcu_dereference_raw(hlist_first_rcu(head)),\
> > typeof(*(pos)), member); \
> > pos; \
> > pos = hlist_entry_safe(rcu_dereference_raw(hlist_next_rcu(\
> > diff --git a/include/linux/rcupdate.h b/include/linux/rcupdate.h
> > index 8f7167478c1d..f3c29efdf19a 100644
> > --- a/include/linux/rcupdate.h
> > +++ b/include/linux/rcupdate.h
> > @@ -221,6 +221,7 @@ int debug_lockdep_rcu_enabled(void);
> > int rcu_read_lock_held(void);
> > int rcu_read_lock_bh_held(void);
> > int rcu_read_lock_sched_held(void);
> > +int rcu_read_lock_any_held(void);
> >
> > #else /* #ifdef CONFIG_DEBUG_LOCK_ALLOC */
> >
> > @@ -241,6 +242,12 @@ static inline int rcu_read_lock_sched_held(void)
> > {
> > return !preemptible();
> > }
> > +
> > +static inline int rcu_read_lock_any_held(void)
> > +{
> > + return !preemptible();
> > +}
> > +
> > #endif /* #else #ifdef CONFIG_DEBUG_LOCK_ALLOC */
> >
> > #ifdef CONFIG_PROVE_RCU
> > diff --git a/kernel/rcu/Kconfig.debug b/kernel/rcu/Kconfig.debug
> > index 5ec3ea4028e2..7fbd21dbfcd0 100644
> > --- a/kernel/rcu/Kconfig.debug
> > +++ b/kernel/rcu/Kconfig.debug
> > @@ -8,6 +8,17 @@ menu "RCU Debugging"
> > config PROVE_RCU
> > def_bool PROVE_LOCKING
> >
> > +config PROVE_RCU_LIST
> > + bool "RCU list lockdep debugging"
> > + depends on PROVE_RCU
>
> This must also depend on RCU_EXPERT.

Sure.

> > + default n
> > + help
> > + Enable RCU lockdep checking for list usages. By default it is
> > + turned off since there are several list RCU users that still
> > + need to be converted to pass a lockdep expression. To prevent
> > + false-positive splats, we keep it default disabled but once all
> > + users are converted, we can remove this config option.
> > +
> > config TORTURE_TEST
> > tristate
> > default n
> > diff --git a/kernel/rcu/update.c b/kernel/rcu/update.c
> > index 9dd5aeef6e70..b7a4e3b5fa98 100644
> > --- a/kernel/rcu/update.c
> > +++ b/kernel/rcu/update.c
> > @@ -91,14 +91,18 @@ module_param(rcu_normal_after_boot, int, 0);
> > * Similarly, we avoid claiming an SRCU read lock held if the current
> > * CPU is offline.
> > */
> > +#define rcu_read_lock_held_common() \
> > + if (!debug_lockdep_rcu_enabled()) \
> > + return 1; \
> > + if (!rcu_is_watching()) \
> > + return 0; \
> > + if (!rcu_lockdep_current_cpu_online()) \
> > + return 0;
>
> Nice abstraction of common code!

Thanks!

2019-07-16 18:49:33

by Paul E. McKenney

Subject: Re: [PATCH 0/9] Harden list_for_each_entry_rcu() and family

On Mon, Jul 15, 2019 at 10:36:56AM -0400, Joel Fernandes (Google) wrote:
> Hi,
> This series aims to provide lockdep checking to RCU list macros for additional
> kernel hardening.
>
> RCU has a number of primitives for "consumption" of an RCU-protected pointer.
> Most of the time, these consumers make sure that such accesses are either
> within an RCU reader section (such as with rcu_dereference{,_bh,_sched}()) or
> under a lock (such as with rcu_dereference_protected()).
>
> However, there are other ways to consume RCU pointers, such as through
> list_for_each_entry_rcu() or hlist_for_each_entry_rcu(). Unlike the
> rcu_dereference() family, these consumers do no lockdep checking at all. And
> with the growing number of RCU list uses (1000+), it is possible for bugs
> that lockdep checks could catch to creep in and go unnoticed.
>
> Since RCU consolidation efforts last year, the different traditional RCU
> flavors (preempt, bh, sched) are all consolidated. In other words, any of these
> flavors can cause a reader section to occur and all of them must cease before
> the reader section is considered to be unlocked. Thanks to this, we can
> generically check if we are in an RCU reader. This is what patch 1 does. Note
> that list_for_each_entry_rcu() and family are different from the
> rcu_dereference() family in that there is no _bh or _sched version of these
> macros. They are used under many different RCU reader flavors, and also under SRCU.
> Patch 1 adds a new internal function rcu_read_lock_any_held() which checks
> if any reader section is active at all, when these macros are called. If no
> reader section exists, then the optional fourth argument to
> list_for_each_entry_rcu() can be a lockdep expression which is evaluated
> (similar to how rcu_dereference_check() works). If no lockdep expression is
> passed and we are not in a reader, then a splat occurs. To see this in
> action, apply the patches, remove the lockdep expression using the following
> diff, and see what happens:
>
> +++ b/arch/x86/pci/mmconfig-shared.c
> @@ -55,7 +55,7 @@ static void list_add_sorted(struct pci_mmcfg_region *new)
> struct pci_mmcfg_region *cfg;
>
> /* keep list sorted by segment and starting bus number */
> - list_for_each_entry_rcu(cfg, &pci_mmcfg_list, list, pci_mmcfg_lock_held()) {
> + list_for_each_entry_rcu(cfg, &pci_mmcfg_list, list) {
>
>
> The optional argument trick to list_for_each_entry_rcu() can also be used in
> the future to possibly remove the rcu_dereference_{,bh,sched}_protected() APIs:
> we can pass an optional lockdep expression to rcu_dereference() itself,
> thus eliminating three more RCU APIs.
>
> Note that some list macro wrappers already do their own lockdep checking in the
> caller side. These can be eliminated in favor of the built-in lockdep checking
> in the list macro that this series adds. For example, workqueue code has an
> assert_rcu_or_wq_mutex() function which is called in for_each_wq(). This
> series replaces that in favor of the built-in check.
>
> Also in the future, we can extend these checks to list_entry_rcu() and other
> list macros as well, if needed.
>
> Please note that I have kept this option default-disabled under a new config:
> CONFIG_PROVE_RCU_LIST. This is so that until all users are converted to pass
> the optional argument, we should keep the check disabled. There are about
> 1000 users, and it is not possible to pass in the optional lockdep
> expression in a single series since it is done on a case-by-case basis. I did
> convert a few users in this series itself.

I do like the optional argument as opposed to the traditional practice
of expanding the RCU API! Good stuff!!!

Please resend incorporating the acks and the changes from feedback.
I will hold off on any patches not yet having their maintainer's ack,
but it is OK to include them in v4. (I will just avoid applying them.)

The documentation patch needs a bit of wordsmithing, but I can do that.
Feel free to take another pass on it if you wish, though.

Thanx, Paul

> v2->v3: Simplified rcu-sync logic after rebase (Paul)
> Added check for bh_map (Paul)
> Refactored out more of the common code (Joel)
> Added Oleg ack to rcu-sync patch.
>
> v1->v2: Have assert_rcu_or_wq_mutex deleted (Daniel Jordan)
> Simplify rcu_read_lock_any_held() (Peter Zijlstra)
> Simplified rcu-sync logic (Oleg Nesterov)
> Updated documentation and rculist comments.
> Added GregKH ack.
>
> RFC->v1:
> Simplify list checking macro (Rasmus Villemoes)
>
> Joel Fernandes (Google) (9):
> rcu/update: Remove useless check for debug_locks (v1)
> rcu: Add support for consolidated-RCU reader checking (v3)
> rcu/sync: Remove custom check for reader-section (v2)
> ipv4: add lockdep condition to fix for_each_entry (v1)
> driver/core: Convert to use built-in RCU list checking (v1)
> workqueue: Convert for_each_wq to use built-in list check (v2)
> x86/pci: Pass lockdep condition to pcm_mmcfg_list iterator (v1)
> acpi: Use built-in RCU list checking for acpi_ioremaps list (v1)
> doc: Update documentation about list_for_each_entry_rcu (v1)
>
> Documentation/RCU/lockdep.txt | 15 ++++++++---
> Documentation/RCU/whatisRCU.txt | 9 ++++++-
> arch/x86/pci/mmconfig-shared.c | 5 ++--
> drivers/acpi/osl.c | 6 +++--
> drivers/base/base.h | 1 +
> drivers/base/core.c | 10 +++++++
> drivers/base/power/runtime.c | 15 +++++++----
> include/linux/rcu_sync.h | 4 +--
> include/linux/rculist.h | 28 +++++++++++++++----
> include/linux/rcupdate.h | 7 +++++
> kernel/rcu/Kconfig.debug | 11 ++++++++
> kernel/rcu/update.c | 48 ++++++++++++++++++---------------
> kernel/workqueue.c | 10 ++-----
> net/ipv4/fib_frontend.c | 3 ++-
> 14 files changed, 119 insertions(+), 53 deletions(-)
>
> --
> 2.22.0.510.g264f2c817a-goog
>

2019-07-16 18:54:24

by Paul E. McKenney

Subject: Re: [PATCH 2/9] rcu: Add support for consolidated-RCU reader checking (v3)

On Tue, Jul 16, 2019 at 02:46:49PM -0400, Joel Fernandes wrote:
> On Tue, Jul 16, 2019 at 11:38:33AM -0700, Paul E. McKenney wrote:
> > On Mon, Jul 15, 2019 at 10:36:58AM -0400, Joel Fernandes (Google) wrote:
> > > This patch adds support for checking RCU reader sections in list
> > > traversal macros. Optionally, if the list macro is called under SRCU or
> > > other lock/mutex protection, then appropriate lockdep expressions can be
> > > passed to make the checks pass.
> > >
> > > Existing list_for_each_entry_rcu() invocations don't need to pass the
> > > optional fourth argument (cond) unless they are under some non-RCU
> > > protection and need to make the lockdep check pass.
> > >
> > > Signed-off-by: Joel Fernandes (Google) <[email protected]>
> >
> > Now that I am on the correct version, again please fold in the checks
> > for the extra argument. The ability to have an optional argument looks
> > quite helpful, especially when compared to growing the RCU API!
>
> I did fold this and replied with a pull request URL based on /dev branch. But
> we can hold off on the pull requests until we decide on the below comments:
>
> > A few more things below.
> > > ---
> > > include/linux/rculist.h | 28 ++++++++++++++++++++-----
> > > include/linux/rcupdate.h | 7 +++++++
> > > kernel/rcu/Kconfig.debug | 11 ++++++++++
> > > kernel/rcu/update.c | 44 ++++++++++++++++++++++++----------------
> > > 4 files changed, 67 insertions(+), 23 deletions(-)
> > >
> > > diff --git a/include/linux/rculist.h b/include/linux/rculist.h
> > > index e91ec9ddcd30..1048160625bb 100644
> > > --- a/include/linux/rculist.h
> > > +++ b/include/linux/rculist.h
> > > @@ -40,6 +40,20 @@ static inline void INIT_LIST_HEAD_RCU(struct list_head *list)
> > > */
> > > #define list_next_rcu(list) (*((struct list_head __rcu **)(&(list)->next)))
> > >
> > > +/*
> > > + * Check during list traversal that we are within an RCU reader
> > > + */
> > > +
> > > +#ifdef CONFIG_PROVE_RCU_LIST
> >
> > This new Kconfig option is OK temporarily, but unless there is reason to
> > fear malfunction that a few weeks of rcutorture, 0day, and -next won't
> > find, it would be better to just use CONFIG_PROVE_RCU. The overall goal
> > is to reduce the number of RCU knobs rather than grow them, much though
> > history might lead one to believe otherwise. :-/
>
> If you want, we can try to drop this option and just use PROVE_RCU; however,
> I must say there may be several warnings that need to be fixed in a short
> period of time (even a few weeks may be too short) considering the 1000+
> uses of RCU lists.

Do many people other than me build with CONFIG_PROVE_RCU? If so, then
that would be a good reason for a temporary CONFIG_PROVE_RCU_LIST,
as in going away in a release or two once the warnings get fixed.

> But I don't mind dropping it and it may just accelerate the fixing up of all
> callers.

I will let you decide based on the above question. But if you have
CONFIG_PROVE_RCU_LIST, as noted below, it needs to depend on RCU_EXPERT.

Thanx, Paul

> > > +#define __list_check_rcu(dummy, cond, ...) \
> > > + ({ \
> > > + RCU_LOCKDEP_WARN(!cond && !rcu_read_lock_any_held(), \
> > > + "RCU-list traversed in non-reader section!"); \
> > > + })
> > > +#else
> > > +#define __list_check_rcu(dummy, cond, ...) ({})
> > > +#endif
> > > +
> > > /*
> > > * Insert a new entry between two known consecutive entries.
> > > *
> > > @@ -343,14 +357,16 @@ static inline void list_splice_tail_init_rcu(struct list_head *list,
> > > * @pos: the type * to use as a loop cursor.
> > > * @head: the head for your list.
> > > * @member: the name of the list_head within the struct.
> > > + * @cond: optional lockdep expression if called from non-RCU protection.
> > > *
> > > * This list-traversal primitive may safely run concurrently with
> > > * the _rcu list-mutation primitives such as list_add_rcu()
> > > * as long as the traversal is guarded by rcu_read_lock().
> > > */
> > > -#define list_for_each_entry_rcu(pos, head, member) \
> > > - for (pos = list_entry_rcu((head)->next, typeof(*pos), member); \
> > > - &pos->member != (head); \
> > > +#define list_for_each_entry_rcu(pos, head, member, cond...) \
> > > + for (__list_check_rcu(dummy, ## cond, 0), \
> > > + pos = list_entry_rcu((head)->next, typeof(*pos), member); \
> > > + &pos->member != (head); \
> > > pos = list_entry_rcu(pos->member.next, typeof(*pos), member))
> > >
> > > /**
> > > @@ -616,13 +632,15 @@ static inline void hlist_add_behind_rcu(struct hlist_node *n,
> > > * @pos: the type * to use as a loop cursor.
> > > * @head: the head for your list.
> > > * @member: the name of the hlist_node within the struct.
> > > + * @cond: optional lockdep expression if called from non-RCU protection.
> > > *
> > > * This list-traversal primitive may safely run concurrently with
> > > * the _rcu list-mutation primitives such as hlist_add_head_rcu()
> > > * as long as the traversal is guarded by rcu_read_lock().
> > > */
> > > -#define hlist_for_each_entry_rcu(pos, head, member) \
> > > - for (pos = hlist_entry_safe (rcu_dereference_raw(hlist_first_rcu(head)),\
> > > +#define hlist_for_each_entry_rcu(pos, head, member, cond...) \
> > > + for (__list_check_rcu(dummy, ## cond, 0), \
> > > + pos = hlist_entry_safe (rcu_dereference_raw(hlist_first_rcu(head)),\
> > > typeof(*(pos)), member); \
> > > pos; \
> > > pos = hlist_entry_safe(rcu_dereference_raw(hlist_next_rcu(\
> > > diff --git a/include/linux/rcupdate.h b/include/linux/rcupdate.h
> > > index 8f7167478c1d..f3c29efdf19a 100644
> > > --- a/include/linux/rcupdate.h
> > > +++ b/include/linux/rcupdate.h
> > > @@ -221,6 +221,7 @@ int debug_lockdep_rcu_enabled(void);
> > > int rcu_read_lock_held(void);
> > > int rcu_read_lock_bh_held(void);
> > > int rcu_read_lock_sched_held(void);
> > > +int rcu_read_lock_any_held(void);
> > >
> > > #else /* #ifdef CONFIG_DEBUG_LOCK_ALLOC */
> > >
> > > @@ -241,6 +242,12 @@ static inline int rcu_read_lock_sched_held(void)
> > > {
> > > return !preemptible();
> > > }
> > > +
> > > +static inline int rcu_read_lock_any_held(void)
> > > +{
> > > + return !preemptible();
> > > +}
> > > +
> > > #endif /* #else #ifdef CONFIG_DEBUG_LOCK_ALLOC */
> > >
> > > #ifdef CONFIG_PROVE_RCU
> > > diff --git a/kernel/rcu/Kconfig.debug b/kernel/rcu/Kconfig.debug
> > > index 5ec3ea4028e2..7fbd21dbfcd0 100644
> > > --- a/kernel/rcu/Kconfig.debug
> > > +++ b/kernel/rcu/Kconfig.debug
> > > @@ -8,6 +8,17 @@ menu "RCU Debugging"
> > > config PROVE_RCU
> > > def_bool PROVE_LOCKING
> > >
> > > +config PROVE_RCU_LIST
> > > + bool "RCU list lockdep debugging"
> > > + depends on PROVE_RCU
> >
> > This must also depend on RCU_EXPERT.
>
> Sure.
>
> > > + default n
> > > + help
> > > + Enable RCU lockdep checking for list usages. By default it is
> > > + turned off since there are several list RCU users that still
> > > + need to be converted to pass a lockdep expression. To prevent
> > > + false-positive splats, we keep it default disabled but once all
> > > + users are converted, we can remove this config option.
> > > +
> > > config TORTURE_TEST
> > > tristate
> > > default n
> > > diff --git a/kernel/rcu/update.c b/kernel/rcu/update.c
> > > index 9dd5aeef6e70..b7a4e3b5fa98 100644
> > > --- a/kernel/rcu/update.c
> > > +++ b/kernel/rcu/update.c
> > > @@ -91,14 +91,18 @@ module_param(rcu_normal_after_boot, int, 0);
> > > * Similarly, we avoid claiming an SRCU read lock held if the current
> > > * CPU is offline.
> > > */
> > > +#define rcu_read_lock_held_common() \
> > > + if (!debug_lockdep_rcu_enabled()) \
> > > + return 1; \
> > > + if (!rcu_is_watching()) \
> > > + return 0; \
> > > + if (!rcu_lockdep_current_cpu_online()) \
> > > + return 0;
> >
> > Nice abstraction of common code!
>
> Thanks!
>

2019-07-16 21:14:36

by David Miller

Subject: Re: [PATCH 4/9] ipv4: add lockdep condition to fix for_each_entry (v1)

From: "Paul E. McKenney" <[email protected]>
Date: Tue, 16 Jul 2019 11:39:55 -0700

> On Mon, Jul 15, 2019 at 10:37:00AM -0400, Joel Fernandes (Google) wrote:
>> Using the previous support added, use it for adding lockdep conditions
>> to list usage here.
>>
>> Signed-off-by: Joel Fernandes (Google) <[email protected]>
>
> We need an ack or better from the subsystem maintainer for this one.

Acked-by: David S. Miller <[email protected]>

2019-07-16 22:04:28

by Joel Fernandes

Subject: Re: [PATCH 2/9] rcu: Add support for consolidated-RCU reader checking (v3)

On Tue, Jul 16, 2019 at 11:53:03AM -0700, Paul E. McKenney wrote:
[snip]
> > > A few more things below.
> > > > ---
> > > > include/linux/rculist.h | 28 ++++++++++++++++++++-----
> > > > include/linux/rcupdate.h | 7 +++++++
> > > > kernel/rcu/Kconfig.debug | 11 ++++++++++
> > > > kernel/rcu/update.c | 44 ++++++++++++++++++++++++----------------
> > > > 4 files changed, 67 insertions(+), 23 deletions(-)
> > > >
> > > > diff --git a/include/linux/rculist.h b/include/linux/rculist.h
> > > > index e91ec9ddcd30..1048160625bb 100644
> > > > --- a/include/linux/rculist.h
> > > > +++ b/include/linux/rculist.h
> > > > @@ -40,6 +40,20 @@ static inline void INIT_LIST_HEAD_RCU(struct list_head *list)
> > > > */
> > > > #define list_next_rcu(list) (*((struct list_head __rcu **)(&(list)->next)))
> > > >
> > > > +/*
> > > > + * Check during list traversal that we are within an RCU reader
> > > > + */
> > > > +
> > > > +#ifdef CONFIG_PROVE_RCU_LIST
> > >
> > > This new Kconfig option is OK temporarily, but unless there is reason to
> > > fear malfunction that a few weeks of rcutorture, 0day, and -next won't
> > > find, it would be better to just use CONFIG_PROVE_RCU. The overall goal
> > > is to reduce the number of RCU knobs rather than grow them, much though
> > > history might lead one to believe otherwise. :-/
> >
> > > If you want, we can try to drop this option and just use PROVE_RCU; however,
> > > I must say there may be several warnings that need to be fixed in a short
> > period of time (even a few weeks may be too short) considering the 1000+
> > uses of RCU lists.
> Do many people other than me build with CONFIG_PROVE_RCU? If so, then
> that would be a good reason for a temporary CONFIG_PROVE_RCU_LIST,
> as in going away in a release or two once the warnings get fixed.

PROVE_RCU is enabled by default with PROVE_LOCKING, so it is used quite
heavily.

> > But I don't mind dropping it and it may just accelerate the fixing up of all
> > callers.
>
> I will let you decide based on the above question. But if you have
> CONFIG_PROVE_RCU_LIST, as noted below, it needs to depend on RCU_EXPERT.

Ok, will make it depend. But yes for temporary purpose, I will leave it as a
config and remove it later.

thanks,

- Joel

2019-07-17 00:10:50

by Paul E. McKenney

Subject: Re: [PATCH 2/9] rcu: Add support for consolidated-RCU reader checking (v3)

On Tue, Jul 16, 2019 at 06:02:05PM -0400, Joel Fernandes wrote:
> On Tue, Jul 16, 2019 at 11:53:03AM -0700, Paul E. McKenney wrote:
> [snip]
> > > > A few more things below.
> > > > > ---
> > > > > include/linux/rculist.h | 28 ++++++++++++++++++++-----
> > > > > include/linux/rcupdate.h | 7 +++++++
> > > > > kernel/rcu/Kconfig.debug | 11 ++++++++++
> > > > > kernel/rcu/update.c | 44 ++++++++++++++++++++++++----------------
> > > > > 4 files changed, 67 insertions(+), 23 deletions(-)
> > > > >
> > > > > diff --git a/include/linux/rculist.h b/include/linux/rculist.h
> > > > > index e91ec9ddcd30..1048160625bb 100644
> > > > > --- a/include/linux/rculist.h
> > > > > +++ b/include/linux/rculist.h
> > > > > @@ -40,6 +40,20 @@ static inline void INIT_LIST_HEAD_RCU(struct list_head *list)
> > > > > */
> > > > > #define list_next_rcu(list) (*((struct list_head __rcu **)(&(list)->next)))
> > > > >
> > > > > +/*
> > > > > + * Check during list traversal that we are within an RCU reader
> > > > > + */
> > > > > +
> > > > > +#ifdef CONFIG_PROVE_RCU_LIST
> > > >
> > > > This new Kconfig option is OK temporarily, but unless there is reason to
> > > > fear malfunction that a few weeks of rcutorture, 0day, and -next won't
> > > > find, it would be better to just use CONFIG_PROVE_RCU. The overall goal
> > > > is to reduce the number of RCU knobs rather than grow them, much though
> > > > history might lead one to believe otherwise. :-/
> > >
> > > If you want, we can try to drop this option and just use PROVE_RCU; however,
> > > I must say there may be several warnings that need to be fixed in a short
> > > period of time (even a few weeks may be too short) considering the 1000+
> > > uses of RCU lists.
> > Do many people other than me build with CONFIG_PROVE_RCU? If so, then
> > that would be a good reason for a temporary CONFIG_PROVE_RCU_LIST,
> > as in going away in a release or two once the warnings get fixed.
>
> PROVE_RCU is enabled by default with PROVE_LOCKING, so it is used quite
> heavily.
>
> > > But I don't mind dropping it and it may just accelerate the fixing up of all
> > > callers.
> >
> > I will let you decide based on the above question. But if you have
> > CONFIG_PROVE_RCU_LIST, as noted below, it needs to depend on RCU_EXPERT.
>
> Ok, will make it depend. But yes for temporary purpose, I will leave it as a
> config and remove it later.

Very good, thank you! Plus you got another ack. ;-)

Thanx, Paul