2020-05-19 21:48:56

by Ahmed S. Darwish

Subject: [PATCH v1 07/25] lockdep: Add preemption disabled assertion API

Asserting that preemption is disabled is a critical sanity check.
Developers are usually reluctant to add such a check in a fastpath, as
reading the preemption count can be costly.

Extend the lockdep API with a preemption disabled assertion. If lockdep
is disabled, or if the underlying architecture does not support kernel
preemption, this assert has no runtime overhead.

Since the lockdep assertion references sched.h's task_struct 'current',
define it in lockdep.c instead of lockdep.h. This breaks a potential
circular header dependency for call sites that are defined inline in
other header files which are themselves already included, and needed,
by sched.h.

Mark the exported assertion symbol with NOKPROBE_SYMBOL. Lockdep
functions can be involved in breakpoint handling, and probing those
functions can cause breakpoint recursion.
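
A minimal usage sketch (a hypothetical call site, not part of this
patch; the struct and function names below are made up purely for
illustration):

  struct frob_stats {
          u64 bytes;
  };

  /*
   * A per-CPU statistics update that is only correct with preemption
   * disabled, since the task must not migrate between this_cpu_ptr()
   * and the store.
   */
  static void frob_stats_add(struct frob_stats __percpu *stats, u64 bytes)
  {
          struct frob_stats *this;

          /* No-op when lockdep is disabled, see above */
          lockdep_assert_preemption_disabled();

          this = this_cpu_ptr(stats);
          this->bytes += bytes;
  }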

References: f54bb2ec02c8 ("locking/lockdep: Add IRQs disabled/enabled assertion APIs: ...")
References: 2f43c6022d84 ("kprobes: Prohibit probing on lockdep functions")
Signed-off-by: Ahmed S. Darwish <[email protected]>
---
include/linux/lockdep.h | 9 +++++++++
kernel/locking/lockdep.c | 15 +++++++++++++++
lib/Kconfig.debug | 1 +
3 files changed, 25 insertions(+)

diff --git a/include/linux/lockdep.h b/include/linux/lockdep.h
index 206774ac6946..54c929ea5b98 100644
--- a/include/linux/lockdep.h
+++ b/include/linux/lockdep.h
@@ -702,6 +702,14 @@ do { \
"Not in hardirq as expected\n"); \
} while (0)

+/*
+ * Don't define this assertion here to avoid a call-site's header file
+ * dependency on sched.h task_struct current. This is needed by call
+ * sites that are inline defined at header files already included by
+ * sched.h.
+ */
+void lockdep_assert_preemption_disabled(void);
+
#else
# define might_lock(lock) do { } while (0)
# define might_lock_read(lock) do { } while (0)
@@ -709,6 +717,7 @@ do { \
# define lockdep_assert_irqs_enabled() do { } while (0)
# define lockdep_assert_irqs_disabled() do { } while (0)
# define lockdep_assert_in_irq() do { } while (0)
+# define lockdep_assert_preemption_disabled() do { } while (0)
#endif

#ifdef CONFIG_PROVE_RAW_LOCK_NESTING
diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index ac10db66cc63..4dae65bc65c2 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -5857,3 +5857,18 @@ void lockdep_rcu_suspicious(const char *file, const int line, const char *s)
dump_stack();
}
EXPORT_SYMBOL_GPL(lockdep_rcu_suspicious);
+
+#ifdef CONFIG_PROVE_LOCKING
+
+void lockdep_assert_preemption_disabled(void)
+{
+ WARN_ONCE(IS_ENABLED(CONFIG_PREEMPT_COUNT) &&
+ debug_locks &&
+ !current->lockdep_recursion &&
+ (preempt_count() == 0 && current->hardirqs_enabled),
+ "preemption not disabled as expected\n");
+}
+EXPORT_SYMBOL_GPL(lockdep_assert_preemption_disabled);
+NOKPROBE_SYMBOL(lockdep_assert_preemption_disabled);
+
+#endif
diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
index 21d9c5f6e7ec..34d9d8896003 100644
--- a/lib/Kconfig.debug
+++ b/lib/Kconfig.debug
@@ -1062,6 +1062,7 @@ config PROVE_LOCKING
select DEBUG_RWSEMS
select DEBUG_WW_MUTEX_SLOWPATH
select DEBUG_LOCK_ALLOC
+ select PREEMPT_COUNT if !ARCH_NO_PREEMPT
select TRACE_IRQFLAGS
default n
help
--
2.20.1


2020-05-22 18:00:05

by Peter Zijlstra

Subject: Re: [PATCH v1 07/25] lockdep: Add preemption disabled assertion API

On Tue, May 19, 2020 at 11:45:29PM +0200, Ahmed S. Darwish wrote:
> diff --git a/include/linux/lockdep.h b/include/linux/lockdep.h
> index 206774ac6946..54c929ea5b98 100644
> --- a/include/linux/lockdep.h
> +++ b/include/linux/lockdep.h
> @@ -702,6 +702,14 @@ do { \
> "Not in hardirq as expected\n"); \
> } while (0)
>
> +/*
> + * Don't define this assertion here to avoid a call-site's header file
> + * dependency on sched.h task_struct current. This is needed by call
> + * sites that are inline defined at header files already included by
> + * sched.h.
> + */
> +void lockdep_assert_preemption_disabled(void);

So how about:

#if defined(CONFIG_PREEMPT_COUNT) && defined(CONFIG_TRACE_IRQFLAGS)
#define lockdep_assert_preemption_disabled() do { \
WARN_ON(debug_locks && !preempt_count() && \
current->hardirqs_enabled); \
} while (0)
#else
#define lockdep_assert_preemption_disabled() do { } while (0)
#endif

That is both more consistent with the things you claim it's modelled
after and also completely avoids that header dependency.

Subject: Re: [PATCH v1 07/25] lockdep: Add preemption disabled assertion API

On 2020-05-22 19:55:03 [+0200], Peter Zijlstra wrote:
> On Tue, May 19, 2020 at 11:45:29PM +0200, Ahmed S. Darwish wrote:
> > diff --git a/include/linux/lockdep.h b/include/linux/lockdep.h
> > index 206774ac6946..54c929ea5b98 100644
> > --- a/include/linux/lockdep.h
> > +++ b/include/linux/lockdep.h
> > @@ -702,6 +702,14 @@ do { \
> > "Not in hardirq as expected\n"); \
> > } while (0)
> >
> > +/*
> > + * Don't define this assertion here to avoid a call-site's header file
> > + * dependency on sched.h task_struct current. This is needed by call
> > + * sites that are inline defined at header files already included by
> > + * sched.h.
> > + */
> > +void lockdep_assert_preemption_disabled(void);
>
> So how about:
>
> #if defined(CONFIG_PREEMPT_COUNT) && defined(CONFIG_TRACE_IRQFLAGS)
> #define lockdep_assert_preemption_disabled() do { \
> WARN_ON(debug_locks && !preempt_count() && \
> current->hardirqs_enabled); \
> } while (0)
> #else
> #define lockdep_assert_preemption_disabled() do { } while (0)
> #endif
>
> That is both more consistent with the things you claim it's modelled
> after and also completely avoids that header dependency.

So we need additionally:

- #include <linux/sched.h> in include/linux/flex_proportions.h
and I think in another file as well.

- write_seqcount_t_begin_nested() as a define

- write_seqcount_t_begin() as a define

Any "static inline" in the header file using
lockdep_assert_preemption_disabled() will tro to complain about missing
current-> define. But yes, it will work otherwise.
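
In other words, with the macro version a header-file user like this no
longer builds (a rough sketch of the failure mode; the exact compiler
diagnostic may differ):

  /* include/linux/seqlock.h: cannot pull in sched.h here */
  static inline void write_seqcount_t_begin_nested(seqcount_t *s, int subclass)
  {
          /*
           * The assert expands to "... current->hardirqs_enabled ..."
           * right here, but at this point task_struct is only a forward
           * declaration, so dereferencing 'current' does not compile.
           */
          lockdep_assert_preemption_disabled();
          __write_seqcount_t_begin_nested(s, subclass);
  }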

Sebastian

2020-05-23 22:44:15

by Peter Zijlstra

Subject: Re: [PATCH v1 07/25] lockdep: Add preemption disabled assertion API

On Sat, May 23, 2020 at 04:59:42PM +0200, Sebastian A. Siewior wrote:
> On 2020-05-22 19:55:03 [+0200], Peter Zijlstra wrote:

> > That is both more consistent with the things you claim it's modelled
> > after and also completely avoids that header dependency.
>
> So we need additionally:
>
> - #include <linux/sched.h> in include/linux/flex_proportions.h
> and I think in another file as well.
>
> - write_seqcount_t_begin_nested() as a define
>
> - write_seqcount_t_begin() as a define
>
> Any "static inline" in the header file using
> lockdep_assert_preemption_disabled() will tro to complain about missing
> current-> define. But yes, it will work otherwise.

Because...? /me rummages around.. Ah you're proposing sticking this in
seqcount itself and then header hell.

Moo.. ok I'll go have another look on Monday.

Subject: Re: [PATCH v1 07/25] lockdep: Add preemption disabled assertion API

On 2020-05-24 00:41:32 [+0200], Peter Zijlstra wrote:
> Because...? /me rummages around.. Ah you're proposing sticking this in
> seqcount itself and then header hell.
>
> Moo.. ok I'll go have another look on Monday.

If you have a look on Monday, you might want to start with the patch at
the bottom, applied on top of the series. Both sched.h includes are
needed; you also need to avoid having write_seqcount_t_begin_nested() as
a static inline in header files.

diff --git a/include/linux/flex_proportions.h b/include/linux/flex_proportions.h
index c12df59d3f5fc..c0f88f08371d7 100644
--- a/include/linux/flex_proportions.h
+++ b/include/linux/flex_proportions.h
@@ -12,6 +12,7 @@
#include <linux/spinlock.h>
#include <linux/seqlock.h>
#include <linux/gfp.h>
+#include <linux/sched.h>

/*
* When maximum proportion of some event type is specified, this is the
diff --git a/include/linux/lockdep.h b/include/linux/lockdep.h
index 54c929ea5b982..76385e599a9cb 100644
--- a/include/linux/lockdep.h
+++ b/include/linux/lockdep.h
@@ -702,13 +702,13 @@ do { \
"Not in hardirq as expected\n"); \
} while (0)

-/*
- * Don't define this assertion here to avoid a call-site's header file
- * dependency on sched.h task_struct current. This is needed by call
- * sites that are inline defined at header files already included by
- * sched.h.
- */
-void lockdep_assert_preemption_disabled(void);
+#define lockdep_assert_preemption_disabled() do { \
+ WARN_ONCE(IS_ENABLED(CONFIG_PREEMPT_COUNT) && \
+ debug_locks && \
+ !current->lockdep_recursion && \
+ (preempt_count() == 0 && current->hardirqs_enabled), \
+ "preemption not disabled as expected\n"); \
+ } while (0)

#else
# define might_lock(lock) do { } while (0)
diff --git a/include/linux/seqlock.h b/include/linux/seqlock.h
index eca464ecf012f..7f1261376110a 100644
--- a/include/linux/seqlock.h
+++ b/include/linux/seqlock.h
@@ -423,11 +423,11 @@ static inline void __write_seqcount_t_begin_nested(seqcount_t *s, int subclass)
seqcount_acquire(&s->dep_map, subclass, 0, _RET_IP_);
}

-static inline void write_seqcount_t_begin_nested(seqcount_t *s, int subclass)
-{
- lockdep_assert_preemption_disabled();
- __write_seqcount_t_begin_nested(s, subclass);
-}
+#define write_seqcount_t_begin_nested(__s, __subclass) \
+ do { \
+ lockdep_assert_preemption_disabled(); \
+ __write_seqcount_t_begin_nested(__s, __subclass);\
+ } while (0)

/*
* write_seqcount_t_begin() without lockdep non-preemptibility check.
@@ -450,10 +450,7 @@ static inline void __write_seqcount_t_begin(seqcount_t *s)
*/
#define write_seqcount_begin(s) do_write_seqcount_begin(s)

-static inline void write_seqcount_t_begin(seqcount_t *s)
-{
- write_seqcount_t_begin_nested(s, 0);
-}
+#define write_seqcount_t_begin(_s) write_seqcount_t_begin_nested(_s, 0);

/**
* write_seqcount_end() - end a seqcount write-side critical section
diff --git a/include/linux/u64_stats_sync.h b/include/linux/u64_stats_sync.h
index 30358ce3d8fe1..d0fd3edcdc50b 100644
--- a/include/linux/u64_stats_sync.h
+++ b/include/linux/u64_stats_sync.h
@@ -62,6 +62,7 @@
* Example of use in drivers/net/loopback.c, using per_cpu containers,
* in BH disabled context.
*/
+#include <linux/sched.h>
#include <linux/seqlock.h>

struct u64_stats_sync {
diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index 4dae65bc65c24..ac10db66cc63f 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -5857,18 +5857,3 @@ void lockdep_rcu_suspicious(const char *file, const int line, const char *s)
dump_stack();
}
EXPORT_SYMBOL_GPL(lockdep_rcu_suspicious);
-
-#ifdef CONFIG_PROVE_LOCKING
-
-void lockdep_assert_preemption_disabled(void)
-{
- WARN_ONCE(IS_ENABLED(CONFIG_PREEMPT_COUNT) &&
- debug_locks &&
- !current->lockdep_recursion &&
- (preempt_count() == 0 && current->hardirqs_enabled),
- "preemption not disabled as expected\n");
-}
-EXPORT_SYMBOL_GPL(lockdep_assert_preemption_disabled);
-NOKPROBE_SYMBOL(lockdep_assert_preemption_disabled);
-
-#endif

Sebastian

2020-05-25 19:04:57

by Peter Zijlstra

Subject: Re: [PATCH v1 07/25] lockdep: Add preemption disabled assertion API

On Sun, May 24, 2020 at 12:41:32AM +0200, Peter Zijlstra wrote:
> On Sat, May 23, 2020 at 04:59:42PM +0200, Sebastian A. Siewior wrote:
> > On 2020-05-22 19:55:03 [+0200], Peter Zijlstra wrote:
>
> > > That is both more consistent with the things you claim it's modelled
> > > after and also completely avoids that header dependency.
> >
> > So we need additionally:
> >
> > - #include <linux/sched.h> in include/linux/flex_proportions.h
> > and I think in another file as well.
> >
> > - write_seqcount_t_begin_nested() as a define
> >
> > - write_seqcount_t_begin() as a define
> >
> > Any "static inline" in the header file using
> > lockdep_assert_preemption_disabled() will tro to complain about missing
> > current-> define. But yes, it will work otherwise.
>
> Because...? /me rummages around.. Ah you're proposing sticking this in
> seqcount itself and then header hell.
>
> Moo.. ok I'll go have another look on Monday.

How's this?

---

diff --git a/include/linux/irqflags.h b/include/linux/irqflags.h
index d7f7e436c3af..459ae7a6c207 100644
--- a/include/linux/irqflags.h
+++ b/include/linux/irqflags.h
@@ -14,6 +14,7 @@

#include <linux/typecheck.h>
#include <asm/irqflags.h>
+#include <asm/percpu.h>

/* Currently lockdep_softirqs_on/off is used only by lockdep */
#ifdef CONFIG_PROVE_LOCKING
@@ -31,18 +32,22 @@
#endif

#ifdef CONFIG_TRACE_IRQFLAGS
+
+DECLARE_PER_CPU(int, hardirqs_enabled);
+DECLARE_PER_CPU(int, hardirq_context);
+
extern void trace_hardirqs_on_prepare(void);
extern void trace_hardirqs_off_prepare(void);
extern void trace_hardirqs_on(void);
extern void trace_hardirqs_off(void);
-# define lockdep_hardirq_context(p) ((p)->hardirq_context)
+# define lockdep_hardirq_context(p) (this_cpu_read(hardirq_context))
# define lockdep_softirq_context(p) ((p)->softirq_context)
-# define lockdep_hardirqs_enabled(p) ((p)->hardirqs_enabled)
+# define lockdep_hardirqs_enabled(p) (this_cpu_read(hardirqs_enabled))
# define lockdep_softirqs_enabled(p) ((p)->softirqs_enabled)
-# define lockdep_hardirq_enter() \
-do { \
- if (!current->hardirq_context++) \
- current->hardirq_threaded = 0; \
+# define lockdep_hardirq_enter() \
+do { \
+ if (!this_cpu_inc_return(hardirq_context)) \
+ current->hardirq_threaded = 0; \
} while (0)
# define lockdep_hardirq_threaded() \
do { \
@@ -50,7 +55,7 @@ do { \
} while (0)
# define lockdep_hardirq_exit() \
do { \
- current->hardirq_context--; \
+ this_cpu_dec(hardirq_context); \
} while (0)
# define lockdep_softirq_enter() \
do { \
diff --git a/include/linux/lockdep.h b/include/linux/lockdep.h
index 8fce5c98a4b0..754c31e30a83 100644
--- a/include/linux/lockdep.h
+++ b/include/linux/lockdep.h
@@ -20,6 +20,7 @@ extern int lock_stat;
#define MAX_LOCKDEP_SUBCLASSES 8UL

#include <linux/types.h>
+#include <asm/percpu.h>

enum lockdep_wait_type {
LD_WAIT_INV = 0, /* not checked, catch all */
@@ -703,28 +704,29 @@ do { \
lock_release(&(lock)->dep_map, _THIS_IP_); \
} while (0)

-#define lockdep_assert_irqs_enabled() do { \
- WARN_ONCE(debug_locks && !current->lockdep_recursion && \
- !current->hardirqs_enabled, \
- "IRQs not enabled as expected\n"); \
- } while (0)
+DECLARE_PER_CPU(int, hardirqs_enabled);
+DECLARE_PER_CPU(int, hardirq_context);

-#define lockdep_assert_irqs_disabled() do { \
- WARN_ONCE(debug_locks && !current->lockdep_recursion && \
- current->hardirqs_enabled, \
- "IRQs not disabled as expected\n"); \
- } while (0)
+#define lockdep_assert_irqs_enabled() \
+do { \
+ WARN_ON_ONCE(debug_locks && !this_cpu_read(hardirqs_enabled)); \
+} while (0)

-#define lockdep_assert_in_irq() do { \
- WARN_ONCE(debug_locks && !current->lockdep_recursion && \
- !current->hardirq_context, \
- "Not in hardirq as expected\n"); \
- } while (0)
+#define lockdep_assert_irqs_disabled() \
+do { \
+ WARN_ON_ONCE(debug_locks && this_cpu_read(hardirqs_enabled)); \
+} while (0)
+
+#define lockdep_assert_in_irq() \
+do { \
+ WARN_ON_ONCE(debug_locks && !this_cpu_read(hardirq_context)); \
+} while (0)

#else
# define might_lock(lock) do { } while (0)
# define might_lock_read(lock) do { } while (0)
# define might_lock_nested(lock, subclass) do { } while (0)
+
# define lockdep_assert_irqs_enabled() do { } while (0)
# define lockdep_assert_irqs_disabled() do { } while (0)
# define lockdep_assert_in_irq() do { } while (0)
@@ -734,7 +736,7 @@ do { \

# define lockdep_assert_RT_in_threaded_ctx() do { \
WARN_ONCE(debug_locks && !current->lockdep_recursion && \
- current->hardirq_context && \
+ lockdep_hardirq_context(current) && \
!(current->hardirq_threaded || current->irq_config), \
"Not in threaded context on PREEMPT_RT as expected\n"); \
} while (0)
diff --git a/include/linux/sched.h b/include/linux/sched.h
index 1d68ee36c583..3d48f80848db 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -990,8 +990,6 @@ struct task_struct {
unsigned long hardirq_disable_ip;
unsigned int hardirq_enable_event;
unsigned int hardirq_disable_event;
- int hardirqs_enabled;
- int hardirq_context;
u64 hardirq_chain_key;
unsigned long softirq_disable_ip;
unsigned long softirq_enable_ip;
diff --git a/kernel/fork.c b/kernel/fork.c
index 96eb4b535ced..5e0e4918dc9f 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -1946,8 +1946,8 @@ static __latent_entropy struct task_struct *copy_process(

rt_mutex_init_task(p);

+ lockdep_assert_irqs_enabled();
#ifdef CONFIG_PROVE_LOCKING
- DEBUG_LOCKS_WARN_ON(!p->hardirqs_enabled);
DEBUG_LOCKS_WARN_ON(!p->softirqs_enabled);
#endif
retval = -EAGAIN;
@@ -2028,7 +2028,6 @@ static __latent_entropy struct task_struct *copy_process(
#endif
#ifdef CONFIG_TRACE_IRQFLAGS
p->irq_events = 0;
- p->hardirqs_enabled = 0;
p->hardirq_enable_ip = 0;
p->hardirq_enable_event = 0;
p->hardirq_disable_ip = _THIS_IP_;
@@ -2038,7 +2037,6 @@ static __latent_entropy struct task_struct *copy_process(
p->softirq_enable_event = 0;
p->softirq_disable_ip = 0;
p->softirq_disable_event = 0;
- p->hardirq_context = 0;
p->softirq_context = 0;
#endif

diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index bdea09b365b6..b113941f579e 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -2062,9 +2062,9 @@ print_bad_irq_dependency(struct task_struct *curr,
pr_warn("-----------------------------------------------------\n");
pr_warn("%s/%d [HC%u[%lu]:SC%u[%lu]:HE%u:SE%u] is trying to acquire:\n",
curr->comm, task_pid_nr(curr),
- curr->hardirq_context, hardirq_count() >> HARDIRQ_SHIFT,
+ lockdep_hardirq_context(curr), hardirq_count() >> HARDIRQ_SHIFT,
curr->softirq_context, softirq_count() >> SOFTIRQ_SHIFT,
- curr->hardirqs_enabled,
+ lockdep_hardirqs_enabled(curr),
curr->softirqs_enabled);
print_lock(next);

@@ -3649,7 +3649,7 @@ void lockdep_hardirqs_on_prepare(unsigned long ip)
if (unlikely(!debug_locks || current->lockdep_recursion))
return;

- if (unlikely(current->hardirqs_enabled)) {
+ if (unlikely(lockdep_hardirqs_enabled(current))) {
/*
* Neither irq nor preemption are disabled here
* so this is racy by nature but losing one hit
@@ -3677,7 +3677,7 @@ void lockdep_hardirqs_on_prepare(unsigned long ip)
* Can't allow enabling interrupts while in an interrupt handler,
* that's general bad form and such. Recursion, limited stack etc..
*/
- if (DEBUG_LOCKS_WARN_ON(current->hardirq_context))
+ if (DEBUG_LOCKS_WARN_ON(lockdep_hardirq_context(current)))
return;

current->hardirq_chain_key = current->curr_chain_key;
@@ -3695,7 +3695,7 @@ void noinstr lockdep_hardirqs_on(unsigned long ip)
if (unlikely(!debug_locks || curr->lockdep_recursion))
return;

- if (curr->hardirqs_enabled) {
+ if (lockdep_hardirqs_enabled(curr)) {
/*
* Neither irq nor preemption are disabled here
* so this is racy by nature but losing one hit
@@ -3721,7 +3721,7 @@ void noinstr lockdep_hardirqs_on(unsigned long ip)
current->curr_chain_key);

/* we'll do an OFF -> ON transition: */
- curr->hardirqs_enabled = 1;
+ this_cpu_write(hardirqs_enabled, 1);
curr->hardirq_enable_ip = ip;
curr->hardirq_enable_event = ++curr->irq_events;
debug_atomic_inc(hardirqs_on_events);
@@ -3745,11 +3745,11 @@ void noinstr lockdep_hardirqs_off(unsigned long ip)
if (DEBUG_LOCKS_WARN_ON(!irqs_disabled()))
return;

- if (curr->hardirqs_enabled) {
+ if (lockdep_hardirqs_enabled(curr)) {
/*
* We have done an ON -> OFF transition:
*/
- curr->hardirqs_enabled = 0;
+ this_cpu_write(hardirqs_enabled, 0);
curr->hardirq_disable_ip = ip;
curr->hardirq_disable_event = ++curr->irq_events;
debug_atomic_inc(hardirqs_off_events);
@@ -3794,7 +3794,7 @@ void lockdep_softirqs_on(unsigned long ip)
* usage bit for all held locks, if hardirqs are
* enabled too:
*/
- if (curr->hardirqs_enabled)
+ if (lockdep_hardirqs_enabled(curr))
mark_held_locks(curr, LOCK_ENABLED_SOFTIRQ);
lockdep_recursion_finish();
}
@@ -3843,7 +3843,7 @@ mark_usage(struct task_struct *curr, struct held_lock *hlock, int check)
*/
if (!hlock->trylock) {
if (hlock->read) {
- if (curr->hardirq_context)
+ if (lockdep_hardirq_context(curr))
if (!mark_lock(curr, hlock,
LOCK_USED_IN_HARDIRQ_READ))
return 0;
@@ -3852,7 +3852,7 @@ mark_usage(struct task_struct *curr, struct held_lock *hlock, int check)
LOCK_USED_IN_SOFTIRQ_READ))
return 0;
} else {
- if (curr->hardirq_context)
+ if (lockdep_hardirq_context(curr))
if (!mark_lock(curr, hlock, LOCK_USED_IN_HARDIRQ))
return 0;
if (curr->softirq_context)
@@ -3890,7 +3890,7 @@ mark_usage(struct task_struct *curr, struct held_lock *hlock, int check)

static inline unsigned int task_irq_context(struct task_struct *task)
{
- return LOCK_CHAIN_HARDIRQ_CONTEXT * !!task->hardirq_context +
+ return LOCK_CHAIN_HARDIRQ_CONTEXT * !!lockdep_hardirq_context(task) +
LOCK_CHAIN_SOFTIRQ_CONTEXT * !!task->softirq_context;
}

@@ -3983,7 +3983,7 @@ static inline short task_wait_context(struct task_struct *curr)
* Set appropriate wait type for the context; for IRQs we have to take
* into account force_irqthread as that is implied by PREEMPT_RT.
*/
- if (curr->hardirq_context) {
+ if (lockdep_hardirq_context(curr)) {
/*
* Check if force_irqthreads will run us threaded.
*/
@@ -4826,11 +4826,11 @@ static void check_flags(unsigned long flags)
return;

if (irqs_disabled_flags(flags)) {
- if (DEBUG_LOCKS_WARN_ON(current->hardirqs_enabled)) {
+ if (DEBUG_LOCKS_WARN_ON(lockdep_hardirqs_enabled(current))) {
printk("possible reason: unannotated irqs-off.\n");
}
} else {
- if (DEBUG_LOCKS_WARN_ON(!current->hardirqs_enabled)) {
+ if (DEBUG_LOCKS_WARN_ON(!lockdep_hardirqs_enabled(current))) {
printk("possible reason: unannotated irqs-on.\n");
}
}
diff --git a/kernel/softirq.c b/kernel/softirq.c
index beb8e3a66c7c..f45ebff906f7 100644
--- a/kernel/softirq.c
+++ b/kernel/softirq.c
@@ -107,6 +107,12 @@ static bool ksoftirqd_running(unsigned long pending)
* where hardirqs are disabled legitimately:
*/
#ifdef CONFIG_TRACE_IRQFLAGS
+
+DEFINE_PER_CPU(int, hardirqs_enabled);
+DEFINE_PER_CPU(int, hardirq_context);
+EXPORT_PER_CPU_SYMBOL_GPL(hardirqs_enabled);
+EXPORT_PER_CPU_SYMBOL_GPL(hardirq_context);
+
void __local_bh_disable_ip(unsigned long ip, unsigned int cnt)
{
unsigned long flags;

2020-05-26 00:55:23

by Ahmed S. Darwish

Subject: Re: [PATCH v1 07/25] lockdep: Add preemption disabled assertion API

Peter Zijlstra <[email protected]> wrote:
> On Sun, May 24, 2020 at 12:41:32AM +0200, Peter Zijlstra wrote:
> > On Sat, May 23, 2020 at 04:59:42PM +0200, Sebastian A. Siewior wrote:
> > >
> > > Any "static inline" in a header file using
> > > lockdep_assert_preemption_disabled() will complain about the missing
> > > current-> definition. But yes, it will work otherwise.
> >
> > Because...? /me rummages around.. Ah you're proposing sticking this in
> > seqcount itself and then header hell.
> >
> > Moo.. ok I'll go have another look on Monday.
>
> How's this?
>

This will work for my case, as 'current' is no longer referenced by the
lockdep macros. Please continue below though.

...

> -#define lockdep_assert_irqs_enabled() do { \
> - WARN_ONCE(debug_locks && !current->lockdep_recursion && \
> - !current->hardirqs_enabled, \
> - "IRQs not enabled as expected\n"); \
> - } while (0)
> +DECLARE_PER_CPU(int, hardirqs_enabled);
> +DECLARE_PER_CPU(int, hardirq_context);
>
> -#define lockdep_assert_irqs_disabled() do { \
> - WARN_ONCE(debug_locks && !current->lockdep_recursion && \
> - current->hardirqs_enabled, \
> - "IRQs not disabled as expected\n"); \
> - } while (0)
> +#define lockdep_assert_irqs_enabled() \
> +do { \
> + WARN_ON_ONCE(debug_locks && !this_cpu_read(hardirqs_enabled)); \
> +} while (0)
>

Given that lockdep_off() is defined at lockdep.c as:

void lockdep_off(void)
{
current->lockdep_recursion += LOCKDEP_OFF;
}

This would imply that all of the macros:

- lockdep_assert_irqs_enabled()
- lockdep_assert_irqs_disabled()
- lockdep_assert_in_irq()
- lockdep_assert_preemption_disabled()
- lockdep_assert_preemption_enabled()

will do the lockdep checks *even if* lockdep_off() was called.

This doesn't sound right. Even if all of the above macros' call sites
didn't care about lockdep_off()/on(), it is semantically incoherent.
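
Schematically (a hypothetical caller, only to illustrate the semantic
difference; not a real call site):

  lockdep_off();          /* current->lockdep_recursion += LOCKDEP_OFF */

  ...

  lockdep_assert_irqs_disabled();
          /*
           * Old macro: suppressed here, since it also tested
           * !current->lockdep_recursion.
           * New macro: can still WARN, since it only tests
           * debug_locks && this_cpu_read(hardirqs_enabled).
           */

  lockdep_on();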

Thanks,

--
Ahmed S. Darwish
Linutronix GmbH

2020-05-26 08:19:14

by Peter Zijlstra

Subject: Re: [PATCH v1 07/25] lockdep: Add preemption disabled assertion API

On Tue, May 26, 2020 at 02:52:31AM +0200, Ahmed S. Darwish wrote:
> Peter Zijlstra <[email protected]> wrote:

> > +#define lockdep_assert_irqs_enabled() \
> > +do { \
> > + WARN_ON_ONCE(debug_locks && !this_cpu_read(hardirqs_enabled)); \
> > +} while (0)
> >
>
> Given that lockdep_off() is defined at lockdep.c as:
>
> void lockdep_off(void)
> {
> current->lockdep_recursion += LOCKDEP_OFF;
> }
>
> This would imply that all of the macros:
>
> - lockdep_assert_irqs_enabled()
> - lockdep_assert_irqs_disabled()
> - lockdep_assert_in_irq()
> - lockdep_assert_preemption_disabled()
> - lockdep_assert_preemption_enabled()
>
> will do the lockdep checks *even if* lockdep_off() was called.
>
> This doesn't sound right. Even if all of the above macros' call sites
> didn't care about lockdep_off()/on(), it is semantically incoherent.

lockdep_off() is an abomination and really should not exist.

That dm-cache-target.c thing, for example, is atrocious shite that will
explode on -rt. Whoever wrote that needs a 'medal'.

People using it deserve all the pain they get.

Also; IRQ state _should_ be tracked irrespective of tracking lock
dependencies -- I see that that currently isn't entirely the case, lemme
go fix that.

2020-05-26 09:48:56

by Ahmed S. Darwish

Subject: Re: [PATCH v1 07/25] lockdep: Add preemption disabled assertion API

On Tue, May 26, 2020 at 10:13:50AM +0200, Peter Zijlstra wrote:
> On Tue, May 26, 2020 at 02:52:31AM +0200, Ahmed S. Darwish wrote:
> > Peter Zijlstra <[email protected]> wrote:
>
> > > +#define lockdep_assert_irqs_enabled() \
> > > +do { \
> > > + WARN_ON_ONCE(debug_locks && !this_cpu_read(hardirqs_enabled)); \
> > > +} while (0)
> > >
> >
> > Given that lockdep_off() is defined at lockdep.c as:
> >
> > void lockdep_off(void)
> > {
> > current->lockdep_recursion += LOCKDEP_OFF;
> > }
> >
> > This would imply that all of the macros:
> >
> > - lockdep_assert_irqs_enabled()
> > - lockdep_assert_irqs_disabled()
> > - lockdep_assert_in_irq()
> > - lockdep_assert_preemption_disabled()
> > - lockdep_assert_preemption_enabled()
> >
> > will do the lockdep checks *even if* lockdep_off() was called.
> >
> > This doesn't sound right. Even if all of the above macros call sites
> > didn't care about lockdep_off()/on(), it is semantically incoherent.
>
> lockdep_off() is an abomination and really should not exist.
>
> That dm-cache-target.c thing, for example, is atrocious shite that will
> explode on -rt. Whoever wrote that needs a 'medal'.
>
> People using it deserve all the pain they get.
>
> Also; IRQ state _should_ be tracked irrespective of tracking lock
> dependencies -- I see that that currently isn't entirely the case, lemme
> go fix that.
>

Exactly, currently all the lockdep IRQ checks get nullified if
lockdep_off() is called. That was the source of my confusion.

If you have any extra patches on this, I can also queue them in the
next iteration of this series, before this patch.

Thanks a lot,

--
Ahmed S. Darwish
Linutronix GmbH

2020-06-03 15:33:07

by Ahmed S. Darwish

Subject: Re: [PATCH v1 07/25] lockdep: Add preemption disabled assertion API

On Tue, May 26, 2020 at 10:13:50AM +0200, Peter Zijlstra wrote:
> On Tue, May 26, 2020 at 02:52:31AM +0200, Ahmed S. Darwish wrote:
> > Peter Zijlstra <[email protected]> wrote:
>
> > > +#define lockdep_assert_irqs_enabled() \
> > > +do { \
> > > + WARN_ON_ONCE(debug_locks && !this_cpu_read(hardirqs_enabled)); \
> > > +} while (0)
> > >
> >
> > Given that lockdep_off() is defined at lockdep.c as:
> >
> > void lockdep_off(void)
> > {
> > current->lockdep_recursion += LOCKDEP_OFF;
> > }
> >
> > This would imply that all of the macros:
> >
> > - lockdep_assert_irqs_enabled()
> > - lockdep_assert_irqs_disabled()
> > - lockdep_assert_in_irq()
> > - lockdep_assert_preemption_disabled()
> > - lockdep_assert_preemption_enabled()
> >
> > will do the lockdep checks *even if* lockdep_off() was called.
> >
> > This doesn't sound right. Even if all of the above macros' call sites
> > didn't care about lockdep_off()/on(), it is semantically incoherent.
>
> lockdep_off() is an abomination and really should not exist.
>
> That dm-cache-target.c thing, for example, is atrocious shite that will
> explode on -rt. Whoever wrote that needs a 'medal'.
>
> People using it deserve all the pain they get.
>
> Also; IRQ state _should_ be tracked irrespective of tracking lock
> dependencies -- I see that that currently isn't entirely the case, lemme
> go fix that.
>

Since the lockdep/x86 series:

https://lkml.kernel.org/r/[email protected]
https://lkml.kernel.org/r/[email protected]

are pending and quite big, I'll drop patch #7 and patch #8 from this
series, and post a seqlock v2.

This way, this seqlock series can move forward.

Patches #7 and #8 are an "add-on" debugging feature anyway. They're
quite important of course, as evidenced by the number of buggy call
sites they've found, but they don't affect the rest of the seqlock
series in any way.

Once the lockdep/x86 series above get merged, I can easily rebase and
post patches #7 and #8 again.

Thanks a lot,

--
Ahmed S. Darwish
Linutronix GmbH