2021-07-11 14:15:32

by Xiongwei Song

[permalink] [raw]
Subject: [RFC PATCH v1 1/3] locking/lockdep: Fix false warning of check_wait_context()

From: Xiongwei Song <[email protected]>

We now always get an "Invalid wait context" warning with
CONFIG_PROVE_RAW_LOCK_NESTING=y; see the full warning below:

[ 0.705900] =============================
[ 0.706002] [ BUG: Invalid wait context ]
[ 0.706180] 5.13.0+ #4 Not tainted
[ 0.706349] -----------------------------
[ 0.706486] swapper/1/0 is trying to lock:
[ 0.706658] ffff898c01045998 (&n->list_lock){..-.}-{3:3}, at: deactivate_slab+0x2f4/0x570
[ 0.706759] other info that might help us debug this:
[ 0.706759] context-{2:2}
[ 0.706759] no locks held by swapper/1/0.
[ 0.706759] stack backtrace:
[ 0.706759] CPU: 1 PID: 0 Comm: swapper/1 Not tainted 5.13.0+ #4
[ 0.706759] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.14.0-2 04/01/2014
[ 0.706759] Call Trace:
[ 0.706759] <IRQ>
[ 0.706759] dump_stack_lvl+0x45/0x59
[ 0.706759] __lock_acquire.cold+0x2bc/0x2ed
[ 0.706759] ? __lock_acquire+0x3a5/0x2330
[ 0.706759] lock_acquire+0xbb/0x2b0
[ 0.706759] ? deactivate_slab+0x2f4/0x570
[ 0.706759] _raw_spin_lock_irqsave+0x36/0x50
[ 0.706759] ? deactivate_slab+0x2f4/0x570
[ 0.706759] deactivate_slab+0x2f4/0x570
[ 0.706759] ? find_held_lock+0x2b/0x80
[ 0.706759] ? lock_release+0xbd/0x2b0
[ 0.706759] ? tick_irq_enter+0x28/0xe0
[ 0.706759] flush_cpu_slab+0x2f/0x50
[ 0.706759] flush_smp_call_function_queue+0x133/0x1d0
[ 0.706759] __sysvec_call_function_single+0x3e/0x190
[ 0.706759] sysvec_call_function_single+0x65/0x90
[ 0.706759] </IRQ>
[ 0.706759] asm_sysvec_call_function_single+0x12/0x20
[ 0.706759] RIP: 0010:default_idle+0xb/0x10
[ 0.706759] Code: 8b 04 25 40 6f 01 00 f0 80 60 02 df c3 0f ae f0 0f ae 38 0f ae f0 eb b9 0f 1f 80 00 00 00 00 eb 07 0f 00 2d ef f4 50 00 fb f4 <c3> c
[ 0.706759] RSP: 0018:ffff96c8c006bef8 EFLAGS: 00000202
[ 0.706759] RAX: ffffffff9c2f66d0 RBX: 0000000000000001 RCX: 0000000000000001
[ 0.706759] RDX: 0000000000000000 RSI: 0000000000000000 RDI: ffffffff9c2f697f
[ 0.706759] RBP: ffff898c01201700 R08: 0000000000000001 R09: 0000000000000001
[ 0.706759] R10: 0000000000000039 R11: 0000000000000000 R12: 0000000000000000
[ 0.706759] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
[ 0.706759] ? mwait_idle+0x70/0x70
[ 0.706759] ? default_idle_call+0x3f/0x1e0
[ 0.706759] default_idle_call+0x66/0x1e0
[ 0.706759] do_idle+0x1fb/0x270
[ 0.706759] ? _raw_spin_unlock_irqrestore+0x28/0x40
[ 0.706759] cpu_startup_entry+0x14/0x20
[ 0.706759] secondary_startup_64_no_verify+0xc2/0xcb

In this case the wait type of spin_lock is 3 and the wait type of
raw_spin_lock is 2, while the deactivate_slab() call runs in hardirq
context, which only allows wait types <= 2, so check_wait_context()
prints this warning. However, spin_lock and raw_spin_lock should have
the same wait type in a !PREEMPT_RT environment.

Wait type details:
with CONFIG_PROVE_RAW_LOCK_NESTING=y:
  LD_WAIT_SPIN = 2,
  LD_WAIT_CONFIG = 3;
with !CONFIG_PROVE_RAW_LOCK_NESTING:
  LD_WAIT_CONFIG = LD_WAIT_SPIN = 2.
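
For reference, these values come from the lockdep wait-type enum, which
looks roughly like this (a simplified sketch with comments added here;
the exact definition lives in the lockdep headers):

enum lockdep_wait_type {
	LD_WAIT_INV = 0,	/* not checked, catch all */
	LD_WAIT_FREE,		/* wait free, RCU etc. */
	LD_WAIT_SPIN,		/* spinning locks, raw_spinlock_t etc. -> 2 */
#ifdef CONFIG_PROVE_RAW_LOCK_NESTING
	LD_WAIT_CONFIG,		/* preemptible on PREEMPT_RT, spinlock_t etc. -> 3 */
#else
	LD_WAIT_CONFIG = LD_WAIT_SPIN,	/* -> 2, same as raw_spinlock_t */
#endif
	LD_WAIT_SLEEP,		/* sleeping locks, mutexes etc. */
	LD_WAIT_MAX,		/* must be last */
};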

As we know, the semantics of spin_lock only change in a PREEMPT_RT
environment; only there can the wait type of spin_lock be bigger than
raw_spin_lock's.

This patch makes CONFIG_PROVE_RAW_LOCK_NESTING depend on CONFIG_PREEMPT_RT=y,
which makes the warning go away.

Furthermore, this warning doesn't exist in a PREEMPT_RT environment,
because the RT kernel has already replaced the spin_lock_*() calls with
raw_spin_lock_*() for the node's list_lock. That means the wait type
acquired in hardirq context is equal to the wait type of raw_spin_lock
in this case.
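
How the wait type of the current context is derived can be sketched as
follows (simplified from task_wait_context() in kernel/locking/lockdep.c;
in a plain, non-threaded hardirq the result is LD_WAIT_SPIN, i.e. the
"context-{2:2}" seen in the splat above):

static short task_wait_context_sketch(struct task_struct *curr)
{
	if (lockdep_hardirq_context()) {
		/* irqs forced threaded (as PREEMPT_RT implies) are fine */
		if (curr->hardirq_threaded || curr->irq_config)
			return LD_WAIT_CONFIG;
		return LD_WAIT_SPIN;	/* genuine hardirq: only wait type <= 2 */
	}
	if (curr->softirq_context)
		return LD_WAIT_CONFIG;	/* softirqs are always threaded on RT */
	return LD_WAIT_MAX;		/* task context: no restriction */
}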

Signed-off-by: Xiongwei Song <[email protected]>
---
lib/Kconfig.debug | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
index 8acc01d7d816..083608106436 100644
--- a/lib/Kconfig.debug
+++ b/lib/Kconfig.debug
@@ -1271,7 +1271,7 @@ config PROVE_LOCKING

config PROVE_RAW_LOCK_NESTING
bool "Enable raw_spinlock - spinlock nesting checks"
- depends on PROVE_LOCKING
+ depends on PROVE_LOCKING && PREEMPT_RT
default n
help
Enable the raw_spinlock vs. spinlock nesting checks which ensure
--
2.30.2


2021-07-11 14:16:27

by Xiongwei Song

[permalink] [raw]
Subject: [RFC PATCH v1 2/3] locking/lockdep: Unify the return values of check_wait_context()

From: Xiongwei Song <[email protected]>

Unity the return values of check_wait_context() as check_prev_add(),
check_irq_usage(), etc. 1 means no bug, 0 means there is a bug.

The return values of print_lock_invalid_wait_context() are unnecessary,
remove them.

Signed-off-by: Xiongwei Song <[email protected]>
---
kernel/locking/lockdep.c | 20 ++++++++++----------
1 file changed, 10 insertions(+), 10 deletions(-)

diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index bf1c00c881e4..8b50da42f2c6 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -4635,16 +4635,16 @@ static inline short task_wait_context(struct task_struct *curr)
return LD_WAIT_MAX;
}

-static int
+static void
print_lock_invalid_wait_context(struct task_struct *curr,
struct held_lock *hlock)
{
short curr_inner;

if (!debug_locks_off())
- return 0;
+ return;
if (debug_locks_silent)
- return 0;
+ return;

pr_warn("\n");
pr_warn("=============================\n");
@@ -4664,8 +4664,6 @@ print_lock_invalid_wait_context(struct task_struct *curr,

pr_warn("stack backtrace:\n");
dump_stack();
-
- return 0;
}

/*
@@ -4691,7 +4689,7 @@ static int check_wait_context(struct task_struct *curr, struct held_lock *next)
int depth;

if (!next_inner || next->trylock)
- return 0;
+ return 1;

if (!next_outer)
next_outer = next_inner;
@@ -4723,10 +4721,12 @@ static int check_wait_context(struct task_struct *curr, struct held_lock *next)
}
}

- if (next_outer > curr_inner)
- return print_lock_invalid_wait_context(curr, next);
+ if (next_outer > curr_inner) {
+ print_lock_invalid_wait_context(curr, next);
+ return 0;
+ }

- return 0;
+ return 1;
}

#else /* CONFIG_PROVE_LOCKING */
@@ -4962,7 +4962,7 @@ static int __lock_acquire(struct lockdep_map *lock, unsigned int subclass,
#endif
hlock->pin_count = pin_count;

- if (check_wait_context(curr, hlock))
+ if (!check_wait_context(curr, hlock))
return 0;

/* Initialize the lock usage bit */
--
2.30.2

2021-07-11 14:16:30

by Xiongwei Song

[permalink] [raw]
Subject: [PATCH v1 3/3] locking/lockdep,doc: Correct the max number of lock classes

From: Xiongwei Song <[email protected]>

The max number of lock classes is 8192.
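
The number comes from the width of the lock-class index; the relevant
defines are roughly as below (shown here only to justify 8192, not part
of the patch):

#define MAX_LOCKDEP_KEYS_BITS	13
#define MAX_LOCKDEP_KEYS	(1UL << MAX_LOCKDEP_KEYS_BITS)	/* 8192 */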

Signed-off-by: Xiongwei Song <[email protected]>
Cc: Jonathan Corbet <[email protected]>
Cc: [email protected]
---
Documentation/locking/lockdep-design.rst | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/Documentation/locking/lockdep-design.rst b/Documentation/locking/lockdep-design.rst
index 82f36cab61bd..5c2dcec684ff 100644
--- a/Documentation/locking/lockdep-design.rst
+++ b/Documentation/locking/lockdep-design.rst
@@ -341,7 +341,7 @@ Exceeding this number will trigger the following lockdep warning::

(DEBUG_LOCKS_WARN_ON(id >= MAX_LOCKDEP_KEYS))

-By default, MAX_LOCKDEP_KEYS is currently set to 8191, and typical
+By default, MAX_LOCKDEP_KEYS is currently set to 8192, and typical
desktop systems have less than 1,000 lock classes, so this warning
normally results from lock-class leakage or failure to properly
initialize locks. These two problems are illustrated below:
@@ -383,7 +383,7 @@ you the number of lock classes currently in use along with the maximum::

This command produces the following output on a modest system::

- lock-classes: 748 [max: 8191]
+ lock-classes: 748 [max: 8192]

If the number allocated (748 above) increases continually over time,
then there is likely a leak. The following command can be used to
--
2.30.2

2021-07-11 16:28:35

by Waiman Long

[permalink] [raw]
Subject: Re: [RFC PATCH v1 2/3] locking/lockdep: Unify the return values of check_wait_context()

On 7/11/21 10:14 AM, Xiongwei Song wrote:
> From: Xiongwei Song <[email protected]>
>
> Unity the return values of check_wait_context() as check_prev_add(),
"Unify"?
> check_irq_usage(), etc. 1 means no bug, 0 means there is a bug.
>
> The return values of print_lock_invalid_wait_context() are unnecessary,
> remove them.
>
> Signed-off-by: Xiongwei Song <[email protected]>
> ---
> kernel/locking/lockdep.c | 20 ++++++++++----------
> 1 file changed, 10 insertions(+), 10 deletions(-)
>
> diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
> index bf1c00c881e4..8b50da42f2c6 100644
> --- a/kernel/locking/lockdep.c
> +++ b/kernel/locking/lockdep.c
> @@ -4635,16 +4635,16 @@ static inline short task_wait_context(struct task_struct *curr)
> return LD_WAIT_MAX;
> }
>
> -static int
> +static void
> print_lock_invalid_wait_context(struct task_struct *curr,
> struct held_lock *hlock)
> {
> short curr_inner;
>
> if (!debug_locks_off())
> - return 0;
> + return;
> if (debug_locks_silent)
> - return 0;
> + return;
>
> pr_warn("\n");
> pr_warn("=============================\n");
> @@ -4664,8 +4664,6 @@ print_lock_invalid_wait_context(struct task_struct *curr,
>
> pr_warn("stack backtrace:\n");
> dump_stack();
> -
> - return 0;
> }
>
> /*
> @@ -4691,7 +4689,7 @@ static int check_wait_context(struct task_struct *curr, struct held_lock *next)
> int depth;
>
> if (!next_inner || next->trylock)
> - return 0;
> + return 1;
>
> if (!next_outer)
> next_outer = next_inner;
> @@ -4723,10 +4721,12 @@ static int check_wait_context(struct task_struct *curr, struct held_lock *next)
> }
> }
>
> - if (next_outer > curr_inner)
> - return print_lock_invalid_wait_context(curr, next);
> + if (next_outer > curr_inner) {
> + print_lock_invalid_wait_context(curr, next);
> + return 0;
> + }
>
> - return 0;
> + return 1;
> }
>
> #else /* CONFIG_PROVE_LOCKING */
> @@ -4962,7 +4962,7 @@ static int __lock_acquire(struct lockdep_map *lock, unsigned int subclass,
> #endif
> hlock->pin_count = pin_count;
>
> - if (check_wait_context(curr, hlock))
> + if (!check_wait_context(curr, hlock))
> return 0;
>
> /* Initialize the lock usage bit */

There is also another check_wait_context() in the "#else
CONFIG_PROVE_LOCKING" path that needs to be kept in sync. For clarity,
maybe you should state the meaning of the return value in the comment
above the function.

Cheers,
Longman

check_wait_context

2021-07-11 16:28:55

by Waiman Long

[permalink] [raw]
Subject: Re: [PATCH v1 3/3] locking/lockdep,doc: Correct the max number of lock classes

On 7/11/21 10:14 AM, Xiongwei Song wrote:
> From: Xiongwei Song <[email protected]>
>
> The max number of lock classes is 8192.
>
> Signed-off-by: Xiongwei Song <[email protected]>
> Cc: Jonathan Corbet <[email protected]>
> Cc: [email protected]
> ---
> Documentation/locking/lockdep-design.rst | 4 ++--
> 1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/Documentation/locking/lockdep-design.rst b/Documentation/locking/lockdep-design.rst
> index 82f36cab61bd..5c2dcec684ff 100644
> --- a/Documentation/locking/lockdep-design.rst
> +++ b/Documentation/locking/lockdep-design.rst
> @@ -341,7 +341,7 @@ Exceeding this number will trigger the following lockdep warning::
>
> (DEBUG_LOCKS_WARN_ON(id >= MAX_LOCKDEP_KEYS))
>
> -By default, MAX_LOCKDEP_KEYS is currently set to 8191, and typical
> +By default, MAX_LOCKDEP_KEYS is currently set to 8192, and typical
> desktop systems have less than 1,000 lock classes, so this warning
> normally results from lock-class leakage or failure to properly
> initialize locks. These two problems are illustrated below:
> @@ -383,7 +383,7 @@ you the number of lock classes currently in use along with the maximum::
>
> This command produces the following output on a modest system::
>
> - lock-classes: 748 [max: 8191]
> + lock-classes: 748 [max: 8192]
>
> If the number allocated (748 above) increases continually over time,
> then there is likely a leak. The following command can be used to

Acked-by: Waiman Long <[email protected]>

2021-07-11 16:45:14

by Waiman Long

[permalink] [raw]
Subject: Re: [RFC PATCH v1 1/3] locking/lockdep: Fix false warning of check_wait_context()

On 7/11/21 10:14 AM, Xiongwei Song wrote:
> From: Xiongwei Song <[email protected]>
>
> We now always get a "Invalid wait context" warning with
> CONFIG_PROVE_RAW_LOCK_NESTING=y, see the full warning below:
>
> [ 0.705900] =============================
> [ 0.706002] [ BUG: Invalid wait context ]
> [ 0.706180] 5.13.0+ #4 Not tainted
> [ 0.706349] -----------------------------

I believe CONFIG_PROVE_RAW_LOCK_NESTING is experimental and it is turned
off by default. Turning it on can cause problems, as shown in your
lockdep splat. Limiting it to just PREEMPT_RT will defeat its purpose of
finding potential spinlock nesting problems in non-PREEMPT_RT kernels.
The point is to fix the issues found, not to hide them.

Cheers,
Longman

2021-07-12 08:40:29

by Xiongwei Song

[permalink] [raw]
Subject: Re: [PATCH v1 3/3] locking/lockdep,doc: Correct the max number of lock classes

On Sun, Jul 11, 2021 at 11:22 PM Waiman Long <[email protected]> wrote:
>
> On 7/11/21 10:14 AM, Xiongwei Song wrote:
> > From: Xiongwei Song <[email protected]>
> >
> > The max number of lock classes is 8192.
> >
> > Signed-off-by: Xiongwei Song <[email protected]>
> > Cc: Jonathan Corbet <[email protected]>
> > Cc: [email protected]
> > ---
> > Documentation/locking/lockdep-design.rst | 4 ++--
> > 1 file changed, 2 insertions(+), 2 deletions(-)
> >
> > diff --git a/Documentation/locking/lockdep-design.rst b/Documentation/locking/lockdep-design.rst
> > index 82f36cab61bd..5c2dcec684ff 100644
> > --- a/Documentation/locking/lockdep-design.rst
> > +++ b/Documentation/locking/lockdep-design.rst
> > @@ -341,7 +341,7 @@ Exceeding this number will trigger the following lockdep warning::
> >
> > (DEBUG_LOCKS_WARN_ON(id >= MAX_LOCKDEP_KEYS))
> >
> > -By default, MAX_LOCKDEP_KEYS is currently set to 8191, and typical
> > +By default, MAX_LOCKDEP_KEYS is currently set to 8192, and typical
> > desktop systems have less than 1,000 lock classes, so this warning
> > normally results from lock-class leakage or failure to properly
> > initialize locks. These two problems are illustrated below:
> > @@ -383,7 +383,7 @@ you the number of lock classes currently in use along with the maximum::
> >
> > This command produces the following output on a modest system::
> >
> > - lock-classes: 748 [max: 8191]
> > + lock-classes: 748 [max: 8192]
> >
> > If the number allocated (748 above) increases continually over time,
> > then there is likely a leak. The following command can be used to
>
> Acked-by: Waiman Long <[email protected]>

Thanks.

2021-07-12 08:43:06

by Xiongwei Song

[permalink] [raw]
Subject: Re: [RFC PATCH v1 2/3] locking/lockdep: Unify the return values of check_wait_context()

On Sun, Jul 11, 2021 at 11:19 PM Waiman Long <[email protected]> wrote:
>
> On 7/11/21 10:14 AM, Xiongwei Song wrote:
> > From: Xiongwei Song <[email protected]>
> >
> > Unity the return values of check_wait_context() as check_prev_add(),
> "Unify"?
Sorry. Will improve the description.

> > check_irq_usage(), etc. 1 means no bug, 0 means there is a bug.
> >
> > The return values of print_lock_invalid_wait_context() are unnecessary,
> > remove them.
> >
> > Signed-off-by: Xiongwei Song <[email protected]>
> > ---
> > kernel/locking/lockdep.c | 20 ++++++++++----------
> > 1 file changed, 10 insertions(+), 10 deletions(-)
> >
> > diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
> > index bf1c00c881e4..8b50da42f2c6 100644
> > --- a/kernel/locking/lockdep.c
> > +++ b/kernel/locking/lockdep.c
> > @@ -4635,16 +4635,16 @@ static inline short task_wait_context(struct task_struct *curr)
> > return LD_WAIT_MAX;
> > }
> >
> > -static int
> > +static void
> > print_lock_invalid_wait_context(struct task_struct *curr,
> > struct held_lock *hlock)
> > {
> > short curr_inner;
> >
> > if (!debug_locks_off())
> > - return 0;
> > + return;
> > if (debug_locks_silent)
> > - return 0;
> > + return;
> >
> > pr_warn("\n");
> > pr_warn("=============================\n");
> > @@ -4664,8 +4664,6 @@ print_lock_invalid_wait_context(struct task_struct *curr,
> >
> > pr_warn("stack backtrace:\n");
> > dump_stack();
> > -
> > - return 0;
> > }
> >
> > /*
> > @@ -4691,7 +4689,7 @@ static int check_wait_context(struct task_struct *curr, struct held_lock *next)
> > int depth;
> >
> > if (!next_inner || next->trylock)
> > - return 0;
> > + return 1;
> >
> > if (!next_outer)
> > next_outer = next_inner;
> > @@ -4723,10 +4721,12 @@ static int check_wait_context(struct task_struct *curr, struct held_lock *next)
> > }
> > }
> >
> > - if (next_outer > curr_inner)
> > - return print_lock_invalid_wait_context(curr, next);
> > + if (next_outer > curr_inner) {
> > + print_lock_invalid_wait_context(curr, next);
> > + return 0;
> > + }
> >
> > - return 0;
> > + return 1;
> > }
> >
> > #else /* CONFIG_PROVE_LOCKING */
> > @@ -4962,7 +4962,7 @@ static int __lock_acquire(struct lockdep_map *lock, unsigned int subclass,
> > #endif
> > hlock->pin_count = pin_count;
> >
> > - if (check_wait_context(curr, hlock))
> > + if (!check_wait_context(curr, hlock))
> > return 0;
> >
> > /* Initialize the lock usage bit */
>
> There is also another check_wait_context() in the "#else
> CONFIG_PROVE_LOCKING" path that needs to be kept in sync.
Oops, my fault.

> For clarity,
> maybe you should state the meaning of the return value in the comment
> above the function.
Good point. Thanks.
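
Something like this perhaps (untested sketch, not the posted patch) --
a comment stating the convention above the real function:

/*
 * Returns 1 if the wait context of @next is consistent with the wait
 * contexts of the locks already held by @curr, 0 if a violation was
 * detected (and a splat printed).
 */

and the !CONFIG_PROVE_LOCKING stub updated to match:

static inline int check_wait_context(struct task_struct *curr,
				     struct held_lock *next)
{
	return 1;
}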

Regards,
Xiongwei
>
> Cheers,
> Longman
>
> check_wait_context
>

2021-07-12 09:06:20

by Xiongwei Song

[permalink] [raw]
Subject: Re: [RFC PATCH v1 1/3] locking/lockdep: Fix false warning of check_wait_context()

On Mon, Jul 12, 2021 at 12:43 AM Waiman Long <[email protected]> wrote:
>
> On 7/11/21 10:14 AM, Xiongwei Song wrote:
> > From: Xiongwei Song <[email protected]>
> >
> > We now always get a "Invalid wait context" warning with
> > CONFIG_PROVE_RAW_LOCK_NESTING=y, see the full warning below:
> >
> > [ 0.705900] =============================
> > [ 0.706002] [ BUG: Invalid wait context ]
> > [ 0.706180] 5.13.0+ #4 Not tainted
> > [ 0.706349] -----------------------------
>
> I believe the purpose of CONFIG_PROVE_RAW_LOCK_NESTING is experimental
> and it is turned off by default. Turning it on can cause problem as
> shown in your lockdep splat. Limiting it to just PREEMPT_RT will defeat
> its purpose to find potential spinlock nesting problem in non-PREEMPT_RT
> kernel.
As far as I know, a spinlock can nest inside another spinlock. In a
non-PREEMPT_RT kernel spin_lock and raw_spin_lock are the same, so
acquiring a spin_lock in hardirq context is acceptable here and the
warning is not needed. My knowledge of this is not enough; I will dig
into it.

> The point is to fix the issue found,
Agree. I thought there was a spinlock usage issue, but after checking
the deactivate_slab() context, the spinlock usage looks fine. Maybe I'm
missing something?

> not hiding it from appearing.
I'm not trying to hide it; based on the code context, the fix looks
reasonable from my point of view. Let me check again.

Thank you for the comments.

Regards,
Xiongwei
>
> Cheers,
> Longman
>

2021-07-12 09:10:13

by Boqun Feng

[permalink] [raw]
Subject: Re: [RFC PATCH v1 1/3] locking/lockdep: Fix false warning of check_wait_context()

On Mon, Jul 12, 2021 at 04:18:36PM +0800, Xiongwei Song wrote:
> On Mon, Jul 12, 2021 at 12:43 AM Waiman Long <[email protected]> wrote:
> >
> > On 7/11/21 10:14 AM, Xiongwei Song wrote:
> > > From: Xiongwei Song <[email protected]>
> > >
> > > We now always get a "Invalid wait context" warning with
> > > CONFIG_PROVE_RAW_LOCK_NESTING=y, see the full warning below:
> > >
> > > [ 0.705900] =============================
> > > [ 0.706002] [ BUG: Invalid wait context ]
> > > [ 0.706180] 5.13.0+ #4 Not tainted
> > > [ 0.706349] -----------------------------
> >
> > I believe the purpose of CONFIG_PROVE_RAW_LOCK_NESTING is experimental
> > and it is turned off by default. Turning it on can cause problem as
> > shown in your lockdep splat. Limiting it to just PREEMPT_RT will defeat
> > its purpose to find potential spinlock nesting problem in non-PREEMPT_RT
> > kernel.
> As far as I know, a spinlock can nest another spinlock. In
> non-PREEMPT_RT kernel
> spin_lock and raw_spin_lock are same , so here acquiring a spin_lock in hardirq
> context is acceptable, the warning is not needed. My knowledge on this
> is not enough,
> Will dig into this.
>

You may find this useful: https://lwn.net/Articles/146861/ ;-)

The thing is that most of the irq handlers will run in process context
in a PREEMPT_RT kernel (threaded irqs), while the rest continue to run
in hardirq context. spinlock_t is allowed in threaded irqs but not in
hardirq context for PREEMPT_RT, because spinlock_t becomes a sleepable
lock.
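
Concretely, the kind of nesting that PROVE_RAW_LOCK_NESTING is meant to
catch looks like this (illustrative sketch only, lock names made up for
the example):

static DEFINE_RAW_SPINLOCK(rlock);
static DEFINE_SPINLOCK(slock);

static void bad_nesting_example(void)
{
	raw_spin_lock(&rlock);
	spin_lock(&slock);	/* fine on !PREEMPT_RT (both spin), but on
				 * PREEMPT_RT slock is a sleeping lock, so
				 * taking it under a raw spinlock (or in
				 * hardirq context) is invalid: wait type 3
				 * acquired in a wait-type-2 context
				 */
	spin_unlock(&slock);
	raw_spin_unlock(&rlock);
}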

Regards,
Boqun

> > The point is to fix the issue found,
> Agree. I thought there was a spinlock usage issue, but by checking
> deactivate_slab context,
> looks like the spinlock usage is well. Maybe I'm missing something?
>
> > not hiding it from appearing.
> I'm not trying to hiding it, according to the code context, the fix is
> reasonable from my point of
> view. Let me check again.
>
> Thank you for the comments.
>
> Regards,
> Xiongwei
> >
> > Cheers,
> > Longman
> >

2021-07-12 09:25:05

by Xiongwei Song

[permalink] [raw]
Subject: Re: [RFC PATCH v1 1/3] locking/lockdep: Fix false warning of check_wait_context()

On Mon, Jul 12, 2021 at 4:52 PM Boqun Feng <[email protected]> wrote:
>
> On Mon, Jul 12, 2021 at 04:18:36PM +0800, Xiongwei Song wrote:
> > On Mon, Jul 12, 2021 at 12:43 AM Waiman Long <[email protected]> wrote:
> > >
> > > On 7/11/21 10:14 AM, Xiongwei Song wrote:
> > > > From: Xiongwei Song <[email protected]>
> > > >
> > > > We now always get a "Invalid wait context" warning with
> > > > CONFIG_PROVE_RAW_LOCK_NESTING=y, see the full warning below:
> > > >
> > > > [ 0.705900] =============================
> > > > [ 0.706002] [ BUG: Invalid wait context ]
> > > > [ 0.706180] 5.13.0+ #4 Not tainted
> > > > [ 0.706349] -----------------------------
> > >
> > > I believe the purpose of CONFIG_PROVE_RAW_LOCK_NESTING is experimental
> > > and it is turned off by default. Turning it on can cause problem as
> > > shown in your lockdep splat. Limiting it to just PREEMPT_RT will defeat
> > > its purpose to find potential spinlock nesting problem in non-PREEMPT_RT
> > > kernel.
> > As far as I know, a spinlock can nest another spinlock. In
> > non-PREEMPT_RT kernel
> > spin_lock and raw_spin_lock are same , so here acquiring a spin_lock in hardirq
> > context is acceptable, the warning is not needed. My knowledge on this
> > is not enough,
> > Will dig into this.
> >
>
> You may find this useful: https://lwn.net/Articles/146861/ ;-)
>
> The thing is that most of the irq handlers will run in process contexts
> in PREEMPT_RT kernel (threaded irq), while the rest continues to run in
> hardirq contexts. spinlock_t is allowed int threaded irqs but not in
> hardirq contexts for PREEMPT_RT, because spinlock_t will become
> sleeplable locks.
Exactly. I think I now understand why the fix is incorrect.

Regards,
Xiongwei
>
> Regards,
> Boqun
>
> > > The point is to fix the issue found,
> > Agree. I thought there was a spinlock usage issue, but by checking
> > deactivate_slab context,
> > looks like the spinlock usage is well. Maybe I'm missing something?
> >
> > > not hiding it from appearing.
> > I'm not trying to hiding it, according to the code context, the fix is
> > reasonable from my point of
> > view. Let me check again.
> >
> > Thank you for the comments.
> >
> > Regards,
> > Xiongwei
> > >
> > > Cheers,
> > > Longman
> > >

2021-07-12 13:06:09

by Waiman Long

[permalink] [raw]
Subject: Re: [RFC PATCH v1 1/3] locking/lockdep: Fix false warning of check_wait_context()

On 7/12/21 4:18 AM, Xiongwei Song wrote:
> On Mon, Jul 12, 2021 at 12:43 AM Waiman Long <[email protected]> wrote:
>> On 7/11/21 10:14 AM, Xiongwei Song wrote:
>>> From: Xiongwei Song <[email protected]>
>>>
>>> We now always get a "Invalid wait context" warning with
>>> CONFIG_PROVE_RAW_LOCK_NESTING=y, see the full warning below:
>>>
>>> [ 0.705900] =============================
>>> [ 0.706002] [ BUG: Invalid wait context ]
>>> [ 0.706180] 5.13.0+ #4 Not tainted
>>> [ 0.706349] -----------------------------
>> I believe the purpose of CONFIG_PROVE_RAW_LOCK_NESTING is experimental
>> and it is turned off by default. Turning it on can cause problem as
>> shown in your lockdep splat. Limiting it to just PREEMPT_RT will defeat
>> its purpose to find potential spinlock nesting problem in non-PREEMPT_RT
>> kernel.
> As far as I know, a spinlock can nest another spinlock. In
> non-PREEMPT_RT kernel
> spin_lock and raw_spin_lock are same , so here acquiring a spin_lock in hardirq
> context is acceptable, the warning is not needed. My knowledge on this
> is not enough,
> Will dig into this.
>
>> The point is to fix the issue found,
> Agree. I thought there was a spinlock usage issue, but by checking
> deactivate_slab context,
> looks like the spinlock usage is well. Maybe I'm missing something?

Yes, spinlock and raw spinlock are the same in a non-RT kernel; they are
only different in an RT kernel. However, the non-RT kernel is also more
heavily tested than its RT counterpart. The purpose of this config option
is to expose spinlock nesting problems in more areas of the code. If you
look at the config help text of PROVE_RAW_LOCK_NESTING:

        help
         Enable the raw_spinlock vs. spinlock nesting checks which ensure
         that the lock nesting rules for PREEMPT_RT enabled kernels are
         not violated.

         NOTE: There are known nesting problems. So if you enable this
         option expect lockdep splats until these problems have been fully
         addressed which is work in progress. This config switch allows to
         identify and analyze these problems. It will be removed and the
         check permanentely enabled once the main issues have been fixed.

         If unsure, select N.

So lockdep splats are expected. It will take time to address all the
issues found.

Cheers,
Longman

2021-07-13 02:31:38

by Xiongwei Song

[permalink] [raw]
Subject: Re: [RFC PATCH v1 1/3] locking/lockdep: Fix false warning of check_wait_context()

On Mon, Jul 12, 2021 at 9:04 PM Waiman Long <[email protected]> wrote:
>
> On 7/12/21 4:18 AM, Xiongwei Song wrote:
> > On Mon, Jul 12, 2021 at 12:43 AM Waiman Long <[email protected]> wrote:
> >> On 7/11/21 10:14 AM, Xiongwei Song wrote:
> >>> From: Xiongwei Song <[email protected]>
> >>>
> >>> We now always get a "Invalid wait context" warning with
> >>> CONFIG_PROVE_RAW_LOCK_NESTING=y, see the full warning below:
> >>>
> >>> [ 0.705900] =============================
> >>> [ 0.706002] [ BUG: Invalid wait context ]
> >>> [ 0.706180] 5.13.0+ #4 Not tainted
> >>> [ 0.706349] -----------------------------
> >> I believe the purpose of CONFIG_PROVE_RAW_LOCK_NESTING is experimental
> >> and it is turned off by default. Turning it on can cause problem as
> >> shown in your lockdep splat. Limiting it to just PREEMPT_RT will defeat
> >> its purpose to find potential spinlock nesting problem in non-PREEMPT_RT
> >> kernel.
> > As far as I know, a spinlock can nest another spinlock. In
> > non-PREEMPT_RT kernel
> > spin_lock and raw_spin_lock are same , so here acquiring a spin_lock in hardirq
> > context is acceptable, the warning is not needed. My knowledge on this
> > is not enough,
> > Will dig into this.
> >
> >> The point is to fix the issue found,
> > Agree. I thought there was a spinlock usage issue, but by checking
> > deactivate_slab context,
> > looks like the spinlock usage is well. Maybe I'm missing something?
>
> Yes, spinlock and raw spinlock are the same in non-RT kernel. They are
> only different in RT kernel. However, non-RT kernel is also more heavily
> tested than the RT kernel counterpart. The purpose of this config option
> is to expose spinlock nesting problem in more areas of the code. If you
> look at the config help text of PROVE_RAW_LOCK_NESTING:
>
> help
> Enable the raw_spinlock vs. spinlock nesting checks which ensure
> that the lock nesting rules for PREEMPT_RT enabled kernels are
> not violated.
>
> NOTE: There are known nesting problems. So if you enable this
> option expect lockdep splats until these problems have been fully
> addressed which is work in progress. This config switch allows to
> identify and analyze these problems. It will be removed and the
> check permanentely enabled once the main issues have been fixed.
>
> If unsure, select N.
Yes, I checked it before sending the patch, but didn't understand
everything. Thanks :-).

> So lockdep splat is expected. It will take time to address all the
> issues found.
Ok.

Regards,
Xiongwei
>
> Cheers,
> Longman
>

2021-07-13 12:17:20

by Xiongwei Song

[permalink] [raw]
Subject: Re: [RFC PATCH v1 1/3] locking/lockdep: Fix false warning of check_wait_context()


Regards,
Xiongwei




> On Jul 12, 2021, at 9:04 PM, Waiman Long <[email protected]> wrote:
>
> On 7/12/21 4:18 AM, Xiongwei Song wrote:
>> On Mon, Jul 12, 2021 at 12:43 AM Waiman Long <[email protected]> wrote:
>>> On 7/11/21 10:14 AM, Xiongwei Song wrote:
>>>> From: Xiongwei Song <[email protected]>
>>>>
>>>> We now always get a "Invalid wait context" warning with
>>>> CONFIG_PROVE_RAW_LOCK_NESTING=y, see the full warning below:
>>>>
>>>> [ 0.705900] =============================
>>>> [ 0.706002] [ BUG: Invalid wait context ]
>>>> [ 0.706180] 5.13.0+ #4 Not tainted
>>>> [ 0.706349] -----------------------------
>>> I believe the purpose of CONFIG_PROVE_RAW_LOCK_NESTING is experimental
>>> and it is turned off by default. Turning it on can cause problem as
>>> shown in your lockdep splat. Limiting it to just PREEMPT_RT will defeat
>>> its purpose to find potential spinlock nesting problem in non-PREEMPT_RT
>>> kernel.
>> As far as I know, a spinlock can nest another spinlock. In
>> non-PREEMPT_RT kernel
>> spin_lock and raw_spin_lock are same , so here acquiring a spin_lock in hardirq
>> context is acceptable, the warning is not needed. My knowledge on this
>> is not enough,
>> Will dig into this.
>>
>>> The point is to fix the issue found,
>> Agree. I thought there was a spinlock usage issue, but by checking
>> deactivate_slab context,
>> looks like the spinlock usage is well. Maybe I'm missing something?
>
> Yes, spinlock and raw spinlock are the same in non-RT kernel. They are only different in RT kernel. However, non-RT kernel is also more heavily tested than the RT kernel counterpart. The purpose of this config option is to expose spinlock nesting problem in more areas of the code. If you look at the config help text of PROVE_RAW_LOCK_NESTING:
>
> help
> Enable the raw_spinlock vs. spinlock nesting checks which ensure
> that the lock nesting rules for PREEMPT_RT enabled kernels are
> not violated.
>
> NOTE: There are known nesting problems. So if you enable this
> option expect lockdep splats until these problems have been fully
> addressed which is work in progress. This config switch allows to
> identify and analyze these problems. It will be removed and the
> check permanentely enabled once the main issues have been fixed.
>
> If unsure, select N.
>
> So lockdep splat is expected. It will take time to address all the issues found.
Thanks, :-).

Regards,
Xiongwei

>
> Cheers,
> Longman
>

2021-07-23 02:41:17

by kernel test robot

[permalink] [raw]
Subject: [locking/lockdep] e0a77a7a5a: WARNING:bad_unlock_balance_detected



Greeting,

FYI, we noticed the following commit (built with gcc-9):

commit: e0a77a7a5a75e5e5163669d7625c765504cc2f94 ("[RFC PATCH v1 2/3] locking/lockdep: Unify the return values of check_wait_context()")
url: https://github.com/0day-ci/linux/commits/Xiongwei-Song/locking-lockdep-Fix-false-warning-of-check_wait_context/20210711-221747
base: https://git.kernel.org/cgit/linux/kernel/git/tip/tip.git d1bbfd0c7c9f985e57795a7e0cefc209ebf689c0

in testcase: boot

on test machine: qemu-system-x86_64 -enable-kvm -cpu SandyBridge -smp 2 -m 16G

caused below changes (please refer to attached dmesg/kmsg for entire log/backtrace):


+----------------------------------------+------------+------------+
| | 428eeba5e1 | e0a77a7a5a |
+----------------------------------------+------------+------------+
| boot_successes | 26 | 0 |
| boot_failures | 0 | 6 |
| WARNING:bad_unlock_balance_detected | 0 | 6 |
| is_trying_to_release_lock(pool_lock)at | 0 | 6 |
+----------------------------------------+------------+------------+


If you fix the issue, kindly add following tag
Reported-by: kernel test robot <[email protected]>


[ 0.000000][ T0] WARNING: bad unlock balance detected!
[ 0.000000][ T0] 5.13.0-rc1-00135-ge0a77a7a5a75 #1 Not tainted
[ 0.000000][ T0] -------------------------------------
[ 0.000000][ T0] swapper/0 is trying to release lock (pool_lock) at:
[ 0.000000][ T0] __debug_object_init (kbuild/src/consumer/lib/debugobjects.c:273 kbuild/src/consumer/lib/debugobjects.c:568)
[ 0.000000][ T0] but there are no more locks to release!
[ 0.000000][ T0]
[ 0.000000][ T0] other info that might help us debug this:
[ 0.000000][ T0] no locks held by swapper/0.
[ 0.000000][ T0]
[ 0.000000][ T0] stack backtrace:
[ 0.000000][ T0] CPU: 0 PID: 0 Comm: swapper Not tainted 5.13.0-rc1-00135-ge0a77a7a5a75 #1
[ 0.000000][ T0] Call Trace:
[ 0.000000][ T0] ? lock_release (kbuild/src/consumer/kernel/locking/lockdep.c:5303 kbuild/src/consumer/kernel/locking/lockdep.c:5643)
[ 0.000000][ T0] ? _raw_spin_unlock (kbuild/src/consumer/include/linux/spinlock_api_smp.h:151 kbuild/src/consumer/kernel/locking/spinlock.c:183)
[ 0.000000][ T0] ? __debug_object_init (kbuild/src/consumer/lib/debugobjects.c:273 kbuild/src/consumer/lib/debugobjects.c:568)
[ 0.000000][ T0] ? init_cgroup_housekeeping (kbuild/src/consumer/include/linux/lockdep.h:195 kbuild/src/consumer/include/linux/lockdep.h:202 kbuild/src/consumer/include/linux/lockdep.h:208 kbuild/src/consumer/kernel/cgroup/cgroup.c:1909)
[ 0.000000][ T0] ? init_cgroup_root (kbuild/src/consumer/kernel/cgroup/cgroup.c:1922)
[ 0.000000][ T0] ? cgroup_init_early (kbuild/src/consumer/kernel/cgroup/cgroup.c:5614)
[ 0.000000][ T0] ? start_kernel (kbuild/src/consumer/arch/x86/include/asm/irqflags.h:40 kbuild/src/consumer/arch/x86/include/asm/irqflags.h:75 kbuild/src/consumer/init/main.c:886)
[ 0.000000][ T0] ? copy_bootdata (kbuild/src/consumer/arch/x86/kernel/head64.c:433)
[ 0.000000][ T0] ? secondary_startup_64_no_verify (kbuild/src/consumer/arch/x86/kernel/head_64.S:283)
[ 0.000000][ T0] Linux version 5.13.0-rc1-00135-ge0a77a7a5a75 (kbuild@db8bfb9d9f4c) (gcc-9 (Debian 9.3.0-22) 9.3.0, GNU ld (GNU Binutils for Debian) 2.35.2) #1 Thu Jul 22 11:35:57 CST 2021
[ 0.000000][ T0] Command line: ip=::::vm-snb-147::dhcp root=/dev/ram0 user=lkp job=/lkp/jobs/scheduled/vm-snb-147/boot-1-yocto-x86_64-minimal-20190520.cgz-e0a77a7a5a75e5e5163669d7625c765504cc2f94-20210723-25037-1fky0q3-4.yaml ARCH=x86_64 kconfig=x86_64-randconfig-r012-20210713 branch=linux-review/Xiongwei-Song/locking-lockdep-Fix-false-warning-of-check_wait_context/20210711-221747 commit=e0a77a7a5a75e5e5163669d7625c765504cc2f94 BOOT_IMAGE=/pkg/linux/x86_64-randconfig-r012-20210713/gcc-9/e0a77a7a5a75e5e5163669d7625c765504cc2f94/vmlinuz-5.13.0-rc1-00135-ge0a77a7a5a75 vmalloc=512M initramfs_async=0 page_owner=on max_uptime=600 RESULT_ROOT=/result/boot/1/vm-snb/yocto-x86_64-minimal-20190520.cgz/x86_64-randconfig-r012-20210713/gcc-9/e0a77a7a5a75e5e5163669d7625c765504cc2f94/3 LKP_SERVER=internal-lkp-server selinux=0 debug apic=debug sysrq_always_enabled rcupdate.rcu_cpu_stall_timeout=100 net.ifnames=0 printk.devkmsg=on panic=-1 softlockup_panic=1 nmi_watchdog=panic oops=panic load_
[ 0.000000][ T0] KERNEL supported cpus:
[ 0.000000][ T0] Intel GenuineIntel
[ 0.000000][ T0] Centaur CentaurHauls
[ 0.000000][ T0] x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
[ 0.000000][ T0] x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
[ 0.000000][ T0] x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
[ 0.000000][ T0] x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
[ 0.000000][ T0] x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
[ 0.000000][ T0] BIOS-provided physical RAM map:
[ 0.000000][ T0] BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
[ 0.000000][ T0] BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
[ 0.000000][ T0] BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
[ 0.000000][ T0] BIOS-e820: [mem 0x0000000000100000-0x00000000bffdffff] usable
[ 0.000000][ T0] BIOS-e820: [mem 0x00000000bffe0000-0x00000000bfffffff] reserved
[ 0.000000][ T0] BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
[ 0.000000][ T0] BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
[ 0.000000][ T0] BIOS-e820: [mem 0x0000000100000000-0x000000043fffffff] usable
[ 0.000000][ T0] printk: debug: ignoring loglevel setting.
[ 0.000000][ T0] printk: bootconsole [earlyser0] enabled
[ 0.000000][ T0] NX (Execute Disable) protection: active
[ 0.000000][ T0] SMBIOS 2.8 present.
[ 0.000000][ T0] DMI: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.12.0-1 04/01/2014
[ 0.000000][ T0] Hypervisor detected: KVM
[ 0.000000][ T0] kvm-clock: Using msrs 4b564d01 and 4b564d00
[ 0.000000][ T0] kvm-clock: cpu 0, msr 5a477001, primary cpu clock
[ 0.000006][ T0] kvm-clock: using sched offset of 1327495314 cycles
[ 0.001114][ T0] clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
[ 0.004079][ T0] tsc: Detected 2493.988 MHz processor
[ 0.006744][ T0] e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
[ 0.008135][ T0] e820: remove [mem 0x000a0000-0x000fffff] usable
[ 0.009276][ T0] last_pfn = 0x440000 max_arch_pfn = 0x400000000
[ 0.010470][ T0] x86/PAT: PAT support disabled because CONFIG_X86_PAT is disabled in the kernel.
[ 0.012198][ T0] x86/PAT: Configuration [0-7]: WB WT UC- UC WB WT UC- UC
Memory KASLR using RDTSC...
[ 0.014038][ T0] last_pfn = 0xbffe0 max_arch_pfn = 0x400000000
[ 0.015131][ T0] Scan for SMP in [mem 0x00000000-0x000003ff]
[ 0.016143][ T0] Scan for SMP in [mem 0x0009fc00-0x0009ffff]
[ 0.017017][ T0] Scan for SMP in [mem 0x000f0000-0x000fffff]
[ 0.025962][ T0] found SMP MP-table at [mem 0x000f5a80-0x000f5a8f]
[ 0.027214][ T0] mpc: f5a90-f5b74
[ 0.030043][ T0] RAMDISK: [mem 0x7f797000-0x7fffffff]
[ 0.031073][ T0] ACPI: Early table checksum verification disabled
[ 0.032219][ T0] ACPI: RSDP 0x00000000000F5850 000014 (v00 BOCHS )
[ 0.033450][ T0] ACPI: RSDT 0x00000000BFFE15C9 000030 (v01 BOCHS BXPCRSDT 00000001 BXPC 00000001)
[ 0.035153][ T0] ACPI: FACP 0x00000000BFFE149D 000074 (v01 BOCHS BXPCFACP 00000001 BXPC 00000001)
[ 0.036541][ T0] ACPI: DSDT 0x00000000BFFE0040 00145D (v01 BOCHS BXPCDSDT 00000001 BXPC 00000001)
[ 0.037869][ T0] ACPI: FACS 0x00000000BFFE0000 000040
[ 0.038613][ T0] ACPI: APIC 0x00000000BFFE1511 000080 (v01 BOCHS BXPCAPIC 00000001 BXPC 00000001)
[ 0.040326][ T0] ACPI: HPET 0x00000000BFFE1591 000038 (v01 BOCHS BXPCHPET 00000001 BXPC 00000001)
[ 0.042095][ T0] ACPI: Reserving FACP table memory at [mem 0xbffe149d-0xbffe1510]
[ 0.043560][ T0] ACPI: Reserving DSDT table memory at [mem 0xbffe0040-0xbffe149c]
[ 0.045025][ T0] ACPI: Reserving FACS table memory at [mem 0xbffe0000-0xbffe003f]
[ 0.046492][ T0] ACPI: Reserving APIC table memory at [mem 0xbffe1511-0xbffe1590]
[ 0.047863][ T0] ACPI: Reserving HPET table memory at [mem 0xbffe1591-0xbffe15c8]
[ 0.049237][ T0] ACPI: Local APIC address 0xfee00000
[ 0.050155][ T0] mapped APIC to ffffffffff5fd000 ( fee00000)
[ 0.051437][ T0] cma: dma_contiguous_reserve(limit 440000000)
[ 0.156468][ T0] Zone ranges:
[ 0.157142][ T0] DMA [mem 0x0000000000001000-0x0000000000ffffff]
[ 0.158408][ T0] DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
[ 0.159598][ T0] Normal [mem 0x0000000100000000-0x000000043fffffff]
[ 0.160801][ T0] Movable zone start for each node
[ 0.161703][ T0] Early memory node ranges
[ 0.162412][ T0] node 0: [mem 0x0000000000001000-0x000000000009efff]
[ 0.163664][ T0] node 0: [mem 0x0000000000100000-0x00000000bffdffff]
[ 0.164905][ T0] node 0: [mem 0x0000000100000000-0x000000043fffffff]
[ 0.166193][ T0] Initmem setup node 0 [mem 0x0000000000001000-0x000000043fffffff]
[ 0.167603][ T0] On node 0 totalpages: 4194174
[ 0.168453][ T0] DMA zone: 56 pages used for memmap
[ 0.169328][ T0] DMA zone: 21 pages reserved
[ 0.170204][ T0] DMA zone: 3998 pages, LIFO batch:0
[ 0.171680][ T0] DMA zone: 28770 pages in unavailable ranges
[ 0.172690][ T0] DMA32 zone: 10696 pages used for memmap
[ 0.173664][ T0] DMA32 zone: 782304 pages, LIFO batch:63
[ 0.185335][ T0] DMA32 zone: 32 pages in unavailable ranges
[ 0.186534][ T0] Normal zone: 46592 pages used for memmap
[ 0.187533][ T0] Normal zone: 3407872 pages, LIFO batch:63


To reproduce:

# build kernel
cd linux
cp config-5.13.0-rc1-00135-ge0a77a7a5a75 .config
make HOSTCC=gcc-9 CC=gcc-9 ARCH=x86_64 olddefconfig prepare modules_prepare bzImage

git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp qemu -k <bzImage> job-script # job-script is attached in this email



---
0DAY/LKP+ Test Infrastructure Open Source Technology Center
https://lists.01.org/hyperkitty/list/[email protected] Intel Corporation

Thanks,
Oliver Sang


Attachments:
(No filename) (10.58 kB)
config-5.13.0-rc1-00135-ge0a77a7a5a75 (138.06 kB)
job-script (4.71 kB)
dmesg.xz (13.57 kB)