2023-12-29 07:51:09

by Andrea Righi

Subject: [PATCH] kernfs: convert kernfs_idr_lock to an irq safe raw spinlock

bpf_cgroup_from_id() is basically a wrapper around cgroup_get_from_id(),
which relies on kernfs to determine the right cgroup associated with
the target id.

As a kfunc, it has the potential to be attached to any function through
BPF, particularly in contexts where certain locks are held.

However, kernfs does not use an irq-safe spinlock for kernfs_idr_lock,
which means any kernfs function that acquires this lock can be
interrupted and potentially hit bpf_cgroup_from_id() in the process,
triggering a deadlock.

For example, it is really easy to trigger a lockdep splat between
kernfs_idr_lock and rq->__lock by attaching a small BPF program to
__set_cpus_allowed_ptr_locked() that just calls bpf_cgroup_from_id():

=====================================================
WARNING: HARDIRQ-safe -> HARDIRQ-unsafe lock order detected
6.7.0-rc7-virtme #5 Not tainted
-----------------------------------------------------
repro/131 [HC0[0]:SC0[0]:HE0:SE1] is trying to acquire:
ffffffffb2dc4578 (kernfs_idr_lock){+.+.}-{2:2}, at: kernfs_find_and_get_node_by_id+0x1d/0x80

and this task is already holding:
ffff911cbecaf218 (&rq->__lock){-.-.}-{2:2}, at: task_rq_lock+0x50/0xc0
which would create a new lock dependency:
(&rq->__lock){-.-.}-{2:2} -> (kernfs_idr_lock){+.+.}-{2:2}

but this new dependency connects a HARDIRQ-irq-safe lock:
(&rq->__lock){-.-.}-{2:2}

... which became HARDIRQ-irq-safe at:
lock_acquire+0xbf/0x2b0
_raw_spin_lock_nested+0x2e/0x40
scheduler_tick+0x5d/0x170
update_process_times+0x9c/0xb0
tick_periodic+0x27/0xe0
tick_handle_periodic+0x24/0x70
__sysvec_apic_timer_interrupt+0x64/0x1a0
sysvec_apic_timer_interrupt+0x6f/0x80
asm_sysvec_apic_timer_interrupt+0x1a/0x20
memcpy+0xc/0x20
arch_dup_task_struct+0x15/0x30
copy_process+0x1ce/0x1eb0
kernel_clone+0xac/0x390
kernel_thread+0x6f/0xa0
kthreadd+0x199/0x230
ret_from_fork+0x31/0x50
ret_from_fork_asm+0x1b/0x30

to a HARDIRQ-irq-unsafe lock:
(kernfs_idr_lock){+.+.}-{2:2}

... which became HARDIRQ-irq-unsafe at:
...
lock_acquire+0xbf/0x2b0
_raw_spin_lock+0x30/0x40
__kernfs_new_node.isra.0+0x83/0x280
kernfs_create_root+0xf6/0x1d0
sysfs_init+0x1b/0x70
mnt_init+0xd9/0x2a0
vfs_caches_init+0xcf/0xe0
start_kernel+0x58a/0x6a0
x86_64_start_reservations+0x18/0x30
x86_64_start_kernel+0xc5/0xe0
secondary_startup_64_no_verify+0x178/0x17b

other info that might help us debug this:

Possible interrupt unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(kernfs_idr_lock);
                               local_irq_disable();
                               lock(&rq->__lock);
                               lock(kernfs_idr_lock);
  <Interrupt>
    lock(&rq->__lock);

*** DEADLOCK ***
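
A minimal BPF program along these lines (a sketch based on the
description above: the attach point is the one mentioned, everything
else, including the program name, is illustrative) can look like:

  /* repro.bpf.c: call the bpf_cgroup_from_id() kfunc in a context
   * where rq->__lock is already held.
   */
  #include "vmlinux.h"
  #include <bpf/bpf_helpers.h>
  #include <bpf/bpf_tracing.h>

  struct cgroup *bpf_cgroup_from_id(u64 cgid) __ksym;
  void bpf_cgroup_release(struct cgroup *cgrp) __ksym;

  SEC("fentry/__set_cpus_allowed_ptr_locked")
  int BPF_PROG(repro)
  {
          /* The id lookup goes through kernfs_find_and_get_node_by_id(),
           * which takes kernfs_idr_lock while rq->__lock is held.
           */
          struct cgroup *cgrp = bpf_cgroup_from_id(1);

          if (cgrp)
                  bpf_cgroup_release(cgrp);
          return 0;
  }

  char LICENSE[] SEC("license") = "GPL";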

Prevent this deadlock condition by converting kernfs_idr_lock to an
irq-safe raw spinlock.

The performance impact of this change should be negligible, and it also
helps prevent similar deadlock conditions in any other subsystem that
may depend on kernfs.

Fixes: 332ea1f697be ("bpf: Add bpf_cgroup_from_id() kfunc")
Signed-off-by: Andrea Righi <[email protected]>
---
fs/kernfs/dir.c | 23 +++++++++++++----------
1 file changed, 13 insertions(+), 10 deletions(-)

diff --git a/fs/kernfs/dir.c b/fs/kernfs/dir.c
index 8b2bd65d70e7..9ce7d2872b55 100644
--- a/fs/kernfs/dir.c
+++ b/fs/kernfs/dir.c
@@ -27,7 +27,7 @@ static DEFINE_RWLOCK(kernfs_rename_lock);	/* kn->parent and ->name */
  */
 static DEFINE_SPINLOCK(kernfs_pr_cont_lock);
 static char kernfs_pr_cont_buf[PATH_MAX];	/* protected by pr_cont_lock */
-static DEFINE_SPINLOCK(kernfs_idr_lock);	/* root->ino_idr */
+static DEFINE_RAW_SPINLOCK(kernfs_idr_lock);	/* root->ino_idr */
 
 #define rb_to_kn(X) rb_entry((X), struct kernfs_node, rb)
 
@@ -539,6 +539,7 @@ void kernfs_put(struct kernfs_node *kn)
 {
 	struct kernfs_node *parent;
 	struct kernfs_root *root;
+	unsigned long flags;
 
 	if (!kn || !atomic_dec_and_test(&kn->count))
 		return;
@@ -563,9 +564,9 @@ void kernfs_put(struct kernfs_node *kn)
 		simple_xattrs_free(&kn->iattr->xattrs, NULL);
 		kmem_cache_free(kernfs_iattrs_cache, kn->iattr);
 	}
-	spin_lock(&kernfs_idr_lock);
+	raw_spin_lock_irqsave(&kernfs_idr_lock, flags);
 	idr_remove(&root->ino_idr, (u32)kernfs_ino(kn));
-	spin_unlock(&kernfs_idr_lock);
+	raw_spin_unlock_irqrestore(&kernfs_idr_lock, flags);
 	kmem_cache_free(kernfs_node_cache, kn);
 
 	kn = parent;
@@ -607,6 +608,7 @@ static struct kernfs_node *__kernfs_new_node(struct kernfs_root *root,
 	struct kernfs_node *kn;
 	u32 id_highbits;
 	int ret;
+	unsigned long irqflags;
 
 	name = kstrdup_const(name, GFP_KERNEL);
 	if (!name)
@@ -617,13 +619,13 @@ static struct kernfs_node *__kernfs_new_node(struct kernfs_root *root,
 		goto err_out1;
 
 	idr_preload(GFP_KERNEL);
-	spin_lock(&kernfs_idr_lock);
+	raw_spin_lock_irqsave(&kernfs_idr_lock, irqflags);
 	ret = idr_alloc_cyclic(&root->ino_idr, kn, 1, 0, GFP_ATOMIC);
 	if (ret >= 0 && ret < root->last_id_lowbits)
 		root->id_highbits++;
 	id_highbits = root->id_highbits;
 	root->last_id_lowbits = ret;
-	spin_unlock(&kernfs_idr_lock);
+	raw_spin_unlock_irqrestore(&kernfs_idr_lock, irqflags);
 	idr_preload_end();
 	if (ret < 0)
 		goto err_out2;
@@ -659,9 +661,9 @@ static struct kernfs_node *__kernfs_new_node(struct kernfs_root *root,
 	return kn;
 
 err_out3:
-	spin_lock(&kernfs_idr_lock);
+	raw_spin_lock_irqsave(&kernfs_idr_lock, irqflags);
 	idr_remove(&root->ino_idr, (u32)kernfs_ino(kn));
-	spin_unlock(&kernfs_idr_lock);
+	raw_spin_unlock_irqrestore(&kernfs_idr_lock, irqflags);
 err_out2:
 	kmem_cache_free(kernfs_node_cache, kn);
 err_out1:
@@ -702,8 +704,9 @@ struct kernfs_node *kernfs_find_and_get_node_by_id(struct kernfs_root *root,
 	struct kernfs_node *kn;
 	ino_t ino = kernfs_id_ino(id);
 	u32 gen = kernfs_id_gen(id);
+	unsigned long flags;
 
-	spin_lock(&kernfs_idr_lock);
+	raw_spin_lock_irqsave(&kernfs_idr_lock, flags);
 
 	kn = idr_find(&root->ino_idr, (u32)ino);
 	if (!kn)
@@ -727,10 +730,10 @@ struct kernfs_node *kernfs_find_and_get_node_by_id(struct kernfs_root *root,
 	if (unlikely(!__kernfs_active(kn) || !atomic_inc_not_zero(&kn->count)))
 		goto err_unlock;
 
-	spin_unlock(&kernfs_idr_lock);
+	raw_spin_unlock_irqrestore(&kernfs_idr_lock, flags);
 	return kn;
 err_unlock:
-	spin_unlock(&kernfs_idr_lock);
+	raw_spin_unlock_irqrestore(&kernfs_idr_lock, flags);
 	return NULL;
 }

--
2.43.0



2023-12-29 09:14:36

by Tejun Heo

Subject: Re: [PATCH] kernfs: convert kernfs_idr_lock to an irq safe raw spinlock

On Fri, Dec 29, 2023 at 08:49:16AM +0100, Andrea Righi wrote:
> bpf_cgroup_from_id() is basically a wrapper around cgroup_get_from_id(),
> which relies on kernfs to determine the right cgroup associated with
> the target id.
>
> As a kfunc, it has the potential to be attached to any function through
> BPF, particularly in contexts where certain locks are held.
>
> However, kernfs does not use an irq-safe spinlock for kernfs_idr_lock,
> which means any kernfs function that acquires this lock can be
> interrupted and potentially hit bpf_cgroup_from_id() in the process,
> triggering a deadlock.
>
> For example, it is really easy to trigger a lockdep splat between
> kernfs_idr_lock and rq->__lock by attaching a small BPF program to
> __set_cpus_allowed_ptr_locked() that just calls bpf_cgroup_from_id():
...
> Fixes: 332ea1f697be ("bpf: Add bpf_cgroup_from_id() kfunc")
> Signed-off-by: Andrea Righi <[email protected]>

Acked-by: Tejun Heo <[email protected]>

As an immediate fix, this looks great. In the longer term, it probably would
make sense to make that idr RCU protected so that lookup path doesn't have
to worry about locking order.
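
Roughly, on the lookup side (just an untested sketch of the idea;
generation and active checks omitted): idr_find() can already run under
rcu_read_lock() as long as slot updates stay serialized and the nodes
are freed after an RCU grace period, so the lookup could become:

  rcu_read_lock();
  kn = idr_find(&root->ino_idr, (u32)kernfs_id_ino(id));
  /* RCU keeps the node's memory around; still take a real reference */
  if (kn && !atomic_inc_not_zero(&kn->count))
          kn = NULL;
  rcu_read_unlock();
  return kn;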

Thanks.

--
tejun

2024-01-09 16:36:05

by Geert Uytterhoeven

Subject: Re: [PATCH] kernfs: convert kernfs_idr_lock to an irq safe raw spinlock

Hi Andrea,

On Fri, Dec 29, 2023 at 8:56 AM Andrea Righi <[email protected]> wrote:
> bpf_cgroup_from_id() is basically a wrapper around cgroup_get_from_id(),
> which relies on kernfs to determine the right cgroup associated with
> the target id.
>
> As a kfunc, it has the potential to be attached to any function through
> BPF, particularly in contexts where certain locks are held.
>
> However, kernfs does not use an irq-safe spinlock for kernfs_idr_lock,
> which means any kernfs function that acquires this lock can be
> interrupted and potentially hit bpf_cgroup_from_id() in the process,
> triggering a deadlock.
>
> For example, it is really easy to trigger a lockdep splat between
> kernfs_idr_lock and rq->__lock by attaching a small BPF program to
> __set_cpus_allowed_ptr_locked() that just calls bpf_cgroup_from_id():
>
> =====================================================
> WARNING: HARDIRQ-safe -> HARDIRQ-unsafe lock order detected
> 6.7.0-rc7-virtme #5 Not tainted
> -----------------------------------------------------
> repro/131 [HC0[0]:SC0[0]:HE0:SE1] is trying to acquire:
> ffffffffb2dc4578 (kernfs_idr_lock){+.+.}-{2:2}, at: kernfs_find_and_get_node_by_id+0x1d/0x80
>
> and this task is already holding:
> ffff911cbecaf218 (&rq->__lock){-.-.}-{2:2}, at: task_rq_lock+0x50/0xc0
> which would create a new lock dependency:
> (&rq->__lock){-.-.}-{2:2} -> (kernfs_idr_lock){+.+.}-{2:2}
>
> but this new dependency connects a HARDIRQ-irq-safe lock:
> (&rq->__lock){-.-.}-{2:2}
>
> ... which became HARDIRQ-irq-safe at:
> lock_acquire+0xbf/0x2b0
> _raw_spin_lock_nested+0x2e/0x40
> scheduler_tick+0x5d/0x170
> update_process_times+0x9c/0xb0
> tick_periodic+0x27/0xe0
> tick_handle_periodic+0x24/0x70
> __sysvec_apic_timer_interrupt+0x64/0x1a0
> sysvec_apic_timer_interrupt+0x6f/0x80
> asm_sysvec_apic_timer_interrupt+0x1a/0x20
> memcpy+0xc/0x20
> arch_dup_task_struct+0x15/0x30
> copy_process+0x1ce/0x1eb0
> kernel_clone+0xac/0x390
> kernel_thread+0x6f/0xa0
> kthreadd+0x199/0x230
> ret_from_fork+0x31/0x50
> ret_from_fork_asm+0x1b/0x30
>
> to a HARDIRQ-irq-unsafe lock:
> (kernfs_idr_lock){+.+.}-{2:2}
>
> ... which became HARDIRQ-irq-unsafe at:
> ...
> lock_acquire+0xbf/0x2b0
> _raw_spin_lock+0x30/0x40
> __kernfs_new_node.isra.0+0x83/0x280
> kernfs_create_root+0xf6/0x1d0
> sysfs_init+0x1b/0x70
> mnt_init+0xd9/0x2a0
> vfs_caches_init+0xcf/0xe0
> start_kernel+0x58a/0x6a0
> x86_64_start_reservations+0x18/0x30
> x86_64_start_kernel+0xc5/0xe0
> secondary_startup_64_no_verify+0x178/0x17b
>
> other info that might help us debug this:
>
> Possible interrupt unsafe locking scenario:
>
>        CPU0                    CPU1
>        ----                    ----
>   lock(kernfs_idr_lock);
>                                local_irq_disable();
>                                lock(&rq->__lock);
>                                lock(kernfs_idr_lock);
>   <Interrupt>
>     lock(&rq->__lock);
>
> *** DEADLOCK ***
>
> Prevent this deadlock condition by converting kernfs_idr_lock to an
> irq-safe raw spinlock.
>
> The performance impact of this change should be negligible, and it also
> helps prevent similar deadlock conditions in any other subsystem that
> may depend on kernfs.
>
> Fixes: 332ea1f697be ("bpf: Add bpf_cgroup_from_id() kfunc")
> Signed-off-by: Andrea Righi <[email protected]>

Thanks for your patch, which is now commit c312828c37a72fe2
("kernfs: convert kernfs_idr_lock to an irq safe raw spinlock")
in driver-core/driver-core-next.

Unfortunately this interacts badly with commit 4eff7d62abdeb293 ("Revert
"mm/kmemleak: move the initialisation of object to __link_object"")
in v6.7-rc5.

driver-core/driver-core-next is still at v6.7-rc3, so it does not
yet have commit 4eff7d62abdeb293, and thus still triggers:

=============================
[ BUG: Invalid wait context ]
6.7.0-rc3-kzm9g-00052-gc312828c37a7 #576 Not tainted
-----------------------------
swapper/0 is trying to lock:
c0c6e3c4 (&zone->lock){....}-{3:3}, at: __rmqueue_pcplist+0x358/0x3c8
other info that might help us debug this:
context-{5:5}
3 locks held by swapper/0:
#0: c0bf35a0 (slab_mutex){....}-{4:4}, at:
kmem_cache_create_usercopy+0xc8/0x2d0
#1: c0bfab0c (kmemleak_lock){....}-{2:2}, at: __create_object+0x2c/0x7c
#2: dfbc8c90 (&pcp->lock){....}-{3:3}, at:
get_page_from_freelist+0x1a0/0x684
stack backtrace:
CPU: 0 PID: 0 Comm: swapper Not tainted
6.7.0-rc3-kzm9g-00052-gc312828c37a7 #576
Hardware name: Generic SH73A0 (Flattened Device Tree)
unwind_backtrace from show_stack+0x10/0x14
show_stack from dump_stack_lvl+0x68/0x90
dump_stack_lvl from __lock_acquire+0x3cc/0x168c
__lock_acquire from lock_acquire+0x274/0x30c
lock_acquire from _raw_spin_lock_irqsave+0x50/0x64
_raw_spin_lock_irqsave from __rmqueue_pcplist+0x358/0x3c8
__rmqueue_pcplist from get_page_from_freelist+0x3bc/0x684
get_page_from_freelist from __alloc_pages+0xe8/0xad8
__alloc_pages from __stack_depot_save+0x160/0x398
__stack_depot_save from set_track_prepare+0x48/0x74
set_track_prepare from __link_object+0xac/0x204
__link_object from __create_object+0x48/0x7c
__create_object from kmemleak_alloc+0x2c/0x38
kmemleak_alloc from slab_post_alloc_hook.constprop.0+0x9c/0xac
slab_post_alloc_hook.constprop.0 from kmem_cache_alloc+0xcc/0x148
kmem_cache_alloc from kmem_cache_create_usercopy+0x1c4/0x2d0
kmem_cache_create_usercopy from kmem_cache_create+0x1c/0x24
kmem_cache_create from kmemleak_init+0x58/0xfc
kmemleak_init from mm_core_init+0x244/0x2c8
mm_core_init from start_kernel+0x274/0x528
start_kernel from 0x0

After merging driver-core/driver-core-next into a tree based on
v6.7-rc5, or after cherry-picking commit 4eff7d62abdeb293 into
driver-core/driver-core-next, the above BUG is gone, but a different
one appears:

=============================
[ BUG: Invalid wait context ]
6.7.0-rc5-kzm9g-00251-g655022a45b1c #578 Not tainted
-----------------------------
swapper/0/0 is trying to lock:
dfbcd488 (&c->lock){....}-{3:3}, at: local_lock_acquire+0x0/0xa4
other info that might help us debug this:
context-{5:5}
2 locks held by swapper/0/0:
#0: dfbc9c60 (lock){+.+.}-{3:3}, at: local_lock_acquire+0x0/0xa4
#1: c0c012a8 (kernfs_idr_lock){....}-{2:2}, at:
__kernfs_new_node.constprop.0+0x68/0x258
stack backtrace:
CPU: 0 PID: 0 Comm: swapper/0 Not tainted
6.7.0-rc5-kzm9g-00251-g655022a45b1c #578
Hardware name: Generic SH73A0 (Flattened Device Tree)
unwind_backtrace from show_stack+0x10/0x14
show_stack from dump_stack_lvl+0x68/0x90
dump_stack_lvl from __lock_acquire+0x3cc/0x168c
__lock_acquire from lock_acquire+0x274/0x30c
lock_acquire from local_lock_acquire+0x28/0xa4
local_lock_acquire from ___slab_alloc+0x234/0x8a8
___slab_alloc from __slab_alloc.constprop.0+0x30/0x44
__slab_alloc.constprop.0 from kmem_cache_alloc+0x7c/0x148
kmem_cache_alloc from radix_tree_node_alloc.constprop.0+0x44/0xdc
radix_tree_node_alloc.constprop.0 from idr_get_free+0x110/0x2b8
idr_get_free from idr_alloc_u32+0x9c/0x108
idr_alloc_u32 from idr_alloc_cyclic+0x50/0xb8
idr_alloc_cyclic from __kernfs_new_node.constprop.0+0x88/0x258
__kernfs_new_node.constprop.0 from kernfs_create_root+0xbc/0x154
kernfs_create_root from sysfs_init+0x18/0x5c
sysfs_init from mnt_init+0xc4/0x220
mnt_init from vfs_caches_init+0x6c/0x88
vfs_caches_init from start_kernel+0x474/0x528
start_kernel from 0x0

Reverting commit c312828c37a72fe2 fixes that.
I have seen this issue on several Renesas arm32 and arm64 platforms.

Also, I am wondering if the issue fixed by commit c312828c37a72fe2
can still be reproduced on v6.7-rc5 or later?

Thanks!

Gr{oetje,eeting}s,

Geert

--
Geert Uytterhoeven -- There's lots of Linux beyond ia32 -- geert@linux-m68k.org

In personal conversations with technical people, I call myself a hacker. But
when I'm talking to journalists I just say "programmer" or something like that.
-- Linus Torvalds

2024-01-09 17:05:40

by Andrea Righi

Subject: Re: [PATCH] kernfs: convert kernfs_idr_lock to an irq safe raw spinlock

On Tue, Jan 09, 2024 at 05:35:36PM +0100, Geert Uytterhoeven wrote:
..
> Thanks for your patch, which is now commit c312828c37a72fe2
> ("kernfs: convert kernfs_idr_lock to an irq safe raw spinlock")
> in driver-core/driver-core-next.
>
> Unfortunately this interacts badly with commit 4eff7d62abdeb293 ("Revert
> "mm/kmemleak: move the initialisation of object to __link_object"")
> in v6.7-rc5.
>
> driver-core/driver-core-next is still at v6.7-rc3, so it does not
> yet have commit 4eff7d62abdeb293, and thus still triggers:
>
> =============================
> [ BUG: Invalid wait context ]
> 6.7.0-rc3-kzm9g-00052-gc312828c37a7 #576 Not tainted
> -----------------------------
> swapper/0 is trying to lock:
> c0c6e3c4 (&zone->lock){....}-{3:3}, at: __rmqueue_pcplist+0x358/0x3c8
> other info that might help us debug this:
> context-{5:5}
> 3 locks held by swapper/0:
> #0: c0bf35a0 (slab_mutex){....}-{4:4}, at:
> kmem_cache_create_usercopy+0xc8/0x2d0
> #1: c0bfab0c (kmemleak_lock){....}-{2:2}, at: __create_object+0x2c/0x7c
> #2: dfbc8c90 (&pcp->lock){....}-{3:3}, at:
> get_page_from_freelist+0x1a0/0x684
> stack backtrace:
> CPU: 0 PID: 0 Comm: swapper Not tainted
> 6.7.0-rc3-kzm9g-00052-gc312828c37a7 #576
> Hardware name: Generic SH73A0 (Flattened Device Tree)
> unwind_backtrace from show_stack+0x10/0x14
> show_stack from dump_stack_lvl+0x68/0x90
> dump_stack_lvl from __lock_acquire+0x3cc/0x168c
> __lock_acquire from lock_acquire+0x274/0x30c
> lock_acquire from _raw_spin_lock_irqsave+0x50/0x64
> _raw_spin_lock_irqsave from __rmqueue_pcplist+0x358/0x3c8
> __rmqueue_pcplist from get_page_from_freelist+0x3bc/0x684
> get_page_from_freelist from __alloc_pages+0xe8/0xad8
> __alloc_pages from __stack_depot_save+0x160/0x398
> __stack_depot_save from set_track_prepare+0x48/0x74
> set_track_prepare from __link_object+0xac/0x204
> __link_object from __create_object+0x48/0x7c
> __create_object from kmemleak_alloc+0x2c/0x38
> kmemleak_alloc from slab_post_alloc_hook.constprop.0+0x9c/0xac
> slab_post_alloc_hook.constprop.0 from kmem_cache_alloc+0xcc/0x148
> kmem_cache_alloc from kmem_cache_create_usercopy+0x1c4/0x2d0
> kmem_cache_create_usercopy from kmem_cache_create+0x1c/0x24
> kmem_cache_create from kmemleak_init+0x58/0xfc
> kmemleak_init from mm_core_init+0x244/0x2c8
> mm_core_init from start_kernel+0x274/0x528
> start_kernel from 0x0
>
> After merging driver-core/driver-core-next into a tree based on
> v6.7-rc5, or after cherry-picking commit 4eff7d62abdeb293 into
> driver-core/driver-core-next, the above BUG is gone, but a different
> one appears:
>
> =============================
> [ BUG: Invalid wait context ]
> 6.7.0-rc5-kzm9g-00251-g655022a45b1c #578 Not tainted
> -----------------------------
> swapper/0/0 is trying to lock:
> dfbcd488 (&c->lock){....}-{3:3}, at: local_lock_acquire+0x0/0xa4
> other info that might help us debug this:
> context-{5:5}
> 2 locks held by swapper/0/0:
> #0: dfbc9c60 (lock){+.+.}-{3:3}, at: local_lock_acquire+0x0/0xa4
> #1: c0c012a8 (kernfs_idr_lock){....}-{2:2}, at:
> __kernfs_new_node.constprop.0+0x68/0x258
> stack backtrace:
> CPU: 0 PID: 0 Comm: swapper/0 Not tainted
> 6.7.0-rc5-kzm9g-00251-g655022a45b1c #578
> Hardware name: Generic SH73A0 (Flattened Device Tree)
> unwind_backtrace from show_stack+0x10/0x14
> show_stack from dump_stack_lvl+0x68/0x90
> dump_stack_lvl from __lock_acquire+0x3cc/0x168c
> __lock_acquire from lock_acquire+0x274/0x30c
> lock_acquire from local_lock_acquire+0x28/0xa4
> local_lock_acquire from ___slab_alloc+0x234/0x8a8
> ___slab_alloc from __slab_alloc.constprop.0+0x30/0x44
> __slab_alloc.constprop.0 from kmem_cache_alloc+0x7c/0x148
> kmem_cache_alloc from radix_tree_node_alloc.constprop.0+0x44/0xdc
> radix_tree_node_alloc.constprop.0 from idr_get_free+0x110/0x2b8
> idr_get_free from idr_alloc_u32+0x9c/0x108
> idr_alloc_u32 from idr_alloc_cyclic+0x50/0xb8
> idr_alloc_cyclic from __kernfs_new_node.constprop.0+0x88/0x258
> __kernfs_new_node.constprop.0 from kernfs_create_root+0xbc/0x154
> kernfs_create_root from sysfs_init+0x18/0x5c
> sysfs_init from mnt_init+0xc4/0x220
> mnt_init from vfs_caches_init+0x6c/0x88
> vfs_caches_init from start_kernel+0x474/0x528
> start_kernel from 0x0
>
> Reverting commit c312828c37a72fe2 fixes that.
> I have seen this issue on several Renesas arm32 and arm64 platforms.
>
> Also, I am wondering if the issue fixed by commit c312828c37a72fe2
> can still be reproduced on v6.7-rc5 or later?

Yep, I can still reproduce it (this is with v6.7):

[ 3.082273]
[ 3.082822] =====================================================
[ 3.084543] WARNING: HARDIRQ-safe -> HARDIRQ-unsafe lock order detected
[ 3.086252] 6.7.0-virtme #4 Not tainted
[ 3.087002] -----------------------------------------------------
[ 3.087385] swapper/5/0 [HC0[0]:SC0[0]:HE0:SE1] is trying to acquire:
[ 3.087768] ffffffff8f9c5378 (kernfs_idr_lock){+.+.}-{2:2}, at: kernfs_find_and_get_node_by_id+0x1d/0x80
[ 3.088335]
[ 3.088335] and this task is already holding:
[ 3.088685] ffff8a83becbf758 (&rq->__lock){-.-.}-{2:2}, at: __schedule+0xda/0xef0
[ 3.089128] which would create a new lock dependency:
[ 3.089435] (&rq->__lock){-.-.}-{2:2} -> (kernfs_idr_lock){+.+.}-{2:2}
[ 3.089827]
[ 3.089827] but this new dependency connects a HARDIRQ-irq-safe lock:
[ 3.090296] (&rq->__lock){-.-.}-{2:2}
[ 3.090297]
[ 3.090297] ... which became HARDIRQ-irq-safe at:
[ 3.090885] lock_acquire+0xcb/0x2c0
[ 3.091108] _raw_spin_lock_nested+0x2e/0x40
[ 3.091374] scheduler_tick+0x5b/0x3d0
[ 3.091607] update_process_times+0x9c/0xb0
[ 3.091867] tick_periodic+0x27/0xe0
[ 3.092089] tick_handle_periodic+0x24/0x70
[ 3.092351] timer_interrupt+0x18/0x30
[ 3.092585] __handle_irq_event_percpu+0x8b/0x240
[ 3.092865] handle_irq_event+0x38/0x80
[ 3.093095] handle_level_irq+0x90/0x170
[ 3.093340] __common_interrupt+0x4a/0xf0
[ 3.093586] common_interrupt+0x83/0xa0
[ 3.093820] asm_common_interrupt+0x26/0x40
[ 3.094080] _raw_spin_unlock_irqrestore+0x36/0x70
[ 3.094381] __setup_irq+0x441/0x6a0
[ 3.094602] request_threaded_irq+0xe5/0x190
[ 3.094862] hpet_time_init+0x3a/0x60
[ 3.095090] x86_late_time_init+0x1b/0x40
[ 3.095344] start_kernel+0x53a/0x6a0
[ 3.095569] x86_64_start_reservations+0x18/0x30
[ 3.095849] x86_64_start_kernel+0xc5/0xe0
[ 3.096097] secondary_startup_64_no_verify+0x178/0x17b
[ 3.096426]
[ 3.096426] to a HARDIRQ-irq-unsafe lock:
[ 3.096749] (kernfs_idr_lock){+.+.}-{2:2}
[ 3.096751]
[ 3.096751] ... which became HARDIRQ-irq-unsafe at:
[ 3.097372] ...
[ 3.097372] lock_acquire+0xcb/0x2c0
[ 3.097701] _raw_spin_lock+0x30/0x40
[ 3.097925] __kernfs_new_node.isra.0+0x83/0x280
[ 3.098205] kernfs_create_root+0xf6/0x1d0
[ 3.098463] sysfs_init+0x1b/0x70
[ 3.098670] mnt_init+0xd9/0x2a0
[ 3.098872] vfs_caches_init+0xcf/0xe0
[ 3.099105] start_kernel+0x58a/0x6a0
[ 3.099334] x86_64_start_reservations+0x18/0x30
[ 3.099613] x86_64_start_kernel+0xc5/0xe0
[ 3.099862] secondary_startup_64_no_verify+0x178/0x17b
[ 3.100175]
[ 3.100175] other info that might help us debug this:
[ 3.100175]
[ 3.100652] Possible interrupt unsafe locking scenario:
[ 3.100652]
[ 3.101049]        CPU0                    CPU1
[ 3.101323]        ----                    ----
[ 3.101641]   lock(kernfs_idr_lock);
[ 3.101909]                                local_irq_disable();
[ 3.102473]                                lock(&rq->__lock);
[ 3.102854]                                lock(kernfs_idr_lock);
[ 3.103171]   <Interrupt>
[ 3.103308]     lock(&rq->__lock);
[ 3.103492]
[ 3.103492] *** DEADLOCK ***

I'm wondering if using a regular spinlock instead of a raw spinlock
could be a reasonable compromise.

We have a GFP_ATOMIC allocation in __kernfs_new_node():

raw_spin_lock_irqsave(&kernfs_idr_lock, irqflags);
ret = idr_alloc_cyclic(&root->ino_idr, kn, 1, 0, GFP_ATOMIC);
...
raw_spin_unlock_irqrestore(&kernfs_idr_lock, irqflags);

That should become valid using a
spin_lock_irqsave/spin_unlock_irqrestore(), right?
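
That is, keeping everything else as is and only dropping the raw_
prefix, something like this (sketch):

  spin_lock_irqsave(&kernfs_idr_lock, irqflags);
  ret = idr_alloc_cyclic(&root->ino_idr, kn, 1, 0, GFP_ATOMIC);
  ...
  spin_unlock_irqrestore(&kernfs_idr_lock, irqflags);

On PREEMPT_RT spinlock_t becomes a sleeping lock (and the _irqsave
variant doesn't really disable interrupts there), so the slab
allocations done inside idr_alloc_cyclic() would no longer be an
invalid wait context.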

Thanks,
-Andrea

>
> Thanks!
>
> Gr{oetje,eeting}s,
>
> Geert
>
> --
> Geert Uytterhoeven -- There's lots of Linux beyond ia32 -- [email protected]
>
> In personal conversations with technical people, I call myself a hacker. But
> when I'm talking to journalists I just say "programmer" or something like that.
> -- Linus Torvalds

2024-01-09 19:07:45

by Tejun Heo

Subject: Re: [PATCH] kernfs: convert kernfs_idr_lock to an irq safe raw spinlock

On Tue, Jan 09, 2024 at 06:05:09PM +0100, Andrea Righi wrote:
> On Tue, Jan 09, 2024 at 05:35:36PM +0100, Geert Uytterhoeven wrote:
> > Reverting commit c312828c37a72fe2 fixes that.
> > I have seen this issue on several Renesas arm32 and arm64 platforms.
> >
> > Also, I am wondering if the issue fixed by commit c312828c37a72fe2
> > can still be reproduced on v6.7-rc5 or later?
>
> Yep, I can still reproduce it (this is with v6.7):
..
> I'm wondering if using a regular spinlock instead of a raw spinlock
> could be a reasonable compromise.

I don't think that'd work on RT as we can end up nesting mutex inside a raw
spinlock.

> We have a GFP_ATOMIC allocation in __kernfs_new_node():
>
> raw_spin_lock_irqsave(&kernfs_idr_lock, irqflags);
> ret = idr_alloc_cyclic(&root->ino_idr, kn, 1, 0, GFP_ATOMIC);
> ...
> raw_spin_unlock_irqrestore(&kernfs_idr_lock, irqflags);
>
> That should become valid using a
> spin_lock_irqsave/spin_unlock_irqrestore(), right?

Yeah, this part should be fine. I think the right thing to do here is making
the idr RCU safe so that lookup path doesn't depend on the lock.

Greg, can you please revert c312828c37a72fe2 for now?

Thanks.

--
tejun

2024-01-09 21:51:38

by Tejun Heo

Subject: Re: [PATCH] kernfs: convert kernfs_idr_lock to an irq safe raw spinlock

On Tue, Jan 09, 2024 at 09:07:34AM -1000, Tejun Heo wrote:
> On Tue, Jan 09, 2024 at 06:05:09PM +0100, Andrea Righi wrote:
> > On Tue, Jan 09, 2024 at 05:35:36PM +0100, Geert Uytterhoeven wrote:
> > > Reverting commit c312828c37a72fe2 fixes that.
> > > I have seen this issue on several Renesas arm32 and arm64 platforms.
> > >
> > > Also, I am wondering if the issue fixed by commit c312828c37a72fe2
> > > can still be reproduced on v6.7-rc5 or later?
> >
> > Yep, I can still reproduce it (this is with v6.7):
> ...
> > I'm wondering if using a regular spinlock instead of a raw spinlock
> > could be a reasonable compromise.
>
> I don't think that'd work on RT as we can end up nesting mutex inside a raw
> spinlock.
>
> > We have a GFP_ATOMIC allocation in __kernfs_new_node():
> >
> > raw_spin_lock_irqsave(&kernfs_idr_lock, irqflags);
> > ret = idr_alloc_cyclic(&root->ino_idr, kn, 1, 0, GFP_ATOMIC);
> > ...
> > raw_spin_unlock_irqrestore(&kernfs_idr_lock, irqflags);
> >
> > That should become valid using a
> > spin_lock_irqsave/spin_unlock_irqrestore(), right?
>
> Yeah, this part should be fine. I think the right thing to do here is making
> the idr RCU safe so that lookup path doesn't depend on the lock.
>
> Greg, can you please revert c312828c37a72fe2 for now?

I sent out a patchset to revert the commit and implement a different fix.

http://lkml.kernel.org/r/[email protected]

Thanks.

--
tejun