2022-11-03 06:25:45

by Dennis Dai

Subject: rust nvme driver: potential sleep-in-atomic-context

The Rust NVMe driver [1] (which has not yet been merged into
mainline [2]) has a potential sleep-in-atomic-context bug.

The potentially buggy code is shown below:

// drivers/block/nvme.rs:192
dev.queues.lock().io.try_reserve(nr_io_queues as _)?;
// drivers/block/nvme.rs:227
dev.queues.lock().io.try_push(io_queue.clone())?;

The queues field is wrapped in a SpinLock, which means that we cannot
sleep (or indirectly call any function that may sleep) while the lock
is held.
However, the try_reserve function may indirectly call krealloc with
the sleepable flag GFP_KERNEL (that is the default behaviour of the
global Rust allocator).
The case is similar for try_push.
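
For illustration, one way to avoid allocating under the lock would be
to reserve into a local Vec first and only swap it in while the lock
is held. Below is a minimal sketch using std types as stand-ins for
the kernel crate's (Queues and grow_io are hypothetical names, not
taken from the driver):

use std::collections::TryReserveError;
use std::mem;
use std::sync::Mutex;

struct Queues {
    io: Vec<u32>,
}

// Do the fallible allocation before taking the lock; in kernel terms,
// the GFP_KERNEL allocation happens outside the atomic context.
fn grow_io(queues: &Mutex<Queues>, nr_io_queues: usize) -> Result<(), TryReserveError> {
    let mut new_io: Vec<u32> = Vec::new();
    new_io.try_reserve(nr_io_queues)?; // may allocate (and sleep): no lock held yet
    let mut guard = queues.lock().unwrap(); // critical section begins here
    // Assumes the list is still empty at this point of probe; otherwise
    // the old entries would have to be moved over without reallocating.
    mem::swap(&mut guard.io, &mut new_io);
    Ok(())
} // guard drops first (lock released), then the old Vec in new_io is freed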

I wonder if the bug could be confirmed.


[1] https://github.com/metaspace/rust-linux/commit/d88c3744d6cbdf11767e08bad56cbfb67c4c96d0
[2] https://lore.kernel.org/lkml/202210010816.1317F2C@keescook/


2022-11-03 09:42:46

by Miguel Ojeda

Subject: Re: rust nvme driver: potential sleep-in-atomic-context

On Thu, Nov 3, 2022 at 7:12 AM Dennis Dai <[email protected]> wrote:
>
> The Rust NVMe driver [1] (which has not yet been merged into
> mainline [2]) has a potential sleep-in-atomic-context bug.
>
> dev.queues.lock().io.try_reserve(nr_io_queues as _)?;

Cc'ing Andreas and fixing Wedson's email. Note that this code was
written before it had been decided how the `try_*` methods would
know about the allocation flags.

Cheers,
Miguel

2022-11-03 09:58:39

by Björn Roy Baron

Subject: Re: rust nvme driver: potential sleep-in-atomic-context

On Thursday, November 3rd, 2022 at 07:12, Dennis Dai <[email protected]> wrote:


> The Rust NVMe driver [1] (which has not yet been merged into
> mainline [2]) has a potential sleep-in-atomic-context bug.
>
> The potentially buggy code is shown below:
>
> // drivers/block/nvme.rs:192
> dev.queues.lock().io.try_reserve(nr_io_queues as _)?;
> // drivers/block/nvme.rs:227
> dev.queues.lock().io.try_push(io_queue.clone())?;
>
> The queues field is wrapped in a SpinLock, which means that we cannot
> sleep (or indirectly call any function that may sleep) while the lock
> is held.
> However, the try_reserve function may indirectly call krealloc with
> the sleepable flag GFP_KERNEL (that is the default behaviour of the
> global Rust allocator).
> The case is similar for try_push.
>
> I wonder if the bug could be confirmed.
>
>
> [1] https://github.com/metaspace/rust-linux/commit/d88c3744d6cbdf11767e08bad56cbfb67c4c96d0
> [2] https://lore.kernel.org/lkml/202210010816.1317F2C@keescook/

setup_io_queues is only called by dev_add, which in turn is only
called by NvmeDevice::probe. This last function is responsible for
creating the Ref<DeviceData> that ends up being passed to
setup_io_queues. It doesn't seem like any reference is passed to
another thread between the creation of this Ref<DeviceData> and the
call to setup_io_queues. As such, no other thread can block on the
lock while the current thread holds it. As far as I understand, this
means that sleeping while the lock is held is harmless here.

I think it would be possible to replace the &Ref<DeviceData> argument
with a Pin<&mut DeviceData> argument by moving the dev_add call to
before Ref::<DeviceData>::from(data). This would make it clear that
only the current thread holds a reference, and it would also allow
using a method like get_mut [1] to get a reference to the protected
data without actually locking the spinlock, as it is statically
enforced that nobody else can hold the lock.

It seems that get_mut is missing from all of the locks offered in the
kernel crate. I opened an issue for this. [2]
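
For reference, std's Mutex::get_mut [1] is the pattern in question; a
minimal sketch with std types (not the kernel crate's):

use std::sync::Mutex;

fn main() {
    let mut queues = Mutex::new(Vec::new());
    // Exclusive (&mut) access statically guarantees that no other
    // thread can hold the lock, so the data is reachable without
    // actually locking:
    queues.get_mut().unwrap().push(1u32);
    assert_eq!(queues.lock().unwrap().len(), 1);
}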

[1]: https://doc.rust-lang.org/stable/std/sync/struct.Mutex.html#method.get_mut
[2]: https://github.com/Rust-for-Linux/linux/issues/924

Cheers,
Björn