This patchset contains some CondVar methods that Rust Binder needs.
The CondVar type implements a condition variable, and tries to mirror
the API of the Condvar type provided by the Rust standard library [2].
It is implemented using a `wait_queue_head`.
Please see the Rust Binder RFC for usage examples [1].
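For reference, the existing CondVar pattern that these methods extend looks
roughly like this (along the lines of the documentation example in
rust/kernel/sync/condvar.rs, where `Example` holds a Mutex<u32> `value` and a
CondVar `value_changed`):

    fn wait_for_value(e: &Example, v: u32) {
        let mut guard = e.value.lock();
        while *guard != v {
            // Releases `e.value` while sleeping and reacquires it on wake-up.
            e.value_changed.wait_uninterruptible(&mut guard);
        }
    }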
Users of "rust: sync: add `CondVar::notify_sync`":
[PATCH RFC 04/20] rust_binder: add work lists
[PATCH RFC 07/20] rust_binder: add epoll support
[PATCH RFC 08/20] rust_binder: add non-oneway transactions
Users of "rust: sync: add `CondVar::wait_timeout`":
[PATCH RFC 15/20] rust_binder: add process freezing
Link: https://lore.kernel.org/rust-for-linux/[email protected]/ [1]
Link: https://doc.rust-lang.org/stable/std/sync/struct.Condvar.html [2]
Signed-off-by: Alice Ryhl <[email protected]>
---
Alice Ryhl (2):
rust: sync: add `CondVar::notify_sync`
rust: sync: add `CondVar::wait_timeout`
rust/kernel/sync.rs | 2 +-
rust/kernel/sync/condvar.rs | 79 +++++++++++++++++++++++++++++++++++++++++++++
rust/kernel/sync/lock.rs | 4 +--
3 files changed, 82 insertions(+), 3 deletions(-)
---
base-commit: 33cc938e65a98f1d29d0a18403dbbee050dcad9a
change-id: 20231205-rb-new-condvar-methods-27ba95df5d41
Best regards,
--
Alice Ryhl <[email protected]>
Wake up another thread synchronously.
This method behaves like `notify_one`, except that it hints to the
scheduler that the current thread is about to go to sleep, so it should
schedule the target thread on the same CPU.
This is used by Rust Binder as a performance optimization. When sending
a transaction to a different process, we usually know which thread will
handle it, so we can schedule that thread for execution next on this
CPU for better cache locality.
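For illustration, the sending side ends up looking roughly like this (a
sketch only; `Thread`, `Work`, `inner`, `push_work` and `work_condvar` are
made-up names, not part of this patch or of Binder's actual code):

    fn queue_work(thread: &Thread, work: Work) {
        let mut inner = thread.inner.lock();
        inner.push_work(work);
        // The current thread is about to block waiting for the reply, so
        // hint the scheduler to run the woken thread next on this CPU.
        thread.work_condvar.notify_sync();
    }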
Signed-off-by: Alice Ryhl <[email protected]>
---
rust/kernel/sync/condvar.rs | 6 ++++++
1 file changed, 6 insertions(+)
diff --git a/rust/kernel/sync/condvar.rs b/rust/kernel/sync/condvar.rs
index b679b6f6dbeb..9861c6749ad0 100644
--- a/rust/kernel/sync/condvar.rs
+++ b/rust/kernel/sync/condvar.rs
@@ -155,6 +155,12 @@ fn notify(&self, count: i32, flags: u32) {
};
}
+ /// Calls the kernel function to notify one thread synchronously.
+ pub fn notify_sync(&self) {
+ // SAFETY: `wait_list` points to valid memory.
+ unsafe { bindings::__wake_up_sync(self.wait_list.get(), bindings::TASK_NORMAL) };
+ }
+
/// Wakes a single waiter up, if any.
///
/// This is not 'sticky' in the sense that if no thread is waiting, the notification is lost
--
2.43.0.rc2.451.g8631bc7472-goog
Sleep on a condition variable with a timeout.
This is used by Rust Binder for process freezing. There, we want to
sleep until the freeze operation completes, but we want to be able to
abort the process freezing if it doesn't complete within some timeout.
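The intended usage looks roughly like this (a sketch only; apart from
`wait_timeout` and `CondVarTimeoutResult`, the names below are made up):

    let mut guard = process.inner.lock();
    let mut remaining = timeout_jiffies;
    while !guard.is_frozen {
        match process.freeze_condvar.wait_timeout(&mut guard, remaining) {
            // Woken up before the deadline: keep waiting with what is left.
            CondVarTimeoutResult::Woken { jiffies } => remaining = jiffies,
            // The freeze did not complete in time: give up.
            CondVarTimeoutResult::Timeout => return Err(EAGAIN),
            // A signal arrived while waiting.
            CondVarTimeoutResult::Signal { .. } => return Err(EINTR),
        }
    }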
Signed-off-by: Alice Ryhl <[email protected]>
---
rust/kernel/sync.rs | 2 +-
rust/kernel/sync/condvar.rs | 73 +++++++++++++++++++++++++++++++++++++++++++++
rust/kernel/sync/lock.rs | 4 +--
3 files changed, 76 insertions(+), 3 deletions(-)
diff --git a/rust/kernel/sync.rs b/rust/kernel/sync.rs
index d219ee518eff..c1fb10fc64f4 100644
--- a/rust/kernel/sync.rs
+++ b/rust/kernel/sync.rs
@@ -13,7 +13,7 @@
mod locked_by;
pub use arc::{Arc, ArcBorrow, UniqueArc};
-pub use condvar::CondVar;
+pub use condvar::{CondVar, CondVarTimeoutResult};
pub use lock::{mutex::Mutex, spinlock::SpinLock};
pub use locked_by::LockedBy;
diff --git a/rust/kernel/sync/condvar.rs b/rust/kernel/sync/condvar.rs
index 9861c6749ad0..a6a6b6ab0c39 100644
--- a/rust/kernel/sync/condvar.rs
+++ b/rust/kernel/sync/condvar.rs
@@ -120,6 +120,63 @@ fn wait_internal<T: ?Sized, B: Backend>(&self, wait_state: u32, guard: &mut Guar
unsafe { bindings::finish_wait(self.wait_list.get(), wait.get()) };
}
+ /// Atomically releases the given lock (whose ownership is proven by the guard) and puts the
+ /// thread to sleep. It wakes up when notified by [`CondVar::notify_one`] or
+ /// [`CondVar::notify_all`], or when the thread receives a signal.
+ ///
+ /// Returns whether there is a signal pending.
+ fn wait_internal_timeout<T, B>(
+ &self,
+ wait_state: u32,
+ guard: &mut Guard<'_, T, B>,
+ timeout: u64,
+ ) -> u64
+ where
+ T: ?Sized,
+ B: Backend,
+ {
+ let wait = Opaque::<bindings::wait_queue_entry>::uninit();
+
+ // SAFETY: `wait` points to valid memory.
+ unsafe { bindings::init_wait(wait.get()) };
+
+ // SAFETY: Both `wait` and `wait_list` point to valid memory.
+ unsafe {
+ bindings::prepare_to_wait_exclusive(self.wait_list.get(), wait.get(), wait_state as _)
+ };
+
+ // SAFETY: Switches to another thread.
+ let timeout =
+ guard.do_unlocked(|| unsafe { bindings::schedule_timeout(timeout as _) as _ });
+
+ // SAFETY: Both `wait` and `wait_list` point to valid memory.
+ unsafe { bindings::finish_wait(self.wait_list.get(), wait.get()) };
+
+ timeout
+ }
+
+ /// Releases the lock and waits for a notification in interruptible mode.
+ ///
+ /// Atomically releases the given lock (whose ownership is proven by the guard) and puts the
+ /// thread to sleep. It wakes up when notified by [`CondVar::notify_one`] or
+ /// [`CondVar::notify_all`], or when a timeout occurs, or when the thread receives a signal.
+ ///
+ /// Returns whether there is a signal pending.
+ #[must_use = "wait_timeout returns if a signal is pending, so the caller must check the return value"]
+ pub fn wait_timeout<T: ?Sized, B: Backend>(
+ &self,
+ guard: &mut Guard<'_, T, B>,
+ jiffies: u64,
+ ) -> CondVarTimeoutResult {
+ let res = self.wait_internal_timeout(bindings::TASK_INTERRUPTIBLE, guard, jiffies);
+
+ match (res as _, crate::current!().signal_pending()) {
+ (jiffies, true) => CondVarTimeoutResult::Signal { jiffies },
+ (0, false) => CondVarTimeoutResult::Timeout,
+ (jiffies, false) => CondVarTimeoutResult::Woken { jiffies },
+ }
+ }
+
/// Releases the lock and waits for a notification in interruptible mode.
///
/// Atomically releases the given lock (whose ownership is proven by the guard) and puts the
@@ -177,3 +234,19 @@ pub fn notify_all(&self) {
self.notify(0, 0);
}
}
+
+/// The return type of `wait_timeout`.
+pub enum CondVarTimeoutResult {
+ /// The timeout was reached.
+ Timeout,
+ /// Somebody woke us up.
+ Woken {
+ /// Remaining sleep duration.
+ jiffies: u64,
+ },
+ /// A signal occurred.
+ Signal {
+ /// Remaining sleep duration.
+ jiffies: u64,
+ },
+}
diff --git a/rust/kernel/sync/lock.rs b/rust/kernel/sync/lock.rs
index f12a684bc957..149a5259d431 100644
--- a/rust/kernel/sync/lock.rs
+++ b/rust/kernel/sync/lock.rs
@@ -139,7 +139,7 @@ pub struct Guard<'a, T: ?Sized, B: Backend> {
unsafe impl<T: Sync + ?Sized, B: Backend> Sync for Guard<'_, T, B> {}
impl<T: ?Sized, B: Backend> Guard<'_, T, B> {
- pub(crate) fn do_unlocked(&mut self, cb: impl FnOnce()) {
+ pub(crate) fn do_unlocked<U>(&mut self, cb: impl FnOnce() -> U) -> U {
// SAFETY: The caller owns the lock, so it is safe to unlock it.
unsafe { B::unlock(self.lock.state.get(), &self.state) };
@@ -147,7 +147,7 @@ pub(crate) fn do_unlocked(&mut self, cb: impl FnOnce()) {
let _relock =
ScopeGuard::new(|| unsafe { B::relock(self.lock.state.get(), &mut self.state) });
- cb();
+ cb()
}
}
--
2.43.0.rc2.451.g8631bc7472-goog
On 12/6/23 07:09, Alice Ryhl wrote:
> Wake up another thread synchronously.
>
> This method behaves like `notify_one`, except that it hints to the
> scheduler that the current thread is about to go to sleep, so it should
> schedule the target thread on the same CPU.
>
> This is used by Rust Binder as a performance optimization. When sending
> a transaction to a different process, we usually know which thread will
> handle it, so we can schedule that thread for execution next on this
> CPU for better cache locality.
>
> Signed-off-by: Alice Ryhl <[email protected]>
> ---
> [...]
Reviewed-by: Martin Rodriguez Reboredo <[email protected]>
On 12/6/23 07:09, Alice Ryhl wrote:
> Sleep on a condition variable with a timeout.
>
> This is used by Rust Binder for process freezing. There, we want to
> sleep until the freeze operation completes, but we want to be able to
> abort the process freezing if it doesn't complete within some timeout.
>
> Signed-off-by: Alice Ryhl <[email protected]>
> ---
> [...]
>
> + /// Atomically releases the given lock (whose ownership is proven by the guard) and puts the
> + /// thread to sleep. It wakes up when notified by [`CondVar::notify_one`] or
> + /// [`CondVar::notify_all`], or when the thread receives a signal.
> + ///
> + /// Returns whether there is a signal pending.
Remaining jiffies or zero on timeout?
> + fn wait_internal_timeout<T, B>(
> + &self,
> + wait_state: u32,
> + guard: &mut Guard<'_, T, B>,
> + timeout: u64,
> + ) -> u64
> [...]
> +
> + /// Releases the lock and waits for a notification in interruptible mode.
> + ///
> + /// Atomically releases the given lock (whose ownership is proven by the guard) and puts the
> + /// thread to sleep. It wakes up when notified by [`CondVar::notify_one`] or
> + /// [`CondVar::notify_all`], or when a timeout occurs, or when the thread receives a signal.
> + ///
> + /// Returns whether there is a signal pending.
This one is correct.
> + #[must_use = "wait_timeout returns if a signal is pending, so the caller must check the return value"]
> + pub fn wait_timeout<T: ?Sized, B: Backend>(
> + &self,
> + guard: &mut Guard<'_, T, B>,
> + jiffies: u64,
> + ) -> CondVarTimeoutResult {
> [...]
On Wed, Dec 06, 2023 at 10:09:24AM +0000, Alice Ryhl wrote:
[...]
> +
> +/// The return type of `wait_timeout`.
> +pub enum CondVarTimeoutResult {
> + /// The timeout was reached.
> + Timeout,
> + /// Somebody woke us up.
> + Woken {
> + /// Remaining sleep duration.
> + jiffies: u64,
I have a Jiffies definition in my upcoming timer patchset:
/// The time unit of Linux kernel. One jiffy equals (1/HZ) second.
pub type Jiffies = core::ffi::c_ulong;
Maybe you can add that (in a separate patch) in kernel::time?
Regards,
Boqun
> + },
> + /// A signal occurred.
> + Signal {
> + /// Remaining sleep duration.
> + jiffies: u64,
> + },
> +}
[...]
On Wed, Dec 6, 2023 at 4:53 PM Martin Rodriguez Reboredo
<[email protected]> wrote:
>
> On 12/6/23 07:09, Alice Ryhl wrote:
> > Sleep on a condition variable with a timeout.
> >
> > This is used by Rust Binder for process freezing. There, we want to
> > sleep until the freeze operation completes, but we want to be able to
> > abort the process freezing if it doesn't complete within some timeout.
> >
> > Signed-off-by: Alice Ryhl <[email protected]>
> > ---
> > [...]
> >
> > + /// Atomically releases the given lock (whose ownership is proven by the guard) and puts the
> > + /// thread to sleep. It wakes up when notified by [`CondVar::notify_one`] or
> > + /// [`CondVar::notify_all`], or when the thread receives a signal.
> > + ///
> > + /// Returns whether there is a signal pending.
>
> Remaining jiffies or zero on timeout?
Seems like I just copied this typo from the other `_internal` method.
I'll fix it on both.
Alice
On Wed, Dec 06, 2023 at 08:30:06AM -0800, Boqun Feng wrote:
> On Wed, Dec 06, 2023 at 10:09:24AM +0000, Alice Ryhl wrote:
> [...]
> > +
> > +/// The return type of `wait_timeout`.
> > +pub enum CondVarTimeoutResult {
> > + /// The timeout was reached.
> > + Timeout,
> > + /// Somebody woke us up.
> > + Woken {
> > + /// Remaining sleep duration.
> > + jiffies: u64,
>
> I have a Jiffies definition in my upcoming timer patchset:
>
> /// The time unit of Linux kernel. One jiffy equals (1/HZ) second.
> pub type Jiffies = core::ffi::c_ulong;
>
> Maybe you can add that (in a separate patch) in kernel::time?
Urgh, why are we using jiffies in 2023?
On Wed, Dec 6, 2023 at 5:39 PM Peter Zijlstra <[email protected]> wrote:
>
> On Wed, Dec 06, 2023 at 08:30:06AM -0800, Boqun Feng wrote:
> > On Wed, Dec 06, 2023 at 10:09:24AM +0000, Alice Ryhl wrote:
> > [...]
> > > +
> > > +/// The return type of `wait_timeout`.
> > > +pub enum CondVarTimeoutResult {
> > > + /// The timeout was reached.
> > > + Timeout,
> > > + /// Somebody woke us up.
> > > + Woken {
> > > + /// Remaining sleep duration.
> > > + jiffies: u64,
> >
> > I have a Jiffies definition in my upcoming timer patchset:
> >
> > /// The time unit of Linux kernel. One jiffy equals (1/HZ) second.
> > pub type Jiffies = core::ffi::c_ulong;
> >
> > Maybe you can add that (in a separate patch) in kernel::time?
>
> Urgh, why are we using jiffies in 2023?
I assumed that the correct thing here would be to accept the same unit
as what schedule_timeout takes. Should I be doing something else?
Alice
On Wed, Dec 06, 2023 at 05:42:29PM +0100, Alice Ryhl wrote:
> On Wed, Dec 6, 2023 at 5:39 PM Peter Zijlstra <[email protected]> wrote:
> >
> > On Wed, Dec 06, 2023 at 08:30:06AM -0800, Boqun Feng wrote:
> > > On Wed, Dec 06, 2023 at 10:09:24AM +0000, Alice Ryhl wrote:
> > > [...]
> > > > +
> > > > +/// The return type of `wait_timeout`.
> > > > +pub enum CondVarTimeoutResult {
> > > > + /// The timeout was reached.
> > > > + Timeout,
> > > > + /// Somebody woke us up.
> > > > + Woken {
> > > > + /// Remaining sleep duration.
> > > > + jiffies: u64,
> > >
> > > I have a Jiffies definition in my upcoming timer patchset:
> > >
> > > /// The time unit of Linux kernel. One jiffy equals (1/HZ) second.
> > > pub type Jiffies = core::ffi::c_ulong;
> > >
> > > Maybe you can add that (in a separate patch) in kernel::time?
> >
> > Urgh, why are we using jiffies in 2023?
>
> I assumed that the correct thing here would be to accept the same unit
> as what schedule_timeout takes. Should I be doing something else?
Bah, so we have schedule_hrtimeout() that takes ktime/u64 nsec. But the
'problem' is that hrtimers are written with the expectation to fire,
while the old timers are written with the expectation to not fire.
Timeouts are typically best done with the latter, so in that regard
using schedule_timeout() is right. But it is sad to inflict the
brain-damage that is jiffies onto new code.
Perhaps add schedule_timeout_*msec() wrappers around schedule_timeout*()
and use a consistent sane time unit?
Thomas?
On Wed, Dec 6, 2023 at 5:53 PM Peter Zijlstra <[email protected]> wrote:
>
> On Wed, Dec 06, 2023 at 05:42:29PM +0100, Alice Ryhl wrote:
> > On Wed, Dec 6, 2023 at 5:39 PM Peter Zijlstra <[email protected]> wrote:
> > >
> > > On Wed, Dec 06, 2023 at 08:30:06AM -0800, Boqun Feng wrote:
> > > > On Wed, Dec 06, 2023 at 10:09:24AM +0000, Alice Ryhl wrote:
> > > > [...]
> > > > > +
> > > > > +/// The return type of `wait_timeout`.
> > > > > +pub enum CondVarTimeoutResult {
> > > > > + /// The timeout was reached.
> > > > > + Timeout,
> > > > > + /// Somebody woke us up.
> > > > > + Woken {
> > > > > + /// Remaining sleep duration.
> > > > > + jiffies: u64,
> > > >
> > > > I have a Jiffies definition in my upcoming timer patchset:
> > > >
> > > > /// The time unit of Linux kernel. One jiffy equals (1/HZ) second.
> > > > pub type Jiffies = core::ffi::c_ulong;
> > > >
> > > > Maybe you can add that (in a separate patch) in kernel::time?
> > >
> > > Urgh, why are we using jiffies in 2023?
> >
> > I assumed that the correct thing here would be to accept the same unit
> > as what schedule_timeout takes. Should I be doing something else?
>
> Bah, so we have schedule_hrtimeout() that takes ktime/u64 nsec. But the
> 'problem' is that hrtimers are written with the expectation to fire,
> while the old timers are written with the expectation to not fire.
>
> Timeouts are typically best done with the latter, so in that regard
> using schedule_timeout() is right. But it is sad to inflict the
> brain-damage that is jiffies onto new code.
>
> Perhaps add schedule_timeout_*msec() wrappers around schedule_timeout*()
> and use a consistent sane time unit?
>
> Thomas?
Hmm, looking over my usage in Rust Binder again ... the unit I need
*is* msec, but when we are woken up, we sometimes just go to sleep
again, which means that we need to be able to pass the remaining
duration back to `wait_timeout` to continue sleeping. I'm guessing
that I would lose precision if I converted back/forth to msecs here?
Alice
On 06/12/2023 10:09, Alice Ryhl wrote:
[...]
> diff --git a/rust/kernel/sync/condvar.rs b/rust/kernel/sync/condvar.rs
> index 9861c6749ad0..a6a6b6ab0c39 100644
> --- a/rust/kernel/sync/condvar.rs
> +++ b/rust/kernel/sync/condvar.rs
> @@ -120,6 +120,63 @@ fn wait_internal<T: ?Sized, B: Backend>(&self, wait_state: u32, guard: &mut Guar
> unsafe { bindings::finish_wait(self.wait_list.get(), wait.get()) };
> }
>
> + /// Atomically releases the given lock (whose ownership is proven by the guard) and puts the
> + /// thread to sleep. It wakes up when notified by [`CondVar::notify_one`] or
> + /// [`CondVar::notify_all`], or when the thread receives a signal.
> + ///
> + /// Returns whether there is a signal pending.
> + fn wait_internal_timeout<T, B>(
> + &self,
> + wait_state: u32,
> + guard: &mut Guard<'_, T, B>,
> + timeout: u64,
> + ) -> u64
> + where
> + T: ?Sized,
> + B: Backend,
> + {
> + let wait = Opaque::<bindings::wait_queue_entry>::uninit();
> +
> + // SAFETY: `wait` points to valid memory.
> + unsafe { bindings::init_wait(wait.get()) };
> +
> + // SAFETY: Both `wait` and `wait_list` point to valid memory.
> + unsafe {
> + bindings::prepare_to_wait_exclusive(self.wait_list.get(), wait.get(), wait_state as _)
> + };
> +
> + // SAFETY: Switches to another thread.
> + let timeout =
> + guard.do_unlocked(|| unsafe { bindings::schedule_timeout(timeout as _) as _ });
It looks like `schedule_timeout()` simply calls `schedule()` when the
timeout passed is `MAX_SCHEDULE_TIMEOUT`, so `wait_internal_timeout()`
could be merged together with the already existing `wait_internal()`,
where `wait_internal()` would always call `schedule_timeout()`? I may be
missing something, so just wondering why you decided to introduce
another method.
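Something like the following, perhaps (rough, untested sketch reusing the
body above; callers that don't need a timeout would just pass
MAX_SCHEDULE_TIMEOUT):

    fn wait_internal<T: ?Sized, B: Backend>(
        &self,
        wait_state: u32,
        guard: &mut Guard<'_, T, B>,
        timeout: u64,
    ) -> u64 {
        let wait = Opaque::<bindings::wait_queue_entry>::uninit();

        // SAFETY: `wait` points to valid memory.
        unsafe { bindings::init_wait(wait.get()) };

        // SAFETY: Both `wait` and `wait_list` point to valid memory.
        unsafe {
            bindings::prepare_to_wait_exclusive(self.wait_list.get(), wait.get(), wait_state as _)
        };

        // SAFETY: Switches to another thread; schedule_timeout() just calls
        // schedule() when given MAX_SCHEDULE_TIMEOUT.
        let remaining =
            guard.do_unlocked(|| unsafe { bindings::schedule_timeout(timeout as _) as _ });

        // SAFETY: Both `wait` and `wait_list` point to valid memory.
        unsafe { bindings::finish_wait(self.wait_list.get(), wait.get()) };

        remaining
    }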
> +
> + // SAFETY: Both `wait` and `wait_list` point to valid memory.
> + unsafe { bindings::finish_wait(self.wait_list.get(), wait.get()) };
> +
> + timeout
> + }
> +
> + /// Releases the lock and waits for a notification in interruptible mode.
> + ///
> + /// Atomically releases the given lock (whose ownership is proven by the guard) and puts the
> + /// thread to sleep. It wakes up when notified by [`CondVar::notify_one`] or
> + /// [`CondVar::notify_all`], or when a timeout occurs, or when the thread receives a signal.
> + ///
> + /// Returns whether there is a signal pending.
> + #[must_use = "wait_timeout returns if a signal is pending, so the caller must check the return value"]
> + pub fn wait_timeout<T: ?Sized, B: Backend>(
> + &self,
> + guard: &mut Guard<'_, T, B>,
> + jiffies: u64,
> + ) -> CondVarTimeoutResult {
Should this be called `wait_timeout_interruptible` instead, so that if
we need to add one using the `TASK_UNINTERRUPTIBLE` state later we don't
need to modify it again? It also matches the
`schedule_timeout_interruptible` one in the kernel (although that's not
a reason to change it by itself).
> + let res = self.wait_internal_timeout(bindings::TASK_INTERRUPTIBLE, guard, jiffies);
> +
> + match (res as _, crate::current!().signal_pending()) {
> + (jiffies, true) => CondVarTimeoutResult::Signal { jiffies },
> + (0, false) => CondVarTimeoutResult::Timeout,
> + (jiffies, false) => CondVarTimeoutResult::Woken { jiffies },
> + }
> + }
> +
> /// Releases the lock and waits for a notification in interruptible mode.
> ///
> /// Atomically releases the given lock (whose ownership is proven by the guard) and puts the
> @@ -177,3 +234,19 @@ pub fn notify_all(&self) {
> self.notify(0, 0);
> }
> }
> +
> +/// The return type of `wait_timeout`.
> +pub enum CondVarTimeoutResult {
> + /// The timeout was reached.
> + Timeout,
> + /// Somebody woke us up.
> + Woken {
> + /// Remaining sleep duration.
> + jiffies: u64,
> + },
> + /// A signal occurred.
> + Signal {
> + /// Remaining sleep duration.
> + jiffies: u64,
> + },
> +}
Are `Signal` and `Woken` only going to hold a single value? Would they be
better represented as tuple variants instead, like so?

pub enum CondVarTimeoutResult {
    /// The timeout was reached.
    Timeout,
    /// Somebody woke us up.
    Woken(u64),
    /// A signal occurred.
    Signal(u64),
}
Regards,
Tiago.
On 12/6/23 11:09, Alice Ryhl wrote:
> Wake up another thread synchronously.
>
> This method behaves like `notify_one`, except that it hints to the
> scheduler that the current thread is about to go to sleep, so it should
> schedule the target thread on the same CPU.
>
> This is used by Rust Binder as a performance optimization. When sending
> a transaction to a different process, we usually know which thread will
> handle it, so we can schedule that thread for execution next on this
> CPU for better cache locality.
>
> Signed-off-by: Alice Ryhl <[email protected]>
> ---
> rust/kernel/sync/condvar.rs | 6 ++++++
> 1 file changed, 6 insertions(+)
>
> diff --git a/rust/kernel/sync/condvar.rs b/rust/kernel/sync/condvar.rs
> index b679b6f6dbeb..9861c6749ad0 100644
> --- a/rust/kernel/sync/condvar.rs
> +++ b/rust/kernel/sync/condvar.rs
> @@ -155,6 +155,12 @@ fn notify(&self, count: i32, flags: u32) {
> };
> }
>
> + /// Calls the kernel function to notify one thread synchronously.
> + pub fn notify_sync(&self) {
> + // SAFETY: `wait_list` points to valid memory.
> + unsafe { bindings::__wake_up_sync(self.wait_list.get(), bindings::TASK_NORMAL) };
I took a look at the C function (i.e. __wake_up_common) and there I
found this:
lockdep_assert_held(&wq_head->lock);
So I think this function requires that the lock is held, how are you
ensuring this?
--
Cheers,
Benno
On Thu, Dec 7, 2023 at 9:22 PM Benno Lossin <[email protected]> wrote:
>
> On 12/6/23 11:09, Alice Ryhl wrote:
> > Wake up another thread synchronously.
> >
> > This method behaves like `notify_one`, except that it hints to the
> > scheduler that the current thread is about to go to sleep, so it should
> > schedule the target thread on the same CPU.
> >
> > This is used by Rust Binder as a performance optimization. When sending
> > a transaction to a different process, we usually know which thread will
> > handle it, so we can schedule that thread for execution next on this
> > CPU for better cache locality.
> >
> > Signed-off-by: Alice Ryhl <[email protected]>
> > ---
> > rust/kernel/sync/condvar.rs | 6 ++++++
> > 1 file changed, 6 insertions(+)
> >
> > diff --git a/rust/kernel/sync/condvar.rs b/rust/kernel/sync/condvar.rs
> > index b679b6f6dbeb..9861c6749ad0 100644
> > --- a/rust/kernel/sync/condvar.rs
> > +++ b/rust/kernel/sync/condvar.rs
> > @@ -155,6 +155,12 @@ fn notify(&self, count: i32, flags: u32) {
> > };
> > }
> >
> > + /// Calls the kernel function to notify one thread synchronously.
> > + pub fn notify_sync(&self) {
> > + // SAFETY: `wait_list` points to valid memory.
> > + unsafe { bindings::__wake_up_sync(self.wait_list.get(), bindings::TASK_NORMAL) };
>
> I took a look at the C function (i.e. __wake_up_common) and there I
> found this:
>
> lockdep_assert_held(&wq_head->lock);
>
> So I think this function requires that the lock is held, how are you
> ensuring this?
No, we don't need to hold a lock. The call stack is:
1. __wake_up_sync
2. __wake_up_sync_key
3. __wake_up_common_lock
4. __wake_up_common
And __wake_up_common_lock will lock wq_head->lock before calling
__wake_up_common.
Alice
On Wed, Dec 6, 2023 at 6:05 PM Tiago Lam <[email protected]> wrote:
>
>
> On 06/12/2023 10:09, Alice Ryhl wrote:
> [...]
> > diff --git a/rust/kernel/sync/condvar.rs b/rust/kernel/sync/condvar.rs
> > index 9861c6749ad0..a6a6b6ab0c39 100644
> > --- a/rust/kernel/sync/condvar.rs
> > +++ b/rust/kernel/sync/condvar.rs
> > @@ -120,6 +120,63 @@ fn wait_internal<T: ?Sized, B: Backend>(&self, wait_state: u32, guard: &mut Guar
> > unsafe { bindings::finish_wait(self.wait_list.get(), wait.get()) };
> > }
> >
> > + /// Atomically releases the given lock (whose ownership is proven by the guard) and puts the
> > + /// thread to sleep. It wakes up when notified by [`CondVar::notify_one`] or
> > + /// [`CondVar::notify_all`], or when the thread receives a signal.
> > + ///
> > + /// Returns whether there is a signal pending.
> > + fn wait_internal_timeout<T, B>(
> > + &self,
> > + wait_state: u32,
> > + guard: &mut Guard<'_, T, B>,
> > + timeout: u64,
> > + ) -> u64
> > + where
> > + T: ?Sized,
> > + B: Backend,
> > + {
> > + let wait = Opaque::<bindings::wait_queue_entry>::uninit();
> > +
> > + // SAFETY: `wait` points to valid memory.
> > + unsafe { bindings::init_wait(wait.get()) };
> > +
> > + // SAFETY: Both `wait` and `wait_list` point to valid memory.
> > + unsafe {
> > + bindings::prepare_to_wait_exclusive(self.wait_list.get(), wait.get(), wait_state as _)
> > + };
> > +
> > + // SAFETY: Switches to another thread.
> > + let timeout =
> > + guard.do_unlocked(|| unsafe { bindings::schedule_timeout(timeout as _) as _ });
>
> It looks like `schedule_timeout()` simply calls `schedule()` when the
> timeout passed is `MAX_SCHEDULE_TIMEOUT`, so `wait_internal_timeout()`
> could be merged together with the already existing `wait_internal()`,
> where `wait_internal()` would always call `schedule_timeout()`? I may be
> missing something, so just wondering why you decided to introduce
> another method.
Ah, nice! I didn't notice that I could combine them.
> > + /// Releases the lock and waits for a notification in interruptible mode.
> > + ///
> > + /// Atomically releases the given lock (whose ownership is proven by the guard) and puts the
> > + /// thread to sleep. It wakes up when notified by [`CondVar::notify_one`] or
> > + /// [`CondVar::notify_all`], or when a timeout occurs, or when the thread receives a signal.
> > + ///
> > + /// Returns whether there is a signal pending.
> > + #[must_use = "wait_timeout returns if a signal is pending, so the caller must check the return value"]
> > + pub fn wait_timeout<T: ?Sized, B: Backend>(
> > + &self,
> > + guard: &mut Guard<'_, T, B>,
> > + jiffies: u64,
> > + ) -> CondVarTimeoutResult {
>
> Should this be called `wait_timeout_interruptible` instead, so that if
> we need to add one using the `TASK_UNINTERRUPTIBLE` state later we don't
> need to modify it again? It also matches the
> `schedule_timeout_interruptible` one in the kernel (although that's not
> a reason to change it by itself).
I don't mind changing the names, but in this patch I was just
consistent with what was already there.
> > +/// The return type of `wait_timeout`.
> > +pub enum CondVarTimeoutResult {
> > + /// The timeout was reached.
> > + Timeout,
> > + /// Somebody woke us up.
> > + Woken {
> > + /// Remaining sleep duration.
> > + jiffies: u64,
> > + },
> > + /// A signal occurred.
> > + Signal {
> > + /// Remaining sleep duration.
> > + jiffies: u64,
> > + },
> > +}
>
>
> Are `Signal` and `Woken` only going to hold a single value? Would they be
> better represented as tuple variants instead, like so?
>
> pub enum CondVarTimeoutResult {
>     /// The timeout was reached.
>     Timeout,
>     /// Somebody woke us up.
>     Woken(u64),
>     /// A signal occurred.
>     Signal(u64),
> }
I could do that, but I like the explicitly named version as it makes
it clear that the unit is jiffies.
Alice
On 12/8/23 08:37, Alice Ryhl wrote:
> On Wed, Dec 6, 2023 at 6:05 PM Tiago Lam <[email protected]> wrote:
>> On 06/12/2023 10:09, Alice Ryhl wrote:
>>> +/// The return type of `wait_timeout`.
>>> +pub enum CondVarTimeoutResult {
>>> + /// The timeout was reached.
>>> + Timeout,
>>> + /// Somebody woke us up.
>>> + Woken {
>>> + /// Remaining sleep duration.
>>> + jiffies: u64,
>>> + },
>>> + /// A signal occurred.
>>> + Signal {
>>> + /// Remaining sleep duration.
>>> + jiffies: u64,
>>> + },
>>> +}
>>
>> Are `Signal` and `Woken` only going to hold a single value? Would they be
>> better represented as tuple variants instead, like so?
>>
>> pub enum CondVarTimeoutResult {
>>     /// The timeout was reached.
>>     Timeout,
>>     /// Somebody woke us up.
>>     Woken(u64),
>>     /// A signal occurred.
>>     Signal(u64),
>> }
>
> I could do that, but I like the explicitly named version as it makes
> it clear that the unit is jiffies.
Why not use `type Jiffies = u64;` until we have proper bindings for
them? That way we can have both.
--
Cheers,
Benno
On 12/8/23 08:29, Alice Ryhl wrote:
> On Thu, Dec 7, 2023 at 9:22 PM Benno Lossin <[email protected]> wrote:
>>
>> On 12/6/23 11:09, Alice Ryhl wrote:
>>> Wake up another thread synchronously.
>>>
>>> This method behaves like `notify_one`, except that it hints to the
>>> scheduler that the current thread is about to go to sleep, so it should
>>> schedule the target thread on the same CPU.
>>>
>>> This is used by Rust Binder as a performance optimization. When sending
>>> a transaction to a different process, we usually know which thread will
>>> handle it, so we can schedule that thread for execution next on this
>>> CPU for better cache locality.
>>>
>>> Signed-off-by: Alice Ryhl <[email protected]>
>>> ---
>>> rust/kernel/sync/condvar.rs | 6 ++++++
>>> 1 file changed, 6 insertions(+)
>>>
>>> diff --git a/rust/kernel/sync/condvar.rs b/rust/kernel/sync/condvar.rs
>>> index b679b6f6dbeb..9861c6749ad0 100644
>>> --- a/rust/kernel/sync/condvar.rs
>>> +++ b/rust/kernel/sync/condvar.rs
>>> @@ -155,6 +155,12 @@ fn notify(&self, count: i32, flags: u32) {
>>> };
>>> }
>>>
>>> + /// Calls the kernel function to notify one thread synchronously.
>>> + pub fn notify_sync(&self) {
>>> + // SAFETY: `wait_list` points to valid memory.
>>> + unsafe { bindings::__wake_up_sync(self.wait_list.get(), bindings::TASK_NORMAL) };
>>
>> I took a look at the C function (i.e. __wake_up_common) and there I
>> found this:
>>
>> lockdep_assert_held(&wq_head->lock);
>>
>> So I think this function requires that the lock is held, how are you
>> ensuring this?
>
> No, we don't need to hold a lock. The call stack is:
>
> 1. __wake_up_sync
> 2. __wake_up_sync_key
> 3. __wake_up_common_lock
> 4. __wake_up_common
>
> And __wake_up_common_lock will lock wq_head->lock before calling
> __wake_up_common.
Seems like I just looked at the wrong function.
Reviewed-by: Benno Lossin <[email protected]>
--
Cheers,
Benno
On 12/6/23 11:09, Alice Ryhl wrote:
> + /// Atomically releases the given lock (whose ownership is proven by the guard) and puts the
> + /// thread to sleep. It wakes up when notified by [`CondVar::notify_one`] or
> + /// [`CondVar::notify_all`], or when the thread receives a signal.
> + ///
> + /// Returns whether there is a signal pending.
> + fn wait_internal_timeout<T, B>(
> + &self,
> + wait_state: u32,
> + guard: &mut Guard<'_, T, B>,
> + timeout: u64,
> + ) -> u64
> + where
> + T: ?Sized,
> + B: Backend,
> + {
> + let wait = Opaque::<bindings::wait_queue_entry>::uninit();
> +
> + // SAFETY: `wait` points to valid memory.
> + unsafe { bindings::init_wait(wait.get()) };
> +
> + // SAFETY: Both `wait` and `wait_list` point to valid memory.
> + unsafe {
> + bindings::prepare_to_wait_exclusive(self.wait_list.get(), wait.get(), wait_state as _)
Does `.into()` work here? If for some reason the type here changes, we
probably want to know about it.
> + };
> +
> + // SAFETY: Switches to another thread.
> + let timeout =
> + guard.do_unlocked(|| unsafe { bindings::schedule_timeout(timeout as _) as _ });
Ditto.
--
Cheers,
Benno
> +
> + // SAFETY: Both `wait` and `wait_list` point to valid memory.
> + unsafe { bindings::finish_wait(self.wait_list.get(), wait.get()) };
> +
> + timeout
> + }
On Fri, Dec 8, 2023 at 10:27 AM Benno Lossin <[email protected]> wrote:
>
> On 12/8/23 08:37, Alice Ryhl wrote:
> > On Wed, Dec 6, 2023 at 6:05 PM Tiago Lam <[email protected]> wrote:
> >> On 06/12/2023 10:09, Alice Ryhl wrote:
> >>> +/// The return type of `wait_timeout`.
> >>> +pub enum CondVarTimeoutResult {
> >>> + /// The timeout was reached.
> >>> + Timeout,
> >>> + /// Somebody woke us up.
> >>> + Woken {
> >>> + /// Remaining sleep duration.
> >>> + jiffies: u64,
> >>> + },
> >>> + /// A signal occurred.
> >>> + Signal {
> >>> + /// Remaining sleep duration.
> >>> + jiffies: u64,
> >>> + },
> >>> +}
> >>
> >> Are `Signal` and `Woken` only going to hold a single value? Would they be
> >> better represented as tuple variants instead, like so?
> >>
> >> pub enum CondVarTimeoutResult {
> >>     /// The timeout was reached.
> >>     Timeout,
> >>     /// Somebody woke us up.
> >>     Woken(u64),
> >>     /// A signal occurred.
> >>     Signal(u64),
> >> }
> >
> > I could do that, but I like the explicitly named version as it makes
> > it clear that the unit is jiffies.
>
> Why not use `type Jiffies = u64;` until we have proper bindings for
> them? That way we can have both.
I do not mind adding and using a type alias, but I still think the
named fields are better.
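For the record, combining the two suggestions would look something like
this (sketch only):

    /// The time unit of Linux kernel. One jiffy equals (1/HZ) second.
    pub type Jiffies = core::ffi::c_ulong;

    /// The return type of `wait_timeout`.
    pub enum CondVarTimeoutResult {
        /// The timeout was reached.
        Timeout,
        /// Somebody woke us up.
        Woken {
            /// Remaining sleep duration.
            jiffies: Jiffies,
        },
        /// A signal occurred.
        Signal {
            /// Remaining sleep duration.
            jiffies: Jiffies,
        },
    }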
Alice
On Fri, Dec 8, 2023 at 8:04 PM Benno Lossin <[email protected]> wrote:
>
> On 12/6/23 11:09, Alice Ryhl wrote:
> > + /// Atomically releases the given lock (whose ownership is proven by the guard) and puts the
> > + /// thread to sleep. It wakes up when notified by [`CondVar::notify_one`] or
> > + /// [`CondVar::notify_all`], or when the thread receives a signal.
> > + ///
> > + /// Returns whether there is a signal pending.
> > + fn wait_internal_timeout<T, B>(
> > + &self,
> > + wait_state: u32,
> > + guard: &mut Guard<'_, T, B>,
> > + timeout: u64,
> > + ) -> u64
> > + where
> > + T: ?Sized,
> > + B: Backend,
> > + {
> > + let wait = Opaque::<bindings::wait_queue_entry>::uninit();
> > +
> > + // SAFETY: `wait` points to valid memory.
> > + unsafe { bindings::init_wait(wait.get()) };
> > +
> > + // SAFETY: Both `wait` and `wait_list` point to valid memory.
> > + unsafe {
> > + bindings::prepare_to_wait_exclusive(self.wait_list.get(), wait.get(), wait_state as _)
>
> Does `.into()` work here? If for some reason the type here changes, we
> probably want to know about it.
I think we may be able to eliminate this cast by using c_int for the
integer type.
> > + };
> > +
> > + // SAFETY: Switches to another thread.
> > + let timeout =
> > + guard.do_unlocked(|| unsafe { bindings::schedule_timeout(timeout as _) as _ });
>
> Ditto.
Here, we're casting u64->long and then long->u64. How about this?
u64->long - Use timeout.try_into().unwrap_or(MAX_SCHEDULE_TIMEOUT),
since MAX_SCHEDULE_TIMEOUT is LONG_MAX.
long->u64 - This value is guaranteed to be less than the argument
passed to schedule_timeout. Use .into() for infallible cast.
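Concretely, the first conversion would look something like this (sketch
only; it assumes MAX_SCHEDULE_TIMEOUT is reachable from Rust as a c_long
constant, e.g. through bindings):

    // u64 -> long: saturate at MAX_SCHEDULE_TIMEOUT (i.e. LONG_MAX)
    // instead of truncating.
    let timeout: core::ffi::c_long =
        jiffies.try_into().unwrap_or(MAX_SCHEDULE_TIMEOUT);
    // schedule_timeout() returns the remaining jiffies, always in the
    // range 0..=timeout, so the result fits back into a u64.
    let remaining = guard.do_unlocked(|| unsafe { bindings::schedule_timeout(timeout) });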
Alice
On 12/12/23 10:51, Alice Ryhl wrote:
> On Fri, Dec 8, 2023 at 8:04 PM Benno Lossin <[email protected]> wrote:
>>
>> On 12/6/23 11:09, Alice Ryhl wrote:
>>> + /// Atomically releases the given lock (whose ownership is proven by the guard) and puts the
>>> + /// thread to sleep. It wakes up when notified by [`CondVar::notify_one`] or
>>> + /// [`CondVar::notify_all`], or when the thread receives a signal.
>>> + ///
>>> + /// Returns whether there is a signal pending.
>>> + fn wait_internal_timeout<T, B>(
>>> + &self,
>>> + wait_state: u32,
>>> + guard: &mut Guard<'_, T, B>,
>>> + timeout: u64,
>>> + ) -> u64
>>> + where
>>> + T: ?Sized,
>>> + B: Backend,
>>> + {
>>> + let wait = Opaque::<bindings::wait_queue_entry>::uninit();
>>> +
>>> + // SAFETY: `wait` points to valid memory.
>>> + unsafe { bindings::init_wait(wait.get()) };
>>> +
>>> + // SAFETY: Both `wait` and `wait_list` point to valid memory.
>>> + unsafe {
>>> + bindings::prepare_to_wait_exclusive(self.wait_list.get(), wait.get(), wait_state as _)
>>
>> Does `.into()` work here? If for some reason the type here changes, we
>> probably want to know about it.
>
> I think we may be able to eliminate this cast by using c_int for the
> integer type.
Sounds good.
>>> + };
>>> +
>>> + // SAFETY: Switches to another thread.
>>> + let timeout =
>>> + guard.do_unlocked(|| unsafe { bindings::schedule_timeout(timeout as _) as _ });
>>
>> Ditto.
>
> Here, we're casting u64->long and then long->u64. How about this?
>
> u64->long - Use timeout.try_into().unwrap_or(MAX_SCHEDULE_TIMEOUT),
> since MAX_SCHEDULE_TIMEOUT is LONG_MAX.
>
> long->u64 - This value is guaranteed to be less than the argument
> passed to schedule_timeout. Use .into() for infallible cast.
Also sounds good :)
--
Cheers,
Benno
On Fri, Dec 08, 2023 at 08:37:27AM +0100, Alice Ryhl wrote:
[...]
> > > + /// Releases the lock and waits for a notification in interruptible mode.
> > > + ///
> > > + /// Atomically releases the given lock (whose ownership is proven by the guard) and puts the
> > > + /// thread to sleep. It wakes up when notified by [`CondVar::notify_one`] or
> > > + /// [`CondVar::notify_all`], or when a timeout occurs, or when the thread receives a signal.
> > > + ///
> > > + /// Returns whether there is a signal pending.
> > > + #[must_use = "wait_timeout returns if a signal is pending, so the caller must check the return value"]
> > > + pub fn wait_timeout<T: ?Sized, B: Backend>(
> > > + &self,
> > > + guard: &mut Guard<'_, T, B>,
> > > + jiffies: u64,
> > > + ) -> CondVarTimeoutResult {
> >
> > Should this be called `wait_timeout_interruptible` instead, so that if
> > we need to add one using the `TASK_UNINTERRUPTIBLE` state later we don't
> > need to modify it again? It also matches the
> > `schedule_timeout_interruptible` one in the kernel (although that's not
> > a reason to change it by itself).
>
> I don't mind changing the names, but in this patch I was just
> consistent with what was already there.
>
Hmm.. so Rust's wait() is actually an interruptible wait and we have
wait_uninterruptible(), while the C API's wait_event() is uninterruptible
and has a wait_event_interruptible() counterpart. I think it makes sense
for us to follow what the C API has. Will send a patch soon.
Regards,
Boqun
[...]
Currently, `CondVar::wait()` is an interruptible wait, and this is
different than `wait_event()` in include/linux/wait.h (which is an
uninterruptible wait). To avoid confusion between different APIs on the
interruptible/uninterruptible, make `CondVar::wait()` an uninterruptible
wait same as `wait_event()`, also rename the old `wait()` to
`CondVar::wait_interruptible()`.
Spotted-by: Tiago Lam <[email protected]>
Signed-off-by: Boqun Feng <[email protected]>
---
rust/kernel/sync/condvar.rs | 28 ++++++++++++++--------------
1 file changed, 14 insertions(+), 14 deletions(-)
diff --git a/rust/kernel/sync/condvar.rs b/rust/kernel/sync/condvar.rs
index b679b6f6dbeb..8630faa29b78 100644
--- a/rust/kernel/sync/condvar.rs
+++ b/rust/kernel/sync/condvar.rs
@@ -50,7 +50,7 @@ macro_rules! new_condvar {
/// fn wait_for_value(e: &Example, v: u32) {
/// let mut guard = e.value.lock();
/// while *guard != v {
-/// e.value_changed.wait_uninterruptible(&mut guard);
+/// e.value_changed.wait(&mut guard);
/// }
/// }
///
@@ -120,28 +120,28 @@ fn wait_internal<T: ?Sized, B: Backend>(&self, wait_state: u32, guard: &mut Guar
unsafe { bindings::finish_wait(self.wait_list.get(), wait.get()) };
}
- /// Releases the lock and waits for a notification in interruptible mode.
+ /// Releases the lock and waits for a notification in uninterruptible mode.
///
/// Atomically releases the given lock (whose ownership is proven by the guard) and puts the
/// thread to sleep, reacquiring the lock on wake up. It wakes up when notified by
- /// [`CondVar::notify_one`] or [`CondVar::notify_all`], or when the thread receives a signal.
- /// It may also wake up spuriously.
+ /// [`CondVar::notify_one`] or [`CondVar::notify_all`]. Note that it may also wake up
+ /// spuriously.
+ pub fn wait<T: ?Sized, B: Backend>(&self, guard: &mut Guard<'_, T, B>) {
+ self.wait_internal(bindings::TASK_UNINTERRUPTIBLE, guard);
+ }
+
+ /// Releases the lock and waits for a notification in interruptible mode.
+ ///
+ /// Similar to [`CondVar::wait`], except that the wait is interruptible. That is, the thread may
+ /// wake up due to signals. It may also wake up spuriously.
///
/// Returns whether there is a signal pending.
- #[must_use = "wait returns if a signal is pending, so the caller must check the return value"]
- pub fn wait<T: ?Sized, B: Backend>(&self, guard: &mut Guard<'_, T, B>) -> bool {
+ #[must_use = "wait_interruptible returns if a signal is pending, so the caller must check the return value"]
+ pub fn wait_interruptible<T: ?Sized, B: Backend>(&self, guard: &mut Guard<'_, T, B>) -> bool {
self.wait_internal(bindings::TASK_INTERRUPTIBLE, guard);
crate::current!().signal_pending()
}
- /// Releases the lock and waits for a notification in uninterruptible mode.
- ///
- /// Similar to [`CondVar::wait`], except that the wait is not interruptible. That is, the
- /// thread won't wake up due to signals. It may, however, wake up supirously.
- pub fn wait_uninterruptible<T: ?Sized, B: Backend>(&self, guard: &mut Guard<'_, T, B>) {
- self.wait_internal(bindings::TASK_UNINTERRUPTIBLE, guard)
- }
-
/// Calls the kernel function to notify the appropriate number of threads with the given flags.
fn notify(&self, count: i32, flags: u32) {
// SAFETY: `wait_list` points to valid memory.
--
2.43.0
On Thu, Dec 14, 2023 at 9:04 PM Boqun Feng <[email protected]> wrote:
>
> Currently, `CondVar::wait()` is an interruptible wait, and this is
> different than `wait_event()` in include/linux/wait.h (which is an
> uninterruptible wait). To avoid confusion between different APIs on the
> interruptible/uninterruptible, make `CondVar::wait()` an uninterruptible
> wait same as `wait_event()`, also rename the old `wait()` to
> `CondVar::wait_interruptible()`.
>
> Spotted-by: Tiago Lam <[email protected]>
> Signed-off-by: Boqun Feng <[email protected]>
The diff is a bit hard to read because you swapped the order of the
functions, but LGTM.
Reviewed-by: Alice Ryhl <[email protected]>
On 14/12/2023 20:04, Boqun Feng wrote:
> Currently, `CondVar::wait()` is an interruptible wait, and this is
> different than `wait_event()` in include/linux/wait.h (which is an
> uninterruptible wait). To avoid confusion between different APIs on the
> interruptible/uninterruptible, make `CondVar::wait()` an uninterruptible
> wait same as `wait_event()`, also rename the old `wait()` to
> `CondVar::wait_interruptible()`.
>
> Spotted-by: Tiago Lam <[email protected]>
> Signed-off-by: Boqun Feng <[email protected]>
Thanks, that's clearer.
Reviewed-by: Tiago Lam <[email protected]>
On 12/16/23 00:45, Boqun Feng wrote:
> On Fri, Dec 15, 2023 at 11:27:56AM +0100, Alice Ryhl wrote:
>> On Thu, Dec 14, 2023 at 9:04 PM Boqun Feng <[email protected]> wrote:
>>>
>>> Currently, `CondVar::wait()` is an interruptible wait, and this is
>>> different than `wait_event()` in include/linux/wait.h (which is an
>>> uninterruptible wait). To avoid confusion between different APIs on the
>>> interruptible/uninterruptible, make `CondVar::wait()` an uninterruptible
>>> wait same as `wait_event()`, also rename the old `wait()` to
>>> `CondVar::wait_interruptible()`.
>>>
>>> Spotted-by: Tiago Lam <[email protected]>
>>> Signed-off-by: Boqun Feng <[email protected]>
>>
>> The diff is a bit hard to read because you swapped the order of the
>> functions, but LGTM.
>>
>
> Yeah, I did that because `wait_interruptible` mentioned `wait`, so I had
> to keep `wait` before `wait_interruptible`.
What do you mean? If you are talking about the doclink, then
that should not matter.
--
Cheers,
Benno
On Mon, Dec 18, 2023 at 05:39:14PM +0000, Benno Lossin wrote:
> On 12/16/23 00:45, Boqun Feng wrote:
> > On Fri, Dec 15, 2023 at 11:27:56AM +0100, Alice Ryhl wrote:
> >> On Thu, Dec 14, 2023 at 9:04 PM Boqun Feng <[email protected]> wrote:
> >>>
> >>> Currently, `CondVar::wait()` is an interruptible wait, and this is
> >>> different than `wait_event()` in include/linux/wait.h (which is an
> >>> uninterruptible wait). To avoid confusion between different APIs on the
> >>> interruptible/uninterruptible, make `CondVar::wait()` an uninterruptible
> >>> wait same as `wait_event()`, also rename the old `wait()` to
> >>> `CondVar::wait_interruptible()`.
> >>>
> >>> Spotted-by: Tiago Lam <[email protected]>
> >>> Signed-off-by: Boqun Feng <[email protected]>
> >>
> >> The diff is a bit hard to read because you swapped the order of the
> >> functions, but LGTM.
> >>
> >
> > Yeah, I did that because `wait_interruptible` mentioned `wait`, so I had
> > to keep `wait` before `wait_interruptible`.
>
> What do you mean? If you are talking about the doclink, then
I meant that I prefer keeping `wait` before `wait_interruptible` in the
source text, so that readers read it first.
> that should not matter.
Yeah, I know the ordering doesn't matter for generating the docs. However,
the ordering still matters for readers: I'd like them to learn about
`wait` first since, going by the existing kernel API, it is the major
usage.
Thanks for pointing it out anyway! ;-)
Regards,
Boqun
>
> --
> Cheers,
> Benno
>
>
On 12/14/23 21:04, Boqun Feng wrote:
> Currently, `CondVar::wait()` is an interruptible wait, and this is
> different than `wait_event()` in include/linux/wait.h (which is an
> uninterruptible wait). To avoid confusion between different APIs on the
> interruptible/uninterruptible, make `CondVar::wait()` an uninterruptible
> wait same as `wait_event()`, also rename the old `wait()` to
> `CondVar::wait_interruptible()`.
>
> Spotted-by: Tiago Lam <[email protected]>
> Signed-off-by: Boqun Feng <[email protected]>
> ---
Reviewed-by: Benno Lossin <[email protected]>
--
Cheers,
Benno