2019-10-12 05:51:25

by Manfred Spraul

Subject: [PATCH 3/6] ipc/mqueue.c: Update/document memory barriers

Update and document memory barriers for mqueue.c:
- ewp->state is read without any locks, thus READ_ONCE is required.

- add smp_acquire__after_ctrl_dep() after the READ_ONCE; we need
acquire semantics if the value is STATE_READY.

- add an explicit memory barrier to __pipelined_op(): the
refcount must be increased before the updated state becomes
visible (see the sketch after the 3-CPU scenario below).

- document why __set_current_state() may be used:
Reading task->state cannot happen before the wake_q_add() call,
which happens while holding info->lock. Thus the spin_unlock()
is the RELEASE, and the spin_lock() is the ACQUIRE.

For completeness: there is also a 3 CPU szenario, if the to be woken
up task is already on another wake_q.
Then:
- CPU1: spin_unlock() of the task that goes to sleep is the RELEASE
- CPU2: the spin_lock() of the waker is the ACQUIRE
- CPU2: smp_mb__before_atomic inside wake_q_add() is the RELEASE
- CPU3: smp_mb__after_spinlock() inside try_to_wake_up() is the ACQUIRE
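
To make the pairing concrete, a reduced sketch (publish_ready() and
saw_ready() are hypothetical helper names, not the actual mqueue code;
it only assumes that wake_q_add() takes a task reference before it
returns):

/* Waker side, running under info->lock (as pipelined_send() does): */
static void publish_ready(struct wake_q_head *wake_q,
			  struct ext_wait_queue *this)
{
	wake_q_add(wake_q, this->task);	/* takes a task reference */
	/*
	 * RELEASE: the reference must be visible before STATE_READY,
	 * otherwise the lockless reader below can return, exit and
	 * free the task before the reference is taken.
	 */
	smp_store_release(&this->state, STATE_READY);
}

/* Sleeper side, after schedule() returned, info->lock NOT held: */
static bool saw_ready(struct ext_wait_queue *ewp)
{
	if (READ_ONCE(ewp->state) == STATE_READY) {
		/* ACQUIRE: pairs with the smp_store_release() above */
		smp_acquire__after_ctrl_dep();
		return true;
	}
	return false;
}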

Signed-off-by: Manfred Spraul <[email protected]>
Cc: Waiman Long <[email protected]>
Cc: Davidlohr Bueso <[email protected]>
---
ipc/mqueue.c | 32 +++++++++++++++++++++-----------
1 file changed, 21 insertions(+), 11 deletions(-)

diff --git a/ipc/mqueue.c b/ipc/mqueue.c
index be48c0ba92f7..b80574822f0a 100644
--- a/ipc/mqueue.c
+++ b/ipc/mqueue.c
@@ -646,18 +646,26 @@ static int wq_sleep(struct mqueue_inode_info *info, int sr,
wq_add(info, sr, ewp);

for (;;) {
+ /* memory barrier not required, we hold info->lock */
__set_current_state(TASK_INTERRUPTIBLE);

spin_unlock(&info->lock);
time = schedule_hrtimeout_range_clock(timeout, 0,
HRTIMER_MODE_ABS, CLOCK_REALTIME);

- if (ewp->state == STATE_READY) {
+ if (READ_ONCE(ewp->state) == STATE_READY) {
+ /*
+ * Pairs, together with READ_ONCE(), with
+ * the barrier in __pipelined_op().
+ */
+ smp_acquire__after_ctrl_dep();
retval = 0;
goto out;
}
spin_lock(&info->lock);
- if (ewp->state == STATE_READY) {
+
+ /* we hold info->lock, so no memory barrier required */
+ if (READ_ONCE(ewp->state) == STATE_READY) {
retval = 0;
goto out_unlock;
}
@@ -925,14 +933,12 @@ static inline void __pipelined_op(struct wake_q_head *wake_q,
list_del(&this->list);
wake_q_add(wake_q, this->task);
/*
- * Rely on the implicit cmpxchg barrier from wake_q_add such
- * that we can ensure that updating receiver->state is the last
- * write operation: As once set, the receiver can continue,
- * and if we don't have the reference count from the wake_q,
- * yet, at that point we can later have a use-after-free
- * condition and bogus wakeup.
+ * The barrier is required to ensure that the refcount increase
+ * inside wake_q_add() is completed before the state is updated.
+ *
+ * The barrier pairs with READ_ONCE()+smp_mb__after_ctrl_dep().
*/
- this->state = STATE_READY;
+ smp_store_release(&this->state, STATE_READY);
}

/* pipelined_send() - send a message directly to the task waiting in
@@ -1049,7 +1055,9 @@ static int do_mq_timedsend(mqd_t mqdes, const char __user *u_msg_ptr,
} else {
wait.task = current;
wait.msg = (void *) msg_ptr;
- wait.state = STATE_NONE;
+
+ /* memory barrier not required, we hold info->lock */
+ WRITE_ONCE(wait.state, STATE_NONE);
ret = wq_sleep(info, SEND, timeout, &wait);
/*
* wq_sleep must be called with info->lock held, and
@@ -1152,7 +1160,9 @@ static int do_mq_timedreceive(mqd_t mqdes, char __user *u_msg_ptr,
ret = -EAGAIN;
} else {
wait.task = current;
- wait.state = STATE_NONE;
+
+ /* memory barrier not required, we hold info->lock */
+ WRITE_ONCE(wait.state, STATE_NONE);
ret = wq_sleep(info, RECV, timeout, &wait);
msg_ptr = wait.msg;
}
--
2.21.0


2019-10-14 06:41:23

by Davidlohr Bueso

Subject: Re: [PATCH 3/6] ipc/mqueue.c: Update/document memory barriers

On Sat, 12 Oct 2019, Manfred Spraul wrote:

>Update and document memory barriers for mqueue.c:
>- ewp->state is read without any locks, thus READ_ONCE is required.
>
>- add smp_acquire__after_ctrl_dep() after the READ_ONCE; we need
> acquire semantics if the value is STATE_READY.
>
>- add an explicit memory barrier to __pipelined_op(): the
> refcount must be increased before the updated state becomes
> visible (see the sketch after the 3-CPU scenario below).
>
>- document why __set_current_state() may be used:
> Reading task->state cannot happen before the wake_q_add() call,
> which happens while holding info->lock. Thus the spin_unlock()
> is the RELEASE, and the spin_lock() is the ACQUIRE.
>
>For completeness: there is also a 3 CPU szenario, if the to be woken
^^^ scenario

>up task is already on another wake_q.
>Then:
>- CPU1: spin_unlock() of the task that goes to sleep is the RELEASE
>- CPU2: the spin_lock() of the waker is the ACQUIRE
>- CPU2: smp_mb__before_atomic inside wake_q_add() is the RELEASE
>- CPU3: smp_mb__after_spinlock() inside try_to_wake_up() is the ACQUIRE
>
>Signed-off-by: Manfred Spraul <[email protected]>
>Cc: Waiman Long <[email protected]>
>Cc: Davidlohr Bueso <[email protected]>

Setting aside the smp_store_release() in __pipelined_op(), feel
free to add my:

Reviewed-by: Davidlohr Bueso <[email protected]>

>---
> ipc/mqueue.c | 32 +++++++++++++++++++++-----------
> 1 file changed, 21 insertions(+), 11 deletions(-)
>
>diff --git a/ipc/mqueue.c b/ipc/mqueue.c
>index be48c0ba92f7..b80574822f0a 100644
>--- a/ipc/mqueue.c
>+++ b/ipc/mqueue.c
>@@ -646,18 +646,26 @@ static int wq_sleep(struct mqueue_inode_info *info, int sr,
> wq_add(info, sr, ewp);
>
> for (;;) {
>+ /* memory barrier not required, we hold info->lock */
> __set_current_state(TASK_INTERRUPTIBLE);
>
> spin_unlock(&info->lock);
> time = schedule_hrtimeout_range_clock(timeout, 0,
> HRTIMER_MODE_ABS, CLOCK_REALTIME);
>
>- if (ewp->state == STATE_READY) {
>+ if (READ_ONCE(ewp->state) == STATE_READY) {
>+ /*
>+ * Pairs, together with READ_ONCE(), with
>+ * the barrier in __pipelined_op().
>+ */
>+ smp_acquire__after_ctrl_dep();
> retval = 0;
> goto out;
> }
> spin_lock(&info->lock);
>- if (ewp->state == STATE_READY) {
>+
>+ /* we hold info->lock, so no memory barrier required */
>+ if (READ_ONCE(ewp->state) == STATE_READY) {
> retval = 0;
> goto out_unlock;
> }
>@@ -925,14 +933,12 @@ static inline void __pipelined_op(struct wake_q_head *wake_q,
> list_del(&this->list);
> wake_q_add(wake_q, this->task);
> /*
>- * Rely on the implicit cmpxchg barrier from wake_q_add such
>- * that we can ensure that updating receiver->state is the last
>- * write operation: As once set, the receiver can continue,
>- * and if we don't have the reference count from the wake_q,
>- * yet, at that point we can later have a use-after-free
>- * condition and bogus wakeup.
>+ * The barrier is required to ensure that the refcount increase
>+ * inside wake_q_add() is completed before the state is updated.
>+ *
>+ * The barrier pairs with READ_ONCE()+smp_mb__after_ctrl_dep().
> */
>- this->state = STATE_READY;
>+ smp_store_release(&this->state, STATE_READY);
> }
>
> /* pipelined_send() - send a message directly to the task waiting in
>@@ -1049,7 +1055,9 @@ static int do_mq_timedsend(mqd_t mqdes, const char __user *u_msg_ptr,
> } else {
> wait.task = current;
> wait.msg = (void *) msg_ptr;
>- wait.state = STATE_NONE;
>+
>+ /* memory barrier not required, we hold info->lock */
>+ WRITE_ONCE(wait.state, STATE_NONE);
> ret = wq_sleep(info, SEND, timeout, &wait);
> /*
> * wq_sleep must be called with info->lock held, and
>@@ -1152,7 +1160,9 @@ static int do_mq_timedreceive(mqd_t mqdes, char __user *u_msg_ptr,
> ret = -EAGAIN;
> } else {
> wait.task = current;
>- wait.state = STATE_NONE;
>+
>+ /* memory barrier not required, we hold info->lock */
>+ WRITE_ONCE(wait.state, STATE_NONE);
> ret = wq_sleep(info, RECV, timeout, &wait);
> msg_ptr = wait.msg;
> }
>--
>2.21.0
>

2019-10-14 13:06:59

by Peter Zijlstra

Subject: Re: [PATCH 3/6] ipc/mqueue.c: Update/document memory barriers

On Sat, Oct 12, 2019 at 07:49:55AM +0200, Manfred Spraul wrote:

> for (;;) {
> + /* memory barrier not required, we hold info->lock */
> __set_current_state(TASK_INTERRUPTIBLE);
>
> spin_unlock(&info->lock);
> time = schedule_hrtimeout_range_clock(timeout, 0,
> HRTIMER_MODE_ABS, CLOCK_REALTIME);
>
> + if (READ_ONCE(ewp->state) == STATE_READY) {
> + /*
> + * Pairs, together with READ_ONCE(), with
> + * the barrier in __pipelined_op().
> + */
> + smp_acquire__after_ctrl_dep();
> retval = 0;
> goto out;
> }
> spin_lock(&info->lock);
> +
> + /* we hold info->lock, so no memory barrier required */
> + if (READ_ONCE(ewp->state) == STATE_READY) {
> retval = 0;
> goto out_unlock;
> }
> @@ -925,14 +933,12 @@ static inline void __pipelined_op(struct wake_q_head *wake_q,
> list_del(&this->list);
> wake_q_add(wake_q, this->task);
> /*
> + * The barrier is required to ensure that the refcount increase
> + * inside wake_q_add() is completed before the state is updated.

fails to explain *why* this is important.

> + *
> + * The barrier pairs with READ_ONCE()+smp_mb__after_ctrl_dep().
> */
> + smp_store_release(&this->state, STATE_READY);

You retained the whitespace damage.

And I'm terribly confused by this code, probably due to the lack of
'why' as per the above. What is this trying to do?

Are we worried about something like:

A                       B                       C


                        wq_sleep()
                          schedule_...();

                                                /* spurious wakeup */
                                                wake_up_process(B)

wake_q_add(A)
if (cmpxchg()) // success

->state = STATE_READY (reordered)

                        if (READ_ONCE() == STATE_READY)
                          goto out;

                        exit();

get_task_struct() // UaF


Can we put the exact and full race in the comment please?
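
For reference, a simplified sketch of wake_q_add() as it looked around
this time (kernel/sched/core.c, queueing details elided; shown only to
make the use-after-free window visible):

void wake_q_add(struct wake_q_head *head, struct task_struct *task)
{
	smp_mb__before_atomic();
	if (cmpxchg_relaxed(&task->wake_q.next, NULL, WAKE_Q_TAIL))
		return;	/* already queued, someone else wakes it */
	/* ... append task->wake_q to the local list ... */
	get_task_struct(task);	/* the reference is taken only HERE */
}

If the caller's ->state = STATE_READY store becomes visible before that
get_task_struct() has completed, the sleeper can observe STATE_READY,
return to user space and exit, after which get_task_struct() runs on
freed memory.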

2019-10-14 14:01:43

by Peter Zijlstra

Subject: Re: [PATCH 3/6] ipc/mqueue.c: Update/document memory barriers

On Mon, Oct 14, 2019 at 02:59:11PM +0200, Peter Zijlstra wrote:
> On Sat, Oct 12, 2019 at 07:49:55AM +0200, Manfred Spraul wrote:
>
> > for (;;) {
> > + /* memory barrier not required, we hold info->lock */
> > __set_current_state(TASK_INTERRUPTIBLE);
> >
> > spin_unlock(&info->lock);
> > time = schedule_hrtimeout_range_clock(timeout, 0,
> > HRTIMER_MODE_ABS, CLOCK_REALTIME);
> >
> > + if (READ_ONCE(ewp->state) == STATE_READY) {
> > + /*
> > + * Pairs, together with READ_ONCE(), with
> > + * the barrier in __pipelined_op().
> > + */
> > + smp_acquire__after_ctrl_dep();
> > retval = 0;
> > goto out;
> > }
> > spin_lock(&info->lock);
> > +
> > + /* we hold info->lock, so no memory barrier required */
> > + if (READ_ONCE(ewp->state) == STATE_READY) {
> > retval = 0;
> > goto out_unlock;
> > }
> > @@ -925,14 +933,12 @@ static inline void __pipelined_op(struct wake_q_head *wake_q,
> > list_del(&this->list);
> > wake_q_add(wake_q, this->task);
> > /*
> > + * The barrier is required to ensure that the refcount increase
> > + * inside wake_q_add() is completed before the state is updated.
>
> fails to explain *why* this is important.
>
> > + *
> > + * The barrier pairs with READ_ONCE()+smp_mb__after_ctrl_dep().
> > */
> > + smp_store_release(&this->state, STATE_READY);
>
> You retained the whitespace damage.
>
> And I'm terribly confused by this code, probably due to the lack of
> 'why' as per the above. What is this trying to do?
>
> Are we worried about something like:
>
> A                       B                       C
>
>
>                         wq_sleep()
>                           schedule_...();
>
>                                                 /* spurious wakeup */
>                                                 wake_up_process(B)
>
> wake_q_add(A)
> if (cmpxchg()) // success
>
> ->state = STATE_READY (reordered)
>
>                         if (READ_ONCE() == STATE_READY)
>                           goto out;
>
>                         exit();
>
> get_task_struct() // UaF
>
>
> Can we put the exact and full race in the comment please?

Like Davidlohr already suggested, elsewhere we write it like so:


--- a/ipc/mqueue.c
+++ b/ipc/mqueue.c
@@ -930,15 +930,10 @@ static inline void __pipelined_op(struct
struct mqueue_inode_info *info,
struct ext_wait_queue *this)
{
+ get_task_struct(this->task);
list_del(&this->list);
- wake_q_add(wake_q, this->task);
- /*
- * The barrier is required to ensure that the refcount increase
- * inside wake_q_add() is completed before the state is updated.
- *
- * The barrier pairs with READ_ONCE()+smp_mb__after_ctrl_dep().
- */
- smp_store_release(&this->state, STATE_READY);
+ smp_store_release(&this->state, STATE_READY);
+ wake_q_add_safe(wake_q, this->task);
}

/* pipelined_send() - send a message directly to the task waiting in
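
The property this relies on (per the wake_q_add_safe() semantics): the
caller must already hold a reference on the task, and wake_q_add_safe()
takes ownership of it, dropping it itself if the task was already queued
elsewhere. A sketch of why the resulting order is safe (the step
annotations are mine, not from the patch):

	get_task_struct(this->task);			/* 1: pin the task */
	list_del(&this->list);
	smp_store_release(&this->state, STATE_READY);	/* 2: publish */
	wake_q_add_safe(wake_q, this->task);		/* 3: hand over the ref */

The release in step 2 orders step 1 before it, so once the sleeper
observes STATE_READY it may exit at any time without invalidating the
reference that step 3 consumes.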

2019-10-14 23:06:32

by Manfred Spraul

Subject: Re: [PATCH 3/6] ipc/mqueue.c: Update/document memory barriers

Hi Peter,

On 10/14/19 3:58 PM, Peter Zijlstra wrote:
> On Mon, Oct 14, 2019 at 02:59:11PM +0200, Peter Zijlstra wrote:
>> On Sat, Oct 12, 2019 at 07:49:55AM +0200, Manfred Spraul wrote:
>>
>>> for (;;) {
>>> + /* memory barrier not required, we hold info->lock */
>>> __set_current_state(TASK_INTERRUPTIBLE);
>>>
>>> spin_unlock(&info->lock);
>>> time = schedule_hrtimeout_range_clock(timeout, 0,
>>> HRTIMER_MODE_ABS, CLOCK_REALTIME);
>>>
>>> + if (READ_ONCE(ewp->state) == STATE_READY) {
>>> + /*
>>> + * Pairs, together with READ_ONCE(), with
>>> + * the barrier in __pipelined_op().
>>> + */
>>> + smp_acquire__after_ctrl_dep();
>>> retval = 0;
>>> goto out;
>>> }
>>> spin_lock(&info->lock);
>>> +
>>> + /* we hold info->lock, so no memory barrier required */
>>> + if (READ_ONCE(ewp->state) == STATE_READY) {
>>> retval = 0;
>>> goto out_unlock;
>>> }
>>> @@ -925,14 +933,12 @@ static inline void __pipelined_op(struct wake_q_head *wake_q,
>>> list_del(&this->list);
>>> wake_q_add(wake_q, this->task);
>>> /*
>>> + * The barrier is required to ensure that the refcount increase
>>> + * inside wake_q_add() is completed before the state is updated.
>> fails to explain *why* this is important.
>>
>>> + *
>>> + * The barrier pairs with READ_ONCE()+smp_mb__after_ctrl_dep().
>>> */
>>> + smp_store_release(&this->state, STATE_READY);
>> You retained the whitespace damage.
>>
>> And I'm terribly confused by this code, probably due to the lack of
>> 'why' as per the above. What is this trying to do?
>>
>> Are we worried about something like:
>>
>> A                       B                       C
>>
>>
>>                         wq_sleep()
>>                           schedule_...();
>>
>>                                                 /* spurious wakeup */
>>                                                 wake_up_process(B)
>>
>> wake_q_add(A)
>> if (cmpxchg()) // success
>>
>> ->state = STATE_READY (reordered)
>>
>>                         if (READ_ONCE() == STATE_READY)
>>                           goto out;
>>
>>                         exit();
>>
>> get_task_struct() // UaF
>>
>>
>> Can we put the exact and full race in the comment please?

Yes, I'll do that. Actually, two threads are sufficient:

A                    B

WRITE_ONCE(wait.state, STATE_NONE);
schedule_hrtimeout()

                      wake_q_add(A)
                      if (cmpxchg()) // success
                      ->state = STATE_READY (reordered)

<timeout returns>
if (wait.state == STATE_READY) return;
sysret to user space
sys_exit()

                      get_task_struct() // UaF
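
A sketch of the comment this race could become in __pipelined_op()
(hypothetical wording, written against the smp_store_release() variant):

/*
 * Set state to STATE_READY with a release barrier: the task reference
 * taken inside wake_q_add() must be visible before STATE_READY is.
 * Otherwise the woken task can observe STATE_READY after a timeout or
 * spurious wakeup, return to user space and exit, and the delayed
 * get_task_struct() inside wake_q_add() then operates on freed memory.
 * Pairs with READ_ONCE() + smp_acquire__after_ctrl_dep() in wq_sleep().
 */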


> Like Davidlohr already suggested, elsewhere we write it like so:
>
>
> --- a/ipc/mqueue.c
> +++ b/ipc/mqueue.c
> @@ -930,15 +930,10 @@ static inline void __pipelined_op(struct
> struct mqueue_inode_info *info,
> struct ext_wait_queue *this)
> {
> + get_task_struct(this->task);
> list_del(&this->list);
> - wake_q_add(wake_q, this->task);
> - /*
> - * The barrier is required to ensure that the refcount increase
> - * inside wake_q_add() is completed before the state is updated.
> - *
> - * The barrier pairs with READ_ONCE()+smp_mb__after_ctrl_dep().
> - */
> - smp_store_release(&this->state, STATE_READY);
> + smp_store_release(&this->state, STATE_READY);
> + wake_q_add_safe(wake_q, this->task);
> }
>
> /* pipelined_send() - send a message directly to the task waiting in

Much better, I'll rewrite it and then resend the series.

--

    Manfred