2021-05-07 17:15:01

by Varad Gautam

Subject: [PATCH v3] ipc/mqueue: Avoid relying on a stack reference past its expiry

do_mq_timedreceive calls wq_sleep with a stack-local address. The
sender (do_mq_timedsend) later uses this address to call
pipelined_send.

This leads to a very hard-to-trigger race where a do_mq_timedreceive call
might return and leave do_mq_timedsend relying on an invalid address,
causing the following crash:

[ 240.739977] RIP: 0010:wake_q_add_safe+0x13/0x60
[ 240.739991] Call Trace:
[ 240.739999] __x64_sys_mq_timedsend+0x2a9/0x490
[ 240.740003] ? auditd_test_task+0x38/0x40
[ 240.740007] ? auditd_test_task+0x38/0x40
[ 240.740011] do_syscall_64+0x80/0x680
[ 240.740017] entry_SYSCALL_64_after_hwframe+0x44/0xa9
[ 240.740019] RIP: 0033:0x7f5928e40343

The race occurs as:

1. do_mq_timedreceive calls wq_sleep with the address of a
`struct ext_wait_queue` on its function stack (aliased as `ewq_addr` here)
- it holds a valid `struct ext_wait_queue *` as long as that stack region
has not been overwritten.

2. `ewq_addr` gets added to info->e_wait_q[RECV].list in wq_add, and
do_mq_timedsend receives it via wq_get_first_waiter(info, RECV) to call
__pipelined_op.

3. Sender calls __pipelined_op::smp_store_release(&this->state, STATE_READY).
Here is where the race window begins. (`this` is `ewq_addr`.)

4. If the receiver wakes up now in do_mq_timedreceive::wq_sleep, it
will see `state == STATE_READY` and break.

5. do_mq_timedreceive returns, and `ewq_addr` no longer points to a
valid `struct ext_wait_queue`, since it was on do_mq_timedreceive's
stack. (Although the address may not get overwritten until another
function happens to touch it, which means it can persist for an
indefinite time.)

6. do_mq_timedsend::__pipelined_op() still believes `ewq_addr` points to
a valid `struct ext_wait_queue`, and uses it to find a task_struct to pass
to the wake_q_add_safe call. In the lucky case where nothing has
overwritten `ewq_addr` yet, `ewq_addr->task` is the right task_struct.
In the unlucky case, __pipelined_op::wake_q_add_safe gets handed a
bogus address as the receiver's task_struct, causing the crash.
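
To make step 5 concrete, here is a condensed sketch of the receiver
path, simplified from ipc/mqueue.c's wq_sleep() (signal and timeout
handling elided; not the verbatim code):

static int wq_sleep(struct mqueue_inode_info *info, int sr,
                    ktime_t *timeout, struct ext_wait_queue *ewp)
{
        /* ewp lives on the caller's (do_mq_timedreceive's) stack */
        wq_add(info, sr, ewp);

        for (;;) {
                __set_current_state(TASK_INTERRUPTIBLE);

                spin_unlock(&info->lock);
                schedule_hrtimeout_range_clock(timeout, 0,
                                HRTIMER_MODE_ABS, CLOCK_REALTIME);

                if (READ_ONCE(ewp->state) == STATE_READY) {
                        /* pairs with the sender's smp_store_release() */
                        smp_acquire__after_ctrl_dep();
                        return 0;   /* ewp is dead once this returns */
                }

                spin_lock(&info->lock);
                /* recheck ewp->state under info->lock; handle signals
                 * and timeouts, or loop again (elided) */
        }
}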

do_mq_timedsend::__pipelined_op() should not dereference `this` after
setting STATE_READY, as the receiver counterpart is now free to return.
Change __pipelined_op to call wake_q_add before setting STATE_READY,
which ensures that the receiver's task_struct can still be found via
`this`.
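
For reference, this is the pre-patch ordering in __pipelined_op()
(matching the lines removed in the diff below), with the problematic
access annotated:

static inline void __pipelined_op(struct wake_q_head *wake_q,
                                  struct mqueue_inode_info *info,
                                  struct ext_wait_queue *this)
{
        list_del(&this->list);
        get_task_struct(this->task);

        /* see MQ_BARRIER for purpose/pairing */
        smp_store_release(&this->state, STATE_READY);
        /* BUG: rereads this->task after the receiver is free to
         * return and let its stack frame be reused */
        wake_q_add_safe(wake_q, this->task);
}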

Fixes: c5b2cbdbdac563 ("ipc/mqueue.c: update/document memory barriers")
Signed-off-by: Varad Gautam <[email protected]>
Reported-by: Matthias von Faber <[email protected]>
Acked-by: Davidlohr Bueso <[email protected]>
Cc: <[email protected]> # 5.6
Cc: Christian Brauner <[email protected]>
Cc: Oleg Nesterov <[email protected]>
Cc: "Eric W. Biederman" <[email protected]>
Cc: Manfred Spraul <[email protected]>
Cc: Andrew Morton <[email protected]>
---
v2: Call wake_q_add before smp_store_release, instead of using a
get_task_struct/wake_q_add_safe combination across
smp_store_release. (Davidlohr Bueso)
v3: Comment/commit message fixup.

ipc/mqueue.c | 33 ++++++++++++++++++++++++---------
1 file changed, 24 insertions(+), 9 deletions(-)

diff --git a/ipc/mqueue.c b/ipc/mqueue.c
index 8031464ed4ae..bf5dce399854 100644
--- a/ipc/mqueue.c
+++ b/ipc/mqueue.c
@@ -78,11 +78,13 @@ struct posix_msg_tree_node {
* MQ_BARRIER:
* To achieve proper release/acquire memory barrier pairing, the state is set to
* STATE_READY with smp_store_release(), and it is read with READ_ONCE followed
- * by smp_acquire__after_ctrl_dep(). In addition, wake_q_add_safe() is used.
+ * by smp_acquire__after_ctrl_dep(). The state change to STATE_READY must be
+ * the last write operation, after which the blocked task can immediately
+ * return and exit.
*
* This prevents the following races:
*
- * 1) With the simple wake_q_add(), the task could be gone already before
+ * 1) With wake_q_add(), the task could be gone already before
* the increase of the reference happens
*     Thread A
*                              Thread B
@@ -97,10 +99,25 @@ struct posix_msg_tree_node {
* sys_exit()
*                              get_task_struct() // UaF
*
- * Solution: Use wake_q_add_safe() and perform the get_task_struct() before
- * the smp_store_release() that does ->state = STATE_READY.
+ * 2) With wake_q_add(), the receiver task could have returned from the
+ * syscall and had its stack-allocated waiter overwritten before the
+ * waker could add it to the wake_q
+ *     Thread A
+ *                              Thread B
+ * WRITE_ONCE(wait.state, STATE_NONE);
+ * schedule_hrtimeout()
+ *                              ->state = STATE_READY
+ * <timeout returns>
+ * if (wait.state == STATE_READY) return;
+ * sysret to user space
+ * overwrite receiver's stack
+ *                              wake_q_add(A)
+ *                              if (cmpxchg()) // corrupted waiter
*
- * 2) Without proper _release/_acquire barriers, the woken up task
+ * Solution: Queue the task for wakeup before the smp_store_release() that
+ * does ->state = STATE_READY.
+ *
+ * 3) Without proper _release/_acquire barriers, the woken up task
* could read stale data
*
* Thread A
@@ -116,7 +133,7 @@ struct posix_msg_tree_node {
*
* Solution: use _release and _acquire barriers.
*
- * 3) There is intentionally no barrier when setting current->state
+ * 4) There is intentionally no barrier when setting current->state
* to TASK_INTERRUPTIBLE: spin_unlock(&info->lock) provides the
* release memory barrier, and the wakeup is triggered when holding
* info->lock, i.e. spin_lock(&info->lock) provided a pairing
@@ -1005,11 +1022,9 @@ static inline void __pipelined_op(struct wake_q_head *wake_q,
struct ext_wait_queue *this)
{
list_del(&this->list);
- get_task_struct(this->task);
-
+ wake_q_add(wake_q, this->task);
/* see MQ_BARRIER for purpose/pairing */
smp_store_release(&this->state, STATE_READY);
- wake_q_add_safe(wake_q, this->task);
}

/* pipelined_send() - send a message directly to the task waiting in
--
2.30.2


2021-05-08 19:28:40

by Manfred Spraul

Subject: Re: [PATCH v3] ipc/mqueue: Avoid relying on a stack reference past its expiry

Hi Varad,

On 5/7/21 3:38 PM, Varad Gautam wrote:
> @@ -1005,11 +1022,9 @@ static inline void __pipelined_op(struct wake_q_head *wake_q,
> struct ext_wait_queue *this)
> {
> list_del(&this->list);
> - get_task_struct(this->task);
> -
> + wake_q_add(wake_q, this->task);
> /* see MQ_BARRIER for purpose/pairing */
> smp_store_release(&this->state, STATE_READY);
> - wake_q_add_safe(wake_q, this->task);
> }
>
> /* pipelined_send() - send a message directly to the task waiting in

First, I was too fast: I had assumed that calling wake_q_add() before
smp_store_release() could result in a lost wakeup.

As __pipelined_op() is called within spin_lock(&info->lock), and as
wq_sleep() will reread this->state after acquiring
spin_lock(&info->lock), I do not see a bug anymore.
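
Spelling that out as a simplified interleaving sketch (not verbatim
code):

        /* Sender, under spin_lock(&info->lock): */
        wake_q_add(wake_q, this->task);
        smp_store_release(&this->state, STATE_READY);
        spin_unlock(&info->lock);
        wake_up_q(&wake_q);

        /* Receiver, woken in wq_sleep() for whatever reason: */
        spin_lock(&info->lock);
        if (READ_ONCE(ewp->state) == STATE_READY)
                return 0;       /* the READY store happened under the
                                 * same lock, so it cannot be missed */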

But I don't like the change: Why should ipc/*.c differ from kernel/futex.c?

--

    Manfred

2021-05-10 01:12:48

by Davidlohr Bueso

Subject: Re: [PATCH v3] ipc/mqueue: Avoid relying on a stack reference past its expiry

On 2021-05-08 12:23, Manfred Spraul wrote:
> Hi Varad,
>
> On 5/7/21 3:38 PM, Varad Gautam wrote:
>> @@ -1005,11 +1022,9 @@ static inline void __pipelined_op(struct
>> wake_q_head *wake_q,
>> struct ext_wait_queue *this)
>> {
>> list_del(&this->list);
>> - get_task_struct(this->task);
>> -
>> + wake_q_add(wake_q, this->task);
>> /* see MQ_BARRIER for purpose/pairing */
>> smp_store_release(&this->state, STATE_READY);
>> - wake_q_add_safe(wake_q, this->task);
>> }
>> /* pipelined_send() - send a message directly to the task waiting
>> in
>
> First, I was too fast: I had assumed that wake_q_add() before
> smp_store_release() would be a potential lost wakeup.

Yeah you need wake_up_q() to actually wake anything up.

>
> As __pipelined_op() is called within spin_lock(&info->lock), and as
> wq_sleep() will reread this->state after acquiring
> spin_lock(&info->lock), I do not see a bug anymore.

Right, and when I proposed this version of the fix I was mostly focusing
on STATE_READY being set as the last operation, but the fact of the matter
is we had moved to the wake_q_add_safe() version for two reasons:

(1) Ensuring the ->state = STATE_READY is done after taking the reference
count, to avoid racing with exit. In mqueue's original use of wake_q we
were relying on the implied barrier from wake_q_add() in order to avoid
reordering of setting the state. But this turned out to be insufficient,
hence the explicit smp_store_release().

(2) Preventing a potential lost wakeup when the blocked task is already
queued for wakeup by another task (the failed cmpxchg case in wake_q_add),
which is why we need to set the return condition (->state = STATE_READY)
before adding the task to the wake_q.
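
For context, the failed-cmpxchg case lives in kernel/sched/core.c's
__wake_q_add(); condensed, with my own comments:

static bool __wake_q_add(struct wake_q_head *head,
                         struct task_struct *task)
{
        struct wake_q_node *node = &task->wake_q;

        /*
         * If ->wake_q is already non-NULL, the task has been queued
         * by someone else and will get its wakeup from them; the add
         * "fails" and this wake_q will not deliver the wakeup.
         */
        smp_mb__before_atomic();
        if (unlikely(cmpxchg_relaxed(&node->next, NULL, WAKE_Q_TAIL)))
                return false;

        /* the head is context local, there can be no concurrency */
        *head->lastp = node;
        head->lastp = &node->next;
        return true;
}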

But I'm not seeing how race (2) can happen in mqueue. The race was always
theoretical to begin with, with the exception of rwsems[1], in which the
wakee task could end up in the waker's wake_q without actually blocking.

So all in all I now agree that we should keep the order of how we
currently have things, just to be on the safer side, if nothing else.

[1] https://lore.kernel.org/lkml/[email protected]

Thanks,
Davidlohr

2021-05-10 10:31:22

by Varad Gautam

Subject: Re: [PATCH v3] ipc/mqueue: Avoid relying on a stack reference past its expiry



On 5/10/21 3:10 AM, Davidlohr Bueso wrote:
> On 2021-05-08 12:23, Manfred Spraul wrote:
>> Hi Varad,
>>
>> On 5/7/21 3:38 PM, Varad Gautam wrote:
>>> @@ -1005,11 +1022,9 @@ static inline void __pipelined_op(struct wake_q_head *wake_q,
>>>                     struct ext_wait_queue *this)
>>>   {
>>>       list_del(&this->list);
>>> -    get_task_struct(this->task);
>>> -
>>> +    wake_q_add(wake_q, this->task);
>>>       /* see MQ_BARRIER for purpose/pairing */
>>>       smp_store_release(&this->state, STATE_READY);
>>> -    wake_q_add_safe(wake_q, this->task);
>>>   }
>>>     /* pipelined_send() - send a message directly to the task waiting in
>>
>> First, I was too fast: I had assumed that wake_q_add() before
>> smp_store_release() would be a potential lost wakeup.
>
> Yeah you need wake_up_q() to actually wake anything up.
>
>>
>> As __pipelined_op() is called within spin_lock(&info->lock), and as
>> wq_sleep() will reread this->state after acquiring
>> spin_lock(&info->lock), I do not see a bug anymore.
>
> Right, and when I proposed this version of the fix I was mostly focusing on STATE_READY
> being set as the last operation, but the fact of the matter is we had moved to the
> wake_q_add_safe() version for two reasons:
>
> (1) Ensuring the ->state = STATE_READY is done after the reference count and avoid
> racing with exit. In mqueue's original use of wake_q we were relying on the call's
> implied barrier from wake_q_add() in order to avoid reordering of setting the state.
> But this turned out to be insufficient hence the explicit smp_store_release().
>
> (2) In order to prevent a potential lost wakeup when the blocked task is already queued
> for wakeup by another task (the failed cmpxchg case in wake_q_add), and therefore we need
> to set the return condition (->state = STATE_READY) before adding the task to the wake_q.
>
> But I'm not seeing how race (2) can happen in mqueue. The race was always theoretical to
> begin with, with the exception of rwsems[1] in which actually the wakee task could end up in
> the waker's wake_q without actually blocking.
>
> So all in all I now agree that we should keep the order of how we currently have things,
> just to be on the safer side, if nothing else.
>

Considering that moving the wake_q_add() call in v2/v3 has the potential
to cause lost wakeups, as has shown up in other cases, I would argue for
merging the approach from v1 as the path of least surprise, in favor of
first eliminating the race [1]. I will resurrect it for a v4.

[1] https://lore.kernel.org/lkml/[email protected]/

Thanks,
Varad

> [1] https://lore.kernel.org/lkml/[email protected]
>
> Thanks,
> Davidlohr
>

--
SUSE Software Solutions Germany GmbH
Maxfeldstr. 5
90409 Nürnberg
Germany

HRB 36809, AG Nürnberg
Geschäftsführer: Felix Imendörffer