2017-07-14 13:48:15

by Prateek Sood

Subject: [PATCH] osq_lock: fix osq_lock queue corruption

Fix the ordering of link creation between node->prev and prev->next in
osq_lock(). Consider a case in which the optimistic spin queue is
CPU6->CPU2, with CPU6 holding the lock. At this point, if CPU0 comes in
to acquire the osq_lock, it will update the tail count. After the tail
count update, if CPU2 starts to unqueue itself from the optimistic spin
queue, it will find the updated tail count with CPU0 and set CPU2's
node->next to NULL in osq_wait_next(). If the following stores are
reordered, then prev->next (prev being CPU2) would be updated to point to
the CPU0 node:
node->prev = prev;
WRITE_ONCE(prev->next, node);
At this point, if the next instruction
WRITE_ONCE(next->prev, prev);
in the CPU2 path is committed before the update of CPU0's node->prev =
prev, then CPU0's node->prev will point to the CPU6 node. CPU0's
node->prev = prev is then committed, changing CPU0's prev back to the
CPU2 node. CPU2's node->next is currently NULL, so if CPU0 gets into the
unqueue path of osq_lock() it will keep spinning in an infinite loop, as
the condition prev->next == node will never be true.
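
The store ordering the fix enforces can be sketched with C11 atomics. This is a simplified, single-threaded model for illustration only: `spin_node` and `osq_link` are hypothetical names rather than the kernel's types, and `atomic_thread_fence(memory_order_release)` stands in for `smp_wmb()`:

```c
#include <assert.h>
#include <stdatomic.h>
#include <stddef.h>

/* Hypothetical, simplified stand-in for struct optimistic_spin_node. */
struct spin_node {
	struct spin_node *_Atomic next;
	struct spin_node *prev;
};

/*
 * Models the fixed enqueue path of osq_lock():
 *
 *	node->prev = prev;
 *	smp_wmb();
 *	WRITE_ONCE(prev->next, node);
 *
 * The fence orders the node->prev store before the store that publishes
 * the node on prev->next, so an unqueueing CPU that observes prev->next
 * is guaranteed to also observe a valid node->prev.
 */
static inline void osq_link(struct spin_node *node, struct spin_node *prev)
{
	node->prev = prev;                          /* node->prev = prev */
	atomic_thread_fence(memory_order_release);  /* models smp_wmb() */
	atomic_store_explicit(&prev->next, node,    /* WRITE_ONCE(prev->next, node) */
			      memory_order_relaxed);
}
```

Note that `smp_wmb()` is only a store-store barrier; the release fence used here is a stronger, portable approximation.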

Signed-off-by: Prateek Sood <[email protected]>
---
kernel/locking/osq_lock.c | 26 ++++++++++++++++++++++++++
1 file changed, 26 insertions(+)

diff --git a/kernel/locking/osq_lock.c b/kernel/locking/osq_lock.c
index 05a3785..1d371a0 100644
--- a/kernel/locking/osq_lock.c
+++ b/kernel/locking/osq_lock.c
@@ -104,6 +104,32 @@ bool osq_lock(struct optimistic_spin_queue *lock)

prev = decode_cpu(old);
node->prev = prev;
+
+ /*
+ * We need to avoid reordering the link update sequence of the osq.
+ * Consider a case in which the optimistic spin queue is CPU6->CPU2,
+ * with CPU6 holding the lock. At this point, if CPU0 comes in to
+ * acquire the osq_lock, it will update the tail count. After the
+ * tail count update, if CPU2 starts to unqueue itself from the
+ * optimistic spin queue, it will find the updated tail count with
+ * CPU0 and set CPU2's node->next to NULL in osq_wait_next(). If
+ * the following stores are reordered, then prev->next (prev being
+ * CPU2) would be updated to point to the CPU0 node:
+ * node->prev = prev;
+ * WRITE_ONCE(prev->next, node);
+ *
+ * At this point, if the next instruction
+ * WRITE_ONCE(next->prev, prev);
+ * in the CPU2 path is committed before the update of CPU0's
+ * node->prev = prev, then CPU0's node->prev will point to the CPU6
+ * node. CPU0's node->prev = prev is then committed, changing CPU0's
+ * prev back to the CPU2 node. CPU2's node->next is NULL, so if CPU0
+ * gets into the unqueue path of osq_lock() it will keep spinning in
+ * an infinite loop, as the condition prev->next == node will never
+ * be true.
+ */
+ smp_wmb();
+
WRITE_ONCE(prev->next, node);

/*
--
Qualcomm India Private Limited, on behalf of Qualcomm Innovation Center, Inc.,
is a member of Code Aurora Forum, a Linux Foundation Collaborative Project.


2017-07-18 11:26:08

by Peter Zijlstra

Subject: Re: [PATCH] osq_lock: fix osq_lock queue corruption


I added a few pictures, just the text didn't want to make sense to me.

On Fri, Jul 14, 2017 at 07:17:56PM +0530, Prateek Sood wrote:

> Fix the ordering of link creation between node->prev and prev->next in
> osq_lock(). Consider a case in which the optimistic spin queue is
> CPU6->CPU2, with CPU6 holding the lock.


tail
v
,-. <- ,-.
|6| |2|
`-' -> `-'

> At this point if CPU0 comes in to acquire osq_lock, it will update the
> tail count.


CPU2 CPU0
----------------------------------

tail
v
,-. <- ,-. ,-.
|6| |2| |0|
`-' -> `-' `-'


> After the tail count update, if CPU2 starts to unqueue itself from the
> optimistic spin queue, it will find the updated tail count with CPU0 and
> set CPU2's node->next to NULL in osq_wait_next().


unqueue-A

tail
v
,-. <- ,-. ,-.
|6| |2| |0|
`-' `-' `-'

unqueue-B

->tail != curr && !node->next

> If the following stores are reordered, then prev->next (prev being
> CPU2) would be updated to point to the CPU0 node:

tail
v
,-. <- ,-. ,-.
|6| |2| |0|
`-' `-' -> `-'

osq_wait_next()
node->next <- 0
xchg(node->next, NULL)

tail
v
,-. <- ,-. ,-.
|6| |2| |0|
`-' `-' `-'

unqueue-C

> At this point, if the next instruction
> WRITE_ONCE(next->prev, prev);
> in the CPU2 path is committed before the update of CPU0's node->prev =
> prev, then CPU0's node->prev will point to the CPU6 node.

tail
v----------. v
,-. <- ,-. ,-.
|6| |2| |0|
`-' `-' `-'
`----------^

> CPU0's node->prev = prev is then committed, changing CPU0's prev back
> to the CPU2 node. CPU2's node->next is currently NULL,

tail
v
,-. <- ,-. <- ,-.
|6| |2| |0|
`-' `-' `-'
`----------^


> so if CPU0 gets into the unqueue path of osq_lock() it will keep
> spinning in an infinite loop, as the condition prev->next == node will
> never be true.

Also updated the comment..

---
kernel/locking/osq_lock.c | 13 +++++++++++++
1 file changed, 13 insertions(+)

--- a/kernel/locking/osq_lock.c
+++ b/kernel/locking/osq_lock.c
@@ -109,6 +109,19 @@ bool osq_lock(struct optimistic_spin_que

prev = decode_cpu(old);
node->prev = prev;
+
+ /*
+ * osq_lock() unqueue
+ *
+ * node->prev = prev osq_wait_next()
+ * WMB MB
+ * prev->next = node next->prev = prev // unqueue-C
+ *
+ * Here 'node->prev' and 'next->prev' are the same variable and we need
+ * to ensure these stores happen in-order to avoid corrupting the list.
+ */
+ smp_wmb();
+
WRITE_ONCE(prev->next, node);

/*
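
The unqueue-side stores that this barrier pairs with can be sketched the same way. Again a hypothetical, simplified model: `spin_node` and `osq_unqueue_splice` are illustrative names, and the kernel's osq_wait_next() additionally handles tail updates and retries that are omitted here:

```c
#include <assert.h>
#include <stdatomic.h>
#include <stddef.h>

/* Hypothetical, simplified stand-in for struct optimistic_spin_node. */
struct spin_node {
	struct spin_node *_Atomic next;
	struct spin_node *prev;
};

/*
 * Models the unqueue-B/C steps of osq_lock()'s unqueue path: the
 * departing node's ->next is fetched and cleared with a full-barrier
 * exchange (the kernel's xchg()), then, if a successor exists, the two
 * neighbours are spliced together. With the smp_wmb() in the enqueue
 * path, a successor observed here already carries a valid ->prev, so
 * the next->prev store below cannot be overtaken by a late enqueue
 * store and leave a stale link behind.
 */
static struct spin_node *osq_unqueue_splice(struct spin_node *node,
					    struct spin_node *prev)
{
	struct spin_node *next =
		atomic_exchange(&node->next, NULL); /* xchg(&node->next, NULL) */

	if (next) {
		next->prev = prev;                  /* WRITE_ONCE(next->prev, prev) */
		atomic_store_explicit(&prev->next, next,
				      memory_order_relaxed);
		                                    /* WRITE_ONCE(prev->next, next) */
	}
	return next;
}
```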

Subject: [tip:locking/core] locking/osq_lock: Fix osq_lock queue corruption

Commit-ID: 50972fe78f24f1cd0b9d7bbf1f87d2be9e4f412e
Gitweb: http://git.kernel.org/tip/50972fe78f24f1cd0b9d7bbf1f87d2be9e4f412e
Author: Prateek Sood <[email protected]>
AuthorDate: Fri, 14 Jul 2017 19:17:56 +0530
Committer: Ingo Molnar <[email protected]>
CommitDate: Thu, 10 Aug 2017 12:28:54 +0200

locking/osq_lock: Fix osq_lock queue corruption

Fix the ordering of link creation between node->prev and prev->next in
osq_lock(). Consider a case in which the optimistic spin queue is
CPU6->CPU2, with CPU6 holding the lock.

tail
v
,-. <- ,-.
|6| |2|
`-' -> `-'

At this point if CPU0 comes in to acquire osq_lock, it will update the
tail count.

CPU2 CPU0
----------------------------------

tail
v
,-. <- ,-. ,-.
|6| |2| |0|
`-' -> `-' `-'

After the tail count update, if CPU2 starts to unqueue itself from the
optimistic spin queue, it will find the updated tail count with CPU0 and
set CPU2's node->next to NULL in osq_wait_next().

unqueue-A

tail
v
,-. <- ,-. ,-.
|6| |2| |0|
`-' `-' `-'

unqueue-B

->tail != curr && !node->next

If the following stores are reordered, then prev->next (prev being
CPU2) would be updated to point to the CPU0 node:

tail
v
,-. <- ,-. ,-.
|6| |2| |0|
`-' `-' -> `-'

osq_wait_next()
node->next <- 0
xchg(node->next, NULL)

tail
v
,-. <- ,-. ,-.
|6| |2| |0|
`-' `-' `-'

unqueue-C

At this point, if the next instruction
WRITE_ONCE(next->prev, prev);
in the CPU2 path is committed before the update of CPU0's node->prev =
prev, then CPU0's node->prev will point to the CPU6 node.

tail
v----------. v
,-. <- ,-. ,-.
|6| |2| |0|
`-' `-' `-'
`----------^

CPU0's node->prev = prev is then committed, changing CPU0's prev back
to the CPU2 node. CPU2's node->next is currently NULL,

tail
v
,-. <- ,-. <- ,-.
|6| |2| |0|
`-' `-' `-'
`----------^

so if CPU0 gets into the unqueue path of osq_lock() it will keep spinning
in an infinite loop, as the condition prev->next == node will never be true.

Signed-off-by: Prateek Sood <[email protected]>
[ Added pictures, rewrote comments. ]
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: [email protected]
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Ingo Molnar <[email protected]>
---
kernel/locking/osq_lock.c | 13 +++++++++++++
1 file changed, 13 insertions(+)

diff --git a/kernel/locking/osq_lock.c b/kernel/locking/osq_lock.c
index a316794..a74ee6a 100644
--- a/kernel/locking/osq_lock.c
+++ b/kernel/locking/osq_lock.c
@@ -109,6 +109,19 @@ bool osq_lock(struct optimistic_spin_queue *lock)

prev = decode_cpu(old);
node->prev = prev;
+
+ /*
+ * osq_lock() unqueue
+ *
+ * node->prev = prev osq_wait_next()
+ * WMB MB
+ * prev->next = node next->prev = prev // unqueue-C
+ *
+ * Here 'node->prev' and 'next->prev' are the same variable and we need
+ * to ensure these stores happen in-order to avoid corrupting the list.
+ */
+ smp_wmb();
+
WRITE_ONCE(prev->next, node);

/*

2017-08-31 22:06:46

by Davidlohr Bueso

Subject: Re: [tip:locking/core] locking/osq_lock: Fix osq_lock queue corruption

On Thu, 10 Aug 2017, tip-bot for Prateek Sood wrote:

>Signed-off-by: Prateek Sood <[email protected]>
>[ Added pictures, rewrote comments. ]
>Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
>Cc: Linus Torvalds <[email protected]>
>Cc: Peter Zijlstra <[email protected]>
>Cc: Thomas Gleixner <[email protected]>
>Cc: [email protected]
>Link: http://lkml.kernel.org/r/[email protected]
>Signed-off-by: Ingo Molnar <[email protected]>

Any reason -stable wasn't Cc'ed here? This goes back to 3.15:

fb0527bd5ea (locking/mutexes: Introduce cancelable MCS lock for adaptive spinning)

Thanks,
Davidlohr

2017-09-01 11:17:40

by Peter Zijlstra

Subject: Re: [tip:locking/core] locking/osq_lock: Fix osq_lock queue corruption

On Thu, Aug 31, 2017 at 12:40:19PM -0700, Davidlohr Bueso wrote:
> On Thu, 10 Aug 2017, tip-bot for Prateek Sood wrote:
>
> > Signed-off-by: Prateek Sood <[email protected]>
> > [ Added pictures, rewrote comments. ]
> > Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
> > Cc: Linus Torvalds <[email protected]>
> > Cc: Peter Zijlstra <[email protected]>
> > Cc: Thomas Gleixner <[email protected]>
> > Cc: [email protected]
> > Link: http://lkml.kernel.org/r/[email protected]
> > Signed-off-by: Ingo Molnar <[email protected]>
>
> Any reason -stable wasn't Cc'ed here? This goes back to 3.15:
>
> fb0527bd5ea (locking/mutexes: Introduce cancelable MCS lock for adaptive spinning)

No real reason, and yes, I should have done a Fixes tag I suppose :/

2017-09-01 14:22:32

by Davidlohr Bueso

Subject: Re: [tip:locking/core] locking/osq_lock: Fix osq_lock queue corruption

On Fri, 01 Sep 2017, Peter Zijlstra wrote:

>On Thu, Aug 31, 2017 at 12:40:19PM -0700, Davidlohr Bueso wrote:
>> On Thu, 10 Aug 2017, tip-bot for Prateek Sood wrote:
>>
>> > Signed-off-by: Prateek Sood <[email protected]>
>> > [ Added pictures, rewrote comments. ]
>> > Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
>> > Cc: Linus Torvalds <[email protected]>
>> > Cc: Peter Zijlstra <[email protected]>
>> > Cc: Thomas Gleixner <[email protected]>
>> > Cc: [email protected]
>> > Link: http://lkml.kernel.org/r/[email protected]
>> > Signed-off-by: Ingo Molnar <[email protected]>
>>
>> Any reason -stable wasn't Cc'ed here? This goes back to 3.15:
>>
>> fb0527bd5ea (locking/mutexes: Introduce cancelable MCS lock for adaptive spinning)
>
>No real reason, and yes, I should have done a Fixes tag I suppose :/

Ok, Cc'ed now.

Thanks,
Davidlohr