From: Jan Stancek <[email protected]>
[ Upstream commit e1b98fa316648420d0434d9ff5b92ad6609ba6c3 ]
LTP mtest06 has been observed to occasionally hit "still mapped when
deleted" and the following BUG_ON on arm64.
The extra mapcount originated from the pagefault handler, which handled
a pagefault for a vma that had already been detached. The vma is detached
under the mmap_sem write lock by detach_vmas_to_be_unmapped(), which
also invalidates the vmacache.
When the pagefault handler (under the mmap_sem read lock) calls
find_vma(), vmacache_valid() wrongly reports the vmacache as valid.
When rwsem down_read() returns via the 'queue empty' path (as of v5.2),
it does so without an ACQUIRE on sem->count:
down_read()
  __down_read()
    rwsem_down_read_failed()
      __rwsem_down_read_failed_common()
        raw_spin_lock_irq(&sem->wait_lock);
        if (list_empty(&sem->wait_list)) {
          if (atomic_long_read(&sem->count) >= 0) {
            raw_spin_unlock_irq(&sem->wait_lock);
            return sem;
The problem can be reproduced by running LTP mtest06 in a loop and
building the kernel (-j $NCPUS) in parallel. It reproduces since
v4.20 on an arm64 HPE Apollo 70 (224 CPUs, 256GB RAM, 2 nodes) and
triggers reliably in about an hour.
The patched kernel ran fine for 10+ hours.
Signed-off-by: Jan Stancek <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Reviewed-by: Will Deacon <[email protected]>
Acked-by: Waiman Long <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: [email protected]
Fixes: 4b486b535c33 ("locking/rwsem: Exit read lock slowpath if queue empty & no writer")
Link: https://lkml.kernel.org/r/50b8914e20d1d62bb2dee42d342836c2c16ebee7.1563438048.git.jstancek@redhat.com
Signed-off-by: Ingo Molnar <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
This is a backport for the v5.2 stable tree. There were multiple reports
of this issue being hit.
Given that there were a few changes to the code around this, I'd
appreciate an ack before pulling it in.
kernel/locking/rwsem-xadd.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/kernel/locking/rwsem-xadd.c b/kernel/locking/rwsem-xadd.c
index 0b1f779572402..397dedc58432d 100644
--- a/kernel/locking/rwsem-xadd.c
+++ b/kernel/locking/rwsem-xadd.c
@@ -454,6 +454,8 @@ __rwsem_down_read_failed_common(struct rw_semaphore *sem, int state)
* been set in the count.
*/
if (atomic_long_read(&sem->count) >= 0) {
+ /* Provide lock ACQUIRE */
+ smp_acquire__after_ctrl_dep();
raw_spin_unlock_irq(&sem->wait_lock);
rwsem_set_reader_owned(sem);
lockevent_inc(rwsem_rlock_fast);
--
2.20.1
From: Peter Zijlstra <[email protected]>
[ Upstream commit 99143f82a255e7f054bead8443462fae76dd829e ]
While reviewing another read_slowpath patch, both Will and I noticed
another missing ACQUIRE, namely:
X = 0;

CPU0                    CPU1

rwsem_down_read()
  for (;;) {
    set_current_state(TASK_UNINTERRUPTIBLE);

                        X = 1;
                        rwsem_up_write();
                          rwsem_mark_wake()
                            atomic_long_add(adjustment, &sem->count);
                            smp_store_release(&waiter->task, NULL);

    if (!waiter.task)
      break;

    ...
  }

r = X;
Allows 'r == 0'.
Reported-by: Peter Zijlstra (Intel) <[email protected]>
Reported-by: Will Deacon <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Acked-by: Will Deacon <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Fixes: 1da177e4c3f4 ("Linux-2.6.12-rc2")
Signed-off-by: Ingo Molnar <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
This is a backport for the v5.2 stable tree. There were multiple reports
of this issue being hit.
Given that there were a few changes to the code around this, I'd
appreciate an ack before pulling it in.
kernel/locking/rwsem-xadd.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/kernel/locking/rwsem-xadd.c b/kernel/locking/rwsem-xadd.c
index 397dedc58432d..385ebcfc31a6d 100644
--- a/kernel/locking/rwsem-xadd.c
+++ b/kernel/locking/rwsem-xadd.c
@@ -485,8 +485,10 @@ __rwsem_down_read_failed_common(struct rw_semaphore *sem, int state)
/* wait to be given the lock */
while (true) {
set_current_state(state);
- if (!waiter.task)
+ if (!smp_load_acquire(&waiter.task)) {
+ /* Orders against rwsem_mark_wake()'s smp_store_release() */
break;
+ }
if (signal_pending_state(state, current)) {
raw_spin_lock_irq(&sem->wait_lock);
if (waiter.task)
--
2.20.1
----- Original Message -----
> From: Jan Stancek <[email protected]>
>
> [ Upstream commit e1b98fa316648420d0434d9ff5b92ad6609ba6c3 ]
>
> [...]
>
> This is a backport for the v5.2 stable tree. There were multiple reports
> of this issue being hit.
>
> Given that there were a few changes to the code around this, I'd
> appreciate an ack before pulling it in.
ACK, both look good to me.
I also re-ran reproducer with this series applied on top of 5.2.10, it PASS-ed.
Thanks,
Jan
On Tue, Aug 27, 2019 at 10:11:39AM -0400, Jan Stancek wrote:
>ACK, both look good to me.
>I also re-ran reproducer with this series applied on top of 5.2.10, it PASS-ed.
I've queued both for 5.2, thanks Jan.
--
Thanks,
Sasha