The mainline implementation of read_seqbegin() orders prior loads w.r.t.
the read-side critical section. Fixup the RT writer-boosting
implementation to provide the same guarantee.
Also, while we're here, update the usage of ACCESS_ONCE() to use
READ_ONCE().
Fixes: e69f15cf77c23 ("seqlock: Prevent rt starvation")
Cc: [email protected]
Signed-off-by: Julia Cartwright <[email protected]>
---
Found during code inspection of the RT seqlock implementation.
Julia
include/linux/seqlock.h | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/include/linux/seqlock.h b/include/linux/seqlock.h
index a59751276b94..597ce5a9e013 100644
--- a/include/linux/seqlock.h
+++ b/include/linux/seqlock.h
@@ -453,7 +453,7 @@ static inline unsigned read_seqbegin(seqlock_t *sl)
 	unsigned ret;
 
 repeat:
-	ret = ACCESS_ONCE(sl->seqcount.sequence);
+	ret = READ_ONCE(sl->seqcount.sequence);
 	if (unlikely(ret & 1)) {
 		/*
 		 * Take the lock and let the writer proceed (i.e. evtl
@@ -462,6 +462,7 @@ static inline unsigned read_seqbegin(seqlock_t *sl)
 		spin_unlock_wait(&sl->lock);
 		goto repeat;
 	}
+	smp_rmb();
 	return ret;
 }
 #endif
--
2.16.1
On 2018-04-26 15:02:03 [-0500], Julia Cartwright wrote:
> The mainline implementation of read_seqbegin() orders prior loads w.r.t.
> the read-side critical section. Fixup the RT writer-boosting
> implementation to provide the same guarantee.
>
> Also, while we're here, update the usage of ACCESS_ONCE() to use
> READ_ONCE().
I'm taking this without the READ_ONCE hunk because ACCESS_ONCE is gone
since v4.15 and I had to move on in v4.16.
> Fixes: e69f15cf77c23 ("seqlock: Prevent rt starvation")
> Cc: [email protected]
> Signed-off-by: Julia Cartwright <[email protected]>
Sebastian