tick_do_timer_cpu is intentionally racy, as protecting it would
require a global lock, which is a non-starter.
The variable is used to prevent a thundering herd of CPUs trying to
update jiffies and timekeeping on the tick by assigning that duty to a
single CPU. On NOHZ=n systems this stays on the boot CPU forever and is
never updated except when the boot CPU is unplugged. On NOHZ-enabled
systems this is quite different because a CPU can give up the
assignment to go into a long idle sleep. The duty is then picked up by
another online CPU on the next tick or interrupt from idle. This
handover is carefully designed so that neither competing writes nor
temporarily wrong ownership assumptions can cause havoc.
The mechanism is unfortunately barely documented and not annotated,
which triggers KCSAN reports all over the place.
Annotate the racy reads with data_race() and document the various
places which are affected.
Reported-by: [email protected]
Reported-by: [email protected]
Reported-by: [email protected]
Reported-by: Naresh Kamboju <[email protected]>
Signed-off-by: Thomas Gleixner <[email protected]>
---
kernel/time/tick-common.c | 45 ++++++++++++++++++++++-
kernel/time/tick-sched.c | 89 ++++++++++++++++++++++++++++++++++++----------
2 files changed, 114 insertions(+), 20 deletions(-)
--- a/kernel/time/tick-common.c
+++ b/kernel/time/tick-common.c
@@ -45,6 +45,27 @@ ktime_t tick_period;
* TICK_DO_TIMER_NONE, i.e. a non existing CPU. So the next cpu which looks
* at it will take over and keep the time keeping alive. The handover
* procedure also covers cpu hotplug.
+ *
+ * The variable is neither atomic nor accessed with locks. The handover and
+ * checks are intentionally racy but safe. The basic rules for the update are:
+ *
+ * All reads and writes happen with interrupts disabled. The execution
+ * context is either task or hard interrupt.
+ *
+ * If tick_do_timer_cpu contains a CPU number then the only possible
+ * transition is that the holding CPU writes it to TICK_DO_TIMER_NONE. This
+ * only happens on NOHZ systems. If NOHZ is off then the duty stays with
+ * the CPU which picked it up, the boot CPU. The only exception is CPU hot
+ * unplug where the outgoing CPU transfers it, but that's safe because all
+ * other CPUs are stuck in stomp_machine().
+ *
+ * If tick_do_timer_cpu contains TICK_DO_TIMER_NONE then any CPU observing
+ * this can overwrite it with its own CPU number and take on the tick
+ * duty. As this is lockless several CPUs can observe TICK_DO_TIMER_NONE
+ * concurrently and write their own CPU number to it, but at the end only
+ * one will win. Even if one of the writers assumes temporarily that it
+ * owns the duty there is no harm because the tick update is serialized
+ * with jiffies_lock. Other side effects are shorter sleeps for one round.
*/
int tick_do_timer_cpu __read_mostly = TICK_DO_TIMER_BOOT;
#ifdef CONFIG_NO_HZ_FULL
@@ -83,6 +104,13 @@ int tick_is_oneshot_available(void)
*/
static void tick_periodic(int cpu)
{
+ /*
+ * Raceless access in periodic tick mode. The variable can only
+ * change when the CPU holding tick_do_timer_cpu goes offline which
+ * requires that all other CPUs are inside stomp_machine() with
+ * interrupts disabled and therefore cannot race with this read, which
+ * is always executed from the timer interrupt.
+ */
if (tick_do_timer_cpu == cpu) {
raw_spin_lock(&jiffies_lock);
write_seqcount_begin(&jiffies_seq);
@@ -185,6 +213,11 @@ static void giveup_do_timer(void *info)
WARN_ON(tick_do_timer_cpu != smp_processor_id());
+ /*
+ * Exception to the rule that only the holding CPU can
+ * write TICK_DO_TIMER_NONE. The new CPU is waiting for
+ * this function call to complete and asked for this.
+ */
tick_do_timer_cpu = cpu;
}
@@ -214,9 +247,14 @@ static void tick_setup_device(struct tic
if (!td->evtdev) {
/*
* If no cpu took the do_timer update, assign it to
- * this cpu:
+ * this cpu.
+ *
+ * Intentional data race. The boot CPU takes it over which
+ * is obviously not racy. CPUs coming up later cannot
+ * observe TICK_DO_TIMER_BOOT even if there is a concurrent
+ * handover.
*/
- if (tick_do_timer_cpu == TICK_DO_TIMER_BOOT) {
+ if (data_race(tick_do_timer_cpu) == TICK_DO_TIMER_BOOT) {
tick_do_timer_cpu = cpu;
tick_next_period = ktime_get();
@@ -409,6 +447,9 @@ EXPORT_SYMBOL_GPL(tick_broadcast_oneshot
*
* Called with interrupts disabled. No locking required. If
* tick_do_timer_cpu is owned by this cpu, nothing can change it.
+ *
+ * Not a data race because all other CPUs are hanging out in
+ * stomp_machine() and cannot change that assignment.
*/
void tick_handover_do_timer(void)
{
--- a/kernel/time/tick-sched.c
+++ b/kernel/time/tick-sched.c
@@ -125,25 +125,40 @@ static void tick_sched_do_timer(struct t
#ifdef CONFIG_NO_HZ_COMMON
/*
- * Check if the do_timer duty was dropped. We don't care about
- * concurrency: This happens only when the CPU in charge went
- * into a long sleep. If two CPUs happen to assign themselves to
- * this duty, then the jiffies update is still serialized by
- * jiffies_lock.
- *
- * If nohz_full is enabled, this should not happen because the
- * tick_do_timer_cpu never relinquishes.
+ * Check if the do_timer duty was dropped. This is an intentional
+ * data race and we don't care about concurrency: This happens only
+ * when the CPU in charge went into a long sleep. If two CPUs
+ * happen to assign themselves to this duty it does not matter
+ * which one ends up holding it. Both will try to update jiffies
+ * but the jiffies update is still serialized by jiffies_lock.
*/
- if (unlikely(tick_do_timer_cpu == TICK_DO_TIMER_NONE)) {
+ if (unlikely(data_race(tick_do_timer_cpu) == TICK_DO_TIMER_NONE)) {
#ifdef CONFIG_NO_HZ_FULL
+ /*
+ * If nohz_full is enabled, this should not happen because
+ * the tick_do_timer_cpu never relinquishes. Warn and try
+ * to keep it alive.
+ */
WARN_ON(tick_nohz_full_running);
#endif
tick_do_timer_cpu = cpu;
}
#endif
- /* Check, if the jiffies need an update */
- if (tick_do_timer_cpu == cpu)
+ /*
+ * Check if jiffies need an update. Intentional data race on NOHZ:
+ *
+ * - There is some other CPU holding it
+ * - This CPU took over above but raced with another CPU. So both
+ * invoke tick_do_update_jiffies64() and contend on jiffies_lock
+ * eventually.
+ * - The other CPU which holds it is about to give it up which does
+ * not cause harm because the current or some other CPU will
+ * observe that state either on the next interrupt or when trying
+ * to go back to idle and act upon it.
+ */
+ if (data_race(tick_do_timer_cpu) == cpu)
tick_do_update_jiffies64(now);
if (ts->inidle)
@@ -433,6 +448,9 @@ static int tick_nohz_cpu_down(unsigned i
* The tick_do_timer_cpu CPU handles housekeeping duty (unbound
* timers, workqueues, timekeeping, ...) on behalf of full dynticks
* CPUs. It must remain online when nohz full is enabled.
+ *
+ * There is no data race here. If nohz_full is enabled, this
+ * cannot change because the tick_do_timer_cpu never relinquishes.
*/
if (tick_nohz_full_running && tick_do_timer_cpu == cpu)
return -EBUSY;
@@ -687,6 +705,7 @@ static ktime_t tick_nohz_next_event(stru
u64 basemono, next_tick, next_tmr, next_rcu, delta, expires;
unsigned long basejiff;
unsigned int seq;
+ int duty;
/* Read jiffies and the time when jiffies were updated last */
do {
@@ -751,8 +770,15 @@ static ktime_t tick_nohz_next_event(stru
* Otherwise we can sleep as long as we want.
*/
delta = timekeeping_max_deferment();
- if (cpu != tick_do_timer_cpu &&
- (tick_do_timer_cpu != TICK_DO_TIMER_NONE || !ts->do_timer_last))
+ /*
+ * Intentional data race: If the current CPU holds the do_timer()
+ * duty then nothing can change it. It's always the holder which
+ * gives it up. If it's not held by any CPU then this CPU might be
+ * the one which held it last. If that is true and another CPU
+ * takes over in parallel then the only "damage" is a short sleep.
+ */
+ duty = data_race(tick_do_timer_cpu);
+ if (cpu != duty && (duty != TICK_DO_TIMER_NONE || !ts->do_timer_last))
delta = KTIME_MAX;
/* Calculate the next expiry time */
@@ -773,6 +799,7 @@ static void tick_nohz_stop_tick(struct t
u64 basemono = ts->timer_expires_base;
u64 expires = ts->timer_expires;
ktime_t tick = expires;
+ int duty;
/* Make sure we won't be trying to stop it twice in a row. */
ts->timer_expires_base = 0;
@@ -784,11 +811,19 @@ static void tick_nohz_stop_tick(struct t
* don't drop this here the jiffies might be stale and
* do_timer() never invoked. Keep track of the fact that it
* was the one which had the do_timer() duty last.
+ *
+ * Another intentional data race: If the current CPU holds the
+ * do_timer() duty, then no other CPU can change it. If no CPU
+ * holds it and another CPU takes it over concurrently with this
+ * read, then the only damage is that ts->do_timer_last might not be
+ * cleared right now, which merely prevents the CPU from going into
+ * sleep-forever mode for another round.
*/
- if (cpu == tick_do_timer_cpu) {
+ duty = data_race(tick_do_timer_cpu);
+ if (cpu == duty) {
tick_do_timer_cpu = TICK_DO_TIMER_NONE;
ts->do_timer_last = 1;
- } else if (tick_do_timer_cpu != TICK_DO_TIMER_NONE) {
+ } else if (duty != TICK_DO_TIMER_NONE) {
ts->do_timer_last = 0;
}
@@ -906,7 +941,15 @@ static bool can_stop_idle_tick(int cpu,
* invoked.
*/
if (unlikely(!cpu_online(cpu))) {
- if (cpu == tick_do_timer_cpu)
+ /*
+ * If the current CPU holds it then the access is not racy
+ * as no other CPU can change it. If it does not hold it
+ * then it's irrelevant whether there is a concurrent
+ * update which either sets it to TICK_DO_TIMER_NONE or
+ * takes over from TICK_DO_TIMER_NONE, but no other CPU can
+ * write the current CPU number into it.
+ */
+ if (cpu == data_race(tick_do_timer_cpu))
tick_do_timer_cpu = TICK_DO_TIMER_NONE;
/*
* Make sure the CPU doesn't get fooled by obsolete tick
@@ -935,15 +978,25 @@ static bool can_stop_idle_tick(int cpu,
}
if (tick_nohz_full_enabled()) {
+ int duty;
+
+ /*
+ * Data race only possible during boot while a nohz_full
+ * CPU holds the do_timer() duty. So this read might race
+ * with an upcoming housekeeping CPU taking over. If the
+ * current CPU holds it then this cannot race.
+ */
+ duty = data_race(tick_do_timer_cpu);
+
/*
* Keep the tick alive to guarantee timekeeping progression
* if there are full dynticks CPUs around
*/
- if (tick_do_timer_cpu == cpu)
+ if (duty == cpu)
return false;
/* Should not happen for nohz-full */
- if (WARN_ON_ONCE(tick_do_timer_cpu == TICK_DO_TIMER_NONE))
+ if (WARN_ON_ONCE(duty == TICK_DO_TIMER_NONE))
return false;
}
On Sun, Dec 06, 2020 at 10:12:56PM +0100, Thomas Gleixner wrote:
> + if (data_race(tick_do_timer_cpu) == TICK_DO_TIMER_BOOT) {
I prefer the form:
if (data_race(tick_do_timer_cpu == TICK_DO_TIMER_BOOT)) {
But there doesn't yet seem to be sufficient data_race() usage in the
kernel to see which of the forms is preferred. Do we want to bike-shed
this now and document the outcome somewhere?
On Mon, Dec 07 2020 at 13:09, Peter Zijlstra wrote:
> On Sun, Dec 06, 2020 at 10:12:56PM +0100, Thomas Gleixner wrote:
>> + if (data_race(tick_do_timer_cpu) == TICK_DO_TIMER_BOOT) {
>
> I prefer the form:
>
> if (data_race(tick_do_timer_cpu == TICK_DO_TIMER_BOOT)) {
>
> But there doesn't yet seem to be sufficient data_race() usage in the
> kernel to see which of the forms is preferred. Do we want to bike-shed
> this now and document the outcome somewhere?
Yes please before we get a gazillion of patches changing half of them
half a year from now.
Thanks,
tglx
On Mon, 7 Dec 2020 at 18:46, Thomas Gleixner <[email protected]> wrote:
> On Mon, Dec 07 2020 at 13:09, Peter Zijlstra wrote:
> > On Sun, Dec 06, 2020 at 10:12:56PM +0100, Thomas Gleixner wrote:
> >> + if (data_race(tick_do_timer_cpu) == TICK_DO_TIMER_BOOT) {
> >
> > I prefer the form:
> >
> > if (data_race(tick_do_timer_cpu == TICK_DO_TIMER_BOOT)) {
> >
> > But there doesn't yet seem to be sufficient data_race() usage in the
> > kernel to see which of the forms is preferred. Do we want to bike-shed
> > this now and document the outcome somewhere?
>
> Yes please before we get a gazillion of patches changing half of them
> half a year from now.
That rule should be as simple as possible. The simplest would be:
"Only enclose the smallest required expression in data_race(); keep
the number of required data_race() expressions to a minimum." (=> want
least amount of code inside data_race() with the least number of
data_race()s).
In the case here, that'd be the "if (data_race(tick_do_timer_cpu) ==
..." variant.
Otherwise there's the possibility that we'll end up with accesses
inside data_race() that we hadn't planned for. For example, somebody
refactors some code replacing constants with variables.
I currently don't know what the rule for Peter's preferred variant
would be, without running the risk of some accidentally data_race()'d
accesses.
Thoughts?
Thanks,
-- Marco
On Mon, Dec 07 2020 at 19:19, Marco Elver wrote:
> On Mon, 7 Dec 2020 at 18:46, Thomas Gleixner <[email protected]> wrote:
>> On Mon, Dec 07 2020 at 13:09, Peter Zijlstra wrote:
>> > I prefer the form:
>> >
>> > if (data_race(tick_do_timer_cpu == TICK_DO_TIMER_BOOT)) {
>> >
>> > But there doesn't yet seem to be sufficient data_race() usage in the
>> > kernel to see which of the forms is preferred. Do we want to bike-shed
>> > this now and document the outcome somewhere?
>>
>> Yes please before we get a gazillion of patches changing half of them
>> half a year from now.
>
> That rule should be as simple as possible. The simplest would be:
> "Only enclose the smallest required expression in data_race(); keep
> the number of required data_race() expressions to a minimum." (=> want
> least amount of code inside data_race() with the least number of
> data_race()s).
>
> In the case here, that'd be the "if (data_race(tick_do_timer_cpu) ==
> ..." variant.
>
> Otherwise there's the possibility that we'll end up with accesses
> inside data_race() that we hadn't planned for. For example, somebody
> refactors some code replacing constants with variables.
>
> I currently don't know what the rule for Peter's preferred variant
> would be, without running the risk of some accidentally data_race()'d
> accesses.
I agree. Let's keep it simple and have data_race() cover only the
actual access to the racy variable or struct member.
The worst case we could end up with would be
if (data_race(A) == data_race(B))
which would still be clearly isolated. The racy part is not the
comparison, it's the accesses which can cause random results for the
comparison.
Thanks,
tglx
On Mon, Dec 07, 2020 at 07:19:51PM +0100, Marco Elver wrote:
> On Mon, 7 Dec 2020 at 18:46, Thomas Gleixner <[email protected]> wrote:
> > On Mon, Dec 07 2020 at 13:09, Peter Zijlstra wrote:
> > > On Sun, Dec 06, 2020 at 10:12:56PM +0100, Thomas Gleixner wrote:
> > >> + if (data_race(tick_do_timer_cpu) == TICK_DO_TIMER_BOOT) {
> > >
> > > I prefer the form:
> > >
> > > if (data_race(tick_do_timer_cpu == TICK_DO_TIMER_BOOT)) {
> > >
> > > But there doesn't yet seem to be sufficient data_race() usage in the
> > > kernel to see which of the forms is preferred. Do we want to bike-shed
> > > this now and document the outcome somewhere?
> >
> > Yes please before we get a gazillion of patches changing half of them
> > half a year from now.
>
> That rule should be as simple as possible. The simplest would be:
> "Only enclose the smallest required expression in data_race(); keep
> the number of required data_race() expressions to a minimum." (=> want
> least amount of code inside data_race() with the least number of
> data_race()s).
>
> In the case here, that'd be the "if (data_race(tick_do_timer_cpu) ==
> ..." variant.
>
> Otherwise there's the possibility that we'll end up with accesses
> inside data_race() that we hadn't planned for. For example, somebody
> refactors some code replacing constants with variables.
>
> I currently don't know what the rule for Peter's preferred variant
> would be, without running the risk of some accidentally data_race()'d
> accesses.
>
> Thoughts?
I am also concerned about inadvertently covering code with data_race().
Also, in this particular case, why data_race() rather than READ_ONCE()?
Do we really expect the compiler to be able to optimize this case
significantly without READ_ONCE()?
Thanx, Paul
On Mon, Dec 07 2020 at 11:44, Paul E. McKenney wrote:
> On Mon, Dec 07, 2020 at 07:19:51PM +0100, Marco Elver wrote:
>> On Mon, 7 Dec 2020 at 18:46, Thomas Gleixner <[email protected]> wrote:
>> I currently don't know what the rule for Peter's preferred variant
>> would be, without running the risk of some accidentally data_race()'d
>> accesses.
>>
>> Thoughts?
>
> I am also concerned about inadvertently covering code with data_race().
>
> Also, in this particular case, why data_race() rather than READ_ONCE()?
> Do we really expect the compiler to be able to optimize this case
> significantly without READ_ONCE()?
That was your suggestion a week or so ago :)
Thanks,
tglx
On Mon, Dec 07, 2020 at 10:46:33PM +0100, Thomas Gleixner wrote:
> On Mon, Dec 07 2020 at 11:44, Paul E. McKenney wrote:
> > On Mon, Dec 07, 2020 at 07:19:51PM +0100, Marco Elver wrote:
> >> On Mon, 7 Dec 2020 at 18:46, Thomas Gleixner <[email protected]> wrote:
> >> I currently don't know what the rule for Peter's preferred variant
> >> would be, without running the risk of some accidentally data_race()'d
> >> accesses.
> >>
> >> Thoughts?
> >
> > I am also concerned about inadvertently covering code with data_race().
> >
> > Also, in this particular case, why data_race() rather than READ_ONCE()?
> > Do we really expect the compiler to be able to optimize this case
> > significantly without READ_ONCE()?
>
> That was your suggestion a week or so ago :)
You expected my suggestion to change? ;-)
Thanx, Paul
On Mon, Dec 07 2020 at 14:38, Paul E. McKenney wrote:
> On Mon, Dec 07, 2020 at 10:46:33PM +0100, Thomas Gleixner wrote:
>> On Mon, Dec 07 2020 at 11:44, Paul E. McKenney wrote:
>> > On Mon, Dec 07, 2020 at 07:19:51PM +0100, Marco Elver wrote:
>> >> On Mon, 7 Dec 2020 at 18:46, Thomas Gleixner <[email protected]> wrote:
>> >> I currently don't know what the rule for Peter's preferred variant
>> >> would be, without running the risk of some accidentally data_race()'d
>> >> accesses.
>> >>
>> >> Thoughts?
>> >
>> > I am also concerned about inadvertently covering code with data_race().
>> >
>> > Also, in this particular case, why data_race() rather than READ_ONCE()?
>> > Do we really expect the compiler to be able to optimize this case
>> > significantly without READ_ONCE()?
>>
>> That was your suggestion a week or so ago :)
>
> You expected my suggestion to change? ;-)
Your suggestion was data_race() IIRC but I might have lost track in that
conversation.
On Mon, Dec 07, 2020 at 11:46:48PM +0100, Thomas Gleixner wrote:
> On Mon, Dec 07 2020 at 14:38, Paul E. McKenney wrote:
>
> > On Mon, Dec 07, 2020 at 10:46:33PM +0100, Thomas Gleixner wrote:
> >> On Mon, Dec 07 2020 at 11:44, Paul E. McKenney wrote:
> >> > On Mon, Dec 07, 2020 at 07:19:51PM +0100, Marco Elver wrote:
> >> >> On Mon, 7 Dec 2020 at 18:46, Thomas Gleixner <[email protected]> wrote:
> >> >> I currently don't know what the rule for Peter's preferred variant
> >> >> would be, without running the risk of some accidentally data_race()'d
> >> >> accesses.
> >> >>
> >> >> Thoughts?
> >> >
> >> > I am also concerned about inadvertently covering code with data_race().
> >> >
> >> > Also, in this particular case, why data_race() rather than READ_ONCE()?
> >> > Do we really expect the compiler to be able to optimize this case
> >> > significantly without READ_ONCE()?
> >>
> >> That was your suggestion a week or so ago :)
> >
> > You expected my suggestion to change? ;-)
>
> Your suggestion was data_race() IIRC but I might have lost track in that
> conversation.
OK, I am inconsistent after all. I would have suggested READ_ONCE() given
no difference between them, so it is probably best to assume that there is
(or at least was) a good reason for data_race() instead of READ_ONCE().
Couldn't tell you what it might be, though. :-/
Thanx, Paul
On Mon, Dec 07, 2020 at 07:19:51PM +0100, Marco Elver wrote:
> On Mon, 7 Dec 2020 at 18:46, Thomas Gleixner <[email protected]> wrote:
> > On Mon, Dec 07 2020 at 13:09, Peter Zijlstra wrote:
> > > On Sun, Dec 06, 2020 at 10:12:56PM +0100, Thomas Gleixner wrote:
> > >> + if (data_race(tick_do_timer_cpu) == TICK_DO_TIMER_BOOT) {
> > >
> > > I prefer the form:
> > >
> > > if (data_race(tick_do_timer_cpu == TICK_DO_TIMER_BOOT)) {
> > >
> > > But there doesn't yet seem to be sufficient data_race() usage in the
> > > kernel to see which of the forms is preferred. Do we want to bike-shed
> > > this now and document the outcome somewhere?
> >
> > Yes please before we get a gazillion of patches changing half of them
> > half a year from now.
>
> That rule should be as simple as possible. The simplest would be:
> "Only enclose the smallest required expression in data_race(); keep
> the number of required data_race() expressions to a minimum." (=> want
> least amount of code inside data_race() with the least number of
> data_race()s).
>
> In the case here, that'd be the "if (data_race(tick_do_timer_cpu) ==
> ..." variant.
So I was worried that data_race(var) == const would not allow the
compiler to emit
cmpq $CONST, ();
but would instead force a separate load. But I checked and it does
generate the right code.
On Mon, Dec 07, 2020 at 11:44:06AM -0800, Paul E. McKenney wrote:
> Also, in this particular case, why data_race() rather than READ_ONCE()?
> Do we really expect the compiler to be able to optimize this case
> significantly without READ_ONCE()?
It's about intent and how the code reads. READ_ONCE() is something
completely different from data_race(). data_race() is correct here.
On Tue, Dec 08, 2020 at 09:11:29AM +0100, Peter Zijlstra wrote:
> On Mon, Dec 07, 2020 at 11:44:06AM -0800, Paul E. McKenney wrote:
>
> > Also, in this particular case, why data_race() rather than READ_ONCE()?
> > Do we really expect the compiler to be able to optimize this case
> > significantly without READ_ONCE()?
>
> It's about intent and how the code reads. READ_ONCE() is something
> completely different from data_race(). data_race() is correct here.
Why?
Thanx, Paul
On Wed, Dec 16, 2020 at 01:19:31PM -0800, Paul E. McKenney wrote:
> Given that there is no optimization potential, then the main reason to use
> data_race() instead of *_ONCE() is to prevent KCSAN from considering the
> accesses when looking for data races. But that is mostly for debugging
> accesses, in cases when these accesses are not really part of the
> concurrent algorithm.
>
> So if I understand the situation correctly, I would be using *ONCE().
Huh, what, why?
The code doesn't need READ_ONCE(), it merely wants to tell KCSAN that
the race it observes is fine and ask it to please shut up.
IOW data_race() is accurate and right.
On Thu, Dec 17, 2020 at 11:48:23AM +0100, Peter Zijlstra wrote:
> On Wed, Dec 16, 2020 at 01:19:31PM -0800, Paul E. McKenney wrote:
> > Given that there is no optimization potential, then the main reason to use
> > data_race() instead of *_ONCE() is to prevent KCSAN from considering the
> > accesses when looking for data races. But that is mostly for debugging
> > accesses, in cases when these accesses are not really part of the
> > concurrent algorithm.
> >
> > So if I understand the situation correctly, I would be using *ONCE().
>
> Huh, what, why?
>
> The code doesn't need READ_ONCE(), it merely wants to tell KCSAN that
> the race it observes is fine and ask it to please shut up.
>
> IOW data_race() is accurate and right.
Other way around.
The code does not need any of the compiler optimizations that might
someday be enabled by data_race(). So why do you need data_race() here?
What exactly is lost by instead using READ_ONCE()?
Thanx, Paul