[ Adding other kernel rseq maintainers in CC. ]
----- On Dec 6, 2021, at 12:14 PM, Florian Weimer wrote:
> * Mathieu Desnoyers:
>
>> ----- On Dec 6, 2021, at 8:46 AM, Florian Weimer wrote:
>> [...]
>>> @@ -406,6 +407,9 @@ struct pthread
>>> /* Used on strsignal. */
>>> struct tls_internal_t tls_state;
>>>
>>> + /* rseq area registered with the kernel. */
>>> + struct rseq rseq_area;
>>
>> The rseq UAPI requires that the fields within the rseq_area
>> are read and written with single-copy atomicity semantics.
>>
>> So either we define a "volatile struct rseq" here, or we'll need
>> to wrap all accesses in the proper volatile casts, or use
>> relaxed MO atomic accesses.
>
> Under the C memory model, neither volatile nor relaxed MO results in
> single-copy atomicity semantics. So I'm not sure what to make of this.
> Surely switching to inline assembly on all targets is over the top.
>
> I think we can rely on a plain read doing the right thing for us.
AFAIU, the plain read does not prevent the compiler from re-loading the
value in case of high register pressure.

Accesses to rseq fields such as cpu_id need to be done as if those were
concurrently modified by a signal handler nesting on top of the user-space
code, with the particular twist that blocking signals has no effect on
concurrent updates.

I do not think we need to do the load in assembly. I was under the impression
that both volatile load and relaxed MO result in single-copy atomicity
semantics for an aligned pointer. Perhaps Paul, Peter, Boqun have something
to add here?
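
For concreteness, a minimal sketch of the volatile-cast option (illustrative
only: my_rseq_area is a hypothetical stand-in for however the registered
per-thread area is reached, and <linux/rseq.h> is the kernel UAPI header
that provides struct rseq):

  #include <linux/rseq.h>   /* struct rseq (kernel UAPI).  */

  /* Hypothetical stand-in for the thread's registered rseq area.  */
  extern __thread struct rseq my_rseq_area;

  /* Volatile load: the compiler must emit exactly one naturally
     aligned load here and cannot re-load the value later under
     register pressure.  */
  static inline int
  rseq_read_cpu_id (void)
  {
    return *(volatile int *) &my_rseq_area.cpu_id;
  }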
Thanks,
Mathieu
--
Mathieu Desnoyers
EfficiOS Inc.
http://www.efficios.com
* Mathieu Desnoyers:
> [...]
>
> I do not think we need to do the load in assembly. I was under the impression
> that both volatile load and relaxed MO result in single-copy atomicity
> semantics for an aligned pointer. Perhaps Paul, Peter, Boqun have something
> to add here?
The C memory model is broken and does not prevent out-of-thin-air
values. As far as I know, this breaks single-copy atomicity. In
practice, compilers will not exercise the latitude offered by the memory
model. volatile does not ensure absence of data races.

Using atomics or volatile would require us to materialize the thread
pointer, given the current internal interfaces we have, and I don't want
to do this because this is supposed to be performance-critical code.

The compiler barrier inherent to the function call will have to be
enough. I can add a comment to this effect:

  /* This load has single-copy atomicity semantics (as required for
     rseq) because the function call implies a compiler barrier. */
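
As a sketch of where that comment would live (illustrative, not the
committed code; THREAD_GETMEM and THREAD_SELF are the usual glibc TLS
accessors, and the fallback helper name is made up):

  int
  sched_getcpu (void)
  {
    /* This load has single-copy atomicity semantics (as required for
       rseq) because the function call implies a compiler barrier.  */
    int cpu_id = THREAD_GETMEM (THREAD_SELF, rseq_area.cpu_id);

    if (cpu_id >= 0)
      return cpu_id;
    /* rseq unavailable or registration failed: fall back to the
       getcpu system call (hypothetical helper).  */
    return getcpu_syscall_fallback ();
  }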
Thanks,
Florian
On Mon, Dec 06, 2021 at 08:03:26PM +0100, Florian Weimer wrote:
> [...]
>
> The C memory model is broken and does not prevent out-of-thin-air
> values. As far as I know, this breaks single-copy atomicity. In
> practice, compilers will not exercise the latitude offered by the memory
> model. volatile does not ensure absence of data races.
Within the confines of the standard, agreed, use of the volatile keyword
does not explicitly prevent data races.

However, volatile accesses are (informally) defined to suffice for
device-driver memory accesses that communicate with devices, whether via
MMIO or DMA-style shared memory. The device-driver firmware is often
written in C or C++. So shouldn't this informal device-driver guarantee
also cover what is needed for userspace code that is communicating
with kernel code? If not, why not?
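
For reference, a sketch of the classic driver idiom that this informal
guarantee covers (the register address and ready bit are made up):

  #include <stdint.h>

  /* Hypothetical memory-mapped device status register.  */
  #define DEV_STATUS ((volatile uint32_t *) 0xfe000000u)
  #define DEV_READY  0x1u

  static void
  wait_for_device (void)
  {
    /* Each iteration performs a fresh single load from the device;
       volatile forbids caching the value in a register.  */
    while ((*DEV_STATUS & DEV_READY) == 0)
      ;
  }

The rseq area is analogous: memory updated asynchronously by an agent
the compiler cannot see, in this case the kernel.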
> Using atomics or volatile would require us to materialize the thread
> pointer, given the current internal interfaces we have, and I don't want
> to do this because this is supposed to be performance-critical code.
> The compiler barrier inherent to the function call will have to be
> enough. I can add a comment to this effect:
>
> /* This load has single-copy atomicity semantics (as required for
> rseq) because the function call implies a compiler barrier. */
Agreed on the need to be very careful to avoid degrading performance on
fast paths!
Thanx, Paul
* Paul E. McKenney via Libc-alpha:
>> [...]
>
> Within the confines of the standard, agreed, use of the volatile keyword
> does not explicitly prevent data races.
>
> However, volatile accesses are (informally) defined to suffice for
> device-driver memory accesses that communicate with devices, whether via
> MMIO or DMA-style shared memory. The device-driver firmware is often
> written in C or C++. So shouldn't this informal device-driver guarantee
> also cover what is needed for userspace code that is communicating
> with kernel code? If not, why not?
The informal guarantee is probably good enough here, too. However, the
actual accesses are behind macros, and those macros use either
non-volatile plain reads or inline assembler (which uses
single-instruction, naturally aligned reads).
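
For example, the 4-byte x86-64 case looks roughly like this (a sketch of
the idea, not the exact glibc source):

  /* Single naturally aligned 4-byte load relative to the thread
     pointer (%fs), emitted as one instruction the compiler can
     neither tear nor re-issue.  member_offset must be a constant.  */
  #define THREAD_GETMEM_32(member_offset)                        \
    ({ int __value;                                              \
       asm volatile ("movl %%fs:%P1,%0"                          \
                     : "=r" (__value)                            \
                     : "i" (member_offset));                     \
       __value; })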
Thanks,
Florian
On Mon, Dec 06, 2021 at 09:26:51PM +0100, Florian Weimer wrote:
> [...]
>
> The informal guarantee is probably good enough here, too. However, the
> actual accesses are behind macros, and those macros use either
> non-volatile plain reads or inline assembler (which uses
> single-instruction, naturally aligned reads).
Agreed, though a non-volatile plain read is quite dangerous in this context.
Thanx, Paul