2019-06-11 16:44:06

by Anders Roxell

Subject: [PATCH v2] seqlock: mark raw_read_seqcount and read_seqcount_retry as __always_inline

With the function graph tracer, each traced function calls sched_clock()
to take a timestamp. As sched_clock() uses
raw_read_seqcount()/read_seqcount_retry(), we must ensure that these
do not in turn trigger the graph tracer.
Both functions are marked as inline. However, if CONFIG_OPTIMIZE_INLINING
is set, the compiler may decide not to inline them, which makes them
traceable when they should not be.

Rework so that raw_read_seqcount() and read_seqcount_retry() are marked
__always_inline, so they are inlined even if CONFIG_OPTIMIZE_INLINING is
turned on.
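
For context, the reader in kernel/time/sched_clock.c is built on exactly
this pair of helpers. A condensed sketch of that read pattern follows; the
structure and field names here are simplified placeholders, not the actual
kernel code:

#include <linux/seqlock.h>
#include <linux/types.h>

/*
 * Sketch only: the real code keeps two banks of clock data and updates
 * them on the writer side with the seqcount latch technique (not shown).
 */
struct clk_bank {
	u64 epoch_ns;
};

struct clk_data {
	seqcount_t	seq;
	struct clk_bank	bank[2];
};

static u64 notrace read_clock_ns(struct clk_data *cd)
{
	unsigned int seq;
	u64 ns;

	do {
		/* raw_read_seqcount() must not recurse into the tracer */
		seq = raw_read_seqcount(&cd->seq);
		/* the LSB of the sequence selects the stable bank */
		ns = cd->bank[seq & 1].epoch_ns;
		/* read_seqcount_retry() must not recurse either */
	} while (read_seqcount_retry(&cd->seq, seq));

	return ns;
}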

Acked-by: Will Deacon <[email protected]>
Signed-off-by: Anders Roxell <[email protected]>
---
include/linux/seqlock.h | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/include/linux/seqlock.h b/include/linux/seqlock.h
index bcf4cf26b8c8..1b18e3df186e 100644
--- a/include/linux/seqlock.h
+++ b/include/linux/seqlock.h
@@ -127,7 +127,7 @@ static inline unsigned __read_seqcount_begin(const seqcount_t *s)
  * seqcount without any lockdep checking and without checking or
  * masking the LSB. Calling code is responsible for handling that.
  */
-static inline unsigned raw_read_seqcount(const seqcount_t *s)
+static __always_inline unsigned raw_read_seqcount(const seqcount_t *s)
 {
 	unsigned ret = READ_ONCE(s->sequence);
 	smp_rmb();
@@ -215,7 +215,8 @@ static inline int __read_seqcount_retry(const seqcount_t *s, unsigned start)
  * If the critical section was invalid, it must be ignored (and typically
  * retried).
  */
-static inline int read_seqcount_retry(const seqcount_t *s, unsigned start)
+static
+__always_inline int read_seqcount_retry(const seqcount_t *s, unsigned start)
 {
 	smp_rmb();
 	return __read_seqcount_retry(s, start);
--
2.20.1


2019-06-23 11:16:36

by Thomas Gleixner

Subject: Re: [PATCH v2] seqlock: mark raw_read_seqcount and read_seqcount_retry as __always_inline

On Tue, 11 Jun 2019, Anders Roxell wrote:

> With the function graph tracer, each traced function calls sched_clock()
> to take a timestamp. As sched_clock() uses
> raw_read_seqcount()/read_seqcount_retry(), we must ensure that these
> do not in turn trigger the graph tracer.
> Both functions are marked as inline. However, if CONFIG_OPTIMIZE_INLINING
> is set, the compiler may decide not to inline them, which makes them
> traceable when they should not be.
>
> Rework so that raw_read_seqcount() and read_seqcount_retry() are marked
> __always_inline, so they are inlined even if CONFIG_OPTIMIZE_INLINING is
> turned on.

Why just those two? The same issue can happen in other places, with other
clocks that can be utilized by the tracer.

Aside from your particular issue, there is no reason why any of those
functions should ever trigger the graph tracer.
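
All of the read-side helpers in that header are of roughly this shape (a
sketch, not a verbatim quote of the file), and none of them has any
business showing up in a trace:

static inline unsigned __read_seqcount_begin(const seqcount_t *s)
{
	unsigned ret;

repeat:
	/* spin until no writer holds the sequence (LSB set) */
	ret = READ_ONCE(s->sequence);
	if (unlikely(ret & 1)) {
		cpu_relax();
		goto repeat;
	}
	return ret;
}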

Thanks,

tglx


2019-07-26 10:02:26

by Anders Roxell

Subject: Re: [PATCH v2] seqlock: mark raw_read_seqcount and read_seqcount_retry as __always_inline

On Sun, 23 Jun 2019 at 13:16, Thomas Gleixner <[email protected]> wrote:
>
> On Tue, 11 Jun 2019, Anders Roxell wrote:
>
> > With the function graph tracer, each traced function calls sched_clock()
> > to take a timestamp. As sched_clock() uses
> > raw_read_seqcount()/read_seqcount_retry(), we must ensure that these
> > do not in turn trigger the graph tracer.
> > Both functions are marked as inline. However, if CONFIG_OPTIMIZE_INLINING
> > is set, the compiler may decide not to inline them, which makes them
> > traceable when they should not be.
> >
> > Rework so that raw_read_seqcount() and read_seqcount_retry() are marked
> > __always_inline, so they are inlined even if CONFIG_OPTIMIZE_INLINING is
> > turned on.
>
> Why just those two? The same issue can happen in other places, with other
> clocks that can be utilized by the tracer.
>
> Aside from your particular issue, there is no reason why any of those
> functions should ever trigger the graph tracer.


Yes, you are correct. I'll update the patch to mark all the functions in
that file as __always_inline.
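
Something along these lines for each of the read-side helpers (a sketch of
the direction only, not the final patch):

-static inline unsigned __read_seqcount_begin(const seqcount_t *s)
+static __always_inline unsigned __read_seqcount_begin(const seqcount_t *s)

-static inline unsigned raw_seqcount_begin(const seqcount_t *s)
+static __always_inline unsigned raw_seqcount_begin(const seqcount_t *s)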

Cheers,
Anders