2020-11-03 00:43:58

by Ahmed S. Darwish

Subject: Re: [PATCH v2 2/2] mm: prevent gup_fast from racing with COW during fork

On Mon, Nov 02, 2020 at 08:25:32PM -0400, Jason Gunthorpe wrote:
> On Tue, Nov 03, 2020 at 01:17:12AM +0100, Ahmed S. Darwish wrote:
>
> > Please stick with the official exported API: raw_write_seqcount_begin().
>
> How did you know this was 'official exported API' ??
>

All the official exported seqlock.h APIs are marked with verbose
kernel-doc annotations on top. The rest are internal...

> > Exactly. '*_seqcount_t_*' is a seqlock.h implementation detail, and it
> > has _zero_ relevance to what is discussed in this thread actually.
>
> Add some leading __'s to them?
>

It's a bit more complicated than just adding some "__" prefixes, due to
the massive compile-time polymorphism done through _Generic().

The '*_seqcount_t_*' format was the best we could come up with to
distinguish (again, for internal seqlock.h code) between macros taking
all seqcount_LOCKNAME_t types, and macros/functions taking only plain
seqcount_t.
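
As a rough standalone sketch of that pattern (illustration only; the
real __seqprop() in seqlock.h covers the raw_spinlock, spinlock,
rwlock, mutex, and ww_mutex variants, and more properties than just
"ptr"):

typedef struct { unsigned sequence; } seqcount_t;
typedef struct { seqcount_t seqcount; } seqcount_spinlock_t;

static inline seqcount_t *__seqprop_ptr(seqcount_t *s)
{
	return s;
}

static inline seqcount_t *__seqprop_spinlock_ptr(seqcount_spinlock_t *s)
{
	return &s->seqcount;
}

/*
 * One _Generic() switch, plus token pasting on "prop", picks the
 * matching helper from the static type of (s) at compile time.
 */
#define __seqprop(s, prop)					\
	_Generic(*(s),						\
		seqcount_t:		__seqprop_##prop,	\
		seqcount_spinlock_t:	__seqprop_spinlock_##prop \
	)(s)

#define __seqcount_ptr(s)	__seqprop(s, ptr)

A macro taking "all types" funnels through such a property macro; a
'*_seqcount_t_*' function sits below it and only ever sees the plain
seqcount_t.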

Thanks,

--
Ahmed S. Darwish
Linutronix GmbH


2020-11-03 02:22:33

by John Hubbard

Subject: Re: [PATCH v2 2/2] mm: prevent gup_fast from racing with COW during fork

On 11/2/20 4:41 PM, Ahmed S. Darwish wrote:
> On Mon, Nov 02, 2020 at 08:25:32PM -0400, Jason Gunthorpe wrote:
>> On Tue, Nov 03, 2020 at 01:17:12AM +0100, Ahmed S. Darwish wrote:
>>
>>> Please stick with the official exported API: raw_write_seqcount_begin().
>>
>> How did you know this was 'official exported API' ??
>>
>
> All the official exported seqlock.h APIs are marked with verbose
> kernel-doc annotations on top. The rest are internal...
>

OK, but no one here was able to deduce that, probably because there is not
enough consistency throughout the kernel to be able to assume such things--even
though your seqlock project is internally consistent. It's just not *quite*
enough communication.

I think if we added the following it would be very nice:

a) Short comments to the "unofficial and internal" routines, identifying them as
such, and

b) Short comments to the "official API for general use", also identifying
those as such.

c) A comment about what makes "raw" actually raw, for seqlock.


Since I'm proposing new work, I'll also offer to help, perhaps by putting together
a small patch to get it kicked off, if you approve of the idea.

thanks,
--
John Hubbard
NVIDIA

2020-11-03 06:54:38

by Ahmed S. Darwish

Subject: Re: [PATCH v2 2/2] mm: prevent gup_fast from racing with COW during fork

On Mon, Nov 02, 2020 at 06:20:45PM -0800, John Hubbard wrote:
> On 11/2/20 4:41 PM, Ahmed S. Darwish wrote:
> > On Mon, Nov 02, 2020 at 08:25:32PM -0400, Jason Gunthorpe wrote:
> > > On Tue, Nov 03, 2020 at 01:17:12AM +0100, Ahmed S. Darwish wrote:
> > >
> > > > Please stick with the official exported API: raw_write_seqcount_begin().
> > >
> > > How did you know this was 'official exported API' ??
> > >
> >
> > All the official exported seqlock.h APIs are marked with verbose
> > kernel-doc annotations on top. The rest are internal...
> >
>
> OK, but no one here was able to deduce that, probably because there is not
> enough consistency throughout the kernel to be able to assume such things--even
> though your seqlock project is internally consistent. It's just not *quite*
> enough communication.
>
> I think if we added the following it would be very nice:
>

The problem is, I've already documented seqlock.h to death.... There are
more comments than code in there, and there is "seqlock.rst" under
Documentation/ to further describe the big picture.

There comes a point where you decide what level of documentation to add,
and what level to skip.

Because in the end, you don't want to confuse "Joe, the general driver
developer" with too much detail that isn't relevant to their task at
hand. (I work in the embedded domain, and I've seen so much ugly code
from embedded drivers/SoC developers already, sorry.)

See for example my reply to Linus, where the lockdep-free and
barrier-free parts of the API were deliberately left out of
seqlock.rst. This was done on purpose: 1) you want to keep the generic
case simple, but the special case doable, 2) you want to encourage
people to use the standard entry/exit points as much as possible.

> a) Short comments to the "unofficial and internal" routines, identifying them as
> such, and
>
> b) Short comments to the "official API for general use", also identifying
> those as such.
>

I really think the already added kernel-doc is sufficient...

See for example __read_seqcount_begin() and __read_seqcount_retry().
They begin with "__", but they are semi-external seqlock.h APIs that
are used by the VFS to avoid barriers. And these APIs are then
polymorphised according to the write serialization lock type, and so on.
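
Schematically, the lockless read pattern there looks like this
(illustration only, with a hypothetical "obj"; such callers must
provide the required ordering themselves, which is why they can skip
the smp_rmb() implied by the non-"__" variants):

	unsigned seq;

	do {
		seq = __read_seqcount_begin(&obj->seq);	/* no smp_rmb() */

		/* speculatively read the protected fields... */

	} while (__read_seqcount_retry(&obj->seq, seq)); /* no smp_rmb() */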

So the most consistent way for seqlock.h was to use kernel-doc as *the*
marker for exported functions.

This is not unique to seqlock.h by the way. The same pattern is heavily
used by the DRM folks.

Yes, of course, we can add even more comments to seqlock.h, but then, I
honestly think it would be too much that maybe people will just skip
reading the whole thing altogether...

> c) A comment about what makes "raw" actually raw, for seqlock.
>

That's already documented.

What more can really be written than what's in seqlock.h below??

/**
* raw_read_seqcount_begin() - begin a seqcount_t read section w/o lockdep

/**
* raw_seqcount_begin() - begin a seqcount_t read critical section w/o
* lockdep and w/o counter stabilization

/**
* raw_write_seqcount_begin() - start a seqcount_t write section w/o lockdep

/**
* raw_write_seqcount_end() - end a seqcount_t write section w/o lockdep

>
> Since I'm proposing new work, I'll also offer to help, perhaps by putting together
> a small patch to get it kicked off, if you approve of the idea.
>

Patches are always welcome of course, and please put me in Cc. I don't
approve or deny anything though, that's the locking maintainers job :)

Kind regards,

> John Hubbard
> NVIDIA

--
Ahmed S. Darwish
Linutronix GmbH

2020-11-03 17:42:35

by Linus Torvalds

Subject: Re: [PATCH v2 2/2] mm: prevent gup_fast from racing with COW during fork

On Mon, Nov 2, 2020 at 10:52 PM Ahmed S. Darwish
<[email protected]> wrote:
>
> The problem is, I've already documented seqlock.h to death.... There are
> more comments than code in there, and there is "seqlock.rst" under
> Documentation/ to further describe the big picture.

Well, honestly, I think the correct thing to do is to get rid of the
*_seqcount_t_*() functions entirely.

They add nothing but confusion, and they are entirely misnamed. That's
not the pattern we use for "internal use only" functions, and they are
*very* confusing.

They have other issues too: like raw_write_seqcount_end() not being
usable on its own when preemptibility isn't an issue like here. You
basically _have_ to use raw_write_seqcount_t_end(), because otherwise
it tries to re-enable preemption that was never there.

Linus

2020-11-04 01:36:36

by Ahmed S. Darwish

Subject: Re: [PATCH v2 2/2] mm: prevent gup_fast from racing with COW during fork

On Tue, Nov 03, 2020 at 09:40:22AM -0800, Linus Torvalds wrote:
> On Mon, Nov 2, 2020 at 10:52 PM Ahmed S. Darwish
> <[email protected]> wrote:
> >
> > The problem is, I've already documented seqlock.h to death.... There are
> > more comments than code in there, and there is "seqlock.rst" under
> > Documentation/ to further describe the big picture.
>
> Well, honestly, I think the correct thing to do is to get rid of the
> *_seqcount_t_*() functions entirely.
>
> They add nothing but confusion, and they are entirely misnamed. That's
> not the pattern we use for "internal use only" functions, and they are
> *very* confusing.
>

I see. Would the enclosed patch #1 be OK? It basically uses the "__do_"
prefix instead, with some rationale.

>
> They have other issues too: like raw_write_seqcount_end() not being
> usable on its own when preemptibility isn't an issue like here. You
> basically _have_ to use raw_write_seqcount_t_end(), because otherwise
> it tries to re-enable preemption that was never there.
>

Hmmm, raw_write_seqcount_{begin,end}() *never* disable/enable preemption
for plain seqcount_t. This is why I kept recommending those for this
patch series instead of internal raw_write_seqcount_*t*_{begin,end}().

But..... given that multiple people made the exact same remark by now, I
guess that's due to:

#define raw_write_seqcount_begin(s)				\
do {								\
	if (__seqcount_lock_preemptible(s))			\
		preempt_disable();				\
								\
	...							\
} while (0)

#define raw_write_seqcount_end(s)				\
do {								\
	...							\
								\
	if (__seqcount_lock_preemptible(s))			\
		preempt_enable();				\
} while (0)

The tricky part is that __seqcount_lock_preemptible() is always false
for plain "seqcount_t". With that data type, the _Generic() selection
makes it resolve to __seqprop_preemptible(), which just returns false.
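
Spelled out, the helpers behind that selection look like this
(simplified from seqlock.h; only the plain and mutex flavors shown):

/* plain seqcount_t: no associated lock, thus never preemptible */
static inline bool __seqprop_preemptible(const seqcount_t *s)
{
	return false;
}

/* seqcount_mutex_t: the associated write lock is preemptible */
static inline bool __seqprop_mutex_preemptible(const seqcount_mutex_t *s)
{
	return true;
}

#define __seqcount_lock_preemptible(s)	__seqprop(s, preemptible)

So for a plain seqcount_t, "if (__seqcount_lock_preemptible(s))"
compiles down to "if (false)", and the preempt_disable()/
preempt_enable() pair vanishes entirely.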

Originally, __seqcount_lock_preemptible() was called:

__seqcount_associated_lock_exists_and_is_preemptible()

but it got shortened to its current form in the process of some
pre-mainline refactorings. Looking at it now, after all the dust has
settled, maybe the verbose form was clearer.

Please see the enclosed patch #2... Would that be OK too?

(I will submit these two patches in their own thread after some common
ground is reached.)

Patches
-------

====>
====> patch #1:
====>

Subject: [RFC][PATCH 1/2] seqlock: Use __do_ prefix instead of non-standard
_seqcount_t_ marker

The use of "*_seqcount_t_*" as a marker to denote internal seqlock.h
functions taking only plain seqcount_t instead of the whole
seqcount_LOCKNAME_t family is confusing users, as it's also not the
standard kernel pattern for denoting header file internal functions.

Use the __do_ prefix instead.

Note, a plain "__" prefix is not used since seqlock.h already uses it
for some of its exported functions; e.g. __read_seqcount_begin() and
__read_seqcount_retry().

Reported-by: Jason Gunthorpe <[email protected]>
Reported-by: John Hubbard <[email protected]>
Reported-by: Linus Torvalds <[email protected]>
Link: https://lkml.kernel.org/r/CAHk-=wgB8nyOQufpn0o6a5BpJCJPnXvH+kRxApujhsgG+7qAwQ@mail.gmail.com
Signed-off-by: Ahmed S. Darwish <[email protected]>
---
include/linux/seqlock.h | 62 ++++++++++++++++++++---------------------
1 file changed, 31 insertions(+), 31 deletions(-)

diff --git a/include/linux/seqlock.h b/include/linux/seqlock.h
index cbfc78b92b65..5de043841d33 100644
--- a/include/linux/seqlock.h
+++ b/include/linux/seqlock.h
@@ -425,9 +425,9 @@ SEQCOUNT_LOCKNAME(ww_mutex, struct ww_mutex, true, &s->lock->base, ww_mu
* Return: true if a read section retry is required, else false
*/
#define __read_seqcount_retry(s, start) \
- __read_seqcount_t_retry(__seqcount_ptr(s), start)
+ __do___read_seqcount_retry(__seqcount_ptr(s), start)

-static inline int __read_seqcount_t_retry(const seqcount_t *s, unsigned start)
+static inline int __do___read_seqcount_retry(const seqcount_t *s, unsigned start)
{
kcsan_atomic_next(0);
return unlikely(READ_ONCE(s->sequence) != start);
@@ -445,12 +445,12 @@ static inline int __read_seqcount_t_retry(const seqcount_t *s, unsigned start)
* Return: true if a read section retry is required, else false
*/
#define read_seqcount_retry(s, start) \
- read_seqcount_t_retry(__seqcount_ptr(s), start)
+ __do_read_seqcount_retry(__seqcount_ptr(s), start)

-static inline int read_seqcount_t_retry(const seqcount_t *s, unsigned start)
+static inline int __do_read_seqcount_retry(const seqcount_t *s, unsigned start)
{
smp_rmb();
- return __read_seqcount_t_retry(s, start);
+ return __do___read_seqcount_retry(s, start);
}

/**
@@ -462,10 +462,10 @@ do { \
if (__seqcount_lock_preemptible(s)) \
preempt_disable(); \
\
- raw_write_seqcount_t_begin(__seqcount_ptr(s)); \
+ __do_raw_write_seqcount_begin(__seqcount_ptr(s)); \
} while (0)

-static inline void raw_write_seqcount_t_begin(seqcount_t *s)
+static inline void __do_raw_write_seqcount_begin(seqcount_t *s)
{
kcsan_nestable_atomic_begin();
s->sequence++;
@@ -478,13 +478,13 @@ static inline void raw_write_seqcount_t_begin(seqcount_t *s)
*/
#define raw_write_seqcount_end(s) \
do { \
- raw_write_seqcount_t_end(__seqcount_ptr(s)); \
+ __do_raw_write_seqcount_end(__seqcount_ptr(s)); \
\
if (__seqcount_lock_preemptible(s)) \
preempt_enable(); \
} while (0)

-static inline void raw_write_seqcount_t_end(seqcount_t *s)
+static inline void __do_raw_write_seqcount_end(seqcount_t *s)
{
smp_wmb();
s->sequence++;
@@ -506,12 +506,12 @@ do { \
if (__seqcount_lock_preemptible(s)) \
preempt_disable(); \
\
- write_seqcount_t_begin_nested(__seqcount_ptr(s), subclass); \
+ __do_write_seqcount_begin_nested(__seqcount_ptr(s), subclass); \
} while (0)

-static inline void write_seqcount_t_begin_nested(seqcount_t *s, int subclass)
+static inline void __do_write_seqcount_begin_nested(seqcount_t *s, int subclass)
{
- raw_write_seqcount_t_begin(s);
+ __do_raw_write_seqcount_begin(s);
seqcount_acquire(&s->dep_map, subclass, 0, _RET_IP_);
}

@@ -533,12 +533,12 @@ do { \
if (__seqcount_lock_preemptible(s)) \
preempt_disable(); \
\
- write_seqcount_t_begin(__seqcount_ptr(s)); \
+ __do_write_seqcount_begin(__seqcount_ptr(s)); \
} while (0)

-static inline void write_seqcount_t_begin(seqcount_t *s)
+static inline void __do_write_seqcount_begin(seqcount_t *s)
{
- write_seqcount_t_begin_nested(s, 0);
+ __do_write_seqcount_begin_nested(s, 0);
}

/**
@@ -549,16 +549,16 @@ static inline void write_seqcount_t_begin(seqcount_t *s)
*/
#define write_seqcount_end(s) \
do { \
- write_seqcount_t_end(__seqcount_ptr(s)); \
+ __do_write_seqcount_end(__seqcount_ptr(s)); \
\
if (__seqcount_lock_preemptible(s)) \
preempt_enable(); \
} while (0)

-static inline void write_seqcount_t_end(seqcount_t *s)
+static inline void __do_write_seqcount_end(seqcount_t *s)
{
seqcount_release(&s->dep_map, _RET_IP_);
- raw_write_seqcount_t_end(s);
+ __do_raw_write_seqcount_end(s);
}

/**
@@ -603,9 +603,9 @@ static inline void write_seqcount_t_end(seqcount_t *s)
* }
*/
#define raw_write_seqcount_barrier(s) \
- raw_write_seqcount_t_barrier(__seqcount_ptr(s))
+ __do_raw_write_seqcount_barrier(__seqcount_ptr(s))

-static inline void raw_write_seqcount_t_barrier(seqcount_t *s)
+static inline void __do_raw_write_seqcount_barrier(seqcount_t *s)
{
kcsan_nestable_atomic_begin();
s->sequence++;
@@ -623,9 +623,9 @@ static inline void raw_write_seqcount_t_barrier(seqcount_t *s)
* will complete successfully and see data older than this.
*/
#define write_seqcount_invalidate(s) \
- write_seqcount_t_invalidate(__seqcount_ptr(s))
+ __do_write_seqcount_invalidate(__seqcount_ptr(s))

-static inline void write_seqcount_t_invalidate(seqcount_t *s)
+static inline void __do_write_seqcount_invalidate(seqcount_t *s)
{
smp_wmb();
kcsan_nestable_atomic_begin();
@@ -865,7 +865,7 @@ static inline unsigned read_seqretry(const seqlock_t *sl, unsigned start)
}

/*
- * For all seqlock_t write side functions, use write_seqcount_*t*_begin()
+ * For all seqlock_t write side functions, use __do_write_seqcount_begin()
* instead of the generic write_seqcount_begin(). This way, no redundant
* lockdep_assert_held() checks are added.
*/
@@ -886,7 +886,7 @@ static inline unsigned read_seqretry(const seqlock_t *sl, unsigned start)
static inline void write_seqlock(seqlock_t *sl)
{
spin_lock(&sl->lock);
- write_seqcount_t_begin(&sl->seqcount.seqcount);
+ __do_write_seqcount_begin(&sl->seqcount.seqcount);
}

/**
@@ -898,7 +898,7 @@ static inline void write_seqlock(seqlock_t *sl)
*/
static inline void write_sequnlock(seqlock_t *sl)
{
- write_seqcount_t_end(&sl->seqcount.seqcount);
+ __do_write_seqcount_end(&sl->seqcount.seqcount);
spin_unlock(&sl->lock);
}

@@ -912,7 +912,7 @@ static inline void write_sequnlock(seqlock_t *sl)
static inline void write_seqlock_bh(seqlock_t *sl)
{
spin_lock_bh(&sl->lock);
- write_seqcount_t_begin(&sl->seqcount.seqcount);
+ __do_write_seqcount_begin(&sl->seqcount.seqcount);
}

/**
@@ -925,7 +925,7 @@ static inline void write_seqlock_bh(seqlock_t *sl)
*/
static inline void write_sequnlock_bh(seqlock_t *sl)
{
- write_seqcount_t_end(&sl->seqcount.seqcount);
+ __do_write_seqcount_end(&sl->seqcount.seqcount);
spin_unlock_bh(&sl->lock);
}

@@ -939,7 +939,7 @@ static inline void write_sequnlock_bh(seqlock_t *sl)
static inline void write_seqlock_irq(seqlock_t *sl)
{
spin_lock_irq(&sl->lock);
- write_seqcount_t_begin(&sl->seqcount.seqcount);
+ __do_write_seqcount_begin(&sl->seqcount.seqcount);
}

/**
@@ -951,7 +951,7 @@ static inline void write_seqlock_irq(seqlock_t *sl)
*/
static inline void write_sequnlock_irq(seqlock_t *sl)
{
- write_seqcount_t_end(&sl->seqcount.seqcount);
+ __do_write_seqcount_end(&sl->seqcount.seqcount);
spin_unlock_irq(&sl->lock);
}

@@ -960,7 +960,7 @@ static inline unsigned long __write_seqlock_irqsave(seqlock_t *sl)
unsigned long flags;

spin_lock_irqsave(&sl->lock, flags);
- write_seqcount_t_begin(&sl->seqcount.seqcount);
+ __do_write_seqcount_begin(&sl->seqcount.seqcount);
return flags;
}

@@ -989,7 +989,7 @@ static inline unsigned long __write_seqlock_irqsave(seqlock_t *sl)
static inline void
write_sequnlock_irqrestore(seqlock_t *sl, unsigned long flags)
{
- write_seqcount_t_end(&sl->seqcount.seqcount);
+ __do_write_seqcount_end(&sl->seqcount.seqcount);
spin_unlock_irqrestore(&sl->lock, flags);
}

====>
====> patch #2:
====>

Subject: [PATCH 2/2] seqlock: seqcount_LOCKNAME_t: Use more verbose macro names

As evidenced by multiple discussions over LKML so far, it's not clear
that __seqcount_lock_preemptible() is always false for plain seqcount_t.
For that data type, the _Generic() selection resolves to
__seqprop_preemptible(), which just returns false.

Use __seqcount_associated_lock_exists_and_is_preemptible() instead,
which hints that "preemptibility" is for the associated write
serialization lock (if any), not for the seqcount itself.

Similarly, rename __seqcount_assert_lock_held() to
__seqcount_assert_associated_lock_held().

Link: https://lkml.kernel.org/r/CAHk-=wgB8nyOQufpn0o6a5BpJCJPnXvH+kRxApujhsgG+7qAwQ@mail.gmail.com
Link: https://lkml.kernel.org/r/[email protected]
Link: https://lkml.kernel.org/r/20201103170327.GJ20600@xz-x1
Signed-off-by: Ahmed S. Darwish <[email protected]>
---
include/linux/seqlock.h | 22 +++++++++++-----------
1 file changed, 11 insertions(+), 11 deletions(-)

diff --git a/include/linux/seqlock.h b/include/linux/seqlock.h
index 5de043841d33..eb1e5a822e44 100644
--- a/include/linux/seqlock.h
+++ b/include/linux/seqlock.h
@@ -307,10 +307,10 @@ SEQCOUNT_LOCKNAME(ww_mutex, struct ww_mutex, true, &s->lock->base, ww_mu
__seqprop_case((s), mutex, prop), \
__seqprop_case((s), ww_mutex, prop))

-#define __seqcount_ptr(s) __seqprop(s, ptr)
-#define __seqcount_sequence(s) __seqprop(s, sequence)
-#define __seqcount_lock_preemptible(s) __seqprop(s, preemptible)
-#define __seqcount_assert_lock_held(s) __seqprop(s, assert)
+#define __seqcount_ptr(s) __seqprop(s, ptr)
+#define __seqcount_sequence(s) __seqprop(s, sequence)
+#define __seqcount_associated_lock_exists_and_is_preemptible(s) __seqprop(s, preemptible)
+#define __seqcount_assert_associated_lock_held(s) __seqprop(s, assert)

/**
* __read_seqcount_begin() - begin a seqcount_t read section w/o barrier
@@ -459,7 +459,7 @@ static inline int __do_read_seqcount_retry(const seqcount_t *s, unsigned start)
*/
#define raw_write_seqcount_begin(s) \
do { \
- if (__seqcount_lock_preemptible(s)) \
+ if (__seqcount_associated_lock_exists_and_is_preemptible(s)) \
preempt_disable(); \
\
__do_raw_write_seqcount_begin(__seqcount_ptr(s)); \
@@ -480,7 +480,7 @@ static inline void __do_raw_write_seqcount_begin(seqcount_t *s)
do { \
__do_raw_write_seqcount_end(__seqcount_ptr(s)); \
\
- if (__seqcount_lock_preemptible(s)) \
+ if (__seqcount_associated_lock_exists_and_is_preemptible(s)) \
preempt_enable(); \
} while (0)

@@ -501,9 +501,9 @@ static inline void __do_raw_write_seqcount_end(seqcount_t *s)
*/
#define write_seqcount_begin_nested(s, subclass) \
do { \
- __seqcount_assert_lock_held(s); \
+ __seqcount_assert_associated_lock_held(s); \
\
- if (__seqcount_lock_preemptible(s)) \
+ if (__seqcount_associated_lock_exists_and_is_preemptible(s)) \
preempt_disable(); \
\
__do_write_seqcount_begin_nested(__seqcount_ptr(s), subclass); \
@@ -528,9 +528,9 @@ static inline void __do_write_seqcount_begin_nested(seqcount_t *s, int subclass)
*/
#define write_seqcount_begin(s) \
do { \
- __seqcount_assert_lock_held(s); \
+ __seqcount_assert_associated_lock_held(s); \
\
- if (__seqcount_lock_preemptible(s)) \
+ if (__seqcount_associated_lock_exists_and_is_preemptible(s)) \
preempt_disable(); \
\
__do_write_seqcount_begin(__seqcount_ptr(s)); \
@@ -551,7 +551,7 @@ static inline void __do_write_seqcount_begin(seqcount_t *s)
do { \
__do_write_seqcount_end(__seqcount_ptr(s)); \
\
- if (__seqcount_lock_preemptible(s)) \
+ if (__seqcount_associated_lock_exists_and_is_preemptible(s)) \
preempt_enable(); \
} while (0)

> Linus

Thanks,

--
Ahmed S. Darwish
Linutronix GmbH

2020-11-04 02:03:37

by John Hubbard

Subject: Re: [PATCH v2 2/2] mm: prevent gup_fast from racing with COW during fork

On 11/3/20 5:32 PM, Ahmed S. Darwish wrote:
> On Tue, Nov 03, 2020 at 09:40:22AM -0800, Linus Torvalds wrote:
>> On Mon, Nov 2, 2020 at 10:52 PM Ahmed S. Darwish
>> <[email protected]> wrote:
...
>
> ====>
> ====> patch #1:
> ====>
>
> Subject: [RFC][PATCH 1/2] seqlock: Use __do_ prefix instead of non-standard
> _seqcount_t_ marker
>
> The use of "*_seqcount_t_*" as a marker to denote internal seqlock.h
> functions taking only plain seqcount_t instead of the whole
> seqcount_LOCKNAME_t family is confusing users, as it's also not the
> standard kernel pattern for denoting header file internal functions.
>
> Use the __do_ prefix instead.
>
> Note, a plain "__" prefix is not used since seqlock.h already uses it
> for some of its exported functions; e.g. __read_seqcount_begin() and
> __read_seqcount_retry().
>
> Reported-by: Jason Gunthorpe <[email protected]>
> Reported-by: John Hubbard <[email protected]>
> Reported-by: Linus Torvalds <[email protected]>
> Link: https://lkml.kernel.org/r/CAHk-=wgB8nyOQufpn0o6a5BpJCJPnXvH+kRxApujhsgG+7qAwQ@mail.gmail.com
> Signed-off-by: Ahmed S. Darwish <[email protected]>
> ---
> include/linux/seqlock.h | 62 ++++++++++++++++++++---------------------
> 1 file changed, 31 insertions(+), 31 deletions(-)
>
> diff --git a/include/linux/seqlock.h b/include/linux/seqlock.h
> index cbfc78b92b65..5de043841d33 100644
> --- a/include/linux/seqlock.h
> +++ b/include/linux/seqlock.h
> @@ -425,9 +425,9 @@ SEQCOUNT_LOCKNAME(ww_mutex, struct ww_mutex, true, &s->lock->base, ww_mu
> * Return: true if a read section retry is required, else false
> */
> #define __read_seqcount_retry(s, start) \
> - __read_seqcount_t_retry(__seqcount_ptr(s), start)
> + __do___read_seqcount_retry(__seqcount_ptr(s), start)

Looking better. "do_" is clearly an internal function name prefix, so that's good.

A nit: while various numbers of leading underscores are sometimes used, it's a lot
less common to use, say, 3 consecutive underscores (as above) *within* the name. And
I don't think you need it for uniqueness, at least from a quick look around here.

In fact, if you actually need 3 vs 2 vs 1 underscore within the name, I'm going to
claim that that's too far afield, and the naming should be revisited. :)

So why not just:

__do_read_seqcount_retry()

?

...or, if you're feeling bold:

do_read_seqcount_retry()

...thus taking further advantage of the "do" convention, in order to get rid of some
more underscores.

And similarly for other __do___*() functions.

But again, either way, I think "do" is helping a *lot* here (as is getting rid
of the _t_ idea).


thanks,
--
John Hubbard
NVIDIA


2020-11-04 03:20:57

by Ahmed S. Darwish

Subject: Re: [PATCH v2 2/2] mm: prevent gup_fast from racing with COW during fork

On Tue, Nov 03, 2020 at 06:01:30PM -0800, John Hubbard wrote:
> On 11/3/20 5:32 PM, Ahmed S. Darwish wrote:
...
> > #define __read_seqcount_retry(s, start) \
> > - __read_seqcount_t_retry(__seqcount_ptr(s), start)
> > + __do___read_seqcount_retry(__seqcount_ptr(s), start)
>
...
> A nit: while various numbers of leading underscores are sometimes used, it's a lot
> less common to use, say, 3 consecutive underscores (as above) *within* the name. And
> I don't think you need it for uniqueness, at least from a quick look around here.
>
...
> But again, either way, I think "do" is helping a *lot* here (as is getting rid
> of the _t_ idea).

The three underscores are needed because there's a do_ version for
read_seqcount_retry(), and another for __read_seqcount_retry().

Similarly for {__,}read_seqcount_begin(). You want to be very careful
with this, and never mistakenly mix the two, because it affects some VFS
hot paths.
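
That is, the internal-name mapping has to stay 1:1:

	read_seqcount_retry()    =>  do_read_seqcount_retry()
	__read_seqcount_retry()  =>  do___read_seqcount_retry()

Collapsing the embedded underscores would make the two internal names
collide.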

Nonetheless, as you mentioned in the later (dropped) part of your
message, I think do_ is better than __do_, so the final result will be:

do___read_seqcount_retry()
do_read_seqcount_retry()
do_raw_write_seqcount_begin()
do_raw_write_seqcount_end()
do_write_seqcount_begin()
...

and so on.

I'll wait for some further feedback on the two patches (possibly from
Linus or PeterZ), then send a mini patch series.

(This shouldn't block a v3 of Jason's mm patch series though, as it will
be using the external seqlock.h APIs anyway...).

Thanks,

--
Ahmed S. Darwish
Linutronix GmbH

2020-11-05 00:45:42

by Linus Torvalds

Subject: Re: [PATCH v2 2/2] mm: prevent gup_fast from racing with COW during fork

On Tue, Nov 3, 2020 at 7:17 PM Ahmed S. Darwish <[email protected]> wrote:
>
> Nonetheless, as you mentioned in the later (dropped) part of your
> message, I think do_ is better than __do_, so the final result will be:
>
> do___read_seqcount_retry()
> do_read_seqcount_retry()
> do_raw_write_seqcount_begin()
> do_raw_write_seqcount_end()
> do_write_seqcount_begin()
> ...
>
> and so on.

Looks reasonable to me.

And can you add a few comments to the magic type macros, so that it's
a lot more obvious what the end result was. I clearly wasn't able to
follow all the _Generic() cases from the seqcount_t to the final end
result. It's a really odd combination of a subtle _Generic() macro and
token pasting to get from seqcount_t to "false" in
__seqcount_lock_preemptible().

I can see it when I really look, but when looking at the actual use,
it's very non-obvious indeed.

Linus

2020-11-05 01:32:47

by Ahmed S. Darwish

Subject: Re: [PATCH v2 2/2] mm: prevent gup_fast from racing with COW during fork

On Wed, Nov 04, 2020 at 10:38:57AM -0800, Linus Torvalds wrote:
...
>
> Looks reasonable to me.
>
> And can you add a few comments to the magic type macros, so that it's
> a lot more obvious what the end result was.
...
>
> I can see it when I really look, but when looking at the actual use,
> it's very non-obvious indeed.
>

ACK, will do.

> Linus

Thanks,

--
Ahmed S. Darwish
Linutronix GmbH

2020-11-10 11:57:09

by Peter Zijlstra

Subject: Re: [PATCH v2 2/2] mm: prevent gup_fast from racing with COW during fork

On Wed, Nov 04, 2020 at 04:17:11AM +0100, Ahmed S. Darwish wrote:
> On Tue, Nov 03, 2020 at 06:01:30PM -0800, John Hubbard wrote:
> > On 11/3/20 5:32 PM, Ahmed S. Darwish wrote:
> ...
> > > #define __read_seqcount_retry(s, start) \
> > > - __read_seqcount_t_retry(__seqcount_ptr(s), start)
> > > + __do___read_seqcount_retry(__seqcount_ptr(s), start)
> >
> ...
> > A nit: while various numbers of leading underscores are sometimes used, it's a lot
> > less common to use, say, 3 consecutive underscores (as above) *within* the name. And
> > I don't think you need it for uniqueness, at least from a quick look around here.
> >
> ...
> > But again, either way, I think "do" is helping a *lot* here (as is getting rid
> > of the _t_ idea).
>
> The three underscores are needed because there's a do_ version for
> read_seqcount_retry(), and another for __read_seqcount_retry().
>
> Similarly for {__,}read_seqcount_begin(). You want to be very careful
> > with this, and never mistakenly mix the two, because it affects some VFS
> hot paths.
>
> Nonetheless, as you mentioned in the later (dropped) part of your
> message, I think do_ is better than __do_, so the final result will be:
>
> do___read_seqcount_retry()
> do_read_seqcount_retry()
> do_raw_write_seqcount_begin()
> do_raw_write_seqcount_end()
> do_write_seqcount_begin()
> ...
>
> and so on.
>
> I'll wait for some further feedback on the two patches (possibly from
> Linus or PeterZ), then send a mini patch series.

I'm Ok with that. The change I have issues with is:

-#define __seqcount_lock_preemptible(s) __seqprop(s, preemptible)
+#define __seqcount_associated_lock_exists_and_is_preemptible(s) __seqprop(s, preemptible)

That's just _realllllllly_ long.

Would something like the below make it easier to follow?

seqprop_preemptible() is 'obviously' related to __seqprop_preemptible()
without having to follow through the _Generic token pasting maze.

---
diff --git a/include/linux/seqlock.h b/include/linux/seqlock.h
index 8d8552474c64..576594add921 100644
--- a/include/linux/seqlock.h
+++ b/include/linux/seqlock.h
@@ -307,10 +307,10 @@ SEQCOUNT_LOCKNAME(ww_mutex, struct ww_mutex, true, &s->lock->base, ww_mu
__seqprop_case((s), mutex, prop), \
__seqprop_case((s), ww_mutex, prop))

-#define __seqcount_ptr(s) __seqprop(s, ptr)
-#define __seqcount_sequence(s) __seqprop(s, sequence)
-#define __seqcount_lock_preemptible(s) __seqprop(s, preemptible)
-#define __seqcount_assert_lock_held(s) __seqprop(s, assert)
+#define seqprop_ptr(s) __seqprop(s, ptr)
+#define seqprop_sequence(s) __seqprop(s, sequence)
+#define seqprop_preemptible(s) __seqprop(s, preemptible)
+#define seqprop_assert(s) __seqprop(s, assert)

/**
* __read_seqcount_begin() - begin a seqcount_t read section w/o barrier
@@ -330,7 +330,7 @@ SEQCOUNT_LOCKNAME(ww_mutex, struct ww_mutex, true, &s->lock->base, ww_mu
({ \
unsigned __seq; \
\
- while ((__seq = __seqcount_sequence(s)) & 1) \
+ while ((__seq = seqprop_sequence(s)) & 1) \
cpu_relax(); \
\
kcsan_atomic_next(KCSAN_SEQLOCK_REGION_MAX); \
@@ -359,7 +359,7 @@ SEQCOUNT_LOCKNAME(ww_mutex, struct ww_mutex, true, &s->lock->base, ww_mu
*/
#define read_seqcount_begin(s) \
({ \
- seqcount_lockdep_reader_access(__seqcount_ptr(s)); \
+ seqcount_lockdep_reader_access(seqprop_ptr(s)); \
raw_read_seqcount_begin(s); \
})

@@ -376,7 +376,7 @@ SEQCOUNT_LOCKNAME(ww_mutex, struct ww_mutex, true, &s->lock->base, ww_mu
*/
#define raw_read_seqcount(s) \
({ \
- unsigned __seq = __seqcount_sequence(s); \
+ unsigned __seq = seqprop_sequence(s); \
\
smp_rmb(); \
kcsan_atomic_next(KCSAN_SEQLOCK_REGION_MAX); \
@@ -425,7 +425,7 @@ SEQCOUNT_LOCKNAME(ww_mutex, struct ww_mutex, true, &s->lock->base, ww_mu
* Return: true if a read section retry is required, else false
*/
#define __read_seqcount_retry(s, start) \
- __read_seqcount_t_retry(__seqcount_ptr(s), start)
+ __read_seqcount_t_retry(seqprop_ptr(s), start)

static inline int __read_seqcount_t_retry(const seqcount_t *s, unsigned start)
{
@@ -445,7 +445,7 @@ static inline int __read_seqcount_t_retry(const seqcount_t *s, unsigned start)
* Return: true if a read section retry is required, else false
*/
#define read_seqcount_retry(s, start) \
- read_seqcount_t_retry(__seqcount_ptr(s), start)
+ read_seqcount_t_retry(seqprop_ptr(s), start)

static inline int read_seqcount_t_retry(const seqcount_t *s, unsigned start)
{
@@ -459,10 +459,10 @@ static inline int read_seqcount_t_retry(const seqcount_t *s, unsigned start)
*/
#define raw_write_seqcount_begin(s) \
do { \
- if (__seqcount_lock_preemptible(s)) \
+ if (seqprop_preemptible(s)) \
preempt_disable(); \
\
- raw_write_seqcount_t_begin(__seqcount_ptr(s)); \
+ raw_write_seqcount_t_begin(seqprop_ptr(s)); \
} while (0)

static inline void raw_write_seqcount_t_begin(seqcount_t *s)
@@ -478,9 +478,9 @@ static inline void raw_write_seqcount_t_begin(seqcount_t *s)
*/
#define raw_write_seqcount_end(s) \
do { \
- raw_write_seqcount_t_end(__seqcount_ptr(s)); \
+ raw_write_seqcount_t_end(seqprop_ptr(s)); \
\
- if (__seqcount_lock_preemptible(s)) \
+ if (seqprop_preemptible(s)) \
preempt_enable(); \
} while (0)

@@ -501,12 +501,12 @@ static inline void raw_write_seqcount_t_end(seqcount_t *s)
*/
#define write_seqcount_begin_nested(s, subclass) \
do { \
- __seqcount_assert_lock_held(s); \
+ seqprop_assert(s); \
\
- if (__seqcount_lock_preemptible(s)) \
+ if (seqprop_preemptible(s)) \
preempt_disable(); \
\
- write_seqcount_t_begin_nested(__seqcount_ptr(s), subclass); \
+ write_seqcount_t_begin_nested(seqprop_ptr(s), subclass); \
} while (0)

static inline void write_seqcount_t_begin_nested(seqcount_t *s, int subclass)
@@ -528,12 +528,12 @@ static inline void write_seqcount_t_begin_nested(seqcount_t *s, int subclass)
*/
#define write_seqcount_begin(s) \
do { \
- __seqcount_assert_lock_held(s); \
+ seqprop_assert(s); \
\
- if (__seqcount_lock_preemptible(s)) \
+ if (seqprop_preemptible(s)) \
preempt_disable(); \
\
- write_seqcount_t_begin(__seqcount_ptr(s)); \
+ write_seqcount_t_begin(seqprop_ptr(s)); \
} while (0)

static inline void write_seqcount_t_begin(seqcount_t *s)
@@ -549,9 +549,9 @@ static inline void write_seqcount_t_begin(seqcount_t *s)
*/
#define write_seqcount_end(s) \
do { \
- write_seqcount_t_end(__seqcount_ptr(s)); \
+ write_seqcount_t_end(seqprop_ptr(s)); \
\
- if (__seqcount_lock_preemptible(s)) \
+ if (seqprop_preemptible(s)) \
preempt_enable(); \
} while (0)

@@ -603,7 +603,7 @@ static inline void write_seqcount_t_end(seqcount_t *s)
* }
*/
#define raw_write_seqcount_barrier(s) \
- raw_write_seqcount_t_barrier(__seqcount_ptr(s))
+ raw_write_seqcount_t_barrier(seqprop_ptr(s))

static inline void raw_write_seqcount_t_barrier(seqcount_t *s)
{
@@ -623,7 +623,7 @@ static inline void raw_write_seqcount_t_barrier(seqcount_t *s)
* will complete successfully and see data older than this.
*/
#define write_seqcount_invalidate(s) \
- write_seqcount_t_invalidate(__seqcount_ptr(s))
+ write_seqcount_t_invalidate(seqprop_ptr(s))

static inline void write_seqcount_t_invalidate(seqcount_t *s)
{

2020-12-03 10:40:26

by tip-bot2 for Peter Zijlstra

Subject: [tip: locking/core] seqlock: Rename __seqprop() users

The following commit has been merged into the locking/core branch of tip:

Commit-ID: ab440b2c604b60fe90885270fcfeb5c3dd5d6fae
Gitweb: https://git.kernel.org/tip/ab440b2c604b60fe90885270fcfeb5c3dd5d6fae
Author: Peter Zijlstra <[email protected]>
AuthorDate: Tue, 10 Nov 2020 13:44:17 +01:00
Committer: Peter Zijlstra <[email protected]>
CommitterDate: Thu, 03 Dec 2020 11:20:51 +01:00

seqlock: Rename __seqprop() users

More consistent naming should make it easier to untangle the _Generic
token pasting maze called __seqprop().

Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Link: https://lkml.kernel.org/r/[email protected]
---
include/linux/seqlock.h | 46 ++++++++++++++++++++--------------------
1 file changed, 23 insertions(+), 23 deletions(-)

diff --git a/include/linux/seqlock.h b/include/linux/seqlock.h
index 8d85524..d89134c 100644
--- a/include/linux/seqlock.h
+++ b/include/linux/seqlock.h
@@ -307,10 +307,10 @@ SEQCOUNT_LOCKNAME(ww_mutex, struct ww_mutex, true, &s->lock->base, ww_mu
__seqprop_case((s), mutex, prop), \
__seqprop_case((s), ww_mutex, prop))

-#define __seqcount_ptr(s) __seqprop(s, ptr)
-#define __seqcount_sequence(s) __seqprop(s, sequence)
-#define __seqcount_lock_preemptible(s) __seqprop(s, preemptible)
-#define __seqcount_assert_lock_held(s) __seqprop(s, assert)
+#define seqprop_ptr(s) __seqprop(s, ptr)
+#define seqprop_sequence(s) __seqprop(s, sequence)
+#define seqprop_preemptible(s) __seqprop(s, preemptible)
+#define seqprop_assert(s) __seqprop(s, assert)

/**
* __read_seqcount_begin() - begin a seqcount_t read section w/o barrier
@@ -330,7 +330,7 @@ SEQCOUNT_LOCKNAME(ww_mutex, struct ww_mutex, true, &s->lock->base, ww_mu
({ \
unsigned __seq; \
\
- while ((__seq = __seqcount_sequence(s)) & 1) \
+ while ((__seq = seqprop_sequence(s)) & 1) \
cpu_relax(); \
\
kcsan_atomic_next(KCSAN_SEQLOCK_REGION_MAX); \
@@ -359,7 +359,7 @@ SEQCOUNT_LOCKNAME(ww_mutex, struct ww_mutex, true, &s->lock->base, ww_mu
*/
#define read_seqcount_begin(s) \
({ \
- seqcount_lockdep_reader_access(__seqcount_ptr(s)); \
+ seqcount_lockdep_reader_access(seqprop_ptr(s)); \
raw_read_seqcount_begin(s); \
})

@@ -376,7 +376,7 @@ SEQCOUNT_LOCKNAME(ww_mutex, struct ww_mutex, true, &s->lock->base, ww_mu
*/
#define raw_read_seqcount(s) \
({ \
- unsigned __seq = __seqcount_sequence(s); \
+ unsigned __seq = seqprop_sequence(s); \
\
smp_rmb(); \
kcsan_atomic_next(KCSAN_SEQLOCK_REGION_MAX); \
@@ -425,7 +425,7 @@ SEQCOUNT_LOCKNAME(ww_mutex, struct ww_mutex, true, &s->lock->base, ww_mu
* Return: true if a read section retry is required, else false
*/
#define __read_seqcount_retry(s, start) \
- __read_seqcount_t_retry(__seqcount_ptr(s), start)
+ __read_seqcount_t_retry(seqprop_ptr(s), start)

static inline int __read_seqcount_t_retry(const seqcount_t *s, unsigned start)
{
@@ -445,7 +445,7 @@ static inline int __read_seqcount_t_retry(const seqcount_t *s, unsigned start)
* Return: true if a read section retry is required, else false
*/
#define read_seqcount_retry(s, start) \
- read_seqcount_t_retry(__seqcount_ptr(s), start)
+ read_seqcount_t_retry(seqprop_ptr(s), start)

static inline int read_seqcount_t_retry(const seqcount_t *s, unsigned start)
{
@@ -459,10 +459,10 @@ static inline int read_seqcount_t_retry(const seqcount_t *s, unsigned start)
*/
#define raw_write_seqcount_begin(s) \
do { \
- if (__seqcount_lock_preemptible(s)) \
+ if (seqprop_preemptible(s)) \
preempt_disable(); \
\
- raw_write_seqcount_t_begin(__seqcount_ptr(s)); \
+ raw_write_seqcount_t_begin(seqprop_ptr(s)); \
} while (0)

static inline void raw_write_seqcount_t_begin(seqcount_t *s)
@@ -478,9 +478,9 @@ static inline void raw_write_seqcount_t_begin(seqcount_t *s)
*/
#define raw_write_seqcount_end(s) \
do { \
- raw_write_seqcount_t_end(__seqcount_ptr(s)); \
+ raw_write_seqcount_t_end(seqprop_ptr(s)); \
\
- if (__seqcount_lock_preemptible(s)) \
+ if (seqprop_preemptible(s)) \
preempt_enable(); \
} while (0)

@@ -501,12 +501,12 @@ static inline void raw_write_seqcount_t_end(seqcount_t *s)
*/
#define write_seqcount_begin_nested(s, subclass) \
do { \
- __seqcount_assert_lock_held(s); \
+ seqprop_assert(s); \
\
- if (__seqcount_lock_preemptible(s)) \
+ if (seqprop_preemptible(s)) \
preempt_disable(); \
\
- write_seqcount_t_begin_nested(__seqcount_ptr(s), subclass); \
+ write_seqcount_t_begin_nested(seqprop_ptr(s), subclass); \
} while (0)

static inline void write_seqcount_t_begin_nested(seqcount_t *s, int subclass)
@@ -528,12 +528,12 @@ static inline void write_seqcount_t_begin_nested(seqcount_t *s, int subclass)
*/
#define write_seqcount_begin(s) \
do { \
- __seqcount_assert_lock_held(s); \
+ seqprop_assert(s); \
\
- if (__seqcount_lock_preemptible(s)) \
+ if (seqprop_preemptible(s)) \
preempt_disable(); \
\
- write_seqcount_t_begin(__seqcount_ptr(s)); \
+ write_seqcount_t_begin(seqprop_ptr(s)); \
} while (0)

static inline void write_seqcount_t_begin(seqcount_t *s)
@@ -549,9 +549,9 @@ static inline void write_seqcount_t_begin(seqcount_t *s)
*/
#define write_seqcount_end(s) \
do { \
- write_seqcount_t_end(__seqcount_ptr(s)); \
+ write_seqcount_t_end(seqprop_ptr(s)); \
\
- if (__seqcount_lock_preemptible(s)) \
+ if (seqprop_preemptible(s)) \
preempt_enable(); \
} while (0)

@@ -603,7 +603,7 @@ static inline void write_seqcount_t_end(seqcount_t *s)
* }
*/
#define raw_write_seqcount_barrier(s) \
- raw_write_seqcount_t_barrier(__seqcount_ptr(s))
+ raw_write_seqcount_t_barrier(seqprop_ptr(s))

static inline void raw_write_seqcount_t_barrier(seqcount_t *s)
{
@@ -623,7 +623,7 @@ static inline void raw_write_seqcount_t_barrier(seqcount_t *s)
* will complete successfully and see data older than this.
*/
#define write_seqcount_invalidate(s) \
- write_seqcount_t_invalidate(__seqcount_ptr(s))
+ write_seqcount_t_invalidate(seqprop_ptr(s))

static inline void write_seqcount_t_invalidate(seqcount_t *s)
{

2020-12-09 18:45:13

by tip-bot2 for Ahmed S. Darwish

Subject: [tip: locking/core] seqlock: kernel-doc: Specify when preemption is automatically altered

The following commit has been merged into the locking/core branch of tip:

Commit-ID: cb262935a166bdef0ccfe6e2adffa00c0f2d038a
Gitweb: https://git.kernel.org/tip/cb262935a166bdef0ccfe6e2adffa00c0f2d038a
Author: Ahmed S. Darwish <[email protected]>
AuthorDate: Sun, 06 Dec 2020 17:21:43 +01:00
Committer: Peter Zijlstra <[email protected]>
CommitterDate: Wed, 09 Dec 2020 17:08:49 +01:00

seqlock: kernel-doc: Specify when preemption is automatically altered

The kernel-doc annotations for sequence counters write side functions
are incomplete: they do not specify when preemption is automatically
disabled and re-enabled.

This has confused a number of call-site developers. Fix it.

Signed-off-by: Ahmed S. Darwish <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Link: https://lkml.kernel.org/r/CAHk-=wikhGExmprXgaW+MVXG1zsGpztBbVwOb23vetk41EtTBQ@mail.gmail.com
---
include/linux/seqlock.h | 17 +++++++++++------
1 file changed, 11 insertions(+), 6 deletions(-)

diff --git a/include/linux/seqlock.h b/include/linux/seqlock.h
index 235cbc6..2f7bb92 100644
--- a/include/linux/seqlock.h
+++ b/include/linux/seqlock.h
@@ -456,6 +456,8 @@ static inline int do_read_seqcount_retry(const seqcount_t *s, unsigned start)
/**
* raw_write_seqcount_begin() - start a seqcount_t write section w/o lockdep
* @s: Pointer to seqcount_t or any of the seqcount_LOCKNAME_t variants
+ *
+ * Context: check write_seqcount_begin()
*/
#define raw_write_seqcount_begin(s) \
do { \
@@ -475,6 +477,8 @@ static inline void do_raw_write_seqcount_begin(seqcount_t *s)
/**
* raw_write_seqcount_end() - end a seqcount_t write section w/o lockdep
* @s: Pointer to seqcount_t or any of the seqcount_LOCKNAME_t variants
+ *
+ * Context: check write_seqcount_end()
*/
#define raw_write_seqcount_end(s) \
do { \
@@ -498,6 +502,7 @@ static inline void do_raw_write_seqcount_end(seqcount_t *s)
* @subclass: lockdep nesting level
*
* See Documentation/locking/lockdep-design.rst
+ * Context: check write_seqcount_begin()
*/
#define write_seqcount_begin_nested(s, subclass) \
do { \
@@ -519,11 +524,10 @@ static inline void do_write_seqcount_begin_nested(seqcount_t *s, int subclass)
* write_seqcount_begin() - start a seqcount_t write side critical section
* @s: Pointer to seqcount_t or any of the seqcount_LOCKNAME_t variants
*
- * write_seqcount_begin opens a write side critical section of the given
- * seqcount_t.
- *
- * Context: seqcount_t write side critical sections must be serialized and
- * non-preemptible. If readers can be invoked from hardirq or softirq
+ * Context: sequence counter write side sections must be serialized and
+ * non-preemptible. Preemption will be automatically disabled if and
+ * only if the seqcount write serialization lock is associated, and
+ * preemptible. If readers can be invoked from hardirq or softirq
* context, interrupts or bottom halves must be respectively disabled.
*/
#define write_seqcount_begin(s) \
@@ -545,7 +549,8 @@ static inline void do_write_seqcount_begin(seqcount_t *s)
* write_seqcount_end() - end a seqcount_t write side critical section
* @s: Pointer to seqcount_t or any of the seqcount_LOCKNAME_t variants
*
- * The write section must've been opened with write_seqcount_begin().
+ * Context: Preemption will be automatically re-enabled if and only if
+ * the seqcount write serialization lock is associated, and preemptible.
*/
#define write_seqcount_end(s) \
do { \

2020-12-09 18:46:16

by tip-bot2 for Ahmed S. Darwish

Subject: [tip: locking/core] seqlock: Prefix internal seqcount_t-only macros with a "do_"

The following commit has been merged into the locking/core branch of tip:

Commit-ID: 66bcfcdf89d00f2409f4b5da0f8c20c08318dc72
Gitweb: https://git.kernel.org/tip/66bcfcdf89d00f2409f4b5da0f8c20c08318dc72
Author: Ahmed S. Darwish <[email protected]>
AuthorDate: Sun, 06 Dec 2020 17:21:42 +01:00
Committer: Peter Zijlstra <[email protected]>
CommitterDate: Wed, 09 Dec 2020 17:08:49 +01:00

seqlock: Prefix internal seqcount_t-only macros with a "do_"

When the seqcount_LOCKNAME_t group of data types were introduced, two
classes of seqlock.h sequence counter macros were added:

- An external public API which can either take a plain seqcount_t or
any of the seqcount_LOCKNAME_t variants.

- An internal API which takes only a plain seqcount_t.

To distinguish between the two groups, the "*_seqcount_t_*" pattern was
used for the latter. This confused a number of mm/ call-site developers,
and Linus also commented that it was not a standard practice for marking
seqlock.h internal APIs.

Distinguish the latter group of macros by prefixing a "do_".

Signed-off-by: Ahmed S. Darwish <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Link: https://lkml.kernel.org/r/CAHk-=wikhGExmprXgaW+MVXG1zsGpztBbVwOb23vetk41EtTBQ@mail.gmail.com
---
include/linux/seqlock.h | 66 ++++++++++++++++++++--------------------
1 file changed, 33 insertions(+), 33 deletions(-)

diff --git a/include/linux/seqlock.h b/include/linux/seqlock.h
index d89134c..235cbc6 100644
--- a/include/linux/seqlock.h
+++ b/include/linux/seqlock.h
@@ -425,9 +425,9 @@ SEQCOUNT_LOCKNAME(ww_mutex, struct ww_mutex, true, &s->lock->base, ww_mu
* Return: true if a read section retry is required, else false
*/
#define __read_seqcount_retry(s, start) \
- __read_seqcount_t_retry(seqprop_ptr(s), start)
+ do___read_seqcount_retry(seqprop_ptr(s), start)

-static inline int __read_seqcount_t_retry(const seqcount_t *s, unsigned start)
+static inline int do___read_seqcount_retry(const seqcount_t *s, unsigned start)
{
kcsan_atomic_next(0);
return unlikely(READ_ONCE(s->sequence) != start);
@@ -445,12 +445,12 @@ static inline int __read_seqcount_t_retry(const seqcount_t *s, unsigned start)
* Return: true if a read section retry is required, else false
*/
#define read_seqcount_retry(s, start) \
- read_seqcount_t_retry(seqprop_ptr(s), start)
+ do_read_seqcount_retry(seqprop_ptr(s), start)

-static inline int read_seqcount_t_retry(const seqcount_t *s, unsigned start)
+static inline int do_read_seqcount_retry(const seqcount_t *s, unsigned start)
{
smp_rmb();
- return __read_seqcount_t_retry(s, start);
+ return do___read_seqcount_retry(s, start);
}

/**
@@ -462,10 +462,10 @@ do { \
if (seqprop_preemptible(s)) \
preempt_disable(); \
\
- raw_write_seqcount_t_begin(seqprop_ptr(s)); \
+ do_raw_write_seqcount_begin(seqprop_ptr(s)); \
} while (0)

-static inline void raw_write_seqcount_t_begin(seqcount_t *s)
+static inline void do_raw_write_seqcount_begin(seqcount_t *s)
{
kcsan_nestable_atomic_begin();
s->sequence++;
@@ -478,13 +478,13 @@ static inline void raw_write_seqcount_t_begin(seqcount_t *s)
*/
#define raw_write_seqcount_end(s) \
do { \
- raw_write_seqcount_t_end(seqprop_ptr(s)); \
+ do_raw_write_seqcount_end(seqprop_ptr(s)); \
\
if (seqprop_preemptible(s)) \
preempt_enable(); \
} while (0)

-static inline void raw_write_seqcount_t_end(seqcount_t *s)
+static inline void do_raw_write_seqcount_end(seqcount_t *s)
{
smp_wmb();
s->sequence++;
@@ -506,12 +506,12 @@ do { \
if (seqprop_preemptible(s)) \
preempt_disable(); \
\
- write_seqcount_t_begin_nested(seqprop_ptr(s), subclass); \
+ do_write_seqcount_begin_nested(seqprop_ptr(s), subclass); \
} while (0)

-static inline void write_seqcount_t_begin_nested(seqcount_t *s, int subclass)
+static inline void do_write_seqcount_begin_nested(seqcount_t *s, int subclass)
{
- raw_write_seqcount_t_begin(s);
+ do_raw_write_seqcount_begin(s);
seqcount_acquire(&s->dep_map, subclass, 0, _RET_IP_);
}

@@ -533,12 +533,12 @@ do { \
if (seqprop_preemptible(s)) \
preempt_disable(); \
\
- write_seqcount_t_begin(seqprop_ptr(s)); \
+ do_write_seqcount_begin(seqprop_ptr(s)); \
} while (0)

-static inline void write_seqcount_t_begin(seqcount_t *s)
+static inline void do_write_seqcount_begin(seqcount_t *s)
{
- write_seqcount_t_begin_nested(s, 0);
+ do_write_seqcount_begin_nested(s, 0);
}

/**
@@ -549,16 +549,16 @@ static inline void write_seqcount_t_begin(seqcount_t *s)
*/
#define write_seqcount_end(s) \
do { \
- write_seqcount_t_end(seqprop_ptr(s)); \
+ do_write_seqcount_end(seqprop_ptr(s)); \
\
if (seqprop_preemptible(s)) \
preempt_enable(); \
} while (0)

-static inline void write_seqcount_t_end(seqcount_t *s)
+static inline void do_write_seqcount_end(seqcount_t *s)
{
seqcount_release(&s->dep_map, _RET_IP_);
- raw_write_seqcount_t_end(s);
+ do_raw_write_seqcount_end(s);
}

/**
@@ -603,9 +603,9 @@ static inline void write_seqcount_t_end(seqcount_t *s)
* }
*/
#define raw_write_seqcount_barrier(s) \
- raw_write_seqcount_t_barrier(seqprop_ptr(s))
+ do_raw_write_seqcount_barrier(seqprop_ptr(s))

-static inline void raw_write_seqcount_t_barrier(seqcount_t *s)
+static inline void do_raw_write_seqcount_barrier(seqcount_t *s)
{
kcsan_nestable_atomic_begin();
s->sequence++;
@@ -623,9 +623,9 @@ static inline void raw_write_seqcount_t_barrier(seqcount_t *s)
* will complete successfully and see data older than this.
*/
#define write_seqcount_invalidate(s) \
- write_seqcount_t_invalidate(seqprop_ptr(s))
+ do_write_seqcount_invalidate(seqprop_ptr(s))

-static inline void write_seqcount_t_invalidate(seqcount_t *s)
+static inline void do_write_seqcount_invalidate(seqcount_t *s)
{
smp_wmb();
kcsan_nestable_atomic_begin();
@@ -865,9 +865,9 @@ static inline unsigned read_seqretry(const seqlock_t *sl, unsigned start)
}

/*
- * For all seqlock_t write side functions, use write_seqcount_*t*_begin()
- * instead of the generic write_seqcount_begin(). This way, no redundant
- * lockdep_assert_held() checks are added.
+ * For all seqlock_t write side functions, use the the internal
+ * do_write_seqcount_begin() instead of generic write_seqcount_begin().
+ * This way, no redundant lockdep_assert_held() checks are added.
*/

/**
@@ -886,7 +886,7 @@ static inline unsigned read_seqretry(const seqlock_t *sl, unsigned start)
static inline void write_seqlock(seqlock_t *sl)
{
spin_lock(&sl->lock);
- write_seqcount_t_begin(&sl->seqcount.seqcount);
+ do_write_seqcount_begin(&sl->seqcount.seqcount);
}

/**
@@ -898,7 +898,7 @@ static inline void write_seqlock(seqlock_t *sl)
*/
static inline void write_sequnlock(seqlock_t *sl)
{
- write_seqcount_t_end(&sl->seqcount.seqcount);
+ do_write_seqcount_end(&sl->seqcount.seqcount);
spin_unlock(&sl->lock);
}

@@ -912,7 +912,7 @@ static inline void write_sequnlock(seqlock_t *sl)
static inline void write_seqlock_bh(seqlock_t *sl)
{
spin_lock_bh(&sl->lock);
- write_seqcount_t_begin(&sl->seqcount.seqcount);
+ do_write_seqcount_begin(&sl->seqcount.seqcount);
}

/**
@@ -925,7 +925,7 @@ static inline void write_seqlock_bh(seqlock_t *sl)
*/
static inline void write_sequnlock_bh(seqlock_t *sl)
{
- write_seqcount_t_end(&sl->seqcount.seqcount);
+ do_write_seqcount_end(&sl->seqcount.seqcount);
spin_unlock_bh(&sl->lock);
}

@@ -939,7 +939,7 @@ static inline void write_sequnlock_bh(seqlock_t *sl)
static inline void write_seqlock_irq(seqlock_t *sl)
{
spin_lock_irq(&sl->lock);
- write_seqcount_t_begin(&sl->seqcount.seqcount);
+ do_write_seqcount_begin(&sl->seqcount.seqcount);
}

/**
@@ -951,7 +951,7 @@ static inline void write_seqlock_irq(seqlock_t *sl)
*/
static inline void write_sequnlock_irq(seqlock_t *sl)
{
- write_seqcount_t_end(&sl->seqcount.seqcount);
+ do_write_seqcount_end(&sl->seqcount.seqcount);
spin_unlock_irq(&sl->lock);
}

@@ -960,7 +960,7 @@ static inline unsigned long __write_seqlock_irqsave(seqlock_t *sl)
unsigned long flags;

spin_lock_irqsave(&sl->lock, flags);
- write_seqcount_t_begin(&sl->seqcount.seqcount);
+ do_write_seqcount_begin(&sl->seqcount.seqcount);
return flags;
}

@@ -989,7 +989,7 @@ static inline unsigned long __write_seqlock_irqsave(seqlock_t *sl)
static inline void
write_sequnlock_irqrestore(seqlock_t *sl, unsigned long flags)
{
- write_seqcount_t_end(&sl->seqcount.seqcount);
+ do_write_seqcount_end(&sl->seqcount.seqcount);
spin_unlock_irqrestore(&sl->lock, flags);
}