2023-05-18 22:58:50

by SeongJae Park

Subject: [PATCH 0/4] Docs/RCU/rculist_nulls: Minor fixups

This patchset fixes minor problems in the example code snippets of the
rculist_nulls.rst file.

SeongJae Park (4):
Docs/RCU/rculist_nulls: Fix trivial coding style
Docs/RCU/rculist_nulls: Assign 'obj' before use from the examples
Docs/RCU/rculist_nulls: Fix hlist_head field name of 'obj'
Docs/RCU/rculist_nulls: Drop unnecessary '_release' in insert function

Documentation/RCU/rculist_nulls.rst | 14 +++++++-------
1 file changed, 7 insertions(+), 7 deletions(-)

--
2.25.1



2023-05-18 23:07:47

by SeongJae Park

Subject: [PATCH 4/4] Docs/RCU/rculist_nulls: Drop unnecessary '_release' in insert function

The document says we can avoid extra smp_rmb() in lockless_lookup() and
extra _release() in insert function when hlist_nulls is used. However,
the example code snippet for the insert function is still using the
extra _release(). Drop it.

Signed-off-by: SeongJae Park <[email protected]>
---
Documentation/RCU/rculist_nulls.rst | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/Documentation/RCU/rculist_nulls.rst b/Documentation/RCU/rculist_nulls.rst
index 5cd6f3f8810f..463270273d89 100644
--- a/Documentation/RCU/rculist_nulls.rst
+++ b/Documentation/RCU/rculist_nulls.rst
@@ -191,7 +191,7 @@ scan the list again without harm.
obj = kmem_cache_alloc(cachep);
lock_chain(); // typically a spin_lock()
obj->key = key;
- atomic_set_release(&obj->refcnt, 1); // key before refcnt
+ atomic_set(&obj->refcnt, 1);
/*
* insert obj in RCU way (readers might be traversing chain)
*/
--
2.25.1


2023-05-19 19:05:08

by Joel Fernandes

Subject: Re: [PATCH 4/4] Docs/RCU/rculist_nulls: Drop unnecessary '_release' in insert function

On Thu, May 18, 2023 at 6:40 PM SeongJae Park <[email protected]> wrote:
>
> The document says we can avoid extra smp_rmb() in lockless_lookup() and
> extra _release() in insert function when hlist_nulls is used. However,
> the example code snippet for the insert function is still using the
> extra _release(). Drop it.
>
> Signed-off-by: SeongJae Park <[email protected]>
> ---
> Documentation/RCU/rculist_nulls.rst | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/Documentation/RCU/rculist_nulls.rst b/Documentation/RCU/rculist_nulls.rst
> index 5cd6f3f8810f..463270273d89 100644
> --- a/Documentation/RCU/rculist_nulls.rst
> +++ b/Documentation/RCU/rculist_nulls.rst
> @@ -191,7 +191,7 @@ scan the list again without harm.
> obj = kmem_cache_alloc(cachep);
> lock_chain(); // typically a spin_lock()
> obj->key = key;
> - atomic_set_release(&obj->refcnt, 1); // key before refcnt
> + atomic_set(&obj->refcnt, 1);
> /*
> * insert obj in RCU way (readers might be traversing chain)
> */

If write to ->refcnt of 1 is reordered with setting of ->key, what
prevents the 'lookup algorithm' from doing a key match (obj->key ==
key) before the refcount has been initialized?
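
To make the concern concrete, the interleaving being asked about would
look roughly like this (a sketch reusing the document's example names;
the reader side is the lookup algorithm's key check):

    /* Writer, if atomic_set_release() were weakened to atomic_set(): */
    obj->key = key;                /* nothing orders these two stores,  */
    atomic_set(&obj->refcnt, 1);   /* so ->key may become visible first */

    /* Reader, traversing a SLAB_TYPESAFE_BY_RCU-recycled object: */
    if (obj->key == key)           /* matches the just-stored key...    */
        if (!try_get_ref(obj))     /* ...while ->refcnt still holds the */
            goto begin;            /* stale, uninitialized value        */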

Are we sure the reordering mentioned in the document is the same as
the reordering prevented by the atomic_set_release()?

For the other 3 patches, feel free to add:
Reviewed-by: Joel Fernandes (Google) <[email protected]>

thanks,

- Joel

2023-06-09 19:47:43

by SeongJae Park

Subject: Re: [PATCH 4/4] Docs/RCU/rculist_nulls: Drop unnecessary '_release' in insert function

On Fri, 19 May 2023 14:52:50 -0400 Joel Fernandes <[email protected]> wrote:

> On Thu, May 18, 2023 at 6:40 PM SeongJae Park <[email protected]> wrote:
> >
> > The document says we can avoid extra smp_rmb() in lockless_lookup() and
> > extra _release() in insert function when hlist_nulls is used. However,
> > the example code snippet for the insert function is still using the
> > extra _release(). Drop it.
> >
> > Signed-off-by: SeongJae Park <[email protected]>
> > ---
> > Documentation/RCU/rculist_nulls.rst | 2 +-
> > 1 file changed, 1 insertion(+), 1 deletion(-)
> >
> > diff --git a/Documentation/RCU/rculist_nulls.rst b/Documentation/RCU/rculist_nulls.rst
> > index 5cd6f3f8810f..463270273d89 100644
> > --- a/Documentation/RCU/rculist_nulls.rst
> > +++ b/Documentation/RCU/rculist_nulls.rst
> > @@ -191,7 +191,7 @@ scan the list again without harm.
> > obj = kmem_cache_alloc(cachep);
> > lock_chain(); // typically a spin_lock()
> > obj->key = key;
> > - atomic_set_release(&obj->refcnt, 1); // key before refcnt
> > + atomic_set(&obj->refcnt, 1);
> > /*
> > * insert obj in RCU way (readers might be traversing chain)
> > */
>
> If write to ->refcnt of 1 is reordered with setting of ->key, what
> prevents the 'lookup algorithm' from doing a key match (obj->key ==
> key) before the refcount has been initialized?
>
> Are we sure the reordering mentioned in the document is the same as
> the reordering prevented by the atomic_set_release()?

Paul, may I ask your opinion?


Thanks,
SJ

>
> For the other 3 patches, feel free to add:
> Reviewed-by: Joel Fernandes (Google) <[email protected]>
>
> thanks,
>
> - Joel

2023-06-09 23:49:03

by Paul E. McKenney

Subject: Re: [PATCH 4/4] Docs/RCU/rculist_nulls: Drop unnecessary '_release' in insert function

On Fri, Jun 09, 2023 at 07:12:06PM +0000, SeongJae Park wrote:
> On Fri, 19 May 2023 14:52:50 -0400 Joel Fernandes <[email protected]> wrote:
>
> > On Thu, May 18, 2023 at 6:40 PM SeongJae Park <[email protected]> wrote:
> > >
> > > The document says we can avoid extra smp_rmb() in lockless_lookup() and
> > > extra _release() in insert function when hlist_nulls is used. However,
> > > the example code snippet for the insert function is still using the
> > > extra _release(). Drop it.
> > >
> > > Signed-off-by: SeongJae Park <[email protected]>
> > > ---
> > > Documentation/RCU/rculist_nulls.rst | 2 +-
> > > 1 file changed, 1 insertion(+), 1 deletion(-)
> > >
> > > diff --git a/Documentation/RCU/rculist_nulls.rst b/Documentation/RCU/rculist_nulls.rst
> > > index 5cd6f3f8810f..463270273d89 100644
> > > --- a/Documentation/RCU/rculist_nulls.rst
> > > +++ b/Documentation/RCU/rculist_nulls.rst
> > > @@ -191,7 +191,7 @@ scan the list again without harm.
> > > obj = kmem_cache_alloc(cachep);
> > > lock_chain(); // typically a spin_lock()
> > > obj->key = key;
> > > - atomic_set_release(&obj->refcnt, 1); // key before refcnt
> > > + atomic_set(&obj->refcnt, 1);
> > > /*
> > > * insert obj in RCU way (readers might be traversing chain)
> > > */
> >
> > If write to ->refcnt of 1 is reordered with setting of ->key, what
> > prevents the 'lookup algorithm' from doing a key match (obj->key ==
> > key) before the refcount has been initialized?
> >
> > Are we sure the reordering mentioned in the document is the same as
> > the reordering prevented by the atomic_set_release()?
>
> Paul, may I ask your opinion?

The next line of code is this:

hlist_nulls_add_head_rcu(&obj->obj_node, list);

If I understand the code correctly, obj (and thus *obj) are not
visible to readers before the hlist_nulls_add_head_rcu(). And
hlist_nulls_add_head_rcu() uses rcu_assign_pointer() to ensure that
initialization (including both ->key and ->refcnt) is ordered before
list insertion.
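
For reference, hlist_nulls_add_head_rcu() is roughly the following
(paraphrased from include/linux/rculist_nulls.h; the explanatory
comment is added):

    static inline void hlist_nulls_add_head_rcu(struct hlist_nulls_node *n,
                                                struct hlist_nulls_head *h)
    {
            struct hlist_nulls_node *first = h->first;

            n->next = first;
            WRITE_ONCE(n->pprev, &h->first);
            /* Release store: all prior initialization of the object is
             * ordered before the store that makes it reachable. */
            rcu_assign_pointer(hlist_nulls_first_rcu(h), n);
            if (!is_a_nulls(first))
                    WRITE_ONCE(first->pprev, &n->next);
    }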

Except that this memory is being allocated from a slab cache that was
created with SLAB_TYPESAFE_BY_RCU. This means that there can be readers
who gained a reference before this object was freed, and who still hold
their references.
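
(For context, such a cache would be created along these lines; the
variable name is illustrative:)

    obj_cachep = kmem_cache_create("obj", sizeof(struct obj), 0,
                                   SLAB_TYPESAFE_BY_RCU, NULL);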

Unfortunately, the implementation of try_get_ref() is not shown. However,
if ->refcnt is non-zero, this can succeed, and if it succeeds, we need
the subsequent check of obj->key with key in the lookup algorithm to
be stable. For this check to be stable, try_get_ref() needs to use an
atomic operation with at least acquire semantics (kref_get_unless_zero()
would work), and this must pair with something in the initialization.
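
A sketch of a try_get_ref() with the required semantics, assuming
->refcnt is the atomic_t from the document's examples (on success,
atomic_add_unless() is fully ordered, hence at least an acquire):

    static bool try_get_ref(struct obj *obj)
    {
            /* Conditional RMW: fails if ->refcnt is zero.  On success,
             * its ordering pairs with the writer's release, making the
             * subsequent obj->key check stable. */
            return atomic_add_unless(&obj->refcnt, 1, 0);
    }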

So I don't see how it is safe to weaken that atomic_set_release() to
atomic_set(), even on x86.

Or am I missing something subtle here?

Thanx, Paul

> Thanks,
> SJ
>
> >
> > For the other 3 patches, feel free to add:
> > Reviewed-by: Joel Fernandes (Google) <[email protected]>
> >
> > thanks,
> >
> > - Joel

2023-06-10 00:31:53

by SeongJae Park

Subject: Re: [PATCH 4/4] Docs/RCU/rculist_nulls: Drop unnecessary '_release' in insert function

On Fri, 9 Jun 2023 16:42:59 -0700 "Paul E. McKenney" <[email protected]> wrote:

> On Fri, Jun 09, 2023 at 07:12:06PM +0000, SeongJae Park wrote:
> > On Fri, 19 May 2023 14:52:50 -0400 Joel Fernandes <[email protected]> wrote:
> >
> > > On Thu, May 18, 2023 at 6:40 PM SeongJae Park <[email protected]> wrote:
> > > >
> > > > The document says we can avoid extra smp_rmb() in lockless_lookup() and
> > > > extra _release() in insert function when hlist_nulls is used. However,
> > > > the example code snippet for the insert function is still using the
> > > > extra _release(). Drop it.
> > > >
> > > > Signed-off-by: SeongJae Park <[email protected]>
> > > > ---
> > > > Documentation/RCU/rculist_nulls.rst | 2 +-
> > > > 1 file changed, 1 insertion(+), 1 deletion(-)
> > > >
> > > > diff --git a/Documentation/RCU/rculist_nulls.rst b/Documentation/RCU/rculist_nulls.rst
> > > > index 5cd6f3f8810f..463270273d89 100644
> > > > --- a/Documentation/RCU/rculist_nulls.rst
> > > > +++ b/Documentation/RCU/rculist_nulls.rst
> > > > @@ -191,7 +191,7 @@ scan the list again without harm.
> > > > obj = kmem_cache_alloc(cachep);
> > > > lock_chain(); // typically a spin_lock()
> > > > obj->key = key;
> > > > - atomic_set_release(&obj->refcnt, 1); // key before refcnt
> > > > + atomic_set(&obj->refcnt, 1);
> > > > /*
> > > > * insert obj in RCU way (readers might be traversing chain)
> > > > */
> > >
> > > If write to ->refcnt of 1 is reordered with setting of ->key, what
> > > prevents the 'lookup algorithm' from doing a key match (obj->key ==
> > > key) before the refcount has been initialized?
> > >
> > > Are we sure the reordering mentioned in the document is the same as
> > > the reordering prevented by the atomic_set_release()?
> >
> > Paul, may I ask your opinion?
>
> The next line of code is this:
>
> hlist_nulls_add_head_rcu(&obj->obj_node, list);
>
> If I understand the code correctly, obj (and thus *obj) are not
> visible to readers before the hlist_nulls_add_head_rcu(). And
> hlist_nulls_add_head_rcu() uses rcu_assign_pointer() to ensure that
> initialization (including both ->key and ->refcnt) is ordered before
> list insertion.
>
> Except that this memory is being allocated from a slab cache that was
> created with SLAB_TYPESAFE_BY_RCU. This means that there can be readers
> who gained a reference before this object was freed, and who still hold
> their references.
>
> Unfortunately, the implementation of try_get_ref() is not shown. However,
> if ->refcnt is non-zero, this can succeed, and if it succeeds, we need
> the subsequent check of obj->key with key in the lookup algorithm to
> be stable. For this check to be stable, try_get_ref() needs to use an
> atomic operation with at least acquire semantics (kref_get_unless_zero()
> would work), and this must pair with something in the initialization.
>
> So I don't see how it is safe to weaken that atomic_set_release() to
> atomic_set(), even on x86.

Thank you for the nice explanation, and I agree.

>
> Or am I missing something subtle here?

I found the text saying the extra _release() in the insert function is not
needed[1], and I thought it meant the atomic_set_release(). Am I misreading
it? If not, would it be better to fix the text, for example, as below?

```
--- a/Documentation/RCU/rculist_nulls.rst
+++ b/Documentation/RCU/rculist_nulls.rst
@@ -129,8 +129,7 @@ very very fast (before the end of RCU grace period)
Avoiding extra smp_rmb()
========================

-With hlist_nulls we can avoid extra smp_rmb() in lockless_lookup()
-and extra _release() in insert function.
+With hlist_nulls we can avoid extra smp_rmb() in lockless_lookup().

For example, if we choose to store the slot number as the 'nulls'
end-of-list marker for each slot of the hash table, we can detect
@@ -182,6 +181,9 @@ scan the list again without harm.
2) Insert algorithm
-------------------

+Same as the insert algorithm described earlier, but uses
+hlist_nulls_add_head_rcu() instead of hlist_add_head_rcu().
+
::

/*
@@ -191,7 +193,7 @@ scan the list again without harm.
obj = kmem_cache_alloc(cachep);
lock_chain(); // typically a spin_lock()
obj->key = key;
- atomic_set_release(&obj->refcnt, 1); // key before refcnt
+ atomic_set(&obj->refcnt, 1);
/*
* insert obj in RCU way (readers might be traversing chain)
*/
```

[1] https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/Documentation/RCU/rculist_nulls.rst#n133


Thanks,
SJ

>
> Thanx, Paul
>
> > Thanks,
> > SJ
> >
> > >
> > > For the other 3 patches, feel free to add:
> > > Reviewed-by: Joel Fernandes (Google) <[email protected]>
> > >
> > > thanks,
> > >
> > > - Joel

2023-06-10 06:16:19

by Alan Huang

Subject: Re: [PATCH 4/4] Docs/RCU/rculist_nulls: Drop unnecessary '_release' in insert function

Hi Paul,

> On Jun 10, 2023, at 07:42, Paul E. McKenney <[email protected]> wrote:
>
> On Fri, Jun 09, 2023 at 07:12:06PM +0000, SeongJae Park wrote:
>> On Fri, 19 May 2023 14:52:50 -0400 Joel Fernandes <[email protected]> wrote:
>>
>>> On Thu, May 18, 2023 at 6:40 PM SeongJae Park <[email protected]> wrote:
>>>>
>>>> The document says we can avoid extra smp_rmb() in lockless_lookup() and
>>>> extra _release() in insert function when hlist_nulls is used. However,
>>>> the example code snippet for the insert function is still using the
>>>> extra _release(). Drop it.
>>>>
>>>> Signed-off-by: SeongJae Park <[email protected]>
>>>> ---
>>>> Documentation/RCU/rculist_nulls.rst | 2 +-
>>>> 1 file changed, 1 insertion(+), 1 deletion(-)
>>>>
>>>> diff --git a/Documentation/RCU/rculist_nulls.rst b/Documentation/RCU/rculist_nulls.rst
>>>> index 5cd6f3f8810f..463270273d89 100644
>>>> --- a/Documentation/RCU/rculist_nulls.rst
>>>> +++ b/Documentation/RCU/rculist_nulls.rst
>>>> @@ -191,7 +191,7 @@ scan the list again without harm.
>>>> obj = kmem_cache_alloc(cachep);
>>>> lock_chain(); // typically a spin_lock()
>>>> obj->key = key;
>>>> - atomic_set_release(&obj->refcnt, 1); // key before refcnt
>>>> + atomic_set(&obj->refcnt, 1);
>>>> /*
>>>> * insert obj in RCU way (readers might be traversing chain)
>>>> */
>>>
>>> If write to ->refcnt of 1 is reordered with setting of ->key, what
>>> prevents the 'lookup algorithm' from doing a key match (obj->key ==
>>> key) before the refcount has been initialized?
>>>
>>> Are we sure the reordering mentioned in the document is the same as
>>> the reordering prevented by the atomic_set_release()?
>>
>> Paul, may I ask your opinion?
>
> The next line of code is this:
>
> hlist_nulls_add_head_rcu(&obj->obj_node, list);
>
> If I understand the code correctly, obj (and thus *obj) are not
> visible to readers before the hlist_nulls_add_head_rcu(). And
> hlist_nulls_add_head_rcu() uses rcu_assign_pointer() to ensure that
> initialization (including both ->key and ->refcnt) is ordered before
> list insertion.
>
> Except that this memory is being allocated from a slab cache that was
> created with SLAB_TYPESAFE_BY_RCU. This means that there can be readers
> who gained a reference before this object was freed, and who still hold
> their references.
>
> Unfortunately, the implementation of try_get_ref() is not shown. However,
> if ->refcnt is non-zero, this can succeed, and if it succeeds, we need
> the subsequent check of obj->key with key in the lookup algorithm to
> be stable. For this check to be stable, try_get_ref() needs to use an
> atomic operation with at least acquire semantics (kref_get_unless_zero()
> would work), and this must pair with something in the initialization.
>
> So I don't see how it is safe to weaken that atomic_set_release() to
> atomic_set(), even on x86.

I totally agree, but only in the case of using hlist_nulls.

That means, atomic_set_release() is not enough in the case without using hlist_nulls;
we must ensure that storing to obj->next (in hlist_add_head_rcu()) is ordered before storing
to obj->key. Otherwise, we can get the new 'next' and the old 'key', in which case we can't detect
an object movement (from one chain to another).

So, I'm afraid that the atomic_set_release() in the insertion algorithm without using hlist_nulls should
change back to:

smp_wmb();
atomic_set(&obj->refcnt, 1);
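
Spelled out, the insert sequence being described would be (a sketch
using the document's example names):

    obj->key = key;
    /*
     * smp_wmb() orders the ->key store before *all* later stores,
     * including the ->next store inside hlist_add_head_rcu(); a
     * _release on ->refcnt alone orders ->key only before ->refcnt.
     */
    smp_wmb();
    atomic_set(&obj->refcnt, 1);
    hlist_add_head_rcu(&obj->obj_node, list);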

Thanks,
Alan

>
> Or am I missing something subtle here?
>
> Thanx, Paul
>
>> Thanks,
>> SJ
>>
>>>
>>> For the other 3 patches, feel free to add:
>>> Reviewed-by: Joel Fernandes (Google) <[email protected]>
>>>
>>> thanks,
>>>
>>> - Joel



2023-06-10 06:25:53

by Alan Huang

Subject: Re: [PATCH 4/4] Docs/RCU/rculist_nulls: Drop unnecessary '_release' in insert function

Hi SJ,

> On Jun 10, 2023, at 08:20, SeongJae Park <[email protected]> wrote:
>
> On Fri, 9 Jun 2023 16:42:59 -0700 "Paul E. McKenney" <[email protected]> wrote:
>
>> On Fri, Jun 09, 2023 at 07:12:06PM +0000, SeongJae Park wrote:
>>> On Fri, 19 May 2023 14:52:50 -0400 Joel Fernandes <[email protected]> wrote:
>>>
>>>> On Thu, May 18, 2023 at 6:40 PM SeongJae Park <[email protected]> wrote:
>>>>>
>>>>> The document says we can avoid extra smp_rmb() in lockless_lookup() and
>>>>> extra _release() in insert function when hlist_nulls is used. However,
>>>>> the example code snippet for the insert function is still using the
>>>>> extra _release(). Drop it.
>>>>>
>>>>> Signed-off-by: SeongJae Park <[email protected]>
>>>>> ---
>>>>> Documentation/RCU/rculist_nulls.rst | 2 +-
>>>>> 1 file changed, 1 insertion(+), 1 deletion(-)
>>>>>
>>>>> diff --git a/Documentation/RCU/rculist_nulls.rst b/Documentation/RCU/rculist_nulls.rst
>>>>> index 5cd6f3f8810f..463270273d89 100644
>>>>> --- a/Documentation/RCU/rculist_nulls.rst
>>>>> +++ b/Documentation/RCU/rculist_nulls.rst
>>>>> @@ -191,7 +191,7 @@ scan the list again without harm.
>>>>> obj = kmem_cache_alloc(cachep);
>>>>> lock_chain(); // typically a spin_lock()
>>>>> obj->key = key;
>>>>> - atomic_set_release(&obj->refcnt, 1); // key before refcnt
>>>>> + atomic_set(&obj->refcnt, 1);
>>>>> /*
>>>>> * insert obj in RCU way (readers might be traversing chain)
>>>>> */
>>>>
>>>> If write to ->refcnt of 1 is reordered with setting of ->key, what
>>>> prevents the 'lookup algorithm' from doing a key match (obj->key ==
>>>> key) before the refcount has been initialized?
>>>>
>>>> Are we sure the reordering mentioned in the document is the same as
>>>> the reordering prevented by the atomic_set_release()?
>>>
>>> Paul, may I ask your opinion?
>>
>> The next line of code is this:
>>
>> hlist_nulls_add_head_rcu(&obj->obj_node, list);
>>
>> If I understand the code correctly, obj (and thus *obj) are not
>> visible to readers before the hlist_nulls_add_head_rcu(). And
>> hlist_nulls_add_head_rcu() uses rcu_assign_pointer() to ensure that
>> initialization (including both ->key and ->refcnt) is ordered before
>> list insertion.
>>
>> Except that this memory is being allocated from a slab cache that was
>> created with SLAB_TYPESAFE_BY_RCU. This means that there can be readers
>> who gained a reference before this object was freed, and who still hold
>> their references.
>>
>> Unfortunately, the implementation of try_get_ref() is not shown. However,
>> if ->refcnt is non-zero, this can succeed, and if it succeeds, we need
>> the subsequent check of obj->key with key in the lookup algorithm to
>> be stable. For this check to be stable, try_get_ref() needs to use an
>> atomic operation with at least acquire semantics (kref_get_unless_zero()
>> would work), and this must pair with something in the initialization.
>>
>> So I don't see how it is safe to weaken that atomic_set_release() to
>> atomic_set(), even on x86.
>
> Thank you for the nice explanation, and I agree.
>
>>
>> Or am I missing something subtle here?
>
> I found the text saying the extra _release() in the insert function is not
> needed[1], and I thought it meant the atomic_set_release(). Am I misreading
> it? If not, would it be better to fix the text, for example, as below?

The original text is:

“With hlist_nulls we can avoid extra smp_rmb() in lockless_lookup()
and extra smp_wmb() in insert function.”

We can avoid the extra smp_wmb(), but the _release is still required.

As Paul said:

>> Except that this memory is being allocated from a slab cache that was
>> created with SLAB_TYPESAFE_BY_RCU. This means that there can be readers
>> who gained a reference before this object was freed, and who still hold
>> their references.

Without the _release, a reader can still see the old 'key' after a
successful try_get_ref() (even though try_get_ref() has observed the
effect of the atomic_set()).
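
Spelled out, the required pairing would be (a sketch; try_get_ref() and
put_ref() are the document's reader-side helpers):

    /* Writer (insert): */
    obj->key = key;
    atomic_set_release(&obj->refcnt, 1);  /* ->key before ->refcnt */
    hlist_nulls_add_head_rcu(&obj->obj_node, list);

    /* Reader (lookup), after finding a candidate obj: */
    if (!try_get_ref(obj))       /* acquire RMW, pairs with the release  */
        goto begin;
    if (obj->key != key) {       /* stable: a reader that observed the   */
        put_ref(obj);            /* new ->refcnt also sees the new ->key */
        goto begin;
    }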

Thanks,
Alan

>
> ```
> --- a/Documentation/RCU/rculist_nulls.rst
> +++ b/Documentation/RCU/rculist_nulls.rst
> @@ -129,8 +129,7 @@ very very fast (before the end of RCU grace period)
> Avoiding extra smp_rmb()
> ========================
>
> -With hlist_nulls we can avoid extra smp_rmb() in lockless_lookup()
> -and extra _release() in insert function.
> +With hlist_nulls we can avoid extra smp_rmb() in lockless_lookup().
>
> For example, if we choose to store the slot number as the 'nulls'
> end-of-list marker for each slot of the hash table, we can detect
> @@ -182,6 +181,9 @@ scan the list again without harm.
> 2) Insert algorithm
> -------------------
>
> +Same as the insert algorithm described earlier, but uses
> +hlist_nulls_add_head_rcu() instead of hlist_add_head_rcu().
> +
> ::
>
> /*
> @@ -191,7 +193,7 @@ scan the list again without harm.
> obj = kmem_cache_alloc(cachep);
> lock_chain(); // typically a spin_lock()
> obj->key = key;
> - atomic_set_release(&obj->refcnt, 1); // key before refcnt
> + atomic_set(&obj->refcnt, 1);
> /*
> * insert obj in RCU way (readers might be traversing chain)
> */
> ```
>
> [1] https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/Documentation/RCU/rculist_nulls.rst#n133
>
>
> Thanks,
> SJ
>
>>
>> Thanx, Paul
>>
>>> Thanks,
>>> SJ
>>>
>>>>
>>>> For the other 3 patches, feel free to add:
>>>> Reviewed-by: Joel Fernandes (Google) <[email protected]>
>>>>
>>>> thanks,
>>>>
>>>> - Joel



2023-06-10 11:42:38

by Alan Huang

Subject: Re: [PATCH 4/4] Docs/RCU/rculist_nulls: Drop unnecessary '_release' in insert function


> On Jun 10, 2023, at 13:37, Alan Huang <[email protected]> wrote:
>
> Hi Paul,
>
>> On Jun 10, 2023, at 07:42, Paul E. McKenney <[email protected]> wrote:
>>
>> On Fri, Jun 09, 2023 at 07:12:06PM +0000, SeongJae Park wrote:
>>> On Fri, 19 May 2023 14:52:50 -0400 Joel Fernandes <[email protected]> wrote:
>>>
>>>> On Thu, May 18, 2023 at 6:40 PM SeongJae Park <[email protected]> wrote:
>>>>>
>>>>> The document says we can avoid extra smp_rmb() in lockless_lookup() and
>>>>> extra _release() in insert function when hlist_nulls is used. However,
>>>>> the example code snippet for the insert function is still using the
>>>>> extra _release(). Drop it.
>>>>>
>>>>> Signed-off-by: SeongJae Park <[email protected]>
>>>>> ---
>>>>> Documentation/RCU/rculist_nulls.rst | 2 +-
>>>>> 1 file changed, 1 insertion(+), 1 deletion(-)
>>>>>
>>>>> diff --git a/Documentation/RCU/rculist_nulls.rst b/Documentation/RCU/rculist_nulls.rst
>>>>> index 5cd6f3f8810f..463270273d89 100644
>>>>> --- a/Documentation/RCU/rculist_nulls.rst
>>>>> +++ b/Documentation/RCU/rculist_nulls.rst
>>>>> @@ -191,7 +191,7 @@ scan the list again without harm.
>>>>> obj = kmem_cache_alloc(cachep);
>>>>> lock_chain(); // typically a spin_lock()
>>>>> obj->key = key;
>>>>> - atomic_set_release(&obj->refcnt, 1); // key before refcnt
>>>>> + atomic_set(&obj->refcnt, 1);
>>>>> /*
>>>>> * insert obj in RCU way (readers might be traversing chain)
>>>>> */
>>>>
>>>> If write to ->refcnt of 1 is reordered with setting of ->key, what
>>>> prevents the 'lookup algorithm' from doing a key match (obj->key ==
>>>> key) before the refcount has been initialized?
>>>>
>>>> Are we sure the reordering mentioned in the document is the same as
>>>> the reordering prevented by the atomic_set_release()?
>>>
>>> Paul, may I ask your opinion?
>>
>> The next line of code is this:
>>
>> hlist_nulls_add_head_rcu(&obj->obj_node, list);
>>
>> If I understand the code correctly, obj (and thus *obj) are not
>> visible to readers before the hlist_nulls_add_head_rcu(). And
>> hlist_nulls_add_head_rcu() uses rcu_assign_pointer() to ensure that
>> initialization (including both ->key and ->refcnt) is ordered before
>> list insertion.
>>
>> Except that this memory is being allocated from a slab cache that was
>> created with SLAB_TYPESAFE_BY_RCU. This means that there can be readers
>> who gained a reference before this object was freed, and who still hold
>> their references.
>>
>> Unfortunately, the implementation of try_get_ref() is not shown. However,
>> if ->refcnt is non-zero, this can succeed, and if it succeeds, we need
>> the subsequent check of obj->key with key in the lookup algorithm to
>> be stable. For this check to be stable, try_get_ref() needs to use an
>> atomic operation with at least acquire semantics (kref_get_unless_zero()
>> would work), and this must pair with something in the initialization.
>>
>> So I don't see how it is safe to weaken that atomic_set_release() to
>> atomic_set(), even on x86.
>
> I totally agree, but only in the case of using hlist_nulls.
>
> That means, atomic_set_release() is not enough in the case without using hlist_nulls;
> we must ensure that storing to obj->next (in hlist_add_head_rcu()) is ordered before storing

Typo: not before, but after.

> to obj->key. Otherwise, we can get the new 'next' and the old 'key', in which case we can't detect
> an object movement (from one chain to another).
>
> So, I'm afraid that the atomic_set_release() in the insertion algorithm without using hlist_nulls should
> change back to:
>
> smp_wmb();
> atomic_set(&obj->refcnt, 1);
>
> Thanks,
> Alan
>
>>
>> Or am I missing something subtle here?
>>
>> Thanx, Paul
>>
>>> Thanks,
>>> SJ
>>>
>>>>
>>>> For the other 3 patches, feel free to add:
>>>> Reviewed-by: Joel Fernandes (Google) <[email protected]>
>>>>
>>>> thanks,
>>>>
>>>> - Joel