2024-03-21 03:53:46

by linke li

Subject: [PATCH v2] mm/slub: mark racy accesses on slab->slabs

The reads of slab->slabs are racy because the value may be changed
concurrently by put_cpu_partial(). In slabs_cpu_partial_show() and
show_slab_objects(), slab->slabs is only used for showing information.

Data-racy reads from shared variables that are used only for diagnostic
purposes should typically use data_race(), since it is normally not a
problem if the values are off by a little.

This patch is aimed at reducing the number of benign races reported by
KCSAN in order to focus future debugging effort on harmful races.

Signed-off-by: linke li <[email protected]>
Reviewed-by: Chengming Zhou <[email protected]>
---
mm/slub.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/mm/slub.c b/mm/slub.c
index 2ef88bbf56a3..0d700f6ca547 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -6052,7 +6052,7 @@ static ssize_t show_slab_objects(struct kmem_cache *s,
else if (flags & SO_OBJECTS)
WARN_ON_ONCE(1);
else
- x = slab->slabs;
+ x = data_race(slab->slabs);
total += x;
nodes[node] += x;
}
@@ -6257,7 +6257,7 @@ static ssize_t slabs_cpu_partial_show(struct kmem_cache *s, char *buf)
slab = slub_percpu_partial(per_cpu_ptr(s->cpu_slab, cpu));

if (slab)
- slabs += slab->slabs;
+ slabs += data_race(slab->slabs);
}
#endif

@@ -6271,7 +6271,7 @@ static ssize_t slabs_cpu_partial_show(struct kmem_cache *s, char *buf)

slab = slub_percpu_partial(per_cpu_ptr(s->cpu_slab, cpu));
if (slab) {
- slabs = READ_ONCE(slab->slabs);
+ slabs = data_race(slab->slabs);
objects = (slabs * oo_objects(s->oo)) / 2;
len += sysfs_emit_at(buf, len, " C%d=%d(%d)",
cpu, objects, slabs);
--
2.39.3 (Apple Git-146)
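
For context, the convention the patch follows can be sketched outside the
kernel. The userspace analogue below is only illustrative: DIAG_READ is a
hypothetical stand-in for the kernel's data_race() macro (which only tells
KCSAN that the race is intentional and does not change code generation),
writer() plays the role of put_cpu_partial(), and reader() plays the role of
slabs_cpu_partial_show(). It is not the kernel implementation.

/* Userspace sketch of a benign, diagnostic-only data race. */
#include <pthread.h>
#include <stdio.h>

/* Stand-in for data_race(): documents that the racy read is intentional. */
#define DIAG_READ(x) (x)

static int nr_partial;                 /* plays the role of slab->slabs */

static void *writer(void *arg)         /* plays the role of put_cpu_partial() */
{
	(void)arg;
	for (int i = 0; i < 1000000; i++)
		nr_partial = i;        /* concurrent, unsynchronized update */
	return NULL;
}

static void *reader(void *arg)         /* plays the role of slabs_cpu_partial_show() */
{
	(void)arg;
	/* The value may be stale or "off by a little"; that is acceptable
	 * because it is only printed for diagnostics. */
	printf("partial slabs (approx): %d\n", DIAG_READ(nr_partial));
	return NULL;
}

int main(void)
{
	pthread_t w, r;

	pthread_create(&w, NULL, writer, NULL);
	pthread_create(&r, NULL, reader, NULL);
	pthread_join(w, NULL);
	pthread_join(r, NULL);
	return 0;
}

Built with something like "cc -O2 -pthread sketch.c", the reader may print any
recent value of nr_partial. Annotating such reads keeps KCSAN (in a kernel
build) from reporting them, so genuinely harmful races stand out.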



2024-03-25 13:54:45

by Chengming Zhou

Subject: Re: [PATCH v2] mm/slub: mark racy accesses on slab->slabs

On 2024/3/25 16:48, Vlastimil Babka wrote:
> On 3/21/24 4:48 AM, linke li wrote:
>> The reads of slab->slabs are racy because the value may be changed
>> concurrently by put_cpu_partial(). In slabs_cpu_partial_show() and
>> show_slab_objects(), slab->slabs is only used for showing information.
>>
>> Data-racy reads from shared variables that are used only for diagnostic
>> purposes should typically use data_race(), since it is normally not a
>> problem if the values are off by a little.
>>
>> This patch is aimed at reducing the number of benign races reported by
>> KCSAN in order to focus future debugging effort on harmful races.
>>
>> Signed-off-by: linke li <[email protected]>
>> Reviewed-by: Chengming Zhou <[email protected]>
>
> Chengming provided feedback on v1 but did not offer a Reviewed-by, AFAICS? Or
> maybe he will offer it now? :)

Ah, right.

Reviewed-by: Chengming Zhou <[email protected]>

Thanks.

>
> Vlastimil
>
>> ---
>> mm/slub.c | 6 +++---
>> 1 file changed, 3 insertions(+), 3 deletions(-)
>>
>> diff --git a/mm/slub.c b/mm/slub.c
>> index 2ef88bbf56a3..0d700f6ca547 100644
>> --- a/mm/slub.c
>> +++ b/mm/slub.c
>> @@ -6052,7 +6052,7 @@ static ssize_t show_slab_objects(struct kmem_cache *s,
>> else if (flags & SO_OBJECTS)
>> WARN_ON_ONCE(1);
>> else
>> - x = slab->slabs;
>> + x = data_race(slab->slabs);
>> total += x;
>> nodes[node] += x;
>> }
>> @@ -6257,7 +6257,7 @@ static ssize_t slabs_cpu_partial_show(struct kmem_cache *s, char *buf)
>> slab = slub_percpu_partial(per_cpu_ptr(s->cpu_slab, cpu));
>>
>> if (slab)
>> - slabs += slab->slabs;
>> + slabs += data_race(slab->slabs);
>> }
>> #endif
>>
>> @@ -6271,7 +6271,7 @@ static ssize_t slabs_cpu_partial_show(struct kmem_cache *s, char *buf)
>>
>> slab = slub_percpu_partial(per_cpu_ptr(s->cpu_slab, cpu));
>> if (slab) {
>> - slabs = READ_ONCE(slab->slabs);
>> + slabs = data_race(slab->slabs);
>> objects = (slabs * oo_objects(s->oo)) / 2;
>> len += sysfs_emit_at(buf, len, " C%d=%d(%d)",
>> cpu, objects, slabs);
>

2024-03-25 13:59:25

by Vlastimil Babka

Subject: Re: [PATCH v2] mm/slub: mark racy accesses on slab->slabs

On 3/25/24 9:49 AM, Chengming Zhou wrote:
> On 2024/3/25 16:48, Vlastimil Babka wrote:
>> On 3/21/24 4:48 AM, linke li wrote:
>>> The reads of slab->slabs are racy because the value may be changed
>>> concurrently by put_cpu_partial(). In slabs_cpu_partial_show() and
>>> show_slab_objects(), slab->slabs is only used for showing information.
>>>
>>> Data-racy reads from shared variables that are used only for diagnostic
>>> purposes should typically use data_race(), since it is normally not a
>>> problem if the values are off by a little.
>>>
>>> This patch is aimed at reducing the number of benign races reported by
>>> KCSAN in order to focus future debugging effort on harmful races.
>>>
>>> Signed-off-by: linke li <[email protected]>
>>> Reviewed-by: Chengming Zhou <[email protected]>
>>
>> Chengming provided feedback on v1 but did not offer a Reviewed-by, AFAICS? Or
>> maybe he will offer it now? :)
>
> Ah, right.
>
> Reviewed-by: Chengming Zhou <[email protected]>

Thanks, added to slab/for-6.10/cleanup


2024-03-25 15:13:20

by Vlastimil Babka

Subject: Re: [PATCH v2] mm/slub: mark racy accesses on slab->slabs

On 3/21/24 4:48 AM, linke li wrote:
> The reads of slab->slabs are racy because the value may be changed
> concurrently by put_cpu_partial(). In slabs_cpu_partial_show() and
> show_slab_objects(), slab->slabs is only used for showing information.
>
> Data-racy reads from shared variables that are used only for diagnostic
> purposes should typically use data_race(), since it is normally not a
> problem if the values are off by a little.
>
> This patch is aimed at reducing the number of benign races reported by
> KCSAN in order to focus future debugging effort on harmful races.
>
> Signed-off-by: linke li <[email protected]>
> Reviewed-by: Chengming Zhou <[email protected]>

Chengming provided feedback on v1 but did not offer a Reviewed-by, AFAICS? Or
maybe he will offer it now? :)

Vlastimil

> ---
> mm/slub.c | 6 +++---
> 1 file changed, 3 insertions(+), 3 deletions(-)
>
> diff --git a/mm/slub.c b/mm/slub.c
> index 2ef88bbf56a3..0d700f6ca547 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -6052,7 +6052,7 @@ static ssize_t show_slab_objects(struct kmem_cache *s,
> else if (flags & SO_OBJECTS)
> WARN_ON_ONCE(1);
> else
> - x = slab->slabs;
> + x = data_race(slab->slabs);
> total += x;
> nodes[node] += x;
> }
> @@ -6257,7 +6257,7 @@ static ssize_t slabs_cpu_partial_show(struct kmem_cache *s, char *buf)
> slab = slub_percpu_partial(per_cpu_ptr(s->cpu_slab, cpu));
>
> if (slab)
> - slabs += slab->slabs;
> + slabs += data_race(slab->slabs);
> }
> #endif
>
> @@ -6271,7 +6271,7 @@ static ssize_t slabs_cpu_partial_show(struct kmem_cache *s, char *buf)
>
> slab = slub_percpu_partial(per_cpu_ptr(s->cpu_slab, cpu));
> if (slab) {
> - slabs = READ_ONCE(slab->slabs);
> + slabs = data_race(slab->slabs);
> objects = (slabs * oo_objects(s->oo)) / 2;
> len += sysfs_emit_at(buf, len, " C%d=%d(%d)",
> cpu, objects, slabs);