After the memory is freed, it can be immediately allocated by other
CPUs, before the "free" trace report has been emitted. This causes
inaccurate traces.
For example, if the following sequence of events occurs:

          CPU 0                     CPU 1

    (1) alloc xxxxxx
    (2) free  xxxxxx
                              (3) alloc xxxxxx
                              (4) free  xxxxxx

Then they will be inaccurately reported via tracing, so that they appear
to have happened in this order:

          CPU 0                     CPU 1

    (1) alloc xxxxxx
                              (2) alloc xxxxxx
    (3) free  xxxxxx
                              (4) free  xxxxxx

This makes it look like CPU 1 somehow managed to allocate mmemory that
CPU 0 still had allocated for itself.
In order to avoid this, emit the "free xxxxxx" tracing report just
before the actual call to free the memory, instead of just after it.
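
In terms of the pre-patch code (the full hunk is in the diff below), the
window looks roughly like this; the comments are added here only to mark it:

    void kmem_cache_free(struct kmem_cache *s, void *x)
    {
            s = cache_from_obj(s, x);
            if (!s)
                    return;
            /* The object is handed back to the allocator here ... */
            slab_free(s, virt_to_head_page(x), x, NULL, 1, _RET_IP_);
            /*
             * ... so another CPU can allocate (and trace) the same address
             * before this CPU's "free" event is emitted below.
             */
            trace_kmem_cache_free(_RET_IP_, x, s->name);
    }
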
Signed-off-by: Yunfeng Ye <[email protected]>
Reviewed-by: Vlastimil Babka <[email protected]>
---
v1 -> v2:
- Modify the description
- Add "Reviewed-by"
mm/slub.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/mm/slub.c b/mm/slub.c
index 432145d7b4ec..427e62034c3f 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3526,8 +3526,8 @@ void kmem_cache_free(struct kmem_cache *s, void *x)
s = cache_from_obj(s, x);
if (!s)
return;
- slab_free(s, virt_to_head_page(x), x, NULL, 1, _RET_IP_);
trace_kmem_cache_free(_RET_IP_, x, s->name);
+ slab_free(s, virt_to_head_page(x), x, NULL, 1, _RET_IP_);
}
EXPORT_SYMBOL(kmem_cache_free);
--
2.27.0
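
For reference, with the patch applied, kmem_cache_free() in mm/slub.c reads
as follows (reconstructed from the hunk above; the comment is added for
illustration only):

    void kmem_cache_free(struct kmem_cache *s, void *x)
    {
            s = cache_from_obj(s, x);
            if (!s)
                    return;
            /* Report the free while the object still belongs to this CPU. */
            trace_kmem_cache_free(_RET_IP_, x, s->name);
            slab_free(s, virt_to_head_page(x), x, NULL, 1, _RET_IP_);
    }
    EXPORT_SYMBOL(kmem_cache_free);
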
On 2021/11/2 19:43, Yunfeng Ye wrote:
> After the memory is freed, it can be immediately allocated by other
> CPUs, before the "free" trace report has been emitted. This causes
> inaccurate traces.
>
> For example, if the following sequence of events occurs:
>
>           CPU 0                     CPU 1
>
>     (1) alloc xxxxxx
>     (2) free  xxxxxx
>                               (3) alloc xxxxxx
>                               (4) free  xxxxxx
>
> Then they will be inaccurately reported via tracing, so that they appear
> to have happened in this order:
>
>           CPU 0                     CPU 1
>
>     (1) alloc xxxxxx
>                               (2) alloc xxxxxx
>     (3) free  xxxxxx
>                               (4) free  xxxxxx
>
> This makes it look like CPU 1 somehow managed to allocate mmemory that
> CPU 0 still had allocated for itself.
>
> In order to avoid this, emit the "free xxxxxx" tracing report just
> before the actual call to free the memory, instead of just after it.
>
> Signed-off-by: Yunfeng Ye <[email protected]>
> Reviewed-by: Vlastimil Babka <[email protected]>
> ---
> v1 -> v2:
> - Modify the description
> - Add "Reviewed-by"
>
> mm/slub.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/mm/slub.c b/mm/slub.c
> index 432145d7b4ec..427e62034c3f 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -3526,8 +3526,8 @@ void kmem_cache_free(struct kmem_cache *s, void *x)
> s = cache_from_obj(s, x);
> if (!s)
> return;
> - slab_free(s, virt_to_head_page(x), x, NULL, 1, _RET_IP_);
> trace_kmem_cache_free(_RET_IP_, x, s->name);
> + slab_free(s, virt_to_head_page(x), x, NULL, 1, _RET_IP_);
> }
It seems that kmem_cache_free() in mm/slab.c has the same problem.
We can fix it. Thanks.
> EXPORT_SYMBOL(kmem_cache_free);
>
On 11/2/21 14:53, Tang Yizhou wrote:
> On 2021/11/2 19:43, Yunfeng Ye wrote:
>> --- a/mm/slub.c
>> +++ b/mm/slub.c
>> @@ -3526,8 +3526,8 @@ void kmem_cache_free(struct kmem_cache *s, void *x)
>> s = cache_from_obj(s, x);
>> if (!s)
>> return;
>> - slab_free(s, virt_to_head_page(x), x, NULL, 1, _RET_IP_);
>> trace_kmem_cache_free(_RET_IP_, x, s->name);
>> + slab_free(s, virt_to_head_page(x), x, NULL, 1, _RET_IP_);
>> }
>
> It seems that kmem_cache_free() in mm/slab.c has the same problem.
> We can fix it. Thanks.
Doh, true. It would best go just before the local_irq_save() there.
And also kmem_cache_free() in mm/slob.c.
Interestingly kfree() is already OK in all 3 implementations.
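
A rough sketch of the analogous change in mm/slab.c is below. Only the
placement of the trace call before local_irq_save() follows from the comment
above; the surrounding lines are recalled from the kmem_cache_free() in that
file at the time and may not match exactly. mm/slob.c would need the same
kind of move.

    --- a/mm/slab.c
    +++ b/mm/slab.c
    @@ ... @@ void kmem_cache_free(struct kmem_cache *cachep, void *objp)
     	cachep = cache_from_obj(cachep, objp);
     	if (!cachep)
     		return;
     
    +	/* Emit the trace event before the object can be reused by another CPU. */
    +	trace_kmem_cache_free(_RET_IP_, objp, cachep->name);
     	local_irq_save(flags);
     	debug_check_no_locks_freed(objp, cachep->object_size);
     	if (!(cachep->flags & SLAB_DEBUG_OBJECTS))
     		debug_check_no_obj_freed(objp, cachep->object_size);
     	__cache_free(cachep, objp, _RET_IP_);
     	local_irq_restore(flags);
    -
    -	trace_kmem_cache_free(_RET_IP_, objp, cachep->name);
     }
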
>> EXPORT_SYMBOL(kmem_cache_free);
>>
>
On 11/2/21 04:43, Yunfeng Ye wrote:
> After the memory is freed, it can be immediately allocated by other
> CPUs, before the "free" trace report has been emitted. This causes
> inaccurate traces.
>
> For example, if the following sequence of events occurs:
>
>           CPU 0                     CPU 1
>
>     (1) alloc xxxxxx
>     (2) free  xxxxxx
>                               (3) alloc xxxxxx
>                               (4) free  xxxxxx
>
> Then they will be inaccurately reported via tracing, so that they appear
> to have happened in this order:
>
>           CPU 0                     CPU 1
>
>     (1) alloc xxxxxx
>                               (2) alloc xxxxxx
>     (3) free  xxxxxx
>                               (4) free  xxxxxx
>
> This makes it look like CPU 1 somehow managed to allocate mmemory that
I see I created a typo for you, sorry about that: s/mmemory/memory/
But anyway, the wording looks good now. Please feel free to add:
Reviewed-by: John Hubbard <[email protected]>
thanks,
--
John Hubbard
NVIDIA
> CPU 0 still had allocated for itself.
>
> In order to avoid this, emit the "free xxxxxx" tracing report just
> before the actual call to free the memory, instead of just after it.
>
> Signed-off-by: Yunfeng Ye <[email protected]>
> Reviewed-by: Vlastimil Babka <[email protected]>
> ---
> v1 -> v2:
> - Modify the description
> - Add "Reviewed-by"
>
> mm/slub.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/mm/slub.c b/mm/slub.c
> index 432145d7b4ec..427e62034c3f 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -3526,8 +3526,8 @@ void kmem_cache_free(struct kmem_cache *s, void *x)
> s = cache_from_obj(s, x);
> if (!s)
> return;
> - slab_free(s, virt_to_head_page(x), x, NULL, 1, _RET_IP_);
> trace_kmem_cache_free(_RET_IP_, x, s->name);
> + slab_free(s, virt_to_head_page(x), x, NULL, 1, _RET_IP_);
> }
> EXPORT_SYMBOL(kmem_cache_free);
>
On 2021/11/2 22:39, Vlastimil Babka wrote:
> On 11/2/21 14:53, Tang Yizhou wrote:
>> On 2021/11/2 19:43, Yunfeng Ye wrote:
>>> --- a/mm/slub.c
>>> +++ b/mm/slub.c
>>> @@ -3526,8 +3526,8 @@ void kmem_cache_free(struct kmem_cache *s, void *x)
>>> s = cache_from_obj(s, x);
>>> if (!s)
>>> return;
>>> - slab_free(s, virt_to_head_page(x), x, NULL, 1, _RET_IP_);
>>> trace_kmem_cache_free(_RET_IP_, x, s->name);
>>> + slab_free(s, virt_to_head_page(x), x, NULL, 1, _RET_IP_);
>>> }
>>
>> It seems that kmem_cache_free() in mm/slab.c has the same problem.
>> We can fix it. Thanks.
>
> Doh, true. It would best go just before the local_irq_save() there.
> And also kmem_cache_free() in mm/slob.c.
>
Yes, I will fix the same problem in mm/slab.c and mm/slob.c in the v3 patch.
Thanks.
> Interestingly kfree() is already OK in all 3 implementations.
>
>>> EXPORT_SYMBOL(kmem_cache_free);
>>>
>>
>
>
On 2021/11/3 2:37, John Hubbard wrote:
> On 11/2/21 04:43, Yunfeng Ye wrote:
>> After the memory is freed, it can be immediately allocated by other
>> CPUs, before the "free" trace report has been emitted. This causes
>> inaccurate traces.
>>
>> For example, if the following sequence of events occurs:
>>
>>           CPU 0                     CPU 1
>>
>>     (1) alloc xxxxxx
>>     (2) free  xxxxxx
>>                               (3) alloc xxxxxx
>>                               (4) free  xxxxxx
>>
>> Then they will be inaccurately reported via tracing, so that they appear
>> to have happened in this order:
>>
>>           CPU 0                     CPU 1
>>
>>     (1) alloc xxxxxx
>>                               (2) alloc xxxxxx
>>     (3) free  xxxxxx
>>                               (4) free  xxxxxx
>>
>> This makes it look like CPU 1 somehow managed to allocate mmemory that
>
>
> I see I created a typo for you, sorry about that: s/mmemory/memory/
>
> But anyway, the wording looks good now. Please feel free to add:
>
> Reviewed-by: John Hubbard <[email protected]>
>
Ok, I will fix the typo in the v3 patch.
Thanks.
>
> thanks,