If two or more threads call get_huge_zero_page concurrently, THP_ZERO_PAGE_ALLOC
may be increased two or more times. But actually, it should only be counted once,
since the extra zero pages have been freed.
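
For reference, the current flow is roughly as follows (simplified sketch; the
fast path and the actual allocation are elided):

static bool get_huge_zero_page(void)
{
	struct page *zero_page;
retry:
	/* fast path: take a reference on an already-installed huge zero page (elided) */

	/* slow path: allocate a candidate zero page (elided) */
	if (!zero_page) {
		count_vm_event(THP_ZERO_PAGE_ALLOC_FAILED);
		return false;
	}
	count_vm_event(THP_ZERO_PAGE_ALLOC);	/* bumped before the race below is decided */
	preempt_disable();
	if (cmpxchg(&huge_zero_page, NULL, zero_page)) {
		/* lost the race: another thread installed its page first */
		preempt_enable();
		__free_pages(zero_page, compound_order(zero_page));
		goto retry;	/* reuse the winner's page; the event was already counted */
	}

	/* won the race: this page becomes the huge zero page */
	atomic_set(&huge_zero_refcount, 2);
	preempt_enable();
	return true;
}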
Signed-off-by: Liu Shixin <[email protected]>
---
mm/huge_memory.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 88d98241a635..5c83a424803a 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -163,7 +163,6 @@ static bool get_huge_zero_page(void)
 		count_vm_event(THP_ZERO_PAGE_ALLOC_FAILED);
 		return false;
 	}
-	count_vm_event(THP_ZERO_PAGE_ALLOC);
 	preempt_disable();
 	if (cmpxchg(&huge_zero_page, NULL, zero_page)) {
 		preempt_enable();
@@ -175,6 +174,7 @@ static bool get_huge_zero_page(void)
 
 	/* We take additional reference here. It will be put back by shrinker */
 	atomic_set(&huge_zero_refcount, 2);
 	preempt_enable();
+	count_vm_event(THP_ZERO_PAGE_ALLOC);
 	return true;
 }
--
2.25.1
On Mon, 5 Sep 2022 21:38:13 +0800 Liu Shixin <[email protected]> wrote:
> If two or more threads call get_huge_zero_page concurrently, THP_ZERO_PAGE_ALLOC
> may be increased two or more times. But actually, it should only be counted once,
> since the extra zero pages have been freed.
Well, for better or for worse,
Documentation/admin-guide/mm/transhuge.rst says
thp_zero_page_alloc
is incremented every time a huge zero page is
successfully allocated. It includes allocations which where
dropped due race with other allocation. Note, it doesn't count
every map of the huge zero page, only its allocation.
If you think this interpretation should be changed, then please explain
why, and let's be sure to update the documentation accordingly.
On 2022/9/6 4:07, Andrew Morton wrote:
> On Mon, 5 Sep 2022 21:38:13 +0800 Liu Shixin <[email protected]> wrote:
>
>> If two or more threads call get_huge_zero_page concurrently, THP_ZERO_PAGE_ALLOC
>> may be increased two or more times. But actually, it should only be counted once,
>> since the extra zero pages have been freed.
> Well, for better or for worse,
> Documentation/admin-guide/mm/transhuge.rst says
>
> thp_zero_page_alloc
> is incremented every time a huge zero page is
> successfully allocated. It includes allocations which where
> dropped due race with other allocation. Note, it doesn't count
> every map of the huge zero page, only its allocation.
>
> If you think this interpretation should be changed, then please explain
> why, and let's be sure to update the documentation accordingly.
>
> .
Thanks for your explanation. I misunderstood the meaning of thp_zero_page_alloc before.
Although the rules are clearly explained in the documentation, I think that this counter
should only be incremented when a huge zero page used for THP is successfully allocated,
and the pages dropped due to the race should not be counted. It seems strange to count
all allocations.
If there's something I still misunderstand, please point it out, thanks.
Thanks,
On Tue, 6 Sep 2022 09:52:23 +0800 Liu Shixin <[email protected]> wrote:
> On 2022/9/6 4:07, Andrew Morton wrote:
> > On Mon, 5 Sep 2022 21:38:13 +0800 Liu Shixin <[email protected]> wrote:
> >
> >> If two or more threads call get_huge_zero_page concurrently, THP_ZERO_PAGE_ALLOC
> >> may be increased two or more times. But actually, it should only be counted once,
> >> since the extra zero pages have been freed.
> > Well, for better or for worse,
> > Documentation/admin-guide/mm/transhuge.rst says
> >
> > thp_zero_page_alloc
> > is incremented every time a huge zero page is
> > successfully allocated. It includes allocations which where
> > dropped due race with other allocation. Note, it doesn't count
> > every map of the huge zero page, only its allocation.
> >
> > If you think this interpretation should be changed, then please explain
> > why, and let's be sure to update the documentation accordingly.
> >
> > .
> Thanks for your explanation. I misunderstood the meaning of thp_zero_page_alloc before.
> Although the rules are clearly explained in the documentation, I think that this counter
> should only be incremented when a huge zero page used for THP is successfully allocated,
> and the pages dropped due to the race should not be counted. It seems strange to count
> all allocations.
>
> If there's something I still misunderstand, please point it out, thanks.
It seems strange to me also. Perhaps there's a rationale buried in the
git and mailing list history.
On 2022/9/8 7:35, Andrew Morton wrote:
> On Tue, 6 Sep 2022 09:52:23 +0800 Liu Shixin <[email protected]> wrote:
>
>> On 2022/9/6 4:07, Andrew Morton wrote:
>>> On Mon, 5 Sep 2022 21:38:13 +0800 Liu Shixin <[email protected]> wrote:
>>>
>>>> If two or more threads call get_huge_zero_page concurrently, THP_ZERO_PAGE_ALLOC
>>>> may be increased two or more times. But actually, it should only be counted once,
>>>> since the extra zero pages have been freed.
>>> Well, for better or for worse,
>>> Documentation/admin-guide/mm/transhuge.rst says
>>>
>>> thp_zero_page_alloc
>>> is incremented every time a huge zero page is
>>> successfully allocated. It includes allocations which where
>>> dropped due race with other allocation. Note, it doesn't count
>>> every map of the huge zero page, only its allocation.
>>>
>>> If you think this interpretation should be changed, then please explain
>>> why, and let's be sure to update the documentation accordingly.
>>>
>>> .
>> Thanks for your explanation. I misunderstood the meaning of thp_zero_page_alloc before.
>> Although the rules are clearly explained in the documentation, I think that this counter
>> should only be incremented when a huge zero page used for THP is successfully allocated,
>> and the pages dropped due to the race should not be counted. It seems strange to count
>> all allocations.
>>
>> If there's something I still misunderstand, please point it out, thanks.
> It seems strange to me also. Perhaps there's a rationale buried in the
> git and mailing list history.
>
> .
I didn't find any previous discussion about this point. I updated the documentation in v2.
Kirill, what do you think about this change?
https://lore.kernel.org/linux-mm/[email protected]/