2019-01-21 08:40:40

by Sandeep Patil

Subject: [PATCH] mm: proc: smaps_rollup: Fix pss_locked calculation

The 'pss_locked' field of smaps_rollup was being calculated incorrectly:
it accumulated the current pss every time a locked VMA was found.

Fix that by recording the current pss value before each VMA is walked, so
that only the delta is added when the VMA turns out to be VM_LOCKED.

Fixes: 493b0e9d945f ("mm: add /proc/pid/smaps_rollup")
Cc: [email protected] # 4.14.y 4.19.y
Signed-off-by: Sandeep Patil <[email protected]>
---
fs/proc/task_mmu.c | 6 ++++--
1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
index f0ec9edab2f3..51a00a2b4733 100644
--- a/fs/proc/task_mmu.c
+++ b/fs/proc/task_mmu.c
@@ -709,6 +709,7 @@ static void smap_gather_stats(struct vm_area_struct *vma,
#endif
.mm = vma->vm_mm,
};
+ unsigned long pss;

smaps_walk.private = mss;

@@ -737,11 +738,12 @@ static void smap_gather_stats(struct vm_area_struct *vma,
}
}
#endif
-
+ /* record current pss so we can calculate the delta after page walk */
+ pss = mss->pss;
/* mmap_sem is held in m_start */
walk_page_vma(vma, &smaps_walk);
if (vma->vm_flags & VM_LOCKED)
- mss->pss_locked += mss->pss;
+ mss->pss_locked += mss->pss - pss;
}

#define SEQ_PUT_DEC(str, val) \
--
2.20.1.321.g9e740568ce-goog
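The overcounting this patch fixes can be seen in a small userspace model of the
accumulation loop. The structs and values below are illustrative stand-ins, not
the kernel's mem_size_stats or VMA data:

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative stand-ins for the kernel's mem_size_stats and per-VMA data. */
struct mss_model {
	unsigned long pss;
	unsigned long pss_locked;
};

struct vma_model {
	unsigned long pss;	/* pss this VMA's page walk will add */
	bool locked;		/* VM_LOCKED set on the VMA */
};

/* Before the fix: adds the whole running total on every locked VMA. */
static void gather_buggy(struct mss_model *mss, const struct vma_model *v)
{
	mss->pss += v->pss;			/* models walk_page_vma() */
	if (v->locked)
		mss->pss_locked += mss->pss;	/* whole total, not this VMA's share */
}

/* After the fix: snapshot pss before the walk, add only the delta. */
static void gather_fixed(struct mss_model *mss, const struct vma_model *v)
{
	unsigned long pss = mss->pss;	/* record current pss */

	mss->pss += v->pss;
	if (v->locked)
		mss->pss_locked += mss->pss - pss;
}
```

With three VMAs contributing pss of 100 (unlocked), 50 (locked) and 30 (locked),
the buggy variant reports pss_locked = 150 + 180 = 330, while the fixed variant
reports the correct 50 + 30 = 80.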



2019-01-29 00:15:50

by Andrew Morton

Subject: Re: [PATCH] mm: proc: smaps_rollup: Fix pss_locked calculation

On Sun, 20 Jan 2019 17:10:49 -0800 Sandeep Patil <[email protected]> wrote:

> The 'pss_locked' field of smaps_rollup was being calculated incorrectly:
> it accumulated the current pss every time a locked VMA was found.
>
> Fix that by recording the current pss value before each VMA is walked, so
> that only the delta is added when the VMA turns out to be VM_LOCKED.
>
> ...
>
> --- a/fs/proc/task_mmu.c
> +++ b/fs/proc/task_mmu.c
> @@ -709,6 +709,7 @@ static void smap_gather_stats(struct vm_area_struct *vma,
> #endif
> .mm = vma->vm_mm,
> };
> + unsigned long pss;
>
> smaps_walk.private = mss;
>
> @@ -737,11 +738,12 @@ static void smap_gather_stats(struct vm_area_struct *vma,
> }
> }
> #endif
> -
> + /* record current pss so we can calculate the delta after page walk */
> + pss = mss->pss;
> /* mmap_sem is held in m_start */
> walk_page_vma(vma, &smaps_walk);
> if (vma->vm_flags & VM_LOCKED)
> - mss->pss_locked += mss->pss;
> + mss->pss_locked += mss->pss - pss;
> }

This seems to be a rather obscure way of accumulating
mem_size_stats.pss_locked. Wouldn't it make more sense to do this in
smaps_account(), wherever we increment mem_size_stats.pss?

It would be a tiny bit less efficient but I think that the code cleanup
justifies such a cost?

2019-01-29 15:52:50

by Vlastimil Babka

Subject: Re: [PATCH] mm: proc: smaps_rollup: Fix pss_locked calculation

On 1/29/19 1:15 AM, Andrew Morton wrote:
> On Sun, 20 Jan 2019 17:10:49 -0800 Sandeep Patil <[email protected]> wrote:
>
>> The 'pss_locked' field of smaps_rollup was being calculated incorrectly:
>> it accumulated the current pss every time a locked VMA was found.
>>
>> Fix that by recording the current pss value before each VMA is walked, so
>> that only the delta is added when the VMA turns out to be VM_LOCKED.
>>
>> ...
>>
>> --- a/fs/proc/task_mmu.c
>> +++ b/fs/proc/task_mmu.c
>> @@ -709,6 +709,7 @@ static void smap_gather_stats(struct vm_area_struct *vma,
>> #endif
>> .mm = vma->vm_mm,
>> };
>> + unsigned long pss;
>>
>> smaps_walk.private = mss;
>>
>> @@ -737,11 +738,12 @@ static void smap_gather_stats(struct vm_area_struct *vma,
>> }
>> }
>> #endif
>> -
>> + /* record current pss so we can calculate the delta after page walk */
>> + pss = mss->pss;
>> /* mmap_sem is held in m_start */
>> walk_page_vma(vma, &smaps_walk);
>> if (vma->vm_flags & VM_LOCKED)
>> - mss->pss_locked += mss->pss;
>> + mss->pss_locked += mss->pss - pss;
>> }
>
> This seems to be a rather obscure way of accumulating
> mem_size_stats.pss_locked. Wouldn't it make more sense to do this in
> smaps_account(), wherever we increment mem_size_stats.pss?
>
> It would be a tiny bit less efficient but I think that the code cleanup
> justifies such a cost?

Yeah. Sandeep, could you add a 'bool locked' param to smaps_account() and check
it there? We probably don't need the whole vma param yet.

Thanks.

2019-02-03 06:21:50

by Sandeep Patil

Subject: Re: [PATCH] mm: proc: smaps_rollup: Fix pss_locked calculation

On Tue, Jan 29, 2019 at 04:52:21PM +0100, Vlastimil Babka wrote:
> On 1/29/19 1:15 AM, Andrew Morton wrote:
> > On Sun, 20 Jan 2019 17:10:49 -0800 Sandeep Patil <[email protected]> wrote:
> >
> >> The 'pss_locked' field of smaps_rollup was being calculated incorrectly:
> >> it accumulated the current pss every time a locked VMA was found.
> >>
> >> Fix that by recording the current pss value before each VMA is walked, so
> >> that only the delta is added when the VMA turns out to be VM_LOCKED.
> >>
> >> ...
> >>
> >> --- a/fs/proc/task_mmu.c
> >> +++ b/fs/proc/task_mmu.c
> >> @@ -709,6 +709,7 @@ static void smap_gather_stats(struct vm_area_struct *vma,
> >> #endif
> >> .mm = vma->vm_mm,
> >> };
> >> + unsigned long pss;
> >>
> >> smaps_walk.private = mss;
> >>
> >> @@ -737,11 +738,12 @@ static void smap_gather_stats(struct vm_area_struct *vma,
> >> }
> >> }
> >> #endif
> >> -
> >> + /* record current pss so we can calculate the delta after page walk */
> >> + pss = mss->pss;
> >> /* mmap_sem is held in m_start */
> >> walk_page_vma(vma, &smaps_walk);
> >> if (vma->vm_flags & VM_LOCKED)
> >> - mss->pss_locked += mss->pss;
> >> + mss->pss_locked += mss->pss - pss;
> >> }
> >
> > This seems to be a rather obscure way of accumulating
> > mem_size_stats.pss_locked. Wouldn't it make more sense to do this in
> > smaps_account(), wherever we increment mem_size_stats.pss?
> >
> > It would be a tiny bit less efficient but I think that the code cleanup
> > justifies such a cost?
>
> Yeah, Sandeep could you add 'bool locked' param to smaps_account() and check it
> there? We probably don't need the whole vma param yet.

Agree, I will send -v2 shortly.

- ssp
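The direction agreed above -- accumulating pss_locked at the same point pss is
incremented, via a 'bool locked' parameter on smaps_account() -- might look
roughly like the sketch below. This is again a userspace model, with
account_model() standing in for smaps_account() and the amounts made up; it is
not the actual v2 patch:

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative stand-in for the kernel's mem_size_stats. */
struct mss_model {
	unsigned long pss;
	unsigned long pss_locked;
};

/* Stand-in for smaps_account() with the proposed 'bool locked' parameter:
 * each page's pss share is counted into pss_locked as well whenever its
 * VMA is VM_LOCKED. */
static void account_model(struct mss_model *mss, unsigned long page_pss,
			  bool locked)
{
	mss->pss += page_pss;
	if (locked)
		mss->pss_locked += page_pss;
}
```

The per-VMA snapshot/delta bookkeeping then disappears: each page either is or
is not counted into pss_locked at accounting time, which is the cleanup Andrew
and Vlastimil asked for.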