From: Joonsoo Kim <[email protected]>
page_zone_id() is a specialized function for comparing the zones of pages
that are within a single section's range. If the pages are in different
sections, page_zone_id() can differ even if their zone is the same.
This incorrect usage doesn't cause any actual problem, since
__munlock_pagevec_fill() would simply be called again with the failed index.
However, it's better to use the more appropriate function here.

This patch is also preparation for a further change to page_zone_id().
Signed-off-by: Joonsoo Kim <[email protected]>
---
mm/mlock.c | 10 ++++------
1 file changed, 4 insertions(+), 6 deletions(-)

diff --git a/mm/mlock.c b/mm/mlock.c
index b562b55..dfc6f19 100644
--- a/mm/mlock.c
+++ b/mm/mlock.c
@@ -365,8 +365,8 @@ static void __munlock_pagevec(struct pagevec *pvec, struct zone *zone)
* @start + PAGE_SIZE when no page could be added by the pte walk.
*/
static unsigned long __munlock_pagevec_fill(struct pagevec *pvec,
- struct vm_area_struct *vma, int zoneid, unsigned long start,
- unsigned long end)
+ struct vm_area_struct *vma, struct zone *zone,
+ unsigned long start, unsigned long end)
{
pte_t *pte;
spinlock_t *ptl;
@@ -394,7 +394,7 @@ static unsigned long __munlock_pagevec_fill(struct pagevec *pvec,
* Break if page could not be obtained or the page's node+zone does not
* match
*/
- if (!page || page_zone_id(page) != zoneid)
+ if (!page || page_zone(page) != zone)
break;

/*
@@ -446,7 +446,6 @@ void munlock_vma_pages_range(struct vm_area_struct *vma,
unsigned long page_increm;
struct pagevec pvec;
struct zone *zone;
- int zoneid;

pagevec_init(&pvec, 0);
/*
@@ -481,7 +480,6 @@ void munlock_vma_pages_range(struct vm_area_struct *vma,
*/
pagevec_add(&pvec, page);
zone = page_zone(page);
- zoneid = page_zone_id(page);

/*
* Try to fill the rest of pagevec using fast
@@ -490,7 +488,7 @@ void munlock_vma_pages_range(struct vm_area_struct *vma,
* pagevec.
*/
start = __munlock_pagevec_fill(&pvec, vma,
- zoneid, start, end);
+ zone, start, end);
__munlock_pagevec(&pvec, zone);
goto next;
}
--
2.7.4
+CC Mel
On 08/24/2017 09:20 AM, [email protected] wrote:
> From: Joonsoo Kim <[email protected]>
>
> page_zone_id() is a specialized function for comparing the zones of pages
> that are within a single section's range. If the pages are in different
> sections, page_zone_id() can differ even if their zone is the same.
> This incorrect usage doesn't cause any actual problem, since
> __munlock_pagevec_fill() would simply be called again with the failed index.
> However, it's better to use the more appropriate function here.
Hmm, using the zone id was part of the series making munlock faster. Too bad
it's doing the wrong thing on some memory models. It looks like it wasn't
evaluated in isolation, but only as part of the pagevec usage (commit
7a8010cd36273), and most likely it wasn't contributing much to the
14% speedup.
> This patch is also preparation for a further change to page_zone_id().
Out of curiosity, what kind of change?
Vlastimil
> Signed-off-by: Joonsoo Kim <[email protected]>
> ---
> mm/mlock.c | 10 ++++------
> 1 file changed, 4 insertions(+), 6 deletions(-)
>
> diff --git a/mm/mlock.c b/mm/mlock.c
> index b562b55..dfc6f19 100644
> --- a/mm/mlock.c
> +++ b/mm/mlock.c
> @@ -365,8 +365,8 @@ static void __munlock_pagevec(struct pagevec *pvec, struct zone *zone)
> * @start + PAGE_SIZE when no page could be added by the pte walk.
> */
> static unsigned long __munlock_pagevec_fill(struct pagevec *pvec,
> - struct vm_area_struct *vma, int zoneid, unsigned long start,
> - unsigned long end)
> + struct vm_area_struct *vma, struct zone *zone,
> + unsigned long start, unsigned long end)
> {
> pte_t *pte;
> spinlock_t *ptl;
> @@ -394,7 +394,7 @@ static unsigned long __munlock_pagevec_fill(struct pagevec *pvec,
> * Break if page could not be obtained or the page's node+zone does not
> * match
> */
> - if (!page || page_zone_id(page) != zoneid)
> + if (!page || page_zone(page) != zone)
> break;
>
> /*
> @@ -446,7 +446,6 @@ void munlock_vma_pages_range(struct vm_area_struct *vma,
> unsigned long page_increm;
> struct pagevec pvec;
> struct zone *zone;
> - int zoneid;
>
> pagevec_init(&pvec, 0);
> /*
> @@ -481,7 +480,6 @@ void munlock_vma_pages_range(struct vm_area_struct *vma,
> */
> pagevec_add(&pvec, page);
> zone = page_zone(page);
> - zoneid = page_zone_id(page);
>
> /*
> * Try to fill the rest of pagevec using fast
> @@ -490,7 +488,7 @@ void munlock_vma_pages_range(struct vm_area_struct *vma,
> * pagevec.
> */
> start = __munlock_pagevec_fill(&pvec, vma,
> - zoneid, start, end);
> + zone, start, end);
> __munlock_pagevec(&pvec, zone);
> goto next;
> }
>
On Thu, Aug 24, 2017 at 01:05:15PM +0200, Vlastimil Babka wrote:
> +CC Mel
>
> On 08/24/2017 09:20 AM, [email protected] wrote:
> > From: Joonsoo Kim <[email protected]>
> >
> > page_zone_id() is a specialized function for comparing the zones of pages
> > that are within a single section's range. If the pages are in different
> > sections, page_zone_id() can differ even if their zone is the same.
> > This incorrect usage doesn't cause any actual problem, since
> > __munlock_pagevec_fill() would simply be called again with the failed index.
> > However, it's better to use the more appropriate function here.
>
> Hmm, using the zone id was part of the series making munlock faster. Too bad
> it's doing the wrong thing on some memory models. It looks like it wasn't
> evaluated in isolation, but only as part of the pagevec usage (commit
> 7a8010cd36273), and most likely it wasn't contributing much to the
> 14% speedup.
I roughly checked that patch, and it seems the performance improvement
of that commit isn't related to the page_zone_id() usage. With
page_zone(), we have more chances to do the job as a batch.
>
> > This patch is also preparation for a further change to page_zone_id().
>
> Out of curiosity, what kind of change?
>
I prepared one more patch that prevents another use of page_zone_id(),
since it is too tricky. However, I didn't submit it. That description
should be removed. :/
Thanks.
On 08/25/2017 01:59 AM, Joonsoo Kim wrote:
> On Thu, Aug 24, 2017 at 01:05:15PM +0200, Vlastimil Babka wrote:
>> +CC Mel
>>
>> On 08/24/2017 09:20 AM, [email protected] wrote:
>>> From: Joonsoo Kim <[email protected]>
>>>
>>> page_zone_id() is a specialized function for comparing the zones of pages
>>> that are within a single section's range. If the pages are in different
>>> sections, page_zone_id() can differ even if their zone is the same.
>>> This incorrect usage doesn't cause any actual problem, since
>>> __munlock_pagevec_fill() would simply be called again with the failed index.
>>> However, it's better to use the more appropriate function here.
>>
>> Hmm, using the zone id was part of the series making munlock faster. Too bad
>> it's doing the wrong thing on some memory models. It looks like it wasn't
>> evaluated in isolation, but only as part of the pagevec usage (commit
>> 7a8010cd36273), and most likely it wasn't contributing much to the
>> 14% speedup.
>
> I roughly checked that patch, and it seems the performance improvement
> of that commit isn't related to the page_zone_id() usage. With
> page_zone(), we have more chances to do the job as a batch.
>
>>
>>> This patch is also preparation for a further change to page_zone_id().
>>
>> Out of curiosity, what kind of change?
>>
>
> I prepared one more patch that prevents another use of page_zone_id(),
> since it is too tricky. However, I didn't submit it. That description
> should be removed. :/
OK. You can add
Acked-by: Vlastimil Babka <[email protected]>
> Thanks.
>