2014-10-09 09:20:04

by Fengwei Yin

Subject: [PATCH] smaps should deal with huge zero page exactly same as normal zero page

Hi,
Fengguang found that the Rss/Pss values shown in smaps are not correct
when the mapped file is /dev/zero: ranges backed by the shared huge zero
page are reported as private anonymous huge pages.

Example:
7bea458b3000-7fea458b3000 r--p 00000000 00:13 39989                  /dev/zero
Size: 4294967296 kB
Rss: 10612736 kB
Pss: 10612736 kB
Shared_Clean: 0 kB
Shared_Dirty: 0 kB
Private_Clean: 10612736 kB
Private_Dirty: 0 kB
Referenced: 10612736 kB
Anonymous: 0 kB
AnonHugePages: 10612736 kB
Swap: 0 kB
KernelPageSize: 4 kB
MMUPageSize: 4 kB
Locked: 0 kB
VmFlags: rd mr mw me

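A minimal reproducer sketch (not part of the original report; the 1 GB
size and 2 MB stride are illustrative assumptions): privately map
/dev/zero read-only, read one byte per 2 MB region so the faults can be
served by the shared huge zero page, then inspect the process's smaps.

#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	size_t len = 1UL << 30;		/* 1 GB; the report above used 4 TB */
	int fd = open("/dev/zero", O_RDONLY);
	volatile char *p;
	size_t off;

	if (fd < 0)
		return 1;
	p = mmap(NULL, len, PROT_READ, MAP_PRIVATE, fd, 0);
	if (p == MAP_FAILED)
		return 1;
	/*
	 * One read per 2 MB region: each fault can be served by the
	 * shared huge zero page, yet smaps counted the regions as
	 * Private_Clean/AnonHugePages, as in the dump above.
	 */
	for (off = 0; off < len; off += 2UL << 20)
		(void)p[off];
	printf("pid %d: check /proc/%d/smaps\n", getpid(), getpid());
	pause();	/* keep the mapping alive for inspection */
	return 0;
}
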

Attachments:
0001-smaps-should-deal-with-huge-zero-page-exactly-same.patch (2.17 kB)

2014-10-09 16:37:20

by Dave Hansen

Subject: Re: [PATCH] smaps should deal with huge zero page exactly same as normal zero page

On 10/09/2014 02:19 AM, Fengwei Yin wrote:
> diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
> index 80ca4fb..8550b27 100644
> --- a/fs/proc/task_mmu.c
> +++ b/fs/proc/task_mmu.c
> @@ -476,7 +476,7 @@ static void smaps_pte_entry(pte_t ptent, unsigned long addr,
> mss->nonlinear += ptent_size;
> }
>
> - if (!page)
> + if (!page || is_huge_zero_page(page))
> return;

This really seems like a bit of a hack. A normal (small) zero page
won't make it to this point because of the vm_normal_page() check in
smaps_pte_entry() which hits the _PAGE_SPECIAL bit in the pte.

Is there a reason we can't set _PAGE_SPECIAL on the huge_zero_page ptes?
If we did that, we wouldn't need a special case here.

If we can't do that for some reason, can we at least teach
vm_normal_page() about the huge_zero_page in some other way?
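For context, vm_normal_page() is the helper smaps_pte_entry() uses to
turn a pte into a struct page. A hedged sketch of the filtering Dave
describes, paraphrased from the mm code of that era (simplified, not
verbatim kernel source):

struct page *vm_normal_page(struct vm_area_struct *vma,
			    unsigned long addr, pte_t pte)
{
	unsigned long pfn = pte_pfn(pte);

	if (HAVE_PTE_SPECIAL) {
		/* zero-page read faults install the pte with
		 * pte_mkspecial(), so the small zero page is caught
		 * here and never reaches the smaps accounting */
		if (unlikely(pte_special(pte)))
			return NULL;
	} else if (is_zero_pfn(pfn)) {
		/* no special bit: identify the zero page by pfn */
		return NULL;
	}
	return pfn_to_page(pfn);	/* a "normal" page */
}
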

> if (PageAnon(page))
> @@ -516,7 +516,8 @@ static int smaps_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end,
> if (pmd_trans_huge_lock(pmd, vma, &ptl) == 1) {
> smaps_pte_entry(*(pte_t *)pmd, addr, HPAGE_PMD_SIZE, walk);
> spin_unlock(ptl);
> - mss->anonymous_thp += HPAGE_PMD_SIZE;
> + if (!is_huge_zero_pmd(*pmd))
> + mss->anonymous_thp += HPAGE_PMD_SIZE;
> return 0;
> }

How about we just move this hunk in to smaps_pte_entry()? Something
along these lines:

...
if (PageAnon(page)) {
mss->anonymous += ptent_size;
+ if (PageTransHuge(page))
+ mss->anonymous_thp += ptent_size;
}

If we do that, plus teaching vm_normal_page() about huge_zero_pages, it
will help keep the hacks and the extra code due to huge pages to a minimum.

> diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
> index 63579cb..758f569 100644
> --- a/include/linux/huge_mm.h
> +++ b/include/linux/huge_mm.h
> @@ -34,6 +34,10 @@ extern int change_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd,
> unsigned long addr, pgprot_t newprot,
> int prot_numa);
>
> +extern bool is_huge_zero_page(struct page *page);
> +
> +extern bool is_huge_zero_pmd(pmd_t pmd);
> +
> enum transparent_hugepage_flag {
> TRANSPARENT_HUGEPAGE_FLAG,
> TRANSPARENT_HUGEPAGE_REQ_MADV_FLAG,
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index d9a21d06..bedc3ae 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -173,12 +173,12 @@ static int start_khugepaged(void)
> static atomic_t huge_zero_refcount;
> static struct page *huge_zero_page __read_mostly;
>
> -static inline bool is_huge_zero_page(struct page *page)
> +bool is_huge_zero_page(struct page *page)
> {
> return ACCESS_ONCE(huge_zero_page) == page;
> }
>
> -static inline bool is_huge_zero_pmd(pmd_t pmd)
> +bool is_huge_zero_pmd(pmd_t pmd)
> {
> return is_huge_zero_page(pmd_page(pmd));
> }

^^^ And all these exports.

2014-10-10 01:37:08

by Fengwei Yin

Subject: Re: [PATCH] smaps should deal with huge zero page exactly same as normal zero page

On Thu, Oct 09, 2014 at 09:36:30AM -0700, Dave Hansen wrote:
> On 10/09/2014 02:19 AM, Fengwei Yin wrote:
> > diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
> > index 80ca4fb..8550b27 100644
> > --- a/fs/proc/task_mmu.c
> > +++ b/fs/proc/task_mmu.c
> > @@ -476,7 +476,7 @@ static void smaps_pte_entry(pte_t ptent, unsigned long addr,
> > mss->nonlinear += ptent_size;
> > }
> >
> > - if (!page)
> > + if (!page || is_huge_zero_page(page))
> > return;
>
> This really seems like a bit of a hack. A normal (small) zero page
> won't make it to this point because of the vm_normal_page() check in
> smaps_pte_entry() which hits the _PAGE_SPECIAL bit in the pte.
>
> Is there a reason we can't set _PAGE_SPECIAL on the huge_zero_page ptes?
> If we did that, we wouldn't need a special case here.
>
> If we can't do that for some reason, can we at least teach
> vm_normal_page() about the huge_zero_page in some other way?
>
Thanks a lot for the comments. I will check whether we can remove the
hack.

> > if (PageAnon(page))
> > @@ -516,7 +516,8 @@ static int smaps_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end,
> > if (pmd_trans_huge_lock(pmd, vma, &ptl) == 1) {
> > smaps_pte_entry(*(pte_t *)pmd, addr, HPAGE_PMD_SIZE, walk);
> > spin_unlock(ptl);
> > - mss->anonymous_thp += HPAGE_PMD_SIZE;
> > + if (!is_huge_zero_pmd(*pmd))
> > + mss->anonymous_thp += HPAGE_PMD_SIZE;
> > return 0;
> > }
>
> How about we just move this hunk in to smaps_pte_entry()? Something
> along these lines:
>
> ...
> if (PageAnon(page)) {
> mss->anonymous += ptent_size;
> + if (PageTransHuge(page))
> + mss->anonymous_thp += ptent_size;
> }
>
> If we do that, plus teaching vm_normal_page() about huge_zero_pages, it
> will help keep the hacks and the extra code due to huge pages to a minimum.
>
> > diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
> > index 63579cb..758f569 100644
> > --- a/include/linux/huge_mm.h
> > +++ b/include/linux/huge_mm.h
> > @@ -34,6 +34,10 @@ extern int change_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd,
> > unsigned long addr, pgprot_t newprot,
> > int prot_numa);
> >
> > +extern bool is_huge_zero_page(struct page *page);
> > +
> > +extern bool is_huge_zero_pmd(pmd_t pmd);
> > +
> > enum transparent_hugepage_flag {
> > TRANSPARENT_HUGEPAGE_FLAG,
> > TRANSPARENT_HUGEPAGE_REQ_MADV_FLAG,
> > diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> > index d9a21d06..bedc3ae 100644
> > --- a/mm/huge_memory.c
> > +++ b/mm/huge_memory.c
> > @@ -173,12 +173,12 @@ static int start_khugepaged(void)
> > static atomic_t huge_zero_refcount;
> > static struct page *huge_zero_page __read_mostly;
> >
> > -static inline bool is_huge_zero_page(struct page *page)
> > +bool is_huge_zero_page(struct page *page)
> > {
> > return ACCESS_ONCE(huge_zero_page) == page;
> > }
> >
> > -static inline bool is_huge_zero_pmd(pmd_t pmd)
> > +bool is_huge_zero_pmd(pmd_t pmd)
> > {
> > return is_huge_zero_page(pmd_page(pmd));
> > }
>
> ^^^ And all these exports.

2014-10-10 05:24:22

by Fengwei Yin

Subject: Re: [PATCH] smaps should deal with huge zero page exactly same as normal zero page

On Thu, Oct 09, 2014 at 09:36:30AM -0700, Dave Hansen wrote:
> On 10/09/2014 02:19 AM, Fengwei Yin wrote:
> > diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
> > index 80ca4fb..8550b27 100644
> > --- a/fs/proc/task_mmu.c
> > +++ b/fs/proc/task_mmu.c
> > @@ -476,7 +476,7 @@ static void smaps_pte_entry(pte_t ptent, unsigned long addr,
> > mss->nonlinear += ptent_size;
> > }
> >
> > - if (!page)
> > + if (!page || is_huge_zero_page(page))
> > return;
>
> This really seems like a bit of a hack. A normal (small) zero page
> won't make it to this point because of the vm_normal_page() check in
> smaps_pte_entry() which hits the _PAGE_SPECIAL bit in the pte.
>
> Is there a reason we can't set _PAGE_SPECIAL on the huge_zero_page ptes?
> If we did that, we wouldn't need a special case here.
>
> If we can't do that for some reason, can we at least teach
> vm_normal_page() about the huge_zero_page in some other way?
I suppose _PAGE_SPECIAL can't work, for two reasons:
1. Not every arch has HAVE_PTE_SPECIAL set, so we would still need
another way to handle the arches without PTE_SPECIAL.
2. _PAGE_SPECIAL currently exists only for ptes. Adding it for huge
pages would mean introducing pmd_mkspecial() and friends, which I don't
think is worth doing now (unless you want it. :)).

Yes, we could move the check to vm_normal_page(), but it would still
need functions exported from huge_memory.c.

Please check the new patch.

>
> > if (PageAnon(page))
> > @@ -516,7 +516,8 @@ static int smaps_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end,
> > if (pmd_trans_huge_lock(pmd, vma, &ptl) == 1) {
> > smaps_pte_entry(*(pte_t *)pmd, addr, HPAGE_PMD_SIZE, walk);
> > spin_unlock(ptl);
> > - mss->anonymous_thp += HPAGE_PMD_SIZE;
> > + if (!is_huge_zero_pmd(*pmd))
> > + mss->anonymous_thp += HPAGE_PMD_SIZE;
> > return 0;
> > }
>
> How about we just move this hunk in to smaps_pte_entry()? Something
> along these lines:
>
> ...
> if (PageAnon(page)) {
> mss->anonymous += ptent_size;
> + if (PageTransHuge(page))
> + mss->anonymous_thp += ptent_size;
> }
Done.
>
> If we do that, plus teaching vm_normal_page() about huge_zero_pages, it
> will help keep the hacks and the extra code due to huge pages to a minimum.
>
> > diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
> > index 63579cb..758f569 100644
> > --- a/include/linux/huge_mm.h
> > +++ b/include/linux/huge_mm.h
> > @@ -34,6 +34,10 @@ extern int change_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd,
> > unsigned long addr, pgprot_t newprot,
> > int prot_numa);
> >
> > +extern bool is_huge_zero_page(struct page *page);
> > +
> > +extern bool is_huge_zero_pmd(pmd_t pmd);
> > +
> > enum transparent_hugepage_flag {
> > TRANSPARENT_HUGEPAGE_FLAG,
> > TRANSPARENT_HUGEPAGE_REQ_MADV_FLAG,
> > diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> > index d9a21d06..bedc3ae 100644
> > --- a/mm/huge_memory.c
> > +++ b/mm/huge_memory.c
> > @@ -173,12 +173,12 @@ static int start_khugepaged(void)
> > static atomic_t huge_zero_refcount;
> > static struct page *huge_zero_page __read_mostly;
> >
> > -static inline bool is_huge_zero_page(struct page *page)
> > +bool is_huge_zero_page(struct page *page)
> > {
> > return ACCESS_ONCE(huge_zero_page) == page;
> > }
> >
> > -static inline bool is_huge_zero_pmd(pmd_t pmd)
> > +bool is_huge_zero_pmd(pmd_t pmd)
> > {
> > return is_huge_zero_page(pmd_page(pmd));
> > }
>
> ^^^ And all these exports.

A new function is_huge_zero_pfn() is added to mm/huge_memory.c
and exported.

Thanks.


Attachments:
0001-smaps-should-deal-with-huge-zero-page-exactly-same-a.patch (2.59 kB)

2014-10-10 14:35:45

by Dave Hansen

Subject: Re: [PATCH] smaps should deal with huge zero page exactly same as normal zero page

On 10/10/2014 06:21 AM, Fengwei Yin wrote:
> @@ -787,6 +788,9 @@ check_pfn:
> return NULL;
> }
>
> + if (is_huge_zero_pfn(pfn))
> + return NULL;
> +

That looks a lot better. One thing, why not put the is_huge_zero_pfn()
check next to the is_zero_pfn() check?

2014-10-11 02:14:21

by Fengwei Yin

Subject: Re: [PATCH] smaps should deal with huge zero page exactly same as normal zero page

On Fri, Oct 10, 2014 at 07:35:15AM -0700, Dave Hansen wrote:
> On 10/10/2014 06:21 AM, Fengwei Yin wrote:
> > @@ -787,6 +788,9 @@ check_pfn:
> > return NULL;
> > }
> >
> > + if (is_huge_zero_pfn(pfn))
> > + return NULL;
> > +
>
> That looks a lot better. One thing, why not put the is_huge_zero_pfn()
> check next to the is_zero_pfn() check?

On an arch with the PTE_SPECIAL feature, we need to make sure
is_huge_zero_pfn() sits on the common path reached after the
!pte_special() check jumps to check_pfn:. If it were placed next to
is_zero_pfn(), it would never be reached on those arches, because the
huge zero pmd is not marked special.
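Concretely, a hedged sketch of the vm_normal_page() flow with the patch
applied (paraphrased from the 3.x-era mm/memory.c, not verbatim; the new
check is marked):

struct page *vm_normal_page(struct vm_area_struct *vma,
			    unsigned long addr, pte_t pte)
{
	unsigned long pfn = pte_pfn(pte);

	if (HAVE_PTE_SPECIAL) {
		if (likely(!pte_special(pte)))
			goto check_pfn;	/* the huge zero pmd, cast to a
					 * pte, takes this shortcut */
		return NULL;		/* special ptes (incl. the small
					 * zero page) handled here */
	}

	if (is_zero_pfn(pfn))		/* only reached on arches
					 * without PTE_SPECIAL */
		return NULL;
check_pfn:
	if (unlikely(pfn > highest_memmap_pfn))
		return NULL;
	if (is_huge_zero_pfn(pfn))	/* new: on the common path, so it
					 * runs on every architecture */
		return NULL;
	return pfn_to_page(pfn);
}
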

2014-10-14 11:57:43

by Kirill A. Shutemov

Subject: Re: [PATCH] smaps should deal with huge zero page exactly same as normal zero page

On Fri, Oct 10, 2014 at 09:21:08PM +0800, Fengwei Yin wrote:
> On Thu, Oct 09, 2014 at 09:36:30AM -0700, Dave Hansen wrote:
> > On 10/09/2014 02:19 AM, Fengwei Yin wrote:
> > > diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
> > > index 80ca4fb..8550b27 100644
> > > --- a/fs/proc/task_mmu.c
> > > +++ b/fs/proc/task_mmu.c
> > > @@ -476,7 +476,7 @@ static void smaps_pte_entry(pte_t ptent, unsigned long addr,
> > > mss->nonlinear += ptent_size;
> > > }
> > >
> > > - if (!page)
> > > + if (!page || is_huge_zero_page(page))
> > > return;
> >
> > This really seems like a bit of a hack. A normal (small) zero page
> > won't make it to this point because of the vm_normal_page() check in
> > smaps_pte_entry() which hits the _PAGE_SPECIAL bit in the pte.
> >
> > Is there a reason we can't set _PAGE_SPECIAL on the huge_zero_page ptes?
> > If we did that, we wouldn't need a special case here.
> >
> > If we can't do that for some reason, can we at least teach
> > vm_normal_page() about the huge_zero_page in some other way?
> I suppose _PAGE_SPECIAL can't work, for two reasons:
> 1. Not every arch has HAVE_PTE_SPECIAL set, so we would still need
> another way to handle the arches without PTE_SPECIAL.
> 2. _PAGE_SPECIAL currently exists only for ptes. Adding it for huge
> pages would mean introducing pmd_mkspecial() and friends, which I don't
> think is worth doing now (unless you want it. :)).
>
> Yes, we could move the check to vm_normal_page(), but it would still
> need functions exported from huge_memory.c.
>
> Please check the new patch.
>
> >
> > > if (PageAnon(page))
> > > @@ -516,7 +516,8 @@ static int smaps_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end,
> > > if (pmd_trans_huge_lock(pmd, vma, &ptl) == 1) {
> > > smaps_pte_entry(*(pte_t *)pmd, addr, HPAGE_PMD_SIZE, walk);
> > > spin_unlock(ptl);
> > > - mss->anonymous_thp += HPAGE_PMD_SIZE;
> > > + if (!is_huge_zero_pmd(*pmd))
> > > + mss->anonymous_thp += HPAGE_PMD_SIZE;
> > > return 0;
> > > }
> >
> > How about we just move this hunk in to smaps_pte_entry()? Something
> > along these lines:
> >
> > ...
> > if (PageAnon(page)) {
> > mss->anonymous += ptent_size;
> > + if (PageTransHuge(page))
> > + mss->anonymous_thp += ptent_size;
> > }
> Done.
> >
> > If we do that, plus teaching vm_normal_page() about huge_zero_pages, it
> > will help keep the hacks and the extra code due to huge pages to a minimum.
> >
> > > diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
> > > index 63579cb..758f569 100644
> > > --- a/include/linux/huge_mm.h
> > > +++ b/include/linux/huge_mm.h
> > > @@ -34,6 +34,10 @@ extern int change_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd,
> > > unsigned long addr, pgprot_t newprot,
> > > int prot_numa);
> > >
> > > +extern bool is_huge_zero_page(struct page *page);
> > > +
> > > +extern bool is_huge_zero_pmd(pmd_t pmd);
> > > +
> > > enum transparent_hugepage_flag {
> > > TRANSPARENT_HUGEPAGE_FLAG,
> > > TRANSPARENT_HUGEPAGE_REQ_MADV_FLAG,
> > > diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> > > index d9a21d06..bedc3ae 100644
> > > --- a/mm/huge_memory.c
> > > +++ b/mm/huge_memory.c
> > > @@ -173,12 +173,12 @@ static int start_khugepaged(void)
> > > static atomic_t huge_zero_refcount;
> > > static struct page *huge_zero_page __read_mostly;
> > >
> > > -static inline bool is_huge_zero_page(struct page *page)
> > > +bool is_huge_zero_page(struct page *page)
> > > {
> > > return ACCESS_ONCE(huge_zero_page) == page;
> > > }
> > >
> > > -static inline bool is_huge_zero_pmd(pmd_t pmd)
> > > +bool is_huge_zero_pmd(pmd_t pmd)
> > > {
> > > return is_huge_zero_page(pmd_page(pmd));
> > > }
> >
> > ^^^ And all these exports.
>
> A new function is_huge_zero_pfn() is added to mm/huge_memory.c
> and exported.
>
> Thanks.

> From 4e7bdd5bc22874175982ab50303eab32843c753c Mon Sep 17 00:00:00 2001
> From: Fengwei Yin <[email protected]>
> Date: Thu, 9 Oct 2014 22:20:58 +0800
> Subject: [PATCH] smaps should deal with huge zero page exactly same as normal
> zero page.
>

Description?

> Signed-off-by: Fengwei Yin <[email protected]>
> ---
> fs/proc/task_mmu.c | 6 ++++--
> include/linux/huge_mm.h | 2 ++
> mm/huge_memory.c | 5 +++++
> mm/memory.c | 4 ++++
> 4 files changed, 15 insertions(+), 2 deletions(-)
>
> diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
> index c341568..fb19c0c 100644
> --- a/fs/proc/task_mmu.c
> +++ b/fs/proc/task_mmu.c
> @@ -471,8 +471,11 @@ static void smaps_pte_entry(pte_t ptent, unsigned long addr,
> if (!page)
> return;
>
> - if (PageAnon(page))
> + if (PageAnon(page)) {
> mss->anonymous += ptent_size;
> + if (PageTransHuge(page))
> + mss->anonymous_thp += HPAGE_PMD_SIZE;
> + }
>
> if (page->index != pgoff)
> mss->nonlinear += ptent_size;
> @@ -508,7 +511,6 @@ static int smaps_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end,
> if (pmd_trans_huge_lock(pmd, vma, &ptl) == 1) {
> smaps_pte_entry(*(pte_t *)pmd, addr, HPAGE_PMD_SIZE, walk);
> spin_unlock(ptl);
> - mss->anonymous_thp += HPAGE_PMD_SIZE;
> return 0;
> }
>
> diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
> index 63579cb..9bf6263 100644
> --- a/include/linux/huge_mm.h
> +++ b/include/linux/huge_mm.h
> @@ -34,6 +34,8 @@ extern int change_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd,
> unsigned long addr, pgprot_t newprot,
> int prot_numa);
>
> +extern bool is_huge_zero_pfn(unsigned long pfn);
> +
> enum transparent_hugepage_flag {
> TRANSPARENT_HUGEPAGE_FLAG,
> TRANSPARENT_HUGEPAGE_REQ_MADV_FLAG,
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index f8ffd94..71ca4ed 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -183,6 +183,11 @@ static inline bool is_huge_zero_pmd(pmd_t pmd)
> return is_huge_zero_page(pmd_page(pmd));
> }
>
> +inline bool is_huge_zero_pfn(unsigned long pfn)

There's no way the function can be inlined.

Otherwise looks good to me.

--
Kirill A. Shutemov
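Background on the inline remark: is_huge_zero_pfn() is declared extern
in huge_mm.h and called from mm/memory.c, a different translation unit,
so a plain "inline" on the out-of-line definition in huge_memory.c can
never take effect; the fix is simply to drop the keyword. A sketch of a
corrected definition follows; the fixed patch lives only in the unshown
attachment, so this body is an assumption based on the helpers quoted
above:

/* mm/huge_memory.c -- hypothetical corrected definition */
bool is_huge_zero_pfn(unsigned long pfn)
{
	struct page *zero_page = ACCESS_ONCE(huge_zero_page);

	return zero_page && page_to_pfn(zero_page) == pfn;
}
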

2014-10-15 02:34:07

by Fengwei Yin

Subject: Re: [PATCH] smaps should deal with huge zero page exactly same as normal zero page

On Tue, Oct 14, 2014 at 02:57:30PM +0300, Kirill A. Shutemov wrote:
> On Fri, Oct 10, 2014 at 09:21:08PM +0800, Fengwei Yin wrote:
> > On Thu, Oct 09, 2014 at 09:36:30AM -0700, Dave Hansen wrote:
> > > On 10/09/2014 02:19 AM, Fengwei Yin wrote:
> > > > diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
> > > > index 80ca4fb..8550b27 100644
> > > > --- a/fs/proc/task_mmu.c
> > > > +++ b/fs/proc/task_mmu.c
> > > > @@ -476,7 +476,7 @@ static void smaps_pte_entry(pte_t ptent, unsigned long addr,
> > > > mss->nonlinear += ptent_size;
> > > > }
> > > >
> > > > - if (!page)
> > > > + if (!page || is_huge_zero_page(page))
> > > > return;
> > >
> > > This really seems like a bit of a hack. A normal (small) zero page
> > > won't make it to this point because of the vm_normal_page() check in
> > > smaps_pte_entry() which hits the _PAGE_SPECIAL bit in the pte.
> > >
> > > Is there a reason we can't set _PAGE_SPECIAL on the huge_zero_page ptes?
> > > If we did that, we wouldn't need a special case here.
> > >
> > > If we can't do that for some reason, can we at least teach
> > > vm_normal_page() about the huge_zero_page in some other way?
> > I suppose _PAGE_SPECIAL can't work, for two reasons:
> > 1. Not every arch has HAVE_PTE_SPECIAL set, so we would still need
> > another way to handle the arches without PTE_SPECIAL.
> > 2. _PAGE_SPECIAL currently exists only for ptes. Adding it for huge
> > pages would mean introducing pmd_mkspecial() and friends, which I don't
> > think is worth doing now (unless you want it. :)).
> >
> > Yes, we could move the check to vm_normal_page(), but it would still
> > need functions exported from huge_memory.c.
> >
> > Please check the new patch.
> >
> > >
> > > > if (PageAnon(page))
> > > > @@ -516,7 +516,8 @@ static int smaps_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end,
> > > > if (pmd_trans_huge_lock(pmd, vma, &ptl) == 1) {
> > > > smaps_pte_entry(*(pte_t *)pmd, addr, HPAGE_PMD_SIZE, walk);
> > > > spin_unlock(ptl);
> > > > - mss->anonymous_thp += HPAGE_PMD_SIZE;
> > > > + if (!is_huge_zero_pmd(*pmd))
> > > > + mss->anonymous_thp += HPAGE_PMD_SIZE;
> > > > return 0;
> > > > }
> > >
> > > How about we just move this hunk in to smaps_pte_entry()? Something
> > > along these lines:
> > >
> > > ...
> > > if (PageAnon(page)) {
> > > mss->anonymous += ptent_size;
> > > + if (PageTransHuge(page))
> > > + mss->anonymous_thp += ptent_size;
> > > }
> > Done.
> > >
> > > If we do that, plus teaching vm_normal_page() about huge_zero_pages, it
> > > will help keep the hacks and the extra code due to huge pages to a minimum.
> > >
> > > > diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
> > > > index 63579cb..758f569 100644
> > > > --- a/include/linux/huge_mm.h
> > > > +++ b/include/linux/huge_mm.h
> > > > @@ -34,6 +34,10 @@ extern int change_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd,
> > > > unsigned long addr, pgprot_t newprot,
> > > > int prot_numa);
> > > >
> > > > +extern bool is_huge_zero_page(struct page *page);
> > > > +
> > > > +extern bool is_huge_zero_pmd(pmd_t pmd);
> > > > +
> > > > enum transparent_hugepage_flag {
> > > > TRANSPARENT_HUGEPAGE_FLAG,
> > > > TRANSPARENT_HUGEPAGE_REQ_MADV_FLAG,
> > > > diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> > > > index d9a21d06..bedc3ae 100644
> > > > --- a/mm/huge_memory.c
> > > > +++ b/mm/huge_memory.c
> > > > @@ -173,12 +173,12 @@ static int start_khugepaged(void)
> > > > static atomic_t huge_zero_refcount;
> > > > static struct page *huge_zero_page __read_mostly;
> > > >
> > > > -static inline bool is_huge_zero_page(struct page *page)
> > > > +bool is_huge_zero_page(struct page *page)
> > > > {
> > > > return ACCESS_ONCE(huge_zero_page) == page;
> > > > }
> > > >
> > > > -static inline bool is_huge_zero_pmd(pmd_t pmd)
> > > > +bool is_huge_zero_pmd(pmd_t pmd)
> > > > {
> > > > return is_huge_zero_page(pmd_page(pmd));
> > > > }
> > >
> > > ^^^ And all these exports.
> >
> > A new function is_huge_zero_pfn() is added to mm/huge_memory.c
> > and exported.
> >
> > Thanks.
>
> > From 4e7bdd5bc22874175982ab50303eab32843c753c Mon Sep 17 00:00:00 2001
> > From: Fengwei Yin <[email protected]>
> > Date: Thu, 9 Oct 2014 22:20:58 +0800
> > Subject: [PATCH] smaps should deal with huge zero page exactly same as normal
> > zero page.
> >
>
> Description?
>
> > Signed-off-by: Fengwei Yin <[email protected]>
> > ---
> > fs/proc/task_mmu.c | 6 ++++--
> > include/linux/huge_mm.h | 2 ++
> > mm/huge_memory.c | 5 +++++
> > mm/memory.c | 4 ++++
> > 4 files changed, 15 insertions(+), 2 deletions(-)
> >
> > diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
> > index c341568..fb19c0c 100644
> > --- a/fs/proc/task_mmu.c
> > +++ b/fs/proc/task_mmu.c
> > @@ -471,8 +471,11 @@ static void smaps_pte_entry(pte_t ptent, unsigned long addr,
> > if (!page)
> > return;
> >
> > - if (PageAnon(page))
> > + if (PageAnon(page)) {
> > mss->anonymous += ptent_size;
> > + if (PageTransHuge(page))
> > + mss->anonymous_thp += HPAGE_PMD_SIZE;
> > + }
> >
> > if (page->index != pgoff)
> > mss->nonlinear += ptent_size;
> > @@ -508,7 +511,6 @@ static int smaps_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end,
> > if (pmd_trans_huge_lock(pmd, vma, &ptl) == 1) {
> > smaps_pte_entry(*(pte_t *)pmd, addr, HPAGE_PMD_SIZE, walk);
> > spin_unlock(ptl);
> > - mss->anonymous_thp += HPAGE_PMD_SIZE;
> > return 0;
> > }
> >
> > diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
> > index 63579cb..9bf6263 100644
> > --- a/include/linux/huge_mm.h
> > +++ b/include/linux/huge_mm.h
> > @@ -34,6 +34,8 @@ extern int change_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd,
> > unsigned long addr, pgprot_t newprot,
> > int prot_numa);
> >
> > +extern bool is_huge_zero_pfn(unsigned long pfn);
> > +
> > enum transparent_hugepage_flag {
> > TRANSPARENT_HUGEPAGE_FLAG,
> > TRANSPARENT_HUGEPAGE_REQ_MADV_FLAG,
> > diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> > index f8ffd94..71ca4ed 100644
> > --- a/mm/huge_memory.c
> > +++ b/mm/huge_memory.c
> > @@ -183,6 +183,11 @@ static inline bool is_huge_zero_pmd(pmd_t pmd)
> > return is_huge_zero_page(pmd_page(pmd));
> > }
> >
> > +inline bool is_huge_zero_pfn(unsigned long pfn)
>
> There's no way the function can be inlined.
>
> Otherwise looks good to me.
>
Thanks a lot for the comments. Updated patch attached.
> --
> Kirill A. Shutemov


Attachments:
0001-smaps-should-deal-with-huge-zero-page-exactly-same-a.patch (3.41 kB)

2014-10-17 02:49:30

by Fengwei Yin

Subject: Re: [PATCH] smaps should deal with huge zero page exactly same as normal zero page

On Fri, Oct 10, 2014 at 07:35:15AM -0700, Dave Hansen wrote:
> On 10/10/2014 06:21 AM, Fengwei Yin wrote:
> > @@ -787,6 +788,9 @@ check_pfn:
> > return NULL;
> > }
> >
> > + if (is_huge_zero_pfn(pfn))
> > + return NULL;
> > +
>
> That looks a lot better. One thing, why not put the is_huge_zero_pfn()
> check next to the is_zero_pfn() check?
Hi,
If there are no further comments, what's the next step to get this patch
merged? :) Thanks a lot.

Regards
Yin, Fengwei