Changelog:
v1 -> v2:
* update patch description, spotted by Michal
hugetlb_total_pages() does not account for all the supported hugepage
sizes. This can lead to incorrect calculation of the total number of
page frames used by hugetlb. This patch corrects the issue.
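For reference, the for_each_hstate() iterator used below walks every
registered hugepage size; around this kernel version it is roughly
(from include/linux/hugetlb.h):

        #define for_each_hstate(h) \
                for ((h) = hstates; (h) < &hstates[hugetlb_max_hstate]; (h)++)

so summing over it covers all configured hstates instead of just
default_hstate.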
Testcase:
boot: hugepagesz=1G hugepages=1
before patch:
egrep 'CommitLimit' /proc/meminfo
CommitLimit: 55434168 kB
after patch:
egrep 'CommitLimit' /proc/meminfo
CommitLimit: 54909880 kB
Signed-off-by: Wanpeng Li <[email protected]>
---
mm/hugetlb.c | 7 +++++--
1 file changed, 5 insertions(+), 2 deletions(-)
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index cdb64e4..9e25040 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -2124,8 +2124,11 @@ int hugetlb_report_node_meminfo(int nid, char *buf)
/* Return the number pages of memory we physically have, in PAGE_SIZE units. */
unsigned long hugetlb_total_pages(void)
{
- struct hstate *h = &default_hstate;
- return h->nr_huge_pages * pages_per_huge_page(h);
+ struct hstate *h;
+ unsigned long nr_total_pages = 0;
+ for_each_hstate(h)
+ nr_total_pages += h->nr_huge_pages * pages_per_huge_page(h);
+ return nr_total_pages;
}
static int hugetlb_acct_memory(struct hstate *h, long delta)
--
1.7.11.7
On Thu 14-03-13 18:49:49, Wanpeng Li wrote:
> Changelog:
> v1 -> v2:
> * update patch description, spotted by Michal
>
> hugetlb_total_pages() does not account for all the supported hugepage
> sizes.
> This can lead to incorrect calculation of the total number of
> page frames used by hugetlb. This patch corrects the issue.
Sorry to be so picky but this doesn't tell us much. Why do we need to
have the total number of hugetlb pages?
What about the following:
"hugetlb_total_pages is used for overcommit calculations but the
current implementation considers only default hugetlb page size (which
is either the first defined hugepage size or the one specified by
default_hugepagesz kernel boot parameter).
If the system is configured for more than one hugepage size (which is
possible since a137e1cc hugetlbfs: per mount huge page sizes) then
the overcommit estimation done by __vm_enough_memory (resp. shown by
meminfo_proc_show) is not precise - there is an impression of more
available/allowed memory. This can lead to an unexpected ENOMEM/EFAULT
resp. SIGSEGV when memory is accounted."
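For reference, the consumer side looks roughly like this (simplified
from __vm_enough_memory in mm/mmap.c; the real function also handles
the other overcommit modes and reserved pages):

        /* hugetlb pages cannot be overcommitted, so take them out */
        allowed = (totalram_pages - hugetlb_total_pages())
                        * sysctl_overcommit_ratio / 100;
        allowed += total_swap_pages;

so an undercounting hugetlb_total_pages() directly inflates the allowed
commit (and the CommitLimit value reported in /proc/meminfo).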
I think this is also worth pushing to the stable tree (it goes back to
2.6.27).
> Testcase:
> boot: hugepagesz=1G hugepages=1
> before patch:
> egrep 'CommitLimit' /proc/meminfo
> CommitLimit: 55434168 kB
> after patch:
> egrep 'CommitLimit' /proc/meminfo
> CommitLimit: 54909880 kB
This adds some confusion for the reader because there is only
something like a 500M difference here, without any explanation.
>
> Signed-off-by: Wanpeng Li <[email protected]>
> ---
> mm/hugetlb.c | 7 +++++--
> 1 file changed, 5 insertions(+), 2 deletions(-)
>
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index cdb64e4..9e25040 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -2124,8 +2124,11 @@ int hugetlb_report_node_meminfo(int nid, char *buf)
> /* Return the number pages of memory we physically have, in PAGE_SIZE units. */
> unsigned long hugetlb_total_pages(void)
> {
> - struct hstate *h = &default_hstate;
> - return h->nr_huge_pages * pages_per_huge_page(h);
> + struct hstate *h;
> + unsigned long nr_total_pages = 0;
> + for_each_hstate(h)
> + nr_total_pages += h->nr_huge_pages * pages_per_huge_page(h);
> + return nr_total_pages;
> }
>
> static int hugetlb_acct_memory(struct hstate *h, long delta)
> --
> 1.7.11.7
>
--
Michal Hocko
SUSE Labs
On Thu 14-03-13 19:24:11, Wanpeng Li wrote:
> On Thu, Mar 14, 2013 at 12:09:27PM +0100, Michal Hocko wrote:
> >On Thu 14-03-13 18:49:49, Wanpeng Li wrote:
> >> Changelog:
> >> v1 -> v2:
> >> * update patch description, spotted by Michal
> >>
> >> hugetlb_total_pages() does not account for all the supported hugepage
> >> sizes.
> >
> >> This can lead to incorrect calculation of the total number of
> >> page frames used by hugetlb. This patch corrects the issue.
> >
>
> Hi Michal,
>
> >Sorry to be so picky but this doesn't tell us much. Why do we need to
> >have the total number of hugetlb pages?
> >
> >What about the following:
> >"hugetlb_total_pages is used for overcommit calculations but the
> >current implementation considers only default hugetlb page size (which
> >is either the first defined hugepage size or the one specified by
> >default_hugepagesz kernel boot parameter).
> >
> >If the system is configured for more than one hugepage size (which is
> >possible since a137e1cc hugetlbfs: per mount huge page sizes) then
> >the overcommit estimation done by __vm_enough_memory (resp. shown by
> >meminfo_proc_show) is not precise - there is an impression of more
> >available/allowed memory. This can lead to an unexpected ENOMEM/EFAULT
> >resp. SIGSEGV when memory is accounted."
> >
>
> Fair enough, thanks. :-)
>
> >I think this is also worth pushing to the stable tree (it goes back to
> >2.6.27).
> >
>
> Yup, I will Cc Greg in the next version.
Ccing Greg doesn't help. All that is required is:
Cc: [email protected] # 2.6.27+
> >> Testcase:
> >> boot: hugepagesz=1G hugepages=1
> >> before patch:
> >> egrep 'CommitLimit' /proc/meminfo
> >> CommitLimit: 55434168 kB
> >> after patch:
> >> egrep 'CommitLimit' /proc/meminfo
> >> CommitLimit: 54909880 kB
> >
> >This adds some confusion for the reader because there is only
> >something like a 500M difference here, without any explanation.
> >
>
> The default overcommit ratio is 50.
And that part was missing in the description...
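For completeness, the arithmetic checks out (assuming no swap and the
default overcommit ratio of 50): excluding the single 1G hugepage from
the overcommittable memory lowers CommitLimit by

        1048576 kB * 50 / 100 = 524288 kB

and indeed 55434168 kB - 54909880 kB = 524288 kB, exactly the delta
shown in the testcase.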
[...]
--
Michal Hocko
SUSE Labs