2014-11-24 14:20:16

by Aneesh Kumar K.V

Subject: [RFC PATCH] mm/thp: Always allocate transparent hugepages on local node

This makes sure that we try to allocate hugepages from the local node.
If we can't, we fall back to small page allocation based on mempolicy.
This is based on the observation that allocating pages on the local node
is more beneficial than allocating hugepages on a remote node.

Signed-off-by: Aneesh Kumar K.V <[email protected]>
---
NOTE:
I am not sure whether we want this to be configurable per system. If not,
we could possibly remove alloc_hugepage_vma().

mm/huge_memory.c | 14 ++++++++++----
1 file changed, 10 insertions(+), 4 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index de984159cf0b..b309705e7e96 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -775,6 +775,12 @@ static inline struct page *alloc_hugepage_vma(int defrag,
 			       HPAGE_PMD_ORDER, vma, haddr, nd);
 }
 
+static inline struct page *alloc_hugepage_exact_node(int node, int defrag)
+{
+	return alloc_pages_exact_node(node, alloc_hugepage_gfpmask(defrag, 0),
+				      HPAGE_PMD_ORDER);
+}
+
 /* Caller must hold page table lock. */
 static bool set_huge_zero_page(pgtable_t pgtable, struct mm_struct *mm,
 		struct vm_area_struct *vma, unsigned long haddr, pmd_t *pmd,
@@ -830,8 +836,8 @@ int do_huge_pmd_anonymous_page(struct mm_struct *mm, struct vm_area_struct *vma,
 		}
 		return 0;
 	}
-	page = alloc_hugepage_vma(transparent_hugepage_defrag(vma),
-				  vma, haddr, numa_node_id(), 0);
+	page = alloc_hugepage_exact_node(numa_node_id(),
+					 transparent_hugepage_defrag(vma));
 	if (unlikely(!page)) {
 		count_vm_event(THP_FAULT_FALLBACK);
 		return VM_FAULT_FALLBACK;
@@ -1120,8 +1126,8 @@ int do_huge_pmd_wp_page(struct mm_struct *mm, struct vm_area_struct *vma,
 alloc:
 	if (transparent_hugepage_enabled(vma) &&
 	    !transparent_hugepage_debug_cow())
-		new_page = alloc_hugepage_vma(transparent_hugepage_defrag(vma),
-					      vma, haddr, numa_node_id(), 0);
+		new_page = alloc_hugepage_exact_node(numa_node_id(),
+						     transparent_hugepage_defrag(vma));
 	else
 		new_page = NULL;

--
2.1.0


2014-11-24 15:03:49

by Kirill A. Shutemov

Subject: Re: [RFC PATCH] mm/thp: Always allocate transparent hugepages on local node

On Mon, Nov 24, 2014 at 07:49:51PM +0530, Aneesh Kumar K.V wrote:
> This makes sure that we try to allocate hugepages from the local node.
> If we can't, we fall back to small page allocation based on mempolicy.
> This is based on the observation that allocating pages on the local node
> is more beneficial than allocating hugepages on a remote node.

The local node at allocation time is not necessarily the local node for
later use. If the policy says to use specific node[s], we should follow it.

I think it makes sense to force local allocation if policy is interleave
or if current node is in preferred or bind set.

--
Kirill A. Shutemov

2014-11-24 21:33:47

by David Rientjes

Subject: Re: [RFC PATCH] mm/thp: Always allocate transparent hugepages on local node

On Mon, 24 Nov 2014, Kirill A. Shutemov wrote:

> > This makes sure that we try to allocate hugepages from the local node.
> > If we can't, we fall back to small page allocation based on mempolicy.
> > This is based on the observation that allocating pages on the local node
> > is more beneficial than allocating hugepages on a remote node.
>
> The local node at allocation time is not necessarily the local node for
> later use. If the policy says to use specific node[s], we should follow it.
>

True, and the interaction between thp and mempolicies is fragile: if a
process has an MPOL_BIND mempolicy over a set of nodes, that does not
necessarily mean that we want to allocate thp remotely if it will always
be accessed remotely. It's simple to benchmark and show that the remote
access latency of a hugepage can exceed that of local pages. MPOL_BIND
itself is a policy of exclusion, not inclusion, and it's difficult to
define when local pages, with their cost of allocation, are better than a
remote thp.

For MPOL_BIND, if the local node is allowed then thp should be forced from
that node; if the local node is disallowed, then allocate from any node in
the nodemask. For MPOL_INTERLEAVE, I think we should only allocate thp
from the next node in order; otherwise, fail the allocation and fall back
to small pages. Is this what you meant as well?
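
In rough, untested pseudo-C (borrowing mempolicy.c internals like
get_vma_policy(), interleave_nid() and policy_zonelist() purely for
illustration, and ignoring MPOL_PREFERRED), that would look something like:

	/* gfp: the usual THP gfp mask; vma/addr: the faulting vma/address */
	struct mempolicy *pol = get_vma_policy(vma, addr);
	int node = numa_node_id();
	struct page *page;

	if (pol->mode == MPOL_INTERLEAVE) {
		/* try only the node the interleave policy picks; if this
		 * fails, the caller falls back to small pages */
		int nid = interleave_nid(pol, vma, addr,
					 PAGE_SHIFT + HPAGE_PMD_ORDER);
		page = alloc_pages_exact_node(nid, gfp, HPAGE_PMD_ORDER);
	} else if (pol->mode == MPOL_BIND &&
		   !node_isset(node, pol->v.nodes)) {
		/* local node disallowed: any node in the bind mask will do */
		page = __alloc_pages_nodemask(gfp, HPAGE_PMD_ORDER,
					      policy_zonelist(gfp, pol, node),
					      &pol->v.nodes);
	} else {
		/* local node allowed (or default policy): force it */
		page = alloc_pages_exact_node(node, gfp, HPAGE_PMD_ORDER);
	}
	mpol_cond_put(pol);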

> I think it makes sense to force local allocation if policy is interleave
> or if current node is in preferred or bind set.
>

If local allocation were forced for MPOL_INTERLEAVE and all memory is
initially faulted by cpus on a single node, then the policy has
effectively become MPOL_DEFAULT; there's no interleave.

Aside: the patch is also buggy since it passes numa_node_id() and thp is
supported on platforms that allow memoryless nodes.
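
The simplest way to avoid that in the proposed helper is probably
numa_mem_id(), which resolves to the nearest node that actually has
memory. A rough, untested sketch:

static inline struct page *alloc_hugepage_exact_node(int defrag)
{
	/*
	 * numa_mem_id() returns the nearest node with memory, whereas
	 * numa_node_id() may name a memoryless node on some platforms.
	 */
	return alloc_pages_exact_node(numa_mem_id(),
				      alloc_hugepage_gfpmask(defrag, 0),
				      HPAGE_PMD_ORDER);
}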

2014-11-25 14:17:12

by Kirill A. Shutemov

Subject: Re: [RFC PATCH] mm/thp: Always allocate transparent hugepages on local node

On Mon, Nov 24, 2014 at 01:33:42PM -0800, David Rientjes wrote:
> On Mon, 24 Nov 2014, Kirill A. Shutemov wrote:
>
> > > This makes sure that we try to allocate hugepages from the local node.
> > > If we can't, we fall back to small page allocation based on mempolicy.
> > > This is based on the observation that allocating pages on the local node
> > > is more beneficial than allocating hugepages on a remote node.
> >
> > The local node at allocation time is not necessarily the local node for
> > later use. If the policy says to use specific node[s], we should follow it.
> >
>
> True, and the interaction between thp and mempolicies is fragile: if a
> process has an MPOL_BIND mempolicy over a set of nodes, that does not
> necessarily mean that we want to allocate thp remotely if it will always
> be accessed remotely. It's simple to benchmark and show that the remote
> access latency of a hugepage can exceed that of local pages. MPOL_BIND
> itself is a policy of exclusion, not inclusion, and it's difficult to
> define when local pages, with their cost of allocation, are better than a
> remote thp.
>
> For MPOL_BIND, if the local node is allowed then thp should be forced from
> that node; if the local node is disallowed, then allocate from any node in
> the nodemask. For MPOL_INTERLEAVE, I think we should only allocate thp
> from the next node in order; otherwise, fail the allocation and fall back
> to small pages. Is this what you meant as well?

Correct.

> > I think it makes sense to force local allocation if policy is interleave
> > or if current node is in preferred or bind set.
> >
>
> If local allocation were forced for MPOL_INTERLEAVE and all memory is
> initially faulted by cpus on a single node, then the policy has
> effectively become MPOL_DEFAULT; there's no interleave.

You're right. I don't have much experience with mempolicy code.

--
Kirill A. Shutemov

2014-11-27 06:33:13

by Aneesh Kumar K.V

Subject: Re: [RFC PATCH] mm/thp: Always allocate transparent hugepages on local node

David Rientjes <[email protected]> writes:

> On Mon, 24 Nov 2014, Kirill A. Shutemov wrote:
>
>> > This makes sure that we try to allocate hugepages from the local node.
>> > If we can't, we fall back to small page allocation based on mempolicy.
>> > This is based on the observation that allocating pages on the local node
>> > is more beneficial than allocating hugepages on a remote node.
>>
>> The local node at allocation time is not necessarily the local node for
>> later use. If the policy says to use specific node[s], we should follow it.
>>
>
> True, and the interaction between thp and mempolicies is fragile: if a
> process has an MPOL_BIND mempolicy over a set of nodes, that does not
> necessarily mean that we want to allocate thp remotely if it will always
> be accessed remotely. It's simple to benchmark and show that the remote
> access latency of a hugepage can exceed that of local pages. MPOL_BIND
> itself is a policy of exclusion, not inclusion, and it's difficult to
> define when local pages, with their cost of allocation, are better than a
> remote thp.
>
> For MPOL_BIND, if the local node is allowed then thp should be forced from
> that node; if the local node is disallowed, then allocate from any node in
> the nodemask. For MPOL_INTERLEAVE, I think we should only allocate thp
> from the next node in order; otherwise, fail the allocation and fall back
> to small pages. Is this what you meant as well?
>

Something like below

struct page *alloc_hugepage_vma(gfp_t gfp, struct vm_area_struct *vma,
				unsigned long addr, int order)
{
	struct page *page;
	nodemask_t *nmask;
	struct mempolicy *pol;
	int node = numa_node_id();
	unsigned int cpuset_mems_cookie;

retry_cpuset:
	pol = get_vma_policy(vma, addr);
	cpuset_mems_cookie = read_mems_allowed_begin();

	if (unlikely(pol->mode == MPOL_INTERLEAVE)) {
		unsigned nid;
		nid = interleave_nid(pol, vma, addr, PAGE_SHIFT + order);
		mpol_cond_put(pol);
		page = alloc_page_interleave(gfp, order, nid);
		if (unlikely(!page &&
			     read_mems_allowed_retry(cpuset_mems_cookie)))
			goto retry_cpuset;
		return page;
	}
	nmask = policy_nodemask(gfp, pol);
	if (!nmask || node_isset(node, *nmask)) {
		mpol_cond_put(pol);
		page = alloc_hugepage_exact_node(node, gfp, order);
		if (unlikely(!page &&
			     read_mems_allowed_retry(cpuset_mems_cookie)))
			goto retry_cpuset;
		return page;
	}
	/*
	 * if current node is not part of node mask, try
	 * the allocation from any node, and we can do retry
	 * in that case.
	 */
	page = __alloc_pages_nodemask(gfp, order,
				      policy_zonelist(gfp, pol, node),
				      nmask);
	mpol_cond_put(pol);
	if (unlikely(!page && read_mems_allowed_retry(cpuset_mems_cookie)))
		goto retry_cpuset;

	return page;
}
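
For reference, the call site in do_huge_pmd_anonymous_page() would then
look roughly like the below (untested, only to show the intended calling
convention; the gfp construction here is an assumption):

	gfp_t gfp = alloc_hugepage_gfpmask(transparent_hugepage_defrag(vma), 0);

	page = alloc_hugepage_vma(gfp, vma, haddr, HPAGE_PMD_ORDER);
	if (unlikely(!page)) {
		count_vm_event(THP_FAULT_FALLBACK);
		return VM_FAULT_FALLBACK;
	}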

-aneesh