Date: Fri, 20 Nov 2015 14:26:38 -0800
From: Andrew Morton
To: "Hillf Danton"
Cc: "'Naoya Horiguchi'", "'David Rientjes'", "'Dave Hansen'", "'Mel Gorman'",
 "'Joonsoo Kim'", "'Mike Kravetz'", "'Naoya Horiguchi'"
Subject: Re: [PATCH v1] mm: hugetlb: fix hugepage memory leak caused by wrong reserve count
Message-Id: <20151120142638.c505927a43dc1ede32570db0@linux-foundation.org>
In-Reply-To: <050201d12369$167a0a10$436e1e30$@alibaba-inc.com>
References: <1448004017-23679-1-git-send-email-n-horiguchi@ah.jp.nec.com>
 <050201d12369$167a0a10$436e1e30$@alibaba-inc.com>

On Fri, 20 Nov 2015 15:57:21 +0800 "Hillf Danton" wrote:

> > When dequeue_huge_page_vma() in alloc_huge_page() fails, we fall back to
> > alloc_buddy_huge_page() to directly create a hugepage from the buddy
> > allocator. In that case, however, if alloc_buddy_huge_page() succeeds we
> > don't decrement h->resv_huge_pages, which means that a successful
> > hugetlb_fault() returns without releasing the reserve count. As a result,
> > a subsequent hugetlb_fault() might fail even though there are still free
> > hugepages.
> >
> > This patch simply adds the decrement on that code path.
> >
> > I reproduced this problem when testing a v4.3 kernel in the following situation:
> > - the test machine/VM is a NUMA system,
> > - hugepage overcommitting is enabled,
> > - most hugepages are already allocated and there's only one free hugepage
> >   left, which is on node 0 (for example),
> > - another program, which calls set_mempolicy(MPOL_BIND) to bind itself to
> >   node 1, tries to allocate a hugepage,
> > - the allocation should fail, but the reserve count is still held.
> >
> > Signed-off-by: Naoya Horiguchi
> > Cc: [3.16+]
> > ---
> > - the reason I set the stable target to "3.16+" is that this patch can be
> >   applied easily/automatically on those versions. But this bug seems to be
> >   an old one, so if you are interested in backporting to older kernels,
> >   please let me know.
> > ---
> >  mm/hugetlb.c | 5 ++++-
> >  1 file changed, 4 insertions(+), 1 deletion(-)
> >
> > diff --git v4.3/mm/hugetlb.c v4.3_patched/mm/hugetlb.c
> > index 9cc7734..77c518c 100644
> > --- v4.3/mm/hugetlb.c
> > +++ v4.3_patched/mm/hugetlb.c
> > @@ -1790,7 +1790,10 @@ struct page *alloc_huge_page(struct vm_area_struct *vma,
> >  	page = alloc_buddy_huge_page(h, NUMA_NO_NODE);
> >  	if (!page)
> >  		goto out_uncharge_cgroup;
> > -
> > +	if (!avoid_reserve && vma_has_reserves(vma, gbl_chg)) {
> > +		SetPagePrivate(page);
> > +		h->resv_huge_pages--;
> > +	}
>
> I am wondering if this patch was prepared against the next tree.

It's against 4.3.
Here's the version I have, against current -linus:

--- a/mm/hugetlb.c~mm-hugetlb-fix-hugepage-memory-leak-caused-by-wrong-reserve-count
+++ a/mm/hugetlb.c
@@ -1886,7 +1886,10 @@ struct page *alloc_huge_page(struct vm_a
 	page = __alloc_buddy_huge_page_with_mpol(h, vma, addr);
 	if (!page)
 		goto out_uncharge_cgroup;
-
+	if (!avoid_reserve && vma_has_reserves(vma, gbl_chg)) {
+		SetPagePrivate(page);
+		h->resv_huge_pages--;
+	}
 	spin_lock(&hugetlb_lock);
 	list_move(&page->lru, &h->hugepage_activelist);
 	/* Fall through */

It needs a careful re-review and, preferably, a retest please.

Probably when Greg comes to merge this he'll hit problems and we'll need to
provide him with the against-4.3 patch.