Subject: Re: [PATCH] mm: fix invalid node in alloc_migrate_target()
To: Xishi Qiu, Andrew Morton
References: <56F4E104.9090505@huawei.com> <20160325122237.4ca4e0dbca215ccbf4f49922@linux-foundation.org> <56F61EC8.7080508@huawei.com> <56FA5062.2020103@suse.cz>
Cc: Joonsoo Kim, David Rientjes, Naoya Horiguchi, Laura Abbott, zhuhui@xiaomi.com, wangxq10@lzu.edu.cn, Linux MM, LKML, Dave Hansen
From: Vlastimil Babka
Message-ID: <56FA5392.1030509@suse.cz>
Date: Tue, 29 Mar 2016 12:06:10 +0200
In-Reply-To: <56FA5062.2020103@suse.cz>

On 03/29/2016 11:52 AM, Vlastimil Babka wrote:
> On 03/26/2016 06:31 AM, Xishi Qiu wrote:
>> On 2016/3/26 3:22, Andrew Morton wrote:
>>
>>> On Fri, 25 Mar 2016 14:56:04 +0800 Xishi Qiu wrote:
>>>
>>>> It is incorrect to use next_node to find a target node; it may
>>>> return MAX_NUMNODES or an invalid node. This can lead to a crash
>>>> in buddy system allocation.
>>>>
>>>> ...
>>>>
>>>> --- a/mm/page_isolation.c
>>>> +++ b/mm/page_isolation.c
>>>> @@ -289,11 +289,11 @@ struct page *alloc_migrate_target(struct page *page, unsigned long private,
>>>>  	 * now as a simple work-around, we use the next node for destination.
>>>>  	 */
>>>>  	if (PageHuge(page)) {
>>>> -		nodemask_t src = nodemask_of_node(page_to_nid(page));
>>>> -		nodemask_t dst;
>>>> -		nodes_complement(dst, src);
>>>> +		int node = next_online_node(page_to_nid(page));
>>>> +		if (node == MAX_NUMNODES)
>>>> +			node = first_online_node;
>>>>  		return alloc_huge_page_node(page_hstate(compound_head(page)),
>>>> -					    next_node(page_to_nid(page), dst));
>>>> +					    node);
>>>>  	}
>>>>
>>>>  	if (PageHighMem(page))
>>>
>>> Indeed. Can you tell us more about the circumstances under which the
>>> kernel will crash? I need to decide which kernel version(s) need the
>>> patch, but the changelog doesn't contain the info needed to make this
>>> decision (it should).
>>>
>>
>> Hi Andrew,
>>
>> I read the code in v4.4, and found that the following path may trigger
>> the bug:
>>
>> alloc_migrate_target()
>>   alloc_huge_page_node()  // the node may be offline or MAX_NUMNODES
>>     __alloc_buddy_huge_page_no_mpol()
>>       __alloc_buddy_huge_page()
>>         __hugetlb_alloc_buddy_huge_page()
>
> The code in this function seems to come from commit 099730d67417d ("mm,
> hugetlb: use memory policy when available") by Dave Hansen (adding to
> CC), which was indeed merged in 4.4-rc1.
>
> However, alloc_pages_node() is only called in the block guarded by:
>
> 	if (!IS_ENABLED(CONFIG_NUMA) || !vma) {
>
> The rather weird "!IS_ENABLED(CONFIG_NUMA)" part comes from the
> immediate follow-up commit e0ec90ee7e6f ("mm, hugetlbfs: optimize when
> NUMA=n").
>
> So I doubt the code path here can actually happen. But it's fragile and
> confusing nevertheless.
Ah, so there's actually a dangerous path:

alloc_huge_page_node()
  dequeue_huge_page_node()
    list_for_each_entry(page, &h->hugepage_freelists[nid], lru)

hugepage_freelists is MAX_NUMNODES-sized, so when nid is MAX_NUMNODES, we
access past its end.

However, look closer at how nid is obtained in alloc_migrate_target():

	nodemask_t src = nodemask_of_node(page_to_nid(page));
	nodemask_t dst;
	nodes_complement(dst, src);
	nid = next_node(page_to_nid(page), dst);

For nid to be MAX_NUMNODES, the original page has to be on node
MAX_NUMNODES-1; otherwise the complement part means we hit the very next
bit, which is set. It's actually a rather obfuscated way of doing:

	nid = page_to_nid(page) + 1;

In that case the problem is in commit c8721bbbdd36 ("mm: memory-hotplug:
enable memory hotplug to handle hugepage") from 3.12, and will likely
affect only people who tune down MAX_NUMNODES to match their machine.

>> alloc_pages_node()
>>   __alloc_pages_node()
>>     VM_BUG_ON(nid < 0 || nid >= MAX_NUMNODES);
>>     VM_WARN_ON(!node_online(nid));
>>
>> Thanks,
>> Xishi Qiu
>>
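For what it's worth, the "obfuscated way of doing nid = page_to_nid(page) + 1"
observation above can be checked with a small userspace sketch that models the
nodemask logic with a plain bitmask. This is illustration only, not kernel
code; the MAX_NUMNODES value and the next_node_model() helper are made up for
the example:

	/*
	 * Userspace model of the nodemask dance in alloc_migrate_target():
	 * nodemask_of_node() becomes a single set bit, nodes_complement()
	 * becomes a bitwise NOT masked to MAX_NUMNODES bits, and next_node()
	 * becomes a scan for the next set bit, returning MAX_NUMNODES if
	 * there is none.
	 */
	#include <assert.h>
	#include <stdio.h>

	#define MAX_NUMNODES 4	/* assumption: small value for illustration */

	static int next_node_model(int n, unsigned int mask)
	{
		int i;

		for (i = n + 1; i < MAX_NUMNODES; i++)
			if (mask & (1u << i))
				return i;
		return MAX_NUMNODES;
	}

	int main(void)
	{
		int src_nid;

		for (src_nid = 0; src_nid < MAX_NUMNODES; src_nid++) {
			unsigned int src = 1u << src_nid;	/* nodemask_of_node() */
			unsigned int dst = ~src & ((1u << MAX_NUMNODES) - 1);	/* nodes_complement() */
			int nid = next_node_model(src_nid, dst);

			/* the complement clears only src_nid, so the next set bit is src_nid + 1 */
			assert(nid == src_nid + 1);
			printf("page on node %d -> target nid %d%s\n", src_nid, nid,
			       nid == MAX_NUMNODES ? " (== MAX_NUMNODES, out of range)" : "");
		}
		return 0;
	}

For a page on node MAX_NUMNODES-1 the computed nid comes out as MAX_NUMNODES,
which is exactly the out-of-bounds index that dequeue_huge_page_node() would
then use for hugepage_freelists[nid].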