Date: Mon, 16 Nov 2009 12:12:53 -0800
From: Andrew Morton
To: Larry Woodman
Cc: "linux-kernel@vger.kernel.org", "linux-mm@kvack.org",
    Mel Gorman, Lee Schermerhorn, Adam Litke
Subject: Re: [PATCH] prevent deadlock in __unmap_hugepage_range() when alloc_huge_page() fails.
Message-Id: <20091116121253.d86920a0.akpm@linux-foundation.org>
In-Reply-To: <1257872456.3227.2.camel@dhcp-100-19-198.bos.redhat.com>
References: <1257872456.3227.2.camel@dhcp-100-19-198.bos.redhat.com>

On Tue, 10 Nov 2009 12:00:56 -0500 Larry Woodman wrote:

>
> hugetlb_fault() takes the mm->page_table_lock spinlock then calls
> hugetlb_cow().  If the alloc_huge_page() in hugetlb_cow() fails due to
> an insufficient huge page pool it calls unmap_ref_private() with the
> mm->page_table_lock held.  unmap_ref_private() then calls
> unmap_hugepage_range() which tries to acquire the mm->page_table_lock.
>
> [] print_circular_bug_tail+0x80/0x9f
> [] ? check_noncircular+0xb0/0xe8
> [] __lock_acquire+0x956/0xc0e
> [] lock_acquire+0xee/0x12e
> [] ? unmap_hugepage_range+0x3e/0x84
> [] ? unmap_hugepage_range+0x3e/0x84
> [] _spin_lock+0x40/0x89
> [] ? unmap_hugepage_range+0x3e/0x84
> [] ? alloc_huge_page+0x218/0x318
> [] unmap_hugepage_range+0x3e/0x84
> [] hugetlb_cow+0x1e2/0x3f4
> [] ? hugetlb_fault+0x453/0x4f6
> [] hugetlb_fault+0x480/0x4f6
> [] follow_hugetlb_page+0x116/0x2d9
> [] ? _spin_unlock_irq+0x3a/0x5c
> [] __get_user_pages+0x2a3/0x427
> [] get_user_pages+0x3e/0x54
> [] get_user_pages_fast+0x170/0x1b5
> [] dio_get_page+0x64/0x14a
> [] __blockdev_direct_IO+0x4b7/0xb31
> [] blkdev_direct_IO+0x58/0x6e
> [] ? blkdev_get_blocks+0x0/0xb8
> [] generic_file_aio_read+0xdd/0x528
> [] ? avc_has_perm+0x66/0x8c
> [] do_sync_read+0xf5/0x146
> [] ? autoremove_wake_function+0x0/0x5a
> [] ? security_file_permission+0x24/0x3a
> [] vfs_read+0xb5/0x126
> [] ? fget_light+0x5e/0xf8
> [] sys_read+0x54/0x8c
> [] system_call_fastpath+0x16/0x1b

Confused.  That code is very old, alloc_huge_page() failures are surely
common and afaict the bug will lead to an instantly dead box.  So why
haven't people hit this bug before now?

> This can be fixed by dropping the mm->page_table_lock around the call
> to unmap_ref_private() if alloc_huge_page() fails; it's dropped right
> below in the normal path anyway:
>

Why is that safe?  What is the page_table_lock protecting in here?
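For illustration, the lock recursion described in the quoted report reduces
to the pattern below.  This is a minimal userspace sketch, not the actual
mm/hugetlb.c code: a default (non-recursive) pthread mutex stands in for
mm->page_table_lock, and the function names are invented stand-ins.

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t page_table_lock = PTHREAD_MUTEX_INITIALIZER;

/* stands in for unmap_hugepage_range(): takes the lock itself */
static void unmap_range(void)
{
        pthread_mutex_lock(&page_table_lock);   /* second acquisition: blocks forever */
        /* ... clear the mappings ... */
        pthread_mutex_unlock(&page_table_lock);
}

/* stands in for the hugetlb_cow() fallback taken when the allocation fails */
static void cow_fallback(void)
{
        unmap_range();                          /* called with the lock already held */
}

int main(void)
{
        pthread_mutex_lock(&page_table_lock);   /* as the fault path does */
        cow_fallback();                         /* never returns: self-deadlock */
        pthread_mutex_unlock(&page_table_lock);
        puts("not reached");
        return 0;
}

Because the lock is not recursive, the second lock call in unmap_range()
never succeeds, which is the circular-locking report in the trace above.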
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index 5d7601b..f4daef4 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -1973,12 +1973,15 @@ retry_avoidcopy:
>                  */
>                 if (outside_reserve) {
>                         BUG_ON(huge_pte_none(pte));
> +                       spin_unlock(&mm->page_table_lock);
>                         if (unmap_ref_private(mm, vma, old_page, address)) {
> +                               spin_lock(&mm->page_table_lock);
>                                 BUG_ON(page_count(old_page) != 1);
>                                 BUG_ON(huge_pte_none(pte));
>                                 goto retry_avoidcopy;
>                         }
>                         WARN_ON_ONCE(1);
> +                       spin_lock(&mm->page_table_lock);
>                 }
>
>                 return -PTR_ERR(new_page);
>
>
> Signed-off-by: Larry Woodman

Patch is somewhat mucked up.  The s-o-b goes at the end of the
changelog.  And please don't include two copies of the same patch in
the one email.
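For context, the unlock/relock pattern the quoted hunk applies can be
sketched in the same standalone form.  Again this is a userspace analogue
with invented names, not the real code: it only shows that both the retry
path and the fall-through path must re-take the lock before returning to a
caller that expects it to still be held.

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t page_table_lock = PTHREAD_MUTEX_INITIALIZER;

/* stands in for unmap_ref_private(): may take the lock, since the caller dropped it */
static int unmap_ref(void)
{
        pthread_mutex_lock(&page_table_lock);
        /* ... unmap the page from the other private mappings ... */
        pthread_mutex_unlock(&page_table_lock);
        return 1;                               /* "success, retry the copy" */
}

/* stands in for the hugetlb_cow() fallback; entered with the lock held */
static void cow_fallback(void)
{
        pthread_mutex_unlock(&page_table_lock);         /* drop before calling down */
        if (unmap_ref()) {
                pthread_mutex_lock(&page_table_lock);   /* retry path re-takes it */
                puts("would goto retry_avoidcopy here");
                return;
        }
        pthread_mutex_lock(&page_table_lock);           /* fall-through re-takes it too */
}

int main(void)
{
        pthread_mutex_lock(&page_table_lock);
        cow_fallback();                 /* returns with the lock still held */
        pthread_mutex_unlock(&page_table_lock);
        puts("done");
        return 0;
}

The sketch deliberately says nothing about whether dropping the lock at
that point is safe for the page tables themselves, which is exactly the
question raised above.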