From: KOSAKI Motohiro
Date: Wed, 24 Oct 2012 20:54:33 -0400
Subject: Re: [patch for-3.7] mm, mempolicy: fix printing stack contents in numa_maps
To: David Rientjes
Cc: Sasha Levin, Mel Gorman, Peter Zijlstra, Rik van Riel, Dave Jones,
 Andrew Morton, Linus Torvalds, bhutchings@solarflare.com,
 Konstantin Khlebnikov, Naoya Horiguchi, Hugh Dickins, KAMEZAWA Hiroyuki,
 linux-kernel@vger.kernel.org, linux-mm@kvack.org

On Wed, Oct 24, 2012 at 8:08 PM, David Rientjes wrote:
> On Wed, 24 Oct 2012, Sasha Levin wrote:
>
>> > This should be fixed by 9e7814404b77 ("hold task->mempolicy while
>> > numa_maps scans.") in 3.7-rc2, can you reproduce any issues reading
>> > /proc/pid/numa_maps on that kernel?
>>
>> I was actually referring to the warnings Dave Jones saw when fuzzing
>> with trinity after the original patch was applied.
>>
>> I still see the following when fuzzing:
>>
>> [ 338.467156] BUG: sleeping function called from invalid context at kernel/mutex.c:269
>> [ 338.473719] in_atomic(): 1, irqs_disabled(): 0, pid: 6361, name: trinity-main
>> [ 338.481199] 2 locks held by trinity-main/6361:
>> [ 338.486629]  #0:  (&mm->mmap_sem){++++++}, at: [] __do_page_fault+0x1e4/0x4f0
>> [ 338.498783]  #1:  (&(&mm->page_table_lock)->rlock){+.+...}, at: [] handle_pte_fault+0x3f7/0x6a0
>> [ 338.511409] Pid: 6361, comm: trinity-main Tainted: G W 3.7.0-rc2-next-20121024-sasha-00001-gd95ef01-dirty #74
>> [ 338.530318] Call Trace:
>> [ 338.534088]  [] __might_sleep+0x1c3/0x1e0
>> [ 338.539358]  [] mutex_lock_nested+0x29/0x50
>> [ 338.545253]  [] mpol_shared_policy_lookup+0x2e/0x90
>> [ 338.545258]  [] shmem_get_policy+0x2e/0x30
>> [ 338.545264]  [] get_vma_policy+0x5a/0xa0
>> [ 338.545267]  [] mpol_misplaced+0x41/0x1d0
>> [ 338.545272]  [] handle_pte_fault+0x465/0x6a0
>> [ 338.545278]  [] ? __rcu_read_unlock+0x44/0xb0
>> [ 338.545282]  [] handle_mm_fault+0x32a/0x360
>> [ 338.545286]  [] __do_page_fault+0x480/0x4f0
>> [ 338.545293]  [] ? del_timer+0x26/0x80
>> [ 338.545298]  [] ? rcu_cleanup_after_idle+0x23/0x170
>> [ 338.545302]  [] ? rcu_eqs_exit_common+0x64/0x3a0
>> [ 338.545305]  [] ? rcu_eqs_enter_common+0x7c6/0x970
>> [ 338.545309]  [] ? rcu_eqs_exit+0x9c/0xb0
>> [ 338.545312]  [] do_page_fault+0x26/0x40
>> [ 338.545317]  [] do_async_page_fault+0x30/0xa0
>> [ 338.545321]  [] async_page_fault+0x28/0x30
>>
>
> Ok, this looks the same but it's actually a different issue:
> mpol_misplaced(), which now only exists in linux-next and not in 3.7-rc2,
> calls get_vma_policy() which may take the shared policy mutex. This
> happens while holding page_table_lock from do_huge_pmd_numa_page() but
> also from do_numa_page() while holding a spinlock on the ptl, which is
> coming from the sched/numa branch.
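
For anyone following the trace: the pattern lockdep is complaining about
boils down to taking a mutex while a spinlock is held. mutex_lock() may
sleep, and sleeping in atomic context is exactly what __might_sleep()
flags. A stripped-down sketch of the shape of the problem (the fake_*
names are made up for illustration, not the real mm/ code):

#include <linux/mutex.h>
#include <linux/spinlock.h>

static DEFINE_SPINLOCK(fake_ptl);              /* stands in for the page-table lock */
static DEFINE_MUTEX(fake_shared_policy_lock);  /* stands in for the shared policy mutex */

static void fake_shared_policy_lookup(void)
{
        mutex_lock(&fake_shared_policy_lock);  /* may sleep */
        /* ... look up the policy in the shared tree ... */
        mutex_unlock(&fake_shared_policy_lock);
}

static void fake_do_numa_page(void)
{
        spin_lock(&fake_ptl);                  /* atomic context from here on */
        /*
         * do_numa_page() -> mpol_misplaced() -> get_vma_policy()
         *   -> shmem_get_policy() -> mpol_shared_policy_lookup()
         * reaches mutex_lock() with the spinlock still held, which is
         * what triggers "BUG: sleeping function called from invalid
         * context".
         */
        fake_shared_policy_lookup();
        spin_unlock(&fake_ptl);
}

So either the lookup has to stop sleeping, or the callers have to stop
holding the ptl across it.
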
>
> Is there any way that we can avoid changing the shared policy mutex back
> into a spinlock (it was converted in b22d127a39dd ["mempolicy: fix a race
> in shared_policy_replace()"])?
>
> Adding Peter, Rik, and Mel to the cc.

Hrm, I hadn't noticed that mpol_misplaced() is in linux-next. Peter, I
guess you committed it, right? If so, may I review your mempolicy changes?
mempolicy already has a lot of horribly buggy code, and I would like to
maintain it carefully. Which tree should I look at?
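
On David's question about turning the shared policy mutex back into a
spinlock: as I understand b22d127a39dd, the lock became a mutex so that
shared_policy_replace() could perform its sleeping allocation without
dropping the lock and racing. The usual way to keep such a lock a spinlock
is to do the sleeping allocation with the lock dropped and retry if more
turns out to be needed. A rough, hypothetical sketch of that pattern
(illustrative fake_* names only, not mm/mempolicy.c):

#include <linux/slab.h>
#include <linux/spinlock.h>
#include <linux/types.h>

struct fake_sp_node {
        unsigned long start, end;
};

static DEFINE_SPINLOCK(fake_sp_lock);

/* stands in for "inserting [start, end) splits an existing node in two" */
static bool fake_need_split(unsigned long start, unsigned long end)
{
        return false;
}

static int fake_sp_insert(unsigned long start, unsigned long end)
{
        struct fake_sp_node *new, *spare = NULL;

        new = kmalloc(sizeof(*new), GFP_KERNEL);   /* may sleep: no lock held */
        if (!new)
                return -ENOMEM;
        new->start = start;
        new->end = end;

retry:
        spin_lock(&fake_sp_lock);
        if (fake_need_split(start, end) && !spare) {
                /* need a second node: drop the lock, allocate, try again */
                spin_unlock(&fake_sp_lock);
                spare = kmalloc(sizeof(*spare), GFP_KERNEL);
                if (!spare) {
                        kfree(new);
                        return -ENOMEM;
                }
                goto retry;                        /* terminates: spare is non-NULL now */
        }
        /*
         * ... link 'new' into the interval tree; if a split is needed,
         * consume 'spare' and set it to NULL ...
         */
        spin_unlock(&fake_sp_lock);
        kfree(spare);                              /* frees the spare only if it went unused */
        return 0;
}

Something along those lines would make mpol_shared_policy_lookup() safe to
call under the ptl again, at the cost of an occasional extra allocation
and retry.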