Date: Mon, 28 Sep 2009 20:30:01 +0900 (JST)
From: "KAMEZAWA Hiroyuki"
To: paulmck@linux.vnet.ibm.com
Cc: "KAMEZAWA Hiroyuki", linux-kernel@vger.kernel.org,
    "akpm@linux-foundation.org", mingo@elte.hu,
    "balbir@linux.vnet.ibm.com", "nishimura@mxp.nes.nec.co.jp"
Subject: Re: Question: RCU stall detected in memcg (Re: mmotm 2009-09-25-14-35 uploaded)
In-Reply-To: <20090928093434.GA6199@linux.vnet.ibm.com>
References: <200909252158.n8PLwFhG024011@imap1.linux-foundation.org>
            <20090928154213.8e873dec.kamezawa.hiroyu@jp.fujitsu.com>
            <20090928093434.GA6199@linux.vnet.ibm.com>

Paul E. McKenney wrote:
> On Mon, Sep 28, 2009 at 03:42:13PM +0900, KAMEZAWA Hiroyuki wrote:
>> On Fri, 25 Sep 2009 14:35:46 -0700 akpm@linux-foundation.org wrote:
>>
>> > The mm-of-the-moment snapshot 2009-09-25-14-35 has been uploaded to
>> >
>> >    http://userweb.kernel.org/~akpm/mmotm/
>> >
>> > and will soon be available at
>> >
>> >    git://git.zen-sources.org/zen/mmotm.git
>> >
>> > It contains the following patches against 2.6.31:
>>
>> While testing my (small) patch under high memory pressure with
>> memcg+hierarchy+softlimit, the following was reported:
>> ==
>> INFO: RCU detected CPU 0 stall (t=10000 jiffies)
>> sending NMI to all CPUs:
>> NMI backtrace for cpu 0
>> CPU 0:
>> Modules linked in: sco bridge stp bnep l2cap crc16 bluetooth rfkill
>> iptable_filter ip_tables ip6table_filter ip6_tables x_tables ipv6
>> cpufreq_ondemand acpi_cpufreq dm_mirror dm_region_hash dm_log
>> dm_multipath dm_mod uinput ppdev i2c_i801 pcspkr i2c_core bnx2 sg
>> e1000e parport_pc parport button shpchp megaraid_sas sd_mod scsi_mod
>> ext3 jbd uhci_hcd ohci_hcd ehci_hcd [last unloaded: microcode]
>> Pid: 2886, comm: ruby Not tainted 2.6.31-mm1 #2 PRIMERGY
>> RIP: 0010:[] [] trace_hardirqs_off_caller+0x3e/0xb0
>> RSP: 0018:ffff88004fa03d98 EFLAGS: 00000006
>> RAX: 0000000000000046 RBX: 0000000000000c00 RCX: 000000000000e501
>> RDX: ffff8806133564f0 RSI: 0000000000000002 RDI: ffffffff8102a940
>> RBP: ffff88004fa03d98 R08: 0000000000000001 R09: 0000000000000000
>> R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000002
>> R13: 0000000000000046 R14: 00000000000000ff R15: ffff88004fa03f48
>> FS: 00007fdeca0856f0(0000) GS:ffff88004fa00000(0000) knlGS:0000000000000000
>> CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
>> CR2: 00007fdeca09e000 CR3: 0000000619fc6000 CR4: 00000000000006f0
>> DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
>> DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
>> Call Trace:
>> <#DB[1]> <> Pid: 2886, comm: ruby Not tainted 2.6.31-mm1 #2
>> Call Trace:
>> [] ? show_regs+0x49/0x50
>> [] nmi_watchdog_tick+0x1e5/0x210
>> [] do_nmi+0x1b1/0x2e0
>> [] nmi+0x1a/0x2c
>> [] ? flat_send_IPI_mask+0x90/0xb0
>> [] ? trace_hardirqs_off_caller+0x3e/0xb0
>> <> [] trace_hardirqs_off+0xd/0x10
>> [] flat_send_IPI_mask+0x90/0xb0
>> [] flat_send_IPI_all+0x69/0x70
>> [] arch_trigger_all_cpu_backtrace+0x62/0xa0
>> [] __rcu_pending+0x7e/0x370
>> [] rcu_check_callbacks+0x47/0x130
>> [] update_process_times+0x46/0x70
>> [] tick_sched_timer+0x60/0x160
>> [] ? tick_sched_timer+0x0/0x160
>> [] __run_hrtimer+0xba/0x150
>> [] hrtimer_interrupt+0xd5/0x1b0
>> [] ? trace_hardirqs_off_thunk+0x3a/0x3c
>> [] smp_apic_timer_interrupt+0x6d/0x9b
>> [] apic_timer_interrupt+0x13/0x20
>> [] ? mem_cgroup_walk_tree+0x156/0x180
>> [] ? mem_cgroup_walk_tree+0x73/0x180
>> [] ? mem_cgroup_walk_tree+0x32/0x180

> mem_cgroup_walk_tree() does have a loop. Not sure why it is showing
> up on the stack twice. But it might well be the culprit. Also,
> it repeatedly calls css_get_next(), which acquires ss->id_lock.
> If some other CPU is holding this lock too long, or failed to release
> it, then we could see a stall spinning on the lock. (Though these
> sorts of problems are often caught by other diagnostics.)

Thanks. I found that the above stack trace is in css_tryget(), which
has a loop. It loops forever if css->refcnt is 0, so I now suspect a
missed refcount on the css.
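For reference, css_tryget() is roughly the following (a simplified
sketch from my reading of include/linux/cgroup.h in 2.6.31-mm1; the
comments are mine and details may differ):

	static inline bool css_tryget(struct cgroup_subsys_state *css)
	{
		/* The root css is never removed; no refcounting needed. */
		if (test_bit(CSS_ROOT, &css->flags))
			return true;
		while (!atomic_inc_not_zero(&css->refcnt)) {
			/*
			 * refcnt is 0: the only exit is the remover
			 * setting CSS_REMOVED. Until then, we spin.
			 */
			if (test_bit(CSS_REMOVED, &css->flags))
				return false;
			cpu_relax();
		}
		return true;
	}

If refcnt drops to 0 via an unbalanced css_put() but CSS_REMOVED is
never set, this loop spins forever, which would explain a stall at
exactly this point.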
>> [] ? mem_cgroup_get_local_stat+0x0/0x110

> mem_cgroup_get_local_stat() is straight-line code, so should not be
> the problem.

>> [] ? mem_control_stat_show+0x14b/0x330

> mem_control_stat_show() has a pair of fixed-iteration loops, so
> should not be the problem.

>> [] ? cgroup_seqfile_show+0x3d/0x60

> Straight-line code, should not be a problem.

>> [] ? cgroup_map_add+0x0/0x30

> Straight-line code, should not be a problem.

>> [] ? seq_read+0xf3/0x420

> I suppose that seq_read() could be repeatedly getting -EAGAIN out of
> traverse(), but unless I am missing something, we would be seeing a
> different stack trace in that case.

>> [] ? security_file_permission+0x16/0x20

> ... and so on ...

>> [] ? vfs_read+0xcc/0x190
>> [] ? sys_read+0x55/0x90
>> [] ? system_call_fastpath+0x16/0x1b
>> .....
>> ==
>> mem_cgroup_walk_tree() is not a new function, and I have been running
>> this same kind of test for a while, but this is the first time I have
>> seen this message. So I am a bit confused and not sure where to start
>> debugging.
>>
>> Does this mean mem_cgroup_walk_tree() blocked RCU's progress for
>> over 10000 jiffies? What should I suspect?

> It means that this CPU has indeed been blocking RCU's progress for more
> than 10 seconds (10,000 jiffies if HZ==1000). One thing would be to
> wait for another 30 seconds and see if you get another stall warning.
> If not, then you could try decreasing RCU_SECONDS_TILL_STALL_RECHECK to
> (say) 5*HZ.

I saw the warning twice, at the same place. Thanks, I now suspect
memcg's code.

> Either way, if you get two stack traces during the same CPU-stall event,
> it can give you a much better idea where in the callstack the stall is
> actually occurring.

Thank you for your advice :) I'll review the memcg code and add more
bug-check code to it.

Regards,
-Kame
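P.S. The kind of bug-check I have in mind is something like the sketch
below. This is illustrative only: css_tryget_dbg() is a hypothetical
name and the spin limit is arbitrary.

	/*
	 * Hypothetical instrumented variant of css_tryget(): instead of
	 * spinning forever when refcnt stays 0, warn once (with a
	 * backtrace) after too many retries and give up.
	 */
	static bool css_tryget_dbg(struct cgroup_subsys_state *css)
	{
		unsigned long spins = 0;

		if (test_bit(CSS_ROOT, &css->flags))
			return true;
		while (!atomic_inc_not_zero(&css->refcnt)) {
			if (test_bit(CSS_REMOVED, &css->flags))
				return false;
			if (WARN_ONCE(++spins > 1000000,
				      "css refcnt==0 but CSS_REMOVED not set\n"))
				return false;
			cpu_relax();
		}
		return true;
	}

That would at least turn the silent RCU stall into an immediate
one-shot warning at the spin site.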