From: Michal Hocko
Subject: Re: Lockup in wait_transaction_locked under memory pressure
Date: Thu, 25 Jun 2015 13:21:16 +0200
Message-ID: <20150625112116.GC17237@dhcp22.suse.cz>
References: <558BD447.1010503@kyup.com> <558BD507.9070002@kyup.com>
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Cc: linux-ext4@vger.kernel.org, Marian Marinov
To: Nikolay Borisov
Return-path:
Received: from cantor2.suse.de ([195.135.220.15]:41666 "EHLO mx2.suse.de"
        rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP
        id S1751692AbbFYLVT (ORCPT ); Thu, 25 Jun 2015 07:21:19 -0400
Content-Disposition: inline
In-Reply-To: <558BD507.9070002@kyup.com>
Sender: linux-ext4-owner@vger.kernel.org
List-ID:

On Thu 25-06-15 13:16:39, Nikolay Borisov wrote:
> [Resending to linux-ext4 as the first try failed]
> 
> On 06/25/2015 01:13 PM, Nikolay Borisov wrote:
> > Hello,
> > 
> > On a fairly busy server running LXC, I'm observing that sometimes
> > the processes for a particular container lock up by going into D
> > (uninterruptible sleep) state. Obtaining backtraces for those
> > processes, one thing which stands out is that they are all
> > blocked in wait_transaction_locked (part of JBD2).
> > This occurs sporadically when the particular container
> > is under memory pressure and a process is selected by
> > the OOM killer. I'm running kernel 4.0.0 and
> > oom_kill_allocating_task is enabled.

Do you have the full OOM report?

> > Here are backtraces from multiple such processes:
> > 
> > alxc9 kernel: nginx D ffff8820a90b39a8 11496 9331 30627 0x00000004
> > alxc9 kernel: ffff8820a90b39a8 ffff881ff284f010 ffff88396c6d1e90 0000000000000282
> > alxc9 kernel: ffff8820a90b0010 ffff883ff12f3870 ffff883ff12f3800 0000000000027df1
> > alxc9 kernel: ffff880413c08000 ffff8820a90b39c8 ffffffff815ab76e ffff883ff12f3870
> > alxc9 kernel: Call Trace:
> > alxc9 kernel: [] schedule+0x3e/0x90
> > alxc9 kernel: [] wait_transaction_locked+0x85/0xc0
> > alxc9 kernel: [] ? woken_wake_function+0x20/0x20
> > alxc9 kernel: [] ? __schedule+0x39a/0x870
> > alxc9 kernel: [] add_transaction_credits+0xf0/0x250
> > alxc9 kernel: [] ? schedule+0x3e/0x90
> > alxc9 kernel: [] ? schedule_timeout+0x165/0x1f0
> > alxc9 kernel: [] start_this_handle+0x184/0x420
> > alxc9 kernel: [] ? __delayacct_blkio_end+0x30/0x50
> > alxc9 kernel: [] ? kmem_cache_alloc+0xee/0x1c0
> > alxc9 kernel: [] jbd2__journal_start+0x100/0x200
> > alxc9 kernel: [] ? ext4_dirty_inode+0x3c/0x80
> > alxc9 kernel: [] __ext4_journal_start_sb+0x79/0x100
> > alxc9 kernel: [] ext4_dirty_inode+0x3c/0x80
> > alxc9 kernel: [] __mark_inode_dirty+0x173/0x400
> > alxc9 kernel: [] generic_update_time+0x85/0xd0
> > alxc9 kernel: [] ? filemap_map_pages+0x1ca/0x210
> > alxc9 kernel: [] file_update_time+0xb2/0x110
> > alxc9 kernel: [] __generic_file_write_iter+0x172/0x3a0
> > alxc9 kernel: [] ext4_file_write_iter+0x134/0x460
> > alxc9 kernel: [] ? update_rmtp+0x80/0x80
> > alxc9 kernel: [] new_sync_write+0x97/0xc0
> > alxc9 kernel: [] vfs_write+0xce/0x180
> > alxc9 kernel: [] SyS_write+0x5a/0xd0
> > alxc9 kernel: [] system_call_fastpath+0x12/0x17
> > 
> > alxc9 kernel: mysqld D ffff8821352638d8 11936 5176 30627 0x00000006
> > alxc9 kernel: ffff8821352638d8 ffff881ff2848000 ffff8812d3d28a30 0000000000000286
> > alxc9 kernel: ffff882135260010 ffff883ff12f3870 ffff883ff12f3800 0000000000027df1
> > alxc9 kernel: ffff880413c08000 ffff8821352638f8 ffffffff815ab76e ffff883ff12f3870
> > alxc9 kernel: Call Trace:
> > alxc9 kernel: [] schedule+0x3e/0x90
> > alxc9 kernel: [] wait_transaction_locked+0x85/0xc0
> > alxc9 kernel: [] ? woken_wake_function+0x20/0x20
> > alxc9 kernel: [] add_transaction_credits+0xf0/0x250
> > alxc9 kernel: [] start_this_handle+0x184/0x420
> > alxc9 kernel: [] ? kmem_cache_alloc+0xee/0x1c0
> > alxc9 kernel: [] jbd2__journal_start+0x100/0x200
> > alxc9 kernel: [] ? ext4_evict_inode+0x190/0x490
> > alxc9 kernel: [] __ext4_journal_start_sb+0x79/0x100
> > alxc9 kernel: [] ext4_evict_inode+0x190/0x490
> > alxc9 kernel: [] evict+0xb8/0x1a0
> > alxc9 kernel: [] iput_final+0xf6/0x190
> > alxc9 kernel: [] iput+0xa0/0xe0
> > alxc9 kernel: [] dentry_iput+0xa8/0xf0
> > alxc9 kernel: [] __dentry_kill+0x85/0x130
> > alxc9 kernel: [] dput+0x1bc/0x220
> > alxc9 kernel: [] __fput+0x144/0x200
> > alxc9 kernel: [] ____fput+0xe/0x10
> > alxc9 kernel: [] task_work_run+0xd5/0x120
> > alxc9 kernel: [] do_exit+0x1b9/0x560
> > alxc9 kernel: [] ? hrtimer_cancel+0x22/0x30
> > alxc9 kernel: [] do_group_exit+0x56/0x100
> > alxc9 kernel: [] get_signal+0x237/0x530
> > alxc9 kernel: [] do_signal+0x25/0x130
> > alxc9 kernel: [] ? rcu_eqs_exit+0x79/0xb0
> > alxc9 kernel: [] ? rcu_user_exit+0x13/0x20
> > alxc9 kernel: [] do_notify_resume+0x78/0xb0
> > alxc9 kernel: [] int_signal+0x12/0x17
> > 
> > My hypothesis is that the OOM killer kills a process while it is
> > performing a write to a transaction, without it having a chance to
> > complete, which leaves all other processes waiting to be woken up,
> > which naturally never happens. I wonder whether such a failure
> > scenario is even possible? If not, then what could possibly force a
> > transaction to stall for hours?

I am not familiar with the ext4 code much, but I assume that even if
the oom victim was writing to the transaction and the allocation had
failed, then the transaction should have failed as well. It would be a
bit different if that allocation was GFP_NOFAIL and the memory reserves
were completely depleted: then it would loop inside the allocator and
never finish the write (until something else releases memory). You can
increase min_free_kbytes to enlarge the memory reserves.
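To make that failure mode concrete, here is a minimal sketch of the
pattern I mean (illustrative only -- this is not the real ext4 write
path, and the function below is made up for the example; only the GFP
flags and the journal start/stop calls matter):

#include <linux/slab.h>
#include "ext4_jbd2.h"	/* ext4_journal_start()/ext4_journal_stop() */

/*
 * Hypothetical example: a task allocates with __GFP_NOFAIL while it
 * holds an open journal handle. If the memory reserves are depleted,
 * the page allocator retries forever instead of returning NULL, the
 * handle is never stopped, the running transaction cannot commit, and
 * every other task piles up in wait_transaction_locked().
 */
static int example_journaled_write(struct inode *inode)
{
	handle_t *handle;
	void *buf;

	handle = ext4_journal_start(inode, EXT4_HT_WRITE_PAGE, 1);
	if (IS_ERR(handle))
		return PTR_ERR(handle);

	/*
	 * With __GFP_NOFAIL this call can loop inside the allocator
	 * indefinitely. With plain GFP_NOFS it would return NULL and we
	 * could bail out and stop the handle instead of blocking the
	 * whole journal.
	 */
	buf = kmalloc(4096, GFP_NOFS | __GFP_NOFAIL);

	/* ... fill buf and attach it to the journaled buffers ... */

	kfree(buf);
	return ext4_journal_stop(handle);
}

For reference, the reserves are controlled via the vm.min_free_kbytes
sysctl (/proc/sys/vm/min_free_kbytes).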
But this is just a wild guess. sysrq+m would tell us more about the
current state of the memory. It would also be good to check whether the
OOM victim managed to die (oom report and sysrq+t).

--
Michal Hocko
SUSE Labs