Date: Tue, 20 Oct 2009 20:48:16 +0200
From: "Krzysztof Helt"
To: linux-kernel, jfs-discussion@lists.sourceforge.net
Subject: [PATCH] jfs: lockdep fix
Message-ID: <4ade05f0af05a2.00981454@wp.pl>

From: Krzysztof Helt

Release the rdwrlock semaphore during memory allocation. This fixes the lockup
already reported here:

http://www.mail-archive.com/jfs-discussion@lists.sourceforge.net/msg01389.html

The problem is that memory allocation is done with the rdwrlock semaphore held,
so the VM can get back into the jfs layer and try to take the rdwrlock again.

The patch also fixes the lockdep report below. That problem arises because the
commit_mutex is acquired while the rdwrlock semaphore is held and with
interrupts enabled. An interrupt may then hit while the commit_mutex is held
and take the rdwrlock (again) inside the interrupt context.
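To make the re-entrancy concrete, here is a small user-space analogue
(illustrative only; the lock and function names below are made up, and this is
not jfs or kernel code): a thread already holding a non-recursive read-write
lock performs an allocation, the allocator "reclaims" by calling back into the
same subsystem, and that callback tries to take the lock the thread already
holds.

/* Illustrative user-space analogue only -- not kernel or jfs code. */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

static pthread_rwlock_t ip_rdwrlock = PTHREAD_RWLOCK_INITIALIZER;

/* Stands in for memory reclaim re-entering the filesystem
 * (jfs_writepage -> jfs_get_block), which wants the same per-inode
 * lock that the allocating path is already holding. */
static void reclaim_like_callback(void)
{
        if (pthread_rwlock_trywrlock(&ip_rdwrlock) != 0) {
                /* A blocking lock here would never return: self-deadlock. */
                printf("would deadlock: rdwrlock already held by this path\n");
                return;
        }
        pthread_rwlock_unlock(&ip_rdwrlock);
}

/* Stands in for an allocation done while the lock is held; under memory
 * pressure the allocator may call back into the lock owner. */
static void *alloc_that_may_reclaim(size_t sz)
{
        reclaim_like_callback();
        return malloc(sz);
}

int main(void)
{
        void *p;

        pthread_rwlock_wrlock(&ip_rdwrlock);    /* like IWRITE_LOCK(ip) */
        p = alloc_that_may_reclaim(4096);       /* like extAlloc() allocating */
        pthread_rwlock_unlock(&ip_rdwrlock);    /* like IWRITE_UNLOCK(ip) */
        free(p);
        return 0;
}

Build with "cc -pthread"; with a blocking write lock in the callback the
program would simply hang, which is the user-space equivalent of the lockup
reported above. The lockdep report follows.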
=========================================================
[ INFO: possible irq lock inversion dependency detected ]
2.6.32-rc3 #99
---------------------------------------------------------
kswapd0/180 just changed the state of lock:
 (&jfs_ip->rdwrlock#2){++++-.}, at: [] jfs_get_block+0x47/0x280
but this lock took another, RECLAIM_FS-unsafe lock in the past:
 (&jfs_ip->commit_mutex){+.+.+.}

and interrupts could create inverse lock ordering between them.

other info that might help us debug this:
no locks held by kswapd0/180.

the shortest dependencies between 2nd lock and 1st lock:
 -> (&jfs_ip->commit_mutex){+.+.+.} ops: 7937 {
    HARDIRQ-ON-W at:
      [] __lock_acquire+0x5d6/0xab0
      [] lock_acquire+0x7a/0xa0
      [] mutex_lock_nested+0x55/0x280
      [] jfs_commit_inode+0x61/0x120
      [] jfs_write_inode+0x35/0x50
      [] writeback_single_inode+0x1ba/0x250
      [] writeback_inodes_wb+0x27d/0x3b0
      [] wb_writeback+0xec/0x180
      [] wb_do_writeback+0x1a7/0x1d0
      [] bdi_writeback_task+0x32/0xa0
      [] bdi_start_fn+0x5d/0xb0
      [] kthread+0x6c/0x80
      [] kernel_thread_helper+0x7/0x10
    SOFTIRQ-ON-W at:
      [] __lock_acquire+0x5fd/0xab0
      [] lock_acquire+0x7a/0xa0
      [] mutex_lock_nested+0x55/0x280
      [] jfs_commit_inode+0x61/0x120
      [] jfs_write_inode+0x35/0x50
      [] writeback_single_inode+0x1ba/0x250
      [] writeback_inodes_wb+0x27d/0x3b0
      [] wb_writeback+0xec/0x180
      [] wb_do_writeback+0x1a7/0x1d0
      [] bdi_writeback_task+0x32/0xa0
      [] bdi_start_fn+0x5d/0xb0
      [] kthread+0x6c/0x80
      [] kernel_thread_helper+0x7/0x10
    RECLAIM_FS-ON-W at:
      [] mark_held_locks+0x55/0x70
      [] lockdep_trace_alloc+0xa2/0xd0
      [] kmem_cache_alloc+0x28/0x100
      [] radix_tree_preload+0x1b/0x60
      [] add_to_page_cache_locked+0x21/0xe0
      [] add_to_page_cache_lru+0x28/0x70
      [] read_cache_page_async+0xaa/0x140
      [] read_cache_page+0x12/0x60
      [] __get_metapage+0x109/0x3f0
      [] diWrite+0x16d/0x580
      [] txCommit+0x1b5/0x1040
      [] jfs_create+0x26a/0x330
      [] vfs_create+0x7f/0xd0
      [] do_filp_open+0x6c1/0x7e0
      [] do_sys_open+0x51/0x120
      [] sys_open+0x29/0x40
      [] syscall_call+0x7/0xb
    INITIAL USE at:
      [] __lock_acquire+0x212/0xab0
      [] lock_acquire+0x7a/0xa0
      [] mutex_lock_nested+0x55/0x280
      [] jfs_commit_inode+0x61/0x120
      [] jfs_write_inode+0x35/0x50
      [] writeback_single_inode+0x1ba/0x250
      [] writeback_inodes_wb+0x27d/0x3b0
      [] wb_writeback+0xec/0x180
      [] wb_do_writeback+0x1a7/0x1d0
      [] bdi_writeback_task+0x32/0xa0
      [] bdi_start_fn+0x5d/0xb0
      [] kthread+0x6c/0x80
      [] kernel_thread_helper+0x7/0x10
  }
  ... key at: [] __key.25521+0x0/0x8
  ... acquired at:
    [] validate_chain+0xa25/0x1040
    [] __lock_acquire+0x2da/0xab0
    [] lock_acquire+0x7a/0xa0
    [] mutex_lock_nested+0x55/0x280
    [] extAlloc+0x49/0x5c0
    [] jfs_get_block+0x227/0x280
    [] nobh_write_begin+0x130/0x3e0
    [] jfs_write_begin+0x3d/0x50
    [] generic_file_buffered_write+0xde/0x260
    [] __generic_file_aio_write+0x244/0x490
    [] generic_file_aio_write+0x58/0xc0
    [] do_sync_write+0xcc/0x110
    [] vfs_write+0x96/0x160
    [] sys_write+0x3d/0x70
    [] syscall_call+0x7/0xb

 -> (&jfs_ip->rdwrlock#2){++++-.} ops: 9669 {
    HARDIRQ-ON-W at:
      [] __lock_acquire+0x5d6/0xab0
      [] lock_acquire+0x7a/0xa0
      [] down_write_nested+0x50/0x70
      [] jfs_get_block+0x47/0x280
      [] nobh_write_begin+0x130/0x3e0
      [] jfs_write_begin+0x3d/0x50
      [] generic_file_buffered_write+0xde/0x260
      [] __generic_file_aio_write+0x244/0x490
      [] generic_file_aio_write+0x58/0xc0
      [] do_sync_write+0xcc/0x110
      [] vfs_write+0x96/0x160
      [] sys_write+0x3d/0x70
      [] syscall_call+0x7/0xb
    HARDIRQ-ON-R at:
      [] __lock_acquire+0x1df/0xab0
      [] lock_acquire+0x7a/0xa0
      [] down_read_nested+0x50/0x70
      [] jfs_get_block+0xe6/0x280
      [] do_mpage_readpage+0x3a6/0x510
      [] mpage_readpages+0x9d/0xd0
      [] jfs_readpages+0x19/0x20
      [] __do_page_cache_readahead+0x163/0x1f0
      [] ra_submit+0x28/0x40
      [] ondemand_readahead+0x12b/0x220
      [] page_cache_sync_readahead+0x28/0x30
      [] generic_file_aio_read+0x4ca/0x640
      [] do_sync_read+0xcc/0x110
      [] vfs_read+0x94/0x150
      [] sys_read+0x3d/0x70
      [] syscall_call+0x7/0xb
    SOFTIRQ-ON-W at:
      [] __lock_acquire+0x5fd/0xab0
      [] lock_acquire+0x7a/0xa0
      [] down_write_nested+0x50/0x70
      [] jfs_get_block+0x47/0x280
      [] nobh_write_begin+0x130/0x3e0
      [] jfs_write_begin+0x3d/0x50
      [] generic_file_buffered_write+0xde/0x260
      [] __generic_file_aio_write+0x244/0x490
      [] generic_file_aio_write+0x58/0xc0
      [] do_sync_write+0xcc/0x110
      [] vfs_write+0x96/0x160
      [] sys_write+0x3d/0x70
      [] syscall_call+0x7/0xb
    SOFTIRQ-ON-R at:
      [] __lock_acquire+0x70e/0xab0
      [] lock_acquire+0x7a/0xa0
      [] down_read_nested+0x50/0x70
      [] jfs_get_block+0xe6/0x280
      [] do_mpage_readpage+0x3a6/0x510
      [] mpage_readpages+0x9d/0xd0
      [] jfs_readpages+0x19/0x20
      [] __do_page_cache_readahead+0x163/0x1f0
      [] ra_submit+0x28/0x40
      [] ondemand_readahead+0x12b/0x220
      [] page_cache_sync_readahead+0x28/0x30
      [] generic_file_aio_read+0x4ca/0x640
      [] do_sync_read+0xcc/0x110
      [] vfs_read+0x94/0x150
      [] sys_read+0x3d/0x70
      [] syscall_call+0x7/0xb
    IN-RECLAIM_FS-W at:
      [] __lock_acquire+0x6d7/0xab0
      [] lock_acquire+0x7a/0xa0
      [] down_write_nested+0x50/0x70
      [] jfs_get_block+0x47/0x280
      [] __block_write_full_page+0xe3/0x320
      [] block_write_full_page_endio+0xc7/0xe0
      [] block_write_full_page+0x12/0x20
      [] jfs_writepage+0xf/0x20
      [] shrink_page_list+0x31c/0x6c0
      [] shrink_list+0x289/0x630
      [] shrink_zone+0x277/0x2f0
      [] kswapd+0x48b/0x4f0
      [] kthread+0x6c/0x80
      [] kernel_thread_helper+0x7/0x10
    INITIAL USE at:
      [] __lock_acquire+0x212/0xab0
      [] lock_acquire+0x7a/0xa0
      [] down_read_nested+0x50/0x70
      [] jfs_get_block+0xe6/0x280
      [] do_mpage_readpage+0x3a6/0x510
      [] mpage_readpages+0x9d/0xd0
      [] jfs_readpages+0x19/0x20
      [] __do_page_cache_readahead+0x163/0x1f0
      [] ra_submit+0x28/0x40
      [] ondemand_readahead+0x12b/0x220
      [] page_cache_sync_readahead+0x28/0x30
      [] generic_file_aio_read+0x4ca/0x640
      [] do_sync_read+0xcc/0x110
      [] vfs_read+0x94/0x150
      [] sys_read+0x3d/0x70
      [] syscall_call+0x7/0xb
  }
  ... key at: [] __key.25520+0x0/0x1c
  ... acquired at:
    [] check_usage_forwards+0x7c/0xd0
    [] mark_lock+0x17d/0x5b0
    [] __lock_acquire+0x6d7/0xab0
    [] lock_acquire+0x7a/0xa0
    [] down_write_nested+0x50/0x70
    [] jfs_get_block+0x47/0x280
    [] __block_write_full_page+0xe3/0x320
    [] block_write_full_page_endio+0xc7/0xe0
    [] block_write_full_page+0x12/0x20
    [] jfs_writepage+0xf/0x20
    [] shrink_page_list+0x31c/0x6c0
    [] shrink_list+0x289/0x630
    [] shrink_zone+0x277/0x2f0
    [] kswapd+0x48b/0x4f0
    [] kthread+0x6c/0x80
    [] kernel_thread_helper+0x7/0x10

stack backtrace:
Pid: 180, comm: kswapd0 Not tainted 2.6.32-rc3 #99
Call Trace:
 [] print_irq_inversion_bug+0x108/0x130
 [] check_usage_forwards+0x7c/0xd0
 [] mark_lock+0x17d/0x5b0
 [] ? check_usage_forwards+0x0/0xd0
 [] __lock_acquire+0x6d7/0xab0
 [] lock_acquire+0x7a/0xa0
 [] ? jfs_get_block+0x47/0x280
 [] down_write_nested+0x50/0x70
 [] ? jfs_get_block+0x47/0x280
 [] jfs_get_block+0x47/0x280
 [] ? _spin_unlock+0x1d/0x20
 [] ? create_empty_buffers+0x77/0x90
 [] __block_write_full_page+0xe3/0x320
 [] ? jfs_get_block+0x0/0x280
 [] block_write_full_page_endio+0xc7/0xe0
 [] ? end_buffer_async_write+0x0/0x160
 [] ? jfs_get_block+0x0/0x280
 [] ? trace_hardirqs_on_caller+0x12c/0x180
 [] block_write_full_page+0x12/0x20
 [] ? end_buffer_async_write+0x0/0x160
 [] jfs_writepage+0xf/0x20
 [] shrink_page_list+0x31c/0x6c0
 [] shrink_list+0x289/0x630
 [] shrink_zone+0x277/0x2f0
 [] kswapd+0x48b/0x4f0
 [] ? isolate_pages_global+0x0/0x1f0
 [] ? autoremove_wake_function+0x0/0x40
 [] ? kswapd+0x0/0x4f0
 [] kthread+0x6c/0x80
 [] ? kthread+0x0/0x80
 [] kernel_thread_helper+0x7/0x10

Signed-off-by: Krzysztof Helt
---
I am not sure whether this is the right fix for the problem. Heavy use of a
jfs volume can lock up the machine (it hit me on Ubuntu 9.04, for example).

diff --git a/fs/jfs/inode.c b/fs/jfs/inode.c
index b2ae190..de18324 100644
--- a/fs/jfs/inode.c
+++ b/fs/jfs/inode.c
@@ -244,9 +244,22 @@ int jfs_get_block(struct inode *ip, sector_t lblock,
 #ifdef _JFS_4K
 	if ((rc = extHint(ip, lblock64 << ip->i_sb->s_blocksize_bits, &xad)))
 		goto unlock;
+
+	/* release lock to avoid lockdep with jfs_ip->commit_mutex */
+	if (create)
+		IWRITE_UNLOCK(ip);
+	else
+		IREAD_UNLOCK(ip);
+
 	rc = extAlloc(ip, xlen, lblock64, &xad, false);
+
 	if (rc)
-		goto unlock;
+		return rc;
+
+	if (create)
+		IWRITE_LOCK(ip, RDWRLOCK_NORMAL);
+	else
+		IREAD_LOCK(ip, RDWRLOCK_NORMAL);
 
 	set_buffer_new(bh_result);
 	map_bh(bh_result, ip->i_sb, addressXAD(&xad));
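For readers who want the shape of the change without the jfs context, here is
a minimal user-space sketch of the same drop-and-reacquire pattern (the names
get_block_like and ext_alloc_like are assumed for illustration; this is not
the kernel code): release the lock before the allocation that may recurse
into the filesystem, bail out without re-locking on failure, and re-take the
lock in the original mode before touching shared state again.

/* User-space sketch of the drop-and-reacquire pattern used by the patch;
 * all names here are illustrative, none are taken from fs/jfs. */
#include <pthread.h>
#include <stdlib.h>

static pthread_rwlock_t ip_rdwrlock = PTHREAD_RWLOCK_INITIALIZER;

/* Stands in for extAlloc(): an operation that allocates memory and may
 * therefore re-enter the "filesystem" through reclaim. */
static int ext_alloc_like(size_t sz, void **out)
{
        *out = malloc(sz);
        return *out ? 0 : -1;
}

static int get_block_like(int create, void **out)
{
        int rc;

        if (create)
                pthread_rwlock_wrlock(&ip_rdwrlock);  /* like IWRITE_LOCK(ip) */
        else
                pthread_rwlock_rdlock(&ip_rdwrlock);  /* like IREAD_LOCK(ip) */

        /* ... work that genuinely needs the lock (elided) ... */

        /* Drop the per-inode lock around the allocation so a reclaim-driven
         * re-entry cannot self-deadlock on it. */
        pthread_rwlock_unlock(&ip_rdwrlock);

        rc = ext_alloc_like(4096, out);
        if (rc)
                return rc;      /* lock already released, just bail out */

        /* Re-take the lock in the same mode before touching shared state. */
        if (create)
                pthread_rwlock_wrlock(&ip_rdwrlock);
        else
                pthread_rwlock_rdlock(&ip_rdwrlock);

        /* ... map the result while holding the lock again (elided) ... */

        pthread_rwlock_unlock(&ip_rdwrlock);
        return 0;
}

int main(void)
{
        void *blk = NULL;
        int rc;

        rc = get_block_like(1, &blk);
        free(blk);
        return rc ? 1 : 0;
}

The open question flagged above applies here as well: the sketch only shows
the locking order the patch establishes, not whether dropping the rdwrlock at
that point in jfs_get_block is safe for jfs.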