Date: Thu, 13 Feb 2014 17:11:26 -0500
From: Johannes Weiner
To: Tetsuo Handa
Cc: akpm@linux-foundation.org, linux-kernel@vger.kernel.org
Subject: Re: [patch 00/10] mm: thrash detection-based file cache sizing v9
Message-ID: <20140213221126.GP6963@cmpxchg.org>
References: <1391475222-1169-1-git-send-email-hannes@cmpxchg.org>
 <201402130321.s1D3LH41073563@www262.sakura.ne.jp>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <201402130321.s1D3LH41073563@www262.sakura.ne.jp>

Hi Tetsuo,

On Thu, Feb 13, 2014 at 12:21:17PM +0900, Tetsuo Handa wrote:
> Hello.
>
> I got a lockdep warning shown below, and the bad commit seems to be de055616
> "mm: keep page cache radix tree nodes in check" as of next-20140212
> on linux-next.git.

Thanks for the report.  There is already a fix for this in -mm:

http://marc.info/?l=linux-mm-commits&m=139180637114625&w=2

It was merged on the 7th, so it should show up in -next... any day now?

A short sketch of the inversion, and of the fix pattern involved, is
appended below, after the quoted report.

> Regards.
>
> =========================================================
> [ INFO: possible irq lock inversion dependency detected ]
> 3.14.0-rc1-00099-gde05561 #126 Tainted: GF
> ---------------------------------------------------------
> swapper/0/0 just changed the state of lock:
>  (&(&mapping->tree_lock)->rlock){..-.-.}, at: [] test_clear_page_writeback+0x48/0x190
> but this lock took another, SOFTIRQ-unsafe lock in the past:
>  (&(&lru->node[i].lock)->rlock){+.+.-.}
>
> and interrupts could create inverse lock ordering between them.
>
>
> other info that might help us debug this:
>  Possible interrupt unsafe locking scenario:
>
>        CPU0                    CPU1
>        ----                    ----
>   lock(&(&lru->node[i].lock)->rlock);
>                                local_irq_disable();
>                                lock(&(&mapping->tree_lock)->rlock);
>                                lock(&(&lru->node[i].lock)->rlock);
>   <Interrupt>
>     lock(&(&mapping->tree_lock)->rlock);
>
>  *** DEADLOCK ***
>
> no locks held by swapper/0/0.
>
> the shortest dependencies between 2nd lock and 1st lock:
>  -> (&(&lru->node[i].lock)->rlock){+.+.-.} ops: 445715 {
>     HARDIRQ-ON-W at:
>                       [] mark_irqflags+0x130/0x190
>                       [] __lock_acquire+0x3bc/0x5e0
>                       [] lock_acquire+0x9e/0x170
>                       [] _raw_spin_lock+0x3e/0x80
>                       [] list_lru_add+0x5b/0xf0
>                       [] dput+0xbc/0x120
>                       [] __fput+0x1d2/0x310
>                       [] ____fput+0xe/0x10
>                       [] task_work_run+0xad/0xe0
>                       [] do_notify_resume+0x75/0x80
>                       [] int_signal+0x12/0x17
>     SOFTIRQ-ON-W at:
>                       [] mark_irqflags+0x154/0x190
>                       [] __lock_acquire+0x3bc/0x5e0
>                       [] lock_acquire+0x9e/0x170
>                       [] _raw_spin_lock+0x3e/0x80
>                       [] list_lru_add+0x5b/0xf0
>                       [] dput+0xbc/0x120
>                       [] __fput+0x1d2/0x310
>                       [] ____fput+0xe/0x10
>                       [] task_work_run+0xad/0xe0
>                       [] do_notify_resume+0x75/0x80
>                       [] int_signal+0x12/0x17
>     IN-RECLAIM_FS-W at:
>                       [] mark_irqflags+0xc6/0x190
>                       [] __lock_acquire+0x3bc/0x5e0
>                       [] lock_acquire+0x9e/0x170
>                       [] _raw_spin_lock+0x3e/0x80
>                       [] list_lru_count_node+0x28/0x70
>                       [] super_cache_count+0x83/0x120
>                       [] shrink_slab_node+0x47/0x350
>                       [] shrink_slab+0x8d/0x160
>                       [] kswapd_shrink_zone+0x130/0x1c0
>                       [] balance_pgdat+0x389/0x520
>                       [] kswapd+0x1bf/0x380
>                       [] kthread+0xee/0x110
>                       [] ret_from_fork+0x7c/0xb0
>     INITIAL USE at:
>                       [] __lock_acquire+0x214/0x5e0
>                       [] lock_acquire+0x9e/0x170
>                       [] _raw_spin_lock+0x3e/0x80
>                       [] list_lru_add+0x5b/0xf0
>                       [] dput+0xbc/0x120
>                       [] __fput+0x1d2/0x310
>                       [] ____fput+0xe/0x10
>                       [] task_work_run+0xad/0xe0
>                       [] do_notify_resume+0x75/0x80
>                       [] int_signal+0x12/0x17
>   }
>   ... key at: [] __key.23573+0x0/0xc
>   ... acquired at:
>    [] validate_chain+0x6e1/0x840
>    [] __lock_acquire+0x367/0x5e0
>    [] lock_acquire+0x9e/0x170
>    [] _raw_spin_lock+0x3e/0x80
>    [] list_lru_add+0x5b/0xf0
>    [] page_cache_tree_delete+0x140/0x1a0
>    [] __delete_from_page_cache+0x50/0x1c0
>    [] __remove_mapping+0x9d/0x170
>    [] shrink_page_list+0x617/0x7f0
>    [] shrink_inactive_list+0x26a/0x520
>    [] shrink_lruvec+0x336/0x420
>    [] shrink_zone+0x5c/0x120
>    [] kswapd_shrink_zone+0xfb/0x1c0
>    [] balance_pgdat+0x389/0x520
>    [] kswapd+0x1bf/0x380
>    [] kthread+0xee/0x110
>    [] ret_from_fork+0x7c/0xb0
>
> -> (&(&mapping->tree_lock)->rlock){..-.-.} ops: 11597 {
>     IN-SOFTIRQ-W at:
>                       [] mark_irqflags+0x109/0x190
>                       [] __lock_acquire+0x3bc/0x5e0
>                       [] lock_acquire+0x9e/0x170
>                       [] _raw_spin_lock_irqsave+0x50/0x90
>                       [] test_clear_page_writeback+0x48/0x190
>                       [] end_page_writeback+0x20/0x60
>                       [] ext4_finish_bio+0x168/0x220 [ext4]
>                       [] ext4_end_bio+0x97/0xe0 [ext4]
>                       [] bio_endio+0x53/0xa0
>                       [] blk_update_request+0x213/0x430
>                       [] blk_update_bidi_request+0x27/0xb0
>                       [] blk_end_bidi_request+0x2f/0x80
>                       [] blk_end_request+0x10/0x20
>                       [] scsi_end_request+0x40/0xb0
>                       [] scsi_io_completion+0x9f/0x6c0
>                       [] scsi_finish_command+0xd4/0x140
>                       [] scsi_softirq_done+0x14f/0x170
>                       [] blk_done_softirq+0x84/0xa0
>                       [] __do_softirq+0x12d/0x430
>                       [] irq_exit+0xc5/0xd0
>                       [] do_IRQ+0x67/0x110
>                       [] ret_from_intr+0x0/0x13
>                       [] arch_cpu_idle+0x26/0x30
>                       [] cpu_idle_loop+0xa9/0x3c0
>                       [] cpu_startup_entry+0x23/0x30
>                       [] rest_init+0xf4/0x170
>                       [] start_kernel+0x346/0x34d
>                       [] x86_64_start_reservations+0x2a/0x2c
>                       [] x86_64_start_kernel+0xf5/0xfc
>     IN-RECLAIM_FS-W at:
>                       [] mark_irqflags+0xc6/0x190
>                       [] __lock_acquire+0x3bc/0x5e0
>                       [] lock_acquire+0x9e/0x170
>                       [] _raw_spin_lock_irq+0x44/0x80
>                       [] __remove_mapping+0x55/0x170
>                       [] shrink_page_list+0x617/0x7f0
>                       [] shrink_inactive_list+0x26a/0x520
>                       [] shrink_lruvec+0x336/0x420
>                       [] shrink_zone+0x5c/0x120
>                       [] kswapd_shrink_zone+0xfb/0x1c0
>                       [] balance_pgdat+0x389/0x520
>                       [] kswapd+0x1bf/0x380
>                       [] kthread+0xee/0x110
>                       [] ret_from_fork+0x7c/0xb0
>     INITIAL USE at:
>                       [] __lock_acquire+0x214/0x5e0
>                       [] lock_acquire+0x9e/0x170
>                       [] _raw_spin_lock_irq+0x44/0x80
>                       [] __add_to_page_cache_locked+0xa0/0x1d0
>                       [] add_to_page_cache_lru+0x28/0x80
>                       [] grab_cache_page_write_begin+0x98/0xe0
>                       [] simple_write_begin+0x34/0x100
>                       [] generic_perform_write+0xca/0x210
>                       [] generic_file_buffered_write+0x63/0xa0
>                       [] __generic_file_aio_write+0x1ca/0x3c0
>                       [] generic_file_aio_write+0x66/0xb0
>                       [] do_sync_write+0x5f/0xa0
>                       [] vfs_write+0xc7/0x1f0
>                       [] SyS_write+0x62/0xb0
>                       [] do_copy+0x2b/0xb0
>                       [] flush_buffer+0x7d/0xa3
>                       [] gunzip+0x287/0x330
>                       [] unpack_to_rootfs+0x167/0x293
>                       [] populate_rootfs+0x62/0xdf
>                       [] do_one_initcall+0xd2/0x180
>                       [] do_basic_setup+0x9d/0xc0
>                       [] kernel_init_freeable+0x280/0x303
>                       [] kernel_init+0xe/0x130
>                       [] ret_from_fork+0x7c/0xb0
>   }
>   ... key at: [] __key.41448+0x0/0x8
>   ... acquired at:
>    [] check_usage_forwards+0x90/0x110
>    [] mark_lock_irq+0x9f/0x2c0
>    [] mark_lock+0x11c/0x1f0
>    [] mark_irqflags+0x109/0x190
>    [] __lock_acquire+0x3bc/0x5e0
>    [] lock_acquire+0x9e/0x170
>    [] _raw_spin_lock_irqsave+0x50/0x90
>    [] test_clear_page_writeback+0x48/0x190
>    [] end_page_writeback+0x20/0x60
>    [] ext4_finish_bio+0x168/0x220 [ext4]
>    [] ext4_end_bio+0x97/0xe0 [ext4]
>    [] bio_endio+0x53/0xa0
>    [] blk_update_request+0x213/0x430
>    [] blk_update_bidi_request+0x27/0xb0
>    [] blk_end_bidi_request+0x2f/0x80
>    [] blk_end_request+0x10/0x20
>    [] scsi_end_request+0x40/0xb0
>    [] scsi_io_completion+0x9f/0x6c0
>    [] scsi_finish_command+0xd4/0x140
>    [] scsi_softirq_done+0x14f/0x170
>    [] blk_done_softirq+0x84/0xa0
>    [] __do_softirq+0x12d/0x430
>    [] irq_exit+0xc5/0xd0
>    [] do_IRQ+0x67/0x110
>    [] ret_from_intr+0x0/0x13
>    [] arch_cpu_idle+0x26/0x30
>    [] cpu_idle_loop+0xa9/0x3c0
>    [] cpu_startup_entry+0x23/0x30
>    [] rest_init+0xf4/0x170
>    [] start_kernel+0x346/0x34d
>    [] x86_64_start_reservations+0x2a/0x2c
>    [] x86_64_start_kernel+0xf5/0xfc
>
>
> stack backtrace:
> CPU: 0 PID: 0 Comm: swapper/0 Tainted: GF 3.14.0-rc1-00099-gde05561 #126
> Hardware name: VMware, Inc. VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 09/20/2012
>  ffffffff8233b140 ffff8800792038d8 ffffffff8162aa99 0000000000000002
>  ffffffff8233b140 ffff880079203928 ffffffff810b5118 ffffffff81a08d4a
>  ffffffff81a08d4a ffff880079203928 ffffffff81c110a8 ffff880079203938
> Call Trace:
>  [] dump_stack+0x51/0x70
>  [] print_irq_inversion_bug+0x1c8/0x210
>  [] check_usage_forwards+0x90/0x110
>  [] ? __kernel_text_address+0x58/0x80
>  [] ? print_irq_inversion_bug+0x210/0x210
>  [] mark_lock_irq+0x9f/0x2c0
>  [] mark_lock+0x11c/0x1f0
>  [] mark_irqflags+0x109/0x190
>  [] __lock_acquire+0x3bc/0x5e0
>  [] lock_acquire+0x9e/0x170
>  [] ? test_clear_page_writeback+0x48/0x190
>  [] ? __change_page_attr_set_clr+0x4d/0xb0
>  [] _raw_spin_lock_irqsave+0x50/0x90
>  [] ? test_clear_page_writeback+0x48/0x190
>  [] test_clear_page_writeback+0x48/0x190
>  [] ? ext4_finish_bio+0x1d5/0x220 [ext4]
>  [] end_page_writeback+0x20/0x60
>  [] ext4_finish_bio+0x168/0x220 [ext4]
>  [] ? ext4_release_io_end+0x7c/0x100 [ext4]
>  [] ? blk_account_io_completion+0x119/0x1c0
>  [] ext4_end_bio+0x97/0xe0 [ext4]
>  [] bio_endio+0x53/0xa0
>  [] blk_update_request+0x213/0x430
>  [] blk_update_bidi_request+0x27/0xb0
>  [] blk_end_bidi_request+0x2f/0x80
>  [] blk_end_request+0x10/0x20
>  [] scsi_end_request+0x40/0xb0
>  [] ? _raw_spin_unlock_irqrestore+0x40/0x70
>  [] scsi_io_completion+0x9f/0x6c0
>  [] ? trace_hardirqs_on+0xd/0x10
>  [] scsi_finish_command+0xd4/0x140
>  [] scsi_softirq_done+0x14f/0x170
>  [] blk_done_softirq+0x84/0xa0
>  [] __do_softirq+0x12d/0x430
>  [] irq_exit+0xc5/0xd0
>  [] do_IRQ+0x67/0x110
>  [] common_interrupt+0x6f/0x6f
>  [] ? default_idle+0x26/0x210
>  [] ? default_idle+0x24/0x210
>  [] arch_cpu_idle+0x26/0x30
>  [] cpu_idle_loop+0xa9/0x3c0
>  [] cpu_startup_entry+0x23/0x30
>  [] rest_init+0xf4/0x170
>  [] ? csum_partial_copy_generic+0x170/0x170
>  [] start_kernel+0x346/0x34d
>  [] ? repair_env_string+0x5b/0x5b
>  [] ? memblock_reserve+0x49/0x4e
>  [] x86_64_start_reservations+0x2a/0x2c
>  [] x86_64_start_kernel+0xf5/0xfc
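
For readers who want the inversion spelled out, here is a minimal
sketch of the two lock orderings and of the kind of fix applied in
-mm.  This is an illustration under assumptions, not the literal
patch (the URL above is authoritative); in particular,
workingset_shadow_nodes and count_shadow_nodes_sketch() are
illustrative names, not necessarily those used in the series.

#include <linux/irqflags.h>
#include <linux/list_lru.h>

/* Assumed name for the shadow-node LRU introduced by the series. */
extern struct list_lru workingset_shadow_nodes;

/*
 * Ordering 1 (page reclaim): mapping->tree_lock is held with IRQs
 * disabled when page_cache_tree_delete() calls list_lru_add(), so
 * lru->node[i].lock nests inside the IRQ-disabled tree_lock.
 *
 * Ordering 2 (e.g. dput() or the shrinker, per the traces above):
 * lru->node[i].lock is taken with IRQs enabled.  An interrupt
 * arriving there can run the block-completion softirq, which takes
 * tree_lock via test_clear_page_writeback() -- the inverse order,
 * which is exactly what lockdep reports.
 *
 * Fix pattern: once the lru lock nests inside an IRQ-disabled lock,
 * every other user of that particular list_lru must exclude
 * interrupts around its list_lru calls as well:
 */
static unsigned long count_shadow_nodes_sketch(int nid)
{
	unsigned long count;

	local_irq_disable();	/* keep lru->node[i].lock IRQ-safe here */
	count = list_lru_count_node(&workingset_shadow_nodes, nid);
	local_irq_enable();

	return count;
}

Disabling interrupts around just this lru's operations keeps the cost
confined to the shadow-node users instead of converting every
list_lru node lock to spin_lock_irqsave().  My understanding is that
the merged fix also gives the shadow-node lru its own lockdep class,
so the dcache/inode lrus, which legitimately take their node locks
with IRQs enabled, no longer share a class with the IRQ-safe one.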