From: Tao Ma <tm@tao.ma>
To: linux-kernel@vger.kernel.org
Cc: Jens Axboe, Vivek Goyal, Tao Ma
Subject: CFQ: async queue blocks the whole system
Date: Thu, 9 Jun 2011 18:49:37 +0800
Message-Id: <1307616577-6101-1-git-send-email-tm@tao.ma>

Hi Jens and Vivek,

We are currently running some heavy ext4 metadata tests, and we have hit
a severe problem in CFQ. Please correct me if my statement below is
wrong.

CFQ keeps only one async queue for each priority level of each scheduling
class, and these queues are served at a very low priority, so if the
system has a large number of sync reads, the async queues can be delayed
for a very long time. As a result the flushers get blocked, then the
journal, and finally our applications[1]. (A stripped-down sketch of this
structure follows below.)
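To make the sharing concrete, here is a minimal, compile-only sketch of
that structure. The names mirror block/cfq-iosched.c as of 2.6.39, but
the definitions are cut down for illustration and are not the real
kernel types:

/* Cut-down illustration, not kernel source: CFQ keys async queues
 * by (class, priority), NOT by process. Sync queues, by contrast,
 * are per-process. */
#define IOPRIO_BE_NR 8

enum ioprio_class { IOPRIO_CLASS_RT, IOPRIO_CLASS_BE, IOPRIO_CLASS_IDLE };

struct cfq_queue;	/* opaque here: per-queue scheduling state */

struct cfq_data {
	/* async_cfqq[0][] serves RT, async_cfqq[1][] serves BE;
	 * one shared queue per priority level. */
	struct cfq_queue *async_cfqq[2][IOPRIO_BE_NR];
	struct cfq_queue *async_idle_cfqq;
};

static struct cfq_queue **
cfq_async_queue_prio(struct cfq_data *cfqd, int ioprio_class, int ioprio)
{
	switch (ioprio_class) {
	case IOPRIO_CLASS_RT:
		return &cfqd->async_cfqq[0][ioprio];
	case IOPRIO_CLASS_BE:
		return &cfqd->async_cfqq[1][ioprio];
	default:		/* IOPRIO_CLASS_IDLE */
		return &cfqd->async_idle_cfqq;
	}
}

Every async writer of a given class/priority lands on that one shared
cfq_queue, so once it is starved by sync readers, flush-8:0, jbd2 and
kswapd all back up behind it at the same time.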
I have tried letting jbd/jbd2 use WRITE_SYNC so that they can checkpoint
in time, and those patches have been sent out (the gist is sketched at
the end of this mail). But today we found another, similar blockage in
kswapd[2], which makes me think that maybe CFQ itself should be changed
so that all of these callers can benefit from it.

So is there any way to have the async queue served in a timely manner,
or at least some deadline by which an async request is guaranteed to
finish even when there are many reads?

By the way, we have tested the deadline scheduler and it seems to avoid
the problem in our tests.

[1] The messages we get from one system:

INFO: task flush-8:0:2950 blocked for more than 120 seconds.
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
flush-8:0     D ffff88062bfde738     0  2950      2 0x00000000
 ffff88062b137820 0000000000000046 ffff88062b137750 ffffffff812b7bc3
 ffff88032cddc000 ffff88062bfde380 ffff88032d3d8840 0000000c2be37400
 000000002be37601 0000000000000006 ffff88062b137760 ffffffff811c242e
Call Trace:
 [] ? scsi_request_fn+0x345/0x3df
 [] ? __blk_run_queue+0x1a/0x1c
 [] ? queue_unplugged+0x77/0x8e
 [] io_schedule+0x47/0x61
 [] get_request_wait+0xe0/0x152
 [] ? list_del_init+0x21/0x21
 [] ? elv_merge+0xa0/0xb5
 [] __make_request+0x185/0x2a8
 [] generic_make_request+0x246/0x323
 [] ? mempool_alloc_slab+0x16/0x18
 [] ? mempool_alloc+0x31/0xf4
 [] submit_bio+0xe2/0x101
 [] ? bio_alloc_bioset+0x4d/0xc5
 [] ? inc_zone_page_state+0x25/0x28
 [] submit_bh+0x105/0x129
 [] __block_write_full_page+0x218/0x31d
 [] ? __set_page_dirty_buffers+0xac/0xac
 [] ? blkdev_get_blocks+0xa6/0xa6
 [] ? __set_page_dirty_buffers+0xac/0xac
 [] ? blkdev_get_blocks+0xa6/0xa6
 [] block_write_full_page_endio+0x89/0x95
 [] block_write_full_page+0x15/0x17
 [] blkdev_writepage+0x18/0x1a
 [] __writepage+0x17/0x30
 [] write_cache_pages+0x251/0x361
 [] ? page_mapping+0x35/0x35
 [] generic_writepages+0x48/0x63
 [] do_writepages+0x21/0x2a
 [] writeback_single_inode+0xb1/0x1a8
 [] writeback_sb_inodes+0xb5/0x12f
 [] writeback_inodes_wb+0x111/0x121
 [] wb_writeback+0x1c9/0x2ce
 [] ? lock_timer_base+0x2b/0x4f
 [] wb_do_writeback+0x134/0x1a3
 [] bdi_writeback_thread+0x89/0x1b4
 [] ? perf_trace_writeback_class+0xa6/0xa6
 [] kthread+0x72/0x7a
 [] kernel_thread_helper+0x4/0x10
 [] ? kthread_bind+0x67/0x67
 [] ? gs_change+0x13/0x13

INFO: task jbd2/sda12-8:3435 blocked for more than 120 seconds.
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
jbd2/sda12-8  D ffff88062c2fabb8     0  3435      2 0x00000000
 ffff88061f6c9d30 0000000000000046 0000000000000000 0000000000000000
 0000000000000000 ffff88062c2fa800 ffff88032d238400 00000001000024b4
 0000000000000000 0000000000000000 0000000000000000 0000000000000000
Call Trace:
 [] ? spin_unlock_irqrestore+0xe/0x10
 [] jbd2_journal_commit_transaction+0x254/0x14a4 [jbd2]
 [] ? need_resched+0x23/0x2d
 [] ? list_del_init+0x21/0x21
 [] ? lock_timer_base+0x2b/0x4f
 [] ? spin_unlock_irqrestore+0xe/0x10
 [] ? try_to_del_timer_sync+0x7b/0x89
 [] ? jbd2_journal_start_commit+0x72/0x72 [jbd2]
 [] kjournald2+0x124/0x381 [jbd2]
 [] ? list_del_init+0x21/0x21
 [] kthread+0x72/0x7a
 [] kernel_thread_helper+0x4/0x10
 [] ? kthread_bind+0x67/0x67
 [] ? gs_change+0x13/0x13

INFO: task attr_set:3832 blocked for more than 120 seconds.
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
attr_set      D ffff8806157f8538     0  3832      1 0x00000000
 ffff880615565b28 0000000000000086 0000000000000001 0000000000000007
 0000000000000000 ffff8806157f8180 ffffffff8180b020 0000000000000000
 0000000000000000 0000000000000000 0000000000000000 0000000000000000
Call Trace:
 [] ? hrtick_update+0x32/0x34
 [] ? dequeue_task_fair+0x15c/0x169
 [] ? spin_unlock_irqrestore+0xe/0x10
 [] start_this_handle+0x2f5/0x564 [jbd2]
 [] ? list_del_init+0x21/0x21
 [] jbd2__journal_start+0xa5/0xd2 [jbd2]
 [] jbd2_journal_start+0x13/0x15 [jbd2]
 [] ext4_journal_start_sb+0x11a/0x129 [ext4]
 [] ? ext4_file_open+0x15b/0x181 [ext4]
 [] ext4_xattr_set+0x69/0xe2 [ext4]
 [] ext4_xattr_user_set+0x43/0x49 [ext4]
 [] generic_setxattr+0x67/0x76
 [] __vfs_setxattr_noperm+0x77/0xdc
 [] vfs_setxattr+0x7c/0x97
 [] setxattr+0xb5/0xe8
 [] ? virt_to_head_page+0x29/0x2b
 [] ? virt_to_slab+0x1e/0x2e
 [] ? __cache_free+0x44/0x1bf
 [] sys_fsetxattr+0x6b/0x91
 [] system_call_fastpath+0x16/0x1b

[2] kswapd is blocked:

 [] io_schedule+0x73/0xc0
 [682201.029914] [] get_request_wait+0xca/0x160
 [682201.030236] [] ? autoremove_wake_function+0x0/0x40
 [682201.030602] [] ? elv_merge+0x37/0x1c0
 [682201.030880] [] __make_request+0x93/0x4b0
 [682201.031511] [] generic_make_request+0x1b9/0x3c0
 [682201.031863] [] ? rcu_start_gp+0xfd/0x1e0
 [682201.032195] [] submit_bio+0x79/0x120
 [682201.032472] [] submit_bh+0xf9/0x150
 [682201.032741] [] __block_write_full_page+0x1ae/0x320
 [682201.033093] [] ? end_buffer_async_write+0x0/0x160
 [682201.033457] [] ? noalloc_get_block_write+0x0/0x60 [ext4]
 [682201.033777] [] ? end_buffer_async_write+0x0/0x160
 [682201.034079] [] block_write_full_page_endio+0xd6/0x120
 [682201.034413] [] ? noalloc_get_block_write+0x0/0x60 [ext4]
 [682201.034727] [] block_write_full_page+0x15/0x20
 [682201.035063] [] ext4_writepage+0x28e/0x340 [ext4]
 [682201.035509] [] shrink_zone+0x116d/0x1480
 [682201.035792] [] kswapd+0x60c/0x800
 [682201.036049] [] ? isolate_pages_global+0x0/0x3e0
 [682201.036397] [] ? thread_return+0x4e/0x734
 [682201.036745] [] ? autoremove_wake_function+0x0/0x40
 [682201.037055] [] ? kswapd+0x0/0x800
 [682201.037359] [] kthread+0x96/0xa0
 [682201.037671] [] child_rip+0xa/0x20
 [682201.038115] [] ? kthread+0x0/0xa0
 [682201.038421] [] ? child_rip+0x0/0x20
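For reference, the jbd/jbd2 change mentioned above boils down to one
flag. The helper below is hypothetical (the name is made up and it is
not the actual patch), but it shows the pattern, which follows
sync_dirty_buffer() in fs/buffer.c:

#include <linux/fs.h>
#include <linux/buffer_head.h>

/* Hypothetical sketch: write a journal buffer as sync I/O so that
 * CFQ does not park it on the shared async queue. */
static void journal_submit_buffer_sync(struct buffer_head *bh)
{
	lock_buffer(bh);
	clear_buffer_dirty(bh);
	bh->b_end_io = end_buffer_write_sync;
	get_bh(bh);		/* dropped by end_buffer_write_sync() */
	submit_bh(WRITE_SYNC, bh);	/* was: submit_bh(WRITE, bh) */
}

With plain WRITE the buffer is classified as async and can wait behind
all the sync readers; WRITE_SYNC makes rw_is_sync() true, so CFQ treats
it as sync I/O and serves it promptly.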
Regards,
Tao