From: "Satoshi UCHIDA" <s-uchida@ap.jp.nec.com>
To: "'Ryo Tsuruta'" <ryov@valinux.co.jp>
Cc: axboe@kernel.dk, vtaras@openvz.org, containers@lists.linux-foundation.org,
    tom-sugawara@ap.jp.nec.com, linux-kernel@vger.kernel.org
Subject: RE: [RFC][v2][patch 0/12][CFQ-cgroup] Yet another I/O bandwidth controlling subsystem for CGroups based on CFQ
Date: Thu, 26 Jun 2008 13:49:20 +0900

Hi, Tsuruta.

> In addition, I got the following message during test #2. Program
> "ioload", our benchmark program, was blocked for more than 120 seconds.
> Do you see any problems?

No. I ran similar tests in an environment with 1 to 200 processes per
group, but no such message was output.

> The result of test #1 is close to your estimation, but the result
> of test #2 is not; the gap between the estimation and the result
> increased.

In my tests as well, the gap between the estimate and the measured
result grows as the number of processes increases. Native CFQ with the
ionice command shows a similar tendency. This behavior appears when
there are more than 200 processes in total. I will continue to
investigate this problem.

Thanks,
Satoshi Uchida.

> -----Original Message-----
> From: Ryo Tsuruta [mailto:ryov@valinux.co.jp]
> Sent: Tuesday, June 03, 2008 5:16 PM
> To: s-uchida@ap.jp.nec.com
> Cc: axboe@kernel.dk; vtaras@openvz.org;
> containers@lists.linux-foundation.org; tom-sugawara@ap.jp.nec.com;
> linux-kernel@vger.kernel.org
> Subject: Re: [RFC][v2][patch 0/12][CFQ-cgroup] Yet another I/O bandwidth
> controlling subsystem for CGroups based on CFQ
>
> Hi Uchida-san,
>
> > I report my tests.
>
> I did a similar test to yours. I increased the number of I/Os issued
> simultaneously up to 100 per cgroup.
>
> Procedures:
> o Prepare 300 files, each 250MB in size, on one partition (sdb3).
> o Create three groups with priorities 0, 4 and 7.
> o Run many processes issuing random direct I/O with 4KB data on each
>   file in the three groups.
>   #1 Run 25 processes issuing read I/O only, per group.
>   #2 Run 100 processes issuing read I/O only, per group.
> o Count the number of I/Os completed within 10 minutes.
>
> The number of I/Os (percentage of total I/O):
>
>  --------------------------------------------------------------
> | group       | group 1    | group 2    | group 3    | total   |
> | priority    | 0(highest) | 4          | 7(lowest)  | I/Os    |
> |-------------+------------+------------+------------+---------|
> | Estimate    |            |            |            |         |
> | Performance | 61.5%      | 30.8%      | 7.7%       |         |
> |-------------+------------+------------+------------+---------|
> | #1 25procs  | 52763(57%) | 30811(33%) | 9575(10%)  | 93149   |
> | #2 100procs | 24949(40%) | 21325(34%) | 16508(26%) | 62782   |
>  --------------------------------------------------------------
>
> The result of test #1 is close to your estimation, but the result
> of test #2 is not; the gap between the estimation and the result
> increased.
>
> In addition, I got the following message during test #2. Program
> "ioload", our benchmark program, was blocked for more than 120 seconds.
> Do you see any problems?
>
> INFO: task ioload:8456 blocked for more than 120 seconds.
> "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> ioload        D 00000008  2772  8456   8419
>  f72eb740 00200082 c34862c0 00000008 c3565170 c35653c0 c2009d80 00000001
>  c1d1bea0 00200046 ffffffff f6ee039c 00000000 00000000 00000000 c2009d80
>  018db000 00000000 f71a6a00 c0604fb6 00000000 f71a6bc8 c04876a4 00000000
> Call Trace:
>  [] io_schedule+0x4a/0x81
>  [] __blockdev_direct_IO+0xa04/0xb54
>  [] ext2_direct_IO+0x35/0x3a
>  [] ext2_get_block+0x0/0x603
>  [] generic_file_direct_IO+0x103/0x118
>  [] generic_file_direct_write+0x50/0x13d
>  [] __generic_file_aio_write_nolock+0x375/0x4c3
>  [] link_path_walk+0x86/0x8f
>  [] find_lock_page+0x19/0x6d
>  [] generic_file_aio_write+0x52/0xa9
>  [] do_sync_write+0xbf/0x100
>  [] autoremove_wake_function+0x0/0x2d
>  [] update_curr+0x83/0x116
>  [] mutex_lock+0xb/0x1a
>  [] security_file_permission+0xc/0xd
>  [] do_sync_write+0x0/0x100
>  [] vfs_write+0x83/0xf6
>  [] sys_write+0x3c/0x63
>  [] syscall_call+0x7/0xb
>  [] print_cpu_info+0x27/0x92
> =======================
>
> Thanks,
> Ryo Tsuruta
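
For reference, the "Estimate Performance" row in the table above is
consistent with CFQ's best-effort priority weighting, under which a
process at priority p receives a share proportional to (8 - p). This
derivation is an inference from the numbers, not something stated
explicitly in the thread:

  group 1 (prio 0): weight 8 - 0 = 8
  group 2 (prio 4): weight 8 - 4 = 4
  group 3 (prio 7): weight 8 - 7 = 1

  total weight = 8 + 4 + 1 = 13
  group 1: 8/13 = 61.5%   group 2: 4/13 = 30.8%   group 3: 1/13 = 7.7%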
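
The source of the "ioload" benchmark is not included in this thread, so
the following C program is only a minimal sketch of the kind of load the
quoted procedure describes: random 4KB reads with O_DIRECT against one
250MB test file, counting completed I/Os over a fixed period. The file
name, runtime, and seeding are illustrative assumptions.

/*
 * Sketch of a random direct-I/O read load, assuming a workload like
 * the one described above. Not the actual "ioload" benchmark.
 */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <unistd.h>

#define BLOCK_SIZE 4096           /* 4KB per I/O, as in the test */
#define FILE_SIZE  (250L << 20)   /* 250MB per file, as in the test */
#define RUN_SECS   600            /* count I/Os done in 10 minutes */

int main(int argc, char **argv)
{
    const char *path = argc > 1 ? argv[1] : "testfile"; /* assumed name */
    void *buf;
    long nblocks = FILE_SIZE / BLOCK_SIZE;
    long done = 0;
    time_t end;
    int fd;

    /* O_DIRECT requires an aligned buffer; align to the block size. */
    if (posix_memalign(&buf, BLOCK_SIZE, BLOCK_SIZE))
        return 1;

    fd = open(path, O_RDONLY | O_DIRECT);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    srand((unsigned)time(NULL) ^ (unsigned)getpid());
    end = time(NULL) + RUN_SECS;

    /* Issue random aligned 4KB reads until the time budget expires. */
    while (time(NULL) < end) {
        off_t off = (off_t)(rand() % nblocks) * BLOCK_SIZE;
        if (pread(fd, buf, BLOCK_SIZE, off) != BLOCK_SIZE) {
            perror("pread");
            break;
        }
        done++;
    }
    printf("%ld I/Os completed\n", done);
    close(fd);
    free(buf);
    return 0;
}

Under native CFQ without cgroups, the per-group priorities in the table
can be approximated by starting each group's processes through ionice,
for example "ionice -c 2 -n 0 ./ioload <file>" for the highest-priority
group (class 2 is best-effort; -n takes a priority from 0 to 7).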