Date: Wed, 15 Jul 2009 15:06:17 +0800
From: Wu Fengguang
To: Vladislav Bolkhovitin
Cc: Ronald Moesbergen, linux-kernel@vger.kernel.org,
    akpm@linux-foundation.org, kosaki.motohiro@jp.fujitsu.com,
    Alan.Brunelle@hp.com, hifumi.hisashi@oss.ntt.co.jp,
    linux-fsdevel@vger.kernel.org, jens.axboe@oracle.com,
    randy.dunlap@oracle.com, Bart Van Assche
Subject: Re: [RESEND] [PATCH] readahead: add blk_run_backing_dev
Message-ID: <20090715070617.GB6145@localhost>
In-Reply-To: <4A5CD3EB.50402@vlnb.net>

On Wed, Jul 15, 2009 at 02:52:27AM +0800, Vladislav Bolkhovitin wrote:
>
> Wu Fengguang, on 07/13/2009 04:36 PM wrote:
> >> Test done with XFS on both the target and the initiator. This
> >> confirms your findings: using files instead of block devices is
> >> faster, but only when using the io_context patch.
> >
> > It shows that the one that really matters is the io_context patch,
> > even when context readahead is running. I guess what happened
> > in the tests is:
> > - without readahead (or when the readahead algorithm fails to do
> >   proper sequential readahead), the SCST processes will be submitting
> >   small IOs that are close to each other. CFQ relies on the
> >   io_context patch to prevent unnecessary idling.
> > - with proper readahead, the SCST processes will also be submitting
> >   close readahead IOs. For example, one file's 100-102MB pages are
> >   read ahead by process A, while its 102-104MB pages may be
> >   read ahead by process B. In this case CFQ will also idle waiting
> >   for process A to submit the next IO, but in fact that IO is being
> >   submitted by process B. So the io_context patch is still necessary
> >   even when context readahead is working fine. I guess context
> >   readahead does have the added value of possibly enlarging the IO
> >   size (however this benchmark seems not very sensitive to IO size).
>
> Looks like the truth. Although with 2MB RA I expect CFQ to idle >10
> times less often, which should bring a bigger improvement than a few
> percent.
>
> For how long does CFQ idle? For HZ/125, i.e. 8 ms with HZ=250?

Yes, 8ms by default. Note that the 8ms idle window is armed when the
last IO from the current process completes. So it would definitely be a
waste if the cooperative process submitted the next read/readahead IO
within this 8ms idle window (without cfq_coop.patch).
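For reference, here is a small user-space sketch of where the 8ms figure
comes from and what the idle window means for cooperating readers. It is
a simplified illustration only, not the actual block/cfq-iosched.c code;
the jiffies_to_ms() helper and the comments are assumptions made for the
example.

	/*
	 * Simplified illustration only -- not the real cfq-iosched.c code.
	 * CFQ's default idle slice is HZ/125 jiffies; with HZ=250 that is
	 * 2 jiffies, i.e. 2 * (1000/250) ms = 8 ms.
	 */
	#include <stdio.h>

	#define HZ 250                               /* assumed tick rate */

	static const int cfq_slice_idle = HZ / 125;  /* 2 jiffies */

	/* Hypothetical helper: jiffies -> milliseconds for this HZ. */
	static int jiffies_to_ms(int jiffies)
	{
		return jiffies * (1000 / HZ);
	}

	int main(void)
	{
		/*
		 * The idle window is armed when the last IO from the
		 * current process completes.  If a cooperating process
		 * (e.g. another SCST thread reading the same file)
		 * issues the next IO inside this window, the disk sits
		 * idle for nothing unless the io_context/cfq_coop logic
		 * lets CFQ treat the two processes as one.
		 */
		printf("idle window = %d jiffies = %d ms\n",
		       cfq_slice_idle, jiffies_to_ms(cfq_slice_idle));
		return 0;
	}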
Thanks,
Fengguang