Date: Tue, 14 Jul 2009 22:52:27 +0400
From: Vladislav Bolkhovitin
To: Wu Fengguang
Cc: Ronald Moesbergen, linux-kernel@vger.kernel.org, akpm@linux-foundation.org, kosaki.motohiro@jp.fujitsu.com, Alan.Brunelle@hp.com, hifumi.hisashi@oss.ntt.co.jp, linux-fsdevel@vger.kernel.org, jens.axboe@oracle.com, randy.dunlap@oracle.com, Bart Van Assche
Subject: Re: [RESEND] [PATCH] readahead:add blk_run_backing_dev
Message-ID: <4A5CD3EB.50402@vlnb.net>
In-Reply-To: <20090713123621.GA31051@localhost>

Wu Fengguang, on 07/13/2009 04:36 PM wrote:
>> Test done with XFS on both the target and the initiator. This confirms
>> your findings: using files instead of block devices is faster, but only
>> when using the io_context patch.
>
> It shows that the one that really matters is the io_context patch, even
> when context readahead is running. I guess what happened in the tests
> is:
> - Without readahead (or when the readahead algorithm fails to do proper
>   sequential readahead), the SCST processes will be submitting small
>   but close-to-each-other IOs. CFQ relies on the io_context patch to
>   prevent unnecessary idling.
> - With proper readahead, the SCST processes will also be submitting
>   close readahead IOs. For example, one file's 100-102 MB pages are
>   read ahead by process A, while its 102-104 MB pages may be read ahead
>   by process B. In this case CFQ will still idle waiting for process A
>   to submit the next IO, although in fact that IO is being submitted by
>   process B. So the io_context patch is still necessary even when
>   context readahead is working fine. I guess context readahead does
>   have the added value of possibly enlarging the IO size (although this
>   benchmark does not seem to be very sensitive to IO size).

That sounds right. Although with 2 MB RA I would expect CFQ to idle more
than 10 times less often, which should bring a bigger improvement than a
few percent.

For how long does CFQ idle? For HZ/125, i.e. 8 ms with HZ=250?

> Thanks,
> Fengguang
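
For reference, a minimal sketch (in 2.6.30-era kernel C, not the actual
SCST io_context patch) of the idea under discussion: let the pool of I/O
threads serving one initiator share a single io_context, the same way
kernel/fork.c shares one between CLONE_IO threads, so that CFQ sees their
interleaved, nearly sequential IOs as coming from a single submitter and
does not idle-wait between them. The helper name ioc_share_with() is made
up for illustration.

	#include <linux/iocontext.h>
	#include <linux/sched.h>
	#include <linux/errno.h>

	/*
	 * Sketch only: attach the calling kernel thread to an existing,
	 * shared io_context @ioc.  The caller is assumed to be a freshly
	 * created kthread that has not done any block I/O yet, i.e.
	 * current->io_context is still NULL.
	 */
	static int ioc_share_with(struct io_context *ioc)
	{
		if (current->io_context)
			return -EBUSY;	/* already has a private context */

		/* Take the task and reference counts on the shared context. */
		if (!ioc_task_link(ioc))
			return -ENOMEM;	/* context is already being torn down */

		current->io_context = ioc;
		return 0;
	}

As for the idle time: in this kernel generation CFQ's default is
cfq_slice_idle = HZ / 125 in block/cfq-iosched.c, which with HZ=250 is
2 jiffies, i.e. the 8 ms mentioned above; it is exposed as the slice_idle
tunable under /sys/block/<device>/queue/iosched/.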