Subject: RE: [Scst-devel] [Iscsitarget-devel] ISCSI-SCST performance (with also IET and STGT data)
Date: Thu, 2 Apr 2009 13:19:49 -0400
From: "Ross S. W. Walker"
To: "Vladislav Bolkhovitin"
Cc: "James Bottomley", "iSCSI Enterprise Target Developer List", "Ross Walker", "scst-devel"
In-Reply-To: <49D4DB7A.2030004@vlnb.net>

Vladislav Bolkhovitin wrote:
>
> Think what you want and do what you want. You can even filter out all
> e-mails from me, that's your right. But:
>
> 1. As I wrote grouping threads into a single IO context doesn't explain
> all the performance difference and finding out reasons for other's
> performance problems isn't something I can afford at the moment.

No, not all of the performance difference, but a substantial part of it,
enough to say that IET has a real performance issue when using the CFQ
scheduler.

> 2. CFQ doesn't have any processing latency and has never had. Learn to
> understand what are your writing about and how to correctly express
> yourself at first. You asked about that latency and I replied that there
> is nothing to defeat.

CFQ pauses briefly before switching I/O contexts, to make sure it has given
a context as much bandwidth as possible before moving on. This is
documented behavior.

With a single I/O stream or random I/O it won't be noticeable, but for
interleaved sequential I/O across multiple threads with different I/O
contexts it can be significant.

Not that Wikipedia is authoritative: http://en.wikipedia.org/wiki/CFQ

It's right in the first paragraph:

"... While CFQ does not do explicit anticipatory IO scheduling, it achieves
the same effect of having good aggregate throughput for the system as a
whole, by allowing a process queue to idle at the end of synchronous IO
thereby "anticipating" further close IO from that process. ..."

You can also check out the LXR. This one, for 2.6.18 kernels (RHEL), shows
a pause of HZ/10:

http://lxr.linux.no/linux+v2.6.18/block/cfq-iosched.c#L30

So given a 10ms time slice, that equates to ~1ms; in later kernels it's
defined as HZ/5, which can equate to ~2ms. These ms delays can be an
eternity for sequential I/O patterns.
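As an aside, that idle window is a runtime tunable when CFQ is the active
elevator, and writing 0 to it turns the idling off altogether. A minimal
userspace check is sketched below; it assumes CFQ is running on /dev/sda,
and the sysfs value is reported in milliseconds.

    /* Hedged sketch: print CFQ's slice_idle for one device.  Assumes CFQ
     * is the active scheduler for sda (check /sys/block/sda/queue/scheduler
     * first).  Writing 0 to the same file disables the idle wait discussed
     * above. */
    #include <stdio.h>

    int main(void)
    {
            const char *path = "/sys/block/sda/queue/iosched/slice_idle";
            char buf[32];
            FILE *f = fopen(path, "r");

            if (!f) {
                    perror(path);
                    return 1;
            }
            if (fgets(buf, sizeof(buf), f))
                    printf("slice_idle (ms): %s", buf);
            fclose(f);
            return 0;
    }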
> 3. SCST doesn't have any hooks into CFQ and not going to have in the
> considerable future.

True, SCST doesn't have any hooks into CFQ, but your code modifies
block/blk-ioc.c to export alloc_io_context(), which is otherwise a private
function, so that your kernel threads can share the same I/O context,
thereby avoiding the delay CFQ imposes when switching I/O contexts between
those threads.

-Ross
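P.S. For anyone following along, this is roughly what that shared-context
trick looks like. A hedged sketch only, not the actual SCST patch: it
assumes alloc_io_context() has been exported (which is what the blk-ioc.c
change provides), and helpers such as ioc_task_link() and the exact
refcounting details vary between kernel versions.

    #include <linux/gfp.h>
    #include <linux/iocontext.h>
    #include <linux/sched.h>

    /* One io_context shared by every thread in the I/O thread pool. */
    static struct io_context *pool_ioc;

    static void thread_join_shared_ioc(void)
    {
            /* Real code would serialize this allocation (e.g. a mutex). */
            if (!pool_ioc)
                    pool_ioc = alloc_io_context(GFP_KERNEL, -1);
            if (!pool_ioc)
                    return;         /* fall back to a private context */

            /* Drop this thread's private context (put_io_context() accepts
             * NULL) and link this task to the shared one. */
            put_io_context(current->io_context);
            current->io_context = ioc_task_link(pool_ioc);
    }

Each submitting thread would call this once at startup; CFQ then sees the
whole pool as a single I/O context and doesn't insert its slice_idle wait
when the submitting thread changes.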