Date: Mon, 15 Jul 2002 12:07:19 +0200
From: Andrea Arcangeli
To: "Griffiths, Richard A"
Cc: "'Andrew Morton'", "'Marcelo Tosatti'", "'linux-kernel@vger.kernel.org'",
    "'Carter K. George'", "'Don Norton'", "'James S. Tybur'", "Gross, Mark"
Subject: Re: fsync fixes for 2.4
Message-ID: <20020715100719.GE34@dualathlon.random>
In-Reply-To: <01BDB7EEF8D4D3119D95009027AE99951B0E6428@fmsmsx33.fm.intel.com>

On Fri, Jul 12, 2002 at 02:52:11PM -0700, Griffiths, Richard A wrote:
> Mark is off climbing Mt. Hood, so he asked me to post the data on the
> fsync patch. It appears from these results that there is no appreciable
> improvement using the fsync patch - there is a slight loss of top end
> on 4 adapters 1 drive.

That's very much expected. As I said, with my new design, which adds an
additional (third) pass, I could remove the slight loss that I expected
from the simple patch that puts wait_on_buffer right in the first pass.
I mentioned this in my first email of the thread, so this all looks
right. For an rc2, accepting the slight loss sounds like the simplest
approach. If you care about it, we can fix it with my new fsync
accounting design; just let me know if you're interested. Personally
I'm pretty much fine with it this way too: as I said in the first
email, if we block it's likely because bdflush is pumping the queue
for us. The slowdown is most probably due to the blocking points
unplugging the request queue too early.

As for scaling with async flushes to multiple devices: 2.4 has a single
flushing thread, while 2.5, as Andrew said, (partly) fixes this with
multiple pdflush threads, as he explained to me at OLS. The only issue
I've seen in his design is that it works in terms of superblocks, so if
a filesystem sits on top of LVM backed by a dozen different hard disks,
only one pdflush will pump on those dozen physical request queues,
because the first pdflush entering the superblock forbids the other
pdflush threads from working on the same superblock. So the first
physical queue that fills up will prevent pdflush from pushing more
dirty pages to the other, possibly empty, physical queues. Without LVM
or software RAID that doesn't matter, though, nor does it matter with
hardware RAID.

Andrea
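P.S. To make the two designs concrete, here is a minimal sketch of one
plausible shape of each, written against the 2.4 buffer primitives
(wait_on_buffer, ll_rw_block). It is not the actual patch: the
dirty-list walking is reduced to a plain array, and all the locking
and list-changed-while-we-slept handling is omitted.

#include <linux/fs.h>
#include <linux/locks.h>	/* wait_on_buffer() */

/*
 * Simple patch: wait_on_buffer() right in the first pass.  Correct,
 * but every wait is a blocking point in the middle of submission, so
 * the request queue can get unplugged before all the writes have been
 * queued -- the slight top-end loss seen in the numbers above.
 */
static void fsync_simple(struct buffer_head **bhs, int nr)
{
	int i;

	for (i = 0; i < nr; i++) {
		if (buffer_locked(bhs[i]))
			wait_on_buffer(bhs[i]);	/* blocks mid-pass */
		if (buffer_dirty(bhs[i]))
			ll_rw_block(WRITE, 1, &bhs[i]);
	}
	for (i = 0; i < nr; i++)	/* wait for our own writes */
		wait_on_buffer(bhs[i]);
}

/*
 * Three-pass design: pass 1 submits without ever blocking, pass 2
 * waits, pass 3 catches buffers that were locked by someone else
 * (e.g. bdflush) during pass 1 or that got redirtied meanwhile.
 */
static void fsync_three_pass(struct buffer_head **bhs, int nr)
{
	int i;

	for (i = 0; i < nr; i++)	/* pass 1: submit only */
		if (buffer_dirty(bhs[i]) && !buffer_locked(bhs[i]))
			ll_rw_block(WRITE, 1, &bhs[i]);
	for (i = 0; i < nr; i++)	/* pass 2: wait */
		wait_on_buffer(bhs[i]);
	for (i = 0; i < nr; i++) {	/* pass 3: the stragglers */
		if (buffer_dirty(bhs[i])) {
			ll_rw_block(WRITE, 1, &bhs[i]);
			wait_on_buffer(bhs[i]);
		}
	}
}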
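P.P.S. And a toy userspace model of the per-superblock exclusion
described above (all names are made up, nothing here is from the real
2.5 pdflush code): with one in-progress flag per superblock, the single
flusher that wins the superblock stalls on the first full queue while
the other eleven disks of the stripe set sit idle.

#include <stdio.h>

#define NR_DISKS 12	/* e.g. an LVM volume striped over 12 disks */
#define QUEUE_DEPTH 128

struct queue { int in_flight; };

struct superblock {
	int flush_in_progress;		/* the per-superblock exclusion */
	struct queue disks[NR_DISKS];
};

/* returns 1 if it submitted a request to every disk, 0 if it stopped */
static int push_dirty_pages(struct superblock *sb)
{
	int i;

	if (sb->flush_in_progress)
		return 0;	/* a second flusher bails out here */
	sb->flush_in_progress = 1;

	for (i = 0; i < NR_DISKS; i++) {
		if (sb->disks[i].in_flight >= QUEUE_DEPTH) {
			/* the one flusher stalls on the first full
			 * queue; disks i+1..NR_DISKS-1 stay idle even
			 * if their queues are empty */
			sb->flush_in_progress = 0;
			return 0;
		}
		sb->disks[i].in_flight++;
	}

	sb->flush_in_progress = 0;
	return 1;
}

int main(void)
{
	struct superblock sb = { 0 };
	int progress;

	sb.disks[0].in_flight = QUEUE_DEPTH;	/* disk 0 saturated */
	progress = push_dirty_pages(&sb);
	printf("progress=%d disk1_in_flight=%d\n",
	       progress, sb.disks[1].in_flight);	/* prints 0 and 0 */
	return 0;
}

A flusher keyed on physical request queues rather than superblocks
would keep pumping the eleven empty disks here.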