Date: Mon, 30 Mar 2009 16:19:42 -0400
From: Mark Lord
Organization: Real-Time Remedies Inc.
To: Chris Mason
Cc: Linus Torvalds, Ric Wheeler, "Andreas T.Auer", Alan Cox, Theodore Tso,
    Stefan Richter, Jeff Garzik, Matthew Garrett, Andrew Morton, David Rees,
    Jesper Krogh, Linux Kernel Mailing List
Subject: Re: Linux 2.6.29
Message-ID: <49D1295E.7010300@rtr.ca>
In-Reply-To: <1238439149.20607.12.camel@think.oraclecorp.com>

Chris Mason wrote:
> On Mon, 2009-03-30 at 14:39 -0400, Mark Lord wrote:
>> Chris Mason wrote:
>>> I had some fun trying things with this, and I've been able to reliably
>>> trigger stalls in write cache of ~60 seconds on my seagate 500GB sata
>>> drive. The worst I saw was 214 seconds.
>> ..
>>
>> I'd be more interested in how you managed that (above),
>> than the quite different test you describe below.
>>
>> Yes, different, I think. The test below just times how long a single
>> chunk of data might stay in-drive cache under constant load,
>> rather than how long it takes to flush the drive cache on command.
>>
>> Right?
>>
>> Still, useful for other stuff.
>>
>
> That's right, it is testing for starvation in a single sector, not for
> how long the cache flush actually takes. But, your remark from higher
> up in the thread was this:
>
> > > Anything in the drive's write cache very probably made
> > > it to the media within a second or two of arriving there.
..

Yeah, but that was in the context of how long the drive takes to clear out
its cache when there's a (brief) break in the action.

Still, it's really good to see hard data on a drive that actually starves
itself for an extended period.  Very handy insight, that!

> Sorry if I misread things. But the goal is just to show that it really
> does matter if we use a writeback cache with or without barriers. The
> test has two datasets:
>
> 1) An area that is constantly overwritten sequentially
> 2) A single sector that stores a critical bit of data.
>
> #1 is the filesystem log, #2 is the filesystem super. This isn't a
> specialized workload ;)
..

Good points.
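
For anyone who wants to poke at this themselves, here's roughly what I picture
such a test looking like.  Just a quick sketch I typed up while reading --
untested, and NOT Chris's actual program; the device name, offsets, and sizes
below are all made up.  It just shows the two-dataset layout he describes:
hammer a "log" region sequentially while rewriting a single "super" sector
once a second with a sequence number, then pull the plug and see how stale
the on-media super is.

/*
 * starve-test.c: rough sketch only -- not Chris's actual test program.
 * Everything here (device, offsets, sizes) is made up for illustration.
 *
 * Dataset #1: a region overwritten sequentially, forever (the "log").
 * Dataset #2: a single 512-byte "super" sector, rewritten once a second
 *             with an incrementing sequence number.
 *
 * Run it against a scratch disk with the write cache on and no flushes,
 * pull the plug, then read the super sector back and compare its seq
 * with the last one printed.  The gap is how long that sector sat in
 * the drive's write cache.  WARNING: destroys data on the named device.
 */
#define _GNU_SOURCE
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <stdint.h>
#include <fcntl.h>
#include <unistd.h>
#include <time.h>

#define SECTOR     512
#define CHUNK      (256 * 1024)            /* per-write size for the log */
#define LOG_OFFSET (1ULL * 1024 * 1024)    /* dataset #1 starts at 1MB   */
#define LOG_SIZE   (64ULL * 1024 * 1024)   /* ...and wraps after 64MB    */
#define SUPER_OFF  0ULL                    /* dataset #2: sector 0       */

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s /dev/sdX  (scratch disk only!)\n", argv[0]);
        return 1;
    }
    int fd = open(argv[1], O_WRONLY | O_DIRECT);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    void *log_buf, *super_buf;
    if (posix_memalign(&log_buf, 4096, CHUNK) ||
        posix_memalign(&super_buf, 4096, SECTOR)) {
        perror("posix_memalign");
        return 1;
    }
    memset(log_buf, 0xaa, CHUNK);

    uint64_t seq = 0, log_pos = 0;
    time_t last = 0;

    for (;;) {
        /* dataset #1: hammer the "log" region sequentially */
        if (pwrite(fd, log_buf, CHUNK, LOG_OFFSET + log_pos) != CHUNK) {
            perror("pwrite log");
            return 1;
        }
        log_pos = (log_pos + CHUNK) % LOG_SIZE;

        /* dataset #2: once a second, rewrite the "super" sector */
        time_t now = time(NULL);
        if (now != last) {
            last = now;
            memset(super_buf, 0, SECTOR);
            snprintf(super_buf, SECTOR, "seq=%llu time=%ld",
                     (unsigned long long)++seq, (long)now);
            if (pwrite(fd, super_buf, SECTOR, SUPER_OFF) != SECTOR) {
                perror("pwrite super");
                return 1;
            }
            printf("wrote super seq=%llu\n", (unsigned long long)seq);
            fflush(stdout);
        }
    }
}

After the power comes back, something like
"dd if=/dev/sdX bs=512 count=1 | strings" shows the last seq that actually
made it to the media; the gap between that and the last seq printed before
the plug was pulled is the starvation window.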
I'm thinking of perhaps acquiring an OCZ Vertex SSD.  The 120GB ones
apparently have 64MB of RAM inside, much of which is used to cache data
heading to the flash.

I wonder how long it takes to empty out that sucker!
(Rough timing hack in the P.S. below.)

Cheers
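
P.S.  For the curious who would rather not pull any plugs: below is the sort
of quick timing hack I have in mind for the "how long to empty it out"
question.  Again just an untested sketch of mine, not anybody's real tool.
It loads up the drive's write cache with a burst of O_DIRECT writes and then
times a single fdatasync() on the raw device.  Whether that fdatasync()
actually reaches the drive as a FLUSH CACHE depends on the kernel and barrier
setup, so timing an explicit drive-cache flush (e.g. hdparm -F) would be the
more direct poke at the drive itself.

/*
 * flush-timing sketch, untested: fill the drive's write cache with a
 * burst of O_DIRECT writes, then time one fdatasync() on the raw device.
 * Whether fdatasync() here really turns into an ATA FLUSH CACHE depends
 * on the kernel/barrier configuration -- treat the number accordingly.
 * Link with -lrt on older glibc for clock_gettime().
 * WARNING: overwrites the start of the named device.
 */
#define _GNU_SOURCE
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <time.h>

#define CHUNK (1024 * 1024)
#define BURST 128    /* ~128MB of writes; the drive drains some as we go */

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s /dev/sdX  (scratch disk only!)\n", argv[0]);
        return 1;
    }
    int fd = open(argv[1], O_WRONLY | O_DIRECT);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    void *buf;
    if (posix_memalign(&buf, 4096, CHUNK)) {
        perror("posix_memalign");
        return 1;
    }
    memset(buf, 0x55, CHUNK);

    /* burst of writes, no flushing, to load up the write cache */
    for (int i = 0; i < BURST; i++) {
        if (pwrite(fd, buf, CHUNK, (off_t)i * CHUNK) != CHUNK) {
            perror("pwrite");
            return 1;
        }
    }

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    if (fdatasync(fd))      /* should end up as a drive cache flush */
        perror("fdatasync");
    clock_gettime(CLOCK_MONOTONIC, &t1);

    printf("flush took %.3f seconds\n",
           (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9);
    return 0;
}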