Date: Sun, 11 Aug 2002 03:28:28 -0700
From: Andrew Morton
To: Simon Kirby
CC: linux-kernel@vger.kernel.org
Subject: Re: [patch 6/12] hold atomic kmaps across generic_file_read
Message-ID: <3D563C4C.2DE0CA82@zip.com.au>
References: <20020810201027.E306@kushida.apsleyroad.org>
 <20020811031705.GA13878@netnation.com> <3D55FF30.6164040D@zip.com.au>
 <20020811084652.GB22497@netnation.com>

Simon Kirby wrote:
>
> ...
> What's happening with my MP3 streaming is:
>
> 1. read(4k) gets data after a delay.  xmms starts playing.
> 2. read(4k) gets some more data, right away, because readahead worked.
>    xmms continues.
> ...
> 3. read(4k) blocks for a long time while readahead starts up again and
>    reads a huge block of data.  read() then returns the 4k.  Meanwhile,
>    xmms has underrun.  xmms starts again.
> 4. goto 2.
>
> It's really easy to see this behavior with the xmms-crossfade plugin
> and a large buffer with "buffer debugging" display on.

I happen to have a little test app for this stuff:

	http://www.zip.com.au/~akpm/linux/stream.tar.gz

You can use it to slowly read or write a file.

	./stream -i /dev/fd0h1440 23 1000

will read 1000k from the floppy at 23k per second.  (A sketch of the
pacing loop such a tool needs is at the end of this message.)

It's a bit useless at those rates on 2.4 because of the coarse timer
resolution.  But in 1000Hz 2.5 it works a treat:

./stream -i /dev/fd0h1440 20 1000   0.00s user 0.01s system 0% cpu 51.896 total
./stream -i /dev/fd0h1440 21 1000   0.00s user 0.02s system 0% cpu 49.825 total
./stream -i /dev/fd0h1440 22 1000   0.00s user 0.02s system 0% cpu 47.843 total
./stream -i /dev/fd0h1440 23 1000   0.00s user 0.01s system 0% cpu 45.853 total
./stream -i /dev/fd0h1440 24 1000   0.01s user 0.02s system 0% cpu 44.077 total
./stream -i /dev/fd0h1440 25 1000   0.00s user 0.02s system 0% cpu 42.307 total
./stream -i /dev/fd0h1440 26 1000   0.00s user 0.01s system 0% cpu 41.305 total
./stream -i /dev/fd0h1440 27 1000   0.00s user 0.02s system 0% cpu 40.493 total
./stream -i /dev/fd0h1440 28 1000   0.01s user 0.02s system 0% cpu 39.122 total
./stream -i /dev/fd0h1440 29 1000   0.00s user 0.01s system 0% cpu 39.118 total

What we see here is perfect readahead behaviour.  The kernel is keeping
the read streaming ahead of the application's read cursor all the way
out to the point where the device is saturated.  (The numbers are all
off by three seconds because of the initial spinup delay.)

If you strace it, the reads are smooth on 2.4 and 2.5.  So it may be an
NFS peculiarity.  That's a bit hard for me to test over 100bT.
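
(If you want to watch the read pattern yourself, something like

	strace -tt -e trace=read ./stream -i /dev/fd0h1440 23 1000

timestamps each read() with microsecond resolution, so a readahead
stall of the kind Simon describes would show up as a gap between
successive reads.  The device and rate are just the example values
from above.)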
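
For the curious, a paced reader of this sort really only needs a
read()/usleep() loop.  Below is a minimal from-scratch sketch of the
idea -- this is not the shipped stream source (which may be structured
quite differently), and the option parsing is simplified:

/*
 * Minimal sketch of a paced reader: read a file at a fixed rate by
 * alternating 4k read()s with usleep()s.  Illustration only -- not
 * the shipped stream source.
 *
 * usage: paced-read <file> <kB/sec> <total-kB>
 */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <fcntl.h>

#define CHUNK 4096			/* read in 4k units, like xmms */

int main(int argc, char *argv[])
{
	char buf[CHUNK];
	long rate_kbs, total_kb, done_kb = 0;
	int fd;

	if (argc != 4) {
		fprintf(stderr, "usage: %s file kB/sec total-kB\n", argv[0]);
		exit(1);
	}
	rate_kbs = atol(argv[2]);
	total_kb = atol(argv[3]);

	fd = open(argv[1], O_RDONLY);
	if (fd < 0) {
		perror(argv[1]);
		exit(1);
	}

	while (done_kb < total_kb) {
		ssize_t n = read(fd, buf, CHUNK);

		if (n <= 0)
			break;		/* EOF or error */
		done_kb += n / 1024;
		/*
		 * One chunk at R kB/sec should take n/(R*1024) seconds.
		 * On 2.4, usleep() rounds up to ~10ms jiffies, which is
		 * the "coarse timer resolution" problem above; the
		 * 1000Hz clock in 2.5 makes the pacing accurate.
		 */
		usleep((long long)n * 1000000 / (rate_kbs * 1024));
	}
	close(fd);
	return 0;
}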