Date: Wed, 24 Feb 2010 12:43:56 +0800
From: Wu Fengguang
To: Dave Chinner
Cc: Trond Myklebust, "linux-nfs@vger.kernel.org", "linux-fsdevel@vger.kernel.org",
    Linux Memory Management List, LKML
Subject: Re: [RFC] nfs: use 2*rsize readahead size
Message-ID: <20100224044356.GA2007@localhost>
References: <20100224024100.GA17048@localhost> <20100224032934.GF16175@discord.disaster> <20100224042414.GG16175@discord.disaster>
In-Reply-To: <20100224042414.GG16175@discord.disaster>

On Wed, Feb 24, 2010 at 12:24:14PM +0800, Dave Chinner wrote:
> On Wed, Feb 24, 2010 at 02:29:34PM +1100, Dave Chinner wrote:
> > On Wed, Feb 24, 2010 at 10:41:01AM +0800, Wu Fengguang wrote:
> > > With default rsize=512k and NFS_MAX_READAHEAD=15, the current NFS
> > > readahead size 512k*15=7680k is too large than necessary for typical
> > > clients.
> > >
> > > On a e1000e--e1000e connection, I got the following numbers
> > >
> > >    readahead size   throughput
> > >              16k     35.5 MB/s
> > >              32k     54.3 MB/s
> > >              64k     64.1 MB/s
> > >             128k     70.5 MB/s
> > >             256k     74.6 MB/s
> > > rsize ==>   512k     77.4 MB/s
> > >            1024k     85.5 MB/s
> > >            2048k     86.8 MB/s
> > >            4096k     87.9 MB/s
> > >            8192k     89.0 MB/s
> > >           16384k     87.7 MB/s
> > >
> > > So it seems that readahead_size=2*rsize (ie. keep two RPC requests
> > > in flight) can already get near full NFS bandwidth.
> > >
> > > The test script is:
> > >
> > > #!/bin/sh
> > >
> > > file=/mnt/sparse
> > > BDI=0:15
> > >
> > > for rasize in 16 32 64 128 256 512 1024 2048 4096 8192 16384
> > > do
> > > 	echo 3 > /proc/sys/vm/drop_caches
> > > 	echo $rasize > /sys/devices/virtual/bdi/$BDI/read_ahead_kb
> > > 	echo readahead_size=${rasize}k
> > > 	dd if=$file of=/dev/null bs=4k count=1024000
> > > done
> >
> > That's doing a cached read out of the server cache, right? You
> > might find the results are different if the server has to read the
> > file from disk. I would expect reads from the server cache not
> > to require much readahead as there is no IO latency on the server
> > side for the readahead to hide....
>
> FWIW, if you mount the client with "-o rsize=32k" or the server only
> supports rsize <= 32k then this will probably hurt throughput a lot
> because then readahead will be capped at 64k instead of 480k....

I should have mentioned that in the changelog. Hope the updated one helps.

Thanks,
Fengguang
---
nfs: use 2*rsize readahead size

With default rsize=512k and NFS_MAX_READAHEAD=15, the current NFS
readahead size 512k*15=7680k is larger than necessary for typical
clients.
On a e1000e--e1000e connection, I got the following numbers (this reads
a sparse file from the server and involves no disk IO):

   readahead size   throughput
             16k     35.5 MB/s
             32k     54.3 MB/s
             64k     64.1 MB/s
            128k     70.5 MB/s
            256k     74.6 MB/s
rsize ==>   512k     77.4 MB/s
           1024k     85.5 MB/s
           2048k     86.8 MB/s
           4096k     87.9 MB/s
           8192k     89.0 MB/s
          16384k     87.7 MB/s

So it seems that readahead_size=2*rsize (i.e. keeping two RPC requests
in flight) can already get near-full NFS bandwidth.

To avoid too small a readahead when the client mounts with "-o rsize=32k"
or the server only supports rsize <= 32k, take the max of 2*rsize and
default_backing_dev_info.ra_pages. The latter defaults to 512K, is
automatically scaled down when system memory is less than 512M, and can
be explicitly changed by the user with the kernel parameter "readahead=".

The test script is:

#!/bin/sh

file=/mnt/sparse
BDI=0:15

for rasize in 16 32 64 128 256 512 1024 2048 4096 8192 16384
do
	echo 3 > /proc/sys/vm/drop_caches
	echo $rasize > /sys/devices/virtual/bdi/$BDI/read_ahead_kb
	echo readahead_size=${rasize}k
	dd if=$file of=/dev/null bs=4k count=1024000
done

CC: Dave Chinner
CC: Trond Myklebust
Signed-off-by: Wu Fengguang
---
 fs/nfs/client.c   |    4 +++-
 fs/nfs/internal.h |    8 --------
 2 files changed, 3 insertions(+), 9 deletions(-)

--- linux.orig/fs/nfs/client.c	2010-02-23 11:15:44.000000000 +0800
+++ linux/fs/nfs/client.c	2010-02-24 10:16:00.000000000 +0800
@@ -889,7 +889,9 @@ static void nfs_server_set_fsinfo(struct
 		server->rpages = (server->rsize + PAGE_CACHE_SIZE - 1) >> PAGE_CACHE_SHIFT;
 
 	server->backing_dev_info.name = "nfs";
-	server->backing_dev_info.ra_pages = server->rpages * NFS_MAX_READAHEAD;
+	server->backing_dev_info.ra_pages = max_t(unsigned long,
+					default_backing_dev_info.ra_pages,
+					2 * server->rpages);
 	server->backing_dev_info.capabilities |= BDI_CAP_ACCT_UNSTABLE;
 
 	if (server->wsize > max_rpc_payload)
--- linux.orig/fs/nfs/internal.h	2010-02-23 11:15:44.000000000 +0800
+++ linux/fs/nfs/internal.h	2010-02-23 13:26:00.000000000 +0800
@@ -10,14 +10,6 @@
 
 struct nfs_string;
 
-/* Maximum number of readahead requests
- * FIXME: this should really be a sysctl so that users may tune it to suit
- * their needs. People that do NFS over a slow network, might for
- * instance want to reduce it to something closer to 1 for improved
- * interactive response.
- */
-#define NFS_MAX_READAHEAD	(RPC_DEF_SLOT_TABLE - 1)
-
 /*
  * Determine if sessions are in use.
  */
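
For reference, a minimal user-space sketch of the sizing rule the patch
applies (not kernel code; the 4k page size, the 512k ra_pages default and
the helper name are assumptions for illustration only):

/* Illustrative sketch of ra_pages = max(default ra_pages, 2 * rpages).
 * Assumes 4k pages and a 512k default_backing_dev_info.ra_pages.
 */
#include <stdio.h>

#define PAGE_SIZE_BYTES		4096UL
#define DEFAULT_RA_PAGES	(512 * 1024 / PAGE_SIZE_BYTES)

static unsigned long ra_pages_for_rsize(unsigned long rsize)
{
	unsigned long rpages = (rsize + PAGE_SIZE_BYTES - 1) / PAGE_SIZE_BYTES;
	unsigned long ra = 2 * rpages;		/* 2*rsize, in pages */

	/* mirrors max_t(unsigned long, default ra_pages, 2 * rpages) */
	return ra > DEFAULT_RA_PAGES ? ra : DEFAULT_RA_PAGES;
}

int main(void)
{
	unsigned long rsizes[] = { 32 << 10, 512 << 10, 1024 << 10 };
	int i;

	for (i = 0; i < 3; i++)
		printf("rsize=%5luk => readahead=%5luk\n", rsizes[i] >> 10,
		       ra_pages_for_rsize(rsizes[i]) * PAGE_SIZE_BYTES >> 10);
	return 0;
}

With rsize=32k this keeps the 512k default window rather than dropping to
64k, while rsize=512k yields 1024k, i.e. two RPC requests in flight.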