Date: Tue, 23 Feb 2021 20:27:42 +0000
From: Matthew Wilcox
To: Steve French
Cc: Jeff Layton, David Howells, Trond Myklebust, Anna Schumaker,
    Steve French, Dominique Martinet, CIFS, ceph-devel@vger.kernel.org,
    linux-cachefs@redhat.com, Alexander Viro, linux-mm,
    linux-afs@lists.infradead.org, v9fs-developer@lists.sourceforge.net,
    Christoph Hellwig, linux-fsdevel, linux-nfs, Linus Torvalds,
    David Wysochanski, LKML, William Kucharski, Jaegeuk Kim, Chao Yu,
    linux-f2fs-devel@lists.sourceforge.net
Subject: Re: [PATCH 00/33] Network fs helper library & fscache kiocb API [ver #3]
Message-ID: <20210223202742.GM2858050@casper.infradead.org>
References: <161340385320.1303470.2392622971006879777.stgit@warthog.procyon.org.uk>
 <9e49f96cd80eaf9c8ed267a7fbbcb4c6467ee790.camel@redhat.com>
 <20210216021015.GH2858050@casper.infradead.org>

On Mon, Feb 15, 2021 at 11:22:20PM -0600, Steve French wrote:
> On Mon, Feb 15, 2021 at 8:10 PM Matthew Wilcox wrote:
> > The switch from readpages to readahead does help in a couple of
> > corner cases. For example, if you have two processes reading the
> > same file at the same time, one will now block on the other (due to
> > the page lock) rather than submitting a mess of overlapping and
> > partial reads.
>
> Do you have a simple repro example of this we could try (fio, dbench,
> iozone etc) to get some objective perf data?

I don't. The problem was noted by the f2fs people, so maybe they have
a reproducer.

> My biggest worry is making sure that the switch to netfs doesn't
> degrade performance (which might be a low bar now, since current
> network file copy perf seems to significantly lag at least Windows),
> and in some easy-to-understand scenarios we want to make sure it
> actually helps perf.

I had a question about that ... you've mentioned having 4x4MB reads
outstanding as being the way to get optimum performance. Is there a
significant performance difference between 4x4MB, 16x1MB and 64x256kB?

I'm concerned about having "too large" an I/O on the wire at a given
time. For example, a 1Gbps link gives you about 125MB/s. That's a
minimum latency of about 33us for a 4kB page, but 33ms for a 4MB page.
"For very simple tasks, people can perceive latencies down to 2 ms or
less" (https://danluu.com/input-lag/), so going all the way to 4MB
I/Os takes us well into the perceptible latency range, whereas a 256kB
I/O is only around 2ms, right at the edge of perception.

So could you do some experiments with fio doing direct I/O to see if
it takes significantly longer to do, say, 1TB of I/O in 4MB chunks vs
256kB chunks? Obviously use threads to keep lots of I/Os outstanding.
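
Something like the fio job file below is what I have in mind. This is
only a sketch: the filename and size are placeholders, and it leans on
iodepth (with libaio and O_DIRECT) rather than threads to keep the
I/Os outstanding -- adjust for whatever your mount actually supports.

; Sketch only: filename/size are placeholders; libaio+direct assumes
; the mount supports genuinely async O_DIRECT reads.
[global]
ioengine=libaio
direct=1
rw=read
filename=/mnt/test/bigfile
size=1t

; each job keeps ~16MB outstanding, just in different-sized pieces
[4x4m]
stonewall
bs=4m
iodepth=4

[16x1m]
stonewall
bs=1m
iodepth=16

[64x256k]
stonewall
bs=256k
iodepth=64

The stonewall lines keep the three configurations from running
concurrently, so each one gets measured on an otherwise idle link.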
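
For reference, the wire-time numbers earlier in the mail are nothing
more than size divided by link throughput; the snippet below redoes
the arithmetic, taking 125MB/s (a 1Gbps link, protocol overhead
ignored) as the assumed payload rate.

/* Where the wire-time numbers above come from: size / link throughput.
 * 125MB/s for a 1Gbps link ignores protocol overhead, so these are
 * lower bounds on per-I/O latency.
 */
#include <stdio.h>

int main(void)
{
	const double bytes_per_sec = 125.0e6;	/* ~1Gbps */
	const long sizes[] = { 4096, 256L << 10, 1L << 20, 4L << 20 };
	const char *names[] = { "4kB", "256kB", "1MB", "4MB" };

	for (int i = 0; i < 4; i++)
		printf("%5s: %7.2f ms on the wire\n",
		       names[i], sizes[i] / bytes_per_sec * 1000.0);
	return 0;
}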