From: Daire Byrne
Date: Sun, 23 Jan 2022 23:53:08 +0000
Subject: parallel file create rates (+high latency)
To: linux-nfs

Hi,

I've been experimenting a bit more with high-latency NFSv4.2 (200ms). I've noticed a difference in file creation rates when parallel processes running against a single client mount create files in multiple directories compared with one shared directory.

If I start 100 processes on the same client creating unique files in a single shared directory (with 200ms latency), the rate of new file creates is limited to around 3 files per second. Something like this:

# add latency to the client
sudo tc qdisc replace dev eth0 root netem delay 200ms

sudo mount -o vers=4.2,nocto,actimeo=3600 server:/data /tmp/data

for x in {1..10000}; do
    echo /tmp/data/dir1/touch.$x
done | xargs -n1 -P 100 -iX -t touch X 2>&1 | pv -l -a > /dev/null

It's a similar (slow) result for NFSv3. If we run it again just to update the existing files, it's a lot faster because of the nocto,actimeo and open file caching (32 files/s).

Then if I switch it so that each process on the client creates hundreds of files in a unique directory per process, the aggregate file create rate increases to 32 per second. For NFSv3 it's 162 aggregate new files per second. So much better parallelism is possible when the creates from a single client are spread across multiple remote directories (a rough sketch of this variant is at the end of this mail).

If I then take the slow 3-creates-per-second example again and instead use 10 client hosts (all with 200ms latency), all creating in the same remote server directory, we get 3 x 10 = 30 creates per second. So some parallel file create performance in the same remote directory is achievable, just not from a single client running multiple processes. Which makes me think it's more of a client limitation than a server locking issue?

My interest in this (as always) is that while having hundreds of processes creating files in the same directory might not be a common workload, it is if you are re-exporting a filesystem and multiple clients are creating new files for writing - for example, a batch job creating files in a common output directory. Re-exporting is a useful way of caching mostly read-heavy workloads, but performance then suffers for these metadata-heavy or write-heavy workloads. The parallel performance (nfsd threads) of a single client mountpoint just can't compete with clients connected directly to the originating server.
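For reference, the per-directory variant was along these lines - the directory layout, the 100x100 counts and the use of background subshells here are just a sketch to illustrate the pattern, not the exact commands from my test:

# one directory per worker process: 100 workers, 100 files each,
# all under the same NFS mount as before
mkdir -p /tmp/data/dir{1..100}
for d in {1..100}; do
    ( for x in {1..100}; do touch /tmp/data/dir$d/touch.$x; done ) &
done
wait

Timing the whole run (or piping a generated file list through pv as in the shared-directory case) is how I arrived at the aggregate create rates quoted above.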
Does anyone have any idea what the specific bottlenecks are here for parallel file creates from a single client to a single directory?

Cheers,

Daire