2012-12-18 16:37:07

by Keith Edmunds

Subject: NFS access slow

Accessing disks locally on a server gives read speeds around 100MB/s,
write speeds around 267MB/s.

Mounting the same disks on the same server via NFS (ie, not using the
network at all) gives read speeds around 30MB/s, write speeds around
80MB/s.

That's about 30% of the local access speed.

Is that to be expected? I'd expect a 10-15% slowdown, but not this much.

This is using NFSv4; using NFSv3 improves the speeds slightly (36MB/s
read, 95MB/s write).

Other parameters we've changed, none of which have a significant impact
(illustrative mount commands follow the list):

- UDP/TCP
- rsize and wsize
- noatime
- noacl
- nocto
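
For reference, the mount variations looked roughly like the following; the
server name, export path, and rsize/wsize values are placeholders rather
than the exact commands we ran:

mount -t nfs4 -o proto=tcp,rsize=65536,wsize=65536,noatime server:/export /mnt/tmp
mount -t nfs -o proto=udp,rsize=32768,wsize=32768,noacl,nocto server:/export /mnt/tmp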

If that's an unexpected slowdown, where should we be looking?

Thanks.


2012-12-18 18:50:08

by J. Bruce Fields

Subject: Re: NFS access slow

On Tue, Dec 18, 2012 at 03:52:48PM +0000, Keith Edmunds wrote:
> Accessing disks locally on a server gives read speeds around 100MB/s,
> write speeds around 267MB/s.
>
> Mounting the same disks on the same server via NFS (ie, not using the
> network at all) gives read speeds around 30MB/s, write speeds around
> 80MB/s.
>
> That's about 30% of the local access speed.
>
> Is that to be expected? I'd expect a 10-15% slowdown, but not this much.
>
> This is using NFSv4; using NFSv3 improves the speeds slightly (36MB/s
> read, 95MB/s write).

What are your disks? How exactly are you getting those numbers?
(Literally, step-by-step, what commands are you running?)

What kernel version?

Note loopback-mounts (client and server on same machine) aren't really
fully supported.

--b.

>
> Other parameters we've changed, none of which have a significant impact:
>
> - UDP/TCP
> - rsize and wsize
> - noatime
> - noacl
> - nocto
>
> If that's an unexpected slowdown, where should we be looking?
>
> Thanks.

2012-12-18 20:09:18

by Keith Edmunds

Subject: Re: NFS access slow

> What are your disks?

They are Enterprise Nearline 6Gb/s SAS drives in an Infortrend disk array.

> How exactly are you getting those numbers?
> (Literally, step-by-step, what commands are you running?)

Using postmark:

pm> set location /mnt/tmp
pm> set size 10000 10000000
pm> run

The only difference between the two runs is the 'set location' line, which
points to either the NFS mountpoint or the local mountpoint ('set size'
bounds the created files between 10,000 and 10,000,000 bytes).

A test using dd ("dd if=/dev/zero of=/mnt/tmp/testfile bs=1M count=8192")
showed direct access to be about five times faster than access via NFS.

> What kernel version?

3.2

> Note loopback-mounts (client and server on same machine) aren't really
> fully supported.

OK, I wasn't aware of that. We were only testing that way to try to
eliminate switches, cables, etc. I've just run a test from another server,
both connected via 10G links, and I'm getting a read speed of just under
20MB/s and a write speed of 52MB/s.

2012-12-18 20:37:28

by J. Bruce Fields

Subject: Re: NFS access slow

On Tue, Dec 18, 2012 at 07:42:51PM +0000, Keith Edmunds wrote:
> > What are your disks?
>
> They are Enterprise Nearline 6Gb/s SAS drives in an Infortrend disk array.
>
> > How exactly are you getting those numbers?
> > (Literally, step-by-step, what commands are you running?)
>
> Using postmark:
>
> pm> set location /mnt/tmp
> pm> set size 10000 10000000
> pm> run
>
> The only difference between the two runs is the 'set location' line, which
> points to either the NFS mountpoint or the local mountpoint ('set size'
> bounds the created files between 10,000 and 10,000,000 bytes).

Note that NFS requires operations such as file creation and removal to
be synchronous (for reboot/crash-recovery reasons). So e.g. if postmark
is single threaded (I think it is), then the client has to wait for the
server to respond to a file create before proceeding, and the server has
to wait for the create to hit disk before responding.

Depending on exactly how postmark calculates those bandwidth numbers,
that could have a big effect.

If your array has a battery-backed cache that should help.
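
One way to see whether synchronous creates are the bottleneck (a rough
sketch; the path is a placeholder for your mountpoint):

time sh -c 'for i in $(seq 1 100); do touch /mnt/tmp/f$i; done'
rm -f /mnt/tmp/f*

Run that against the local filesystem and then against the NFS mount; if
each create has to reach stable storage before the next can start, the NFS
run will be slower by roughly one disk write latency per file.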

> A test using dd ("dd if=/dev/zero of=/mnt/tmp/testfile bs=1M count=8192")
> showed direct access to be about five times faster than access via NFS.

To make that an apples-to-apples comparison you should include the
time to sync after the dd in both cases. (Though if your server doesn't
have much memory that might not make a big difference.)
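
E.g. something along these lines (the filename is a placeholder):

time sh -c 'dd if=/dev/zero of=/mnt/tmp/testfile bs=1M count=8192 && sync'

With GNU dd you can instead add conv=fdatasync to include the flush in
dd's own timing.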

> > What kernel version?
>
> 3.2
>
> > Note loopback-mounts (client and server on same machine) aren't really
> > fully supported.
>
> OK, I wasn't aware of that. We were only testing that way to try to
> eliminate switches, cables, etc. I've just run a test from another server,
> both connected via 10G links, and I'm getting a read speed of just under
> 20MB/s and a write speed of 52MB/s.

Have you tested the network speed? (E.g. with iperf.)
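
Something like this between the two machines (hostname is a placeholder):

iperf -s                  # on the server
iperf -c nfsserver -t 30  # on the client

A healthy 10G link should report close to line rate; if it doesn't, the
problem is below NFS.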

--b.

2012-12-18 20:36:40

by Jim Rees

Subject: Re: NFS access slow

Keith Edmunds wrote:

OK, I wasn't aware of that. We were only testing that way to try to
eliminate switches, cables, etc. I've just run a test from another server,
both connected via 10G links, and I'm getting a read speed of just under
20MB/s and a write speed of 52MB/s.

Something's wrong. What numbers do you get from iperf, or even something
like wget? Are you setting anything unusual with sysctl?
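
For example (hostname and URL are placeholders):

wget -O /dev/null http://nfsserver/largefile

And the usual TCP buffer knobs to check for unusual settings:

sysctl net.core.rmem_max net.core.wmem_max
sysctl net.ipv4.tcp_rmem net.ipv4.tcp_wmem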