2010-11-03 18:08:53

by Shehjar Tikoo

Subject: Streaming perf problem on 10g

Hi All,

I am running into a performance problem with 2.6.32-23 Ubuntu Lucid on both client and server.

The disk is an SSD performing at 1.4-1.6 GB/s for a dd of a 6 GB file in 64 KB blocks.

The network is performing fine with many Gbps of iperf throughput.

Yet, the dd write performance over the NFS mount point ranges from 96-105 MB/s for a 6 GB file in 64 KB blocks.

I've tried changing the tcp_slot_table_entries and the wsize but there is negligible gain from these.
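
For reference, this is roughly how I was setting them (the export path is a placeholder for ours, and the values are just examples of what I tried, not a recommendation):

# echo 128 > /proc/sys/sunrpc/tcp_slot_table_entries   # allow more outstanding RPCs per transport
# umount /tmp/testmount
# mount -t nfs -o rw,wsize=1048576,rsize=1048576 server:/export /tmp/testmount

(As far as I understand, the slot table sysctl is only picked up by transports created after it is set, hence the remount.)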

Does it sound like a client side inefficiency?

Thanks
-Shehjar


2010-11-04 08:20:27

by Shehjar Tikoo

Subject: Re: Streaming perf problem on 10g

[email protected] wrote:
> Hi Shehjar,
>
> Can you provide the exact dd command you are running both locally and
> for the NFS mount?

on the ssd:

# dd if=/dev/zero of=bigfile4 bs=1M count=1000
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 0.690624 s, 1.5 GB/s
# dd if=/dev/zero of=bigfile4 bs=1M count=1000 oflag=direct
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 1.72764 s, 607 MB/s

The ssd file system is ext4 mounted as
(rw,noatime,nodiratime,data=writeback)

Here is another oddity: using oflag=direct gives better performance.

On the NFS mount:
# dd if=/dev/zero of=/tmp/testmount/bigfile3 bs=1M count=1000 oflag=direct
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 3.7063 s, 283 MB/s
# rm /tmp/testmount/bigfile3
# dd if=/dev/zero of=/tmp/testmount/bigfile3 bs=1M count=1000
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 9.66876 s, 108 MB/s

The kernel on both server and client is 2.6.32-23, so I think this
regression might be in play.

http://thread.gmane.org/gmane.comp.file-systems.ext4/20360

Thanks
-Shehjar

>
> -Tommy
>
> On Wed, Nov 3, 2010 at 11:33 AM, Joe Landman <[email protected]> wrote:
>> On 11/03/2010 01:58 PM, Shehjar Tikoo wrote:
>>> Hi All,
>>>
>>> I am running into a performance problem with 2.6.32-23 Ubuntu Lucid on
>>> both client and server.
>>>
>>> The disk is an SSD performing at 1.4-1.6 GB/s for a dd of a 6 GB file in
>>> 64 KB blocks.
>>>
>> If the size of this file is comparable to or smaller than the client or
>> server RAM, this number is meaningless.
>>
>>> The network is performing fine with many Gbps of iperf throughput.
>>>
>> GbE gets you 1 Gbps. 10GbE may get you from 3-10 Gbps, depending upon many
>> things. What are your numbers?
>>
>>> Yet, the dd write performance over the NFS mount point ranges from
>>> 96-105 MB/s for a 6 GB file in 64 KB blocks.
>>>
>> Sounds like you are writing over the gigabit, and not the 10GbE interface.
>>
>>> I've tried changing the tcp_slot_table_entries and the wsize but there is
>>> negligible gain from these.
>>>
>>> Does it sound like a client side inefficiency?
>>>
>> Nope.
>>
>> --
>> Joseph Landman, Ph.D
>> Founder and CEO
>> Scalable Informatics Inc.
>> email: [email protected]
>> web : http://scalableinformatics.com
>> phone: +1 734 786 8423 x121
>> fax : +1 866 888 3112
>> cell : +1 734 612 4615


2010-11-03 18:30:58

by Joe Landman

Subject: Re: Streaming perf problem on 10g

On 11/03/2010 01:58 PM, Shehjar Tikoo wrote:
> Hi All,
>
> I am running into a performance problem with 2.6.32-23 Ubuntu Lucid on both client and server.
>
> The disk is an SSD performing at 1.4-1.6 GB/s for a dd of a 6 GB file in 64 KB blocks.
>

If the size of this file is comparable to or smaller than the client or
server RAM, this number is meaningless.
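
A quick way to take the page cache out of the picture, assuming GNU dd (file name and size below are illustrative): write more than RAM and add conv=fdatasync so the final flush is counted in the timing:

# dd if=/dev/zero of=bigfile bs=1M count=6144 conv=fdatasync   # 6 GB; stats now include the flush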

> The network is performing fine with many Gbps of iperf throughput.
>

GbE gets you 1 Gbps. 10GbE may get you from 3-10 Gbps, depending upon
many things. What are your numbers?
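
Something like this would tell us (iperf 2 syntax; the hostname is a placeholder), ideally run in both directions:

# iperf -s                        # on the server
# iperf -c nfsserver -t 30 -P 4   # on the client; -P 4 runs four parallel TCP streams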

> Yet, the dd write performance over the NFS mount point ranges from 96-105 MB/s for a 6 GB file in 64 KB blocks.
>

Sounds like you are writing over the gigabit, and not the 10GbE interface.
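
Easy to verify; for example (the server address is a placeholder):

# ip route get 192.168.10.5   # prints the outgoing device, e.g. "dev eth2"

Compare the device it reports against your 10GbE NIC, or watch the per-interface byte counters in /proc/net/dev while the dd runs.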

> I've tried changing the tcp_slot_table_entries and the wsize but there is negligible gain from these.
>
> Does it sound like a client side inefficiency?
>

Nope.

--
Joseph Landman, Ph.D
Founder and CEO
Scalable Informatics Inc.
email: [email protected]
web : http://scalableinformatics.com
phone: +1 734 786 8423 x121
fax : +1 866 888 3112
cell : +1 734 612 4615

2010-11-05 11:43:30

by [email protected]

Subject: Re: Streaming perf problem on 10g

Hi Shehjar,

Have you tested with another file system besides ext4, like XFS or ReiserFS?

How many SSDs are in the configuration? What is the storage controller
(SAS, SATA, PCIe direct-connect)? 1.5 GB/s is a lot of speed; that seems
like at least 8 SSDs, but please confirm.

Also, you are not copying enough data in this test. How much DRAM is in
the server with the SSD? I would run dd with an I/O amount at least
double or triple the amount of memory in the system; 1 GB is not
enough.
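
For example (counts are illustrative; scale to whatever free reports):

# free -g                                                        # check installed DRAM first
# dd if=/dev/zero of=bigfile4 bs=1M count=49152 conv=fdatasync   # ~48 GB write, flush included

That is roughly 2-3x the memory of a 16-24 GB server; adjust the count to your machine.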

-Tommy

On Thu, Nov 4, 2010 at 1:20 AM, Shehjar Tikoo <[email protected]> wrote:
> [email protected] wrote:
>>
>> Hi Shehjar,
>>
>> Can you provide the exact dd command you are running both locally and
>> for the NFS mount?
>
> on the ssd:
>
> # dd if=/dev/zero of=bigfile4 bs=1M count=1000
> 1000+0 records in
> 1000+0 records out
> 1048576000 bytes (1.0 GB) copied, 0.690624 s, 1.5 GB/s
> # dd if=/dev/zero of=bigfile4 bs=1M count=1000 oflag=direct
> 1000+0 records in
> 1000+0 records out
> 1048576000 bytes (1.0 GB) copied, 1.72764 s, 607 MB/s
>
> The ssd file system is ext4 mounted as
> (rw,noatime,nodiratime,data=writeback)
>
> Here is another oddity: using oflag=direct gives better performance.
>
> On the NFS mount:
> # dd if=/dev/zero of=/tmp/testmount/bigfile3 bs=1M count=1000 oflag=direct
> 1000+0 records in
> 1000+0 records out
> 1048576000 bytes (1.0 GB) copied, 3.7063 s, 283 MB/s
> # rm /tmp/testmount/bigfile3
> # dd if=/dev/zero of=/tmp/testmount/bigfile3 bs=1M count=1000
> 1000+0 records in
> 1000+0 records out
> 1048576000 bytes (1.0 GB) copied, 9.66876 s, 108 MB/s
>
> The kernel on both server and client is 2.6.32-23, so I think this
> regression might be in play.
>
> http://thread.gmane.org/gmane.comp.file-systems.ext4/20360
>
> Thanks
> -Shehjar
>
>>
>> -Tommy
>>
>> On Wed, Nov 3, 2010 at 11:33 AM, Joe Landman <[email protected]>
>> wrote:
>>>
>>> On 11/03/2010 01:58 PM, Shehjar Tikoo wrote:
>>>>
>>>> Hi All,
>>>>
>>>> I am running into a performance problem with 2.6.32-23 Ubuntu Lucid on
>>>> both client and server.
>>>>
>>>> The disk is an SSD performing at 1.4-1.6 GB/s for a dd of a 6 GB file
>>>> in 64 KB blocks.
>>>>
>>> If the size of this file is comparable to or smaller than the client or
>>> server RAM, this number is meaningless.
>>>
>>>> The network is performing fine with many Gbps of iperf throughput.
>>>>
>>> GbE gets you 1 Gbps. 10GbE may get you from 3-10 Gbps, depending upon
>>> many things. What are your numbers?
>>>
>>>> Yet, the dd write performance over the NFS mount point ranges from
>>>> 96-105 MB/s for a 6 GB file in 64 KB blocks.
>>>>
>>> Sounds like you are writing over the gigabit, and not the 10GbE
>>> interface.
>>>
>>>> I've tried changing the tcp_slot_table_entries and the wsize but there
>>>> is negligible gain from these.
>>>>
>>>> Does it sound like a client side inefficiency?
>>>>
>>> Nope.
>>>
>>> --
>>> Joseph Landman, Ph.D
>>> Founder and CEO
>>> Scalable Informatics Inc.
>>> email: [email protected]
>>> web : http://scalableinformatics.com
>>> phone: +1 734 786 8423 x121
>>> fax : +1 866 888 3112
>>> cell : +1 734 612 4615
>
>

2010-11-03 18:47:39

by [email protected]

Subject: Re: Streaming perf problem on 10g

Hi Shehjar,

Can you provide the exact dd command you are running both locally and
for the NFS mount?

-Tommy

On Wed, Nov 3, 2010 at 11:33 AM, Joe Landman <[email protected]> wrote:
> On 11/03/2010 01:58 PM, Shehjar Tikoo wrote:
>>
>> Hi All,
>>
>> I am running into a performance problem with 2.6.32-23 Ubuntu Lucid on
>> both client and server.
>>
>> The disk is an SSD performing at 1.4-1.6 GB/s for a dd of a 6 GB file
>> in 64 KB blocks.
>>
>
> If the size of this file is comparable to or smaller than the client or
> server RAM, this number is meaningless.
>
>> The network is performing fine with many Gbps of iperf throughput.
>>
>
> GbE gets you 1 Gbps. 10GbE may get you from 3-10 Gbps, depending upon many
> things. What are your numbers?
>
>> Yet, the dd write performance over the NFS mount point ranges from
>> 96-105 MB/s for a 6 GB file in 64 KB blocks.
>>
>
> Sounds like you are writing over the gigabit, and not the 10GbE interface.
>
>> I've tried changing the tcp_slot_table_entries and the wsize but there is
>> negligible gain from these.
>>
>> Does it sound like a client side inefficiency?
>>
>
> Nope.
>
> --
> Joseph Landman, Ph.D
> Founder and CEO
> Scalable Informatics Inc.
> email: [email protected]
> web : http://scalableinformatics.com
> phone: +1 734 786 8423 x121
> fax : +1 866 888 3112
> cell : +1 734 612 4615