2012-05-06 03:08:07

by Daniel Pocock

Subject: extremely slow nfs when sync enabled



I've been observing some very slow nfs write performance when the server
has `sync' in /etc/exports

I want to avoid using async, but I have tested it and on my gigabit
network, it gives almost the same speed as if I was on the server
itself. (e.g. 30MB/sec to one disk, or less than 1MB/sec to the same
disk over NFS with `sync')

I'm using Debian 6 with 2.6.38 kernels on client and server, NFSv3

I've also tried a client running Debian 7/Linux 3.2.0 with both NFSv3
and NFSv4, speed is still slow

Looking at iostat on the server, I notice that avgrq-sz = 8 sectors
(4096 bytes) throughout the write operations

I've tried various tests, e.g. dd a large file, or unpack a tarball with
many small files, the iostat output is always the same

Looking at /proc/mounts on the clients, everything looks good, large
wsize, tcp:

rw,relatime,vers=3,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=192.x.x.x,mountvers=3,mountport=58727,mountproto=udp,local_lock=none,addr=192.x.x.x
0 0

and
rw,relatime,vers=4,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,port=0,timeo=600,retrans=2,sec=sys,clientaddr=192.x.x.x.,minorversion=0,local_lock=none,addr=192.x.x.x 0 0

and in /proc/fs/nfs/exports on the server, I have sync and wdelay:

/nfs4/daniel
192.168.1.0/24,192.x.x.x(rw,insecure,root_squash,sync,wdelay,no_subtree_check,uuid=aa2a6f37:9cc94eeb:bcbf983c:d6e041d9,sec=1)
/home/daniel
192.168.1.0/24,192.x.x.x(rw,root_squash,sync,wdelay,no_subtree_check,uuid=aa2a6f37:9cc94eeb:bcbf983c:d6e041d9)

Can anyone suggest anything else? Or is this really the performance hit
of `sync'?
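
For reference, scaled-down versions of the two write tests described above look like this (the /tmp paths are placeholders; point them at the NFS mount to reproduce):

```shell
# Test 1: one large sequential write (16 MB here instead of 4 GB)
dd if=/dev/zero of=/tmp/nfs-bigfile bs=65536 count=256 2>/dev/null

# Test 2: many small files, as when unpacking a tarball
mkdir -p /tmp/nfs-smallfiles
for i in $(seq 1 50); do
    echo "object file $i" > /tmp/nfs-smallfiles/file-$i
done

# While either test runs, watch avgrq-sz on the server with:
#   iostat -x 5
```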




2012-05-06 22:12:35

by Daniel Pocock

Subject: Re: extremely slow nfs when sync enabled



On 06/05/12 21:49, Myklebust, Trond wrote:
> On Sun, 2012-05-06 at 21:23 +0000, Daniel Pocock wrote:
>>
>> On 06/05/12 18:23, Myklebust, Trond wrote:
>>> On Sun, 2012-05-06 at 03:00 +0000, Daniel Pocock wrote:
>>>>
>>>> I've been observing some very slow nfs write performance when the server
>>>> has `sync' in /etc/exports
>>>>
>>>> I want to avoid using async, but I have tested it and on my gigabit
>>>> network, it gives almost the same speed as if I was on the server
>>>> itself. (e.g. 30MB/sec to one disk, or less than 1MB/sec to the same
>>>> disk over NFS with `sync')
>>>>
>>>> I'm using Debian 6 with 2.6.38 kernels on client and server, NFSv3
>>>>
>>>> I've also tried a client running Debian 7/Linux 3.2.0 with both NFSv3
>>>> and NFSv4, speed is still slow
>>>>
>>>> Looking at iostat on the server, I notice that avgrq-sz = 8 sectors
>>>> (4096 bytes) throughout the write operations
>>>>
>>>> I've tried various tests, e.g. dd a large file, or unpack a tarball with
>>>> many small files, the iostat output is always the same
>>>
>>> Were you using 'conv=sync'?
>>
>> No, it was not using conv=sync, just the vanilla dd:
>>
>> dd if=/dev/zero of=some-fat-file bs=65536 count=65536
>
> Then the results are not comparable.

If I run dd with conv=sync on the server, then I still notice that OS
caching plays a factor and write performance just appears really fast
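
For a local baseline that can fairly be compared against a `sync' NFS export, dd can be told to flush its output: in GNU dd, conv=sync only pads short input blocks with zeros, whereas conv=fsync actually forces the data to disk before dd exits. A sketch (the output path is just an example):

```shell
# conv=fsync makes dd fsync() the output file before reporting, so the
# measured rate includes the cost of reaching stable storage rather
# than just the page cache. (conv=sync, by contrast, only pads short
# input blocks and does not touch the cache at all.)
dd if=/dev/zero of=/tmp/baseline-file bs=65536 count=256 conv=fsync 2>/dev/null
```

oflag=sync (O_SYNC) or oflag=direct (O_DIRECT) are stricter still, draining or bypassing the cache on every block rather than with one flush at the end.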

>>>> Looking at /proc/mounts on the clients, everything looks good, large
>>>> wsize, tcp:
>>>>
>>>> rw,relatime,vers=3,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=192.x.x.x,mountvers=3,mountport=58727,mountproto=udp,local_lock=none,addr=192.x.x.x
>>>> 0 0
>>>>
>>>> and
>>>> rw,relatime,vers=4,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,port=0,timeo=600,retrans=2,sec=sys,clientaddr=192.x.x.x.,minorversion=0,local_lock=none,addr=192.x.x.x 0 0
>>>>
>>>> and in /proc/fs/nfs/exports on the server, I have sync and wdelay:
>>>>
>>>> /nfs4/daniel
>>>> 192.168.1.0/24,192.x.x.x(rw,insecure,root_squash,sync,wdelay,no_subtree_check,uuid=aa2a6f37:9cc94eeb:bcbf983c:d6e041d9,sec=1)
>>>> /home/daniel
>>>> 192.168.1.0/24,192.x.x.x(rw,root_squash,sync,wdelay,no_subtree_check,uuid=aa2a6f37:9cc94eeb:bcbf983c:d6e041d9)
>>>>
>>>> Can anyone suggest anything else? Or is this really the performance hit
>>>> of `sync'?
>>>
>>> It really depends on your disk setup. Particularly when your filesystem
>>> is using barriers (enabled by default on ext4 and xfs), a lot of raid
>>
>> On the server, I've tried both ext3 and ext4, explicitly changing things
>> like data=writeback,barrier=0, but the problem remains
>>
>> The only thing that made it faster was using hdparm -W1 /dev/sd[ab] to
>> enable the write-back cache on the disk
>
> That should in principle be safe to do as long as you are using
> barrier=1.

Ok, so the combination of:

- enable writeback with hdparm
- use ext4 (and not ext3)
- barrier=1 and data=writeback? or data=?

- is there a particular kernel version (on either client or server side)
that will offer more stability using this combination of features?
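
Concretely, that combination might end up looking something like this; the device, volume-group and mount-point names are invented for illustration, and data=ordered is simply the ext4 default (data=writeback is faster but weakens the journalling guarantees for file data):

```
# /etc/fstab -- ext4 with barriers explicitly enabled
/dev/mapper/vg0-export  /srv/export  ext4  defaults,barrier=1,data=ordered  0  2

# /etc/hdparm.conf (Debian) -- keep the drive write-back cache on at boot
/dev/sda {
    write_cache = on
}
/dev/sdb {
    write_cache = on
}
```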

I think there are some other variations of my workflow that I can
attempt too, e.g. I've contemplated compiling C++ code onto a RAM disk
because I don't need to keep the hundreds of object files.

>>> setups really _suck_ at dealing with fsync(). The latter is used every
>>
>> I'm using md RAID1, my setup is like this:
>>
>> 2x 1TB SATA disks ST31000528AS (7200rpm with 32MB cache and NCQ)
>>
>> SATA controller: ATI Technologies Inc SB700/SB800 SATA Controller [AHCI
>> mode] (rev 40)
>> - not using any of the BIOS softraid stuff
>>
>> Both devices have identical partitioning:
>> 1. 128MB boot
>> 2. md volume (1TB - 128MB)
>>
>> The entire md volume (/dev/md2) is then used as a PV for LVM
>>
>> I do my write tests on a fresh LV with no fragmentation
>>
>>> time the NFS client sends a COMMIT or trunc() instruction, and for
>>> pretty much all file and directory creation operations (you can use
>>> 'nfsstat' to monitor how many such operations the NFS client is sending
>>> as part of your test).
>>
>> I know that my two tests are very different in that way:
>>
>> - dd is just writing one big file, no fsync
>>
>> - unpacking a tarball (or compiling a large C++ project) does a lot of
>> small writes with many fsyncs
>>
>> In both cases, it is slow
>>
>>> Local disk can get away with doing a lot less fsync(), because the cache
>>> consistency guarantees are different:
>>> * in NFS, the server is allowed to crash or reboot without
>>> affecting the client's view of the filesystem.
>>> * in the local file system, the expectation is that on reboot any
>>> data lost won't need to be recovered (the application will
>>> have used fsync() for any data that does need to be persistent).
>>> Only the disk filesystem structures need to be recovered, and
>>> that is done using the journal (or fsck).
>>
>>
>> Is this an intractable problem though?
>>
>> Or do people just work around this, for example, enable async and
>> write-back cache, and then try to manage the risk by adding a UPS and/or
>> battery backed cache to their RAID setup (to reduce the probability of
>> unclean shutdown)?
>
> It all boils down to what kind of consistency guarantees you are
> comfortable living with. The default NFS server setup offers much
> stronger data consistency guarantees than local disk, and is therefore
> likely to be slower when using cheap hardware.
>

I'm keen for consistency, because I don't like the idea of corrupting
some source code or a whole git repository for example.

How did you know I'm using cheap hardware? It is a HP MicroServer, I
even got the £100 cash-back cheque:

http://www8.hp.com/uk/en/campaign/focus-for-smb/solution.html#/tab2/

Seriously though, I've worked with some very large arrays in my business
environment, but I use this hardware at home because of the low noise
and low heat dissipation rather than for saving money, so I would like
to try and get the most out of it if possible.



2012-05-06 21:51:30

by Myklebust, Trond

Subject: Re: extremely slow nfs when sync enabled


2012-05-06 18:24:52

by Myklebust, Trond

Subject: Re: extremely slow nfs when sync enabled


2012-05-06 21:23:49

by Daniel Pocock

Subject: Re: extremely slow nfs when sync enabled



On 06/05/12 18:23, Myklebust, Trond wrote:
> On Sun, 2012-05-06 at 03:00 +0000, Daniel Pocock wrote:
>>
>> I've been observing some very slow nfs write performance when the server
>> has `sync' in /etc/exports
>>
>> I want to avoid using async, but I have tested it and on my gigabit
>> network, it gives almost the same speed as if I was on the server
>> itself. (e.g. 30MB/sec to one disk, or less than 1MB/sec to the same
>> disk over NFS with `sync')
>>
>> I'm using Debian 6 with 2.6.38 kernels on client and server, NFSv3
>>
>> I've also tried a client running Debian 7/Linux 3.2.0 with both NFSv3
>> and NFSv4, speed is still slow
>>
>> Looking at iostat on the server, I notice that avgrq-sz = 8 sectors
>> (4096 bytes) throughout the write operations
>>
>> I've tried various tests, e.g. dd a large file, or unpack a tarball with
>> many small files, the iostat output is always the same
>
> Were you using 'conv=sync'?

No, it was not using conv=sync, just the vanilla dd:

dd if=/dev/zero of=some-fat-file bs=65536 count=65536

>> Looking at /proc/mounts on the clients, everything looks good, large
>> wsize, tcp:
>>
>> rw,relatime,vers=3,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=192.x.x.x,mountvers=3,mountport=58727,mountproto=udp,local_lock=none,addr=192.x.x.x
>> 0 0
>>
>> and
>> rw,relatime,vers=4,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,port=0,timeo=600,retrans=2,sec=sys,clientaddr=192.x.x.x.,minorversion=0,local_lock=none,addr=192.x.x.x 0 0
>>
>> and in /proc/fs/nfs/exports on the server, I have sync and wdelay:
>>
>> /nfs4/daniel
>> 192.168.1.0/24,192.x.x.x(rw,insecure,root_squash,sync,wdelay,no_subtree_check,uuid=aa2a6f37:9cc94eeb:bcbf983c:d6e041d9,sec=1)
>> /home/daniel
>> 192.168.1.0/24,192.x.x.x(rw,root_squash,sync,wdelay,no_subtree_check,uuid=aa2a6f37:9cc94eeb:bcbf983c:d6e041d9)
>>
>> Can anyone suggest anything else? Or is this really the performance hit
>> of `sync'?
>
> It really depends on your disk setup. Particularly when your filesystem
> is using barriers (enabled by default on ext4 and xfs), a lot of raid

On the server, I've tried both ext3 and ext4, explicitly changing things
like data=writeback,barrier=0, but the problem remains

The only thing that made it faster was using hdparm -W1 /dev/sd[ab] to
enable the write-back cache on the disk

> setups really _suck_ at dealing with fsync(). The latter is used every

I'm using md RAID1, my setup is like this:

2x 1TB SATA disks ST31000528AS (7200rpm with 32MB cache and NCQ)

SATA controller: ATI Technologies Inc SB700/SB800 SATA Controller [AHCI
mode] (rev 40)
- not using any of the BIOS softraid stuff

Both devices have identical partitioning:
1. 128MB boot
2. md volume (1TB - 128MB)

The entire md volume (/dev/md2) is then used as a PV for LVM

I do my write tests on a fresh LV with no fragmentation

> time the NFS client sends a COMMIT or trunc() instruction, and for
> pretty much all file and directory creation operations (you can use
> 'nfsstat' to monitor how many such operations the NFS client is sending
> as part of your test).

I know that my two tests are very different in that way:

- dd is just writing one big file, no fsync

- unpacking a tarball (or compiling a large C++ project) does a lot of
small writes with many fsyncs

In both cases, it is slow
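
The fsync() cost is visible even on a local disk; a rough illustration of the two access patterns (the /tmp directory is just an example):

```shell
# Create 100 small files twice: once letting the page cache absorb
# them, and once forcing each file to stable storage with conv=fsync,
# which is roughly the pattern described above for tar/compile jobs
# hitting a `sync' export.
mkdir -p /tmp/fsync-demo
for i in $(seq 1 100); do
    echo data > /tmp/fsync-demo/plain-$i
done
for i in $(seq 1 100); do
    echo data | dd of=/tmp/fsync-demo/synced-$i conv=fsync 2>/dev/null
done
```

Wrapping each loop in bash's `time` shows the gap between the cached and the synchronous variant.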

> Local disk can get away with doing a lot less fsync(), because the cache
> consistency guarantees are different:
> * in NFS, the server is allowed to crash or reboot without
> affecting the client's view of the filesystem.
> * in the local file system, the expectation is that on reboot any
> data lost won't need to be recovered (the application will
> have used fsync() for any data that does need to be persistent).
> Only the disk filesystem structures need to be recovered, and
> that is done using the journal (or fsck).


Is this an intractable problem though?

Or do people just work around this, for example, enable async and
write-back cache, and then try to manage the risk by adding a UPS and/or
battery backed cache to their RAID setup (to reduce the probability of
unclean shutdown)?

2012-05-06 22:12:51

by Daniel Pocock

Subject: Re: extremely slow nfs when sync enabled



On 06/05/12 21:49, Myklebust, Trond wrote:
> On Sun, 2012-05-06 at 21:23 +0000, Daniel Pocock wrote:
>>
>> On 06/05/12 18:23, Myklebust, Trond wrote:
>>> On Sun, 2012-05-06 at 03:00 +0000, Daniel Pocock wrote:
>>>>
>>>> I've been observing some very slow nfs write performance when the server
>>>> has `sync' in /etc/exports
>>>>
>>>> I want to avoid using async, but I have tested it and on my gigabit
>>>> network, it gives almost the same speed as if I was on the server
>>>> itself. (e.g. 30MB/sec to one disk, or less than 1MB/sec to the same
>>>> disk over NFS with `sync')
>>>>
>>>> I'm using Debian 6 with 2.6.38 kernels on client and server, NFSv3
>>>>
>>>> I've also tried a client running Debian 7/Linux 3.2.0 with both NFSv3
>>>> and NFSv4, speed is still slow
>>>>
>>>> Looking at iostat on the server, I notice that avgrq-sz = 8 sectors
>>>> (4096 bytes) throughout the write operations
>>>>
>>>> I've tried various tests, e.g. dd a large file, or unpack a tarball with
>>>> many small files, the iostat output is always the same
>>>
>>> Were you using 'conv=sync'?
>>
>> No, it was not using conv=sync, just the vanilla dd:
>>
>> dd if=/dev/zero of=some-fat-file bs=65536 count=65536
>
> Then the results are not comparable.

If I run dd with conv=sync on the server, then I still notice that OS
caching plays a factor and write performance just appears really fast

>>>> Looking at /proc/mounts on the clients, everything looks good, large
>>>> wsize, tcp:
>>>>
>>>> rw,relatime,vers=3,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=192.x.x.x,mountvers=3,mountport=58727,mountproto=udp,local_lock=none,addr=192.x.x.x
>>>> 0 0
>>>>
>>>> and
>>>> rw,relatime,vers=4,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,port=0,timeo=600,retrans=2,sec=sys,clientaddr=192.x.x.x.,minorversion=0,local_lock=none,addr=192.x.x.x 0 0
>>>>
>>>> and in /proc/fs/nfs/exports on the server, I have sync and wdelay:
>>>>
>>>> /nfs4/daniel
>>>> 192.168.1.0/24,192.x.x.x(rw,insecure,root_squash,sync,wdelay,no_subtree_check,uuid=aa2a6f37:9cc94eeb:bcbf983c:d6e041d9,sec=1)
>>>> /home/daniel
>>>> 192.168.1.0/24,192.x.x.x(rw,root_squash,sync,wdelay,no_subtree_check,uuid=aa2a6f37:9cc94eeb:bcbf983c:d6e041d9)
>>>>
>>>> Can anyone suggest anything else? Or is this really the performance hit
>>>> of `sync'?
>>>
>>> It really depends on your disk setup. Particularly when your filesystem
>>> is using barriers (enabled by default on ext4 and xfs), a lot of raid
>>
>> On the server, I've tried both ext3 and ext4, explicitly changing things
>> like data=writeback,barrier=0, but the problem remains
>>
>> The only thing that made it faster was using hdparm -W1 /dev/sd[ab] to
>> enable the write-back cache on the disk
>
> That should in principle be safe to do as long as you are using
> barrier=1.

Ok, so the combination of:

- enable writeback with hdparm
- use ext4 (and not ext3)
- barrier=1 and data=writeback? or data=?

- is there a particular kernel version (on either client or server side)
that will offer more stability using this combination of features?

I think there are some other variations of my workflow that I can
attempt too, e.g. I've contemplated compiling C++ code onto a RAM disk
because I don't need to keep the hundreds of object files.
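
On the RAM-disk idea: most Linux systems already mount a tmpfs at /dev/shm, so transient object files can be kept there with no extra setup (the paths here are examples):

```shell
# Object files written under a tmpfs never touch NFS at all, so no
# synchronous COMMITs reach the server for them.
mkdir -p /dev/shm/build-scratch
echo 'scratch build area on tmpfs' > /dev/shm/build-scratch/README
# e.g. an out-of-tree build: make O=/dev/shm/build-scratch
```

A dedicated `mount -t tmpfs -o size=512m tmpfs /mnt/build` works too if /dev/shm is too small for the build.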

>>> setups really _suck_ at dealing with fsync(). The latter is used every
>>
>> I'm using md RAID1, my setup is like this:
>>
>> 2x 1TB SATA disks ST31000528AS (7200rpm with 32MB cache and NCQ)
>>
>> SATA controller: ATI Technologies Inc SB700/SB800 SATA Controller [AHCI
>> mode] (rev 40)
>> - not using any of the BIOS softraid stuff
>>
>> Both devices have identical partitioning:
>> 1. 128MB boot
>> 2. md volume (1TB - 128MB)
>>
>> The entire md volume (/dev/md2) is then used as a PV for LVM
>>
>> I do my write tests on a fresh LV with no fragmentation
>>
>>> time the NFS client sends a COMMIT or trunc() instruction, and for
>>> pretty much all file and directory creation operations (you can use
>>> 'nfsstat' to monitor how many such operations the NFS client is sending
>>> as part of your test).
>>
>> I know that my two tests are very different in that way:
>>
>> - dd is just writing one big file, no fsync
>>
>> - unpacking a tarball (or compiling a large C++ project) does a lot of
>> small writes with many fsyncs
>>
>> In both cases, it is slow
>>
>>> Local disk can get away with doing a lot less fsync(), because the cache
>>> consistency guarantees are different:
>>> * in NFS, the server is allowed to crash or reboot without
>>> affecting the client's view of the filesystem.
>>> * in the local file system, the expectation is that on reboot any
>>> data lost won't need to be recovered (the application will
>>> have used fsync() for any data that does need to be persistent).
>>> Only the disk filesystem structures need to be recovered, and
>>> that is done using the journal (or fsck).
>>
>>
>> Is this an intractable problem though?
>>
>> Or do people just work around this, for example, enable async and
>> write-back cache, and then try to manage the risk by adding a UPS and/or
>> battery backed cache to their RAID setup (to reduce the probability of
>> unclean shutdown)?
>
> It all boils down to what kind of consistency guarantees you are
> comfortable living with. The default NFS server setup offers much
> stronger data consistency guarantees than local disk, and is therefore
> likely to be slower when using cheap hardware.
>

I'm keen for consistency, because I don't like the idea of corrupting
some source code or a whole git repository for example.

How did you know I'm using cheap hardware? It is a HP MicroServer, I
even got the £100 cash-back cheque:

http://www8.hp.com/uk/en/campaign/focus-for-smb/solution.html#/tab2/

Seriously though, I've worked with some very large arrays in my business
environment, but I use this hardware at home because of the low noise
and low heat dissipation rather than for saving money, so I would like
to try and get the most out of it if possible and I'm very grateful for
these suggestions.



2012-05-06 22:44:22

by Myklebust, Trond

Subject: Re: extremely slow nfs when sync enabled

T24gU3VuLCAyMDEyLTA1LTA2IGF0IDIyOjEyICswMDAwLCBEYW5pZWwgUG9jb2NrIHdyb3RlOg0K
PiANCj4gT24gMDYvMDUvMTIgMjE6NDksIE15a2xlYnVzdCwgVHJvbmQgd3JvdGU6DQo+ID4gT24g
U3VuLCAyMDEyLTA1LTA2IGF0IDIxOjIzICswMDAwLCBEYW5pZWwgUG9jb2NrIHdyb3RlOg0KPiA+
Pg0KPiA+PiBPbiAwNi8wNS8xMiAxODoyMywgTXlrbGVidXN0LCBUcm9uZCB3cm90ZToNCj4gPj4+
IE9uIFN1biwgMjAxMi0wNS0wNiBhdCAwMzowMCArMDAwMCwgRGFuaWVsIFBvY29jayB3cm90ZToN
Cj4gPj4+Pg0KPiA+Pj4+IEkndmUgYmVlbiBvYnNlcnZpbmcgc29tZSB2ZXJ5IHNsb3cgbmZzIHdy
aXRlIHBlcmZvcm1hbmNlIHdoZW4gdGhlIHNlcnZlcg0KPiA+Pj4+IGhhcyBgc3luYycgaW4gL2V0
Yy9leHBvcnRzDQo+ID4+Pj4NCj4gPj4+PiBJIHdhbnQgdG8gYXZvaWQgdXNpbmcgYXN5bmMsIGJ1
dCBJIGhhdmUgdGVzdGVkIGl0IGFuZCBvbiBteSBnaWdhYml0DQo+ID4+Pj4gbmV0d29yaywgaXQg
Z2l2ZXMgYWxtb3N0IHRoZSBzYW1lIHNwZWVkIGFzIGlmIEkgd2FzIG9uIHRoZSBzZXJ2ZXINCj4g
Pj4+PiBpdHNlbGYuIChlLmcuIDMwTUIvc2VjIHRvIG9uZSBkaXNrLCBvciBsZXNzIHRoYW4gMU1C
L3NlYyB0byB0aGUgc2FtZQ0KPiA+Pj4+IGRpc2sgb3ZlciBORlMgd2l0aCBgc3luYycpDQo+ID4+
Pj4NCj4gPj4+PiBJJ20gdXNpbmcgRGViaWFuIDYgd2l0aCAyLjYuMzgga2VybmVscyBvbiBjbGll
bnQgYW5kIHNlcnZlciwgTkZTdjMNCj4gPj4+Pg0KPiA+Pj4+IEkndmUgYWxzbyB0cmllZCBhIGNs
aWVudCBydW5uaW5nIERlYmlhbiA3L0xpbnV4IDMuMi4wIHdpdGggYm90aCBORlN2Mw0KPiA+Pj4+
IGFuZCBORlN2NCwgc3BlZWQgaXMgc3RpbGwgc2xvdw0KPiA+Pj4+DQo+ID4+Pj4gTG9va2luZyBh
dCBpb3N0YXQgb24gdGhlIHNlcnZlciwgSSBub3RpY2UgdGhhdCBhdmdycS1zeiA9IDggc2VjdG9y
cw0KPiA+Pj4+ICg0MDk2IGJ5dGVzKSB0aHJvdWdob3V0IHRoZSB3cml0ZSBvcGVyYXRpb25zDQo+
ID4+Pj4NCj4gPj4+PiBJJ3ZlIHRyaWVkIHZhcmlvdXMgdGVzdHMsIGUuZy4gZGQgYSBsYXJnZSBm
aWxlLCBvciB1bnBhY2sgYSB0YXJiYWxsIHdpdGgNCj4gPj4+PiBtYW55IHNtYWxsIGZpbGVzLCB0
aGUgaW9zdGF0IG91dHB1dCBpcyBhbHdheXMgdGhlIHNhbWUNCj4gPj4+DQo+ID4+PiBXZXJlIHlv
dSB1c2luZyAnY29udj1zeW5jJz8NCj4gPj4NCj4gPj4gTm8sIGl0IHdhcyBub3QgdXNpbmcgY29u
dj1zeW5jLCBqdXN0IHRoZSB2YW5pbGxhIGRkOg0KPiA+Pg0KPiA+PiBkZCBpZj0vZGV2L3plcm8g
b2Y9c29tZS1mYXQtZmlsZSBicz02NTUzNiBjb3VudD02NTUzNg0KPiA+IA0KPiA+IFRoZW4gdGhl
IHJlc3VsdHMgYXJlIG5vdCBjb21wYXJhYmxlLg0KPiANCj4gSWYgSSBydW4gZGQgd2l0aCBjb252
PXN5bmMgb24gdGhlIHNlcnZlciwgdGhlbiBJIHN0aWxsIG5vdGljZSB0aGF0IE9TDQo+IGNhY2hp
bmcgcGxheXMgYSBmYWN0b3IgYW5kIHdyaXRlIHBlcmZvcm1hbmNlIGp1c3QgYXBwZWFycyByZWFs
bHkgZmFzdA0KPiANCj4gPj4+PiBMb29raW5nIGF0IC9wcm9jL21vdW50cyBvbiB0aGUgY2xpZW50
> >>>> Looking at /proc/mounts on the clients, everything looks good, large
> >>>> wsize, tcp:
> >>>>
> >>>> rw,relatime,vers=3,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=192.x.x.x,mountvers=3,mountport=58727,mountproto=udp,local_lock=none,addr=192.x.x.x
> >>>> 0 0
> >>>>
> >>>> and
> >>>>  rw,relatime,vers=4,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,port=0,timeo=600,retrans=2,sec=sys,clientaddr=192.x.x.x.,minorversion=0,local_lock=none,addr=192.x.x.x 0 0
> >>>>
> >>>> and in /proc/fs/nfs/exports on the server, I have sync and wdelay:
> >>>>
> >>>> /nfs4/daniel
> >>>> 192.168.1.0/24,192.x.x.x(rw,insecure,root_squash,sync,wdelay,no_subtree_check,uuid=aa2a6f37:9cc94eeb:bcbf983c:d6e041d9,sec=1)
> >>>> /home/daniel
> >>>> 192.168.1.0/24,192.x.x.x(rw,root_squash,sync,wdelay,no_subtree_check,uuid=aa2a6f37:9cc94eeb:bcbf983c:d6e041d9)
> >>>>
> >>>> Can anyone suggest anything else?  Or is this really the performance hit
> >>>> of `sync'?
> >>>
> >>> It really depends on your disk setup. Particularly when your filesystem
> >>> is using barriers (enabled by default on ext4 and xfs), a lot of raid
> >>
> >> On the server, I've tried both ext3 and ext4, explicitly changing things
> >> like data=writeback,barrier=0, but the problem remains
> >>
> >> The only thing that made it faster was using hdparm -W1 /dev/sd[ab] to
> >> enable the write-back cache on the disk
> >
> > That should in principle be safe to do as long as you are using
> > barrier=1.
>
> Ok, so the combination of:
>
> - enable writeback with hdparm
> - use ext4 (and not ext3)
> - barrier=1 and data=writeback?  or data=?
>
> - is there a particular kernel version (on either client or server side)
> that will offer more stability using this combination of features?

Not that I'm aware of. As long as you have a kernel > 2.6.29, then LVM
should work correctly. The main problem is that some SATA hardware tends
to be buggy, defeating the methods used by the barrier code to ensure
data is truly on disk. I believe that XFS will therefore actually test
the hardware when you mount with write caching and barriers, and should
report if the test fails in the syslogs.
See http://xfs.org/index.php/XFS_FAQ#Write_barrier_support.
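The combination under discussion (drive write-back cache on, barriers on) might look roughly like this as a config sketch. The device name, volume group, and mount point are hypothetical examples, not taken from the thread, and data=ordered is simply the ext4 default:

```
# Hypothetical /etc/fstab entry: ext4 with barriers explicitly enabled.
# Device and mount point are examples only.
/dev/mapper/vg0-export  /srv/export  ext4  defaults,barrier=1,data=ordered  0  2

# The drive write-back cache would be enabled separately, e.g.:
#   hdparm -W1 /dev/sda   (example device)
```

With this pairing the cache absorbs small synchronous writes while barriers still order journal commits, which is why it is considered safe in principle on non-buggy hardware.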
> I think there are some other variations of my workflow that I can
> attempt too, e.g. I've contemplated compiling C++ code onto a RAM disk
> because I don't need to keep the hundreds of object files.

You might also consider using something like ccache and set the
CCACHE_DIR to a local disk if you have one.
> >>> setups really _suck_ at dealing with fsync(). The latter is used every
> >>
> >> I'm using md RAID1, my setup is like this:
> >>
> >> 2x 1TB SATA disks ST31000528AS (7200rpm with 32MB cache and NCQ)
> >>
> >> SATA controller: ATI Technologies Inc SB700/SB800 SATA Controller [AHCI
> >> mode] (rev 40)
> >> - not using any of the BIOS softraid stuff
> >>
> >> Both devices have identical partitioning:
> >> 1. 128MB boot
> >> 2. md volume (1TB - 128MB)
> >>
> >> The entire md volume (/dev/md2) is then used as a PV for LVM
> >>
> >> I do my write tests on a fresh LV with no fragmentation
> >>
> >>> time the NFS client sends a COMMIT or trunc() instruction, and for
> >>> pretty much all file and directory creation operations (you can use
> >>> 'nfsstat' to monitor how many such operations the NFS client is sending
> >>> as part of your test).
> >>
> >> I know that my two tests are very different in that way:
> >>
> >> - dd is just writing one big file, no fsync
> >>
> >> - unpacking a tarball (or compiling a large C++ project) does a lot of
> >> small writes with many fsyncs
> >>
> >> In both cases, it is slow
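The difference between the two workloads can be sketched with a toy comparison on a local filesystem; the file counts and paths are arbitrary examples:

```shell
# Toy illustration: one large buffered write vs. many small flushed writes.
workdir=$(mktemp -d)

# Workload 1: one big file, no flush (like the plain dd test).
dd if=/dev/zero of="$workdir/big" bs=65536 count=64 2>/dev/null

# Workload 2: many small files, each forced to disk (like unpacking a
# tarball over a `sync' export, where each file creation is committed
# to stable storage before the next one proceeds).
for i in $(seq 1 50); do
  dd if=/dev/zero of="$workdir/small.$i" bs=512 count=1 conv=fsync 2>/dev/null
done

ls "$workdir" | wc -l   # 51 files created
rm -rf "$workdir"
```

On a disk with the write cache off, workload 2 pays a full device flush per file, which is where the tarball and compile tests lose their time.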
> >>> Local disk can get away with doing a lot less fsync(), because the cache
> >>> consistency guarantees are different:
> >>>       * in NFS, the server is allowed to crash or reboot without
> >>>         affecting the client's view of the filesystem.
> >>>       * in the local file system, the expectation is that on reboot any
> >>>         data lost won't need to be recovered (the application will
> >>>         have used fsync() for any data that does need to be persistent).
> >>>         Only the disk filesystem structures need to be recovered, and
> >>>         that is done using the journal (or fsck).
> >>
> >>
> >> Is this an intractable problem though?
> >>
> >> Or do people just work around this, for example, enable async and
> >> write-back cache, and then try to manage the risk by adding a UPS and/or
> >> battery backed cache to their RAID setup (to reduce the probability of
> >> unclean shutdown)?
> >
> > It all boils down to what kind of consistency guarantees you are
> > comfortable living with. The default NFS server setup offers much
> > stronger data consistency guarantees than local disk, and is therefore
> > likely to be slower when using cheap hardware.
> >
>
> I'm keen for consistency, because I don't like the idea of corrupting
> some source code or a whole git repository for example.
>
> How did you know I'm using cheap hardware?  It is a HP MicroServer, I
> even got the £100 cash-back cheque:
>
> http://www8.hp.com/uk/en/campaign/focus-for-smb/solution.html#/tab2/
>
> Seriously though, I've worked with some very large arrays in my business
> environment, but I use this hardware at home because of the low noise
> and low heat dissipation rather than for saving money, so I would like
> to try and get the most out of it if possible and I'm very grateful for
> these suggestions.

Right. All I'm saying is that when comparing local disk and NFS
performance, then make sure that you are doing an apples-to-apples
comparison.
The main reason for wanting to use NFS in a home setup would usually be
in order to simultaneously access the same data through several clients.
If that is not a concern, then perhaps transforming your NFS server into
an iSCSI target might fit your performance requirements better?

-- 
Trond Myklebust
Linux NFS client maintainer

NetApp
Trond.Myklebust@netapp.com
www.netapp.com