2010-08-05 00:21:05

by Chen, Helen Y

Subject: RE: pNFS file layout performance

I added a 2nd DS and repeated the benchmark. It seems the test data was placed on the MDS and not on the DSes. Any idea why?

Thanks,
Helen

The following are my notes on the kernel, configuration setup, and results:

I rebuilt Steve's f13 pnfs kernel source with:
# CONFIG_PNFSD_LOCAL_EXPORT is not set
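One way to confirm that the rebuilt kernel is the one running and that local export is really compiled out (a sketch, assuming the config file was installed under /boot, as the Fedora packaging does):

    uname -r                                      # should report the rebuilt pnfs kernel
    grep CONFIG_PNFSD /boot/config-$(uname -r)    # expect: # CONFIG_PNFSD_LOCAL_EXPORT is not set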


Configuration setup:
Data Server:
/etc/fstab
tmpfs /export/spnfs tmpfs size=85% 0 0

/etc/exports
/export/spnfs *(rw,sync,fsid=0,insecure,no_subtree_check,pnfs,no_root_squash)
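For completeness, the sequence that activates this on a DS after editing the two files (standard mount/exportfs usage, with the paths above):

    # mount the tmpfs and (re)export it
    mkdir -p /export/spnfs
    mount /export/spnfs    # picks up the tmpfs entry from /etc/fstab
    exportfs -ra           # re-read /etc/exports
    exportfs -v            # verify the export and its options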


Meta Data Server:
/etc/fstab
wtb9-10g:/ /spnfs/192.168.96.109 nfs4 minorversion=1,intr,soft,rsize=32768,wsize=32768 0 0
wtb10-10g:/ /spnfs/192.168.96.110 nfs4 minorversion=1,intr,soft,rsize=32768,wsize=32768 0 0
wtb11-10g:/ /spnfs/192.168.96.11 nfs4 minorversion=1,intr,soft,rsize=32768,wsize=32768 0 0

/etc/exports
/export *(rw,sync,pnfs,fsid=0,insecure,no_subtree_check,no_root_squash)
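On the MDS, the DS exports have to be mounted under /spnfs before spnfsd can place stripes through them; a sketch using the entries above:

    # create the per-DS mount points and mount the DS exports
    mkdir -p /spnfs/192.168.96.109 /spnfs/192.168.96.110
    mount -a -t nfs4            # mount all nfs4 entries from /etc/fstab
    grep /spnfs /proc/mounts    # confirm the DS mounts are up with minorversion=1
    exportfs -ra                # and re-export /export itself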

/etc/spnfsd.conf
[General]
Verbosity = 1
Stripe-size = 8192
Dense-striping = 0
Pipefs-Directory = /var/lib/nfs/rpc_pipefs
DS-Mount-Directory = /spnfs

[DataServers]
NumDS = 2
DS1_IP = 192.168.96.109
DS1_PORT = 2049
DS1_ROOT = /
DS1_ID = 1
DS2_IP = 192.168.96.110
DS2_PORT = 2049
DS2_ROOT = /
DS2_ID = 2
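spnfsd reads spnfsd.conf at startup, so it needs a restart after changes; a sketch, assuming spnfsd from pnfs-nfs-utils is started by hand:

    # restart spnfsd so it re-reads [General] and [DataServers]
    pkill spnfsd
    spnfsd

If striping works, a 1 GiB file written through the MDS should end up as roughly 512 MiB on each of the two DSes, in 8 KiB stripe units.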


pNFS Client:
/etc/fstab
wtb8-10g:/ /mnt nfs4 minorversion=1,tcp,intr,soft,rsize=32768,wsize=32768,timeo=600 0 0
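After mounting, the negotiated protocol version can be double-checked on the client (nfsstat ships with nfs-utils):

    mount /mnt
    nfsstat -m                  # prints each NFS mount with the options in effect
    grep ' /mnt ' /proc/mounts  # look for minorversion=1 in the options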


The following is one of my benchmark results:
iozone -i 0 -s 1g -r 32k -f /mnt/1g -c -w
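For readers unfamiliar with iozone, those options break down as follows (all standard iozone flags):

    iozone -i 0 \          # run test 0 only: sequential write/rewrite
           -s 1g \         # total file size: 1 GiB
           -r 32k \        # record (I/O request) size: 32 KiB
           -f /mnt/1g \    # test file on the pNFS mount
           -c \            # include close() in the timing
           -w              # keep the file afterwards, so it can be inspected on the servers

The -w is what makes the stat checks below possible.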


1. on Client:
[root@wtb7 pnfs-tests]# ls -l /mnt
total 132100
-rw-r----- 1 root root 1073741824 Aug 4 16:25 1g

2. on MDS
[root@wtb8 ~]# stat /export/1g
File: `/export/1g'
Size: 1073741824 Blocks: 264200 IO Block: 4096 regular file
Device: 805h/2053d Inode: 491522 Links: 1
Access: (0640/-rw-r-----) Uid: ( 0/ root) Gid: ( 0/ root)
Access: 2010-08-04 16:24:48.000000000 -0700
Modify: 2010-08-04 16:25:28.000000000 -0700
Change: 2010-08-04 16:25:28.000000000 -0700

3. on DS1:
[root@wtb9 ~]# stat /export/spnfs
File: `/export/spnfs'
Size: 60 Blocks: 0 IO Block: 4096 directory
Device: 14h/20d Inode: 8981 Links: 2
Access: (1777/drwxrwxrwt) Uid: ( 0/ root) Gid: ( 0/ root)
Access: 2010-08-04 16:26:49.938720191 -0700
Modify: 2010-08-04 16:58:31.959755618 -0700
Change: 2010-08-04 16:58:31.959755618 -0700

4. on DS2
[root@wtb10 ~]# stat /export/spnfs/
File: `/export/spnfs/'
Size: 60 Blocks: 0 IO Block: 4096 directory
Device: 13h/19d Inode: 8694 Links: 2
Access: (1777/drwxrwxrwt) Uid: ( 0/ root) Gid: ( 0/ root)
Access: 2010-08-04 16:26:54.573669020 -0700
Modify: 2010-08-04 16:58:18.525689302 -0700
Change: 2010-08-04 16:58:18.525689302 -0700
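
In steps 3 and 4 above, Size: 60 with Blocks: 0 is simply an empty directory: no stripe files reached either DS, while step 2 shows the file's blocks allocated on the MDS's local disk. A quicker per-DS spot check (hostnames from the testbed above):

    # list stripe files and space used on each DS export
    for h in wtb9 wtb10; do
        ssh $h 'hostname; ls -lA /export/spnfs; du -sh /export/spnfs'
    done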



________________________________________
From: Chen, Helen Y
Sent: Wednesday, August 04, 2010 3:59 PM
To: 'Benny Halevy'; '[email protected]'; 'NFS list'
Subject: RE: pNFS file layout performance

Steve and Benny,

Thank you very much for your help! I have successfully set up a 3-node testbed and have since run some benchmarks. Unfortunately, the throughput results are very poor.

I am running Steve's 2.6.33.5-112.2.2.pnfs.fc13.x86_64 kernel, and exported a 28GB ramfs from my DS to the MDS. I was able to achieve ~400 MB/s over NFS using iozone. I then proceeded to run the same test from the pNFS client and got only 50 MB/s. After noticing that my test data had landed on both the MDS and the DS, I assumed disk I/O on the MDS was the bottleneck. So I proceeded to rebuild the kernel with CONFIG_PNFSD_LOCAL_EXPORT disabled, but achieved only 6 MB/s afterward. Is this expected, or am I doing something wrong? Please let me know if I need to provide further information.
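To rule out the network itself, one option is a raw TCP test between the client and a DS, e.g. with iperf (not NFS-specific; wtb9-10g is one of the 10GbE DS interfaces from the notes above):

    # on the DS:
    iperf -s
    # on the client, a 30-second TCP throughput test:
    iperf -c wtb9-10g -t 30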

Thanks,
Helen

-----Original Message-----
From: Benny Halevy [mailto:[email protected]] On Behalf Of Benny Halevy
Sent: Wednesday, May 26, 2010 5:36 AM
To: [email protected]; NFS list
Cc: Chen, Helen Y
Subject: Fwd: Re: [pnfs] problem building pnfs-nfs-utils under Fedora 13

Helen, please note that the [email protected] mailing list was deprecated.
Forwarding to [email protected].

From a quick glance I'm not sure what went wrong with your build, Steve should know better :-)

Benny

On May. 19, 2010, 20:32 +0300, "Chen, Helen Y" <[email protected]> wrote:




Has anyone successfully built pNFS-enabled nfs-utils under Fedora 13?
I am running the kernel from:
http://steved.fedorapeople.org/repos/pnfs/13/x86_64/

I installed libtirpc{,-devel}, tcp_wrappers{,-devel}, libevent{,-devel}, nfs-utils-lib{,-devel}, libgssglue{,-devel}, libblkid{,-devel}, and libcap{,-devel}

per instructions from:
http://wiki.linux-nfs.org/wiki/index.php/Configuring_pNFS/spnfsd#Kernel_and_nfs-utils_compilation

I used libnfsidmap{,-devel} bundled in:
nfs-utils-lib-devel-1.1.5-1.fc13.x86_64.rpm

Finally, I downloaded nfs-utils-1.2.2-4.1.pnfs.src.rpm
<http://steved.fedorapeople.org/repos/pnfs/13/source/nfs-utils-1.2.2-4.1.pnfs.src.rpm>
from http://steved.fedorapeople.org/repos/pnfs/13/source/
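For reference, a source RPM like that can also be rebuilt directly, instead of running the autotools by hand (standard rpmbuild usage):

    # produce binary RPMs straight from the src.rpm
    rpmbuild --rebuild nfs-utils-1.2.2-4.1.pnfs.src.rpm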

I am having trouble building these utils. I failed to generate 'configure' when I ran autogen.sh:

Cleaning up ............. done
libtoolize: putting auxiliary files in `.'.
libtoolize: copying file `./ltmain.sh'
libtoolize: putting macros in AC_CONFIG_MACRO_DIR, `aclocal'.
libtoolize: copying file `aclocal/libtool.m4'
libtoolize: copying file `aclocal/ltoptions.m4'
libtoolize: copying file `aclocal/ltsugar.m4'
libtoolize: copying file `aclocal/ltversion.m4'
libtoolize: copying file `aclocal/lt~obsolete.m4'
configure.ac:5: installing `./config.guess'
configure.ac:5: installing `./config.sub'
configure.ac:421: required file `tools/mountstats/Makefile.in' not found
configure.ac:421: required file `tools/nfs-iostat/Makefile.in' not found

I deleted the two Makefile.in requirements from line 421 in configure.ac, because those directories contain only Python scripts.
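Concretely, that edit amounts to removing two entries from the AC_CONFIG_FILES list and regenerating; a sketch:

    # in configure.ac, remove these two entries from AC_CONFIG_FILES:
    #     tools/mountstats/Makefile
    #     tools/nfs-iostat/Makefile
    sh autogen.sh    # regenerate configure
    ./configure      # re-run the checks

If make later complains while recursing into those directories, they may also need removing from the SUBDIRS line in tools/Makefile.am.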

When I ran the 'configure' generated after the modification, it failed with the following output:

checking build system type... x86_64-unknown-linux-gnu checking host system type... x86_64-unknown-linux-gnu checking for a BSD-compatible install... /usr/bin/install -c checking whether build environment is sane... yes checking for gawk... gawk checking whether make sets $(MAKE)... yes checking whether to enable maintainer-specific portions of Makefiles... no checking for style of include used by make... GNU checking for gcc... gcc checking whether the C compiler works... yes checking for C compiler default output file name... a.out checking for suffix of executables...
checking whether we are cross compiling... no checking for suffix of object files... o checking whether we are using the GNU C compiler... yes checking whether gcc accepts -g... yes checking for gcc option to accept ISO C89... none needed checking dependency style of gcc... gcc3 checking how to run the C preprocessor... gcc -E checking for grep that handles long lines and -e... /bin/grep checking for egrep... /bin/grep -E checking for ANSI C header files... no checking for sys/types.h... yes checking for sys/stat.h... yes checking for stdlib.h... no checking for string.h... yes checking for memory.h... yes checking for strings.h... yes checking for inttypes.h... yes checking for stdint.h... yes checking for unistd.h... yes checking for clnt_tli_create in -ltirpc... yes checking /usr/include/tirpc/netconfig.h usability... yes checking /usr/include/tirpc/netconfig.h presence... yes checking for /usr/include/tirpc/netconfig.h... yes checking for prctl... yes checking for cap_get_proc in -lcap... yes checking sys/capability.h usability... yes checking sys/capability.h presence... yes checking for sys/capability.h... yes checking for libwrap...

But libwrap is clearly installed, according to locate:
#locate libwrap
/usr/lib/libwrap.so
/usr/lib/libwrap.so.0
/usr/lib/libwrap.so.0.7.6
/usr/lib64/libwrap.so
/usr/lib64/libwrap.so.0
/usr/lib64/libwrap.so.0.7.6
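
When a check dies mid-line like that, the underlying compile or link error is recorded verbatim in config.log; pulling it out usually identifies the missing piece:

    # show the failed libwrap test and the error that followed it
    grep -n -B2 -A10 libwrap config.log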

I am new at this and would appreciate any help you can provide.

Thanks,
Helen











2010-08-05 22:01:55

by Daniel.Muntz

Subject: RE: pNFS file layout performance

Anything in the logs indicating the MDS couldn't communicate with the DSes? Beyond that, if the setup is correct, would it be possible to get network traces? Nothing immediately comes to mind as to why performance would be poor if the data is landing on the MDS. OTOH, if the data were landing on the DSes and the client was having to resort to proxy I/O, then spNFS performance would probably be poor (proxy I/O is _not_ spNFS's strong suit :-)
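
For example, a capture like the following on the client during the iozone run would show whether any traffic reaches the DSes at all (a sketch; the interface name is a guess, the DS addresses are from Helen's notes):

    # full-packet capture of client<->DS traffic, for later inspection in wireshark
    tcpdump -i eth0 -s 0 -w pnfs-run.pcap host 192.168.96.109 or host 192.168.96.110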

-Dan

-----Original Message-----
From: [email protected] [mailto:[email protected]] On Behalf Of Chen, Helen Y
Sent: Wednesday, August 04, 2010 5:21 PM
To: Chen, Helen Y; 'Benny Halevy'; '[email protected]'; 'NFS list'
Subject: RE: pNFS file layout performance

I added a 2nd DS and repeated the benchmark. It seems the test data was placed on the MDS and not on the DSes. Any idea why?

Thanks,
Helen

2010-08-05 16:33:46

by Benny Halevy

Subject: Re: pNFS file layout performance

On Aug. 05, 2010, 3:20 +0300, "Chen, Helen Y" <[email protected]> wrote:
> I added a 2nd DS and repeated the benchmark. It seems the test data was placed on the MDS and not on the DSes. Any idea why?

Any chance you forgot to modprobe nfslayoutdriver?
BTW, this requirement just changed with a patch merge today that
should automatically load the nfsv41 files layout driver on the client.
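
For example, on the client, before mounting:

    modprobe nfslayoutdriver    # the NFSv4.1 files layout driver mentioned above
    lsmod | grep -i layout      # confirm it is actually loaded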

Benny


2010-08-05 22:14:08

by Chen, Helen Y

Subject: Re: pNFS file layout performance

I believe Benny is right! I might have forgotten to load the nfs file layout driver. I will check on that as soon as I have a chance. Hopefully I will have good news to report later.

Thanks,
Helen

----- Original Message -----
From: [email protected] [mailto:[email protected]]
Sent: Thursday, August 05, 2010 04:01 PM
To: Chen, Helen Y; [email protected] <[email protected]>; [email protected] <[email protected]>; [email protected] <[email protected]>
Subject: RE: pNFS file layout performance

2010-08-05 21:12:04

by Chen, Helen Y

Subject: Re: pNFS file layout performance

Good question. I will check on it as soon as I have a chance.

Thanks,
Helen

----- Original Message -----
From: Benny Halevy [mailto:[email protected]]
Sent: Thursday, August 05, 2010 10:33 AM
To: Chen, Helen Y
Cc: '[email protected]' <[email protected]>; 'NFS list' <[email protected]>
Subject: Re: pNFS file layout performance