From: "Kottaridis, Chris"
Subject: Rpc.mountd growing 6 MB/day
Date: Mon, 30 Apr 2007 10:41:07 -0700
To: nfs@lists.sourceforge.net

I have a situation where rpc.mountd is growing continuously, and I am not sure whether it is a memory leak or expected behavior that can be controlled with some configuration option.

I used top to watch rpc.mountd over time, and I can see the virtual size continually growing. In this environment there is no swap space, so RSS is growing as well.
When RSS gets close to the virtual size, the virtual size jumps up by 128K; RSS then "catches up" to VIRT again, and VIRT jumps up by another 128K:

  PID USER     PR NI VIRT  RES  SHR S %CPU %MEM   TIME+ COMMAND
 9951 root     16  0 1812 1052  668 S  0.0  0.0 0:20.07 rpc.mountd --nfs-version 2 --nfs-version 3
 9951 root     16  0 1812 1052  668 S  0.0  0.0 0:20.19 rpc.mountd --nfs-version 2 --nfs-version 3
 9951 root     15  0 1812 1052  668 S  0.0  0.0 0:20.44 rpc.mountd --nfs-version 2 --nfs-version 3
 9951 root     15  0 1940 1056  668 S  0.0  0.0 0:20.57 rpc.mountd --nfs-version 2 --nfs-version 3
 9951 root     16  0 1940 1056  668 S  0.0  0.0 0:20.69 rpc.mountd --nfs-version 2 --nfs-version 3
 9951 root     16  0 1940 1056  668 S  0.0  0.0 0:20.70 rpc.mountd --nfs-version 2 --nfs-version 3

RES keeps incrementing every so often, and eventually:

  PID USER     PR NI VIRT  RES  SHR S %CPU %MEM   TIME+ COMMAND
 9951 root     16  0 1940 1180  668 S  0.0  0.0 0:38.85 rpc.mountd --nfs-version 2 --nfs-version 3
 9951 root     16  0 1940 1180  668 S  0.0  0.0 0:38.99 rpc.mountd --nfs-version 2 --nfs-version 3
 9951 root     16  0 2068 1184  668 S  0.0  0.0 0:39.14 rpc.mountd --nfs-version 2 --nfs-version 3
 9951 root     16  0 2068 1184  668 S  0.0  0.0 0:39.33 rpc.mountd --nfs-version 2 --nfs-version 3

This pattern continues.

I added some xlog() statements in rpc.mountd to try to show whether there were any malloc()s without matching free()s.
I didn't see anything, but here are the routines that seemed to get called frequently:

Apr 26 18:55:13 unit0 mountd[24268]: auth_unix_ip client alloced 0x80695e0
Apr 26 18:55:13 unit0 mountd[24268]: auth_unix_ip client freed 0x80695e0
Apr 26 18:55:13 unit0 mountd[24268]: nfsd_export alocated dom 0x80695e0
Apr 26 18:55:13 unit0 mountd[24268]: nfsd_export alocated path 0x80695f0
Apr 26 18:55:13 unit0 mountd[24268]: nfsd_export : free dom 0x80695e0
Apr 26 18:55:13 unit0 mountd[24268]: nfsd_export : free path 0x80695f0
Apr 26 18:55:13 unit0 mountd[24268]: nfsd_fh : dom allocated 0x8069600
Apr 26 18:55:13 unit0 mountd[24268]: nfsd_fh : freed dom 0x8069600
Apr 26 19:10:33 unit0 mountd[24268]: auth_unix_ip client alloced 0x8069618
Apr 26 19:10:33 unit0 mountd[24268]: auth_unix_ip client freed 0x8069618
Apr 26 19:10:33 unit0 mountd[24268]: nfsd_export alocated dom 0x8069618
Apr 26 19:10:33 unit0 mountd[24268]: nfsd_export alocated path 0x8069628
Apr 26 19:10:33 unit0 mountd[24268]: nfsd_export : free dom 0x8069618
Apr 26 19:10:33 unit0 mountd[24268]: nfsd_export : free path 0x8069628
Apr 26 19:10:33 unit0 mountd[24268]: nfsd_fh : dom allocated 0x8069638
Apr 26 19:10:33 unit0 mountd[24268]: nfsd_fh : freed dom 0x8069638

It seems to be these three routines that get called over and over, at least as far as I have added xlog()s so far. They appear to interact with the kernel nfsd via /proc, but I am not really sure how that would affect rpc.mountd's virtual size.

I am using the kernel NFSD; the kernel version is 2.6.10 and the nfs-utils version is nfs-utils-1.0.7.

I don't know whether this growth is expected or will eventually stop (over a period of days it has not), or whether there is some configuration option that can control it.

Has anyone seen this behavior before?

Is it expected and controllable, or is there a memory leak here?
Thanks,

Chris Kottaridis
Senior Engineer
Wind River Systems
719-522-9786