Hi,
analyzing performance issues with Oracle databases on an NFS client
running on SLES9 SP3 and NetApp as NFS server, I found that in SLES9
SP3 each write call is followed by a GETATTR. This is not the case
with SLES10 SP1.
Mount options in use are:
rw,v3,rsize=32768,wsize=32768,acregmin=0,acregmax=0,acdirmin=0,acdirmax=0,hard,lock,proto=tcp,addr=172.18.131.134
I do
dd if=/dev/zero of=/mnt/qqq oflag=direct bs=8k count=100000
and using tcpdump (BTW, is there an easier way ?) I see with SLES9 SP3
(i.e. 2.6.5-7.244) each single 8k write followed by a GETATTR (which
comes at some cost).
Using SLES10 SP1 (2.6.16.46-0.12) there is only one getattr when dd
closes the file.
Is there anything I can do to avoid the GETATTR calls in SLES9 SP3
(no, sorry, I can't update to SLES10 SP1)?
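For what it's worth, here is the crude counting I fall back on instead of reading the dump by eye. This is only a sketch: the capture lines in the here-doc are made-up stand-ins, not real tcpdump output; a real text capture would come from something like `tcpdump -s 0 port 2049`, and the grep pipeline is the same either way.

```shell
# Count WRITE vs GETATTR calls in a text-mode capture.  The lines in
# the here-doc are illustrative stand-ins, not real tcpdump output;
# only the grep counting below is the actual technique.
cat > /tmp/nfs.txt <<'EOF'
client > server: NFS request WRITE
server > client: NFS reply WRITE
client > server: NFS request GETATTR
client > server: NFS request WRITE
client > server: NFS request GETATTR
EOF
grep -c 'request WRITE' /tmp/nfs.txt     # prints 2
grep -c 'request GETATTR' /tmp/nfs.txt   # prints 2
```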
Thanks
Gerd
-------------------------------------------------------------------------
This SF.net email is sponsored by: Splunk Inc.
Still grepping through log files to find problems? Stop.
Now Search log events and configuration files using AJAX and a browser.
Download your FREE copy of Splunk now >> http://get.splunk.com/
_______________________________________________
NFS maillist - [email protected]
https://lists.sourceforge.net/lists/listinfo/nfs
Gerd Bavendiek wrote:
>
> analyzing performance issues with Oracle databases on an NFS client
> running on SLES9 SP3 and NetApp as NFS server, I found that in SLES9
> SP3 each write call is followed by a GETATTR. This is not the case
> with SLES10 SP1.
There are too many changes between 2.6.5 and 2.6.16 to track this down
easily.
> I do
>
> dd if=/dev/zero of=/mnt/qqq oflag=direct bs=8k count=100000
>
> and using tcpdump (BTW, is there an easier way ?) I see with SLES9 SP3
> (i.e. 2.6.5-7.244) each single 8k write followed by a GETATTR (which
> comes at some cost).
I would suggest trying the latest kernel update from Novell/SUSE.
If the issue is still present in the latest kernels, please file a bug
report in the Novell/SUSE Bugzilla: https://bugzilla.novell.com
Thanks,
--
Suresh Jayaraman
On Tue, 2007-11-13 at 10:03 +0100, Gerd Bavendiek wrote:
> Hi,
>
> analyzing performance issues with Oracle databases on an NFS client
> running on SLES9 SP3 and NetApp as NFS server, I found that in SLES9
> SP3 each write call is followed by a GETATTR. This is not the case
> with SLES10 SP1.
>
> Mount options in use are:
>
> rw,v3,rsize=32768,wsize=32768,acregmin=0,acregmax=0,acdirmin=0,acdirmax=0,hard,lock,proto=tcp,addr=172.18.131.134
>
> I do
>
> dd if=/dev/zero of=/mnt/qqq oflag=direct bs=8k count=100000
>
> and using tcpdump (BTW, is there an easier way ?) I see with SLES9 SP3
> (i.e. 2.6.5-7.244) each single 8k write followed by a GETATTR (which
> comes at some cost).
>
> Using SLES10 SP1 (2.6.16.46-0.12) there is only one getattr when dd
> closes the file.
>
> Is there anything I can do to avoid the getattr calls in SLES9 SP3
> (no, sorry, can't update to SLES10 SP1) ?
For one thing, you could turn attribute caching back on. I don't know
why SLES10 fails to GETATTR, but acregmin=0,acregmax=0 turns attribute
caching off. Append writes need to know where the end-of-file is, and so
they will force a GETATTR when there is no attribute caching.
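As a sketch (the server path and mount point below are placeholders, and the default ac* values shown in the comment are from the nfs(5) man page), remounting without the ac*=0 options restores the default attribute cache:

```shell
# Option string without acregmin=0,acregmax=0,acdirmin=0,acdirmax=0;
# omitting them restores the kernel's default attribute caching
# (acregmin=3,acregmax=60,acdirmin=30,acdirmax=60).
OPTS="rw,vers=3,rsize=32768,wsize=32768,hard,proto=tcp"
echo "$OPTS"
# Placeholder server/export; run as root on the client:
# mount -o remount,"$OPTS" server:/export /mnt
```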
Trond
Trond Myklebust schrieb:
> On Tue, 2007-11-13 at 10:03 +0100, Gerd Bavendiek wrote:
>> Hi,
>>
>> analyzing performance issues with Oracle databases on an NFS client
>> running on SLES9 SP3 and NetApp as NFS server, I found that in SLES9
>> SP3 each write call is followed by a GETATTR. This is not the case
>> with SLES10 SP1.
>>
>> Mount options in use are:
>>
>> rw,v3,rsize=32768,wsize=32768,acregmin=0,acregmax=0,acdirmin=0,acdirmax=0,hard,lock,proto=tcp,addr=172.18.131.134
>>
>> I do
>>
>> dd if=/dev/zero of=/mnt/qqq oflag=direct bs=8k count=100000
>>
>> and using tcpdump (BTW, is there an easier way ?) I see with SLES9 SP3
>> (i.e. 2.6.5-7.244) each single 8k write followed by a GETATTR (which
>> comes at some cost).
>>
>> Using SLES10 SP1 (2.6.16.46-0.12) there is only one getattr when dd
>> closes the file.
>>
>> Is there anything I can do to avoid the getattr calls in SLES9 SP3
>> (no, sorry, can't update to SLES10 SP1) ?
>
> For one thing, you could turn attribute caching back on. I don't know
> why SLES10 fails to GETATTR, but acregmin=0,acregmax=0 turns attribute
> caching off. Append writes need to know where the end-of-file is, and so
> they will force a GETATTR when there is no attribute caching.
>
> Trond
>
>
Trond,
You say SLES10 fails to GETATTR. So with acregmin=0,acregmax=0 etc. we
should always see one write AND one GETATTR?
In other words, SLES9 SP3 does it the right way?
Gerd
On Nov 13, 2007, at 10:55 AM, Gerd Bavendiek wrote:
> Trond Myklebust schrieb:
>> On Tue, 2007-11-13 at 10:03 +0100, Gerd Bavendiek wrote:
>>> Hi,
>>>
>>> analyzing performance issues with Oracle databases on an NFS client
>>> running on SLES9 SP3 and NetApp as NFS server, I found that in SLES9
>>> SP3 each write call is followed by a GETATTR. This is not the case
>>> with SLES10 SP1.
>>>
>>> Mount options in use are:
>>>
>>> rw,v3,rsize=32768,wsize=32768,acregmin=0,acregmax=0,acdirmin=0,acdirmax=0,hard,lock,proto=tcp,addr=172.18.131.134
>>>
>>> I do
>>>
>>> dd if=/dev/zero of=/mnt/qqq oflag=direct bs=8k count=100000
>>>
>>> and using tcpdump (BTW, is there an easier way ?) I see with SLES9 SP3
>>> (i.e. 2.6.5-7.244) each single 8k write followed by a GETATTR (which
>>> comes at some cost).
>>>
>>> Using SLES10 SP1 (2.6.16.46-0.12) there is only one getattr when dd
>>> closes the file.
>>>
>>> Is there anything I can do to avoid the getattr calls in SLES9 SP3
>>> (no, sorry, can't update to SLES10 SP1) ?
>>
>> For one thing, you could turn attribute caching back on. I don't know
>> why SLES10 fails to GETATTR, but acregmin=0,acregmax=0 turns attribute
>> caching off. Append writes need to know where the end-of-file is, and so
>> they will force a GETATTR when there is no attribute caching.
Note "oflag=direct".
>> Trond
>>
>>
>
> Trond,
>
> you say: SLES10 fails to GETATTR.
>
> So with acregmin=0,acregmax=0 etc. we should always have one write AND
> one getattr ?
>
> So SLES9 SP3 does it the right way ?
>
> Gerd
SLES 9's direct I/O engine issues those GETATTRs. There's no way to get
rid of that behavior in that kernel version; it's an implementation
limitation that was corrected in later releases.
SLES 10's direct I/O engine is more advanced and doesn't issue GETATTRs
for direct I/O: when performing direct I/O, the client doesn't need to
know how large the file is.
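To illustrate: each direct write already names its offset, so the request is self-describing. A sketch (writing to a local temp file so it runs anywhere; oflag=direct is left off here because many local filesystems reject it, but on the NFS mount you would add it to get the uncached WRITEs discussed above):

```shell
# Each write below names an explicit 8k-aligned offset via seek=, so
# the request carries everything needed; no end-of-file lookup first.
rm -f /tmp/qqq
for i in 0 1 2 3; do
    dd if=/dev/zero of=/tmp/qqq bs=8k count=1 seek=$i conv=notrunc 2>/dev/null
done
stat -c %s /tmp/qqq   # prints 32768 (4 x 8k)
```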
--
Chuck Lever
chuck[dot]lever[at]oracle[dot]com