2006-07-26 09:11:28

by Xue Peng Li

Subject: [ltc-perf] draft of nfs event hook


Hi folks,

I am working on NFS trace hooks for SystemTap/LKET. These trace
hooks can be used for performance analysis, tracing both NFS
client-side and server-side activities.

As a first step, I need to make sure that the trace hooks I have
defined are appropriate and that every trace hook probes the right
place inside the kernel. I would appreciate it if you could help me
review the following trace hooks.


Thanks

======================== NFS Client Side Trace Hooks =================

The following event hooks are used to trace NFS client activities.
They are divided into two groups. A probe point and a
description are given for each event hook.

Group1:
It contains nine event hooks, which are used to probe client-side
NFS procedures. (A usage sketch follows this list.)
---------------------------------------------------------------------
addevent.nfs.proc.read_setup
Probe Point:
nfs_proc_read_setup, nfs3_proc_read_setup, nfs4_proc_read_setup
Description:
Set up an RPC task in preparation for a read
---------------------------------------------------------------------
addevent.nfs.proc.read_done
Probe Point:
nfs_proc_read_done, nfs3_proc_read_done, nfs4_proc_read_done
Description:
Fires when a read reply is received from the server; used to
refresh the inode on the client
---------------------------------------------------------------------
addevent.nfs.proc.read
Probe Point:
nfs_proc_read, nfs3_proc_read, nfs4_proc_read
Description:
Send a read operation to the server and refresh the local inode
after the reply is received
---------------------------------------------------------------------
addevent.nfs.proc.write_setup
Probe Point:
nfs_proc_write_setup, nfs3_proc_write_setup, nfs4_proc_write_setup
Description:
Set up an RPC task in preparation for a write
---------------------------------------------------------------------
addevent.nfs.proc.write
Probe Point:
nfs_proc_write, nfs3_proc_write, nfs4_proc_write
Description:
Send a write operation to the server
---------------------------------------------------------------------
addevent.nfs.proc.write_done
Probe Point:
nfs_write_done, nfs3_write_done, nfs4_write_done
Description:
Fires when a write reply is received from the server; used to
refresh the inode on the client
---------------------------------------------------------------------
addevent.nfs.proc.open
Probe Point:
nfs_open
Description:
Allocate file read/write context information
---------------------------------------------------------------------
addevent.nfs.proc.release
Probe Point:
nfs_release
Description:
Release file read/write context information
---------------------------------------------------------------------
addevent.nfs.proc.create
Probe Point:
nfs_create
Description:
Create a new file or directory on the server
_____________________________________________________________________
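
For illustration only, here is a minimal SystemTap sketch (my own
example, not the LKET implementation) of how the read procedures above
could be probed at entry and return to measure per-call latency. It
assumes the NFS client is built as the "nfs" module; the "?" suffix
keeps the script loadable when a version-specific function is absent.

    global read_start

    probe module("nfs").function("nfs_proc_read") ?,
          module("nfs").function("nfs3_proc_read") ?,
          module("nfs").function("nfs4_proc_read") ?
    {
            # remember when this thread entered the read procedure
            read_start[tid()] = gettimeofday_us()
    }

    probe module("nfs").function("nfs_proc_read").return ?,
          module("nfs").function("nfs3_proc_read").return ?,
          module("nfs").function("nfs4_proc_read").return ?
    {
            if (tid() in read_start) {
                    printf("%s(%d) %s: %d us\n", execname(), tid(),
                           probefunc(), gettimeofday_us() - read_start[tid()])
                    delete read_start[tid()]
            }
    }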

Group2:
This group contains the event hooks which probe NFS address space
operation related functions. All of these functions are common to
NFSv2, NFSv3 and NFSv4. (A counting sketch follows this list.)
---------------------------------------------------------------------
addevent.nfs.aops.readpage
Probe Point:
nfs_readpage
Description :
Read the page; only fires when a previous async read operation
failed
---------------------------------------------------------------------
addevent.nfs.aops.readpages
Probe Point:
nfs_readpages
Description:
Fires during readahead; reads several pages at once
---------------------------------------------------------------------
addevent.nfs.aops.writepage
Probe Point:
nfs_writepage
Description:
Write a mapped page to the server
---------------------------------------------------------------------
addevent.nfs.aops.writepages
Probe Point:
nfs_writepages
Description:
Write several dirty pages to the server at once
---------------------------------------------------------------------
addevent.nfs.aops.prepare_write
Probe Point:
nfs_prepare_write
Description:
Prepare a page for writing. Look for a request corresponding
to the page. If there is one, and it belongs to another aops,
we flush it out before we try to copy anything into the page.
Also do the same if we find a request from an existing
dropped page.
---------------------------------------------------------------------
addevent.nfs.aops.commit_write
Probe Point:
nfs_commit_write
Description :
Update and possibly write a cached page of an NFS file
_____________________________________________________________________
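
As a rough usage sketch (again assuming an "nfs" module build, and not
reflecting LKET's actual event format), the address space operations
above could simply be counted per function to see how often each path
is taken:

    global aops_calls

    probe module("nfs").function("nfs_readpage") ?,
          module("nfs").function("nfs_readpages") ?,
          module("nfs").function("nfs_writepage") ?,
          module("nfs").function("nfs_writepages") ?,
          module("nfs").function("nfs_prepare_write") ?,
          module("nfs").function("nfs_commit_write") ?
    {
            # count calls per probed address space operation
            aops_calls[probefunc()]++
    }

    probe end
    {
            foreach (fn in aops_calls-)   # highest counts first
                    printf("%-20s %d\n", fn, aops_calls[fn])
    }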


====================== NFS Server Side Trace Hooks ==================

The following event hooks are used to trace NFS server activities.
They are divided into three groups.

Group1:
It contains one event hook, which probes nfsd_dispatch.
---------------------------------------------------------------------
addevent.nfsd.dispatch
Probe Point:
nfsd_dispatch
Description:
Decode the arguments received from the client, call the procedure
handler, and encode the result
______________________________________________________________________
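
As an illustrative sketch only (assuming the server is built as the
"nfsd" module), entry and return of nfsd_dispatch could be paired to
measure how long the server spends on each request:

    global dispatch_start

    probe module("nfsd").function("nfsd_dispatch")
    {
            dispatch_start[tid()] = gettimeofday_us()
    }

    probe module("nfsd").function("nfsd_dispatch").return
    {
            if (tid() in dispatch_start) {
                    # time from decode through the handler to encode
                    printf("nfsd_dispatch by %s(%d): %d us\n",
                           execname(), tid(),
                           gettimeofday_us() - dispatch_start[tid()])
                    delete dispatch_start[tid()]
            }
    }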
Group2:
It contains four event hooks. The functions probed are called by the
related procedure handlers. All of these functions are common to NFSv2,
NFSv3 and NFSv4. (A timing sketch follows this group.)
---------------------------------------------------------------------
addevent.nfsd.read
Probe Point:
nfsd_read
Description:
It does the "real" work of read
---------------------------------------------------------------------
addevent.nfsd.write
Probe Point:
nfsd_write
Description:
It does the "real " work of write
---------------------------------------------------------------------
addevent.nfsd.open
Probe Point:
nfsd_open
Description:
Open an existing file or directory.
---------------------------------------------------------------------
addevent.nfsd.close
Probe Point:
nfsd_close
Description:
Close an existing file or directory
_____________________________________________________________________
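
A hedged sketch of how the common read/write helpers above might be
timed with SystemTap statistical aggregates (again assuming an "nfsd"
module build; this is not LKET's own probe definition):

    global svc_start, svc_time

    probe module("nfsd").function("nfsd_read"),
          module("nfsd").function("nfsd_write")
    {
            svc_start[tid(), probefunc()] = gettimeofday_us()
    }

    probe module("nfsd").function("nfsd_read").return,
          module("nfsd").function("nfsd_write").return
    {
            if ([tid(), probefunc()] in svc_start) {
                    svc_time[probefunc()] <<<
                            gettimeofday_us() - svc_start[tid(), probefunc()]
                    delete svc_start[tid(), probefunc()]
            }
    }

    probe end
    {
            # per-function count, average and maximum service time
            foreach (fn in svc_time)
                    printf("%s: n=%d avg=%d us max=%d us\n", fn,
                           @count(svc_time[fn]), @avg(svc_time[fn]),
                           @max(svc_time[fn]))
    }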
Group3:
It contains eight event hooks, which probe the procedure handlers.
(A per-procedure counting sketch follows this group.)
---------------------------------------------------------------------
addevent.nfsd.proc2.read
Probe Point:
nfsd_proc_read
Description:
Read data from file (NFSV2)
---------------------------------------------------------------------
addevent.nfsd.proc3.read
Probe Point:
nfsd3_proc_read
Description:
Read data from file (NFSV3)
---------------------------------------------------------------------
addevent.nfsd.proc4.read
Probe Point:
nfsd4_read
Description:
Check stateid and prepare for reading
---------------------------------------------------------------------
addevent.nfsd.proc2.write
Probe Point:
nfsd_proc_write
Description:
Write data to file (NFSV2)
---------------------------------------------------------------------
addevent.nfsd.proc3.write
Probe Point:
nfsd3_proc_write
Description:
Write data to file (NFSV3)
---------------------------------------------------------------------
addevent.nfsd.proc4.write
Probe Point:
nfsd4_write
Description:
Check stateid and write data to file
---------------------------------------------------------------------
addevent.nfsd.proc4.open
Probe Point:
nfsd4_open
Description:
Check stateid and open file
---------------------------------------------------------------------
addevent.nfsd.proc4.compound
Probe Point:
nfsd4_proc_compound
Description:
Call the appropriate procedures according to the client's request
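
Purely as an illustration (function availability varies with kernel
configuration, hence the "?" suffixes), the version-specific handlers
above could be tallied to compare NFSv2/v3/v4 traffic on a server:

    global proc_count

    probe module("nfsd").function("nfsd_proc_read") ?,
          module("nfsd").function("nfsd_proc_write") ?,
          module("nfsd").function("nfsd3_proc_read") ?,
          module("nfsd").function("nfsd3_proc_write") ?,
          module("nfsd").function("nfsd4_read") ?,
          module("nfsd").function("nfsd4_write") ?,
          module("nfsd").function("nfsd4_open") ?,
          module("nfsd").function("nfsd4_proc_compound") ?
    {
            proc_count[probefunc()]++
    }

    probe timer.s(10)
    {
            printf("---- last 10 seconds ----\n")
            foreach (fn in proc_count-)
                    printf("%-22s %d\n", fn, proc_count[fn])
            delete proc_count
    }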



2006-07-26 15:10:21

by Chuck Lever

Subject: Re: [ltc-perf] draft of nfs event hook

Xue Peng ---

I've only glanced at your specification, but it occurs to me that it
would be helpful for reviewers to understand your intent in adding the
hooks where you did. Do you have a design document, even a short one?
Or can you discuss your decisions on the list with us?

For example, why hook all three of "setup" "read/write" and "done" ?

And, what value do you hope these hooks will add over and above the
performance metrics that I added in 2.6.17 ? We see the SystemTap
hooks as an opportunity to make more specialized (and potentially more
run-time expensive) observations than the performance metrics.

Are there other file systems that have hooks in them? Did you add
hooks in conventional/standard places for all file systems?

Thanks!


--
"We who cut mere stones must always be envisioning cathedrals"
-- Quarry worker's creed


2006-07-27 13:57:05

by Li Guanglei

Subject: Re: [ltc-perf] draft of nfs event hook

Hi,

The NFS trace hooks we are working on will be part of the trace
hooks of LKET, which is a system trace tool that we mainly use for
performance analysis.

LKET is a dynamic trace facility based on SystemTap. It is
implemented as a SystemTap tapset library and has already been
integrated into SystemTap. For more information on LKET, you can refer to:

http://sourceware.org/systemtap/man5/lket.5.html

When we started working on the NFS trace hooks, we realized it is
not an easy task. Although we use NFS in our daily work, we don't have
much knowledge about the NFS protocol details or its implementation
inside the kernel. So I divided the work into two steps. In the first
step I need to get a list of trace points, and in the second step I
need to determine what trace data is available for each trace hook. In
short, the trace data available for each hook will be derived from the
arguments of the kernel functions being probed.

We read through the kernel source code and chose some functions to
be instrumented. We will trace the entry of these functions and, where
necessary, their return as well. The following is the list of these
functions; please review:

==================== Client Side ==========================

<1> nfs directory operations

All functions from nfs_dir_operations:

const struct file_operations nfs_dir_operations = {
.llseek = nfs_llseek_dir,
.read = generic_read_dir,
.readdir = nfs_readdir,
.open = nfs_opendir,
.release = nfs_release,
.fsync = nfs_fsync_dir,
};

<2> nfs file operations

All functions from nfs_file_operations:

const struct file_operations nfs_file_operations = {
.llseek = nfs_file_llseek,
.read = do_sync_read,
.write = do_sync_write,
.aio_read = nfs_file_read,
.aio_write = nfs_file_write,
.mmap = nfs_file_mmap,
.open = nfs_file_open,
.flush = nfs_file_flush,
.release = nfs_file_release,
.fsync = nfs_fsync,
.lock = nfs_lock,
.flock = nfs_flock,
.sendfile = nfs_file_sendfile,
.check_flags = nfs_check_flags,
};
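
To show what tracing this whole table could look like, a throwaway
SystemTap sketch (my own illustration, not the planned LKET hooks) can
use a wildcard to catch most of the nfs_file_* entry points in one
probe; entries that do not follow that naming convention (do_sync_read,
do_sync_write, nfs_fsync, nfs_lock, ...) would have to be listed
explicitly:

    probe module("nfs").function("nfs_file_*") ?
    {
            # one line per VFS-level file operation on an NFS file
            printf("%d %s(%d) %s\n", gettimeofday_us(), execname(),
                   tid(), probefunc())
    }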

<3> nfs address space operations:
All functions from nfs_file_aops:

struct address_space_operations nfs_file_aops = {
.readpage = nfs_readpage,
.readpages = nfs_readpages,
.set_page_dirty = __set_page_dirty_nobuffers,
.writepage = nfs_writepage,
.writepages = nfs_writepages,
.prepare_write = nfs_prepare_write,
.commit_write = nfs_commit_write,
.invalidatepage = nfs_invalidate_page,
.releasepage = nfs_release_page,
#ifdef CONFIG_NFS_DIRECTIO
.direct_IO = nfs_direct_IO,
#endif
};

<4> NFS RPC procedures:

All functions from nfs_v[2,3,4]_clientops:
I only list the nfs_v3 rpc procedures:
struct nfs_rpc_ops nfs_v3_clientops = {
.version = 3, /* protocol version */
.dentry_ops = &nfs_dentry_operations,
.dir_inode_ops = &nfs3_dir_inode_operations,
.file_inode_ops = &nfs3_file_inode_operations,
.getroot = nfs3_proc_get_root,
.getattr = nfs3_proc_getattr,
.setattr = nfs3_proc_setattr,
.lookup = nfs3_proc_lookup,
.access = nfs3_proc_access,
.readlink = nfs3_proc_readlink,
.read = nfs3_proc_read,
.write = nfs3_proc_write,
.commit = nfs3_proc_commit,
.create = nfs3_proc_create,
.remove = nfs3_proc_remove,
.unlink_setup = nfs3_proc_unlink_setup,
.unlink_done = nfs3_proc_unlink_done,
.rename = nfs3_proc_rename,
.link = nfs3_proc_link,
.symlink = nfs3_proc_symlink,
.mkdir = nfs3_proc_mkdir,
.rmdir = nfs3_proc_rmdir,
.readdir = nfs3_proc_readdir,
.mknod = nfs3_proc_mknod,
.statfs = nfs3_proc_statfs,
.fsinfo = nfs3_proc_fsinfo,
.pathconf = nfs3_proc_pathconf,
.decode_dirent = nfs3_decode_dirent,
.read_setup = nfs3_proc_read_setup,
.read_done = nfs3_read_done,
.write_setup = nfs3_proc_write_setup,
.write_done = nfs3_write_done,
.commit_setup = nfs3_proc_commit_setup,
.commit_done = nfs3_commit_done,
.file_open = nfs_open,
.file_release = nfs_release,
.lock = nfs3_proc_lock,
.clear_acl_cache = nfs3_forget_cached_acls,
};

LKET already has syscall and I/O syscall trace hooks. So with the
above trace hooks, LKET could trace NFS operations at different layers
(see the sketch after this list):
--> Syscall
--> struct file_operations
--> struct address_space_operations
--> struct nfs_rpc_ops
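
As a sketch of what cross-layer tracing of a single read path might
look like (illustrative only; the exact function choices here are my
assumptions), one script can watch the file_operations,
address_space_operations and nfs_rpc_ops layers at once and let the
output ordering show the layering:

    probe module("nfs").function("nfs_file_read") ?,
          module("nfs").function("nfs_readpages") ?,
          module("nfs").function("nfs_readpage") ?,
          module("nfs").function("nfs3_proc_read_setup") ?,
          module("nfs").function("nfs3_proc_read") ?
    {
            printf("%d us tid=%d %s\n", gettimeofday_us(), tid(),
                   probefunc())
    }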

======================= Server Side =============================

<1> nfsd_dispatch
This is the NFS dispatching function, sitting on top of RPC.

<2> NFS RPC procedures:

For NFSv4, it will be nfsd4_proc_compound

For NFSv2, NFSv3, it will be the functions from nfsd_procedures[2,3]

Here is the list for NFSv3; the NFSv2 list is almost the same:
nfsd3_proc_null,
nfsd3_proc_getattr,
nfsd3_proc_setattr,
nfsd3_proc_lookup,
nfsd3_proc_access,
nfsd3_proc_readlink,
nfsd3_proc_read,
nfsd3_proc_write,
nfsd3_proc_create,
nfsd3_proc_mkdir,
nfsd3_proc_symlink,
nfsd3_proc_mknod,
nfsd3_proc_remove,
nfsd3_proc_rmdir,
nfsd3_proc_rename,
nfsd3_proc_link,
nfsd3_proc_readdir,
nfsd3_proc_readdirplus,
nfsd3_proc_fsstat,
nfsd3_proc_fsinfo,
nfsd3_proc_pathconf,
nfsd3_proc_commit,

<3> NFSD file VFS operations

The functions nfsd_xxx from "fs/nfsd/vfs.c"

With the above server-side trace hooks, LKET could trace NFS
operations at different layers:

nfsd_dispatch -->
--> NFS RPC Procedures
--> NFS VFS file operations


What I didn't list for NFS includes authentication, the NFSv4
callback, and RPC (I prefer to use a separate set of trace hooks for
RPC). I am not sure whether these operations also need to be traced.
If I missed some important functions or listed some redundant ones,
please feel free to let me know. Any comments will be highly
appreciated.

Thanks.

Li Xuepeng's earlier post to [email protected] (the first
message in this thread) contains some implementation details; its trace
point list is a subset of the above.

- Guanglei


2006-07-27 14:45:28

by Christoph Hellwig

Subject: Re: [ltc-perf] draft of nfs event hook

On Wed, Jul 26, 2006 at 05:13:09PM +0800, Xue Peng Li wrote:
>
> Hi folks,
>
> I am working on NFS trace hooks for SystemTap/LKET. These trace
> hooks could be used for performance analyzing which will trace both
> NFS client and server side activities.
>
> At the first step I need make sure that the trace hooks I defined
> are appropriate and every trace hook probes the right places inside
> the Kernel. So I will be appreciated if you could help me review the
> following trace hooks.

Please implement proper static trace points based on something similar
to blktrace instead of wasting your time on systemtap.



2006-07-27 15:29:39

by Chuck Lever

Subject: Re: [NFS] [ltc-perf] draft of nfs event hook

On 7/27/06, Li Guanglei <[email protected]> wrote:
> When we started working on NFS trace hooks, we realized it is not
> an easy task. Although we use NFS in daily work but we don't have much
> knowledge about the NFS protocol details and its implementation inside
> the Kernel. So I divided the work into two steps. At the first step I
> need get a list of trace points. And at the second step I need to make
> sure what trace data is available for each trace hook. In a short, the
> trace data available for each hook will be derived from the arguments
> of the kernel functions being probed.
>
> We read through the Kernel source code and chose some functions to
> be instrumented. We will trace the entry of these functions and if
> necessary, the return of them will also be traced. The following is
> the list of these functions, please take a review:

Have you done this with a local file system? I assume yes, and that
you just described the general approach you have taken with other file
systems. I think getting the same kind of data and trace points from
the NFS client as you added to local file systems would be good.

Capturing VFS and address space entry points is definitely useful and
is similar to local file systems. At the bottom of the NFS client is
the RPC client, and it acts just like the block I/O layer does for
local file systems. Would you consider adding trace points in the
LKET for the RPC client and server?






--
"We who cut mere stones must always be envisioning cathedrals"
-- Quarry worker's creed

2006-07-27 17:01:09

by Jose R. Santos

Subject: Re: [NFS] [ltc-perf] draft of nfs event hook

Chuck Lever wrote:
> On 7/27/06, Li Guanglei <[email protected]> wrote:
>> When we started working on NFS trace hooks, we realized it is not
>> an easy task. Although we use NFS in daily work but we don't have much
>> knowledge about the NFS protocol details and its implementation inside
>> the Kernel. So I divided the work into two steps. At the first step I
>> need get a list of trace points. And at the second step I need to make
>> sure what trace data is available for each trace hook. In a short, the
>> trace data available for each hook will be derived from the arguments
>> of the kernel functions being probed.
>>
>> We read through the Kernel source code and chose some functions to
>> be instrumented. We will trace the entry of these functions and if
>> necessary, the return of them will also be traced. The following is
>> the list of these functions, please take a review:
>
> Have you done this with a local file system? I assume yes, and that
> you just described the general approach you have taken with other file
> systems. I think getting the same kind of data and trace points from
> the NFS client as you added to local file systems would be good.

There is someone already working on a tapset for ext3 and we are waiting
for that tapset to be available before we look into how to add trace
hooks to ext3. We plan to take a similar approach to the NFS hooks.
>
> Capturing VFS and address space entry points is definitely useful and
> is similar to local file systems. At the bottom of the NFS client is
> the RPC client, and it acts just like the block I/O layer does for
> local file systems. Would you consider adding trace points in the
> LKET for the RPC client and server?

Definitely. We've already added a trace hook on the NFS server dispatch
entry and exit, and we are looking at adding even more. While our focus
has mostly been on adding hooks that assist in performance analysis,
the tool can be used for debugging purposes as well. If you have
specific places where you would like to see trace hooks inserted, we
are definitely interested.

-JRS

2006-07-27 22:47:02

by Li Guanglei

Subject: Re: [NFS] [ltc-perf] draft of nfs event hook

>
> Have you done this with a local file system? I assume yes, and that
> you just described the general approach you have taken with other file
> systems. I think getting the same kind of data and trace points from
> the NFS client as you added to local file systems would be good.
>
> Capturing VFS and address space entry points is definitely useful and
> is similar to local file systems. At the bottom of the NFS client is
> the RPC client, and it acts just like the block I/O layer does for
> local file systems. Would you consider adding trace points in the
> LKET for the RPC client and server?
>
>>
>> What I didn't list about NFS operations includes authentication,
>> NFSv4 callback and RPC(I prefer to use a separate set of trace hooks
>> for RPC). I am not sure if these operations are also required to be
>> traced. If I missed some important functions or I listed some
>> redundant functions, please feel free to let me know. Any comments
>> will be highly appreciated.
>>

I didn't list RPC here because RPC is not used only by NFS; I need
another set of RPC trace hooks to cover the RPC server- and client-side
operations. That will be my plan for the next set of trace hooks.
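
If it helps the discussion, a very rough sketch of what separate
RPC-client hooks could look like (assuming the RPC code is built as the
"sunrpc" module; these are just generic sunrpc entry points, not a
proposed LKET hook list):

    global rpc_calls

    probe module("sunrpc").function("rpc_call_sync") ?,
          module("sunrpc").function("rpc_call_async") ?
    {
            rpc_calls[probefunc()]++
    }

    probe end
    {
            foreach (fn in rpc_calls)
                    printf("%s: %d calls\n", fn, rpc_calls[fn])
    }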

- Guanglei

2006-07-28 03:33:07

by Xue Peng Li

Subject: Re: [ltc-perf] draft of nfs event hook



"Chuck Lever" <[email protected]> wrote on 2006-07-26 21:50:42:

> Xue Peng ---
>
> I've only glanced at your specification, but it occurs to me that it
> would be helpful for reviewers to understand your intent of adding the
> hooks where you did. Do you have a design document, even a short one?
> Or can you discuss your decisions on the list with us?

Guanglei posted an updated mail on the NFS trace hooks which included a
link to the LKET manual page.

>
> For example, why hook all three of "setup" "read/write" and "done" ?
>

The read hook is for synchronous operations, while the setup/done pair
is used for asynchronous operations.
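
To make the distinction concrete, here is an illustrative SystemTap
sketch (not LKET code) that simply timestamps the v3 write setup and
completion paths. Note that the *_done callbacks run in the rpciod
context rather than in the submitting task, so pairing a setup with its
matching done would need a per-request key (for example the RPC task
pointer), which this sketch does not attempt:

    probe module("nfs").function("nfs3_proc_write_setup") ?,
          module("nfs").function("nfs3_write_done") ?
    {
            printf("%d us %s in %s(%d)\n", gettimeofday_us(),
                   probefunc(), execname(), tid())
    }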


> And, what value do you hope these hooks will add over and above the
> performance metrics that I added in 2.6.17 ? We see the SystemTap
> hooks as an opportunity to make more specialized (and potentially more
> run-time expensive) observations than the performance metrics.
These hooks can be used to trace NFS activities, which form a new hook
group.
>
> Are there other file systems that have hooks in them? Did you add
> hooks in conventional/standard places for all file systems?

Thomas Zanussi from IBM is working on ext3 hooks. Hooks for other file
systems may be our next step.

>
> Thanks!
>
> On 7/26/06, Xue Peng Li <[email protected]> wrote:
> >
> >
> >
> > Hi folks,
> >
> > I am working on NFS trace hooks for SystemTap/LKET. These trace
> > hooks could be used for performance analyzing which will trace both
> > NFS client and server side activities.
> >
> > At the first step I need make sure that the trace hooks I defined
> > are appropriate and every trace hook probes the right places inside
> > the Kernel. So I will be appreciated if you could help me review the
> > following trace hooks.
> >
> >
> > Thanks
> >
> > ======================== NFS Client Side Trace Hooks =================
> >
> > The following event hooks are used to trace nfs client activities.
> > These event hooks are divided into two groups. Probe Point and
> > Description is given for each event hook.
> >
> > Group1:
> > It contains 15 event hooks, which are used to probe Client-side
> > NFS procedures.
> > ---------------------------------------------------------------------
> > addevent.nfs.proc.read_setup
> > Probe Point:
> > nfs_proc_read_setup,nfs3_proc_read_setup,
> > nfs4_proc_read_setup
> > Description:
> > Setup a rpc task to prepare for reading
> > ---------------------------------------------------------------------
> > addevent.nfs.proc.read_done
> > Probe Point:
> > nfs_proc_read_done,nfs3_proc_read_done,
> > nfs4_proc_read_done
> > Description:
> > Fires when receive a read reply from server,it is used to
> > refresh the inode on client
> > ---------------------------------------------------------------------
> > addevnet.nfs.proc.read
> > Probe Point:
> > nfs_proc_read,nfs3_proc_read,nfs4_proc_read
> > Description:
> > Send a read operation to server,and refresh local inode after
> > receive reply from server
> > ---------------------------------------------------------------------
> > addevent.nfs.proc.write_setup
> > Probe Point:
> > nfs_proc_write_setup,nfs3_proc_write_setup,nfs4_proc_write_setup
> > Description:
> > ---------------------------------------------------------------------
> > addevent.nfs.proc.write
> > Probe Point:
> > nfs_proc_write,nfs3_proc_write,nfs4_proc_write
> > Description:
> > Send a write operation to server
> > ---------------------------------------------------------------------
> > addevent.nfs.proc.write_done
> > Probe Point:
> > nfs_write_done,nfs3_write_done,nfs4_write_done
> > Description:
> > Fires when receive a write reply from server,it is used to
> > refresh the inode on client
> > ---------------------------------------------------------------------
> > addevent.nfs.proc.open
> > Probe Point:
> > nfs_open
> > Description:
> > Allocate file read/write context information
> > ---------------------------------------------------------------------
> > addevent.nfs.proc.release
> > Probe Point:
> > nfs_release
> > Description:
> > Release file read/write context information
> > ---------------------------------------------------------------------
> > addevent.nfs.proc.create
> > Probe Point:
> > nfs_create
> > Description:
> > Create a new file or dir on server
> > _____________________________________________________________________
> >
> > Group2:
> > This group includes the event hooks which probe NFS address space
> > operation related function.All the functions are common in NFSV2,
> > NFSV3,NFSV4.
> > ---------------------------------------------------------------------
> > addevent.nfs.aops.readpage
> > Probe Point:
> > nfs_readpage
> > Description :
> > Read the page ,only fires when a previous async read operation
> > failed
> > ---------------------------------------------------------------------
> > addevent.nfs.aops.readpages
> > Probe Point:
> > nfs_readpages
> > Description:
> > Fires when in readahead way,read several pages once
> > ---------------------------------------------------------------------
> > addevent.nfs.aops.writepage
> > Probe Point:
> > nfs_writepage
> > Description:
> > Write an mapped page to the server
> > ---------------------------------------------------------------------
> > addevent.nfs.aops.writepages
> > Probe Point:
> > nfs_writepages
> > Description:
> > Write several dirty pages to the serve once
> > ---------------------------------------------------------------------
> > addevent.nfs.aops.prepare_write
> > Probe Point:
> > prepare_write
> > Description:
> > Prepare a page for writing. Look for a request corresponding
> > to the page. If there is one, and it belongs to another aops,
> > we flush it out before we try to copy anything into the page.
> > Also do the same if we find a request from an existing
> > dropped page.
> > ---------------------------------------------------------------------
> > addevent.nfs.aops.commit_write
> > Probe Point:
> > nfs_commit_write
> > Description :
> > Update and possibly write a cached page of an NFS aops
> > _____________________________________________________________________
> >
> >
> > ====================== NFS Server Side Trace Hooks ==================
> >
> > The following event hooks are used to traced nfs server activities.
> > The event hooks are divided into three group.
> >
> > Group1:
> > It contains one event hook,which probes nfsd_dispatch
> > ---------------------------------------------------------------------
> > addevent.nfsd.dispatch
> > Probe Point:
> > nfsd_dispatch
> > Description:
> > Decode the arguments received from the client, call the procedure
> > handler, and encode the result
> > ______________________________________________________________________
> > Group2:
> > It contains four event hooks. The functions probed will be called by
> > the related procedure handlers. All the functions are common to NFSV2,
> > NFSV3 and NFSV4.
> > ---------------------------------------------------------------------
> > addevent.nfsd.read
> > Probe Point:
> > nfsd_read
> > Description:
> > It does the "real" work of read
> > ---------------------------------------------------------------------
> > addevent.nfsd.write
> > Probe Point:
> > nfsd_write
> > Description:
> > It does the "real " work of write
> > ---------------------------------------------------------------------
> > addevent.nfsd.open
> > Probe Point:
> > nfsd_open
> > Description:
> > Open an existing file or directory.
> > ---------------------------------------------------------------------
> > addevent.nfsd.close
> > Probe Point:
> > nfsd_close
> > Description:
> > Close an existing file or directory
> > _____________________________________________________________________
> > Group3:
> > It contains eight event hooks,which probe procedure handlers.
> > ---------------------------------------------------------------------
> > addevent.nfsd.proc2.read
> > Probe Point:
> > nfsd_proc_read
> > Description:
> > Read data from file (NFSV2)
> > ---------------------------------------------------------------------
> > addevent.nfsd.proc3.read
> > Probe Point:
> > nfsd3_proc_read
> > Description:
> > Read data from file (NFSV3)
> > ---------------------------------------------------------------------
> > addevent.nfsd.proc4.read
> > Probe Point:
> > nfsd4_read
> > Description:
> > Check stateid and prepare for reading
> > ---------------------------------------------------------------------
> > addevent.nfsd.proc2.write
> > Probe Point:
> > nfsd_proc_write
> > Description:
> > Write data to file (NFSV2)
> > ---------------------------------------------------------------------
> > addevent.nfsd.proc3.write
> > Probe Point:
> > nfsd3_proc_write
> > Description:
> > Write data to file (NFSV3)
> > ---------------------------------------------------------------------
> > addevent.nfsd.proc4.write
> > Probe Point:
> > nfsd4_write
> > Description:
> > Check stateid and write data to file
> > ---------------------------------------------------------------------
> > addevent.nfsd.proc4.open
> > Probe Point:
> > nfsd4_open
> > Description:
> > Check stateid and open file
> > ---------------------------------------------------------------------
> > addevent.nfsd.proc4.compound
> > Probe Point:
> > nfsd4_proc_compound
> > Description:
> > Call different procedures according to client request
> >
> >
> >
> >
> >
>
>
> --
> "We who cut mere stones must always be envisioning cathedrals"
> -- Quarry worker's creed



2006-08-22 10:08:05

by Xue Peng Li

[permalink] [raw]
Subject: Re: [ltc-perf] draft of nfs event hook

Hi folks,
This is the nfsd tapset, which includes the nfsd procedure stubs on the
server side and the nfsd functions called by the corresponding procedure
stubs. In this tapset I only probe some of the procedure stubs and nfsd
functions, not all of them. This is the last nfs tapset; I have now sent
out all of the nfs tapsets.

Unlike NFSV2 and NFSV3, there is just one procedure stub for NFSV4,
nfsd4_proc_compound, which calls different functions according to the
RPC request. So there is only one probe point at the proc level for
NFSV4. Some nfsd functions for NFSV4, such as nfsd4_read and
nfsd4_write, are static inline functions and cannot be probed.
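
For illustration, here is a minimal, untested sketch of how that single
NFSV4 probe point could be used on its own. It assumes the nfsd module
is loaded and that nfsd4_proc_compound is not inlined on the running
kernel; the trailing "?" keeps the probe optional.

global compounds

probe module("nfsd").function("nfsd4_proc_compound") ?
{
        /* count NFSv4 compound dispatches per process */
        compounds[execname()]++
}

probe end
{
        foreach (name in compounds-)
                printf("%-16s %d compound calls\n", name, compounds[name])
}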

Please tell me if you have any questions/comments.

BTW: My kernel is 2.6.17.2; could anyone tell me whether NFSV4 is
available in this kernel?



Thanks.
Best Regards,

Li Xuepeng

Linux Performance, China Systems & Technology Lab
China Development Labs, Beijing
Email: [email protected]

[email protected] wrote on 2006-08-16 16:56:32:

> Hi folks
> This is another nfs tapset, for the nfs procedure stubs on the client
> side. I will write the nfs procedure stubs for the server side as the
> next step.
>
> If you have any questions/suggestions/comments, please tell me.
>
>
>
> Thanks
> [attachment "nfs_proc.stp" deleted by Xue Peng Li/China/Contr/IBM]


Attachments:
nfsd.stp (26.97 kB)

2006-08-11 01:57:29

by Xue Peng Li

[permalink] [raw]
Subject: Re: [ltc-perf] draft of nfs event hook

%{
#include <linux/kernel.h>
#include <linux/nfs_fs.h>
%}
/*Get struct nfs_inode from struct inode*/
%{
struct nfs_inode * __nfs_i (struct inode *inode)
{
struct nfs_inode * nfsi = NFS_I(inode);

return (nfsi);
}
%}

/*Get cache_validity flag from struct inode*/
function __nfsi_cache_valid:long(inode:long)
%{
struct inode * inode = (struct inode *)(THIS->inode);
struct nfs_inode * nfsi;

nfsi = __nfs_i(inode);
THIS->__retvalue = nfsi->cache_validity;
%}

/*Get read_cache_jiffies from struct inode*/
function __nfsi_rcache_time :long (inode:long)
%{
struct inode * inode = (struct inode *)(THIS->inode);
struct nfs_inode * nfsi = (struct nfs_inode *) __nfs_i(inode);

THIS->__retvalue = nfsi->read_cache_jiffies;
%}

/*Get attrtimeo from struct inode*/
function __nfsi_attr_time :long (inode:long)
%{
struct inode * inode = (struct inode *)(THIS->inode);
struct nfs_inode * nfsi = (struct nfs_inode *) __nfs_i(inode);

THIS->__retvalue = nfsi->attrtimeo;
%}

/*Get ndirty from struct inode*/
function __nfsi_ndirty:long (inode:long)
%{
struct inode *inode = (struct inode *)(THIS->inode);
struct nfs_inode *nfsi = NFS_I(inode);

THIS->__retvalue = nfsi->ndirty;
%}

/*Get rsize from struct inode*/
function __nfs_server_rsize:long (inode:long)
%{
struct inode * inode = (struct inode *)(THIS->inode);

THIS->__retvalue = NFS_SERVER(inode)->rsize;
%}

/*Get wsize from struct inode*/
function __nfs_server_wsize:long (inode:long)
%{
struct inode * inode = (struct inode *)(THIS->inode);

THIS->__retvalue = NFS_SERVER(inode)->wsize;
%}

/*Get rpages from struct inode*/
function __nfs_rpages:long (inode:long)
%{
struct inode * inode = (struct inode *)(THIS->inode);

THIS->__retvalue = NFS_SERVER(inode)->rpages;
%}

/*Get wpages from struct inode*/
function __nfs_wpages:long(inode:long)
%{
struct inode *inode = (struct inode*)(THIS->inode);
THIS->__retvalue = NFS_SERVER(inode)->wpages;
%}

/*Get struct inode from struct page*/
function __p2i :long(page:long)
%{
struct page *page = (struct page *)(THIS->page);
THIS->__retvalue = (long)page->mapping->host;
%}

/*Get i_flags from struct page*/
function __p2i_flag : long (page:long)
%{
struct page *page = (struct page *) (THIS->page);
THIS->__retvalue = page->mapping->host->i_flags;
%}

/*Get i_state from struct page*/
function __p2i_state :long (page:long)
%{
struct page *page = (struct page *) (THIS->page);
THIS->__retvalue = page->mapping->host->i_state;
%}

/*Get i_size from struct page*/
function __p2i_size :long (page:long)
%{
struct page *page = (struct page *) (THIS->page);
THIS->__retvalue = page->mapping->host->i_size;
%}

/*Get s_flags from struct page*/
function __p2sb_flag:long (page:long)
%{
struct page *page = (struct page *)(THIS->page);
THIS->__retvalue = page->mapping->host->i_sb->s_flags;
%}

/*Dereference a loff_t pointer to get the current file position*/
function __d_loff_t :long (ppos :long)
%{
loff_t * ppos = (loff_t *) (THIS->ppos);

THIS->__retvalue =(long) *ppos;
%}

probe nfs.fop.entries = nfs.fop.llseek,
nfs.fop.read,
nfs.fop.write,
nfs.fop.aio_read,
nfs.fop.aio_write,
nfs.fop.mmap,
nfs.fop.open,
nfs.fop.flush,
nfs.fop.release,
nfs.fop.fsync,
nfs.fop.lock,
nfs.fop.sendfile
{
}

probe nfs.fop.entries.return = nfs.fop.llseek.return,
nfs.fop.read.return,
nfs.fop.write.return,
nfs.fop.aio_read.return,
nfs.fop.aio_write.return,
nfs.fop.mmap.return,
nfs.fop.open.return,
nfs.fop.flush.return,
nfs.fop.release.return,
nfs.fop.fsync.return,
nfs.fop.lock.return,
nfs.fop.sendfile.return
{
}
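
/* Usage sketch (an illustration, not part of the tapset itself): save the
 * following as a separate script and run it with this file on the tapset
 * search path. It counts NFS file-operation events by probe name and
 * prints a summary on exit.
 */
global fop_calls

probe nfs.fop.entries
{
        /* "name" is exported by each alias below */
        fop_calls[name]++
}

probe end
{
        foreach (n in fop_calls-)
                printf("%-25s %d\n", n, fop_calls[n])
}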

/*probe nfs.fop.llseek
*
* Fires when an llseek operation is performed on nfs; it probes
* the llseek file operation of nfs
*
* Arguments:
* dev : device identifier
* ino : inode number
* offset : the offset to which the file will be repositioned
* origin : the position to seek from. The possible values are:
* SEEK_SET
* The offset is set to offset bytes.
* SEEK_CUR
* The offset is set to its current location plus offset bytes.
* SEEK_END
* The offset is set to the size of the file plus offset bytes.
*
*/
probe nfs.fop.llseek = kernel.function ("nfs_file_llseek") ?,
module("nfs").function("nfs_file_llseek") ?
{
dev = $filp->f_dentry->d_inode->i_sb->s_dev
ino = $filp->f_dentry->d_inode->i_ino
maxbyte = $filp->f_dentry->d_inode->i_sb->s_maxbytes
offset = $offset
origin = $origin

name = "nfs.fop.llseek"
argstr = sprintf("%d, %d", offset, origin)
}

probe nfs.fop.llseek.return = kernel.function ("nfs_file_llseek").return ?,
module("nfs").function("nfs_file_llseek").return ?
{
name = "nfs.fop.llseek.return"
retstr = sprintf("%d", $return)
}
/*probe nfs.fop.read
*
* Fires when a read operation is performed on nfs; it probes
* the read file operation of nfs
*
* Arguments:
*
*
*/
probe nfs.fop.read = vfs.do_sync_read
{
name = "nfs.fop.read"
}

probe nfs.fop.read.return = vfs.do_sync_read.return
{
name = "nfs.fop.read.return"
}

/*probe nfs.fop.write
*
* Fires when a write operation is performed on nfs; it probes
* the write file operation of nfs
*
* Arguments:
*
*
*/

probe nfs.fop.write = vfs.do_sync_write
{
name = "nfs.fop.write"
}

probe nfs.fop.write.return = vfs.do_sync_write.return
{
name = "nfs.fop.write.return"
}

/*probe nfs.fop.aio_read
*
* It probes the aio_read file operation of nfs
*
* Arguments:
* dev : device identifier
* ino : inode number
* count : read bytes
* pos : current position of file
* buf : the address of buf in user space
* parent_name : parent dir name
* file_name : file name
* cache_valid : cache related bit mask flag
* cache_time : when we started read-caching this inode
* attr_time : how long the cached information is assumed
* to be valid.
* We need to revalidate the cached attrs for this inode if
*
* jiffies - read_cache_jiffies > attrtimeo
*/
probe nfs.fop.aio_read = kernel.function ("nfs_file_read") ?,
module("nfs").function("nfs_file_read") ?
{
dev = $iocb->ki_filp->f_dentry->d_inode->i_sb->s_dev
ino = $iocb->ki_filp->f_dentry->d_inode->i_ino

count = $count
pos = $pos
buf = $buf

parent_name = kernel_string($iocb->ki_filp->f_dentry->d_parent->d_name->name)
file_name = kernel_string($iocb->ki_filp->f_dentry->d_name->name)


cache_valid = __nfsi_cache_valid($iocb->ki_filp->f_dentry->d_inode)
cache_time = __nfsi_rcache_time($iocb->ki_filp->f_dentry->d_inode)
attr_time = __nfsi_attr_time($iocb->ki_filp->f_dentry->d_inode)

flag = $iocb->ki_filp->f_flags

name = "nfs.fop.aio_read"
argstr = sprintf("%p,%d, %d",buf,count, pos)

size = count
units = "bytes"
}


probe nfs.fop.aio_read.return = kernel.function ("nfs_file_read").return ?,
module("nfs").function("nfs_file_read").return ?
{
name = "nfs.fop.aio_read.return"
retstr = sprintf("%d", $return)

if ($return > 0) {
size = $return
units = "bytes"
}
}
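
/* Illustration of the cache arguments exported by nfs.fop.aio_read (a
 * sketch, to be run as a separate script): report reads whose cached
 * attributes look stale, i.e. where
 *     jiffies - read_cache_jiffies > attrtimeo
 * It assumes the standard jiffies() function from the timestamp tapset
 * is available.
 */
probe nfs.fop.aio_read
{
        if (jiffies() - cache_time > attr_time)
                printf("attribute revalidation likely: %s/%s (pid %d)\n",
                       parent_name, file_name, pid())
}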

/*probe nfs.fop.aio_write
*
* It probes the aio_write file operation of nfs
*
* Arguments:
* dev : device identifier
* ino : inode number
* count : number of bytes to write
* pos : offset of the file
* buf : the address of buf in user space
* parent_name : parent dir name
* file_name : file name
*
*/
probe nfs.fop.aio_write = kernel.function("nfs_file_write") ?,
module("nfs").function("nfs_file_write") ?
{
dev = $iocb->ki_filp->f_dentry->d_inode->i_sb->s_dev
ino = $iocb->ki_filp->f_dentry->d_inode->i_ino

count = $count
pos = $pos
buf = $buf

parent_name = kernel_string($iocb->ki_filp->f_dentry->d_parent->d_name->name)
file_name = kernel_string($iocb->ki_filp->f_dentry->d_name->name)

name = "nfs.fop.aio.write"
argstr = sprintf("%p, %d, %d", buf, count, pos)

size = count
units = "bytes"
}

probe nfs.fop.aio_write.return = kernel.function("nfs_file_write").return ?,
module("nfs").function("nfs_file_write").return ?
{
name = "nfs.fop.aio_write.return"
retstr = sprintf("%d", $return)

if ($return > 0) {
size = $return
units = "bytes"
}
}

/*probe nfs.fop.mmap
*
* Fires when an mmap operation is performed on nfs;
* it probes the mmap file operation of nfs
*
* Arguments:
* dev : device identifier
* ino : inode number
* vm_start : start address within vm_mm
* vm_end : the first byte after end address within vm_mm
* vm_flags : vm flags
* parent_name : parent dir name
* file_name : file name
* cache_valid : cache related bit mask flag
* cache_time : when we started read-caching this inode
* attr_time : how long the cached information is assumed
* to be valid.
* We need to revalidate the cached attrs for this inode if
*
* jiffies - read_cache_jiffies > attrtimeo
*/
probe nfs.fop.mmap = kernel.function("nfs_file_mmap") ?,
module("nfs").function("nfs_file_mmap") ?
{
dev = $file->f_dentry->d_inode->i_sb->s_dev
ino = $file->f_dentry->d_inode->i_ino

vm_start = $vma->vm_start
vm_end = $vma->vm_end
vm_flags = $vma->vm_flags

parent_name = kernel_string($file->f_dentry->d_parent->d_name->name)
file_name = kernel_string($file->f_dentry->d_name->name)

cache_valid = __nfsi_cache_valid($file->f_dentry->d_inode)
cache_time = __nfsi_rcache_time($file->f_dentry->d_inode)
attr_time = __nfsi_attr_time($file->f_dentry->d_inode)

name = "nfs.fop.mmap"
argstr = sprintf("0x%x, 0x%x, 0x%x", vm_start, vm_end, vm_flags)
}

probe nfs.fop.mmap.return = kernel.function("nfs_file_mmap").return ?,
module("nfs").function("nfs_file_mmap").return ?
{
name = "nfs.fop.mmap.return"
retstr = sprintf("%d", $return)
}

/*probe nfs.fop.open
*
* Fires when an open operation is performed on nfs;
* it probes the open file operation of nfs
*
* Arguments:
* dev : device identifier
* ino : inode number
* filename : file name
* flag : file flag
* i_size : file length in bytes
*/
probe nfs.fop.open = kernel.function("nfs_file_open") ?,
module("nfs").function("nfs_file_open") ?
{
dev = $filp->f_dentry->d_inode->i_sb->s_dev
ino = $inode->i_ino

filename = kernel_string($filp->f_dentry->d_name->name)
flag = $filp->f_flags

i_size = $inode->i_size

name = "nfs.fop.open"
argstr = sprintf("%d,%d, %s", flag, ino, filename)
}

probe nfs.fop.open.return = kernel.function("nfs_file_open").return ?,
module("nfs").function("nfs_file_open").return ?
{
name = "nfs.fop.open.return"
retstr = sprintf("%d", $return)
}

/*probe nfs.fop.flush
*
* Fires when a flush operation is performed on nfs;
* it probes the flush file operation of nfs
*
* Arguments:
* dev : device identifier
* ino : inode number
* mode : file mode
* ndirty : number of dirty pages
*/
probe nfs.fop.flush = kernel.function("nfs_file_flush") ?,
module("nfs").function("nfs_file_flush") ?
{
dev = $file->f_dentry->d_inode->i_sb->s_dev
ino = $file->f_dentry->d_inode->i_ino;

mode = $file->f_mode
ndirty = __nfsi_ndirty($file->f_dentry->d_inode)

name = "nfs.fop.flush"
argstr = sprintf("%d",ino)
}

probe nfs.fop.flush.return = kernel.function("nfs_file_flush").return ?,
module("nfs").function("nfs_file_flush").return ?
{
name = "nfs.fop.flush.return"
retstr = sprintf("%d",$return)
}

/*probe nfs.fop.release
*
* Fires when a file release operation is performed on nfs;
* it probes the release file operation of nfs
*
* Arguments:
* dev : device identifier
* ino : inode number
* mode : file mode
*/
probe nfs.fop.release = kernel.function("nfs_file_release") ?,
module("nfs").function("nfs_file_release") ?
{
dev = $filp->f_dentry->d_inode->i_sb->s_dev
ino = $inode->i_ino

mode = $filp->f_mode

name = "nfs.fop.release"
argstr = sprintf("%d" , ino)
}

probe nfs.fop.release.return = kernel.function("nfs_file_release").return ?,
module("nfs").function("nfs_file_release").return ?
{
name = "nfs.fop.release.return"
retstr = sprintf("%d", $return)
}

/*probe nfs.fop.fsync
*
* Fires when an fsync operation is performed on nfs;
* it probes the fsync file operation of nfs
*
* Arguments:
* dev : device identifier
* ino : inode number
* ndirty : number of dirty pages
*/
probe nfs.fop.fsync = kernel.function("nfs_fsync") ?,
module("nfs").function("nfs_fsync") ?
{
dev = $file->f_dentry->d_inode->i_sb->s_dev
ino = $file->f_dentry->d_inode->i_ino

ndirty = __nfsi_ndirty($file->f_dentry->d_inode)

name = "nfs.fop.fsync"
argstr = sprintf("%d",ino)
}

probe nfs.fop.fsync.return = kernel.function("nfs_fsync").return ?,
module("nfs").function("nfs_fsync").return ?
{
name = "nfs.fop.fsync.return"
retstr = sprintf("%d", $return)
}

/*probe nfs.fop.lock
*
* Fires when a file lock operation is performed on nfs;
* it probes the lock file operation of nfs
*
* Arguments:
* dev : device identifier
* ino : inode number
* i_mode : file type and access rights
* cmd : lock command
* fl_type : lock type
* fl_flag : lock flags
* fl_start : starting offset of locked region
* fl_end : ending offset of locked region
*/
probe nfs.fop.lock = kernel.function("nfs_lock") ?,
module("nfs").function("nfs_lock") ?
{
dev = $filp->f_dentry->d_inode->i_sb->s_dev
ino = $filp->f_dentry->d_inode->i_ino

i_mode = $filp->f_dentry->d_inode->i_mode
cmd = $cmd

fl_type = $fl->fl_type
fl_flag = $fl->fl_flags
fl_start = $fl->fl_start
fl_end = $fl->fl_end

name = "nfs.fop.lock"
argstr = sprintf("%d,%d",cmd,i_mode)
}

probe nfs.fop.lock.return = kernel.function("nfs_lock").return ?,
module("nfs").function("nfs_lock").return ?
{
name = "nfs.fop.lock.return"
retstr = sprintf("%d",$return)
}


/*probe nfs.fop.sendfile
*
* Fires when a sendfile operation is performed on nfs;
* it probes the sendfile file operation of nfs
*
* Arguments:
* dev : device identifier
* ino : inode number
* count : read bytes
* ppos : current position of file
* cache_valid : cache related bit mask flag
* cache_time : when we started read-caching this inode
* attr_time : how long the cached information is assumed
* to be valid.
* We need to revalidate the cached attrs for this inode if
*
* jiffies - read_cache_jiffies > attrtimeo
*/
probe nfs.fop.sendfile = kernel.function("nfs_file_sendfile") ?,
module("nfs").function("nfs_file_sendfile") ?
{

dev = $filp->f_dentry->d_inode->i_sb->s_dev
ino = $filp->f_dentry->d_inode->i_ino

count = $count
ppos = __d_loff_t($ppos)

cache_valid = __nfsi_cache_valid($filp->f_dentry->d_inode)
cache_time = __nfsi_rcache_time($filp->f_dentry->d_inode)
attr_time = __nfsi_attr_time($filp->f_dentry->d_inode)


name = "nfs.fop.sendfile"
argstr = sprintf("%d,%d", count,ppos)

size = count
units = "bytes"
}

probe nfs.fop.sendfile.return = kernel.function("nfs_file_sendfile").return ?,
module("nfs").function("nfs_file_sendfile").return ?
{
name = "nfs.fopsendfile.return"
retstr = sprintf("%d", $return)

if ($return > 0) {
size = $return
units = "bytes"
}
}

/*probe nfs.fop.check_flags
*
* Fires when the file flags are checked on nfs;
* it probes the check_flags file operation of nfs
*
* Arguments:
* flag : file flag
*/
probe nfs.fop.check_flags = kernel.function("nfs_check_flags") ?,
module("nfs").function("nfs_check_flags") ?
{
flag = $flags

name = "nfs.fop.check_flags"
argstr = sprintf("%d",flag)
}

probe nfs.fop.check_flags.return = kernel.function("nfs_check_flags").return ?,
module("nfs").function("nfs_check_flags").return ?
{
name = "nfs.fop.check_flags.return"
retstr = sprintf("%d",$return)
}

probe nfs.aop.entries = nfs.aop.readpage,
nfs.aop.readpages,
nfs.aop.writepage,
nfs.aop.writepages,
nfs.aop.prepare_write,
nfs.aop.commit_write,
nfs.aop.release_page
{
}

probe nfs.aop.entries.return = nfs.aop.readpage.return,
nfs.aop.readpages.return,
nfs.aop.writepage.return,
nfs.aop.writepages.return,
nfs.aop.prepare_write.return,
nfs.aop.commit_write.return,
nfs.aop.release_page.return
{
}
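
/* Usage sketch (separate script, not part of the tapset): aggregate the
 * size/units values exported by the aop probes to get a rough per-event
 * total of pages or bytes touched.
 */
global aop_total

probe nfs.aop.entries
{
        aop_total[name, units] <<< size
}

probe end
{
        foreach ([n, u] in aop_total)
                printf("%-25s %12d %s\n", n, @sum(aop_total[n, u]), u)
}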

/* probe nfs.aop.readpage
*
* Read the page; only fires when a previous async
* read operation failed
*
* Arguments:
* __page : the address of page
* dev : device identifier
* ino : inode number
* i_flag : file flags
* i_size : file length in bytes
* sb_flag : super block flags
* file : file argument
* page_index : offset within the mapping; can be used as a page
*              identifier and a position identifier in the page frame
* rsize : read size (in bytes)
* size : number of pages to be read in this execution
*/
probe nfs.aop.readpage = kernel.function ("nfs_readpage") ?,
module("nfs").function ("nfs_readpage") ?
{
__page = $page
dev = __page_dev(__page)
ino = __page_ino(__page)

i_flag = __p2i_flag($page)
i_size = __p2i_size($page)

sb_flag = __p2sb_flag($page)

file = $file
page_index = $page->index

__inode = __p2i($page)
rsize = __nfs_server_rsize(__inode)

name = "nfs.aop.readpage"
argstr = sprintf("%d,%d" , page_index,r_size)

size = 1
units = "pages"
}

probe nfs.aop.readpage.return = kernel.function ("nfs_readpage").return ?,
module("nfs").function ("nfs_readpage").return ?
{
name = "nfs.aop.readpage.return"
retstr = sprintf("%d", $return)

size = 1
units = "pages"
}

/* probe nfs.aop.readpages
*
* Fires when reading ahead: reads several pages at once
* Arguments:
* dev : device identifier
* ino : inode number
* nr_pages : number of pages to be read in this execution
* file : filp argument
* rpages : read size (in pages)
* rsize : read size (in bytes)
* size : number of pages to be read in this execution
*/
probe nfs.aop.readpages = kernel.function ("nfs_readpages") ?,
module("nfs").function ("nfs_readpages") ?
{
dev = $mapping->host->i_sb->s_dev
ino = $mapping->host->i_ino

nr_pages = $nr_pages
file = $filp

rpages = __nfs_rpages($mapping->host)
rsize = __nfs_server_rsize($mapping->host)

name = "nfs.aop.readpages"
argstr = sprintf("%d" , nr_pages)

size = nr_pages
units = "pages"
}

probe nfs.aop.readpages.return = kernel.function ("nfs_readpages").return ?,
module("nfs").function ("nfs_readpages").return ?
{
name = "nfs.aop.readpages.return"
retstr = sprintf("%d", $return)


if ($return > 0)
{
size = $return
}
units = "pages"
}
/*probe nfs.aop.set_page_dirty
*
* __set_page_dirty_nobuffers is used to mark a page dirty without
* marking its buffers dirty.
*
* Arguments:
* __page : the address of page
* page_flag : page flags
*/
probe nfs.aop.set_page_dirty =
kernel.function ("__set_page_dirty_nobuffers") ?,
module("nfs").function ("__set_page_dirty_nobuffers") ?
{
/* dev = $mapping->host->i_sb->s_dev
devname = __find_bdevname(dev, $mapping->host->i_sb->s_bdev)
ino = $mapping->host->i_ino
*/
__page = $page
page_flag = $page->flags

name = "nfs.aop.set_page_dirty"
argstr = sprintf("%d",flag)
}

probe nfs.aop.set_page_dirty.return =
kernel.function ("__set_page_dirty_nobuffers") .return?,
module("nfs").function ("__set_page_dirty_nobuffers").return ?
{
name = "nfs.aop.set_page_dirty.return"
retstr = sprintf("%d", $return)
}

/*probe nfs.aop.writepage
*
* Write a mapped page to the server
*
* Arguments:
* __page : the address of page
* dev : device identifier
* ino : inode number
* for_reclaim : a flag of writeback_control, indicates if it's invoked from the page allocator
* for_kupdate : a flag of writeback_control, indicates if it's a kupdate writeback
* The priority of the writeback is decided by the above two flags
* i_flag : file flags
* i_size : file length in bytes
* i_state : inode state flags
* sb_flag : super block flags
* page_index : offset within the mapping; can be used as a page
*              identifier and a position identifier in the page frame
* wsize : write size
* size : number of pages to be written in this execution
*/
probe nfs.aop.writepage = kernel.function ("nfs_writepage") ?,
module("nfs").function ("nfs_writepage") ?
{
__page = $page
dev = __page_dev(__page)
ino = __page_ino(__page)


for_reclaim = $wbc->for_reclaim
for_kupdate = $wbc->for_kupdate

i_flag = __p2i_flag($page)
i_state = __p2i_state($page)
i_size = __p2i_size($page)

sb_flag = __p2sb_flag($page)


page_index = $page->index

__inode = __p2i($page)
wsize = __nfs_server_wsize(__inode)

name = "nfs.aop.writepage"
argstr = sprintf("%d",page_index)

size = 1
units = "pages"
}

probe nfs.aop.writepage.return = kernel.function ("nfs_writepage").return ?,
module("nfs").function ("nfs_writepage").return ?
{
name = "nfs.aop.writepage.return"
retstr = sprintf("%d", $return)
}

/*probe nfs.aop.writepages
* Write several dirty pages to the server at once
* Arguments:
* dev : device identifier
* ino : inode number
* for_reclaim : a flag of writeback_control, indicates if it's invoked from the page allocator
* for_kupdate : a flag of writeback_control, indicates if it's a kupdate writeback
* The priority of the writeback is decided by the above two flags
* wsize : write size
* wpages : write size (in pages)
* nr_to_write : number of pages to be written in this execution
* size : number of pages to be written in this execution
*/
probe nfs.aop.writepages = kernel.function ("nfs_writepages") ?,
module("nfs").function ("nfs_writepages") ?
{
dev = $mapping->host->i_sb->s_dev
ino = $mapping->host->i_ino

for_reclaim = $wbc->for_reclaim
for_kupdate = $wbc->for_kupdate
nr_to_write = $wbc->nr_to_write

wsize = __nfs_server_wsize($mapping->host)
wpages = __nfs_wpages($mapping->host)

name = "nfs.aop.writepages"
argstr = sprintf("%d",nr_to_write)

size = nr_to_write
units = "pages"
}

probe nfs.aop.writepages.return = kernel.function ("nfs_writepages").return ?,
module("nfs").function ("nfs_writepages").return ?
{
name = "nfs.aop.writepages.return"
retstr = sprintf("%d", $return)
}
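
/* Usage sketch (separate script): classify nfs_writepages calls by the
 * writeback_control flags exported above and total the pages requested.
 */
global wb_kind

probe nfs.aop.writepages
{
        kind = for_kupdate ? "kupdate" : (for_reclaim ? "reclaim" : "other")
        wb_kind[kind] <<< nr_to_write
}

probe end
{
        foreach (k in wb_kind)
                printf("%-8s %d calls, %d pages requested\n",
                       k, @count(wb_kind[k]), @sum(wb_kind[k]))
}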
/*probe nfs.aop.prepare_write
* Fires when a write operation is performed on nfs.
* Prepares a page for writing.
* Look for a request corresponding to the page. If there
* is one, and it belongs to another file, we flush it out
* before we try to copy anything into the page.
* Also do the same if we find a request from an existing
* dropped page
*
* Arguments:
* __page : the address of page
* dev : device identifier
* ino : inode number
* offset : start address of this write operation
* to : end address of this write operation
* page_index : offset within the mapping; can be used as a page
*              identifier and a position identifier in the page frame
* size : number of bytes to be written
*/
probe nfs.aop.prepare_write= kernel.function ("nfs_prepare_write") ?,
module("nfs").function ("nfs_prepare_write") ?
{
/* assign __page before it is used by the helper calls below */
__page = $page
dev = __page_dev(__page)
devname = __find_bdevname(dev, __page_bdev(__page))
ino = __page_ino(__page)

offset = $offset
to = $to

page_index = $page->index

name = "nfs.aop.prepare_write"
argstr = sprintf("%d", page_index)

size = to - offset
units = "bytes"
}

probe nfs.aop.prepare_write.return =
kernel.function ("nfs_prepare_write").return ?,
module("nfs").function ("nfs_prepare_write").return ?
{
name = "nfs.aop.nfs_prepare_write.return"
retstr = sprintf("%d", $return)
}

/*probe nfs.aop.commit_write
* Fires when a write operation is performed on nfs,
* usually right after prepare_write
*
* Update and possibly write a cached page of an NFS file
*
* Arguments:
* __page : the address of page
* dev : device identifier
* ino : inode number
* offset : start address of this write operation
* to : end address of this write operation
* i_flag : file flags
* i_size : file length in bytes
* sb_flag : super block flags
* page_index : offset within the mapping; can be used as a page
*              identifier and a position identifier in the page frame
* size : number of bytes to be written
*/
probe nfs.aop.commit_write= kernel.function ("nfs_commit_write") ?,
module("nfs").function ("nfs_commit_write") ?
{
__page = $page
dev = __page_dev(__page)
ino = __page_ino(__page)

offset = $offset
to = $to


i_flag = __p2i_flag($page)
i_size = __p2i_size($page)

sb_flag = __p2sb_flag($page)

page_index = $page->index

name = "nfs.aop.commit_write"
argstr = sprintf("%d, %d",offset , to)

size = to - offset
units = "bytes"
}


probe nfs.aop.commit_write.return=
kernel.function ("nfs_commit_write").return ?,
module("nfs").function ("nfs_commit_write").return?
{
name = "nfs.aop.nfs_commit_write.return"
retstr = sprintf("%d", $return)
}

/*probe nfs.aop.release_page
* Fires when a page release operation is performed on nfs
*
*
* Arguments:
* __page : the address of page
* dev : device identifier
* ino : inode number
* page_index : offset within the mapping; can be used as a page
*              identifier and a position identifier in the page frame
* size : number of pages released
*/
probe nfs.aop.release_page = kernel.function ("nfs_release_page") ?,
module("nfs").function ("nfs_release_page")?
{
__page = $page
dev = __page_dev(__page)
ino = __page_ino(__page)

// gfp = $gfp
page_index = $page->index

name = "nfs.aop.releasepage"
argstr = sprintf("%d", page_index)

size = 1
units = "pages"

}

probe nfs.aop.release_page.return = kernel.function ("nfs_release_page").return ?,
module("nfs").function ("nfs_release_page").return?
{
name = "nfs.aop.nfs_release_page.return"
retstr = sprintf("%d", $return)
}
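
A quick way to try the tapset (an example, assuming the file above is
saved as nfs.stp in the current directory, which "-I ." adds to the
tapset search path):

stap -I . -e 'probe nfs.fop.open { printf("%s opened %s (flags=%d)\n", execname(), filename, flag) }'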


Attachments:
nfs.stp (28.44 kB)

2006-08-16 08:56:32

by Xue Peng Li

[permalink] [raw]
Subject: Re: [ltc-perf] draft of nfs event hook

Hi folks
This is another nfs tapset, for the nfs procedure stubs on the client
side. I will write the nfs procedure stubs for the server side as the next step.

If you have any questions/suggestions/comments, please tell me.



Thanks


Attachments:
nfs_proc.stp (29.74 kB)