2016-04-15 14:25:06

by J. Bruce Fields

Subject: ls -l regression from 311324ad1713

This change is making "ls -l" of large changing directories slower:

commit 311324ad1713
Author: Trond Myklebust <[email protected]>
Date: Fri Feb 7 17:02:08 2014 -0500

NFS: Be more aggressive in using readdirplus for 'ls -l' situations

Try to detect 'ls -l' by having nfs_getattr() look at whether or not
there is an opendir() file descriptor for the parent directory.
If so, then assume that we want to force use of readdirplus in order
to avoid the multiple GETATTR calls over the wire.

Signed-off-by: Trond Myklebust <[email protected]>

After that commit, if we create a large directory and then add and
remove entries from it continuously while doing an "ls -l", we see
multiple READDIRPLUS requests sent with cookie 0; it appears the readdir
is starting over from the beginning of the directory repeatedly.

It doesn't appear to be the nfs_getattr changes that cause this; rather,
it's the change to nfs_readdir to call nfs_revalidate_mapping() whenever
INVALID_DATA is set (as opposed to only when the directory's attribute
cache has expired):

+static bool nfs_dir_mapping_need_revalidate(struct inode *dir)
+{
+	struct nfs_inode *nfsi = NFS_I(dir);
+
+	if (nfs_attribute_cache_expired(dir))
+		return true;
+	if (nfsi->cache_validity & NFS_INO_INVALID_DATA)
+		return true;
+	return false;
+}
...
@@ -847,7 +881,7 @@ static int nfs_readdir(...
 	desc->plus = nfs_use_readdirplus(inode, ctx) ? 1 : 0;
 
 	nfs_block_sillyrename(dentry);
-	if (ctx->pos == 0 || nfs_attribute_cache_expired(inode))
+	if (ctx->pos == 0 || nfs_dir_mapping_need_revalidate(inode))
 		res = nfs_revalidate_mapping(inode, file->f_mapping);
 	if (res < 0)
 		goto out;


I assume (though I haven't checked) that INVALID_DATA is getting set
when READDIRPLUS replies show a change in the directory's post_op_attrs?
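
If that assumption is right, the update path would look roughly like this sketch (again not the actual kernel code; the struct, flag value, and function name here are made up for illustration): a post-op attribute reply whose change attribute differs from the cached one marks a directory's cached contents invalid.

```c
#include <stdbool.h>

#define NFS_INO_INVALID_DATA 0x0010	/* illustrative value only */

/* Stand-in for the client's cached per-inode state. */
struct fake_nfs_inode {
	unsigned long cache_validity;
	unsigned long long change_attr;
	bool is_dir;
};

/* Sketch: if a post-op attr reply carries a new change attribute,
 * record it and, for directories, flag the cached data invalid. */
static void update_from_post_op_attr(struct fake_nfs_inode *nfsi,
				     unsigned long long new_change)
{
	if (nfsi->change_attr != new_change) {
		nfsi->change_attr = new_change;
		if (nfsi->is_dir)
			nfsi->cache_validity |= NFS_INO_INVALID_DATA;
	}
}
```

Under constant add/remove churn every READDIRPLUS reply would carry a new change attribute, so the flag would be set continuously, which would explain the repeated revalidation.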

I'm not sure what the correct behavior is here--any suggestions?

--b.