There are long-standing problems with btrfs subvols, particularly in
relation to whether and how they are exposed in the mount table.
- /proc/self/mountinfo reports the major:minor device number for each
filesystem, and when a btrfs subvol is explicitly mounted, the number
reported is wrong: it does not match what stat() reports for the
mountpoint.
- when subvols are not explicitly mounted, they don't appear in
mountinfo at all.
One consequence is that a tool which uses stat() to find the dev of a
filesystem, and then searches mountinfo for that dev, will not find the
filesystem (a small userspace demonstration follows the list below).
Some tools (e.g. findmnt) appear to have been enhanced to cope with
this strangeness, but it would be best to make btrfs behave more
normally.
- nfsd cannot currently see the transition to a subvol, so it reports
the main volume and all subvols to the client as being in the same
filesystem. As inode numbers are not unique across all subvols, this
can confuse clients. In particular, 'find' is likely to report a loop.
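To make the first problem concrete, here is a small userspace check
(a sketch only, not part of the series; the subvol path is
hypothetical):

#include <stdio.h>
#include <sys/stat.h>
#include <sys/sysmacros.h>

int main(void)
{
	struct stat st;

	/* hypothetical path to an explicitly mounted btrfs subvol */
	if (stat("/mnt/btrfs/subvol", &st) != 0) {
		perror("stat");
		return 1;
	}
	/* Compare this with the major:minor field of the matching
	 * line in /proc/self/mountinfo - before this series the two
	 * differ when a subvol is explicitly mounted.
	 */
	printf("stat() dev: %u:%u\n",
	       major(st.st_dev), minor(st.st_dev));
	return 0;
}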
Subvols can be made to appear in mountinfo using automounts. However,
nfsd does not cope well with automounts: it assumes all filesystems to
be exported are already mounted. So adding automounts to btrfs would
break nfsd.
We can enhance nfsd to understand that some automounts can be managed:
"internal mounts", where a filesystem provides an automount point and
mounts its own directories, can be handled differently by nfsd.
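To illustrate the idea, here is a hypothetical sketch of such a test
(the real check is added by the "VFS: new function: mount_is_internal()"
patch, whose body is not quoted in this posting):

/* Hypothetical sketch, using struct mount from fs/mount.h: a mount is
 * "internal" when the filesystem mounted one of its own directories,
 * i.e. the mount's root and the dentry it is mounted on belong to the
 * same superblock, and that dentry is an automount point.
 */
static bool mount_is_internal_sketch(struct vfsmount *mnt)
{
	struct mount *m = real_mount(mnt);

	return m->mnt_parent != m &&
	       m->mnt_mountpoint->d_sb == mnt->mnt_sb &&
	       d_inode(m->mnt_mountpoint) &&
	       (d_inode(m->mnt_mountpoint)->i_flags & S_AUTOMOUNT);
}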
This series addresses all these issues. After a few enhancements to the
VFS to provide the needed support, the patches enhance exportfs and
nfsd to cope with the concept of internal mounts, and then enhance
btrfs to provide them.
The NFSv3 support is incomplete. I'm not sure we can make it work
"perfectly". A normal NFSv3 mount seems to work well enough, but if
mounted with '-o noac', the client loses track of the mounted-on inode
number and complains about inode numbers changing.
My basic test for these patches is to mount a btrfs filesystem which
contains subvols, NFS-export it, mount it with NFSv3 and NFSv4, then
run 'find' in each of the filesystems and check the contents of
/proc/self/mountinfo.
The first patch simply fixes the dev number in mountinfo and could
possibly be tagged for -stable.
NeilBrown
---
NeilBrown (11):
VFS: show correct dev num in mountinfo
VFS: allow d_automount to create in-place bind-mount.
VFS: pass lookup_flags into follow_down()
VFS: export lookup_mnt()
VFS: new function: mount_is_internal()
nfsd: include a vfsmount in struct svc_fh
exportfs: Allow filehandle lookup to cross internal mount points.
nfsd: change get_parent_attributes() to nfsd_get_mounted_on()
nfsd: Allow filehandle lookup to cross internal mount points.
btrfs: introduce mapping function from location to inum
btrfs: use automount to bind-mount all subvol roots.
fs/btrfs/btrfs_inode.h | 12 +++
fs/btrfs/inode.c | 111 ++++++++++++++++++++++++++-
fs/btrfs/super.c | 1 +
fs/exportfs/expfs.c | 100 ++++++++++++++++++++----
fs/fhandle.c | 2 +-
fs/internal.h | 1 -
fs/namei.c | 6 +-
fs/namespace.c | 32 +++++++-
fs/nfsd/export.c | 4 +-
fs/nfsd/nfs3xdr.c | 40 +++++++---
fs/nfsd/nfs4proc.c | 9 ++-
fs/nfsd/nfs4xdr.c | 106 ++++++++++++-------------
fs/nfsd/nfsfh.c | 44 +++++++----
fs/nfsd/nfsfh.h | 3 +-
fs/nfsd/nfsproc.c | 5 +-
fs/nfsd/vfs.c | 162 +++++++++++++++++++++++----------------
fs/nfsd/vfs.h | 12 +--
fs/nfsd/xdr4.h | 2 +-
fs/overlayfs/namei.c | 5 +-
fs/xfs/xfs_ioctl.c | 12 ++-
include/linux/exportfs.h | 4 +-
include/linux/mount.h | 4 +
include/linux/namei.h | 2 +-
23 files changed, 490 insertions(+), 189 deletions(-)
--
Signature
/proc/$PID/mountinfo contains a field for the device number of the
filesystem at each mount.
This is taken from the superblock ->s_dev field, which is correct for
every filesystem except btrfs. A btrfs filesystem can contain multiple
subvols which each have a different device number. If (a directory
within) one of these subvols is mounted, the device number reported in
mountinfo will be different from the device number reported by stat().
This confuses some libraries and tools such as, historically, findmnt.
Current findmnt seems to cope with the strangeness.
So instead of using ->s_dev, call vfs_getattr_nosec() and use the ->dev
it provides. As there is no STATX flag to ask for the device number, we
pass a request mask of zero, and also ask the filesystem to avoid
syncing with any remote service.
Signed-off-by: NeilBrown <[email protected]>
---
fs/proc_namespace.c | 8 +++++++-
1 file changed, 7 insertions(+), 1 deletion(-)
diff --git a/fs/proc_namespace.c b/fs/proc_namespace.c
index 392ef5162655..f342a0231e9e 100644
--- a/fs/proc_namespace.c
+++ b/fs/proc_namespace.c
@@ -138,10 +138,16 @@ static int show_mountinfo(struct seq_file *m, struct vfsmount *mnt)
struct mount *r = real_mount(mnt);
struct super_block *sb = mnt->mnt_sb;
struct path mnt_path = { .dentry = mnt->mnt_root, .mnt = mnt };
+ struct kstat stat;
int err;
+ /* We only want ->dev, and there is no STATX flag for that,
+ * so ask for nothing and assume we get ->dev
+ */
+ vfs_getattr_nosec(&mnt_path, &stat, 0, AT_STATX_DONT_SYNC);
+
seq_printf(m, "%i %i %u:%u ", r->mnt_id, r->mnt_parent->mnt_id,
- MAJOR(sb->s_dev), MINOR(sb->s_dev));
+ MAJOR(stat.dev), MINOR(stat.dev));
if (sb->s_op->show_path) {
err = sb->s_op->show_path(m, mnt->mnt_root);
if (err)
finish_automount() prevents a mount trap from mounting a dentry onto
itself, as this could cause a loop - repeatedly automounting. There is
nothing intrinsically wrong with this arrangement, and the d_automount
function can easily avoid the loop (a sketch follows below). btrfs
will use such in-place bind-mounts to expose subvols in the mount
table.
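A sketch of how a d_automount implementation can avoid the loop; this
mirrors the btrfs_automount() added by the final patch of this series
(the function name is illustrative, and expiry tracking is omitted):

static struct vfsmount *example_automount(struct path *path)
{
	struct fs_context fc;

	/* Already bind-mounted here: decline, so no further
	 * automounting is attempted on this dentry.
	 */
	if (path->dentry == path->mnt->mnt_root)
		return ERR_PTR(-EISDIR);

	/* Otherwise bind-mount the dentry onto itself to expose it
	 * in the mount table.
	 */
	fc.root = path->dentry;
	fc.sb_flags = 0;
	fc.source = "example-automount";
	return vfs_create_mount(&fc);
}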
It may well be a problem to mount a dentry onto itself when it is
already the root of the vfsmount, so narrow the test to only check that
case.
The test on mnt_sb is redundant and has been removed. path->mnt and
path->dentry must have the same sb, so if m->mnt_root == dentry, then
m->mnt_sb must be the same as path->mnt->mnt_sb.
Signed-off-by: NeilBrown <[email protected]>
---
fs/namespace.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/fs/namespace.c b/fs/namespace.c
index ab4174a3c802..81b0f2b2e701 100644
--- a/fs/namespace.c
+++ b/fs/namespace.c
@@ -2928,7 +2928,7 @@ int finish_automount(struct vfsmount *m, struct path *path)
*/
BUG_ON(mnt_get_count(mnt) < 2);
- if (m->mnt_sb == path->mnt->mnt_sb &&
+ if (m->mnt_root == path->mnt->mnt_root &&
m->mnt_root == dentry) {
err = -ELOOP;
goto discard;
A future patch will want to trigger automount (LOOKUP_AUTOMOUNT) on
some follow_down() calls, so allow lookup flags to be passed in.
Signed-off-by: NeilBrown <[email protected]>
---
fs/namei.c | 6 +++---
fs/nfsd/vfs.c | 2 +-
include/linux/namei.h | 2 +-
3 files changed, 5 insertions(+), 5 deletions(-)
diff --git a/fs/namei.c b/fs/namei.c
index bf6d8a738c59..cea0e9b2f162 100644
--- a/fs/namei.c
+++ b/fs/namei.c
@@ -1395,11 +1395,11 @@ EXPORT_SYMBOL(follow_down_one);
* point, the filesystem owning that dentry may be queried as to whether the
* caller is permitted to proceed or not.
*/
-int follow_down(struct path *path)
+int follow_down(struct path *path, unsigned int lookup_flags)
{
struct vfsmount *mnt = path->mnt;
bool jumped;
- int ret = traverse_mounts(path, &jumped, NULL, 0);
+ int ret = traverse_mounts(path, &jumped, NULL, lookup_flags);
if (path->mnt != mnt)
mntput(mnt);
@@ -2736,7 +2736,7 @@ int path_pts(struct path *path)
path->dentry = child;
dput(parent);
- follow_down(path);
+ follow_down(path, 0);
return 0;
}
#endif
diff --git a/fs/nfsd/vfs.c b/fs/nfsd/vfs.c
index a224a5e23cc1..7c32edcfd2e9 100644
--- a/fs/nfsd/vfs.c
+++ b/fs/nfsd/vfs.c
@@ -65,7 +65,7 @@ nfsd_cross_mnt(struct svc_rqst *rqstp, struct dentry **dpp,
.dentry = dget(dentry)};
int err = 0;
- err = follow_down(&path);
+ err = follow_down(&path, 0);
if (err < 0)
goto out;
if (path.mnt == exp->ex_path.mnt && path.dentry == dentry &&
diff --git a/include/linux/namei.h b/include/linux/namei.h
index be9a2b349ca7..8d47433def3c 100644
--- a/include/linux/namei.h
+++ b/include/linux/namei.h
@@ -70,7 +70,7 @@ extern struct dentry *lookup_one_len_unlocked(const char *, struct dentry *, int
extern struct dentry *lookup_positive_unlocked(const char *, struct dentry *, int);
extern int follow_down_one(struct path *);
-extern int follow_down(struct path *);
+extern int follow_down(struct path *, unsigned int);
extern int follow_up(struct path *);
extern struct dentry *lock_rename(struct dentry *, struct dentry *);
A future patch will allow exportfs_decode_fh{,_raw} to return a
different vfsmount than the one passed. This is specifically for btrfs,
but would be useful for any filesystem that presents as multiple volumes
(i.e. different st_dev, each with its own st_ino number-space).
For nfsd, this means that the mnt in the svc_export may not apply to all
filehandles reached from that export. So svc_fh needs to store a
distinct vfsmount as well.
For now, fhp->fh_mnt == fhp->fh_export->ex_path.mnt, but that will change.
Changes include:
 fh_compose() and nfsd_lookup_dentry() now take a *path instead
   of a *dentry.
 nfsd4_encode_fattr() and nfsd4_encode_fattr_to_buf() now take a
   *vfsmount as well as a *dentry.
 nfsd_cross_mnt() now takes a *path instead of a **dentry, to pass
   in, and get back, the mnt and dentry.
 nfsd_lookup_parent() used to take a *dentry and a **dentry; now it
   just takes a *path - the *path that was passed to
   nfsd_lookup_dentry().
Signed-off-by: NeilBrown <[email protected]>
---
fs/nfsd/export.c | 4 +-
fs/nfsd/nfs3xdr.c | 22 +++++----
fs/nfsd/nfs4proc.c | 9 ++--
fs/nfsd/nfs4xdr.c | 55 +++++++++++-----------
fs/nfsd/nfsfh.c | 30 +++++++-----
fs/nfsd/nfsfh.h | 3 +
fs/nfsd/nfsproc.c | 5 ++
fs/nfsd/vfs.c | 133 ++++++++++++++++++++++++++++------------------------
fs/nfsd/vfs.h | 10 ++--
fs/nfsd/xdr4.h | 2 -
10 files changed, 150 insertions(+), 123 deletions(-)
diff --git a/fs/nfsd/export.c b/fs/nfsd/export.c
index 9421dae22737..e506cbe78b4f 100644
--- a/fs/nfsd/export.c
+++ b/fs/nfsd/export.c
@@ -1003,7 +1003,7 @@ exp_rootfh(struct net *net, struct auth_domain *clp, char *name,
* fh must be initialized before calling fh_compose
*/
fh_init(&fh, maxsize);
- if (fh_compose(&fh, exp, path.dentry, NULL))
+ if (fh_compose(&fh, exp, &path, NULL))
err = -EINVAL;
else
err = 0;
@@ -1178,7 +1178,7 @@ exp_pseudoroot(struct svc_rqst *rqstp, struct svc_fh *fhp)
exp = rqst_find_fsidzero_export(rqstp);
if (IS_ERR(exp))
return nfserrno(PTR_ERR(exp));
- rv = fh_compose(fhp, exp, exp->ex_path.dentry, NULL);
+ rv = fh_compose(fhp, exp, &exp->ex_path, NULL);
exp_put(exp);
return rv;
}
diff --git a/fs/nfsd/nfs3xdr.c b/fs/nfsd/nfs3xdr.c
index 0a5ebc52e6a9..67af0c5c1543 100644
--- a/fs/nfsd/nfs3xdr.c
+++ b/fs/nfsd/nfs3xdr.c
@@ -1089,36 +1089,38 @@ compose_entry_fh(struct nfsd3_readdirres *cd, struct svc_fh *fhp,
const char *name, int namlen, u64 ino)
{
struct svc_export *exp;
- struct dentry *dparent, *dchild;
+ struct dentry *dparent;
+ struct path child;
__be32 rv = nfserr_noent;
dparent = cd->fh.fh_dentry;
exp = cd->fh.fh_export;
+ child.mnt = cd->fh.fh_mnt;
if (isdotent(name, namlen)) {
if (namlen == 2) {
- dchild = dget_parent(dparent);
+ child.dentry = dget_parent(dparent);
/*
* Don't return filehandle for ".." if we're at
* the filesystem or export root:
*/
- if (dchild == dparent)
+ if (child.dentry == dparent)
goto out;
if (dparent == exp->ex_path.dentry)
goto out;
} else
- dchild = dget(dparent);
+ child.dentry = dget(dparent);
} else
- dchild = lookup_positive_unlocked(name, dparent, namlen);
- if (IS_ERR(dchild))
+ child.dentry = lookup_positive_unlocked(name, dparent, namlen);
+ if (IS_ERR(child.dentry))
return rv;
- if (d_mountpoint(dchild))
+ if (d_mountpoint(child.dentry))
goto out;
- if (dchild->d_inode->i_ino != ino)
+ if (child.dentry->d_inode->i_ino != ino)
goto out;
- rv = fh_compose(fhp, exp, dchild, &cd->fh);
+ rv = fh_compose(fhp, exp, &child, &cd->fh);
out:
- dput(dchild);
+ dput(child.dentry);
return rv;
}
diff --git a/fs/nfsd/nfs4proc.c b/fs/nfsd/nfs4proc.c
index 486c5dba4b65..743b9315cd3e 100644
--- a/fs/nfsd/nfs4proc.c
+++ b/fs/nfsd/nfs4proc.c
@@ -902,7 +902,7 @@ nfsd4_secinfo(struct svc_rqst *rqstp, struct nfsd4_compound_state *cstate,
{
struct nfsd4_secinfo *secinfo = &u->secinfo;
struct svc_export *exp;
- struct dentry *dentry;
+ struct path path;
__be32 err;
err = fh_verify(rqstp, &cstate->current_fh, S_IFDIR, NFSD_MAY_EXEC);
@@ -910,16 +910,16 @@ nfsd4_secinfo(struct svc_rqst *rqstp, struct nfsd4_compound_state *cstate,
return err;
err = nfsd_lookup_dentry(rqstp, &cstate->current_fh,
secinfo->si_name, secinfo->si_namelen,
- &exp, &dentry);
+ &exp, &path);
if (err)
return err;
fh_unlock(&cstate->current_fh);
- if (d_really_is_negative(dentry)) {
+ if (d_really_is_negative(path.dentry)) {
exp_put(exp);
err = nfserr_noent;
} else
secinfo->si_exp = exp;
- dput(dentry);
+ path_put(&path);
if (cstate->minorversion)
/* See rfc 5661 section 2.6.3.1.1.8 */
fh_put(&cstate->current_fh);
@@ -1930,6 +1930,7 @@ _nfsd4_verify(struct svc_rqst *rqstp, struct nfsd4_compound_state *cstate,
p = buf;
status = nfsd4_encode_fattr_to_buf(&p, count, &cstate->current_fh,
cstate->current_fh.fh_export,
+ cstate->current_fh.fh_mnt,
cstate->current_fh.fh_dentry,
verify->ve_bmval,
rqstp, 0);
diff --git a/fs/nfsd/nfs4xdr.c b/fs/nfsd/nfs4xdr.c
index 7abeccb975b2..21c277fa28ae 100644
--- a/fs/nfsd/nfs4xdr.c
+++ b/fs/nfsd/nfs4xdr.c
@@ -2823,9 +2823,9 @@ nfsd4_encode_bitmap(struct xdr_stream *xdr, u32 bmval0, u32 bmval1, u32 bmval2)
*/
static __be32
nfsd4_encode_fattr(struct xdr_stream *xdr, struct svc_fh *fhp,
- struct svc_export *exp,
- struct dentry *dentry, u32 *bmval,
- struct svc_rqst *rqstp, int ignore_crossmnt)
+ struct svc_export *exp,
+ struct vfsmount *mnt, struct dentry *dentry,
+ u32 *bmval, struct svc_rqst *rqstp, int ignore_crossmnt)
{
u32 bmval0 = bmval[0];
u32 bmval1 = bmval[1];
@@ -2851,7 +2851,7 @@ nfsd4_encode_fattr(struct xdr_stream *xdr, struct svc_fh *fhp,
struct nfsd4_compoundres *resp = rqstp->rq_resp;
u32 minorversion = resp->cstate.minorversion;
struct path path = {
- .mnt = exp->ex_path.mnt,
+ .mnt = mnt,
.dentry = dentry,
};
struct nfsd_net *nn = net_generic(SVC_NET(rqstp), nfsd_net_id);
@@ -2882,7 +2882,7 @@ nfsd4_encode_fattr(struct xdr_stream *xdr, struct svc_fh *fhp,
if (!tempfh)
goto out;
fh_init(tempfh, NFS4_FHSIZE);
- status = fh_compose(tempfh, exp, dentry, NULL);
+ status = fh_compose(tempfh, exp, &path, NULL);
if (status)
goto out;
fhp = tempfh;
@@ -3274,13 +3274,12 @@ nfsd4_encode_fattr(struct xdr_stream *xdr, struct svc_fh *fhp,
p = xdr_reserve_space(xdr, 8);
if (!p)
- goto out_resource;
+ goto out_resource;
/*
* Get parent's attributes if not ignoring crossmount
* and this is the root of a cross-mounted filesystem.
*/
- if (ignore_crossmnt == 0 &&
- dentry == exp->ex_path.mnt->mnt_root) {
+ if (ignore_crossmnt == 0 && dentry == mnt->mnt_root) {
err = get_parent_attributes(exp, &parent_stat);
if (err)
goto out_nfserr;
@@ -3380,17 +3379,18 @@ static void svcxdr_init_encode_from_buffer(struct xdr_stream *xdr,
}
__be32 nfsd4_encode_fattr_to_buf(__be32 **p, int words,
- struct svc_fh *fhp, struct svc_export *exp,
- struct dentry *dentry, u32 *bmval,
- struct svc_rqst *rqstp, int ignore_crossmnt)
+ struct svc_fh *fhp, struct svc_export *exp,
+ struct vfsmount *mnt, struct dentry *dentry,
+ u32 *bmval, struct svc_rqst *rqstp,
+ int ignore_crossmnt)
{
struct xdr_buf dummy;
struct xdr_stream xdr;
__be32 ret;
svcxdr_init_encode_from_buffer(&xdr, &dummy, *p, words << 2);
- ret = nfsd4_encode_fattr(&xdr, fhp, exp, dentry, bmval, rqstp,
- ignore_crossmnt);
+ ret = nfsd4_encode_fattr(&xdr, fhp, exp, mnt, dentry, bmval, rqstp,
+ ignore_crossmnt);
*p = xdr.p;
return ret;
}
@@ -3409,14 +3409,16 @@ nfsd4_encode_dirent_fattr(struct xdr_stream *xdr, struct nfsd4_readdir *cd,
const char *name, int namlen)
{
struct svc_export *exp = cd->rd_fhp->fh_export;
- struct dentry *dentry;
+ struct path path;
__be32 nfserr;
int ignore_crossmnt = 0;
- dentry = lookup_positive_unlocked(name, cd->rd_fhp->fh_dentry, namlen);
- if (IS_ERR(dentry))
- return nfserrno(PTR_ERR(dentry));
+ path.dentry = lookup_positive_unlocked(name, cd->rd_fhp->fh_dentry,
+ namlen);
+ if (IS_ERR(path.dentry))
+ return nfserrno(PTR_ERR(path.dentry));
+ path.mnt = mntget(cd->rd_fhp->fh_mnt);
exp_get(exp);
/*
* In the case of a mountpoint, the client may be asking for
@@ -3425,7 +3427,7 @@ nfsd4_encode_dirent_fattr(struct xdr_stream *xdr, struct nfsd4_readdir *cd,
* we will not follow the cross mount and will fill the attribtutes
* directly from the mountpoint dentry.
*/
- if (nfsd_mountpoint(dentry, exp)) {
+ if (nfsd_mountpoint(path.dentry, exp)) {
int err;
if (!(exp->ex_flags & NFSEXP_V4ROOT)
@@ -3434,11 +3436,11 @@ nfsd4_encode_dirent_fattr(struct xdr_stream *xdr, struct nfsd4_readdir *cd,
goto out_encode;
}
/*
- * Why the heck aren't we just using nfsd_lookup??
+ * Why the heck aren't we just using nfsd_lookup_dentry??
* Different "."/".." handling? Something else?
* At least, add a comment here to explain....
*/
- err = nfsd_cross_mnt(cd->rd_rqstp, &dentry, &exp);
+ err = nfsd_cross_mnt(cd->rd_rqstp, &path, &exp);
if (err) {
nfserr = nfserrno(err);
goto out_put;
@@ -3446,13 +3448,13 @@ nfsd4_encode_dirent_fattr(struct xdr_stream *xdr, struct nfsd4_readdir *cd,
nfserr = check_nfsd_access(exp, cd->rd_rqstp);
if (nfserr)
goto out_put;
-
}
out_encode:
- nfserr = nfsd4_encode_fattr(xdr, NULL, exp, dentry, cd->rd_bmval,
- cd->rd_rqstp, ignore_crossmnt);
+ nfserr = nfsd4_encode_fattr(xdr, NULL, exp, path.mnt, path.dentry,
+ cd->rd_bmval, cd->rd_rqstp,
+ ignore_crossmnt);
out_put:
- dput(dentry);
+ path_put(&path);
exp_put(exp);
return nfserr;
}
@@ -3651,8 +3653,9 @@ nfsd4_encode_getattr(struct nfsd4_compoundres *resp, __be32 nfserr, struct nfsd4
struct svc_fh *fhp = getattr->ga_fhp;
struct xdr_stream *xdr = resp->xdr;
- return nfsd4_encode_fattr(xdr, fhp, fhp->fh_export, fhp->fh_dentry,
- getattr->ga_bmval, resp->rqstp, 0);
+ return nfsd4_encode_fattr(xdr, fhp, fhp->fh_export,
+ fhp->fh_mnt, fhp->fh_dentry,
+ getattr->ga_bmval, resp->rqstp, 0);
}
static __be32
diff --git a/fs/nfsd/nfsfh.c b/fs/nfsd/nfsfh.c
index c475d2271f9c..0bf7ac13ae50 100644
--- a/fs/nfsd/nfsfh.c
+++ b/fs/nfsd/nfsfh.c
@@ -299,6 +299,7 @@ static __be32 nfsd_set_fh_dentry(struct svc_rqst *rqstp, struct svc_fh *fhp)
}
fhp->fh_dentry = dentry;
+ fhp->fh_mnt = mntget(exp->ex_path.mnt);
fhp->fh_export = exp;
switch (rqstp->rq_vers) {
@@ -556,7 +557,7 @@ static void set_version_and_fsid_type(struct svc_fh *fhp, struct svc_export *exp
}
__be32
-fh_compose(struct svc_fh *fhp, struct svc_export *exp, struct dentry *dentry,
+fh_compose(struct svc_fh *fhp, struct svc_export *exp, struct path *path,
struct svc_fh *ref_fh)
{
/* ref_fh is a reference file handle.
@@ -567,13 +568,13 @@ fh_compose(struct svc_fh *fhp, struct svc_export *exp, struct dentry *dentry,
*
*/
- struct inode * inode = d_inode(dentry);
+ struct inode * inode = d_inode(path->dentry);
dev_t ex_dev = exp_sb(exp)->s_dev;
dprintk("nfsd: fh_compose(exp %02x:%02x/%ld %pd2, ino=%ld)\n",
MAJOR(ex_dev), MINOR(ex_dev),
(long) d_inode(exp->ex_path.dentry)->i_ino,
- dentry,
+ path->dentry,
(inode ? inode->i_ino : 0));
/* Choose filehandle version and fsid type based on
@@ -590,14 +591,15 @@ fh_compose(struct svc_fh *fhp, struct svc_export *exp, struct dentry *dentry,
if (fhp->fh_locked || fhp->fh_dentry) {
printk(KERN_ERR "fh_compose: fh %pd2 not initialized!\n",
- dentry);
+ path->dentry);
}
if (fhp->fh_maxsize < NFS_FHSIZE)
printk(KERN_ERR "fh_compose: called with maxsize %d! %pd2\n",
fhp->fh_maxsize,
- dentry);
+ path->dentry);
- fhp->fh_dentry = dget(dentry); /* our internal copy */
+ fhp->fh_dentry = dget(path->dentry); /* our internal copy */
+ fhp->fh_mnt = mntget(path->mnt);
fhp->fh_export = exp_get(exp);
if (fhp->fh_handle.fh_version == 0xca) {
@@ -609,9 +611,9 @@ fh_compose(struct svc_fh *fhp, struct svc_export *exp, struct dentry *dentry,
fhp->fh_handle.ofh_xdev = fhp->fh_handle.ofh_dev;
fhp->fh_handle.ofh_xino =
ino_t_to_u32(d_inode(exp->ex_path.dentry)->i_ino);
- fhp->fh_handle.ofh_dirino = ino_t_to_u32(parent_ino(dentry));
+ fhp->fh_handle.ofh_dirino = ino_t_to_u32(parent_ino(path->dentry));
if (inode)
- _fh_update_old(dentry, exp, &fhp->fh_handle);
+ _fh_update_old(path->dentry, exp, &fhp->fh_handle);
} else {
fhp->fh_handle.fh_size =
key_len(fhp->fh_handle.fh_fsid_type) + 4;
@@ -624,7 +626,7 @@ fh_compose(struct svc_fh *fhp, struct svc_export *exp, struct dentry *dentry,
exp->ex_fsid, exp->ex_uuid);
if (inode)
- _fh_update(fhp, exp, dentry);
+ _fh_update(fhp, exp, path->dentry);
if (fhp->fh_handle.fh_fileid_type == FILEID_INVALID) {
fh_put(fhp);
return nfserr_opnotsupp;
@@ -675,8 +677,10 @@ fh_update(struct svc_fh *fhp)
void
fh_put(struct svc_fh *fhp)
{
- struct dentry * dentry = fhp->fh_dentry;
- struct svc_export * exp = fhp->fh_export;
+ struct dentry *dentry = fhp->fh_dentry;
+ struct svc_export *exp = fhp->fh_export;
+ struct vfsmount *mnt = fhp->fh_mnt;
+
if (dentry) {
fh_unlock(fhp);
fhp->fh_dentry = NULL;
@@ -684,6 +688,10 @@ fh_put(struct svc_fh *fhp)
fh_clear_wcc(fhp);
}
fh_drop_write(fhp);
+ if (mnt) {
+ mntput(mnt);
+ fhp->fh_mnt = NULL;
+ }
if (exp) {
exp_put(exp);
fhp->fh_export = NULL;
diff --git a/fs/nfsd/nfsfh.h b/fs/nfsd/nfsfh.h
index 6106697adc04..26c02209babd 100644
--- a/fs/nfsd/nfsfh.h
+++ b/fs/nfsd/nfsfh.h
@@ -31,6 +31,7 @@ static inline ino_t u32_to_ino_t(__u32 uino)
typedef struct svc_fh {
struct knfsd_fh fh_handle; /* FH data */
int fh_maxsize; /* max size for fh_handle */
+ struct vfsmount * fh_mnt; /* mnt, possibly of subvol */
struct dentry * fh_dentry; /* validated dentry */
struct svc_export * fh_export; /* export pointer */
@@ -171,7 +172,7 @@ extern char * SVCFH_fmt(struct svc_fh *fhp);
* Function prototypes
*/
__be32 fh_verify(struct svc_rqst *, struct svc_fh *, umode_t, int);
-__be32 fh_compose(struct svc_fh *, struct svc_export *, struct dentry *, struct svc_fh *);
+__be32 fh_compose(struct svc_fh *, struct svc_export *, struct path *, struct svc_fh *);
__be32 fh_update(struct svc_fh *);
void fh_put(struct svc_fh *);
diff --git a/fs/nfsd/nfsproc.c b/fs/nfsd/nfsproc.c
index 60d7c59e7935..245199b0e630 100644
--- a/fs/nfsd/nfsproc.c
+++ b/fs/nfsd/nfsproc.c
@@ -268,6 +268,7 @@ nfsd_proc_create(struct svc_rqst *rqstp)
struct iattr *attr = &argp->attrs;
struct inode *inode;
struct dentry *dchild;
+ struct path path;
int type, mode;
int hosterr;
dev_t rdev = 0, wanted = new_decode_dev(attr->ia_size);
@@ -298,7 +299,9 @@ nfsd_proc_create(struct svc_rqst *rqstp)
goto out_unlock;
}
fh_init(newfhp, NFS_FHSIZE);
- resp->status = fh_compose(newfhp, dirfhp->fh_export, dchild, dirfhp);
+ path.mnt = dirfhp->fh_mnt;
+ path.dentry = dchild;
+ resp->status = fh_compose(newfhp, dirfhp->fh_export, &path, dirfhp);
if (!resp->status && d_really_is_negative(dchild))
resp->status = nfserr_noent;
dput(dchild);
diff --git a/fs/nfsd/vfs.c b/fs/nfsd/vfs.c
index 7c32edcfd2e9..c0c6920f25a4 100644
--- a/fs/nfsd/vfs.c
+++ b/fs/nfsd/vfs.c
@@ -49,27 +49,26 @@
#define NFSDDBG_FACILITY NFSDDBG_FILEOP
-/*
- * Called from nfsd_lookup and encode_dirent. Check if we have crossed
+/*
+ * Called from nfsd_lookup and encode_dirent. Check if we have crossed
* a mount point.
- * Returns -EAGAIN or -ETIMEDOUT leaving *dpp and *expp unchanged,
- * or nfs_ok having possibly changed *dpp and *expp
+ * Returns -EAGAIN or -ETIMEDOUT leaving *path and *expp unchanged,
+ * or nfs_ok having possibly changed *path and *expp
*/
int
-nfsd_cross_mnt(struct svc_rqst *rqstp, struct dentry **dpp,
- struct svc_export **expp)
+nfsd_cross_mnt(struct svc_rqst *rqstp, struct path *path_parent,
+ struct svc_export **expp)
{
struct svc_export *exp = *expp, *exp2 = NULL;
- struct dentry *dentry = *dpp;
- struct path path = {.mnt = mntget(exp->ex_path.mnt),
- .dentry = dget(dentry)};
+ struct path path = {.mnt = mntget(path_parent->mnt),
+ .dentry = dget(path_parent->dentry)};
int err = 0;
err = follow_down(&path, 0);
if (err < 0)
goto out;
- if (path.mnt == exp->ex_path.mnt && path.dentry == dentry &&
- nfsd_mountpoint(dentry, exp) == 2) {
+ if (path.mnt == path_parent->mnt && path.dentry == path_parent->dentry &&
+ nfsd_mountpoint(path.dentry, exp) == 2) {
/* This is only a mountpoint in some other namespace */
path_put(&path);
goto out;
@@ -93,19 +92,14 @@ nfsd_cross_mnt(struct svc_rqst *rqstp, struct dentry **dpp,
if (nfsd_v4client(rqstp) ||
(exp->ex_flags & NFSEXP_CROSSMOUNT) || EX_NOHIDE(exp2)) {
/* successfully crossed mount point */
- /*
- * This is subtle: path.dentry is *not* on path.mnt
- * at this point. The only reason we are safe is that
- * original mnt is pinned down by exp, so we should
- * put path *before* putting exp
- */
- *dpp = path.dentry;
- path.dentry = dentry;
+ path_put(path_parent);
+ *path_parent = path;
+ exp_put(exp);
*expp = exp2;
- exp2 = exp;
+ } else {
+ path_put(&path);
+ exp_put(exp2);
}
- path_put(&path);
- exp_put(exp2);
out:
return err;
}
@@ -121,27 +115,30 @@ static void follow_to_parent(struct path *path)
path->dentry = dp;
}
-static int nfsd_lookup_parent(struct svc_rqst *rqstp, struct dentry *dparent, struct svc_export **exp, struct dentry **dentryp)
+static int nfsd_lookup_parent(struct svc_rqst *rqstp, struct svc_export **exp,
+ struct path *path)
{
+ struct path path2;
struct svc_export *exp2;
- struct path path = {.mnt = mntget((*exp)->ex_path.mnt),
- .dentry = dget(dparent)};
- follow_to_parent(&path);
-
- exp2 = rqst_exp_parent(rqstp, &path);
+ path2 = *path;
+ path_get(&path2);
+ follow_to_parent(&path2);
+ exp2 = rqst_exp_parent(rqstp, path);
if (PTR_ERR(exp2) == -ENOENT) {
- *dentryp = dget(dparent);
+ /* leave path unchanged */
+ path_put(&path2);
+ return 0;
} else if (IS_ERR(exp2)) {
- path_put(&path);
+ path_put(&path2);
return PTR_ERR(exp2);
} else {
- *dentryp = dget(path.dentry);
+ path_put(path);
+ *path = path2;
exp_put(*exp);
*exp = exp2;
+ return 0;
}
- path_put(&path);
- return 0;
}
/*
@@ -172,29 +169,32 @@ int nfsd_mountpoint(struct dentry *dentry, struct svc_export *exp)
__be32
nfsd_lookup_dentry(struct svc_rqst *rqstp, struct svc_fh *fhp,
const char *name, unsigned int len,
- struct svc_export **exp_ret, struct dentry **dentry_ret)
+ struct svc_export **exp_ret, struct path *ret)
{
struct svc_export *exp;
struct dentry *dparent;
- struct dentry *dentry;
int host_err;
dprintk("nfsd: nfsd_lookup(fh %s, %.*s)\n", SVCFH_fmt(fhp), len,name);
dparent = fhp->fh_dentry;
+ ret->mnt = mntget(fhp->fh_mnt);
exp = exp_get(fhp->fh_export);
/* Lookup the name, but don't follow links */
if (isdotent(name, len)) {
if (len==1)
- dentry = dget(dparent);
+ ret->dentry = dget(dparent);
else if (dparent != exp->ex_path.dentry)
- dentry = dget_parent(dparent);
+ ret->dentry = dget_parent(dparent);
else if (!EX_NOHIDE(exp) && !nfsd_v4client(rqstp))
- dentry = dget(dparent); /* .. == . just like at / */
+ ret->dentry = dget(dparent); /* .. == . just like at / */
else {
- /* checking mountpoint crossing is very different when stepping up */
- host_err = nfsd_lookup_parent(rqstp, dparent, &exp, &dentry);
+ /* checking mountpoint crossing is very different when
+ * stepping up
+ */
+ ret->dentry = dget(dparent);
+ host_err = nfsd_lookup_parent(rqstp, &exp, ret);
if (host_err)
goto out_nfserr;
}
@@ -205,11 +205,13 @@ nfsd_lookup_dentry(struct svc_rqst *rqstp, struct svc_fh *fhp,
* need to take the child's i_mutex:
*/
fh_lock_nested(fhp, I_MUTEX_PARENT);
- dentry = lookup_one_len(name, dparent, len);
- host_err = PTR_ERR(dentry);
- if (IS_ERR(dentry))
+ ret->dentry = lookup_one_len(name, dparent, len);
+ host_err = PTR_ERR(ret->dentry);
+ if (IS_ERR(ret->dentry)) {
+ ret->dentry = NULL;
goto out_nfserr;
- if (nfsd_mountpoint(dentry, exp)) {
+ }
+ if (nfsd_mountpoint(ret->dentry, exp)) {
/*
* We don't need the i_mutex after all. It's
* still possible we could open this (regular
@@ -219,18 +221,16 @@ nfsd_lookup_dentry(struct svc_rqst *rqstp, struct svc_fh *fhp,
* and a mountpoint won't be renamed:
*/
fh_unlock(fhp);
- if ((host_err = nfsd_cross_mnt(rqstp, &dentry, &exp))) {
- dput(dentry);
+ if ((host_err = nfsd_cross_mnt(rqstp, ret, &exp)))
goto out_nfserr;
- }
}
}
- *dentry_ret = dentry;
*exp_ret = exp;
return 0;
out_nfserr:
exp_put(exp);
+ path_put(ret);
return nfserrno(host_err);
}
@@ -251,13 +251,13 @@ nfsd_lookup(struct svc_rqst *rqstp, struct svc_fh *fhp, const char *name,
unsigned int len, struct svc_fh *resfh)
{
struct svc_export *exp;
- struct dentry *dentry;
+ struct path path;
__be32 err;
err = fh_verify(rqstp, fhp, S_IFDIR, NFSD_MAY_EXEC);
if (err)
return err;
- err = nfsd_lookup_dentry(rqstp, fhp, name, len, &exp, &dentry);
+ err = nfsd_lookup_dentry(rqstp, fhp, name, len, &exp, &path);
if (err)
return err;
err = check_nfsd_access(exp, rqstp);
@@ -267,11 +267,11 @@ nfsd_lookup(struct svc_rqst *rqstp, struct svc_fh *fhp, const char *name,
* Note: we compose the file handle now, but as the
* dentry may be negative, it may need to be updated.
*/
- err = fh_compose(resfh, exp, dentry, fhp);
- if (!err && d_really_is_negative(dentry))
+ err = fh_compose(resfh, exp, &path, fhp);
+ if (!err && d_really_is_negative(path.dentry))
err = nfserr_noent;
out:
- dput(dentry);
+ path_put(&path);
exp_put(exp);
return err;
}
@@ -740,7 +740,7 @@ __nfsd_open(struct svc_rqst *rqstp, struct svc_fh *fhp, umode_t type,
__be32 err;
int host_err = 0;
- path.mnt = fhp->fh_export->ex_path.mnt;
+ path.mnt = fhp->fh_mnt;
path.dentry = fhp->fh_dentry;
inode = d_inode(path.dentry);
@@ -1350,6 +1350,7 @@ nfsd_create(struct svc_rqst *rqstp, struct svc_fh *fhp,
int type, dev_t rdev, struct svc_fh *resfhp)
{
struct dentry *dentry, *dchild = NULL;
+ struct path path;
__be32 err;
int host_err;
@@ -1371,7 +1372,9 @@ nfsd_create(struct svc_rqst *rqstp, struct svc_fh *fhp,
host_err = PTR_ERR(dchild);
if (IS_ERR(dchild))
return nfserrno(host_err);
- err = fh_compose(resfhp, fhp->fh_export, dchild, fhp);
+ path.mnt = fhp->fh_mnt;
+ path.dentry = dchild;
+ err = fh_compose(resfhp, fhp->fh_export, &path, fhp);
/*
* We unconditionally drop our ref to dchild as fh_compose will have
* already grabbed its own ref for it.
@@ -1390,11 +1393,12 @@ nfsd_create(struct svc_rqst *rqstp, struct svc_fh *fhp,
*/
__be32
do_nfsd_create(struct svc_rqst *rqstp, struct svc_fh *fhp,
- char *fname, int flen, struct iattr *iap,
- struct svc_fh *resfhp, int createmode, u32 *verifier,
- bool *truncp, bool *created)
+ char *fname, int flen, struct iattr *iap,
+ struct svc_fh *resfhp, int createmode, u32 *verifier,
+ bool *truncp, bool *created)
{
struct dentry *dentry, *dchild = NULL;
+ struct path path;
struct inode *dirp;
__be32 err;
int host_err;
@@ -1436,7 +1440,9 @@ do_nfsd_create(struct svc_rqst *rqstp, struct svc_fh *fhp,
goto out;
}
- err = fh_compose(resfhp, fhp->fh_export, dchild, fhp);
+ path.mnt = fhp->fh_mnt;
+ path.dentry = dchild;
+ err = fh_compose(resfhp, fhp->fh_export, &path, fhp);
if (err)
goto out;
@@ -1569,7 +1575,7 @@ nfsd_readlink(struct svc_rqst *rqstp, struct svc_fh *fhp, char *buf, int *lenp)
if (unlikely(err))
return err;
- path.mnt = fhp->fh_export->ex_path.mnt;
+ path.mnt = fhp->fh_mnt;
path.dentry = fhp->fh_dentry;
if (unlikely(!d_is_symlink(path.dentry)))
@@ -1600,6 +1606,7 @@ nfsd_symlink(struct svc_rqst *rqstp, struct svc_fh *fhp,
struct svc_fh *resfhp)
{
struct dentry *dentry, *dnew;
+ struct path pathnew;
__be32 err, cerr;
int host_err;
@@ -1633,7 +1640,9 @@ nfsd_symlink(struct svc_rqst *rqstp, struct svc_fh *fhp,
fh_drop_write(fhp);
- cerr = fh_compose(resfhp, fhp->fh_export, dnew, fhp);
+ pathnew.mnt = fhp->fh_mnt;
+ pathnew.dentry = dnew;
+ cerr = fh_compose(resfhp, fhp->fh_export, &pathnew, fhp);
dput(dnew);
if (err==0) err = cerr;
out:
@@ -2107,7 +2116,7 @@ nfsd_statfs(struct svc_rqst *rqstp, struct svc_fh *fhp, struct kstatfs *stat, in
err = fh_verify(rqstp, fhp, 0, NFSD_MAY_NOP | access);
if (!err) {
struct path path = {
- .mnt = fhp->fh_export->ex_path.mnt,
+ .mnt = fhp->fh_mnt,
.dentry = fhp->fh_dentry,
};
if (vfs_statfs(&path, stat))
diff --git a/fs/nfsd/vfs.h b/fs/nfsd/vfs.h
index b21b76e6b9a8..52f587716208 100644
--- a/fs/nfsd/vfs.h
+++ b/fs/nfsd/vfs.h
@@ -42,13 +42,13 @@ struct nfsd_file;
typedef int (*nfsd_filldir_t)(void *, const char *, int, loff_t, u64, unsigned);
/* nfsd/vfs.c */
-int nfsd_cross_mnt(struct svc_rqst *rqstp, struct dentry **dpp,
+int nfsd_cross_mnt(struct svc_rqst *rqstp, struct path *,
struct svc_export **expp);
__be32 nfsd_lookup(struct svc_rqst *, struct svc_fh *,
const char *, unsigned int, struct svc_fh *);
__be32 nfsd_lookup_dentry(struct svc_rqst *, struct svc_fh *,
const char *, unsigned int,
- struct svc_export **, struct dentry **);
+ struct svc_export **, struct path *);
__be32 nfsd_setattr(struct svc_rqst *, struct svc_fh *,
struct iattr *, int, time64_t);
int nfsd_mountpoint(struct dentry *, struct svc_export *);
@@ -138,7 +138,7 @@ static inline int fh_want_write(struct svc_fh *fh)
if (fh->fh_want_write)
return 0;
- ret = mnt_want_write(fh->fh_export->ex_path.mnt);
+ ret = mnt_want_write(fh->fh_mnt);
if (!ret)
fh->fh_want_write = true;
return ret;
@@ -148,13 +148,13 @@ static inline void fh_drop_write(struct svc_fh *fh)
{
if (fh->fh_want_write) {
fh->fh_want_write = false;
- mnt_drop_write(fh->fh_export->ex_path.mnt);
+ mnt_drop_write(fh->fh_mnt);
}
}
static inline __be32 fh_getattr(const struct svc_fh *fh, struct kstat *stat)
{
- struct path p = {.mnt = fh->fh_export->ex_path.mnt,
+ struct path p = {.mnt = fh->fh_mnt,
.dentry = fh->fh_dentry};
return nfserrno(vfs_getattr(&p, stat, STATX_BASIC_STATS,
AT_STATX_SYNC_AS_STAT));
diff --git a/fs/nfsd/xdr4.h b/fs/nfsd/xdr4.h
index 3e4052e3bd50..8934db5113ac 100644
--- a/fs/nfsd/xdr4.h
+++ b/fs/nfsd/xdr4.h
@@ -763,7 +763,7 @@ void nfsd4_encode_operation(struct nfsd4_compoundres *, struct nfsd4_op *);
void nfsd4_encode_replay(struct xdr_stream *xdr, struct nfsd4_op *op);
__be32 nfsd4_encode_fattr_to_buf(__be32 **p, int words,
struct svc_fh *fhp, struct svc_export *exp,
- struct dentry *dentry,
+ struct vfsmount *mnt, struct dentry *dentry,
u32 *bmval, struct svc_rqst *, int ignore_crossmnt);
extern __be32 nfsd4_setclientid(struct svc_rqst *rqstp,
struct nfsd4_compound_state *, union nfsd4_op_u *u);
When a filesystem has internal mounts, it controls the filehandles
across all those mounts (subvols) in the filesystem. So it is useful to
be able to look up a filehandle against one mount, and get a result which
is in a different mount (part of the same overall file system).
This patch makes that possible by changing exportfs_decode_fh() and
exportfs_decode_fh_raw() to take a vfsmount pointer by reference, and
possibly change the vfsmount pointed to before returning.
The core of the change is in reconnect_path(), which now not only
checks that the dentry is fully connected, but also that the vfsmount
returned has the same 'dev' (as reported by vfs_getattr()) as the
dentry.
If it doesn't, we walk up the dentry parent chain to find the highest
place where the dev changes without there being a mount point, and
trigger an automount there.
As no filesystems yet provide internal mounts, this does not yet change any
behaviour.
In exportfs_decode_fh_raw() we previously tested for DCACHE_DISCONNECTED
before calling reconnect_path(). That test is dropped. It was only a
minor optimisation and is now inconvenient.
The change in overlayfs needs more careful thought than I have yet given
it.
Signed-off-by: NeilBrown <[email protected]>
---
fs/exportfs/expfs.c | 100 +++++++++++++++++++++++++++++++++++++++-------
fs/fhandle.c | 2 -
fs/nfsd/nfsfh.c | 9 +++-
fs/overlayfs/namei.c | 5 ++
fs/xfs/xfs_ioctl.c | 12 ++++--
include/linux/exportfs.h | 4 +-
6 files changed, 106 insertions(+), 26 deletions(-)
diff --git a/fs/exportfs/expfs.c b/fs/exportfs/expfs.c
index 0106eba46d5a..2d7c42137b49 100644
--- a/fs/exportfs/expfs.c
+++ b/fs/exportfs/expfs.c
@@ -207,11 +207,18 @@ static struct dentry *reconnect_one(struct vfsmount *mnt,
* that case reconnect_path may still succeed with target_dir fully
* connected, but further operations using the filehandle will fail when
* necessary (due to S_DEAD being set on the directory).
+ *
+ * If the filesystem supports multiple subvols, then *mntp may be updated
+ * to a subordinate mount point on the same filesystem.
*/
static int
-reconnect_path(struct vfsmount *mnt, struct dentry *target_dir, char *nbuf)
+reconnect_path(struct vfsmount **mntp, struct dentry *target_dir, char *nbuf)
{
+ struct vfsmount *mnt = *mntp;
+ struct path path;
struct dentry *dentry, *parent;
+ struct kstat stat;
+ dev_t target_dev;
dentry = dget(target_dir);
@@ -232,6 +239,68 @@ reconnect_path(struct vfsmount *mnt, struct dentry *target_dir, char *nbuf)
}
dput(dentry);
clear_disconnected(target_dir);
+
+ /* Need to find appropriate vfsmount, which might not exist yet.
+ * We may need to trigger automount points.
+ */
+ path.mnt = mnt;
+ path.dentry = target_dir;
+ vfs_getattr_nosec(&path, &stat, 0, AT_STATX_DONT_SYNC);
+ target_dev = stat.dev;
+
+ path.dentry = mnt->mnt_root;
+ vfs_getattr_nosec(&path, &stat, 0, AT_STATX_DONT_SYNC);
+
+ while (stat.dev != target_dev) {
+ /* walk up the dcache tree from target_dir, recording the
+ * location of the most recent change in dev number,
+ * until we find a mountpoint.
+ * If there was no change in the dev number before the
+ * mountpoint, the vfsmount at the mountpoint is what we want.
+ * If there was, we need to trigger an automount where the
+ * dev number changed.
+ */
+ struct dentry *last_change = NULL;
+ dev_t last_dev = target_dev;
+
+ dentry = dget(target_dir);
+ while ((parent = dget_parent(dentry)) != dentry) {
+ path.dentry = parent;
+ vfs_getattr_nosec(&path, &stat, 0, AT_STATX_DONT_SYNC);
+ if (stat.dev != last_dev) {
+ path.dentry = dentry;
+ mnt = lookup_mnt(&path);
+ if (mnt) {
+ mntput(path.mnt);
+ path.mnt = mnt;
+ break;
+ }
+ dput(last_change);
+ last_change = dget(dentry);
+ last_dev = stat.dev;
+ }
+ dput(dentry);
+ dentry = parent;
+ }
+ dput(dentry); dput(parent);
+
+ if (!last_change)
+ break;
+
+ mnt = path.mnt;
+ path.dentry = last_change;
+ follow_down(&path, LOOKUP_AUTOMOUNT);
+ dput(path.dentry);
+ if (path.mnt == mnt)
+ /* There should have been a mount-trap there,
+ * but there wasn't. Just give up.
+ */
+ break;
+
+ path.dentry = mnt->mnt_root;
+ vfs_getattr_nosec(&path, &stat, 0, AT_STATX_DONT_SYNC);
+ }
+ *mntp = path.mnt;
return 0;
}
@@ -418,12 +487,13 @@ int exportfs_encode_fh(struct dentry *dentry, struct fid *fid, int *max_len,
EXPORT_SYMBOL_GPL(exportfs_encode_fh);
struct dentry *
-exportfs_decode_fh_raw(struct vfsmount *mnt, struct fid *fid, int fh_len,
+exportfs_decode_fh_raw(struct vfsmount **mntp, struct fid *fid, int fh_len,
int fileid_type,
int (*acceptable)(void *, struct dentry *),
void *context)
{
- const struct export_operations *nop = mnt->mnt_sb->s_export_op;
+ struct super_block *sb = (*mntp)->mnt_sb;
+ const struct export_operations *nop = sb->s_export_op;
struct dentry *result, *alias;
char nbuf[NAME_MAX+1];
int err;
@@ -433,7 +503,7 @@ exportfs_decode_fh_raw(struct vfsmount *mnt, struct fid *fid, int fh_len,
*/
if (!nop || !nop->fh_to_dentry)
return ERR_PTR(-ESTALE);
- result = nop->fh_to_dentry(mnt->mnt_sb, fid, fh_len, fileid_type);
+ result = nop->fh_to_dentry(sb, fid, fh_len, fileid_type);
if (IS_ERR_OR_NULL(result))
return result;
@@ -452,14 +522,12 @@ exportfs_decode_fh_raw(struct vfsmount *mnt, struct fid *fid, int fh_len,
*
* On the positive side there is only one dentry for each
* directory inode. On the negative side this implies that we
- * to ensure our dentry is connected all the way up to the
+ * need to ensure our dentry is connected all the way up to the
* filesystem root.
*/
- if (result->d_flags & DCACHE_DISCONNECTED) {
- err = reconnect_path(mnt, result, nbuf);
- if (err)
- goto err_result;
- }
+ err = reconnect_path(mntp, result, nbuf);
+ if (err)
+ goto err_result;
if (!acceptable(context, result)) {
err = -EACCES;
@@ -494,7 +562,7 @@ exportfs_decode_fh_raw(struct vfsmount *mnt, struct fid *fid, int fh_len,
if (!nop->fh_to_parent)
goto err_result;
- target_dir = nop->fh_to_parent(mnt->mnt_sb, fid,
+ target_dir = nop->fh_to_parent(sb, fid,
fh_len, fileid_type);
if (!target_dir)
goto err_result;
@@ -507,7 +575,7 @@ exportfs_decode_fh_raw(struct vfsmount *mnt, struct fid *fid, int fh_len,
* connected to the filesystem root. The VFS really doesn't
* like disconnected directories..
*/
- err = reconnect_path(mnt, target_dir, nbuf);
+ err = reconnect_path(mntp, target_dir, nbuf);
if (err) {
dput(target_dir);
goto err_result;
@@ -518,7 +586,7 @@ exportfs_decode_fh_raw(struct vfsmount *mnt, struct fid *fid, int fh_len,
* dentry for the inode we're after, make sure that our
* inode is actually connected to the parent.
*/
- err = exportfs_get_name(mnt, target_dir, nbuf, result);
+ err = exportfs_get_name(*mntp, target_dir, nbuf, result);
if (err) {
dput(target_dir);
goto err_result;
@@ -556,7 +624,7 @@ exportfs_decode_fh_raw(struct vfsmount *mnt, struct fid *fid, int fh_len,
goto err_result;
}
- return alias;
+ return result;
}
err_result:
@@ -565,14 +633,14 @@ exportfs_decode_fh_raw(struct vfsmount *mnt, struct fid *fid, int fh_len,
}
EXPORT_SYMBOL_GPL(exportfs_decode_fh_raw);
-struct dentry *exportfs_decode_fh(struct vfsmount *mnt, struct fid *fid,
+struct dentry *exportfs_decode_fh(struct vfsmount **mntp, struct fid *fid,
int fh_len, int fileid_type,
int (*acceptable)(void *, struct dentry *),
void *context)
{
struct dentry *ret;
- ret = exportfs_decode_fh_raw(mnt, fid, fh_len, fileid_type,
+ ret = exportfs_decode_fh_raw(mntp, fid, fh_len, fileid_type,
acceptable, context);
if (IS_ERR_OR_NULL(ret)) {
if (ret == ERR_PTR(-ENOMEM))
diff --git a/fs/fhandle.c b/fs/fhandle.c
index 6630c69c23a2..b47c7696469f 100644
--- a/fs/fhandle.c
+++ b/fs/fhandle.c
@@ -149,7 +149,7 @@ static int do_handle_to_path(int mountdirfd, struct file_handle *handle,
}
/* change the handle size to multiple of sizeof(u32) */
handle_dwords = handle->handle_bytes >> 2;
- path->dentry = exportfs_decode_fh(path->mnt,
+ path->dentry = exportfs_decode_fh(&path->mnt,
(struct fid *)handle->f_handle,
handle_dwords, handle->handle_type,
vfs_dentry_acceptable, NULL);
diff --git a/fs/nfsd/nfsfh.c b/fs/nfsd/nfsfh.c
index 0bf7ac13ae50..4023046f63e2 100644
--- a/fs/nfsd/nfsfh.c
+++ b/fs/nfsd/nfsfh.c
@@ -157,6 +157,7 @@ static __be32 nfsd_set_fh_dentry(struct svc_rqst *rqstp, struct svc_fh *fhp)
struct fid *fid = NULL, sfid;
struct svc_export *exp;
struct dentry *dentry;
+ struct vfsmount *mnt = NULL;
int fileid_type;
int data_left = fh->fh_size/4;
__be32 error;
@@ -253,6 +254,8 @@ static __be32 nfsd_set_fh_dentry(struct svc_rqst *rqstp, struct svc_fh *fhp)
if (rqstp->rq_vers > 2)
error = nfserr_badhandle;
+ mnt = mntget(exp->ex_path.mnt);
+
if (fh->fh_version != 1) {
sfid.i32.ino = fh->ofh_ino;
sfid.i32.gen = fh->ofh_generation;
@@ -269,7 +272,7 @@ static __be32 nfsd_set_fh_dentry(struct svc_rqst *rqstp, struct svc_fh *fhp)
if (fileid_type == FILEID_ROOT)
dentry = dget(exp->ex_path.dentry);
else {
- dentry = exportfs_decode_fh_raw(exp->ex_path.mnt, fid,
+ dentry = exportfs_decode_fh_raw(&mnt, fid,
data_left, fileid_type,
nfsd_acceptable, exp);
if (IS_ERR_OR_NULL(dentry)) {
@@ -299,7 +302,7 @@ static __be32 nfsd_set_fh_dentry(struct svc_rqst *rqstp, struct svc_fh *fhp)
}
fhp->fh_dentry = dentry;
- fhp->fh_mnt = mntget(exp->ex_path.mnt);
+ fhp->fh_mnt = mnt;
fhp->fh_export = exp;
switch (rqstp->rq_vers) {
@@ -317,6 +320,7 @@ static __be32 nfsd_set_fh_dentry(struct svc_rqst *rqstp, struct svc_fh *fhp)
return 0;
out:
+ mntput(mnt);
exp_put(exp);
return error;
}
@@ -428,7 +432,6 @@ fh_verify(struct svc_rqst *rqstp, struct svc_fh *fhp, umode_t type, int access)
return error;
}
-
/*
* Compose a file handle for an NFS reply.
*
diff --git a/fs/overlayfs/namei.c b/fs/overlayfs/namei.c
index 210cd6f66e28..0bca19f6df54 100644
--- a/fs/overlayfs/namei.c
+++ b/fs/overlayfs/namei.c
@@ -155,6 +155,7 @@ struct dentry *ovl_decode_real_fh(struct ovl_fs *ofs, struct ovl_fh *fh,
{
struct dentry *real;
int bytes;
+ struct vfsmount *mnt2;
if (!capable(CAP_DAC_READ_SEARCH))
return NULL;
@@ -169,9 +170,11 @@ struct dentry *ovl_decode_real_fh(struct ovl_fs *ofs, struct ovl_fh *fh,
return NULL;
bytes = (fh->fb.len - offsetof(struct ovl_fb, fid));
- real = exportfs_decode_fh(mnt, (struct fid *)fh->fb.fid,
+ mnt2 = mntget(mnt);
+ real = exportfs_decode_fh(&mnt2, (struct fid *)fh->fb.fid,
bytes >> 2, (int)fh->fb.type,
connected ? ovl_acceptable : NULL, mnt);
+ mntput(mnt2);
if (IS_ERR(real)) {
/*
* Treat stale file handle to lower file as "origin unknown".
diff --git a/fs/xfs/xfs_ioctl.c b/fs/xfs/xfs_ioctl.c
index 16039ea10ac9..76eb7d540811 100644
--- a/fs/xfs/xfs_ioctl.c
+++ b/fs/xfs/xfs_ioctl.c
@@ -149,6 +149,8 @@ xfs_handle_to_dentry(
{
xfs_handle_t handle;
struct xfs_fid64 fid;
+ struct dentry *ret;
+ struct vfsmount *mnt;
/*
* Only allow handle opens under a directory.
@@ -168,9 +170,13 @@ xfs_handle_to_dentry(
fid.ino = handle.ha_fid.fid_ino;
fid.gen = handle.ha_fid.fid_gen;
- return exportfs_decode_fh(parfilp->f_path.mnt, (struct fid *)&fid, 3,
- FILEID_INO32_GEN | XFS_FILEID_TYPE_64FLAG,
- xfs_handle_acceptable, NULL);
+ mnt = mntget(parfilp->f_path.mnt);
+ ret = exportfs_decode_fh(&mnt, (struct fid *)&fid, 3,
+ FILEID_INO32_GEN | XFS_FILEID_TYPE_64FLAG,
+ xfs_handle_acceptable, NULL);
+ WARN_ON(mnt != parfilp->f_path.mnt);
+ mntput(mnt);
+ return ret;
}
STATIC struct dentry *
diff --git a/include/linux/exportfs.h b/include/linux/exportfs.h
index fe848901fcc3..9a8c5434a5cf 100644
--- a/include/linux/exportfs.h
+++ b/include/linux/exportfs.h
@@ -228,12 +228,12 @@ extern int exportfs_encode_inode_fh(struct inode *inode, struct fid *fid,
int *max_len, struct inode *parent);
extern int exportfs_encode_fh(struct dentry *dentry, struct fid *fid,
int *max_len, int connectable);
-extern struct dentry *exportfs_decode_fh_raw(struct vfsmount *mnt,
+extern struct dentry *exportfs_decode_fh_raw(struct vfsmount **mntp,
struct fid *fid, int fh_len,
int fileid_type,
int (*acceptable)(void *, struct dentry *),
void *context);
-extern struct dentry *exportfs_decode_fh(struct vfsmount *mnt, struct fid *fid,
+extern struct dentry *exportfs_decode_fh(struct vfsmount **mnt, struct fid *fid,
int fh_len, int fileid_type, int (*acceptable)(void *, struct dentry *),
void *context);
Enhance nfsd to detect internal mounts and to cross them without
requiring a new export.
Also ensure the fsid reported is different for different submounts. We
do this by xoring the ino of the mounted-on directory into the fsid: if
the export's fsid is F and the submount's mounted-on inode number is i,
the submount reports F ^ i, so submounts on different directories get
different fsids. This makes sense for btrfs at least.
Signed-off-by: NeilBrown <[email protected]>
---
fs/nfsd/nfs3xdr.c | 28 +++++++++++++++++++++-------
fs/nfsd/nfs4xdr.c | 34 +++++++++++++++++++++++-----------
fs/nfsd/nfsfh.c | 7 ++++++-
fs/nfsd/vfs.c | 11 +++++++++--
4 files changed, 59 insertions(+), 21 deletions(-)
diff --git a/fs/nfsd/nfs3xdr.c b/fs/nfsd/nfs3xdr.c
index 67af0c5c1543..80b1cc0334fa 100644
--- a/fs/nfsd/nfs3xdr.c
+++ b/fs/nfsd/nfs3xdr.c
@@ -370,6 +370,8 @@ svcxdr_encode_fattr3(struct svc_rqst *rqstp, struct xdr_stream *xdr,
case FSIDSOURCE_UUID:
fsid = ((u64 *)fhp->fh_export->ex_uuid)[0];
fsid ^= ((u64 *)fhp->fh_export->ex_uuid)[1];
+ if (fhp->fh_mnt != fhp->fh_export->ex_path.mnt)
+ fsid ^= nfsd_get_mounted_on(fhp->fh_mnt);
break;
default:
fsid = (u64)huge_encode_dev(fhp->fh_dentry->d_sb->s_dev);
@@ -1094,8 +1096,8 @@ compose_entry_fh(struct nfsd3_readdirres *cd, struct svc_fh *fhp,
__be32 rv = nfserr_noent;
dparent = cd->fh.fh_dentry;
- exp = cd->fh.fh_export;
- child.mnt = cd->fh.fh_mnt;
+ exp = exp_get(cd->fh.fh_export);
+ child.mnt = mntget(cd->fh.fh_mnt);
if (isdotent(name, namlen)) {
if (namlen == 2) {
@@ -1112,15 +1114,27 @@ compose_entry_fh(struct nfsd3_readdirres *cd, struct svc_fh *fhp,
child.dentry = dget(dparent);
} else
child.dentry = lookup_positive_unlocked(name, dparent, namlen);
- if (IS_ERR(child.dentry))
+ if (IS_ERR(child.dentry)) {
+ mntput(child.mnt);
+ exp_put(exp);
return rv;
- if (d_mountpoint(child.dentry))
- goto out;
- if (child.dentry->d_inode->i_ino != ino)
+ }
+ /* If child is a mountpoint, then we want to expose that fact
+ * so the client can create a mountpoint. If not, then a different
+ * ino number probably means a race with rename, so avoid providing
+ * too much detail.
+ */
+ if (nfsd_mountpoint(child.dentry, exp)) {
+ int err;
+ err = nfsd_cross_mnt(cd->rqstp, &child, &exp);
+ if (err)
+ goto out;
+ } else if (child.dentry->d_inode->i_ino != ino)
goto out;
rv = fh_compose(fhp, exp, &child, &cd->fh);
out:
- dput(child.dentry);
+ path_put(&child);
+ exp_put(exp);
return rv;
}
diff --git a/fs/nfsd/nfs4xdr.c b/fs/nfsd/nfs4xdr.c
index d5683b6a74b2..4dbc99ed2c8b 100644
--- a/fs/nfsd/nfs4xdr.c
+++ b/fs/nfsd/nfs4xdr.c
@@ -2817,6 +2817,8 @@ nfsd4_encode_fattr(struct xdr_stream *xdr, struct svc_fh *fhp,
struct kstat stat;
struct svc_fh *tempfh = NULL;
struct kstatfs statfs;
+ u64 mounted_on_ino;
+ u64 sub_fsid;
__be32 *p;
int starting_len = xdr->buf->len;
int attrlen_offset;
@@ -2871,6 +2873,24 @@ nfsd4_encode_fattr(struct xdr_stream *xdr, struct svc_fh *fhp,
goto out;
fhp = tempfh;
}
+ if ((bmval0 & FATTR4_WORD0_FSID) ||
+ (bmval1 & FATTR4_WORD1_MOUNTED_ON_FILEID)) {
+ mounted_on_ino = stat.ino;
+ sub_fsid = 0;
+ /*
+ * The inode number that the current mnt is mounted on is
+ * used for MOUNTED_ON_FILEID if we are at the root,
+ * and for sub_fsid if mnt is not the export mnt.
+ */
+ if (ignore_crossmnt == 0) {
+ u64 moi = nfsd_get_mounted_on(mnt);
+
+ if (dentry == mnt->mnt_root && moi)
+ mounted_on_ino = moi;
+ if (mnt != exp->ex_path.mnt)
+ sub_fsid = moi;
+ }
+ }
if (bmval0 & FATTR4_WORD0_ACL) {
err = nfsd4_get_nfs4_acl(rqstp, dentry, &acl);
if (err == -EOPNOTSUPP)
@@ -3008,6 +3028,8 @@ nfsd4_encode_fattr(struct xdr_stream *xdr, struct svc_fh *fhp,
case FSIDSOURCE_UUID:
p = xdr_encode_opaque_fixed(p, exp->ex_uuid,
EX_UUID_LEN);
+ if (mnt != exp->ex_path.mnt)
+ *(u64*)(p-2) ^= sub_fsid;
break;
}
}
@@ -3253,20 +3275,10 @@ nfsd4_encode_fattr(struct xdr_stream *xdr, struct svc_fh *fhp,
*p++ = cpu_to_be32(stat.mtime.tv_nsec);
}
if (bmval1 & FATTR4_WORD1_MOUNTED_ON_FILEID) {
- u64 ino;
-
p = xdr_reserve_space(xdr, 8);
if (!p)
goto out_resource;
- /*
- * Get parent's attributes if not ignoring crossmount
- * and this is the root of a cross-mounted filesystem.
- */
- if (ignore_crossmnt == 0 && dentry == mnt->mnt_root)
- ino = nfsd_get_mounted_on(mnt);
- if (!ino)
- ino = stat.ino;
- p = xdr_encode_hyper(p, ino);
+ p = xdr_encode_hyper(p, mounted_on_ino);
}
#ifdef CONFIG_NFSD_PNFS
if (bmval1 & FATTR4_WORD1_FS_LAYOUT_TYPES) {
diff --git a/fs/nfsd/nfsfh.c b/fs/nfsd/nfsfh.c
index 4023046f63e2..4b53838bca89 100644
--- a/fs/nfsd/nfsfh.c
+++ b/fs/nfsd/nfsfh.c
@@ -9,7 +9,7 @@
*/
#include <linux/exportfs.h>
-
+#include <linux/namei.h>
#include <linux/sunrpc/svcauth_gss.h>
#include "nfsd.h"
#include "vfs.h"
@@ -285,6 +285,11 @@ static __be32 nfsd_set_fh_dentry(struct svc_rqst *rqstp, struct svc_fh *fhp)
default:
dentry = ERR_PTR(-ESTALE);
}
+ } else if (nfsd_mountpoint(dentry, exp)) {
+ struct path path = { .mnt = mnt, .dentry = dentry };
+ follow_down(&path, LOOKUP_AUTOMOUNT);
+ mnt = path.mnt;
+ dentry = path.dentry;
}
}
if (dentry == NULL)
diff --git a/fs/nfsd/vfs.c b/fs/nfsd/vfs.c
index baa12ac36ece..22523e1cd478 100644
--- a/fs/nfsd/vfs.c
+++ b/fs/nfsd/vfs.c
@@ -64,7 +64,7 @@ nfsd_cross_mnt(struct svc_rqst *rqstp, struct path *path_parent,
.dentry = dget(path_parent->dentry)};
int err = 0;
- err = follow_down(&path, 0);
+ err = follow_down(&path, LOOKUP_AUTOMOUNT);
if (err < 0)
goto out;
if (path.mnt == path_parent->mnt && path.dentry == path_parent->dentry &&
@@ -73,6 +73,13 @@ nfsd_cross_mnt(struct svc_rqst *rqstp, struct path *path_parent,
path_put(&path);
goto out;
}
+ if (mount_is_internal(path.mnt)) {
+ /* Use the new path, but don't look for a new export */
+ /* FIXME should I check NOHIDE in this case?? */
+ path_put(path_parent);
+ *path_parent = path;
+ goto out;
+ }
exp2 = rqst_exp_get_by_name(rqstp, &path);
if (IS_ERR(exp2)) {
@@ -157,7 +164,7 @@ int nfsd_mountpoint(struct dentry *dentry, struct svc_export *exp)
return 1;
if (nfsd4_is_junction(dentry))
return 1;
- if (d_mountpoint(dentry))
+ if (d_managed(dentry))
/*
* Might only be a mountpoint in a different namespace,
* but we need to check.
A btrfs directory entry can refer to two different sorts of objects:
BTRFS_INODE_ITEM_KEY - a regular fs object (file, dir, etc)
BTRFS_ROOT_ITEM_KEY - a reference to a subvol.
The 'objectid' numbers for these two are independent, so it is possible
(and common) for an INODE and a ROOT to have the same objectid.
As readdir reports the objectid as the inode number, if two such
objects are in the same directory, a tool which examines the inode
numbers in getdents results could think they are hard links (a small
userspace demonstration follows below).
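For example, a userspace scan of d_ino values would show the collision
(a sketch only; the directory path is hypothetical):

#include <stdio.h>
#include <dirent.h>

int main(void)
{
	/* hypothetical directory containing both a subvol and a
	 * regular file whose objectids happen to be equal
	 */
	DIR *d = opendir("/mnt/btrfs/dir");
	struct dirent *de;

	if (!d) {
		perror("opendir");
		return 1;
	}
	/* Without the mapping introduced below, the subvol and the
	 * file report the same d_ino here, looking like hard links.
	 */
	while ((de = readdir(d)) != NULL)
		printf("%llu\t%s\n",
		       (unsigned long long)de->d_ino, de->d_name);
	closedir(d);
	return 0;
}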
As the BTRFS_ROOT_ITEM_KEY objectid is not visible via stat() (only
via getdents), this is rarely if ever a problem. However, a future
patch will expose this number as the i_ino of an automount point. This
will cause problems if the objectid is used as-is.
So: create a simple mapping function to reduce (or eliminate?) the
possibility of conflict. The objectid of BTRFS_ROOT_ITEM_KEY is
subtracted from ULONG_MAX to make an inode number.
Signed-off-by: NeilBrown <[email protected]>
---
fs/btrfs/btrfs_inode.h | 10 ++++++++++
fs/btrfs/inode.c | 3 ++-
2 files changed, 12 insertions(+), 1 deletion(-)
diff --git a/fs/btrfs/btrfs_inode.h b/fs/btrfs/btrfs_inode.h
index c652e19ad74e..a4b5f38196e6 100644
--- a/fs/btrfs/btrfs_inode.h
+++ b/fs/btrfs/btrfs_inode.h
@@ -328,6 +328,16 @@ static inline bool btrfs_inode_in_log(struct btrfs_inode *inode, u64 generation)
return ret;
}
+static inline unsigned long btrfs_location_to_ino(struct btrfs_key *location)
+{
+ if (location->type == BTRFS_INODE_ITEM_KEY)
+ return location->objectid;
+ /* Probably BTRFS_ROOT_ITEM_KEY, try to keep the inode
+ * numbers separate.
+ */
+ return ULONG_MAX - location->objectid;
+}
+
struct btrfs_dio_private {
struct inode *inode;
u64 logical_offset;
diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
index 8f60314c36c5..02537c1a9763 100644
--- a/fs/btrfs/inode.c
+++ b/fs/btrfs/inode.c
@@ -6136,7 +6136,8 @@ static int btrfs_real_readdir(struct file *file, struct dir_context *ctx)
put_unaligned(fs_ftype_to_dtype(btrfs_dir_type(leaf, di)),
&entry->type);
btrfs_dir_item_key_to_cpu(leaf, di, &location);
- put_unaligned(location.objectid, &entry->ino);
+ put_unaligned(btrfs_location_to_ino(&location),
+ &entry->ino);
put_unaligned(found_key.offset, &entry->offset);
entries++;
addr += sizeof(struct dir_entry) + name_len;
All subvol roots are now marked as automounts. If the d_automount()
function determines that the dentry is not the root of the vfsmount, it
creates a simple loop-back mount of the dentry onto itself. If it
determines that it IS the root of the vfsmount, it returns -EISDIR so
that no further automounting is attempted.
btrfs_getattr() pays special attention to these automount dentries.
If the dentry is NOT the root of the vfsmount:
- the ->dev is reported as that for the rest of the vfsmount
- the ->ino is reported as the subvol objectid, suitably transformed
to avoid collision.
This way the same inode appears to be different depending on which
mount it is seen in.
Automounted vfsmounts are kept on a list and expire 500 to 1000 seconds
after last use: the expiry worker runs every 500 seconds, and a mount
is only unmounted once it has been found unused on two consecutive
runs. The period is configurable via a module parameter.
The tracking and timeout of automounts is copied from NFS.
Signed-off-by: NeilBrown <[email protected]>
---
fs/btrfs/btrfs_inode.h | 2 +
fs/btrfs/inode.c | 108 ++++++++++++++++++++++++++++++++++++++++++++++++
fs/btrfs/super.c | 1
3 files changed, 111 insertions(+)
diff --git a/fs/btrfs/btrfs_inode.h b/fs/btrfs/btrfs_inode.h
index a4b5f38196e6..f03056cacc4a 100644
--- a/fs/btrfs/btrfs_inode.h
+++ b/fs/btrfs/btrfs_inode.h
@@ -387,4 +387,6 @@ static inline void btrfs_print_data_csum_error(struct btrfs_inode *inode,
mirror_num);
}
+void btrfs_release_automount_timer(void);
+
#endif
diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
index 02537c1a9763..a5f46545fb38 100644
--- a/fs/btrfs/inode.c
+++ b/fs/btrfs/inode.c
@@ -31,6 +31,7 @@
#include <linux/migrate.h>
#include <linux/sched/mm.h>
#include <linux/iomap.h>
+#include <linux/fs_context.h>
#include <asm/unaligned.h>
#include "misc.h"
#include "ctree.h"
@@ -5782,6 +5783,8 @@ static int btrfs_init_locked_inode(struct inode *inode, void *p)
struct btrfs_iget_args *args = p;
inode->i_ino = args->ino;
+ if (args->ino == BTRFS_FIRST_FREE_OBJECTID)
+ inode->i_flags |= S_AUTOMOUNT;
BTRFS_I(inode)->location.objectid = args->ino;
BTRFS_I(inode)->location.type = BTRFS_INODE_ITEM_KEY;
BTRFS_I(inode)->location.offset = 0;
@@ -5985,6 +5988,101 @@ static int btrfs_dentry_delete(const struct dentry *dentry)
return 0;
}
+static void btrfs_expire_automounts(struct work_struct *work);
+static LIST_HEAD(btrfs_automount_list);
+static DECLARE_DELAYED_WORK(btrfs_automount_task, btrfs_expire_automounts);
+int btrfs_mountpoint_expiry_timeout = 500 * HZ;
+static void btrfs_expire_automounts(struct work_struct *work)
+{
+ struct list_head *list = &btrfs_automount_list;
+ int timeout = READ_ONCE(btrfs_mountpoint_expiry_timeout);
+
+ mark_mounts_for_expiry(list);
+ if (!list_empty(list) && timeout > 0)
+ schedule_delayed_work(&btrfs_automount_task, timeout);
+}
+
+void btrfs_release_automount_timer(void)
+{
+ if (list_empty(&btrfs_automount_list))
+ cancel_delayed_work(&btrfs_automount_task);
+}
+
+static struct vfsmount *btrfs_automount(struct path *path)
+{
+ struct fs_context fc;
+ struct vfsmount *mnt;
+ int timeout = READ_ONCE(btrfs_mountpoint_expiry_timeout);
+
+ if (path->dentry == path->mnt->mnt_root)
+ /* dentry is root of the vfsmount,
+ * so skip automount processing
+ */
+ return ERR_PTR(-EISDIR);
+ /* Create a bind-mount to expose the subvol in the mount table */
+ fc.root = path->dentry;
+ fc.sb_flags = 0;
+ fc.source = "btrfs-automount";
+ mnt = vfs_create_mount(&fc);
+ if (IS_ERR(mnt))
+ return mnt;
+ mntget(mnt);
+ mnt_set_expiry(mnt, &btrfs_automount_list);
+ if (timeout > 0)
+ schedule_delayed_work(&btrfs_automount_task, timeout);
+ return mnt;
+}
+
+static int param_set_btrfs_timeout(const char *val, const struct kernel_param *kp)
+{
+ long num;
+ int ret;
+
+ if (!val)
+ return -EINVAL;
+ ret = kstrtol(val, 0, &num);
+ if (ret)
+ return -EINVAL;
+ if (num > 0) {
+ if (num >= INT_MAX / HZ)
+ num = INT_MAX;
+ else
+ num *= HZ;
+ *((int *)kp->arg) = num;
+ if (!list_empty(&btrfs_automount_list))
+ mod_delayed_work(system_wq, &btrfs_automount_task, num);
+ } else {
+ *((int *)kp->arg) = -1*HZ;
+ cancel_delayed_work(&btrfs_automount_task);
+ }
+ return 0;
+}
+
+static int param_get_btrfs_timeout(char *buffer, const struct kernel_param *kp)
+{
+ long num = *((int *)kp->arg);
+
+ if (num > 0) {
+ if (num >= INT_MAX - (HZ - 1))
+ num = INT_MAX / HZ;
+ else
+ num = (num + (HZ - 1)) / HZ;
+ } else
+ num = -1;
+ return scnprintf(buffer, PAGE_SIZE, "%li\n", num);
+}
+
+static const struct kernel_param_ops param_ops_btrfs_timeout = {
+ .set = param_set_btrfs_timeout,
+ .get = param_get_btrfs_timeout,
+};
+#define param_check_btrfs_timeout(name, p) __param_check(name, p, int)
+
+module_param(btrfs_mountpoint_expiry_timeout, btrfs_timeout, 0644);
+MODULE_PARM_DESC(btrfs_mountpoint_expiry_timeout,
+ "Set the btrfs automounted mountpoint timeout value (seconds). "
+ "Values <= 0 turn expiration off.");
+
static struct dentry *btrfs_lookup(struct inode *dir, struct dentry *dentry,
unsigned int flags)
{
@@ -9195,6 +9293,15 @@ static int btrfs_getattr(struct user_namespace *mnt_userns,
generic_fillattr(&init_user_ns, inode, stat);
stat->dev = BTRFS_I(inode)->root->anon_dev;
+ if ((inode->i_flags & S_AUTOMOUNT) &&
+ path->dentry != path->mnt->mnt_root) {
+ /* This is the mounted-on side of the automount,
+ * so we show the inode number from the ROOT_ITEM key
+ * and the dev of the mountpoint.
+ */
+ stat->ino = btrfs_location_to_ino(&BTRFS_I(inode)->root->root_key);
+ stat->dev = BTRFS_I(d_inode(path->mnt->mnt_root))->root->anon_dev;
+ }
spin_lock(&BTRFS_I(inode)->lock);
delalloc_bytes = BTRFS_I(inode)->new_delalloc_bytes;
@@ -10844,4 +10951,5 @@ static const struct inode_operations btrfs_symlink_inode_operations = {
const struct dentry_operations btrfs_dentry_operations = {
.d_delete = btrfs_dentry_delete,
+ .d_automount = btrfs_automount,
};
diff --git a/fs/btrfs/super.c b/fs/btrfs/super.c
index d07b18b2b250..33008e432a15 100644
--- a/fs/btrfs/super.c
+++ b/fs/btrfs/super.c
@@ -338,6 +338,7 @@ void __btrfs_panic(struct btrfs_fs_info *fs_info, const char *function,
static void btrfs_put_super(struct super_block *sb)
{
close_ctree(btrfs_sb(sb));
+ btrfs_release_automount_timer();
}
enum {
This patch introduces the concept of an "internal" mount, which is a
mount where a filesystem has created the mount itself.
Both the mounted-on-dentry and the mount's root dentry must refer to the
same superblock (they may be the same dentry), and the mounted-on dentry
must be an automount.
Signed-off-by: NeilBrown <[email protected]>
---
fs/namespace.c | 29 +++++++++++++++++++++++++++++
include/linux/mount.h | 2 ++
2 files changed, 31 insertions(+)
diff --git a/fs/namespace.c b/fs/namespace.c
index 73bbdb921e24..a14efbccfb03 100644
--- a/fs/namespace.c
+++ b/fs/namespace.c
@@ -1273,6 +1273,35 @@ bool path_is_mountpoint(const struct path *path)
}
EXPORT_SYMBOL(path_is_mountpoint);
+/**
+ * mount_is_internal() - Check whether a mount is internal to a single filesystem
+ * @mnt: vfsmount to check
+ *
+ * Some filesystems present multiple file-sets using a single
+ * superblock, such as btrfs with multiple subvolumes. Names within a
+ * parent filesystem which lead to a subordinate filesystem are
+ * implemented as automounts so that the structure is visible in the
+ * mount table. nfsd needs visibility into this arrangement so that it
+ * can determine if a mountpoint requires a new export, or is completely
+ * covered by an existing mount.
+ *
+ * An "internal" mount is one where the parent and child have the same
+ * superblock, and the mounted-on dentry is "managed" as an automount. A
+ * filehandle found for an inode in the child can be looked-up using either
+ * vfsmount.
+ */
+bool mount_is_internal(struct vfsmount *mnt)
+{
+ struct mount *m = real_mount(mnt);
+
+ if (!mnt_has_parent(m))
+ return false;
+ if (m->mnt_parent->mnt.mnt_sb != m->mnt.mnt_sb)
+ return false;
+ return m->mnt_mountpoint->d_flags & DCACHE_NEED_AUTOMOUNT;
+}
+EXPORT_SYMBOL(mount_is_internal);
+
struct vfsmount *mnt_clone_internal(const struct path *path)
{
struct mount *p;
diff --git a/include/linux/mount.h b/include/linux/mount.h
index 1d3daed88f83..ab58087728ba 100644
--- a/include/linux/mount.h
+++ b/include/linux/mount.h
@@ -118,6 +118,8 @@ extern unsigned int sysctl_mount_max;
extern bool path_is_mountpoint(const struct path *path);
+extern bool mount_is_internal(struct vfsmount *mnt);
+
extern struct vfsmount *lookup_mnt(const struct path *);
extern void kern_unmount_array(struct vfsmount *mnt[], unsigned int num);
In order to support filehandle lookup in filesystems with internal
mounts (multiple subvols in the one filesystem), reconnect_path() in
exportfs will need to find out whether a given dentry has something
mounted on it.
This can be done with the function lookup_mnt(), so export that to make
it available.
Signed-off-by: NeilBrown <[email protected]>
---
fs/internal.h | 1 -
fs/namespace.c | 1 +
include/linux/mount.h | 2 ++
3 files changed, 3 insertions(+), 1 deletion(-)
diff --git a/fs/internal.h b/fs/internal.h
index 3ce8edbaa3ca..0feb2722d2e5 100644
--- a/fs/internal.h
+++ b/fs/internal.h
@@ -81,7 +81,6 @@ int do_renameat2(int olddfd, struct filename *oldname, int newdfd,
/*
* namespace.c
*/
-extern struct vfsmount *lookup_mnt(const struct path *);
extern int finish_automount(struct vfsmount *, struct path *);
extern int sb_prepare_remount_readonly(struct super_block *);
diff --git a/fs/namespace.c b/fs/namespace.c
index 81b0f2b2e701..73bbdb921e24 100644
--- a/fs/namespace.c
+++ b/fs/namespace.c
@@ -662,6 +662,7 @@ struct vfsmount *lookup_mnt(const struct path *path)
rcu_read_unlock();
return m;
}
+EXPORT_SYMBOL(lookup_mnt);
static inline void lock_ns_list(struct mnt_namespace *ns)
{
diff --git a/include/linux/mount.h b/include/linux/mount.h
index 5d92a7e1a742..1d3daed88f83 100644
--- a/include/linux/mount.h
+++ b/include/linux/mount.h
@@ -118,6 +118,8 @@ extern unsigned int sysctl_mount_max;
extern bool path_is_mountpoint(const struct path *path);
+extern struct vfsmount *lookup_mnt(const struct path *);
+
extern void kern_unmount_array(struct vfsmount *mnt[], unsigned int num);
#endif /* _LINUX_MOUNT_H */
get_parent_attributes() is only used to get the inode number of the
mounted-on directory. So change it to only do that and call it
nfsd_get_mounted_on().
It will eventually be used by nfs3 as well as nfs4, so move it to vfs.c.
Signed-off-by: NeilBrown <[email protected]>
---
fs/nfsd/nfs4xdr.c | 29 +++++------------------------
fs/nfsd/vfs.c | 18 ++++++++++++++++++
fs/nfsd/vfs.h | 2 ++
3 files changed, 25 insertions(+), 24 deletions(-)
diff --git a/fs/nfsd/nfs4xdr.c b/fs/nfsd/nfs4xdr.c
index 21c277fa28ae..d5683b6a74b2 100644
--- a/fs/nfsd/nfs4xdr.c
+++ b/fs/nfsd/nfs4xdr.c
@@ -2768,22 +2768,6 @@ static __be32 fattr_handle_absent_fs(u32 *bmval0, u32 *bmval1, u32 *bmval2, u32
return 0;
}
-
-static int get_parent_attributes(struct svc_export *exp, struct kstat *stat)
-{
- struct path path = exp->ex_path;
- int err;
-
- path_get(&path);
- while (follow_up(&path)) {
- if (path.dentry != path.mnt->mnt_root)
- break;
- }
- err = vfs_getattr(&path, stat, STATX_BASIC_STATS, AT_STATX_SYNC_AS_STAT);
- path_put(&path);
- return err;
-}
-
static __be32
nfsd4_encode_bitmap(struct xdr_stream *xdr, u32 bmval0, u32 bmval1, u32 bmval2)
{
@@ -3269,8 +3253,7 @@ nfsd4_encode_fattr(struct xdr_stream *xdr, struct svc_fh *fhp,
*p++ = cpu_to_be32(stat.mtime.tv_nsec);
}
if (bmval1 & FATTR4_WORD1_MOUNTED_ON_FILEID) {
- struct kstat parent_stat;
- u64 ino = stat.ino;
+		u64 ino = 0;
p = xdr_reserve_space(xdr, 8);
if (!p)
@@ -3279,12 +3262,10 @@ nfsd4_encode_fattr(struct xdr_stream *xdr, struct svc_fh *fhp,
* Get parent's attributes if not ignoring crossmount
* and this is the root of a cross-mounted filesystem.
*/
- if (ignore_crossmnt == 0 && dentry == mnt->mnt_root) {
- err = get_parent_attributes(exp, &parent_stat);
- if (err)
- goto out_nfserr;
- ino = parent_stat.ino;
- }
+ if (ignore_crossmnt == 0 && dentry == mnt->mnt_root)
+ ino = nfsd_get_mounted_on(mnt);
+ if (!ino)
+ ino = stat.ino;
p = xdr_encode_hyper(p, ino);
}
#ifdef CONFIG_NFSD_PNFS
diff --git a/fs/nfsd/vfs.c b/fs/nfsd/vfs.c
index c0c6920f25a4..baa12ac36ece 100644
--- a/fs/nfsd/vfs.c
+++ b/fs/nfsd/vfs.c
@@ -2445,3 +2445,21 @@ nfsd_permission(struct svc_rqst *rqstp, struct svc_export *exp,
return err? nfserrno(err) : 0;
}
+
+unsigned long nfsd_get_mounted_on(struct vfsmount *mnt)
+{
+ struct kstat stat;
+ struct path path = { .mnt = mnt, .dentry = mnt->mnt_root };
+ int err;
+
+ path_get(&path);
+ while (follow_up(&path)) {
+ if (path.dentry != path.mnt->mnt_root)
+ break;
+ }
+ err = vfs_getattr(&path, &stat, STATX_INO, AT_STATX_DONT_SYNC);
+ path_put(&path);
+ if (err)
+ return 0;
+ return stat.ino;
+}
diff --git a/fs/nfsd/vfs.h b/fs/nfsd/vfs.h
index 52f587716208..11ac36b21b4c 100644
--- a/fs/nfsd/vfs.h
+++ b/fs/nfsd/vfs.h
@@ -132,6 +132,8 @@ __be32 nfsd_statfs(struct svc_rqst *, struct svc_fh *,
__be32 nfsd_permission(struct svc_rqst *, struct svc_export *,
struct dentry *, int);
+unsigned long nfsd_get_mounted_on(struct vfsmount *mnt);
+
static inline int fh_want_write(struct svc_fh *fh)
{
int ret;
On Wed, Jul 28, 2021 at 08:37:45AM +1000, NeilBrown wrote:
> We can enhance nfsd to understand that some automounts can be managed.
> "internal mounts" where a filesystem provides an automount point and
> mounts its own directories, can be handled differently by nfsd.
>
> This series addresses all these issues. After a few enhancements to the
> VFS to provide needed support, they enhance exportfs and nfsd to cope
> with the concept of internal mounts, and then enhance btrfs to provide
> them.
I'm half asleep right now; will review and reply in detail tomorrow.
The first impression is that it's going to be brittle; properties of a
mount really should not depend upon the nature of the mountpoint - too
many ways for that to go wrong.
Anyway, tomorrow...
On Wed, Jul 28, 2021 at 08:37:45AM +1000, NeilBrown wrote:
> This patch introduces the concept of an "internal" mount, which is a
> mount where a filesystem has created the mount itself.
>
> Both the mounted-on-dentry and the mount's root dentry must refer to the
> same superblock (they may be the same dentry), and the mounted-on dentry
> must be an automount.
And what happens if you mount --move it?
On Wed, 28 Jul 2021, Al Viro wrote:
> On Wed, Jul 28, 2021 at 08:37:45AM +1000, NeilBrown wrote:
> > This patch introduces the concept of an "internal" mount, which is a
> > mount where a filesystem has created the mount itself.
> >
> > Both the mounted-on-dentry and the mount's root dentry must refer to the
> > same superblock (they may be the same dentry), and the mounted-on dentry
> > must be an automount.
>
> And what happens if you mount --move it?
>
>
If you move the mount, then the mounted-on dentry would no longer be an
automount (.... I assume???...) so it would no longer satisfy
mount_is_internal().
I think that is reasonable. Whoever moved the mount has now taken over
responsibility for it - it is no longer controlled by the filesystem.
The move will have removed the mount from the list of auto-expire
mounts, and the mount-trap will now be exposed and can be mounted-on
again.
It would be just like unmounting the automount, and bind-mounting the
same dentry elsewhere.
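To spell that out against the helper itself - this is mount_is_internal()
from earlier in the series, annotated for the mount --move case:

	bool mount_is_internal(struct vfsmount *mnt)
	{
		struct mount *m = real_mount(mnt);

		/* a moved mount still has a parent, so this still passes */
		if (!mnt_has_parent(m))
			return false;
		/* fails if the mount was moved onto a different filesystem */
		if (m->mnt_parent->mnt.mnt_sb != m->mnt.mnt_sb)
			return false;
		/* fails because the new mountpoint is (almost certainly)
		 * not itself an automount point */
		return m->mnt_mountpoint->d_flags & DCACHE_NEED_AUTOMOUNT;
	}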
NeilBrown
Hi,
We no longer need the dummy inode (BTRFS_FIRST_FREE_OBJECTID - 1) in this
patch series?
I tried to backport it to 5.10.x, but it failed to work.
There were no big modifications in this 5.10.x backport, and all modified
patches are attached.
Best Regards
Wang Yugui ([email protected])
2021/07/28
Hi,
This patchset works well in 5.14-rc3.
1. The fixed dummy inode (255, BTRFS_FIRST_FREE_OBJECTID - 1) is changed
to a dynamic dummy inode (18446744073709551358, 18446744073709551359, ...).
2. btrfs subvol mount info is shown in /proc/mounts, even if nfsd/nfs is
not used.
/dev/sdc btrfs 94G 3.5M 93G 1% /mnt/test
/dev/sdc btrfs 94G 3.5M 93G 1% /mnt/test/sub1
/dev/sdc btrfs 94G 3.5M 93G 1% /mnt/test/sub2
This is a visible feature change for btrfs users.
Best Regards
Wang Yugui ([email protected])
2021/07/28
> Hi,
>
> We no longer need the dummy inode (BTRFS_FIRST_FREE_OBJECTID - 1) in this
> patch series?
>
> I tried to backport it to 5.10.x, but it failed to work.
> There were no big modifications in this 5.10.x backport, and all modified
> patches are attached.
>
> Best Regards
> Wang Yugui ([email protected])
> 2021/07/28
>
On Wed, 28 Jul 2021, Wang Yugui wrote:
> Hi,
>
> This patchset works well in 5.14-rc3.
Thanks for testing.
>
> 1. The fixed dummy inode (255, BTRFS_FIRST_FREE_OBJECTID - 1) is changed
> to a dynamic dummy inode (18446744073709551358, 18446744073709551359, ...).
The BTRFS_FIRST_FREE_OBJECTID-1 value was just a hack; I never wanted it
to be permanent.
The new number is ULONG_MAX - subvol_id (where subvol_id starts at 257 I
think).
This is a bit less of a hack. It is an easily available number that is
fairly unique.
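For concreteness, a small user-space sketch of that mapping - it mirrors
btrfs_location_to_ino() from the first patch and assumes a 64-bit
unsigned long:

	#include <stdio.h>
	#include <limits.h>

	int main(void)
	{
		/* subvol ids as btrfs allocates them, starting at
		 * BTRFS_FIRST_FREE_OBJECTID (256) */
		unsigned long subvol_ids[] = { 256, 257 };

		for (int i = 0; i < 2; i++)
			printf("subvol %lu -> dummy ino %lu\n",
			       subvol_ids[i], ULONG_MAX - subvol_ids[i]);
		return 0;
	}

This prints 18446744073709551359 and 18446744073709551358 - the numbers
you observed.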
>
> 2. btrfs subvol mount info is shown in /proc/mounts, even if nfsd/nfs is
> not used.
> /dev/sdc btrfs 94G 3.5M 93G 1% /mnt/test
> /dev/sdc btrfs 94G 3.5M 93G 1% /mnt/test/sub1
> /dev/sdc btrfs 94G 3.5M 93G 1% /mnt/test/sub2
>
> This is a visible feature change for btrfs users.
Hopefully it is an improvement. But it is certainly a change that needs
to be carefully considered.
Thanks,
NeilBrown
On Wed, 28 Jul 2021, Wang Yugui wrote:
> Hi,
>
> We no longer need the dummy inode (BTRFS_FIRST_FREE_OBJECTID - 1) in this
> patch series?
No.
>
> I tried to backport it to 5.10.x, but it failed to work.
> There were no big modifications in this 5.10.x backport, and all modified
> patches are attached.
I'm not surprised, but I doubt there would be a major barrier to making
it work. I'm unlikely to try until I have more positive reviews.
Thanks,
NeilBrown
fs/btrfs/inode.c:5868:5: warning: symbol 'btrfs_mountpoint_expiry_timeout' was not declared. Should it be static?
Reported-by: kernel test robot <[email protected]>
Signed-off-by: kernel test robot <[email protected]>
---
inode.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
index 8667a26d684d4..4f9472e41074a 100644
--- a/fs/btrfs/inode.c
+++ b/fs/btrfs/inode.c
@@ -5865,7 +5865,7 @@ static int btrfs_dentry_delete(const struct dentry *dentry)
static void btrfs_expire_automounts(struct work_struct *work);
static LIST_HEAD(btrfs_automount_list);
static DECLARE_DELAYED_WORK(btrfs_automount_task, btrfs_expire_automounts);
-int btrfs_mountpoint_expiry_timeout = 500 * HZ;
+static int btrfs_mountpoint_expiry_timeout = 500 * HZ;
static void btrfs_expire_automounts(struct work_struct *work)
{
struct list_head *list = &btrfs_automount_list;
Hi NeilBrown,
Thank you for the patch! Perhaps something to improve:
[auto build test WARNING on nfsd/nfsd-next]
[also build test WARNING on kdave/for-next hch-configfs/for-next linus/master v5.14-rc3 next-20210727]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch]
url: https://github.com/0day-ci/linux/commits/NeilBrown/expose-btrfs-subvols-in-mount-table-correctly/20210728-064502
base: git://linux-nfs.org/~bfields/linux.git nfsd-next
config: i386-randconfig-s002-20210728 (attached as .config)
compiler: gcc-10 (Ubuntu 10.3.0-1ubuntu1~20.04) 10.3.0
reproduce:
# apt-get install sparse
# sparse version: v0.6.3-341-g8af24329-dirty
# https://github.com/0day-ci/linux/commit/58749022685aea90dfddfb9f8b2fcdc74dee6ec0
git remote add linux-review https://github.com/0day-ci/linux
git fetch --no-tags linux-review NeilBrown/expose-btrfs-subvols-in-mount-table-correctly/20210728-064502
git checkout 58749022685aea90dfddfb9f8b2fcdc74dee6ec0
# save the attached .config to linux build tree
make W=1 C=1 CF='-fdiagnostic-prefix -D__CHECK_ENDIAN__' O=build_dir ARCH=i386 SHELL=/bin/bash fs/btrfs/
If you fix the issue, kindly add following tag as appropriate
Reported-by: kernel test robot <[email protected]>
sparse warnings: (new ones prefixed by >>)
>> fs/btrfs/inode.c:5868:5: sparse: sparse: symbol 'btrfs_mountpoint_expiry_timeout' was not declared. Should it be static?
Please review and possibly fold the followup patch.
---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/[email protected]
Hi,
> > I tried to backport it to 5.10.x, but it failed to work.
> > No big modification in this 5.10.x backporting, and all modified pathes
> > are attached.
>
> I'm not surprised, but I doubt there would be a major barrier to making
> it work. I'm unlikely to try until I have more positive reviews.
With two patches as dependencies,
d045465fc6cbfa4acfb5a7d817a7c1a57a078109
0001-exportfs-Add-a-function-to-return-the-raw-output-fro.patch
2e19d10c1438241de32467637a2a411971547991
0002-nfsd-Fix-up-nfsd-to-ensure-that-timeout-errors-don-t.patch
the 5.10.x backport becomes much simpler, and it works well now.
Best Regards
Wang Yugui ([email protected])
2021/07/28
On Wed, Jul 28, 2021 at 1:44 AM NeilBrown <[email protected]> wrote:
>
> When a filesystem has internal mounts, it controls the filehandles
> across all those mounts (subvols) in the filesystem. So it is useful to
> be able to look up a filehandle against one mount, and get a result which
> is in a different mount (part of the same overall file system).
>
> This patch makes that possible by changing exportfs_decode_fh() and
> exportfs_decode_fh_raw() to take a vfsmount pointer by reference, and
> possibly change the vfsmount pointed to before returning.
>
> The core of the change is in reconnect_path() which now not only checks
> that the dentry is fully connected, but also that the vfsmnt reported
> has the same 'dev' (reported by vfs_getattr) as the dentry.
> If it doesn't, we walk up the dparent() chain to find the highest place
> where the dev changes without there being a mount point, and trigger an
> automount there.
>
> As no filesystems yet provide local-mounts, this does not yet change any
> behaviour.
>
> In exportfs_decode_fh_raw() we previously tested for DCACHE_DISCONNECTED
> before calling reconnect_path(). That test is dropped. It was only a
> minor optimisation and is now inconvenient.
>
> The change in overlayfs needs more careful thought than I have yet given
> it.
Just note that overlayfs does not support following auto mounts in layers.
See ovl_dentry_weird(). ovl_lookup() fails if it finds such a dentry.
So I think you need to make sure that the vfsmount was not crossed
when decoding an overlayfs real fh.
Apart from that, I think that your new feature should be opt-in w.r.t.
the exportfs_decode_fh() vfs api, and that overlayfs should not opt in
to the cross-mount decode.
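For illustration, one possible shape for such an opt-in - the flag name
and prototype below are invented, not something from this series:

	struct vfsmount;
	struct fid;
	struct dentry;

	/* Hypothetical: only callers prepared for *mntp to be replaced
	 * (e.g. nfsd) pass EXPORTFS_CROSS_MOUNTS; overlayfs would pass 0
	 * and keep the old no-crossing behaviour. */
	#define EXPORTFS_CROSS_MOUNTS	0x1

	struct dentry *exportfs_decode_fh_flags(struct vfsmount **mntp,
			struct fid *fid, int fh_len, int fileid_type,
			int (*acceptable)(void *, struct dentry *),
			void *context, unsigned int flags);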
Thanks,
Amir.
On Wed, Jul 28, 2021 at 3:02 AM NeilBrown <[email protected]> wrote:
>
> On Wed, 28 Jul 2021, Wang Yugui wrote:
> > Hi,
> >
> > This patchset works well in 5.14-rc3.
>
> Thanks for testing.
>
> >
> > 1. The fixed dummy inode (255, BTRFS_FIRST_FREE_OBJECTID - 1) is changed
> > to a dynamic dummy inode (18446744073709551358, 18446744073709551359, ...).
>
> The BTRFS_FIRST_FREE_OBJECTID-1 value was just a hack; I never wanted it
> to be permanent.
> The new number is ULONG_MAX - subvol_id (where subvol_id starts at 257 I
> think).
> This is a bit less of a hack. It is an easily available number that is
> fairly unique.
>
> >
> > 2. btrfs subvol mount info is shown in /proc/mounts, even if nfsd/nfs is
> > not used.
> > /dev/sdc btrfs 94G 3.5M 93G 1% /mnt/test
> > /dev/sdc btrfs 94G 3.5M 93G 1% /mnt/test/sub1
> > /dev/sdc btrfs 94G 3.5M 93G 1% /mnt/test/sub2
> >
> > This is a visible feature change for btrfs users.
>
> Hopefully it is an improvement. But it is certainly a change that needs
> to be carefully considered.
I think this is behavior people generally expect, but I wonder what
the consequences of this would be with huge numbers of subvolumes. If
there are hundreds or thousands of them (which is quite possible on
SUSE systems, for example, with its auto-snapshotting regime), this
would be a mess, wouldn't it?
Or can we add a way to mark these things to not show up there or is
there some kind of behavioral change we can make to snapper or other
tools to make them not show up here?
--
真実はいつも一つ!/ Always, there's only one truth!
On Wed, Jul 28, 2021 at 08:37:45AM +1000, NeilBrown wrote:
> All subvol roots are now marked as automounts. If the d_automount()
> function determines that the dentry is not the root of the vfsmount, it
> creates a simple loop-back mount of the dentry onto itself. If it
> determines that it IS the root of the vfsmount, it returns -EISDIR so
> that no further automounting is attempted.
>
> btrfs_getattr pays special attention to these automount dentries.
> If it is NOT the root of the vfsmount:
> - the ->dev is reported as that for the rest of the vfsmount
> - the ->ino is reported as the subvol objectid, suitably transformed
> to avoid collision.
>
> This way the same inode appears to be different depending on which mount
> it is in.
>
> automounted vfsmounts are kept on a list and time out 500 to 1000
> seconds after last use. This is configurable via a module parameter.
> The tracking and timeout of automounts is copied from NFS.
>
> Link: https://lore.kernel.org/r/[email protected]
> Reported-by: kernel test robot <[email protected]>
> Signed-off-by: NeilBrown <[email protected]>
> ---
> fs/btrfs/btrfs_inode.h | 2 +
> fs/btrfs/inode.c | 108 ++++++++++++++++++++++++++++++++++++++++++++++++
> fs/btrfs/super.c | 1
> 3 files changed, 111 insertions(+)
>
> diff --git a/fs/btrfs/btrfs_inode.h b/fs/btrfs/btrfs_inode.h
> index a4b5f38196e6..f03056cacc4a 100644
> --- a/fs/btrfs/btrfs_inode.h
> +++ b/fs/btrfs/btrfs_inode.h
> @@ -387,4 +387,6 @@ static inline void btrfs_print_data_csum_error(struct btrfs_inode *inode,
> mirror_num);
> }
>
> +void btrfs_release_automount_timer(void);
> +
> #endif
> diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
> index 02537c1a9763..a5f46545fb38 100644
> --- a/fs/btrfs/inode.c
> +++ b/fs/btrfs/inode.c
> @@ -31,6 +31,7 @@
> #include <linux/migrate.h>
> #include <linux/sched/mm.h>
> #include <linux/iomap.h>
> +#include <linux/fs_context.h>
> #include <asm/unaligned.h>
> #include "misc.h"
> #include "ctree.h"
> @@ -5782,6 +5783,8 @@ static int btrfs_init_locked_inode(struct inode *inode, void *p)
> struct btrfs_iget_args *args = p;
>
> inode->i_ino = args->ino;
> + if (args->ino == BTRFS_FIRST_FREE_OBJECTID)
> + inode->i_flags |= S_AUTOMOUNT;
> BTRFS_I(inode)->location.objectid = args->ino;
> BTRFS_I(inode)->location.type = BTRFS_INODE_ITEM_KEY;
> BTRFS_I(inode)->location.offset = 0;
> @@ -5985,6 +5988,101 @@ static int btrfs_dentry_delete(const struct dentry *dentry)
> return 0;
> }
>
> +static void btrfs_expire_automounts(struct work_struct *work);
> +static LIST_HEAD(btrfs_automount_list);
> +static DECLARE_DELAYED_WORK(btrfs_automount_task, btrfs_expire_automounts);
> +int btrfs_mountpoint_expiry_timeout = 500 * HZ;
> +static void btrfs_expire_automounts(struct work_struct *work)
> +{
> + struct list_head *list = &btrfs_automount_list;
> + int timeout = READ_ONCE(btrfs_mountpoint_expiry_timeout);
> +
> + mark_mounts_for_expiry(list);
> + if (!list_empty(list) && timeout > 0)
> + schedule_delayed_work(&btrfs_automount_task, timeout);
> +}
> +
> +void btrfs_release_automount_timer(void)
> +{
> + if (list_empty(&btrfs_automount_list))
> + cancel_delayed_work(&btrfs_automount_task);
> +}
> +
> +static struct vfsmount *btrfs_automount(struct path *path)
> +{
> + struct fs_context fc;
> + struct vfsmount *mnt;
> + int timeout = READ_ONCE(btrfs_mountpoint_expiry_timeout);
> +
> + if (path->dentry == path->mnt->mnt_root)
> + /* dentry is root of the vfsmount,
> + * so skip automount processing
> + */
> + return ERR_PTR(-EISDIR);
> + /* Create a bind-mount to expose the subvol in the mount table */
> + fc.root = path->dentry;
> + fc.sb_flags = 0;
> + fc.source = "btrfs-automount";
> + mnt = vfs_create_mount(&fc);
> + if (IS_ERR(mnt))
> + return mnt;
Hey Neil,
Sorry if this is a stupid question but wouldn't you want to copy the
mount properties from path->mnt here? Couldn't you otherwise use this to
e.g. suddenly expose a dentry on a read-only mount as read-write?
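For example - just a sketch, and whether poking mnt_flags directly is
safe here is exactly the open question - the read-only state could be
inherited right after vfs_create_mount():

	mnt = vfs_create_mount(&fc);
	if (IS_ERR(mnt))
		return mnt;
	/* hypothetical: don't let the bind-mount widen access beyond
	 * the mount it was triggered from */
	mnt->mnt_flags |= path->mnt->mnt_flags & MNT_READONLY;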
Christian
On 28/07/2021 08:01, NeilBrown wrote:
> On Wed, 28 Jul 2021, Wang Yugui wrote:
>> Hi,
>>
>> This patchset works well in 5.14-rc3.
>
> Thanks for testing.
>
>>
>> 1. The fixed dummy inode (255, BTRFS_FIRST_FREE_OBJECTID - 1) is changed
>> to a dynamic dummy inode (18446744073709551358, 18446744073709551359, ...).
>
> The BTRFS_FIRST_FREE_OBJECTID-1 value was just a hack; I never wanted it
> to be permanent.
> The new number is ULONG_MAX - subvol_id (where subvol_id starts at 257 I
> think).
> This is a bit less of a hack. It is an easily available number that is
> fairly unique.
>
>>
>> 2. btrfs subvol mount info is shown in /proc/mounts, even if nfsd/nfs is
>> not used.
>> /dev/sdc btrfs 94G 3.5M 93G 1% /mnt/test
>> /dev/sdc btrfs 94G 3.5M 93G 1% /mnt/test/sub1
>> /dev/sdc btrfs 94G 3.5M 93G 1% /mnt/test/sub2
>>
>> This is a visible feature change for btrfs users.
>
> Hopefully it is an improvement. But it is certainly a change that needs
> to be carefully considered.
Would this change the behaviour of findmnt? I have several scripts that
depend on findmnt to select btrfs filesystems. Just to take a couple of
examples (using the example shown above): my scripts would depend on
'findmnt --target /mnt/test/sub1 -o target' providing /mnt/test, not the
subvolume; and another script would depend on 'findmnt -t btrfs
--mountpoint /mnt/test/sub1' providing no output as the directory is not
an /etc/fstab mount point for a btrfs filesystem.
Maybe findmnt isn't affected? Or maybe the change is worth making
anyway? But it needs to be carefully considered if it breaks existing
user interfaces.
On Wed, Jul 28, 2021 at 08:26:12AM -0400, Neal Gompa wrote:
> I think this is behavior people generally expect, but I wonder what
> the consequences of this would be with huge numbers of subvolumes. If
> there are hundreds or thousands of them (which is quite possible on
> SUSE systems, for example, with its auto-snapshotting regime), this
> would be a mess, wouldn't it?
I'm surprised that btrfs is special here. Doesn't anyone have thousands
of lvm snapshots? Or is it that they do but they're not normally
mounted?
--b.
On Wed, Jul 28, 2021 at 08:37:45AM +1000, NeilBrown wrote:
> Enhance nfsd to detect internal mounts and to cross them without
> requiring a new export.
Why don't we want a new export?
(Honest question, it's not obvious to me what the best behavior is.)
--b.
>
> Also ensure the fsid reported is different for different submounts. We
> do this by xoring in the ino of the mounted-on directory. This makes
> sense for btrfs at least.
>
> Signed-off-by: NeilBrown <[email protected]>
> ---
> fs/nfsd/nfs3xdr.c | 28 +++++++++++++++++++++-------
> fs/nfsd/nfs4xdr.c | 34 +++++++++++++++++++++++-----------
> fs/nfsd/nfsfh.c | 7 ++++++-
> fs/nfsd/vfs.c | 11 +++++++++--
> 4 files changed, 59 insertions(+), 21 deletions(-)
>
> diff --git a/fs/nfsd/nfs3xdr.c b/fs/nfsd/nfs3xdr.c
> index 67af0c5c1543..80b1cc0334fa 100644
> --- a/fs/nfsd/nfs3xdr.c
> +++ b/fs/nfsd/nfs3xdr.c
> @@ -370,6 +370,8 @@ svcxdr_encode_fattr3(struct svc_rqst *rqstp, struct xdr_stream *xdr,
> case FSIDSOURCE_UUID:
> fsid = ((u64 *)fhp->fh_export->ex_uuid)[0];
> fsid ^= ((u64 *)fhp->fh_export->ex_uuid)[1];
> + if (fhp->fh_mnt != fhp->fh_export->ex_path.mnt)
> + fsid ^= nfsd_get_mounted_on(fhp->fh_mnt);
> break;
> default:
> fsid = (u64)huge_encode_dev(fhp->fh_dentry->d_sb->s_dev);
> @@ -1094,8 +1096,8 @@ compose_entry_fh(struct nfsd3_readdirres *cd, struct svc_fh *fhp,
> __be32 rv = nfserr_noent;
>
> dparent = cd->fh.fh_dentry;
> - exp = cd->fh.fh_export;
> - child.mnt = cd->fh.fh_mnt;
> + exp = exp_get(cd->fh.fh_export);
> + child.mnt = mntget(cd->fh.fh_mnt);
>
> if (isdotent(name, namlen)) {
> if (namlen == 2) {
> @@ -1112,15 +1114,27 @@ compose_entry_fh(struct nfsd3_readdirres *cd, struct svc_fh *fhp,
> child.dentry = dget(dparent);
> } else
> child.dentry = lookup_positive_unlocked(name, dparent, namlen);
> - if (IS_ERR(child.dentry))
> + if (IS_ERR(child.dentry)) {
> + mntput(child.mnt);
> + exp_put(exp);
> return rv;
> - if (d_mountpoint(child.dentry))
> - goto out;
> - if (child.dentry->d_inode->i_ino != ino)
> + }
> + /* If child is a mountpoint, then we want to expose the fact
> + * so client can create a mountpoint. If not, then a different
> + * ino number probably means a race with rename, so avoid providing
> + * too much detail.
> + */
> + if (nfsd_mountpoint(child.dentry, exp)) {
> + int err;
> + err = nfsd_cross_mnt(cd->rqstp, &child, &exp);
> + if (err)
> + goto out;
> + } else if (child.dentry->d_inode->i_ino != ino)
> goto out;
> rv = fh_compose(fhp, exp, &child, &cd->fh);
> out:
> - dput(child.dentry);
> + path_put(&child);
> + exp_put(exp);
> return rv;
> }
>
> diff --git a/fs/nfsd/nfs4xdr.c b/fs/nfsd/nfs4xdr.c
> index d5683b6a74b2..4dbc99ed2c8b 100644
> --- a/fs/nfsd/nfs4xdr.c
> +++ b/fs/nfsd/nfs4xdr.c
> @@ -2817,6 +2817,8 @@ nfsd4_encode_fattr(struct xdr_stream *xdr, struct svc_fh *fhp,
> struct kstat stat;
> struct svc_fh *tempfh = NULL;
> struct kstatfs statfs;
> + u64 mounted_on_ino;
> + u64 sub_fsid;
> __be32 *p;
> int starting_len = xdr->buf->len;
> int attrlen_offset;
> @@ -2871,6 +2873,24 @@ nfsd4_encode_fattr(struct xdr_stream *xdr, struct svc_fh *fhp,
> goto out;
> fhp = tempfh;
> }
> + if ((bmval0 & FATTR4_WORD0_FSID) ||
> + (bmval1 & FATTR4_WORD1_MOUNTED_ON_FILEID)) {
> + mounted_on_ino = stat.ino;
> + sub_fsid = 0;
> + /*
> + * The inode number that the current mnt is mounted on is
> +		 * used for MOUNTED_ON_FILEID if we are at the root,
> + * and for sub_fsid if mnt is not the export mnt.
> + */
> + if (ignore_crossmnt == 0) {
> + u64 moi = nfsd_get_mounted_on(mnt);
> +
> + if (dentry == mnt->mnt_root && moi)
> + mounted_on_ino = moi;
> + if (mnt != exp->ex_path.mnt)
> + sub_fsid = moi;
> + }
> + }
> if (bmval0 & FATTR4_WORD0_ACL) {
> err = nfsd4_get_nfs4_acl(rqstp, dentry, &acl);
> if (err == -EOPNOTSUPP)
> @@ -3008,6 +3028,8 @@ nfsd4_encode_fattr(struct xdr_stream *xdr, struct svc_fh *fhp,
> case FSIDSOURCE_UUID:
> p = xdr_encode_opaque_fixed(p, exp->ex_uuid,
> EX_UUID_LEN);
> + if (mnt != exp->ex_path.mnt)
> + *(u64*)(p-2) ^= sub_fsid;
> break;
> }
> }
> @@ -3253,20 +3275,10 @@ nfsd4_encode_fattr(struct xdr_stream *xdr, struct svc_fh *fhp,
> *p++ = cpu_to_be32(stat.mtime.tv_nsec);
> }
> if (bmval1 & FATTR4_WORD1_MOUNTED_ON_FILEID) {
> -		u64 ino = 0;
> -
> p = xdr_reserve_space(xdr, 8);
> if (!p)
> goto out_resource;
> - /*
> - * Get parent's attributes if not ignoring crossmount
> - * and this is the root of a cross-mounted filesystem.
> - */
> - if (ignore_crossmnt == 0 && dentry == mnt->mnt_root)
> - ino = nfsd_get_mounted_on(mnt);
> - if (!ino)
> - ino = stat.ino;
> - p = xdr_encode_hyper(p, ino);
> + p = xdr_encode_hyper(p, mounted_on_ino);
> }
> #ifdef CONFIG_NFSD_PNFS
> if (bmval1 & FATTR4_WORD1_FS_LAYOUT_TYPES) {
> diff --git a/fs/nfsd/nfsfh.c b/fs/nfsd/nfsfh.c
> index 4023046f63e2..4b53838bca89 100644
> --- a/fs/nfsd/nfsfh.c
> +++ b/fs/nfsd/nfsfh.c
> @@ -9,7 +9,7 @@
> */
>
> #include <linux/exportfs.h>
> -
> +#include <linux/namei.h>
> #include <linux/sunrpc/svcauth_gss.h>
> #include "nfsd.h"
> #include "vfs.h"
> @@ -285,6 +285,11 @@ static __be32 nfsd_set_fh_dentry(struct svc_rqst *rqstp, struct svc_fh *fhp)
> default:
> dentry = ERR_PTR(-ESTALE);
> }
> + } else if (nfsd_mountpoint(dentry, exp)) {
> + struct path path = { .mnt = mnt, .dentry = dentry };
> + follow_down(&path, LOOKUP_AUTOMOUNT);
> + mnt = path.mnt;
> + dentry = path.dentry;
> }
> }
> if (dentry == NULL)
> diff --git a/fs/nfsd/vfs.c b/fs/nfsd/vfs.c
> index baa12ac36ece..22523e1cd478 100644
> --- a/fs/nfsd/vfs.c
> +++ b/fs/nfsd/vfs.c
> @@ -64,7 +64,7 @@ nfsd_cross_mnt(struct svc_rqst *rqstp, struct path *path_parent,
> .dentry = dget(path_parent->dentry)};
> int err = 0;
>
> - err = follow_down(&path, 0);
> + err = follow_down(&path, LOOKUP_AUTOMOUNT);
> if (err < 0)
> goto out;
> if (path.mnt == path_parent->mnt && path.dentry == path_parent->dentry &&
> @@ -73,6 +73,13 @@ nfsd_cross_mnt(struct svc_rqst *rqstp, struct path *path_parent,
> path_put(&path);
> goto out;
> }
> + if (mount_is_internal(path.mnt)) {
> + /* Use the new path, but don't look for a new export */
> + /* FIXME should I check NOHIDE in this case?? */
> + path_put(path_parent);
> + *path_parent = path;
> + goto out;
> + }
>
> exp2 = rqst_exp_get_by_name(rqstp, &path);
> if (IS_ERR(exp2)) {
> @@ -157,7 +164,7 @@ int nfsd_mountpoint(struct dentry *dentry, struct svc_export *exp)
> return 1;
> if (nfsd4_is_junction(dentry))
> return 1;
> - if (d_mountpoint(dentry))
> + if (d_managed(dentry))
> /*
> * Might only be a mountpoint in a different namespace,
> * but we need to check.
>
On Wed, Jul 28, 2021 at 08:37:45AM +1000, NeilBrown wrote:
> When a filesystem has internal mounts, it controls the filehandles
> across all those mounts (subvols) in the filesystem. So it is useful to
> be able to look up a filehandle against one mount, and get a result which
> is in a different mount (part of the same overall file system).
>
> This patch makes that possible by changing exportfs_decode_fh() and
> exportfs_decode_fh_raw() to take a vfsmount pointer by reference, and
> possibly change the vfsmount pointed to before returning.
>
> The core of the change is in reconnect_path() which now not only checks
> that the dentry is fully connected, but also that the vfsmnt reported
> has the same 'dev' (reported by vfs_getattr) as the dentry.
> If it doesn't, we walk up the dparent() chain to find the highest place
> where the dev changes without there being a mount point, and trigger an
> automount there.
>
> As no filesystems yet provide local-mounts, this does not yet change any
> behaviour.
>
> In exportfs_decode_fh_raw() we previously tested for DCACHE_DISCONNECTED
> before calling reconnect_path(). That test is dropped. It was only a
> minor optimisation and is now inconvenient.
>
> The change in overlayfs needs more careful thought than I have yet given
> it.
>
> Signed-off-by: NeilBrown <[email protected]>
> ---
> fs/exportfs/expfs.c | 100 +++++++++++++++++++++++++++++++++++++++-------
> fs/fhandle.c | 2 -
> fs/nfsd/nfsfh.c | 9 +++-
> fs/overlayfs/namei.c | 5 ++
> fs/xfs/xfs_ioctl.c | 12 ++++--
> include/linux/exportfs.h | 4 +-
> 6 files changed, 106 insertions(+), 26 deletions(-)
>
> diff --git a/fs/exportfs/expfs.c b/fs/exportfs/expfs.c
> index 0106eba46d5a..2d7c42137b49 100644
> --- a/fs/exportfs/expfs.c
> +++ b/fs/exportfs/expfs.c
> @@ -207,11 +207,18 @@ static struct dentry *reconnect_one(struct vfsmount *mnt,
> * that case reconnect_path may still succeed with target_dir fully
> * connected, but further operations using the filehandle will fail when
> * necessary (due to S_DEAD being set on the directory).
> + *
> + * If the filesystem supports multiple subvols, then *mntp may be updated
> + * to a subordinate mount point on the same filesystem.
> */
> static int
> -reconnect_path(struct vfsmount *mnt, struct dentry *target_dir, char *nbuf)
> +reconnect_path(struct vfsmount **mntp, struct dentry *target_dir, char *nbuf)
> {
> + struct vfsmount *mnt = *mntp;
> + struct path path;
> struct dentry *dentry, *parent;
> + struct kstat stat;
> + dev_t target_dev;
>
> dentry = dget(target_dir);
>
> @@ -232,6 +239,68 @@ reconnect_path(struct vfsmount *mnt, struct dentry *target_dir, char *nbuf)
> }
> dput(dentry);
> clear_disconnected(target_dir);
Minor nit--I'd prefer the following in a separate function.
--b.
> +
> + /* Need to find appropriate vfsmount, which might not exist yet.
> + * We may need to trigger automount points.
> + */
> + path.mnt = mnt;
> + path.dentry = target_dir;
> + vfs_getattr_nosec(&path, &stat, 0, AT_STATX_DONT_SYNC);
> + target_dev = stat.dev;
> +
> + path.dentry = mnt->mnt_root;
> + vfs_getattr_nosec(&path, &stat, 0, AT_STATX_DONT_SYNC);
> +
> + while (stat.dev != target_dev) {
> + /* walk up the dcache tree from target_dir, recording the
> + * location of the most recent change in dev number,
> + * until we find a mountpoint.
> + * If there was no change in show_dev result before the
> + * mountpount, the vfsmount at the mountpoint is what we want.
> + * If there was, we need to trigger an automount where the
> + * show_dev() result changed.
> + */
> + struct dentry *last_change = NULL;
> + dev_t last_dev = target_dev;
> +
> + dentry = dget(target_dir);
> + while ((parent = dget_parent(dentry)) != dentry) {
> + path.dentry = parent;
> + vfs_getattr_nosec(&path, &stat, 0, AT_STATX_DONT_SYNC);
> + if (stat.dev != last_dev) {
> + path.dentry = dentry;
> + mnt = lookup_mnt(&path);
> + if (mnt) {
> + mntput(path.mnt);
> + path.mnt = mnt;
> + break;
> + }
> + dput(last_change);
> + last_change = dget(dentry);
> + last_dev = stat.dev;
> + }
> + dput(dentry);
> + dentry = parent;
> + }
> + dput(dentry); dput(parent);
> +
> + if (!last_change)
> + break;
> +
> + mnt = path.mnt;
> + path.dentry = last_change;
> + follow_down(&path, LOOKUP_AUTOMOUNT);
> + dput(path.dentry);
> + if (path.mnt == mnt)
> + /* There should have been a mount-trap there,
> + * but there wasn't. Just give up.
> + */
> + break;
> +
> + path.dentry = mnt->mnt_root;
> + vfs_getattr_nosec(&path, &stat, 0, AT_STATX_DONT_SYNC);
> + }
> + *mntp = path.mnt;
> return 0;
> }
>
> @@ -418,12 +487,13 @@ int exportfs_encode_fh(struct dentry *dentry, struct fid *fid, int *max_len,
> EXPORT_SYMBOL_GPL(exportfs_encode_fh);
>
> struct dentry *
> -exportfs_decode_fh_raw(struct vfsmount *mnt, struct fid *fid, int fh_len,
> +exportfs_decode_fh_raw(struct vfsmount **mntp, struct fid *fid, int fh_len,
> int fileid_type,
> int (*acceptable)(void *, struct dentry *),
> void *context)
> {
> - const struct export_operations *nop = mnt->mnt_sb->s_export_op;
> + struct super_block *sb = (*mntp)->mnt_sb;
> + const struct export_operations *nop = sb->s_export_op;
> struct dentry *result, *alias;
> char nbuf[NAME_MAX+1];
> int err;
> @@ -433,7 +503,7 @@ exportfs_decode_fh_raw(struct vfsmount *mnt, struct fid *fid, int fh_len,
> */
> if (!nop || !nop->fh_to_dentry)
> return ERR_PTR(-ESTALE);
> - result = nop->fh_to_dentry(mnt->mnt_sb, fid, fh_len, fileid_type);
> + result = nop->fh_to_dentry(sb, fid, fh_len, fileid_type);
> if (IS_ERR_OR_NULL(result))
> return result;
>
> @@ -452,14 +522,12 @@ exportfs_decode_fh_raw(struct vfsmount *mnt, struct fid *fid, int fh_len,
> *
> * On the positive side there is only one dentry for each
> * directory inode. On the negative side this implies that we
> - * to ensure our dentry is connected all the way up to the
> + * need to ensure our dentry is connected all the way up to the
> * filesystem root.
> */
> - if (result->d_flags & DCACHE_DISCONNECTED) {
> - err = reconnect_path(mnt, result, nbuf);
> - if (err)
> - goto err_result;
> - }
> + err = reconnect_path(mntp, result, nbuf);
> + if (err)
> + goto err_result;
>
> if (!acceptable(context, result)) {
> err = -EACCES;
> @@ -494,7 +562,7 @@ exportfs_decode_fh_raw(struct vfsmount *mnt, struct fid *fid, int fh_len,
> if (!nop->fh_to_parent)
> goto err_result;
>
> - target_dir = nop->fh_to_parent(mnt->mnt_sb, fid,
> + target_dir = nop->fh_to_parent(sb, fid,
> fh_len, fileid_type);
> if (!target_dir)
> goto err_result;
> @@ -507,7 +575,7 @@ exportfs_decode_fh_raw(struct vfsmount *mnt, struct fid *fid, int fh_len,
> * connected to the filesystem root. The VFS really doesn't
> * like disconnected directories..
> */
> - err = reconnect_path(mnt, target_dir, nbuf);
> + err = reconnect_path(mntp, target_dir, nbuf);
> if (err) {
> dput(target_dir);
> goto err_result;
> @@ -518,7 +586,7 @@ exportfs_decode_fh_raw(struct vfsmount *mnt, struct fid *fid, int fh_len,
> * dentry for the inode we're after, make sure that our
> * inode is actually connected to the parent.
> */
> - err = exportfs_get_name(mnt, target_dir, nbuf, result);
> + err = exportfs_get_name(*mntp, target_dir, nbuf, result);
> if (err) {
> dput(target_dir);
> goto err_result;
> @@ -556,7 +624,7 @@ exportfs_decode_fh_raw(struct vfsmount *mnt, struct fid *fid, int fh_len,
> goto err_result;
> }
>
> - return alias;
> + return result;
> }
>
> err_result:
> @@ -565,14 +633,14 @@ exportfs_decode_fh_raw(struct vfsmount *mnt, struct fid *fid, int fh_len,
> }
> EXPORT_SYMBOL_GPL(exportfs_decode_fh_raw);
>
> -struct dentry *exportfs_decode_fh(struct vfsmount *mnt, struct fid *fid,
> +struct dentry *exportfs_decode_fh(struct vfsmount **mntp, struct fid *fid,
> int fh_len, int fileid_type,
> int (*acceptable)(void *, struct dentry *),
> void *context)
> {
> struct dentry *ret;
>
> - ret = exportfs_decode_fh_raw(mnt, fid, fh_len, fileid_type,
> + ret = exportfs_decode_fh_raw(mntp, fid, fh_len, fileid_type,
> acceptable, context);
> if (IS_ERR_OR_NULL(ret)) {
> if (ret == ERR_PTR(-ENOMEM))
> diff --git a/fs/fhandle.c b/fs/fhandle.c
> index 6630c69c23a2..b47c7696469f 100644
> --- a/fs/fhandle.c
> +++ b/fs/fhandle.c
> @@ -149,7 +149,7 @@ static int do_handle_to_path(int mountdirfd, struct file_handle *handle,
> }
> /* change the handle size to multiple of sizeof(u32) */
> handle_dwords = handle->handle_bytes >> 2;
> - path->dentry = exportfs_decode_fh(path->mnt,
> + path->dentry = exportfs_decode_fh(&path->mnt,
> (struct fid *)handle->f_handle,
> handle_dwords, handle->handle_type,
> vfs_dentry_acceptable, NULL);
> diff --git a/fs/nfsd/nfsfh.c b/fs/nfsd/nfsfh.c
> index 0bf7ac13ae50..4023046f63e2 100644
> --- a/fs/nfsd/nfsfh.c
> +++ b/fs/nfsd/nfsfh.c
> @@ -157,6 +157,7 @@ static __be32 nfsd_set_fh_dentry(struct svc_rqst *rqstp, struct svc_fh *fhp)
> struct fid *fid = NULL, sfid;
> struct svc_export *exp;
> struct dentry *dentry;
> + struct vfsmount *mnt = NULL;
> int fileid_type;
> int data_left = fh->fh_size/4;
> __be32 error;
> @@ -253,6 +254,8 @@ static __be32 nfsd_set_fh_dentry(struct svc_rqst *rqstp, struct svc_fh *fhp)
> if (rqstp->rq_vers > 2)
> error = nfserr_badhandle;
>
> + mnt = mntget(exp->ex_path.mnt);
> +
> if (fh->fh_version != 1) {
> sfid.i32.ino = fh->ofh_ino;
> sfid.i32.gen = fh->ofh_generation;
> @@ -269,7 +272,7 @@ static __be32 nfsd_set_fh_dentry(struct svc_rqst *rqstp, struct svc_fh *fhp)
> if (fileid_type == FILEID_ROOT)
> dentry = dget(exp->ex_path.dentry);
> else {
> - dentry = exportfs_decode_fh_raw(exp->ex_path.mnt, fid,
> + dentry = exportfs_decode_fh_raw(&mnt, fid,
> data_left, fileid_type,
> nfsd_acceptable, exp);
> if (IS_ERR_OR_NULL(dentry)) {
> @@ -299,7 +302,7 @@ static __be32 nfsd_set_fh_dentry(struct svc_rqst *rqstp, struct svc_fh *fhp)
> }
>
> fhp->fh_dentry = dentry;
> - fhp->fh_mnt = mntget(exp->ex_path.mnt);
> + fhp->fh_mnt = mnt;
> fhp->fh_export = exp;
>
> switch (rqstp->rq_vers) {
> @@ -317,6 +320,7 @@ static __be32 nfsd_set_fh_dentry(struct svc_rqst *rqstp, struct svc_fh *fhp)
>
> return 0;
> out:
> + mntput(mnt);
> exp_put(exp);
> return error;
> }
> @@ -428,7 +432,6 @@ fh_verify(struct svc_rqst *rqstp, struct svc_fh *fhp, umode_t type, int access)
> return error;
> }
>
> -
> /*
> * Compose a file handle for an NFS reply.
> *
> diff --git a/fs/overlayfs/namei.c b/fs/overlayfs/namei.c
> index 210cd6f66e28..0bca19f6df54 100644
> --- a/fs/overlayfs/namei.c
> +++ b/fs/overlayfs/namei.c
> @@ -155,6 +155,7 @@ struct dentry *ovl_decode_real_fh(struct ovl_fs *ofs, struct ovl_fh *fh,
> {
> struct dentry *real;
> int bytes;
> + struct vfsmount *mnt2;
>
> if (!capable(CAP_DAC_READ_SEARCH))
> return NULL;
> @@ -169,9 +170,11 @@ struct dentry *ovl_decode_real_fh(struct ovl_fs *ofs, struct ovl_fh *fh,
> return NULL;
>
> bytes = (fh->fb.len - offsetof(struct ovl_fb, fid));
> - real = exportfs_decode_fh(mnt, (struct fid *)fh->fb.fid,
> + mnt2 = mntget(mnt);
> + real = exportfs_decode_fh(&mnt2, (struct fid *)fh->fb.fid,
> bytes >> 2, (int)fh->fb.type,
> connected ? ovl_acceptable : NULL, mnt);
> + mntput(mnt2);
> if (IS_ERR(real)) {
> /*
> * Treat stale file handle to lower file as "origin unknown".
> diff --git a/fs/xfs/xfs_ioctl.c b/fs/xfs/xfs_ioctl.c
> index 16039ea10ac9..76eb7d540811 100644
> --- a/fs/xfs/xfs_ioctl.c
> +++ b/fs/xfs/xfs_ioctl.c
> @@ -149,6 +149,8 @@ xfs_handle_to_dentry(
> {
> xfs_handle_t handle;
> struct xfs_fid64 fid;
> + struct dentry *ret;
> + struct vfsmount *mnt;
>
> /*
> * Only allow handle opens under a directory.
> @@ -168,9 +170,13 @@ xfs_handle_to_dentry(
> fid.ino = handle.ha_fid.fid_ino;
> fid.gen = handle.ha_fid.fid_gen;
>
> - return exportfs_decode_fh(parfilp->f_path.mnt, (struct fid *)&fid, 3,
> - FILEID_INO32_GEN | XFS_FILEID_TYPE_64FLAG,
> - xfs_handle_acceptable, NULL);
> + mnt = mntget(parfilp->f_path.mnt);
> + ret = exportfs_decode_fh(&mnt, (struct fid *)&fid, 3,
> + FILEID_INO32_GEN | XFS_FILEID_TYPE_64FLAG,
> + xfs_handle_acceptable, NULL);
> + WARN_ON(mnt != parfilp->f_path.mnt);
> + mntput(mnt);
> + return ret;
> }
>
> STATIC struct dentry *
> diff --git a/include/linux/exportfs.h b/include/linux/exportfs.h
> index fe848901fcc3..9a8c5434a5cf 100644
> --- a/include/linux/exportfs.h
> +++ b/include/linux/exportfs.h
> @@ -228,12 +228,12 @@ extern int exportfs_encode_inode_fh(struct inode *inode, struct fid *fid,
> int *max_len, struct inode *parent);
> extern int exportfs_encode_fh(struct dentry *dentry, struct fid *fid,
> int *max_len, int connectable);
> -extern struct dentry *exportfs_decode_fh_raw(struct vfsmount *mnt,
> +extern struct dentry *exportfs_decode_fh_raw(struct vfsmount **mntp,
> struct fid *fid, int fh_len,
> int fileid_type,
> int (*acceptable)(void *, struct dentry *),
> void *context);
> -extern struct dentry *exportfs_decode_fh(struct vfsmount *mnt, struct fid *fid,
> +extern struct dentry *exportfs_decode_fh(struct vfsmount **mnt, struct fid *fid,
> int fh_len, int fileid_type, int (*acceptable)(void *, struct dentry *),
> void *context);
>
>
I'm still stuck trying to understand why subvolumes can't get their own
superblocks:
- Why are the performance issues Josef raises unsurmountable?
And why are they unique to btrfs? (Surely there other cases
where people need hundreds or thousands of superblocks?)
- If filehandle decoding can return a different vfs mount than
it's passed, why can't it return a different superblock?
--b.
On 7/28/21 3:35 PM, J. Bruce Fields wrote:
> I'm still stuck trying to understand why subvolumes can't get their own
> superblocks:
>
> - Why are the performance issues Josef raises insurmountable?
> And why are they unique to btrfs? (Surely there are other cases
> where people need hundreds or thousands of superblocks?)
>
I don't think anybody has that many file systems. For btrfs it's a single file
system. Think of syncfs, it's going to walk through all of the super blocks on
the system calling ->sync_fs on each subvol superblock. Now this isn't a huge
deal, we could just have some flag that says "I'm not real" or even just have
anonymous superblocks that don't get added to the global super_blocks list, and
that would address my main pain points.
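For context, "walk through all of the super blocks" is roughly the loop
below; this is a simplified paraphrase of iterate_supers() in fs/super.c,
not the literal kernel code:

	static void iterate_supers_sketch(void (*f)(struct super_block *, void *),
					  void *arg)
	{
		struct super_block *sb, *prev = NULL;

		spin_lock(&sb_lock);
		list_for_each_entry(sb, &super_blocks, s_list) {
			sb->s_count++;			/* pin this superblock */
			spin_unlock(&sb_lock);

			down_read(&sb->s_umount);
			if (sb->s_root)
				f(sb, arg);		/* e.g. call ->sync_fs() */
			up_read(&sb->s_umount);

			spin_lock(&sb_lock);
			if (prev)
				__put_super(prev);	/* unpin the previous one */
			prev = sb;
		}
		if (prev)
			__put_super(prev);
		spin_unlock(&sb_lock);
	}

A superblock per subvol means one iteration, and one s_umount round-trip,
per subvol on every syncfs.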
The second part is inode reclaim. Again this particular problem could be
avoided if we had an anonymous superblock that wasn't actually used, but the
inode lru is per superblock. Now with reclaim instead of walking all the
inodes, you're walking a bunch of super blocks and then walking the list of
inodes within those super blocks. You're burning CPU cycles because now instead
of getting big chunks of inodes to dispose, it's spread out across many super
blocks.
The other weird thing is the way we apply pressure to shrinker systems. We
essentially say "try to evict X objects from your list", which means in this
case with lots of subvolumes we'd be evicting waaaaay more inodes than you were
before, likely impacting performance where you have workloads that have lots of
files open across many subvolumes (which is what FB does with its containers).
If we want an anonymous superblock per subvolume then the only way it'll work is
if it's not actually tied into anything, and we still use the primary super
block for the whole file system. And if that's what we're going to do, what's
the point of the super block exactly? This approach that Neil's come up with
seems like a reasonable solution to me. Christoph gets his separation and
/proc/self/mountinfo, and we avoid the scalability headache of a billion super
blocks. Thanks,
Josef
On Thu, 29 Jul 2021, J. Bruce Fields wrote:
> On Wed, Jul 28, 2021 at 08:37:45AM +1000, NeilBrown wrote:
> > @@ -232,6 +239,68 @@ reconnect_path(struct vfsmount *mnt, struct dentry *target_dir, char *nbuf)
> > }
> > dput(dentry);
> > clear_disconnected(target_dir);
>
> Minor nit--I'd prefer the following in a separate function.
Fair. Are you thinking "a separate function that is called here" or "a
separate function that needs to be called by whoever called
exportfs_decode_fh_raw()" if they happen to want the vfsmnt to be
updated?
NeilBrown
>
> --b.
>
> > +
> > + /* Need to find appropriate vfsmount, which might not exist yet.
> > + * We may need to trigger automount points.
> > + */
> > + path.mnt = mnt;
> > + path.dentry = target_dir;
> > + vfs_getattr_nosec(&path, &stat, 0, AT_STATX_DONT_SYNC);
> > + target_dev = stat.dev;
> > +
> > + path.dentry = mnt->mnt_root;
> > + vfs_getattr_nosec(&path, &stat, 0, AT_STATX_DONT_SYNC);
> > +
> > + while (stat.dev != target_dev) {
> > + /* walk up the dcache tree from target_dir, recording the
> > + * location of the most recent change in dev number,
> > + * until we find a mountpoint.
> > + * If there was no change in show_dev result before the
> > + * mountpoint, the vfsmount at the mountpoint is what we want.
> > + * If there was, we need to trigger an automount where the
> > + * show_dev() result changed.
> > + */
On Thu, 29 Jul 2021, J. Bruce Fields wrote:
> On Wed, Jul 28, 2021 at 08:37:45AM +1000, NeilBrown wrote:
> > Enhance nfsd to detect internal mounts and to cross them without
> > requiring a new export.
>
> Why don't we want a new export?
>
> (Honest question, it's not obvious to me what the best behavior is.)
Because a new export means asking user-space to determine if the mount
is exported and to provide a filehandle-prefix for it. A large part of
the point of this is to avoid using a different filehandle-prefix.
I haven't yet thought deeply about how the 'crossmnt' flag (for v3)
should affect crossing these internal mounts. My current feeling is
that it shouldn't as it really is just one big filesystem being
exported, which happens to internally have different inode-number
spaces.
Unfortunately this technically violates the RFC, as the fsid is not meant
to change when you do a LOOKUP ...
NeilBrown
On Wed, 28 Jul 2021, Neal Gompa wrote:
> On Wed, Jul 28, 2021 at 3:02 AM NeilBrown <[email protected]> wrote:
> >
> > On Wed, 28 Jul 2021, Wang Yugui wrote:
> > > Hi,
> > >
> > > This patchset works well in 5.14-rc3.
> >
> > Thanks for testing.
> >
> > >
> > > 1, fixed dummy inode(255, BTRFS_FIRST_FREE_OBJECTID - 1 ) is changed to
> > > dynamic dummy inode(18446744073709551358, or 18446744073709551359, ...)
> >
> > The BTRFS_FIRST_FREE_OBJECTID-1 was just a hack; I never wanted it to
> > be permanent.
> > The new number is ULONG_MAX - subvol_id (where subvol_id starts at 257 I
> > think).
> > This is a bit less of a hack. It is an easily available number that is
> > fairly unique.
> >
> > >
> > > 2, btrfs subvol mount info is shown in /proc/mounts, even if nfsd/nfs is
> > > not used.
> > > /dev/sdc btrfs 94G 3.5M 93G 1% /mnt/test
> > > /dev/sdc btrfs 94G 3.5M 93G 1% /mnt/test/sub1
> > > /dev/sdc btrfs 94G 3.5M 93G 1% /mnt/test/sub2
> > >
> > > This is a visual feature change for btrfs users.
> >
> > Hopefully it is an improvement. But it is certainly a change that needs
> > to be carefully considered.
>
> I think this is behavior people generally expect, but I wonder what
> the consequences of this would be with huge numbers of subvolumes. If
> there are hundreds or thousands of them (which is quite possible on
> SUSE systems, for example, with its auto-snapshotting regime), this
> would be a mess, wouldn't it?
Would there be hundreds or thousands of subvols concurrently being
accessed?  The auto-mounted subvols only appear in the mount table while
they are being accessed, and for about 15 minutes after the last access.
I suspect that most subvols are "backup" snapshots which are not being
accessed and so would not appear.
>
> Or can we add a way to mark these things to not show up there or is
> there some kind of behavioral change we can make to snapper or other
> tools to make them not show up here?
Certainly it might make sense to flag these in some way so that tools
can choose to ignore them or handle them specially, just as nfsd needs
to handle them specially. I was considering a "local" mount flag.
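A purely hypothetical sketch of such a flag (MNT_LOCAL is an invented
name and value; nothing like it exists today):

	/* Hypothetical: mark filesystem-internal (subvol) automounts so
	 * /proc and tools like findmnt can treat them specially.
	 */
	#define MNT_LOCAL	0x800000	/* invented, for illustration */

	static bool mount_is_local(const struct vfsmount *mnt)
	{
		return mnt->mnt_flags & MNT_LOCAL;
	}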
NeilBrown
On Wed, 28 Jul 2021, Amir Goldstein wrote:
> On Wed, Jul 28, 2021 at 1:44 AM NeilBrown <[email protected]> wrote:
> >
> > When a filesystem has internal mounts, it controls the filehandles
> > across all those mounts (subvols) in the filesystem. So it is useful to
> > > be able to look up a filehandle against one mount, and get a result which
> > is in a different mount (part of the same overall file system).
> >
> > > This patch makes that possible by changing exportfs_decode_fh() and
> > > exportfs_decode_fh_raw() to take a vfsmount pointer by reference, and
> > possibly change the vfsmount pointed to before returning.
> >
> > The core of the change is in reconnect_path() which now not only checks
> > that the dentry is fully connected, but also that the vfsmnt reported
> > has the same 'dev' (reported by vfs_getattr) as the dentry.
> > If it doesn't, we walk up the dparent() chain to find the highest place
> > where the dev changes without there being a mount point, and trigger an
> > automount there.
> >
> > As no filesystems yet provide local-mounts, this does not yet change any
> > behaviour.
> >
> > In exportfs_decode_fh_raw() we previously tested for DCACHE_DISCONNECT
> > before calling reconnect_path(). That test is dropped. It was only a
> > minor optimisation and is now inconvenient.
> >
> > The change in overlayfs needs more careful thought than I have yet given
> > it.
>
> Just note that overlayfs does not support following auto mounts in layers.
> See ovl_dentry_weird(). ovl_lookup() fails if it finds such a dentry.
> So I think you need to make sure that the vfsmount was not crossed
> when decoding an overlayfs real fh.
Sounds sensible - thanks.
Does this mean that my change would cause problems for people using
overlayfs with a btrfs lower layer?
>
> Apart from that, I think that your new feature should be opt-in w.r.t
> the exportfs_decode_fh() vfs api and that overlayfs should not opt-in
> for the cross mount decode.
I did consider making it opt-in, but it is easy enough for the caller
to ignore the changed vfsmount, and it only really makes a difference
for one (of 4) callers.
I will review the overlayfs in light of these comments.
Thanks,
NeilBrown
On Wed, 28 Jul 2021, Christian Brauner wrote:
>
> Hey Neil,
>
> Sorry if this is a stupid question but wouldn't you want to copy the
> mount properties from path->mnt here? Couldn't you otherwise use this to
> e.g. suddenly expose a dentry on a read-only mount as read-write?
There are no stupid questions, and this is a particularly non-stupid
one!
I hadn't considered that, but having examined the code I see that it
is already handled.
The vfsmount that d_automount returns is passed to finish_automount(),
which hands it to do_add_mount() together with the mnt_flags for the
parent vfsmount (plus MNT_SHRINKABLE).
do_add_mount() sets up the mnt_flags of the new vfsmount.
In fact, the d_automount interface has no control of these flags at all.
Whatever it sets will be overwritten by do_add_mount().
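The relevant logic, roughly (a condensed paraphrase of do_add_mount(),
not the literal code):

	static int do_add_mount(struct mount *newmnt, struct mountpoint *mp,
				struct path *path, int mnt_flags)
	{
		struct mount *parent = real_mount(path->mnt);

		mnt_flags &= ~MNT_INTERNAL_FLAGS;

		/* refuse to mount a filesystem on its own root */
		if (path->mnt->mnt_sb == newmnt->mnt.mnt_sb &&
		    path->mnt->mnt_root == path->dentry)
			return -EBUSY;

		/* mnt_flags arrived from the parent mount (plus MNT_SHRINKABLE,
		 * via finish_automount()), so whatever d_automount set is lost.
		 */
		newmnt->mnt.mnt_flags = mnt_flags;
		return graft_tree(newmnt, parent, mp);
	}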
Thanks,
NeilBrown
On Wed, Jul 28, 2021 at 03:14:31PM -0400, J. Bruce Fields wrote:
> On Wed, Jul 28, 2021 at 08:26:12AM -0400, Neal Gompa wrote:
> > I think this is behavior people generally expect, but I wonder what
> > the consequences of this would be with huge numbers of subvolumes. If
> > there are hundreds or thousands of them (which is quite possible on
> > SUSE systems, for example, with its auto-snapshotting regime), this
> > would be a mess, wouldn't it?
>
> I'm surprised that btrfs is special here. Doesn't anyone have thousands
> of lvm snapshots? Or is it that they do but they're not normally
> mounted?
Unprivileged users can't create lvm snapshots as easily or quickly as
using mkdir (well, ok, mkdir and fssync). lvm doesn't scale very well
past more than a few dozen snapshots of the same original volume, and
performance degrades linearly in the number of snapshots if the original
LV is modified. btrfs is the opposite: users can create and delete
as many snapshots as they like, at a cost more expensive than mkdir but
less expensive than 'cp -a', and users only pay IO costs for writes to
the subvols they modify. So some btrfs users use snapshots in places
where more traditional tools like 'cp -a' or 'git checkout' are used on
other filesystems.
e.g. a build system might make a snapshot of a git working tree containing
a checked out and built baseline revision, and then it might do a loop
where it makes a snapshot, applies one patch from an integration branch
in the snapshot directory, and incrementally builds there. The next
revision makes a snapshot of its parent revision's subvol and builds
the next patch. If there are merges in the integration branch, then
the builder can go back to parent revisions, create a new snapshot,
apply the patch, and build in a snapshot on both sides of the merge.
After testing picks a winner, the builder can simply delete all the
snapshots except the one for the version that won testing (there is no
requirement to commit the snapshot to the origin LV as in lvm, either
can be destroyed without requiring action to preserve the other).
You can do a similar thing with overlayfs, but it runs into problems
with all the mount points. In btrfs, the mount points are persistent
because they're built into the filesystem. With overlayfs, you have
to save and restore them so they persist across reboots (unless that
feature has been added since I last looked).
I'm looking at a few machines here, and if all the subvols are visible to
'df', its output would be somewhere around 3-5 MB. That's too much--we'd
have to hack up df to not show the same btrfs twice...as well as every
monitoring tool that reports free space...which sounds similar to the
problems we're trying to avoid.
Ideally there would be a way to turn this on or off. It is creating a
set of new problems that is the complement of the set we're trying to
fix in this change.
> --b.
On Wed, 28 Jul 2021, [email protected] wrote:
> On 28/07/2021 08:01, NeilBrown wrote:
> >>
> >> 2, btrfs subvol mount info is shown in /proc/mounts, even if nfsd/nfs is
> >> not used.
> >> /dev/sdc btrfs 94G 3.5M 93G 1% /mnt/test
> >> /dev/sdc btrfs 94G 3.5M 93G 1% /mnt/test/sub1
> >> /dev/sdc btrfs 94G 3.5M 93G 1% /mnt/test/sub2
> >>
> >> This is a visual feature change for btrfs users.
> >
> > Hopefully it is an improvement. But it is certainly a change that needs
> > to be carefully considered.
>
> Would this change the behaviour of findmnt? I have several scripts that
> depend on findmnt to select btrfs filesystems. Just to take a couple of
> examples (using the example shown above): my scripts would depend on
> 'findmnt --target /mnt/test/sub1 -o target' providing /mnt/test, not the
> subvolume; and another script would depend on 'findmnt -t btrfs
> --mountpoint /mnt/test/sub1' providing no output as the directory is not
> an /etc/fstab mount point for a btrfs filesystem.
Yes, I think it does change the behaviour of findmnt.
If the sub1 automount has not been triggered,
findmnt --target /mnt/test/sub1 -o target
will report "/mnt/test".
After it has been triggered, it will report "/mnt/test/sub1"
Similarly "findmnt -t btrfs --mountpoint /mnt/test/sub1" will report
nothing if the automount hasn't been triggered, and will report full
details of /mnt/test/sub1 if it has.
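Concretely (an illustrative session, not captured output):

    user$ findmnt --target /mnt/test/sub1 -o target
    TARGET
    /mnt/test
    user$ ls /mnt/test/sub1 > /dev/null    # triggers the automount
    user$ findmnt --target /mnt/test/sub1 -o target
    TARGET
    /mnt/test/sub1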
>
> Maybe findmnt isn't affected? Or maybe the change is worth making
> anyway? But it needs to be carefully considered if it breaks existing
> user interfaces.
>
I hope the change is worth making anyway, but breaking findmnt would not
be a popular move.
This is unfortunate.... btrfs is "broken" and people/code have adjusted
to that breakage so that "fixing" it will be traumatic.
The only way I can find to get findmnt to ignore the new entries in
/proc/self/mountinfo is to trigger a parse error such as by replacing the
" - " with " -- "
but that causes a parse error message to be generated, and will likely
break other tools.
(... or I could check if current->comm is "findmnt", and suppress the
extra entries, but that is even more horrible!!)
A possible option is to change findmnt to explicitly ignore the new
"internal" mounts (unless some option is given) and then delay the
kernel update until that can be rolled out.
Or we could make the new internal mounts invisible in /proc unless some
kernel setting is enabled.  Then distros can enable it once all important
tools can cope, and they can easily test correctness without rebooting.
I wonder if correcting the device-number reported for explicit subvol
mounts will break anything.... findmnt seems happy with it in my
limited testing. There seems to be a testsuite with util-linux. Maybe
I should try that.
Thanks,
NeilBrown
On Thu, 29 Jul 2021, Zygo Blaxell wrote:
>
> I'm looking at a few machines here, and if all the subvols are visible to
> 'df', its output would be somewhere around 3-5 MB. That's too much--we'd
> have to hack up df to not show the same btrfs twice...as well as every
> monitoring tool that reports free space...which sounds similar to the
> problems we're trying to avoid.
Thanks for providing hard data!! How many of these subvols are actively
used (have a file open) at the same time? Most? About half? Just a few??
Thanks,
NeilBrown
On Thu, Jul 29, 2021 at 08:50:50AM +1000, NeilBrown wrote:
> On Wed, 28 Jul 2021, Neal Gompa wrote:
> >
> > I think this is behavior people generally expect, but I wonder what
> > the consequences of this would be with huge numbers of subvolumes. If
> > there are hundreds or thousands of them (which is quite possible on
> > SUSE systems, for example, with its auto-snapshotting regime), this
> > would be a mess, wouldn't it?
>
> Would there be hundreds or thousands of subvols concurrently being
> accessed? The auto-mounted subvols only appear in the mount table while
> > they are being accessed, and for about 15 minutes after the last access.
> I suspect that most subvols are "backup" snapshots which are not being
> accessed and so would not appear.
bees dedupes across subvols and polls every few minutes for new data
to dedupe. bees doesn't particularly care where the "src" in the dedupe
call comes from, so it will pick a subvol that has a reference to the
data at random (whichever one comes up first in backref search) for each
dedupe call. There is a cache of open fds on each subvol root so that it
can access files within that subvol using openat(). The cache quickly
populates fully, i.e. it holds a fd to every subvol on the filesystem.
The cache has a 15 minute timeout too, so bees would likely keep the
mount table fully populated at all times.
plocate also uses openat() and it can also be active on many subvols
simultaneously, though it only runs once a day, and it's reasonable to
exclude all snapshots from plocate for performance reasons.
My bigger concern here is that users on btrfs can currently have private
subvols with secret names. e.g.
user$ mkdir -m 700 private
user$ btrfs sub create private/secret
user$ cd private/secret
user$ ...do stuff...
Would "secret" now be visible in the very public /proc/mounts every time
the user is doing stuff?
> > Or can we add a way to mark these things to not show up there or is
> > there some kind of behavioral change we can make to snapper or other
> > tools to make them not show up here?
>
> Certainly it might make sense to flag these in some way so that tools
> can choose to ignore them or handle them specially, just as nfsd needs
> to handle them specially. I was considering a "local" mount flag.
I would definitely want an 'off' switch for this thing until the impact
is better understood.
On Thu, 29 Jul 2021, Zygo Blaxell wrote:
> > Would there be hundreds or thousands of subvols concurrently being
> > accessed? The auto-mounted subvols only appear in the mount table while
> > they are being accessed, and for about 15 minutes after the last access.
> > I suspect that most subvols are "backup" snapshots which are not being
> > accessed and so would not appear.
>
> bees dedupes across subvols and polls every few minutes for new data
> to dedupe. bees doesn't particularly care where the "src" in the dedupe
> call comes from, so it will pick a subvol that has a reference to the
> data at random (whichever one comes up first in backref search) for each
> dedupe call. There is a cache of open fds on each subvol root so that it
> can access files within that subvol using openat(). The cache quickly
> populates fully, i.e. it holds a fd to every subvol on the filesystem.
> The cache has a 15 minute timeout too, so bees would likely keep the
> mount table fully populated at all times.
OK ... that is very interesting and potentially helpful - thanks.
Localizing these daemons in a separate namespace would stop them from
polluting the public namespace, but I don't know how easy that would
be.
Do you know how bees opens these files? Does it use path-names from the
root, or some special btrfs ioctl, or ???
If path-names are not used, it might be possible to suppress the
automount.
>
> plocate also uses openat() and it can also be active on many subvols
> simultaneously, though it only runs once a day, and it's reasonable to
> exclude all snapshots from plocate for performance reasons.
>
> My bigger concern here is that users on btrfs can currently have private
> subvols with secret names. e.g.
>
> user$ mkdir -m 700 private
> user$ btrfs sub create private/secret
> user$ cd private/secret
> user$ ...do stuff...
>
> Would "secret" now be visible in the very public /proc/mounts every time
> the user is doing stuff?
Yes, the secret would be publicly visible. Unless we hid it.
It is conceivable that the content of /proc/mounts could be limited to
mountpoints where the process reading had 'x' access to the mountpoint.
However to be really safe we would want to require 'x' access to all
ancestors too, and possibly some 'r' access. That would get
prohibitively expensive.
We could go with "owned by root, or owned by user" maybe.
Thanks,
NeilBrown
On Thu, Jul 29, 2021 at 3:28 AM NeilBrown <[email protected]> wrote:
>
> On Wed, 28 Jul 2021, Amir Goldstein wrote:
> >
> > Just note that overlayfs does not support following auto mounts in layers.
> > See ovl_dentry_weird(). ovl_lookup() fails if it finds such a dentry.
> > So I think you need to make sure that the vfsmount was not crossed
> > when decoding an overlayfs real fh.
>
> Sounds sensible - thanks.
> Does this mean that my change would cause problems for people using
> overlayfs with a btrfs lower layer?
>
It sounds like it might :-/
I assume that enabling automount in btrfs is opt-in?
Otherwise you will be changing behavior for users of existing systems.
I am not sure, but I think it may be possible to remove the AUTOMOUNT
check from the ovl_dentry_weird() condition with an explicit overlayfs
config/module/mount option so that we won't change behavior by
default, but a distro may change the default for overlayfs.
Then, when an admin changes the btrfs options on the system to perform
automounts, it will also need to change the overlayfs options to not
error on automounts.
Given that today, subvolume mounts (or any mounts) on the lower layer
are not followed by overlayfs, I don't really see the difference
if mounts are created manually or automatically.
Miklos?
> >
> > Apart from that, I think that your new feature should be opt-in w.r.t
> > the exportfs_decode_fh() vfs api and that overlayfs should not opt-in
> > for the cross mount decode.
>
> I did consider making it opt-in, but it is easy enough for the caller
> to ignore the changed vfsmount, and it only really makes a difference
> for one (of 4) callers.
>
Which reminds me. Please ignore the changed vfsmount in
do_handle_to_path() (or do not opt-in to changed vfsmount).
I have an application that uses a bind mount to filter file handles
of directories by subtree.  It opens the file handles that were
acquired from the fanotify DFID info record using a mount fd inside
the bind mount, and uses readlink on /proc/self/fd to determine the
path relative to that subtree bind mount.
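In outline, the application does something like this (a sketch, not the
actual code; open_by_handle_at() and the /proc readlink are the real
interfaces, the helper and its name are mine):

	#define _GNU_SOURCE
	#include <fcntl.h>
	#include <limits.h>
	#include <stdio.h>
	#include <unistd.h>

	/* Resolve a fanotify DFID file handle against a subtree bind mount
	 * (mount_fd is an fd on that mount), then recover the path through
	 * /proc/self/fd.  If decoding may hop to a different vfsmount, the
	 * result is no longer guaranteed to sit under the bind mount.
	 */
	static int handle_to_subtree_path(int mount_fd, struct file_handle *fid,
					  char *path, size_t len)
	{
		char link[64];
		ssize_t n;
		int fd = open_by_handle_at(mount_fd, fid, O_PATH);

		if (fd < 0)
			return -1;
		snprintf(link, sizeof(link), "/proc/self/fd/%d", fd);
		n = readlink(link, path, len - 1);
		close(fd);
		if (n < 0)
			return -1;
		path[n] = '\0';
		return 0;
	}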
Your change, IIUC, is going to change the semantics of
open_by_handle_at(2) and is going to break my application.
If you need this change for nfsd, please keep it as an internal api
used only by nfsd.
TBH, I think it would also be nice to have an internal api to limit
the reconnect_path() walk up to mnt->mnt_root, which is what overlayfs
really wants, so here is another excuse for you to introduce
"reconnect flags" to exportfs_decode_fh_raw() ;-)
Note that I had already added support for one implicit "reconnect
flag" (i.e. "don't reconnect") in commit 8a22efa15b46
("ovl: do not try to reconnect a disconnected origin dentry").
Thanks,
Amir.
On 29/07/2021 02:39, NeilBrown wrote:
> On Wed, 28 Jul 2021, [email protected] wrote:
>>
>> Would this change the behaviour of findmnt? I have several scripts that
>> depend on findmnt to select btrfs filesystems. Just to take a couple of
>> examples (using the example shown above): my scripts would depend on
>> 'findmnt --target /mnt/test/sub1 -o target' providing /mnt/test, not the
>> subvolume; and another script would depend on 'findmnt -t btrfs
>> --mountpoint /mnt/test/sub1' providing no output as the directory is not
>> an /etc/fstab mount point for a btrfs filesystem.
>
> Yes, I think it does change the behaviour of findmnt.
> If the sub1 automount has not been triggered,
> findmnt --target /mnt/test/sub1 -o target
> will report "/mnt/test".
> After it has been triggered, it will report "/mnt/test/sub1"
>
> Similarly "findmnt -t btrfs --mountpoint /mnt/test/sub1" will report
> nothing if the automount hasn't been triggered, and will report full
> details of /mnt/test/sub1 if it has.
>
>>
>> Maybe findmnt isn't affected? Or maybe the change is worth making
>> anyway? But it needs to be carefully considered if it breaks existing
>> user interfaces.
>>
> I hope the change is worth making anyway, but breaking findmnt would not
> be a popular move.
I agree.  I use findmnt, but I also use NFS-mounted btrfs disks, so I am
keen to see this deployed.  But people who don't maintain their own
scripts and need a third party to change them might disagree!
> This is unfortunate.... btrfs is "broken" and people/code have adjusted
> to that breakage so that "fixing" it will be traumatic.
>
> The only way I can find to get findmnt to ignore the new entries in
> /proc/self/mountinfo is to trigger a parse error such as by replacing the
> " - " with " -- "
> but that causes a parse error message to be generated, and will likely
> break other tools.
> (... or I could check if current->comm is "findmnt", and suppress the
> extra entries, but that is even more horrible!!)
>
> A possible option is to change findmnt to explicitly ignore the new
> "internal" mounts (unless some option is given) and then delay the
> kernel update until that can be rolled out.
That sounds good as a permanent fix for findmnt. Some sort of
'--include-subvols' option. Particularly if it were possible to default
it using an environment variable so a script can be written to work with
both the old and the new versions of findmnt.
Unfortunately it won't help any other program which does similar
searches through /proc/self/mountinfo.
How about creating two different files? Say, /proc/self/mountinfo and
/proc/self/mountinfo.internal (better filenames may be available!). The
.internal file could be just the additional internal mounts, or it could
be the complete list. Or something like
/proc/self/mountinfo.without-subvols and
/proc/self/mountinfo.with-subvols and a sysctl setting to choose which
is made visible as /proc/self/mountinfo.
Graham
On Thu, Jul 29, 2021 at 10:43:23AM +1000, NeilBrown wrote:
> On Wed, 28 Jul 2021, Christian Brauner wrote:
> >
> > Hey Neil,
> >
> > Sorry if this is a stupid question but wouldn't you want to copy the
> > mount properties from path->mnt here? Couldn't you otherwise use this to
> > e.g. suddenly expose a dentry on a read-only mount as read-write?
>
> There are no stupid questions, and this is a particularly non-stupid
> one!
>
> I hadn't considered that, but having examined the code I see that it
> is already handled.
> The vfsmount that d_automount returns is passed to finish_automount(),
> which hands it to do_add_mount() together with the mnt_flags for the
> parent vfsmount (plus MNT_SHRINKABLE).
> do_add_mount() sets up the mnt_flags of the new vfsmount.
> In fact, the d_automount interface has no control of these flags at all.
> Whatever it sets will be overwritten by do_add_mount().
Ah, interesting, thank you very much, Neil.  I seem to have overlooked
this yesterday.
If btrfs makes use of automounts the way you envisioned to expose
subvolumes and also will support idmapped mounts (see [1]) we need to
teach do_add_mount() to also take the idmapped mount into account. So
you'd need something like (entirely untested):
diff --git a/fs/namespace.c b/fs/namespace.c
index ab4174a3c802..921f6396c36d 100644
--- a/fs/namespace.c
+++ b/fs/namespace.c
@@ -2811,6 +2811,11 @@ static int do_add_mount(struct mount *newmnt, struct mountpoint *mp,
return -EINVAL;
newmnt->mnt.mnt_flags = mnt_flags;
+
+	newmnt->mnt.mnt_userns = mnt_user_ns(path->mnt);
+ if (newmnt->mnt.mnt_userns != &init_user_ns)
+ newmnt->mnt.mnt_userns = get_user_ns(newmnt->mnt.mnt_userns);
+
return graft_tree(newmnt, parent, mp);
}
[1]: https://lore.kernel.org/linux-btrfs/[email protected]/T/#mca601363b435e81c89d8ca4f09134faa5c227e6d
On Thu, Jul 29, 2021 at 01:36:06PM +1000, NeilBrown wrote:
> On Thu, 29 Jul 2021, Zygo Blaxell wrote:
> >
> > bees dedupes across subvols and polls every few minutes for new data
> > to dedupe. bees doesn't particularly care where the "src" in the dedupe
> > call comes from, so it will pick a subvol that has a reference to the
> > data at random (whichever one comes up first in backref search) for each
> > dedupe call. There is a cache of open fds on each subvol root so that it
> > can access files within that subvol using openat(). The cache quickly
> > populates fully, i.e. it holds a fd to every subvol on the filesystem.
> > The cache has a 15 minute timeout too, so bees would likely keep the
> > mount table fully populated at all times.
>
> OK ... that is very interesting and potentially helpful - thanks.
>
> Localizing these daemons in a separate namespace would stop them from
> polluting the public namespace, but I don't know how easy that would
> be..
>
> Do you know how bees opens these files? Does it use path-names from the
> root, or some special btrfs ioctl, or ???
There's a function in bees that opens a subvol root directory by subvol
id. It walks up the btrfs subvol tree with btrfs ioctls to construct a
path to the root, then down the filesystem tree with other btrfs ioctls
to get filenames for each subvol. The filenames are fed to openat()
with the parent subvol's fd to get a fd for each child subvol's root
directory along the path. This is recursive and expensive (the fd
has to be checked to see if it matches the subvol, in case some other
process renamed it) and called every time bees wants to open a file,
so the fd goes into a cache for future open-subvol-by-id calls.
For files, bees calls subvol root open to get a fd for the subvol's root,
then calls the btrfs inode-to-path ioctl on that fd to get a list of
names for the inode, then openat(subvol_fd, inode_to_path(inum), ...) on
each name until a fd matching the target subvol and inode is obtained.
File access is driven by data content, so bees cannot easily predict
which files will need to be accessed again in the near future and which
can be closed. The fd cache is a brute-force way to reduce the number
of inode-to-path and open calls.
Upper layers of bees use (subvol, inode) pairs to identify files
and request file descriptors. The lower layers use filenames as an
implementation detail for compatibility with kernel API.
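The inode-to-path step looks roughly like this (a sketch, not bees's
actual code; BTRFS_IOC_INO_PATHS and its structures come from
linux/btrfs.h):

	#define _GNU_SOURCE
	#include <fcntl.h>
	#include <stdint.h>
	#include <sys/ioctl.h>
	#include <linux/btrfs.h>

	/* Given an fd on a subvol's root directory and an inode number
	 * within that subvol, ask btrfs for one of the inode's names and
	 * open it relative to the subvol root.
	 */
	static int open_inode_in_subvol(int subvol_fd, uint64_t inum)
	{
		__u64 buf[512];			/* 4K, 8-byte aligned */
		struct btrfs_data_container *paths = (void *)buf;
		struct btrfs_ioctl_ino_path_args args = {
			.inum	= inum,
			.size	= sizeof(buf),
			.fspath	= (uintptr_t)buf,
		};

		if (ioctl(subvol_fd, BTRFS_IOC_INO_PATHS, &args) < 0 ||
		    paths->elem_cnt == 0)
			return -1;
		/* val[i] holds each path string's offset from val[] */
		return openat(subvol_fd, (char *)paths->val + paths->val[0],
			      O_RDONLY);
	}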
> If path-names are not used, it might be possible to suppress the
> automount.
A userspace interface to read and dedupe that doesn't use pathnames or
file descriptors (other than one fd to bind the interface to a filesystem)
would be nice! About half of the bees code is devoted to emulating
that interface using the existing kernel API.
Ideally a dedupe agent would be able to pass two physical offsets and
a length of identical data to the filesystem without ever opening a file.
While I'm in favor of making bees smaller, this seems like an expensive
way to suppress automounts.
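Something like the following, entirely invented for illustration (no
such ioctl exists, and the request number is made up):

	/* Hypothetical: dedupe by physical location, bound to the
	 * filesystem by a single fd, with no per-file opens at all.
	 */
	struct btrfs_phys_dedupe_args {
		__u64 src_physical;	/* byte address of the source extent */
		__u64 dst_physical;	/* byte address of the duplicate */
		__u64 length;		/* length of identical data */
	};
	#define BTRFS_IOC_PHYS_DEDUPE \
		_IOWR(BTRFS_IOCTL_MAGIC, 250, struct btrfs_phys_dedupe_args)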
> > plocate also uses openat() and it can also be active on many subvols
> > simultaneously, though it only runs once a day, and it's reasonable to
> > exclude all snapshots from plocate for performance reasons.
> >
> > My bigger concern here is that users on btrfs can currently have private
> > subvols with secret names. e.g.
> >
> > user$ mkdir -m 700 private
> > user$ btrfs sub create private/secret
> > user$ cd private/secret
> > user$ ...do stuff...
> >
> > Would "secret" now be visible in the very public /proc/mounts every time
> > the user is doing stuff?
>
> Yes, the secret would be publicly visible. Unless we hid it.
>
> It is conceivable that the content of /proc/mounts could be limited to
> mountpoints where the process reading had 'x' access to the mountpoint.
> However to be really safe we would want to require 'x' access to all
> ancestors too, and possibly some 'r' access. That would get
> prohibitively expensive.
And inconsistent, since we don't do that with other mount points, i.e.
outside of btrfs. But we do hide parts of the path names when the
process's filesystem root or namespace changes, so maybe this is more
of that.
> We could go with "owned by root, or owned by user" maybe.
On Thu, Jul 29, 2021 at 11:43:21AM +1000, NeilBrown wrote:
> On Thu, 29 Jul 2021, Zygo Blaxell wrote:
> >
> > I'm looking at a few machines here, and if all the subvols are visible to
> > 'df', its output would be somewhere around 3-5 MB. That's too much--we'd
> > have to hack up df to not show the same btrfs twice...as well as every
> > monitoring tool that reports free space...which sounds similar to the
> > problems we're trying to avoid.
>
> Thanks for providing hard data!! How many of these subvols are actively
> used (have a file open) at the same time? Most? About half? Just a few??
Between 1% and 10% of the subvols have open fds at any time (not counting
bees, which holds an open fd on every subvol most of the time). The ratio
is higher (more active) when the machine has less storage or more CPU/RAM:
we keep idle subvols around longer if we have lots of free space, or we
make more subvols active at the same time if we have lots of RAM and CPU.
I don't recall if stat() triggers automount, but most of the subvols are
located in a handful of directories. Could a single 'ls -l' command
activate all of their automounts? If so, then we'd be hitting those
at least once every 15 minutes--these directories are browseable, and
an entry point for users. Certainly anything that looks inside those
directories (like certain file-browsing user agents that look for icons
one level down) will trigger automount as they search children of the
subvol root.
Some of this might be fixable, like I could probably make bees be
more parsimonious with subvol root fds, and I could probably rework
reporting scripts to avoid touching anything inside subdirectories,
and I could probably rework our subvolume directory layout to avoid
accidentally triggering thousands of automounts. But I'd rather not.
I'd rather have working NFS and a 15-20 line df output with no new
privacy or scalability concerns.
> Thanks,
> NeilBrown
On Wed, Jul 28, 2021 at 05:30:04PM -0400, Josef Bacik wrote:
> I don't think anybody has that many file systems. For btrfs it's a single
> file system. Think of syncfs, it's going to walk through all of the super
> blocks on the system calling ->sync_fs on each subvol superblock. Now this
> isn't a huge deal, we could just have some flag that says "I'm not real" or
> even just have anonymous superblocks that don't get added to the global
> super_blocks list, and that would address my main pain points.
Umm... Aren't the snapshots read-only by definition?
> The second part is inode reclaim. Again this particular problem could be
> avoided if we had an anonymous superblock that wasn't actually used, but the
> inode lru is per superblock. Now with reclaim instead of walking all the
> inodes, you're walking a bunch of super blocks and then walking the list of
> inodes within those super blocks. You're burning CPU cycles because now
> instead of getting big chunks of inodes to dispose, it's spread out across
> many super blocks.
>
> The other weird thing is the way we apply pressure to shrinker systems. We
> essentially say "try to evict X objects from your list", which means in this
> case with lots of subvolumes we'd be evicting waaaaay more inodes than you
> were before, likely impacting performance where you have workloads that have
> lots of files open across many subvolumes (which is what FB does with it's
> containers).
>
> If we want a anonymous superblock per subvolume then the only way it'll work
> is if it's not actually tied into anything, and we still use the primary
> super block for the whole file system. And if that's what we're going to do
> what's the point of the super block exactly? This approach that Neil's come
> up with seems like a reasonable solution to me. Christoph gets his
> separation and /proc/self/mountinfo, and we avoid the scalability headache
> of a billion super blocks. Thanks,
AFAICS, we also get arseloads of weird corner cases - in particular, Neil's
suggestions re visibility in /proc/mounts look rather arbitrary.
Al, really disliking the entire series...
On Wed, Jul 28, 2021 at 08:37:45AM +1000, NeilBrown wrote:
> /proc/$PID/mountinfo contains a field for the device number of the
> filesystem at each mount.
>
> This is taken from the superblock ->s_dev field, which is correct for
> every filesystem except btrfs. A btrfs filesystem can contain multiple
> subvols which each have a different device number. If (a directory
> within) one of these subvols is mounted, the device number reported in
> mountinfo will be different from the device number reported by stat().
>
> This confuses some libraries and tools such as, historically, findmnt.
> Current findmnt seems to cope with the strangeness.
>
> So instead of using ->s_dev, call vfs_getattr_nosec() and use the ->dev
> provided. As there is no STATX flag to ask for the device number, we
> pass a request mask of zero, and also ask the filesystem to avoid
> syncing with any remote service.
Hard NAK. You are putting IO (potentially - network IO, with no upper
limit on the completion time) under namespace_sem.
This is an instant DoS - have a hung NFS mount anywhere in the system,
try to cat /proc/self/mountinfo and watch a system-wide rwsem held shared.
From that point on any attempt to take it exclusive will hang *AND* after
that all attempts to take it shared will do the same.
Please, fix BTRFS shite in BTRFS. Without turning a moderately unpleasant
problem (say, unplugged hub on the way to NFS server) into something that
escalates into buggered clients. Note that you have taken out any possibility
to e.g. umount -l /path/to/stuck/mount, along with any chance of clear shutdown
of the client. Not going to happen.
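(For reference, the code path in question, roughly paraphrased from
fs/proc_namespace.c: the rwsem is taken shared in m_start() and held
across every per-mount show callback, which is where the proposed
vfs_getattr_nosec() would run.)

	static void *m_start(struct seq_file *m, loff_t *pos)
	{
		struct proc_mounts *p = m->private;

		down_read(&namespace_sem);	/* released only in m_stop() */
		return seq_list_start(&p->ns->list, *pos);
	}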
NAKed-by: Al Viro <[email protected]>
On Wed, Jul 28, 2021 at 08:37:45AM +1000, NeilBrown wrote:
> In order to support filehandle lookup in filesystems with internal
> mounts (multiple subvols in the one filesystem) reconnect_path() in
> exportfs will need to find out if a given dentry is already mounted.
> This can be done with the function lookup_mnt(), so export that to make
> it available.
IMO having exportfs modular is wrong - note that fs/fhandle.c is
* calling functions in exportfs
* non-modular
* ... and not going to be modular, no matter what - there
are syscalls in it.
On Wed, Jul 28, 2021 at 01:32:45PM +1000, NeilBrown wrote:
> On Wed, 28 Jul 2021, Al Viro wrote:
> > On Wed, Jul 28, 2021 at 08:37:45AM +1000, NeilBrown wrote:
> > > This patch introduces the concept of an "internal" mount which is a
> > > mount where a filesystem has created the mount itself.
> > >
> > > Both the mounted-on-dentry and the mount's root dentry must refer to the
> > > same superblock (they may be the same dentry), and the mounted-on dentry
> > > must be an automount.
> >
> > And what happens if you mount --move it?
> >
> >
> If you move the mount, then the mounted-on dentry would no longer be an
> automount (.... I assume???...) so it would no longer be
> mount_is_internal().
>
> I think that is reasonable. Whoever moved the mount has now taken over
> responsibility for it - it no longer is controlled by the filesystem.
> The moving will have removed the mount from the list of auto-expire
> mounts, and the mount-trap will now be exposed and can be mounted-on
> again.
>
> It would be just like unmounting the automount, and bind-mounting the
> same dentry elsewhere.
Once more, with feeling: what happens to your function if it gets called during
mount --move?
What locking environment is going to be provided for it? And what is going
to provide said environment?
On Wed, Jul 28, 2021 at 08:37:45AM +1000, NeilBrown wrote:
> diff --git a/fs/nfsd/vfs.c b/fs/nfsd/vfs.c
> index baa12ac36ece..22523e1cd478 100644
> --- a/fs/nfsd/vfs.c
> +++ b/fs/nfsd/vfs.c
> @@ -64,7 +64,7 @@ nfsd_cross_mnt(struct svc_rqst *rqstp, struct path *path_parent,
> .dentry = dget(path_parent->dentry)};
> int err = 0;
>
> - err = follow_down(&path, 0);
> + err = follow_down(&path, LOOKUP_AUTOMOUNT);
> if (err < 0)
> goto out;
> if (path.mnt == path_parent->mnt && path.dentry == path_parent->dentry &&
> @@ -73,6 +73,13 @@ nfsd_cross_mnt(struct svc_rqst *rqstp, struct path *path_parent,
> path_put(&path);
> goto out;
> }
> + if (mount_is_internal(path.mnt)) {
> + /* Use the new path, but don't look for a new export */
> + /* FIXME should I check NOHIDE in this case?? */
> + path_put(path_parent);
> + *path_parent = path;
> + goto out;
> + }
... IOW, mount_is_internal() is called with no exclusion whatsoever. What's there
to
* keep its return value valid?
* prevent fetching ->mnt_mountpoint, getting preempted away, having
the mount moved *and* what used to be ->mnt_mountpoint evicted from dcache,
now that it's no longer pinned, then mount_is_internal() regaining CPU and
dereferencing ->mnt_mountpoint, which now points to hell knows what?
I've been pondering all the excellent feedback, and what I have learnt
from examining the code in btrfs, and I have developed a different
perspective.
Maybe "subvol" is a poor choice of name because it conjures up
connections with the Volumes in LVM, and btrfs subvols are very different
things. Btrfs subvols are really just subtrees that can be treated as a
unit for operations like "clone" or "destroy".
As such, they don't really deserve separate st_dev numbers.
Maybe the different st_dev numbers were introduced as a "cheap" way to
extend the size of the inode-number space. Like many "cheap" things, it
has hidden costs.
Maybe objects in different subvols should still be given different inode
numbers. This would be problematic on 32bit systems, but much less so on
64bit systems.
The patch below, which is just a proof-of-concept, changes btrfs to
report a uniform st_dev, and different (64bit) st_ino in different subvols.
It has problems:
- it will break any 32bit readdir and 32bit stat. I don't know how big
  a problem that is these days (ino_t in the kernel is "unsigned long",
  not "unsigned long long"; that surprised me).
- It might break some user-space expectations. One thing I have learnt
is not to make any assumption about what other people might expect.
However, it would be quite easy to make this opt-in (or opt-out) with a
mount option, so that people who need the current inode numbers and will
accept the current breakage can keep working.
I think this approach would be a net-win for NFS export, whether BTRFS
supports it directly or not. I might post a patch which modifies NFS to
intuit improved inode numbers for btrfs exports....
So: how would this break your use-case??
Thanks,
NeilBrown
diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
index 0117d867ecf8..8dc58c848502 100644
--- a/fs/btrfs/inode.c
+++ b/fs/btrfs/inode.c
@@ -6020,6 +6020,37 @@ static int btrfs_opendir(struct inode *inode, struct file *file)
return 0;
}
+static u64 btrfs_make_inum(struct btrfs_key *root, struct btrfs_key *ino)
+{
+ u64 rootid = root->objectid;
+ u64 inoid = ino->objectid;
+ int shift = 64-8;
+
+ if (ino->type == BTRFS_ROOT_ITEM_KEY) {
+ /* This is a subvol root found during readdir. */
+ rootid = inoid;
+ inoid = BTRFS_FIRST_FREE_OBJECTID;
+ }
+ if (rootid == BTRFS_FS_TREE_OBJECTID)
+ /* this is main vol, not subvol (I think) */
+ return inoid;
+ /* store the rootid in the high bits of the inum. This
+	 * will break if 32bit inums are required - we cannot know that here.
+ */
+ while (rootid) {
+ inoid ^= (rootid & 0xff) << shift;
+ rootid >>= 8;
+ shift -= 8;
+ }
+ return inoid;
+}
+
+static inline u64 btrfs_ino_to_inum(struct inode *inode)
+{
+ return btrfs_make_inum(&BTRFS_I(inode)->root->root_key,
+ &BTRFS_I(inode)->location);
+}
+
struct dir_entry {
u64 ino;
u64 offset;
@@ -6045,6 +6076,49 @@ static int btrfs_filldir(void *addr, int entries, struct dir_context *ctx)
return 0;
}
+static inline bool btrfs_dir_emit_dot(struct file *file,
+ struct dir_context *ctx)
+{
+ return ctx->actor(ctx, ".", 1, ctx->pos,
+ btrfs_ino_to_inum(file->f_path.dentry->d_inode),
+ DT_DIR) == 0;
+}
+
+static inline ino_t btrfs_parent_ino(struct dentry *dentry)
+{
+ ino_t res;
+
+ /*
+ * Don't strictly need d_lock here? If the parent ino could change
+ * then surely we'd have a deeper race in the caller?
+ */
+ spin_lock(&dentry->d_lock);
+ res = btrfs_ino_to_inum(dentry->d_parent->d_inode);
+ spin_unlock(&dentry->d_lock);
+ return res;
+}
+
+static inline bool btrfs_dir_emit_dotdot(struct file *file,
+ struct dir_context *ctx)
+{
+ return ctx->actor(ctx, "..", 2, ctx->pos,
+ btrfs_parent_ino(file->f_path.dentry), DT_DIR) == 0;
+}
+static inline bool btrfs_dir_emit_dots(struct file *file,
+ struct dir_context *ctx)
+{
+ if (ctx->pos == 0) {
+ if (!btrfs_dir_emit_dot(file, ctx))
+ return false;
+ ctx->pos = 1;
+ }
+ if (ctx->pos == 1) {
+ if (!btrfs_dir_emit_dotdot(file, ctx))
+ return false;
+ ctx->pos = 2;
+ }
+ return true;
+}
static int btrfs_real_readdir(struct file *file, struct dir_context *ctx)
{
struct inode *inode = file_inode(file);
@@ -6067,7 +6141,7 @@ static int btrfs_real_readdir(struct file *file, struct dir_context *ctx)
bool put = false;
struct btrfs_key location;
- if (!dir_emit_dots(file, ctx))
+ if (!btrfs_dir_emit_dots(file, ctx))
return 0;
path = btrfs_alloc_path();
@@ -6136,7 +6210,8 @@ static int btrfs_real_readdir(struct file *file, struct dir_context *ctx)
put_unaligned(fs_ftype_to_dtype(btrfs_dir_type(leaf, di)),
&entry->type);
btrfs_dir_item_key_to_cpu(leaf, di, &location);
- put_unaligned(location.objectid, &entry->ino);
+ put_unaligned(btrfs_make_inum(&root->root_key, &location),
+ &entry->ino);
put_unaligned(found_key.offset, &entry->offset);
entries++;
addr += sizeof(struct dir_entry) + name_len;
@@ -9193,7 +9268,7 @@ static int btrfs_getattr(struct user_namespace *mnt_userns,
STATX_ATTR_NODUMP);
generic_fillattr(&init_user_ns, inode, stat);
- stat->dev = BTRFS_I(inode)->root->anon_dev;
+ stat->ino = btrfs_ino_to_inum(inode);
spin_lock(&BTRFS_I(inode)->lock);
delalloc_bytes = BTRFS_I(inode)->new_delalloc_bytes;
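To see what this folding produces, the loop can be replayed in user space.
The following is purely an illustrative sketch, not part of the patch; the
only kernel constant it borrows is BTRFS_FS_TREE_OBJECTID (5):

#include <stdio.h>
#include <stdint.h>

#define BTRFS_FS_TREE_OBJECTID 5ULL     /* value from the kernel headers */

/* Same folding as btrfs_make_inum() above: XOR the subvol id,
 * low byte first, into the high bytes of the inode number. */
static uint64_t make_inum(uint64_t rootid, uint64_t inoid)
{
        int shift = 64 - 8;

        if (rootid == BTRFS_FS_TREE_OBJECTID)
                return inoid;           /* main volume: inum unchanged */
        while (rootid) {
                inoid ^= (rootid & 0xff) << shift;
                rootid >>= 8;
                shift -= 8;
        }
        return inoid;
}

int main(void)
{
        /* subvol 257 (0x101), inode 260 (0x104) -> 0x101000000000104 */
        printf("%#llx\n", (unsigned long long)make_inum(257, 260));
        return 0;
}

So subvol 257, inode 260 becomes 0x101000000000104: the subvol id lands in
the high bytes, low byte first, leaving the low bits free for the object id.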
On 2021/7/30 10:36 AM, NeilBrown wrote:
>
> I've been pondering all the excellent feedback, and what I have learnt
> from examining the code in btrfs, and I have developed a different
> perspective.
Great! Some new developers into the btrfs realm!
>
> Maybe "subvol" is a poor choice of name because it conjures up
> connections with the Volumes in LVM, and btrfs subvols are very different
> things. Btrfs subvols are really just subtrees that can be treated as a
> unit for operations like "clone" or "destroy".
>
> As such, they don't really deserve separate st_dev numbers.
>
> Maybe the different st_dev numbers were introduced as a "cheap" way to
> extend to size of the inode-number space. Like many "cheap" things, it
> has hidden costs.
>
> Maybe objects in different subvols should still be given different inode
> numbers. This would be problematic on 32bit systems, but much less so on
> 64bit systems.
>
> The patch below, which is just a proof-of-concept, changes btrfs to
> report a uniform st_dev, and different (64bit) st_ino in different subvols.
>
> It has problems:
> - it will break any 32bit readdir and 32bit stat. I don't know how big
> a problem that is these days (ino_t in the kernel is "unsigned long",
> not "unsigned long long). That surprised me).
> - It might break some user-space expectations. One thing I have learnt
> is not to make any assumption about what other people might expect.
Wouldn't any filesystem boundary check fail to stop at subvolume boundary?
Then it will go through the full btrfs subvolumes/snapshots, which can
be super slow.
>
> However, it would be quite easy to make this opt-in (or opt-out) with a
> mount option, so that people who need the current inode numbers and will
> accept the current breakage can keep working.
>
> I think this approach would be a net-win for NFS export, whether BTRFS
> supports it directly or not. I might post a patch which modifies NFS to
> intuit improved inode numbers for btrfs exports....
Some extra ideas, though I'm not familiar enough with the VFS to be sure.
Can we generate a "fake" superblock for each subvolume?
Like using the subvolume UUID to replace the FSID of each subvolume.
Could that mitigate the problem?
Thanks,
Qu
>
> So: how would this break your use-case??
>
> Thanks,
> NeilBrown
>
> diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
> index 0117d867ecf8..8dc58c848502 100644
> --- a/fs/btrfs/inode.c
> +++ b/fs/btrfs/inode.c
> @@ -6020,6 +6020,37 @@ static int btrfs_opendir(struct inode *inode, struct file *file)
> return 0;
> }
>
> +static u64 btrfs_make_inum(struct btrfs_key *root, struct btrfs_key *ino)
> +{
> + u64 rootid = root->objectid;
> + u64 inoid = ino->objectid;
> + int shift = 64-8;
> +
> + if (ino->type == BTRFS_ROOT_ITEM_KEY) {
> + /* This is a subvol root found during readdir. */
> + rootid = inoid;
> + inoid = BTRFS_FIRST_FREE_OBJECTID;
> + }
> + if (rootid == BTRFS_FS_TREE_OBJECTID)
> + /* this is main vol, not subvol (I think) */
> + return inoid;
> + /* store the rootid in the high bits of the inum. This
> + * will break if 32bit inums are required - we cannot know
> + */
> + while (rootid) {
> + inoid ^= (rootid & 0xff) << shift;
> + rootid >>= 8;
> + shift -= 8;
> + }
> + return inoid;
> +}
> +
> +static inline u64 btrfs_ino_to_inum(struct inode *inode)
> +{
> + return btrfs_make_inum(&BTRFS_I(inode)->root->root_key,
> + &BTRFS_I(inode)->location);
> +}
> +
> struct dir_entry {
> u64 ino;
> u64 offset;
> @@ -6045,6 +6076,49 @@ static int btrfs_filldir(void *addr, int entries, struct dir_context *ctx)
> return 0;
> }
>
> +static inline bool btrfs_dir_emit_dot(struct file *file,
> + struct dir_context *ctx)
> +{
> + return ctx->actor(ctx, ".", 1, ctx->pos,
> + btrfs_ino_to_inum(file->f_path.dentry->d_inode),
> + DT_DIR) == 0;
> +}
> +
> +static inline ino_t btrfs_parent_ino(struct dentry *dentry)
> +{
> + ino_t res;
> +
> + /*
> + * Don't strictly need d_lock here? If the parent ino could change
> + * then surely we'd have a deeper race in the caller?
> + */
> + spin_lock(&dentry->d_lock);
> + res = btrfs_ino_to_inum(dentry->d_parent->d_inode);
> + spin_unlock(&dentry->d_lock);
> + return res;
> +}
> +
> +static inline bool btrfs_dir_emit_dotdot(struct file *file,
> + struct dir_context *ctx)
> +{
> + return ctx->actor(ctx, "..", 2, ctx->pos,
> + btrfs_parent_ino(file->f_path.dentry), DT_DIR) == 0;
> +}
> +static inline bool btrfs_dir_emit_dots(struct file *file,
> + struct dir_context *ctx)
> +{
> + if (ctx->pos == 0) {
> + if (!btrfs_dir_emit_dot(file, ctx))
> + return false;
> + ctx->pos = 1;
> + }
> + if (ctx->pos == 1) {
> + if (!btrfs_dir_emit_dotdot(file, ctx))
> + return false;
> + ctx->pos = 2;
> + }
> + return true;
> +}
> static int btrfs_real_readdir(struct file *file, struct dir_context *ctx)
> {
> struct inode *inode = file_inode(file);
> @@ -6067,7 +6141,7 @@ static int btrfs_real_readdir(struct file *file, struct dir_context *ctx)
> bool put = false;
> struct btrfs_key location;
>
> - if (!dir_emit_dots(file, ctx))
> + if (!btrfs_dir_emit_dots(file, ctx))
> return 0;
>
> path = btrfs_alloc_path();
> @@ -6136,7 +6210,8 @@ static int btrfs_real_readdir(struct file *file, struct dir_context *ctx)
> put_unaligned(fs_ftype_to_dtype(btrfs_dir_type(leaf, di)),
> &entry->type);
> btrfs_dir_item_key_to_cpu(leaf, di, &location);
> - put_unaligned(location.objectid, &entry->ino);
> + put_unaligned(btrfs_make_inum(&root->root_key, &location),
> + &entry->ino);
> put_unaligned(found_key.offset, &entry->offset);
> entries++;
> addr += sizeof(struct dir_entry) + name_len;
> @@ -9193,7 +9268,7 @@ static int btrfs_getattr(struct user_namespace *mnt_userns,
> STATX_ATTR_NODUMP);
>
> generic_fillattr(&init_user_ns, inode, stat);
> - stat->dev = BTRFS_I(inode)->root->anon_dev;
> + stat->ino = btrfs_ino_to_inum(inode);
>
> spin_lock(&BTRFS_I(inode)->lock);
> delalloc_bytes = BTRFS_I(inode)->new_delalloc_bytes;
>
On Fri, Jul 30, 2021 at 5:41 AM NeilBrown <[email protected]> wrote:
>
>
> I've been pondering all the excellent feedback, and what I have learnt
> from examining the code in btrfs, and I have developed a different
> perspective.
>
> Maybe "subvol" is a poor choice of name because it conjures up
> connections with the Volumes in LVM, and btrfs subvols are very different
> things. Btrfs subvols are really just subtrees that can be treated as a
> unit for operations like "clone" or "destroy".
>
> As such, they don't really deserve separate st_dev numbers.
>
> Maybe the different st_dev numbers were introduced as a "cheap" way to
> extend to size of the inode-number space. Like many "cheap" things, it
> has hidden costs.
>
> Maybe objects in different subvols should still be given different inode
> numbers. This would be problematic on 32bit systems, but much less so on
> 64bit systems.
>
> The patch below, which is just a proof-of-concept, changes btrfs to
> report a uniform st_dev, and different (64bit) st_ino in different subvols.
>
> It has problems:
> - it will break any 32bit readdir and 32bit stat. I don't know how big
> a problem that is these days (ino_t in the kernel is "unsigned long",
> not "unsigned long long). That surprised me).
> - It might break some user-space expectations. One thing I have learnt
> is not to make any assumption about what other people might expect.
>
> However, it would be quite easy to make this opt-in (or opt-out) with a
> mount option, so that people who need the current inode numbers and will
> accept the current breakage can keep working.
>
> I think this approach would be a net-win for NFS export, whether BTRFS
> supports it directly or not. I might post a patch which modifies NFS to
> intuit improved inode numbers for btrfs exports....
>
> So: how would this break your use-case??
The simple cases are find -xdev and du -x, which expect the st_dev
change, but that can be excused if opting in to a unified st_dev namespace.
The harder problem is <st_dev;st_ino> collisions, which are not even
that hard to hit with an unlimited number of snapshots.
The 'diff' tool demonstrates the implications of <st_dev;st_ino>
collisions for different objects on userspace.
See xfstest overlay/049 for a demonstration.
The overlayfs xino feature made a similar change to overlayfs
<st_dev;st_ino> with one big difference - applications expect that
all objects in an overlayfs mount will have the same st_dev.
Also, overlayfs has prior knowledge of the number of layers,
so it is easier to parcel the ino namespace and avoid collisions.
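Concretely, the check such tools rely on is roughly the following (a minimal
sketch, not diff's actual source); a <st_dev;st_ino> collision between two
distinct files makes it return a false positive:

#include <stdio.h>
#include <sys/stat.h>

/* The classic "same file?" test: two paths are assumed to name the
 * same object iff both st_dev and st_ino match. */
static int same_file(const char *a, const char *b)
{
        struct stat sa, sb;

        if (stat(a, &sa) != 0 || stat(b, &sb) != 0)
                return 0;
        return sa.st_dev == sb.st_dev && sa.st_ino == sb.st_ino;
}

int main(int argc, char **argv)
{
        if (argc == 3)
                printf("%s\n", same_file(argv[1], argv[2]) ?
                       "same object" : "different objects");
        return 0;
}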
Thanks,
Amir.
On Fri, 30 Jul 2021, Al Viro wrote:
> On Wed, Jul 28, 2021 at 08:37:45AM +1000, NeilBrown wrote:
> > /proc/$PID/mountinfo contains a field for the device number of the
> > filesystem at each mount.
> >
> > This is taken from the superblock ->s_dev field, which is correct for
> > every filesystem except btrfs. A btrfs filesystem can contain multiple
> > subvols which each have a different device number. If (a directory
> > within) one of these subvols is mounted, the device number reported in
> > mountinfo will be different from the device number reported by stat().
> >
> > This confuses some libraries and tools such as, historically, findmnt.
> > Current findmnt seems to cope with the strangeness.
> >
> > So instead of using ->s_dev, call vfs_getattr_nosec() and use the ->dev
> > provided. As there is no STATX flag to ask for the device number, we
> > pass a request mask for zero, and also ask the filesystem to avoid
> > syncing with any remote service.
>
> Hard NAK. You are putting IO (potentially - network IO, with no upper
> limit on the completion time) under namespace_sem.
Why would IO be generated? The inode must already be in cache because it
is mounted, and STATX_DONT_SYNC is passed. If a filesystem did IO in
those circumstances, it would be broken.
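Concretely, the lookup in question amounts to something like this (a sketch
of the patch's approach; the helper name is invented and error handling is
trimmed):

/* Ask the filesystem for the device number via getattr rather than
 * trusting sb->s_dev.  The zero request mask plus AT_STATX_DONT_SYNC
 * is meant to guarantee a cached, IO-free answer for an inode that is
 * already mounted. */
static dev_t mountinfo_dev(const struct path *mnt_path)
{
        struct kstat stat;

        if (vfs_getattr_nosec(mnt_path, &stat, 0, AT_STATX_DONT_SYNC) == 0)
                return stat.dev;
        return mnt_path->mnt->mnt_sb->s_dev;    /* fall back to s_dev */
}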
Thanks for the review,
NeilBrown
On 2021/7/30 1:25 PM, Qu Wenruo wrote:
>
>
> On 2021/7/30 10:36 AM, NeilBrown wrote:
>>
>> I've been pondering all the excellent feedback, and what I have learnt
>> from examining the code in btrfs, and I have developed a different
>> perspective.
>
> Great! Some new developers into the btrfs realm!
>
>>
>> Maybe "subvol" is a poor choice of name because it conjures up
>> connections with the Volumes in LVM, and btrfs subvols are very different
>> things. Btrfs subvols are really just subtrees that can be treated as a
>> unit for operations like "clone" or "destroy".
>>
>> As such, they don't really deserve separate st_dev numbers.
>>
>> Maybe the different st_dev numbers were introduced as a "cheap" way to
>> extend to size of the inode-number space. Like many "cheap" things, it
>> has hidden costs.
Forgot another problem already caused by this st_dev method.
Since btrfs uses st_dev to distinguish its inode namespaces, and
st_dev is allocated from the anonymous bdev pool, and the anonymous bdev pool
has limited size (much smaller than the btrfs subvolume id namespace), it's
already causing problems: we can't allocate an anonymous bdev
for each subvolume, and then fail to create subvolumes/snapshots.
Thus it's really time to re-consider how we should export this info to
user space.
Thanks,
Qu
>>
>> Maybe objects in different subvols should still be given different inode
>> numbers. This would be problematic on 32bit systems, but much less so on
>> 64bit systems.
>>
>> The patch below, which is just a proof-of-concept, changes btrfs to
>> report a uniform st_dev, and different (64bit) st_ino in different
>> subvols.
>>
>> It has problems:
>> - it will break any 32bit readdir and 32bit stat. I don't know how big
>> a problem that is these days (ino_t in the kernel is "unsigned long",
>> not "unsigned long long). That surprised me).
>> - It might break some user-space expectations. One thing I have learnt
>> is not to make any assumption about what other people might expect.
>
> Wouldn't any filesystem boundary check fail to stop at subvolume boundary?
>
> Then it will go through the full btrfs subvolumes/snapshots, which can
> be super slow.
>
>>
>> However, it would be quite easy to make this opt-in (or opt-out) with a
>> mount option, so that people who need the current inode numbers and will
>> accept the current breakage can keep working.
>>
>> I think this approach would be a net-win for NFS export, whether BTRFS
>> supports it directly or not. I might post a patch which modifies NFS to
>> intuit improved inode numbers for btrfs exports....
>
> Some extra ideas, but not familiar with VFS enough to be sure.
>
> Can we generate "fake" superblock for each subvolume?
> Like using the subvolume UUID to replace the FSID of each subvolume.
> Could that mitigate the problem?
>
> Thanks,
> Qu
>
>>
>> So: how would this break your use-case??
>>
>> Thanks,
>> NeilBrown
>>
>> diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
>> index 0117d867ecf8..8dc58c848502 100644
>> --- a/fs/btrfs/inode.c
>> +++ b/fs/btrfs/inode.c
>> @@ -6020,6 +6020,37 @@ static int btrfs_opendir(struct inode *inode,
>> struct file *file)
>> return 0;
>> }
>>
>> +static u64 btrfs_make_inum(struct btrfs_key *root, struct btrfs_key
>> *ino)
>> +{
>> + u64 rootid = root->objectid;
>> + u64 inoid = ino->objectid;
>> + int shift = 64-8;
>> +
>> + if (ino->type == BTRFS_ROOT_ITEM_KEY) {
>> + /* This is a subvol root found during readdir. */
>> + rootid = inoid;
>> + inoid = BTRFS_FIRST_FREE_OBJECTID;
>> + }
>> + if (rootid == BTRFS_FS_TREE_OBJECTID)
>> + /* this is main vol, not subvol (I think) */
>> + return inoid;
>> + /* store the rootid in the high bits of the inum. This
>> + * will break if 32bit inums are required - we cannot know
>> + */
>> + while (rootid) {
>> + inoid ^= (rootid & 0xff) << shift;
>> + rootid >>= 8;
>> + shift -= 8;
>> + }
>> + return inoid;
>> +}
>> +
>> +static inline u64 btrfs_ino_to_inum(struct inode *inode)
>> +{
>> + return btrfs_make_inum(&BTRFS_I(inode)->root->root_key,
>> + &BTRFS_I(inode)->location);
>> +}
>> +
>> struct dir_entry {
>> u64 ino;
>> u64 offset;
>> @@ -6045,6 +6076,49 @@ static int btrfs_filldir(void *addr, int
>> entries, struct dir_context *ctx)
>> return 0;
>> }
>>
>> +static inline bool btrfs_dir_emit_dot(struct file *file,
>> + struct dir_context *ctx)
>> +{
>> + return ctx->actor(ctx, ".", 1, ctx->pos,
>> + btrfs_ino_to_inum(file->f_path.dentry->d_inode),
>> + DT_DIR) == 0;
>> +}
>> +
>> +static inline ino_t btrfs_parent_ino(struct dentry *dentry)
>> +{
>> + ino_t res;
>> +
>> + /*
>> + * Don't strictly need d_lock here? If the parent ino could change
>> + * then surely we'd have a deeper race in the caller?
>> + */
>> + spin_lock(&dentry->d_lock);
>> + res = btrfs_ino_to_inum(dentry->d_parent->d_inode);
>> + spin_unlock(&dentry->d_lock);
>> + return res;
>> +}
>> +
>> +static inline bool btrfs_dir_emit_dotdot(struct file *file,
>> + struct dir_context *ctx)
>> +{
>> + return ctx->actor(ctx, "..", 2, ctx->pos,
>> + btrfs_parent_ino(file->f_path.dentry), DT_DIR) == 0;
>> +}
>> +static inline bool btrfs_dir_emit_dots(struct file *file,
>> + struct dir_context *ctx)
>> +{
>> + if (ctx->pos == 0) {
>> + if (!btrfs_dir_emit_dot(file, ctx))
>> + return false;
>> + ctx->pos = 1;
>> + }
>> + if (ctx->pos == 1) {
>> + if (!btrfs_dir_emit_dotdot(file, ctx))
>> + return false;
>> + ctx->pos = 2;
>> + }
>> + return true;
>> +}
>> static int btrfs_real_readdir(struct file *file, struct dir_context
>> *ctx)
>> {
>> struct inode *inode = file_inode(file);
>> @@ -6067,7 +6141,7 @@ static int btrfs_real_readdir(struct file *file,
>> struct dir_context *ctx)
>> bool put = false;
>> struct btrfs_key location;
>>
>> - if (!dir_emit_dots(file, ctx))
>> + if (!btrfs_dir_emit_dots(file, ctx))
>> return 0;
>>
>> path = btrfs_alloc_path();
>> @@ -6136,7 +6210,8 @@ static int btrfs_real_readdir(struct file *file,
>> struct dir_context *ctx)
>> put_unaligned(fs_ftype_to_dtype(btrfs_dir_type(leaf, di)),
>> &entry->type);
>> btrfs_dir_item_key_to_cpu(leaf, di, &location);
>> - put_unaligned(location.objectid, &entry->ino);
>> + put_unaligned(btrfs_make_inum(&root->root_key, &location),
>> + &entry->ino);
>> put_unaligned(found_key.offset, &entry->offset);
>> entries++;
>> addr += sizeof(struct dir_entry) + name_len;
>> @@ -9193,7 +9268,7 @@ static int btrfs_getattr(struct user_namespace
>> *mnt_userns,
>> STATX_ATTR_NODUMP);
>>
>> generic_fillattr(&init_user_ns, inode, stat);
>> - stat->dev = BTRFS_I(inode)->root->anon_dev;
>> + stat->ino = btrfs_ino_to_inum(inode);
>>
>> spin_lock(&BTRFS_I(inode)->lock);
>> delalloc_bytes = BTRFS_I(inode)->new_delalloc_bytes;
>>
On Fri, 30 Jul 2021, Al Viro wrote:
> On Wed, Jul 28, 2021 at 08:37:45AM +1000, NeilBrown wrote:
> > In order to support filehandle lookup in filesystems with internal
> > mounts (multiple subvols in the one filesystem) reconnect_path() in
> > exportfs will need to find out if a given dentry is already mounted.
> > This can be done with the function lookup_mnt(), so export that to make
> > it available.
>
> IMO having exportfs modular is wrong - note that fs/fhandle.c is
> * calling functions in exportfs
> * non-modular
> * ... and not going to be modular, no matter what - there
> are syscalls in it.
>
>
I agree - it makes sense for exportfs to be non-modular. It cannot be a
module if FHANDLE is enabled, and if you don't want FHANDLE you probably
don't want EXPORTFS either.
Thanks,
NeilBrown
On Fri, 30 Jul 2021, Al Viro wrote:
> On Wed, Jul 28, 2021 at 08:37:45AM +1000, NeilBrown wrote:
>
> > diff --git a/fs/nfsd/vfs.c b/fs/nfsd/vfs.c
> > index baa12ac36ece..22523e1cd478 100644
> > --- a/fs/nfsd/vfs.c
> > +++ b/fs/nfsd/vfs.c
> > @@ -64,7 +64,7 @@ nfsd_cross_mnt(struct svc_rqst *rqstp, struct path *path_parent,
> > .dentry = dget(path_parent->dentry)};
> > int err = 0;
> >
> > - err = follow_down(&path, 0);
> > + err = follow_down(&path, LOOKUP_AUTOMOUNT);
> > if (err < 0)
> > goto out;
> > if (path.mnt == path_parent->mnt && path.dentry == path_parent->dentry &&
> > @@ -73,6 +73,13 @@ nfsd_cross_mnt(struct svc_rqst *rqstp, struct path *path_parent,
> > path_put(&path);
> > goto out;
> > }
> > + if (mount_is_internal(path.mnt)) {
> > + /* Use the new path, but don't look for a new export */
> > + /* FIXME should I check NOHIDE in this case?? */
> > + path_put(path_parent);
> > + *path_parent = path;
> > + goto out;
> > + }
>
> ... IOW, mount_is_internal() is called with no exclusion whatsoever. What's there
> to
> * keep its return value valid?
> * prevent fetching ->mnt_mountpoint, getting preempted away, having
> the mount moved *and* what used to be ->mnt_mountpoint evicted from dcache,
> now that it's no longer pinned, then mount_is_internal() regaining CPU and
> dereferencing ->mnt_mountpoint, which now points to hell knows what?
>
Yes, mount_is_internal needs the same mount_lock protection that
lookup_mnt() has. Thanks.
I don't think it matters how long the result remains valid. The only
realistic transition is from True to False, but the fact that it *was*
True means that it is acceptable for the lookup to have succeeded.
i.e. If the mountpoint was moved while a request was being processed, it
will either cause the same result as if it had happened before the request
started, or after it finished. Either seems OK.
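Concretely, the fix would be the usual seqlock retry loop (a sketch;
__mount_is_internal() is an invented name standing in for the unlocked
predicate from the series):

/* RCU keeps anything we dereference from being freed under us, and
 * the mount_lock seqcount retry discards any result computed across
 * a concurrent change to the mount tree - the same protection
 * lookup_mnt() uses. */
bool mount_is_internal(struct vfsmount *mnt)
{
        unsigned int seq;
        bool ret;

        rcu_read_lock();
        do {
                seq = read_seqbegin(&mount_lock);
                ret = __mount_is_internal(mnt);
        } while (read_seqretry(&mount_lock, seq));
        rcu_read_unlock();
        return ret;
}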
Thanks,
NeilBrown
On Fri, Jul 30, 2021 at 8:33 AM Qu Wenruo <[email protected]> wrote:
>
>
>
> On 2021/7/30 1:25 PM, Qu Wenruo wrote:
> >
> >
> > On 2021/7/30 10:36 AM, NeilBrown wrote:
> >>
> >> I've been pondering all the excellent feedback, and what I have learnt
> >> from examining the code in btrfs, and I have developed a different
> >> perspective.
> >
> > Great! Some new developers into the btrfs realm!
> >
> >>
> >> Maybe "subvol" is a poor choice of name because it conjures up
> >> connections with the Volumes in LVM, and btrfs subvols are very different
> >> things. Btrfs subvols are really just subtrees that can be treated as a
> >> unit for operations like "clone" or "destroy".
> >>
> >> As such, they don't really deserve separate st_dev numbers.
> >>
> >> Maybe the different st_dev numbers were introduced as a "cheap" way to
> >> extend to size of the inode-number space. Like many "cheap" things, it
> >> has hidden costs.
>
> Forgot another problem already caused by this st_dev method.
>
> Since btrfs uses st_dev to distinguish its inode namespaces, and
> st_dev is allocated from the anonymous bdev pool, and the anonymous bdev pool
> has limited size (much smaller than the btrfs subvolume id namespace), it's
> already causing problems: we can't allocate an anonymous bdev
> for each subvolume, and then fail to create subvolumes/snapshots.
>
How about creating a major dev for btrfs subvolumes to start with?
Then at least there is a possibility for administrative reservation of
st_dev values for subvols that need a persistent <st_dev;st_ino>.
By default subvols get assigned a minor dynamically, as today,
and with opt-in (e.g. for small short-lived btrfs filesystems) the
unified st_dev approach can be used, possibly by providing
an upper limit on the inode numbers in the filesystem, similar to
the xfs -o inode32 mount option.
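In sketch form (every name here is invented to illustrate the idea, not
existing btrfs code):

/* All subvols share one registered "btrfs subvol" major; minors come
 * from an IDA by default, but an administrator could pin a minor for
 * a subvol that needs a persistent <st_dev;st_ino>. */
static dev_t btrfs_subvol_dev(struct btrfs_root *root)
{
        int minor = root->pinned_minor;         /* invented field */

        if (!minor)
                minor = ida_alloc_range(&btrfs_subvol_minor_ida, 1,
                                        MINORMASK, GFP_KERNEL);
        if (minor < 0)
                return 0;                       /* minor space exhausted */
        return MKDEV(btrfs_subvol_major, minor);
}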
Thanks,
Amir.
On Fri, 30 Jul 2021 at 07:28, NeilBrown <[email protected]> wrote:
>
> On Fri, 30 Jul 2021, Al Viro wrote:
> > On Wed, Jul 28, 2021 at 08:37:45AM +1000, NeilBrown wrote:
> > > /proc/$PID/mountinfo contains a field for the device number of the
> > > filesystem at each mount.
> > >
> > > This is taken from the superblock ->s_dev field, which is correct for
> > > every filesystem except btrfs. A btrfs filesystem can contain multiple
> > > subvols which each have a different device number. If (a directory
> > > within) one of these subvols is mounted, the device number reported in
> > > mountinfo will be different from the device number reported by stat().
> > >
> > > This confuses some libraries and tools such as, historically, findmnt.
> > > Current findmnt seems to cope with the strangeness.
> > >
> > > So instead of using ->s_dev, call vfs_getattr_nosec() and use the ->dev
> > > provided. As there is no STATX flag to ask for the device number, we
> > > pass a request mask for zero, and also ask the filesystem to avoid
> > > syncing with any remote service.
> >
> > Hard NAK. You are putting IO (potentially - network IO, with no upper
> > limit on the completion time) under namespace_sem.
>
> Why would IO be generated? The inode must already be in cache because it
> is mounted, and STATX_DONT_SYNC is passed. If a filesystem did IO in
> those circumstances, it would be broken.
STATX_DONT_SYNC is a hint, and while some network fs do honor it, not all do.
Thanks,
Miklos
On Fri, 30 Jul 2021, Qu Wenruo wrote:
>
> On 2021/7/30 10:36 AM, NeilBrown wrote:
> >
> > I've been pondering all the excellent feedback, and what I have learnt
> > from examining the code in btrfs, and I have developed a different
> > perspective.
>
> Great! Some new developers into the btrfs realm!
:-)
>
> >
> > Maybe "subvol" is a poor choice of name because it conjures up
> > connections with the Volumes in LVM, and btrfs subvols are very different
> > things. Btrfs subvols are really just subtrees that can be treated as a
> > unit for operations like "clone" or "destroy".
> >
> > As such, they don't really deserve separate st_dev numbers.
> >
> > Maybe the different st_dev numbers were introduced as a "cheap" way to
> > extend to size of the inode-number space. Like many "cheap" things, it
> > has hidden costs.
> >
> > Maybe objects in different subvols should still be given different inode
> > numbers. This would be problematic on 32bit systems, but much less so on
> > 64bit systems.
> >
> > The patch below, which is just a proof-of-concept, changes btrfs to
> > report a uniform st_dev, and different (64bit) st_ino in different subvols.
> >
> > It has problems:
> > - it will break any 32bit readdir and 32bit stat. I don't know how big
> > a problem that is these days (ino_t in the kernel is "unsigned long",
> > not "unsigned long long). That surprised me).
> > - It might break some user-space expectations. One thing I have learnt
> > is not to make any assumption about what other people might expect.
>
> Wouldn't any filesystem boundary check fail to stop at subvolume boundary?
You mean like "du -x"?? Yes. You would lose the misleading illusion
that there are multiple filesystems. That is one user-expectation that
would need to be addressed before people opt-in
>
> Then it will go through the full btrfs subvolumes/snapshots, which can
> be super slow.
>
> >
> > However, it would be quite easy to make this opt-in (or opt-out) with a
> > mount option, so that people who need the current inode numbers and will
> > accept the current breakage can keep working.
> >
> > I think this approach would be a net-win for NFS export, whether BTRFS
> > supports it directly or not. I might post a patch which modifies NFS to
> > intuit improved inode numbers for btrfs exports....
>
> Some extra ideas, but not familiar with VFS enough to be sure.
>
> Can we generate "fake" superblock for each subvolume?
I don't see how that would help. Either subvols are like filesystems
and appear in /proc/mounts, or they aren't like filesystems and don't
get different st_dev. Either of these outcomes can be achieved without
fake superblocks. If you really need BTRFS subvols to have some
properties of filesystems but not all, then you are in for a whole world
of pain.
Maybe btrfs subvols should be treated more like XFS "managed trees". At
least there you have precedent and someone else to share the pain.
Maybe we should train people to use "quota" to check the usage of a
subvol, rather than "du" (which will stop working with my patch if it
contains refs to other subvols) or "df" (which already doesn't work), or
"btrs df"
> Like using the subvolume UUID to replace the FSID of each subvolume.
> Could that mitigate the problem?
Which problem, exactly? My first approach to making subvols work on NFS
took essentially that approach. It was seen (quite reasonably) as a
hack to work around poor behaviour in btrfs.
Given that NFS has always seen all of a btrfs filesystem as having a
uniform fsid, I'm now of the opinion that we don't want to change that,
but should just fix the duplicate-inode-number problem.
If I could think of some way for NFSD to see different inode numbers
than the VFS does, I would push hard for fixing nfsd by giving it more
sane inode numbers.
Thanks,
NeilBrown
On Fri, 30 Jul 2021, Qu Wenruo wrote:
>
> On 2021/7/30 1:25 PM, Qu Wenruo wrote:
> >
> >
> > On 2021/7/30 10:36 AM, NeilBrown wrote:
> >>
> >> I've been pondering all the excellent feedback, and what I have learnt
> >> from examining the code in btrfs, and I have developed a different
> >> perspective.
> >
> > Great! Some new developers into the btrfs realm!
> >
> >>
> >> Maybe "subvol" is a poor choice of name because it conjures up
> >> connections with the Volumes in LVM, and btrfs subvols are very different
> >> things. Btrfs subvols are really just subtrees that can be treated as a
> >> unit for operations like "clone" or "destroy".
> >>
> >> As such, they don't really deserve separate st_dev numbers.
> >>
> >> Maybe the different st_dev numbers were introduced as a "cheap" way to
> >> extend to size of the inode-number space. Like many "cheap" things, it
> >> has hidden costs.
>
> Forgot another problem already caused by this st_dev method.
>
> Since btrfs uses st_dev to distinguish its inode namespaces, and
> st_dev is allocated from the anonymous bdev pool, and the anonymous bdev pool
> has limited size (much smaller than the btrfs subvolume id namespace), it's
> already causing problems: we can't allocate an anonymous bdev
> for each subvolume, and then fail to create subvolumes/snapshots.
What sort of numbers do you see in practice? How many subvolumes and how
many inodes per subvolume?
If we allocated some number of bits to each, with over-allocation to
allow for growth, could we fit both into 64 bits?
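As a sketch of what such a split would look like (invented names; the
24/40 widths are only an example, and whether any split leaves enough
room on both sides is exactly the question):

#include <stdint.h>
#include <stdio.h>

#define SUBVOL_BITS 24
#define INO_BITS    (64 - SUBVOL_BITS)

/* Pack a subvol id and an object id into one 64-bit inum; either
 * side overflowing its allotment is detectable. */
static int pack_inum(uint64_t subvol, uint64_t ino, uint64_t *out)
{
        if (subvol >> SUBVOL_BITS || ino >> INO_BITS)
                return -1;              /* id outside its allotted range */
        *out = (subvol << INO_BITS) | ino;
        return 0;
}

int main(void)
{
        uint64_t inum;

        if (pack_inum(257, 260, &inum) == 0)
                printf("%#llx\n", (unsigned long long)inum);
        return 0;
}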
NeilBrown
>
> Thus it's really time to re-consider how we should export this info to
> user space.
>
> Thanks,
> Qu
>
On Fri, 30 Jul 2021, Al Viro wrote:
> On Wed, Jul 28, 2021 at 05:30:04PM -0400, Josef Bacik wrote:
>
> > I don't think anybody has that many file systems. For btrfs it's a single
> > file system. Think of syncfs, it's going to walk through all of the super
> > blocks on the system calling ->sync_fs on each subvol superblock. Now this
> > isn't a huge deal, we could just have some flag that says "I'm not real" or
> > even just have anonymous superblocks that don't get added to the global
> > super_blocks list, and that would address my main pain points.
>
> Umm... Aren't the snapshots read-only by definition?
No, though they can be.
Subvols can be created empty, or duplicated from an existing subvol.
Any subvol can be written, using copy-on-write of course.
NeilBrown
On 2021/7/30 2:00 PM, NeilBrown wrote:
> On Fri, 30 Jul 2021, Qu Wenruo wrote:
>>
>> On 2021/7/30 1:25 PM, Qu Wenruo wrote:
>>>
>>>
>>> On 2021/7/30 10:36 AM, NeilBrown wrote:
>>>>
>>>> I've been pondering all the excellent feedback, and what I have learnt
>>>> from examining the code in btrfs, and I have developed a different
>>>> perspective.
>>>
>>> Great! Some new developers into the btrfs realm!
>>>
>>>>
>>>> Maybe "subvol" is a poor choice of name because it conjures up
>>>> connections with the Volumes in LVM, and btrfs subvols are very different
>>>> things. Btrfs subvols are really just subtrees that can be treated as a
>>>> unit for operations like "clone" or "destroy".
>>>>
>>>> As such, they don't really deserve separate st_dev numbers.
>>>>
>>>> Maybe the different st_dev numbers were introduced as a "cheap" way to
>>>> extend to size of the inode-number space. Like many "cheap" things, it
>>>> has hidden costs.
>>
>> Forgot another problem already caused by this st_dev method.
>>
>> Since btrfs uses st_dev to distinguish its inode namespaces, and
>> st_dev is allocated from the anonymous bdev pool, and the anonymous bdev pool
>> has limited size (much smaller than the btrfs subvolume id namespace), it's
>> already causing problems: we can't allocate an anonymous bdev
>> for each subvolume, and then fail to create subvolumes/snapshots.
>
> What sort of numbers do you see in practice? How many subvolumes and how
> many inodes per subvolume?
Normally the "live"(*) subvolume numbers are below the minor dev number
range (1<<20), thus not a big deal.
*: Live here means the subvolume is at least accessed once. Subvolume
exists but never accessed doesn't get its anonymous bdev number allocated.
But (1<<20) is really small compared some real-world users.
Thus we had some reports of such problem, and changed the timing to
allocate such bdev number.
> If we allocated some number of bits to each, with over-allocation to
> allow for growth, could we fit both into 64 bits?
I don't think it's even possible, as currently we use a u32 for dev_t,
which is already way below the theoretical subvolume-id limit (U64_MAX - 512).
Thus AFAIK there is no real way to solve it right now.
Thanks,
Qu
>
> NeilBrown
>
>
>>
>> Thus it's really time to re-consider how we should export this info to
>> user space.
>>
>> Thanks,
>> Qu
>>
On Fri, 30 Jul 2021, Miklos Szeredi wrote:
> On Fri, 30 Jul 2021 at 07:28, NeilBrown <[email protected]> wrote:
> >
> > On Fri, 30 Jul 2021, Al Viro wrote:
> > > On Wed, Jul 28, 2021 at 08:37:45AM +1000, NeilBrown wrote:
> > > > /proc/$PID/mountinfo contains a field for the device number of the
> > > > filesystem at each mount.
> > > >
> > > > This is taken from the superblock ->s_dev field, which is correct for
> > > > every filesystem except btrfs. A btrfs filesystem can contain multiple
> > > > subvols which each have a different device number. If (a directory
> > > > within) one of these subvols is mounted, the device number reported in
> > > > mountinfo will be different from the device number reported by stat().
> > > >
> > > > This confuses some libraries and tools such as, historically, findmnt.
> > > > Current findmnt seems to cope with the strangeness.
> > > >
> > > > So instead of using ->s_dev, call vfs_getattr_nosec() and use the ->dev
> > > > provided. As there is no STATX flag to ask for the device number, we
> > > > pass a request mask for zero, and also ask the filesystem to avoid
> > > > syncing with any remote service.
> > >
> > > Hard NAK. You are putting IO (potentially - network IO, with no upper
> > > limit on the completion time) under namespace_sem.
> >
> > Why would IO be generated? The inode must already be in cache because it
> > is mounted, and STATX_DONT_SYNC is passed. If a filesystem did IO in
> > those circumstances, it would be broken.
>
> STATX_DONT_SYNC is a hint, and while some network fs do honor it, not all do.
>
That's ... unfortunate. Rather seems to spoil the whole point of having
a flag like that. Maybe it should have been called
"STATX_SYNC_OR_SYNC_NOT_THERE_IS_NO_GUARANTEE"
Thanks.
NeilBrown
On 2021/7/30 1:58 PM, NeilBrown wrote:
> On Fri, 30 Jul 2021, Qu Wenruo wrote:
>>
>> On 2021/7/30 10:36 AM, NeilBrown wrote:
>>>
>>> I've been pondering all the excellent feedback, and what I have learnt
>>> from examining the code in btrfs, and I have developed a different
>>> perspective.
>>
>> Great! Some new developers into the btrfs realm!
>
> :-)
>
>>
>>>
>>> Maybe "subvol" is a poor choice of name because it conjures up
>>> connections with the Volumes in LVM, and btrfs subvols are very different
>>> things. Btrfs subvols are really just subtrees that can be treated as a
>>> unit for operations like "clone" or "destroy".
>>>
>>> As such, they don't really deserve separate st_dev numbers.
>>>
>>> Maybe the different st_dev numbers were introduced as a "cheap" way to
>>> extend to size of the inode-number space. Like many "cheap" things, it
>>> has hidden costs.
>>>
>>> Maybe objects in different subvols should still be given different inode
>>> numbers. This would be problematic on 32bit systems, but much less so on
>>> 64bit systems.
>>>
>>> The patch below, which is just a proof-of-concept, changes btrfs to
>>> report a uniform st_dev, and different (64bit) st_ino in different subvols.
>>>
>>> It has problems:
>>> - it will break any 32bit readdir and 32bit stat. I don't know how big
>>> a problem that is these days (ino_t in the kernel is "unsigned long",
>>> not "unsigned long long). That surprised me).
>>> - It might break some user-space expectations. One thing I have learnt
>>> is not to make any assumption about what other people might expect.
>>
>> Wouldn't any filesystem boundary check fail to stop at subvolume boundary?
>
> You mean like "du -x"?? Yes. You would lose the misleading illusion
> that there are multiple filesystems. That is one user-expectation that
> would need to be addressed before people opt-in
OK, I forgot it's an opt-in feature, so it's less of an impact.
But it can still sometimes be problematic.
E.g. if the user wants to put some git code into one subvolume, while
exporting another subvolume through NFS.
Then the user has to opt in, which makes the git subvolume lose the
ability to determine the subvolume boundary, right?
This is more concerning as most btrfs users won't want to explicitly
prepare another, separate btrfs filesystem.
>
>>
>> Then it will go through the full btrfs subvolumes/snapshots, which can
>> be super slow.
>>
>>>
>>> However, it would be quite easy to make this opt-in (or opt-out) with a
>>> mount option, so that people who need the current inode numbers and will
>>> accept the current breakage can keep working.
>>>
>>> I think this approach would be a net-win for NFS export, whether BTRFS
>>> supports it directly or not. I might post a patch which modifies NFS to
>>> intuit improved inode numbers for btrfs exports....
>>
>> Some extra ideas, but not familiar with VFS enough to be sure.
>>
>> Can we generate "fake" superblock for each subvolume?
>
> I don't see how that would help. Either subvols are like filesystems
> and appear in /proc/mounts, or they aren't like filesystems and don't
> get different st_dev. Either of these outcomes can be achieved without
> fake superblocks. If you really need BTRFS subvols to have some
> properties of filesystems but not all, then you are in for a whole world
> of pain.
I guess it's time we pay for the hacks...
>
> Maybe btrfs subvols should be treated more like XFS "managed trees". At
> least there you have precedent and someone else to share the pain.
> Maybe we should train people to use "quota" to check the usage of a
> subvol,
Well, btrfs quota has its own pain...
> rather than "du" (which will stop working with my patch if it
> contains refs to other subvols) or "df" (which already doesn't work), or
> "btrs df"
BTW, since XFS has a similar feature (though I'm not sure of the details),
I guess in the long run it may be worthwhile to give the VFS some way to
handle this concept of something that is not a full volume but is still a
distinct tree.
>
>> Like using the subvolume UUID to replace the FSID of each subvolume.
>> Could that mitigate the problem?
>
> Which problem, exactly? My first approach to making subvols work on NFS
> took essentially that approach. It was seen (quite reasonably) as a
> hack to work around poor behaviour in btrfs.
>
> Given that NFS has always seen all of a btrfs filesystem as having a
> uniform fsid, I'm now of the opinion that we don't want to change that,
> but should just fix the duplicate-inode-number problem.
>
> If I could think of some way for NFSD to see different inode numbers
> than the VFS does, I would push hard for fixing nfsd by giving it more
> sane inode numbers.
I'm really not familiar with NFS/VFS, thus some ideas from me may sound
super crazy.
Is it possible for nfsd to detect such a "subvolume" concept on its
own, like checking st_dev and the fsid returned from statfs()?
Then if nfsd finds some boundary which has a different st_dev, but the same
fsid as its parent, it knows it's a "subvolume"-like concept.
Then do some local inode number mapping inside nfsd?
Like using the highest 20 bits for different subvolumes, and the
remaining 44 bits for real inode numbers.
Of course, this is still a workaround...
Thanks,
Qu
>
> Thanks,
> NeilBrown
>
On Fri, 30 Jul 2021, Qu Wenruo wrote:
> >
> > You mean like "du -x"?? Yes. You would lose the misleading illusion
> > that there are multiple filesystems. That is one user-expectation that
> > would need to be addressed before people opt-in
>
> OK, forgot it's an opt-in feature, then it's less an impact.
The hope would have to be that everyone would eventually opt in once all
issues were understood.
>
> Really not familiar with NFS/VFS, thus some ideas from me may sounds
> super crazy.
>
> Is it possible that, for nfsd to detect such "subvolume" concept by its
> own, like checking st_dev and the fsid returned from statfs().
>
> Then if nfsd find some boundary which has different st_dev, but the same
> fsid as its parent, then it knows it's a "subvolume"-like concept.
>
> Then do some local inode number mapping inside nfsd?
> Like use the highest 20 bits for different subvolumes, while the
> remaining 44 bits for real inode numbers.
>
> Of-course, this is still a workaround...
Yes, it would certainly be possible to add some hacks to nfsd to fix the
immediate problem, and we could probably even create some well-defined
interfaces into btrfs to extract the required information so that it
wasn't too hackish.
Maybe that is what we will have to do. But I'd rather not hack NFSD
while there is any chance that a more complete solution will be found.
I'm not quite ready to give up on the idea of squeezing all btrfs inodes
into a 64bit number space. 24bits of subvol and 40 bits of inode?
Make the split a mkfs or mount option?
Maybe hand out inode numbers to subvols in 2^32 chunks so each subvol
(which has ever been accessed) has a mapping from the top 32 bits of the
objectid to the top 32 bits of the inode number.
We don't need something that is theoretically perfect (that's not
possible anyway as we don't have 64bits of device numbers). We just
need something that is practical and scales adequately. If you have
petabytes of storage, it is reasonable to spend a gigabyte of memory on
a lookup table(?).
If we can make inode numbers unique, we can possibly leave the st_dev
changing at subvols so that "du -x" works as currently expected.
One thought I had was to use a strong hash to combine the subvol object
id and the inode object id into a 64bit number. What is the chance of
a collision in practice :-)
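In sketch form, the 2^32-chunk idea is just a translation table (invented
names; a real version would use a hash table and live per-filesystem). On
the strong-hash thought: the birthday bound puts the chance of any
collision among N objects at roughly N^2/2^65, so a billion inodes already
gives a few percent chance of some collision - amusing, but probably not
acceptable.

#include <stdint.h>

/* Each distinct (subvol, top 32 bits of objectid) pair that is
 * actually accessed gets the next free 32-bit prefix; the inum is
 * then <prefix:32><low 32 bits of objectid:32>.  Memory is spent
 * only on subvols that have been touched; the array and linear scan
 * just keep the sketch short. */
struct chunk { uint64_t subvol; uint32_t obj_hi; };

static struct chunk chunks[1u << 20];           /* illustrative cap */
static uint32_t nchunks;

static uint64_t chunked_inum(uint64_t subvol, uint64_t objectid)
{
        uint32_t hi = objectid >> 32, i;

        for (i = 0; i < nchunks; i++)
                if (chunks[i].subvol == subvol && chunks[i].obj_hi == hi)
                        break;
        if (i == nchunks) {             /* first access: allocate a chunk */
                chunks[i].subvol = subvol;
                chunks[i].obj_hi = hi;
                nchunks++;
        }
        return ((uint64_t)i << 32) | (objectid & 0xffffffff);
}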
Thanks,
NeilBrown
On 2021/7/30 2:53 PM, NeilBrown wrote:
> On Fri, 30 Jul 2021, Qu Wenruo wrote:
>>>
>>> You mean like "du -x"?? Yes. You would lose the misleading illusion
>>> that there are multiple filesystems. That is one user-expectation that
>>> would need to be addressed before people opt-in
>>
>> OK, forgot it's an opt-in feature, then it's less an impact.
>
> The hope would have to be that everyone would eventually opt-in once all
> issues were understood.
>
>>
>> Really not familiar with NFS/VFS, thus some ideas from me may sounds
>> super crazy.
>>
>> Is it possible that, for nfsd to detect such "subvolume" concept by its
>> own, like checking st_dev and the fsid returned from statfs().
>>
>> Then if nfsd find some boundary which has different st_dev, but the same
>> fsid as its parent, then it knows it's a "subvolume"-like concept.
>>
>> Then do some local inode number mapping inside nfsd?
>> Like use the highest 20 bits for different subvolumes, while the
>> remaining 44 bits for real inode numbers.
>>
>> Of-course, this is still a workaround...
>
> Yes, it would certainly be possible to add some hacks to nfsd to fix the
> immediate problem, and we could probably even created some well-defined
> interfaces into btrfs to extract the required information so that it
> wasn't too hackish.
>
> Maybe that is what we will have to do. But I'd rather not hack NFSD
> while there is any chance that a more complete solution will be found.
>
> I'm not quite ready to give up on the idea of squeezing all btrfs inodes
> into a 64bit number space. 24bits of subvol and 40 bits of inode?
> Make the split a mkfs or mount option?
Btrfs used to have a subvolume number limit in the past, for different
reasons.
In that case, the subvolume number was limited to 48 bits, which is still
too large to avoid conflicts.
For inode numbers there is really no limit except the 256 ~ (u64)-256 range.
Considering all these numbers are almost the full u64, conflicts would be
unavoidable AFAIK.
> Maybe hand out inode numbers to subvols in 2^32 chunks so each subvol
> (which has ever been accessed) has a mapping from the top 32 bits of the
> objectid to the top 32 bits of the inode number.
>
> We don't need something that is theoretically perfect (that's not
> possible anyway as we don't have 64bits of device numbers). We just
> need something that is practical and scales adequately. If you have
> petabytes of storage, it is reasonable to spend a gigabyte of memory on
> a lookup table(?).
Can such squishing-all-inodes-into-one-namespace work be done in a
more generic way? E.g. let each fs with a "subvolume"-like feature
provide an interface to do that.
Despite that, I still hope to have a way to distinguish the "subvolume"
boundary.
Completely inside btrfs, it's pretty simple to locate a subvolume
boundary: all subvolume roots have the same inode number, 256.
Maybe we could reserve some special "squished" inode number to indicate
a boundary inside a filesystem.
E.g. reserve (u64)-1 as a special indicator for subvolume boundaries,
as most filesystems reserve the super-high inode numbers anyway.
>
> If we can make inode numbers unique, we can possibly leave the st_dev
> changing at subvols so that "du -x" works as currently expected.
>
> One thought I had was to use a strong hash to combine the subvol object
> id and the inode object id into a 64bit number. What is the chance of
> a collision in practice :-)
But with just 64bits, conflicts will happen anyway...
Thanks,
Qu
>
> Thanks,
> NeilBrown
>
On Fri, 30 Jul 2021 at 08:13, NeilBrown <[email protected]> wrote:
>
> On Fri, 30 Jul 2021, Miklos Szeredi wrote:
> > On Fri, 30 Jul 2021 at 07:28, NeilBrown <[email protected]> wrote:
> > >
> > > On Fri, 30 Jul 2021, Al Viro wrote:
> > > > On Wed, Jul 28, 2021 at 08:37:45AM +1000, NeilBrown wrote:
> > > > > /proc/$PID/mountinfo contains a field for the device number of the
> > > > > filesystem at each mount.
> > > > >
> > > > > This is taken from the superblock ->s_dev field, which is correct for
> > > > > every filesystem except btrfs. A btrfs filesystem can contain multiple
> > > > > subvols which each have a different device number. If (a directory
> > > > > within) one of these subvols is mounted, the device number reported in
> > > > > mountinfo will be different from the device number reported by stat().
> > > > >
> > > > > This confuses some libraries and tools such as, historically, findmnt.
> > > > > Current findmnt seems to cope with the strangeness.
> > > > >
> > > > > So instead of using ->s_dev, call vfs_getattr_nosec() and use the ->dev
> > > > > provided. As there is no STATX flag to ask for the device number, we
> > > > > pass a request mask for zero, and also ask the filesystem to avoid
> > > > > syncing with any remote service.
> > > >
> > > > Hard NAK. You are putting IO (potentially - network IO, with no upper
> > > > limit on the completion time) under namespace_sem.
> > >
> > > Why would IO be generated? The inode must already be in cache because it
> > > is mounted, and STATX_DONT_SYNC is passed. If a filesystem did IO in
> > > those circumstances, it would be broken.
> >
> > STATX_DONT_SYNC is a hint, and while some network fs do honor it, not all do.
> >
>
> That's ... unfortunate. Rather seems to spoil the whole point of having
> a flag like that. Maybe it should have been called
> "STATX_SYNC_OR_SYNC_NOT_THERE_IS_NO_GUARANTEE"
And I guess just about every filesystem would need to be fixed to
prevent starting I/O on STATX_DONT_SYNC, as block I/O could just as
well generate network traffic.
Probably much easier to fix btrfs to use some sort of subvolume structure
that the VFS knows about. I think there's been talk about that for a
long time, not sure where it got stalled.
Thanks,
Miklos
On Fri, 30 Jul 2021, Miklos Szeredi wrote:
> On Fri, 30 Jul 2021 at 08:13, NeilBrown <[email protected]> wrote:
> >
> > On Fri, 30 Jul 2021, Miklos Szeredi wrote:
> > > On Fri, 30 Jul 2021 at 07:28, NeilBrown <[email protected]> wrote:
> > > >
> > > > On Fri, 30 Jul 2021, Al Viro wrote:
> > > > > On Wed, Jul 28, 2021 at 08:37:45AM +1000, NeilBrown wrote:
> > > > > > /proc/$PID/mountinfo contains a field for the device number of the
> > > > > > filesystem at each mount.
> > > > > >
> > > > > > This is taken from the superblock ->s_dev field, which is correct for
> > > > > > every filesystem except btrfs. A btrfs filesystem can contain multiple
> > > > > > subvols which each have a different device number. If (a directory
> > > > > > within) one of these subvols is mounted, the device number reported in
> > > > > > mountinfo will be different from the device number reported by stat().
> > > > > >
> > > > > > This confuses some libraries and tools such as, historically, findmnt.
> > > > > > Current findmnt seems to cope with the strangeness.
> > > > > >
> > > > > > So instead of using ->s_dev, call vfs_getattr_nosec() and use the ->dev
> > > > > > provided. As there is no STATX flag to ask for the device number, we
> > > > > > pass a request mask for zero, and also ask the filesystem to avoid
> > > > > > syncing with any remote service.
> > > > >
> > > > > Hard NAK. You are putting IO (potentially - network IO, with no upper
> > > > > limit on the completion time) under namespace_sem.
> > > >
> > > > Why would IO be generated? The inode must already be in cache because it
> > > > is mounted, and STATX_DONT_SYNC is passed. If a filesystem did IO in
> > > > those circumstances, it would be broken.
> > >
> > > STATX_DONT_SYNC is a hint, and while some network fs do honor it, not all do.
> > >
> >
> > That's ... unfortunate. Rather seems to spoil the whole point of having
> > a flag like that. Maybe it should have been called
> > "STATX_SYNC_OR_SYNC_NOT_THERE_IS_NO_GUARANTEE"
>
> And I guess just about every filesystem would need to be fixed to
> prevent starting I/O on STATX_DONT_SYNC, as block I/O could just as
> well generate network traffic.
Certainly I think that would be appropriate. If the information simply
isn't available, EWOULDBLOCK could be returned.
>
> Probably much easier fix btrfs to use some sort of subvolume structure
> that the VFS knows about. I think there's been talk about that for a
> long time, not sure where it got stalled.
An easy fix for this particular patch is to add a super_operation which
provides the devno to show in /proc/self/mountinfo. There are already a
number of show_foo super_operations to show other content.
But I'm curious about your reference to "some sort of subvolume
structure that the VFS knows about". Do you have any references, or can
you suggest a search term I could try?
Thanks,
NeilBrown
On Fri, 30 Jul 2021 at 09:34, NeilBrown <[email protected]> wrote:
> But I'm curious about your reference to "some sort of subvolume
> structure that the VFS knows about". Do you have any references, or can
> you suggest a search term I could try?
Found this:
https://lore.kernel.org/linux-fsdevel/[email protected]/
I also remember discussing it at some LSF/MM with the btrfs devs, but
no specific conclusion.
Thanks,
Miklos
On Fri, Jul 30, 2021 at 02:23:44PM +0800, Qu Wenruo wrote:
> OK, forgot it's an opt-in feature, then it's less an impact.
>
> But it can still sometimes be problematic.
>
> E.g. if the user want to put some git code into one subvolume, while
> export another subvolume through NFS.
>
> Then the user has to opt-in, affecting the git subvolume to lose the
> ability to determine subvolume boundary, right?
Totally naive question: would it be possible to treat different subvolumes
differently, and give the user some choice at subvolume creation time
about how this new boundary should behave?
It seems like there are some conflicting priorities that can only be
resolved by someone who knows the intended use case.
--b.
On 7/30/21 11:17 AM, J. Bruce Fields wrote:
> On Fri, Jul 30, 2021 at 02:23:44PM +0800, Qu Wenruo wrote:
>> OK, forgot it's an opt-in feature, then it's less an impact.
>>
>> But it can still sometimes be problematic.
>>
>> E.g. if the user want to put some git code into one subvolume, while
>> export another subvolume through NFS.
>>
>> Then the user has to opt-in, affecting the git subvolume to lose the
>> ability to determine subvolume boundary, right?
>
> Totally naive question: would it be possible to treat different subvolumes
> differently, and give the user some choice at subvolume creation time
> about how this new boundary should behave?
>
> It seems like there are some conflicting priorities that can only be
> resolved by someone who knows the intended use case.
>
This is the crux of the problem. We have no real interfaces or anything to deal
with this sort of paradigm. We do the st_dev thing because that's the most
common way tools like find or rsync determine they've wandered into
a "different" volume. This exists specifically because of use cases like
Zygo's, where he's taking thousands of snapshots and manually excluding them
from find/rsync is just not reasonable.
We have no good way to give the user information about what's going on, we just
have these old shitty interfaces. I asked our guys about filling up
/proc/self/mountinfo with our subvolumes and they had a heart attack because we
have around 2-4k subvolumes on machines, and with monitoring stuff in place we
regularly read /proc/self/mountinfo to determine what's mounted and such.
And then there's NFS which needs to know that it's walked into a new inode space.
This is all super shitty, and mostly exists because we don't have a good way to
expose to the user wtf is going on.
Personally I would be ok with simply disallowing NFS to wander into subvolumes
from an exported fs. If you want to export subvolumes then export them
individually, otherwise if you walk into a subvolume from NFS you simply get an
empty directory.
This doesn't solve the mountinfo problem where a user may want to figure out
which subvol they're in, but this is where I think we could address the issue
with better interfaces. Or perhaps Neil's idea to have a common major number
with a different minor number for every subvol.
Either way this isn't as simple as shoehorning it into automount and being done
with it, we need to take a step back and think about how this should actually
look, taking into account we've got 12 years of having Btrfs deployed with
existing use cases that expect a certain behavior. Thanks,
Josef
On 2021-07-30 17:48, Josef Bacik wrote:
> On 7/30/21 11:17 AM, J. Bruce Fields wrote:
>> On Fri, Jul 30, 2021 at 02:23:44PM +0800, Qu Wenruo wrote:
>>> OK, forgot it's an opt-in feature, then it's less an impact.
>>>
>>> But it can still sometimes be problematic.
>>>
>>> E.g. if the user want to put some git code into one subvolume, while
>>> export another subvolume through NFS.
>>>
>>> Then the user has to opt-in, affecting the git subvolume to lose the
>>> ability to determine subvolume boundary, right?
>>
>> Totally naive question: would it be possible to treat different subvolumes
>> differently, and give the user some choice at subvolume creation time
>> about how this new boundary should behave?
>>
>> It seems like there are some conflicting priorities that can only be
>> resolved by someone who knows the intended use case.
>>
>
> This is the crux of the problem. We have no real interfaces or anything
> to deal with this sort of paradigm. We do the st_dev thing because
> that's the most common way that tools like find or rsync use to
> determine they've wandered into a "different" volume. This exists
> specifically because of usescases like Zygo's, where he's taking
> thousands of snapshots and manually excluding them from find/rsync is
> just not reasonable.
>
> We have no good way to give the user information about what's going on,
> we just have these old shitty interfaces. I asked our guys about
> filling up /proc/self/mountinfo with our subvolumes and they had a heart
> attack because we have around 2-4k subvolumes on machines, and with
> monitoring stuff in place we regularly read /proc/self/mountinfo to
> determine what's mounted and such.
>
> And then there's NFS which needs to know that it's walked into a new
> inode space.
>
> This is all super shitty, and mostly exists because we don't have a good
> way to expose to the user wtf is going on.
>
> Personally I would be ok with simply disallowing NFS to wander into
> subvolumes from an exported fs. If you want to export subvolumes then
> export them individually, otherwise if you walk into a subvolume from
> NFS you simply get an empty directory.
>
> This doesn't solve the mountinfo problem where a user may want to figure
> out which subvol they're in, but this is where I think we could address
> the issue with better interfaces. Or perhaps Neil's idea to have a
> common major number with a different minor number for every subvol.
>
> Either way this isn't as simple as shoehorning it into automount and
> being done with it, we need to take a step back and think about how
> should this actually look, taking into account we've got 12 years of
> having Btrfs deployed with existing usecases that expect a certain
> behavior. Thanks,
>
> Josef
As a user and sysadmin I really appreciate the way Btrfs currently works.
We use hourly snapshots which are exposed over Samba as "Previous
Versions" to Windows users. This amounts to thousands of snapshots, all
user serviceable. A great feature!
In the Samba world we have a mount option[1] called "noserverino" which
lets the client generate unique inode numbers, rather than using the
server-provided inode numbers. This allows Linux clients to work well
against servers exposing subvolumes and snapshots.
NFS has really old roots and had to make choices that we don't really
have to make today. Can we not provide something similar to mount.cifs
that generate unique inode numbers for the clients. This could be either
an nfsd export option (such as /mnt/foo *(rw,uniq_inodes)) or a mount
option on the clients.
One worry I have with making subvolumes automount points is that it might
affect the ability to cp --reflink across that boundary.
[1] https://www.samba.org/~ab/output/htmldocs/manpages-3/mount.cifs.8.html
On Fri, Jul 30, 2021 at 11:48:15AM -0400, Josef Bacik wrote:
> On 7/30/21 11:17 AM, J. Bruce Fields wrote:
> > On Fri, Jul 30, 2021 at 02:23:44PM +0800, Qu Wenruo wrote:
> > > OK, forgot it's an opt-in feature, then it's less of an impact.
> > >
> > > But it can still sometimes be problematic.
> > >
> > > E.g. if the user wants to put some git code into one subvolume, while
> > > exporting another subvolume through NFS.
> > >
> > > Then the user has to opt in, causing the git subvolume to lose the
> > > ability to determine subvolume boundaries, right?
> >
> > Totally naive question: is it possible to treat different subvolumes
> > differently, and give the user some choice at subvolume creation time
> > how this new boundary should behave?
> >
> > It seems like there are some conflicting priorities that can only be
> > resolved by someone who knows the intended use case.
> >
>
> This is the crux of the problem. We have no real interfaces or anything to
> deal with this sort of paradigm. We do the st_dev thing because that's the
> most common way tools like find or rsync determine they've
> wandered into a "different" volume. This exists specifically because of
> use cases like Zygo's, where he's taking thousands of snapshots and manually
> excluding them from find/rsync is just not reasonable.
>
> We have no good way to give the user information about what's going on, we
> just have these old shitty interfaces. I asked our guys about filling up
> /proc/self/mountinfo with our subvolumes and they had a heart attack because
> we have around 2-4k subvolumes on machines, and with monitoring stuff in
> place we regularly read /proc/self/mountinfo to determine what's mounted and
> such.
>
> And then there's NFS which needs to know that it's walked into a new inode space.
NFS somehow works surprisingly well without knowing that. I didn't know
there was a problem with NFS, despite exporting thousands of btrfs subvols
from a single export point for 7 years. Maybe I have some non-default
setting in /etc/exports which works around the problems, or maybe I got
lucky, and all my use cases are weirdly specific and evade all the bugs
by accident?
> This is all super shitty, and mostly exists because we don't have a good way
> to expose to the user wtf is going on.
>
> Personally I would be ok with simply disallowing NFS from wandering into
> subvolumes from an exported fs. If you want to export subvolumes then
> export them individually, otherwise if you walk into a subvolume from NFS
> you simply get an empty directory.
As a present exporter of thousands of btrfs subvols over NFS from single
export points, I'm not a fan of this idea.
> This doesn't solve the mountinfo problem where a user may want to figure out
> which subvol they're in, but this is where I think we could address the
> issue with better interfaces. Or perhaps Neil's idea to have a common major
> number with a different minor number for every subvol.
It's not hard to figure out what subvol you're in. There's an ioctl
that reports the subvol ID, and another that reports the name. The problem
is that they're btrfs-specific, and no existing software knows how and when
to use them (and they're also privileged, but that's easy to fix compared to
the other issues).
> Either way this isn't as simple as shoehorning it into automount and being
> done with it; we need to take a step back and think about how this should
> actually look, taking into account that we've got 12 years of having Btrfs
> deployed with existing use cases that expect a certain behavior. Thanks,
I think if we got into a time machine, went back 12 years, changed
the btrfs behavior, and then returned to the present, in the alternate
history, we would all be here today talking about how mountinfo doesn't
scale up to what btrfs throws at it, and can btrfs opt out of it somehow.
Maybe we could have a system call for mount point discovery? Right now,
the kernel throws a trail of breadcrumbs into /proc/self/mountinfo,
and userspace libraries translate that text blob into
actionable information. We could solve problems with scalability and
visibility in mountinfo if we only had to provide the information in
the context of a single inode (i.e. the inode's parent or child mount
points accessible to the caller).
So you'd have a call for "get paths for all the mount points below inode
X" and another for "get paths for all mount points above inode X", and
calls that tell you details about mount points (like what they're mounted
on, which filesystem they are part of, what the mount flags are, etc).
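To make the shape of that concrete, a purely hypothetical interface
sketch - none of these calls exist, and every name and type below is
invented for illustration only:

#include <stddef.h>
#include <stdint.h>

/* Hypothetical descriptor for one mount, returned by the calls below. */
struct mountpt_info {
	uint64_t mnt_id;          /* unique mount identifier */
	uint64_t parent_id;       /* mount this one is stacked on */
	uint64_t flags;           /* mount flags */
	char     fstype[16];      /* e.g. "btrfs" */
	char     mountpoint[256]; /* path, relative to the caller's root */
};

/* Fill 'buf' with up to 'count' mount points directly below the inode
 * at dirfd/path that are accessible to the caller; return how many. */
int get_mounts_below(int dirfd, const char *path,
		     struct mountpt_info *buf, size_t count);

/* Same, but for the chain of mounts the inode sits under. */
int get_mounts_above(int dirfd, const char *path,
		     struct mountpt_info *buf, size_t count);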
> Josef
On Fri, Jul 30, 2021 at 03:09:12PM +0800, Qu Wenruo wrote:
>
>
> On 2021/7/30 2:53 PM, NeilBrown wrote:
> > On Fri, 30 Jul 2021, Qu Wenruo wrote:
> > > >
> > > > You mean like "du -x"?? Yes. You would lose the misleading illusion
> > > > that there are multiple filesystems. That is one user expectation that
> > > > would need to be addressed before people opt in.
> > >
> > > OK, forgot it's an opt-in feature, then it's less of an impact.
> >
> > The hope would have to be that everyone would eventually opt in once all
> > issues were understood.
> >
> > >
> > > Really not familiar with NFS/VFS, thus some ideas from me may sound
> > > super crazy.
> > >
> > > Is it possible for nfsd to detect such a "subvolume" concept on its
> > > own, e.g. by checking st_dev and the fsid returned from statfs()?
> > >
> > > Then if nfsd finds some boundary which has a different st_dev, but the same
> > > fsid as its parent, it knows it's a "subvolume"-like concept.
> > >
> > > Then do some local inode number mapping inside nfsd?
> > > Like use the highest 20 bits for different subvolumes, while the
> > > remaining 44 bits for real inode numbers.
> > >
> > > Of course, this is still a workaround...
> >
> > Yes, it would certainly be possible to add some hacks to nfsd to fix the
> > immediate problem, and we could probably even create some well-defined
> > interfaces into btrfs to extract the required information so that it
> > wasn't too hackish.
> >
> > Maybe that is what we will have to do. But I'd rather not hack NFSD
> > while there is any chance that a more complete solution will be found.
> >
> > I'm not quite ready to give up on the idea of squeezing all btrfs inodes
> > into a 64-bit number space. 24 bits of subvol and 40 bits of inode?
> > Make the split a mkfs or mount option?
>
> Btrfs used to have a subvolume number limit in the past, for different
> reasons.
>
> In that case, the subvolume number was limited to 48 bits, which is still
> too large to avoid conflicts.
>
> For inode numbers there is really no limit except the 256 ~ (U64)-256 range.
>
> Considering all these numbers can span almost the full U64, conflicts would
> be unavoidable AFAIK.
>
> > Maybe hand out inode numbers to subvols in 2^32 chunks so each subvol
> > (which has ever been accessed) has a mapping from the top 32 bits of the
> > objectid to the top 32 bits of the inode number.
> >
> > We don't need something that is theoretically perfect (that's not
> > possible anyway as we don't have 64bits of device numbers). We just
> > need something that is practical and scales adequately. If you have
> > petabytes of storage, it is reasonable to spend a gigabyte of memory on
> > a lookup table(?).
>
> Can such squishing-all-inodes-into-one-namespace work be done in a
> more generic way? E.g. let each fs with a "subvolume"-like feature
> provide an interface to do that.
If you know the highest subvol ID number, you can pack two integers into
one larger integer by reversing the bits of the subvol number and ORing
them with the inode number, i.e. 0x0080000000000300 is subvol 256
inode 768.
The subvol IDs grow left to right while the inode numbers grow right
to left. You can have billions of inodes in a few subvols, or billions of
subvols with a few inodes each, and neither will collide with the other
until there are billions of both.
If the filesystem tracks the number of bits in the highest subvol ID
and the highest inode number, then the inode numbers can be decoded,
and collisions can be detected. E.g. if the maximum subvol ID on the
filesystem is below 131072, it fits in 17 bits, so we know bits
63-47 are the subvol ID and bits 46-0 are the inode. When subvol 131072
is created, the number of subvol bits increases to 18, but if every inode
fits in 46 bits, we know that every existing inode has a 0 in
the 18th subvol ID bit of the inode number, so there is no ambiguity.
If you don't know the maximum subvol ID, you can guess based on the
position of the large run of zero bits in the middle of the integer--not
reliable, but good enough for a guess if you were looking at 'ls -li'
output (and wrote the inode numbers in hex).
In the pathological case (the maximum subvol ID and maximum inode number
require more than 64 total bits) we return ENOSPC.
This can all be done when btrfs fills in an inode struct. There's no need
to change the on-disk format, other than to track the highest inode and
subvol number. btrfs can compute the maxima in reasonable but non-zero
time by searching trees on mount, so an incompatible disk format change
would only be needed to avoid making mount slower.
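As a rough illustration of the packing (illustration only - not an
existing btrfs interface, and the real scheme would track the
filesystem-wide maxima described above rather than the naive per-pair
overlap check used here):

#include <stdint.h>
#include <stdio.h>

/* Reverse the bits of a 64-bit value. */
static uint64_t rev64(uint64_t x)
{
	uint64_t r = 0;

	for (int i = 0; i < 64; i++)
		r |= ((x >> i) & 1) << (63 - i);
	return r;
}

/* Subvol bits grow from the high end (bit-reversed), inode bits from
 * the low end.  Returns 0 if the two fields would overlap. */
static uint64_t pack_ino(uint64_t subvol, uint64_t ino)
{
	uint64_t s = rev64(subvol);

	if (s & ino)
		return 0;	/* more than 64 total bits needed */
	return s | ino;
}

int main(void)
{
	/* subvol 256, inode 768 -> 0x0080000000000300, as above */
	printf("0x%016llx\n", (unsigned long long)pack_ino(256, 768));
	return 0;
}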
> Despite that I still hope to have a way to distinguish the "subvolume"
> boundary.
Packing the bits into a single uint64 doesn't help with this--it does
the opposite. Subvol boundaries become harder to see without deliberate
checking (i.e. not the traditional parent.st_dev != child.st_dev test).
Judging from previous btrfs-related complaints, some users do want
"stealth" subvols whose boundaries are not accidentally visible, so the
new behavior could be a feature for someone.
> If completely inside btrfs, it's pretty simple to locate a subvolume
> boundary.
> All subvolumes have the same inode number, 256.
>
> Maybe we could reserve some special "squished" inode number to indicate
> a boundary inside a filesystem.
>
> E.g. reserve (u64)-1 as a special indicator for subvolume boundaries,
> as most filesystems would have reserved super-high inode numbers anyway.
>
>
> >
> > If we can make inode numbers unique, we can possibly leave the st_dev
> > changing at subvols so that "du -x" works as currently expected.
> >
> > One thought I had was to use a strong hash to combine the subvol object
> > id and the inode object id into a 64bit number. What is the chance of
> > a collision in practice :-)
>
> But with just 64bits, conflicts will happen anyway...
The collision rate might be low enough that we could just skip over the
colliding numbers, but we'd have to have some kind of in-memory collision
map to avoid slowing down inode creation (currently the next inode number
is more or less "++last_inode_number", and looking up inodes to see if
they exist first would slow down new file creation a lot).
> Thanks,
> Qu
> >
> > Thanks,
> > NeilBrown
> >
On Fri, 30 Jul 2021, Miklos Szeredi wrote:
> On Fri, 30 Jul 2021 at 09:34, NeilBrown <[email protected]> wrote:
>
> > But I'm curious about your reference to "some sort of subvolume
> > structure that the VFS knows about". Do you have any references, or can
> > you suggest a search term I could try?
>
> Found this:
> https://lore.kernel.org/linux-fsdevel/[email protected]/
>
Excellent, thanks. Very useful.
OK. Time for a third perspective.
With its current framing the problem is unsolvable. So it needs to be
reframed.
By "current framing", I mean that we are trying to get btrfs to behave
in a way that meets current user-space expectations. Specially, the
expectation that each object in any filesystem can be uniquely
identified by a 64bit inode number. btrfs provides functionality which
needs more than 64bits. So it simple does not fit. btrfs currently
fudges with device numbers to hide the problem. This is at best an
incomplete solution, and is already being shown to be insufficient.
Therefore we need to change user-space expectations. This has been done
before multiple times - often by breaking things and leaving it up to
user-space to fix it. My favourite example is that NFSv2 broke the
creation of lock files with O_CREAT|O_EXCL. User-space started using
hard-links to achieve the same result. When NFSv3 added reliable
O_CREAT|O_EXCL support, it hardly mattered... but I digress.
I think we need to bite the bullet and decide that 64 bits is not
enough, and in fact no number of bits will ever be enough. overlayfs
makes this clear. overlayfs merges multiple filesystems, and so needs
strictly more bits to uniquely identify all inodes than any of the
filesystems use. Currently it overloads the high bits and hopes the
filesystem doesn't use them.
The "obvious" choice for a replacement is the file handle provided by
name_to_handle_at() (falling back to st_ino if name_to_handle_at isn't
supported by the filesystem). This returns an extensible opaque
byte-array. It is *already* more reliable than st_ino. Comparing
st_ino is only a reliable way to check if two files are the same if you
have both of them open. If you don't, then one of the files might have
been deleted and the inode number reused for the other. A filehandle
contains a generation number which protects against this.
So I think we need to strongly encourage user-space to start using
name_to_handle_at() whenever there is a need to test if two things are
the same.
This frees us to be a little less precise about assuring st_ino is
always unique, but only a little. We still want to minimize conflicts
and avoid them in common situations.
A filehandle typically has some bytes used to locate the inode -
"location" - and some to validate it - "generation". In general, st_ino
must now be seen as a hash of the "location". It could be a generic hash
(xxhash? jhash?) or it could be a careful xor of the bits.
For btrfs, the "location" is root.objectid ++ file.objectid. I think
the inode should become (file.objectid ^ swab64(root.objectid)). This
will provide numbers that are unique until you get very large subvols,
and very many subvols. It also guarantees that two inodes in the same
subvol have different st_ino.
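In userspace terms the construction is just (using __builtin_bswap64 as
a stand-in for the kernel's swab64(); the parameters are the btrfs key
fields named above):

#include <stdint.h>

/* st_ino = file.objectid ^ swab64(root.objectid): the subvol id lands
 * in the high bytes and the inode number stays in the low bytes, so
 * the two only collide once there are both very many subvols and very
 * large ones.  E.g. root 257, file 300 -> 0x010100000000012c. */
static uint64_t btrfs_hashed_ino(uint64_t root_objectid,
				 uint64_t file_objectid)
{
	return file_objectid ^ __builtin_bswap64(root_objectid);
}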
This will quickly cause problems for overlayfs as it means that if btrfs
is used with overlayfs, the top few bits won't be zero. Possibly btrfs
could be polite and shift the swab64(root.objectid) down 8 bits to make
room. Possibly overlayfs should handle this case (top N bits not all
zero), and switch to a generic hash of the inode number (or preferably
the filehandle) into (64-N) bits.
If we convince user-space to use filehandles to compare objects, the NFS
problems I was initially trying to address go away. Even before that,
if btrfs switches to a hashed (i.e. xor) inode number, then the problems
also go away. But they aren't the only problems here.
Accessing the fhandle isn't always possible. For example, reading
/proc/locks reports major:minor:inode-number for each file (this is the
major:minor from the superblock, so btrfs internal dev numbers aren't
used). The filehandle is simply not available. I think the only way
to address this is to create a new file. "/proc/locks2" :-)
Similarly the "lock:" lines in /proc/$PID/fdinfo/$FD need to be duplicated
as "lock2:" lines with filehandle instead of inode number. Ditto for
'inotify:' lines and possibly others.
Similarly /proc/$PID/maps contains the inode number with no fhandle.
The situation isn't so bad there as there is a path name, and you can
even call name_to_handle_at("/proc/$PID/map_files/$RANGE") to get the
fhandle. It might be better to provide a new file though.
Next we come to the use of different device numbers in the one btrfs
filesystem. I'm of the opinion that this was an unfortunate choice
that we need to deprecate. Tools that use fhandle won't need it to
differentiate inodes, but there is more to the story than just that
need.
As has been mentioned, people depend on "du -x" and "find -mount" (aka
"-xdev") to stay within a "subvol". We need to provide a clean
alternative before discouraging that usage.
xfs, ext4, fuse, and f2fs each (can) maintain a "project id" for each
inode, which effectively groups inodes into a tree. This is used for
project quotas. At one level this is conceptually very similar to the
btrfs subtree.root.objectid. It is different in that it is only 32 bits
(:-[) and is mapped between user name-spaces like uids and gids. It is
similar in that it identifies a group of inodes that are accounted
together and are (generally) contiguous in a tree.
If we encouraged "du" to have a "--proj" option (-j) which stays within
a project, and gave a similar option to find, that could be broadly
useful. Then if btrfs provided the subvol objectid as fsx_projid
(available in FS_IOC_FSGETXATTR ioctl), then "du --proj" on btrfs would
stay in a subvol. Longer term it might make sense to add a 64bit
project-id to statx. I don't think it would make sense for btrfs to
have a 'project' concept that is different from the "subvolume".
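For illustration, this is how a "du --proj"-style tool could read the
project ID through the existing interface (FS_IOC_FSGETXATTR and struct
fsxattr are real and come from <linux/fs.h>; btrfs actually reporting
the subvol objectid there is only the proposal above):

#include <stdint.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/fs.h>

/* Read the project ID of 'path'; returns 0 on success, -1 on error. */
static int get_projid(const char *path, uint32_t *projid)
{
	struct fsxattr fsx;
	int fd = open(path, O_RDONLY);

	if (fd < 0)
		return -1;
	if (ioctl(fd, FS_IOC_FSGETXATTR, &fsx) < 0) {
		close(fd);
		return -1;
	}
	*projid = fsx.fsx_projid;
	close(fd);
	return 0;
}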
It would be cool if "df" could have a "--proj" (or similar) flag so that
it would report the usage of a "subtree" (given a path). Unfortunately
there isn't really an interface for this. Going through the quota
system might be nice, but I don't think it would work.
Another thought about btrfs device numbers is that, provided inode
numbers are (nearly) unique, we don't really need more than two. A btrfs
filesystem could allocate 2 anon device numbers. One would be assigned
to the root, and each subvolume would get whichever device number its
parent doesn't have. This would stop "du -x" and "find -mount" and
similar from crossing into subvols. There could be a mount option to
select between "1", "2", and "many" device numbers for a filesystem.
- I note that cephfs plays games with st_dev too... I wonder if we can
learn anything from that.
- audit uses sb->s_dev without asking the filesystem. So it won't
handle btrfs correctly. I wonder if there is room for it to use
file handles.
I accept that I'm proposing some BIG changes here, and they might break
things. But btrfs is already broken in various ways. I think we need a
goal to work towards which will eventually remove all breakage and still
have room for expansion. I think that must include:
- providing as-unique-as-practical inode numbers across the whole
filesystem, and deprecating the internal use of different device
numbers. Make it possible to mount without them ASAP, and aim to
make that the default eventually.
- working with user-space tool/library developers to use
name_to_handle_at() to identify inodes, only using st_ino
as a fall-back
- adding filehandles to various /proc etc files as needed, either
duplicating lines or duplicating files. And helping applications which
use these files to migrate (I would *NOT* change the dev numbers in
the current file to report the internal btrfs dev numbers the way that
SUSE does. I would prefer that current breakage could be used to
motivate developers towards depending instead on fhandles).
- exporting subtree (aka subvol) id to user-space, possibly paralleling
proj_id in some way, and extending various tools to understand
subtrees
Who's with me??
NeilBrown
On Mon, Aug 02, 2021 at 02:18:29PM +1000, NeilBrown wrote:
> I think we need to bite the bullet and decide that 64 bits is not
> enough, and in fact no number of bits will ever be enough. overlayfs
> makes this clear.
Sure - let's go for broke and use XML. Oh, wait - it's 8 months too
early...
> So I think we need to strongly encourage user-space to start using
> name_to_handle_at() whenever there is a need to test if two things are
> the same.
... and forgetting the inconvenient facts, such as that two different
fhandles may correspond to the same object.
> I accept that I'm proposing some BIG changes here, and they might break
> things. But btrfs is already broken in various ways. I think we need a
> goal to work towards which will eventually remove all breakage and still
> have room for expansion. I think that must include:
>
> - providing as-unique-as-practical inode numbers across the whole
> filesystem, and deprecating the internal use of different device
> numbers. Make it possible to mount without them ASAP, and aim to
> make that the default eventually.
> - working with user-space tool/library developers to use
> name_to_handle_at() to identify inodes, only using st_ino
> as a fall-back
> - adding filehandles to various /proc etc files as needed, either
> duplicating lines or duplicating files. And helping applications which
> use these files to migrate (I would *NOT* change the dev numbers in
> the current file to report the internal btrfs dev numbers the way that
> SUSE does. I would prefer that current breakage could be used to
> motivate developers towards depending instead on fhandles).
> - exporting subtree (aka subvol) id to user-space, possibly paralleling
> proj_id in some way, and extending various tools to understand
> subtrees
>
> Who's with me??
Cf. "Poe Law"...
On Mon, 02 Aug 2021, Al Viro wrote:
> On Mon, Aug 02, 2021 at 02:18:29PM +1000, NeilBrown wrote:
>
> > I think we need to bite the bullet and decide that 64 bits is not
> > enough, and in fact no number of bits will ever be enough. overlayfs
> > makes this clear.
>
> Sure - let's go for broke and use XML. Oh, wait - it's 8 months too
> early...
>
> > So I think we need to strongly encourage user-space to start using
> > name_to_handle_at() whenever there is a need to test if two things are
> > the same.
>
> ... and forgetting the inconvenient facts, such as that two different
> fhandles may correspond to the same object.
Can they? They certainly can if the "connectable" flag is passed.
name_to_handle_at() cannot set that flag.
nfsd can, so using name_to_handle_at() on an NFS filesystem isn't quite
perfect. However it is the best that can be done over NFS.
Or is there some other situation where two different filehandles can be
reported for the same inode?
Do you have a better suggestion?
NeilBrown
Hi Neil!
Wow, this is a bit overwhelming for me. However, I have a very specific
question on behalf of userspace developers, in order to hopefully provide
valuable input to the KDE Baloo desktop search developers:
NeilBrown - 02.08.21, 06:18:29 CEST:
> The "obvious" choice for a replacement is the file handle provided by
> name_to_handle_at() (falling back to st_ino if name_to_handle_at isn't
> supported by the filesystem). This returns an extensible opaque
> byte-array. It is *already* more reliable than st_ino. Comparing
> st_ino is only a reliable way to check if two files are the same if
> you have both of them open. If you don't, then one of the files
> might have been deleted and the inode number reused for the other. A
> filehandle contains a generation number which protects against this.
>
> So I think we need to strongly encourage user-space to start using
> name_to_handle_at() whenever there is a need to test if two things are
> the same.
How could that work for Baloo's use case of seeing whether a file it
encounters is already in its database or whether it is a new file?
Would Baloo compare the whole file handle, or just certain fields, or make a
hash of the filehandle, or whatever? Could you, in pseudo-code or
something, describe the approach you'd suggest? I'd then share it on:
Bug 438434 - Baloo appears to be indexing twice the number of files than
are actually in my home directory
https://bugs.kde.org/438434
Best,
--
Martin
On Mon, Aug 2, 2021 at 8:41 AM NeilBrown <[email protected]> wrote:
>
> On Mon, 02 Aug 2021, Al Viro wrote:
> > On Mon, Aug 02, 2021 at 02:18:29PM +1000, NeilBrown wrote:
> >
> > > I think we need to bite the bullet and decide that 64 bits is not
> > > enough, and in fact no number of bits will ever be enough. overlayfs
> > > makes this clear.
> >
> > Sure - let's go for broke and use XML. Oh, wait - it's 8 months too
> > early...
> >
> > > So I think we need to strongly encourage user-space to start using
> > > name_to_handle_at() whenever there is a need to test if two things are
> > > the same.
> >
> > ... and forgetting the inconvenient facts, such as that two different
> > fhandles may correspond to the same object.
>
> Can they? They certainly can if the "connectable" flag is passed.
> name_to_handle_at() cannot set that flag.
> nfsd can, so using name_to_handle_at() on an NFS filesystem isn't quite
> perfect. However it is the best that can be done over NFS.
>
> Or is there some other situation where two different filehandles can be
> reported for the same inode?
>
> Do you have a better suggestion?
>
Neil,
I think the plan of "changing the world" is not very realistic.
Sure, *some* tools can be changed, but all of them?
I went back to read your initial cover letter to understand the
problem and what I mostly found there was that the view of
/proc/x/mountinfo was hiding information that is important for
some tools to understand what is going on with btrfs subvols.
Well I am not a UNIX history expert, but I suppose that
/proc/PID/mountinfo was created because /proc/mounts and
/proc/PID/mounts no longer provided tools with all the information
about Linux mounts.
Maybe it's time for a new interface to query the more advanced
sb/mount topology? fsinfo() maybe? With mount2 compatible API for
traversing mounts that is not limited to reporting all entries inside
a single page. I suppose we could go for some hierarchical view
under /proc/PID/mounttree. I don't know - new API is hard.
In any case, instead of changing st_dev and st_ino or changing the
world to work with file handles, why not add inode generation (and
maybe subvol id) to statx()?
Filesystems that care enough will provide this information, and tools that
care enough will use it.
Thanks,
Amir.
---- From: Amir Goldstein <[email protected]> -- Sent: 2021-08-02 - 09:54 ----
> On Mon, Aug 2, 2021 at 8:41 AM NeilBrown <[email protected]> wrote:
>>
>> On Mon, 02 Aug 2021, Al Viro wrote:
>> > On Mon, Aug 02, 2021 at 02:18:29PM +1000, NeilBrown wrote:
>> >
>> > > I think we need to bite the bullet and decide that 64 bits is not
>> > > enough, and in fact no number of bits will ever be enough. overlayfs
>> > > makes this clear.
>> >
>> > Sure - let's go for broke and use XML. Oh, wait - it's 8 months too
>> > early...
>> >
>> > > So I think we need to strongly encourage user-space to start using
>> > > name_to_handle_at() whenever there is a need to test if two things are
>> > > the same.
>> >
>> > ... and forgetting the inconvenient facts, such as that two different
>> > fhandles may correspond to the same object.
>>
>> Can they? They certainly can if the "connectable" flag is passed.
>> name_to_handle_at() cannot set that flag.
>> nfsd can, so using name_to_handle_at() on an NFS filesystem isn't quite
>> perfect. However it is the best that can be done over NFS.
>>
>> Or is there some other situation where two different filehandles can be
>> reported for the same inode?
>>
>> Do you have a better suggestion?
>>
>
> Neil,
>
> I think the plan of "changing the world" is not very realistic.
> Sure, *some* tools can be changed, but all of them?
>
> I went back to read your initial cover letter to understand the
> problem and what I mostly found there was that the view of
> /proc/x/mountinfo was hiding information that is important for
> some tools to understand what is going on with btrfs subvols.
>
> Well I am not a UNIX history expert, but I suppose that
> /proc/PID/mountinfo was created because /proc/mounts and
> /proc/PID/mounts no longer provided tools with all the information
> about Linux mounts.
>
> Maybe it's time for a new interface to query the more advanced
> sb/mount topology? fsinfo() maybe? With mount2 compatible API for
> traversing mounts that is not limited to reporting all entries inside
> a single page. I suppose we could go for some hierarchical view
> under /proc/PID/mounttree. I don't know - new API is hard.
>
> In any case, instead of changing st_dev and st_ino or changing the
> world to work with file handles, why not add inode generation (and
> maybe subvol id) to statx()?
> Filesystems that care enough will provide this information, and tools that
> care enough will use it.
>
> Thanks,
> Amir.
I think it would be better and easier if nfs provided clients with virtual inodes and kept an internal mapping to actual filesystem inodes. Samba does this with the mount.cifs -o noserverino option, and as far as I know it works pretty well.
This could be made either an export option (/mnt/foo *(noserverino)) or, like in the Samba case, a mount option.
This way existing tools will continue to work and we don't have to reinvent various Linux subsystems. Because it's an option, users that don't use btrfs or other filesystems with snapshots can simply skip it.
Thanks,
Forza
On Mon, Aug 02, 2021 at 02:18:29PM +1000, NeilBrown wrote:
> For btrfs, the "location" is root.objectid ++ file.objectid. I think
> the inode should become (file.objectid ^ swab64(root.objectid)). This
> will provide numbers that are unique until you get very large subvols,
> and very many subvols.
If you snapshot a filesystem, I'd expect, at least by default, that
inodes in the snapshot stay the same as in the snapshotted
filesystem.
--b.
On 8/2/21 3:54 AM, Amir Goldstein wrote:
> On Mon, Aug 2, 2021 at 8:41 AM NeilBrown <[email protected]> wrote:
>>
>> On Mon, 02 Aug 2021, Al Viro wrote:
>>> On Mon, Aug 02, 2021 at 02:18:29PM +1000, NeilBrown wrote:
>>>
>>>> I think we need to bite the bullet and decide that 64 bits is not
>>>> enough, and in fact no number of bits will ever be enough. overlayfs
>>>> makes this clear.
>>>
>>> Sure - let's go for broke and use XML. Oh, wait - it's 8 months too
>>> early...
>>>
>>>> So I think we need to strongly encourage user-space to start using
>>>> name_to_handle_at() whenever there is a need to test if two things are
>>>> the same.
>>>
>>> ... and forgetting the inconvenient facts, such as that two different
>>> fhandles may correspond to the same object.
>>
>> Can they? They certainly can if the "connectable" flag is passed.
>> name_to_handle_at() cannot set that flag.
>> nfsd can, so using name_to_handle_at() on an NFS filesystem isn't quite
>> perfect. However it is the best that can be done over NFS.
>>
>> Or is there some other situation where two different filehandles can be
>> reported for the same inode?
>>
>> Do you have a better suggestion?
>>
>
> Neil,
>
> I think the plan of "changing the world" is not very realistic.
> Sure, *some* tools can be changed, but all of them?
>
> I went back to read your initial cover letter to understand the
> problem and what I mostly found there was that the view of
> /proc/x/mountinfo was hiding information that is important for
> some tools to understand what is going on with btrfs subvols.
>
> Well I am not a UNIX history expert, but I suppose that
> /proc/PID/mountinfo was created because /proc/mounts and
> /proc/PID/mounts no longer provided tools with all the information
> about Linux mounts.
>
> Maybe it's time for a new interface to query the more advanced
> sb/mount topology? fsinfo() maybe? With mount2 compatible API for
> traversing mounts that is not limited to reporting all entries inside
> a single page. I suppose we could go for some hierarchical view
> under /proc/PID/mounttree. I don't know - new API is hard.
>
> In any case, instead of changing st_dev and st_ino or changing the
> world to work with file handles, why not add inode generation (and
> maybe subvol id) to statx()?
> Filesystems that care enough will provide this information, and tools that
> care enough will use it.
>
Can y'all wait till I'm back from vacation, goddamn ;)
This is what I'm aiming for; I spent some time looking at how many
places we string-parse /proc/<whatever>/mounts and my head hurts.
Btrfs already has a reasonable solution for this: we have UUIDs for
everything. UUIDs aren't a strictly btrfs thing either; all the file
systems have some sort of UUID identifier, hell, it's built into blkid. I
would love it if we could do a better job of letting applications query
information about where they are. And we could expose this with the
relatively common UUID format. You ask what fs you're in, you get the
FS UUID, and then if you're on Btrfs you get the specific subvolume UUID
you're in. That way you could do more fancy things like know if you've
wandered into a new file system completely or just a different subvolume.
We have to keep the st_ino/st_dev thing for backwards compatibility, but
make it easier to get more info out of the file system.
We could in theory expose just the subvolid also, since that's a nice
simple u64, but it limits our ability to do new fancy shit in the
future. It's not a bad solution, but like I said I think we need to
take a step back and figure out what problem we're specifically trying
to solve, and work from there. Starting from automounts and working our
way back is not going very well. Thanks,
Josef
> In any case, instead of changing st_dev and st_ino or changing the world to
> work with file handles, why not add inode generation (and maybe subvol id) to
> statx()?
> Filesystems that care enough will provide this information, and tools that care
> enough will use it.
And how is NFS (especially v2 and v3; v4.2 at least can add attributes) going to provide these values for statx() if applications are going to start depending on them? And especially, will this work for those applications that need to distinguish inodes while working on an NFS-exported btrfs filesystem?
Frank
On 8/2/21 7:39 AM, J. Bruce Fields wrote:
> On Mon, Aug 02, 2021 at 02:18:29PM +1000, NeilBrown wrote:
>> For btrfs, the "location" is root.objectid ++ file.objectid. I think
>> the inode should become (file.objectid ^ swab64(root.objectid)). This
>> will provide numbers that are unique until you get very large subvols,
>> and very many subvols.
>
> If you snapshot a filesystem, I'd expect, at least by default, that
> inodes in the snapshot stay the same as in the snapshotted
> filesystem.
>
> --b.
>
For copy-on-write systems like ZFS, how could it be otherwise?
On Mon, Aug 02, 2021 at 03:32:45PM -0500, Patrick Goetz wrote:
> On 8/2/21 7:39 AM, J. Bruce Fields wrote:
> >On Mon, Aug 02, 2021 at 02:18:29PM +1000, NeilBrown wrote:
> >>For btrfs, the "location" is root.objectid ++ file.objectid. I think
> >>the inode should become (file.objectid ^ swab64(root.objectid)). This
> >>will provide numbers that are unique until you get very large subvols,
> >>and very many subvols.
> >
> >If you snapshot a filesystem, I'd expect, at least by default, that
> >inodes in the snapshot stay the same as in the snapshotted
> >filesystem.
>
> For copy-on-write systems like ZFS, how could it be otherwise?
I'm reacting to Neil's suggestion above, which (as I understand it)
would result in different inode numbers.
--b.
On Mon, 02 Aug 2021, J. Bruce Fields wrote:
> On Mon, Aug 02, 2021 at 02:18:29PM +1000, NeilBrown wrote:
> > For btrfs, the "location" is root.objectid ++ file.objectid. I think
> > the inode should become (file.objectid ^ swab64(root.objectid)). This
> > will provide numbers that are unique until you get very large subvols,
> > and very many subvols.
>
> If you snapshot a filesystem, I'd expect, at least by default, that
> > inodes in the snapshot stay the same as in the snapshotted
> filesystem.
As I said: we need to challenge and revise user-space (and meat-space)
expectations.
In btrfs, you DO NOT snapshot a FILESYSTEM. Rather, you effectively
create a 'reflink' for a subtree (only works on subtrees that have been
correctly created with the poorly named "btrfs subvolume" command).
As with any reflink, the original has the same inode number that it did
before, the new version has a different inode number (though in current
BTRFS, half of the inode number is hidden from user-space, so it looks
like the inode number hasn't changed).
NeilBrown
On Mon, 02 Aug 2021, Amir Goldstein wrote:
> On Mon, Aug 2, 2021 at 8:41 AM NeilBrown <[email protected]> wrote:
> >
> > On Mon, 02 Aug 2021, Al Viro wrote:
> > > On Mon, Aug 02, 2021 at 02:18:29PM +1000, NeilBrown wrote:
> > >
> > > > I think we need to bite the bullet and decide that 64 bits is not
> > > > enough, and in fact no number of bits will ever be enough. overlayfs
> > > > makes this clear.
> > >
> > > Sure - let's go for broke and use XML. Oh, wait - it's 8 months too
> > > early...
> > >
> > > > So I think we need to strongly encourage user-space to start using
> > > > name_to_handle_at() whenever there is a need to test if two things are
> > > > the same.
> > >
> > > ... and forgetting the inconvenient facts, such as that two different
> > > fhandles may correspond to the same object.
> >
> > Can they? They certainly can if the "connectable" flag is passed.
> > name_to_handle_at() cannot set that flag.
> > nfsd can, so using name_to_handle_at() on an NFS filesystem isn't quite
> > perfect. However it is the best that can be done over NFS.
> >
> > Or is there some other situation where two different filehandles can be
> > reported for the same inode?
> >
> > Do you have a better suggestion?
> >
>
> Neil,
>
> I think the plan of "changing the world" is not very realistic.
I disagree. It has happened before, it will happen again. The only
difference about my proposal is that I'm suggesting the change be
proactive rather than reactive.
> Sure, *some* tools can be changed, but all of them?
We only need to change the tools that notice there is a problem. So it
is important to minimize the effect on existing tools, even when we
cannot reduce it to zero. We then fix things that are likely to see a
problem, or that actually do. And we clearly document the behaviour and
how to deal with it, for code that we cannot directly affect.
Remember: there is ALREADY breakage that has been fixed. btrfs does
*not* behave like a "normal" filesystem. Nor does NFS. Multiple tools
have been adjusted to work with them. Let's not pretend that will never
happen again, but instead use the dynamic to drive evolution in the way
we choose.
>
> I went back to read your initial cover letter to understand the
> problem and what I mostly found there was that the view of
> /proc/x/mountinfo was hiding information that is important for
> some tools to understand what is going on with btrfs subvols.
That was where I started, but not where I ended. There are *lots* of
places that currently report inconsistent information for btrfs subvols.
>
> Well I am not a UNIX history expert, but I suppose that
> /proc/PID/mountinfo was created because /proc/mounts and
> /proc/PID/mounts no longer provided tools with all the information
> about Linux mounts.
>
> Maybe it's time for a new interface to query the more advanced
> sb/mount topology? fsinfo() maybe? With mount2 compatible API for
> traversing mounts that is not limited to reporting all entries inside
> a single page. I suppose we could go for some hierarchical view
> under /proc/PID/mounttree. I don't know - new API is hard.
Yes, exactly - but not just for mounts. Yes, we need new APIs (because
the old ones have been broken in various ways). That is exactly what
I'm proposing. But "fixing" mountinfo turns out to be little more than
rearranging deck-chairs on the Titanic.
>
> In any case, instead of changing st_dev and st_ino or changing the
> world to work with file handles, why not add inode generation (and
> maybe subvol id) to statx()?
The enormous benefit of filehandles is that they are supported by
kernels running today. As others have commented, they also work over
NFS.
But I would be quite happy to see more information made available
through statx - provided the meaning of that information was clearly
specified - both what can be assumed about it and what cannot.
Thanks,
NeilBrown
> Filesystems that care enough will provide this information, and tools that
> care enough will use it.
>
> Thanks,
> Amir.
>
>
On Mon, 02 Aug 2021, Martin Steigerwald wrote:
> Hi Neil!
>
> Wow, this is a bit overwhelming for me. However, I have a very specific
> question on behalf of userspace developers, in order to hopefully provide
> valuable input to the KDE Baloo desktop search developers:
>
> NeilBrown - 02.08.21, 06:18:29 CEST:
> > The "obvious" choice for a replacement is the file handle provided by
> > name_to_handle_at() (falling back to st_ino if name_to_handle_at isn't
> > supported by the filesystem). This returns an extensible opaque
> > byte-array. It is *already* more reliable than st_ino. Comparing
> > st_ino is only a reliable way to check if two files are the same if
> > you have both of them open. If you don't, then one of the files
> > might have been deleted and the inode number reused for the other. A
> > filehandle contains a generation number which protects against this.
> >
> > So I think we need to strongly encourage user-space to start using
> > name_to_handle_at() whenever there is a need to test if two things are
> > the same.
>
> How could that work for Baloo's use case of seeing whether a file it
> encounters is already in its database or whether it is a new file?
>
> Would Baloo compare the whole file handle, or just certain fields, or make a
> hash of the filehandle, or whatever? Could you, in pseudo-code or
> something, describe the approach you'd suggest? I'd then share it on:
Yes, the whole filehandle.
struct file_handle {
	unsigned int  handle_bytes;   /* Size of f_handle [in, out] */
	int           handle_type;    /* Handle type [out] */
	unsigned char f_handle[0];    /* File identifier (sized by
	                                 caller) [out] */
};
i.e. compare handle_type, handle_bytes, and handle_bytes worth of
f_handle.
This file_handle is local to the filesystem. Two different filesystems
can use the same filehandle for different files. So the identity of the
filesystem needs to be combined with the file_handle.
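In rough pseudo-code-as-C (minimal error handling; name_to_handle_at()
and MAX_HANDLE_SZ come from <fcntl.h> with _GNU_SOURCE):

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdlib.h>
#include <string.h>

/* Compare two paths by filehandle: 1 if the same object, 0 if not,
 * -1 on error.  Only meaningful when both paths are on the same
 * filesystem, as noted above. */
static int same_file(const char *a, const char *b)
{
	struct file_handle *fa, *fb;
	int mount_a, mount_b, ret = -1;

	fa = malloc(sizeof(*fa) + MAX_HANDLE_SZ);
	fb = malloc(sizeof(*fb) + MAX_HANDLE_SZ);
	if (!fa || !fb)
		goto out;
	fa->handle_bytes = MAX_HANDLE_SZ;
	fb->handle_bytes = MAX_HANDLE_SZ;

	/* mount_a/mount_b identify the mounts; combining the handle with
	 * the filesystem's identity is the caller's job, as noted above */
	if (name_to_handle_at(AT_FDCWD, a, fa, &mount_a, 0) < 0 ||
	    name_to_handle_at(AT_FDCWD, b, fb, &mount_b, 0) < 0)
		goto out;

	/* compare handle_type, handle_bytes, and handle_bytes worth of
	 * f_handle, exactly as described above */
	ret = fa->handle_type == fb->handle_type &&
	      fa->handle_bytes == fb->handle_bytes &&
	      memcmp(fa->f_handle, fb->f_handle, fa->handle_bytes) == 0;
out:
	free(fa);
	free(fb);
	return ret;
}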
>
> Bug 438434 - Baloo appears to be indexing twice the number of files than
> are actually in my home directory
>
> https://bugs.kde.org/438434
This bug wouldn't be addressed by using the filehandle. Using a
filehandle allows you to compare two files within a single filesystem.
This bug is about comparing two filesystems either side of a reboot, to
see if they are the same.
As has already been mentioned in that bug, statfs().f_fsid is the best
solution (unless comparing the mount point is satisfactory).
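A minimal sketch of that check (statfs() and f_fsid are standard;
whether f_fsid is stable across reboots is filesystem-dependent, which
is exactly the caveat being discussed in the bug):

#include <string.h>
#include <sys/vfs.h>

/* 1 if the two paths report the same filesystem id, 0 if not,
 * -1 on error. */
static int same_fsid(const char *a, const char *b)
{
	struct statfs sa, sb;

	if (statfs(a, &sa) < 0 || statfs(b, &sb) < 0)
		return -1;
	return memcmp(&sa.f_fsid, &sb.f_fsid, sizeof(sa.f_fsid)) == 0;
}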
NeilBrown
On Mon, 02 Aug 2021, Forza wrote:
>
> ---- From: Amir Goldstein <[email protected]> -- Sent: 2021-08-02 - 09:54 ----
>
> > On Mon, Aug 2, 2021 at 8:41 AM NeilBrown <[email protected]> wrote:
> >>
> >> On Mon, 02 Aug 2021, Al Viro wrote:
> >> > On Mon, Aug 02, 2021 at 02:18:29PM +1000, NeilBrown wrote:
> >> >
> >> > > I think we need to bite the bullet and decide that 64 bits is not
> >> > > enough, and in fact no number of bits will ever be enough. overlayfs
> >> > > makes this clear.
> >> >
> >> > Sure - let's go for broke and use XML. Oh, wait - it's 8 months too
> >> > early...
> >> >
> >> > > So I think we need to strongly encourage user-space to start using
> >> > > name_to_handle_at() whenever there is a need to test if two things are
> >> > > the same.
> >> >
> >> > ... and forgetting the inconvenient facts, such as that two different
> >> > fhandles may correspond to the same object.
> >>
> >> Can they? They certainly can if the "connectable" flag is passed.
> >> name_to_handle_at() cannot set that flag.
> >> nfsd can, so using name_to_handle_at() on an NFS filesystem isn't quite
> >> perfect. However it is the best that can be done over NFS.
> >>
> >> Or is there some other situation where two different filehandles can be
> >> reported for the same inode?
> >>
> >> Do you have a better suggestion?
> >>
> >
> > Neil,
> >
> > I think the plan of "changing the world" is not very realistic.
> > Sure, *some* tools can be changed, but all of them?
> >
> > I went back to read your initial cover letter to understand the
> > problem and what I mostly found there was that the view of
> > /proc/x/mountinfo was hiding information that is important for
> > some tools to understand what is going on with btrfs subvols.
> >
> > Well I am not a UNIX history expert, but I suppose that
> > /proc/PID/mountinfo was created because /proc/mounts and
> > /proc/PID/mounts no longer provided tools with all the information
> > about Linux mounts.
> >
> > Maybe it's time for a new interface to query the more advanced
> > sb/mount topology? fsinfo() maybe? With mount2 compatible API for
> > traversing mounts that is not limited to reporting all entries inside
> > a single page. I suppose we could go for some hierarchical view
> > under /proc/PID/mounttree. I don't know - new API is hard.
> >
> > In any case, instead of changing st_dev and st_ino or changing the
> > world to work with file handles, why not add inode generation (and
> > maybe subvol id) to statx()?
> > Filesystems that care enough will provide this information, and tools that
> > care enough will use it.
> >
> > Thanks,
> > Amir.
>
> I think it would be better and easier if nfs provided clients with
> virtual inodes and kept an internal mapping to actual filesystem
> inodes. Samba does this with the mount.cifs -o noserverino option,
> and as far as I know it works pretty well.
This approach does have its place, but it is far from perfect.
POSIX expects an inode number to be unique and stable for the lifetime
of the file. Different applications have different expectations ranging
from those which don't care at all to those which really need the full
lifetime and full uniqueness (like an indexing tool).
Dynamically provided inode numbers cannot provide the full guarantee.
If implemented on the NFS client, it could ensure two inodes that are in
cache have different inode numbers, and it could ensure inode numbers
are not reused for a very long time, but it could not ensure they remain
stable for the lifetime of the file. To do that last, it would need
stable storage to keep a copy of all the metadata of the whole
filesystem.
Implementing it on the NFS server would provide fewer guarantees. The
server has limited insight into the client-side cache, so inode numbers
might change while a file was in cache on the client.
NeilBrown
>
> This could be made either an export option (/mnt/foo *(noserverino)) or, like in the Samba case, a mount option.
>
> This way existing tools will continue to work and we don't have to reinvent various Linux subsystems. Because it's an option, users that don't use btrfs or other filesystems with snapshots can simply skip it.
>
> Thanks,
> Forza
>
>
On Tue, Aug 03, 2021 at 07:10:44AM +1000, NeilBrown wrote:
> On Mon, 02 Aug 2021, J. Bruce Fields wrote:
> > On Mon, Aug 02, 2021 at 02:18:29PM +1000, NeilBrown wrote:
> > > For btrfs, the "location" is root.objectid ++ file.objectid. I think
> > > the inode should become (file.objectid ^ swab64(root.objectid)). This
> > > will provide numbers that are unique until you get very large subvols,
> > > and very many subvols.
> >
> > If you snapshot a filesystem, I'd expect, at least by default, that
> > > inodes in the snapshot stay the same as in the snapshotted
> > filesystem.
>
> As I said: we need to challenge and revise user-space (and meat-space)
> expectations.
The example that came to mind is people that export a snapshot, then
replace it with an updated snapshot, and expect that to be transparent
to clients.
Our client will error out with ESTALE if it notices an inode number
changed out from under it.
I don't know if there are other such cases. It seems like surprising
behavior to me, though.
--b.
> In btrfs, you DO NOT snapshot a FILESYSTEM. Rather, you effectively
> create a 'reflink' for a subtree (only works on subtrees that have been
> correctly created with the poorly named "btrfs subvolume" command).
>
> As with any reflink, the original has the same inode number that it did
> before, the new version has a different inode number (though in current
> BTRFS, half of the inode number is hidden from user-space, so it looks
> like the inode number hasn't changed).
On Tue, 03 Aug 2021, J. Bruce Fields wrote:
> On Tue, Aug 03, 2021 at 07:10:44AM +1000, NeilBrown wrote:
> > On Mon, 02 Aug 2021, J. Bruce Fields wrote:
> > > On Mon, Aug 02, 2021 at 02:18:29PM +1000, NeilBrown wrote:
> > > > For btrfs, the "location" is root.objectid ++ file.objectid. I think
> > > > the inode should become (file.objectid ^ swab64(root.objectid)). This
> > > > will provide numbers that are unique until you get very large subvols,
> > > > and very many subvols.
> > >
> > > If you snapshot a filesystem, I'd expect, at least by default, that
> > > > inodes in the snapshot stay the same as in the snapshotted
> > > filesystem.
> >
> > As I said: we need to challenge and revise user-space (and meat-space)
> > expectations.
>
> The example that came to mind is people that export a snapshot, then
> replace it with an updated snapshot, and expect that to be transparent
> to clients.
>
> Our client will error out with ESTALE if it notices an inode number
> changed out from under it.
Will it? If the inode number changed, then the filehandle would change.
Unless the filesystem were exported with subtree_check, the old filehandle
would continue to work (unless the old snapshot was deleted). File-name
lookups from the root would find new files...
"replace with an updated snapshot" is no different from "replace with an
updated directory tree". If you delete the old tree, then
currently-open files will break. If you don't, you get a reasonably
clean transition.
>
> I don't know if there are other such cases. It seems like surprising
> behavior to me, though.
If you refuse to risk breaking anything, then you cannot make progress.
Provided people can choose when things break, and have advance
warning, they often cope remarkably well.
Thanks,
NeilBrown
>
> --b.
>
> > In btrfs, you DO NOT snapshot a FILESYSTEM. Rather, you effectively
> > create a 'reflink' for a subtree (only works on subtrees that have been
> > correctly created with the poorly named "btrfs subvolume" command).
> >
> > As with any reflink, the original has the same inode number that it did
> > before, the new version has a different inode number (though in current
> > BTRFS, half of the inode number is hidden from user-space, so it looks
> > like the inode number hasn't changed).
>
>
On Tue, Aug 03, 2021 at 07:59:30AM +1000, NeilBrown wrote:
> On Tue, 03 Aug 2021, J. Bruce Fields wrote:
> > On Tue, Aug 03, 2021 at 07:10:44AM +1000, NeilBrown wrote:
> > > On Mon, 02 Aug 2021, J. Bruce Fields wrote:
> > > > On Mon, Aug 02, 2021 at 02:18:29PM +1000, NeilBrown wrote:
> > > > > For btrfs, the "location" is root.objectid ++ file.objectid. I think
> > > > > the inode should become (file.objectid ^ swab64(root.objectid)). This
> > > > > will provide numbers that are unique until you get very large subvols,
> > > > > and very many subvols.
> > > >
> > > > If you snapshot a filesystem, I'd expect, at least by default, that
> > > > > inodes in the snapshot stay the same as in the snapshotted
> > > > filesystem.
> > >
> > > As I said: we need to challenge and revise user-space (and meat-space)
> > > expectations.
> >
> > The example that came to mind is people that export a snapshot, then
> > replace it with an updated snapshot, and expect that to be transparent
> > to clients.
> >
> > Our client will error out with ESTALE if it notices an inode number
> > changed out from under it.
>
> Will it?
See fs/nfs/inode.c:nfs_check_inode_attributes():
	if (nfsi->fileid != fattr->fileid) {
		/* Is this perhaps the mounted-on fileid? */
		if ((fattr->valid & NFS_ATTR_FATTR_MOUNTED_ON_FILEID) &&
		    nfsi->fileid == fattr->mounted_on_fileid)
			return 0;
		return -ESTALE;
	}
--b.
> If the inode number changed, then the filehandle would change.
> Unless the filesystem were exported with subtree_check, the old filehandle
> would continue to work (unless the old snapshot was deleted). File-name
> lookups from the root would find new files...
>
> "replace with an updated snapshot" is no different from "replace with an
> updated directory tree". If you delete the old tree, then
> currently-open files will break. If you don't, you get a reasonably
> clean transition.
>
> >
> > I don't know if there are other such cases. It seems like surprising
> > behavior to me, though.
>
> If you refuse to risk breaking anything, then you cannot make progress.
> Provided people can choose when things break, and have advance
> warning, they often cope remarkably well.
>
> Thanks,
> NeilBrown
>
>
> >
> > --b.
> >
> > > In btrfs, you DO NOT snapshot a FILESYSTEM. Rather, you effectively
> > > create a 'reflink' for a subtree (only works on subtrees that have been
> > > correctly created with the poorly named "btrfs subvolume" command).
> > >
> > > As with any reflink, the original has the same inode number that it did
> > > before, the new version has a different inode number (though in current
> > > BTRFS, half of the inode number is hidden from user-space, so it looks
> > > like the inode number hasn't changed).
> >
> >
On Tue, 03 Aug 2021, J. Bruce Fields wrote:
> On Tue, Aug 03, 2021 at 07:59:30AM +1000, NeilBrown wrote:
> > On Tue, 03 Aug 2021, J. Bruce Fields wrote:
> > > On Tue, Aug 03, 2021 at 07:10:44AM +1000, NeilBrown wrote:
> > > > On Mon, 02 Aug 2021, J. Bruce Fields wrote:
> > > > > On Mon, Aug 02, 2021 at 02:18:29PM +1000, NeilBrown wrote:
> > > > > > For btrfs, the "location" is root.objectid ++ file.objectid. I think
> > > > > > the inode should become (file.objectid ^ swab64(root.objectid)). This
> > > > > > will provide numbers that are unique until you get very large subvols,
> > > > > > and very many subvols.
> > > > >
> > > > > If you snapshot a filesystem, I'd expect, at least by default, that
> > > > > > inodes in the snapshot stay the same as in the snapshotted
> > > > > filesystem.
> > > >
> > > > As I said: we need to challenge and revise user-space (and meat-space)
> > > > expectations.
> > >
> > > The example that came to mind is people that export a snapshot, then
> > > replace it with an updated snapshot, and expect that to be transparent
> > > to clients.
> > >
> > > Our client will error out with ESTALE if it notices an inode number
> > > changed out from under it.
> >
> > Will it?
>
> See fs/nfs/inode.c:nfs_check_inode_attributes():
>
>	if (nfsi->fileid != fattr->fileid) {
>		/* Is this perhaps the mounted-on fileid? */
>		if ((fattr->valid & NFS_ATTR_FATTR_MOUNTED_ON_FILEID) &&
>		    nfsi->fileid == fattr->mounted_on_fileid)
>			return 0;
>		return -ESTALE;
>	}
That code fires if the fileid (inode number) reported for a particular
filehandle changes. I'm saying that won't happen.
If you reflink (aka snapshot) a btrfs subtree (aka "subvol"), then the
new sub-tree will ALREADY have different filehandles than the original
subvol. Whether it has the same inode numbers or different ones is
irrelevant to NFS.
(on reflection, I didn't say that as clearly as I could have done last time)
NeilBrown
On Tue, Aug 03, 2021 at 08:36:44AM +1000, NeilBrown wrote:
> On Tue, 03 Aug 2021, J. Bruce Fields wrote:
> > On Tue, Aug 03, 2021 at 07:59:30AM +1000, NeilBrown wrote:
> > > On Tue, 03 Aug 2021, J. Bruce Fields wrote:
> > > > On Tue, Aug 03, 2021 at 07:10:44AM +1000, NeilBrown wrote:
> > > > > On Mon, 02 Aug 2021, J. Bruce Fields wrote:
> > > > > > On Mon, Aug 02, 2021 at 02:18:29PM +1000, NeilBrown wrote:
> > > > > > > For btrfs, the "location" is root.objectid ++ file.objectid. I think
> > > > > > > the inode should become (file.objectid ^ swab64(root.objectid)). This
> > > > > > > will provide numbers that are unique until you get very large subvols,
> > > > > > > and very many subvols.
> > > > > >
> > > > > > If you snapshot a filesystem, I'd expect, at least by default, that
> > > > > > inodes in the snapshot stay the same as in the snapshotted
> > > > > > filesystem.
> > > > >
> > > > > As I said: we need to challenge and revise user-space (and meat-space)
> > > > > expectations.
> > > >
> > > > The example that came to mind is people that export a snapshot, then
> > > > replace it with an updated snapshot, and expect that to be transparent
> > > > to clients.
> > > >
> > > > Our client will error out with ESTALE if it notices an inode number
> > > > changed out from under it.
> > >
> > > Will it?
> >
> > See fs/nfs/inode.c:nfs_check_inode_attributes():
> >
> >	if (nfsi->fileid != fattr->fileid) {
> >		/* Is this perhaps the mounted-on fileid? */
> >		if ((fattr->valid & NFS_ATTR_FATTR_MOUNTED_ON_FILEID) &&
> >		    nfsi->fileid == fattr->mounted_on_fileid)
> >			return 0;
> >		return -ESTALE;
> >	}
>
> That code fires if the fileid (inode number) reported for a particular
> filehandle changes. I'm saying that won't happen.
>
> If you reflink (aka snapshot) a btrfs subtree (aka "subvol"), then the
> new sub-tree will ALREADY have different filehandles than the original
> subvol.
Whoops, you're right, sorry for the noise....
--b.
> Whether it has the same inode numbers or different ones is
> irrelevant to NFS.
On 2021/8/2 下午9:53, Josef Bacik wrote:
> On 8/2/21 3:54 AM, Amir Goldstein wrote:
>> On Mon, Aug 2, 2021 at 8:41 AM NeilBrown <[email protected]> wrote:
>>>
>>> On Mon, 02 Aug 2021, Al Viro wrote:
>>>> On Mon, Aug 02, 2021 at 02:18:29PM +1000, NeilBrown wrote:
>>>>
>>>>> It think we need to bite-the-bullet and decide that 64bits is not
>>>>> enough, and in fact no number of bits will ever be enough. overlayfs
>>>>> makes this clear.
>>>>
>>>> Sure - let's go for broke and use XML. Oh, wait - it's 8 months too
>>>> early...
>>>>
>>>>> So I think we need to strongly encourage user-space to start using
>>>>> name_to_handle_at() whenever there is a need to test if two things are
>>>>> the same.
>>>>
>>>> ... and forgetting the inconvenient facts, such as that two different
>>>> fhandles may correspond to the same object.
>>>
>>> Can they? They certainly can if the "connectable" flag is passed.
>>> name_to_handle_at() cannot set that flag.
>>> nfsd can, so using name_to_handle_at() on an NFS filesystem isn't quite
>>> perfect. However it is the best that can be done over NFS.
>>>
>>> Or is there some other situation where two different filehandles can be
>>> reported for the same inode?
>>>
>>> Do you have a better suggestion?
>>>
>>
>> Neil,
>>
>> I think the plan of "changing the world" is not very realistic.
>> Sure, *some* tools can be changed, but all of them?
>>
>> I went back to read your initial cover letter to understand the
>> problem and what I mostly found there was that the view of
>> /proc/x/mountinfo was hiding information that is important for
>> some tools to understand what is going on with btrfs subvols.
>>
>> Well I am not a UNIX history expert, but I suppose that
>> /proc/PID/mountinfo was created because /proc/mounts and
>> /proc/PID/mounts no longer provided tools with all the information
>> about Linux mounts.
>>
>> Maybe it's time for a new interface to query the more advanced
>> sb/mount topology? fsinfo() maybe? With mount2 compatible API for
>> traversing mounts that is not limited to reporting all entries inside
>> a single page. I suppose we could go for some hierarchical view
>> under /proc/PID/mounttree. I don't know - new API is hard.
>>
>> In any case, instead of changing st_dev and st_ino or changing the
>> world to work with file handles, why not add inode generation (and
>> maybe subvol id) to statx().
>> Filesystems that care enough will provide this information and tools that
>> care enough will use it.
>>
>
> Can y'all wait till I'm back from vacation, goddamn ;)
>
> This is what I'm aiming for, I spent some time looking at how many
> places we string parse /proc/<whatever>/mounts and my head hurts.
>
> Btrfs already has a reasonable solution for this, we have UUID's for
> everything. UUID's aren't a strictly btrfs thing either, all the file
> systems have some sort of UUID identifier, hell it's built into blkid. I
> would love if we could do a better job about letting applications query
> information about where they are. And we could expose this with the
> relatively common UUID format. You ask what fs you're in, you get the
> FS UUID, and then if you're on Btrfs you get the specific subvolume UUID
> you're in. That way you could do more fancy things like know if you've
> wandered into a new file system completely or just a different subvolume.
I'm completely on the side of using proper UUID.
But suddenly I found a problem with this: we still need something
like st_dev for real volume-based snapshots.
One of the problems with real volume-based snapshots is that the
snapshotted volume is exactly the same filesystem; all the binary
data is the same, including the UUID.
That means the only way to distinguish such volumes is by st_dev.
A pure UUID-based solution is in fact unable to distinguish
them using just the UUID.
Unless we have some device UUID to replace the old st_dev.
Thanks,
Qu
>
> We have to keep the st_ino/st_dev thing for backwards compatibility, but
> make it easier to get more info out of the file system.
>
> We could in theory expose just the subvolid also, since that's a nice
> simple u64, but it limits our ability to do new fancy shit in the
> future. It's not a bad solution, but like I said I think we need to
> take a step back and figure out what problem we're specifically trying
> to solve, and work from there. Starting from automounts and working our
> way back is not going very well. Thanks,
>
> Josef
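[Aside: a minimal user-space sketch of the name_to_handle_at() comparison
suggested above. Error handling is trimmed, handles are only comparable
within a single filesystem, and as noted earlier nfsd's "connectable"
filehandles mean the check is not quite perfect over NFS:]

	#define _GNU_SOURCE
	#include <fcntl.h>
	#include <stdio.h>
	#include <stdlib.h>
	#include <string.h>

	/* Fetch a filehandle for path; returns NULL on error. */
	static struct file_handle *get_fh(const char *path, int *mount_id)
	{
		struct file_handle *fh = malloc(sizeof(*fh) + MAX_HANDLE_SZ);

		if (!fh)
			return NULL;
		fh->handle_bytes = MAX_HANDLE_SZ;
		if (name_to_handle_at(AT_FDCWD, path, fh, mount_id, 0) < 0) {
			free(fh);
			return NULL;
		}
		return fh;
	}

	int main(int argc, char **argv)
	{
		int mnt_a, mnt_b;
		struct file_handle *a, *b;

		if (argc != 3)
			return 1;
		a = get_fh(argv[1], &mnt_a);
		b = get_fh(argv[2], &mnt_b);
		if (a && b && a->handle_type == b->handle_type &&
		    a->handle_bytes == b->handle_bytes &&
		    memcmp(a->f_handle, b->f_handle, a->handle_bytes) == 0)
			printf("same object\n");
		else
			printf("different objects\n");
		return 0;
	}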
On Thu, 29 Jul 2021 at 07:27, Amir Goldstein <[email protected]> wrote:
> Given that today, subvolume mounts (or any mounts) on the lower layer
> are not followed by overlayfs, I don't really see the difference
> if mounts are created manually or automatically.
> Miklos?
Never tried overlay on btrfs. Subvolumes AFAIK do not use submounts
currently, they are a sort of hack where the st_dev changes when
crossing the subvolume boundary, but there's no sign of them in
/proc/mounts (there's no d_automount in btrfs).
Thanks,
Miklos
On Fri, Aug 6, 2021 at 10:52 AM Miklos Szeredi <[email protected]> wrote:
>
> On Thu, 29 Jul 2021 at 07:27, Amir Goldstein <[email protected]> wrote:
>
> > Given that today, subvolume mounts (or any mounts) on the lower layer
> > are not followed by overlayfs, I don't really see the difference
> > if mounts are created manually or automatically.
> > Miklos?
>
> Never tried overlay on btrfs. Subvolumes AFAIK do not use submounts
> currently, they are a sort of hack where the st_dev changes when
> crossing the subvolume boundary, but there's no sign of them in
> /proc/mounts (there's no d_automount in btrfs).
That's what Neil's patch 11/11 is proposing to add and that's the reason
he was asking if this is going to break overlayfs over btrfs.
My question was, regardless of btrfs, can ovl_lookup() treat automount
dentries gracefully as empty dirs or just read them as is, instead of
returning EREMOTE on lookup?
The rationale is that we use a private mount and we are not following
across mounts from layers anyway, so what do we care about
auto or manual mounts?
Thanks,
Amir.
On Fri, 6 Aug 2021 at 10:08, Amir Goldstein <[email protected]> wrote:
>
> On Fri, Aug 6, 2021 at 10:52 AM Miklos Szeredi <[email protected]> wrote:
> >
> > On Thu, 29 Jul 2021 at 07:27, Amir Goldstein <[email protected]> wrote:
> >
> > > Given that today, subvolume mounts (or any mounts) on the lower layer
> > > are not followed by overlayfs, I don't really see the difference
> > > if mounts are created manually or automatically.
> > > Miklos?
> >
> > Never tried overlay on btrfs. Subvolumes AFAIK do not use submounts
> > currently, they are a sort of hack where the st_dev changes when
> > crossing the subvolume boundary, but there's no sign of them in
> > /proc/mounts (there's no d_automount in btrfs).
>
> That's what Neil's patch 11/11 is proposing to add and that's the reason
> he was asking if this is going to break overlayfs over btrfs.
>
> My question was, regardless of btrfs, can ovl_lookup() treat automount
> dentries gracefully as empty dirs or just read them as is, instead of
> returning EREMOTE on lookup?
>
> The rationale is that we use a private mount and we are not following
> across mounts from layers anyway, so what do we care about
> auto or manual mounts?
I guess that depends on the use cases. If no one cares (as is the
case apparently), the simplest is to leave it the way it is.
Thanks,
Miklos
[[This patch is a minimal patch which addresses the current problems
with nfsd and btrfs, in a way which I think is most supportable, least
surprising, and least likely to impact any future attempts to more
completely fix the btrfs file-identify problem]]
BTRFS does not provide unique inode numbers across a filesystem.
It *does* provide unique inode numbers within a subvolume and
uses synthetic device numbers for different subvolumes to ensure
uniqueness for device+inode.
nfsd cannot use these varying device numbers. If nfsd were to
synthesise different stable filesystem ids to give to the client, that
would cause subvolumes to appear in the mount table on the client, even
though they don't appear in the mount table on the server. Also, NFSv3
doesn't support changing the filesystem id without a new explicit
mount on the client (this is partially supported in practice, but
violates the protocol specification).
So currently, the roots of all subvolumes report the same inode number
in the same filesystem to NFS clients and tools like 'find' notice that
a directory has the same identity as an ancestor, and so refuse to
enter that directory.
This patch allows btrfs (or any filesystem) to provide a 64bit number
that can be xored with the inode number to make the number more unique.
Rather than the client being certain to see duplicates, with this patch
it is possible but extremely rare.
The number that btrfs provides is a swab64() version of the subvolume
identifier. This has most entropy in the high bits (the low bits of the
subvolume identifier), while the inode has most entropy in the low bits.
The result will always be unique within a subvolume, and will almost
always be unique across the filesystem.
Signed-off-by: NeilBrown <[email protected]>
---
fs/btrfs/inode.c | 4 ++++
fs/nfsd/nfs3xdr.c | 17 ++++++++++++++++-
fs/nfsd/nfs4xdr.c | 9 ++++++++-
fs/nfsd/xdr3.h | 2 ++
include/linux/stat.h | 17 +++++++++++++++++
5 files changed, 47 insertions(+), 2 deletions(-)
diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
index 0117d867ecf8..989fdf2032d5 100644
--- a/fs/btrfs/inode.c
+++ b/fs/btrfs/inode.c
@@ -9195,6 +9195,10 @@ static int btrfs_getattr(struct user_namespace *mnt_userns,
generic_fillattr(&init_user_ns, inode, stat);
stat->dev = BTRFS_I(inode)->root->anon_dev;
+ if (BTRFS_I(inode)->root->root_key.objectid != BTRFS_FS_TREE_OBJECTID)
+ stat->ino_uniquifier =
+ swab64(BTRFS_I(inode)->root->root_key.objectid);
+
spin_lock(&BTRFS_I(inode)->lock);
delalloc_bytes = BTRFS_I(inode)->new_delalloc_bytes;
inode_bytes = inode_get_bytes(inode);
diff --git a/fs/nfsd/nfs3xdr.c b/fs/nfsd/nfs3xdr.c
index 0a5ebc52e6a9..669e2437362a 100644
--- a/fs/nfsd/nfs3xdr.c
+++ b/fs/nfsd/nfs3xdr.c
@@ -340,6 +340,7 @@ svcxdr_encode_fattr3(struct svc_rqst *rqstp, struct xdr_stream *xdr,
{
struct user_namespace *userns = nfsd_user_namespace(rqstp);
__be32 *p;
+ u64 ino;
u64 fsid;
p = xdr_reserve_space(xdr, XDR_UNIT * 21);
@@ -377,7 +378,10 @@ svcxdr_encode_fattr3(struct svc_rqst *rqstp, struct xdr_stream *xdr,
p = xdr_encode_hyper(p, fsid);
/* fileid */
- p = xdr_encode_hyper(p, stat->ino);
+ ino = stat->ino;
+ if (stat->ino_uniquifier && stat->ino_uniquifier != ino)
+ ino ^= stat->ino_uniquifier;
+ p = xdr_encode_hyper(p, ino);
p = encode_nfstime3(p, &stat->atime);
p = encode_nfstime3(p, &stat->mtime);
@@ -1151,6 +1155,17 @@ svcxdr_encode_entry3_common(struct nfsd3_readdirres *resp, const char *name,
if (xdr_stream_encode_item_present(xdr) < 0)
return false;
/* fileid */
+ if (!resp->dir_have_uniquifier) {
+ struct kstat stat;
+ if (fh_getattr(&resp->fh, &stat) == nfs_ok)
+ resp->dir_ino_uniquifier = stat.ino_uniquifier;
+ else
+ resp->dir_ino_uniquifier = 0;
+ resp->dir_have_uniquifier = 1;
+ }
+ if (resp->dir_ino_uniquifier &&
+ resp->dir_ino_uniquifier != ino)
+ ino ^= resp->dir_ino_uniquifier;
if (xdr_stream_encode_u64(xdr, ino) < 0)
return false;
/* name */
diff --git a/fs/nfsd/nfs4xdr.c b/fs/nfsd/nfs4xdr.c
index 7abeccb975b2..ddccf849c29c 100644
--- a/fs/nfsd/nfs4xdr.c
+++ b/fs/nfsd/nfs4xdr.c
@@ -3114,10 +3114,14 @@ nfsd4_encode_fattr(struct xdr_stream *xdr, struct svc_fh *fhp,
fhp->fh_handle.fh_size);
}
if (bmval0 & FATTR4_WORD0_FILEID) {
+ u64 ino = stat.ino;
+ if (stat.ino_uniquifier &&
+ stat.ino_uniquifier != stat.ino)
+ ino ^= stat.ino_uniquifier;
p = xdr_reserve_space(xdr, 8);
if (!p)
goto out_resource;
- p = xdr_encode_hyper(p, stat.ino);
+ p = xdr_encode_hyper(p, ino);
}
if (bmval0 & FATTR4_WORD0_FILES_AVAIL) {
p = xdr_reserve_space(xdr, 8);
@@ -3285,6 +3289,9 @@ nfsd4_encode_fattr(struct xdr_stream *xdr, struct svc_fh *fhp,
if (err)
goto out_nfserr;
ino = parent_stat.ino;
+ if (parent_stat.ino_uniquifier &&
+ parent_stat.ino_uniquifier != ino)
+ ino ^= parent_stat.ino_uniquifier;
}
p = xdr_encode_hyper(p, ino);
}
diff --git a/fs/nfsd/xdr3.h b/fs/nfsd/xdr3.h
index 933008382bbe..b4f9f3c71f72 100644
--- a/fs/nfsd/xdr3.h
+++ b/fs/nfsd/xdr3.h
@@ -179,6 +179,8 @@ struct nfsd3_readdirres {
struct xdr_buf dirlist;
struct svc_fh scratch;
struct readdir_cd common;
+ u64 dir_ino_uniquifier;
+ int dir_have_uniquifier;
unsigned int cookie_offset;
struct svc_rqst * rqstp;
diff --git a/include/linux/stat.h b/include/linux/stat.h
index fff27e603814..a5188f42ed81 100644
--- a/include/linux/stat.h
+++ b/include/linux/stat.h
@@ -46,6 +46,23 @@ struct kstat {
struct timespec64 btime; /* File creation time */
u64 blocks;
u64 mnt_id;
+ /*
+ * BTRFS does not provide unique inode numbers within a filesystem,
+ * depending on a synthetic 'dev' to provide uniqueness.
+ * NFSd cannot make use of this 'dev' number so clients often see
+ * duplicate inode numbers.
+ * For BTRFS, 'ino' is unlikely to use the high bits. It puts
+ * another number in ino_uniquifier which:
+ * - has most entropy in the high bits
+ * - is different precisely when 'dev' is different
+ * - is stable across unmount/remount
+ * NFSd can xor this with 'ino' to get a substantially more unique
+ * number for reporting to the client.
+ * The ino_uniquifier for a directory can reasonably be applied
+ * to inode numbers reported by the readdir filldir callback.
+ * It is NOT currently exported to user-space.
+ */
+ u64 ino_uniquifier;
};
#endif
--
2.32.0
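[Aside: a worked example of the swab64() scheme described above, with
made-up values; user-space bswap_64() stands in for the kernel's swab64():]

	#include <stdio.h>
	#include <stdint.h>
	#include <byteswap.h>

	int main(void)
	{
		uint64_t subvol = 257;  /* root objectid: entropy in the low bits */
		uint64_t ino    = 262;  /* inode number:  entropy in the low bits */
		uint64_t uniq   = bswap_64(subvol); /* entropy moves to the high bits */

		/* 0x0101000000000000 ^ 0x0000000000000106 = 0x0101000000000106 */
		printf("uniquifier: %#018llx\n", (unsigned long long)uniq);
		printf("reported:   %#018llx\n", (unsigned long long)(ino ^ uniq));
		return 0;
	}

[The reported number only collides once a subvolume's inode numbers grow
into the bits occupied by the byte-swapped root id: the "very large
subvols, and very many subvols" case mentioned earlier in the thread.]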
On 8/12/21 9:45 PM, NeilBrown wrote:
>
> [[This patch is a minimal patch which addresses the current problems
> with nfsd and btrfs, in a way which I think is most supportable, least
> surprising, and least likely to impact any future attempts to more
> completely fix the btrfs file-identify problem]]
>
> BTRFS does not provide unique inode numbers across a filesystem.
> It *does* provide unique inode numbers within a subvolume and
> uses synthetic device numbers for different subvolumes to ensure
> uniqueness for device+inode.
>
> nfsd cannot use these varying device numbers. If nfsd were to
> synthesise different stable filesystem ids to give to the client, that
> would cause subvolumes to appear in the mount table on the client, even
> though they don't appear in the mount table on the server. Also, NFSv3
> doesn't support changing the filesystem id without a new explicit
> mount on the client (this is partially supported in practice, but
> violates the protocol specification).
>
> So currently, the roots of all subvolumes report the same inode number
> in the same filesystem to NFS clients and tools like 'find' notice that
> a directory has the same identity as an ancestor, and so refuse to
> enter that directory.
>
> This patch allows btrfs (or any filesystem) to provide a 64bit number
> that can be xored with the inode number to make the number more unique.
> Rather than the client being certain to see duplicates, with this patch
> it is possible but extremely rare.
>
> The number that btrfs provides is a swab64() version of the subvolume
> identifier. This has most entropy in the high bits (the low bits of the
> subvolume identifier), while the inode has most entropy in the low bits.
> The result will always be unique within a subvolume, and will almost
> always be unique across the filesystem.
>
This is a reasonable approach to me, solves the problem without being overly
complicated and side-steps the thornier issues around how we deal with
subvolumes. I'll leave it up to the other maintainers of the other fs'es to
weigh in, but for me you can add
Acked-by: Josef Bacik <[email protected]>
Thanks,
Josef
Hi All,
On 8/13/21 3:45 AM, NeilBrown wrote:
>
> [[This patch is a minimal patch which addresses the current problems
> with nfsd and btrfs, in a way which I think is most supportable, least
> surprising, and least likely to impact any future attempts to more
> completely fix the btrfs file-identify problem]]
>
> BTRFS does not provide unique inode numbers across a filesystem.
> It *does* provide unique inode numbers within a subvolume and
> uses synthetic device numbers for different subvolumes to ensure
> uniqueness for device+inode.
>
> nfsd cannot use these varying device numbers. If nfsd were to
> synthesise different stable filesystem ids to give to the client, that
> would cause subvolumes to appear in the mount table on the client, even
> though they don't appear in the mount table on the server. Also, NFSv3
> doesn't support changing the filesystem id without a new explicit
> mount on the client (this is partially supported in practice, but
> violates the protocol specification).
I am sure that this was discussed already, but I was unable to find any track of the discussion.
But if the problem is the collision between the inode numbers of different subvolumes in the nfsd export, would it be simpler to truncate the export at the subvolume boundary? It would be more coherent with the current behavior of vfs+nfsd.
In fact in btrfs a subvolume is a complete filesystem, with its own synthetic device. We may or may not like this solution, but it is the one most aligned with the unix standard, where each filesystem has its own pair (device, inode-set). NFS (by default) avoids crossing the boundary between filesystems. So why
should this be different in BTRFS?
Maybe an opt-in option would solve the backward compatibility concern (i.e. avoid problems with setups which rely on the current behaviour).
> So currently, the roots of all subvolumes report the same inode number
> in the same filesystem to NFS clients and tools like 'find' notice that
> a directory has the same identity as an ancestor, and so refuse to
> enter that directory.
>
> This patch allows btrfs (or any filesystem) to provide a 64bit number
> that can be xored with the inode number to make the number more unique.
> Rather than the client being certain to see duplicates, with this patch
> it is possible but extremely rare.
>
> The number that btrfs provides is a swab64() version of the subvolume
> identifier. This has most entropy in the high bits (the low bits of the
> subvolume identifier), while the inode has most entropy in the low bits.
> The result will always be unique within a subvolume, and will almost
> always be unique across the filesystem.
>
> Signed-off-by: NeilBrown <[email protected]>
> ---
> fs/btrfs/inode.c | 4 ++++
> fs/nfsd/nfs3xdr.c | 17 ++++++++++++++++-
> fs/nfsd/nfs4xdr.c | 9 ++++++++-
> fs/nfsd/xdr3.h | 2 ++
> include/linux/stat.h | 17 +++++++++++++++++
> 5 files changed, 47 insertions(+), 2 deletions(-)
>
> diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
> index 0117d867ecf8..989fdf2032d5 100644
> --- a/fs/btrfs/inode.c
> +++ b/fs/btrfs/inode.c
> @@ -9195,6 +9195,10 @@ static int btrfs_getattr(struct user_namespace *mnt_userns,
> generic_fillattr(&init_user_ns, inode, stat);
> stat->dev = BTRFS_I(inode)->root->anon_dev;
>
> + if (BTRFS_I(inode)->root->root_key.objectid != BTRFS_FS_TREE_OBJECTID)
> + stat->ino_uniquifier =
> + swab64(BTRFS_I(inode)->root->root_key.objectid);
> +
> spin_lock(&BTRFS_I(inode)->lock);
> delalloc_bytes = BTRFS_I(inode)->new_delalloc_bytes;
> inode_bytes = inode_get_bytes(inode);
> diff --git a/fs/nfsd/nfs3xdr.c b/fs/nfsd/nfs3xdr.c
> index 0a5ebc52e6a9..669e2437362a 100644
> --- a/fs/nfsd/nfs3xdr.c
> +++ b/fs/nfsd/nfs3xdr.c
> @@ -340,6 +340,7 @@ svcxdr_encode_fattr3(struct svc_rqst *rqstp, struct xdr_stream *xdr,
> {
> struct user_namespace *userns = nfsd_user_namespace(rqstp);
> __be32 *p;
> + u64 ino;
> u64 fsid;
>
> p = xdr_reserve_space(xdr, XDR_UNIT * 21);
> @@ -377,7 +378,10 @@ svcxdr_encode_fattr3(struct svc_rqst *rqstp, struct xdr_stream *xdr,
> p = xdr_encode_hyper(p, fsid);
>
> /* fileid */
> - p = xdr_encode_hyper(p, stat->ino);
> + ino = stat->ino;
> + if (stat->ino_uniquifier && stat->ino_uniquifier != ino)
> + ino ^= stat->ino_uniquifier;
> + p = xdr_encode_hyper(p, ino);
>
> p = encode_nfstime3(p, &stat->atime);
> p = encode_nfstime3(p, &stat->mtime);
> @@ -1151,6 +1155,17 @@ svcxdr_encode_entry3_common(struct nfsd3_readdirres *resp, const char *name,
> if (xdr_stream_encode_item_present(xdr) < 0)
> return false;
> /* fileid */
> + if (!resp->dir_have_uniquifier) {
> + struct kstat stat;
> + if (fh_getattr(&resp->fh, &stat) == nfs_ok)
> + resp->dir_ino_uniquifier = stat.ino_uniquifier;
> + else
> + resp->dir_ino_uniquifier = 0;
> + resp->dir_have_uniquifier = 1;
> + }
> + if (resp->dir_ino_uniquifier &&
> + resp->dir_ino_uniquifier != ino)
> + ino ^= resp->dir_ino_uniquifier;
> if (xdr_stream_encode_u64(xdr, ino) < 0)
> return false;
> /* name */
> diff --git a/fs/nfsd/nfs4xdr.c b/fs/nfsd/nfs4xdr.c
> index 7abeccb975b2..ddccf849c29c 100644
> --- a/fs/nfsd/nfs4xdr.c
> +++ b/fs/nfsd/nfs4xdr.c
> @@ -3114,10 +3114,14 @@ nfsd4_encode_fattr(struct xdr_stream *xdr, struct svc_fh *fhp,
> fhp->fh_handle.fh_size);
> }
> if (bmval0 & FATTR4_WORD0_FILEID) {
> + u64 ino = stat.ino;
> + if (stat.ino_uniquifier &&
> + stat.ino_uniquifier != stat.ino)
> + ino ^= stat.ino_uniquifier;
> p = xdr_reserve_space(xdr, 8);
> if (!p)
> goto out_resource;
> - p = xdr_encode_hyper(p, stat.ino);
> + p = xdr_encode_hyper(p, ino);
> }
> if (bmval0 & FATTR4_WORD0_FILES_AVAIL) {
> p = xdr_reserve_space(xdr, 8);
> @@ -3285,6 +3289,9 @@ nfsd4_encode_fattr(struct xdr_stream *xdr, struct svc_fh *fhp,
> if (err)
> goto out_nfserr;
> ino = parent_stat.ino;
> + if (parent_stat.ino_uniquifier &&
> + parent_stat.ino_uniquifier != ino)
> + ino ^= parent_stat.ino_uniquifier;
> }
> p = xdr_encode_hyper(p, ino);
> }
> diff --git a/fs/nfsd/xdr3.h b/fs/nfsd/xdr3.h
> index 933008382bbe..b4f9f3c71f72 100644
> --- a/fs/nfsd/xdr3.h
> +++ b/fs/nfsd/xdr3.h
> @@ -179,6 +179,8 @@ struct nfsd3_readdirres {
> struct xdr_buf dirlist;
> struct svc_fh scratch;
> struct readdir_cd common;
> + u64 dir_ino_uniquifier;
> + int dir_have_uniquifier;
> unsigned int cookie_offset;
> struct svc_rqst * rqstp;
>
> diff --git a/include/linux/stat.h b/include/linux/stat.h
> index fff27e603814..a5188f42ed81 100644
> --- a/include/linux/stat.h
> +++ b/include/linux/stat.h
> @@ -46,6 +46,23 @@ struct kstat {
> struct timespec64 btime; /* File creation time */
> u64 blocks;
> u64 mnt_id;
> + /*
> + * BTRFS does not provide unique inode numbers within a filesystem,
> + * depending on a synthetic 'dev' to provide uniqueness.
> + * NFSd cannot make use of this 'dev' number so clients often see
> + * duplicate inode numbers.
> + * For BTRFS, 'ino' is unlikely to use the high bits. It puts
> + * another number in ino_uniquifier which:
> + * - has most entropy in the high bits
> + * - is different precisely when 'dev' is different
> + * - is stable across unmount/remount
> + * NFSd can xor this with 'ino' to get a substantially more unique
> + * number for reporting to the client.
> + * The ino_uniquifier for a directory can reasonably be applied
> + * to inode numbers reported by the readdir filldir callback.
> + * It is NOT currently exported to user-space.
> + */
> + u64 ino_uniquifier;
> };
Why not rename "ino_uniquifier" to "ino_and_subvolume" and leave to the filesystem the work of combining the inode and the subvolume-id?
I am worried that the logic is split between the filesystem, which synthesizes the ino_uniquifier, and NFS, which combines it with the inode. This combination is filesystem specific: for BTRFS it is a simple xor, but for other filesystems it may be a more complex operation, so leaving one half in the filesystem and the other half in NFS seems not optimal if other filesystems need to use ino_uniquifier.
> #endif
>
--
gpg @keyserver.linux.it: Goffredo Baroncelli <kreijackATinwind.it>
Key fingerprint BBF5 1610 0B64 DAC6 5F7D 17B2 0EDA 9B37 8B82 E0B5
On Sun, 15 Aug 2021 09:39:08 +0200
Goffredo Baroncelli <[email protected]> wrote:
> I am sure that this was discussed already, but I was unable to find any track
> of the discussion. But if the problem is the collision between the inode
> numbers of different subvolumes in the nfsd export, would it be simpler to
> truncate the export at the subvolume boundary? It would be more coherent
> with the current behavior of vfs+nfsd.
See this bugreport thread which started it all:
https://www.spinics.net/lists/linux-btrfs/msg111172.html
In there the reporting user replied that it is strongly not feasible for them
to export each individual snapshot.
> In fact in btrfs a subvolume is a complete filesystem, with its own
> synthetic device. We may or may not like this solution, but it is the one
> most aligned with the unix standard, where each filesystem has its own
> pair (device, inode-set). NFS (by default) avoids crossing the boundary
> between filesystems. So why should this be different in BTRFS?
From the user point of view subvolumes are basically directories; that they
are "complete filesystems"* is merely a low-level implementation detail.
* well except they are not, as you cannot 'dd' a subvolume to another
blockdevice.
> Why not rename "ino_uniquifier" to "ino_and_subvolume" and leave to the
> filesystem the work of combining the inode and the subvolume-id?
>
> I am worried that the logic is split between the filesystem, which
> synthesizes the ino_uniquifier, and NFS, which combines it with the inode.
> This combination is filesystem specific: for BTRFS it is a simple xor, but
> for other filesystems it may be a more complex operation, so leaving one
> half in the filesystem and the other half in NFS seems not optimal if
> other filesystems need to use ino_uniquifier.
I wondered a bit myself, what are the downsides of just doing the
uniquefication inside Btrfs, not leaving that to NFSD?
I mean not even adding the extra stat field, just return the inode itself with
that already applied. Surely cannot be any worse collision-wise, than
different subvolumes straight up having the same inode numbers as right now?
Or is it a performance concern, always doing more work, for something which
only NFSD has needed so far.
--
With respect,
Roman
On 8/15/21 9:35 PM, Roman Mamedov wrote:
> On Sun, 15 Aug 2021 09:39:08 +0200
> Goffredo Baroncelli <[email protected]> wrote:
>
>> I am sure that this was discussed already, but I was unable to find any track
>> of the discussion. But if the problem is the collision between the inode
>> numbers of different subvolumes in the nfsd export, would it be simpler to
>> truncate the export at the subvolume boundary? It would be more coherent
>> with the current behavior of vfs+nfsd.
>
> See this bugreport thread which started it all:
> https://www.spinics.net/lists/linux-btrfs/msg111172.html
>
> In there the reporting user replied that it is strongly not feasible for them
> to export each individual snapshot.
Thanks for pointing that.
However looking at the 'exports' man page, it seems that NFS already has an
option to cover these cases: 'crossmnt'.
If NFSd detects a "child" filesystem (i.e. a filesystem mounted inside an already
exported one) and the "parent" filesystem is marked as 'crossmnt', the client mounts
the parent AND the child filesystem as two separate mounts, so there is no problem of inode collision.
I tested it by mounting two nested ext4 filesystems, and there isn't any problem of inode collision
(even if there are two different files with the same inode number).
---------
# mount -o loop disk2 test3/
# echo 123 >test3/one
# mkdir test3/test4
# sudo mount -o loop disk3 test3/test4/
# echo 123 >test3/test4/one
# ls -liR test3/
test3/:
total 24
11 drwx------ 2 root root 16384 Aug 15 22:27 lost+found
12 -rw-r--r-- 1 ghigo ghigo 4 Aug 15 22:29 one
2 drwxr-xrwx 3 root root 4096 Aug 15 22:46 test4
test3/test4:
total 20
11 drwx------ 2 root root 16384 Aug 15 22:45 lost+found
12 -rw-r--r-- 1 ghigo ghigo 4 Aug 15 22:46 one
# egrep test3 /etc/exports
/tmp/test3 *(rw,no_subtree_check,crossmnt)
# mount 192.168.1.27:/tmp/test3 3
# ls -lRi 3
3:
total 24
11 drwx------ 2 root root 16384 Aug 15 22:27 lost+found
12 -rw-r--r-- 1 ghigo ghigo 4 Aug 15 22:29 one
2 drwxr-xrwx 3 root root 4096 Aug 15 22:46 test4
3/test4:
total 20
11 drwx------ 2 root root 16384 Aug 15 22:45 lost+found
12 -rw-r--r-- 1 ghigo ghigo 4 Aug 15 22:46 one
# mount | egrep 192
192.168.1.27:/tmp/test3 on /tmp/3 type nfs4 (rw,relatime,vers=4.2,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=192.168.1.27,local_lock=none,addr=192.168.1.27)
192.168.1.27:/tmp/test3/test4 on /tmp/3/test4 type nfs4 (rw,relatime,vers=4.2,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=192.168.1.27,local_lock=none,addr=192.168.1.27)
---------------
I tried mounting even with "nfsvers=3", and it seems to work.
However the tests above are related to ext4; in fact it doesn't work with btrfs, but I think this is more an implementation problem than a strategy problem.
What I mean is that NFS already has a way to mount different parts of the fs-tree with different mounts (from a client POV). I think that this strategy should be used when NFSd exports a BTRFS filesystem:
- if 'crossmnt' is NOT passed, the export should end at the subvolume boundary (or allow inode collisions)
- if 'crossmnt' is passed, the client should automatically mount each nested subvolume as a separate mount
>
>> In fact in btrfs a subvolume is a complete filesystem, with an "own
>> synthetic" device. We could like or not this solution, but this solution is
>> the more aligned to the unix standard, where for each filesystem there is a
>> pair (device, inode-set). NFS (by default) avoids to cross the boundary
>> between the filesystems. So why in BTRFS this should be different ?
>
> From the user point of view subvolumes are basically directories; that they
> are "complete filesystems"* is merely a low-level implementation detail.
>
> * well except they are not, as you cannot 'dd' a subvolume to another
> blockdevice.
>
>> Why not rename "ino_uniquifier" to "ino_and_subvolume" and leave to the
>> filesystem the work of combining the inode and the subvolume-id?
>>
>> I am worried that the logic is split between the filesystem, which
>> synthesizes the ino_uniquifier, and NFS, which combines it with the inode.
>> This combination is filesystem specific: for BTRFS it is a simple xor, but
>> for other filesystems it may be a more complex operation, so leaving one
>> half in the filesystem and the other half in NFS seems not optimal if
>> other filesystems need to use ino_uniquifier.
>
> I wondered a bit myself, what are the downsides of just doing the
> uniquefication inside Btrfs, not leaving that to NFSD?
>
> I mean not even adding the extra stat field, just return the inode itself with
> that already applied. Surely cannot be any worse collision-wise, than
> different subvolumes straight up having the same inode numbers as right now?
>
> Or is it a performance concern, always doing more work, for something which
> only NFSD has needed so far.
>
--
gpg @keyserver.linux.it: Goffredo Baroncelli <kreijackATinwind.it>
Key fingerprint BBF5 1610 0B64 DAC6 5F7D 17B2 0EDA 9B37 8B82 E0B5
On Mon, 16 Aug 2021, [email protected] wrote:
> On 8/15/21 9:35 PM, Roman Mamedov wrote:
> > On Sun, 15 Aug 2021 09:39:08 +0200
> > Goffredo Baroncelli <[email protected]> wrote:
> >
> >> I am sure that this was discussed already, but I was unable to find any track
> >> of the discussion. But if the problem is the collision between the inode
> >> numbers of different subvolumes in the nfsd export, would it be simpler to
> >> truncate the export at the subvolume boundary? It would be more coherent
> >> with the current behavior of vfs+nfsd.
> >
> > See this bugreport thread which started it all:
> > https://www.spinics.net/lists/linux-btrfs/msg111172.html
> >
> > In there the reporting user replied that it is strongly not feasible for them
> > to export each individual snapshot.
>
> Thanks for pointing that.
>
> However looking at the 'exports' man page, it seems that NFS already has an
> option to cover these cases: 'crossmnt'.
>
> If NFSd detects a "child" filesystem (i.e. a filesystem mounted inside an already
> exported one) and the "parent" filesystem is marked as 'crossmnt', the client mounts
> the parent AND the child filesystem as two separate mounts, so there is no problem of inode collision.
As you acknowledged, you haven't read the whole back-story. Maybe you
should.
https://lore.kernel.org/linux-nfs/[email protected]/
https://lore.kernel.org/linux-nfs/[email protected]/
https://lore.kernel.org/linux-btrfs/[email protected]/
The flow of conversation does sometimes jump between threads.
I'm very happy to respond to your questions after you've absorbed all that.
NeilBrown
On Mon, 16 Aug 2021, Roman Mamedov wrote:
>
> I wondered a bit myself, what are the downsides of just doing the
> uniquefication inside Btrfs, not leaving that to NFSD?
>
> I mean not even adding the extra stat field, just return the inode itself with
> that already applied. Surely cannot be any worse collision-wise, than
> different subvolumes straight up having the same inode numbers as right now?
>
> Or is it a performance concern, always doing more work, for something which
> only NFSD has needed so far.
Any change in behaviour will have unexpected consequences. I think the
btrfs maintainers' perspective is that they don't want to change
behaviour if they don't have to (which is reasonable) and that currently
they don't have to (which probably means that users aren't complaining
loudly enough).
NFS export of BTRFS is already demonstrably broken and users are
complaining loudly enough that I can hear them .... though I think it
has been broken like this for 10 years, so I wonder why I didn't hear
them before.
If something is perceived as broken, then a behaviour change that
appears to fix it is more easily accepted.
However, having said that I now see that my latest patch is not ideal.
It changes the inode numbers associated with filehandles of objects in
the non-root subvolume. This will cause the Linux NFS client to treat
the object as 'stale'. For most objects this is a transient annoyance.
Reopen the file or restart the process and all should be well again.
However if the inode number of the mount point changes, you will need to
unmount and remount. That is somewhat more of an annoyance.
There are a few ways to handle this more gracefully.
1/ We could get btrfs to hand out new filehandles as well as new inode
numbers, but still accept the old filehandles. Then we could make the
inode number reported be based on the filehandle. This would be nearly
seamless but rather clumsy to code. I'm not *very* keen on this idea,
but it is worth keeping in mind.
2/ We could add a btrfs mount option to control whether the uniquifier
was set or not. This would allow the sysadmin to choose when to manage
any breakage. I think this is my preference, but Josef has declared an
aversion to mount options.
3/ We could add a module parameter to nfsd to control whether the
uniquifier is merged in. This again gives the sysadmin control, and it
can be done despite any aversion from btrfs maintainers. But I'd need
to overcome any aversion from the nfsd maintainers, and I don't know how
strong that would be yet. (A new export option isn't really appropriate.
It is much more work to add an export option than to add a mount option).
I don't know.... maybe I should try harder to like option 1, or at least
verify if it works as expected and see how ugly the code really is.
NeilBrown
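[Aside: option 2 might look roughly like the sketch below inside
btrfs_getattr(); the INO_UNIQUIFIER mount option named here is hypothetical
and does not exist:]

	struct btrfs_fs_info *fs_info = BTRFS_I(inode)->root->fs_info;

	/* hypothetical opt-in: only report a uniquifier when the admin
	 * mounted with a (made-up) ino_uniquifier mount option */
	if (btrfs_test_opt(fs_info, INO_UNIQUIFIER) &&
	    BTRFS_I(inode)->root->root_key.objectid != BTRFS_FS_TREE_OBJECTID)
		stat->ino_uniquifier =
			swab64(BTRFS_I(inode)->root->root_key.objectid);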
On 8/15/21 11:53 PM, NeilBrown wrote:
> On Mon, 16 Aug 2021, [email protected] wrote:
>> On 8/15/21 9:35 PM, Roman Mamedov wrote:
>>> On Sun, 15 Aug 2021 09:39:08 +0200
>>> Goffredo Baroncelli <[email protected]> wrote:
>>>
>>>> I am sure that this was discussed already, but I was unable to find any track
>>>> of the discussion. But if the problem is the collision between the inode
>>>> numbers of different subvolumes in the nfsd export, would it be simpler to
>>>> truncate the export at the subvolume boundary? It would be more coherent
>>>> with the current behavior of vfs+nfsd.
>>>
>>> See this bugreport thread which started it all:
>>> https://www.spinics.net/lists/linux-btrfs/msg111172.html
>>>
>>> In there the reporting user replied that it is strongly not feasible for them
>>> to export each individual snapshot.
>>
>> Thanks for pointing that.
>>
>> However looking at the 'exports' man page, it seems that NFS already has an
>> option to cover these cases: 'crossmnt'.
>>
>> If NFSd detects a "child" filesystem (i.e. a filesystem mounted inside an already
>> exported one) and the "parent" filesystem is marked as 'crossmnt', the client mounts
>> the parent AND the child filesystem as two separate mounts, so there is no problem of inode collision.
>
> As you acknowledged, you haven't read the whole back-story. Maybe you
> should.
>
> https://lore.kernel.org/linux-nfs/[email protected]/
> https://lore.kernel.org/linux-nfs/[email protected]/
> https://lore.kernel.org/linux-btrfs/[email protected]/
>
> The flow of conversation does sometimes jump between threads.
>
> I'm very happy to respond to your questions after you've absorbed all that.
Hi Neil,
I read the other threads. And I still have the opinion that the nfsd crossmnt behavior should be a good solution for the btrfs subvolumes.
>
> NeilBrown
>
--
gpg @keyserver.linux.it: Goffredo Baroncelli <kreijackATinwind.it>
Key fingerprint BBF5 1610 0B64 DAC6 5F7D 17B2 0EDA 9B37 8B82 E0B5
On Wed, 18 Aug 2021, [email protected] wrote:
> On 8/15/21 11:53 PM, NeilBrown wrote:
> > On Mon, 16 Aug 2021, [email protected] wrote:
> >> On 8/15/21 9:35 PM, Roman Mamedov wrote:
> >>
> >> However looking at the 'exports' man page, it seems that NFS already has an
> >> option to cover these cases: 'crossmnt'.
> >>
> >> If NFSd detects a "child" filesystem (i.e. a filesystem mounted inside an already
> >> exported one) and the "parent" filesystem is marked as 'crossmnt', the client mounts
> >> the parent AND the child filesystem as two separate mounts, so there is no problem of inode collision.
> >
> > As you acknowledged, you haven't read the whole back-story. Maybe you
> > should.
> >
> > https://lore.kernel.org/linux-nfs/[email protected]/
> > https://lore.kernel.org/linux-nfs/[email protected]/
> > https://lore.kernel.org/linux-btrfs/[email protected]/
> >
> > The flow of conversation does sometimes jump between threads.
> >
> > I'm very happy to respond you questions after you've absorbed all that.
>
> Hi Neil,
>
> I read the other threads. And I still have the opinion that the nfsd
> crossmnt behavior should be a good solution for the btrfs subvolumes.
Thanks for reading it all. Let me join the dots for you.
"crossmnt" doesn't currently work because "subvolumes" aren't mount
points.
We could change btrfs so that subvolumes *are* mountpoints. They would
have to be automounts. I posted patches to do that. They were broadly
rejected because people have many thousands of submounts that are
concurrently active and so /proc/mounts would be multiple megabytes in
size and working with it would become impractical. Also, non-privileged
users can create subvols, and may want the path names to remain private.
But these subvols would appear in the mount table and so would no longer
be private.
Alternately we could change the "crossmnt" functionality to treat a
change of st_dev as though it were a mount point. I posted patches to
do this too. This hits the same sort of problems in a different way.
If NFSD reports that it has crossed a "mount" by providing a different
filesystem-id to the client, then the client will create a new mount
point which will appear in /proc/mounts. It might be less likely that
many thousands of subvolumes are accessed over NFS than locally, but it
is still entirely possible. I don't want the NFS client to suffer a
problem that btrfs doesn't impose locally. And 'private' subvolumes
could again appear on a public list if they were accessed via NFS.
Thanks,
NeilBrown
Hi,
We use 'swab64' to combine 'subvol id' and 'inode' into 64 bits in this
patch.
case1:
'subvol id': 16bit => 64K, a little small because the subvol id always
increases?
'inode': 48bit * 4K per node, this is big enough.
case2:
'subvol id': 24bit => 16M, this is big enough.
'inode': 40bit * 4K per node => 4 PB. this is a little small?
Is there a way to 'bit-swap' the subvol id, rather than the current byte-swap?
If not, maybe it is a better balance if we combine a 22bit subvol id and a
42bit inode?
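[Aside: the fixed split suggested here would look roughly like this sketch,
in contrast to the patch's swab64(); the names are illustrative:]

	/* hypothetical 22/42 split: subvol id in the top 22 bits,
	 * inode number in the low 42 bits */
	u64 reported = (subvol_id << 42) | (ino & ((1ULL << 42) - 1));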
Best Regards
Wang Yugui ([email protected])
2021/08/18
>
> [[This patch is a minimal patch which addresses the current problems
> with nfsd and btrfs, in a way which I think is most supportable, least
> surprising, and least likely to impact any future attempts to more
> completely fix the btrfs file-identify problem]]
>
> BTRFS does not provide unique inode numbers across a filesystem.
> It *does* provide unique inode numbers within a subvolume and
> uses synthetic device numbers for different subvolumes to ensure
> uniqueness for device+inode.
>
> nfsd cannot use these varying device numbers. If nfsd were to
> synthesise different stable filesystem ids to give to the client, that
> would cause subvolumes to appear in the mount table on the client, even
> though they don't appear in the mount table on the server. Also, NFSv3
> doesn't support changing the filesystem id without a new explicit
> mount on the client (this is partially supported in practice, but
> violates the protocol specification).
>
> So currently, the roots of all subvolumes report the same inode number
> in the same filesystem to NFS clients and tools like 'find' notice that
> a directory has the same identity as an ancestor, and so refuse to
> enter that directory.
>
> This patch allows btrfs (or any filesystem) to provide a 64bit number
> that can be xored with the inode number to make the number more unique.
> Rather than the client being certain to see duplicates, with this patch
> it is possible but extremely rare.
>
> The number that btrfs provides is a swab64() version of the subvolume
> identifier. This has most entropy in the high bits (the low bits of the
> subvolume identifier), while the inode has most entropy in the low bits.
> The result will always be unique within a subvolume, and will almost
> always be unique across the filesystem.
>
> Signed-off-by: NeilBrown <[email protected]>
> ---
> fs/btrfs/inode.c | 4 ++++
> fs/nfsd/nfs3xdr.c | 17 ++++++++++++++++-
> fs/nfsd/nfs4xdr.c | 9 ++++++++-
> fs/nfsd/xdr3.h | 2 ++
> include/linux/stat.h | 17 +++++++++++++++++
> 5 files changed, 47 insertions(+), 2 deletions(-)
>
> diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
> index 0117d867ecf8..989fdf2032d5 100644
> --- a/fs/btrfs/inode.c
> +++ b/fs/btrfs/inode.c
> @@ -9195,6 +9195,10 @@ static int btrfs_getattr(struct user_namespace *mnt_userns,
> generic_fillattr(&init_user_ns, inode, stat);
> stat->dev = BTRFS_I(inode)->root->anon_dev;
>
> + if (BTRFS_I(inode)->root->root_key.objectid != BTRFS_FS_TREE_OBJECTID)
> + stat->ino_uniquifier =
> + swab64(BTRFS_I(inode)->root->root_key.objectid);
> +
> spin_lock(&BTRFS_I(inode)->lock);
> delalloc_bytes = BTRFS_I(inode)->new_delalloc_bytes;
> inode_bytes = inode_get_bytes(inode);
> diff --git a/fs/nfsd/nfs3xdr.c b/fs/nfsd/nfs3xdr.c
> index 0a5ebc52e6a9..669e2437362a 100644
> --- a/fs/nfsd/nfs3xdr.c
> +++ b/fs/nfsd/nfs3xdr.c
> @@ -340,6 +340,7 @@ svcxdr_encode_fattr3(struct svc_rqst *rqstp, struct xdr_stream *xdr,
> {
> struct user_namespace *userns = nfsd_user_namespace(rqstp);
> __be32 *p;
> + u64 ino;
> u64 fsid;
>
> p = xdr_reserve_space(xdr, XDR_UNIT * 21);
> @@ -377,7 +378,10 @@ svcxdr_encode_fattr3(struct svc_rqst *rqstp, struct xdr_stream *xdr,
> p = xdr_encode_hyper(p, fsid);
>
> /* fileid */
> - p = xdr_encode_hyper(p, stat->ino);
> + ino = stat->ino;
> + if (stat->ino_uniquifier && stat->ino_uniquifier != ino)
> + ino ^= stat->ino_uniquifier;
> + p = xdr_encode_hyper(p, ino);
>
> p = encode_nfstime3(p, &stat->atime);
> p = encode_nfstime3(p, &stat->mtime);
> @@ -1151,6 +1155,17 @@ svcxdr_encode_entry3_common(struct nfsd3_readdirres *resp, const char *name,
> if (xdr_stream_encode_item_present(xdr) < 0)
> return false;
> /* fileid */
> + if (!resp->dir_have_uniquifier) {
> + struct kstat stat;
> + if (fh_getattr(&resp->fh, &stat) == nfs_ok)
> + resp->dir_ino_uniquifier = stat.ino_uniquifier;
> + else
> + resp->dir_ino_uniquifier = 0;
> + resp->dir_have_uniquifier = 1;
> + }
> + if (resp->dir_ino_uniquifier &&
> + resp->dir_ino_uniquifier != ino)
> + ino ^= resp->dir_ino_uniquifier;
> if (xdr_stream_encode_u64(xdr, ino) < 0)
> return false;
> /* name */
> diff --git a/fs/nfsd/nfs4xdr.c b/fs/nfsd/nfs4xdr.c
> index 7abeccb975b2..ddccf849c29c 100644
> --- a/fs/nfsd/nfs4xdr.c
> +++ b/fs/nfsd/nfs4xdr.c
> @@ -3114,10 +3114,14 @@ nfsd4_encode_fattr(struct xdr_stream *xdr, struct svc_fh *fhp,
> fhp->fh_handle.fh_size);
> }
> if (bmval0 & FATTR4_WORD0_FILEID) {
> + u64 ino = stat.ino;
> + if (stat.ino_uniquifier &&
> + stat.ino_uniquifier != stat.ino)
> + ino ^= stat.ino_uniquifier;
> p = xdr_reserve_space(xdr, 8);
> if (!p)
> goto out_resource;
> - p = xdr_encode_hyper(p, stat.ino);
> + p = xdr_encode_hyper(p, ino);
> }
> if (bmval0 & FATTR4_WORD0_FILES_AVAIL) {
> p = xdr_reserve_space(xdr, 8);
> @@ -3285,6 +3289,9 @@ nfsd4_encode_fattr(struct xdr_stream *xdr, struct svc_fh *fhp,
> if (err)
> goto out_nfserr;
> ino = parent_stat.ino;
> + if (parent_stat.ino_uniquifier &&
> + parent_stat.ino_uniquifier != ino)
> + ino ^= parent_stat.ino_uniquifier;
> }
> p = xdr_encode_hyper(p, ino);
> }
> diff --git a/fs/nfsd/xdr3.h b/fs/nfsd/xdr3.h
> index 933008382bbe..b4f9f3c71f72 100644
> --- a/fs/nfsd/xdr3.h
> +++ b/fs/nfsd/xdr3.h
> @@ -179,6 +179,8 @@ struct nfsd3_readdirres {
> struct xdr_buf dirlist;
> struct svc_fh scratch;
> struct readdir_cd common;
> + u64 dir_ino_uniquifier;
> + int dir_have_uniquifier;
> unsigned int cookie_offset;
> struct svc_rqst * rqstp;
>
> diff --git a/include/linux/stat.h b/include/linux/stat.h
> index fff27e603814..a5188f42ed81 100644
> --- a/include/linux/stat.h
> +++ b/include/linux/stat.h
> @@ -46,6 +46,23 @@ struct kstat {
> struct timespec64 btime; /* File creation time */
> u64 blocks;
> u64 mnt_id;
> + /*
> + * BTRFS does not provide unique inode numbers within a filesystem,
> + * depending on a synthetic 'dev' to provide uniqueness.
> + * NFSd cannot make use of this 'dev' number so clients often see
> + * duplicate inode numbers.
> + * For BTRFS, 'ino' is unlikely to use the high bits. It puts
> + * another number in ino_uniquifier which:
> + * - has most entropy in the high bits
> + * - is different precisely when 'dev' is different
> + * - is stable across unmount/remount
> + * NFSd can xor this with 'ino' to get a substantially more unique
> + * number for reporting to the client.
> + * The ino_uniquifier for a directory can reasonably be applied
> + * to inode numbers reported by the readdir filldir callback.
> + * It is NOT currently exported to user-space.
> + */
> + u64 ino_uniquifier;
> };
>
> #endif
> --
> 2.32.0
On 8/17/21 11:39 PM, NeilBrown wrote:
> On Wed, 18 Aug 2021, [email protected] wrote:
>> On 8/15/21 11:53 PM, NeilBrown wrote:
>>> On Mon, 16 Aug 2021, [email protected] wrote:
>>>> On 8/15/21 9:35 PM, Roman Mamedov wrote:
>
>>>>
>>>> However looking at the 'exports' man page, it seems that NFS already has an
>>>> option to cover these cases: 'crossmnt'.
>>>>
>>>> If NFSd detects a "child" filesystem (i.e. a filesystem mounted inside an already
>>>> exported one) and the "parent" filesystem is marked as 'crossmnt', the client mounts
>>>> the parent AND the child filesystem as two separate mounts, so there is no problem of inode collision.
>>>
>>> As you acknowledged, you haven't read the whole back-story. Maybe you
>>> should.
>>>
>>> https://lore.kernel.org/linux-nfs/[email protected]/
>>> https://lore.kernel.org/linux-nfs/[email protected]/
>>> https://lore.kernel.org/linux-btrfs/[email protected]/
>>>
>>> The flow of conversation does sometimes jump between threads.
>>>
>>> I'm very happy to respond to your questions after you've absorbed all that.
>>
>> Hi Neil,
>>
>> I read the other threads. And I still have the opinion that the nfsd
>> crossmnt behavior should be a good solution for the btrfs subvolumes.
>
> Thanks for reading it all. Let me join the dots for you.
>
[...]
>
> Alternately we could change the "crossmnt" functionality to treat a
> change of st_dev as though it were a mount point. I posted patches to
> do this too. This hits the same sort of problems in a different way.
> If NFSD reports that it has crossed a "mount" by providing a different
> filesystem-id to the client, then the client will create a new mount
> point which will appear in /proc/mounts.
Yes, this is my proposal.
> It might be less likely that
> many thousands of subvolumes are accessed over NFS than locally, but it
> is still entirely possible.
I don't think that it would be so unlikely. Think about a file indexer
and/or a 'find' command run in the folder that contains the snapshots...
> I don't want the NFS client to suffer a
> problem that btrfs doesn't impose locally.
The solution is not easy. In fact we are trying to map a u64 x u64 space to a u64 space. The truth is that we
cannot guarantee that a collision will not happen. We can only say that for a fresh filesystem it is nearly
impossible, but for an aged filesystem it is unlikely but possible.
We already faced a real case where we exhausted the inode space on 32-bit arches. What are the chances that the count of subvolumes ever created is greater than 2^24 and the inode number is greater than 2^40? The likelihood is low but not 0...
Some random thoughts:
- the new inode numbers are created by merging the original inode-number (in the lower bits) and the object-id of the subvolume (in the higher bits). We could add a warning when these bits overlap:
	/* note: stat->ino is u64, so the 64-bit variants fls64()/__ffs64()
	 * would be needed in practice */
	if (fls(stat->ino) >= ffs(stat->ino_uniquifier))
		printk("NFSD: Warning possible inode collision...\n");
Smarter heuristics can be developed, like doing the check against the maximum value of inode and the maximum value of subvolume once at mount time....
- for the inode number it is an expensive operation (even though it exists/existed for 32bit processors), but we could reuse the object-id after it is freed
- I think that we could add an option to nfsd or btrfs (not a default behavior) to avoid to cross the subvolume boundary
> And 'private' subvolumes
> could again appear on a public list if they were accessed via NFS.
I (wrongly) never considered such a scenario. However I think that these could be anonymized using an alias (the name of the path to mount is passed by nfsd, so it could create an alias that will be recognized by nfsd when the client requires it... complex but doable...)
>
> Thanks,
> NeilBrown
>
--
gpg @keyserver.linux.it: Goffredo Baroncelli <kreijackATinwind.it>
Key fingerprint BBF5 1610 0B64 DAC6 5F7D 17B2 0EDA 9B37 8B82 E0B5
On Thu, 19 Aug 2021, Wang Yugui wrote:
> Hi,
>
> We use 'swab64' to combine 'subvol id' and 'inode' into 64 bits in this
> patch.
>
> case1:
> 'subvol id': 16bit => 64K, a little small because the subvol id always
> increases?
> 'inode': 48bit * 4K per node, this is big enough.
>
> case2:
> 'subvol id': 24bit => 16M, this is big enough.
> 'inode': 40bit * 4K per node => 4 PB. this is a little small?
I don't know what point you are trying to make with the above.
>
> Is there a way to 'bit-swap' the subvol id, rather than the current byte-swap?
Sure:
	new = 0;
	for (i = 0; i < 64; i++) {
		/* shift the lowest remaining bit of 'old' into 'new' */
		new = (new << 1) | (old & 1);
		old >>= 1;
	}
but would it gain anything significant?
Remember what the goal is. Most apps don't care at all about duplicate
inode numbers - only a few do, and they only care about a few inodes.
The only bug I actually have a report of is caused by a directory having
the same inode as an ancestor. i.e. in lots of cases, duplicate inode
numbers won't be noticed.
The behaviour of btrfs over NFS RELIABLY causes exactly this behaviour
of a directory having the same inode number as an ancestor. The root of
a subtree will *always* do this. If we JUST changed the inode numbers
of the roots of subtrees, then most observed problems would go away. It
would change from "trivial to reproduce" to "rarely happens". The patch
I actually propose makes it much more unlikely than that. Even if
duplicate inode numbers do happen, the chance of them being noticed is
infinitesimal. Given that, there is no point in minor tweaks unless
they can make duplicate inode numbers IMPOSSIBLE.
>
> If not, maybe it is a better balance if we combine a 22bit subvol id and a
> 42bit inode?
This would be better except when it is worse. We cannot know which will
happen more often.
As long as BTRFS allows object-ids and root-ids combined to use more
than 64 bits there can be no perfect solution. There are many possible
solutions that will be close to perfect in practice. swab64() is the
simplest that I could think of. Picking any arbitrary cut-off (22/42,
24/40, ...) is unlikely to be better, and could is some circumstances be
worse.
My preference would be for btrfs to start re-using old object-ids and
root-ids, and to enforce a limit (set at mkfs or tunefs) so that the
total number of bits does not exceed 64. Unfortunately the maintainers
seem reluctant to even consider this.
NeilBrown
On Thu, Aug 19, 2021 at 07:46:22AM +1000, NeilBrown wrote:
> On Thu, 19 Aug 2021, Wang Yugui wrote:
> > Hi,
> >
> > We use 'swab64' to combine 'subvol id' and 'inode' into 64 bits in this
> > patch.
> >
> > case1:
> > 'subvol id': 16bit => 64K, a little small because the subvol id always
> > increases?
> > 'inode': 48bit * 4K per node, this is big enough.
> >
> > case2:
> > 'subvol id': 24bit => 16M, this is big enough.
> > 'inode': 40bit * 4K per node => 4 PB. this is a little small?
>
> I don't know what point you are trying to make with the above.
>
> >
> > Is there a way to 'bit-swap' the subvol id, rather than the current byte-swap?
>
> Sure:
> for (i=0; i<64; i++) {
> new = (new << 1) | (old & 1)
> old >>= 1;
> }
>
> but would it gain anything significant?
>
> Remember what the goal is. Most apps don't care at all about duplicate
> inode numbers - only a few do, and they only care about a few inodes.
> The only bug I actually have a report of is caused by a directory having
> the same inode as an ancestor. i.e. in lots of cases, duplicate inode
> numbers won't be noticed.
rsync -H and cpio's hardlink detection can be badly confused. They will
think distinct files with the same inode number are hardlinks. This could
be bad if you were making backups (though if you're making backups over
NFS, you are probably doing something that could be done better in a
different way).
> The behaviour of btrfs over NFS RELIABLY causes exactly this behaviour
> of a directory having the same inode number as an ancestor. The root of
> a subtree will *always* do this. If we JUST changed the inode numbers
> of the roots of subtrees, then most observed problems would go away. It
> would change from "trivial to reproduce" to "rarely happens". The patch
> I actually propose makes it much more unlikely than that. Even if
> duplicate inode numbers do happen, the chance of them being noticed is
> infinitesimal. Given that, there is no point in minor tweaks unless
> they can make duplicate inode numbers IMPOSSIBLE.
That's a good argument. I have a different one with the same conclusion.
40 bit inodes would take about 20 years to collide with 24-bit subvols--if
you are creating an average of 1742 inodes every second. Also at the
same time you have to be creating a subvol every 37 seconds to occupy
the colliding 25th bit of the subvol ID. Only the highest inode number
in any subvol counts--if your inode creation is spread out over several
different subvols, you'll need to make inodes even faster.
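[Aside, checking the arithmetic: 20 years is about 20 * 365.25 * 86400 ~=
6.3e8 seconds; 2^40 inodes / 6.3e8 s ~= 1742 inodes per second, and
6.3e8 s / 2^24 subvols ~= 37.6 seconds per subvol, matching the figures
above.]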
For reference, my high scores are 17 inodes per second and a subvol
every 595 seconds (averaged over 1 year). Burst numbers are much higher,
but one has to spend some time _reading_ the files now and then.
I've encountered other btrfs users with two orders of magnitude higher
inode creation rates than mine. They are barely squeaking under the
20-year line--or they would be, if they were creating snapshots 50 times
faster than they do today.
Use cases that have the highest inode creation rates (like /tmp) tend
to get more specialized storage solutions (like tmpfs).
Cloud fleets do have higher average inode creation rates, but their
filesystems have much shorter lifespans than 20 years, so the delta on
both sides of the ratio cancels out.
If this hack is only used for NFS, it gives us some time to come up
with a better solution. (On the other hand, we had 14 years already,
and here we are...)
> > If not, maybe it is a better balance if we combine a 22-bit subvol id and
> > a 42-bit inode?
>
> This would be better except when it is worse. We cannot know which will
> happen more often.
>
> As long as BTRFS allows object-ids and root-ids combined to use more
> than 64 bits there can be no perfect solution. There are many possible
> solutions that will be close to perfect in practice. swab64() is the
> simplest that I could think of. Picking any arbitrary cut-off (22/42,
> 24/40, ...) is unlikely to be better, and could in some circumstances be
> worse.
>
> My preference would be for btrfs to start re-using old object-ids and
> root-ids, and to enforce a limit (set at mkfs or tunefs) so that the
> total number of bits does not exceed 64. Unfortunately the maintainers
> seem reluctant to even consider this.
It was considered, implemented in 2011, and removed in 2020. Rationale
is in commit b547a88ea5776a8092f7f122ddc20d6720528782 "btrfs: start
deprecation of mount option inode_cache". It made file creation slower,
and consumed disk space, iops, and memory to run. Nobody used it.
Newer on-disk data structure versions (free space tree, 2015) didn't
bother implementing inode_cache's storage requirement.
> NeilBrown
On Mon, Aug 16, 2021 at 1:21 AM NeilBrown <[email protected]> wrote:
>
> On Mon, 16 Aug 2021, Roman Mamedov wrote:
> >
> > I wondered a bit myself, what are the downsides of just doing the
> > uniquefication inside Btrfs, not leaving that to NFSD?
> >
> > I mean not even adding the extra stat field, just return the inode itself with
> > that already applied. Surely cannot be any worse collision-wise, than
> > different subvolumes straight up having the same inode numbers as right now?
> >
> > Or is it a performance concern, always doing more work, for something which
> > only NFSD has needed so far.
>
> Any change in behaviour will have unexpected consequences. I think the
> btrfs maintainers' perspective is that they don't want to change
> behaviour if they don't have to (which is reasonable) and that currently
> they don't have to (which probably means that users aren't complaining
> loudly enough).
>
> NFS export of BTRFS is already demonstrably broken and users are
> complaining loudly enough that I can hear them .... though I think it
> has been broken like this for 10 years, so I wonder why I didn't hear
> them before.
>
> If something is perceived as broken, then a behaviour change that
> appears to fix it is more easily accepted.
>
> However, having said that I now see that my latest patch is not ideal.
> It changes the inode numbers associated with filehandles of objects in
> the non-root subvolume. This will cause the Linux NFS client to treat
> the object as 'stale'. For most objects this is a transient annoyance.
> Reopen the file or restart the process and all should be well again.
> However if the inode number of the mount point changes, you will need to
> unmount and remount. That is somewhat more of an annoyance.
>
> There are a few ways to handle this more gracefully.
>
> 1/ We could get btrfs to hand out new filehandles as well as new inode
> numbers, but still accept the old filehandles. Then we could make the
> inode number reported be based on the filehandle. This would be nearly
> seamless but rather clumsy to code. I'm not *very* keen on this idea,
> but it is worth keeping in mind.
>
So objects would change their inode number after nfs inode cache is
evicted and while nfs filesystem is mounted. That does not sound ideal.
But I am a bit confused about the problem.
If the export is of the btrfs root, then nfs client cannot access any
subvolumes (right?) - that was the bug report, so the value of inode
numbers in non-root subvolumes is not an issue.
If export is of non-root subvolume, then why bother changing anything
at all? Is there a need to traverse into sub-sub-volumes?
> 2/ We could add a btrfs mount option to control whether the uniquifier
> was set or not. This would allow the sysadmin to choose when to manage
> any breakage. I think this is my preference, but Josef has declared an
> aversion to mount options.
>
> 3/ We could add a module parameter to nfsd to control whether the
> uniquifier is merged in. This again gives the sysadmin control, and it
> can be done despite any aversion from btrfs maintainers. But I'd need
> to overcome any aversion from the nfsd maintainers, and I don't know how
> strong that would be yet. (A new export option isn't really appropriate.
> It is much more work to add an export option than to add a mount option).
>
That is too bad, because IMO from users POV, "fsid=btrfsroot" or "cross-subvol"
export option would have been a nice way to describe and opt-in to this new
functionality.
But let's consider for a moment the consequences of enabling this functionality
automatically whenever exporting a btrfs root volume without "crossmnt":
1. Objects inside a subvol that are inaccessible(?) with current nfs/nfsd
without "crossmnt" will become accessible after enabling the feature -
this will match the user experience of accessing btrfs on the host
2. The inode numbers of the newly accessible objects would not match the inode
numbers on the host fs (no big deal?)
3. The inode numbers of objects in a snapshot would not match the inode
numbers of the original (pre-snapshot) objects (acceptable tradeoff for
being able to access the snapshot objects without bloating /proc/mounts?)
4. The inode numbers of objects in a subvol observed via this "cross-subvol"
export would not match the inode numbers of the same objects observed
via an individual subvol export
5. st_ino conflicts are possible when multiplexing subvol id and inode number.
overlayfs resolved those conflicts by allocating an inode number from a
reserved non-persistent inode range, which may cause objects to change
their inode number during the lifetime of the filesystem (sensible
tradeoff?)
I think that #4 is a bit hard to swallow and #3 is borderline acceptable...
Both are quite hard to document and to set expectations as a non-opt-in
change of behavior when exporting btrfs root.
IMO, an nfsd module parameter will give some control and therefore is
a must, but it won't make life easier to document and set user expectations
when the semantics are not clearly stated in the exports table.
You claim that "A new export option isn't really appropriate."
but your only argument is that "It is much more work to add
an export option than to add a mount option".
With all due respect, for this particular challenge with all the
constraints involved, this sounds like a pretty weak argument.
Surely, adding an export option is easier than slowly changing all
userspace tools to understand subvolumes - a solution that you had
previously brought up.
Can you elaborate some more about your aversion to a new
export option?
Thanks,
Amir.
On Thu, 19 Aug 2021, Zygo Blaxell wrote:
> On Thu, Aug 19, 2021 at 07:46:22AM +1000, NeilBrown wrote:
> >
> > Remember what the goal is. Most apps don't care at all about duplicate
> > inode numbers - only a few do, and they only care about a few inodes.
> > The only bug I actually have a report of is caused by a directory having
> > the same inode as an ancestor. i.e. in lots of cases, duplicate inode
> > numbers won't be noticed.
>
> rsync -H and cpio's hardlink detection can be badly confused. They will
> think distinct files with the same inode number are hardlinks. This could
> be bad if you were making backups (though if you're making backups over
> NFS, you are probably doing something that could be done better in a
> different way).
Yes, they could get confused. inode numbers remain unique within a
"subvolume" so you would need to do at backup of multiple subtrees to
hit a problem. Certainly possible, but probably less common.
>
> 40 bit inodes would take about 20 years to collide with 24-bit subvols--if
> you are creating an average of 1742 inodes every second. Also at the
> same time you have to be creating a subvol every 37 seconds to occupy
> the colliding 25th bit of the subvol ID. Only the highest inode number
> in any subvol counts--if your inode creation is spread out over several
> different subvols, you'll need to make inodes even faster.
>
> For reference, my high scores are 17 inodes per second and a subvol
> every 595 seconds (averaged over 1 year). Burst numbers are much higher,
> but one has to spend some time _reading_ the files now and then.
>
> I've encountered other btrfs users with two orders of magnitude higher
> inode creation rates than mine. They are barely squeaking under the
> 20-year line--or they would be, if they were creating snapshots 50 times
> faster than they do today.
I do like seeing concrete numbers, thanks. How many of these inodes and
subvols remain undeleted? Supposing inode numbers were reused, how many
bits might you need?
> > My preference would be for btrfs to start re-using old object-ids and
> > root-ids, and to enforce a limit (set at mkfs or tunefs) so that the
> > total number of bits does not exceed 64. Unfortunately the maintainers
> > seem reluctant to even consider this.
>
> It was considered, implemented in 2011, and removed in 2020. Rationale
> is in commit b547a88ea5776a8092f7f122ddc20d6720528782 "btrfs: start
> deprecation of mount option inode_cache". It made file creation slower,
> and consumed disk space, iops, and memory to run. Nobody used it.
> Newer on-disk data structure versions (free space tree, 2015) didn't
> bother implementing inode_cache's storage requirement.
Yes, I saw that. Providing reliable functionality certainly can impact
performance and consume disk-space. That isn't an excuse for not doing
it.
I suspect that carefully tuned code could result in typical creation
times being unchanged, and mean creation times suffering only a tiny
cost. Using "max+1" when the creation rate is particularly high might
be a reasonable part of managing costs.
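Something like this illustrative sketch (all names invented; nothing like
this exists in btrfs today):

#include <stdbool.h>
#include <stdint.h>

struct ino_alloc {
	uint64_t max_ino;	/* highest number ever handed out */
	uint64_t free_list[64];	/* freed numbers available for reuse */
	int	 nfree;
	bool	 busy;		/* creation rate above some threshold */
};

static uint64_t ino_alloc(struct ino_alloc *a)
{
	if (!a->busy && a->nfree > 0)
		return a->free_list[--a->nfree];	/* reuse path */
	return ++a->max_ino;			/* cheap "max+1" path */
}

static void ino_free(struct ino_alloc *a, uint64_t ino)
{
	if (a->nfree < 64)
		a->free_list[a->nfree++] = ino;	/* else just forget it */
}

The reuse path would only be taken when it is known to be cheap; under
load everything degrades to "max+1" behaviour.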
Storage cost need not be worse than the cost of tracking free blocks
on the device.
"Nobody used it" is odd. It implies it would have to be explicitly
enabled, and all it would provide anyone is sane behaviour. Who would
imagine that to be an optional extra.
NeilBrown
On Thu, 19 Aug 2021, Amir Goldstein wrote:
> On Mon, Aug 16, 2021 at 1:21 AM NeilBrown <[email protected]> wrote:
> >
> > There are a few ways to handle this more gracefully.
> >
> > 1/ We could get btrfs to hand out new filehandles as well as new inode
> > numbers, but still accept the old filehandles. Then we could make the
> > inode number reported be based on the filehandle. This would be nearly
> > seamless but rather clumsy to code. I'm not *very* keen on this idea,
> > but it is worth keeping in mind.
> >
>
> So objects would change their inode number after nfs inode cache is
> evicted and while nfs filesystem is mounted. That does not sound ideal.
No. Almost all filehandle lookups happen in the context of some other
filehandle. If the provided context is an old-style filehandle, we
provide an old-style filehandle for the lookup. There is already code
in nfsd to support this (as we have in the past changed how filesystems
are identified).
It would only be if the mountpoint filehandle (which is fetched without
that context) went out of cache that inode numbers would change. That
would mean that the filesystem (possibly an automount) was unmounted.
When it was remounted it could have a different device number anyway, so
having different inode numbers would be of little consequence.
>
> But I am a bit confused about the problem.
> If the export is of the btrfs root, then nfs client cannot access any
> subvolumes (right?) - that was the bug report, so the value of inode
> numbers in non-root subvolumes is not an issue.
Not correct. All objects in the filesystem are fully accessible. The
only problem is that some pairs of objects have the same inode number.
This causes some programs like 'find' and 'du' to behave differently to
expectations. They will refuse to even look in a subvolume, because it
looks like doing so could cause an infinite loop. The values of inode
numbers in non-root subvolumes is EXACTLY the issue.
> If export is of non-root subvolume, then why bother changing anything
> at all? Is there a need to traverse into sub-sub-volumes?
>
> > 2/ We could add a btrfs mount option to control whether the uniquifier
> > was set or not. This would allow the sysadmin to choose when to manage
> > any breakage. I think this is my preference, but Josef has declared an
> > aversion to mount options.
> >
> > 3/ We could add a module parameter to nfsd to control whether the
> > uniquifier is merged in. This again gives the sysadmin control, and it
> > can be done despite any aversion from btrfs maintainers. But I'd need
> > to overcome any aversion from the nfsd maintainers, and I don't know how
> > strong that would be yet. (A new export option isn't really appropriate.
> > It is much more work to add an export option than to add a mount option).
> >
>
> That is too bad, because IMO from users POV, "fsid=btrfsroot" or "cross-subvol"
> export option would have been a nice way to describe and opt-in to this new
> functionality.
>
> But let's consider for a moment the consequences of enabling this functionality
> automatically whenever exporting a btrfs root volume without "crossmnt":
>
> 1. Objects inside a subvol that are inaccessible(?) with current nfs/nfsd
> without "crossmnt" will become accessible after enabling the feature -
> this will match the user experience of accessing btrfs on the host
Not correct - as above.
> 2. The inode numbers of the newly accessible objects would not match the inode
> numbers on the host fs (no big deal?)
Unlikely to be a problem. Inode numbers have no meaning beyond the facts
that:
- they are stable for the lifetime of the object
- they are unique within a filesystem (except btrfs lies about
filesystems)
- they are not zero
The facts only need to be equally true on the NFS server and client.
> 3. The inode numbers of objects in a snapshot would not match the inode
> numbers of the original (pre-snapshot) objects (acceptable tradeoff for
> being able to access the snapshot objects without bloating /proc/mounts?)
This also should not be a problem. Files in different snapshots are
different things that happen to share storage (like reflinks).
Comparing inode numbers between places which report different st_dev
does not fit within the meaning of inode numbers.
> 4. The inode numbers of objects in a subvol observed via this "cross-subvol"
> export would not match the inode numbers of the same objects observed
> via an individual subvol export
The device number would differ too, so the relative values of the inode
numbers would be irrelevant.
> 5. st_ino conflicts are possible when multiplexing subvol id and inode number.
> overlayfs resolved those conflicts by allocating an inode number from a
> reserved non-persistent inode range, which may cause objects to change
> their inode number during the lifetime of the filesystem (sensible
> tradeoff?)
>
> I think that #4 is a bit hard to swallow and #3 is borderline acceptable...
> Both are quite hard to document and to set expectations as a non-opt-in
> change of behavior when exporting btrfs root.
>
> IMO, an nfsd module parameter will give some control and therefore is
> a must, but it won't make life easier to document and set user expectations
> when the semantics are not clearly stated in the exports table.
>
> You claim that "A new export option isn't really appropriate."
> but your only argument is that "It is much more work to add
> an export option than to add a mount option".
>
> With all due respect, for this particular challenge with all the
> constraints involved, this sounds like a pretty weak argument.
>
> Surely, adding an export option is easier than slowly changing all
> userspace tools to understand subvolumes - a solution that you had
> previously brought up.
>
> Can you elaborate some more about your aversion to a new
> export option?
Export options are bits in a 32bit word - so both user-space and kernel
need to agree on names for them. We are currently using 18, so there is
room to grow. It is a perfectly reasonable way to implement sensible
features. It is, I think, a poor way to implement hacks to work around
misfeatures in filesystems.
This is the core of my dislike for adding an export option. Using one
effectively admits that what btrfs is doing is a valid thing to do. I
don't think it is. I don't think we want any other filesystem developer
to think that they can emulate the behaviour because support is already
provided.
If we add any configuration to support btrfs, I would much prefer it to
be implemented in fs/btrfs, and if not, then with loud warnings that it
works around a deficiency in btrfs.
/sys/module/nfsd/parameters/btrfs_export_workaround
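A minimal sketch of how such a loudly-named knob might be wired up (the
parameter name and its plumbing into the filehandle code are hypothetical):

#include <linux/module.h>

static bool btrfs_export_workaround;
module_param(btrfs_export_workaround, bool, 0644);
MODULE_PARM_DESC(btrfs_export_workaround,
		 "work around non-unique inode numbers on exported btrfs");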
Thanks,
NeilBrown
On Fri, Aug 20, 2021 at 6:22 AM NeilBrown <[email protected]> wrote:
>
> On Thu, 19 Aug 2021, Amir Goldstein wrote:
> > On Mon, Aug 16, 2021 at 1:21 AM NeilBrown <[email protected]> wrote:
> > >
> > > There are a few ways to handle this more gracefully.
> > >
> > > 1/ We could get btrfs to hand out new filehandles as well as new inode
> > > numbers, but still accept the old filehandles. Then we could make the
> > > inode number reported be based on the filehandle. This would be nearly
> > > seamless but rather clumsy to code. I'm not *very* keen on this idea,
> > > but it is worth keeping in mind.
> > >
> >
> > So objects would change their inode number after nfs inode cache is
> > evicted and while nfs filesystem is mounted. That does not sound ideal.
>
> No. Almost all filehandle lookups happen in the context of some other
> filehandle. If the provided context is an old-style filehandle, we
> provide an old-style filehandle for the lookup. There is already code
> in nfsd to support this (as we have in the past changed how filesystems
> are identified).
>
I see. This is nice!
It almost sounds like "inode mapped mounts" :-)
> It would only be if the mountpoint filehandle (which is fetched without
> that context) went out of cache that inode numbers would change. That
> would mean that the filesystem (possibly an automount) was unmounted.
> When it was remounted it could have a different device number anyway, so
> having different inode numbers would be of little consequence.
>
> >
> > But I am a bit confused about the problem.
> > If the export is of the btrfs root, then nfs client cannot access any
> > subvolumes (right?) - that was the bug report, so the value of inode
> > numbers in non-root subvolumes is not an issue.
>
> Not correct. All objects in the filesystem are fully accessible. The
> only problem is that some pairs of objects have the same inode number.
> This causes some programs like 'find' and 'du' to behave differently to
> expectations. They will refuse to even look in a subvolume, because it
> looks like doing so could cause an infinite loop. The values of inode
> numbers in non-root subvolumes is EXACTLY the issue.
>
Thanks for clarifying.
> > If export is of non-root subvolume, then why bother changing anything
> > at all? Is there a need to traverse into sub-sub-volumes?
> >
> > > 2/ We could add a btrfs mount option to control whether the uniquifier
> > > was set or not. This would allow the sysadmin to choose when to manage
> > > any breakage. I think this is my preference, but Josef has declared an
> > > aversion to mount options.
> > >
> > > 3/ We could add a module parameter to nfsd to control whether the
> > > uniquifier is merged in. This again gives the sysadmin control, and it
> > > can be done despite any aversion from btrfs maintainers. But I'd need
> > > to overcome any aversion from the nfsd maintainers, and I don't know how
> > > strong that would be yet. (A new export option isn't really appropriate.
> > > It is much more work to add an export option than to add a mount option).
> > >
> >
> > That is too bad, because IMO from users POV, "fsid=btrfsroot" or "cross-subvol"
> > export option would have been a nice way to describe and opt-in to this new
> > functionality.
> >
> > But let's consider for a moment the consequences of enabling this functionality
> > automatically whenever exporting a btrfs root volume without "crossmnt":
> >
> > 1. Objects inside a subvol that are inaccessible(?) with current nfs/nfsd
> > without "crossmnt" will become accessible after enabling the feature -
> > this will match the user experience of accessing btrfs on the host
>
> Not correct - as above.
>
> > 2. The inode numbers of the newly accessible objects would not match the inode
> > numbers on the host fs (no big deal?)
>
> Unlikely to be a problem. Inode numbers have no meaning beyond the facts
> that:
> - they are stable for the lifetime of the object
> - they are unique within a filesystem (except btrfs lies about
> filesystems)
> - they are not zero
>
> The facts only need to be equally true on the NFS server and client.
>
> > 3. The inode numbers of objects in a snapshot would not match the inode
> > numbers of the original (pre-snapshot) objects (acceptable tradeoff for
> > being able to access the snapshot objects without bloating /proc/mounts?)
>
> This also should not be a problem. Files in different snapshots are
> different things that happen to share storage (like reflinks).
> Comparing inode numbers between places which report different st_dev
> does not fit within the meaning of inode numbers.
>
> > 4. The inode numbers of objects in a subvol observed via this "cross-subvol"
> > export would not match the inode numbers of the same objects observed
> > via an individual subvol export
>
> The device number would differ too, so the relative values of the inode
> numbers would be irrelevant.
>
> > 5. st_ino conflicts are possible when multiplexing subvol id and inode number.
> > overlayfs resolved those conflicts by allocating an inode number from a
> > reserved non-persistent inode range, which may cause objects to change
> > their inode number during the lifetime of the filesystem (sensible
> > tradeoff?)
> >
> > I think that #4 is a bit hard to swallow and #3 is borderline acceptable...
> > Both are quite hard to document and to set expectations as a non-opt-in
> > change of behavior when exporting btrfs root.
> >
> > IMO, an nfsd module parameter will give some control and therefore is
> > a must, but it won't make life easier to document and set user expectations
> > when the semantics are not clearly stated in the exports table.
> >
> > You claim that "A new export option isn't really appropriate."
> > but your only argument is that "It is much more work to add
> > an export option than to add a mount option".
> >
> > With all due respect, for this particular challenge with all the
> > constraints involved, this sounds like a pretty weak argument.
> >
> > Surely, adding an export option is easier than slowly changing all
> > userspace tools to understand subvolumes - a solution that you had
> > previously brought up.
> >
> > Can you elaborate some more about your aversion to a new
> > export option?
>
> Export options are bits in a 32bit word - so both user-space and kernel
> need to agree on names for them. We are currently using 18, so there is
> room to grow. It is a perfectly reasonable way to implement sensible
> features. It is, I think, a poor way to implement hacks to work around
> misfeatures in filesystems.
>
> This is the core of my dislike for adding an export option. Using one
> effectively admits that what btrfs is doing is a valid thing to do. I
> don't think it is. I don't think we want any other filesystem developer
> to think that they can emulate the behaviour because support is already
> provided.
>
> If we add any configuration to support btrfs, I would much prefer it to
> be implemented in fs/btrfs, and if not, then with loud warnings that it
> works around a deficiency in btrfs.
> /sys/module/nfsd/parameters/btrfs_export_workaround
>
Thanks for clarifying.
I now understand how "hacky" the workaround in nfsd is.
Naive question: could this behavior be implemented in btrfs as you
prefer, but in a way that only serves nfsd and NEW nfs mounts for
that matter (i.e. only NEW filehandles)?
Meaning passing some hint in ->getattr() and ->iterate_shared(),
sort of like idmapped mount does for uid/gid.
Thanks,
Amir.
On Fri, Aug 20, 2021 at 12:54:17PM +1000, NeilBrown wrote:
> On Thu, 19 Aug 2021, Zygo Blaxell wrote:
> > 40 bit inodes would take about 20 years to collide with 24-bit subvols--if
> > you are creating an average of 1742 inodes every second. Also at the
> > same time you have to be creating a subvol every 37 seconds to occupy
> > the colliding 25th bit of the subvol ID. Only the highest inode number
> > in any subvol counts--if your inode creation is spread out over several
> > different subvols, you'll need to make inodes even faster.
> >
> > For reference, my high scores are 17 inodes per second and a subvol
> > every 595 seconds (averaged over 1 year). Burst numbers are much higher,
> > but one has to spend some time _reading_ the files now and then.
> >
> > I've encountered other btrfs users with two orders of magnitude higher
> > inode creation rates than mine. They are barely squeaking under the
> > 20-year line--or they would be, if they were creating snapshots 50 times
> > faster than they do today.
>
> I do like seeing concrete numbers, thanks. How many of these inodes and
> subvols remain undeleted? Supposing inode numbers were reused, how many
> bits might you need?
Number of existing inodes is filesystem size divided by average inode
size, about 30 million inodes per terabyte for build servers, give or
take an order of magnitude per project. That does put 1 << 32 inodes in
the range of current disk sizes, which motivated the inode_cache feature.
Number of existing subvols stays below 1 << 14. It's usually some
near-constant multiple of the filesystem age (if it is not limited more
by capacity) because it's not trivial to move a subvol structure from
one filesystem to another.
The main constraint on the product of both numbers is filesystem size.
If that limit is reached, we often see that lower subvol numbers correlate
with higher inode numbers and vice versa; otherwise both keep growing until
they hit the size limit or some user-chosen limit (e.g. "we just don't
need more than the last 300 builds online at any time").
For build and backup use cases (which both heavily use snapshots) there is
no incentive to delete snapshots other than to avoid eventually running
out of space. There is also no incentive to increase filesystem size
to accommodate extra snapshots, as long as there is room for some minimal
useful number of snapshots, the original subvols, and some free space.
So we get snapshots in numbers that are roughly:
min(age_of_filesystem * snapshot_creation_rate, filesystem_capacity / average_subvol_unique_data_size)
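(Plugging my numbers from above into the first term: one subvol every
595 seconds for a year is about 31.5 million seconds / 595, i.e. roughly
53000 snapshots, unless the capacity term kicks in first.)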
Subvol IDs are not reusable. They are embedded in shared object ownership
metadata, and persist for some time after subvols are deleted.
> > > My preference would be for btrfs to start re-using old object-ids and
> > > root-ids, and to enforce a limit (set at mkfs or tunefs) so that the
> > > total number of bits does not exceed 64. Unfortunately the maintainers
> > > seem reluctant to even consider this.
> >
> > It was considered, implemented in 2011, and removed in 2020. Rationale
> > is in commit b547a88ea5776a8092f7f122ddc20d6720528782 "btrfs: start
> > deprecation of mount option inode_cache". It made file creation slower,
> > and consumed disk space, iops, and memory to run. Nobody used it.
> > Newer on-disk data structure versions (free space tree, 2015) didn't
> > bother implementing inode_cache's storage requirement.
>
> > Yes, I saw that. Providing reliable functionality certainly can impact
> performance and consume disk-space. That isn't an excuse for not doing
> it.
> I suspect that carefully tuned code could result in typical creation
> times being unchanged, and mean creation times suffering only a tiny
> cost. Using "max+1" when the creation rate is particularly high might
> be a reasonable part of managing costs.
> Storage cost need not be worse than the cost of tracking free blocks
> on the device.
The cost of _tracking_ free object IDs is trivial compared to the cost
of _reusing_ an object ID on btrfs.
If btrfs doesn't reuse object numbers, btrfs can append new objects
to the last partially filled leaf. If there are shared metadata pages
(i.e. snapshots), btrfs unshares a handful of pages once, and then future
writes use densely packed new pages and delayed allocation without having
to read anything.
If btrfs reuses object numbers, the filesystem has to pack new objects
into random previously filled metadata leaf nodes, so there are a lot
of read-modify-writes scattered over old metadata pages, which spreads
the working set around and reduces cache usage efficiency (i.e. uses
more RAM). If there are snapshots, each shared page that is modified
for the first time after the snapshot comes with two-orders-of-magnitude
worst-case write multipliers.
The two-algorithm scheme (switching from "reuse freed inode" to "max+1"
under load) would be forced into the "max+1" mode half the time by a
daily workload of alternating git checkouts and builds. It would save
only one bit of inode namespace over the lifetime of the filesystem.
> "Nobody used it" is odd. It implies it would have to be explicitly
> enabled, and all it would provide anyone is sane behaviour. Who would
> imagine that to be an optional extra.
It always had to be explicitly enabled. It was initially a workaround
for 32-bit ino_t that was limiting a few users, but ino_t got better
and the need for inode_cache went away.
NFS (particularly NFSv2) might be the use case inode_cache has been
waiting for. btrfs has an i_version field for NFSv4, so it's not like
there's no precedent for adding features in btrfs to support NFS.
On the other hand, the cost of ino_cache gets worse with snapshots,
and the benefit in practice takes years to decades to become relevant.
Users who are exporting snapshots over NFS are likely to be especially
averse to using inode_cache.
> NeilBrown
>
>
Hi,
> rsync -H and cpio's hardlink detection can be badly confused. They will
> think distinct files with the same inode number are hardlinks. This could
> be bad if you were making backups (though if you're making backups over
> NFS, you are probably doing something that could be done better in a
> different way).
'rsync -x' and 'find -mount/-xdev' will fail to work with a
snapper config?
snapper is a very important use case.
There is not yet an option like '-mount/-xdev' for '/bin/cp',
but one may come soon.
I thought the first patchset (crossmnt in the nfs client) was the right way,
because in most cases a subvol is a filesystem, not a directory.
Best Regards
Wang Yugui ([email protected])
2021/08/23
BTRFS does not provide unique inode numbers across a filesystem.
It only provides unique inode numbers within a subvolume and
uses synthetic device numbers for different subvolumes to ensure
uniqueness for device+inode.
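For illustration (this program is not part of the patch), the synthetic
device numbers are visible with plain stat(2); a subvol root typically
reports st_ino 256 under its own st_dev:

#include <stdio.h>
#include <sys/stat.h>

int main(int argc, char **argv)
{
	struct stat st;
	int i;

	/* pass a btrfs mount point and a subvol root as arguments */
	for (i = 1; i < argc; i++) {
		if (stat(argv[i], &st) == 0)
			printf("%s: dev=%#llx ino=%llu\n", argv[i],
			       (unsigned long long)st.st_dev,
			       (unsigned long long)st.st_ino);
	}
	return 0;
}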
nfsd cannot use these varying synthetic device numbers. If nfsd were to
synthesise different stable filesystem ids to give to the client, that
would cause subvolumes to appear in the mount table on the client, even
though they don't appear in the mount table on the server. Also, NFSv3
doesn't support changing the filesystem id without a new explicit mount
on the client (this is partially supported in practice, but violates the
protocol specification and has problems in some edge cases).
So currently, the roots of all subvolumes report the same inode number
in the same filesystem to NFS clients, and tools like 'find' notice that
a directory has the same identity as an ancestor, and so refuse to
enter that directory.
This patch allows btrfs (or any filesystem) to provide a 64bit number
that can be xored with the inode number to make the number more unique.
Rather than the client being certain to see duplicates, with this patch
it is possible but extremely rare.
The number that btrfs provides is a swab64() version of the subvolume
identifier. This has most entropy in the high bits (the low bits of the
subvolume identifier), while the inode has most entropy in the low bits.
The result will always be unique within a subvolume, and will almost
always be unique across the filesystem.
If an upgrade of the NFS server caused all inode numbers in an exported
BTRFS filesystem to appear to the client to change, the client may not
handle this well. The Linux client will cause any open files to become
'stale'. If the mount point changed inode number, the whole mount would
become inaccessible.
To avoid this, an unused byte in the filehandle (fh_auth) has been
repurposed as "fh_options". (The use of #defines make fh_flags a
problematic choice). The new behaviour of uniquifying inode number is
only activated when this bit is set.
NFSD will only set this bit in filehandles it reports if the filehandle
of the parent (provided by the client) contains the bit, or if
- the filehandle for the parent is not provided or is for a different
export and
- the filehandle refers to a BTRFS filesystem.
Thus if you have a BTRFS filesystem originally mounted from a server
without this patch, the flag will never be set and the current behaviour
will continue. Only once you re-mount the filesystem (or the filesystem
is re-auto-mounted) will the inode numbers change. When that happens,
it is likely that the filesystem st_dev number seen on the client will
change anyway.
Signed-off-by: NeilBrown <[email protected]>
---
fs/btrfs/inode.c | 4 ++++
fs/nfsd/nfs3xdr.c | 15 ++++++++++++++-
fs/nfsd/nfs4xdr.c | 7 ++++---
fs/nfsd/nfsfh.c | 13 +++++++++++--
fs/nfsd/nfsfh.h | 22 ++++++++++++++++++++++
fs/nfsd/xdr3.h | 2 ++
include/linux/stat.h | 18 ++++++++++++++++++
include/uapi/linux/nfsd/nfsfh.h | 18 ++++++++++++------
8 files changed, 87 insertions(+), 12 deletions(-)
diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
index 0117d867ecf8..989fdf2032d5 100644
--- a/fs/btrfs/inode.c
+++ b/fs/btrfs/inode.c
@@ -9195,6 +9195,10 @@ static int btrfs_getattr(struct user_namespace *mnt_userns,
generic_fillattr(&init_user_ns, inode, stat);
stat->dev = BTRFS_I(inode)->root->anon_dev;
+ if (BTRFS_I(inode)->root->root_key.objectid != BTRFS_FS_TREE_OBJECTID)
+ stat->ino_uniquifier =
+ swab64(BTRFS_I(inode)->root->root_key.objectid);
+
spin_lock(&BTRFS_I(inode)->lock);
delalloc_bytes = BTRFS_I(inode)->new_delalloc_bytes;
inode_bytes = inode_get_bytes(inode);
diff --git a/fs/nfsd/nfs3xdr.c b/fs/nfsd/nfs3xdr.c
index 0a5ebc52e6a9..19d14f11f79a 100644
--- a/fs/nfsd/nfs3xdr.c
+++ b/fs/nfsd/nfs3xdr.c
@@ -340,6 +340,7 @@ svcxdr_encode_fattr3(struct svc_rqst *rqstp, struct xdr_stream *xdr,
{
struct user_namespace *userns = nfsd_user_namespace(rqstp);
__be32 *p;
+ u64 ino;
u64 fsid;
p = xdr_reserve_space(xdr, XDR_UNIT * 21);
@@ -377,7 +378,8 @@ svcxdr_encode_fattr3(struct svc_rqst *rqstp, struct xdr_stream *xdr,
p = xdr_encode_hyper(p, fsid);
/* fileid */
- p = xdr_encode_hyper(p, stat->ino);
+ ino = nfsd_uniquify_ino(fhp, stat);
+ p = xdr_encode_hyper(p, ino);
p = encode_nfstime3(p, &stat->atime);
p = encode_nfstime3(p, &stat->mtime);
@@ -1151,6 +1153,17 @@ svcxdr_encode_entry3_common(struct nfsd3_readdirres *resp, const char *name,
if (xdr_stream_encode_item_present(xdr) < 0)
return false;
/* fileid */
+ if (!resp->dir_have_uniquifier) {
+ struct kstat stat;
+ if (fh_getattr(&resp->fh, &stat) == nfs_ok)
+ resp->dir_ino_uniquifier =
+ nfsd_ino_uniquifier(&resp->fh, &stat);
+ else
+ resp->dir_ino_uniquifier = 0;
+ resp->dir_have_uniquifier = true;
+ }
+ if (resp->dir_ino_uniquifier != ino)
+ ino ^= resp->dir_ino_uniquifier;
if (xdr_stream_encode_u64(xdr, ino) < 0)
return false;
/* name */
diff --git a/fs/nfsd/nfs4xdr.c b/fs/nfsd/nfs4xdr.c
index 7abeccb975b2..5ed894ceebb0 100644
--- a/fs/nfsd/nfs4xdr.c
+++ b/fs/nfsd/nfs4xdr.c
@@ -3114,10 +3114,11 @@ nfsd4_encode_fattr(struct xdr_stream *xdr, struct svc_fh *fhp,
fhp->fh_handle.fh_size);
}
if (bmval0 & FATTR4_WORD0_FILEID) {
+ u64 ino = nfsd_uniquify_ino(fhp, &stat);
p = xdr_reserve_space(xdr, 8);
if (!p)
goto out_resource;
- p = xdr_encode_hyper(p, stat.ino);
+ p = xdr_encode_hyper(p, ino);
}
if (bmval0 & FATTR4_WORD0_FILES_AVAIL) {
p = xdr_reserve_space(xdr, 8);
@@ -3274,7 +3275,7 @@ nfsd4_encode_fattr(struct xdr_stream *xdr, struct svc_fh *fhp,
p = xdr_reserve_space(xdr, 8);
if (!p)
- goto out_resource;
+ goto out_resource;
/*
* Get parent's attributes if not ignoring crossmount
* and this is the root of a cross-mounted filesystem.
@@ -3284,7 +3285,7 @@ nfsd4_encode_fattr(struct xdr_stream *xdr, struct svc_fh *fhp,
err = get_parent_attributes(exp, &parent_stat);
if (err)
goto out_nfserr;
- ino = parent_stat.ino;
+ ino = nfsd_uniquify_ino(fhp, &parent_stat);
}
p = xdr_encode_hyper(p, ino);
}
diff --git a/fs/nfsd/nfsfh.c b/fs/nfsd/nfsfh.c
index c475d2271f9c..e97ed957a379 100644
--- a/fs/nfsd/nfsfh.c
+++ b/fs/nfsd/nfsfh.c
@@ -172,7 +172,7 @@ static __be32 nfsd_set_fh_dentry(struct svc_rqst *rqstp, struct svc_fh *fhp)
if (--data_left < 0)
return error;
- if (fh->fh_auth_type != 0)
+ if ((fh->fh_options & ~NFSD_FH_OPTION_ALL) != 0)
return error;
len = key_len(fh->fh_fsid_type) / 4;
if (len == 0)
@@ -569,6 +569,7 @@ fh_compose(struct svc_fh *fhp, struct svc_export *exp, struct dentry *dentry,
struct inode * inode = d_inode(dentry);
dev_t ex_dev = exp_sb(exp)->s_dev;
+ u8 options = 0;
dprintk("nfsd: fh_compose(exp %02x:%02x/%ld %pd2, ino=%ld)\n",
MAJOR(ex_dev), MINOR(ex_dev),
@@ -585,6 +586,14 @@ fh_compose(struct svc_fh *fhp, struct svc_export *exp, struct dentry *dentry,
/* If we have a ref_fh, then copy the fh_no_wcc setting from it. */
fhp->fh_no_wcc = ref_fh ? ref_fh->fh_no_wcc : false;
+ if (ref_fh && ref_fh->fh_export == exp) {
+ options = ref_fh->fh_handle.fh_options;
+ } else {
+ /* Set options as needed */
+ if (exp->ex_path.mnt->mnt_sb->s_magic == BTRFS_SUPER_MAGIC)
+ options |= NFSD_FH_OPTION_INO_UNIQUIFY;
+ }
+
if (ref_fh == fhp)
fh_put(ref_fh);
@@ -615,7 +624,7 @@ fh_compose(struct svc_fh *fhp, struct svc_export *exp, struct dentry *dentry,
} else {
fhp->fh_handle.fh_size =
key_len(fhp->fh_handle.fh_fsid_type) + 4;
- fhp->fh_handle.fh_auth_type = 0;
+ fhp->fh_handle.fh_options = options;
mk_fsid(fhp->fh_handle.fh_fsid_type,
fhp->fh_handle.fh_fsid,
diff --git a/fs/nfsd/nfsfh.h b/fs/nfsd/nfsfh.h
index 6106697adc04..1144a98c2951 100644
--- a/fs/nfsd/nfsfh.h
+++ b/fs/nfsd/nfsfh.h
@@ -84,6 +84,28 @@ enum fsid_source {
};
extern enum fsid_source fsid_source(const struct svc_fh *fhp);
+enum nfsd_fh_options {
+ NFSD_FH_OPTION_INO_UNIQUIFY = 1, /* BTRFS only */
+
+ NFSD_FH_OPTION_ALL = 1
+};
+
+static inline u64 nfsd_ino_uniquifier(const struct svc_fh *fhp,
+ const struct kstat *stat)
+{
+ if (fhp->fh_handle.fh_options & NFSD_FH_OPTION_INO_UNIQUIFY)
+ return stat->ino_uniquifier;
+ return 0;
+}
+
+static inline u64 nfsd_uniquify_ino(const struct svc_fh *fhp,
+ const struct kstat *stat)
+{
+ u64 u = nfsd_ino_uniquifier(fhp, stat);
+ if (u != stat->ino)
+ return stat->ino ^ u;
+ return stat->ino;
+}
/*
* This might look a little large to "inline" but in all calls except
diff --git a/fs/nfsd/xdr3.h b/fs/nfsd/xdr3.h
index 933008382bbe..d9b6c8314bbb 100644
--- a/fs/nfsd/xdr3.h
+++ b/fs/nfsd/xdr3.h
@@ -179,6 +179,8 @@ struct nfsd3_readdirres {
struct xdr_buf dirlist;
struct svc_fh scratch;
struct readdir_cd common;
+ u64 dir_ino_uniquifier;
+ bool dir_have_uniquifier;
unsigned int cookie_offset;
struct svc_rqst * rqstp;
diff --git a/include/linux/stat.h b/include/linux/stat.h
index fff27e603814..0f3f74d302f8 100644
--- a/include/linux/stat.h
+++ b/include/linux/stat.h
@@ -46,6 +46,24 @@ struct kstat {
struct timespec64 btime; /* File creation time */
u64 blocks;
u64 mnt_id;
+ /*
+ * BTRFS does not provide unique inode numbers within a filesystem,
+ * depending on a synthetic 'dev' to provide uniqueness.
+ * NFSd cannot make use of this 'dev' number so clients often see
+ * duplicate inode numbers.
+ * For BTRFS, 'ino' is unlikely to use the high bits until the filesystem
+ * has created a great many inodes.
+ * It puts another number in ino_uniquifier which:
+ * - has most entropy in the high bits
+ * - is different precisely when 'dev' is different
+ * - is stable across unmount/remount
+ * NFSd can xor this with 'ino' to get a substantially more unique
+ * number for reporting to the client.
+ * The ino_uniquifier for a directory can reasonably be applied
+ * to inode numbers reported by the readdir filldir callback.
+ * It is NOT currently exported to user-space.
+ */
+ u64 ino_uniquifier;
};
#endif
diff --git a/include/uapi/linux/nfsd/nfsfh.h b/include/uapi/linux/nfsd/nfsfh.h
index 427294dd56a1..59311df4b476 100644
--- a/include/uapi/linux/nfsd/nfsfh.h
+++ b/include/uapi/linux/nfsd/nfsfh.h
@@ -38,11 +38,17 @@ struct nfs_fhbase_old {
* The file handle starts with a sequence of four-byte words.
* The first word contains a version number (1) and three descriptor bytes
* that tell how the remaining 3 variable length fields should be handled.
- * These three bytes are auth_type, fsid_type and fileid_type.
+ * These three bytes are options, fsid_type and fileid_type.
*
* All four-byte values are in host-byte-order.
*
- * The auth_type field is deprecated and must be set to 0.
+ * The options field (previously auth_type) can be used when nfsd behaviour
+ * needs to change in a non-compatible way, usually for some specific
+ * filesystem. Options should only be set in filehandles for filesystems which
+ * need them.
+ * Current values:
+ * 1 - BTRFS only. Cause stat->ino_uniquifier to be used to improve inode
+ * number uniqueness.
*
* The fsid_type identifies how the filesystem (or export point) is
* encoded.
@@ -67,7 +73,7 @@ struct nfs_fhbase_new {
union {
struct {
__u8 fb_version_aux; /* == 1, even => nfs_fhbase_old */
- __u8 fb_auth_type_aux;
+ __u8 fb_options_aux;
__u8 fb_fsid_type_aux;
__u8 fb_fileid_type_aux;
__u32 fb_auth[1];
@@ -76,7 +82,7 @@ struct nfs_fhbase_new {
};
struct {
__u8 fb_version; /* == 1, even => nfs_fhbase_old */
- __u8 fb_auth_type;
+ __u8 fb_options;
__u8 fb_fsid_type;
__u8 fb_fileid_type;
__u32 fb_auth_flex[]; /* flexible-array member */
@@ -106,11 +112,11 @@ struct knfsd_fh {
#define fh_version fh_base.fh_new.fb_version
#define fh_fsid_type fh_base.fh_new.fb_fsid_type
-#define fh_auth_type fh_base.fh_new.fb_auth_type
+#define fh_options fh_base.fh_new.fb_options
#define fh_fileid_type fh_base.fh_new.fb_fileid_type
#define fh_fsid fh_base.fh_new.fb_auth_flex
/* Do not use, provided for userspace compatiblity. */
-#define fh_auth fh_base.fh_new.fb_auth
+#define fh_auth fh_base.fh_new.fb_options
#endif /* _UAPI_LINUX_NFSD_FH_H */
--
2.32.0
On Mon, 23 Aug 2021, Zygo Blaxell wrote:
>
> Subvol IDs are not reusable. They are embedded in shared object ownership
> metadata, and persist for some time after subvols are deleted.
Hmmm... that's interesting. Makes some sense too. I did wonder how
ownership across multiple snapshots was tracked.
>
> > > > My preference would be for btrfs to start re-using old object-ids and
> > > > root-ids, and to enforce a limit (set at mkfs or tunefs) so that the
> > > > total number of bits does not exceed 64. Unfortunately the maintainers
> > > > seem reluctant to even consider this.
> > >
> > > It was considered, implemented in 2011, and removed in 2020. Rationale
> > > is in commit b547a88ea5776a8092f7f122ddc20d6720528782 "btrfs: start
> > > deprecation of mount option inode_cache". It made file creation slower,
> > > and consumed disk space, iops, and memory to run. Nobody used it.
> > > Newer on-disk data structure versions (free space tree, 2015) didn't
> > > bother implementing inode_cache's storage requirement.
> >
> > Yes, I saw that. Providing reliable functionality certainly can impact
> > performance and consume disk-space. That isn't an excuse for not doing
> > it.
> > I suspect that carefully tuned code could result in typical creation
> > times being unchanged, and mean creation times suffering only a tiny
> > cost. Using "max+1" when the creation rate is particularly high might
> > be a reasonable part of managing costs.
> > Storage cost need not be worse than the cost of tracking free blocks
> > on the device.
>
> The cost of _tracking_ free object IDs is trivial compared to the cost
> of _reusing_ an object ID on btrfs.
I hadn't thought of that.
>
> If btrfs doesn't reuse object numbers, btrfs can append new objects
> to the last partially filled leaf. If there are shared metadata pages
> (i.e. snapshots), btrfs unshares a handful of pages once, and then future
> writes use densely packed new pages and delayed allocation without having
> to read anything.
>
> If btrfs reuses object numbers, the filesystem has to pack new objects
> into random previously filled metadata leaf nodes, so there are a lot
> of read-modify-writes scattered over old metadata pages, which spreads
> the working set around and reduces cache usage efficiency (i.e. uses
> more RAM). If there are snapshots, each shared page that is modified
> for the first time after the snapshot comes with two-orders-of-magnitude
> worst-case write multipliers.
I don't really follow that .... but I'll take your word for it for now.
>
> The two-algorithm scheme (switching from "reuse freed inode" to "max+1"
> under load) would be forced into the "max+1" mode half the time by a
> daily workload of alternating git checkouts and builds. It would save
> only one bit of inode namespace over the lifetime of the filesystem.
>
> > "Nobody used it" is odd. It implies it would have to be explicitly
> > enabled, and all it would provide anyone is sane behaviour. Who would
> > imagine that to be an optional extra.
>
> It always had to be explicitly enabled. It was initially a workaround
> for 32-bit ino_t that was limiting a few users, but ino_t got better
> and the need for inode_cache went away.
>
> NFS (particularly NFSv2) might be the use case inode_cache has been
> waiting for. btrfs has an i_version field for NFSv4, so it's not like
> there's no precedent for adding features in btrfs to support NFS.
NFSv2 is not worth any effort. NFSv4 is. NFSv3 ... some, but not a lot.
>
> On the other hand, the cost of ino_cache gets worse with snapshots,
> and the benefit in practice takes years to decades to become relevant.
> Users who are exporting snapshots over NFS are likely to be especially
> averse to using inode_cache.
That's the real killer. Everything will work fine for years until it
doesn't. And once it doesn't .... what do you do?
Thanks for lot for all this background info. I've found it to be very
helpful for my general understanding.
Thanks,
NeilBrown
On Mon, 23 Aug 2021, Zygo Blaxell wrote:
...
>
> Subvol IDs are not reusable. They are embedded in shared object ownership
> metadata, and persist for some time after subvols are deleted.
...
>
> The cost of _tracking_ free object IDs is trivial compared to the cost
> of _reusing_ an object ID on btrfs.
One possible approach to these two objections is to decouple inode
numbers from object ids.
The inode number becomes just another piece of metadata stored in the
inode.
struct btrfs_inode_item has four spare u64s, so we could use one of
those.
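Purely as an illustration (the choice of reserved[0] and the helper are
invented), the lookup side might be no more than:

#include <linux/btrfs_tree.h>

/* report the stored number if one was ever assigned, else fall back
 * to the object-id, preserving behaviour on old filesystems */
static u64 btrfs_stored_ino(const struct btrfs_inode_item *item, u64 objectid)
{
	u64 ino = le64_to_cpu(item->reserved[0]);	/* 0 = never assigned */

	return ino ?: objectid;
}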
struct btrfs_dir_item would need to store the inode number too. What
is location.offset used for? Would a diritem ever point to a non-zero
offset? Could the 'offset' be used to store the inode number?
This could even be added to existing filesystems I think. It might not
be easy to re-use inode numbers smaller than the largest at the time the
extension was added, but newly added inode numbers could be reused after
they were deleted.
Just a thought...
NeilBrown
On Tue, Aug 24, 2021 at 09:22:05AM +1000, NeilBrown wrote:
> On Mon, 23 Aug 2021, Zygo Blaxell wrote:
> ...
> >
> > Subvol IDs are not reusable. They are embedded in shared object ownership
> > metadata, and persist for some time after subvols are deleted.
> ...
> >
> > The cost of _tracking_ free object IDs is trivial compared to the cost
> > of _reusing_ an object ID on btrfs.
>
> One possible approach to these two objections is to decouple inode
> numbers from object ids.
This would be reasonable for subvol IDs (I thought of it earlier in this
thread, but didn't mention it because I wasn't going to be the first to
open that worm can ;).
There aren't very many subvol IDs and they're not used as frequently
as inodes, so a lookup table to remap them to smaller numbers to save
st_ino bit-space wouldn't be unreasonably expensive. If we stop right
here and use the [some_zeros:reversed_subvol:inode] bit-packing scheme
you proposed for NFS, that seems like a reasonable plan. It would have
48 bits of usable inode number space, ~440000 file creates per second
for 20 years with up to 65535 snapshots, the same number of bits that
ZFS has in its inodes.
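One possible reading of that packing, as a self-contained sketch (the
16/48 split and the pre-remapped 16-bit subvol id are my assumptions):

#include <stdint.h>

static uint16_t bitrev16(uint16_t x)
{
	uint16_t r = 0;
	int i;

	for (i = 0; i < 16; i++, x >>= 1)
		r = (r << 1) | (x & 1);
	return r;
}

/* mapped_subvol is the subvol id after remapping into 16 bits */
static uint64_t pack_ino(uint16_t mapped_subvol, uint64_t ino)
{
	return ((uint64_t)bitrev16(mapped_subvol) << 48) |
	       (ino & ((1ULL << 48) - 1));
}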
Once that subvol ID mapping tree exists, it could also map subvol inode
numbers to globally unique numbers. Each tree item would contain a map of
[subvol_inode1..subvol_inode2] that maps the inode numbers in the subvol
into the global inode number space at [global_inode1..global_inode2].
When a snapshot is created, the snapshot gets a copy of all the origin
subvol's inode ranges, but with newly allocated base offsets. If the
original subvol needs new inodes, it gets a new chunk from the global
inode allocator. If the snapshot subvol needs new inodes, it gets a
different new chunk from the global allocator. The minimum chunk might
be a million inodes or so to avoid having to allocate new chunks all the
time, but not so high to make the code completely untested (or testers
just set the minchunk to 1000 inodes).
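As a sketch of what such a tree item might hold (all names invented for
illustration):

#include <stdint.h>

/* one chunk: maps a run of subvol-local inode numbers into the
 * globally unique inode number space */
struct subvol_ino_range {
	uint64_t subvol_id;
	uint64_t local_start;	/* first local inode in the chunk */
	uint64_t global_start;	/* where the chunk lands globally */
	uint64_t count;		/* chunk length, e.g. ~1 million */
};

static uint64_t map_ino(const struct subvol_ino_range *r, uint64_t ino)
{
	return r->global_start + (ino - r->local_start);
}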
The question I have (and why I didn't propose this earlier) is whether
this scheme is any real improvement over dividing the subvol:inode space
by bit packing. If you have one subvol that has 3 billion existing inodes
in it, every snapshot of that subvol is going to burn up roughly 2^-32 of
the available globally unique inode numbers. If we burn 3 billion inodes
instead of 4 billion per subvol, it only gets 25% more lifespan for the
filesystem, and the allocation of unique inode spaces and tracking inode
space usage will add cost to every single file creation and snapshot
operation. If your oldest/biggest subvol only has a million inodes in
it, all of the above is pure cost: you can create billions of snapshots,
never repeat any object IDs, and never worry about running out.
I'd want to see cost/benefit simulations of:
    this plan,
    the simpler but less efficient bit-packing plan,
    'cp -a --reflink' to a new subvol and start over every 20 years
    when inodes run out,
    and online garbage-collection/renumbering schemes that allow
    users to schedule the inode renumbering costs in overnight
    batches instead of on every inode create.
> The inode number becomes just another piece of metadata stored in the
> inode.
> struct btrfs_inode_item has four spare u64s, so we could use one of
> those.
> struct btrfs_dir_item would need to store the inode number too. What
> is location.offset used for? Would a diritem ever point to a non-zero
> offset? Could the 'offset' be used to store the inode number?
Offset is used to identify subvol roots at the moment, but so far that
means only values 0 and UINT64_MAX are used. It seems possible to treat
all other values as inode numbers. Don't quote me on that--I'm not an
expert on this structure.
> This could even be added to existing filesystems I think. It might not
> be easy to re-use inode numbers smaller than the largest at the time the
> extension was added, but newly added inode numbers could be reused after
> they were deleted.
We'd need a structure to track reusable inode numbers and it would have to
be kept up to date to work, so this feature would necessarily come with an
incompat bit. Whether you borrow bits from existing structures or make
extended new structures doesn't matter at that point, though obviously
for something as common as inodes it would be bad to make them bigger.
Some of the btrfs userspace API uses inode numbers, but unless I missed
something, it could all be converted to use object numbers directly
instead.
> Just a thought...
>
> NeilBrown