2022-09-20 10:27:31

by Jeff Layton

Subject: Re: [man-pages RFC PATCH v4] statx, inode: document the new STATX_INO_VERSION field

On Tue, 2022-09-20 at 10:16 +1000, Dave Chinner wrote:
> On Mon, Sep 19, 2022 at 09:13:00AM -0400, Jeff Layton wrote:
> > On Mon, 2022-09-19 at 09:53 +1000, Dave Chinner wrote:
> > > On Fri, Sep 16, 2022 at 11:11:34AM -0400, Jeff Layton wrote:
> > > > On Fri, 2022-09-16 at 07:36 -0400, Jeff Layton wrote:
> > > > > On Fri, 2022-09-16 at 02:54 -0400, Theodore Ts'o wrote:
> > > > > > On Fri, Sep 16, 2022 at 08:23:55AM +1000, NeilBrown wrote:
> > > > > > > > > If the answer is that 'all values change', then why store the crash
> > > > > > > > > counter in the inode at all? Why not just add it as an offset when
> > > > > > > > > you're generating the user-visible change attribute?
> > > > > > > > >
> > > > > > > > > i.e. statx.change_attr = inode->i_version + (crash counter * offset)
> > > > > >
> > > > > > I had suggested just hashing the crash counter with the file system's
> > > > > > on-disk i_version number, which is essentially what you are suggesting.
> > > > > >
> > > > > > > > Yes, if we plan to ensure that all the change attrs change after a
> > > > > > > > crash, we can do that.
> > > > > > > >
> > > > > > > > So what would make sense for an offset? Maybe 2**12? One would hope that
> > > > > > > > there wouldn't be more than 4k increments before one of them made it to
> > > > > > > > disk. OTOH, maybe that can happen with teeny-tiny writes.
> > > > > > >
> > > > > > > Leave it up to the filesystem to decide. The VFS and/or NFSD should
> > > > > > > not have any part in calculating the i_version. It should be entirely
> > > > > > > in the filesystem - though support code could be provided if common
> > > > > > > patterns exist across filesystems.
> > > > > >
> > > > > > Oh, *heck* no. This parameter is for the NFS implementation to
> > > > > > decide, because it's NFS's caching algorithms which are at stake here.
> > > > > >
> > > > > > As the file system maintainer, I had offered to make an on-disk
> > > > > > "crash counter" which would get updated when the journal had gotten
> > > > > > replayed, in addition to the on-disk i_version number. This will be
> > > > > > available for the Linux implementation of NFSD to use, but that's up
> > > > > > to *you* to decide how you want to use them.
> > > > > >
> > > > > > I was perfectly happy with hashing the crash counter and the i_version
> > > > > > because I had assumed that not *that* much stuff was going to be
> > > > > > cached, and so invalidating all of the caches in the unusual case
> > > > > > where there was a crash was acceptable. After all it's a !@#?!@
> > > > > > cache. Caches sometimes get invalidated. "That is the order of
> > > > > > things." (as Ramata'Klan once said in "Rocks and Shoals")
> > > > > >
> > > > > > But if people expect that multiple TB's of data is going to be stored;
> > > > > > that cache invalidation is unacceptable; and that an itsy-weeny chance
> > > > > > of false negative failures which might cause data corruption might be
> > > > > > an acceptable tradeoff, hey, that's for the system which is providing
> > > > > > caching semantics to determine.
> > > > > >
> > > > > > PLEASE don't put this tradeoff on the file system authors; I would
> > > > > > much prefer to leave this tradeoff in the hands of the system which is
> > > > > > trying to do the caching.
> > > > > >
> > > > >
> > > > > Yeah, if we were designing this from scratch, I might agree with leaving
> > > > > more up to the filesystem, but the existing users all have pretty much
> > > > > the same needs. I'm going to plan to try to keep most of this in the
> > > > > common infrastructure defined in iversion.h.
> > > > >
> > > > > Ted, for the ext4 crash counter, what wordsize were you thinking? I
> > > > > doubt we'll be able to use much more than 32 bits so a larger integer is
> > > > > probably not worthwhile. There are several holes in struct super_block
> > > > > (at least on x86_64), so adding this field to the generic structure
> > > > > needn't grow it.
> > > >
> > > > That said, now that I've taken a swipe at implementing this, I need more
> > > > information than just the crash counter. We need to multiply the crash
> > > > counter with a reasonable estimate of the maximum number of individual
> > > > writes that could occur between an i_version being incremented and that
> > > > value making it to the backing store.
> > > >
> > > > IOW, given a write that bumps the i_version to X, how many more write
> > > > calls could race in before X makes it to the platter? I took a SWAG and
> > > > said 4k in an earlier email, but I don't really have a way to know, and
> > > > that could vary wildly with different filesystems and storage.
> > > >
> > > > What I'd like to see is this in struct super_block:
> > > >
> > > > u32 s_version_offset;
> > >
> > > u64 s_version_salt;
> > >
> >
> > IDK...it _is_ an offset since we're folding it in with addition, and it
> > has a real meaning. Filesystems do need to be cognizant of that fact, I
> > think.
> >
> > Also does anyone have a preference on doing this vs. a get_version_salt
> > or get_version_offset sb operation? I figured the value should be mostly
> > static so it'd be nice to avoid an operation for it.
> >
> > > > ...and then individual filesystems can calculate:
> > > >
> > > > crash_counter * max_number_of_writes
> > > >
> > > > and put the correct value in there at mount time.
> > >
> > > Other filesystems might not have a crash counter but have other
> > > information that can be substituted, like a mount counter or a
> > > global change sequence number that is guaranteed to increment from
> > > one mount to the next.
> > >
> >
> > The problem there is that you're going to cause the invalidation of all
> > of the NFS client's cached regular files, even on clean server reboots.
> > That's not a desirable outcome.
>
> Stop saying "anything less than perfect is unacceptable". I *know*
> that changing the salt on every mount might result in less than
> perfect results, but the fact is that a -false negative- is a data
> corruption event, whilst a false positive is not. False positives
> may not be desirable, but false negatives are *not acceptable at
> all*.
>
> XFS can give you a guarantee of no false negatives right now with no
> on-disk format changes necessary, but it comes with the downside of
> false positives. That's not the end of the world, and it gives NFS
> the functionality it needs immediately and allows us time to add
> purpose-built on-disk functionality that gives NFS exactly what it
> wants. The reality is that this purpose-built on-disk change will
> take years to roll out to production systems, whilst using what we
> have now is just a kernel patch and upgrade away....
>
> Changing on-disk metadata formats takes time, no matter how simple
> the change, and this timeframe is not something the NFS server
> actually controls.
>
> But there is a way for the NFS server to define and control its own
> on-disk persistent metadata: *extended attributes*.
>
> How about we set a "crash" extended attribute on the root of an NFS
> export when the filesystem is exported, and then remove it when the
> filesystem is unexported.
>
> This gives the NFS server its own persistent attribute that tells
> it whether the filesystem was *unexported* cleanly. If the exportfs
> code calls syncfs() before the xattr is removed, then it guarantees
> that everything the NFS clients have written and modified will be
> exactly present the next time the filesystem is exported. If the
> "crash" xattr is present when the filesystem is exported, then it
> wasn't cleanly synced before it was taken out of service, and so
> something may have been lost and the "crash counter" needs to be
> bumped.
>
> Yes, the "crash counter" is held in another xattr, so that it is
> persistent across crash and mount/unmount cycles. If the crash
> xattr is present, the NFSD reads, bumps and writes the crash counter
> xattr, and uses the new value for the life of that export. If the
> crash xattr is not present, then it just reads the counter xattr and
> uses it unchanged.
>
> IOWs, the NFS server can define its own on-disk persistent metadata
> using xattrs, and you don't need local filesystems to be modified at
> all. You can add the crash epoch into the change attr that is sent
> to NFS clients without having to change the VFS i_version
> implementation at all.
>
> This whole problem is solvable entirely within the NFS server code,
> and we don't need to change local filesystems at all. NFS can
> control the persistence and format of the xattrs it uses, and it
> does not need new custom on-disk format changes from every
> filesystem to support this new application requirement.
>
> At this point, NFS server developers don't need to care what the
> underlying filesystem format provides - the xattrs provide the crash
> detection and enumeration the NFS server functionality requires.
>

Doesn't the filesystem already detect when it's been mounted after an
unclean shutdown? I'm not sure what good we'll get out of bolting this
scheme onto the NFS server, when the filesystem could just as easily
give us this info.

In any case, the main problem at this point is not so much in detecting
when there has been an unclean shutdown, but rather what to do when
there is one. We need to advance the presented change attributes
beyond the largest possible one that may have been handed out prior to
the crash.

How do we determine what that offset should be? Your last email
suggested that there really is no limit to the number of i_version bumps
that can happen in memory before one of them makes it to disk. What can
we do to address that?
--
Jeff Layton <[email protected]>


2022-09-21 00:03:48

by Dave Chinner

Subject: Re: [man-pages RFC PATCH v4] statx, inode: document the new STATX_INO_VERSION field

On Tue, Sep 20, 2022 at 06:26:05AM -0400, Jeff Layton wrote:
> On Tue, 2022-09-20 at 10:16 +1000, Dave Chinner wrote:
> > IOWs, the NFS server can define its own on-disk persistent metadata
> > using xattrs, and you don't need local filesystems to be modified at
> > all. You can add the crash epoch into the change attr that is sent
> > to NFS clients without having to change the VFS i_version
> > implementation at all.
> >
> > This whole problem is solvable entirely within the NFS server code,
> > and we don't need to change local filesystems at all. NFS can
> > control the persistence and format of the xattrs it uses, and it
> > does not need new custom on-disk format changes from every
> > filesystem to support this new application requirement.
> >
> > At this point, NFS server developers don't need to care what the
> > underlying filesystem format provides - the xattrs provide the crash
> > detection and enumeration the NFS server functionality requires.
> >
>
> Doesn't the filesystem already detect when it's been mounted after an
> unclean shutdown?

Not every filesystem will be able to guarantee unclean shutdown
detection at the next mount. That's the whole problem - NFS
developers are asking for something that cannot be provided as
generic functionality by individual filesystems, so the NFS server
application is going to have to work around any filesystem that
cannot provide the information it needs.

e.g. ext4 has its journal replayed by the userspace tools prior
to mount, so when it then gets mounted by the kernel it's seen as a
clean mount.

If we shut an XFS filesystem down due to a filesystem corruption or
failed IO to the journal code, the kernel might not be able to
replay the journal on mount (i.e. it is corrupt). We then run
xfs_repair, and that fixes the corruption issue and -cleans the
log-. When we next mount the filesystem, it results in a _clean
mount_, and the kernel filesystem code cannot signal to NFS that an
unclean shutdown occurred and so it should bump its crash counter.

IOWs, this whole "filesystems need to tell NFS about crashes"
propagates all the way through *every filesystem tool chain*, not
just the kernel mount code. And we most certainly don't control
every 3rd party application that walks around in the filesystem on
disk format, and so there are -zero- guarantees that the kernel
filesystem mount code can give that an unclean shutdown occurred
prior to the current mount.

And then for niche NFS server applications (like transparent
fail-over between HA NFS servers) there are even more rigid
constraints on NFS change attributes. And you're asking local
filesystems to know about these application constraints and bake
them into their on-disk format again.

This whole discussion has come about because we baked certain
behaviour for NFS into the on-disk format many, many years ago, and
it's only now that it is considered inadequate for *new* NFS
application related functionality (e.g. fscache integration and
cache validity across server side mount cycles).

We've learnt a valuable lesson from this: don't bake application
specific persistent metadata requirements into the on-disk format
because when the application needs to change, it requires every
filesystem that supports that application level functionality
to change their on-disk formats...

> I'm not sure what good we'll get out of bolting this
> scheme onto the NFS server, when the filesystem could just as easily
> give us this info.

The xattr scheme guarantees the correct application behaviour that the NFS
server requires, all at the NFS application level without requiring
local filesystems to support the NFS requirements in their on-disk
format. The NFS server controls the format and versioning of its
on-disk persistent metadata (i.e. the xattrs it uses) and so any
changes to the application level requirements of that functionality
are now completely under the control of the application.

i.e. the application gets to manage version control, backwards and
forwards compatibility of its persistent metadata, etc. What you
are asking is that every local filesystem takes responsibility for
managing the long term persistent metadata that only NFS requires.
It's more complex to do this at the filesystem level, and we have to
replicate the same work for every filesystem that is going to
support this on-disk functionality.

Using xattrs means the functionality is implemented once, it's
common across all local filesystems, and no exportable filesystem
needs to know anything about it as it's all self-contained in the
NFS server code. The code is smaller, easier to maintain, consistent
across all systems, easy to test, etc.

It also can be implemented and rolled out *immediately* to all
existing supported NFS server implementations, without having to
wait months/years (or never!) for local filesystem on-disk format
changes to roll out to production systems.

Asking individual filesystems to implement application specific
persistent metadata is a *last resort* and should only be done if
correctness or performance cannot be obtained in any other way.

So, yeah, the only sane direction to take here is to use xattrs to
store this NFS application level information. It's less work for
everyone, and in the long term it means when the NFS application
requirements change again, we don't need to modify the on-disk
format of multiple local filesystems.

> In any case, the main problem at this point is not so much in detecting
> when there has been an unclean shutdown, but rather what to do when
> there is one. We need to advance the presented change attributes
> beyond the largest possible one that may have been handed out prior to
> the crash.

Sure, but you're missing my point: by using xattrs for detection,
you don't need to involve anything to do with local filesystems at
all.

> How do we determine what that offset should be? Your last email
> suggested that there really is no limit to the number of i_version bumps
> that can happen in memory before one of them makes it to disk. What can
> we do to address that?

<shrug>

I'm just pointing out problems I see when defining this as behaviour
for on-disk format purposes. If we define it as part of the on-disk
format, then we have to be concerned about how it may be used
outside the scope of just the NFS server application.

However, if NFS keeps this metadata and functionality entirely
contained at the application level via xattrs, I really don't care
what algorithm NFS developers decide to use for their crash
sequencing. It's not my concern at this point, and that's precisely
why NFS should be using xattrs for this NFS specific functionality.

-Dave.
--
Dave Chinner
[email protected]