On Thu, 2022-09-08 at 10:40 +1000, NeilBrown wrote:
> On Thu, 08 Sep 2022, Jeff Layton wrote:
> > On Wed, 2022-09-07 at 13:55 +0000, Trond Myklebust wrote:
> > > On Wed, 2022-09-07 at 09:12 -0400, Jeff Layton wrote:
> > > > On Wed, 2022-09-07 at 08:52 -0400, J. Bruce Fields wrote:
> > > > > On Wed, Sep 07, 2022 at 08:47:20AM -0400, Jeff Layton wrote:
> > > > > > On Wed, 2022-09-07 at 21:37 +1000, NeilBrown wrote:
> > > > > > > On Wed, 07 Sep 2022, Jeff Layton wrote:
> > > > > > > > +The change to \fIstatx.stx_ino_version\fP is not atomic with
> > > > > > > > respect to the
> > > > > > > > +other changes in the inode. On a write, for instance, the
> > > > > > > > i_version is usually
> > > > > > > > +incremented before the data is copied into the pagecache.
> > > > > > > > Therefore it is
> > > > > > > > +possible to see a new i_version value while a read still
> > > > > > > > shows the old data.
> > > > > > >
> > > > > > > Doesn't that make the value useless?
> > > > > > >
> > > > > >
> > > > > > No, I don't think so. It's only really useful for comparing to an
> > > > > > older
> > > > > > sample anyway. If you do "statx; read; statx" and the value
> > > > > > hasn't
> > > > > > changed, then you know that things are stable.
> > > > >
> > > > > I don't see how that helps. It's still possible to get:
> > > > >
> > > > >                 reader          writer
> > > > >                 ------          ------
> > > > >                                 i_version++
> > > > >                 statx
> > > > >                 read
> > > > >                 statx
> > > > >                                 update page cache
> > > > >
> > > > > right?
> > > > >
> > > >
> > > > Yeah, I suppose so -- the statx wouldn't necessitate any locking. In
> > > > that case, maybe this is useless for anything other than testing purposes
> > > > and userland NFS servers.
> > > >
> > > > If so, would it be better not to consume a statx field for this? What
> > > > could we use as an alternate interface? ioctl? Some sort of global
> > > > virtual xattr? It does need to be something per-inode.
> > >
> > > I don't see how a non-atomic change attribute is remotely useful even
> > > for NFS.
> > >
> > > The main problem is not so much the above (although NFS clients are
> > > vulnerable to that too) but the behaviour w.r.t. directory changes.
> > >
> > > If the server can't guarantee that file/directory/... creation and
> > > unlink are atomically recorded with change attribute updates, then the
> > > client has to always assume that the server is lying, and that it has
> > > to revalidate all its caches anyway. Cue endless readdir/lookup/getattr
> > > requests after each and every directory modification in order to check
> > > that some other client didn't also sneak in a change of their own.
> > >
> >
> > We generally hold the parent dir's inode->i_rwsem exclusively over most
> > important directory changes, and the times/i_version are also updated
> > while holding it. What we don't do is serialize reads of this value vs.
> > the i_rwsem, so you could see new directory contents alongside an old
> > i_version. Maybe we should be taking it for read when we query it on a
> > directory?
>
> We do hold i_rwsem today. I'm working on changing that. Preserving
> atomic directory changeinfo will be a challenge. The only mechanism I
> can think of is to pass a "u64*" to all the directory modification ops,
> and they fill in the version number at the point where it is incremented
> (inode_maybe_inc_iversion_return()). The (nfsd) caller assumes that
> "before" was one less than "after". If you don't want to internally
> require single increments, then you would need to pass a 'u64 [2]' to
> get two iversions back.
>
That's a major redesign of what the i_version counter is today. It may
very well end up being needed, but that's going to touch a lot of stuff
in the VFS. Are you planning to do that as a part of your locking
changes?
> >
> > Achieving atomicity with file writes though is another matter entirely.
> > I'm not sure that's even doable or how to approach it if so.
> > Suggestions?
>
> Call inode_maybe_inc_iversion(mapping->host) in __folio_mark_dirty() ??
>
Writes can cover multiple folios so we'd be doing several increments per
write. Maybe that's ok? Should we also be updating the ctime at that
point as well?
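Presumably you mean something like this (untested sketch; the exact
placement inside __folio_mark_dirty() and the IS_I_VERSION() guard are my
guesses):

        /* in __folio_mark_dirty(): mapping->host is the backing inode;
         * only SB_I_VERSION filesystems care about the counter */
        if (mapping->host && IS_I_VERSION(mapping->host))
                inode_maybe_inc_iversion(mapping->host, false);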
Fetching the i_version under the i_rwsem is probably sufficient to fix
this though. Most of the write_iter ops already bump the i_version while
holding that lock, so this wouldn't add any extra locking to the write
codepaths.
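IOW, something like this when filling in the change attribute (sketch
only; whether taking the shared lock in a stat-heavy workload is
acceptable is exactly the question):

        u64 version;

        /* sample the counter with i_rwsem held for read, so it can't
         * race with a write_iter or directory op that bumps it under
         * the exclusive lock */
        inode_lock_shared(inode);
        version = inode_query_iversion(inode);
        inode_unlock_shared(inode);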
--
Jeff Layton <[email protected]>
On Thu, 08 Sep 2022, Jeff Layton wrote:
> On Thu, 2022-09-08 at 10:40 +1000, NeilBrown wrote:
> > On Thu, 08 Sep 2022, Jeff Layton wrote:
> > > On Wed, 2022-09-07 at 13:55 +0000, Trond Myklebust wrote:
> > > > On Wed, 2022-09-07 at 09:12 -0400, Jeff Layton wrote:
> > > > > On Wed, 2022-09-07 at 08:52 -0400, J. Bruce Fields wrote:
> > > > > > On Wed, Sep 07, 2022 at 08:47:20AM -0400, Jeff Layton wrote:
> > > > > > > On Wed, 2022-09-07 at 21:37 +1000, NeilBrown wrote:
> > > > > > > > On Wed, 07 Sep 2022, Jeff Layton wrote:
> > > > > > > > > +The change to \fIstatx.stx_ino_version\fP is not atomic with
> > > > > > > > > respect to the
> > > > > > > > > +other changes in the inode. On a write, for instance, the
> > > > > > > > > i_version is usually
> > > > > > > > > +incremented before the data is copied into the pagecache.
> > > > > > > > > Therefore it is
> > > > > > > > > +possible to see a new i_version value while a read still
> > > > > > > > > shows the old data.
> > > > > > > >
> > > > > > > > Doesn't that make the value useless?
> > > > > > > >
> > > > > > >
> > > > > > > No, I don't think so. It's only really useful for comparing to an
> > > > > > > older
> > > > > > > sample anyway. If you do "statx; read; statx" and the value
> > > > > > > hasn't
> > > > > > > changed, then you know that things are stable.
> > > > > >
> > > > > > I don't see how that helps. It's still possible to get:
> > > > > >
> > > > > > reader writer
> > > > > > ------ ------
> > > > > > i_version++
> > > > > > statx
> > > > > > read
> > > > > > statx
> > > > > > update page cache
> > > > > >
> > > > > > right?
> > > > > >
> > > > >
> > > > > Yeah, I suppose so -- the statx wouldn't necessitate any locking. In
> > > > > that case, maybe this is useless for anything other than testing purposes
> > > > > and userland NFS servers.
> > > > >
> > > > > If so, would it be better not to consume a statx field for this? What
> > > > > could we use as an alternate interface? ioctl? Some sort of global
> > > > > virtual xattr? It does need to be something per-inode.
> > > >
> > > > I don't see how a non-atomic change attribute is remotely useful even
> > > > for NFS.
> > > >
> > > > The main problem is not so much the above (although NFS clients are
> > > > vulnerable to that too) but the behaviour w.r.t. directory changes.
> > > >
> > > > If the server can't guarantee that file/directory/... creation and
> > > > unlink are atomically recorded with change attribute updates, then the
> > > > client has to always assume that the server is lying, and that it has
> > > > to revalidate all its caches anyway. Cue endless readdir/lookup/getattr
> > > > requests after each and every directory modification in order to check
> > > > that some other client didn't also sneak in a change of their own.
> > > >
> > >
> > > We generally hold the parent dir's inode->i_rwsem exclusively over most
> > > important directory changes, and the times/i_version are also updated
> > > while holding it. What we don't do is serialize reads of this value vs.
> > > the i_rwsem, so you could see new directory contents alongside an old
> > > i_version. Maybe we should be taking it for read when we query it on a
> > > directory?
> >
> > We do hold i_rwsem today. I'm working on changing that. Preserving
> > atomic directory changeinfo will be a challenge. The only mechanism I
> > can think of is to pass a "u64*" to all the directory modification ops,
> > and they fill in the version number at the point where it is incremented
> > (inode_maybe_inc_iversion_return()). The (nfsd) caller assumes that
> > "before" was one less than "after". If you don't want to internally
> > require single increments, then you would need to pass a 'u64 [2]' to
> > get two iversions back.
> >
>
> That's a major redesign of what the i_version counter is today. It may
> very well end up being needed, but that's going to touch a lot of stuff
> in the VFS. Are you planning to do that as a part of your locking
> changes?
>
"A major design"? How? The "one less than" might be, but allowing a
directory morphing op to fill in a "u64 [2]" is just a new interface to
existing data. One that allows fine grained atomicity.
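For concreteness, something of this shape (sketch only; "foofs" is a
made-up filesystem, the op signature is abbreviated, and
inode_maybe_inc_iversion_return() is the helper I'm hand-waving about
above, which doesn't exist yet):

        /* hypothetical: the directory-morphing op fills in the counter
         * values around its own increment, at the point where it holds
         * whatever lock makes the change and the bump atomic */
        static int foofs_mkdir(struct inode *dir, struct dentry *dentry,
                               umode_t mode, u64 changeid[2])
        {
                int err;

                changeid[0] = inode_peek_iversion(dir);
                err = foofs_do_mkdir(dir, dentry, mode);
                changeid[1] = inode_maybe_inc_iversion_return(dir);
                return err;
        }

nfsd could then use the pair directly as the NFSv4 change_info4
before/after, with the atomic flag set.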
This would actually be really good for NFS. nfs_mkdir (for example)
could easily have access to the atomic pre/post changeids provided by
the server, and so could easily provide them to nfsd.
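(For the re-export case that's just a copy of the change_info4 the
server already returns on CREATE; roughly, and with field names from
memory:)

        /* hypothetical plumbing in nfs_mkdir(): the server's
         * change_info4 is an atomic before/after pair for the parent
         * directory, so hand it straight through */
        changeid[0] = cinfo->before;
        changeid[1] = cinfo->after;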
I'm not planning to do this as part of my locking changes. In the first
instance only NFS changes behaviour, and it doesn't provide atomic
changeids, so there is no loss of functionality.
When some other filesystem wants to opt in to shared locking on
directories, that would be the time to push through a better interface.
> > >
> > > Achieving atomicity with file writes though is another matter entirely.
> > > I'm not sure that's even doable or how to approach it if so.
> > > Suggestions?
> >
> > Call inode_maybe_inc_iversion(mapping->host) in __folio_mark_dirty() ??
> >
>
> Writes can cover multiple folios so we'd be doing several increments per
> write. Maybe that's ok? Should we also be updating the ctime at that
> point as well?
You would only do several increments if something was reading the value
concurrently, and then you really should do several increments for
correctness.
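That's what inode_maybe_inc_iversion() already gives us. Simplified, the
logic in include/linux/iversion.h is:

        /* the low bit of i_version is a "has been queried" flag; the
         * increment is skipped entirely while that flag is clear */
        cur = inode_peek_iversion_raw(inode);
        do {
                if (!force && !(cur & I_VERSION_QUERIED))
                        return false;   /* nobody looked, no bump */
                new = (cur & ~I_VERSION_QUERIED) + I_VERSION_INCREMENT;
        } while (!atomic64_try_cmpxchg(&inode->i_version, &cur, new));
        return true;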
>
> Fetching the i_version under the i_rwsem is probably sufficient to fix
> this though. Most of the write_iter ops already bump the i_version while
> holding that lock, so this wouldn't add any extra locking to the write
> codepaths.
Adding new locking doesn't seem like a good idea. It's bound to have
performance implications. It may well end up serialising the directory
op that I'm currently trying to make parallelisable.
Thanks,
NeilBrown
>
> --
> Jeff Layton <[email protected]>
>