2021-09-23 13:25:57

by Chengguang Xu

Subject: [RFC PATCH v5 06/10] ovl: implement overlayfs' ->write_inode operation

Implement overlayfs' ->write_inode to sync dirty data
and redirty overlayfs' inode if necessary.

Signed-off-by: Chengguang Xu <[email protected]>
---
fs/overlayfs/super.c | 30 ++++++++++++++++++++++++++++++
1 file changed, 30 insertions(+)

diff --git a/fs/overlayfs/super.c b/fs/overlayfs/super.c
index 2ab77adf7256..cddae3ca2fa5 100644
--- a/fs/overlayfs/super.c
+++ b/fs/overlayfs/super.c
@@ -412,12 +412,42 @@ static void ovl_evict_inode(struct inode *inode)
 	clear_inode(inode);
 }
 
+static int ovl_write_inode(struct inode *inode,
+			   struct writeback_control *wbc)
+{
+	struct ovl_fs *ofs = inode->i_sb->s_fs_info;
+	struct inode *upper = ovl_inode_upper(inode);
+	unsigned long iflag = 0;
+	int ret = 0;
+
+	if (!upper)
+		return 0;
+
+	if (!ovl_should_sync(ofs))
+		return 0;
+
+	if (upper->i_sb->s_op->write_inode)
+		ret = upper->i_sb->s_op->write_inode(inode, wbc);
+
+	if (mapping_writably_mapped(upper->i_mapping) ||
+	    mapping_tagged(upper->i_mapping, PAGECACHE_TAG_WRITEBACK))
+		iflag |= I_DIRTY_PAGES;
+
+	iflag |= upper->i_state & I_DIRTY_ALL;
+
+	if (iflag)
+		ovl_mark_inode_dirty(inode);
+
+	return ret;
+}
+
 static const struct super_operations ovl_super_operations = {
 	.alloc_inode	= ovl_alloc_inode,
 	.free_inode	= ovl_free_inode,
 	.destroy_inode	= ovl_destroy_inode,
 	.drop_inode	= generic_delete_inode,
 	.evict_inode	= ovl_evict_inode,
+	.write_inode	= ovl_write_inode,
 	.put_super	= ovl_put_super,
 	.sync_fs	= ovl_sync_fs,
 	.statfs		= ovl_statfs,
--
2.27.0

2021-10-07 09:26:53

by Miklos Szeredi

Subject: Re: [RFC PATCH v5 06/10] ovl: implement overlayfs' ->write_inode operation

On Thu, 23 Sept 2021 at 15:08, Chengguang Xu <[email protected]> wrote:
>
> Implement overlayfs' ->write_inode to sync dirty data
> and redirty overlayfs' inode if necessary.
>
> Signed-off-by: Chengguang Xu <[email protected]>
> ---
> fs/overlayfs/super.c | 30 ++++++++++++++++++++++++++++++
> 1 file changed, 30 insertions(+)
>
> diff --git a/fs/overlayfs/super.c b/fs/overlayfs/super.c
> index 2ab77adf7256..cddae3ca2fa5 100644
> --- a/fs/overlayfs/super.c
> +++ b/fs/overlayfs/super.c
> @@ -412,12 +412,42 @@ static void ovl_evict_inode(struct inode *inode)
> clear_inode(inode);
> }
>
> +static int ovl_write_inode(struct inode *inode,
> + struct writeback_control *wbc)
> +{
> + struct ovl_fs *ofs = inode->i_sb->s_fs_info;
> + struct inode *upper = ovl_inode_upper(inode);
> + unsigned long iflag = 0;
> + int ret = 0;
> +
> + if (!upper)
> + return 0;
> +
> + if (!ovl_should_sync(ofs))
> + return 0;
> +
> + if (upper->i_sb->s_op->write_inode)
> + ret = upper->i_sb->s_op->write_inode(inode, wbc);

Where is page writeback on upper inode triggered?

Thanks,
Miklos

2021-10-07 09:53:47

by Jan Kara

Subject: Re: [RFC PATCH v5 06/10] ovl: implement overlayfs' ->write_inode operation

On Thu 23-09-21 21:08:10, Chengguang Xu wrote:
> Implement overlayfs' ->write_inode to sync dirty data
> and redirty overlayfs' inode if necessary.
>
> Signed-off-by: Chengguang Xu <[email protected]>

...

> +static int ovl_write_inode(struct inode *inode,
> + struct writeback_control *wbc)
> +{
> + struct ovl_fs *ofs = inode->i_sb->s_fs_info;
> + struct inode *upper = ovl_inode_upper(inode);
> + unsigned long iflag = 0;
> + int ret = 0;
> +
> + if (!upper)
> + return 0;
> +
> + if (!ovl_should_sync(ofs))
> + return 0;
> +
> + if (upper->i_sb->s_op->write_inode)
> + ret = upper->i_sb->s_op->write_inode(inode, wbc);
> +

I'm somewhat confused here. 'inode' is overlayfs inode AFAIU, so how is it
correct to pass it to ->write_inode function of upper filesystem? Shouldn't
you pass 'upper' there instead?

> + if (mapping_writably_mapped(upper->i_mapping) ||
> + mapping_tagged(upper->i_mapping, PAGECACHE_TAG_WRITEBACK))
> + iflag |= I_DIRTY_PAGES;
> +
> + iflag |= upper->i_state & I_DIRTY_ALL;

Also since you call ->write_inode directly upper->i_state won't be updated
to reflect that inode has been written out (I_DIRTY flags get cleared in
__writeback_single_inode()). So it seems to me overlayfs will keep writing
out upper inode until flush worker on upper filesystem also writes the
inode and clears the dirty flags? So you rather need to call something like
write_inode_now() that will handle the flag clearing and do writeback list
handling for you?

Honza

--
Jan Kara <[email protected]>
SUSE Labs, CR
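
A minimal sketch of the direction Jan suggests above, i.e. calling
write_inode_now() on the upper inode instead of invoking ->write_inode()
directly (illustrative only, not the posted patch): write_inode_now() goes
through writeback_single_inode(), which writes dirty pages, clears the
dirty flags and handles the writeback list state.

static int ovl_write_inode(struct inode *inode,
			   struct writeback_control *wbc)
{
	struct ovl_fs *ofs = inode->i_sb->s_fs_info;
	struct inode *upper = ovl_inode_upper(inode);

	if (!upper || !ovl_should_sync(ofs))
		return 0;

	/* Syncs data and inode; dirty flags are cleared inside. */
	return write_inode_now(upper, wbc->sync_mode == WB_SYNC_ALL);
}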

2021-10-07 12:31:37

by Chengguang Xu

Subject: Re: [RFC PATCH v5 06/10] ovl: implement overlayfs' ->write_inode operation

---- On Thu, 2021-10-07 17:23:06, Miklos Szeredi <[email protected]> wrote ----
> On Thu, 23 Sept 2021 at 15:08, Chengguang Xu <[email protected]> wrote:
> >
> > Implement overlayfs' ->write_inode to sync dirty data
> > and redirty overlayfs' inode if necessary.
> >
> > Signed-off-by: Chengguang Xu <[email protected]>
> > ---
> > fs/overlayfs/super.c | 30 ++++++++++++++++++++++++++++++
> > 1 file changed, 30 insertions(+)
> >
> > diff --git a/fs/overlayfs/super.c b/fs/overlayfs/super.c
> > index 2ab77adf7256..cddae3ca2fa5 100644
> > --- a/fs/overlayfs/super.c
> > +++ b/fs/overlayfs/super.c
> > @@ -412,12 +412,42 @@ static void ovl_evict_inode(struct inode *inode)
> > clear_inode(inode);
> > }
> >
> > +static int ovl_write_inode(struct inode *inode,
> > + struct writeback_control *wbc)
> > +{
> > + struct ovl_fs *ofs = inode->i_sb->s_fs_info;
> > + struct inode *upper = ovl_inode_upper(inode);
> > + unsigned long iflag = 0;
> > + int ret = 0;
> > +
> > + if (!upper)
> > + return 0;
> > +
> > + if (!ovl_should_sync(ofs))
> > + return 0;
> > +
> > + if (upper->i_sb->s_op->write_inode)
> > + ret = upper->i_sb->s_op->write_inode(inode, wbc);
>
> Where is page writeback on upper inode triggered?
>

Should pass upper inode instead of overlay inode here.

Thanks,
Chengguang


2021-10-07 13:01:49

by Chengguang Xu

Subject: Re: [RFC PATCH v5 06/10] ovl: implement overlayfs' ->write_inode operation

---- On Thu, 2021-10-07 17:01:57, Jan Kara <[email protected]> wrote ----
> On Thu 23-09-21 21:08:10, Chengguang Xu wrote:
> > Implement overlayfs' ->write_inode to sync dirty data
> > and redirty overlayfs' inode if necessary.
> >
> > Signed-off-by: Chengguang Xu <[email protected]>
>
> ...
>
> > +static int ovl_write_inode(struct inode *inode,
> > + struct writeback_control *wbc)
> > +{
> > + struct ovl_fs *ofs = inode->i_sb->s_fs_info;
> > + struct inode *upper = ovl_inode_upper(inode);
> > + unsigned long iflag = 0;
> > + int ret = 0;
> > +
> > + if (!upper)
> > + return 0;
> > +
> > + if (!ovl_should_sync(ofs))
> > + return 0;
> > +
> > + if (upper->i_sb->s_op->write_inode)
> > + ret = upper->i_sb->s_op->write_inode(inode, wbc);
> > +
>
> I'm somewhat confused here. 'inode' is overlayfs inode AFAIU, so how is it
> correct to pass it to ->write_inode function of upper filesystem? Shouldn't
> you pass 'upper' there instead?

That's right!

>
> > + if (mapping_writably_mapped(upper->i_mapping) ||
> > + mapping_tagged(upper->i_mapping, PAGECACHE_TAG_WRITEBACK))
> > + iflag |= I_DIRTY_PAGES;
> > +
> > + iflag |= upper->i_state & I_DIRTY_ALL;
>
> Also since you call ->write_inode directly upper->i_state won't be updated
> to reflect that inode has been written out (I_DIRTY flags get cleared in
> __writeback_single_inode()). So it seems to me overlayfs will keep writing
> out upper inode until flush worker on upper filesystem also writes the
> inode and clears the dirty flags? So you rather need to call something like
> write_inode_now() that will handle the flag clearing and do writeback list
> handling for you?
>

You're right that when calling ->write_inode directly, upper->i_state won't be updated.
However, I don't think overlayfs will keep writing out the upper inode, since ->write_inode
is only called when the overlay inode itself is marked dirty. Am I missing something?


Thanks,
Chengguang


2021-10-07 13:06:58

by Miklos Szeredi

Subject: Re: [RFC PATCH v5 06/10] ovl: implement overlayfs' ->write_inode operation

On Thu, 7 Oct 2021 at 14:28, Chengguang Xu <[email protected]> wrote:
>
> ---- On Thu, 2021-10-07 17:23:06, Miklos Szeredi <[email protected]> wrote ----
> > On Thu, 23 Sept 2021 at 15:08, Chengguang Xu <[email protected]> wrote:
> > >
> > > Implement overlayfs' ->write_inode to sync dirty data
> > > and redirty overlayfs' inode if necessary.
> > >
> > > Signed-off-by: Chengguang Xu <[email protected]>
> > > ---
> > > fs/overlayfs/super.c | 30 ++++++++++++++++++++++++++++++
> > > 1 file changed, 30 insertions(+)
> > >
> > > diff --git a/fs/overlayfs/super.c b/fs/overlayfs/super.c
> > > index 2ab77adf7256..cddae3ca2fa5 100644
> > > --- a/fs/overlayfs/super.c
> > > +++ b/fs/overlayfs/super.c
> > > @@ -412,12 +412,42 @@ static void ovl_evict_inode(struct inode *inode)
> > > clear_inode(inode);
> > > }
> > >
> > > +static int ovl_write_inode(struct inode *inode,
> > > + struct writeback_control *wbc)
> > > +{
> > > + struct ovl_fs *ofs = inode->i_sb->s_fs_info;
> > > + struct inode *upper = ovl_inode_upper(inode);
> > > + unsigned long iflag = 0;
> > > + int ret = 0;
> > > +
> > > + if (!upper)
> > > + return 0;
> > > +
> > > + if (!ovl_should_sync(ofs))
> > > + return 0;
> > > +
> > > + if (upper->i_sb->s_op->write_inode)
> > > + ret = upper->i_sb->s_op->write_inode(inode, wbc);
> >
> > Where is page writeback on upper inode triggered?
> >
>
> Should pass upper inode instead of overlay inode here.

That's true and it does seem to indicate lack of thorough testing.

However that wasn't what I was asking about. AFAICS ->write_inode()
won't start write back for dirty pages. Maybe I'm missing something,
but there it looks as if nothing will actually trigger writeback for
dirty pages in upper inode.

Thanks,
Miklos

2021-10-07 13:37:45

by Miklos Szeredi

Subject: Re: [RFC PATCH v5 06/10] ovl: implement overlayfs' ->write_inode operation

On Thu, 7 Oct 2021 at 15:10, Chengguang Xu <[email protected]> wrote:
> > However that wasn't what I was asking about. AFAICS ->write_inode()
> > won't start write back for dirty pages. Maybe I'm missing something,
> > but there it looks as if nothing will actually trigger writeback for
> > dirty pages in upper inode.
> >
>
> Actually, page writeback on upper inode will be triggered by overlayfs ->writepages and
> overlayfs' ->writepages will be called by vfs writeback function (i.e writeback_sb_inodes).

Right.

But wouldn't it be simpler to do this from ->write_inode()?

I.e. call write_inode_now() as suggested by Jan.

Also could just call mark_inode_dirty() on the overlay inode
regardless of the dirty flags on the upper inode since it shouldn't
matter and results in simpler logic.

Thanks,
Miklos


>
> Thanks,
> Chengguang
>
>
>

2021-10-07 15:49:45

by Jan Kara

Subject: Re: [RFC PATCH v5 06/10] ovl: implement overlayfs' ->write_inode operation

On Thu 07-10-21 20:26:36, Chengguang Xu wrote:
> ---- On Thu, 2021-10-07 17:01:57, Jan Kara <[email protected]> wrote ----
> >
> > > + if (mapping_writably_mapped(upper->i_mapping) ||
> > > + mapping_tagged(upper->i_mapping, PAGECACHE_TAG_WRITEBACK))
> > > + iflag |= I_DIRTY_PAGES;
> > > +
> > > + iflag |= upper->i_state & I_DIRTY_ALL;
> >
> > Also since you call ->write_inode directly upper->i_state won't be updated
> > to reflect that inode has been written out (I_DIRTY flags get cleared in
> > __writeback_single_inode()). So it seems to me overlayfs will keep writing
> > out upper inode until flush worker on upper filesystem also writes the
> > inode and clears the dirty flags? So you rather need to call something like
> > write_inode_now() that will handle the flag clearing and do writeback list
> > handling for you?
> >
>
> Calling ->write_inode directly upper->i_state won't be updated, however,
> I don't think overlayfs will keep writing out upper inode since
> ->write_inode will be called when only overlay inode itself marked dirty.
> Am I missing something?

Well, if upper->i_state is not updated, you are more or less guaranteed
upper->i_state & I_DIRTY_ALL != 0, and thus the overlay inode stays dirty as
well. So the next time writeback runs, you will see a dirty overlay inode and
write back the upper inode again even though it is not necessary.

Honza
--
Jan Kara <[email protected]>
SUSE Labs, CR

2021-10-07 15:53:09

by Chengguang Xu

Subject: Re: [RFC PATCH v5 06/10] ovl: implement overlayfs' ->write_inode operation

---- On Thu, 2021-10-07 22:46:46, Jan Kara <[email protected]> wrote ----
> On Thu 07-10-21 15:34:19, Miklos Szeredi wrote:
> > On Thu, 7 Oct 2021 at 15:10, Chengguang Xu <[email protected]> wrote:
> > > > However that wasn't what I was asking about. AFAICS ->write_inode()
> > > > won't start write back for dirty pages. Maybe I'm missing something,
> > > > but there it looks as if nothing will actually trigger writeback for
> > > > dirty pages in upper inode.
> > > >
> > >
> > > Actually, page writeback on upper inode will be triggered by overlayfs ->writepages and
> > > overlayfs' ->writepages will be called by vfs writeback function (i.e writeback_sb_inodes).
> >
> > Right.
> >
> > But wouldn't it be simpler to do this from ->write_inode()?
>
> You could but then you'd have to make sure you have I_DIRTY_SYNC always set
> when I_DIRTY_PAGES is set on the upper inode so that your ->write_inode()
> callback gets called. Overall I agree the logic would be probably simpler.
>

Hi Jan, Miklos

Thanks for your suggestions. Let me have a try in the next version.


Thanks,
Chengguang

2021-10-07 19:39:15

by Jan Kara

Subject: Re: [RFC PATCH v5 06/10] ovl: implement overlayfs' ->write_inode operation

On Thu 07-10-21 15:34:19, Miklos Szeredi wrote:
> On Thu, 7 Oct 2021 at 15:10, Chengguang Xu <[email protected]> wrote:
> > > However that wasn't what I was asking about. AFAICS ->write_inode()
> > > won't start write back for dirty pages. Maybe I'm missing something,
> > > but there it looks as if nothing will actually trigger writeback for
> > > dirty pages in upper inode.
> > >
> >
> > Actually, page writeback on upper inode will be triggered by overlayfs ->writepages and
> > overlayfs' ->writepages will be called by vfs writeback function (i.e writeback_sb_inodes).
>
> Right.
>
> But wouldn't it be simpler to do this from ->write_inode()?

You could, but then you'd have to make sure I_DIRTY_SYNC is always set
when I_DIRTY_PAGES is set on the upper inode, so that your ->write_inode()
callback gets called. Overall I agree the logic would probably be simpler.

Honza
--
Jan Kara <[email protected]>
SUSE Labs, CR
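
The series' ovl_mark_inode_dirty() helper is not shown in this excerpt; a
minimal version satisfying the invariant Jan describes (I_DIRTY_SYNC always
set alongside I_DIRTY_PAGES, so the ->write_inode() callback is guaranteed
to run) might look like this sketch:

static void ovl_mark_inode_dirty(struct inode *inode)
{
	/*
	 * Always dirty the inode for sync as well as for pages, so
	 * writeback is guaranteed to call ->write_inode() for it.
	 */
	__mark_inode_dirty(inode, I_DIRTY_SYNC | I_DIRTY_PAGES);
}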

2021-10-07 19:39:44

by Chengguang Xu

Subject: Re: [RFC PATCH v5 06/10] ovl: implement overlayfs' ->write_inode operation


---- On Thu, 2021-10-07 22:41:56, Jan Kara <[email protected]> wrote ----
> On Thu 07-10-21 20:26:36, Chengguang Xu wrote:
> > ---- On Thu, 2021-10-07 17:01:57, Jan Kara <[email protected]> wrote ----
> > >
> > > > + if (mapping_writably_mapped(upper->i_mapping) ||
> > > > + mapping_tagged(upper->i_mapping, PAGECACHE_TAG_WRITEBACK))
> > > > + iflag |= I_DIRTY_PAGES;
> > > > +
> > > > + iflag |= upper->i_state & I_DIRTY_ALL;
> > >
> > > Also since you call ->write_inode directly upper->i_state won't be updated
> > > to reflect that inode has been written out (I_DIRTY flags get cleared in
> > > __writeback_single_inode()). So it seems to me overlayfs will keep writing
> > > out upper inode until flush worker on upper filesystem also writes the
> > > inode and clears the dirty flags? So you rather need to call something like
> > > write_inode_now() that will handle the flag clearing and do writeback list
> > > handling for you?
> > >
> >
> > Calling ->write_inode directly upper->i_state won't be updated, however,
> > I don't think overlayfs will keep writing out upper inode since
> > ->write_inode will be called when only overlay inode itself marked dirty.
> > Am I missing something?
>
> Well, if upper->i_state is not updated, you are more or less guaranteed
> upper->i_state & I_DIRTY_ALL != 0 and thus even overlay inode stays dirty.
> And thus next time writeback runs you will see dirty overlay inode and
> writeback the upper inode again although it is not necessary.
>

Hi Jan,

Yes, I get the point now. Thanks for the explanation.


Thanks,
Chengguang

2021-10-07 19:52:30

by Miklos Szeredi

Subject: Re: [RFC PATCH v5 06/10] ovl: implement overlayfs' ->write_inode operation

On Thu, 7 Oct 2021 at 16:53, Chengguang Xu <[email protected]> wrote:
>
> ---- On Thu, 2021-10-07 22:46:46, Jan Kara <[email protected]> wrote ----
> > On Thu 07-10-21 15:34:19, Miklos Szeredi wrote:
> > > On Thu, 7 Oct 2021 at 15:10, Chengguang Xu <[email protected]> wrote:
> > > > > However that wasn't what I was asking about. AFAICS ->write_inode()
> > > > > won't start write back for dirty pages. Maybe I'm missing something,
> > > > > but there it looks as if nothing will actually trigger writeback for
> > > > > dirty pages in upper inode.
> > > > >
> > > >
> > > > Actually, page writeback on upper inode will be triggered by overlayfs ->writepages and
> > > > overlayfs' ->writepages will be called by vfs writeback function (i.e writeback_sb_inodes).
> > >
> > > Right.
> > >
> > > But wouldn't it be simpler to do this from ->write_inode()?
> >
> > You could but then you'd have to make sure you have I_DIRTY_SYNC always set
> > when I_DIRTY_PAGES is set on the upper inode so that your ->write_inode()
> > callback gets called. Overall I agree the logic would be probably simpler.
> >
>

And it's not just for simplicity. The I_SYNC logic in
writeback_single_inode() is actually necessary to prevent races between
writeback instances on a specific inode. I.e. if inode writeback is
started by the background wb, then syncfs needs to synchronize with it;
otherwise it will miss the inode, or worse, mess things up by calling
->write_inode() multiple times in parallel. So going through
writeback_single_inode() is actually a must AFAICS.

Thanks,
Miklos

2021-10-07 20:40:30

by Chengguang Xu

Subject: Re: [RFC PATCH v5 06/10] ovl: implement overlayfs' ->write_inode operation

---- On Thu, 2021-10-07 20:45:20, Miklos Szeredi <[email protected]> wrote ----
> On Thu, 7 Oct 2021 at 14:28, Chengguang Xu <[email protected]> wrote:
> >
> > ---- On Thu, 2021-10-07 17:23:06, Miklos Szeredi <[email protected]> wrote ----
> > > On Thu, 23 Sept 2021 at 15:08, Chengguang Xu <[email protected]> wrote:
> > > >
> > > > Implement overlayfs' ->write_inode to sync dirty data
> > > > and redirty overlayfs' inode if necessary.
> > > >
> > > > Signed-off-by: Chengguang Xu <[email protected]>
> > > > ---
> > > > fs/overlayfs/super.c | 30 ++++++++++++++++++++++++++++++
> > > > 1 file changed, 30 insertions(+)
> > > >
> > > > diff --git a/fs/overlayfs/super.c b/fs/overlayfs/super.c
> > > > index 2ab77adf7256..cddae3ca2fa5 100644
> > > > --- a/fs/overlayfs/super.c
> > > > +++ b/fs/overlayfs/super.c
> > > > @@ -412,12 +412,42 @@ static void ovl_evict_inode(struct inode *inode)
> > > > clear_inode(inode);
> > > > }
> > > >
> > > > +static int ovl_write_inode(struct inode *inode,
> > > > + struct writeback_control *wbc)
> > > > +{
> > > > + struct ovl_fs *ofs = inode->i_sb->s_fs_info;
> > > > + struct inode *upper = ovl_inode_upper(inode);
> > > > + unsigned long iflag = 0;
> > > > + int ret = 0;
> > > > +
> > > > + if (!upper)
> > > > + return 0;
> > > > +
> > > > + if (!ovl_should_sync(ofs))
> > > > + return 0;
> > > > +
> > > > + if (upper->i_sb->s_op->write_inode)
> > > > + ret = upper->i_sb->s_op->write_inode(inode, wbc);
> > >
> > > Where is page writeback on upper inode triggered?
> > >
> >
> > Should pass upper inode instead of overlay inode here.
>
> That's true and it does seem to indicate lack of thorough testing.

It's a bit odd that this passed all overlay cases and generic/474 (syncfs) in xfstests without errors.
Let me do more diagnosis on this and strengthen the test case.


>
> However that wasn't what I was asking about. AFAICS ->write_inode()
> won't start write back for dirty pages. Maybe I'm missing something,
> but there it looks as if nothing will actually trigger writeback for
> dirty pages in upper inode.
>

Actually, page writeback on the upper inode will be triggered by overlayfs' ->writepages, and
overlayfs' ->writepages will be called by the VFS writeback code (i.e. writeback_sb_inodes()).

Thanks,
Chengguang

2021-10-08 13:19:20

by Jan Kara

Subject: Re: [RFC PATCH v5 06/10] ovl: implement overlayfs' ->write_inode operation

On Thu 07-10-21 20:51:47, Miklos Szeredi wrote:
> On Thu, 7 Oct 2021 at 16:53, Chengguang Xu <[email protected]> wrote:
> >
> > ---- On Thu, 2021-10-07 22:46:46, Jan Kara <[email protected]> wrote ----
> > > On Thu 07-10-21 15:34:19, Miklos Szeredi wrote:
> > > > On Thu, 7 Oct 2021 at 15:10, Chengguang Xu <[email protected]> wrote:
> > > > > > However that wasn't what I was asking about. AFAICS ->write_inode()
> > > > > > won't start write back for dirty pages. Maybe I'm missing something,
> > > > > > but there it looks as if nothing will actually trigger writeback for
> > > > > > dirty pages in upper inode.
> > > > > >
> > > > >
> > > > > Actually, page writeback on upper inode will be triggered by overlayfs ->writepages and
> > > > > overlayfs' ->writepages will be called by vfs writeback function (i.e writeback_sb_inodes).
> > > >
> > > > Right.
> > > >
> > > > But wouldn't it be simpler to do this from ->write_inode()?
> > >
> > > You could but then you'd have to make sure you have I_DIRTY_SYNC always set
> > > when I_DIRTY_PAGES is set on the upper inode so that your ->write_inode()
> > > callback gets called. Overall I agree the logic would be probably simpler.
> > >
> >
>
> And it's not just for simplicity. The I_SYNC logic in
> writeback_single_inode() is actually necessary to prevent races
> between instances on a specific inode. I.e. if inode writeback is
> started by background wb then syncfs needs to synchronize with that
> otherwise it will miss the inode, or worse, mess things up by calling
> ->write_inode() multiple times in parallel. So going throught
> writeback_single_inode() is actually a must AFAICS.

Yes, you are correct.

Honza
--
Jan Kara <[email protected]>
SUSE Labs, CR

2021-11-16 02:22:49

by Chengguang Xu

Subject: Re: [RFC PATCH v5 06/10] ovl: implement overlayfs' ->write_inode operation

---- On Thu, 2021-10-07 21:34:19, Miklos Szeredi <[email protected]> wrote ----
> On Thu, 7 Oct 2021 at 15:10, Chengguang Xu <[email protected]> wrote:
> > > However that wasn't what I was asking about. AFAICS ->write_inode()
> > > won't start write back for dirty pages. Maybe I'm missing something,
> > > but there it looks as if nothing will actually trigger writeback for
> > > dirty pages in upper inode.
> > >
> >
> > Actually, page writeback on upper inode will be triggered by overlayfs ->writepages and
> > overlayfs' ->writepages will be called by vfs writeback function (i.e writeback_sb_inodes).
>
> Right.
>
> But wouldn't it be simpler to do this from ->write_inode()?
>
> I.e. call write_inode_now() as suggested by Jan.
>
> Also could just call mark_inode_dirty() on the overlay inode
> regardless of the dirty flags on the upper inode since it shouldn't
> matter and results in simpler logic.
>

Hi Miklos,

Sorry for the delayed response; I've been busy with another project.

I agree with your suggestion above, and furthermore, how about just marking the overlay inode
dirty whenever it has an upper inode? That would make the dirty-marking logic simple enough.

Thanks,
Chengguang

2021-11-16 12:36:40

by Miklos Szeredi

Subject: Re: [RFC PATCH v5 06/10] ovl: implement overlayfs' ->write_inode operation

On Tue, 16 Nov 2021 at 03:20, Chengguang Xu <[email protected]> wrote:
>
> ---- On Thu, 2021-10-07 21:34:19, Miklos Szeredi <[email protected]> wrote ----
> > On Thu, 7 Oct 2021 at 15:10, Chengguang Xu <[email protected]> wrote:
> > > > However that wasn't what I was asking about. AFAICS ->write_inode()
> > > > won't start write back for dirty pages. Maybe I'm missing something,
> > > > but there it looks as if nothing will actually trigger writeback for
> > > > dirty pages in upper inode.
> > > >
> > >
> > > Actually, page writeback on upper inode will be triggered by overlayfs ->writepages and
> > > overlayfs' ->writepages will be called by vfs writeback function (i.e writeback_sb_inodes).
> >
> > Right.
> >
> > But wouldn't it be simpler to do this from ->write_inode()?
> >
> > I.e. call write_inode_now() as suggested by Jan.
> >
> > Also could just call mark_inode_dirty() on the overlay inode
> > regardless of the dirty flags on the upper inode since it shouldn't
> > matter and results in simpler logic.
> >
>
> Hi Miklos,
>
> Sorry for delayed response for this, I've been busy with another project.
>
> I agree with your suggesion above and further more how about just mark overlay inode dirty
> when it has upper inode? This approach will make marking dirtiness simple enough.

Are you suggesting that all non-lower overlay inodes should always be dirty?

The logic would be simple, no doubt, but there's the cost of walking
those overlay inodes which don't have a dirty upper inode, right? Can
you quantify this cost with a benchmark? Can be totally synthetic,
e.g. lookup a million upper files without modifying them, then call
syncfs.

Thanks,
Miklos

2021-11-17 06:11:50

by Chengguang Xu

Subject: Re: [RFC PATCH v5 06/10] ovl: implement overlayfs' ->write_inode operation

---- On Tue, 2021-11-16 20:35:55, Miklos Szeredi <[email protected]> wrote ----
> On Tue, 16 Nov 2021 at 03:20, Chengguang Xu <[email protected]> wrote:
> >
> > ---- On Thu, 2021-10-07 21:34:19, Miklos Szeredi <[email protected]> wrote ----
> > > On Thu, 7 Oct 2021 at 15:10, Chengguang Xu <[email protected]> wrote:
> > > > > However that wasn't what I was asking about. AFAICS ->write_inode()
> > > > > won't start write back for dirty pages. Maybe I'm missing something,
> > > > > but there it looks as if nothing will actually trigger writeback for
> > > > > dirty pages in upper inode.
> > > > >
> > > >
> > > > Actually, page writeback on upper inode will be triggered by overlayfs ->writepages and
> > > > overlayfs' ->writepages will be called by vfs writeback function (i.e writeback_sb_inodes).
> > >
> > > Right.
> > >
> > > But wouldn't it be simpler to do this from ->write_inode()?
> > >
> > > I.e. call write_inode_now() as suggested by Jan.
> > >
> > > Also could just call mark_inode_dirty() on the overlay inode
> > > regardless of the dirty flags on the upper inode since it shouldn't
> > > matter and results in simpler logic.
> > >
> >
> > Hi Miklos,
> >
> > Sorry for delayed response for this, I've been busy with another project.
> >
> > I agree with your suggesion above and further more how about just mark overlay inode dirty
> > when it has upper inode? This approach will make marking dirtiness simple enough.
>
> Are you suggesting that all non-lower overlay inodes should always be dirty?
>
> The logic would be simple, no doubt, but there's the cost to walking
> those overlay inodes which don't have a dirty upper inode, right?

That's true.

> Can you quantify this cost with a benchmark? Can be totally synthetic,
> e.g. lookup a million upper files without modifying them, then call
> syncfs.
>

No problem, I'll run some performance tests.

Thanks,
Chengguang

2021-11-18 06:33:07

by Chengguang Xu

Subject: Re: [RFC PATCH v5 06/10] ovl: implement overlayfs' ->write_inode operation


---- On Wed, 2021-11-17 14:11:29, Chengguang Xu <[email protected]> wrote ----
> ---- On Tue, 2021-11-16 20:35:55, Miklos Szeredi <[email protected]> wrote ----
> > On Tue, 16 Nov 2021 at 03:20, Chengguang Xu <[email protected]> wrote:
> > >
> > > ---- On Thu, 2021-10-07 21:34:19, Miklos Szeredi <[email protected]> wrote ----
> > > > On Thu, 7 Oct 2021 at 15:10, Chengguang Xu <[email protected]> wrote:
> > > > > > However that wasn't what I was asking about. AFAICS ->write_inode()
> > > > > > won't start write back for dirty pages. Maybe I'm missing something,
> > > > > > but there it looks as if nothing will actually trigger writeback for
> > > > > > dirty pages in upper inode.
> > > > > >
> > > > >
> > > > > Actually, page writeback on upper inode will be triggered by overlayfs ->writepages and
> > > > > overlayfs' ->writepages will be called by vfs writeback function (i.e writeback_sb_inodes).
> > > >
> > > > Right.
> > > >
> > > > But wouldn't it be simpler to do this from ->write_inode()?
> > > >
> > > > I.e. call write_inode_now() as suggested by Jan.
> > > >
> > > > Also could just call mark_inode_dirty() on the overlay inode
> > > > regardless of the dirty flags on the upper inode since it shouldn't
> > > > matter and results in simpler logic.
> > > >
> > >
> > > Hi Miklos,
> > >
> > > Sorry for delayed response for this, I've been busy with another project.
> > >
> > > I agree with your suggesion above and further more how about just mark overlay inode dirty
> > > when it has upper inode? This approach will make marking dirtiness simple enough.
> >
> > Are you suggesting that all non-lower overlay inodes should always be dirty?
> >
> > The logic would be simple, no doubt, but there's the cost to walking
> > those overlay inodes which don't have a dirty upper inode, right?
>
> That's true.
>
> > Can you quantify this cost with a benchmark? Can be totally synthetic,
> > e.g. lookup a million upper files without modifying them, then call
> > syncfs.
> >
>
> No problem, I'll do some tests for the performance.
>

Hi Miklos,

I did some rough tests and the results are below.
In practice, I don't think the extra 1.3s of syncfs time will cause a significant problem.
What do you think?



Test bed: kvm vm
2.50GHz cpu 32core
64GB mem
vm kernel 5.15.0-rc1+ (with ovl syncfs patch V6)

one million files spread across a 2-level dir hierarchy.
test steps:
1) create test files in the ovl upper dir
2) mount overlayfs
3) execute ls -lR to look up all files in the overlay merge dir
4) execute slabtop to confirm the overlay inode count
5) call syncfs on a file in the merge dir (see the helper sketch below)
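
The ./syncfs helper used in the numbers below was not posted in the thread;
presumably it is a small wrapper along these lines (a reconstruction, shown
for reference only):

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(int argc, char *argv[])
{
	int fd;

	if (argc != 2) {
		fprintf(stderr, "usage: %s <path>\n", argv[0]);
		return 1;
	}
	/* Open the given path and sync the containing filesystem. */
	fd = open(argv[1], O_RDONLY);
	if (fd < 0) {
		perror("open");
		return 1;
	}
	if (syncfs(fd) < 0) {
		perror("syncfs");
		return 1;
	}
	printf("syncfs success\n");
	close(fd);
	return 0;
}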

Tested five times; the results range from 1.310s to 1.326s.

[root@VM-144-4-centos test]# time ./syncfs ovl-merge/create-file.sh
syncfs success

real 0m1.310s
user 0m0.000s
sys 0m0.001s
[root@VM-144-4-centos test]# time ./syncfs ovl-merge/create-file.sh
syncfs success

real 0m1.326s
user 0m0.001s
sys 0m0.000s
[root@VM-144-4-centos test]# time ./syncfs ovl-merge/create-file.sh
syncfs success

real 0m1.321s
user 0m0.000s
sys 0m0.001s
[root@VM-144-4-centos test]# time ./syncfs ovl-merge/create-file.sh
syncfs success

real 0m1.316s
user 0m0.000s
sys 0m0.001s
[root@VM-144-4-centos test]# time ./syncfs ovl-merge/create-file.sh
syncfs success

real 0m1.314s
user 0m0.001s
sys 0m0.001s


Directly run syncfs on a file in the ovl-upper dir.
Tested five times; the results range from 0.001s to 0.003s.

[root@VM-144-4-centos test]# time ./syncfs a
syncfs success

real 0m0.002s
user 0m0.001s
sys 0m0.000s
[root@VM-144-4-centos test]# time ./syncfs ovl-upper/create-file.sh
syncfs success

real 0m0.003s
user 0m0.001s
sys 0m0.000s
[root@VM-144-4-centos test]# time ./syncfs ovl-upper/create-file.sh
syncfs success

real 0m0.001s
user 0m0.000s
sys 0m0.001s
[root@VM-144-4-centos test]# time ./syncfs ovl-upper/create-file.sh
syncfs success

real 0m0.001s
user 0m0.000s
sys 0m0.001s
[root@VM-144-4-centos test]# time ./syncfs ovl-upper/create-file.sh
syncfs success

real 0m0.001s
user 0m0.000s
sys 0m0.001s
[root@VM-144-4-centos test]# time ./syncfs ovl-upper/create-file.sh
syncfs success

real 0m0.001s
user 0m0.000s
sys 0m0.001s

2021-11-18 11:24:05

by Jan Kara

Subject: Re: [RFC PATCH v5 06/10] ovl: implement overlayfs' ->write_inode operation

On Thu 18-11-21 14:32:36, Chengguang Xu wrote:
>
> ---- On Wed, 2021-11-17 14:11:29, Chengguang Xu <[email protected]> wrote ----
> > ---- On Tue, 2021-11-16 20:35:55, Miklos Szeredi <[email protected]> wrote ----
> > > On Tue, 16 Nov 2021 at 03:20, Chengguang Xu <[email protected]> wrote:
> > > >
> > > > ---- On Thu, 2021-10-07 21:34:19, Miklos Szeredi <[email protected]> wrote ----
> > > > > On Thu, 7 Oct 2021 at 15:10, Chengguang Xu <[email protected]> wrote:
> > > > > > > However that wasn't what I was asking about. AFAICS ->write_inode()
> > > > > > > won't start write back for dirty pages. Maybe I'm missing something,
> > > > > > > but there it looks as if nothing will actually trigger writeback for
> > > > > > > dirty pages in upper inode.
> > > > > > >
> > > > > >
> > > > > > Actually, page writeback on upper inode will be triggered by overlayfs ->writepages and
> > > > > > overlayfs' ->writepages will be called by vfs writeback function (i.e writeback_sb_inodes).
> > > > >
> > > > > Right.
> > > > >
> > > > > But wouldn't it be simpler to do this from ->write_inode()?
> > > > >
> > > > > I.e. call write_inode_now() as suggested by Jan.
> > > > >
> > > > > Also could just call mark_inode_dirty() on the overlay inode
> > > > > regardless of the dirty flags on the upper inode since it shouldn't
> > > > > matter and results in simpler logic.
> > > > >
> > > >
> > > > Hi Miklos,
> > > >
> > > > Sorry for delayed response for this, I've been busy with another project.
> > > >
> > > > I agree with your suggesion above and further more how about just mark overlay inode dirty
> > > > when it has upper inode? This approach will make marking dirtiness simple enough.
> > >
> > > Are you suggesting that all non-lower overlay inodes should always be dirty?
> > >
> > > The logic would be simple, no doubt, but there's the cost to walking
> > > those overlay inodes which don't have a dirty upper inode, right?
> >
> > That's true.
> >
> > > Can you quantify this cost with a benchmark? Can be totally synthetic,
> > > e.g. lookup a million upper files without modifying them, then call
> > > syncfs.
> > >
> >
> > No problem, I'll do some tests for the performance.
> >
>
> Hi Miklos,
>
> I did some rough tests and the results like below. In practice, I don't
> think that 1.3s extra time of syncfs will cause significant problem.
> What do you think?

Well, burning 1.3s worth of CPU time doing nothing seems like quite a
bit to me. I understand this is with 1000000 inodes, but although that is
quite a few, it is not unheard of. If several containers were calling
syncfs(2) on the machine, they could easily hog the machine... That
is why I was originally against keeping overlay inodes always dirty and
wanted their dirtiness to at least roughly track the real need to do
writeback.

Honza

> Test bed: kvm vm
> 2.50GHz cpu 32core
> 64GB mem
> vm kernel 5.15.0-rc1+ (with ovl syncfs patch V6)
>
> one millon files spread to 2 level of dir hierarchy.
> test step:
> 1) create testfiles in ovl upper dir
> 2) mount overlayfs
> 3) excute ls -lR to lookup all file in overlay merge dir
> 4) excute slabtop to make sure overlay inode number
> 5) call syncfs to the file in merge dir
>
> Tested five times and the reusults are in 1.310s ~ 1.326s
>
> root@VM-144-4-centos test]# time ./syncfs ovl-merge/create-file.sh
> syncfs success
>
> real 0m1.310s
> user 0m0.000s
> sys 0m0.001s
> [root@VM-144-4-centos test]# time ./syncfs ovl-merge/create-file.sh
> syncfs success
>
> real 0m1.326s
> user 0m0.001s
> sys 0m0.000s
> [root@VM-144-4-centos test]# time ./syncfs ovl-merge/create-file.sh
> syncfs success
>
> real 0m1.321s
> user 0m0.000s
> sys 0m0.001s
> [root@VM-144-4-centos test]# time ./syncfs ovl-merge/create-file.sh
> syncfs success
>
> real 0m1.316s
> user 0m0.000s
> sys 0m0.001s
> [root@VM-144-4-centos test]# time ./syncfs ovl-merge/create-file.sh
> syncfs success
>
> real 0m1.314s
> user 0m0.001s
> sys 0m0.001s
>
>
> Directly run syncfs to the file in ovl-upper dir.
> Tested five times and the reusults are in 0.001s ~ 0.003s
>
> [root@VM-144-4-centos test]# time ./syncfs a
> syncfs success
>
> real 0m0.002s
> user 0m0.001s
> sys 0m0.000s
> [root@VM-144-4-centos test]# time ./syncfs ovl-upper/create-file.sh
> syncfs success
>
> real 0m0.003s
> user 0m0.001s
> sys 0m0.000s
> [root@VM-144-4-centos test]# time ./syncfs ovl-upper/create-file.sh
> syncfs success
>
> real 0m0.001s
> user 0m0.000s
> sys 0m0.001s
> [root@VM-144-4-centos test]# time ./syncfs ovl-upper/create-file.sh
> syncfs success
>
> real 0m0.001s
> user 0m0.000s
> sys 0m0.001s
> [root@VM-144-4-centos test]# time ./syncfs ovl-upper/create-file.sh
> syncfs success
>
> real 0m0.001s
> user 0m0.000s
> sys 0m0.001s
> [root@VM-144-4-centos test]# time ./syncfs ovl-upper/create-file.sh
> syncfs success
>
> real 0m0.001s
> user 0m0.000s
> sys 0m0.001
>
>
>
>
>
>
--
Jan Kara <[email protected]>
SUSE Labs, CR

2021-11-18 12:02:43

by Chengguang Xu

Subject: Re: [RFC PATCH v5 06/10] ovl: implement overlayfs' ->write_inode operation

---- On Thu, 2021-11-18 19:23:15, Jan Kara <[email protected]> wrote ----
> On Thu 18-11-21 14:32:36, Chengguang Xu wrote:
> >
> > ---- On Wed, 2021-11-17 14:11:29, Chengguang Xu <[email protected]> wrote ----
> > > ---- On Tue, 2021-11-16 20:35:55, Miklos Szeredi <[email protected]> wrote ----
> > > > On Tue, 16 Nov 2021 at 03:20, Chengguang Xu <[email protected]> wrote:
> > > > >
> > > > > ---- On Thu, 2021-10-07 21:34:19, Miklos Szeredi <[email protected]> wrote ----
> > > > > > On Thu, 7 Oct 2021 at 15:10, Chengguang Xu <[email protected]> wrote:
> > > > > > > > However that wasn't what I was asking about. AFAICS ->write_inode()
> > > > > > > > won't start write back for dirty pages. Maybe I'm missing something,
> > > > > > > > but there it looks as if nothing will actually trigger writeback for
> > > > > > > > dirty pages in upper inode.
> > > > > > > >
> > > > > > >
> > > > > > > Actually, page writeback on upper inode will be triggered by overlayfs ->writepages and
> > > > > > > overlayfs' ->writepages will be called by vfs writeback function (i.e writeback_sb_inodes).
> > > > > >
> > > > > > Right.
> > > > > >
> > > > > > But wouldn't it be simpler to do this from ->write_inode()?
> > > > > >
> > > > > > I.e. call write_inode_now() as suggested by Jan.
> > > > > >
> > > > > > Also could just call mark_inode_dirty() on the overlay inode
> > > > > > regardless of the dirty flags on the upper inode since it shouldn't
> > > > > > matter and results in simpler logic.
> > > > > >
> > > > >
> > > > > Hi Miklos,
> > > > >
> > > > > Sorry for delayed response for this, I've been busy with another project.
> > > > >
> > > > > I agree with your suggesion above and further more how about just mark overlay inode dirty
> > > > > when it has upper inode? This approach will make marking dirtiness simple enough.
> > > >
> > > > Are you suggesting that all non-lower overlay inodes should always be dirty?
> > > >
> > > > The logic would be simple, no doubt, but there's the cost to walking
> > > > those overlay inodes which don't have a dirty upper inode, right?
> > >
> > > That's true.
> > >
> > > > Can you quantify this cost with a benchmark? Can be totally synthetic,
> > > > e.g. lookup a million upper files without modifying them, then call
> > > > syncfs.
> > > >
> > >
> > > No problem, I'll do some tests for the performance.
> > >
> >
> > Hi Miklos,
> >
> > I did some rough tests and the results like below. In practice, I don't
> > think that 1.3s extra time of syncfs will cause significant problem.
> > What do you think?
>
> Well, burning 1.3s worth of CPU time for doing nothing seems like quite a
> bit to me. I understand this is with 1000000 inodes but although that is
> quite a few it is not unheard of. If there would be several containers
> calling sync_fs(2) on the machine they could easily hog the machine... That
> is why I was originally against keeping overlay inodes always dirty and
> wanted their dirtiness to at least roughly track the real need to do
> writeback.
>

Hi Jan,

Actually, the user and sys times are almost the same as when executing syncfs directly on the underlying fs.
IMO, it only extends the syncfs(2) waiting time for the particular container, without burning CPU.
What am I missing?


Thanks,
Chengguang


>
> > Test bed: kvm vm
> > 2.50GHz cpu 32core
> > 64GB mem
> > vm kernel 5.15.0-rc1+ (with ovl syncfs patch V6)
> >
> > one millon files spread to 2 level of dir hierarchy.
> > test step:
> > 1) create testfiles in ovl upper dir
> > 2) mount overlayfs
> > 3) excute ls -lR to lookup all file in overlay merge dir
> > 4) excute slabtop to make sure overlay inode number
> > 5) call syncfs to the file in merge dir
> >
> > Tested five times and the reusults are in 1.310s ~ 1.326s
> >
> > root@VM-144-4-centos test]# time ./syncfs ovl-merge/create-file.sh
> > syncfs success
> >
> > real 0m1.310s
> > user 0m0.000s
> > sys 0m0.001s
> > [root@VM-144-4-centos test]# time ./syncfs ovl-merge/create-file.sh
> > syncfs success
> >
> > real 0m1.326s
> > user 0m0.001s
> > sys 0m0.000s
> > [root@VM-144-4-centos test]# time ./syncfs ovl-merge/create-file.sh
> > syncfs success
> >
> > real 0m1.321s
> > user 0m0.000s
> > sys 0m0.001s
> > [root@VM-144-4-centos test]# time ./syncfs ovl-merge/create-file.sh
> > syncfs success
> >
> > real 0m1.316s
> > user 0m0.000s
> > sys 0m0.001s
> > [root@VM-144-4-centos test]# time ./syncfs ovl-merge/create-file.sh
> > syncfs success
> >
> > real 0m1.314s
> > user 0m0.001s
> > sys 0m0.001s
> >
> >
> > Directly run syncfs to the file in ovl-upper dir.
> > Tested five times and the reusults are in 0.001s ~ 0.003s
> >
> > [root@VM-144-4-centos test]# time ./syncfs a
> > syncfs success
> >
> > real 0m0.002s
> > user 0m0.001s
> > sys 0m0.000s
> > [root@VM-144-4-centos test]# time ./syncfs ovl-upper/create-file.sh
> > syncfs success
> >
> > real 0m0.003s
> > user 0m0.001s
> > sys 0m0.000s
> > [root@VM-144-4-centos test]# time ./syncfs ovl-upper/create-file.sh
> > syncfs success
> >
> > real 0m0.001s
> > user 0m0.000s
> > sys 0m0.001s
> > [root@VM-144-4-centos test]# time ./syncfs ovl-upper/create-file.sh
> > syncfs success
> >
> > real 0m0.001s
> > user 0m0.000s
> > sys 0m0.001s
> > [root@VM-144-4-centos test]# time ./syncfs ovl-upper/create-file.sh
> > syncfs success
> >
> > real 0m0.001s
> > user 0m0.000s
> > sys 0m0.001s
> > [root@VM-144-4-centos test]# time ./syncfs ovl-upper/create-file.sh
> > syncfs success
> >
> > real 0m0.001s
> > user 0m0.000s
> > sys 0m0.001
> >
> >
> >
> >
> >
> >
> --
> Jan Kara <[email protected]>
> SUSE Labs, CR
>

2021-11-18 16:43:56

by Jan Kara

Subject: Re: [RFC PATCH v5 06/10] ovl: implement overlayfs' ->write_inode operation

On Thu 18-11-21 20:02:09, Chengguang Xu wrote:
> ---- On Thu, 2021-11-18 19:23:15, Jan Kara <[email protected]> wrote ----
> > On Thu 18-11-21 14:32:36, Chengguang Xu wrote:
> > >
> > > ---- On Wed, 2021-11-17 14:11:29, Chengguang Xu <[email protected]> wrote ----
> > > > ---- On Tue, 2021-11-16 20:35:55, Miklos Szeredi <[email protected]> wrote ----
> > > > > On Tue, 16 Nov 2021 at 03:20, Chengguang Xu <[email protected]> wrote:
> > > > > >
> > > > > > ---- On Thu, 2021-10-07 21:34:19, Miklos Szeredi <[email protected]> wrote ----
> > > > > > > On Thu, 7 Oct 2021 at 15:10, Chengguang Xu <[email protected]> wrote:
> > > > > > > > > However that wasn't what I was asking about. AFAICS ->write_inode()
> > > > > > > > > won't start write back for dirty pages. Maybe I'm missing something,
> > > > > > > > > but there it looks as if nothing will actually trigger writeback for
> > > > > > > > > dirty pages in upper inode.
> > > > > > > > >
> > > > > > > >
> > > > > > > > Actually, page writeback on upper inode will be triggered by overlayfs ->writepages and
> > > > > > > > overlayfs' ->writepages will be called by vfs writeback function (i.e writeback_sb_inodes).
> > > > > > >
> > > > > > > Right.
> > > > > > >
> > > > > > > But wouldn't it be simpler to do this from ->write_inode()?
> > > > > > >
> > > > > > > I.e. call write_inode_now() as suggested by Jan.
> > > > > > >
> > > > > > > Also could just call mark_inode_dirty() on the overlay inode
> > > > > > > regardless of the dirty flags on the upper inode since it shouldn't
> > > > > > > matter and results in simpler logic.
> > > > > > >
> > > > > >
> > > > > > Hi Miklos,
> > > > > >
> > > > > > Sorry for delayed response for this, I've been busy with another project.
> > > > > >
> > > > > > I agree with your suggesion above and further more how about just mark overlay inode dirty
> > > > > > when it has upper inode? This approach will make marking dirtiness simple enough.
> > > > >
> > > > > Are you suggesting that all non-lower overlay inodes should always be dirty?
> > > > >
> > > > > The logic would be simple, no doubt, but there's the cost to walking
> > > > > those overlay inodes which don't have a dirty upper inode, right?
> > > >
> > > > That's true.
> > > >
> > > > > Can you quantify this cost with a benchmark? Can be totally synthetic,
> > > > > e.g. lookup a million upper files without modifying them, then call
> > > > > syncfs.
> > > > >
> > > >
> > > > No problem, I'll do some tests for the performance.
> > > >
> > >
> > > Hi Miklos,
> > >
> > > I did some rough tests and the results like below. In practice, I don't
> > > think that 1.3s extra time of syncfs will cause significant problem.
> > > What do you think?
> >
> > Well, burning 1.3s worth of CPU time for doing nothing seems like quite a
> > bit to me. I understand this is with 1000000 inodes but although that is
> > quite a few it is not unheard of. If there would be several containers
> > calling sync_fs(2) on the machine they could easily hog the machine... That
> > is why I was originally against keeping overlay inodes always dirty and
> > wanted their dirtiness to at least roughly track the real need to do
> > writeback.
> >
>
> Hi Jan,
>
> Actually, the time on user and sys are almost same with directly excute syncfs on underlying fs.
> IMO, it only extends syncfs(2) waiting time for perticular container but not burning cpu.
> What am I missing?

Ah, right, I missed that only the real time changed, not the sys time. I'm sorry
for the confusion. But why did the real time increase so much? Are we waiting
for some IO?

Honza

> > > Test bed: kvm vm
> > > 2.50GHz cpu 32core
> > > 64GB mem
> > > vm kernel 5.15.0-rc1+ (with ovl syncfs patch V6)
> > >
> > > one millon files spread to 2 level of dir hierarchy.
> > > test step:
> > > 1) create testfiles in ovl upper dir
> > > 2) mount overlayfs
> > > 3) excute ls -lR to lookup all file in overlay merge dir
> > > 4) excute slabtop to make sure overlay inode number
> > > 5) call syncfs to the file in merge dir
> > >
> > > Tested five times and the reusults are in 1.310s ~ 1.326s
> > >
> > > root@VM-144-4-centos test]# time ./syncfs ovl-merge/create-file.sh
> > > syncfs success
> > >
> > > real 0m1.310s
> > > user 0m0.000s
> > > sys 0m0.001s
> > > [root@VM-144-4-centos test]# time ./syncfs ovl-merge/create-file.sh
> > > syncfs success
> > >
> > > real 0m1.326s
> > > user 0m0.001s
> > > sys 0m0.000s
> > > [root@VM-144-4-centos test]# time ./syncfs ovl-merge/create-file.sh
> > > syncfs success
> > >
> > > real 0m1.321s
> > > user 0m0.000s
> > > sys 0m0.001s
> > > [root@VM-144-4-centos test]# time ./syncfs ovl-merge/create-file.sh
> > > syncfs success
> > >
> > > real 0m1.316s
> > > user 0m0.000s
> > > sys 0m0.001s
> > > [root@VM-144-4-centos test]# time ./syncfs ovl-merge/create-file.sh
> > > syncfs success
> > >
> > > real 0m1.314s
> > > user 0m0.001s
> > > sys 0m0.001s
> > >
> > >
> > > Directly run syncfs to the file in ovl-upper dir.
> > > Tested five times and the reusults are in 0.001s ~ 0.003s
> > >
> > > [root@VM-144-4-centos test]# time ./syncfs a
> > > syncfs success
> > >
> > > real 0m0.002s
> > > user 0m0.001s
> > > sys 0m0.000s
> > > [root@VM-144-4-centos test]# time ./syncfs ovl-upper/create-file.sh
> > > syncfs success
> > >
> > > real 0m0.003s
> > > user 0m0.001s
> > > sys 0m0.000s
> > > [root@VM-144-4-centos test]# time ./syncfs ovl-upper/create-file.sh
> > > syncfs success
> > >
> > > real 0m0.001s
> > > user 0m0.000s
> > > sys 0m0.001s
> > > [root@VM-144-4-centos test]# time ./syncfs ovl-upper/create-file.sh
> > > syncfs success
> > >
> > > real 0m0.001s
> > > user 0m0.000s
> > > sys 0m0.001s
> > > [root@VM-144-4-centos test]# time ./syncfs ovl-upper/create-file.sh
> > > syncfs success
> > >
> > > real 0m0.001s
> > > user 0m0.000s
> > > sys 0m0.001s
> > > [root@VM-144-4-centos test]# time ./syncfs ovl-upper/create-file.sh
> > > syncfs success
> > >
> > > real 0m0.001s
> > > user 0m0.000s
> > > sys 0m0.001
> > >
> > >
> > >
> > >
> > >
> > >
> > --
> > Jan Kara <[email protected]>
> > SUSE Labs, CR
> >
--
Jan Kara <[email protected]>
SUSE Labs, CR

2021-11-19 06:13:05

by Chengguang Xu

Subject: Re: [RFC PATCH v5 06/10] ovl: implement overlayfs' ->write_inode operation

---- On Fri, 2021-11-19 00:43:49, Jan Kara <[email protected]> wrote ----
> On Thu 18-11-21 20:02:09, Chengguang Xu wrote:
> ---- On Thu, 2021-11-18 19:23:15, Jan Kara <[email protected]> wrote ----
> > > On Thu 18-11-21 14:32:36, Chengguang Xu wrote:
> > > >
> > > ---- On Wed, 2021-11-17 14:11:29, Chengguang Xu <[email protected]> wrote ----
> > > > ---- On Tue, 2021-11-16 20:35:55, Miklos Szeredi <[email protected]> wrote ----
> > > > > > On Tue, 16 Nov 2021 at 03:20, Chengguang Xu <[email protected]> wrote:
> > > > > > >
> > > > > > ---- On Thu, 2021-10-07 21:34:19, Miklos Szeredi <[email protected]> wrote ----
> > > > > > > > On Thu, 7 Oct 2021 at 15:10, Chengguang Xu <[email protected]> wrote:
> > > > > > > > > > However that wasn't what I was asking about. AFAICS ->write_inode()
> > > > > > > > > > won't start write back for dirty pages. Maybe I'm missing something,
> > > > > > > > > > but there it looks as if nothing will actually trigger writeback for
> > > > > > > > > > dirty pages in upper inode.
> > > > > > > > > >
> > > > > > > > >
> > > > > > > > > Actually, page writeback on upper inode will be triggered by overlayfs ->writepages and
> > > > > > > > > overlayfs' ->writepages will be called by vfs writeback function (i.e writeback_sb_inodes).
> > > > > > > >
> > > > > > > > Right.
> > > > > > > >
> > > > > > > > But wouldn't it be simpler to do this from ->write_inode()?
> > > > > > > >
> > > > > > > > I.e. call write_inode_now() as suggested by Jan.
> > > > > > > >
> > > > > > > > Also could just call mark_inode_dirty() on the overlay inode
> > > > > > > > regardless of the dirty flags on the upper inode since it shouldn't
> > > > > > > > matter and results in simpler logic.
> > > > > > > >
> > > > > > >
> > > > > > > Hi Miklos,
> > > > > > >
> > > > > > > Sorry for delayed response for this, I've been busy with another project.
> > > > > > >
> > > > > > > I agree with your suggesion above and further more how about just mark overlay inode dirty
> > > > > > > when it has upper inode? This approach will make marking dirtiness simple enough.
> > > > > >
> > > > > > Are you suggesting that all non-lower overlay inodes should always be dirty?
> > > > > >
> > > > > > The logic would be simple, no doubt, but there's the cost to walking
> > > > > > those overlay inodes which don't have a dirty upper inode, right?
> > > > >
> > > > > That's true.
> > > > >
> > > > > > Can you quantify this cost with a benchmark? Can be totally synthetic,
> > > > > > e.g. lookup a million upper files without modifying them, then call
> > > > > > syncfs.
> > > > > >
> > > > >
> > > > > No problem, I'll do some tests for the performance.
> > > > >
> > > >
> > > > Hi Miklos,
> > > >
> > > > I did some rough tests and the results like below. In practice, I don't
> > > > think that 1.3s extra time of syncfs will cause significant problem.
> > > > What do you think?
> > >
> > > Well, burning 1.3s worth of CPU time for doing nothing seems like quite a
> > > bit to me. I understand this is with 1000000 inodes but although that is
> > > quite a few it is not unheard of. If there would be several containers
> > > calling sync_fs(2) on the machine they could easily hog the machine... That
> > > is why I was originally against keeping overlay inodes always dirty and
> > > wanted their dirtiness to at least roughly track the real need to do
> > > writeback.
> > >
> >
> > Hi Jan,
> >
> > Actually, the time on user and sys are almost same with directly excute syncfs on underlying fs.
> > IMO, it only extends syncfs(2) waiting time for perticular container but not burning cpu.
> > What am I missing?
>
> Ah, right, I've missed that only realtime changed, not systime. I'm sorry
> for confusion. But why did the realtime increase so much? Are we waiting
> for some IO?
>

There are many cond_resched() calls in the writeback path,
so the syncfs process was scheduled away several times.

Thanks,
Chengguang

2021-11-30 11:22:15

by Jan Kara

Subject: Re: [RFC PATCH v5 06/10] ovl: implement overlayfs' ->write_inode operation

On Fri 19-11-21 14:12:46, Chengguang Xu wrote:
> ---- On Fri, 2021-11-19 00:43:49, Jan Kara <[email protected]> wrote ----
> > On Thu 18-11-21 20:02:09, Chengguang Xu wrote:
> > ---- On Thu, 2021-11-18 19:23:15, Jan Kara <[email protected]> wrote ----
> > > > On Thu 18-11-21 14:32:36, Chengguang Xu wrote:
> > > > >
> > > > ---- On Wed, 2021-11-17 14:11:29, Chengguang Xu <[email protected]> wrote ----
> > > > > ---- On Tue, 2021-11-16 20:35:55, Miklos Szeredi <[email protected]> wrote ----
> > > > > > > On Tue, 16 Nov 2021 at 03:20, Chengguang Xu <[email protected]> wrote:
> > > > > > > >
> > > > > > > ---- On Thu, 2021-10-07 21:34:19, Miklos Szeredi <[email protected]> wrote ----
> > > > > > > > > On Thu, 7 Oct 2021 at 15:10, Chengguang Xu <[email protected]> wrote:
> > > > > > > > > > > However that wasn't what I was asking about. AFAICS ->write_inode()
> > > > > > > > > > > won't start write back for dirty pages. Maybe I'm missing something,
> > > > > > > > > > > but there it looks as if nothing will actually trigger writeback for
> > > > > > > > > > > dirty pages in upper inode.
> > > > > > > > > > >
> > > > > > > > > >
> > > > > > > > > > Actually, page writeback on upper inode will be triggered by overlayfs ->writepages and
> > > > > > > > > > overlayfs' ->writepages will be called by vfs writeback function (i.e writeback_sb_inodes).
> > > > > > > > >
> > > > > > > > > Right.
> > > > > > > > >
> > > > > > > > > But wouldn't it be simpler to do this from ->write_inode()?
> > > > > > > > >
> > > > > > > > > I.e. call write_inode_now() as suggested by Jan.
> > > > > > > > >
> > > > > > > > > Also could just call mark_inode_dirty() on the overlay inode
> > > > > > > > > regardless of the dirty flags on the upper inode since it shouldn't
> > > > > > > > > matter and results in simpler logic.
> > > > > > > > >
> > > > > > > >
> > > > > > > > Hi Miklos,
> > > > > > > >
> > > > > > > > Sorry for delayed response for this, I've been busy with another project.
> > > > > > > >
> > > > > > > > I agree with your suggesion above and further more how about just mark overlay inode dirty
> > > > > > > > when it has upper inode? This approach will make marking dirtiness simple enough.
> > > > > > >
> > > > > > > Are you suggesting that all non-lower overlay inodes should always be dirty?
> > > > > > >
> > > > > > > The logic would be simple, no doubt, but there's the cost to walking
> > > > > > > those overlay inodes which don't have a dirty upper inode, right?
> > > > > >
> > > > > > That's true.
> > > > > >
> > > > > > > Can you quantify this cost with a benchmark? Can be totally synthetic,
> > > > > > > e.g. lookup a million upper files without modifying them, then call
> > > > > > > syncfs.
> > > > > > >
> > > > > >
> > > > > > No problem, I'll do some tests for the performance.
> > > > > >
> > > > >
> > > > > Hi Miklos,
> > > > >
> > > > > I did some rough tests and the results like below. In practice, I don't
> > > > > think that 1.3s extra time of syncfs will cause significant problem.
> > > > > What do you think?
> > > >
> > > > Well, burning 1.3s worth of CPU time for doing nothing seems like quite a
> > > > bit to me. I understand this is with 1000000 inodes but although that is
> > > > quite a few it is not unheard of. If there would be several containers
> > > > calling sync_fs(2) on the machine they could easily hog the machine... That
> > > > is why I was originally against keeping overlay inodes always dirty and
> > > > wanted their dirtiness to at least roughly track the real need to do
> > > > writeback.
> > > >
> > >
> > > Hi Jan,
> > >
> > > Actually, the time on user and sys are almost same with directly excute syncfs on underlying fs.
> > > IMO, it only extends syncfs(2) waiting time for perticular container but not burning cpu.
> > > What am I missing?
> >
> > Ah, right, I've missed that only realtime changed, not systime. I'm sorry
> > for confusion. But why did the realtime increase so much? Are we waiting
> > for some IO?
> >
>
> There are many places to call cond_resched() in writeback process,
> so sycnfs process was scheduled several times.

I was thinking about this a bit more and I don't think I buy this
explanation. What I rather think is happening is that real work for syncfs
(writeback_inodes_sb() and sync_inodes_sb() calls) gets offloaded to a flush
worker. E.g. writeback_inodes_sb() ends up calling
__writeback_inodes_sb_nr() which does:

bdi_split_work_to_wbs()
wb_wait_for_completion()

So you don't see the work done in the times accounted to your test
program. But in practice the flush worker is indeed burning 1.3s worth of
CPU to scan the 1 million inode list and do nothing.
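
Roughly, the path looks like this (simplified sketch of
fs/fs-writeback.c, sanity checks trimmed; details vary by kernel
version):

static void __writeback_inodes_sb_nr(struct super_block *sb,
				     unsigned long nr,
				     enum wb_reason reason,
				     bool skip_if_busy)
{
	struct backing_dev_info *bdi = sb->s_bdi;
	DEFINE_WB_COMPLETION(done, bdi);
	struct wb_writeback_work work = {
		.sb			= sb,
		.sync_mode		= WB_SYNC_NONE,
		.tagged_writepages	= 1,
		.done			= &done,
		.nr_pages		= nr,
		.reason			= reason,
	};

	/* Queue the work for the per-wb flush workers... */
	bdi_split_work_to_wbs(sb->s_bdi, &work, skip_if_busy);

	/*
	 * ...and sleep until they finish.  The inode list scan burns
	 * the flush worker's CPU, not the caller's, which is why it
	 * does not show up in the sys time of the test program.
	 */
	wb_wait_for_completion(&done);
}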

Honza
--
Jan Kara <[email protected]>
SUSE Labs, CR

2021-11-30 16:10:56

by Chengguang Xu

[permalink] [raw]
Subject: Re: [RFC PATCH v5 06/10] ovl: implement overlayfs' ->write_inode operation


---- 在 星期二, 2021-11-30 19:22:06 Jan Kara <[email protected]> 撰写 ----
> On Fri 19-11-21 14:12:46, Chengguang Xu wrote:
> > ---- 在 星期五, 2021-11-19 00:43:49 Jan Kara <[email protected]> 撰写 ----
> > > On Thu 18-11-21 20:02:09, Chengguang Xu wrote:
> > > > ---- 在 星期四, 2021-11-18 19:23:15 Jan Kara <[email protected]> 撰写 ----
> > > > > On Thu 18-11-21 14:32:36, Chengguang Xu wrote:
> > > > > >
> > > > > > ---- 在 星期三, 2021-11-17 14:11:29 Chengguang Xu <[email protected]> 撰写 ----
> > > > > > > ---- 在 星期二, 2021-11-16 20:35:55 Miklos Szeredi <[email protected]> 撰写 ----
> > > > > > > > On Tue, 16 Nov 2021 at 03:20, Chengguang Xu <[email protected]> wrote:
> > > > > > > > >
> > > > > > > > > ---- 在 星期四, 2021-10-07 21:34:19 Miklos Szeredi <[email protected]> 撰写 ----
> > > > > > > > > > On Thu, 7 Oct 2021 at 15:10, Chengguang Xu <[email protected]> wrote:
> > > > > > > > > > > > However that wasn't what I was asking about. AFAICS ->write_inode()
> > > > > > > > > > > > won't start write back for dirty pages. Maybe I'm missing something,
> > > > > > > > > > > > but there it looks as if nothing will actually trigger writeback for
> > > > > > > > > > > > dirty pages in upper inode.
> > > > > > > > > > > >
> > > > > > > > > > >
> > > > > > > > > > > Actually, page writeback on upper inode will be triggered by overlayfs ->writepages and
> > > > > > > > > > > overlayfs' ->writepages will be called by vfs writeback function (i.e writeback_sb_inodes).
> > > > > > > > > >
> > > > > > > > > > Right.
> > > > > > > > > >
> > > > > > > > > > But wouldn't it be simpler to do this from ->write_inode()?
> > > > > > > > > >
> > > > > > > > > > I.e. call write_inode_now() as suggested by Jan.
> > > > > > > > > >
> > > > > > > > > > Also could just call mark_inode_dirty() on the overlay inode
> > > > > > > > > > regardless of the dirty flags on the upper inode since it shouldn't
> > > > > > > > > > matter and results in simpler logic.
> > > > > > > > > >
> > > > > > > > >
> > > > > > > > > Hi Miklos,
> > > > > > > > >
> > > > > > > > > Sorry for delayed response for this, I've been busy with another project.
> > > > > > > > >
> > > > > > > > > I agree with your suggesion above and further more how about just mark overlay inode dirty
> > > > > > > > > when it has upper inode? This approach will make marking dirtiness simple enough.
> > > > > > > >
> > > > > > > > Are you suggesting that all non-lower overlay inodes should always be dirty?
> > > > > > > >
> > > > > > > > The logic would be simple, no doubt, but there's the cost to walking
> > > > > > > > those overlay inodes which don't have a dirty upper inode, right?
> > > > > > >
> > > > > > > That's true.
> > > > > > >
> > > > > > > > Can you quantify this cost with a benchmark? Can be totally synthetic,
> > > > > > > > e.g. lookup a million upper files without modifying them, then call
> > > > > > > > syncfs.
> > > > > > > >
> > > > > > >
> > > > > > > No problem, I'll do some tests for the performance.
> > > > > > >
> > > > > >
> > > > > > Hi Miklos,
> > > > > >
> > > > > > I did some rough tests and the results like below. In practice, I don't
> > > > > > think that 1.3s extra time of syncfs will cause significant problem.
> > > > > > What do you think?
> > > > >
> > > > > Well, burning 1.3s worth of CPU time for doing nothing seems like quite a
> > > > > bit to me. I understand this is with 1000000 inodes but although that is
> > > > > quite a few it is not unheard of. If there would be several containers
> > > > > calling sync_fs(2) on the machine they could easily hog the machine... That
> > > > > is why I was originally against keeping overlay inodes always dirty and
> > > > > wanted their dirtiness to at least roughly track the real need to do
> > > > > writeback.
> > > > >
> > > >
> > > > Hi Jan,
> > > >
> > > > Actually, the time on user and sys are almost same with directly excute syncfs on underlying fs.
> > > > IMO, it only extends syncfs(2) waiting time for perticular container but not burning cpu.
> > > > What am I missing?
> > >
> > > Ah, right, I've missed that only realtime changed, not systime. I'm sorry
> > > for confusion. But why did the realtime increase so much? Are we waiting
> > > for some IO?
> > >
> >
> > There are many places to call cond_resched() in writeback process,
> > so sycnfs process was scheduled several times.
>
> I was thinking about this a bit more and I don't think I buy this
> explanation. What I rather think is happening is that real work for syncfs
> (writeback_inodes_sb() and sync_inodes_sb() calls) gets offloaded to a flush
> worker. E.g. writeback_inodes_sb() ends up calling
> __writeback_inodes_sb_nr() which does:
>
> bdi_split_work_to_wbs()
> wb_wait_for_completion()
>
> So you don't see the work done in the times accounted to your test
> program. But in practice the flush worker is indeed burning 1.3s worth of
> CPU to scan the 1 million inode list and do nothing.
>

That makes sense. However, in the real container use case the upper dir
is always empty, so I don't think there is a meaningful difference
compared to accurately marking the overlay inode dirty.

I'm not very familiar with use cases of overlayfs other than containers;
should we consider those too? Maybe we can also ignore the CPU burden
there, because those use cases don't have dense deployment the way
containers do.



Thanks,
Chengguang




2021-11-30 19:05:15

by Amir Goldstein

[permalink] [raw]
Subject: Re: [RFC PATCH v5 06/10] ovl: implement overlayfs' ->write_inode operation

> > I was thinking about this a bit more and I don't think I buy this
> > explanation. What I rather think is happening is that real work for syncfs
> > (writeback_inodes_sb() and sync_inodes_sb() calls) gets offloaded to a flush
> > worker. E.g. writeback_inodes_sb() ends up calling
> > __writeback_inodes_sb_nr() which does:
> >
> > bdi_split_work_to_wbs()
> > wb_wait_for_completion()
> >
> > So you don't see the work done in the times accounted to your test
> > program. But in practice the flush worker is indeed burning 1.3s worth of
> > CPU to scan the 1 million inode list and do nothing.
> >
>
> That makes sense. However, in real container use case, the upper dir is always empty,
> so I don't think there is meaningful difference compare to accurately marking overlay
> inode dirty.
>

It's true that this is a very common case, but...

> I'm not very familiar with other use cases of overlayfs except container, should we consider
> other use cases? Maybe we can also ignore the cpu burden because those use cases don't
> have density deployment like container.
>

metacopy feature was developed for the use case of a container
that chowns all the files in the lower image.

In that case, which is now also quite common, all the overlay inodes are
upper inodes.

What about re-marking the overlay inode dirty only if the upper inode
is dirty or is writably mmapped?
For the other cases, is it easy to know when the overlay inode becomes
dirty? Didn't you already try this?

Thanks,
Amir.

2021-12-01 02:38:00

by Chengguang Xu

[permalink] [raw]
Subject: Re: [RFC PATCH v5 06/10] ovl: implement overlayfs' ->write_inode operation


---- 在 星期三, 2021-12-01 03:04:59 Amir Goldstein <[email protected]> 撰写 ----
> > > I was thinking about this a bit more and I don't think I buy this
> > > explanation. What I rather think is happening is that real work for syncfs
> > > (writeback_inodes_sb() and sync_inodes_sb() calls) gets offloaded to a flush
> > > worker. E.g. writeback_inodes_sb() ends up calling
> > > __writeback_inodes_sb_nr() which does:
> > >
> > > bdi_split_work_to_wbs()
> > > wb_wait_for_completion()
> > >
> > > So you don't see the work done in the times accounted to your test
> > > program. But in practice the flush worker is indeed burning 1.3s worth of
> > > CPU to scan the 1 million inode list and do nothing.
> > >
> >
> > That makes sense. However, in real container use case, the upper dir is always empty,
> > so I don't think there is meaningful difference compare to accurately marking overlay
> > inode dirty.
> >
>
> It's true the that is a very common case, but...
>
> > I'm not very familiar with other use cases of overlayfs except container, should we consider
> > other use cases? Maybe we can also ignore the cpu burden because those use cases don't
> > have density deployment like container.
> >
>
> metacopy feature was developed for the use case of a container
> that chowns all the files in the lower image.
>
> In that case, which is now also quite common, all the overlay inodes are
> upper inodes.
>

Regardless of metacopy or datacopy, the copy-up has already modified the
overlay inode, so initially marking dirty all overlay inodes which have
an upper inode will not be a serious problem in this case either, right?

I guess you are more concerned about re-marking dirtiness in the above
use case.



> What about only re-mark overlay inode dirty if upper inode is dirty or is
> writeably mmapped.
> For other cases, it is easy to know when overlay inode becomes dirty?
> Didn't you already try this?
>

Yes, I tried that approach in a previous version, but as Miklos pointed
out in the feedback there are a few racy conditions.



Thanks,
Chengguang





2021-12-01 06:31:46

by Chengguang Xu

[permalink] [raw]
Subject: Re: [RFC PATCH v5 06/10] ovl: implement overlayfs' ->write_inode operation


---- 在 星期三, 2021-12-01 10:37:15 Chengguang Xu <[email protected]> 撰写 ----
>
> ---- 在 星期三, 2021-12-01 03:04:59 Amir Goldstein <[email protected]> 撰写 ----
> > > > I was thinking about this a bit more and I don't think I buy this
> > > > explanation. What I rather think is happening is that real work for syncfs
> > > > (writeback_inodes_sb() and sync_inodes_sb() calls) gets offloaded to a flush
> > > > worker. E.g. writeback_inodes_sb() ends up calling
> > > > __writeback_inodes_sb_nr() which does:
> > > >
> > > > bdi_split_work_to_wbs()
> > > > wb_wait_for_completion()
> > > >
> > > > So you don't see the work done in the times accounted to your test
> > > > program. But in practice the flush worker is indeed burning 1.3s worth of
> > > > CPU to scan the 1 million inode list and do nothing.
> > > >
> > >
> > > That makes sense. However, in real container use case, the upper dir is always empty,
> > > so I don't think there is meaningful difference compare to accurately marking overlay
> > > inode dirty.
> > >
> >
> > It's true the that is a very common case, but...
> >
> > > I'm not very familiar with other use cases of overlayfs except container, should we consider
> > > other use cases? Maybe we can also ignore the cpu burden because those use cases don't
> > > have density deployment like container.
> > >
> >
> > metacopy feature was developed for the use case of a container
> > that chowns all the files in the lower image.
> >
> > In that case, which is now also quite common, all the overlay inodes are
> > upper inodes.
> >
>
> Regardless of metacopy or datacopy, that copy-up has already modified overlay inode
> so initialy marking dirty to all overlay inodes which have upper inode will not be a serious
> problem in this case too, right?
>
> I guess maybe you more concern about the re-mark dirtiness on above use case.
>
>
>
> > What about only re-mark overlay inode dirty if upper inode is dirty or is
> > writeably mmapped.
> > For other cases, it is easy to know when overlay inode becomes dirty?
> > Didn't you already try this?
> >
>
> Yes, I've tried that approach in previous version but as Miklos pointed out in the
> feedback there are a few of racy conditions.
>

So the final solution to handle all the concerns looks like: accurately
mark the overlay inode dirty on modification, and re-mark dirty only for
mmapped files in ->write_inode().

Hi Miklos, Jan

Will you agree with new proposal above?
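
In code, the idea would be roughly the following (untested sketch;
ovl_mark_inode_dirty() is the helper added earlier in this series):

static int ovl_write_inode(struct inode *inode,
			   struct writeback_control *wbc)
{
	struct ovl_fs *ofs = inode->i_sb->s_fs_info;
	struct inode *upper = ovl_inode_upper(inode);
	int ret;

	if (!upper || !ovl_should_sync(ofs))
		return 0;

	/* Sync dirty pages and metadata of the upper inode. */
	ret = write_inode_now(upper, wbc->sync_mode == WB_SYNC_ALL);

	/*
	 * A writable mmap can dirty upper pages without going through
	 * overlayfs, so this is the only case that still needs an
	 * unconditional re-mark.
	 */
	if (mapping_writably_mapped(upper->i_mapping))
		ovl_mark_inode_dirty(inode);

	return ret;
}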



Thanks,
Chengguang

2021-12-01 07:19:32

by Amir Goldstein

[permalink] [raw]
Subject: Re: [RFC PATCH v5 06/10] ovl: implement overlayfs' ->write_inode operation

On Wed, Dec 1, 2021 at 8:31 AM Chengguang Xu <[email protected]> wrote:
>
>
> ---- 在 星期三, 2021-12-01 10:37:15 Chengguang Xu <[email protected]> 撰写 ----
> >
> > ---- 在 星期三, 2021-12-01 03:04:59 Amir Goldstein <[email protected]> 撰写 ----
> > > > > I was thinking about this a bit more and I don't think I buy this
> > > > > explanation. What I rather think is happening is that real work for syncfs
> > > > > (writeback_inodes_sb() and sync_inodes_sb() calls) gets offloaded to a flush
> > > > > worker. E.g. writeback_inodes_sb() ends up calling
> > > > > __writeback_inodes_sb_nr() which does:
> > > > >
> > > > > bdi_split_work_to_wbs()
> > > > > wb_wait_for_completion()
> > > > >
> > > > > So you don't see the work done in the times accounted to your test
> > > > > program. But in practice the flush worker is indeed burning 1.3s worth of
> > > > > CPU to scan the 1 million inode list and do nothing.
> > > > >
> > > >
> > > > That makes sense. However, in real container use case, the upper dir is always empty,
> > > > so I don't think there is meaningful difference compare to accurately marking overlay
> > > > inode dirty.
> > > >
> > >
> > > It's true the that is a very common case, but...
> > >
> > > > I'm not very familiar with other use cases of overlayfs except container, should we consider
> > > > other use cases? Maybe we can also ignore the cpu burden because those use cases don't
> > > > have density deployment like container.
> > > >
> > >
> > > metacopy feature was developed for the use case of a container
> > > that chowns all the files in the lower image.
> > >
> > > In that case, which is now also quite common, all the overlay inodes are
> > > upper inodes.
> > >
> >
> > Regardless of metacopy or datacopy, that copy-up has already modified overlay inode
> > so initialy marking dirty to all overlay inodes which have upper inode will not be a serious
> > problem in this case too, right?
> >
> > I guess maybe you more concern about the re-mark dirtiness on above use case.
> >
> >
> >
> > > What about only re-mark overlay inode dirty if upper inode is dirty or is
> > > writeably mmapped.
> > > For other cases, it is easy to know when overlay inode becomes dirty?
> > > Didn't you already try this?
> > >
> >
> > Yes, I've tried that approach in previous version but as Miklos pointed out in the
> > feedback there are a few of racy conditions.
> >

Right..

>
> So the final solution to handle all the concerns looks like accurately mark overlay inode
> diry on modification and re-mark dirty only for mmaped file in ->write_inode().
>
> Hi Miklos, Jan
>
> Will you agree with new proposal above?
>

Maybe you can still pull off a simpler version by re-marking dirty only
a writably mmapped upper AND inode_is_open_for_write(upper)?

If I am not mistaken, if you always mark the overlay inode dirty on
ovl_flush() of an FMODE_WRITE file, there is nothing that can make the
upper inode dirty after the last close (if upper is not mmapped), so one
more inode sync should be enough. No?

Thanks,
Amir.

2021-12-01 13:47:52

by Jan Kara

[permalink] [raw]
Subject: Re: [RFC PATCH v5 06/10] ovl: implement overlayfs' ->write_inode operation

On Wed 01-12-21 09:19:17, Amir Goldstein wrote:
> On Wed, Dec 1, 2021 at 8:31 AM Chengguang Xu <[email protected]> wrote:
> > So the final solution to handle all the concerns looks like accurately
> > mark overlay inode diry on modification and re-mark dirty only for
> > mmaped file in ->write_inode().
> >
> > Hi Miklos, Jan
> >
> > Will you agree with new proposal above?
> >
>
> Maybe you can still pull off a simpler version by remarking dirty only
> writably mmapped upper AND inode_is_open_for_write(upper)?

Well, if an inode is writably mapped, it must also be open for write,
mustn't it? The VMA of the mapping will hold the file open. So
re-marking the overlay inode dirty during writeback while
inode_is_open_for_write(upper) looks reasonably easy, and presumably
there won't be that many inodes open for writing for this to become a
big overhead?
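
Concretely, something like this in ->write_inode() (sketch, assuming
the ovl_mark_inode_dirty() helper from this series):

	/*
	 * The upper inode can still be dirtied behind our back while
	 * somebody holds it open for write (which covers the writably
	 * mmapped case as well), so keep the overlay inode dirty.
	 */
	if (inode_is_open_for_write(upper))
		ovl_mark_inode_dirty(inode);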

> If I am not mistaken, if you always mark overlay inode dirty on ovl_flush()
> of FMODE_WRITE file, there is nothing that can make upper inode dirty
> after last close (if upper is not mmaped), so one more inode sync should
> be enough. No?

But we still need to catch other dirtying events like timestamp updates,
truncate(2) etc. to mark overlay inode dirty. Not sure how reliably that
can be done...

Honza
--
Jan Kara <[email protected]>
SUSE Labs, CR

2021-12-01 15:00:44

by Chengguang Xu

[permalink] [raw]
Subject: Re: [RFC PATCH v5 06/10] ovl: implement overlayfs' ->write_inode operation

---- 在 星期三, 2021-12-01 21:46:10 Jan Kara <[email protected]> 撰写 ----
> On Wed 01-12-21 09:19:17, Amir Goldstein wrote:
> > On Wed, Dec 1, 2021 at 8:31 AM Chengguang Xu <[email protected]> wrote:
> > > So the final solution to handle all the concerns looks like accurately
> > > mark overlay inode diry on modification and re-mark dirty only for
> > > mmaped file in ->write_inode().
> > >
> > > Hi Miklos, Jan
> > >
> > > Will you agree with new proposal above?
> > >
> >
> > Maybe you can still pull off a simpler version by remarking dirty only
> > writably mmapped upper AND inode_is_open_for_write(upper)?
>
> Well, if inode is writeably mapped, it must be also open for write, doesn't
> it?

That's right.


> The VMA of the mapping will hold file open.

It's a bit tricky, but currently ovl_mmap() replaces the file with the
realfile from the upper layer and releases the overlayfs file. So the
overlayfs file itself has no relationship with the VMA anymore after
mmap().
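
For reference, ovl_mmap() currently does roughly this (trimmed from
fs/overlayfs/file.c):

static int ovl_mmap(struct file *file, struct vm_area_struct *vma)
{
	struct file *realfile = file->private_data;
	const struct cred *old_cred;
	int ret;

	if (!realfile->f_op->mmap)
		return -ENODEV;

	if (WARN_ON(file != vma->vm_file))
		return -EIO;

	/*
	 * From here on the VMA references the real (upper) file and
	 * the overlay file is dropped, so dirtying through the mapping
	 * never goes through overlayfs.
	 */
	vma_set_file(vma, realfile);

	old_cred = ovl_override_creds(file_inode(file)->i_sb);
	ret = call_mmap(vma, vma->vm_file);
	revert_creds(old_cred);
	ovl_file_accessed(file);

	return ret;
}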


Thanks,
Chengguang


> So remarking overlay inode
> dirty during writeback while inode_is_open_for_write(upper) looks like
> reasonably easy and presumably there won't be that many inodes open for
> writing for this to become big overhead?
>
> > If I am not mistaken, if you always mark overlay inode dirty on ovl_flush()
> > of FMODE_WRITE file, there is nothing that can make upper inode dirty
> > after last close (if upper is not mmaped), so one more inode sync should
> > be enough. No?
>
> But we still need to catch other dirtying events like timestamp updates,
> truncate(2) etc. to mark overlay inode dirty. Not sure how reliably that
> can be done...
>
> Honza
> --
> Jan Kara <[email protected]>
> SUSE Labs, CR
>

2021-12-01 16:27:27

by Chengguang Xu

[permalink] [raw]
Subject: Re: [RFC PATCH v5 06/10] ovl: implement overlayfs' ->write_inode operation

---- 在 星期三, 2021-12-01 21:46:10 Jan Kara <[email protected]> 撰写 ----
> On Wed 01-12-21 09:19:17, Amir Goldstein wrote:
> > On Wed, Dec 1, 2021 at 8:31 AM Chengguang Xu <[email protected]> wrote:
> > > So the final solution to handle all the concerns looks like accurately
> > > mark overlay inode diry on modification and re-mark dirty only for
> > > mmaped file in ->write_inode().
> > >
> > > Hi Miklos, Jan
> > >
> > > Will you agree with new proposal above?
> > >
> >
> > Maybe you can still pull off a simpler version by remarking dirty only
> > writably mmapped upper AND inode_is_open_for_write(upper)?
>
> Well, if inode is writeably mapped, it must be also open for write, doesn't
> it? The VMA of the mapping will hold file open. So remarking overlay inode
> dirty during writeback while inode_is_open_for_write(upper) looks like
> reasonably easy and presumably there won't be that many inodes open for
> writing for this to become big overhead?
>
> > If I am not mistaken, if you always mark overlay inode dirty on ovl_flush()
> > of FMODE_WRITE file, there is nothing that can make upper inode dirty
> > after last close (if upper is not mmaped), so one more inode sync should
> > be enough. No?
>
> But we still need to catch other dirtying events like timestamp updates,
> truncate(2) etc. to mark overlay inode dirty. Not sure how reliably that
> can be done...
>

To be honest, I don't fully understand the ->flush() logic in
overlayfs. Why should we open a new underlying file when calling
->flush()? Is it still correct in the case where the lower layer is
opened first and the file is then copied up?
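
For context, the current implementation looks roughly like this
(fs/overlayfs/file.c):

static int ovl_flush(struct file *file, fl_owner_t id)
{
	struct fd real;
	const struct cred *old_cred;
	int err;

	/*
	 * Re-resolve the real file: if the file was copied up since
	 * open, this yields an upper fd, so the flush goes to the
	 * upper layer even for an fd originally opened on lower.
	 */
	err = ovl_real_fdget(file, &real);
	if (err)
		return err;

	if (real.file->f_op->flush) {
		old_cred = ovl_override_creds(file_inode(file)->i_sb);
		err = real.file->f_op->flush(real.file, id);
		revert_creds(old_cred);
	}
	fdput(real);

	return err;
}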


Thanks,
Chengguang








2021-12-01 22:47:43

by Amir Goldstein

[permalink] [raw]
Subject: Re: [RFC PATCH v5 06/10] ovl: implement overlayfs' ->write_inode operation

On Wed, Dec 1, 2021 at 6:24 PM Chengguang Xu <[email protected]> wrote:
>
> ---- 在 星期三, 2021-12-01 21:46:10 Jan Kara <[email protected]> 撰写 ----
> > On Wed 01-12-21 09:19:17, Amir Goldstein wrote:
> > > On Wed, Dec 1, 2021 at 8:31 AM Chengguang Xu <[email protected]> wrote:
> > > > So the final solution to handle all the concerns looks like accurately
> > > > mark overlay inode diry on modification and re-mark dirty only for
> > > > mmaped file in ->write_inode().
> > > >
> > > > Hi Miklos, Jan
> > > >
> > > > Will you agree with new proposal above?
> > > >
> > >
> > > Maybe you can still pull off a simpler version by remarking dirty only
> > > writably mmapped upper AND inode_is_open_for_write(upper)?
> >
> > Well, if inode is writeably mapped, it must be also open for write, doesn't
> > it? The VMA of the mapping will hold file open. So remarking overlay inode
> > dirty during writeback while inode_is_open_for_write(upper) looks like
> > reasonably easy and presumably there won't be that many inodes open for
> > writing for this to become big overhead?

I think it should be ok and a good tradeoff of complexity vs. performance.

> >
> > > If I am not mistaken, if you always mark overlay inode dirty on ovl_flush()
> > > of FMODE_WRITE file, there is nothing that can make upper inode dirty
> > > after last close (if upper is not mmaped), so one more inode sync should
> > > be enough. No?
> >
> > But we still need to catch other dirtying events like timestamp updates,
> > truncate(2) etc. to mark overlay inode dirty. Not sure how reliably that
> > can be done...
> >

Oh yeah, we have those as well :)
All those cases should be covered by ovl_copyattr(), which updates the
ovl inode ctime/mtime, so always dirtying in ovl_copyattr() should be
good. I *think* the only call of ovl_copyattr() that should not dirty is
the one in ovl_inode_init(), so we need some special helper there.
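
Something like this (sketch; ovl_mark_inode_dirty() assumed from this
series, and the init path would call a variant without the dirtying):

static inline void ovl_copyattr(struct inode *from, struct inode *to)
{
	to->i_uid = from->i_uid;
	to->i_gid = from->i_gid;
	to->i_mode = from->i_mode;
	to->i_atime = from->i_atime;
	to->i_mtime = from->i_mtime;
	to->i_ctime = from->i_ctime;
	i_size_write(to, i_size_read(from));

	/* Any attr copy-up implies a modification of the ovl inode. */
	ovl_mark_inode_dirty(to);
}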

>
> To be honest I even don't fully understand what's the ->flush() logic in overlayfs.
> Why should we open new underlying file when calling ->flush()?
> Is it still correct in the case of opening lower layer first then copy-uped case?
>

The semantics of flush() are far from uniform across filesystems.
Most local filesystems do nothing on close.
Most network filesystems only flush dirty data when a writer closes a
file, but not when a reader closes a file.
It is hard to imagine that applications rely on flush-on-close behavior
for rdonly fds, and I agree that flushing only if the original fd was
upper makes more sense. So I am not sure it is really essential for
overlayfs to open an upper rdonly fd just to do whatever the upper fs
would have done on close of an rdonly fd, but maybe there is no good
reason to change this behavior either.

Thanks,
Amir.

2021-12-01 23:23:35

by Amir Goldstein

[permalink] [raw]
Subject: Re: ovl_flush() behavior

> >
> > To be honest I even don't fully understand what's the ->flush() logic in overlayfs.
> > Why should we open new underlying file when calling ->flush()?
> > Is it still correct in the case of opening lower layer first then copy-uped case?
> >
>
> The semantics of flush() are far from being uniform across filesystems.
> most local filesystems do nothing on close.
> most network fs only flush dirty data when a writer closes a file
> but not when a reader closes a file.
> It is hard to imagine that applications rely on flush-on-close of
> rdonly fd behavior and I agree that flushing only if original fd was upper
> makes more sense, so I am not sure if it is really essential for
> overlayfs to open an upper rdonly fd just to do whatever the upper fs
> would have done on close of rdonly fd, but maybe there is no good
> reason to change this behavior either.
>

On second thought, I think there may be a good reason to change
ovl_flush(), otherwise I wouldn't have submitted commit
a390ccb316be ("fuse: add FOPEN_NOFLUSH") - I did observe
applications that frequently open short-lived rdonly fds and suffered
undesired latencies on close().

As for "changing existing behavior", I think that most filesystems used
as upper do not implement flush at all.
Using fuse/virtiofs as overlayfs upper is quite new, so maybe that
is not a problem and maybe the new behavior would be preferred by those
users?

Thanks,
Amir.

2021-12-02 02:12:22

by Chengguang Xu

[permalink] [raw]
Subject: Re: ovl_flush() behavior


---- 在 星期四, 2021-12-02 07:23:17 Amir Goldstein <[email protected]> 撰写 ----
> > >
> > > To be honest I even don't fully understand what's the ->flush() logic in overlayfs.
> > > Why should we open new underlying file when calling ->flush()?
> > > Is it still correct in the case of opening lower layer first then copy-uped case?
> > >
> >
> > The semantics of flush() are far from being uniform across filesystems.
> > most local filesystems do nothing on close.
> > most network fs only flush dirty data when a writer closes a file
> > but not when a reader closes a file.
> > It is hard to imagine that applications rely on flush-on-close of
> > rdonly fd behavior and I agree that flushing only if original fd was upper
> > makes more sense, so I am not sure if it is really essential for
> > overlayfs to open an upper rdonly fd just to do whatever the upper fs
> > would have done on close of rdonly fd, but maybe there is no good
> > reason to change this behavior either.
> >
>
> On second thought, I think there may be a good reason to change
> ovl_flush() otherwise I wouldn't have submitted commit
> a390ccb316be ("fuse: add FOPEN_NOFLUSH") - I did observe
> applications that frequently open short lived rdonly fds and suffered
> undesired latencies on close().
>
> As for "changing existing behavior", I think that most fs used as
> upper do not implement flush at all.
> Using fuse/virtiofs as overlayfs upper is quite new, so maybe that
> is not a problem and maybe the new behavior would be preferred
> for those users?
>

So does that mean simply redirecting the ->flush request to the
original underlying realfile?


Thanks,
Chengguang


2021-12-02 15:14:37

by Vivek Goyal

[permalink] [raw]
Subject: Re: ovl_flush() behavior

On Thu, Dec 02, 2021 at 01:23:17AM +0200, Amir Goldstein wrote:
> > >
> > > To be honest I even don't fully understand what's the ->flush() logic in overlayfs.
> > > Why should we open new underlying file when calling ->flush()?
> > > Is it still correct in the case of opening lower layer first then copy-uped case?
> > >
> >
> > The semantics of flush() are far from being uniform across filesystems.
> > most local filesystems do nothing on close.
> > most network fs only flush dirty data when a writer closes a file
> > but not when a reader closes a file.
> > It is hard to imagine that applications rely on flush-on-close of
> > rdonly fd behavior and I agree that flushing only if original fd was upper
> > makes more sense, so I am not sure if it is really essential for
> > overlayfs to open an upper rdonly fd just to do whatever the upper fs
> > would have done on close of rdonly fd, but maybe there is no good
> > reason to change this behavior either.
> >
>
> On second thought, I think there may be a good reason to change
> ovl_flush() otherwise I wouldn't have submitted commit
> a390ccb316be ("fuse: add FOPEN_NOFLUSH") - I did observe
> applications that frequently open short lived rdonly fds and suffered
> undesired latencies on close().
>
> As for "changing existing behavior", I think that most fs used as
> upper do not implement flush at all.
> Using fuse/virtiofs as overlayfs upper is quite new, so maybe that
> is not a problem and maybe the new behavior would be preferred
> for those users?

It probably will be nice not to send a flush to the fuse server when it
is not required.

Right now in virtiofsd, I see that we are depending on flush being
sent, as we are dealing with remote posix lock magic. I am supporting
remote posix locks in virtiofs, and virtiofsd is building these on top
of open file description locks on the host. (We can't use posix locks on
the host as these locks are per process, and virtiofsd is a single
process working on behalf of all the guest processes, so unexpected
things happen.)

When an fd is being closed, a flush request is sent, and along with it
we also send the "lock_owner".

inarg.lock_owner = fuse_lock_owner_id(fm->fc, id);

We basically use this to keep track of which process is closing the fd
and release the associated OFD locks on the host. /me needs to dive into
the details to explain it better. Will do that if need be.
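
The guest side looks roughly like this (trimmed from fs/fuse/file.c):

static int fuse_flush(struct file *file, fl_owner_t id)
{
	struct inode *inode = file_inode(file);
	struct fuse_mount *fm = get_fuse_mount(inode);
	struct fuse_file *ff = file->private_data;
	struct fuse_flush_in inarg;
	FUSE_ARGS(args);

	memset(&inarg, 0, sizeof(inarg));
	inarg.fh = ff->fh;
	/* Tells the server which process is closing the fd. */
	inarg.lock_owner = fuse_lock_owner_id(fm->fc, id);

	args.opcode = FUSE_FLUSH;
	args.nodeid = get_node_id(inode);
	args.in_numargs = 1;
	args.in_args[0].size = sizeof(inarg);
	args.in_args[0].value = &inarg;
	args.force = true;

	return fuse_simple_request(fm, &args);
}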

Bottom line is that as of now virtiofsd seems to be relying on receiving
FLUSH requests when remote posix locks are enabled. Maybe we can set
FOPEN_NOFLUSH when remote posix locks are not enabled.

Thanks
Vivek


2021-12-02 15:20:45

by Vivek Goyal

[permalink] [raw]
Subject: Re: ovl_flush() behavior

On Thu, Dec 02, 2021 at 10:11:39AM +0800, Chengguang Xu wrote:
>
> ---- 在 星期四, 2021-12-02 07:23:17 Amir Goldstein <[email protected]> 撰写 ----
> > > >
> > > > To be honest I even don't fully understand what's the ->flush() logic in overlayfs.
> > > > Why should we open new underlying file when calling ->flush()?
> > > > Is it still correct in the case of opening lower layer first then copy-uped case?
> > > >
> > >
> > > The semantics of flush() are far from being uniform across filesystems.
> > > most local filesystems do nothing on close.
> > > most network fs only flush dirty data when a writer closes a file
> > > but not when a reader closes a file.
> > > It is hard to imagine that applications rely on flush-on-close of
> > > rdonly fd behavior and I agree that flushing only if original fd was upper
> > > makes more sense, so I am not sure if it is really essential for
> > > overlayfs to open an upper rdonly fd just to do whatever the upper fs
> > > would have done on close of rdonly fd, but maybe there is no good
> > > reason to change this behavior either.
> > >
> >
> > On second thought, I think there may be a good reason to change
> > ovl_flush() otherwise I wouldn't have submitted commit
> > a390ccb316be ("fuse: add FOPEN_NOFLUSH") - I did observe
> > applications that frequently open short lived rdonly fds and suffered
> > undesired latencies on close().
> >
> > As for "changing existing behavior", I think that most fs used as
> > upper do not implement flush at all.
> > Using fuse/virtiofs as overlayfs upper is quite new, so maybe that
> > is not a problem and maybe the new behavior would be preferred
> > for those users?
> >
>
> So is that mean simply redirect the ->flush request to original underlying realfile?

If the file has been copied up since open(), then flush should go on upper
file, right?

I think Amir is asking whether we can optimize flush in overlay and not
call ->flush at all if the file was opened read-only, IIUC.

In the case of fuse he left it to the server. If that's the case, then
in the case of overlayfs it should be left to the underlying filesystem
as well? Otherwise it might happen that the underlying filesystem (like
virtiofs) expects ->flush() and overlayfs decided not to call it because
the file was read-only.

So I will lean towards continuing to call ->flush in overlay and trying
to optimize virtiofsd to set FOPEN_NOFLUSH when not required.

Thanks
Vivek


2021-12-02 16:00:15

by Amir Goldstein

[permalink] [raw]
Subject: Re: ovl_flush() behavior

On Thu, Dec 2, 2021 at 5:20 PM Vivek Goyal <[email protected]> wrote:
>
> On Thu, Dec 02, 2021 at 10:11:39AM +0800, Chengguang Xu wrote:
> >
> > ---- 在 星期四, 2021-12-02 07:23:17 Amir Goldstein <[email protected]> 撰写 ----
> > > > >
> > > > > To be honest I even don't fully understand what's the ->flush() logic in overlayfs.
> > > > > Why should we open new underlying file when calling ->flush()?
> > > > > Is it still correct in the case of opening lower layer first then copy-uped case?
> > > > >
> > > >
> > > > The semantics of flush() are far from being uniform across filesystems.
> > > > most local filesystems do nothing on close.
> > > > most network fs only flush dirty data when a writer closes a file
> > > > but not when a reader closes a file.
> > > > It is hard to imagine that applications rely on flush-on-close of
> > > > rdonly fd behavior and I agree that flushing only if original fd was upper
> > > > makes more sense, so I am not sure if it is really essential for
> > > > overlayfs to open an upper rdonly fd just to do whatever the upper fs
> > > > would have done on close of rdonly fd, but maybe there is no good
> > > > reason to change this behavior either.
> > > >
> > >
> > > On second thought, I think there may be a good reason to change
> > > ovl_flush() otherwise I wouldn't have submitted commit
> > > a390ccb316be ("fuse: add FOPEN_NOFLUSH") - I did observe
> > > applications that frequently open short lived rdonly fds and suffered
> > > undesired latencies on close().
> > >
> > > As for "changing existing behavior", I think that most fs used as
> > > upper do not implement flush at all.
> > > Using fuse/virtiofs as overlayfs upper is quite new, so maybe that
> > > is not a problem and maybe the new behavior would be preferred
> > > for those users?
> > >
> >
> > So is that mean simply redirect the ->flush request to original underlying realfile?
>
> If the file has been copied up since open(), then flush should go on upper
> file, right?
>
> I think Amir is talking about that can we optimize flush in overlay and
> not call ->flush at all if file was opened read-only, IIUC.
>

Maybe that's what I wrote, but what I meant was: if the file was opened
as lower read-only and later copied up, I am not sure we should bother
with ovl_open_realfile() for flushing upper.

> In case of fuse he left it to server. If that's the case, then in case
> of overlayfs, it should be left to underlyng filesystem as well?
> Otherwise, it might happen underlying filesystem (like virtiofs) might
> be expecting ->flush() and overlayfs decided not to call it because
> file was read only.
>

Certainly, if upper wants flush on rdonly file we must call flush on
close of rdonly file *that was opened on upper*.

But if we opened an rdonly file on lower, even if lower is virtiofs,
does virtiofsd need this flush? That same file on the server was not
supposed to be written by anyone.
If virtiofsd really needs this flush then it is a problem already today,
because if lower file was copied up since open rdonly, virtiofsd
is not going to get the flushes for the lower file (only the release).

However, I now realize that if we opened the file rdonly on lower, we
may have later opened a short-lived realfile on upper for read post
copy-up, and we never issued a flush for this short-lived rdonly upper
fd. So actually, unless we store a flag or something that says we never
opened an upper realfile, we should keep the current behavior.

> So I will lean towards continue to call ->flush in overlay and try to
> optimize virtiofsd to set FOPEN_NOFLUSH when not required.
>

Makes sense.
Calling flush() on fs that does nothing with it doesn't hurt.

Thanks,
Amir.

2021-12-02 22:00:16

by Vivek Goyal

[permalink] [raw]
Subject: Re: ovl_flush() behavior

On Thu, Dec 02, 2021 at 05:59:56PM +0200, Amir Goldstein wrote:
> On Thu, Dec 2, 2021 at 5:20 PM Vivek Goyal <[email protected]> wrote:
> >
> > On Thu, Dec 02, 2021 at 10:11:39AM +0800, Chengguang Xu wrote:
> > >
> > > ---- 在 星期四, 2021-12-02 07:23:17 Amir Goldstein <[email protected]> 撰写 ----
> > > > > >
> > > > > > To be honest I even don't fully understand what's the ->flush() logic in overlayfs.
> > > > > > Why should we open new underlying file when calling ->flush()?
> > > > > > Is it still correct in the case of opening lower layer first then copy-uped case?
> > > > > >
> > > > >
> > > > > The semantics of flush() are far from being uniform across filesystems.
> > > > > most local filesystems do nothing on close.
> > > > > most network fs only flush dirty data when a writer closes a file
> > > > > but not when a reader closes a file.
> > > > > It is hard to imagine that applications rely on flush-on-close of
> > > > > rdonly fd behavior and I agree that flushing only if original fd was upper
> > > > > makes more sense, so I am not sure if it is really essential for
> > > > > overlayfs to open an upper rdonly fd just to do whatever the upper fs
> > > > > would have done on close of rdonly fd, but maybe there is no good
> > > > > reason to change this behavior either.
> > > > >
> > > >
> > > > On second thought, I think there may be a good reason to change
> > > > ovl_flush() otherwise I wouldn't have submitted commit
> > > > a390ccb316be ("fuse: add FOPEN_NOFLUSH") - I did observe
> > > > applications that frequently open short lived rdonly fds and suffered
> > > > undesired latencies on close().
> > > >
> > > > As for "changing existing behavior", I think that most fs used as
> > > > upper do not implement flush at all.
> > > > Using fuse/virtiofs as overlayfs upper is quite new, so maybe that
> > > > is not a problem and maybe the new behavior would be preferred
> > > > for those users?
> > > >
> > >
> > > So is that mean simply redirect the ->flush request to original underlying realfile?
> >
> > If the file has been copied up since open(), then flush should go on upper
> > file, right?
> >
> > I think Amir is talking about that can we optimize flush in overlay and
> > not call ->flush at all if file was opened read-only, IIUC.
> >
>
> Maybe that's what I wrote, but what I meant was if file was open as
> lower read-only and later copied up, not sure we should bother with
> ovl_open_realfile() for flushing upper.

I am not sure either. Miklos might have thoughts on this.

>
> > In case of fuse he left it to server. If that's the case, then in case
> > of overlayfs, it should be left to underlyng filesystem as well?
> > Otherwise, it might happen underlying filesystem (like virtiofs) might
> > be expecting ->flush() and overlayfs decided not to call it because
> > file was read only.
> >
>
> Certainly, if upper wants flush on rdonly file we must call flush on
> close of rdonly file *that was opened on upper*.
>
> But if we opened rdonly file on lower, even if lower is virtiofs, does
> virtiosfd needs this flush?

Right now virtiofsd seems to care about flush for read-only files for
remote posix locks. Given that overlayfs does not register an
f_op->lock() handler, my understanding is that all locking will be done
by the VFS on the overlayfs inode and it will never reach the level of
virtiofs (lower or upper). If that's the case, then at least from the
locking perspective we don't need flush on lower for virtiofs.

> that same file on the server was not supposed
> to be written by anyone.
> If virtiofsd really needs this flush then it is a problem already today,
> because if lower file was copied up since open rdonly, virtiofsd
> is not going to get the flushes for the lower file (only the release).

So virtiofs will miss flushes on lower only if the file got copied up
after being opened on lower, right? So far nobody has complained. But if
there is a use case somewhere, we might have to issue a flush on lower
as well.

>
> However, I now realize that if we opened file rdonly on lower,
> we may have later opened a short lived realfile on upper for read post
> copy up and we never issued a flush for this short live rdonly upper fd.
> So actually, unless we store a flag or something that says that
> we never opened upper realfile, we should keep current behavior.

I am assuming you are referring to ovl_read_iter(), where it will open
an upper file for read and then call fdput(real). So why should we issue
a flush on upper in this case?

I thought flush needs to be issued only when the overlay "fd" as seen
by user space is closed (and not when internal fds opened by overlay are
closed).

So the real question is: do we need to issue flush when the fd was
opened read-only? If yes, then we probably need to issue flush both on
lower and upper (and not only on upper). But if flush is to be issued
only for an fd which was writable, then issuing it on upper makes
sense.

>
> > So I will lean towards continue to call ->flush in overlay and try to
> > optimize virtiofsd to set FOPEN_NOFLUSH when not required.
> >
>
> Makes sense.
> Calling flush() on fs that does nothing with it doesn't hurt.

This probably is safest right now, given that virtiofs expects flush to
be issued even for a read-only fd (in the case of remote posix locks).
Though, separately, when overlayfs is sitting on top we will not be
using the remote posix lock functionality of virtiofs, IIUC.

Vivek


2021-12-05 14:08:10

by Chengguang Xu

[permalink] [raw]
Subject: Re: [RFC PATCH v5 06/10] ovl: implement overlayfs' ->write_inode operation

---- 在 星期四, 2021-12-02 06:47:25 Amir Goldstein <[email protected]> 撰写 ----
> On Wed, Dec 1, 2021 at 6:24 PM Chengguang Xu <[email protected]> wrote:
> >
> > ---- 在 星期三, 2021-12-01 21:46:10 Jan Kara <[email protected]> 撰写 ----
> > > On Wed 01-12-21 09:19:17, Amir Goldstein wrote:
> > > > On Wed, Dec 1, 2021 at 8:31 AM Chengguang Xu <[email protected]> wrote:
> > > > > So the final solution to handle all the concerns looks like accurately
> > > > > mark overlay inode diry on modification and re-mark dirty only for
> > > > > mmaped file in ->write_inode().
> > > > >
> > > > > Hi Miklos, Jan
> > > > >
> > > > > Will you agree with new proposal above?
> > > > >
> > > >
> > > > Maybe you can still pull off a simpler version by remarking dirty only
> > > > writably mmapped upper AND inode_is_open_for_write(upper)?
> > >
> > > Well, if inode is writeably mapped, it must be also open for write, doesn't
> > > it? The VMA of the mapping will hold file open. So remarking overlay inode
> > > dirty during writeback while inode_is_open_for_write(upper) looks like
> > > reasonably easy and presumably there won't be that many inodes open for
> > > writing for this to become big overhead?
>
> I think it should be ok and a good tradeoff of complexity vs. performance.

IMO, marking dirtiness on write is relatively simple, so I think we can
mark the overlayfs inode dirty during real writes and only re-mark the
writably mmapped case unconditionally in ->write_inode().


>
> > >
> > > > If I am not mistaken, if you always mark overlay inode dirty on ovl_flush()
> > > > of FMODE_WRITE file, there is nothing that can make upper inode dirty
> > > > after last close (if upper is not mmaped), so one more inode sync should
> > > > be enough. No?
> > >
> > > But we still need to catch other dirtying events like timestamp updates,
> > > truncate(2) etc. to mark overlay inode dirty. Not sure how reliably that
> > > can be done...
> > >
>
> Oh yeh, we have those as well :)
> All those cases should be covered by ovl_copyattr() that updates the
> ovl inode ctime/mtime, so always dirty in ovl_copyattr() should be good.

Currently ovl_copyattr() does not cover all the cases, so I think we
still need to carefully check all the places that call mnt_want_write().


Thanks,
Chengguang



> I *think* the only case of ovl_copyattr() that should not dirty is in
> ovl_inode_init(), so need some special helper there.
>
> >
> > To be honest I even don't fully understand what's the ->flush() logic in overlayfs.
> > Why should we open new underlying file when calling ->flush()?
> > Is it still correct in the case of opening lower layer first then copy-uped case?
> >
>
> The semantics of flush() are far from being uniform across filesystems.
> most local filesystems do nothing on close.
> most network fs only flush dirty data when a writer closes a file
> but not when a reader closes a file.
> It is hard to imagine that applications rely on flush-on-close of
> rdonly fd behavior and I agree that flushing only if original fd was upper
> makes more sense, so I am not sure if it is really essential for
> overlayfs to open an upper rdonly fd just to do whatever the upper fs
> would have done on close of rdonly fd, but maybe there is no good
> reason to change this behavior either.
>
> Thanks,
> Amir.
>

2021-12-07 05:34:09

by Amir Goldstein

[permalink] [raw]
Subject: Re: [RFC PATCH v5 06/10] ovl: implement overlayfs' ->write_inode operation

On Sun, Dec 5, 2021 at 4:07 PM Chengguang Xu <[email protected]> wrote:
>
> ---- 在 星期四, 2021-12-02 06:47:25 Amir Goldstein <[email protected]> 撰写 ----
> > On Wed, Dec 1, 2021 at 6:24 PM Chengguang Xu <[email protected]> wrote:
> > >
> > > ---- 在 星期三, 2021-12-01 21:46:10 Jan Kara <[email protected]> 撰写 ----
> > > > On Wed 01-12-21 09:19:17, Amir Goldstein wrote:
> > > > > On Wed, Dec 1, 2021 at 8:31 AM Chengguang Xu <[email protected]> wrote:
> > > > > > So the final solution to handle all the concerns looks like accurately
> > > > > > mark overlay inode diry on modification and re-mark dirty only for
> > > > > > mmaped file in ->write_inode().
> > > > > >
> > > > > > Hi Miklos, Jan
> > > > > >
> > > > > > Will you agree with new proposal above?
> > > > > >
> > > > >
> > > > > Maybe you can still pull off a simpler version by remarking dirty only
> > > > > writably mmapped upper AND inode_is_open_for_write(upper)?
> > > >
> > > > Well, if inode is writeably mapped, it must be also open for write, doesn't
> > > > it? The VMA of the mapping will hold file open. So remarking overlay inode
> > > > dirty during writeback while inode_is_open_for_write(upper) looks like
> > > > reasonably easy and presumably there won't be that many inodes open for
> > > > writing for this to become big overhead?
> >
> > I think it should be ok and a good tradeoff of complexity vs. performance.
>
> IMO, mark dirtiness on write is relatively simple, so I think we can mark the
> overlayfs inode dirty during real write behavior and only remark writable mmap
> unconditionally in ->write_inode().
>

If by "on write" you mean on write/copy_file_range/splice_write/...
then yes I agree
since we have to cover all other mnt_want_write() cases anyway.

>
> >
> > > >
> > > > > If I am not mistaken, if you always mark overlay inode dirty on ovl_flush()
> > > > > of FMODE_WRITE file, there is nothing that can make upper inode dirty
> > > > > after last close (if upper is not mmaped), so one more inode sync should
> > > > > be enough. No?
> > > >
> > > > But we still need to catch other dirtying events like timestamp updates,
> > > > truncate(2) etc. to mark overlay inode dirty. Not sure how reliably that
> > > > can be done...
> > > >
> >
> > Oh yeh, we have those as well :)
> > All those cases should be covered by ovl_copyattr() that updates the
> > ovl inode ctime/mtime, so always dirty in ovl_copyattr() should be good.
>
> Currently ovl_copyattr() does not cover all the cases, so I think we still need to carefully
> check all the places of calling mnt_want_write().
>

A careful audit is always good, but if we do not have ovl_copyattr() in
a call site that should mark the inode dirty, then it sounds like a bug,
because the ovl inode ctime will not get updated. Do you know of any
such cases?

Thanks,
Amir.