2009-03-25 12:18:08

by Fengguang Wu

Subject: Re: [PATCH] writeback: reset inode dirty time when adding it back to empty s_dirty list

On Wed, Mar 25, 2009 at 07:51:10PM +0800, Jeff Layton wrote:
> On Wed, 25 Mar 2009 10:50:37 +0800
> Wu Fengguang <[email protected]> wrote:
>
> > > Given the right situation though (or maybe the right filesystem), it's
> > > not too hard to imagine this problem occurring even in current mainline
> > > code with an inode that's frequently being redirtied.
> >
> > My reasoning with recent kernel is: for kupdate, s_dirty enqueues only
> > happen in __mark_inode_dirty() and redirty_tail(). Newly dirtied
> > inodes will be parked in s_dirty for 30s. During which time the
> > actively being-redirtied inodes, if their dirtied_when is an old stuck
> > value, will be retried for writeback and then re-inserted into a
> > non-empty s_dirty queue and have their dirtied_when refreshed.
> >
>
> Doesn't that assume that there are new inodes that are being dirtied?
> If you only have the same inodes being redirtied and never any new
> ones, the problem still occurs, right?

Yes. But will a production server run months without making one single
new dirtied inode? (Just out of curiosity. Not that I'm not willing to
fix this possible issue.:)

> > > > ...I see no obvious reasons against unconditionally resetting dirtied_when.
> > > >
> > > > (a) Delaying an inode's writeback for 30s maybe too long - its blocking
> > > > condition may well go away within 1s. (b) And it would be very undesirable
> > > > if one big file is repeatedly redirtied hence its writeback being
> > > > delayed considerably.
> > > >
> > > > However, redirty_tail() currently only tries to speedup writeback-after-redirty
> > > > in a _best effort_ way. It at best partially hides the above issues,
> > > > if there are any. In particular, if (b) is possible, the bug should
> > > > already show up at least in some situations.
> > > >
> > > > For XFS, immediately sync of redirtied inode is actually discouraged:
> > > >
> > > > http://lkml.org/lkml/2008/1/16/491
> > > >
> > > >
> > >
> > > Ok, those are good points that I need to think about.
> > >
> > > Thanks for the help so far. I'd welcome any suggestions you have on
> > > how best to fix this.
> >
> > For NFS, is it desirable to retry a redirtied inode after 30s, or
> > after a shorter 5s, or after 0.1~5s? Or the exact timing simply
> > doesn't matter?
> >
>
> I don't really consider NFS to be a special case here. It just happens
> to be where we saw the problem originally. Some of its characteristics
> might make it easier to hit this, but I'm not certain of that.

There are now two possible solutions:
- unconditionally update dirtied_when in redirty_tail();
- keep dirtied_when and redirty inodes to a new dedicated queue.
The first one involves less code, the second one allows more flexible timing.
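
For illustration only (not part of the original mail), a rough sketch of the
two options against the 2.6.29-era fs/fs-writeback.c; the s_more_io_wait list
in option 2 is assumed from Wu's earlier out-of-tree patchset and does not
exist in mainline:

/* Option 1: unconditionally restamp the dirty time when redirtying. */
static void redirty_tail(struct inode *inode)
{
	struct super_block *sb = inode->i_sb;

	inode->dirtied_when = jiffies;
	list_move(&inode->i_list, &sb->s_dirty);
}

/* Option 2: keep dirtied_when untouched and park the inode on a dedicated
 * queue whose retry timing can be tuned independently of s_dirty. */
static void requeue_io_wait(struct inode *inode)
{
	struct super_block *sb = inode->i_sb;

	list_move(&inode->i_list, &sb->s_more_io_wait);
}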

NFS/XFS could be a good starting point for discussing the
requirements, so that we can reach a suitable solution.

Thanks,
Fengguang



2009-03-27 02:13:03

by Fengguang Wu

Subject: Re: [PATCH] writeback: reset inode dirty time when adding it back to empty s_dirty list

On Fri, Mar 27, 2009 at 01:03:27AM +0800, Jeff Layton wrote:
> On Wed, 25 Mar 2009 22:16:18 +0800
> Wu Fengguang <[email protected]> wrote:
>
> > >
> > > Actually, I think you were right. We still have this check in
> > > generic_sync_sb_inodes() even with Wu's January 2008 patches:
> > >
> > > /* Was this inode dirtied after sync_sb_inodes was called? */
> > > if (time_after(inode->dirtied_when, start))
> > > break;
> >
> > Yeah, ugly code. Jens' per-bdi flush daemons should eliminate it...
> >
>
> I had a look over Jens' patches and they seem to be more concerned with
> how the queues and daemons are organized (per-bdi rather than per-sb).
> The actual way that inodes flow between the queues and get written out
> don't look like they really change with his set.

OK, sorry that I've not carefully reviewed the per-bdi flushing patchset.

> They also don't eliminate the problematic check above. Regardless of
> whether your or Jens' patches make it in, I think we'll still need
> something like the following (untested) patch.
>
> If this looks ok, I'll flesh out the comments some and "officially" post
> it. Thoughts?

It's good in itself. However, with the more_io_wait queue, the first two
chunks will be eliminated. Mind if I carry this patch with my patchset?

Thanks,
Fengguang


> --------------[snip]-----------------
>
> From d10adff2d5f9a15d19c438119dbb2c410bd26e3c Mon Sep 17 00:00:00 2001
> From: Jeff Layton <[email protected]>
> Date: Thu, 26 Mar 2009 12:54:52 -0400
> Subject: [PATCH] writeback: guard against jiffies wraparound on inode->dirtied_when checks
>
> The dirtied_when value on an inode is supposed to represent the first
> time that an inode has one of its pages dirtied. This value is in units
> of jiffies. This value is used in several places in the writeback code
> to determine when to write out an inode.
>
> The problem is that these checks assume that dirtied_when is updated
> periodically. But if an inode is continuously being used for I/O it can
> be persistently marked as dirty and will continue to age. Once the time
> difference between dirtied_when and the jiffies value it is being
> compared to is greater than (or equal to) half the maximum of the
> jiffies type, the logic of the time_*() macros inverts and the opposite
> of what is needed is returned. On 32-bit architectures that's just under
> 25 days (assuming HZ == 1000).
>
> As the least-recently dirtied inode, it'll end up being the first one
> that pdflush will try to write out. sync_sb_inodes does this check
> however:
>
> 		/* Was this inode dirtied after sync_sb_inodes was called? */
> 		if (time_after(inode->dirtied_when, start))
> 			break;
>
> ...but now dirtied_when appears to be in the future. sync_sb_inodes
> bails out without attempting to write any dirty inodes. When this
> occurs, pdflush will stop writing out inodes for this superblock and
> nothing will unwedge it until jiffies moves out of the problematic
> window.
>
> This patch fixes this problem by changing the time_after checks against
> dirtied_when to also check whether dirtied_when appears to be in the
> future. If it does, then we consider the value to be in the past.
>
> This should shrink the problematic window to such a small period as not
> to matter.
>
> Signed-off-by: Jeff Layton <[email protected]>
> ---
> fs/fs-writeback.c | 11 +++++++----
> 1 files changed, 7 insertions(+), 4 deletions(-)
>
> diff --git a/fs/fs-writeback.c b/fs/fs-writeback.c
> index e3fe991..dba69a5 100644
> --- a/fs/fs-writeback.c
> +++ b/fs/fs-writeback.c
> @@ -196,8 +196,9 @@ static void redirty_tail(struct inode *inode)
>  		struct inode *tail_inode;
>  
>  		tail_inode = list_entry(sb->s_dirty.next, struct inode, i_list);
> -		if (!time_after_eq(inode->dirtied_when,
> -				tail_inode->dirtied_when))
> +		if (time_before(inode->dirtied_when,
> +				tail_inode->dirtied_when) ||
> +		    time_after(inode->dirtied_when, jiffies))
>  			inode->dirtied_when = jiffies;
>  	}
>  	list_move(&inode->i_list, &sb->s_dirty);
> @@ -231,7 +232,8 @@ static void move_expired_inodes(struct list_head *delaying_queue,
>  		struct inode *inode = list_entry(delaying_queue->prev,
>  						struct inode, i_list);
>  		if (older_than_this &&
> -			time_after(inode->dirtied_when, *older_than_this))
> +		    time_after(inode->dirtied_when, *older_than_this) &&
> +		    time_before_eq(inode->dirtied_when, jiffies))
>  			break;
>  		list_move(&inode->i_list, dispatch_queue);
>  	}
> @@ -493,7 +495,8 @@ void generic_sync_sb_inodes(struct super_block *sb,
>  		}
>  
>  		/* Was this inode dirtied after sync_sb_inodes was called? */
> -		if (time_after(inode->dirtied_when, start))
> +		if (time_after(inode->dirtied_when, start) &&
> +		    time_before_eq(inode->dirtied_when, jiffies))
>  			break;
>  
>  		/* Is another pdflush already flushing this queue? */
> --
> 1.5.5.6
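
To make the wraparound described in the changelog above concrete, here is a
small self-contained userspace sketch (an editorial illustration, not part of
the patch) that models the kernel's time_after() with fixed 32-bit arithmetic:

#include <stdio.h>
#include <stdint.h>

/* Userspace model of time_after() for a 32-bit jiffies counter: "a is after
 * b" only holds while the two stamps are less than half the range apart. */
static int time_after32(uint32_t a, uint32_t b)
{
	return (int32_t)(b - a) < 0;
}

int main(void)
{
	uint32_t dirtied_when = 1000;		/* stamped long ago */
	uint32_t start;

	/* 60 seconds later (HZ == 1000): dirtied_when is correctly "not after". */
	start = dirtied_when + 60 * 1000;
	printf("%d\n", time_after32(dirtied_when, start));	/* prints 0 */

	/* ~25 days later the delta exceeds half the range, the comparison
	 * inverts, and dirtied_when suddenly appears to be in the future. */
	start = dirtied_when + 0x80000001u;
	printf("%d\n", time_after32(dirtied_when, start));	/* prints 1 */

	return 0;
}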

2009-03-27 11:16:33

by Jeff Layton

Subject: Re: [PATCH] writeback: reset inode dirty time when adding it back to empty s_dirty list

On Fri, 27 Mar 2009 10:13:03 +0800
Wu Fengguang <[email protected]> wrote:

>
> > They also don't eliminate the problematic check above. Regardless of
> > whether your or Jens' patches make it in, I think we'll still need
> > something like the following (untested) patch.
> >
> > If this looks ok, I'll flesh out the comments some and "officially" post
> > it. Thoughts?
>
> It's good in itself. However with more_io_wait queue, the first two
> chunks will be eliminated. Mind I carry this patch with my patchset?
>

It makes sense to roll that fix in with the stuff you're doing.

If it's going to be a little while before your patches get taken into
mainline though, it might not hurt to go ahead and push my patch in as
an interim fix. It shouldn't change the behavior of the code in the
normal case of a short-lived dirtied_when value, and should guard
against major problems when there's a long-lived one.

--
Jeff Layton <[email protected]>

2009-03-25 13:14:33

by Jeff Layton

Subject: Re: [PATCH] writeback: reset inode dirty time when adding it back to empty s_dirty list

On Wed, 25 Mar 2009 20:17:43 +0800
Wu Fengguang <[email protected]> wrote:

> On Wed, Mar 25, 2009 at 07:51:10PM +0800, Jeff Layton wrote:
> > On Wed, 25 Mar 2009 10:50:37 +0800
> > Wu Fengguang <[email protected]> wrote:
> >
> > > > Given the right situation though (or maybe the right filesystem), it's
> > > > not too hard to imagine this problem occurring even in current mainline
> > > > code with an inode that's frequently being redirtied.
> > >
> > > My reasoning with recent kernel is: for kupdate, s_dirty enqueues only
> > > happen in __mark_inode_dirty() and redirty_tail(). Newly dirtied
> > > inodes will be parked in s_dirty for 30s. During which time the
> > > actively being-redirtied inodes, if their dirtied_when is an old stuck
> > > value, will be retried for writeback and then re-inserted into a
> > > non-empty s_dirty queue and have their dirtied_when refreshed.
> > >
> >
> > Doesn't that assume that there are new inodes that are being dirtied?
> > If you only have the same inodes being redirtied and never any new
> > ones, the problem still occurs, right?
>
> Yes. But will a production server run months without making one single
> new dirtied inode? (Just out of curiosity. Not that I'm not willing to
> fix this possible issue.:)
>

Yes. It's not that the box will run that long without creating a
single new dirtied inode, but rather that it won't necessarily create
one on all of its mounts. It's often the case that someone has a
mountpoint for a dedicated purpose.

Consider a host that has a mountpoint that contains logfiles that are
being heavily written. There's nothing that says that they must rotate
those logs over a particular period (assuming the fs has enough space,
etc). If the same ones are constantly being redirtied and no new
ones are created, then I think this problem can easily happen.

> > > > > ...I see no obvious reasons against unconditionally resetting dirtied_when.
> > > > >
> > > > > (a) Delaying an inode's writeback for 30s maybe too long - its blocking
> > > > > condition may well go away within 1s. (b) And it would be very undesirable
> > > > > if one big file is repeatedly redirtied hence its writeback being
> > > > > delayed considerably.
> > > > >
> > > > > However, redirty_tail() currently only tries to speedup writeback-after-redirty
> > > > > in a _best effort_ way. It at best partially hides the above issues,
> > > > > if there are any. In particular, if (b) is possible, the bug should
> > > > > already show up at least in some situations.
> > > > >
> > > > > For XFS, immediately sync of redirtied inode is actually discouraged:
> > > > >
> > > > > http://lkml.org/lkml/2008/1/16/491
> > > > >
> > > > >
> > > >
> > > > Ok, those are good points that I need to think about.
> > > >
> > > > Thanks for the help so far. I'd welcome any suggestions you have on
> > > > how best to fix this.
> > >
> > > For NFS, is it desirable to retry a redirtied inode after 30s, or
> > > after a shorter 5s, or after 0.1~5s? Or the exact timing simply
> > > doesn't matter?
> > >
> >
> > I don't really consider NFS to be a special case here. It just happens
> > to be where we saw the problem originally. Some of its characteristics
> > might make it easier to hit this, but I'm not certain of that.
>
> Now there are now two possible solutions:
> - unconditionally update dirtied_when in redirty_tail();
> - keep dirtied_when and redirty inodes to a new dedicated queue.
> The first one involves less code, the second one allows more flexible timing.
>
> NFS/XFS could be a good starting point for discussing the
> requirements, so that we can reach a suitable solution.
>

It sounds like it, yes. I saw that you posted some patches in January
(including your s_more_io_wait patch). I'll give those a closer look.
Adding the new s_more_io_wait queue is interesting and might sidestep
this problem nicely.

--
Jeff Layton <[email protected]>

2009-03-25 13:18:57

by Ian Kent

Subject: Re: [PATCH] writeback: reset inode dirty time when adding it back to empty s_dirty list

Jeff Layton wrote:
> On Wed, 25 Mar 2009 20:17:43 +0800
> Wu Fengguang <[email protected]> wrote:
>
>> On Wed, Mar 25, 2009 at 07:51:10PM +0800, Jeff Layton wrote:
>>> On Wed, 25 Mar 2009 10:50:37 +0800
>>> Wu Fengguang <[email protected]> wrote:
>>>
>>>>> Given the right situation though (or maybe the right filesystem), it's
>>>>> not too hard to imagine this problem occurring even in current mainline
>>>>> code with an inode that's frequently being redirtied.
>>>> My reasoning with recent kernel is: for kupdate, s_dirty enqueues only
>>>> happen in __mark_inode_dirty() and redirty_tail(). Newly dirtied
>>>> inodes will be parked in s_dirty for 30s. During which time the
>>>> actively being-redirtied inodes, if their dirtied_when is an old stuck
>>>> value, will be retried for writeback and then re-inserted into a
>>>> non-empty s_dirty queue and have their dirtied_when refreshed.
>>>>
>>> Doesn't that assume that there are new inodes that are being dirtied?
>>> If you only have the same inodes being redirtied and never any new
>>> ones, the problem still occurs, right?
>> Yes. But will a production server run months without making one single
>> new dirtied inode? (Just out of curiosity. Not that I'm not willing to
>> fix this possible issue.:)
>>
>
> Yes. It's not that the box will run that long without creating a
> single new dirtied inode, but rather that it won't necessarily create
> one on all of its mounts. It's often the case that someone has a
> mountpoint for a dedicated purpose.
>
> Consider a host that has a mountpoint that contains logfiles that are
> being heavily written. There's nothing that says that they must rotate
> those logs over a particular period (assuming the fs has enough space,
> etc). If the same ones are constantly being redirtied and no new
> ones are created, then I think this problem can easily happen.
>
>>>>>> ...I see no obvious reasons against unconditionally resetting dirtied_when.
>>>>>>
>>>>>> (a) Delaying an inode's writeback for 30s maybe too long - its blocking
>>>>>> condition may well go away within 1s. (b) And it would be very undesirable
>>>>>> if one big file is repeatedly redirtied hence its writeback being
>>>>>> delayed considerably.
>>>>>>
>>>>>> However, redirty_tail() currently only tries to speedup writeback-after-redirty
>>>>>> in a _best effort_ way. It at best partially hides the above issues,
>>>>>> if there are any. In particular, if (b) is possible, the bug should
>>>>>> already show up at least in some situations.
>>>>>>
>>>>>> For XFS, immediately sync of redirtied inode is actually discouraged:
>>>>>>
>>>>>> http://lkml.org/lkml/2008/1/16/491
>>>>>>
>>>>>>
>>>>> Ok, those are good points that I need to think about.
>>>>>
>>>>> Thanks for the help so far. I'd welcome any suggestions you have on
>>>>> how best to fix this.
>>>> For NFS, is it desirable to retry a redirtied inode after 30s, or
>>>> after a shorter 5s, or after 0.1~5s? Or the exact timing simply
>>>> doesn't matter?
>>>>
>>> I don't really consider NFS to be a special case here. It just happens
>>> to be where we saw the problem originally. Some of its characteristics
>>> might make it easier to hit this, but I'm not certain of that.
>> Now there are now two possible solutions:
>> - unconditionally update dirtied_when in redirty_tail();
>> - keep dirtied_when and redirty inodes to a new dedicated queue.
>> The first one involves less code, the second one allows more flexible timing.
>>
>> NFS/XFS could be a good starting point for discussing the
>> requirements, so that we can reach a suitable solution.
>>
>
> It sounds like it, yes. I saw that you posted some patches in January
> (including your s_more_io_wait patch). I'll give those a closer look.
> Adding the new s_more_io_wait queue is interesting and might sidestep
> this problem nicely.
>

Yes, I was looking at that bit of code but, so far, I think it won't be
called for the case we are trying to describe.

Ian

2009-03-25 13:38:47

by Ian Kent

Subject: Re: [PATCH] writeback: reset inode dirty time when adding it back to empty s_dirty list

Ian Kent wrote:
> Jeff Layton wrote:
>> On Wed, 25 Mar 2009 20:17:43 +0800
>> Wu Fengguang <[email protected]> wrote:
>>
>>> On Wed, Mar 25, 2009 at 07:51:10PM +0800, Jeff Layton wrote:
>>>> On Wed, 25 Mar 2009 10:50:37 +0800
>>>> Wu Fengguang <[email protected]> wrote:
>>>>
>>>>>> Given the right situation though (or maybe the right filesystem), it's
>>>>>> not too hard to imagine this problem occurring even in current mainline
>>>>>> code with an inode that's frequently being redirtied.
>>>>> My reasoning with recent kernel is: for kupdate, s_dirty enqueues only
>>>>> happen in __mark_inode_dirty() and redirty_tail(). Newly dirtied
>>>>> inodes will be parked in s_dirty for 30s. During which time the
>>>>> actively being-redirtied inodes, if their dirtied_when is an old stuck
>>>>> value, will be retried for writeback and then re-inserted into a
>>>>> non-empty s_dirty queue and have their dirtied_when refreshed.
>>>>>
>>>> Doesn't that assume that there are new inodes that are being dirtied?
>>>> If you only have the same inodes being redirtied and never any new
>>>> ones, the problem still occurs, right?
>>> Yes. But will a production server run months without making one single
>>> new dirtied inode? (Just out of curiosity. Not that I'm not willing to
>>> fix this possible issue.:)
>>>
>> Yes. It's not that the box will run that long without creating a
>> single new dirtied inode, but rather that it won't necessarily create
>> one on all of its mounts. It's often the case that someone has a
>> mountpoint for a dedicated purpose.
>>
>> Consider a host that has a mountpoint that contains logfiles that are
>> being heavily written. There's nothing that says that they must rotate
>> those logs over a particular period (assuming the fs has enough space,
>> etc). If the same ones are constantly being redirtied and no new
>> ones are created, then I think this problem can easily happen.
>>
>>>>>>> ...I see no obvious reasons against unconditionally resetting dirtied_when.
>>>>>>>
>>>>>>> (a) Delaying an inode's writeback for 30s maybe too long - its blocking
>>>>>>> condition may well go away within 1s. (b) And it would be very undesirable
>>>>>>> if one big file is repeatedly redirtied hence its writeback being
>>>>>>> delayed considerably.
>>>>>>>
>>>>>>> However, redirty_tail() currently only tries to speedup writeback-after-redirty
>>>>>>> in a _best effort_ way. It at best partially hides the above issues,
>>>>>>> if there are any. In particular, if (b) is possible, the bug should
>>>>>>> already show up at least in some situations.
>>>>>>>
>>>>>>> For XFS, immediately sync of redirtied inode is actually discouraged:
>>>>>>>
>>>>>>> http://lkml.org/lkml/2008/1/16/491
>>>>>>>
>>>>>>>
>>>>>> Ok, those are good points that I need to think about.
>>>>>>
>>>>>> Thanks for the help so far. I'd welcome any suggestions you have on
>>>>>> how best to fix this.
>>>>> For NFS, is it desirable to retry a redirtied inode after 30s, or
>>>>> after a shorter 5s, or after 0.1~5s? Or the exact timing simply
>>>>> doesn't matter?
>>>>>
>>>> I don't really consider NFS to be a special case here. It just happens
>>>> to be where we saw the problem originally. Some of its characteristics
>>>> might make it easier to hit this, but I'm not certain of that.
>>> Now there are now two possible solutions:
>>> - unconditionally update dirtied_when in redirty_tail();
>>> - keep dirtied_when and redirty inodes to a new dedicated queue.
>>> The first one involves less code, the second one allows more flexible timing.
>>>
>>> NFS/XFS could be a good starting point for discussing the
>>> requirements, so that we can reach a suitable solution.
>>>
>> It sounds like it, yes. I saw that you posted some patches in January
>> (including your s_more_io_wait patch). I'll give those a closer look.
>> Adding the new s_more_io_wait queue is interesting and might sidestep
>> this problem nicely.
>>
>
> Yes, I was looking at that bit of code but, so far, I think it won't be
> called for the case we are trying to describe.

I take that back.
As Jeff pointed out I haven't seen these patches and can't seem to find
them in my fsdevel list folder, Wu can you send me a copy please?

Ian


2009-03-25 13:44:57

by Fengguang Wu

Subject: Re: [PATCH] writeback: reset inode dirty time when adding it back to empty s_dirty list

On Wed, Mar 25, 2009 at 09:38:47PM +0800, Ian Kent wrote:
> Ian Kent wrote:
> > Jeff Layton wrote:
> >> On Wed, 25 Mar 2009 20:17:43 +0800
> >> Wu Fengguang <[email protected]> wrote:
> >>
> >>> On Wed, Mar 25, 2009 at 07:51:10PM +0800, Jeff Layton wrote:
> >>>> On Wed, 25 Mar 2009 10:50:37 +0800
> >>>> Wu Fengguang <[email protected]> wrote:
> >>>>
> >>>>>> Given the right situation though (or maybe the right filesystem), it's
> >>>>>> not too hard to imagine this problem occurring even in current mainline
> >>>>>> code with an inode that's frequently being redirtied.
> >>>>> My reasoning with recent kernel is: for kupdate, s_dirty enqueues only
> >>>>> happen in __mark_inode_dirty() and redirty_tail(). Newly dirtied
> >>>>> inodes will be parked in s_dirty for 30s. During which time the
> >>>>> actively being-redirtied inodes, if their dirtied_when is an old stuck
> >>>>> value, will be retried for writeback and then re-inserted into a
> >>>>> non-empty s_dirty queue and have their dirtied_when refreshed.
> >>>>>
> >>>> Doesn't that assume that there are new inodes that are being dirtied?
> >>>> If you only have the same inodes being redirtied and never any new
> >>>> ones, the problem still occurs, right?
> >>> Yes. But will a production server run months without making one single
> >>> new dirtied inode? (Just out of curiosity. Not that I'm not willing to
> >>> fix this possible issue.:)
> >>>
> >> Yes. It's not that the box will run that long without creating a
> >> single new dirtied inode, but rather that it won't necessarily create
> >> one on all of its mounts. It's often the case that someone has a
> >> mountpoint for a dedicated purpose.
> >>
> >> Consider a host that has a mountpoint that contains logfiles that are
> >> being heavily written. There's nothing that says that they must rotate
> >> those logs over a particular period (assuming the fs has enough space,
> >> etc). If the same ones are constantly being redirtied and no new
> >> ones are created, then I think this problem can easily happen.
> >>
> >>>>>>> ...I see no obvious reasons against unconditionally resetting dirtied_when.
> >>>>>>>
> >>>>>>> (a) Delaying an inode's writeback for 30s maybe too long - its blocking
> >>>>>>> condition may well go away within 1s. (b) And it would be very undesirable
> >>>>>>> if one big file is repeatedly redirtied hence its writeback being
> >>>>>>> delayed considerably.
> >>>>>>>
> >>>>>>> However, redirty_tail() currently only tries to speedup writeback-after-redirty
> >>>>>>> in a _best effort_ way. It at best partially hides the above issues,
> >>>>>>> if there are any. In particular, if (b) is possible, the bug should
> >>>>>>> already show up at least in some situations.
> >>>>>>>
> >>>>>>> For XFS, immediately sync of redirtied inode is actually discouraged:
> >>>>>>>
> >>>>>>> http://lkml.org/lkml/2008/1/16/491
> >>>>>>>
> >>>>>>>
> >>>>>> Ok, those are good points that I need to think about.
> >>>>>>
> >>>>>> Thanks for the help so far. I'd welcome any suggestions you have on
> >>>>>> how best to fix this.
> >>>>> For NFS, is it desirable to retry a redirtied inode after 30s, or
> >>>>> after a shorter 5s, or after 0.1~5s? Or the exact timing simply
> >>>>> doesn't matter?
> >>>>>
> >>>> I don't really consider NFS to be a special case here. It just happens
> >>>> to be where we saw the problem originally. Some of its characteristics
> >>>> might make it easier to hit this, but I'm not certain of that.
> >>> Now there are now two possible solutions:
> >>> - unconditionally update dirtied_when in redirty_tail();
> >>> - keep dirtied_when and redirty inodes to a new dedicated queue.
> >>> The first one involves less code, the second one allows more flexible timing.
> >>>
> >>> NFS/XFS could be a good starting point for discussing the
> >>> requirements, so that we can reach a suitable solution.
> >>>
> >> It sounds like it, yes. I saw that you posted some patches in January
> >> (including your s_more_io_wait patch). I'll give those a closer look.
> >> Adding the new s_more_io_wait queue is interesting and might sidestep
> >> this problem nicely.
> >>
> >
> > Yes, I was looking at that bit of code but, so far, I think it won't be
> > called for the case we are trying to describe.

You mean this case?

	} else if (inode->i_state & I_DIRTY) {
		/*
		 * Someone redirtied the inode while were writing back
		 * the pages.
		 */
		redirty_tail(inode);
	} else if (atomic_read(&inode->i_count)) {

Sure we can replace the redirty_tail() with requeue_io_wait().
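
A minimal sketch of that substitution at the call site quoted above, for
illustration only (requeue_io_wait() comes from Wu's out-of-tree patchset,
not mainline):

	} else if (inode->i_state & I_DIRTY) {
		/*
		 * Someone redirtied the inode while we were writing back
		 * the pages: keep its dirtied_when and let it wait on the
		 * dedicated queue instead of restamping it onto s_dirty.
		 */
		requeue_io_wait(inode);
	} else if (atomic_read(&inode->i_count)) {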

> I take that back.
> As Jeff pointed out I haven't seen these patches and can't seem to find
> them in my fsdevel list folder, Wu can you send me a copy please?

OK, wait a minute...

Thanks,
Fengguang

2009-03-25 14:00:49

by Jeff Layton

Subject: Re: [PATCH] writeback: reset inode dirty time when adding it back to empty s_dirty list

On Wed, 25 Mar 2009 22:38:47 +0900
Ian Kent <[email protected]> wrote:

> Ian Kent wrote:
> > Jeff Layton wrote:
> >> On Wed, 25 Mar 2009 20:17:43 +0800
> >> Wu Fengguang <[email protected]> wrote:
> >>
> >>> On Wed, Mar 25, 2009 at 07:51:10PM +0800, Jeff Layton wrote:
> >>>> On Wed, 25 Mar 2009 10:50:37 +0800
> >>>> Wu Fengguang <[email protected]> wrote:
> >>>>
> >>>>>> Given the right situation though (or maybe the right filesystem), it's
> >>>>>> not too hard to imagine this problem occurring even in current mainline
> >>>>>> code with an inode that's frequently being redirtied.
> >>>>> My reasoning with recent kernel is: for kupdate, s_dirty enqueues only
> >>>>> happen in __mark_inode_dirty() and redirty_tail(). Newly dirtied
> >>>>> inodes will be parked in s_dirty for 30s. During which time the
> >>>>> actively being-redirtied inodes, if their dirtied_when is an old stuck
> >>>>> value, will be retried for writeback and then re-inserted into a
> >>>>> non-empty s_dirty queue and have their dirtied_when refreshed.
> >>>>>
> >>>> Doesn't that assume that there are new inodes that are being dirtied?
> >>>> If you only have the same inodes being redirtied and never any new
> >>>> ones, the problem still occurs, right?
> >>> Yes. But will a production server run months without making one single
> >>> new dirtied inode? (Just out of curiosity. Not that I'm not willing to
> >>> fix this possible issue.:)
> >>>
> >> Yes. It's not that the box will run that long without creating a
> >> single new dirtied inode, but rather that it won't necessarily create
> >> one on all of its mounts. It's often the case that someone has a
> >> mountpoint for a dedicated purpose.
> >>
> >> Consider a host that has a mountpoint that contains logfiles that are
> >> being heavily written. There's nothing that says that they must rotate
> >> those logs over a particular period (assuming the fs has enough space,
> >> etc). If the same ones are constantly being redirtied and no new
> >> ones are created, then I think this problem can easily happen.
> >>
> >>>>>>> ...I see no obvious reasons against unconditionally resetting dirtied_when.
> >>>>>>>
> >>>>>>> (a) Delaying an inode's writeback for 30s maybe too long - its blocking
> >>>>>>> condition may well go away within 1s. (b) And it would be very undesirable
> >>>>>>> if one big file is repeatedly redirtied hence its writeback being
> >>>>>>> delayed considerably.
> >>>>>>>
> >>>>>>> However, redirty_tail() currently only tries to speedup writeback-after-redirty
> >>>>>>> in a _best effort_ way. It at best partially hides the above issues,
> >>>>>>> if there are any. In particular, if (b) is possible, the bug should
> >>>>>>> already show up at least in some situations.
> >>>>>>>
> >>>>>>> For XFS, immediately sync of redirtied inode is actually discouraged:
> >>>>>>>
> >>>>>>> http://lkml.org/lkml/2008/1/16/491
> >>>>>>>
> >>>>>>>
> >>>>>> Ok, those are good points that I need to think about.
> >>>>>>
> >>>>>> Thanks for the help so far. I'd welcome any suggestions you have on
> >>>>>> how best to fix this.
> >>>>> For NFS, is it desirable to retry a redirtied inode after 30s, or
> >>>>> after a shorter 5s, or after 0.1~5s? Or the exact timing simply
> >>>>> doesn't matter?
> >>>>>
> >>>> I don't really consider NFS to be a special case here. It just happens
> >>>> to be where we saw the problem originally. Some of its characteristics
> >>>> might make it easier to hit this, but I'm not certain of that.
> >>> Now there are now two possible solutions:
> >>> - unconditionally update dirtied_when in redirty_tail();
> >>> - keep dirtied_when and redirty inodes to a new dedicated queue.
> >>> The first one involves less code, the second one allows more flexible timing.
> >>>
> >>> NFS/XFS could be a good starting point for discussing the
> >>> requirements, so that we can reach a suitable solution.
> >>>
> >> It sounds like it, yes. I saw that you posted some patches in January
> >> (including your s_more_io_wait patch). I'll give those a closer look.
> >> Adding the new s_more_io_wait queue is interesting and might sidestep
> >> this problem nicely.
> >>
> >
> > Yes, I was looking at that bit of code but, so far, I think it won't be
> > called for the case we are trying to describe.
>
> I take that back.
> As Jeff pointed out I haven't seen these patches and can't seem to find
> them in my fsdevel list folder, Wu can you send me a copy please?
>

Actually, I think you were right. We still have this check in
generic_sync_sb_inodes() even with Wu's January 2008 patches:

		/* Was this inode dirtied after sync_sb_inodes was called? */
		if (time_after(inode->dirtied_when, start))
			break;

...this check is the crux of the problem. We're assuming that the
dirtied_when value will never appear to be in the future. If we change
this check so that it's checking that dirtied_when is between "start"
and "now", then this problem basically goes away.

We'll probably also need to change the test in move_expired_inodes
too, unless Wu's changes go in.
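
Expressed as a hypothetical helper (illustration only; the actual patch in
this thread open-codes the same pair of checks at each call site):

/*
 * A dirtied_when stamp only counts as "after start" if it does not also
 * appear to lie in the future.  A stamp beyond the current jiffies value
 * can only be the result of wraparound, so treat it as long expired.
 */
static int dirtied_in_window(unsigned long dirtied_when, unsigned long start)
{
	return time_after(dirtied_when, start) &&
	       time_before_eq(dirtied_when, jiffies);
}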

--
Jeff Layton <[email protected]>

2009-03-25 14:16:18

by Fengguang Wu

Subject: Re: [PATCH] writeback: reset inode dirty time when adding it back to empty s_dirty list

On Wed, Mar 25, 2009 at 10:00:49PM +0800, Jeff Layton wrote:
> On Wed, 25 Mar 2009 22:38:47 +0900
> Ian Kent <[email protected]> wrote:
>
> > Ian Kent wrote:
> > > Jeff Layton wrote:
> > >> On Wed, 25 Mar 2009 20:17:43 +0800
> > >> Wu Fengguang <[email protected]> wrote:
> > >>
> > >>> On Wed, Mar 25, 2009 at 07:51:10PM +0800, Jeff Layton wrote:
> > >>>> On Wed, 25 Mar 2009 10:50:37 +0800
> > >>>> Wu Fengguang <[email protected]> wrote:
> > >>>>
> > >>>>>> Given the right situation though (or maybe the right filesystem), it's
> > >>>>>> not too hard to imagine this problem occurring even in current mainline
> > >>>>>> code with an inode that's frequently being redirtied.
> > >>>>> My reasoning with recent kernel is: for kupdate, s_dirty enqueues only
> > >>>>> happen in __mark_inode_dirty() and redirty_tail(). Newly dirtied
> > >>>>> inodes will be parked in s_dirty for 30s. During which time the
> > >>>>> actively being-redirtied inodes, if their dirtied_when is an old stuck
> > >>>>> value, will be retried for writeback and then re-inserted into a
> > >>>>> non-empty s_dirty queue and have their dirtied_when refreshed.
> > >>>>>
> > >>>> Doesn't that assume that there are new inodes that are being dirtied?
> > >>>> If you only have the same inodes being redirtied and never any new
> > >>>> ones, the problem still occurs, right?
> > >>> Yes. But will a production server run months without making one single
> > >>> new dirtied inode? (Just out of curiosity. Not that I'm not willing to
> > >>> fix this possible issue.:)
> > >>>
> > >> Yes. It's not that the box will run that long without creating a
> > >> single new dirtied inode, but rather that it won't necessarily create
> > >> one on all of its mounts. It's often the case that someone has a
> > >> mountpoint for a dedicated purpose.
> > >>
> > >> Consider a host that has a mountpoint that contains logfiles that are
> > >> being heavily written. There's nothing that says that they must rotate
> > >> those logs over a particular period (assuming the fs has enough space,
> > >> etc). If the same ones are constantly being redirtied and no new
> > >> ones are created, then I think this problem can easily happen.
> > >>
> > >>>>>>> ...I see no obvious reasons against unconditionally resetting dirtied_when.
> > >>>>>>>
> > >>>>>>> (a) Delaying an inode's writeback for 30s maybe too long - its blocking
> > >>>>>>> condition may well go away within 1s. (b) And it would be very undesirable
> > >>>>>>> if one big file is repeatedly redirtied hence its writeback being
> > >>>>>>> delayed considerably.
> > >>>>>>>
> > >>>>>>> However, redirty_tail() currently only tries to speedup writeback-after-redirty
> > >>>>>>> in a _best effort_ way. It at best partially hides the above issues,
> > >>>>>>> if there are any. In particular, if (b) is possible, the bug should
> > >>>>>>> already show up at least in some situations.
> > >>>>>>>
> > >>>>>>> For XFS, immediately sync of redirtied inode is actually discouraged:
> > >>>>>>>
> > >>>>>>> http://lkml.org/lkml/2008/1/16/491
> > >>>>>>>
> > >>>>>>>
> > >>>>>> Ok, those are good points that I need to think about.
> > >>>>>>
> > >>>>>> Thanks for the help so far. I'd welcome any suggestions you have on
> > >>>>>> how best to fix this.
> > >>>>> For NFS, is it desirable to retry a redirtied inode after 30s, or
> > >>>>> after a shorter 5s, or after 0.1~5s? Or the exact timing simply
> > >>>>> doesn't matter?
> > >>>>>
> > >>>> I don't really consider NFS to be a special case here. It just happens
> > >>>> to be where we saw the problem originally. Some of its characteristics
> > >>>> might make it easier to hit this, but I'm not certain of that.
> > >>> Now there are now two possible solutions:
> > >>> - unconditionally update dirtied_when in redirty_tail();
> > >>> - keep dirtied_when and redirty inodes to a new dedicated queue.
> > >>> The first one involves less code, the second one allows more flexible timing.
> > >>>
> > >>> NFS/XFS could be a good starting point for discussing the
> > >>> requirements, so that we can reach a suitable solution.
> > >>>
> > >> It sounds like it, yes. I saw that you posted some patches in January
> > >> (including your s_more_io_wait patch). I'll give those a closer look.
> > >> Adding the new s_more_io_wait queue is interesting and might sidestep
> > >> this problem nicely.
> > >>
> > >
> > > Yes, I was looking at that bit of code but, so far, I think it won't be
> > > called for the case we are trying to describe.
> >
> > I take that back.
> > As Jeff pointed out I haven't seen these patches and can't seem to find
> > them in my fsdevel list folder, Wu can you send me a copy please?
> >
>
> Actually, I think you were right. We still have this check in
> generic_sync_sb_inodes() even with Wu's January 2008 patches:
>
> 		/* Was this inode dirtied after sync_sb_inodes was called? */
> 		if (time_after(inode->dirtied_when, start))
> 			break;

Yeah, ugly code. Jens' per-bdi flush daemons should eliminate it...

> ...this check is the crux of the problem. We're assuming that the
> dirtied_when value will never appear to be in the future. If we change
> this check so that it's checking that dirtied_when is between "start"
> and "now", then this problem basically goes away.

Yeah that turns the problem into a temporary and tolerable one.

> We'll probably also need to change the test in move_expired_inodes
> too, unless Wu's changes go in.

So the most simple (and complete) solution is still this one ;-)

Thanks,
Fengguang

---
fs/fs-writeback.c | 14 +-------------
1 file changed, 1 insertion(+), 13 deletions(-)

--- mm.orig/fs/fs-writeback.c
+++ mm/fs/fs-writeback.c
@@ -182,24 +182,12 @@ static int write_inode(struct inode *ino
 /*
  * Redirty an inode: set its when-it-was dirtied timestamp and move it to the
  * furthest end of its superblock's dirty-inode list.
- *
- * Before stamping the inode's ->dirtied_when, we check to see whether it is
- * already the most-recently-dirtied inode on the s_dirty list.  If that is
- * the case then the inode must have been redirtied while it was being written
- * out and we don't reset its dirtied_when.
  */
 static void redirty_tail(struct inode *inode)
 {
 	struct super_block *sb = inode->i_sb;
 
-	if (!list_empty(&sb->s_dirty)) {
-		struct inode *tail_inode;
-
-		tail_inode = list_entry(sb->s_dirty.next, struct inode, i_list);
-		if (!time_after_eq(inode->dirtied_when,
-				tail_inode->dirtied_when))
-			inode->dirtied_when = jiffies;
-	}
+	inode->dirtied_when = jiffies;
 	list_move(&inode->i_list, &sb->s_dirty);
 }
 

2009-03-25 14:30:14

by Jeff Layton

Subject: Re: [PATCH] writeback: reset inode dirty time when adding it back to empty s_dirty list

On Wed, 25 Mar 2009 22:16:18 +0800
Wu Fengguang <[email protected]> wrote:

> On Wed, Mar 25, 2009 at 10:00:49PM +0800, Jeff Layton wrote:
> > On Wed, 25 Mar 2009 22:38:47 +0900
> > Ian Kent <[email protected]> wrote:
> >
> > > Ian Kent wrote:
> > > > Jeff Layton wrote:
> > > >> On Wed, 25 Mar 2009 20:17:43 +0800
> > > >> Wu Fengguang <[email protected]> wrote:
> > > >>
> > > >>> On Wed, Mar 25, 2009 at 07:51:10PM +0800, Jeff Layton wrote:
> > > >>>> On Wed, 25 Mar 2009 10:50:37 +0800
> > > >>>> Wu Fengguang <[email protected]> wrote:
> > > >>>>
> > > >>>>>> Given the right situation though (or maybe the right filesystem), it's
> > > >>>>>> not too hard to imagine this problem occurring even in current mainline
> > > >>>>>> code with an inode that's frequently being redirtied.
> > > >>>>> My reasoning with recent kernel is: for kupdate, s_dirty enqueues only
> > > >>>>> happen in __mark_inode_dirty() and redirty_tail(). Newly dirtied
> > > >>>>> inodes will be parked in s_dirty for 30s. During which time the
> > > >>>>> actively being-redirtied inodes, if their dirtied_when is an old stuck
> > > >>>>> value, will be retried for writeback and then re-inserted into a
> > > >>>>> non-empty s_dirty queue and have their dirtied_when refreshed.
> > > >>>>>
> > > >>>> Doesn't that assume that there are new inodes that are being dirtied?
> > > >>>> If you only have the same inodes being redirtied and never any new
> > > >>>> ones, the problem still occurs, right?
> > > >>> Yes. But will a production server run months without making one single
> > > >>> new dirtied inode? (Just out of curiosity. Not that I'm not willing to
> > > >>> fix this possible issue.:)
> > > >>>
> > > >> Yes. It's not that the box will run that long without creating a
> > > >> single new dirtied inode, but rather that it won't necessarily create
> > > >> one on all of its mounts. It's often the case that someone has a
> > > >> mountpoint for a dedicated purpose.
> > > >>
> > > >> Consider a host that has a mountpoint that contains logfiles that are
> > > >> being heavily written. There's nothing that says that they must rotate
> > > >> those logs over a particular period (assuming the fs has enough space,
> > > >> etc). If the same ones are constantly being redirtied and no new
> > > >> ones are created, then I think this problem can easily happen.
> > > >>
> > > >>>>>>> ...I see no obvious reasons against unconditionally resetting dirtied_when.
> > > >>>>>>>
> > > >>>>>>> (a) Delaying an inode's writeback for 30s maybe too long - its blocking
> > > >>>>>>> condition may well go away within 1s. (b) And it would be very undesirable
> > > >>>>>>> if one big file is repeatedly redirtied hence its writeback being
> > > >>>>>>> delayed considerably.
> > > >>>>>>>
> > > >>>>>>> However, redirty_tail() currently only tries to speedup writeback-after-redirty
> > > >>>>>>> in a _best effort_ way. It at best partially hides the above issues,
> > > >>>>>>> if there are any. In particular, if (b) is possible, the bug should
> > > >>>>>>> already show up at least in some situations.
> > > >>>>>>>
> > > >>>>>>> For XFS, immediately sync of redirtied inode is actually discouraged:
> > > >>>>>>>
> > > >>>>>>> http://lkml.org/lkml/2008/1/16/491
> > > >>>>>>>
> > > >>>>>>>
> > > >>>>>> Ok, those are good points that I need to think about.
> > > >>>>>>
> > > >>>>>> Thanks for the help so far. I'd welcome any suggestions you have on
> > > >>>>>> how best to fix this.
> > > >>>>> For NFS, is it desirable to retry a redirtied inode after 30s, or
> > > >>>>> after a shorter 5s, or after 0.1~5s? Or the exact timing simply
> > > >>>>> doesn't matter?
> > > >>>>>
> > > >>>> I don't really consider NFS to be a special case here. It just happens
> > > >>>> to be where we saw the problem originally. Some of its characteristics
> > > >>>> might make it easier to hit this, but I'm not certain of that.
> > > >>> Now there are now two possible solutions:
> > > >>> - unconditionally update dirtied_when in redirty_tail();
> > > >>> - keep dirtied_when and redirty inodes to a new dedicated queue.
> > > >>> The first one involves less code, the second one allows more flexible timing.
> > > >>>
> > > >>> NFS/XFS could be a good starting point for discussing the
> > > >>> requirements, so that we can reach a suitable solution.
> > > >>>
> > > >> It sounds like it, yes. I saw that you posted some patches in January
> > > >> (including your s_more_io_wait patch). I'll give those a closer look.
> > > >> Adding the new s_more_io_wait queue is interesting and might sidestep
> > > >> this problem nicely.
> > > >>
> > > >
> > > > Yes, I was looking at that bit of code but, so far, I think it won't be
> > > > called for the case we are trying to describe.
> > >
> > > I take that back.
> > > As Jeff pointed out I haven't seen these patches and can't seem to find
> > > them in my fsdevel list folder, Wu can you send me a copy please?
> > >
> >
> > Actually, I think you were right. We still have this check in
> > generic_sync_sb_inodes() even with Wu's January 2008 patches:
> >
> > /* Was this inode dirtied after sync_sb_inodes was called? */
> > if (time_after(inode->dirtied_when, start))
> > break;
>
> Yeah, ugly code. Jens' per-bdi flush daemons should eliminate it...
>

Ok, good to know. I need to look at those more closely I guess...

> > ...this check is the crux of the problem. We're assuming that the
> > dirtied_when value will never appear to be in the future. If we change
> > this check so that it's checking that dirtied_when is between "start"
> > and "now", then this problem basically goes away.
>
> Yeah that turns the problem into a temporary and tolerable one.
>

Yes.

> > We'll probably also need to change the test in move_expired_inodes
> > too, unless Wu's changes go in.
>
> So the most simple (and complete) solution is still this one ;-)
>

I suppose so. I guess that also takes care of the problem on XFS (and
maybe other filesystems too?) of inodes getting flushed too frequently
when they're redirtied.

The downside is that big files that are being frequently redirtied might
get less frequent writeout attempts. We can easily dirty pages faster than
we can write them out (at least with most filesystems). Will that cause a
problem where we accumulate too many dirty pages for the inode? It also
means that the I/O will be more "spiky"...

pdflush writes out some data
inode goes back on s_dirty and dirtied_when gets restamped
wait 30s...
pdflush writes out more data
etc...

That seems sub-optimal.

--
Jeff Layton <[email protected]>

2009-03-25 16:55:00

by Christoph Hellwig

Subject: Re: [PATCH] writeback: reset inode dirty time when adding it back to empty s_dirty list

On Wed, Mar 25, 2009 at 08:17:43PM +0800, Wu Fengguang wrote:
> Now there are now two possible solutions:
> - unconditionally update dirtied_when in redirty_tail();
> - keep dirtied_when and redirty inodes to a new dedicated queue.
> The first one involves less code, the second one allows more flexible timing.
>
> NFS/XFS could be a good starting point for discussing the
> requirements, so that we can reach a suitable solution.

Note that the XFS requirement also applies to all filesystems that
perform some sort of metadata updates on I/O completion. That includes
at least ext4, btrfs and most likely the cluster filesystems too.

2009-03-25 20:09:01

by Chris Mason

Subject: Re: [PATCH] writeback: reset inode dirty time when adding it back to empty s_dirty list

On Wed, 2009-03-25 at 12:55 -0400, [email protected] wrote:
> On Wed, Mar 25, 2009 at 08:17:43PM +0800, Wu Fengguang wrote:
> > Now there are now two possible solutions:
> > - unconditionally update dirtied_when in redirty_tail();
> > - keep dirtied_when and redirty inodes to a new dedicated queue.
> > The first one involves less code, the second one allows more flexible timing.
> >
> > NFS/XFS could be a good starting point for discussing the
> > requirements, so that we can reach a suitable solution.
>
> Note that the XFS requirement also applies to all filesystems that
> perform some sort of metadata updates on I/O completion. That includes
> at least ext4, btrfs and most likely the cluster filesystems too.

btrfs at least doesn't dirty the inode on I/O completion. It just puts
the changes directly into the btree blocks.

-chris



2009-03-26 17:03:27

by Jeff Layton

Subject: Re: [PATCH] writeback: reset inode dirty time when adding it back to empty s_dirty list

On Wed, 25 Mar 2009 22:16:18 +0800
Wu Fengguang <[email protected]> wrote:

> >
> > Actually, I think you were right. We still have this check in
> > generic_sync_sb_inodes() even with Wu's January 2008 patches:
> >
> > /* Was this inode dirtied after sync_sb_inodes was called? */
> > if (time_after(inode->dirtied_when, start))
> > break;
>
> Yeah, ugly code. Jens' per-bdi flush daemons should eliminate it...
>

I had a look over Jens' patches and they seem to be more concerned with
how the queues and daemons are organized (per-bdi rather than per-sb).
The actual way that inodes flow between the queues and get written out
doesn't look like it really changes with his set.

They also don't eliminate the problematic check above. Regardless of
whether your or Jens' patches make it in, I think we'll still need
something like the following (untested) patch.

If this looks ok, I'll flesh out the comments some and "officially" post
it. Thoughts?

--------------[snip]-----------------

From d10adff2d5f9a15d19c438119dbb2c410bd26e3c Mon Sep 17 00:00:00 2001
From: Jeff Layton <[email protected]>
Date: Thu, 26 Mar 2009 12:54:52 -0400
Subject: [PATCH] writeback: guard against jiffies wraparound on inode->dirtied_when checks

The dirtied_when value on an inode is supposed to represent the first
time that an inode has one of its pages dirtied. This value is in units
of jiffies. This value is used in several places in the writeback code
to determine when to write out an inode.

The problem is that these checks assume that dirtied_when is updated
periodically. But if an inode is continuously being used for I/O it can
be persistently marked as dirty and will continue to age. Once the time
difference between dirtied_when and the jiffies value it is being
compared to is greater than (or equal to) half the maximum of the
jiffies type, the logic of the time_*() macros inverts and the opposite
of what is needed is returned. On 32-bit architectures that's just under
25 days (assuming HZ == 1000).

As the least-recently dirtied inode, it'll end up being the first one
that pdflush will try to write out. sync_sb_inodes does this check
however:

		/* Was this inode dirtied after sync_sb_inodes was called? */
		if (time_after(inode->dirtied_when, start))
			break;

...but now dirtied_when appears to be in the future. sync_sb_inodes
bails out without attempting to write any dirty inodes. When this
occurs, pdflush will stop writing out inodes for this superblock and
nothing will unwedge it until jiffies moves out of the problematic
window.

This patch fixes this problem by changing the time_after checks against
dirtied_when to also check whether dirtied_when appears to be in the
future. If it does, then we consider the value to be in the past.

This should shrink the problematic window to such a small period as not
to matter.

Signed-off-by: Jeff Layton <[email protected]>
---
fs/fs-writeback.c | 11 +++++++----
1 files changed, 7 insertions(+), 4 deletions(-)

diff --git a/fs/fs-writeback.c b/fs/fs-writeback.c
index e3fe991..dba69a5 100644
--- a/fs/fs-writeback.c
+++ b/fs/fs-writeback.c
@@ -196,8 +196,9 @@ static void redirty_tail(struct inode *inode)
 		struct inode *tail_inode;
 
 		tail_inode = list_entry(sb->s_dirty.next, struct inode, i_list);
-		if (!time_after_eq(inode->dirtied_when,
-				tail_inode->dirtied_when))
+		if (time_before(inode->dirtied_when,
+				tail_inode->dirtied_when) ||
+		    time_after(inode->dirtied_when, jiffies))
 			inode->dirtied_when = jiffies;
 	}
 	list_move(&inode->i_list, &sb->s_dirty);
@@ -231,7 +232,8 @@ static void move_expired_inodes(struct list_head *delaying_queue,
 		struct inode *inode = list_entry(delaying_queue->prev,
 						struct inode, i_list);
 		if (older_than_this &&
-			time_after(inode->dirtied_when, *older_than_this))
+		    time_after(inode->dirtied_when, *older_than_this) &&
+		    time_before_eq(inode->dirtied_when, jiffies))
 			break;
 		list_move(&inode->i_list, dispatch_queue);
 	}
@@ -493,7 +495,8 @@ void generic_sync_sb_inodes(struct super_block *sb,
 		}
 
 		/* Was this inode dirtied after sync_sb_inodes was called? */
-		if (time_after(inode->dirtied_when, start))
+		if (time_after(inode->dirtied_when, start) &&
+		    time_before_eq(inode->dirtied_when, jiffies))
 			break;
 
 		/* Is another pdflush already flushing this queue? */
--
1.5.5.6