2008-02-01 12:18:47

by Wu Fengguang

Subject: [PATCH] clear PAGECACHE_TAG_DIRTY for truncated pages

The `truncated' page in block_write_full_page()/nobh_writepage() may
stick around for a long time. For example, ext2_rmdir() will set i_size
to 0, and then the dir inode and its pages may hang around because they
are still referenced by some process.
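
Condensed from ext2_rmdir() in fs/ext2/namei.c (paraphrased from memory,
not a verbatim quote):

	if (ext2_empty_dir(inode)) {
		err = ext2_unlink(dir, dentry);
		if (!err) {
			inode->i_size = 0;	/* all dirty dir pages are now past EOF */
			inode_dec_link_count(inode);
			inode_dec_link_count(dir);
		}
	}

The pages themselves are only dropped at iput() time, when the last
reference to the inode goes away.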

To reproduce this situation:

In terminal 1:
$ mkdir hi; cd hi
$ sleep 1h; cd ..
In terminal 2:
$ rmdir hi

The dir 'hi' is deleted in terminal 2 while still being referenced by
the shell in terminal 1 (as its working dir). The deleted inode 'hi' with
its dirty-and-truncated page will stay around for 1 hour, during which
pdflush will retry the page _at least_ once every 5s.
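
pdflush finds such a page by its PAGECACHE_TAG_DIRTY radix tree tag, not
by PG_dirty. Roughly, the writeback loop looks like this (condensed from
write_cache_pages()/mpage_writepages(), from memory):

	while (!done && (index <= end) &&
	       (nr_pages = pagevec_lookup_tag(&pvec, mapping, &index,
			PAGECACHE_TAG_DIRTY,
			min(end - index, (pgoff_t)PAGEVEC_SIZE-1) + 1))) {
		/* lock each page, clear_page_dirty_for_io(), ->writepage() */
	}

So as long as the tag stays set, every writeback pass picks the page up
again, calls ->writepage(), and hits the same truncated-page bail-out.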

So clear PAGECACHE_TAG_DIRTY to prevent pdflush from retrying it.
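
The set_page_writeback()/end_page_writeback() pair added below is just a
convenient way to do that: test_set_page_writeback() already clears
PAGECACHE_TAG_DIRTY under tree_lock when the page is no longer PageDirty,
which is the case here because the caller did clear_page_dirty_for_io()
before ->writepage(). Roughly (condensed from mm/page-writeback.c, from
memory):

	/* test_set_page_writeback(), with mapping->tree_lock held: */
	if (!TestSetPageWriteback(page))
		radix_tree_tag_set(&mapping->page_tree, page_index(page),
				   PAGECACHE_TAG_WRITEBACK);
	if (!PageDirty(page))
		radix_tree_tag_clear(&mapping->page_tree, page_index(page),
				     PAGECACHE_TAG_DIRTY);

end_page_writeback() then clears PAGECACHE_TAG_WRITEBACK again (via
test_clear_page_writeback()) and wakes any waiters, leaving the page with
neither tag set.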

Tested-by: Joerg Platte <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Michael Rubin <[email protected]>
Signed-off-by: Fengguang Wu <[email protected]>
---
fs/buffer.c | 16 ++++++++++++++++
1 files changed, 16 insertions(+)

--- linux-mm.orig/fs/buffer.c
+++ linux-mm/fs/buffer.c
@@ -2631,7 +2631,13 @@ int nobh_writepage(struct page *page, ge
if (page->mapping->a_ops->invalidatepage)
page->mapping->a_ops->invalidatepage(page, offset);
#endif
+ /*
+ * Clear PAGECACHE_TAG_DIRTY to stop pdflush from retrying.
+ * Read block_write_full_page() for more details.
+ */
+ set_page_writeback(page);
unlock_page(page);
+ end_page_writeback(page);
return 0; /* don't care */
}

@@ -2826,7 +2832,17 @@ int block_write_full_page(struct page *p
* freeable here, so the page does not leak.
*/
do_invalidatepage(page, 0);
+ /*
+ * Clear PAGECACHE_TAG_DIRTY to stop pdflush from retrying.
+ *
+ * Some truncated pages may hang around for long time.
+ * For example, ext2_rmdir() will set i_size to 0, and then
+ * keep the pages as long as the dir is still referenced(as
+ * the working dir of some process).
+ */
+ set_page_writeback(page);
unlock_page(page);
+ end_page_writeback(page);
return 0; /* don't care */
}