When a compressed file is overwritten, the current retry logic does not
include the current page in the retried range, since it sets the new
start index to 0 and the new end index to writeback_index - 1. As a
result, the corresponding cluster is written out uncompressed, as normal
pages. Fix this by letting writeback retry the current page as well
(when a compressed page is retried because its index does not match the
cluster index), so that the cluster is written compressed even in the
overwrite case.

Also, align f2fs_write_cache_pages() with commit 64081362e8ff
("mm/page-writeback.c: fix range_cyclic writeback vs writepages
deadlock").
Signed-off-by: Sahitya Tummala <[email protected]>
---
v2:
- Align the code with commit 64081362e8ff ("mm/page-writeback.c: fix
  range_cyclic writeback vs writepages deadlock") and also adjust the
  retry logic, as per Chao's suggestion.
fs/f2fs/data.c | 13 ++++---------
1 file changed, 4 insertions(+), 9 deletions(-)
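
For illustration only (not part of the patch), here is a small standalone
C sketch of the retry-range arithmetic described above. The names
writeback_index/old_end/new_end mirror the variables in
f2fs_write_cache_pages(), but the concrete index values are made up:

/* Standalone illustration; not kernel code. */
#include <stdio.h>

typedef unsigned long pgoff_t;

int main(void)
{
	/*
	 * Assume range_cyclic writeback started at page 100 and the page
	 * being written back is the compressed page at index 100 whose
	 * mismatch with its cluster index forced a retry.
	 */
	pgoff_t writeback_index = 100;
	pgoff_t retried_page = 100;

	/* Old retry range: [0, writeback_index - 1] */
	pgoff_t old_start = 0, old_end = writeback_index - 1;

	/* New retry range: [0, -1], i.e. up to ULONG_MAX */
	pgoff_t new_start = 0, new_end = (pgoff_t)-1;

	printf("old range covers retried page: %s\n",
	       retried_page >= old_start && retried_page <= old_end ?
	       "yes" : "no");
	printf("new range covers retried page: %s\n",
	       retried_page >= new_start && retried_page <= new_end ?
	       "yes" : "no");
	return 0;
}

With the old bounds, the page that forced the retry (index >=
writeback_index) falls outside [0, writeback_index - 1], so its cluster
ends up written uncompressed; with end = -1 the retry covers it, which
is what the last hunk below implements.
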
diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
index 4af5fcd..31af29f 100644
--- a/fs/f2fs/data.c
+++ b/fs/f2fs/data.c
@@ -2825,7 +2825,6 @@ static int f2fs_write_cache_pages(struct address_space *mapping,
 	pgoff_t index;
 	pgoff_t end;		/* Inclusive */
 	pgoff_t done_index;
-	int cycled;
 	int range_whole = 0;
 	xa_mark_t tag;
 	int nwritten = 0;
@@ -2843,17 +2842,12 @@ static int f2fs_write_cache_pages(struct address_space *mapping,
 	if (wbc->range_cyclic) {
 		writeback_index = mapping->writeback_index; /* prev offset */
 		index = writeback_index;
-		if (index == 0)
-			cycled = 1;
-		else
-			cycled = 0;
 		end = -1;
 	} else {
 		index = wbc->range_start >> PAGE_SHIFT;
 		end = wbc->range_end >> PAGE_SHIFT;
 		if (wbc->range_start == 0 && wbc->range_end == LLONG_MAX)
 			range_whole = 1;
-		cycled = 1; /* ignore range_cyclic tests */
 	}
 	if (wbc->sync_mode == WB_SYNC_ALL || wbc->tagged_writepages)
 		tag = PAGECACHE_TAG_TOWRITE;
@@ -3021,12 +3015,13 @@ static int f2fs_write_cache_pages(struct address_space *mapping,
 		}
 	}
 #endif
-	if ((!cycled && !done) || retry) {
-		cycled = 1;
+	if (retry) {
 		index = 0;
-		end = writeback_index - 1;
+		end = -1;
 		goto retry;
 	}
+	if (wbc->range_cyclic && !done)
+		done_index = 0;
 	if (wbc->range_cyclic || (range_whole && wbc->nr_to_write > 0))
 		mapping->writeback_index = done_index;
--
Qualcomm India Private Limited, on behalf of Qualcomm Innovation Center, Inc.
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, a Linux Foundation Collaborative Project.
On 2020/6/2 20:41, Sahitya Tummala wrote:
> When a compressed file is overwritten, the current retry logic does not
> include the current page in the retried range, since it sets the new
> start index to 0 and the new end index to writeback_index - 1. As a
> result, the corresponding cluster is written out uncompressed, as
> normal pages. Fix this by letting writeback retry the current page as
> well (when a compressed page is retried because its index does not
> match the cluster index), so that the cluster is written compressed
> even in the overwrite case.
>
> Also, align f2fs_write_cache_pages() with commit 64081362e8ff
> ("mm/page-writeback.c: fix range_cyclic writeback vs writepages
> deadlock").
>
> Signed-off-by: Sahitya Tummala <[email protected]>
Reviewed-by: Chao Yu <[email protected]>
Thanks,