2023-01-11 15:52:24

by Jan Kara

Subject: [PATCH 0/7] ext4: Cleanup data=journal writeback path

Hello,

this patch series implements the promised cleanup of the ext4 data=journal
writeback path. That way we can get rid of the last remnants of the old support
for the .writepage callback and mostly unify this writeback path with the
standard data handling modes. We also update some of the stale comments
regarding data=journal mode along the way.

Note that patch 3 (ext4: Mark page for delayed dirtying only if it is pinned)
currently somewhat breaks the data=journal guarantees if mapped page cache is
used as a buffer for direct IO reads, because that path does not pin pages yet
(David Howells is working on it). But I figured this is not a huge deal for a
corner-case use case of a corner-case configuration, and unconditionally keeping
the page marked for delayed dirtying is problematic because checkpointing
dirties the page as well, so the page effectively stays in the journal
permanently. It was problematic already in the past and happened to work only
by luck.

Honza


2023-01-11 15:52:36

by Jan Kara

Subject: [PATCH 4/7] ext4: Don't unlock page in ext4_bio_write_page()

Do not unlock the written page in ext4_bio_write_page(). Instead, leave
the page locked and unlock it in the callers. We'll need to keep the
page locked a bit longer for data=journal writeback.

Signed-off-by: Jan Kara <[email protected]>
---
fs/ext4/inode.c | 2 ++
fs/ext4/page-io.c | 10 +++++-----
2 files changed, 7 insertions(+), 5 deletions(-)
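
A minimal sketch of the changed contract, using only identifiers from the hunks
below (error handling and the data=journal specifics are omitted); a caller now
looks roughly like:

	ret = ext4_bio_write_page(&io_submit, page, len);
	/* ext4_bio_write_page() no longer unlocks; the caller does it */
	unlock_page(page);
	ext4_io_submit(&io_submit);
	/* Drop io_end reference we got from init */
	ext4_put_io_end_defer(io_submit.io_end);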

diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
index 4c14aa1b9152..237880f0d406 100644
--- a/fs/ext4/inode.c
+++ b/fs/ext4/inode.c
@@ -2076,6 +2076,7 @@ static int ext4_writepage(struct page *page,
return -ENOMEM;
}
ret = ext4_bio_write_page(&io_submit, page, len);
+ unlock_page(page);
ext4_io_submit(&io_submit);
/* Drop io_end reference we got from init */
ext4_put_io_end_defer(io_submit.io_end);
@@ -2110,6 +2111,7 @@ static int mpage_submit_page(struct mpage_da_data *mpd, struct page *page)
else
len = PAGE_SIZE;
err = ext4_bio_write_page(&mpd->io_submit, page, len);
+ unlock_page(page);
if (!err)
mpd->wbc->nr_to_write--;
mpd->first_page++;
diff --git a/fs/ext4/page-io.c b/fs/ext4/page-io.c
index beaec6d81074..3bc7c7c5b99d 100644
--- a/fs/ext4/page-io.c
+++ b/fs/ext4/page-io.c
@@ -500,7 +500,7 @@ int ext4_bio_write_page(struct ext4_io_submit *io,

/* Nothing to submit? Just unlock the page... */
if (!nr_to_submit)
- goto unlock;
+ return 0;

bh = head = page_buffers(page);

@@ -548,7 +548,8 @@ int ext4_bio_write_page(struct ext4_io_submit *io,
}
bh = bh->b_this_page;
} while (bh != head);
- goto unlock;
+
+ return ret;
}
}

@@ -564,7 +565,6 @@ int ext4_bio_write_page(struct ext4_io_submit *io,
io_submit_add_bh(io, inode,
bounce_page ? bounce_page : page, bh);
} while ((bh = bh->b_this_page) != head);
-unlock:
- unlock_page(page);
- return ret;
+
+ return 0;
}
--
2.35.3

2023-01-11 15:53:19

by Jan Kara

Subject: [PATCH 1/7] ext4: Update stale comment about write constraints

The comment above do_journal_get_write_access() is very stale. Most of
it no longer describes what the function does today or how jbd2 works.
The bit about transaction handling during write(2) is still correct, so
just update the function names in that part and move the comment to a
more appropriate place.

Signed-off-by: Jan Kara <[email protected]>
---
fs/ext4/inode.c | 31 +++++++------------------------
1 file changed, 7 insertions(+), 24 deletions(-)

diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
index 9d9f414f99fe..f9201c7d61ad 100644
--- a/fs/ext4/inode.c
+++ b/fs/ext4/inode.c
@@ -1005,30 +1005,6 @@ int ext4_walk_page_buffers(handle_t *handle, struct inode *inode,
return ret;
}

-/*
- * To preserve ordering, it is essential that the hole instantiation and
- * the data write be encapsulated in a single transaction. We cannot
- * close off a transaction and start a new one between the ext4_get_block()
- * and the commit_write(). So doing the jbd2_journal_start at the start of
- * prepare_write() is the right place.
- *
- * Also, this function can nest inside ext4_writepage(). In that case, we
- * *know* that ext4_writepage() has generated enough buffer credits to do the
- * whole page. So we won't block on the journal in that case, which is good,
- * because the caller may be PF_MEMALLOC.
- *
- * By accident, ext4 can be reentered when a transaction is open via
- * quota file writes. If we were to commit the transaction while thus
- * reentered, there can be a deadlock - we would be holding a quota
- * lock, and the commit would never complete if another thread had a
- * transaction open and was blocking on the quota lock - a ranking
- * violation.
- *
- * So what we do is to rely on the fact that jbd2_journal_stop/journal_start
- * will _not_ run commit under these circumstances because handle->h_ref
- * is elevated. We'll still have enough credits for the tiny quotafile
- * write.
- */
int do_journal_get_write_access(handle_t *handle, struct inode *inode,
struct buffer_head *bh)
{
@@ -1149,6 +1125,13 @@ static int ext4_block_write_begin(struct page *page, loff_t pos, unsigned len,
}
#endif

+/*
+ * To preserve ordering, it is essential that the hole instantiation and
+ * the data write be encapsulated in a single transaction. We cannot
+ * close off a transaction and start a new one between the ext4_get_block()
+ * and the ext4_write_end(). So doing the jbd2_journal_start at the start of
+ * ext4_write_begin() is the right place.
+ */
static int ext4_write_begin(struct file *file, struct address_space *mapping,
loff_t pos, unsigned len,
struct page **pagep, void **fsdata)
--
2.35.3

2023-01-11 15:54:05

by Jan Kara

Subject: [PATCH 7/7] ext4: Convert data=journal writeback to use ext4_writepages()

Add support for writeback of journalled data directly in
ext4_writepages() instead of offloading it to write_cache_pages(). This
actually significantly simplifies the code and reduces code duplication.
For checkpointing of committed data we can use ext4_writepages() right
away, the same way writeback of ordered data uses it on transaction
commit. For journalling of dirty mapped pages, we need to add a special
case to mpage_prepare_extent_to_map() to add all page buffers to the
journal.

Signed-off-by: Jan Kara <[email protected]>
---
fs/ext4/inode.c | 340 ++++++++++--------------------------
include/trace/events/ext4.h | 7 -
2 files changed, 90 insertions(+), 257 deletions(-)
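
A condensed sketch of the resulting data=journal flow, using only identifiers
from the hunks below (error handling, credit accounting details and the pagevec
iteration are omitted):

	/* In ext4_do_writepages(): journalled data needs no delalloc mapping,
	 * so only writeout / journalling of already mapped buffers remains. */
	if (ext4_should_journal_data(inode))
		mpd->can_map = 0;

	/* In mpage_prepare_extent_to_map(): one handle covers the page scan,
	 * with credits topped up per page via ext4_journal_ensure_credits(). */
	if (ext4_should_journal_data(mpd->inode))
		handle = ext4_journal_start(mpd->inode, EXT4_HT_WRITE_PAGE, bpp);

	/* Per page, when blocks cannot be mapped: first write the page out for
	 * checkpointing, then journal its buffers if delayed dirtying is pending. */
	if (!mpd->can_map) {
		if (ext4_page_nomap_can_writeout(page))
			err = mpage_submit_page(mpd, page);
		if (PageChecked(page))
			err = mpage_journal_page_buffers(handle, mpd, page);
		mpage_page_done(mpd, page);
	}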

diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
index d0102b1c6b27..b1bc4e8db6d8 100644
--- a/fs/ext4/inode.c
+++ b/fs/ext4/inode.c
@@ -136,7 +136,6 @@ static inline int ext4_begin_ordered_truncate(struct inode *inode,
new_size);
}

-static int __ext4_journalled_writepage(struct page *page, unsigned int len);
static int ext4_meta_trans_blocks(struct inode *inode, int lblocks,
int pextents);

@@ -1632,12 +1631,6 @@ static void ext4_print_free_blocks(struct inode *inode)
return;
}

-static int ext4_bh_delay_or_unwritten(handle_t *handle, struct inode *inode,
- struct buffer_head *bh)
-{
- return (buffer_delay(bh) || buffer_unwritten(bh)) && buffer_dirty(bh);
-}
-
/*
* ext4_insert_delayed_block - adds a delayed block to the extents status
* tree, incrementing the reserved cluster/block
@@ -1870,219 +1863,6 @@ int ext4_da_get_block_prep(struct inode *inode, sector_t iblock,
return 0;
}

-static int __ext4_journalled_writepage(struct page *page,
- unsigned int len)
-{
- struct address_space *mapping = page->mapping;
- struct inode *inode = mapping->host;
- handle_t *handle = NULL;
- int ret = 0, err = 0;
- int inline_data = ext4_has_inline_data(inode);
- struct buffer_head *inode_bh = NULL;
- loff_t size;
-
- ClearPageChecked(page);
-
- if (inline_data) {
- BUG_ON(page->index != 0);
- BUG_ON(len > ext4_get_max_inline_size(inode));
- inode_bh = ext4_journalled_write_inline_data(inode, len, page);
- if (inode_bh == NULL)
- goto out;
- }
- /*
- * We need to release the page lock before we start the
- * journal, so grab a reference so the page won't disappear
- * out from under us.
- */
- get_page(page);
- unlock_page(page);
-
- handle = ext4_journal_start(inode, EXT4_HT_WRITE_PAGE,
- ext4_writepage_trans_blocks(inode));
- if (IS_ERR(handle)) {
- ret = PTR_ERR(handle);
- put_page(page);
- goto out_no_pagelock;
- }
- BUG_ON(!ext4_handle_valid(handle));
-
- lock_page(page);
- put_page(page);
- size = i_size_read(inode);
- if (page->mapping != mapping || page_offset(page) > size) {
- /* The page got truncated from under us */
- ext4_journal_stop(handle);
- ret = 0;
- goto out;
- }
-
- if (inline_data) {
- ret = ext4_mark_inode_dirty(handle, inode);
- } else {
- struct buffer_head *page_bufs = page_buffers(page);
-
- if (page->index == size >> PAGE_SHIFT)
- len = size & ~PAGE_MASK;
- else
- len = PAGE_SIZE;
-
- ret = ext4_walk_page_buffers(handle, inode, page_bufs, 0, len,
- NULL, do_journal_get_write_access);
-
- err = ext4_walk_page_buffers(handle, inode, page_bufs, 0, len,
- NULL, write_end_fn);
- }
- if (ret == 0)
- ret = err;
- err = ext4_jbd2_inode_add_write(handle, inode, page_offset(page), len);
- if (ret == 0)
- ret = err;
- EXT4_I(inode)->i_datasync_tid = handle->h_transaction->t_tid;
- err = ext4_journal_stop(handle);
- if (!ret)
- ret = err;
-
- ext4_set_inode_state(inode, EXT4_STATE_JDATA);
-out:
- unlock_page(page);
-out_no_pagelock:
- brelse(inode_bh);
- return ret;
-}
-
-/*
- * Note that we don't need to start a transaction unless we're journaling data
- * because we should have holes filled from ext4_page_mkwrite(). We even don't
- * need to file the inode to the transaction's list in ordered mode because if
- * we are writing back data added by write(), the inode is already there and if
- * we are writing back data modified via mmap(), no one guarantees in which
- * transaction the data will hit the disk. In case we are journaling data, we
- * cannot start transaction directly because transaction start ranks above page
- * lock so we have to do some magic.
- *
- * This function can get called via...
- * - ext4_writepages after taking page lock (have journal handle)
- * - journal_submit_inode_data_buffers (no journal handle)
- * - shrink_page_list via the kswapd/direct reclaim (no journal handle)
- * - grab_page_cache when doing write_begin (have journal handle)
- *
- * We don't do any block allocation in this function. If we have page with
- * multiple blocks we need to write those buffer_heads that are mapped. This
- * is important for mmaped based write. So if we do with blocksize 1K
- * truncate(f, 1024);
- * a = mmap(f, 0, 4096);
- * a[0] = 'a';
- * truncate(f, 4096);
- * we have in the page first buffer_head mapped via page_mkwrite call back
- * but other buffer_heads would be unmapped but dirty (dirty done via the
- * do_wp_page). So writepage should write the first block. If we modify
- * the mmap area beyond 1024 we will again get a page_fault and the
- * page_mkwrite callback will do the block allocation and mark the
- * buffer_heads mapped.
- *
- * We redirty the page if we have any buffer_heads that is either delay or
- * unwritten in the page.
- *
- * We can get recursively called as show below.
- *
- * ext4_writepage() -> kmalloc() -> __alloc_pages() -> page_launder() ->
- * ext4_writepage()
- *
- * But since we don't do any block allocation we should not deadlock.
- * Page also have the dirty flag cleared so we don't get recurive page_lock.
- */
-static int ext4_writepage(struct page *page,
- struct writeback_control *wbc)
-{
- struct folio *folio = page_folio(page);
- int ret = 0;
- loff_t size;
- unsigned int len;
- struct buffer_head *page_bufs = NULL;
- struct inode *inode = page->mapping->host;
- struct ext4_io_submit io_submit;
-
- if (unlikely(ext4_forced_shutdown(EXT4_SB(inode->i_sb)))) {
- folio_invalidate(folio, 0, folio_size(folio));
- folio_unlock(folio);
- return -EIO;
- }
-
- trace_ext4_writepage(page);
- size = i_size_read(inode);
- if (page->index == size >> PAGE_SHIFT &&
- !ext4_verity_in_progress(inode))
- len = size & ~PAGE_MASK;
- else
- len = PAGE_SIZE;
-
- /* Should never happen but for bugs in other kernel subsystems */
- if (!page_has_buffers(page)) {
- ext4_warning_inode(inode,
- "page %lu does not have buffers attached", page->index);
- ClearPageDirty(page);
- unlock_page(page);
- return 0;
- }
-
- page_bufs = page_buffers(page);
- /*
- * We cannot do block allocation or other extent handling in this
- * function. If there are buffers needing that, we have to redirty
- * the page. But we may reach here when we do a journal commit via
- * journal_submit_inode_data_buffers() and in that case we must write
- * allocated buffers to achieve data=ordered mode guarantees.
- *
- * Also, if there is only one buffer per page (the fs block
- * size == the page size), if one buffer needs block
- * allocation or needs to modify the extent tree to clear the
- * unwritten flag, we know that the page can't be written at
- * all, so we might as well refuse the write immediately.
- * Unfortunately if the block size != page size, we can't as
- * easily detect this case using ext4_walk_page_buffers(), but
- * for the extremely common case, this is an optimization that
- * skips a useless round trip through ext4_bio_write_page().
- */
- if (ext4_walk_page_buffers(NULL, inode, page_bufs, 0, len, NULL,
- ext4_bh_delay_or_unwritten)) {
- redirty_page_for_writepage(wbc, page);
- if ((current->flags & PF_MEMALLOC) ||
- (inode->i_sb->s_blocksize == PAGE_SIZE)) {
- /*
- * For memory cleaning there's no point in writing only
- * some buffers. So just bail out. Warn if we came here
- * from direct reclaim.
- */
- WARN_ON_ONCE((current->flags & (PF_MEMALLOC|PF_KSWAPD))
- == PF_MEMALLOC);
- unlock_page(page);
- return 0;
- }
- }
-
- if (PageChecked(page) && ext4_should_journal_data(inode))
- /*
- * It's mmapped pagecache. Add buffers and journal it. There
- * doesn't seem much point in redirtying the page here.
- */
- return __ext4_journalled_writepage(page, len);
-
- ext4_io_submit_init(&io_submit, wbc);
- io_submit.io_end = ext4_init_io_end(inode, GFP_NOFS);
- if (!io_submit.io_end) {
- redirty_page_for_writepage(wbc, page);
- unlock_page(page);
- return -ENOMEM;
- }
- ret = ext4_bio_write_page(&io_submit, page, len);
- unlock_page(page);
- ext4_io_submit(&io_submit);
- /* Drop io_end reference we got from init */
- ext4_put_io_end_defer(io_submit.io_end);
- return ret;
-}
-
static void mpage_page_done(struct mpage_da_data *mpd, struct page *page)
{
mpd->first_page++;
@@ -2563,6 +2343,50 @@ static bool ext4_page_nomap_can_writeout(struct page *page)
return false;
}

+static int ext4_journal_page_buffers(handle_t *handle, struct page *page,
+ int len)
+{
+ struct buffer_head *page_bufs = page_buffers(page);
+ struct inode *inode = page->mapping->host;
+ int ret, err;
+
+ ret = ext4_walk_page_buffers(handle, inode, page_bufs, 0, len,
+ NULL, do_journal_get_write_access);
+ err = ext4_walk_page_buffers(handle, inode, page_bufs, 0, len,
+ NULL, write_end_fn);
+ if (ret == 0)
+ ret = err;
+ err = ext4_jbd2_inode_add_write(handle, inode, page_offset(page), len);
+ if (ret == 0)
+ ret = err;
+ EXT4_I(inode)->i_datasync_tid = handle->h_transaction->t_tid;
+
+ ext4_set_inode_state(inode, EXT4_STATE_JDATA);
+
+ return ret;
+}
+
+static int mpage_journal_page_buffers(handle_t *handle,
+ struct mpage_da_data *mpd,
+ struct page *page)
+{
+ struct inode *inode = mpd->inode;
+ loff_t size = i_size_read(inode);
+ int len;
+
+ ClearPageChecked(page);
+ clear_page_dirty_for_io(page);
+ mpd->wbc->nr_to_write--;
+
+ if (page->index == size >> PAGE_SHIFT &&
+ !ext4_verity_in_progress(inode))
+ len = size & ~PAGE_MASK;
+ else
+ len = PAGE_SIZE;
+
+ return ext4_journal_page_buffers(handle, page, len);
+}
+
/*
* mpage_prepare_extent_to_map - find & lock contiguous range of dirty pages
* needing mapping, submit mapped pages
@@ -2595,12 +2419,20 @@ static int mpage_prepare_extent_to_map(struct mpage_da_data *mpd)
int blkbits = mpd->inode->i_blkbits;
ext4_lblk_t lblk;
struct buffer_head *head;
+ handle_t *handle = NULL;
+ int bpp = ext4_journal_blocks_per_page(mpd->inode);

if (mpd->wbc->sync_mode == WB_SYNC_ALL || mpd->wbc->tagged_writepages)
tag = PAGECACHE_TAG_TOWRITE;
else
tag = PAGECACHE_TAG_DIRTY;

+ if (ext4_should_journal_data(mpd->inode)) {
+ handle = ext4_journal_start(mpd->inode, EXT4_HT_WRITE_PAGE,
+ bpp);
+ if (IS_ERR(handle))
+ return PTR_ERR(handle);
+ }
pagevec_init(&pvec);
mpd->map.m_len = 0;
mpd->next_page = index;
@@ -2630,6 +2462,13 @@ static int mpage_prepare_extent_to_map(struct mpage_da_data *mpd)
if (mpd->map.m_len > 0 && mpd->next_page != page->index)
goto out;

+ if (handle) {
+ err = ext4_journal_ensure_credits(handle, bpp,
+ 0);
+ if (err < 0)
+ goto out;
+ }
+
lock_page(page);
/*
* If the page is no longer dirty, or its mapping no
@@ -2669,8 +2508,15 @@ static int mpage_prepare_extent_to_map(struct mpage_da_data *mpd)
mpd->first_page = page->index;
mpd->next_page = page->index + 1;
/*
- * Writeout for transaction commit where we cannot
- * modify metadata is simple. Just submit the page.
+ * Writeout when we cannot modify metadata is simple.
+ * Just submit the page. For data=journal mode we
+ * first handle writeout of the page for checkpoint and
+ * only after that handle delayed page dirtying. This
+ * is crucial so that forcing a transaction commit and
+ * then calling filemap_write_and_wait() guarantees
+ * current state of data is in its final location. Such
+ * sequence is used for example by insert/collapse
+ * range operations before discarding the page cache.
*/
if (!mpd->can_map) {
if (ext4_page_nomap_can_writeout(page)) {
@@ -2678,6 +2524,13 @@ static int mpage_prepare_extent_to_map(struct mpage_da_data *mpd)
if (err < 0)
goto out;
}
+ /* Pending dirtying of journalled data? */
+ if (PageChecked(page)) {
+ err = mpage_journal_page_buffers(handle,
+ mpd, page);
+ if (err < 0)
+ goto out;
+ }
mpage_page_done(mpd, page);
} else {
/* Add all dirty buffers to mpd */
@@ -2695,18 +2548,16 @@ static int mpage_prepare_extent_to_map(struct mpage_da_data *mpd)
cond_resched();
}
mpd->scanned_until_end = 1;
+ if (handle)
+ ext4_journal_stop(handle);
return 0;
out:
pagevec_release(&pvec);
+ if (handle)
+ ext4_journal_stop(handle);
return err;
}

-static int ext4_writepage_cb(struct page *page, struct writeback_control *wbc,
- void *data)
-{
- return ext4_writepage(page, wbc);
-}
-
static int ext4_do_writepages(struct mpage_da_data *mpd)
{
struct writeback_control *wbc = mpd->wbc;
@@ -2732,13 +2583,6 @@ static int ext4_do_writepages(struct mpage_da_data *mpd)
if (!mapping->nrpages || !mapping_tagged(mapping, PAGECACHE_TAG_DIRTY))
goto out_writepages;

- if (ext4_should_journal_data(inode)) {
- blk_start_plug(&plug);
- ret = write_cache_pages(mapping, wbc, ext4_writepage_cb, NULL);
- blk_finish_plug(&plug);
- goto out_writepages;
- }
-
/*
* If the filesystem has aborted, it is read-only, so return
* right away instead of dumping stack traces later on that
@@ -2773,6 +2617,13 @@ static int ext4_do_writepages(struct mpage_da_data *mpd)
ext4_journal_stop(handle);
}

+ /*
+ * data=journal mode does not do delalloc so we just need to writeout /
+ * journal already mapped buffers
+ */
+ if (ext4_should_journal_data(inode))
+ mpd->can_map = 0;
+
if (ext4_should_dioread_nolock(inode)) {
/*
* We may need to convert up to one extent per block in
@@ -3149,9 +3000,8 @@ static int ext4_da_write_end(struct file *file,
* i_disksize since writeback will push i_disksize upto i_size
* eventually. If the end of the current write is > i_size and
* inside an allocated block (ext4_da_should_update_i_disksize()
- * check), we need to update i_disksize here as neither
- * ext4_writepage() nor certain ext4_writepages() paths not
- * allocating blocks update i_disksize.
+ * check), we need to update i_disksize here as certain
+ * ext4_writepages() paths not allocating blocks update i_disksize.
*
* Note that we defer inode dirtying to generic_write_end() /
* ext4_da_write_inline_data_end().
@@ -5373,7 +5223,7 @@ static void ext4_wait_for_tail_page_commit(struct inode *inode)
* If the folio is fully truncated, we don't need to wait for any commit
* (and we even should not as __ext4_journalled_invalidate_folio() may
* strip all buffers from the folio but keep the folio dirty which can then
- * confuse e.g. concurrent ext4_writepage() seeing dirty folio without
+ * confuse e.g. concurrent ext4_writepages() seeing dirty folio without
* buffers). Also we don't need to wait for any commit if all buffers in
* the folio remain valid. This is most beneficial for the common case of
* blocksize == PAGESIZE.
@@ -6311,18 +6161,8 @@ vm_fault_t ext4_page_mkwrite(struct vm_fault *vmf)
err = __block_write_begin(page, 0, len, ext4_get_block);
if (!err) {
ret = VM_FAULT_SIGBUS;
- if (ext4_walk_page_buffers(handle, inode,
- page_buffers(page), 0, len, NULL,
- do_journal_get_write_access))
- goto out_error;
- if (ext4_walk_page_buffers(handle, inode,
- page_buffers(page), 0, len, NULL,
- write_end_fn))
- goto out_error;
- if (ext4_jbd2_inode_add_write(handle, inode,
- page_offset(page), len))
+ if (ext4_journal_page_buffers(handle, page, len))
goto out_error;
- ext4_set_inode_state(inode, EXT4_STATE_JDATA);
} else {
unlock_page(page);
}
diff --git a/include/trace/events/ext4.h b/include/trace/events/ext4.h
index 77b426ae0064..ebccf6a6aa1b 100644
--- a/include/trace/events/ext4.h
+++ b/include/trace/events/ext4.h
@@ -584,13 +584,6 @@ DECLARE_EVENT_CLASS(ext4__page_op,
(unsigned long) __entry->index)
);

-DEFINE_EVENT(ext4__page_op, ext4_writepage,
-
- TP_PROTO(struct page *page),
-
- TP_ARGS(page)
-);
-
DEFINE_EVENT(ext4__page_op, ext4_readpage,

TP_PROTO(struct page *page),
--
2.35.3

2023-02-19 05:41:10

by Theodore Ts'o

Subject: Re: [PATCH 0/7] ext4: Cleanup data=journal writeback path

On Wed, 11 Jan 2023 16:43:24 +0100, Jan Kara wrote:
> this patch series implements the promised cleanup of the ext4 data=journal
> writeback path. That way we can get rid of the last remnants of the old
> support for the .writepage callback and mostly unify this writeback path with
> the standard data handling modes. We also update some of the stale comments
> regarding data=journal mode along the way.
>
> Note that patch 3 (ext4: Mark page for delayed dirtying only if it is pinned)
> currently somewhat breaks the data=journal guarantees if mapped page cache is
> used as a buffer for direct IO reads, because that path does not pin pages yet
> (David Howells is working on it). But I figured this is not a huge deal for a
> corner-case use case of a corner-case configuration, and unconditionally
> keeping the page marked for delayed dirtying is problematic because
> checkpointing dirties the page as well, so the page effectively stays in the
> journal permanently. It was problematic already in the past and happened to
> work only by luck.
>
> [...]

Applied, thanks!

[1/7] ext4: Update stale comment about write constraints
commit: d399906a1bd04c3fbc5bca9057ce992dc9b9e036
[2/7] ext4: Use nr_to_write directly in mpage_prepare_extent_to_map()
commit: 726432969963b16f817566715b0c7ec537d0283b
[3/7] ext4: Mark page for delayed dirtying only if it is pinned
commit: 71ace6bd0d47b09d6130014d70f87fcff5ae3e36
[4/7] ext4: Don't unlock page in ext4_bio_write_page()
commit: 0bcd54cf93e0dcefd121368fa32a478df2c86398
[5/7] ext4: Move page unlocking out of mpage_submit_page()
commit: 9ff6a9153c8f8a08eb60e0740c32e08cef88545c
[6/7] ext4: Move mpage_page_done() calls after error handling
commit: b4d26e70a7556ca723d31ac0c6bace0d1ab932b5
[7/7] ext4: Convert data=journal writeback to use ext4_writepages()
commit: 9b18c23c131ae6a335ef7aa6bf53f1d27a667ed1

Best regards,
--
Theodore Ts'o <[email protected]>