2008-10-02 18:19:18

by Aneesh Kumar K.V

Subject: Re: [PATCH] Improve buffered streaming write ordering

On Thu, Oct 02, 2008 at 08:20:54AM -0400, Chris Mason wrote:
> On Wed, 2008-10-01 at 21:52 -0700, Andrew Morton wrote:
> > On Wed, 01 Oct 2008 14:40:51 -0400 Chris Mason <[email protected]> wrote:
> >
> > > The patch below changes write_cache_pages to only use writeback_index
> > > when current_is_pdflush(). The basic idea is that pdflush is the only
> > > one who has concurrency control against the bdi, so it is the only one
> > > who can safely use and update writeback_index.
> >
> > Another approach would be to only update mapping->writeback_index if
> > nobody else altered it meanwhile.
> >
>
> Ok, I can give that a shot.
>
> > That being said, I don't really see why we get lots of seekiness when
> > two threads start writing the file from the same offset.
>
> For metadata, it makes sense. Pages get dirtied in strange order, and
> if writeback_index is jumping around, we'll get the seeky metadata
> writeback.
>
> Data makes less sense, especially the very high extent count from ext4.
> An extra printk shows that ext4 is calling redirty_page_for_writepage
> quite a bit in ext4_da_writepage. This should be enough to make us jump
> around in the file.


We need to start the journal before locking the page with jbd2.
That prevents us from doing any block allocation in the writepage()
callback. So with ext4/jbd2 we do block allocation only in the
writepages() callback, where we start the journal with the credits
needed to write a single extent. Then we look for contiguous
unallocated logical blocks and request 'x' blocks from the block
allocator. If we get fewer than that, the rest of the pages which we
iterated in writepages are redirtied so that we try to allocate them
again. We loop inside ext4_da_writepages itself looking at
wbc->pages_skipped:

if (wbc->range_cont && (pages_skipped != wbc->pages_skipped)) {
	/* We skipped pages in this loop */
	wbc->range_start = range_start;
	wbc->nr_to_write = to_write +
			wbc->pages_skipped - pages_skipped;
	wbc->pages_skipped = pages_skipped;
	goto restart_loop;
}


>
> For a 4.5GB streaming buffered write, this printk inside
> ext4_da_writepage shows up 37,2429 times in /var/log/messages.
>

Part of that can happen due to the shrink_page_list -> pageout ->
writepage callback with lots of unallocated buffer_heads (blocks).
Also, a journal commit with jbd2 looks at the inode and all the dirty
pages rather than the buffer_heads (journal_submit_data_buffers). We
don't force-commit pages that don't have blocks allocated with ext4.
The consistency is only with i_size and data.

> if (page_has_buffers(page)) {
> page_bufs = page_buffers(page);
> if (walk_page_buffers(NULL, page_bufs, 0, len, NULL,
> ext4_bh_unmapped_or_delay)) {
> /*
> * We don't want to do block allocation
> * So redirty the page and return
> * We may reach here when we do a journal commit
> * via journal_submit_inode_data_buffers.
> * If we don't have mapping block we just ignore
> * them. We can also reach here via shrink_page_list
> */
> redirty_page_for_writepage(wbc, page);
>
> printk("redirty page %Lu\n", page_offset(page));
>
> unlock_page(page);
> return 0;
> }
> } else {
>
> -chris
>
>

-aneesh


2008-10-02 19:44:33

by Andrew Morton

Subject: Re: [PATCH] Improve buffered streaming write ordering

On Thu, 2 Oct 2008 23:48:56 +0530 "Aneesh Kumar K.V" <[email protected]> wrote:

> >
> > For a 4.5GB streaming buffered write, this printk inside
> > ext4_da_writepage shows up 37,2429 times in /var/log/messages.
> >
>
> Part of that can happen due to the shrink_page_list -> pageout ->
> writepage callback with lots of unallocated buffer_heads (blocks).

That workload shouldn't be using that code path much at all. It's
supposed to be the case that pdflush and balance_dirty_pages() do most
of the writeback work.

And that _used_ to be the case, but we broke it. It happened several
years ago and I wasn't able to provoke anyone into finding out why.
iirc the XFS guys noticed it because their throughput was fairly badly
affected.


2008-10-02 23:43:15

by Dave Chinner

Subject: Re: [PATCH] Improve buffered streaming write ordering

On Thu, Oct 02, 2008 at 11:48:56PM +0530, Aneesh Kumar K.V wrote:
> On Thu, Oct 02, 2008 at 08:20:54AM -0400, Chris Mason wrote:
> > On Wed, 2008-10-01 at 21:52 -0700, Andrew Morton wrote:
> > For a 4.5GB streaming buffered write, this printk inside
> > ext4_da_writepage shows up 37,2429 times in /var/log/messages.
> >
>
> Part of that can happen due to the shrink_page_list -> pageout ->
> writepage callback with lots of unallocated buffer_heads (blocks).

Quite frankly, a simple streaming buffered write should *never*
trigger writeback from the LRU in memory reclaim. That indicates
that some feedback loop has broken down and we are not cleaning
pages fast enough or perhaps in the correct order. Page reclaim in
this case should be reclaiming clean pages (those that have already
been written back), not writing back random dirty pages.

Cheers,

Dave.
--
Dave Chinner
[email protected]

2008-10-03 01:11:18

by Chris Mason

Subject: Re: [PATCH] Improve buffered streaming write ordering

On Thu, 2008-10-02 at 23:48 +0530, Aneesh Kumar K.V wrote:
> On Thu, Oct 02, 2008 at 08:20:54AM -0400, Chris Mason wrote:
> > On Wed, 2008-10-01 at 21:52 -0700, Andrew Morton wrote:
> > > On Wed, 01 Oct 2008 14:40:51 -0400 Chris Mason <[email protected]> wrote:
> > >
> > > > The patch below changes write_cache_pages to only use writeback_index
> > > > when current_is_pdflush(). The basic idea is that pdflush is the only
> > > > one who has concurrency control against the bdi, so it is the only one
> > > > who can safely use and update writeback_index.
> > >
> > > Another approach would be to only update mapping->writeback_index if
> > > nobody else altered it meanwhile.
> > >
> >
> > Ok, I can give that a shot.
> >
> > > That being said, I don't really see why we get lots of seekiness when
> > > two threads start writing the file from the same offset.
> >
> > For metadata, it makes sense. Pages get dirtied in strange order, and
> > if writeback_index is jumping around, we'll get the seeky metadata
> > writeback.
> >
> > Data makes less sense, especially the very high extent count from ext4.
> > An extra printk shows that ext4 is calling redirty_page_for_writepage
> > quite a bit in ext4_da_writepage. This should be enough to make us jump
> > around in the file.
>
>
> We need to start the journal before locking the page with jbd2.
> That prevents us from doing any block allocation in the writepage()
> callback. So with ext4/jbd2 we do block allocation only in the
> writepages() callback, where we start the journal with the credits
> needed to write a single extent. Then we look for contiguous
> unallocated logical blocks and request 'x' blocks from the block
> allocator. If we get fewer than that, the rest of the pages which we
> iterated in writepages are redirtied so that we try to allocate them
> again. We loop inside ext4_da_writepages itself looking at
> wbc->pages_skipped:
>
> if (wbc->range_cont && (pages_skipped != wbc->pages_skipped)) {
> 	/* We skipped pages in this loop */
> 	wbc->range_start = range_start;
> 	wbc->nr_to_write = to_write +
> 			wbc->pages_skipped - pages_skipped;
> 	wbc->pages_skipped = pages_skipped;
> 	goto restart_loop;
> }
>
>
> >
> > For a 4.5GB streaming buffered write, this printk inside
> > ext4_da_writepage shows up 37,2429 times in /var/log/messages.
> >
>
> Part of that can happen due to the shrink_page_list -> pageout ->
> writepage callback with lots of unallocated buffer_heads (blocks).
> Also, a journal commit with jbd2 looks at the inode and all the dirty
> pages rather than the buffer_heads (journal_submit_data_buffers). We
> don't force-commit pages that don't have blocks allocated with ext4.
> The consistency is only with i_size and data.

In general, I don't think pdflush or the VM expect
redirty_page_for_writepage to be used this aggressively.

At this point I think we're best off if one of the ext4 developers is
able to reproduce and explain things in better detail than my hand
waving.

My patch is pretty lame, but it isn't a horrible bandage until we can
rethink the pdflush<->balance_dirty_pages<->kupdate interactions in
detail.

Two other data points, ext3 runs at 200MB/s with and without the patch.
Btrfs runs at 320MB/s with and without the patch, but only when I turn
checksums off. The IO isn't quite as sequential with checksumming on
because the helper threads submit things slightly out of order (220MB/s
with checksums on).

Btrfs does use redirty_page_for_writepage if (current->flags &
PF_MEMALLOC) in my writepage call, but doesn't call it from the
writepages path.

-chris



2008-10-03 02:44:00

by Nick Piggin

Subject: Re: [PATCH] Improve buffered streaming write ordering

On Friday 03 October 2008 11:11, Chris Mason wrote:
> On Thu, 2008-10-02 at 23:48 +0530, Aneesh Kumar K.V wrote:
> > On Thu, Oct 02, 2008 at 08:20:54AM -0400, Chris Mason wrote:
> > > On Wed, 2008-10-01 at 21:52 -0700, Andrew Morton wrote:
> > > > On Wed, 01 Oct 2008 14:40:51 -0400 Chris Mason <[email protected]> wrote:
> > > > > The patch below changes write_cache_pages to only use
> > > > > writeback_index when current_is_pdflush(). The basic idea is that
> > > > > pdflush is the only one who has concurrency control against the
> > > > > bdi, so it is the only one who can safely use and update
> > > > > writeback_index.
> > > >
> > > > Another approach would be to only update mapping->writeback_index if
> > > > nobody else altered it meanwhile.
> > >
> > > Ok, I can give that a shot.
> > >
> > > > That being said, I don't really see why we get lots of seekiness when
> > > > two threads start writing the file from the same offset.
> > >
> > > For metadata, it makes sense. Pages get dirtied in strange order, and
> > > if writeback_index is jumping around, we'll get the seeky metadata
> > > writeback.
> > >
> > > Data makes less sense, especially the very high extent count from ext4.
> > > An extra printk shows that ext4 is calling redirty_page_for_writepage
> > > quite a bit in ext4_da_writepage. This should be enough to make us
> > > jump around in the file.
> >
> > We need to start the journal before locking the page with jbd2.
> > That prevents us from doing any block allocation in the writepage()
> > callback. So with ext4/jbd2 we do block allocation only in the
> > writepages() callback, where we start the journal with the credits
> > needed to write a single extent. Then we look for contiguous
> > unallocated logical blocks and request 'x' blocks from the block
> > allocator. If we get fewer than that, the rest of the pages which we
> > iterated in writepages are redirtied so that we try to allocate them
> > again. We loop inside ext4_da_writepages itself looking at
> > wbc->pages_skipped:
> >
> > if (wbc->range_cont && (pages_skipped != wbc->pages_skipped)) {
> > 	/* We skipped pages in this loop */
> > 	wbc->range_start = range_start;
> > 	wbc->nr_to_write = to_write +
> > 			wbc->pages_skipped - pages_skipped;
> > 	wbc->pages_skipped = pages_skipped;
> > 	goto restart_loop;
> > }
> >
> > > For a 4.5GB streaming buffered write, this printk inside
> > > ext4_da_writepage shows up 37,2429 times in /var/log/messages.
> >
> > Part of that can happen due to the shrink_page_list -> pageout ->
> > writepage callback with lots of unallocated buffer_heads (blocks).
> > Also, a journal commit with jbd2 looks at the inode and all the dirty
> > pages rather than the buffer_heads (journal_submit_data_buffers). We
> > don't force-commit pages that don't have blocks allocated with ext4.
> > The consistency is only with i_size and data.
>
> In general, I don't think pdflush or the VM expect
> redirty_page_for_writepage to be used this aggressively.

BTW. redirty_page_for_writepage and the whole model of cleaning the page's
dirty bit *before* calling into the filesystem is really nasty IMO. For
one thing it opens races that mean a filesystem can't keep metadata about
the pagecache properly in sync with the page's dirty bit.

I have a patch in my fsblock series that fixes this and has the writepage()
function itself clear the page's dirty bit. This basically makes
redirty_page_for_writepage go away completely (at least for the uses I
looked at; I didn't look at ext4 though).

Shall I break it out and submit it?

2008-10-03 12:08:09

by Chris Mason

Subject: Re: [PATCH] Improve buffered streaming write ordering

On Fri, 2008-10-03 at 12:43 +1000, Nick Piggin wrote:
> On Friday 03 October 2008 11:11, Chris Mason wrote:

> > > Part of that can happen due to shrink_page_list -> pageout -> writepagee
> > > call back with lots of unallocated buffer_heads(blocks). Also a journal
> > > commit with jbd2 looks at the inode and all the dirty pages, rather than
> > > the buffer_heads (journal_submit_data_buffers). We don't force commit
> > > pages that doesn't have blocks allocated with the ext4. The consistency
> > > is only with i_size and data.
> >
> > In general, I don't think pdflush or the VM expect
> > redirty_page_for_writepage to be used this aggressively.
>
> BTW. redirty_page_for_writepage and the whole model of cleaning the page's
> dirty bit *before* calling into the filesystem is really nasty IMO. For
> one thing it opens races that mean a filesystem can't keep metadata about
> the pagecache properly in sync with the page's dirty bit.
>
> I have a patch in my fsblock series that fixes this and has the writepage()
> function itself clear the page's dirty bit. This basically makes
> redirty_page_for_writepage go away completely (at least for the uses I
> looked at; I didn't look at ext4 though).
>
> Shall I break it out and submit it?

It's a fair amount of churn in the FS code, and the part I'm not sure
of is whether the bigger problem is lock ordering around the page lock
and the FS locks, or just the dirty bit.

Personally I'd rather see writepages used everywhere, giving the FS the
chance to do more efficient IO.

-chris




2008-10-03 19:52:34

by Chris Mason

Subject: Re: [PATCH] Improve buffered streaming write ordering

On Fri, 2008-10-03 at 09:43 +1000, Dave Chinner wrote:
> On Thu, Oct 02, 2008 at 11:48:56PM +0530, Aneesh Kumar K.V wrote:
> > On Thu, Oct 02, 2008 at 08:20:54AM -0400, Chris Mason wrote:
> > > On Wed, 2008-10-01 at 21:52 -0700, Andrew Morton wrote:
> > > For a 4.5GB streaming buffered write, this printk inside
> > > ext4_da_writepage shows up 37,2429 times in /var/log/messages.
> > >
> >
> > Part of that can happen due to the shrink_page_list -> pageout ->
> > writepage callback with lots of unallocated buffer_heads (blocks).
>
> Quite frankly, a simple streaming buffered write should *never*
> trigger writeback from the LRU in memory reclaim.

The blktrace runs on ext4 didn't show kswapd doing any IO. It isn't
clear if this is because ext4 did the redirty trick or if kswapd didn't
call writepage.

-chris



2008-10-06 10:16:05

by Aneesh Kumar K.V

Subject: Re: [PATCH] Improve buffered streaming write ordering

On Fri, Oct 03, 2008 at 03:45:55PM -0400, Chris Mason wrote:
> On Fri, 2008-10-03 at 09:43 +1000, Dave Chinner wrote:
> > On Thu, Oct 02, 2008 at 11:48:56PM +0530, Aneesh Kumar K.V wrote:
> > > On Thu, Oct 02, 2008 at 08:20:54AM -0400, Chris Mason wrote:
> > > > On Wed, 2008-10-01 at 21:52 -0700, Andrew Morton wrote:
> > > > For a 4.5GB streaming buffered write, this printk inside
> > > > ext4_da_writepage shows up 37,2429 times in /var/log/messages.
> > > >
> > >
> > > Part of that can happen due to the shrink_page_list -> pageout ->
> > > writepage callback with lots of unallocated buffer_heads (blocks).
> >
> > Quite frankly, a simple streaming buffered write should *never*
> > trigger writeback from the LRU in memory reclaim.
>
> The blktrace runs on ext4 didn't show kswapd doing any IO. It isn't
> clear if this is because ext4 did the redirty trick or if kswapd didn't
> call writepage.
>
> -chris

This patch actually reduced the number of extents for the below test
from 564 to 171.

$dd if=/dev/zero of=test bs=1M count=1024

The ext4 mballoc block allocator can still be improved to make sure we
get fewer extents. I am looking into this.

What the change below basically does is make sure we advance
writeback_index only after looking at the pages skipped. With delayed
allocation, when we request 100 blocks we just add each of these
blocks to the in-memory extent. Once we get the contiguous chunk of
100 blocks, we have the index updated to 100. Now we request 100
blocks from the block allocator, but the allocator gives us back only
50 blocks. So with the current code we have writeback_index pointing
to 100; with the changes below it is pointing to 50.


We also force a loop inside ext4_da_writepages with the conditions below:
a) if nr_to_write is zero, break the loop
b) if we are able to allocate blocks but we have pages_skipped, move
the writeback_index/range_start back to the initial value and try with
a new nr_to_write value. This makes sure we will be allocating blocks
for all requested dirty pages. The reason is __fsync_super. It does

sync_inodes_sb(sb, 0);
sync_blockdev(sb->s_bdev);
sync_inodes_sb(sb, 1);

I guess the first call is supposed to have done all the metadata
allocation? So I was forcing the block allocation without even looking
at WB_SYNC_ALL.

c) If we don't have anything to write, break the loop

diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
index 21f1d3a..58d010d 100644
--- a/fs/ext4/inode.c
+++ b/fs/ext4/inode.c
@@ -2386,14 +2386,16 @@ static int ext4_da_writepages(struct address_space *mapping,
wbc->nr_to_write = sbi->s_mb_stream_request;
}

- if (!wbc->range_cyclic)
+ if (wbc->range_cyclic) {
+ range_start = mapping->writeback_index;
+ } else {
/*
* If range_cyclic is not set force range_cont
- * and save the old writeback_index
+ * and save the old range_start;
*/
wbc->range_cont = 1;
-
- range_start = wbc->range_start;
+ range_start = wbc->range_start;
+ }
pages_skipped = wbc->pages_skipped;

mpd.wbc = wbc;
@@ -2440,6 +2442,19 @@ static int ext4_da_writepages(struct address_space *mapping,
*/
to_write += wbc->nr_to_write;
ret = 0;
+ if (pages_skipped != wbc->pages_skipped) {
+ /* writepages skipped some pages */
+ if (wbc->range_cont) {
+ wbc->range_start = range_start;
+ } else {
+ /* range_cyclic */
+ mapping->writeback_index = range_start;
+ }
+ wbc->nr_to_write = to_write +
+ (wbc->pages_skipped - pages_skipped);
+ wbc->pages_skipped = pages_skipped;
+ } else
+ wbc->nr_to_write = to_write;
} else if (wbc->nr_to_write) {
/*
* There is no more writeout needed
@@ -2449,18 +2464,27 @@ static int ext4_da_writepages(struct address_space *mapping,
to_write += wbc->nr_to_write;
break;
}
- wbc->nr_to_write = to_write;
}

if (wbc->range_cont && (pages_skipped != wbc->pages_skipped)) {
/* We skipped pages in this loop */
wbc->range_start = range_start;
wbc->nr_to_write = to_write +
- wbc->pages_skipped - pages_skipped;
+ (wbc->pages_skipped - pages_skipped);
wbc->pages_skipped = pages_skipped;
goto restart_loop;
}

+ if (wbc->range_cyclic && (pages_skipped != wbc->pages_skipped)) {
+ /*
+ * we need to make sure we don't move the
+ * writeback_index further without looking
+ * at the pages skipped.
+ */
+ mapping->writeback_index = mapping->writeback_index -
+ (wbc->pages_skipped - pages_skipped);
+ }
+
out_writepages:
wbc->nr_to_write = to_write - nr_to_writebump;
wbc->range_start = range_start;

2008-10-06 14:21:43

by Chris Mason

Subject: Re: [PATCH] Improve buffered streaming write ordering

On Mon, 2008-10-06 at 15:46 +0530, Aneesh Kumar K.V wrote:
> On Fri, Oct 03, 2008 at 03:45:55PM -0400, Chris Mason wrote:
> > On Fri, 2008-10-03 at 09:43 +1000, Dave Chinner wrote:
> > > On Thu, Oct 02, 2008 at 11:48:56PM +0530, Aneesh Kumar K.V wrote:
> > > > On Thu, Oct 02, 2008 at 08:20:54AM -0400, Chris Mason wrote:
> > > > > On Wed, 2008-10-01 at 21:52 -0700, Andrew Morton wrote:
> > > > > For a 4.5GB streaming buffered write, this printk inside
> > > > > ext4_da_writepage shows up 37,2429 times in /var/log/messages.
> > > > >
> > > >
> > > > Part of that can happen due to the shrink_page_list -> pageout ->
> > > > writepage callback with lots of unallocated buffer_heads (blocks).
> > >
> > > Quite frankly, a simple streaming buffered write should *never*
> > > trigger writeback from the LRU in memory reclaim.
> >
> > The blktrace runs on ext4 didn't show kswapd doing any IO. It isn't
> > clear if this is because ext4 did the redirty trick or if kswapd didn't
> > call writepage.
> >
> > -chris
>
> This patch actually reduced the number of extents for the below test
> from 564 to 171.
>

For my array, this patch brings the number of ext4 extents down from
over 4000 to 27. The throughput reported by dd goes up from ~80MB/s to
330MB/s, which means buffered IO is going as fast as O_DIRECT.

Here's the graph:

http://oss.oracle.com/~mason/bugs/writeback_ordering/ext4-aneesh.png

The strange metadata writeback for the uninit block groups is gone.

Looking at the patch, I think the ext4_writepages code should just make
its own write_cache_pages. It's pretty hard to follow the code that is
there for ext4 vs the code that is there to make write_cache_pages do
what ext4 expects it to.

-chris



2008-10-07 08:45:31

by Aneesh Kumar K.V

Subject: Re: [PATCH] Improve buffered streaming write ordering

On Mon, Oct 06, 2008 at 10:21:43AM -0400, Chris Mason wrote:
> On Mon, 2008-10-06 at 15:46 +0530, Aneesh Kumar K.V wrote:
> > On Fri, Oct 03, 2008 at 03:45:55PM -0400, Chris Mason wrote:
> > > On Fri, 2008-10-03 at 09:43 +1000, Dave Chinner wrote:
> > > > On Thu, Oct 02, 2008 at 11:48:56PM +0530, Aneesh Kumar K.V wrote:
> > > > > On Thu, Oct 02, 2008 at 08:20:54AM -0400, Chris Mason wrote:
> > > > > > On Wed, 2008-10-01 at 21:52 -0700, Andrew Morton wrote:
> > > > > > For a 4.5GB streaming buffered write, this printk inside
> > > > > > ext4_da_writepage shows up 37,2429 times in /var/log/messages.
> > > > > >
> > > > >
> > > > > Part of that can happen due to the shrink_page_list -> pageout ->
> > > > > writepage callback with lots of unallocated buffer_heads (blocks).
> > > >
> > > > Quite frankly, a simple streaming buffered write should *never*
> > > > trigger writeback from the LRU in memory reclaim.
> > >
> > > The blktrace runs on ext4 didn't show kswapd doing any IO. It isn't
> > > clear if this is because ext4 did the redirty trick or if kswapd didn't
> > > call writepage.
> > >
> > > -chris
> >
> > This patch actually reduced the number of extents for the below test
> > from 564 to 171.
> >
>
> For my array, this patch brings the number of ext4 extents down from
> over 4000 to 27. The throughput reported by dd goes up from ~80MB/s to
> 330MB/s, which means buffered IO is going as fast as O_DIRECT.
>
> Here's the graph:
>
> http://oss.oracle.com/~mason/bugs/writeback_ordering/ext4-aneesh.png
>
> The strange metadata writeback for the uninit block groups is gone.
>
> Looking at the patch, I think the ext4_writepages code should just make
> its own write_cache_pages. It's pretty hard to follow the code that is
> there for ext4 vs the code that is there to make write_cache_pages do
> what ext4 expects it to.
>

How about the below? The patch is on top of the ext4 patch queue.

commit 9b839065f973eb68e8e28be18e1f31d8fb6854ee
Author: Aneesh Kumar K.V <[email protected]>
Date: Tue Oct 7 12:27:52 2008 +0530

ext4: Fix file fragmentation during large file write.

The range_cyclic writeback mode uses the address_space
writeback_index as the start index for writeback. With
delayed allocation we were updating writeback_index
wrongly, resulting in a highly fragmented file. The number
of extents reduced from 4000 to 27 for a 3GB file with
the patch below.

The patch also cleans up the ext4 delayed allocation
writepages path by implementing write_cache_pages locally
with the needed changes, and drops the range_cont
writeback mode added for ext4 delayed allocation writeback.

Signed-off-by: Aneesh Kumar K.V <[email protected]>

diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
index 21f1d3a..b6b0985 100644
--- a/fs/ext4/inode.c
+++ b/fs/ext4/inode.c
@@ -1648,6 +1648,7 @@ static int mpage_da_submit_io(struct mpage_da_data *mpd)
int ret = 0, err, nr_pages, i;
unsigned long index, end;
struct pagevec pvec;
+ long pages_skipped;

BUG_ON(mpd->next_page <= mpd->first_page);
pagevec_init(&pvec, 0);
@@ -1655,20 +1656,30 @@ static int mpage_da_submit_io(struct mpage_da_data *mpd)
end = mpd->next_page - 1;

while (index <= end) {
- /* XXX: optimize tail */
- nr_pages = pagevec_lookup(&pvec, mapping, index, PAGEVEC_SIZE);
+ /*
+ * We can use PAGECACHE_TAG_DIRTY lookup here because
+ * even though we have cleared the dirty flag on the page
+ * We still keep the page in the radix tree with tag
+ * PAGECACHE_TAG_DIRTY. See clear_page_dirty_for_io.
+ * The PAGECACHE_TAG_DIRTY is cleared in set_page_writeback
+ * which is called via the below writepage callback.
+ */
+ nr_pages = pagevec_lookup_tag(&pvec, mapping, &index,
+ PAGECACHE_TAG_DIRTY,
+ min(end - index,
+ (pgoff_t)PAGEVEC_SIZE-1) + 1);
if (nr_pages == 0)
break;
for (i = 0; i < nr_pages; i++) {
struct page *page = pvec.pages[i];

- index = page->index;
- if (index > end)
- break;
- index++;
-
+ pages_skipped = mpd->wbc->pages_skipped;
err = mapping->a_ops->writepage(page, mpd->wbc);
- if (!err)
+ if (!err && (pages_skipped == mpd->wbc->pages_skipped))
+ /*
+ * have successfully written the page
+ * without skipping the same
+ */
mpd->pages_written++;
/*
* In error case, we have to continue because
@@ -2088,6 +2099,100 @@ static int __mpage_da_writepage(struct page *page,
return 0;
}

+static int ext4_write_cache_pages(struct address_space *mapping,
+ struct writeback_control *wbc, writepage_t writepage,
+ void *data)
+{
+ struct pagevec pvec;
+ pgoff_t index, end;
+ long to_write = wbc->nr_to_write;
+ int ret = 0, done = 0, scanned = 0, nr_pages;
+ struct backing_dev_info *bdi = mapping->backing_dev_info;
+
+ if (wbc->nonblocking && bdi_write_congested(bdi)) {
+ wbc->encountered_congestion = 1;
+ return 0;
+ }
+
+ pagevec_init(&pvec, 0);
+ if (wbc->range_cyclic) {
+ index = mapping->writeback_index; /* Start from prev offset */
+ end = -1;
+ } else {
+ index = wbc->range_start >> PAGE_CACHE_SHIFT;
+ end = wbc->range_end >> PAGE_CACHE_SHIFT;
+ scanned = 1;
+ }
+retry:
+ while (!done && (index <= end) &&
+ (nr_pages = pagevec_lookup_tag(&pvec, mapping, &index,
+ PAGECACHE_TAG_DIRTY,
+ min(end - index, (pgoff_t)PAGEVEC_SIZE-1) + 1))) {
+ unsigned i;
+
+ scanned = 1;
+ for (i = 0; i < nr_pages; i++) {
+ struct page *page = pvec.pages[i];
+
+ /*
+ * At this point we hold neither mapping->tree_lock nor
+ * lock on the page itself: the page may be truncated or
+ * invalidated (changing page->mapping to NULL), or even
+ * swizzled back from swapper_space to tmpfs file
+ * mapping
+ */
+ lock_page(page);
+
+ if (unlikely(page->mapping != mapping)) {
+ unlock_page(page);
+ continue;
+ }
+ if (!wbc->range_cyclic && page->index > end) {
+ done = 1;
+ unlock_page(page);
+ continue;
+ }
+ if (wbc->sync_mode != WB_SYNC_NONE)
+ wait_on_page_writeback(page);
+
+ if (PageWriteback(page) ||
+ !clear_page_dirty_for_io(page)) {
+ unlock_page(page);
+ continue;
+ }
+ ret = (*writepage)(page, wbc, data);
+ if (unlikely(ret == AOP_WRITEPAGE_ACTIVATE)) {
+ unlock_page(page);
+ ret = 0;
+ }
+ if (ret || (--(to_write) <= 0))
+ /*
+ * writepage either failed.
+ * or did an extent write.
+ * We wrote what we are asked to
+ * write
+ */
+ done = 1;
+ if (wbc->nonblocking && bdi_write_congested(bdi)) {
+ wbc->encountered_congestion = 1;
+ done = 1;
+ }
+ }
+ pagevec_release(&pvec);
+ cond_resched();
+ }
+ if (!scanned && !done) {
+ /*
+ * We hit the last page and there is more work to be done: wrap
+ * back to the start of the file
+ */
+ scanned = 1;
+ index = 0;
+ goto retry;
+ }
+ return ret;
+}
+
/*
* mpage_da_writepages - walk the list of dirty pages of the given
* address space, allocates non-allocated blocks, maps newly-allocated
@@ -2104,7 +2209,6 @@ static int mpage_da_writepages(struct address_space *mapping,
struct writeback_control *wbc,
struct mpage_da_data *mpd)
{
- long to_write;
int ret;

if (!mpd->get_block)
@@ -2119,10 +2223,7 @@ static int mpage_da_writepages(struct address_space *mapping,
mpd->pages_written = 0;
mpd->retval = 0;

- to_write = wbc->nr_to_write;
-
- ret = write_cache_pages(mapping, wbc, __mpage_da_writepage, mpd);
-
+ ret = ext4_write_cache_pages(mapping, wbc, __mpage_da_writepage, mpd);
/*
* Handle last extent of pages
*/
@@ -2131,7 +2232,7 @@ static int mpage_da_writepages(struct address_space *mapping,
mpage_da_submit_io(mpd);
}

- wbc->nr_to_write = to_write - mpd->pages_written;
+ wbc->nr_to_write -= mpd->pages_written;
return ret;
}

@@ -2360,12 +2461,13 @@ static int ext4_da_writepages_trans_blocks(struct inode *inode)
static int ext4_da_writepages(struct address_space *mapping,
struct writeback_control *wbc)
{
+ pgoff_t index;
+ int range_whole = 0;
handle_t *handle = NULL;
- loff_t range_start = 0;
+ long pages_written = 0;
struct mpage_da_data mpd;
struct inode *inode = mapping->host;
int needed_blocks, ret = 0, nr_to_writebump = 0;
- long to_write, pages_skipped = 0;
struct ext4_sb_info *sbi = EXT4_SB(mapping->host->i_sb);

/*
@@ -2385,23 +2487,18 @@ static int ext4_da_writepages(struct address_space *mapping,
nr_to_writebump = sbi->s_mb_stream_request - wbc->nr_to_write;
wbc->nr_to_write = sbi->s_mb_stream_request;
}
+ if (wbc->range_start == 0 && wbc->range_end == LLONG_MAX)
+ range_whole = 1;

- if (!wbc->range_cyclic)
- /*
- * If range_cyclic is not set force range_cont
- * and save the old writeback_index
- */
- wbc->range_cont = 1;
-
- range_start = wbc->range_start;
- pages_skipped = wbc->pages_skipped;
+ if (wbc->range_cyclic)
+ index = mapping->writeback_index;
+ else
+ index = wbc->range_start >> PAGE_CACHE_SHIFT;

mpd.wbc = wbc;
mpd.inode = mapping->host;

-restart_loop:
- to_write = wbc->nr_to_write;
- while (!ret && to_write > 0) {
+ while (!ret && wbc->nr_to_write > 0) {

/*
* we insert one extent at a time. So we need
@@ -2422,48 +2519,45 @@ static int ext4_da_writepages(struct address_space *mapping,
dump_stack();
goto out_writepages;
}
- to_write -= wbc->nr_to_write;
-
mpd.get_block = ext4_da_get_block_write;
ret = mpage_da_writepages(mapping, wbc, &mpd);

ext4_journal_stop(handle);

- if (mpd.retval == -ENOSPC)
+ if (mpd.retval == -ENOSPC) {
+ /* commit the transaction which would
+ * free blocks released in the transaction
+ * and try again
+ */
jbd2_journal_force_commit_nested(sbi->s_journal);
-
- /* reset the retry count */
- if (ret == MPAGE_DA_EXTENT_TAIL) {
+ ret = 0;
+ } else if (ret == MPAGE_DA_EXTENT_TAIL) {
/*
* got one extent now try with
* rest of the pages
*/
- to_write += wbc->nr_to_write;
+ pages_written += mpd.pages_written;
ret = 0;
- } else if (wbc->nr_to_write) {
+ } else if (wbc->nr_to_write)
/*
* There is no more writeout needed
* or we requested for a noblocking writeout
* and we found the device congested
*/
- to_write += wbc->nr_to_write;
break;
- }
- wbc->nr_to_write = to_write;
}

- if (wbc->range_cont && (pages_skipped != wbc->pages_skipped)) {
- /* We skipped pages in this loop */
- wbc->range_start = range_start;
- wbc->nr_to_write = to_write +
- wbc->pages_skipped - pages_skipped;
- wbc->pages_skipped = pages_skipped;
- goto restart_loop;
- }
+ /* Update index */
+ index += pages_written;
+ if (wbc->range_cyclic || (range_whole && wbc->nr_to_write > 0))
+ /*
+ * set the writeback_index so that range_cyclic
+ * mode will write it back later
+ */
+ mapping->writeback_index = index;

out_writepages:
- wbc->nr_to_write = to_write - nr_to_writebump;
- wbc->range_start = range_start;
+ wbc->nr_to_write -= nr_to_writebump;
return ret;
}

diff --git a/include/linux/writeback.h b/include/linux/writeback.h
index 12b15c5..bd91987 100644
--- a/include/linux/writeback.h
+++ b/include/linux/writeback.h
@@ -63,7 +63,6 @@ struct writeback_control {
unsigned for_writepages:1; /* This is a writepages() call */
unsigned range_cyclic:1; /* range_start is cyclic */
unsigned more_io:1; /* more io to be dispatched */
- unsigned range_cont:1;
};

/*
diff --git a/mm/page-writeback.c b/mm/page-writeback.c
index 24de8b6..718efa6 100644
--- a/mm/page-writeback.c
+++ b/mm/page-writeback.c
@@ -961,8 +961,6 @@ int write_cache_pages(struct address_space *mapping,
if (wbc->range_cyclic || (range_whole && wbc->nr_to_write > 0))
mapping->writeback_index = index;

- if (wbc->range_cont)
- wbc->range_start = index << PAGE_CACHE_SHIFT;
return ret;
}
EXPORT_SYMBOL(write_cache_pages);

2008-10-07 09:05:54

by Christoph Hellwig

Subject: Re: [PATCH] Improve buffered streaming write ordering

On Tue, Oct 07, 2008 at 02:15:31PM +0530, Aneesh Kumar K.V wrote:
> +static int ext4_write_cache_pages(struct address_space *mapping,
> + struct writeback_control *wbc, writepage_t writepage,
> + void *data)
> +{

Looking at this function, the only difference is killing the
writeback_index and range_start updates. If they are bad, why would we
only remove them from ext4?

2008-10-07 10:02:57

by Aneesh Kumar K.V

Subject: Re: [PATCH] Improve buffered streaming write ordering

On Tue, Oct 07, 2008 at 05:05:54AM -0400, Christoph Hellwig wrote:
> On Tue, Oct 07, 2008 at 02:15:31PM +0530, Aneesh Kumar K.V wrote:
> > +static int ext4_write_cache_pages(struct address_space *mapping,
> > + struct writeback_control *wbc, writepage_t writepage,
> > + void *data)
> > +{
>
> Looking at this functions the only difference is killing the
> writeback_index and range_start updates. If they are bad why would we
> only remove them from ext4?

I am also not updating wbc->nr_to_write.

ext4 delayed allocation writeback is a bit tricky. It does:

a) Look at the dirty pages and build an in-memory extent of contiguous
logical file blocks. If we used write_cache_pages to do that, it would
update nr_to_write, writeback_index etc. during this stage.

b) Request the block allocator for 'x' blocks. We get the value x from
step a.

c) The block allocator may return fewer than 'x' contiguous blocks. That
would mean the variables updated by write_cache_pages need to be
corrected. The old code was doing that. Chris Mason suggested it would
be easier for ext4 to use a write_cache_pages variant which doesn't
update those variables.

I don't think other filesystems have this requirement.

-aneesh

2008-10-07 13:30:31

by Theodore Ts'o

Subject: Re: [PATCH] Improve buffered streaming write ordering

On Tue, Oct 07, 2008 at 03:32:57PM +0530, Aneesh Kumar K.V wrote:
> On Tue, Oct 07, 2008 at 05:05:54AM -0400, Christoph Hellwig wrote:
> > On Tue, Oct 07, 2008 at 02:15:31PM +0530, Aneesh Kumar K.V wrote:
> > > +static int ext4_write_cache_pages(struct address_space *mapping,
> > > + struct writeback_control *wbc, writepage_t writepage,
> > > + void *data)
> > > +{
> >
> > Looking at this functions the only difference is killing the
> > writeback_index and range_start updates. If they are bad why would we
> > only remove them from ext4?
>
> I am also not updating wbc->nr_to_write.
...
> I don't think other filesystem have this requirement.

That's true, but there is a lot of code duplication, which means that
bugs or changes in write_cache_pages() would need to be fixed in
ext4_write_cache_pages() as well. So another approach that might be
better from a long-term code maintenance point of view is to add a flag
in struct writeback_control that tells write_cache_pages() not to
update those fields, and avoid duplicating approximately 95 lines of
code. It means a change in a core mm function, though, so if folks
think it's too ugly, we can make our own copy in fs/ext4.

Opinions? Andrew, as someone who often weighs in on fs and mm issues,
what do you think? My preference would be to make the change to
mm/page-writeback.c, controlled by a flag which would be set by
fs/ext4 before it calls write_cache_pages().

- Ted

2008-10-07 13:37:15

by Christoph Hellwig

Subject: Re: [PATCH] Improve buffered streaming write ordering

On Tue, Oct 07, 2008 at 09:29:11AM -0400, Theodore Tso wrote:
> That's true, but there is a lot of code duplication, which means that
> bugs or changes in write_cache_pages() would need to be fixed in
> ext4_write_cache_pages(). So another approach that might be better
> from a long-term code maintenance point of view is to add a flag in
> struct writeback_control that tells write_cache_pages() not to update
> those fields, and avoid duplicating approximately 95 lines of code.
> It means a change in a core mm function, though, so if folks thinks
> its too ugly, we can make our own copy in fs/ext4.
>
> Opinions? Andrew, as someone who often weighs in on fs and mm issues,
> what do you think? My preference would be to make the change to
> mm/page-writeback.c, controlled by a flag which ext4 would set be set
> by fs/ext4 before it calls write_cache_pages().

I agree. But I'm still not quite sure that requirement is unique to
ext4 anyway. Give me some time to dive into the writeback code again;
I haven't been there for quite a while.


2008-10-07 13:58:23

by Peter Staubach

Subject: Re: [PATCH] Improve buffered streaming write ordering

Aneesh Kumar K.V wrote:
> On Tue, Oct 07, 2008 at 05:05:54AM -0400, Christoph Hellwig wrote:
>
>> On Tue, Oct 07, 2008 at 02:15:31PM +0530, Aneesh Kumar K.V wrote:
>>
>>> +static int ext4_write_cache_pages(struct address_space *mapping,
>>> + struct writeback_control *wbc, writepage_t writepage,
>>> + void *data)
>>> +{
>>>
>> Looking at this functions the only difference is killing the
>> writeback_index and range_start updates. If they are bad why would we
>> only remove them from ext4?
>>
>
> I am also not updating wbc->nr_to_write.
>
> ext4 delayed allocation writeback is bit tricky. It does
>
> a) Look at the dirty pages and build an in memory extent of contiguous
> logical file blocks. If we use writecache_pages to do that it will
> update nr_to_write, writeback_index etc during this stage.
>
> b) Request the block allocator for 'x' blocks. We get the value x from
> step a.
>
> c) block allocator may return less than 'x' contiguous block. That would
> mean the variables updated by write_cache_pages need to corrected. The
> old code was doing that. Chris Mason suggested it would make it easy
> to use a write_cache_pages which doesn't update the variable for ext4.
>
> I don't think other filesystem have this requirement.

The NFS client can benefit from only writing pages in strictly
ascending offset order. The benefit comes from helping the
server to do better allocations by not sending file data to the
server in random order.

There is also an NFS server in the market which requires data
to be sent in strict ascending offset order. This sort of
support would make interoperating with that server much easier.

Thanx...

ps

2008-10-07 14:38:00

by Chuck Lever

Subject: Re: [PATCH] Improve buffered streaming write ordering


On Oct 7, 2008, at 9:55 AM, Peter Staubach wrote:

> Aneesh Kumar K.V wrote:
>> On Tue, Oct 07, 2008 at 05:05:54AM -0400, Christoph Hellwig wrote:
>>
>>> On Tue, Oct 07, 2008 at 02:15:31PM +0530, Aneesh Kumar K.V wrote:
>>>
>>>> +static int ext4_write_cache_pages(struct address_space *mapping,
>>>> + struct writeback_control *wbc, writepage_t writepage,
>>>> + void *data)
>>>> +{
>>>>
>>> Looking at this functions the only difference is killing the
>>> writeback_index and range_start updates. If they are bad why
>>> would we
>>> only remove them from ext4?
>>>
>>
>> I am also not updating wbc->nr_to_write.
>>
>> ext4 delayed allocation writeback is bit tricky. It does
>>
>> a) Look at the dirty pages and build an in memory extent of
>> contiguous
>> logical file blocks. If we use writecache_pages to do that it will
>> update nr_to_write, writeback_index etc during this stage.
>>
>> b) Request the block allocator for 'x' blocks. We get the value x
>> from
>> step a.
>>
>> c) block allocator may return less than 'x' contiguous block. That
>> would
>> mean the variables updated by write_cache_pages need to corrected.
>> The
>> old code was doing that. Chris Mason suggested it would make it easy
>> to use a write_cache_pages which doesn't update the variable for
>> ext4.
>>
>> I don't think other filesystem have this requirement.
>
> The NFS client can benefit from only writing pages in strictly
> ascending offset order. The benefit comes from helping the
> server to do better allocations by not sending file data to the
> server in random order.

For the record, it would also help prevent the creation of temporary
holes in O_APPEND files.

If an NFS client writes the front and back ends of a request before it
writes the middle, other clients will see a temporary hole in that
file. Applications (especially simple ones like "tail") are often not
prepared for the appearance of such holes.

Across a client crash, data integrity would improve if the client was
less likely to create temporary holes in files.

> There is also an NFS server in the market which requires data
> to be sent in strict ascending offset order. This sort of
> support would make interoperating with that server much easier.

--
Chuck Lever
chuck[dot]lever[at]oracle[dot]com

2008-10-09 15:11:20

by Chris Mason

Subject: Re: [PATCH] Improve buffered streaming write ordering

On Fri, 2008-10-03 at 09:43 +1000, Dave Chinner wrote:
> On Thu, Oct 02, 2008 at 11:48:56PM +0530, Aneesh Kumar K.V wrote:
> > On Thu, Oct 02, 2008 at 08:20:54AM -0400, Chris Mason wrote:
> > > On Wed, 2008-10-01 at 21:52 -0700, Andrew Morton wrote:
> > > For a 4.5GB streaming buffered write, this printk inside
> > > ext4_da_writepage shows up 37,2429 times in /var/log/messages.
> > >
> >
> > Part of that can happen due to the shrink_page_list -> pageout -> writepage
> > callback with lots of unallocated buffer_heads (blocks).
>
> Quite frankly, a simple streaming buffered write should *never*
> trigger writeback from the LRU in memory reclaim. That indicates
> that some feedback loop has broken down and we are not cleaning
> pages fast enough or perhaps in the correct order. Page reclaim in
> this case should be reclaiming clean pages (those that have already
> been written back), not writing back random dirty pages.

Here are some go faster stripes for the XFS buffered writeback. This
patch has a lot of debatable features to it, but the idea is to show
which knobs are slowing us down today.

The first change is to avoid calling balance_dirty_pages_ratelimited on
every page. When we know we're doing a largeish write it makes more
sense to balance things less often. This might just mean our
ratelimit_pages magic value is too small.

The second change makes xfs bump wbc->nr_to_write (suggested by
Christoph), which probably makes delalloc go in bigger chunks.

On unpatched kernels, XFS does streaming writes to my 4 drive array at
around 205MB/s. With the patch below, I come in at 326MB/s. O_DIRECT
runs at 330MB/s, so that's pretty good.

With just the nr_to_write change, I get around 315MB/s.

With just the balance_dirty_pages_nr change, I get around 240MB/s.

-chris

diff --git a/fs/xfs/linux-2.6/xfs_aops.c b/fs/xfs/linux-2.6/xfs_aops.c
index a44d68e..c72bd54 100644
--- a/fs/xfs/linux-2.6/xfs_aops.c
+++ b/fs/xfs/linux-2.6/xfs_aops.c
@@ -944,6 +944,9 @@ xfs_page_state_convert(
int trylock = 0;
int all_bh = unmapped;

+
+ wbc->nr_to_write *= 4;
+
if (startio) {
if (wbc->sync_mode == WB_SYNC_NONE && wbc->nonblocking)
trylock |= BMAPI_TRYLOCK;
diff --git a/mm/filemap.c b/mm/filemap.c
index 876bc59..b6c26e3 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -2389,6 +2389,7 @@ static ssize_t generic_perform_write(struct file *file,
long status = 0;
ssize_t written = 0;
unsigned int flags = 0;
+ unsigned long nr = 0;

/*
* Copies from kernel address space cannot fail (NFSD is a big user).
@@ -2460,11 +2461,17 @@ again:
}
pos += copied;
written += copied;
-
- balance_dirty_pages_ratelimited(mapping);
+ nr++;
+ if (nr > 256) {
+ balance_dirty_pages_ratelimited_nr(mapping, nr);
+ nr = 0;
+ }

} while (iov_iter_count(i));

+ if (nr)
+ balance_dirty_pages_ratelimited_nr(mapping, nr);
+
return written ? written : status;
}


2008-10-10 05:13:45

by Dave Chinner

Subject: Re: [PATCH] Improve buffered streaming write ordering

On Thu, Oct 09, 2008 at 11:11:20AM -0400, Chris Mason wrote:
> On Fri, 2008-10-03 at 09:43 +1000, Dave Chinner wrote:
> > On Thu, Oct 02, 2008 at 11:48:56PM +0530, Aneesh Kumar K.V wrote:
> > > On Thu, Oct 02, 2008 at 08:20:54AM -0400, Chris Mason wrote:
> > > > On Wed, 2008-10-01 at 21:52 -0700, Andrew Morton wrote:
> > > > For a 4.5GB streaming buffered write, this printk inside
> > > > ext4_da_writepage shows up 37,2429 times in /var/log/messages.
> > > >
> > >
> > > Part of that can happen due to shrink_page_list -> pageout -> writepagee
> > > call back with lots of unallocated buffer_heads(blocks).
> >
> > Quite frankly, a simple streaming buffered write should *never*
> > trigger writeback from the LRU in memory reclaim. That indicates
> > that some feedback loop has broken down and we are not cleaning
> > pages fast enough or perhaps in the correct order. Page reclaim in
> > this case should be reclaiming clean pages (those that have already
> > been written back), not writing back random dirty pages.
>
> Here are some go faster stripes for the XFS buffered writeback. This
> patch has a lot of debatable features to it, but the idea is to show
> which knobs are slowing us down today.
>
> The first change is to avoid calling balance_dirty_pages_ratelimited on
> every page. When we know we're doing a largeish write it makes more
> sense to balance things less often. This might just mean our
> ratelimit_pages magic value is too small.

Ok, so how about doing something like this to reduce the
number of balances on large writes, but causing at least one
balance call for every write that occurs:

int nr = 0;
.....
while() {
....
if (!(nr % 256)) {
/* do balance */
}
nr++;
....
}

That way you get a balance on the first page on every write,
but then hold off balancing on that write again for some
number of pages.

> The second change makes xfs bump wbc->nr_to_write (suggested by
> Christoph), which probably makes delalloc go in bigger chunks.

Hmmmm. Reasonable theory. We used to do gigantic delalloc extents -
we paid no attention to congestion and could allocate and write
several GB at a time. Latency was an issue, though, so it got
changed to be bound by nr_to_write.

I guess we need to be issuing larger allocations. Can you remove
your patches and see what effect using the allocsize mount
option has on throughput? This changes the default delalloc EOF
preallocation size, which means more or fewer allocations. The
default is 64k and it can go as high as 1GB, IIRC.
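For reference, the knob Dave mentions is set at mount time; a sketch of
the invocation (device and mountpoint are placeholders):

```shell
# Mount an XFS filesystem with a 1GB delalloc EOF preallocation
# instead of the 64k default.
mount -t xfs -o allocsize=1g /dev/sdb1 /mnt/test
```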

Cheers,

Dave.
--
Dave Chinner
[email protected]