2009-09-01 18:44:53

by Chris Mason

Subject: ext4 writepages is making tiny bios?

Hello everyone,

I've been doing some benchmark runs to speed up btrfs and look at Jens'
new writeback work. One thing that really surprised me is that ext4
seems to be making 4k bios pretty much all the time.

The test I did was:

dd if=/dev/zero of=/mnt/foo bs=1M count=32768

It was done under seekwatcher, so blktrace was running. The blktrace
files for xfs and btrfs were about 60MB, but ext4 was almost 700MB. A
look at the trace shows it is because ext4 is doing everything in 4k writes,
and I'm tracing on top of dm so the traces don't reflect any kind of
merging done by the elevator.

This graph shows the difference:

http://oss.oracle.com/~mason/seekwatcher/trace-buffered.png

When tracing on dm, seekwatcher uses the completion events for IOPs, so
the huge io rate for ext4 just comes from using smaller ios to write the
same data. Note the ext4 performance in this test is quite good, but I
think it would probably be better if it were making bigger bios.

A quick look at the code makes me think it's trying to make big bios, so
I wanted to report it here in case things aren't working the way they
should.

(this version of seekwatcher isn't released yet, but you can grab it out
of the hg repo linked from http://oss.oracle.com/~mason/seekwatcher)

-chris



2009-09-01 20:57:43

by Theodore Ts'o

Subject: Re: ext4 writepages is making tiny bios?

On Tue, Sep 01, 2009 at 02:44:50PM -0400, Chris Mason wrote:
> I've been doing some benchmark runs to speed up btrfs and look at Jens'
> new writeback work. One thing that really surprised me is that ext4
> seems to be making 4k bios pretty much all the time.

Yeah, thanks for pointing that out. As you noted, we're doing 95% of
the work to create big bios, so we can allocate blocks contiguously
for each file. But then, at the very end, in
fs/ext4/inode.c:mpage_da_submit_io(), we end up calling
ext4_writepage() for each page in the extent.

For the case of data=journal, we need to do that, since all of the I/O
requests need to get chopped up into buffer heads and then submitted
through the jbd layer. But in the other journal modes, we should be
able to issue a bio directly. It probably doesn't make that much
difference to ext4's performance (since the elevator will coalesce
the writes), but all that extra work is burning a lot of CPU, and fixing
that would be a Good Thing. (More CPU for the application to do, you
know, Real Work. :-)
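
Very roughly, what the non-journal path could do is build one bio per
contiguous run of dirty pages instead of one per page, along these
lines (a hand-waved sketch only, not the actual ext4 code; the helper
name and arguments are made up for illustration):

#include <linux/bio.h>
#include <linux/fs.h>
#include <linux/gfp.h>
#include <linux/pagemap.h>

/*
 * Illustration only: write a run of physically contiguous dirty pages
 * with a single bio instead of one bio per page.  Assumes nr_pages is
 * no larger than BIO_MAX_PAGES; error handling and the "bio filled up,
 * submit it and start another one" case are omitted.
 */
static void submit_contig_pages(struct block_device *bdev, sector_t sector,
				struct page **pages, int nr_pages,
				bio_end_io_t *end_io, void *private)
{
	struct bio *bio = bio_alloc(GFP_NOFS, nr_pages);
	int i;

	bio->bi_bdev = bdev;
	bio->bi_sector = sector;
	bio->bi_end_io = end_io;
	bio->bi_private = private;

	for (i = 0; i < nr_pages; i++) {
		/* stop once the bio cannot take another full page */
		if (bio_add_page(bio, pages[i], PAGE_CACHE_SIZE, 0) !=
		    PAGE_CACHE_SIZE)
			break;
	}

	submit_bio(WRITE, bio);
}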

> This graph shows the difference:
>
> http://oss.oracle.com/~mason/seekwatcher/trace-buffered.png

Wow, I'm surprised how seeky XFS was in these graphs compared to ext4
and btrfs. I wonder what was going on.

Thanks for pointing that out. I'll have to add that to our "To do"
list for ext4.

- Ted

2009-09-01 21:27:38

by Christoph Hellwig

Subject: Re: ext4 writepages is making tiny bios?

On Tue, Sep 01, 2009 at 04:57:44PM -0400, Theodore Tso wrote:
> > This graph shows the difference:
> >
> > http://oss.oracle.com/~mason/seekwatcher/trace-buffered.png
>
> Wow, I'm surprised how seeky XFS was in these graphs compared to ext4
> and btrfs. I wonder what was going on.

XFS made the mistake of trusting the VM, while everyone more or less
overrode it. Removing all those checks and writing out much larger
data fixes it with a relatively small patch:

http://verein.lst.de/~hch/xfs/xfs-writeback-scaling

When that code was last benchmarked extensively (on SLES9) it
worked nicely to saturate extremely large machines using buffered
I/O; since then VM tuning has basically destroyed it.


2009-09-02 00:17:43

by Chris Mason

Subject: Re: ext4 writepages is making tiny bios?

On Tue, Sep 01, 2009 at 05:27:40PM -0400, Christoph Hellwig wrote:
> On Tue, Sep 01, 2009 at 04:57:44PM -0400, Theodore Tso wrote:
> > > This graph shows the difference:
> > >
> > > http://oss.oracle.com/~mason/seekwatcher/trace-buffered.png
> >
> > Wow, I'm surprised how seeky XFS was in these graphs compared to ext4
> > and btrfs. I wonder what was going on.
>
> XFS made the mistake of trusting the VM, while everyone more or less
> overrode it. Removing all those checks and writing out much larger
> data fixes it with a relatively small patch:
>
> http://verein.lst.de/~hch/xfs/xfs-writeback-scaling
>
> When that code was last benchmarked extensively (on SLES9) it
> worked nicely to saturate extremely large machines using buffered
> I/O; since then VM tuning has basically destroyed it.
>

I sent Christoph other versions of the graphs and tried a few fixes.
With the patches applied, the XFS seek rate is down to almost 0 seeks/sec.

As for the ext4 bio size, this array is just a few SATA drives and is
very tolerant. Real RAID or cciss controllers will benefit much more
from bigger bios.

And most importantly, seekwatcher wouldn't take as long to make the
graphs ;)

-chris


2009-09-03 05:52:01

by Dave Chinner

Subject: Re: ext4 writepages is making tiny bios?

On Tue, Sep 01, 2009 at 05:27:40PM -0400, Christoph Hellwig wrote:
> On Tue, Sep 01, 2009 at 04:57:44PM -0400, Theodore Tso wrote:
> > > This graph shows the difference:
> > >
> > > http://oss.oracle.com/~mason/seekwatcher/trace-buffered.png
> >
> > Wow, I'm surprised how seeky XFS was in these graphs compared to ext4
> > and btrfs. I wonder what was going on.
>
> XFS made the mistake of trusting the VM, while everyone more or less
> overrode it. Removing all those checks and writing out much larger
> data fixes it with a relatively small patch:
>
> http://verein.lst.de/~hch/xfs/xfs-writeback-scaling

Careful:

- tloff = min(tlast, startpage->index + 64);
+ tloff = min(tlast, startpage->index + 8192);

That will cause 64k page machines to try to write back 512MB at a
time. This will re-introduce behaviour similar to that in sles9, where
writeback would only terminate at the end of an extent (because the
mapping end wasn't capped like above).
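
(For reference: 8192 pages * 64k = 512MB per writeback chunk, versus
8192 * 4k = 32MB on a 4k page machine.)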

This has two nasty side effects:

1. horrible fsync latency when streaming writes are
occurring (e.g. NFS writes), which limits throughput
2. a single large streaming write could delay the writeback
of thousands of small files indefinitely.

#1 is still an issue, but #2 might not be so bad compared to sles9
given the way inodes are cycled during writeback now...

> When that code was last benchmarked extensively (on SLES9) it
> worked nicely to saturate extremely large machines using buffered
> I/O; since then VM tuning has basically destroyed it.

It was removed because it caused all sorts of problems and buffered
writes on sles9 were limited by lock contention in XFS, not the VM.
On 2.6.15, pdflush and the code the above patch removes were capable
of pushing more than 6GB/s of buffered writes to a single block
device. VM writeback has gone steadily downhill since then...

Cheers,

Dave.
--
Dave Chinner
[email protected]

2009-09-03 16:42:12

by Christoph Hellwig

Subject: Re: ext4 writepages is making tiny bios?

On Thu, Sep 03, 2009 at 03:52:01PM +1000, Dave Chinner wrote:
> > XFS made the mistake of trusting the VM, while everyone more or less
> > overrode it. Removing all those checks and writing out much larger
> > data fixes it with a relatively small patch:
> >
> > http://verein.lst.de/~hch/xfs/xfs-writeback-scaling
>
> Careful:
>
> - tloff = min(tlast, startpage->index + 64);
> + tloff = min(tlast, startpage->index + 8192);
>
> That will cause 64k page machines to try to write back 512MB at a
> time. This will re-introduce behaviour similar to that in sles9, where
> writeback would only terminate at the end of an extent (because the
> mapping end wasn't capped like above).

Pretty good point, and it applies to all the different things we discussed
recently. Ted, should we maybe introduce a max_writeback_mb instead of
the max_writeback_pages in the VM, too?


2009-09-04 00:15:55

by Theodore Ts'o

Subject: Re: ext4 writepages is making tiny bios?

On Thu, Sep 03, 2009 at 12:42:09PM -0400, Christoph Hellwig wrote:
> > Careful:
> >
> > - tloff = min(tlast, startpage->index + 64);
> > + tloff = min(tlast, startpage->index + 8192);
> >
> > That will cause 64k page machines to try to write back 512MB at a
> > time. This will re-introduce behaviour similar to that in sles9, where
> > writeback would only terminate at the end of an extent (because the
> > mapping end wasn't capped like above).
>
> Pretty good point, and it applies to all the different things we discussed
> recently. Ted, should we maybe introduce a max_writeback_mb instead of
> the max_writeback_pages in the VM, too?

Good point.

Jens, maybe we should replace my patch with this one, which makes the
tunable in terms of megabytes instead of pages?
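
(For scale: with 4k pages the default of 128 works out to
128 << (20 - 12) = 32768 pages per writeback pass, versus the old
hard-coded 1024. Once the patch is applied, the value can be changed
at runtime with e.g. "echo 256 > /proc/sys/vm/max_writeback_mb".)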

- Ted

commit ed48d661394a6b22e9d376a7ad5327c2b9080a9c
Author: Theodore Ts'o <[email protected]>
Date: Tue Sep 1 13:19:06 2009 +0200

vm: Add a tuning knob for vm.max_writeback_mb

Originally, MAX_WRITEBACK_PAGES was hard-coded to 1024 because of a
concern about holding I_SYNC for too long. (At least, that was the
comment previously.) This doesn't make sense now because the only
time we wait for I_SYNC is if we are calling sync or fsync, and in
that case we need to write out all of the data anyway. Previously
there may have been other code paths that waited on I_SYNC, but not
any more.

According to Christoph, the current writeback size is way too small,
and XFS had a hack that bumped up nr_to_write to four times the value
sent by the VM to be able to saturate medium-sized RAID arrays. This
value was also problematic for ext4, as it caused large files to
become interleaved on disk in 8 megabyte chunks (we bumped up
nr_to_write by a factor of two).

So, in this patch, we make MAX_WRITEBACK_PAGES a tunable,
max_writeback_mb, and set it to a default value of 128 megabytes.

http://bugzilla.kernel.org/show_bug.cgi?id=13930

Signed-off-by: "Theodore Ts'o" <[email protected]>

diff --git a/fs/fs-writeback.c b/fs/fs-writeback.c
index 38cb758..a9b230f 100644
--- a/fs/fs-writeback.c
+++ b/fs/fs-writeback.c
@@ -585,14 +585,7 @@ void generic_sync_bdi_inodes(struct writeback_control *wbc)
 	generic_sync_wb_inodes(&bdi->wb, wbc);
 }
 
-/*
- * The maximum number of pages to writeout in a single bdi flush/kupdate
- * operation. We do this so we don't hold I_SYNC against an inode for
- * enormous amounts of time, which would block a userspace task which has
- * been forced to throttle against that inode. Also, the code reevaluates
- * the dirty each time it has written this many pages.
- */
-#define MAX_WRITEBACK_PAGES 1024
+#define MAX_WRITEBACK_PAGES (max_writeback_mb << (20 - PAGE_SHIFT))
 
 static inline bool over_bground_thresh(void)
 {
diff --git a/include/linux/writeback.h b/include/linux/writeback.h
index 34c59f9..57cd3b5 100644
--- a/include/linux/writeback.h
+++ b/include/linux/writeback.h
@@ -103,6 +103,7 @@ extern int vm_dirty_ratio;
 extern unsigned long vm_dirty_bytes;
 extern unsigned int dirty_writeback_interval;
 extern unsigned int dirty_expire_interval;
+extern unsigned int max_writeback_mb;
 extern int vm_highmem_is_dirtyable;
 extern int block_dump;
 extern int laptop_mode;
diff --git a/kernel/sysctl.c b/kernel/sysctl.c
index 58be760..315fc30 100644
--- a/kernel/sysctl.c
+++ b/kernel/sysctl.c
@@ -1104,6 +1104,14 @@ static struct ctl_table vm_table[] = {
 		.proc_handler = &proc_dointvec,
 	},
 	{
+		.ctl_name = CTL_UNNUMBERED,
+		.procname = "max_writeback_mb",
+		.data = &max_writeback_mb,
+		.maxlen = sizeof(max_writeback_mb),
+		.mode = 0644,
+		.proc_handler = &proc_dointvec,
+	},
+	{
 		.ctl_name = VM_NR_PDFLUSH_THREADS,
 		.procname = "nr_pdflush_threads",
 		.data = &nr_pdflush_threads,
diff --git a/mm/page-writeback.c b/mm/page-writeback.c
index 0fce7df..77decaa 100644
--- a/mm/page-writeback.c
+++ b/mm/page-writeback.c
@@ -55,6 +55,12 @@ static inline long sync_writeback_pages(void)
 /* The following parameters are exported via /proc/sys/vm */
 
 /*
+ * The maximum amount of memory (in megabytes) to write out in a
+ * single bdflush/kupdate operation.
+ */
+unsigned int max_writeback_mb = 128;
+
+/*
  * Start background writeback (via pdflush) at this percentage
  */
 int dirty_background_ratio = 10;

2009-09-04 07:20:13

by Jens Axboe

Subject: Re: ext4 writepages is making tiny bios?

On Thu, Sep 03 2009, Theodore Tso wrote:
> On Thu, Sep 03, 2009 at 12:42:09PM -0400, Christoph Hellwig wrote:
> > > Careful:
> > >
> > > - tloff = min(tlast, startpage->index + 64);
> > > + tloff = min(tlast, startpage->index + 8192);
> > >
> > > That will cause 64k page machines to try to write back 512MB at a
> > > time. This will re-introduce behaviour similar to that in sles9, where
> > > writeback would only terminate at the end of an extent (because the
> > > mapping end wasn't capped like above).
> >
> > Pretty good point, and it applies to all the different things we discussed
> > recently. Ted, should we maybe introduce a max_writeback_mb instead of
> > the max_writeback_pages in the VM, too?
>
> Good point.
>
> Jens, maybe we should replace my patch with this one, which makes the
> tunable in terms of megabytes instead of pages?

That is probably a better metric than 'pages', let's update it.

--
Jens Axboe