2008-01-24 20:43:51

by Al Boldi

Subject: [RFC] ext3: per-process soft-syncing data=ordered mode

Greetings!

data=ordered mode has proven reliable over the years, and it does this by
ordering filedata flushes before metadata flushes. But this sometimes
causes contention on the order of a 10x slowdown for certain apps, either
due to misuse of fsync or due to inherent behaviour like that of db's, as
well as inherent starvation issues exposed by data=ordered mode.

data=writeback mode alleviates data=ordered mode slowdowns, but it only
works per-mount and is too dangerous to run as a default mode.

This RFC proposes to introduce a tunable that allows disabling fsync and
changing ordered into writeback writeout on a per-process basis, like this:

echo 1 > /proc/`pidof process`/softsync


Your comments are much welcome!


Thanks!

--
Al


2008-01-24 21:52:16

by Diego Calleja

Subject: Re: [RFC] ext3: per-process soft-syncing data=ordered mode

El Thu, 24 Jan 2008 23:36:00 +0300, Al Boldi <[email protected]> escribió:

> Greetings!
>
> data=ordered mode has proven reliable over the years, and it does this by
> ordering filedata flushes before metadata flushes. But this sometimes
> causes contention in the order of a 10x slowdown for certain apps, either
> due to the misuse of fsync or due to inherent behaviour like db's, as well
> as inherent starvation issues exposed by the data=ordered mode.

There's a related bug in bugzilla: http://bugzilla.kernel.org/show_bug.cgi?id=9546

The diagnosis from Jan Kara is different, but I think it may be the same
problem...

"One process does data-intensive load. Thus in the ordered mode the
transaction is tiny but has tons of data buffers attached. If commit
happens, it takes a long time to sync all the data before the commit
can proceed... In the writeback mode, we don't wait for data buffers, in
the journal mode amount of data to be written is really limited by the
maximum size of a transaction and so we write by much smaller chunks
and better latency is thus ensured."


I'm hitting this bug too... it's surprising that there aren't more people
reporting bugs about this, because it's really annoying.


There's a patch by Jan Kara (which I'm including here because bugzilla didn't
include it and it took me a while to find). I don't know if it's supposed to
fix this problem, but it'd be interesting to try:




Don't allow too many data buffers in a transaction.

diff --git a/fs/jbd/transaction.c b/fs/jbd/transaction.c
index 08ff6c7..e6f9dd6 100644
--- a/fs/jbd/transaction.c
+++ b/fs/jbd/transaction.c
@@ -163,7 +163,7 @@ repeat_locked:
spin_lock(&transaction->t_handle_lock);
needed = transaction->t_outstanding_credits + nblocks;

- if (needed > journal->j_max_transaction_buffers) {
+ if (needed > journal->j_max_transaction_buffers || atomic_read(&transaction->t_data_buf_count) > 32768) {
/*
* If the current transaction is already too large, then start
* to commit it: we can then go back and attach this handle to
@@ -1528,6 +1528,7 @@ static void __journal_temp_unlink_buffer(struct journal_head *jh)
return;
case BJ_SyncData:
list = &transaction->t_sync_datalist;
+ atomic_dec(&transaction->t_data_buf_count);
break;
case BJ_Metadata:
transaction->t_nr_buffers--;
@@ -1989,6 +1990,7 @@ void __journal_file_buffer(struct journal_head *jh,
return;
case BJ_SyncData:
list = &transaction->t_sync_datalist;
+ atomic_inc(&transaction->t_data_buf_count);
break;
case BJ_Metadata:
transaction->t_nr_buffers++;
diff --git a/include/linux/jbd.h b/include/linux/jbd.h
index d9ecd13..6dd284a 100644
--- a/include/linux/jbd.h
+++ b/include/linux/jbd.h
@@ -541,6 +541,12 @@ struct transaction_s
int t_outstanding_credits;

/*
+ * Number of data buffers on t_sync_datalist attached to
+ * the transaction.
+ */
+ atomic_t t_data_buf_count;
+
+ /*
* Forward and backward links for the circular list of all transactions
* awaiting checkpoint. [j_list_lock]
*/

2008-01-24 21:59:17

by Valdis Klētnieks

Subject: Re: [RFC] ext3: per-process soft-syncing data=ordered mode

On Thu, 24 Jan 2008 23:36:00 +0300, Al Boldi said:
> data=ordered mode has proven reliable over the years, and it does this by
> ordering filedata flushes before metadata flushes. But this sometimes
> causes contention in the order of a 10x slowdown for certain apps, either
> due to the misuse of fsync or due to inherent behaviour like db's, as well
> as inherent starvation issues exposed by the data=ordered mode.

If they're misusing it, they should be fixed. There should be a limit to
how much the kernel will do to reduce the pain of doing stupid things.

> This RFC proposes to introduce a tunable which allows to disable fsync and
> changes ordered into writeback writeout on a per-process basis like this:

Well-written programs only call fsync() when they really do need the semantics
of fsync. Disabling that is just *asking* for trouble.

From RFC 2821:

6.1 Reliable Delivery and Replies by Email

When the receiver-SMTP accepts a piece of mail (by sending a "250 OK"
message in response to DATA), it is accepting responsibility for
delivering or relaying the message. It must take this responsibility
seriously. It MUST NOT lose the message for frivolous reasons, such
as because the host later crashes or because of a predictable
resource shortage.

Some people really *do* think "the CPU took a machine check and after replacing
the motherboard, the resulting fsck ate the file" is a "frivolous" reason to
lose data.

But if you want to give them enough rope to shoot themselves in the foot with,
I'd suggest abusing LD_PRELOAD to replace the fsync() glibc code instead. No
need to clutter the kernel with rope that can be (and has been) done in userspace.



2008-01-25 01:20:20

by Chris Snook

Subject: Re: [RFC] ext3: per-process soft-syncing data=ordered mode

Al Boldi wrote:
> Greetings!
>
> data=ordered mode has proven reliable over the years, and it does this by
> ordering filedata flushes before metadata flushes. But this sometimes
> causes contention in the order of a 10x slowdown for certain apps, either
> due to the misuse of fsync or due to inherent behaviour like db's, as well
> as inherent starvation issues exposed by the data=ordered mode.
>
> data=writeback mode alleviates data=order mode slowdowns, but only works
> per-mount and is too dangerous to run as a default mode.
>
> This RFC proposes to introduce a tunable which allows to disable fsync and
> changes ordered into writeback writeout on a per-process basis like this:
>
> echo 1 > /proc/`pidof process`/softsync
>
>
> Your comments are much welcome!

This is basically a kernel workaround for stupid app behavior. It wouldn't be
the first time we've provided such an option, but we shouldn't do it without a
very good justification. At the very least, we need a test case that
demonstrates the problem and benchmark results that prove that this approach
actually fixes it. I suspect we can find a cleaner fix for the problem.

-- Chris

2008-01-25 15:36:44

by Jan Kara

Subject: Re: [RFC] ext3: per-process soft-syncing data=ordered mode

> Greetings!
>
> data=ordered mode has proven reliable over the years, and it does this by
> ordering filedata flushes before metadata flushes. But this sometimes
> causes contention in the order of a 10x slowdown for certain apps, either
> due to the misuse of fsync or due to inherent behaviour like db's, as well
> as inherent starvation issues exposed by the data=ordered mode.
>
> data=writeback mode alleviates data=order mode slowdowns, but only works
> per-mount and is too dangerous to run as a default mode.
>
> This RFC proposes to introduce a tunable which allows to disable fsync and
> changes ordered into writeback writeout on a per-process basis like this:
>
> echo 1 > /proc/`pidof process`/softsync
I guess disabling fsync() was already commented on enough. Regarding
switching to writeback mode on a per-process basis - that's not easily
possible, because sometimes data is not written out by the process which
stored it (think of an mmapped file). And in the case of DBs, they use
direct I/O most of the time anyway, so they don't care about the
journaling mode. But as Diego wrote, there is definitely some room for
improvement in the current data=ordered mode, so the difference shouldn't
be as big in the end.

Honza
--
Jan Kara <[email protected]>
SuSE CR Labs

2008-01-25 20:24:52

by Andreas Dilger

Subject: Re: [RFC] ext3: per-process soft-syncing data=ordered mode

On Jan 24, 2008 23:36 +0300, Al Boldi wrote:
> data=ordered mode has proven reliable over the years, and it does this by
> ordering filedata flushes before metadata flushes. But this sometimes
> causes contention in the order of a 10x slowdown for certain apps, either
> due to the misuse of fsync or due to inherent behaviour like db's, as well
> as inherent starvation issues exposed by the data=ordered mode.
>
> data=writeback mode alleviates data=order mode slowdowns, but only works
> per-mount and is too dangerous to run as a default mode.
>
> This RFC proposes to introduce a tunable which allows to disable fsync and
> changes ordered into writeback writeout on a per-process basis like this:
>
> echo 1 > /proc/`pidof process`/softsync

If fsync performance is an issue for you, run the filesystem in data=journal
mode, put the journal on a separate disk and make it big enough that you
don't block on it to flush the data to the filesystem (but not so big that
it is consuming all of your RAM).

That keeps your data guarantees without hurting performance.

Cheers, Andreas
--
Andreas Dilger
Sr. Staff Engineer, Lustre Group
Sun Microsystems of Canada, Inc.

2008-01-25 20:46:46

by David Lang

Subject: Re: [RFC] ext3: per-process soft-syncing data=ordered mode

On Thu, 24 Jan 2008, Andreas Dilger wrote:

> On Jan 24, 2008 23:36 +0300, Al Boldi wrote:
>> data=ordered mode has proven reliable over the years, and it does this by
>> ordering filedata flushes before metadata flushes. But this sometimes
>> causes contention in the order of a 10x slowdown for certain apps, either
>> due to the misuse of fsync or due to inherent behaviour like db's, as well
>> as inherent starvation issues exposed by the data=ordered mode.
>>
>> data=writeback mode alleviates data=order mode slowdowns, but only works
>> per-mount and is too dangerous to run as a default mode.
>>
>> This RFC proposes to introduce a tunable which allows to disable fsync and
>> changes ordered into writeback writeout on a per-process basis like this:
>>
>> echo 1 > /proc/`pidof process`/softsync
>
> If fsync performance is an issue for you, run the filesystem in data=journal
> mode, put the journal on a separate disk and make it big enough that you
> don't block on it to flush the data to the filesystem (but not so big that
> it is consuming all of your RAM).

My understanding is that the journal is limited to 128M or so. This
prevents you from making it big enough to avoid all problems.

David Lang

> That keeps your data guarantees without hurting performance.
>
> Cheers, Andreas
> --
> Andreas Dilger
> Sr. Staff Engineer, Lustre Group
> Sun Microsystems of Canada, Inc.

2008-01-26 05:28:24

by Al Boldi

Subject: Re: [RFC] ext3: per-process soft-syncing data=ordered mode

Diego Calleja wrote:
> El Thu, 24 Jan 2008 23:36:00 +0300, Al Boldi <[email protected]> escribió:
> > Greetings!
> >
> > data=ordered mode has proven reliable over the years, and it does this
> > by ordering filedata flushes before metadata flushes. But this
> > sometimes causes contention in the order of a 10x slowdown for certain
> > apps, either due to the misuse of fsync or due to inherent behaviour
> > like db's, as well as inherent starvation issues exposed by the
> > data=ordered mode.
>
> There's a related bug in bugzilla:
> http://bugzilla.kernel.org/show_bug.cgi?id=9546
>
> The diagnostic from Jan Kara is different though, but I think it may be
> the same problem...
>
> "One process does data-intensive load. Thus in the ordered mode the
> transaction is tiny but has tons of data buffers attached. If commit
> happens, it takes a long time to sync all the data before the commit
> can proceed... In the writeback mode, we don't wait for data buffers, in
> the journal mode amount of data to be written is really limited by the
> maximum size of a transaction and so we write by much smaller chunks
> and better latency is thus ensured."
>
>
> I'm hitting this bug too...it's surprising that there's not many people
> reporting more bugs about this, because it's really annoying.
>
>
> There's a patch by Jan Kara (that I'm including here because bugzilla
> didn't include it and took me a while to find it) which I don't know if
> it's supposed to fix the problem , but it'd be interesting to try:


Thanks a lot, but it doesn't fix it.

--
Al

2008-01-26 05:28:39

by Al Boldi

Subject: Re: [RFC] ext3: per-process soft-syncing data=ordered mode

[email protected] wrote:
> On Thu, 24 Jan 2008 23:36:00 +0300, Al Boldi said:
> > This RFC proposes to introduce a tunable which allows to disable fsync
> > and changes ordered into writeback writeout on a per-process basis like
> > this:
:
:
> But if you want to give them enough rope to shoot themselves in the foot
> with, I'd suggest abusing LD_PRELOAD to replace the fsync() glibc code
> instead. No need to clutter the kernel with rope that can be (and has
> been) done in userspace.

Ok, that's possible, but since you cannot use LD_PRELOAD to change ordered
into writeback mode, we might as well allow disabling fsync here too,
because it falls in the same use-case.


Thanks!

--
Al

2008-01-26 05:28:56

by Al Boldi

Subject: Re: [RFC] ext3: per-process soft-syncing data=ordered mode

Jan Kara wrote:
> > Greetings!
> >
> > data=ordered mode has proven reliable over the years, and it does this
> > by ordering filedata flushes before metadata flushes. But this
> > sometimes causes contention in the order of a 10x slowdown for certain
> > apps, either due to the misuse of fsync or due to inherent behaviour
> > like db's, as well as inherent starvation issues exposed by the
> > data=ordered mode.
> >
> > data=writeback mode alleviates data=order mode slowdowns, but only works
> > per-mount and is too dangerous to run as a default mode.
> >
> > This RFC proposes to introduce a tunable which allows to disable fsync
> > and changes ordered into writeback writeout on a per-process basis like
> > this:
> >
> > echo 1 > /proc/`pidof process`/softsync
>
> I guess disabling fsync() was already commented on enough. Regarding
> switching to writeback mode on per-process basis - not easily possible
> because sometimes data is not written out by the process which stored
> them (think of mmaped file).

Do you mean there is a locking problem?

> And in case of DB, they use direct-io
> anyway most of the time so they don't care about journaling mode anyway.

Testing with sqlite3 and mysql4 shows that performance drastically improves
with writeback writeout.

> But as Diego wrote, there is definitely some room for improvement in
> current data=ordered mode so the difference shouldn't be as big in the
> end.

Yes, it would be nice to get to the bottom of this starvation problem, but
even then, the proposed tunable remains useful for misbehaving apps.


Thanks!

--
Al

2008-01-26 05:29:44

by Al Boldi

Subject: Re: [RFC] ext3: per-process soft-syncing data=ordered mode

Chris Snook wrote:
> Al Boldi wrote:
> > Greetings!
> >
> > data=ordered mode has proven reliable over the years, and it does this
> > by ordering filedata flushes before metadata flushes. But this
> > sometimes causes contention in the order of a 10x slowdown for certain
> > apps, either due to the misuse of fsync or due to inherent behaviour
> > like db's, as well as inherent starvation issues exposed by the
> > data=ordered mode.
> >
> > data=writeback mode alleviates data=order mode slowdowns, but only works
> > per-mount and is too dangerous to run as a default mode.
> >
> > This RFC proposes to introduce a tunable which allows to disable fsync
> > and changes ordered into writeback writeout on a per-process basis like
> > this:
> >
> > echo 1 > /proc/`pidof process`/softsync
> >
> >
> > Your comments are much welcome!
>
> This is basically a kernel workaround for stupid app behavior.

Exactly right to some extent, but don't forget the underlying data=ordered
starvation problem, which looks like a genuinely deep problem, maybe
related to block I/O.

> It
> wouldn't be the first time we've provided such an option, but we shouldn't
> do it without a very good justification. At the very least, we need a
> test case that demonstrates the problem

See the 'konqueror deadlocks in 2.6.22' thread.

> and benchmark results that prove that this approach actually fixes it.

8M-record insert into indexed db-table:

          ordered   writeback
sqlite3:  75m22s    8m45s
mysql4 :  23m35s    5m29s

> I suspect we can find a cleaner fix for the problem.

I hope so, but even with a fix available addressing the data=ordered
starvation issue, this tunable could remain useful for those apps that
misbehave.


Thanks!

--
Al

2008-01-28 17:27:38

by Jan Kara

Subject: Re: [RFC] ext3: per-process soft-syncing data=ordered mode

On Sat 26-01-08 08:27:59, Al Boldi wrote:
> Jan Kara wrote:
> > > Greetings!
> > >
> > > data=ordered mode has proven reliable over the years, and it does this
> > > by ordering filedata flushes before metadata flushes. But this
> > > sometimes causes contention in the order of a 10x slowdown for certain
> > > apps, either due to the misuse of fsync or due to inherent behaviour
> > > like db's, as well as inherent starvation issues exposed by the
> > > data=ordered mode.
> > >
> > > data=writeback mode alleviates data=order mode slowdowns, but only works
> > > per-mount and is too dangerous to run as a default mode.
> > >
> > > This RFC proposes to introduce a tunable which allows to disable fsync
> > > and changes ordered into writeback writeout on a per-process basis like
> > > this:
> > >
> > > echo 1 > /proc/`pidof process`/softsync
> >
> > I guess disabling fsync() was already commented on enough. Regarding
> > switching to writeback mode on per-process basis - not easily possible
> > because sometimes data is not written out by the process which stored
> > them (think of mmaped file).
>
> Do you mean there is a locking problem?
No, but if you write to an mmapped file, then we find out only later
that we have dirty data in the pages, and writepage() is then called on
behalf of e.g. pdflush().

> > And in case of DB, they use direct-io
> > anyway most of the time so they don't care about journaling mode anyway.
>
> Testing with sqlite3 and mysql4 shows that performance drastically improves
> with writeback writeout.
And do you have the databases configured to use direct IO or not?

Honza
--
Jan Kara <[email protected]>
SUSE Labs, CR

2008-01-28 17:34:20

by Jan Kara

Subject: Re: [RFC] ext3: per-process soft-syncing data=ordered mode

On Sat 26-01-08 08:27:43, Al Boldi wrote:
> Diego Calleja wrote:
> > El Thu, 24 Jan 2008 23:36:00 +0300, Al Boldi <[email protected]> escribió:
> > > Greetings!
> > >
> > > data=ordered mode has proven reliable over the years, and it does this
> > > by ordering filedata flushes before metadata flushes. But this
> > > sometimes causes contention in the order of a 10x slowdown for certain
> > > apps, either due to the misuse of fsync or due to inherent behaviour
> > > like db's, as well as inherent starvation issues exposed by the
> > > data=ordered mode.
> >
> > There's a related bug in bugzilla:
> > http://bugzilla.kernel.org/show_bug.cgi?id=9546
> >
> > The diagnostic from Jan Kara is different though, but I think it may be
> > the same problem...
> >
> > "One process does data-intensive load. Thus in the ordered mode the
> > transaction is tiny but has tons of data buffers attached. If commit
> > happens, it takes a long time to sync all the data before the commit
> > can proceed... In the writeback mode, we don't wait for data buffers, in
> > the journal mode amount of data to be written is really limited by the
> > maximum size of a transaction and so we write by much smaller chunks
> > and better latency is thus ensured."
> >
> >
> > I'm hitting this bug too...it's surprising that there's not many people
> > reporting more bugs about this, because it's really annoying.
> >
> >
> > There's a patch by Jan Kara (that I'm including here because bugzilla
> > didn't include it and took me a while to find it) which I don't know if
> > it's supposed to fix the problem , but it'd be interesting to try:
>
> Thanks a lot, but it doesn't fix it.
Hmm, if you're willing to test patches, then you could try a debug patch:
http://bugzilla.kernel.org/attachment.cgi?id=14574
and send me the output. What kind of load do you observe problems with
and which problems exactly?

Honza
--
Jan Kara <[email protected]>
SUSE Labs, CR

2008-01-28 20:17:46

by Al Boldi

Subject: Re: [RFC] ext3: per-process soft-syncing data=ordered mode

Jan Kara wrote:
> On Sat 26-01-08 08:27:59, Al Boldi wrote:
> > Do you mean there is a locking problem?
>
> No, but if you write to an mmaped file, then we can find out only later
> we have dirty data in pages and we call writepage() on behalf of e.g.
> pdflush().

Ok, that's a special case, which we could code for, but it doesn't seem
worthwhile. In any case, child forks should inherit their parent's mode.

> > > And in case of DB, they use direct-io
> > > anyway most of the time so they don't care about journaling mode
> > > anyway.
> >
> > Testing with sqlite3 and mysql4 shows that performance drastically
> > improves with writeback writeout.
>
> And do you have the databases configured to use direct IO or not?

I don't think so, but these tests are only meant to expose the underlying
problem which needs to be fixed, while this RFC proposes a useful
workaround.

In another post Jan Kara wrote:
> Hmm, if you're willing to test patches, then you could try a debug
> patch: http://bugzilla.kernel.org/attachment.cgi?id=14574
> and send me the output. What kind of load do you observe problems with
> and which problems exactly?

8M-record insert into indexed db-table:

          ordered   writeback
sqlite3:  75m22s    8m45s
mysql4 :  23m35s    5m29s

Also, see the 'konqueror deadlocks in 2.6.22' thread.


Thanks!

--
Al

2008-01-29 17:22:54

by Jan Kara

Subject: Re: [RFC] ext3: per-process soft-syncing data=ordered mode

> Chris Snook wrote:
> > Al Boldi wrote:
> > > Greetings!
> > >
> > > data=ordered mode has proven reliable over the years, and it does this
> > > by ordering filedata flushes before metadata flushes. But this
> > > sometimes causes contention in the order of a 10x slowdown for certain
> > > apps, either due to the misuse of fsync or due to inherent behaviour
> > > like db's, as well as inherent starvation issues exposed by the
> > > data=ordered mode.
> > >
> > > data=writeback mode alleviates data=order mode slowdowns, but only works
> > > per-mount and is too dangerous to run as a default mode.
> > >
> > > This RFC proposes to introduce a tunable which allows to disable fsync
> > > and changes ordered into writeback writeout on a per-process basis like
> > > this:
> > >
> > > echo 1 > /proc/`pidof process`/softsync
> > >
> > >
> > > Your comments are much welcome!
> >
> > This is basically a kernel workaround for stupid app behavior.
>
> Exactly right to some extent, but don't forget the underlying data=ordered
> starvation problem, which looks like a genuinely deep problem maybe related
> to blockIO.
It is a problem with the way ext3 does fsync (at least that's what
we ended up with in that konqueror problem)... It has to flush the
current transaction, which means that an app doing fsync() has to wait
till all dirty data of all files on the filesystem are written (if we
are in ordered mode). And that takes quite some time... There are ways
to avoid that, but especially with freshly created files it's tough,
and I don't see a way to do it without some fundamental changes to
JBD.

Honza
--
Jan Kara <[email protected]>
SuSE CR Labs

2008-01-30 06:07:45

by Al Boldi

Subject: Re: [RFC] ext3: per-process soft-syncing data=ordered mode

Jan Kara wrote:
> > Chris Snook wrote:
> > > Al Boldi wrote:
> > > > This RFC proposes to introduce a tunable which allows to disable
> > > > fsync and changes ordered into writeback writeout on a per-process
> > > > basis like this:
> > > >
> > > > echo 1 > /proc/`pidof process`/softsync
> > >
> > > This is basically a kernel workaround for stupid app behavior.
> >
> > Exactly right to some extent, but don't forget the underlying
> > data=ordered starvation problem, which looks like a genuinely deep
> > problem maybe related to blockIO.
>
> It is a problem with the way how ext3 does fsync (at least that's what
> we ended up with in that konqueror problem)... It has to flush the
> current transaction which means that app doing fsync() has to wait till
> all dirty data of all files on the filesystem are written (if we are in
> ordered mode). And that takes quite some time... There are possibilities
> how to avoid that but especially with freshly created files, it's tough
> and I don't see a way how to do it without some fundamental changes to
> JBD.

Ok, but keep in mind that this starvation occurs even in the absence of
fsync, as the benchmarks show.

And, a quick test of successive 1sec delayed syncs shows no hangs until about
1 minute (~180mb) of db-writeout activity, when the sync abruptly hangs for
minutes on end, and io-wait shows almost 100%.

Now it turns out that 'echo 3 > /proc/.../drop_caches' has no effect, but
doing it a few more times makes the hangs go away for a while, only to come
back again and again.


Thanks!

--
Al

2008-01-30 14:31:21

by Chris Mason

Subject: Re: [RFC] ext3: per-process soft-syncing data=ordered mode

On Wednesday 30 January 2008, Al Boldi wrote:
> Jan Kara wrote:
> > > Chris Snook wrote:
> > > > Al Boldi wrote:
> > > > > This RFC proposes to introduce a tunable which allows to disable
> > > > > fsync and changes ordered into writeback writeout on a per-process
> > > > > basis like this:
> > > > >
> > > > > echo 1 > /proc/`pidof process`/softsync
> > > >
> > > > This is basically a kernel workaround for stupid app behavior.
> > >
> > > Exactly right to some extent, but don't forget the underlying
> > > data=ordered starvation problem, which looks like a genuinely deep
> > > problem maybe related to blockIO.
> >
> > It is a problem with the way how ext3 does fsync (at least that's what
> > we ended up with in that konqueror problem)... It has to flush the
> > current transaction which means that app doing fsync() has to wait till
> > all dirty data of all files on the filesystem are written (if we are in
> > ordered mode). And that takes quite some time... There are possibilities
> > how to avoid that but especially with freshly created files, it's tough
> > and I don't see a way how to do it without some fundamental changes to
> > JBD.
>
> Ok, but keep in mind that this starvation occurs even in the absence of
> fsync, as the benchmarks show.
>
> And, a quick test of successive 1sec delayed syncs shows no hangs until
> about 1 minute (~180mb) of db-writeout activity, when the sync abruptly
> hangs for minutes on end, and io-wait shows almost 100%.

Do you see this on older kernels as well? The first thing we need to
understand is if this particular stall is new.

-chris

2008-01-30 18:42:27

by Al Boldi

Subject: Re: [RFC] ext3: per-process soft-syncing data=ordered mode

Chris Mason wrote:
> On Wednesday 30 January 2008, Al Boldi wrote:
> > Jan Kara wrote:
> > > > Chris Snook wrote:
> > > > > Al Boldi wrote:
> > > > > > This RFC proposes to introduce a tunable which allows to disable
> > > > > > fsync and changes ordered into writeback writeout on a
> > > > > > per-process basis like this:
> > > > > >
> > > > > > echo 1 > /proc/`pidof process`/softsync
> > > > >
> > > > > This is basically a kernel workaround for stupid app behavior.
> > > >
> > > > Exactly right to some extent, but don't forget the underlying
> > > > data=ordered starvation problem, which looks like a genuinely deep
> > > > problem maybe related to blockIO.
> > >
> > > It is a problem with the way how ext3 does fsync (at least that's
> > > what we ended up with in that konqueror problem)... It has to flush
> > > the current transaction which means that app doing fsync() has to wait
> > > till all dirty data of all files on the filesystem are written (if we
> > > are in ordered mode). And that takes quite some time... There are
> > > possibilities how to avoid that but especially with freshly created
> > > files, it's tough and I don't see a way how to do it without some
> > > fundamental changes to JBD.
> >
> > Ok, but keep in mind that this starvation occurs even in the absence of
> > fsync, as the benchmarks show.
> >
> > And, a quick test of successive 1sec delayed syncs shows no hangs until
> > about 1 minute (~180mb) of db-writeout activity, when the sync abruptly
> > hangs for minutes on end, and io-wait shows almost 100%.
>
> Do you see this on older kernels as well? The first thing we need to
> understand is if this particular stall is new.

2.6.24,22,19 and 2.4.32 show the same problem.


Thanks!

--
Al

2008-01-31 00:39:18

by Andreas Dilger

Subject: Re: [RFC] ext3: per-process soft-syncing data=ordered mode

On Wednesday 30 January 2008, Al Boldi wrote:
> And, a quick test of successive 1sec delayed syncs shows no hangs until
> about 1 minute (~180mb) of db-writeout activity, when the sync abruptly
> hangs for minutes on end, and io-wait shows almost 100%.

How large is the journal in this filesystem? You can check via
"debugfs -R 'stat <8>' /dev/XXX". Is this affected by increasing
the journal size? You can set the journal size via "mke2fs -J size=400"
at format time, or on an unmounted filesystem by running
"tune2fs -O ^has_journal /dev/XXX" then "tune2fs -J size=400 /dev/XXX".

I suspect that the stall is caused by the journal filling up, and then
waiting while the entire journal is checkpointed back to the filesystem
before the next transaction can start.

It is possible to improve this behaviour in JBD by reducing the amount
of space that is cleared when the journal becomes "full", and also by
doing journal checkpointing before it becomes full. While that may reduce
performance a small amount, it would help avoid such huge latency problems.
I believe we have such a patch in one of the Lustre branches already, and
while I'm not sure which kernel it is for, the JBD code rarely changes
much....

Cheers, Andreas
--
Andreas Dilger
Sr. Staff Engineer, Lustre Group
Sun Microsystems of Canada, Inc.

2008-01-31 06:23:53

by Al Boldi

Subject: Re: [RFC] ext3: per-process soft-syncing data=ordered mode

Andreas Dilger wrote:
> On Wednesday 30 January 2008, Al Boldi wrote:
> > And, a quick test of successive 1sec delayed syncs shows no hangs until
> > about 1 minute (~180mb) of db-writeout activity, when the sync abruptly
> > hangs for minutes on end, and io-wait shows almost 100%.
>
> How large is the journal in this filesystem? You can check via
> "debugfs -R 'stat <8>' /dev/XXX".

32mb.

> Is this affected by increasing
> the journal size? You can set the journal size via "mke2fs -J size=400"
> at format time, or on an unmounted filesystem by running
> "tune2fs -O ^has_journal /dev/XXX" then "tune2fs -J size=400 /dev/XXX".

Setting size=400 doesn't help, nor does size=4.

> I suspect that the stall is caused by the journal filling up, and then
> waiting while the entire journal is checkpointed back to the filesystem
> before the next transaction can start.
>
> It is possible to improve this behaviour in JBD by reducing the amount
> of space that is cleared if the journal becomes "full", and also doing
> journal checkpointing before it becomes full. While that may reduce
> performance a small amount, it would help avoid such huge latency
> problems. I believe we have such a patch in one of the Lustre branches
> already, and while I'm not sure what kernel it is for the JBD code rarely
> changes much....

The big difference between ordered and writeback is that once the slowdown
starts, ordered goes into ~100% iowait, whereas writeback continues 100%
user.


Thanks!

--
Al

2008-01-31 16:58:04

by Chris Mason

[permalink] [raw]
Subject: Re: [RFC] ext3: per-process soft-syncing data=ordered mode

On Thursday 31 January 2008, Al Boldi wrote:
> Andreas Dilger wrote:
> > On Wednesday 30 January 2008, Al Boldi wrote:
> > > And, a quick test of successive 1sec delayed syncs shows no hangs until
> > > about 1 minute (~180mb) of db-writeout activity, when the sync abruptly
> > > hangs for minutes on end, and io-wait shows almost 100%.
> >
> > How large is the journal in this filesystem? You can check via
> > "debugfs -R 'stat <8>' /dev/XXX".
>
> 32mb.
>
> > Is this affected by increasing
> > the journal size? You can set the journal size via "mke2fs -J size=400"
> > at format time, or on an unmounted filesystem by running
> > "tune2fs -O ^has_journal /dev/XXX" then "tune2fs -J size=400 /dev/XXX".
>
> Setting size=400 doesn't help, nor does size=4.
>
> > I suspect that the stall is caused by the journal filling up, and then
> > waiting while the entire journal is checkpointed back to the filesystem
> > before the next transaction can start.
> >
> > It is possible to improve this behaviour in JBD by reducing the amount
> > of space that is cleared if the journal becomes "full", and also doing
> > journal checkpointing before it becomes full. While that may reduce
> > performance a small amount, it would help avoid such huge latency
> > problems. I believe we have such a patch in one of the Lustre branches
> > already, and while I'm not sure what kernel it is for the JBD code rarely
> > changes much....
>
> The big difference between ordered and writeback is that once the slowdown
> starts, ordered goes into ~100% iowait, whereas writeback continues 100%
> user.

Does data=ordered write buffers in the order they were dirtied? This might
explain the extreme problems in transactional workloads.

-chris

2008-01-31 17:11:05

by Jan Kara

[permalink] [raw]
Subject: Re: [RFC] ext3: per-process soft-syncing data=ordered mode

On Thu 31-01-08 11:56:01, Chris Mason wrote:
> On Thursday 31 January 2008, Al Boldi wrote:
> > Andreas Dilger wrote:
> > > On Wednesday 30 January 2008, Al Boldi wrote:
> > > > And, a quick test of successive 1sec delayed syncs shows no hangs until
> > > > about 1 minute (~180mb) of db-writeout activity, when the sync abruptly
> > > > hangs for minutes on end, and io-wait shows almost 100%.
> > >
> > > How large is the journal in this filesystem? You can check via
> > > "debugfs -R 'stat <8>' /dev/XXX".
> >
> > 32mb.
> >
> > > Is this affected by increasing
> > > the journal size? You can set the journal size via "mke2fs -J size=400"
> > > at format time, or on an unmounted filesystem by running
> > > "tune2fs -O ^has_journal /dev/XXX" then "tune2fs -J size=400 /dev/XXX".
> >
> > Setting size=400 doesn't help, nor does size=4.
> >
> > > I suspect that the stall is caused by the journal filling up, and then
> > > waiting while the entire journal is checkpointed back to the filesystem
> > > before the next transaction can start.
> > >
> > > It is possible to improve this behaviour in JBD by reducing the amount
> > > of space that is cleared if the journal becomes "full", and also doing
> > > journal checkpointing before it becomes full. While that may reduce
> > > performance a small amount, it would help avoid such huge latency
> > > problems. I believe we have such a patch in one of the Lustre branches
> > > already, and while I'm not sure what kernel it is for the JBD code rarely
> > > changes much....
> >
> > The big difference between ordered and writeback is that once the slowdown
> > starts, ordered goes into ~100% iowait, whereas writeback continues 100%
> > user.
>
> Does data=ordered write buffers in the order they were dirtied? This might
> explain the extreme problems in transactional workloads.
Well, it does, but we submit them to the block layer all at once so the
elevator should sort the requests for us...

Honza
--
Jan Kara <[email protected]>
SUSE Labs, CR

2008-01-31 17:16:47

by Chris Mason

[permalink] [raw]
Subject: Re: [RFC] ext3: per-process soft-syncing data=ordered mode

On Thursday 31 January 2008, Jan Kara wrote:
> On Thu 31-01-08 11:56:01, Chris Mason wrote:
> > On Thursday 31 January 2008, Al Boldi wrote:
> > > Andreas Dilger wrote:
> > > > On Wednesday 30 January 2008, Al Boldi wrote:
> > > > > And, a quick test of successive 1sec delayed syncs shows no hangs
> > > > > until about 1 minute (~180mb) of db-writeout activity, when the
> > > > > sync abruptly hangs for minutes on end, and io-wait shows almost
> > > > > 100%.
> > > >
> > > > How large is the journal in this filesystem? You can check via
> > > > "debugfs -R 'stat <8>' /dev/XXX".
> > >
> > > 32mb.
> > >
> > > > Is this affected by increasing
> > > > the journal size? You can set the journal size via "mke2fs -J
> > > > size=400" at format time, or on an unmounted filesystem by running
> > > > "tune2fs -O ^has_journal /dev/XXX" then "tune2fs -J size=400
> > > > /dev/XXX".
> > >
> > > Setting size=400 doesn't help, nor does size=4.
> > >
> > > > I suspect that the stall is caused by the journal filling up, and
> > > > then waiting while the entire journal is checkpointed back to the
> > > > filesystem before the next transaction can start.
> > > >
> > > > It is possible to improve this behaviour in JBD by reducing the
> > > > amount of space that is cleared if the journal becomes "full", and
> > > > also doing journal checkpointing before it becomes full. While that
> > > > may reduce performance a small amount, it would help avoid such huge
> > > > latency problems. I believe we have such a patch in one of the Lustre
> > > > branches already, and while I'm not sure what kernel it is for the
> > > > JBD code rarely changes much....
> > >
> > > The big difference between ordered and writeback is that once the
> > > slowdown starts, ordered goes into ~100% iowait, whereas writeback
> > > continues 100% user.
> >
> > Does data=ordered write buffers in the order they were dirtied? This
> > might explain the extreme problems in transactional workloads.
>
> Well, it does but we submit them to block layer all at once so elevator
> should sort the requests for us...

nr_requests is fairly small, so a long stream of random requests should still
end up being random IO.
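The nr_requests limit can be illustrated with a toy simulation (the sector counts and window size below are invented, not taken from the block layer): an elevator that sorts only within a small window barely reduces total seek distance on a long random request stream.

```python
import random

# Toy model: the elevator can only sort requests inside a bounded window
# (nr_requests-ish); a long random stream stays mostly random IO.

def seek_distance(stream, window):
    """Total head movement when requests are sorted per window of `window`."""
    total, head = 0, 0
    for i in range(0, len(stream), window):
        for sector in sorted(stream[i:i + window]):
            total += abs(sector - head)
            head = sector
    return total

random.seed(0)
stream = [random.randrange(1_000_000) for _ in range(4096)]  # random sectors

small = seek_distance(stream, 128)         # sorting within a small window
full = seek_distance(stream, len(stream))  # ideal: sort the whole stream once
print(small, full)
```

Sorting the whole stream turns it into one sweep across the disk, while the windowed sort still makes one near-full-range sweep per window, so its total seek distance is well over an order of magnitude larger.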

Al, could you please compare the write throughput from vmstat for the
data=ordered vs data=writeback runs? I would guess the data=ordered one has
a lower overall write throughput.

-chris

2008-02-01 21:28:18

by Al Boldi

[permalink] [raw]
Subject: Re: [RFC] ext3: per-process soft-syncing data=ordered mode

Chris Mason wrote:
> On Thursday 31 January 2008, Jan Kara wrote:
> > On Thu 31-01-08 11:56:01, Chris Mason wrote:
> > > On Thursday 31 January 2008, Al Boldi wrote:
> > > > The big difference between ordered and writeback is that once the
> > > > slowdown starts, ordered goes into ~100% iowait, whereas writeback
> > > > continues 100% user.
> > >
> > > Does data=ordered write buffers in the order they were dirtied? This
> > > might explain the extreme problems in transactional workloads.
> >
> > Well, it does but we submit them to block layer all at once so
> > elevator should sort the requests for us...
>
> nr_requests is fairly small, so a long stream of random requests should
> still end up being random IO.
>
> Al, could you please compare the write throughput from vmstat for the
> data=ordered vs data=writeback runs? I would guess the data=ordered one
> has a lower overall write throughput.

That's what I would have guessed, but it's actually going up fourfold for
mysql, from 559mb to 2135mb, while the db-size ends up at 549mb.

This may mean that data=ordered isn't buffering redundant writes; or worse.


Thanks!

--
Al

2008-02-04 17:54:27

by Jan Kara

[permalink] [raw]
Subject: Re: [RFC] ext3: per-process soft-syncing data=ordered mode

On Sat 02-02-08 00:26:00, Al Boldi wrote:
> Chris Mason wrote:
> > On Thursday 31 January 2008, Jan Kara wrote:
> > > On Thu 31-01-08 11:56:01, Chris Mason wrote:
> > > > On Thursday 31 January 2008, Al Boldi wrote:
> > > > > The big difference between ordered and writeback is that once the
> > > > > slowdown starts, ordered goes into ~100% iowait, whereas writeback
> > > > > continues 100% user.
> > > >
> > > > Does data=ordered write buffers in the order they were dirtied? This
> > > > might explain the extreme problems in transactional workloads.
> > >
> > > Well, it does but we submit them to block layer all at once so
> > > elevator should sort the requests for us...
> >
> > nr_requests is fairly small, so a long stream of random requests should
> > still end up being random IO.
> >
> > Al, could you please compare the write throughput from vmstat for the
> > data=ordered vs data=writeback runs? I would guess the data=ordered one
> > has a lower overall write throughput.
>
> That's what I would have guessed, but it's actually going up 4x fold for
> mysql from 559mb to 2135mb, while the db-size ends up at 549mb.
So you say we write four times as much data in ordered mode as in writeback
mode. Hmm, probably possible because we force all the dirty data to disk
when committing a transaction in ordered mode (and don't do this in
writeback mode). So if the workload repeatedly dirties the whole DB, we are
going to write the whole DB several times in ordered mode but in writeback
mode we just keep the data in memory all the time. But this is what you
ask for if you mount in ordered mode so I wouldn't consider it a bug.
I still don't like your hack with per-process journal mode setting, but we
could easily do per-file journal mode setting (we already have a flag to do
data journaling for a file) and that would help at least your DB
workload...
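The write-amplification argument above can be sketched numerically (the page counts are assumptions chosen to mirror Al's mysql numbers, not measurements):

```python
# Toy model: in ordered mode every commit flushes all dirty data, so data
# re-dirtied between commits gets written once per commit; in writeback
# mode the same dirty pages can coalesce in memory and be written once.

def total_writeout(dirtied_per_commit, commits, flush_every_commit):
    dirty = set()
    written = 0
    for _ in range(commits):
        dirty |= dirtied_per_commit   # workload re-dirties the same pages
        if flush_every_commit:        # data=ordered: flush at commit time
            written += len(dirty)
            dirty.clear()
    written += len(dirty)             # final flush when memory is cleaned
    return written

pages = set(range(549))               # a 549-"page" database, as in Al's run
ordered = total_writeout(pages, 4, True)   # whole DB dirtied in 4 commits
writeback = total_writeout(pages, 4, False)
print(ordered, writeback)
```

Four commit intervals over a fully re-dirtied 549-page file yield 2196 pages written in ordered mode versus 549 in writeback mode, matching the roughly fourfold blowup Al measured (2135mb written for a 549mb db).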

Honza
--
Jan Kara <[email protected]>
SUSE Labs, CR

2008-02-05 07:11:14

by Al Boldi

[permalink] [raw]
Subject: Re: [RFC] ext3: per-process soft-syncing data=ordered mode

Jan Kara wrote:
> On Sat 02-02-08 00:26:00, Al Boldi wrote:
> > Chris Mason wrote:
> > > Al, could you please compare the write throughput from vmstat for the
> > > data=ordered vs data=writeback runs? I would guess the data=ordered
> > > one has a lower overall write throughput.
> >
> > That's what I would have guessed, but it's actually going up 4x fold for
> > mysql from 559mb to 2135mb, while the db-size ends up at 549mb.
>
> So you say we write 4-times as much data in ordered mode as in writeback
> mode. Hmm, probably possible because we force all the dirty data to disk
> when committing a transation in ordered mode (and don't do this in
> writeback mode). So if the workload repeatedly dirties the whole DB, we
> are going to write the whole DB several times in ordered mode but in
> writeback mode we just keep the data in memory all the time. But this is
> what you ask for if you mount in ordered mode so I wouldn't consider it a
> bug.

Ok, maybe not a bug, but a bit inefficient. Check out this workload:

sync;

while :; do
dd < /dev/full > /mnt/sda2/x.dmp bs=1M count=20
rm -f /mnt/sda2/x.dmp
usleep 10000
done

vmstat 1 ( with mount /dev/sda2 /mnt/sda2 -o data=writeback) << note io-bo >>

procs -----------memory---------- ---swap-- -----io---- --system-- ----cpu----
r b swpd free buff cache si so bi bo in cs us sy id wa
2 0 0 293008 5232 57436 0 0 0 0 18 206 4 80 16 0
1 0 0 282840 5232 67620 0 0 0 0 18 238 3 81 16 0
1 0 0 297032 5244 53364 0 0 0 152 21 211 4 79 17 0
1 0 0 285236 5244 65224 0 0 0 0 18 232 4 80 16 0
1 0 0 299464 5244 50880 0 0 0 0 18 222 4 80 16 0
1 0 0 290156 5244 60176 0 0 0 0 18 236 3 80 17 0
0 0 0 302124 5256 47788 0 0 0 152 21 213 4 80 16 0
1 0 0 292180 5256 58248 0 0 0 0 18 239 3 81 16 0
1 0 0 287452 5256 62444 0 0 0 0 18 202 3 80 17 0
1 0 0 293016 5256 57392 0 0 0 0 18 250 4 80 16 0
0 0 0 302052 5256 47788 0 0 0 0 19 194 3 81 16 0
1 0 0 297536 5268 52928 0 0 0 152 20 233 4 79 17 0
1 0 0 286468 5268 63872 0 0 0 0 18 212 3 81 16 0
1 0 0 301572 5268 48812 0 0 0 0 18 267 4 79 17 0
1 0 0 292636 5268 57776 0 0 0 0 18 208 4 80 16 0
1 0 0 302124 5280 47788 0 0 0 152 21 237 4 80 16 0
1 0 0 291436 5280 58976 0 0 0 0 18 205 3 81 16 0
1 0 0 302068 5280 47788 0 0 0 0 18 234 3 81 16 0
1 0 0 293008 5280 57388 0 0 0 0 18 221 4 79 17 0
1 0 0 297288 5292 52532 0 0 0 156 22 233 2 81 16 1
1 0 0 294676 5292 55724 0 0 0 0 19 199 3 81 16 0


vmstat 1 (with mount /dev/sda2 /mnt/sda2 -o data=ordered)

procs -----------memory---------- ---swap-- -----io---- --system-- ----cpu----
r b swpd free buff cache si so bi bo in cs us sy id wa
2 0 0 291052 5156 59016 0 0 0 0 19 223 3 82 15 0
1 0 0 291408 5156 58704 0 0 0 0 18 218 3 81 16 0
1 0 0 291888 5156 58276 0 0 0 20 23 229 3 80 17 0
1 0 0 300764 5168 49472 0 0 0 12864 91 235 3 69 13 15
1 0 0 300740 5168 49456 0 0 0 0 19 215 3 80 17 0
1 0 0 301088 5168 49044 0 0 0 0 18 241 4 80 16 0
1 0 0 298220 5168 51872 0 0 0 0 18 225 3 81 16 0
0 1 0 289168 5168 60752 0 0 0 12712 45 237 3 77 15 5
1 0 0 300260 5180 49852 0 0 0 152 68 211 4 72 15 9
1 0 0 298616 5180 51460 0 0 0 0 18 237 3 81 16 0
1 0 0 296988 5180 53092 0 0 0 0 18 223 3 81 16 0
1 0 0 296608 5180 53480 0 0 0 0 18 223 3 81 16 0
0 0 0 301640 5192 48036 0 0 0 12868 93 206 4 67 13 16
0 0 0 301624 5192 48036 0 0 0 0 21 218 3 81 16 0
0 0 0 301600 5192 48036 0 0 0 0 18 212 3 81 16 0
0 0 0 301584 5192 48036 0 0 0 0 18 209 4 80 16 0
0 0 0 301568 5192 48036 0 0 0 0 18 208 3 81 16 0
1 0 0 285520 5204 64548 0 0 0 12864 95 216 3 69 13 15
2 0 0 285124 5204 64924 0 0 0 0 18 222 4 80 16 0
1 0 0 283612 5204 66392 0 0 0 0 18 231 3 81 16 0
1 0 0 284216 5204 65736 0 0 0 0 18 218 4 80 16 0
0 1 0 289160 5204 60752 0 0 0 12712 56 213 3 74 15 8
1 0 0 285884 5216 64128 0 0 0 152 54 209 4 75 15 6
1 0 0 287472 5216 62572 0 0 0 0 18 223 3 81 16 0

Do you think these 12mb redundant writeouts could be buffered?

(Note: you may need to adjust dd count and usleep to see the same effect)

> I still don't like your hack with per-process journal mode setting
> but we could easily do per-file journal mode setting (we already have a
> flag to do data journaling for a file) and that would help at least your
> DB workload...

Well, that depends on what kind of db you use. mysql creates each db as a
directory, and then manages the tables and indexes as files inside that
directory. So I don't think this flag would be feasible for that use-case.
Much easier to just say:

echo 1 > /proc/`pidof mysqld`/softsync

But the per-file flag could definitely help the file-mmap case, and as such
could be a great additional feature in combination to this RFC.


Thanks!

--
Al

2008-02-05 15:07:26

by Jan Kara

[permalink] [raw]
Subject: Re: [RFC] ext3: per-process soft-syncing data=ordered mode

On Tue 05-02-08 10:07:44, Al Boldi wrote:
> Jan Kara wrote:
> > On Sat 02-02-08 00:26:00, Al Boldi wrote:
> > > Chris Mason wrote:
> > > > Al, could you please compare the write throughput from vmstat for the
> > > > data=ordered vs data=writeback runs? I would guess the data=ordered
> > > > one has a lower overall write throughput.
> > >
> > > That's what I would have guessed, but it's actually going up 4x fold for
> > > mysql from 559mb to 2135mb, while the db-size ends up at 549mb.
> >
> > So you say we write 4-times as much data in ordered mode as in writeback
> > mode. Hmm, probably possible because we force all the dirty data to disk
> > when committing a transation in ordered mode (and don't do this in
> > writeback mode). So if the workload repeatedly dirties the whole DB, we
> > are going to write the whole DB several times in ordered mode but in
> > writeback mode we just keep the data in memory all the time. But this is
> > what you ask for if you mount in ordered mode so I wouldn't consider it a
> > bug.
>
> Ok, maybe not a bug, but a bit inefficient. Check out this workload:
>
> sync;
>
> while :; do
> dd < /dev/full > /mnt/sda2/x.dmp bs=1M count=20
> rm -f /mnt/sda2/x.dmp
> usleep 10000
> done
>
:
:
> Do you think these 12mb redundant writeouts could be buffered?
No, I don't think so. At least when I run it, the number of blocks written
out varies, which confirms that these 12mb are just data blocks which happen
to be in the file when the transaction commits (which is every 5 seconds).
And to satisfy journaling guarantees in ordered mode you must write them, so
you really have no choice...

Honza
--
Jan Kara <[email protected]>
SUSE Labs, CR

2008-02-05 19:21:56

by Al Boldi

[permalink] [raw]
Subject: Re: [RFC] ext3: per-process soft-syncing data=ordered mode

Jan Kara wrote:
> On Tue 05-02-08 10:07:44, Al Boldi wrote:
> > Jan Kara wrote:
> > > On Sat 02-02-08 00:26:00, Al Boldi wrote:
> > > > Chris Mason wrote:
> > > > > Al, could you please compare the write throughput from vmstat for
> > > > > the data=ordered vs data=writeback runs? I would guess the
> > > > > data=ordered one has a lower overall write throughput.
> > > >
> > > > That's what I would have guessed, but it's actually going up 4x fold
> > > > for mysql from 559mb to 2135mb, while the db-size ends up at 549mb.
> > >
> > > So you say we write 4-times as much data in ordered mode as in
> > > writeback mode. Hmm, probably possible because we force all the dirty
> > > data to disk when committing a transation in ordered mode (and don't
> > > do this in writeback mode). So if the workload repeatedly dirties the
> > > whole DB, we are going to write the whole DB several times in ordered
> > > mode but in writeback mode we just keep the data in memory all the
> > > time. But this is what you ask for if you mount in ordered mode so I
> > > wouldn't consider it a bug.
> >
> > Ok, maybe not a bug, but a bit inefficient. Check out this workload:
> >
> > sync;
> >
> > while :; do
> > dd < /dev/full > /mnt/sda2/x.dmp bs=1M count=20
> > rm -f /mnt/sda2/x.dmp
> > usleep 10000
> > done
:
:
> > Do you think these 12mb redundant writeouts could be buffered?
>
> No, I don't think so. At least when I run it, number of blocks written
> out varies which confirms that these 12mb are just data blocks which
> happen to be in the file when transaction commits (which is every 5
> seconds).

Just a thought, but maybe double-buffering can help?

> And to satisfy journaling gurantees in ordered mode you must
> write them so you really have no choice...

Making this RFC rather useful.

What we need now is an implementation, which should be easy.

Maybe something on these lines:

<< in ext3_ordered_write_end >>
if (current->soft_sync & 1)
        return ext3_writeback_write_end(...);

<< in ext3_ordered_writepage >>
if (current->soft_sync & 2)
        return ext3_writeback_writepage(...);

<< in ext3_sync_file >>
if (current->soft_sync & 4)
        return ret;

<< in ext3_file_write >>
if (current->soft_sync & 8)
        return ret;

As you can see, soft_sync is a bitmask whose bits are ordered by importance.

It would be neat if somebody interested could cook up a patch.
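The proposed mask semantics could be sketched like this (the bit names and descriptions are purely illustrative, chosen here for the sketch, not taken from any kernel source):

```python
# Illustrative decoding of the proposed soft_sync bitmask; each bit
# relaxes one ordered-mode behaviour, lowest bit first by importance.

SOFT_SYNC_WRITE_END = 1   # ordered write_end falls back to writeback path
SOFT_SYNC_WRITEPAGE = 2   # ordered writepage falls back to writeback path
SOFT_SYNC_FSYNC     = 4   # fsync returns without forcing a journal flush
SOFT_SYNC_FILE_WRITE = 8  # file write skips any forced syncing

BITS = {
    SOFT_SYNC_WRITE_END: "write_end -> writeback",
    SOFT_SYNC_WRITEPAGE: "writepage -> writeback",
    SOFT_SYNC_FSYNC: "fsync disabled",
    SOFT_SYNC_FILE_WRITE: "file write unforced",
}

def decode(mask):
    """Return the behaviours a given soft_sync mask would relax."""
    return [name for bit, name in sorted(BITS.items()) if mask & bit]

print(decode(3))
```

So `echo 3` into the tunable would switch both ordered writeout paths to writeback for that process while leaving fsync intact, which is the "ordered by importance" idea: the safer relaxations live in the low bits.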


Thanks!

--
Al

2008-02-07 00:33:47

by Andreas Dilger

[permalink] [raw]
Subject: Re: [RFC] ext3: per-process soft-syncing data=ordered mode

On Jan 26, 2008 08:27 +0300, Al Boldi wrote:
> Jan Kara wrote:
> > > data=ordered mode has proven reliable over the years, and it does this
> > > by ordering filedata flushes before metadata flushes. But this
> > > sometimes causes contention in the order of a 10x slowdown for certain
> > > apps, either due to the misuse of fsync or due to inherent behaviour
> > > like db's, as well as inherent starvation issues exposed by the
> > > data=ordered mode.
> > >
> > > data=writeback mode alleviates data=order mode slowdowns, but only works
> > > per-mount and is too dangerous to run as a default mode.
> > >
> > > This RFC proposes to introduce a tunable which allows to disable fsync
> > > and changes ordered into writeback writeout on a per-process basis like
> > > this:
> > >
> > > echo 1 > /proc/`pidof process`/softsync
> >
> > I guess disabling fsync() was already commented on enough. Regarding
> > switching to writeback mode on per-process basis - not easily possible
> > because sometimes data is not written out by the process which stored
> > them (think of mmaped file).
>
> Do you mean there is a locking problem?
>
> > And in case of DB, they use direct-io
> > anyway most of the time so they don't care about journaling mode anyway.
>
> Testing with sqlite3 and mysql4 shows that performance drastically improves
> with writeback writeout.
>
> > But as Diego wrote, there is definitely some room for improvement in
> > current data=ordered mode so the difference shouldn't be as big in the
> > end.
>
> Yes, it would be nice to get to the bottom of this starvation problem, but
> even then, the proposed tunable remains useful for misbehaving apps.

Al, can you try a patch posted to linux-fsdevel and linux-ext4 from
Hisashi Hifumi <[email protected]> to see if this improves
your situation? Dated Mon, 04 Feb 2008 19:15:25 +0900.

[PATCH] ext3,4:fdatasync should skip metadata writeout when overwriting

It may be that we already have a solution in that patch for database
workloads where the pages are already allocated by avoiding the need
for ordered mode journal flushing in that case.

Cheers, Andreas
--
Andreas Dilger
Sr. Staff Engineer, Lustre Group
Sun Microsystems of Canada, Inc.

2008-02-10 14:55:27

by Al Boldi

[permalink] [raw]
Subject: Re: [RFC] ext3: per-process soft-syncing data=ordered mode

Andreas Dilger wrote:
> On Jan 26, 2008 08:27 +0300, Al Boldi wrote:
> > Jan Kara wrote:
> > > > data=ordered mode has proven reliable over the years, and it does
> > > > this by ordering filedata flushes before metadata flushes. But this
> > > > sometimes causes contention in the order of a 10x slowdown for
> > > > certain apps, either due to the misuse of fsync or due to inherent
> > > > behaviour like db's, as well as inherent starvation issues exposed
> > > > by the data=ordered mode.
> > > >
> > > > data=writeback mode alleviates data=order mode slowdowns, but only
> > > > works per-mount and is too dangerous to run as a default mode.
> > > >
> > > > This RFC proposes to introduce a tunable which allows to disable
> > > > fsync and changes ordered into writeback writeout on a per-process
> > > > basis like this:
> > > >
> > > > echo 1 > /proc/`pidof process`/softsync
> > >
> > > I guess disabling fsync() was already commented on enough. Regarding
> > > switching to writeback mode on per-process basis - not easily possible
> > > because sometimes data is not written out by the process which stored
> > > them (think of mmaped file).
> >
> > Do you mean there is a locking problem?
> >
> > > And in case of DB, they use direct-io
> > > anyway most of the time so they don't care about journaling mode
> > > anyway.
> >
> > Testing with sqlite3 and mysql4 shows that performance drastically
> > improves with writeback writeout.
> >
> > > But as Diego wrote, there is definitely some room for improvement in
> > > current data=ordered mode so the difference shouldn't be as big in the
> > > end.
> >
> > Yes, it would be nice to get to the bottom of this starvation problem,
> > but even then, the proposed tunable remains useful for misbehaving apps.
>
> Al, can you try a patch posted to linux-fsdevel and linux-ext4 from
> Hisashi Hifumi <[email protected]> to see if this improves
> your situation? Dated Mon, 04 Feb 2008 19:15:25 +0900.
>
> [PATCH] ext3,4:fdatasync should skip metadata writeout when
> overwriting
>
> It may be that we already have a solution in that patch for database
> workloads where the pages are already allocated by avoiding the need
> for ordered mode journal flushing in that case.

Well, it seems that it does have a positive effect for the 'konqueror hangs'
case, but doesn't improve the db case.

This shouldn't be surprising, as the db redundant writeout problem is
localized not in fsync but rather in ext3_ordered_write_end.

Maybe some form of a staged merged commit could help.


Thanks!

--
Al