2010-04-09 14:56:34

by Ben Gamari

Subject: Re: Poor interactive performance with I/O loads with fsync()ing

On Mon, 29 Mar 2010 00:08:58 +0200, Andi Kleen <[email protected]> wrote:
> Ben Gamari <[email protected]> writes:
> ext4/XFS/JFS/btrfs should be better in this regard
>
I am using btrfs, so yes, I was expecting things to be better. Unfortunately,
the improvement seems to be non-existent under high IO/fsync load.

- Ben


2010-04-11 15:04:26

by Avi Kivity

Subject: Re: Poor interactive performance with I/O loads with fsync()ing

On 04/09/2010 05:56 PM, Ben Gamari wrote:
> On Mon, 29 Mar 2010 00:08:58 +0200, Andi Kleen<[email protected]> wrote:
>
>> Ben Gamari<[email protected]> writes:
>> ext4/XFS/JFS/btrfs should be better in this regard
>>
>>
> I am using btrfs, so yes, I was expecting things to be better. Unfortunately,
> the improvement seems to be non-existent under high IO/fsync load.
>
>

btrfs is known to perform poorly under fsync.

--
error compiling committee.c: too many arguments to function

2010-04-11 16:35:41

by Ben Gamari

Subject: Re: Poor interactive performance with I/O loads with fsync()ing

On Sun, 11 Apr 2010 18:03:00 +0300, Avi Kivity <[email protected]> wrote:
> On 04/09/2010 05:56 PM, Ben Gamari wrote:
> > On Mon, 29 Mar 2010 00:08:58 +0200, Andi Kleen<[email protected]> wrote:
> >
> >> Ben Gamari<[email protected]> writes:
> >> ext4/XFS/JFS/btrfs should be better in this regard
> >>
> >>
> > I am using btrfs, so yes, I was expecting things to be better. Unfortunately,
> > the improvement seems to be non-existent under high IO/fsync load.
> >
>
> btrfs is known to perform poorly under fsync.
>
Has the reason for this been identified? Judging from the nature of metadata
loads, it would seem that fsync() should be substantially easier to implement
efficiently.

- Ben

2010-04-11 17:20:36

by Andi Kleen

Subject: Re: Poor interactive performance with I/O loads with fsync()ing

> Has the reason for this been identified? Judging from the nature of metadata
> loads, it would seem that fsync() should be substantially easier to implement
> efficiently.

By design, a copy-on-write tree filesystem would need to flush a whole
tree hierarchy on a sync. btrfs avoids this by using a special
log for fsync, but that causes more overhead if you have that
log on the same disk, so the IO subsystem will do more work.

It's a bit like JBD data journaling.

However, it should not have the stalls inherent in ext3's journaling.
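
For reference, the pathological pattern is just a stream of small writes,
each followed by fsync(). A minimal sketch (untested; file name, sizes and
iteration count are arbitrary):

  /* fsync-loop.c: time each fsync() in a small-write/fsync loop.
   * Build: cc -o fsync-loop fsync-loop.c
   */
  #include <fcntl.h>
  #include <stdio.h>
  #include <string.h>
  #include <sys/time.h>
  #include <unistd.h>

  int main(void)
  {
      char buf[4096];
      struct timeval t0, t1;
      int i;
      int fd = open("fsync-test.dat", O_WRONLY | O_CREAT | O_TRUNC, 0644);

      if (fd < 0) {
          perror("open");
          return 1;
      }
      memset(buf, 'x', sizeof(buf));

      for (i = 0; i < 1000; i++) {
          if (write(fd, buf, sizeof(buf)) != (ssize_t)sizeof(buf)) {
              perror("write");
              return 1;
          }
          gettimeofday(&t0, NULL);
          if (fsync(fd) < 0)  /* forces a log commit or tree flush */
              perror("fsync");
          gettimeofday(&t1, NULL);
          printf("fsync %4d: %ld us\n", i,
                 (t1.tv_sec - t0.tv_sec) * 1000000L +
                 (t1.tv_usec - t0.tv_usec));
      }
      close(fd);
      return 0;
  }

On a COW filesystem, every one of those fsync() calls has to commit either
the log tree or the whole tree hierarchy to stable storage before it can
return, which is where the extra IO comes from.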

-Andi
--
[email protected] -- Speaking for myself only.

2010-04-11 18:19:09

by Thomas Gleixner

Subject: Re: Poor interactive performance with I/O loads with fsync()ing

On Sun, 11 Apr 2010, Avi Kivity wrote:

> On 04/09/2010 05:56 PM, Ben Gamari wrote:
> > On Mon, 29 Mar 2010 00:08:58 +0200, Andi Kleen<[email protected]> wrote:
> >
> > > Ben Gamari<[email protected]> writes:
> > > ext4/XFS/JFS/btrfs should be better in this regard
> > >
> > >
> > I am using btrfs, so yes, I was expecting things to be better.
> > Unfortunately,
> > the improvement seems to be non-existent under high IO/fsync load.
> >
> >
>
> btrfs is known to perform poorly under fsync.

XFS does not do much better. Just moved my VM images back to ext for
that reason.

Thanks,

tglx

2010-04-11 18:42:35

by Andi Kleen

Subject: Re: Poor interactive performance with I/O loads with fsync()ing

> XFS does not do much better. Just moved my VM images back to ext for
> that reason.

Did you move from XFS to ext3? ext3 defaults to barriers off, XFS on,
which can make a big difference depending on the disk. You can
disable them on XFS too of course, with the known drawbacks.

XFS also typically needs some tuning to get reasonable log sizes.
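
For example, to compare the two with equivalent barrier settings (device
and mount point are placeholders):

  # ext3 with barriers enabled, matching the XFS default:
  mount -t ext3 -o barrier=1 /dev/sdX /mnt/test
  # or XFS with barriers disabled, matching the ext3 default
  # (unsafe on power loss with volatile write caches):
  mount -t xfs -o nobarrier /dev/sdX /mnt/test
  # and a larger XFS log can be set at mkfs time, e.g.:
  mkfs.xfs -l size=128m /dev/sdX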

My point was merely (before people chime in with counterexamples)
that XFS/btrfs/jfs don't suffer from the "need to sync all transactions for
every fsync" issue. There can (and will) still be other issues.

-Andi

--
[email protected] -- Speaking for myself only.

2010-04-11 21:56:22

by Thomas Gleixner

Subject: Re: Poor interactive performance with I/O loads with fsync()ing

On Sun, 11 Apr 2010, Andi Kleen wrote:

> > XFS does not do much better. Just moved my VM images back to ext for
> > that reason.
>
> Did you move from XFS to ext3? ext3 defaults to barriers off, XFS on,
> which can make a big difference depending on the disk. You can
> disable them on XFS too of course, with the known drawbacks.
>
> XFS also typically needs some tuning to get reasonable log sizes.
>
> My point was merely (before people chime in with counterexamples)
> that XFS/btrfs/jfs don't suffer from the "need to sync all transactions for
> every fsync" issue. There can (and will) still be other issues.

Yes, I moved them back from XFS to ext3 simply because moving them
from ext3 to XFS turned out to be a completely unusable disaster.

I know that I can tweak knobs on XFS (or any other file system), but I
would not have expected it to suck that much for KVM with the default
settings, which are perfectly fine for the other use cases that made us
move to XFS.

Thanks,

tglx

2010-04-11 23:44:18

by Hans-Peter Jansen

Subject: Re: Poor interactive performance with I/O loads with fsync()ing

On Sunday 11 April 2010, 23:54:34 Thomas Gleixner wrote:
> On Sun, 11 Apr 2010, Andi Kleen wrote:
> > > XFS does not do much better. Just moved my VM images back to ext for
> > > that reason.
> >
> > Did you move from XFS to ext3? ext3 defaults to barriers off, XFS on,
> > which can make a big difference depending on the disk. You can
> > disable them on XFS too of course, with the known drawbacks.
> >
> > XFS also typically needs some tuning to get reasonable log sizes.
> >
> > My point was merely (before people chime in with counterexamples)
> > that XFS/btrfs/jfs don't suffer from the "need to sync all transactions
> > for every fsync" issue. There can (and will) still be other issues.
>
> Yes, I moved them back from XFS to ext3 simply because moving them
> from ext3 to XFS turned out to be a completely unusable disaster.
>
> I know that I can tweak knobs on XFS (or any other file system), but I
> would not have expected that it sucks that much for KVM with the
> default settings which are perfectly fine for the other use cases
> which made us move to XFS.

Thomas, what Andi was merely pointing out is that XFS has a significantly
different default: barriers on, which hurt with fsync().

In order to make a fair comparison of the two, you may want to mount XFS
with nobarrier or ext3 with the barrier option set, and _then_ check which
one sucks less.

I guess that outcome will be interesting for quite a bunch of people in the
audience (including me¹).

Pete

¹) while transitioning away from even suckier technology junk like
VMware-Server - but digging out a current², yet _stable_, kernel release
seems harder than ever nowadays.
²) with operational VT-d support for kvm

2010-04-13 01:22:40

by Dave Chinner

Subject: Re: Poor interactive performance with I/O loads with fsync()ing

On Sun, Apr 11, 2010 at 08:16:09PM +0200, Thomas Gleixner wrote:
> On Sun, 11 Apr 2010, Avi Kivity wrote:
> > On 04/09/2010 05:56 PM, Ben Gamari wrote:
> > > On Mon, 29 Mar 2010 00:08:58 +0200, Andi Kleen<[email protected]> wrote:
> > > > Ben Gamari<[email protected]> writes:
> > > > ext4/XFS/JFS/btrfs should be better in this regard
> > > >
> > > I am using btrfs, so yes, I was expecting things to be better.
> > > Unfortunately,
> > > the improvement seems to be non-existent under high IO/fsync load.
> >
> > btrfs is known to perform poorly under fsync.
>
> XFS does not do much better. Just moved my VM images back to ext for
> that reason.

Numbers? Workload description? Mount options? I hate it when all I
hear is "XFS sucked, so I went back to extN" reports without any
more details - it's hard to improve anything without any details
of the problems.

Also worth remembering is that XFS defaults to slow-but-safe
options, but ext3 defaults to fast-and-I-don't-give-a-damn-about-
data-safety, so there's a world of difference between the
filesystem defaults....

And FWIW, I run all my VMs on XFS using default mkfs and mount options,
and I can't say that I've noticed any performance problems at all
despite hammering the IO subsystems all the time. The only thing
I've ever done is occasionally run xfs_fsr across permanent qcow2
VM images to defrag them as they grow slowly over time...
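
Concretely, that amounts to nothing more exotic than something like the
following (the image path is made up):

  # report overall fragmentation, then reorganise a single image file
  xfs_db -r -c frag /dev/sdX
  xfs_fsr -v /vm/images/guest0.qcow2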

Cheers,

Dave.
--
Dave Chinner
[email protected]

2010-04-14 18:41:41

by Ric Wheeler

Subject: Re: Poor interactive performance with I/O loads with fsync()ing

On 04/11/2010 05:22 PM, Dave Chinner wrote:
> On Sun, Apr 11, 2010 at 08:16:09PM +0200, Thomas Gleixner wrote:
>
>> On Sun, 11 Apr 2010, Avi Kivity wrote:
>>
>>> On 04/09/2010 05:56 PM, Ben Gamari wrote:
>>>
>>>> On Mon, 29 Mar 2010 00:08:58 +0200, Andi Kleen<[email protected]> wrote:
>>>>
>>>>> Ben Gamari<[email protected]> writes:
>>>>> ext4/XFS/JFS/btrfs should be better in this regard
>>>>>
>>>>>
>>>> I am using btrfs, so yes, I was expecting things to be better.
>>>> Unfortunately,
>>>> the improvement seems to be non-existent under high IO/fsync load.
>>>>
>>> btrfs is known to perform poorly under fsync.
>>>
>> XFS does not do much better. Just moved my VM images back to ext for
>> that reason.
>>
> Numbers? Workload description? Mount options? I hate it when all I
> hear is "XFS sucked, so I went back to extN" reports without any
> more details - it's hard to improve anything without any details
> of the problems.
>
> Also worth remembering is that XFS defaults to slow-but-safe
> options, but ext3 defaults to fast-and-I-don't-give-a-damn-about-
> data-safety, so there's a world of difference between the
> filesystem defaults....
>
> And FWIW, I run all my VMs on XFS using default mkfs and mount options,
> and I can't say that I've noticed any performance problems at all
> despite hammering the IO subsystems all the time. The only thing
> I've ever done is occasionally run xfs_fsr across permanent qcow2
> VM images to defrag them as they grow slowly over time...
>
> Cheers,
>
> Dave.
>

And if you are asking for details, the type of storage you use is also
quite interesting.

Thanks!

Ric