Hi!
This fall I built myself a new computer. It's a dual-CPU quad-core Xeon
(E5405) on a Supermicro X7DWA-N motherboard, with 8 GB of 800 MHz RAM and
a Supermicro AOC-USAS-S8iR SAS RAID controller driving a pair of Fujitsu
MBA3073RC disks in RAID 0. It should be pretty fast, but it isn't. In
daily use it feels about the same as my old dual P3 1 GHz with a single
SCSI disk; for instance, I have to wait a couple of seconds for the GNOME
menu to appear after I click on it. And with operations involving heavy
disk use, it's extremely slow: unpacking the bzipped 2.6.27.6 kernel
takes 6:40, while compiling it takes a mere 5:13. I thought I had a
hardware problem (still not ruling that out), so I tried installing
Vista, and everything was blazingly fast. I have no clue where to look,
but a static benchmark like hdparm gives me a throughput of 235 MB/s on
the RAID 0 array. Could it be filesystem-related? I'm using XFS (always
have)...
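(That number is from hdparm's buffered sequential read test, i.e.
something like:
# hdparm -t /dev/sda
so it only measures large streaming reads.)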
Please cc me on any replies.
Regards,
Stian
On Sat, Nov 15, 2008 at 12:44 PM, Stian Jordet <[email protected]> wrote:
Hello,
please post your kernel config and dmesg output; it might help people
identify what's wrong.
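For example (on Ubuntu and most distro kernels the config lives in
/boot):
$ dmesg > dmesg.txt
$ cp /boot/config-$(uname -r) config.txt
and attach both files.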
On Tue, 18 Nov 2008 at 11:51 -0200, Sergio Luis wrote:
(see http://marc.info/?l=linux-kernel&m=122676220019858&w=2 for the
start of this thread)
Hmm. I have now converted my root and /home filesystems to ext3, and
that actually fixed it (!). I have no idea why XFS is performing so
extremely poorly on this machine; I'm running XFS on every filesystem on
eight other computers (including one heavily loaded server).

Anyway, I'm now unpacking the 2.6.27.6 kernel in 15 seconds; with XFS on
the same array it took between five and six minutes.

I still have an XFS array, and when I copy files to or from it, the
GNOME session practically freezes and the load average easily goes
beyond 10.

How should I debug this? I'm not very eager to reformat my 1.5 TB XFS
array... Besides, XFS has never let me down before.
Thanks.
Please cc me on any replies.
Regards,
Stian
On Sun, 23 Nov 2008, Stian Jordet wrote:
As an earlier poster said:
1. please post dmesg output
2. you may want to include your kernel .config
3. xfs_info /dev/mdX or /dev/device may also be useful
4. you can also check fragmentation:
# xfs_db -c frag -f /dev/md2
actual 257492, ideal 242687, fragmentation factor 5.75%
5. something sounds very strange; I also run XFS on a lot of systems and
have never heard of this before...
6. also post your /etc/fstab options
7. what distribution are you running?
8. are only the two Fujitsus (RAID 0) affected, or are other arrays on
this hardware affected as well (separate disks etc.)?
9. you can also compile in support for latencytop and powertop to see
if there is any excessive polling going on by any one specific device or
function (see the sketch below)
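For item 9, roughly (the config option name is from memory, so
double-check it against your kernel version): enable CONFIG_LATENCYTOP=y
in the kernel config, rebuild and reboot, then run the userspace tools:
$ latencytop    # shows which kernel operations tasks block on longest
$ powertop      # shows wakeup sources, i.e. who is polling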
Justin.
On Sun, 23 Nov 2008 at 17:25 -0500, Justin Piszcz wrote:
1 & 2: Oh, sorry, I forgot to attach the dmesg and config in the last
mail.
3:
root@chevrolet:~# xfs_info /dev/sdb1
meta-data=/dev/sdb1              isize=256    agcount=32, agsize=11426984 blks
         =                       sectsz=512   attr=0
data     =                       bsize=4096   blocks=365663488, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096
log      =internal               bsize=4096   blocks=32768, version=1
         =                       sectsz=512   sunit=0 blks, lazy-count=0
realtime =none                   extsz=4096   blocks=0, rtextents=0
4:
root@chevrolet:~# xfs_db -c frag -f /dev/sdb1
actual 380037, ideal 373823, fragmentation factor 1.64%
6: The only mount option is relatime, which Ubuntu adds automatically.
Hmm, I hadn't tried mounting without that option. Well, I just tried,
and it didn't help without it either.
7: Ubuntu 8.10 Intrepid. This is a new system, and it has never run
anything other than Intrepid. This affects both the stock Ubuntu kernel
and the vanilla 2.6.27.7 I compiled (the attached dmesg and config are
from that kernel). I have also tried both 64-bit and 32-bit (just for
fun).
8: I'll explain my setup a little more. I described the hardware in my
first post: the two Fujitsu SAS disks are in RAID 0, with /dev/sda1 as
root and /dev/sda2 as home. Both used to be XFS, and were dog slow. I
have now converted both to ext3, and everything is normal. In addition I
have four Seagate ST3500320AS 500 GB SATA disks in hardware RAID 5 on
the same controller. That 1.5 TB array is still XFS, and it had, and
still has, the same symptoms.
9: I don't know how to do that. But whatever it is, it doesn't happen
with ext3...
Thanks for looking into this!
Regards,
Stian
On Mon, 24 Nov 2008, Stian Jordet wrote:
While there may still be something else wrong, the first problem I see
is that your sunit and swidth are set to 0.
Please read this good article on what they are and how to set them:
http://www.socalsysadmin.com/
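For example, for a two-disk RAID 0 with a 64 KiB stripe (substitute your
controller's actual stripe size; the device names here are just
placeholders), something like:
# at mkfs time:
mkfs.xfs -d su=64k,sw=2 /dev/sda1
# or retrofitted as mount options, which are given in 512-byte sectors,
# so 64 KiB = sunit=128, and swidth = sunit * 2 data disks = 256:
mount -o sunit=128,swidth=256 /dev/sda1 /mnt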
Justin.
On Mon, 24 Nov 2008 at 04:50 -0500, Justin Piszcz wrote:
Oh, this was new to me. But the setting didn't change anything. I can
copy one large file between the XFS and ext3 disks (both ways) and get
speeds between 160 and 200 MB/s. But unpacking the kernel source takes
between 5 and 10 minutes on the XFS disk, and a mere 15 seconds on the
ext3... (It also used to take between 5 and 10 minutes when I had XFS on
the RAID 0, so it doesn't seem to be hardware related.)

Is there anything more I can try before I go through the lengthy process
of backing up 1 TB, reformatting, and restoring?
Thanks :)
Regards,
Stian
On Tue, 25 Nov 2008, Stian Jordet wrote:
When you 'unpack the kernel source', ext3 will cache it, etc. The best
way to test:
/usr/bin/time cmd
or
time cmd
where cmd is: bash -c 'tar xvf file.tar; sync'
Make sure it's a plain tar and not gzip/bzip2 (otherwise you're limited
by the CPU etc.).
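To take the page cache out of the picture entirely, you can also drop it
before each run (this sysctl exists on any reasonably recent 2.6 kernel;
run as root):
sync
echo 3 > /proc/sys/vm/drop_caches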
The other option is the relatime you mentioned. Instead of relatime,
use:
defaults,noatime,logbufs=8,logbsize=262144
then tell me what you get.
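In /etc/fstab that would look something like this (device and mount
point are placeholders for yours):
/dev/sda2  /home  xfs  defaults,noatime,logbufs=8,logbsize=262144  0  0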
Justin.
On Tue, 25 Nov 2008, Stian Jordet wrote:
I had the same problem when I tried it on my laptop (a T60): using XFS
on the unencrypted root filesystem (with /usr/src) took ages, while
using it on the LUKS-encrypted /home was blazing fast, on the same disk.

That test was some time ago, with something like 2.6.20 or 2.6.24; I
gave up and reformatted / with ext3 because I needed the machine.

I think barriers were the problem. They seem to cost a lot of
performance, especially for operations on many small files. My laptop
used barriers for xfs on the direct partition, but not on the crypto
device-mapper mounts.

So perhaps try mounting with nobarrier and see if the speed problem goes
away, but know that you sacrifice some crash resilience when doing so.
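(On xfs it is just a mount option, e.g.:
mount -o remount,nobarrier /dev/sdb1 /mnt/array
with /mnt/array standing in for wherever your array is mounted.)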
c'ya
sven
--
The Internet treats censorship as a routing problem, and routes around
it. (John Gilmore on http://www.cygnus.com/~gnu/)
Stian Jordet wrote:
I don't know if the storage you're on passes barriers or not, but xfs
has barriers on by default, while ext3 does not. ext3 will still likely
win the "untar a kernel" race, but for a fairer test, make the barrier
settings consistent between the two.
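For example (device names and mount points are placeholders for yours),
either turn barriers off on xfs:
mount -o remount,nobarrier /dev/sda2 /home
or turn them on for ext3, which takes a barrier=1 option:
mount -o remount,barrier=1 /dev/sda1 /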
-Eric
On Mon, 24 Nov 2008, Eric Sandeen wrote:
barriers enabled:
$ time bash -c 'tar xf linux-2.6.27.7.tar; sync'
block 573932: ** Block of NULs **
Total bytes read: 293857280 (281MiB, 1.9MiB/s)
real 2m40.643s
user 0m0.194s
sys 0m1.541s
barriers disabled:
$ time bash -c 'tar xf linux-2.6.27.7.tar; sync'
block 573932: ** Block of NULs **
Total bytes read: 293857280 (281MiB, 11MiB/s)
real 0m27.612s
user 0m0.182s
sys 0m1.617s
On Tue, Nov 25, 2008 at 04:56:24AM -0500, Justin Piszcz wrote:
That's worse than usual, and even the no-barrier numbers are still
really bad. What kind of disk and controller is this? Did you try
disabling the write cache with hdparm to see what that gives?
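(With hdparm that would be something like:
hdparm -W0 /dev/sda    # disable the write cache
hdparm -W1 /dev/sda    # re-enable it afterwards
where hdparm -W with no value just prints the current setting.)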
On Tue, 25 Nov 2008, Christoph Hellwig wrote:
A WD 750 GB on an ICH7:
Device Model: WDC WD7500AAKS-00RBA0
00:1f.2 SATA controller: Intel Corporation 82801GR/GH (ICH7 Family) SATA
AHCI Controller (rev 01)
I've not tried playing with the cache.
Justin.
Justin Piszcz wrote:
[]
> barriers enabled:
> real 2m40.643s
> barriers disabled:
> real 0m27.612s
Barriers enabled:
$ time sh -c "tar xf linux-2.6.27.tar.bz2; sync"
real 2m3.317s
user 0m33.990s
sys 0m3.980s
$ time sh -c "rm -rf linux-2.6.27; sync"
real 1m4.033s
user 0m0.080s
sys 0m2.860s
Barriers disabled:
$ time sh -c "tar xf linux-2.6.27.tar.bz2; sync"
real 0m36.279s
user 0m25.610s
sys 0m2.800s
$ time sh -c "rm -rf linux-2.6.27; sync"
real 0m3.694s
user 0m0.010s
sys 0m2.230s
During the unpack with barriers=on, the CPU usage stays hardly
noticeable, while with barriers=off the thing becomes CPU-bound (on the
bzip2 decompression).
For comparison, here are results for jfs on the
same drive:
$ time sh -c "tar xf /stage/build/kernel/linux-2.6.27.tar.bz2; sync"
real 0m36.062s
user 0m25.370s
sys 0m2.860s
$ time sh -c "rm -rf linux-2.6.27; sync"
real 0m3.024s
user 0m0.040s
sys 0m0.750s
This jfs partition is located a bit further in on the same disk, so its
raw speed is a bit lower. Yet the numbers are pretty similar. In any
case, xfs with barriers is MUCH worse...
The disk is a 500 GB Hitachi HUA72105 ("raid edition") on an AMD
780G/SB700 chipset in AHCI mode.
/mjt
On Tue, 25 Nov 2008 at 01:09 +0100, Sven-Haegar Koch wrote:
> [...]
> So perhaps try mounting with nobarrier and see if the speed problem goes
> away, but know that you sacrifice some crash resilience when doing so.
Barriers were indeed the problem. My old system had no problems with
barriers, but here disabling them made an incredible difference.
Thanks!
-Stian
On Mon, 24 Nov 2008 at 19:36 -0600, Eric Sandeen wrote:
> I don't know if the storage you're on passes barriers or not, but xfs
> has barriers on by default, while ext3 does not. ext3 will still likely
> win the "untar a kernel" race, but for a fairer test, make the barrier
> settings consistent between the two.
As I wrote earlier, the point wasn't to find the fastest filesystem;
that's not what I'm looking for. I just want xfs to perform at least as
well on my new workstation as it did on my six-year-old one.

Disabling barriers did help (notice the rm -rf times: nobarrier is
almost 200 times faster there, and 10 times faster on the unpacking):
With barrier:
time bash -c 'tar xjf linux-2.6.27.7.tar.bz2 ; sync'
real 9m57.320s
user 0m16.253s
sys 0m2.692s
time bash -c 'rm -rf linux-2.6.27.7 ; sync'
real 4m46.130s
user 0m0.032s
sys 0m1.300s
No barrier:
time bash -c 'tar xjf linux-2.6.27.7.tar.bz2 ; sync'
real 0m57.028s
user 0m15.157s
sys 0m2.632s
time bash -c 'rm -rf linux-2.6.27.7 ; sync'
real 0m1.502s
user 0m0.032s
sys 0m1.436s
### Ext3
time bash -c 'tar xjf linux-2.6.27.7.tar.bz2 ; sync'
real 0m18.663s
user 0m14.693s
sys 0m2.828s
time bash -c 'rm -r linux-2.6.27.7 ; sync'
real 0m0.635s
user 0m0.028s
sys 0m0.564s
Although I find it weird that both Michael and Justin do this faster on
a single disk than I do on my beefy hardware RAID. But either way, the
system finally works OK, so I'm happy :) Oh, wait, Justin is using just
the .tar... Well, that didn't really help that much here:
time bash -c 'tar xf linux-2.6.27.7.tar ; sync'
real 0m43.703s
user 0m0.256s
sys 0m3.312s
Thanks!
Regards,
Stian
Stian Jordet wrote:
> Barriers were indeed the problem. My old system had no problems with
> barriers, but here disabling them made an incredible difference.
Depending on the old system, perhaps its storage did not allow the
barriers to be honored, so after xfs saw a test barrier write fail at
mount time, it disabled them ... you'd see a message if that were the
case, FWIW.
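If you still have logs from the old box, grepping dmesg would show it;
the message looks something like this (exact wording varies by kernel
version):
$ dmesg | grep -i barrier
Filesystem "sda1": Disabling barriers, trial barrier write failed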
-Eric
On Tue, 25 Nov 2008 at 15:22 -0600, Eric Sandeen wrote:
I know those messages; I get them on my old server. But my old
workstation did not disable barriers...

I still don't understand this, though. On my laptop (an HP EliteBook
8530w, very powerful, but still a laptop, not an eight-core hardware
RAID workstation...), I get this on an xfs partition:
barrier:
time bash -c 'tar xjf linux-2.6.27.7.tar.bz2; sync'
real 1m30.855s
user 0m23.265s
sys 0m4.096s
nobarrier:
time bash -c 'tar xjf linux-2.6.27.7.tar.bz2; sync'
real 0m37.602s
user 0m15.281s
sys 0m4.184s
With no barriers, it's 22 seconds faster than my workstation. That can't
be the way it's supposed to be, can it?
-Stian