I think that you missed the main problem with all these new "great"
filesystems. And the main problem is potential data loss in case of a
crash. Only ext3 supports ordered or journal data mode.
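[For reference, ext3's data journaling mode is chosen per mount; a sketch of an /etc/fstab entry, where the device name and mount point are made-up examples:]

```
# ext3 data journaling modes:
#   data=writeback - journal metadata only
#   data=ordered   - default; flush data blocks before committing metadata
#   data=journal   - journal file data as well as metadata
/dev/hda2  /home  ext3  defaults,data=journal  1 2
```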
XFS and JFS are designed for large multiprocessor machines powered by UPS
etc., where the risk of power failure or some kind of technical problem is
very low.
On the other side, Linux often works in much riskier environments - old
machines assembled from "yellow" parts, unstable power supply and so on.
With XFS, every time power fails while writing to a file, the entire file
is lost. The joke is that this is normal according to the FAQ :)
JFS has the same problem.
With ReiserFS this happens sometimes, but much more rarely. Maybe v4 will
solve this problem altogether.
The above three filesystems have problems with bad blocks too.
So the main problem is how usable the filesystem is. I mean, if a company
spends a few thousand dollars to provide a "low-risk" environment, then maybe
it will use AIX or IRIX, but not Linux.
And if I am running a <$1000 "server", I will never use XFS/JFS.
-----------------
Best Regards
Ivan
Ivan Ivanov wrote:
> I think that you missed the main problem with all these new "great"
> filesystems. And the main problem is potential data loss in case of a
> crash. Only ext3 supports ordered or journal data mode.
>
> XFS and JFS are designed for large multiprocessor machines powered by UPS
> etc., where the risk of power failure or some kind of technical problem is
> very low.
>
> On the other side, Linux often works in much riskier environments - old
> machines assembled from "yellow" parts, unstable power supply and so on.
>
> With XFS, every time power fails while writing to a file, the entire file
> is lost. The joke is that this is normal according to the FAQ :)
> JFS has the same problem.
> With ReiserFS this happens sometimes, but much more rarely. Maybe v4 will
> solve this problem altogether.
>
> The above three filesystems have problems with bad blocks too.
>
> So the main problem is how usable the filesystem is. I mean, if a company
> spends a few thousand dollars to provide a "low-risk" environment, then maybe
> it will use AIX or IRIX, but not Linux.
> And if I am running a <$1000 "server", I will never use XFS/JFS.
This just is not the issue. If we only wanted filesystems which behaved
like ext2/3, we would only have ext2/3. The issue, in case you have all
forgotten, is that Linus has provided no information on why merging XFS
is a problem. He asked them to make it easy to merge - they have done so.
Now they ask why the patch is ignored, and are promptly ignored further.
On Friday, September 13, 2002, at 09:47, Ivan Ivanov wrote:
>
> XFS and JFS are designed for large multiprocessor machines powered by UPS
> etc., where the risk of power failure or some kind of technical problem is
> very low.
>
Hmm, not entirely true. We run (C)XFS on IRIX on our 1024-CPU SGI Origin
3800 box over here. Every few weeks the @$%#@ thing breaks (CPU, bad
memory, that kind of thing). This takes down at least one partition of
the system, and sometimes a filesystem (or all filesystems). Without the
journaling features of XFS we'd spend all of our uptime fsck-ing. What
I'm saying is, a big box with lots of parts has a lot of parts that could
possibly break....
> On the other side, Linux often works in much riskier environments - old
> machines assembled from "yellow" parts, unstable power supply and so on.
>
> With XFS, every time power fails while writing to a file, the entire file
> is lost. The joke is that this is normal according to the FAQ :)
> JFS has the same problem.
> With ReiserFS this happens sometimes, but much more rarely. Maybe v4 will
> solve this problem altogether.
>
Of course, losing a file during a crash is not nice, but often the
whole job has to be rerun, at least from its last checkpoint, so
losing one file is not a problem. The same is true for most desktop
work; it's much clearer to a user not to find his/her file in place
than to find a "maybe corrupted" version.
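[That "old file or nothing, never a half-written one" behaviour can also be enforced per file at the application level, on any of these filesystems, via the classic write-to-temp, fsync, rename pattern. A minimal sketch in Python; the file names are invented for illustration:]

```python
import os

def atomic_write(path, data):
    """Write data to path so that a crash leaves either the old
    contents or the new contents, never a half-written mix."""
    tmp = path + ".tmp"
    with open(tmp, "wb") as f:
        f.write(data)
        f.flush()
        os.fsync(f.fileno())   # force the data blocks to disk first
    os.rename(tmp, path)       # then atomically swap the name over

atomic_write("checkpoint.dat", b"job state after step 42")
```

On a POSIX filesystem rename() within one filesystem is atomic, so a crash at any point leaves either the old file or the fully flushed new one.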
> The above three filesystems have problems with badblocks too.
>
> So the main problem is how usable the filesystem is. I mean, if a company
> spends a few thousand dollars to provide a "low-risk" environment, then maybe
> it will use AIX or IRIX, but not Linux.
> And if I am running a <$1000 "server", I will never use XFS/JFS.
>
A few 1000 $ do not buy you an IRIX or an AIX box with support. So,
spending that money wisely buys you a nice Linux box, decent hardware
and a decent FS. Even in our very well protected environment, the
no-break power supply managed to fail in the most horrible way (though
that happened only once in over 20 years), so having a robust FS is a
must. There is a world of possibilities between spending $200 at Walmart
for a low-end PC and >>$5k for your low-end IBM box. For "small" servers
that people will want to depend on, a decent FS is a must.
Now if XFS was as non-intrusive as FreeVFS, it probably would have
been part of the mainstream a long time ago. Unfortunately the XFS
people wanted to provide functions not in the VFS layer... Now maybe if
we cut that problem in two parts, filesystem and functional (dmapi
IIRC), the intrusion into the VFS layer would not be taken as badly as
it has been in the past....
---
Met vriendelijke groeten,
Remco Post
SARA - Stichting Academisch Rekencentrum Amsterdam http://www.sara.nl
High Performance Computing Tel. +31 20 592 8008 Fax. +31 20 668 3167
PGP keys at http://home.sara.nl/~remco/keys.asc
"I really didn't foresee the Internet. But then, neither did the computer
industry. Not that that tells us very much of course - the computer
industry didn't even foresee that the century was going to end."
-- Douglas Adams
On Fri, 13 Sep 2002, Nero wrote:
> Ivan Ivanov wrote:
> > I think that you missed the main problem with all these new "great"
> > filesystems. And the main problem is potential data loss in case of a
> > crash. Only ext3 supports ordered or journal data mode.
> >
> > XFS and JFS are designed for large multiprocessor machines powered by UPS
> > etc., where the risk of power failure or some kind of technical problem is
> > very low.
> >
> > On the other side, Linux often works in much riskier environments - old
> > machines assembled from "yellow" parts, unstable power supply and so on.
> >
> > With XFS, every time power fails while writing to a file, the entire file
> > is lost. The joke is that this is normal according to the FAQ :)
> > JFS has the same problem.
> > With ReiserFS this happens sometimes, but much more rarely. Maybe v4 will
> > solve this problem altogether.
> >
> > The above three filesystems have problems with bad blocks too.
> >
> > So the main problem is how usable the filesystem is. I mean, if a company
> > spends a few thousand dollars to provide a "low-risk" environment, then maybe
> > it will use AIX or IRIX, but not Linux.
> > And if I am running a <$1000 "server", I will never use XFS/JFS.
>
> This just is not the issue. If we only wanted filesystems which behaved
> like ext2/3, we would only have ext2/3. The issue, in case you have all
> forgotten, is that Linus has provided no information on why merging XFS
> is a problem. He asked them to make it easy to merge - they have done so.
> Now they ask why the patch is ignored, and are promptly ignored further.
>
I think that it is not fair to insist on merging XFS only. There are
many other projects that are of bigger value for Linux than yet another
filesystem - RSBAC, OpenMosix, LSM, HTree and more.
Some people like Linus, Alan, Marcelo etc. have the responsibility to
provide users with a usable, stable kernel.
And if somebody doesn't like their way of working, he is free to make his
own kernel tree.
I am not an expert, just a sysadmin, and I have been testing XFS since
kernel 2.4.6 (I am writing this mail from a test machine with kernel
2.4.18 and an XFS root filesystem), and I also think that XFS is not
ready for production (I lost some unimportant files after a crash
yesterday).
And after all, do you think that this kind of pressure on kernel
maintainers is the way to make free software?
--------------------
Cheers
Ivan
>I think that it is not fair to insist on merging XFS only. There are
>many other projects that are of bigger value for Linux than yet another
>filesystem - RSBAC, OpenMosix, LSM, HTree and more.
LSM is mainstream now.
OpenMosix is too intrusive (I think), as XFS _used to be_.....
>Some people like Linus, Alan, Marcelo etc. have the responsibility to
>provide users with a usable, stable kernel.
>And if somebody doesn't like their way of working, he is free to make his
>own kernel tree.
>I am not an expert, just a sysadmin, and I have been testing XFS since
>kernel 2.4.6 (I am writing this mail from a test machine with kernel
>2.4.18 and an XFS root filesystem), and I also think that XFS is not
>ready for production (I lost some unimportant files after a crash
>yesterday).
You are missing the point again; "ready" does _not_ mean "stable".
I use XFS on almost all of my PCs/servers and I have never lost a single
dot in any file on my XFS partitions.
>And after all, do you think that this kind of pressure on kernel
>maintainers is the way to make free software?
Kostadin Karaivanov
Senior System Administrator @ Ministry of Finance
tel: +359 2 98592062
[email protected]
On Fri, Sep 13, 2002 at 01:52:50PM +0300, Kostadin Karaivanov wrote:
> LSM is mainstream now.
> OpenMosix is too intrusive (I think), as XFS _used to be_.....
Mosix is more intrusive than XFS ever was. Not to mention it is written in
an unportable way and integrated into the Linux environment very badly.
It doesn't look like the maintainers aim for integration, and even if they
did they would have a long, long way to go to get it polished up.
On Fri, Sep 13, 2002 at 01:22:22PM +0300, Ivan Ivanov wrote:
>
> I think that it is not fair to insist on merging XFS only. There are
> many other projects that are of bigger value for Linux than yet another
> filesystem - RSBAC, OpenMosix, LSM, HTree and more.
Which are most likely far more intrusive than XFS currently is, or have
other issues. [1]
> Some people like Linus, Alan, Marcelo etc. have the responsibility to
> provide users with a usable, stable kernel.
So they mark XFS experimental, and unless the user configures the kernel
to offer experimental features, they won't even notice its presence.
> I am not an expert, just a sysadmin, and I have been testing XFS since
> kernel 2.4.6 (I am writing this mail from a test machine with kernel
> 2.4.18 and an XFS root filesystem), and I also think that XFS is not
> ready for production (I lost some unimportant files after a crash
> yesterday).
So, you are not using ext2 then either? Since that can lose files, too, on
a crash. (I've actually even once seen a whole ext2 partition disappear
after a crash. Same for reiserfs, BTW.)
Any fs can have bugs. Even though ext2 is indeed likely the most
tested one, it too can bite you sometimes. [1]
Regards,
Filip
[1] Actually I had problems with DMA timeouts resulting in IDE hangs on
an ext2 system last week, and it too managed to lose a few files. Sure,
fsck picked up most of them, and none were critical, but it proves
my point well enough.
--
We have joy, we have fun,
we have Linux on our Sun.
-- Andreas Tille
Ivan Ivanov wrote:
>With ReiserFS this happens sometimes, but much more rarely. Maybe v4 will
>solve this problem altogether.
>
We have a data-ordered patch that is waiting for 2.4.21pre1.
V4 uses fully atomic transactions for every fs-modifying syscall,
including data, and still goes way faster than v3....
Hans
Ivan Ivanov wrote:
>I am not an expert, just a sysadmin, and I have been testing XFS since
>kernel 2.4.6 (I am writing this mail from a test machine with kernel
>2.4.18 and an XFS root filesystem), and I also think that XFS is not
>ready for production (I lost some unimportant files after a crash
>yesterday).
This merely means that it should be flagged as experimental for a while.
There is no way a new filesystem can go into the Linux Kernel and not
have lots of bugs found by users during the first few months anyway,
however much we programmers might try to avoid it.
Hans
Bill Davidsen wrote:
>On Thu, 12 Sep 2002, Nikita Danilov wrote:
>
>
>
>>Then you missed "reiserfs inclusion into the kernel" soap opera.
>>
>>And besides, reiserfs in mainline to no extent means reiser4 in mainline
>>(unfortunately).
>>
>>
>
>No, that's probably a good thing. I don't care how good any programming
>team might be, an implementation written from scratch probably should burn
>in for a while before going in anywhere it might be used for production.
>
>And with all respect to the group, a 4th rewrite from scratch in only a few
>years suggests that the ratio of coding to designing is pretty high.
>
>
>
Version 3 came out in 1998 or so, and large software projects should be,
but rarely are, rewritten from scratch every 5 years. If you want to
object to XFS, object that it hasn't been rewritten in recent times.
As for the notion that the more designing you do, the less rewriting you
need to do, it is a bit like the belief that the better your scientific
theories the less you need to perform experiments.
Projects that are no longer attempting rewrites of their cores are dead
in their soul, and their authors should pass them on to someone younger.
That said, version 4 will be followed by semantic enhancements and
distributed filesystem work, as I finally have in version 4 a storage
layer good enough that I can move mostly to the tasks that first
interested me about FS design. Most of the stuff that needs improvement
in the version 4 storage layer can be done as new plugins, or so I
fondly hope. ;-)
Hans
On Friday 13 September 2002 02:47 am, Ivan Ivanov wrote:
> I think that you missed the main problem with all these new "great"
> filesystems. And the main problem is potential data loss in case of a
> crash. Only ext3 supports ordered or journal data mode.
>
> XFS and JFS are designed for large multiprocessor machines powered by UPS
> etc., where the risk of power failure or some kind of technical problem is
> very low.
>
> On the other side, Linux often works in much riskier environments - old
> machines assembled from "yellow" parts, unstable power supply and so on.
>
> With XFS, every time power fails while writing to a file, the entire file
> is lost. The joke is that this is normal according to the FAQ :)
Also note, it has been my experience that the blocks allocated to the file
are also lost. It takes an fsck operation to recover them.
I had a RAIDed XFS filesystem that lost power at 3am every night... IRIX
panic/crash/dead. After the third one in a row, half of the raid volume was
missing. I noticed that when the available space was exhausted. It took an
xfs_repair to rebuild the free space. (The power failure was due to an
overloaded circuit; somebody turned on a monitor...)
> JFS has the same problem.
> With ReiserFS this happens sometimes, but much more rarely. Maybe v4 will
> solve this problem altogether.
>
> The above three filesystems have problems with bad blocks too.
>
> So the main problem is how usable the filesystem is. I mean, if a company
> spends a few thousand dollars to provide a "low-risk" environment, then maybe
> it will use AIX or IRIX, but not Linux.
> And if I am running a <$1000 "server", I will never use XFS/JFS.
>
> -----------------
> Best Regards
> Ivan
--
-------------------------------------------------------------------------
Jesse I Pollard, II
Email: [email protected]
Any opinions expressed are solely my own.
Ivan Ivanov wrote:
>I think that you missed the main problem with all these new "great"
>filesystems. And the main problem is potential data loss in case of a
>crash. Only ext3 supports ordered or journal data mode.
>
>XFS and JFS are designed for large multiprocessor machines powered by UPS
>etc., where the risk of power failure or some kind of technical problem is
>very low.
>
>On the other side, Linux often works in much riskier environments - old
>machines assembled from "yellow" parts, unstable power supply and so on.
>
>With XFS, every time power fails while writing to a file, the entire file
>is lost. The joke is that this is normal according to the FAQ :)
>
>
This isn't true. I picked XFS as the filesystem for Echostar's DP-721
partially because when I power-cycle tested them all, it seemed to behave
in the most predictable way. The metadata always seemed to be correct,
and the unflushed blocks were screwed up and *usually pointed to null
blocks, which is what I expect. If we're talking about a tiny little
file then you might lose the whole thing, since it's all one unflushed
block. Since then I've seen the product in the field have the plug pulled
multiple times during a PVR recording, and you lose the time during the
boot but just about everything else is there.
* I think after hundreds of reboots you could screw that up; we fixed it
by periodically doing a repair during boot, which was still very, very
fast compared to an fsck. Also, it's not terribly important, since a few
blocks is only a couple of seconds of recording.
I'm not entirely sure what the correct semantics are for losing power
during a write. With some of the ReiserFS cuts I was looking at (circa
kernel 2.3.99), when you pulled the plug the last blocks committed would
be garbage. I remember a thread that said something to the effect that
the DMAs keep going for a few milliseconds after power is cut but the
data they transfer is trash; I don't know if I believe that or not. It
was very consistent though; it could be that the metadata just pointed
to blocks on the disk that didn't have zeros in them or something.
Still, it didn't trash the whole file; it got it mostly correct,
assuming that you detect that there was a crash and intervene; your logs
or whatever could have some garbage but everything keeps running for the
most part.
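[One common application-level answer to that uncertainty is to store a length-plus-checksum header with each record, so that garbage or nulled blocks left by an interrupted write are detected on the next read instead of silently trusted. A rough sketch of the idea in Python; the record format here is invented for illustration:]

```python
import struct
import zlib

def pack_record(payload: bytes) -> bytes:
    # 4-byte length + 4-byte CRC32 header, then the payload itself
    return struct.pack("<II", len(payload), zlib.crc32(payload)) + payload

def unpack_record(blob: bytes):
    # Returns the payload, or None if the record is truncated or garbage.
    if len(blob) < 8:
        return None
    length, crc = struct.unpack("<II", blob[:8])
    payload = blob[8:8 + length]
    if len(payload) != length or zlib.crc32(payload) != crc:
        return None  # trailing trash from an interrupted write
    return payload

rec = pack_record(b"log entry")
assert unpack_record(rec) == b"log entry"
assert unpack_record(rec[:-3]) is None           # truncated tail
assert unpack_record(b"\xff" * len(rec)) is None  # trashed blocks
```

The same trick is what lets a journaling filesystem itself decide, at replay time, whether the last journal record hit the disk intact.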
I really don't know how you call a filesystem good or not. I think XFS
isn't in yet simply because it's big and Linus may not have had the time
yet to read it all. XFS, JFS, ReiserFS, and even ext3 are way too big
to just test in a lab (Alan's house?) and call "bug free, ready for
production". You put them in, call them experimental, more of us hammer
on them, and they grow into being trusted. From my personal experience,
all of them have been pretty good, I haven't seen major problems with
any of them in a long time, and I did try to do some rigorous scientific
testing of them all; I'm not just spouting hearsay.
Ian Nelson
On Fri, 13 Sep 2002, Hans Reiser wrote:
> Bill Davidsen wrote:
> >No, that's probably a good thing. I don't care how good any programming
> >team might be, an implementation written from scratch probably should burn
> >in for a while before going in anywhere it might be used for production.
> >
> >And with all respect to the group, a 4th rewrite from scratch in only a few
> >years suggests that the ratio of coding to designing is pretty high.
> As for the notion that the more designing you do, the less rewriting you
> need to do, it is a bit like the belief that the better your scientific
> theories the less you need to perform experiments.
Exactly so. I spent several decades doing software development at GE's
Corporate R&D Center, and I had ample proof that both of those things are
true. I think the phrase you want in English is "the fewer experiments
you need to perform," but you did see the principle.
> Projects that are no longer attempting rewrites of their cores are dead
> in their soul, and their authors should pass them on to someone younger.
Hear that, Linus? Off to the retirement home with you unless you "rm *"
your source tree and "go back to Baltimore and start over again as a
virgin." Actually I think that Linux is an example of major software
designed from the start to be rewritten in parts and to evolve as a whole;
no clean sheet of paper needed.
--
bill davidsen <[email protected]>
CTO, TMR Associates, Inc
Doing interesting things with little computers since 1979.
On Fri, 13 Sep 2002 07:33:42 -0600 "Ian S. Nelson" <[email protected]> wrote:
Wow, what does Alan's house look like?
Cheers, Dean McEwan. Currently hacking KGI, which I don't understand; oh, and ask me about OpenModemTalk...