Sorry if this is a repost.
I am upgrading to a new 36GB HD, and intend to split it into 3 pieces:
one 7GB vfat, one ~28GB linux data (reiser or ext2), and 1GB swap.
I need to know if I can trust ReiserFS, as I do believe that I do want
ReiserFS.
On 14-Jul 07:54, Adam Schrotenboer wrote:
> Sorry if this is a repost.
>
> I am upgrading to a new 36GB HD, and intend to split it into 3 pieces:
> one 7GB vfat, one ~28GB linux data (reiser or ext2), and 1GB swap.
>
> I need to know if I can trust ReiserFS, as I do believe that I do want
> ReiserFS.
I have never lost data on ReiserFS. In fact, /usr shrank by ~20 MB when
changing from ext2 to ReiserFS.
Thomas
On Sat, 14 Jul 2001, Adam Schrotenboer wrote:
> Sorry if this is a repost.
>
> I am upgrading to a new 36GB HD, and intend to split it into 3 pieces:
> one 7GB vfat, one ~28GB linux data (reiser or ext2), and 1GB swap.
>
> I need to know if I can trust ReiserFS, as I do believe that I do want
> ReiserFS.
Which is a good point - can ext2 handle more than 4gig partitions ? I have
some vague ideas that it doesn't (and that it does not handle files more
than 2gig long). I am reasonably sure that ReiserFS is better in this
regard, though I am not certain about this either.
Vladimir Dergachev
On Sun, 15 Jul 2001 [email protected] wrote:
> Which is a good point - can ext2 handle more than 4gig partitions ? I have
It can.
> some vague ideas that it doesn't (and that it does not handle files more
> than 2gig long).
It does.
> Which is a good point - can ext2 handle more than 4gig partitions ? I have
> some vague ideas that it doesn't (and that it does not handle files more
> than 2gig long). I am reasonably sure that ReiserFS is better in this
> regard, though I am not certain about this either.
Ext2 handles files larger than 2Gb, and can handle up to about 1Tb per volume
which is the block layer fs size limit.
Alan
Alan Cox wrote:
>
> > Which is a good point - can ext2 handle more than 4gig partitions ? I have
> > some vague ideas that it doesn't (and that it does not handle files more
> > than 2gig long). I am reasonably sure that ReiserFS is better in this
> > regard, though I am not certain about this either.
>
> Ext2 handles files larger than 2Gb, and can handle up to about 1Tb per volume
> which is the block layer fs size limit.
>
> Alan
The limits for reiserfs and ext2 for kernels 2.4.x are the same (and they are 2Tb not 1Tb). The
limits are not in the individual filesystems. We need to have Linux go to 64 bit blocknumbers in
2.5.x, I am seeing a lot of customer demand for it. (Or we could use scalable integers, which would
be better.)
Hans
> > Ext2 handles files larger than 2Gb, and can handle up to about 1Tb per volume
> > which is the block layer fs size limit.
> >
> The limits for reiserfs and ext2 for kernels 2.4.x are the same (and they are 2Tb not 1Tb). The
> limits are not in the individual filesystems. We need to have Linux go to 64 bit blocknumbers in
It's 1 terabyte - there are some unclean sign bit abuses
> 2.5.x, I am seeing a lot of customer demand for it. (Or we could use scalable integers, which would
> be better.)
We definitely need larger than 1Tb on 2.5.x. No argument there. I believe Ben
had some prototype code for that.
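To make the arithmetic concrete, here is a minimal userspace sketch of where the competing 1 TB and 2 TB figures come from, assuming 512-byte sectors addressed by a 32-bit count somewhere in the 2.4 block layer (which field ends up treated as signed is exactly the "unclean sign bit abuse" at issue):

#include <stdio.h>

int main(void)
{
        unsigned long long sector_bytes = 512;

        /* unsigned 32-bit sector count: 2^32 * 512 bytes = 2 TB */
        unsigned long long unsigned_limit = (1ULL << 32) * sector_bytes;

        /* a path treating the count as signed: 2^31 * 512 bytes = 1 TB */
        unsigned long long signed_limit = (1ULL << 31) * sector_bytes;

        printf("unsigned 32-bit sectors: %llu bytes\n", unsigned_limit);
        printf("signed 32-bit sectors:   %llu bytes\n", signed_limit);
        return 0;
}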
Alan Cox wrote:
>
> > > Ext2 handles files larger than 2Gb, and can handle up to about 1Tb per volume
> > > which is the block layer fs size limit.
> > >
> > The limits for reiserfs and ext2 for kernels 2.4.x are the same (and they are 2Tb not 1Tb). The
> > limits are not in the individual filesystems. We need to have Linux go to 64 bit blocknumbers in
>
> It's 1 terabyte - there are some unclean sign bit abuses
but for some servers that bigstorage.com ships, using some drivers, 2Tb works...:-)
Ok, the limit is 1-2TB, depending, yes?
Hans
"Hans Reiser" <[email protected]> wrote in message
news:[email protected]...
>
> The limits for reiserfs and ext2 for kernels 2.4.x are the same (and they
> are 2Tb not 1Tb). The limits are not in the individual filesystems. We
> need to have Linux go to 64 bit blocknumbers in 2.5.x, I am seeing a lot
> of customer demand for it. (Or we could use scalable integers, which
> would be better.)
>
> Hans
It appears to be 1TB. Just last week I tried a 1.1TB fibre RAID array, and
found several signed/unsigned issues. Fdisk was unable to work with the
array. This was on a Slackware 8.0 distribution with a 2.4.6 kernel.
Rob
> > It's 1 terabyte - there are some unclean sign bit abuses
>
> but for some servers that bigstorage.com ships, using some drivers, 2Tb works...:-)
> Ok, the limit is 1-2TB, depending, yes?
Yes. Something like that.
On Sunday 15 July 2001 18:44, Hans Reiser wrote:
> The limits for reiserfs and ext2 for kernels 2.4.x are the same (and
> they are 2Tb not 1Tb). The limits are not in the individual
> filesystems. We need to have Linux go to 64 bit blocknumbers in
> 2.5.x, I am seeing a lot of customer demand for it. (Or we could use
> scalable integers, which would be better.)
Or we could introduce the notion of logical blocksize for each block
minor so that we can measure blocks in the same units the filesystem
uses. This would give us 16 TB while being able to stay with 32 bits
everywhere outside the block drivers themselves.
We are not that far away from being able to handle 8K blocks, so that
would bump it up to 32 TB.
--
Daniel
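A quick sketch of the arithmetic behind the proposal above, assuming 32-bit block numbers counted in the filesystem's own block size rather than in fixed 512-byte or 1K units:

#include <stdio.h>

int main(void)
{
        unsigned long long nblocks = 1ULL << 32;   /* 32-bit block numbers */
        unsigned int sizes[] = { 1024, 4096, 8192 };
        int i;

        for (i = 0; i < 3; i++) {
                unsigned long long bytes = nblocks * sizes[i];
                /* 1 TB = 2^40 bytes */
                printf("%4u-byte blocks -> %llu TB volumes\n",
                       sizes[i], bytes >> 40);
        }
        return 0;
}

With 4K blocks that gives the 16 TB figure, and 8K blocks the 32 TB figure quoted in the thread.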
Daniel Phillips wrote:
>
> On Sunday 15 July 2001 18:44, Hans Reiser wrote:
> > The limits for reiserfs and ext2 for kernels 2.4.x are the same (and
> > they are 2Tb not 1Tb). The limits are not in the individual
> > filesystems. We need to have Linux go to 64 bit blocknumbers in
> > 2.5.x, I am seeing a lot of customer demand for it. (Or we could use
> > scalable integers, which would be better.)
>
> Or we could introduce the notion of logical blocksize for each block
> minor so that we can measure blocks in the same units the filesystem
> uses. This would give us 16 TB while being able to stay with 32 bits
> everywhere outside the block drivers themselves.
>
> We are not that far away from being able to handle 8K blocks, so that
> would bump it up to 32 TB.
>
> --
> Daniel
16TB is not enough.
I agree that blocknumbers are a significant space user in FS metadata, which is why I think scalable
integers are correct.
Hans
On Monday 16 July 2001 00:05, Hans Reiser wrote:
> Daniel Phillips wrote:
> > On Sunday 15 July 2001 18:44, Hans Reiser wrote:
> > > The limits for reiserfs and ext2 for kernels 2.4.x are the same
> > > (and they are 2Tb not 1Tb). The limits are not in the individual
> > > filesystems. We need to have Linux go to 64 bit blocknumbers in
> > > 2.5.x, I am seeing a lot of customer demand for it. (Or we could
> > > use scalable integers, which would be better.)
> >
> > Or we could introduce the notion of logical blocksize for each
> > block minor so that we can measure blocks in the same units the
> > filesystem uses. This would give us 16 TB while being able to stay
> > with 32 bits everywhere outside the block drivers themselves.
> >
> > We are not that far away from being able to handle 8K blocks, so
> > that would bump it up to 32 TB.
> >
> > --
> > Daniel
>
> 16TB is not enough.
>
> I agree that blocknumbers are a significant space user in FS
> metadata, which is why I think scalable integers are correct.
I must have missed the place where you defined what scalable integers
are. I'd think the preferred way of representing a logical block size
is as a bit shift, not an absolute size, because it's far more
efficient to use in that form. Is this the same as a scalable integer?
--
Daniel
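For illustration, a tiny sketch of the bit-shift representation being asked about; the names (BLKBITS, byte_to_block) are hypothetical, not real kernel fields. The point is that mapping a byte offset to a block number becomes a shift instead of a 64-bit divide:

#include <stdio.h>

#define BLKBITS 12                      /* 4096-byte blocks: 1 << 12 */
#define BLKSIZE (1UL << BLKBITS)

static unsigned long long byte_to_block(unsigned long long offset)
{
        return offset >> BLKBITS;       /* a cheap shift ...           */
        /* ... instead of offset / BLKSIZE, a 64-bit division          */
}

int main(void)
{
        printf("byte offset 5 GB lands in block %llu\n",
               byte_to_block(5ULL << 30));
        return 0;
}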
Daniel Phillips writes:
> Or we could introduce the notion of logical blocksize for each block
> minor so that we can measure blocks in the same units the filesystem
> uses. This would give us 16 TB while being able to stay with 32 bits
> everywhere outside the block drivers themselves.
>
> We are not that far away from being able to handle 8K blocks, so that
> would bump it up to 32 TB.
This is like what the hard drive and BIOS industry has been doing.
First we had the 528 MB limit. Then the 2 GB limit. Then the 4 GB limit.
Then the 8.3 GB limit. Then the 33 GB limit. Then the 127 GB limit.
All along the way, users are cursing the damn limits.
An extra 4 bits buys us 6 years maybe. Nice, except that we
already have people complaining. Maybe somebody remembers when
the complaining started.
On Sun, 15 Jul 2001, Alexander Viro wrote:
>
>
> On Sun, 15 Jul 2001 [email protected] wrote:
>
> > Which is a good point - can ext2 handle more than 4gig partitions ? I have
>
> It can.
>
> > some vague ideas that it doesn't (and that it does not handle files more
> > than 2gig long).
>
> It does.
>
Umm that is very interesting - I was rather sure there were some problems
a while ago (2.2.x ?). Is there anything special necessary to use large
files ? Because I tried to create a 3+gig file and now I cannot ls or rm
it. (More details: the file was created using dd from block device (tried
to backup a smaller ext2 partition), ls and rm say "Value too large for
defined data type" and I upgraded everything mentioned in Documentation/Changes).
Vladimir Dergachev
PS: Yep, the new limits are clearly documented in
Documentation/filesystems/ext2.txt - sorry for bothering anyone.
On Sun, Jul 15, 2001 at 08:50:03PM -0400, [email protected] wrote:
> Umm that is very interesting - I was rather sure there were some problems
> a while ago (2.2.x ?). Is there anything special necessary to use large
> files ? Because I tried to create a 3+gig file and now I cannot ls or rm
> it. (More details: the file was created using dd from block device (tried
> to backup a smaller ext2 partition), ls and rm say "Value too large for
> defined data type" and I upgraded everything mentioned in Documentation/Changes).
Your utilities must be compiled with a recent glibc and with LFS (large
file support). Any recent distribution should support this.
--
Ragnar Kjorstad
Big Storage
On Sun, 15 Jul 2001 [email protected] wrote:
> Umm that is very interesting - I was rather sure there were some problems
> a while ago (2.2.x ?). Is there anything special necessary to use large
> files ? Because I tried to create a 3+gig file and now I cannot ls or rm
> it. (More details: the file was created using dd from block device (tried
> to backup a smaller ext2 partition), ls and rm say "Value too large for
> defined data type" and I upgraded everything mentioned in Documentation/Changes).
<shrug> you need fileutils built with large file support enabled (basically,
it should use stat64(), etc. and pass O_LARGEFILE to open()) and you need
a sufficiently recent libc. But that's the same regardless of fs type.
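For anyone following along, a minimal sketch of those explicit 64-bit interfaces, using glibc's _LARGEFILE64_SOURCE extensions; error handling is kept to a bare minimum:

#define _LARGEFILE64_SOURCE             /* exposes stat64(), O_LARGEFILE */
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h>
#include <stdio.h>

int main(int argc, char **argv)
{
        struct stat64 st;
        int fd;

        if (argc < 2)
                return 1;

        /* O_LARGEFILE lets a 32-bit program open a file past 2 GB */
        fd = open(argv[1], O_RDONLY | O_LARGEFILE);
        if (fd < 0 || fstat64(fd, &st) < 0) {
                perror(argv[1]);
                return 1;
        }
        printf("%s: %lld bytes\n", argv[1], (long long)st.st_size);
        close(fd);
        return 0;
}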
On Sun, 15 Jul 2001, Alexander Viro wrote:
>
>
> On Sun, 15 Jul 2001 [email protected] wrote:
>
> > Umm that is very interesting - I was rather sure there were some problems
> > a while ago (2.2.x ?). Is there anything special necessary to use large
> > files ? Because I tried to create a 3+gig file and now I cannot ls or rm
> > it. (More details: the file was created using dd from block device (tried
> > to backup a smaller ext2 partition), ls and rm say "Value too large for
> > defined data type" and I upgraded everything mentioned in Documentation/Changes).
>
> <shrug> you need fileutils built with large file support enabled (basically,
> it should use stat64(), etc. and pass O_LARGEFILE to open()) and you need
> a sufficiently recent libc. But that's the same regardless of fs type.
>
May I ask where one gets a patched fileutils package? I have just
downloaded fileutils-4.1 from prep.ai.mit.edu, and it has no information in
README, configure --help, etc. on how to enable this, and when compiled, ls
still complains.
thanks !
Vladimir Dergachev
On Sun, 15 Jul 2001 [email protected] wrote:
> May I ask where one gets a patched fileutils package? I have just
> downloaded fileutils-4.1 from prep.ai.mit.edu, and it has no information in
> README, configure --help, etc. on how to enable this, and when compiled, ls
> still complains.
>
> thanks !
>
> Vladimir Dergachev
The actual instructions are in the glibc documentation, in the section about
file position. If I'm reading this correctly, you have to define
_FILE_OFFSET_BITS to be 64; then the large file functions should overlay the
regular ones transparently, but YMMV.
--
Ignacio Vazquez-Abrams <[email protected]>
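A minimal sketch of that transparent approach: define _FILE_OFFSET_BITS to 64 before any system header (or build with -D_FILE_OFFSET_BITS=64) and, on a glibc with LFS support, off_t becomes 64 bits and the plain stat()/open() calls are routed to their 64-bit counterparts, so no other source changes should be needed:

#define _FILE_OFFSET_BITS 64            /* must precede any system header */
#include <sys/types.h>
#include <sys/stat.h>
#include <unistd.h>
#include <stdio.h>

int main(int argc, char **argv)
{
        struct stat st;                 /* st_size is now a 64-bit off_t */

        if (argc < 2)
                return 1;
        if (stat(argv[1], &st) < 0) {
                perror(argv[1]);
                return 1;
        }
        printf("%s: %lld bytes\n", argv[1], (long long)st.st_size);
        return 0;
}

Fileutils rebuilt along these lines is what makes ls and rm stop failing with "Value too large for defined data type".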
On Sun, 15 Jul 2001 [email protected] wrote:
>> I am upgrading to a new 36GB HD, and intend to split it into 3 pieces:
>> one 7GB vfat, one ~28GB linux data (reiser or ext2), and 1GB swap.
>>
>> I need to know if I can trust ReiserFS, as I do believe that I do want
>> ReiserFS.
>
>Which is a good point - can ext2 handle more than 4gig partitions ? I have
>some vague ideas that it doesn't
Very vague indeed. ;o)
/dev/md1 on /mnt/md1 type ext2 (rw,nosuid)
$ df /dev/md1
Filesystem 1k-blocks Used Available Use% Mounted on
/dev/md1 210042576 197033208 2339736 99% /mnt/md1
That is mission-critical 210GB ext2 over software RAID.
>(and that it does not handle files more than 2gig long).
pts/0 mharris@devel:~$ ls -o bigfile.dat
-rw-rw---- 1 mharris 6634951680 Jul 16 00:37 bigfile.dat
>I am reasonably sure that ReiserFS is better in this
>regard, though I am not certain about this either.
That is a contradiction. ;o) "Reasonably sure" and "certain" are
pretty close in meaning IMHO. I don't see how you can be
uncertain, but reasonably sure... ;o)
----------------------------------------------------------------------
Mike A. Harris - Linux advocate - Open Source advocate
Opinions and viewpoints expressed are solely my own.
----------------------------------------------------------------------
On Monday 16 July 2001 02:22, Albert D. Cahalan wrote:
> Daniel Phillips writes:
> > Or we could introduce the notion of logical blocksize for each
> > block minor so that we can measure blocks in the same units the
> > filesystem uses. This would give us 16 TB while being able to stay
> > with 32 bits everywhere outside the block drivers themselves.
> >
> > We are not that far away from being able to handle 8K blocks, so
> > that would bump it up to 32 TB.
>
> This is like what the hard drive and BIOS industry has been doing.
> First we had the 528 MB limit. Then the 2 GB limit. Then the 4 GB
> limit. Then the 8.3 GB limit. Then the 33 GB limit. Then the 127 GB
> limit. All along the way, users are cursing the damn limits.
>
> An extra 4 bits buys us 6 years maybe. Nice, except that we
> already have people complaining. Maybe somebody remembers when
> the complaining started.
Well, that coincides nicely with the period when most of us will still
be using 32 bit processors, don't you think? If we solve the problem
of internal fragmentation (as Reiserfs has) and memory management then
we can keep going on up to 64K blocksize, giving a 256 TB limit. Not
too shabby. (Some things need fixing after that, e.g. Ext2 directory
entry record sizes.)
At the same time, the larger block size means that, to transfer a given
number of blocks at random locations, less time is spent seeking and
less time in setup. Larger blocks are good - there's a reason why the
industry is heading in that direction. If it also helps us with our
partition size limits, then why not take advantage of it? I'd say do
both: use the logical blocksize measurements and provide 64 bit block
numbers as an option.
Note that there is a bug-by-design that comes from measuring device
capacity in 1K blocks the way we do now: on a device with 512 byte
blocks we can't correctly determine when a block access is out of
range. Measuring in logical block size would fix that cleanly.
--
Daniel
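A worked example of that bug-by-design, using a hypothetical device with an odd number of 512-byte sectors, whose size therefore has no exact representation in 1K blocks:

#include <stdio.h>

int main(void)
{
        unsigned long long sectors = 7;                 /* 512-byte sectors */
        unsigned long long bytes   = sectors * 512;     /* 3584 bytes       */
        unsigned long long size_kb = bytes / 1024;      /* rounds down to 3 */

        /*
         * Valid sectors are 0..6, but a bounds check done in 1K units
         * allows only size_kb * 2 = 6 sectors (0..5), rejecting the real
         * last sector; rounding up to 4K would instead accept a
         * nonexistent sector 7.  Measuring in the logical block size
         * avoids the rounding entirely.
         */
        printf("%llu sectors = %llu bytes, only %llu KB representable\n",
               sectors, bytes, size_kb);
        return 0;
}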
On Sun, 15 Jul 2001, Alan Cox wrote:
> > > Ext2 handles files larger than 2Gb, and can handle up to about 1Tb per volume
> > > which is the block layer fs size limit.
> > >
> > The limits for reiserfs and ext2 for kernels 2.4.x are the same (and they are 2Tb not 1Tb). The
> > limits are not in the individual filesystems. We need to have Linux go to 64 bit blocknumbers in
>
> It's 1 terabyte - there are some unclean sign bit abuses
Is this also true on 64-bit archs (Alpha, UltraSparc)? I guess the limits
listed in Documentation/filesystems/ext2.txt assume 32 bits.
.TM.
--
____/ ____/ /
/ / / Marco Colombo
___/ ___ / / Technical Manager
/ / / ESI s.r.l.
_____/ _____/ _/ [email protected]
Daniel Phillips wrote:
>
> We are not that far away from being able to handle 8K blocks, so that
> would bump it up to 32 TB.
That's way too small. Something like 32 PB would be better... ;)
We need at least one extra bit in volume/file size every year.
- Jussi Laako
--
PGP key fingerprint: 161D 6FED 6A92 39E2 EB5B 39DD A4DE 63EB C216 1E4B
Available at PGP keyservers
On Monday 16 July 2001 19:19, Jussi Laako wrote:
> Daniel Phillips wrote:
> > We are not that far away from being able to handle 8K blocks, so
> > that would bump it up to 32 TB.
>
> That's way too small. Something like 32 PB would be better... ;)
Are you serious? What kind of application are you running?
> We need at least one extra bit in volume/file size every year.
OK, well hmm, then in 1969 we needed a volume size of 4K. Um, it's
probably more accurate to use 18 months as the doubling period.
Anyway, that's what the 64 bit option for buffer_head->b_blocknr is
supposed to handle. The question is, is it necessary to go to a
uniform 64 bit quantity for all users regardless of whether they feel
restricted by a 32 TB volume size limit or not.
/me figures it will be 9 years before he even has a 1 TB disk in his
laptop
OK, I looked again and saw the smiley. Sometimes it's hard to tell
what's outrageous when talking about disk sizes.
--
Daniel
Jussi Laako wrote:
>
> Daniel Phillips wrote:
> >
> > We are not that far away from being able to handle 8K blocks, so that
> > would bump it up to 32 TB.
>
> That's way too small. Something like 32 PB would be better... ;)
> We need at least one extra bit in volume/file size every year.
>
> - Jussi Laako
>
> --
> PGP key fingerprint: 161D 6FED 6A92 39E2 EB5B 39DD A4DE 63EB C216 1E4B
> Available at PGP keyservers
Daniel, if I were really sure that 64k blocks were the right answer, I would agree with you. I
think nobody knows what will happen with reiserfs if we go to 64k blocks. It could be great. On
the other hand, the average number of bytes memcopied with every small file insertion increases
with node size. Scalable integers (a Xanadu project idea in which the last bit of an integer
indicates whether the integer is longer than the base size by an amount equal to the base size;
the chain can be infinitely long; they used a base size of 1 byte, but we could use a base size of
32 bits, and limit it to 64 bits rather than allowing infinite scaling) seem like more
conservative coding.
Hans
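To make the idea concrete, here is a minimal sketch of a scalable integer with a 32-bit base size capped at 64 bits. The exact layout (a flag in the low bit of the first 32-bit word) is only a guess for illustration, not anything reiserfs actually uses:

#include <stdint.h>
#include <stdio.h>

/* Encode; returns how many 32-bit words were used (1 or 2). */
static int scaled_encode(uint64_t value, uint32_t out[2])
{
        if (value < (1ULL << 31)) {
                out[0] = (uint32_t)(value << 1);        /* flag bit clear */
                return 1;
        }
        out[0] = ((uint32_t)(value << 1)) | 1;          /* flag bit set   */
        out[1] = (uint32_t)(value >> 31);               /* high 32 bits   */
        return 2;
}

static uint64_t scaled_decode(const uint32_t in[2])
{
        uint64_t value = in[0] >> 1;                    /* low 31 bits    */

        if (in[0] & 1)                                  /* extended form  */
                value |= (uint64_t)in[1] << 31;
        return value;
}

int main(void)
{
        uint32_t buf[2];
        uint64_t n = 5ULL << 40;                        /* needs long form */
        int words = scaled_encode(n, buf);

        printf("%llu encoded in %d words, decodes to %llu\n",
               (unsigned long long)n, words,
               (unsigned long long)scaled_decode(buf));
        return 0;
}

Block numbers below 2^31 would then cost only 4 bytes of metadata, which is where the space saving comes from; the price is that on-disk integers are no longer fixed-width.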
Hans Reiser wrote:
>
> infinitely long; they used a base size of 1 byte, but we could use a base
> size of 32 bits, and limit it to 64 bits rather than allowing infinite
> scaling) seem like more conservative coding.
I think we should use either 32, 64, or 128 bits (or some other power of two), but not
fiddle with something like 48 bits. I believe we lose more than we gain from the
added complexity.
Ok, 128 bits sounds like an insane amount, but so did 2 TB in the early '80s.
- Jussi Laako
--
PGP key fingerprint: 161D 6FED 6A92 39E2 EB5B 39DD A4DE 63EB C216 1E4B
Available at PGP keyservers
On Monday 16 July 2001 21:16, Hans Reiser wrote:
> Jussi Laako wrote:
> > Daniel Phillips wrote:
> > > We are not that far away from being able to handle 8K blocks, so
> > > that would bump it up to 32 TB.
> >
> > That's way too small. Something like 32 PB would be better... ;)
> > We need at least one extra bit in volume/file size every year.
>
> Daniel, if I were really sure that 64k blocks were the right answer, I
> would agree with you. I think nobody knows what will happen with
> reiserfs if we go to 64k blocks.
For 32 bit block numbers:
Logical Block Size    Largest Volume
------------------    --------------
        4K                 16 TB
        8K                 32 TB
       16K                 64 TB
       32K                128 TB
       64K                256 TB
You don't have to go to the extreme of 64K blocksize to get big
volumes. Anyway, with tailmerging there isn't really a downside to big
blocks, assuming the tailmerging code is fairly mature and efficient.
Maybe that's where we're still guessing?
> It could be great. On the other
> hand, the average number of bytes memcopied with every small file
> insertion increases with node size. Scalable integers (a Xanadu
> project idea in which the last bit of an integer indicates whether
> the integer is longer than the base size by an amount equal to the
> base size; the chain can be infinitely long; they used a base size of 1
> byte, but we could use a base size of 32 bits, and limit it to 64
> bits rather than allowing infinite scaling) seem like more
> conservative coding.
Yes, I've used similar things in the past, but only in serialized
structures. In a fixed-size field it doesn't make a lot of sense.
--
Daniel
On Mon, 16 Jul 2001, Jussi Laako wrote:
> Daniel Phillips wrote:
> > We are not that far away from being able to handle 8K blocks, so that
> > would bump it up to 32 TB.
> That's way too small. Something like 32 PB would be better... ;)
> We need at least one extra bit in volume/file size every year.
Volume size grows faster than file size, doesn't it? Maybe an extra bit of
volume size and 1/2 bit of file size per year...
-Dan
--
[-] Omae no subete no kichi wa ore no mono da. [-]
On Sunday 15 July 2001 20:22, Albert D. Cahalan wrote:
> An extra 4 bits buys us 6 years maybe. Nice, except that we
> already have people complaining. Maybe somebody remembers when
> the complaining started.
I blame Charles Babbage, myself...
As for the scalable block numbers, assuming Moore's law holds at 18
months/doubling without hitting subatomic quantum weirdness limits, the jump
from 32 to 64 bits gives us another 48 years. 48 years ago was 1953. UNIVAC
(powered by vacuum tubes) hit the market in 1951. Project Whirlwind would do
prototype work applying transistors to computers in 1954.
Just a sense of perspective. Scalable block numbers sound cool if they save
metadata space, but not as a source of extra scalability. And they sound
like a can of worms in terms of complexity.
Feel free to bring up the Y2K problem as a counter-example as to why
"rewriting it when it becomes a problem" is a bad idea. But the problem
there was closed (and lost) source code, wasn't it?
Rob