2005-01-21 14:11:11

by Wichert Akkerman

Subject: negative diskspace usage

After cleaning up a bit df suddenly showed interesting results:

Filesystem Size Used Avail Use% Mounted on
/dev/md4 1019M -64Z 1.1G 101% /tmp

Filesystem 1K-blocks Used Available Use% Mounted on
/dev/md4 1043168 -73786976294838127736 1068904 101% /tmp

This is on a ext3 filesystem on a 2.6.10-ac10 kernel.

Wichert.

--
Wichert Akkerman <[email protected]> | Technical Manager
Phone: +31 620 607 695 | Attingo, airport internet services
Fax: +31 30 693 2557 | http://www.attingo.nl/


2005-01-22 08:54:38

by ndiamond

Subject: Re: negative diskspace usage

Wichert Akkerman wrote:

> After cleaning up a bit df suddenly showed interesting results:
>
> Filesystem Size Used Avail Use% Mounted on
> /dev/md4 1019M -64Z 1.1G 101% /tmp
>
> Filesystem 1K-blocks Used Available Use% Mounted on
> /dev/md4 1043168 -73786976294838127736 1068904 101% /tmp

It looks like Windows 95's FDISK command created the partitions. After
that it doesn't matter which operating systems you connect the drive to
when formatting the partitions, writing files and cleaning whatever you
want to clean. The partition boundaries still remain where Windows 95
put them, and you have overlapping partitions.

After backing up whatever files you can still access (and don't trust
the contents of the files either), zero out the MBR and start over.

2005-01-22 10:09:38

by Wichert Akkerman

Subject: Re: negative diskspace usage

Previously [email protected] wrote:
> Wichert Akkerman wrote:
>
> > After cleaning up a bit df suddenly showed interesting results:
> >
> > Filesystem Size Used Avail Use% Mounted on
> > /dev/md4 1019M -64Z 1.1G 101% /tmp
> >
> > Filesystem 1K-blocks Used Available Use% Mounted on
> > /dev/md4 1043168 -73786976294838127736 1068904 101% /tmp
>
> It looks like Windows 95's FDISK
> command created the partitions.

There is no way you can see that from the output I gave, and it is also
incorrect.

> The partition boundaries still remain where Windows 95 put them, and
> you have overlapping partitions.

fdisk does not create overlapping partitions.

Wichert.

--
Wichert Akkerman <[email protected]> It is simple to make things.
http://www.wiggy.net/ It is hard to make things simple.

2005-01-22 21:08:32

by Norbert van Nobelen

Subject: Re: negative diskspace usage

I think the 101% usage is the interesting point here: you are using
more disk space than you have available. I missed the first mail
though, so what filesystem is this and which kernel version?

On Saturday 22 January 2005 11:09, Wichert Akkerman wrote:
> Previously [email protected] wrote:
> > Wichert Akkerman wrote:
> > > After cleaning up a bit df suddenly showed interesting results:
> > >
> > > Filesystem Size Used Avail Use% Mounted on
> > > /dev/md4 1019M -64Z 1.1G 101% /tmp
> > >
> > > Filesystem 1K-blocks Used Available Use% Mounted on
> > > /dev/md4 1043168 -73786976294838127736 1068904 101%
> > > /tmp
> >
> > It looks like Windows 95's FDISK
> > command created the partitions.
>
> There is no way you can see that from the output I gave, and it is also
> incorrect.
>
> > The partition boundaries still remain where Windows 95 put them, and
> > you have overlapping partitions.
>
> fdisk does not create overlapping partitions.
>
> Wichert.

--
EduSupport (http://www.edusupport.nl): Linux Desktop for schools and
small to medium business in The Netherlands and Belgium

2005-01-22 21:27:05

by Andries Brouwer

Subject: Re: negative diskspace usage

On Fri, Jan 21, 2005 at 03:11:06PM +0100, Wichert Akkerman wrote:

> After cleaning up a bit df suddenly showed interesting results:
>
> Filesystem Size Used Avail Use% Mounted on
> /dev/md4 1019M -64Z 1.1G 101% /tmp
>
> Filesystem 1K-blocks Used Available Use% Mounted on
> /dev/md4 1043168 -73786976294838127736 1068904 101% /tmp
>
> This is on a ext3 filesystem on a 2.6.10-ac10 kernel.

Funny.

The Used column is total minus free, so free was 2^66 + 964440 (in 1K
blocks). That 2^66 no doubt was 2^64 in a computation counting 4K
blocks, and arose at some point where a negative number was treated as
unsigned.
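
A minimal sketch of that kind of wrap-around (the 20000-block deficit
below is made up; only the unsigned reinterpretation and the 4K-to-1K
rescaling are the point):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    int64_t  used_4k = -20000;            /* blocks minus free, gone negative */
    uint64_t wrapped = (uint64_t)used_4k; /* the same bits read as unsigned   */

    printf("%llu\n", (unsigned long long)wrapped);  /* 2^64 - 20000 */

    /* Rescaling a 4K-block count to the 1K blocks df prints multiplies
     * by 4, so a 2^64 offset here shows up as 2^66 in df's output. */
    return 0;
}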

But having available=1068904 larger than free=964440 is strange.

I assume this was produced by statfs or statfs64 or so.
You can check using "strace -e statfs64 df /dev/md4" that
these really are the values returned by the kernel,
so that we can partition the blame between df and the kernel.
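
For a second opinion without strace, a little program along these lines
(a sketch; built with -D_FILE_OFFSET_BITS=64 glibc should route it
through statfs64) prints the raw values the kernel hands back:

#include <stdio.h>
#include <sys/vfs.h>

int main(int argc, char **argv)
{
    const char *path = argc > 1 ? argv[1] : "/tmp";
    struct statfs s;

    if (statfs(path, &s) != 0) {
        perror("statfs");
        return 1;
    }
    /* the three block counts ext3_statfs fills in, plus the block size */
    printf("f_bsize  = %lu\n",  (unsigned long)s.f_bsize);
    printf("f_blocks = %llu\n", (unsigned long long)s.f_blocks);
    printf("f_bfree  = %llu\n", (unsigned long long)s.f_bfree);
    printf("f_bavail = %llu\n", (unsigned long long)s.f_bavail);
    return 0;
}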

The values are computed by

buf->f_blocks = es->s_blocks_count - overhead;
buf->f_bfree = ext3_count_free_blocks (sb);
buf->f_bavail = buf->f_bfree - es->s_r_blocks_count;

that is: blocks = total - overhead, and available = free - reserved.
strace shows three values, and I expect tune2fs or so will show 2 more.

More available than free sounds like a negative count of reserved blocks.
Are you still able to examine the situation?

Andries

2005-01-23 22:56:31

by Wichert Akkerman

Subject: Re: negative diskspace usage

Previously Andries Brouwer wrote:
> I assume this was produced by statfs or statfs64 or so.

statfs64 indeed.

> Are you still able to examine the situation?

No, but I do have some more information. An e2fsck run on that filesystem
was just as interesting:

/dev/md4: clean, 16/132480 files, -15514/264960 blocks

Forcing an e2fsck revealed a few groups with incorrect block counts:

Free blocks count wrong for group #2 (34308, counted=32306).
Free blocks count wrong for group #6 (45805, counted=32306).
Free blocks count wrong for group #8 (14741, counted=2354).
Free blocks count wrong (280474, counted=252586).

After fixing those, everything returned to normal. I did run dumpe2fs
on the filesystem; if that is interesting I can retrieve and post that.

Wichert.

--
Wichert Akkerman <[email protected]> It is simple to make things.
http://www.wiggy.net/ It is hard to make things simple.

2005-01-23 23:32:57

by Andries Brouwer

Subject: Re: negative diskspace usage

On Sun, Jan 23, 2005 at 11:56:28PM +0100, Wichert Akkerman wrote:

> > Are you still able to examine the situation?
>
> No, but I do have some more information. An e2fsck run on that filesystem
> was just as interesting:
>
> /dev/md4: clean, 16/132480 files, -15514/264960 blocks
>
> Forcing an e2fsck revealed a few groups with incorrect block counts:
>
> Free blocks count wrong for group #2 (34308, counted=32306).
> Free blocks count wrong for group #6 (45805, counted=32306).
> Free blocks count wrong for group #8 (14741, counted=2354).
> Free blocks count wrong (280474, counted=252586).
>
> After fixing those everything returned to normal. I did run dumpe2fs
> on the filesystem, if that is interesting I can retrieve and post that.

It is an interesting situation, but probably there is not enough
information to find out what happened. On the other hand, if your
dumpe2fs output is not too big you might as well post it.
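
One back-of-envelope check does seem to fit, assuming the usual 5%
reserved blocks (13248 of the 264960): the Available figure from the
original df output plus that reserved count lands exactly on the 280474
that e2fsck flagged as the wrong free-blocks count.

#include <stdio.h>

int main(void)
{
    unsigned long avail_1k = 1068904;      /* df "Available", in 1K units  */
    unsigned long f_bavail = avail_1k / 4; /* in 4K blocks: 267226         */
    unsigned long reserved = 13248;        /* assumed: 5% of 264960 blocks */

    /* ext3_statfs sets f_bavail = f_bfree - s_r_blocks_count, so: */
    printf("implied f_bfree = %lu\n", f_bavail + reserved);  /* 280474 */
    return 0;
}

If that holds, the kernel was simply reporting the corrupted free-blocks
total, and df's total-minus-free went negative from there.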

Andries

2005-01-24 09:07:34

by Wichert Akkerman

Subject: Re: negative diskspace usage

Previously Andries Brouwer wrote:
> It is an interesting situation, but probably there is not enough
> information to find out what happened. On the other hand, if your
> dumpe2fs output is not too big you might as well post it.

It is indeed not too big, so here it is:

Filesystem volume name: <none>
Last mounted on: <not available>
Filesystem UUID: 33476a1a-cc34-4668-a4a3-fd0efaa01818
Filesystem magic number: 0xEF53
Filesystem revision #: 1 (dynamic)
Filesystem features: has_journal filetype needs_recovery sparse_super
Default mount options: (none)
Filesystem state: clean
Errors behavior: Continue
Filesystem OS type: Linux
Inode count: 132480
Block count: 264960
Reserved block count: 13248
Free blocks: 268779
Free inodes: 129353
First block: 0
Block size: 4096
Fragment size: 4096
Blocks per group: 32768
Fragments per group: 32768
Inodes per group: 14720
Inode blocks per group: 460
Last mount time: Wed Jan 19 16:38:17 2005
Last write time: Wed Jan 19 16:38:17 2005
Mount count: 8
Maximum mount count: 20
Last checked: Wed Aug 25 16:32:54 2004
Check interval: 15552000 (6 months)
Next check after: Mon Feb 21 15:32:54 2005
Reserved blocks uid: 0 (user root)
Reserved blocks gid: 0 (group root)
First inode: 11
Inode size: 128
Journal inode: 8
Journal backup: inode blocks


Group 0: (Blocks 0-32767)
Primary superblock at 0, Group descriptors at 1-1
Block bitmap at 464 (+464), Inode bitmap at 465 (+465)
Inode table at 4-463 (+4)
24101 free blocks, 14708 free inodes, 1 directories
Free blocks: 3, 8667-20480, 20482-32767
Free inodes: 11, 13, 15-14720
Group 1: (Blocks 32768-65535)
Backup superblock at 32768, Group descriptors at 32769-32769
Block bitmap at 33264 (+496), Inode bitmap at 33265 (+497)
Inode table at 32772-33231 (+4)
32303 free blocks, 14719 free inodes, 1 directories
Free blocks: 32770-32771, 33232-33263, 33266-55295, 55297-65535
Free inodes: 14722-29440
Group 2: (Blocks 65536-98303)
Block bitmap at 66064 (+528), Inode bitmap at 66065 (+529)
Inode table at 65540-65999 (+4)
34308 free blocks, 14720 free inodes, 0 directories
Free blocks: 65536-65539, 66000-66063, 66066-98303
Free inodes: 29441-44160
Group 3: (Blocks 98304-131071)
Backup superblock at 98304, Group descriptors at 98305-98305
Block bitmap at 98864 (+560), Inode bitmap at 98865 (+561)
Inode table at 98308-98767 (+4)
32303 free blocks, 14718 free inodes, 1 directories
Free blocks: 98306-98307, 98768-98863, 98866-129023, 129025-131071
Free inodes: 44162-44163, 44165-58880
Group 4: (Blocks 131072-163839)
Block bitmap at 131664 (+592), Inode bitmap at 131665 (+593)
Inode table at 131076-131535 (+4)
32305 free blocks, 14719 free inodes, 1 directories
Free blocks: 131073-131075, 131536-131663, 131666-163839
Free inodes: 58882-73600
Group 5: (Blocks 163840-196607)
Backup superblock at 163840, Group descriptors at 163841-163841
Block bitmap at 164464 (+624), Inode bitmap at 164465 (+625)
Inode table at 163844-164303 (+4)
32304 free blocks, 14720 free inodes, 0 directories
Free blocks: 163842-163843, 164304-164463, 164466-196607
Free inodes: 73601-88320
Group 6: (Blocks 196608-229375)
Block bitmap at 197264 (+656), Inode bitmap at 197265 (+657)
Inode table at 196612-197071 (+4)
45805 free blocks, 14720 free inodes, 0 directories
Free blocks: 196608-196611, 197072-197263, 197266-229375
Free inodes: 88321-103040
Group 7: (Blocks 229376-262143)
Backup superblock at 229376, Group descriptors at 229377-229377
Block bitmap at 230064 (+688), Inode bitmap at 230065 (+689)
Inode table at 229380-229839 (+4)
32304 free blocks, 14720 free inodes, 0 directories
Free blocks: 229378-229379, 229840-230063, 230066-262143
Free inodes: 103041-117760
Group 8: (Blocks 262144-264959)
Block bitmap at 262864 (+720), Inode bitmap at 262865 (+721)
Inode table at 262148-262607 (+4)
14741 free blocks, 14720 free inodes, 0 directories
Free blocks: 262144-262147, 262608-262863, 262866-264959
Free inodes: 117761-132480
--
Wichert Akkerman <[email protected]> It is simple to make things.
http://www.wiggy.net/ It is hard to make things simple.

2005-05-03 11:32:24

by Dave Gilbert (Home)

Subject: Re: negative diskspace usage

Hi,
Just a 'me too'.

Configuration: 2.6.11.3 (SuSE 9.2 tree)
3ware 9000 hardware RAID set up as RAID5,
just over 1.5T total, with a 1.5T ext3 partition
created with mke2fs 1.27 (Debian installation).
(A rather slow 1.4GHz Athlon and 512MB of RAM on
the box I'm using to test this RAID.)

We'd been running bonnie on the partition for a while and also created
a test file that filled the partition; then I rm'd that 1.5TB file.
That took a long while - probably over an hour - and doing a df as it
was going showed the amount of space used dropping.

So then I started to copy stuff onto it, did a df, and found it showing
the -64Z on the free space (df (fileutils) 4.1). I've got some stuff
unbzip'ing on it and it now seems to be showing sensible sizes again.

If anyone wants me to try stuff I can, since this RAID isn't in service yet.

Dave