2002-10-01 19:53:48

by Paul Komkoff

Subject: [STUPID TESTCASE] ext3 htree vs. reiserfs on 2.5.40-mm1

This is the stupidest testcase I've done, but it may be worth seeing.

We create 300000 files named from 00000000 to 000493E0 (hex for 300000)
in one directory, then delete them in order.

Tests were run on ext3+htree and reiserfs. ext3 without htree was not
evaluated because it would take a very long time ...

Both filesystems were mounted with noatime,nodiratime, and ext3 used
data=writeback to be somewhat fair ...
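
For reference, a run of this kind might look roughly as follows; the
device, mount point and helper names are illustrative only (the actual
create/delete programs are posted later in this thread):

  mount -o noatime,nodiratime /dev/sda4 /mnt/test                  # reiserfs run
  mount -o noatime,nodiratime,data=writeback /dev/sda4 /mnt/test   # ext3 run
  cd /mnt/test
  time ./mkfiles 300000    # creat() files 00000000 .. 000493DF
  time ./rmfiles 300000    # unlink() them in the same order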

              real        user       sys
reiserfs:
Creating:     3m13.208s   0m4.412s   2m54.404s
Deleting:     4m41.250s   0m4.206s   4m17.926s

Ext3:
Creating:     4m9.331s    0m3.927s   2m21.757s
Deleting:     9m14.838s   0m3.446s   1m39.508s

htree improved this a lot, but it is still beaten by reiserfs. Seems odd
to me - deleting takes twice as long as creating ...

--
Paul P 'Stingray' Komkoff 'Greatest' Jr /// (icq)23200764 /// (http)stingr.net
When you're invisible, the only one really watching you is you (my keychain)


2002-10-01 20:37:47

by Hans Reiser

Subject: Re: [STUPID TESTCASE] ext3 htree vs. reiserfs on 2.5.40-mm1

Paul P Komkoff Jr wrote:

>This is the stupidest testcase I've done, but it may be worth seeing.
>
>We create 300000 files named from 00000000 to 000493E0 in one
>directory, then delete them in order.
>
>Tests were run on ext3+htree and reiserfs. ext3 without htree was not
>evaluated because it would take a very long time ...
>
>Both filesystems were mounted with noatime,nodiratime, and ext3 used
>data=writeback to be somewhat fair ...
>
> real user sys
>reiserfs:
>Creating: 3m13.208s 0m4.412s 2m54.404s
>Deleting: 4m41.250s 0m4.206s 4m17.926s
>
>Ext3:
>Creating: 4m9.331s 0m3.927s 2m21.757s
>Deleting: 9m14.838s 0m3.446s 1m39.508s
>
>htree improved this a lot, but it is still beaten by reiserfs. Seems odd
>to me - deleting takes twice as long as creating ...
>
>
>
Can you send us the code so we can try it on reiser4? We are going to
release reiser4 sometime this month (don't ask me when), and we'd be
happy to see you run it when you do.

Hans

2002-10-01 20:40:32

by Andreas Dilger

Subject: Re: [STUPID TESTCASE] ext3 htree vs. reiserfs on 2.5.40-mm1

On Oct 01, 2002 23:59 +0400, Paul P Komkoff Jr wrote:
> This is the stupidest testcase I've done, but it may be worth seeing.
>
> We create 300000 files named from 00000000 to 000493E0 in one
> directory, then delete them in order.
>
> Tests were run on ext3+htree and reiserfs. ext3 without htree was not
> evaluated because it would take a very long time ...
>
> Both filesystems were mounted with noatime,nodiratime, and ext3 used
> data=writeback to be somewhat fair ...

Why do you think data=writeback is better than data=journal? If the
files have no data then it should not make a difference.

> real user sys
> reiserfs:
> Creating: 3m13.208s 0m4.412s 2m54.404s
> Deleting: 4m41.250s 0m4.206s 4m17.926s
>
> Ext3:
> Creating: 4m9.331s 0m3.927s 2m21.757s
> Deleting: 9m14.838s 0m3.446s 1m39.508s
>
> htree improved this a lot, but it is still beaten by reiserfs. Seems odd
> to me - deleting takes twice as long as creating ...

This is a known issue with the current htree code (not the algorithm
or the on-disk format, luckily). The problem is that inodes are being
allocated essentially sequentially on disk. If you are deleting in
creation order (as you are) then you are randomly dirtying directory
leaf blocks, and if you are deleting in readdir() order, then you are
randomly dirtying inode blocks.

As a result, if the size of the directory + inode table blocks is larger
than memory, and also larger than 1/4 of the journal, you are essentially
seek-bound because of random block dirtying.

This can be fixed by changing the inode allocation routines to allocate
inodes in "chunks" which correspond to the leaf page for which the
dirent is being allocated. This will try to keep the inodes for a given
directory block relatively close together on disk and greatly improve
delete performance.
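
To make the idea concrete, here is a toy sketch of such a policy; the
constants and the per-chunk bookkeeping are invented for illustration
and are not taken from the ext3 code:

#include <stdio.h>

/*
 * Toy model: reserve a contiguous "chunk" of inode numbers for each
 * directory leaf block, so that all entries whose dirents land in the
 * same leaf block get inodes that sit next to each other on disk.
 */
#define CHUNK      128        /* inodes reserved per leaf block (invented) */
#define MAX_LEAVES 4096       /* toy limit on leaf blocks (invented)       */

static unsigned long next_in_chunk[MAX_LEAVES];  /* next free slot per chunk */

static unsigned long alloc_inode_for_leaf(unsigned long leaf_block)
{
        /* inode numbers for this leaf block start at leaf_block * CHUNK */
        return leaf_block * CHUNK + next_in_chunk[leaf_block]++;
}

int main(void)
{
        unsigned long a = alloc_inode_for_leaf(7);  /* 896                      */
        unsigned long b = alloc_inode_for_leaf(7);  /* 897: same leaf, adjacent */
        unsigned long c = alloc_inode_for_leaf(8);  /* 1024: next chunk         */

        printf("%lu %lu %lu\n", a, b, c);
        return 0;
}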

You should see what the size of the directory is at its peak (probably
16 bytes * 300k ~= 5MB), add in the size of the inode table blocks
(128 bytes * 300k ~= 38MB), and make the journal 4x as large as that,
so 192MB (mke2fs -j -J size=192), and re-run the test (I assume you have
at least 256MB+ of RAM on the test system).
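
Spelled out, that sizing is roughly: ~5MB of directory entries plus ~38MB
of inode table is about 43MB of metadata; 4 x 43MB ~= 172MB, which rounds
up to the 192MB journal suggested above.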

What is very interesting from the above results is that the CPU usage
is _much_ smaller for ext3+htree than for reiserfs. It looks like
reiserfs is nearly CPU-bound by the tests, so it is unlikely that they
can run much faster. In theory, ext3+htree could run at the CPU time if we
fixed the allocation and/or seeking issues.

Cheers, Andreas
--
Andreas Dilger
http://www-mddsp.enel.ucalgary.ca/People/adilger/
http://sourceforge.net/projects/ext2resize/

2002-10-01 20:44:05

by Hans Reiser

Subject: Re: [STUPID TESTCASE] ext3 htree vs. reiserfs on 2.5.40-mm1

Hans Reiser wrote:

> Paul P Komkoff Jr wrote:
>
>> This is the stupidest testcase I've done, but it may be worth seeing.
>>
>> We create 300000 files named from 00000000 to 000493E0 in one
>> directory, then delete them in order.
>>
>> Tests were run on ext3+htree and reiserfs. ext3 without htree was not
>> evaluated because it would take a very long time ...
>>
>> Both filesystems were mounted with noatime,nodiratime, and ext3 used
>> data=writeback to be somewhat fair ...
>>
>> real user sys
>> reiserfs:
>> Creating: 3m13.208s 0m4.412s 2m54.404s
>> Deleting: 4m41.250s 0m4.206s 4m17.926s
>>
>> Ext3:
>> Creating: 4m9.331s 0m3.927s 2m21.757s
>> Deleting: 9m14.838s 0m3.446s 1m39.508s
>>
>> htree improved this a lot, but it is still beaten by reiserfs. Seems odd
>> to me - deleting takes twice as long as creating ...
>>
>>
>>
> Can you send us the code so we can try it on reiser4? We are going to
> release reiser4 sometime this month (don't ask me when), and we'd be
> happy to see you run it when you do.

^you^we

Sorry to list for bandwidth waste.


2002-10-01 21:14:05

by Hans Reiser

Subject: Re: [STUPID TESTCASE] ext3 htree vs. reiserfs on 2.5.40-mm1

Andreas Dilger wrote:

>
>
> It looks like
>reiserfs is nearly CPU-bound by the tests, so it is unlikely that they
>can run much faster.
>
Um, usually being CPU-bound is easier to fix. We probably have not
CPU-profiled this code path, and after Halloween we probably should (but
for reiser4, since reiser3 is soon to be obsolete). It is being IO-bound
that is usually hard to fix, though since I haven't read the htree code
I trust you that it is different in this case....

2002-10-01 21:12:55

by Rik van Riel

Subject: Re: [STUPID TESTCASE] ext3 htree vs. reiserfs on 2.5.40-mm1

On Wed, 2 Oct 2002, Hans Reiser wrote:
> Hans Reiser wrote:

[snip 50 lines]

> ^you^we
>
> Sorry to list for bandwidth waste.

So learn quoting ;)

Rik
--
A: No.
Q: Should I include quotations after my reply?

http://www.surriel.com/ http://distro.conectiva.com/

2002-10-01 21:22:05

by Daniel Phillips

Subject: Re: [STUPID TESTCASE] ext3 htree vs. reiserfs on 2.5.40-mm1

On Tuesday 01 October 2002 21:59, Paul P Komkoff Jr wrote:
> This is the stupidest testcase I've done, but it may be worth seeing.
>
> We create 300000 files

How big are the files?

> named from 00000000 to 000493E0 in one directory, then delete them in order.

You probably want to try creating the files in random order as well. A
program to do that is attached, use in the form:

randfiles <basename> <count> y

where 'y' means 'print the names', for debugging purposes.

What did your delete command look like, "rm -rf" or "echo * | xargs rm"?

> Tests were run on ext3+htree and reiserfs. ext3 without htree was not
> evaluated because it would take a very long time ...
>
> Both filesystems were mounted with noatime,nodiratime, and ext3 used
> data=writeback to be somewhat fair ...
>
> real user sys
> reiserfs:
> Creating: 3m13.208s 0m4.412s 2m54.404s
> Deleting: 4m41.250s 0m4.206s 4m17.926s
>
> Ext3:
> Creating: 4m9.331s 0m3.927s 2m21.757s
> Deleting: 9m14.838s 0m3.446s 1m39.508s
>
> htree improved this a lot, but it is still beaten by reiserfs. Seems odd
> to me - deleting takes twice as long as creating ...

Only 300,000 files, you haven't got enough to cause inode table thrashing,
though some kernels shrink the inode cache too aggressively and that can
cause thrashing at lower numbers. Maybe a bottleneck in the journal?

Not that anybody is going to complain about any of the above - it's still
running less than 1 ms/create, 2 ms/delete. Still, it's slower than I'm
used to.
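
Spelled out from the first set of numbers: ext3+htree comes to roughly
249s / 300,000 ~= 0.8ms per create and 555s / 300,000 ~= 1.9ms per delete,
while reiserfs comes to roughly 0.6ms per create and 0.9ms per delete.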

--
Daniel


Attachments:
randfiles.c (539.00 B)
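
The randfiles.c attachment itself is not reproduced in this archive; a
minimal sketch of a random-order creator along the lines described above
might look like the following (the naming scheme, fixed seed, and lack of
duplicate handling are illustrative choices, not the actual attachment):

#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(int argc, char *argv[])
{
        int i, fd, count, verbose;
        char name[256];

        if (argc < 3) {
                fprintf(stderr, "usage: %s <basename> <count> [y]\n", argv[0]);
                return 1;
        }
        count = atoi(argv[2]);
        verbose = (argc > 3 && argv[3][0] == 'y');
        srand(1);                 /* fixed seed so runs are repeatable */

        for (i = 0; i < count; i++) {
                /* random hex suffix instead of sequential names;
                 * a collision just re-truncates an existing file */
                snprintf(name, sizeof(name), "%s%08X", argv[1], (unsigned) rand());
                if (verbose)
                        puts(name);
                if (-1 == (fd = creat(name, S_IRWXU))) {
                        perror(name);
                        return 1;
                }
                close(fd);
        }
        return 0;
}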

2002-10-01 21:26:13

by Daniel Phillips

Subject: Re: [STUPID TESTCASE] ext3 htree vs. reiserfs on 2.5.40-mm1

Hi Hans,

On Tuesday 01 October 2002 22:49, Hans Reiser wrote:
> > Can you send us the code so we can try it on reiser4? We are going to
> > release reiser4 sometime this month (don't ask me when), and we'd be
> > happy to see you run it when you do.
>
> ^you^we
>
> Sorry to list for bandwidth waste.

Can be much reduced by selective quoting...

--
Daniel

2002-10-02 06:34:21

by Nikita Danilov

Subject: Re: [STUPID TESTCASE] ext3 htree vs. reiserfs on 2.5.40-mm1

Paul P Komkoff Jr writes:
> This is the stupidest testcase I've done, but it may be worth seeing.
>
> We create 300000 files named from 00000000 to 000493E0 in one
> directory, then delete them in order.
>
> Tests were run on ext3+htree and reiserfs. ext3 without htree was not
> evaluated because it would take a very long time ...
>
> Both filesystems were mounted with noatime,nodiratime, and ext3 used
> data=writeback to be somewhat fair ...
>
> real user sys
> reiserfs:
> Creating: 3m13.208s 0m4.412s 2m54.404s
> Deleting: 4m41.250s 0m4.206s 4m17.926s
>
> Ext3:
> Creating: 4m9.331s 0m3.927s 2m21.757s
> Deleting: 9m14.838s 0m3.446s 1m39.508s

Why are the user times so different?

>
> htree improved this a lot, but it is still beaten by reiserfs. Seems odd
> to me - deleting takes twice as long as creating ...
>
> --
> Paul P 'Stingray' Komkoff 'Greatest' Jr /// (icq)23200764 /// (http)stingr.net
> When you're invisible, the only one really watching you is you (my keychain)

Nikita.

2002-10-02 10:43:42

by Paul Komkoff

Subject: Re: [STUPID TESTCASE] ext3 htree vs. reiserfs on 2.5.40-mm1

Replying to Andreas Dilger:
> Why do you think data=writeback is better than data=journal? If the
> files have no data then it should not make a difference.

It is better than the default data=ordered, I think :)

Thanks for the detailed explanation - it saved me a lot of time, and
according to your directions I have re-run my test. Now ext3 is
better :)

          real         user       sys
e3
create    2m49.545s    0m4.162s   2m20.766s
delete    2m8.155s     0m3.614s   1m34.945s

reiser
create    3m13.577s    0m4.338s   2m54.026s
delete    4m39.249s    0m3.968s   4m16.297s

e3
create    2m50.766s    0m4.024s   2m21.197s
delete    2m8.755s     0m3.501s   1m35.737s

reiser
create    3m13.015s    0m4.432s   2m53.412s
delete    4m41.011s    0m3.893s   4m16.845s


These are two typical runs. I am now creating the ext3 filesystem with
mke2fs -j -O dir_index -J size=192 -T news /dev/sda4

As you can see, this improves performance by about 1/4.

Unfortunately, there is still one issue in ext3, called the "inode limit".
Initially I wanted to run this test on 1000000 files, but ... I hit the
inode limit and don't want to increase it artificially yet.

Reiserfs worked fine because it doesn't have that kind of limit ...

--
Paul P 'Stingray' Komkoff 'Greatest' Jr /// (icq)23200764 /// (http)stingr.net
When you're invisible, the only one really watching you is you (my keychain)

2002-10-02 16:32:57

by Paul Komkoff

Subject: Re: [STUPID TESTCASE] ext3 htree vs. reiserfs on 2.5.40-mm1

Replying to Daniel Phillips:
> How big are the files?

0.

#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>     /* atoi */
#include <unistd.h>     /* close */

int main(int argc, char *argv[])
{
        int i, fd, k;
        char t[128];

        if (argc < 2) {
                fprintf(stderr, "usage: %s <count>\n", argv[0]);
                return 1;
        }
        k = atoi(argv[1]);

        for (i = 0; i < k; i++) {
                snprintf(t, sizeof(t), "%08X", i);     /* 00000000, 00000001, ... */
                if (-1 == (fd = creat(t, S_IRWXU))) {
                        perror("Create file");
                        printf("no: %d\n", i);
                        return 1;
                }
                close(fd);
        }
        return 0;
}


> You probably want to try creating the files in random order as well. A
> program to do that is attached, use in the form:
>
> randfiles <basename> <count> y
>
> where 'y' means 'print the names', for debugging purposes.

this will be the next series of tests :)

> What did your delete command look like, "rm -rf" or "echo * | xargs rm"?

#include <stdio.h>
#include <stdlib.h>     /* atoi */
#include <unistd.h>     /* unlink */

int main(int argc, char *argv[])
{
        int i, k;
        char t[128];

        if (argc < 2) {
                fprintf(stderr, "usage: %s <count>\n", argv[0]);
                return 1;
        }
        k = atoi(argv[1]);

        for (i = 0; i < k; i++) {
                snprintf(t, sizeof(t), "%08X", i);     /* same names as the create run */
                if (-1 == unlink(t)) {
                        perror("unlink");
                        printf("no: %d\n", i);
                        return 1;
                }
        }
        return 0;
}

> Only 300,000 files, you haven't got enough to cause inode table thrashing,
> though some kernels shrink the inode cache too aggressively and that can
> cause thrashing at lower numbers. Maybe a bottleneck in the journal?

Yes, increasing the journal to fit the whole directory in it (as Andreas
Dilger suggested) improved results by about 1/4. But initially my test was
1000000 files (/dev/sda4 in my tests is 1882844), and I quickly hit the
inode limit on a -T news ext3 filesystem, so I would need to increase it
artificially at mke2fs time, but I decided not to do so (yet).

> Not that anybody is going to complain about any of the above - it's still
> running less than 1 ms/create, 2 ms/delete. Still, it's slower than I'm
> used to.

I am just trying to write a caching proxy-like application without
reinventing the wheel (i.e. designing my own filesystem and storing it in
a big file just because some filesystem is too slow on large directories
or cannot hold more than N empty objects, etc.).

--
Paul P 'Stingray' Komkoff 'Greatest' Jr /// (icq)23200764 /// (http)stingr.net
When you're invisible, the only one really watching you is you (my keychain)

2002-10-02 16:51:50

by Andreas Dilger

Subject: Re: [STUPID TESTCASE] ext3 htree vs. reiserfs on 2.5.40-mm1

On Oct 02, 2002 14:48 +0400, Paul P Komkoff Jr wrote:
> Unfortunately, there is still one issue in ext3, called the "inode limit".
> Initially I wanted to run this test on 1000000 files, but ... I hit the
> inode limit and don't want to increase it artificially yet.
>
> Reiserfs worked fine because it doesn't have that kind of limit ...

We have plans to fix this already, but it is not high enough on anyone's
priority list quite yet (most filesystems have enough inodes for regular
usage).

Cheers, Andreas
--
Andreas Dilger
http://www-mddsp.enel.ucalgary.ca/People/adilger/
http://sourceforge.net/projects/ext2resize/

2002-10-03 00:32:40

by Theodore Ts'o

Subject: Re: [Ext2-devel] Re: [STUPID TESTCASE] ext3 htree vs. reiserfs on 2.5.40-mm1

On Wed, Oct 02, 2002 at 10:54:54AM -0600, Andreas Dilger wrote:
> On Oct 02, 2002 14:48 +0400, Paul P Komkoff Jr wrote:
> > Unfortunately, there is still one issue in ext3, called the "inode limit".
> > Initially I wanted to run this test on 1000000 files, but ... I hit the
> > inode limit and don't want to increase it artificially yet.
> >
> > Reiserfs worked fine because it doesn't have that kind of limit ...
>
> We have plans to fix this already, but it is not high enough on anyone's
> priority list quite yet (most filesystems have enough inodes for regular
> usage).

Just to be clear, the limit which Paul is referring to is simply a
matter of creating the filesystem with a sufficient number of inodes
(i.e., mke2fs -N 1200000). Yes, having a dynamic inode table would be
good, but in practice sysadmins know how many inodes are needed in
advance.

- Ted

2002-10-04 17:06:17

by Andreas Dilger

Subject: Re: [Ext2-devel] Re: [STUPID TESTCASE] ext3 htree vs. reiserfs on 2.5.40-mm1

On Oct 04, 2002 19:53 +0400, Oleg Drokin wrote:
> On Tue, Oct 01, 2002 at 02:43:30PM -0600, Andreas Dilger wrote:
> > As a result, if the size of the directory + inode table blocks is larger
> > than memory, and also larger than 1/4 of the journal, you are essentially
> > seek-bound because of random block dirtying.
> > You should see what the size of the directory is at its peak (probably
> > 16 bytes * 300k ~= 5MB), add in the size of the inode table blocks
> > (128 bytes * 300k ~= 38MB), and make the journal 4x as large as that,
> > so 192MB (mke2fs -j -J size=192), and re-run the test (I assume you have
> > at least 256MB+ of RAM on the test system).
>
> Hm. But all of that won't help if you need to read the inodes from disk first,
> right? (until that inode allocation in chunks is implemented, of course).

Ah, but see the follow-up reply - increasing the size of the journal as
advised improved the htree performance to 15% and 55% faster than
reiserfs for creates and deletes, respectively:

On Wed, 2 Oct 2002 14:48:59 +0400 Paul P Komkoff Jr replied:
> Thanks for the detailed explanation - it saved me a lot of time, and
> according to your directions I have re-run my test. Now ext3 is
> better :)
>
> real user cpu
> e3
> create 2m49.545s 0m4.162s 2m20.766s
> delete 2m8.155s 0m3.614s 1m34.945s
>
> reiser
> create 3m13.577s 0m4.338s 2m54.026s
> delete 4m39.249s 0m3.968s 4m16.297s
>
> e3
> create 2m50.766s 0m4.024s 2m21.197s
> delete 2m8.755s 0m3.501s 1m35.737s
>
> reiser
> create 3m13.015s 0m4.432s 2m53.412s
> delete 4m41.011s 0m3.893s 4m16.845s


On Oct 04, 2002 19:53 +0400, Oleg Drokin wrote some more:
> BTW, in the case of inode allocation in chunks attached to directory blocks,
> you won't get any benefit if the application creates a file in some
> temporary dir and then rename()s it to its proper place, or am I missing
> something?

No, you are correct. Renaming the files will randomly re-hash the names
and break any coherency between the directory leaf blocks and the inode
blocks. However, such files are often short-lived anyways (mail spools
and such), and for the normal case (e.g. untar of a file) the names are
constant, so there should be a benefit for smaller journals from this.

> > What is very interesting from the above results is that the CPU usage
> > is _much_ smaller for ext3+htree than for reiserfs. It looks like
>
> This is only in case of deletion, probably somehow related to constant item
> shifting when some of the items are deleted.

Well, even for creates it is 19% less CPU. The re-tested wall-clock
time for htree creates is now less than the CPU usage of reiserfs, so
it is impossible for reiserfs to achieve this number without
optimization of the code somehow. For deletes the cpu usage of htree
is 40% less, but we are currently not doing leaf block compaction, so
there would probably be a slight performance hit to merge blocks
(although we have some plans to do that efficiently also).

Cheers, Andreas
--
Andreas Dilger
http://www-mddsp.enel.ucalgary.ca/People/adilger/
http://sourceforge.net/projects/ext2resize/

2002-10-04 15:47:49

by Oleg Drokin

Subject: Re: [STUPID TESTCASE] ext3 htree vs. reiserfs on 2.5.40-mm1

Hello!

On Tue, Oct 01, 2002 at 02:43:30PM -0600, Andreas Dilger wrote:

> As a result, if the size of the directory + inode table blocks is larger
> than memory, and also larger than 1/4 of the journal, you are essentially
> seek-bound because of random block dirtying.
> You should see what the size of the directory is at its peak (probably
> 16 bytes * 300k ~= 5MB), add in the size of the inode table blocks
> (128 bytes * 300k ~= 38MB), and make the journal 4x as large as that,
> so 192MB (mke2fs -j -J size=192), and re-run the test (I assume you have
> at least 256MB+ of RAM on the test system).

Hm. But all of that won't help if you need to read the inodes from disk first,
right? (until that inode allocation in chunks is implemented, of course).

BTW, in the case of inode allocation in chunks attached to directory blocks,
you won't get any benefit if the application creates a file in some
temporary dir and then rename()s it to its proper place, or am I missing
something?

> What is very interesting from the above results is that the CPU usage
> is _much_ smaller for ext3+htree than for reiserfs. It looks like

This is only in case of deletion, probably somehow related to constant item
shifting when some of the items are deleted.

Bye,
Oleg

2002-10-07 06:49:18

by Oleg Drokin

Subject: Re: [Ext2-devel] Re: [STUPID TESTCASE] ext3 htree vs. reiserfs on 2.5.40-mm1

Hello!

On Fri, Oct 04, 2002 at 11:09:35AM -0600, Andreas Dilger wrote:
> > > As a result, if the size of the directory + inode table blocks is larger
> > > than memory, and also larger than 1/4 of the journal, you are essentially
> > > seek-bound because of random block dirtying.
> > > You should see what the size of the directory is at its peak (probably
> > > 16 bytes * 300k ~= 5MB), add in the size of the inode table blocks
> > > (128 bytes * 300k ~= 38MB), and make the journal 4x as large as that,
> > > so 192MB (mke2fs -j -J size=192), and re-run the test (I assume you have
> > > at least 256MB+ of RAM on the test system).
> > Hm. But all of that won't help if you need to read the inodes from disk first,
> > right? (until that inode allocation in chunks is implemented, of course).
> Ah, but see the follow-up reply - increasing the size of the journal as
> advised improved the htree performance to 15% and 55% faster than
> reiserfs for creates and deletes, respectively:

Yes, but that was the case with warm caches, as I understand it.
Usually you cannot count on all the inodes of a large file set already
being in memory so that they do not need to be read.

> > > What is very interesting from the above results is that the CPU usage
> > > is _much_ smaller for ext3+htree than for reiserfs. It looks like
> > This is only in case of deletion, probably somehow related to constant item
> > shifting when some of the items are deleted.
> Well, even for creates it is 19% less CPU. The re-tested wall-clock

I'm afraid other parts of the code might have contributed there,
like setting s_dirt far more often than needed.

Bye,
Oleg

2002-10-10 00:20:50

by Daniel Phillips

Subject: Re: [Ext2-devel] Re: [STUPID TESTCASE] ext3 htree vs. reiserfs on 2.5.40-mm1

On Friday 04 October 2002 19:09, Andreas Dilger wrote:
> On Oct 04, 2002 19:53 +0400, Oleg Drokin wrote:
> > On Tue, Oct 01, 2002 at 02:43:30PM -0600, Andreas Dilger wrote:
> > > What is very interesting from the above results is that the CPU usage
> > > is _much_ smaller for ext3+htree than for reiserfs. It looks like
> >
> > This is only in case of deletion, probably somehow related to constant item
> > shifting when some of the items are deleted.
>
> Well, even for creates it is 19% less CPU. The re-tested wall-clock
> time for htree creates is now less than the CPU usage of reiserfs, so
> it is impossible for reiserfs to achieve this number without
> optimization of the code somehow. For deletes the cpu usage of htree
> is 40% less, but we are currently not doing leaf block compaction, so
> there would probably be a slight performance hit to merge blocks
> (although we have some plans to do that efficiently also).

I convinced myself at some point that compaction will cost no more
than a couple of percent for deletes and nothing for creates.

--
Daniel