ftp://ftp.namesys.com/pub/reiser4-for-2.6/2.6.17-rc4-mm1/reiser4-for-2.6.17-rc4-mm1-2.patch.gz
The referenced patch replaces all reiser4 patches in -mm. It revises the
existing reiser4 code to handle writes larger than 4k efficiently, by
assiduously adhering to the principle that things that need to be done
once per write should be done once per write, not once per 4k. That
statement is a slight simplification: because RAM is limited, there are
times when you want to do some things once per WRITE_GRANULARITY, where
WRITE_GRANULARITY is a #define specifying a moderate number of pages to
write at once. This code empirically demonstrates that the generic code
design, which passes 4k at a time to the underlying FS, can be improved.
Performance results show that the new code consumes 40% less CPU when
doing "dd bs=1MB ....." (the result may vary with your hardware and with
whether the data is in cache). Note that this has only a small effect
on elapsed time for most hardware.
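
For illustration, here is a minimal toy sketch of the idea (this is not
the reiser4 code; the helper names and the WRITE_GRANULARITY value below
are made up). Per-write work is paid once, per-batch work once per
WRITE_GRANULARITY pages, and only the copy itself is paid per page:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define PAGE_SIZE         4096
#define WRITE_GRANULARITY 32   /* illustrative value: pages handled per batch */

static unsigned long per_write_work, per_batch_work, per_page_work;

/* stand-ins for real once-per-write work (e.g. locking, argument checks) */
static void begin_write(void) { per_write_work++; }
static void end_write(void)   { per_write_work++; }

/* stand-in for real once-per-batch work bounded by available RAM
 * (e.g. reserving space for WRITE_GRANULARITY pages at a time) */
static void reserve_batch(size_t pages) { (void)pages; per_batch_work++; }

static size_t toy_write(char *dst, const char *src, size_t len)
{
	size_t done = 0;

	begin_write();                          /* once per write, not once per 4k */
	while (done < len) {
		size_t batch = (size_t)WRITE_GRANULARITY * PAGE_SIZE;

		if (batch > len - done)
			batch = len - done;
		reserve_batch((batch + PAGE_SIZE - 1) / PAGE_SIZE);  /* once per batch */

		for (size_t off = 0; off < batch; off += PAGE_SIZE) {
			size_t chunk = batch - off < PAGE_SIZE ? batch - off : PAGE_SIZE;

			memcpy(dst + done + off, src + done + off, chunk);
			per_page_work++;        /* only the copy is per page */
		}
		done += batch;
	}
	end_write();                            /* once per write */
	return done;
}

int main(void)
{
	size_t len = 64UL * 1024 * 1024;        /* one 64MB write */
	char *src = calloc(1, len), *dst = calloc(1, len);

	if (!src || !dst)
		return 1;
	toy_write(dst, src, len);
	printf("per-write: %lu  per-batch: %lu  per-page: %lu\n",
	       per_write_work, per_batch_work, per_page_work);
	free(src);
	free(dst);
	return 0;
}

For a 64MB write this toy reports 2 per-write operations, 512 per-batch
operations, and 16384 per-page copies, which is the kind of split the
principle above is after.
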
The planned future (as discussed with akpm previously): very soon we
will ship (we are testing it now) improved reiser4 read code that does
reads in chunks larger than 4k. Then we will revise the generic code to
allow an FS to receive writes and reads in whole increments. How best
to revise the generic code is still being discussed. Nate is exploring
a way to do it that also improves code symmetry in the I/O scheduler
layer; if others are interested, a thread can start on that topic, or
it can wait for him and zam to produce a patch.
Note for users: this patch also contains numerous important bug fixes.
Hi!
I'm actively using Reiser4 on production servers (and I know a lot of
people who do too).
Could you please release the patch against the vanilla tree?
I don't think many people will test the -mm version, especially on
production servers; -mm is a bit too unstable.
Thanks.
On 5/24/06, Hans Reiser <[email protected]> wrote:
> ftp://ftp.namesys.com/pub/reiser4-for-2.6/2.6.17-rc4-mm1/reiser4-for-2.6.17-rc4-mm1-2.patch.gz
>
> The referenced patch replaces all reiser4 patches in -mm. It revises the
> existing reiser4 code to handle writes larger than 4k efficiently, by
> assiduously adhering to the principle that things that need to be done
> once per write should be done once per write, not once per 4k. That
> statement is a slight simplification: because RAM is limited, there are
> times when you want to do some things once per WRITE_GRANULARITY, where
> WRITE_GRANULARITY is a #define specifying a moderate number of pages to
> write at once. This code empirically demonstrates that the generic code
> design, which passes 4k at a time to the underlying FS, can be improved.
> Performance results show that the new code consumes 40% less CPU when
> doing "dd bs=1MB ....." (the result may vary with your hardware and with
> whether the data is in cache). Note that this has only a small effect
> on elapsed time for most hardware.
>
> The planned future (as discussed with akpm previously): very soon we
> will ship (we are testing it now) improved reiser4 read code that does
> reads in chunks larger than 4k. Then we will revise the generic code to
> allow an FS to receive writes and reads in whole increments. How best
> to revise the generic code is still being discussed. Nate is exploring
> a way to do it that also improves code symmetry in the I/O scheduler
> layer; if others are interested, a thread can start on that topic, or
> it can wait for him and zam to produce a patch.
>
> Note for users: this patch also contains numerous important bug fixes.
>
--
Alexey Polyakov
Hi Hans,
On 23/05/06, Alexey Polyakov <[email protected]> wrote:
> Hi!
>
> I'm actively using Reiser4 on production servers (and I know a lot of
> people who do too).
> Could you please release the patch against the vanilla tree?
> I don't think many people will test the -mm version, especially on
> production servers; -mm is a bit too unstable.
Any chance to get this patch against 2.6.17-rc4-mm3?
Regards,
Michal
--
Michal K. K. Piotrowski
LTG - Linux Testers Group
(http://www.stardust.webpages.pl/ltg/wiki/)
Hello
On Tue, 2006-05-23 at 22:33 +0200, Michal Piotrowski wrote:
> Hi Hans,
>
> On 23/05/06, Alexey Polyakov <[email protected]> wrote:
> > Hi!
> >
> > I'm actively using Reiser4 on production servers (and I know a lot of
> > people who do too).
> > Could you please release the patch against the vanilla tree?
> > I don't think many people will test the -mm version, especially on
> > production servers; -mm is a bit too unstable.
>
> Any chance to get this patch against 2.6.17-rc4-mm3?
>
Yes, reiser4 updates for the latest stock and -mm kernels will be out in
one or two days.
> Regards,
> Michal
>
On Tue, May 23, 2006 at 01:14:54PM -0700, Hans Reiser wrote:
> underlying FS, can be improved. Performance results show that the new
> code consumes 40% less CPU when doing "dd bs=1MB ....." (the result may
> vary with your hardware and with whether the data is in cache). Note
> that this has only a small effect on elapsed time for most hardware.
Are write requests in Linux restricted to one page?
--
Tom Vier <[email protected]>
DSA Key ID 0x15741ECE
Tom Vier wrote:
> On Tue, May 23, 2006 at 01:14:54PM -0700, Hans Reiser wrote:
>> underlying FS, can be improved. Performance results show that the new
>> code consumes 40% less CPU when doing "dd bs=1MB ....." (the result may
>> vary with your hardware and with whether the data is in cache). Note
>> that this has only a small effect on elapsed time for most hardware.
>
> Are write requests in Linux restricted to one page?
It may go to the kernel as a 64MB write, but VFS sends it to the FS as
64MB/4k separate 4k writes.
I should add that you execute a lot more than 4k worth of instructions
for each of these 4k writes, so performance is non-optimal. This is why
bios exist in the kernel: the I/O layer has a similar problem when you
send it only 4k at a time (it executes a lot more than 4k worth of
instructions per I/O submission). The way the I/O layer handles bios is
not as clean as it could be, though; Nate can say more on that.
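
To make that concrete, here is a rough, self-contained toy (a simplified
sketch of the shape of the generic buffered write loop, not the actual
kernel code; fs_prepare_page/fs_commit_page are stand-ins for the
per-page hooks an FS provides):

#include <stdio.h>

#define PAGE_SIZE 4096

static unsigned long fs_entries;

/* stand-ins for the per-page hooks the generic path calls into the FS;
 * only the shape of the loop matters here */
static void fs_prepare_page(void) { fs_entries++; }  /* per-page setup      */
static void fs_commit_page(void)  { }                /* per-page completion */

/* rough shape of a generic buffered write: the FS is entered per page */
static void generic_write_shape(size_t count)
{
	while (count) {
		size_t bytes = count < PAGE_SIZE ? count : PAGE_SIZE;

		fs_prepare_page();
		/* ... at most one page of data is copied here ... */
		fs_commit_page();

		count -= bytes;
	}
}

int main(void)
{
	generic_write_shape(64UL * 1024 * 1024);  /* the 64MB write above */
	printf("the FS was entered for %lu separate 4k pieces\n", fs_entries);
	return 0;
}

A 64MB write means 16384 separate 4k pieces handed to the filesystem,
and every instruction of fixed overhead in those per-page hooks is
multiplied by that count.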