2002-10-08 17:04:06

by Andrew Morton

Subject: Re: The reason to call it 3.0 is the desktop (was Re: [OT] 2.6 not 3.0 - (NUMA))

[email protected] wrote:
>
> On Mon, Oct 07, 2002 at 07:50:27PM -0700, Andrew Morton wrote:
>
> > I have the core code for ext3. It's at
> > http://www.zip.com.au/~akpm/linux/patches/2.4/2.4.19-pre10/ext3-reloc-page.patch
> > I never tested it, but that's a formality ;)
> >
> > It offers a simple ioctl to relocate a single page's worth of blocks.
> > It's fully journalled and recoverable, pagecache coherent, etc.
> > But the userspace application which calls that ioctl hasn't been
> > written.
>
> Hi Andrew,
> I decided not to let the fact that I have never written any FS code
> stand in the way of making suggestions :-) :-)
> Do you think it would be better to make the defragmentation part of
> the normal operation of the FS rather than a separate application? For
> example, if you did a fragmentation check/fix on the last close of a file
> you would know that coherency issues were not going to be important. It
> might also give you some way to determine which files were important to
> keep close together.
>

Well the initial approach was to put the minimum functionality
in-kernel and drive it all from userspace. If that proved to
be inadequate then the kernel-side might need to be grown.
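To illustrate, here is a minimal sketch of what that userspace side might
look like. The ioctl number and argument layout below are made up for the
example; the real interface is whatever the ext3-reloc-page patch defines.

/*
 * Hypothetical sketch only -- the actual ioctl command and argument
 * struct come from the ext3-reloc-page patch and are not reproduced
 * here.  Assumes an ioctl taking the logical page index to relocate
 * and a preferred destination ("goal") block.
 */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/ioctl.h>
#include <unistd.h>

struct reloc_page_arg {			/* assumed layout, not from the patch */
	unsigned long pg_index;		/* logical page within the file */
	unsigned long goal_block;	/* preferred destination block */
};

/* made-up command number for illustration */
#define EXT3_IOC_RELOC_PAGE	_IOW('f', 42, struct reloc_page_arg)

int main(int argc, char **argv)
{
	struct reloc_page_arg arg;
	int fd;

	if (argc != 4) {
		fprintf(stderr, "usage: %s file page goal-block\n", argv[0]);
		return 1;
	}

	fd = open(argv[1], O_RDONLY);
	if (fd < 0) {
		perror("open");
		return 1;
	}

	arg.pg_index = strtoul(argv[2], NULL, 0);
	arg.goal_block = strtoul(argv[3], NULL, 0);

	/*
	 * The kernel side journals the move and keeps the pagecache
	 * coherent; userspace only decides which page goes where.
	 */
	if (ioctl(fd, EXT3_IOC_RELOC_PAGE, &arg) < 0)
		perror("ioctl");

	close(fd);
	return 0;
}

The point is that all the policy (which page, which goal block) stays in
userspace, and the kernel just performs the journalled move.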

I'd expect that a defrag would be a batch process which is done
during quiet times. Although one _could_ have a `defragd' which
ticks along all the time I suppose.

A defragmentation algorithm probably would not be a "per file" thing;
it would need to gather a fair amount of state about the fs, or
at least an individual block group before starting to shuffle things.
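As a rough sketch of the kind of state-gathering meant here (illustrative
only, not part of the patch): one could walk each block group's bitmap
with libext2fs and count free extents as a crude fragmentation measure
before deciding what to shuffle.

/*
 * Sketch, assuming libext2fs: count free extents per block group as a
 * crude measure of free-space fragmentation.
 */
#include <stdio.h>
#include <ext2fs/ext2fs.h>

int main(int argc, char **argv)
{
	ext2_filsys fs;
	errcode_t err;
	dgrp_t group;

	if (argc != 2) {
		fprintf(stderr, "usage: %s device\n", argv[0]);
		return 1;
	}

	err = ext2fs_open(argv[1], 0, 0, 0, unix_io_manager, &fs);
	if (err) {
		fprintf(stderr, "cannot open %s\n", argv[1]);
		return 1;
	}

	err = ext2fs_read_block_bitmap(fs);
	if (err) {
		fprintf(stderr, "cannot read block bitmap\n");
		ext2fs_close(fs);
		return 1;
	}

	for (group = 0; group < fs->group_desc_count; group++) {
		blk_t first = fs->super->s_first_data_block +
			      group * fs->super->s_blocks_per_group;
		blk_t last = first + fs->super->s_blocks_per_group;
		blk_t blk;
		int in_free_run = 0;
		unsigned long free_extents = 0;

		if (last > fs->super->s_blocks_count)
			last = fs->super->s_blocks_count;

		for (blk = first; blk < last; blk++) {
			if (!ext2fs_test_block_bitmap(fs->block_map, blk)) {
				if (!in_free_run)
					free_extents++;
				in_free_run = 1;
			} else {
				in_free_run = 0;
			}
		}
		printf("group %u: %lu free extents\n",
		       (unsigned) group, free_extents);
	}

	ext2fs_close(fs);
	return 0;
}

A real defragmenter would presumably also want per-file extent maps
(e.g. via FIBMAP) before deciding what to move where.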


2002-10-10 20:51:26

by Thomas Zimmerman

Subject: Re: The reason to call it 3.0 is the desktop (was Re: [OT] 2.6 not 3.0 - (NUMA))

On Tue, 08 Oct 2002 10:09:38 -0700
Andrew Morton <[email protected]> wrote:
[snip]
> Well the initial approach was to put the minimum functionality
> in-kernel and drive it all from userspace. If that proved to
> be inadequate then the kernel-side might need to be grown.
>
> I'd expect that a defrag would be a batch process which is done
> during quiet times. Although one _could_ have a `defragd' which
> ticks along all the time I suppose.
>
> A defragmentation algorithm probably would not be a "per file" thing;
> it would need to gather a fair amount of state about the fs, or
> at least an individual block group before starting to shuffle things.

I seem to remember a "drive optimizer" on an old SE Mac. It would move
files and dirs about so that commonly used files all sat together. It
would run in the background too... after the disk was idle for about 5
minutes (configurable, iirc) it would go to work moving things about. It
really helped, as programs and used libs usually all sat in nice
self-contained directories. I wonder if load times could be significantly
reduced by having libraries/programs fault in w/o all the seeking that
goes on at X load; as a first test, I guess, I'll have to see if
"prefaulting" all the X/kde dependencies helps much.

Thomas "all lurk, no code" Zimmerman


