As a budding kernel hacker looking to cut my teeth, I've become curious about
what types of setups people hack the kernel with. I am very interested in
descriptions of the computers you hack the kernel with and their use patterns.
I was thinking of starting with a modern machine for developing/compiling on,
and then older machine(s) for testing. This way I would not risk losing data
if I oops or somesuch. Alternatively, is there a common practice of using lilo
to create development and testing kernel command lines? Is this a useful
thing to do, or is it too much of a brain drain to switch between hacking and
testing mindsets?
Instead of having separate machines, there is the possibility of using the
Usermode port. As I understand it, this lags behind the -ac and Linus kernels,
so it would be hard to test things like the new VMs. Usermode would not be
suitable for driver development either. Again, thoughts on this mode of
development?
Which brings me to the final question. Is there any reason to choose
architecture A over architecture B besides arch-specific development in
the kernel or for device drivers?
AKK
--
Adam K. Keys
<[email protected]> (Remove the HOTARD to email me)
On Fri, 2001-10-05 at 00:20, Adam Keys wrote:
> As a budding kernel hacker looking to cut my teeth, I've become curious about
> what types of setups people hack the kernel with. I am very interested in
> descriptions of the computers you hack the kernel with and their use patterns.
Here's what each developer was equipped with at my former place of
employment, back when they had money and all:
Two x86 machines, one workstation and one "blow up box".
Console on serial port, minicom logging to a file.
/usr/src on the "blow up box" NFS-mounted from the workstation over 100MBit ethernet.
Used kdb sometimes.
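From memory, the relevant bits looked something like the following; the
device names, speed and paths varied, so treat these as examples only:

    # lilo.conf on the "blow up box": send the console out the first
    # serial port so the workstation can capture oopses
    append="console=ttyS0,9600"

    # on the workstation: talk to the serial line and log everything
    minicom -C /var/log/blowupbox.log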
[email protected] said:
> I was thinking of starting with a modern machine for developing/
> compiling on, and then older machine(s) for testing. This way I
> would not risk losing data if I oops or somesuch.
With journalling filesystems you needn't worry _too_ much about losing
data; depending of course on what you're hacking on. Having two separate
boxen for development and testing is mostly valuable because you can keep
working when you break it - it doesn't take your entire desktop environment
down with it.
[email protected] said:
> Which brings me to the final question. Is there any reason to choose
> architecture A over architecture B besides arch-specific development
> in the kernel or for device drivers?
If you're developing device drivers and have the choice, pick something
esoteric to enforce good behaviour. Something which does out-of-order
stores, has non-cache-coherent DMA, is big-endian and preferably 64-bit. I
think both mips64 and sparc64 boards can meet all those criteria - if not,
get as close as you can.
--
dwmw2
> I was thinking of starting with a modern machine for developing/compiling on,
> and then older machine(s) for testing. This way I would not risk losing data
That is how I work: one box for editing/building and one for testing. It also
allows you to stare at dumps, oopses and the like, as well as the source, at
the same time.
> Instead of having separate machines, there is the possibility of using the
> Usermode port. As I understand it, this lags behind the -ac and Linus kernels,
> so it would be hard to test things like the new VMs. Usermode would not be
Usermode Linux is merged with the -ac tree - it is great if you want to do
anything other than device driver or hardware work.
[email protected] said:
> Instead of having separate machines, there is the possibility of
> using the Usermode port. As I understand it, this lags behind the -ac
> and Linus kernels, so it would be hard to test things like the new
> VMs.
Not really. The latest UML is sometimes pretty far ahead of what's in the
-ac tree, but it usually works fine. So, if you're interested in the generic
kernel, and not UML itself, that shouldn't be a problem. And currently,
the -ac tree is pretty close to my CVS.
Also, the latest patches usually go pretty cleanly into the -linus pre kernels,
so getting those running in UML shouldn't be hard.
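If you want to play with it, building and booting a UML kernel goes roughly
like this (root_fs being whatever root filesystem image you're using, and
the memory size arbitrary):

    make menuconfig ARCH=um
    make linux ARCH=um
    ./linux ubd0=root_fs mem=64M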
> Usermode would not be suitable for driver development either.
This is just because no one has written the code to do it. It is perfectly
possible to do hardware device driver development in UML. Various USB people
have started trying to do USB driver development under UML, for example.
Jeff
> I was thinking of starting with a modern machine for developing/compiling on,
> and then older machine(s) for testing. This way I would not risk losing data
> if I oops or somesuch. Alternatively, is there a common practice of using lilo
> to create development and testing kernel command lines? Is this a useful
> thing to do, or is it too much of a brain drain to switch between hacking and
> testing mindsets?
I like the two box strategy and have written a detailed description of
how to set it up (right down to the wiring diagram for the serial
cables):
http://www.kernelhacking.org/docs/2boxdebugging.txt
This will become part of the forthcoming kernelhacking-HOWTO...
Feedback on this document would be very much appreciated from anyone :)
happy hacking,
Andy
Hi Adam.
> As a budding kernel hacker looking to cut my teeth, I've become
> curious about what types of setups people hack the kernel with.
> I am very interested in descriptions of the computers you hack
> the kernel with and their use patterns.
Here's the collection I use...
1. 386sx/16 with 8M of RAM running RedHat 6.0, as none of the later
RedHat releases will install on it - they all need >8M of RAM to install.
This serves as my network print server.
2. 386sx/25 with 387sx/25 and 8M of RAM running RedHat 6.1, as none
of the later RedHat releases will install on it, as stated above. It is
noticeable that the presence of a 387 maths copro allowed 6.1 to
install where it wouldn't otherwise.
3. 486sx/25 with 12M of RAM running RedHat 6.2, as none of the
RedHat 7.x releases will install, all needing >12M of RAM to install.
4. 486dx2/66 with 16M of RAM running RedHat 6.2 and serving as my
network dial-up server. It's stable as it currently stands, so
is unlikely to be upgraded anytime soon.
5. 486dx4/120 with 32M of RAM running RedHat 6.2 as RedHat 7.x runs
out of hard disk space - it only has a 350M hard drive in it at
the moment.
6. P75 with 32M of RAM running Win95 so I can check that the Linux
systems I set up for customers will correctly interact with any
Win9x systems they may have, and also used to run the software
I need to run that's only available for Win9x.
7. P166 with 96M of RAM awaiting a new hard drive (the existing one
self-destructed a week or so ago). Once the new hard drive is
obtained, I'll be installing RedHat 7.1 on it.
Depending on what else I'm doing at the time, I can use any of the above
to "hack" the kernel, including the Win95 machine if everything else is
busy. I generally use (3) to compile the results on.
> I was thinking of starting with a modern machine for developing and
> compiling on, and then older machine(s) for testing. This way I
> would not risk losing data if I oops or somesuch.
> Alternatively, is there a common practice of using lilo to create
> development and testing kernel command lines?
I have a lilo entry that reads as follows:
image=/usr/src/linux/arch/i386/boot/bzImage
    label=tryme
    alias=develop
I also have a script set up for only root to run that reads...
#!/bin/bash
lilo && lilo -D develop && reboot
...which I run to try the kernel out.
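A variation worth knowing about (check your lilo man page, as I'm going
from memory here): lilo -R arms the given label for the next reboot only,
so if the test kernel wedges, a reset drops you straight back into your
stable default:

#!/bin/bash
lilo && lilo -R tryme && reboot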
> Is this a useful thing to do, or is it too much of a brain drain to
> switch between hacking and testing mindsets?
That depends on how you set your system up.
> Instead of having separate machines, there is the possibility of
> using the Usermode port. As I understand it, this lags behind the -ac
> and Linus kernels, so it would be hard to test things like the new
> VMs. Usermode would not be suitable for driver development either.
> Again, thoughts on this mode of development?
I've never tried it, and have no plans to do so.
> Which brings me to the final question. Is there any reason to
> choose architecture A over architecture B besides arch-specific
> development in the kernel or for device drivers?
Not that I'm aware of.
Best wishes from Riley.
Andrew Ebling wrote:
> Feedback on this document would be very much appreciated from anyone :)
The only thing I'd add is some pointers to setting up the target box
with NFS root. In my setup for driver development both my x86 and ppc
target boxes are diskless. The x86 boots using etherboot on a floppy,
and the ppc has network booting in the rom. I just compile a new kernel
on the development box, copy it into my /tftpboot directory, and hit the
reset button on the target. No mess, no fuss, no fsck.
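For reference, the target-side boot arguments and the export look something
like this (the paths, hostname and address are invented for the example;
the target kernel also needs root-NFS and IP autoconfiguration support):

    # kernel command line handed over by etherboot / the boot ROM
    root=/dev/nfs nfsroot=192.168.1.1:/tftpboot/target-root ip=dhcp

    # /etc/exports on the development box
    /tftpboot/target-root  target(rw,no_root_squash)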
--
Adrian Cox http://www.humboldt.co.uk/
Hi!
> > I was thinking of starting with a modern machine for developing/
> > compiling on, and then older machine(s) for testing. This way I
> > would not risk losing data if I oops or somesuch.
>
> With journalling filesystems you needn't worry _too_ much about losing
> data; depending of course on what you're hacking on. Having two separate
> boxen for development and testing is mostly valuable because you can keep
> working when you break it - it doesn't take your entire desktop environment
> down with it.
I disagree. With a journalling filesystem, when something is silently
corrupting your disk, you'll never know. With ext2, you sometimes sync &
reset to make sure your disks are still healthy. I would not recommend
journalling on experimental boxes.
Pavel
--
Philips Velo 1: 1"x4"x8", 300gram, 60, 12MB, 40bogomips, linux, mutt,
details at http://atrey.karlin.mff.cuni.cz/~pavel/velo/index.html.
>
> I disagree. With a journalling filesystem, when something is silently
> corrupting your disk, you'll never know. With ext2, you sometimes sync &
> reset to make sure your disks are still healthy. I would not recommend
> journalling on experimental boxes.
> Pavel
On the other hand, I have found that the main problem with using XFS on
development platforms is that you do not test the kernel shutdown code very
much. It is much faster to just reset the box than to do a shutdown, and it
makes no difference when you bring it back up.
Steve
> I was thinking of starting with a modern machine for developing/compiling on,
> and then older machine(s) for testing. This way I would not risk losing data
I use the 'fast/slow' model for app server development. The fastest
machine is used to build kernels for the slower test machine(s)
regardless of architecture or latest/greatest hardware. Most results
can be scaled once you understand the interactions. NFS with no_root_squash
is useful, provided you have a secure LAN.
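e.g. an /etc/exports line on the build box (the hostname is invented for
the example):

    /usr/src   testbox(rw,no_root_squash)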
rgds,
tim.
--
Pavel writes:
> I disagree. With a journalling filesystem, when something is silently
> corrupting your disk, you'll never know. With ext2, you sometimes sync &
> reset to make sure your disks are still healthy. I would not recommend
> journalling on experimental boxes.
I would say just the opposite with ext3 - I prefer to use it on my development
boxes.
1) No fsck time (normally) after crashing, which can happen a lot.
2) You can set ext3 to fsck automatically after a fixed number of reboots/time
if you are worried about a bad disk/cable/kernel.
Most versions of mke2fs that support ext3 (1.20+ or so) will set the check
interval to 20 + rand(20) mounts per check by default, or 6 months. Since
this is a random value, you don't get all of your filesystems checked at the
same time, but it happens at least once in a while to ensure that each fs is
OK.
Of course, you can turn it off if you want, but the option is there to do
periodic checks. The time interval is probably still a good idea, even if
you turn off the per-N-mount checking, because of bit rot, etc.
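Both knobs can be set with tune2fs; something like this (the device and the
exact values are just examples):

    # check every 25 mounts, or at least every 6 months, whichever comes first
    tune2fs -c 25 -i 6m /dev/hda1

    # disable the per-mount check but keep the 6-month one
    tune2fs -c 0 -i 6m /dev/hda1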
Cheers, Andreas
--
Andreas Dilger \ "If a man ate a pound of pasta and a pound of antipasto,
\ would they cancel out, leaving him still hungry?"
http://www-mddsp.enel.ucalgary.ca/People/adilger/ -- Dogbert