2000-11-05 23:11:05

by Sushil Agarwal

Subject: rdtsc to mili secs?

Hi,
According to the Intel Architecture Instruction Set Reference, the
resolution of the "rdtsc" instruction is one clock cycle. How
do I convert this to milliseconds?

Thanks,
Sushil.


2000-11-06 00:10:33

by Andrea Arcangeli

Subject: Re: rdtsc to mili secs?

On Mon, Nov 06, 2000 at 04:39:23AM +0530, Sushil Agarwal wrote:
> Hi,
> According to the Intel Architecture Instruction Set Reference, the
> resolution of the "rdtsc" instruction is one clock cycle. How
> do I convert this to milliseconds?

fast_gettimeoffset_quotient, see do_fast_gettimeoffset().
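
For reference, a minimal userland sketch of the conversion (an illustration
only, not kernel code): it assumes a constant TSC rate and calibrates against
gettimeofday() instead of using the kernel's precomputed quotient.

#include <stdio.h>
#include <sys/time.h>

static inline unsigned long long rdtsc(void)
{
        unsigned int lo, hi;
        __asm__ __volatile__("rdtsc" : "=a" (lo), "=d" (hi));
        return ((unsigned long long)hi << 32) | lo;
}

int main(void)
{
        struct timeval tv0, tv1;
        unsigned long long t0, t1, cycles_per_ms;

        /* Calibrate: count TSC cycles over roughly 100 ms of wall time. */
        gettimeofday(&tv0, NULL);
        t0 = rdtsc();
        do {
                gettimeofday(&tv1, NULL);
        } while ((tv1.tv_sec - tv0.tv_sec) * 1000000 +
                 (tv1.tv_usec - tv0.tv_usec) < 100000);
        t1 = rdtsc();
        cycles_per_ms = (t1 - t0) / 100;

        /* Any later TSC delta divided by cycles_per_ms gives milliseconds. */
        t0 = rdtsc();
        /* ... code being timed ... */
        t1 = rdtsc();
        printf("elapsed: %llu ms\n", (t1 - t0) / cycles_per_ms);
        return 0;
}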

Andrea

2000-11-06 00:27:44

by Alan

Subject: Re: rdtsc to mili secs?

> On Mon, Nov 06, 2000 at 04:39:23AM +0530, Sushil Agarwal wrote:
> > Hi,
> > According to the Intel Architecture Instruction Set Reference, the
> > resolution of the "rdtsc" instruction is one clock cycle. How
> > do I convert this to milliseconds?
>
> fast_gettimeoffset_quotient, see do_fast_gettimeoffset().

Also remember that the TSC may not be available due to the chip era, chip bugs
or running SMP with non matched CPU clocks.
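
A quick sanity check before relying on RDTSC at all is the TSC bit in the
CPUID feature flags (leaf 1, EDX bit 4); the same information shows up as the
"tsc" flag in /proc/cpuinfo. A rough userland sketch, assuming the CPU
supports CPUID in the first place:

#include <stdio.h>

int main(void)
{
        unsigned int eax, ebx, ecx, edx;

        /* CPUID leaf 1 returns the feature flags in EDX; bit 4 is TSC. */
        __asm__ __volatile__("cpuid"
                             : "=a" (eax), "=b" (ebx), "=c" (ecx), "=d" (edx)
                             : "a" (1));
        printf("TSC %s\n", (edx & (1 << 4)) ? "present" : "not present");
        return 0;
}

Note that this only says the instruction exists; it says nothing about
matched clocks on SMP or about chip bugs.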

Alan

2000-11-06 00:35:13

by Andrea Arcangeli

Subject: Re: rdtsc to mili secs?

On Mon, Nov 06, 2000 at 12:28:00AM +0000, Alan Cox wrote:
> or running SMP with non matched CPU clocks.

In this last case I guess he will have more problems than not being able to
convert from cpu-clock to usec 8). Scheduler and gettimeofday will do the wrong
thing in that case (scheduler both for bogus avg_slice and fairness).

Andrea

2000-11-06 00:45:47

by Alan

Subject: Re: rdtsc to mili secs?

> On Mon, Nov 06, 2000 at 12:28:00AM +0000, Alan Cox wrote:
> > or running SMP with non matched CPU clocks.
>
> In this last case I guess he will have more problems than not being able to
> convert from cpu-clock to usec 8). Scheduler and gettimeofday will do the wrong
> thing in that case (scheduler both for bogus avg_slice and fairness).

2.2 handles this case correctly. Ok well at least without bad things happening.

Alan

2000-11-06 17:17:54

by Anton Blanchard

Subject: Re: rdtsc to mili secs?


> > fast_gettimeoffset_quotient, see do_fast_gettimeoffset().
>
> Also remember that the TSC may not be available due to the chip era, chip
> bugs or running SMP with non matched CPU clocks.

When I boot my thinkpad 600e off battery and then change to AC power,
gettimeofday has a nasty habit of going backwards. Stephen Rothwell
tells me it is one of these machines in which the cycle counter
slows down when power is removed.

This means our offset calculations in do_fast_gettimeoffset are way off
and taking a reading just before a timer tick and just after results in
a negative interval. Perhaps we should disable TSC-based gettimeofday
for this type of machine.
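
One defensive userland workaround for a gettimeofday() that can step
backwards (purely a sketch of a user-space band-aid, not a kernel fix) is to
clamp each reading against the previous one:

#include <sys/time.h>

/* Not thread-safe; illustrative only. */
void monotonic_gettimeofday(struct timeval *tv)
{
        static struct timeval last;

        gettimeofday(tv, NULL);
        if (tv->tv_sec < last.tv_sec ||
            (tv->tv_sec == last.tv_sec && tv->tv_usec < last.tv_usec))
                *tv = last;     /* never report an earlier time */
        else
                last = *tv;
}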

Anton

2000-11-06 17:27:25

by Alan

Subject: Re: rdtsc to mili secs?

> This means our offset calculations in do_fast_gettimeoffset are way off
> and taking a reading just before a timer tick and just after results in
> a negative interval. Perhaps we should disable tsc based gettimeofday
> for these type of machines.

I seem to remember we have a 'notsc' option. Figuring out which boxes are
infected with the problem may be trickier. We really need to be able to
read the current CPU clock rate off whatever generates the clocks when we
do a udelay.

2000-11-07 05:01:11

by Antony Suter

Subject: Re: rdtsc to mili secs?

Anton Blanchard wrote:
>
> > > fast_gettimeoffset_quotient, see do_fast_gettimeoffset().
> >
> > Also remember that the TSC may not be available due to the chip era, chip
> > bugs or running SMP with non matched CPU clocks.
>
> When I boot my thinkpad 600e off battery and then change to AC power,
> gettimeofday has a nasty habit of going backwards. Stephen Rothwell
> tells me it is one of these machines in which the cycle counter
> slows down when power is removed.
>
> This means our offset calculations in do_fast_gettimeoffset are way off
> and taking a reading just before a timer tick and just after results in
> a negative interval. Perhaps we should disable tsc based gettimeofday
> for these type of machines.

This issue, and all related issues, need to be taken care of for all
speed-changing CPUs from Intel, AMD and Transmeta. Is the answer to "how to
write userland programs correctly with a speed-changing CPU" in a FAQ
somewhere?

--
- Antony Suter ([email protected]) "Examiner" openpgp:71ADFC87
- "And how do you store the nuclear equivalent of the universal
- solvent?"

2000-11-07 06:11:26

by H. Peter Anvin

Subject: Re: rdtsc to mili secs?

Followup to: <[email protected]>
By author: Antony Suter <[email protected]>
In newsgroup: linux.dev.kernel
>
> This issue, and all related issues, need to be taken care of for all
> speed changing CPUs from Intel, AMD and Transmeta. Is the answer of
> "howto write userland programs correctly with a speed changing cpu"
> in a FAQ somewhere?
>

At least in the case of Transmeta, RDTSC will return wall time.

-hpa
--
<[email protected]> at work, <[email protected]> in private!
"Unix gives you enough rope to shoot yourself in the foot."
http://www.zytor.com/~hpa/puzzle.txt

2000-11-07 12:17:47

by Alan

Subject: Re: rdtsc to mili secs?

> This issue, and all related issues, need to be taken care of for all
> speed
> changing CPUs from Intel, AMD and Transmeta. Is the answer of "howto

Sensibly configured power saving/speed throttle systems do not change the
frequency at all. The duty cycle is changed and this controls the cpu
performance but the tsc is constant

2000-11-10 21:25:06

by Pavel Machek

Subject: Re: rdtsc to mili secs?

Hi!

> > This issue, and all related issues, need to be taken care of for all
> > speed
> > changing CPUs from Intel, AMD and Transmeta. Is the answer of "howto
>
> Sensibly configured power saving/speed throttle systems do not change the
> frequency at all. The duty cycle is changed and this controls the cpu
> performance but the tsc is constant

Do you have an example of a notebook that does power saving like that?

I have 2 examples of notebooks with changing TSC speed...

--
I'm [email protected]. "In my country we have almost anarchy and I don't care."
Panos Katsaloulis describing me w.r.t. patents at [email protected]

2000-11-10 21:39:17

by H. Peter Anvin

Subject: Re: rdtsc to mili secs?

Followup to: <[email protected]>
By author: Pavel Machek <[email protected]>
In newsgroup: linux.dev.kernel
> >
> > Sensibly configured power saving/speed throttle systems do not change the
> > frequency at all. The duty cycle is changed and this controls the cpu
> > performance but the tsc is constant
>
> Do you have an example of notebook that does powersaving like that?
> I have 2 examples of notebooks with changing TSC speed...
>

Intel PIIX-based systems will do duty-cycle throttling, for example.
However, there are definitely notebooks that will mess with the
frequency. At Transmeta, we went through some considerable pain to
make sure RDTSC would count walltime even across Longrun transitions.

-hpa
--
<[email protected]> at work, <[email protected]> in private!
"Unix gives you enough rope to shoot yourself in the foot."
http://www.zytor.com/~hpa/puzzle.txt

2000-11-10 22:24:35

by Rogier Wolff

Subject: Re: rdtsc to mili secs?

H. Peter Anvin wrote:
> Followup to: <[email protected]>
> By author: Pavel Machek <[email protected]>
> In newsgroup: linux.dev.kernel
> > >
> > > Sensibly configured power saving/speed throttle systems do not change the
> > > frequency at all. The duty cycle is changed and this controls the cpu
> > > performance but the tsc is constant
> >
> > Do you have an example of notebook that does powersaving like that?
> > I have 2 examples of notebooks with changing TSC speed...
> >
>
> Intel PIIX-based systems will do duty-cycle throttling, for example.

What's this "duty cycle throtteling"? Some people seem to think this
refers to changing the duty cycle on the clock, and thereby saving
power. I think it doesn't save any power if you do it that way. You
are referring to the duty cycle on a "stpclk" signal, right?


Roger.

> However, there are definitely notebooks that will mess with the
> frequency. At Transmeta, we went through some considerable pain to
> make sure RDTSC would count walltime even across Longrun transitions.


--
** [email protected] ** http://www.BitWizard.nl/ ** +31-15-2137555 **
*-- BitWizard writes Linux device drivers for any device you may have! --*
* Common sense is the collection of *
****** prejudices acquired by age eighteen. -- Albert Einstein ********

2000-11-10 23:00:39

by H. Peter Anvin

Subject: Re: rdtsc to mili secs?

Followup to: <[email protected]>
By author: [email protected] (Rogier Wolff)
In newsgroup: linux.dev.kernel
> >
> > Intel PIIX-based systems will do duty-cycle throttling, for example.
>
> What's this "duty cycle throtteling"? Some people seem to think this
> refers to changing the duty cycle on the clock, and thereby saving
> power. I think it doesn't save any power if you do it that way. You
> are referring to the duty cycle on a "stpclk" signal, right?
>

Yes. The clock to the CPU isn't actually halted, but the STPCLK input
is pulsed, usually at 4 kHz, with some specific duty cycle.

It saves power roughly linearly with the duty cycle, minus some
overhead. It tends to be used mostly for thermal protection; if all
you have is duty cycle throttling (or frequency change without
corresponding voltage change), you're usually better off getting
things done as soon as possible and then going into deep sleep instead.
However, if you are about to emit smoke, you don't really have a whole
lot of options.
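
As a rough illustration, with made-up numbers: at a 25% duty cycle a 300 MHz
part gets through roughly 75 MHz worth of work while drawing about 0.25 x
core power plus the fixed platform overhead, so the energy spent per unit of
work actually goes up -- which is why finishing at full speed and then
sleeping deeply is the better strategy when you have the choice.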

-hpa
--
<[email protected]> at work, <[email protected]> in private!
"Unix gives you enough rope to shoot yourself in the foot."
http://www.zytor.com/~hpa/puzzle.txt

2000-11-15 21:34:22

by Pavel Machek

Subject: Re: rdtsc to mili secs?

Hi!

> > > Sensibly configured power saving/speed throttle systems do not change the
> > > frequency at all. The duty cycle is changed and this controls the cpu
> > > performance but the tsc is constant
> >
> > Do you have an example of notebook that does powersaving like that?
> > I have 2 examples of notebooks with changing TSC speed...
> >
>
> Intel PIIX-based systems will do duty-cycle throttling, for example.

Don't think so. My toshiba is PIIX-based, AFAIC:

root@bug:~# cat /proc/pci
Bus 0, device 5, function 0:
  Bridge: Intel Corporation 82371AB PIIX4 ISA (rev 2).
Bus 0, device 5, function 1:
  IDE interface: Intel Corporation 82371AB PIIX4 IDE (rev 1).
    Master Capable. Latency=64.
    I/O at 0x1000 [0x100f].
Bus 0, device 5, function 2:
  USB Controller: Intel Corporation 82371AB PIIX4 USB (rev 1).
    IRQ 11.
    Master Capable. Latency=64.
    I/O at 0xffe0 [0xffff].
Bus 0, device 5, function 3:
  Bridge: Intel Corporation 82371AB PIIX4 ACPI (rev 2).

Still, it is willing to run with RDTSC at 300MHz, 150MHz, and
40MHz. (The last one in _extreme_ cases when the CPU fan fails -- running
at 40MHz is better than cooking the CPU).


--
I'm [email protected]. "In my country we have almost anarchy and I don't care."
Panos Katsaloulis describing me w.r.t. patents at [email protected]

2000-11-15 21:36:22

by H. Peter Anvin

Subject: Re: rdtsc to mili secs?

Pavel Machek wrote:
> >
> > Intel PIIX-based systems will do duty-cycle throttling, for example.
>
> Don't think so. My toshiba is PIIX-based, AFAIC:
>

Interesting. Some will, definitely. Didn't know that wasn't universal.

Clearly, on a machine like that, there is no hope for RDTSC, at least
unless the CPU (and OS!) gets notification that the TSC needs to be
recalibrated whenever it switches.

> root@bug:~# cat /proc/pci
> Bus 0, device 5, function 0:
> Bridge: Intel Corporation 82371AB PIIX4 ISA (rev 2).
> Bus 0, device 5, function 1:
> IDE interface: Intel Corporation 82371AB PIIX4 IDE (rev 1).
> Master Capable. Latency=64.
> I/O at 0x1000 [0x100f].
> Bus 0, device 5, function 2:
> USB Controller: Intel Corporation 82371AB PIIX4 USB (rev 1).
> IRQ 11.
> Master Capable. Latency=64.
> I/O at 0xffe0 [0xffff].
> Bus 0, device 5, function 3:
> Bridge: Intel Corporation 82371AB PIIX4 ACPI (rev 2).
>
> Still, it is willing to run with RDTSC at 300MHz, 150MHz, and
> 40MHz. (The last one in _extreme_ cases when CPU fan fails -- running
> at 40MHz is better than cooking cpu).
>
> --
> I'm [email protected]. "In my country we have almost anarchy and I don't care."
> Panos Katsaloulis describing me w.r.t. patents at [email protected]

--
<[email protected]> at work, <[email protected]> in private!
"Unix gives you enough rope to shoot yourself in the foot."
http://www.zytor.com/~hpa/puzzle.txt

2000-11-16 15:08:10

by Vojtech Pavlik

Subject: Re: rdtsc to mili secs?

On Wed, Nov 15, 2000 at 01:05:27PM -0800, H. Peter Anvin wrote:

> > > Intel PIIX-based systems will do duty-cycle throttling, for example.
> >
> > Don't think so. My toshiba is PIIX-based, AFAIC:
>
> Interesting. Some will, definitely. Didn't know that wasn't universal.
>
> Clearly, on a machine like that, there is no hope for RDTSC, at least
> unless the CPU (and OS!) gets notification that the TSC needs to be
> recalibrated whenever it switches.
>
> > Still, it is willing to run with RDTSC at 300MHz, 150MHz, and
> > 40MHz. (The last one in _extreme_ cases when CPU fan fails -- running
> > at 40MHz is better than cooking cpu).

I believe that pulsing the STPCLK pin of the processor by connecting it
to, say, a 32 kHz signal and then changing the duty cycle of that signal
could have the effect of slowing down the processor to these speeds.

Somehow I can't believe a PMMX would be able to run at 40MHz. Which in
turn means that STPCLK also stops TSC, which is equally bad.

Anyway, this should be solvable by checking for clock change in the
timer interrupt. This way we should be able to detect when the clock
went weird with a 10 ms accuracy. And compensate for that. It should be
possible to keep a 'reasonable' clock running even through the clock
changes, where reasonable means constantly growing and as close to real
time as 10 ms difference max.

Yes, this is not perfect, but it would still keep every program quite happy
and running.
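
A sketch of the per-tick check being described, using hypothetical names
rather than actual kernel code:

/* Called from the timer interrupt: compare the TSC delta since the
 * previous tick with the expected count and, if it is far off, assume
 * the clock (or the STPCLK duty cycle) changed and recalibrate.  The
 * derived time is then wrong by at most one tick, about 10 ms. */
static unsigned long long tsc_at_last_tick;
static unsigned long long cycles_per_tick;      /* calibrated at boot */

void check_tsc_rate(void)
{
        unsigned long long now = rdtsc();       /* as in the sketch above */
        unsigned long long delta = now - tsc_at_last_tick;

        /* Allow ~12% jitter; anything beyond that is treated as a real
         * rate change rather than interrupt latency. */
        if (delta > cycles_per_tick + (cycles_per_tick >> 3) ||
            delta < cycles_per_tick - (cycles_per_tick >> 3))
                cycles_per_tick = delta;

        tsc_at_last_tick = now;
}

As the rest of the thread points out, this simple form cannot tell a genuine
rate change from a lost timer interrupt.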

--
Vojtech Pavlik
SuSE Labs

2000-11-16 23:40:07

by H. Peter Anvin

Subject: Re: rdtsc to mili secs?

Followup to: <[email protected]>
By author: Vojtech Pavlik <[email protected]>
In newsgroup: linux.dev.kernel
>
> Anyway, this should be solvable by checking for clock change in the
> timer interrupt. This way we should be able to detect when the clock
> went weird with a 10 ms accuracy. And compensate for that. It should be
> possible to keep a 'reasonable' clock running even through the clock
> changes, where reasonable means constantly growing and as close to real
> time as 10 ms difference max.
>

Actually, on machines where RDTSC works correctly, you'd like to use
that to detect a lost timer interrupt.

It's tough, it really is :(

-hpa
--
<[email protected]> at work, <[email protected]> in private!
"Unix gives you enough rope to shoot yourself in the foot."
http://www.zytor.com/~hpa/puzzle.txt

2000-11-18 22:31:07

by Pavel Machek

Subject: Re: rdtsc to mili secs?

Hi!

> > Anyway, this should be solvable by checking for clock change in the
> > timer interrupt. This way we should be able to detect when the clock
> > went weird with a 10 ms accuracy. And compensate for that. It should be
> > possible to keep a 'reasonable' clock running even through the clock
> > changes, where reasonable means constantly growing and as close to real
> > time as 10 ms difference max.
> >
>
> Actually, on machines where RDTSC works correctly, you'd like to use
> that to detect a lost timer interrupt.
>
> It's tough, it really is :(

Well, my patch did not do that but you probably want lost timer
interrupt detection so that you avoid false alarms.

But that means you can no longer detect speed change after 10msec:

going from 150MHz to 300MHz is very similar to one lost timer
interrupt.
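
To put numbers on that (illustrative only):

  one 10 ms tick at 150 MHz  = 1,500,000 TSC cycles
  one 10 ms tick at 300 MHz  = 3,000,000 TSC cycles
  two 10 ms ticks at 150 MHz = 3,000,000 TSC cycles

so from the per-tick TSC delta alone, a doubling of the clock looks exactly
like a single missed tick.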

Pavel
--
I'm [email protected]. "In my country we have almost anarchy and I don't care."
Panos Katsaloulis describing me w.r.t. patents at [email protected]

2000-11-18 22:31:47

by Pavel Machek

Subject: Re: rdtsc to mili secs?

Hi!

> > > > Intel PIIX-based systems will do duty-cycle throttling, for example.
> > >
> > > Don't think so. My toshiba is PIIX-based, AFAIC:
> >
> > Interesting. Some will, definitely. Didn't know that wasn't universal.
> >
> > Clearly, on a machine like that, there is no hope for RDTSC, at least
> > unless the CPU (and OS!) gets notification that the TSC needs to be
> > recalibrated whenever it switches.
> >
> > > Still, it is willing to run with RDTSC at 300MHz, 150MHz, and
> > > 40MHz. (The last one in _extreme_ cases when CPU fan fails -- running
> > > at 40MHz is better than cooking cpu).
>
> I believe that pulsing the STPCLK pin of the processor by connecting it
> to a say 32kHz signal and then changing the duty cycle of that signal
> could have the effect of slowing down the processor to these speeds.
>
> Somehow I can't believe a PMMX would be able to run at 40MHz. Which in
> turn means that STPCLK also stops TSC, which is equally bad.

Why not? From 300MHz to 40MHz is less than a factor of ten; that is not that
big a difference. (I've run a K6/400 at 66MHz, IIRC, while debugging -- I'm
not really sure, and don't want to open the machine, but it should work.)

> Anyway, this should be solvable by checking for clock change in the
> timer interrupt. This way we should be able to detect when the clock
> went weird with a 10 ms accuracy. And compensate for that. It should be
> possible to keep a 'reasonable' clock running even through the clock
> changes, where reasonable means constantly growing and as close to real
> time as 10 ms difference max.
>
> Yes, this is not perfect, but still keep every program quite happy and
> running.

No. Udelay has just gone wrong and your old ISA xxx card just crashed the
whole system. Oops.

BTW I mailed a patch to do exactly that kind of autodetection to the
list some time ago. (I just can't find it now :-( -- search the archives
for 'TSC is slower than it should be'.)
Pavel

--
I'm [email protected]. "In my country we have almost anarchy and I don't care."
Panos Katsaloulis describing me w.r.t. patents at [email protected]

2000-11-18 22:44:40

by Vojtech Pavlik

Subject: Re: rdtsc to mili secs?

On Sat, Nov 18, 2000 at 09:12:31PM +0100, Pavel Machek wrote:

> > Anyway, this should be solvable by checking for clock change in the
> > timer interrupt. This way we should be able to detect when the clock
> > went weird with a 10 ms accuracy. And compensate for that. It should be
> > possible to keep a 'reasonable' clock running even through the clock
> > changes, where reasonable means constantly growing and as close to real
> > time as 10 ms difference max.
> >
> > Yes, this is not perfect, but still keep every program quite happy and
> > running.
>
> No. Udelay has just gone wrong and your old ISA xxx card just crashed
> whole system. Oops.

Yes. But can you do any better than that? Anyway, I wouldn't expect to
be able to put my old ISA cards into a recent notebook which fiddles
with the CPU speed (or STPCLK ratio).

> BTW I mailed patch to do exactly that kind of autodetection to the
> list some time ago. (I just can't find it now :-( -- search archives
> for 'TSC is slower than it should be'.

If I recall correctly, that patch didn't create a 'reasonable clock' -
it wasn't growing all the time and could skip back sometimes.

--
Vojtech Pavlik
SuSE Labs

2000-11-19 00:18:45

by H. Peter Anvin

Subject: Re: rdtsc to mili secs?

Followup to: <[email protected]>
By author: Pavel Machek <[email protected]>
In newsgroup: linux.dev.kernel

> > Actually, on machines where RDTSC works correctly, you'd like to use
> > that to detect a lost timer interrupt.
> >
> > It's tough, it really is :(
>
> Well, my patch did not do that but you probably want lost timer
> interrupt detection so that you avoid false alarms.
>
> But that means you can no longer detect speed change after 10msec:
>
> going from 150MHz to 300MHz is very similar to one lost timer
> interrupt.
>

That's the point.

-hpa
--
<[email protected]> at work, <[email protected]> in private!
"Unix gives you enough rope to shoot yourself in the foot."
http://www.zytor.com/~hpa/puzzle.txt

2000-11-19 09:52:25

by Vojtech Pavlik

Subject: Re: rdtsc to mili secs?

On Sat, Nov 18, 2000 at 03:48:06PM -0800, H. Peter Anvin wrote:
> Followup to: <[email protected]>
> By author: Pavel Machek <[email protected]>
> In newsgroup: linux.dev.kernel
>
> > > Actually, on machines where RDTSC works correctly, you'd like to use
> > > that to detect a lost timer interrupt.
> > >
> > > It's tough, it really is :(
> >
> > Well, my patch did not do that but you probably want lost timer
> > interrupt detection so that you avoid false alarms.
> >
> > But that means you can no longer detect speed change after 10msec:
> >
> > going from 150MHz to 300MHz is very similar to one lost timer
> > interrupt.
> >
>
> That's the point.

... and you can still have both - detection of lost timer interrupts
and detection of speed changes. It'll take longer than 10 ms to notice,
though (I think 20 or 30 ms will do it).

--
Vojtech Pavlik
SuSE Labs

2000-11-19 20:58:03

by Pavel Machek

Subject: Re: rdtsc to mili secs?

Hi!

> > > Anyway, this should be solvable by checking for clock change in the
> > > timer interrupt. This way we should be able to detect when the clock
> > > went weird with a 10 ms accuracy. And compensate for that. It should be
> > > possible to keep a 'reasonable' clock running even through the clock
> > > changes, where reasonable means constantly growing and as close to real
> > > time as 10 ms difference max.
> > >
> > > Yes, this is not perfect, but still keep every program quite happy and
> > > running.
> >
> > No. Udelay has just gone wrong and your old ISA xxx card just crashed
> > whole system. Oops.
>
> Yes. But can you do any better than that? Anyway, I wouldn't expect to
> be able to put my old ISA cards into a recent notebook which fiddles
> with the CPU speed (or STPCLK ratio).

PCMCIA is just that: putting old ISA crap into modern hardware. Sorry.

Pavel

--
I'm [email protected]. "In my country we have almost anarchy and I don't care."
Panos Katsaloulis describing me w.r.t. patents at [email protected]

2000-11-19 22:23:33

by Vojtech Pavlik

Subject: Re: rdtsc to mili secs?

On Sun, Nov 19, 2000 at 09:24:04PM +0100, Pavel Machek wrote:
> Hi!
>
> > > > Anyway, this should be solvable by checking for clock change in the
> > > > timer interrupt. This way we should be able to detect when the clock
> > > > went weird with a 10 ms accuracy. And compensate for that. It should be
> > > > possible to keep a 'reasonable' clock running even through the clock
> > > > changes, where reasonable means constantly growing and as close to real
> > > > time as 10 ms difference max.
> > > >
> > > > Yes, this is not perfect, but still keep every program quite happy and
> > > > running.
> > >
> > > No. Udelay has just gone wrong and your old ISA xxx card just crashed
> > > whole system. Oops.
> >
> > Yes. But can you do any better than that? Anyway, I wouldn't expect to
> > be able to put my old ISA cards into a recent notebook which fiddles
> > with the CPU speed (or STPCLK ratio).
>
> PCMCIA is just that: putting old ISA crap into modern hardware. Sorry.

Not really, fortunately. There are ISA-style NE2000's on PCMCIA, but
1) You know that you have a card there via the PCMCIA services and
2) They're not the old crappy NE2000's that'd die on a random read anyway.

--
Vojtech Pavlik
SuSE Labs