Hi Andreas!
You've got the point. My system is diskless, keyboardless, and mouseless,
so I will never get any randomness without the patch you mentioned.
BTW, does anyone know where I can find the patch to get randomness from
network card interrupts?
regards
Henrique
On Thursday 15 August 2002 06:25 pm, Andreas Dilger wrote:
> On Aug 15, 2002 15:14 +0000, henrique wrote:
> > Hello!
> >
> > I am trying to use a program (ipsec newhostkey) that uses the random
> > device provided by the Linux kernel. On an x86 machine the program works
> > fine, but when I tried to run it on a PPC machine it doesn't
> > work.
> >
> > Looking carefully, I discovered that the problem is in the driver
> > random.c. When the program tries to read any amount of data it blocks and
> > never returns. This happens because the variable
> > "random_state->entropy_count" is always zero, that is, no random numbers
> > are being generated at all!
> >
> > Does anyone know anything about this problem? Any sort of help is very
> > welcome.
>
> Maybe the PPC keyboard/mouse drivers do not add randomness? You should
> also get randomness from disk I/O. If your PPC system is diskless,
> mouseless, and keyboardless, there is also a patch for 2.4 which allows
> you to get randomness from network card interrupts, which is good enough
> for all but the most incredibly paranoid people.
>
> Cheers, Andreas
--
Henrique Gobbi
Software Engineer
+55 11 50333339
Cyclades Corporation - The Leader in Linux Connectivity
Followup to: <[email protected]>
By author: Tom Rini <[email protected]>
In newsgroup: linux.dev.kernel
>
> Ah, thanks. In that case, no. It doesn't look like the input-layer USB
> keyboards contribute to entropy (but mice do), and I don't think the ADB
> ones do. I'll take a crack at adding this to keyboards Monday maybe.
>
Be careful... USB devices are *always* going to speak at the same
place in the USB cycle... I believe that is 1 ms. Thus,
submillisecond resolution is *not* random.
-hpa
--
<[email protected]> at work, <[email protected]> in private!
"Unix gives you enough rope to shoot yourself in the foot."
http://www.zytor.com/~hpa/puzzle.txt
>BTW, does anyone know where I can find the patch to get randomness from
>network card interrupts?
Add the SA_SAMPLE_RANDOM flag to the request_irq() flags in the driver for
whichever interrupt source you want to use, e.g. from drivers/net/3c523.c:
ret = request_irq(dev->irq, &elmc_interrupt, SA_SHIRQ | SA_SAMPLE_RANDOM,
dev->name, dev);
Jon
On Thu, Aug 15, 2002 at 06:50:32PM -0700, H. Peter Anvin wrote:
> Followup to: <[email protected]>
> By author: Tom Rini <[email protected]>
> In newsgroup: linux.dev.kernel
> >
> > Ah, thanks. In that case, no. It doesn't look like the input-layer USB
> > keyboards contribute to entropy (but mice do), and I don't think the ADB
> > ones do. I'll take a crack at adding this to keyboards Monday maybe.
> >
>
> Be careful... USB devices are *always* going to speak at the same
> place in the USB cycle... I believe that is 1 ms. Thus,
> submillisecond resolution is *not* random.
Currently not a problem for anyone except x86, as everyone else will only
get HZ resolution. But the same problem exists with regular mice and
keyboards, which are typically scanned for events at a fixed rate.
This is just the tip of the iceberg for problems with entropy
accounting. I've got some free time today, I'll try to clean up my
patches for this.
--
"Love the dolphins," she advised him. "Write by W.A.S.T.E.."
On Fri, Aug 16, 2002 at 10:01:26AM -0700, Tom Rini wrote:
> On Fri, Aug 16, 2002 at 11:28:02AM -0500, Oliver Xymoron wrote:
>
> > What PPC and other arches really need in this area is a higher
> > resolution timing source. The jiffies-based timing is rather
> > limiting, especially after the entropy accounting stops overestimating
> > things by orders of magnitude. Does the PPC port have a convenient way
> > to access the TBR or something similar?
>
> Not knowing i386 well, what's the TBR? But yes, on at least some of the
> cores I know there are time registers. I'm not sure if it's an optional
> feature for PPC or not, though.
I was actually referring to the PPC's time base register, which is
about the closest equivalent to the x86's TSC. I'm pretty sure it's a
non-optional architecture feature (but I don't have my books handy),
and I believe this usually gets wired to the clock driving the memory
bus (typically 133MHz), but I've only worked with embedded PPC so I
have no idea how it's set up in commodity hardware.
--
"Love the dolphins," she advised him. "Write by W.A.S.T.E.."
On Fri, Aug 16, 2002 at 11:00:00AM +0100, Jon Burgess wrote:
>
>
> >BTW, does anyone know where I can find the patch to get randomness from
> >network card interrupts?
>
> Add the SA_SAMPLE_RANDOM flag to the request_irq() flags in the driver for
> whichever interrupt source you want to use, e.g. from drivers/net/3c523.c:
>
> ret = request_irq(dev->irq, &elmc_interrupt, SA_SHIRQ | SA_SAMPLE_RANDOM,
> dev->name, dev);
Don't do this. This is the Enron method of entropy accounting.
There is little to no reliably unpredictable data in network
interrupts, and the current scheme makes no provision for mixing in
untrusted sources. It's very likely that an attacker can measure,
model, and control such timings down to the resolution of the PCI bus
clock on a quiescent system. This is more than good enough to defeat
entropy generation on systems without a TSC, and given that the bus
clock is a multiple of the processor clock, it's likely possible to
extend this to TSC-based systems as well.
Entropy accounting is very fickle - if you overestimate _at all_, your
secret state becomes theoretically predictable. I have some patches
that create an API for adding such hard-to-predict but potentially
observable data to the entropy pool without accounting it as actual
entropy, as well as cleaning up some other major accounting errors, but
I'm not quite done testing them.
--
"Love the dolphins," she advised him. "Write by W.A.S.T.E.."
Hello Oliver!
What would you do in my situation? I am dealing with the Motorola MPC860T, and
my system has no disk (I use flash), no mouse, no keyboard, no PCI bus. It
has just a fast Ethernet, a console port, and some serial ports.
After reading the discussion on the lkml I realize that the only places I can
get randomness on my system are the serial.c (serial port driver) and
arch/ppc/8xx_io/fec.c (fast Ethernet driver) interrupts.
What do you think about this solution?
regards
Henrique
My fast Ethernet is controlled by the processor's internal controller (called MII).
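If it helps, I imagine the change would be something like this (an untested
sketch based on Jon's 3c523.c example quoted below; I am assuming the
handler in arch/ppc/8xx_io/fec.c is called fec_enet_interrupt, which may
be wrong):

	/* pass SA_SAMPLE_RANDOM so the timing of each ethernet interrupt
	 * is sampled into the kernel entropy pool */
	ret = request_irq(dev->irq, &fec_enet_interrupt, SA_SAMPLE_RANDOM,
			  dev->name, dev);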
On Friday 16 August 2002 07:52 pm, Oliver Xymoron wrote:
> On Fri, Aug 16, 2002 at 11:00:00AM +0100, Jon Burgess wrote:
> > >BTW, does anyone know where I can find the patch to get randomness from
> > > network card interrupts?
> >
> > Add the SA_SAMPLE_RANDOM flag to the request_irq() flags in the driver
> > for whichever interrupt source you want to use, e.g. from drivers/net/3c523.c:
> >
> > ret = request_irq(dev->irq, &elmc_interrupt, SA_SHIRQ |
> > SA_SAMPLE_RANDOM, dev->name, dev);
>
> Don't do this. This is the Enron method of entropy accounting.
>
> There is little to no reliably unpredictable data in network
> interrupts, and the current scheme makes no provision for mixing in
> untrusted sources. It's very likely that an attacker can measure,
> model, and control such timings down to the resolution of the PCI bus
> clock on a quiescent system. This is more than good enough to defeat
> entropy generation on systems without a TSC and given that the bus
> clock is a multiple of the processor clock, it's likely possible to
> extend this to TSC-based systems as well.
>
> Entropy accounting is very fickle - if you overestimate _at all_, your
> secret state becomes theoretically predictable. I have some patches
> that create an API for adding such hard to predict but potentially
> observable data to the entropy pool without accounting it as actual
> entropy, as well as cleaning up some other major accounting errors but
> I'm not quite done testing them.
--
Oliver Xymoron wrote:
> There is little to no reliably unpredictable data in network
> interrupts, and the current scheme makes no provision for mixing in
> untrusted sources. It's very likely that an attacker can measure,
> model, and control such timings down to the resolution of the PCI bus
> clock on a quiescent system. This is more than good enough to defeat
> entropy generation on systems without a TSC and given that the bus
> clock is a multiple of the processor clock, it's likely possible to
> extend this to TSC-based systems as well.
> Entropy accounting is very fickle - if you overestimate _at all_, your
> secret state becomes theoretically predictable. I have some patches
> that create an API for adding such hard to predict but potentially
> observable data to the entropy pool without accounting it as actual
> entropy, as well as cleaning up some other major accounting errors but
> I'm not quite done testing them.
The problem is this. If you have an embedded system that is headless, diskless, keyboardless, and
mouseless, then your only remaining source of any interrupt-based entropy is the network. Also, if
you add entropy to the pool without accounting it as entropy, then how does that help anything? You
can currently add anything you want to the pool and it will stir it in but not bump the entropy
count.
Granted, a proper solution would involve a hardware-based system for entropy generation, but in the
absence of a proper solution you do the best you can.
For the general user, network-based interrupts are likely okay. If you have an attacker
sophisticated enough to predict the arrival time of a network packet down to 30ns (the
timing of a 33MHz PCI bus) from across the network, then you will probably have the
resources for a hardware entropy generator.
This has been discussed many times, and the consensus seems to be that yes, it is not secure, but
there are people who have literally no other option.
Chris
--
Chris Friesen | MailStop: 043/33/F10
Nortel Networks | work: (613) 765-0557
3500 Carling Avenue | fax: (613) 765-2986
Nepean, ON K2H 8E9 Canada | email: [email protected]
On Fri, 16 Aug 2002, henrique wrote:
>What would you do in my situation? I am dealing with the Motorola MPC860T, and
>my system has no disk (I use flash), no mouse, no keyboard, no PCI bus. It
>has just a fast Ethernet, a console port, and some serial ports.
>
>After reading the discussion on the lkml I realize that the only places I can
>get randomness on my system are the serial.c (serial port driver) and
>arch/ppc/8xx_io/fec.c (fast Ethernet driver) interrupts.
Is there another way -- add a 'noise' device by connecting a PIO pin or
similar to suitable hardware? It shouldn't be too hard to do as a one-off.
For example:
[noise diode]--[amplifier]--[Schmitt-trigger inverter]---[PIO INT pin]
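On the kernel side it could be as simple as requesting the PIO interrupt
with SA_SAMPLE_RANDOM, so the core samples the timing of every noise edge
(a sketch only; the IRQ number and names are made up):

	static void noise_interrupt(int irq, void *dev_id, struct pt_regs *regs)
	{
		/* nothing to do: SA_SAMPLE_RANDOM already makes the
		 * interrupt core feed this interrupt's timing to the pool */
	}

	/* in board setup; NOISE_IRQ is whatever line the inverter drives */
	if (request_irq(NOISE_IRQ, noise_interrupt, SA_SAMPLE_RANDOM,
			"noise", NULL))
		printk(KERN_ERR "noise: could not get IRQ\n");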
Ruth
--
Ruth Ivimey-Cook
Software engineer and technical writer.
On Fri, Aug 16, 2002 at 04:52:49PM -0400, Chris Friesen wrote:
> Oliver Xymoron wrote:
>
> > There is little to no reliably unpredictable data in network
> > interrupts, and the current scheme makes no provision for mixing in
> > untrusted sources. It's very likely that an attacker can measure,
> > model, and control such timings down to the resolution of the PCI bus
> > clock on a quiescent system. This is more than good enough to defeat
> > entropy generation on systems without a TSC and given that the bus
> > clock is a multiple of the processor clock, it's likely possible to
> > extend this to TSC-based systems as well.
>
> > Entropy accounting is very fickle - if you overestimate _at all_, your
> > secret state becomes theoretically predictable. I have some patches
> > that create an API for adding such hard to predict but potentially
> > observable data to the entropy pool without accounting it as actual
> > entropy, as well as cleaning up some other major accounting errors but
> > I'm not quite done testing them.
>
> The problem is this. If you have an embedded system that is
> headless, diskless, keyboardless, and mouseless, then your only
> remaining source of any interrupt-based entropy is the network.
> Also, if you add entropy to the pool without accounting it as
> entropy, then how does that help anything?
Yes, you _potentially_ improve the unpredictability of /dev/urandom
without throwing out the guarantees of /dev/random. There is exactly
one difference between urandom and random - guaranteed entropy (ignore
for the moment that it's currently completely buggered; I'm fixing
that). If you need guaranteed entropy, then you _need_ an
unobservable entropy source. Period. Pretending network interrupts are
unpredictable is just pretending.
> For the general user, network-based interrupts are likely okay.
If that's really true, then /dev/urandom is okay too, _by
definition_. Use it.
--
"Love the dolphins," she advised him. "Write by W.A.S.T.E.."
On Fri, Aug 16, 2002 at 10:21:28PM +0100, Ruth Ivimey-Cook wrote:
>
> Is there another way -- add a 'noise' device by connecting a PIO pin or
> similar to suitable hardware? It shouldn't be too hard to do as a one-off.
> For example:
>
> [noise diode]--[amplifier]--[Schmitt-trigger inverter]---[PIO INT pin]
Doable, but it obviously takes some tuning of the amplifier. And then
you'll have to run it through some whitening.
--
"Love the dolphins," she advised him. "Write by W.A.S.T.E.."
On Fri, Aug 16, 2002 at 05:51:35PM +0000, henrique wrote:
>
> What would you do in my situation? I am dealing with the Motorola
> MPC860T, and my system has no disk (I use flash), no mouse, no
> keyboard, no PCI bus. It has just a fast-ethernet, a console port
> and some serial ports.
I've just recently dealt with analyzing this very situation in my own work.
> After reading the discussion on the lkml I realize that the only
> places I can get randomness on my system are the serial.c (serial
> port driver) and arch/ppc/8xx_io/fec.c (fast Ethernet driver)
> interrupts.
>
> What do you think about this solution?
For the purposes of a network appliance, it's probably sufficient. But
if you're making a term server, beware. You may be able to trust an
interactive user who's authenticated themselves to your configuration
UI to generate randomly timed keystrokes, but you can't trust just any
signal sent to you on a serial port.
Realistically, the hashing done by /dev/urandom is probably strong
enough for most purposes. It's as cryptographically strong as whatever
block cipher you're likely to use with it. /dev/random goes one step
further and tries to offer something that's theoretically
unbreakable. Useful for generating things like large public keys, less
useful for generating the session keys used by SSL and the
like. They're easier to break by direct attack.
If that's not good enough for you, build in a noise generator like a
reverse biased diode.
--
"Love the dolphins," she advised him. "Write by W.A.S.T.E.."
On Aug 16, 2002 19:45 -0500, Oliver Xymoron wrote:
> Realistically, the hashing done by /dev/urandom is probably strong
> enough for most purposes. It's as cryptographically strong as whatever
> block cipher you're likely to use with it. /dev/random goes one step
> further and tries to offer something that's theoretically
> unbreakable. Useful for generating things like large public keys, less
> useful for generating the session keys used by SSL and the
> like. They're easier to break by direct attack.
One of the problems, I believe, is that reading from /dev/urandom will
also deplete the entropy pool, just like reading from /dev/random.
The only difference is that when the entropy is gone /dev/random will
stop and /dev/urandom will continue to provide data.
If you are in there fixing things, it might make sense to have
/dev/urandom extract entropy from the random pool far less often than
/dev/random. This way, people who use /dev/urandom as a source of
less-strong randomness (e.g. TCP sequence numbers or whatever) will
not be shooting themselves in the foot when they need a 2048-bit
PGP key, if they are low on entropy sources.
Cheers, Andreas
--
Andreas Dilger
http://www-mddsp.enel.ucalgary.ca/People/adilger/
http://sourceforge.net/projects/ext2resize/
On Sat, Aug 17, 2002 at 12:05:07AM -0600, Andreas Dilger wrote:
> On Aug 16, 2002 19:45 -0500, Oliver Xymoron wrote:
> > Realistically, the hashing done by /dev/urandom is probably strong
> > enough for most purposes. It's as cryptographically strong as whatever
> > block cipher you're likely to use with it. /dev/random goes one step
> > further and tries to offer something that's theoretically
> > unbreakable. Useful for generating things like large public keys, less
> > useful for generating the session keys used by SSL and the
> > like. They're easier to break by direct attack.
>
> One of the problems, I believe, is that reading from /dev/urandom will
> also deplete the entropy pool, just like reading from /dev/random.
> The only difference is that when the entropy is gone /dev/random will
> stop and /dev/urandom will continue to provide data.
Yep, this is a longstanding problem. Will look into it and a couple
other things once I get my current batch of patches running
against -current.
BTW, did tytso ever ACK your last set of random changes, or is it safe
to assume it's unmaintained?
> If you are in there fixing things, it might make sense to have
> /dev/urandom extract entropy from the random pool far less often than
> /dev/random. This way, people who use /dev/urandom as a source of
> less-strong randomness (e.g. TCP sequence numbers or whatever) will
> not be shooting themselves in the foot when they need a 2048-bit
> PGP key, if they are low on entropy sources.
Not sure this is an ideal fix. We might instead have an entropy
low-water mark (say 1/2 pool size), below which /dev/urandom will not
deplete the pool. This way when we have ample entropy, both devices
will behave like TRNGs, with /dev/urandom falling back to PRNG when a
shortage is threatened.
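In rough pseudo-C, the idea is (illustrative names, not the actual
random.c identifiers):

	/* sketch: only debit the entropy count for urandom reads while
	 * we are above a low watermark; below it, urandom keeps producing
	 * output as a pure PRNG and the remaining credited entropy is
	 * reserved for blocking /dev/random reads */
	if (blocking_read || r->entropy_count > r->pool_size_bits / 2)
		r->entropy_count -= nbytes * 8;
	/* else: extract output but leave entropy_count untouched */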
--
"Love the dolphins," she advised him. "Write by W.A.S.T.E.."
On Aug 17, 2002 02:23 -0500, Oliver Xymoron wrote:
> On Sat, Aug 17, 2002 at 12:05:07AM -0600, Andreas Dilger wrote:
> > On Aug 16, 2002 19:45 -0500, Oliver Xymoron wrote:
> > One of the problems, I believe, is that reading from /dev/urandom will
> > also deplete the entropy pool, just like reading from /dev/random.
> > The only difference is that when the entropy is gone /dev/random will
> > stop and /dev/urandom will continue to provide data.
>
> Yep, this is a longstanding problem. Will look into it and a couple
> other things once I get the my current batch of patches running
> against -current.
Sure.
> BTW, did tytso ever ACK your last set of random changes, or is it safe
> to assume it's unmaintained?
Yes, a while later. I'm sure it wouldn't hurt to post your patches here
before submission, but I don't think you need to funnel them through Ted.
> > If you are in there fixing things, it might make sense to have
> > /dev/urandom extract entropy from the random pool far less often than
> > /dev/random. This way people who use /dev/urandom for a source of
> > less-strong randomness (e.g. TCP sequence numbers or whatever) will
> > not be shooting themselves in the foot when they need a 2048-bit
> > PGP key, if they are low on entropy sources.
>
> Not sure this is an ideal fix. We might instead have an entropy
> low-water mark (say 1/2 pool size), below which /dev/urandom will not
> deplete the pool. This way when we have ample entropy, both devices
> will behave like TRNGs, with /dev/urandom falling back to PRNG when a
> shortage is threatened.
Well, I can think of a few mechanisms that would work better than the
simple on/off method you are proposing. The current code will
basically "fill" the urandom pool each time it is depleted, and then
when the entropy is gone it will just go on dumping out data. You
could make urandom only get more entropy each N times through its pool,
or make N a function of the "fullness" of the available entropy. Then
if a system has lots of entropy sources urandom is a TRNG, but if not
it will gracefully degrade from TRNG to PRNG without wiping out all
the entropy in the process.
As an alternative, instead of taking poolsize bytes of entropy each
N uses, you could take some small amount of entropy to mix into the
pool slowly.
Cheers, Andreas
--
Andreas Dilger
http://www-mddsp.enel.ucalgary.ca/People/adilger/
http://sourceforge.net/projects/ext2resize/
On Sat, Aug 17, 2002 at 03:09:50AM -0600, Andreas Dilger wrote:
> >
> > Not sure this is an ideal fix. We might instead have an entropy
> > low-water mark (say 1/2 pool size), below which /dev/urandom will not
> > deplete the pool. This way when we have ample entropy, both devices
> > will behave like TRNGs, with /dev/urandom falling back to PRNG when a
> > shortage is threatened.
>
> Well, I can think of a few mechanisms that would work better than the
> simple on/off method you are proposing. The current code will
> basically "fill" the urandom pool each time it is depleted, and then
> when the entropy is gone it will just go on dumping out data. You
> could make urandom only get more entropy each N times through its pool,
> or make N a function of the "fullness" of the available entropy. Then
> if a system has lots of entropy sources urandom is a TRNG, but if not
> it will gracefully degrade from TRNG to PRNG without wiping out all
> the entropy in the process.
>
> As an alternative, instead of taking poolsize bytes of entropy each
> N uses, you could take some small amount of entropy to mix into the
> pool slowly.
Actually, that gives us exactly the scenario catastrophic reseeding
intends to avoid. Presume for a moment that the internal state of the
PRNG is somehow known to an attacker - the PRNG has been broken. If we
take eight bits out of the entropy pool, mix them in, then generate a
new random number, an attacker only needs to test 256 possible pool
states before he knows the entire state again. If he keeps up this
state extension attack, /dev/urandom will never return to an unknown
state. If, on the other hand, /dev/urandom waits until there are, say,
64 entropy bits to mix in, it can make a leap to a state that the
attacker will have a very hard time guessing and the PRNG has
recovered.
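Schematically (illustrative pseudo-C, not the actual code):

	#define RESEED_BITS 64	/* minimum batch for a catastrophic reseed */

	/* trickling in 8 bits lets an attacker who already knows the
	 * PRNG state search only 2^8 candidate pools; batching 64 bits
	 * forces a 2^64 search, so the PRNG genuinely recovers */
	if (primary_entropy_count >= RESEED_BITS)
		mix_into_secondary_pool(extract_from_primary(RESEED_BITS));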
We merely need to avoid starvation of /dev/random, and I think a
straightforward low watermark approach will do that nicely. Starving
/dev/urandom is no worse than feeding it a trickle.
--
"Love the dolphins," she advised him. "Write by W.A.S.T.E.."
On Sat, 17 Aug 2002, Oliver Xymoron wrote:
> > If you are in there fixing things, it might make sense to have
> > /dev/urandom extract entropy from the random pool far less often than
> > /dev/random. This way people who use /dev/urandom for a source of
> > less-strong randomness (e.g. TCP sequence numbers or whatever) will
> > not be shooting themselves in the foot when they need a 2048-bit
> > PGP key, if they are low on entropy sources.
>
> Not sure this is an ideal fix. We might instead have an entropy
> low-water mark (say 1/2 pool size), below which /dev/urandom will not
> deplete the pool. This way when we have ample entropy, both devices
> will behave like TRNGs, with /dev/urandom falling back to PRNG when a
> shortage is threatened.
How can you make /dev/urandom return something without leaking
information about the internal pool state to the observer?
Do you plan to switch to a completely different source and reseed the
PRNG with data not taken from the pool?
I don't understand what you mean by the "1/2 pool size": the pool
size is always the same (once you set sys.kernel.random.poolsize),
it's just that the pool is somewhat known to the external observer.
The random bits you return to userspace either are related to the
internal pool (and thus leak knowledge about it) or not. Tertium non
datur. Or am I missing something?
I believe your 1/2 idea is better implemented with two different
(and separate) pools: you can make /dev/urandom drain one but not the
other. /dev/random can drain one or both, it's just an implementation
choice.
In my experience, there's little you can do when the entropy demand is
higher than the rate at which the kernel collects it. Either we implement
/dev/random quotas, or it will always be easy to drain the internal pool
from userspace.
I'd say that the /dev/urandom interface is somewhat broken: the application
either can live with an almost pure PRNG (and use a userspace
implementation) or needs true, pure, and strong randomness. The programmer
should know the application's minimal need for true randomness.
For every application that uses /dev/urandom, it's 0 by definition of
/dev/urandom, and the application should just use a userspace PRNG.
If you need a weak solution (a perturbed PRNG), just read a few bits
from /dev/random at times (but in a controlled and defined way) -- see
the sketch below.
IMHO, the whole urandom thing should be extended and put into a
userspace library.
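To be concrete, the userspace side could be as trivial as this toy sketch
(a real library would of course use a cryptographic PRNG rather than libc's
random(), and would loop on short reads):

	#include <stdio.h>
	#include <stdlib.h>
	#include <unistd.h>
	#include <fcntl.h>

	int main(void)
	{
		unsigned int seed;
		int fd = open("/dev/random", O_RDONLY);

		/* may block until the kernel has entropy to give */
		if (fd < 0 || read(fd, &seed, sizeof(seed)) != sizeof(seed))
			return 1;
		close(fd);

		srandom(seed);		/* stand-in for a strong PRNG */
		printf("%ld\n", random());
		return 0;
	}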
.TM.
On Mon, Aug 19, 2002 at 11:29:00AM +0200, Marco Colombo wrote:
> On Sat, 17 Aug 2002, Oliver Xymoron wrote:
>
> > > If you are in there fixing things, it might make sense to have
> > > /dev/urandom extract entropy from the random pool far less often than
> > > /dev/random. This way people who use /dev/urandom for a source of
> > > less-strong randomness (e.g. TCP sequence numbers or whatever) will
> > > not be shooting themselves in the foot when they need a 2048-bit
> > > PGP key, if they are low on entropy sources.
> >
> > Not sure this is an ideal fix. We might instead have an entropy
> > low-water mark (say 1/2 pool size), below which /dev/urandom will not
> > deplete the pool. This way when we have ample entropy, both devices
> > will behave like TRNGs, with /dev/urandom falling back to PRNG when a
> > shortage is threatened.
>
> How can you make /dev/urandom return something without leaking
> information about the internal pool state to the observer?
> Do you plan to switch to a completely different source and reseed the
> PRNG with data not taken from the pool?
I plan to make a third pool, reseeding from the first. The code
appears to actually be structured with that in mind, it just hasn't
been done.
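Roughly the structure I have in mind (a sketch, not actual declarations):

	struct entropy_pool primary;	/* fed by interrupt/disk timing   */
	struct entropy_pool blocking;	/* /dev/random, debits primary    */
	struct entropy_pool urandom;	/* /dev/urandom, catastrophically */
					/* reseeded from primary          */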
> In my experience, there's little you can do when the entropy demand is
> higher than the rate at which the kernel collects it. Either we implement
> /dev/random quotas, or it will always be easy to drain the internal pool
> from userspace.
Root can decide, for instance, to make /dev/random privileged to some
group if important_set is getting starved by unimportant_set.
> I'd say that the /dev/urandom interface is somewhat broken: the application
> either can live with an almost pure PRNG (and use a userspace
> implementation) or needs true, pure, and strong randomness. The programmer
> should know the application's minimal need for true randomness.
> For every application that uses /dev/urandom, it's 0 by definition of
> /dev/urandom, and the application should just use a userspace PRNG.
Many actually do this. I believe OpenSSL merely seeds from it, though I'd
have to double-check.
> If you need a weak solution (a perturbed PRNG), just read a few bits
> from /dev/random at times (but in a controlled and defined way).
It might be helpful to think of /dev/urandom as akin to /dev/random with
O_NONBLOCK. "Give me stronger bits if you got 'em" is desirable,
otherwise this thread would be much shorter.
--
"Love the dolphins," she advised him. "Write by W.A.S.T.E.."
On Mon, 19 Aug 2002, Oliver Xymoron wrote:
> > If you need a weak solution (a perturbed PRNG), just read a few bits
> > from /dev/random at times (but in a controlled and defined way).
>
> It might be helpful to think of /dev/urandom as akin to /dev/random with
> O_NONBLOCK. "Give me stronger bits if you got 'em" is desirable,
> otherwise this thread would be much shorter.
"desirable", yes, I see... B-). But I have to understand why, yet.
"Give me the best you can, but even 0 is ok" just serves to help people
waste resources. If your application is fine with (potentially)
guessable bits, you don't need /dev/random at all. If you do care
about a minimum, you know it in advance, so do fetch those bits
(and only them) from /dev/random, and use them. Yes, it may block,
but that's life. Resources aren't infinite.
I haven't seen the code, but I guess most programs don't even use
*/dev/random* output correctly. If you read 512 bits from it, you
only get 512 bits of the kernel's 'best effort' at random bits. Using those
512 to produce a 512-bit key is as optimistic as saying
"the kernel is a (mathematically) perfect source of random bits".
You'd better scramble them down to a 256-, 128-, or 64-bit key
and hope for the best. (Yes, it drains away entropy bits even faster,
and yes, that's being paranoid.)
If you ride Linus' "we're living in a practical world" horse and say
the application can't block, so let's use /dev/urandom, then a good
PRNG (+ all the crypto hiding you need) fits just as well. And I don't
see anything "sad" about it.
I'm missing any real argument for having the /dev/urandom logic in the
kernel.
As regards /dev/random being as strong (and paranoid) as possible,
I'm 101% with you.
.TM.
--
____/ ____/ /
/ / / Marco Colombo
___/ ___ / / Technical Manager
/ / / ESI s.r.l.
_____/ _____/ _/ [email protected]
On Mon, Aug 19, 2002 at 05:11:03PM +0200, Marco Colombo wrote:
> On Mon, 19 Aug 2002, Oliver Xymoron wrote:
>
> > > If you need a weak solution (a perturbed PRNG), just read a few bits
> > > from /dev/random at times (but in a controlled and defined way).
> >
> > It might be helpful to think of /dev/urandom as akin to /dev/random with
> > O_NONBLOCK. "Give me stronger bits if you got 'em" is desirable,
> > otherwise this thread would be much shorter.
>
> "desirable", yes, I see... B-). But I have to understand why, yet.
>
> "Give me the best you can, but even 0 is ok" just serves to help people
> waste resources. If your application is fine with (potentially)
> guessable bits, you don't need /dev/random at all. If you do care
> about a minimum, you know it in advance, so do fetch those bits
> (and only them) from /dev/random, and use them. Yes, it may block,
> but that's life. Resources aren't infinite.
For most people, entropy input far exceeds entropy output and the pool
is of finite size. There's no reason not to use these entropy bits, as
the pool is always full and we're discarding entropy constantly. It's
only a problem when the pool is running low and we risk making
/dev/random block.
> I'm missing any real argument for having the /dev/urandom logic in the
> kernel.
Convenience and control of resource sharing. The latter is slightly
under-implemented.
--
"Love the dolphins," she advised him. "Write by W.A.S.T.E.."
On Mon, 19 Aug 2002, Oliver Xymoron wrote:
> On Mon, Aug 19, 2002 at 05:11:03PM +0200, Marco Colombo wrote:
> > On Mon, 19 Aug 2002, Oliver Xymoron wrote:
> >
> > > > If you need a weak solution (a perturbed PRNG), just read a few bits
> > > > from /dev/random at times (but in a controlled and defined way).
> > >
> > > It might be helpful to think of /dev/urandom as akin to /dev/random with
> > > O_NONBLOCK. "Give me stronger bits if you got 'em" is desirable,
> > > otherwise this thread would be much shorter.
> >
> > "desirable", yes, I see... B-). But I have to understand why, yet.
> >
> > "Give me the best you can, but even 0 is ok" just serves to help people
> > waste resources. If your application is fine with (potentially)
> > guessable bits, you don't need /dev/random at all. If you do care
> > about a minimum, you know it in advance, so do fetch those bits
> > (and only them) from /dev/random, and use them. Yes, it may block,
> > but that's life. Resources aren't infinite.
>
> For most people, entropy input far exceeds entropy output and the pool
> is of finite size. There's no reason not to use these entropy bits, as
> the pool is always full and we're discarding entropy constantly. It's
We're never "discarding" entropy. We're just feeling more and more
comfortable about saying we've stored 'poolsize' random bits.
My point being: /dev/urandom is, by definition, for users that are fine
with 0 bits of true entropy. Why give them more?
But what you say is definitely true... on my desktop system I only need
to move the mouse a couple of times to fill the pool completely in a
matter of seconds. Once the first pool is "random enough" according
to our estimate (the entropy_count), it's OK to feed additional random
data to the second pool (used by urandom).
> only a problem when the pool is running low and we risk making
> /dev/random block.
As usual: problems arise when the resource is low... B-)
> > I'm missing any real argument for having the /dev/urandom logic in the
> > kernel.
>
> Convenience and control of resource sharing. The latter is slightly
> under-implemented.
I see we're in disagreement here. On the control part, I agree fully,
but it of course applies to /dev/random, too.
I see little convenience in having the /dev/urandom semantics in the kernel
(besides the fact it's already there, of course). You can easily write
a userland daemon which does the same, keeping its own pool and PRNG
algorithm, pulling bits from /dev/random at times, and only when the pool
is almost full. No inside knowledge is required here (we may need a
get_random_bytes_if_count_is_over_threshold IOCTL for /dev/random, to
make things really clean). Or it can be done by a library, with
every application managing a pool of the desired size, an algorithm
of the desired crypto strength, the desired ratio of real random bits
in the random output (output "quality"), and optional safeguard
tests (FIPS?) in place. It's just that I see no need for this to be in
kernel land. It only requires a working /dev/random, which is definitely
a kernel thing, and a better-defined API.
But I see I'm repeating myself, so we can just drop the discussion,
and peacefully live in disagreement... B-)
.TM.
--
____/ ____/ /
/ / / Marco Colombo
___/ ___ / / Technical Manager
/ / / ESI s.r.l.
_____/ _____/ _/ [email protected]
On Mon, Aug 19, 2002 at 06:20:14PM +0200, Marco Colombo wrote:
>
> We're never "discarding" entropy. We're just feeling more and more
> comfortable about saying we've stored 'poolsize' random bits.
Yes we are. The pool can only hold n bits. If it's full and we mix in
m more bits, we're losing m bits in the process.
> But what you say is definitely true... on my desktop system I only need
> to move the mouse a couple of times to fill the pool completely in a
> matter of seconds.
Heh. That's actually a bug. Your mouse movement is only giving the
system a few hundred bits of entropy (by the current code's
measurements), but /dev/random will give out thousands.
Note that above a certain velocity, mouse samples are back to back
characters on the serial port (or packets in the keyboard controller),
so most of the timing entropy is in the early acceleration or during
direction changes.
> I see little convenience in having the /dev/urandom semantics in the kernel
> (besides the fact it's already there, of course).
The kernel uses it internally for numerous things like sequence
numbers, syncookies, and UUIDs. So it doesn't take much to justify
exporting it.
--
"Love the dolphins," she advised him. "Write by W.A.S.T.E.."
On Mon, 19 Aug 2002, Oliver Xymoron wrote:
> On Mon, Aug 19, 2002 at 06:20:14PM +0200, Marco Colombo wrote:
> >
> > We're never "discarding" entropy. We're just feeling more and more
> > comfortable about saying we've stored 'poolsize' random bits.
>
> Yes we are. The pool can only hold n bits. If it's full and we mix in
> m more bits, we're losing m bits in the process.
B-) We're speaking of different "bits" here. A pool of, say, 4096 bits
only holds 4096 bits of data, but necessarily less than 4096 bits of
randomness, unless we have the 'perfect' random source... of course,
if we inject 40960 random bits (from a "good" source) we get closer to
4096 "really" random bits. At some point it makes no sense to inject
more, I agree.
> > But what you say is definitely true... on my desktop system I only need
> > to move the mouse a couple of times to fill the pool completely in a
> > matter of seconds.
>
> Heh. That's actually a bug. Your mouse movement is only giving the
> system a few hundred bits of entropy (by the current code's
> measurements), but /dev/random will give out thousands.
>
> Note that above a certain velocity, mouse samples are back to back
> characters on the serial port (or packets in the keyboard controller),
> so most of the timing entropy is in the early acceleration or during
> direction changes.
>
> > I see little convenience in having the /dev/urandom semantics in the kernel
> > (besides the fact it's already there, of course).
>
> The kernel uses it internally for numerous things like sequence
> numbers, syncookies, and UUIDs. So it doesn't take much to justify
> exporting it..
It should really have its own pool, so userspace can't DoS the parts
that really need randomness.
.TM.
--
____/ ____/ /
/ / / Marco Colombo
___/ ___ / / Technical Manager
/ / / ESI s.r.l.
_____/ _____/ _/ [email protected]
Chris Friesen wrote:
>The problem is this. If you have an embedded system that is headless,
>diskless, keyboardless, and
>mouseless, then your only remaining source of any interrupt-based
>entropy is the network.
Try replacing "network" with "/dev/zero" to see how your sentence
sounds, and then maybe the flaw in your reasoning will become apparent.
"If you have an embedded system that is headless, etc., then your
only remaining source of entropy is /dev/zero."
Well, sometimes there is just no reliable entropy source on hand.
Maybe it's better to admit that than to fool ourselves.
Marco Colombo wrote:
>I'd say that the /dev/urandom interface is somewhat broken: [...]
Also, the naming scheme is sub-optimal: it encourages
people to use /dev/random as the default, when /dev/random
should really be the exception rather than the norm.
David Wagner wrote:
> "If you have an embedded system that is headless, etc., then your
> only remaining source of entropy is /dev/zero."
>
> Well, sometimes there is just no reliable entropy source on hand.
> Maybe it's better to admit that than to fool ourselves.
And if you could time to the nanosecond exactly when each zero was read in, and the latencies of
those reads varied with the rest of the workload on the machine, then yes, you could get entropy
from reading /dev/zero.
I submit that if you face an attacker with the resources to model and predict your interrupt
handling down to the timing of the PCI bus (i.e. 30 nanoseconds) from the other end of your LAN,
then you will probably have the resources to use a hardware RNG. If you don't have those resources,
chances are good that your competitors don't have the ability to do the requisite network
modelling/influencing.
It's a calculated risk, but I would argue that some security (even if theoretically compromisable)
is better than none.
Chris
--
Chris Friesen | MailStop: 043/33/F10
Nortel Networks | work: (613) 765-0557
3500 Carling Avenue | fax: (613) 765-2986
Nepean, ON K2H 8E9 Canada | email: [email protected]
On Thursday, August 22, 2002, at 03:19, David Wagner wrote:
> Chris Friesen wrote:
>> The problem is this. If you have an embedded system that is headless,
>> diskless, keyboardless, and
>> mouseless, then your only remaining source of any interrupt-based
>> entropy is the network.
>
> Try replacing "network" with "/dev/zero" to see how your sentence
> sounds, and then maybe the flaw in your reasoning will become apparent.
>
> "If you have an embedded system that is headless, etc., then your
> only remaining source of entropy is /dev/zero."
>
> Well, sometimes there is just no reliable entropy source on hand.
> Maybe it's better to admit that than to fool ourselves.
And how would you see this? On a diskless/headless system, you'll be
sure to find more than one NIC, making it IMHO very unlikely that one
attacker could generate so much traffic on all of them that every
NIC is continuously generating interrupts. Granted, it's not the best
source of entropy, but it's guaranteed to be more than /dev/zero.
Having your network cards feed your entropy pool is not really bad, and
is a lot better than not being able to generate session keys at all for
your ssh connection.
---
With kind regards,
Remco Post
SARA - Stichting Academisch Rekencentrum Amsterdam http://www.sara.nl
High Performance Computing Tel. +31 20 592 8008 Fax. +31 20 668 3167
"I really didn't foresee the Internet. But then, neither did the computer
industry. Not that that tells us very much of course - the computer
industry
didn't even foresee that the century was going to end." -- Douglas Adams