2006-11-22 23:54:39

by Gunter Ohrner

[permalink] [raw]
Subject: Entropy Pool Contents

Hi!

(PEBKAC warning. I'm probably doing something dump. I just don't know
what...)

I seem to have an entropy pool on a headless machine which is not nearly
empty (a common problem in this case, I know), but completely empty and
stuck in this state...

Hornburg:~# cat /proc/sys/kernel/random/entropy_avail
0
Hornburg:~# fuser /dev/urandom
Hornburg:~# lsof | grep random
Hornburg:~# cat /proc/sys/kernel/random/entropy_avail
0
Hornburg:~# dd if=/dev/hdf of=/dev/urandom bs=512 count=1
1+0 records in
1+0 records out
512 bytes transferred in 0.016268 seconds (31473 bytes/sec)
Hornburg:~# dd if=/dev/hdf of=/dev/random bs=512 count=1
1+0 records in
1+0 records out
512 bytes transferred in 0.031943 seconds (16029 bytes/sec)
Hornburg:~# cat /proc/sys/kernel/random/entropy_avail
0
Hornburg:~# fuser /dev/urandom
Hornburg:~# fuser /dev/random
Hornburg:~# lsof | grep random
Hornburg:~# cat /proc/sys/kernel/random/poolsize
4096
Hornburg:~#

Also causing disk activities doesn't help at all. (Two disks on a Promise
PDC20268 controller.)

The system runs a rather ancient Debian Sarge 2.4 kernel:
Linux Hornburg 2.4.27-3-386 #1 Thu Sep 14 08:44:58 UTC 2006 i486 GNU/Linux

However as the machine itself is also ancient, the 2.4 seems like a good
match. And also 2.4 ought to have a refilling entropy pool, doesn't it?

Maybe someone can shed some light on what's happening here...

Greetings,

Gunter


2006-11-23 00:00:39

by Gunter Ohrner

[permalink] [raw]
Subject: Re: Entropy Pool Contents

Gunter Ohrner wrote:
> I'm probably doing something dump.
^^^^

Uh, yeah... It's getting late... The pool still is empty, though...

Greetings,

Gunter

2006-11-23 00:10:32

by Jan Engelhardt

[permalink] [raw]
Subject: Re: Entropy Pool Contents


>Hi!
>
>(PEBKAC warning. I'm probably doing something dump. I just don't know
>what...)
>
>I seem to have an entropy pool on a headless machine which is not nearly
>empty (a common problem in this case, I know), but completely empty and
>stuck in this state...
>
>Hornburg:~# cat /proc/sys/kernel/random/entropy_avail
>0

You really must have bad luck with your entropy...


01:05 ichi:/home/k > cat /proc/sys/kernel/random/entropy_avail
3596
01:08 ichi:/home/k > dd if=/dev/urandom of=/dev/null bs=3596 count=1
1+0 records in
1+0 records out
3596 bytes (3.6 kB) copied, 0.00115262 seconds, 3.1 MB/s
01:08 ichi:/home/k > cat /proc/sys/kernel/random/entropy_avail
157

however that might just be because I am in X, the mouse moves, kernels
compile, etc.


>Also causing disk activities doesn't help at all. (Two disks on a Promise
>PDC20268 controller.)

Disk activities are "somewhat predictable", like network traffic, and
hence do not (or should not - I have not checked it) contribute to the
pool. Note that urandom is the device which _always_ gives you data, and
when the pool is exhausted, returns pseudorandom data.
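
A minimal userspace sketch of that difference, assuming a 2.4/2.6-era
kernel: a non-blocking read of /dev/random fails with EAGAIN once
entropy_avail reaches zero, while /dev/urandom always returns data.

/* Sketch only: contrast /dev/random exhaustion with /dev/urandom. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    unsigned char buf[16];
    int fd = open("/dev/random", O_RDONLY | O_NONBLOCK);

    if (fd < 0)
        return 1;
    if (read(fd, buf, sizeof(buf)) < 0)
        perror("/dev/random");      /* EAGAIN once entropy_avail is 0 */
    else
        printf("/dev/random still had entropy to hand out\n");
    close(fd);

    /* urandom never blocks; it falls back to pseudorandom output */
    fd = open("/dev/urandom", O_RDONLY);
    if (fd >= 0 && read(fd, buf, sizeof(buf)) == (ssize_t)sizeof(buf))
        printf("/dev/urandom always delivers\n");
    if (fd >= 0)
        close(fd);
    return 0;
}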


>The system runs a rather ancient Debian Sarge 2.4 kernel:
>Linux Hornburg 2.4.27-3-386 #1 Thu Sep 14 08:44:58 UTC 2006 i486 GNU/Linux

I have no memories of a kernel this old. :>

>However as the machine itself is also ancient, the 2.4 seems like a good
>match. And also 2.4 ought to have a refilling entropy pool, doesn't it?
>
>Maybe someone can shed some light on what's happening here...


-`J'
--

2006-11-23 20:54:40

by Lennart Sorensen

[permalink] [raw]
Subject: Re: Entropy Pool Contents

On Thu, Nov 23, 2006 at 12:54:03AM +0100, Gunter Ohrner wrote:
> (PEBKAC warning. I'm probably doing something dump. I just don't know
> what...)
>
> I seem to have an entropy pool on a headless machine which is not nearly
> empty (a common problem in this case, I know), but completely empty and
> stuck in this state...
>
> Hornburg:~# cat /proc/sys/kernel/random/entropy_avail
> 0
> Hornburg:~# fuser /dev/urandom
> Hornburg:~# lsof | grep random
> Hornburg:~# cat /proc/sys/kernel/random/entropy_avail
> 0
> Hornburg:~# dd if=/dev/hdf of=/dev/urandom bs=512 count=1
> 1+0 records in
> 1+0 records out
> 512 bytes transferred in 0.016268 seconds (31473 bytes/sec)
> Hornburg:~# dd if=/dev/hdf of=/dev/random bs=512 count=1
> 1+0 records in
> 1+0 records out
> 512 bytes transferred in 0.031943 seconds (16029 bytes/sec)
> Hornburg:~# cat /proc/sys/kernel/random/entropy_avail
> 0
> Hornburg:~# fuser /dev/urandom
> Hornburg:~# fuser /dev/random
> Hornburg:~# lsof | grep random
> Hornburg:~# cat /proc/sys/kernel/random/poolsize
> 4096
> Hornburg:~#
>
> Also causing disk activities doesn't help at all. (Two disks on a Promise
> PDC20268 controller.)
>
> The system runs a rather ancient Debian Sarge 2.4 kernel:
> Linux Hornburg 2.4.27-3-386 #1 Thu Sep 14 08:44:58 UTC 2006 i486 GNU/Linux
>
> However as the machine itself is also ancient, the 2.4 seems like a good
> match. And also 2.4 ought to have a refilling entropy pool, doesn't it?
>
> Maybe someone can shed some light on what's happening here...

Only some devices/drivers generate entropy data. Some network drivers,
mouse, keyboard. None of the disk drivers appear to do so. Serial
ports do not in general either. On my headless systems I patched
pcnet32 and the 8250 driver to generate entropy since otherwise I tended
to run out very quickly.
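
The patch is typically a one-flag change in kernels of that era: pass
SA_SAMPLE_RANDOM to request_irq() so the IRQ layer mixes the interrupt
timing into the pool. A rough sketch of that kind of change
(illustrative, not the actual patches):

/* Sketch: 2.4-era driver open/probe path; SA_SAMPLE_RANDOM makes the
 * IRQ layer call add_interrupt_randomness() for this handler. */
if (request_irq(dev->irq, &pcnet32_interrupt,
                SA_SHIRQ | SA_SAMPLE_RANDOM,    /* SA_SAMPLE_RANDOM added */
                dev->name, (void *)dev))
    return -EAGAIN;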

--
Len Sorensen

2006-11-23 21:04:30

by Jeff Garzik

[permalink] [raw]
Subject: Re: Entropy Pool Contents

Gunter Ohrner wrote:
> Hi!
>
> (PEBKAC warning. I'm probably doing something dump. I just don't know
> what...)
>
> I seem to have an entropy pool on a headless machine which is not nearly
> empty (a common problem in this case, I know), but completely empty and
> stuck in this state...
>
> Hornburg:~# cat /proc/sys/kernel/random/entropy_avail
> 0
> Hornburg:~# fuser /dev/urandom
> Hornburg:~# lsof | grep random
> Hornburg:~# cat /proc/sys/kernel/random/entropy_avail
> 0
> Hornburg:~# dd if=/dev/hdf of=/dev/urandom bs=512 count=1
> 1+0 records in
> 1+0 records out
> 512 bytes transferred in 0.016268 seconds (31473 bytes/sec)
> Hornburg:~# dd if=/dev/hdf of=/dev/random bs=512 count=1
> 1+0 records in
> 1+0 records out
> 512 bytes transferred in 0.031943 seconds (16029 bytes/sec)
> Hornburg:~# cat /proc/sys/kernel/random/entropy_avail
> 0
> Hornburg:~# fuser /dev/urandom
> Hornburg:~# fuser /dev/random
> Hornburg:~# lsof | grep random
> Hornburg:~# cat /proc/sys/kernel/random/poolsize
> 4096
> Hornburg:~#
>
> Also causing disk activities doesn't help at all. (Two disks on a Promise
> PDC20268 controller.)
>
> The system runs a rather ancient Debian Sarge 2.4 kernel:
> Linux Hornburg 2.4.27-3-386 #1 Thu Sep 14 08:44:58 UTC 2006 i486 GNU/Linux
>
> However as the machine itself is also ancient, the 2.4 seems like a good
> match. And also 2.4 ought to have a refilling entropy pool, doesn't it?
>
> Maybe someone can shed some light on what's happening here...

Grab an entropy generator like egd or audio-entropyd, etc.

Jeff



2006-11-23 21:35:41

by Gunter Ohrner

[permalink] [raw]
Subject: Re: Entropy Pool Contents

Lennart Sorensen wrote:
> Only some devices/drivers generate entropy data. Some network drivers,

Yes, I know, but block device operations should, and directly feeding data
into /dev/*random, as I did, definitely should.

This machine usually has only very limited entropy available, but the pool
currently seems to be stuck at "0" - there's no way to get it to even
display a slightly different number. That's what pretty much confused me...

Normally doing disk IO helps a bit, but it currently does not at all.

> pcnet32 and the 8250 driver to generate entropy since otherwise I tended
> to run out very quickly.

I guess I also should do that - as this machine has several network cards on
different networks, that will be definitely more secure than running with
a completely empty entropy pool stuck at zero bits for several days in a
row...

Greetings,

Gunter

2006-11-23 21:41:27

by Gunter Ohrner

[permalink] [raw]
Subject: Re: Entropy Pool Contents

Jan Engelhardt wrote:
>>Hornburg:~# cat /proc/sys/kernel/random/entropy_avail
>>0
> You really must have bad luck with your entropy...

IMHO something really fishy's going on there. If I explicitly write data
into the pool, it should not stay at "zero", from what I understood about
how /dev/*random work.

> Disk activities are "somewhat predictable", like network traffic, and
> hence are not (or should not - have not checked it) contribute to the
> pool.

Well, they do, block device operations do, using the function
add_blkdev_randomness, as far as I know.
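
For reference, the 2.4 hook in question is declared in linux/random.h
and, as far as I recall, gets called from the block layer's
request-completion path, roughly like this (sketch from memory):

/* 2.4 sketch: block request completion credits disk timing entropy */
extern void add_blkdev_randomness(int major);   /* linux/random.h */

/* ... somewhere in the end-of-request path: */
add_blkdev_randomness(MAJOR(req->rq_dev));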

> Note that urandom is the device which _always_ gives you data, and
> when the pool is exhausted, returns pseudorandom data.

I know, and running on deterministically computed random values only for
days in a row is not a situation I'm particularly happy about...

I'm mainly wondering why writing stuff to /dev/*random does not change the
entropy from zero to at least any low non-zero value...

Greetings,

Gunter

2006-11-23 21:45:10

by Gunter Ohrner

[permalink] [raw]
Subject: Re: Entropy Pool Contents

Jeff Garzik wrote:
> Grab an entropy generator like egd or audio-entropyd, etc.

I thought about running rngd, but will this be of any help if writing
into /dev/*random does not change the entropy from zero on this machine?

Greetings,

Gunter

2006-11-24 00:48:59

by Theodore Ts'o

[permalink] [raw]
Subject: Re: Entropy Pool Contents

On Thu, Nov 23, 2006 at 01:10:08AM +0100, Jan Engelhardt wrote:
> Disk activities are "somewhat predictable", like network traffic, and
> hence do not (or should not - I have not checked it) contribute to the
> pool. Note that urandom is the device which _always_ gives you data, and
> when the pool is exhausted, returns pseudorandom data.

Please read the following article before making such assertions:

D. Davis, R. Ihaka, P.R. Fenstermacher, "Cryptographic
Randomness from Air Turbulence in Disk Drives", in Advances in
Cryptology -- CRYPTO '94 Conference Proceedings, edited by Yvo
G. Desmedt, pp.114--120. Lecture Notes in Computer Science
#839. Heidelberg: Springer-Verlag, 1994.
http://world.std.com/~dtd/random/forward.ps

Regards,

- Ted

2006-11-24 01:01:51

by Jeff Garzik

[permalink] [raw]
Subject: Re: Entropy Pool Contents

Theodore Tso wrote:
> On Thu, Nov 23, 2006 at 01:10:08AM +0100, Jan Engelhardt wrote:
>> Disk activities are "somewhat predictable", like network traffic, and
>> hence do not (or should not - I have not checked it) contribute to the
>> pool. Note that urandom is the device which _always_ gives you data, and
>> when the pool is exhausted, returns pseudorandom data.
>
> Please read the following article before making such assertions:
>
> D. Davis, R. Ihaka, P.R. Fenstermacher, "Cryptographic
> Randomness from Air Turbulence in Disk Drives", in Advances in
> Cryptology -- CRYPTO '94 Conference Proceedings, edited by Yvo
> G. Desmedt, pp.114--120. Lecture Notes in Computer Science
> #839. Heidelberg: Springer-Verlag, 1994.
> http://world.std.com/~dtd/random/forward.ps

Note that the controller hardware in question plays a large role in
these things. Most modern network controllers, and a few recent SATA or
SAS controllers, include hardware interrupt mitigation, which can cause
interrupts to fire on a timed basis in some load profiles.

Compounding that, both software and hardware interrupt mitigation lead
(intentionally) to a marked decrease in overall interrupts, which leads
to less entropy even if the interrupt handler is sampling randomness.

IMO there is an overall trend of needing more entropy than you have on
headless network servers. If you have a hardware RNG, use that and rngd
to fill the entropy pool. If you don't, look into various entropy
gathering daemons (audio-entropyd, video-entropyd, egd, and others).
You can gather entropy from system stats, open microphones, open video
channels, thermal diodes, ...

Jeff



2006-11-26 01:26:40

by folkert

[permalink] [raw]
Subject: Re: Entropy Pool Contents

> Hornburg:~# cat /proc/sys/kernel/random/entropy_avail
> 0

Please have a look at:
audio-entropyd: http://www.vanheusden.com/aed/
fills the kernel entropy buffer with noise from your audio-card
video-entropyd: http://www.vanheusden.com/ved/
fills the kernel entropy buffer with noise from a video4linux device,
e.g. a webcam or a framegrabber or whatever


Folkert van Heusden

--
http://www.vanheusden.com/multitail - win a vlaai from multivlaai! Make
sure that multitail gets included in Fedora Core, AIX, Solaris or
HP/UX and win a vlaai of your choice
----------------------------------------------------------------------
Phone: +31-6-41278122, PGP-key: 1F28D8AE, http://www.vanheusden.com

2006-11-27 16:15:48

by Phillip Susi

[permalink] [raw]
Subject: Re: Entropy Pool Contents

Gunter Ohrner wrote:
> IMHO something really fishy's going on there. If I explicitly write data
> into the pool, it should not stay at "zero", from what I understood about
> how /dev/*random work.
>

<snip>

> I'm mainly wondering why writing stuff to /dev/*random does not change the
> entropy from zero to at least any low non-zero value...
>

I ran into this the other day myself and when I investigated the kernel
code, I found that writes to /dev/random do accept the data into the
entropy pool, but do NOT update the entropy estimate. In order to do
that, you have to use a root only ioctl to add the data and update the
estimate. I am not sure why this is, or if there is a tool already
written somewhere to use this ioctl, maybe someone else can comment?
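
The ioctl in question is RNDADDENTROPY from <linux/random.h>, which is
what rngd uses. A minimal sketch with a placeholder buffer (not a real
entropy source):

/* Sketch of the root-only path: mix data into the pool AND credit the
 * entropy estimate. sample[] is a placeholder; a real tool would read
 * a hardware RNG or similar. Needs CAP_SYS_ADMIN. */
#include <fcntl.h>
#include <linux/random.h>       /* RNDADDENTROPY, struct rand_pool_info */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>

int main(void)
{
    unsigned char sample[64];
    struct rand_pool_info *info;
    int fd;

    memset(sample, 0x5a, sizeof(sample));       /* placeholder data only */

    info = malloc(sizeof(*info) + sizeof(sample));
    if (!info)
        return 1;
    info->entropy_count = (int)(8 * sizeof(sample));  /* claimed entropy, in bits */
    info->buf_size = (int)sizeof(sample);             /* payload size, in bytes */
    memcpy(info->buf, sample, sizeof(sample));

    fd = open("/dev/random", O_RDWR);
    if (fd < 0 || ioctl(fd, RNDADDENTROPY, info) < 0)
        perror("RNDADDENTROPY");
    else
        printf("credited %d bits of entropy\n", info->entropy_count);

    if (fd >= 0)
        close(fd);
    free(info);
    return 0;
}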


2006-11-27 16:20:05

by Chris Friesen

[permalink] [raw]
Subject: Re: Entropy Pool Contents

Phillip Susi wrote:

> I ran into this the other day myself and when I investigated the kernel
> code, I found that writes to /dev/random do accept the data into the
> entropy pool, but do NOT update the entropy estimate. In order to do
> that, you have to use a root only ioctl to add the data and update the
> estimate. I am not sure why this is, or if there is a tool already
> written somewhere to use this ioctl, maybe someone else can comment?

I believe the idea was that you don't want random users being able to
artificially inflate your entropy count. So the kernel tries to make
use of entropy entered by regular users (by stirring it into the pool)
but it doesn't increase the entropy estimate unless root says it's okay.

Chris

2006-11-27 18:54:25

by Phillip Susi

[permalink] [raw]
Subject: Re: Entropy Pool Contents

Chris Friesen wrote:
> I believe the idea was that you don't want random users being able to
> artificially inflate your entropy count. So the kernel tries to make
> use of entropy entered by regular users (by stirring it into the pool)
> but it doesn't increase the entropy estimate unless root says it's okay.

Why are non-root users allowed write access in the first place? Can't
they pollute the entropy pool and thus actually REDUCE the amount of good
entropy? It seems to me that only root should have write access in the
first place because of this, and thus, anything root writes should
increase the entropy count since one can assume that root is supplying
good random data for the purpose of increasing the entropy count.

I was planning on just setting up a little root cron script to pull some
random data from another machine on the network to add to the local
pool, then push some random data back to the other machine to increase
its pool, but found that this doesn't work due to this restriction.

2006-11-27 19:39:46

by David Wagner

[permalink] [raw]
Subject: Re: Entropy Pool Contents

Phillip Susi wrote:
>Why are non-root users allowed write access in the first place? Can't
>they pollute the entropy pool and thus actually REDUCE the amount of good
>entropy?

Nope, I don't think so. If they could, that would be a security hole,
but /dev/{,u}random was designed to try to make this impossible, assuming
the cryptographic algorithms are secure.

After all, some of the entropy sources come from untrusted sources and
could be manipulated by an external adversary who doesn't have any
account on your machine (root or non-root), so the scheme has to be
secure against introduction of maliciously chosen samples in any event.

2006-11-27 20:38:09

by Phillip Susi

[permalink] [raw]
Subject: Re: Entropy Pool Contents

David Wagner wrote:
> Nope, I don't think so. If they could, that would be a security hole,
> but /dev/{,u}random was designed to try to make this impossible, assuming
> the cryptographic algorithms are secure.
>
> After all, some of the entropy sources come from untrusted sources and
> could be manipulated by an external adversary who doesn't have any
> account on your machine (root or non-root), so the scheme has to be
> secure against introduction of maliciously chosen samples in any event.

Assuming it works because it would be a bug if it didn't is a logical
fallacy. Either the new entropy pool is guaranteed to be improved by
injecting data or it isn't. If it is, then only root should be allowed
to inject data. If it isn't, then the entropy estimate should increase
when the pool is stirred.

2006-11-27 20:46:58

by David Wagner

[permalink] [raw]
Subject: Re: Entropy Pool Contents

Phillip Susi wrote:
>David Wagner wrote:
>> Nope, I don't think so. If they could, that would be a security hole,
>> but /dev/{,u}random was designed to try to make this impossible, assuming
>> the cryptographic algorithms are secure.
>>
>> After all, some of the entropy sources come from untrusted sources and
>> could be manipulated by an external adversary who doesn't have any
>> account on your machine (root or non-root), so the scheme has to be
>> secure against introduction of maliciously chosen samples in any event.
>
>Assuming it works because it would be a bug if it didn't is a logical
>fallacy. Either the new entropy pool is guaranteed to be improved by
>injecting data or it isn't. If it is, then only root should be allowed
>to inject data. If it isn't, then the entropy estimate should increase
>when the pool is stirred.

Sorry, but I disagree with just about everything you wrote in this
message. I'm not committing any logical fallacies. I'm not assuming
it works because it would be a bug if it didn't; I'm just trying to
help you understand the intuition. I have looked at the algorithm
used by /dev/{,u}random, and I am satisfied that it is safe to feed in
entropy samples from malicious sources, as long as you don't bump up the
entropy counter when you do so. Doing so can't do any harm, and cannot
reduce the entropy in the pool. However, there is no guarantee that
it will increase the entropy. If the adversary knows what bytes you
are feeding into the pool, then it doesn't increase the entropy count,
and the entropy estimate should not be increased.

Therefore:
- It is safe to allow non-root users to inject data into the pool
by writing to /dev/random, as long as you don't bump up the entropy
estimate. Doing so cannot decrease the amount of entropy in the
pool.
- It is not a good idea to bump up the entropy estimate when non-root
users write to /dev/random. If a malicious non-root user writes
the first one million digits of pi to /dev/random, then this hasn't
increased the uncertainty that this attacker has in the pool, so
you shouldn't increase the entropy estimate.
- Whether you automatically bump up the entropy estimate when
root users write to /dev/random is a design choice where you could
reasonably go either way. On the one hand, you might want to ensure
that root has to take some explicit action to allege that it is
providing a certain degree of entropy, and you might want to insist
that root tell /dev/random how much entropy it added (since root
knows best where the data came from and how much entropy it is likely
to contain). On the other hand, you might want to make it easier
for shell scripts to add entropy that will count towards the overall
entropy estimate, without requiring them to go through weird
contortions to call various ioctl()s. I can see arguments both
ways, but the current behavior seems reasonable and defensible.

Note that, in any event, the vast majority of applications should be
using /dev/urandom (not /dev/random!), so in an ideal world, most of
these issues should be pretty much irrelevant to the vast majority of
applications. Sadly, in practice many applications wrongly use
/dev/random when they really should be using /dev/urandom, either out
of ignorance, or because of serious flaws in the /dev/random man page.
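
Concretely, "use /dev/urandom" in an application just means a short
read loop like the sketch below (the helper name is made up for the
example):

/* Sketch: the normal way for an application to get random bytes.
 * /dev/urandom never blocks, but reads can be short, so loop. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

static int read_urandom(unsigned char *out, size_t len)
{
    int fd = open("/dev/urandom", O_RDONLY);
    size_t done = 0;

    if (fd < 0)
        return -1;
    while (done < len) {
        ssize_t n = read(fd, out + done, len - done);
        if (n <= 0) {
            close(fd);
            return -1;
        }
        done += (size_t)n;
    }
    close(fd);
    return 0;
}

int main(void)
{
    unsigned char key[16];

    if (read_urandom(key, sizeof(key)) == 0)
        printf("got %zu random bytes\n", sizeof(key));
    return 0;
}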

2006-11-27 21:52:26

by Kyle Moffett

[permalink] [raw]
Subject: Re: Entropy Pool Contents

On Nov 27, 2006, at 15:40:16, David Wagner wrote:
> Phillip Susi wrote:
>> David Wagner wrote:
>>> Nope, I don't think so. If they could, that would be a security
>>> hole, but /dev/{,u}random was designed to try to make this
>>> impossible, assuming the cryptographic algorithms are secure.

Actually, our current /dev/random implementation is secure even if
the cryptographic algorithms can be broken under traditional
circumstances. Essentially /dev/random will refuse to output any
more data well before enough could be revealed to predict the current
pool state, such that it is fairly secure even in the event of total
failure of the cryptographic primitives.

>>> After all, some of the entropy sources come from untrusted
>>> sources and could be manipulated by an external adversary who
>>> doesn't have any account on your machine (root or non-root), so
>>> the scheme has to be secure against introduction of maliciously
>>> chosen samples in any event.

The way the /dev/random pool works is that writes are always
guaranteed to add entropy to the pool (or at least never remove it),
even if someone runs "dd if=/dev/zero of=/dev/random". The initial
state for any given write is secure, and when hashing a random value
for which a significant part of the state has not even been
theoretically revealed with a known value, the result is still
random. Even beyond that, the random pool also hashes the current
value of the cycle-counter or time of day into the pool with each
call, adding a bit of extra entropy in any case. The same hashing of
the time of day also occurs on reads.

>> Assuming it works because it would be a bug if it didn't is a
>> logical fallacy. Either the new entropy pool is guaranteed to be
>> improved by injecting data or it isn't. If it is, then only root
>> should be allowed to inject data. If it isn't, then the entropy
>> estimate should increase when the pool is stirred.

Well, actually the entropy pool is guaranteed not to lose entropy
when it is stirred with data, but the whole point is to ensure that
no userspace program *ever* has enough knowledge of the state of the
pool to even begin a theoretical attack against past or future random
values. As a result it is perfectly OK for programs to dump whatever
data they want into the random pool as extra security for _themselves_,
but the kernel does not trust it as extra security for itself. Only
root may inject guaranteed entropy and even then only using a
specific ioctl, but any program may stir up the entropy pool however
much it likes.

> I am satisfied that it is safe to feed in entropy samples from
> malicious sources, as long as you don't bump up the entropy counter
> when you do so. Doing so can't do any harm, and cannot reduce the
> entropy in the pool. However, there is no guarantee that it will
> increase the entropy. If the adversary knows what bytes you are
> feeding into the pool, then it doesn't increase the entropy count,
> and the entropy estimate should not be increased.

Exactly.

> Note that, in any event, the vast majority of applications should
> be using /dev/urandom (not /dev/random!), so in an ideal world,
> most of these issues should be pretty much irrelevant to the vast
> majority of applications. Sadly, in practice many applications
> wrongly use /dev/random when they really should be using /dev/
> urandom, either out of ignorance, or because of serious flaws in
> the /dev/random man page.

Precisely. Personally I generate my random passwords using a little
perl script reading from /dev/random (as opposed to /dev/urandom) but
that's more due to personal paranoia than any practical reason.

When generating long-term cryptographic private keys, however, you
*should* use /dev/random as it provides better guarantees about
theoretical randomness security than does /dev/urandom. Such
guarantees are useful when the random data will be used as a
fundamental cornerstone of data security for a server or network
(think your root CA certificate or HTTPS certificate for your million-
dollar-per-year web store).

Cheers,
Kyle Moffett

2006-11-27 22:23:13

by Gunter Ohrner

[permalink] [raw]
Subject: Re: Entropy Pool Contents

Phillip Susi wrote:
>> I'm mainly wondering why writing stuff to /dev/*random does not change
>> the entropy from zero to at least any low non-zero value...
> I ran into this the other day myself and when I investigated the kernel
> code, I found that writes to /dev/random do accept the data into the
> entropy pool, but do NOT update the entropy estimate. In order to do

Heck, you're right.

Thanks, that's just the answer I was looking for.

> that, you have to use a root only ioctl to add the data and update the
> estimate. I am not sure why this is, or if there is a tool already
> written somewhere to use this ioctl, maybe someone else can comment?

rngd seems to do that, from reading the documentation.

Greetings,

Gunter

2006-11-28 04:23:49

by David Wagner

[permalink] [raw]
Subject: Re: Entropy Pool Contents

Warning: tangent with little practical relevance follows:

Kyle Moffett wrote:
>Actually, our current /dev/random implementation is secure even if
>the cryptographic algorithms can be broken under traditional
>circumstances.

Maybe. But, I've never seen any careful analysis to support this or
characterize exactly what assumptions are needed for this to be true.
Some weakened version of your claim might be accurate, but at a minimum
you probably need to make some heuristic assumptions about the sources
of randomness and the distribution of values they generate, and you may
also need some assumptions that the SHA hash function isn't *totally*
broken. If you make worst-case assumptions, I doubt that this claim
can be justified in any rigorous way.

(For instance, compressing random samples with the CRC process is a
heuristic that presumably works fine for most randomness sources, but
it cannot be theoretically justified: there exist sources for which it
is problematic. Also, the entropy estimator is heuristic and will
overestimate the true amount of entropy available, for some sources.
Likewise, if you assume that the cryptographic hash function is totally
insecure, then it is plausible that carefully chosen malicious writes to
/dev/random might be able to reduce the total amount of entropy in the
pool -- at least, I don't see how to prove that this is impossible.)

Anyway, I suspect this is all pretty thoroughly irrelevant in practice.
It is very unlikely that the crypto schemes are the weakest link in the
security of a typical Linux system, so I'm just not terribly worried
about the scenario where the cryptography is completely broken. It's
like talking about whether, hypothetically, /dev/random would still be
secure if pigs had wings.

>When generating long-term cryptographic private keys, however, you
>*should* use /dev/random as it provides better guarantees about
>theoretical randomness security than does /dev/urandom. Such
>guarantees are useful when the random data will be used as a
>fundamental cornerstone of data security for a server or network
>(think your root CA certificate or HTTPS certificate for your million-
>dollar-per-year web store).

Well, if you want to talk about really high-value keys like the scenarios
you mention, you probably shouldn't be using /dev/random, either; you
should be using a hardware security module with a built-in FIPS certified
hardware random number source. The risk of your server getting hacked
probably exceeds the risk of a PRNG failure.

I agree that there is a plausible argument that it's safer to use
/dev/random when generating, say, your long-term PGP private key.
I think that's a reasonable view. Still, the difference in risk
level in practice is probably fairly minor. The algorithms that use
that private key are probably going to rely upon the security of hash
functions and other crypto primitives, anyway. So if you assume that
all modern crypto algorithms are secure, then /dev/urandom may be just
as good as /dev/random; whereas if you assume that all modern crypto
algorithms are broken, then it may not matter much what you do. I can
see a reasonable argument for using /dev/random for those kinds of keys,
on general paranoia and defense-in-depth grounds, but you're shooting
at a somewhat narrow target. You only benefit if the crypto algorithms
are broken just enough to make a difference between /dev/random and
/dev/urandom, but not broken enough to make PGP insecure no matter how
you pick your random numbers. That's the narrow target. There are
better things to spend your time worrying about.

Nothing you say is unreasonable; I'm just sharing a slightly different
perspective on it all.

2006-11-28 05:19:32

by Ben Pfaff

[permalink] [raw]
Subject: Re: Entropy Pool Contents

[email protected] (David Wagner) writes:

> Well, if you want to talk about really high-value keys like the scenarios
> you mention, you probably shouldn't be using /dev/random, either; you
> should be using a hardware security module with a built-in FIPS certified
> hardware random number source.

Is there such a thing? "Annex C: Approved Random Number
Generators for FIPS PUB 140-2, Security Requirements for
Cryptographic Modules", or at least the version of it I was able
to find with Google in a few seconds, simply states:

There are no FIPS Approved nondeterministic random number
generators.
--
"Welcome to the Slippery Slope. Here is your handbasket.
Say, can you work 70 hours this week?"
--Ron Mansolino

by Henrique de Moraes Holschuh

Subject: Re: Entropy Pool Contents

On Mon, 27 Nov 2006, Ben Pfaff wrote:
> [email protected] (David Wagner) writes:
> > Well, if you want to talk about really high-value keys like the scenarios
> > you mention, you probably shouldn't be using /dev/random, either; you
> > should be using a hardware security module with a built-in FIPS certified
> > hardware random number source.
>
> Is there such a thing? "Annex C: Approved Random Number
> Generators for FIPS PUB 140-2, Security Requirements for
> Cryptographic Modules", or at least the version of it I was able
> to find with Google in a few seconds, simply states:
>
> There are no FIPS Approved nondeterministic random number
> generators.

There used to exist a battery of tests for this, but a FIPS revision removed
them. You cannot easily decide whether a True RNG is secure or not with
simple tests.

I'd suggest googling for the papers validating the Intel and VIA PadLock
hardware RNGs; they are much better reading than FIPS for this.

If you want a software implementation of all the former FIPS tests, please
get the Debian fork of rng-tools, or Jeff's upstream rng-tools (Debian's has
a lot more stuff, but I don't recall if it has any extra FIPS
functionality).

I should get around to submitting patches to Jeff one of these years. It is
about a man-week of tedious work, though.

--
"One disk to rule them all, One disk to find them. One disk to bring
them all and in the darkness grind them. In the Land of Redmond
where the shadows lie." -- The Silicon Valley Tarot
Henrique Holschuh

2006-11-28 13:05:01

by David Wagner

[permalink] [raw]
Subject: Re: Entropy Pool Contents

Continuing the tangent:

Henrique de Moraes Holschuh wrote:
>On Mon, 27 Nov 2006, Ben Pfaff wrote:
>> [email protected] (David Wagner) writes:
>> > Well, if you want to talk about really high-value keys like the scenarios
>> > you mention, you probably shouldn't be using /dev/random, either; you
>> > should be using a hardware security module with a built-in FIPS certified
>> > hardware random number source.
>>
>> Is there such a thing? [...]
>
>There used to exist a battery of tests for this, but a FIPS revision removed
>them. [...]

The point I was making in my email was not about the use of FIPS
randomness tests. The FIPS randomness tests are not very important.
The point I was making was about the use of a hardware security module
to store really high-value keys. If you have a really high-value key,
that key should never be stored on a Linux server: standard advice is
that it should be generated on a hardware security module (HSM) and never
leave the HSM. If you are in charge of Verisign's root cert private key,
you should never let this private key escape onto any general-purpose
computer (including any Linux machine). The reason for this advice is
that it's probably much harder to hack a HSM remotely than to hack a
general-purpose computer (such as a Linux machine).

Again, this is probably a tangent from anything related to Linux kernel
development.

2006-11-28 13:15:30

by Martin Mares

[permalink] [raw]
Subject: Re: Entropy Pool Contents

Hello!

> - Whether you automatically bump up the entropy estimate when
> root users write to /dev/random is a design choice where you could
> reasonably go either way. On the one hand, you might want to ensure
> that root has to take some explicit action to allege that it is
> providing a certain degree of entropy, and you might want to insist
> that root tell /dev/random how much entropy it added (since root
> knows best where the data came from and how much entropy it is likely
> to contain).

More importantly, it should be possible for root to write to /dev/random
_without_ increasing the entropy count, for example when restoring random
pool contents after reboot. In such cases you want the pool to contain
at least some unpredictable data before real entropy arrives, so that
/dev/urandom cannot be guessed, but unless you remember the entropy
counter as well, you should not add any entropy.
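
In code, such a restore is just a plain write(), which stirs the pool
without crediting any entropy (contrast with the RNDADDENTROPY ioctl).
A sketch, with an illustrative seed-file path:

/* Sketch: boot-time restore of a saved seed. The write() mixes the
 * data in but leaves the entropy count untouched. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    unsigned char seed[512];
    int in = open("/var/lib/random-seed", O_RDONLY);    /* example path */
    int out = open("/dev/urandom", O_WRONLY);           /* or /dev/random */
    ssize_t n;

    if (in < 0 || out < 0)
        return 1;
    n = read(in, seed, sizeof(seed));
    if (n > 0 && write(out, seed, (size_t)n) != n)
        perror("write");
    close(in);
    close(out);
    return 0;
}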

Have a nice fortnight
--
Martin `MJ' Mares <[email protected]> http://atrey.karlin.mff.cuni.cz/~mj/
Faculty of Math and Physics, Charles University, Prague, Czech Rep., Earth
Q: How to start hacking Linux? A: vi /boot/vmlinuz

2006-11-28 13:33:27

by Eran Tromer

[permalink] [raw]
Subject: Re: Entropy Pool Contents

On 2006-11-27 23:52, Kyle Moffett wrote:
> Actually, our current /dev/random implementation is secure even if the
> cryptographic algorithms can be broken under traditional circumstances.

This is far from obvious, and in my opinion incorrect. David explained
this very well in his follow-up. Other pertinent references are
Gutterman Pinkas Reinman '06 [1], Barak and Halevi '05 [2, Section 5.1],
and the "/dev/random is probably not" thread [3].

The current algorithm is probably OK for casual users in normal
circumstances, but advertising it as absolutely secure is dangerously
misleading.

Eran

[1] http://www.gutterman.net/publications/GuttermanPinkasReinman2006.pdf
[2] http://eprint.iacr.org/2005/029
[3] http://www.mail-archive.com/[email protected]/msg04215.html

2006-11-28 17:21:52

by Phillip Susi

[permalink] [raw]
Subject: Re: Entropy Pool Contents

Martin Mares wrote:
> More importantly, it should be possible for root to write to /dev/random
> _without_ increasing the entropy count, for example when restoring random
> pool contents after reboot. In such cases you want the pool to contain
> at least some unpredictable data before real entropy arrives, so that
> /dev/urandom cannot be guessed, but unless you remember the entropy
> counter as well, you should not add any entropy.

After a reboot the entropy estimate starts at zero, so if you are adding
data to the pool from the previous boot, you DO want the estimate to
increase because you are, in fact, adding entropy.

2006-11-28 17:24:13

by Martin Mares

[permalink] [raw]
Subject: Re: Entropy Pool Contents

Hello!

> After a reboot the entropy estimate starts at zero, so if you are adding
> data to the pool from the previous boot, you DO want the estimate to
> increase because you are, in fact, adding entropy.

I'm adding entropy, but unless I record the exact amount of entropy when
dumping the pool, I don't know how much I am adding, so using any fixed
number is obviously wrong.

Have a nice fortnight
--
Martin `MJ' Mares <[email protected]> http://atrey.karlin.mff.cuni.cz/~mj/
Faculty of Math and Physics, Charles University, Prague, Czech Rep., Earth
"Object orientation is in the mind, not in the compiler." -- Alan Cox

2006-11-28 17:41:42

by Phillip Susi

[permalink] [raw]
Subject: Re: Entropy Pool Contents

First, please don't remove the Cc: list.

David Wagner wrote:
> Sorry, but I disagree with just about everything you wrote in this
> message. I'm not committing any logical fallacies. I'm not assuming
> it works because it would be a bug if it didn't; I'm just trying to

>> Nope, I don't think so. If they could, that would be a security hole,
>> but /dev/{,u}random was designed to try to make this impossible, assuming
>> the cryptographic algorithms are secure.

That sure reads to me like you were saying that it would be a security
hole, so that can't be how it works. Maybe I just misinterpreted, but
at any rate it is a non sequitur, so let's move on.


> help you understand the intuition. I have looked at the algorithm
> used by /dev/{,u}random, and I am satisfied that it is safe to feed in
> entropy samples from malicious sources, as long as you don't bump up the
> entropy counter when you do so. Doing so can't do any harm, and cannot
> reduce the entropy in the pool. However, there is no guarantee that
> it will increase the entropy. If the adversary knows what bytes you
> are feeding into the pool, then it doesn't increase the entropy count,
> and the entropy estimate should not be increased.

I still don't see how feeding tons of zeros ( or some other carefully
crafted sequence ) in will not decrease the entropy of the pool ( even
if it does so in a way that is impossible to predict ), but assuming it
can't, what good does a non root user do by writing to random? If it
does not increase the entropy estimate, and it may not actually increase
the entropy, why bother allowing it?

> - Whether you automatically bump up the entropy estimate when
> root users write to /dev/random is a design choice where you could
> reasonably go either way. On the one hand, you might want to ensure
> that root has to take some explicit action to allege that it is
> providing a certain degree of entropy, and you might want to insist
> that root tell /dev/random how much entropy it added (since root
> knows best where the data came from and how much entropy it is likely
> to contain). On the other hand, you might want to make it easier
> for shell scripts to add entropy that will count towards the overall
> entropy estimate, without requiring them to go through weird
> contortions to call various ioctl()s. I can see arguments both
> ways, but the current behavior seems reasonable and defensible.
>

I would favor the latter argument since the entropy estimate is only
that: an estimate. Trying to come up with an estimate of the amount of
entropy that will be added to the existing unknown pool after it is
stirred by the new data seems to be an exercise in futility.


2006-11-28 17:46:14

by Phillip Susi

[permalink] [raw]
Subject: Re: Entropy Pool Contents

Martin Mares wrote:
> I'm adding entropy, but unless I record the exact amount of entropy when
> dumping the pool, I don't know how much I am adding, so using any fixed
> number is obviously wrong.

You aren't dumping and restoring the entropy pool; you are dumping
random data generated by the pool, and using that data to stir the new
entropy pool after the next boot. There is no direct relationship
between the entropy of the old and new pools. The kernel needs to
decide how much entropy you added based on how much random data you
provide it with to stir the pool.


2006-11-28 17:49:45

by Martin Mares

[permalink] [raw]
Subject: Re: Entropy Pool Contents

Hello!

> You aren't dumping and restoring the entropy pool; you are dumping
> random data generated by the pool, and using that data to stir the new
> entropy pool after the next boot. There is no direct relationship
> between the entropy of the old and new pools. The kernel needs to
> decide how much entropy you added based on how much random data you
> provide it with to stir the pool.

Yes, but the point is that you cannot tell how much randomness is in the
data you provide.

Have a nice fortnight
--
Martin `MJ' Mares <[email protected]> http://atrey.karlin.mff.cuni.cz/~mj/
Faculty of Math and Physics, Charles University, Prague, Czech Rep., Earth
Noli tangere fila metalica, ne in solum incasa quidem.

2006-11-28 17:59:22

by Martin Mares

[permalink] [raw]
Subject: Re: Entropy Pool Contents

Hello!

> I still don't see how feeding tons of zeros ( or some other carefully
> crafted sequence ) in will not decrease the entropy of the pool ( even
> if it does so in a way that is impossible to predict ), but assuming it
> can't, what good does a non root user do by writing to random?

Even if so, you should control that by filesystem permissions, not by
in-kernel policy.

Have a nice fortnight
--
Martin `MJ' Mares <[email protected]> http://atrey.karlin.mff.cuni.cz/~mj/
Faculty of Math and Physics, Charles University, Prague, Czech Rep., Earth
Man is the highest animal. Man does the classifying.

2006-11-28 18:40:16

by Phillip Susi

[permalink] [raw]
Subject: Re: Entropy Pool Contents

Martin Mares wrote:
> Yes, but the point is that you cannot tell how much randomness is in the
> data you provide.

That is exactly my point. Since you can not tell how much randomness is
in the data you provide, you can not tell the kernel how much to add to
its entropy estimate. Instead it just has to estimate based on the
amount of data you provide.

2006-11-28 21:05:35

by Martin Mares

[permalink] [raw]
Subject: Re: Entropy Pool Contents

Hello!

> That is exactly my point. Since you can not tell how much randomness is
> in the data you provide, you can not tell the kernel how much to add to
> its entropy estimate. Instead it just has to estimate based on the
> amount of data you provide.

No, the only safe thing the kernel can do is to add NO entropy,
unless explicitly told otherwise.

Have a nice fortnight
--
Martin `MJ' Mares <[email protected]> http://atrey.karlin.mff.cuni.cz/~mj/
Faculty of Math and Physics, Charles University, Prague, Czech Rep., Earth
"All that is necessary for the triumph of evil is that good men do nothing." -- E. Burke

2006-11-28 22:52:38

by Eran Tromer

[permalink] [raw]
Subject: Re: Entropy Pool Contents

On 2006-11-28 19:42, Phillip Susi wrote:

> what good does a non root user do by writing to random? If it
> does not increase the entropy estimate, and it may not actually increase
> the entropy, why bother allowing it?

It is not guaranteed to actually increase the entropy, but it might. And
in case the entropy was previously overestimated, you will have gained
security.

Think of it this way: you can have several users feeding the entropy
pool, and it suffices that *any* of them is feeding strings with nonzero
entropy (with respect to the adversary) in order to get that gain.


That said, I don't feel comfortable about allowing untrusted users to
directly feed the entropy pool, as it can aggravate some failure modes.
To take an extreme example, suppose the adversary has somehow learned
the full state of the pool, i.e., the real entropy is 0, contrary to the
kernel's estimate.

Can things get any worse? Sure they can:

Thus far the adversary can mount attacks that require *known*
randomness. However, if he can now feed his own strings into the pool
mixer as an untrusted user, then he can achieve a *chosen* randomness,
and this undoubtedly enables a wider class of attacks (e.g., covert
channels).

Fully chosen randomness is unlikely here due to the SHA-1
postprocessing, but numerous bits in the next /dev/random read can be
fixed simply by exhaustive search. Worse yet, if the injected string is
mixed directly into the pool without cryptographic preprocessing, then
the exhaustive search can be done via off-line preprocessing: once the
primary pool is estimated to have full entropy, the /dev/random
algorithm lets you linearly manipulate the /dev/random pool into any
state. That's a nasty design flaw, BTW (see Gutterman et al., section 3).

Of course, in principle the same is possible by manipulating the
existing /dev/random event sources. But it's much harder to produce
bit-exact inputs through such indirect means.

Eran

2006-11-29 20:03:35

by Phillip Susi

[permalink] [raw]
Subject: Re: Entropy Pool Contents

Martin Mares wrote:
> No, the only safe thing the kernel can do is to add NO entropy,
> unless explicitly told otherwise.

Ahh, I think I see where I got confused now. I thought you wanted to
save and restore the entropy estimate after a reboot. I was trying to
say that you don't want to/can't do that. I would think that since you
are, in fact, adding some entropy by writing the data, increasing
the entropy count would be fine; you just can't set it to its 'full'
value (assuming it was full at shutdown).

> More importantly, it should be possible for root to write to /dev/random
> _without_ increasing the entropy count, for example when restoring random
> pool contents after reboot. In such cases you want the pool to contain
> at least some unpredictable data before real entropy arrives, so that
> /dev/urandom cannot be guessed, but unless you remember the entropy
> counter as well, you should not add any entropy.

I believe that random and urandom use separate entropy pools, so boot
scripts save/restore urandom to keep that nicely seeded, but not random.
It has to start clean each boot and rely on entropy created by the
usual input methods. That is actually why I have a problem with the
ioctl being required, because I can't just write a simple boot script to
save/restore random data as is done with urandom, and be able to extract
some random data right away.