2005-04-13 23:48:43

by Jean-Luc Cooke

Subject: Fortuna

Ahh. Thanks Herbert.

Matt,

Any insight on how to test syn cookies and the other network stuff in
random.c? My patch is attached, but I haven't tested that part yet.

JLC

----- Forwarded message from Herbert Xu <[email protected]> -----

Envelope-to: [email protected]
Delivery-date: Wed, 13 Apr 2005 17:39:54 -0400
To: Jean-Luc Cooke <[email protected]>
Subject: Re: [PATCH] API for TRNG (2.6.11) [Fortuna]
From: Herbert Xu <[email protected]>

On Wed, Apr 13, 2005 at 11:36:40AM -0400, Jean-Luc Cooke wrote:
> Patch attached.

You might want to CC the current /dev/random maintainer Matt Mackall
<[email protected]>.
--
Visit Openswan at http://www.openswan.org/
Email: Herbert Xu ~{PmV>HI~} <[email protected]>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt

----- End forwarded message -----


Attachments:
(No filename) (1.05 kB)
fortuna-random-2.6.11.4.patch (63.77 kB)

2005-04-14 00:12:26

by Matt Mackall

Subject: Re: Fortuna

On Wed, Apr 13, 2005 at 07:43:37PM -0400, Jean-Luc Cooke wrote:
> Ahh. Thanks Herbert.
>
> Matt,
>
> Any insight on how to test syn cookies and the other network stuff in
> random.c? My patch is attached, but I haven't tested that part yet.

For starters, this is not against anything like a current random.c. A
great number of cleanups have been done.

Syncookies are perhaps best tested with printk of the cookie
ingredients in the generation and check steps.
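
Something along these lines, say (illustrative only -- the variable names
depend on the code at hand, and the cookie code itself now lives in
net/ipv4/syncookies.c rather than random.c):

printk(KERN_DEBUG "syncookie: saddr=%08x daddr=%08x sport=%u dport=%u "
       "count=%u cookie=%08x\n",
       saddr, daddr, sport, dport, count, cookie);

The same printk goes in both the generation and the check paths, so the
two logs can be diffed to verify that identical ingredients yield
identical cookies.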

--
Mathematics is the supreme nostalgia of our time.

2005-04-14 00:27:11

by Jean-Luc Cooke

Subject: Re: Fortuna

On Wed, Apr 13, 2005 at 05:09:39PM -0700, Matt Mackall wrote:
> On Wed, Apr 13, 2005 at 07:43:37PM -0400, Jean-Luc Cooke wrote:
> > Ahh. Thanks Herbert.
> >
> > Matt,
> >
> > Any insight on how to test syn cookies and the other network stuff in
> > random.c? My patch is attached, but I haven't tested that part yet.
>
> For starters, this is not against anything like a current random.c. A
> great number of cleanups have been done.

You caught me. :)

Last time I proposed Fortuna for /dev/random, it nearly got me drawn and quartered.
So I've left it as a kernel config option, leaving the current random.c
alone. I thought this was a way to make everyone happy.

JLC

2005-04-14 00:45:17

by Matt Mackall

Subject: Re: Fortuna

On Wed, Apr 13, 2005 at 08:26:47PM -0400, Jean-Luc Cooke wrote:
> On Wed, Apr 13, 2005 at 05:09:39PM -0700, Matt Mackall wrote:
> > On Wed, Apr 13, 2005 at 07:43:37PM -0400, Jean-Luc Cooke wrote:
> > > Ahh. Thanks Herbert.
> > >
> > > Matt,
> > >
> > > Any insight on how to test syn cookies and the other network stuff in
> > > random.c? My patch is attached, but I haven't tested that part yet.
> >
> > For starters, this is not against anything like a current random.c. A
> > great number of cleanups have been done.
>
> You caught me. :)
>
> Last time I proposed Fortuna for /dev/random, it nearly got me drawn and quartered.

It still might. Ted and I are as yet unconvinced that the Fortuna
approach is superior. While it may have some good properties, it lacks
some that random.c has, particularly robustness in the face of failure
of crypto primitives.

> So I've left it as a kernel config option, leaving the current random.c
> alone. I thought this was a way to make everyone happy.

Duplicated code rarely does that.

At any rate, you ought to review the changes (there've been 40+
patches recently). There are a number of bug fixes in there and quite
a bit of cleanup. Syncookies in particular no longer live in random.c.

--
Mathematics is the supreme nostalgia of our time.

2005-04-14 14:16:02

by George Spelvin

Subject: Re: Fortuna

Just to refresh everyone's memory as to the strengths and weaknesses
of Fortuna...

First, a reminder that the design goal of /dev/random proper is
information-theoretic security. That is, it should be secure against
an attacker with infinite computational power. If you want a weaker
guarantee, use /dev/urandom.

I've heard a suggestion that something like /dev/urandom, but which blocks
until it has received a minimum amount of seed material at least once,
would be a nice thing. So boot-time crypto initialization can stall
until it has achieved a minimum level of security.

The minimum could be measured two ways:
- Some reasonable default, like 256 bits, or
- (clever) the size of the read() request, which is presumably the
  key size that the user needs. This might be too subtle to explain
  to programmers, though.
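
As a rough userspace approximation of those semantics (a sketch only; the
function name is mine), a program could block on /dev/random once and then
read everything else from /dev/urandom:

#include <fcntl.h>
#include <stddef.h>
#include <sys/types.h>
#include <unistd.h>

/* Block until the pool has received *some* entropy, then read from the
 * non-blocking device.  An in-kernel version would block only until the
 * pool is seeded, without debiting the entropy count. */
static int get_seeded_random(void *buf, size_t len)
{
        char c;
        ssize_t n;
        int fd = open("/dev/random", O_RDONLY);

        if (fd < 0)
                return -1;
        n = read(fd, &c, 1);    /* blocks until some entropy arrives */
        close(fd);
        if (n != 1)
                return -1;

        fd = open("/dev/urandom", O_RDONLY);
        if (fd < 0)
                return -1;
        n = read(fd, buf, len);
        close(fd);
        return n == (ssize_t)len ? 0 : -1;
}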


Anyway, there are two main parts to Fortuna. The central "random pool",
and the round-robin sub-pool system for reseeding the main pool.

Fortuna uses a "small pool, trust the crypto" central pool.
/dev/random uses a large pool that is deliberately designed to be
robust against failure of the crypto primitives. Indeed, the basic
design just uses it as a uniform hash. Cryptographic strength is
irrelevant to the information-theoretic guarantees.

This portion of Fortuna is at odds with /dev/random's design goals,
but it could be replaced easily enough. It's not the clever part.


The "neat idea" in Fortuna is the round-robin sub-pool seeding technique,
which attempts to avoid the entire issue of entropy estimation.

Now, for information-theoretic guarantees, entropy measurement is
*required*. You cannot be information-theoretic secure unless you
have received more seed entropy than you have produced output.

However, Fortuna has a different philosophy. This difference is why
Fortuna will NEVER be an "exact drop-in replacement" for /dev/random,
although it can do the job for /dev/urandom. There are important
user-visible differences in the guarantees it makes. Someone may
argue that the difference is immaterial in practice, but existence of
a difference is indisputable.


It tries to divide up the seed entropy into sub-pools and hold off
re-seeding the main pool until the sub-pool has accumulated enough entropy
to cause "catastrophic reseeding" of the main pool, adding enough entropy
at once that someone who had captured the prior state of the main pool
would not be able (due to computational limits and the one-way nature
of the pool output function) to reverse-engineer the post-state.

The way it schedules the additions, it doesn't know *which* re-seed of
the main pool will be catastrophic, but it knows that it will be within
a factor of 64 of the shortest time that's possible.
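
For concreteness, the schedule in code form (a sketch of the classic
32-pool design as published, not of any particular patch): sub-pool i
participates in reseed number r exactly when 2^i divides r.

#include <stdint.h>

/* Returns a bitmask with bit i set if sub-pool i should be mixed into
 * the main pool on reseed number r (r counts from 1).  Pool 0 is used
 * every time, pool 1 every 2nd time, ..., pool 31 once per 2^31 reseeds. */
static uint32_t pools_for_reseed(uint32_t r)
{
        uint32_t mask = 0;
        int i;

        for (i = 0; i < 32; i++) {
                if (r % (1u << i) != 0)
                        break;  /* higher powers of 2 divide r even less often */
                mask |= 1u << i;
        }
        return mask;
}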


However, it is possible to come up with some pathological entropy sources
for which the standard Fortuna design completely fails to achieve that
goal.

So Fortuna would be helped by some better understanding of what exactly
makes it fail, so the discussion could move to whether real-world
seed sources have those properties.

But until that understanding is gained, Fortuna is questionable.


Appendix: how to "break" Fortuna.

First, the standard assumptions: the "attacker" can collect arbitrary
amounts of RNG output at any time, and can see all operations on the
pool except for the value of the seed material.

In particular, if the attacker's uncertainty about the pool's state
is small, she can collect enough output to distinguish all the
possibilities and attempt a brute-force computation to recover the
unknown state information.

Finally, assume the attacker knows the full initial state of the
generator.

For a classic 32-subpool Fortuna, let the seed be a 32-bit integer.
Every time a new seed is produced, shift the previous value and insert
a fresh random bit. Thus, the seed does deliver 1 bit of entropy
per sample, which should make for a secure source.

If you want to be cruel, encrypt that seed with a strong cipher known
only to the attacker before producing output. That keeps the nature
of the seed from the Fortuna implementation.

Anyway, every 32 cycles, a seed word is added to subpool 0, which is then
immediately added to the main pool. An attacker who knows the prior
state of the main pool can attempt a brute-force search for the 32 bits
of the seed word in a reasonable period of time.

Because of the shifting property, an attacker who knows the seeds added
to pool 0 at times t and t+32 can also deduce, with perfect accuracy,
the seeds added to the other 31 pools at times t+1 through t+31.
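
A toy simulation makes the deduction concrete (a standalone sketch, not
kernel code):

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* The pathological source: a 32-bit window onto a bit stream, gaining
 * one fresh bit per sample, dealt round-robin to 32 pools.  Samples t
 * and t+32 both land in pool 0 and together span 64 consecutive stream
 * bits; every sample in between is a 32-bit sub-window of that span, so
 * brute-forcing pool 0's two inputs reveals what pools 1..31 received. */
int main(void)
{
        uint32_t window = 0;    /* attacker knows the initial state */
        int t;

        for (t = 0; t < 64; t++) {
                window = (window << 1) | (rand() & 1);
                printf("t=%2d -> pool %2d gets %08x\n", t, t % 32, window);
        }
        return 0;
}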

The result being that the other subpools, which are supposed to be hoarding
secret information unknown to the attacker, are actually being seeded with
bits known to the attacker.

The ultimate result is that catastrophic reseeding never takes place.
If we could somehow separate the "fresh" entropy in the input samples
from the "holdover" material, the problem would go away, but that's
an entropy measurement problem!

Until this cloud is dissipated by further analysis, it's not possible to
say "this is shiny and new and better; they's use it!" in good conscience.

2005-04-14 14:52:36

by Jean-Luc Cooke

Subject: Re: Fortuna

Thanks for the post.

Waiting for 256 bits of entropy before outputting data is a good goal. The
problem becomes: how do you measure entropy in a reliable way? This had me
lynched last time I asked it, so I'll stop right there.

Info-theoretic randomness is a strong desire of some/many users, and they think
the current entropy estimation technique does the trick, so I left it there
with the option to #define it out.

I'll not make any claim that random-fortuna.c should be mv'd to random.c; the
patch as given allows people to configure it in under the Cryptographic
Options menu. Perhaps a disclaimer in the help text would be in order, to
inform users that Fortuna has profound consequences for those expecting an
info-theoretic /dev/random?

The case where an attacker has some small amount of unknown state in the pool
is a problem that affects BOTH random-fortuna.c and random.c (and any other
replacement, for that matter). Just an FYI.

As for the "shifting property" problem of an attacker controlling some input
to the pooling system, I've tried to get around this:

        r->pools_bytes[r->pool_index] += nwords * sizeof(__u32);
        crypto_digest_update(r->pools[r->pool_index], sg, 1);
        if (r->pool_index == 0) {
                r->pool0_len += nwords * sizeof(__u32);
        }
        /* idx = (idx + r->key[keyidx]) mod 2^N */
        r->pool_index = (r->pool_index + r->key[keyidx])
                        & ((1 << random_state->pool_number) - 1);
        /* the first 8 bytes of the key are used: 8 * 8 = 64 bits */
        keyidx = (keyidx + 1) & 7;

The add_entropy_words() function uses the first 8 bytes of the central
pool's key to make the destination of incoming entropy harder to predict.
It's still a linear progression until the central pool is re-keyed; after
that you don't know where it went. The central pool is reseeded every 0.1 ms.

This causes problems, of course: the hard upper bound of "we recover from
state compromise in at most a factor of 64 times the rate of entropy inflow"
is no longer deterministic. That is, it changes to "we expect to recover
from state compromise in a factor of 64 times the rate of entropy inflow."

Not sure if this was caught or not, but [email protected] wrote: "Anyway, every 32
cycles, a seed word is added to subpool 0, which is then immediately added
to the main pool". Entirely true; however, every time other than the first
seeding, other pools will be adding words to the main pool as well.

Cheers,

JLC

2005-04-14 17:33:52

by Theodore Ts'o

Subject: Re: Fortuna

On Thu, Apr 14, 2005 at 02:15:38PM -0000, [email protected] wrote:
> I've heard a suggestion that something like /dev/urandom, but which blocks
> until it has received a minimum amount of seed material at least once,
> would be a nice thing. So boot-time crypto initialization can stall
> until it has achieved a minimum level of security.

With a properly set up set of init scripts, /dev/random is initialized
with seed material for all but the initial boot (and even that problem
can be solved by having the distribution's install scripts set up
/var/lib/urandom/random-seed after installing the system).
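
In outline, the boot-time half of that is just the following (a userspace
sketch, not the actual distribution script; writing to /dev/urandom mixes
the data into the pool without crediting the entropy count):

#include <fcntl.h>
#include <sys/types.h>
#include <unistd.h>

#define SEED_FILE "/var/lib/urandom/random-seed"

/* Carry pool state across reboots: mix the saved seed in at boot.  The
 * matching shutdown step saves fresh /dev/urandom output back to the
 * seed file for the next boot. */
static void load_random_seed(void)
{
        char buf[512];
        ssize_t n;
        int in = open(SEED_FILE, O_RDONLY);
        int out = open("/dev/urandom", O_WRONLY);

        if (in >= 0 && out >= 0) {
                n = read(in, buf, sizeof(buf));
                if (n > 0)
                        write(out, buf, n);
        }
        if (in >= 0)
                close(in);
        if (out >= 0)
                close(out);
}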

If the init scripts aren't set up correctly, then you're screwed, yes,
but (a) all distributions that I know of do the init scripts correctly,
and the right way of doing this is documented in random.c, and (b) if user
space is screwed up in other ways, you can be even worse off.

What if someone accesses the seed file directly before the machine
boots? Well, either (a) they have broken root already, or (b) they have
direct access to the disk. In either case, the attacker with such
powers can just as easily trojan an executable or a kernel, and
you've got far worse problems to worry about than the piddling worries
of (cue Gilbert and Sullivan overly dramatic music, <Oh, horror!>) the
dreaded state-extension attack.

> It tries to divide up the seed entropy into sub-pools and hold off
> re-seeding the main pool until the sub-pool has accumulated enough entropy
> to cause "catastrophic reseeding" of the main pool, adding enough entropy
> at once that someone who had captured the prior state of the main pool
> would not be able (due to computational limits and the one-way nature
> of the pool output function) to reverse-engineer the post-state.

Actually, if you check the current random.c code, you'll see that it
has catastrophic reseeding in its design already.

> So Fortuna would be helped by some better understanding of what exactly
> makes it fail, so the discussion could move to whether real-world
> seed sources have those properties.
>
> But until that understanding is gained, Fortuna is questionable.
>...
> Until this cloud is dissipated by further analysis, it's not possible to
> say "this is shiny and new and better; they's use it!" in good conscience.

My big concern with Fortuna is that it really is the result of
cryptographic masturbation. It fundamentally assumes that crypto
primitives are secure (when the recent break of MD4, MD5, and now SHA1
should have been a warning that this is a Bad Idea (tm)), and makes
assumptions that have no real world significance (assume the attacker
has enough privileges that he might as well be superuser and can
trojan your system to a fare-thee-well --- now, if you can't recover
from a state extension attack, your random number generator is fatally
flawed.)

In addition, Fortuna is profligate with entropy, and wastes it in
order to be able to make certain claims of being able to recover from a
state-extension attack. Hopefully everyone agrees that entropy
collected from the hardware is precious (especially if you don't have
a special-purpose hardware RNG), and shouldn't be wasted. Wasting
collected entropy for no benefit, only to protect against a largely
theoretical attack --- where if a bad guy has enough privileges to
compromise your RNG state, there are far easier ways to compromise
your entire system, not just the RNG --- is Just Stupid(tm).

- Ted

2005-04-15 00:55:17

by George Spelvin

Subject: Re: Fortuna

> Waiting for 256 bits of entropy before outputting data is a good goal.
> The problem becomes: how do you measure entropy in a reliable way? This
> had me lynched last time I asked it, so I'll stop right there.

It's a problem. Also, with the current increase in wireless keyboards
and mice, that source should possibly not be considered secure any more.

On the other hand, clock jitter in GHz clocks is a *very* rich source
of seed material. One rdtsc per timer tick, even accounted for at
0.1 bit per rdtsc (I think reality is more like 3-5 bits) would keep
you in clover.
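
The arithmetic, for concreteness (assuming HZ=1000 and an x86 TSC; the
helper below is a sketch):

#include <stdint.h>

/* One sample per timer tick, credited at 0.1 bit apiece: at HZ=1000
 * that is 100 credited bits per second, i.e. enough for a 256-bit
 * reseed roughly every 2.6 seconds. */
static inline uint64_t read_tsc(void)
{
        uint32_t lo, hi;

        asm volatile("rdtsc" : "=a" (lo), "=d" (hi));
        return ((uint64_t)hi << 32) | lo;
}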

> I'll not make any claim that random-fortuna.c should be mv'd to random.c; the
> patch as given allows people to configure it in under the Cryptographic
> Options menu. Perhaps a disclaimer in the help text would be in order, to
> inform users that Fortuna has profound consequences for those expecting an
> info-theoretic /dev/random?

I don't mean to disparage your efforts, but then what's the upside to
the resultant code maintenance hassles? Other than hack value, what's the
advantage of even offering the choice? An option like that is justified
when the two options have value for different people and it's not
possible to build a single merged solution that satisfies both markets.

Also, Ted very deliberately made /dev/random non-optional so it could
be relied on without a lot of annoying run-time testing. Would
a separate /dev/fortuna be better?



> The case where an attacker has some small amount of unknown state in the pool
> is a problem that affects BOTH random-fortuna.c and random.c (and any other
> replacement, for that matter). Just an FYI.

Yes, but entropy estimation is supposed to deal with that. If the
attacker is never allowed enough information out of the pool to
distinguish various input states, the output is secure.

> As for the "shifting property" problem of an attacker controlling some input
> to the pooling system, I've tried to get around this:

(Code that varies which pool things get added to, based on r->key[i++ & 7])

> The add_entropy_words() function uses the first 8 bytes of the central
> pool's key to make the destination of incoming entropy harder to predict.
> It's still a linear progression until the central pool is re-keyed; after
> that you don't know where it went. The central pool is reseeded every 0.1 ms.

You need to think more carefully. You're just elaborating it without
adding security. Yes, the attacker *does* know! The entire underlying
assumption is that the attacker knows the entire initial state of the
pool, owing to some compromise or another.

The assumption is that the only thing the attacker doesn't know is the
exact value of the incoming seed material. (But the attacker does have
an excellent model of its distribution.)

Given that, the attacker knows the initial value of the key[] array,
and thus knows the order in which the pools are fed. The question
then arises, come next reseed, can the attacker (with a small mountain
of computers, able to brute-force 40-bit problems in the blink of an
eye) infer the *complete* state of the pool from the output.

The confusion is over the word "random". In programming jargon, the
word is most often used to mean "arbitrary" or "the choice doesn't
matter". But that doesn't capture the idea of "unpredictable to
a skilled and determined opponent" that is needed in /dev/random.

So while the contents of key[] may be "random-looking", they're not
*unpredictable*, any more than the digits of pi are.

The attacker just has to, after each reseeding, brute-force the seed
bits fed to the (predictable) pools chosen to mix in, and then
use that information to infer the seeds added to the not-selected
pools.

If the attacker's uncertainty about the state of some of the subpools
increases to the catastrophic reseeding level, then the Fortuna design
goal is achieved.

If the seed samples are independent, then it's easy to see that the
schedule works.

But if the seed samples are correlated, it gets a lot trickier.

2005-04-15 01:34:27

by George Spelvin

Subject: Re: Fortuna

> What if someone accesses the seed file directly before the machine
> boots? Well, either (a) they have broken root already, or (b) they have
> direct access to the disk. In either case, the attacker with such
> powers can just as easily trojan an executable or a kernel, and
> you've got far worse problems to worry about than the piddling worries
> of (cue Gilbert and Sullivan overly dramatic music, <Oh, horror!>) the
> dreaded state-extension attack.

So the only remaining case is where an attacker can read the random
seed file before boot but can't install any trojans. Which seems
like an awfully small case. (Although in some scenarios, passive
attacks are much easier to mount than active ones.)

> Actually, if you check the current random.c code, you'll see that it
> has catastrophic reseeding in its design already.

Yes, I know. Fortuna's claim to fame is that it tries to achieve that
without explicitly measuring entropy.

> My big concern with Fortuna is that it really is the result of
> cryptographic masturbation. It fundamentally assumes that crypto
> primitives are secure (when the recent break of MD4, MD5, and now SHA1
> should have been a warning that this is a Bad Idea (tm)), and makes
> assumptions that have no real world significance (assume the attacker
> has enough privileges that he might as well be superuser and can
> trojan your system to a fare-thee-well --- now, if you can't recover
> from a state extension attack, your random number generator is fatally
> flawed.)

I'm not a big fan of Fortuna either, but the issues are separate.
I agree that trusting crypto primitives that you don't have to is a bad
idea. If my application depends on SHA1 being secure, then I might as
well go ahead and use SHA1 in my PRNG. But a kernel service doesn't
know what the applications using it are relying on.

(Speaking of which, perhaps it's time, in light of the breaking of MD5,
to revisit the cut-down MD4 routine used in the TCP ISN selection?
I haven't read the MD5 & SHA1 papers in enough detail to understand the
flaw, but perhaps some defenses could be erected?)

But still, all else being equal, an RNG resistant to a state extension
attack *is* preferable to one that's not. And the catastrophic
reseeding support in /dev/random provides exactly that feature.

What Fortuna tries to do is sidestep the hard problem of entropy
measurement. And that's very admirable. It's a very hard thing to do in
general, and the current technique of heuristics plus a lot of derating
is merely adequate.

If a technique could be developed that didn't need an accurate entropy
measurement, then things would be much better.


> In addition, Fortuna is profligate with entropy, and wastes it in
> order to be able to make certain claims of being able to recover from a
> state-extension attack. Hopefully everyone agrees that entropy
> collected from the hardware is precious (especially if you don't have
> a special-purpose hardware RNG), and shouldn't be wasted. Wasting
> collected entropy for no benefit, only to protect against a largely
> theoretical attack --- where if a bad guy has enough privileges to
> compromise your RNG state, there are far easier ways to compromise
> your entire system, not just the RNG --- is Just Stupid(tm).

Just to be clear, I don't remember it ever throwing entropy away, but
it hoards some for years, thereby making it effectively unavailable.
Any catastrophic reseeding solution has to hold back entropy for some
time.

And I think that, even in the absence of special-purpose RNG hardware,
synchronization jitter on modern GHz+ CPUs is a fruitful source of
entropy.

2005-04-15 14:42:32

by Theodore Ts'o

Subject: Re: Fortuna

On Fri, Apr 15, 2005 at 01:34:17AM -0000, [email protected] wrote:
> (Speaking of which, perhaps it's time, in light of the breaking of MD5,
> to revisit the cut-down MD4 routine used in the TCP ISN selection?
> I haven't read the MD5 & SHA1 papers in enough detail to understand the
> flaw, but perhaps some defenses could be erected?)

The ISN selection is there only to make it harder to accomplish TCP
hijacking attacks from people who are on the network path between the
source and destination. And you have to guess the ISN before the
3-way TCP handshake has been negotiated (or if you can stop the SYN
packet in flight, before the other side times out the attempted TCP
connection); and we also rekey every few minutes, so even if you do
figure out the seed that we are using to generate the ISN (which is
harder than merely finding a hash collision: you must find the preimage
that we are using as a seed, given only a portion of the crypto
hash output), it becomes useless and you have to start all over again
within a few minutes.
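
For reference, the construction being described is essentially the RFC 1948
one; in outline (toy code: the hash below is a stand-in for the cut-down
MD4, and all the names are made up):

#include <stdint.h>
#include <sys/time.h>

static uint32_t secret[4];      /* regenerated every few minutes */

/* Placeholder mixer standing in for the cut-down MD4. */
static uint32_t toy_keyed_hash(uint32_t a, uint32_t b, uint32_t c)
{
        uint32_t h = secret[0] ^ a;

        h = h * 2654435761u ^ (secret[1] + b);
        h = h * 2654435761u ^ (secret[2] + c);
        return h * 2654435761u ^ secret[3];
}

/* ISN = keyed hash of the connection 4-tuple, offset by a fine-grained
 * clock so sequence spaces advance even for a repeated 4-tuple. */
static uint32_t secure_isn(uint32_t saddr, uint32_t daddr,
                           uint16_t sport, uint16_t dport)
{
        struct timeval tv;

        gettimeofday(&tv, NULL);
        return toy_keyed_hash(saddr, daddr,
                              ((uint32_t)sport << 16) | dport)
               + (uint32_t)tv.tv_sec * 1000000u + (uint32_t)tv.tv_usec;
}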

Furthermore, if you really cared about preventing TCP hijacks, the
only real way to do this is to use Real Crypto. And these days, Cisco
boxes support Kerberized telnets and ssh connections, which is the
real Right Answer.

It might be possible to use a more expensive crypto primitive here,
since CPU's have gotten so much faster, but the reason why we moved to
MD4, and then cut it down, was because otherwise it was causing a
noticeable difference in the speed we could establish TCP connections,
and hence for certain network-based benchmarks.

> Just to be clear, I don't remember it ever throwing entropy away, but
> it hoards some for years, thereby making it effectively unavailable.
> Any catastrophic reseeding solution has to hold back entropy for some
> time.

It depends on how often you reseed, but my recollection was that it
was far more than years; it was *centuries*. And as far as I'm
concerned, that's equivalent to throwing it away, especially given the
pathetically small size of the Fortuna pools.

- Ted

2005-04-15 15:38:10

by George Spelvin

Subject: Re: Fortuna

> The ISN selection is there only to make it harder to accomplish TCP
> hijacking attacks from people who are on the network path between the
> source and destination. And you have to guess the ISN before the
> 3-way TCP handshake has been negotiated (or if you can stop the SYN
> packet in flight, before the other side times out the attempted TCP
> connection); and we also rekey every few minutes, so even if you do
> figure out the seed that we are using to generate the ISN (which is
> harder than merely finding a hash collision: you must find the preimage
> that we are using as a seed, given only a portion of the crypto
> hash output), it becomes useless and you have to start all over again
> within a few minutes.

Er... are you sure that's right? I think there are several mistakes
there.

First of all, people *on* the network path can just *see* the packets.
Or modify them. Or whatever.
The point is to prevent hijacking by people *not* on the path.
Such people can only blindly inject forged packets, and try to send
"passwd\nhax0r\nhax0r\n" into an open telnet connection. Or send
FINs to DoS.

Such injection requires guessing the TCP sequence number. That depends
on the ISN plus the number of bytes sent. Fortunately, some of the most
interesting protocols to attack are interactive ones like telnet where
the number of bytes sent is small and the uncertainty can be exhausted
with a flood of packets.

The problem was to prevent someone from connecting to endpoints A and B
and using the ISN they received to guess the ISNs that A and B would use
to talk to each other. The classic TCP ISN algorithm just uses a global
timestamp, so it's pretty easy.

Further, if I capture ISNs from A and B in the same rekey interval as
the initiation of the connection I'm trying to hijack, and that
connection proceeds slowly, then I have the lifetime of the connection
to solve the crypto problem.


All that said, you are quite right (I had just figured it out myself,
actually) that the required attack is a (first) preimage attack (albeit
with partially known plaintext), not a collision, so the published
collision attacks are not directly relevant. And the fact that the
amount of hash output is not enough to determine the input key material
so you have to make multiple probes with dirrecnt port numbers, and
it's all masked by a high-speed clock that you can only come close to
guessing the value of, makes it all even nastier for a would-be
cryptanalyst.

> Furthermore, if you really cared about preventing TCP hijacks, the
> only real way to do this is to use Real Crypto. And these days, Cisco
> boxes support Kerberized telnets and ssh connections, which is the
> real Right Answer.

It's just so high-level. While ipsec is the most general solution,
setting it up is a PITA. I've often thought about trying to add a TCP
option for stream encryption that could provide opportunistic encryption
for everyone.


> It might be possible to use a more expensive crypto primitive here,
> since CPU's have gotten so much faster, but the reason why we moved to
> MD4, and then cut it down, was because otherwise it was causing a
> noticeable difference in the speed we could establish TCP connections,
> and hence for certain network-based benchmarks.

Yes, I recall. I was thinking more that the MD5 and SHA0 problems were
due to insufficient mixing between columns of bits, and even SHA1 still has
the problem. Notice how much more aggressive the shifting is in the
SHA256 key schedule.

As long as it's homebrew ad-hoc crypto, perhaps some study of the
attacks could suggest a better key schedule than just using the key
words in different orders and with different constants added.

>> Just to be clear, I don't remember it ever throwing entropy away, but
>> it hoards some for years, thereby making it effectively unavailable.
>> Any catastrophic reseeding solution has to hold back entropy for some
>> time.

> It depends on how often you reseed, but my recollection was that it
> was far more than years; it was *centuries*. And as far as I'm
> concerned, that's equivalent to throwing it away, especially given the
> pathetically small size of the Fortuna pools.

Well, subpool n is added to the main pool once per 2^n additions of
subpool 0. So with pool 0 added every 0.1 seconds (say),
subpool 31 is added every 6.8 years.

But the whole thing depends on a minimum assumed entropy feed rate.
The idea is to be sure that *some* subpool will hoard entropy long
enough to cause catastrophic reseeding. You can reduce the number
of subpools to fit any arbitrary bound.

E.g. if you think that 24 hours is long enough, and you have the same
0.1 second maximum reseed rate, then 21 pools will have pool 20 added
every 29:07:37.6.
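
A quick way to see those numbers (a standalone sketch, assuming the same
0.1 second base interval):

#include <stdio.h>

/* Print how often each sub-pool feeds the main pool, given that pool n
 * is used on every 2^n-th reseed of the main pool. */
int main(void)
{
        const double base = 0.1;        /* seconds between reseeds */
        int n;

        for (n = 0; n < 32; n++) {
                double secs = base * (double)(1u << n);
                printf("pool %2d: every %.1f s (%.2f hours, %.2f years)\n",
                       n, secs, secs / 3600, secs / (3600 * 24 * 365.25));
        }
        return 0;
}

Pool 20 comes out at 104857.6 s, the 29:07:37.6 above; pool 31 at 6.8 years.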

As I keep saying, the small size of the Fortuna pools is a separate
matter, and can be changed without affecting the subpool structure that
is Fortuna's real novel contribution. That was just what Niels and
Bruce came up with to make the whole thing concrete.

2005-04-15 16:22:49

by Jean-Luc Cooke

Subject: Re: Fortuna

On Fri, Apr 15, 2005 at 10:42:16AM -0400, Theodore Ts'o wrote:
> > Just to be clear, I don't remember it ever throwing entropy away, but
> > it hoards some for years, thereby making it effectively unavailable.
> > Any catastrophic reseeding solution has to hold back entropy for some
> > time.
>
> It depends on how often you reseed, but my recollection was that it
> was far more than years; it was *centuries*. And as far as I'm
> concerned, that's equivalent to throwing it away, especially given the
> pathetically small size of the Fortuna pools.

"pathetically" is an interesting term to call it. 256 bits is the most any
one of the 32 pools can hold, true. And the n-th pool is used only once
every 2^n times (where the first pool is the 0-th pool, hence it's used
everytime).

2^31 * 0.1 s between reseeds means the 31st (aka last) pool will be drawn
from once every 6.8 years.

And the argument that "random.c doesn't rely on the strength of crypto
primitives" is kinda lame, though I see where you're coming from. random.c's
entropy mixing and output depend on the (endian-incorrect) SHA-1
implementation hard-coded in that file being pre-image resistant. If that
fails (and a few other things), then it's broken.

Fortuna depends on the cipher in use (in this case AES-256) resisting
known-ciphertext attacks and on the digest algo in use (in this case
SHA-256) being pre-image resistant. Cryptographers like reducing things
to known quantities; it may be a flaw.

Once I get some personal things sorted out, I'll take another crack at making
/dev/fortuna with its claws in random.c to feed it some of that entropy, as
[email protected] suggested.

JLC

2005-04-15 16:50:47

by George Spelvin

Subject: Re: Fortuna

> And the argument that "random.c doesn't rely on the strength of crypto
> primitives" is kinda lame, though I see where you're coming from.
> random.c's entropy mixing and output depend on the (endian-incorrect)
> SHA-1 implementation hard-coded in that file being pre-image resistant.
> If that fails (and a few other things), then it's broken.

/dev/urandom depends on the strength of the crypto primitives.
/dev/random does not. All it needs is a good uniform hash.

Do a bit of reading on the subject of "unicity distance".

(And as for the endianness of the SHA-1, are you trying to imply
something? Because it makes zero difference, and reduces the code
size and execution time. Which is obviously a Good Thing.)


As for hacking Fortuna in, could you give a clear statement of what
you're trying to achieve?

Do you like:
- The neat name,
- The strong ciphers used in the pools, or
- The multi-pool reseeding strategy, or
- Something else?

If you're doing it just for hack value, or to learn how to write a
device driver or whatever, then fine. But if you're proposing it as
a mainline patch, then could we discuss the technical goals?

I don't think anyone wants to draw and quarter *you*, but your
code is going to get some extremely critical examination.

2005-04-15 17:05:11

by Jean-Luc Cooke

Subject: Re: Fortuna

On Fri, Apr 15, 2005 at 04:50:36PM -0000, [email protected] wrote:
> (And as for the endianness of the SHA-1, are you trying to imply
> something? Because it makes zero difference, and reduces the code
> size and execution time. Which is obviously a Good Thing.)

It just bugged me when I was reading random.c for the first time. First
impressions and all.

> As for hacking Fortuna in, could you give a clear statement of what
> you're trying to achieve?
>
> Do you like:
> - The neat name,
> - The strong ciphers used in the pools, or
> - The multi-pool reseeding strategy, or
> - Something else?
>
> If you're doing it just for hack value, or to learn how to write a
> device driver or whatever, then fine. But if you're proposing it as
> a mainline patch, then could we discuss the technical goals?
>
> I don't think anyone wants to draw and quarter *you*, but your
> code is going to get some extremely critical examination.

The multi-pool reseeding strategy is what I find particularly interesting.
It's a design that can fit neatly into my little head, and I hope into the
heads of others in the future. You can call it Bob or whatever you want;
names don't matter to me. As for the ciphers/digest algos used, they are
not of great concern to me; replace them with what you want. It's the design
of the RNG itself that has my attention for the time being.

Ted and yourself are in the best position to say "we understand random.c
currently, and should the need arise, we're confident we can find someone
to take it over and go up the learning curve", and with a statement "we
like the current RNG design for mixing entropy and don't see the need to
confuse people with alternatives to /dev/{u}random", that'll be that.

I'll take my patch and not bother you anymore. I'm sure I've taken a lot of
your time as it is.

Not to sound like "I'm taking my ball and going home" - just explaining
that I like the Fortuna design, I think it's elegant, and I want it for my
systems. The GPL requires I submit changes back, so I did, with the
unpleasant side-dish of my opinion on random.c.

JLC

2005-04-15 18:26:36

by Theodore Ts'o

Subject: Re: Fortuna

On Fri, Apr 15, 2005 at 03:38:01PM -0000, [email protected] wrote:
>
> First of all, people *on* the network path can just *see* the packets.
> Or modify them. Or whatever.
> The point is to prevent hijacking by people *not* on the path.

Yes, you're correct of course. Of course, I'll note that people who are
*not* on the path have to not only guess the ISN, but they also have
to somehow know that there is a TCP connection between hosts A and B,
and what ports they are using. Someone not on the path isn't
necessarily going to know this, unless there are publicly accessible
SNMP-enabled routers that are overly free with this sort of
information. (Loose lips sink ships!)

> Further, if I capture ISNs from A and B in the same rekey interval as
> the initiation of the connection I'm trying to hijack, and that
> connection proceeds slowly, then I have the lifetime of the connection
> to solve the crypto problem.

True, although the longer it takes to break the crypto, the greater
the uncertainty of how much data has gone across the connection, which
makes the problem harder in other ways.

> > Furthermore, if you really cared about preventing TCP hijacks, the
> > only real way to do this is to use Real Crypto. And these days, Cisco
> > boxes support Kerberized telnets and ssh connections, which is the
> > real Right Answer.
>
> It's just so high-level. While ipsec is the most general solution,
> setting it up is a PITA. I've often thought about trying to add a TCP
> option for stream encryption that could provide opportunistic encryption
> for everyone.

But ssh is pretty easy, and even if you completely ignore the host key
issue (to protect you against man-in-the-middle attacks), a simple
Diffie-Hellman type approach is plenty to protect you against the class
of attacks which the randomized ISN buys you.

- Ted

2005-04-15 19:35:08

by Matt Mackall

Subject: Re: Fortuna

On Fri, Apr 15, 2005 at 12:22:25PM -0400, Jean-Luc Cooke wrote:
> And the argument that "random.c doesn't rely on the strength of crypto
> primitives" is kinda lame, though I see where you're coming from. random.c's
> entropy mixing and output depend on the (endian-incorrect) SHA-1
> implementation hard-coded in that file being pre-image resistant. If that
> fails (and a few other things), then it's broken.

You really ought to look at the _current_ implementation. There is no
SHA1 code in random.c.

--
Mathematics is the supreme nostalgia of our time.

2005-04-16 01:04:54

by daw

Subject: Re: Fortuna

Matt Mackall wrote:
>While it may have some good properties, it lacks
>some that random.c has, particularly robustness in the face of failure
>of crypto primitives.

It's probably not a big deal, because I'm not worried about the
failure of standard crypto primitives, but--

Do you know of any analysis to back up the claim that /dev/random
will be robust in the face of failure of crypto primitives? I have never
seen anyone try to do such an analysis, but maybe you know of something
that I don't.

2005-04-16 01:10:50

by daw

Subject: Re: Fortuna

>First, a reminder that the design goal of /dev/random proper is
>information-theoretic security. That is, it should be secure against
>an attacker with infinite computational power.

I am skeptical.
I have never seen any convincing evidence for this claim,
and I suspect that there are cases in which /dev/random fails
to achieve this standard.

And it seems I am not the only one. See, e.g., Section 5.3 of:
http://eprint.iacr.org/2005/029

Fortunately, it doesn't matter whether /dev/random provides
information-theoretic security. I have reasonable confidence that
it provides computational security, and that is all that applications
need.

2005-04-16 01:21:05

by daw

Subject: Re: Fortuna

Jean-Luc Cooke wrote:
>Info-theoretic randomness is a strong desire of some/many users, [..]

I don't know. Most of the time that I've seen users say they want
information-theoretic randomness, I've gotten the impression that those
users didn't really understand what information-theoretic randomness means,
and their applications usually didn't need information-theoretic randomness
in the first place.

As for those who reject computational security because of its
unproven nature, they should perhaps be warned that any conjectured
information-theoretic security of /dev/random is also unproven.

Personally, I feel the issue of information-theoretic security
is a distraction. Given the widespread confusion about what
information-theoretic security means, I certainly sympathize with why
Jean-Luc Cooke left in the existing entropy estimation technique as a
way of side-stepping the whole argument.

Anyway, the bottom line is I don't consider "information-theoretic
arguments" as a very convincing reason to reject Cooke's proposal.

2005-04-16 01:27:37

by daw

Subject: Re: Fortuna

Theodore Ts'o wrote:
>With a properly set up set of init scripts, /dev/random is initialized
>with seed material for all but the initial boot [...]

I'm not so sure. Someone posted on this mailing list several months
ago examples of code in the kernel that looks like it could run before
those init scripts are run, and that looks like it might well be using
/dev/*random before it has been seeded.

I never saw any response.

>It fundamentally assumes that crypto
>primitives are secure (when the recent break of MD4, MD5, and now SHA1
>should have been a warning that this is a Bad Idea (tm)),

It looks to me like the recent attacks on MD4, MD5, and SHA1 do not
endanger /dev/random. Those attacks affect collision-resistance, but
it looks to me like the security of /dev/random relies on other properties
of the hash function (e.g., pseudorandomness, onewayness) which do not
seem to be threatened by these attacks. But of course this is a complicated
business, and maybe I overlooked something about the way /dev/random uses
those hash functions. Did I miss something?

As for which threat models are realistic, I consider it more likely
that my box will be hacked in a way that affects /dev/random than that
SHA1 will be broken in a way that affects /dev/random.

>In addition, Fortuna is profligate with entropy, [...]

Yup.

2005-04-16 01:30:49

by daw

Subject: Re: Fortuna

linux wrote:
>/dev/urandom depends on the strength of the crypto primitives.
>/dev/random does not. All it needs is a good uniform hash.

That's not at all clear. I'll go farther: I think it is unlikely
to be true.

If you want to think about cryptographic primitives being arbitrarily
broken, I think there will be scenarios where /dev/random is insecure.

As for what you mean by "good uniform hash", I think you'll need to
be a bit more precise.

>Do a bit of reading on the subject of "unicity distance".

Yes, I've read Shannon's original paper on the subject, as well
as many other treatments.

I stand by my comments above.

2005-04-16 10:06:21

by George Spelvin

Subject: Re: Fortuna

Merging e-mails, first from [email protected]:
> You really ought to look at the _current_ implementation. There is no
> SHA1 code in random.c.

So I'm imagining the call to sha_transform() in 2.6.12-rc2's
extract_buf()? The SHA1 code has been moved to lib/sha1.c, so there's
no SHA1 code *lexically* in random.c, but that's a facile response;
it's a cryptologically meaningless change.

The way it's done, the length ends up being byte-swapped twice. Here's an
alternative sha1_final, moving the byte-swapping into sha1_update and
saving a few cycles and a few bytes (234 -> 214 bytes with gcc-4.0 -O
-fomit-frame-pointer on x86). I assume that moving the byte-swapping
from sha_transform to sha1_update has no code size impact.



extern void sha_transform(u32 state[5], u32 temp[80]);

void sha1_final(void *ctx, u8 *out)
{
        struct sha1_ctx *sctx = ctx;
        u32 temp[SHA_WORKSPACE_WORDS];
        u64 count = sctx->count;
        u32 i, j, index;
        static const u8 padding[64] = { 0x80, };

        /* Pad out to 56 mod 64 */
        index = (((unsigned)sctx->count >> 3) + 8) & 0x3f;
        sha1_update(sctx, padding, 64 - index);

        /* Byte-swap the first 14 words of the buffer */
        for (i = 0; i < 14; i++)
                temp[i] = be32_to_cpu(((u32 const *)sctx->buffer)[i]);

        /* Append the length, two native words, in big-endian word order
         * (into temp[], which is what gets transformed) */
        temp[14] = (u32)(count >> 32);
        temp[15] = (u32)count;

        /* And the final transformation */
        sha_transform(sctx->state, temp);

        /* Store state in digest */
        for (i = j = 0; i < 5; i++, j += 4) {
                u32 t2 = sctx->state[i];
                out[j + 3] = t2 & 0xff; t2 >>= 8;
                out[j + 2] = t2 & 0xff; t2 >>= 8;
                out[j + 1] = t2 & 0xff; t2 >>= 8;
                out[j] = t2 & 0xff;
        }

        /* Wipe context */
        memset(temp, 0, sizeof temp);
        memset(sctx, 0, sizeof *sctx);
}



Anyway, back to the long-suffering [email protected]:

>> something? Because it makes zero difference, and reduces the code
>> size and execution time. Which is obviously a Good Thing.)

> It just bugged me when I was reading random.c for the first time. First
> impressions and all.

Interesting. Hopefully, further reflection has shown you that the change
has zero security implications. (For the same reason, cryptanalysts
studying DES ignore the initial permutation.)

>> Do you like:
>> - The neat name,
>> - The strong ciphers used in the pools, or
>> - The multi-pool reseeding strategy, or
>> - Something else?

> The multi-pool reseeding strategy is what I find particularly interesting.
> It's a design that can fit neatly into my little head, and I hope into the
> heads of others in the future. You can call it Bob or whatever you want;
> names don't matter to me. As for the ciphers/digest algos used, they are
> not of great concern to me; replace them with what you want. It's the design
> of the RNG itself that has my attention for the time being.

Great, we agree! (Okay, almost.) My point about "the neat name" was a
facetious straw-man. That's a metaphor for "some stupid non-technical
reason". Newer is not neceaarily better. But your reason for liking
Fortuna is not quite that shallow. Good.

Ted doesn't like Fortuna's pool design, I don't care for it much, but
you agree it's not important. Great, no real point of contention there.
With lots of subpools, they need to be smaller, but the details of the
structure can be settled later.

But as I think I've said often enough, the sub-pool structure is what
makes Fortuna interesting. *That* is Fortuna's contribution to the
state of the art, and the neat idea worth stealing. We still agree!

However, there are a few issues, and if you'd like to address them, we
can actually get closer to agreeing, or at least clearly stating what
we disagree about.

1) Fortuna is trying to sidestep the hard problem of entropy measurement,
to make it unnecessary. This is a laudable goal, but /dev/random's
information-theoretic design goal precludes this simplification.
It *has* to measure entropy. Are Fortuna's benefits sufficient to
motivate a public interface change like that? If you can argue that
the current entropy measurement is such crap that it's not actually
delivering its promise anyway, that would be a good reason.

But as long as you *have* entropy measurement, you don't *need*
Fortuna's elaborate scheme for avoiding it. You only need *one*
sub-pool.

2) Fortuna tries to support such a wide range of entropy source rates,
down to very low rates, that it sequesters some entropy for literally
years. Ted thinks that's inexcusable, and I can't really disagree.
This can be fixed to a significant degree by tweaking the number
of subpools.

3) Fortuna's design doesn't actually *work*. The authors' analysis
only works in the case that the entropy seeds are independent, but
the authors forgot to state the assumption. Some people reviewing the
design don't notice the omission.

It's that assumption which lets it "divide up" the seed material
among various sub-pools. Without it, seed information leaks from
the sequestered sub-pools to the more exposed ones, decreasing the
"value" of the sequestered pools.

I've shown a contrived pathological example, but I haven't managed
to figure out how to characterize the leakage in a more general way.
But let me give a realistic example.

Again, suppose we have an entropy source that delivers one fresh
random bit each time it is sampled.

But suppose that rather than delivering a bare bit, it delivers the
running sum of the bits. So adjacent samples are either the same or
differ by +1. This seems to me an extremely plausible example.

Consider a Fortuna-like thing with two pools. The first pool is seeded
with n, then the second with n+b0, then the first again with n+b0+b1.
n is the arbitrary starting count, while b0 and b1 are independent
random bits.

Assuming that an attacker can see the first pool, they can find n.
After the second step, their uncertainty about the second pool is 1
bit, the value of b0.

But the third step is interesting. The attacker can see the value of
b0+b1. If the sum is 0 or 2, the value of b0 is determined uniquely.
Only in the case that b0+b1 = 1 is there uncertainty. So we have
only *half* a bit of uncertainty (one bit, half of the time) in the
second pool.

Where did the missing entropy go? Well, remember the Shannon formula
for entropy, H(p_1,...,p_n) = - Sum(p_i * log(p_i)). If the log is
to the base 2, the result is in bits.

Well, p_0 = 1/4, p_1 = 1/2, and p_2 = 1/4. The logs of those are
-2, -1, and -2, respectively. So the sum works out to
2 * 1/4 + 1 * 1/2 + 2 * 1/4 = 1.5.

Half a bit of entropy has leaked from the second pool back into the first!
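
The bookkeeping can be checked by direct enumeration (a standalone sketch;
compile with -lm):

#include <math.h>
#include <stdio.h>

/* Enumerate the four equally likely (b0, b1) pairs.  The attacker
 * learns s = b0 + b1 from watching the first pool; how much
 * uncertainty about the second pool's b0 remains? */
int main(void)
{
        const double p_sum[3] = { 0.25, 0.50, 0.25 };   /* P(s = 0, 1, 2) */
        double h_cond = 0.0, h_sum = 0.0;
        int s;

        for (s = 0; s <= 2; s++) {
                /* b0 is forced when s is 0 or 2; uniform when s is 1 */
                h_cond += p_sum[s] * (s == 1 ? 1.0 : 0.0);
                h_sum -= p_sum[s] * log2(p_sum[s]);
        }
        printf("H(b0 | s) = %.2f bits\n", h_cond);      /* 0.50 */
        printf("H(s)      = %.2f bits\n", h_sum);       /* 1.50 */
        return 0;
}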

I probably just don't have enough mathematical background, but I don't
currently know how to bound this leakage. In pathological cases,
*all* of the entropy leaks into the lowest-order pool, at which point
the whole elaborate structure of Fortuna is completely useless.

*That* is my big problem with Fortuna. If someone can finish the
analysis and actually bound the leakage, then we can construct something
that works. But I've pushed the idea around for a while and not figured
it out.

> I'll take my patch and not bother you anymore. I'm sure I've taken a
> lot of your time as it is.

And you've spent a lot of time preparing that patch. It's not a bad idea
to revisit the ideas occasionally, but let's talk about the real *meat*
of the issue.

If you think my analysis of Fortuna's issues above is flawed, please
say so! If you disagree about the importance of the issues, that's
worth discussing too, although I can't promise that such a difference
of opinions will ever be totally resolved. But arguing about the
relative importance of good and bad points is meaningful.

Ideally, we manage to come up with a solution that has all the good points.

The only thing that's frustrating is discussing it with someone who doesn't
even seem to *see* the issues.

> Not to sound like "I'm taking my ball and going home" - just explaining
> that I like the Fortuna design, I think it's elegant, and I want it for my
> systems. The GPL requires I submit changes back, so I did, with the
> unpleasant side-dish of my opinion on random.c.

Actually, the GNU GPL doesn't. It only requires that you give out the
source if and when you give out the binary. You can make as many
private changes as you like. (Search debian-legal for "desert island
test".)

2005-04-16 11:11:05

by George Spelvin

Subject: Re: Fortuna

>> First, a reminder that the design goal of /dev/random proper is
>> information-theoretic security. That is, it should be secure against
>> an attacker with infinite computational power.

> I am skeptical.
> I have never seen any convincing evidence for this claim,
> and I suspect that there are cases in which /dev/random fails
> to achieve this standard.

I'm not sure which claim you're skeptical of. The claim that it's
a design goal, or the claim that it achieves it?


I'm pretty sure that's been the *goal* since the beginning, and it says
so in the comments:
* Even if it is possible to
* analyze SHA in some clever way, as long as the amount of data
* returned from the generator is less than the inherent entropy in
* the pool, the output data is totally unpredictable.

That's basically the information-theoretic definition, or at least
alluding to it. "We're never going to give an attacker the unicity
distance needed to *start* breaking the crypto."

The whole division into two pools was because the original single-pool
design allowed (information-theoretically) deriving previous
/dev/random output from subsequent /dev/urandom output. That's
discussed in section 5.3 of the paper you cited, and has been fixed.

There's probably more discussion of the subject in linux-kernel around
the time that change went in.


Whether the goal is *achieved* is a different issue. random.c tries
pretty hard, but makes some concessions to practicality, relying on
computational security as a backup. (But suggestions as to how to get
closer to the goal are still very much appreciated!)

In particular, it is theoretically possible for an attacker to exploit
knowledge of the state of the pool and the input mixing transform to
feed in data that permutes the pool state to cluster in SHA1 collisions
(thereby reducing output entropy), or to use the SHA1 feedback to induce
state collisions (thereby reducing pool entropy). But that seems to bring a
whole new meaning to the word "computationally infeasible", requiring
first preimage solutions over probability distributions.

Also, the entropy estimation may be flawed, and is pretty crude, just
heavily derated for safety. And given recent developments in keyboard
sniffing, and wireless keyboard deployment, I'm starting to think that
the idea (taken from PGP) of using the keyboard and mouse as an entropy
source is one whose time is past.

Given current processor clock rates and the widespread availability of
high-resolution timers, interrupt synchronization jitter seems like
a much more fruitful source. I think there are many bits of entropy
in the lsbits of the RDTSC time of interrupts, even from the periodic
timer interrupt! Even derating that to 0.1 bit per sample, that's still
a veritable flood of seed material.
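
(To make that concrete, here's a rough userspace toy -- x86 and GCC
inline asm assumed, and of course the real sampling would happen in the
interrupt handler, not after usleep(); this just shows the kind of
lsbits I mean:)

#include <stdio.h>
#include <unistd.h>

/* Read the CPU cycle counter (x86). */
static inline unsigned long long rdtsc(void)
{
        unsigned int lo, hi;
        __asm__ __volatile__("rdtsc" : "=a"(lo), "=d"(hi));
        return ((unsigned long long)hi << 32) | lo;
}

int main(void)
{
        int i;

        /* The jitter in these low bits is the raw seed material. */
        for (i = 0; i < 8; i++) {
                usleep(10000);
                printf("lsbits: 0x%02llx\n", rdtsc() & 0xff);
        }
        return 0;
}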


/dev/random has an even more important design goal of being universally
available; it should never cost enough to make disabling it attractive.
If this conflicts with information-theoretic security, the latter will
be compromised. But if a practical information-theoretic /dev/random is
(say) just too bulky for embedded systems, perhaps making a scaled-back
version available for such hosts (as a config option) could satisfy
both goals.

Ted, you wrote the thing in the first place; is my summary of the goals
correct? Would you like comment patches to clarify any of this?


Thank you for pointing out the paper; Appendix A is particularly
interesting. And the [BST03] reference looks *really* nice! I haven't
finished it yet, but based on what I've read so far, I'd like to
*strongly* recommend that any would-be /dev/random hackers read it
carefully. It can be found at
http://www.wisdom.weizmann.ac.il/~tromer/papers/rng.pdf

Happily, it *appears* to confirm the value of the LFSR-based input
mixing function. Although the suggested construction in section 4.1 is
different, and I haven't seen if the proof can be extended.

2005-04-16 11:44:53

by George Spelvin

[permalink] [raw]
Subject: Re: Fortuna

>> /dev/urandom depends on the strength of the crypto primitives.
>> /dev/random does not. All it needs is a good uniform hash.
>
> That's not at all clear. I'll go farther: I think it is unlikely
> to be true.
>
> If you want to think about cryptographic primitives being arbitrarily
> broken, I think there will be scenarios where /dev/random is insecure.
>
> As for what you mean by "good uniform hash", I think you'll need to
> be a bit more precise.

Well, you just pointed me to a very nice paper that *makes* it precise:

Boaz Barak, Ronen Shaltiel, and Eran Tromer. True random number generators
secure in a changing environment. In Workshop on Cryptographic Hardware
and Embedded Systems (CHES), pages 166-180, 2003. LNCS no. 2779.

I haven't worked through all the proofs yet, but it looks to be highly
applicable.

>> Do a bit of reading on the subject of "unicity distance".
>
> Yes, I've read Shannon's original paper on the subject, as well
> as many other treatments.

I hope it's obvious that I didn't mean to patronize *you* with such
a suggestion! Clearly, you're intimately familiar with the concept,
and any discussion can go straight on to more detailed issues.

I just hope you'll grant me that understanding the concept is pretty
fundamental to any meaningful discussion of information-theoretic
security.

> I stand by my comments above.

Cool! So there's a problem to be solved!

2005-04-16 15:06:30

by Jean-Luc Cooke

[permalink] [raw]
Subject: Re: Fortuna

On Sat, Apr 16, 2005 at 11:10:33AM -0000, [email protected] wrote:
> Thank you for pointing out the paper; Appendix A is particularly
> interesting. And the [BST03] reference looks *really* nice! I haven't
> finished it yet, but based on what I've read so far, I'd like to
> *strongly* recommend that any would-be /dev/random hackers read it
> carefully. It can be found at
> http://www.wisdom.weizmann.ac.il/~tromer/papers/rng.pdf
>
> Happily, it *appears* to confirm the value of the LFSR-based input
> mixing function. Although the suggested construction in section 4.1 is
> different, and I haven't seen if the proof can be extended.

Correct me if I'm wrong here, but uniformity of the linear function isn't
sufficient even if we implemented it like this (right now it's more a+X than
a <dot> X).

The part which suggests choosing an irreducible poly and a value "a" in the
preprocessing stage ... last I checked the value for a and the poly need to
be secret. How do you generate poly and a, Catch-22? Perhaps I'm missing
something and someone can point it out.

JLC

2005-04-16 15:46:43

by Jean-Luc Cooke

[permalink] [raw]
Subject: Re: Fortuna

On Sat, Apr 16, 2005 at 10:05:55AM -0000, [email protected] wrote:
> Anyway, back to the long-suffering [email protected]:

;)

> >> something? Because it makes zero difference, and reduces the code
> >> size and execution time. Which is obviously a Good Thing.)
>
> > It just bugged me when I was reading random.c for the first time. First
> > impressions and all.

> Interesting. Hopefully, further reflection has shown you that the change
> has zero security implications. (For the same reason, cryptanalysts
> studying DES ignore the initial permutation.)

Yes I saw that the first time, just struck me as "why? why did I have to sit
here and add that to the litany of things to remember when it's so easy to
just do it right?"

Guess I just see a different approach to software engineering: making it
easier to analyse makes it easier to improve.  You're 100% correct -
security-wise it doesn't matter.

> 1) Fortuna is trying to sidestep the hard problem of entropy measurement,
> to make it unnecessary. This is a laudable goal, but /dev/random's
> information-theoretic design goal precludes this simplification.
> It *has* to measure entropy. Are Fortuna's benefits sufficient to
> motivate a public interface change like that? If you can argue that
> the current entropy measurement is such crap that it's not actually
> delivering its promise anyway, that would be a good reason.
>
> But as long as you *have* entropy measurement, you don't *need*
> Fortuna's elaborate scheme for avoiding it. You only need *one*
> sub-pool.

I agree with the statement:
"If the current entropy estimation does not over estimate and crypto
primitives don't fail to leak information, then we're OK"

What I've seen in my study of an older kernel (2.6.x; the estimation scheme
hasn't changed afaik) is that it's very conservative, from what I understand
of my devices (keyboard and mouse not included, which is good, but they don't
contribute as much randomness as disks and interrupts anyway).

> 2) Fortuna tries to support such a wide range of entropy source rates,
> down to very low rates, that it sequesters some entropy for literally
> years. Ted thinks that's inexcusable, and I can't really disagree.
> This can be fixed to a significant degree by tweaking the number
> of subpools.

Well, I'd take hoarding entropy for 7 years over trusting a general entropy
accounting heuristic whose proper functioning is impossible to verify.
Assuming again that the problem of malicious entropy event injection is
overcome.  Haven't come up with anything yet on how to do this nicely.

> 3) Fortuna's design doesn't actually *work*. The authors' analysis
> only works in the case that the entropy seeds are independent, but
> forgot to state the assumption. Some people reviewing the design
> don't notice the omission.
...
> Assuming that an attacker can see the first pool, they can find n.
> After the second step, their uncertainty about the second pool is 1
> bit, the value of b0.
...
> Half a bit of entropy has leaked from the second pool back into the first!
...
> *That* is my big problem with Fortuna. If someone can finish the
> analysis and actually bound the leakage, then we can construct something
> that works. But I've pushed the idea around for a while and not figured
> it out.

I feel this goes back to the round-robin event input system.  Your argument
of n, n+b0, n+b0+b1, ... makes sense in the state compromise / state
initialisation scenarios if the attacker can know for certain that they will
see 1 out of every 32 inputs.  If each input instead has a 1-in-32
probability of landing in any given pool, then things get a lot harder.

Does this make sense? Ignoring the "where are you going to get a secure 1-in-32
RNG" issue.

If we look at this "slow-incremental entropy event input" situation with current
random.c, do we get a similar result?

Assuming an attacker can see the secondary pool (/dev/random output) and
events {n, n+b0, n+b0+b1} enter the primary pool at times {t, t+t1, t+t1+t2}.
If the rate is slow enough, then an accurate entropy estimation will stop
anything from getting out, great.

"How does the entropy estimator measure entropy of the event?" becomes a
crucial concern here. What if, by your leading example, there is 1/2 bit of
entropy in each event? Will the estimator even account for 1/2 bits? Or
will it see each event as 3 bits of entropy? How much of a margin of error
can we tolerate?

/dev/random will output once it has at least 160 bits of entropy (iirc); 1/2
bit turning into 3 bits would mean that 160 bits of output is effectively only
27 bits worth of true entropy (again, assuming the catastrophic reseeder and
output function don't waste entropy).

It's a lot of "ifs" for my taste.

> > I'll take my patch and not bother you anymore. I'm sure I've taken a
> > lot of your time as it is.
>
> And you've spent a lot of time preparing that patch. It's not a bad idea
> to revisit the ideas occasionally, but let's talk about the real *meat*
> of the issue.
>
> If you think my analysis of Fortuna's issues above is flawed, please
> say so! If you disagree about the importance of the issues, that's
> worth discussing too, although I can't promise that such a difference
> of opinions will ever be totally resolved. But arguing about the
> relative importance of good and bad points is meaningful.
>
> Ideally, we manage to come up with a solution that has all the good points.
>
> The only thing that's frustrating is discussing it with someone who doesn't
> even seem to *see* the issues.

Well, I'll do what I can then.  I understand your position and will do my
best to explain it in the thread on LKML.

> Actually, the GNU GPL doesn't. It only requires that you give out the
> source if and when you give out the binary. You can make as many
> private changes as you like. (Search debian-legal for "desert island
> test".)

Humm, I must be confusing it with another license then.  One that affects my
company required we submit changes back to the authors, so we just assumed
that to be the case for all of them to save the legal fees.

JLC

2005-04-16 16:30:35

by George Spelvin

[permalink] [raw]
Subject: Re: Fortuna

> Correct me if I'm wrong here, but uniformity of the linear function isn't
> sufficient even if we implemented it like this (right now it's more a+X than
> a <dot> X).
>
> The part which suggests choosing an irreducible poly and a value "a" in the
> preprocessing stage ... last I checked the value for a and the poly need to
> be secret. How do you generate poly and a, Catch-22? Perhaps I'm missing
> something and someone can point it out.

No, the value (the parameter pi) is specifically described as "the public
parameter". See the "Preprocessing" paragraph at the end of section 1.2
on page 3. "This string is then hardwired into the implementation and
need not be kept secret."

All that's required is that the adversary can't tailor his limited
control over the input based on knowing pi.

There's a simple proof in all the papers that if an adversary knows
*everything* about the randomness extraction function, and has total
control over the input distribution, you're screwed.

Basically, suppose you have a 1024-bit input block, the attacker
is required to choose a distribution with at least 1023 bits of entropy,
and you only want 1 bit out. Should be easy, right?
Well, with any *fixed* function, the possible inputs are divided into
those that hash to 0, and those that hash to 1. One of those sets
must have at least 2^1023 members. Suppose it's 0. The attacker can
choose the input distribution to be "uniformly at random from the
>= 2^1023 inputs that hash to 0" and keep the promise while totally
breaking your extraction function.

But this paper says that if the attacker has to choose 2^t possible
input distributions (based on t bits of control over the input)
*before* the random parameter pi is chosen, then they're locked out.
*After* learning pi, they can choose *which* of the 2^t input
distributions to use.


The thing is, you need a parameterized family of hash functions.
They choose a random multiplier in GF(2^n).  Their construction
is based on the well-known 2-universal family of hash functions
hash(x) = (a*x+b) mod p.
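
(A toy member of that family, just to fix ideas; the modulus and the
types are arbitrary choices of mine, not from the paper or the kernel:)

#include <stdint.h>

#define P 2147483647UL  /* 2^31 - 1, a convenient prime */

/* h_{a,b}(x) = (a*x + b) mod p, where the seed (a, b) -- a in
 * [1, p-1], b in [0, p-1] -- must be drawn uniformly at random.
 * The seed is deliberately a parameter here; the whole point is
 * that it is *not* fixed in advance. */
static uint32_t univ_hash(uint32_t x, uint32_t a, uint32_t b)
{
        uint64_t t = (uint64_t)a * (x % P) + b;
        return (uint32_t)(t % P);
}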

The /dev/random input mix is based on choosing a "random" polynomial
(since there was a lot of efficiency pressure, it isn't actually very
random; the question is, is it non-random enough to help an attacker).
Remainder modulo a uniformly chosen random irreducible polynomial is a
well-known ("division hash") family of universal hash functions, but
it's a little bit weaker than the above, and I have to figure out if
the proof extends.

2005-04-16 17:16:45

by George Spelvin

[permalink] [raw]
Subject: Re: Fortuna

> Yes I saw that the first time, just struck me as "why? why did I have to sit
> here and add that to the litany of things to remember when it's so easy to
> just do it right?"

Huh? Why? Because it's faster and less code, that's why.
In what conceivable way is it NOT right?

On little-endian machines, we define the mixing function to be
the bit-reversed hash of the bit-reversed pool.  We could just as
easily store the entire pool bit-reversed and bit-reverse the inputs
before mixing them in, and use stock SHA1.

Since we don't need to interoperate with anybody, details like
bit-ordering conventions are at the implementor's convenience.

The entire transformation has zero cryptological impact, so do it with
less code. You don't *have* to remember it, any more than you have to
remember whether I incremented the loop counter with i++, ++i, i += 1,
or i = i+1. It's SHA-1, and all the security properties still apply.

> Guess I just see a different approach to software engineering: making it
> easier to analyse makes it easier to improve. You're 100% correct -
> security-wise it doesn't matter.

I don't even see how it's any harder to analyze. Any more than it
matters for most algorithm analysis whether the numbers are stored in
binary or decimal, or whether the loop counter is called i or j.

> I agree with the statement:
> "If the current entropy estimation does not over estimate and crypto
> primitives don't fail to leak information, then we're OK"

Er.. "if the crypto primitives don't fail TO leak information"?
I'm not quite sure I follow...

>> 2) Fortuna tries to support such a wide range of entropy source rates,
>> down to very low rates, that it sequesters some entropy for literally
>> years. Ted thinks that's inexcusable, and I can't really disagree.
>> This can be fixed to a significant degree by tweaking the number
>> of subpools.

> Well, I'd take hoarding entropy for 7 years over trusting a general entropy
> accounting heuristic whose proper functioning is impossible to verify.
> Assuming again that the problem of malicious entropy event injection is
> overcome. Haven't come up with anything yet on how to do this nicely.

The problem is that it's spectacularly unlikely that the computer
will still be running in 7 years. Any entropy put into that pool is
effectively lost *permanently*.

But maybe we can come to a compromise like 24 hours.

Plus, even a crude entropy estimate can do better.
Suppose we want to do 128-bit catastrophic reseeding.  But we don't
trust our entropy estimator.  Still, we think it's within a factor of
100 of being correct.  So we keep 8 subpools.  The first is dumped into
the main pool when we estimate it holds 128 bits.  The second, when the
estimate reaches 256.  Etc., until the eighth, for which we wait for
16K bits - the 128 bits we want plus a factor-of-128 safety margin.

And that only requires 8 pools, not 32.
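
(In code form, for concreteness -- just the schedule, nothing more:)

#include <stdio.h>

/* Subpool i dumps into the main pool once its estimated entropy
 * reaches 128 * 2^i bits. */
int main(void)
{
        int i;

        for (i = 0; i < 8; i++)
                printf("subpool %d: reseed at %d bits\n", i, 128 << i);
        return 0;
}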

(Still, unless our 8th pool can actually hold 16 kbits = 2048 bytes,
we're still wasting lots of entropy in the case that our entropy
estimator is correct. And it's already pretty damn conservative
by design.)

> I feel this goes back to the round-robin event input system. Your
> argument of n, n+b0, n+b0+b1, ... makes sense in the state compromise
> / state initialisation scenarios if the attacker can know for certain
> that they will see 1 out of every 32 inputs. If each input instead has
> a 1-in-32 probability of landing in any given pool, then things get a
> lot harder.

Yes, but if you had a way to get 5 uniform random bits the attacker
didn't know, the entire issue would be trivial; just keep getting
5 random bits until you have as much as you need.

> Does this make sense? Ignoring the "where are you going to get a secure
> 1-in-32 RNG" issue.

I have a really hard time letting go of the latter, but let me think...

No, I don't see how it helps more than the 5 random bits you're
giving yourself for free. After any given subpool-0-to-main-pool
transfer, the attacker can easily see if subpool 0 was reseeded or not.
If it was, we can brute-force the seed. With enough seeds at known
sequence positions, and high enough serial correlations, we can deduce
all the intermediate seeds. Now the only uncertainty is which other pools
they got added to.

Which is the 5 bits of uncertainty you're giving yourself for free.

> Assuming an attacker can see the secondary pool (/dev/random output) and
> events {n, n+b0, n+b0+b1} enter the primary pool at times {t, t+t1, t+t1+t2}.
> If the rate is slow enough, then an accurate entropy estimation will stop
> anything from getting out, great.

That's the hope.

> "How does the entropy estimator measure entropy of the event?" becomes a
> crucial concern here. What if, by your leading example, there is 1/2 bit
> of entropy in each event? Will the estimator even account for 1/2 bits?
> Or will it see each event as 3 bits of entropy? How much of a margin
> of error can we tolerate?

H'm... the old code *used* to handle fractional bits, but the new code
seems to round down to the nearest bit. May have to get fixed to
handle low-rate inputs.

As for margin of error, any persistent entropy overestimate is Bad.
A 6-fold overestimate is disastrous.

What we can do is refuse to drain the main pool below, say, 128 bits of
entropy. Then we're safe against any *occasional* overestimates
as long as they don't add up to 128 bits.

> /dev/random will output once it has at least 160 bits of entropy
> (iirc); 1/2 bit turning into 3 bits would mean that 160 bits of output
> is effectively only 27 bits worth of true entropy (again, assuming the
> catastrophic reseeder and output function don't waste entropy).
>
> It's a lot of "ifs" for my taste.

/dev/random will output once it has as many bits of entropy as you're
asking for. If you do a 20-byte read, it'll output once it has 160
bits. If you do a 1-byte read, it'll output once it has 8 bits.

2005-04-16 19:01:10

by Matt Mackall

[permalink] [raw]
Subject: Re: Fortuna

On Sat, Apr 16, 2005 at 10:05:55AM -0000, [email protected] wrote:
> Merging e-mails, first from [email protected]:
> > You really ought to look at the _current_ implementation. There is no
> > SHA1 code in random.c.
>
> So I'm imagining the call to sha_transform() in 2.6.12-rc2's
> extract_buf()? The SHA1 code has been moved to lib/sha1.c, so there's
> no SHA1 code *lexically* in random.c, but that's a facile response;
> it's a cryptologically meaningless change.

No, it's exactly to the point: he's forked random.c before a large set
of changes that he needs to be aware of. The SHA1 code is now shared
by cryptolib and obviously no longer suffers from the (non-existent)
weakness he referenced.

--
Mathematics is the supreme nostalgia of our time.

2005-04-16 19:22:55

by Matt Mackall

[permalink] [raw]
Subject: Re: Fortuna

On Sat, Apr 16, 2005 at 05:16:22PM -0000, [email protected] wrote:
> > "How does the entropy estimator measure entropy of the event?" becomes a
> > crucial concern here. What if, by your leading example, there is 1/2 bit
> > of entropy in each event? Will the estimator even account for 1/2 bits?
> > Or will it see each event as 3 bits of entropy? How much of a margin
> > of error can we tolerate?
>
> H'm... the old code *used* to handle fractional bits, but the new code
> seems to round down to the nearest bit. May have to get fixed to
> handle low-rate inputs.

I don't believe that was ever true, though it can fairly trivially be added.
JLC, please note that entropy estimation is much more conservative now
than it was a month ago.

> As for margin of error, any persistent entropy overestimate is Bad.
> A 6-fold overestimate is disastrous.
>
> What we can do is refuse to drain the main pool below, say, 128 bits of
> entropy. Then we're safe against any *occasional* overestimates
> as long as they don't add up to 128 bits.

I've been moving in that direction already; most of the infrastructure
is in place.

> > /dev/random will output once it has at least 160 bits of entropy
> > (iirc); 1/2 bit turning into 3 bits would mean that 160 bits of output
> > is effectively only 27 bits worth of true entropy (again, assuming the
> > catastrophic reseeder and output function don't waste entropy).
> >
> > It's a lot of "ifs" for my taste.
>
> /dev/random will output once it has as many bits of entropy as you're
> asking for. If you do a 20-byte read, it'll output once it has 160
> bits. If you do a 1-byte read, it'll output once it has 8 bits.

That's not quite right. It needs 8 bits in the relevant output pool.
Failing that, it needs 64 bits in the input pool to reseed the output
pool. In the case of /dev/urandom, it needs 128 bits in the input
pool, so that it always leaves enough for /dev/random to reseed.

--
Mathematics is the supreme nostalgia of our time.

2005-04-16 23:43:07

by daw

[permalink] [raw]
Subject: Re: Fortuna

linux wrote:
>David Wagner wrote:
>>linux wrote:
>>> First, a reminder that the design goal of /dev/random proper is
>>> information-theoretic security. That is, it should be secure against
>>> an attacker with infinite computational power.
>
>> I am skeptical.
>> I have never seen any convincing evidence for this claim, [..]
>
>I'm not sure which claim you're skeptical of. The claim that it's
>a design goal, or the claim that it achieves it?

Oops!  Gosh, I was unclear, wasn't I?  Sorry about that.
I meant the latter claim.

I certainly agree that information-theoretic security is a stated goal
of /dev/random. I'm just not certain that it achieves this goal.
(Caveat: I don't think that failing to achieve this goal is problematic.)

>Whether the goal is *achieved* is a different issue. random.c tries
>pretty hard, but makes some concessions to practicality, relying on
>computational security as a backup. (But suggestions as to how to get
>closer to the goal are still very much appreciated!)

Ok.

>In particular, it is theoretically possible for an attacker to exploit
>knowledge of the state of the pool and the input mixing transform to
>feed in data that permutes the pool state to cluster in SHA1 collisions
>(thereby reducing output entropy), or to use the SHA1 feedback to induce
>state collisions (thereby reducing pool entropy). But that seems to bring a
>whole new meaning to the word "computationally infeasible", requiring
>first preimage solutions over probability distributions.

Well, wait a second. You have to make up your mind about whether you
are claiming information-theoretic security, or claiming computational
security. If the former, then this is absolutely an admissible attack.
There is nothing whatsoever wrong with this attack, from an
information-theoretic point of view. On the other hand, if we are talking
about computational security, then I totally agree that this is a
(computationally) infeasible attack.

>Also, the entropy estimation may be flawed, and is pretty crude, just
>heavily derated for safety. And given recent developments in keyboard
>sniffing, and wireless keyboard deployment, I'm starting to think that
>the idea (taken from PGP) of using the keyboard and mouse as an entropy
>source is one whose time is past.
>
>Given current processor clock rates and the widespread availability of
>high-resolution timers, interrupt synchronization jitter seems like
>a much more fruitful source. I think there are many bits of entropy
>in the lsbits of the RDTSC time of interrupts, even from the periodic
>timer interrupt! Even derating that to 0.1 bit per sample, that's still
>a veritable flood of seed material.

Makes sense.


As for your question about what one could do to achieve
information-theoretic security, there is a bunch of theoretical work
in the CS theory world on this subject (look up, e.g., "extractors").
Here is my summary about what is possible:

1) If you don't know anything about your source, and you don't
start with any entropy, then information-theoretically secure randomness
extraction is impossible -- at least in principle. You pick any
deterministic algorithm for randomness extraction, and I will show you
a source for which that algorithm fails.

2) If you start with a short seed of uniformly distributed perfect
randomness, and you have a lower bound on the amount of entropy provided
by your source, then you can extract random bits in a way that is
provably information-theoretically secure. Note that you don't have
to know anything about the distribution of the source, other than that
its (min-)entropy is not too small. The simplest construction uses a
2-universal hash function keyed by the seed (its security is established
by the Leftover Hashing Lemma), but there are other constructions,
including a class of schemes known as "extractors". This approach
does require a short seed of perfect true randomness for every chunk
of output you want to generate, though.

3) If you make certain assumptions about the source, you can extract
entropy in a way that is provably information-theoretically secure,
without needing the short seed. However, the assumptions required are
typically fairly strong: e.g., that your source is completely memoryless;
that you have multiple sources that are totally independent (i.e.,
uncorrelated in any way); or that your source has a certain structure
(e.g., k of the n bits are uniformly random and independent, and the
other n-k bits are fixed in advance). People are actively working to
push the boundaries in this category.

I'm not sure whether any of the above will be practically relevant.
They may be too theoretical for real-world use. But if you're interested,
I could try to give you more information about any of these categories.

2005-04-17 00:21:20

by daw

[permalink] [raw]
Subject: Re: Fortuna

linux wrote:
>3) Fortuna's design doesn't actually *work*. The authors' analysis
> only works in the case that the entropy seeds are independent, but
> forgot to state the assumption. Some people reviewing the design
> don't notice the omission.

Ok, now I understand your objection. Yup, this is a real objection.
You are right to ask questions about whether this is a reasonable assumption.

I don't know whether /dev/random makes the same assumption. I suspect that
its entropy estimator is making a similar assumption (not exactly the same
one), but I don't know for sure.

I also don't know whether this is a realistic assumption to make about
the physical sources we currently feed into /dev/random. That would require
some analysis of the physics of those sources, and I don't have the skills
it would take to do that kind of analysis.

> Again, suppose we have an entropy source that delivers one fresh
> random bit each time it is sampled.
>
> But suppose that rather than delivering a bare bit, it delivers the
> running sum of the bits. So adjacent samples are either the same or
> differ by +1. This seems to me an extremely plausible example.
>
> Consider a Fortuna-like thing with two pools. The first pool is seeded
> with n, then the second with n+b0, then the first again with n+b0+b1.
> n is the arbitrary starting count, while b0 and b1 are independent
> random bits.
>
> Assuming that an attacker can see the first pool, they can find n.
> After the second step, their uncertainty about the second pool is 1
> bit, the value of b0.
>
> But the third step is interesting. The attacker can see the value of
> b0+b1. If the sum is 0 or 2, the value of b0 is determined uniquely.
> Only in the case that b0+b1 = 1 is there uncertainty. So we have
> only *half* a bit of uncertainty (one bit, half of the time) in the
> second pool.
[..]
> I probably just don't have enough mathematical background, but I don't
> currently know how to bound this leakage.

Actually, this example scenario is not a problem. I'll finish the
analysis for you. Suppose that the adversary can observe the entire
evolution of the first pool (its initial value, and all updates to it).
Assume the adversary knows n. In one round (i.e., a pair of updates),
the adversary learns the value of b0 + b1 (and nothing more!). In the
next round, the adversary learns b0' + b1' -- and so on.

How many bits of uncertainty have been added to the second pool in
each round? With probability 1/2, the uncertainty of the second pool
remains unchanged. With probability 1/2, the uncertainty increases by
exactly 1 bit. This means there are two classes of updates, and both
classes are equally likely.

Suppose we perform 200 rounds of updates. Then we can expect about
100 of these updates to be of the second class. If the updates were
split evenly (50/50) between these two classes, the adversary would
have 100 bits of uncertainty about the second pool. In general, we
expect somewhere near 100 bits of uncertainty -- sometimes a bit more,
sometimes a bit less, but the chances that it is a lot less than 100
bits of uncertainty are exponentially small.

Therefore, except for an event that occurs with exponentially small
probability, the adversary will be left with many bits of uncertainty
about the second pool. So this kind of source should not pose a serious
problem for Fortuna, or for any two-pool solution.
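
(A quick Monte Carlo check of that concentration claim -- throwaway
code, nothing rigorous:)

#include <stdio.h>
#include <stdlib.h>

/* Each round, two fresh bits b0, b1 go in and the adversary sees
 * b0+b1; the round adds a full bit of uncertainty to the second pool
 * only when the sum is 1.  Count those rounds over many 200-round
 * runs and report the worst case (it stays far above zero). */
int main(void)
{
        int trial, worst = 200;

        srand(1);       /* fixed seed; this is a toy */
        for (trial = 0; trial < 10000; trial++) {
                int round, bits = 0;

                for (round = 0; round < 200; round++) {
                        int b0 = rand() & 1, b1 = rand() & 1;
                        if (b0 + b1 == 1)
                                bits++;
                }
                if (bits < worst)
                        worst = bits;
        }
        printf("worst over 10000 trials: %d bits\n", worst);
        return 0;
}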


If you want a better example of where the two-pool scheme completely
falls apart, consider this: our source picks a random bit, uses this
same bit the next two times it is queried, and then picks a new bit.
Its sequence of outputs will look like (b0,b0,b1,b1,b2,b2,...).  If
we alternate pools, then the first pool sees the sequence b0,b1,b2,..
and the second pool sees exactly the same sequence. Consequently, an
adversary who can observe the entire evolution of the first pool can
deduce everything there is to know about the second pool. This just
illustrates that these multiple-pool solutions make some assumptions
about the time-independence of their sources, or at least that the bits
going into one pool don't have too much correlation with the bits going
into the other pool.

2005-04-17 00:38:48

by daw

[permalink] [raw]
Subject: Re: Fortuna

linux wrote:
>Thank you for pointing out the paper; Appendix A is particularly
>interesting. And the [BST03] reference looks *really* nice! I haven't
>finished it yet, but based on what I've read so far, I'd like to
>*strongly* recommend that any would-be /dev/random hackers read it
>carefully. It can be found at
>http://www.wisdom.weizmann.ac.il/~tromer/papers/rng.pdf

Yeah, [BST03] seems worth reading. It has a reasonable survey of some
previous work, and is well-written.

However, I'm pretty skeptical about [BST03] as a basis for a real-world
randomness generator. It assumes that there are only 2^t possible
distributions for the source, and the set of possible distributions has
been fixed in advance (before the design of your randomness generator
is revealed). Consequently, it fails to defend against adaptive attacks.

If the attacker can feed in maliciously chosen inputs (chosen after the
attacker learns which randomness extraction algorithm you are using),
then the BST03 scheme promises nothing. For instance, if you feed in
timings of network packets, then even if you don't count them as providing
any entropy, the mere act of feeding them into your randomness generator
causes their theorems to be clearly inapplicable (since no matter what
value of t you pick, the adversary can arrange to get more than t bits
of freedom in the network packets he sends you).

So I'm not sure [BST03]'s theorems actually promise what you'd want.

On the other hand, if you want to take their constructions as providing
some intuition or ideas about how one might build a randomness generator,
while realizing that their theorems don't apply and there may be no
useful guarantees that can be proven about such an approach, I don't
have any objections to that view.

By the way, another example of work along these lines is
http://theory.lcs.mit.edu/~yevgen/ps/2-ext.ps
That paper is more technical and theoretically-oriented, so it might
be harder to read and less immediately useful. It makes a strong
assumption (that you have two sources that are independent -- i.e.,
totally uncorrelated), but the construction at the heart of their paper
is pretty simple, which might be of interest.

>Happily, it *appears* to confirm the value of the LFSR-based input
>mixing function. Although the suggested construction in section 4.1 is
>different, and I haven't seen if the proof can be extended.

Well, I don't know. I don't think I agree with that interpretation.

Let me give a little background about 2-universal hashing. There is a
basic result about use of 2-universal hash functions, which says that
if you choose the seed K truly at random, then you can use h_K(X) to
extract uniform random bits from a non-uniform source X. (Indeed, you
can even reveal K without harming the randomness of h_K(X).) The proof
of this fact is usually known as the Leftover Hashing Lemma.

One of the standard constructions of a 2-universal hash function is
as a LFSR-like scheme, where the seed K is used to select the feedback
polynomial. But notice that it is critical that the feedback polynomial
be chosen uniformly at random, in a way that is unpredictable to the
attacker, and kept secret until you receive data from the source.

What /dev/random does is quite different from the idea of 2-universal
hashing developed in the theory literature and recounted in [BST03].
/dev/random fixes a single feedback polynomial in advance, and publishes
it for the world to see. The theorems about 2-universal hashing promise
nothing about use of a LFSR with a fixed feedback polynomial.

2005-04-17 00:40:00

by daw

[permalink] [raw]
Subject: Re: Fortuna

Jean-Luc Cooke wrote:
>The part which suggests choosing an irreducible poly and a value "a" in the
>preprocessing stage ... last I checked the value for a and the poly need to
>be secret. How do you generate poly and a, Catch-22? Perhaps I'm missing
>something and someone can point it out.

I don't think you're missing anything. What you say matches my
understanding as well.

2005-04-17 09:22:08

by George Spelvin

[permalink] [raw]
Subject: Re: Fortuna

Okay, I've merged several e-mails here for compactness...

>> I'm not sure which claim you're skeptical of. The claim that it's
>> a design goal, or the claim that it achieves it?
>
> Oops! Gosh, I was unclear, wasn't I? Sorry about that.
> I meant the latter claim.
>
> I certainly agree that information-theoretic security is a stated goal
> of /dev/random. I'm just not certain that it achieves this goal.
> (Caveat: I don't think that failing to achieve this goal is problematic.)

Great, there's one point of contention disposed of. As I said, a
*more* important goal is universal availability. Theoretical perfection
can be sacrificed to achieve practicality. In particular, we want
to limit code size, data size, and particularly execution time for
entropy gathering. Execution time for entropy output is not a major
concern.

>> [Description of attack requiring truly preposterous amount of computation.]

> Well, wait a second. You have to make up your mind about whether you
> are claiming information-theoretic security, or claiming computational
> security. If the former, then this is absolutely an admissible attack.
> There is nothing whatsoever wrong with this attack, from an
> information-theoretic point of view. On the other hand, if we are talking
> about computational security, then I totally agree that this is a
> (computationally) infeasible attack.

I don't disagree with anything you're saying, but I'd like to point out
a large grey area between information-theoretic and computational security,
which is "robustness in the face of failure of the crypto primitives".

Computational security would be plenty, and everyone would be happy to
scale back /dev/random's ambitions to that, if we could strictly bound
the amount of effort required for a break. But until someone comes
up with an explicit hard function (and proves that P != NP), that's
not available. So all computational security statements are based on
the somewhat unsatisfactory assumption that our cryptographic primitive
is a hard function when we don't actually have a proof that there are any.

And I'm really not up on quantum computing, but there's a whole 'nother
pile of risk that NP-hard might not be hard enough.

Given all that, there is a continual danger that any particular crypto
primitive will be broken. The recent developments in hash functions
are an excellent example.

So /dev/random, where it relies on computational assumptions, wants to
be very robust in the face of compromise of the primitives. Like the
example I gave: finding collisions in a hash function is one thing;
the attack I described makes finding a first preimage look easy.

No, it's not information-theoretic secure, but I'm not losing a lot
of sleep worrying about it, either.


But anyway, does that make sense? The goal is "rely as little as possible
on the unproven hardness of cryptographic functions", which reaches
information-theoretic security in the limit. So that's the *goal*, but
not reaching it isn't quite *failure*, either.

> As for your question about what one could do to achieve
> information-theoretic security, there is a bunch of theoretical work
> in the CS theory world on this subject (look up, e.g., "extractors").
> Here is my summary about what is possible:

Thank you very much. I've been reading on the subject and learned this,
but it's nice to have it confirmed that I didn't miss anything.

> [Descriptions snipped for conciseness]

Given that we *do* have multiple mostly-independent sources,
perhaps the two-source construction could be adapted. What would be
really nice is if it could be extended to n sources such that if any
pair were independent, it would be secure.

Oh, duh, that's trivial... run the two-source construction over every
pair of sources, and XOR all the outputs together. Of course,
I still need to take care of that nasty word "mostly".

Even if that's horribly inefficient, it could be used to bootstrap a short
seed to start an efficient seed-using extractor.

> 3) If you make certain assumptions about the source, you can extract
> entropy in a way that is provably information-theoretically secure,
> without needing the short seed. However, the assumptions required are
> typically fairly strong: e.g., that your source is completely memoryless;
> that you have multiple sources that are totally independent (i.e.,
> uncorrelated in any way); or that your source has a certain structure
> (e.g., k of the n bits are uniformly random and independent, and the
> other n-k bits are fixed in advance). People are actively working to
> push the boundaries in this category.

[BST03] added a nice one to this: if your source can't adapt to
your seed, you can use a fixed seed.

Also, unless I'm misunderstanding the definition very badly, any "strong
extractor" can use a fixed secret seed.

> I'm not sure whether any of the above will be practically relevant.
> They may be too theoretical for real-world use. But if you're interested,
> I could try to give you more information about any of these categories.

I'm doing some reading to see if something practical can be dug out of the
pile. I'm also looking at "compressors", which are a lot like our random
pools; they reduce the size of an input while preserving its entropy,
just not necessarily to 100% density like an extractor.

This is attractive because our entropy measurement is known to be
heavily derated for safety. An extractor, in producing an output that
is as large as our guaranteed entropy, will throw away any additional
entropy that might be remaining.

The other thing that I absolutely need is some guarantee that things will
still mostly work if our entropy estimate is wrong. If /dev/random
produces 128 bits of output that only have 120 bits of entropy in them,
then your encryption is still secure. But these extractor constructions
are very simple and linear. If everything falls apart if I overestimate
the source entropy by 1 bit, it's probably a bad idea. Maybe it can be
salvaged with some cryptographic techniques as backup.

>> 3) Fortuna's design doesn't actually *work*. The authors' analysis
>> only works in the case that the entropy seeds are independent, but
>> forgot to state the assumption. Some people reviewing the design
>> don't notice the omission.

> Ok, now I understand your objection. Yup, this is a real objection.
> You are right to ask questions about whether this is a reasonable assumption.
>
> I don't know whether /dev/random makes the same assumption. I suspect that
> its entropy estimator is making a similar assumption (not exactly the same
> one), but I don't know for sure.

Well, the entropy *accumulator* doesn't use any such assumption.
Fortuna uses the independence assumption when it divides up the seed
material round-robin among the various subpools. The current /dev/random
doesn't do anything like that. (Of course, non-independence affects
us by limiting us to the conditional entropy of any given piece of
seed material.)

> I also don't know whether this is a realistic assumption to make about
> the physical sources we currently feed into /dev/random. That would require
> some analysis of the physics of those sources, and I don't have the skills
> it would take to do that kind of analysis.

And given the variety of platforms that Linux runs on, it gets insane.
Yes, it can be proved based on fluid flow computations that hard drive
rotation rates are chaotic and thus disk access timing is a usable
entropy source, but then someone installs a solid-state disk.

That's why I like clock jitter. That just requires studying oscillators
and PLLs, which are universal across all platforms.

> Actually, this example scenario is not a problem. I'll finish the
> analysis for you.

Er... thank you, but I already knew that; I omitted the completion
because it seemed obvious. And yes, there are many other distributions
which are worse.

But your 200-round assumption is flawed; I'm assuming the Fortuna
schedule, which is that subpool i is dumped into the main pool (and
thus information-theoretically available at the output) every 2^i
rounds. So the second pool is dumped in every 2 rounds, not every 200.
And with 1/3 the entropy rate, if the first pool is brute-forceable
(which is our basic assumption), then the second one certainly is.

Now, this simple construction doesn't extend to more pools, but it's
trying to point out the lack of a *disproof* of a source distribution
where higher-order pools get exponentially less entropy per seed
due to the exposure of lower-order pools.

Which would turn Fortuna into an elaborate exercise in bit-shuffling
for no security benefit at all.

This can all be dimly seen through the papers on extractors, where
low-k sources are really hard to work with; all the designs want you
to accumulate enough input to get a large k.

> If you want a better example of where the two-pool scheme completely
> falls apart, consider this: our source picks a random bit, uses this
> same bit the next two times it is queried, and then picks a new bit.
> Its sequence of outputs will look like (b0,b0,b1,b1,b2,b2,...). If
> we alternate pools, then the first pool sees the sequence b0,b1,b2,..
> and the second pool sees exactly the same sequence. Consequently, an
> adversary who can observe the entire evolution of the first pool can
> deduce everything there is to know about the second pool. This just
> illustrates that these multiple-pool solutions make some assumptions
> about the time-independence of their sources, or at least that the bits
> going into one pool don't have too much correlation with the bits going
> into the other pool.

Yes, exactly. I had a more elaborate construction (based on a shift
register as long as the number of pools) that actually delivered one
fresh shiny bit of conditional entropy per source sample and yet still
totally broke Fortuna.

What I'd like to figure out is a way of expressing what the correlation
requirements *are* for Fortuna to be secure. Then we could have a
meaningful discussion as to whether real-world sources meet those
conditions.


> Yeah, [BST03] seems worth reading. It has a reasonable survey of some
> previous work, and is well-written.
>
> However, I'm pretty skeptical about [BST03] as a basis for a real-world
> randomness generator. It assumes that there are only 2^t possible
> distributions for the source, and the set of possible distributions has
> been fixed in advance (before the design of your randomness generator
> is revealed). Consequently, it fails to defend against adaptive attacks.

Yes, but as the authors claim, if you can use physics/hardware arguments
to *bound* the extent of the adaptability (which is what their parameter
t is all about), then you can use a fixed-parameter extractor.

Just for future reference, the usual extractor parameters are:
n = Input size (bits)
k = Input entropy (min-entropy measure, not Shanon!)
m = Output size (<= k)
eps (epsilon, really) = Statistical distance from perfect uniformity,
a very small number that measures the quality of the output.
Technically, it's 1/2 the sum, over all 2^m possible outputs x,
of abs(probability(output = x) - 1/(2^m)). The worst possible
value, for a constant function, approaches 1 (actually 1 - 1/(2^m)).
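
In symbols, that last definition (my transcription of the same thing):

  \varepsilon = \frac{1}{2} \sum_{x \in \{0,1\}^m}
                \left| \Pr[\mathrm{output} = x] - 2^{-m} \right|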

> If the attacker can feed in maliciously chosen inputs (chosen after the
> attacker learns which randomness extraction algorithm you are using),
> then the BST03 scheme promises nothing. For instance, if you feed in
> timings of network packets, then even if you don't count them as providing
> any entropy, the mere act of feeding them into your randomness generator
> causes their theorems to be clearly inapplicable (since no matter what
> value of t you pick, the adversary can arrange to get more than t bits
> of freedom in the network packets he sends you).

That wouldn't be a problem if we could extract the k bits of entropy
from the n-t uncontrolled degrees of freedom; just make m proportional
to the non-malicious source rate.

However, their proof only works for m = k - 2*t - 4*log(1/eps) - 2.
Malicious bits count twice as bad as known bits, so a source that's more
than half malicious bits is useless.

It may turn out that trying to use this for information-theoretic security
requires throwing out bits we can't prove aren't malicious, which would
require giving up so many possibly good bits that we don't want to do it.

But the seeded extractors still look promising. They don't care how
the n-k non-random input bits are chosen.

> One of the standard constructions of a 2-universal hash function is
> as a LFSR-like scheme, where the seed K is used to select the feedback
> polynomial. But notice that it is critical that the feedback polynomial
> be chosen uniformly at random, in a way that is unpredictable to the
> attacker, and kept secret until you receive data from the source.

Ah, it *is* a 2-universal hash, good. I was getting confused between
2-universal hash and epsilon-almost universal hash, and wasn't quite
sure if the definitions were compatible.

> What /dev/random does is quite different from the idea of 2-universal
> hashing developed in the theory literature and recounted in [BST03].
> /dev/random fixes a single feedback polynomial in advance, and publishes
> it for the world to see. The theorems about 2-universal hashing promise
> nothing about use of a LFSR with a fixed feedback polynomial.

I know it doesn't, but what it *does* say mirrors what the /dev/random
source says... *if* the source is *not* adaptive to the polynomial,
*then* it's a good extractor.

The tolerance of malicious inputs is a separate argument based on the
reversibility of the LFSR input mix and the computational infeasibility
of creating collisions in the output mixing function. This is the
attack I pointed out a couple of messages ago, but it's such
a hard problem that I'm not worried about it.

Briefly, suppose an attacker has partial information about the state
of the input pool. This can be expressed as a list of probabilities
for each of the 2^4096 possible states. The attacker can feed whatever
data he likes into the input hash, and the effect will only be to
permute those states, which won't change the Shannon- or min-entropy
measures of the pool in any way.

The only thing the attacker can do is try to cluster those states in
places that cause hash collisions. By feeding data to the input
mix, he can:
- Multiply the state of the pool by any desired power of x (mod the
polynomial), and
- XOR the pool with any desired number.
If the attacker can use this freedom, combined with *partial* knowledge
of the pool, to cause excess collisions in the output mixing function,
he can (partially) control the RNG output. (The fact that the attacker
is limited to 2^8192 of the (2^4096)! possible state permutations
definitely limits him somewhat. As does the fact that he can't actually
multiply by *any* power of x, since the time taken is proportional to
the power, and not all 2^4096-1 possibilities can be reached before
the sun goes nova.)
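
(A toy model of those two operations, on a 32-bit "pool" -- POLY here
is just the CRC-32 polynomial, picked for illustration; it is not the
polynomial random.c actually uses, and the real pool is 4096 bits:)

#include <stdint.h>

#define POLY 0x04c11db7u        /* illustrative feedback polynomial */

/* One "multiply by x mod POLY" step. */
static uint32_t mul_by_x(uint32_t pool)
{
        uint32_t msb = pool & 0x80000000u;

        pool <<= 1;
        if (msb)
                pool ^= POLY;
        return pool;
}

/* Feeding one input word w is then pool = mul_by_x(pool) ^ w, so a
 * string of inputs gives the attacker exactly the two freedoms listed
 * above: multiplication by powers of x, and XORs with chosen
 * constants. */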


But what I am figuring out is that *if* we can consider the input stream
to be a sort of bit-fixing source (some bits of the input are *not*
observable or controllable by the attacker), then the current LFSR is
a decent concentrator. That seems like a pretty plausible assumption,
and applies no matter how large a fraction of the input is under enemy
control, as long as we account for the entropy properly.

In particular, this gets our input entropy k up to a reasonable fraction
of our input block size n, which makes the information-theoretic extractors
work better. (The downside is that it's no longer a bit-fixing source.)

If we can then add an information-theoretic extractor as an output hash,
things are looking very promising. First of all, this is not a performance-
critical step, so I have a lot of run-time to play with. And second, the
first-stage concentrator has got k up to a reasonable fraction of the
input block size n, which helps all of the extractor constructions.

(Further, I can limit my extractor function to a fixed output size m,
chosen to be a suitable catastrophic-reseed value for subsequent stages.)

I'm sure the Leftover Hash Lemma applies here, and I've found a couple of
statements of it, but I don't *quite* understand it yet. That is, I
understand one statement ("If H is a universal hash family from n bits
to l bits, where l = k - 2 * log(1/eps), then Ext(x,h) = (h, h(x)) is
a (k,eps)-extractor."), but I don't see how it's the same as the
other, which indicates that my understanding is incomplete. But I'm
still working on it.
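
For reference, that first statement in symbols (my transcription, so
caveat lector): if h is drawn uniformly from a universal family H of
functions from {0,1}^n to {0,1}^l, X has min-entropy at least k, and

  l \le k - 2\log_2(1/\varepsilon)

then

  \Delta\bigl( (h, h(X)),\; (h, U_l) \bigr) \le \varepsilon

where U_l is uniform on {0,1}^l and \Delta is statistical distance.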


Anyway, thanks a lot for all the pointers!

2005-04-18 19:13:23

by Matt Mackall

[permalink] [raw]
Subject: Re: Fortuna

[please reply to all when posting to lkml]

On Sat, Apr 16, 2005 at 01:08:47AM +0000, David Wagner wrote:
> >First, a reminder that the design goal of /dev/random proper is
> >information-theoretic security. That is, it should be secure against
> >an attacker with infinite computational power.
>
> I am skeptical.
> I have never seen any convincing evidence for this claim,
> and I suspect that there are cases in which /dev/random fails
> to achieve this standard.
>
> And it seems I am not the only one. See, e.g., Section 5.3 of:
> http://eprint.iacr.org/2005/029

Unfortunately, this paper's analysis of /dev/random is so shallow that
they don't even know what hash it's using. Almost all of section 5.3
is wrong (and was when I read it initially).

--
Mathematics is the supreme nostalgia of our time.

2005-04-18 21:42:41

by daw

[permalink] [raw]
Subject: Re: Fortuna

Matt Mackall wrote:
>On Sat, Apr 16, 2005 at 01:08:47AM +0000, David Wagner wrote:
>> http://eprint.iacr.org/2005/029
>
>Unfortunately, this paper's analysis of /dev/random is so shallow that
>they don't even know what hash it's using. Almost all of section 5.3
>is wrong (and was when I read it initially).

Yes, that is a minor glitch, but I believe all their points remain
valid nonetheless. My advice is to apply the appropriate s/MD5/SHA1/g
substitution, and re-read the paper to see what you can get out of it.

The problem is not that the paper is shallow; it is not. The source
of the error is likely that this paper was written by theorists, not
implementors. There are important things we can learn from them, and I
think it is worth reading their paper carefully to understand what they
have to offer.

I believe they raise substantial and deep questions in their Section 5.3.
I don't see why you say Section 5.3 is all wrong. Can you elaborate?
Can you explain one or two of the substantial errors you see?

2005-04-19 04:01:41

by Theodore Ts'o

[permalink] [raw]
Subject: Re: Fortuna

On Mon, Apr 18, 2005 at 09:40:37PM +0000, David Wagner wrote:
> Yes, that is a minor glitch, but I believe all their points remain
> valid nonetheless. My advice is to apply the appropriate s/MD5/SHA1/g
> substitution, and re-read the paper to see what you can get out of it.
>
> The problem is not that the paper is shallow; it is not. The source
> of the error is likely that this paper was written by theorists, not
> implementors. There are important things we can learn from them, and I
> think it is worth reading their paper carefully to understand what they
> have to offer.

Since the paper was written by theorists, it appears that they didn't
bother to read the implementation, but instead made assumptions from
the man pages, as well as making the assumption that the man page
(which was not written as a specification from which the code was
implemented, and indeed was not even written by the code authors) was
in fact an accurate representation of drivers/char/random.c.

So section 5.3 is in essence a criticism of a straw man implementation
based on a flawed reading of a flawed man page. Other than that, it's
fine. :-)

> I believe they raise substantial and deep questions in their Section 5.3.
> I don't see why you say Section 5.3 is all wrong. Can you elaborate?
> Can you explain one or two of the substantial errors you see?

For one, /dev/urandom and /dev/random don't use the same pool
(anymore). They used to, a long time ago, but certainly as of the
writing of the paper this was no longer true. This invalidates the
entire last paragraph of Section 5.3.

The criticisms of the /dev/random man page do have some point, but the
man page != the implementation. Also, the paper does not so much
attack the entropy estimator as cast aspersions on it, while at the
same time making the unspoken assumption that cryptographic primitives
are iron-clad and unbreakable.

So I don't see any particular substantial deep questions, unless you
count, "It is not at all clear that /dev/random ... provides
information-theoretic security. Indeed, we suspect it sometimes
doesn't" as a deep question. I don't.

- Ted

2005-04-19 04:33:51

by daw

[permalink] [raw]
Subject: Re: Fortuna

Theodore Ts'o wrote:
>For one, /dev/urandom and /dev/random don't use the same pool
>(anymore). They used to, a long time ago, but certainly as of the
>writing of the paper this was no longer true. This invalidates the
>entire last paragraph of Section 5.3.

Ok, you're right, this is a serious flaw, and one that I overlooked.
Thanks for elaborating. (By the way, has anyone contacted the authors
to let them know about these two errors? Should I?)

I see three remaining criticisms from their Section 5.3:
1) Due to the way the documentation describes /dev/random, many
programmers will choose /dev/random by default. This default
seems inappropriate and unfortunate.
2) There is a widespread perception that /dev/urandom's security is
unproven and /dev/random's is proven. This perception is wrong.
On a related topic, it is "not at all clear" that /dev/random provides
information-theoretic security.
3) Other designs place less stress on the entropy estimator, and
thus are more tolerant to failures of entropy estimation. A failure
in the entropy estimator seems more likely than a failure in the
cryptographic algorithms.
These three criticisms look right to me.

Apart from the merits or demerits of Section 5.3, the rest of the paper
seemed to have some interesting ideas for how to simplify and possibly
improve the /dev/random generator, which might be worth considering at
some point.

2005-04-19 19:31:20

by Patrick J. LoPresti

[permalink] [raw]
Subject: Re: Fortuna

Theodore Ts'o <[email protected]> writes:

> With a properly set up set of init scripts, /dev/random is
> initialized with seed material for all but the initial boot

What about CD-ROM based distros (e.g., Knoppix), where every boot is
the initial boot?

> (and even that problem can be solved by having the distribution's
> install scripts set up /var/lib/urandom/random-seed after installing
> the system).

Could you elaborate? How should Knoppix seed its /dev/urandom?

Would reading 256 bits from /dev/random and writing them to
/dev/urandom do the job?
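
Something like this trivial program is what I have in mind (a sketch
only: the 256-bit figure is from my question above, and error handling
is minimal):

#include <stdio.h>

int main(void)
{
        unsigned char seed[32];         /* 256 bits */
        FILE *in = fopen("/dev/random", "rb");
        FILE *out = fopen("/dev/urandom", "wb");

        if (!in || !out) {
                perror("open");
                return 1;
        }
        /* May block until the kernel's entropy estimate reaches
         * 256 bits. */
        if (fread(seed, 1, sizeof(seed), in) != sizeof(seed)) {
                perror("read /dev/random");
                return 1;
        }
        /* Writing mixes the bytes into the pool but does not by
         * itself credit any entropy; crediting needs the
         * RNDADDENTROPY ioctl (root only). */
        if (fwrite(seed, 1, sizeof(seed), out) != sizeof(seed)) {
                perror("write /dev/urandom");
                return 1;
        }
        fclose(in);
        fclose(out);
        return 0;
}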

- Pat

2005-04-20 21:35:11

by Theodore Ts'o

[permalink] [raw]
Subject: Re: Fortuna

On Tue, Apr 19, 2005 at 04:31:47AM +0000, David Wagner wrote:
> Theodore Ts'o wrote:
> >For one, /dev/urandom and /dev/random don't use the same pool
> >(anymore). They used to, a long time ago, but certainly as of the
> >writing of the paper this was no longer true. This invalidates the
> >entire last paragraph of Section 5.3.
>
> Ok, you're right, this is a serious flaw, and one that I overlooked.
> Thanks for elaborating. (By the way, has anyone contacted the authors
> to let them know about these two errors? Should I?)

I don't know of anyone who has contacted the authors yet. I haven't
had the time, since I'm travelling at the moment.

> I see three remaining criticisms from their Section 5.3:
> 1) Due to the way the documentation describes /dev/random, many
> programmers will choose /dev/random by default. This default
> seems inappropriate and unfortunate.
> 2) There is a widespread perception that /dev/urandom's security is
> unproven and /dev/random's is proven. This perception is wrong.
> On a related topic, it is "not at all clear" that /dev/random provides
> information-theoretic security.
> 3) Other designs place less stress on the entropy estimator, and
> thus are more tolerant to failures of entropy estimation. A failure
> in the entropy estimator seems more likely than a failure in the
> cryptographic algorithms.
> These three criticisms look right to me.

/dev/urandom is a cryptographic RNG, which is seeded from the entropy
pool. /dev/random uses a very similar cryptographic RNG, but it uses a
large pool to collect entropy and an entropy estimator to limit the
amount of data that can be extracted from the RNG.

If the entropy estimator fails, /dev/random degrades to a
cryptographic RNG. So it is not a disaster, whereas the approach
described in this paper (which uses a non-cryptographic extractor)
would seem to me to be *more* prone to catastrophic failure than
/dev/random if the entropy estimator fails.
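
To make the accounting difference concrete, here is a toy userspace
model (not kernel code; hash_extract() is a placeholder for the SHA-1
folding in drivers/char/random.c, and the names and numbers are
illustrative only):

#include <stdio.h>

struct pool {
        unsigned char state[64];
        int entropy_count;              /* estimate, in bits */
};

/* Placeholder mix; the real extractor hashes the pool and folds
 * the result back in. */
static void hash_extract(struct pool *p, unsigned char *out, int n)
{
        int i;
        for (i = 0; i < n; i++) {
                out[i] = (unsigned char)(p->state[i % 64] ^ (i * 37));
                p->state[i % 64] ^= (unsigned char)(out[i] + 1);
        }
}

/* /dev/random policy: never release more bytes than the estimator
 * has credited; report "would block" when the credit is gone. */
static int random_read(struct pool *p, unsigned char *buf, int nbytes)
{
        int avail = p->entropy_count / 8;

        if (avail == 0)
                return -1;              /* caller would sleep here */
        if (nbytes > avail)
                nbytes = avail;
        hash_extract(p, buf, nbytes);
        p->entropy_count -= nbytes * 8;
        return nbytes;
}

/* /dev/urandom policy: same extraction path, no accounting. */
static int urandom_read(struct pool *p, unsigned char *buf, int nbytes)
{
        hash_extract(p, buf, nbytes);
        return nbytes;
}

int main(void)
{
        struct pool p = { .state = { 1, 2, 3 }, .entropy_count = 128 };
        unsigned char buf[32];

        printf("random_read:  %d\n", random_read(&p, buf, 32));   /* 16 */
        printf("random_read:  %d\n", random_read(&p, buf, 32));   /* -1 */
        printf("urandom_read: %d\n", urandom_read(&p, buf, 32));  /* 32 */
        return 0;
}

Even with the estimate pinned at zero, output only ever leaves the
pool through the hash; that is the degradation I mean.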

As to whether applications should be using /dev/random or
/dev/urandom: /dev/random is no worse than /dev/urandom, as long as
the application doesn't mind blocking when the entropy levels are too
low. In practice this situation rarely arises, since the appropriate
way of using /dev/random is to extract only a small amount of entropy,
either to generate long-term keys or to seed a userspace cryptographic
RNG for session keys.
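
In sketch form (the csprng_* functions here are hypothetical
stand-ins, and the toy generator below is emphatically NOT
cryptographic; only the shape of the usage is the point):

#include <stdio.h>
#include <stdint.h>

static uint64_t state;                  /* toy generator state */

static void csprng_seed(const unsigned char *k, size_t n)
{
        size_t i;
        for (i = 0; i < n; i++)         /* fold the seed in */
                state = state * 6364136223846793005ULL + k[i];
}

/* Placeholder only; substitute a real userspace CSPRNG here. */
static unsigned char csprng_byte(void)
{
        state = state * 6364136223846793005ULL + 1442695040888963407ULL;
        return (unsigned char)(state >> 33);
}

int main(void)
{
        unsigned char seed[16], session_key[16];
        size_t i;
        FILE *f = fopen("/dev/random", "rb");

        /* Touch the blocking device once, for a small seed. */
        if (!f || fread(seed, 1, sizeof(seed), f) != sizeof(seed))
                return 1;
        fclose(f);

        csprng_seed(seed, sizeof(seed));
        for (i = 0; i < sizeof(session_key); i++)
                session_key[i] = csprng_byte();  /* session keys come
                                                  * from userspace */

        printf("first session-key byte: %02x\n", session_key[0]);
        return 0;
}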

- Ted