2004-09-24 01:05:58

by George Spelvin

Subject: [PROPOSAL/PATCH] Fortuna PRNG in /dev/random

Fortuna is an attempt to avoid the need for entropy estimation.
It doesn't do a perfect job. And I don't think it's received enough
review to be "regarded as the state of the art".

Entropy estimation is very difficult, but not doing it leads to problems.

Bruce Schneier's "catastrophic reseeding" ideas have some merit. If,
for some reason, the state of your RNG pool has been captured, then
adding one bit of seed material doesn't hurt an attacker who can look
at the output and brute-force that bit.

Thus, if you reseed a little at a time, once you've lost security, you never
regain it. If you save up, say, 128 bits of seed material and add it all at
once, your attacker can't brute-force it.
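The one-bit-at-a-time failure can be made concrete. The toy generator below is a hypothetical construction (SHA-256 standing in for whatever mixing a real pool uses, names invented), not /dev/random's actual code: an attacker who knows the compromised state tracks 128 single-bit reseeds by brute-forcing each bit against the observed output.

```python
import hashlib
import os

# Toy model: the pool state is a SHA-256 digest; mixing a seed is hashing
# it into the state; each output is a hash of the state.
def mix(state, seed):
    return hashlib.sha256(state + seed).digest()

def output(state):
    return hashlib.sha256(b"out" + state).digest()

state = b"\x00" * 32
compromised = state    # attacker's copy of the captured state

# Reseed one bit at a time: after each bit, the attacker tries both
# candidates against the observed output and stays synchronized.
for _ in range(128):
    bit = os.urandom(1)[0] & 1
    state = mix(state, bytes([bit]))
    observed = output(state)
    for guess in (0, 1):
        if output(mix(compromised, bytes([guess]))) == observed:
            compromised = mix(compromised, bytes([guess]))
            break

assert compromised == state  # attacker tracked all 128 bits, one at a time
# Adding the same 128 bits in a single batch would force a 2^128 search.
```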

/dev/random tries to solve this by never letting anyone see more output
than there is seed material input. So regardless of the initial state
of the pool, an attacker can never get enough output to compute a unique
solution to the seed material question. (See "unicity distance".)

However, this requires knowing the entropy content of the input, which is
a hard thing to measure.
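The accounting idea can be sketched in a few lines (made-up numbers and names, not the kernel's actual bookkeeping): mixing always happens, but reads are refused once they would exceed the credited input entropy, so the estimator only affects the credit, never the mixing.

```python
# Minimal sketch of /dev/random-style entropy accounting (hypothetical).
class EntropyGate:
    def __init__(self):
        self.credit_bits = 0

    def add_seed(self, data, est_entropy_bits):
        # Mixing of `data` would always happen; only the *credit*
        # depends on the entropy estimate.
        self.credit_bits += est_entropy_bits

    def read(self, nbytes):
        if nbytes * 8 > self.credit_bits:
            # Block rather than let output exceed the unicity distance.
            raise BlockingIOError("not enough credited entropy")
        self.credit_bits -= nbytes * 8
        return b"\x00" * nbytes  # placeholder for real PRNG output

gate = EntropyGate()
gate.add_seed(b"timing sample", est_entropy_bits=16)
gate.read(2)          # 16 bits credited: allowed
try:
    gate.read(1)      # would exceed the credit: refused
except BlockingIOError:
    pass
```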

The whole issue of catastrophic reseeding applies to output-larger-than-key
generators like /dev/urandom (that uses cryptographic


Here's an example of how Fortuna's design fails.

Suppose we have a source which produces 32-bit samples, which are
guaranteed to contain 1 bit of new entropy per sample. We should be
able to feed that into Fortuna and have a good RNG, right? Wrong.

Suppose that each time you sample the source, it adds one bit to a 32-bit
shift register, and gives you the result. So sample[0] shares 31 bits
with sample[1], 30 bits with sample[2], etc.
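The source can be modeled in a few lines (an illustrative sketch, not real hardware): a 32-bit shift register that gains one truly random bit per sample, so consecutive samples overlap in 31 bits and each sample carries only 1 bit of new entropy.

```python
import random

def make_source(seed=1234):
    """A 32-bit shift register gaining one random bit per sample."""
    rng = random.Random(seed)
    reg = 0
    def sample():
        nonlocal reg
        reg = ((reg << 1) | rng.getrandbits(1)) & 0xFFFFFFFF
        return reg
    return sample

src = make_source()
s = [src() for _ in range(64)]
# sample[i] and sample[i+1] share 31 bits: all of s[1] except its new
# low bit is just s[0] shifted left.
assert (s[0] << 1) & 0xFFFFFFFF == s[1] & 0xFFFFFFFE
```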

Now, suppose that we add samples to 32 buckets in round-robin order,
and dump bucket[i] into the pool every 2^i rounds. Further,
assume that our attacker can query the pool's output and brute-force 32
bits of seed material. In the following, "+=" is some cryptographic
mixing primitive, not literal addition.

Pool: Initial state known to attacker (e.g. empty)
Buckets: Initial state known to attacker (e.g. empty)
bucket[0] += sample[0]; pool += bucket[0]
-> attacker can query the pool and brute-force compute sample[0].
bucket[1] += sample[1] (= sample[0] << 1 | sample[32] >> 31)
bucket[2] += sample[2] (= sample[0] << 2 | sample[32] >> 30)
...
bucket[31] += sample[31] (= sample[0] << 31 | sample[32] >> 1)
bucket[0] += sample[32]; pool += bucket[0]
-> attacker can query the pool and brute-force compute sample[32].
-> Attacker now knows sample[1] through sample[31]
-> Attacker now knows bucket[1] through bucket[31].

Note that the attacker now knows the value of sample[1] through sample[31] and
thus the state of all the buckets, and can continue tracking the pool's
state indefinitely:

bucket[1] += sample[33]; pool += bucket[1]
-> attacker can query the pool and brute-force compute sample[33].
etc.

This shift register behaviour should be obvious, but suppose that sample[i]
is put through an encryption (known to the attacker) before being presented.
You can't tell that you're being fed cooked data, but the attack works just the
same.
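The key step of the attack, that sample[1]..sample[31] fall out of sample[0] and sample[32] via the shift identity above, can be checked mechanically (illustrative model, same register as before):

```python
import random

# Generate 33 successive states of the 32-bit shift register.
rng = random.Random(42)
reg = rng.getrandbits(32)
samples = []
for _ in range(33):
    reg = ((reg << 1) | rng.getrandbits(1)) & 0xFFFFFFFF
    samples.append(reg)

# The two values the attacker brute-forced from the pool output:
s0, s32 = samples[0], samples[32]

# Every intermediate sample follows from the identity
#   sample[k] = (sample[0] << k | sample[32] >> (32-k)) & 0xFFFFFFFF
for k in range(1, 32):
    reconstructed = ((s0 << k) | (s32 >> (32 - k))) & 0xFFFFFFFF
    assert reconstructed == samples[k]
```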


Now, this is, admittedly, a highly contrived example, but it shows that
Fortuna does not completely achieve its stated design goal of achieving
catastrophic reseeding after having received some constant times the
necessary entropy as seed material. Its round-robin structure makes it
vulnerable to serial correlations in the input seed material. If they're
bad enough, its security can be completely destroyed. What *are* the
requirements for it to be secure? I don't know.

All I know is that it hasn't been analyzed well enough to be a panacea.

(The other thing I don't care for is the limited size of the
entropy pools. I like the "big pool" approach. Yes, 256 bits is
enough if everything works okay, but why take chances? But that's a
philosophical/style/gut feel argument more than a really technical one.)


I confess I haven't dug into the /dev/{,u}random code lately. The various
problems with low-latency random numbers needed by the IP stack suggest
that perhaps a faster PRNG would be useful in-kernel. If so, there may
be a justification for an in-kernel PRNG fast enough to use for disk
overwriting or the like. (As people persist in using /dev/urandom for,
even though it's explicitly not designed for that.)


2004-09-24 02:41:18

by Jean-Luc Cooke

Subject: Re: [PROPOSAL/PATCH] Fortuna PRNG in /dev/random

"linux",

The Fortuna patch I've submitted tries to achieve this "more than 256 bits per
pool" by carrying forward the digest output to the next pool. Stock Fortuna
does not carry forward digest output from previous iterations.

reseed:
    reseedCount++;
    for (i = 0..31) {
        if (2^i is a factor of reseedCount) {
            hash_final(pool[i], dgst);
            hash_init(pool[i]);
            hash_update(pool[i], dgst); // my addition
            ...
        }
    }
    ...

Considering each pool has 256 bits of digest output, and there are 32 pools,
this gives about 8192 bits for the pool size. Far greater than the current
design. If you extremely pessimistically consider the probability of drawing
pool j to be 1/2 that of pool j-1, then it's a 512-bit RNG.
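The carry-forward tweak can be sketched roughly as follows (hypothetical names, SHA-256 standing in for whatever digest the patch actually uses): on reseed, each drained pool is restarted with its own previous digest, so state persists across iterations instead of being discarded as in stock Fortuna.

```python
import hashlib

NPOOLS = 32
pools = [hashlib.sha256() for _ in range(NPOOLS)]
reseed_count = 0

def reseed():
    """Drain pool i whenever 2^i divides the reseed count."""
    global reseed_count
    reseed_count += 1
    seed = b""
    for i in range(NPOOLS):
        if reseed_count % (1 << i) == 0:
            dgst = pools[i].digest()        # hash_final
            seed += dgst
            pools[i] = hashlib.sha256()     # hash_init
            pools[i].update(dgst)           # hash_update: the carry-forward
    return seed

pools[0].update(b"some event data")
before = pools[0].copy().digest()
reseed()
# The new pool[0] starts from its old digest rather than from empty:
assert pools[0].copy().digest() == hashlib.sha256(before).digest()
```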


But I'd like to speak to your attack for a second. I'd argue that it is valid
against the current /dev/random, and Yarrow with entropy estimators, as well.

I agree that if the state is known by an active attacker, then a trickle of
entropy into Fortuna, compared to the output gathered by an attacker, would
make for an argument that "Fortuna doesn't have it right." And no matter
what PRNG engine you put between the attacker and the random sources, there is
no solution other than accurate entropy measurement (*not* estimation).

However, this places the security of the system in the hands of the entropy
estimator. If it is too liberal, we have nearly the same situation as with
Fortuna. As much as I rely on Ted's work every day for the smooth running of
my machine, I can't concede to the notion that Ted got it right.

Fortuna, I'd argue, reduces the attack on the PRNG to that of the base
crypto primitives, the randomness of the events, and the rate at which data is
output by /dev/random. This holds true for the current /dev/random except:
1) The crypto primitives do not pass test vectors, and the input mixing
   function is linear.
2) The randomness of the events can only be estimated; their true randomness
   requires analysis of the hardware devices themselves... not feasible
   considering all the possible IRQ sources, mice, and hard disks that Linux
   drives.
3) Following on (2) above, the output rate of /dev/random is directly
   related to the estimated randomness.

If you have ideas on how to make a PRNG that can more closely tie output rate
to input events and survive state compromise attacks (backtracking, forward
secrecy, etc) then please drop anonymity and contact me at my email address.
Perhaps a collaboration is possible.

Cheers,

JLC


On Fri, Sep 24, 2004 at 12:59:38AM -0000, [email protected] wrote:
> [...]

2004-09-24 06:23:45

by George Spelvin

Subject: Re: [PROPOSAL/PATCH] Fortuna PRNG in /dev/random

BTW, you write:
> It is regarded in crypto circles as the current state-of-the-art
> in cryptographically secure PRNGs.

The question this brings to mind is:
It is? Can you point me to a single third-party paper on the subject?

There's nothing in the IACR preprint archive. Nor CiteSeer.


The big differences between when /dev/random was designed and today:

- USB is a broadcast bus, and a lot (timing, at least) can be sniffed
by a small dongle. Wireless keyboards and mice are popular. That
sort of user data probably shouldn't be trusted any more. (No harm
mixing it in, just in case it is good, but accord it zero weight.)
- Clock speeds are a *lot* higher (> 1 GHz) and the timestamp counter is
almost universally available. Even an attacker with multiple antennas
pointed at the computer is going to have a hard time figuring out on which
tick of the clock an interrupt arrived even if they can see it.

Thus, the least-significant bits of the TSC are useful entropy on *every*
interrupt, timer included.


For a fun exercise, install a kernel hack to capture the TSC on every
timer interrupt. Run it for a while on an idle system (processor in the
halt state, waiting for interrupts on a cycle-by-cycle basis).

Take the resultant points, subtract the best-fit line, and throw out any
outliers caused by delayed interrupts.

Now do some statistical analysis of the residue. How much entropy do
you have from the timer interrupt? Does it look random? How many lsbits
can you take and still pass Marsaglia's DIEHARD suite? Do any patterns
show up in an FFT?
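On synthetic data, the exercise might look like this (the clock rate and jitter figures are invented for illustration, not measurements): fit a line to the timestamps, subtract it, and estimate the Shannon entropy of the residue's low bits.

```python
import math
import random
from collections import Counter

rng = random.Random(7)
N, period = 4096, 1_000_000   # e.g. a 1 GHz TSC, 1 kHz timer (made up)
# Synthetic timestamps: a perfect line plus ~6 bits of interrupt jitter.
ticks = [i * period + rng.randrange(64) for i in range(N)]

# Subtract the least-squares best-fit line t = a + b*i.
xm = (N - 1) / 2
ym = sum(ticks) / N
b = (sum((i - xm) * (t - ym) for i, t in enumerate(ticks)) /
     sum((i - xm) ** 2 for i in range(N)))
a = ym - b * xm
residue = [round(t - (a + b * i)) for i, t in enumerate(ticks)]

# Shannon entropy of the 4 least-significant bits of the residue.
counts = Counter(r & 0xF for r in residue)
H = -sum(c / N * math.log2(c / N) for c in counts.values())
print(f"~{H:.2f} bits of entropy in the 4 lsbits")
```

With genuinely random jitter H comes out close to 4; a strongly patterned interrupt source would show up as a markedly lower figure (and as structure in an FFT of the residue).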

2004-09-24 21:42:33

by George Spelvin

Subject: Re: [PROPOSAL/PATCH] Fortuna PRNG in /dev/random

> What if I told you the SHA-1 implementation in random.c right now is weaker
> than those hashes in terms of collisions? The lack of padding in the
> implementation is the cause. HASH("a\0\0\0\0...") == HASH("a"). There
> are billions of other examples.

EXCUSE me? You're a little unclear, so I don't want to be attacking strawmen
of my own devising, but are you claiming the failure to do Merkle-Damgård
padding in the output mixing operation of /dev/random is a WEAKNESS?

If true, this is a level of cluelessness incompatible with being trusted
to design decent crypto.

The entire purpose of Merkle-Damgård padding (also known as
Merkle-Damgård strengthening) is to include the length in the data
hashed, to make hashing variable-sized messages as secure as fixed-size
messages. If what you are hashing is, by design, always fixed-length,
this is completely unnecessary.

If I were designing a protocol for message interchange, I might add
the padding anyway, just to use pre-existing primitives easily, but
for a 100% internal use like a PRNG, let's see... I can reduce code
size AND implementation complexity AND run time without ANY security
consequences, and there are no interoperability issues...

I could argue it's a design flaw to *include* the padding.

2004-09-25 14:55:58

by Jean-Luc Cooke

Subject: Re: [PROPOSAL/PATCH] Fortuna PRNG in /dev/random

On Fri, Sep 24, 2004 at 09:42:30PM -0000, [email protected] wrote:
> > What if I told you the SHA-1 implementation in random.c right now is weaker
> > than those hashes in terms of collisions? The lack of padding in the
> > implementation is the cause. HASH("a\0\0\0\0...") == HASH("a"). There
> > are billions of other examples.
>
> EXCUSE me?

...

> I could argue it's a design flaw to *include* the padding.

I was trying to point out a flaw in Ted's logic. He said "we've recently
discovered these hashes are weak because we found collisions. Current
/dev/random doesn't care about this."

I certainly wasn't saying padding was a requirement. But I was trying to
point out that the SHA-1 implementation currently in /dev/random is by design
collision-vulnerable. Collision resistance isn't a requirement for its
purposes, obviously.

Guess my pointing this out is a lost cause.

JLC

2004-09-25 18:52:19

by Theodore Ts'o

Subject: Re: [PROPOSAL/PATCH] Fortuna PRNG in /dev/random

On Sat, Sep 25, 2004 at 10:54:44AM -0400, Jean-Luc Cooke wrote:
>
> I was trying to point out a flaw in Ted's logic. He said "we've recently
> discovered these hashes are weak because we found collisions. Current
> /dev/random doesn't care about this."
>
> I certainly wasn't saying padding was a requirement. But I was trying to
> point out that the SHA-1 implementation currently in /dev/random is by design
> collision-vulnerable. Collision resistance isn't a requirement for its
> purposes, obviously.

You still haven't shown the flaw in the logic. My point is that an
over-reliance on crypto primitives is dangerous, especially given
recent developments. Fortuna relies on the crypto primitives much
more than /dev/random does. Ergo, if you consider weaknesses in
crypto primitives to be a potential problem, then it might be
reasonable to take a somewhat more jaundiced view towards Fortuna
compared with other alternatives.

Whether or not /dev/random performs the SHA finalization step (which
adds the padding and the length to the hash) is completely and totally
irrelevant to this particular line of reasoning.

And actually, not doing the padding does not make the crypto hash
vulnerable to collisions, as you claim. This is because in
/dev/random, we are always using the full block size of the crypto
hash. It is true that it is vulnerable to extension attacks, but
that's irrelevant to this particular usage of the SHA-1 round
function. Whether or not we should trust the design of something as
critical to the security of security applications as /dev/random to
someone who fails to grasp the difference between these two rather
basic issues is something I will leave to the others on LKML.

- Ted

2004-09-26 01:43:16

by Jean-Luc Cooke

Subject: Re: [PROPOSAL/PATCH] Fortuna PRNG in /dev/random

On Sat, Sep 25, 2004 at 02:43:52PM -0400, Theodore Ts'o wrote:
> You still haven't shown the flaw in the logic. My point is that an
> over-reliance on crypto primitives is dangerous, especially given
> recent developments. Fortuna relies on the crypto primitives much
> more than /dev/random does. Ergo, if you consider weaknesses in
> crypto primitives to be a potential problem, then it might be
> reasonable to take a somewhat more jaundiced view towards Fortuna
> compared with other alternatives.

Correct me if I'm wrong here.

You claimed that the collision techniques found for the UFN-design hashes
(SHA-0, MD4, MD5, HAVAL, RIPEMD) demonstrated the need to not rely on hash
algorithms for an RNG. Right?

And I showed that the SHA-1 in random.c now can produce collisions. So, if
your argument holds against the fallen UFN hashes above (SHA-1 is a UFN hash
also, btw; we can probably expect more announcements from the crypto community
in early 2005), should it not apply to SHA-1 in random.c?

Or did I misunderstand you? Were you just mentioning the weakened algorithms
as a "what if they were more serious discoveries? Wouldn't it be nice if we
didn't rely on them?" ?

The decision to place trust in an entropy estimation scheme vs. a crypto
algorithm we have different views on. I can live with that.

> Whether or not /dev/random performs the SHA finalization step (which
> adds the padding and the length to the hash) is completely and totally
> irrelevant to this particular line of reasoning.

I "completely and totally" agree. I'm pointing out that the absence of padding
makes me, the new guy reading your code, work harder to decide if it's a
weakness. You shouldn't do that to people if you can avoid it. Just like
you shouldn't obfuscate code, even if it doesn't "weaken" its implementation.
It's just rude. Take the performance penalty to avoid scaring people away
from a very important piece of the kernel.

> ... Whether or not we should trust the design of something as
> critical to the security of security applications as /dev/random to
> someone who fails to grasp the difference between these two rather
> basic issues is something I will leave to the others on LKML.

... biting my tongue ... so hard it bleeds ...

The quantitative aspects of the two RNGs in question are not being discussed.
It's the qualitative aspects we do not see eye to eye on. So I will no
longer suggest replacing the status quo. I'd like to submit a patch to let
users choose at compile time, under Cryptographic options, whether to drop in
Fortuna.

Ted, can we leave it at this?

JLC

2004-09-26 02:31:18

by George Spelvin

Subject: Re: [PROPOSAL/PATCH] Fortuna PRNG in /dev/random

> I was trying to point out a flaw in Ted's logic. He said "we've recently
> discovered these hashes are weak because we found collisions. Current
> /dev/random doesn't care about this."

And he's exactly right. The only attack that would be vaguely relevant
to /dev/random's use would be a (first) preimage attack, and even that's
probably not helpful.

There *is* no flaw in his logic. The attack we need to guard against
is, given hash(x) and a (currently mostly linear) state mixing function
mix(), one that would let you compute (partial information about)
y[i+1] = hash(x[i+1]) from y[1] = hash(x[1]) ... y[i] = hash(x[i])
where x[i] = mix(x[i-1]).

Given that y[i] is much smaller than x[i], you'd need to put together
a lot of them to derive something, and that's distinctly harder than
a single-output preimage attack.
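A toy rendering of that output model (illustrative sizes and functions, not random.c's actual parameters or mixer): the pool state x is large, each read reveals only a small y = hash(x), and the state advances between reads, so many outputs must be combined before the state is even information-theoretically determined.

```python
import hashlib

POOL_BYTES = 512   # x[i]: a big pool (4096 bits)
OUT_BYTES = 10     # y[i]: much smaller than the state (80 bits)

def mix(x):
    # Stand-in for the (mostly linear) state mixing function mix().
    return hashlib.sha256(x).digest() * (POOL_BYTES // 32)

def read(x):
    # y[i] = hash(x[i]), truncated well below the state size.
    return hashlib.sha256(b"extract" + x).digest()[:OUT_BYTES]

x = b"\x01" * POOL_BYTES
outs = []
for _ in range(4):
    outs.append(read(x))   # attacker sees y[1] ... y[i]
    x = mix(x)             # x[i] = mix(x[i-1])

# Each output reveals at most 80 bits about a 4096-bit state.
assert all(len(y) == OUT_BYTES for y in outs)
```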

> I certainly wasn't saying padding was a requirement. But I was trying to
> point out that the SHA-1 implementation currently in /dev/random is by design
> collision-vulnerable. Collision resistance isn't a requirement for its
> purposes, obviously.

No, it is, by design, 100% collision-resistant. An attacker neither
sees nor controls the input x, so cannot use a collision attack.
Thus, it's resistant to collisions in the same way that it's resistant
to AIDS.

[There's actually a flaw in my logic. I know Ted knows about it, because
he implemented a specific defense in the /dev/random code against it; it's
just not 100% information-theoretic ironclad. If anyone else can spot
it, award yourself a clue point. But it's still not a plausible attack.]

FURTHERMORE, even if an attacker *could* control the input, it's still
exactly as collision resistant as unmodified SHA-1. Because it only
accepts fixed-size input blocks, padding is unnecessary and irrelevant
to security. Careful padding is ONLY required if you are working with
VARIABLE-SIZED input.

The fact that collision resistance is not a security requirement is a
third point.

> Guess my pointing this out is a lost cause.

In much the same way that pointing out that the earth is flat is a
lost cause. If you want people to believe nonsense, you need to dress
it up a lot and call it a religion.

As for Ted's words:
> Whether or not we should trust the design of something as
> critical to the security of security applications as /dev/random to
> someone who fails to grasp the difference between these two rather
> basic issues is something I will leave to the others on LKML.

Fortuna may be a good idea after all (I disagree, but I can imagine
being persuaded otherwise), but it has a very bad advocate right now.
Would anyone else like to pick up the torch?


By the way, I'd like to repeat my earlier question: you say Fortuna is
well-regarded in crypto circles. Can you cite a single paper to back
that conclusion? Name a single well-known cryptographer, other than
the authors, who has looked at it in some detail?

There might be one, but I don't know of any. I respect the authors
enough to know that even they recognize that an algorithm's designers
sometimes have blind spots.

2004-09-26 05:25:46

by Theodore Ts'o

Subject: Re: [PROPOSAL/PATCH] Fortuna PRNG in /dev/random

On Sat, Sep 25, 2004 at 09:42:18PM -0400, Jean-Luc Cooke wrote:
> On Sat, Sep 25, 2004 at 02:43:52PM -0400, Theodore Ts'o wrote:
> > You still haven't shown the flaw in the logic. My point is that an
> > over-reliance on crypto primitives is dangerous, especially given
> > recent developments. Fortuna relies on the crypto primitives much
> > more than /dev/random does. Ergo, if you consider weaknesses in
> > crypto primitives to be a potential problem, then it might be
> > reasonable to take a somewhat more jaundiced view towards Fortuna
> > compared with other alternatives.
>
> Correct me if I'm wrong here.
>
> You claimed that the collision techniques found for the UFN-design hashes
> (SHA-0, MD4, MD5, HAVAL, RIPEMD) demonstrated the need to not rely on hash
> algorithms for an RNG. Right?

For Fortuna, correct. This is why I believe /dev/random's current
design to be superior.

> And I showed that the SHA-1 in random.c now can produce collisions. So, if
> your argument holds against the fallen UFN hashes above (SHA-1 is a UFN hash
> also, btw; we can probably expect more announcements from the crypto community
> in early 2005), should it not apply to SHA-1 in random.c?

(1) Your method of "producing collisions" assumed that /dev/random was
of the form HASH("a\0\0\0...") == HASH("a") --- i.e., you were
kvetching about the lack of padding. But we've already agreed that
the padding argument isn't applicable for /dev/random, since it only
hashes full-sized blocks. (2) Even if there were real
collisions demonstrated in SHA-1's cryptographic core at some point in
the future, it wouldn't harm the security of the algorithm, since
/dev/random doesn't depend on SHA-1 being resistant against
collisions. (Similarly, HMAC-MD5 is still safe for now since it also
is designed such that the ability to find collisions does not harm its
security. It's a matter of how you use the cryptographic primitives.)
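Ted's HMAC aside can be illustrated with Python's hmac module (a usage sketch, not kernel code; key and message are invented): HMAC's guarantees rest on PRF-like properties of the compression function rather than on collision resistance, which is why HMAC-MD5 survived the 2004 collision results.

```python
import hmac

key = b"secret key"          # hypothetical shared key
tag = hmac.new(key, b"message", "md5").hexdigest()

# Verification should use a constant-time comparison.
ok = hmac.compare_digest(tag, hmac.new(key, b"message", "md5").hexdigest())
assert ok
```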

> Or did I misunderstand you? Were you just mentioning the weakened algorithms
> as a "what if they were more serious discoveries? Wouldn't it be nice if we
> didn't rely on them?" ?

That's correct. It is my contention that Fortuna is brittle in this
regard, especially in comparison to /dev/random's current design.

And you still haven't pointed out the logic flaw in any argument but
your own.

- Ted

2004-09-26 06:46:24

by George Spelvin

Subject: Re: [PROPOSAL/PATCH] Fortuna PRNG in /dev/random

> You claimed that the collision techniques found for the UFN-design hashes
> (SHA-0, MD4, MD5, HAVAL, RIPEMD) demonstrated the need to not rely on hash
> algorithms for an RNG. Right?

I'm putting words into Ted's mouth, but it seemed clear to me he said it
was good not to rely *entirely* on the hash algorithms.

> And I showed that the SHA-1 in random.c now can produce collisions.

This, I do not recall. I must have missed it. Will you please show me
two inputs that, when fed to the SHA-1 in random.c, will produce
identical output?

> So, if your argument holds against the fallen UFN hashes above (SHA-1 is a
> UFN hash also, btw; we can probably expect more announcements from the crypto
> community in early 2005), should it not apply to SHA-1 in random.c?

No, not at all. The point is that the current random.c design DOES NOT
RELY on the security of the hash function. Ted could drop MD4 in there
and it still couldn't be broken, although using a better-regarded hash
function just feels better.

> Or did I misunderstand you? Were you just mentioning the weakened algorithms
> as a "what if they were more serious discoveries? Wouldn't it be nice if we
> didn't rely on them?" ?

Yes. And Fortuna's *only* layer of armor is the block cipher. Yes,
it's a damn good layer of armor, but defense in depth sure helps.

That is NOT to say that lots of half-assed algorithms piled on top of
each other makes good crypto, but if you can have a good primitive and
*then* use it safely as well, that's better.

For example, AES is supposed to be resistant to adaptive chosen
plaintext/ciphertext attacks. Suppose you are given two ciphertexts
and two corresponding plaintexts, but not which corresponds to which.
And then you are given access to an oracle which will, using the same
key as was used on the plaintext/ciphertext pairs, give you the plaintext
for any ciphertext that's not one of the two, and the ciphertext for any
plaintext that's not one of the two. The oracle can answer basically an
infinite number of questions (well, 2^128-2) and you can look at one set
of answers before posing the next.

AES is supposed to prevent you from figuring out, with all that help,
which plaintext of the two goes with which ciphertext, with more than 50%
certainty. I.e. you are given an infinite series of such challenges and
offered even-odds bets on your answer. In the long run, you shouldn't
be able to make money.

Yes, AES *should* be able to hold up even to that, but that's really
placing all your eggs in one basket. If you can give it more help
without weakening other parts, that's Good Design.

If I'm designing a protocol, I'll try to design it so that an attacker
*doesn't* have access to such an oracle, or the responses are too slow
to make billions of them, or asking more than a few dozen questions will
raise alarms, or some such. I'll change keys so the time in which an
attacker has to mount their attack is limited. I'll do any of a number
of things which let the German navy keep half of their U-boat traffic
out of the hands of Bletchley Park even though they didn't know there
were vast gaping holes in the underlying cipher.

> The decision to place trust in an entropy estimation scheme vs. a crypto
> algorithm we have different views on. I can live with that.

Better crypto is fine. But why *throw out* the entropy estimation and
rely *entirely* on the crypto? Feel free to argue that the crypto in
Fortuna is better (although Ted is making some strong points that it
*isn't*), but is it necessary to throw the baby out with the bathwater?
Can't you get the best of both worlds?

> I "completely and totally" agree. I'm pointing out that the absence of padding
> makes me, the new guy reading your code, work harder to decide if it's a
> weakness. You shouldn't do that to people if you can avoid it.

Sorry, but if you know enough to know why the padding is necessary, you
should know when it isn't. Feel free to say "isn't this a weakness?
I read in $BOOK that the padding was important to prevent some attacks"
and propose a comment patch. But to say "this is crap because I don't
understand one little detail and you should replace it with my shiny
new 2005 model" when it's your ignorance and not a real problem is
unbelievably arrogant.

> Just like you shouldn't obfuscate code, even if it doesn't "weaken"
> its implementation. It's just rude. Take the performance penalty to
> avoid scaring people away from a very important piece of the kernel.

Tell it to the marines. I'd say "tell it to Linus", because he'll laugh
louder, but his time is valuable to me.

Part of the Linux developer's credo, learned at Linus' knee, is that
Performance Matters. If you don't worry about 5% all the time, after 15
revisions you're running at half speed, and it's a lot of work to catch up.

The -mm guys have been doing backflips for years to try to get good
paging behaviour without high run-time overhead. This is one of the
major reasons why the kernel refuses to promise a stable binary
interface to kernel modules. Rearranging the order of fields in a
structure for better cache performance is a minor revision.

In fact, large parts of /dev/random deliberately *don't* care about
performance. The entire output mixing stage is not performance
critical, and is deliberately slow.

What *is* critical is the input mixing stage, because that happens at
interrupt time, and many many people care passionately about interrupt
latency. And /dev/random wants to be non-optional, always there for
people to use so they don't have to invent their own half-assed
equivalent.

> The quantitative aspects of the two RNGs in question are not being discussed.
> It's the qualitative aspects we do not see eye to eye on. So I will no
> longer suggest replacing the status quo. I'd like to submit a patch to let
> users choose at compile time, under Cryptographic options, whether to drop in
> Fortuna.
>
> Ted, can we leave it at this?

You're welcome to write the patch. But I have to warn you, if you
hope to get it into the standard kernel rather than just have a
separately maintained patch, you'll need to persuade Linus or someone
he trusts (who in this case is probably Ted) that your patch is
a) better in some way or another than the existing code, and
b) important enough to warrant the maintenance burden that having
two sets of equivalent code imposes.

You're being offered a lot of clues. Please, take some.

2004-09-26 16:34:44

by Jean-Luc Cooke

[permalink] [raw]
Subject: Re: [PROPOSAL/PATCH] Fortuna PRNG in /dev/random

On Sun, Sep 26, 2004 at 06:46:17AM -0000, [email protected] wrote:
> > And I showed that the SHA-1 in random.c now can produce collisions.
>
> This, I do not recall. I must have missed it. Will you please show me
> two inputs that, when fed to the SHA-1 in random.c, will produce
> identical output?

SHA-1 without padding, sure.

hash("a") = hash("a\0") = hash("a\0\0") = ...
hash("b") = hash("b\0") = hash("b\0\0") = ...
hash("c") = hash("c\0") = hash("c\0\0") = ...

I've failed in my attempt to present a good argument for Fortuna. Guess I'll
just sit on this patch. Is the above a big issue? No, because as you two
pointed out, the hash() uses full block sizes.

This is a trying thread for me to continue, by no fault of yours. I thought
I made it very clear when I started that I saw *no* vulnerability in the
current /dev/random. This did not prevent Ted and yourself from ignoring this
statement and immediately assuming that when I say "you could have done this
better" I mean "ha! I've hax0rd your silly code, I'm l33t." - an infuriating
blow to my professionalism. Then I added insult to injury by trying to clear
up the whole mess and only making things worse.

> > Or did I misunderstand you? Were you just mentioning the weakened algorithms
> > as a "what if they were more serious discoveries? Wouldn't it be nice if we
> > didn't rely on them?" ?
>
> Yes. And Fortuna's *only* layer of armor is the block cipher. Yes,
> it's a damn good layer of armor, but defense in depth sure helps.
>
> That is NOT to say that lots of half-assed algorithms piled on top of
> each other makes good crypto, but if you can have a good primitive and
> *then* use it safely as well, that's better.
>
> For example, AES is supposed to be resistant to adaptive chosen
> plaintext/ciphertext attacks. Suppose you are given two ciphertexts
> and two corresponding plaintexts, but not which corresponds to which.
> And then you are given access to an oracle which will, using the same
> key as was used on the plaintext/ciphertext pairs, give you the plaintext
> for any ciphertext that's not one of the two, and the ciphertext for any
> plaintext that's not one of the two. The oracle can answer basically an
> infinite number of questions (well, 2^128-2) and you can look at one set
> of answers before posing the next.
>
> AES is supposed to prevent you from figuring out, with all that help,
> which plaintext of the two goes with which ciphertext, with more than 50%
> certainty. I.e. you are given an infinite series of such challenges and
> offered even-odds bets on your answer. In the long run, you shouldn't
> be able to make money.
>
> Yes, AES *should* be able to hold up even to that, but that's really
> placing all your eggs in one basket. If you can give it more help
> without weakening other parts, that's Good Design.
>
> If I'm designing a protocol, I'll try to design it so that an attacker
> *doesn't* have access to such an oracle, or the responses are too slow
> to make billions of them, or asking more than a few dozen questions will
> raise alarms, or some such. I'll change keys so the time in which an
> attacker has to mount their attack is limited. I'll do any of a number
> of things which let the German navy keep half of their U-boat traffic
> out of the hands of Bletchley Park even though they didn't know there
> were vast gaping holes in the underlying cipher.

What if, say, the key for the AES256-CTR layer changed after every block read
from /dev/random?

> > The decision to place trust in an entropy estimation scheme vs. a crypto
> > algorithm we have different views on. I can live with that.
>
> Better crypto is fine. But why *throw out* the entropy estimation and
> rely *entirely* on the crypto? Feel free to argue that the crypto in
> Fortuna is better (although Ted is making some strong points that it
> *isn't*), but is it necessary to throw the baby out with the bathwater?
> Can't you get the best of both worlds?

My past arguments for removing entropy estimation were hand-waving at best
(rate of /dev/random output ~= rate of event sources' activity like
keyboards, disks, etc). This could (not likely) lead to information about
what the system is doing. If an attacker could open and close TCP ports, or
ping an ethernet card to generate IRQs which are fed into the PRNG and
increase the entropy count - would this be usable in an attack? Not likely.
Would you want to close off this avenue of attack? The majority seems to say
"no", but I personally would like to. And that is where my argument falls
apart.

> > I "completly and totally" agree. I'm pointing out that no added padding
> > makes me, the new guy reading your code, work harder to decide if it's a
> > weakness. You shouldn't do that to people if you can avoid it.
>
> Sorry, but if you know enough to know why the padding is necessary, you
> should know when it isn't. Feel free to say "isn't this a weakness?
> I read in $BOOK that that padding was important to prevent some attacks"
> and propose a comment patch. But to say "this is crap because I don't
> understand one little detail and you should replace it with my shiny
> new 2005 model" when it's your ignorance and not a real problem is
> unbelievably arrogant.

Sigh. Perhaps I need to be excruciatingly clear:
- SHA1-nopadding() is less secure than SHA1-withpadding()
- It doesn't apply to random.c

I thought it was clear ... clearly I was delusional.

> > Just like you shouldn't obfuscate code, even if it doesn't "weaken"
> > its implementation. It's just rude. Take the performance penalty to
> > avoid scaring people away from a very important piece of the kernel.
>
> Tell it to the marines. I'd say "tell it to Linus", because he'll laugh
> louder, but his time is valuable to me.
>
> Part of the Linux developer's credo, learned at Linus' knee, is that
> Performance Matters. If you don't worry about 5% all the time, after 15
> revisions you're running at half speed and it's a lot of work to catch up.

I see. And in the -mm examples, is the code easily readable for other
os-MemMgt types? If not, then I guess random.c is not the exception and I
apologize.

> What *is* critical is the input mixing stage, because that happens at
> interrupt time, and many many people care passionately about interrupt
> latency. And /dev/random wants to be non-optional, always there for
> people to use so they don't have to invent their own half-assed
> equivalent.

And the ring-buffer system which delays the expensive mixing stages until a
soft interrupt does a great job (both current and my fortuna-patch). Difference
being, the fortuna-patch appears to be 2x faster.

> > The quantitative aspects of the two RNGs in question are not being discussed.
> > It's the qualitative aspects we do not see eye to eye on. So I will no
> > longer suggest replacing the status-quo. I'd like to submit a patch to let
> > users choose at compile-time under Cryptographic options whether to drop in
> > Fortuna.
> >
> > Ted, can we leave it at this?
>
> You're welcome to write the patch. But I have to warn you, if you
> hope to get it into the standard kernel rather than just have a
> separately maintained patch, you'll need to persuade Linus or someone
> he trusts (who in this case is probably Ted) that your patch is
> a) better in some way or another than the existing code, and
> b) important enough to warrant the maintenance burden that having
> two sets of equivalent code imposes.
>
> You're being offered a lot of clues. Please, take some.

I appreciate the feedback for what it's worth. Thanks.

JLC

2004-09-27 00:50:36

by George Spelvin

[permalink] [raw]
Subject: Re: [PROPOSAL/PATCH] Fortuna PRNG in /dev/random

>> This, I do not recall. I must have missed it. Will you please show me
>> two inputs that, when fed to the SHA-1 in random.c, will produce
>> identical output?

> SHA-1 without padding, sure.

> hash("a") = hash("a\0") = hash("a\0\0") = ...
> hash("b") = hash("b\0") = hash("b\0\0") = ...
> hash("c") = hash("c\0") = hash("c\0\0") = ...

And how do I hash one byte with SHA-1 *without padding*? The only
hashing code I can find in random.c works 64 bytes at a time.
What are the other 63 bytes?

(I agree that that *naive* padding leads to collisions, but random.c
doesn't do ANY padding.)

> I see. And in the -mm examples, is the code easily readable for other
> os-MemMgt types? If not, then I guess random.c is not the exception and I
> apologize.

The Linux core -mm code is a fairly legendary piece of Heavy Wizardry.
To paraphrase, "do not meddle in the affairs of /usr/src/linux/mm/, for
it is subtle and quick to anger." There *are* people who understand it,
and it *is* designed (not a decaying pile of old hacks that *nobody*
understands how it works like some software), but it's also a remarkably
steep learning curve. A basic overview isn't so hard to acquire, but the
locking rules have subtle details. There are places where someone very good
noticed that a given lock doesn't have to be taken on a fast path if you
avoid doing certain things anywhere else that you'd think would be legal.

And so if someone tries to add code to do the "obvious" thing, the
lock-free fast path develops a race condition. And we all know what
fun race conditions are to debug.

Fortunately, some people see this as a challenge and Linux is blessed with
some extremely skilled VM hackers. And some of them even write and publish
books on the subject. But while a working VM system can be clear, making it
go fast leads to a certain amount of tension with the clarity goal.

> And the ring-buffer system which delays the expensive mixing stages until a
> soft interrupt does a great job (current and my fortuna-patch). Difference
> being, fortuna-patch appears to be 2x faster.

Ooh, cool! Must play with to steal the speed benefits. Thank you!

2004-09-27 13:08:55

by Jean-Luc Cooke

[permalink] [raw]
Subject: Re: [PROPOSAL/PATCH] Fortuna PRNG in /dev/random

On Mon, Sep 27, 2004 at 12:50:33AM -0000, [email protected] wrote:
> > SHA-1 without padding, sure.
>
> > hash("a") = hash("a\0") = hash("a\0\0") = ...
> > hash("b") = hash("b\0") = hash("b\0\0") = ...
> > hash("c") = hash("c\0") = hash("c\0\0") = ...
>
> And how do I hash one byte with SHA-1 *without padding*? The only
> hashing code I can find in random.c works 64 bytes at a time.
> What are the other 63 bytes?
>
> (I agree that that *naive* padding leads to collisions, but random.c
> doesn't do ANY padding.)

And I guess it is my fault for assuming "no padding" meant naive padding.

> > I see. And in the -mm examples, is the code easily readable for other
> > os-MemMgt types? If not, then I guess random.c is not the exception and I
> > apologize.
>
> The Linux core -mm code is a fairly legendary piece of Heavy Wizardry.
> To paraphrase, "do not meddle in the affairs of /usr/src/linux/mm/, for
> it is subtle and quick to anger." There *are* people who understand it,
> and it *is* designed (not a decaying pile of old hacks that *nobody*
> understands how it works like some software), but it's also a remarkably
> steep learning curve. A basic overview isn't so hard to acquire, but the
> locking rules have subtle details. There are places where someone very good
> noticed that a given lock doesn't have to be taken on a fast path if you
> avoid doing certain things anywhere else that you'd think would be legal.
>
> And so if someone tries to add code to do the "obvious" thing, the
> lock-free fast path develops a race condition. And we all know what
> fun race conditions are to debug.
>
> Fortunately, some people see this as a challenge and Linux is blessed with
> some extremely skilled VM hackers. And some of them even write and publish
> books on the subject. But while a working VM system can be clear, making it
> go fast leads to a certain amount of tension with the clarity goal.

Frightening ... but informative, thank you.

> > And the ring-buffer system which delays the expensive mixing stages until a
> > soft interrupt does a great job (current and my fortuna-patch). Difference
> > being, fortuna-patch appears to be 2x faster.
>
> Ooh, cool! Must play with to steal the speed benefits. Thank you!

I'll have a patch for an "enable in crypto options" and "blocking with entropy
estimation" random-fortuna.c patch this week. My fiancée is out of town and
there should be time to hack one up.

JLC

2004-09-27 14:29:15

by Theodore Ts'o

[permalink] [raw]
Subject: Re: [PROPOSAL/PATCH] Fortuna PRNG in /dev/random

On Mon, Sep 27, 2004 at 12:50:33AM -0000, [email protected] wrote:
> > And the ring-buffer system which delays the expensive mixing stages until a
> > soft interrupt does a great job (current and my fortuna-patch). Difference
> > being, fortuna-patch appears to be 2x faster.
>
> Ooh, cool! Must play with to steal the speed benefits. Thank you!

The speed benefits come from the fact that /dev/random is currently
using a large pool to store entropy, and so we end up taking cache
line misses as we access the memory. Worse yet, the cache lines are
scattered across the memory (due to how the LFSR works), and we're
using/updating information from the pool 32 bits at a time. In
contrast, in JLC's patch, each pool only has enough space for 256 bits
of entropy (assuming the use of SHA-256), and said 256 bits are stored
packed next to each other, so it can fetch the entire pool in one or
two cache lines.

This is somewhat fundamental to the philosophical question of whether
you store a large amount of entropy, taking advantage of the fact that
the kernel has easy access to hardware-generated entropy, or use tiny
pools and put a greater faith in crypto primitives.

So the bottom line is that while Fortuna's input mixing uses more CPU
(ALU) resources, /dev/random is slower because of memory latency
issues. On processors with Hyperthreading / SMT enabled (which seems
to be the trend across all architectures --- PowerPC, AMD64, Intel,
etc.), the memory latency usage may be less important, since other
tasks will be able to use the other (virtual) half of the CPU while
the entropy mixing is waiting on the memory access to complete. On
the other hand, it does mean that we're chewing up a slightly greater
amount of memory bandwidth during the entropy mixing process. Whether
or not any of this is actually measurable during real-life mixing is
an interesting and non-obvious question.

- Ted

2004-09-27 14:43:39

by Jean-Luc Cooke

[permalink] [raw]
Subject: Re: [PROPOSAL/PATCH] Fortuna PRNG in /dev/random

On Mon, Sep 27, 2004 at 10:23:52AM -0400, Theodore Ts'o wrote:
> On Mon, Sep 27, 2004 at 12:50:33AM -0000, [email protected] wrote:
> > > And the ring-buffer system which delays the expensive mixing stages until a
> > > soft interrupt does a great job (current and my fortuna-patch). Difference
> > > being, fortuna-patch appears to be 2x faster.
> >
> > Ooh, cool! Must play with to steal the speed benefits. Thank you!
>
> This is somewhat fundamental to the philosophical question of whether
> you store a large amount of entropy, taking advantage of the fact that
> the kernel has easy access to hardware-generated entropy, or use tiny
> pools and put a greater faith in crypto primitives.

Tiny in that at most you can only pull out 256 bits of entropy from one pool,
you are correct. SHA-256 buffers 64 bytes at a time. The transform requires
512 bytes for its mixing. The mixing of the 512-byte W[] array is done
serially.

random_state->pool is POOLBYTES in size, which is poolwords*4, DEFAULT'd to
512 bytes. The "5 tap" LFSR reaches all over that 512-byte memory for its
mixing.

If page sizes get big enough and we page-align the pool[] member, the
standard RNG will get faster.

JLC

2004-09-27 14:56:41

by Theodore Ts'o

[permalink] [raw]
Subject: Re: [PROPOSAL/PATCH] Fortuna PRNG in /dev/random

On Mon, Sep 27, 2004 at 09:32:03AM -0400, Jean-Luc Cooke wrote:
>
> I'll read over this once I finish re-writing my patch to use your entropy
> estimation.

While you're at it, please re-read RFC 793 and RFC 1185. You still
don't have TCP sequence generation done right. The global counter
is being increased for every TCP connection, and with only eight bits,
it can wrap very frequently. Encrypting the source/destination
address/port tuple and using that as an offset to the global clock,
and then only bumping the counter when you rekey would be much more in
the spirit of RFC 1185, and would result in sequence numbers much less
likely to cause stale packets to get mistakenly accepted.

I'm still a bit concerned about whether doing AES is going to be a
speed issue. Your comparisons against MD4 using openssl don't really
prove much, because (a) the original code used a cut-down MD4, and (b)
the openssl benchmark does a large number of encryptions and nothing
else, so all of the AES key schedule and tables will be in cache.

The only real way to settle this would be to ask Jamal and some of the
other networking hackers to repeat their benchmarks and see if the AES
encryption for every TCP SYN is a problem or not. CPU's have gotten
faster (but then again so have networks, and memory has *not* gotten
much faster), so only a real benchmark will tell us for sure.

- Ted

2004-09-27 15:21:07

by Jean-Luc Cooke

[permalink] [raw]
Subject: Re: [PROPOSAL/PATCH] Fortuna PRNG in /dev/random

Thanks.

My re-writing will appear more like an editorial revision than a
re-write.

I will certainly talk to Jamal et al. Thanks

JLC

On Mon, Sep 27, 2004 at 10:55:55AM -0400, Theodore Ts'o wrote:
> On Mon, Sep 27, 2004 at 09:32:03AM -0400, Jean-Luc Cooke wrote:
> >
> > I'll read over this once I finish re-writing my patch to use your entropy
> > estimation.
>
> While you're at it, please re-read RFC 793 and RFC 1185. You still
> don't have TCP sequence generation done right. The global counter
> is being increased for every TCP connection, and with only eight bits,
> it can wrap very frequently. Encrypting the source/destination
> address/port tuple and using that as an offset to the global clock,
> and then only bumping the counter when you rekey would be much more in
> the spirit of RFC 1185, and would result in sequence numbers much less
> likely to cause stale packets to get mistakenly accepted.
>
> I'm still a bit concerned about whether doing AES is going to be a
> speed issue. Your comparisons against MD4 using openssl don't really
> prove much, because (a) the original code used a cut-down MD4, and (b)
> the openssl benchmark does a large number of encryptions and nothing
> else, so all of the AES key schedule and tables will be in cache.
>
> The only real way to settle this would be to ask Jamal and some of the
> other networking hackers to repeat their benchmarks and see if the AES
> encryption for every TCP SYN is a problem or not. CPU's have gotten
> faster (but then again so have networks, and memory has *not* gotten
> much faster), so only a real benchmark will tell us for sure.
>
> - Ted

2004-09-27 18:54:30

by Manfred Spraul

[permalink] [raw]
Subject: Re: [PROPOSAL/PATCH] Fortuna PRNG in /dev/random

On Mon, Sep 27, 2004 at 10:55:55AM -0400, Theodore Ts'o wrote:
>
> While you're at it, please re-read RFC 793 and RFC 1185. You still
> don't have TCP sequence generation done right.

Actually trying to replace the partial MD4 might be worth an attempt:
I'm certain that the partial MD4 is not the best/fastest way to generate
sequence numbers.
>
>The only real way to settle this would be to ask Jamal and some of the
>other networking hackers to repeat their benchmarks and see if the AES
>encryption for every TCP SYN is a problem or not.
>
It would be unfair: The proposed implementation is not optimized - e.g.
the sequence number generation runs under a global spinlock. On large
SMP systems this will kill the performance, regardless of the internal
implementation.

For the Linux-variant of RFC 1948, the sequence number generation can be
described as:
A hash function that generates 24 bit output from 96 bit input. Some of
the input bits can be chosen by the attacker, all of these bits are
known to the attacker. The attacker can query the output of the hash for
some inputs - realistically less than 2^16 to 2^20 inputs. A successful
attack means guessing the output of the hash function for one of the
inputs that the attacker can't query.

Current implementation:
Set the MD4 initialization vector to the 96 bit input plus 32 secret,
random bits.
Perform an MD4 hash over 256 secret, random bits.
Take the lowest 24 bits from one of the MD4 state words.
Every 5 minutes the secret bits are reset.

For IPv6, the requirements are similar, except that the input is 288
bits long.

--
Manfred

2004-09-27 19:46:23

by Jean-Luc Cooke

[permalink] [raw]
Subject: Re: [PROPOSAL/PATCH] Fortuna PRNG in /dev/random

On Mon, Sep 27, 2004 at 08:53:56PM +0200, Manfred Spraul wrote:
> On Mon, Sep 27, 2004 at 10:55:55AM -0400, Theodore Ts'o wrote:
> >
> >While you're at it, please re-read RFC 793 and RFC 1185. You still
> >don't have TCP sequence generation done right.
>
> Actually trying to replace the partial MD4 might be worth an attempt:
> I'm certain that the partial MD4 is not the best/fastest way to generate
> sequence numbers.

It in fact uses two full SHA1 hashes for TCP sequence numbers (endian and
padding issues aside). My patch aims to do this in 1 AES256 encrypt, or 2
AES256 encrypts for IPv6.

> >The only real way to settle this would be to ask Jamal and some of the
> >other networking hackers to repeat their benchmarks and see if the AES
> >encryption for every TCP SYN is a problem or not.
> >
> It would be unfair: The proposed implementation is not optimized - e.g.
> the sequence number generation runs under a global spinlock. On large
> SMP systems this will kill the performance, regardless of the internal
> implementation.

This would be nice to have in both RNG implementations.

> For the Linux-variant of RFC 1948, the sequence number generation can be
> described as:
> A hash function that generates 24 bit output from 96 bit input. Some of
> the input bits can be chosen by the attacker, all of these bits are
> known to the attacker. The attacker can query the output of the hash for
> some inputs - realistically less than 2^16 to 2^20 inputs. A successful
> attack means guessing the output of the hash function for one of the
> inputs that the attacker can't query.
>
> Current implementation:
> Set the MD4 initialization vector to the 96 bit input plus 32 secret,
> random bits.
> Perform an MD4 hash over 256 secret, random bits.
> Take the lowest 24 bits from one of the MD4 state words.
> Every 5 minutes the secret bits are reset.
>
> For IPv6, the requirements are similar, except that the input is 288
> bits long.
>
> --
> Manfred

2004-09-28 00:08:25

by Theodore Ts'o

[permalink] [raw]
Subject: Re: [PROPOSAL/PATCH] Fortuna PRNG in /dev/random

On Mon, Sep 27, 2004 at 03:45:02PM -0400, Jean-Luc Cooke wrote:
> > Actually trying to replace the partial MD4 might be worth an attempt:
> > I'm certain that the partial MD4 is not the best/fastest way to generate
> > sequence numbers.
>
> It in fact uses two full SHA1 hashes for TCP sequence numbers (endian and
> padding issues aside). My patch aims to do this in 1 AES256 encrypt, or 2
> AES256 encrypts for IPv6.

No, that's not correct. We rekey at most once every five minutes, and
that requires a SHA hash, but in the normal case, it's only a partial MD4.

An AES encrypt for every TCP connection *might* be faster, but I'd
want to time it to make sure, and doing a bulk test ala "openssl
speed" isn't necessarily going to be predictive, as I've discussed earlier.

- Ted

2004-09-28 02:25:31

by Jean-Luc Cooke

[permalink] [raw]
Subject: Re: [PROPOSAL/PATCH] Fortuna PRNG in /dev/random

On Mon, Sep 27, 2004 at 08:07:19PM -0400, Theodore Ts'o wrote:
> On Mon, Sep 27, 2004 at 03:45:02PM -0400, Jean-Luc Cooke wrote:
> > > Actually trying to replace the partial MD4 might be worth an attempt:
> > > I'm certain that the partial MD4 is not the best/fastest way to generate
> > > sequence numbers.
> >
> > It in fact uses two full SHA1 hashes for TCP sequence numbers (endian and
> > padding issues aside). My patch aims to do this in 1 AES256 encrypt, or 2
> > AES256 encrypts for IPv6.
>
> No, that's not correct. We rekey once at most every five minutes, and
> that requires a SHA hash, but in the normal case, it's only a partial MD4.

Pardon, the SYN cookies use two SHA1's, not the TCP sequence numbers. Easy
mistake to make with the comment "Compute the secure sequence number." in the
secure_tcp_syn_cookie() function. :)

> An AES encrypt for every TCP connection *might* be faster, but I'd
> want to time it to make sure, and doing a bulk test ala "openssl
> speed" isn't necessarily going to be predictive, as I've discussed earlier.

Agreed.

I was meaning to ask about add_timer_randomness().

There is a comment:
	/* if over the trickle threshold, use only 1 in 4096 samples */
	if (random_state->entropy_count > trickle_thresh &&
	    (__get_cpu_var(trickle_count)++ & 0xfff))
		return;

"if (x++ & 0xfff)" will be true 0xfff times out of 0x1000. Is this the
goal? Because I don't think this will control the trickle very well.

JLC

2004-09-28 13:46:21

by Herbert Poetzl

[permalink] [raw]
Subject: Re: [PROPOSAL/PATCH] Fortuna PRNG in /dev/random

On Mon, Sep 27, 2004 at 10:24:09PM -0400, Jean-Luc Cooke wrote:
> On Mon, Sep 27, 2004 at 08:07:19PM -0400, Theodore Ts'o wrote:
> > On Mon, Sep 27, 2004 at 03:45:02PM -0400, Jean-Luc Cooke wrote:
> > > > Actually trying to replace the partial MD4 might be worth an attempt:
> > > > I'm certain that the partial MD4 is not the best/fastest way to generate
> > > > sequence numbers.
> > >
> > > It in fact uses two full SHA1 hashes for TCP sequence numbers (endian and
> > > padding issues aside). My patch aims to do this in 1 AES256 encrypt, or 2
> > > AES256 encrypts for IPv6.
> >
> > No, that's not correct. We rekey once at most every five minutes, and
> > that requires a SHA hash, but in the normal case, it's only a partial MD4.
>
> Pardon, the SYN cookies use two SHA1's, not the TCP sequence numbers. Easy
> mistake to make with the comment "Compute the secure sequence number." in the
> secure_tcp_syn_cookie() function. :)
>
> > An AES encrypt for every TCP connection *might* be faster, but I'd
> > want to time it to make sure, and doing a bulk test ala "openssl
> > speed" isn't necessarily going to be predictive, as I've discussed earlier.
>
> Agreed.
>
> Was meaning to ask:
> add_timer_randomness()
>
> There is a comment:
> /* if over the trickle threshold, use only 1 in 4096 samples */
> if ( random_state->entropy_count > trickle_thresh &&
> (__get_cpu_var(trickle_count)++ & 0xfff))
> return;
>
> "if (x++ & 0xfff)" will be true 0xfff times out of 0x1000. Is this
> the goal? Because I don't think this will control the trickle very well.

and it will 'return' 0xfff times out of 0x1000 ...

(just one case (x == 0) will pass this check)

best,
Herbert

> JLC

2004-09-29 17:15:40

by Jean-Luc Cooke

[permalink] [raw]
Subject: [PROPOSAL/PATCH 2] Fortuna PRNG in /dev/random

Team,

I have attempted to gather everyone's comments into this re-write of my
Fortuna /dev/random patch. I will summarize.

1. Requires the Cryptographic API.

2. Enable the Fortuna replacement for random.c in the Cryptographic Options menu
of kernel config.

3. As-is, it checks at compile time that AES and SHA256 are built-in modules.

4. You can change this block cipher and message digest setting in the
crypto/random-fortuna.c file. /proc/sys/kernel/random/digest_algo
and cipher_algo are now strings telling the users what algorithms are
being used.

5. Entropy estimation still exists in much the same way as in the legacy
random.c with the following exceptions:
a. /proc/sys/kernel/random/read_wakeup_threshold's lower limit was 8; it is now
0 for those of us crazy enough to want to remove blocking.
b. /proc/sys/kernel/random/entropy_avail only increases when data is added to
pool-0. Pool-0 is the only one of the 32 Fortuna pools which is drawn from
at every reseeding. This change was made so we don't over-estimate the
amount of entropy we consume.
c. Since Fortuna resists consuming its entropy, it seemed inappropriate to debit
1MB of entropy from the count when reading 1MB from /dev/urandom. Now, every
reseeding deducts min(cipherKeySize, hashDigestSize) from the entropy count.
It should be noted that by doubling the blocksize from which you read
/dev/urandom, you double the speed, since you halve the number of reseeds.

6. The input mixing and output generation functions now use Fortuna
a. 32 independent feedback input mixing pools using a cryptographic hash from
the CryptoAPI are fed event data in round-robin order.
b. A block cipher in CTR mode generates the output.
c. Every file system block read from /dev/{u}random causes Fortuna to reseed
the block cipher's key with the digest output from 1 or more of the input
mixing pools. Pool-0 is used every time, pool-j is used every 2^j times.

7. Since Fortuna resists consuming its entropy, saving the 2048-byte random-seed
should be changed to: dd if=/dev/urandom of=$seedfile bs=256 count=8. After
2^3 = 8 block reads, pools 0, 1, 2, and 3 will certainly be used. After
2^4 = 16, pools up to number 4 will be used, and so on.

8. The difference in bzImage size is 7,268 bytes on my P4 laptop. This is a
compressed image. My comparison was the 2.6.8.1 tarball with no kernel config
changes vs. Enabling the cryptoapi, Fortuna, AES (i586 assembly) and SHA256.
I have not yet done run-time memory consumption comparisons, but
Fortuna is certainly heavier than Legacy.

9. The difference in performance in /dev/random is roughly a 32x decrease in
output rates, due to the 32x decrease in entropy estimation (see 5c).

10. The difference in performance in /dev/urandom is a 3x increase in output rates
for a 512 byte block size. A doubling in block size doubles the performance.
I produced 512,000,000 bytes of output with a 32k block size in less than 10
seconds. The legacy /dev/urandom by comparison accomplished the same thing in
2.5 minutes. The assembly version of AES for the i586 gets credit for this.

I tried to test the syn-cookie code but was unable to determine if it works.
Printk()s in the syn cookie generation function were never called even though I
echoed 1 to the proc file. If someone can tell me what I'm doing wrong, I'd love
to test this further.

I'd appreciate beta-testers on other platforms to provide feedback.

JLC


Attachments:
(No filename) (3.62 kB)
fortuna-2.6.8.1.patch (61.19 kB)
Download all attachments

2004-09-29 19:33:20

by Theodore Ts'o

[permalink] [raw]
Subject: Re: [PROPOSAL/PATCH 2] Fortuna PRNG in /dev/random

While addition of the entropy estimator helps protect the Fortuna
random number generator against a state extension attack, /dev/urandom
is using the same entropy extraction routine as /dev/random, and so
Fortuna is still vulnerable to state extension attacks. This is
because a key aspect of the Fortuna design has been ignored in JLC's
implementation.

The missing piece is to ensure that a rekey can only take place when
sufficient entropy has built up in the higher-order pools, so that the
rekey is catastrophic. Otherwise, the attacker can simply brute-force
a wide variety of entropy inputs from the hardware, and see if any of
them matches the output from /dev/urandom (from which the attacker is
continuously pulling output). So in the original design, the rekey
from a higher-order pool only takes place after k*2^n seconds, where n
is the order of the pool, and k is some constant. The idea is that
after some period of time, hopefully one of the pools has built up at
least 128 bits or so worth of entropy, and so the catastrophic
reseeding will prevent an attacker from trying all possible inputs and
determining the state of the pool. (Niels recommends that k be at
least a tenth of a second; see pages 38-40 of
http://th.informatik.uni-mannheim.de/people/lucks/papers/Ferguson/Fortuna.pdf).

Unfortunately, Fortuna will call random_reseed() after every single
read from /dev/urandom. This is not time-limited at all, so as long
as the attacker can call /dev/urandom fast enough, it can continue to
monitor the various higher-level pools. This can be fixed easily by
simply changing the rekey function so that it only attempts a reseed
after some period of time has gone by.
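The fix Ted describes could be sketched like this (an illustrative user-space Python model, not kernel code; the real patch would presumably compare jiffies rather than call time.monotonic()):

```python
import time

RESEED_MIN_INTERVAL = 0.1  # seconds; Ferguson/Schneier suggest at least 0.1 s

_last_reseed = 0.0  # time of the last accepted reseed

def maybe_reseed(reseed_fn, now=None):
    """Run reseed_fn only if RESEED_MIN_INTERVAL has elapsed since the
    last accepted reseed; otherwise skip it and return False."""
    global _last_reseed
    if now is None:
        now = time.monotonic()
    if now - _last_reseed < RESEED_MIN_INTERVAL:
        return False
    _last_reseed = now
    reseed_fn()
    return True

# Back-to-back requests: only sufficiently spaced ones actually reseed.
assert maybe_reseed(lambda: None, now=100.00) is True
assert maybe_reseed(lambda: None, now=100.05) is False
assert maybe_reseed(lambda: None, now=100.20) is True
```

With this gate in place, an attacker hammering /dev/urandom can no longer force a reseed on every read, so the higher-order pools get a chance to accumulate enough entropy for a catastrophic rekey.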

There is of course the question of whether a state extension attack is
realistic. After all, most attacks where the attacker has sufficient
privileges to obtain the complete state of the RNG are also ones where
the attacker has enough privileges to install a rootkit, or compromise
the kernel by loading a hostile loadable kernel module, etc. Also,
there is the question of whether an attacker could read sufficient
amounts of output to keep track of the contents of the pool, and
whether the attacker could either do the brute-forcing on the local
machine, or send the large amounts of information read from
/dev/urandom to an outside machine, without using enough CPU time that
it would be noticed by a system administrator -- but then again, the
crypto academics who worry about things like state extension attacks
aren't worried about practical niceties. In any case, if we decide
that state extension attacks aren't practically possible, or otherwise
not worthy of concern, or if JLC's Fortuna implementation is itself
vulnerable to state extension attacks, there's no reason to use JLC's
implementation in the first place.

- Ted

2004-09-29 20:31:58

by Jean-Luc Cooke

[permalink] [raw]
Subject: Re: [PROPOSAL/PATCH 2] Fortuna PRNG in /dev/random



Why would we want to miss that when so much effort was made to meet the
requirements of the traditional /dev/random? So...

Here's patch v2.1.2 that waits at least 0.1 sec before reseeding for
non-blocking reads to alleviate Ted's concern wrt waiting for reseeds.



When reading nbytes from /dev/{u}random, Legacy /dev/random would:
- Mix nbytes of data from primary pool into secondary pool
- Then generate nbytes from secondary pool

When reading nbytes from /dev/{u}random, Fortuna-patch /dev/random would:
- Mix ??? of data from input pools into the AES key for output generation
- Then generate nbytes from AES256-CTR
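For illustration only, the Fortuna-patch read path described above might be modeled as follows (a Python sketch; SHA-256 in counter mode stands in for the patch's AES256-CTR, and all the pool machinery is collapsed into a single reseed() call):

```python
import hashlib

class TinyFortuna:
    """Toy model of the Fortuna-patch read path sketched above.
    SHA-256 in counter mode stands in for AES256-CTR; illustration only."""

    def __init__(self):
        self.key = b"\x00" * 32
        self.counter = 0

    def reseed(self, pool_material):
        # "Mix data from the input pools into the key": the new key is a
        # hash of the old key plus the harvested pool material.
        self.key = hashlib.sha256(self.key + pool_material).digest()

    def generate(self, nbytes):
        # "Generate nbytes from the keyed stream": hash(key || counter)
        # plays the role of one CTR-mode block.
        out = b""
        while len(out) < nbytes:
            block = hashlib.sha256(
                self.key + self.counter.to_bytes(16, "big")).digest()
            self.counter += 1
            out += block
        # Rekey after every request so earlier output can't be recomputed
        # from a later state capture (backtracking resistance).
        self.key = hashlib.sha256(self.key + b"rekey").digest()
        return out[:nbytes]
```

Two instances reseeded with identical pool material produce identical streams, and the per-request rekey makes each request's output unrecoverable from the generator's later state.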

Perhaps I'm missing the subtlety of the difference in terms of security. If
nbytes >= the size of both pools, wouldn't Legacy also be vulnerable to the
same attack?

JLC

On Wed, Sep 29, 2004 at 03:31:17PM -0400, Theodore Ts'o wrote:
> While addition of the entropy estimator helps protect the Fortuna
> Random number generator against a state extension attack, /dev/urandom
> is using the same entropy extraction routine as /dev/random, and so
> Fortuna is still vulnerable to state extension attacks. This is
> because a key aspect of the Fortuna design has been ignored in JLC's
> implementation.

2004-09-29 21:41:51

by Theodore Ts'o

[permalink] [raw]
Subject: Re: [PROPOSAL/PATCH 2] Fortuna PRNG in /dev/random

On Wed, Sep 29, 2004 at 04:27:07PM -0400, Jean-Luc Cooke wrote:
>
> When reading nbytes from /dev/{u}random, Legacy /dev/random would:
> - Mix nbytes of data from primary pool into secondary pool
> - Then generate nbytes from secondary pool
>
> When reading nbytes from /dev/{u}random, Fortuna-patch /dev/random would:
> - Mix ??? of data from input pools into the AES key for output generation
> - Then generate nbytes from AES256-CTR
>
> Perhaps I miss the subtlety of the difference in terms of security. If
> nbytes >= size of both pools - wouldn't Legacy also be vulnerable to the
> same attack?

Sure, but Fortuna is supposed to be "more secure" because it resists
the state extension attack. I don't think the state extension attack
is at all realistic, for the reasons cited above. But if your
implementation doesn't resist the state extension attack, then why
bloat the kernel with an alternate random algorithm that's no better
as far as security is concerned? (And one that is more heavyweight,
more wasteful with its entropy, etc., etc.?)

- Ted

P.S. I'll also note, by the way, that in more recent versions of
/dev/random, we use separate pools for /dev/urandom and /dev/random.
A further enhancement which I'm thinking might be a good one to add is
to limit the rate at which we transfer randomness from the primary
pool to the urandom pool. So it's not that I'm against making
changes; it's just that I want the changes to make sense, and protect
against realistic threats, not imaginary ones.

2004-09-29 21:53:59

by Theodore Ts'o

[permalink] [raw]
Subject: Re: [PROPOSAL/PATCH 2] Fortuna PRNG in /dev/random

On Wed, Sep 29, 2004 at 04:27:07PM -0400, Jean-Luc Cooke wrote:
>
> Here's patch v2.1.2 that waits at least 0.1 sec before reseeding for
> non-blocking reads to alleviate Ted's concern wrt waiting for reseeds.

You didn't include the patch, and in any case, you'll probably want to
do it for both blocking as well as non-blocking reads. And keep in
mind, it's not *my* concern, but Niels Ferguson's and Bruce
Schneier's. After all, if you're going to call it Fortuna, you might
as well be faithful to their design, especially since if you don't,
you're leaving it utterly vulnerable to this state extension attack
they are so worried about.

- Ted

2004-09-29 23:28:45

by Jean-Luc Cooke

[permalink] [raw]
Subject: Re: [PROPOSAL/PATCH 2] Fortuna PRNG in /dev/random

Oops! My bad. Attached here along with a few other changes. I'm keeping a
change log now at the top of the random-fortuna.c file.

Being 100% faithful to their design would require constructions such as each
event source holding its own pool index pointer; right now event collection
is separated from event mixing for API and performance reasons.



Trying to find solace and more articulate advice, I've sat down with
Practical Cryptography again and reviewed the state compromise-extension
attack recovery section.

If copyright law smiles on me, I'll quote section 10.5.2, starting on
page 170 of the soft-cover print.

-- start quote --
The speed at which the system recovers from a compromised state depends on
the rate at which entropy (with respect to the attacker) flows into the
pools. If we assume that this is a fixed rate R, then after t seconds we
have in total R*t bits of entropy. Each pool receives about R*t/32
bits in this time period. The attacker can no longer keep track of the state
if the generator is reseeded with a pool with more than 128 bits of entropy
in it. There are two cases. If pool P_0 collects 128 bits of entropy before
the next reseed operation, then we have recovered from the compromise. How
fast this happens depends on how large we let P_0 grow before we reseed. The
second case is when P_0 is reseeding too fast, due to random events unknown to
(or generated by) the attacker. Let t be the time between reseeds. Then pool
P_i collects 2^i*R*t/32 bits of entropy between reseeds, and is used in a
reseed every 2^i*t seconds. The recovery from the compromise happens the
first time we reseed with pool P_i where 128 <= 2^i*R*t/32 < 256. (The upper
bound derives from the fact that otherwise pool P_{i-1} would contain 128
bits of entropy between reseeds.) This inequality gives us

2^i * R * t
----------- < 256
32

and thus

8192
2^i * t < ----
R

In other words, the time between recovery points (2^i*t) is bounded by the
time it takes to collect 2^13 bits of entropy (8192 / R). The number 2^13
seems a bit large, but it can be explained in the following way. We need at
least 2^7 bits to recover from a compromise. We might be unlucky if the
system reseeds just before we have collected 2^7 bits in a particular pool,
and then we have to use the next pool, which collects close to 2^8 bits before
the reseed. Finally, we divide our data over 32 pools, which accounts for
another factor of 2^5.

This is a very good result. The solution is within a factor of 64 of an
ideal solution (it needs at most 64 times as much randomness as an ideal
solution would need). This is a constant factor, and it ensures that we can
never do terribly badly, and will always recover eventually. Furthermore, we
do not need to know how much entropy our events have or how much the attacker
knows. That is the real advantage Fortuna has over Yarrow. The
impossible-to-construct entropy estimators are gone for good. Everything is
fully automatic; if there is a good flow of random data, the PRNG will
recover quickly. If there is only a trickle of random data, it takes a long
time to recover.
-- end quote --
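The bound in the quote is easy to sanity-check numerically (a small sketch; the rates R below are made-up examples, not measurements):

```python
# Numeric check of the bound quoted above: the time between recovery
# points, 2^i * t, is at most 8192 / R seconds, i.e. the time it takes
# to collect 2^13 bits of entropy at inflow rate R (bits/second).
def max_recovery_time(R):
    """Worst-case seconds until a catastrophic reseed, per the quote."""
    return 8192.0 / R

# A trickle of 10 bits/s bounds recovery at about 13.7 minutes;
# a healthy 1000 bits/s bounds it at about 8.2 seconds.
assert round(max_recovery_time(10) / 60, 1) == 13.7
assert round(max_recovery_time(1000), 1) == 8.2
```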

Hopefully the above quote from the book will be interpreted as free
advertising and not theft.



On Wed, Sep 29, 2004 at 05:53:15PM -0400, Theodore Ts'o wrote:
> On Wed, Sep 29, 2004 at 04:27:07PM -0400, Jean-Luc Cooke wrote:
> >
> > Here's patch v2.1.2 that waits at least 0.1 sec before reseeding for
> > non-blocking reads to alleviate Ted's concern wrt waiting for reseeds.
>
> You didn't include the patch, and in any case, you'll probably want to
> do it for both blocking as well as non-blocking reads. And keep in
> mind, it's not *my* concern, but Niels Ferguson's and Bruce
> Schneier's. After all, if you're going to call it Fortuna, you might
> as well be faithful to their design, especially since if you don't,
> you're leaving it utterly vulnerable to this state extension attack
> they are so worried about.
>
> - Ted

2004-09-30 00:27:40

by Jean-Luc Cooke

[permalink] [raw]
Subject: Re: [PROPOSAL/PATCH 2] Fortuna PRNG in /dev/random

Damn,

Need to eat me some brain-food.

JLC

On Wed, Sep 29, 2004 at 05:53:15PM -0400, Theodore Ts'o wrote:
> On Wed, Sep 29, 2004 at 04:27:07PM -0400, Jean-Luc Cooke wrote:
> >
> > Here's patch v2.1.2 that waits at least 0.1 sec before reseeding for
> > non-blocking reads to alleviate Ted's concern wrt waiting for reseeds.
>
> You didn't include the patch, and in any case, you'll probably want to
> do it for both blocking as well as non-blocking reads. And keep in
> mind, it's not *my* concern, but Niels Ferguson's and Bruce
> Schneier's. After all, if you're going to call it Fortuna, you might
> as well be faithful to their design, especially since if you don't,
> you're leaving it utterly vulnerable to this state extension attack
> they are so worried about.
>
> - Ted
> _______________________________________________
>
> Subscription: http://lists.logix.cz/mailman/listinfo/cryptoapi
> List archive: http://lists.logix.cz/pipermail/cryptoapi


Attachments:
(No filename) (0.98 kB)
fortuna-2.6.8.1.patch (63.62 kB)

2004-09-30 04:27:22

by Jean-Luc Cooke

[permalink] [raw]
Subject: Re: [PROPOSAL/PATCH 2] Fortuna PRNG in /dev/random

This should be the last one for a while.

v2.1.4 crypto/random-fortuna.c

Ted, since this is a crypto-API feature as well as an optional replacement
for /dev/random, should I be passing this through James, or both of you?

Cheers,

JLC

On Wed, Sep 29, 2004 at 08:21:00PM -0400, Jean-Luc Cooke wrote:
> Damn,
>
> Need to eat me some brain-food.
>
> JLC


Attachments:
(No filename) (351.00 B)
fortuna-2.6.8.1.patch (63.20 kB)

2004-09-30 06:51:04

by James Morris

[permalink] [raw]
Subject: Re: [PROPOSAL/PATCH 2] Fortuna PRNG in /dev/random

On Thu, 30 Sep 2004, Jean-Luc Cooke wrote:

> This should be the last one for a while.
>
> v2.1.4 crypto/random-fortuna.c
>
> Ted, since this is a crypto-API feature as well as an optional replacement
> for /dev/random, should I be passing this through James, or both of you?

Whatever the case, I would follow Ted's advice on this code.


- James
--
James Morris
<[email protected]>


2004-09-30 09:03:57

by Felipe Alfaro Solana

[permalink] [raw]
Subject: Re: [PROPOSAL/PATCH 2] Fortuna PRNG in /dev/random

On Sep 30, 2004, at 06:23, Jean-Luc Cooke wrote:

> <fortuna-2.6.8.1.patch>

You said AES and SHA-256 _must_ be built-in, but I can't see any code
on your patch that enforces selection of those config options. Thus,
it's possible to compile the kernel when CONFIG_CRYPTO_SHA256=n and
CONFIG_CRYPTO_AES=n although, of course, it will fail.

2004-09-30 10:46:15

by Jan-Benedict Glaw

[permalink] [raw]
Subject: Re: [PROPOSAL/PATCH 2] Fortuna PRNG in /dev/random

On Thu, 2004-09-30 00:23:03 -0400, Jean-Luc Cooke <[email protected]>
wrote in message <[email protected]>:
> --- linux-2.6.8.1/crypto/Kconfig 2004-08-14 06:56:22.000000000 -0400
> +++ linux-2.6.8.1-rand2/crypto/Kconfig 2004-09-28 23:30:04.000000000 -0400
> @@ -9,6 +9,15 @@
> help
> This option provides the core Cryptographic API.
>
> +config CRYPTO_RANDOM_FORTUNA
> + bool "The Fortuna RNG"
> + help
> + Replaces the legacy Linux RNG with one using the crypto API
> + and Fortuna by Ferguson and Schneier. Entropy estimation, and
> + a throttled /dev/random remain. Improvements include faster
> + /dev/urandom output and event input mixing.
> + Note: Requires AES and SHA256 to be built-in.
> +
> config CRYPTO_HMAC
> bool "HMAC support"

Instead of mentioning AES and SHA256 being required built-in, why not
just "select" them?

MfG, JBG

--
Jan-Benedict Glaw [email protected] . +49-172-7608481 _ O _
"Eine Freie Meinung in einem Freien Kopf | Gegen Zensur | Gegen Krieg _ _ O
fuer einen Freien Staat voll Freier Bürger" | im Internet! | im Irak! O O O
ret = do_actions((curr | FREE_SPEECH) & ~(NEW_COPYRIGHT_LAW | DRM | TCPA));


Attachments:
(No filename) (1.19 kB)
signature.asc (189.00 B)
Digital signature

2004-09-30 13:52:34

by Jean-Luc Cooke

[permalink] [raw]
Subject: Re: [PROPOSAL/PATCH 2] Fortuna PRNG in /dev/random

In the random-fortuna.c file I have some "#if !defined CONFIG_CRYPTO_SHA256"
checks.

As Jan-Benedict Glaw pointed out, I could just have Kconfig "select" the
algorithms, which is what I did.

Updated patch, only changes are to crypto/Kconfig.
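For reference, the resulting Kconfig entry presumably looks something like this (a sketch based on the hunk quoted earlier in the thread; the exact help text in the patch may differ):

```kconfig
config CRYPTO_RANDOM_FORTUNA
	bool "The Fortuna RNG"
	select CRYPTO_AES
	select CRYPTO_SHA256
	help
	  Replaces the legacy Linux RNG with one using the crypto API
	  and Fortuna by Ferguson and Schneier. Entropy estimation, and
	  a throttled /dev/random remain. Improvements include faster
	  /dev/urandom output and event input mixing.
```

With "select", enabling CRYPTO_RANDOM_FORTUNA forces AES and SHA256 on, so the failing CONFIG_CRYPTO_SHA256=n / CONFIG_CRYPTO_AES=n combination Felipe described can no longer be configured.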

Cheers,

JLC

On Thu, Sep 30, 2004 at 11:03:52AM +0200, Felipe Alfaro Solana wrote:
> On Sep 30, 2004, at 06:23, Jean-Luc Cooke wrote:
>
> ><fortuna-2.6.8.1.patch>
>
> You said AES and SHA-256 _must_ be built-in, but I can't see any code
> on your patch that enforces selection of those config options. Thus,
> it's possible to compile the kernel when CONFIG_CRYPTO_SHA256=n and
> CONFIG_CRYPTO_AES=n although, of course, it will fail.


Attachments:
(No filename) (673.00 B)
fortuna-2.6.8.1.patch (63.24 kB)

2004-10-01 13:07:14

by Jean-Luc Cooke

[permalink] [raw]
Subject: Re: [PROPOSAL/PATCH 2] Fortuna PRNG in /dev/random

Chris Han pointed out that my #include of "random-fortuna.c" could be done
much more cleanly if the drivers/char/Makefile had some ifeq-else-endif logic
in it. He also pointed out that my #include of crypto/internal.h is not
needed anymore.

Here is an update. v2.1.5 1-Oct-2004

JLC

On Thu, Sep 30, 2004 at 09:36:49AM -0400, Jean-Luc Cooke wrote:
> In the random-fortuna.c file I have some "#if !defined CONFIG_CRYPTO_SHA256"
>
> As Jan-Benedict Glaw pointed out, I could just manually select the
> algorithms, which is what I did.
>
> Updated patch, only changes are to crypto/Kconfig.
>
> Cheers,
>
> JLC
>
> On Thu, Sep 30, 2004 at 11:03:52AM +0200, Felipe Alfaro Solana wrote:
> > On Sep 30, 2004, at 06:23, Jean-Luc Cooke wrote:
> >
> > ><fortuna-2.6.8.1.patch>
> >
> > You said AES and SHA-256 _must_ be built-in, but I can't see any code
> > on your patch that enforces selection of those config options. Thus,
> > it's possible to compile the kernel when CONFIG_CRYPTO_SHA256=n and
> > CONFIG_CRYPTO_AES=n although, of course, it will fail.


Attachments:
(No filename) (1.03 kB)
fortuna-2.6.8.1.patch (63.13 kB)