I noticed peculiarities in the behaviour of the delta-delta-3 system for
entropy estimation in the random.c code. When I hold right alt or control, I
get about 8 bits of entropy per repeat from /dev/random, which is
overestimated. I think the real entropy is 0 bits because the timing of the
interrupt is absolutely deterministic. Am I right, or is there any hidden
magic source of entropy in this case?
Right shift, left alt, ctrl, and shift give 4 bits per repeat. Is greater
randomness expected from the keys that return 8 bits?
When I have a server where no block read, keyboard, or mouse events occur
(everything is cached in a huge amount of semiconductor RAM), /dev/random
depends solely on the network packets. These can be manipulated and their
leading edge precisely sniffed. I think there is a severe risk of
compromise here. Am I right?
Clock
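For reference, the delta-delta-3 estimator being discussed credits entropy based
on the first, second, and third differences of the event timestamps, so a
perfectly periodic event stream quickly collapses to zero credit. Below is a
simplified, self-contained model of that scheme; the names, the plain integer
timestamps, and the 12-bit cap are illustrative assumptions, not code copied
from random.c.

/*
 * Simplified model of a delta-delta-3 entropy estimator: credit roughly
 * log2(min(|d1|, |d2|, |d3|)) bits per event, where d1, d2, d3 are the
 * first, second and third differences of the event timestamps.  A perfectly
 * periodic source (such as keyboard autorepeat) quickly gives d2 == 0, so
 * the credit collapses to zero.  Names, plain integer timestamps and the
 * 12-bit cap are illustrative, not copied from random.c.
 */
#include <stdio.h>
#include <stdlib.h>

struct timer_state {
	long last_time;
	long last_delta;
	long last_delta2;
};

static int estimate_entropy_bits(struct timer_state *s, long time)
{
	long d1 = time - s->last_time;
	long d2 = d1 - s->last_delta;
	long d3 = d2 - s->last_delta2;
	long m;
	int bits = 0;

	s->last_time = time;
	s->last_delta = d1;
	s->last_delta2 = d2;

	d1 = labs(d1);
	d2 = labs(d2);
	d3 = labs(d3);
	m = d1 < d2 ? d1 : d2;
	if (d3 < m)
		m = d3;

	while (m > 1 && bits < 12) {	/* integer log2, capped at 12 bits */
		m >>= 1;
		bits++;
	}
	return bits;
}

int main(void)
{
	struct timer_state s = { 0, 0, 0 };
	long t = 0;
	int i;

	/* Perfectly periodic "autorepeat" events, 30 ticks apart. */
	for (i = 0; i < 5; i++) {
		t += 30;
		printf("event at t=%ld: credited %d bits\n",
		       t, estimate_entropy_bits(&s, t));
	}
	return 0;
}

Run against a strictly periodic input like the one in main(), only the first
event gets any credit; the later ones all collapse to 0 bits.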
> I noticed peculiarities in the behaviour of the delta-delta-3 system for
> entropy estimation in the random.c code. When I hold right alt or control, I
> get about 8 bits of entropy per repeat from /dev/random, which is
> overestimated. I think the real entropy is 0 bits because the timing of the
> interrupt is absolutely deterministic. Am I right, or is there any hidden
> magic source of entropy in this case?
There are hidden sources of entropy. One is clock skew between the keyboard
processor's clock, the keyboard controller's clock, and the CPU clock
generator's PLL. Another is data motion between the CPU cache and main
memory as various interrupt service routines are executed interspersed with
other system activity.
> Right shift, left alt, ctrl, and shift give 4 bits per repeat. Is greater
> randomness expected from the keys that return 8 bits?
The code does its best to estimate how much actual entropy it is gathering.
> When I have a server where no block read, keyboard, or mouse events occur
> (everything is cached in a huge amount of semiconductor RAM), /dev/random
> depends solely on the network packets. These can be manipulated and their
> leading edge precisely sniffed. I think there is a severe risk of
> compromise here. Am I right?
Nope. There is no way to sniff their leading edge accurate to a billionth
of a second. If you have a 1 GHz Pentium 3, that's the accuracy you'd need.
And you'd need to know that relative to the CPU clock, which comes from an
uncompensated quartz crystal oscillator fed into a noisy multiplier. Top
that off with variations in the oscillator frequency due to microscopic zone
temperature variations.
There is no known method to predict these numbers.
DS
> There are hidden sources of entropy. One is clock skew between the keyboard
> processor's clock, the keyboard controller's clock, and the CPU clock
> generator's PLL. Another is data motion between the CPU cache and main
In RFC 1750, they write that it is not recommended to rely on computer clocks
to generate randomness. Isn't that the case here?
> > depends solely on the network packets. These can be manipulated and their
> > leading edge precisely sniffed. I think there is a severe risk of
> > compromise here. Am I right?
>
> Nope. There is no way to sniff their leading edge accurate to a billionth
> of a second. If you have a 1 GHz Pentium 3, that's the accuracy you'd need.
But it reduces the entropy. When I have a 486/66 and sniff packets accurately
to 3 MHz, only about 4 bits remain (66 MHz / 3 MHz gives roughly 22 possible
cycle offsets, i.e. log2(22), about 4.5 bits). These bits need not show a
uniform distribution, so it could be even easier to guess them.
Clock
At 09:21 AM 12/18/2000 +0100, Karel Kulhavy wrote:
> > There are hidden sources of entropy. One is clock skew between
> the keyboard
> > processor's clock, the keyboard controller's clock, and the CPU clock
> > generator's PLL. Another is data motion between the CPU cache and main
>
>In RFC 1750, they write that it is not recommended to rely on computer clocks
>to generate randomness. Isn't that the case here?
This is the case, but the important thing that David Schwartz said is that
it does not rely on the time in a clock, but rather on the pretty much
completely random skew between several independent clocks. Any particular
oscillator will vary in speed semi-randomly, and if you compare multiple
clocks you can get pretty random numbers.
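A small userspace sketch of that idea: sample the CPU cycle counter whenever an
independently driven clock (here gettimeofday(), standing in for the timer tick)
crosses a boundary, and look at the low bits of the differences. This assumes an
ia32 machine with a TSC; on systems where gettimeofday() is itself interpolated
from the TSC, the jitter shown is only partly due to oscillator skew.

/*
 * Watch the skew directly: sample the CPU cycle counter (one oscillator)
 * each time gettimeofday() (driven, at least in part, by another oscillator)
 * crosses a 10 ms boundary, and print the low bits of the differences.
 * ia32-specific (rdtsc) and purely illustrative.
 */
#include <stdio.h>
#include <sys/time.h>

static inline unsigned long long rdtsc(void)
{
	unsigned int lo, hi;

	__asm__ __volatile__("rdtsc" : "=a" (lo), "=d" (hi));
	return ((unsigned long long)hi << 32) | lo;
}

int main(void)
{
	unsigned long long prev = 0;
	long long last_slot = -1;
	int samples = 0;

	while (samples < 16) {
		struct timeval tv;
		long long slot;

		gettimeofday(&tv, NULL);
		slot = (long long)tv.tv_sec * 100 + tv.tv_usec / 10000;
		if (slot != last_slot) {	/* a new 10 ms slot began */
			unsigned long long now = rdtsc();

			if (prev)
				printf("low 12 bits of cycle delta: 0x%03llx\n",
				       (now - prev) & 0xfffULL);
			prev = now;
			last_slot = slot;
			samples++;
		}
	}
	return 0;
}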
--
This message has been brought to you by the letter alpha and the number pi.
Open Source: Think locally; act globally.
David Feuer
[email protected]
Hello!
> I noticed peculiarities in the behaviour of the delta-delta-3 system for
> entropy estimation in the random.c code. When I hold right alt or control, I
> get about 8 bits of entropy per repeat from /dev/random, which is
> overestimated. I think the real entropy is 0 bits because the timing of the
> interrupt is absolutely deterministic. Am I right, or is there any hidden
> magic source of entropy in this case?
It isn't _absolutely_ deterministic (see the other replies about clock skew), but
I agree it isn't a reliable source of entropy. This is the reason why
add_keyboard_randomness() tries not to count autorepeated keys, but unfortunately
it's buggy since it doesn't work with keys producing multiple scan codes per
repeat. It would probably be better to make it work with key codes instead
of scan codes.
> Right shift, left alt, ctrl, and shift give 4 bits per repeat.
Did you really test it? I bet they don't.
> When I have a server where no block read, keyboard, or mouse events occur
> (everything is cached in a huge amount of semiconductor RAM), /dev/random
> depends solely on the network packets. These can be manipulated and their
> leading edge precisely sniffed.
How precisely? Remember that at least on the ia32, you need to guess the timing
down to one CPU clock cycle.
> I think here exists a severe risk of compromise. Am I right?
Even if you were able to predict all entropy sources, to predict the generated
random numbers you would need to invert the cryptographic hash used there.
Have a nice fortnight
--
Martin `MJ' Mares <[email protected]> <[email protected]> http://atrey.karlin.mff.cuni.cz/~mj/
You can't do that in horizontal mode!
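To make the hash argument concrete, here is a toy model of the output path:
extraction hashes the pool and folds the hash back in, so predicting future
outputs from observed ones amounts to inverting the hash. The pool size, the
mixing step, and the 8-byte output width are invented for illustration; only
the overall shape resembles random.c.

/*
 * Toy model of the /dev/random output path: output is a one-way hash over
 * the pool, and the hash is folded back into the pool.  Predicting the next
 * output from earlier ones therefore amounts to inverting the hash.  Pool
 * size, mixing step and 8-byte output width are invented; only the overall
 * shape resembles the kernel code.  Build with: cc toy.c -lcrypto
 */
#include <stdio.h>
#include <string.h>
#include <openssl/sha.h>

static unsigned char pool[512];		/* stand-in for the kernel's entropy pool */

static void mix_in(const void *data, size_t len)
{
	const unsigned char *p = data;
	size_t i;

	/* Toy mixing step; the kernel's mixing function is quite different. */
	for (i = 0; i < len; i++)
		pool[(i * 7) % sizeof(pool)] ^= p[i];
}

static void extract(unsigned char out[8])
{
	unsigned char md[SHA_DIGEST_LENGTH];

	SHA1(pool, sizeof(pool), md);	/* one-way: md does not reveal the pool */
	memcpy(out, md, 8);
	mix_in(md, sizeof(md));		/* fold the hash back into the pool */
}

int main(void)
{
	static const char seed[] = "interrupt timing samples would go here";
	unsigned char out[8];
	int i;

	mix_in(seed, sizeof(seed) - 1);
	extract(out);
	for (i = 0; i < 8; i++)
		printf("%02x", out[i]);
	putchar('\n');
	return 0;
}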
David Schwartz wrote:
> The code does its best to estimate how much actual entropy it is gathering.
A potential weakness. The entropy estimator can be manipulated by
feeding data which looks random to the estimator, but which is in fact
not random at all.
-- Jamie
Date: Mon, 18 Dec 2000 21:38:01 +0100
From: Jamie Lokier <[email protected]>
David Schwartz wrote:
> The code does its best to estimate how much actual entropy it is gathering.
A potential weakness. The entropy estimator can be manipulated by
feeding data which looks random to the estimator, but which is in fact
not random at all.
Yes, absolutely. That's why you have to be careful before you make
changes to the kernel code to feed additional data to the estimator.
*Usually* relying on interrupt timing is safe, but not always. For
example, an adversary can observe, and in some cases control the
arrival of network packets which control the network card's interrupt
timings. Is it enough to be able to predict with cpu-counter
resolution the inputs to the /dev/random pool? Maybe; it depends on how
paranoid you are.
Note that writing to /dev/random does *not* update the entropy estimate,
for this very reason. The assumption is that inputs to the entropy
estimator have to be trusted, and since /dev/random is typically
world-writeable, it is not so trusted.
- Ted
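In code, the untrusted path Ted describes is just an ordinary write(): the data
is mixed into the pool, but no entropy is credited. A minimal sketch:

/*
 * The unprivileged path: anyone can mix data into the pool by writing to
 * /dev/random (where permissions allow it); the data is folded in but no
 * entropy is credited, so even attacker-chosen input does no harm.
 */
#include <fcntl.h>
#include <unistd.h>

int main(void)
{
	static const char junk[] = "not actually random, and that is fine";
	int fd = open("/dev/random", O_WRONLY);

	if (fd < 0)
		return 1;
	if (write(fd, junk, sizeof(junk) - 1) < 0)	/* mixed in, zero bits credited */
		return 1;
	close(fd);
	return 0;
}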
> David Schwartz wrote:
> > The code does its best to estimate how much actual entropy it
> > is gathering.
> A potential weakness. The entropy estimator can be manipulated by
> feeding data which looks random to the estimator, but which is in fact
> not random at all.
> -- Jamie
Sort of, but not really. You are correct to the extent that it's possible
for someone to make the RNG think it has somewhat more actual entropy than
it actually has. However, you can't directly feed seeds into the RNG anyway
without root access.
The process of feeding those seeds into the RNG would inject some actual
entropy at the same time. And so long as the RNG was ever properly seeded,
it will always produce cryptographically secure random numbers no matter
what.
Even if it's not properly seeded, it doesn't take long before the machine
accumulates enough entropy to be cryptographically secure. So there is only
a brief window of vulnerability after the machine is started and before it
has accumulated sufficient entropy.
During that window, the amount of entropy present might be underestimated.
The simple fix is for programs that really need good entropy to be extra
conservative within a few minutes of startup.
DS
Jamie Lokier <[email protected]> writes:
> > A potential weakness. The entropy estimator can be manipulated by
> > feeding data which looks random to the estimator, but which is in fact
> > not random at all.
Ted Ts'o replied:
> Yes, absolutely. That's why you have to be careful before you make
> changes to the kernel code to feed additional data to the estimator.
> *Usually* relying on interrupt timing is safe, but not always. For
> example, an adversary can observe, and in some cases control the
> arrival of network packets which control the network card's interrupt
> timings. Is it enough to be able to predict with cpu-counter
> resolution the inputs to the /dev/random pool? Maybe; it depends on how
> paranoid you are.
I think that for the case of dedicated firewall/IPSec machines, it
_should_ be possible to generate some entropy from network packets,
because this may be the only place where they get any activity (no
keyboard/mouse/disk). Given the fact we are dealing with a router,
there shouldn't be any way one person can control all of the network
traffic to/through/from the router, and if they can you probably have
another security problem entirely.
Maybe add a hook into the ipchains/netfilter code to allow selecting only
traffic from certain interfaces, and discarding "repeat" source and/or
destination addresses or packets arriving less than X ticks apart, just
like we discard repeated keystrokes. The larger X is, the harder it is
to estimate the low-order bits on the timers when a packet arrives.
This would allow you to say "eth0 is my internal network and I'm not
trying to hack my own system, so use IP traffic on that interface to add
entropy to the pool, but not packets that are on port 6699/21/23 or reply
packets". It would probably just be a matter of adding a new flag to a
filter rule to say "use packets that match this rule for entropy", and
then it is up to the user to determine what is safe to use. The fact
that it is user configurable makes it even harder for a cracker to know
what affects the entropy pool.
Cheers, Andreas
--
Andreas Dilger \ "If a man ate a pound of pasta and a pound of antipasto,
\ would they cancel out, leaving him still hungry?"
http://www-mddsp.enel.ucalgary.ca/People/adilger/ -- Dogbert
> This would allow you to say "eth0 is my internal network and I'm not
> trying to hack my own system, so use IP traffic on that interface to add
> entropy to the pool, but not packets that are on port 6699/21/23 or reply
> packets". It would probably just be a matter of adding a new flag to a
> filter rule to say "use packets that match this rule for entropy", and
> then it is up to the user to determine what is safe to use. The fact
> that it is user configurable makes it even harder for a cracker to know
> what affects the entropy pool.
This isn't from the kernel, but works great in userspace:
iptables -N RANDOM
iptables -A INPUT -i eth0 -j RANDOM
iptables -A RANDOM -p tcp --dport 6699 -j <otherchain/rule>
iptables -A RANDOM -p tcp --dport 21 -j <asabove>
iptables -A RANDOM -p tcp --dport 23 -j <ditto,etc>
iptables -A RANDOM -m state --state ! NEW -j <thisisgettingstupidnow>
iptables -A RANDOM -j ULOG --ulog-nlgroup 32
This sends a message down netlink in ULOG format.
ULOG is a userspace logging extension written by Harald Welte, but it's
extensible like you wouldn't believe, so you could easily do some whacky
stuff with it. Or just register your own Netfilter hook and do it all from
kernel land.
ULOG's homepage: http://www.gnumonks.org/gnumonks/projects/project_details?p_id=1
:) d
On Sun, Dec 17, 2000 at 10:50:57PM +0100, Karel Kulhavy wrote:
> I noticed peculiarities in the behaviour of the delta-delta-3 system for
> entropy estimation in the random.c code. When I hold right alt or control, I
> get about 8 bits of entropy per repeat from /dev/random, which is
> overestimated. I think the real entropy is 0 bits because the timing of the
> interrupt is absolutely deterministic. Am I right, or is there any hidden
not absolutely, but we should ignore repeated keys that generate more than
one scancode.
tytso, here's the patch to do it again:
--- linux/drivers/char/random.c	Sun Jul 30 18:01:23 2000
+++ linux-prumpf/drivers/char/random.c	Thu Sep 28 17:07:03 2000
@@ -763,10 +763,15 @@
 
 void add_keyboard_randomness(unsigned char scancode)
 {
-	static unsigned char last_scancode = 0;
-	/* ignore autorepeat (multiple key down w/o key up) */
-	if (scancode != last_scancode) {
-		last_scancode = scancode;
+	static unsigned char last_scancode[2] = { 0, 0 };
+
+	/* ignore autorepeat (multiple key down w/o key up).
+	 * add_keyboard_randomness is called twice for certain AT keyboard
+	 * keys, so we keep a longer history. */
+	if (scancode != last_scancode[0] &&
+	    scancode != last_scancode[1]) {
+		last_scancode[0] = last_scancode[1];
+		last_scancode[1] = scancode;
 		add_timer_randomness(&keyboard_timer_state, scancode);
 	}
 }
If we want to rely solely on the add_timer_randomness checks, we should
remove the autorepeat check completely.
Philipp Rumpf
On Mon, Dec 18, 2000 at 04:33:13PM -0500, Theodore Y. Ts'o wrote:
> Note that writing to /dev/random does *not* update the entropy estimate,
> for this very reason. The assumption is that inputs to the entropy
> estimator have to be trusted, and since /dev/random is typically
> world-writeable, it is not so trusted.
It should not be world-writeable, IMHO. So the only one who can feed entropy
there is root, who should know what (s)he's doing ...
Here (SuSE Linux 7.x), it is 644:
crw-r--r-- 1 root root 1, 8 Dec 17 22:41 /dev/random
crw-r--r-- 1 root root 1, 9 Dec 17 22:41 /dev/urandom
Regards,
--
Kurt Garloff <[email protected]> Eindhoven, NL
GPG key: See mail header, key servers Linux kernel development
SuSE GmbH, Nuernberg, FRG SCSI, Security
[Kurt Garloff]
> It should not be world-writeable, IMHO. So the only one who can feed
> entropy there is root, who should know what (s)he's doing ...
No, it is *good* to allow users to add entropy to the RNG pool, but it
is *bad* to assume that it is in fact entropy.
The beauty of cryptographic hashes is that the user can't *decrease*
the total entropy, even with 'cat /dev/zero > /dev/random'. All he can
do by adding to the pool is *increase* your confidence that you do in
fact have at least the estimated amount of randomness. The more
"untrusted" entropy you feed into the pool, the less it will matter (in
practical terms) if in the future a "trusted" source is compromised.
Peter
Date: Tue, 19 Dec 2000 12:49:48 +0100
From: Kurt Garloff <[email protected]>
On Mon, Dec 18, 2000 at 04:33:13PM -0500, Theodore Y. Ts'o wrote:
> Note that writing to /dev/random does *not* update the entropy estimate,
> for this very reason. The assumption is that inputs to the entropy
> estimator have to be trusted, and since /dev/random is typically
> world-writeable, it is not so trusted.
It should not be world-writeable, IMHO. So the only one who can feed entropy
there is root, who should know what (s)he's doing ...
Here (SuSE Linux 7.x), it is 644:
crw-r--r-- 1 root root 1, 8 Dec 17 22:41 /dev/random
crw-r--r-- 1 root root 1, 9 Dec 17 22:41 /dev/urandom
No, writing to /dev/random does not update the entropy estimate. It
does mix data into the pool, but the mixing algorithm is designed so
that you can do no harm by mixing any data into the pool --- even nasty
data chosen by an attacker. Hence, allowing someone to write into
/dev/random is perfectly safe; it can cause no damage, and might improve
things. That's why /dev/random should be world-writeable.
There is a separate ioctl which requires root privs to atomically mix
data into the pool and update the entropy estimate. That's the
interface which is supposed to be used by trusted daemons which pull
data from various hardware devices, and feed them into /dev/random.
Note that in this case, the trusted daemon is supposed to estimate the
amount of entropy which it is feeding into the system. That's because
the daemon may be able to use much more sophisticated entropy estimation
systems, including ones which may require large amounts of CPU time (for
example, to do FFT's, trial compression of the data, etc.).
- Ted
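The privileged interface Ted refers to is the RNDADDENTROPY ioctl, which takes a
struct rand_pool_info (an entropy credit in bits, a buffer size, and the data).
A minimal sketch of a trusted daemon's feeding step follows; the /dev/hwnoise
device and the conservative 4-bits-per-byte credit are placeholder assumptions,
since a real daemon would estimate its source's entropy itself.

/*
 * Feed a buffer into the pool AND credit the entropy estimate via the
 * privileged RNDADDENTROPY ioctl.  The local struct mirrors the layout of
 * struct rand_pool_info from <linux/random.h>.  /dev/hwnoise and the
 * 4-bits-per-byte credit are placeholder assumptions.
 */
#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/random.h>

int main(void)
{
	struct {
		int entropy_count;	/* bits to credit */
		int buf_size;		/* bytes in buf[] */
		unsigned char buf[64];
	} req;
	int fd;

	fd = open("/dev/hwnoise", O_RDONLY);	/* placeholder noise source */
	if (fd < 0 || read(fd, req.buf, sizeof(req.buf)) != (ssize_t)sizeof(req.buf)) {
		perror("noise source");
		return 1;
	}
	close(fd);

	req.entropy_count = sizeof(req.buf) * 4;	/* assume 4 bits of entropy per byte */
	req.buf_size = sizeof(req.buf);

	fd = open("/dev/random", O_WRONLY);
	if (fd < 0 || ioctl(fd, RNDADDENTROPY, &req) < 0) {
		perror("RNDADDENTROPY");
		return 1;
	}
	close(fd);
	return 0;
}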
Hi!
> On Mon, Dec 18, 2000 at 04:33:13PM -0500, Theodore Y. Ts'o wrote:
> > Note that writing to /dev/random does *not* update the entropy estimate,
> > for this very reason. The assumption is that inputs to the entropy
> > estimator have to be trusted, and since /dev/random is typically
> > world-writeable, it is not so trusted.
>
> It should not be world-writeable, IMHO. So the only one who can feed entropy
> there is root, who should know what (s)he's doing ...
> Here (SuSE Linux 7.x), it is 644:
You actually *want* random people to send entropy into your pool. Just
do not increase counters. That way, entropy can only get better :-).
Pavel
--
The best software in life is free (not shareware)! Pavel
GCM d? s-: !g p?:+ au- a--@ w+ v- C++@ UL+++ L++ N++ E++ W--- M- Y- R+
In article <[email protected]> you wrote:
> A potential weakness. The entropy estimator can be manipulated by
> feeding data which looks random to the estimator, but which is in fact
> not random at all.
That's why feeding randomness is a privileged operation.
Greetings
Bernd
In article <[email protected]> you wrote:
> Even if you were able to predict all entropy sources, to predict the generated
> random numbers you would need to invert the cryptographic hash used there.
If you can predict ALL input to the pool, including the initial boot state,
you can just rerun the PRNG algorithm and get the random numbers (as long as
you can also predict the read accesses to the device).
But that's not the real-world attack. The real-world attack is more about
reducing the randomness to the point where stochastic tests can detect
patterns like unequal distribution or cycles. That will lower the strength of
some algorithms...
Greetings
Bernd
Bernd Eckenfels wrote:
> In article <[email protected]> you wrote:
> > A potential weakness. The entropy estimator can be manipulated by
> > feeding data which looks random to the estimator, but which is in fact
> > not random at all.
>
> That's why feeding randomness is a privileged operation.
I was referring to randomness influenced externally, e.g. network packet
timing, or hard disk timing influenced by the choice of HTTP requests, etc.
-- Jamie