2002-08-18 02:11:27

by Oliver Xymoron

Subject: [PATCH] (0/4) Entropy accounting fixes

I've done an analysis of entropy collection and accounting in current
Linux kernels and found some major weaknesses and bugs. As entropy
accounting is only one part of the security of the random number
device, it's unlikely that these flaws are exploitable, but it makes
sense to fix them nonetheless.

- Broken analysis of entropy distribution
- Spoofable delta model
- Interrupt timing independence
- Ignoring time scale of entropy sources
- Confusion of unpredictable and merely complex sources and trusting the
latter
- Broken pool transfers
- Entropy pool can be overrun with untrusted data

Net effect: a typical box will claim to generate 2-5 _orders of magnitude_
more entropy than it actually does.

Note that entropy accounting is mostly useful for things like the
generation of large public key pairs where the number of bits of
entropy in the key is comparable to the size of the PRNG's internal
state. For most purposes, /dev/urandom is still more than strong
enough to make attacking a cipher directly more productive than
attacking the PRNG.

The following patches against 2.5.31 have been tested on x86, but
should compile elsewhere just fine.

I've tried to cover some of the issues in detail below:

Broken analysis of entropy distribution
---------------------------------------

(I know the topic of entropy is rather poorly understood, so here are a
couple of useful pieces of background for kernel folks:

Cryptanalytic Attacks on Pseudorandom Number Generators
Kelsey, Schneier, Wagner, Hall
http://www.counterpane.com/pseudorandom_number.pdf

Cryptographic Randomness from Air Turbulence in Disk Drives
D. Davis, R. Ihaka, P.R. Fenstermacher
http://world.std.com/~dtd/random/forward.ps)

Mathematically defining entropy

For a probability distribution P of samples K, the entropy is:

E = sum over all K of -P(K) * log2(P(K))

For a uniform distribution of n bits of data, the entropy is
n. Anything other than a uniform distribution has less than n bits of
entropy.
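
As a quick sanity check in Python (a standalone sketch, not part of
the patches): a uniform distribution over four outcomes gives exactly
2 bits, while a skewed one gives less:

from math import log

def entropy(dist):
    # E = sum over all K of -P(K) * log2(P(K))
    return sum(-p * log(p) / log(2) for p in dist if p)

print(entropy([0.25] * 4))            # 2.000000 bits (uniform, n=2)
print(entropy([0.7, 0.1, 0.1, 0.1]))  # ~1.36 bits (skewed)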

Non-Uniform Distribution Of Timing

Unfortunately, our sample source is far from uniform. For starters, each
interrupt has a finite time associated with it - the interrupt latency.
Back-to-back interrupts will result in samples that are spaced by a
fixed interval.

A priori, we might expect a typical interrupt source to behave like a
Poisson process, resulting in a gamma-like distribution of deltas:
zero probability up to some minimum latency, a peak at minimum
latency representing the likelihood of back-to-back interrupts, a
smooth hump around the average interrupt rate, and an infinite tail.

Not surprisingly, this distribution has less entropy in it than a
uniform distribution would. Linux takes the approach of assuming the
distribution is "scale invariant" (which is true for exponential
distributions and approximately true for the tails of gamma
distributions) and that the amount of entropy in a sample is
proportional to the number of bits in a given interrupt delta.

Assuming the interrupt actually has a nice gamma-like distribution
(which is unlikely in practice), then this is indeed true. The
trouble is that Linux assumes that if a delta is 13 bits, it contains
12 bits of actual entropy. A moment of thought will reveal that
binary numbers of the form 1xxxx can contain at most 4 bits of
entropy - it's a tautology that all binary numbers start with 1 when
you take off the leading zeros. This is actually a degenerate case of
Benford's Law (http://mathworld.wolfram.com/BenfordsLaw.html), which
governs the distribution of leading digits in scale invariant
distributions.

What we're concerned with is the entropy contained in digits
following the leading 1, which we can derive with a simple extension
of Benford's Law (and some Python):

from math import log

def log2(x):
    return log(x) / log(2)

def entropy(l):
    s = 0
    for pk in l:
        if pk:
            s = s + (-pk * log2(pk))
    return s

def benford(digit, place=0, base=10):
    # probability of a given digit at a given place under Benford's Law
    if not place:
        s = log(1 + 1.0 / digit)
    else:
        s = 0
        for k in range(base ** (place - 1), base ** place):
            s = s + log(1 + 1.0 / (k * base + digit))
    return s / log(base)

for b in range(3, 16):
    l = []
    for k in range(1, (2 ** (b - 1)) - 1):
        l.append(benford(k, 0, 2 ** (b - 1)))
    print("%2d %.6f" % (b, entropy(l)))

Which gives us:

3 1.018740
4 2.314716
5 3.354736
6 4.238990
7 5.032280
8 5.769212
9 6.468756
10 7.141877
11 7.795288
12 8.433345
13 9.059028
14 9.674477
15 10.281286

As it turns out, our 13-bit number has at most 9 bits of entropy, and
as we'll see in a bit, probably significantly less.

All that said, this is easily dealt with by a lookup table, as
sketched below.
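
For instance, a minimal sketch of such a table (the entries are the
floors of the entropies computed above, and match the benford[] array
in patch 1/4):

# Floored maximum entropy credit, indexed here by the bit length of
# the delta; entries for 3-15 bits come from the table above.
floored = [0, 0, 0, 1, 2, 3, 4, 5, 5, 6, 7, 7, 8, 9, 9, 10]

def credit_bits(delta):
    return floored[min(delta.bit_length(), 15)]

print(credit_bits(0x1000))  # a 13-bit delta is worth at most 9 bits

(Note that patch 1/4 indexes the same table by int_log2 of the delta,
one position lower, which errs further on the conservative side.)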

Interrupt Timing Independence
-----------------------------

The Linux entropy estimate also wrongly assumes independence of
different interrupt sources. While SMP complicates the matter, this
is generally not the case. Low-priority interrupts must wait on
high-priority ones, and back-to-back interrupts on shared lines will
serialize themselves ABABABAB. Further, system-wide CLI, cache
flushes, and the like will skew -all- the timings and cause them to
bunch up in predictable fashion.

Furthermore, all this is observable from userspace in the same way
that worst-case latency is measured.

To protect against back-to-back measurements and userspace
observation, we insist that at least one context switch has occurred
since we last sampled before we trust a sample (see the sketch below).
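
In sketch form, the gate looks like this (mirroring the check in
patch 1/4, with nr_context_switches passed in for illustration):

# Toy version of the context-switch gate: if no context switch has
# happened since the last sample, credit the delta as zero.
last_ctxt = 0

def gated_delta(delta, nr_context_switches):
    global last_ctxt
    if nr_context_switches == last_ctxt:
        delta = 0               # possible busy-waiting or irq flooding
    last_ctxt = nr_context_switches
    return delta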

Questionable Sources and Time Scales
------------------------------------

Due to the vagaries of computer architecture, things like keyboard
and mouse interrupts occur on their respective scanning or serial
clock edges, and are clocked relatively slowly. Worse, devices like
USB keyboards, mice, and disks tend to share interrupts and probably
line up on USB clock boundaries. Even PCI interrupts have a
granularity on the order of 33MHz (or worse, depending on the
particular adapter), which, when timed by a fast processor's 2GHz
clock, makes the low six bits of the timing measurement predictable.

And as far as I can find, no one's tried to make a good model or
estimate of actual keyboard or mouse entropy. Randomness caused by
disk drive platter turbulence has actually been measured: it's on the
order of 100 bits/minute and is well correlated on timescales of
seconds - we're likely overestimating it by a wide margin.

We can deal with this by having each trusted source declare its clock
resolution and removing the extra timing resolution bits when we take
samples (a sketch of the shift computation follows).
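
The shift computation itself is cheap; here's a sketch of what
create_entropy_source() in patch 1/4 does, assuming a cpu_khz-style
clock rate:

# How many low timing bits to discard for a source of a given
# granularity, as computed in create_entropy_source().
def resolution_shift(clock_khz, granularity_khz):
    factor = clock_khz // granularity_khz
    shift = 0
    while factor > 1:
        factor >>= 1
        shift += 1
    return shift

# a 2GHz TSC timing a 33MHz PCI source:
print(resolution_shift(2000000, 33000))  # -> 5 low bits dropped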

Trusting Predictable or Measurable Sources
------------------------------------------

What entropy can be measured from disk timings is very often leaked
by immediately relaying data to web, shell, or X clients. Furthermore,
patterns of drive head movement can be remotely controlled by clients
talking to file and web servers. Thus, while disk timing might be an
attractive source of entropy, it can't be used in a typical server
environment without great caution.

Complexity of analyzing timing sources should not be confused with
unpredictability. Disk caching has no entropy; disk head movement has
entropy only to the extent that it creates turbulence. Network
traffic is potentially completely observable.

(Incidentally, tricks like Matt Blaze's truerand clock drift
technique probably don't work on most PCs these days as the
"realtime" clock source is often derived directly from the
bus/PCI/memory/CPU clock.)

If we're careful, we can still use these timings to seed our RNG, as
long as we don't account them as entropy.

Batching
--------

Samples to be mixed are batched into a 256-element ring
buffer. Because this ring isn't allowed to wrap, it's dangerous to
store untrusted samples, as they might flood out trusted ones.

We can allow untrusted data to be safely added to the pool by XORing
new samples in rather than copying and allowing the pool to wrap
around. As non-random data won't be correlated with random data, this
mixing won't destroy any entropy.
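
A toy version of the wrap-around batching (patch 4/4 implements this
in batch_entropy_store()):

# XORing new samples over old ones lets untrusted data wrap the
# ring without flooding out the trusted samples already there.
pool = [0] * 256          # size must be a power of 2
pos = 0

def batch_store(val):
    global pos
    pool[pos] ^= val                    # mix, don't overwrite
    pos = (pos + 1) & (len(pool) - 1)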

Broken Pool Transfers
---------------------

Worst of all, the accounting of entropy transfers between the
primary and secondary pools has been broken for quite some time and
produces thousands of bits of entropy out of thin air.
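
In toy numbers (a sketch, not the exact kernel constants): the old
code credits the secondary pool by its remaining capacity rather than
by what was actually extracted from the primary:

# Old transfer accounting, simplified: nwords is based on the
# secondary pool's remaining *capacity*, and the credit ignores how
# much entropy the primary actually held (say, 8 bits).
poolwords, entropy_count = 128, 0    # secondary: 128 words, empty
nwords = poolwords - entropy_count // 32
print(nwords * 32)                   # -> 4096 bits credited from thin air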

--
"Love the dolphins," she advised him. "Write by W.A.S.T.E.."


2002-08-18 02:19:15

by Oliver Xymoron

Subject: [PATCH] (1/4) Entropy accounting fixes

- change API to allow timing granularity, trusted vs untrusted sources
- smarter logarithm
- removal of static state arrays and source type methods
- use Benford's law to estimate entropy of sample
- use nr_context_switches to avoid polling and back-to-back interrupt attacks
- fix major bug in pool transfer accounting
- update comments

diff -ur a/drivers/char/random.c b/drivers/char/random.c
--- a/drivers/char/random.c 2002-07-20 14:11:07.000000000 -0500
+++ b/drivers/char/random.c 2002-08-17 19:47:54.000000000 -0500
@@ -1,7 +1,8 @@
/*
* random.c -- A strong random number generator
*
- * Version 1.89, last modified 19-Sep-99
+ * Version 2.0, last modified 8-Aug-2002
+ * by Oliver Xymoron <[email protected]>
*
* Copyright Theodore Ts'o, 1994, 1995, 1996, 1997, 1998, 1999. All
* rights reserved.
@@ -116,8 +117,9 @@
* The /dev/urandom device does not have this limit, and will return
* as many bytes as are requested. As more and more random bytes are
* requested without giving time for the entropy pool to recharge,
- * this will result in random numbers that are merely cryptographically
- * strong. For many applications, however, this is acceptable.
+ * this will result in random numbers that are merely
+ * cryptographically strong. For almost all applications other than
+ * generation of large public/private key pairs, this is acceptable.
*
* Exported interfaces ---- input
* ==============================
@@ -125,30 +127,24 @@
* The current exported interfaces for gathering environmental noise
* from the devices are:
*
- * void add_keyboard_randomness(unsigned char scancode);
- * void add_mouse_randomness(__u32 mouse_data);
- * void add_interrupt_randomness(int irq);
- * void add_blkdev_randomness(int irq);
- *
- * add_keyboard_randomness() uses the inter-keypress timing, as well as the
- * scancode as random inputs into the "entropy pool".
- *
- * add_mouse_randomness() uses the mouse interrupt timing, as well as
- * the reported position of the mouse from the hardware.
- *
- * add_interrupt_randomness() uses the inter-interrupt timing as random
- * inputs to the entropy pool. Note that not all interrupts are good
- * sources of randomness! For example, the timer interrupts is not a
- * good choice, because the periodicity of the interrupts is too
- * regular, and hence predictable to an attacker. Disk interrupts are
- * a better measure, since the timing of the disk interrupts are more
- * unpredictable.
- *
- * add_blkdev_randomness() times the finishing time of block requests.
- *
- * All of these routines try to estimate how many bits of randomness a
- * particular randomness source. They do this by keeping track of the
- * first and second order deltas of the event timings.
+ * void *create_entropy_source(int granularity_khz);
+ * void free_entropy_source(void *src);
+ * void add_timing_entropy(void *src, unsigned datum);
+ *
+ * create_entropy_source() returns a handle for future calls to
+ * add_timing_entropy. The granularity_khz parameter is used to
+ * describe the intrinsic timing granularity of the source, eg 33000
+ * for a fast PCI device or 9 for a 9600bps serial device.
+ *
+ * Untrusted sources can simply call add_timing_entropy with a null
+ * handle. Note that network timings cannot be trusted, nor can disk
+ * timings if they're immediately fed to the network! We'll assume the
+ * user has a modern ssh implementation that doesn't leak local
+ * keyboard and mouse timings.
+ *
+ * add_timing_entropy() mixes timing information and the given datum
+ * into the pool after making initial checks for randomness and
+ * estimating the number of usable entropy bits.
*
* Ensuring unpredictability at system startup
* ============================================
@@ -253,6 +249,8 @@
#include <linux/init.h>
#include <linux/fs.h>
#include <linux/tqueue.h>
+#include <linux/kernel_stat.h>
+#include <linux/bitops.h>

#include <asm/processor.h>
#include <asm/uaccess.h>
@@ -430,41 +428,19 @@
#endif

/*
- * More asm magic....
- *
* For entropy estimation, we need to do an integral base 2
* logarithm.
- *
- * Note the "12bits" suffix - this is used for numbers between
- * 0 and 4095 only. This allows a few shortcuts.
*/
-#if 0 /* Slow but clear version */
-static inline __u32 int_ln_12bits(__u32 word)
-{
- __u32 nbits = 0;
-
- while (word >>= 1)
- nbits++;
- return nbits;
-}
-#else /* Faster (more clever) version, courtesy Colin Plumb */
-static inline __u32 int_ln_12bits(__u32 word)
+static inline __u32 int_log2_16bits(__u32 word)
{
/* Smear msbit right to make an n-bit mask */
word |= word >> 8;
word |= word >> 4;
word |= word >> 2;
word |= word >> 1;
- /* Remove one bit to make this a logarithm */
- word >>= 1;
- /* Count the bits set in the word */
- word -= (word >> 1) & 0x555;
- word = (word & 0x333) + ((word >> 2) & 0x333);
- word += (word >> 4);
- word += (word >> 8);
- return word & 15;
+
+ return hweight16(word)-1;
}
-#endif

#if 0
#define DEBUG_ENT(fmt, arg...) printk(KERN_DEBUG "random: " fmt, ## arg)
@@ -546,7 +522,7 @@
}

/*
- * This function adds a byte into the entropy "pool". It does not
+ * This function adds a word into the entropy "pool". It does not
* update the entropy estimate. The caller should call
* credit_entropy_store if this is appropriate.
*
@@ -646,11 +622,12 @@
}

/*
- * Changes to the entropy data is put into a queue rather than being added to
- * the entropy counts directly. This is presumably to avoid doing heavy
- * hashing calculations during an interrupt in add_timer_randomness().
+ * Changes to the entropy data is put into a queue rather than being
+ * added to the entropy counts directly. This is to avoid doing heavy
+ * hashing calculations during an interrupt in add_timing_entropy().
* Instead, the entropy is only added to the pool once per timer tick.
*/
+
void batch_entropy_store(u32 a, u32 b, int num)
{
int new;
@@ -705,42 +682,64 @@
*
*********************************************************************/

-/* There is one of these per entropy source */
-struct timer_rand_state {
- __u32 last_time;
- __s32 last_delta,last_delta2;
- int dont_count_entropy:1;
+#if defined (__i386__) || defined (__x86_64__)
+#define CLOCK_KHZ cpu_khz
+#else
+#define CLOCK_KHZ HZ/1000
+#endif
+
+struct entropy_source
+{
+ int shift;
+ __u32 time, delta;
};

-static struct timer_rand_state keyboard_timer_state;
-static struct timer_rand_state mouse_timer_state;
-static struct timer_rand_state extract_timer_state;
-static struct timer_rand_state *irq_timer_state[NR_IRQS];
-static struct timer_rand_state *blkdev_timer_state[MAX_BLKDEV];
+void *create_entropy_source(int granularity_khz)
+{
+ int factor;
+ struct entropy_source *es;
+
+ es = kmalloc(sizeof(struct entropy_source), GFP_KERNEL);
+
+ if(!es) return 0; /* untrusted */
+
+ /* figure out how many bits of clock resolution we
+ * have to throw out given the source granularity */
+
+ factor=CLOCK_KHZ/granularity_khz;
+
+ /* count bits in factor */
+ es->shift=0;
+ while(factor>>=1) es->shift++;
+
+ return (void *)es;
+}
+
+void free_entropy_source(void *src)
+{
+ kfree(src);
+}

/*
- * This function adds entropy to the entropy "pool" by using timing
- * delays. It uses the timer_rand_state structure to make an estimate
- * of how many bits of entropy this call has added to the pool.
- *
- * The number "num" is also added to the pool - it should somehow describe
- * the type of event which just happened. This is currently 0-255 for
- * keyboard scan codes, and 256 upwards for interrupts.
- * On the i386, this is assumed to be at most 16 bits, and the high bits
- * are used for a high-resolution timer.
- *
+ * estimation of entropy contained in a n-bit delta from an exponential
+ * distribution, derived from Benford's Law (rounded down)
*/
-static void add_timer_randomness(struct timer_rand_state *state, unsigned num)
+static int benford[16]={0,0,0,1,2,3,4,5,5,6,7,7,8,9,9,10};
+static int last_ctxt=0;
+
+void add_timing_entropy(void *src, unsigned datum)
{
- __u32 time;
- __s32 delta, delta2, delta3;
- int entropy = 0;
+ struct entropy_source *es=(struct entropy_source *)src;
+ unsigned long ctxt;
+ __u32 time, delta;
+ __s32 delta2;
+ int bits = 0;

#if defined (__i386__) || defined (__x86_64__)
if (cpu_has_tsc) {
__u32 high;
rdtsc(time, high);
- num ^= high;
+ datum ^= high;
} else {
time = jiffies;
}
@@ -748,80 +747,26 @@
time = jiffies;
#endif

- /*
- * Calculate number of bits of randomness we probably added.
- * We take into account the first, second and third-order deltas
- * in order to make our estimate.
- */
- if (!state->dont_count_entropy) {
- delta = time - state->last_time;
- state->last_time = time;
-
- delta2 = delta - state->last_delta;
- state->last_delta = delta;
-
- delta3 = delta2 - state->last_delta2;
- state->last_delta2 = delta2;
-
- if (delta < 0)
- delta = -delta;
- if (delta2 < 0)
- delta2 = -delta2;
- if (delta3 < 0)
- delta3 = -delta3;
- if (delta > delta2)
- delta = delta2;
- if (delta > delta3)
- delta = delta3;
-
- /*
- * delta is now minimum absolute delta.
- * Round down by 1 bit on general principles,
- * and limit entropy entimate to 12 bits.
- */
- delta >>= 1;
- delta &= (1 << 12) - 1;
-
- entropy = int_ln_12bits(delta);
- }
- batch_entropy_store(num, time, entropy);
-}
-
-void add_keyboard_randomness(unsigned char scancode)
-{
- static unsigned char last_scancode;
- /* ignore autorepeat (multiple key down w/o key up) */
- if (scancode != last_scancode) {
- last_scancode = scancode;
- add_timer_randomness(&keyboard_timer_state, scancode);
- }
-}
-
-void add_mouse_randomness(__u32 mouse_data)
-{
- add_timer_randomness(&mouse_timer_state, mouse_data);
-}
-
-void add_interrupt_randomness(int irq)
-{
- if (irq >= NR_IRQS || irq_timer_state[irq] == 0)
- return;
-
- add_timer_randomness(irq_timer_state[irq], 0x100+irq);
-}
-
-void add_blkdev_randomness(int major)
-{
- if (major >= MAX_BLKDEV)
- return;
-
- if (blkdev_timer_state[major] == 0) {
- rand_initialize_blkdev(major, GFP_ATOMIC);
- if (blkdev_timer_state[major] == 0)
- return;
+ if(es) /* trusted */
+ {
+ /* Check for obvious periodicity in sources */
+ delta = time - es->time;
+ delta2 = delta - es->delta;
+ es->time = time;
+ es->delta = delta;
+ if (delta2 < 0) delta2 = -delta2;
+ if (delta2 < delta) delta=delta2;
+
+ /* Check for possible busy waiting or irq flooding */
+ ctxt=nr_context_switches();
+ if (ctxt == last_ctxt) delta=0;
+ last_ctxt=ctxt;
+
+ /* Calculate entropy distribution */
+ delta>>=es->shift;
+ bits=benford[int_log2_16bits(delta & 0xffff)];
}
-
- add_timer_randomness(blkdev_timer_state[major], 0x200+major);
+ batch_entropy_store(datum, time, bits);
}

/******************************************************************
@@ -1239,18 +1184,18 @@

if (r->entropy_count < nbytes * 8 &&
r->entropy_count < r->poolinfo.POOLBITS) {
- int nwords = min_t(int,
- r->poolinfo.poolwords - r->entropy_count/32,
- sizeof(tmp) / 4);
+ int bytes = min_t(int,
+ nbytes - r->entropy_count/8,
+ sizeof(tmp));

- DEBUG_ENT("xfer %d from primary to %s (have %d, need %d)\n",
- nwords * 32,
+ DEBUG_ENT("xfer %d to %s (have %d, need %d)\n",
+ bytes * 8,
r == sec_random_state ? "secondary" : "unknown",
r->entropy_count, nbytes * 8);

- extract_entropy(random_state, tmp, nwords * 4, 0);
- add_entropy_words(r, tmp, nwords);
- credit_entropy_store(r, nwords * 32);
+ extract_entropy(random_state, tmp, bytes, 0);
+ add_entropy_words(r, tmp, (bytes+3)/4);
+ credit_entropy_store(r, bytes*8);
}
if (r->extract_count > 1024) {
DEBUG_ENT("reseeding %s with %d from primary\n",
@@ -1282,8 +1227,6 @@
__u32 tmp[TMP_BUF_SIZE];
__u32 x;

- add_timer_randomness(&extract_timer_state, nbytes);
-
/* Redundant, but just in case... */
if (r->entropy_count > r->poolinfo.POOLBITS)
r->entropy_count = r->poolinfo.POOLBITS;
@@ -1366,7 +1309,6 @@
nbytes -= i;
buf += i;
ret += i;
- add_timer_randomness(&extract_timer_state, nbytes);
}

/* Wipe data just returned from memory */
@@ -1429,8 +1371,6 @@

void __init rand_initialize(void)
{
- int i;
-
if (create_entropy_store(DEFAULT_POOL_SIZE, &random_state))
return; /* Error, return */
if (batch_entropy_init(BATCH_ENTROPY_SIZE, random_state))
@@ -1443,53 +1383,8 @@
#ifdef CONFIG_SYSCTL
sysctl_init_random(random_state);
#endif
- for (i = 0; i < NR_IRQS; i++)
- irq_timer_state[i] = NULL;
- for (i = 0; i < MAX_BLKDEV; i++)
- blkdev_timer_state[i] = NULL;
- memset(&keyboard_timer_state, 0, sizeof(struct timer_rand_state));
- memset(&mouse_timer_state, 0, sizeof(struct timer_rand_state));
- memset(&extract_timer_state, 0, sizeof(struct timer_rand_state));
- extract_timer_state.dont_count_entropy = 1;
}

-void rand_initialize_irq(int irq)
-{
- struct timer_rand_state *state;
-
- if (irq >= NR_IRQS || irq_timer_state[irq])
- return;
-
- /*
- * If kmalloc returns null, we just won't use that entropy
- * source.
- */
- state = kmalloc(sizeof(struct timer_rand_state), GFP_KERNEL);
- if (state) {
- memset(state, 0, sizeof(struct timer_rand_state));
- irq_timer_state[irq] = state;
- }
-}
-
-void rand_initialize_blkdev(int major, int mode)
-{
- struct timer_rand_state *state;
-
- if (major >= MAX_BLKDEV || blkdev_timer_state[major])
- return;
-
- /*
- * If kmalloc returns null, we just won't use that entropy
- * source.
- */
- state = kmalloc(sizeof(struct timer_rand_state), mode);
- if (state) {
- memset(state, 0, sizeof(struct timer_rand_state));
- blkdev_timer_state[major] = state;
- }
-}
-
-
static ssize_t
random_read(struct file * file, char * buf, size_t nbytes, loff_t *ppos)
{
@@ -1520,6 +1415,11 @@
schedule();
continue;
}
+
+ DEBUG_ENT("extracting %d bits, p: %d s: %d\n",
+ n*8, random_state->entropy_count,
+ sec_random_state->entropy_count);
+
n = extract_entropy(sec_random_state, buf, n,
EXTRACT_ENTROPY_USER |
EXTRACT_ENTROPY_SECONDARY);
@@ -2277,10 +2177,9 @@



-EXPORT_SYMBOL(add_keyboard_randomness);
-EXPORT_SYMBOL(add_mouse_randomness);
-EXPORT_SYMBOL(add_interrupt_randomness);
-EXPORT_SYMBOL(add_blkdev_randomness);
+EXPORT_SYMBOL(create_entropy_source);
+EXPORT_SYMBOL(free_entropy_source);
+EXPORT_SYMBOL(add_timing_entropy);
EXPORT_SYMBOL(batch_entropy_store);
EXPORT_SYMBOL(generate_random_uuid);

diff -ur a/include/linux/random.h b/include/linux/random.h
--- a/include/linux/random.h 2002-07-20 14:11:18.000000000 -0500
+++ b/include/linux/random.h 2002-08-17 19:34:37.000000000 -0500
@@ -43,15 +43,11 @@
#ifdef __KERNEL__

extern void rand_initialize(void);
-extern void rand_initialize_irq(int irq);
-extern void rand_initialize_blkdev(int irq, int mode);

extern void batch_entropy_store(u32 a, u32 b, int num);
-
-extern void add_keyboard_randomness(unsigned char scancode);
-extern void add_mouse_randomness(__u32 mouse_data);
-extern void add_interrupt_randomness(int irq);
-extern void add_blkdev_randomness(int major);
+extern void *create_entropy_source(int granularity_khz);
+extern void free_entropy_source(void *src);
+extern void add_timing_entropy(void *src, unsigned datum);

extern void get_random_bytes(void *buf, int nbytes);
void generate_random_uuid(unsigned char uuid_out[16]);



--
"Love the dolphins," she advised him. "Write by W.A.S.T.E.."

2002-08-18 02:22:45

by Linus Torvalds

Subject: Re: [PATCH] (0/4) Entropy accounting fixes


On Sat, 17 Aug 2002, Oliver Xymoron wrote:
>
> Net effect: a typical box will claim to generate 2-5 _orders of magnitude_
> more entropy than it actually does.

On the other hand, if you are _too_ anal you won't consider _anything_
"truly random", and /dev/random becomes practically useless on things that
don't have special randomness hardware.

To me it sounds from your description that you may well be on the edge of
"too anal". Real life _has_ to be taken into account, and not accepting
entropy because of theoretical issues is _not_ a good idea.

Quite frankly, I'd rather have a usable /dev/random than one that runs out
so quickly that it's unreasonable to use it for things like generating
4096-bit host keys for sshd etc.

In particular, if a machine needs to generate a strong random number, and
/dev/random cannot give that more than once per day because it refuses to
use things like bits from the TSC on network packets, then /dev/random is
no longer practically useful.

Theory is theory, practice is practice. And theory should be used to
_analyze_ practice, but should never EVER be the overriding concern.

So please also do a writeup on whether your patches are _practical_. I
will not apply them otherwise.

Linus

2002-08-18 02:28:12

by Oliver Xymoron

Subject: [PATCH] (4/4) entropy batching update

This patch lets the entropy batching pool safely wrap around, allowing
untrusted samples to be mixed in without risk of flooding out trusted
samples.

diff -ur a/drivers/char/random.c b/drivers/char/random.c
--- a/drivers/char/random.c 2002-08-17 19:55:31.000000000 -0500
+++ b/drivers/char/random.c 2002-08-17 19:55:38.000000000 -0500
@@ -596,25 +596,19 @@
*
**********************************************************************/

-static __u32 *batch_entropy_pool;
-static int *batch_entropy_credit;
-static int batch_max;
-static int batch_head, batch_tail;
+static __u32 *batch_entropy_pool=0;
+static int batch_max, batch_pos, batch_credit, batch_samples;
static struct tq_struct batch_tqueue;
static void batch_entropy_process(void *private_);

/* note: the size must be a power of 2 */
static int __init batch_entropy_init(int size, struct entropy_store *r)
{
- batch_entropy_pool = kmalloc(2*size*sizeof(__u32), GFP_KERNEL);
+ batch_entropy_pool = kmalloc(size*sizeof(__u32), GFP_KERNEL);
if (!batch_entropy_pool)
return -1;
- batch_entropy_credit =kmalloc(size*sizeof(int), GFP_KERNEL);
- if (!batch_entropy_credit) {
- kfree(batch_entropy_pool);
- return -1;
- }
- batch_head = batch_tail = 0;
+
+ batch_pos = batch_credit = batch_samples = 0;
batch_max = size;
batch_tqueue.routine = batch_entropy_process;
batch_tqueue.data = r;
@@ -622,57 +616,61 @@
}

/*
- * Changes to the entropy data is put into a queue rather than being
+ * Changes to the entropy data are put into a queue rather than being
* added to the entropy counts directly. This is to avoid doing heavy
* hashing calculations during an interrupt in add_timing_entropy().
* Instead, the entropy is only added to the pool once per timer tick.
+ *
+ * The batch pool intentionally allows wrap-around, to protect against
+ * flooding of untrusted data. Non-random data will not correlate with
+ * random data and can be safely XORed over existing data.
*/

-void batch_entropy_store(u32 a, u32 b, int num)
+void batch_entropy_store(u32 val, int bits)
{
- int new;
-
if (!batch_max)
return;

- batch_entropy_pool[2*batch_head] = a;
- batch_entropy_pool[(2*batch_head) + 1] = b;
- batch_entropy_credit[batch_head] = num;
-
- new = (batch_head+1) & (batch_max-1);
- if (new != batch_tail) {
- queue_task(&batch_tqueue, &tq_timer);
- batch_head = new;
- } else {
- DEBUG_ENT("batch entropy buffer full\n");
- }
+ batch_entropy_pool[batch_pos] ^= val;
+ batch_credit+=bits;
+ batch_samples++;
+ batch_pos = (batch_pos+1) & (batch_max-1);
+
+ queue_task(&batch_tqueue, &tq_timer);
}

/*
- * Flush out the accumulated entropy operations, adding entropy to the passed
- * store (normally random_state). If that store has enough entropy, alternate
- * between randomizing the data of the primary and secondary stores.
+ * Flush out the accumulated entropy operations, adding entropy to the
+ * passed store (normally random_state). Alternate between randomizing
+ * the data of the primary and secondary stores.
*/
static void batch_entropy_process(void *private_)
{
- struct entropy_store *r = (struct entropy_store *) private_, *p;
- int max_entropy = r->poolinfo.POOLBITS;
-
+ struct entropy_store *r = (struct entropy_store *) private_;
+ int samples, credit;
+
if (!batch_max)
return;

- p = r;
- while (batch_head != batch_tail) {
- if (r->entropy_count >= max_entropy) {
- r = (r == sec_random_state) ? random_state :
- sec_random_state;
- max_entropy = r->poolinfo.POOLBITS;
- }
- add_entropy_words(r, batch_entropy_pool + 2*batch_tail, 2);
- credit_entropy_store(r, batch_entropy_credit[batch_tail]);
- batch_tail = (batch_tail+1) & (batch_max-1);
+ /* switch pools if current full */
+ if (r->entropy_count >= r->poolinfo.POOLBITS) {
+ r = (r == sec_random_state) ?
+ random_state : sec_random_state;
}
- if (p->entropy_count >= random_read_wakeup_thresh)
+
+ credit=batch_credit;
+ samples=batch_samples;
+ batch_pos = batch_credit = batch_samples = 0;
+
+ /* Don't allow more credit BITS > pool WORDS */
+ if(credit > batch_max) credit=batch_max;
+ /* Check for pool wrap-around */
+ if(samples > batch_max) samples=batch_max;
+
+ add_entropy_words(r, batch_entropy_pool, samples);
+ credit_entropy_store(r, credit);
+
+ if (r->entropy_count >= random_read_wakeup_thresh)
wake_up_interruptible(&random_read_wait);
}

@@ -766,7 +764,7 @@
delta>>=es->shift;
bits=benford[int_log2_16bits(delta & 0xffff)];
}
- batch_entropy_store(datum, time, bits);
+ batch_entropy_store(datum^time, bits);
}

/******************************************************************
diff -ur a/include/linux/random.h b/include/linux/random.h
--- a/include/linux/random.h 2002-08-17 19:55:31.000000000 -0500
+++ b/include/linux/random.h 2002-08-17 19:55:38.000000000 -0500
@@ -44,7 +44,7 @@

extern void rand_initialize(void);

-extern void batch_entropy_store(u32 a, u32 b, int num);
+extern void batch_entropy_store(u32 val, int bits);
extern void *create_entropy_source(int granularity_khz);
extern void free_entropy_source(void *src);
extern void add_timing_entropy(void *src, unsigned datum);

--
"Love the dolphins," she advised him. "Write by W.A.S.T.E.."

2002-08-18 02:23:08

by Oliver Xymoron

Subject: [PATCH] (2/4) Update input drivers

This moves the existing safe users of randomness to the new API. Note
that just about all input devices have a timing granularity around 1kHz.

diff -ur a/drivers/acorn/char/mouse_ps2.c b/drivers/acorn/char/mouse_ps2.c
--- a/drivers/acorn/char/mouse_ps2.c 2002-08-17 00:30:02.000000000 -0500
+++ b/drivers/acorn/char/mouse_ps2.c 2002-08-17 01:00:22.000000000 -0500
@@ -34,6 +34,7 @@
static int aux_count = 0;
/* used when we send commands to the mouse that expect an ACK. */
static unsigned char mouse_reply_expected = 0;
+static void *entropy;

#define MAX_RETRIES 60 /* some aux operations take long time*/

@@ -107,7 +108,7 @@
mouse_reply_expected = 0;
}

- add_mouse_randomness(val);
+ add_timing_entropy(entropy, val);
if (aux_count) {
int head = queue->head;

@@ -271,6 +272,8 @@
iomd_writeb(0, IOMD_MSECTL);
iomd_writeb(8, IOMD_MSECTL);

+ entropy = create_entropy_source(1);
+
if (misc_register(&psaux_mouse))
return -ENODEV;
queue = (struct aux_queue *) kmalloc(sizeof(*queue), GFP_KERNEL);
diff -ur a/drivers/block/DAC960.c b/drivers/block/DAC960.c
--- a/drivers/block/DAC960.c 2002-08-17 00:29:59.000000000 -0500
+++ b/drivers/block/DAC960.c 2002-08-17 01:00:22.000000000 -0500
@@ -3036,7 +3036,7 @@
complete(Command->Completion);
Command->Completion = NULL;
}
- add_blkdev_randomness(DAC960_MAJOR + Controller->ControllerNumber);
+ add_timing_entropy(0, DAC960_MAJOR + Controller->ControllerNumber);
}
else if ((CommandStatus == DAC960_V1_IrrecoverableDataError ||
CommandStatus == DAC960_V1_BadDataEncountered) &&
@@ -4142,7 +4142,7 @@
complete(Command->Completion);
Command->Completion = NULL;
}
- add_blkdev_randomness(DAC960_MAJOR + Controller->ControllerNumber);
+ add_timing_entropy(0, DAC960_MAJOR + Controller->ControllerNumber);
}
else if (Command->V2.RequestSense.SenseKey
== DAC960_SenseKey_MediumError &&
diff -ur a/drivers/block/floppy.c b/drivers/block/floppy.c
--- a/drivers/block/floppy.c 2002-08-17 00:30:02.000000000 -0500
+++ b/drivers/block/floppy.c 2002-08-17 01:00:22.000000000 -0500
@@ -2294,7 +2294,7 @@

if (end_that_request_first(req, uptodate, req->hard_cur_sectors))
return;
- add_blkdev_randomness(major(dev));
+ add_timing_entropy(0, major(dev));
floppy_off(DEVICE_NR(dev));
blkdev_dequeue_request(req);
end_that_request_last(req);
diff -ur a/drivers/char/busmouse.c b/drivers/char/busmouse.c
--- a/drivers/char/busmouse.c 2002-07-20 14:11:11.000000000 -0500
+++ b/drivers/char/busmouse.c 2002-08-17 01:00:22.000000000 -0500
@@ -47,6 +47,7 @@
char ready;
int dxpos;
int dypos;
+ void *entropy;
};

#define NR_MICE 15
@@ -84,7 +85,7 @@
changed = (dx != 0 || dy != 0 || mse->buttons != buttons);

if (changed) {
- add_mouse_randomness((buttons << 16) + (dy << 8) + dx);
+ add_timing_entropy(mse->entropy, (buttons << 16) + (dy << 8) + dx);

mse->buttons = buttons;
mse->dxpos += dx;
@@ -387,6 +388,8 @@
mse->lock = (spinlock_t)SPIN_LOCK_UNLOCKED;
init_waitqueue_head(&mse->wait);

+ mse->entropy = create_entropy_source(1);
+
busmouse_data[msedev] = mse;

ret = misc_register(&mse->miscdev);
@@ -419,13 +422,15 @@
}

down(&mouse_sem);
-
+
if (!busmouse_data[mousedev]) {
printk(KERN_WARNING "busmouse: trying to free free mouse"
" on mousedev %d\n", mousedev);
goto fail;
}

+ free_entropy_source(busmouse_data[mousedev]->entropy);
+
if (busmouse_data[mousedev]->active) {
printk(KERN_ERR "busmouse: trying to free active mouse"
" on mousedev %d\n", mousedev);
diff -ur a/drivers/char/hp_psaux.c b/drivers/char/hp_psaux.c
--- a/drivers/char/hp_psaux.c 2002-07-20 14:12:22.000000000 -0500
+++ b/drivers/char/hp_psaux.c 2002-08-17 01:00:22.000000000 -0500
@@ -182,6 +182,8 @@
static void lasi_ps2_init_hw(void)
{
++inited;
+
+ entropy = create_entropy_source(1);
}


@@ -205,6 +207,7 @@
static spinlock_t kbd_controller_lock = SPIN_LOCK_UNLOCKED;
static unsigned char mouse_reply_expected;
static int aux_count;
+static void *entropy;

static int fasync_aux(int fd, struct file *filp, int on)
{
@@ -233,7 +236,7 @@
return;
}

- add_mouse_randomness(scancode);
+ add_timing_entropy(entropy, scancode);
if (aux_count) {
int head = queue->head;

@@ -509,6 +512,8 @@
if (!queue)
return -ENOMEM;

+ entropy = create_entropy_source(1);
+
memset(queue, 0, sizeof(*queue));
queue->head = queue->tail = 0;
init_waitqueue_head(&queue->proc_list);
diff -ur a/drivers/char/keyboard.c b/drivers/char/keyboard.c
--- a/drivers/char/keyboard.c 2002-07-20 14:11:14.000000000 -0500
+++ b/drivers/char/keyboard.c 2002-08-17 01:00:22.000000000 -0500
@@ -153,6 +153,7 @@
pm_callback pm_kbd_request_override = NULL;
typedef void (pm_kbd_func) (void);
static struct pm_dev *pm_kbd;
+static void *entropy;

static unsigned char ledstate = 0xff; /* undefined */
static unsigned char ledioctl;
@@ -803,7 +804,7 @@
char raw_mode;

pm_access(pm_kbd);
- add_keyboard_randomness(scancode | up_flag);
+ add_timing_entropy(entropy, scancode | up_flag);

tty = vc->vc_tty;

@@ -935,6 +936,8 @@
int i;
struct kbd_struct kbd0;

+ entropy = create_entropy_source(1);
+
kbd0.ledflagstate = kbd0.default_ledflagstate = KBD_DEFLEDS;
kbd0.ledmode = LED_SHOW_FLAGS;
kbd0.lockstate = KBD_DEFLOCK;
diff -ur a/drivers/char/pc_keyb.c b/drivers/char/pc_keyb.c
--- a/drivers/char/pc_keyb.c 2002-07-20 14:11:15.000000000 -0500
+++ b/drivers/char/pc_keyb.c 2002-08-17 01:00:22.000000000 -0500
@@ -76,7 +76,7 @@
static volatile unsigned char reply_expected;
static volatile unsigned char acknowledge;
static volatile unsigned char resend;
-
+static void *entropy;

#if defined CONFIG_PSMOUSE
/*
@@ -451,7 +451,7 @@
}

prev_code = scancode;
- add_mouse_randomness(scancode);
+ add_timing_entropy(entropy, scancode);
spin_lock_irqsave(&aux_count_lock, flags);
if ( aux_count ) {
int head = queue->head;
@@ -931,6 +931,7 @@

#if defined CONFIG_PSMOUSE
psaux_init();
+ entropy = create_entropy_source(1);
#endif

kbd_rate = pckbd_rate;
diff -ur a/drivers/char/qtronix.c b/drivers/char/qtronix.c
--- a/drivers/char/qtronix.c 2002-07-20 14:11:13.000000000 -0500
+++ b/drivers/char/qtronix.c 2002-08-17 01:00:22.000000000 -0500
@@ -93,6 +93,7 @@
struct cir_port *cir;
static unsigned char kbdbytes[5];
static unsigned char cir_data[32]; /* we only need 16 chars */
+static void *entropy;

static void kbd_int_handler(int irq, void *dev_id, struct pt_regs *regs);
static int handle_data(unsigned char *p_data);
@@ -153,6 +154,7 @@
cir->port, IT8172_CIR0_IRQ);
}
#ifdef CONFIG_PSMOUSE
+ entropy = create_entropy_source(1);
psaux_init();
#endif
}
@@ -441,7 +443,7 @@
return;
}

- add_mouse_randomness(scancode);
+ add_timing_entropy(entropy, scancode);
if (aux_count) {
int head = queue->head;

diff -ur a/drivers/ide/ide.c b/drivers/ide/ide.c
--- a/drivers/ide/ide.c 2002-08-17 00:30:02.000000000 -0500
+++ b/drivers/ide/ide.c 2002-08-17 01:00:22.000000000 -0500
@@ -132,7 +132,7 @@
}

if (!end_that_request_first(rq, uptodate, nr_secs)) {
- add_blkdev_randomness(ch->major);
+ add_timing_entropy(0, ch->major);
if (!blk_rq_tagged(rq))
blkdev_dequeue_request(rq);
else
diff -ur a/drivers/input/input.c b/drivers/input/input.c
--- a/drivers/input/input.c 2002-08-17 00:29:59.000000000 -0500
+++ b/drivers/input/input.c 2002-08-17 01:00:22.000000000 -0500
@@ -78,7 +78,7 @@
if (type > EV_MAX || !test_bit(type, dev->evbit))
return;

- add_mouse_randomness((type << 4) ^ code ^ (code >> 4) ^ value);
+ add_timing_entropy(dev->entropy, (type << 4) ^ code ^ value);

switch (type) {

@@ -462,6 +462,8 @@
dev->rep[REP_DELAY] = HZ/4;
dev->rep[REP_PERIOD] = HZ/33;

+ dev->entropy = create_entropy_source(1);
+
/*
* Add the device.
*/
diff -ur a/drivers/s390/block/dasd.c b/drivers/s390/block/dasd.c
--- a/drivers/s390/block/dasd.c 2002-07-20 14:11:17.000000000 -0500
+++ b/drivers/s390/block/dasd.c 2002-08-17 01:00:22.000000000 -0500
@@ -1489,7 +1489,7 @@
{
if (end_that_request_first(req, uptodate, req->hard_nr_sectors))
BUG();
- add_blkdev_randomness(major(req->rq_dev));
+ add_timing_entropy(0, major(req->rq_dev));
end_that_request_last(req);
return;
}
diff -ur a/drivers/s390/char/tapeblock.c b/drivers/s390/char/tapeblock.c
--- a/drivers/s390/char/tapeblock.c 2002-08-17 00:29:59.000000000 -0500
+++ b/drivers/s390/char/tapeblock.c 2002-08-17 01:00:22.000000000 -0500
@@ -283,9 +283,6 @@
bh->b_end_io (bh, uptodate);
}
if (!end_that_request_first (td->blk_data.current_request, uptodate, "tBLK")) {
-#ifndef DEVICE_NO_RANDOM
- add_blkdev_randomness (MAJOR (td->blk_data.current_request->rq_dev));
-#endif
end_that_request_last (td->blk_data.current_request);
}
if (treq!=NULL) {
diff -ur a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c
--- a/drivers/scsi/scsi_lib.c 2002-08-17 00:29:54.000000000 -0500
+++ b/drivers/scsi/scsi_lib.c 2002-08-17 01:00:58.000000000 -0500
@@ -335,7 +335,7 @@
return SCpnt;
}

- add_blkdev_randomness(major(req->rq_dev));
+ add_timing_entropy(0, major(req->rq_dev));

if(blk_rq_tagged(req))
blk_queue_end_tag(q, req);
diff -ur a/include/linux/blk.h b/include/linux/blk.h
--- a/include/linux/blk.h 2002-07-20 14:11:33.000000000 -0500
+++ b/include/linux/blk.h 2002-08-17 01:00:58.000000000 -0500
@@ -8,7 +8,6 @@
#include <linux/compiler.h>

extern void set_device_ro(kdev_t dev,int flag);
-extern void add_blkdev_randomness(int major);

#ifdef CONFIG_BLK_DEV_RAM

@@ -83,12 +82,14 @@
* If we have our own end_request, we do not want to include this mess
*/
#ifndef LOCAL_END_REQUEST
+extern void add_timing_entropy(void *src, unsigned datum);
+
static inline void end_request(struct request *req, int uptodate)
{
if (end_that_request_first(req, uptodate, req->hard_cur_sectors))
return;

- add_blkdev_randomness(major(req->rq_dev));
+ add_timing_entropy(0, major(req->rq_dev));
blkdev_dequeue_request(req);
end_that_request_last(req);
}
diff -ur a/include/linux/input.h b/include/linux/input.h
--- a/include/linux/input.h 2002-08-17 00:30:00.000000000 -0500
+++ b/include/linux/input.h 2002-08-17 01:00:58.000000000 -0500
@@ -781,6 +781,7 @@

unsigned int repeat_key;
struct timer_list timer;
+ void *entropy;

struct pm_dev *pm_dev;
int state;


--
"Love the dolphins," she advised him. "Write by W.A.S.T.E.."

2002-08-18 02:26:05

by Oliver Xymoron

Subject: [PATCH] (3/4) SA_RANDOM user fixup

Users of the SA_SAMPLE_RANDOM irq flag are moved to the new untrusted
entropy interface. It is now possible to safely add timing samples
from network devices.

diff -ur a/arch/alpha/kernel/irq.c b/arch/alpha/kernel/irq.c
--- a/arch/alpha/kernel/irq.c 2002-08-17 00:30:02.000000000 -0500
+++ b/arch/alpha/kernel/irq.c 2002-08-17 02:07:10.000000000 -0500
@@ -88,7 +88,7 @@
action = action->next;
} while (action);
if (status & SA_SAMPLE_RANDOM)
- add_interrupt_randomness(irq);
+ add_timing_entropy(0, irq);
local_irq_disable();

return status;
@@ -166,17 +166,6 @@
* so we have to be careful not to interfere with a
* running system.
*/
- if (new->flags & SA_SAMPLE_RANDOM) {
- /*
- * This function might sleep, we want to call it first,
- * outside of the atomic block.
- * Yes, this might clear the entropy pool if the wrong
- * driver is attempted to be loaded, without actually
- * installing a new handler, but is this really a problem,
- * only the sysadmin is able to do this.
- */
- rand_initialize_irq(irq);
- }

/*
* The following block of code has to be executed atomically
diff -ur a/arch/arm/kernel/irq.c b/arch/arm/kernel/irq.c
--- a/arch/arm/kernel/irq.c 2002-08-17 00:30:02.000000000 -0500
+++ b/arch/arm/kernel/irq.c 2002-08-17 02:06:30.000000000 -0500
@@ -200,7 +200,7 @@
} while (action);

if (status & SA_SAMPLE_RANDOM)
- add_interrupt_randomness(irq);
+ add_timing_entropy(0, irq);

spin_lock_irq(&irq_controller_lock);
}
@@ -458,17 +458,6 @@
* so we have to be careful not to interfere with a
* running system.
*/
- if (new->flags & SA_SAMPLE_RANDOM) {
- /*
- * This function might sleep, we want to call it first,
- * outside of the atomic block.
- * Yes, this might clear the entropy pool if the wrong
- * driver is attempted to be loaded, without actually
- * installing a new handler, but is this really a problem,
- * only the sysadmin is able to do this.
- */
- rand_initialize_irq(irq);
- }

/*
* The following block of code has to be executed atomically
diff -ur a/arch/cris/kernel/irq.c b/arch/cris/kernel/irq.c
--- a/arch/cris/kernel/irq.c 2002-08-17 00:29:50.000000000 -0500
+++ b/arch/cris/kernel/irq.c 2002-08-17 02:07:42.000000000 -0500
@@ -275,7 +275,7 @@
action = action->next;
} while (action);
if (do_random & SA_SAMPLE_RANDOM)
- add_interrupt_randomness(irq);
+ add_timing_entropy(0, irq);
local_irq_disable();
}
irq_exit(cpu);
@@ -315,9 +315,6 @@
shared = 1;
}

- if (new->flags & SA_SAMPLE_RANDOM)
- rand_initialize_irq(irq);
-
save_flags(flags);
cli();
*p = new;
diff -ur a/arch/i386/kernel/irq.c b/arch/i386/kernel/irq.c
--- a/arch/i386/kernel/irq.c 2002-08-17 00:29:58.000000000 -0500
+++ b/arch/i386/kernel/irq.c 2002-08-17 02:07:50.000000000 -0500
@@ -212,7 +212,7 @@
action = action->next;
} while (action);
if (status & SA_SAMPLE_RANDOM)
- add_interrupt_randomness(irq);
+ add_timing_entropy(0, irq);
local_irq_disable();

return status;
@@ -733,17 +733,6 @@
* so we have to be careful not to interfere with a
* running system.
*/
- if (new->flags & SA_SAMPLE_RANDOM) {
- /*
- * This function might sleep, we want to call it first,
- * outside of the atomic block.
- * Yes, this might clear the entropy pool if the wrong
- * driver is attempted to be loaded, without actually
- * installing a new handler, but is this really a problem,
- * only the sysadmin is able to do this.
- */
- rand_initialize_irq(irq);
- }

/*
* The following block of code has to be executed atomically
diff -ur a/arch/ia64/kernel/irq.c b/arch/ia64/kernel/irq.c
--- a/arch/ia64/kernel/irq.c 2002-08-17 00:29:50.000000000 -0500
+++ b/arch/ia64/kernel/irq.c 2002-08-17 02:07:04.000000000 -0500
@@ -497,7 +497,7 @@
action = action->next;
} while (action);
if (status & SA_SAMPLE_RANDOM)
- add_interrupt_randomness(irq);
+ add_timing_entropy(0, irq);
local_irq_disable();

local_irq_exit(irq);
@@ -1016,17 +1016,6 @@
* so we have to be careful not to interfere with a
* running system.
*/
- if (new->flags & SA_SAMPLE_RANDOM) {
- /*
- * This function might sleep, we want to call it first,
- * outside of the atomic block.
- * Yes, this might clear the entropy pool if the wrong
- * driver is attempted to be loaded, without actually
- * installing a new handler, but is this really a problem,
- * only the sysadmin is able to do this.
- */
- rand_initialize_irq(irq);
- }

if (new->flags & SA_PERCPU_IRQ) {
desc->status |= IRQ_PER_CPU;
diff -ur a/arch/ia64/kernel/irq_ia64.c b/arch/ia64/kernel/irq_ia64.c
--- a/arch/ia64/kernel/irq_ia64.c 2002-07-20 14:11:14.000000000 -0500
+++ b/arch/ia64/kernel/irq_ia64.c 2002-08-17 02:08:50.000000000 -0500
@@ -22,7 +22,6 @@
#include <linux/kernel_stat.h>
#include <linux/slab.h>
#include <linux/ptrace.h>
-#include <linux/random.h> /* for rand_initialize_irq() */
#include <linux/signal.h>
#include <linux/smp.h>
#include <linux/smp_lock.h>
diff -ur a/arch/mips/baget/irq.c b/arch/mips/baget/irq.c
--- a/arch/mips/baget/irq.c 2002-08-17 00:29:50.000000000 -0500
+++ b/arch/mips/baget/irq.c 2002-08-17 02:07:15.000000000 -0500
@@ -195,7 +195,7 @@
action = action->next;
} while (action);
if (do_random & SA_SAMPLE_RANDOM)
- add_interrupt_randomness(irq);
+ add_timing_entropy(0, irq);
local_irq_disable();
} else {
printk("do_IRQ: Unregistered IRQ (0x%X) occurred\n", irq);
@@ -284,9 +284,6 @@
shared = 1;
}

- if (new->flags & SA_SAMPLE_RANDOM)
- rand_initialize_irq(irq);
-
save_and_cli(flags);
*p = new;
restore_flags(flags);
diff -ur a/arch/mips/dec/irq.c b/arch/mips/dec/irq.c
--- a/arch/mips/dec/irq.c 2002-08-17 00:29:50.000000000 -0500
+++ b/arch/mips/dec/irq.c 2002-08-17 02:07:38.000000000 -0500
@@ -145,7 +145,7 @@
action = action->next;
} while (action);
if (do_random & SA_SAMPLE_RANDOM)
- add_interrupt_randomness(irq);
+ add_timing_entropy(0, irq);
local_irq_disable();
unmask_irq(irq);
}
@@ -181,8 +181,6 @@
} while (old);
shared = 1;
}
- if (new->flags & SA_SAMPLE_RANDOM)
- rand_initialize_irq(irq);

save_and_cli(flags);
*p = new;
diff -ur a/arch/mips/kernel/irq.c b/arch/mips/kernel/irq.c
--- a/arch/mips/kernel/irq.c 2002-08-17 00:29:50.000000000 -0500
+++ b/arch/mips/kernel/irq.c 2002-08-17 02:07:34.000000000 -0500
@@ -122,7 +122,7 @@
action = action->next;
} while (action);
if (status & SA_SAMPLE_RANDOM)
- add_interrupt_randomness(irq);
+ add_timing_entropy(0, irq);
local_irq_disable();

irq_exit(cpu, irq);
@@ -649,17 +649,6 @@
* so we have to be careful not to interfere with a
* running system.
*/
- if (new->flags & SA_SAMPLE_RANDOM) {
- /*
- * This function might sleep, we want to call it first,
- * outside of the atomic block.
- * Yes, this might clear the entropy pool if the wrong
- * driver is attempted to be loaded, without actually
- * installing a new handler, but is this really a problem,
- * only the sysadmin is able to do this.
- */
- rand_initialize_irq(irq);
- }

/*
* The following block of code has to be executed atomically
diff -ur a/arch/mips/kernel/old-irq.c b/arch/mips/kernel/old-irq.c
--- a/arch/mips/kernel/old-irq.c 2002-08-17 00:29:50.000000000 -0500
+++ b/arch/mips/kernel/old-irq.c 2002-08-17 02:07:28.000000000 -0500
@@ -192,7 +192,7 @@
action = action->next;
} while (action);
if (do_random & SA_SAMPLE_RANDOM)
- add_interrupt_randomness(irq);
+ add_timing_entropy(0, irq);
local_irq_disable();
unmask_irq (irq);

@@ -228,7 +228,7 @@
action = action->next;
} while (action);
if (do_random & SA_SAMPLE_RANDOM)
- add_interrupt_randomness(irq);
+ add_timing_entropy(0, irq);
local_irq_disable();
}
irq_exit(cpu, irq);
@@ -263,9 +263,6 @@
shared = 1;
}

- if (new->flags & SA_SAMPLE_RANDOM)
- rand_initialize_irq(irq);
-
save_and_cli(flags);
*p = new;

diff -ur a/arch/mips/philips/nino/irq.c b/arch/mips/philips/nino/irq.c
--- a/arch/mips/philips/nino/irq.c 2002-08-17 00:29:50.000000000 -0500
+++ b/arch/mips/philips/nino/irq.c 2002-08-17 02:07:20.000000000 -0500
@@ -185,7 +185,7 @@
action = action->next;
} while (action);
if (do_random & SA_SAMPLE_RANDOM)
- add_interrupt_randomness(irq);
+ add_timing_entropy(0, irq);
unmask_irq(irq);
local_irq_disable();
} else {
@@ -227,8 +227,6 @@
} while (old);
shared = 1;
}
- if (new->flags & SA_SAMPLE_RANDOM)
- rand_initialize_irq(irq);

save_and_cli(flags);
*p = new;
diff -ur a/arch/mips64/mips-boards/malta/malta_int.c b/arch/mips64/mips-boards/malta/malta_int.c
--- a/arch/mips64/mips-boards/malta/malta_int.c 2002-07-20 14:11:29.000000000 -0500
+++ b/arch/mips64/mips-boards/malta/malta_int.c 2002-08-17 02:06:12.000000000 -0500
@@ -247,9 +247,6 @@
shared = 1;
}

- if (new->flags & SA_SAMPLE_RANDOM)
- rand_initialize_irq(irq);
-
*p = new;
if (!shared)
enable_irq(irq);
diff -ur a/arch/mips64/sgi-ip22/ip22-int.c b/arch/mips64/sgi-ip22/ip22-int.c
--- a/arch/mips64/sgi-ip22/ip22-int.c 2002-08-17 00:29:50.000000000 -0500
+++ b/arch/mips64/sgi-ip22/ip22-int.c 2002-08-17 02:05:16.000000000 -0500
@@ -315,7 +315,7 @@
action = action->next;
} while (action);
if (do_random & SA_SAMPLE_RANDOM)
- add_interrupt_randomness(irq);
+ add_timing_entropy(0, irq);
local_irq_disable();
}
irq_exit(cpu, irq);
diff -ur a/arch/mips64/sgi-ip27/ip27-irq.c b/arch/mips64/sgi-ip27/ip27-irq.c
--- a/arch/mips64/sgi-ip27/ip27-irq.c 2002-08-17 00:29:50.000000000 -0500
+++ b/arch/mips64/sgi-ip27/ip27-irq.c 2002-08-17 02:06:17.000000000 -0500
@@ -183,7 +183,7 @@
action = action->next;
} while (action);
if (do_random & SA_SAMPLE_RANDOM)
- add_interrupt_randomness(irq);
+ add_timing_entropy(0, irq);
local_irq_disable();
}
irq_exit(thiscpu, irq);
@@ -339,8 +339,6 @@
printk("IRQ array overflow %d\n", irq);
while(1);
}
- if (new->flags & SA_SAMPLE_RANDOM)
- rand_initialize_irq(irq);

save_and_cli(flags);
p = irq_action + irq;
diff -ur a/arch/ppc/kernel/irq.c b/arch/ppc/kernel/irq.c
--- a/arch/ppc/kernel/irq.c 2002-08-17 00:30:02.000000000 -0500
+++ b/arch/ppc/kernel/irq.c 2002-08-17 02:08:08.000000000 -0500
@@ -133,17 +133,6 @@
* so we have to be careful not to interfere with a
* running system.
*/
- if (new->flags & SA_SAMPLE_RANDOM) {
- /*
- * This function might sleep, we want to call it first,
- * outside of the atomic block.
- * Yes, this might clear the entropy pool if the wrong
- * driver is attempted to be loaded, without actually
- * installing a new handler, but is this really a problem,
- * only the sysadmin is able to do this.
- */
- rand_initialize_irq(irq);
- }

/*
* The following block of code has to be executed atomically
@@ -412,7 +401,7 @@
action = action->next;
} while (action);
if (status & SA_SAMPLE_RANDOM)
- add_interrupt_randomness(irq);
+ add_timing_entropy(0, irq);
local_irq_disable();
}

diff -ur a/arch/ppc64/kernel/irq.c b/arch/ppc64/kernel/irq.c
--- a/arch/ppc64/kernel/irq.c 2002-08-17 00:29:50.000000000 -0500
+++ b/arch/ppc64/kernel/irq.c 2002-08-17 02:06:37.000000000 -0500
@@ -120,17 +120,6 @@
* so we have to be careful not to interfere with a
* running system.
*/
- if (new->flags & SA_SAMPLE_RANDOM) {
- /*
- * This function might sleep, we want to call it first,
- * outside of the atomic block.
- * Yes, this might clear the entropy pool if the wrong
- * driver is attempted to be loaded, without actually
- * installing a new handler, but is this really a problem,
- * only the sysadmin is able to do this.
- */
- rand_initialize_irq(irq);
- }

/*
* The following block of code has to be executed atomically
@@ -396,7 +385,7 @@
action = action->next;
} while (action);
if (status & SA_SAMPLE_RANDOM)
- add_interrupt_randomness(irq);
+ add_timing_entropy(0, irq);
local_irq_disable();
}

diff -ur a/arch/sh/kernel/irq.c b/arch/sh/kernel/irq.c
--- a/arch/sh/kernel/irq.c 2002-08-17 00:29:50.000000000 -0500
+++ b/arch/sh/kernel/irq.c 2002-08-17 02:06:04.000000000 -0500
@@ -138,7 +138,7 @@
action = action->next;
} while (action);
if (status & SA_SAMPLE_RANDOM)
- add_interrupt_randomness(irq);
+ add_timing_entropy(0, irq);
local_irq_disable();

irq_exit(cpu, irq);
@@ -499,17 +499,6 @@
* so we have to be careful not to interfere with a
* running system.
*/
- if (new->flags & SA_SAMPLE_RANDOM) {
- /*
- * This function might sleep, we want to call it first,
- * outside of the atomic block.
- * Yes, this might clear the entropy pool if the wrong
- * driver is attempted to be loaded, without actually
- * installing a new handler, but is this really a problem,
- * only the sysadmin is able to do this.
- */
- rand_initialize_irq(irq);
- }

/*
* The following block of code has to be executed atomically
diff -ur a/arch/sparc64/kernel/irq.c b/arch/sparc64/kernel/irq.c
--- a/arch/sparc64/kernel/irq.c 2002-08-17 00:30:02.000000000 -0500
+++ b/arch/sparc64/kernel/irq.c 2002-08-17 02:08:01.000000000 -0500
@@ -316,20 +316,6 @@
if (!handler)
return -EINVAL;

- if ((bucket != &pil0_dummy_bucket) && (irqflags & SA_SAMPLE_RANDOM)) {
- /*
- * This function might sleep, we want to call it first,
- * outside of the atomic block. In SA_STATIC_ALLOC case,
- * random driver's kmalloc will fail, but it is safe.
- * If already initialized, random driver will not reinit.
- * Yes, this might clear the entropy pool if the wrong
- * driver is attempted to be loaded, without actually
- * installing a new handler, but is this really a problem,
- * only the sysadmin is able to do this.
- */
- rand_initialize_irq(irq);
- }
-
spin_lock_irqsave(&irq_action_lock, flags);

action = *(bucket->pil + irq_action);
@@ -793,7 +779,7 @@
upa_writel(ICLR_IDLE, bp->iclr);
/* Test and add entropy */
if (random)
- add_interrupt_randomness(irq);
+ add_timing_entropy(0, irq);
}
} else
bp->pending = 1;
diff -ur a/arch/x86_64/kernel/irq.c b/arch/x86_64/kernel/irq.c
--- a/arch/x86_64/kernel/irq.c 2002-08-17 00:29:50.000000000 -0500
+++ b/arch/x86_64/kernel/irq.c 2002-08-17 02:06:24.000000000 -0500
@@ -450,7 +450,7 @@
action = action->next;
} while (action);
if (status & SA_SAMPLE_RANDOM)
- add_interrupt_randomness(irq);
+ add_timing_entropy(0, irq);
local_irq_disable();

irq_exit(0, irq);
@@ -986,17 +986,6 @@
* so we have to be careful not to interfere with a
* running system.
*/
- if (new->flags & SA_SAMPLE_RANDOM) {
- /*
- * This function might sleep, we want to call it first,
- * outside of the atomic block.
- * Yes, this might clear the entropy pool if the wrong
- * driver is attempted to be loaded, without actually
- * installing a new handler, but is this really a problem,
- * only the sysadmin is able to do this.
- */
- rand_initialize_irq(irq);
- }

/*
* The following block of code has to be executed atomically


--
"Love the dolphins," she advised him. "Write by W.A.S.T.E.."

2002-08-18 02:58:43

by Linus Torvalds

Subject: Re: [PATCH] (0/4) Entropy accounting fixes


On Sat, 17 Aug 2002, Oliver Xymoron wrote:

> To protect against back to back measurements and userspace
> observation, we insist that at least one context switch has occurred
> since we last sampled before we trust a sample.

This sounds particularly obnoxious, since it is entirely possible to have
an idle machine that is just waiting for more entropy, and this will mean
(for example) that such an idle machine will effectively never get any
more entropy simply because your context switch counting will not work.

This is particularly true on things like embedded routers, where the
machine usually doesn't actually _run_ much user-level software, but is
just shuffling packets back and forth. Your logic seems to make it not add
any entropy from those packets, which can be _deadly_ if then the router
is also used for occasionally generating some random numbers for other
things.

Explain to me why I should consider these kinds of draconian measures
acceptable? It seems to be just fascist and outright stupid: avoiding
randomness just because you think you can't prove that it is random is not
very productive.

We might as well get rid of /dev/random altogether if it is not useful.

Linus

2002-08-18 02:55:16

by Oliver Xymoron

Subject: Re: [PATCH] (0/4) Entropy accounting fixes

On Sat, Aug 17, 2002 at 07:30:02PM -0700, Linus Torvalds wrote:
>
> On Sat, 17 Aug 2002, Oliver Xymoron wrote:
> >
> > Net effect: a typical box will claim to generate 2-5 _orders of magnitude_
> > more entropy than it actually does.
>
> On the other hand, if you are _too_ anal you won't consider _anything_
> "truly random", and /dev/random becomes practically useless on things that
> don't have special randomness hardware.
>
> To me it sounds from your description that you may well be on the edge of
> "too anal". Real life _has_ to be taken into account, and not accepting
> entropy because of theoretical issues is _not_ a good idea.

There are exactly two cases:

a) you care about entropy accounting being done right
b) /dev/urandom is good enough

Note that these patches allow you to satisfy folks in a) while mixing
in _more_ data for b).

> Quite frankly, I'd rather have a usable /dev/random than one that runs out
> so quickly that it's unreasonable to use it for things like generating
> 4096-bit host keys for sshd etc.

Let me clarify that 2-5 orders thing. The kernel trusts about 10 times
as many samples as it should, and overestimates each sample's entropy
by about a factor of 10 (on x86 with TSC) or 1.3 (using 1kHz jiffies).
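
(To see the size of that per-sample correction, here's a minimal
userspace sketch - mine, for illustration only; the benford[] table is
the one from the trust_pct patch later in this thread, and the "naive"
column approximates the bit-length estimate the current kernel uses:)

/* Naive bit-length entropy estimate vs. the capped benford[] one. */
#include <stdio.h>

static int benford[16] = {0,0,0,1,2,3,4,5,5,6,7,7,8,9,9,10};

static int int_log2_16bits(unsigned v)  /* floor(log2(v)), v < 2^16 */
{
    int r = 0;
    while (v >>= 1)
        r++;
    return r;
}

int main(void)
{
    unsigned delta;

    for (delta = 8; delta <= 0x8000; delta <<= 4) {
        int lg = int_log2_16bits(delta);
        printf("delta %6u: naive %2d bits, capped %2d bits\n",
               delta, lg, benford[lg]);
    }
    return 0;
}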

The 5 orders comes in when the pool is exhausted and the pool xfer
function magically manufactures 1024 bits or more the next time an
entropy bit (or .1 or 0 entropy bits, see above) comes in.

> In particular, if a machine needs to generate a strong random number, and
> /dev/random cannot give that more than once per day because it refuses to
> use things like bits from the TSC on network packets, then /dev/random is
> no longer practically useful.

My box has been up for about the time it's taken to write this email
and it's already got a full entropy pool. A 4096-bit public key has
significantly less than that many bits of entropy in it (primes thin
out in approximate proportion to log2(n)).
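
(Rough numbers: by the prime number theorem there are about x/ln(x)
primes below x, so a random 2048-bit prime carries about
2048 - log2(2048 * ln 2) ~= 2037 bits of entropy, and the two primes
behind a 4096-bit modulus at most ~4075 bits - comfortably under one
full 4096-bit pool.)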

> So please also do a writeup on whether your patches are _practical_. I
> will not apply them otherwise.

The patches will be a nuisance for anyone who's currently using
/dev/random to generate session keys on busy SSL servers. But again,
with the old code, they were fooling themselves anyway. /dev/urandom
is appropriate for such applications, and this patch allows giving it
more data without sacrificing /dev/random.

--
"Love the dolphins," she advised him. "Write by W.A.S.T.E.."

2002-08-18 03:01:12

by Linus Torvalds

[permalink] [raw]
Subject: Re: [PATCH] (0/4) Entropy accounting fixes


On Sat, 17 Aug 2002, Oliver Xymoron wrote:
>
> Let me clarify that 2-5 orders thing. The kernel trusts about 10 times
> as many samples as it should, and overestimates each sample's entropy
> by about a factor of 10 (on x86 with TSC) or 1.3 (using 1kHz jiffies).

Looking at the code, your _new_ code just throws samples away _entirely_
just because some random event hasn't happened (the first thing I noticed
was the context switch testing, there may be others there that I just
didn't react to).

In short, you seem to cut those random events to _zero_. And this happens
under what seems to be perfectly realistic loads. That's not trying to fix
things, that's just breaking them.

> The patches will be a nuisance for anyone who's currently using
> /dev/random to generate session keys on busy SSL servers.

No, it appears to be a nuisance even for people who have real issues, ie
just generating _occasional_ numbers on machines that just don't happen to
run much user space programs.

Linus

2002-08-18 03:18:45

by Linus Torvalds

[permalink] [raw]
Subject: Re: [PATCH] (0/4) Entropy accounting fixes


Hmm.. After more reading, it looks like (if I understood correctly) that
since network activity isn't considered trusted -at-all-, your average
router / firewall / xxx box will not _ever_ get any output from
/dev/random what-so-ever. Quite regardless of the context switch issue,
since that only triggers for trusted sources. So it was even more
draconian than I expected.

Are you seriously trying to say that a TSC running at a gigahertz cannot
be considered to contain any random information just because you think you
can time the network activity so well from the outside?

Oliver, I really think this patch (which otherwise looks perfectly fine)
is just unrealistic. There are _real_ reasons why a firewall box (ie one
that probably comes with a flash memory disk, and runs a small web-server
for configuration) would want to have strong random numbers (exactly for
things like generating host keys when asked to by the sysadmin), yet you
seem to say that such a user would have to use /dev/urandom.

If I read the patch correctly, you give such a box _zero_ "trusted"
sources of randomness, and thus zero bits of information in /dev/random.
It obviously won't have a keyboard or anything like that.

This is ludicrous.

Linus

2002-08-18 03:47:52

by Robert Love

[permalink] [raw]
Subject: Re: [PATCH] (0/4) Entropy accounting fixes

On Sat, 2002-08-17 at 23:05, Linus Torvalds wrote:

> This is particularly true on things like embedded routers, where the
> machine usually doesn't actually _run_ much user-level software, but is
> just shuffling packets back and forth. Your logic seems to make it not add
> any entropy from those packets, which can be _deadly_ if then the router
> is also used for occasionally generating some random numbers for other
> things.

Agreed. Further, embedded routers - since they are headless/diskless -
have problems even with the _current_ /dev/random code. They simply do
not generate enough entropy to fulfill sshd requests [1].

Saying "use /dev/urandom" in this case means we may as well not have a
/dev/random. There is a difference between incorrect accounting (which
it seems you have identified) and just too strict gathering behavior.

Robert Love

[1] this is why I wrote my netdev-random patches. some machines just
have to take the entropy from the network card... there is nothing
else.

2002-08-18 03:53:54

by Linus Torvalds

[permalink] [raw]
Subject: Re: [PATCH] (0/4) Entropy accounting fixes


On 17 Aug 2002, Robert Love wrote:
>
> [1] this is why I wrote my netdev-random patches. some machines just
> have to take the entropy from the network card... there is nothing
> else.

I suspect that Oliver is 100% correct in that the current code is just
_too_ trusting. And parts of his patches seem to be in the "obviously
good" category (ie the xor'ing of the buffers instead of overwriting).

So I think that if we just made the code be much less trusting (say,
consider the TSC information per interrupt to give only a single bit of
entropy, for example), and coupled that with making network devices always
be considered sources of entropy, we'd have a reasonable balance.

Linus

2002-08-18 04:02:09

by dean gaudet

[permalink] [raw]
Subject: Re: [PATCH] (0/4) Entropy accounting fixes

On 17 Aug 2002, Robert Love wrote:

> [1] this is why I wrote my netdev-random patches. some machines just
> have to take the entropy from the network card... there is nothing
> else.

many southbridges come with audio these days ... isn't it possible to get
randomness off the adc even without anything connected to it?

-dean

2002-08-18 04:24:20

by Oliver Xymoron

[permalink] [raw]
Subject: Re: [PATCH] (0/4) Entropy accounting fixes

On Sat, Aug 17, 2002 at 08:05:55PM -0700, Linus Torvalds wrote:
>
> On Sat, 17 Aug 2002, Oliver Xymoron wrote:
>
> > To protect against back to back measurements and userspace
> > observation, we insist that at least one context switch has occurred
> > since we last sampled before we trust a sample.
>
> This sounds particularly obnoxious, since it is entirely possible to have
> an idle machine that is just waiting for more entropy, and this will mean
> (for example) that such an idle machine will effectively never get any
> more entropy simply because your context switch counting will not work.

My presumption here is that entropy sources such as mouse and keyboard
will trigger plenty of context switches. I did in fact instrument
this, and cases where samples were not accounted as entropy were less
than 1%. But they were still mixed in.

> This is particularly true on things like embedded routers, where the
> machine usually doesn't actually _run_ much user-level software, but is
> just shuffling packets back and forth. Your logic seems to make it not add
> any entropy from those packets, which can be _deadly_ if then the router
> is also used for occasionally generating some random numbers for other
> things.

This analysis actually all stems from some work I did for an embedded
(non-Linux) router.

Note that we currently don't use network traffic for entropy except
for a few MCA cards.

> Explain to me why I should consider these kinds of draconian measures
> acceptable? It seems to be just fascist and outright stupid: avoiding
> randomness just because you think you can't prove that it is random is not
> very productive.

This approach still mixes in the very same data, regardless of whether
it decided to trust it or not. So it's not avoiding randomness, it's
just being picky about accounting it.

Let's be clear what entropy measurement is useful for. In the case
where your PRNG's initial state is as hard to guess as inverting the
hash function it's using or breaking the cipher you're using it for,
reseeding with entropy is just icing on the cake. Given that we've got
up to 4kbits of initial state, a 160-bit+ hashing function, and are
generally using this with 128-bit keys, or generating 30-odd bits of
sequence number, it buys us nothing. Where it buys us something is
generating large public keys.

Now there's currently an orthogonal problem that /dev/urandom can
deplete /dev/random's entropy pool entirely and things like generating
sequence numbers get in the way. I intend to fix that separately.

> We might as well get rid of /dev/random altogether if it is not useful.

If it's not accounting properly, it's not useful.

--
"Love the dolphins," she advised him. "Write by W.A.S.T.E.."

2002-08-18 04:26:19

by Oliver Xymoron

[permalink] [raw]
Subject: Re: [PATCH] (0/4) Entropy accounting fixes

On Sat, Aug 17, 2002 at 08:08:36PM -0700, Linus Torvalds wrote:
>
> On Sat, 17 Aug 2002, Oliver Xymoron wrote:
> >
> > Let me clarify that 2-5 orders thing. The kernel trusts about 10 times
> as many samples as it should, and overestimates each sample's entropy
> > by about a factor of 10 (on x86 with TSC) or 1.3 (using 1kHz jiffies).
>
> Looking at the code, your _new_ code just throws samples away _entirely_
> just because some random event hasn't happened (the first thing I noticed
> was the context switch testing, there may be others there that I just
> didn't react to).

No, it still mixes them in.

> In short, you seem to cut those random events to _zero_. And this happens
> under what seems to be perfectly realistic loads. That's not trying to fix
> things, that's just breaking them.
>
> > The patches will be a nuisance for anyone who's currently using
> > /dev/random to generate session keys on busy SSL servers.
>
> No, it appears to be a nuisance even for people who have real issues, ie
> just generating _occasional_ numbers on machines that just don't happen to
> run much user space programs.

Read the code again. Better yet, try it.

23:06ash~$ cat /proc/sys/kernel/random/entropy_avail
4096
23:17ash~$ w
23:17:22 up 11 min, 2 users, load average: 0.09, 0.06, 0.05

It was probably full in less than 11 minutes.

--
"Love the dolphins," she advised him. "Write by W.A.S.T.E.."

2002-08-18 04:40:32

by Oliver Xymoron

[permalink] [raw]
Subject: Re: [PATCH] (0/4) Entropy accounting fixes

On Sat, Aug 17, 2002 at 09:06:08PM -0700, dean gaudet wrote:
> On 17 Aug 2002, Robert Love wrote:
>
> > [1] this is why I wrote my netdev-random patches. some machines just
> > have to take the entropy from the network card... there is nothing
> > else.
>
> many southbridges come with audio these days ... isn't it possible to get
> randomness off the adc even without anything connected to it?

Many southbridges (like Intel 8x0) come with a real RNG.

--
"Love the dolphins," she advised him. "Write by W.A.S.T.E.."

2002-08-18 04:38:44

by Oliver Xymoron

[permalink] [raw]
Subject: Re: [PATCH] (0/4) Entropy accounting fixes

On Sat, Aug 17, 2002 at 08:25:55PM -0700, Linus Torvalds wrote:
>
> Hmm.. After more reading, it looks like (if I understood correctly) that
> since network activity isn't considered trusted -at-all-, your average
> router / firewall / xxx box will not _ever_ get any output from
> /dev/random what-so-ever. Quite regardless of the context switch issue,
> since that only triggers for trusted sources. So it was even more
> draconian than I expected.

But it will get data of _equal quality_ to the current approach from
/dev/urandom.

> Are you seriously trying to say that a TSC running at a gigahertz cannot
> be considered to contain any random information just because you think you
> can time the network activity so well from the outside?

Yes. The clock of interest is the PCI bus clock, which is not terribly
fast next to a gigabit network analyzer.

> Oliver, I really think this patch (which otherwise looks perfectly fine)
> is just unrealistic. There are _real_ reasons why a firewall box (ie one
> that probably comes with a flash memory disk, and runs a small web-server
> for configuration) would want to have strong random numbers (exactly for
> things like generating host keys when asked to by the sysadmin), yet you
> seem to say that such a user would have to use /dev/urandom.
>
> If I read the patch correctly, you give such a box _zero_ "trusted"
> sources of randomness, and thus zero bits of information in /dev/random.
> It obviously won't have a keyboard or anything like that.

Anyone who'd have that problem has it today. Current kernels only add
entropy for a small number of rare cards. Grep for SA_SAMPLE_RANDOM:

./net/e1000/e1000_main.c
./net/3c523.c
./net/ibmlana.c
./net/sk_mca.c

In reality, most apps are using /dev/urandom for routine entropy as
they should be.

--
"Love the dolphins," she advised him. "Write by W.A.S.T.E.."

2002-08-18 04:44:19

by Linus Torvalds

[permalink] [raw]
Subject: Re: [PATCH] (0/4) Entropy accounting fixes


On Sat, 17 Aug 2002, Oliver Xymoron wrote:
>
> > We might as well get rid of /dev/random altogether if it is not useful.
>
> If it's not accounting properly, it's not useful.

My point exactly.

And if it isn't useful, it might as well not be there.

And your accounting isn't "proper" either. It's not useful on a
network-only device. It's just swinging the error the _other_ way, but
that's still an error. The point of /dev/random was to have an estimate of
the amount of truly random data in the algorithm - and the important word
here is _estimate_. Not "minimum number", nor "maximum number".

And yes, it still mixes in the random data, but since it doesn't account
for the randomness, that only helps /dev/urandom.

And helping /dev/urandom is _fine_. Don't get me wrong. It just doesn't
make /dev/random any more useful - quite the reverse. Your patch will just
make more people say "/dev/random isn't useful, use /dev/urandom instead".

Do you not see the fallacy of that approach? You're trying to make
/dev/random safer, but what you are actually _doing_ is to make people not
use it, and use /dev/urandom instead. Which makes all of the estimation
code useless.

THIS is my argument. Randomness is like security: if you make it too hard
to use, then you're shooting yourself in the foot, since people end up
unable to practically use it.

The point of /dev/random was to make it _easy_ for people to get random
numbers that we can feel comfortable about. The point of the accounting is
not a theoretical argument, but a way to make us feel _comfortable_ with
the amount of true randomness we're seeding in. It was not meant as a
theoretical exercise.

Linus


2002-08-18 04:46:19

by Linus Torvalds

[permalink] [raw]
Subject: Re: [PATCH] (0/4) Entropy accounting fixes


On Sat, 17 Aug 2002, Oliver Xymoron wrote:
>
> But it will get data of _equal quality_ to the current approach from
> /dev/urandom.

So what?

If you make /dev/random useless ("but you can use /dev/urandom instead")
then we should not have it.

> > Are you seriously trying to say that a TSC running at a gigahertz cannot
> > be considered to contain any random information just because you think you
> > can time the network activity so well from the outside?
>
> Yes. The clock of interest is the PCI bus clock, which is not terribly
> fast next to a gigabit network analyzer.

Be realistic. This is what I ask of you. We want _real_world_ security,
not a completely made-up-example-for-the-NSA-that-is-useless-to-anybody-
else.

All your arguments seem to boil down to "people shouldn't use /dev/random
at all, they should use /dev/urandom".

Which is just ridiculous.

Linus

2002-08-18 04:47:25

by David Brownell

[permalink] [raw]
Subject: Re: [PATCH] (0/4) Entropy accounting fixes

> Questionable Sources and Time Scales
> ------------------------------------
>
> Due to the vagaries of computer architecture, things like keyboard
> and mouse interrupts occur on their respective scanning or serial
> clock edges, and are clocked relatively slowly. Worse, devices like
> USB keyboards, mice, and disks tend to share interrupts and probably
> line up on USB clock boundaries.

Last time I looked, USB didn't feed into the pool of randomness at
any level ... certainly none of the host controller drivers do, and
I didn't notice any drivers layered on top of those bus drivers
doing so either. That includes keyboards, mice, and disks; as well
as network devices, webcams, and more. So that "worse" isn't real.

It's likely correct that the host controller drivers don't use the
SA_SAMPLE_RANDOM flag: they're effectively sharing interrupts from
many kinds of devices, some of which may be very predictable (like
webcams). But it might be OK for some of those layered drivers
(HID, storage, ...) to act as entropy sources ... if such timescale
issues get addressed! Maybe an URB_SAMPLE_RANDOM flag would be the
right kind of model.

FWIW, interrupts for OHCI and UHCI occur on frame (1 msec) boundaries,
according to their specs, though one person said he saw one vendor's
OHCI silicon doing interrupts more often. For EHCI (USB 2.0) it's
tunable from 1 to 64 microframes, where there are 8 microframes per
frame. (Linux defaults EHCI latency to 1 microframe == 125 usec.)
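
(Rough arithmetic on what that means for timing entropy: with samples
quantized to 1 msec frames, even a full second between interrupts spans
only ~1000 distinct values - about 10 bits of raw resolution at best,
before any distribution effects shrink it further.)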

- Dave

2002-08-18 04:53:27

by Oliver Xymoron

[permalink] [raw]
Subject: Re: [PATCH] (0/4) Entropy accounting fixes

On Sat, Aug 17, 2002 at 11:51:52PM -0400, Robert Love wrote:
> On Sat, 2002-08-17 at 23:05, Linus Torvalds wrote:
>
> > This is particularly true on things like embedded routers, where the
> > machine usually doesn't actually _run_ much user-level software, but is
> > just shuffling packets back and forth. Your logic seems to make it not add
> > any entropy from those packets, which can be _deadly_ if then the router
> > is also used for occasionally generating some random numbers for other
> > things.
>
> Agreed. Further, embedded routers - since they are headless/diskless -
> have problems even with the _current_ /dev/random code. They simply do
> not generate enough entropy to fulfill sshd requests [1].

This analysis actually stemmed from my work to port OpenSSH to a headless
(non-UNIX, non-POSIX, non-protected-memory, diskless) network appliance. SSH
only needs real entropy for the keys generated by ssh-keygen. It's
complete overkill for session keys.

And guess what? Stock Portable OpenSSH (v3.4p1) uses /dev/urandom
(configure.ac):

# Check for user-specified random device, otherwise check /dev/urandom
AC_ARG_WITH(random,
[ --with-random=FILE read entropy from FILE
(default=/dev/urandom)],

> Saying "use /dev/urandom" in this case means we may as well not have a
> /dev/random. There is a difference between incorrect accounting (which
> it seems you have identified) and just too strict gathering behavior.
>
> Robert Love
>
> [1] this is why I wrote my netdev-random patches. some machines just
> have to take the entropy from the network card... there is nothing
> else.

This patch is perfectly compatible with your netdev-random patches, in
fact I encourage its resubmission after this one gets
in. /dev/urandom users will get all the benefits of network sampling
without /dev/random suffering at all.

--
"Love the dolphins," she advised him. "Write by W.A.S.T.E.."

2002-08-18 05:01:52

by Dmitri

[permalink] [raw]
Subject: Re: [PATCH] (0/4) Entropy accounting fixes

Quoting Linus Torvalds <[email protected]>:

> Be realistic. This is what I ask of you. We want _real_world_ security,
> not a completely made-up-example-for-the-NSA-that-is-useless-to-anybody-
> else.
>
> All your arguments seem to boil down to "people shouldn't use /dev/random
> at all, they should use /dev/urandom".

Wouldn't it be much easier to ask -very few- people (GnuPG/SSL/SSH teams
primarily) to use /dev/super-reliable-mathematically-proven-random if
available, instead of asking a much larger crowd to hack their code? This
will be backward compatible, and at the same time offers a much better
randomness for those who care about it. Myself, I read 128-bit session
keys for multiple, not-so-secure, short connections from /dev/random and
it would be sad if it runs out of data.

Also, /dev/random may take data from /dev/super-...random until it sucks
it dry, and then switch to less secure sources. This will guarantee that
the entropy of the readings is -not worse than- before, and for moderate
requests much better.

Dmitri

--
16. The Evil Overlord will not risk his life to save yours. Why risk
yours for his?
("Evil Overlord" by Peter Anspach and John VanSickl)



2002-08-18 05:20:19

by Oliver Xymoron

[permalink] [raw]
Subject: Re: [PATCH] (0/4) Entropy accounting fixes

On Sat, Aug 17, 2002 at 09:51:32PM -0700, Linus Torvalds wrote:
>
> On Sat, 17 Aug 2002, Oliver Xymoron wrote:
> >
> > > We might as well get rid of /dev/random altogether if it is not useful.
> >
> > If it's not accounting properly, it's not useful.
>
> My point exactly.
>
> And if it isn't useful, it might as well not be there.
>
> And your accounting isn't "proper" either. It's not useful on a
> network-only device. It's just swinging the error the _other_ way, but
> that's still an error. The point of /dev/random was to have an estimate of
> the amount of truly random data in the algorithm - and the important word
> here is _estimate_. Not "minimum number", nor "maximum number".

The key word is actually conservative, as in conservative estimate.
Conservative here means less than or equal to.

> And yes, it still mixes in the random data, but since it doesn't account
> for the randomness, that only helps /dev/urandom.
>
> And helping /dev/urandom is _fine_. Don't get me wrong. It just doesn't
> make /dev/random any more useful - quite the reverse. Your patch will just
> make more people say "/dev/random isn't useful, use /dev/urandom instead".

No, it says /dev/random is primarily useful for generating large
(>>160 bit) keys.

> Do you not see the fallacy of that approach? You're trying to make
> /dev/random safer, but what you are actually _doing_ is to make people not
> use it, and use /dev/urandom instead. Which makes all of the estimation
> code useless.

> THIS is my argument. Randomness is like security: if you make it too hard
> to use, then you're shooting yourself in the foot, since people end up
> unable to practically use it.

Actually, half of the point here is in fact to make /dev/urandom safer
too, by allowing mixing of untrusted data that would otherwise
compromise /dev/random. 99.9% of users aren't using network sampling
currently; after these patches we can turn it on for everyone and
still sleep well at night. See?

> The point of /dev/random was to make it _easy_ for people to get random
> numbers that we can feel comfortable about. The point of the accounting is
> not a theoretical argument, but a way to make us feel _comfortable_ with
> the amount of true randomness we're seeding in. It was not meant as a
> theoretical exercise.

That is an interesting point. A counterpoint: if we account so much
as 1 bit of entropy per network interrupt on a typical system, the
system will basically _always_ feel comfortable (see
/proc/interrupts). It will practically never block, and thus it is
again identical to /dev/urandom.
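
(Scale check, using round numbers: even a lightly loaded server sees
on the order of 100 network interrupts per second; at 1 bit each that
is ~6000 bits per minute against a 4096-bit pool, so the pool would
refill from empty in well under a minute.)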

With my scheme, it's usefully distinguished from /dev/urandom for the
purposes of things such as one-time public key generation.

See my note to RML about who actually uses it currently.

--
"Love the dolphins," she advised him. "Write by W.A.S.T.E.."

2002-08-18 05:26:00

by Andreas Dilger

[permalink] [raw]
Subject: Re: [PATCH] (0/4) Entropy accounting fixes

On Aug 17, 2002 21:59 -0500, Oliver Xymoron wrote:
> On Sat, Aug 17, 2002 at 07:30:02PM -0700, Linus Torvalds wrote:
> > Quite frankly, I'd rather have a usable /dev/random than one that runs out
> > so quickly that it's unreasonable to use it for things like generating
> > 4096-bit host keys for sshd etc.
>
> > In particular, if a machine needs to generate a strong random number, and
> > /dev/random cannot give that more than once per day because it refuses to
> > use things like bits from the TSC on network packets, then /dev/random is
> > no longer practically useful.
>
> My box has been up for about the time it's taken to write this email
> and it's already got a full entropy pool. A 4096-bit public key has
> significantly less than that many bits of entropy in it (primes thin
> out in approximate proportion to log2(n)).

It is fairly trivial to change the init scripts to save/restore more than
4096 bits of entropy, and for /dev/random to accumulate more than this.
For people who have _any_ source of "real" entropy, but it is occasionally
in high demand, they could set up a larger pool to accumulate entropy
in between peak demand. It is basically just a few lines of change in
/etc/init.d/[u]random - all the kernel hooks are there.

Even so, I would agree with Linus in the thought that being "too
paranoid" makes it basically useless. If you have people sniffing
your network right next to the WAN side of your IPSec firewall with
GHz network analyzers and crafting packets to corrupt your entropy
pool, then chances are they could just as easily sniff the LAN side
of your network and get the unencrypted data directly. The same
holds true for keystroke logging (or spy camera) to capture your pass
phrase instead of trying an incredibly difficult strategy to "influence"
the generation of this huge key in advance.

In the end, if you make it so hard to extract your secrets in a stealthy
manner, they will just start with a few big guys and a rubber hose...

Cheers, Andreas
--
Andreas Dilger
http://www-mddsp.enel.ucalgary.ca/People/adilger/
http://sourceforge.net/projects/ext2resize/

2002-08-18 05:35:03

by Oliver Xymoron

[permalink] [raw]
Subject: Re: [PATCH] (0/4) Entropy accounting fixes

On Sat, Aug 17, 2002 at 09:01:20PM -0700, Linus Torvalds wrote:
>
> On 17 Aug 2002, Robert Love wrote:
> >
> > [1] this is why I wrote my netdev-random patches. some machines just
> > have to take the entropy from the network card... there is nothing
> > else.
>
> I suspect that Oliver is 100% correct in that the current code is just
> _too_ trusting. And parts of his patches seem to be in the "obviously
> good" category (ie the xor'ing of the buffers instead of overwriting).

Make sure you don't miss this bit; I should have sent it
separately. This is a longstanding bug that manufactures about a
thousand bits out of thin air when the pool runs dry.

Ironically, any confidence anyone had in /dev/random vs /dev/urandom
up until now has been misplaced.

--- a/drivers/char/random.c 2002-07-20 14:11:07.000000000 -0500
+++ b/drivers/char/random.c 2002-08-17 19:47:54.000000000 -0500
@@ -1239,18 +1184,18 @@

if (r->entropy_count < nbytes * 8 &&
r->entropy_count < r->poolinfo.POOLBITS) {
- int nwords = min_t(int,
- r->poolinfo.poolwords - r->entropy_count/32,
- sizeof(tmp) / 4);
+ int bytes = min_t(int,
+ nbytes - r->entropy_count/8,
+ sizeof(tmp));

- DEBUG_ENT("xfer %d from primary to %s (have %d, need %d)\n",
- nwords * 32,
+ DEBUG_ENT("xfer %d to %s (have %d, need %d)\n",
+ bytes * 8,
r == sec_random_state ? "secondary" : "unknown",
r->entropy_count, nbytes * 8);

- extract_entropy(random_state, tmp, nwords * 4, 0);
- add_entropy_words(r, tmp, nwords);
- credit_entropy_store(r, nwords * 32);
+ extract_entropy(random_state, tmp, bytes, 0);
+ add_entropy_words(r, tmp, (bytes+3)/4);
+ credit_entropy_store(r, bytes*8);
}
if (r->extract_count > 1024) {
DEBUG_ENT("reseeding %s with %d from primary\n",

--
"Love the dolphins," she advised him. "Write by W.A.S.T.E.."

2002-08-18 05:49:51

by Oliver Xymoron

[permalink] [raw]
Subject: Re: [PATCH] (0/4) Entropy accounting fixes

On Sat, Aug 17, 2002 at 11:28:08PM -0600, Andreas Dilger wrote:
>
> Even so, I would agree with Linus in the thought that being "too
> paranoid" makes it basically useless. If you have people sniffing
> your network right next to the WAN side of your IPSec firewall with
> GHz network analyzers and crafting packets to corrupt your entropy
> pool, then chances are they could just as easily sniff the LAN side
> of your network and get the unencrypted data directly. The same
> holds true for keystroke logging (or spy camera) to capture your pass
> phrase instead of trying an incredibly difficult strategy to "influence"
> the generation of this huge key in advance.

Actually, my attack model here assumes a hostile LAN. GHz WAN is
fairly uncommon.

This design comes from an entropy pool I made from scratch for another
system, fixing what I thought were the deficits in the Linux
model. Current Linux a) overestimates entropy, b) can't incorporate
untrusted data, and c) doesn't sample from the network at all.

I think if I'd broken this up this way:

piece 1a: mix in untrusted data without accounting it
piece 1b: start sampling the network as untrusted data
piece 2a: clean up bogosity in entropy accounting

..no one would have objected.

> In the end, if you make it so hard to extract your secrets in a stealthy
> manner, they will just start with a few big guys and a rubber hose...

Oddly enough, I came home today from running errands to discover that
someone had tried brute-forcing my front door with a crowbar.

--
"Love the dolphins," she advised him. "Write by W.A.S.T.E.."

2002-08-18 05:58:40

by Oliver Xymoron

[permalink] [raw]
Subject: Re: [PATCH] (0/4) Entropy accounting fixes

On Sat, Aug 17, 2002 at 09:57:02PM -0700, David Brownell wrote:
> > Questionable Sources and Time Scales
> >
> > Due to the vagaries of computer architecture, things like keyboard
> > and mouse interrupts occur on their respective scanning or serial
> > clock edges, and are clocked relatively slowly. Worse, devices like
> > USB keyboards, mice, and disks tend to share interrupts and probably
> > line up on USB clock boundaries.
>
> Last time I looked, USB didn't feed into the pool of randomness at
> any level ... certainly none of the host controller drivers do, and
> I didn't notice any drivers layered on top of those bus drivers
> doing so either. That includes keyboards, mice, and disks; as well
> as network devices, webcams, and more. So that "worse" isn't real.

I may be mistaken, but I believe the USB HID devices go through the
generic input layer, which did get touched by my second patch.

> It's likely correct that the host controller drivers don't use the
> SA_SAMPLE_RANDOM flag: they're effectively sharing interrupts from
> many kinds of devices, some of which may be very predictable (like
> webcams). But it might be OK for some of those layered drivers
> (HID, storage, ...) to act as entropy sources ... if such timescale
> issues get addressed! Maybe an URB_SAMPLE_RANDOM flag would be the
> right kind of model.

Shared interrupts are actually ok as long as my context switch
criterion (or something similar) is met.

Storage is a bad source of entropy. We go out of our way to leak that
timing information as accurately as possible in the typical server
configuration. It's still ok to use as seed though.

--
"Love the dolphins," she advised him. "Write by W.A.S.T.E.."

2002-08-18 06:14:09

by Oliver Xymoron

[permalink] [raw]
Subject: Re: [PATCH] (0/4) Entropy accounting fixes

On Sat, Aug 17, 2002 at 10:05:49PM -0700, Dmitri wrote:
>
> Wouldn't it be much easier to ask -very few- people (GnuPG/SSL/SSH teams
> primarily) to use /dev/super-reliable-mathematically-proven-random if
> available, instead of asking a much larger crowd to hack their code?

Most people (including OpenSSH!) are already using /dev/urandom where
appropriate.

If you care about the difference between /dev/random and /dev/urandom,
then you ought to care about the difference _actually being there_. If
your entropy estimates are not conservative, then your system will
leak entropy faster than it takes it in and then /dev/random and
/dev/urandom will be identical _by definition_.

> This will be backward compatible, and at the same time offers a much
> better randomness for those who care about it. Myself, I read
> 128-bit session keys for multiple, not-so-secure, short connections
> from /dev/random and it would be sad if it runs out of data.

Why would that be sad? It's at least billions of times easier to break
a 128-bit key than to guess the internal state of /dev/urandom, even
if the system has no entropy sources.

> Also, /dev/random may take data from /dev/super-...random until it sucks
> it dry, and then switches to less secure sources. This will guarantee that
> the enthropy of readings is -not worse than-, and for moderate requests is
> much better.

Simple enough:

mv /dev/random /dev/super-random
ln -s /dev/urandom /dev/random

Backward compatible and everything.

--
"Love the dolphins," she advised him. "Write by W.A.S.T.E.."

2002-08-18 06:26:55

by Robert Love

[permalink] [raw]
Subject: Re: [PATCH] (0/4) Entropy accounting fixes

On Sun, 2002-08-18 at 00:01, Linus Torvalds wrote:

> So I think that if we just made the code be much less trusting (say,
> consider the TSC information per interrupt to give only a single bit of
> entropy, for example), and coupled that with making network devices always
> be considered sources of entropy, we'd have a reasonable balance.

I think that sounds good.

I have a patch which I can send - it needs to be rediffed I suspect -
that has each network device feed the entropy pool. (Actually, it
creates a new flag, SA_NET_RANDOM, that defines to SA_SAMPLE_RANDOM or 0
depending on a configure setting. If you want it unconditional, that is
just as easy though).

Robert Love

2002-08-18 06:44:49

by Oliver Xymoron

[permalink] [raw]
Subject: Re: [PATCH] (0/4) Entropy accounting fixes

On Sun, Aug 18, 2002 at 02:31:02AM -0400, Robert Love wrote:
> On Sun, 2002-08-18 at 00:01, Linus Torvalds wrote:
>
> > So I think that if we just made the code be much less trusting (say,
> > consider the TSC information per interrupt to give only a single bit of
> > entropy, for example), and coupled that with making network devices always
> > be considered sources of entropy, we'd have a reasonable balance.
>
> I think that sounds good.
>
> I have a patch which I can send - it needs to be rediffed I suspect -
> that has each network device feed the entropy pool. (Actually, it
> creates a new flag, SA_NET_RANDOM, that defines to SA_SAMPLE_RANDOM or 0
> depending on a configure setting. If you want it unconditional, that is
> just as easy though).

I think I have a compromise that'll make everyone happy here. I've
added a tunable called trust_pct in /proc/sys/kernel/random. Set it to
100% and untrusted sources will always add a bit of entropy. Defaults
to zero, which should be fine for everyone who's got a head.

Then we can add SA_SAMPLE_RANDOM for network devices unconditionally.

Untested, applies on top of my previous patches:

diff -ur a/drivers/char/random.c b/drivers/char/random.c
--- a/drivers/char/random.c 2002-08-17 20:54:02.000000000 -0500
+++ b/drivers/char/random.c 2002-08-18 01:38:58.000000000 -0500
@@ -724,6 +724,7 @@
*/
static int benford[16]={0,0,0,1,2,3,4,5,5,6,7,7,8,9,9,10};
static int last_ctxt=0;
+static int trust_break=50, trust_pct=0, trust_min=0, trust_max=100;

void add_timing_entropy(void *src, unsigned datum)
{
@@ -764,6 +765,18 @@
delta>>=es->shift;
bits=benford[int_log2_16bits(delta & 0xffff)];
}
+
+ /* Throw in an untrusted bit as entropy trust_pct% of the time */
+ if(trust_pct && !bits)
+ {
+ trust_break+=trust_pct;
+ if(trust_break>=100)
+ {
+ bits=1;
+ trust_break-=100;
+ }
+ }
+
batch_entropy_store(datum^time, bits);
}

@@ -1779,6 +1792,10 @@
{RANDOM_UUID, "uuid",
NULL, 16, 0444, NULL,
&proc_do_uuid, &uuid_strategy},
+ {RANDOM_TRUST_PCT, "trust_pct",
+ &trust_pct, sizeof(int), 0644, NULL,
+ &proc_dointvec_minmax, &sysctl_intvec, 0,
+ &trust_min, &trust_max},
{0}
};

diff -ur a/include/linux/sysctl.h b/include/linux/sysctl.h
--- a/include/linux/sysctl.h 2002-08-17 00:30:00.000000000 -0500
+++ b/include/linux/sysctl.h 2002-08-18 01:37:54.000000000 -0500
@@ -182,7 +182,8 @@
RANDOM_READ_THRESH=3,
RANDOM_WRITE_THRESH=4,
RANDOM_BOOT_ID=5,
- RANDOM_UUID=6
+ RANDOM_UUID=6,
+ RANDOM_TRUST_PCT=7
};

/* /proc/sys/bus/isa */
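
Usage, assuming the patch applies as-is (untested, like the patch
itself):

# defaults to 0 - untrusted samples are mixed in but never credited
cat /proc/sys/kernel/random/trust_pct
# on a headless box, credit 1 bit for a quarter of untrusted samples
echo 25 > /proc/sys/kernel/random/trust_pct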


--
"Love the dolphins," she advised him. "Write by W.A.S.T.E.."

2002-08-18 07:27:40

by Bernd Eckenfels

[permalink] [raw]
Subject: Re: [PATCH] (0/4) Entropy accounting fixes

dean gaudet <[email protected]> wrote:
> many southbridges come with audio these days ... isn't it possible to get
> randomness off the adc even without anything connected to it?

they also come with RNGs.

A question: what is the danger of the network entropy? Is it that one is
afraid that snooping could gather knowledge about the random source, or is
it more along the line that one fears that specially devised packets can
feed manufactured, and therefore not random, bits?

In the first case, how big would the hardware processing variance for an
interrupt be? Is it really predictable from sniffing the network how that
will result in interrupts?

How can softinterrupts help here?

Greetings
Bernd

2002-08-18 10:22:34

by Alan

[permalink] [raw]
Subject: Re: [PATCH] (0/4) Entropy accounting fixes

On Sun, 2002-08-18 at 05:06, dean gaudet wrote:
> On 17 Aug 2002, Robert Love wrote:
>
> > [1] this is why I wrote my netdev-random patches. some machines just
> > have to take the entropy from the network card... there is nothing
> > else.
>
> many southbridges come with audio these days ... isn't it possible to get
> randomness off the adc even without anything connected to it?

Both the AMD and Intel bridges also come with high speed random number
generators (i810-rng, amd768-rng). ADC randomness itself tends to be
very suspect.


2002-08-18 10:26:32

by Alan

[permalink] [raw]
Subject: Re: [PATCH] (0/4) Entropy accounting fixes

On Sun, 2002-08-18 at 04:25, Linus Torvalds wrote:
> Hmm.. After more reading, it looks like (if I understood correctly) that
> since network activity isn't considered trusted -at-all-, your average
> router / firewall / xxx box will not _ever_ get any output from
> /dev/random what-so-ever. Quite regardless of the context switch issue,
> since that only triggers for trusted sources. So it was even more
> draconian than I expected.

The current policy has always been not to trust events that are
precisely externally controllable. Oliver hasn't changed the network
policy there at all.

It's probably true there are low bits of randomness available from such
sources providing we know the machine has a tsc, unless the I/O APIC is
clocked at a divider of the processor clock in which case our current
behaviour is probably much saner.

With modern systems that have real RNGs it's a non-issue.

2002-08-18 15:04:44

by Oliver Xymoron

[permalink] [raw]
Subject: Re: [PATCH] (0/4) Entropy accounting fixes

On Sun, Aug 18, 2002 at 11:30:05AM +0100, Alan Cox wrote:
>
> It's probably true there are low bits of randomness available from such
> sources providing we know the machine has a tsc, unless the I/O APIC is
> clocked at a divider of the processor clock in which case our current
> behaviour is probably much saner.

I actually looked at this a bit for embedded applications - there's a
technique for pulling entropy from free-running clock skew.

Unfortunately with modern chipsets, just about all clocks are
generated from a single source these days. This is even true in the
non-x86 world now. Any extra clocks you have are likely to be out in
peripheral-land and too slow (serial port) or too inaccessible (VGA
dot clocks) to be interesting.
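
(For reference, the usual shape of that technique - a userspace sketch
of mine, which only yields real entropy if the two clocks really are
independent, precisely what single-source designs defeat:)

/* Clock-skew sampling: count busy-loop iterations between ticks of a
 * second timer. If the CPU clock free-runs against the tick source,
 * the low bits of the count jitter. Discard the first, partial sample
 * in real use. */
#include <stdio.h>
#include <time.h>

int main(void)
{
    int i;

    for (i = 0; i < 4; i++) {
        unsigned long count = 0;
        time_t t = time(NULL);

        while (time(NULL) == t)     /* spin until the next tick */
            count++;
        printf("sample %d: low bits %lx\n", i, count & 0xful);
    }
    return 0;
}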

> With modern systems that have real RNGs it's a non-issue.

Thankfully.

--
"Love the dolphins," she advised him. "Write by W.A.S.T.E.."

2002-08-18 16:50:30

by Robert Love

[permalink] [raw]
Subject: Re: [PATCH] (0/4) Entropy accounting fixes

On Sun, 2002-08-18 at 00:28, Oliver Xymoron wrote:

> Now there's currently an orthogonal problem that /dev/urandom can
> deplete /dev/random's entropy pool entirely and things like generating
> sequence numbers get in the way. I intend to fix that separately.

Curious, how do you intend on doing this?

The reason /dev/urandom depletes /dev/random (it actually just lowers
the entropy estimate) is due to the assumption that the state is now
known, so the estimate of how random (actually, we mean
non-deterministic here) the pool is needs to decrease...

Are you thinking of (semi) decoupling the two pools?

I sort of agree with you that we should make random "stricter" and push
some of the sillier uses onto urandom... but only to a degree or else
/dev/random grows worthless. Right now, on my workstation with plenty
of user input and my netdev-random patches, I have plenty of entropy and
can safely use random for everything... and I like that.

Robert Love

2002-08-18 16:52:22

by Linus Torvalds

[permalink] [raw]
Subject: Re: [PATCH] (0/4) Entropy accounting fixes


On Sun, 18 Aug 2002, Oliver Xymoron wrote:
>
> The key word is actually conservative, as in conservative estimate.
> Conservative here means less than or equal to.

Your argument is that even with a gigahertz logic analyzer on the network
line, you should certainly see randomness that is worth considering.

I dare you to actually show perfect correlation from it: the interrupt may
be synchronized to the PCI clock, but the code executed there-after
certainly will not. And even if the machine is 100% idle, and the whole
working set fits in the L1 cache, the DMA generated by the packet itself
will result in cache invalidations.

In other words, in order for you to actually be able to predict the TSC
from the outside, you'd have to not just have the gigahertz logic analyzer
on the network line, you'd also have to be able to correlate the ethernet
heartbeat to the PCI clock (which you probably could do by looking at the
timing of the reply packets from a ping flood, although it would be
"interesting" to say the least and probably depends on how the network card
generates the ethernet clock), _and_ you'd have to be able to do a cache
eviction analysis (which in turn requires knowing the initial memory
layout for the kernel data structures for networking).

And your argument that there is zero randomness in the TSC _depends_ on
your ability to perfectly estimate what the TSC is. If you cannot do it,
there is obviously at least one bit of randomness there. So I don't think
your "zero" is a good conservative estimate.

At some point being conservative turns into being useless [ insert
obligatory political joke here ].

[ Side note: the most common source of pseudo-random numbers is the old
linear congruential generator, which really is a sampling of a "beat"
between two frequencies that are supposed to be "close", but prime.

That's a fairly simple and accepted pseudo-random generator _despite_
the fact that the two frequencies are totally known, and there is zero
noise inserted. I'll bet you'll see a _very_ hard-to-predict stream from
something like the PCI clock / CPU clock thing, with noise inserted
thanks to things like cache misses and shared bus interactions. Never
mind the _real_ noise of having a work-load. ]
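
(A minimal example of that generator family - my sketch; the constants
are the classic Numerical Recipes pair, not anything the kernel uses:)

/* Linear congruential generator: x' = a*x + c (mod 2^32). Fully
 * deterministic with known constants, yet the stream looks noisy;
 * the high-order bits are the better-behaved ones. */
#include <stdio.h>

int main(void)
{
    unsigned int x = 1;     /* seed */
    int i;

    for (i = 0; i < 8; i++) {
        x = x * 1664525u + 1013904223u;
        printf("%08x\n", x);
    }
    return 0;
}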

> No, it says /dev/random is primarily useful for generating large
> (>>160 bit) keys.

Which is exactly what something like sshd would want to use for generating
keys for the machine, right? That is _the_ primary reason to use
/dev/random.

Yet apparently our /dev/random has been too conservative to be actually
useful, because (as you point out somewhere else) even sshd uses
/dev/urandom for the host key generation by default.

That is really sad. That is the _one_ application that is common and that
should really have a reason to maybe care about /dev/random vs urandom.
And that application uses urandom. To me that says that /dev/random has
turned out to be less than useful in real life.

Is there anything that actually uses /dev/random at all (except for
clueless programs that really don't need to)?

Please realize that _this_ is my worry: making /dev/random so useless
that any practical program has no choice but to look elsewhere.

> Actually, half of the point here is in fact to make /dev/urandom safer
> too, by allowing mixing of untrusted data that would otherwise
> compromise /dev/random.

Now this I absolutely agree with. The xor'ing of the buffer data is
clearly a good idea. I agree 100% with this part. You'll see no arguments
against this part at all.

> 99.9% of users aren't using network sampling
> currently; after these patches we can turn it on for everyone and
> still sleep well at night. See?

Oh, that's the _good_ part. Yes.

The bad part is that I think our current /dev/random is close to useless
already, and I'd like to reverse that trend.

> That is an interesting point. A counterpoint: if we account so much as
> 1 bit of entropy per network interrupt on a typical system, the system
> will basically _always_ feel comfortable (see /proc/interrupts). It will
> practically never block, and thus it is again identical to /dev/urandom.

But what's the problem with that? The "/dev/random may block" is not the
intrinsic value of /dev/random - if people want to wait they are much
better off just using "sleep(1)" than trying to read from /dev/random.

My argument is that on a typical system there really _is_ so much
randomness that /dev/random is actually a useful thing. I think that you'd
have to _work_ at finding a system where /dev/random should block under
any normal use (where "normal use" also obviously means that only programs
that really need it would use it, ie ssh-keygen etc).

Linus

2002-08-18 16:54:37

by Robert Love

[permalink] [raw]
Subject: Re: [PATCH] (0/4) Entropy accounting fixes

On Sun, 2002-08-18 at 03:31, Bernd Eckenfels wrote:

> A question: what is the danger of the network entropy? Is it that one is
> afraid that snooping could gather knowledge about the random source, or is
> it more along the line that one fears that specially devised packets can
> feed manufactured, and therefore not random, bits?

The latter. Some believe someone on your local network, who understands
the timing of the physical layers and devices, can influence the entropy
pool. Further, then, we assume SHA is cracked. Then they know enough
state and can presumably determine the output of [u]random.

Of course, this is all theoretical but I would not gamble in certain
situations either. In some, like my workstation on my network, I feel
safe and enjoy having sshd not block waiting for entropy. So I wrote
the netdev-random stuff...

Note the patches work in the converse manner, too: _today_ there are
network drivers that contribute entropy. With netdev-random you can
stop that if you wish.

> In the first case, how big would the hardware processing variance for an
> interrupt be? Is it really predictable from sniffing the network how that
> will result in interrupts?

Possibly.

> How can softinterrupts help here?

Under load the execution of softinterrupts is virtually
non-deterministic, so they do help, but we take the timing off the
interrupt handler itself.

Robert Love

2002-08-18 16:59:59

by Robert Love

[permalink] [raw]
Subject: Re: [PATCH] (0/4) Entropy accounting fixes

On Sun, 2002-08-18 at 12:59, Linus Torvalds wrote:

> Is there anything that actually uses /dev/random at all (except for
> clueless programs that really don't need to)?

Some OpenSSH installs must use /dev/random (either an earlier version
than what Oliver quoted or the distribution changed it) because I have
seen headless/diskless machines where they block on ssh session key
generation indefinitely. I wrote my netdev-random to solve this...

We have seen similar stuff on embedded devices at MontaVista.

> Now this I absolutely agree with. The xor'ing of the buffer data is
> clearly a good idea. I agree 100% with this part. You'll see no arguments
> against this part at all.

Yes this is _very_ smart.

Robert Love

2002-08-18 17:16:43

by Robert Love

[permalink] [raw]
Subject: Re: [PATCH] (0/4) Entropy accounting fixes

On Sun, 2002-08-18 at 13:18, Oliver Xymoron wrote:

> I'm thinking of changing urandom read to avoid pulling entropy from
> the primary pool (via xfer) if it falls below a given low
> watermark. The important part is to prevent starvation of
> /dev/random, it doesn't have to be fair.

A watermark (perhaps /proc configurable) is a very sane way of doing
this. Great.

> My patches should provide sufficient entropy for any workstation use
> with or without network sampling. It's only the headless case that's
> problematic - see my compromise patch with trust_pct.

Sounds good to me, so we shall see...

Robert Love

2002-08-18 17:14:56

by Oliver Xymoron

[permalink] [raw]
Subject: Re: [PATCH] (0/4) Entropy accounting fixes

On Sun, Aug 18, 2002 at 12:54:27PM -0400, Robert Love wrote:
> On Sun, 2002-08-18 at 00:28, Oliver Xymoron wrote:
>
> > Now there's currently an orthogonal problem that /dev/urandom can
> > deplete /dev/random's entropy pool entirely and things like generating
> > sequence numbers get in the way. I intend to fix that separately.
>
> Curious, how do you intend on doing this?
>
> The reason /dev/urandom depletes /dev/random (it actually just lowers
> the entropy estimate) is due to the assumption that the state is now
> known, so the estimate of how random (actually, we mean
> non-deterministic here) the pool is needs to decrease...

Indeed. However, we have to be careful about how we do this. See
Schneier et al.'s paper on catastrophic reseeding. It's better to
batch up entropy transfers than to trickle them in.

> Are you thinking of (semi) decoupling the two pools?

I'm thinking of changing urandom read to avoid pulling entropy from
the primary pool (via xfer) if it falls below a given low
watermark. The important part is to prevent starvation of
/dev/random, it doesn't have to be fair.

> I sort of agree with you that we should make random "stricter" and push
> some of the sillier uses onto urandom... but only to a degree or else
> /dev/random grows worthless. Right now, on my workstation with plenty
> of user input and my netdev-random patches, I have plenty of entropy and
> can safely use random for everything... and I like that.

My patches should provide sufficient entropy for any workstation use
with or without network sampling. It's only the headless case that's
problematic - see my compromise patch with trust_pct.

--
"Love the dolphins," she advised him. "Write by W.A.S.T.E.."

2002-08-18 17:27:53

by Jonathan Lundell

[permalink] [raw]
Subject: Re: [PATCH] (0/4) Entropy accounting fixes

At 11:30 AM +0100 8/18/02, Alan Cox wrote:
>It's probably true there are low bits of randomness available from such
>sources providing we know the machine has a tsc, unless the I/O APIC is
>clocked at a divider of the processor clock in which case our current
>behaviour is probably much saner.

The clock spec for the APIC bus makes it very convenient to clock it
at 16.667 MHz, 1/2 (or some other submultiple) of the PCI clock,
which of course would be highly correlated with the low bits of TSC.
--
/Jonathan Lundell.

2002-08-18 17:27:48

by Oliver Xymoron

[permalink] [raw]
Subject: Re: [PATCH] (0/4) Entropy accounting fixes

On Sun, Aug 18, 2002 at 09:59:41AM -0700, Linus Torvalds wrote:
>
> On Sun, 18 Aug 2002, Oliver Xymoron wrote:
> >
> > The key word is actually conservative, as in conservative estimate.
> > Conservative here means less than or equal to.
>
> Your argument is that even with a gigahertz logic analyzer on the network
> line, you should certainly see randomness that is worth considering.
>
> I dare you to actually show perfect correlation from it: the interrupt may
> be synchronized to the PCI clock, but the code executed there-after
> certainly will not. And even if the machine is 100% idle, and the whole
> working set fits in the L1 cache, the DMA generated by the packet itself
> will result in cache invalidations.
>
> In other words, in order for you to actually be able to predict the TSC
> from the outside, you'd have to not just have the gigahertz logic analyzer
> on the network line, you'd also have to be able to correlate the ethernet
> heartbeat to the PCI clock (which you probably could do by looking at the
> timing of the reply packets from a ping flood, although it would be
> "interesting" to say the least and probably depends on how the network card
> generates the ethernet clock), _and_ you'd have to be able to do a cache
> eviction analysis (which in turn requires knowing the initial memory
> layout for the kernel data structures for networking).

Yes, obviously 'nontrivial'. But then so are sideband attacks on smart
cards.

> And your argument that there is zero randomness in the TSC _depends_ on
> your ability to perfectly estimate what the TSC is. If you cannot do it,
> there is obviously at least one bit of randomness there. So I don't think
> your "zero" is a good conservative estimate.

Have you seen my compromise patch yet? I think this (or something like
it) should make us both happy.

diff -ur a/drivers/char/random.c b/drivers/char/random.c
--- a/drivers/char/random.c 2002-08-17 20:54:02.000000000 -0500
+++ b/drivers/char/random.c 2002-08-18 01:38:58.000000000 -0500
@@ -724,6 +724,7 @@
*/
static int benford[16]={0,0,0,1,2,3,4,5,5,6,7,7,8,9,9,10};
static int last_ctxt=0;
+static int trust_break=50, trust_pct=0, trust_min=0, trust_max=100;

void add_timing_entropy(void *src, unsigned datum)
{
@@ -764,6 +765,18 @@
delta>>=es->shift;
bits=benford[int_log2_16bits(delta & 0xffff)];
}
+
+ /* Throw in an untrusted bit as entropy trust_pct% of the time */
+ if(trust_pct && !bits)
+ {
+ trust_break+=trust_pct;
+ if(trust_break>=100)
+ {
+ bits=1;
+ trust_break-=100;
+ }
+ }
+
batch_entropy_store(datum^time, bits);
}

@@ -1779,6 +1792,10 @@
{RANDOM_UUID, "uuid",
NULL, 16, 0444, NULL,
&proc_do_uuid, &uuid_strategy},
+ {RANDOM_TRUST_PCT, "trust_pct",
+ &trust_pct, sizeof(int), 0644, NULL,
+ &proc_dointvec_minmax, &sysctl_intvec, 0,
+ &trust_min, &trust_max},
{0}
};

diff -ur a/include/linux/sysctl.h b/include/linux/sysctl.h
--- a/include/linux/sysctl.h 2002-08-17 00:30:00.000000000 -0500
+++ b/include/linux/sysctl.h 2002-08-18 01:37:54.000000000 -0500
@@ -182,7 +182,8 @@
RANDOM_READ_THRESH=3,
RANDOM_WRITE_THRESH=4,
RANDOM_BOOT_ID=5,
- RANDOM_UUID=6
+ RANDOM_UUID=6,
+ RANDOM_TRUST_PCT=7
};

/* /proc/sys/bus/isa */


>
> [ Side note: the most common source of pseudo-random numbers is the old
> linear congruential generator, which really is a sampling of a "beat"
> between two frequencies that are supposed to be "close", but prime.
>
> That's a fairly simple and accepted pseudo-random generator _despite_
> the fact that the two frequencies are totally known, and there is zero
> noise inserted. I'll bet you'll see a _very_ hard-to-predict stream from
> something like the PCI clock / CPU clock thing, with noise inserted
> thanks to things like cache misses and shared bus interactions. Never
> mind the _real_ noise of having a work-load. ]

In practically all modern machines, the CPU clock is driven off the
same source as the PCI clock, the northbridge clock, and every other
clock used in the core.

> > No, it says /dev/random is primarily useful for generating large
> > (>>160 bit) keys.
>
> Which is exactly what something like sshd would want to use for generating
> keys for the machine, right? That is _the_ primary reason to use
> /dev/random.

Sshd doesn't generate large keys, ssh-keygen does.

--
"Love the dolphins," she advised him. "Write by W.A.S.T.E.."

2002-08-19 04:17:46

by Theodore Ts'o

[permalink] [raw]
Subject: Re: [PATCH] (0/4) Entropy accounting fixes

On Sun, Aug 18, 2002 at 12:38:59AM -0500, Oliver Xymoron wrote:
> On Sat, Aug 17, 2002 at 09:01:20PM -0700, Linus Torvalds wrote:
> >
> > On 17 Aug 2002, Robert Love wrote:
> > >
> > > [1] this is why I wrote my netdev-random patches. some machines just
> > > have to take the entropy from the network card... there is nothing
> > > else.
> >
> > I suspect that Oliver is 100% correct in that the current code is just
> > _too_ trusting. And parts of his patches seem to be in the "obviously
> > good" category (ie the xor'ing of the buffers instead of overwriting).
>
> Make sure you don't miss this bit; I should have sent it
> separately. This is a longstanding bug that manufactures about a
> thousand bits out of thin air when the pool runs dry.

There's a reason why I did what I did here, and it has to do with an
attack which Bruce Schneier describes in his Yarrow paper:

http://www.counterpane.com/yarrow-notes.html

called the "iterative guessing attack". Assume that the adversary
somehow knows the current state of the pool. This could be because the
initial state was known to the attacker, either because it was in a
known, initialized state (this could happen if the distribution
doesn't save the state of the pool via an /etc/init.d/random script),
or because the attacker managed to capture the initial seed file used
by the /etc/init.d/random script. Now what the attacker can do is
periodically sample the pool, and attempt to explore all possible
values which have been mixed into the pool that would result in the
value which he/she read out of /dev/random.

So in fact, by being more selective about which values get mixed into
the pool, you can actually help the iterative guessing attack! That's
why the current code tries to mix in sufficient randomness to
completely reseed the secondary extraction pool, and not just enough
randomness for the number of bytes required. This was a deliberate
design decision to try to get the benefits of Yarrow's "catastrophic
reseeding".

Your complaint in terms of "manufacturing about a thousand bits out of
thin air" is a fair one, but it depends on how you view things. From
the point of view of absolute randomness, you're of course right. If
the primary pool only has 100 bits of randomness, and
xfer_secondary_pool attempts to transfer 1100 bits of randomness, it
drains the primary pool down to 0, but credits the secondary pool with
1100 bits of randomness, and yes, we have "created" a thousand bits of
randomness.

That being said though, the adversary only gets to see results
pulled out of the secondary pool, and the primary pool is completely
hidden from the adversary. So when xfer_secondary_pool extracts a
large amount of randomness from the primary pool, it's doing so using
extract_entropy(), which uses SHA to extract randomness from the
primary pool. Significant amounts of cryptographic analysis (which
would also as a side effect break the SHA hash) would be required in
order to figure out information in the primary pool based solely on
the outputs that are being fed into the secondary pool.

So is it legitimate to credit the secondary pool with 1100 bits of
randomness even though the primary pool only had 100 bits of
randomness in it? Maybe. It depends on whether you care more about
"absolute randomness", or "cryptographic randomness". Yarrow relies
entirely on cryptographic randomness; the effective size of its
primary and secondary pools are 160 bits and 112 bits, respectively.

I tried to take a bit more of a moderate position between relying
solely on cryptographic randomness and a pure absolute randomness model.
So we use large pools for mixing, and a catastrophic reseeding policy.

From a pure theory point of view, I can see where this might be quite
bothersome. On the other hand, practically, I think what we're doing
is justifiable, and not really a security problem.

That being said, if you really want to use your patch, please do it
differently. In order to avoid the iterative guessing attack
described by Bruce Schneier, it is imperative that you extract
r->poolinfo.poolwords - r->entropy_count/32 words of randomness from
the primary pool, and mix it into the secondary. However, if you want
to save the entropy count from the primary pool, and use that to cap
the amount of entropy which is credited into the secondary pool, so
that entropy credits aren't "manufactured", that's certainly
acceptable. It would make /dev/random much more conservative about its
entropy count, which might not be a bad thing from the point of view
of encouraging people to use it only for the creation of long-term
keys, and not to use it for generation of session keys.
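
In code, the combination I'm suggesting might look roughly like the
sketch below (names and signatures only approximate our
drivers/char/random.c interfaces; treat it as illustrative, not as the
literal code):

/*
 * Sketch: always pull enough words for a catastrophic reseed of the
 * secondary pool, but cap the entropy credited at what the primary
 * pool actually claimed to hold.
 */
static void xfer_secondary_pool_capped(struct entropy_store *primary,
                                       struct entropy_store *secondary)
{
        __u32 tmp[POOLWORDS];
        int words = secondary->poolinfo.poolwords -
                    secondary->entropy_count / 32;
        int credit = primary->entropy_count;    /* bits, sampled first */

        if (credit > words * 32)
                credit = words * 32;

        /* extract_entropy() drains the primary through SHA */
        extract_entropy(primary, tmp, words * 4, 0);
        add_entropy_words(secondary, tmp, words);

        /* credit only what the primary really had */
        secondary->entropy_count += credit;
}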

- Ted

P.S. /dev/urandom should probably also be changed to use an entirely
separate pool, which then periodically pulls a small amount of entropy
from the primary pool as necessary. That would make /dev/urandom
slightly more dependent on the strength of SHA, while causing it to
not draw down as heavily on the entropy stored in /dev/random, which
would be a good thing.

2002-08-19 05:40:02

by Theodore Ts'o

[permalink] [raw]
Subject: Re: [PATCH] (0/4) Entropy accounting fixes

On Sat, Aug 17, 2002 at 09:15:22PM -0500, Oliver Xymoron wrote:
>
> Non-Uniform Distribution Of Timing
>
> Unfortunately, our sample source is far from uniform. For starters, each
> interrupt has a finite time associated with it - the interrupt latency.
> Back to back interrupts will result in samples that are periodically
> spaced by a fixed interval.

This is true. But that's also why /dev/random measures not just the
first order delta, but the second and third order deltas as well, and
takes the minimum of the three deltas, capped at 12 bits, as the
entropy estimate. So for example, if the interrupts are happening as
fast as possible (back to back) and there is merely the fixed interval
caused by the interrupt latency, then delta_1 will be the interrupt
latency, but delta_2 will be zero!

The rest of your analysis also ignores the fact that the current
/dev/random code uses the second and third order deltas (think of
them as second and third order derivatives, except that we're in the
discrete rather than continuous domain).
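
For reference, a minimal userspace rendition of that estimator
(simplified, with invented helper names; not the literal kernel code):

#include <stdlib.h>

struct timer_state {
        unsigned int last_time;
        int last_delta, last_delta2;
};

static int log2_capped(unsigned int x)  /* floor(log2), at most 12 */
{
        int n = 0;
        while (x > 1 && n < 12) {
                x >>= 1;
                n++;
        }
        return n;
}

static int estimate_bits(struct timer_state *s, unsigned int now)
{
        int delta  = (int)(now - s->last_time);
        int delta2 = delta  - s->last_delta;
        int delta3 = delta2 - s->last_delta2;

        s->last_time   = now;
        s->last_delta  = delta;
        s->last_delta2 = delta2;

        delta  = abs(delta);
        delta2 = abs(delta2);
        delta3 = abs(delta3);
        if (delta2 < delta)
                delta = delta2;
        if (delta3 < delta)
                delta = delta3;

        /* back-to-back interrupts: delta2 == 0, so 0 bits credited */
        return log2_capped(delta);
}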

> Assuming the interrupt actually has a nice gamma-like distribution
> (which is unlikely in practice), then this is indeed true. The
> trouble is that Linux assumes that if a delta is 13 bits, it contains
> 12 bits of actual entropy. A moment of thought will reveal that
> binary numbers of the form 1xxxx can contain at most 4 bits of
> entropy - it's a tautology that all binary numbers start with 1 when
> you take off the leading zeros. This is actually a degenerate case of
> Benford's Law (http://mathworld.wolfram.com/BenfordsLaw.html), which
> governs the distribution of leading digits in scale invariant
> distributions.
>
> What we're concerned with is the entropy contained in digits
> following the leading 1, which we can derive with a simple extension
> of Benford's Law (and some Python):

I'm not a statistician, and my last probability class was over 15
years ago, but I don't buy your extension of Benford's law. Sure, if
we take street addresses numbering from 1 to 10000, the probability
that the leading digit will be 1 will be roughly 30%. But the
low two digits will be quite evenly distributed.
Put another way, by picking a house at random and looking at the low
two digits, you can very fairly choose a number between 0 and 99.

> Interrupt Timing Independence
> -----------------------------
>
> Linux entropy estimate also wrongly assumes independence of different
> interrupt sources. While SMP complicates the matter, this is
> generally not the case. Low-priority interrupts must wait on high
> priority ones and back to back interrupts on shared lines will
> serialize themselves ABABABAB. Further system-wide CLI, cache flushes
> and the like will skew -all- the timings and cause them to bunch up
> in predictable fashion.
>
> Furthermore, all this is observable from userspace in the same way
> that worst-case latency is measured.
>
> To protect against back to back measurements and userspace
> observation, we insist that at least one context switch has occurred
> since we last sampled before we trust a sample.

First of all, the second order delta already protects against
back-to-back measurements.

Secondly, what is observable from userspace is time which is not spent
in a particular process. But whether that time was spent in the
system or in another process is not at all obvious, and it's also not
obvious whether that time is spent handling interrupts or processing
bottom half tasks (i.e., processing network packets, et al.).
Moreover, it is not observable to the user process when multiple
interrupts might be happening in this whole mess.

That being said, global CLI's are a problem in that it does mean that
multiple interrupts could be serviced at the same time, and while the
outside adversary won't know exactly when a global CLI/STI might have
happened, it does reduce the quality of the randomness. The solution
to this though is to avoid global CLI's, not to throw away randomness
samples until after a context switch.

> Questionable Sources and Time Scales
> ------------------------------------
>
> Due to the vagaries of computer architecture, things like keyboard
> and mouse interrupts occur on their respective scanning or serial
> clock edges, and are clocked relatively slowly. Worse, devices like
> USB keyboards, mice, and disks tend to share interrupts and probably
> line up on USB clock boundaries. Even PCI interrupts have a
> granularity on the order of 33MHz (or worse, depending on the
> particular adapter), which when timed by a fast processor's 2GHz
> clock, make the low six bits of timing measurement predictable.

We are not mixing in solely the low order bits of the timing
measurement; we're mixing in the entire timestamp. So the fact that
the low-order bits of the timing measurement might be predictable
isn't necessarily a problem.

We are looking at the low 12 bits of the first, second, and third
order deltas when estimating the entropy. So predictability in the
low six bits of the timing measurement will tend to drag the entropy
estimate down, not up.

> And as far as I can find, no one's tried to make a good model or
> estimate of actual keyboard or mouse entropy.

Since a human is involved, and we're measuring to a fairly high level
of accuracy, I'm not particularly worried.

> Randomness caused by
> disk drive platter turbulence has actually been measured and is on
> the order of 100bits/minute and is well correlated on timescales of
> seconds - we're likely way overestimating it.

This has worried me. The Davis paper was done a long time ago and I
suspect the turbulence values have gone down over time, not up. But
let's be clear what the problem is. If the adversary knows the exact
stream of requests sent to a disk, it can probably model the time
necessary to service those requests very accurately --- possibly
missing much less than the 100 bits/minute guesstimated by the Davis
paper, given modern disk drives.

So when we measure the disk drive completion interrupts, what we are
really measuring is the attacker's uncertainty about exactly *when*
those disk drive requests were made, and in what order the disk drive
requests were sent to it.

Can the adversary control this, or know this? Sometimes, to a certain
extent. But remember, we're using a very high resolution timer, and
while the adversary might know the rough time, to a few milliseconds,
when a request was issued, it would be much harder for the adversary
to know what the exact time stamp clock value was at a particular
event.

> Trusting Predictable or Measurable Sources
> ------------------------------------------
>
> What entropy can be measured from disk timings are very often leaked
> by immediately relaying data to web, shell, or X clients. Further,
> patterns of drive head movement can be remotely controlled by clients
> talking to file and web servers. Thus, while disk timing might be an
> attractive source of entropy, it can't be used in a typical server
> environment without great caution.

This is something to be concerned about, to be sure. But generally a
client won't have complete control of the drive head movement ---
there are other clients involved --- and the adversary generally won't
have complete knowledge of the block allocation of files, for example,
so he/she would not be able to characterize the disk drive timings to
the degree of accuracy required.

Certainly a major concern that I've always had is measuring network
device interrupts, since packet arrival times can be measured by an
outsider. Such an attack would require that the adversary have a
sophisticated device on the local LAN segment of the attack victim
(since the attacker needs to see all of the packets directed at the
victim in order to make guesses about the inter-packet arrival
times). So how practical this attack might be is certainly quite
debatable.

> (Incidentally, tricks like Matt Blaze's truerand clock drift
> technique probably don't work on most PCs these days as the
> "realtime" clock source is often derived directly from the
> bus/PCI/memory/CPU clock.)

Actually, many peripherals do have their own clock crystals and clock
circuitry (network cards, serial cards, IDE disks, etc.).....

> Batching
> --------
>
> Samples to be mixed are batched into a 256 element ring
> buffer. Because this ring isn't allowed to wrap, it's dangerous to
> store untrusted samples as they might flood out trusted ones.

It's not dangerous in the sense that we might suffer a security breach
(assuming that our entropy estimation routines are accurate). It's a
bad thing in that we might lose some good randomness, but that's not a
disaster.

That being said, if we are running out of space in the ring buffer, it
would be good to increase its size.

- Ted

2002-08-19 10:11:22

by Marco Colombo

[permalink] [raw]
Subject: Re: [PATCH] (0/4) Entropy accounting fixes

On Mon, 19 Aug 2002, Theodore Ts'o wrote:

[...]
> P.S. /dev/urandom should probably also be changed to use an entirely
> separate pool, which then periodically pulls a small amount of entropy
> from the primary pool as necessary. That would make /dev/urandom
> slightly more dependent on the strength of SHA, while causing it to
> not draw down as heavily on the entropy stored in /dev/random, which
> would be a good thing.

Shouldn't it be moved to userspace, instead? Pulling a small amount
of entropy from /dev/random can be done in userspace, too. And the
application could choose *how often* and *how many* bits to pull.
The kernel can only make a choice which may be too much for an application
(making it drain more entropy than it needs) or too little for another
(forcing it to use /dev/random directly). Let the kernel implement
the Real Thing only (/dev/random). /dev/urandom really belongs to
userspace.

.TM.

2002-08-19 10:22:14

by Oliver Neukum

[permalink] [raw]
Subject: Re: [PATCH] (0/4) Entropy accounting fixes

Am Montag, 19. August 2002 12:15 schrieb Marco Colombo:
> On Mon, 19 Aug 2002, Theodore Ts'o wrote:
>
> [...]
>
> > P.S. /dev/urandom should probably also be changed to use an entirely
> > separate pool, which then periodically pulls a small amount of entropy
> > from the primary pool as necessary. That would make /dev/urandom
> > slightly more dependent on the strength of SHA, while causing it to
> > not draw down as heavily on the entropy stored in /dev/random, which
> > would be a good thing.
>
> Shouldn't it be moved to userspace, instead? Pulling a small amount
> of entropy from /dev/random can be done in userspace, too. And the

1. You create a problem for in kernel users of random numbers.
2. You forgo the benefit of randomness by concurrent access to /dev/urandom.
3. You will not benefit from hardware random number generators as easily.

> application could choose *how often* and *how many* bits to pull.

If you really care, you can implement this for /dev/urandom, too.

Regards
Oliver

2002-08-19 10:43:55

by Marco Colombo

[permalink] [raw]
Subject: Re: [PATCH] (0/4) Entropy accounting fixes

On 18 Aug 2002, Alan Cox wrote:

> On Sun, 2002-08-18 at 05:06, dean gaudet wrote:
> > On 17 Aug 2002, Robert Love wrote:
> >
> > > [1] this is why I wrote my netdev-random patches. some machines just
> > > have to take the entropy from the network card... there is nothing
> > > else.
> >
> > many southbridges come with audio these days ... isn't it possible to get
> > randomness off the adc even without anything connected to it?
>
> Both the AMD and Intel bridges also come with high speed random number
> generators (i810-rng, amd768-rng). ADC randomness itself tends to be
> very suspect.

BTW, I know you wrote the amd768-rng driver, I wonder if you have any
indication of how good these rng are. What is the typical output bits/
random bits ratio in normal applications?

.TM.
--
____/ ____/ /
/ / / Marco Colombo
___/ ___ / / Technical Manager
/ / / ESI s.r.l.
_____/ _____/ _/ [email protected]

2002-08-19 10:59:59

by Marco Colombo

[permalink] [raw]
Subject: Re: [PATCH] (0/4) Entropy accounting fixes

On Mon, 19 Aug 2002, Oliver Neukum wrote:

> Am Montag, 19. August 2002 12:15 schrieb Marco Colombo:
> > On Mon, 19 Aug 2002, Theodore Ts'o wrote:
> >
> > [...]
> >
> > > P.S. /dev/urandom should probably also be changed to use an entirely
> > > separate pool, which then periodically pulls a small amount of entropy
> > > from the primary pool as necessary. That would make /dev/urandom
> > > slightly more dependent on the strength of SHA, while causing it to
> > > not draw down as heavily on the entropy stored in /dev/random, which
> > > would be a good thing.
> >
> > Shouldn't it be moved to userspace, instead? Pulling a small amount
> > of entropy from /dev/random can be done in userspace, too. And the
>
> 1. You create a problem for in kernel users of random numbers.
> 2. You forgo the benefit of randomness by concurrent access to /dev/urandom
> 3. You will not benefit from hardware random number generators as easily.

You lost me. The kernel of course has "client" access to the internal
pool. And since the userspace reads from /dev/random, it benefits
from HRNG just the same way it does now. Point 2 is somewhat obscure
to me. The kernel has only one observer to deal with, in theory.
One that gains *all* the knowledge about the internal pool the
kernel leaks, mainly by returning random bits to those who ask. Assuming
the observer has less than 100% knowledge of that is just overestimating
the entropy (== underestimating the ability of the observer to guess
the internal state). And from the application standpoint, it could just
discard part of the values it gets from the PRNG, to emulate concurrent
access, but what for? The result is as good as the untouched PRNG
sequence...

.TM.
--
____/ ____/ /
/ / / Marco Colombo
___/ ___ / / Technical Manager
/ / / ESI s.r.l.
_____/ _____/ _/ [email protected]

2002-08-19 12:25:09

by Alan

[permalink] [raw]
Subject: Re: [PATCH] (0/4) Entropy accounting fixes

On Mon, 2002-08-19 at 11:47, Marco Colombo wrote:
>
> BTW, I know you wrote the amd768-rng driver, I wonder if you have any
> indication of how good these rng are. What is the typical output bits/
> random bits ratio in normal applications?

It seems random. People have subjected both the Intel and AMD ones to
statistical test sets. I'm not a cryptographer or a statistician, so I
can't answer usefully.

2002-08-19 12:35:40

by Oliver Xymoron

[permalink] [raw]
Subject: Re: [PATCH] (0/4) Entropy accounting fixes

On Mon, Aug 19, 2002 at 12:21:41AM -0400, Theodore Ts'o wrote:
> On Sun, Aug 18, 2002 at 12:38:59AM -0500, Oliver Xymoron wrote:
> > On Sat, Aug 17, 2002 at 09:01:20PM -0700, Linus Torvalds wrote:
> > >
> > > On 17 Aug 2002, Robert Love wrote:
> > > >
> > > > [1] this is why I wrote my netdev-random patches. some machines just
> > > > have to take the entropy from the network card... there is nothing
> > > > else.
> > >
> > > I suspect that Oliver is 100% correct in that the current code is just
> > > _too_ trusting. And parts of his patches seem to be in the "obviously
> > > good" category (ie the xor'ing of the buffers instead of overwriting).
> >
> > Make sure you don't miss this bit, I should have sent it
> > separately. This is a longstanding bug that manufactures about a
> > thousand bits out of thin air when the pool runs dry.
>
> There's a reason why I did what I did here, and it has to do with an
> attack which Bruce Schneier describes in his Yarrow paper:
>
> http://www.counterpane.com/yarrow-notes.html
>
> called the "iterative guessing attack". Assume that the adversary
> somehow knows the current state of the pool.

Yes, I understand the catastrophic reseeding; I've read that and a
few other of his papers on PRNGs.

> I tried to take a bit more of a moderate position between relying
> solely on cryptographic randomness and a pure absolute randomness model.
> So we use large pools for mixing, and a catastrophic reseeding policy.
>
> From a pure theory point of view, I can see where this might be quite
> bothersome. On the other hand, practically, I think what we're doing
> is justifiable, and not really a security problem.

That's certainly a reasonable approach (to the extent that all the
attacks we know of are purely theoretical), but a) it's not the
impression one gets from the docs and b) I think we can easily do better.

> That being said, if you really want to use your patch, please do it
> differently. In order to avoid the iterative guessing attack
> described by Bruce Schneier, it is imperative that you extract
> r->poolinfo.poolwords - r->entropy_count/32 words of randomness from
> the primary pool, and mix it into the secondary.

Ah, I thought we were relying on the extract_count clause of the xfer
function to handle this; the first clause just seemed buggy. I'm not
quite convinced that this isn't sufficient. Do you see a theoretical
problem with that?

It certainly doesn't hurt to transfer a larger chunk of data from the
primary pool (and I'll update my patch to reflect this), but if it
says there are only 8 bits of entropy there, we should only tally 8
bits of credit in the secondary pool. With the current behavior, you
can do 'cat /dev/random | hexdump', wait for it to block, hit a key or
two, and have it spew out another K of data. I think this goes against
everyone's expectations of how this should work.

> P.S. /dev/urandom should probably also be changed to use an entirely
> separate pool, which then periodically pulls a small amount of entropy
> from the primary pool as necessary. That would make /dev/urandom
> slightly more dependent on the strength of SHA, while causing it to
> not draw down as heavily on the entropy stored in /dev/random, which
> would be a good thing.

Indeed - I intend to take advantage of the multiple pool flexibility
in the current code. I'll have a third pool that's allowed to draw
from the primary whenever it's more than half full. Assuming input
entropy rate > output entropy rate, this will make it exactly as
strong as /dev/random, while not starving /dev/random when there's a
shortage.
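
Schematically, the refill rule would be something like this
(hypothetical names; just the policy, not real code):

/* The urandom pool may draw from the primary only while the primary
 * stays more than half full, so /dev/random never starves. */
static int urandom_may_reseed(struct entropy_store *primary)
{
        return primary->entropy_count >
               primary->poolinfo.poolwords * 32 / 2;
}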

--
"Love the dolphins," she advised him. "Write by W.A.S.T.E.."

2002-08-19 12:52:17

by Marco Colombo

[permalink] [raw]
Subject: Re: [PATCH] (0/4) Entropy accounting fixes

On 19 Aug 2002, Alan Cox wrote:

> On Mon, 2002-08-19 at 11:47, Marco Colombo wrote:
> >
> > BTW, I know you wrote the amd768-rng driver, I wonder if you have any
> > indication of how good these rng are. What is the typical output bits/
> > random bits ratio in normal applications?
>
> It seems random. People have subjected both the intel and AMD one to
> statistical test sets. I'm not a cryptographer or a statistician so I
> can't answer usefully

Oh, ok. I've asked only because some time ago I wrote a small tool set
to get random bits from /dev/audio and feed them to /dev/random
(a la audio-entropyd, but it does FIPS 140-2). You can configure how
many bits to account via RNDADDTOENTCNT every N bits you write to
/dev/random (I use 1/80). Since in principle it can take random data
from any source, I'd wondered if someone had an estimate of the
parameter for HRNGs. Thanks anyway.
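
(The write-then-credit step is small enough to sketch; this is a
stripped-down illustration of that accounting, with the FIPS 140-2
testing omitted:)

#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/random.h>

/* Mix len bytes into /dev/random, crediting 1 bit of entropy per 80
 * bits written. Writing by itself credits nothing; the
 * RNDADDTOENTCNT ioctl (root only) bumps the entropy count. */
int feed_pool(const unsigned char *buf, int len)
{
        int fd = open("/dev/random", O_WRONLY);
        int credit = len * 8 / 80;

        if (fd < 0)
                return -1;
        if (write(fd, buf, len) != len ||
            ioctl(fd, RNDADDTOENTCNT, &credit) < 0) {
                close(fd);
                return -1;
        }
        close(fd);
        return 0;
}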

.TM.
--
____/ ____/ /
/ / / Marco Colombo
___/ ___ / / Technical Manager
/ / / ESI s.r.l.
_____/ _____/ _/ [email protected]

2002-08-19 13:48:26

by Oliver Xymoron

[permalink] [raw]
Subject: Re: [PATCH] (0/4) Entropy accounting fixes

On Mon, Aug 19, 2002 at 01:43:59AM -0400, Theodore Ts'o wrote:
> On Sat, Aug 17, 2002 at 09:15:22PM -0500, Oliver Xymoron wrote:
> >
> > Non-Uniform Distribution Of Timing
> >
> > Unfortunately, our sample source is far from uniform. For starters, each
> > interrupt has a finite time associated with it - the interrupt latency.
> > Back to back interrupts will result in samples that are periodically
> > spaced by a fixed interval.
>
> This is true. But that's also why /dev/random measures not just the
> first order delta, but the second and third order deltas as well, and
> takes the minimum of the three deltas, capped at 12 bits, as the
> entropy estimate. So for example, if the interrupts are happening as
> fast as possible (back to back) and there is merely the fixed interval
> caused by the interrupt latency, then delta_1 will be the interrupt
> latency, but delta_2 will be zero!
>
> The rest of your analysis also ignores the fact that the current
> /dev/random code uses the second and third order deltas (think of
> them as second and third order derivatives, except that we're in the
> discrete rather than continuous domain).

No, I just chopped that part out of my analysis as it was already
getting rather lengthy. I concluded that it was easy to spoof the
deltas with untrusted sources if we weren't keeping track of at least
one global piece of information (I chose context switches) and that
there wasn't a good physical interpretation of the third order
delta. Neither of them protects against the latency-watching attack. I've
kept the first and second order deltas.
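
(Schematically, the gate is just this; nr_context_switches() stands in
for whatever global counter actually gets sampled:)

/* Credit entropy for a sample only if at least one context switch
 * has occurred since the last trusted sample. The data still gets
 * mixed into the pool either way; only the accounting is gated. */
extern unsigned long nr_context_switches(void);  /* assumed accessor */
static unsigned long last_ctxt;

static int sample_trusted(void)
{
        unsigned long ctxt = nr_context_switches();
        int trusted = (ctxt != last_ctxt);

        last_ctxt = ctxt;
        return trusted;
}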

> > Assuming the interrupt actually has a nice gamma-like distribution
> > (which is unlikely in practice), then this is indeed true. The
> > trouble is that Linux assumes that if a delta is 13 bits, it contains
> > 12 bits of actual entropy. A moment of thought will reveal that
> > binary numbers of the form 1xxxx can contain at most 4 bits of
> > entropy - it's a tautology that all binary numbers start with 1 when
> > you take off the leading zeros. This is actually a degenerate case of
> > Benford's Law (http://mathworld.wolfram.com/BenfordsLaw.html), which
> > governs the distribution of leading digits in scale invariant
> > distributions.
> >
> > What we're concerned with is the entropy contained in digits
> > following the leading 1, which we can derive with a simple extension
> > of Benford's Law (and some Python):
>
> I'm not a statistician, and my last probability class was over 15
> years ago, but I don't buy your extension of Benford's law. Sure, if
> we take street addresses numbering from 1 to 10000, the probability
> that the leading digit will be 1 will be roughly 30%. But the
> low two digits will be quite evenly distributed.
> Put another way, by picking a house at random and looking at the low
> two digits, you can very fairly choose a number between 0 and 99.

That's correct, and that's what's being calculated. The example Python
code calculates the Benford distribution not in base 2, but in base
2^bits. Then it uses the Shannon entropy definition to calculate the
overall entropy of that distribution. For a 12 bit number, we've got
about 7 bits of entropy, most of it concentrated in the LSBs.

> > Interrupt Timing Independence
> >
> > Linux entropy estimate also wrongly assumes independence of different
> > interrupt sources. While SMP complicates the matter, this is
> > generally not the case. Low-priority interrupts must wait on high
> > priority ones and back to back interrupts on shared lines will
> > serialize themselves ABABABAB. Further system-wide CLI, cache flushes
> > and the like will skew -all- the timings and cause them to bunch up
> > in predictable fashion.
> >
> > Furthermore, all this is observable from userspace in the same way
> > that worst-case latency is measured.
> >
> > To protect against back to back measurements and userspace
> > observation, we insist that at least one context switch has occurred
> > since we last sampled before we trust a sample.
>
> First of all, the second order delta already protects against
> back-to-back measurements.

Only from the same source. Imagine the case of sending a series of
back to back network interrupts that hold off a pending keyboard or
disk interrupt to a predictable window. Sorry, another long bit I snipped
from my analysis.

> Secondly, what is observable from userspace is time which is not spent
> in a particular process. But whether that time was spent in the
> system or in another process is not at all obvious, and it's also not
> obvious whether that time is spent handling interrupts or processing
> bottom half tasks (i.e., processing network packets, et al.).
> Moreover, it is not observable to the user process when multiple
> interrupts might be happening in this whole mess.

But a userspace process can see that it's on an idle system with only,
e.g., timer interrupts happening. See the code that's used to measure
worst-case latency for the lowlat and preempt patches. This same code
could easily be modified to record time stamps for non-timer
interrupts, and with respectable accuracy.

Again, I keep the first and second order deltas of your approach and
watch context switching as an additional protection.

> That being said, global CLI's are a problem in that it does mean that
> multiple interrupts could be serviced at the same time, and while the
> outside adversary won't know exactly when a global CLI/STI might have
> happened, it does reduce the quality of the randomness. The solution
> to this though is to avoid global CLI's, not to throw away randomness
> samples until after a context switch.

The context switch protection fires rarely (1-2%) and I'm still mixing
said data into the pool.

> > Questionable Sources and Time Scales
> >
> > Due to the vagaries of computer architecture, things like keyboard
> > and mouse interrupts occur on their respective scanning or serial
> > clock edges, and are clocked relatively slowly. Worse, devices like
> > USB keyboards, mice, and disks tend to share interrupts and probably
> > line up on USB clock boundaries. Even PCI interrupts have a
> > granularity on the order of 33MHz (or worse, depending on the
> > particular adapter), which when timed by a fast processor's 2GHz
> > clock, make the low six bits of timing measurement predictable.
>
> We are not mixing in solely the low order bits of the timing
> measurement; we're mixing in the entire timestamp. So the fact that
> the low-order bits of the timing measurement might be predictable
> isn't necessarily a problem.
>
> We are looking at the low 12 bits of the first, second, and third
> order deltas when estimating the entropy. So predictability in the
> low six bits of the timing measurement will tend to drag the entropy
> estimate down, not up.

On a gigahertz processor, nth order deltas on the order of a millisecond
all show up as > 12 bits of accuracy. We're sampling a source that we
know to be aligned to a fixed multiple of the processor clock on
modern machines, therefore we can expect very little randomness in the
LSBs. Even if our peripheral clocks are not generated from the same
clock source, we can't count on the LSBs being chaotic - clocks tend
to be extremely stable, and clock drift over time scales of 2^12
cycles of a 1GHz clock is going to be very small indeed.

> > And as far as I can find, no one's tried to make a good model or
> > estimate of actual keyboard or mouse entropy.
>
> Since a human is involved, and we're measuring to a fairly high level
> of accuracy, I'm not particularly worried.

Me neither, really. I would be worried if we tried to estimate entropy
in position or keystroke data, but we don't.

> > Randomness caused by
> > disk drive platter turbulence has actually been measured and is on
> > the order of 100bits/minute and is well correlated on timescales of
> > seconds - we're likely way overestimating it.
>
> This has worried me. The Davis paper was done a long time ago and I
> suspect the turbulence values have gone down over time, not up. But
> let's be clear what the problem is. If the adversary knows the exact
> stream of requests sent to a disk, it can probably model the time
> necessary to service those requests very accurately --- possibly
> missing much less than the 100 bits/minute guesstimated by the Davis
> paper, given modern disk drives.
>
> So when we measure the disk drive completion interrupts, what we are
> really measuring is the attacker's uncertainty about exactly *when*
> those disk drive requests were made, and in what order the disk drive
> requests were sent to it.
>
> Can the adversary control this, or know this? Sometimes, to a certain
> extent. But remember, we're using a very high resolution timer, and
> while the adversary might know the rough time, to a few milliseconds,
> when a request was issued, it would be much harder for the adversary
> to know what the exact time stamp clock value was at a particular
> event.

I think the adversary can do much better than that, especially given
things like sendfile() and zerocopy networking that go to lengths to
make the latency as low as possible. Again, we don't have to guess to
the resolution of the processor clock, but to the bus clock (3 orders
of magnitude slower) or to the likely even slower microcontroller on
the peripheral. To the best of my knowledge, servo control loops for
drives are _not_ run in MHz. So the question becomes, is gigabit
network latency more predictable than the drive servo loop? I wouldn't
bet on the drive..

> > Trusting Predictable or Measurable Sources
> >
> > What entropy can be measured from disk timings are very often leaked
> > by immediately relaying data to web, shell, or X clients. Further,
> > patterns of drive head movement can be remotely controlled by clients
> > talking to file and web servers. Thus, while disk timing might be an
> > attractive source of entropy, it can't be used in a typical server
> > environment without great caution.
>
> This is something to be concerned about, to be sure. But generally a
> client won't have complete control of the drive head movement ---
> there are other clients involved --- and the adversary generally won't
> have complete knowledge of the block allocation of files, for example,
> so he/she would not be able to characterize the disk drive timings to
> the degree of accuracy required.

No. But if I manage to break into the firewall protecting a webserver,
I can likely arrange for that server to have no other clients for some
period. Also, see tools like dsniff, which can effectively change the
topology of switched LANs.

> Certainly a major concern that I've always had is measuring network
> device interrupts, since packet arrival times can be measured by an
> outsider. Such an attack would require that the adversary have a
> sophisticated device on the local LAN segment of the attack victim
> (since the attacker needs to see all of the packets directed at the
> victim in order to make guesses about the inter-packet arrival
> times). So how practical this attack might be is certainly quite
> debatable.

A broken x86 firewall is quite probably capable of such measurement on
an owned 100baseT switched network.

> > (Incidentally, tricks like Matt Blaze's truerand clock drift
> > technique probably don't work on most PCs these days as the
> > "realtime" clock source is often derived directly from the
> > bus/PCI/memory/CPU clock.)
>
> Actually, many peripherals do have their own clock crystals and clock
> circuitry (network cards, serial cards, IDE disks, etc.)..

True, but many of those clocks are effectively invisible: they sit
behind a PCI or slower bus, they're non-programmable, and modern
interface logic has buffering clocked to compensate for bus contention.

> > Batching
> >
> > Samples to be mixed are batched into a 256 element ring
> > buffer. Because this ring isn't allowed to wrap, it's dangerous to
> > store untrusted samples as they might flood out trusted ones.
>
> It's not dangerous in the sense that we might suffer a security breach
> (assuming that our entropy estimation routines are accurate). It's a
> bad thing in that we might lose some good randomness, but that's not a
> disaster.
>
> That being said, if we are running out of space in the ring buffer, it
> would be good to increase its size.

In my patches, untrusted samples outnumber trusted ones by over 10 to
1. Kick in entropy for network at gigabit rates (some fraction of 80k
packets per second) and the mixing overhead for the larger pool starts
getting painful. If we need more entropy than a bit per sample *
sizeof(batch pool) * HZ (=256000bps!), I think we ought to buy a few
hardware RNGs. I'm actually tempted to make the ring buffer or the
mixing rate smaller - we're really not going to have 100k real entropy
events/sec.

(Let's try to narrow the scope of this a bit, please.)

--
"Love the dolphins," she advised him. "Write by W.A.S.T.E.."

2002-08-19 14:19:37

by Oliver Neukum

[permalink] [raw]
Subject: Re: [PATCH] (0/4) Entropy accounting fixes


> > 1. You create a problem for in kernel users of random numbers.
> > 2. You forgo the benefit of randomness by concurrent access to
> >    /dev/urandom.
> > 3. You will not benefit from hardware random number generators as
> >    easily.
>
> You lost me. The kernel of course has "client" access to the internal
> pool. And since the userspace reads from /dev/random, it benefits

The kernel users of random numbers may be unable to block.
Thus the kernel has to have a PRNG anyway.
You may as well export it.

> from HRNG just the same way it does now. Point 2 is somewhat obscure
> to me. The kernel has only one observer to deal with, in theory.

In theory. In practice, what goes out through e.g. the network is
most important. Additional accesses to a PRNG bitstream unknown
outside make it harder to predict the bitstream.

Regards
Oliver

2002-08-19 15:18:05

by Marco Colombo

[permalink] [raw]
Subject: Re: [PATCH] (0/4) Entropy accounting fixes

On Mon, 19 Aug 2002, Oliver Neukum wrote:

>
> > > 1. You create a problem for in kernel users of random numbers.
> > > 2. You forgo the benefit of randomness by concurrent access to
> > >    /dev/urandom.
> > > 3. You will not benefit from hardware random number generators as
> > >    easily.
> >
> > You lost me. The kernel of course has "client" access to the internal
> > pool. And since the userspace reads from /dev/random, it benefits
>
> The kernel users of random numbers may be unable to block.
> Thus the kernel has to have a PRNG anyway.
> You may as well export it.
>
> > from HRNG just the same way it does now. Point 2 is somewhat obscure
> > to me. The kernel has only one observer to deal with, in theory.
>
> In theory. In practice what goes out through eg. the network is
> most important. Additional accesses to a PRNG bitstream unknown
> outside make it harder to predict the bitstream.

Not at all. Let me (one process) read 1MB from /dev/urandom,
and analyze it. If I can break SHA-1, I'm able to predict *future*
/dev/urandom output, especially if I keep draining bits from /dev/random.

And even if that were not the case, you don't necessarily need to read
the PRNG output *in order* to be able to guess it. Every N bits you
read, you learn more about its internal state.

And besides that, who tells you I'm not able to capture outgoing
network packets as well?

.TM.
--
____/ ____/ /
/ / / Marco Colombo
___/ ___ / / Technical Manager
/ / / ESI s.r.l.
_____/ _____/ _/ [email protected]

2002-08-19 16:26:49

by Oliver Neukum

[permalink] [raw]
Subject: Re: [PATCH] (0/4) Entropy accounting fixes


> Not at all. Let me (one process) read 1MB from /dev/urandom,
> and analyze it. If I can break SHA-1, I'm able to predict *future*
> /dev/urandom output, especially if I keep draining bits from
> /dev/random.

True, but you cannot predict which task will read which part
of the output of urandom.
Also, not all attackers can read from urandom.

If you really care, you might implement entropy pools
per user.

Regards
Oliver

2002-08-20 08:56:05

by Tommi Kyntola

[permalink] [raw]
Subject: Re: [PATCH] (0/4) Entropy accounting fixes

On Mon, 19 Aug 2002, Oliver Xymoron wrote:
> On Mon, Aug 19, 2002 at 01:43:59AM -0400, Theodore Ts'o wrote:
> > On Sat, Aug 17, 2002 at 09:15:22PM -0500, Oliver Xymoron wrote:
> >
> > > Assuming the interrupt actually has a nice gamma-like distribution
> > > (which is unlikely in practice), then this is indeed true. The
> > > trouble is that Linux assumes that if a delta is 13 bits, it contains
> > > 12 bits of actual entropy. A moment of thought will reveal that
> > > binary numbers of the form 1xxxx can contain at most 4 bits of
> > > entropy - it's a tautology that all binary numbers start with 1 when
> > > you take off the leading zeros. This is actually a degenerate case of
> > > Benford's Law (http://mathworld.wolfram.com/BenfordsLaw.html), which
> > > governs the distribution of leading digits in scale invariant
> > > distributions.
> > >
> > > What we're concerned with is the entropy contained in digits
> > > following the leading 1, which we can derive with a simple extension
> > > of Benford's Law (and some Python):
> >
> > I'm not a statistician, and my last probability class was over 15
> > years ago, but I don't buy your extension of Benford's law. Sure, if
> > we take street addresses numbering from 1 to 10000, the probability
> > that the leading digit will be 1 will be roughly 30%. But the
> > distribution of the low two digits will be quite evenly distributed.
> > Put another way, by picking a house at random, and looking at the low
> > two digits, and can very fairly choose a number between 0 and 99.
>
> That's correct, and that's what's being calculated. The example Python
> code calculates the Benford distribution not in base 2, but in base
> 2^bits. Then it uses the Shannon entropy definition to calculate the
> overall entropy of that distribution. For a 12 bit number, we've got
> about 7 bits of entropy, most of it concentrated in the LSBs.

I think you have it slightly wrong there. Snipping away the first
digit of a number leaves you with not Benford's distribution but a
uniform distribution, for which the Shannon entropy is naturally
roughly the bit count.

Benford's distribution as presented (1 being the first digit 30% of
the time) concerns firstly base 10 and secondly sets with random upper
bounds. For base 2, Benford's leading digit distribution naturally
narrows down to 1 being the first digit 100% of the time. Benford's
law says little about the remaining digits. Thus snipping that first
bit away leaves us with a set that reflects the properties of the
choice. Assuming it's random, we get a uniform distribution.

Whether the bit count of the smallest of the three deltas is
actually sufficient to guarantee us that amount of randomness in the
choice is another question. As stated here already, it can easily be
fooled, and there's a strong possibility that it gets "fooled" already.

Clearly periodic but not delta-wise detectable sequences would pass the
delta checks. If interrupts get timed like 1 3 11 50 1 3 11 50 1 3 11 50,
the deltas would indicate more entropy than is present, as the sketch
below illustrates.
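
(A quick sketch to make that concrete: it feeds a strictly periodic
timing pattern through a min-of-three-deltas estimator like the one
described earlier in the thread, and still credits a steady trickle of
bits to a stream with zero entropy:)

#include <stdio.h>
#include <stdlib.h>

static int log2_floor(int x)
{
        int n = 0;
        while (x > 1) {
                x >>= 1;
                n++;
        }
        return n;
}

int main(void)
{
        int pattern[4] = { 1, 3, 11, 50 };      /* deterministic! */
        int t = 0, last_t = 0, last_d = 0, last_d2 = 0, bits = 0, i;

        for (i = 0; i < 16; i++) {
                int d, d2, d3, m;

                t += pattern[i % 4];
                d = t - last_t;
                d2 = d - last_d;
                d3 = d2 - last_d2;
                last_t = t; last_d = d; last_d2 = d2;

                m = abs(d);
                if (abs(d2) < m) m = abs(d2);
                if (abs(d3) < m) m = abs(d3);
                bits += log2_floor(m);
        }
        /* prints: credited 27 bits (for a fully predictable stream) */
        printf("credited %d bits\n", bits);
        return 0;
}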

Some level of Fourier analysis would be necessary to go further than what
we can with the deltas.

(For the record, I'm not one bit worried about the kernel rng; perhaps
the entropy bit count may be theoretically a bit off from time to time,
but remarks like "If someone then breaks SHA-1, he can do this and
that..." only amuse me. If someone breaks SHA-1, we're bound to change
the hash then, eh? Not to mention that we'd be in deep shit, not just
kernel-wise.)

--
Tommi Kynde Kyntola [email protected]
"A man alone in the forest talking to himself and
no women around to hear him. Is he still wrong?"

2002-08-20 12:26:15

by Ralf Baechle

[permalink] [raw]
Subject: Re: [PATCH] (0/4) Entropy accounting fixes

On Sun, Aug 18, 2002 at 09:31:41AM +0200, Bernd Eckenfels wrote:

> dean gaudet <[email protected]> wrote:
> > many southbridges come with audio these days ... isn't it possible to get
> > randomness off the adc even without anything connected to it?
>
> they also come with RNGs.

I know of one soundcard RNG that is simply implemented as a small
mask-programmed ROM full of random numbers. That's good enough for
audio purposes but doesn't even qualify for /dev/urandom's use ...

Ralf

2002-08-20 12:47:12

by Bernd Eckenfels

[permalink] [raw]
Subject: Re: [PATCH] (0/4) Entropy accounting fixes

On Sun, Aug 18, 2002 at 11:48:58AM +0200, Ralf Baechle wrote:
> On Sun, Aug 18, 2002 at 09:31:41AM +0200, Bernd Eckenfels wrote:
>
> > dean gaudet <[email protected]> wrote:
> > > many southbridges come with audio these days ... isn't it possible to get
> > > randomness off the adc even without anything connected to it?
> >
> > they also come with RNGs.
>
> I know of one soundcard RNG that is simply implemented as a small
> mask-programmed ROM full of random numbers. That's good enough for
> audio purposes but doesn't even qualify for /dev/urandom's use ...

I am talking about southbridges with random sources, not about soundchips :)

They can be used to contribute some bits to the entropy pool. I don't
think they are the only source one should trust. But the specs say that
they are no PRNG, and I haven't heard about too disastrous results from
statistical tests. So they are better than ethernet drivers, anyway.

Greetings
Bernd
--
(OO) -- [email protected] --
( .. ) ecki@{inka.de,linux.de,debian.org} http://home.pages.de/~eckes/
o--o *plush* 2048/93600EFD eckes@irc +497257930613 BE5-RIPE
(O____O) When cryptography is outlawed, bayl bhgynjf jvyy unir cevinpl!

2002-08-20 13:17:39

by Oliver Xymoron

[permalink] [raw]
Subject: Re: [PATCH] (0/4) Entropy accounting fixes

On Tue, Aug 20, 2002 at 11:59:49AM +0300, Tommi Kyntola wrote:
> On Mon, 19 Aug 2002, Oliver Xymoron wrote:
> > On Mon, Aug 19, 2002 at 01:43:59AM -0400, Theodore Ts'o wrote:
> > > On Sat, Aug 17, 2002 at 09:15:22PM -0500, Oliver Xymoron wrote:
> > >
> > > > Assuming the interrupt actually has a nice gamma-like distribution
> > > > (which is unlikely in practice), then this is indeed true. The
> > > > trouble is that Linux assumes that if a delta is 13 bits, it contains
> > > > 12 bits of actual entropy. A moment of thought will reveal that
> > > > binary numbers of the form 1xxxx can contain at most 4 bits of
> > > > entropy - it's a tautology that all binary numbers start with 1 when
> > > > you take off the leading zeros. This is actually a degenerate case of
> > > > Benford's Law (http://mathworld.wolfram.com/BenfordsLaw.html), which
> > > > governs the distribution of leading digits in scale invariant
> > > > distributions.
> > > >
> > > > What we're concerned with is the entropy contained in digits
> > > > following the leading 1, which we can derive with a simple extension
> > > > of Benford's Law (and some Python):

> I think you have it slightly wrong there. Snipping away the first
> digit of a number leaves you with not Benford's distribution but a
> uniform distribution, for which the Shannon entropy is naturally
> roughly the bit count.

No, it's much more complicated than that - that doesn't give us scale
invariance. Observe that the distribution of the first and second
digits in base n is the same as the distribution of the first digit in
base n^2. The probability of finding a 0 as the second digit
base 10 is simply the sum of the probabilities of finding 10, 20,
30,..90 as the first digit, base 100, see? It's .1197, rather than the
expected .1. In base 2, the odds of the second digit being 0 are .58.

My original message included the code I used to calculate the
distribution, along with the calculated entropy content of n-digit
strings.
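
(In lieu of digging up that post, here is a minimal C equivalent of
just the two second-digit figures above, using the generalized Benford
probability P(leading block = d) = log_b(1 + 1/d):)

#include <stdio.h>
#include <math.h>

int main(void)
{
        double p10 = 0.0;
        int k;

        /* second decimal digit 0 <=> leading digits 10, 20, ..., 90;
         * P(leading decimal block = d) = log10(1 + 1/d) */
        for (k = 1; k <= 9; k++)
                p10 += log10(1.0 + 1.0 / (10 * k));
        printf("P(2nd digit 0, base 10) = %.4f\n", p10);       /* 0.1197 */

        /* second bit 0 <=> leading bits "10": P = log2(3/2) */
        printf("P(2nd bit 0, base 2)    = %.4f\n", log2(1.5)); /* 0.5850 */
        return 0;
}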

> Whether the bit count of the smallest of the three deltas is
> actually sufficient to guarantee us that amount of randomness in the
> choice is another question. As stated here already, it can easily be
> fooled, and there's a strong possibility that it gets "fooled" already.

That's why my code makes a distinction between trusted and untrusted
sources. We will only trust sources that can't be used to spoof us.
Detecting spoofing is impossible.

> Some level of Fourier analysis would be necessary to go further than what
> we can with the deltas.

There's no point in going further. If an attacker is trusted, he can
send timing streams that would fool _any_ filter. An example is some
subset of the digits of pi, which appear perfectly evenly distributed
but are of course completely deterministic.

--
"Love the dolphins," she advised him. "Write by W.A.S.T.E.."

2002-08-20 16:15:35

by Tommi Kyntola

[permalink] [raw]
Subject: Re: [PATCH] (0/4) Entropy accounting fixes

On Tue, 20 Aug 2002, Oliver Xymoron wrote:
> On Tue, Aug 20, 2002 at 11:59:49AM +0300, Tommi Kyntola wrote:
> > On Mon, 19 Aug 2002, Oliver Xymoron wrote:
> > > On Mon, Aug 19, 2002 at 01:43:59AM -0400, Theodore Ts'o wrote:
> > > > On Sat, Aug 17, 2002 at 09:15:22PM -0500, Oliver Xymoron wrote:
> > > >
> > > > > Assuming the interrupt actually has a nice gamma-like distribution
> > > > > (which is unlikely in practice), then this is indeed true. The
> > > > > trouble is that Linux assumes that if a delta is 13 bits, it contains
> > > > > 12 bits of actual entropy. A moment of thought will reveal that
> > > > > binary numbers of the form 1xxxx can contain at most 4 bits of
> > > > > entropy - it's a tautology that all binary numbers start with 1 when
> > > > > you take off the leading zeros. This is actually a degenerate case of
> > > > > Benford's Law (http://mathworld.wolfram.com/BenfordsLaw.html), which
> > > > > governs the distribution of leading digits in scale invariant
> > > > > distributions.
> > > > >
> > > > > What we're concerned with is the entropy contained in digits
> > > > > following the leading 1, which we can derive with a simple extension
> > > > > of Benford's Law (and some Python):
>
> > I think you have it slightly wrong there. Snipping away the first
> > digit of a number leaves you with not Benford's distribution but a
> > uniform distribution, for which the Shannon entropy is naturally
> > roughly the bit count.
>
> No, it's much more complicated than that - that doesn't give us scale
> invariance. Observe that the distribution of the first and second
> digits in base n is the same as the distribution of the first digit in
> base n^2. The probability of finding a 0 as the second digit
> base 10 is simply the sum of the probabilities of finding 10, 20,
> 30,..90 as the first digit, base 100, see? It's .1197, rather than the
> expected .1. In base 2, the odds of the second digit being 0 are .58.

Oh, now I see: you're relying on the given times being gamma distributed
(1/x, and thus naturally scale invariant). I was constantly thinking
about the jiffie count that got masked, which I took as uniform, but
naturally within the delta context it's not uniform, since the first
order delta naturally defines the timestamp when the previous timestamp
is known. And thus yes, you are indeed right. Furthermore, it wouldn't
even have to be gamma and as such scale invariant. Even mere
non-uniformity (which I'm sure the timings have) guarantees that merely
snipping the first bit is not enough to ensure entropy across that full
range.

The current "delta bitcount - 1" is not the correct entropy amount for
any such gamma (or similarly) distributed number. But does a strict
gamma assumption really lead to figures as strict as you showed in your
patch:
static int benford[16]={0,0,0,1,2,3,4,5,5,6,7,7,8,9,9,10};

Numbers 0..7 don't have a single bit of entropy?

I'm more inclined to think that whereas this may be sound for the larger
delta bit counts, it doesn't apply that well for the smaller
deltas. It's unlikely that the timing distributions stay scale invariant
for the lower end bits.

Not to mention that if the actual time separation is something like 12
bits, but the second order delta drops the entropy bit count down to 4,
couldn't those four be considered uniformly distributed, at least
roughly enough that the benford array reduction could be skipped?
Because for a 12 bit number the 4 lower bits are by far not scale
invariant. At least that part of the patch might need tweaking.
Or did I miss something?
I can't say for sure though, gotta sleep on it first.

Although Linus was not so hot about your "draconian measures" patch set,
this part of it would at least seem worth the while. At least when the
limiting delta is the first order, it is indeed unreasonable to think
that its bit count - 1 would show the entropy that's present.

> > Whether the bit count of the smallest of the three deltas is
> > actually sufficient to guarantee us that amount of randomness in the
> > choice is another question. As stated here already, it can easily be
> > fooled, and there's a strong possibility that it gets "fooled" already.
>
> That's why my code makes a distinction between trusted and untrusted
> sources. We will only trust sources that can't be used to spoof us.
> Detecting spoofing is impossible.

Fully agreed.
I was merely suggesting that it might be, if not common, at least not
totally out of the question, that certain interrupts or interrupt bursts
kick in with predictable intervals (tcp handshakes and other protocol
related stuff might lead to predictable network traffic), e.g. 1 13
2 50, which would completely fool the delta analysis. But yes, such cases
should be ruled out by source selection. As we have, or at least last I
checked the network irqs didn't contribute entropy, although IMHO even
in network traffic there could be some level of entropy to be gotten,
just not as simply as presently from user input timings, to be fully
trustworthy.

> > Some level of Fourier analysis would be necessary to go further than what
> > we can with the deltas.
>
> There's no point in going further. If an attacker is trusted, he can
> send timing streams that would fool _any_ filter. An example is some
> subset of the digits of pi, which appear perfectly evenly distributed
> but are of course completely deterministic.

I couldn't agree more.

--
Tommi Kynde Kyntola [email protected]
"A man alone in the forest talking to himself and
no women around to hear him. Is he still wrong?"


2002-08-20 17:18:14

by Oliver Xymoron

[permalink] [raw]
Subject: Re: [PATCH] (0/4) Entropy accounting fixes

On Tue, Aug 20, 2002 at 07:19:26PM +0300, Tommi Kyntola wrote:
> On Tue, 20 Aug 2002, Oliver Xymoron wrote:
> > On Tue, Aug 20, 2002 at 11:59:49AM +0300, Tommi Kyntola wrote:
> > > On Mon, 19 Aug 2002, Oliver Xymoron wrote:
> > > > On Mon, Aug 19, 2002 at 01:43:59AM -0400, Theodore Ts'o wrote:
> > > > > On Sat, Aug 17, 2002 at 09:15:22PM -0500, Oliver Xymoron wrote:
> > > > >
> > > > > > Assuming the interrupt actually has a nice gamma-like distribution
> > > > > > (which is unlikely in practice), then this is indeed true. The
> > > > > > trouble is that Linux assumes that if a delta is 13 bits, it contains
> > > > > > 12 bits of actual entropy. A moment of thought will reveal that
> > > > > > binary numbers of the form 1xxxx can contain at most 4 bits of
> > > > > > entropy - it's a tautology that all binary numbers start with 1 when
> > > > > > you take off the leading zeros. This is actually a degenerate case of
> > > > > > Benford's Law (http://mathworld.wolfram.com/BenfordsLaw.html), which
> > > > > > governs the distribution of leading digits in scale invariant
> > > > > > distributions.
> > > > > >
> > > > > > What we're concerned with is the entropy contained in digits
> > > > > > following the leading 1, which we can derive with a simple extension
> > > > > > of Benford's Law (and some Python):
> >
> > > I think you have it slightly wrong there. Snipping away the first
> > > digit of a number leaves you with not Benford's distribution but a
> > > uniform distribution, for which the Shannon entropy is naturally
> > > roughly the bit count.
> >
> > No, it's much more complicated than that - that doesn't give us scale
> > invariance. Observe that the distribution of the first and second
> > digits in base n is the same as the distribution of the first digit in
> > base n^2. The probability of finding a 0 as the second digit
> > base 10 is simply the sum of the probabilities of finding 10, 20,
> > 30,..90 as the first digit, base 100, see? It's .1197, rather than the
> > expected .1. In base 2, the odds of the second digit being 0 are .58.
>
> Oh, now I see: you're relying on the given times being gamma distributed
> (1/x, and thus naturally scale invariant). I was constantly thinking
> about the jiffie count that got masked, which I took as uniform, but
> naturally within the delta context it's not uniform, since the first
> order delta naturally defines the timestamp when the previous timestamp
> is known. And thus yes, you are indeed right. Furthermore, it wouldn't
> even have to be gamma and as such scale invariant. Even mere
> non-uniformity (which I'm sure the timings have) guarantees that merely
> snipping the first bit is not enough to ensure entropy across that full
> range.
>
> The current "delta bit count - 1" is not the correct entropy amount for
> any such gamma (or similarly) distributed number. But does a strict gamma
> assumption really lead to figures as strict as those in your patch:
> static int benford[16]={0,0,0,1,2,3,4,5,5,6,7,7,8,9,9,10};
>
> Deltas of 0..7 don't contain a single full bit of entropy?

They have fractional bits of entropy.
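
To make that concrete, here's a bit of Python that computes the Shannon
entropy of an n-bit value under a strictly scale-invariant model, where
P(value = k) is proportional to log2((k+1)/k). This is my reconstruction
for illustration - not necessarily the exact calculation behind the
table above:

import math

def octave_entropy(n):
    # Probabilities of the values in the octave [2^(n-1), 2^n) under a
    # scale-invariant distribution; within one octave they sum to 1.
    if n == 1:
        return 0.0  # the single value 1 carries no entropy
    ps = [math.log2((k + 1) / k) for k in range(2 ** (n - 1), 2 ** n)]
    return -sum(p * math.log2(p) for p in ps)

for n in range(1, 17):
    print(n, round(octave_entropy(n), 2))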

> I'm more inclined to think that whereas this may be sound for the larger
> delta bit counts, it doesn't apply that well for the smaller deltas. It's
> unlikely that the timing distributions stay scale invariant for the
> lower-end bits.

Possibly. If anything, I'd expect them to get worse, not better, more
strongly displaying quantization due to cache flushes or the like.

> Not to mention that if the actual time separation is something like 12
> bits, but the second-order delta drops the entropy bit count down to 4,
> couldn't those four be considered uniformly distributed, at least roughly
> enough, so that the benford array reduction could be skipped?

Actually, no. Scale invariant means we expect to see the same sorts of
numbers regardless of the units we use. Differences of such numbers
won't magically become associated with a particular unit and should
still match up with the so-called 'distribution of distributions'.

> Although Linus was not so hot about your "draconian measures" patch set,
> this part of it would at least seem worth the while. At least when the
> limiting delta is the first-order one, it is indeed unreasonable to think
> that its bit count - 1 would reflect the entropy that's present.

I've got another spin that I'll send him after he's kicked out .32
that he'll probably find acceptable. Gives headless folks the rope to
hang themselves with.

Ooh, that's clever. I'll have to stick that in the comments somewhere.

--
"Love the dolphins," she advised him. "Write by W.A.S.T.E.."

2002-08-21 08:40:18

by Rogier Wolff

[permalink] [raw]
Subject: Re: [PATCH] (0/4) Entropy accounting fixes

On Sat, Aug 17, 2002 at 08:08:36PM -0700, Linus Torvalds wrote:
>
> On Sat, 17 Aug 2002, Oliver Xymoron wrote:
> >
> > Let me clarify that 2-5 orders thing. The kernel trusts about 10 times
> > as many samples as it should, and overestimates each sample's entropy
> > by about a factor of 10 (on x86 with TSC) or 1.3 (using 1kHz jiffies).
>
> Looking at the code, your _new_ code just throws samples away _entirely_
> just because some random event hasn't happened (the first thing I noticed
> was the context switch testing, there may be others there that I just
> didn't react to).

Oliver,

Let me state that with a proper mixing function you should always
mix in possible entropy sources, even if they CAN be controlled
from the outside.

If you mistrust the source, feel free to add (almost) zero to the
"proven entropy".

Now, how about keeping both a conservative and a bit more liberal
count of the entropy in the pool? Then we can have three device
nodes, which provide random entropy. One should follow YOUR rules,
and can only be used on desktop machines with humans typing and
mousing at the console (that's your proposition for "random").
The other is useful for random numbers for keys and such (that's
our current "random"). The last is our old urandom.

Roger.

--
** [email protected] ** http://www.BitWizard.nl/ ** +31-15-2600998 **
*-- BitWizard writes Linux device drivers for any device you may have! --*
* There are old pilots, and there are bold pilots.
* There are also old, bald pilots.

2002-08-21 12:43:38

by Oliver Xymoron

[permalink] [raw]
Subject: Re: [PATCH] (0/4) Entropy accounting fixes

On Wed, Aug 21, 2002 at 10:44:10AM +0200, Rogier Wolff wrote:
> On Sat, Aug 17, 2002 at 08:08:36PM -0700, Linus Torvalds wrote:
> >
> > On Sat, 17 Aug 2002, Oliver Xymoron wrote:
> > >
> > > Let me clarify that 2-5 orders thing. The kernel trusts about 10 times
> > > as many samples as it should, and overestimates each sample's entropy
> > > by about a factor of 10 (on x86 with TSC) or 1.3 (using 1kHz jiffies).
> >
> > Looking at the code, your _new_ code just throws samples away _entirely_
> > just because some random event hasn't happened (the first thing I noticed
> > was the context switch testing, there may be others there that I just
> > didn't react to).
>
> Oliver,
>
> Let me state that with a proper mixing function you should always
> mix in possible entropy sources, even if they CAN be controlled
> from the outside.
>
> If you mistrust the source, feel free to add (almost) zero to the
> "proven entropy".
>
> Now, how about keeping both a conservative and a bit more liberal
> count of the entropy in the pool? Then we can have three device
> nodes, which provide random entropy. One should follow YOUR rules,
> and can only be used on desktop machines with humans typing and
> mousing at the console (that's your proposition for "random").
> The other is useful for random numbers for keys and such (that's
> our current "random"). The last is our old urandom.

My current version adds a sysctl that lets you assign between 0.01 and 1
bits to untrusted sources, defaulting to 0. Headless folks without
hardware RNGs can use it to get ample amounts of entropy from network
packets (together with patches that turn on SA_SAMPLE_RANDOM for
NICs). I believe this should be acceptable to all parties - I'll
resend sometime soon.

--
"Love the dolphins," she advised him. "Write by W.A.S.T.E.."

2002-08-22 03:39:32

by daw

[permalink] [raw]
Subject: Re: [PATCH] (0/4) Entropy accounting fixes

Linus Torvalds wrote:
>There are _real_ reasons why a firewall box (ie one
>that probably comes with a flash memory disk, and runs a small web-server
>for configuration) would want to have strong random numbers (exactly for
>things like generating host keys when asked to by the sysadmin), yet you
>seem to say that such a user would have to use /dev/urandom.

Such users should be using /dev/urandom anyway.
Most apps I've seen that use /dev/random do so out of confusion,
not because /dev/random is the right thing to use.

2002-08-22 03:37:42

by daw

[permalink] [raw]
Subject: Re: [PATCH] (0/4) Entropy accounting fixes

Linus Torvalds wrote:
>On the other hand, if you are _too_ anal you won't consider _anything_
>"truly random", and /dev/random becomes practically useless on things that
>don't have special randomness hardware.

Well, /dev/random never was the right interface for most applications, and
this is arguably the real source of the problem. For most applications,
what you want is something like /dev/urandom (possibly a version that
doesn't deplete all the true randomness available for /dev/random). Very
few applications need true randomness; for most, cryptographic-quality
pseudorandomness should suffice.

1 bit of true randomness a minute should be more than sufficient for most
real applications. (That means you can catastrophically reseed with 128
bits once every two hours, which sounds good to me.)

2002-08-22 03:45:35

by daw

[permalink] [raw]
Subject: Re: [PATCH] (0/4) Entropy accounting fixes

Linus Torvalds wrote:
>If you make /dev/random useless ("but you can use /dev/urandom instead")
>then we should not have it.

There's a purpose for /dev/random, but it's a special case.
For the common case, /dev/urandom is a much better choice
than /dev/random. Sadly, the naming was poorly chosen, IMHO;
I feel /dev/urandom ought to be the default choice, and /dev/random
should be only for those who know what they're doing and have
a special need.

The special case where /dev/random is needed is applications
that (1) are managing their own user-level randomness pools,
(2) need forward security, and (3) can't rely on the kernel to
provide the forward security.

In more detail:
(1) Why would an app want to manage its own randomness pool?
I don't know, but possibly for better performance.
(2) Suppose someone breaks into your firewall / IPSec gateway.
You don't want them to be able to muck through memory and
deduce all past and future session keys. The solution?
Once a day, you collect 128 bits of true randomness and do
a "catastrophic reseed": you hash them into the pool all at
once. Then you can be sure that anyone who breaks into your
machine won't be able to get more than a day's worth of past
IPSec keys, and once you kick the hacker off, they won't be
able to predict more than a day's worth of future keys.
(3) The kernel should be doing catastrophic reseeding of
/dev/urandom's pool. However, if the application manages
its own randomness pool, it will need to take responsibility
for doing its own catastrophic reseeding.
If all these conditions are satisfied, the application will need
true randomness, so that it can do its catastrophic reseeding.
But this doesn't seem to be the common case, as far as I can tell.
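
A minimal sketch of the reseeding step itself, in Python - the pool
representation, and os.urandom standing in for the true-randomness
source, are illustrative assumptions:

import hashlib, os

def catastrophic_reseed(pool_state, fresh_bits):
    # Fold a full reseed's worth of true randomness into the pool in
    # one step; dribbling the same bits in one at a time would let an
    # attacker who knows the old state brute-force each transition.
    assert len(fresh_bits) * 8 >= 128
    return hashlib.sha1(pool_state + fresh_bits).digest()

state = hashlib.sha1(b"initial pool contents").digest()
state = catastrophic_reseed(state, os.urandom(16))  # 128 bits, daily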

2002-08-23 19:50:32

by Pavel Machek

[permalink] [raw]
Subject: Re: [PATCH] (0/4) Entropy accounting fixes

Hi!

> > What entropy can be measured from disk timings are very often leaked
> > by immediately relaying data to web, shell, or X clients. Further,
> > patterns of drive head movement can be remotely controlled by clients
> > talking to file and web servers. Thus, while disk timing might be an
> > attractive source of entropy, it can't be used in a typical server
> > environment without great caution.
>
> This is something to be concerned about, to be sure. But generally a
> client won't have complete control of the drive head movement ---
> there are other clients involved --- and the adversary generally won't
> have complete knowledge of the block allocation of files, for example,
> so he/she would not be able to characterize the disk drive timings to
> the degree of accuracy required.

You seem to assume that the adversary is on a remote system. That's not
necessarily the case.

Imagine you want to generate a gpg key while I have a normal user account
on the machine. I can sample /proc/interrupts, measure disk speeds, etc.
Perhaps we are alone on an [otherwise idle] machine. You still don't want
me to be able to guess your key.

Pavel
--
Philips Velo 1: 1"x4"x8", 300gram, 60, 12MB, 40bogomips, linux, mutt,
details at http://atrey.karlin.mff.cuni.cz/~pavel/velo/index.html.

2002-08-23 19:50:32

by Pavel Machek

[permalink] [raw]
Subject: Re: [PATCH] (0/4) Entropy accounting fixes

Hi!

> And your argument that there is zero randomness in the TSC _depends_ on
> your ability to perfectly estimate what the TSC is. If you cannot do it,
> there is obviously at least one bit of randomness there. So I don't think
> your "zero" is a good conservative estimate.

Actually, no. If something is not predictable it does not mean >= 1 bit.

There is < half a bit of information in "this human is older than 100
years".

There's about 7.3 bits per word in an English sentence. Etc.; fractional
bits exist.
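
For reference, the entropy of a yes/no event with probability p is
-p*log2(p) - (1-p)*log2(1-p). A quick check in Python (the 1-in-5000
odds for the centenarian example are just an illustrative guess):

import math

def h(p):
    # Entropy, in bits, of a yes/no event with probability p.
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

print(h(0.0002))  # "older than 100 years": ~0.003 bits
print(h(0.5))     # a fair coin flip: exactly 1 bit
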
Pavel
--
Philips Velo 1: 1"x4"x8", 300gram, 60, 12MB, 40bogomips, linux, mutt,
details at http://atrey.karlin.mff.cuni.cz/~pavel/velo/index.html.

2002-08-23 20:11:02

by Linus Torvalds

[permalink] [raw]
Subject: Re: [PATCH] (0/4) Entropy accounting fixes


On Fri, 2 Nov 2001, Pavel Machek wrote:
>
> Actually, no. If something is not predictable it does not mean >= 1 bit.

I agree. It might be less than one bit. However, I suspect that you can
basically almost never predict the _exact_ TSC value, which implies that
there is one or more bits of true randomness there.

If you can predict it (exactly) a noticeable fraction of the time, there
is clearly less than one bit of randomness.

To some degree it _should_ be fairly easy to test this even without a GHz
scope - just put two totally idle machines, connect them directly with a
cross-over cable, and make one of them send packets as quickly as it can
using something like "ping -f" with no route back to the sender (ie make
the ethernet card on the sending machine be your "exact clock" for sending
purposes).

Then record the stream of TSC on the receiving machine, and if you can
generate a function to predict a noticeable percentage of them exactly,
you've shown that it's much less than 1 bit of information.

Maybe somebody would find this an interesting exercise.
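
A rough way to score the recorded stream in Python: estimate min-entropy
from how often the single best constant guess hits the next delta
exactly. The sample values below are made up, and a real analysis would
try smarter predictors than a constant:

import math
from collections import Counter

def min_entropy_per_delta(tsc_samples):
    deltas = [b - a for a, b in zip(tsc_samples, tsc_samples[1:])]
    # Fraction of deltas hit by the most common value, i.e. by the
    # best constant predictor.
    p = Counter(deltas).most_common(1)[0][1] / len(deltas)
    return -math.log2(p)

print(min_entropy_per_delta([0, 1000, 2003, 3001, 4000, 5002, 6001]))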

Linus

2002-09-08 03:37:27

by D. Hugh Redelmeier

[permalink] [raw]
Subject: Re: [PATCH] (0/4) Entropy accounting fixes

| From: Alan Cox <[email protected]>
| Date: 19 Aug 2002 13:29:10 +0100

| On Mon, 2002-08-19 at 11:47, Marco Colombo wrote:
| > BTW, I know you wrote the amd768-rng driver, I wonder if you have any
| > indication of how good these rng are. What is the typical output bits/
| > random bits ratio in normal applications?
|
| It seems random. People have subjected both the intel and AMD one to
| statistical test sets. I'm not a cryptographer or a statistician so I
| can't answer usefully

[Note: I am not a cryptographer.]

The Intel (and, I assume, the AMD) hardware random generator cannot be
audited. There is no way to tell whether it is really an RNG. These days,
you don't need to be paranoid to lack trust in such devices.

Governments could easily pressure Intel or AMD in a number of ways.
Perhaps other forces could corrupt the RNG implementation.

I've heard that Intel's RNG "whitens" its output in a way that hides
failure and makes analysis difficult. This cannot be turned off.
This doesn't add to my confidence.

Adding the putative RNG's output to the pool is a Good Thing.
Depending on it may not be.

Hugh Redelmeier
[email protected] voice: +1 416 482-8253

2002-09-08 03:45:47

by D. Hugh Redelmeier

[permalink] [raw]
Subject: Re: [PATCH] (0/4) Entropy accounting fixes

| From: Oliver Xymoron <[email protected]>
| Date: Tue, 20 Aug 2002 12:22:16 -0500

| On Tue, Aug 20, 2002 at 07:19:26PM +0300, Tommi Kyntola wrote:

| > But does a strict gamma assumption
| > really lead to figures as strict as those in your patch:
| > static int benford[16]={0,0,0,1,2,3,4,5,5,6,7,7,8,9,9,10};
| >
| > Deltas of 0..7 don't contain a single full bit of entropy?
|
| They have fractional bits of entropy.

If entropy is added in such small amounts, should the entropy counter
be denominated in, say, 1/4 bits? Would this allow the entropy
estimate to safely grow significantly faster? Are the estimates
accurate enough to justify such precision?

Hugh Redelmeier
[email protected] voice: +1 416 482-8253

2002-09-08 04:27:27

by Oliver Xymoron

[permalink] [raw]
Subject: Re: [PATCH] (0/4) Entropy accounting fixes

On Sat, Sep 07, 2002 at 11:51:47PM -0400, D. Hugh Redelmeier wrote:
> | From: Oliver Xymoron <[email protected]>
> | Date: Tue, 20 Aug 2002 12:22:16 -0500
>
> | On Tue, Aug 20, 2002 at 07:19:26PM +0300, Tommi Kyntola wrote:
>
> | > But does a strict gamma assumption
> | > really lead to figures as strict as those in your patch:
> | > static int benford[16]={0,0,0,1,2,3,4,5,5,6,7,7,8,9,9,10};
> | >
> | > Deltas of 0..7 don't contain a single full bit of entropy?
> |
> | They have fractional bits of entropy.

Turns out that the analysis that led to the above numbers was subtly
flawed. Together with Arvend Bayer, I've come up with a new analysis
where the "effect" falls off surprisingly rapidly. In the end, an
n-bit number will have n-2 bits of entropy. I throw away a third bit
just to be safe.

> If entropy is added in such small amounts, should the entropy counter
> be denominated in, say, 1/4 bits?

If entropy were truly scarce, we could do that. For folks who are
getting starved, my current patch set has a tunable called trust_pct,
which effectively lets you mix in untrusted entropy in hundredths of a
bit.
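
The bookkeeping for that is simple. Here's a sketch of one plausible
scheme - trust_pct is the tunable's real name, but the mechanics below
are my own illustration; the kernel would do this with integer
arithmetic under the pool lock:

class EntropyAccount:
    # Accumulate untrusted samples in hundredths of a bit; report only
    # whole bits to the pool's entropy counter.
    def __init__(self, trust_pct=0):
        self.trust_pct = trust_pct  # hundredths of a bit per sample
        self.hundredths = 0

    def add_untrusted_sample(self):
        self.hundredths += self.trust_pct
        bits, self.hundredths = divmod(self.hundredths, 100)
        return bits  # whole bits to credit for this sample

acct = EntropyAccount(trust_pct=3)  # credit 0.03 bits per sample
print(sum(acct.add_untrusted_sample() for _ in range(1000)))  # -> 30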

> Would this allow the entropy estimate to safely grow significantly
> faster? Are the estimates accurate enough to justify such precision?

We'd need to be able to do a non-integer log2 to do much better.
That's putting too much faith in the model, which is just some
handwaving that keyboard and mouse interrupts, etc., ought to follow
the magical "distribution of distributions". Fortunately, you can vary
quite a bit from that distribution into various gamma forms and the
like and still fall within a bit or so of the presumed entropy, which
is why I've added another bit of safety margin.

I'll repost my patches as soon as I take a swing at the /dev/random vs
/dev/urandom pool business.

--
"Love the dolphins," she advised him. "Write by W.A.S.T.E.."

2002-09-08 18:15:13

by daw

[permalink] [raw]
Subject: Re: [PATCH] (0/4) Entropy accounting fixes

D. Hugh Redelmeier wrote:
>The Intel (and, I assume, the AMD) hardware random generator cannot be
>audited.

I disagree. The Intel RNG can be audited, and it has been audited.

The Intel RNG works like this. In hardware, there is a noise source,
which outputs bits that may be somewhat biased. Then, also in hardware,
there is a von Neumann stage that does a little bit of conditioning to
reduce the bias slightly. Finally, in software, the Intel driver applies
SHA-1 to do heavy bit-mixing and pseudorandomization.

Of course, running randomness tests after the SHA-1 stage will tell
you nothing. You could run SHA-1 on a counter and it would pass DIEHARD.
So, don't do that.

Rather, to audit the Intel RNG, the first thing to do is to run
statistical tests on the input to SHA-1. Ideally, you'd like to do
this before the von Neumann stage, but since the von Neumann compensator
is in hardware, that's not possible. Fortunately, you can do the
auditing on the output of the von Neumann stage, and this is almost
as good. Because the von Neumann filter does only very light conditioning,
any flaws in the input to the von Neumann stage are likely to be apparent
after the output stage as well, if you have a large number of samples.
Fortunately, because the SHA-1 is in software, this test is feasible.
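
For reference, the von Neumann stage itself is tiny: raw bits are taken
in pairs, 01 emits 0, 10 emits 1, and 00/11 are discarded. That removes
bias from independent bits but does nothing about correlation between
them, which is why it counts as light conditioning. A sketch in Python:

def von_neumann(bits):
    # Emit the first bit of each unequal pair; drop equal pairs.
    # Output rate is at most 1/4 of the input rate.
    return [a for a, b in zip(bits[0::2], bits[1::2]) if a != b]

# A biased but independent source still comes out 50/50:
print(von_neumann([1, 1, 1, 0, 1, 1, 0, 1, 1, 1]))  # -> [1, 0]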

The second thing to do is to look at the design of the hardware noise
source to see whether it looks like a reliable source of random bits.

Both of these tests have been performed. Paul Kocher has looked
carefully at the Intel RNG, and given it high scores. See
http://www.cryptography.com/resources/whitepapers/IntelRNG.pdf

Of course, there are no guarantees. But let's look at the alternatives.
If you pick software-based noise sources, there's always the risk that
they may fail to produce useful entropy. (For instance, you sample the
soundcard, but 5% of machines have no soundcard and hence give no
entropy, or 5% of the time you get back stuff highly correlated to
60Hz AC.) The risk that a software-based noise source fails seems much
higher than the risk that the Intel RNG has a backdoor. And the Intel
RNG seems very unlikely to fail at random. If you're going to rely on
any single source, the Intel RNG seems like by far the most reliable
source around.

Of course, in cryptography you should never be relying on only one noise
source anyway. You should mix as many noise sources together as possible.
Then, as long as the hash is secure, you'll be secure as long as any
one of those noise sources is working, even if the others fail adversarially.
(At least, this should be the case for any well-designed entropy crunching
algorithm.) Given this, there is no reason not to use the Intel RNG,
and every reason to use it. It can only help, not hurt.

So it seems to me that using the Intel RNG is a big win, and the risk
of a backdoor is "in the noise".

2002-09-09 16:48:28

by Oliver Xymoron

[permalink] [raw]
Subject: Re: [PATCH] (0/4) Entropy accounting fixes

On Sun, Sep 08, 2002 at 06:03:09PM +0000, David Wagner wrote:
> D. Hugh Redelmeier wrote:
> >The Intel (and, I assume, the AMD) hardware random generator cannot be
> >audited.

...

> Rather, to audit the Intel RNG, the first thing to do is to run
> statistical tests on the input to SHA-1. Ideally, you'd like to do
> this before the von Neumann stage, but since the von Neumann compensator
> is in hardware, that's not possible. Fortunately, you can do the
> auditing on the output of the von Neumann stage, and this is almost
> as good. Because the von Neumann filter does only very light conditioning,
> any flaws in the input to the von Neumann stage are likely to be apparent
> after the output stage as well, if you have a large number of samples.

This argument assumes you have knowledge of the inner workings of this
step. To the best of my knowledge, no one outside of Intel has cracked
open this chip and actually tested that this black box _does what it
says it's doing_. This is what is meant by auditing.

Randomness tests like DIEHARD are absolutely useless for anything
other than telling you the spectral uniformity of a source, which is
no indication as to whether it's deterministic or not.

> Both of these tests have been performed. Paul Kocher has looked
> carefully at the Intel RNG, and given it high scores. See
> http://www.cryptography.com/resources/whitepapers/IntelRNG.pdf

What right-thinking paranoid would place any faith in an analysis with
an Intel copyright? This is practically marketing fluff anyway.

> Of course, there are no guarantees. But let's look at the alternatives.
> If you pick software-based noise sources, there's always the risk that
> they may fail to produce useful entropy. (For instance, you sample the
> soundcard, but 5% of machines have no soundcard and hence give no
> entropy, or 5% of the time you get back stuff highly correlated to
> 60Hz AC.) The risk that a software-based noise source fails seems much
> higher than the risk that the Intel RNG has a backdoor.

But we can actually audit the former and decide whether to trust it.
For the Intel part, we only have faith. If you're one of the numerous
governments that's bought crypto solutions from respectable
corporations for your diplomatic communications that later turned out
to be backdoored, that faith doesn't have much currency. See Lotus
Notes and Crypto AG for two of the more notorious cases.

--
"Love the dolphins," she advised him. "Write by W.A.S.T.E.."

2002-09-09 17:10:54

by daw

[permalink] [raw]
Subject: Re: [PATCH] (0/4) Entropy accounting fixes

Oliver Xymoron wrote:
>This argument assumes you have knowledge of the inner workings of this
>step. To the best of my knowledge, no one outside of Intel has cracked
>open this chip and actually tested that this black box _does what it
>says it's doing_. This is what is meant by auditing.

Yes, I agree. The tests are only useful if you trust that Intel is
not maliciously out to get you. For instance, they are useful if you
believe that Intel is well-intentioned but fallible (which strikes me
as likely to be the right threat model for most ordinary Linux users).

Whether you like it or not, you're already trusting Intel, if you're
using an Intel chip. If Intel were malicious and out to get you, they
could have put a backdoor in the chip. And a RNG is *much* easier to
reverse-engineer and audit than an entire CPU, so it would probably be
riskier for Intel to hide a backdoor in the RNG than in, say, the CPU.

>What right-thinking paranoid would place any faith in an analysis with
>an Intel copyright? This is practically marketing fluff anyway.

Marketing fluff? I take it that's a joke: I wish the marketing fluff
I get had this much technical content and documented its experimental
procedure like this.

Anyway, I place some trust in the analysis not because of who owns its
copyright (I mean, come on) but because it has Paul Kocher's name on it.
His reputation is excellent. He is one of the top two or three in the
business. I know him personally, and I don't believe he would place
his stamp of approval on a RNG he knows to be broken.

Obviously Intel paid for this analysis to be performed -- that's how the
security consulting business works -- but that doesn't mean Intel paid
for preferential, biased treatment. Frankly, Intel probably couldn't
afford it. I trust Paul Kocher to do an impartial, independent analysis,
because I've seen him do it before. Intel probably couldn't pay him
enough to make up for the amount of money he'd lose if he were caught
"cheating".

Again, this review is no guarantee. There are still lots of things that
could go wrong. Maybe there is a flaw that Paul Kocher overlooked.
Maybe there is a secret backdoor. Maybe Intel changed the design
of their RNG since the review and inadvertently introduced a defect.
But Kocher's review raises the level of assurance we can have.

Bottom line: I claim that the Intel RNG is better studied than most
other entropy sources available on the typical PC. I challenge you to
find a review like this for, say, soundcard entropy.

>But we can actually audit the former and decide whether to trust it.

I'm not sure what makes a soundcard any easier to audit than the Intel
RNG. Our soundcards could have a secret hardware backdoor hidden in
them, too.

Anyway, as I said in my previous email, you shouldn't be using only
a single entropy source in any case. Use both the soundcard and the
Intel RNG. They have different failure modes, and the risk that they
both fail is smaller than the risk that just one will fail.

And, if you're a government agency protecting classified information,
you probably have other RNG sources at your disposal and would be unlikely
to use the Linux RNG in any case.

2002-09-09 18:50:09

by Kent Borg

[permalink] [raw]
Subject: Re: [PATCH] (0/4) Entropy accounting fixes

Most of this thread is about on-going sources of entropy, but what
about initial state? Yes, distributions such as Red Hat save the pool
when shutting down and restore it on booting. But what about initial
initial state? What about those poor headless servers that get built
by Kickstart? What about the individual blades in a server? What
about poor embedded routers that are really impoverished: Nothing but
a CPU, timers, ethernet hardware, and ROM. And DRAM!

Why not use the power-on state of DRAM as a source of initial pool
entropy?

Sure, DRAM isn't very random in its power-up state, but say it is as
much as 5-nines predictable (which I seriously doubt it is); that is
still over 300 bytes of valuable entropy for a box with a mere 4MB of
DRAM. I could easily imagine the number to be 10 times that: if just
one in 8192 bits changed from one power-up to the next, that would
match the full 4096 bits currently kept in the Linux entropy pool.
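
The arithmetic is easy to check in Python, taking the 5-nines figure at
face value and (optimistically) treating the bits as independent:

import math

def h(p):
    # Entropy, in bits, of a single bit that flips with probability p.
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

bits = 4 * 2**20 * 8       # 4MB of DRAM
print(bits * h(1e-5) / 8)  # ~757 bytes of entropy - well over 300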

Even if a given DRAM chip doesn't change very much from one boot to
the next, different DRAM chips (in different boxes), will vary from
each other by much more and that is useful.

And, even if a given DRAM chip doesn't vary much from one power-on to
another 10 minutes later, DRAM does vary over time: some paranoids
worry that keys stored in RAM might be recoverable because of long-term
changes to the chips. If this is a reasonable worry, then isn't
this kind of imprinting also another source of uniqueness for a given
box? Yes, if you let the Bad Guy analyze your RAM you lose much of
this benefit, but if you let the Bad Guy have your hardware to play
with you are in bigger trouble.

Anyone know of measurements of DRAM power-on variability?

Wouldn't everyone benefit from initializing the pool with a hash of
(at least some of--don't want to be too slow) RAM? Even big
installations that save state on disk shouldn't mind. And make this a
configuration option for those embedded folks who have a slow CPU and
a need for fast booting (and a disinterest in entropy).


-kb

2002-09-09 19:42:27

by Oliver Xymoron

[permalink] [raw]
Subject: Re: [PATCH] (0/4) Entropy accounting fixes

On Mon, Sep 09, 2002 at 04:58:50PM +0000, David Wagner wrote:
>
> Whether you like it or not, you're already trusting Intel, if you're
> using an Intel chip. If Intel were malicious and out to get you, they
> could have put a backdoor in the chip. And a RNG is *much* easier to
> reverse-engineer and audit than an entire CPU, so it would probably be
> riskier for Intel to hide a backdoor in the RNG than in, say, the CPU.

Not sure I buy that. Consider that with a CPU, you control the inputs,
you've got a complete spec of the core functionality, you've got a
huge testbed of code to test that functionality, and you've got a
whole industry of people reverse engineering your work.

More to the point, the backdoor we're worried about is one that
compromises our encryption keys to passive observers. Doing that by
making the RNG guessable is relatively easy. On the other hand
determining whether a given snippet of code is doing RSA, etc. is
equivalent to solving the halting problem, so it seems to me pretty
damn hard to usefully put this sort of back door into a CPU without
sacrificing general-purpose functionality.

For other types of back doors, it seems likely that the NSA would go
straight to the OS vendor, or, given the state of mainstream OSes,
just use existing holes.

> >What right-thinking paranoid would place any faith in an analysis with
> >an Intel copyright? This is practically marketing fluff anyway.
>
> Marketing fluff? I take it that's a joke: I wish the marketing fluff
> I get had this much technical content and documented its experimental
> procedure like this.

Yes, I was being hyperbolic.

--
"Love the dolphins," she advised him. "Write by W.A.S.T.E.."

2002-09-09 19:52:53

by Oliver Xymoron

[permalink] [raw]
Subject: Re: [PATCH] (0/4) Entropy accounting fixes

On Mon, Sep 09, 2002 at 02:54:51PM -0400, Kent Borg wrote:
> Most of this thread is about on-going sources of entropy, but what
> about initial state? Yes, distributions such as Red Hat save the pool
> when shutting down and restore it on booting. But what about initial
> initial state? What about those poor headless servers that get built
> by Kickstart? What about the individual blades in a server? What
> about poor embedded routers that are really impoverished: Nothing but
> a CPU, timers, ethernet hardware, and ROM. And DRAM!
>
> Why not use the power-on state of DRAM as a source of initial pool
> entropy?

There's actually not a lot there, and what is there is not random, but
rather a rapidly fading "ghost" of what was there last. And for most
folks, this gets wiped by POST or the kernel long before the RNG gets
its hands on it.

Nonetheless, there's no reason not to take whatever state we get when
we allocate our pool. Amusingly, the current code needlessly zeroes out
its pool before using it - twice! I've already fixed this in my patches.

--
"Love the dolphins," she advised him. "Write by W.A.S.T.E.."

2002-09-09 20:07:15

by Kent Borg

[permalink] [raw]
Subject: Re: [PATCH] (0/4) Entropy accounting fixes

On Mon, Sep 09, 2002 at 02:57:34PM -0500, Oliver Xymoron wrote:
> > Why not use the power-on state of DRAM as a source of initial pool
> > entropy?
>
> There's actually not a lot there, and what is there is not random, but
> rather a rapidly fading "ghost" of what was there last.

Which means it is not completely predictable, which means there is
going to be some entropy in there...

> And for most folks, this gets wiped by POST or the kernel long
> before the RNG gets its hands on it.

Well now, *that's* a pain.

But for us embedded folks who control our boot ROM there is the
potential to grab some entropy from that melting ghost.

> Nonetheless, there's no reason not to take whatever state we get
> when we allocate our pool. Amusingly, the current code needlessly
> zeroes out its pool before using it - twice! I've already fixed this
> in my patches.

Perfect! So if some embedded-types do manage to dig up some entropy
from the end/beginning of the world, we will have a place to put it
where it will be put to use. Cool.


Thanks,

-kb, the Kent who suggests a comment left in that code might prevent
someone from "fixing" it at a later date.

2002-09-09 23:34:30

by daw

[permalink] [raw]
Subject: Re: [PATCH] (0/4) Entropy accounting fixes

Oliver Xymoron wrote:
>On Mon, Sep 09, 2002 at 04:58:50PM +0000, David Wagner wrote:
>> Whether you like it or not, you're already trusting Intel, if you're
>> using an Intel chip. If Intel were malicious and out to get you, they
>> could have put a backdoor in the chip. And a RNG is *much* easier to
>> reverse-engineer and audit than an entire CPU, so it would probably be
>> riskier for Intel to hide a backdoor in the RNG than in, say, the CPU.
>
>Not sure I buy that. Consider that with a CPU, you control the inputs,
>you've got a complete spec of the core functionality, you've got a
>huge testbed of code to test that functionality, and you've got a
>whole industry of people reverse engineering your work.

There's no guarantee that the CPU behaves deterministically.
Maybe on April 1, 2005 (or after executing a magic sequence of
1000 instructions) it starts ignoring all privilege and permission
bits. Good luck finding that through mere testing.

>More to the point, the backdoor we're worried about is one that
>compromises our encryption keys to passive observers. Doing that by
>making the RNG guessable is relatively easy. On the other hand
>determining whether a given snippet of code is doing RSA, etc. is
>equivalent to solving the halting problem, so it's seems to me pretty
>damn hard to usefully put this sort of back door into a CPU without
>sacrificing general-purpose functionality.

Ok, I agree: The RNG is the most natural, and one of the more
devastating places, to put a backdoor. (There are other ways
of putting a backdoor in a CPU other than scanning to identify
RSA code, though.)

2002-09-16 22:46:59

by dean gaudet

[permalink] [raw]
Subject: Re: [PATCH] (0/4) Entropy accounting fixes

On Mon, 9 Sep 2002, Oliver Xymoron wrote:

> making the RNG guessable is relatively easy. On the other hand
> determining whether a given snippet of code is doing RSA, etc. is
> equivalent to solving the halting problem, so it seems to me pretty
> damn hard to usefully put this sort of back door into a CPU without
> sacrificing general-purpose functionality.

while the general problem is certainly halting-problem level of
complexity, there's a much more simple problem which amounts to string
matching. the simple problem is "is this a specific portion of openssl /
cryptoapi / whatever?"

if you consider a technology like transmeta's which only has to
compile/translate code infrequently (rather than a traditional technology
with decoders running all the time) then it's pretty easy to see how you
could use a few cycles to do the string matching.

people have been doing this in compilers for years, where the string
matching question is "is this part of SPECCPU?"

-dean

2002-09-17 01:13:40

by Oliver Xymoron

[permalink] [raw]
Subject: Re: [PATCH] (0/4) Entropy accounting fixes

On Mon, Sep 16, 2002 at 03:51:56PM -0700, dean gaudet wrote:
> On Mon, 9 Sep 2002, Oliver Xymoron wrote:
>
> > making the RNG guessable is relatively easy. On the other hand
> > determining whether a given snippet of code is doing RSA, etc. is
> > equivalent to solving the halting problem, so it seems to me pretty
> > damn hard to usefully put this sort of back door into a CPU without
> > sacrificing general-purpose functionality.
>
> while the general problem is certainly halting-problem level of
> complexity, there's a much more simple problem which amounts to string
> matching. the simple problem is "is this a specific portion of openssl /
> cryptoapi / whatever?"
>
> if you consider a technology like transmeta's which only has to
> compile/translate code infrequently (rather than a traditional technology
> with decoders running all the time) then it's pretty easy to see how you
> could use a few cycles to do the string matching.

If you're the compiler, it's pretty damn easy. If you're the CPU
watching the instruction stream generated by an unknown compiler for a
lengthy piece of code with context switches and interrupts going on,
it's back to being nontrivial again. It's simply much easier to
backdoor the RNG.

--
"Love the dolphins," she advised him. "Write by W.A.S.T.E.."