2001-10-01 17:00:02

by Andreas Dilger

Subject: /dev/random entropy calculations broken?

On Oct 01, 2001 11:52 +0200, Florian Weimer wrote:
> [email protected] (David Wagner) writes:
> > Incrementing the entropy counter based on externally observable
> > values is dangerous.
>
> How do you want to collect any entropy with such a requirement in
> place? Computers tend to send out a lot of information on the air.
>
> BTW, I still think that the entropy estimate for mouse movements is
> much too high. And the compression function used probably doesn't
> have the intended property.

Has anyone even checked whether the current entropy estimates even work
properly? I was testing this, and it appears something is terribly wrong.
Check /proc/sys/kernel/random/entropy_avail. On a system that has been
running any length of time, it should be 4096 (512 bytes * 8 bits of
entropy for a full pool).

Now, "dd if=/dev/random bs=16 count=1 | wc -c" and check entropy_avail
again. It "loses" thousands of bits of entropy for generating 16 bytes
(128 bits) of random data. Same thing happens with /dev/urandom consuming
the available entropy.

Now if you do the above test on /dev/random several times in a row, you see
that you really HAVE used up the entropy, because it will return a number
of bytes less than what you requested. At this point, however, it is at
least consistent, returning a number of bytes = entropy_avail/8.
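The bookkeeping behind this experiment can be sketched in C (a hedged sketch: the /proc path is Linux-specific, the helper names are mine, and entropy accounting has changed in later kernels, so the exact numbers will differ from 2001 behaviour):

```c
#include <stdio.h>

/* Read the kernel's current entropy estimate, in bits (-1 on failure). */
static long entropy_avail(void)
{
    long bits = -1;
    FILE *f = fopen("/proc/sys/kernel/random/entropy_avail", "r");
    if (f) {
        if (fscanf(f, "%ld", &bits) != 1)
            bits = -1;
        fclose(f);
    }
    return bits;
}

/* With correct accounting, reading nbytes should debit exactly
 * nbytes * 8 bits from the pool -- no more. */
static long expected_debit(long nbytes)
{
    return nbytes * 8;
}
```

Calling entropy_avail() before and after the dd command should show a drop of roughly expected_debit(16) = 128 bits once the accounting is fixed, rather than the thousands of bits observed above.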

Ted, any ideas about this? I'm just looking through the code to see where
the entropy is counted, and where it goes. It _may_ be a bug in the
entropy_avail value itself, but then why the short reads? The entropy_avail
values are at least consistent, in that they grow slowly only when
kb/mouse/disk activity happens, and are constant otherwise.

This may be a major source of problems for entropy-poor environments, since
you will basically only be able to read a single random value from /dev/random
before the pool "dries up", regardless of the pool size (I tried with a
4096-byte pool, and the same problem happens). Hence, in such systems there
would be more of a desire to use the "less secure" network interrupts for
entropy.

Cheers, Andreas

PS - For systems which have _some_ entropy, but just not very much, it is
possible to increase the size of the pool (if it actually worked ;-)
so that you can save entropy for periods of high demand. You can do
this by "echo 4096 > /proc/sys/kernel/random/poolsize" (or some other
larger power-of-two size).
--
Andreas Dilger \ "If a man ate a pound of pasta and a pound of antipasto,
\ would they cancel out, leaving him still hungry?"
http://www-mddsp.enel.ucalgary.ca/People/adilger/ -- Dogbert


Subject: Re: /dev/random entropy calculations broken?



--On Monday, 01 October, 2001 10:59 AM -0600 Andreas Dilger
<[email protected]> wrote:

> Has anyone even checked whether the current entropy estimates even work
> properly?

And while we're at it, many things which add to entropy are
observable by non-root users (interrupt timings always; mouse
movements and keypresses whenever a non-root user is logged in at
the console). And entropy is overestimated on some non-x86
platforms due to the lack of fine-grained timer implementations.
And without Robert Love's patch, the choice of whether to source
it from network events is completely arbitrary (NIC dependent).

However, unless one is worried about someone having broken
SHA-1 OR one is worried about annoying blocking behaviour
on read(), I'm not convinced the entropy calculation is
doing anything useful anyway.

People do, however, seem particularly reluctant to accept
any change in this area.

--
Alex Bligh

2001-10-01 22:51:01

by antirez

Subject: Re: /dev/random entropy calculations broken?

On Mon, Oct 01, 2001 at 10:55:26PM +0100, Alex Bligh - linux-kernel wrote:
> However, unless one is worried about someone having broken
> SHA-1 OR one is worried about annoying blocking behavour
> on read(), I'm not convinced the entropy calculation is
> doing anything useful anyway.

I think the /dev/random + /dev/urandom split is perfect
from this point of view. Using /dev/random you can get
random numbers that are secure _even_ if SHA-1 is broken
tomorrow, at the cost of very slow generation.
Or you may trust SHA-1 (or some other crypto primitive)
and avoid having to collect so much entropy to generate
the output.
Probably real-world applications should use
/dev/urandom, assuming it's properly designed, i.e. the
internal state is changed once there is enough entropy
to create a new unguessable key, the entropy isn't
overestimated, and so on.

BTW I agree: instead of creating a key while being paranoid
about SHA-1's security, it's better to double-check that the
entropy sources are OK. For example, the Linux PRNG used
to collect a lot of entropy bits from keyboard
auto-repeat: aaaaaaaaaaaaaa ...
It's useless to try to collect enough entropy (and maybe
overestimate it) to produce an output secure against
possible SHA-1 weaknesses (i.e. generated entirely from
entropy bits). A more conservative approach to entropy
collection, combined with a believed-secure crypto
primitive to produce the PRNG output, is probably better.

After all, unless you are using a one-time pad, the key
generated with /dev/random will probably be used with the
very crypto algorithms you refused to trust.
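The advice above (real-world applications should read /dev/urandom) can be sketched in userspace C; the helper names are mine, and error handling is deliberately minimal:

```c
#include <stdio.h>

/* Fill buf with n bytes from /dev/urandom.
 * Returns 0 on success, -1 on failure. */
static int urandom_bytes(unsigned char *buf, size_t n)
{
    FILE *f = fopen("/dev/urandom", "rb");
    size_t got;
    if (!f)
        return -1;
    got = fread(buf, 1, n, f);
    fclose(f);
    return got == n ? 0 : -1;
}

/* Demo: generate a 16-byte (128-bit) key. */
static int demo_keygen(void)
{
    unsigned char key[16];
    return urandom_bytes(key, sizeof(key));
}
```

Unlike /dev/random, this never blocks waiting for the entropy estimate, which is exactly the trade-off being discussed.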

--
Salvatore Sanfilippo <[email protected]>
http://www.kyuzz.org/antirez
finger [email protected] for PGP key
28 52 F5 4A 49 65 34 29 - 1D 1B F6 DA 24 C7 12 BF

2001-10-02 07:51:57

by Andreas Dilger

Subject: Re: /dev/random entropy calculations broken?

On Oct 01, 2001 10:59 -0600, adilger wrote:
> Has anyone even checked whether the current entropy estimates even work
> properly? I was testing this, and it appears something is terribly wrong.
> Check /proc/sys/kernel/random/entropy_avail. On a system that has been
> running any length of time, it should be 4096 (512 bytes * 8 bits of
> entropy for a full pool).
>
> Now, "dd if=/dev/random bs=16 count=1 | wc -c" and check entropy_avail
> again. It "loses" thousands of bits of entropy for generating 16 bytes
> (128 bits) of random data.
>
> Now if you do the above test on /dev/random several times in a row, you see
> that you really HAVE used up the entropy, because it will return a number
> of bytes less than what you requested. At this point, however, it is at
> least consistent, returning a number of bytes = entropy_avail/8.

OK, after going through the code carefully, it appears that there are
several big problems in entropy accounting in drivers/char/random.c
which essentially mean that any time we read /dev/random or /dev/urandom
or /proc/sys/kernel/random/uuid we would entirely deplete the entropy pool.

While this in itself is not a security problem (we merely discard huge
amounts of entropy on each operation), it could be a problem for systems
that need entropy and don't want it wasted. Similarly, it _may_ be a
factor in "inflating" entropy counts from sources, because we were
throwing so much away that it wouldn't be usable without such high entropy
counts. In my testing, we basically ALWAYS get 9-12 bits of entropy
from each keystroke, disk I/O, and (many, many) mouse interrupts, rather
than dribbling in entropy a few bits at a time.


The below patch fixes these problems, at least to be sane in regards
to accounting of entropy in the system. There are still some issues
to be discussed however.

First fix (minor): in batch_entropy_process(), if the store had the maximum
amount of entropy, it _appeared_ that the remaining entropy should go
into the secondary store. However, the test "entropy_count > max_entropy"
could never be true (entropy_count is clamped in credit_entropy_store()).
Fixed so we add entropy to the secondary store until it is also full, and
then alternate stirring values into both stores (this accumulates more
entropy, and adds more randomness even if we don't account for the entropy).

Second fix (minor): in batch_entropy_process() IF we switched to adding
entropy to the secondary store, we credited this store with entropy
that was actually added to the first store. Also in this case we only
woke up a reader if the _secondary_ store had enough entropy, and not
if the _primary_ store had enough entropy (which is much more likely).

Third fix (minor): in xfer_secondary_pool(), if we request more entropy
than can fit into the secondary pool, we extract entropy from the primary
pool even if the secondary pool is already full. The pool will be
refilled anyways, because it will be depleted after this action. We now
only refill the secondary pool if it is not already full (saving entropy).
We could save even more entropy by only adding as much entropy as we need.

Fourth fix (minor): in xfer_secondary_pool() we credit the secondary pool
with the correct number of BITS taken from the primary pool(*). We were
adding 1/4 of the number of bits taken from the primary to the secondary,
and losing the other 3/4 of the entropy (byte/word mismatch?)(**).

Fifth fix: in extract_entropy() the "redundant but just in case" check was
wrong, comparing entropy bit count and pool words. This had the effect
of losing 31/32 of the random pool on each access. BAD, BAD program!
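The fifth fix can be illustrated with a toy version of the clamp (illustrative values only; the default 512-byte pool is 128 words = 4096 bits):

```c
/* entropy_count is kept in BITS, but the old sanity check clamped it
 * against poolwords (a WORD count), throwing away 31/32 of the counted
 * entropy on every extract. */
static int clamp_buggy(int entropy_count, int poolwords)
{
    if (entropy_count > poolwords)      /* bits compared against words */
        entropy_count = poolwords;
    return entropy_count;
}

static int clamp_fixed(int entropy_count, int poolwords)
{
    if (entropy_count > poolwords * 32) /* both sides in bits */
        entropy_count = poolwords * 32;
    return entropy_count;
}
```

With a full 4096-bit count and a 128-word pool, the buggy clamp drops the count to 128 bits; the fixed one leaves it at 4096.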


(*) It is my assumption that if you extract N bits of entropy from one
pool, you add N bits into the other pool. If this isn't true, then
something needs to be fixed elsewhere. I believe the problem is
that extract_entropy() returns "nbytes" of data regardless of how
much real entropy is in the pool, hence "creating" entropy where
there was none in xfer_secondary_pool(). Either we should return
the REAL amount of entropy-backed data (and fix the urandom return
values), or figure out some way to credit the secondary pool with
only as much entropy as is really available.

(**) When using the SHA hash, we extract 5+80 words = 2720 bits from the
primary pool and "add" it to the secondary pool. However, the
secondary pool is only 1024 bits in size, so we throw half of this
entropy away each time (including extra CPU overhead). Should we
fix the secondary pool to be at least TMP_BUF_SIZE to hold this data?
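The arithmetic behind the (**) note, as a sketch (the word counts are taken from the text above, not verified against any particular kernel version):

```c
/* Each SHA-based transfer moves TMP_BUF_SIZE = 5 + 80 = 85 words from
 * the primary pool, while the secondary pool holds only 32 words. */
enum {
    TMP_BUF_WORDS  = 5 + 80,  /* words extracted per transfer */
    SEC_POOL_WORDS = 32       /* secondary pool size in words */
};

static int bits_transferred(void) { return TMP_BUF_WORDS * 32; }
static int sec_pool_bits(void)    { return SEC_POOL_WORDS * 32; }
```

So 2720 bits are extracted per transfer into a 1024-bit pool, and the excess is simply discarded.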


The second patch is purely cosmetic fixes (added comments and such),
and also exports "generate_random_uuid()" for use in modules (I was
playing around with this in reiserfs as a module, and it needs to
be exported anyways, according to the comments above it).

Cheers, Andreas
===========================================================================
--- linux.orig/drivers/char/random.c Tue Jul 10 17:05:24 2001
+++ linux/drivers/char/random.c Tue Oct 2 00:17:40 2001
@@ -403,6 +403,12 @@
#define MIN(a,b) (((a) < (b)) ? (a) : (b))
#endif

+#if 0
+#define DEBUG_ENT(fmt, arg...) printk(KERN_DEBUG "random: " fmt, ## arg)
+#else
+#define DEBUG_ENT(fmt, arg...) do {} while (0)
+#endif
+
/*
* Unfortunately, while the GCC optimizer for the i386 understands how
* to optimize a static rotate left of x bits, it doesn't know how to
@@ -651,24 +670,23 @@

static void batch_entropy_process(void *private_)
{
- int num = 0;
- int max_entropy;
struct entropy_store *r = (struct entropy_store *) private_, *p;
-
+ int max_entropy = r->poolinfo.poolwords*32;
+
if (!batch_max)
return;

- max_entropy = r->poolinfo.poolwords*32;
+ p = r;
while (batch_head != batch_tail) {
+ if (r->entropy_count >= max_entropy) {
+ r = (r == sec_random_state) ? random_state :
+ sec_random_state;
+ }
add_entropy_words(r, batch_entropy_pool + 2*batch_tail, 2);
- p = r;
- if (r->entropy_count > max_entropy && (num & 1))
- r = sec_random_state;
credit_entropy_store(r, batch_entropy_credit[batch_tail]);
batch_tail = (batch_tail+1) & (batch_max-1);
- num++;
}
- if (r->entropy_count >= random_read_wakeup_thresh)
+ if (p->entropy_count >= random_read_wakeup_thresh)
wake_up_interruptible(&random_read_wait);
}

@@ -1210,12 +1233,19 @@
{
__u32 tmp[TMP_BUF_SIZE];

- if (r->entropy_count < nbytes*8) {
+ if (r->entropy_count < nbytes*8 && r->entropy_count < r->poolsize*32) {
+ DEBUG_ENT("xfer %d from primary to %s (have %d need %d)\n",
+ sizeof(tmp) * 8,
+ r == sec_random_state ? "secondary" : "unknown",
+ r->entropy_count, nbytes * 8);
extract_entropy(random_state, tmp, sizeof(tmp), 0);
add_entropy_words(r, tmp, TMP_BUF_SIZE);
- credit_entropy_store(r, TMP_BUF_SIZE*8);
+ credit_entropy_store(r, TMP_BUF_SIZE*32);
}
if (r->extract_count > 1024) {
+ DEBUG_ENT("reseeding %s with %d from primary\n",
+ r == sec_random_state ? "secondary" : "unknown",
+ sizeof(tmp) * 8);
extract_entropy(random_state, tmp, sizeof(tmp), 0);
add_entropy_words(r, tmp, TMP_BUF_SIZE);
r->extract_count = 0;
@@ -1240,10 +1270,10 @@
__u32 x;

add_timer_randomness(&extract_timer_state, nbytes);
-
+
/* Redundant, but just in case... */
- if (r->entropy_count > r->poolinfo.poolwords)
- r->entropy_count = r->poolinfo.poolwords;
+ if (r->entropy_count > r->poolinfo.poolwords * 32)
+ r->entropy_count = r->poolinfo.poolwords * 32;

if (flags & EXTRACT_ENTROPY_SECONDARY)
xfer_secondary_pool(r, nbytes);



===========================================================================
--- linux.orig/drivers/char/random.c Tue Jul 10 17:05:24 2001
+++ linux/drivers/char/random.c Tue Oct 2 00:17:40 2001
@@ -272,8 +272,8 @@
static int random_write_wakeup_thresh = 128;

/*
- * A pool of size POOLWORDS is stirred with a primitive polynomial
- * of degree POOLWORDS over GF(2). The taps for various sizes are
+ * A pool of size .poolwords is stirred with a primitive polynomial
+ * of degree .poolwords over GF(2). The taps for various sizes are
* defined below. They are chosen to be evenly spaced (minimum RMS
* distance from evenly spaced; the numbers in the comments are a
* scaled squared error sum) except for the last tap, which is 1 to
@@ -552,11 +558,12 @@
0xedb88320, 0xd6d6a3e8, 0x9b64c2b0, 0xa00ae278 };
unsigned i;
int new_rotate;
+ int wordmask = r->poolinfo.poolwords - 1;
__u32 w;

while (num--) {
w = rotate_left(r->input_rotate, *in);
- i = r->add_ptr = (r->add_ptr - 1) & (r->poolinfo.poolwords-1);
+ i = r->add_ptr = (r->add_ptr - 1) & wordmask;
/*
* Normally, we add 7 bits of rotation to the pool.
* At the beginning of the pool, add an extra 7 bits
@@ -569,11 +576,11 @@
r->input_rotate = new_rotate & 31;

/* XOR in the various taps */
- w ^= r->pool[(i+r->poolinfo.tap1)&(r->poolinfo.poolwords-1)];
- w ^= r->pool[(i+r->poolinfo.tap2)&(r->poolinfo.poolwords-1)];
- w ^= r->pool[(i+r->poolinfo.tap3)&(r->poolinfo.poolwords-1)];
- w ^= r->pool[(i+r->poolinfo.tap4)&(r->poolinfo.poolwords-1)];
- w ^= r->pool[(i+r->poolinfo.tap5)&(r->poolinfo.poolwords-1)];
+ w ^= r->pool[(i + r->poolinfo.tap1) & wordmask];
+ w ^= r->pool[(i + r->poolinfo.tap2) & wordmask];
+ w ^= r->pool[(i + r->poolinfo.tap3) & wordmask];
+ w ^= r->pool[(i + r->poolinfo.tap4) & wordmask];
+ w ^= r->pool[(i + r->poolinfo.tap5) & wordmask];
w ^= r->pool[i];
r->pool[i] = (w >> 3) ^ twist_table[w & 7];
}
@@ -586,12 +593,20 @@
{
int max_entropy = r->poolinfo.poolwords*32;

- if (r->entropy_count + num < 0)
+ if (r->entropy_count + num < 0) {
+ DEBUG_ENT("negative entropy/overflow (%d+%d)\n",
+ r->entropy_count, num);
r->entropy_count = 0;
- else if (r->entropy_count + num > max_entropy)
+ } else if (r->entropy_count + num > max_entropy) {
r->entropy_count = max_entropy;
- else
- r->entropy_count = r->entropy_count + num;
+ } else {
+ r->entropy_count += num;
+ if (num)
+ DEBUG_ENT("%s added %d bits, now %d\n",
+ r == sec_random_state ? "secondary" :
+ r == random_state ? "primary" : "unknown",
+ num, r->entropy_count);
+ }
}

/**********************************************************************
@@ -627,6 +642,12 @@
return 0;
}

+/*
+ * Changes to the entropy data are put into a queue rather than being added
+ * to the entropy counts directly. This is presumably to avoid doing heavy
+ * hashing calculations during an interrupt in add_timer_randomness().
+ * Instead, the entropy is only added to the pool once per timer tick.
+ */
void batch_entropy_store(u32 a, u32 b, int num)
{
int new;
@@ -643,12 +664,15 @@
queue_task(&batch_tqueue, &tq_timer);
batch_head = new;
} else {
-#if 0
- printk(KERN_NOTICE "random: batch entropy buffer full\n");
-#endif
+ DEBUG_ENT("batch entropy buffer full\n");
}
}

+/*
+ * Flush out the accumulated entropy operations, adding entropy to the passed
+ * store (normally random_state). If that store has enough entropy, alternate
+ * between randomizing the data of the primary and secondary stores.
+ */
static void batch_entropy_process(void *private_)
{
struct entropy_store *r = (struct entropy_store *) private_, *p;
@@ -1228,9 +1258,12 @@
* bits of entropy are left in the pool, but it does not restrict the
* number of bytes that are actually obtained. If the EXTRACT_ENTROPY_USER
* flag is given, then the buf pointer is assumed to be in user space.
- * If the EXTRACT_ENTROPY_SECONDARY flag is given, then this function will
*
- * Note: extract_entropy() assumes that POOLWORDS is a multiple of 16 words.
+ * If the EXTRACT_ENTROPY_SECONDARY flag is given, then we are actually
+ * extracting entropy from the secondary pool, and can refill from the
+ * primary pool if needed.
+ *
+ * Note: extract_entropy() assumes that .poolwords is a multiple of 16 words.
*/
static ssize_t extract_entropy(struct entropy_store *r, void * buf,
size_t nbytes, int flags)
@@ -1248,6 +1278,11 @@
if (flags & EXTRACT_ENTROPY_SECONDARY)
xfer_secondary_pool(r, nbytes);

+ DEBUG_ENT("%s has %d bits, want %d bits\n",
+ r == sec_random_state ? "secondary" :
+ r == random_state ? "primary" : "unknown",
+ r->entropy_count, nbytes * 8);
+
if (r->entropy_count / 8 >= nbytes)
r->entropy_count -= nbytes*8;
else
@@ -2238,4 +2276,5 @@
EXPORT_SYMBOL(add_interrupt_randomness);
EXPORT_SYMBOL(add_blkdev_randomness);
EXPORT_SYMBOL(batch_entropy_store);
+EXPORT_SYMBOL(generate_random_uuid);

--
Andreas Dilger \ "If a man ate a pound of pasta and a pound of antipasto,
\ would they cancel out, leaving him still hungry?"
http://www-mddsp.enel.ucalgary.ca/People/adilger/ -- Dogbert

2001-10-02 08:10:33

by Andreas Dilger

Subject: Re: /dev/random entropy calculations broken?

On Oct 02, 2001 01:51 -0600, Andreas Dilger wrote:
> - if (r->entropy_count < nbytes*8) {
> + if (r->entropy_count < nbytes*8 && r->entropy_count < r->poolsize*32) {
^^^^^^^^
Doh! I hate it when I do that (compile, test, think, edit, send patch
without the compile, test steps). Should be "poolinfo.poolwords".

Cheers, Andreas
--
Andreas Dilger \ "If a man ate a pound of pasta and a pound of antipasto,
\ would they cancel out, leaving him still hungry?"
http://www-mddsp.enel.ucalgary.ca/People/adilger/ -- Dogbert

2001-10-02 15:35:33

by Oliver Xymoron

Subject: Re: /dev/random entropy calculations broken?

On Tue, 2 Oct 2001, Andreas Dilger wrote:

> Fifth fix: in extract_entropy() the "redundant but just in case" check was
> wrong, comparing entropy bit count and pool words. This had the effect
> of losing 31/32 of the random pool on each access. BAD, BAD program!

> + if (r->entropy_count > r->poolinfo.poolwords * 32)
> + r->entropy_count = r->poolinfo.poolwords * 32;

Damnit, I read that line 30 times yesterday!

While we're on words/bytes/bits confusion, add_entropy_words() seems to
get called with number of bytes rather than words.
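The byte/word conversion Oliver is pointing at can be sketched as (helper name is mine; the second patch in this thread uses the same `(bytes + 3) / 4` rounding):

```c
/* A caller holding a byte count must round up to whole 32-bit words
 * before calling add_entropy_words(); passing the raw byte count would
 * stir in 4x too much data. */
static int bytes_to_words(int nbytes)
{
    return (nbytes + 3) / 4;   /* round up to whole 32-bit words */
}
```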

--
"Love the dolphins," she advised him. "Write by W.A.S.T.E.."

2001-10-02 21:03:02

by Andreas Dilger

Subject: Re: /dev/random entropy calculations broken?

On Oct 02, 2001 10:37 -0500, Oliver Xymoron wrote:
> On Tue, 2 Oct 2001, Andreas Dilger wrote:
> > Fifth fix: in extract_entropy() the "redundant but just in case" check was
> > wrong, comparing entropy bit count and pool words. This had the effect
> > of losing 31/32 of the random pool on each access. BAD, BAD program!
>
> > + if (r->entropy_count > r->poolinfo.poolwords * 32)
> > + r->entropy_count = r->poolinfo.poolwords * 32;
>
> Damnit, I read that line 30 times yesterday!

So did I. It wasn't till my system was spewing debugging output that I
saw where it was going.

> While we're on words/bytes/bits confusion, add_entropy_words() seems to
> get called with number of bytes rather than words.

Makes it that much more random, doesn't it ;-). OK, here is a new version
of the patch. It clears up the parameters to the functions, and makes sure
that we pass the right values to each.

Cheers, Andreas
===========================================================================
--- linux.orig/drivers/char/random.c Tue Jul 10 17:05:24 2001
+++ linux/drivers/char/random.c Tue Oct 2 14:50:19 2001
@@ -272,8 +272,8 @@
static int random_write_wakeup_thresh = 128;

/*
- * A pool of size POOLWORDS is stirred with a primitive polynomial
- * of degree POOLWORDS over GF(2). The taps for various sizes are
+ * A pool of size .poolwords is stirred with a primitive polynomial
+ * of degree .poolwords over GF(2). The taps for various sizes are
* defined below. They are chosen to be evenly spaced (minimum RMS
* distance from evenly spaced; the numbers in the comments are a
* scaled squared error sum) except for the last tap, which is 1 to
@@ -281,49 +281,47 @@
*/
static struct poolinfo {
int poolwords;
+ int poolbits; /* poolwords * 32 */
int tap1, tap2, tap3, tap4, tap5;
} poolinfo_table[] = {
/* x^2048 + x^1638 + x^1231 + x^819 + x^411 + x + 1 -- 115 */
- { 2048, 1638, 1231, 819, 411, 1 },
+ { 2048, 65536, 1638, 1231, 819, 411, 1 },

/* x^1024 + x^817 + x^615 + x^412 + x^204 + x + 1 -- 290 */
- { 1024, 817, 615, 412, 204, 1 },
-
+ { 1024, 32768, 817, 615, 412, 204, 1 },
#if 0 /* Alternate polynomial */
/* x^1024 + x^819 + x^616 + x^410 + x^207 + x^2 + 1 -- 115 */
- { 1024, 819, 616, 410, 207, 2 },
+ { 1024, 32768, 819, 616, 410, 207, 2 },
#endif
-
- /* x^512 + x^411 + x^308 + x^208 + x^104 + x + 1 -- 225 */
- { 512, 411, 308, 208, 104, 1 },

+ /* x^512 + x^411 + x^308 + x^208 + x^104 + x + 1 -- 225 */
+ { 512, 16384, 411, 308, 208, 104, 1 },
#if 0 /* Alternates */
/* x^512 + x^409 + x^307 + x^206 + x^102 + x^2 + 1 -- 95 */
- { 512, 409, 307, 206, 102, 2 },
+ { 512, 16384, 409, 307, 206, 102, 2 },
/* x^512 + x^409 + x^309 + x^205 + x^103 + x^2 + 1 -- 95 */
- { 512, 409, 309, 205, 103, 2 },
+ { 512, 16384, 409, 309, 205, 103, 2 },
#endif

/* x^256 + x^205 + x^155 + x^101 + x^52 + x + 1 -- 125 */
- { 256, 205, 155, 101, 52, 1 },
-
- /* x^128 + x^103 + x^76 + x^51 +x^25 + x + 1 -- 105 */
- { 128, 103, 76, 51, 25, 1 },
+ { 256, 8192, 205, 155, 101, 52, 1 },

+ /* x^128 + x^103 + x^76 + x^51 +x^25 + x + 1 -- 105 */
+ { 128, 4096, 103, 76, 51, 25, 1 },
#if 0 /* Alternate polynomial */
/* x^128 + x^103 + x^78 + x^51 + x^27 + x^2 + 1 -- 70 */
- { 128, 103, 78, 51, 27, 2 },
+ { 128, 4096, 103, 78, 51, 27, 2 },
#endif

/* x^64 + x^52 + x^39 + x^26 + x^14 + x + 1 -- 15 */
- { 64, 52, 39, 26, 14, 1 },
+ { 64, 2048, 52, 39, 26, 14, 1 },

/* x^32 + x^26 + x^20 + x^14 + x^7 + x + 1 -- 15 */
- { 32, 26, 20, 14, 7, 1 },
+ { 32, 1024, 26, 20, 14, 7, 1 },
+
+ { 0, 0, 0, 0, 0, 0, 0 },
+};

- { 0, 0, 0, 0, 0, 0 },
-};
-
/*
* For the purposes of better mixing, we use the CRC-32 polynomial as
* well to make a twisted Generalized Feedback Shift Reigster
@@ -461,6 +459,12 @@
}
#endif

+#if 0
+#define DEBUG_ENT(fmt, arg...) printk(KERN_DEBUG "random: " fmt, ## arg)
+#else
+#define DEBUG_ENT(fmt, arg...) do {} while (0)
+#endif
+
/**********************************************************************
*
* OS independent entropy store. Here are the functions which handle
@@ -545,18 +549,19 @@
* the entropy is concentrated in the low-order bits.
*/
static void add_entropy_words(struct entropy_store *r, const __u32 *in,
- int num)
+ int nwords)
{
static __u32 const twist_table[8] = {
0, 0x3b6e20c8, 0x76dc4190, 0x4db26158,
0xedb88320, 0xd6d6a3e8, 0x9b64c2b0, 0xa00ae278 };
unsigned i;
int new_rotate;
+ int wordmask = r->poolinfo.poolwords - 1;
__u32 w;

- while (num--) {
+ while (nwords--) {
w = rotate_left(r->input_rotate, *in);
- i = r->add_ptr = (r->add_ptr - 1) & (r->poolinfo.poolwords-1);
+ i = r->add_ptr = (r->add_ptr - 1) & wordmask;
/*
* Normally, we add 7 bits of rotation to the pool.
* At the beginning of the pool, add an extra 7 bits
@@ -569,11 +574,11 @@
r->input_rotate = new_rotate & 31;

/* XOR in the various taps */
- w ^= r->pool[(i+r->poolinfo.tap1)&(r->poolinfo.poolwords-1)];
- w ^= r->pool[(i+r->poolinfo.tap2)&(r->poolinfo.poolwords-1)];
- w ^= r->pool[(i+r->poolinfo.tap3)&(r->poolinfo.poolwords-1)];
- w ^= r->pool[(i+r->poolinfo.tap4)&(r->poolinfo.poolwords-1)];
- w ^= r->pool[(i+r->poolinfo.tap5)&(r->poolinfo.poolwords-1)];
+ w ^= r->pool[(i + r->poolinfo.tap1) & wordmask];
+ w ^= r->pool[(i + r->poolinfo.tap2) & wordmask];
+ w ^= r->pool[(i + r->poolinfo.tap3) & wordmask];
+ w ^= r->pool[(i + r->poolinfo.tap4) & wordmask];
+ w ^= r->pool[(i + r->poolinfo.tap5) & wordmask];
w ^= r->pool[i];
r->pool[i] = (w >> 3) ^ twist_table[w & 7];
}
@@ -582,16 +587,22 @@
/*
* Credit (or debit) the entropy store with n bits of entropy
*/
-static void credit_entropy_store(struct entropy_store *r, int num)
+static void credit_entropy_store(struct entropy_store *r, int nbits)
{
- int max_entropy = r->poolinfo.poolwords*32;
-
- if (r->entropy_count + num < 0)
+ if (r->entropy_count + nbits < 0) {
+ DEBUG_ENT("negative entropy/overflow (%d+%d)\n",
+ r->entropy_count, nbits);
r->entropy_count = 0;
- else if (r->entropy_count + num > max_entropy)
- r->entropy_count = max_entropy;
- else
- r->entropy_count = r->entropy_count + num;
+ } else if (r->entropy_count + nbits > r->poolinfo.poolbits) {
+ r->entropy_count = r->poolinfo.poolbits;
+ } else {
+ r->entropy_count += nbits;
+ if (nbits)
+ DEBUG_ENT("%s added %d bits, now %d\n",
+ r == sec_random_state ? "secondary" :
+ r == random_state ? "primary" : "unknown",
+ nbits, r->entropy_count);
+ }
}

/**********************************************************************
@@ -627,6 +638,12 @@
return 0;
}

+/*
+ * Changes to the entropy data are put into a queue rather than being added to
+ * the entropy counts directly. This is presumably to avoid doing heavy
+ * hashing calculations during an interrupt in add_timer_randomness().
+ * Instead, the entropy is only added to the pool once per timer tick.
+ */
void batch_entropy_store(u32 a, u32 b, int num)
{
int new;
@@ -643,32 +660,33 @@
queue_task(&batch_tqueue, &tq_timer);
batch_head = new;
} else {
-#if 0
- printk(KERN_NOTICE "random: batch entropy buffer full\n");
-#endif
+ DEBUG_ENT("batch entropy buffer full\n");
}
}

+/*
+ * Flush out the accumulated entropy operations, adding entropy to the passed
+ * store (normally random_state). If that store has enough entropy, alternate
+ * between randomizing the data of the primary and secondary stores.
+ */
static void batch_entropy_process(void *private_)
{
- int num = 0;
- int max_entropy;
struct entropy_store *r = (struct entropy_store *) private_, *p;
-
+
if (!batch_max)
return;

- max_entropy = r->poolinfo.poolwords*32;
+ p = r;
while (batch_head != batch_tail) {
+ if (r->entropy_count >= r->poolinfo.poolbits) {
+ r = (r == sec_random_state) ? random_state :
+ sec_random_state;
+ }
add_entropy_words(r, batch_entropy_pool + 2*batch_tail, 2);
- p = r;
- if (r->entropy_count > max_entropy && (num & 1))
- r = sec_random_state;
credit_entropy_store(r, batch_entropy_credit[batch_tail]);
batch_tail = (batch_tail+1) & (batch_max-1);
- num++;
}
- if (r->entropy_count >= random_read_wakeup_thresh)
+ if (p->entropy_count >= random_read_wakeup_thresh)
wake_up_interruptible(&random_read_wait);
}

@@ -1210,14 +1228,26 @@
{
__u32 tmp[TMP_BUF_SIZE];

- if (r->entropy_count < nbytes*8) {
- extract_entropy(random_state, tmp, sizeof(tmp), 0);
- add_entropy_words(r, tmp, TMP_BUF_SIZE);
- credit_entropy_store(r, TMP_BUF_SIZE*8);
+ if (r->entropy_count < nbytes * 8 &&
+ r->entropy_count < r->poolinfo.poolbits) {
+ int nwords = MIN(r->poolinfo.poolwords - r->entropy_count/32,
+ sizeof(tmp) / 4);
+
+ DEBUG_ENT("xfer %d from primary to %s (have %d need %d)\n",
+ nwords * 32,
+ r == sec_random_state ? "secondary" : "unknown",
+ r->entropy_count, nbytes * 8);
+
+ extract_entropy(random_state, tmp, nwords, 0);
+ add_entropy_words(r, tmp, nwords);
+ credit_entropy_store(r, nwords * 32);
}
if (r->extract_count > 1024) {
+ DEBUG_ENT("reseeding %s with %d from primary\n",
+ r == sec_random_state ? "secondary" : "unknown",
+ sizeof(tmp) * 8);
extract_entropy(random_state, tmp, sizeof(tmp), 0);
- add_entropy_words(r, tmp, TMP_BUF_SIZE);
+ add_entropy_words(r, tmp, sizeof(tmp) / 4);
r->extract_count = 0;
}
}
@@ -1228,9 +1258,12 @@
* bits of entropy are left in the pool, but it does not restrict the
* number of bytes that are actually obtained. If the EXTRACT_ENTROPY_USER
* flag is given, then the buf pointer is assumed to be in user space.
- * If the EXTRACT_ENTROPY_SECONDARY flag is given, then this function will
*
- * Note: extract_entropy() assumes that POOLWORDS is a multiple of 16 words.
+ * If the EXTRACT_ENTROPY_SECONDARY flag is given, then we are actually
+ * extracting entropy from the secondary pool, and can refill from the
+ * primary pool if needed.
+ *
+ * Note: extract_entropy() assumes that .poolwords is a multiple of 16 words.
*/
static ssize_t extract_entropy(struct entropy_store *r, void * buf,
size_t nbytes, int flags)
@@ -1240,14 +1273,19 @@
__u32 x;

add_timer_randomness(&extract_timer_state, nbytes);
-
+
/* Redundant, but just in case... */
- if (r->entropy_count > r->poolinfo.poolwords)
- r->entropy_count = r->poolinfo.poolwords;
+ if (r->entropy_count > r->poolinfo.poolbits)
+ r->entropy_count = r->poolinfo.poolbits;

if (flags & EXTRACT_ENTROPY_SECONDARY)
xfer_secondary_pool(r, nbytes);

+ DEBUG_ENT("%s has %d bits, want %d bits\n",
+ r == sec_random_state ? "secondary" :
+ r == random_state ? "primary" : "unknown",
+ r->entropy_count, nbytes * 8);
+
if (r->entropy_count / 8 >= nbytes)
r->entropy_count -= nbytes*8;
else
@@ -1543,9 +1581,7 @@
c -= bytes;
p += bytes;

- /* Convert bytes to words */
- bytes = (bytes + 3) / sizeof(__u32);
- add_entropy_words(random_state, buf, bytes);
+ add_entropy_words(random_state, buf, (bytes + 3) / 4);
}
if (p == buffer) {
return (ssize_t)ret;
@@ -1589,7 +1625,7 @@
ent_count = random_state->entropy_count;
if (put_user(ent_count, p++))
return -EFAULT;
-
+
if (get_user(size, p))
return -EFAULT;
if (put_user(random_state->poolinfo.poolwords, p++))
@@ -1598,7 +1634,7 @@
return -EINVAL;
if (size > random_state->poolinfo.poolwords)
size = random_state->poolinfo.poolwords;
- if (copy_to_user(p, random_state->pool, size*sizeof(__u32)))
+ if (copy_to_user(p, random_state->pool, size * 4))
return -EFAULT;
return 0;
case RNDADDENTROPY:
@@ -1845,8 +1881,7 @@
{
min_read_thresh = 8;
min_write_thresh = 0;
- max_read_thresh = max_write_thresh =
- random_state->poolinfo.poolwords * 32;
+ max_read_thresh = max_write_thresh = random_state->poolinfo.poolbits;
random_table[1].data = &random_state->entropy_count;
}
#endif /* CONFIG_SYSCTL */
@@ -2238,4 +2273,5 @@
EXPORT_SYMBOL(add_interrupt_randomness);
EXPORT_SYMBOL(add_blkdev_randomness);
EXPORT_SYMBOL(batch_entropy_store);
+EXPORT_SYMBOL(generate_random_uuid);

--
Andreas Dilger \ "If a man ate a pound of pasta and a pound of antipasto,
\ would they cancel out, leaving him still hungry?"
http://www-mddsp.enel.ucalgary.ca/People/adilger/ -- Dogbert

2001-10-02 21:27:14

by Oliver Xymoron

Subject: Re: /dev/random entropy calculations broken?

On Tue, 2 Oct 2001, Andreas Dilger wrote:

> > While we're on words/bytes/bits confusion, add_entropy_words() seems to
> > get called with number of bytes rather than words.
>
> Makes it that much more random, doesn't it ;-). OK, here is a new version
> of the patch. It clears up the parameters to the functions, and makes sure
> that we pass the right values to each.

Cool. Not sure if I like the introduction of poolbits. My personal
preference would be to s/poolwords/words/ and just use ->words*32, since
foo->foomember is a throwback to pre-ANSI compilers with a flat namespace
for structure members. Note that we don't bother prefixing tap*.

If not, at least put poolbits in the structure first...

> static struct poolinfo {
> int poolwords;
> + int poolbits; /* poolwords * 32 */
> int tap1, tap2, tap3, tap4, tap5;
> } poolinfo_table[] = {
> /* x^2048 + x^1638 + x^1231 + x^819 + x^411 + x + 1 -- 115 */
> - { 2048, 1638, 1231, 819, 411, 1 },
> + { 2048, 65536, 1638, 1231, 819, 411, 1 },
^^^^^
...because it's not as confusing comparing the polynomial in the comment
to the initializer.

--
"Love the dolphins," she advised him. "Write by W.A.S.T.E.."

2001-10-02 22:29:24

by Andreas Dilger

Subject: Re: /dev/random entropy calculations broken?

On Oct 02, 2001 16:29 -0500, Oliver Xymoron wrote:
> Cool. Not sure if I like the introduction of poolbits. My personal
> preference would be to s/poolwords/words/ and just use ->words*32, since
> foo->foomember is a throwback to pre-ANSI compilers with a flat namespace
> for structure members. Note that we don't bother prefixing tap*.

I added poolbits because we were doing poolwords * 32 all the time in the
commonly called functions credit_entropy_store() and batch_entropy_process().
I don't really care either way, except that it makes the code easier to read.

We could always do the following (hackish, but makes code more readable):

#define POOLBITS poolwords*32
#define POOLBYTES poolwords*4
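To illustrate why the trick works (toy code, not from the patch): because the macro body names the struct member, a member access expands textually into the multiplication:

```c
/* "p->POOLBITS" expands to "p->poolwords * 32". The struct here is a
 * toy stand-in for the kernel's struct poolinfo. */
#define POOLBITS  poolwords * 32
#define POOLBYTES poolwords * 4

struct poolinfo_demo {
    int poolwords;
};

static int demo_poolbits(void)
{
    struct poolinfo_demo d = { 128 };  /* 128 words = default 512-byte pool */
    struct poolinfo_demo *p = &d;
    return p->POOLBITS;                /* expands to p->poolwords * 32 */
}
```

The lack of parentheses is part of why it's hackish: an expression like `x / POOLBITS` expands to `x / poolwords * 32`, which parses as `(x / poolwords) * 32`.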

> If not, at least put poolbits in the structure first...
>
> > static struct poolinfo {
> > int poolwords;
> > + int poolbits; /* poolwords * 32 */
> > int tap1, tap2, tap3, tap4, tap5;
> > } poolinfo_table[] = {
> > /* x^2048 + x^1638 + x^1231 + x^819 + x^411 + x + 1 -- 115 */
> > - { 2048, 1638, 1231, 819, 411, 1 },
> > + { 2048, 65536, 1638, 1231, 819, 411, 1 },
> ^^^^^
> ...because it's not as confusing comparing the polynomial in the comment
> to the initializer.

Sorry, I didn't notice that the poolwords was also part of the polynomial.
I'll wait a while before reposting in case of more comments (Ted has been
silent thus far).

Cheers, Andreas
--
Andreas Dilger \ "If a man ate a pound of pasta and a pound of antipasto,
\ would they cancel out, leaving him still hungry?"
http://www-mddsp.enel.ucalgary.ca/People/adilger/ -- Dogbert