2009-06-14 15:51:45

by Matt Mackall

Subject: Re: issue with /dev/random? gets depleted very quick

[cc:ed to lkml]

On Sun, 2009-06-14 at 14:51 +0200, Folkert van Heusden wrote:
> Hi,
>
> On an idle system (no gui, no daemons, nothing), /dev/random gets
> empty in a matter of 20 seconds with a 2.6.26 kernel.
>
> My test:
>
> add 1000 bits to the device:
>
> zolder:/tmp# cat test-RNDADDENTROPY.c
> #include <sys/types.h>
> #include <sys/stat.h>
> #include <fcntl.h>
> #include <sys/ioctl.h>
> #include <stdio.h>
> #include <stdlib.h>
> #include <asm/types.h>
> #include <linux/random.h>
>
> int main(int argc, char *argv[])
> {
> 	struct rand_pool_info *output;
> 	int fd = open("/dev/random", O_WRONLY);
>
> 	if (fd < 0) {
> 		perror("/dev/random");
> 		return 1;
> 	}
>
> 	/* header plus room for the sample buffer; the buffer contents are
> 	   left uninitialized, only the credited count matters for this test */
> 	output = malloc(10000);
> 	output->entropy_count = 1000;	/* credited entropy, in bits */
> 	output->buf_size = 8000;	/* length of output->buf, in bytes */
>
> 	printf("%d\n", ioctl(fd, RNDADDENTROPY, output));
>
> 	return 0;
> }
>
> and then check what is in it:
>
> zolder:/tmp# ./a.out ; while true ; do echo `date` `cat /proc/sys/kernel/random/entropy_avail` ; sleep 1 ; done
> 0
> Sun Jun 14 14:50:44 CEST 2009 1117
> Sun Jun 14 14:50:45 CEST 2009 989
> Sun Jun 14 14:50:46 CEST 2009 925
> Sun Jun 14 14:50:47 CEST 2009 797
> Sun Jun 14 14:50:48 CEST 2009 733
> Sun Jun 14 14:50:49 CEST 2009 605
> Sun Jun 14 14:50:50 CEST 2009 541
> Sun Jun 14 14:50:51 CEST 2009 413
> Sun Jun 14 14:50:52 CEST 2009 349
> Sun Jun 14 14:50:53 CEST 2009 221
> Sun Jun 14 14:50:54 CEST 2009 157
> Sun Jun 14 14:50:55 CEST 2009 157
>
> Is there something wrong with it?

Does it go below 128? If not, that's the behavior of something depleting
the pool down to the anti-starvation threshold via either /dev/urandom
or get_random_bytes.

On my system, I'm seeing that behavior as well. fuser reports a bunch of
processes hold /dev/urandom open, but stracing them doesn't reveal a
culprit. Which means there's now probably something in the kernel
calling get_random_bytes continuously.

Is this a problem? It really shouldn't be. Everyone should be
using /dev/urandom anyway. And the anti-starvation threshold guarantees
that if there's entropy being collected, readers of /dev/random can
always make forward progress.

--
http://selenic.com : development and support for Mercurial and Linux


2009-06-14 19:04:23

by folkert

Subject: Re: issue with /dev/random? gets depleted very quick

> [cc:ed to lkml]

> > On an idle system (no gui, no daemons, nothing), /dev/random gets
> > empty in a matter of 20 seconds with a 2.6.26 kernel.
> > My test:
> > add 1000 bits to the device:
> > zolder:/tmp# cat test-RNDADDENTROPY.c
...
> > }
> > and then check what is in it:
> > zolder:/tmp# ./a.out ; while true ; do echo `date` `cat /proc/sys/kernel/random/entropy_avail` ; sleep 1 ; done
> > 0
> > Sun Jun 14 14:50:44 CEST 2009 1117
...
> > Sun Jun 14 14:50:55 CEST 2009 157
> > Is there something wrong with it?
> Does it go below 128? If not, that's the behavior of something depleting
> the pool down to the anti-starvation threshold via either /dev/urandom
> or get_random_bytes.

No, it stays above 128. Sometimes around 13x, sometimes 151, so not
always close to 128.

> On my system, I'm seeing that behavior as well. fuser reports a bunch of
> processes hold /dev/urandom open, but stracing them doesn't reveal a
> culprit. Which means there's now probably something in the kernel
> calling get_random_bytes continuously.

Yes. On the systems I tried, nothing had /dev/*random open, and no
cronjobs that could use it. And still it gets lower.

> Is this a problem? It really shouldn't be. Everyone should be
> using /dev/urandom anyway. And the anti-starvation threshold guarantees

Well, if I understood correctly how /dev/*random works, urandom is fed
by /dev/random. So if there's almost nothing left in the main pool and
urandom demands bits then we have an issue.
Also, if you frequently want to generate keys (think gpg, ssl), I think
you want bits from /dev/random and not urandom.

> that if there's entropy being collected, readers of /dev/random can
> always make forward progress.

Also, if it is used so heavily, you need quite an entropy source to keep
it filled.


Folkert van Heusden

--
http://www.vanheusden.com/multitail - multitail is tail on steroids. multiple
windows, filtering, coloring, anything you can think of
----------------------------------------------------------------------
Phone: +31-6-41278122, PGP-key: 1F28D8AE, http://www.vanheusden.com

2009-06-14 19:34:51

by Matt Mackall

Subject: Re: issue with /dev/random? gets depleted very quick

On Sun, 2009-06-14 at 21:04 +0200, Folkert van Heusden wrote:
> > [cc:ed to lkml]
>
> > > On an idle system (no gui, no daemons, nothing), /dev/random gets
> > > empty in a matter of 20 seconds with a 2.6.26 kernel.
> > > My test:
> > > add 1000 bits to the device:
> > > zolder:/tmp# cat test-RNDADDENTROPY.c
> ...
> > > }
> > > and then check what is in it:
> > > zolder:/tmp# ./a.out ; while true ; do echo `date` `cat /proc/sys/kernel/random/entropy_avail` ; sleep 1 ; done
> > > 0
> > > Sun Jun 14 14:50:44 CEST 2009 1117
> ...
> > > Sun Jun 14 14:50:55 CEST 2009 157
> > > Is there something wrong with it?
> > Does it go below 128? If not, that's the behavior of something depleting
> > the pool down to the anti-starvation threshold via either /dev/urandom
> > or get_random_bytes.
>
> No, it stays above 128. Sometimes around 13x, sometimes 151, so not
> always close to 128.
>
> > On my system, I'm seeing that behavior as well. fuser reports a bunch of
> > processes hold /dev/urandom open, but stracing them doesn't reveal a
> > culprit. Which means there's now probably something in the kernel
> > calling get_random_bytes continuously.
>
> Yes. On the systems I tried, nothing had /dev/*random open, and no
> cronjobs that could use it. And still it gets lower.
>
> > Is this a problem? It really shouldn't be. Everyone should be
> > using /dev/urandom anyway. And the anti-starvation threshold guarantees
>
> Well, if I understood correctly how /dev/*random works, urandom is fed
> by /dev/random. So if there's almost nothing left in the main pool and
> urandom demands bits then we have an issue.
> Also, if you frequently want to generate keys (think gpg, ssl), I think
> you want bits from /dev/random and not urandom.

There is really no difference.

In an ideal world, we could accurately estimate input entropy and thus
guarantee that we never output more than we took in. But it's pretty
clear we don't have a solid theoretical basis for estimating the real
entropy in most, if not all, of our input devices. In fact, I'm pretty
sure they're all significantly more observable than we're giving them
credit for. And without that basis, we can only make handwaving
arguments about the relative strength of /dev/random vs /dev/urandom.

So if you're running into /dev/random blocking, my advice is to delete
the device and symlink it to /dev/urandom.

Also note that if something in the kernel is rapidly consuming entropy
but not visibly leaking it to the world, it is effectively not consuming
it. The simplest case is:

get_random_bytes(...);
memset(...); /* clear previous result */

In this case, if no one hears the tree fall, it hasn't actually fallen.
There is exactly as much 'unknown' data in the entropy pool as before.
If anything, the pool contents are now harder to guess because it's been
mixed more.

--
http://selenic.com : development and support for Mercurial and Linux

2009-06-14 19:59:03

by folkert

Subject: Re: issue with /dev/random? gets depleted very quick

[ /dev/random gets emptied very quickly ]
...
> > > Is this a problem? It really shouldn't be. Everyone should be
> > > using /dev/urandom anyway. And the anti-starvation threshold guarantees
> >
> > Well, if I understood correctly how /dev/*random works, urandom is fed
> > by /dev/random. So if there's almost nothing left in the main pool and
> > urandom demands bits then we have an issue.
> > Also, if you frequently want to generate keys (think gpg, ssl), I think
> > you want bits from /dev/random and not urandom.
>
> There is really no difference.
> In an ideal world, we could accurately estimate input entropy and thus
> guarantee that we never output more than we took in. But it's pretty
> clear we don't have a solid theoretical basis for estimating the real
> entropy in most, if not all, of our input devices. In fact, I'm pretty
> sure they're all significantly more observable than we're giving them
> credit for. And without that basis, we can only make handwaving
> arguments about the relative strength of /dev/random vs /dev/urandom.
> So if you're running into /dev/random blocking, my advice is to delete
> the device and symlink it to /dev/urandom.

Two questions:
- if the device gets empty constantly, that means that filling
applications (e.g. the ones that feed /dev/random from /dev/hwrng or
from an audio-source or whatever)
- if we don't know if we're accounting correctly, why do it at all?
especially if one should use urandom instead of random

> Also note that if something in the kernel is rapidly consuming entropy
> but not visibly leaking it to the world, it is effectively not consuming
> it.

Then the counter should not be decreased?

> In this case, if no one hears the tree fall, it hasn't actually fallen.
> There is exactly as much 'unknown' data in the entropy pool as before.
> If anything, the pool contents are now harder to guess because it's been
> mixed more.


Folkert van Heusden

--
MultiTail is a flexible application for checking logfiles and command
output. Incl. filtering, colors, merging, views etc.
http://www.vanheusden.com/multitail/
----------------------------------------------------------------------
Phone: +31-6-41278122, PGP-key: 1F28D8AE, http://www.vanheusden.com

2009-06-14 20:22:55

by Matt Mackall

Subject: Re: issue with /dev/random? gets depleted very quick

On Sun, 2009-06-14 at 21:58 +0200, Folkert van Heusden wrote:
> [ /dev/random gets emptied very quickly ]
> ...
> > > > Is this a problem? It really shouldn't be. Everyone should be
> > > > using /dev/urandom anyway. And the anti-starvation threshold guarantees
> > >
> > > Well, if I understood correctly how /dev/*random works, urandom is fed
> > > by /dev/random. So if there's almost nothing left in the main pool and
> > > urandom demands bits then we have an issue.
> > > Also, if you frequently want to generate keys (think gpg, ssl), I think
> > > you want bits from /dev/random and not urandom.
> >
> > There is really no difference.
> > In an ideal world, we could accurately estimate input entropy and thus
> > guarantee that we never output more than we took in. But it's pretty
> > clear we don't have a solid theoretical basis for estimating the real
> > entropy in most, if not all, of our input devices. In fact, I'm pretty
> > sure they're all significantly more observable than we're giving them
> > credit for. And without that basis, we can only make handwaving
> > arguments about the relative strength of /dev/random vs /dev/urandom.
> > So if you're running into /dev/random blocking, my advice is to delete
> > the device and symlink it to /dev/urandom.
>
> Two questions:
> - if the device gets empty constantly, that means that filling
> applications (e.g. the ones that feed /dev/random from /dev/hwrng or
> from an audio-source or whatever)

This question appears incomplete. Also, your device is not 'getting
empty'.

> - if we don't know if we're accounting correctly, why do it at all?
> especially if one should use urandom instead of random

Inertia.

> > Also note that if something in the kernel is rapidly consuming entropy
> > but not visibly leaking it to the world, it is effectively not consuming
> > it.
>
> Then the counter should not be decreased?

There's no way for us to know. In other words, the counter isn't
terribly meaningful in either direction.

--
http://selenic.com : development and support for Mercurial and Linux