Hi,
[1] patch at http://www.chronox.de/jent/jitterentropy-20130516.tar.bz2
A new version of the CPU Jitter random number generator is released at
http://www.chronox.de/ . The heart of the RNG is about 30 lines of easy
to read code. The readme in the main directory explains the different
code files. A changelog can be found on the web site.
In a previous attempt (http://lkml.org/lkml/2013/2/8/476), the first
iteration received comments about the lack of tests, documentation and
entropy assessment. All these concerns have been addressed. The
documentation of the CPU Jitter random number generator
(http://www.chronox.de/jent/doc/CPU-Jitter-NPTRNG.html and PDF at
http://www.chronox.de/jent/doc/CPU-Jitter-NPTRNG.pdf -- the graphs and
pictures are better in PDF) offers a full analysis of:
- the root cause of entropy
- a design of the RNG
- statistical tests and analyses
- entropy assessment and explanation of the flow of entropy
The document also explains the core concept of a fully
decentralized entropy collector for every caller in need of entropy.
Also, this RNG is well suited to virtualized environments.
Measurements on OpenVZ and KVM environments have been conducted as
documented. As the Linux kernel is starved of entropy in virtualized as
well as server environments, new sources of entropy are vital.
The appendix of the documentation contains example use cases by
providing link code to the Linux kernel crypto API, libgcrypt and
OpenSSL. Links to other cryptographic libraries should be
straightforward to implement. These implementations follow the concept of
decentralized entropy collection.
The man page provided with the source code explains the use of the API
of the CPU Jitter random number generator.
The test cases used to compile the documentation are available at the
web site as well.
Note: for the kernel crypto API, please read the provided Kconfig file
for the switches and which of them are recommended in regular
operation. These switches must currently be set manually in the
Makefile.
Ciao
Stephan
Signed-off-by: Stephan Mueller <[email protected]>
I very much like the basic notion here. The existing random(4) driver
may not get enough entropy in a VM or on a device like a Linux router
and I think work such as yours or HAVEGE
(http://www.irisa.fr/caps/projects/hipsor/) is important research.
The paper by McGuire et al., "Analysis of inherent randomness of the
Linux kernel" (http://lwn.net/images/conf/rtlws11/random-hardware.pdf),
seems to show that this is a fine source of more entropy.
On the other hand, I am not certain you are doing it in the right
place. My own attempt (ftp://ftp.cs.sjtu.edu.cn:990/sandy/maxwell/)
put it in a daemon that just feeds /dev/random, probably also not the
right place. haveged(8) (http://www.issihosts.com/haveged/) also puts
it in a daemon process. It may, as you suggest, belong in the kernel
instead, but I think there are arguments both ways.
Could we keep random(4) mostly as is and rearrange your code to just
give it more entropy? I think the large entropy pool in the existing
driver is essential since we sometimes want to generate things like a
2 Kbit PGP key and it is not clear to me that your driver is entirely
trustworthy under such stress.
On Tue, May 21, 2013 at 2:44 AM, Stephan Mueller <[email protected]> wrote:
> Hi,
>
> [1] patch at http://www.chronox.de/jent/jitterentropy-20130516.tar.bz2
>
> A new version of the CPU Jitter random number generator is released at
> http://www.chronox.de/ . The heart of the RNG is about 30 lines of easy
> to read code. The readme in the main directory explains the different
> code files. A changelog can be found on the web site.
>
> In a previous attempt (http://lkml.org/lkml/2013/2/8/476), the first
> iteration received comments for the lack of tests, documentation and
> entropy assessment. All these concerns have been addressed. The
> documentation of the CPU Jitter random number generator
> (http://www.chronox.de/jent/doc/CPU-Jitter-NPTRNG.html and PDF at
> http://www.chronox.de/jent/doc/CPU-Jitter-NPTRNG.pdf -- the graphs and
> pictures are better in PDF) offers a full analysis of:
>
> - the root cause of entropy
>
> - a design of the RNG
>
> - statistical tests and analyses
>
> - entropy assessment and explanation of the flow of entropy
>
> The document also explains the core concept to have a fully
> decentralized entropy collector for every caller in need of entropy.
>
> Also, this RNG is well suitable for virtualized environments.
> Measurements on OpenVZ and KVM environments have been conducted as
> documented. As the Linux kernel is starved of entropy in virtualized as
> well as server environments, new sources of entropy are vital.
>
> The appendix of the documentation contains example use cases by
> providing link code to the Linux kernel crypto API, libgcrypt and
> OpenSSL. Links to other cryptographic libraries should be straight
> forward to implement. These implementations follow the concept of
> decentralized entropy collection.
>
> The man page provided with the source code explains the use of the API
> of the CPU Jitter random number generator.
>
> The test cases used to compile the documentation are available at the
> web site as well.
>
> Note: for the kernel crypto API, please read the provided Kconfig file
> for the switches and which of them are recommended in regular
> operation. These switches must currently be set manually in the
> Makefile.
>
> Ciao
> Stephan
>
> Signed-off-by: Stephan Mueller <[email protected]>
> --
> To unsubscribe from this list: send the line "unsubscribe linux-crypto" in
> the body of a message to [email protected]
> More majordomo info at http://vger.kernel.org/majordomo-info.html
--
Who put a stop payment on my reality check?
On Tue, 21 May 2013 12:09:02 -0400
Sandy Harris <[email protected]> wrote:
Hi Sandy,
> I very much like the basic notion here. The existing random(4) driver
> may not get enough entropy in a VM or on a device like a Linux router
> and I think work such as yours or HAVEGE (
> http://www.irisa.fr/caps/projects/hipsor/) are important research. The
> paper by McGuire et al of "Analysis of inherent randomness of the
> Linux
> kernel" (http://lwn.net/images/conf/rtlws11/random-hardware.pdf)
> seems to show that this is a fine source of more entropy.
>
> On the other hand, I am not certain you are doing it in the right
> place. My own attempt (ftp://ftp.cs.sjtu.edu.cn:990/sandy/maxwell/)
> put it in a demon that just feeds /dev/random, probably also not the
> right place. haveged(8) ( http://www.issihosts.com/haveged/) also
> puts it in a demon process. It may, as you suggest, belong in the
> kernel instead, but I think there are arguments both ways.
Thanks for your insights. What I propose is that it shall NOT have any
fixed place at all.
The entropy collection shall be as close to the "consumer" as
possible. There shall be NO single entropy collector, but one for
every consumer.
That is the reason why the code I am offering has that many links to
different crypto libs and even a stand-alone shared library build.
Also, the implementation for the kernel crypto API should be used in a
way where one "consumer" instantiates the raw RNG or even the DRNGs
independently from others. That means, in-kernel users of entropy like
IPsec shall instantiate the kernel crypto API code independently
of others.
>
> Could we keep random(4) mostly as is and rearrange your code to just
> give it more entropy? I think the large entropy pool in the existing
> driver is essential since we sometimes want to generate things like a
> 2 Kbit PGP key and it is not clear to me that your driver is entirely
> trustworthy under such stress.
We can easily do that -- the different links I provide to different
crypto libs can be extended by a patch to random(4) too. My goal is to
go away from a central source of entropy to a fully decentralized
source.
Ciao
Stephan
--
| Cui bono? |
I continue to be suspicious about claims that userspace timing
measurements are measuring anything other than OS behaviour. But that
doesn't mean that they shouldn't exist. Personally, I believe you
should try to collect as much entropy as you can, from as many places
as you can. For VM's, it means we should definitely use
paravirtualization to get randomness from the host OS. If you don't
trust the host OS, then what on earth are you doing trying to generate
a long-term public key pair on the VM in the first place? For that
matter, why are you willing to expose a high value private keypair on
the VM?
For devices like Linux routers, what we desperately need is hardware
assist; either an on-CPU hardware random number generator, or a
hardware RNG from a TPM module, or having an individualized secret key
generated at manufacturing time and burned onto the device. If you
don't trust that the Intel hardware RNG is honest, then by all means mix
in additional timing information either at the kernel device driver level,
or from systems such as HAVEGE.
What I'm against is relying only on solutions such as HAVEGE, or
replacing /dev/random with some scheme that only relies on CPU
timing and ignores interrupt timing.
Regards,
- Ted
On Tue, May 21, 2013 at 3:01 PM, Theodore Ts'o <[email protected]> wrote:
> I continue to be suspicious about claims that userspace timing
> measurements are measuring anything other than OS behaviour.
Yes, but they do seem to contain some entropy. See links in the
original post of this thread, the havege stuff and especially the
McGuire et al paper.
> But that
> doesn't mean that they shouldn't exist. Personally, I believe you
> should try to collect as much entropy as you can, from as many places
> as you can.
Yes.
> For VM's, it means we should definitely use
> paravirtualization to get randomness from the host OS.
Yes, I have not worked out the details but it seems clear that
something along those lines would be a fine idea.
> For devices like Linux routers, what we desperately need is hardware
> assist; [or] mix
> in additional timing information either at kernel device driver level,
> or from systems such as HAVEGE.
>
> What I'm against is relying only on solutions such as HAVEGE or
> replacing /dev/random with something scheme that only relies on CPU
> timing and ignores interrupt timing.
My question is how to incorporate some of that into /dev/random.
At one point, timing info was used along with other stuff. Some
of that got deleted later. What is the current state? Should we
add more?
--
Who put a stop payment on my reality check?
On Tue, 21 May 2013 17:39:49 -0400
Sandy Harris <[email protected]> wrote:
Hi Sandy,
> On Tue, May 21, 2013 at 3:01 PM, Theodore Ts'o <[email protected]> wrote:
>
> > I continue to be suspicious about claims that userspace timing
> > measurements are measuring anything other than OS behaviour.
>
> Yes, but they do seem to contain some entropy. See links in the
> original post of this thread, the havege stuff and especially the
> McGuire et al paper.
Ted is right that the non-deterministic behavior is caused by the OS
due to its complexity. This complexity implies that you do not have a
clue what the fill levels of the caches are, the placement of data in
RAM, etc. I would expect that if you had a tiny microkernel as your sole
software body on a CPU, there would be hardly any jitter. On the other
hand, the jitter is not mainly caused by interrupts and such, because
an interrupt causes a time delta that is orders of magnitude
higher than most deltas (deltas vary around 20 to 40, whereas interrupts
cause deltas in the mid thousands at least, ranging to more than 100,000).
>
> > But that
> > doesn't mean that they shouldn't exist. Personally, I believe you
> > should try to collect as much entropy as you can, from as many
> > places as you can.
>
> Yes.
That is the goal of the collection approach I offer. By repeating the
time delta measurements thousands of times to obtain one 64-bit random
value, the goal is to magnify and collect that tiny bit of entropy.
My implementation rests on a sound mathematical foundation, as I only use
XOR and concatenation of data. It has been reviewed by a mathematician
and by other folks who have worked on RNGs for a long time. Thus, once you
accept that the root cause typically delivers more than 1 bit of
entropy per measurement (the measurements I did showed more than 2 bits
of Shannon entropy), the collection process will result in a
random number that contains the claimed entropy.
>
> > For VM's, it means we should definitely use
> > paravirtualization to get randomness from the host OS.
>
> Yes, I have not worked out the details but it seems clear that
> something along those lines would be a fine idea.
That is already in place, at least with KVM and Xen, as QEMU can pass
through access to the host /dev/random to the guest. Yet, that approach
is dangerous IMHO because you have one central source of entropy for
the host and all guests. One guest can easily starve all other guests
and the host of entropy. I know that is the case in user space as well.
That is why I am offering an implementation that is able to
decentralize the entropy collection process. I think it would be wrong
to simply update /dev/random with the CPU jitter as just another seed
source -- though it could be done as one measure to increase the entropy
in the system. I think users should slowly but surely instantiate their
own instance of an entropy collector.
>
> > For devices like Linux routers, what we desperately need is hardware
> > assist; [or] mix
> > in additional timing information either at kernel device driver
> > level, or from systems such as HAVEGE.
I would personally think that precisely for routers the approach
fails, because there may be no high-resolution timer. At least trying
to execute my code on a Raspberry Pi resulted in a failure: the
initial jent_entropy_init() call returned with the indication that
there is no high-resolution timer.
> >
> > What I'm against is relying only on solutions such as HAVEGE or
> > replacing /dev/random with something scheme that only relies on CPU
> > timing and ignores interrupt timing.
>
> My question is how to incorporate some of that into /dev/random.
> At one point, timing info was used along with other stuff. Some
> of that got deleted later, What is the current state? Should we
> add more?
Again, I would like to suggest that we look beyond a central entropy
collector like /dev/random. I would like to suggest considering the
decentralization of entropy collection.
Ciao
Stephan
>
> --
> Who put a stop payment on my reality check?
--
| Cui bono? |
Stephan Mueller <[email protected]> wrote:
> Ted is right that the non-deterministic behavior is caused by the OS
> due to its complexity. ...
>> > For VM's, it means we should definitely use
>> > paravirtualization to get randomness from the host OS.
>> ...
>
> That is already in place at least with KVM and Xen as QEMU can pass
> through access to the host /dev/random to the guest. Yet, that approach
> is dangerous IMHO because you have one central source of entropy for
> the host and all guests. One guest can easily starve all other guests
> and the host of entropy. I know that is the case in user space as well.
Yes, I have always thought that random(4) had a problem in that
area; over-using /dev/urandom can affect /dev/random. I've never
come up with a good way to fix it, though.
> That is why I am offering an implementation that is able to
> decentralize the entropy collection process. I think it would be wrong
> to simply update /dev/random with another seed source of the CPU
> jitter -- it could be done as one aspect to increase the entropy in
> the system. I think users should slowly but surely instantiate their own
> instance of an entropy collector.
I'm not sure that's a good idea. Certainly for many apps just seeding
a per-process PRNG well is enough, and a per-VM random device
looks essential. There are, though, at least two possible problems,
because random(4) was designed before VMs were at all common,
so it is not clear it can cope with that environment: the host
random device may be overwhelmed, and the guest entropy may
be inadequate or mis-estimated because everything it relies on --
devices, interrupts, ... -- is virtualised.
I want to keep the current interface where a process can just
read /dev/random or /dev/urandom as required. It is clean,
simple and moderately hard for users to screw up. It may
need some behind-the-scenes improvements to handle new
loads, but I cannot see changing the interface itself.
> I would personally think that precisely for routers, the approach
> fails, because there may be no high-resolution timer. At least trying
> to execute my code on a raspberry pie resulted in a failure: the
> initial jent_entropy_init() call returned with the indication that
> there is no high-res timer.
My maxwell(8) uses the hi-res timer by default but also has a
compile-time option to use the lower-res timer if required. You
still get entropy, just not as much.
This affects more than just routers. Consider using Linux on
a tablet PC or in a web server running in a VM. Neither needs
the realtime library; in fact adding that may move them away
from their optimisation goals.
>> > What I'm against is relying only on solutions such as HAVEGE or
>> > replacing /dev/random with something scheme that only relies on CPU
>> > timing and ignores interrupt timing.
>>
>> My question is how to incorporate some of that into /dev/random.
>> At one point, timing info was used along with other stuff. Some
>> of that got deleted later, What is the current state? Should we
>> add more?
>
> Again, I would like to suggest that we look beyond a central entropy
> collector like /dev/random. I would like to suggest to consider
> decentralizing the collection of entropy.
I'm with Ted on this one.
--
Who put a stop payment on my reality check?
On Wed, 22 May 2013 13:40:04 -0400
Sandy Harris <[email protected]> wrote:
Hi Sandy,
> Stephan Mueller <[email protected]> wrote:
>
> > Ted is right that the non-deterministic behavior is caused by the OS
> > due to its complexity. ...
>
> >> > For VM's, it means we should definitely use
> >> > paravirtualization to get randomness from the host OS.
> >> ...
> >
> > That is already in place at least with KVM and Xen as QEMU can pass
> > through access to the host /dev/random to the guest. Yet, that
> > approach is dangerous IMHO because you have one central source of
> > entropy for the host and all guests. One guest can easily starve
> > all other guests and the host of entropy. I know that is the case
> > in user space as well.
>
> Yes, I have always thought that random(4) had a problem in that
> area; over-using /dev/urandom can affect /dev/random. I've never
> come up with a good way to fix it, though.
I think there is no way unless we either:
- use a seed source that is very fast, like hardware oscillators, or
- use a per-consumer seed source where a consumer can only hurt
itself when it overuses the resource.
>
> > That is why I am offering an implementation that is able to
> > decentralize the entropy collection process. I think it would be
> > wrong to simply update /dev/random with another seed source of the
> > CPU jitter -- it could be done as one aspect to increase the
> > entropy in the system. I think users should slowly but surely
> > instantiate their own instance of an entropy collector.
>
> I'm not sure that's a good idea. Certainly for many apps just seeding
> a per-process PRNG well is enough, and a per-VM random device
> looks essential, though there are at least two problems possible
> because random(4) was designed before VMs were at all common
> so it is not clear it can cope with that environment. The host
> random device may be overwhelmed, and the guest entropy may
> be inadequate or mis-estimated because everything it relies on --
> devices, interrupts, ... -- is virtualised.
Right. That is why we need to open up other sources of entropy that
also work in a virtual environment.
The proposed solution generates entropy equally well in a virtual
environment, as outlined in the documentation. I also performed testing
in virtual environments and obtained the same results as the tests on a
host system.
What could be done is:
- in the short term, wire up the CPU Jitter RNG to /dev/random as
another source of entropy in the host and the guest. This way,
the /dev/random implementation in the guest would get good entropy
without requiring host support.
- in the medium term, move consumers of entropy in user space and
kernel space (like SSL connections, VPN implementations,
OpenSSH, ....) to instantiating an independent copy of the jitter RNG,
thus easing the load on /dev/random. This can be implemented by
using the proposed connections to the different crypto libraries
(OpenSSL, libgcrypt, ...) and even the kernel crypto API. Every
consumer that has its own instance of the jitter RNG would not need to
call /dev/random any more.
>
> I want to keep the current interface where a process can just
> read /dev/random or /dev/urandom as required. It is clean,
> simple and moderately hard for users to screw up. It may
I am not so sure about the last point. Using /dev/random correctly has
many pitfalls, IMHO:
- The OS must ensure that it is seeded during boot and that a seed is
stored during shutdown. This is already a problem on many embedded
devices, where it is done incorrectly.
- When you set up full disk encryption during the initial
installation, there is hardly any entropy in /dev/random (at least
when using a non-GUI installer), but you want to get entropy for a
very long-lived key.
- A simple read(fd) from /dev/random is not sufficient. You must take
care of EINTR. I have seen many uses of /dev/random where developers
overlooked even that simple problem.
- Currently /dev/random uses SSDs as a seed source. You must manually
turn them off as a seed source via /sys files.
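The EINTR pitfall mentioned above can be avoided with a retry loop. A minimal sketch of a robust read from a random device might look like this:

```c
#include <errno.h>
#include <fcntl.h>
#include <stddef.h>
#include <unistd.h>

/* Read exactly len bytes from a random device, retrying on EINTR and
 * on short reads -- a single plain read() is not sufficient. Returns
 * 0 on success, -1 on error. */
static int read_random(const char *dev, unsigned char *buf, size_t len)
{
	size_t done = 0;
	int fd;

	fd = open(dev, O_RDONLY);
	if (fd < 0)
		return -1;
	while (done < len) {
		ssize_t ret = read(fd, buf + done, len - done);

		if (ret < 0) {
			if (errno == EINTR)
				continue; /* interrupted by a signal: retry */
			close(fd);
			return -1;
		}
		if (ret == 0)
			break; /* unexpected EOF on a random device */
		done += (size_t)ret;
	}
	close(fd);
	return (done == len) ? 0 : -1;
}
```

Note that on /dev/random (as opposed to /dev/urandom) the read can also block for a long time when the pool is drained, which is exactly the starvation scenario discussed in this thread.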
> need some behind-the-scenes improvements to handle new
> loads, but I cannot see changing the interface itself.
I am not proposing any change to that interface. I am proposing a
complete independent offering of an entropy source that a caller could
use instead of /dev/random, if he wishes.
>
> > I would personally think that precisely for routers, the approach
> > fails, because there may be no high-resolution timer. At least
> > trying to execute my code on a raspberry pie resulted in a failure:
> > the initial jent_entropy_init() call returned with the indication
> > that there is no high-res timer.
>
> My maxwell(8) uses the hi-res timer by default but also has a
> compile-time option to use the lower-res timer if required. You
> still get entropy, just not as much.
>
> This affects more than just routers. Consider using Linux on
> a tablet PC or in a web server running in a VM. Neither needs
> the realtime library; in fact adding that may move them away
> from their optimisation goals.
>
> >> > What I'm against is relying only on solutions such as HAVEGE or
> >> > replacing /dev/random with something scheme that only relies on
> >> > CPU timing and ignores interrupt timing.
> >>
> >> My question is how to incorporate some of that into /dev/random.
> >> At one point, timing info was used along with other stuff. Some
> >> of that got deleted later, What is the current state? Should we
> >> add more?
> >
> > Again, I would like to suggest that we look beyond a central entropy
> > collector like /dev/random. I would like to suggest to consider
> > decentralizing the collection of entropy.
>
> I'm with Ted on this one.
If you want to consider the jitter RNG for /dev/random, it should be
used as a seed source similar to the add_*_randomness functions. I
could implement a suggestion if that is the wish. For example, such a
seed source could be triggered when the entropy estimator of the
input_pool falls below some threshold. The jitter RNG could then be
used to top the entropy off to some level above another threshold.
But again, the long-term goal is that there is no need for a central
entropy collection device like /dev/random any more.
Ciao
Stephan
>
> --
> Who put a stop payment on my reality check?
--
| Cui bono? |
Hi Sandy,
> On Wed, 22 May 2013 13:40:04 -0400
> Sandy Harris <[email protected]> wrote:
>
[...]
> >
> > >> > What I'm against is relying only on solutions such as HAVEGE or
> > >> > replacing /dev/random with something scheme that only relies on
> > >> > CPU timing and ignores interrupt timing.
> > >>
> > >> My question is how to incorporate some of that into /dev/random.
> > >> At one point, timing info was used along with other stuff. Some
> > >> of that got deleted later, What is the current state? Should we
> > >> add more?
> > >
> > > Again, I would like to suggest that we look beyond a central
> > > entropy collector like /dev/random. I would like to suggest to
> > > consider decentralizing the collection of entropy.
> >
> > I'm with Ted on this one.
>
> When you want to consider the jitter RNG for /dev/random, it should be
> used as a seed source similar to the add_*_randomness functions. I
> could implement a suggestion if that is the wish. For example, such a
> seed source could be triggered if the entropy estimator of the
> input_pool falls below some threshold. The jitter RNG could be used to
> top the entropy off to some level above another threshold.
Please find below a possible integration of the CPU Jitter RNG
into /dev/random. The patch does not contain
jitterentropy-base.c, jitterentropy.h or jitterentropy-base-kernel.h
from the tarball available at http://www.chronox.de.
This patch uses the CPU Jitter RNG only when there is no more
entropy in the entropy pool; thus, the CPU Jitter RNG serves only as a
fallback.
The patch was tested with Linux 3.9.
Signed-off-by: Stephan Mueller <[email protected]>
---
diff -urNp linux-3.9.orig/drivers/char/Makefile linux-3.9/drivers/char/Makefile
--- linux-3.9.orig/drivers/char/Makefile 2013-05-22 20:55:58.547094987 +0200
+++ linux-3.9/drivers/char/Makefile 2013-05-22 22:11:32.975008931 +0200
@@ -2,7 +2,7 @@
# Makefile for the kernel character device drivers.
#
-obj-y += mem.o random.o
+obj-y += mem.o random.o jitterentropy-base.o
obj-$(CONFIG_TTY_PRINTK) += ttyprintk.o
obj-y += misc.o
obj-$(CONFIG_ATARI_DSP56K) += dsp56k.o
diff -urNp linux-3.9.orig/drivers/char/random.c linux-3.9/drivers/char/random.c
--- linux-3.9.orig/drivers/char/random.c 2013-05-22 20:55:58.675094985 +0200
+++ linux-3.9/drivers/char/random.c 2013-05-23 11:26:25.214103807 +0200
@@ -269,6 +269,8 @@
#define CREATE_TRACE_POINTS
#include <trace/events/random.h>
+#include "jitterentropy.h"
+
/*
* Configuration information
*/
@@ -435,6 +437,8 @@ struct entropy_store {
unsigned int initialized:1;
bool last_data_init;
__u8 last_data[EXTRACT_SIZE];
+ int jent_enable;
+ struct rand_data entropy_collector;
};
static __u32 input_pool_data[INPUT_POOL_WORDS];
@@ -446,7 +450,8 @@ static struct entropy_store input_pool =
.name = "input",
.limit = 1,
.lock = __SPIN_LOCK_UNLOCKED(input_pool.lock),
- .pool = input_pool_data
+ .pool = input_pool_data,
+ .jent_enable = -1
};
static struct entropy_store blocking_pool = {
@@ -455,7 +460,8 @@ static struct entropy_store blocking_poo
.limit = 1,
.pull = &input_pool,
.lock = __SPIN_LOCK_UNLOCKED(blocking_pool.lock),
- .pool = blocking_pool_data
+ .pool = blocking_pool_data,
+ .jent_enable = -1
};
static struct entropy_store nonblocking_pool = {
@@ -463,7 +469,8 @@ static struct entropy_store nonblocking_
.name = "nonblocking",
.pull = &input_pool,
.lock = __SPIN_LOCK_UNLOCKED(nonblocking_pool.lock),
- .pool = nonblocking_pool_data
+ .pool = nonblocking_pool_data,
+ .jent_enable = -1
};
static __u32 const twist_table[8] = {
@@ -633,6 +640,47 @@ struct timer_rand_state {
unsigned dont_count_entropy:1;
};
+/* lock of the entropy_store must already be held */
+void add_jent_randomness(struct entropy_store *r)
+{
+#define JENTBLOCKSIZE 8 /* the most efficient use of the CPU jitter RNG is a block
+ aligned invocation. The block size of the CPU jitter RNG
+ is 8 bytes */
+ char rand[JENTBLOCKSIZE];
+ int ret = 0;
+
+ /* the initialization process determines that we cannot use the
+ * CPU Jitter RNG */
+ if(!r->jent_enable)
+ return;
+ memset(rand, 0, JENTBLOCKSIZE);
+ if(-1 == r->jent_enable)
+ {
+ /* we are uninitialized, try to initialize */
+ if(jent_entropy_init())
+ {
+ /* there is no CPU Jitter, disable the entropy collector */
+ r->jent_enable = 0;
+ return;
+ }
+ /* we do not use jent_entropy_collector_alloc as we are in early
+ * boot */
+ memset(&r->entropy_collector, 0, sizeof(struct rand_data));
+ /* initialize the entropy collector */
+ jent_read_entropy(&r->entropy_collector, rand, JENTBLOCKSIZE);
+ r->jent_enable = 1;
+ }
+ ret = jent_read_entropy(&r->entropy_collector, rand, JENTBLOCKSIZE);
+ if(JENTBLOCKSIZE == ret)
+ {
+ /* we do not need to worry about trickle threshold as we are called
+ * when we are low on entropy */
+ _mix_pool_bytes(r, rand, JENTBLOCKSIZE, NULL);
+ credit_entropy_bits(r, JENTBLOCKSIZE * 8);
+ }
+ memset(rand, 0, JENTBLOCKSIZE);
+}
+
/*
* Add device- or boot-specific data to the input and nonblocking
* pools to help initialize them to unique values.
@@ -862,6 +910,10 @@ static size_t account(struct entropy_sto
nbytes * 8, r->name);
/* Can we pull enough? */
+ /* XXX shall we limit this call to r->limit? */
+ if (r->entropy_count / 8 < min + reserved)
+ add_jent_randomness(r);
+
if (r->entropy_count / 8 < min + reserved) {
nbytes = 0;
} else {
On Tuesday, 21 May 2013, 17:39:49, Sandy Harris wrote:
Hi Sandy, Ted,
I prepared a new release of the CPU Jitter RNG available at [1]. The
core of the RNG remains unchanged. However, there are the following
changes:
- addition of a patch to integrate the RNG into /dev/random as explained
in appendix B.3 of [2], although the long-term goal of the RNG is rather
the integration into the kernel crypto API when considering the Linux
kernel, as outlined in appendix B.1 of [2]
- ensuring that the code is compiled without optimizations, for the
reasons outlined in section 5.1 of [2]
- addition of section 5.1 to [2], explaining how the entropy is collected
- additional code to execute the CPU Jitter RNG on different OSes
(specifically AIX, MacOS and z/OS -- other Unixes work without
additional changes)
>On Tue, May 21, 2013 at 3:01 PM, Theodore Ts'o <[email protected]> wrote:
>> I continue to be suspicious about claims that userspace timing
>> measurements are measuring anything other than OS behaviour.
>
>Yes, but they do seem to contain some entropy. See links in the
>original post of this thread, the havege stuff and especially the
>McGuire et al paper.
With the initially shown implementation and documentation, I did not
really show that sufficient entropy is gathered from the CPU execution
jitter. With a new test I have now closed that hole. The newly added test
measures the entropy gathered during execution jitter collection, i.e.
the heart of the RNG, in terms of how much statistical entropy it
provides. The description of the test is given in section 5.1 of [2].
To ensure that the statistical entropy measurements indeed reflect the
information-theoretical entropy, section 4.4 of [2] outlines that no
patterns are identified in the output of the RNG which would
diminish the information-theoretical entropy compared to the statistical
entropy.
That test was then executed on about 200 different systems, with the
results given in appendix F of [2]. The table stated there, supported by
the many graphs, demonstrates that the CPU Jitter random number generator
delivers high-quality entropy on:
- a large range of CPUs, from embedded MIPS and ARM systems, through
desktop systems with AMD and Intel x86 32-bit and 64-bit CPUs, up to
server CPUs such as Intel Itanium, SPARC, POWER and IBM System z;
- a large range of operating systems: Linux (including Android),
OpenBSD, FreeBSD, NetBSD, AIX, OpenIndiana (OpenSolaris) and z/OS;
- a range of different compilers: GCC, Clang and the z/OS C compiler.
The test results show an interesting trend that is common to the
different CPU types: the newer the CPU, the more CPU execution
time jitter is present.
Appendix F.37 of [2] contains entropy measurements on different operating
systems on the very same hardware, indicating that the jitter is
present regardless of the OS.
With these test results, Ted's concerns should be addressed.
[...]
>> For devices like Linux routers, what we desperately need is hardware
>> assist; [or] mix
>> in additional timing information either at kernel device driver
>> level,
>> or from systems such as HAVEGE.
The concern with HAVEGE is that it is very complex. The implementation
is far from straightforward.
>>
>> What I'm against is relying only on solutions such as HAVEGE or
>> replacing /dev/random with something scheme that only relies on CPU
>> timing and ignores interrupt timing.
>
>My question is how to incorporate some of that into /dev/random.
>At one point, timing info was used along with other stuff. Some
>of that got deleted later, What is the current state? Should we
>add more?
Please see the suggestion for an integration with /dev/random given in
appendix B.3 of [2]. The source code for the integration is given in
patches/linux-3.9-random.patch which is described in patches/README. The
patch only utilizes the CPU Jitter RNG when the entropy in the entropy
pool falls below the low threshold, i.e. when no entropy from other
sources is present.
[1] http://www.chronox.de/jent/jitterentropy-20130724.tar.bz2
[2] http://www.chronox.de/jent/doc/CPU-Jitter-NPTRNG.pdf
Ciao
Stephan
--
| Cui bono? |