I am a cryptographer, a hacker, and an exploit developer. I am an
accomplished security engineer, and I've even hacked Majordomo2, the
mailing list software we are using now (CVE-2011-0049).
It is highly unusual that /dev/random is allowed to degrade the
performance of all other subsystems - and even bring the system to a
halt when it runs dry. No other kernel feature is given this
dispensation, and I wrote some code to prove that it isn't necessary.
Due to a lockless design, this proposed /dev/random has much higher
bandwidth for writes to the entropy pool. Furthermore, an additional
8 octets of entropy are collected per syscall, meaning the resulting
random source will be difficult to undermine.
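The lockless write path can be sketched roughly as follows. This is a minimal, single-process illustration of the idea only, not the proposal's actual code: the pool size, slot width, and names are my own assumptions, and Python's `count()` stands in for an atomic fetch-and-add that would let concurrent writers claim distinct offsets without a pool lock.

```python
from itertools import count

POOL_SIZE = 64  # bytes; illustrative only, not the proposal's actual size

pool = bytearray(POOL_SIZE)
slot = count()  # stand-in for an atomic fetch-and-add counter

def mix_in(data: bytes) -> None:
    # Each write claims a distinct starting offset, so concurrent writers
    # mix into rotating positions instead of serializing on one pool lock.
    off = (next(slot) * 8) % POOL_SIZE
    for i, b in enumerate(data):
        pool[(off + i) % POOL_SIZE] ^= b

mix_in(b"\x01" * 8)  # e.g. 8 octets gathered on one syscall
```

The point of the sketch is only the contention structure: writers never wait on each other, they just XOR into different regions of the pool.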
This is an experimental subsystem that is easy to compile and to verify:
It should compile right after cloning, even on macOS or Windows - and
feedback is welcome. I'm making it easy for people to verify and to
discuss this design. There are other cool features here, and I have a
detailed writeup in the readme.
A perhaps heretical view is that this design doesn't require
handle_irq_event_percpu() to produce a NIST-compliant CSPRNG, which
is a particularly useful property for low-power or high-performance
applications of the Linux kernel. Making this source of entropy
optional is extremely interesting: kernel performance appears to have
degraded substantially in recent releases, and this feature would give
performance back to the user, who should be free to choose their own
security/performance tradeoff. Disabling the irq event handler as a
source of entropy should not produce an exploitable condition. A
keypool has a much larger period than the current entropy pool design,
so concerns about emptying the pool are non-existent within a human
lifetime.
All the best,
The basic ideas here look good to me; I will look at details later.
Meanwhile I wonder what others might think, so I've added some to cc
One thing disturbs me, wanting to give more control to
"the user who should be free to choose their own security/performance tradeoff"
I doubt most users, or even sys admins, know enough to make such
choices. Yes, some options like the /dev/random vs /dev/urandom choice
can be given, but I'm not convinced even that is necessary. Our
objective should be to make the thing foolproof, incapable of being
messed up by user actions.
On Friday, 11 June 2021 at 05:59:52 CEST, Sandy Harris wrote:
> The basic ideas here look good to me; I will look at details later.
> Meanwhile I wonder what others might think, so I've added some to cc
> One thing disturbs me, wanting to give more control to
> "the user who should be free to choose their own security/performance tradeoff"
> I doubt most users, or even sys admins, know enough to make such
> choices. Yes, some options like the /dev/random vs /dev/urandom choice
> can be given, but I'm not convinced even that is necessary. Our
> objective should be to make the thing foolproof, incapable of being
> messed up by user actions.
Thank you for your considerations.
I would think you are referring to the boottime/runtime configuration of the
entropy sources.
I think you are right that normal admins should not have the capability to
influence the entropy source configuration. Normal users would not be able to
do that anyway even today.
Yet, I am involved with many different system integrators who must make
quite an effort to adjust the operation to their needs these days, including
adding proprietary patches. Since system integrators normally compile their
own kernel, I would see no problem in changing the LRNG such that:
- the entropy source configuration is a compile time-only setting with the
current default values
- the runtime configuration is only enabled with a compile time option that is
clearly marked as a development / test option and not to be used for runtime
(like the other test interfaces). It would be disabled by default. Note, I
have developed a regression test suite to test the LRNG operation and
behavior. For this, such boottime/runtime settings come in very handy.
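The two-option split above could be expressed as a Kconfig fragment along these lines. The option name and help text are hypothetical illustrations of the proposal, not taken from the LRNG patch set:

```kconfig
# Hypothetical sketch; option names are illustrative only.
config LRNG_RUNTIME_ES_CONFIG
	bool "Enable runtime configuration of entropy sources (TEST ONLY)"
	default n
	help
	  Development/test option: allows boottime/runtime adjustment of the
	  entropy source configuration, e.g. for regression testing. Not
	  intended for production kernels; distros should leave this disabled
	  so the compile-time defaults apply.
```

With `default n`, a distro kernel keeps the safe compile-time defaults unless the integrator deliberately opts in.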
Regular administrators would not recompile their kernel. Thus Linux distros
would simply go with the default by not enabling the test interface and have
safe defaults. This implies that normal admins do not have the freedom to make
adjustments. Therefore, I think we would have what you propose: a foolproof
operation. Yet, people who really need the freedom (as otherwise they will
make some other problematic changes) have the ability to alter the kernel
compile time configuration to suit their needs.
Besides, the LRNG contains logic to verify the settings and guarantee that
wrong configurations cannot be applied even at compile time. The term wrong
configuration refers to configurations which would violate mathematical
constraints. Therefore, the offered flexibility still ensures that such
integrators cannot mess things up to the extent that mathematically something
breaks.
On the other hand, when you refer to changing the cryptographic algorithms
in use, I think all offered options are by definition safe: all offer the
same security strength. Configuring the cryptographic algorithms is
something I would suggest allowing administrators to do. This is similar to
changing the cryptographic algorithms for, say, network communication, where
the administrator is in charge of configuring the allowed/used cipher suites.
Sandy Harris <[email protected]> wrote:
> The basic ideas here look good to me; I will look at details later.
Looking now, finding some things questionable.
Your doc has:
" /dev/random needs to be fast, and in the past it relied on using a
cryptographic primitive for expansion of a PRNG to fill a given request
" urandom on the other hand uses a cryptographic primitive to compact
rather than expand,
This does not seem coherent to me & as far as I can tell, it is wrong as well.
/dev/random neither uses a PRNG nor does expansion.
/dev/urandom does both, but you seem to be saying the opposite.
" We can assume AES preserves confidentiality...
That is a reasonable assumption & it does make the design easier, but
is it necessary? If I understood some of Ted's writing correctly, one
of his design goals was not to have to trust the crypto too much. It
seems to me that is a worthy goal. One of John Denker's papers has
some quite nice stuff about using a hash function to compress input
data while preserving entropy. It needs only quite weak assumptions
about the hash.
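The compress-while-preserving-entropy idea Denker describes can be illustrated in a few lines. This is my own sketch with made-up sample names, using SHA-256 merely as a convenient stand-in for whatever hash the driver would use:

```python
import hashlib

def compress_entropy(samples: list[bytes]) -> bytes:
    # Concatenate many low-entropy samples and hash them down to a fixed
    # digest. The assumption needed is weak: roughly, that the hash does
    # not discard entropy up to its output size, so the digest retains
    # min(total input entropy, 256) bits.
    h = hashlib.sha256()
    for s in samples:
        h.update(s)
    return h.digest()

# Hypothetical raw samples: individually low-entropy, collectively better.
digest = compress_entropy([b"timer:12345", b"irq:0042", b"cycles:998877"])
```

Note that no unpredictability assumption is placed on the hash itself, only on the inputs, which is what makes "not trusting the crypto too much" feasible here.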
You want to use AES in OFB mode. Why? The existing driver uses ChaCha,
I think mainly because it is faster.
The classic analysis of how to use a block cipher to build a hash is
Preneel et al.
As I recall, it examines 64 possibilities & finds only 9 are secure. I
do not know if OFB, used as you propose, is one of those. Do you?
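For contrast, one of the Preneel-style constructions that *is* known to be secure is Davies-Meyer, H_i = E(m_i, H_{i-1}) XOR H_{i-1}, which is structurally different from OFB-style feedback. The sketch below is mine, and the "cipher" is a deliberately insecure toy Feistel permutation standing in for AES, just to show the chaining structure:

```python
import hashlib

def toy_encrypt(key: bytes, block: bytes) -> bytes:
    # 4-round Feistel on a 16-byte block. A stand-in for a real block
    # cipher such as AES; it is a permutation per key but NOT secure.
    L, R = block[:8], block[8:]
    for r in range(4):
        F = hashlib.sha256(key + bytes([r]) + R).digest()[:8]
        L, R = R, bytes(a ^ b for a, b in zip(L, F))
    return L + R

def davies_meyer(message_blocks: list[bytes], iv: bytes = bytes(16)) -> bytes:
    # H_i = E(m_i, H_{i-1}) XOR H_{i-1}: the message block is the cipher
    # KEY and the chaining value is the plaintext, with feed-forward XOR.
    h = iv
    for m in message_blocks:
        h = bytes(a ^ b for a, b in zip(toy_encrypt(m, h), h))
    return h
```

The feed-forward XOR is what makes the compression function hard to invert even though the underlying cipher is invertible; plain OFB keystream feedback has no such feed-forward, which is why the question above matters.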
On Friday, 11.06.2021 at 07:59 +0200, Stephan Müller wrote:
> Am Freitag, 11. Juni 2021, 05:59:52 CEST schrieb Sandy Harris:
> Hi Sandy,
Please accept my apologies and disregard my email. I erroneously thought you
were referring to the LRNG work.
I did not want to hijack any discussion.
(but it gave me a good input :-) )