2022-03-31 16:41:14

by Michael Brooks

Subject: Re: [PATCH v2] random: mix build-time latent entropy into pool at init

mix_pool_bytes() has numerous problems, as discussed in prior emails.
Do we still want to be putting so much effort into a development dead
end?

-Michael

On Thu, Mar 31, 2022 at 8:28 AM Jason A. Donenfeld <[email protected]> wrote:
>
> Prior, the "input_pool_data" array needed no real initialization, and so
> it was easy to mark it with __latent_entropy to populate it during
> compile-time. In switching to using a hash function, this required us to
> specifically initialize it to some specific state, which means we
> dropped the __latent_entropy attribute. An unfortunate side effect was
> this meant the pool was no longer seeded using compile-time random data.
> In order to bring this back, we declare an array in rand_initialize()
> with __latent_entropy and call mix_pool_bytes() on that at init, which
> accomplishes the same thing as before. We make this __initconst, so that
> it doesn't take up space at runtime after init.
>
> Fixes: 6e8ec2552c7d ("random: use computational hash for entropy extraction")
> Cc: Dominik Brodowski <[email protected]>
> Cc: Theodore Ts'o <[email protected]>
> Signed-off-by: Jason A. Donenfeld <[email protected]>
> ---
> Changes v1->v2:
> - Use __initconst.
>
> drivers/char/random.c | 5 +++++
> 1 file changed, 5 insertions(+)
>
> diff --git a/drivers/char/random.c b/drivers/char/random.c
> index 40107f8b9e9e..1d8242969751 100644
> --- a/drivers/char/random.c
> +++ b/drivers/char/random.c
> @@ -975,6 +975,11 @@ int __init rand_initialize(void)
> bool arch_init = true;
> unsigned long rv;
>
> +#if defined(LATENT_ENTROPY_PLUGIN)
> + static const u8 compiletime_seed[BLAKE2S_BLOCK_SIZE] __initconst __latent_entropy;
> + _mix_pool_bytes(compiletime_seed, sizeof(compiletime_seed));
> +#endif
> +
> for (i = 0; i < BLAKE2S_BLOCK_SIZE; i += sizeof(rv)) {
> if (!arch_get_random_seed_long_early(&rv) &&
> !arch_get_random_long_early(&rv)) {
> --
> 2.35.1
>


2022-03-31 19:39:44

by Theodore Ts'o

Subject: Re: [PATCH v2] random: mix build-time latent entropy into pool at init

On Thu, Mar 31, 2022 at 09:02:27AM -0700, Michael Brooks wrote:
> mix_pool_bytes() has numerous problems, as discussed in prior emails.
> Do we still want to be putting so much effort into a development dead
> end?

Michael, with respect, there were a number of things in your analysis
which simply didn't make any sense. Discussing it on an e-mail thread
relating to stable backports wasn't the right place, so I didn't extend
the discussion there.

You believe that mix_pool_bytes() has numerous problems. That's not
the same thing as it actually having problems.

And making incremental changes, with code review, is a much better
approach than just doing a rip-and-replace with something else
--- which might have different, even more exciting problems.

Something for you to consider, since your comments seem to indicate
that you are not familiar with the full random driver design. There
are two halves to how the random driver works. The first half is the
collection of entropy, and the primary way this is accomplished is by
taking timestamps of various events that an external attacker
hopefully won't have access to. For example, keystrokes from the
user, mouse motion events, network and disk interrupts, etc. Where
possible, we don't just use jiffies, but we also use high-precision
counters, such as the CPU cycle counter. The idea here is that even if
the external interrupt sources can be seen by an attacker, the exact
time at which the interrupt is serviced, as measured by a
high-precision cycle counter (for example), is not going to be as
easily guessed. That being said,
we only get a tiny amount of entropy (by which I mean uncertainty by
the attacker) out of each event. This is why it is important to
distill it in an input pool, so that as we add more and more
unpredictable inputs into the pool, it becomes less and less tractable
for the attacker to make educated guesses about what is in the pool.
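To illustrate the distillation idea in a toy sketch (Python, not the
kernel code -- the class and names here are made up for illustration),
a running hash acts as the pool, and each event's bytes are absorbed
into it, so an attacker must guess every input ever mixed in to
reconstruct the state:

```python
import hashlib

# Toy sketch of a hash-based "input pool" (illustrative only):
# events are absorbed into a running BLAKE2s state, loosely
# analogous to how mix_pool_bytes() feeds the kernel's pool.
class InputPool:
    def __init__(self):
        # the running hash state is the pool
        self._h = hashlib.blake2s()

    def mix(self, data: bytes):
        # absorb one event's bytes; each mix compounds the
        # attacker's uncertainty about the pool state
        self._h.update(data)

    def extract(self) -> bytes:
        # the hash digest serves as extracted seed material
        return self._h.digest()

pool = InputPool()
for timestamp in (0x1A2B3C4D, 0x5E6F7081, 0x92A3B4C5):
    pool.mix(timestamp.to_bytes(8, "little"))
seed = pool.extract()
```

Note that even a one-bit difference in any single mixed-in event
yields a completely different digest, which is the property the
distillation relies on.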

Then periodically (and doing this periodically is important, because
we want to wait until we have accumulated a large amount of
uncertainty with respect to the attacker in the pool) we extract from
the input pool and use that to reseed the second part of the random
driver, which used to be called the "output pool".

It used to be that both the input pool and output pool were literally
bitpools that were mixed using an LFSR scheme, and then extracted
using cryptographic hash.

The output pool is now a ChaCha-based CRNG, and most recently the
"input pool" is accumulating entropy using a Blake2 hash. So in
many ways, the term "input pool" is a bit of a misnomer now, and
perhaps should be renamed.
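The two-pool structure with a catastrophic reseed can be sketched in a
few lines (again a toy Python illustration, not the kernel code; a
keyed BLAKE2s in counter mode stands in for the ChaCha-based CRNG, and
all names here are invented):

```python
import hashlib

# Toy output generator: a keyed hash in counter mode stands in
# for the kernel's ChaCha-based CRNG (illustrative only).
class OutputCRNG:
    def __init__(self, key: bytes):
        self.key = key
        self.counter = 0

    def reseed(self, key: bytes):
        # "catastrophic" reseed: the whole key is replaced at
        # once, so partial knowledge of the old state is useless
        self.key = key
        self.counter = 0

    def generate(self, n: int) -> bytes:
        out = b""
        while len(out) < n:
            block = hashlib.blake2s(
                self.counter.to_bytes(8, "little"), key=self.key
            ).digest()
            self.counter += 1
            out += block
        return out[:n]

# input pool state, distilled to a digest, becomes the reseed key
pool_digest = hashlib.blake2s(b"many accumulated events").digest()
crng = OutputCRNG(pool_digest)
before = crng.generate(16)
crng.reseed(hashlib.blake2s(b"many more events").digest())
after = crng.generate(16)
```

The point of waiting to accumulate enough uncertainty before each
reseed is that the rekeying, when it happens, invalidates everything
an attacker may have learned about the generator state in one step.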

For more information, I direct you to the Yarrow paper[1]. The basic
idea of using two pools coupled with a catastrophic reseed was
shamelessly stolen from Bruce Schneier's work.

[1] https://www.schneier.com/wp-content/uploads/2016/02/paper-yarrow.pdf

Are there reasons why we didn't just implement Yarrow? That's because
/dev/random predates Yarrow, and we made incremental changes to adopt
("steal") good ideas from other sources, which hopefully don't
invalidate previous analysis and reviews about /dev/random. Please
note that there are a number of academic researchers who have
published peer reviews of /dev/random, and that is incredibly useful.

We've made changes over time to improve /dev/random and to address
various theoretical weaknesses noted by these academic reviewers. So
when you claim that there are "numerous problems" with the input pool,
I'll have to note that /dev/random has undergone reviews by
cryptographers, and they have not identified the problems that you
claim are there.

Regards,

- Ted