2023-12-01 10:39:11

by Johannes Berg

Subject: jitterentropy vs. simulation

Hi,

In ARCH=um, we have a mode where we simulate clocks completely, and even
simulate that the CPU is infinitely fast. Thus, reading the clock will
return completely predictable values regardless of the work happening.

This is clearly incompatible with jitterentropy, but jitterentropy now
seems to be mandatory on pretty much every system that needs any crypto,
so we can't seem to just turn it off (any more?)

Now, given that the (simulated) clock doesn't have jitter, its
derivatives are all constant/zero, and so jent_measure_jitter() - called
via jent_entropy_collector_alloc() - will always detect a stuck
measurement, and thus jent_gen_entropy() loops infinitely.
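
For reference, the stuck test is roughly this shape (paraphrased from
crypto/jitterentropy.c, not the exact code): a measurement counts as
stuck if the time delta or its first or second discrete derivative is
zero, and with a constant simulated delta that is always the case.

static int jent_stuck(struct rand_data *ec, __u64 current_delta)
{
	/* first and second discrete derivatives of the time delta */
	__u64 delta2 = jent_delta(ec->last_delta, current_delta);
	__u64 delta3 = jent_delta(ec->last_delta2, delta2);

	ec->last_delta = current_delta;
	ec->last_delta2 = delta2;

	/* no observable jitter anywhere in the history -> stuck */
	if (!current_delta || !delta2 || !delta3)
		return 1;

	return 0;
}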

I wonder what you'd think about a patch like this?

--- a/crypto/jitterentropy.c
+++ b/crypto/jitterentropy.c
@@ -552,10 +552,13 @@ static int jent_measure_jitter(struct rand_data *ec, __u64 *ret_current_delta)
  * Function fills rand_data->hash_state
  *
  * @ec [in] Reference to entropy collector
+ *
+ * Return: 0 if entropy reading failed (was stuck), 1 otherwise
  */
-static void jent_gen_entropy(struct rand_data *ec)
+static int jent_gen_entropy(struct rand_data *ec)
 {
 	unsigned int k = 0, safety_factor = 0;
+	unsigned int stuck_counter = 0;
 
 	if (fips_enabled)
 		safety_factor = JENT_ENTROPY_SAFETY_FACTOR;
@@ -565,8 +568,13 @@ static void jent_gen_entropy(struct rand_data *ec)
 
 	while (!jent_health_failure(ec)) {
 		/* If a stuck measurement is received, repeat measurement */
-		if (jent_measure_jitter(ec, NULL))
+		if (jent_measure_jitter(ec, NULL)) {
+			if (stuck_counter++ > 100)
+				return 0;
 			continue;
+		}
+
+		stuck_counter = 0;
 
 		/*
 		 * We multiply the loop value with ->osr to obtain the
@@ -575,6 +583,8 @@ static void jent_gen_entropy(struct rand_data *ec)
 		if (++k >= ((DATA_SIZE_BITS + safety_factor) * ec->osr))
 			break;
 	}
+
+	return 1;
 }
 
 /*
@@ -611,7 +621,8 @@ int jent_read_entropy(struct rand_data *ec, unsigned char *data,
 	while (len > 0) {
 		unsigned int tocopy, health_test_result;
 
-		jent_gen_entropy(ec);
+		if (!jent_gen_entropy(ec))
+			return -3;
 
 		health_test_result = jent_health_failure(ec);
 		if (health_test_result > JENT_PERMANENT_FAILURE_SHIFT) {


johannes


2023-12-01 18:49:12

by Johannes Berg

Subject: Re: jitterentropy vs. simulation

[I guess we should keep the CCs so others see it]

> Looking at the stuck check it will be bogus in simulations.

True.

> You might as well ifdef that instead.
>
> If a simulation is running, insert the entropy regardless and do not compute the derivatives used in the check.

Actually you mostly don't want anything inserted in that case, so it's
not bad to skip it.

I was mostly thinking this might be better than adding a completely
unrelated ifdef. Also I guess in real systems with a bad implementation
of random_get_entropy(), the second/third derivatives might be
constant/zero for quite a while, so it may be better to abort?

In any case, I couldn't figure out any way to avoid configuring this
into the kernel when any kind of crypto is also in ...

johannes


2023-12-01 20:38:34

by Simo Sorce

Subject: Re: jitterentropy vs. simulation

On Fri, 2023-12-01 at 19:35 +0100, Johannes Berg wrote:
> [I guess we should keep the CCs so others see it]
>
> > Looking at the stuck check it will be bogus in simulations.
>
> True.
>
> > You might as well ifdef that instead.
> >
> > If a simulation is running, insert the entropy regardless and do not compute the derivatives used in the check.
>
> Actually you mostly don't want anything inserted in that case, so it's
> not bad to skip it.
>
> I was mostly thinking this might be better than adding a completely
> unrelated ifdef. Also I guess in real systems with a bad implementation
> of random_get_entropy(), the second/third derivatives might be
> constant/zero for quite a while, so it may be better to abort?
>
> In any case, I couldn't figure out any way to avoid configuring this
> into the kernel when any kind of crypto is also in ...

Doesn't this imply the simulation is not complete and you need to add
clock jitter for the simulation to be more useful?

You can use the host rng to add random jitter to the simulation clock.

Simo.

--
Simo Sorce,
DE @ RHEL Crypto Team,
Red Hat, Inc


2023-12-01 20:38:43

by Johannes Berg

Subject: Re: jitterentropy vs. simulation

On Fri, 2023-12-01 at 14:25 -0500, Simo Sorce wrote:
>
> Doesn't this imply the simulation is not complete

Kind of?

> and you need to add
> clock jitter for the simulation to be more useful?
>

No, it's more that it's _intentionally_ incomplete. This works fine in
normal ARCH=um, but with the time-travel variants we have more of a
discrete event-based simulation, so that it integrates well with other
things (e.g. SystemC or similar device simulations). So this is quite
intentional, and yes, it breaks in a few spots such as this one.

johannes

2023-12-04 10:42:00

by Stephan Müller

Subject: Re: jitterentropy vs. simulation

On Friday, 1 December 2023, 19:35:11 CET, Johannes Berg wrote:

Hi Johannes,

> [I guess we should keep the CCs so others see it]
>
> > Looking at the stuck check it will be bogus in simulations.
>
> True.
>
> > You might as well ifdef that instead.
> >
> > If a simulation is running, insert the entropy regardless and do not
> > compute the derivatives used in the check.
> Actually you mostly don't want anything inserted in that case, so it's
> not bad to skip it.
>
> I was mostly thinking this might be better than adding a completely
> unrelated ifdef. Also I guess in real systems with a bad implementation
> of random_get_entropy(), the second/third derivatives might be
> constant/zero for quite a while, so it may be better to abort?
>
> In any case, I couldn't figure out any way to avoid configuring this
> into the kernel when any kind of crypto is also in ...

The reason the Jitter RNG is dragged in is the Kconfig select in
CRYPTO_DRBG. Why does the DRBG require it?

The DRBG seeds from get_random_bytes || Jitter RNG output. It pulls an equal
amount of data from each source, where each source alone is able to provide
all the entropy that the DRBG needs. That said, the Jitter RNG could be
designated as optional, because the code can already handle the situation
where the Jitter RNG is not available. However, in FIPS mode, we must have
the Jitter RNG source.
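
To illustrate the seeding scheme (a paraphrase only, not the actual
crypto/drbg.c code; the seed buffer is assumed to hold 2 * entropylen
bytes, and drbg_seed_sketch is an invented name):

static int drbg_seed_sketch(struct rand_data *jent, u8 *seed,
			    unsigned int entropylen)
{
	/* first half: the in-kernel RNG, always available */
	get_random_bytes(seed, entropylen);

	/* second half: the Jitter RNG, optional in this proposal */
	if (jent) {
		int ret = jent_read_entropy(jent, seed + entropylen,
					    entropylen);

		if (ret < 0)
			return ret;	/* e.g. -3 from the patch above */
	}

	return 0;
}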

That said, I could see changing CRYPTO_DRBG to remove the select of
CRYPTO_JITTERENTROPY and instead adding the select to CRYPTO_FIPS.

This change would entail a new log entry when a DRBG instance initializes:

pr_info("DRBG: Continuing without Jitter RNG\n");

I would assume that this change would allow you to deselect the Jitter RNG in
your environment.

Side note: do you have an idea how the Jitter RNG could still be selected
by default when the DRBG is enabled, while allowing it to be deselected,
following the aforementioned suggestion?
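
Maybe Kconfig's "imply" fits here - it is a weak select that turns the
symbol on by default while still allowing it to be disabled. A sketch
only (not the literal crypto/Kconfig entries):

config CRYPTO_DRBG
	tristate
	imply CRYPTO_JITTERENTROPY	# on by default, deselectable

config CRYPTO_FIPS
	bool "FIPS 200 compliance"
	select CRYPTO_JITTERENTROPY	# hard requirement in FIPS mode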

Ciao
Stephan


2023-12-04 10:42:11

by Johannes Berg

Subject: Re: jitterentropy vs. simulation

Hi Stephan,

> The reason the Jitter RNG is dragged in is the Kconfig select in
> CRYPTO_DRBG.

Yes, but you can't get rid of DRBG either. That'd require getting rid of
CRYPTO_RNG_DEFAULT, which means you cannot even have CRYPTO_GENIV?

Hmm. Maybe it _is_ possible to get rid of all these.

> Why does the DRBG require it?
>
> The DRBG seeds from get_random_bytes || Jitter RNG output. It pulls an equal
> amount of data from each source, where each source alone is able to provide
> all the entropy that the DRBG needs. That said, the Jitter RNG could be
> designated as optional, because the code can already handle the situation
> where the Jitter RNG is not available. However, in FIPS mode, we must have
> the Jitter RNG source.
>
> That said, I could see changing CRYPTO_DRBG to remove the select of
> CRYPTO_JITTERENTROPY and instead adding the select to CRYPTO_FIPS.

Well, CRYPTO_FIPS would also break our testing, since we still have to
support WEP/TKIP (RC4-based) ...

> This change would entail a new log entry when a DRBG instance initializes:
>
> pr_info("DRBG: Continuing without Jitter RNG\n");
>
> I would assume that this change would allow you to deselect the Jitter RNG
> in your environment.
>
> Side note: do you have an idea how the Jitter RNG could still be selected
> by default when the DRBG is enabled, while allowing it to be deselected,
> following the aforementioned suggestion?

Well, it _looks_ like it might be possible to get rid of it (see above).
However, what I was really thinking is that the Jitter RNG might detect
that it isn't actually getting any reasonable data and eventually give
up, rather than looping indefinitely? It seems that might be useful more
generally too?

johannes

2023-12-04 10:42:21

by Stephan Müller

Subject: Re: jitterentropy vs. simulation

On Monday, 4 December 2023, 11:24:25 CET, Johannes Berg wrote:

Hi Johannes,

>
> Well, it _looks_ like it might be possible to get rid of it (see above).
> However, what I was really thinking is that the Jitter RNG might detect
> that it isn't actually getting any reasonable data and eventually give
> up, rather than looping indefinitely? It seems that might be useful more
> generally too?

Agreed. Let me have a close look at the patch then.

Thanks.

Ciao
Stephan


2023-12-04 12:35:37

by Benjamin Beichler

Subject: Re: jitterentropy vs. simulation

On 01.12.2023 at 19:35, Johannes Berg wrote:
> [I guess we should keep the CCs so others see it]
>
>> Looking at the stuck check it will be bogus in simulations.
>
> True.
>
>> You might as well ifdef that instead.
>>
>> If a simulation is running, insert the entropy regardless and do not compute the derivatives used in the check.
>
> Actually you mostly don't want anything inserted in that case, so it's
> not bad to skip it.
>
> I was mostly thinking this might be better than adding a completely
> unrelated ifdef. Also I guess in real systems with a bad implementation
> of random_get_entropy(), the second/third derivatives might be
> constant/zero for quite a while, so it may be better to abort?
Maybe a dumb question: could we simply implement a timex.h for UM which
delegates to the x86 variant in non-time-travel mode, and otherwise
pulls some randomness for random_get_entropy() from the host, or from a
file/pipe configurable on the UML command line?
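
Something like this, perhaps (purely a sketch: time_travel_mode and
TT_MODE_OFF exist in UML today, but uml_entropy_from_host() is an
invented helper that would read the host RNG or the configured
file/pipe):

/* hypothetical arch/um/include/asm/timex.h fragment */
static inline unsigned long random_get_entropy(void)
{
	/* time-travel: the simulated clock has no jitter, feed from host */
	if (time_travel_mode != TT_MODE_OFF)
		return uml_entropy_from_host();	/* invented helper */

	/* normal mode: delegate to the cycle counter, as on x86 */
	return get_cycles();
}
#define random_get_entropy random_get_entropy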

I would say, if the random jitter is truly deterministic for a
simulation, that seems to be good enough.

That said, it would be nice to be able to configure all random sources
to pull entropy from some file that we can configure from the command
line, but that is a different topic.

>
> In any case, I couldn't figure out any way to avoid configuring this
> into the kernel when any kind of crypto is also in ...
>
> johannes


2023-12-04 14:40:23

by Anton Ivanov

Subject: Re: jitterentropy vs. simulation

On 04/12/2023 12:06, Benjamin Beichler wrote:
On 01.12.2023 at 19:35, Johannes Berg wrote:
>> [I guess we should keep the CCs so others see it]
>>
>>> Looking at the stuck check it will be bogus in simulations.
>>
>> True.
>>
>>> You might as well ifdef that instead.
>>>
>>> If a simulation is running, insert the entropy regardless and do not compute the derivatives used in the check.
>>
>> Actually you mostly don't want anything inserted in that case, so it's
>> not bad to skip it.
>>
>> I was mostly thinking this might be better than adding a completely
>> unrelated ifdef. Also I guess in real systems with a bad implementation
>> of random_get_entropy(), the second/third derivatives might be
>> constant/zero for quite a while, so it may be better to abort?
> Maybe a dumb question: could we simply implement a timex.h for UM which delegates to the x86 variant in non-time-travel mode,

Sounds reasonable.

> and otherwise pulls some randomness for random_get_entropy() from the host, or from a file/pipe configurable on the UML command line?

The second one.

We can run haveged in pipe mode and read from the pipe. Additionally, this allows deterministic simulation if need be: you can record the haveged output and reuse it for more than one simulation.
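
E.g. something along these lines ("entropy-source=" is a made-up UML
option here, just to sketch the idea; check haveged(8) for the exact
flags):

mkfifo /tmp/uml-entropy
haveged -n 0 | tee /tmp/entropy.rec > /tmp/uml-entropy &
linux ... entropy-source=/tmp/uml-entropy

# later, replay the recording for a deterministic re-run:
linux ... entropy-source=/tmp/entropy.rec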

>
> I would say, if the random jitter is truly deterministic for a simulation, that seems to be good enough.
>
> That said, it would be nice to be able to configure all random sources to pull entropy from some file that we can configure from the command line, but that is a different topic.
>
>>
>> In any case, I couldn't figure out any way to avoid configuring this
>> into the kernel when any kind of crypto is also in ...
>>
>> johannes

--
Anton R. Ivanov
https://www.kot-begemot.co.uk/