2022-01-13 15:44:31

by Jason A. Donenfeld

Subject: [PATCH 0/7] first in overall series of rng code house cleaning

The RNG has been through a lot of changes over the years, and the code
could use a bit of house cleaning. This is the first series of what I
anticipate will be a few of these. The goal is to have each component
clearly analyzable for what it is, which should make both security
analysis and day-to-day maintenance easier.

Jason A. Donenfeld (7):
random: cleanup poolinfo abstraction
random: cleanup integer types
random: remove incomplete last_data logic
random: remove unused reserved argument
random: rather than entropy_store abstraction, use global
random: remove unused OUTPUT_POOL constants
random: de-duplicate INPUT_POOL constants

drivers/char/random.c | 430 ++++++++++++++--------------------
include/trace/events/random.h | 56 ++---
2 files changed, 198 insertions(+), 288 deletions(-)

--
2.34.1



2022-01-13 15:44:33

by Jason A. Donenfeld

Subject: [PATCH 1/7] random: cleanup poolinfo abstraction

Now that we're only using one polynomial, we can clean up its
representation into constants, instead of dynamically passing around
pointers to select between different polynomials. This improves the
codegen and makes the code a bit more straightforward.

Signed-off-by: Jason A. Donenfeld <[email protected]>
---
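(Not part of the commit: for anybody eyeballing the codegen claim, here is a
minimal user-space sketch of the mixing loop with the polynomial folded into
fixed constants. The constants and twist table are taken from the diff below;
locking, tracing, and entropy accounting are omitted, so treat it as an
illustration rather than the driver code.)

#include <stddef.h>
#include <stdint.h>

#define POOL_WORDS    128
#define POOL_WORDMASK (POOL_WORDS - 1)
#define POOL_TAP1     104
#define POOL_TAP2     76
#define POOL_TAP3     51
#define POOL_TAP4     25
#define POOL_TAP5     1

static uint32_t pool[POOL_WORDS];
static unsigned int add_ptr, input_rotate;

static const uint32_t twist_table[8] = {
	0x00000000, 0x3b6e20c8, 0x76dc4190, 0x4db26158,
	0xedb88320, 0xd6d6a3e8, 0x9b64c2b0, 0xa00ae278 };

static uint32_t rol32(uint32_t w, unsigned int r)
{
	return (w << (r & 31)) | (w >> ((32 - r) & 31));
}

static void mix_pool_bytes(const void *in, size_t nbytes)
{
	const uint8_t *bytes = in;
	unsigned int i = add_ptr;
	uint32_t w;

	while (nbytes--) {
		w = rol32(*bytes++, input_rotate);
		i = (i - 1) & POOL_WORDMASK;

		/* XOR in the taps, now plain compile-time constants */
		w ^= pool[i];
		w ^= pool[(i + POOL_TAP1) & POOL_WORDMASK];
		w ^= pool[(i + POOL_TAP2) & POOL_WORDMASK];
		w ^= pool[(i + POOL_TAP3) & POOL_WORDMASK];
		w ^= pool[(i + POOL_TAP4) & POOL_WORDMASK];
		w ^= pool[(i + POOL_TAP5) & POOL_WORDMASK];

		/* Mix the result back in with a twist */
		pool[i] = (w >> 3) ^ twist_table[w & 7];

		/* 7 bits of extra rotation, 14 when wrapping past slot 0 */
		input_rotate = (input_rotate + (i ? 7 : 14)) & 31;
	}

	add_ptr = i;
}
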
drivers/char/random.c | 85 ++++++++++++++++++++-----------------------
1 file changed, 39 insertions(+), 46 deletions(-)

diff --git a/drivers/char/random.c b/drivers/char/random.c
index 227fb7802738..c8f05b7551dc 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -430,14 +430,20 @@ static int random_write_wakeup_bits = 28 * OUTPUT_POOL_WORDS;
* polynomial which improves the resulting TGFSR polynomial to be
* irreducible, which we have made here.
*/
-static const struct poolinfo {
- int poolbitshift, poolwords, poolbytes, poolfracbits;
-#define S(x) ilog2(x)+5, (x), (x)*4, (x) << (ENTROPY_SHIFT+5)
- int tap1, tap2, tap3, tap4, tap5;
-} poolinfo_table[] = {
- /* was: x^128 + x^103 + x^76 + x^51 +x^25 + x + 1 */
+enum poolinfo {
+ POOL_WORDS = 128,
+ POOL_WORDMASK = POOL_WORDS - 1,
+ POOL_BYTES = POOL_WORDS * sizeof(u32),
+ POOL_BITS = POOL_BYTES * 8,
+ POOL_BITSHIFT = ilog2(POOL_WORDS) + 5,
+ POOL_FRACBITS = POOL_WORDS << (ENTROPY_SHIFT + 5),
+
/* x^128 + x^104 + x^76 + x^51 +x^25 + x + 1 */
- { S(128), 104, 76, 51, 25, 1 },
+ POOL_TAP1 = 104,
+ POOL_TAP2 = 76,
+ POOL_TAP3 = 51,
+ POOL_TAP4 = 25,
+ POOL_TAP5 = 1
};

/*
@@ -503,7 +509,6 @@ MODULE_PARM_DESC(ratelimit_disable, "Disable random ratelimit suppression");
struct entropy_store;
struct entropy_store {
/* read-only data: */
- const struct poolinfo *poolinfo;
__u32 *pool;
const char *name;

@@ -525,7 +530,6 @@ static void crng_reseed(struct crng_state *crng, struct entropy_store *r);
static __u32 input_pool_data[INPUT_POOL_WORDS] __latent_entropy;

static struct entropy_store input_pool = {
- .poolinfo = &poolinfo_table[0],
.name = "input",
.lock = __SPIN_LOCK_UNLOCKED(input_pool.lock),
.pool = input_pool_data
@@ -548,33 +552,26 @@ static __u32 const twist_table[8] = {
static void _mix_pool_bytes(struct entropy_store *r, const void *in,
int nbytes)
{
- unsigned long i, tap1, tap2, tap3, tap4, tap5;
+ unsigned long i;
int input_rotate;
- int wordmask = r->poolinfo->poolwords - 1;
const unsigned char *bytes = in;
__u32 w;

- tap1 = r->poolinfo->tap1;
- tap2 = r->poolinfo->tap2;
- tap3 = r->poolinfo->tap3;
- tap4 = r->poolinfo->tap4;
- tap5 = r->poolinfo->tap5;
-
input_rotate = r->input_rotate;
i = r->add_ptr;

/* mix one byte at a time to simplify size handling and churn faster */
while (nbytes--) {
w = rol32(*bytes++, input_rotate);
- i = (i - 1) & wordmask;
+ i = (i - 1) & POOL_WORDMASK;

/* XOR in the various taps */
w ^= r->pool[i];
- w ^= r->pool[(i + tap1) & wordmask];
- w ^= r->pool[(i + tap2) & wordmask];
- w ^= r->pool[(i + tap3) & wordmask];
- w ^= r->pool[(i + tap4) & wordmask];
- w ^= r->pool[(i + tap5) & wordmask];
+ w ^= r->pool[(i + POOL_TAP1) & POOL_WORDMASK];
+ w ^= r->pool[(i + POOL_TAP2) & POOL_WORDMASK];
+ w ^= r->pool[(i + POOL_TAP3) & POOL_WORDMASK];
+ w ^= r->pool[(i + POOL_TAP4) & POOL_WORDMASK];
+ w ^= r->pool[(i + POOL_TAP5) & POOL_WORDMASK];

/* Mix the result back in with a twist */
r->pool[i] = (w >> 3) ^ twist_table[w & 7];
@@ -672,7 +669,6 @@ static void process_random_ready_list(void)
static void credit_entropy_bits(struct entropy_store *r, int nbits)
{
int entropy_count, orig;
- const int pool_size = r->poolinfo->poolfracbits;
int nfrac = nbits << ENTROPY_SHIFT;

if (!nbits)
@@ -690,41 +686,41 @@ static void credit_entropy_bits(struct entropy_store *r, int nbits)
* ideal case of pure Shannon entropy, new contributions
* approach the full value asymptotically:
*
- * entropy <- entropy + (pool_size - entropy) *
- * (1 - exp(-add_entropy/pool_size))
+ * entropy <- entropy + (POOL_FRACBITS - entropy) *
+ * (1 - exp(-add_entropy/POOL_FRACBITS))
*
- * For add_entropy <= pool_size/2 then
- * (1 - exp(-add_entropy/pool_size)) >=
- * (add_entropy/pool_size)*0.7869...
+ * For add_entropy <= POOL_FRACBITS/2 then
+ * (1 - exp(-add_entropy/POOL_FRACBITS)) >=
+ * (add_entropy/POOL_FRACBITS)*0.7869...
* so we can approximate the exponential with
- * 3/4*add_entropy/pool_size and still be on the
- * safe side by adding at most pool_size/2 at a time.
+ * 3/4*add_entropy/POOL_FRACBITS and still be on the
+ * safe side by adding at most POOL_FRACBITS/2 at a time.
*
- * The use of pool_size-2 in the while statement is to
+ * The use of POOL_FRACBITS-2 in the while statement is to
* prevent rounding artifacts from making the loop
- * arbitrarily long; this limits the loop to log2(pool_size)*2
+ * arbitrarily long; this limits the loop to log2(POOL_FRACBITS)*2
* turns no matter how large nbits is.
*/
int pnfrac = nfrac;
- const int s = r->poolinfo->poolbitshift + ENTROPY_SHIFT + 2;
+ const int s = POOL_BITSHIFT + ENTROPY_SHIFT + 2;
/* The +2 corresponds to the /4 in the denominator */

do {
- unsigned int anfrac = min(pnfrac, pool_size/2);
+ unsigned int anfrac = min(pnfrac, POOL_FRACBITS/2);
unsigned int add =
- ((pool_size - entropy_count)*anfrac*3) >> s;
+ ((POOL_FRACBITS - entropy_count)*anfrac*3) >> s;

entropy_count += add;
pnfrac -= anfrac;
- } while (unlikely(entropy_count < pool_size-2 && pnfrac));
+ } while (unlikely(entropy_count < POOL_FRACBITS-2 && pnfrac));
}

if (WARN_ON(entropy_count < 0)) {
pr_warn("negative entropy/overflow: pool %s count %d\n",
r->name, entropy_count);
entropy_count = 0;
- } else if (entropy_count > pool_size)
- entropy_count = pool_size;
+ } else if (entropy_count > POOL_FRACBITS)
+ entropy_count = POOL_FRACBITS;
if (cmpxchg(&r->entropy_count, orig, entropy_count) != orig)
goto retry;

@@ -741,13 +737,11 @@ static void credit_entropy_bits(struct entropy_store *r, int nbits)

static int credit_entropy_bits_safe(struct entropy_store *r, int nbits)
{
- const int nbits_max = r->poolinfo->poolwords * 32;
-
if (nbits < 0)
return -EINVAL;

/* Cap the value to avoid overflows */
- nbits = min(nbits, nbits_max);
+ nbits = min(nbits, POOL_BITS);

credit_entropy_bits(r, nbits);
return 0;
@@ -1343,7 +1337,7 @@ static size_t account(struct entropy_store *r, size_t nbytes, int min,
int entropy_count, orig, have_bytes;
size_t ibytes, nfrac;

- BUG_ON(r->entropy_count > r->poolinfo->poolfracbits);
+ BUG_ON(r->entropy_count > POOL_FRACBITS);

/* Can we pull enough? */
retry:
@@ -1409,8 +1403,7 @@ static void extract_buf(struct entropy_store *r, __u8 *out)

/* Generate a hash across the pool */
spin_lock_irqsave(&r->lock, flags);
- blake2s_update(&state, (const u8 *)r->pool,
- r->poolinfo->poolwords * sizeof(*r->pool));
+ blake2s_update(&state, (const u8 *)r->pool, POOL_BYTES);
blake2s_final(&state, hash); /* final zeros out state */

/*
@@ -1766,7 +1759,7 @@ static void __init init_std_data(struct entropy_store *r)
unsigned long rv;

mix_pool_bytes(r, &now, sizeof(now));
- for (i = r->poolinfo->poolbytes; i > 0; i -= sizeof(rv)) {
+ for (i = POOL_BYTES; i > 0; i -= sizeof(rv)) {
if (!arch_get_random_seed_long(&rv) &&
!arch_get_random_long(&rv))
rv = random_get_entropy();
--
2.34.1


2022-01-13 15:44:44

by Jason A. Donenfeld

Subject: [PATCH 3/7] random: remove incomplete last_data logic

A few things were added under the "if (fips_enabled)" banner that never
really got completed, and the FIPS people are in any case heading in a
different direction. Rather than keep this half-baked code around, get
rid of it so that we can focus on a single design of the RNG rather
than two.

Signed-off-by: Jason A. Donenfeld <[email protected]>
---
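(Not part of the commit: the dropped branch was essentially a continuous
output test in the FIPS 140-2 style, remembering the previous extracted block
and treating an exact repeat as a fatal failure. A stripped-down user-space
illustration of that idea, with abort() standing in for the driver's panic(),
follows; it is not the removed code itself.)

#include <stdint.h>
#include <stdlib.h>
#include <string.h>

#define EXTRACT_SIZE 16	/* BLAKE2S_HASH_SIZE / 2 */

static uint8_t last_data[EXTRACT_SIZE];
static int last_data_init;

/* Compare each new block against the previous one; a verbatim repeat
 * from a proper RNG is treated as a catastrophic failure. */
static void continuous_test(const uint8_t block[EXTRACT_SIZE])
{
	if (last_data_init && !memcmp(block, last_data, EXTRACT_SIZE))
		abort();
	memcpy(last_data, block, EXTRACT_SIZE);
	last_data_init = 1;
}
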
drivers/char/random.c | 39 ++++-----------------------------------
1 file changed, 4 insertions(+), 35 deletions(-)

diff --git a/drivers/char/random.c b/drivers/char/random.c
index 4b84b95428bc..7a4d858ac731 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -337,7 +337,6 @@
#include <linux/spinlock.h>
#include <linux/kthread.h>
#include <linux/percpu.h>
-#include <linux/fips.h>
#include <linux/ptrace.h>
#include <linux/workqueue.h>
#include <linux/irq.h>
@@ -517,14 +516,12 @@ struct entropy_store {
unsigned short add_ptr;
unsigned short input_rotate;
int entropy_count;
- unsigned int last_data_init:1;
- u8 last_data[EXTRACT_SIZE];
};

static ssize_t extract_entropy(struct entropy_store *r, void *buf,
size_t nbytes, int min, int rsvd);
static ssize_t _extract_entropy(struct entropy_store *r, void *buf,
- size_t nbytes, int fips);
+ size_t nbytes);

static void crng_reseed(struct crng_state *crng, struct entropy_store *r);
static u32 input_pool_data[INPUT_POOL_WORDS] __latent_entropy;
@@ -821,7 +818,7 @@ static void crng_initialize_secondary(struct crng_state *crng)

static void __init crng_initialize_primary(struct crng_state *crng)
{
- _extract_entropy(&input_pool, &crng->state[4], sizeof(u32) * 12, 0);
+ _extract_entropy(&input_pool, &crng->state[4], sizeof(u32) * 12);
if (crng_init_try_arch_early(crng) && trust_cpu && crng_init < 2) {
invalidate_batched_entropy();
numa_crng_init();
@@ -1426,22 +1423,13 @@ static void extract_buf(struct entropy_store *r, u8 *out)
}

static ssize_t _extract_entropy(struct entropy_store *r, void *buf,
- size_t nbytes, int fips)
+ size_t nbytes)
{
ssize_t ret = 0, i;
u8 tmp[EXTRACT_SIZE];
- unsigned long flags;

while (nbytes) {
extract_buf(r, tmp);
-
- if (fips) {
- spin_lock_irqsave(&r->lock, flags);
- if (!memcmp(tmp, r->last_data, EXTRACT_SIZE))
- panic("Hardware RNG duplicated output!\n");
- memcpy(r->last_data, tmp, EXTRACT_SIZE);
- spin_unlock_irqrestore(&r->lock, flags);
- }
i = min_t(int, nbytes, EXTRACT_SIZE);
memcpy(buf, tmp, i);
nbytes -= i;
@@ -1467,28 +1455,9 @@ static ssize_t _extract_entropy(struct entropy_store *r, void *buf,
static ssize_t extract_entropy(struct entropy_store *r, void *buf,
size_t nbytes, int min, int reserved)
{
- u8 tmp[EXTRACT_SIZE];
- unsigned long flags;
-
- /* if last_data isn't primed, we need EXTRACT_SIZE extra bytes */
- if (fips_enabled) {
- spin_lock_irqsave(&r->lock, flags);
- if (!r->last_data_init) {
- r->last_data_init = 1;
- spin_unlock_irqrestore(&r->lock, flags);
- trace_extract_entropy(r->name, EXTRACT_SIZE,
- ENTROPY_BITS(r), _RET_IP_);
- extract_buf(r, tmp);
- spin_lock_irqsave(&r->lock, flags);
- memcpy(r->last_data, tmp, EXTRACT_SIZE);
- }
- spin_unlock_irqrestore(&r->lock, flags);
- }
-
trace_extract_entropy(r->name, nbytes, ENTROPY_BITS(r), _RET_IP_);
nbytes = account(r, nbytes, min, reserved);
-
- return _extract_entropy(r, buf, nbytes, fips_enabled);
+ return _extract_entropy(r, buf, nbytes);
}

#define warn_unseeded_randomness(previous) \
--
2.34.1


2022-01-13 15:44:48

by Jason A. Donenfeld

Subject: [PATCH 2/7] random: cleanup integer types

Rather than using the userspace types, __uXX, switch to using uXX. And
rather than a variously chosen `char *` or `unsigned char *`, use `u8 *`
uniformly for things that aren't strings and are traversed byte by
byte.

Signed-off-by: Jason A. Donenfeld <[email protected]>
---
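(Not part of the commit: a small illustration of why u8 * is the better fit
for byte-wise traversal. Plain char has implementation-defined signedness, so
feeding *bytes into arithmetic or shifts can sign-extend on some targets,
while u8 makes the bytes unambiguously unsigned. This is a made-up example,
not code from the patch.)

#include <stddef.h>
#include <stdint.h>

static uint32_t fold_bytes(const void *in, size_t nbytes)
{
	/* With 'const char *' a byte like 0x80 would promote to a negative
	 * int and set high bits after the XOR; with uint8_t it cannot. */
	const uint8_t *bytes = in;
	uint32_t acc = 0;

	while (nbytes--)
		acc = (acc << 1) ^ *bytes++;
	return acc;
}
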
drivers/char/random.c | 95 +++++++++++++++++++++----------------------
1 file changed, 47 insertions(+), 48 deletions(-)

diff --git a/drivers/char/random.c b/drivers/char/random.c
index c8f05b7551dc..4b84b95428bc 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -456,7 +456,7 @@ static DEFINE_SPINLOCK(random_ready_list_lock);
static LIST_HEAD(random_ready_list);

struct crng_state {
- __u32 state[16];
+ u32 state[16];
unsigned long init_time;
spinlock_t lock;
};
@@ -483,9 +483,9 @@ static bool crng_need_final_init = false;
static int crng_init_cnt = 0;
static unsigned long crng_global_init_time = 0;
#define CRNG_INIT_CNT_THRESH (2*CHACHA_KEY_SIZE)
-static void _extract_crng(struct crng_state *crng, __u8 out[CHACHA_BLOCK_SIZE]);
+static void _extract_crng(struct crng_state *crng, u8 out[CHACHA_BLOCK_SIZE]);
static void _crng_backtrack_protect(struct crng_state *crng,
- __u8 tmp[CHACHA_BLOCK_SIZE], int used);
+ u8 tmp[CHACHA_BLOCK_SIZE], int used);
static void process_random_ready_list(void);
static void _get_random_bytes(void *buf, int nbytes);

@@ -509,7 +509,7 @@ MODULE_PARM_DESC(ratelimit_disable, "Disable random ratelimit suppression");
struct entropy_store;
struct entropy_store {
/* read-only data: */
- __u32 *pool;
+ u32 *pool;
const char *name;

/* read-write data: */
@@ -518,7 +518,7 @@ struct entropy_store {
unsigned short input_rotate;
int entropy_count;
unsigned int last_data_init:1;
- __u8 last_data[EXTRACT_SIZE];
+ u8 last_data[EXTRACT_SIZE];
};

static ssize_t extract_entropy(struct entropy_store *r, void *buf,
@@ -527,7 +527,7 @@ static ssize_t _extract_entropy(struct entropy_store *r, void *buf,
size_t nbytes, int fips);

static void crng_reseed(struct crng_state *crng, struct entropy_store *r);
-static __u32 input_pool_data[INPUT_POOL_WORDS] __latent_entropy;
+static u32 input_pool_data[INPUT_POOL_WORDS] __latent_entropy;

static struct entropy_store input_pool = {
.name = "input",
@@ -535,7 +535,7 @@ static struct entropy_store input_pool = {
.pool = input_pool_data
};

-static __u32 const twist_table[8] = {
+static u32 const twist_table[8] = {
0x00000000, 0x3b6e20c8, 0x76dc4190, 0x4db26158,
0xedb88320, 0xd6d6a3e8, 0x9b64c2b0, 0xa00ae278 };

@@ -554,8 +554,8 @@ static void _mix_pool_bytes(struct entropy_store *r, const void *in,
{
unsigned long i;
int input_rotate;
- const unsigned char *bytes = in;
- __u32 w;
+ const u8 *bytes = in;
+ u32 w;

input_rotate = r->input_rotate;
i = r->add_ptr;
@@ -608,10 +608,10 @@ static void mix_pool_bytes(struct entropy_store *r, const void *in,
}

struct fast_pool {
- __u32 pool[4];
+ u32 pool[4];
unsigned long last;
- unsigned short reg_idx;
- unsigned char count;
+ u16 reg_idx;
+ u8 count;
};

/*
@@ -621,8 +621,8 @@ struct fast_pool {
*/
static void fast_mix(struct fast_pool *f)
{
- __u32 a = f->pool[0], b = f->pool[1];
- __u32 c = f->pool[2], d = f->pool[3];
+ u32 a = f->pool[0], b = f->pool[1];
+ u32 c = f->pool[2], d = f->pool[3];

a += b; c += d;
b = rol32(b, 6); d = rol32(d, 27);
@@ -814,14 +814,14 @@ static bool __init crng_init_try_arch_early(struct crng_state *crng)
static void crng_initialize_secondary(struct crng_state *crng)
{
chacha_init_consts(crng->state);
- _get_random_bytes(&crng->state[4], sizeof(__u32) * 12);
+ _get_random_bytes(&crng->state[4], sizeof(u32) * 12);
crng_init_try_arch(crng);
crng->init_time = jiffies - CRNG_RESEED_INTERVAL - 1;
}

static void __init crng_initialize_primary(struct crng_state *crng)
{
- _extract_entropy(&input_pool, &crng->state[4], sizeof(__u32) * 12, 0);
+ _extract_entropy(&input_pool, &crng->state[4], sizeof(u32) * 12, 0);
if (crng_init_try_arch_early(crng) && trust_cpu && crng_init < 2) {
invalidate_batched_entropy();
numa_crng_init();
@@ -911,10 +911,10 @@ static struct crng_state *select_crng(void)
* path. So we can't afford to dilly-dally. Returns the number of
* bytes processed from cp.
*/
-static size_t crng_fast_load(const char *cp, size_t len)
+static size_t crng_fast_load(const u8 *cp, size_t len)
{
unsigned long flags;
- char *p;
+ u8 *p;
size_t ret = 0;

if (!spin_trylock_irqsave(&primary_crng.lock, flags))
@@ -923,7 +923,7 @@ static size_t crng_fast_load(const char *cp, size_t len)
spin_unlock_irqrestore(&primary_crng.lock, flags);
return 0;
}
- p = (unsigned char *) &primary_crng.state[4];
+ p = (u8 *) &primary_crng.state[4];
while (len > 0 && crng_init_cnt < CRNG_INIT_CNT_THRESH) {
p[crng_init_cnt % CHACHA_KEY_SIZE] ^= *cp;
cp++; crng_init_cnt++; len--; ret++;
@@ -951,14 +951,14 @@ static size_t crng_fast_load(const char *cp, size_t len)
* like a fixed DMI table (for example), which might very well be
* unique to the machine, but is otherwise unvarying.
*/
-static int crng_slow_load(const char *cp, size_t len)
+static int crng_slow_load(const u8 *cp, size_t len)
{
unsigned long flags;
- static unsigned char lfsr = 1;
- unsigned char tmp;
+ static u8 lfsr = 1;
+ u8 tmp;
unsigned i, max = CHACHA_KEY_SIZE;
- const char * src_buf = cp;
- char * dest_buf = (char *) &primary_crng.state[4];
+ const u8 * src_buf = cp;
+ u8 * dest_buf = (u8 *) &primary_crng.state[4];

if (!spin_trylock_irqsave(&primary_crng.lock, flags))
return 0;
@@ -987,8 +987,8 @@ static void crng_reseed(struct crng_state *crng, struct entropy_store *r)
unsigned long flags;
int i, num;
union {
- __u8 block[CHACHA_BLOCK_SIZE];
- __u32 key[8];
+ u8 block[CHACHA_BLOCK_SIZE];
+ u32 key[8];
} buf;

if (r) {
@@ -1015,7 +1015,7 @@ static void crng_reseed(struct crng_state *crng, struct entropy_store *r)
}

static void _extract_crng(struct crng_state *crng,
- __u8 out[CHACHA_BLOCK_SIZE])
+ u8 out[CHACHA_BLOCK_SIZE])
{
unsigned long flags, init_time;

@@ -1033,7 +1033,7 @@ static void _extract_crng(struct crng_state *crng,
spin_unlock_irqrestore(&crng->lock, flags);
}

-static void extract_crng(__u8 out[CHACHA_BLOCK_SIZE])
+static void extract_crng(u8 out[CHACHA_BLOCK_SIZE])
{
_extract_crng(select_crng(), out);
}
@@ -1043,26 +1043,26 @@ static void extract_crng(__u8 out[CHACHA_BLOCK_SIZE])
* enough) to mutate the CRNG key to provide backtracking protection.
*/
static void _crng_backtrack_protect(struct crng_state *crng,
- __u8 tmp[CHACHA_BLOCK_SIZE], int used)
+ u8 tmp[CHACHA_BLOCK_SIZE], int used)
{
unsigned long flags;
- __u32 *s, *d;
+ u32 *s, *d;
int i;

- used = round_up(used, sizeof(__u32));
+ used = round_up(used, sizeof(u32));
if (used + CHACHA_KEY_SIZE > CHACHA_BLOCK_SIZE) {
extract_crng(tmp);
used = 0;
}
spin_lock_irqsave(&crng->lock, flags);
- s = (__u32 *) &tmp[used];
+ s = (u32 *) &tmp[used];
d = &crng->state[4];
for (i=0; i < 8; i++)
*d++ ^= *s++;
spin_unlock_irqrestore(&crng->lock, flags);
}

-static void crng_backtrack_protect(__u8 tmp[CHACHA_BLOCK_SIZE], int used)
+static void crng_backtrack_protect(u8 tmp[CHACHA_BLOCK_SIZE], int used)
{
_crng_backtrack_protect(select_crng(), tmp, used);
}
@@ -1070,7 +1070,7 @@ static void crng_backtrack_protect(__u8 tmp[CHACHA_BLOCK_SIZE], int used)
static ssize_t extract_crng_user(void __user *buf, size_t nbytes)
{
ssize_t ret = 0, i = CHACHA_BLOCK_SIZE;
- __u8 tmp[CHACHA_BLOCK_SIZE] __aligned(4);
+ u8 tmp[CHACHA_BLOCK_SIZE] __aligned(4);
int large_request = (nbytes > 256);

while (nbytes) {
@@ -1241,15 +1241,15 @@ static void add_interrupt_bench(cycles_t start)
#define add_interrupt_bench(x)
#endif

-static __u32 get_reg(struct fast_pool *f, struct pt_regs *regs)
+static u32 get_reg(struct fast_pool *f, struct pt_regs *regs)
{
- __u32 *ptr = (__u32 *) regs;
+ u32 *ptr = (u32 *) regs;
unsigned int idx;

if (regs == NULL)
return 0;
idx = READ_ONCE(f->reg_idx);
- if (idx >= sizeof(struct pt_regs) / sizeof(__u32))
+ if (idx >= sizeof(struct pt_regs) / sizeof(u32))
idx = 0;
ptr += idx++;
WRITE_ONCE(f->reg_idx, idx);
@@ -1263,8 +1263,8 @@ void add_interrupt_randomness(int irq)
struct pt_regs *regs = get_irq_regs();
unsigned long now = jiffies;
cycles_t cycles = random_get_entropy();
- __u32 c_high, j_high;
- __u64 ip;
+ u32 c_high, j_high;
+ u64 ip;

if (cycles == 0)
cycles = get_reg(fast_pool, regs);
@@ -1282,8 +1282,7 @@ void add_interrupt_randomness(int irq)

if (unlikely(crng_init == 0)) {
if ((fast_pool->count >= 64) &&
- crng_fast_load((char *) fast_pool->pool,
- sizeof(fast_pool->pool)) > 0) {
+ crng_fast_load((u8 *)fast_pool->pool, sizeof(fast_pool->pool)) > 0) {
fast_pool->count = 0;
fast_pool->last = now;
}
@@ -1380,7 +1379,7 @@ static size_t account(struct entropy_store *r, size_t nbytes, int min,
*
* Note: we assume that .poolwords is a multiple of 16 words.
*/
-static void extract_buf(struct entropy_store *r, __u8 *out)
+static void extract_buf(struct entropy_store *r, u8 *out)
{
struct blake2s_state state __aligned(__alignof__(unsigned long));
u8 hash[BLAKE2S_HASH_SIZE];
@@ -1430,7 +1429,7 @@ static ssize_t _extract_entropy(struct entropy_store *r, void *buf,
size_t nbytes, int fips)
{
ssize_t ret = 0, i;
- __u8 tmp[EXTRACT_SIZE];
+ u8 tmp[EXTRACT_SIZE];
unsigned long flags;

while (nbytes) {
@@ -1468,7 +1467,7 @@ static ssize_t _extract_entropy(struct entropy_store *r, void *buf,
static ssize_t extract_entropy(struct entropy_store *r, void *buf,
size_t nbytes, int min, int reserved)
{
- __u8 tmp[EXTRACT_SIZE];
+ u8 tmp[EXTRACT_SIZE];
unsigned long flags;

/* if last_data isn't primed, we need EXTRACT_SIZE extra bytes */
@@ -1530,7 +1529,7 @@ static void _warn_unseeded_randomness(const char *func_name, void *caller,
*/
static void _get_random_bytes(void *buf, int nbytes)
{
- __u8 tmp[CHACHA_BLOCK_SIZE] __aligned(4);
+ u8 tmp[CHACHA_BLOCK_SIZE] __aligned(4);

trace_get_random_bytes(nbytes, _RET_IP_);

@@ -1724,7 +1723,7 @@ EXPORT_SYMBOL(del_random_ready_callback);
int __must_check get_random_bytes_arch(void *buf, int nbytes)
{
int left = nbytes;
- char *p = buf;
+ u8 *p = buf;

trace_get_random_bytes_arch(left, _RET_IP_);
while (left) {
@@ -1866,7 +1865,7 @@ static int
write_pool(struct entropy_store *r, const char __user *buffer, size_t count)
{
size_t bytes;
- __u32 t, buf[16];
+ u32 t, buf[16];
const char __user *p = buffer;

while (count > 0) {
@@ -1876,7 +1875,7 @@ write_pool(struct entropy_store *r, const char __user *buffer, size_t count)
if (copy_from_user(&buf, p, bytes))
return -EFAULT;

- for (b = bytes ; b > 0 ; b -= sizeof(__u32), i++) {
+ for (b = bytes; b > 0; b -= sizeof(u32), i++) {
if (!arch_get_random_int(&t))
break;
buf[i] ^= t;
--
2.34.1


2022-01-13 15:44:50

by Jason A. Donenfeld

Subject: [PATCH 4/7] random: remove unused reserved argument

This argument is always set to zero, since these days we no longer care
about keeping a certain amount reserved in the pool. So just remove it
and clean up the function signatures.

Signed-off-by: Jason A. Donenfeld <[email protected]>
---
drivers/char/random.c | 17 +++++++----------
1 file changed, 7 insertions(+), 10 deletions(-)

diff --git a/drivers/char/random.c b/drivers/char/random.c
index 7a4d858ac731..5bef9565b251 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -519,7 +519,7 @@ struct entropy_store {
};

static ssize_t extract_entropy(struct entropy_store *r, void *buf,
- size_t nbytes, int min, int rsvd);
+ size_t nbytes, int min);
static ssize_t _extract_entropy(struct entropy_store *r, void *buf,
size_t nbytes);

@@ -989,7 +989,7 @@ static void crng_reseed(struct crng_state *crng, struct entropy_store *r)
} buf;

if (r) {
- num = extract_entropy(r, &buf, 32, 16, 0);
+ num = extract_entropy(r, &buf, 32, 16);
if (num == 0)
return;
} else {
@@ -1327,8 +1327,7 @@ EXPORT_SYMBOL_GPL(add_disk_randomness);
* This function decides how many bytes to actually take from the
* given pool, and also debits the entropy count accordingly.
*/
-static size_t account(struct entropy_store *r, size_t nbytes, int min,
- int reserved)
+static size_t account(struct entropy_store *r, size_t nbytes, int min)
{
int entropy_count, orig, have_bytes;
size_t ibytes, nfrac;
@@ -1342,7 +1341,7 @@ static size_t account(struct entropy_store *r, size_t nbytes, int min,
/* never pull more than available */
have_bytes = entropy_count >> (ENTROPY_SHIFT + 3);

- if ((have_bytes -= reserved) < 0)
+ if (have_bytes < 0)
have_bytes = 0;
ibytes = min_t(size_t, ibytes, have_bytes);
if (ibytes < min)
@@ -1448,15 +1447,13 @@ static ssize_t _extract_entropy(struct entropy_store *r, void *buf,
* returns it in a buffer.
*
* The min parameter specifies the minimum amount we can pull before
- * failing to avoid races that defeat catastrophic reseeding while the
- * reserved parameter indicates how much entropy we must leave in the
- * pool after each pull to avoid starving other readers.
+ * failing to avoid races that defeat catastrophic reseeding.
*/
static ssize_t extract_entropy(struct entropy_store *r, void *buf,
- size_t nbytes, int min, int reserved)
+ size_t nbytes, int min)
{
trace_extract_entropy(r->name, nbytes, ENTROPY_BITS(r), _RET_IP_);
- nbytes = account(r, nbytes, min, reserved);
+ nbytes = account(r, nbytes, min);
return _extract_entropy(r, buf, nbytes);
}

--
2.34.1


2022-01-13 15:44:53

by Jason A. Donenfeld

Subject: [PATCH 5/7] random: rather than entropy_store abstraction, use global

Originally, the RNG used several pools, so having things abstracted out
over a generic entropy_store object made sense. These days there's only
one input pool, and what remains is an uneven mix of code that goes
through the abstraction and code that uses &input_pool directly. Rather
than this uneasy mixture, just get rid of the abstraction entirely and
have everything use the global. This simplifies the code and makes
reading it a bit easier.

Signed-off-by: Jason A. Donenfeld <[email protected]>
---
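(Not part of the commit: the shape of the change, reduced to a toy. Instead
of threading a pool pointer through every helper, the pool becomes one
file-local struct that the helpers name directly. Everything here is
simplified for illustration; the diff below is the real conversion.)

#include <stddef.h>
#include <stdint.h>

static struct {
	uint32_t pool[128];
	unsigned int add_ptr;
	int entropy_count;
} input_pool;

/* Formerly: mix_pool_bytes(struct entropy_store *r, const void *in, ...) */
static void mix_pool_bytes(const void *in, size_t nbytes)
{
	const uint8_t *bytes = in;

	while (nbytes--) {
		input_pool.add_ptr = (input_pool.add_ptr - 1) & 127;
		input_pool.pool[input_pool.add_ptr] ^= *bytes++;
	}
}
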
drivers/char/random.c | 219 +++++++++++++++-------------------
include/trace/events/random.h | 56 ++++-----
2 files changed, 117 insertions(+), 158 deletions(-)

diff --git a/drivers/char/random.c b/drivers/char/random.c
index 5bef9565b251..b14de6456921 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -375,7 +375,7 @@
* credit_entropy_bits() needs to be 64 bits wide.
*/
#define ENTROPY_SHIFT 3
-#define ENTROPY_BITS(r) ((r)->entropy_count >> ENTROPY_SHIFT)
+#define ENTROPY_BITS() (input_pool.entropy_count >> ENTROPY_SHIFT)

/*
* If the entropy count falls under this number of bits, then we
@@ -505,33 +505,27 @@ MODULE_PARM_DESC(ratelimit_disable, "Disable random ratelimit suppression");
*
**********************************************************************/

-struct entropy_store;
-struct entropy_store {
+static u32 input_pool_data[INPUT_POOL_WORDS] __latent_entropy;
+
+static struct {
/* read-only data: */
u32 *pool;
- const char *name;

/* read-write data: */
spinlock_t lock;
unsigned short add_ptr;
unsigned short input_rotate;
int entropy_count;
-};
-
-static ssize_t extract_entropy(struct entropy_store *r, void *buf,
- size_t nbytes, int min);
-static ssize_t _extract_entropy(struct entropy_store *r, void *buf,
- size_t nbytes);
-
-static void crng_reseed(struct crng_state *crng, struct entropy_store *r);
-static u32 input_pool_data[INPUT_POOL_WORDS] __latent_entropy;
-
-static struct entropy_store input_pool = {
- .name = "input",
+} input_pool = {
.lock = __SPIN_LOCK_UNLOCKED(input_pool.lock),
.pool = input_pool_data
};

+static ssize_t extract_entropy(void *buf, size_t nbytes, int min);
+static ssize_t _extract_entropy(void *buf, size_t nbytes);
+
+static void crng_reseed(struct crng_state *crng, bool use_input_pool);
+
static u32 const twist_table[8] = {
0x00000000, 0x3b6e20c8, 0x76dc4190, 0x4db26158,
0xedb88320, 0xd6d6a3e8, 0x9b64c2b0, 0xa00ae278 };
@@ -546,16 +540,15 @@ static u32 const twist_table[8] = {
* it's cheap to do so and helps slightly in the expected case where
* the entropy is concentrated in the low-order bits.
*/
-static void _mix_pool_bytes(struct entropy_store *r, const void *in,
- int nbytes)
+static void _mix_pool_bytes(const void *in, int nbytes)
{
unsigned long i;
int input_rotate;
const u8 *bytes = in;
u32 w;

- input_rotate = r->input_rotate;
- i = r->add_ptr;
+ input_rotate = input_pool.input_rotate;
+ i = input_pool.add_ptr;

/* mix one byte at a time to simplify size handling and churn faster */
while (nbytes--) {
@@ -563,15 +556,15 @@ static void _mix_pool_bytes(struct entropy_store *r, const void *in,
i = (i - 1) & POOL_WORDMASK;

/* XOR in the various taps */
- w ^= r->pool[i];
- w ^= r->pool[(i + POOL_TAP1) & POOL_WORDMASK];
- w ^= r->pool[(i + POOL_TAP2) & POOL_WORDMASK];
- w ^= r->pool[(i + POOL_TAP3) & POOL_WORDMASK];
- w ^= r->pool[(i + POOL_TAP4) & POOL_WORDMASK];
- w ^= r->pool[(i + POOL_TAP5) & POOL_WORDMASK];
+ w ^= input_pool.pool[i];
+ w ^= input_pool.pool[(i + POOL_TAP1) & POOL_WORDMASK];
+ w ^= input_pool.pool[(i + POOL_TAP2) & POOL_WORDMASK];
+ w ^= input_pool.pool[(i + POOL_TAP3) & POOL_WORDMASK];
+ w ^= input_pool.pool[(i + POOL_TAP4) & POOL_WORDMASK];
+ w ^= input_pool.pool[(i + POOL_TAP5) & POOL_WORDMASK];

/* Mix the result back in with a twist */
- r->pool[i] = (w >> 3) ^ twist_table[w & 7];
+ input_pool.pool[i] = (w >> 3) ^ twist_table[w & 7];

/*
* Normally, we add 7 bits of rotation to the pool.
@@ -582,26 +575,24 @@ static void _mix_pool_bytes(struct entropy_store *r, const void *in,
input_rotate = (input_rotate + (i ? 7 : 14)) & 31;
}

- r->input_rotate = input_rotate;
- r->add_ptr = i;
+ input_pool.input_rotate = input_rotate;
+ input_pool.add_ptr = i;
}

-static void __mix_pool_bytes(struct entropy_store *r, const void *in,
- int nbytes)
+static void __mix_pool_bytes(const void *in, int nbytes)
{
- trace_mix_pool_bytes_nolock(r->name, nbytes, _RET_IP_);
- _mix_pool_bytes(r, in, nbytes);
+ trace_mix_pool_bytes_nolock(nbytes, _RET_IP_);
+ _mix_pool_bytes(in, nbytes);
}

-static void mix_pool_bytes(struct entropy_store *r, const void *in,
- int nbytes)
+static void mix_pool_bytes(const void *in, int nbytes)
{
unsigned long flags;

- trace_mix_pool_bytes(r->name, nbytes, _RET_IP_);
- spin_lock_irqsave(&r->lock, flags);
- _mix_pool_bytes(r, in, nbytes);
- spin_unlock_irqrestore(&r->lock, flags);
+ trace_mix_pool_bytes(nbytes, _RET_IP_);
+ spin_lock_irqsave(&input_pool.lock, flags);
+ _mix_pool_bytes(in, nbytes);
+ spin_unlock_irqrestore(&input_pool.lock, flags);
}

struct fast_pool {
@@ -663,16 +654,16 @@ static void process_random_ready_list(void)
* Use credit_entropy_bits_safe() if the value comes from userspace
* or otherwise should be checked for extreme values.
*/
-static void credit_entropy_bits(struct entropy_store *r, int nbits)
+static void credit_entropy_bits(int nbits)
{
- int entropy_count, orig;
+ int entropy_count, entropy_bits, orig;
int nfrac = nbits << ENTROPY_SHIFT;

if (!nbits)
return;

retry:
- entropy_count = orig = READ_ONCE(r->entropy_count);
+ entropy_count = orig = READ_ONCE(input_pool.entropy_count);
if (nfrac < 0) {
/* Debit */
entropy_count += nfrac;
@@ -713,26 +704,21 @@ static void credit_entropy_bits(struct entropy_store *r, int nbits)
}

if (WARN_ON(entropy_count < 0)) {
- pr_warn("negative entropy/overflow: pool %s count %d\n",
- r->name, entropy_count);
+ pr_warn("negative entropy/overflow: count %d\n", entropy_count);
entropy_count = 0;
} else if (entropy_count > POOL_FRACBITS)
entropy_count = POOL_FRACBITS;
- if (cmpxchg(&r->entropy_count, orig, entropy_count) != orig)
+ if (cmpxchg(&input_pool.entropy_count, orig, entropy_count) != orig)
goto retry;

- trace_credit_entropy_bits(r->name, nbits,
- entropy_count >> ENTROPY_SHIFT, _RET_IP_);
+ trace_credit_entropy_bits(nbits, entropy_count >> ENTROPY_SHIFT, _RET_IP_);

- if (r == &input_pool) {
- int entropy_bits = entropy_count >> ENTROPY_SHIFT;
-
- if (crng_init < 2 && entropy_bits >= 128)
- crng_reseed(&primary_crng, r);
- }
+ entropy_bits = entropy_count >> ENTROPY_SHIFT;
+ if (crng_init < 2 && entropy_bits >= 128)
+ crng_reseed(&primary_crng, true);
}

-static int credit_entropy_bits_safe(struct entropy_store *r, int nbits)
+static int credit_entropy_bits_safe(int nbits)
{
if (nbits < 0)
return -EINVAL;
@@ -740,7 +726,7 @@ static int credit_entropy_bits_safe(struct entropy_store *r, int nbits)
/* Cap the value to avoid overflows */
nbits = min(nbits, POOL_BITS);

- credit_entropy_bits(r, nbits);
+ credit_entropy_bits(nbits);
return 0;
}

@@ -818,7 +804,7 @@ static void crng_initialize_secondary(struct crng_state *crng)

static void __init crng_initialize_primary(struct crng_state *crng)
{
- _extract_entropy(&input_pool, &crng->state[4], sizeof(u32) * 12);
+ _extract_entropy(&crng->state[4], sizeof(u32) * 12);
if (crng_init_try_arch_early(crng) && trust_cpu && crng_init < 2) {
invalidate_batched_entropy();
numa_crng_init();
@@ -979,7 +965,7 @@ static int crng_slow_load(const u8 *cp, size_t len)
return 1;
}

-static void crng_reseed(struct crng_state *crng, struct entropy_store *r)
+static void crng_reseed(struct crng_state *crng, bool use_input_pool)
{
unsigned long flags;
int i, num;
@@ -988,8 +974,8 @@ static void crng_reseed(struct crng_state *crng, struct entropy_store *r)
u32 key[8];
} buf;

- if (r) {
- num = extract_entropy(r, &buf, 32, 16);
+ if (use_input_pool) {
+ num = extract_entropy(&buf, 32, 16);
if (num == 0)
return;
} else {
@@ -1020,8 +1006,7 @@ static void _extract_crng(struct crng_state *crng,
init_time = READ_ONCE(crng->init_time);
if (time_after(READ_ONCE(crng_global_init_time), init_time) ||
time_after(jiffies, init_time + CRNG_RESEED_INTERVAL))
- crng_reseed(crng, crng == &primary_crng ?
- &input_pool : NULL);
+ crng_reseed(crng, crng == &primary_crng);
}
spin_lock_irqsave(&crng->lock, flags);
chacha20_block(&crng->state[0], out);
@@ -1132,8 +1117,8 @@ void add_device_randomness(const void *buf, unsigned int size)

trace_add_device_randomness(size, _RET_IP_);
spin_lock_irqsave(&input_pool.lock, flags);
- _mix_pool_bytes(&input_pool, buf, size);
- _mix_pool_bytes(&input_pool, &time, sizeof(time));
+ _mix_pool_bytes(buf, size);
+ _mix_pool_bytes(&time, sizeof(time));
spin_unlock_irqrestore(&input_pool.lock, flags);
}
EXPORT_SYMBOL(add_device_randomness);
@@ -1152,7 +1137,6 @@ static struct timer_rand_state input_timer_state = INIT_TIMER_RAND_STATE;
*/
static void add_timer_randomness(struct timer_rand_state *state, unsigned num)
{
- struct entropy_store *r;
struct {
long jiffies;
unsigned cycles;
@@ -1163,8 +1147,7 @@ static void add_timer_randomness(struct timer_rand_state *state, unsigned num)
sample.jiffies = jiffies;
sample.cycles = random_get_entropy();
sample.num = num;
- r = &input_pool;
- mix_pool_bytes(r, &sample, sizeof(sample));
+ mix_pool_bytes(&sample, sizeof(sample));

/*
* Calculate number of bits of randomness we probably added.
@@ -1196,7 +1179,7 @@ static void add_timer_randomness(struct timer_rand_state *state, unsigned num)
* Round down by 1 bit on general principles,
* and limit entropy estimate to 12 bits.
*/
- credit_entropy_bits(r, min_t(int, fls(delta>>1), 11));
+ credit_entropy_bits(min_t(int, fls(delta>>1), 11));
}

void add_input_randomness(unsigned int type, unsigned int code,
@@ -1211,7 +1194,7 @@ void add_input_randomness(unsigned int type, unsigned int code,
last_value = value;
add_timer_randomness(&input_timer_state,
(type << 4) ^ code ^ (code >> 4) ^ value);
- trace_add_input_randomness(ENTROPY_BITS(&input_pool));
+ trace_add_input_randomness(ENTROPY_BITS());
}
EXPORT_SYMBOL_GPL(add_input_randomness);

@@ -1255,7 +1238,6 @@ static u32 get_reg(struct fast_pool *f, struct pt_regs *regs)

void add_interrupt_randomness(int irq)
{
- struct entropy_store *r;
struct fast_pool *fast_pool = this_cpu_ptr(&irq_randomness);
struct pt_regs *regs = get_irq_regs();
unsigned long now = jiffies;
@@ -1290,18 +1272,17 @@ void add_interrupt_randomness(int irq)
!time_after(now, fast_pool->last + HZ))
return;

- r = &input_pool;
- if (!spin_trylock(&r->lock))
+ if (!spin_trylock(&input_pool.lock))
return;

fast_pool->last = now;
- __mix_pool_bytes(r, &fast_pool->pool, sizeof(fast_pool->pool));
- spin_unlock(&r->lock);
+ __mix_pool_bytes(&fast_pool->pool, sizeof(fast_pool->pool));
+ spin_unlock(&input_pool.lock);

fast_pool->count = 0;

/* award one bit for the contents of the fast pool */
- credit_entropy_bits(r, 1);
+ credit_entropy_bits(1);
}
EXPORT_SYMBOL_GPL(add_interrupt_randomness);

@@ -1312,7 +1293,7 @@ void add_disk_randomness(struct gendisk *disk)
return;
/* first major is 1, so we get >= 0x200 here */
add_timer_randomness(disk->random, 0x100 + disk_devt(disk));
- trace_add_disk_randomness(disk_devt(disk), ENTROPY_BITS(&input_pool));
+ trace_add_disk_randomness(disk_devt(disk), ENTROPY_BITS());
}
EXPORT_SYMBOL_GPL(add_disk_randomness);
#endif
@@ -1327,16 +1308,16 @@ EXPORT_SYMBOL_GPL(add_disk_randomness);
* This function decides how many bytes to actually take from the
* given pool, and also debits the entropy count accordingly.
*/
-static size_t account(struct entropy_store *r, size_t nbytes, int min)
+static size_t account(size_t nbytes, int min)
{
int entropy_count, orig, have_bytes;
size_t ibytes, nfrac;

- BUG_ON(r->entropy_count > POOL_FRACBITS);
+ BUG_ON(input_pool.entropy_count > POOL_FRACBITS);

/* Can we pull enough? */
retry:
- entropy_count = orig = READ_ONCE(r->entropy_count);
+ entropy_count = orig = READ_ONCE(input_pool.entropy_count);
ibytes = nbytes;
/* never pull more than available */
have_bytes = entropy_count >> (ENTROPY_SHIFT + 3);
@@ -1348,8 +1329,7 @@ static size_t account(struct entropy_store *r, size_t nbytes, int min)
ibytes = 0;

if (WARN_ON(entropy_count < 0)) {
- pr_warn("negative entropy count: pool %s count %d\n",
- r->name, entropy_count);
+ pr_warn("negative entropy count: count %d\n", entropy_count);
entropy_count = 0;
}
nfrac = ibytes << (ENTROPY_SHIFT + 3);
@@ -1358,11 +1338,11 @@ static size_t account(struct entropy_store *r, size_t nbytes, int min)
else
entropy_count = 0;

- if (cmpxchg(&r->entropy_count, orig, entropy_count) != orig)
+ if (cmpxchg(&input_pool.entropy_count, orig, entropy_count) != orig)
goto retry;

- trace_debit_entropy(r->name, 8 * ibytes);
- if (ibytes && ENTROPY_BITS(r) < random_write_wakeup_bits) {
+ trace_debit_entropy(8 * ibytes);
+ if (ibytes && ENTROPY_BITS() < random_write_wakeup_bits) {
wake_up_interruptible(&random_write_wait);
kill_fasync(&fasync, SIGIO, POLL_OUT);
}
@@ -1375,7 +1355,7 @@ static size_t account(struct entropy_store *r, size_t nbytes, int min)
*
* Note: we assume that .poolwords is a multiple of 16 words.
*/
-static void extract_buf(struct entropy_store *r, u8 *out)
+static void extract_buf(u8 *out)
{
struct blake2s_state state __aligned(__alignof__(unsigned long));
u8 hash[BLAKE2S_HASH_SIZE];
@@ -1397,8 +1377,8 @@ static void extract_buf(struct entropy_store *r, u8 *out)
}

/* Generate a hash across the pool */
- spin_lock_irqsave(&r->lock, flags);
- blake2s_update(&state, (const u8 *)r->pool, POOL_BYTES);
+ spin_lock_irqsave(&input_pool.lock, flags);
+ blake2s_update(&state, (const u8 *)input_pool.pool, POOL_BYTES);
blake2s_final(&state, hash); /* final zeros out state */

/*
@@ -1410,8 +1390,8 @@ static void extract_buf(struct entropy_store *r, u8 *out)
* brute-forcing the feedback as hard as brute-forcing the
* hash.
*/
- __mix_pool_bytes(r, hash, sizeof(hash));
- spin_unlock_irqrestore(&r->lock, flags);
+ __mix_pool_bytes(hash, sizeof(hash));
+ spin_unlock_irqrestore(&input_pool.lock, flags);

/* Note that EXTRACT_SIZE is half of hash size here, because above
* we've dumped the full length back into mixer. By reducing the
@@ -1421,14 +1401,13 @@ static void extract_buf(struct entropy_store *r, u8 *out)
memzero_explicit(hash, sizeof(hash));
}

-static ssize_t _extract_entropy(struct entropy_store *r, void *buf,
- size_t nbytes)
+static ssize_t _extract_entropy(void *buf, size_t nbytes)
{
ssize_t ret = 0, i;
u8 tmp[EXTRACT_SIZE];

while (nbytes) {
- extract_buf(r, tmp);
+ extract_buf(tmp);
i = min_t(int, nbytes, EXTRACT_SIZE);
memcpy(buf, tmp, i);
nbytes -= i;
@@ -1449,12 +1428,11 @@ static ssize_t _extract_entropy(struct entropy_store *r, void *buf,
* The min parameter specifies the minimum amount we can pull before
* failing to avoid races that defeat catastrophic reseeding.
*/
-static ssize_t extract_entropy(struct entropy_store *r, void *buf,
- size_t nbytes, int min)
+static ssize_t extract_entropy(void *buf, size_t nbytes, int min)
{
- trace_extract_entropy(r->name, nbytes, ENTROPY_BITS(r), _RET_IP_);
- nbytes = account(r, nbytes, min);
- return _extract_entropy(r, buf, nbytes);
+ trace_extract_entropy(nbytes, ENTROPY_BITS(), _RET_IP_);
+ nbytes = account(nbytes, min);
+ return _extract_entropy(buf, nbytes);
}

#define warn_unseeded_randomness(previous) \
@@ -1539,7 +1517,7 @@ EXPORT_SYMBOL(get_random_bytes);
*/
static void entropy_timer(struct timer_list *t)
{
- credit_entropy_bits(&input_pool, 1);
+ credit_entropy_bits(1);
}

/*
@@ -1563,14 +1541,14 @@ static void try_to_generate_entropy(void)
while (!crng_ready()) {
if (!timer_pending(&stack.timer))
mod_timer(&stack.timer, jiffies+1);
- mix_pool_bytes(&input_pool, &stack.now, sizeof(stack.now));
+ mix_pool_bytes(&stack.now, sizeof(stack.now));
schedule();
stack.now = random_get_entropy();
}

del_timer_sync(&stack.timer);
destroy_timer_on_stack(&stack.timer);
- mix_pool_bytes(&input_pool, &stack.now, sizeof(stack.now));
+ mix_pool_bytes(&stack.now, sizeof(stack.now));
}

/*
@@ -1711,26 +1689,24 @@ EXPORT_SYMBOL(get_random_bytes_arch);
/*
* init_std_data - initialize pool with system data
*
- * @r: pool to initialize
- *
* This function clears the pool's entropy count and mixes some system
* data into the pool to prepare it for use. The pool is not cleared
* as that can only decrease the entropy in the pool.
*/
-static void __init init_std_data(struct entropy_store *r)
+static void __init init_std_data(void)
{
int i;
ktime_t now = ktime_get_real();
unsigned long rv;

- mix_pool_bytes(r, &now, sizeof(now));
+ mix_pool_bytes(&now, sizeof(now));
for (i = POOL_BYTES; i > 0; i -= sizeof(rv)) {
if (!arch_get_random_seed_long(&rv) &&
!arch_get_random_long(&rv))
rv = random_get_entropy();
- mix_pool_bytes(r, &rv, sizeof(rv));
+ mix_pool_bytes(&rv, sizeof(rv));
}
- mix_pool_bytes(r, utsname(), sizeof(*(utsname())));
+ mix_pool_bytes(utsname(), sizeof(*(utsname())));
}

/*
@@ -1745,7 +1721,7 @@ static void __init init_std_data(struct entropy_store *r)
*/
int __init rand_initialize(void)
{
- init_std_data(&input_pool);
+ init_std_data();
if (crng_need_final_init)
crng_finalize_init(&primary_crng);
crng_initialize_primary(&primary_crng);
@@ -1782,7 +1758,7 @@ urandom_read_nowarn(struct file *file, char __user *buf, size_t nbytes,

nbytes = min_t(size_t, nbytes, INT_MAX >> (ENTROPY_SHIFT + 3));
ret = extract_crng_user(buf, nbytes);
- trace_urandom_read(8 * nbytes, 0, ENTROPY_BITS(&input_pool));
+ trace_urandom_read(8 * nbytes, 0, ENTROPY_BITS());
return ret;
}

@@ -1822,13 +1798,13 @@ random_poll(struct file *file, poll_table * wait)
mask = 0;
if (crng_ready())
mask |= EPOLLIN | EPOLLRDNORM;
- if (ENTROPY_BITS(&input_pool) < random_write_wakeup_bits)
+ if (ENTROPY_BITS() < random_write_wakeup_bits)
mask |= EPOLLOUT | EPOLLWRNORM;
return mask;
}

static int
-write_pool(struct entropy_store *r, const char __user *buffer, size_t count)
+write_pool(const char __user *buffer, size_t count)
{
size_t bytes;
u32 t, buf[16];
@@ -1850,7 +1826,7 @@ write_pool(struct entropy_store *r, const char __user *buffer, size_t count)
count -= bytes;
p += bytes;

- mix_pool_bytes(r, buf, bytes);
+ mix_pool_bytes(buf, bytes);
cond_resched();
}

@@ -1862,7 +1838,7 @@ static ssize_t random_write(struct file *file, const char __user *buffer,
{
size_t ret;

- ret = write_pool(&input_pool, buffer, count);
+ ret = write_pool(buffer, count);
if (ret)
return ret;

@@ -1878,7 +1854,7 @@ static long random_ioctl(struct file *f, unsigned int cmd, unsigned long arg)
switch (cmd) {
case RNDGETENTCNT:
/* inherently racy, no point locking */
- ent_count = ENTROPY_BITS(&input_pool);
+ ent_count = ENTROPY_BITS();
if (put_user(ent_count, p))
return -EFAULT;
return 0;
@@ -1887,7 +1863,7 @@ static long random_ioctl(struct file *f, unsigned int cmd, unsigned long arg)
return -EPERM;
if (get_user(ent_count, p))
return -EFAULT;
- return credit_entropy_bits_safe(&input_pool, ent_count);
+ return credit_entropy_bits_safe(ent_count);
case RNDADDENTROPY:
if (!capable(CAP_SYS_ADMIN))
return -EPERM;
@@ -1897,11 +1873,10 @@ static long random_ioctl(struct file *f, unsigned int cmd, unsigned long arg)
return -EINVAL;
if (get_user(size, p++))
return -EFAULT;
- retval = write_pool(&input_pool, (const char __user *)p,
- size);
+ retval = write_pool((const char __user *)p, size);
if (retval < 0)
return retval;
- return credit_entropy_bits_safe(&input_pool, ent_count);
+ return credit_entropy_bits_safe(ent_count);
case RNDZAPENTCNT:
case RNDCLEARPOOL:
/*
@@ -1917,7 +1892,7 @@ static long random_ioctl(struct file *f, unsigned int cmd, unsigned long arg)
return -EPERM;
if (crng_init < 2)
return -ENODATA;
- crng_reseed(&primary_crng, &input_pool);
+ crng_reseed(&primary_crng, true);
WRITE_ONCE(crng_global_init_time, jiffies - 1);
return 0;
default:
@@ -2241,11 +2216,9 @@ randomize_page(unsigned long start, unsigned long range)
void add_hwgenerator_randomness(const char *buffer, size_t count,
size_t entropy)
{
- struct entropy_store *poolp = &input_pool;
-
if (unlikely(crng_init == 0)) {
size_t ret = crng_fast_load(buffer, count);
- mix_pool_bytes(poolp, buffer, ret);
+ mix_pool_bytes(buffer, ret);
count -= ret;
buffer += ret;
if (!count || crng_init == 0)
@@ -2258,9 +2231,9 @@ void add_hwgenerator_randomness(const char *buffer, size_t count,
*/
wait_event_interruptible(random_write_wait,
!system_wq || kthread_should_stop() ||
- ENTROPY_BITS(&input_pool) <= random_write_wakeup_bits);
- mix_pool_bytes(poolp, buffer, count);
- credit_entropy_bits(poolp, entropy);
+ ENTROPY_BITS() <= random_write_wakeup_bits);
+ mix_pool_bytes(buffer, count);
+ credit_entropy_bits(entropy);
}
EXPORT_SYMBOL_GPL(add_hwgenerator_randomness);

diff --git a/include/trace/events/random.h b/include/trace/events/random.h
index 3d7b432ca5f3..a2d9aa16a5d7 100644
--- a/include/trace/events/random.h
+++ b/include/trace/events/random.h
@@ -28,80 +28,71 @@ TRACE_EVENT(add_device_randomness,
);

DECLARE_EVENT_CLASS(random__mix_pool_bytes,
- TP_PROTO(const char *pool_name, int bytes, unsigned long IP),
+ TP_PROTO(int bytes, unsigned long IP),

- TP_ARGS(pool_name, bytes, IP),
+ TP_ARGS(bytes, IP),

TP_STRUCT__entry(
- __field( const char *, pool_name )
__field( int, bytes )
__field(unsigned long, IP )
),

TP_fast_assign(
- __entry->pool_name = pool_name;
__entry->bytes = bytes;
__entry->IP = IP;
),

- TP_printk("%s pool: bytes %d caller %pS",
- __entry->pool_name, __entry->bytes, (void *)__entry->IP)
+ TP_printk("input pool: bytes %d caller %pS",
+ __entry->bytes, (void *)__entry->IP)
);

DEFINE_EVENT(random__mix_pool_bytes, mix_pool_bytes,
- TP_PROTO(const char *pool_name, int bytes, unsigned long IP),
+ TP_PROTO(int bytes, unsigned long IP),

- TP_ARGS(pool_name, bytes, IP)
+ TP_ARGS(bytes, IP)
);

DEFINE_EVENT(random__mix_pool_bytes, mix_pool_bytes_nolock,
- TP_PROTO(const char *pool_name, int bytes, unsigned long IP),
+ TP_PROTO(int bytes, unsigned long IP),

- TP_ARGS(pool_name, bytes, IP)
+ TP_ARGS(bytes, IP)
);

TRACE_EVENT(credit_entropy_bits,
- TP_PROTO(const char *pool_name, int bits, int entropy_count,
- unsigned long IP),
+ TP_PROTO(int bits, int entropy_count, unsigned long IP),

- TP_ARGS(pool_name, bits, entropy_count, IP),
+ TP_ARGS(bits, entropy_count, IP),

TP_STRUCT__entry(
- __field( const char *, pool_name )
__field( int, bits )
__field( int, entropy_count )
__field(unsigned long, IP )
),

TP_fast_assign(
- __entry->pool_name = pool_name;
__entry->bits = bits;
__entry->entropy_count = entropy_count;
__entry->IP = IP;
),

- TP_printk("%s pool: bits %d entropy_count %d caller %pS",
- __entry->pool_name, __entry->bits,
- __entry->entropy_count, (void *)__entry->IP)
+ TP_printk("input pool: bits %d entropy_count %d caller %pS",
+ __entry->bits, __entry->entropy_count, (void *)__entry->IP)
);

TRACE_EVENT(debit_entropy,
- TP_PROTO(const char *pool_name, int debit_bits),
+ TP_PROTO(int debit_bits),

- TP_ARGS(pool_name, debit_bits),
+ TP_ARGS( debit_bits),

TP_STRUCT__entry(
- __field( const char *, pool_name )
__field( int, debit_bits )
),

TP_fast_assign(
- __entry->pool_name = pool_name;
__entry->debit_bits = debit_bits;
),

- TP_printk("%s: debit_bits %d", __entry->pool_name,
- __entry->debit_bits)
+ TP_printk("input pool: debit_bits %d", __entry->debit_bits)
);

TRACE_EVENT(add_input_randomness,
@@ -170,36 +161,31 @@ DEFINE_EVENT(random__get_random_bytes, get_random_bytes_arch,
);

DECLARE_EVENT_CLASS(random__extract_entropy,
- TP_PROTO(const char *pool_name, int nbytes, int entropy_count,
- unsigned long IP),
+ TP_PROTO(int nbytes, int entropy_count, unsigned long IP),

- TP_ARGS(pool_name, nbytes, entropy_count, IP),
+ TP_ARGS(nbytes, entropy_count, IP),

TP_STRUCT__entry(
- __field( const char *, pool_name )
__field( int, nbytes )
__field( int, entropy_count )
__field(unsigned long, IP )
),

TP_fast_assign(
- __entry->pool_name = pool_name;
__entry->nbytes = nbytes;
__entry->entropy_count = entropy_count;
__entry->IP = IP;
),

- TP_printk("%s pool: nbytes %d entropy_count %d caller %pS",
- __entry->pool_name, __entry->nbytes, __entry->entropy_count,
- (void *)__entry->IP)
+ TP_printk("input pool: nbytes %d entropy_count %d caller %pS",
+ __entry->nbytes, __entry->entropy_count, (void *)__entry->IP)
);


DEFINE_EVENT(random__extract_entropy, extract_entropy,
- TP_PROTO(const char *pool_name, int nbytes, int entropy_count,
- unsigned long IP),
+ TP_PROTO(int nbytes, int entropy_count, unsigned long IP),

- TP_ARGS(pool_name, nbytes, entropy_count, IP)
+ TP_ARGS(nbytes, entropy_count, IP)
);

TRACE_EVENT(urandom_read,
--
2.34.1


2022-01-13 15:44:56

by Jason A. Donenfeld

Subject: [PATCH 6/7] random: remove unused OUTPUT_POOL constants

We no longer have an output pool. Rather, we have just a wakeup-bits
threshold below which processes polling for write access to /dev/random
are woken up, presumably so that they don't hang. This value,
random_write_wakeup_bits, is configurable anyway. So all the now poorly
named OUTPUT_POOL constants were doing was setting a reasonable default
for random_write_wakeup_bits. This commit gets rid of the constants and
folds that value directly into the default of random_write_wakeup_bits.

Signed-off-by: Jason A. Donenfeld <[email protected]>
---
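(Not part of the commit: the default is numerically unchanged.
OUTPUT_POOL_WORDS was 1 << (OUTPUT_POOL_SHIFT - 5) = 1 << (10 - 5) = 32, so
the old default 28 * OUTPUT_POOL_WORDS and the new 28 * (1 << 5) are both 896
bits. A throwaway compile-time check along these lines, not something in the
patch, makes that explicit.)

#define OUTPUT_POOL_SHIFT 10
#define OUTPUT_POOL_WORDS (1 << (OUTPUT_POOL_SHIFT - 5))

_Static_assert(28 * OUTPUT_POOL_WORDS == 28 * (1 << 5),
	       "random_write_wakeup_bits default would change");
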
drivers/char/random.c | 4 +---
1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/drivers/char/random.c b/drivers/char/random.c
index b14de6456921..46aee3dcd807 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -363,8 +363,6 @@
*/
#define INPUT_POOL_SHIFT 12
#define INPUT_POOL_WORDS (1 << (INPUT_POOL_SHIFT-5))
-#define OUTPUT_POOL_SHIFT 10
-#define OUTPUT_POOL_WORDS (1 << (OUTPUT_POOL_SHIFT-5))
#define EXTRACT_SIZE (BLAKE2S_HASH_SIZE / 2)

/*
@@ -382,7 +380,7 @@
* should wake up processes which are selecting or polling on write
* access to /dev/random.
*/
-static int random_write_wakeup_bits = 28 * OUTPUT_POOL_WORDS;
+static int random_write_wakeup_bits = 28 * (1 << 5);

/*
* Originally, we used a primitive polynomial of degree .poolwords
--
2.34.1


2022-01-13 15:44:59

by Jason A. Donenfeld

Subject: [PATCH 7/7] random: de-duplicate INPUT_POOL constants

We already have the POOL_* constants from the earlier cleanup, so
deduplicate the older INPUT_POOL_* ones in favor of them. Also, fold
EXTRACT_SIZE into the poolinfo enum, since it's related.

Signed-off-by: Jason A. Donenfeld <[email protected]>
---
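(Not part of the commit: the substitution is value for value.
INPUT_POOL_WORDS was 1 << (INPUT_POOL_SHIFT - 5) = 1 << (12 - 5) = 128, which
is exactly POOL_WORDS, so the sysctl-visible pool size stays 128 * 32 = 4096
bits, now spelled POOL_BITS. A throwaway check, not part of the patch:)

#define INPUT_POOL_SHIFT 12
#define INPUT_POOL_WORDS (1 << (INPUT_POOL_SHIFT - 5))
#define POOL_WORDS 128
#define POOL_BITS (POOL_WORDS * 32)

_Static_assert(INPUT_POOL_WORDS == POOL_WORDS, "pool word count would change");
_Static_assert(INPUT_POOL_WORDS * 32 == POOL_BITS, "sysctl poolsize would change");
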
drivers/char/random.c | 17 ++++++-----------
1 file changed, 6 insertions(+), 11 deletions(-)

diff --git a/drivers/char/random.c b/drivers/char/random.c
index 46aee3dcd807..839c231485f2 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -358,13 +358,6 @@

/* #define ADD_INTERRUPT_BENCH */

-/*
- * Configuration information
- */
-#define INPUT_POOL_SHIFT 12
-#define INPUT_POOL_WORDS (1 << (INPUT_POOL_SHIFT-5))
-#define EXTRACT_SIZE (BLAKE2S_HASH_SIZE / 2)
-
/*
* To allow fractional bits to be tracked, the entropy_count field is
* denominated in units of 1/8th bits.
@@ -440,7 +433,9 @@ enum poolinfo {
POOL_TAP2 = 76,
POOL_TAP3 = 51,
POOL_TAP4 = 25,
- POOL_TAP5 = 1
+ POOL_TAP5 = 1,
+
+ EXTRACT_SIZE = BLAKE2S_HASH_SIZE / 2
};

/*
@@ -503,7 +498,7 @@ MODULE_PARM_DESC(ratelimit_disable, "Disable random ratelimit suppression");
*
**********************************************************************/

-static u32 input_pool_data[INPUT_POOL_WORDS] __latent_entropy;
+static u32 input_pool_data[POOL_WORDS] __latent_entropy;

static struct {
/* read-only data: */
@@ -1961,7 +1956,7 @@ SYSCALL_DEFINE3(getrandom, char __user *, buf, size_t, count,
#include <linux/sysctl.h>

static int min_write_thresh;
-static int max_write_thresh = INPUT_POOL_WORDS * 32;
+static int max_write_thresh = POOL_WORDS * 32;
static int random_min_urandom_seed = 60;
static char sysctl_bootid[16];

@@ -2018,7 +2013,7 @@ static int proc_do_entropy(struct ctl_table *table, int write,
return proc_dointvec(&fake_table, write, buffer, lenp, ppos);
}

-static int sysctl_poolsize = INPUT_POOL_WORDS * 32;
+static int sysctl_poolsize = POOL_BITS;
extern struct ctl_table random_table[];
struct ctl_table random_table[] = {
{
--
2.34.1


2022-01-14 22:41:29

by Jason A. Donenfeld

Subject: [PATCH] random: cleanup fractional entropy shift constants

The entropy estimator is calculated in units of 1/8 bits, which means
there are various constants where things are shifted by 3. Move these
into our poolinfo enum with the other relevant constants, and normalize
the names a bit by prepending POOL_ like the rest. While we're at it,
turn an English-prose assertion about sizes into a proper BUILD_BUG_ON
so that the compiler can enforce the invariant.

Signed-off-by: Jason A. Donenfeld <[email protected]>
---
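(Not part of the commit: plugging in the series' constants, POOL_BITS = 4096,
so POOL_BITSHIFT = ilog2(4096) = 12, and with POOL_ENTROPY_SHIFT = 3 the
asserted invariant is 2 * (3 + 12) = 30 <= 31. That keeps the
(POOL_FRACBITS - entropy_count) * anfrac * 3 product in credit_entropy_bits()
within a signed 32-bit int, since POOL_FRACBITS = 4096 << 3 = 32768 and
anfrac <= POOL_FRACBITS / 2. A stand-alone restatement of the new check,
illustrative only:)

#define POOL_BITS 4096
#define POOL_BITSHIFT 12			/* ilog2(POOL_BITS) */
#define POOL_ENTROPY_SHIFT 3			/* entropy in 1/8-bit units */
#define POOL_FRACBITS (POOL_BITS << POOL_ENTROPY_SHIFT)

_Static_assert(2 * (POOL_ENTROPY_SHIFT + POOL_BITSHIFT) <= 31,
	       "credit_entropy_bits() multiply would need to be 64 bits wide");
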
drivers/char/random.c | 60 +++++++++++++++++++++----------------------
1 file changed, 29 insertions(+), 31 deletions(-)

diff --git a/drivers/char/random.c b/drivers/char/random.c
index 04b7efb034f7..1ca6d9c6d768 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -358,16 +358,6 @@

/* #define ADD_INTERRUPT_BENCH */

-/*
- * To allow fractional bits to be tracked, the entropy_count field is
- * denominated in units of 1/8th bits.
- *
- * 2*(ENTROPY_SHIFT + poolbitshift) must <= 31, or the multiply in
- * credit_entropy_bits() needs to be 64 bits wide.
- */
-#define ENTROPY_SHIFT 3
-#define ENTROPY_BITS() (input_pool.entropy_count >> ENTROPY_SHIFT)
-
/*
* If the entropy count falls under this number of bits, then we
* should wake up processes which are selecting or polling on write
@@ -425,8 +415,13 @@ enum poolinfo {
POOL_WORDMASK = POOL_WORDS - 1,
POOL_BYTES = POOL_WORDS * sizeof(u32),
POOL_BITS = POOL_BYTES * 8,
- POOL_BITSHIFT = ilog2(POOL_WORDS) + 5,
- POOL_FRACBITS = POOL_WORDS << (ENTROPY_SHIFT + 5),
+ POOL_BITSHIFT = ilog2(POOL_BITS),
+
+ /* To allow fractional bits to be tracked, the entropy_count field is
+ * denominated in units of 1/8th bits. */
+ POOL_ENTROPY_SHIFT = 3,
+#define POOL_ENTROPY_BITS() (input_pool.entropy_count >> POOL_ENTROPY_SHIFT)
+ POOL_FRACBITS = POOL_BITS << POOL_ENTROPY_SHIFT,

/* x^128 + x^104 + x^76 + x^51 +x^25 + x + 1 */
POOL_TAP1 = 104,
@@ -650,7 +645,10 @@ static void process_random_ready_list(void)
static void credit_entropy_bits(int nbits)
{
int entropy_count, entropy_bits, orig;
- int nfrac = nbits << ENTROPY_SHIFT;
+ int nfrac = nbits << POOL_ENTROPY_SHIFT;
+
+ /* Ensure that the multiplication can avoid being 64 bits wide. */
+ BUILD_BUG_ON(2 * (POOL_ENTROPY_SHIFT + POOL_BITSHIFT) > 31);

if (!nbits)
return;
@@ -683,17 +681,17 @@ static void credit_entropy_bits(int nbits)
* turns no matter how large nbits is.
*/
int pnfrac = nfrac;
- const int s = POOL_BITSHIFT + ENTROPY_SHIFT + 2;
+ const int s = POOL_BITSHIFT + POOL_ENTROPY_SHIFT + 2;
/* The +2 corresponds to the /4 in the denominator */

do {
- unsigned int anfrac = min(pnfrac, POOL_FRACBITS/2);
+ unsigned int anfrac = min(pnfrac, POOL_FRACBITS / 2);
unsigned int add =
- ((POOL_FRACBITS - entropy_count)*anfrac*3) >> s;
+ ((POOL_FRACBITS - entropy_count) * anfrac * 3) >> s;

entropy_count += add;
pnfrac -= anfrac;
- } while (unlikely(entropy_count < POOL_FRACBITS-2 && pnfrac));
+ } while (unlikely(entropy_count < POOL_FRACBITS - 2 && pnfrac));
}

if (WARN_ON(entropy_count < 0)) {
@@ -704,9 +702,9 @@ static void credit_entropy_bits(int nbits)
if (cmpxchg(&input_pool.entropy_count, orig, entropy_count) != orig)
goto retry;

- trace_credit_entropy_bits(nbits, entropy_count >> ENTROPY_SHIFT, _RET_IP_);
+ trace_credit_entropy_bits(nbits, entropy_count >> POOL_ENTROPY_SHIFT, _RET_IP_);

- entropy_bits = entropy_count >> ENTROPY_SHIFT;
+ entropy_bits = entropy_count >> POOL_ENTROPY_SHIFT;
if (crng_init < 2 && entropy_bits >= 128)
crng_reseed(&primary_crng, true);
}
@@ -1187,7 +1185,7 @@ void add_input_randomness(unsigned int type, unsigned int code,
last_value = value;
add_timer_randomness(&input_timer_state,
(type << 4) ^ code ^ (code >> 4) ^ value);
- trace_add_input_randomness(ENTROPY_BITS());
+ trace_add_input_randomness(POOL_ENTROPY_BITS());
}
EXPORT_SYMBOL_GPL(add_input_randomness);

@@ -1286,7 +1284,7 @@ void add_disk_randomness(struct gendisk *disk)
return;
/* first major is 1, so we get >= 0x200 here */
add_timer_randomness(disk->random, 0x100 + disk_devt(disk));
- trace_add_disk_randomness(disk_devt(disk), ENTROPY_BITS());
+ trace_add_disk_randomness(disk_devt(disk), POOL_ENTROPY_BITS());
}
EXPORT_SYMBOL_GPL(add_disk_randomness);
#endif
@@ -1313,7 +1311,7 @@ static size_t account(size_t nbytes, int min)
entropy_count = orig = READ_ONCE(input_pool.entropy_count);
ibytes = nbytes;
/* never pull more than available */
- have_bytes = entropy_count >> (ENTROPY_SHIFT + 3);
+ have_bytes = entropy_count >> (POOL_ENTROPY_SHIFT + 3);

if (have_bytes < 0)
have_bytes = 0;
@@ -1325,7 +1323,7 @@ static size_t account(size_t nbytes, int min)
pr_warn("negative entropy count: count %d\n", entropy_count);
entropy_count = 0;
}
- nfrac = ibytes << (ENTROPY_SHIFT + 3);
+ nfrac = ibytes << (POOL_ENTROPY_SHIFT + 3);
if ((size_t) entropy_count > nfrac)
entropy_count -= nfrac;
else
@@ -1335,7 +1333,7 @@ static size_t account(size_t nbytes, int min)
goto retry;

trace_debit_entropy(8 * ibytes);
- if (ibytes && ENTROPY_BITS() < random_write_wakeup_bits) {
+ if (ibytes && POOL_ENTROPY_BITS() < random_write_wakeup_bits) {
wake_up_interruptible(&random_write_wait);
kill_fasync(&fasync, SIGIO, POLL_OUT);
}
@@ -1423,7 +1421,7 @@ static ssize_t _extract_entropy(void *buf, size_t nbytes)
*/
static ssize_t extract_entropy(void *buf, size_t nbytes, int min)
{
- trace_extract_entropy(nbytes, ENTROPY_BITS(), _RET_IP_);
+ trace_extract_entropy(nbytes, POOL_ENTROPY_BITS(), _RET_IP_);
nbytes = account(nbytes, min);
return _extract_entropy(buf, nbytes);
}
@@ -1749,9 +1747,9 @@ urandom_read_nowarn(struct file *file, char __user *buf, size_t nbytes,
{
int ret;

- nbytes = min_t(size_t, nbytes, INT_MAX >> (ENTROPY_SHIFT + 3));
+ nbytes = min_t(size_t, nbytes, INT_MAX >> (POOL_ENTROPY_SHIFT + 3));
ret = extract_crng_user(buf, nbytes);
- trace_urandom_read(8 * nbytes, 0, ENTROPY_BITS());
+ trace_urandom_read(8 * nbytes, 0, POOL_ENTROPY_BITS());
return ret;
}

@@ -1791,7 +1789,7 @@ random_poll(struct file *file, poll_table * wait)
mask = 0;
if (crng_ready())
mask |= EPOLLIN | EPOLLRDNORM;
- if (ENTROPY_BITS() < random_write_wakeup_bits)
+ if (POOL_ENTROPY_BITS() < random_write_wakeup_bits)
mask |= EPOLLOUT | EPOLLWRNORM;
return mask;
}
@@ -1847,7 +1845,7 @@ static long random_ioctl(struct file *f, unsigned int cmd, unsigned long arg)
switch (cmd) {
case RNDGETENTCNT:
/* inherently racy, no point locking */
- ent_count = ENTROPY_BITS();
+ ent_count = POOL_ENTROPY_BITS();
if (put_user(ent_count, p))
return -EFAULT;
return 0;
@@ -2005,7 +2003,7 @@ static int proc_do_entropy(struct ctl_table *table, int write,
struct ctl_table fake_table;
int entropy_count;

- entropy_count = *(int *)table->data >> ENTROPY_SHIFT;
+ entropy_count = *(int *)table->data >> POOL_ENTROPY_SHIFT;

fake_table.data = &entropy_count;
fake_table.maxlen = sizeof(entropy_count);
@@ -2224,7 +2222,7 @@ void add_hwgenerator_randomness(const char *buffer, size_t count,
*/
wait_event_interruptible(random_write_wait,
!system_wq || kthread_should_stop() ||
- ENTROPY_BITS() <= random_write_wakeup_bits);
+ POOL_ENTROPY_BITS() <= random_write_wakeup_bits);
mix_pool_bytes(buffer, count);
credit_entropy_bits(entropy);
}
--
2.34.1
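
A standalone sketch of the crediting arithmetic these constants feed, using the values from the enum in the patch above (POOL_BITS = 4096, POOL_ENTROPY_SHIFT = 3, hence POOL_BITSHIFT = 12 and POOL_FRACBITS = 32768). This is illustrative userspace code, not the driver itself; the loop deliberately mirrors the credit_entropy_bits() hunk so the asymptotic 3/4 crediting and the overflow bound behind the BUILD_BUG_ON can be checked by eye:

#include <stdio.h>

/* Pool constants as in the enum above (illustrative copy). */
enum {
	POOL_BITS = 4096,
	POOL_BITSHIFT = 12,			/* ilog2(POOL_BITS) */
	POOL_ENTROPY_SHIFT = 3,			/* entropy_count is in 1/8th bits */
	POOL_FRACBITS = POOL_BITS << POOL_ENTROPY_SHIFT,
};

/* Model of the crediting loop: add roughly 3/4 of the new entropy,
 * scaled by how empty the pool still is, so the count only
 * asymptotically approaches POOL_FRACBITS. */
static int credit(int entropy_count, int nbits)
{
	int pnfrac = nbits << POOL_ENTROPY_SHIFT;
	const int s = POOL_BITSHIFT + POOL_ENTROPY_SHIFT + 2;

	do {
		unsigned int anfrac = pnfrac < POOL_FRACBITS / 2 ?
				      pnfrac : POOL_FRACBITS / 2;
		unsigned int add =
			((POOL_FRACBITS - entropy_count) * anfrac * 3) >> s;

		entropy_count += add;
		pnfrac -= anfrac;
	} while (entropy_count < POOL_FRACBITS - 2 && pnfrac);

	return entropy_count;
}

int main(void)
{
	int ec = 0, i;

	for (i = 1; i <= 3; i++) {
		ec = credit(ec, POOL_BITS);
		printf("after crediting %d x %d bits: %d bits\n",
		       i, POOL_BITS, ec >> POOL_ENTROPY_SHIFT);
	}
	return 0;
}

Each call credits roughly three quarters of the remaining headroom, so the printed count approaches but never reaches 4096 bits. And since 2 * (POOL_ENTROPY_SHIFT + POOL_BITSHIFT) = 30 <= 31, the 32-bit product inside the shift expression cannot overflow, which is exactly what the BUILD_BUG_ON asserts.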

2022-01-14 22:42:09

by David Laight

[permalink] [raw]
Subject: RE: [PATCH] random: cleanup fractional entropy shift constants

From: Jason A. Donenfeld
> Sent: 14 January 2022 15:33
>
> The entropy estimator is calculated in terms of 1/8 bits, which means
> there are various constants where things are shifted by 3. Move these
> into our pool info enum with the other relevant constants, and normalize
> the name a bit, prepending a POOL_ like the rest. While we're at it,
> move an English assertion about sizes into a proper BUILD_BUG_ON so
> that the compiler can ensure this invariant.
>
...
> -#define ENTROPY_SHIFT 3
> -#define ENTROPY_BITS() (input_pool.entropy_count >> ENTROPY_SHIFT)
..
> + POOL_ENTROPY_SHIFT = 3,
> +#define POOL_ENTROPY_BITS() (input_pool.entropy_count >> POOL_ENTROPY_SHIFT)

The rename ought to be a different patch.

David

-
Registered Address Lakeside, Bramley Road, Mount Farm, Milton Keynes, MK1 1PT, UK
Registration No: 1397386 (Wales)

2022-01-14 22:42:30

by Jason A. Donenfeld

[permalink] [raw]
Subject: Re: [PATCH] random: cleanup fractional entropy shift constants

On Fri, Jan 14, 2022 at 4:39 PM David Laight <[email protected]> wrote:
>
> From: Jason A. Donenfeld
> > Sent: 14 January 2022 15:33
> >
> > The entropy estimator is calculated in terms of 1/8 bits, which means
> > there are various constants where things are shifted by 3. Move these
> > into our pool info enum with the other relevant constants, and normalize
> > the name a bit, prepending a POOL_ like the rest. While we're at it,
> > move an English assertion about sizes into a proper BUILD_BUG_ON so
> > that the compiler can ensure this invariant.
> >
> ...
> > -#define ENTROPY_SHIFT 3
> > -#define ENTROPY_BITS() (input_pool.entropy_count >> ENTROPY_SHIFT)
> ..
> > + POOL_ENTROPY_SHIFT = 3,
> > +#define POOL_ENTROPY_BITS() (input_pool.entropy_count >> POOL_ENTROPY_SHIFT)
>
> The rename ought to be a different patch.

I can certainly do that.

2022-01-17 07:58:50

by Dominik Brodowski

[permalink] [raw]
Subject: Re: [PATCH 1/7] random: cleanup poolinfo abstraction

On Thu, Jan 13, 2022 at 04:44:07PM +0100, Jason A. Donenfeld wrote:
> Now that we're only using one polynomial, we can cleanup its
> representation into constants, instead of passing around pointers
> dynamically to select different polynomials. This improves the codegen
> and makes the code a bit more straightforward.
>
> Signed-off-by: Jason A. Donenfeld <[email protected]>

> -} poolinfo_table[] = {
> - /* was: x^128 + x^103 + x^76 + x^51 +x^25 + x + 1 */
> +enum poolinfo {
> + POOL_WORDS = 128,
> + POOL_WORDMASK = POOL_WORDS - 1,
> + POOL_BYTES = POOL_WORDS * sizeof(u32),
> + POOL_BITS = POOL_BYTES * 8,
> + POOL_BITSHIFT = ilog2(POOL_WORDS) + 5,
> + POOL_FRACBITS = POOL_WORDS << (ENTROPY_SHIFT + 5),
> +
> /* x^128 + x^104 + x^76 + x^51 +x^25 + x + 1 */
> - { S(128), 104, 76, 51, 25, 1 },
> + POOL_TAP1 = 104,
> + POOL_TAP2 = 76,

The only information lost seems to be that POOL_TAP1 used to be 103. But
that comment is still available in git history, so feel free to add:

Reviewed-by: Dominik Brodowski <[email protected]>

Thanks,
Dominik

2022-01-17 07:59:41

by Dominik Brodowski

[permalink] [raw]
Subject: Re: [PATCH 3/7] random: remove incomplete last_data logic

On Thu, Jan 13, 2022 at 04:44:09PM +0100, Jason A. Donenfeld wrote:
> There were a few things added under the "if (fips_enabled)" banner,
> which never really got completed, and the FIPS people anyway are
> choosing a different direction. Rather than keep around this halfbaked
> code, get rid of it so that we can focus on a single design of the RNG
> rather than two designs.
>
> Signed-off-by: Jason A. Donenfeld <[email protected]>

Reviewed-by: Dominik Brodowski <[email protected]>

Thanks,
Dominik

2022-01-17 07:59:42

by Dominik Brodowski

[permalink] [raw]
Subject: Re: [PATCH 7/7] random: de-duplicate INPUT_POOL constants

On Thu, Jan 13, 2022 at 04:44:13PM +0100, Jason A. Donenfeld wrote:
> We already had the POOL_* constants, so deduplicate the older INPUT_POOL
> ones. As well, fold EXTRACT_SIZE into the poolinfo enum, since it's
> related.
>
> Signed-off-by: Jason A. Donenfeld <[email protected]>

Reviewed-by: Dominik Brodowski <[email protected]>

Thanks,
Dominik

2022-01-17 07:59:44

by Dominik Brodowski

[permalink] [raw]
Subject: Re: [PATCH 4/7] random: remove unused reserved argument

On Thu, Jan 13, 2022 at 04:44:10PM +0100, Jason A. Donenfeld wrote:
> This argument is always set to zero, as a result of us not caring about
> keeping a certain amount reserved in the pool these days. So just remove
> it and cleanup the function signatures.
>
> Signed-off-by: Jason A. Donenfeld <[email protected]>

I'd suggest noting in the patch title and commit message that this relates
to the extract_entropy() function.


> @@ -1342,7 +1341,7 @@ static size_t account(struct entropy_store *r, size_t nbytes, int min,
> /* never pull more than available */
> have_bytes = entropy_count >> (ENTROPY_SHIFT + 3);
>
> - if ((have_bytes -= reserved) < 0)
> + if (have_bytes < 0)
> have_bytes = 0;
> ibytes = min_t(size_t, ibytes, have_bytes);

Hmm. We already WARN_ON(entropy_count < 0) a few lines below. Maybe move
that assertion before the assignment of have_bytes? Then, have_bytes can
never be lower than zero, and the code becomes even simpler. What do you
think?


Reviewed-by: Dominik Brodowski <[email protected]>

Thanks,
Dominik

2022-01-17 07:59:53

by Dominik Brodowski

[permalink] [raw]
Subject: Re: [PATCH 2/7] random: cleanup integer types

On Thu, Jan 13, 2022 at 04:44:08PM +0100, Jason A. Donenfeld wrote:
> Rather than using the userspace type, __uXX, switch to using uXX. And
> rather than using variously chosen `char *` or `unsigned char *`, use
> `u8 *` uniformly for things that aren't strings, in the case where we
> are doing byte-by-byte traversal.
>
> Signed-off-by: Jason A. Donenfeld <[email protected]>

> - unsigned short reg_idx;
> - unsigned char count;
> + u16 reg_idx;
> + u8 count;

As you do not change other unsigned shorts to u16, and that change is
not explained in the changelog, please defer that to a separate patch.
Otherwise, feel free to add:

Reviewed-by: Dominik Brodowski <[email protected]>

Thanks,
Dominik

2022-01-17 08:00:18

by Dominik Brodowski

[permalink] [raw]
Subject: Re: [PATCH 6/7] random: remove unused OUTPUT_POOL constants

On Thu, Jan 13, 2022 at 04:44:12PM +0100, Jason A. Donenfeld wrote:
> We no longer have an output pool. Rather, we have just a wakeup bits
> threshold for /dev/random reads, presumably so that processes don't
> hang. This value, random_write_wakeup_bits, is configurable anyway. So
> all the no longer usefully named OUTPUT_POOL constants were doing was
> setting a reasonable default for random_write_wakeup_bits. This commit
> gets rid of the constants and just puts it all in the default value of
> random_write_wakeup_bits.
>
> Signed-off-by: Jason A. Donenfeld <[email protected]>

Reviewed-by: Dominik Brodowski <[email protected]>

Thanks,
Dominik

2022-01-17 08:00:39

by Dominik Brodowski

[permalink] [raw]
Subject: Re: [PATCH 5/7] random: rather than entropy_store abstraction, use global

> -static void __mix_pool_bytes(struct entropy_store *r, const void *in,
> - int nbytes)
> +static void __mix_pool_bytes(const void *in, int nbytes)
> {
> - trace_mix_pool_bytes_nolock(r->name, nbytes, _RET_IP_);
> - _mix_pool_bytes(r, in, nbytes);
> + trace_mix_pool_bytes_nolock(nbytes, _RET_IP_);
> + _mix_pool_bytes(in, nbytes);

Can the parameters of these tracepoints be modified, or does this break
any part of our API?

I haven't looked at the tracepoint bits in detail; otherwise, the changes
look good:

Reviewed-by: Dominik Brodowski <[email protected]> # random.c only

Thanks,
Dominik

2022-01-17 08:45:25

by Jason A. Donenfeld

[permalink] [raw]
Subject: Re: [PATCH 4/7] random: remove unused reserved argument

On Sun, Jan 16, 2022 at 2:45 PM Dominik Brodowski <[email protected]> wrote:
> > @@ -1342,7 +1341,7 @@ static size_t account(struct entropy_store *r, size_t nbytes, int min,
> >       /* never pull more than available */
> >       have_bytes = entropy_count >> (ENTROPY_SHIFT + 3);
> >
> > -     if ((have_bytes -= reserved) < 0)
> > +     if (have_bytes < 0)
> >               have_bytes = 0;
> >       ibytes = min_t(size_t, ibytes, have_bytes);
>
> Hmm. We already WARN_ON(entropy_count < 0) a few lines below. Maybe move
> > that assertion before the assignment of have_bytes? Then, have_bytes can
> never be lower than zero, and the code becomes even simpler. What do you
> think?

Can you send a separate patch for this that we can apply on top? It
seems reasonable anyhow. Something like:

diff --git a/drivers/char/random.c b/drivers/char/random.c
index 327086b35797..419156d2146d 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -1329,7 +1329,7 @@ EXPORT_SYMBOL_GPL(add_disk_randomness);
*/
static size_t account(struct entropy_store *r, size_t nbytes, int min)
{
- int entropy_count, orig, have_bytes;
+ int entropy_count, orig;
size_t ibytes, nfrac;

BUG_ON(r->entropy_count > POOL_FRACBITS);
@@ -1337,21 +1337,17 @@ static size_t account(struct entropy_store *r, size_t nbytes, int min)
/* Can we pull enough? */
retry:
entropy_count = orig = READ_ONCE(r->entropy_count);
- ibytes = nbytes;
- /* never pull more than available */
- have_bytes = entropy_count >> (ENTROPY_SHIFT + 3);
-
- if (have_bytes < 0)
- have_bytes = 0;
- ibytes = min_t(size_t, ibytes, have_bytes);
- if (ibytes < min)
- ibytes = 0;
-
if (WARN_ON(entropy_count < 0)) {
pr_warn("negative entropy count: pool %s count %d\n",
r->name, entropy_count);
entropy_count = 0;
}
+
+ /* never pull more than available */
+ ibytes = min_t(size_t, nbytes, entropy_count >> (ENTROPY_SHIFT + 3));
+ if (ibytes < min)
+ ibytes = 0;
+
nfrac = ibytes << (ENTROPY_SHIFT + 3);
if ((size_t) entropy_count > nfrac)
entropy_count -= nfrac;


2022-01-17 08:49:21

by Jason A. Donenfeld

[permalink] [raw]
Subject: [PATCH 1/4] random: prepend remaining pool constants with POOL_

The other pool constants are prepended with POOL_, but not these last
ones. Rename them. This will then let us move them into the enum in the
following commit.

Signed-off-by: Jason A. Donenfeld <[email protected]>
---
drivers/char/random.c | 40 ++++++++++++++++++++--------------------
1 file changed, 20 insertions(+), 20 deletions(-)

diff --git a/drivers/char/random.c b/drivers/char/random.c
index 7e9caf092cd8..de1c14787ae8 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -362,11 +362,11 @@
* To allow fractional bits to be tracked, the entropy_count field is
* denominated in units of 1/8th bits.
*
- * 2*(ENTROPY_SHIFT + poolbitshift) must <= 31, or the multiply in
+ * 2*(POOL_ENTROPY_SHIFT + poolbitshift) must <= 31, or the multiply in
* credit_entropy_bits() needs to be 64 bits wide.
*/
-#define ENTROPY_SHIFT 3
-#define ENTROPY_BITS() (input_pool.entropy_count >> ENTROPY_SHIFT)
+#define POOL_ENTROPY_SHIFT 3
+#define POOL_ENTROPY_BITS() (input_pool.entropy_count >> POOL_ENTROPY_SHIFT)

/*
* If the entropy count falls under this number of bits, then we
@@ -426,7 +426,7 @@ enum poolinfo {
POOL_BYTES = POOL_WORDS * sizeof(u32),
POOL_BITS = POOL_BYTES * 8,
POOL_BITSHIFT = ilog2(POOL_WORDS) + 5,
- POOL_FRACBITS = POOL_WORDS << (ENTROPY_SHIFT + 5),
+ POOL_FRACBITS = POOL_WORDS << (POOL_ENTROPY_SHIFT + 5),

/* x^128 + x^104 + x^76 + x^51 +x^25 + x + 1 */
POOL_TAP1 = 104,
@@ -650,7 +650,7 @@ static void process_random_ready_list(void)
static void credit_entropy_bits(int nbits)
{
int entropy_count, entropy_bits, orig;
- int nfrac = nbits << ENTROPY_SHIFT;
+ int nfrac = nbits << POOL_ENTROPY_SHIFT;

if (!nbits)
return;
@@ -683,7 +683,7 @@ static void credit_entropy_bits(int nbits)
* turns no matter how large nbits is.
*/
int pnfrac = nfrac;
- const int s = POOL_BITSHIFT + ENTROPY_SHIFT + 2;
+ const int s = POOL_BITSHIFT + POOL_ENTROPY_SHIFT + 2;
/* The +2 corresponds to the /4 in the denominator */

do {
@@ -704,9 +704,9 @@ static void credit_entropy_bits(int nbits)
if (cmpxchg(&input_pool.entropy_count, orig, entropy_count) != orig)
goto retry;

- trace_credit_entropy_bits(nbits, entropy_count >> ENTROPY_SHIFT, _RET_IP_);
+ trace_credit_entropy_bits(nbits, entropy_count >> POOL_ENTROPY_SHIFT, _RET_IP_);

- entropy_bits = entropy_count >> ENTROPY_SHIFT;
+ entropy_bits = entropy_count >> POOL_ENTROPY_SHIFT;
if (crng_init < 2 && entropy_bits >= 128)
crng_reseed(&primary_crng, true);
}
@@ -1187,7 +1187,7 @@ void add_input_randomness(unsigned int type, unsigned int code,
last_value = value;
add_timer_randomness(&input_timer_state,
(type << 4) ^ code ^ (code >> 4) ^ value);
- trace_add_input_randomness(ENTROPY_BITS());
+ trace_add_input_randomness(POOL_ENTROPY_BITS());
}
EXPORT_SYMBOL_GPL(add_input_randomness);

@@ -1286,7 +1286,7 @@ void add_disk_randomness(struct gendisk *disk)
return;
/* first major is 1, so we get >= 0x200 here */
add_timer_randomness(disk->random, 0x100 + disk_devt(disk));
- trace_add_disk_randomness(disk_devt(disk), ENTROPY_BITS());
+ trace_add_disk_randomness(disk_devt(disk), POOL_ENTROPY_BITS());
}
EXPORT_SYMBOL_GPL(add_disk_randomness);
#endif
@@ -1313,7 +1313,7 @@ static size_t account(size_t nbytes, int min)
entropy_count = orig = READ_ONCE(input_pool.entropy_count);
ibytes = nbytes;
/* never pull more than available */
- have_bytes = entropy_count >> (ENTROPY_SHIFT + 3);
+ have_bytes = entropy_count >> (POOL_ENTROPY_SHIFT + 3);

if (have_bytes < 0)
have_bytes = 0;
@@ -1325,7 +1325,7 @@ static size_t account(size_t nbytes, int min)
pr_warn("negative entropy count: count %d\n", entropy_count);
entropy_count = 0;
}
- nfrac = ibytes << (ENTROPY_SHIFT + 3);
+ nfrac = ibytes << (POOL_ENTROPY_SHIFT + 3);
if ((size_t) entropy_count > nfrac)
entropy_count -= nfrac;
else
@@ -1335,7 +1335,7 @@ static size_t account(size_t nbytes, int min)
goto retry;

trace_debit_entropy(8 * ibytes);
- if (ibytes && ENTROPY_BITS() < random_write_wakeup_bits) {
+ if (ibytes && POOL_ENTROPY_BITS() < random_write_wakeup_bits) {
wake_up_interruptible(&random_write_wait);
kill_fasync(&fasync, SIGIO, POLL_OUT);
}
@@ -1423,7 +1423,7 @@ static ssize_t _extract_entropy(void *buf, size_t nbytes)
*/
static ssize_t extract_entropy(void *buf, size_t nbytes, int min)
{
- trace_extract_entropy(nbytes, ENTROPY_BITS(), _RET_IP_);
+ trace_extract_entropy(nbytes, POOL_ENTROPY_BITS(), _RET_IP_);
nbytes = account(nbytes, min);
return _extract_entropy(buf, nbytes);
}
@@ -1749,9 +1749,9 @@ urandom_read_nowarn(struct file *file, char __user *buf, size_t nbytes,
{
int ret;

- nbytes = min_t(size_t, nbytes, INT_MAX >> (ENTROPY_SHIFT + 3));
+ nbytes = min_t(size_t, nbytes, INT_MAX >> (POOL_ENTROPY_SHIFT + 3));
ret = extract_crng_user(buf, nbytes);
- trace_urandom_read(8 * nbytes, 0, ENTROPY_BITS());
+ trace_urandom_read(8 * nbytes, 0, POOL_ENTROPY_BITS());
return ret;
}

@@ -1791,7 +1791,7 @@ random_poll(struct file *file, poll_table * wait)
mask = 0;
if (crng_ready())
mask |= EPOLLIN | EPOLLRDNORM;
- if (ENTROPY_BITS() < random_write_wakeup_bits)
+ if (POOL_ENTROPY_BITS() < random_write_wakeup_bits)
mask |= EPOLLOUT | EPOLLWRNORM;
return mask;
}
@@ -1847,7 +1847,7 @@ static long random_ioctl(struct file *f, unsigned int cmd, unsigned long arg)
switch (cmd) {
case RNDGETENTCNT:
/* inherently racy, no point locking */
- ent_count = ENTROPY_BITS();
+ ent_count = POOL_ENTROPY_BITS();
if (put_user(ent_count, p))
return -EFAULT;
return 0;
@@ -2005,7 +2005,7 @@ static int proc_do_entropy(struct ctl_table *table, int write,
struct ctl_table fake_table;
int entropy_count;

- entropy_count = *(int *)table->data >> ENTROPY_SHIFT;
+ entropy_count = *(int *)table->data >> POOL_ENTROPY_SHIFT;

fake_table.data = &entropy_count;
fake_table.maxlen = sizeof(entropy_count);
@@ -2224,7 +2224,7 @@ void add_hwgenerator_randomness(const char *buffer, size_t count,
*/
wait_event_interruptible(random_write_wait,
!system_wq || kthread_should_stop() ||
- ENTROPY_BITS() <= random_write_wakeup_bits);
+ POOL_ENTROPY_BITS() <= random_write_wakeup_bits);
mix_pool_bytes(buffer, count);
credit_entropy_bits(entropy);
}
--
2.34.1

2022-01-17 08:49:23

by Jason A. Donenfeld

[permalink] [raw]
Subject: [PATCH 2/4] random: cleanup fractional entropy shift constants

The entropy estimator is calculated in terms of 1/8 bits, which means
there are various constants where things are shifted by 3. Move these
into our pool info enum with the other relevant constants. While we're
at it, move an English assertion about sizes into a proper BUILD_BUG_ON
so that the compiler can ensure this invariant.

Signed-off-by: Jason A. Donenfeld <[email protected]>
---
drivers/char/random.c | 28 +++++++++++++---------------
1 file changed, 13 insertions(+), 15 deletions(-)

diff --git a/drivers/char/random.c b/drivers/char/random.c
index de1c14787ae8..7343bff086c5 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -358,16 +358,6 @@

/* #define ADD_INTERRUPT_BENCH */

-/*
- * To allow fractional bits to be tracked, the entropy_count field is
- * denominated in units of 1/8th bits.
- *
- * 2*(POOL_ENTROPY_SHIFT + poolbitshift) must <= 31, or the multiply in
- * credit_entropy_bits() needs to be 64 bits wide.
- */
-#define POOL_ENTROPY_SHIFT 3
-#define POOL_ENTROPY_BITS() (input_pool.entropy_count >> POOL_ENTROPY_SHIFT)
-
/*
* If the entropy count falls under this number of bits, then we
* should wake up processes which are selecting or polling on write
@@ -425,8 +415,13 @@ enum poolinfo {
POOL_WORDMASK = POOL_WORDS - 1,
POOL_BYTES = POOL_WORDS * sizeof(u32),
POOL_BITS = POOL_BYTES * 8,
- POOL_BITSHIFT = ilog2(POOL_WORDS) + 5,
- POOL_FRACBITS = POOL_WORDS << (POOL_ENTROPY_SHIFT + 5),
+ POOL_BITSHIFT = ilog2(POOL_BITS),
+
+ /* To allow fractional bits to be tracked, the entropy_count field is
+ * denominated in units of 1/8th bits. */
+ POOL_ENTROPY_SHIFT = 3,
+#define POOL_ENTROPY_BITS() (input_pool.entropy_count >> POOL_ENTROPY_SHIFT)
+ POOL_FRACBITS = POOL_BITS << POOL_ENTROPY_SHIFT,

/* x^128 + x^104 + x^76 + x^51 +x^25 + x + 1 */
POOL_TAP1 = 104,
@@ -652,6 +647,9 @@ static void credit_entropy_bits(int nbits)
int entropy_count, entropy_bits, orig;
int nfrac = nbits << POOL_ENTROPY_SHIFT;

+ /* Ensure that the multiplication can avoid being 64 bits wide. */
+ BUILD_BUG_ON(2 * (POOL_ENTROPY_SHIFT + POOL_BITSHIFT) > 31);
+
if (!nbits)
return;

@@ -687,13 +685,13 @@ static void credit_entropy_bits(int nbits)
/* The +2 corresponds to the /4 in the denominator */

do {
- unsigned int anfrac = min(pnfrac, POOL_FRACBITS/2);
+ unsigned int anfrac = min(pnfrac, POOL_FRACBITS / 2);
unsigned int add =
- ((POOL_FRACBITS - entropy_count)*anfrac*3) >> s;
+ ((POOL_FRACBITS - entropy_count) * anfrac * 3) >> s;

entropy_count += add;
pnfrac -= anfrac;
- } while (unlikely(entropy_count < POOL_FRACBITS-2 && pnfrac));
+ } while (unlikely(entropy_count < POOL_FRACBITS - 2 && pnfrac));
}

if (WARN_ON(entropy_count < 0)) {
--
2.34.1

2022-01-17 08:49:30

by Jason A. Donenfeld

[permalink] [raw]
Subject: [PATCH 3/4] random: access input_pool_data directly rather than through pointer

This gets rid of another abstraction we no longer need. It would be nice
if we could instead make pool an array rather than a pointer, but the
latent entropy plugin won't be able to do its magic in that case. So
instead we put all accesses to the input pool's actual data through the
input_pool_data array directly.

Signed-off-by: Jason A. Donenfeld <[email protected]>
---
drivers/char/random.c | 21 ++++++++-------------
1 file changed, 8 insertions(+), 13 deletions(-)

diff --git a/drivers/char/random.c b/drivers/char/random.c
index 7343bff086c5..274056155e50 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -496,17 +496,12 @@ MODULE_PARM_DESC(ratelimit_disable, "Disable random ratelimit suppression");
static u32 input_pool_data[POOL_WORDS] __latent_entropy;

static struct {
- /* read-only data: */
- u32 *pool;
-
- /* read-write data: */
spinlock_t lock;
u16 add_ptr;
u16 input_rotate;
int entropy_count;
} input_pool = {
.lock = __SPIN_LOCK_UNLOCKED(input_pool.lock),
- .pool = input_pool_data
};

static ssize_t extract_entropy(void *buf, size_t nbytes, int min);
@@ -544,15 +539,15 @@ static void _mix_pool_bytes(const void *in, int nbytes)
i = (i - 1) & POOL_WORDMASK;

/* XOR in the various taps */
- w ^= input_pool.pool[i];
- w ^= input_pool.pool[(i + POOL_TAP1) & POOL_WORDMASK];
- w ^= input_pool.pool[(i + POOL_TAP2) & POOL_WORDMASK];
- w ^= input_pool.pool[(i + POOL_TAP3) & POOL_WORDMASK];
- w ^= input_pool.pool[(i + POOL_TAP4) & POOL_WORDMASK];
- w ^= input_pool.pool[(i + POOL_TAP5) & POOL_WORDMASK];
+ w ^= input_pool_data[i];
+ w ^= input_pool_data[(i + POOL_TAP1) & POOL_WORDMASK];
+ w ^= input_pool_data[(i + POOL_TAP2) & POOL_WORDMASK];
+ w ^= input_pool_data[(i + POOL_TAP3) & POOL_WORDMASK];
+ w ^= input_pool_data[(i + POOL_TAP4) & POOL_WORDMASK];
+ w ^= input_pool_data[(i + POOL_TAP5) & POOL_WORDMASK];

/* Mix the result back in with a twist */
- input_pool.pool[i] = (w >> 3) ^ twist_table[w & 7];
+ input_pool_data[i] = (w >> 3) ^ twist_table[w & 7];

/*
* Normally, we add 7 bits of rotation to the pool.
@@ -1369,7 +1364,7 @@ static void extract_buf(u8 *out)

/* Generate a hash across the pool */
spin_lock_irqsave(&input_pool.lock, flags);
- blake2s_update(&state, (const u8 *)input_pool.pool, POOL_BYTES);
+ blake2s_update(&state, (const u8 *)input_pool_data, POOL_BYTES);
blake2s_final(&state, hash); /* final zeros out state */

/*
--
2.34.1

2022-01-17 08:49:31

by Jason A. Donenfeld

[permalink] [raw]
Subject: [PATCH 4/4] random: selectively clang-format where it makes sense

This is an old driver that has seen a lot of different eras of kernel
coding style. In an effort to make it easier to code for, unify the
coding style around the current norm, by accepting some of -- but
certainly not all of -- the suggestions from clang-format. This should
remove ambiguity in coding style, especially with regards to spacing,
when code is being changed or amended. Consequently it also makes code
review easier on the eyes, following one uniform style rather than
several.

Signed-off-by: Jason A. Donenfeld <[email protected]>
---
drivers/char/random.c | 191 ++++++++++++++++++++----------------------
1 file changed, 90 insertions(+), 101 deletions(-)

diff --git a/drivers/char/random.c b/drivers/char/random.c
index 274056155e50..d0c14eb05c66 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -443,9 +443,9 @@ static DEFINE_SPINLOCK(random_ready_list_lock);
static LIST_HEAD(random_ready_list);

struct crng_state {
- u32 state[16];
- unsigned long init_time;
- spinlock_t lock;
+ u32 state[16];
+ unsigned long init_time;
+ spinlock_t lock;
};

static struct crng_state primary_crng = {
@@ -469,7 +469,7 @@ static bool crng_need_final_init = false;
#define crng_ready() (likely(crng_init > 1))
static int crng_init_cnt = 0;
static unsigned long crng_global_init_time = 0;
-#define CRNG_INIT_CNT_THRESH (2*CHACHA_KEY_SIZE)
+#define CRNG_INIT_CNT_THRESH (2 * CHACHA_KEY_SIZE)
static void _extract_crng(struct crng_state *crng, u8 out[CHACHA_BLOCK_SIZE]);
static void _crng_backtrack_protect(struct crng_state *crng,
u8 tmp[CHACHA_BLOCK_SIZE], int used);
@@ -579,10 +579,10 @@ static void mix_pool_bytes(const void *in, int nbytes)
}

struct fast_pool {
- u32 pool[4];
- unsigned long last;
- u16 reg_idx;
- u8 count;
+ u32 pool[4];
+ unsigned long last;
+ u16 reg_idx;
+ u8 count;
};

/*
@@ -622,7 +622,7 @@ static void process_random_ready_list(void)
struct random_ready_callback *rdy, *tmp;

spin_lock_irqsave(&random_ready_list_lock, flags);
- list_for_each_entry_safe(rdy, tmp, &random_ready_list, list) {
+ list_for_each_entry_safe (rdy, tmp, &random_ready_list, list) {
struct module *owner = rdy->owner;

list_del_init(&rdy->list);
@@ -710,7 +710,7 @@ static int credit_entropy_bits_safe(int nbits)
return -EINVAL;

/* Cap the value to avoid overflows */
- nbits = min(nbits, POOL_BITS);
+ nbits = min(nbits, POOL_BITS);

credit_entropy_bits(nbits);
return 0;
@@ -722,7 +722,7 @@ static int credit_entropy_bits_safe(int nbits)
*
*********************************************************************/

-#define CRNG_RESEED_INTERVAL (300*HZ)
+#define CRNG_RESEED_INTERVAL (300 * HZ)

static DECLARE_WAIT_QUEUE_HEAD(crng_init_wait);

@@ -746,9 +746,9 @@ early_param("random.trust_cpu", parse_trust_cpu);

static bool crng_init_try_arch(struct crng_state *crng)
{
- int i;
- bool arch_init = true;
- unsigned long rv;
+ int i;
+ bool arch_init = true;
+ unsigned long rv;

for (i = 4; i < 16; i++) {
if (!arch_get_random_seed_long(&rv) &&
@@ -764,9 +764,9 @@ static bool crng_init_try_arch(struct crng_state *crng)

static bool __init crng_init_try_arch_early(struct crng_state *crng)
{
- int i;
- bool arch_init = true;
- unsigned long rv;
+ int i;
+ bool arch_init = true;
+ unsigned long rv;

for (i = 4; i < 16; i++) {
if (!arch_get_random_seed_long_early(&rv) &&
@@ -836,8 +836,8 @@ static void do_numa_crng_init(struct work_struct *work)
struct crng_state *crng;
struct crng_state **pool;

- pool = kcalloc(nr_node_ids, sizeof(*pool), GFP_KERNEL|__GFP_NOFAIL);
- for_each_online_node(i) {
+ pool = kcalloc(nr_node_ids, sizeof(*pool), GFP_KERNEL | __GFP_NOFAIL);
+ for_each_online_node (i) {
crng = kmalloc_node(sizeof(struct crng_state),
GFP_KERNEL | __GFP_NOFAIL, i);
spin_lock_init(&crng->lock);
@@ -846,7 +846,7 @@ static void do_numa_crng_init(struct work_struct *work)
}
/* pairs with READ_ONCE() in select_crng() */
if (cmpxchg_release(&crng_node_pool, NULL, pool) != NULL) {
- for_each_node(i)
+ for_each_node (i)
kfree(pool[i]);
kfree(pool);
}
@@ -892,7 +892,7 @@ static size_t crng_fast_load(const u8 *cp, size_t len)
spin_unlock_irqrestore(&primary_crng.lock, flags);
return 0;
}
- p = (u8 *) &primary_crng.state[4];
+ p = (u8 *)&primary_crng.state[4];
while (len > 0 && crng_init_cnt < CRNG_INIT_CNT_THRESH) {
p[crng_init_cnt % CHACHA_KEY_SIZE] ^= *cp;
cp++; crng_init_cnt++; len--; ret++;
@@ -922,12 +922,12 @@ static size_t crng_fast_load(const u8 *cp, size_t len)
*/
static int crng_slow_load(const u8 *cp, size_t len)
{
- unsigned long flags;
- static u8 lfsr = 1;
- u8 tmp;
- unsigned i, max = CHACHA_KEY_SIZE;
- const u8 * src_buf = cp;
- u8 * dest_buf = (u8 *) &primary_crng.state[4];
+ unsigned long flags;
+ static u8 lfsr = 1;
+ u8 tmp;
+ unsigned i, max = CHACHA_KEY_SIZE;
+ const u8 *src_buf = cp;
+ u8 *dest_buf = (u8 *)&primary_crng.state[4];

if (!spin_trylock_irqsave(&primary_crng.lock, flags))
return 0;
@@ -938,7 +938,7 @@ static int crng_slow_load(const u8 *cp, size_t len)
if (len > max)
max = len;

- for (i = 0; i < max ; i++) {
+ for (i = 0; i < max; i++) {
tmp = lfsr;
lfsr >>= 1;
if (tmp & 1)
@@ -953,11 +953,11 @@ static int crng_slow_load(const u8 *cp, size_t len)

static void crng_reseed(struct crng_state *crng, bool use_input_pool)
{
- unsigned long flags;
- int i, num;
+ unsigned long flags;
+ int i, num;
union {
- u8 block[CHACHA_BLOCK_SIZE];
- u32 key[8];
+ u8 block[CHACHA_BLOCK_SIZE];
+ u32 key[8];
} buf;

if (use_input_pool) {
@@ -971,11 +971,11 @@ static void crng_reseed(struct crng_state *crng, bool use_input_pool)
}
spin_lock_irqsave(&crng->lock, flags);
for (i = 0; i < 8; i++) {
- unsigned long rv;
+ unsigned long rv;
if (!arch_get_random_seed_long(&rv) &&
!arch_get_random_long(&rv))
rv = random_get_entropy();
- crng->state[i+4] ^= buf.key[i] ^ rv;
+ crng->state[i + 4] ^= buf.key[i] ^ rv;
}
memzero_explicit(&buf, sizeof(buf));
WRITE_ONCE(crng->init_time, jiffies);
@@ -983,8 +983,7 @@ static void crng_reseed(struct crng_state *crng, bool use_input_pool)
crng_finalize_init(crng);
}

-static void _extract_crng(struct crng_state *crng,
- u8 out[CHACHA_BLOCK_SIZE])
+static void _extract_crng(struct crng_state *crng, u8 out[CHACHA_BLOCK_SIZE])
{
unsigned long flags, init_time;

@@ -1013,9 +1012,9 @@ static void extract_crng(u8 out[CHACHA_BLOCK_SIZE])
static void _crng_backtrack_protect(struct crng_state *crng,
u8 tmp[CHACHA_BLOCK_SIZE], int used)
{
- unsigned long flags;
- u32 *s, *d;
- int i;
+ unsigned long flags;
+ u32 *s, *d;
+ int i;

used = round_up(used, sizeof(u32));
if (used + CHACHA_KEY_SIZE > CHACHA_BLOCK_SIZE) {
@@ -1023,9 +1022,9 @@ static void _crng_backtrack_protect(struct crng_state *crng,
used = 0;
}
spin_lock_irqsave(&crng->lock, flags);
- s = (u32 *) &tmp[used];
+ s = (u32 *)&tmp[used];
d = &crng->state[4];
- for (i=0; i < 8; i++)
+ for (i = 0; i < 8; i++)
*d++ ^= *s++;
spin_unlock_irqrestore(&crng->lock, flags);
}
@@ -1070,7 +1069,6 @@ static ssize_t extract_crng_user(void __user *buf, size_t nbytes)
return ret;
}

-
/*********************************************************************
*
* Entropy input management
@@ -1165,11 +1163,11 @@ static void add_timer_randomness(struct timer_rand_state *state, unsigned num)
* Round down by 1 bit on general principles,
* and limit entropy estimate to 12 bits.
*/
- credit_entropy_bits(min_t(int, fls(delta>>1), 11));
+ credit_entropy_bits(min_t(int, fls(delta >> 1), 11));
}

void add_input_randomness(unsigned int type, unsigned int code,
- unsigned int value)
+ unsigned int value)
{
static unsigned char last_value;

@@ -1189,19 +1187,19 @@ static DEFINE_PER_CPU(struct fast_pool, irq_randomness);
#ifdef ADD_INTERRUPT_BENCH
static unsigned long avg_cycles, avg_deviation;

-#define AVG_SHIFT 8 /* Exponential average factor k=1/256 */
-#define FIXED_1_2 (1 << (AVG_SHIFT-1))
+#define AVG_SHIFT 8 /* Exponential average factor k=1/256 */
+#define FIXED_1_2 (1 << (AVG_SHIFT - 1))

static void add_interrupt_bench(cycles_t start)
{
- long delta = random_get_entropy() - start;
+ long delta = random_get_entropy() - start;

- /* Use a weighted moving average */
- delta = delta - ((avg_cycles + FIXED_1_2) >> AVG_SHIFT);
- avg_cycles += delta;
- /* And average deviation */
- delta = abs(delta) - ((avg_deviation + FIXED_1_2) >> AVG_SHIFT);
- avg_deviation += delta;
+ /* Use a weighted moving average */
+ delta = delta - ((avg_cycles + FIXED_1_2) >> AVG_SHIFT);
+ avg_cycles += delta;
+ /* And average deviation */
+ delta = abs(delta) - ((avg_deviation + FIXED_1_2) >> AVG_SHIFT);
+ avg_deviation += delta;
}
#else
#define add_interrupt_bench(x)
@@ -1209,7 +1207,7 @@ static void add_interrupt_bench(cycles_t start)

static u32 get_reg(struct fast_pool *f, struct pt_regs *regs)
{
- u32 *ptr = (u32 *) regs;
+ u32 *ptr = (u32 *)regs;
unsigned int idx;

if (regs == NULL)
@@ -1224,12 +1222,12 @@ static u32 get_reg(struct fast_pool *f, struct pt_regs *regs)

void add_interrupt_randomness(int irq)
{
- struct fast_pool *fast_pool = this_cpu_ptr(&irq_randomness);
- struct pt_regs *regs = get_irq_regs();
- unsigned long now = jiffies;
- cycles_t cycles = random_get_entropy();
- u32 c_high, j_high;
- u64 ip;
+ struct fast_pool *fast_pool = this_cpu_ptr(&irq_randomness);
+ struct pt_regs *regs = get_irq_regs();
+ unsigned long now = jiffies;
+ cycles_t cycles = random_get_entropy();
+ u32 c_high, j_high;
+ u64 ip;

if (cycles == 0)
cycles = get_reg(fast_pool, regs);
@@ -1239,8 +1237,8 @@ void add_interrupt_randomness(int irq)
fast_pool->pool[1] ^= now ^ c_high;
ip = regs ? instruction_pointer(regs) : _RET_IP_;
fast_pool->pool[2] ^= ip;
- fast_pool->pool[3] ^= (sizeof(ip) > 4) ? ip >> 32 :
- get_reg(fast_pool, regs);
+ fast_pool->pool[3] ^=
+ (sizeof(ip) > 4) ? ip >> 32 : get_reg(fast_pool, regs);

fast_mix(fast_pool);
add_interrupt_bench(cycles);
@@ -1254,8 +1252,7 @@ void add_interrupt_randomness(int irq)
return;
}

- if ((fast_pool->count < 64) &&
- !time_after(now, fast_pool->last + HZ))
+ if ((fast_pool->count < 64) && !time_after(now, fast_pool->last + HZ))
return;

if (!spin_trylock(&input_pool.lock))
@@ -1319,7 +1316,7 @@ static size_t account(size_t nbytes, int min)
entropy_count = 0;
}
nfrac = ibytes << (POOL_ENTROPY_SHIFT + 3);
- if ((size_t) entropy_count > nfrac)
+ if ((size_t)entropy_count > nfrac)
entropy_count -= nfrac;
else
entropy_count = 0;
@@ -1422,10 +1419,9 @@ static ssize_t extract_entropy(void *buf, size_t nbytes, int min)
}

#define warn_unseeded_randomness(previous) \
- _warn_unseeded_randomness(__func__, (void *) _RET_IP_, (previous))
+ _warn_unseeded_randomness(__func__, (void *)_RET_IP_, (previous))

-static void _warn_unseeded_randomness(const char *func_name, void *caller,
- void **previous)
+static void _warn_unseeded_randomness(const char *func_name, void *caller, void **previous)
{
#ifdef CONFIG_WARN_ALL_UNSEEDED_RANDOM
const bool print_once = false;
@@ -1433,8 +1429,7 @@ static void _warn_unseeded_randomness(const char *func_name, void *caller,
static bool print_once __read_mostly;
#endif

- if (print_once ||
- crng_ready() ||
+ if (print_once || crng_ready() ||
(previous && (caller == READ_ONCE(*previous))))
return;
WRITE_ONCE(*previous, caller);
@@ -1442,9 +1437,8 @@ static void _warn_unseeded_randomness(const char *func_name, void *caller,
print_once = true;
#endif
if (__ratelimit(&unseeded_warning))
- printk_deferred(KERN_NOTICE "random: %s called from %pS "
- "with crng_init=%d\n", func_name, caller,
- crng_init);
+ printk_deferred(KERN_NOTICE "random: %s called from %pS with crng_init=%d\n",
+ func_name, caller, crng_init);
}

/*
@@ -1487,7 +1481,6 @@ void get_random_bytes(void *buf, int nbytes)
}
EXPORT_SYMBOL(get_random_bytes);

-
/*
* Each time the timer fires, we expect that we got an unpredictable
* jump in the cycle counter. Even if the timer is running on another
@@ -1526,7 +1519,7 @@ static void try_to_generate_entropy(void)
timer_setup_on_stack(&stack.timer, entropy_timer, 0);
while (!crng_ready()) {
if (!timer_pending(&stack.timer))
- mod_timer(&stack.timer, jiffies+1);
+ mod_timer(&stack.timer, jiffies + 1);
mix_pool_bytes(&stack.now, sizeof(stack.now));
schedule();
stack.now = random_get_entropy();
@@ -1736,9 +1729,8 @@ void rand_initialize_disk(struct gendisk *disk)
}
#endif

-static ssize_t
-urandom_read_nowarn(struct file *file, char __user *buf, size_t nbytes,
- loff_t *ppos)
+static ssize_t urandom_read_nowarn(struct file *file, char __user *buf,
+ size_t nbytes, loff_t *ppos)
{
int ret;

@@ -1748,8 +1740,8 @@ urandom_read_nowarn(struct file *file, char __user *buf, size_t nbytes,
return ret;
}

-static ssize_t
-urandom_read(struct file *file, char __user *buf, size_t nbytes, loff_t *ppos)
+static ssize_t urandom_read(struct file *file, char __user *buf, size_t nbytes,
+ loff_t *ppos)
{
static int maxwarn = 10;

@@ -1763,8 +1755,8 @@ urandom_read(struct file *file, char __user *buf, size_t nbytes, loff_t *ppos)
return urandom_read_nowarn(file, buf, nbytes, ppos);
}

-static ssize_t
-random_read(struct file *file, char __user *buf, size_t nbytes, loff_t *ppos)
+static ssize_t random_read(struct file *file, char __user *buf, size_t nbytes,
+ loff_t *ppos)
{
int ret;

@@ -1774,8 +1766,7 @@ random_read(struct file *file, char __user *buf, size_t nbytes, loff_t *ppos)
return urandom_read_nowarn(file, buf, nbytes, ppos);
}

-static __poll_t
-random_poll(struct file *file, poll_table * wait)
+static __poll_t random_poll(struct file *file, poll_table *wait)
{
__poll_t mask;

@@ -1789,8 +1780,7 @@ random_poll(struct file *file, poll_table * wait)
return mask;
}

-static int
-write_pool(const char __user *buffer, size_t count)
+static int write_pool(const char __user *buffer, size_t count)
{
size_t bytes;
u32 t, buf[16];
@@ -1892,9 +1882,9 @@ static int random_fasync(int fd, struct file *filp, int on)
}

const struct file_operations random_fops = {
- .read = random_read,
+ .read = random_read,
.write = random_write,
- .poll = random_poll,
+ .poll = random_poll,
.unlocked_ioctl = random_ioctl,
.compat_ioctl = compat_ptr_ioctl,
.fasync = random_fasync,
@@ -1902,7 +1892,7 @@ const struct file_operations random_fops = {
};

const struct file_operations urandom_fops = {
- .read = urandom_read,
+ .read = urandom_read,
.write = random_write,
.unlocked_ioctl = random_ioctl,
.compat_ioctl = compat_ptr_ioctl,
@@ -1910,19 +1900,19 @@ const struct file_operations urandom_fops = {
.llseek = noop_llseek,
};

-SYSCALL_DEFINE3(getrandom, char __user *, buf, size_t, count,
- unsigned int, flags)
+SYSCALL_DEFINE3(getrandom, char __user *, buf, size_t, count, unsigned int,
+ flags)
{
int ret;

- if (flags & ~(GRND_NONBLOCK|GRND_RANDOM|GRND_INSECURE))
+ if (flags & ~(GRND_NONBLOCK | GRND_RANDOM | GRND_INSECURE))
return -EINVAL;

/*
* Requesting insecure and blocking randomness at the same time makes
* no sense.
*/
- if ((flags & (GRND_INSECURE|GRND_RANDOM)) == (GRND_INSECURE|GRND_RANDOM))
+ if ((flags & (GRND_INSECURE | GRND_RANDOM)) == (GRND_INSECURE | GRND_RANDOM))
return -EINVAL;

if (count > INT_MAX)
@@ -1962,8 +1952,8 @@ static char sysctl_bootid[16];
* returned as an ASCII string in the standard UUID format; if via the
* sysctl system call, as 16 bytes of binary data.
*/
-static int proc_do_uuid(struct ctl_table *table, int write,
- void *buffer, size_t *lenp, loff_t *ppos)
+static int proc_do_uuid(struct ctl_table *table, int write, void *buffer,
+ size_t *lenp, loff_t *ppos)
{
struct ctl_table fake_table;
unsigned char buf[64], tmp_uuid[16], *uuid;
@@ -1992,8 +1982,8 @@ static int proc_do_uuid(struct ctl_table *table, int write,
/*
* Return entropy available scaled to integral bits
*/
-static int proc_do_entropy(struct ctl_table *table, int write,
- void *buffer, size_t *lenp, loff_t *ppos)
+static int proc_do_entropy(struct ctl_table *table, int write, void *buffer,
+ size_t *lenp, loff_t *ppos)
{
struct ctl_table fake_table;
int entropy_count;
@@ -2090,7 +2080,7 @@ struct batched_entropy {
* point prior.
*/
static DEFINE_PER_CPU(struct batched_entropy, batched_entropy_u64) = {
- .batch_lock = __SPIN_LOCK_UNLOCKED(batched_entropy_u64.lock),
+ .batch_lock = __SPIN_LOCK_UNLOCKED(batched_entropy_u64.lock),
};

u64 get_random_u64(void)
@@ -2115,7 +2105,7 @@ u64 get_random_u64(void)
EXPORT_SYMBOL(get_random_u64);

static DEFINE_PER_CPU(struct batched_entropy, batched_entropy_u32) = {
- .batch_lock = __SPIN_LOCK_UNLOCKED(batched_entropy_u32.lock),
+ .batch_lock = __SPIN_LOCK_UNLOCKED(batched_entropy_u32.lock),
};
u32 get_random_u32(void)
{
@@ -2176,8 +2166,7 @@ static void invalidate_batched_entropy(void)
* Return: A page aligned address within [start, start + range). On error,
* @start is returned.
*/
-unsigned long
-randomize_page(unsigned long start, unsigned long range)
+unsigned long randomize_page(unsigned long start, unsigned long range)
{
if (!PAGE_ALIGNED(start)) {
range -= PAGE_ALIGN(start) - start;
--
2.34.1

2022-01-18 02:54:15

by Dominik Brodowski

[permalink] [raw]
Subject: Re: [PATCH 2/4] random: cleanup fractional entropy shift constants

On Sun, Jan 16, 2022 at 05:35:45PM +0100, Jason A. Donenfeld wrote:
> The entropy estimator is calculated in terms of 1/8 bits, which means
> there are various constants where things are shifted by 3. Move these
> into our pool info enum with the other relevant constants. While we're
> at it, move an English assertion about sizes into a proper BUILD_BUG_ON
> so that the compiler can ensure this invariant.
>
> Signed-off-by: Jason A. Donenfeld <[email protected]>
> ---
> drivers/char/random.c | 28 +++++++++++++---------------
> 1 file changed, 13 insertions(+), 15 deletions(-)
>
> diff --git a/drivers/char/random.c b/drivers/char/random.c
> index de1c14787ae8..7343bff086c5 100644
> --- a/drivers/char/random.c
> +++ b/drivers/char/random.c
> @@ -358,16 +358,6 @@
>
> /* #define ADD_INTERRUPT_BENCH */
>
> -/*
> - * To allow fractional bits to be tracked, the entropy_count field is
> - * denominated in units of 1/8th bits.
> - *
> - * 2*(POOL_ENTROPY_SHIFT + poolbitshift) must <= 31, or the multiply in
> - * credit_entropy_bits() needs to be 64 bits wide.
> - */
> -#define POOL_ENTROPY_SHIFT 3
> -#define POOL_ENTROPY_BITS() (input_pool.entropy_count >> POOL_ENTROPY_SHIFT)
> -
> /*
> * If the entropy count falls under this number of bits, then we
> * should wake up processes which are selecting or polling on write
> @@ -425,8 +415,13 @@ enum poolinfo {
> POOL_WORDMASK = POOL_WORDS - 1,
> POOL_BYTES = POOL_WORDS * sizeof(u32),
> POOL_BITS = POOL_BYTES * 8,
> - POOL_BITSHIFT = ilog2(POOL_WORDS) + 5,
> - POOL_FRACBITS = POOL_WORDS << (POOL_ENTROPY_SHIFT + 5),
> + POOL_BITSHIFT = ilog2(POOL_BITS),
> +
> + /* To allow fractional bits to be tracked, the entropy_count field is
> + * denominated in units of 1/8th bits. */
> + POOL_ENTROPY_SHIFT = 3,
> +#define POOL_ENTROPY_BITS() (input_pool.entropy_count >> POOL_ENTROPY_SHIFT)
> + POOL_FRACBITS = POOL_BITS << POOL_ENTROPY_SHIFT,


The #define here confuses me a bit, as it visually breaks up the POOL
enum. But that relates to coding style preferences only, so

Reviewed-by: Dominik Brodowski <[email protected]>

2022-01-18 02:54:18

by Dominik Brodowski

[permalink] [raw]
Subject: Re: [PATCH 4/4] random: selectively clang-format where it makes sense

On Sun, Jan 16, 2022 at 05:35:47PM +0100, Jason A. Donenfeld wrote:
> This is an old driver that has seen a lot of different eras of kernel
> coding style. In an effort to make it easier to code for, unify the
> coding style around the current norm, by accepting some of -- but
> certainly not all of -- the suggestions from clang-format. This should
> remove ambiguity in coding style, especially with regards to spacing,
> when code is being changed or amended. Consequently it also makes code
> review easier on the eyes, following one uniform style rather than
> several.
>
> Signed-off-by: Jason A. Donenfeld <[email protected]>

Reviewed-by: Dominik Brodowski <[email protected]>

Thanks,
Dominik

2022-01-18 02:54:19

by Dominik Brodowski

[permalink] [raw]
Subject: Re: [PATCH 3/4] random: access input_pool_data directly rather than through pointer

On Sun, Jan 16, 2022 at 05:35:46PM +0100, Jason A. Donenfeld wrote:
> This gets rid of another abstraction we no longer need. It would be nice
> if we could instead make pool an array rather than a pointer, but the
> latent entropy plugin won't be able to do its magic in that case. So
> instead we put all accesses to the input pool's actual data through the
> input_pool_data array directly.
>
> Signed-off-by: Jason A. Donenfeld <[email protected]>

Reviewed-by: Dominik Brodowski <[email protected]>

Thanks,
Dominik

2022-01-18 02:54:34

by Dominik Brodowski

[permalink] [raw]
Subject: Re: [PATCH 1/4] random: prepend remaining pool constants with POOL_

On Sun, Jan 16, 2022 at 05:35:44PM +0100, Jason A. Donenfeld wrote:
> The other pool constants are prepended with POOL_, but not these last
> ones. Rename them. This will then let us move them into the enum in the
> following commit.
>
> Signed-off-by: Jason A. Donenfeld <[email protected]>

Reviewed-by: Dominik Brodowski <[email protected]>

Thanks,
Dominik

2022-01-18 02:55:30

by Dominik Brodowski

[permalink] [raw]
Subject: Re: [PATCH 4/7] random: remove unused reserved argument


On Sun, Jan 16, 2022 at 05:22:32PM +0100, Jason A. Donenfeld wrote:
> On Sun, Jan 16, 2022 at 2:45 PM Dominik Brodowski <[email protected]> wrote:
> > > @@ -1342,7 +1341,7 @@ static size_t account(struct entropy_store *r, size_t nbytes, int min,
> > >       /* never pull more than available */
> > >       have_bytes = entropy_count >> (ENTROPY_SHIFT + 3);
> > >
> > > -     if ((have_bytes -= reserved) < 0)
> > > +     if (have_bytes < 0)
> > >               have_bytes = 0;
> > >       ibytes = min_t(size_t, ibytes, have_bytes);
> >
> > Hmm. We already WARN_ON(entropy_count < 0) a few lines below. Maybe move
> > that assertion before the assignment of have_bytes? Then, have_bytes can
> > never be lower than zero, and the code becomes even simpler. What do you
> > think?
>
> Can you send a separate patch for this that we can apply on top? It
> seems reasonable anyhow. Something like:

As you've written that patch yourself now, just take that, and feel free to
add my Reviewed-by tag.

Thanks,
Dominik