2022-05-28 19:02:30

by Greg Kroah-Hartman

Subject: [PATCH 5.18 00/47] 5.18.1-rc1 review

This is the start of the stable review cycle for the 5.18.1 release.
There are 47 patches in this series, all will be posted as a response
to this one. If anyone has any issues with these being applied, please
let me know.

Responses should be made by Sun, 29 May 2022 08:46:45 +0000.
Anything received after that time might be too late.

The whole patch series can be found in one patch at:
https://www.kernel.org/pub/linux/kernel/v5.x/stable-review/patch-5.18.1-rc1.gz
or in the git tree and branch at:
git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git linux-5.18.y
and the diffstat can be found below.

thanks,

greg k-h

-------------
Pseudo-Shortlog of commits:

Greg Kroah-Hartman <[email protected]>
Linux 5.18.1-rc1

Edward Matijevic <[email protected]>
ALSA: ctxfi: Add SB046x PCI ID

Lorenzo Pieralisi <[email protected]>
ACPI: sysfs: Fix BERT error region memory mapping

Jason A. Donenfeld <[email protected]>
random: check for signals after page of pool writes

Jens Axboe <[email protected]>
random: wire up fops->splice_{read,write}_iter()

Jens Axboe <[email protected]>
random: convert to using fops->write_iter()

Jens Axboe <[email protected]>
random: convert to using fops->read_iter()

Jason A. Donenfeld <[email protected]>
random: unify batched entropy implementations

Jason A. Donenfeld <[email protected]>
random: move randomize_page() into mm where it belongs

Jason A. Donenfeld <[email protected]>
random: move initialization functions out of hot pages

Jason A. Donenfeld <[email protected]>
random: make consistent use of buf and len

Jason A. Donenfeld <[email protected]>
random: use proper return types on get_random_{int,long}_wait()

Jason A. Donenfeld <[email protected]>
random: remove extern from functions in header

Jason A. Donenfeld <[email protected]>
random: use static branch for crng_ready()

Jason A. Donenfeld <[email protected]>
random: credit architectural init the exact amount

Jason A. Donenfeld <[email protected]>
random: handle latent entropy and command line from random_init()

Jason A. Donenfeld <[email protected]>
random: use proper jiffies comparison macro

Jason A. Donenfeld <[email protected]>
random: remove ratelimiting for in-kernel unseeded randomness

Jason A. Donenfeld <[email protected]>
random: move initialization out of reseeding hot path

Jason A. Donenfeld <[email protected]>
random: avoid initializing twice in credit race

Jason A. Donenfeld <[email protected]>
random: use symbolic constants for crng_init states

Jason A. Donenfeld <[email protected]>
siphash: use one source of truth for siphash permutations

Jason A. Donenfeld <[email protected]>
random: help compiler out with fast_mix() by using simpler arguments

Jason A. Donenfeld <[email protected]>
random: do not use input pool from hard IRQs

Jason A. Donenfeld <[email protected]>
random: order timer entropy functions below interrupt functions

Jason A. Donenfeld <[email protected]>
random: do not pretend to handle premature next security model

Jason A. Donenfeld <[email protected]>
random: use first 128 bits of input as fast init

Jason A. Donenfeld <[email protected]>
random: do not use batches when !crng_ready()

Jason A. Donenfeld <[email protected]>
random: insist on random_get_entropy() existing in order to simplify

Jason A. Donenfeld <[email protected]>
xtensa: use fallback for random_get_entropy() instead of zero

Jason A. Donenfeld <[email protected]>
sparc: use fallback for random_get_entropy() instead of zero

Jason A. Donenfeld <[email protected]>
um: use fallback for random_get_entropy() instead of zero

Jason A. Donenfeld <[email protected]>
x86/tsc: Use fallback for random_get_entropy() instead of zero

Jason A. Donenfeld <[email protected]>
nios2: use fallback for random_get_entropy() instead of zero

Jason A. Donenfeld <[email protected]>
arm: use fallback for random_get_entropy() instead of zero

Jason A. Donenfeld <[email protected]>
mips: use fallback for random_get_entropy() instead of just c0 random

Jason A. Donenfeld <[email protected]>
riscv: use fallback for random_get_entropy() instead of zero

Jason A. Donenfeld <[email protected]>
m68k: use fallback for random_get_entropy() instead of zero

Jason A. Donenfeld <[email protected]>
timekeeping: Add raw clock fallback for random_get_entropy()

Jason A. Donenfeld <[email protected]>
powerpc: define get_cycles macro for arch-override

Jason A. Donenfeld <[email protected]>
alpha: define get_cycles macro for arch-override

Jason A. Donenfeld <[email protected]>
parisc: define get_cycles macro for arch-override

Jason A. Donenfeld <[email protected]>
s390: define get_cycles macro for arch-override

Jason A. Donenfeld <[email protected]>
ia64: define get_cycles macro for arch-override

Jason A. Donenfeld <[email protected]>
init: call time_init() before rand_initialize()

Jason A. Donenfeld <[email protected]>
random: fix sysctl documentation nits

Basavaraj Natikar <[email protected]>
HID: amd_sfh: Add support for sensor discovery

Daniel Thompson <[email protected]>
lockdown: also lock down previous kgdb use


-------------

Diffstat:

Documentation/admin-guide/sysctl/kernel.rst | 8 +-
Makefile | 4 +-
arch/alpha/include/asm/timex.h | 1 +
arch/arm/include/asm/timex.h | 1 +
arch/ia64/include/asm/timex.h | 1 +
arch/m68k/include/asm/timex.h | 2 +-
arch/mips/include/asm/timex.h | 17 +-
arch/nios2/include/asm/timex.h | 3 +
arch/parisc/include/asm/timex.h | 3 +-
arch/powerpc/include/asm/timex.h | 1 +
arch/riscv/include/asm/timex.h | 2 +-
arch/s390/include/asm/timex.h | 1 +
arch/sparc/include/asm/timex_32.h | 4 +-
arch/um/include/asm/timex.h | 9 +-
arch/x86/include/asm/timex.h | 9 +
arch/x86/include/asm/tsc.h | 7 +-
arch/xtensa/include/asm/timex.h | 6 +-
drivers/acpi/sysfs.c | 25 +-
drivers/char/random.c | 1213 +++++++++++----------------
drivers/hid/amd-sfh-hid/amd_sfh_client.c | 11 +
drivers/hid/amd-sfh-hid/amd_sfh_pcie.c | 7 +
drivers/hid/amd-sfh-hid/amd_sfh_pcie.h | 4 +
include/linux/mm.h | 1 +
include/linux/prandom.h | 23 +-
include/linux/random.h | 92 +-
include/linux/security.h | 2 +
include/linux/siphash.h | 28 +
include/linux/timex.h | 8 +
init/main.c | 13 +-
kernel/debug/debug_core.c | 24 +
kernel/debug/kdb/kdb_main.c | 62 +-
kernel/time/timekeeping.c | 15 +
lib/Kconfig.debug | 3 +-
lib/siphash.c | 32 +-
mm/util.c | 32 +
security/security.c | 2 +
sound/pci/ctxfi/ctatc.c | 2 +
sound/pci/ctxfi/cthardware.h | 3 +-
38 files changed, 821 insertions(+), 860 deletions(-)




2022-05-28 19:04:32

by Greg Kroah-Hartman

Subject: [PATCH 5.18 10/47] timekeeping: Add raw clock fallback for random_get_entropy()

From: "Jason A. Donenfeld" <[email protected]>

commit 1366992e16bddd5e2d9a561687f367f9f802e2e4 upstream.

The addition of random_get_entropy_fallback() provides access to
whichever time source has the highest frequency, which is useful for
gathering entropy on platforms without available cycle counters. It's
not necessarily as good as being able to quickly access a cycle counter
that the CPU has, but it's still something, even when it falls back to
being jiffies-based.

In the event that a given arch does not define get_cycles(), falling
back to the get_cycles() default implementation that returns 0 is really
not the best we can do. Instead, at least calling
random_get_entropy_fallback() would be preferable, because that always
needs to return _something_, even falling back to jiffies eventually.
It's not as though random_get_entropy_fallback() is super high precision
or guaranteed to be entropic, but basically anything that's not zero all
the time is better than returning zero all the time.

Finally, since random_get_entropy_fallback() is used during extremely
early boot when randomizing freelists in mm_init(), it can be called
before timekeeping has been initialized. In that case there really is
nothing we can do; jiffies hasn't even started ticking yet. So just give
up and return 0.

Suggested-by: Thomas Gleixner <[email protected]>
Signed-off-by: Jason A. Donenfeld <[email protected]>
Reviewed-by: Thomas Gleixner <[email protected]>
Cc: Arnd Bergmann <[email protected]>
Cc: Theodore Ts'o <[email protected]>
Signed-off-by: Jason A. Donenfeld <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
include/linux/timex.h | 8 ++++++++
kernel/time/timekeeping.c | 15 +++++++++++++++
2 files changed, 23 insertions(+)

--- a/include/linux/timex.h
+++ b/include/linux/timex.h
@@ -62,6 +62,8 @@
#include <linux/types.h>
#include <linux/param.h>

+unsigned long random_get_entropy_fallback(void);
+
#include <asm/timex.h>

#ifndef random_get_entropy
@@ -74,8 +76,14 @@
*
* By default we use get_cycles() for this purpose, but individual
* architectures may override this in their asm/timex.h header file.
+ * If a given arch does not have get_cycles(), then we fallback to
+ * using random_get_entropy_fallback().
*/
+#ifdef get_cycles
#define random_get_entropy() ((unsigned long)get_cycles())
+#else
+#define random_get_entropy() random_get_entropy_fallback()
+#endif
#endif

/*
--- a/kernel/time/timekeeping.c
+++ b/kernel/time/timekeeping.c
@@ -17,6 +17,7 @@
#include <linux/clocksource.h>
#include <linux/jiffies.h>
#include <linux/time.h>
+#include <linux/timex.h>
#include <linux/tick.h>
#include <linux/stop_machine.h>
#include <linux/pvclock_gtod.h>
@@ -2380,6 +2381,20 @@ static int timekeeping_validate_timex(co
return 0;
}

+/**
+ * random_get_entropy_fallback - Returns the raw clock source value,
+ * used by random.c for platforms with no valid random_get_entropy().
+ */
+unsigned long random_get_entropy_fallback(void)
+{
+ struct tk_read_base *tkr = &tk_core.timekeeper.tkr_mono;
+ struct clocksource *clock = READ_ONCE(tkr->clock);
+
+ if (unlikely(timekeeping_suspended || !clock))
+ return 0;
+ return clock->read(clock);
+}
+EXPORT_SYMBOL_GPL(random_get_entropy_fallback);

/**
* do_adjtimex() - Accessor function to NTP __do_adjtimex function



2022-05-28 19:13:54

by Greg Kroah-Hartman

Subject: [PATCH 5.18 39/47] random: move initialization functions out of hot pages

From: "Jason A. Donenfeld" <[email protected]>

commit 560181c27b582557d633ecb608110075433383af upstream.

Much of random.c is devoted to initializing the rng and accounting for
when a sufficient amount of entropy has been added. In a perfect world,
this would all happen during init, and so we could mark these functions
as __init. But in reality, this isn't the case: sometimes the rng only
finishes initializing some seconds after system init is finished.

For this reason, at the moment, a whole host of functions that are only
used relatively close to system init and then never again are intermixed
with functions that are used in hot code all the time. This creates more
cache misses than necessary.

In order to pack the hot code closer together, this commit moves the
initialization functions that can't be marked as __init into
.text.unlikely by way of the __cold attribute.

Of particular note is moving credit_init_bits() into a macro wrapper
that inlines the crng_ready() static branch check. This avoids a
function call to a nop+ret, and most notably prevents extra entropy
arithmetic from being computed in mix_interrupt_randomness().

Reviewed-by: Dominik Brodowski <[email protected]>
Signed-off-by: Jason A. Donenfeld <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
drivers/char/random.c | 46 +++++++++++++++++++++-------------------------
1 file changed, 21 insertions(+), 25 deletions(-)

--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -109,7 +109,7 @@ bool rng_is_initialized(void)
}
EXPORT_SYMBOL(rng_is_initialized);

-static void crng_set_ready(struct work_struct *work)
+static void __cold crng_set_ready(struct work_struct *work)
{
static_branch_enable(&crng_is_ready);
}
@@ -148,7 +148,7 @@ EXPORT_SYMBOL(wait_for_random_bytes);
* returns: 0 if callback is successfully added
* -EALREADY if pool is already initialised (callback not called)
*/
-int register_random_ready_notifier(struct notifier_block *nb)
+int __cold register_random_ready_notifier(struct notifier_block *nb)
{
unsigned long flags;
int ret = -EALREADY;
@@ -166,7 +166,7 @@ int register_random_ready_notifier(struc
/*
* Delete a previously registered readiness callback function.
*/
-int unregister_random_ready_notifier(struct notifier_block *nb)
+int __cold unregister_random_ready_notifier(struct notifier_block *nb)
{
unsigned long flags;
int ret;
@@ -177,7 +177,7 @@ int unregister_random_ready_notifier(str
return ret;
}

-static void process_random_ready_list(void)
+static void __cold process_random_ready_list(void)
{
unsigned long flags;

@@ -187,15 +187,9 @@ static void process_random_ready_list(vo
}

#define warn_unseeded_randomness() \
- _warn_unseeded_randomness(__func__, (void *)_RET_IP_)
-
-static void _warn_unseeded_randomness(const char *func_name, void *caller)
-{
- if (!IS_ENABLED(CONFIG_WARN_ALL_UNSEEDED_RANDOM) || crng_ready())
- return;
- printk_deferred(KERN_NOTICE "random: %s called from %pS with crng_init=%d\n",
- func_name, caller, crng_init);
-}
+ if (IS_ENABLED(CONFIG_WARN_ALL_UNSEEDED_RANDOM) && !crng_ready()) \
+ printk_deferred(KERN_NOTICE "random: %s called from %pS with crng_init=%d\n", \
+ __func__, (void *)_RET_IP_, crng_init)


/*********************************************************************
@@ -614,7 +608,7 @@ EXPORT_SYMBOL(get_random_u32);
* This function is called when the CPU is coming up, with entry
* CPUHP_RANDOM_PREPARE, which comes before CPUHP_WORKQUEUE_PREP.
*/
-int random_prepare_cpu(unsigned int cpu)
+int __cold random_prepare_cpu(unsigned int cpu)
{
/*
* When the cpu comes back online, immediately invalidate both
@@ -789,13 +783,15 @@ static void extract_entropy(void *buf, s
memzero_explicit(&block, sizeof(block));
}

-static void credit_init_bits(size_t bits)
+#define credit_init_bits(bits) if (!crng_ready()) _credit_init_bits(bits)
+
+static void __cold _credit_init_bits(size_t bits)
{
static struct execute_work set_ready;
unsigned int new, orig, add;
unsigned long flags;

- if (crng_ready() || !bits)
+ if (!bits)
return;

add = min_t(size_t, bits, POOL_BITS);
@@ -979,7 +975,7 @@ EXPORT_SYMBOL_GPL(add_hwgenerator_random
* Handle random seed passed by bootloader, and credit it if
* CONFIG_RANDOM_TRUST_BOOTLOADER is set.
*/
-void add_bootloader_randomness(const void *buf, size_t len)
+void __cold add_bootloader_randomness(const void *buf, size_t len)
{
mix_pool_bytes(buf, len);
if (trust_bootloader)
@@ -995,7 +991,7 @@ static BLOCKING_NOTIFIER_HEAD(vmfork_cha
* don't credit it, but we do immediately force a reseed after so
* that it's used by the crng posthaste.
*/
-void add_vmfork_randomness(const void *unique_vm_id, size_t len)
+void __cold add_vmfork_randomness(const void *unique_vm_id, size_t len)
{
add_device_randomness(unique_vm_id, len);
if (crng_ready()) {
@@ -1008,13 +1004,13 @@ void add_vmfork_randomness(const void *u
EXPORT_SYMBOL_GPL(add_vmfork_randomness);
#endif

-int register_random_vmfork_notifier(struct notifier_block *nb)
+int __cold register_random_vmfork_notifier(struct notifier_block *nb)
{
return blocking_notifier_chain_register(&vmfork_chain, nb);
}
EXPORT_SYMBOL_GPL(register_random_vmfork_notifier);

-int unregister_random_vmfork_notifier(struct notifier_block *nb)
+int __cold unregister_random_vmfork_notifier(struct notifier_block *nb)
{
return blocking_notifier_chain_unregister(&vmfork_chain, nb);
}
@@ -1059,7 +1055,7 @@ static void fast_mix(unsigned long s[4],
* This function is called when the CPU has just come online, with
* entry CPUHP_AP_RANDOM_ONLINE, just after CPUHP_AP_WORKQUEUE_ONLINE.
*/
-int random_online_cpu(unsigned int cpu)
+int __cold random_online_cpu(unsigned int cpu)
{
/*
* During CPU shutdown and before CPU onlining, add_interrupt_
@@ -1214,7 +1210,7 @@ static void add_timer_randomness(struct
if (in_hardirq())
this_cpu_ptr(&irq_randomness)->count += max(1u, bits * 64) - 1;
else
- credit_init_bits(bits);
+ _credit_init_bits(bits);
}

void add_input_randomness(unsigned int type, unsigned int code, unsigned int value)
@@ -1242,7 +1238,7 @@ void add_disk_randomness(struct gendisk
}
EXPORT_SYMBOL_GPL(add_disk_randomness);

-void rand_initialize_disk(struct gendisk *disk)
+void __cold rand_initialize_disk(struct gendisk *disk)
{
struct timer_rand_state *state;

@@ -1271,7 +1267,7 @@ void rand_initialize_disk(struct gendisk
*
* So the re-arming always happens in the entropy loop itself.
*/
-static void entropy_timer(struct timer_list *t)
+static void __cold entropy_timer(struct timer_list *t)
{
credit_init_bits(1);
}
@@ -1280,7 +1276,7 @@ static void entropy_timer(struct timer_l
* If we have an actual cycle counter, see if we can
* generate enough entropy with timing noise
*/
-static void try_to_generate_entropy(void)
+static void __cold try_to_generate_entropy(void)
{
struct {
unsigned long entropy;
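
As an aside, a generic sketch of the attribute itself (illustrative,
not from this patch): __cold places the function body in
.text.unlikely and lets the compiler treat call sites as unlikely,
which is what packs the hot paths closer together.

#include <linux/printk.h>

/* Rarely-executed path, kept out of the hot text pages. */
static void __cold report_rare_event(void)
{
	pr_notice("rare path taken\n");
}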



2022-05-28 19:15:43

by Greg Kroah-Hartman

Subject: [PATCH 5.18 13/47] mips: use fallback for random_get_entropy() instead of just c0 random

From: "Jason A. Donenfeld" <[email protected]>

commit 1c99c6a7c3c599a68321b01b9ec243215ede5a68 upstream.

For situations in which we don't have a c0 counter register available,
we've been falling back to reading the c0 "random" register, which is
usually bounded by the amount of TLB entries and changes every other
cycle or so. This means it wraps extremely often. We can do better by
combining this fast-changing counter with a potentially slower-changing
counter from random_get_entropy_fallback() in the more significant bits.
This commit combines the two, taking into account that the changing bits
are in a different bit position depending on the CPU model. In addition,
we previously were falling back to 0 for ancient CPUs that Linux does
not support anyway; remove that dead path entirely.

Cc: Thomas Gleixner <[email protected]>
Cc: Arnd Bergmann <[email protected]>
Tested-by: Maciej W. Rozycki <[email protected]>
Acked-by: Thomas Bogendoerfer <[email protected]>
Signed-off-by: Jason A. Donenfeld <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
arch/mips/include/asm/timex.h | 17 ++++++++---------
1 file changed, 8 insertions(+), 9 deletions(-)

--- a/arch/mips/include/asm/timex.h
+++ b/arch/mips/include/asm/timex.h
@@ -76,25 +76,24 @@ static inline cycles_t get_cycles(void)
else
return 0; /* no usable counter */
}
+#define get_cycles get_cycles

/*
* Like get_cycles - but where c0_count is not available we desperately
* use c0_random in an attempt to get at least a little bit of entropy.
- *
- * R6000 and R6000A neither have a count register nor a random register.
- * That leaves no entropy source in the CPU itself.
*/
static inline unsigned long random_get_entropy(void)
{
- unsigned int prid = read_c0_prid();
- unsigned int imp = prid & PRID_IMP_MASK;
+ unsigned int c0_random;

- if (can_use_mips_counter(prid))
+ if (can_use_mips_counter(read_c0_prid()))
return read_c0_count();
- else if (likely(imp != PRID_IMP_R6000 && imp != PRID_IMP_R6000A))
- return read_c0_random();
+
+ if (cpu_has_3kex)
+ c0_random = (read_c0_random() >> 8) & 0x3f;
else
- return 0; /* no usable register */
+ c0_random = read_c0_random() & 0x3f;
+ return (random_get_entropy_fallback() << 6) | (0x3f - c0_random);
}
#define random_get_entropy random_get_entropy
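
To make the packing concrete (illustrative values only): with the
6-bit c0 random field, if random_get_entropy_fallback() returns
0x12345 and the field reads 0x2a, the result is
(0x12345 << 6) | (0x3f - 0x2a) = 0x48d140 | 0x15 = 0x48d155, i.e. the
slower-changing fallback occupies the upper bits while the
fast-wrapping register only ever perturbs the low 6 bits.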




2022-05-28 19:21:43

by Rudi Heitbaum

Subject: Re: [PATCH 5.18 00/47] 5.18.1-rc1 review

On Fri, May 27, 2022 at 10:49:40AM +0200, Greg Kroah-Hartman wrote:
> This is the start of the stable review cycle for the 5.18.1 release.
> There are 47 patches in this series, all will be posted as a response
> to this one. If anyone has any issues with these being applied, please
> let me know.

Hi Greg,

5.18.1-rc1 tested.

Run tested on:
- Allwinner H6 (Tanix TX6)
- Intel Tiger Lake x86_64 (nuc11 i7-1165G7)

In addition - build tested for:
- Allwinner A64
- Allwinner H3
- Allwinner H5
- NXP iMX6
- NXP iMX8
- Qualcomm Dragonboard
- Rockchip RK3288
- Rockchip RK3328
- Rockchip RK3399pro
- Samsung Exynos

Tested-by: Rudi Heitbaum <[email protected]>
--
Rudi

2022-05-28 19:24:28

by Greg Kroah-Hartman

Subject: [PATCH 5.18 30/47] random: move initialization out of reseeding hot path

From: "Jason A. Donenfeld" <[email protected]>

commit 68c9c8b192c6dae9be6278e98ee44029d5da2d31 upstream.

Initialization happens once -- by way of credit_init_bits() -- and then
it never happens again. Therefore, it doesn't need to be in
crng_reseed(), which is a hot path that is called multiple times. It
also doesn't make sense to have there, as initialization activity is
better associated with initialization routines.

After the prior commit, crng_reseed() now won't be called by multiple
concurrent callers, which means that we can safely move the
"finialize_init" logic into crng_init_bits() unconditionally.

Reviewed-by: Dominik Brodowski <[email protected]>
Signed-off-by: Jason A. Donenfeld <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
drivers/char/random.c | 42 +++++++++++++++++++-----------------------
1 file changed, 19 insertions(+), 23 deletions(-)

--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -264,7 +264,6 @@ static void crng_reseed(void)
unsigned long flags;
unsigned long next_gen;
u8 key[CHACHA_KEY_SIZE];
- bool finalize_init = false;

extract_entropy(key, sizeof(key));

@@ -281,28 +280,10 @@ static void crng_reseed(void)
++next_gen;
WRITE_ONCE(base_crng.generation, next_gen);
WRITE_ONCE(base_crng.birth, jiffies);
- if (!crng_ready()) {
+ if (!crng_ready())
crng_init = CRNG_READY;
- finalize_init = true;
- }
spin_unlock_irqrestore(&base_crng.lock, flags);
memzero_explicit(key, sizeof(key));
- if (finalize_init) {
- process_random_ready_list();
- wake_up_interruptible(&crng_init_wait);
- kill_fasync(&fasync, SIGIO, POLL_IN);
- pr_notice("crng init done\n");
- if (unseeded_warning.missed) {
- pr_notice("%d get_random_xx warning(s) missed due to ratelimiting\n",
- unseeded_warning.missed);
- unseeded_warning.missed = 0;
- }
- if (urandom_warning.missed) {
- pr_notice("%d urandom warning(s) missed due to ratelimiting\n",
- urandom_warning.missed);
- urandom_warning.missed = 0;
- }
- }
}

/*
@@ -834,10 +815,25 @@ static void credit_init_bits(size_t nbit
new = min_t(unsigned int, POOL_BITS, orig + add);
} while (cmpxchg(&input_pool.init_bits, orig, new) != orig);

- if (orig < POOL_READY_BITS && new >= POOL_READY_BITS)
- crng_reseed();
- else if (orig < POOL_EARLY_BITS && new >= POOL_EARLY_BITS) {
+ if (orig < POOL_READY_BITS && new >= POOL_READY_BITS) {
+ crng_reseed(); /* Sets crng_init to CRNG_READY under base_crng.lock. */
+ process_random_ready_list();
+ wake_up_interruptible(&crng_init_wait);
+ kill_fasync(&fasync, SIGIO, POLL_IN);
+ pr_notice("crng init done\n");
+ if (unseeded_warning.missed) {
+ pr_notice("%d get_random_xx warning(s) missed due to ratelimiting\n",
+ unseeded_warning.missed);
+ unseeded_warning.missed = 0;
+ }
+ if (urandom_warning.missed) {
+ pr_notice("%d urandom warning(s) missed due to ratelimiting\n",
+ urandom_warning.missed);
+ urandom_warning.missed = 0;
+ }
+ } else if (orig < POOL_EARLY_BITS && new >= POOL_EARLY_BITS) {
spin_lock_irqsave(&base_crng.lock, flags);
+ /* Check if crng_init is CRNG_EMPTY, to avoid race with crng_reseed(). */
if (crng_init == CRNG_EMPTY) {
extract_entropy(base_crng.key, sizeof(base_crng.key));
crng_init = CRNG_EARLY;
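
A worked example of the credit path above (numbers illustrative): with
POOL_BITS = 256, crediting 100 bits while init_bits is 200 saturates
new at min(256, 200 + 100) = 256. Because the cmpxchg loop serializes
callers, only one of them can observe orig < POOL_READY_BITS &&
new >= POOL_READY_BITS, so the hoisted "crng init done" block runs
exactly once.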



2022-05-28 19:25:22

by Greg Kroah-Hartman

Subject: [PATCH 5.18 43/47] random: convert to using fops->write_iter()

From: Jens Axboe <[email protected]>

commit 22b0a222af4df8ee9bb8e07013ab44da9511b047 upstream.

Now that the read side has been converted to fix a regression with
splice, convert the write side as well to have some symmetry in the
interface used (and help deprecate ->write()).

Signed-off-by: Jens Axboe <[email protected]>
[Jason: cleaned up random_ioctl a bit, require full writes in
RNDADDENTROPY since it's crediting entropy, simplify control flow of
write_pool(), and incorporate suggestions from Al.]
Cc: Al Viro <[email protected]>
Signed-off-by: Jason A. Donenfeld <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
drivers/char/random.c | 67 ++++++++++++++++++++++++++------------------------
1 file changed, 35 insertions(+), 32 deletions(-)

--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -1291,39 +1291,31 @@ static __poll_t random_poll(struct file
return crng_ready() ? EPOLLIN | EPOLLRDNORM : EPOLLOUT | EPOLLWRNORM;
}

-static int write_pool(const char __user *ubuf, size_t len)
+static ssize_t write_pool(struct iov_iter *iter)
{
- size_t block_len;
- int ret = 0;
u8 block[BLAKE2S_BLOCK_SIZE];
+ ssize_t ret = 0;
+ size_t copied;

- while (len) {
- block_len = min(len, sizeof(block));
- if (copy_from_user(block, ubuf, block_len)) {
- ret = -EFAULT;
- goto out;
- }
- len -= block_len;
- ubuf += block_len;
- mix_pool_bytes(block, block_len);
+ if (unlikely(!iov_iter_count(iter)))
+ return 0;
+
+ for (;;) {
+ copied = copy_from_iter(block, sizeof(block), iter);
+ ret += copied;
+ mix_pool_bytes(block, copied);
+ if (!iov_iter_count(iter) || copied != sizeof(block))
+ break;
cond_resched();
}

-out:
memzero_explicit(block, sizeof(block));
- return ret;
+ return ret ? ret : -EFAULT;
}

-static ssize_t random_write(struct file *file, const char __user *ubuf,
- size_t len, loff_t *ppos)
+static ssize_t random_write_iter(struct kiocb *kiocb, struct iov_iter *iter)
{
- int ret;
-
- ret = write_pool(ubuf, len);
- if (ret)
- return ret;
-
- return (ssize_t)len;
+ return write_pool(iter);
}

static ssize_t urandom_read_iter(struct kiocb *kiocb, struct iov_iter *iter)
@@ -1362,9 +1354,8 @@ static ssize_t random_read_iter(struct k

static long random_ioctl(struct file *f, unsigned int cmd, unsigned long arg)
{
- int size, ent_count;
int __user *p = (int __user *)arg;
- int retval;
+ int ent_count;

switch (cmd) {
case RNDGETENTCNT:
@@ -1381,20 +1372,32 @@ static long random_ioctl(struct file *f,
return -EINVAL;
credit_init_bits(ent_count);
return 0;
- case RNDADDENTROPY:
+ case RNDADDENTROPY: {
+ struct iov_iter iter;
+ struct iovec iov;
+ ssize_t ret;
+ int len;
+
if (!capable(CAP_SYS_ADMIN))
return -EPERM;
if (get_user(ent_count, p++))
return -EFAULT;
if (ent_count < 0)
return -EINVAL;
- if (get_user(size, p++))
+ if (get_user(len, p++))
+ return -EFAULT;
+ ret = import_single_range(WRITE, p, len, &iov, &iter);
+ if (unlikely(ret))
+ return ret;
+ ret = write_pool(&iter);
+ if (unlikely(ret < 0))
+ return ret;
+ /* Since we're crediting, enforce that it was all written into the pool. */
+ if (unlikely(ret != len))
return -EFAULT;
- retval = write_pool((const char __user *)p, size);
- if (retval < 0)
- return retval;
credit_init_bits(ent_count);
return 0;
+ }
case RNDZAPENTCNT:
case RNDCLEARPOOL:
/* No longer has any effect. */
@@ -1420,7 +1423,7 @@ static int random_fasync(int fd, struct

const struct file_operations random_fops = {
.read_iter = random_read_iter,
- .write = random_write,
+ .write_iter = random_write_iter,
.poll = random_poll,
.unlocked_ioctl = random_ioctl,
.compat_ioctl = compat_ptr_ioctl,
@@ -1430,7 +1433,7 @@ const struct file_operations random_fops

const struct file_operations urandom_fops = {
.read_iter = urandom_read_iter,
- .write = random_write,
+ .write_iter = random_write_iter,
.unlocked_ioctl = random_ioctl,
.compat_ioctl = compat_ptr_ioctl,
.fasync = random_fasync,
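
For illustration, a hedged userspace sketch (not part of the patch) of
the RNDADDENTROPY semantics this change tightens: a short copy into
the pool now fails with EFAULT instead of crediting a partial write.

#include <fcntl.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/random.h>

/* Feed len bytes into the input pool and credit entropy_bits.
 * Needs CAP_SYS_ADMIN; illustrative error handling only. */
int add_entropy(const void *buf, int len, int entropy_bits)
{
	struct {
		struct rand_pool_info info;
		unsigned char data[512];
	} req;
	int fd, ret;

	if (len > (int)sizeof(req.data))
		return -1;
	fd = open("/dev/random", O_WRONLY);
	if (fd < 0)
		return -1;
	req.info.entropy_count = entropy_bits;
	req.info.buf_size = len;
	memcpy(req.info.buf, buf, len);
	ret = ioctl(fd, RNDADDENTROPY, &req.info);
	close(fd);
	return ret;
}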



2022-05-28 19:26:10

by Greg Kroah-Hartman

Subject: [PATCH 5.18 47/47] ALSA: ctxfi: Add SB046x PCI ID

From: Edward Matijevic <[email protected]>

commit 1b073ebb174d0c7109b438e0a5eb4495137803ec upstream.

Adds the PCI ID for X-Fi cards sold under the Platinum and XtremeMusic names.

Before: snd_ctxfi 0000:05:05.0: chip 20K1 model Unknown (1102:0021) is found
After: snd_ctxfi 0000:05:05.0: chip 20K1 model SB046x (1102:0021) is found

[ This is only about defining the model name string, and the rest is
handled just like before, as a default unknown device.
Edward confirmed that the stuff has been working fine -- tiwai ]

Signed-off-by: Edward Matijevic <[email protected]>
Cc: <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Takashi Iwai <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
sound/pci/ctxfi/ctatc.c | 2 ++
sound/pci/ctxfi/cthardware.h | 3 ++-
2 files changed, 4 insertions(+), 1 deletion(-)

--- a/sound/pci/ctxfi/ctatc.c
+++ b/sound/pci/ctxfi/ctatc.c
@@ -36,6 +36,7 @@
| ((IEC958_AES3_CON_FS_48000) << 24))

static const struct snd_pci_quirk subsys_20k1_list[] = {
+ SND_PCI_QUIRK(PCI_VENDOR_ID_CREATIVE, 0x0021, "SB046x", CTSB046X),
SND_PCI_QUIRK(PCI_VENDOR_ID_CREATIVE, 0x0022, "SB055x", CTSB055X),
SND_PCI_QUIRK(PCI_VENDOR_ID_CREATIVE, 0x002f, "SB055x", CTSB055X),
SND_PCI_QUIRK(PCI_VENDOR_ID_CREATIVE, 0x0029, "SB073x", CTSB073X),
@@ -64,6 +65,7 @@ static const struct snd_pci_quirk subsys

static const char *ct_subsys_name[NUM_CTCARDS] = {
/* 20k1 models */
+ [CTSB046X] = "SB046x",
[CTSB055X] = "SB055x",
[CTSB073X] = "SB073x",
[CTUAA] = "UAA",
--- a/sound/pci/ctxfi/cthardware.h
+++ b/sound/pci/ctxfi/cthardware.h
@@ -26,8 +26,9 @@ enum CHIPTYP {

enum CTCARDS {
/* 20k1 models */
+ CTSB046X,
+ CT20K1_MODEL_FIRST = CTSB046X,
CTSB055X,
- CT20K1_MODEL_FIRST = CTSB055X,
CTSB073X,
CTUAA,
CT20K1_UNKNOWN,
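
One detail worth spelling out: because CTSB046X is inserted at the
head of the 20k1 block, the CT20K1_MODEL_FIRST alias has to move with
it, so that any hypothetical range walk such as

	for (i = CT20K1_MODEL_FIRST; i <= CT20K1_UNKNOWN; i++)

(illustrative loop, not quoted from the driver) still covers every
20k1 model.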



2022-05-28 19:28:57

by Bagas Sanjaya

Subject: Re: [PATCH 5.18 00/47] 5.18.1-rc1 review

On Fri, May 27, 2022 at 10:49:40AM +0200, Greg Kroah-Hartman wrote:
> This is the start of the stable review cycle for the 5.18.1 release.
> There are 47 patches in this series, all will be posted as a response
> to this one. If anyone has any issues with these being applied, please
> let me know.
>

Successfully cross-compiled for arm64 (bcm2711_defconfig, GCC 12.1.0).

Tested-by: Bagas Sanjaya <[email protected]>

--
An old man doll... just what I always wanted! - Clara

2022-05-28 19:33:37

by Greg Kroah-Hartman

Subject: [PATCH 5.18 36/47] random: remove extern from functions in header

From: "Jason A. Donenfeld" <[email protected]>

commit 7782cfeca7d420e8bb707613d4cfb0f7ff29bb3a upstream.

According to the kernel style guide, having `extern` on functions in
headers is old school and deprecated, and doesn't add anything. So remove
them from random.h, and tidy up the file a little bit too.

Signed-off-by: Jason A. Donenfeld <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
include/linux/random.h | 77 +++++++++++++++++++------------------------------
1 file changed, 31 insertions(+), 46 deletions(-)

--- a/include/linux/random.h
+++ b/include/linux/random.h
@@ -12,13 +12,12 @@

struct notifier_block;

-extern void add_device_randomness(const void *, size_t);
-extern void add_bootloader_randomness(const void *, size_t);
-extern void add_input_randomness(unsigned int type, unsigned int code,
- unsigned int value) __latent_entropy;
-extern void add_interrupt_randomness(int irq) __latent_entropy;
-extern void add_hwgenerator_randomness(const void *buffer, size_t count,
- size_t entropy);
+void add_device_randomness(const void *, size_t);
+void add_bootloader_randomness(const void *, size_t);
+void add_input_randomness(unsigned int type, unsigned int code,
+ unsigned int value) __latent_entropy;
+void add_interrupt_randomness(int irq) __latent_entropy;
+void add_hwgenerator_randomness(const void *buffer, size_t count, size_t entropy);

#if defined(LATENT_ENTROPY_PLUGIN) && !defined(__CHECKER__)
static inline void add_latent_entropy(void)
@@ -26,30 +25,20 @@ static inline void add_latent_entropy(vo
add_device_randomness((const void *)&latent_entropy, sizeof(latent_entropy));
}
#else
-static inline void add_latent_entropy(void) {}
+static inline void add_latent_entropy(void) { }
#endif

#if IS_ENABLED(CONFIG_VMGENID)
-extern void add_vmfork_randomness(const void *unique_vm_id, size_t size);
-extern int register_random_vmfork_notifier(struct notifier_block *nb);
-extern int unregister_random_vmfork_notifier(struct notifier_block *nb);
+void add_vmfork_randomness(const void *unique_vm_id, size_t size);
+int register_random_vmfork_notifier(struct notifier_block *nb);
+int unregister_random_vmfork_notifier(struct notifier_block *nb);
#else
static inline int register_random_vmfork_notifier(struct notifier_block *nb) { return 0; }
static inline int unregister_random_vmfork_notifier(struct notifier_block *nb) { return 0; }
#endif

-extern void get_random_bytes(void *buf, size_t nbytes);
-extern int wait_for_random_bytes(void);
-extern int __init random_init(const char *command_line);
-extern bool rng_is_initialized(void);
-extern int register_random_ready_notifier(struct notifier_block *nb);
-extern int unregister_random_ready_notifier(struct notifier_block *nb);
-extern size_t __must_check get_random_bytes_arch(void *buf, size_t nbytes);
-
-#ifndef MODULE
-extern const struct file_operations random_fops, urandom_fops;
-#endif
-
+void get_random_bytes(void *buf, size_t nbytes);
+size_t __must_check get_random_bytes_arch(void *buf, size_t nbytes);
u32 get_random_u32(void);
u64 get_random_u64(void);
static inline unsigned int get_random_int(void)
@@ -81,11 +70,17 @@ static inline unsigned long get_random_l

static inline unsigned long get_random_canary(void)
{
- unsigned long val = get_random_long();
-
- return val & CANARY_MASK;
+ return get_random_long() & CANARY_MASK;
}

+unsigned long randomize_page(unsigned long start, unsigned long range);
+
+int __init random_init(const char *command_line);
+bool rng_is_initialized(void);
+int wait_for_random_bytes(void);
+int register_random_ready_notifier(struct notifier_block *nb);
+int unregister_random_ready_notifier(struct notifier_block *nb);
+
/* Calls wait_for_random_bytes() and then calls get_random_bytes(buf, nbytes).
* Returns the result of the call to wait_for_random_bytes. */
static inline int get_random_bytes_wait(void *buf, size_t nbytes)
@@ -109,8 +104,6 @@ declare_get_random_var_wait(int)
declare_get_random_var_wait(long)
#undef declare_get_random_var

-unsigned long randomize_page(unsigned long start, unsigned long range);
-
/*
* This is designed to be standalone for just prandom
* users, but for now we include it from <linux/random.h>
@@ -121,22 +114,10 @@ unsigned long randomize_page(unsigned lo
#ifdef CONFIG_ARCH_RANDOM
# include <asm/archrandom.h>
#else
-static inline bool __must_check arch_get_random_long(unsigned long *v)
-{
- return false;
-}
-static inline bool __must_check arch_get_random_int(unsigned int *v)
-{
- return false;
-}
-static inline bool __must_check arch_get_random_seed_long(unsigned long *v)
-{
- return false;
-}
-static inline bool __must_check arch_get_random_seed_int(unsigned int *v)
-{
- return false;
-}
+static inline bool __must_check arch_get_random_long(unsigned long *v) { return false; }
+static inline bool __must_check arch_get_random_int(unsigned int *v) { return false; }
+static inline bool __must_check arch_get_random_seed_long(unsigned long *v) { return false; }
+static inline bool __must_check arch_get_random_seed_int(unsigned int *v) { return false; }
#endif

/*
@@ -160,8 +141,12 @@ static inline bool __init arch_get_rando
#endif

#ifdef CONFIG_SMP
-extern int random_prepare_cpu(unsigned int cpu);
-extern int random_online_cpu(unsigned int cpu);
+int random_prepare_cpu(unsigned int cpu);
+int random_online_cpu(unsigned int cpu);
+#endif
+
+#ifndef MODULE
+extern const struct file_operations random_fops, urandom_fops;
#endif

#endif /* _LINUX_RANDOM_H */



2022-05-28 19:45:09

by Greg Kroah-Hartman

Subject: [PATCH 5.18 23/47] random: do not pretend to handle premature next security model

From: "Jason A. Donenfeld" <[email protected]>

commit e85c0fc1d94c52483a603651748d4c76d6aa1c6b upstream.

Per the thread linked below, "premature next" is not considered to be a
realistic threat model, and leads to more serious security problems.

"Premature next" is the scenario in which:

- Attacker compromises the current state of a fully initialized RNG via
some kind of infoleak.
- New bits of entropy are added directly to the key used to generate the
/dev/urandom stream, without any buffering or pooling.
- Attacker then, somehow having read access to /dev/urandom, samples RNG
output and brute forces the individual new bits that were added.
- Result: the RNG never "recovers" from the initial compromise, a
so-called violation of what academics term "post-compromise security".

The usual solutions to this involve some form of delaying when entropy
gets mixed into the crng. With Fortuna, this involves multiple input
buckets. With what the Linux RNG was trying to do prior, this involves
entropy estimation.

However, by delaying when entropy gets mixed in, it also means that RNG
compromises are extremely dangerous during the window of time before
the RNG has gathered enough entropy, during which time nonces may become
predictable (or repeated), ephemeral keys may not be secret, and so
forth. Moreover, it's unclear how realistic "premature next" is from an
attack perspective, if these attacks even make sense in practice.

Put together -- and discussed in more detail in the thread below --
these constitute grounds for just doing away with the current code that
pretends to handle premature next. I say "pretends" because it wasn't
doing an especially great job at it either; should we change our mind
about this direction, we would probably implement Fortuna to "fix" the
"problem", in which case, removing the pretend solution still makes
sense.

This also reduces the crng reseed period from 5 minutes down to 1
minute. The rationale from the thread might lead us toward reducing that
even further in the future (or even eliminating it), but that remains a
topic of a future commit.

At a high level, this patch changes semantics from:

Before: Seed for the first time after 256 "bits" of estimated
entropy have been accumulated since the system booted. Thereafter,
reseed once every five minutes, but only if 256 new "bits" have been
accumulated since the last reseeding.

After: Seed for the first time after 256 "bits" of estimated entropy
have been accumulated since the system booted. Thereafter, reseed
once every minute.

Most of this patch is renaming and removing: POOL_MIN_BITS becomes
POOL_INIT_BITS, credit_entropy_bits() becomes credit_init_bits(),
crng_reseed() loses its "force" parameter since it's now always true,
the drain_entropy() function no longer has any use so it's removed,
entropy estimation is skipped if we've already init'd, the various
notifiers for "low on entropy" are now only active prior to init, and
finally, some documentation comments are cleaned up here and there.

Link: https://lore.kernel.org/lkml/[email protected]/
Cc: Theodore Ts'o <[email protected]>
Cc: Nadia Heninger <[email protected]>
Cc: Tom Ristenpart <[email protected]>
Reviewed-by: Eric Biggers <[email protected]>
Signed-off-by: Jason A. Donenfeld <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
drivers/char/random.c | 186 ++++++++++++++++++--------------------------------
1 file changed, 68 insertions(+), 118 deletions(-)

--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -15,14 +15,12 @@
* - Sysctl interface.
*
* The high level overview is that there is one input pool, into which
- * various pieces of data are hashed. Some of that data is then "credited" as
- * having a certain number of bits of entropy. When enough bits of entropy are
- * available, the hash is finalized and handed as a key to a stream cipher that
- * expands it indefinitely for various consumers. This key is periodically
- * refreshed as the various entropy collectors, described below, add data to the
- * input pool and credit it. There is currently no Fortuna-like scheduler
- * involved, which can lead to malicious entropy sources causing a premature
- * reseed, and the entropy estimates are, at best, conservative guesses.
+ * various pieces of data are hashed. Prior to initialization, some of that
+ * data is then "credited" as having a certain number of bits of entropy.
+ * When enough bits of entropy are available, the hash is finalized and
+ * handed as a key to a stream cipher that expands it indefinitely for
+ * various consumers. This key is periodically refreshed as the various
+ * entropy collectors, described below, add data to the input pool.
*/

#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
@@ -231,7 +229,10 @@ static void _warn_unseeded_randomness(co
*
*********************************************************************/

-enum { CRNG_RESEED_INTERVAL = 300 * HZ };
+enum {
+ CRNG_RESEED_START_INTERVAL = HZ,
+ CRNG_RESEED_INTERVAL = 60 * HZ
+};

static struct {
u8 key[CHACHA_KEY_SIZE] __aligned(__alignof__(long));
@@ -253,26 +254,18 @@ static DEFINE_PER_CPU(struct crng, crngs
.lock = INIT_LOCAL_LOCK(crngs.lock),
};

-/* Used by crng_reseed() to extract a new seed from the input pool. */
-static bool drain_entropy(void *buf, size_t nbytes, bool force);
-/* Used by crng_make_state() to extract a new seed when crng_init==0. */
+/* Used by crng_reseed() and crng_make_state() to extract a new seed from the input pool. */
static void extract_entropy(void *buf, size_t nbytes);

-/*
- * This extracts a new crng key from the input pool, but only if there is a
- * sufficient amount of entropy available or force is true, in order to
- * mitigate bruteforcing of newly added bits.
- */
-static void crng_reseed(bool force)
+/* This extracts a new crng key from the input pool. */
+static void crng_reseed(void)
{
unsigned long flags;
unsigned long next_gen;
u8 key[CHACHA_KEY_SIZE];
bool finalize_init = false;

- /* Only reseed if we can, to prevent brute forcing a small amount of new bits. */
- if (!drain_entropy(key, sizeof(key), force))
- return;
+ extract_entropy(key, sizeof(key));

/*
* We copy the new key into the base_crng, overwriting the old one,
@@ -344,10 +337,10 @@ static void crng_fast_key_erasure(u8 key
}

/*
- * Return whether the crng seed is considered to be sufficiently
- * old that a reseeding might be attempted. This happens if the last
- * reseeding was CRNG_RESEED_INTERVAL ago, or during early boot, at
- * an interval proportional to the uptime.
+ * Return whether the crng seed is considered to be sufficiently old
+ * that a reseeding is needed. This happens if the last reseeding
+ * was CRNG_RESEED_INTERVAL ago, or during early boot, at an interval
+ * proportional to the uptime.
*/
static bool crng_has_old_seed(void)
{
@@ -359,7 +352,7 @@ static bool crng_has_old_seed(void)
if (uptime >= CRNG_RESEED_INTERVAL / HZ * 2)
WRITE_ONCE(early_boot, false);
else
- interval = max_t(unsigned int, 5 * HZ,
+ interval = max_t(unsigned int, CRNG_RESEED_START_INTERVAL,
(unsigned int)uptime / 2 * HZ);
}
return time_after(jiffies, READ_ONCE(base_crng.birth) + interval);
@@ -401,11 +394,11 @@ static void crng_make_state(u32 chacha_s
}

/*
- * If the base_crng is old enough, we try to reseed, which in turn
- * bumps the generation counter that we check below.
+ * If the base_crng is old enough, we reseed, which in turn bumps the
+ * generation counter that we check below.
*/
if (unlikely(crng_has_old_seed()))
- crng_reseed(false);
+ crng_reseed();

local_lock_irqsave(&crngs.lock, flags);
crng = raw_cpu_ptr(&crngs);
@@ -734,30 +727,24 @@ EXPORT_SYMBOL(get_random_bytes_arch);
*
* After which, if added entropy should be credited:
*
- * static void credit_entropy_bits(size_t nbits)
+ * static void credit_init_bits(size_t nbits)
*
- * Finally, extract entropy via these two, with the latter one
- * setting the entropy count to zero and extracting only if there
- * is POOL_MIN_BITS entropy credited prior or force is true:
+ * Finally, extract entropy via:
*
* static void extract_entropy(void *buf, size_t nbytes)
- * static bool drain_entropy(void *buf, size_t nbytes, bool force)
*
**********************************************************************/

enum {
POOL_BITS = BLAKE2S_HASH_SIZE * 8,
- POOL_MIN_BITS = POOL_BITS, /* No point in settling for less. */
- POOL_FAST_INIT_BITS = POOL_MIN_BITS / 2
+ POOL_INIT_BITS = POOL_BITS, /* No point in settling for less. */
+ POOL_FAST_INIT_BITS = POOL_INIT_BITS / 2
};

-/* For notifying userspace should write into /dev/random. */
-static DECLARE_WAIT_QUEUE_HEAD(random_write_wait);
-
static struct {
struct blake2s_state hash;
spinlock_t lock;
- unsigned int entropy_count;
+ unsigned int init_bits;
} input_pool = {
.hash.h = { BLAKE2S_IV0 ^ (0x01010000 | BLAKE2S_HASH_SIZE),
BLAKE2S_IV1, BLAKE2S_IV2, BLAKE2S_IV3, BLAKE2S_IV4,
@@ -772,9 +759,9 @@ static void _mix_pool_bytes(const void *
}

/*
- * This function adds bytes into the entropy "pool". It does not
- * update the entropy estimate. The caller should call
- * credit_entropy_bits if this is appropriate.
+ * This function adds bytes into the input pool. It does not
+ * update the initialization bit counter; the caller should call
+ * credit_init_bits if this is appropriate.
*/
static void mix_pool_bytes(const void *in, size_t nbytes)
{
@@ -831,43 +818,24 @@ static void extract_entropy(void *buf, s
memzero_explicit(&block, sizeof(block));
}

-/*
- * First we make sure we have POOL_MIN_BITS of entropy in the pool unless force
- * is true, and then we set the entropy count to zero (but don't actually touch
- * any data). Only then can we extract a new key with extract_entropy().
- */
-static bool drain_entropy(void *buf, size_t nbytes, bool force)
-{
- unsigned int entropy_count;
- do {
- entropy_count = READ_ONCE(input_pool.entropy_count);
- if (!force && entropy_count < POOL_MIN_BITS)
- return false;
- } while (cmpxchg(&input_pool.entropy_count, entropy_count, 0) != entropy_count);
- extract_entropy(buf, nbytes);
- wake_up_interruptible(&random_write_wait);
- kill_fasync(&fasync, SIGIO, POLL_OUT);
- return true;
-}
-
-static void credit_entropy_bits(size_t nbits)
+static void credit_init_bits(size_t nbits)
{
- unsigned int entropy_count, orig, add;
+ unsigned int init_bits, orig, add;
unsigned long flags;

- if (!nbits)
+ if (crng_ready() || !nbits)
return;

add = min_t(size_t, nbits, POOL_BITS);

do {
- orig = READ_ONCE(input_pool.entropy_count);
- entropy_count = min_t(unsigned int, POOL_BITS, orig + add);
- } while (cmpxchg(&input_pool.entropy_count, orig, entropy_count) != orig);
-
- if (!crng_ready() && entropy_count >= POOL_MIN_BITS)
- crng_reseed(false);
- else if (unlikely(crng_init == 0 && entropy_count >= POOL_FAST_INIT_BITS)) {
+ orig = READ_ONCE(input_pool.init_bits);
+ init_bits = min_t(unsigned int, POOL_BITS, orig + add);
+ } while (cmpxchg(&input_pool.init_bits, orig, init_bits) != orig);
+
+ if (!crng_ready() && init_bits >= POOL_INIT_BITS)
+ crng_reseed();
+ else if (unlikely(crng_init == 0 && init_bits >= POOL_FAST_INIT_BITS)) {
spin_lock_irqsave(&base_crng.lock, flags);
if (crng_init == 0) {
extract_entropy(base_crng.key, sizeof(base_crng.key));
@@ -978,13 +946,10 @@ int __init rand_initialize(void)
_mix_pool_bytes(&now, sizeof(now));
_mix_pool_bytes(utsname(), sizeof(*(utsname())));

- extract_entropy(base_crng.key, sizeof(base_crng.key));
- ++base_crng.generation;
-
- if (arch_init && trust_cpu && !crng_ready()) {
- crng_init = 2;
- pr_notice("crng init done (trusting CPU's manufacturer)\n");
- }
+ if (crng_ready())
+ crng_reseed();
+ else if (arch_init && trust_cpu)
+ credit_init_bits(BLAKE2S_BLOCK_SIZE * 8);

if (ratelimit_disable) {
urandom_warning.interval = 0;
@@ -1038,6 +1003,9 @@ static void add_timer_randomness(struct
_mix_pool_bytes(&num, sizeof(num));
spin_unlock_irqrestore(&input_pool.lock, flags);

+ if (crng_ready())
+ return;
+
/*
* Calculate number of bits of randomness we probably added.
* We take into account the first, second and third-order deltas
@@ -1068,7 +1036,7 @@ static void add_timer_randomness(struct
* Round down by 1 bit on general principles,
* and limit entropy estimate to 12 bits.
*/
- credit_entropy_bits(min_t(unsigned int, fls(delta >> 1), 11));
+ credit_init_bits(min_t(unsigned int, fls(delta >> 1), 11));
}

void add_input_randomness(unsigned int type, unsigned int code,
@@ -1121,18 +1089,15 @@ void rand_initialize_disk(struct gendisk
void add_hwgenerator_randomness(const void *buffer, size_t count,
size_t entropy)
{
+ mix_pool_bytes(buffer, count);
+ credit_init_bits(entropy);
+
/*
- * Throttle writing if we're above the trickle threshold.
- * We'll be woken up again once below POOL_MIN_BITS, when
- * the calling thread is about to terminate, or once
- * CRNG_RESEED_INTERVAL has elapsed.
+ * Throttle writing to once every CRNG_RESEED_INTERVAL, unless
+ * we're not yet initialized.
*/
- wait_event_interruptible_timeout(random_write_wait,
- kthread_should_stop() ||
- input_pool.entropy_count < POOL_MIN_BITS,
- CRNG_RESEED_INTERVAL);
- mix_pool_bytes(buffer, count);
- credit_entropy_bits(entropy);
+ if (!kthread_should_stop() && crng_ready())
+ schedule_timeout_interruptible(CRNG_RESEED_INTERVAL);
}
EXPORT_SYMBOL_GPL(add_hwgenerator_randomness);

@@ -1144,7 +1109,7 @@ void add_bootloader_randomness(const voi
{
mix_pool_bytes(buf, size);
if (trust_bootloader)
- credit_entropy_bits(size * 8);
+ credit_init_bits(size * 8);
}
EXPORT_SYMBOL_GPL(add_bootloader_randomness);

@@ -1160,7 +1125,7 @@ void add_vmfork_randomness(const void *u
{
add_device_randomness(unique_vm_id, size);
if (crng_ready()) {
- crng_reseed(true);
+ crng_reseed();
pr_notice("crng reseeded due to virtual machine fork\n");
}
blocking_notifier_call_chain(&vmfork_chain, 0, NULL);
@@ -1279,7 +1244,7 @@ static void mix_interrupt_randomness(str
local_irq_enable();

mix_pool_bytes(pool, sizeof(pool));
- credit_entropy_bits(1);
+ credit_init_bits(1);

memzero_explicit(pool, sizeof(pool));
}
@@ -1326,7 +1291,7 @@ EXPORT_SYMBOL_GPL(add_interrupt_randomne
*/
static void entropy_timer(struct timer_list *t)
{
- credit_entropy_bits(1);
+ credit_init_bits(1);
}

/*
@@ -1419,16 +1384,8 @@ SYSCALL_DEFINE3(getrandom, char __user *

static __poll_t random_poll(struct file *file, poll_table *wait)
{
- __poll_t mask;
-
poll_wait(file, &crng_init_wait, wait);
- poll_wait(file, &random_write_wait, wait);
- mask = 0;
- if (crng_ready())
- mask |= EPOLLIN | EPOLLRDNORM;
- if (input_pool.entropy_count < POOL_MIN_BITS)
- mask |= EPOLLOUT | EPOLLWRNORM;
- return mask;
+ return crng_ready() ? EPOLLIN | EPOLLRDNORM : EPOLLOUT | EPOLLWRNORM;
}

static int write_pool(const char __user *ubuf, size_t count)
@@ -1508,7 +1465,7 @@ static long random_ioctl(struct file *f,
switch (cmd) {
case RNDGETENTCNT:
/* Inherently racy, no point locking. */
- if (put_user(input_pool.entropy_count, p))
+ if (put_user(input_pool.init_bits, p))
return -EFAULT;
return 0;
case RNDADDTOENTCNT:
@@ -1518,7 +1475,7 @@ static long random_ioctl(struct file *f,
return -EFAULT;
if (ent_count < 0)
return -EINVAL;
- credit_entropy_bits(ent_count);
+ credit_init_bits(ent_count);
return 0;
case RNDADDENTROPY:
if (!capable(CAP_SYS_ADMIN))
@@ -1532,27 +1489,20 @@ static long random_ioctl(struct file *f,
retval = write_pool((const char __user *)p, size);
if (retval < 0)
return retval;
- credit_entropy_bits(ent_count);
+ credit_init_bits(ent_count);
return 0;
case RNDZAPENTCNT:
case RNDCLEARPOOL:
- /*
- * Clear the entropy pool counters. We no longer clear
- * the entropy pool, as that's silly.
- */
+ /* No longer has any effect. */
if (!capable(CAP_SYS_ADMIN))
return -EPERM;
- if (xchg(&input_pool.entropy_count, 0) >= POOL_MIN_BITS) {
- wake_up_interruptible(&random_write_wait);
- kill_fasync(&fasync, SIGIO, POLL_OUT);
- }
return 0;
case RNDRESEEDCRNG:
if (!capable(CAP_SYS_ADMIN))
return -EPERM;
if (!crng_ready())
return -ENODATA;
- crng_reseed(false);
+ crng_reseed();
return 0;
default:
return -EINVAL;
@@ -1604,7 +1554,7 @@ const struct file_operations urandom_fop
*
* - write_wakeup_threshold - the amount of entropy in the input pool
* below which write polls to /dev/random will unblock, requesting
- * more entropy, tied to the POOL_MIN_BITS constant. It is writable
+ * more entropy, tied to the POOL_INIT_BITS constant. It is writable
* to avoid breaking old userspaces, but writing to it does not
* change any behavior of the RNG.
*
@@ -1619,7 +1569,7 @@ const struct file_operations urandom_fop
#include <linux/sysctl.h>

static int sysctl_random_min_urandom_seed = CRNG_RESEED_INTERVAL / HZ;
-static int sysctl_random_write_wakeup_bits = POOL_MIN_BITS;
+static int sysctl_random_write_wakeup_bits = POOL_INIT_BITS;
static int sysctl_poolsize = POOL_BITS;
static u8 sysctl_bootid[UUID_SIZE];

@@ -1675,7 +1625,7 @@ static struct ctl_table random_table[] =
},
{
.procname = "entropy_avail",
- .data = &input_pool.entropy_count,
+ .data = &input_pool.init_bits,
.maxlen = sizeof(int),
.mode = 0444,
.proc_handler = proc_dointvec,
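
A worked example of the new schedule (derived from the constants
above): with CRNG_RESEED_START_INTERVAL = HZ and
CRNG_RESEED_INTERVAL = 60 * HZ, a system 30 seconds past boot uses an
interval of max(1 s, 30 / 2 s) = 15 s; once uptime reaches
2 * 60 = 120 seconds, the early-boot scaling ends and reseeding
settles at the fixed 1 minute, versus the previous 5 minutes.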



2022-05-28 19:49:42

by Greg Kroah-Hartman

Subject: [PATCH 5.18 19/47] xtensa: use fallback for random_get_entropy() instead of zero

From: "Jason A. Donenfeld" <[email protected]>

commit e10e2f58030c5c211d49042a8c2a1b93d40b2ffb upstream.

In the event that random_get_entropy() can't access a cycle counter or
similar, falling back to returning 0 is really not the best we can do.
Instead, at least calling random_get_entropy_fallback() would be
preferable, because that always needs to return _something_, even
falling back to jiffies eventually. It's not as though
random_get_entropy_fallback() is super high precision or guaranteed to
be entropic, but basically anything that's not zero all the time is
better than returning zero all the time.

This is accomplished by just including the asm-generic code like on
other architectures, which means we can get rid of the empty stub
function here.

Cc: Thomas Gleixner <[email protected]>
Cc: Arnd Bergmann <[email protected]>
Acked-by: Max Filippov <[email protected]>
Signed-off-by: Jason A. Donenfeld <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
arch/xtensa/include/asm/timex.h | 6 ++----
1 file changed, 2 insertions(+), 4 deletions(-)

--- a/arch/xtensa/include/asm/timex.h
+++ b/arch/xtensa/include/asm/timex.h
@@ -29,10 +29,6 @@

extern unsigned long ccount_freq;

-typedef unsigned long long cycles_t;
-
-#define get_cycles() (0)
-
void local_timer_setup(unsigned cpu);

/*
@@ -59,4 +55,6 @@ static inline void set_linux_timer (unsi
xtensa_set_sr(ccompare, SREG_CCOMPARE + LINUX_TIMER);
}

+#include <asm-generic/timex.h>
+
#endif /* _XTENSA_TIMEX_H */



2022-05-28 19:51:16

by Greg Kroah-Hartman

Subject: [PATCH 5.18 35/47] random: use static branch for crng_ready()

From: "Jason A. Donenfeld" <[email protected]>

commit f5bda35fba615ace70a656d4700423fa6c9bebee upstream.

Since crng_ready() is only false briefly during initialization and then
forever after becomes true, we don't need to evaluate it afterward, making
it a prime candidate for a static branch.

One complication, however, is that it changes state in a particular call
to credit_init_bits(), which might be made from atomic context, which
means we must kick off a workqueue to change the static key. Further
complicating things, credit_init_bits() may be called sufficiently early
on in system initialization such that system_wq is NULL.

Fortunately, there exists the nice function execute_in_process_context(),
which will immediately execute the function if !in_interrupt(), and
otherwise defer it to a workqueue. During early init, before workqueues
are available, in_interrupt() is always false, because interrupts
haven't even been enabled yet, which means the function in that case
executes immediately. Later on, after workqueues are available,
in_interrupt() might be true, but in that case, the work is queued in
system_wq and all goes well.

Cc: Theodore Ts'o <[email protected]>
Cc: Sultan Alsawaf <[email protected]>
Reviewed-by: Dominik Brodowski <[email protected]>
Signed-off-by: Jason A. Donenfeld <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
drivers/char/random.c | 16 ++++++++++++----
1 file changed, 12 insertions(+), 4 deletions(-)

--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -77,8 +77,9 @@ static enum {
CRNG_EMPTY = 0, /* Little to no entropy collected */
CRNG_EARLY = 1, /* At least POOL_EARLY_BITS collected */
CRNG_READY = 2 /* Fully initialized with POOL_READY_BITS collected */
-} crng_init = CRNG_EMPTY;
-#define crng_ready() (likely(crng_init >= CRNG_READY))
+} crng_init __read_mostly = CRNG_EMPTY;
+static DEFINE_STATIC_KEY_FALSE(crng_is_ready);
+#define crng_ready() (static_branch_likely(&crng_is_ready) || crng_init >= CRNG_READY)
/* Various types of waiters for crng_init->CRNG_READY transition. */
static DECLARE_WAIT_QUEUE_HEAD(crng_init_wait);
static struct fasync_struct *fasync;
@@ -108,6 +109,11 @@ bool rng_is_initialized(void)
}
EXPORT_SYMBOL(rng_is_initialized);

+static void crng_set_ready(struct work_struct *work)
+{
+ static_branch_enable(&crng_is_ready);
+}
+
/* Used by wait_for_random_bytes(), and considered an entropy collector, below. */
static void try_to_generate_entropy(void);

@@ -267,7 +273,7 @@ static void crng_reseed(void)
++next_gen;
WRITE_ONCE(base_crng.generation, next_gen);
WRITE_ONCE(base_crng.birth, jiffies);
- if (!crng_ready())
+ if (!static_branch_likely(&crng_is_ready))
crng_init = CRNG_READY;
spin_unlock_irqrestore(&base_crng.lock, flags);
memzero_explicit(key, sizeof(key));
@@ -785,6 +791,7 @@ static void extract_entropy(void *buf, s

static void credit_init_bits(size_t nbits)
{
+ static struct execute_work set_ready;
unsigned int new, orig, add;
unsigned long flags;

@@ -800,6 +807,7 @@ static void credit_init_bits(size_t nbit

if (orig < POOL_READY_BITS && new >= POOL_READY_BITS) {
crng_reseed(); /* Sets crng_init to CRNG_READY under base_crng.lock. */
+ execute_in_process_context(crng_set_ready, &set_ready);
process_random_ready_list();
wake_up_interruptible(&crng_init_wait);
kill_fasync(&fasync, SIGIO, POLL_IN);
@@ -1348,7 +1356,7 @@ SYSCALL_DEFINE3(getrandom, char __user *
if (count > INT_MAX)
count = INT_MAX;

- if (!(flags & GRND_INSECURE) && !crng_ready()) {
+ if (!crng_ready() && !(flags & GRND_INSECURE)) {
int ret;

if (flags & GRND_NONBLOCK)



2022-05-28 19:54:01

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 5.18 27/47] siphash: use one source of truth for siphash permutations

From: "Jason A. Donenfeld" <[email protected]>

commit e73aaae2fa9024832e1f42e30c787c7baf61d014 upstream.

The SipHash family of permutations is currently used in three places:

- siphash.c itself, used in the ordinary way it was intended.
- random32.c, in a construction from an anonymous contributor.
- random.c, as part of its fast_mix function.

Each one of these places reinvents the wheel with the same C code, same
rotation constants, and same symmetry-breaking constants.

This commit tidies things up a bit by placing macros for the
permutations and constants into siphash.h, where each of the three .c
users can access them. It also leaves a note dissuading more users of
them from emerging.
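
Written out as a function rather than a macro, the shared 64-bit
permutation that each site now expands to is the following (an
illustrative restatement of SIPHASH_PERMUTATION from the diff below,
not new code):

static inline void siphash_permute(u64 v[4])	/* illustrative name */
{
	v[0] += v[1]; v[1] = rol64(v[1], 13); v[1] ^= v[0]; v[0] = rol64(v[0], 32);
	v[2] += v[3]; v[3] = rol64(v[3], 16); v[3] ^= v[2];
	v[0] += v[3]; v[3] = rol64(v[3], 21); v[3] ^= v[0];
	v[2] += v[1]; v[1] = rol64(v[1], 17); v[1] ^= v[2]; v[2] = rol64(v[2], 32);
}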

Signed-off-by: Jason A. Donenfeld <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
drivers/char/random.c | 30 +++++++-----------------------
include/linux/prandom.h | 23 +++++++----------------
include/linux/siphash.h | 28 ++++++++++++++++++++++++++++
lib/siphash.c | 32 ++++++++++----------------------
4 files changed, 52 insertions(+), 61 deletions(-)

--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -51,6 +51,7 @@
#include <linux/completion.h>
#include <linux/uuid.h>
#include <linux/uaccess.h>
+#include <linux/siphash.h>
#include <crypto/chacha.h>
#include <crypto/blake2s.h>
#include <asm/processor.h>
@@ -1053,12 +1054,11 @@ struct fast_pool {

static DEFINE_PER_CPU(struct fast_pool, irq_randomness) = {
#ifdef CONFIG_64BIT
- /* SipHash constants */
- .pool = { 0x736f6d6570736575UL, 0x646f72616e646f6dUL,
- 0x6c7967656e657261UL, 0x7465646279746573UL }
+#define FASTMIX_PERM SIPHASH_PERMUTATION
+ .pool = { SIPHASH_CONST_0, SIPHASH_CONST_1, SIPHASH_CONST_2, SIPHASH_CONST_3 }
#else
- /* HalfSipHash constants */
- .pool = { 0, 0, 0x6c796765U, 0x74656462U }
+#define FASTMIX_PERM HSIPHASH_PERMUTATION
+ .pool = { HSIPHASH_CONST_0, HSIPHASH_CONST_1, HSIPHASH_CONST_2, HSIPHASH_CONST_3 }
#endif
};

@@ -1070,27 +1070,11 @@ static DEFINE_PER_CPU(struct fast_pool,
*/
static void fast_mix(unsigned long s[4], unsigned long v1, unsigned long v2)
{
-#ifdef CONFIG_64BIT
-#define PERM() do { \
- s[0] += s[1]; s[1] = rol64(s[1], 13); s[1] ^= s[0]; s[0] = rol64(s[0], 32); \
- s[2] += s[3]; s[3] = rol64(s[3], 16); s[3] ^= s[2]; \
- s[0] += s[3]; s[3] = rol64(s[3], 21); s[3] ^= s[0]; \
- s[2] += s[1]; s[1] = rol64(s[1], 17); s[1] ^= s[2]; s[2] = rol64(s[2], 32); \
-} while (0)
-#else
-#define PERM() do { \
- s[0] += s[1]; s[1] = rol32(s[1], 5); s[1] ^= s[0]; s[0] = rol32(s[0], 16); \
- s[2] += s[3]; s[3] = rol32(s[3], 8); s[3] ^= s[2]; \
- s[0] += s[3]; s[3] = rol32(s[3], 7); s[3] ^= s[0]; \
- s[2] += s[1]; s[1] = rol32(s[1], 13); s[1] ^= s[2]; s[2] = rol32(s[2], 16); \
-} while (0)
-#endif
-
s[3] ^= v1;
- PERM();
+ FASTMIX_PERM(s[0], s[1], s[2], s[3]);
s[0] ^= v1;
s[3] ^= v2;
- PERM();
+ FASTMIX_PERM(s[0], s[1], s[2], s[3]);
s[0] ^= v2;
}

--- a/include/linux/prandom.h
+++ b/include/linux/prandom.h
@@ -10,6 +10,7 @@

#include <linux/types.h>
#include <linux/percpu.h>
+#include <linux/siphash.h>

u32 prandom_u32(void);
void prandom_bytes(void *buf, size_t nbytes);
@@ -27,15 +28,10 @@ DECLARE_PER_CPU(unsigned long, net_rand_
* The core SipHash round function. Each line can be executed in
* parallel given enough CPU resources.
*/
-#define PRND_SIPROUND(v0, v1, v2, v3) ( \
- v0 += v1, v1 = rol64(v1, 13), v2 += v3, v3 = rol64(v3, 16), \
- v1 ^= v0, v0 = rol64(v0, 32), v3 ^= v2, \
- v0 += v3, v3 = rol64(v3, 21), v2 += v1, v1 = rol64(v1, 17), \
- v3 ^= v0, v1 ^= v2, v2 = rol64(v2, 32) \
-)
+#define PRND_SIPROUND(v0, v1, v2, v3) SIPHASH_PERMUTATION(v0, v1, v2, v3)

-#define PRND_K0 (0x736f6d6570736575 ^ 0x6c7967656e657261)
-#define PRND_K1 (0x646f72616e646f6d ^ 0x7465646279746573)
+#define PRND_K0 (SIPHASH_CONST_0 ^ SIPHASH_CONST_2)
+#define PRND_K1 (SIPHASH_CONST_1 ^ SIPHASH_CONST_3)

#elif BITS_PER_LONG == 32
/*
@@ -43,14 +39,9 @@ DECLARE_PER_CPU(unsigned long, net_rand_
* This is weaker, but 32-bit machines are not used for high-traffic
* applications, so there is less output for an attacker to analyze.
*/
-#define PRND_SIPROUND(v0, v1, v2, v3) ( \
- v0 += v1, v1 = rol32(v1, 5), v2 += v3, v3 = rol32(v3, 8), \
- v1 ^= v0, v0 = rol32(v0, 16), v3 ^= v2, \
- v0 += v3, v3 = rol32(v3, 7), v2 += v1, v1 = rol32(v1, 13), \
- v3 ^= v0, v1 ^= v2, v2 = rol32(v2, 16) \
-)
-#define PRND_K0 0x6c796765
-#define PRND_K1 0x74656462
+#define PRND_SIPROUND(v0, v1, v2, v3) HSIPHASH_PERMUTATION(v0, v1, v2, v3)
+#define PRND_K0 (HSIPHASH_CONST_0 ^ HSIPHASH_CONST_2)
+#define PRND_K1 (HSIPHASH_CONST_1 ^ HSIPHASH_CONST_3)

#else
#error Unsupported BITS_PER_LONG
--- a/include/linux/siphash.h
+++ b/include/linux/siphash.h
@@ -138,4 +138,32 @@ static inline u32 hsiphash(const void *d
return ___hsiphash_aligned(data, len, key);
}

+/*
+ * These macros expose the raw SipHash and HalfSipHash permutations.
+ * Do not use them directly! If you think you have a use for them,
+ * be sure to CC the maintainer of this file explaining why.
+ */
+
+#define SIPHASH_PERMUTATION(a, b, c, d) ( \
+ (a) += (b), (b) = rol64((b), 13), (b) ^= (a), (a) = rol64((a), 32), \
+ (c) += (d), (d) = rol64((d), 16), (d) ^= (c), \
+ (a) += (d), (d) = rol64((d), 21), (d) ^= (a), \
+ (c) += (b), (b) = rol64((b), 17), (b) ^= (c), (c) = rol64((c), 32))
+
+#define SIPHASH_CONST_0 0x736f6d6570736575ULL
+#define SIPHASH_CONST_1 0x646f72616e646f6dULL
+#define SIPHASH_CONST_2 0x6c7967656e657261ULL
+#define SIPHASH_CONST_3 0x7465646279746573ULL
+
+#define HSIPHASH_PERMUTATION(a, b, c, d) ( \
+ (a) += (b), (b) = rol32((b), 5), (b) ^= (a), (a) = rol32((a), 16), \
+ (c) += (d), (d) = rol32((d), 8), (d) ^= (c), \
+ (a) += (d), (d) = rol32((d), 7), (d) ^= (a), \
+ (c) += (b), (b) = rol32((b), 13), (b) ^= (c), (c) = rol32((c), 16))
+
+#define HSIPHASH_CONST_0 0U
+#define HSIPHASH_CONST_1 0U
+#define HSIPHASH_CONST_2 0x6c796765U
+#define HSIPHASH_CONST_3 0x74656462U
+
#endif /* _LINUX_SIPHASH_H */
--- a/lib/siphash.c
+++ b/lib/siphash.c
@@ -18,19 +18,13 @@
#include <asm/word-at-a-time.h>
#endif

-#define SIPROUND \
- do { \
- v0 += v1; v1 = rol64(v1, 13); v1 ^= v0; v0 = rol64(v0, 32); \
- v2 += v3; v3 = rol64(v3, 16); v3 ^= v2; \
- v0 += v3; v3 = rol64(v3, 21); v3 ^= v0; \
- v2 += v1; v1 = rol64(v1, 17); v1 ^= v2; v2 = rol64(v2, 32); \
- } while (0)
+#define SIPROUND SIPHASH_PERMUTATION(v0, v1, v2, v3)

#define PREAMBLE(len) \
- u64 v0 = 0x736f6d6570736575ULL; \
- u64 v1 = 0x646f72616e646f6dULL; \
- u64 v2 = 0x6c7967656e657261ULL; \
- u64 v3 = 0x7465646279746573ULL; \
+ u64 v0 = SIPHASH_CONST_0; \
+ u64 v1 = SIPHASH_CONST_1; \
+ u64 v2 = SIPHASH_CONST_2; \
+ u64 v3 = SIPHASH_CONST_3; \
u64 b = ((u64)(len)) << 56; \
v3 ^= key->key[1]; \
v2 ^= key->key[0]; \
@@ -389,19 +383,13 @@ u32 hsiphash_4u32(const u32 first, const
}
EXPORT_SYMBOL(hsiphash_4u32);
#else
-#define HSIPROUND \
- do { \
- v0 += v1; v1 = rol32(v1, 5); v1 ^= v0; v0 = rol32(v0, 16); \
- v2 += v3; v3 = rol32(v3, 8); v3 ^= v2; \
- v0 += v3; v3 = rol32(v3, 7); v3 ^= v0; \
- v2 += v1; v1 = rol32(v1, 13); v1 ^= v2; v2 = rol32(v2, 16); \
- } while (0)
+#define HSIPROUND HSIPHASH_PERMUTATION(v0, v1, v2, v3)

#define HPREAMBLE(len) \
- u32 v0 = 0; \
- u32 v1 = 0; \
- u32 v2 = 0x6c796765U; \
- u32 v3 = 0x74656462U; \
+ u32 v0 = HSIPHASH_CONST_0; \
+ u32 v1 = HSIPHASH_CONST_1; \
+ u32 v2 = HSIPHASH_CONST_2; \
+ u32 v3 = HSIPHASH_CONST_3; \
u32 b = ((u32)(len)) << 24; \
v3 ^= key->key[1]; \
v2 ^= key->key[0]; \



2022-05-28 20:00:43

by Ron Economos

Subject: Re: [PATCH 5.18 00/47] 5.18.1-rc1 review

On 5/27/22 1:49 AM, Greg Kroah-Hartman wrote:
> This is the start of the stable review cycle for the 5.18.1 release.
> There are 47 patches in this series, all will be posted as a response
> to this one. If anyone has any issues with these being applied, please
> let me know.
>
> Responses should be made by Sun, 29 May 2022 08:46:45 +0000.
> Anything received after that time might be too late.
>
> The whole patch series can be found in one patch at:
> https://www.kernel.org/pub/linux/kernel/v5.x/stable-review/patch-5.18.1-rc1.gz
> or in the git tree and branch at:
> git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git linux-5.18.y
> and the diffstat can be found below.
>
> thanks,
>
> greg k-h

Built and booted successfully on RISC-V RV64 (HiFive Unmatched).

Tested-by: Ron Economos <[email protected]>


2022-05-28 20:08:56

by Greg Kroah-Hartman

Subject: [PATCH 5.18 42/47] random: convert to using fops->read_iter()

From: Jens Axboe <[email protected]>

commit 1b388e7765f2eaa137cf5d92b47ef5925ad83ced upstream.

This is a pre-requisite to wiring up splice() again for the random
and urandom drivers. It also allows us to remove the INT_MAX check in
getrandom(), because import_single_range() applies capping internally.
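
For context, import_single_range() validates the user range and caps the
length roughly as follows (simplified from lib/iov_iter.c):

int import_single_range(int rw, void __user *buf, size_t len,
			struct iovec *iov, struct iov_iter *i)
{
	if (len > MAX_RW_COUNT)	/* the capping that replaces the INT_MAX check */
		len = MAX_RW_COUNT;
	if (unlikely(!access_ok(buf, len)))
		return -EFAULT;

	iov->iov_base = buf;
	iov->iov_len = len;
	iov_iter_init(i, rw, iov, 1, len);
	return 0;
}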

Signed-off-by: Jens Axboe <[email protected]>
[Jason: rewrote get_random_bytes_user() to simplify and also incorporate
additional suggestions from Al.]
Cc: Al Viro <[email protected]>
Signed-off-by: Jason A. Donenfeld <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
drivers/char/random.c | 65 ++++++++++++++++++++++----------------------------
1 file changed, 29 insertions(+), 36 deletions(-)

--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -446,13 +446,13 @@ void get_random_bytes(void *buf, size_t
}
EXPORT_SYMBOL(get_random_bytes);

-static ssize_t get_random_bytes_user(void __user *ubuf, size_t len)
+static ssize_t get_random_bytes_user(struct iov_iter *iter)
{
- size_t block_len, left, ret = 0;
u32 chacha_state[CHACHA_STATE_WORDS];
- u8 output[CHACHA_BLOCK_SIZE];
+ u8 block[CHACHA_BLOCK_SIZE];
+ size_t ret = 0, copied;

- if (!len)
+ if (unlikely(!iov_iter_count(iter)))
return 0;

/*
@@ -466,30 +466,22 @@ static ssize_t get_random_bytes_user(voi
* use chacha_state after, so we can simply return those bytes to
* the user directly.
*/
- if (len <= CHACHA_KEY_SIZE) {
- ret = len - copy_to_user(ubuf, &chacha_state[4], len);
+ if (iov_iter_count(iter) <= CHACHA_KEY_SIZE) {
+ ret = copy_to_iter(&chacha_state[4], CHACHA_KEY_SIZE, iter);
goto out_zero_chacha;
}

for (;;) {
- chacha20_block(chacha_state, output);
+ chacha20_block(chacha_state, block);
if (unlikely(chacha_state[12] == 0))
++chacha_state[13];

- block_len = min_t(size_t, len, CHACHA_BLOCK_SIZE);
- left = copy_to_user(ubuf, output, block_len);
- if (left) {
- ret += block_len - left;
+ copied = copy_to_iter(block, sizeof(block), iter);
+ ret += copied;
+ if (!iov_iter_count(iter) || copied != sizeof(block))
break;
- }

- ubuf += block_len;
- ret += block_len;
- len -= block_len;
- if (!len)
- break;
-
- BUILD_BUG_ON(PAGE_SIZE % CHACHA_BLOCK_SIZE != 0);
+ BUILD_BUG_ON(PAGE_SIZE % sizeof(block) != 0);
if (ret % PAGE_SIZE == 0) {
if (signal_pending(current))
break;
@@ -497,7 +489,7 @@ static ssize_t get_random_bytes_user(voi
}
}

- memzero_explicit(output, sizeof(output));
+ memzero_explicit(block, sizeof(block));
out_zero_chacha:
memzero_explicit(chacha_state, sizeof(chacha_state));
return ret ? ret : -EFAULT;
@@ -1265,6 +1257,10 @@ static void __cold try_to_generate_entro

SYSCALL_DEFINE3(getrandom, char __user *, ubuf, size_t, len, unsigned int, flags)
{
+ struct iov_iter iter;
+ struct iovec iov;
+ int ret;
+
if (flags & ~(GRND_NONBLOCK | GRND_RANDOM | GRND_INSECURE))
return -EINVAL;

@@ -1275,19 +1271,18 @@ SYSCALL_DEFINE3(getrandom, char __user *
if ((flags & (GRND_INSECURE | GRND_RANDOM)) == (GRND_INSECURE | GRND_RANDOM))
return -EINVAL;

- if (len > INT_MAX)
- len = INT_MAX;
-
if (!crng_ready() && !(flags & GRND_INSECURE)) {
- int ret;
-
if (flags & GRND_NONBLOCK)
return -EAGAIN;
ret = wait_for_random_bytes();
if (unlikely(ret))
return ret;
}
- return get_random_bytes_user(ubuf, len);
+
+ ret = import_single_range(READ, ubuf, len, &iov, &iter);
+ if (unlikely(ret))
+ return ret;
+ return get_random_bytes_user(&iter);
}

static __poll_t random_poll(struct file *file, poll_table *wait)
@@ -1331,8 +1326,7 @@ static ssize_t random_write(struct file
return (ssize_t)len;
}

-static ssize_t urandom_read(struct file *file, char __user *ubuf,
- size_t len, loff_t *ppos)
+static ssize_t urandom_read_iter(struct kiocb *kiocb, struct iov_iter *iter)
{
static int maxwarn = 10;

@@ -1348,23 +1342,22 @@ static ssize_t urandom_read(struct file
++urandom_warning.missed;
else if (ratelimit_disable || __ratelimit(&urandom_warning)) {
--maxwarn;
- pr_notice("%s: uninitialized urandom read (%zd bytes read)\n",
- current->comm, len);
+ pr_notice("%s: uninitialized urandom read (%zu bytes read)\n",
+ current->comm, iov_iter_count(iter));
}
}

- return get_random_bytes_user(ubuf, len);
+ return get_random_bytes_user(iter);
}

-static ssize_t random_read(struct file *file, char __user *ubuf,
- size_t len, loff_t *ppos)
+static ssize_t random_read_iter(struct kiocb *kiocb, struct iov_iter *iter)
{
int ret;

ret = wait_for_random_bytes();
if (ret != 0)
return ret;
- return get_random_bytes_user(ubuf, len);
+ return get_random_bytes_user(iter);
}

static long random_ioctl(struct file *f, unsigned int cmd, unsigned long arg)
@@ -1426,7 +1419,7 @@ static int random_fasync(int fd, struct
}

const struct file_operations random_fops = {
- .read = random_read,
+ .read_iter = random_read_iter,
.write = random_write,
.poll = random_poll,
.unlocked_ioctl = random_ioctl,
@@ -1436,7 +1429,7 @@ const struct file_operations random_fops
};

const struct file_operations urandom_fops = {
- .read = urandom_read,
+ .read_iter = urandom_read_iter,
.write = random_write,
.unlocked_ioctl = random_ioctl,
.compat_ioctl = compat_ptr_ioctl,



2022-05-28 20:13:28

by Greg Kroah-Hartman

Subject: [PATCH 5.18 15/47] nios2: use fallback for random_get_entropy() instead of zero

From: "Jason A. Donenfeld" <[email protected]>

commit c04e72700f2293013dab40208e809369378f224c upstream.

In the event that random_get_entropy() can't access a cycle counter or
similar, falling back to returning 0 is really not the best we can do.
Instead, at least calling random_get_entropy_fallback() would be
preferable, because that always needs to return _something_, even
falling back to jiffies eventually. It's not as though
random_get_entropy_fallback() is super high precision or guaranteed to
be entropic, but basically anything that's not zero all the time is
better than returning zero all the time.
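
The `?:` in the patch below is the GNU C conditional-with-omitted-middle
extension: it evaluates get_cycles() once and falls back only when the
result is zero. Sketched as a function for clarity:

static inline unsigned long nios2_entropy(void)	/* illustrative name */
{
	unsigned long cycles = get_cycles();

	return cycles ? cycles : random_get_entropy_fallback();
}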

Cc: Thomas Gleixner <[email protected]>
Cc: Arnd Bergmann <[email protected]>
Acked-by: Dinh Nguyen <[email protected]>
Signed-off-by: Jason A. Donenfeld <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
arch/nios2/include/asm/timex.h | 3 +++
1 file changed, 3 insertions(+)

--- a/arch/nios2/include/asm/timex.h
+++ b/arch/nios2/include/asm/timex.h
@@ -8,5 +8,8 @@
typedef unsigned long cycles_t;

extern cycles_t get_cycles(void);
+#define get_cycles get_cycles
+
+#define random_get_entropy() (((unsigned long)get_cycles()) ?: random_get_entropy_fallback())

#endif



2022-05-28 20:15:04

by Greg Kroah-Hartman

Subject: [PATCH 5.18 38/47] random: make consistent use of buf and len

From: "Jason A. Donenfeld" <[email protected]>

commit a19402634c435a4eae226df53c141cdbb9922e7b upstream.

The current code was a mix of "nbytes", "count", "size", "buffer", "in",
and so forth. Instead, let's clean this up by naming input parameters
"buf" (or "ubuf") and "len", so that it's always clear what kind of
function argument you're reading.

Signed-off-by: Jason A. Donenfeld <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
drivers/char/random.c | 199 +++++++++++++++++++++++--------------------------
include/linux/random.h | 12 +-
2 files changed, 103 insertions(+), 108 deletions(-)

--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -208,7 +208,7 @@ static void _warn_unseeded_randomness(co
*
* There are a few exported interfaces for use by other drivers:
*
- * void get_random_bytes(void *buf, size_t nbytes)
+ * void get_random_bytes(void *buf, size_t len)
* u32 get_random_u32()
* u64 get_random_u64()
* unsigned int get_random_int()
@@ -249,7 +249,7 @@ static DEFINE_PER_CPU(struct crng, crngs
};

/* Used by crng_reseed() and crng_make_state() to extract a new seed from the input pool. */
-static void extract_entropy(void *buf, size_t nbytes);
+static void extract_entropy(void *buf, size_t len);

/* This extracts a new crng key from the input pool. */
static void crng_reseed(void)
@@ -403,24 +403,24 @@ static void crng_make_state(u32 chacha_s
local_unlock_irqrestore(&crngs.lock, flags);
}

-static void _get_random_bytes(void *buf, size_t nbytes)
+static void _get_random_bytes(void *buf, size_t len)
{
u32 chacha_state[CHACHA_STATE_WORDS];
u8 tmp[CHACHA_BLOCK_SIZE];
- size_t len;
+ size_t first_block_len;

- if (!nbytes)
+ if (!len)
return;

- len = min_t(size_t, 32, nbytes);
- crng_make_state(chacha_state, buf, len);
- nbytes -= len;
- buf += len;
+ first_block_len = min_t(size_t, 32, len);
+ crng_make_state(chacha_state, buf, first_block_len);
+ len -= first_block_len;
+ buf += first_block_len;

- while (nbytes) {
- if (nbytes < CHACHA_BLOCK_SIZE) {
+ while (len) {
+ if (len < CHACHA_BLOCK_SIZE) {
chacha20_block(chacha_state, tmp);
- memcpy(buf, tmp, nbytes);
+ memcpy(buf, tmp, len);
memzero_explicit(tmp, sizeof(tmp));
break;
}
@@ -428,7 +428,7 @@ static void _get_random_bytes(void *buf,
chacha20_block(chacha_state, buf);
if (unlikely(chacha_state[12] == 0))
++chacha_state[13];
- nbytes -= CHACHA_BLOCK_SIZE;
+ len -= CHACHA_BLOCK_SIZE;
buf += CHACHA_BLOCK_SIZE;
}

@@ -445,20 +445,20 @@ static void _get_random_bytes(void *buf,
* wait_for_random_bytes() should be called and return 0 at least once
* at any point prior.
*/
-void get_random_bytes(void *buf, size_t nbytes)
+void get_random_bytes(void *buf, size_t len)
{
warn_unseeded_randomness();
- _get_random_bytes(buf, nbytes);
+ _get_random_bytes(buf, len);
}
EXPORT_SYMBOL(get_random_bytes);

-static ssize_t get_random_bytes_user(void __user *buf, size_t nbytes)
+static ssize_t get_random_bytes_user(void __user *ubuf, size_t len)
{
- size_t len, left, ret = 0;
+ size_t block_len, left, ret = 0;
u32 chacha_state[CHACHA_STATE_WORDS];
u8 output[CHACHA_BLOCK_SIZE];

- if (!nbytes)
+ if (!len)
return 0;

/*
@@ -472,8 +472,8 @@ static ssize_t get_random_bytes_user(voi
* use chacha_state after, so we can simply return those bytes to
* the user directly.
*/
- if (nbytes <= CHACHA_KEY_SIZE) {
- ret = nbytes - copy_to_user(buf, &chacha_state[4], nbytes);
+ if (len <= CHACHA_KEY_SIZE) {
+ ret = len - copy_to_user(ubuf, &chacha_state[4], len);
goto out_zero_chacha;
}

@@ -482,17 +482,17 @@ static ssize_t get_random_bytes_user(voi
if (unlikely(chacha_state[12] == 0))
++chacha_state[13];

- len = min_t(size_t, nbytes, CHACHA_BLOCK_SIZE);
- left = copy_to_user(buf, output, len);
+ block_len = min_t(size_t, len, CHACHA_BLOCK_SIZE);
+ left = copy_to_user(ubuf, output, block_len);
if (left) {
- ret += len - left;
+ ret += block_len - left;
break;
}

- buf += len;
- ret += len;
- nbytes -= len;
- if (!nbytes)
+ ubuf += block_len;
+ ret += block_len;
+ len -= block_len;
+ if (!len)
break;

BUILD_BUG_ON(PAGE_SIZE % CHACHA_BLOCK_SIZE != 0);
@@ -666,24 +666,24 @@ unsigned long randomize_page(unsigned lo
* use. Use get_random_bytes() instead. It returns the number of
* bytes filled in.
*/
-size_t __must_check get_random_bytes_arch(void *buf, size_t nbytes)
+size_t __must_check get_random_bytes_arch(void *buf, size_t len)
{
- size_t left = nbytes;
+ size_t left = len;
u8 *p = buf;

while (left) {
unsigned long v;
- size_t chunk = min_t(size_t, left, sizeof(unsigned long));
+ size_t block_len = min_t(size_t, left, sizeof(unsigned long));

if (!arch_get_random_long(&v))
break;

- memcpy(p, &v, chunk);
- p += chunk;
- left -= chunk;
+ memcpy(p, &v, block_len);
+ p += block_len;
+ left -= block_len;
}

- return nbytes - left;
+ return len - left;
}
EXPORT_SYMBOL(get_random_bytes_arch);

@@ -694,15 +694,15 @@ EXPORT_SYMBOL(get_random_bytes_arch);
*
* Callers may add entropy via:
*
- * static void mix_pool_bytes(const void *in, size_t nbytes)
+ * static void mix_pool_bytes(const void *buf, size_t len)
*
* After which, if added entropy should be credited:
*
- * static void credit_init_bits(size_t nbits)
+ * static void credit_init_bits(size_t bits)
*
* Finally, extract entropy via:
*
- * static void extract_entropy(void *buf, size_t nbytes)
+ * static void extract_entropy(void *buf, size_t len)
*
**********************************************************************/

@@ -724,9 +724,9 @@ static struct {
.lock = __SPIN_LOCK_UNLOCKED(input_pool.lock),
};

-static void _mix_pool_bytes(const void *in, size_t nbytes)
+static void _mix_pool_bytes(const void *buf, size_t len)
{
- blake2s_update(&input_pool.hash, in, nbytes);
+ blake2s_update(&input_pool.hash, buf, len);
}

/*
@@ -734,12 +734,12 @@ static void _mix_pool_bytes(const void *
* update the initialization bit counter; the caller should call
* credit_init_bits if this is appropriate.
*/
-static void mix_pool_bytes(const void *in, size_t nbytes)
+static void mix_pool_bytes(const void *buf, size_t len)
{
unsigned long flags;

spin_lock_irqsave(&input_pool.lock, flags);
- _mix_pool_bytes(in, nbytes);
+ _mix_pool_bytes(buf, len);
spin_unlock_irqrestore(&input_pool.lock, flags);
}

@@ -747,7 +747,7 @@ static void mix_pool_bytes(const void *i
* This is an HKDF-like construction for using the hashed collected entropy
* as a PRF key, that's then expanded block-by-block.
*/
-static void extract_entropy(void *buf, size_t nbytes)
+static void extract_entropy(void *buf, size_t len)
{
unsigned long flags;
u8 seed[BLAKE2S_HASH_SIZE], next_key[BLAKE2S_HASH_SIZE];
@@ -776,12 +776,12 @@ static void extract_entropy(void *buf, s
spin_unlock_irqrestore(&input_pool.lock, flags);
memzero_explicit(next_key, sizeof(next_key));

- while (nbytes) {
- i = min_t(size_t, nbytes, BLAKE2S_HASH_SIZE);
+ while (len) {
+ i = min_t(size_t, len, BLAKE2S_HASH_SIZE);
/* output = HASHPRF(seed, RDSEED || ++counter) */
++block.counter;
blake2s(buf, (u8 *)&block, seed, i, sizeof(block), sizeof(seed));
- nbytes -= i;
+ len -= i;
buf += i;
}

@@ -789,16 +789,16 @@ static void extract_entropy(void *buf, s
memzero_explicit(&block, sizeof(block));
}

-static void credit_init_bits(size_t nbits)
+static void credit_init_bits(size_t bits)
{
static struct execute_work set_ready;
unsigned int new, orig, add;
unsigned long flags;

- if (crng_ready() || !nbits)
+ if (crng_ready() || !bits)
return;

- add = min_t(size_t, nbits, POOL_BITS);
+ add = min_t(size_t, bits, POOL_BITS);

do {
orig = READ_ONCE(input_pool.init_bits);
@@ -834,14 +834,12 @@ static void credit_init_bits(size_t nbit
* The following exported functions are used for pushing entropy into
* the above entropy accumulation routines:
*
- * void add_device_randomness(const void *buf, size_t size);
- * void add_hwgenerator_randomness(const void *buffer, size_t count,
- * size_t entropy);
- * void add_bootloader_randomness(const void *buf, size_t size);
- * void add_vmfork_randomness(const void *unique_vm_id, size_t size);
+ * void add_device_randomness(const void *buf, size_t len);
+ * void add_hwgenerator_randomness(const void *buf, size_t len, size_t entropy);
+ * void add_bootloader_randomness(const void *buf, size_t len);
+ * void add_vmfork_randomness(const void *unique_vm_id, size_t len);
* void add_interrupt_randomness(int irq);
- * void add_input_randomness(unsigned int type, unsigned int code,
- * unsigned int value);
+ * void add_input_randomness(unsigned int type, unsigned int code, unsigned int value);
* void add_disk_randomness(struct gendisk *disk);
*
* add_device_randomness() adds data to the input pool that
@@ -909,7 +907,7 @@ int __init random_init(const char *comma
{
ktime_t now = ktime_get_real();
unsigned int i, arch_bytes;
- unsigned long rv;
+ unsigned long entropy;

#if defined(LATENT_ENTROPY_PLUGIN)
static const u8 compiletime_seed[BLAKE2S_BLOCK_SIZE] __initconst __latent_entropy;
@@ -917,13 +915,13 @@ int __init random_init(const char *comma
#endif

for (i = 0, arch_bytes = BLAKE2S_BLOCK_SIZE;
- i < BLAKE2S_BLOCK_SIZE; i += sizeof(rv)) {
- if (!arch_get_random_seed_long_early(&rv) &&
- !arch_get_random_long_early(&rv)) {
- rv = random_get_entropy();
- arch_bytes -= sizeof(rv);
+ i < BLAKE2S_BLOCK_SIZE; i += sizeof(entropy)) {
+ if (!arch_get_random_seed_long_early(&entropy) &&
+ !arch_get_random_long_early(&entropy)) {
+ entropy = random_get_entropy();
+ arch_bytes -= sizeof(entropy);
}
- _mix_pool_bytes(&rv, sizeof(rv));
+ _mix_pool_bytes(&entropy, sizeof(entropy));
}
_mix_pool_bytes(&now, sizeof(now));
_mix_pool_bytes(utsname(), sizeof(*(utsname())));
@@ -946,14 +944,14 @@ int __init random_init(const char *comma
* the entropy pool having similar initial state across largely
* identical devices.
*/
-void add_device_randomness(const void *buf, size_t size)
+void add_device_randomness(const void *buf, size_t len)
{
unsigned long entropy = random_get_entropy();
unsigned long flags;

spin_lock_irqsave(&input_pool.lock, flags);
_mix_pool_bytes(&entropy, sizeof(entropy));
- _mix_pool_bytes(buf, size);
+ _mix_pool_bytes(buf, len);
spin_unlock_irqrestore(&input_pool.lock, flags);
}
EXPORT_SYMBOL(add_device_randomness);
@@ -963,10 +961,9 @@ EXPORT_SYMBOL(add_device_randomness);
* Those devices may produce endless random bits and will be throttled
* when our pool is full.
*/
-void add_hwgenerator_randomness(const void *buffer, size_t count,
- size_t entropy)
+void add_hwgenerator_randomness(const void *buf, size_t len, size_t entropy)
{
- mix_pool_bytes(buffer, count);
+ mix_pool_bytes(buf, len);
credit_init_bits(entropy);

/*
@@ -982,11 +979,11 @@ EXPORT_SYMBOL_GPL(add_hwgenerator_random
* Handle random seed passed by bootloader, and credit it if
* CONFIG_RANDOM_TRUST_BOOTLOADER is set.
*/
-void add_bootloader_randomness(const void *buf, size_t size)
+void add_bootloader_randomness(const void *buf, size_t len)
{
- mix_pool_bytes(buf, size);
+ mix_pool_bytes(buf, len);
if (trust_bootloader)
- credit_init_bits(size * 8);
+ credit_init_bits(len * 8);
}
EXPORT_SYMBOL_GPL(add_bootloader_randomness);

@@ -998,9 +995,9 @@ static BLOCKING_NOTIFIER_HEAD(vmfork_cha
* don't credit it, but we do immediately force a reseed after so
* that it's used by the crng posthaste.
*/
-void add_vmfork_randomness(const void *unique_vm_id, size_t size)
+void add_vmfork_randomness(const void *unique_vm_id, size_t len)
{
- add_device_randomness(unique_vm_id, size);
+ add_device_randomness(unique_vm_id, len);
if (crng_ready()) {
crng_reseed();
pr_notice("crng reseeded due to virtual machine fork\n");
@@ -1220,8 +1217,7 @@ static void add_timer_randomness(struct
credit_init_bits(bits);
}

-void add_input_randomness(unsigned int type, unsigned int code,
- unsigned int value)
+void add_input_randomness(unsigned int type, unsigned int code, unsigned int value)
{
static unsigned char last_value;
static struct timer_rand_state input_timer_state = { INITIAL_JIFFIES };
@@ -1340,8 +1336,7 @@ static void try_to_generate_entropy(void
*
**********************************************************************/

-SYSCALL_DEFINE3(getrandom, char __user *, buf, size_t, count, unsigned int,
- flags)
+SYSCALL_DEFINE3(getrandom, char __user *, ubuf, size_t, len, unsigned int, flags)
{
if (flags & ~(GRND_NONBLOCK | GRND_RANDOM | GRND_INSECURE))
return -EINVAL;
@@ -1353,8 +1348,8 @@ SYSCALL_DEFINE3(getrandom, char __user *
if ((flags & (GRND_INSECURE | GRND_RANDOM)) == (GRND_INSECURE | GRND_RANDOM))
return -EINVAL;

- if (count > INT_MAX)
- count = INT_MAX;
+ if (len > INT_MAX)
+ len = INT_MAX;

if (!crng_ready() && !(flags & GRND_INSECURE)) {
int ret;
@@ -1365,7 +1360,7 @@ SYSCALL_DEFINE3(getrandom, char __user *
if (unlikely(ret))
return ret;
}
- return get_random_bytes_user(buf, count);
+ return get_random_bytes_user(ubuf, len);
}

static __poll_t random_poll(struct file *file, poll_table *wait)
@@ -1374,21 +1369,21 @@ static __poll_t random_poll(struct file
return crng_ready() ? EPOLLIN | EPOLLRDNORM : EPOLLOUT | EPOLLWRNORM;
}

-static int write_pool(const char __user *ubuf, size_t count)
+static int write_pool(const char __user *ubuf, size_t len)
{
- size_t len;
+ size_t block_len;
int ret = 0;
u8 block[BLAKE2S_BLOCK_SIZE];

- while (count) {
- len = min(count, sizeof(block));
- if (copy_from_user(block, ubuf, len)) {
+ while (len) {
+ block_len = min(len, sizeof(block));
+ if (copy_from_user(block, ubuf, block_len)) {
ret = -EFAULT;
goto out;
}
- count -= len;
- ubuf += len;
- mix_pool_bytes(block, len);
+ len -= block_len;
+ ubuf += block_len;
+ mix_pool_bytes(block, block_len);
cond_resched();
}

@@ -1397,20 +1392,20 @@ out:
return ret;
}

-static ssize_t random_write(struct file *file, const char __user *buffer,
- size_t count, loff_t *ppos)
+static ssize_t random_write(struct file *file, const char __user *ubuf,
+ size_t len, loff_t *ppos)
{
int ret;

- ret = write_pool(buffer, count);
+ ret = write_pool(ubuf, len);
if (ret)
return ret;

- return (ssize_t)count;
+ return (ssize_t)len;
}

-static ssize_t urandom_read(struct file *file, char __user *buf, size_t nbytes,
- loff_t *ppos)
+static ssize_t urandom_read(struct file *file, char __user *ubuf,
+ size_t len, loff_t *ppos)
{
static int maxwarn = 10;

@@ -1427,22 +1422,22 @@ static ssize_t urandom_read(struct file
else if (ratelimit_disable || __ratelimit(&urandom_warning)) {
--maxwarn;
pr_notice("%s: uninitialized urandom read (%zd bytes read)\n",
- current->comm, nbytes);
+ current->comm, len);
}
}

- return get_random_bytes_user(buf, nbytes);
+ return get_random_bytes_user(ubuf, len);
}

-static ssize_t random_read(struct file *file, char __user *buf, size_t nbytes,
- loff_t *ppos)
+static ssize_t random_read(struct file *file, char __user *ubuf,
+ size_t len, loff_t *ppos)
{
int ret;

ret = wait_for_random_bytes();
if (ret != 0)
return ret;
- return get_random_bytes_user(buf, nbytes);
+ return get_random_bytes_user(ubuf, len);
}

static long random_ioctl(struct file *f, unsigned int cmd, unsigned long arg)
@@ -1567,7 +1562,7 @@ static u8 sysctl_bootid[UUID_SIZE];
* UUID. The difference is in whether table->data is NULL; if it is,
* then a new UUID is generated and returned to the user.
*/
-static int proc_do_uuid(struct ctl_table *table, int write, void *buffer,
+static int proc_do_uuid(struct ctl_table *table, int write, void *buf,
size_t *lenp, loff_t *ppos)
{
u8 tmp_uuid[UUID_SIZE], *uuid;
@@ -1594,14 +1589,14 @@ static int proc_do_uuid(struct ctl_table
}

snprintf(uuid_string, sizeof(uuid_string), "%pU", uuid);
- return proc_dostring(&fake_table, 0, buffer, lenp, ppos);
+ return proc_dostring(&fake_table, 0, buf, lenp, ppos);
}

/* The same as proc_dointvec, but writes don't change anything. */
-static int proc_do_rointvec(struct ctl_table *table, int write, void *buffer,
+static int proc_do_rointvec(struct ctl_table *table, int write, void *buf,
size_t *lenp, loff_t *ppos)
{
- return write ? 0 : proc_dointvec(table, 0, buffer, lenp, ppos);
+ return write ? 0 : proc_dointvec(table, 0, buf, lenp, ppos);
}

static struct ctl_table random_table[] = {
--- a/include/linux/random.h
+++ b/include/linux/random.h
@@ -12,12 +12,12 @@

struct notifier_block;

-void add_device_randomness(const void *, size_t);
-void add_bootloader_randomness(const void *, size_t);
+void add_device_randomness(const void *buf, size_t len);
+void add_bootloader_randomness(const void *buf, size_t len);
void add_input_randomness(unsigned int type, unsigned int code,
unsigned int value) __latent_entropy;
void add_interrupt_randomness(int irq) __latent_entropy;
-void add_hwgenerator_randomness(const void *buffer, size_t count, size_t entropy);
+void add_hwgenerator_randomness(const void *buf, size_t len, size_t entropy);

#if defined(LATENT_ENTROPY_PLUGIN) && !defined(__CHECKER__)
static inline void add_latent_entropy(void)
@@ -29,7 +29,7 @@ static inline void add_latent_entropy(vo
#endif

#if IS_ENABLED(CONFIG_VMGENID)
-void add_vmfork_randomness(const void *unique_vm_id, size_t size);
+void add_vmfork_randomness(const void *unique_vm_id, size_t len);
int register_random_vmfork_notifier(struct notifier_block *nb);
int unregister_random_vmfork_notifier(struct notifier_block *nb);
#else
@@ -37,8 +37,8 @@ static inline int register_random_vmfork
static inline int unregister_random_vmfork_notifier(struct notifier_block *nb) { return 0; }
#endif

-void get_random_bytes(void *buf, size_t nbytes);
-size_t __must_check get_random_bytes_arch(void *buf, size_t nbytes);
+void get_random_bytes(void *buf, size_t len);
+size_t __must_check get_random_bytes_arch(void *buf, size_t len);
u32 get_random_u32(void);
u64 get_random_u64(void);
static inline unsigned int get_random_int(void)



2022-05-28 20:17:07

by Guenter Roeck

Subject: Re: [PATCH 5.18 00/47] 5.18.1-rc1 review

On Fri, May 27, 2022 at 10:49:40AM +0200, Greg Kroah-Hartman wrote:
> This is the start of the stable review cycle for the 5.18.1 release.
> There are 47 patches in this series, all will be posted as a response
> to this one. If anyone has any issues with these being applied, please
> let me know.
>
> Responses should be made by Sun, 29 May 2022 08:46:45 +0000.
> Anything received after that time might be too late.
>

Build results:
total: 151 pass: 151 fail: 0
Qemu test results:
total: 489 pass: 489 fail: 0

Tested-by: Guenter Roeck <[email protected]>

Guenter

2022-05-28 20:23:33

by Justin Forbes

Subject: Re: [PATCH 5.18 00/47] 5.18.1-rc1 review

On Fri, May 27, 2022 at 10:49:40AM +0200, Greg Kroah-Hartman wrote:
> This is the start of the stable review cycle for the 5.18.1 release.
> There are 47 patches in this series, all will be posted as a response
> to this one. If anyone has any issues with these being applied, please
> let me know.
>
> Responses should be made by Sun, 29 May 2022 08:46:45 +0000.
> Anything received after that time might be too late.
>
> The whole patch series can be found in one patch at:
> https://www.kernel.org/pub/linux/kernel/v5.x/stable-review/patch-5.18.1-rc1.gz
> or in the git tree and branch at:
> git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git linux-5.18.y
> and the diffstat can be found below.
>
> thanks,
>
> greg k-h

Tested rc1 against the Fedora build system (aarch64, armv7, ppc64le,
s390x, x86_64), and boot tested x86_64. No regressions noted.

Tested-by: Justin M. Forbes <[email protected]>

2022-05-28 20:24:40

by Greg Kroah-Hartman

Subject: [PATCH 5.18 44/47] random: wire up fops->splice_{read,write}_iter()

From: Jens Axboe <[email protected]>

commit 79025e727a846be6fd215ae9cdb654368ac3f9a6 upstream.

Now that random/urandom is using {read,write}_iter, we can wire it up to
using the generic splice handlers.
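
What this enables again, from userspace, is plain splice() on the
character devices, e.g. (an illustrative test, not part of the patch):

#define _GNU_SOURCE
#include <fcntl.h>
#include <unistd.h>

int main(void)
{
	int p[2];
	int fd = open("/dev/urandom", O_RDONLY);

	if (fd < 0 || pipe(p) < 0)
		return 1;
	/* Failed with -EINVAL since 36e2c7421f02; works again with this patch. */
	return splice(fd, NULL, p[1], NULL, 4096, 0) < 0;
}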

Fixes: 36e2c7421f02 ("fs: don't allow splice read/write without explicit ops")
Signed-off-by: Jens Axboe <[email protected]>
[Jason: added the splice_write path. Note that sendfile() and such still
do not work for read, though they do for write, because of a file
type restriction in splice_direct_to_actor(), which I'll address
separately.]
Cc: Al Viro <[email protected]>
Signed-off-by: Jason A. Donenfeld <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
drivers/char/random.c | 4 ++++
1 file changed, 4 insertions(+)

--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -1429,6 +1429,8 @@ const struct file_operations random_fops
.compat_ioctl = compat_ptr_ioctl,
.fasync = random_fasync,
.llseek = noop_llseek,
+ .splice_read = generic_file_splice_read,
+ .splice_write = iter_file_splice_write,
};

const struct file_operations urandom_fops = {
@@ -1438,6 +1440,8 @@ const struct file_operations urandom_fop
.compat_ioctl = compat_ptr_ioctl,
.fasync = random_fasync,
.llseek = noop_llseek,
+ .splice_read = generic_file_splice_read,
+ .splice_write = iter_file_splice_write,
};





2022-05-28 20:26:33

by Greg Kroah-Hartman

Subject: [PATCH 5.18 18/47] sparc: use fallback for random_get_entropy() instead of zero

From: "Jason A. Donenfeld" <[email protected]>

commit ac9756c79797bb98972736b13cfb239fd2cffb79 upstream.

In the event that random_get_entropy() can't access a cycle counter or
similar, falling back to returning 0 is really not the best we can do.
Instead, at least calling random_get_entropy_fallback() would be
preferable, because that always needs to return _something_, even
falling back to jiffies eventually. It's not as though
random_get_entropy_fallback() is super high precision or guaranteed to
be entropic, but basically anything that's not zero all the time is
better than returning zero all the time.

This is accomplished by just including the asm-generic code like on
other architectures, which means we can get rid of the empty stub
function here.

Cc: Thomas Gleixner <[email protected]>
Cc: Arnd Bergmann <[email protected]>
Cc: David S. Miller <[email protected]>
Signed-off-by: Jason A. Donenfeld <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
arch/sparc/include/asm/timex_32.h | 4 +---
1 file changed, 1 insertion(+), 3 deletions(-)

--- a/arch/sparc/include/asm/timex_32.h
+++ b/arch/sparc/include/asm/timex_32.h
@@ -9,8 +9,6 @@

#define CLOCK_TICK_RATE 1193180 /* Underlying HZ */

-/* XXX Maybe do something better at some point... -DaveM */
-typedef unsigned long cycles_t;
-#define get_cycles() (0)
+#include <asm-generic/timex.h>

#endif



2022-05-28 20:27:49

by Greg Kroah-Hartman

Subject: [PATCH 5.18 29/47] random: avoid initializing twice in credit race

From: "Jason A. Donenfeld" <[email protected]>

commit fed7ef061686cc813b1f3d8d0edc6c35b4d3537b upstream.

Since all changes of crng_init now go through credit_init_bits(), we can
fix a long-standing race in which two concurrent callers of
credit_init_bits() both compute a new bit count >= some threshold while
crng_init, checked outside of a lock, still reflects the lower state,
resulting in crng_reseed() or similar being called twice.

In order to fix this, we can use the original cmpxchg value of the bit
count, and only change crng_init when the bit count transitions from
below a threshold to meeting the threshold.

Reviewed-by: Dominik Brodowski <[email protected]>
Signed-off-by: Jason A. Donenfeld <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
drivers/char/random.c | 10 +++++-----
1 file changed, 5 insertions(+), 5 deletions(-)

--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -821,7 +821,7 @@ static void extract_entropy(void *buf, s

static void credit_init_bits(size_t nbits)
{
- unsigned int init_bits, orig, add;
+ unsigned int new, orig, add;
unsigned long flags;

if (crng_ready() || !nbits)
@@ -831,12 +831,12 @@ static void credit_init_bits(size_t nbit

do {
orig = READ_ONCE(input_pool.init_bits);
- init_bits = min_t(unsigned int, POOL_BITS, orig + add);
- } while (cmpxchg(&input_pool.init_bits, orig, init_bits) != orig);
+ new = min_t(unsigned int, POOL_BITS, orig + add);
+ } while (cmpxchg(&input_pool.init_bits, orig, new) != orig);

- if (!crng_ready() && init_bits >= POOL_READY_BITS)
+ if (orig < POOL_READY_BITS && new >= POOL_READY_BITS)
crng_reseed();
- else if (unlikely(crng_init == CRNG_EMPTY && init_bits >= POOL_EARLY_BITS)) {
+ else if (orig < POOL_EARLY_BITS && new >= POOL_EARLY_BITS) {
spin_lock_irqsave(&base_crng.lock, flags);
if (crng_init == CRNG_EMPTY) {
extract_entropy(base_crng.key, sizeof(base_crng.key));



2022-05-28 20:31:41

by Greg Kroah-Hartman

Subject: [PATCH 5.18 06/47] s390: define get_cycles macro for arch-override

From: "Jason A. Donenfeld" <[email protected]>

commit 2e3df523256cb9836de8441e9c791a796759bb3c upstream.

S390x defines a get_cycles() function, but it does not do the usual
`#define get_cycles get_cycles` dance, making it impossible for generic
code to see if an arch-specific function was defined. While the
get_cycles() ifdef is not currently used, the following timekeeping
patch in this series will depend on the macro existing (or not existing)
when defining random_get_entropy().
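
The "dance" works because a macro defined as its own name leaves call
sites compiling to the inline function while making `#ifdef get_cycles`
true in generic code:

static inline cycles_t get_cycles(void)
{
	return (cycles_t) get_tod_clock() >> 2;
}
#define get_cycles get_cycles	/* expands to itself; only #ifdef sees it */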

Cc: Thomas Gleixner <[email protected]>
Cc: Arnd Bergmann <[email protected]>
Cc: Vasily Gorbik <[email protected]>
Cc: Alexander Gordeev <[email protected]>
Cc: Christian Borntraeger <[email protected]>
Cc: Sven Schnelle <[email protected]>
Acked-by: Heiko Carstens <[email protected]>
Signed-off-by: Jason A. Donenfeld <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
arch/s390/include/asm/timex.h | 1 +
1 file changed, 1 insertion(+)

--- a/arch/s390/include/asm/timex.h
+++ b/arch/s390/include/asm/timex.h
@@ -197,6 +197,7 @@ static inline cycles_t get_cycles(void)
{
return (cycles_t) get_tod_clock() >> 2;
}
+#define get_cycles get_cycles

int get_phys_clock(unsigned long *clock);
void init_cpu_timer(void);



2022-05-28 20:36:05

by Greg Kroah-Hartman

Subject: [PATCH 5.18 20/47] random: insist on random_get_entropy() existing in order to simplify

From: "Jason A. Donenfeld" <[email protected]>

commit 4b758eda851eb9336ca86a0041a4d3da55f66511 upstream.

All platforms are now guaranteed to provide some value for
random_get_entropy(). In case some bug leads to this not being so, we
print a warning, because that indicates that something is really very
wrong (and likely other things are impacted too). This should never be
hit, but it's a good and cheap way of finding out if something ever is
problematic.

Since we now have viable fallback code for random_get_entropy() on all
platforms, which is, in the worst case, not worse than jiffies, we can
count on getting the best possible value out of it. That means there's
no longer any use for jiffies as an entropy input. It also means we no
longer have a reason for the round-robin register flow in the IRQ
handler, which was always of fairly dubious value.

Instead we can greatly simplify the IRQ handler inputs and also unify
the construction between 64-bit and 32-bit. We now collect the cycle
counter and the return address, since those are the two things that
matter. Because the return address and the irq number are likely
related, to the extent we mix in the irq number, we can just xor it into
the top unchanging bytes of the return address, rather than the bottom
changing bytes of the cycle counter as before. Then, we can do a fixed 2
rounds of SipHash/HSipHash. Finally, we use the same construction of
hashing only half of the [H]SipHash state on 32-bit and 64-bit. We're
not actually discarding any entropy, since that entropy is carried
through until the next time. And more importantly, it lets us do the
same sponge-like construction everywhere.
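
As a concrete example of that mixing (assuming a 64-bit machine): the
irq number lives in the low bytes of the word, so swab() mirrors it into
the high bytes before the xor, as in the hunk below:

	/* irq 42: swab(0x2aUL) == 0x2a00000000000000UL, so the xor
	 * perturbs only the top, relatively static, bytes of the
	 * return address. */
	fast_mix(fast_pool->pool, (unsigned long[2]){
		entropy,
		(regs ? instruction_pointer(regs) : _RET_IP_) ^ swab(irq)
	});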

Cc: Theodore Ts'o <[email protected]>
Signed-off-by: Jason A. Donenfeld <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
drivers/char/random.c | 86 +++++++++++++++-----------------------------------
1 file changed, 26 insertions(+), 60 deletions(-)

--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -1025,15 +1025,14 @@ int __init rand_initialize(void)
*/
void add_device_randomness(const void *buf, size_t size)
{
- unsigned long cycles = random_get_entropy();
- unsigned long flags, now = jiffies;
+ unsigned long entropy = random_get_entropy();
+ unsigned long flags;

if (crng_init == 0 && size)
crng_pre_init_inject(buf, size, false);

spin_lock_irqsave(&input_pool.lock, flags);
- _mix_pool_bytes(&cycles, sizeof(cycles));
- _mix_pool_bytes(&now, sizeof(now));
+ _mix_pool_bytes(&entropy, sizeof(entropy));
_mix_pool_bytes(buf, size);
spin_unlock_irqrestore(&input_pool.lock, flags);
}
@@ -1056,12 +1055,11 @@ struct timer_rand_state {
*/
static void add_timer_randomness(struct timer_rand_state *state, unsigned int num)
{
- unsigned long cycles = random_get_entropy(), now = jiffies, flags;
+ unsigned long entropy = random_get_entropy(), now = jiffies, flags;
long delta, delta2, delta3;

spin_lock_irqsave(&input_pool.lock, flags);
- _mix_pool_bytes(&cycles, sizeof(cycles));
- _mix_pool_bytes(&now, sizeof(now));
+ _mix_pool_bytes(&entropy, sizeof(entropy));
_mix_pool_bytes(&num, sizeof(num));
spin_unlock_irqrestore(&input_pool.lock, flags);

@@ -1223,7 +1221,6 @@ struct fast_pool {
unsigned long pool[4];
unsigned long last;
unsigned int count;
- u16 reg_idx;
};

static DEFINE_PER_CPU(struct fast_pool, irq_randomness) = {
@@ -1241,13 +1238,13 @@ static DEFINE_PER_CPU(struct fast_pool,
* This is [Half]SipHash-1-x, starting from an empty key. Because
* the key is fixed, it assumes that its inputs are non-malicious,
* and therefore this has no security on its own. s represents the
- * 128 or 256-bit SipHash state, while v represents a 128-bit input.
+ * four-word SipHash state, while v represents a two-word input.
*/
-static void fast_mix(unsigned long s[4], const unsigned long *v)
+static void fast_mix(unsigned long s[4], const unsigned long v[2])
{
size_t i;

- for (i = 0; i < 16 / sizeof(long); ++i) {
+ for (i = 0; i < 2; ++i) {
s[3] ^= v[i];
#ifdef CONFIG_64BIT
s[0] += s[1]; s[1] = rol64(s[1], 13); s[1] ^= s[0]; s[0] = rol64(s[0], 32);
@@ -1287,33 +1284,17 @@ int random_online_cpu(unsigned int cpu)
}
#endif

-static unsigned long get_reg(struct fast_pool *f, struct pt_regs *regs)
-{
- unsigned long *ptr = (unsigned long *)regs;
- unsigned int idx;
-
- if (regs == NULL)
- return 0;
- idx = READ_ONCE(f->reg_idx);
- if (idx >= sizeof(struct pt_regs) / sizeof(unsigned long))
- idx = 0;
- ptr += idx++;
- WRITE_ONCE(f->reg_idx, idx);
- return *ptr;
-}
-
static void mix_interrupt_randomness(struct work_struct *work)
{
struct fast_pool *fast_pool = container_of(work, struct fast_pool, mix);
/*
- * The size of the copied stack pool is explicitly 16 bytes so that we
- * tax mix_pool_byte()'s compression function the same amount on all
- * platforms. This means on 64-bit we copy half the pool into this,
- * while on 32-bit we copy all of it. The entropy is supposed to be
- * sufficiently dispersed between bits that in the sponge-like
- * half case, on average we don't wind up "losing" some.
+ * The size of the copied stack pool is explicitly 2 longs so that we
+ * only ever ingest half of the siphash output each time, retaining
+ * the other half as the next "key" that carries over. The entropy is
+ * supposed to be sufficiently dispersed between bits so on average
+ * we don't wind up "losing" some.
*/
- u8 pool[16];
+ unsigned long pool[2];

/* Check to see if we're running on the wrong CPU due to hotplug. */
local_irq_disable();
@@ -1345,36 +1326,21 @@ static void mix_interrupt_randomness(str
void add_interrupt_randomness(int irq)
{
enum { MIX_INFLIGHT = 1U << 31 };
- unsigned long cycles = random_get_entropy(), now = jiffies;
+ unsigned long entropy = random_get_entropy();
struct fast_pool *fast_pool = this_cpu_ptr(&irq_randomness);
struct pt_regs *regs = get_irq_regs();
unsigned int new_count;
- union {
- u32 u32[4];
- u64 u64[2];
- unsigned long longs[16 / sizeof(long)];
- } irq_data;
-
- if (cycles == 0)
- cycles = get_reg(fast_pool, regs);
-
- if (sizeof(unsigned long) == 8) {
- irq_data.u64[0] = cycles ^ rol64(now, 32) ^ irq;
- irq_data.u64[1] = regs ? instruction_pointer(regs) : _RET_IP_;
- } else {
- irq_data.u32[0] = cycles ^ irq;
- irq_data.u32[1] = now;
- irq_data.u32[2] = regs ? instruction_pointer(regs) : _RET_IP_;
- irq_data.u32[3] = get_reg(fast_pool, regs);
- }

- fast_mix(fast_pool->pool, irq_data.longs);
+ fast_mix(fast_pool->pool, (unsigned long[2]){
+ entropy,
+ (regs ? instruction_pointer(regs) : _RET_IP_) ^ swab(irq)
+ });
new_count = ++fast_pool->count;

if (new_count & MIX_INFLIGHT)
return;

- if (new_count < 64 && (!time_after(now, fast_pool->last + HZ) ||
+ if (new_count < 64 && (!time_is_before_jiffies(fast_pool->last + HZ) ||
unlikely(crng_init == 0)))
return;

@@ -1410,28 +1376,28 @@ static void entropy_timer(struct timer_l
static void try_to_generate_entropy(void)
{
struct {
- unsigned long cycles;
+ unsigned long entropy;
struct timer_list timer;
} stack;

- stack.cycles = random_get_entropy();
+ stack.entropy = random_get_entropy();

/* Slow counter - or none. Don't even bother */
- if (stack.cycles == random_get_entropy())
+ if (stack.entropy == random_get_entropy())
return;

timer_setup_on_stack(&stack.timer, entropy_timer, 0);
while (!crng_ready() && !signal_pending(current)) {
if (!timer_pending(&stack.timer))
mod_timer(&stack.timer, jiffies + 1);
- mix_pool_bytes(&stack.cycles, sizeof(stack.cycles));
+ mix_pool_bytes(&stack.entropy, sizeof(stack.entropy));
schedule();
- stack.cycles = random_get_entropy();
+ stack.entropy = random_get_entropy();
}

del_timer_sync(&stack.timer);
destroy_timer_on_stack(&stack.timer);
- mix_pool_bytes(&stack.cycles, sizeof(stack.cycles));
+ mix_pool_bytes(&stack.entropy, sizeof(stack.entropy));
}





2022-05-28 20:37:27

by Greg Kroah-Hartman

Subject: [PATCH 5.18 21/47] random: do not use batches when !crng_ready()

From: "Jason A. Donenfeld" <[email protected]>

commit cbe89e5a375a51bbb952929b93fa973416fea74e upstream.

It's too hard to keep the batches synchronized, and pointless anyway,
since while !crng_ready() we're updating the base_crng key really often,
where batching only hurts. So instead, if the crng isn't ready, just
call into get_random_bytes(). At this stage nothing is
performance-critical anyhow.

Cc: Theodore Ts'o <[email protected]>
Reviewed-by: Dominik Brodowski <[email protected]>
Signed-off-by: Jason A. Donenfeld <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
drivers/char/random.c | 14 +++++++++++---
1 file changed, 11 insertions(+), 3 deletions(-)

--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -465,10 +465,8 @@ static void crng_pre_init_inject(const v

if (account) {
crng_init_cnt += min_t(size_t, len, CRNG_INIT_CNT_THRESH - crng_init_cnt);
- if (crng_init_cnt >= CRNG_INIT_CNT_THRESH) {
- ++base_crng.generation;
+ if (crng_init_cnt >= CRNG_INIT_CNT_THRESH)
crng_init = 1;
- }
}

spin_unlock_irqrestore(&base_crng.lock, flags);
@@ -624,6 +622,11 @@ u64 get_random_u64(void)

warn_unseeded_randomness(&previous);

+ if (!crng_ready()) {
+ _get_random_bytes(&ret, sizeof(ret));
+ return ret;
+ }
+
local_lock_irqsave(&batched_entropy_u64.lock, flags);
batch = raw_cpu_ptr(&batched_entropy_u64);

@@ -658,6 +661,11 @@ u32 get_random_u32(void)

warn_unseeded_randomness(&previous);

+ if (!crng_ready()) {
+ _get_random_bytes(&ret, sizeof(ret));
+ return ret;
+ }
+
local_lock_irqsave(&batched_entropy_u32.lock, flags);
batch = raw_cpu_ptr(&batched_entropy_u32);




2022-05-28 20:45:57

by Greg Kroah-Hartman

Subject: [PATCH 5.18 46/47] ACPI: sysfs: Fix BERT error region memory mapping

From: Lorenzo Pieralisi <[email protected]>

commit 1bbc21785b7336619fb6a67f1fff5afdaf229acc upstream.

Currently the sysfs interface maps the BERT error region as "memory"
(through acpi_os_map_memory()) in order to copy the error records into
memory buffers through memory operations (eg memory_read_from_buffer()).

The OS cannot detect whether the BERT error region is part of system
RAM or is "device memory" (eg BMC memory), and therefore it cannot
detect which memory attributes the bus to memory supports (and hence the
corresponding kernel mapping), unless firmware provides the required
information.

The acpi_os_map_memory() arch backend implementation determines the
mapping attributes. On arm64, if the BERT error region is not present in
the EFI memory map, the error region is mapped as device-nGnRnE; this
triggers alignment faults since memcpy unaligned accesses are not
allowed in device-nGnRnE regions.

The ACPI sysfs code therefore cannot, by default, map the BERT error
region with memory semantics; it should use a safer default instead.

Change the sysfs code to map the BERT error region as MMIO (through
acpi_os_map_iomem()) and use the memcpy_fromio() interface to read the
error region into the kernel buffer.
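
In terms of the mapping API, the change is roughly the following (a
sketch, not the patch itself; memcpy_fromio() only issues accesses that
are legal on device mappings):

	/* Before: attributes chosen by the arch backend; a plain
	 * memcpy may do unaligned accesses that fault if the region
	 * was mapped device-nGnRnE. */
	void *p = acpi_os_map_memory(addr, size);
	memcpy(buf, p + offset, count);
	acpi_os_unmap_memory(p, size);

	/* After: explicit MMIO semantics. */
	void __iomem *q = acpi_os_map_iomem(addr, size);
	memcpy_fromio(buf, q + offset, count);
	acpi_os_unmap_iomem(q, size);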

Link: https://lore.kernel.org/linux-arm-kernel/[email protected]
Link: https://lore.kernel.org/linux-acpi/CAJZ5v0g+OVbhuUUDrLUCfX_mVqY_e8ubgLTU98=jfjTeb4t+Pw@mail.gmail.com
Signed-off-by: Lorenzo Pieralisi <[email protected]>
Tested-by: Veronika Kabatova <[email protected]>
Tested-by: Aristeu Rozanski <[email protected]>
Acked-by: Ard Biesheuvel <[email protected]>
Signed-off-by: Rafael J. Wysocki <[email protected]>
Cc: dann frazier <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
drivers/acpi/sysfs.c | 25 ++++++++++++++++++-------
1 file changed, 18 insertions(+), 7 deletions(-)

--- a/drivers/acpi/sysfs.c
+++ b/drivers/acpi/sysfs.c
@@ -415,19 +415,30 @@ static ssize_t acpi_data_show(struct fil
loff_t offset, size_t count)
{
struct acpi_data_attr *data_attr;
- void *base;
- ssize_t rc;
+ void __iomem *base;
+ ssize_t size;

data_attr = container_of(bin_attr, struct acpi_data_attr, attr);
+ size = data_attr->attr.size;

- base = acpi_os_map_memory(data_attr->addr, data_attr->attr.size);
+ if (offset < 0)
+ return -EINVAL;
+
+ if (offset >= size)
+ return 0;
+
+ if (count > size - offset)
+ count = size - offset;
+
+ base = acpi_os_map_iomem(data_attr->addr, size);
if (!base)
return -ENOMEM;
- rc = memory_read_from_buffer(buf, count, &offset, base,
- data_attr->attr.size);
- acpi_os_unmap_memory(base, data_attr->attr.size);

- return rc;
+ memcpy_fromio(buf, base + offset, count);
+
+ acpi_os_unmap_iomem(base, size);
+
+ return count;
}

static int acpi_bert_data_init(void *th, struct acpi_data_attr *data_attr)



2022-05-28 20:48:25

by Greg Kroah-Hartman

Subject: [PATCH 5.18 34/47] random: credit architectural init the exact amount

From: "Jason A. Donenfeld" <[email protected]>

commit 12e45a2a6308105469968951e6d563e8f4fea187 upstream.

RDRAND and RDSEED can fail sometimes, which is fine. We currently
initialize the RNG with 512 bits of RDRAND/RDSEED. We only need 256 bits
of those to succeed in order to initialize the RNG. Instead of the
current "all or nothing" approach, credit these contributions with the
amount that was actually contributed.
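
Concretely: the init loop pulls BLAKE2S_BLOCK_SIZE = 64 bytes (512 bits)
in sizeof(long) chunks, i.e. eight 8-byte pulls on a 64-bit machine. If,
say, six of the eight RDSEED/RDRAND pulls succeed, arch_bytes ends up at
48 and we credit 48 * 8 = 384 bits, which still clears the 256-bit
threshold, whereas the old code would have credited nothing after a
single failure.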

Reviewed-by: Dominik Brodowski <[email protected]>
Signed-off-by: Jason A. Donenfeld <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
drivers/char/random.c | 12 ++++++------
1 file changed, 6 insertions(+), 6 deletions(-)

--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -899,9 +899,8 @@ early_param("random.trust_bootloader", p
*/
int __init random_init(const char *command_line)
{
- size_t i;
ktime_t now = ktime_get_real();
- bool arch_init = true;
+ unsigned int i, arch_bytes;
unsigned long rv;

#if defined(LATENT_ENTROPY_PLUGIN)
@@ -909,11 +908,12 @@ int __init random_init(const char *comma
_mix_pool_bytes(compiletime_seed, sizeof(compiletime_seed));
#endif

- for (i = 0; i < BLAKE2S_BLOCK_SIZE; i += sizeof(rv)) {
+ for (i = 0, arch_bytes = BLAKE2S_BLOCK_SIZE;
+ i < BLAKE2S_BLOCK_SIZE; i += sizeof(rv)) {
if (!arch_get_random_seed_long_early(&rv) &&
!arch_get_random_long_early(&rv)) {
rv = random_get_entropy();
- arch_init = false;
+ arch_bytes -= sizeof(rv);
}
_mix_pool_bytes(&rv, sizeof(rv));
}
@@ -924,8 +924,8 @@ int __init random_init(const char *comma

if (crng_ready())
crng_reseed();
- else if (arch_init && trust_cpu)
- credit_init_bits(BLAKE2S_BLOCK_SIZE * 8);
+ else if (trust_cpu)
+ credit_init_bits(arch_bytes * 8);

return 0;
}



2022-05-29 02:49:43

by Fox Chen

Subject: RE: [PATCH 5.18 00/47] 5.18.1-rc1 review

On Fri, 27 May 2022 10:49:40 +0200, Greg Kroah-Hartman <[email protected]> wrote:
> This is the start of the stable review cycle for the 5.18.1 release.
> There are 47 patches in this series, all will be posted as a response
> to this one. If anyone has any issues with these being applied, please
> let me know.
>
> Responses should be made by Sun, 29 May 2022 08:46:45 +0000.
> Anything received after that time might be too late.
>
> The whole patch series can be found in one patch at:
> https://www.kernel.org/pub/linux/kernel/v5.x/stable-review/patch-5.18.1-rc1.gz
> or in the git tree and branch at:
> git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git linux-5.18.y
> and the diffstat can be found below.
>
> thanks,
>
> greg k-h
>

5.18.1-rc1 successfully compiled and booted on my Raspberry Pi 4b (8GB) (bcm2711)

Tested-by: Fox Chen <[email protected]>