Changes v4->v5:
--------------
- Add example code to vDSO addition commit showing intended use and
interaction with allocations.
- Reset buffer to beginning when retrying.
- Rely on generation counter never being zero for fork detection, rather than
adding extra boolean.
- Make use of the __ARCH_WANT_VGETRANDOM_ALLOC macro around the new syscall so
that it's conditional on archs that actually choose to add it, and so they
don't forget to bump __NR_syscalls.
- Separate __cvdso_getrandom() into __cvdso_getrandom() and
__cvdso_getrandom_data() so that powerpc can make a more efficient call.
Changes v3->v4:
--------------
- Split up into small series rather than one big patch.
- Use proper ordering in generation counter reads.
- Make properly generic, not just a hairball with x86, by moving symbols into
correct files.
Changes v2->v3:
--------------
Big changes:
Thomas' previous objection was two-fold: 1) vgetrandom
should really have the same function signature as getrandom, in
addition to all of the same behavior, and 2) having vgetrandom_alloc
be a vDSO function doesn't make sense, because it doesn't actually
need anything from the VDSO data page and it doesn't correspond to an
existing syscall.
After a discussion at Plumbers this last week, we devised the following
ways to fix these: 1) we make the opaque state argument be the last
argument of vgetrandom, rather than the first one, since the real
syscall ignores the additional argument, and that way all the registers
are the same, and no behavior changes; and 2) we make vgetrandom_alloc a
syscall, rather than a vDSO function, which also gives it added
flexibility for the future.
Making those changes also reduced the size of this patch a bit.
Smaller changes:
- Properly add buffer offset position.
- Don't EXPORT_SYMBOL for vDSO code.
- Account for timens and vvar being in swapped pages.
--------------
Two statements:
1) Userspace wants faster cryptographically secure random numbers of
arbitrary size, big or small.
2) Userspace is currently unable to safely roll its own RNG with the
same security profile as getrandom().
Statement (1) has been debated for years, with arguments ranging from
"we need faster cryptographically secure card shuffling!" to "the only
things that actually need good randomness are keys, which are few and
far between" to "actually, TLS CBC nonces are frequent" and so on. I
don't intend to wade into that debate substantially, except to note that
recently glibc added arc4random(), whose goal is to return a
cryptographically secure uint32_t, and there are real user reports of it
being too slow. So here we are.
Statement (2) is more interesting. The kernel is the nexus of all
entropic inputs that influence the RNG. It is in the best position, and
probably the only position, to decide anything at all about the current
state of the RNG and of its entropy. One of the things it uniquely knows
about is when reseeding is necessary.
For example, when a virtual machine is forked, restored, or duplicated,
it's imperative that the RNG doesn't generate the same outputs. For this
reason, there's a small protocol between hypervisors and the kernel that
indicates this has happened, alongside some ID, which the RNG uses to
immediately reseed, so as not to return the same numbers. Were userspace
to expand a getrandom() seed from time T1 for the next hour, and at some
point T2 within that hour the virtual machine forked, userspace would continue to
provide the same numbers to two (or more) different virtual machines,
resulting in potential cryptographic catastrophe. Something similar
happens on resuming from hibernation (or even suspend), with various
compromise scenarios there in mind.
There's a more general reason why userspace rolling its own RNG from a
getrandom() seed is fraught. There's a lot of attention paid to this
particular Linuxism we have of the RNG being initialized and thus
non-blocking or uninitialized and thus blocking until it is initialized.
These are our Two Big States that many hold to be the holy
differentiating factor between safe and not safe, between
cryptographically secure and garbage. The fact is, however, that the
distinction between these two states is a hand-wavy wishy-washy inexact
approximation. Outside of a few exceptional cases (e.g. a HW RNG is
available), we actually don't really ever know with any rigor at all
when the RNG is safe and ready (nor when it's compromised). We do the
best we can to "estimate" it, but entropy estimation is fundamentally
impossible in the general case. So really, we're just doing guess work,
and hoping it's good and conservative enough. Let's then assume that
there's always some potential error involved in this differentiator.
In fact, under the surface, the RNG is engineered around a different
principal, and that is trying to *use* new entropic inputs regularly and
at the right specific moments in time. For example, close to boot time,
the RNG reseeds itself more often than later. At certain events, like VM
fork, the RNG reseeds itself immediately. The various heuristics for
when the RNG will use new entropy, and how often, are really a core
aspect of what the RNG can potentially do decently (and something
that will probably continue to improve in the future beyond random.c's
present set of algorithms). So in your mind, put away the mental
attachment to the Two Big States, which represent an approximation with
a potential margin of error. Instead keep in mind that the RNG's primary
operating heuristic is how often and exactly when it's going to reseed.
So, if userspace takes a seed from getrandom() at point T1, and uses it
for the next hour (or N megabytes or some other meaningless metric),
during that time, potential errors in the Two Big States approximation
are amplified. During that time potential reseeds are being lost,
forgotten, not reflected in the output stream. That's not good.
The simplest statement you could make is that userspace RNGs that expand
a getrandom() seed at some point T1 are nearly always *worse*, in some
way, than just calling getrandom() every time a random number is
desired.
For those reasons, after some discussion on libc-alpha, glibc's
arc4random() now just calls getrandom() on each invocation. That's
trivially safe, and gives us latitude to then make the safe thing faster
without becoming unsafe, at our leisure. Card shuffling isn't
particularly fast, however.
How do we rectify this? By putting a safe implementation of getrandom()
in the vDSO, which has access to whatever information a
particular iteration of random.c is using to make its decisions. I use
that careful language of "particular iteration of random.c", because the
set of things that a vDSO getrandom() implementation might need for making
decisions as good as the kernel's will likely change over time. This
isn't just a matter of exporting certain *data* to userspace. We're not
going to commit to a "data API" where the various heuristics used are
exposed, locking in how the kernel works for decades to come, and then
leave it to various userspaces to roll something on top and shoot
themselves in the foot and have all sorts of complexity disasters.
Rather, vDSO getrandom() is supposed to be the *same exact algorithm*
that runs in the kernel, except it's been hoisted into userspace as
much as possible. And so vDSO getrandom() and kernel getrandom() will
always mirror each other hermetically.
API-wise, the vDSO gains this function:
ssize_t vgetrandom(void *buffer, size_t len, unsigned int flags, void *opaque_state);
The return value and the first 3 arguments are the same as ordinary
getrandom(), while the last argument is a pointer to some state
allocated with vgetrandom_alloc(), explained below. Were all four
arguments passed to the getrandom syscall, nothing different would
happen, and the functions would have the exact same behavior.
Then, we introduce a new syscall:
void *vgetrandom_alloc([inout] size_t *num, [out] size_t *size_per_each, unsigned int flags);
This takes the desired number of opaque states in `num`, and returns a
pointer to an array of opaque states, the number actually allocated back
in `num`, and the size in bytes of each one in `size_per_each`, enabling
a libc to slice up the returned array into a state per thread. (The
`flags` argument is always zero for now.) We very intentionally do *not*
leave state allocation up to the caller of vgetrandom, but provide
vgetrandom_alloc for that allocation. There are too many weird things
that can go wrong, and it's important that vDSO does not provide too
generic of a mechanism. It's not going to store its state in just any
old memory address. It'll do it only in ones it allocates.
Right now this means it's a mlock'd page with WIPEONFORK set. In the
future maybe there will be other interesting page flags or
anti-heartbleed measures, or other platform-specific kernel-specific
things that can be set from the syscall. Again, it's important that the
kernel has a say in how this works rather than agreeing to operate on
any old address; memory isn't neutral.
The syscall currently accomplishes this with a call to vm_mmap() and
then a call to do_madvise(). It'd be nice to do this all at once, but
I'm not sure that a helper function exists for that now, and it seems a
bit premature to add one, at least for now.
The interesting meat of the implementation is in lib/vdso/getrandom.c,
as generic C code, and it aims to mainly follow random.c's buffered fast
key erasure logic. Before the RNG is initialized, it falls back to the
syscall. Right now it uses a simple generation counter to make its decisions
on reseeding (though this could be made more extensive over time).
The actual place that has the most work to do is in all of the other
files. Most of the vDSO shared page infrastructure is centered around
gettimeofday, and so the main structs are all in arrays for different
timestamp types, and attached to time namespaces, and so forth. I've
done the best I could to add onto this in an unintrusive way.
In my test results, performance is pretty stellar (around 15x for uint32_t
generation), and it seems to be working. There's an extended example in the
second commit of this series, showing how the syscall and the vDSO function
are meant to be used together.
Cc: [email protected]
Cc: [email protected]
Cc: Thomas Gleixner <[email protected]>
Cc: Greg Kroah-Hartman <[email protected]>
Cc: Adhemerval Zanella Netto <[email protected]>
Cc: Carlos O'Donell <[email protected]>
Jason A. Donenfeld (3):
random: add vgetrandom_alloc() syscall
random: introduce generic vDSO getrandom() implementation
x86: vdso: Wire up getrandom() vDSO implementation
MAINTAINERS | 2 +
arch/x86/Kconfig | 1 +
arch/x86/entry/syscalls/syscall_32.tbl | 1 +
arch/x86/entry/syscalls/syscall_64.tbl | 1 +
arch/x86/entry/vdso/Makefile | 3 +-
arch/x86/entry/vdso/vdso.lds.S | 2 +
arch/x86/entry/vdso/vgetrandom.c | 16 ++++
arch/x86/include/asm/unistd.h | 1 +
arch/x86/include/asm/vdso/getrandom.h | 37 ++++++++
arch/x86/include/asm/vdso/vsyscall.h | 2 +
arch/x86/include/asm/vvar.h | 16 ++++
drivers/char/random.c | 64 +++++++++++++
include/uapi/asm-generic/unistd.h | 7 +-
include/vdso/datapage.h | 6 ++
kernel/sys_ni.c | 3 +
lib/crypto/chacha.c | 4 +
lib/vdso/Kconfig | 5 ++
lib/vdso/getrandom.c | 115 ++++++++++++++++++++++++
lib/vdso/getrandom.h | 23 +++++
scripts/checksyscalls.sh | 4 +
tools/include/uapi/asm-generic/unistd.h | 7 +-
21 files changed, 317 insertions(+), 3 deletions(-)
create mode 100644 arch/x86/entry/vdso/vgetrandom.c
create mode 100644 arch/x86/include/asm/vdso/getrandom.h
create mode 100644 lib/vdso/getrandom.c
create mode 100644 lib/vdso/getrandom.h
--
2.38.1
The vDSO getrandom() works over an opaque per-thread state of an
unexported size, which must be marked as MADV_WIPEONFORK and be
mlock()'d for proper operation. Over time, the nuances of these
allocations may change or grow or even differ based on architectural
features.
The syscall has the signature:
void *vgetrandom_alloc([inout] size_t *num, [out] size_t *size_per_each, unsigned int flags);
This takes the desired number of opaque states in `num`, and returns a
pointer to an array of opaque states, the number actually allocated back
in `num`, and the size in bytes of each one in `size_per_each`, enabling
a libc to slice up the returned array into a state per thread. (The
`flags` argument is always zero for now.) Libc is expected to allocate a
chunk of these on first use, and then dole them out to threads as
they're created, allocating more when needed. The following commit shows
an example of this, being used in conjunction with the getrandom() vDSO
function.
We very intentionally do *not* leave state allocation for vDSO
getrandom() up to userspace itself, but rather provide this new syscall
for such allocations. vDSO getrandom() must not store its state in just
any old memory address, but rather just ones that the kernel specially
allocates for it, leaving the particularities of those allocations up to
the kernel.
Signed-off-by: Jason A. Donenfeld <[email protected]>
---
MAINTAINERS | 1 +
arch/x86/entry/syscalls/syscall_32.tbl | 1 +
arch/x86/entry/syscalls/syscall_64.tbl | 1 +
arch/x86/include/asm/unistd.h | 1 +
drivers/char/random.c | 59 +++++++++++++++++++++++++
include/uapi/asm-generic/unistd.h | 7 ++-
kernel/sys_ni.c | 3 ++
lib/vdso/getrandom.h | 23 ++++++++++
scripts/checksyscalls.sh | 4 ++
tools/include/uapi/asm-generic/unistd.h | 7 ++-
10 files changed, 105 insertions(+), 2 deletions(-)
create mode 100644 lib/vdso/getrandom.h
diff --git a/MAINTAINERS b/MAINTAINERS
index 256f03904987..843dd6a49538 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -17287,6 +17287,7 @@ T: git https://git.kernel.org/pub/scm/linux/kernel/git/crng/random.git
S: Maintained
F: drivers/char/random.c
F: drivers/virt/vmgenid.c
+F: lib/vdso/getrandom.h
RAPIDIO SUBSYSTEM
M: Matt Porter <[email protected]>
diff --git a/arch/x86/entry/syscalls/syscall_32.tbl b/arch/x86/entry/syscalls/syscall_32.tbl
index 320480a8db4f..ea0fbc2ded5e 100644
--- a/arch/x86/entry/syscalls/syscall_32.tbl
+++ b/arch/x86/entry/syscalls/syscall_32.tbl
@@ -455,3 +455,4 @@
448 i386 process_mrelease sys_process_mrelease
449 i386 futex_waitv sys_futex_waitv
450 i386 set_mempolicy_home_node sys_set_mempolicy_home_node
+451 i386 vgetrandom_alloc sys_vgetrandom_alloc
diff --git a/arch/x86/entry/syscalls/syscall_64.tbl b/arch/x86/entry/syscalls/syscall_64.tbl
index c84d12608cd2..0186f173f0e8 100644
--- a/arch/x86/entry/syscalls/syscall_64.tbl
+++ b/arch/x86/entry/syscalls/syscall_64.tbl
@@ -372,6 +372,7 @@
448 common process_mrelease sys_process_mrelease
449 common futex_waitv sys_futex_waitv
450 common set_mempolicy_home_node sys_set_mempolicy_home_node
+451 common vgetrandom_alloc sys_vgetrandom_alloc
#
# Due to a historical design error, certain syscalls are numbered differently
diff --git a/arch/x86/include/asm/unistd.h b/arch/x86/include/asm/unistd.h
index 761173ccc33c..f9673293cd95 100644
--- a/arch/x86/include/asm/unistd.h
+++ b/arch/x86/include/asm/unistd.h
@@ -57,5 +57,6 @@
# define __ARCH_WANT_SYS_VFORK
# define __ARCH_WANT_SYS_CLONE
# define __ARCH_WANT_SYS_CLONE3
+# define __ARCH_WANT_VGETRANDOM_ALLOC
#endif /* _ASM_X86_UNISTD_H */
diff --git a/drivers/char/random.c b/drivers/char/random.c
index 65ee69896967..ab6e02efa432 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -8,6 +8,7 @@
* into roughly six sections, each with a section header:
*
* - Initialization and readiness waiting.
+ * - vDSO support helpers.
* - Fast key erasure RNG, the "crng".
* - Entropy accumulation and extraction routines.
* - Entropy collection routines.
@@ -39,6 +40,7 @@
#include <linux/blkdev.h>
#include <linux/interrupt.h>
#include <linux/mm.h>
+#include <linux/mman.h>
#include <linux/nodemask.h>
#include <linux/spinlock.h>
#include <linux/kthread.h>
@@ -59,6 +61,7 @@
#include <asm/irq.h>
#include <asm/irq_regs.h>
#include <asm/io.h>
+#include "../../lib/vdso/getrandom.h"
/*********************************************************************
*
@@ -146,6 +149,62 @@ EXPORT_SYMBOL(wait_for_random_bytes);
__func__, (void *)_RET_IP_, crng_init)
+
+/********************************************************************
+ *
+ * vDSO support helpers.
+ *
+ * The actual vDSO function is defined over in lib/vdso/getrandom.c,
+ * but this section contains the kernel-mode helpers to support that.
+ *
+ ********************************************************************/
+
+#ifdef __ARCH_WANT_VGETRANDOM_ALLOC
+/*
+ * The vgetrandom() function in userspace requires an opaque state, which this
+ * function provides to userspace. The result is that it maps a certain
+ * number of special pages into the calling process and returns the address.
+ */
+SYSCALL_DEFINE3(vgetrandom_alloc, unsigned long __user *, num,
+ unsigned long __user *, size_per_each, unsigned int, flags)
+{
+ unsigned long alloc_size;
+ unsigned long num_states;
+ unsigned long pages_addr;
+ int ret;
+
+ if (flags)
+ return -EINVAL;
+
+ if (get_user(num_states, num))
+ return -EFAULT;
+
+ alloc_size = size_mul(num_states, sizeof(struct vgetrandom_state));
+ if (alloc_size == SIZE_MAX)
+ return -EOVERFLOW;
+ alloc_size = roundup(alloc_size, PAGE_SIZE);
+
+ if (put_user(alloc_size / sizeof(struct vgetrandom_state), num) ||
+ put_user(sizeof(struct vgetrandom_state), size_per_each))
+ return -EFAULT;
+
+ pages_addr = vm_mmap(NULL, 0, alloc_size, PROT_READ | PROT_WRITE,
+ MAP_PRIVATE | MAP_ANONYMOUS | MAP_LOCKED, 0);
+ if (IS_ERR_VALUE(pages_addr))
+ return pages_addr;
+
+ ret = do_madvise(current->mm, pages_addr, alloc_size, MADV_WIPEONFORK);
+ if (ret < 0)
+ goto err_unmap;
+
+ return pages_addr;
+
+err_unmap:
+ vm_munmap(pages_addr, alloc_size);
+ return ret;
+}
+#endif
+
/*********************************************************************
*
* Fast key erasure RNG, the "crng".
diff --git a/include/uapi/asm-generic/unistd.h b/include/uapi/asm-generic/unistd.h
index 45fa180cc56a..77b6debe7e18 100644
--- a/include/uapi/asm-generic/unistd.h
+++ b/include/uapi/asm-generic/unistd.h
@@ -886,8 +886,13 @@ __SYSCALL(__NR_futex_waitv, sys_futex_waitv)
#define __NR_set_mempolicy_home_node 450
__SYSCALL(__NR_set_mempolicy_home_node, sys_set_mempolicy_home_node)
+#ifdef __ARCH_WANT_VGETRANDOM_ALLOC
+#define __NR_vgetrandom_alloc 451
+__SYSCALL(__NR_vgetrandom_alloc, sys_vgetrandom_alloc)
+#endif
+
#undef __NR_syscalls
-#define __NR_syscalls 451
+#define __NR_syscalls 452
/*
* 32 bit systems traditionally used different
diff --git a/kernel/sys_ni.c b/kernel/sys_ni.c
index 860b2dcf3ac4..f28196cb919b 100644
--- a/kernel/sys_ni.c
+++ b/kernel/sys_ni.c
@@ -360,6 +360,9 @@ COND_SYSCALL(pkey_free);
/* memfd_secret */
COND_SYSCALL(memfd_secret);
+/* random */
+COND_SYSCALL(vgetrandom_alloc);
+
/*
* Architecture specific weak syscall entries.
*/
diff --git a/lib/vdso/getrandom.h b/lib/vdso/getrandom.h
new file mode 100644
index 000000000000..85c2f62c0f5f
--- /dev/null
+++ b/lib/vdso/getrandom.h
@@ -0,0 +1,23 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2022 Jason A. Donenfeld <[email protected]>. All Rights Reserved.
+ */
+
+#ifndef _VDSO_LIB_GETRANDOM_H
+#define _VDSO_LIB_GETRANDOM_H
+
+#include <crypto/chacha.h>
+
+struct vgetrandom_state {
+ unsigned long generation;
+ union {
+ struct {
+ u8 key[CHACHA_KEY_SIZE];
+ u8 batch[CHACHA_BLOCK_SIZE * 3 / 2];
+ };
+ u8 key_batch[CHACHA_BLOCK_SIZE * 2];
+ };
+ u8 pos;
+};
+
+#endif /* _VDSO_LIB_GETRANDOM_H */
diff --git a/scripts/checksyscalls.sh b/scripts/checksyscalls.sh
index f33e61aca93d..7f7928c6487f 100755
--- a/scripts/checksyscalls.sh
+++ b/scripts/checksyscalls.sh
@@ -44,6 +44,10 @@ cat << EOF
#define __IGNORE_memfd_secret
#endif
+#ifndef __ARCH_WANT_VGETRANDOM_ALLOC
+#define __IGNORE_vgetrandom_alloc
+#endif
+
/* Missing flags argument */
#define __IGNORE_renameat /* renameat2 */
diff --git a/tools/include/uapi/asm-generic/unistd.h b/tools/include/uapi/asm-generic/unistd.h
index 45fa180cc56a..77b6debe7e18 100644
--- a/tools/include/uapi/asm-generic/unistd.h
+++ b/tools/include/uapi/asm-generic/unistd.h
@@ -886,8 +886,13 @@ __SYSCALL(__NR_futex_waitv, sys_futex_waitv)
#define __NR_set_mempolicy_home_node 450
__SYSCALL(__NR_set_mempolicy_home_node, sys_set_mempolicy_home_node)
+#ifdef __ARCH_WANT_VGETRANDOM_ALLOC
+#define __NR_vgetrandom_alloc 451
+__SYSCALL(__NR_vgetrandom_alloc, sys_vgetrandom_alloc)
+#endif
+
#undef __NR_syscalls
-#define __NR_syscalls 451
+#define __NR_syscalls 452
/*
* 32 bit systems traditionally used different
--
2.38.1
Provide a generic C vDSO getrandom() implementation, which operates on
an opaque state returned by vgetrandom_alloc() and produces random bytes
the same way as getrandom(). This has the API signature:
ssize_t vgetrandom(void *buffer, size_t len, unsigned int flags, void *opaque_state);
The return value and the first 3 arguments are the same as ordinary
getrandom(), while the last argument is a pointer to the opaque
allocated state. Were all four arguments passed to the getrandom()
syscall, nothing different would happen, and the functions would have
the exact same behavior.
The actual vDSO RNG algorithm implemented is the same one implemented by
drivers/char/random.c, using the same fast key erasure techniques.
Should the in-kernel implementation change, so too will the vDSO one.
Initially, the state is keyless, and so the first call makes a
getrandom() syscall to generate that key, and then uses it for
subsequent calls. By keeping track of a generation counter, it knows
when its key is invalidated and it should fetch a new one using the
syscall. Later, more than just a generation counter might be used.
Since MADV_WIPEONFORK is set on the opaque state, the key and related
state are wiped during a fork(), so secrets don't roll over into new
processes, and the same state doesn't accidentally generate the same
random stream. The generation counter, as well, is always >0, so that
the 0 counter is a useful indication of a fork() or otherwise
uninitialized state.
If the kernel RNG is not yet initialized, then the vDSO always calls the
syscall, because that behavior cannot be emulated in userspace, but
fortunately that state is short-lived and occurs only during early boot. If it
has been initialized, then there is no need to inspect the `flags`
argument, because the behavior does not change post-initialization
regardless of the `flags` value.
Together with the previous commit that introduces vgetrandom_alloc(),
this functionality is intended to be integrated into libc's thread
management. As an illustrative example, the following code might be used
to do the same outside of libc. All of the static functions are to be
considered implementation private, including the vgetrandom_alloc()
syscall wrapper, which generally shouldn't be exposed outside of libc,
with the non-static vgetrandom() function at the end being the exported
interface. The various pthread-isms are expected to be elided into libc
internals. This per-thread allocation scheme is very naive and does not
shrink; other implementations may choose to be more complex.
static void *vgetrandom_alloc(size_t *num, size_t *size_per_each, unsigned int flags)
{
unsigned long ret = syscall(__NR_vgetrandom_alloc, num, size_per_each, flags);
return ret > -4096UL ? NULL : (void *)ret;
}
static struct {
pthread_mutex_t lock;
void **states;
size_t len, cap;
} grnd_allocator = {
.lock = PTHREAD_MUTEX_INITIALIZER
};
static void *vgetrandom_get_state(void)
{
void *state = NULL;
pthread_mutex_lock(&grnd_allocator.lock);
if (!grnd_allocator.len) {
size_t new_cap, size_per_each, num = 16; /* Just a hint. */
void *new_block = vgetrandom_alloc(&num, &size_per_each, 0), *new_states;
if (!new_block)
goto out;
new_cap = grnd_allocator.cap + num;
new_states = reallocarray(grnd_allocator.states, new_cap, sizeof(*grnd_allocator.states));
if (!new_states) {
munmap(new_block, num * size_per_each);
goto out;
}
grnd_allocator.cap = new_cap;
grnd_allocator.states = new_states;
for (size_t i = 0; i < num; ++i) {
grnd_allocator.states[i] = new_block;
new_block += size_per_each;
}
grnd_allocator.len = num;
}
state = grnd_allocator.states[--grnd_allocator.len];
out:
pthread_mutex_unlock(&grnd_allocator.lock);
return state;
}
static void vgetrandom_put_state(void *state)
{
if (!state)
return;
pthread_mutex_lock(&grnd_allocator.lock);
grnd_allocator.states[grnd_allocator.len++] = state;
pthread_mutex_unlock(&grnd_allocator.lock);
}
static struct {
ssize_t(*fn)(void *buf, size_t len, unsigned long flags, void *state);
pthread_key_t key;
pthread_once_t initialized;
} grnd_ctx = {
.initialized = PTHREAD_ONCE_INIT
};
static void vgetrandom_init(void)
{
if (pthread_key_create(&grnd_ctx.key, vgetrandom_put_state) != 0)
return;
grnd_ctx.fn = __vdsosym("LINUX_2.6", "__vdso_getrandom");
}
ssize_t vgetrandom(void *buf, size_t len, unsigned long flags)
{
void *state;
pthread_once(&grnd_ctx.initialized, vgetrandom_init);
if (!grnd_ctx.fn)
return getrandom(buf, len, flags);
state = pthread_getspecific(grnd_ctx.key);
if (!state) {
state = vgetrandom_get_state();
if (pthread_setspecific(grnd_ctx.key, state) != 0) {
vgetrandom_put_state(state);
state = NULL;
}
if (!state)
return getrandom(buf, len, flags);
}
return grnd_ctx.fn(buf, len, flags, state);
}
Signed-off-by: Jason A. Donenfeld <[email protected]>
---
MAINTAINERS | 1 +
drivers/char/random.c | 5 ++
include/vdso/datapage.h | 6 +++
lib/crypto/chacha.c | 4 ++
lib/vdso/Kconfig | 5 ++
lib/vdso/getrandom.c | 115 ++++++++++++++++++++++++++++++++++++++++
6 files changed, 136 insertions(+)
create mode 100644 lib/vdso/getrandom.c
diff --git a/MAINTAINERS b/MAINTAINERS
index 843dd6a49538..e0aa33f54c57 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -17287,6 +17287,7 @@ T: git https://git.kernel.org/pub/scm/linux/kernel/git/crng/random.git
S: Maintained
F: drivers/char/random.c
F: drivers/virt/vmgenid.c
+F: lib/vdso/getrandom.c
F: lib/vdso/getrandom.h
RAPIDIO SUBSYSTEM
diff --git a/drivers/char/random.c b/drivers/char/random.c
index ab6e02efa432..7dfdbf424c92 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -61,6 +61,7 @@
#include <asm/irq.h>
#include <asm/irq_regs.h>
#include <asm/io.h>
+#include <vdso/datapage.h>
#include "../../lib/vdso/getrandom.h"
/*********************************************************************
@@ -307,6 +308,8 @@ static void crng_reseed(struct work_struct *work)
if (next_gen == ULONG_MAX)
++next_gen;
WRITE_ONCE(base_crng.generation, next_gen);
+ if (IS_ENABLED(CONFIG_HAVE_VDSO_GETRANDOM))
+ smp_store_release(&_vdso_rng_data.generation, next_gen + 1);
if (!static_branch_likely(&crng_is_ready))
crng_init = CRNG_READY;
spin_unlock_irqrestore(&base_crng.lock, flags);
@@ -756,6 +759,8 @@ static void __cold _credit_init_bits(size_t bits)
crng_reseed(NULL); /* Sets crng_init to CRNG_READY under base_crng.lock. */
if (static_key_initialized)
execute_in_process_context(crng_set_ready, &set_ready);
+ if (IS_ENABLED(CONFIG_HAVE_VDSO_GETRANDOM))
+ smp_store_release(&_vdso_rng_data.is_ready, true);
wake_up_interruptible(&crng_init_wait);
kill_fasync(&fasync, SIGIO, POLL_IN);
pr_notice("crng init done\n");
diff --git a/include/vdso/datapage.h b/include/vdso/datapage.h
index 73eb622e7663..cbacfd923a5c 100644
--- a/include/vdso/datapage.h
+++ b/include/vdso/datapage.h
@@ -109,6 +109,11 @@ struct vdso_data {
struct arch_vdso_data arch_data;
};
+struct vdso_rng_data {
+ unsigned long generation;
+ bool is_ready;
+};
+
/*
* We use the hidden visibility to prevent the compiler from generating a GOT
* relocation. Not only is going through a GOT useless (the entry couldn't and
@@ -120,6 +125,7 @@ struct vdso_data {
*/
extern struct vdso_data _vdso_data[CS_BASES] __attribute__((visibility("hidden")));
extern struct vdso_data _timens_data[CS_BASES] __attribute__((visibility("hidden")));
+extern struct vdso_rng_data _vdso_rng_data __attribute__((visibility("hidden")));
/*
* The generic vDSO implementation requires that gettimeofday.h
diff --git a/lib/crypto/chacha.c b/lib/crypto/chacha.c
index b748fd3d256e..944991bb36c7 100644
--- a/lib/crypto/chacha.c
+++ b/lib/crypto/chacha.c
@@ -17,8 +17,10 @@ static void chacha_permute(u32 *x, int nrounds)
{
int i;
+#ifndef CHACHA_FOR_VDSO_INCLUDE
/* whitelist the allowed round counts */
WARN_ON_ONCE(nrounds != 20 && nrounds != 12);
+#endif
for (i = 0; i < nrounds; i += 2) {
x[0] += x[4]; x[12] = rol32(x[12] ^ x[0], 16);
@@ -87,6 +89,7 @@ void chacha_block_generic(u32 *state, u8 *stream, int nrounds)
state[12]++;
}
+#ifndef CHACHA_FOR_VDSO_INCLUDE
EXPORT_SYMBOL(chacha_block_generic);
/**
@@ -112,3 +115,4 @@ void hchacha_block_generic(const u32 *state, u32 *stream, int nrounds)
memcpy(&stream[4], &x[12], 16);
}
EXPORT_SYMBOL(hchacha_block_generic);
+#endif
diff --git a/lib/vdso/Kconfig b/lib/vdso/Kconfig
index d883ac299508..c35fac664574 100644
--- a/lib/vdso/Kconfig
+++ b/lib/vdso/Kconfig
@@ -30,4 +30,9 @@ config GENERIC_VDSO_TIME_NS
Selected by architectures which support time namespaces in the
VDSO
+config HAVE_VDSO_GETRANDOM
+ bool
+ help
+ Selected by architectures that support vDSO getrandom().
+
endif
diff --git a/lib/vdso/getrandom.c b/lib/vdso/getrandom.c
new file mode 100644
index 000000000000..8bef1e92a79d
--- /dev/null
+++ b/lib/vdso/getrandom.c
@@ -0,0 +1,115 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2022 Jason A. Donenfeld <[email protected]>. All Rights Reserved.
+ */
+
+#include <linux/kernel.h>
+#include <linux/atomic.h>
+#include <linux/fs.h>
+#include <vdso/datapage.h>
+#include <asm/vdso/getrandom.h>
+#include <asm/vdso/vsyscall.h>
+#include "getrandom.h"
+
+#undef memcpy
+#define memcpy(d,s,l) __builtin_memcpy(d,s,l)
+#undef memset
+#define memset(d,c,l) __builtin_memset(d,c,l)
+
+#define CHACHA_FOR_VDSO_INCLUDE
+#include "../crypto/chacha.c"
+
+static void memcpy_and_zero(void *dst, void *src, size_t len)
+{
+#define CASCADE(type) \
+ while (len >= sizeof(type)) { \
+ *(type *)dst = *(type *)src; \
+ *(type *)src = 0; \
+ dst += sizeof(type); \
+ src += sizeof(type); \
+ len -= sizeof(type); \
+ }
+#ifdef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS
+#if BITS_PER_LONG == 64
+ CASCADE(u64);
+#endif
+ CASCADE(u32);
+ CASCADE(u16);
+#endif
+ CASCADE(u8);
+#undef CASCADE
+}
+
+static __always_inline ssize_t
+__cvdso_getrandom_data(const struct vdso_rng_data *rng_info, void *buffer, size_t len,
+ unsigned int flags, void *opaque_state)
+{
+ ssize_t ret = min_t(size_t, MAX_RW_COUNT, len);
+ struct vgetrandom_state *state = opaque_state;
+ u32 chacha_state[CHACHA_STATE_WORDS];
+ unsigned long current_generation;
+ size_t batch_len;
+
+ if (unlikely(!rng_info->is_ready))
+ return getrandom_syscall(buffer, len, flags);
+
+ if (unlikely(!len))
+ return 0;
+
+retry_generation:
+ current_generation = READ_ONCE(rng_info->generation);
+ if (unlikely(state->generation != current_generation)) {
+ if (getrandom_syscall(state->key, sizeof(state->key), 0) != sizeof(state->key))
+ return getrandom_syscall(buffer, len, flags);
+ WRITE_ONCE(state->generation, current_generation);
+ state->pos = sizeof(state->batch);
+ }
+
+ len = ret;
+more_batch:
+ batch_len = min_t(size_t, sizeof(state->batch) - state->pos, len);
+ if (batch_len) {
+ memcpy_and_zero(buffer, state->batch + state->pos, batch_len);
+ state->pos += batch_len;
+ buffer += batch_len;
+ len -= batch_len;
+ }
+ if (!len) {
+ /*
+ * Since rng_info->generation will never be 0, we re-read state->generation,
+ * rather than using the local current_generation variable, to learn whether
+ * we forked.
+ */
+ if (unlikely(READ_ONCE(state->generation) != READ_ONCE(rng_info->generation))) {
+ buffer -= ret;
+ goto retry_generation;
+ }
+ return ret;
+ }
+
+ chacha_init_consts(chacha_state);
+ memcpy(&chacha_state[4], state->key, CHACHA_KEY_SIZE);
+ memset(&chacha_state[12], 0, sizeof(u32) * 4);
+
+ while (len >= CHACHA_BLOCK_SIZE) {
+ chacha20_block(chacha_state, buffer);
+ if (unlikely(chacha_state[12] == 0))
+ ++chacha_state[13];
+ buffer += CHACHA_BLOCK_SIZE;
+ len -= CHACHA_BLOCK_SIZE;
+ }
+
+ chacha20_block(chacha_state, state->key_batch);
+ if (unlikely(chacha_state[12] == 0))
+ ++chacha_state[13];
+ chacha20_block(chacha_state, state->key_batch + CHACHA_BLOCK_SIZE);
+ state->pos = 0;
+ memzero_explicit(chacha_state, sizeof(chacha_state));
+ goto more_batch;
+}
+
+static __always_inline ssize_t
+__cvdso_getrandom(void *buffer, size_t len, unsigned int flags, void *opaque_state)
+{
+ return __cvdso_getrandom_data(__arch_get_vdso_rng_data(), buffer, len, flags, opaque_state);
+}
--
2.38.1
Hook up the generic vDSO implementation to the x86 vDSO data page. Since
the existing vDSO infrastructure is heavily based on the timekeeping
functionality, which works over arrays of bases, a new macro is
introduced for vvars that are not arrays.
Signed-off-by: Jason A. Donenfeld <[email protected]>
---
arch/x86/Kconfig | 1 +
arch/x86/entry/vdso/Makefile | 3 ++-
arch/x86/entry/vdso/vdso.lds.S | 2 ++
arch/x86/entry/vdso/vgetrandom.c | 16 ++++++++++++
arch/x86/include/asm/vdso/getrandom.h | 37 +++++++++++++++++++++++++++
arch/x86/include/asm/vdso/vsyscall.h | 2 ++
arch/x86/include/asm/vvar.h | 16 ++++++++++++
7 files changed, 76 insertions(+), 1 deletion(-)
create mode 100644 arch/x86/entry/vdso/vgetrandom.c
create mode 100644 arch/x86/include/asm/vdso/getrandom.h
diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 67745ceab0db..210318da7505 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -269,6 +269,7 @@ config X86
select HAVE_UNSTABLE_SCHED_CLOCK
select HAVE_USER_RETURN_NOTIFIER
select HAVE_GENERIC_VDSO
+ select HAVE_VDSO_GETRANDOM
select HOTPLUG_SMT if SMP
select IRQ_FORCED_THREADING
select NEED_PER_CPU_EMBED_FIRST_CHUNK
diff --git a/arch/x86/entry/vdso/Makefile b/arch/x86/entry/vdso/Makefile
index 3e88b9df8c8f..adc3792dbbac 100644
--- a/arch/x86/entry/vdso/Makefile
+++ b/arch/x86/entry/vdso/Makefile
@@ -27,7 +27,7 @@ VDSO32-$(CONFIG_X86_32) := y
VDSO32-$(CONFIG_IA32_EMULATION) := y
# files to link into the vdso
-vobjs-y := vdso-note.o vclock_gettime.o vgetcpu.o
+vobjs-y := vdso-note.o vclock_gettime.o vgetcpu.o vgetrandom.o
vobjs32-y := vdso32/note.o vdso32/system_call.o vdso32/sigreturn.o
vobjs32-y += vdso32/vclock_gettime.o
vobjs-$(CONFIG_X86_SGX) += vsgx.o
@@ -104,6 +104,7 @@ CFLAGS_REMOVE_vclock_gettime.o = -pg
CFLAGS_REMOVE_vdso32/vclock_gettime.o = -pg
CFLAGS_REMOVE_vgetcpu.o = -pg
CFLAGS_REMOVE_vsgx.o = -pg
+CFLAGS_REMOVE_vgetrandom.o = -pg
#
# X32 processes use x32 vDSO to access 64bit kernel data.
diff --git a/arch/x86/entry/vdso/vdso.lds.S b/arch/x86/entry/vdso/vdso.lds.S
index 4bf48462fca7..1919cc39277e 100644
--- a/arch/x86/entry/vdso/vdso.lds.S
+++ b/arch/x86/entry/vdso/vdso.lds.S
@@ -28,6 +28,8 @@ VERSION {
clock_getres;
__vdso_clock_getres;
__vdso_sgx_enter_enclave;
+ getrandom;
+ __vdso_getrandom;
local: *;
};
}
diff --git a/arch/x86/entry/vdso/vgetrandom.c b/arch/x86/entry/vdso/vgetrandom.c
new file mode 100644
index 000000000000..0a0c0ad93cd0
--- /dev/null
+++ b/arch/x86/entry/vdso/vgetrandom.c
@@ -0,0 +1,16 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (C) 2022 Jason A. Donenfeld <[email protected]>. All Rights Reserved.
+ */
+#include <linux/kernel.h>
+#include <linux/types.h>
+
+#include "../../../../lib/vdso/getrandom.c"
+
+ssize_t __vdso_getrandom(void *buffer, size_t len, unsigned int flags, void *state)
+{
+ return __cvdso_getrandom(buffer, len, flags, state);
+}
+
+ssize_t getrandom(void *, size_t, unsigned int, void *)
+ __attribute__((weak, alias("__vdso_getrandom")));
diff --git a/arch/x86/include/asm/vdso/getrandom.h b/arch/x86/include/asm/vdso/getrandom.h
new file mode 100644
index 000000000000..c414043e975d
--- /dev/null
+++ b/arch/x86/include/asm/vdso/getrandom.h
@@ -0,0 +1,37 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2022 Jason A. Donenfeld <[email protected]>. All Rights Reserved.
+ */
+#ifndef __ASM_VDSO_GETRANDOM_H
+#define __ASM_VDSO_GETRANDOM_H
+
+#ifndef __ASSEMBLY__
+
+#include <asm/unistd.h>
+#include <asm/vvar.h>
+
+static __always_inline ssize_t
+getrandom_syscall(void *buffer, size_t len, unsigned int flags)
+{
+ long ret;
+
+ asm ("syscall" : "=a" (ret) :
+ "0" (__NR_getrandom), "D" (buffer), "S" (len), "d" (flags) :
+ "rcx", "r11", "memory");
+
+ return ret;
+}
+
+#define __vdso_rng_data (VVAR(_vdso_rng_data))
+
+static __always_inline const struct vdso_rng_data *__arch_get_vdso_rng_data(void)
+{
+ if (__vdso_data->clock_mode == VDSO_CLOCKMODE_TIMENS)
+ return (void *)&__vdso_rng_data +
+ ((void *)&__timens_vdso_data - (void *)&__vdso_data);
+ return &__vdso_rng_data;
+}
+
+#endif /* !__ASSEMBLY__ */
+
+#endif /* __ASM_VDSO_GETRANDOM_H */
diff --git a/arch/x86/include/asm/vdso/vsyscall.h b/arch/x86/include/asm/vdso/vsyscall.h
index be199a9b2676..71c56586a22f 100644
--- a/arch/x86/include/asm/vdso/vsyscall.h
+++ b/arch/x86/include/asm/vdso/vsyscall.h
@@ -11,6 +11,8 @@
#include <asm/vvar.h>
DEFINE_VVAR(struct vdso_data, _vdso_data);
+DEFINE_VVAR_SINGLE(struct vdso_rng_data, _vdso_rng_data);
+
/*
* Update the vDSO data page to keep in sync with kernel timekeeping.
*/
diff --git a/arch/x86/include/asm/vvar.h b/arch/x86/include/asm/vvar.h
index 183e98e49ab9..9d9af37f7cab 100644
--- a/arch/x86/include/asm/vvar.h
+++ b/arch/x86/include/asm/vvar.h
@@ -26,6 +26,8 @@
*/
#define DECLARE_VVAR(offset, type, name) \
EMIT_VVAR(name, offset)
+#define DECLARE_VVAR_SINGLE(offset, type, name) \
+ EMIT_VVAR(name, offset)
#else
@@ -37,6 +39,10 @@ extern char __vvar_page;
extern type timens_ ## name[CS_BASES] \
__attribute__((visibility("hidden"))); \
+#define DECLARE_VVAR_SINGLE(offset, type, name) \
+ extern type vvar_ ## name \
+ __attribute__((visibility("hidden"))); \
+
#define VVAR(name) (vvar_ ## name)
#define TIMENS(name) (timens_ ## name)
@@ -44,12 +50,22 @@ extern char __vvar_page;
type name[CS_BASES] \
__attribute__((section(".vvar_" #name), aligned(16))) __visible
+#define DEFINE_VVAR_SINGLE(type, name) \
+ type name \
+ __attribute__((section(".vvar_" #name), aligned(16))) __visible
+
#endif
/* DECLARE_VVAR(offset, type, name) */
DECLARE_VVAR(128, struct vdso_data, _vdso_data)
+#if !defined(_SINGLE_DATA)
+#define _SINGLE_DATA
+DECLARE_VVAR_SINGLE(640, struct vdso_rng_data, _vdso_rng_data)
+#endif
+
#undef DECLARE_VVAR
+#undef DECLARE_VVAR_SINGLE
#endif
--
2.38.1
On Sat, Nov 19, 2022 at 01:09:27PM +0100, Jason A. Donenfeld wrote:
> +SYSCALL_DEFINE3(vgetrandom_alloc, unsigned long __user *, num,
> + unsigned long __user *, size_per_each, unsigned int, flags)
> +{
> + unsigned long alloc_size;
> + unsigned long num_states;
> + unsigned long pages_addr;
> + int ret;
> +
> + if (flags)
> + return -EINVAL;
> +
> + if (get_user(num_states, num))
> + return -EFAULT;
> +
> + alloc_size = size_mul(num_states, sizeof(struct vgetrandom_state));
> + if (alloc_size == SIZE_MAX)
> + return -EOVERFLOW;
> + alloc_size = roundup(alloc_size, PAGE_SIZE);
Small detail: the roundup to PAGE_SIZE can make alloc_size overflow to 0.
Also, 'roundup(alloc_size, PAGE_SIZE)' could be 'PAGE_ALIGN(alloc_size)'.
> + pages_addr = vm_mmap(NULL, 0, alloc_size, PROT_READ | PROT_WRITE,
> + MAP_PRIVATE | MAP_ANONYMOUS | MAP_LOCKED, 0);
> + if (IS_ERR_VALUE(pages_addr))
> + return pages_addr;
This will only succeed if the userspace process has permission to mlock pages,
i.e. if there is space available in RLIMIT_MEMLOCK or if process has
CAP_IPC_LOCK. I suppose this is working as intended, as this syscall can be
used to try to allocate and mlock arbitrary amounts of memory.
I wonder if this permission check will cause problems. Maybe there could be a
way to relax it for just one page per task? I don't know how that would work,
though, especially when the planned usage involves userspace allocating a single
pool of these contexts per process that get handed out to threads.
- Eric
On Sat, Nov 19, 2022 at 01:09:28PM +0100, Jason A. Donenfeld wrote:
> diff --git a/drivers/char/random.c b/drivers/char/random.c
> index ab6e02efa432..7dfdbf424c92 100644
> --- a/drivers/char/random.c
> +++ b/drivers/char/random.c
> @@ -61,6 +61,7 @@
> #include <asm/irq.h>
> #include <asm/irq_regs.h>
> #include <asm/io.h>
> +#include <vdso/datapage.h>
> #include "../../lib/vdso/getrandom.h"
>
> /*********************************************************************
> @@ -307,6 +308,8 @@ static void crng_reseed(struct work_struct *work)
> if (next_gen == ULONG_MAX)
> ++next_gen;
> WRITE_ONCE(base_crng.generation, next_gen);
> + if (IS_ENABLED(CONFIG_HAVE_VDSO_GETRANDOM))
> + smp_store_release(&_vdso_rng_data.generation, next_gen + 1);
Is the purpose of the smp_store_release() here to order the writes of
base_crng.generation and _vdso_rng_data.generation? It could use a comment.
> if (!static_branch_likely(&crng_is_ready))
> crng_init = CRNG_READY;
> spin_unlock_irqrestore(&base_crng.lock, flags);
> @@ -756,6 +759,8 @@ static void __cold _credit_init_bits(size_t bits)
> crng_reseed(NULL); /* Sets crng_init to CRNG_READY under base_crng.lock. */
> if (static_key_initialized)
> execute_in_process_context(crng_set_ready, &set_ready);
> + if (IS_ENABLED(CONFIG_HAVE_VDSO_GETRANDOM))
> + smp_store_release(&_vdso_rng_data.is_ready, true);
Similarly, is the purpose of this smp_store_release() to order the writes to
the generation counters and is_ready?  It could use a comment.
> diff --git a/lib/vdso/getrandom.c b/lib/vdso/getrandom.c
> new file mode 100644
> index 000000000000..8bef1e92a79d
> --- /dev/null
> +++ b/lib/vdso/getrandom.c
> @@ -0,0 +1,115 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Copyright (C) 2022 Jason A. Donenfeld <[email protected]>. All Rights Reserved.
> + */
> +
> +#include <linux/kernel.h>
> +#include <linux/atomic.h>
> +#include <linux/fs.h>
> +#include <vdso/datapage.h>
> +#include <asm/vdso/getrandom.h>
> +#include <asm/vdso/vsyscall.h>
> +#include "getrandom.h"
> +
> +#undef memcpy
> +#define memcpy(d,s,l) __builtin_memcpy(d,s,l)
> +#undef memset
> +#define memset(d,c,l) __builtin_memset(d,c,l)
> +
> +#define CHACHA_FOR_VDSO_INCLUDE
> +#include "../crypto/chacha.c"
> +
> +static void memcpy_and_zero(void *dst, void *src, size_t len)
> +{
> +#define CASCADE(type) \
> + while (len >= sizeof(type)) { \
> + *(type *)dst = *(type *)src; \
> + *(type *)src = 0; \
> + dst += sizeof(type); \
> + src += sizeof(type); \
> + len -= sizeof(type); \
> + }
> +#ifdef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS
> +#if BITS_PER_LONG == 64
> + CASCADE(u64);
> +#endif
> + CASCADE(u32);
> + CASCADE(u16);
> +#endif
> + CASCADE(u8);
> +#undef CASCADE
> +}
CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS doesn't mean that dereferencing
misaligned pointers is okay. You still need to use get_unaligned() and
put_unaligned(). Take a look at crypto_xor(), for example.
> +static __always_inline ssize_t
> +__cvdso_getrandom_data(const struct vdso_rng_data *rng_info, void *buffer, size_t len,
> + unsigned int flags, void *opaque_state)
> +{
> + ssize_t ret = min_t(size_t, MAX_RW_COUNT, len);
> + struct vgetrandom_state *state = opaque_state;
> + u32 chacha_state[CHACHA_STATE_WORDS];
> + unsigned long current_generation;
> + size_t batch_len;
> +
> + if (unlikely(!rng_info->is_ready))
> + return getrandom_syscall(buffer, len, flags);
> +
> + if (unlikely(!len))
> + return 0;
> +
> +retry_generation:
> + current_generation = READ_ONCE(rng_info->generation);
> + if (unlikely(state->generation != current_generation)) {
> + if (getrandom_syscall(state->key, sizeof(state->key), 0) != sizeof(state->key))
> + return getrandom_syscall(buffer, len, flags);
> + WRITE_ONCE(state->generation, current_generation);
> + state->pos = sizeof(state->batch);
> + }
> +
> + len = ret;
> +more_batch:
> + batch_len = min_t(size_t, sizeof(state->batch) - state->pos, len);
> + if (batch_len) {
> + memcpy_and_zero(buffer, state->batch + state->pos, batch_len);
> + state->pos += batch_len;
> + buffer += batch_len;
> + len -= batch_len;
> + }
> + if (!len) {
> + /*
> + * Since rng_info->generation will never be 0, we re-read state->generation,
> + * rather than using the local current_generation variable, to learn whether
> + * we forked.
> + */
> + if (unlikely(READ_ONCE(state->generation) != READ_ONCE(rng_info->generation))) {
> + buffer -= ret;
> + goto retry_generation;
> + }
> + return ret;
> + }
> +
> + chacha_init_consts(chacha_state);
> + memcpy(&chacha_state[4], state->key, CHACHA_KEY_SIZE);
> + memset(&chacha_state[12], 0, sizeof(u32) * 4);
> +
> + while (len >= CHACHA_BLOCK_SIZE) {
> + chacha20_block(chacha_state, buffer);
> + if (unlikely(chacha_state[12] == 0))
> + ++chacha_state[13];
> + buffer += CHACHA_BLOCK_SIZE;
> + len -= CHACHA_BLOCK_SIZE;
> + }
> +
> + chacha20_block(chacha_state, state->key_batch);
> + if (unlikely(chacha_state[12] == 0))
> + ++chacha_state[13];
> + chacha20_block(chacha_state, state->key_batch + CHACHA_BLOCK_SIZE);
> + state->pos = 0;
> + memzero_explicit(chacha_state, sizeof(chacha_state));
> + goto more_batch;
> +}
There's a lot of subtle stuff going on here. Adding some more comments would be
helpful. Maybe bring in some of the explanation that's currently only in the
commit message.
One question I have is about forking. So, when a thread calls fork(), in the
child the kernel automatically replaces all vgetrandom_state pages with zeroed
pages (due to MADV_WIPEONFORK). If the child calls __cvdso_getrandom_data()
afterwards, it sees the zeroed state. But that's indistinguishable from the
state at the very beginning, after sys_vgetrandom_alloc() was just called,
right? So as long as this code handles initializing the state at the beginning,
then I'd think it would naturally handle fork() as well.
However, it seems you have something a bit more subtle in mind, where the thread
calls fork() *while* it's in the middle of __cvdso_getrandom_data(). I guess
you are thinking of the case where a signal is sent to the thread while it's
executing __cvdso_getrandom_data(), and then the signal handler calls fork()?
Note that it doesn't matter if a different thread in the *process* calls fork().
If it's possible for the thread to fork() (and hence for the vgetrandom_state to
be zeroed) at absolutely any time, it probably would be a good idea to mark that
whole struct as volatile.
- Eric
Hi Eric,
On Sat, Nov 19, 2022 at 12:39:26PM -0800, Eric Biggers wrote:
> On Sat, Nov 19, 2022 at 01:09:27PM +0100, Jason A. Donenfeld wrote:
> > +SYSCALL_DEFINE3(vgetrandom_alloc, unsigned long __user *, num,
> > + unsigned long __user *, size_per_each, unsigned int, flags)
> > +{
> > + unsigned long alloc_size;
> > + unsigned long num_states;
> > + unsigned long pages_addr;
> > + int ret;
> > +
> > + if (flags)
> > + return -EINVAL;
> > +
> > + if (get_user(num_states, num))
> > + return -EFAULT;
> > +
> > + alloc_size = size_mul(num_states, sizeof(struct vgetrandom_state));
> > + if (alloc_size == SIZE_MAX)
> > + return -EOVERFLOW;
> > + alloc_size = roundup(alloc_size, PAGE_SIZE);
>
> Small detail: the roundup to PAGE_SIZE can make alloc_size overflow to 0.
>
> Also, 'roundup(alloc_size, PAGE_SIZE)' could be 'PAGE_ALIGN(alloc_size)'.
Good catch, thanks. So perhaps this?
alloc_size = size_mul(num_states, sizeof(struct vgetrandom_state));
if (alloc_size > SIZE_MAX - PAGE_SIZE + 1)
return -EOVERFLOW;
alloc_size = PAGE_ALIGN(alloc_size);
Does that look right?
> > + pages_addr = vm_mmap(NULL, 0, alloc_size, PROT_READ | PROT_WRITE,
> > + MAP_PRIVATE | MAP_ANONYMOUS | MAP_LOCKED, 0);
> > + if (IS_ERR_VALUE(pages_addr))
> > + return pages_addr;
>
> This will only succeed if the userspace process has permission to mlock pages,
> i.e. if there is space available in RLIMIT_MEMLOCK or if process has
> CAP_IPC_LOCK. I suppose this is working as intended, as this syscall can be
> used to try to allocate and mlock arbitrary amounts of memory.
>
> I wonder if this permission check will cause problems. Maybe there could be a
> way to relax it for just one page per task? I don't know how that would work,
> though, especially when the planned usage involves userspace allocating a single
> pool of these contexts per process that get handed out to threads.
Probably though, we don't want to create a mlock backdoor, right? I
suppose if a user is above RLIMIT_MEMLOCK, it'll just fallback to the
slowpath, which still works. That seems like an okay enough
circumstance.
Jason
Hi Eric,
On Sat, Nov 19, 2022 at 03:10:12PM -0800, Eric Biggers wrote:
> > + if (IS_ENABLED(CONFIG_HAVE_VDSO_GETRANDOM))
> > + smp_store_release(&_vdso_rng_data.generation, next_gen + 1);
>
> Is the purpose of the smp_store_release() here to order the writes of
> base_crng.generation and _vdso_rng_data.generation? It could use a comment.
>
> > + if (IS_ENABLED(CONFIG_HAVE_VDSO_GETRANDOM))
> > + smp_store_release(&_vdso_rng_data.is_ready, true);
>
> Similarly, is the purpose of this smp_store_release() to order the writes to
> the generation counters and is_ready?  It could use a comment.
Yes, I guess so. Actually this comes from an unexplored IRC comment from
Andy back in July:
2022-07-29 21:21:56 <amluto> zx2c4: WRITE_ONCE(_vdso_rng_data.generation, next_gen + 1);
2022-07-29 21:22:23 <amluto> For x86 it shouldn’t matter much. For portability, smp_store_release
Though maybe that doesn't actually matter much? When the userspace CPU
learns about a change to vdso_rng_data, its only course of action is to
make a syscall to getrandom() anyway, and those paths should be
consistent with themselves, thanks to the same locking and
synchronization that's always been there. So maybe I actually should
move back to WRITE_ONCE() here? Hm?
> > +static void memcpy_and_zero(void *dst, void *src, size_t len)
> > +{
> > +#define CASCADE(type) \
> > + while (len >= sizeof(type)) { \
> > + *(type *)dst = *(type *)src; \
> > + *(type *)src = 0; \
> > + dst += sizeof(type); \
> > + src += sizeof(type); \
> > + len -= sizeof(type); \
> > + }
> > +#ifdef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS
> > +#if BITS_PER_LONG == 64
> > + CASCADE(u64);
> > +#endif
> > + CASCADE(u32);
> > + CASCADE(u16);
> > +#endif
> > + CASCADE(u8);
> > +#undef CASCADE
> > +}
>
> CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS doesn't mean that dereferencing
> misaligned pointers is okay. You still need to use get_unaligned() and
> put_unaligned(). Take a look at crypto_xor(), for example.
Right, thanks. Will do.
> There's a lot of subtle stuff going on here. Adding some more comments would be
> helpful. Maybe bring in some of the explanation that's currently only in the
> commit message.
Good idea.
> One question I have is about forking. So, when a thread calls fork(), in the
> child the kernel automatically replaces all vgetrandom_state pages with zeroed
> pages (due to MADV_WIPEONFORK). If the child calls __cvdso_getrandom_data()
> afterwards, it sees the zeroed state. But that's indistinguishable from the
> state at the very beginning, after sys_vgetrandom_alloc() was just called,
> right? So as long as this code handles initializing the state at the beginning,
> then I'd think it would naturally handle fork() as well.
Right, for this simple fork() case, it works fine. There are other cases
though that are trickier...
> However, it seems you have something a bit more subtle in mind, where the thread
> calls fork() *while* it's in the middle of __cvdso_getrandom_data(). I guess
> you are thinking of the case where a signal is sent to the thread while it's
> executing __cvdso_getrandom_data(), and then the signal handler calls fork()?
> Note that it doesn't matter if a different thread in the *process* calls fork().
>
> If it's possible for the thread to fork() (and hence for the vgetrandom_state to
> be zeroed) at absolutely any time, it probably would be a good idea to mark that
> whole struct as volatile.
Actually, this isn't something that matters, I don't think. If
state->key_batch is zeroed, the result will be wrong, but the function
logic will be fine. If state->pos is zeroed, it'll write to the
beginning of the batch, which might be wrong, but the function logic
will still be fine. That is, in both of these cases, even if the
calculation is wrong, there's no memory corruption or anything. So then,
the remaining member is state->generation. If this is zeroed, then it's
actually something we detect with that READ_ONCE()! And in this case,
it's a sign that something is off -- we forked -- and so we should start
over from the beginning. So I don't think there's a reason to mark the
whole struct as volatile. The one we care about is state->generation,
and for that we READ_ONCE() it at the place that matters.
There's actually a different scenario, though, that I'm concerned about,
and this is the case in which a multithreaded program forks in the
middle of one of its threads running this. Indeed, only the calling
thread will carry forward into the child process, but all the memory is
still left around from any concurrent threads in the middle of
vgetrandom(). And if they're in the middle of a vgetrandom() call, that
means they haven't yet done erasure and cleaned up the stack to prevent
their state from leaking, and so forward secrecy is potentially lost,
since the child process now has some state from the parent.
I'm not quite sure what the best approach here is. One idea would be to
just note that libcs should wait until vgetrandom() has returned
everywhere before forking, using its atfork functionality. Another
approach would be to say that multithreaded programs using this
shouldn't fork or something, but that seems disappointing. Or more state
could be allocated in the zeroing region, to hold a chacha state, so
another 64 bytes, which would be sort of unfortunate. Or something else?
I'd be interested to hear your impression of this quandary.
Jason
On Sun, Nov 20, 2022 at 12:59:07AM +0100, Jason A. Donenfeld wrote:
> Hi Eric,
>
> On Sat, Nov 19, 2022 at 12:39:26PM -0800, Eric Biggers wrote:
> > On Sat, Nov 19, 2022 at 01:09:27PM +0100, Jason A. Donenfeld wrote:
> > > +SYSCALL_DEFINE3(vgetrandom_alloc, unsigned long __user *, num,
> > > + unsigned long __user *, size_per_each, unsigned int, flags)
> > > +{
> > > + unsigned long alloc_size;
> > > + unsigned long num_states;
> > > + unsigned long pages_addr;
> > > + int ret;
> > > +
> > > + if (flags)
> > > + return -EINVAL;
> > > +
> > > + if (get_user(num_states, num))
> > > + return -EFAULT;
> > > +
> > > + alloc_size = size_mul(num_states, sizeof(struct vgetrandom_state));
> > > + if (alloc_size == SIZE_MAX)
> > > + return -EOVERFLOW;
> > > + alloc_size = roundup(alloc_size, PAGE_SIZE);
> >
> > Small detail: the roundup to PAGE_SIZE can make alloc_size overflow to 0.
> >
> > Also, 'roundup(alloc_size, PAGE_SIZE)' could be 'PAGE_ALIGN(alloc_size)'.
>
> Good catch, thanks. So perhaps this?
>
> alloc_size = size_mul(num_states, sizeof(struct vgetrandom_state));
> if (alloc_size > SIZE_MAX - PAGE_SIZE + 1)
> return -EOVERFLOW;
> alloc_size = PAGE_ALIGN(alloc_size);
>
> Does that look right?
Yes. Maybe use 'SIZE_MAX & PAGE_MASK'?
Another alternative is the following:
if (num_states >
(SIZE_MAX & PAGE_MASK) / sizeof(struct vgetrandom_state))
return -EOVERFLOW;
alloc_size = PAGE_ALIGN(num_states * sizeof(struct vgetrandom_state));
- Eric
On Sat, Nov 19, 2022 at 05:40:04PM -0800, Eric Biggers wrote:
> On Sun, Nov 20, 2022 at 12:59:07AM +0100, Jason A. Donenfeld wrote:
> > Hi Eric,
> >
> > On Sat, Nov 19, 2022 at 12:39:26PM -0800, Eric Biggers wrote:
> > > On Sat, Nov 19, 2022 at 01:09:27PM +0100, Jason A. Donenfeld wrote:
> > > > +SYSCALL_DEFINE3(vgetrandom_alloc, unsigned long __user *, num,
> > > > + unsigned long __user *, size_per_each, unsigned int, flags)
> > > > +{
> > > > + unsigned long alloc_size;
> > > > + unsigned long num_states;
> > > > + unsigned long pages_addr;
> > > > + int ret;
> > > > +
> > > > + if (flags)
> > > > + return -EINVAL;
> > > > +
> > > > + if (get_user(num_states, num))
> > > > + return -EFAULT;
> > > > +
> > > > + alloc_size = size_mul(num_states, sizeof(struct vgetrandom_state));
> > > > + if (alloc_size == SIZE_MAX)
> > > > + return -EOVERFLOW;
> > > > + alloc_size = roundup(alloc_size, PAGE_SIZE);
> > >
> > > Small detail: the roundup to PAGE_SIZE can make alloc_size overflow to 0.
> > >
> > > Also, 'roundup(alloc_size, PAGE_SIZE)' could be 'PAGE_ALIGN(alloc_size)'.
> >
> > Good catch, thanks. So perhaps this?
> >
> > alloc_size = size_mul(num_states, sizeof(struct vgetrandom_state));
> > if (alloc_size > SIZE_MAX - PAGE_SIZE + 1)
> > return -EOVERFLOW;
> > alloc_size = PAGE_ALIGN(alloc_size);
> >
> > Does that look right?
>
> Yes. Maybe use 'SIZE_MAX & PAGE_MASK'?
>
> Another alternative is the following:
>
> if (num_states >
> (SIZE_MAX & PAGE_MASK) / sizeof(struct vgetrandom_state))
> return -EOVERFLOW;
> alloc_size = PAGE_ALIGN(num_states * sizeof(struct vgetrandom_state));
Thanks, that's much nicer.
Jason
On Sun, Nov 20, 2022 at 01:53:53AM +0100, Jason A. Donenfeld wrote:
> shouldn't fork or something, but that seems disappointing. Or more state
> could be allocated in the zeroing region, to hold a chacha state, so
> another 64 bytes, which would be sort of unfortunate. Or something else?
> I'd be interested to hear your impression of this quandary.
Another 128 bytes, actually. And the current chacha in there isn't
cleaning up its stack as one might hope. So maybe the cleanest solution
would be to just bite the bullet and allocate another 128 bytes per
state and make a mini chacha that operates over that? (And I guess hope
it doesn't need to spill and such...)
Jason
On Sun, Nov 20, 2022 at 01:53:53AM +0100, Jason A. Donenfeld wrote:
> Hi Eric,
>
> On Sat, Nov 19, 2022 at 03:10:12PM -0800, Eric Biggers wrote:
> > > + if (IS_ENABLED(CONFIG_HAVE_VDSO_GETRANDOM))
> > > + smp_store_release(&_vdso_rng_data.generation, next_gen + 1);
> >
> > Is the purpose of the smp_store_release() here to order the writes of
> > base_crng.generation and _vdso_rng_data.generation? It could use a comment.
> >
> > > + if (IS_ENABLED(CONFIG_HAVE_VDSO_GETRANDOM))
> > > + smp_store_release(&_vdso_rng_data.is_ready, true);
> >
> > Similarly, is the purpose of this smp_store_release() to order the writes to
> > the generation counters and is_ready?  It could use a comment.
>
> Yes, I guess so. Actually this comes from an unexplored IRC comment from
> Andy back in July:
>
> 2022-07-29 21:21:56 <amluto> zx2c4: WRITE_ONCE(_vdso_rng_data.generation, next_gen + 1);
> 2022-07-29 21:22:23 <amluto> For x86 it shouldn’t matter much. For portability, smp_store_release
>
> Though maybe that doesn't actually matter much? When the userspace CPU
> learns about a change to vdso_rng_data, its only course of action is to
> make a syscall to getrandom() anyway, and those paths should be
> consistent with themselves, thanks to the same locking and
> synchronization that's always been there. So maybe I actually should
> move back to WRITE_ONCE() here? Hm?
Well, sys_getrandom() will just do:
if (unlikely(crng->generation != READ_ONCE(base_crng.generation)))
So I think you do need ordering between base_crng.generation and
_vdso_rng_data.generation. If _vdso_rng_data.generation is changed, the change
in base_crng.generation needs to be visible too.
> > One question I have is about forking. So, when a thread calls fork(), in the
> > child the kernel automatically replaces all vgetrandom_state pages with zeroed
> > pages (due to MADV_WIPEONFORK). If the child calls __cvdso_getrandom_data()
> > afterwards, it sees the zeroed state. But that's indistinguishable from the
> > state at the very beginning, after sys_vgetrandom_alloc() was just called,
> > right? So as long as this code handles initializing the state at the beginning,
> > then I'd think it would naturally handle fork() as well.
>
> Right, for this simple fork() case, it works fine. There are other cases
> though that are trickier...
>
> > However, it seems you have something a bit more subtle in mind, where the thread
> > calls fork() *while* it's in the middle of __cvdso_getrandom_data(). I guess
> > you are thinking of the case where a signal is sent to the thread while it's
> > executing __cvdso_getrandom_data(), and then the signal handler calls fork()?
> > Note that it doesn't matter if a different thread in the *process* calls fork().
> >
> > If it's possible for the thread to fork() (and hence for the vgetrandom_state to
> > be zeroed) at absolutely any time, it probably would be a good idea to mark that
> > whole struct as volatile.
>
> Actually, this isn't something that matters, I don't think. If
> state->key_batch is zeroed, the result will be wrong, but the function
> logic will be fine. If state->pos is zeroed, it'll write to the
> beginning of the batch, which might be wrong, but the function logic
> will still be fine. That is, in both of these cases, even if the
> calculation is wrong, there's no memory corruption or anything. So then,
> the remaining member is state->generation. If this is zeroed, then it's
> actually something we detect with that READ_ONCE()! And in this case,
> it's a sign that something is off -- we forked -- and so we should start
> over from the beginning. So I don't think there's a reason to mark the
> whole struct as volatile. The one we care about is state->generation,
> and for that we READ_ONCE() it at the place that matters.
It's undefined behavior for C code to be working on values that can be mutated
underneath it, though, unless they are volatile. Granted, people still do this
all the time, but I'd hope we can be a bit more careful here...
> There's actually a different scenario, though, that I'm concerned about,
> and this is the case in which a multithreaded program forks in the
> middle of one of its threads running this. Indeed, only the calling
> thread will carry forward into the child process, but all the memory is
> still left around from any concurrent threads in the middle of
> vgetrandom(). And if they're in the middle of a vgetrandom() call, that
> means they haven't yet done erasure and cleaned up the stack to prevent
> their state from leaking, and so forward secrecy is potentially lost,
> since the child process now has some state from the parent.
That is a separate problem though, right? It does *not* mean that the
vgetrandom_state can be zeroed out from underneath __cvdso_getrandom_data().
- Eric
On Sun, Nov 20, 2022 at 02:43:31AM +0100, Jason A. Donenfeld wrote:
> On Sun, Nov 20, 2022 at 01:53:53AM +0100, Jason A. Donenfeld wrote:
> > shouldn't fork or something, but that seems disappointing. Or more state
> > could be allocated in the zeroing region, to hold a chacha state, so
> > another 64 bytes, which would be sort of unfortunate. Or something else?
> > I'd be interested to hear your impression of this quandary.
>
> Another 128 bytes, actually. And the current chacha in there isn't
> cleaning up its stack as one might hope. So maybe the cleanest solution
> would be to just bite the bullet and allocate another 128 bytes per
> state and make a mini chacha that operates over that? (And I guess hope
> it doesn't need to spill and such...)
I've got it implemented without using any stack now. Wasn't so bad. So
all of the additional concerns I added will be addressed in v+1.
Jason
On Sun, Nov 20, 2022 at 02:04:49AM +0100, Jason A. Donenfeld wrote:
> On Sun, Nov 20, 2022 at 01:53:53AM +0100, Jason A. Donenfeld wrote:
> > I'm not quite sure what the best approach here is. One idea would be to
> > just note that libcs should wait until vgetrandom() has returned
> > everywhere before forking, using its atfork functionality.
>
> To elaborate on this idea a bit, the way this looks is:
>
> rwlock_t l;
> pid_t fork(void)
> {
> pid_t pid;
> write_lock(&l);
> pid = syscall_fork();
> write_unlock(&l);
> return pid;
> }
> ssize_t getrandom(...)
> {
> ssize_t ret;
> ...
> if (!read_try_lock(&l))
> return syscall_getrandom(...);
> ret = vdso_getrandom(...);
> read_unlock(&l);
> return ret;
> }
>
> So maybe that doesn't seem that bad, especially considering libc already
> has the kind of infrastructure in place to do that somewhat easily.
> Maybe there's a priority locking thing to get right here -- the writer
> should immediately starve out all future readers, so it's not unbounded --
> but that seems par for the course.
Fortunately none of this was necessary, and I've got things implemented
without needing to resort to that, for v+1.
Jason